Unnamed: 0 (int64, 0 to 31.6k) | Clean_Title (string, length 7 to 376) | Clean_Text (string, length 1.85k to 288k) | Clean_Summary (string, length 215 to 5.34k) |
---|---|---|---|
31,400 | Common integration sites of published datasets identified using a graph-based framework | The dataset containing the identified CIS from the Retroviral Tagged Cancer Gene Database (RTCGD) is provided in Table 1 Appendix A and it is obtained by using a Cytoscape 2.8 plugin, which implements some of the features of the GBF method. The other datasets are collected using a normal Internet browser. Fig. 1 shows a Venn diagram in which two datasets are compared. The first dataset is the collection of all the genes found with the GBF method, while the second dataset is the list of genes provided by RTCGD, which uses the standard window method to identify CIS and the next gene approach to discover and associate an annotated gene to the identified CIS. For further details about the two approaches, see . With the GBF method, it is possible to discover 1421 genes which are not present in the RTCGD dataset. Only 142 genes were not discovered by the GBF method while they are present in the RTCGD gene list, and 404 of the genes can be found by both methods. The workflow of the analysis is depicted in Fig. 2. The input is a dataset composed of a list of integration sites (IS). The graph-based framework presented in is adopted to perform all the following analyses. The first step is the CIS identification and the computation of some statistics for every CIS. Further steps are optional but they have to follow the order. The second step consists of enhancing the CIS dataset with information from genomic annotated data. This step generates the gene atmosphere (GA) dataset as shown in Table 2 Appendix A. Using the GA dataset, the next step consists of the functional analysis, as shown in Table 3 Appendix A. The dataset used for the analysis should contain few attributes in order to be properly analyzed by the GBF method. Some of these attributes are mandatory and they are shown in Table 1. The mandatory attributes for the CIS enhancing phase are shown in Table 2. The method presented in allows the identification of CIS on the basis of very few attributes found in the dataset under analysis. Fig. 3 shows the flowchart of the global method that builds the model and identifies the CIS with their statistics. Starting from the dataset containing the integration sites, it is convenient to order the dataset according to the integration position to improve the algorithm efficiency. This is the data preparation part. Afterwards, as depicted in Fig. 3, the building of the model starts by creating an empty graph. For every IS present in the dataset, a node is created and added to the graph. A nested loop checks if all the vertices instantiated in the graph are at a distance below a certain threshold from the current IS previously added as a node to the graph itself. An edge connecting two nodes of the same type is created and added to the graph if the distance is lower than the threshold. When all the IS from the dataset are analyzed, the main loop terminates and the graph is ready to be analyzed by the main algorithm for CIS identification. This algorithm, which operates on an undirected and disconnected graph, can be implemented in different ways. An efficient version of this algorithm is presented in . When the CIS identification is performed, a set of statistics is computed. The most interesting statistics are presented in Table 3. For further details about how the statistics have been computed, see Paragraph 2.6 in . Optionally, an enhancement of the CIS dataset can follow. The purpose is to link each IS with its neighborhood on the genome, retrieving annotations present in online databases. Here, we used a normal Internet browser to perform queries accessing annotated data provided online by the BioMart database . The dataset resulting from this step is shown in Table 2 Appendix A, which provides a list of transcriptional elements (TE) composing the GA of all CIS identified with the previous step. As shown in the flowchart in Fig. 3, the process that builds the GA is similar to the process that builds the IS graph. The IS nodes in the graph are linked with the TE nodes if the distance on the genome is below a certain threshold. If the previous step is performed, a functional annotation using DAVID may follow. This is the last step of the main workflow shown in Fig. 2. Here, we perform this step using the RTCGD dataset and the output is shown in Table 3. (Table 3 column descriptions, condensed: the integer identifier given to a CIS by the plugin; the CIS name as it appears in the tabular exported file, composed of the chromosome and the CIS number; the number of IS that compose the CIS; quantities computed from the positions of the ith IS of the CIS, for which an approximation gives a more precise estimation when the IS distribution is asymmetric; and the p-value and log-likelihood statistics described in the subsection “Statistical model, p-value and log-likelihood ratio test” in .) | With next-generation sequencing, the genomic data available for the characterization of integration sites (IS) has dramatically increased. At present, in a single experiment, several thousand viral integration genome targets can be investigated to define genomic hot spots. In a previous article, we renovated a formal CIS analysis based on a rigid fixed window demarcation into a more stretchy definition grounded on graphs. Here, we present a selection of supporting data related to the graph-based framework (GBF) from our previous article, in which a collection of common integration sites (CIS) was identified on six published datasets. In this work, we will focus on two datasets, ISRTCGD and ISHIV, which have been previously discussed. Moreover, we show in more detail the workflow design that originates the datasets. |
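The graph-building loop described in the row above (one node per integration site, an edge whenever two sites lie within a distance threshold) can be sketched briefly. The Python fragment below is a minimal illustration, not the Cytoscape 2.8 plugin or the efficient algorithm the authors cite: the (chromosome, position) input format, the use of networkx, and the treatment of connected components as candidate CIS are assumptions made here for clarity.

```python
# Minimal sketch of the IS-graph construction described in the row above.
# Assumptions (not from the source): integration sites are (chromosome, position)
# tuples, networkx is used for convenience, and a connected component of the
# graph is treated as a candidate CIS.
from itertools import combinations

import networkx as nx


def build_is_graph(integration_sites, threshold):
    """One node per integration site; an edge between sites on the same
    chromosome whose genomic distance is below `threshold`."""
    sites = sorted(integration_sites)  # data preparation step suggested in the text
    graph = nx.Graph()
    graph.add_nodes_from(sites)
    for (chrom_a, pos_a), (chrom_b, pos_b) in combinations(sites, 2):
        if chrom_a == chrom_b and abs(pos_a - pos_b) < threshold:
            graph.add_edge((chrom_a, pos_a), (chrom_b, pos_b))
    return graph


def candidate_cis(graph, min_sites=2):
    """One plausible reading of the 'main algorithm': every connected component
    with at least `min_sites` integration sites is reported as a candidate CIS."""
    return [sorted(component) for component in nx.connected_components(graph)
            if len(component) >= min_sites]


if __name__ == "__main__":
    toy_sites = [("chr1", 100), ("chr1", 1500), ("chr1", 2200), ("chr2", 500)]
    graph = build_is_graph(toy_sites, threshold=1000)
    print(candidate_cis(graph))  # [[('chr1', 1500), ('chr1', 2200)]]
```

Sorting by genomic position, as the row recommends, is what would let an efficient implementation stop the inner comparison early; the pairwise loop above is kept only for readability.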
31,401 | Real estate market and building energy performance: Data for a mass appraisal approach | The dataset features are as follows: 1,042 records, namely single unique properties, and 16 fields, namely property characteristics or other variables.Table 1 provides an insight on the measurement scales of the variables, as well as their coding system.The measurement scales have been defined according to the theoretical framework postulated by Stevens , who defined the following four fundamental scales: nominal; ordinal; interval; ratio.Within the dataset, coding systems adopted for the variables based upon nominal and ordinal scales does not rely on a preconceived scheme, instead they are driven by the information usually available in residential property advertisements.The city of Padua locates in Northern Italy and has been chosen for the survey because of its dynamic real estate market.According to cadastral statistics , in 2013 the city was accounted for a large building stock with a total of 174,250 real estate units, mainly composed by dwellings, manufacturing buildings, retail stores and offices.The dynamism of the local real estate sector is further witnessed by the amount of transactions recorded in the national observatory on the property market .Padua has been accounted for 1,618 transactions of residential properties in 2013, namely 1.4% of housing stock.Although the ratio was considerably higher just before the 2008 crisis, and in the meanwhile a turning point has been observed in the price trend, the empirical evidence still confirms it may be included among well-performing markets .The maintenance of a real estate market monitor is among the commitments undertaken by the Italian Revenue Agency.This monitor provides a full coverage of the national territory, by dividing it into homogeneous zones.The zones are identified by means of codes composed by a progressive number following a capital letter.In turn, the letters are assigned according to five kinds of locations: B stands for downtown areas; C for inner-ring areas; D for outer-ring areas; E for the outskirts and R for the rural areas.Usually, C, D, and E zones encompass both residential and industrial settlements.The real estate market monitor divides Padua into 22 zones, as represented in Fig. 
1.The perimeter of the zones is also provided in the Supplementary materials within this article as a separate kmz file, which can be read with the Google Earth application.Owing to the growing interest in building energy efficiency, several regulations have been enacted in Italy since the mid-seventies.When the Law 373/1976 entered into force, it constituted the first attempt to impose constraints on the building energy consumptions.Subsequently, a major enhancement to deal with the energy saving issue was represented by the Law 10/1991.Most recently, trying to meet the increasing consumers׳ demand for market transparency, the Decree-Law 63/2013 – implementing the European Directive 2010/31/EU – has introduced the legal obligation to include energy-related information within real estate advertisements.The data which are to be exhibited refer to the global energy performance index and the consequent energy label, as they are reported in the mandatory energy performance certificate .Nonetheless, during the first time span in which the new regulation was in force, real estate advertisements were allowed to omit the value of global energy performance index.This is the reason why the dataset mainly focuses on the energy labels.The Italian coding system of energy label relies on eight classes, from A+, which is the best one, to G, the worst.Each class correlates to the energy performance index, depending on the climate zone.The city of Padua is characterized by a yearly amount of Heating Degree Days equal to 2,383.Therefore, according to the classification of the national territory, it falls within the E climatic zone.For this zone, Table 2 shows the energy performance index thresholds, for winter heating, corresponding to the aforementioned energy labels.During the survey, no cases of dwellings classified as A+ were found, hence they are missing in the dataset.The data provided here are useful to investigate the relationship between the energy performance of housing and the expected selling price by owners.Such a relationship is predicted to be positive by the literature; nevertheless, its functional form – linear or not – and magnitude are still debated.Moreover, georeferenced data allow to analyze the potential spatial patterns of both real estate prices and building energy performances .The data have been gathered within a research branch devoted to exploring the relationships among building energy performances, the prices in the real estate market, the costs in the construction sector and the provision of affordable dwellings .The survey was conducted during the time span from 2013 April to July.A total of 1,042 property advertisements have been accessed, filtering those published on dedicated websites by both private sellers and real estate agents.Only residential properties have been considered.Main real estate advertisement-posting websites have been inspected daily during the survey period.They have been cross-checked to avoid considering the duplicate items.When an old advertisement has been found to have been posted again without changes, only initial data have been kept.On the contrary, reissued advertisements with changes led to discarding the prior ones, hence oldest data have been replaced by the newest.Advertisements are usually concise; moreover, they may contain different kinds of information and make use or not of supporting materials, such as pictures of interiors and exteriors, or floor plans.The sample here provided includes only records characterized by complete 
information with regard to the previously mentioned variables.On the contrary, advertisements with missing data have been discarded.Based on the location declared in the advertisements, data have been georeferenced but they are characterized by heterogeneous levels of accuracy.The sample implements the exact geographical position for dwellings whose advertisement contained the full address.The dwellings whose advertisement provided the street name, but not the building number, have been located by referring to the midpoint of the self-same street.Otherwise, advertisements providing only succinct information about the location have entailed the need to refer to the midpoint of neighborhoods. | Mass appraisal is widely considered an advanced frontier in the real estate valuation field. Performing mass appraisal entails the need to get access to base information conveyed by a large amount of transactions, such as prices and property features. Due to the lack of transparency of many Italian real estate market segments, our survey has been addressed to gather data from residential property advertisements.The dataset specifically focuses on property offer prices and dwelling energy efficiency. The latter refers to the label expressed and exhibited by the energy performance certificate. Moreover, data are georeferenced with the highest possible accuracy: at the neighborhood level for a 76.8% of cases, at street or building number level for the remaining 23.2%.Data are related to the analysis performed in Bonifaci and Copiello [1], about the relationship between house prices and building energy performance, that is to say, the willingness to pay in order to benefit from more efficient dwellings. |
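Two coding systems recur in the row above: the market-monitor zone codes (a capital letter for the location type followed by a progressive number) and the Italian energy labels running from A+ down to G. The snippet below is a small, hypothetical helper for working with both; the function names, the 'C3' example code and the label spellings are illustrative assumptions, not fields of the published dataset.

```python
# Hypothetical helpers for the two coding systems described in the row above;
# names, the example zone code and label spellings are illustrative only.

ZONE_LETTER_TO_LOCATION = {
    "B": "downtown",
    "C": "inner ring",
    "D": "outer ring",
    "E": "outskirts",
    "R": "rural",
}

# Italian energy labels from best to worst; A+ is kept for completeness even
# though no A+ dwellings were found during the survey.
ENERGY_LABELS_BEST_TO_WORST = ["A+", "A", "B", "C", "D", "E", "F", "G"]


def decode_zone(zone_code):
    """Split a zone code such as 'C3' into its location type and number."""
    location = ZONE_LETTER_TO_LOCATION[zone_code[0].upper()]
    return location, int(zone_code[1:])


def energy_label_rank(label):
    """Ordinal rank of an energy label (0 = best), so the label can be used
    as an ordinal-scale variable."""
    return ENERGY_LABELS_BEST_TO_WORST.index(label.upper())


if __name__ == "__main__":
    print(decode_zone("C3"))       # ('inner ring', 3)
    print(energy_label_rank("D"))  # 4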
31,402 | Ocular torsion responses to electrical vestibular stimulation in vestibular schwannoma | Electrical Vestibular Stimulation is a simple method for activating the vestibular nerve by directly applying cutaneous currents over the mastoid processes.The resulting change in vestibular afferent firing rate produces a sensation of head roll.This, in turn, evokes a variety of motor outputs including sway and orienting responses.EVS also activates the vestibular-ocular reflex.The evoked eye movement is primarily torsional, with minimal lateral or vertical component.Although EVS has mainly been used as a basic research tool, there is evidence for its clinical diagnostic potential.When applied in a monaural configuration, the integrity of each ear can be separately assessed.Using this approach, altered EVS-evoked responses have been reported in a variety of vestibular disorders.For example, the magnitude of ocular torsion responses are significantly reduced following intratympanic gentamicin injections.This has also been reported for the EVS-evoked sway response following streptomycin toxicity.In contrast, responses are larger in Meniere’s disease.In a series of vestibular case studies MacDougall et al. reported systematic changes in the 3D orientation of the eye movement corresponding to specific canal deficits.These studies suggest that the EVS could supplement or even replace existing diagnostic tests.But before it can be useful as a general vestibular diagnostic, it is necessary to establish the normative and pathological responses in a variety of patients.From a practical clinical perspective, it is also desirable to develop a convenient, non-invasive and affordable version of the test for assessing the ocular response to EVS.Here we measure the ocular response to EVS in patients with vestibular schwannoma, a slow-growing benign tumour arising from the Schwann cells of the vestibulocochlear nerve.Previous research has studied EVS-evoked postural sway in VS, and compared the response to stimulation of the tumour ear to that of the healthy ear.Patients exhibit greater response asymmetry than control subjects, in terms of their standing sway response.This finding provides valuable diagnostic proof-of-principle for EVS.However, this particular postural test required patients to be capable of standing unaided on a force platform with their eyes closed and feet together.Since balance problems are a common feature of vestibular disorders, this potentially rules out a large minority of patients.In contrast, assessment of the ocular response to EVS can be performed whilst seated.Aw et al. 
measured the ocular torsion response to brief pulses of square-wave EVS in four unilateral VS patients with large tumours.They reported longer response latencies as well as reduced velocity in the affected ear.Again, while this offers valuable diagnostic proof-of-principle, it is not well suited to routine clinical use due to the invasive nature of the scleral coils which were used.Here we employ a non-invasive method for recording the ocular response to sinusoidal EVS in darkness using an infrared-sensitive camera.We studied 25 unilateral VS patients with small to moderately sized tumours, and compare them to age-matched controls.Our main aim is to determine whether the patients exhibit significantly greater response asymmetry in terms of the ocular torsion response to sinusoidal electrical vestibular stimulation in each ear.We also performed two additional tests for direct comparison with the EVS ocular response; firstly, the EVS-evoked postural sway test used by Welgampola et al., and secondly, the head impulse test, since reduced HIT responses have previously reported in VS.The results show that our EVS-evoked ocular torsion test out-performed the HIT test in terms of discriminatory power and was marginally better than the postural sway test, while being more convenient.25 patients aged 30 to 80 were recruited from University Hospital Birmingham.The presence of a vestibular schwannoma was diagnosed by magnetic resonance imaging and quantified using the maximum extrameatal tumour diameter.17 healthy controls aged 40–80 with no known neurological or vestibular disorder were studied for the purpose of collecting normative data in a healthy population.All participants gave informed written consent to participate.The experiment was approved by South Birmingham Research Ethics Committee and performed in accordance with the Declaration of Helsinki.Patient’s tumour measurements and symptoms are presented in Table 1.Koos classification and internal acoustic canal filling were assessed by MRI.Koos classification is a four-point grading system based on the size of the tumour G1 < 1 cm, G2 1–2 cm, G3 2–3 cm, G4 > 3 cm.Fig. 1A depicts a small right-sided intracanalicular tumour while Fig. 
1B depicts a large left-sided intrameatal tumour with a cisternal component.Most participants were classified as Koos grade 2, which is partially attributable to the treatment procedure, whereby anyone with a tumour over 2 cm in diameter is offered cyberKnife, ultimately resulting in their exclusion from the study.Protocol – We used the protocol described by Mackenzie and Reynolds; “Participants were seated with the head restrained for the duration of each 10 s stimulation period.Prior to each trial participants were instructed to focus on the lens of an infrared camera and not to blink before being immersed into darkness.An invisible infrared light was used to illuminate the eye during each trial.No fixation light was provided to ensure that any horizontal and vertical eye movements were not suppressed”.Electrical Vestibular Stimulation – Electrical vestibular stimulation was delivered using carbon rubber electrodes in a monaural cathodal or anodal configuration.Four electrodes were coated in conductive gel, two were secured to the mastoid processes and two overlying the C7 spinous process using adhesive tape.Stimuli were delivered from an isolation constant-current stimulator.Four conditions were repeated 3 times giving a total of 12 trials.Data Acquisition and Analysis – We used the same analysis described by Mackenzie and Reynolds; “Torsional eye movements were sampled at 50 Hz using an infrared camera.Eye movements were tracked and quantified off-line using a commercially available planar tracking software.Torsional motion was tracked using iris striations.This technique has previously been validated across stimulation frequency range of 0.05–20 Hz.Nystagmus fast phases were automatically identified and removed.The magnitude of the eye response was measured as the peak value of the stimulus-response cross-correlation.Gain was then calculated by dividing this value by the peak stimulus autocorrelation to normalise with respect to the input stimulus”.An asymmetry ratio was then calculated from the gains of both ears.Protocol - Participants stood in the centre of a force plate, without shoes, with feet together and hands held relaxed in front of them for the duration of each 60 s stimulation period.Prior to each trial participants were instructed to face a visual target at eye level, 1 m in front of them before closing their eyes for the duration of the trial.Electrical Vestibular Stimulation – EVS was delivered in a monaural configuration to evoke postural sway.EVS was applied in sequences of six 3 s impulses of 1 mA, separated by a 6 s gap.The side of the active electrode and the polarity was randomised across trials.Two sides and two polarities gave a total of 4 conditions.Four repeats of each condition resulted in a total of 24 impulses per condition.Data Acquisition and Analysis - Head position was sampled at 50 Hz in the form of Euler angles using a Fastrak sensor attached to a welding helmet frame worn by the participants.As previously described by; “any offset in yaw or roll angle between head orientation and sensor orientation was measured using a second sensor attached to a stereotactic frame, and subsequently subtracted.A slight head up pitch position was maintained throughout each trail to ensure Reid’s plane was horizontal, ensuring an optimal response to the virtual signal of roll evoked by vestibular stimulation.The evoked sway response to vestibular stimulation was recorded in the form of ground reaction forces at 1 kHz using a Kistler 9281B force platform”.Analysis of 
EVS-evoked shear force is depicted in Fig. 3.Similar analysis techniques to Welgampola et al. were used.To increase signal-to-noise ratio of the response, the averages to the two stimulation polarities were combined separately for the mediolateral and anteroposterior direction.As the two polarities evoked responses in opposite directions, one polarity was inverted before the averaging process took place.For the left ear, the anodal response was inverted where as for the right ear the cathodal response was inverted, this was to ensure both ears resulted in a direction response towards the right.The ‘off’ response to stimulus cessation was combined with the ‘on’ response to stimulus onset.Again, the on and off responses are oppositely directed, hence the off response was inverted prior to the averaging process.The force response was quantified as the peak force vector between 200–800 ms after stimulus on/offset.The magnitude and direction of the peak force vector within this time window was measured from a participant average.An asymmetry ratio from stimulation of each ear was calculated using the equation in Fig. 3E, where R and L represent right and left magnitude respectively.Protocol – Participants received 20 impulses while seated.HIT involves a small, rapid head rotation in yaw, evoked by the experimenter.Participants were instructed to fixate on a visual target located 1 m in front of them throughout the HIT.Calibration – Eye kinematics were recorded using electro-oculography, thus requiring conversion from µV to degrees of rotation.This was achieved by having the participants rotate the head in yaw while keeping the eyes fixated on a target, allowing a regression to be calculated between EOG and degrees of head rotation, measured using a motion tracker.The calculated calibration was used to calibrate all subsequent EOG signals into degrees.The success of this calibration process can be observed in Fig. 
4A, where head position and inverted eye position closely match each other.Data Acquisition and Analysis - Eye kinematics were sampled at 1 kHz using EOG.Two non-polarizable skin electrodes were applied near the outer canthi and a reference electrode to the forehead.Prior to electrode placement the skin was prepared by rubbing the skin with an abrasive electrode gel, all excess gel was removed before the area of skin was cleaned with an alcohol wipe and left to dry.The calibrated eye position for each head impulse was low pass filtered using a 5th order Butterworth, from which eye velocity could be calculated.Head position was sampled at 50 Hz in the form of Euler angles using a Fastrak sensor attached to a welding helmet frame worn by the participants.Head velocity during the HIT was sampled at 1 kHz using a gyro sensor located on the welding helmet worn by the participant.Offline analysis of the data was automated using MATLAB software.Peak head velocity and peak eye velocity were automatically selected and used to determine the horizontal gain.A gain of 0.68 or greater was deemed normal.An asymmetry ratio was calculated for each participant.To detect if the patients healthy ear was indeed healthy, it was compared to a random selection of right and left ear responses from the control group using an independent t test.Response gain was used to quantify both HIT and EVS-evoked torsional eye movements, whereas peak force was used to quantify the magnitude of the EVS-evoked.An unpaired t test was used to compare asymmetry ratios between controls and patients.We also performed correlations between EVS-evoked postural AR’s and EVS-evoked eye movement AR’s.A correlation between tumour size and AR was also performed.Pearson correlations were used to determine significance.For all statistical tests, significance was set at p < 0.05.Means and standard deviations are presented in text and figures, unless otherwise stated.Sinusoidal EVS evoked a strong torsional eye movement, with minimal horizontal or vertical components.Therefore, only torsional eye movements were used in subsequent analysis.Fig. 5B depicts torsional eye position in two schwannoma patients and a control subject.The control subject showed similar responses to left and right ear stimulation.Both patients showed attenuated responses during ipsilateral stimulation.As reported in Mackenzie and Reynolds, there was a ∼90°phase lag between the stimulus and response, with no difference between groups, or between contralesional and ipsilesional stimulation.Response gain is illustrated in Fig. 6A. Control subjects exhibited equal gain for left and right ear stimulation.Contralesional stimulation in patients produced similar values to the control group = 0.41, p > 0.05).However, ipsilesional stimulation produced an attenuated response.This is apparent in the asymmetry ratios, where the mean values were −3.27% and −19.38% for controls and patients, respectively =2.53, p < 0.05).Fig. 
7 depicts EVS-evoked ground reaction forces in two schwannoma patients and a control subject standing face-forward.EVS primarily evoked a mediolateral force response, with minimal anterior-posterior response.The control subject showed very similar responses to left and right ear stimulation.In contrast, both patients showed markedly attenuated responses during ipsilesional stimulation.In control subjects, peak force responses were similar for left and right ear stimulation.In patients, while stimulation of the contralesional ear produced similar responses to control subjects = 1.85, p > 0.05), ipsilesional forces were attenuated.This was confirmed by a significant difference in asymmetry ratio between the two groups = 3.92, p < 0.05).In addition to measuring the magnitude of the EVS-evoked force vector, we also measured its direction.With the head facing forwards, anodal EVS over the right ear evoked a postural response directed along the inter-aural axis.Schwannoma had no effect upon the direction of this response, with all controls and patients responses oriented in the same direction = 2.13, p > 0.05).Mean head and eye kinematics during the HIT test are shown in Fig. 10 for schwannoma patients.Mean head rotation amplitude was 28° and 27° for contralesional and ipsilesional directions, respectively.Gain values were approximately 1 in both patients and control subjects, irrespective of head direction.There was no difference in the asymmetry ratio between the patient and control groups = 1.29, p = 0.41).Fig. 12A shows the ocular and postural asymmetry ratios plotted against each other for the patient group.The two methods exhibited a moderate correlation.Neither ocular nor postural asymmetry exhibited any significant relationship with tumour size.However, when patients were classified according to their Koos grade, those with Koos 1 showed smaller ocular asymmetry than Koos 2 = 2.69, p < 0.05).There was no effect of Koos grade upon the postural asymmetry ratio = 1.46, p > 0.05).We measured the ocular torsion response to sinusoidal electrical vestibular stimulation using the same stimulation and recording techniques described in Mackenzie and Reynolds.The only significant modification was the use of a monaural rather than binaural stimulus, so that each ear could be assessed separately.When we applied this technique to vestibular schwannoma patients we found that the ocular response was significantly reduced in the ipsilesional versus contralesional ear.When combined with the speed, comfort and practicality of the technique, this establishes the potential utility of the EVS-evoked eye movement as a clinical diagnostic test.Mean ocular response asymmetry ratio in the VS patients was ∼20%, being significantly greater than that of control subjects.This was also true for the EVS-evoked postural response.This is broadly consistent with Aw et al., who observed a ∼50% reduction in the ocular torsion velocity in the affected ear of VS patients.However, the limited sample size, different stimulation and recording techniques, and unreported tumour size limits comparison to the current findings.In our data there was considerable overlap between patients and controls for both the ocular and postural tests.This contrasts with the results of Welgampola et al.They measured the ground reaction force response to EVS in the same way as described here, and found ∼40% asymmetry in the patient response, and zero overlap with control subjects.However, tumour size in their patient group was more than double that 
here.Therefore, the difference is probably related to the extent of vestibular nerve damage in the two patient cohorts.So although our test is not suitable for discriminating early-stage Schwannoma patients from control subjects, the variability within our patient group likely reflects genuine differences in vestibular function.The asymmetry in the patient ocular response correlated with that of their postural response, suggesting that both results reflect the extent of the underlying vestibular deficit caused by the tumour.The magnitude of EVS-evoked sway responses are affected by numerous factors including head orientation, biomechanics, proprioceptive acuity and baseline sway.The EVS-evoked eye movement is simpler by comparison, consisting of a tri-neuronal sensorimotor arc combined with the minimal inertia of the eyeball.Hence, the ocular response theoretically constitutes a less variable test of vestibular function.Indeed, we did observe less variability in the ocular asymmetry of control subjects compared to their postural response.But perhaps more important than subtle differences in diagnostic efficacy between the two tests is the large difference in practicality.The eye movement recording was performed over a ∼10 min period in seated subjects.It is readily applied to patients with a high degree of postural instability and/or physical disability.Indeed, two patients were unable to complete our postural test, while all undertook the ocular recordings.Furthermore, the use of infrared video offers a practical alternative to invasive techniques such as scleral coils or marking the sclera with a surgical pen to aid tracking.Patients with Koos grade 2 tumours exhibited greater mean asymmetry than those in the smaller grade 1 category, but there was no correlation between tumour size and asymmetry ratio for either test.This tallies with Welgampola et al. whose data showed no correlation between EVS-evoked force and tumour size in eight patients with tumours spanning 17–40 mm.The lack of a systematic relationship between tumour size and vestibular deficit is perhaps unsurprising, since limited or absent correlations have also been shown for hearing loss, although this may not be true for much larger tumours.Our data also exhibited no relationship between tumour diameter and hearing loss or speech discrimination.This absence of a size effect is likely due to the non-uniform manner in which tumour growth impinges upon the auditory-vestibular nerve.In addition to measuring EVS-evoked postural sway magnitude we also determined sway direction, and found this to be normal in the patient group.Furthermore, the phase lag between the EVS stimulus and the ocular response was also normal.These findings suggest that sensorimotor transformation processing for vestibular information is entirely normal in VS patients.It is simply the magnitude of the responses which are affected.In contrast to previous reports, gain values for our HIT test were ∼1 for all subjects and directions, with no significant asymmetry in the VS patients, nor any difference between patients and controls.Tranter-Entwistle et al. reported mean gains of 0.73 and 0.90 during the horizontal canal video HIT test for the ipsilesional and contralesional side, respectively, with 10 of their 30 patients exhibiting < 0.79 gain.Similarly, Taylor et al. 
reported vHIT gains of 0.75 and 0.9 for the horizontal canal.Potential reasons for the null HIT response here might be differences in head movement kinematics, recording techniques and patient tumour location or size.Regarding kinematics, our peak head displacement was ∼27°, being within most accepted range values for a valid HIT test: 20–40°, MacDougall et al.: 5–20°, Taylor et al.: 10–20°, McGarvie et al.:, Tranter-Entwistle et al.:).Regarding technique, we used electro-oculography rather than video for recording lateral eye movements.Although EOG has slightly poorer resolution than video, it is not immediately obvious how this would affect gain.Furthermore, any systematic change in gain caused by such technical differences would affect both directions equally so would not influence asymmetry.Regarding tumour location, VS can arise from the superior or inferior branch of the vestibular nerve.Since the horizontal canal is innervated by the superior branch, a normal HIT test might occur if damage is restricted to the inferior branch.Consistent with this, most studies do indeed show that the superior branch is less commonly affected in VS: 76% single nerve involvement with 91.4% inferior and 6% superior, 24% >1 nerve, via surgical identification.Ylikoski et al.: 80% superior, 20% inferior via caloric test.Clemis et al.: 50% superior via auditory tests.Komatsuzaki and Tsunoda: 84.8% inferior, 8.9% superior via surgical identification).However, this still does not account for the positive results of Taylor et al. and Tranter-Entwistle et al. for the horizontal canal.Regarding tumour size, this was 19 mm in Taylor et al. and ∼7–13 mm in Tranter-Entwistle et al. which is similar to, or slightly greater than our mean value of 12 mm.Hence it is not immediately apparent why our VS patients exhibited normal HIT gains, but it raises the possibility that the EVS-evoked ocular response is a more sensitive measure of vestibular deficiencies than HIT.Further comparative studies in a larger variety of vestibular disorders are needed to confirm this.The diagnostic utility of the EVS-evoked ocular response across a broader range of vestibular disorders may depend upon its precise site of action.While not established beyond doubt, EVS currents most likely alter neural firing rate via the spike trigger zone of the primary afferent.This implies that the EVS response can only reveal deficits downstream of the hair cell.Vestibular schwannoma certainly constitutes such a deficit, which explains the impaired responses seen here.However, it has also been reported that gentamicin-induced vestibular toxicity impairs EVS-evoked eye movements.Since acute gentamicin toxicity kills vestibular hair cells, this could be interpreted as evidence that EVS stimulates the hair cell rather than the primary afferent.However, vestibular afferents have a high resting firing rate, and loss of hair cell input may conceivably reduce their firing rate and/or their excitability.Such a loss of excitability could diminish the response to an externally applied current, analogous to a drop in spinal excitability presenting as a diminished H-reflex.But irrespective of the precise mechanism of action, the evidence of gentamicin-induced deficits in the EVS-evoked response provides encouraging evidence that it could diagnose peripheral as well as central vestibular deficits, at least if such deficits affect hair cell function.To establish the precise diagnostic scope of EVS requires a direct comparison against established tests, such as 
caloric irrigation, vestibular evoked myogenic potential and chair rotation, in a wider group of vestibular disorders.In summary, we have demonstrated that EVS-evoked eye movements can be recorded in a fast, convenient and non-invasive fashion in order to detect asymmetries in vestibular function.Further work is required to validate this technique against existing tests such as caloric irrigation, and in a wider group of vestibular pathologies.None of the authors have potential conflicts of interest to be disclosed. | Objectives: We determined if eye movements evoked by Electrical Vestibular Stimulation (EVS) can be used to detect vestibular dysfunction in patients with unilateral vestibular schwannoma (VS). Methods: Ocular torsion responses to monaural sinusoidal EVS currents (±2 mA, 2 Hz) were measured in 25 patients with tumours ranging in size from Koos grade 1–3. For comparative purposes we also measured postural sway response to EVS, and additionally assessed vestibular function with the lateral Head Impulse Test (HIT). Patient responses were compared to age-matched healthy control subjects. Results: Patients exhibited smaller ocular responses to ipsilesional versus contralesional EVS, and showed a larger asymmetry ratio (AR) than control subjects (19.4 vs. 3.3%, p < 0.05). EVS-evoked sway responses were also smaller in ipsilesional ear, but exhibited slightly more variability than the eye movement response, along with marginally lower discriminatory power (patients vs. controls: AR = 16.6 vs 2.6%, p < 0.05). The HIT test exhibited no significant difference between groups. Conclusions: These results demonstrate significant deficits in the ocular torsion response to EVS in VS patients. Significance: The fast, convenient and non-invasive nature of the test are well suited to clinical use. |
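The row above states how the ocular response gain was obtained (the peak of the stimulus-response cross-correlation divided by the peak of the stimulus autocorrelation) and that an asymmetry ratio was then computed from the two ears, with the exact formula left to the paper's Fig. 3E. The sketch below implements the stated gain calculation and uses a common (R - L)/(R + L) percentage convention for the asymmetry ratio; that convention is an assumption, not the authors' published equation.

```python
# Sketch of the EVS response-gain calculation described in the row above.
# The asymmetry-ratio formula is an assumption (a common (R - L)/(R + L)
# percentage convention), because the paper's Fig. 3E equation is not
# reproduced in the text.
import numpy as np


def evs_response_gain(stimulus, response):
    """Peak of the stimulus-response cross-correlation divided by the peak of
    the stimulus autocorrelation, normalising the response to the input."""
    cross = np.correlate(response, stimulus, mode="full")
    auto = np.correlate(stimulus, stimulus, mode="full")
    return float(np.max(np.abs(cross)) / np.max(np.abs(auto)))


def asymmetry_ratio(right, left):
    """Hypothetical asymmetry ratio in percent; `right` and `left` are the
    response magnitudes (gain or peak force) for right- and left-ear stimulation."""
    return 100.0 * (right - left) / (right + left)


if __name__ == "__main__":
    t = np.arange(0, 10, 1 / 50)                        # 10 s sampled at 50 Hz
    stim = 2.0 * np.sin(2 * np.pi * 2 * t)              # +/-2 mA, 2 Hz sinusoidal EVS
    eye = 0.3 * np.sin(2 * np.pi * 2 * t - np.pi / 2)   # torsion with ~90 degree lag
    print(round(evs_response_gain(stim, eye), 2))       # ~0.15 (0.3 / 2.0)
    print(asymmetry_ratio(right=0.15, left=0.10))       # 20.0
```

In this toy example the gain simply recovers the amplitude ratio of response to stimulus, which is the purpose of normalising by the stimulus autocorrelation.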
31,403 | Subcritical water extraction of organic matter from sedimentary rocks | The study of organic matter and its storage in the subsurface of Earth has led to a greater understanding of past global biogeochemical processes.The preservation of organic matter in sedimentary deposits represents a direct link to the global cycles of carbon, oxygen and sulphur over geological timescales .Fossil organic matter is primarily the product of selectively preserved biopolymers and newly generated geopolymers collectively termed kerogen, although some information-rich free organic compounds are also present in the organic inventory .To access information contained within fossil organic matter in sedimentary rocks, a method of extracting the organic matter is required.The development of extraction techniques has progressed over decades of dedicated research .The use of organic solvents as extracting agents is the standard method in the area of geochemistry .The use of organic solvents, however, has some negative aspects.The production of organic solvents is often difficult, time consuming and expensive.Once produced, organic solvents represent a substantial health hazard to those who work with them.Organic solvents are also potential contaminants of the environment.The deleterious effects of organic solvents have inspired the search for alternative solvents that can be used for extraction of organic compounds from rocks.Supercritical carbon dioxide has showed promising results and non-ionic surfactants have been shown to be capable of extracting hydrocarbons from petroleum source rocks .Perhaps the most cost-efficient and least hazardous material to use for extraction is water.Yet at room temperature and pressure water is capable of only extracting polar organic compounds.Fortunately, water can be modified to improve its performance for organic extraction.For example, steam has been used to extract hydrocarbons from petroleum source rocks and subcritical water has been utilised to extract organic materials from Atacama Desert soils .Recent work has compared the efficiency of subcritical water extraction alongside organic solvent and surfactant-assisted methods where subcritical water extraction outperformed the aqueous based method involving surfactants, but still lagged behind organic solvent extraction in terms of efficiency.Subcritical water provides great flexibility during organic extraction.The dielectric constant of liquid water changes with temperature allowing control over its ability to solvate organic compounds of different polarities.The flexibility of subcritical water presents the potential to manipulate conditions to meet the requirements of different organic mixtures.The possibility of tailored extraction is important for sedimentary organic matter because of the varieties present in nature.The ultimate sources of sedimentary organic matter are biological materials such as lipids, lignin, carbohydrates and proteins from plants, algae and bacteria.Variation in the relative contributions from the source organisms leads to different types of organic assemblage .Type I organic matter is dominated by algal remains and typically reflects lacustrine environments.Type III organic matter contains land plant remains, usually in terrestrial or near terrestrial settings.Type II organic matter is intermediate in composition and is found in marine settings.Each type of organic matter is chemically different and, therefore, can be expected to respond in a distinct fashion to extraction 
protocols.Once extracted from sedimentary rocks, organic mixtures can be prepared for analysis by analytical instruments.Preparation usually involves column or thin layer chromatography where the polarity of organic compounds is exploited to achieve separation into chemical fractions by elution with organic solvents of varying elution strengths.The resulting fractions can be further separated into individual components by techniques such as gas chromatography.Some extraction techniques are relatively selective so that subsequent preparative chromatography is unnecessary.One example of this approached is provided by supercritical carbon dioxide extraction of carbonaceous meteorites where post extraction preparative steps, and the associated potential loss of already small amounts of valuable extracted material, could be avoided .In this paper we examine the conditions required to extract different types of sedimentary organic matter with subcritical water.Our aim was to develop a replacement for organic solvents for the extraction of organic matter from any mineral matrix and therefore produce a method that can be applied to materials such as soils, recent sediments and sedimentary rocks.Sedimentary organic matter of types I, II and III have been subjected to various temperatures and pressures.Diversity in polarity of organic compounds within some sedimentary organic matter types suggest that fine-tuning the dielectric constants of water provides the opportunity for selective extraction that could eliminate post-extraction fractionation steps prior to further analytical steps.Sedimentary rocks containing three different types of organic matter were used for subcritical water extraction experiments.Each sample was washed with solvent and crushed before initiating the extraction procedure.High performance liquid chromatography grade DCM and MeOH were sourced from Fisher Scientific, UK.The DCM/MeOH wash should have removed any unwanted substances such as plasticisers and surface contaminants from the sedimentary rocks.The samples were then crushed to fine powders and weighed using an analytical balance.Three millilitres of 93:7 DCM/MeOH was added to pulverised source rock samples before sonication using a Sonics & Materials Inc., VCX-130 Vibra-Cell™ ultrasonic processor with a maximum frequency of 20 kHz for 10 min at room temperature.Subsequent centrifugation for 5 min at 2500 rpm effectively settled any suspended sample particles.The solvent supernatants were combined and the total extract was filtered using syringe filters possessing polytetrafluoroethylene membranes with a 0.2 μm pore size.A stream of nitrogen gas was used to reduce filtered supernatants to aliquots of 1 mL and the extracts were transferred to 2 mL vials for analysis.Each sample extraction was performed in triplicate.Fresh pulverised samples were subjected to subcritical water treatment in a purposely built extraction system.The whole system is an assembly of three main parts housed inside a gas chromatograph oven: a syringe pump, a sample cell and a cooling coil connected to a collection point.Deionised water was first flushed through the entire system and once the system was filled with water the outlet valve and the eluent valve were closed.The oven was then set to a defined temperature.Temperature is by far the dominant control on variability in the dielectric constant of water , hence in our experiments we varied temperature but maintained pressure at a standard level to ensure the heated water remained in the liquid 
state.To stabilise internal pressure at 1500 psi during heating the inlet valve remained open during the temperature ramp and for an additional 5 min after the set temperature was attained.The mode of extraction in the study was static and water remained in the isolated system for set durations.To study the effects of temperature, static extraction duration was fixed at 20 min for 150 °C, 200 °C, 250 °C and 300 °C.To study the effects of duration, extraction at 300 °C was also performed for 30 min, 40 min and 50 min.At the end of the extraction time both the outlet valve and collection point valve were opened simultaneously.The eluent was collected in a large conical flask.The conical flask contained at least 20 mL of DCM prior to eluent collection allowing analytes to readily partition into the organic layer.10 μL of internal standard solution of squalane and p-terphenyl was added to the DCM organic layer before the eluent was collected.Extraction at 300 °C for 20 min was performed in triplicate to enable calculation of standard deviations.To confirm that the subcritical water extraction procedure was exclusively accessing the soluble organic matter, rather than also releasing compounds by degrading the insoluble high molecular weight kerogen, previously solvent extracted samples were also subjected to subcritical water treatment at 300 °C.The eluent and DCM in the conical flask were transferred to a separating funnel where they were shaken and allowed to stand until two distinct phases formed.The more dense bottom layer of DCM was then collected.Another 10 mL portion of DCM was added to the aqueous layer and the liquid–liquid extraction procedure repeated.The DCM layers were combined and treated with anhydrous magnesium sulphate to remove any traces of water.The volume of DCM solution was reduced on a rotary evaporator and the final extract transferred to pre-weighed vials.Analyte separation was achieved using an Agilent Technologies G3172A gas chromatograph fitted with an Agilent HP-5MS column.Helium carrier gas flow rate was 1.1 mL/min.Injection volume was 1 μL and injection mode was splitless.The front inlet temperature was 250 °C and the oven temperature programme was held for 1 min at 50 °C followed by a temperature ramp of 4 °C min−1 to 310 °C, where the temperature was held for 20 min.Total run time was 86 min.Analyte identification was performed using an Agilent Technologies 5973 Mass Selective Detector set in full scan mode and employing a mass range from 50.0 to 550.0 amu.The ionisation source temperature was maintained at 230 °C and mass analyser quadrupole temperature was 150 °C.A nine minute solvent delay was employed.Mass spectra were interpreted by reference to the NIST 2008 mass spectral database.The results are quantitative in that the intensity of an ion is a measure of the response of a compound to the conditions in the mass spectrometer and quantitation relies on comparisons of the relative abundances of compounds or standards present in similar amounts and with identical ionisation efficiencies.Subcritical water extraction of solvent extracted rocks produced no organic compounds confirming that the process does not degrade the high molecular weight kerogen present within the samples.The data therefore can be considered as exclusively representing relatively low molecular weight organic compounds, in an analogous fashion to conventional DCM/MeOH extraction.Direct comparison of EOM obtained from organic solvent and subcritical water is possible.Relative to organic 
solvent, the extraction efficiency of subcritical water at 300 °C and 1500 psi is less for type I organic matter but greater for type III organic matter.The chemistry of the organic matter content is exerting a clear control on subcritical water extraction efficiency.The range of temperatures over which organic matter can be extracted from the different organic matter types displays substantial variability.The type I organic matter is only extractable at the highest temperatures with no EOM below 300 °C.Organic matter in type II and type III samples are extractable at all temperatures but with the greatest amount recovered at 300 °C.With 300 °C representing the most efficient extraction conditions for all samples, perhaps the most discriminatory observation is the relative amount of organic matter that can be extracted at low temperatures.Low temperature extraction efficiency decreases in the order type III > type II > type I.The ability of different organic matter types to contribute across all temperature steps is undoubtedly related to the heterogeneity of organic compound types in the various inventories.Type I organic matter is dominantly aliphatic, type III organic matter is a mixture of aliphatic, aromatic and polar compounds, while type II organic matter has an intermediate composition.Our representative sample of type I organic matter was a Lower Carboniferous lacustrine shale.Direct injection of the DCM/MeOH solvent extract without further fractionation provides an insight into the total EOM present in the rock.The DCM/MeOH extract contained a series of normal alkanes from C15 to C35, with a mode around C25 indicating an algal source for the organic matter.Isoprenoidal hydrocarbons were represented by pristane and phytane.Cyclic terpanes were represented by a series of hopanes and steranes.There were relatively few aromatic or polar compounds in the DCM/MeOH extract indicating an overall aliphatic hydrocarbon-rich sample.Data are consistent with previous work that has examined the organic geochemical constitution of these rocks .Subcritical water extraction of type I organic matter produced different responses with temperature of extraction.None of the abundant aliphatic compounds observed in the DCM/MeOH extract were evident at temperatures between 150 °C and 250 °C.At 300 °C, however, the majority of aliphatic compounds observed in the DCM/MeOH extract appear in the subcritical water extract.DCM/MeOH extraction does appear to recover more of the components that contribute to the unresolved complex mixture compared to supercritical water at 300 °C.Our representative sample of type II organic matter was an Upper Jurassic shale.The DCM/MeOH extract contained a series of normal alkanes from C15 to C35, with a mode around C23 indicating an algal source for the organic matter.Isoprenoidal hydrocarbons were again represented by pristane and phytane and cyclic terpanes by a series of hopanes and steranes.A significant unresolved complex mixture indicates the presence of aromatic and polar compounds indicating a partial contribution from terrestrially derived organic materials.Our data are consistent with previous studies that have shed light on the organic constituents of Kimmeridge Clay type II organic matter, and it is known to contain aliphatic, aromatic and sulphur compounds .Subcritical water extraction of type II organic matter produced different responses with temperature of extraction.Few of the compounds observed in the DCM/MeOH extract were evident at temperatures between 150 
°C and 250 °C.There were numerous low molecular weight aromatic compounds at temperatures of 200 °C and above.Some of the compounds present between 150 °C and 250 °C are polar acids.The low molecular weight aromatic compounds probably reflect significant contributions of land-derived organic matter known to occur in type II organic matter .Moreover the low molecular weight aromatic hydrocarbons and acids display relatively high solubilities in water.At 300 °C the subcritical water extract displays concordance with the DCM/MeOH extract from the same sample.The majority of aliphatic, aromatic and polar compounds observed in the DCM/MeOH extract appear in the subcritical water extract.Our representative sample of type III organic matter was an Upper Carboniferous high volatile bituminous coal.The DCM/MeOH extract contained a range of alkylated aromatic, phenolic and oxygen-containing aromatic compounds.Isoprenoidal hydrocarbons were represented by phytane.Our data are consistent with previous studies that have investigated the organic constituents of high volatile bituminous coals .Subcritical water extraction of type III organic matter produced different responses with temperature of extraction.Significant amounts of organic compounds are extracted at even the lowest temperature.At 150 °C the subcritical water extract contains a number of low molecular weight alkylated aromatic units.At 200 °C the responses alkylated and oxygen-containing aromatic units become more obvious and there are more contributions from higher molecular weight units.Above 200 °C the responses alkylated and oxygen-containing aromatic units increase further.Contributions from higher molecular weight units become even more significant at the highest temperature of 300 °C and the aromatic responses are joined by aliphatic contributions.To assess the influence of extraction time the duration of static heating was varied from 20 to 50 min for each sample while maintaining the extraction temperature and pressure at 300 °C and 1500 psi.EOM values were used to enable an assessment of extraction time on yield.The length of extraction time has minimal impact on the yield of extracted hydrocarbons from sedimentary rocks containing type I organic matter.EOM values for 20, 30, 40 and 50 min treatment with hot water show only very minor differences between them and there is no clear indication of a benefit from extending extraction duration beyond 20 min.The observation of similar extraction yields for various temperatures was also the case for the type II organic matter containing sample.The type III organic matter containing sample also displays constant yields from experiments with 20–40 min extraction duration, but when a duration of 50 min was used the yield declined.Subcritical water extraction is an effective technique for the isolation of hydrocarbons and related compounds from sedimentary rocks.Yet the efficiency of extraction is highly dependent on the temperature used and the type of organic matter present in the rocks.If the temperature at which extractable organic matter appears is examined, the influence of organic matter type becomes evident.The temperature at which extractable organic compounds appear is lowest for type III organic matter, intermediate for type II organic matter and highest for type I organic matter.The influence of temperature on the ability to extract organic matter of different types can be explained by the increase in overall polarity of the organic inventories moving from type I to type III.At 
room temperature, polar organic compounds are more soluble in water than non-polar compounds.As temperature increases, the dielectric constant of water decreases and the ability of water to solvate non-polar compounds is enhanced.Hence, at low temperatures the polar compounds in type III organic matter are extracted by water, which has a dielectric constant around 45 equivalent to somewhere between acetonitrile and dimethyl sulfoxide.At moderate temperatures low molecular weight aromatic units are extracted from type II organic matter, when the water has a dielectric constant between 40 and 32 equivalent to somewhere between dimethyl sulfoxide and methanol.At relatively high temperatures the non-polar compounds in type I organic matter are extracted, when the water has a dielectric constant of 20 close to that of acetone.The efficiency of extraction and its relationship to temperature can be examined quantitatively by reference to the EOM data.For type I organic matter, significant EOM is only observed at the highest temperature of 300 °C.In the extractions of type II organic matter there is a four-fold increase in the EOM from 200 °C to 300 °C.For type III organic matter there is a two-fold increase of EOM between 200 °C and 300 °C.A trend of increasing EOM with temperature is evident for all organic matter types but is least conspicuous for those organic matter types with chemical heterogeneity where the range in polarities of components provide contributions to each temperature stage.For general extractions, higher temperatures are more efficient for these hydrocarbon-rich rocks and, although lower temperatures provide significant yields for certain types of organic matter, 300 °C and 1500 psi represent a useful universal extraction protocol for organic matter-rich rocks.Overall the optimal duration for subcritical water extraction of all organic matter types found in sedimentary rocks is 20 minutes.Longer extraction times do not necessarily correlate to higher yields of EOM.For sedimentary rocks containing types I and II organic matter, variation in extraction duration with hot water produces negligible effects.For sedimentary rocks containing type III organic matter, very long extraction durations lead to a decline in yield.The variation in sensitivity to duration of subcritical water extraction for the different organic matter types is most likely related to chemical structure.Type I and type II organic matter assemblages contain substantial amounts of aliphatic structures while type III organic matter is dominated by aromatic units.Oxidation of aromatic constituents during lengthy subcritical water extraction experiments is a previously recognised issue and may account for the drop in yield for type III organic matter-containing sedimentary rocks observed for the longest extraction times.Compounds that make up the organic inventory of sedimentary rocks can display differences in polarity.The variable polarities in organic mixtures is exploited in post extraction fractionation steps which isolate compound classes based on their similarity to elution solvent strength.Inevitably the use of post extraction fraction steps make compound isolation more expensive, lengthy and labour intensive.Subcritical water extraction provides the ability to modify temperature and therefore dielectric constant and solvent strength.Our data reveal that selective extraction by subcritical water extraction is a possibility for organic matter-containing sedimentary rocks.Polar organic compounds are usually 
problematic for gas chromatography-based analysis because their polar and reactive nature can cause them to perform badly during chromatographic separation and can even irreversibly damage instrument components.The extraction of polar compounds at lower temperatures followed by the subsequent selective extraction of analytically amenable non-polar compounds at higher temperatures presents a means to avoid post extraction fractionation steps.The experimental results in this study represent one of the very few case studies of hot water extraction of hydrocarbons from organic-rich rocks.The data provide evidence that subcritical water can act as an efficient substitute for the more hazardous and commonly used organic solvents.A universal extraction protocol for all organic matter types in sedimentary rocks includes a temperature of 300 °C, pressure of 1500 psi and duration of 20 min.The ability to control temperature and therefore dielectric constant of water also provides the opportunity to selectively extract specific compound classes thereby avoiding lengthy and labour intensive post extraction fractionation steps prior to detailed analysis by gas chromatography and mass spectrometry. | Subcritical water extraction of organic matter containing sedimentary rocks at 300 °C and 1500 psi produces extracts comparable to conventional solvent extraction. Subcritical water extraction of previously solvent extracted samples confirms that high molecular weight organic matter (kerogen) degradation is not occurring and that only low molecular weight organic matter (free compounds) is being accessed in analogy to solvent extraction procedures. The sedimentary rocks chosen for extraction span the classic geochemical organic matter types. A type I organic matter-containing sedimentary rock produces n-alkanes and isoprenoidal hydrocarbons at 300 °C and 1500 psi that indicate an algal source for the organic matter. Extraction of a rock containing type II organic matter at the same temperature and pressure produces aliphatic hydrocarbons but also aromatic compounds reflecting the increased contributions from terrestrial organic matter in this sample. A type III organic matter-containing sample produces a range of non-polar and polar compounds including polycyclic aromatic hydrocarbons and oxygenated aromatic compounds at 300 °C and 1500 psi reflecting a dominantly terrestrial origin for the organic materials. Although extraction at 300 °C and 1500 psi produces extracts that are comparable to solvent extraction, lower temperature steps display differences related to organic solubility. The type I organic matter produces no products below 300 °C and 1500 psi, reflecting its dominantly aliphatic character, while type II and type III organic matter contribute some polar components to the lower temperature steps, reflecting the chemical heterogeneity of their organic inventory. The separation of polar and non-polar organic compounds by using different temperatures provides the potential for selective extraction that may obviate the need for subsequent preparative chromatography steps. Our results indicate that subcritical water extraction can act as a suitable replacement for conventional solvent extraction of sedimentary rocks, but can also be used for any organic matter containing mineral matrix, including soils and recent sediments, and has the added benefit of tailored extraction for analytes of specific polarities. |
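The temperature and polarity argument in the entry above lends itself to a small numerical illustration. The sketch below linearly interpolates the approximate dielectric constants quoted in the text (about 45 at the lowest extraction temperature, 40–32 at moderate temperatures and about 20 at 300 °C) to suggest which extraction temperature step best matches a target solvent polarity. The anchor temperatures, target values and function names are illustrative assumptions, not output of a water-property model.

```python
import numpy as np

# Approximate dielectric constants of subcritical water quoted in the text,
# assumed here to correspond to the extraction temperature steps of the study.
TEMPS_C = np.array([150.0, 200.0, 250.0, 300.0])
DIELECTRIC = np.array([45.0, 40.0, 32.0, 20.0])


def dielectric_at(temp_c: float) -> float:
    """Linearly interpolate the dielectric constant at a temperature (deg C)."""
    return float(np.interp(temp_c, TEMPS_C, DIELECTRIC))


def temperature_for_dielectric(target_eps: float) -> float:
    """Pick the extraction temperature whose dielectric constant is closest
    to a target value (e.g. that of a conventional organic solvent)."""
    # np.interp needs increasing x, so interpolate on the reversed arrays.
    return float(np.interp(target_eps, DIELECTRIC[::-1], TEMPS_C[::-1]))


if __name__ == "__main__":
    # Acetone-like polarity (eps ~ 21) -> near the 300 deg C step used for
    # non-polar, type I-dominated extracts.
    print(round(temperature_for_dielectric(21.0)))   # ~296
    # Methanol-like polarity (eps ~ 33) -> an intermediate temperature step.
    print(round(temperature_for_dielectric(33.0)))   # ~244
```

In this reading, selecting a lower target dielectric constant simply pushes the suggested extraction temperature towards the 300 °C end of the protocol, mirroring the selective-extraction idea discussed in the entry.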
31,404 | Individual differences in resting-state pupil size: Evidence for association between working memory capacity and pupil size variability | During the last decade, some evidence for a positive relationship between estimates of individuals' working memory capacity with baseline pupil size has emerged in the cognitive science literature, indicating that individuals with high WMC have larger pupils during rest.Working memory is a theoretical construct, introduced first by Baddeley and Hitch, which refers to “a hypothetical cognitive system” involved in maintaining, manipulating and retrieving task-relevant information.An ancillary idea is that WM is capacity limited and that such a capacity widely differs between individuals.Indeed, several empirical studies have shown that WM capacity is limited and individual variation can predict performance on a wide range of cognitive tasks.This relationship also includes higher-order cognitive tasks, in which inhibition, reasoning, or problem-solving skills are required, and it may be related to general intelligence.In most experimental paradigms, baseline or tonic pupil size refers to resting-state and/or pre-trial measurements of pupil size under a constant light condition with no task and no meaningful stimuli to attend to.Resting-state baseline pupil size is measured for a few seconds to a few minutes before initiating an experimental session and pre-trial baseline pupil size is measured right before initiating each trial.Participants are typically asked to simply look at a fixation cross in the middle of the screen to limit eye movements and its consequent effect on measurements.Baseline in the present study refers to the resting-state pupil diameter.Pupil size has also been linked to the general psychophysiological construct of ‘arousal’.Consistent with this, Heitz et al. proposed that the finding that high WMC individuals had larger resting-state pupil size compared to those with low WMC might indicate a generally higher level of arousal in the high WMC individuals, which may enable them to control attention when situational interference increases.In contrast, Tsukahara et al. argued that neither a higher arousal level account nor mental effort could explain the differences in resting state pupil size between individuals with high and low level of WMC."In Tsukahara et al.'s results, both WMC and fluid intelligence correlated positively with resting-state pupil size, but WMC could not predict individuals' tonic pupil size when controlling for the effect of Gf.The larger resting-state baselines in high Gf individuals was proposed to be related to the neuromodulatory effect of the tonic activity in the locus coeruleus that leads to stronger functional connectivity in the default-mode and executive attention networks of the resting-state brain.This stronger functional connectivity, may enable higher Gf individuals to engage with relevant events more quickly, similarly to what Heitz et al. 
suggested.The locus coeruleus-norepinephrine system is one of the main brain areas that modulates the pupil size through its excitatory effect on sympathetic pathway and its inhibitory effect on the parasympathetic oculomotor complex.The LC is a subcortical brain structure located in each side of the rostral pons and the only source of norepinephrine in the brain.The LC projects to many cortical and sub-cortical brain areas, which allow it to modulate several cognitive and emotional processes."The LC's level of activation is vital for arousal, alertness, and awareness, and more generally for the regulation of changes between brain states and behavioral states.For example, Aston-Jones and Cohen suggested that the LC-NE system is involved in regulating the balance between exploitation and exploration modes of behavior.Neuronal recordings and stimulation in animals have shown a tight link between modes of activity in LC and pupil diameter.These modes of activity differ in the pattern of spike discharge and the properties of NE release.The phasic mode is driven by relevant external or internal stimuli and is characterized by bursts of high-frequency neuronal discharge and pupil dilations.The LC tonic activity, on the other hand, is, relative to the phasic discharge, distinguished by stochastic, slow firing rates.The tonic discharges regulate the arousal level and correlate closely with the pattern of tonic pupil fluctuations.A positive relation between LC activity and pupil diameter has also been found in human neuroimaging studies.While low tonic LC activity is associated with small baseline pupil sizes along with sleepiness or fatigue, high tonic LC activity is associated with stress, high arousal, large baseline pupil sizes, and with exploration mode of behavior.Finally, a medium level of LC tonic activity is associated with an optimal level of arousal, attentiveness, task engagement, better performance, and a medium level of pupil size.In addition to the level of activity, the variability of LC tonic firing may also be important for the functional diversity of LC, and for the behavioral flexibility, i.e. for the random exploration of reward resources, and for the dynamic regulation of arousal and wakefulness based on internal state.However, while a variable LC tonic activity can be adaptive under resting-state, it can be destructive during task performance.In fact, Unsworth and Robison found that lower working memory capacity was related to greater variability, rather than an average decrease, in pre-trial baseline pupil sizes.According to them, this may indicate the presence of a more variable LC tonic activity level and consequently more lapses of attention in individuals with lower WMC, compared to those with higher WMC.While the relation between WMC and variability of pre-trial baseline pupil sizes, and relation between WMC and mean pre-trial and mean resting-state baseline size have been previously examined, the relation between WMC and variability of resting-state baseline pupil sizes remains unknown.The association between resting-state pupil fluctuations and WMC may be quite relevant to understanding the relationship between pupil as an index of arousal and WM. 
Results from a recent imaging study showed that resting-state pupil fluctuations were associated with activity in the salience and executive networks.That is, resting-state pupil dilations were accompanied by increased activities in dorsal anterior cingulate cortex and anterior insula.The dACC and anterior insula are both components of the salience network and thought to be involved in tonic alertness and arousal states."Results also showed that resting-state pupil dilations were associated with increased activity in the executive network, which is linked to working memory.Moreover, they found the same pupil-brain activity correlations, along with increased activation in the thalamus, also when participants were under a sleep-restriction condition.As a result, Schneider et al. proposed that the pupil fluctuations may reflect an arousal regulation, indicating that subjects were trying to remain alert while keeping their gaze on the fixation point.The above neuroimaging findings are intriguing in light of both the “control or executive attention view” of WMC and the LC account of individual differences in WMC and attentional control, which suggest that WMC reflects a general top-down ability to control attention, through the neuromodulatory effect of the LC-NE system.However, there is limited experimental evidence for this relationship and the nature of the relationship is not clear.Results from other psychophysiological measurements of the autonomic nervous system like heart rate variability indicate that high WMC associates with higher variability in HRV.As pupillary fluctuations, HRV is under the influence of the parasympathetic nervous system and linked to cognitive functions.Individuals with higher rest HRV had higher WMC and better cognitive control.Indeed, pupillometry with eye-tracking may be the most promising method to study the relationship between arousal and the LC tonic activity with WMC, given the tight link between the LC and pupil size and that, in comparison to heart rate and skin resistance, pupil size is the most consistent index of sympathetic activity of autonomic nervous system.Moreover, neuroimaging of the LC is not only costly but methodologically challenging because of its small size and its location in the brainstem.Thus, the aim of present study is to re-assess the relation between WMC and tonic pupil size and to attempt to replicate the previous findings indicating a relationship between the state of noradrenergic modulation, as indexed by pupil size and WMC.Specifically, we hypothesized that, if the average baseline pupil is related to WMC, then there should be 1) a significant difference in resting-state pupil size between individuals with high and low levels of WMC, essentially replicating findings of previous studies; and 2) there should be a positive relation between average baseline pupil size and WMC.However, if, similar to other psychophysiological measurements of the ANS, WMC is related to the variability in tonic pupil size, then 3) we should find a positive relation between WMC estimates and the coefficients of variation of these baseline pupil sizes.We recruited a sample of N = 212 participants, most of them among students from the University of Oslo.All participants had normal or corrected-to-normal vision.All signed a consent form prior to participating and received a gift card with a value of 100 Norwegian Kroner as compensation.No information regarding the main purpose of the study was revealed to them prior to testing.A statistical Power analysis 
using G*Power tool based on the reported effect size of WMC on resting-state baseline pupil size showed that, with an alpha level of 0.05, at least 88 participants would be required to get a Power of 0.95.Thus, our sample is well in excess of the needed statistical power for replicating previous results.Having large enough sample size is important since dividing a sample into two groups based on median value decreases statistical power.To test our hypothesis, we included in the initial phase of several, separate, pupillometry experiments that we have been running lately in our laboratory at the department of psychology, both a measure of WMC and of resting-state baseline pupil size.Participants were asked to just look at a fixation cross presented at the middle of a blank screen and then WMC was measured.This procedure and fixation stimulus were identical in every session.However, the WMC of 20 participants were estimated before measuring resting-state pupil sizes due to the experimental setup in a specific study.Excluding these individuals from the analysis did not change the patterns of results.A binocular Remote Eye Tracking Device was used to record the pupil size.The R.E.D. can operate at a distance of 0.5–1.5 m and its recording capabilities are not influenced by room lighting.All participants were tested in the same windowless room with constant illumination.The recording sample rate was 60 Hz with a spatial resolution of about 0.1 degrees.Participants were seated in front of a 47 × 29.4 cm color, flat LED monitor with a resolution of 1920 × 1080 pixels and 60 Hz refresh rate.After finishing the calibration and validation procedure, participants were asked to just look at a black fixation cross, presented at the center of a blank empty screen.Differently from some previous studies, head movements were stabilized using a chin-rest that kept the eye-to-monitor distance constant at 57 cm.The measurement duration for the first study with this experimental setup designed to measure WMC and resting-state baseline pupil sizes was 5 min.After analyzing these data, we modified the setup and reduced the measurement duration to 2 min for the last 172 participants; simply because we found a strong correlation between the median pupil sizes obtained from the 5 min recordings and the mean pupil sizes from the last 2 min of the recording, making irrelevant collecting long-lasting recordings."Participants' WMC was estimated using the “Letter-Number-Sequencing” task.The task is a subtest of the Wechsler Adult Intelligent Scale Third Edition.Participants are presented with strings consisting of both numbers and letters combined, which are unsorted.These strings vary in length and the task is to organize the numbers in ascending order and the letters in alphabetic order.The test is discontinued when the subject fails three consecutive sequences of the same length.A total raw score is estimated for each participant by measuring the total number of correctly recalled sequences.The test can be quickly administered and, most importantly, it is highly correlated with laboratory measures of WMC.Specifically, it has a high correlation with a composite score of three separate operation span tasks and it is the most widespread measure of WM among European psychologists.After visual inspection to evaluate the quality of data, artifacts and time intervals containing blinks were replaced by linear interpolations beginning five samples before and five samples after a blink.Interpolated data were further filtered, 
using a Hampel filter, and then smoothed using Lowess smoothing, to exclude outliers and high-frequency instrumental noise.The statistical analyses were repeated for raw, interpolated, Hampel filtered and smoothed pupil data to investigate whether our preprocessing procedure influences the results.Since the results did not change, we report only the outcomes from the data that had undergone the whole preprocessing procedure.Finally, to measure the median and mean pupil size for each participant, we used 100 s from the 2 min recording, skipping the first and last 10 s. All pre-processing of pupillary data was done using R.Analysis scripts are available from https://github.com/thohag/pupilParse.In addition, a WMC score was calculated for each participant.A higher score indicates higher WMC.To assess differences in baseline pupil size between individuals with high and low WMC, the median of total scores for WMC was calculated and participants were then divided into two groups with high and low WMC.To examine whether WMC relates to the variability, instead of the average, of tonic pupil size, the coefficient of variation (CoV) of baseline pupil diameters was computed.Considering our recording frequency of 60 Hz and the measurement duration of 100 s, there were 6000 recorded data points for each participant.First, the mean and standard deviation (SD) of these recorded baseline values were calculated for each individual, and then CoV was computed using the following formula: CoV = (SD/mean) ∗ 100 (a minimal computational sketch of this preprocessing and CoV pipeline follows this entry).The statistical analyses were performed with IBM SPSS version 25 and JASP free software for the Bayesian analyses.Descriptive statistics for WMC scores and baseline pupil size in each group are presented in Table 1.As seen, there was a large range of WMC scores between individuals.After checking for normality, an independent-samples t-test was used to test whether individuals with high WM scores show larger mean resting-state baseline pupil sizes than those with lower WMC.As shown in Fig. 
1, there was no significant difference in mean baseline pupil size between individuals with high WMC and low WMC, t = −0.45, p = .65; even though the difference in WMC between individuals with high and low levels of WMC was significant, t = −20.66, p < .001, Cohen's d = 2.8.Controlling for the effect of age did not change the result, and there was also no significant difference in mean baseline pupil size between females and males.The data were also examined by estimating the Bayes factor to test how probable it is to find a significant difference in baseline pupil size between individuals with high and low level of WMC.The prior was based on the default JASP Cauchy distribution and the fit of the data under the null hypothesis was compared with the fit under the alternative hypothesis.The obtained Bayes factor was 0.16, which is below a BF = 0.33, and can, therefore, be considered as substantial evidence for H0.In fact, these results are about 6.25 times more likely under the null hypothesis than under H1.A simple linear regression analysis also showed that WMC score was not a significant predictor of mean baseline pupil size with an R2 of 0.004, F = 0.87, p = .35.According to result from Bayesian regression analysis, these data are 4.35 times more likely under the null hypothesis than under H1.To examine the third hypothesis, i.e., if there is a significant difference in CoV of baseline pupil size between individuals with high and low WMC, an independent t-test analysis was run with CoV of pupil size as dependent variable and level of WMC as independent variable.Results revealed a significant difference in CoV of pupil sizes between individuals with low and high level of WMC, t = −2.98, p = .003, showing that the mean CoV was significantly larger in individuals with higher level of WMC than mean CoV in low WMC individuals.The estimated Bayes factor, was 9.09, which can be considered as moderate evidence for H1.A simple linear regression analysis showed that WMC score was a significant predictor of CoV of baseline pupil size with an R2 of 0.05, F = 10.73, p = .001.In the present study, we found a novel positive relationship between WMC and CoV of resting-state pupil size, indicating that higher WMC was associated with higher variability in resting-state pupil size.However, results did not show any significant relationship between mean resting-state baseline pupil size and working memory capacity.Therefore, we failed to replicate the findings of two previous studies that indicated that higher WMC is associated with a larger mean resting-state baseline pupil size.We note that resting-state pupil size refers to the pre-task or pre-experimental baseline pupil size, which is measured before running the experiments and differs from pre-trial baseline pupil size, which is measured right before initiating each trial.One possible reason for the present negative findings regarding average baseline pupil size may be that the relationship between the mean baseline pupil and working memory capacity may be sensitive to the characteristics of the specific WMC tests.We note that the we used Letter-Number Sequencing to measure WMC, whereas Heitz et al. used “operation- span task”, and Tsukahara et al. 
used three “reading-, operation- and symmetry- span tasks” to measure WMC.These three span tasks are known as “complex span task”, because, in addition to encoding the memory items, they include a concurrent information-processing requirement.However, LNS and these span tasks belong to the same class of executive-functioning WM tasks since they involve similar cognitive processes, i.e., maintaining, manipulating and retrieving the relevant information.Moreover, the LNS is the most widespread measure of WM among European psychologists and considered a reliable index of WMC.Hence, it is puzzling in the light of the original hypothesis that the present WM task would differ in its relationship to the pupil compared to the other WMC estimates.We also note that in one of the previous studies, the effect size of the relationship between WMC and baseline pupil size was strong, where high WMC participants were found to have about 1 mm larger baseline pupil size than the low WMC participants.One would expect that using WMC tests with a different level of sensitivity might influence the degree of the relationship, not its presence.It is noteworthy that Tsukahara and colleagues showed also that after controlling for the effect of fluid intelligence, the unique variance of WMC was no longer associated with the mean resting-state baseline pupil size.Thus, it is unlikely that our null finding for relationship between WMC and mean resting-state baseline pupil size can be due to differences in the scales that were used to measure WMC.Another possibility for the present negative finding is that we did not have sufficient variability in WMC scores, in our sample."To evaluate whether our WMC results are comparable to those of the previous studies, we examined the descriptive information about the WMC in Heitz et al.' study, where means and SD were reported to be, respectively, 6.33 and 24.52 for low and high WM span groups.Thus, it would seem that the mean difference in WMC scores, between those in the upper and lower quartiles of WMC distribution, was compressed in our sample compared to the mean difference in the Heitz et al. sample, but the between-group difference in WMC was still significantly different in our study."Unfortunately, the range of WCM scores was not reported in Tsukahara et al.'s study.In addition to possible differences in our samples, there are methodological differences between this and the previous studies."In contrast to the procedure in our study, as declared by Tsukahara et al. and Heitz et al., “No devices, such as a chin-rest, were used to stabilize the subject's head position.”", "Given the lack of this distance stabilizing factor, the mean pupil size will be larger when the distance between the eyes and the tracker gets generally shorter. 
"This distance is reported to be between 60 and 80 cm in Tsukahara et al.'s study, which can influence on the eye-tracking precision causing higher degrees of instrumental artifacts.Although this may seem unlikely to be alone an artifact capable to cause such systematic differing findings, it may contribute some additive effects.In fact, the most reliable measures of pupil size in mm is provided by systems that take into account head distance and rotation together with pupil size in video pixels, so as to compute a reliable “mapped pupil diameter”.Finally, we cannot exclude the possibility that the larger mean resting-state baseline pupil size in high WMC participants in the two previous studies refers to a higher level of arousal mediated by some other latent variables like motivational or cultural factors."Several studies have shown motivational influences on WMC, pupil size and LC activity.Incentives can modulate the relationship between WMC and pupil size.When it comes to relationship between resting-state baseline pupil size and WMC, such motivational factors may induce different level of engagement and motivation to achieve, and, in turn, modulate arousal in these individuals.The higher level of alertness, especially in individuals with high level of WMC, compared to those with low WMC, can also be related to cultural differences.While participants in the current study were either Norwegian or international students, in both previous studies they were all from the USA.We do not have any evidence that Norwegians tend to have smaller body size and, therefore, smaller eyes and pupils than North Americans."Although ethnicity did not affect the results in Tsukahara et al.'s study, cultural differences could influence on individuals' motivation, especially in competitive situations like in educational settings, where task engagement and giving a good impression is highly valued.We also note that American students, compared to European ones, apparently are used to participate routinely to cognitive testing, therefore they may have quite good metacognition of their abilities and be more engaged or aroused by tests, especially for those with metacognition of their higher capacities.Hence, future studies on the current issues should include participant populations that are different from the standard North American University samples.Indeed, while testing for the effect of “familiarity with the environment”, Tsukahara et al. 
found a significant relationship between being a college student and baseline pupil size.The mean baseline pupil size of the low WMC individuals in both previous studies appeared to be 1.5 mm larger than the mean pupil size of low WMC individuals in our sample.Motivational or cultural factors may contribute to this relatively large difference in mean resting-state pupil size and arousal level between samples.What seems most important is that we found that the higher WMC was associated with higher variability in resting-state baseline pupil size.This finding is novel in relation to pupillometry but is in accordance with results from other psychophysiological measurements of the autonomic nervous system such as heart rate variability.As pupillary fluctuations, HRV is under the influence of the parasympathetic nervous system and linked to cognitive functions.Individuals with higher rest HRV have been found to have higher WMC and better cognitive control.Intriguingly, findings from a recent fMRI study showed that resting-state spontaneous pupil dilations were accompanied by increased activities in components of the salience network, which is thought to be involved in tonic alertness and arousal states.Moreover, they found that increases in resting-state pupil size were also associated with increased activity in the thalamus, and in the executive network.The authors proposed that the pupil fluctuations might reflect arousal regulation, indicating that participants were trying to remain alert while keeping their gaze on the fixation point."Due to the link between the executive network and working memory, higher variability in resting-state pupil size may indicate that high WMC individuals in the present study show a higher level of arousal regulation, rather than a general higher level of arousal, compared to low WMC individuals.Nevertheless, this higher arousal regulation could also be related to differing motivational and personality traits, which would be consistent with the arousal theory of motivation.For instance, individuals with higher WMC, compared to low WMC individuals, might possess a higher level of “need for cognition” trait, which is associated with higher level of arousal seeking.Otherwise, in the absence of any particular external motivational requirements, it remains unclear what is the reason that individuals with high WMC should allocate their mental resources to keep a certain level of tonic activity in the LC and be in an alert status.What seems puzzling is that Unsworth and Robison found a negative relation between WMC and variability in “pre-trial” baseline pupil sizes, indicating that lower WMC scores were associated with more variability in pre-trial baseline pupil fluctuations.The authors proposed that the higher variability in pre-trial baselines be related to a more variable tonic LC activity lapses in the attentional control, task-off thinking, task disengagement, and worse performance in low WMC individuals when performing a task.However, pre-trial baseline is different from resting-state pupil size, which is measured before initiating the experiment, whereas the pre-trial baseline pupil size is measured right before each trial in a task, in an event-related manner.In the absence of any active task to do, higher variability in the resting-state baseline pupil size of high WMC individuals may provide a better index of variability in tonic LC activity as an individual trait towards high environmental exploration It has been argued that a main function of the LC-NE 
system is to support behavioral flexibility, i.e. the adaptation of the behavior to changes in the environment.Indeed, several accounts have suggested that behavioral flexibility is at the center of the functional role of norepinephrine in cognition.Following these ideas, different levels of NE activity could be associated with changes in the presence of random exploration, which would be associated with an increased likelihood of randomly stumbling into novel rewarding options.In structured and deterministic environments, such as a lab context in which the experimenter has defined the reward contingencies, behavioral choices and LC activity should optimally follow the values of the options given.However, in an unstructured environment, such as in a resting-state protocol where only a very limited number of reward contingencies are explicitly defined, it could be beneficial to explore several options, should an unexpectedly better alternative appear.The adaptive gain theory suggests that the LC-NE system helps to optimize performance by adjusting gain.According to Aston-Jones and Cohen, the tonic mode provides a mechanism by which the system can optimize performance in a broad sense, i.e. independent of specific pre-defined reward contingencies, by sampling a wide range of options and strengthening the representation of those that lead to states with the highest value for the person.That is, the optimal strategy under such conditions, is exploring the environment, and sampling different options until new sources of reward are discovered.Optimal reward harvesting could potentially be associated with a sustained high tonic mode activity, as also suggested by the results of Tsukahara et al., but could also be achieved by intrinsic fluctuations of activity.If this were the case, one would expect that suppressing such fluctuations of activity would inhibit behavioral flexibility.In a recent pharmacological study with rhesus monkeys, Jahn et al., showed that systemic injections of the α2 adrenoceptor agonist clonidine, which is known to decrease LC activity, was causally involved in decreasing variability in choices in a decision-making task.These results are in line with previous findings in rats showing that enhancement of LC input to the anterior cingulate lead to increased behavioral variation.Another possibility might be that optimal reward harvesting is achieved through an oscillating or continuously changing pattern of reward sampling, e.g. as transitions between exploration and exploitation.Furthermore, such an oscillation might have an optimum that varies for different individuals and WMC could be one factor that contributes to such individual differences.Future studies with better-controlled conditions, larger samples, and from different laboratories, will help to resolve these inconsistent findings. | Dynamic non-luminance-mediated changes in pupil diameter have frequently been shown to be a reliable index for the level of arousal, mental effort, and activity in the locus coeruleus, the brainstem's noradrenergic arousal center. While pupillometry has most commonly been used to assess the level of arousal in particular psychological states or the level of engagement in cognitive tasks, some recent studies have found a relationship between average resting-state (i.e. baseline) pupil sizes and individuals' working memory capacity (WMC), indicating that individuals with higher WMC on average have larger pupils than individuals with relatively lower WMC. 
In the present study, we measured pupil size continuously in 212 participants during rest (i.e. while fixating) and estimated WMC in all participants by administering the Letter-Number Sequencing (LNS) task from WAIS-III. We were unable to replicate the relation between average pupil size and WMC. However, the novel finding was that higher WMC was associated with higher variability in resting-state pupil size. The present results are relevant for the current debate on the role of noradrenergic activity on working memory capacity. |
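The preprocessing and CoV computation described in this entry (linear interpolation across blinks, Hampel filtering, Lowess smoothing, trimming the first and last 10 s of the recording, then CoV = SD/mean × 100) can be sketched as follows. The original analysis was done in R (scripts at the linked repository); the Python sketch below, including its window lengths, outlier threshold and Lowess fraction, is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

FS = 60  # Hz, recording rate reported in the entry


def interpolate_blinks(pupil: np.ndarray, pad: int = 5) -> np.ndarray:
    """Linearly interpolate NaN (blink) samples, padded by a few samples."""
    x = pupil.copy()
    bad = np.isnan(x)
    for i in np.flatnonzero(bad):
        bad[max(i - pad, 0):i + pad + 1] = True  # grow the blink mask
    good = ~bad
    x[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(good), x[good])
    return x


def hampel(x: np.ndarray, window: int = 15, n_sigma: float = 3.0) -> np.ndarray:
    """Replace local outliers with the local median (simple Hampel filter)."""
    y = x.copy()
    k = 1.4826  # scale factor relating MAD to the standard deviation
    for i in range(len(x)):
        lo, hi = max(i - window, 0), min(i + window + 1, len(x))
        med = np.median(x[lo:hi])
        mad = k * np.median(np.abs(x[lo:hi] - med))
        if mad > 0 and abs(x[i] - med) > n_sigma * mad:
            y[i] = med
    return y


def baseline_cov(pupil: np.ndarray) -> float:
    """CoV (%) of a resting-state pupil trace after the preprocessing steps."""
    x = interpolate_blinks(pupil)
    x = hampel(x)
    t = np.arange(len(x)) / FS
    x = lowess(x, t, frac=0.05, return_sorted=False)  # Lowess smoothing
    x = x[10 * FS: -10 * FS]  # drop the first and last 10 s of the trace
    return float(np.std(x) / np.mean(x) * 100.0)
```

With a 2-min trace at 60 Hz, the trimming step leaves 6000 samples, matching the number of data points per participant reported in the entry.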
31,405 | Data on the evaluation of structural index from aeromagnetic enhanced datasets with the application of Euldph-λ semi-automatic algorithm | The high-resolution aeromagnetic data were gridded using the minimum curvature algorithm of Oasis Montaj (2014 version) at a sampling interval of 100 m on a Universal Transverse Mercator projection.The gridded data were divided into East-West and North-South geomagnetic cross-section profile lines, as shown in Figs. 1 and 2, covering a total of 343,982 primary data points in a block of about 58.7 × 57.6 km .Each sub-grid was de-trended to eliminate the mean and regional trends by removing the first-order surface that best fits the data using the least-squares method.Edges of the sub-grids were tapered by the application of a 10-point cosine bell to damp discontinuities at the edges.Zeros were appended beyond the grid edges to obtain sub-grids of 559 by 559 data sampling points, which reduced the data to 312,481 secondary data points.A two-dimensional Discrete Fourier Transform was applied to each sub-grid using the Oasis Montaj software from Geosoft® Inc.The two-dimensional power spectrum was calculated and simplified to a one-dimensional radial spectrum by averaging values within concentric rings about the spectral origin and normalizing with respect to its value at r = 0 (a generic computational sketch of this radial averaging and the associated spectral depth estimate follows this entry).The depth to the base of the magnetic crust was then determined with the application of the ESA technique, using the average depth to the deepest magnetic horizon and the position of the spectral peak along a profile.The primary aeromagnetic raw datasets and the results from the structural index analysis are presented in the Supplementary files.The primary data exploration involved the initial data processing and preparation steps, through the application of the necessary DFT filtering techniques for data reduction and the application of the ESA technique, to generate data on the depths to the magnetic anomaly sources in the study area.This stage was carried out to identify the most relevant variables, i.e., the depth and Structural Index values that define the nature of the anomalies.The stage involves the choice of simple shape models that represent the anomalies.The aeromagnetic data were transformed from the space domain to the frequency domain with the application of DFT filters.This method made the computation of discretely sampled potential field data less demanding, as the data are processed faster than with the older method of applying Taylor's series .In the magnetic method of geophysical prospecting, aeromagnetic data are used to prospect directly for both magnetic and non-magnetic minerals.The method involves tracing ore-bearing formations and geological features such as faults/fractures, rock contact zones, dykes, pipes, ridges, cylindrical objects, etc.Usually, the raw data collected from the field must be corrected for time variations of the Earth's magnetic field and for aircraft platform motion.In addition to this, the International Geomagnetic Reference Field was used to remove non-crustal effects from the data before processing.Though this process was done by the company that acquired the data for the Nigerian Government, the Oasis Montaj package also performed the task before the application of other filtering tools .In contrast to the trial-and-error, indirect determination of source parameters in magnetic data interpretation, inverse methods involve the direct determination of 
the magnetic source parameters from the measured data.In magnetic data interpretation, each anomaly has an infinite number of permissible sources.The interpreter needs to narrow down this infinite number of permissible sources to a smaller number by constraining the parameters of the source rocks.Even so, there are only two parameter sets that govern the shape of any magnetic anomaly, i.e., the shape of the causative body and the distribution of magnetic material within the body.In magnetic data analysis, one of these two must be fixed or kept constant whilst the other is varied.Simple models are commonly used to estimate magnetic source parameters, that is, the depth, strike direction, inclination and declination angles, etc.However, simple pole and dipole approximations are commonly applicable to aeromagnetic anomalies associated with subsurface mineral exploration.Most airborne magnetic interpretations use a vertical prism for basement depth estimation . | A secondary dataset was generated from the Euldph-λ semi-automatic Algorithm (ESA), developed to automatically compute various depths to the magnetic anomalies using a primary dataset from gridded aeromagnetic data obtained in the study area. Euler Deconvolution techniques (EDT) were adopted for the identification and definition of the magnetic anomaly source rocks in the study area. The aim is to use this straightforward technique to pinpoint magnetic anomalies at depths that substantiate the mineralization potential of the area. The ESA was integrated with the source parameter imaging function of Oasis Montaj 2014 from Geosoft® Inc. From the data, it can be summarized that, owing to similar tectonic processes during deformation and metamorphic activity, the subsurface structures of the study area show corresponding trends. |
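The detrend, taper, FFT and radial-averaging chain described in the entry above can be illustrated with a generic sketch. This is not the ESA algorithm itself: it is a common spectral-depth recipe in which depths are estimated from the slope of the log of the radially averaged power spectrum. The grid spacing, band limits and the slope-to-depth factor of −slope/2 (valid when the wavenumber is angular, in radians per unit distance; other conventions use −slope/(4π) with spatial frequency in cycles per unit distance) are assumptions made for illustration only.

```python
import numpy as np


def radial_power_spectrum(grid: np.ndarray, dx: float):
    """Radially averaged power spectrum of a 2D anomaly grid with spacing dx."""
    ny, nx = grid.shape
    # remove the mean and a first-order (planar) trend by least squares
    yy, xx = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([np.ones(grid.size), xx.ravel(), yy.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, grid.ravel(), rcond=None)
    detrended = grid - (A @ coeffs).reshape(ny, nx)
    # 2D FFT and power
    power = np.abs(np.fft.fftshift(np.fft.fft2(detrended))) ** 2
    # angular wavenumber (rad per unit distance) of every spectral cell
    ky = np.fft.fftshift(np.fft.fftfreq(ny, d=dx)) * 2 * np.pi
    kx = np.fft.fftshift(np.fft.fftfreq(nx, d=dx)) * 2 * np.pi
    kr = np.hypot(*np.meshgrid(kx, ky))
    # average within concentric rings and normalise by the value at r = 0
    nbins = min(nx, ny) // 2
    bins = np.linspace(0.0, kr.max(), nbins + 1)
    which = np.digitize(kr.ravel(), bins) - 1
    spec = np.array([
        power.ravel()[which == i].mean() if np.any(which == i) else np.nan
        for i in range(nbins)
    ])
    k_mid = 0.5 * (bins[:-1] + bins[1:])
    return k_mid, spec / spec[0]


def spectral_depth(k, spec, kmin, kmax):
    """Depth from the slope of ln(power) over a chosen wavenumber band."""
    sel = (k >= kmin) & (k <= kmax) & np.isfinite(spec)
    slope, _ = np.polyfit(k[sel], np.log(spec[sel]), 1)
    return -slope / 2.0  # assumes angular wavenumber; conventions differ
```

In this kind of recipe the band passed to spectral_depth is usually chosen over the low-wavenumber, steeper portion of the log spectrum, which corresponds to the deeper ensemble of sources (e.g. the magnetic basement), while gentler slopes at higher wavenumbers reflect shallower sources.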
31,406 | How to fragment peralkaline rhyolites: Observations on pumice using combined multi-scale 2D and 3D imaging | Peralkaline rhyolites, although less common than their calc-alkaline counterparts, are nonetheless found in many settings including continental rifts, ocean islands and back-arc basins.During the Holocene, central volcanoes along the East African Rift, from Afar to Tanzania, have produced explosive ignimbrite-forming eruptions of peralkaline magma.Today, these volcanic centres threaten many hundreds of thousands of people, yet the dynamics of peralkaline eruptions are poorly understood and have never been observed directly.Despite their high silica contents, peralkaline melts have a relatively low viscosity as a result of their alkali-rich nature (molar (Na2O + K2O)/Al2O3 > 1; e.g., Dingwell et al., 1998; Di Genova et al., 2013).Their volatile-free viscosity is two to three orders of magnitude lower than that of calc-alkaline rhyolites: ~ 10^8 Pa·s for calc-alkaline rhyolite using the model of Giordano et al. versus ~ 10^5.5 Pa·s for peralkaline rhyolite using the model of Di Genova et al., both at 1223 K. Peralkaline rhyolite viscosities are so low that the fragmentation threshold for brittle failure should never be reached during magma ascent and degassing unless significant microlite crystallisation takes place, though numerical modelling has suggested that initial temperature may also exert a strong control on the depth of brittle fragmentation and whether it can occur at all.Peralkaline magmas are associated with a large range of eruption styles.For example, on the island of Pantelleria, Italy, magmas with near-identical major element compositions have produced domes, lava flows, pumice cones, thick tephra fall deposits and pyroclastic flow deposits.The widespread welding and rheomorphism of the ignimbrites and fall deposits are a consequence of the low viscosity and correspondingly low glass transition temperature of peralkaline melts, which can allow deformation to continue for many days after emplacement.In this study, we use textural observations made on pumices from Pantelleria, Italy, to investigate the mechanisms of peralkaline rhyolite fragmentation.Our aim is to unravel the vesiculation and crystallisation processes in operation during magma ascent and hence understand magma properties to the point of fragmentation.Vesicle textures preserve information about bubble nucleation and growth, but are also modified by deformation, coalescence and outgassing.A crucial assumption made when interpreting pyroclast vesicle textures is that they represent the magma at the moment of fragmentation; that they have experienced no post-fragmentation modification.This assumption is valid when samples are rapidly quenched, as is the case for many pumices from Pantelleria, but the timescale over which textural modification occurs depends on magma viscosity, magma composition and the depth of fragmentation.In order to examine vesicle and crystal textures, as well as their interrelationships, in detail, we combined the complementary methods of multiscale 3D X-ray microtomography and high resolution 2D scanning electron microscopy.By integrating these techniques, we obtained high spatial resolution information about the geometry of objects in three dimensions, which is critical for understanding eruption processes.We compare our data to published textural studies of explosive eruptions, and assess similarities and differences in textures, bulk porosities, vesicle population characteristics and strain 
localisation features.By integrating textural and geochemical data, we reconstruct the peralkaline fragmentation process that accompanies the eruptions of these magmas, and test the limits of existing models to explain magma fragmentation.Finally, we use a fragmentation model to explore the role of overpressure inside rapidly growing bubbles as a driver for strain rate-driven fragmentation during rapid ascent.The Quaternary volcano of Pantelleria lies on the thinned continental crust of the E-W extending Sicily Channel, and has been active for at least 324 ka.The mafic northwest portion of the island is separated from the caldera-dominated, felsic southwest portion by N-S striking regional faults.The volcanic history of Pantelleria has been punctuated by ignimbrite-forming eruptions, of which the ~ 45.7 ka Green Tuff eruption was the most recent.Continuous geochemical zonation in the Green Tuff deposit, from pantellerite at its base to trachyte at its top, may represent the evacuation of a stratified reservoir of cogenetic magmas.Indeed, pantellerites are most likely formed by 70–80% fractional crystallisation of trachytic liquids.Small eruptions generating non-welded fall deposits have been most common over the last 20 ka on Pantelleria.Deposits from these eruptions have been classed Strombolian from the limited, circular extent of their tephra dispersal, in line with similar observations from Mayor Island, New Zealand.Cuddia di Mida is the site of one such Strombolian eruption, which produced a small pumice cone around the eruptive vent.Deposits from the Cuddia di Mida eruption are characteristic of the numerous small explosive eruptions that have taken place since the ~ 45.7 ka Green Tuff eruption, making it well suited to a study of the eruption dynamics and fragmentation of peralkaline magmas.The lowermost layer of the sequence is an explosion breccia and is overlain by a poorly-sorted fallout layer, which has an increasing ash content towards the top.Above this is an ashy bed overlain by a much thicker, massive, poorly-sorted fall deposit.The Cuddia di Mida deposits have not been dated, but the eruption probably occurred at a similar time to the Cuddia del Gallo eruption ka; Scaillet et al., 2013): a likely eruption window of 9.7–7.1 ka can be inferred from the ages of the nearby Serra Fastuca and Cuddia del Gallo eruptions.A bulk sample of pumice clasts was collected from a single horizon in the middle of the upper massive layer on the Cuddia di Mida cone.The unit consists of juvenile clasts ~ 1–10 cm in diameter.Grey clasts make up ~ 95 vol.%, with the remainder made up of black and mixed clasts and non-juvenile clasts which are < 10 cm in diameter.This is sample number 09PNL001 from Neave et al.Density measurements of juvenile material were carried out using the method of Houghton and Wilson, with the type of material being noted.Bulk densities were converted to porosities using a glass density of 2520 kg·m−3, calculated from the Cuddia di Mida glass composition of Neave et al. 
at room temperature and pressure.The grey and black pumices have indistinguishable major element glass compositions and the same glass density was therefore used for both pumice types.Grey pumices exhibit the lowest density of any juvenile material from the Cuddia di Mida eruption.In Strombolian eruptions, grey pumices are thought to represent the films that encase gas slugs and are therefore most likely to capture the moment of fragmentation.The black and mixed pumices appear to be collapsed grey pumices and therefore were not considered further as they are unlikely to capture the moment of fragmentation.Cylinders ~ 10 mm in diameter and ~ 10–20 mm in height were cut from four clasts of the grey pumices for qualitative textural analysis by SEM and XMT imaging.Two additional cylinders ~ 5 mm in diameter were cut from clasts A and C in order to acquire high quality XMT images at a range of resolutions.These two cylinders were also imaged by SEM.Full details of SEM and XMT image acquisition and processing, including the calculation of vesicle size distributions which followed the principles employed in the FOAMS software, are included in Supplementary material 1.All images, both SEM and XMT, are available from the authors upon request.As the histogram of porosities shows a bimodal distribution, a robust estimate of the average density of the whole population cannot be made owing to insufficient measurements.The broad, low porosity mode consists of black and mixed pumices whereas the narrow, high porosity mode consists exclusively of grey pumices.Sufficient measurements of the high porosity mode were made to obtain a robust estimate of its average.The clasts used for textural analysis are all grey pumices from the high porosity mode.The average porosity estimated from the bulk density of A and C is 76.2 vol.%, which compares well with the vesicularity calculated from 2D SEM images.Crystal phases are dominantly anorthoclase and aegirine augite, alongside subordinate Fe-Ti oxides and aenigmatite.The average crystal content estimated using XMT images is 3.24 vol.% and the average aspect ratio of the crystals is 2.41.Crystal size distributions were not calculated from SEM or XMT images due to the low number of crystals present, i.e., crystal populations are not statistically robust.Therefore, only crystal area contents were measured in SEM images for calculation of crystal-free vesicle number densities.No microlites were observed, even in the highest resolution SEM images.The uniform BSE intensity of the pumice glasses implies that any nanolites present must be < 0.02 μm2.Grey pumices show a variety of vesicle textures in both SEM and XMT images.In some regions, there is a sub-spherical, unimodal, isotropic vesicle population connected by thin melt films that have an overall appearance resembling a polyhedral foam.Some regions contain elongate vesicles which have thicker vesicle walls than the surrounding regions and therefore appear denser.Whilst vesicles within these regions are strongly aligned, nearby regions have different alignments and there is no overall bulk preferred orientation.Medium-sized vesicles associated with crystals are often somewhat elongated perpendicular to crystal faces and are connected to the crystals by thin melt films; the crystals themselves are often mantled by melt films.The largest vesicles are distributed randomly throughout the samples and have highly convoluted surfaces that are often, but not always, associated with crystals or regions of small vesicles.The 
films separating these large vesicles are very thin and often pinch out in the middle to widths thinner than the resolution of the SEM images.In SEM and XMT images, all samples display all these textures in approximately similar amounts with two exceptions: in SEM images, A5 only displays the polyhedral foam texture with occasional larger vesicles; and in XMT images, C5 displays more of the elongate and orientated deformation vesicles.Vesicle size varies by three orders of magnitude in clasts A and C with L ranging from 1.69 × 10^−3 mm to 4 × 10^0 mm.Vesicle wall thicknesses vary from below the resolution of SEM images to ~ 30 μm.A10 and C10 contain equal proportions of circular and elongate vesicles whereas A5 contains 33% elongate vesicles and C5 62%, as observed qualitatively.Relationships between the number of vesicles per unit volume and L from the SEM data are similar for both clasts in the range L = 0.15–4000 μm, with greater variation found at the upper and lower limits of L.Stereological correction procedures from Sahagian and Proussevitch and Mangan et al. produced similar results.Vesicle properties calculated with the more widely used SP98 procedure were carried forward into further calculations.The XMT data show very similar trends for clasts A and C, with greater inter-sample variation for large vesicles.In these samples, the XMT data extend the range of L to values half an order of magnitude greater than those recovered by SEM, and the higher number of vesicles observed at larger L means less scattered data at larger vesicle sizes.At intermediate values of L, XMT and SEM data have very similar NV distributions.For cumulative vesicle number density, changes in slope at ~ 2 × 10^−2 mm and ~ 5 × 10^−1 mm define three segments, which can be fitted with power-law curves (a minimal segment-fitting sketch is given below, after this passage).At small values of L, the curve can be fitted with a power law exponent of 1.96.For intermediate values of L, d increases to 3.24 and 3.28 respectively.For large values of L, d decreases to 2.06.The average melt corrected total vesicle number density from SEM images is 2.52 × 10^6 mm^−3, which is two orders of magnitude larger than the value of 4.23 × 10^4 mm^−3 from XMT images.NV,totmelt values are dominated by the smallest vesicles, which can be artificially combined by XMT when image resolution is insufficient to capture the finest of melt films or artificially separated by SEM when complicated vesicles are counted multiple times on a 2D surface.When NV,totmelt is calculated using vesicles of 2 × 10^−2 < L < 2 × 10^−1 mm, the XMT and SEM datasets show close agreement.The spatial correlation between crystals and moderately large vesicles identified qualitatively was tested further in A10 and C10 as they contain the most crystals and were imaged with a resolution appropriate for capturing larger vesicles.The NV versus L relationship of all vesicles was compared to that of the 100 vesicles closest to each crystal quantified using 3D nearest neighbour analysis implemented in the SpatStat package in R.Due to small instabilities during repeated iterations of nearest neighbour calculations, NV versus L systematics of near-crystal vesicles are presented as a field rather than a single line.Vesicles near crystals have larger modal equivalent diameters by ~ 1.5 × 10^−1 mm, verifying previous qualitative assessments.By combining SEM and XMT imaging, we were able to obtain high spatial resolution images as well as quantifying 3D relationships between objects.When applying any method with a finite spatial resolution, a population of 
small features may always be beyond the limits of imaging resolution.The resolution of the XMT data was insufficient to determine the finest of vesicle walls and the presence, or in this case absence, of microlites.Region of interest scanning, or higher resolution XMT laboratory systems, can yield 3D datasets with voxel resolutions down to 50 nm which would allow SEM-comparable imaging of thin vesicle walls, albeit within much smaller 3D volumes.However, the large, heterogeneously distributed high density crystals increased image noise and thus prevented observation of fine scale structures in these samples.In highly porous samples, like those investigated here, XMT image analysis generally underestimates vesicle number densities, primarily by the over-coalescence of neighbouring vesicles.Direct comparison of volcanological interpretations from SEM and XMT multiscale data should therefore be made with caution.For example, multiscale imaging studies of basaltic scoria and bombs from Villarrica observed discrepancies between SEM- and XMT-derived NV,tot values of a similar magnitude to those we observe at Pantelleria.In contrast, in datasets where vesicles are large with respect to the XMT voxel resolution, SEM and XMT datasets may agree well with each other, as reported in pumices from Montserrat.Imaging using any method where the smallest feature is less than three pixels/voxels in diameter will be subject to significant uncertainty.Segmentation and separation of the vesicles in the 3D dataset were performed by automated methods, and were entirely parameterised from the data.The processing of XMT data therefore avoided the time-consuming manual rectification required for SEM data and eliminates user-induced bias for feature recognition.The good agreement between the VSDs from both methods indicates that our SEM and XMT datasets can be combined to extend the range of L. 
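A minimal sketch of the piecewise power-law description quoted above (cumulative number density NV(>L) falling off roughly as L^−d over three size ranges separated by breaks near 2 × 10^−2 mm and 5 × 10^−1 mm) is given below. It simply fits a straight line in log–log space over each segment; the break positions are taken from the text, while the function name, the ordinary least-squares fit and the synthetic data in the usage example are illustrative assumptions and not the FOAMS/SP98 workflow used in the study.

```python
import numpy as np


def powerlaw_exponents(L, Nv_gt, breaks=(2e-2, 5e-1)):
    """Fit N_V(>L) ~ L^-d on each segment of a cumulative vesicle size distribution.

    L      : equivalent diameters (mm), ascending
    Nv_gt  : cumulative number density of vesicles larger than L (mm^-3)
    breaks : segment boundaries in mm (values quoted in the text)
    returns: list of exponents d, one per segment
    """
    edges = (L.min(), *breaks, L.max())
    exponents = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (L >= lo) & (L <= hi) & (Nv_gt > 0)
        slope, _ = np.polyfit(np.log10(L[sel]), np.log10(Nv_gt[sel]), 1)
        exponents.append(-slope)  # N_V(>L) ~ L^-d, so d is minus the log-log slope
    return exponents


if __name__ == "__main__":
    # synthetic cumulative VSD with a single exponent d = 2 (invented data)
    L = np.logspace(-3, 0.5, 200)      # ~0.001 mm to ~3 mm
    Nv = 1e2 * L ** -2.0
    print(powerlaw_exponents(L, Nv))   # ~[2.0, 2.0, 2.0]
```

Applied to real cumulative data, a change in the fitted d across the two breaks is what distinguishes the continuous-nucleation segments (d near 2) from the coalescence-affected intermediate segment (d above 3) discussed in this passage.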
Using XMT scans at two resolutions, it is theoretically possible to constrain VSDs over at least five orders of magnitude of equivalent diameter.XMT is able to accurately define the volume of all vesicles without using stereological corrections.This is particularly important for non-spherical elongate or coalesced vesicles, which are treated poorly by standard stereological conversions applied to 2D data.For ellipsoidal vesicles, vesicle volume calculated assuming sphericity using the 2D cross-section can significantly over or underestimate volume depending on orientation relative to the 2D section plane.Vesicles with highly complex morphologies can be counted multiple times depending on their intersection with the plane of the 2D section, affecting size distributions and number densities.The limited sample area of 2D analyses impacts on the structural information extracted, and 3D imaging is critical for textural studies.This is highlighted by sample C5, where the strong, localised and variably oriented fabric visible in the XMT images is entirely missed by the SEM data acquired in a single plane through the same sample volume.3D imaging also allowed us to quantify spatial correlations between vesicles and crystals, which was not possible from 2D data due to the limited number of crystals intersected in single slices.Grey pumices exhibit a narrow range of porosities and are texturally similar to one another – they have VSDs that are within error over the full range of L.The modal density of the grey pumices is similar to the Oira pumice cone and Ruru Pass Tephra of Mayor Island, NZ, both magmatic peralkaline eruptions of Strombolian-to-Hawaiian intensity.The power-law relationships in the cumulative VSD data imply non-equilibrium, continuous and/or accelerating nucleation and growth of bubbles; conditions common during explosive eruptions of silica-rich magmas.Power law exponents of < 2 have been shown experimentally to represent continuous nucleation and free growth of bubbles; we suggest that the smallest vesicles originated in this way.This value of d is comparable to those reported for vesicles of a similar size from Askja 1875 and Chaitén 2008, where bubble development is thought to reflect a final stage of rapid decompression that occurred shortly before fragmentation at a high degree of vapour supersaturation."For intermediate vesicle sizes, our peralkaline samples have a power law exponent of ~ 3.25, a change in slope which may have been caused by bubble coalescence overprinting continuous nucleation, a process that has been reported for Askja 1875, Chaitén 2008, Mount Mazama 7700 BP and Taupo 1.8 ka.This intermediate-sized population of vesicles includes heterogeneously distributed bubbles that we interpret as having nucleated early on phenocrysts at low degrees of supersaturation.Our largest vesicle population returns to a power law exponent typical of continuous nucleation and free growth, which we suggest could be related to dynamic processes such as tearing and deformation during fragmentation, but has not been noted in previous studies.There is a high degree of spatial heterogeneity in vesicle deformation over small length scales, suggesting that strain was localised.This is especially noticeable in C5.The presence of deformed, elongated vesicles suggests that maximum strain rates during the eruption were locally much higher than those that would be calculated using bulk parameters.However, the larger, near-crystal vesicle population shows little or no deformation, which 
suggests the possible formation of strain shadows around crystals.The spatial relations between crystals and deformation require further investigation before this can be quantified.To compare vesicle textures of the Cuddia di Mida eruption with those from other eruptions, literature data from a variety of magmatic eruptions are shown in Fig. 10.Fig. 10a displays NV versus melt SiO2 content for a wide range of magma compositions and eruption styles.In general, rhyolitic eruptions have higher NV than basaltic eruptions, although some basaltic Plinian eruptions reach values similar to rhyolitic eruptions.Within basaltic eruptions, Plinian eruptions tend to have higher NV than Strombolian events but the values do overlap.Conversely, NV for rhyolitic eruptions does not correlate with eruption style as the small cone-forming events have NV values similar to those from sub-Plinian and Plinian events.For example, the Cuddia di Mida eruption has NV values similar to those from a small cone-forming rhyolitic eruption on Raoul and from sub-Plinian to Plinian rhyolitic eruptions.These values are one-to-four orders of magnitude larger than basaltic Strombolian eruptions and at the maximum values for basaltic Plinian eruptions.However, the total vesicle number densities we report for the Cuddia di Mida eruption are an order of magnitude larger than those reported from member A of the peralkaline Green Tuff eruption by Campagnola et al.Fig. 10b and c only include a sub-set of the eruptions used in Fig. 10a selected to represent data from two end-member fragmentation mechanisms: inertia-driven break-up of low viscosity melt and strain-induced brittle failure.Crystal-free rhyolitic eruptions were chosen as the Cuddia di Mida eruption contains only a minor phenocryst component and no microlites, implying that a high crystal content did not lead to fragmentation.As expected, comparing NV to melt viscosity shows a very similar trend to comparing to melt SiO2 content.Small peralkaline eruptions have been compared to basaltic Strombolian eruptions in previous work due to their low viscosities.However, the viscosity and NV of the Cuddia di Mida eruption are much more similar to rhyolitic eruptions than basaltic Strombolian eruptions.This may be due to the lower diffusivities of volatile species through cooler rhyolitic melts influencing bubble nucleation and growth: with slower diffusion it is easier to nucleate new bubbles than to diffuse volatiles into existing bubbles, which results in higher NV.Fig. 
10c shows vesicle size distributions for rhyolitic sub-Plinian to Plinian and basaltic Strombolian eruptions as well as our data from the Cuddia di Mida eruption.VSDs from single eruptions are similar to each other, but VSDs do not appear to correlate with eruption style or magma composition in general.Basaltic Strombolian eruptions tend to have larger vesicles compared to rhyolitic eruptions but rhyolitic eruptions also span wide ranges of vesicle sizes.However, our samples from Cuddia di Mida are more similar to those from rhyolitic eruptions than from basaltic Strombolian eruptions because they contain many small vesicles that are absent in the basaltic eruptions.The low viscosity of the peralkaline Cuddia di Mida melt does not appear to have exerted a major control on the final vesicle textures of the pumices.That is, the peralkaline rhyolites studied here resemble deposits from silica-rich, calc-alkaline eruptions with much higher melt viscosities, particularly with respect to minimum vesicle sizes and strain localisation features (the latter as described by Polacci et al.).The pumice textures do not resemble those of scoria from basaltic, Strombolian eruptions at Stromboli or Villarrica, which are characterised by much larger vesicles.Furthermore, the NV,totmelt values and VSDs calculated are similar to those from the products of high-silica calc-alkaline eruptions of varying size.Interaction with external water is not considered to be a viable fragmentation mechanism for the Cuddia di Mida eruption due to the lack of field evidence for magma-water interaction.Furthermore, pumice clasts from Cuddia di Mida lack the fluidal shapes associated with inertia-driven fragmentation of the type observed in Hawaiian eruptions; and the total vesicle number density is one–to–four orders of magnitude larger than those found in the products of basaltic Strombolian eruptions.Therefore, tearing apart of melt by bubble bursting is also not a viable fragmentation mechanism.Textural similarities between peralkaline and calc-alkaline pumices thus suggest similar brittle fragmentation mechanisms, despite differences in chemistry and physical properties.Magmas fragment in a brittle fashion when a critical, viscosity-dependent strain-rate is exceeded.Bulk magma viscosity depends on melt composition and on magma crystallinity and vesicularity.Magma water content decreases dramatically during decompression and degassing, increasing the bulk viscosity and bringing the magma closer to fragmentation.Assuming that the melt was largely degassed at the point of fragmentation, we use the PS-GM viscosity model of Di Genova et al.
to calculate a melt viscosity range of 10^4.28 to 10^7.11 Pa·s at a temperature of 1075 K.The PS-GM viscosity model is based on a modified Vogel-Fulcher-Tammann equation and is specifically calibrated for peralkaline compositions.Including crystals has a negligible effect on the bulk viscosity.Samples contain elongate vesicles which implies that melt capillary numbers were high and that the bulk viscosity decreased with increasing bubble content.At the high vesicle volume fractions observed here, the standard models that relate viscosity to porosity are not applicable.It is therefore not possible to calculate the bulk viscosity at the moment of fragmentation precisely.However, assuming that the melt had an initial water content of 5 wt.%, contained 13.7 vol.% crystals when resident in the magma chamber at 1.5 kbar and carried only a negligible volume of pre-existing bubbles, we calculate a bulk viscosity of 10^1.54 Pa·s prior to decompression.If there was no melt-bubble separation during the initial ascent, the viscosity, bubble content and pressure-dependent melt water content up to the 50 vol.% porosity threshold can be estimated.Beyond this threshold we cannot assess the effect of bubbles on viscosity and therefore a maximum estimate for the viscosity of the bulk magma containing 50 vol.% bubbles at fragmentation is 10^4.15 to 10^6.61 Pa·s.The minimum bulk viscosity required for strain-induced fragmentation is defined in terms of the conduit radius r, the volume flux Q, the elastic modulus at infinite frequency G∞ and a fitting parameter C (~0.01–0.1).For a realistic conduit radius of 10 m, a mass flux of 2.4 × 10^8 to 3.5 × 10^10 kg·s−1 is required to achieve the minimum strain rate required for fragmentation when considering the viscosities calculated above.These should be considered as minimum mass flux estimates as bulk viscosity will likely be reduced further at higher vesicle contents.The much larger Green Tuff eruption had a comparable viscosity to the Cuddia di Mida eruption during the earliest explosive, crystal-poor part of the eruption, yet the mass fluxes we calculate to be necessary for fragmentation are much larger than those estimated for both the entire Green Tuff eruption, and member A of the Green Tuff and are therefore unfeasible.Conversely, achieving fragmentation using the lower bound of the published mass fluxes for these eruptions would require a conduit radius of < 1 m.
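The mass-flux requirement quoted in the preceding sentences can be reproduced approximately with a few lines of code. Because the criterion itself is not written out in the text, the sketch below assumes the common Maxwell-type form (fragmentation once the strain rate exceeds C·G∞/μ) together with a Poiseuille-like wall strain rate of 4Q/(πr³); the values of G∞, C and the bulk density are illustrative assumptions, not parameters taken from the study.

import math

G_inf = 1.0e10      # Pa, assumed elastic modulus at infinite frequency (~10 GPa is typical)
C = 0.01            # assumed fitting parameter in the strain-rate criterion
r = 10.0            # m, conduit radius used in the text
rho_bulk = 1200.0   # kg/m^3, assumed bulk density of ~50 vol.% vesicular magma

def min_mass_flux(mu, r=r, G_inf=G_inf, C=C, rho=rho_bulk):
    """Minimum mass flux for strain-induced fragmentation under the assumed criterion."""
    Q_min = C * G_inf * math.pi * r**3 / (4.0 * mu)   # m^3/s, minimum volume flux
    return rho * Q_min                                 # kg/s

for mu in (10**4.15, 10**6.61):   # bulk viscosity bounds quoted above
    print(f"mu = {mu:9.3e} Pa.s -> minimum mass flux ~ {min_mass_flux(mu):9.3e} kg/s")

With these assumed constants the result lands within roughly an order of magnitude of the 2.4 × 10^8 to 3.5 × 10^10 kg·s−1 range quoted above; the exact numbers shift with the chosen G∞, C and density, so the values in the text should be taken as the reference.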
Assuming strain-induced fragmentation, the calculated minimum mass fluxes and conduit radii required for fragmentation in both small and large eruptions of peralkaline rhyolite respectively are thus geologically unrealistic.An alternative mechanism invokes bubble overpressure causing strain-induced fragmentation when gas is unable to expand over the timescale of decompression due to the tensile strength of the surrounding melt.Although there is no permeability data available for the Cuddia di Mida pumice, the overpressure required for fragmentation can be calculated from ΔPfr = σm / φ using the known porosity and magma tensile strength.With a porosity of 76 vol.%, the Cuddia di Mida pumices require a bubble overpressure of 1.3 MPa to cause fragmentation.Bubble overpressure is a function of decompression rate and melt viscosity.An NV,totmelt of 2.5 × 10^6 mm−3 implies decompression rates of the order of 10^7 Pa·s−1, and the melt viscosity gives relaxation times of 1.9 × 10−6 to 1.3 × 10−3 s for 1.0–0.0 wt.% water using the expression τs = μs / G∞.The onset of non-Newtonian, unrelaxed, viscoelastic behaviour at 1.9 × 10−4 to 1.3 × 10−1 s, thus implies that average decompression rates of 1.0 × 10^7 to 6.9 × 10^9 Pa·s−1 are required for fragmentation.Even the lower of these estimates is extreme, and significantly larger than the value estimated for member A of the Green Tuff eruption.Rapid decompression following edifice collapse has been suggested to explain the explosive behaviour of other magmas with seemingly insufficient viscosity to fragment.However, edifice collapse is not a viable mechanism for driving rapid decompression on Pantelleria, where cone-forming events have defined recent silicic volcanism.Instead, the high volatile content and low viscosity of peralkaline magmas may play a crucial role in promoting rapid decompression during the initial stages of eruption.Our 3D XMT data show significant, localised bubble deformation, implying that substantial partitioning of strain across heterogeneous samples took place prior to fragmentation.Strain localisation entails a complex interaction of shear heating and volatile solubility modification that can drive gas exsolution, elastic stress unloading and changes in the rheological behaviour of vesicles.Such shear bands have been observed in low viscosity magmas, such as phonolites from Vesuvius, and are thought to develop in the conduit due to lateral velocity gradients and cause outgassing.These processes result in a variable and highly heterogeneous rheology on a range of spatial and temporal scales, and a consequently variable fragmentation criterion at the bubble-wall scale.Therefore, strain localisation could have permitted fragmentation to have occurred at a lower bulk viscosity than calculated above, but requires further empirical and theoretical investigation.By investigating the textures of pumices erupted from the Cuddia di Mida vent on Pantelleria, Italy, we have inferred that, despite having bulk magma viscosities seemingly far too low, peralkaline magmas fragment by brittle failure.Integrating multiscale 2D and 3D analysis techniques on pumice samples allowed vesicle size and shape distribution characteristics to be defined across a wide range of equivalent vesicle diameters.The textures, bulk porosity, VSDs and NV,totmelt values of pumices from Cuddia di Mida are comparable with those from calc-alkaline rhyolite deposits, and imply that, despite the difference in viscosity between calc-alkaline and peralkaline rhyolites, both magma types fragment by strain-induced brittle fragmentation.
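The overpressure and timescale arguments above can be checked with simple arithmetic. The sketch evaluates ΔPfr = σm/φ and τs = μs/G∞; the tensile strength σm ≈ 1 MPa, the modulus G∞ ≈ 10^10 Pa and the factor of ~10^2 between τs and the onset of unrelaxed behaviour are assumptions (they are not stated explicitly in the text, but they reproduce the quoted numbers), and reading the required decompression rate as ΔPfr divided by that onset time is one plausible reconstruction of the argument.

phi = 0.76                # porosity (volume fraction) from the text
sigma_m = 1.0e6           # Pa, assumed magma tensile strength (~1 MPa)
G_inf = 1.0e10            # Pa, assumed elastic modulus at infinite frequency

dP_fr = sigma_m / phi     # fragmentation overpressure, Pa
print(f"fragmentation overpressure ~ {dP_fr / 1e6:.2f} MPa")   # ~1.3 MPa

# Melt viscosities corresponding to ~1.0 and ~0.0 wt.% dissolved water (from the text)
for mu_s in (10**4.28, 10**7.11):
    tau_s = mu_s / G_inf              # structural relaxation time, s
    tau_onset = 100.0 * tau_s         # assumed onset of unrelaxed, viscoelastic behaviour
    dPdt = dP_fr / tau_onset          # decompression rate needed to outpace relaxation
    print(f"mu_s = {mu_s:8.2e} Pa.s  tau_s = {tau_s:8.2e} s  dP/dt ~ {dPdt:8.2e} Pa/s")

With these inputs the script returns ~1.3 MPa, relaxation times of 1.9 × 10−6 to 1.3 × 10−3 s and decompression rates of 1.0 × 10^7 to 6.9 × 10^9 Pa·s−1, i.e. the values quoted in the paragraph above.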
We show that initial nucleation occurred on large crystals at low degrees of volatile supersaturation.This was followed by some degree of coalescence and textural maturation before homogeneous, continuous nucleation occurred during rapid ascent at higher degrees of volatile supersaturation.Our data also show a possible third regime for the largest vesicles.We show that microlite-free peralkaline pumices cannot reach classically defined fragmentation conditions under even the most extreme of permitted geological conditions, and mechanisms such as bubble overpressure driven by rapid decompression and strain localisation around crystals are suggested instead.The very high decompression rates suggested by our analysis may be aided by the high volatile content and low viscosity of peralkaline magmas.The project was conceived by ME, following the work of DAN.The manuscript arose from the M.Sci. thesis of ECH.DAN collected the samples and processed the SEM dataset.ECH acquired the XMT data and performed the analysis under the supervision of KJD.PJW provided access to the MXIF.ECH led manuscript production with further contribution from all authors. | Peralkaline rhyolites are volatile-rich magmas that typically erupt in continental rift settings. The high alkali and halogen content of these magmas results in viscosities two to three orders of magnitude lower than in calc-alkaline rhyolites. Unless extensive microlite crystallisation occurs, the calculated strain rates required for fragmentation are unrealistically high, yet peralkaline pumices from explosive eruptions of varying scales are commonly microlite-free. Here we present a combined 2D scanning electron microscopy and 3D X-ray microtomography study of peralkaline rhyolite vesicle textures designed to investigate fragmentation processes. Microlite-free peralkaline pumice textures from Pantelleria, Italy, strongly resemble those from calc-alkaline rhyolites on both macro and micro scales. These textures imply that the pumices fragmented in a brittle fashion and that their peralkaline chemistry had little direct effect on textural evolution during bubble nucleation and growth. We suggest that the observed pumice textures evolved in response to high decompression rates and that peralkaline rhyolite magmas can fragment when strain localisation and high bubble overpressures develop during rapid ascent. |
31,407 | Melting behaviour of americium-doped uranium dioxide | Americium is a minor actinide produced in nuclear fuels during their irradiation in reactors.Despite the low amount generated, Am isotopes account for a significant contribution to the long-term radiotoxicity and heat load of spent fuels .Partitioning and Transmutation is a promising strategy to decrease this contribution notably through the heterogeneous transmutation.This mode consists of incorporating Am into UO2 to form mixed dioxides.This transmutation fuel, designated as AmBB, would ultimately be irradiated in the periphery of FNR cores.The radiotoxicity and heat load of ultimate nuclear waste would thus be decreased as well as the ecological footprint of deep geological repositories .In this context, several studies have been dedicated to U1−xAmxO2±δ compounds, not only to investigate synthesis methods and assess their behaviour under irradiation in reactors , but also to determine their structural and thermophysical properties, the latter remaining scarcely known .Among them, no data regarding the U1−xAmxO2±δ melting behaviour has been reported, despite the importance of such information with respect to safety margins during irradiation, notably in accidental conditions.In this study we investigate the melting behaviour under an inert atmosphere of U1−xAmxO2±δ compounds with x = 0.10, 0.15, 0.20, which corresponds to compositions close to those envisaged for AmBB.The method used for the melting experiments is based on laser heating and a self-crucible approach in order to avoid sample-crucible interactions during the measurements.It was recently applied to UO2 , PuO2 , U1−xPuxO2 and MA-doped U1−xPuxO2 samples , and proved to give more accurate values than those previously obtained through more traditional thermal analysis methods .The samples were also characterised after melting using powder X-ray Diffraction and X-ray Absorption Spectroscopy.A comparison between pre- and post-melting structure and cationic charge distribution can thus be proposed, based on XRD/XAS characterisation of the same compounds already published in the literature .The samples were synthesized using two different processes.The chemical compositions are presented in table 1.The 10 and 20 mol.%-Am samples were synthesized at Joint Research Centre – Institute for Transuranium Elements using a process based on the combination of bead preparation and infiltration methods to produce non-contaminant beads used as precursors for sintering .The 15 mol.%-Am samples were prepared at Commissariat à l'Energie Atomique et aux énergies alternatives from UO2+δ and AmO2−ε starting powders following the UMACS process based on two successive thermal treatments separated by a grinding step .The sample characteristics are summarized in table 2.The melting behaviour of the current mixed dioxides was studied by laser heating and fast multi-channel pyrometry, an experimental method developed at JRC-ITU .Details of the laser-heating setup used in this research have been reported in previous publications , although the technique has been partially modified in the present work.During the laser shots, a mixed oxide disk was held in a sealed autoclave under a controlled atmosphere of pressurized argon in which the absolute pressure of oxygen was checked and was lower than 10 Pa.Such a controlled atmosphere permitted, together with the relatively short duration of the experiments, to minimise possible sample decomposition, particularly linked to oxidation or oxygen losses, depending on the initial
composition.This approach aims to maintain the sample integrity and its composition as close as possible to its initial value throughout the melting/freezing process.Thermograms were measured by sub-millisecond resolution pyrometry on samples laser heated beyond melting by a TRUMPF® Nd:YAG cw laser.Its power vs. time profile is programmable with a resolution of 1 ms. Pulses of different duration and maximal power were repeated on a 5 mm diameter spot on a single sample surface as well as on different samples of the same composition in order to obtain datasets of at least four usable melting/solidification temperature values for each composition.Given the limited amount and the high radioactivity of the investigated material, this dataset size was considered to be satisfactory, in that it permitted to obtain significant average values and standard deviations for each composition.The laser pulses lead to maximum temperatures between 3350 K and 3550 K.These temperatures compared with the expected values of the solid/liquid phase transitions for the pure dioxides , can be considered to be high enough to melt a sufficient amount of material to obtain a consistent thermal analysis during the cooling stage of the experiments.Excessive thermal shocks were minimised by starting each series of laser pulses from a pre-set temperature of about 1500 K, at which each sample was held, by low-power laser irradiation, for 30 s to 1 min before starting a series of high-power laser shots.The pre-heating treatment yielded also a better homogenization of the sample surface.Each series consisted of three to four pulses on the same sample spot without cooling the material below T = 1500 K.This number of pulses was empirically optimised in order to obtain a minimum number of usable data while minimising the risk of sample breaking.In fact, a sample would mostly break while rapidly cooling to room temperature.On the other hand, no more than four pulses were repeated on each cycle in order to be able to check the sample morphology regularly between one cycle and the next one.The peak intensity and duration of the high-power pulses were increased from one cycle to the other, in order to check the result repeatability under slightly different experimental conditions.This approach constituted a step forward in the laser heating technique.It ensured a better mechanical stability of the samples, on which several successive shots could be repeated to check the result reproducibility and the eventual effects of non-congruent vapourization or segregation phenomena.The onset of melting was detected by the appearance of vibrations in the signal of a probe laser reflected by the sample surface.The sample cooled naturally when the laser beam was switched off during the thermal cycle.Thermal arrests corresponding to exothermic phase transitions were then observed on the thermograms recorded by the fast pyrometers.These operate in the visible-near infrared range between 488 nm and 900 nm.The reference pyrometer wavelength was 655 nm and was calibrated according to a procedure previously reported .The normal spectral emissivities of actinide dioxides, necessary for the determination of the sample temperature, have already been studied in detail in earlier publications .Based on these previous studies and on theoretical models , the NSE of mixed dioxide samples is assumed to be wavelength-independent in the present spectral range.Under such hypothesis, the current multi-wavelength pyrometry approach yielded a constant value of NSE 
= 0.80 ± 0.04, valid for all the investigated compositions within the reported experimental uncertainty on the NSE.This value has been used to correct the radiance signal recorded by the pyrometers and convert it, through Planck's blackbody law, into real absolute temperature.The error affecting the final real temperature value due to the emissivity uncertainty was calculated to be ΔT = ±21 K at T = 3050 K.The total uncertainty of the temperature measurements was determined according to the error propagation law, taking into account the standard uncertainties associated with the pyrometer calibration, the sample emissivity and the accuracy in detecting the onset of vibrations in the RLS signal.The estimated cumulative uncertainty is thus approximately ±1% of the reported temperatures in the worst cases, with a coverage factor k = 1.XRD analyses were performed on the laser heated samples with a Bruker D8 X-ray diffractometer mounted in a Bragg–Brentano configuration, with a curved Ge monochromator, a ceramic Cu X-ray tube and a Lynxeye detector.Scans were collected from 20° to 120° in 2θ using 0.0086° step-intervals with counting steps of 5 s. Structural analysis was performed by the Rietveld method using the JANA2006 software .The XAS measurements were performed at the European Synchrotron Radiation Facility on the Rossendorf Beamline .The XAS spectra were collected on the melted samples at the U LIII, Am LIII and U LII edges in both transmission and fluorescence mode using Oxford ionisation chambers and a Canberra energy-dispersive 13-element Ge solid-state detector with a digital amplifier.A double Si crystal monochromator was used for energy selection and the energy calibration was performed using metallic foils whose K edges are close to the edges of interest, i.e. Y, Zr and Mo.
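As an illustration of the pyrometry step described earlier in this section, the sketch below inverts Planck's law at the 655 nm reference wavelength using NSE = 0.80 and propagates the ±0.04 emissivity uncertainty; it is a minimal stand-alone example, not the calibrated data-reduction chain used with the actual instrument.

import math

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant
lam = 655e-9                               # m, reference pyrometer wavelength
eps, d_eps = 0.80, 0.04                    # normal spectral emissivity and its uncertainty

def planck_radiance(T):
    """Blackbody spectral radiance at wavelength lam."""
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

def temperature_from_radiance(L_meas, emissivity):
    """Invert L_meas = emissivity * planck_radiance(T) for the true temperature T."""
    x = 2 * h * c**2 / (lam**5 * (L_meas / emissivity))
    return h * c / (lam * kB * math.log1p(x))

L = eps * planck_radiance(3050.0)                          # simulated measurement at 3050 K
T_lo = temperature_from_radiance(L, eps + d_eps)
T_hi = temperature_from_radiance(L, eps - d_eps)
print(f"emissivity-related spread at 3050 K ~ +/- {(T_hi - T_lo) / 2:.0f} K")

With these inputs the emissivity term alone gives a spread of about ±21 K at 3050 K, consistent with the value quoted above.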
XANES spectra were recorded at the U and Am LIII edges, whereas Am LIII and U LII were used for EXAFS measurements.U LII is preferred to U LIII because of the presence of neptunium in the samples.Data analyses and refinements were performed using the Athena and Artemis programs and FEFF 8.40 for ab initio calculations of EXAFS spectra.XANES spectra were normalised using a linear function and a 2nd order polynomial for pre- and post-edge approximation, respectively.The first zero crossings of the first and second energy derivatives were used to determine the white line and inflection point energy positions, respectively.Average oxidation states of the cations were determined by Linear Combination Fitting of the experimental normalised absorption spectra by using well-known reference spectra.The reference compounds used were U+IVO2.00, U4O9 and U3O8, as well as Am+IVO2 and an Am+III oxalate .The XANES spectra of the reference compounds have been recorded previously at the same beamline.The LCF energy region is defined relative to the WL position.The uncertainty on the determined molar fractions is 2 mol.% and that on the O/M ratio is 0.01.Fourier transforms of the EXAFS spectra were extracted using a Hanning window between 3.5 and 11 Å–1, and 3.5 and 14 Å–1 for U LII and Am LIII edges, respectively, in both cases with a Δk-factor of 2.Before conducting the laser heating experiment, the materials had been characterised by XRD and XAS, and those results have already been published .It was shown that the compounds are fluorite solid solutions for all Am contents.Regarding the charge distribution, equimolar proportions of Am+III and U+V have been measured in U0.85Am0.15O2±δ meaning that the O/M ratio is close to 2.00.On the contrary, only U+IV has been reported for the U0.90Am0.10O2±δ and U0.80Am0.20O2±δ compounds used in this work, suggesting the hypo-stoichiometry of these materials .Nevertheless, the EXAFS results obtained for the 20% sample clearly show a shorter first U–O distance and a longer Am–O distance compared to the values expected from the cell parameter.Such results would suggest an Am+III and U+V charge compensation as recently observed for different U0.90Am0.10O2±δ and U0.80Am0.20O2±δ samples.Hence, considering the apparent disagreement between XANES and EXAFS results in Vespa et al. and the surface oxidation observed for such samples, an O/M close to 2.00 is assumed for all the samples.
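The linear combination fitting mentioned above is, numerically, a constrained least-squares problem: the normalised sample spectrum is modelled as a weighted sum of reference spectra, with non-negative weights that are renormalised to sum to one. The sketch below shows the idea on synthetic arrays; the "spectra" are placeholders rather than measured data, and the actual analysis was carried out in Athena.

import numpy as np
from scipy.optimize import nnls

energy = np.linspace(-20, 30, 200)   # eV relative to the white line (illustrative grid)
ref_U4 = 1 / (1 + np.exp(-energy)) + 0.3 * np.exp(-(energy - 4.0) ** 2 / 8.0)
ref_U5 = 1 / (1 + np.exp(-(energy - 1.5))) + 0.3 * np.exp(-(energy - 6.0) ** 2 / 8.0)
refs = np.column_stack([ref_U4, ref_U5])

sample = 0.66 * ref_U4 + 0.34 * ref_U5          # synthetic "measured" spectrum

w, _ = nnls(refs, sample)                       # non-negative least squares
fractions = w / w.sum()                         # renormalise so the weights sum to 1
print(f"U(IV) fraction ~ {fractions[0]:.2f}, U(V) fraction ~ {fractions[1]:.2f}")

The quoted 2 mol.% uncertainty on such fractions can be estimated by repeating the fit over slightly different energy windows and reference sets.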
The experimental thermograms recorded on mixed U1−xAmxO2±δ oxides in an inert atmosphere are quite similar for each composition.For this reason and also for the sake of clarity, only one example is provided in figure 1.Based on previous experience , pressurized argon was chosen as the best atmosphere to maintain, as much as possible throughout the heating/cooling cycles, the O/M ratio, nominally at 2.00 in the initial fresh samples.For these experiments, no clear evolution of the freezing thermal arrests can be observed over successive shots on the same sample, confirming that the initial composition is maintained throughout the thermal cycles.The average melting/freezing points measured in this work, for Am/(U + Am) ratios of 10, 15 and 20 mol.%, are reported in table 1 and plotted as a function of the Am content in figure 2.Based on previous investigations performed on other material systems , these melting/freezing points can be assigned to the solidus transition at the studied compositions, whereby solidus and liquidus temperatures are too close together to be effectively distinguishable with the present experimental approach.It can be seen that the addition of Am to UO2 leads to a lower melting/freezing temperature, whereas no relevant changes in the thermogram shape can be noticed.Such a behaviour is consistent with the solidus and liquidus temperatures being close enough, at the investigated compositions, not to be distinguishable using the current technique.The solid/liquid transition temperature thus decreases from its average value in pure UO2 to that measured in U0.80Am0.20O2±δ.XANES spectra at both Am LIII and U LIII edges are compared to those of reference compounds in figure 3.Both inflection point and white line positions are provided in table 3.At the Am LIII edge, the spectra of all three samples are well-aligned with the Am+III reference.A linear combination analysis of the data based on Am+IV and Am+III reference samples indicates a molar fraction of Am+III equal to 100%.Hence, Am remains trivalent in argon as would have been expected .At the U LIII edge, an analysis of the inflection point and white line maximum positions suggests that U oxidation states in the samples are comprised between those of the U+IVO2.00 and U4O9 reference compounds, with the exception of U0.80Am0.20O2±δ, for which the inflection point is close to that of U4O9, whereas its white line maximum is at a higher energy.As presented in table 3, the corresponding U+IV and U+V mole fractions were assessed from a linear combination of reference compound spectra.The XRD patterns of the mixed U1−xAmxO2±δ oxides are presented in figure 4.After melting, U0.90Am0.10O2±δ and U0.85Am0.15O2±δ remain single fluorite-type phases whose lattice parameters are summarized in table 4.According to the O/M ratio derived from the XANES, the U0.90Am0.10O2±δ solid solution melted in argon is slightly hyper-stoichiometric while the U0.85Am0.15O2±δ oxide remains stoichiometric as the as-synthesized material .The experimental and fitted EXAFS k3-spectra at Am LIII and U LII edges and corresponding Fourier transforms are provided in figures 5 and 6, respectively.An immediate inspection of the Am LIII and U LII data shows that there are no significant differences in the periodicity of the oscillations, suggesting similar local structures for U0.90Am0.10O2±δ and U0.85Am0.15O2±δ.A good agreement between the experimental and fitted data is observed also, confirming the validity of the structural model used in the present analysis.
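The connection between these XANES-derived cation fractions and the O/M ratio is a simple charge balance: with all oxygen taken as O2−, O/M equals half the average cation valence. A minimal sketch, in which the input fractions are free parameters (the specific numbers below are illustrative and are not the fitted values of table 3):

def o_to_m(x_am, am3_fraction, u5_fraction, u6_fraction=0.0):
    """O/M ratio from charge balance: O/M = (average cation valence) / 2."""
    v_am = 3.0 * am3_fraction + 4.0 * (1.0 - am3_fraction)
    v_u = 4.0 * (1.0 - u5_fraction - u6_fraction) + 5.0 * u5_fraction + 6.0 * u6_fraction
    return (x_am * v_am + (1.0 - x_am) * v_u) / 2.0

# Charge-compensated case: 15 mol.% Am, all Am(III), an equimolar amount of U(V) -> O/M = 2.00
print(o_to_m(x_am=0.15, am3_fraction=1.0, u5_fraction=0.15 / 0.85))
# Illustrative hyper-stoichiometric case with a larger U(V) share
print(o_to_m(x_am=0.10, am3_fraction=1.0, u5_fraction=0.30))

The first call returns exactly 2.00, which is the compensation mechanism invoked for U0.85Am0.15O2±δ; any U(V) in excess of the Am(III) content pushes O/M above 2.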
The crystallographic parameters derived from EXAFS spectra fitting are reported in table 5.The first Am–O distance is in agreement with that expected for a pure Am+III–O bond (ionic radii: r(Am+III) = 0.109 nm, r(Am+IV) = 0.095 nm, r(O2−) = 0.140 nm).The first U–O distance is slightly shorter than in U+IVO2.00, which is consistent with the U+IV/U+V mixed valence considering the U+IV and U+V ionic radii (r(U+IV) = 0.100 nm, r(U+V) = 0.089 nm).The cation–cation bond lengths are essentially the same for both Am and U, indicating a random distribution in the cationic sub-lattice.One can then observe that Am and U local environments are similar to those previously reported .At the U LII edge, the Debye–Waller factors are slightly larger than those of the as-synthesized materials , which suggests a higher structural disorder and agrees with the slight hyper-stoichiometry.On the contrary, no variation of the Debye–Waller factor is observed at the Am LIII edge.This might indicate that the interstitial O atoms are preferentially accommodated around the U atoms.As shown in figure 4, the heat treatment led to the de-mixing of the U0.80Am0.20O2±δ solid solution into two fluorite phases with close lattice parameters.Looking at figure 7, one can clearly see that the U LII EXAFS spectrum of the U0.80Am0.20O2±δ is different from that of UO2 but close to that of U4O9.The U local environment is then similar to that of U in U4O9.This agrees actually with the U valence derived from XANES and the O/M of 2.15.The O/U is equal to 2.31, i.e. between U4O9 and U3O7.Considering that oxidation at room temperature from UO2 to U3O7 occurs with an accumulation of cuboctahedral defects in the UO2 fluorite structure, such a mechanism would imply that the U local environment between U4O9 and U3O7 would evolve only slightly.On the contrary, the Am LIII EXAFS spectrum of figure 7 indicates that Am remains in fluorite-type coordination.It is interesting to note that the U local environment is clearly modified while that of Am remains unchanged.This is in agreement with the increase of the Debye–Waller factors that has been observed solely around the U atoms for the U0.90Am0.10O2±δ and U0.85Am0.15O2±δ compounds.This difference in local environment clearly shows that U and Am are not randomly distributed in the cationic lattice contrary to U0.90Am0.10O2±δ and U0.85Am0.15O2±δ.One can then assume that during the melting the solid solution de-mixed into a phase rich in Am with a UO2 structure and into a phase poorer in Am with a U4O9 structure.It is however difficult to conclude on the cause of this de-mixing.One can actually imagine that the U0.80Am0.20O2±δ solid solution de-mixes during the melting as the phase is not thermodynamically stable at high temperature.But one can also argue that the de-mixing occurred in this case, only because the as-synthesized U0.80Am0.20O2±δ compound exhibited a lower cationic homogeneity compared to the initial U0.90Am0.10O2±δ and U0.85Am0.15O2±δ compounds.The formation, in U0.80Am0.20O2±δ, of an oxygen-richer phase can be linked to oxygen redistribution between the two phases.However, the occurrence of some unforeseen issues cannot be completely excluded, such as unwanted exposure of the sample to traces of air during the experimental characterisation or uncontrolled leakage of the high-pressure vessel after the laser heating tests, leading to higher oxygen impurity content in the atmosphere in contact with the sample.This point will be clarified by further investigation, in particular by
extending the analysis to compositions richer in americium.It is also important to note that the U LII EXAFS oscillations are well defined up to 110 nm−1 while the signal decreases significantly after 80 nm−1 for the Am LIII.This explains the decrease of the second FT peak corresponding to the Am-metal sphere.Therefore, the local environment around U is well defined despite the addition of O whereas a loss of order is observed after 4 Å around Am atoms.The melting/freezing point decrease reported in figure 2 for samples laser heated in pressurized argon is limited to less than 100 K for the maximum Am/(U + Am) content investigated here, 20 mol.%.One can incidentally notice that the average melting/solidification temperatures measured in the compositions U0.90Am0.10O2±δ and U0.85Am0.15O2±δ are very close to each other, whereas a clearer decrease is observed for U0.8Am0.2O2+δ.Of course based on these data one cannot exclude the occurrence, between U0.90Am0.10O2±δ and U0.85Am0.15O2±δ, of a three-phase equilibrium boundary, possibly involving complex liquid/solid equilibria in the presence of a miscibility gap either in the liquid or in the solid.Such a phase boundary would then exist at a constant temperature, consistent with Gibbs' phase rule.This hypothesis, which would require more experimental data at intermediate compositions to be confirmed or ruled out, seems rather unlikely in (U,Am)O2±δ compounds, at least by analogy with other better studied mixed actinide systems .If one excludes the existence of such miscibility gaps, the similar average melting/solidification temperatures measured in the compositions U0.90Am0.10O2±δ and U0.85Am0.15O2±δ can be attributed to a simple statistical effect linked to the small size of the data sets.Then the current results would agree well with the ideal solution trend proposed by Kato et al. for Am/(U + Am) contents up to 7 mol.%, plotted as a dotted line in figure 2, even though this model is a somewhat rough approximation.Besides the usual ideal solution approximation, it actually assumes equal heat capacities for both UO2 and AmO2, and it relies on a fictitious melting point of AmO2 extrapolated to T = 2773 K by Kato et al.
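The ideal-solution comparison used here can also be written out explicitly: for an ideal (UO2 + AmO2) pseudo-binary, equating the chemical potentials of each end-member in the solid and liquid gives ln(x_i_liq/x_i_sol) = −(ΔH_fus,i/R)(1/T − 1/Tm,i), and the solidus is the temperature at which the implied liquid fractions sum to one. The sketch below solves that condition numerically; only the end-member melting points (3120 K for UO2 and the extrapolated 2773 K for AmO2) come from the text, while the equal enthalpies of fusion are an illustrative assumption.

import math
from scipy.optimize import brentq

R = 8.314
Tm = {"UO2": 3120.0, "AmO2": 2773.0}        # K, end-member melting points
dH = {"UO2": 70_000.0, "AmO2": 70_000.0}    # J/mol, assumed (equal) enthalpies of fusion

def k_partition(T, comp):
    """Ideal-solution ratio x_liq/x_sol for one end-member at temperature T."""
    return math.exp(-dH[comp] / R * (1.0 / T - 1.0 / Tm[comp]))

def solidus_T(x_am_solid):
    """Solidus temperature for a solid solution with the given AmO2 cation fraction."""
    def residual(T):
        x_liq_total = (x_am_solid * k_partition(T, "AmO2")
                       + (1.0 - x_am_solid) * k_partition(T, "UO2"))
        return x_liq_total - 1.0
    return brentq(residual, Tm["AmO2"] + 1e-3, Tm["UO2"] - 1e-3)

for x in (0.10, 0.15, 0.20):
    print(f"x(AmO2) = {x:.2f} -> ideal-solution solidus ~ {solidus_T(x):.0f} K")

With these assumptions the depression of the solidus stays within roughly 40–90 K of pure UO2 over x = 0.10–0.20, i.e. the same limited decrease discussed above; the curve shifts somewhat with the assumed enthalpies of fusion.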
Nonetheless, it can be deduced from the present results that a similar ideal solution behaviour is most likely followed by the (U,Am)O2 mixture at temperatures close to melting to an even larger extent than assumed by Kato et al.Apparently, this ideal solution behaviour seems even to be compatible with a complete reduction of Am to Am+III, at least for the current low Am-doping levels.Such a behaviour is approximately followed at lower temperatures by similar solid state solutions between UO2 and trivalent cation oxides such as Bi2O3 and La2O3 .In reality, the fact that high-temperature ideal solution behaviour is observed brings along no certainty about the system behaviour at lower temperatures, where the cation valence states were measured by XAS analysis.In fact, thermal excitation can cure crystal defects and asymmetries that would result in strongly non-ideal behaviour at temperatures closer to the ambient one, as already observed, for example, in other comparable oxide systems .Only by measuring the cation valence temperature dependence and by consistently modelling the corresponding phase equilibria it would be possible to exhaustively describe the system from room to melting temperature.Because of the very likely changes in the oxygen content linked with the valence change and, at higher temperatures, with non-congruent vapourization, such phase boundary modelling should be extended to the whole ternary system, and cannot be limited to the pseudo-binary plane.Another point worth discussion is the effect of self-irradiation on the phase stability in the system.First, even though the samples were not free of self-irradiation-induced damage before the laser heating experiments, radiation damage is cured at temperatures exceeding 2000 K , and thus even more efficiently in the current experimental conditions.However, because a delay of a few weeks to a couple of months was inevitable between the laser heating/cooling treatments and the post-melting material characterisation, the samples underwent some self-irradiation damage, notably those with the highest Am content.This could be viewed as a partial explanation for the complex phase splitting and higher O/M ratio observed in the U0.8Am0.2O2+δ sample, although several studies have shown that α-self-irradiation-induced structural effects on (U,Am)O2 compounds are limited to lattice swelling reaching up to 0.8 vol.% and a small structural disorder increase .In conclusion, the main message of this work is that the melting temperature of Am-doped UO2 decreases to a limited extent and follows an approximately ideal solution behaviour up to an Am/(U + Am) content of 20 mol.%.This holds true provided the experimental conditions under which melting is obtained are such as to maintain, at high temperature, an approximate oxygen stoichiometry without phase separation.A more comprehensive knowledge of the thermodynamic system is required to prove to which extent these conditions were met under the inert atmosphere used in this study.The melting behaviour of uranium dioxide with Am/(U + Am) ratios up to 20 mol.% has been experimentally studied.Although a vast amount of further research is still needed for an exhaustive definition of phase boundaries in the system, sound conclusions can already be drawn from the present, first investigation.The average solid/liquid transition temperatures were determined for Am/(U + Am) ratios of 10, 15 and 20 mol.%.In this case, the melting/solidification point decreases from T = 3120 K in pure UO2 to T = 3051 K in U0.8Am0.2O2±δ, following a trend similar to that of an ideal
solution assuming, in line with existing literature, an extrapolated melting point of AmO2 around T = 2773 K.This would also mean that if the compositions studied here are laser-heated under an inert atmosphere, their oxygen-to-metal ratios remain close to the initial 2.00 value.This high-temperature ideal solution behaviour is evidently compatible with the coexistence – proven by pre- and post-melting XANES analysis – of U+IV, U+V and Am+III.However, no clear indication about the validity of such ideal solution behaviour at lower temperatures can be inferred based on the current results. | (Uranium + americium) mixed oxides are considered as potential targets for americium transmutation in fast neutron reactors. Their thermophysical properties and notably their melting behaviour have not been assessed properly although required in order to evaluate the safety of these compounds under irradiation. In this study, we measured, via laser heating, the melting points under inert atmosphere (Ar) of U1-xAmxO2±δ samples with x = 0.10, 0.15, 0.20. The obtained melting/solidification temperatures, measured here, indicate that under the current experimental conditions in the investigated AmO2 content range, the solidus line of the (UO2 + AmO2) system follows with very good agreement the ideal solution behaviour. Accordingly, the observed liquidus formation temperature decreases from (3130 ± 20) K for pure UO2 to (3051 ± 28) K for U0.8Am0.2O2±δ. The melted and quenched materials have been characterised by combining X-ray diffraction and X-ray absorption spectroscopy. |
31,408 | A pH/ROS dual-responsive and targeting nanotherapy for vascular inflammatory diseases | Cardiovascular diseases remain the leading cause of morbidity and mortality worldwide .It has been estimated that CVDs may result in about 17.9 million deaths each year, accounting for ~31% of all global deaths .Vascular inflammation is closely related to the pathogenesis of a diverse group of CVDs, such as atherosclerosis , myocardial infarction , restenosis , intracranial and aortic aneurysms , stroke , and peripheral artery disease .By regulating different molecular and cellular processes involved in the inflammatory response, a large number of therapeutics have been investigated to prevent and treat CVDs .Despite major achievements in preclinical studies, desirable efficacy of most examined anti-inflammatory agents has not been fully demonstrated in clinical practice.To a large degree, this may be associated with inefficient delivery of therapeutic molecules to the site of vascular inflammation, resulting from their nonspecific distribution and rapid elimination from the circulation.Even for locally absorbed drugs in the inflamed blood vessels, their retention time is very short due to uncontrolled diffusion.Recently, nanoparticle-based targeting has been considered as a promising strategy for site-specific delivery of different imaging and therapeutic agents to detect or treat vascular inflammation .In particular, a broad spectrum of NPs have been engineered as targeting carriers for the treatment of atherosclerosis , myocardial infarction , heart failure , ischemia-reperfusion injury , critical limb ischemia , restenosis , abdominal aortic aneurysm , and ischemic stroke .In these cases, polymeric and lipid NPs, liposomes, recombinant high-density lipoproteins, cell-derived vesicles, inorganic and metal NPs, and hybrid NPs have been used for delivery of therapeutic agents varying from small-molecule drugs, peptides/proteins, to nucleic acids, for the management of CVDs associated with vascular inflammation.In addition to passive targeting via the damaged endothelial cell layer or the enhanced permeability effect, vascular targeting efficiency of NPs can be further increased by modulating their physical properties , decorating with molecular moieties , or functionalizing with specific cell membranes .On the other hand, the compositions of NPs can be designed and tailored to release cargo molecules in response to abnormally changed biochemical cues at the inflammatory sites of blood vessels .However, translation of these vascular targeting nanotherapies remains challenging.Whereas vascular accumulation of NPs can be enhanced by directly manipulating epitope-specific affinity, targeting efficiency post decoration with molecular moieties alone is still very limited.For nanotherapies derived from the cell membrane-based biomimetic strategy, the relatively complicated formulation procedures and undefined components may hinder their following large-scale production and clinical studies.Also, uncontrolled release at the vascular sites of interest need to be further improved.Increasing evidence has demonstrated that NPs responsive to dual or multiple stimuli can more precisely control their cargo release profiles at the site of action, thereby affording considerably potentiated efficacies.In this aspect, NPs sensitive to multiple biochemical signals or biochemical/physical signals have been extensively examined for targeted treatment of diverse diseases, such as cancers, diabetes, and inflammatory 
diseases .Nevertheless, bench-to-bedside translation of these responsive nanotherapies is not straightforward, largely resulting from more complex pharmaceutical development, especially in regard to poor reproducibility and quality control .Moreover, in vivo safety is another important issue that should be fully addressed for most responsive nanocarriers currently developed for vascular targeting, particularly for the treatment of chronic diseases, since the majority of intravenously injected NPs will be cleared by the mononuclear phagocyte system and accumulate in organs such as the liver and spleen .Consequently, there is a crucial need to develop more effective and safe NPs with desirable targeting capacity and well-controlled drug release performance in response to biochemical signals relevant to vascular inflammation.By facile chemical functionalization of cyclodextrins, our group has recently developed a series of bioresponsive materials with highly tailorable hydrolysis behaviors under acidic and oxidative conditions .Both in vitro cell culture experiments and in vivo evaluations in different animal models demonstrated that NPs based on these materials display good safety profile .Herein we hypothesize that pH and reactive oxygen species dual-responsive NPs engineered by integrating pH- and ROS-responsive cyclodextrin materials can serve as an effective and safe nanoplatform for therapeutic delivery to sites of vascular inflammation, in view of the presence of acidosis and oxidative stress at inflammatory sites.Furthermore, vascular targeting and in vivo efficacies of pH/ROS dual-responsive nanotherapies can be additionally enhanced by combination with a molecular targeting strategy.In an animal model of vascular inflammation in rats subjected to balloon injury in carotid arteries, therapeutic advantages of the dual-responsive nanotherapy were first affirmed, by comparison with non-responsive and pH- or ROS-responsive nanotherapies, using rapamycin as a model drug.Then we demonstrated in vivo targeting and therapeutic effects of the dual-responsive, active targeting nanotherapy.According to our previously established method, a pH-responsive material was synthesized by acetalation of β-cyclodextrin .Measurement by FT-IR and 1H NMR spectroscopy revealed successful synthesis of ACD.Calculation based on the 1H NMR spectrum of ACD showed that the molar ratio of cyclic acetal to linear acetal was approximately 0.62, with an acetalation degree of ~90%.In addition, an ROS-responsive material was obtained by chemical functionalization of β-CD with an oxidation-labile compound of 4-phenylboronic acid pinacol ester , which was also confirmed by FT-IR and 1H NMR spectra.According to the 1H NMR spectrum, there were approximately 7 PBAP units in each β-CD molecule.Then different NPs were prepared by a modified nanoprecipitation/self-assembly method, in which lecithin and DSPE-PEG were used to stabilize NPs and afford additional functional capacity.Of note, pH/ROS dual-responsive NPs were fabricated by the combination of ACD and OCD at different weight ratios.For NPs based on ACD/OCD at the weight ratios of 100:0, 80:20, 60:40, 50:50, 40:60, 20:80, and 0:100, they were abbreviated as ACD NP, AOCD8020 NP, AOCD6040 NP, AOCD5050 NP, AOCD4060 NP, AOCD2080 NP, and OCD NP, respectively.Regardless of different compositions, spherical NPs were successfully obtained, as observed by transmission electron microscopy.All the prepared NPs exhibited negative ζ-potential, with the mean hydrodynamic diameter 
varied from 121 ± 2 to 186 ± 4 nm.Of note, relatively narrow size distribution was found for different NPs.Consequently, NPs derived from ACD and OCD could be easily prepared.These results also indicated that ACD and OCD exhibited good blend compatibility.Preliminary experiments were conducted to examine in vivo safety profiles of the dual-responsive nanotherapy.We first investigated biocompatibility of the blank dual-responsive nanocarrier.After various concentrations of AOCD NP were incubated with erythrocytes for 2 h, both direct observation and quantification revealed no hemolysis even at 2 mg/mL of AOCD NP.Then in vivo acute toxicity tests were carried out in rats after i. v. injection of AOCD NP at 250, 500, and 1000 mg/kg, respectively.Rats in all AOCD NP groups showed normal daily diet and water intake, without any abnormal behaviors.At day 14 after different treatments, rats were euthanized for further analyses.All AOCD NP-treated rats exhibited organ index values comparable to those of the saline group.In addition, no significant changes were detected in typical hematological parameters for the AOCD NP groups.Clinical biochemical tests revealed no abnormal variations in representative biomarkers relevant to liver and kidney functions in rats subjected to treatment with AOCD NP at various doses.Further, inspection of H&E-stained histopathological sections of major organs indicated that there were no discernible pathological changes for AOCD NP-treated rats.Therefore, these data strongly suggested that the pH/ROS dual-responsive NPs based on β-CD-derived materials displayed good safety performance after i. v. injection, even at the examined high dose of 1000 mg/kg.Subsequently, in vivo safety studies were performed for the dual-responsive nanotherapy RAP/AOCD NP.In this case, RAP/AOCD NP was i. v. administered at 1 or 3 mg/kg of RAP twice a week.AOCD NP at 11.5 mg/kg was administered for comparison.During treatment, all animals showed gradually increased body weight.After 8 weeks of treatment, rats were euthanized and major organs were excised for further analysis.No significant differences in the organ index values were found between the saline group and the AOCD NP or RAP/AOCD NP group.The levels of hematological parameters of AOCD NP or RAP/AOCD NP treated rats were comparable to those of the saline group.Also, there were no significant differences between varied groups, with respect to the biomarkers related to hepatic and kidney functions.Further examination on H&E-stained sections of major organs revealed negligible injuries.Moreover, in view of the immunosuppressive activity of RAP, the possible side effects of RAP/AOCD NP on lymphocytes were also analyzed by immunohistochemistry.We found no significant changes in the number of T and B cells in the spleen of rats after long-term treatment with the dual-responsive nanotherapy.Moreover, similar treatment was conducted in mice in a separate study.Flow cytometry quantification of the number of T and B cells in the spleen revealed no significant changes after mice were administered with RAP/AOCD NP at either 1 or 3 mg/kg of RAP for 3 months.These results implicated that treatment with RAP/AOCD NP at relatively low doses had no remarkable negative effects on the adaptive immune system.Collectively, these preliminary findings demonstrated that both AOCD NP and RAP/AOCD NP showed good safety profile for administration by the i. v. 
route.To demonstrate pH/ROS dual-responsive performance of NPs based on ACD and OCD, in vitro hydrolysis tests were conducted in PBS with 1 mM H2O2 and at various pH values.For ACD NP, notably rapid hydrolysis was detected at either pH 5 or pH 6, while the presence of H2O2 showed negligible effects.Independent of pH, H2O2 dramatically accelerated the hydrolysis of OCD NP.By contrast, OCD NP showed comparable hydrolysis profiles at pH 5, 6, or 7.4.These results are consistent with our previous studies on NPs based on either ACD or OCD .As for NPs derived from ACD and OCD, their hydrolysis behaviors were dependent on both pH and H2O2, when they were separately tested in different buffers.These data revealed pH/ROS dual-responsive capacity for ACD/OCD-based NPs.Since more desirable dual-responsive character was observed for AOCD8020 NP and AOCD2080 NP, their release profiles were further compared with ACD NP and OCD NP in the same cohort of tests under different pH/ROS conditions.Likewise, dual-responsive capability was only observed for AOCD8020 NP and AOCD2080 NP at 1 mM H2O2 and with pH 5, 6, or 7.4.Moreover, in the presence of H2O2, AOCD2080 NP displayed relatively excellent hydrolysis behaviors at varied pH values, therefore it was employed for further experiments.Unless otherwise stated, AOCD2080 NP is abbreviated as AOCD NP in the following studies.After incubation in PBS, DMEM, or 10% serum for 6 or 12 h, no significant changes were observed for the mean hydrodynamic diameter, suggesting that AOCD NP possessed good stability in different media.Together, a pH/ROS dual-responsive nanoplatform was successfully established by simply using composites based on a pH-responsive material ACD and a ROS-labile material OCD.Of note, the exact sensitivity of resulting nanovehicles can be easily modulated by changing the weight ratios of ACD/OCD, thereby affording robust scale-up capacity, which is an additional advantage from the viewpoint of translation.Through a similar method as aforementioned, RAP nanotherapies were prepared based on different responsive NPs.Also, a non-responsive nanotherapy was fabricated for comparison studies, by using poly(lactide-co-glycolide) (PLGA) as a carrier material.Similar to the corresponding blank NPs, all these four nanotherapies exhibited negative ζ-potential, with the mean hydrodynamic diameter varying from 121 ± 2 to 179 ± 6 nm.For RAP/PLGA NP, RAP/ACD NP, RAP/OCD NP, and RAP/AOCD NP, the RAP loading content was 4.5 ± 0.4%, 9.4 ± 0.2%, 8.2 ± 0.1%, and 8.7 ± 0.4%, respectively.At the same RAP feeding ratio, the drug loading content of nanotherapies based on β-CD-derived materials was significantly higher than that of RAP/PLGA NP.This should be due to the presence of host-guest interaction between β-CD materials and hydrophobic drugs, which can facilitate drug entrapment in NPs .Then in vitro tests were performed to examine responsive release behaviors of different nanotherapies.For RAP/AOCD NP, RAP release was considerably accelerated in PBS at either pH 5 or pH 6, as compared to that at pH 7.4.Regardless of varied pH values, the presence of 1 mM H2O2 dramatically enhanced the drug release rate.Accordingly, RAP/AOCD NP showed desirable pH/ROS dual-responsive drug release capacity, which is well consistent with the dual-responsive hydrolysis profiles of the corresponding nanocarrier AOCD2080 NP.As for RAP/ACD NP, more significant changes in release profiles were observed at different pH values, with considerably rapid release in mildly acidic PBS, while the existence of H2O2 only displayed slight effects on the RAP release rate.
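The loading contents quoted above come from a routine mass balance on each batch. A minimal sketch of that calculation is given below; the feeding mass, recovered nanoparticle mass and measured drug mass are hypothetical numbers used only to show how loading content and encapsulation efficiency are obtained, not batch records from this study.

def loading_metrics(m_drug_in_np_mg, m_np_total_mg, m_drug_fed_mg):
    """Drug loading content (% of nanoparticle mass) and encapsulation efficiency (% of fed drug)."""
    loading_content = 100.0 * m_drug_in_np_mg / m_np_total_mg
    encapsulation_efficiency = 100.0 * m_drug_in_np_mg / m_drug_fed_mg
    return loading_content, encapsulation_efficiency

# Hypothetical batch: 10 mg RAP fed, 100 mg of nanoparticles recovered containing 8.7 mg RAP
lc, ee = loading_metrics(m_drug_in_np_mg=8.7, m_np_total_mg=100.0, m_drug_fed_mg=10.0)
print(f"loading content = {lc:.1f}%, encapsulation efficiency = {ee:.0f}%")

In practice the entrapped drug mass is typically measured by dissolving an aliquot of the purified nanoparticles and quantifying RAP chromatographically or spectrophotometrically, which is why the host-guest interaction noted above shows up directly as a higher loading content at the same feeding ratio.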
In the case of RAP/OCD NP, its drug release behaviors were mainly dominated by H2O2 rather than pH.Also, the single-responsive drug release profiles of RAP/ACD NP and RAP/OCD NP agree with the hydrolysis performance of corresponding nanovehicles.Consequently, a pH/ROS dual-responsive RAP nanotherapy can be facilely and successfully developed simply by using nanocomposites of pH-responsive and ROS-responsive materials.Endocytosis of NPs is the first step toward intracellular delivery of the loaded drug molecules, since RAP suppresses cell proliferation mainly through its cytosolic receptor, i.e., the FK506 binding protein, thereby inhibiting the mammalian target of rapamycin .We examined cellular uptake profiles in MOVAS rat vascular smooth muscle cells, using Cy5-labeled dual-responsive NPs.After 1 h of incubation, fluorescence signals in VSMCs notably increased with increase in the dose of Cy5/AOCD NP, indicating dose-dependent internalization.Consistent with the observation by confocal microscopy, quantification by fluorescence-activated cell sorting via flow cytometry further confirmed effective cellular uptake of Cy5/AOCD NP by VSMCs, in a dose-response pattern.At the same dose of Cy5/AOCD NP, we found increased distribution of Cy5 fluorescence in VSMCs with prolonged incubation.Moreover, staining of late endosomes and lysosomes by LysoTracker revealed endolysosomal trafficking of internalized Cy5/AOCD NP in VSMCs, because co-localization of green and red fluorescence signals was clearly observed.Likewise, FACS analysis indicated enhanced endocytosis of Cy5/AOCD NP with increased incubation time.By contrast, significant fluorescence appeared in VSMCs at 4 h after incubation with single-responsive NPs, i.e.
Cy5/ACD NP and Cy5/OCD NP, indicating relatively slow release of loaded fluorescent molecules.For both Cy5/ACD NP and Cy5/OCD NP, further quantification by FACS revealed significantly low fluorescence signals in VSMCs at all time points examined, compared to those of Cy5/AOCD NP at the same dose.Of note, Cy5/ACD NP and Cy5/OCD NP showed comparable fluorescence intensities after 8 h of incubation.These results demonstrated that rat VSMCs could efficiently internalize the pH/ROS dual-responsive NPs based on ACD and OCD.In subcellular organelles with acidic and oxidative microenvironment, the dual-responsive NPs are able to more rapidly release the loaded drug molecules, compared to either pH- or ROS-responsive control.We first evaluated cytotoxicity of various blank NPs in VSMCs.For all examined NPs, including PLGA NP, ACD NP, OCD NP, and AOCD NP, low cytotoxicity was detected at different doses.Even at the highest dose of 1000 μg/mL, relatively high cell viability was still observed.Consequently, different responsive NPs showed low cytotoxicity, which was comparable to PLGA NP.Further, cytotoxicity of RAP nanotherapies was examined.For all RAP-loaded NPs, notable decrease in cell viability was only found when the dose of RAP was higher than 1 μM.This implied that RAP nanotherapies had low cytotoxicity at relatively low doses.In response to vascular injury after percutaneous intervention, migration of VSMCs from the media to the intima plays an important role in the development of restenosis , in which platelet-derived growth factor-BB, a potent chemoattractant and mitogen to VSMCs, is a key stimulator .Transwell migration assay was therefore performed to examine anti-migration capability of different responsive nanotherapies.Compared to VSMCs treated with FBS-free medium alone, treatment with PDGF-BB at 20 ng/mL for 24 h induced notable migration of VSMCs, as illustrated by crystal violet-stained microscopy images.By contrast, pretreatment with RAP and RAP nanotherapies for 24 h dramatically inhibited cell migration.Of note, all nanotherapies showed notably stronger anti-migration effects, compared to free RAP.Moreover, responsive nanotherapies suppressed VSMCs migration to a much more significant degree than the non-responsive nanotherapy RAP/PLGA NP.Importantly, the most potent activity was achieved by the dual-responsive RAP/AOCD NP.It should be noted that at the same dose corresponding to nanotherapies, all blank NPs displayed no significant anti-migration activity.Concomitant with migration of VSMCs, their abnormal proliferation is closely related to the neointimal hyperplasia after vascular injury .Agreeing with the previous findings , serum-starved VSMCs showed significant proliferation after stimulation with PDGF-BB.Incubation with 1 μM free RAP dramatically decreased PDGF-BB-induced proliferation.Treatment with various nanotherapies at the same dose of RAP much more effectively inhibited VSMCs proliferation, particularly in the case of responsive nanotherapies.Of note, the best anti-proliferation effect was achieved by the dual-responsive nanotherapy RAP/AOCD NP that exhibited significant difference compared to the single-responsive counterparts.In view of the crucial role of cell cycle in the proliferation of VSMCs , we evaluated the effects of different nanotherapies on cell cycle progression by flow cytometric analysis.After stimulation with PDGF-BB for 24 h, the percentage of cells in the synthetic phase was significantly increased.Treatment with free RAP and RAP nanotherapies 
effectively inhibited the induced G1/S transition, with the most notable effect detected for RAP/AOCD NP-treated cells.These data indicated that RAP nanotherapies attenuated VSMC proliferation mainly by inhibiting the G1/S transition, which is consistent with the previous studies on RAP.To further delineate the mechanism responsible for cell cycle arrest at the G1 phase by RAP nanotherapies, we examined the effects of different nanotherapies on typical cell cycle regulators, including cyclin D1 and p27Kip1.As well documented, cyclin D1, a nuclear protein, can promote cell cycle progression in the G1 phase, by regulating cyclin-dependent kinases via forming complexes that are required for the G1/S transition.On the other hand, as a CDK inhibitor, p27Kip1 can attenuate the cyclin-CDK activity, leading to G1 arrest.After VSMCs were stimulated with PDGF-BB, the expression of cyclin D1 was notably up-regulated, while the p27Kip1 level decreased, as indicated by Western blot analysis.These changes, however, were effectively reversed by the treatment with RAP nanotherapies.Notably, the dual-responsive nanotherapy exhibited relatively high activity compared to either the pH- or ROS-responsive nanotherapy.These results substantiated that the dual-responsive RAP nanotherapy significantly arrested the G1 phase by up-regulating p27Kip1 and down-regulating cyclin D1 in VSMCs.A rat model of vascular inflammatory disease was established by balloon-induced carotid artery injury, which was confirmed by staining of a typical artery with Evans blue.Whereas previous studies have shown that tissues with acute and chronic inflammation generally exhibit metabolic acidosis due to accumulation of lactate, it remains unclear whether there is a slightly acidic microenvironment in arteries post balloon injury.We evaluated pH changes in the carotid arteries by staining their cryosections before and after injury using a pH-sensitive fluorescent probe, BCECF-AM.Whereas the normal artery displayed remarkable fluorescence, the arterial tissues isolated on days 1, 7, and 14 after injury showed considerably weaker fluorescent signals, indicating the presence of a relatively acidic microenvironment in injured arteries.Moreover, to assess the degree of oxidative stress, cryosections of the rat carotid arteries were stained with dihydroethidium, a fluorescent probe of superoxide anion.Compared to the normal artery, the carotid artery collected at day 1 after balloon injury showed slightly higher fluorescence intensity.On days 7 and 14 post injury, notably strong fluorescence was observed.In these cases, the neointima showed relatively higher fluorescence than the adventitia for the injured arteries.In addition, arterial cryosections were stained with a fluorescent probe, DCFDA, that is sensitive to H2O2.In this case, stronger fluorescence was also observed in arteries at day 14 after injury.These results revealed the existence of both acidosis and oxidative stress in the injured sites of arteries, largely resulting from the progression of inflammation.Then we examined the targeting capability of the dual-responsive NPs at the injured carotid arteries.Immediately after balloon injury, different Cy7.5-labeled NPs were administered by intravenous injection in rats.At 8 h after treatment, the carotid arteries were excised for ex vivo imaging.For the injured left carotid arteries, fluorescent signals were clearly observed in the groups treated with Cy7.5/PLGA NP, Cy7.5/ACD NP, Cy7.5/OCD NP, or Cy7.5/AOCD NP, while no fluorescence appeared at the normal
right arteries.Accordingly, regardless of their compositions, the examined NPs could target the carotid artery with endothelial injury.Further quantitative analysis revealed no significant differences between different groups treated with either non-responsive or responsive NPs.This is consistent with previous findings that the targeting capability of NPs with comparable size is mainly dominated by their surface chemistry.Coincident with the imaging result, selective distribution of RAP molecules in the injured arteries was also detected after i. v. administration of different RAP nanotherapies.In this case, nanotherapies with varied responsive properties showed comparable RAP levels, while an extremely low concentration of RAP was detected in the normal artery.Subsequently, in vivo efficacy was interrogated in rats.After angioplasty injury, different RAP nanotherapies were i. v. administered at 1 mg/kg of RAP.At day 14 after different treatments, the injured carotid arteries were collected for further analyses.Observation of the H&E-stained histological sections revealed notable neointimal hyperplasia in the model group, compared to that of the normal group.Treatment with RAP nanotherapies showed varied degrees of benefits, with the best efficacy achieved by RAP/AOCD NP.Quantitative analysis of H&E sections indicated that the lumen area of carotid arteries significantly increased, while the intimal area was remarkably decreased after intervention with RAP nanotherapies.In both cases, the most desirable outcome was obtained by the dual-responsive nanotherapy RAP/AOCD NP.For different groups, there were no significant changes in the medial area.Further, we compared the magnitude of the proliferation index, which is a measure of neointimal hyperplasia defined as the ratio of the intimal area to the medial area.It was found that the proliferation index was significantly decreased in all nanotherapy groups.In particular, the dual-responsive nanotherapy exhibited the best effect on inhibiting the proliferation index.On the other hand, whereas i. v.
treatment with free RAP at the same dose could also inhibit neointimal formation, its activity was significantly lower than that of the different nanotherapies, as indicated by changes in the proliferation index.Immunohistochemistry analysis was conducted to further evaluate the anti-restenosis effects of the dual-responsive nanotherapy.Observation of α-smooth muscle actin antibody-stained arterial sections revealed the presence of α-SMA-positive cells in the neointimal area of the model group.This affirmed the fact that neointimal hyperplasia is largely contributed by the migration and proliferation of VSMCs, since α-SMA is a typical biomarker of VSMCs.After treatment with different RAP nanotherapies, the total number of α-SMA-positive cells was considerably decreased, although they still existed in the intima.In addition, analysis of proliferating cell nuclear antigen showed extensive distribution of proliferating cells in the neointima of saline-treated animals, which were notably reduced in all nanotherapy groups.In both cases, much better effects were found in the RAP/AOCD NP group.Previous studies demonstrated that the expression of matrix metalloproteinase-2 is significantly increased in the developing neointima of balloon-injured rat carotid arteries.Consistently, a high expression of MMP-2 was observed in arteries of the model group, while it was effectively decreased by intervention with RAP nanotherapies, particularly the dual-responsive nanotherapy.Moreover, the degree of oxidative stress was analyzed by staining for 8-hydroxy-2-deoxyguanosine, which has been considered a critical biomarker of oxidative DNA damage.In this aspect, strong 8-OHdG staining was observed in injured carotid arteries compared to the normal group.8-OHdG was mainly present in the neointima, and to a lesser extent, in the media.By contrast, dramatically lower expression of 8-OHdG was detected in arteries from rats administered with the responsive nanotherapies.In line with these findings, quantitative analysis revealed significantly reduced levels of typical oxidative mediators, including H2O2, malondialdehyde, and myeloperoxidase in arterial tissues, after treatment with the dual-responsive nanotherapy.On the other hand, whereas the activity of superoxide dismutase, an antioxidant enzyme, was significantly reduced in injured carotid arteries, it was efficiently rescued by treatment with RAP/AOCD NP.In addition, the expressions of the typical pro-inflammatory cytokines tumor necrosis factor-α and interleukin-1β in the injured tissues were significantly reduced by treatment with RAP/AOCD NP, as compared to the model group treated with saline.These results demonstrated that local oxidative stress and inflammatory responses in the carotid arteries were remarkably attenuated by treatment with the dual-responsive nanotherapy.On the basis of the above promising results, the pH/ROS dual-responsive nanotherapy was additionally functionalized to improve its targeting capability and efficacy.Previously, the peptide sequence KLWVLPKGGGC was demonstrated to be effective for targeting vascular sites with inflammation and/or injury, because of its high affinity to type IV collagen, which is a primary component of the subendothelial basement membrane.Herein this peptide was conjugated with DSPE-PEG via thiol-maleimide click chemistry.Then Col-IV targeting, pH/ROS dual-responsive NPs were prepared by the similar nanoprecipitation/self-assembly procedures as aforementioned.Observation by TEM and scanning electron microscopy indicated that TAOCD
NP exhibited spherical shape, with relatively narrow size distribution.The average hydrodynamic diameter was 131 nm, while ζ-potential was −27.2 ± 0.3 mV.Also, RAP can be packaged into TAOCD NP, giving rise to a targeting, dual-responsive nanotherapy RAP/TAOCD NP, with spherical shape and narrow size distribution as well as efficient loading of RAP.Using Cy5-labeled TAOCD NP, in vitro cellular uptake profiles were examined in VSMCs.Both observation by confocal microscopy and quantification by flow cytometry revealed dose-dependent internalization of Cy5/TAOCD NP in VSMCs.In addition, endocytosis of Cy5/TAOCD NP showed a time-response pattern.Notably, the endolysosomal pathway was mainly involved in the intracellular transport of Cy5/TAOCD NP, as implicated by the fluorescent co-localization of LysoTracker and Cy5.These results are similar to those observed for the non-targeted NPs.Consequently, incorporation of the targeting peptide did not significantly affect the cellular uptake behaviors of AOCD NP.Then the targeting capability of TAOCD NP was evaluated using Cy7.5-labeled NPs.We first examined the affinity of Cy7.5/TAOCD NP to Col-IV.After microplates were coated with Col-IV and incubated with aqueous solution containing Cy7.5/TAOCD NP, fluorescence imaging showed significantly higher fluorescent signals compared to those incubated in the non-coated plates.Moreover, pretreatment of the Col-IV-coated plates with free targeting peptide significantly reduced fluorescence intensities of Cy7.5/TAOCD NP.This result suggested that the targeting peptide introduced on TAOCD NP maintained its specific binding capacity to Col-IV.Subsequently, in vivo targeting performance of TAOCD NP was examined in rats.Cy7.5/TAOCD NP was i. v. administered immediately after carotid artery balloon injury.For comparison, the non-targeting Cy7.5/AOCD NP was used as a control.At 8 h after administration, ex vivo imaging revealed the fluorescence accumulation at the injured left carotid arteries isolated from rats treated with either Cy7.5/AOCD NP or Cy7.5/TAOCD NP.Of note, the Cy7.5/TAOCD NP group showed significantly higher fluorescence intensity at the injured site than that of the Cy7.5/AOCD NP group.Further analysis was performed after i. v. injection of Cy5-labeled AOCD NP or TAOCD NP.Microscopic observation of arterial sections also indicated relatively strong fluorescence in the Cy5/TAOCD NP group.In a separate study, immunofluorescence analysis was conducted by staining arterial cryosections with fluorescent-labeled Col-IV antibody.For rats treated with Cy5/TAOCD NP, co-localization of green fluorescence due to Col-IV and red Cy5 fluorescence could be evidently observed in the injured arteries, while the normal arteries exhibited no fluorescent signals due to Cy5.Consistently, further quantitative analyses revealed significantly higher RAP levels in the injured carotid arteries of rats treated with the targeting RAP nanotherapy, as compared to those of the non-targeting control.Collectively, these results demonstrated that passive accumulation of the pH/ROS dual-responsive nanovehicle to the injured carotid arteries can be notably enhanced by decoration with a Col-IV targeting peptide.Also, in vivo efficacy of the pH/ROS dual-responsive, targeting RAP nanotherapy was compared with that of the non-targeting nanotherapy in the same cohort of studies.RAP/AOCD NP and RAP/TAOCD NP were separately administered in rats with balloon-injured arteries by i. v. administration.Notably, only three times of i. v. 
injection were performed to give a clear comparison.After two weeks of treatment, the carotid arteries were excised for histopathological analysis.Consistent with the aforementioned results, examination on H&E-stained sections indicated that treatment with either RAP/AOCD NP or RAP/TAOCD NP significantly inhibited the neointimal hyperplasia.Compared with RAP/AOCD NP-treated rats, the RAP/TAOCD NP group showed significant differences with respect to increasing the lumen area as well as reducing the intimal area and proliferation index.Likewise, immunohistochemistry analysis revealed more effectively inhibited VSMCs proliferation in the intimal area after treatment with RAP/TAOCD NP, in comparison to RAP/AOCD NP.Moreover, the dual-responsive and targeting nanotherapy reduced the expression of MMP-2 and 8-OHdG to a much more evident degree than the control non-targeting nanotherapy.These results demonstrated that in vivo anti-restenosis efficacy of the pH/ROS dual-responsive nanotherapy can be further potentiated by decoration with targeting moieties.In summary, we have developed a facile and effective method to engineer pH/ROS dual-responsive nanocarriers, by combination of pH- and ROS-responsive materials derived from β-CD.By optimizing the weight ratio of the pH-sensitive material ACD and the ROS-responsive material OCD, NPs with desirable pH/ROS-responsive capability could be easily prepared.Thus obtained dual-responsive nanocarriers AOCD NP were able to efficiently package RAP, a candidate drug, giving rise to a dual-responsive nanotherapy RAP/AOCD NP.In response to low pH or a high level of H2O2, the loaded RAP molecules could be triggerably released from RAP/AOCD NP.By enhancing intracellular delivery via endocytosis and sensitive release in subcellular organelles, RAP/AOCD NP showed more potent anti-migration and anti-proliferative activity in VSMCs, compared to free drug as well as the non-responsive and related single-responsive nanotherapies.After i. v. 
injection, AOCD NP could passively accumulate in the balloon-injured arteries of rats.By this passive targeting and triggerable release of RAP at the injury site of arteries, RAP/AOCD NP exhibited more desirable anti-restenosis effects in rats, compared to other control nanotherapies.The targeting capacity of AOCD NP was further increased by decoration with a Col-IV targeting peptide, resulting in a pH/ROS dual-responsive, targeting nanoplatform TAOCD NP.Correspondingly, RAP-loaded TAOCD NP showed more effective in vivo efficacy than RAP/AOCD NP, with respect to inhibition of neointimal formation in rats.In addition, both in vitro and in vivo tests suggested that AOCD NP and RAP/AOCD NP displayed good safety profiles.Consequently, the developed pH/ROS dual-responsive nanocarriers and nanotherapies can be further developed for the management of restenosis and other cardiovascular diseases associated with vascular inflammation.β-Cyclodextrin and lecithin were purchased from Tokyo Chemical Industry Co., Ltd.2-Methoxypropene (MP) was supplied by Shanghai Beihe Chemicals Co., Ltd.Rapamycin (RAP) was obtained from Beijing Huamaike Biotechnology Co., Ltd.Pyridinium p-toluene sulfonate (PTS), 4-phenylboronic acid pinacol ester (PBAP), 4-dimethylaminopyridine (DMAP), 1,1′-carbonyldiimidazole (CDI), and Evans blue were purchased from Sigma-Aldrich.Poly(lactide-co-glycolide) (PLGA) with an intrinsic viscosity of 0.50–0.65 was purchased from Polysciences, Inc.1,2-Distearoyl-sn-glycero-3-phosphoethanolamine-N-[methoxy(polyethylene glycol)] (DSPE-PEG) was purchased from Corden Pharma.DSPE-PEG-maleimide was supplied by Xi'an Ruixi Biological Technology Co., Ltd.Penicillin, streptomycin, and fetal bovine serum (FBS) were provided by Gibco.Dulbecco's modified Eagle medium (DMEM) was obtained from Gibco.Platelet-derived growth factor-BB (PDGF-BB) was purchased from R&D Systems.Cyanine 5 (Cy5) NHS ester and cyanine 7.5 (Cy7.5) NHS ester were purchased from Lumiprobe, LLC.4,6-Diamidino-2-phenylindole (DAPI) and LysoTracker Green were supplied by Invitrogen.The cell counting kit (CCK-8) was obtained from R&D Systems.The type IV collagen targeting peptide was synthesized by Hybio Pharmaceutical Co., Ltd.Superoxide dismutase and hydrogen peroxide assay kits were purchased from Beyotime Biotech.ELISA kits of malondialdehyde and myeloperoxidase were obtained from Signalway Antibody LLC.6-Carboxy-2′,7′-dichlorodihydrofluorescein diacetate (DCFDA) was purchased from Molecular Probes.Dihydroethidium (DHE) and BCECF-AM were obtained from Beyotime Biotechnology.Antibodies against α-smooth muscle actin, 8-hydroxy-2-deoxyguanosine, and matrix metallopeptidase-2 were purchased from Abcam, while the antibody to proliferating cell nuclear antigen was obtained from Santa Cruz Biotechnology, Inc.FITC-labeled anti-mouse CD3 antibody and PE-labeled anti-mouse CD19 antibody were obtained from Invitrogen.Hydrolysis profiles of different NPs were separately determined in 0.01 M PBS at pH 5, pH 6, or pH 7.4, with or without 1 mM H2O2.After incubation at 37 °C for varied time periods, the transmittance of the different NP-containing solutions was determined at 500 nm by UV–Visible spectroscopy.The degree of hydrolysis was calculated based on the transmittance values.To test pH/ROS dual-responsive drug release profiles, 5 mg of freshly prepared RAP-containing NPs was incubated at 37 °C in 8 mL of 0.01 M PBS at pH 5, pH 6, or pH 7.4, with or without 1 mM H2O2, with shaking at 125 rpm.At specified time intervals, the aqueous solutions containing NPs were centrifuged at 19118g, and 4 mL of release medium was withdrawn.The same volume of corresponding fresh medium was replenished.The concentration of RAP was quantified as aforementioned.
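The working equation for the release data is not given in the text; a cumulative release calculation that corrects for the repeated withdrawal and replenishment of 4 mL of medium is commonly written as sketched below. This is a minimal illustration under that assumption, with hypothetical numbers rather than data from the study.

# Minimal sketch (assumed bookkeeping, hypothetical numbers): cumulative RAP release
# corrected for the 4 mL of medium withdrawn and replaced at each sampling point.
loaded_rap_mg = 1.0                              # assumed RAP content of the 5 mg NP sample
v_total_ml, v_sample_ml = 8.0, 4.0
conc_mg_per_ml = [0.010, 0.018, 0.025, 0.030]    # hypothetical HPLC readings per time point

cumulative_release_pct = []
removed_mg = 0.0                                 # drug already taken out with earlier samples
for c in conc_mg_per_ml:
    released_mg = c * v_total_ml + removed_mg
    cumulative_release_pct.append(100.0 * released_mg / loaded_rap_mg)
    removed_mg += c * v_sample_ml
print(cumulative_release_pct)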
MOVAS cells were seeded in 12-well plates at 2 × 10⁵ cells per well in 1 mL of growth medium.After 24 h, the culture medium was removed and 1 mL of fresh medium containing Cy5-labeled NPs at 20 μg/mL was added, followed by incubation at 37 °C for various periods of time.Before observation, late endosomes and lysosomes were stained with LysoTracker Green at 50 nM for 2 h, while nuclei were stained with DAPI.Fluorescence images were acquired by confocal laser scanning microscopy.Similarly, dose-dependent cellular internalization behaviors were examined after incubation for 6 h.For quantification of internalized NPs by flow cytometry, MOVAS cells were seeded in 12-well plates at a density of 2 × 10⁵ cells per well in 1 mL of growth medium.After 24 h, the culture medium was switched to 1 mL of fresh medium containing Cy5-labeled NPs at 20 μg/mL and incubated for various periods of time.Then the cells were digested and fluorescence intensity was determined via fluorescence-activated cell sorting.Following similar procedures, dose-dependent internalization profiles were examined, with an incubation time of 6 h.MOVAS cells were seeded in a 96-well microplate at 5 × 10³ cells per well in DMEM and incubated at 37 °C.After 24 h of incubation, the medium was changed to fresh culture medium containing 0.5% FBS and 20 ng/mL PDGF-BB, concomitant with the addition of different RAP formulations at 1 μM of RAP.In the normal control group, cells were treated with growth medium alone, while only PDGF-BB was added in the PDGF-BB group.After incubation for 24 h, cell viability was determined by CCK-8 assay.A Transwell assay of VSMCs was carried out to evaluate the anti-migration activity of different nanotherapies using a modified Boyden chamber, the Costar Transwell apparatus with 8.0-μm pore size.Briefly, MOVAS cells were seeded in 12-well plates at a density of 2 × 10⁵ cells/well and allowed to adhere overnight.The culture medium was then switched to DMEM containing 0.5% FBS, into which PDGF-BB was added at 20 ng/mL.Cells were incubated with different RAP formulations at 1 μM of RAP for 6 h.In the positive control group, cells were induced with PDGF-BB alone, while PDGF-BB and RAP were not added in the normal control group.Subsequently, each well was washed, trypsinized, resuspended in 0.2 mL of growth medium containing 0.5% FBS, and plated in the upper chamber.The lower chamber was filled with 0.6 mL of medium containing 0.5% FBS and 20 ng/mL PDGF-BB.After 8 h, all non-migrated cells were gently removed from the upper face of the Transwell membrane with a cotton swab.Migrated cells were fixed in 4% paraformaldehyde and stained with 0.1% crystal violet.Finally, the migrated cells were quantified by counting the number of stained cells from five randomly selected fields taken with a Nikon microscope.MOVAS cells were seeded into 6-well plates at a density of 5 × 10⁵ cells/well and treated with different RAP formulations at 1 μM of RAP for 24 h.In the positive control group, cells were stimulated with PDGF-BB alone, while cells in the normal control group were treated with medium.The cells were harvested at 24 h after different treatments and fixed with 70% ethanol, followed by incubation overnight at 4 °C.After thorough washing, cells were treated with ribonuclease at 37 °C for 30 min, and stained with propidium iodide at 4 °C for 30 min in the dark.The cells were analyzed by flow cytometry using an FC500 flow cytometer to determine the proportion of cells within the G1,
S, and G2/M phases.MOVAS cells in the normal, model, RAP/PLGA NP, RAP/ACD NP, RAP/OCD NP, and RAP/AOCD NP groups were treated with medium alone, PDGF-BB, PDGF-BB plus RAP/PLGA NP, PDGF-BB plus RAP/ACD NP, PDGF-BB plus RAP/OCD NP, and PDGF-BB plus RAP/AOCD NP, respectively.The cells were harvested at 24 h after different treatments to detect the levels of cell cycle-related proteins by Western blotting.To this end, cells were lysed in lysis buffer.The lysates were centrifuged at 12,000g for 30 min at 4 °C.The supernatant was collected and the protein concentrations were determined using a bicinchoninic acid (BCA) assay kit.Then samples containing 50 μg proteins were separated on 10% sodium dodecyl sulfate-polyacrylamide gels and were transferred electrophoretically onto nitrocellulose membranes.After blocking nonspecific binding with 5% skim milk, the membranes were incubated with the following primary antibodies at 4 °C overnight: Cyclin D1 antibody, dilution 1:200; p27Kip1 antibody, dilution 1:500; and β-actin antibody, dilution 1:500.Subsequently, the membranes were hybridized at 37 °C for 2 h with horseradish peroxidase-conjugated secondary antibody at a 1:2000 dilution.After washing three times, specific bands were visualized by fluorography using an enhanced chemiluminescence kit.The relative densities were quantified using the Quantity One analysis system.MOVAS cells were seeded in a 96-well plate at a density of 1 × 10⁴ cells per well and grown for 24 h.The culture medium was removed and the cells were treated with 100 μL of fresh medium containing different concentrations of blank NPs.Cells treated with fresh medium alone were used as a control.After 24 h, cell viability was measured by CCK-8 assay.Fresh blood was collected from Sprague Dawley rats via the jugular vein.Red blood cells (RBCs) were isolated by centrifugation at 600g for 5 min, and washed three times with sterile isotonic PBS.Then RBCs were diluted at 1:10 in sterile isotonic PBS.The diluted suspension of RBCs was incubated with 800 μL of isotonic PBS containing different concentrations of NPs.RBCs incubated with water or isotonic PBS were used as positive and negative controls, respectively.The obtained suspensions were incubated at 37 °C for 2 h, under gentle shaking.After centrifugation at 600g for 5 min, the supernatant was separated and the absorbance at 545 nm was measured by UV–Visible spectrophotometry.Hemolysis percentage was calculated according to the previously reported method.All samples were tested in triplicate.
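The exact hemolysis formula is cited rather than stated; the calculation is commonly performed by normalising each absorbance to the water (positive) and PBS (negative) controls, as in the minimal sketch below (assumed formula and hypothetical readings, not values from the study).

# Minimal sketch (assumed formula, hypothetical numbers): hemolysis (%) from the
# 545 nm absorbance of NP-treated RBC suspensions relative to the water (positive)
# and isotonic PBS (negative) controls.
def hemolysis_percent(a_sample, a_negative, a_positive):
    return 100.0 * (a_sample - a_negative) / (a_positive - a_negative)

triplicate = [0.082, 0.079, 0.085]               # hypothetical absorbances for one NP dose
a_pbs, a_water = 0.055, 1.320                    # hypothetical control absorbances
values = [hemolysis_percent(a, a_pbs, a_water) for a in triplicate]
print(sum(values) / len(values))                 # mean hemolysis (%) for this dose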
The rat vascular smooth muscle cell line was obtained from Leibniz-Institut DSMZ.Cells were maintained in tissue culture flasks or plastic dishes in a humidified atmosphere of 5% CO2 at 37 °C using DMEM supplemented with 10% FBS, 50 U/mL penicillin, and 50 μg/mL streptomycin.The cell line was kept in culture for up to 30 passages.Male Sprague-Dawley rats were administered with aspirin by oral gavage and heparin by i. v. injection before surgery.Immediately, pentobarbital at 0.7 mg/kg was intraperitoneally injected for anesthesia.Subsequently, the left external carotid arteries were exposed, and the endothelium of the common carotid arteries was denuded by intraluminal passage of a 2-French arterial embolectomy catheter, which was passed to the proximal common carotid artery and then withdrawn.This procedure was repeated three times.All procedures were carried out by the same operator.Endothelial injury of the carotid artery was assessed by staining with Evans blue.Briefly, 5% Evans blue diluted in saline was injected via the tail vein at 60 mg/kg.After 20–30 min, rats were euthanized.The carotid artery was isolated, washed with saline, and fixed with 4% paraformaldehyde for 10 min.Then the carotid artery was cut along the long axis to expose the intima, and tiled on a glass slide, with the adventitia in contact with the slide.The blue-stained injured endothelium was observed by stereoscopic microscopy.Using the intracellular ratiometric pH indicator BCECF-AM, pH changes at the injured sites of carotid arteries were qualitatively evaluated.Briefly, carotid artery tissues were collected at 30 min after balloon injury.The carotid arteries were frozen in Tissue-Tek O·C.T. Compound, and were cut into 5-μm sections.Subsequently, the cryosections were incubated with 10 μM BCECF-AM at 37 °C for 0.5 h.The stained sections were observed by fluorescence microscopy.The carotid artery tissues were harvested at different time points after balloon injury in the carotid artery.Then the carotid arteries were frozen in Tissue-Tek O·C.T. Compound.The frozen arteries were cut into 8-μm sections that were placed on glass slides.Subsequently, the sections were separately incubated with 10 μM DCFDA or 5 μM DHE in a light-protected humidified chamber at 37 °C for 30 min.The stained sections were observed by fluorescence microscopy.The carotid arteries, collected at day 14 after balloon injury, were homogenized in saline using a ground glass tissue grinder.After centrifugation, the levels of hydrogen peroxide and superoxide dismutase activity were determined using the corresponding diagnostic reagent kits.The levels of malondialdehyde and myeloperoxidase were separately detected with the related ELISA kits, following procedures provided by the manufacturers.Also, the levels of tumor necrosis factor-α and interleukin-1β were separately measured by ELISA.In all cases, the contents of total proteins were measured by the BCA method.Type IV collagen was dissolved in 0.25% acetic acid at a concentration of 0.5 mg/mL.Then 100 μL of the collagen solution was added into a 96-well plate and incubated overnight at 4 °C.For the binding study, collagen-coated or non-coated plates were first blocked with 50% calf serum for 1 h and then incubated with 100 μL of Cy7.5/TAOCD NP in 25% calf serum.After 1 h of incubation, the microplates were washed with PBS containing 0.05% Tween 20 three times.Subsequently, fluorescence images were taken by an IVIS Spectrum system.Immediately after the rat carotid artery balloon injury, Cy7.5-labeled NPs were i. v. administered at 25 μg/kg of Cy7.5.After 8 h, whole carotid artery tissues were harvested and imaged simultaneously using an IVIS Spectrum system.Fluorescence intensity of the carotid artery was then analyzed.In a separate study, the carotid artery tissue was collected at predetermined time points after i. v.
administration of Cy7.5/TAOCD NP in rats with injured carotid artery, followed by ex vivo imaging and quantitative analysis of fluorescence signals by the IVIS Spectrum system.Also, we examined drug distribution in the carotid arteries after i. v. administration of RAP nanotherapies.After establishment of the carotid artery balloon injury model, rats received a single i. v. administration of different RAP formulations at 1 mg/kg of RAP.After 0.5 h, rats were euthanized.The left injured and contralateral normal carotid arteries were excised and homogenized in PBS.After centrifugation at 12,000g for 10 min, the supernatant was collected and proteins were precipitated with methanol.The concentrations of RAP were quantified by HPLC.Following carotid artery balloon injury in rats, Cy5/TAOCD NP was i. v. injected at 25 μg/kg.After 0.5 h, carotid artery tissues were harvested and frozen in Tissue-Tek O·C.T. Compound.The frozen arterial tissues were then cut into 5-μm sections and placed on glass slides.After nuclei were stained with DAPI, the sections were observed by CLSM.Thirty male Sprague Dawley rats were assigned into 6 groups, including a sham group, a saline-treated group, and groups separately administered with RAP/PLGA NP, RAP/ACD NP, RAP/OCD NP, or RAP/AOCD NP.Except for rats in the sham group, carotid artery balloon injury was induced in all other rats according to the above-described procedures.After angioplasty, different RAP formulations were i. v. administered at 1 mg/kg of RAP on days 0, 4, 8, and 12.For comparison, in vivo efficacy of free RAP was also examined following similar procedures.In a separate study, RAP/AOCD NP or RAP/TAOCD NP were i. v. administered at 1 mg/kg of RAP on days 0, 5, and 10 after carotid balloon injury was induced in rats.On day 14 after angioplasty, rats were euthanized.The carotid arteries and major organs were excised for further histological and immunohistochemical studies.Two weeks post injury, rats were euthanized and then subjected to transcardial perfusion with saline, followed by fixation with 4% paraformaldehyde over 5 min at 100 mmHg.The injured segment of the left common carotid artery was excised from the surrounding tissue and immersion-fixed in the same fixative for at least 12 h.The cross-sections were prepared from the harvested arteries and stained with hematoxylin and eosin.In addition, histological sections of carotid artery tissues were separately stained with antibodies against α-SMA, PCNA, 8-OHdG, or MMP-2.Microscopic images were captured using an optical microscope, and quantitative analysis was performed with a computerized image processing and analysis program.Male Sprague-Dawley rats and male C57BL/6J mice were obtained from the Animal Center of the Third Military Medical University.Animals were housed in rat cages under standard conditions, with ad libitum access to water and food.All the animal care and experimental protocols were performed with review and approval by the Animal Ethical and Experimental Committee of the Third Military Medical University.Twelve male Sprague Dawley rats were randomly assigned into 4 groups.In the AOCD NP groups, rats were treated with 1 mL of saline containing different doses of AOCD NP by i. v. injection via the tail vein.In the normal control group, rats were i. v.
administered with 1 mL of saline.For all animals, signs of toxicity and their behaviors were observed for about 14 days.The body weight was checked at defined time points.After 2 weeks, animals were euthanized.Blood samples were collected for hematological and biochemical analyses.The main organs, including heart, liver, spleen, lung, and kidney, were harvested and weighed.The organ index was calculated as the ratio of organ weight to the body weight of each rat.In addition, histological sections were prepared and stained with H&E.To assess the long-term toxicity of the dual-responsive nanotherapy in rats, healthy male Sprague Dawley rats were randomly assigned to 4 groups.The vehicle group was treated with AOCD NP at 11.5 mg/kg, while two groups were separately administered with RAP/AOCD NP at 1 or 3 mg/kg by i. v. injection twice per week for 12 consecutive weeks.In the normal control group, rats were i. v. administered with saline.All animals were observed daily for mortality, general appearance, and behavioral abnormality.Food and water consumption as well as body weight were recorded weekly.At week 12, rats were euthanized.Blood samples were collected for analysis of representative hematological and biochemical parameters.The major organs, including heart, liver, spleen, lung, and kidney, were excised and weighed for calculation of the organ index.Furthermore, H&E-stained histological sections were prepared for the collected organs.In addition, immunohistochemistry analysis of splenic sections was performed after staining with anti-rat CD3 antibody.To further evaluate the effects of RAP/AOCD NP treatment on the adaptive immune system, 24 male C57BL/6J mice were randomly divided into four groups.Mice in the normal control group were administered with saline, while the vehicle group was treated with AOCD NP at 11.5 mg/kg by i. v. injection.Mice in the other two groups were separately administered with RAP/AOCD NP at 1 or 3 mg/kg by i. v.
injection twice per week for 12 consecutive weeks.During treatment, mice were monitored for any changes in general physical condition, such as appearance, behaviors, and mortality.At week 12, mice were euthanized.The isolated splenic tissues were cut into pieces, which were thoroughly ground in the presence of sterile PBS.Then cells were collected by centrifugation in the lymphocyte separation medium.After incubation with a mixture of anti-CD3-FITC and anti-CD19-PE at 4 °C for 30 min, cells were washed twice and resuspended in PBS.Subsequently, flow cytometry was conducted to quantify the percentage of T and B cells.The cells were gated using the forward and side scatter for dead cell exclusion.In each sample, 10,000 events were measured, and data were analyzed using FlowJo 7.6 software.Data are presented as mean ± standard deviation.Comparisons between and within groups were conducted with unpaired Student's t-tests and repeated measures ANOVA using SPSS Statistics 20 software, respectively.A value of p < 0.05 was considered to be statistically significant.According to the previously established method, acetalation of β-CD was performed at room temperature in the presence of an excess amount of MP, using PTS as a catalyst.Briefly, 4 g β-CD was dissolved in 80 mL of anhydrous DMSO, into which 64 mg PTS and 16 mL of MP were added.After 3 h, approximately 1 mL of triethylamine was added to terminate the reaction.The acetalated product was precipitated from water, collected by centrifugation, and washed with deionized water four times.The residual water was removed by lyophilization to give rise to a white powder.A ROS-responsive material based on β-CD was synthesized by chemical functionalization with PBAP.Typically, 5.55 g PBAP was dissolved in 36 mL of anhydrous dichloromethane, and then 7.65 g CDI was added.After 30 min of reaction, 40 mL of DCM was added into the mixture, followed by washing with 30 mL of deionized water three times.The organic phase was further washed with saturated NaCl solution, dried over Na2SO4, and concentrated under vacuum to obtain CDI-activated PBAP.Subsequently, 250 mg β-CD and 1.52 g CDI-activated PBAP were dissolved in 20 mL of anhydrous DMSO, followed by addition of 0.8 g DMAP.The obtained mixture was magnetically stirred at 20 °C overnight.The final product was precipitated from 80 mL of deionized water, collected by centrifugation, thoroughly washed with deionized water, and collected after lyophilization.Analysis by 1H NMR spectroscopy was carried out using an Agilent DD2 600 MHz NMR spectrometer.Fourier-transform infrared spectroscopy was performed on a PerkinElmer Spectrum 100S FT-IR spectrometer.The previously established nanoprecipitation/self-assembly method was used to prepare RAP-loaded nanoparticles based on different materials.Specifically, 50 mg of carrier material and 10 mg of RAP were co-dissolved in 2 mL of organic solvent to obtain an organic phase.To obtain an aqueous phase, 4 mg lecithin and 6 mg DSPE-PEG were dispersed in 0.4 mL of ethanol, and then 10 mL of deionized water was added, followed by heating at 65 °C for 1 h. Then, the organic phase was slowly added into the preheated aqueous phase solution under gentle stirring.After vortexing for 3 min, the mixture was cooled to room temperature, incubated for 2 h, and then dialyzed against deionized water at 25 °C for 24 h.
Finally, the solidified NPs were harvested by lyophilization.For PLGA and ACD NPs, acetonitrile was used, while a solvent mixture of methanol and acetonitrile at 1:1 was employed to prepare the organic phase containing OCD or ACD/OCD blends at various weight ratios.Through similar procedures, blank NPs and Cy5- or Cy7.5-labeled NPs were prepared.First, a collagen IV-targeting peptide was reduced using Bond-Breaker TCEP solution (neutral pH) in PBS containing 5 mmol/L EDTA at a disulfide/TCEP molar ratio of 1:1.Then KLWVLPKGGGC-conjugated DSPE-PEG was synthesized in 4% ethanol at a peptide/DSPE-PEG-maleimide molar ratio of 5:4, under magnetic stirring at room temperature for 4 h.The free peptide was removed by dialysis overnight.The above-mentioned method was then adopted to prepare RAP-loaded and collagen IV-targeting NPs based on a blend at an OCD/ACD weight ratio of 80:20, with a DSPE-PEG-peptide/DSPE-PEG molar ratio of 1:9.The size, size distribution, and ζ-potential values of various NPs in aqueous solution were measured by a Malvern Zetasizer at 25 °C.Transmission electron microscopy was performed on a TECNAI-10 microscope, operating at an acceleration voltage of 80 kV.Scanning electron microscopy was conducted on a FIB-SEM microscope.Before observation, freeze-dried samples were coated with platinum for 40 s. | Cardiovascular diseases (CVDs) remain the leading cause of morbidity and mortality worldwide. Vascular inflammation is closely related to the pathogenesis of a diverse group of CVDs. Currently, it remains a great challenge to achieve site-specific delivery and controlled release of therapeutics at vascular inflammatory sites. Herein we hypothesize that active targeting nanoparticles (NPs) simultaneously responsive to low pH and high levels of reactive oxygen species (ROS) can serve as an effective nanoplatform for precision delivery of therapeutic cargoes to the sites of vascular inflammation, in view of acidosis and oxidative stress at inflamed sites. The pH/ROS dual-responsive NPs were constructed by combination of a pH-sensitive material (ACD) and an oxidation-responsive material (OCD) that can be facilely synthesized by chemical functionalization of β-cyclodextrin, a cyclic oligosaccharide. Simply by regulating the weight ratio of ACD and OCD, the pH/ROS responsive capacity can be easily modulated, affording NPs with varied hydrolysis profiles under inflammatory microenvironment. Using rapamycin (RAP) as a candidate drug, we first demonstrated in vitro therapeutic advantages of RAP-containing NPs with optimal dual-responsive capability, i.e. RAP/AOCD NP, and a non-responsive nanotherapy (RAP/PLGA NP) and two single-responsive nanotherapies (RAP/ACD NP and RAP/OCD NP) were used as controls. In an animal model of vascular inflammation in rats subjected to balloon injury in carotid arteries, AOCD NP could accumulate at the diseased site after intravenous (i.v.) injection. Consistently, i. v. treatment with RAP/AOCD NP more effectively inhibited neointimal hyperplasia in rats with induced arterial injuries, compared to RAP/PLGA NP, RAP/ACD NP, and RAP/OCD NP. By surface decoration of AOCD NP with a peptide (KLWVLPKGGGC) targeting type IV collagen (Col-IV), the obtained Col-IV targeting, dual-responsive nanocarrier TAOCD NP showed dramatically increased accumulation at injured carotid arteries. Furthermore, RAP/TAOCD NP exhibited significantly potentiated in vivo efficacy in comparison to the passive targeting nanotherapy RAP/AOCD NP.
Importantly, in vitro cell culture experiments and in vivo animal studies in both mice and rats revealed good safety for AOCD NP and RAP/AOCD NP, even after long-term treatment via i. v. injection. Consequently, our results demonstrated that the newly developed Col-IV targeting, pH/ROS dual-responsive NPs may serve as an effective and safe nanovehicle for precision therapy of arterial restenosis and other vascular inflammatory diseases. |
31,409 | Reduced respiratory motion artefact in constant TR multi-slice MRI of the mouse | MRI scanning in the abdomen and thorax of small animals is compromised by the effects of respiration motion.An isoflurane anaesthetized normal healthy mouse takes snatched breaths of about 200 ms duration with a significantly depressed and often variable respiration rate, typically 40–80 breaths/min depending on the depth and duration of anaesthesia.Prospective gating methods incorporating the automatic reacquisition of respiratory motion corrupted data have enabled highly efficient motion desensitised 3D scanning at short and constant TR in the mouse.The methods adaptively track spontaneous changes in the respiration rate to maximise acquisition during all inter-breath intervals when respiration motion is minimal, and have been shown to work very well in spoiled gradient echo and balanced SSFP scan modes.These methods, however, are not able to replicate the same level of T2 contrast offered by the multi-slice RARE scan mode, which is well known to be particularly useful for tumour visualization.This paper reports a conceptually simple yet effective prospective gating acquisition scheme for efficient multi-slice scanning in free breathing small animals at any fixed TR of choice with reduced sensitivity to respiratory motion.For a spin echo scan with a 90° excitation the maximum SNR per unit time is achieved at the familiar TR = 1.26T1 which corresponds to a 44% improvement over scanning with TR = 5T1.
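As a quick numerical check of this efficiency argument (an illustration, not taken from the paper): for a fully spoiled 90° spin echo the longitudinal signal recovers as 1 − exp(−TR/T1), so the SNR per unit time scales as (1 − exp(−TR/T1))/√TR, which peaks near TR = 1.26T1 and exceeds the TR = 5T1 value by roughly 44%.

# Minimal sketch (illustration only): SNR-per-unit-time efficiency of a 90° spin echo
# versus TR, reproducing the optimum near TR = 1.26*T1 and the ~44% gain relative to
# near-fully-relaxed scanning at TR = 5*T1.
import numpy as np

T1 = 1.0                                       # arbitrary units; only the ratio TR/T1 matters
TR = np.linspace(0.05, 10.0, 20000) * T1
efficiency = (1.0 - np.exp(-TR / T1)) / np.sqrt(TR)

best_tr = TR[np.argmax(efficiency)]
gain = efficiency.max() / efficiency[np.argmin(np.abs(TR - 5.0 * T1))]
print(best_tr, (gain - 1.0) * 100.0)           # approximately 1.26 and 44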
Respiration motion artefacts have previously been minimised in triggered multi-slice scanning through the acquisition of a fixed number of slice data during inter-breath periods which forces TR to be determined by the time taken to complete an integer number of breath cycles.A long TR is required to avoid the signal amplitude modulation due to variable T1 weighting that results from a short and variable TR, and which manifests as image ghosting.The scan efficiency is compromised since TR is long.Furthermore, unpredictable events such as periodic double breaths result in unavoidable corruption of data.In instances where a respiratory triggered approach is considered too inefficient, motion artefacts have been reduced with signal averaging.A variable TR also confounds the quantification of metrics derived from magnetisation preparation schemes such as MT and CEST.Even a simple MT ratio measurement requires a fixed TR to be meaningful and reproducible.Slice selective retrospective gating strategies in small animals typically acquire data with short TR such that each line of k-space data can be sampled repeatedly over several cardiac cycles for cardiac R-wave alignment.We are aware of only one report that uses retrospective gating in conjunction with scanning at TR > 80 ms to align data with inter-breath periods in order to minimise the effect of respiration motion.Scanning the mouse abdomen at TR 400 ms with a retrospective approach required 9 full repetitions to obtain a fully sampled data set from the 20 repetitions that were initially acquired in the absence of hindsight.Alternative acquisition strategies such as radial, spiral and, in the case of the RARE scan mode, PROPELLER, are able to reduce the effect of physiological motions by oversampling the centre of k-space in a manner that essentially performs low spatial frequency signal averaging, albeit at the expense of scan time.The reduced sensitivity of PROPELLER to respiration motion has enabled the ADC of mouse liver to be determined with an approximately three-fold reduction in standard deviation when compared to both ungated and respiration triggered RARE scan modes.Although a good degree of motion correction is possible, the destructive interference between echoes of the RARE CPMG echo train that is caused by a loss of phase coherence cannot be recovered.In general, the effect of motion on pure frequency encoding methods is to spread the artefact energy in all directions, which is considered to be less detrimental to image quality than the phase encode ghosting exhibited by conventional Cartesian spin-warp methods.The prospective gating multi-slice method presented here is compatible with all of these acquisition schemes, and follows the guiding principle that the best way to remove artefacts is to avoid them altogether.Conventional multi-slice MRI is characterised by having a slice index counter that cycles more quickly than the projection or phase encode index counter.By assigning each slice its own specific projection or phase encode loop index counter it is possible to scan the slices continually at the optimum TR.When a breath is registered RF pulses continue to be applied but data are not acquired, and the corresponding counters remain fixed so that the data are acquired one TR later, providing it coincides with an inter-breath period.Each slice advances through its own projection/encoding loop whenever suitable data are acquired during an inter-breath period, as determined by the spontaneous respiration rate, and the scan only completes when all of the slices have acquired the specified number of projections/encodings.Each breath is already in progress at the time of detection using threshold breath detection.The approach is refined to reacquire the slice data acquired during a user-defined time period before each breath is detected as this has been shown to dramatically improve image stability.For convenience the acquisition scheme is referred to as SPLICER, and only the data with reduced sensitivity to respiratory motion are spliced together for image reconstruction.
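The per-slice bookkeeping described above can be summarised in a few lines of pseudocode-style Python. The sketch below is an illustration written for this description, not the vendor pulse-programme source; breath detection, the gate extension and the reacquisition look-back are all simplified.

# Minimal sketch (illustration only, not the published pulse-sequence code) of the
# SPLICER bookkeeping: every slice keeps its own projection counter, RF is played out
# in every TR period to hold the steady state, and a projection is only kept when its
# acquisition falls in an inter-breath period and no breath is detected within the
# reacquisition look-back window that follows it.
import random

def splicer(breath_times, n_slices=24, n_proj=128, tr=2.0, gate=0.35, lookback=0.15):
    proj = [0] * n_slices                      # independent projection counter per slice
    t, s, accepted = 0.0, 0, []
    def gated(time):                           # breath (plus gate extension) in progress
        return any(b <= time < b + gate for b in breath_times)
    def breath_soon(time):                     # breath detected shortly after acquisition
        return any(time < b <= time + lookback for b in breath_times)
    while min(proj) < n_proj:
        # slice s is excited here on every pass, whether or not its data are kept
        if proj[s] < n_proj and not gated(t) and not breath_soon(t):
            accepted.append((s, proj[s], t))   # keep this projection for reconstruction
            proj[s] += 1                       # only this slice's counter advances
        s = (s + 1) % n_slices                 # one slice excitation per TR/n_slices
        t += tr / n_slices
    return accepted

random.seed(0)
breaths, t = [], 0.0
while t < 1200.0:                              # ~1.5 s mean breath interval with some jitter
    t += random.gauss(1.5, 0.15)
    breaths.append(t)
print(len(splicer(breaths)), "projections accepted")   # 24 slices x 128 projections = 3072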
A diagrammatic representation of the SPLICER scan mode is shown for two different TRs in Fig. 1A and B and the scan modes that have previously been used to prospectively synchronise data acquisition with inter-breath periods are shown in Fig. 1C, D and E.It has previously been reported that only small variations in physiological cycles result in sufficient asynchrony such that motion insensitive data can be sampled efficiently.This is demonstrated by simulation for a normal distribution in Supplementary material.In practice, the respiration rate typically drifts and scans complete according to the time that is available for respiration insensitive scanning.Spreading the standard ungated scan time over the inter-breath periods that are available for respiration insensitive scanning defines an extended scan time.Analysis of real respiration traces suggests that at least 97% of SPLICER scans will complete within a 10% overhead of this extended scan time.The prospectively gated 3D methods described previously operate at very short TR, and blocks of data with centre-out phase encoding are acquired to align the centre of k-space with inter-breath cardiac R-waves.Only the two standard projection loop counters, one for each projection encoding direction, are required for efficient 3D sequence control including the automatic reacquisition of respiration corrupted data, in contrast to the requirements of the 2D multi-slice method presented here for which each slice has its own independent projection counter.In general, the multi-slice method is deployed at much longer TR which provides ample time for respiration logic signal evaluation before RF is applied to each slice.The overhead associated with real-time decision making is therefore more of an issue for the 3D methods.All animal studies were performed in accordance with the UK Animals (Scientific Procedures) Act of 1986 under licences approved by the UK Home Office and with the approval of the University of Oxford ethical review committee.Animals were housed in environmentally enriched IVC cages in groups of 5 per cage, in a 12 h day-night cycle facility maintained at 22 °C in 50% humidity.All mice had ad libitum access to certified food and tap water.Female, 8-week-old CBA and C57BL/6 mice were used.Anaesthesia was induced and maintained using isoflurane in room air supplemented with oxygen for MRI.Rectal temperature was monitored and maintained at 36 °C with an optical system that provided feedback to a twisted pair resistive heating system developed for MR compatible homeothermic maintenance.Respiration was monitored and maintained at 40–60 breaths/min using a pneumatic balloon positioned against the animal's chest and coupled to a pressure transducer.The respiration signal was passed to a custom-built gating device to generate a threshold-based respiration gating control signal with an additional duration of about 150 ms set to last until after the completion of each breath to reduce the sensitivity to respiratory motion.A pancreatic tumour was derived from orthotopically injected KPC cells in a C57BL/6 mouse
, and CaNT tumours were subcutaneously implanted in 3 CBA mice, as part of separate studies.MRI was performed on a 4.7 T 310 mm horizontal bore Biospec AVANCE III HD preclinical imaging system equipped with 114 mm bore gradient insert or a 7.0 T 210 mm horizontal bore VNMRS preclinical imaging system equipped with 120 mm bore gradient insert.RF transmission and reception was performed with a 45 mm long 32 mm ID quadrature birdcage coil.The SPLICER scheme was implemented in RARE and magnetisation transfer prepared gradient echo scan modes with evaluation of the respiration gating signal preceding the acquisition or dummy scanning of each slice.A user defined period of 150 ms determined the amount of slice data acquired immediately preceding the detection of each breath that was set to be reacquired.Combining the reacquisition with extension of the respiration gating control signal beyond each snatched breath results in a period of about 500 ms duration about each breath that is not available for data acquisition.The scanner architecture of both systems necessitates that the exact amount of data to be acquired is specified at the start of a scan.Traditionally, for a full 2D multi-slice data set, this corresponds to the acquisition of np complex data points for each projection multiplied by ns slices multiplied by nproj projections.Increasing the projection loop size by a factor of four easily ensures that scans complete naturally with a full set of acceptable data and do not terminate prematurely.To determine exactly which data should be used in image reconstruction each acquired data trace was timestamped or the projection counter index of each slice was passed to the parallel port of the spectrometer host computer for storage.The stability of the SPLICER acquisition mode was tested with 10 repeats of a 2D multi-slice RARE scan with ETL 8, effective TE 24 ms, TR 2000 ms, THK 0.5 mm, 24 contiguous slices, FOV 32 × 32 mm2 and matrix 128 × 128.For comparison, 20 repeats of an ungated scan acquired in a virtually identical scan time were acquired with otherwise identical parameters.For tumour visualization 3 repeats of the SPLICER acquisition mode were acquired with fat saturation and effective TE 48 ms.For gradient echo scanning with θ < 45°, the optimum TR is <0.33T1.Compatibility of SPLICER with steady-state magnetisation preparation was tested in tumour bearing mice with a multi-slice gradient echo scan with TE 3.5 ms, TR 400 ms, FA 30°, THK 0.422 mm, 48 contiguous slices, FOV 27 × 27 mm2 and matrix 128 × 128.MTC preparation was performed with a 2 ms sinc 90° RF pulse applied downfield at +5 kHz from water followed by a 1 ms 100 mT/m crusher gradient.The SPLICER acquisition schemes and reconstructions for both Bruker and Varian platforms are open source and freely available for download courtesy of the Bodleian Digital Library Systems and Services of the University of Oxford at https://doi.org/10.5287/bodleian:pvRrYE7Kk.Fig. 2 shows mean, standard deviation and SNR maps calculated for an example slice through the kidneys of a mouse from repeats of a RARE scan acquired with TR 2000 ms. 
The top row shows results from 20 repeats of an ungated scan, and the bottom row shows results from 10 repeats of a respiration gated scan acquired in a virtually identical scan time using SPLICER.The SNR of each pixel was calculated according to the mean/SD obtained from a standard statistical analysis of the magnitude time course data for each pixel.This measure of SNR includes the effect of variance due to physiological movement as well as the generic system noise and eliminates the requirement to select a specific ROI for noise analysis.It is evident that the SPLICER scan results in considerably less motion-induced ghosting and the residual artefacts are primarily due to gut motion and cardiac pulsation.Fig. 3 shows SNR enhancement maps for the SPLICER acquisition mode with respect to ungated scanning for all 24 slices.The general ghosting displayed in the enhancement maps is a direct result of the respiration-induced phase encode ghosting that is evident in the ungated SNR maps.The vertical bands of hypointensity are primarily a result of pulsatile blood flow in the major blood vessels that run through the imaging plane.The instabilities caused by pulsatile blood flow cannot be reduced by the SPLICER acquisition mode.The mean SNR enhancement across all 24 slices for a large circular ROI that fills a good portion of the body region in each slice was calculated to be 1.6.Considering that the ROI unavoidably includes some signal voids which register as unity, one has to conclude that, in this instance, ungated scanning would have to be performed for at least 2.5 times as long in order to achieve the same SNR as exhibited by the SPLICER acquisition mode.The observed SNR improvement will inevitably depend upon a number of factors including type of contrast encoding, extent of body motion, slice thickness, etc., but it is generally delivered because conventional signal averaging cannot recover motion-derived losses in SNR.The assessment of temporal stability, rather than single frame examination, provides an unequivocal demonstration of the signal intensity stabilisation and increased image fidelity of the proposed method.The data sets are publicly available at https://doi.org/10.5287/bodleian:pvRrYE7Kk in NIfTI-1.1 format and can be viewed with ImageJ.It is particularly instructive to inspect the stability of data by selecting a slice and scrolling through the time course.It can readily be seen that the single slice data presented in Fig. 2 are entirely representative.
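Because the repeat data are distributed in NIfTI-1.1 format, the per-pixel SNR and enhancement maps described above can be recomputed with a few lines of array arithmetic. The sketch below is illustrative rather than the authors' analysis code, and the file names are placeholders for the downloaded repeats.

# Minimal sketch (not the authors' analysis code; file names are placeholders):
# pixelwise SNR = mean/SD over the repeated magnitude images, and the SPLICER/ungated
# SNR enhancement map averaged over a crude circular body ROI in every slice.
import nibabel as nib
import numpy as np

def snr_map(path):
    data = np.abs(nib.load(path).get_fdata())       # assumed order: x, y, slice, repeat
    return data.mean(axis=-1) / (data.std(axis=-1) + 1e-12)

snr_splicer = snr_map("splicer_rare_repeats.nii")   # 10 gated repeats
snr_ungated = snr_map("ungated_rare_repeats.nii")   # 20 ungated repeats
enhancement = snr_splicer / snr_ungated

nx, ny = enhancement.shape[:2]
xx, yy = np.mgrid[0:nx, 0:ny]
body_roi = (xx - nx / 2) ** 2 + (yy - ny / 2) ** 2 < (0.35 * nx) ** 2
print(enhancement[body_roi].mean())                 # mean SNR enhancement over all slices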
Fig. 4 shows 12 contiguous slices from an orthotopic pancreatic tumour-bearing mouse from 3 repeats of a SPLICER RARE scan acquired in 2.5 min.The data of all slices are devoid of motion artefact and exhibit good tumour delineation which can be used to plan MR guided radiotherapy treatment.In this application it is particularly crucial to minimise scan time to ensure that long term animal motions such as peristalsis, bladder filling and body droop do not lead to significant deformation of the body in between the start of imaging and the delivery of radiotherapy.It is also particularly important to minimise image artefacts since they can compromise image registration and result in misdirected treatments.For quantification of MRI metrics in general it is important to minimise scan time as precision is improved when longer term physiological instabilities are minimised.We have previously observed, albeit in a 3D scan mode, that measurements of T1 in a number of organs have improved precision when acquiring data in a shorter scan time whilst minimising the effects of respiration motion.For 2D multi-slice scanning the SPLICER acquisition mode is the most efficient way to minimise the effects of respiration motion for quantitation.It must be noted, however, that through slice motion during the breath provides an opportunity for inadvertent excitation of the wrong slices.It is not straightforward to formally assess the extent to which this will confound quantitation as it will depend on many factors including slice position, slice profile, slice thickness, and depth of anaesthesia.For parts of the body where significant through slice axis motion does occur, e.g. the thorax and liver, it is possible to suspend slice selective RF excitation during the breath whilst maintaining delivery of magnetisation preparation pulses as necessary.This requires scans to be operated in a near fully relaxed mode which results in long scan times, but the real-time and adaptive tracking of the respiratory interval by the proposed method ensures that the scan efficiency is always maintained.Nevertheless, if possible, we would always choose to avoid the complications of through slice motion by deploying a short TR 3D method.The proposed method naturally enables global steady-state magnetisation preparation schemes that are applied in the absence of slice selection, such as MTC and CEST, to be maintained through periods of bulk motion.
5 shows that good image quality is produced when using gradient echo SPLICER in conjunction with a steady-state MTC preparation scheme which shows darkening of the muscle and CaNT tumours relative to images acquired without MTC preparation.The CaNT tumour in the third column shows MTC delineating regions where it is suspected that necrosis has occurred and fluid has filled the resulting cavities.Other preparation schemes are, of course, more forgiving and do not need to be applied with such fixed regularity.For example, fat saturation only requires the fat magnetisation to be saturated immediately prior to data acquisition.The proposed prospective gating acquisition scheme enables efficient multi-slice scanning in small animals at the optimum TR with reduced sensitivity to respiratory motion.The method has been implemented and demonstrated in multi-echo and magnetisation prepared spin-warp scans, and is compatible with a wide range of complementary methods including non-Cartesian scan modes and reduced data acquisition techniques.In particular, the proposed scheme reduces the need for continual close monitoring to effect operator intervention in response to respiratory rate changes, which is both difficult to maintain and precludes high throughput.The following is the supplementary data related to this article.Simulations of SPLICER acquisitions showing mean and standard deviation scan times for 1000 repeats of the acquisition of 128 respiration insensitive projections.A,C,E: mean scan time.B,D,F: standard deviation of scan time.Scan cycles were simulated in the presence of respiration patterns characterised by breath motion duration 0.5 s and mean respiration interval of 1.0 s, 1.5 s and 2.0 s with normal distribution about the mean.The scale bars are displayed in multiples of the mean scan time available for respiration insensitive scanning corresponding to 2×, 1.5× and 1.33× the conventional ungated scan times for μresp = 1.0 s, 1.5 s, and 2.0 s respectively.Simulations were performed with standard deviation of the respiration intervals ranging from to 0.2% to 20% of μresp in steps of 0.2%, and for TR ranging from 0.02 s to 4.0 s in steps of 0.02 s. Each simulation was initiated exactly coincident with the centre of a breath which is considered to be the most favourable time within the respiration cycle for maintaining synchrony between the respiration and scan cycles and, therefore, invoking reacquisition and prolonging the scan time.Supplementary data to this article can be found online at https://doi.org/10.1016/j.mri.2019.03.018. | Purpose: Multi-slice scanning in the abdomen and thorax of small animals is compromised by the effects of respiration unless imaging and respiration are synchronised. To avoid the signal modulations that result from respiration motion and a variable TR, blocks of fully relaxed slices are typically acquired during inter-breath periods, at the cost of scan efficiency. This paper reports a conceptually simple yet effective prospective gating acquisition mode for multi-slice scanning in free breathing small animals at any fixed TR of choice with reduced sensitivity to respiratory motion. Methods: Multi-slice scan modes have been implemented in which each slice has its own specific projection or phase encode loop index counter. When a breath is registered RF pulses continue to be applied but data are not acquired, and the corresponding counters remain fixed so that the data are acquired one TR later, providing it coincides with an inter-breath period. 
The approach is refined to reacquire the slice data that are acquired immediately before each breath is detected. Only the data with reduced motion artefact are used in image reconstruction. The efficacy of the method is demonstrated in the RARE scan mode, which is well known to be particularly useful for tumour visualisation. Results: Validation in mice with RARE demonstrates improved stability with respect to ungated scanning where signal averaging is often used to reduce artefacts. SNR enhancement maps demonstrate the improved efficiency of the proposed method that is equivalent to at least a 2.5-fold reduction in scan time with respect to ungated signal averaging. A steady-state magnetisation transfer contrast prepared gradient echo implementation is observed to highlight tumour structure. Supplementary simulations demonstrate that only small variations in respiration rate are required to enable efficient sampling with the proposed method. Conclusions: The proposed prospective gating acquisition scheme enables efficient multi-slice scanning in small animals at the optimum TR with reduced sensitivity to respiratory motion. The method is compatible with a wide range of complementary methods including non-Cartesian scan modes, partially parallel imaging, and compressed sensing. In particular, the proposed scheme reduces the need for continual close monitoring to effect operator intervention in response to respiratory rate changes, which is both difficult to maintain and precludes high throughput. |
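The per-pixel SNR measure described in this entry (mean divided by standard deviation of the magnitude time course, with the gated-to-ungated ratio giving an enhancement map) can be expressed as a minimal sketch. This is illustrative only and is not the authors' analysis code; the array shapes, function name and synthetic data are assumptions.

```python
import numpy as np

def temporal_snr_map(magnitude_series, eps=1e-12):
    """Per-pixel SNR computed as mean/SD over repeated magnitude images.

    magnitude_series : ndarray, shape (n_repeats, ny, nx)
        Repeated 2D magnitude images for one slice (e.g. loaded from NIfTI).
    Returns an (ny, nx) map; higher values indicate greater temporal stability.
    """
    mean_img = magnitude_series.mean(axis=0)
    sd_img = magnitude_series.std(axis=0, ddof=1)
    return mean_img / (sd_img + eps)

# Synthetic example: the SNR enhancement map is the pixel-wise ratio of the
# gated (SPLICER) map to the ungated map.
rng = np.random.default_rng(0)
ungated = 100 + 10 * rng.standard_normal((20, 64, 64))   # 20 ungated repeats
gated = 100 + 6 * rng.standard_normal((10, 64, 64))      # 10 gated repeats
enhancement = temporal_snr_map(gated) / temporal_snr_map(ungated)
print(round(float(enhancement.mean()), 2))
```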
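The per-slice counter bookkeeping summarised in the Methods above (counters frozen while a breath is registered, with the views acquired immediately before each detected breath rejected and reacquired) can also be illustrated with a highly simplified simulation. This is a schematic sketch only, assuming one view per slice per TR and a single rejected view per breath; it is not the authors' pulse-sequence implementation, and all names are illustrative.

```python
def splicer_schedule(breath_per_tr, n_slices, n_views):
    """Schematic SPLICER bookkeeping (not pulse-sequence code).

    breath_per_tr : list of bool, one entry per TR; True if a breath is
        registered during that TR period.
    Each slice keeps its own view counter. During a breath, RF excitation is
    assumed to continue but nothing is stored and counters stay fixed; the
    views stored immediately before the breath are discarded and reacquired.
    """
    counters = [0] * n_slices
    acquired = [[] for _ in range(n_slices)]
    in_breath_prev = False
    trs_used = 0
    for breath in breath_per_tr:
        if all(c >= n_views for c in counters):
            break                          # every slice fully encoded
        trs_used += 1
        if breath:
            if not in_breath_prev:
                # Reject the views collected in the TR just before the breath;
                # rewinding the counter means they will be reacquired later.
                for s in range(n_slices):
                    if acquired[s]:
                        acquired[s].pop()
                        counters[s] -= 1
            in_breath_prev = True
            continue                       # RF only, no data stored
        in_breath_prev = False
        for s in range(n_slices):          # one view per slice per TR
            if counters[s] < n_views:
                acquired[s].append(counters[s])
                counters[s] += 1
    return trs_used, acquired

# Toy respiration pattern: a breath lasting 2 TRs in every block of 10 TRs.
pattern = [(t % 10) in (8, 9) for t in range(2000)]
trs, views = splicer_schedule(pattern, n_slices=4, n_views=128)
print(trs, [len(v) for v in views])
```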
31,410 | How could climate services support disaster risk reduction in the 21st century | The adoption of landmark UN agreements, notably the Sendai Framework for Disaster Risk Reduction 2015–2030,1 the 2030 Agenda for Sustainable Development,2 the UNFCCC Paris Agreement,3 the Agenda for Humanity4 and the New Urban Agenda5 has created an exciting opportunity to build coherence across different but strongly overlapping policy areas. Taken together, these frameworks and agreements provide a comprehensive resilience agenda, one that recognises that building resilience requires action spanning development, humanitarian aims, climate change response and disaster risk reduction (DRR). Moving forward, this coherence will facilitate the reduction of existing fragmentation and conflicts within the existing DRR and climate change adaptation (CCA) agendas by strengthening resilience frameworks for multi-hazard assessments and actions, with the aim of developing a dynamic, targeted, preventive and adaptive governance system at global, national and local levels. Targeting knowledge and evidence through climate services (CS) to support actions consistent with this coherence will be critical. Despite the obvious links, CCA and DRR have been developed largely as separate policy domains. This has resulted from a range of reasons, including the different temporal and spatial scales considered by the two domains, the diversity of actors involved in them and the policies and institutional frameworks of relevance, as well as the differences in the terminology and methodological approaches used in research activities related to the two domains. As a result, the CCA and DRR communities are not always well connected and both generally regard the other community as covering only a subset of their work. This limited connectivity also holds true with respect to the knowledge and evidence being generated within the two communities to support decision-making processes related to extremes. Extreme weather and climate-related events are the most impactful type of natural disasters and are identified by some as being the highest risk6 to society in the last 10 years.7 The hazards8 and vulnerabilities associated with these events are projected to alter due to climate change directly, as well as a result of changes to determinants of vulnerability such as land use and demographics. Thus, access to relevant and quality-controlled climate information is crucial to enable better-informed decisions aimed at addressing existing and emerging weather and climate-related risks. This includes characterising present-day risk and understanding past and future trends of extreme events, including those related to slow onset events. Such climate information can and should support both CCA and DRR policy and practice. This also suggests that, to be effective, this climate information should also be integrated with social, economic and environmental objectives reflecting the comprehensive resilience agenda, which necessarily requires complementary information such as land use change, demographics and insurance penetration. The challenge is to understand what information is needed and can be credibly provided, and then to work with the respective user communities to deliver it. From the perspective of CS, DRR can be regarded as a separate application area. From a broader perspective, however, the link between DRR, CCA and sustainable development suggests that the climate services required should also support DRR as an integral part of sectoral and system management and development. For example, provision of information to support disaster risk management in relation to water resources should consider that those decisions - whilst targeting DRR - are part of a broader water resource management and development system. In the context of a changing climate, there is emerging recognition that climate services are important for DRR and, as such, there is a need for the CS and DRR communities to engage in addressing the emerging potential. Towards addressing this emerging potential, this paper reflects on this engagement in the context of mutually beneficial collaboration and partnerships that are increasingly key to the joined-up thinking on design and delivery of knowledge and evidence. This includes the knowledge and evidence required to support the Sendai Framework Global Target. In doing so this paper recognises that there are barriers to achieving the required engagement, most of which can be framed around barriers to joined-up thinking and innovation. A particular focus for understanding and developing the role of CS is to link those services to the DRR planning cycle: prevention, preparedness, response and recovery. In engaging with the relevant user communities, the intention should be to ensure that the services and information provided are useful, relevant, accessible, credible and legitimate. In the context of the European research and innovation Roadmap for Climate Services, CS are defined as: "the transformation of climate-related data and other information into customised products such as projections, trends, economic analysis, advice on best practices, development and evaluation of solutions, and any other service in relation to climate that may be of use for the society at large".10 This stresses the importance of a user-driven approach which goes beyond the mainly supply-driven Global Framework for Climate Services (GFCS) definition, according to which CS merely "strengthen the production, availability, delivery and application of science-based climate prediction and service". In responding to the challenges of delivering CS for DRR, future efforts targeting the development of CS for DRR should take advantage of the unique opportunities that now exist as a result of the shift in focus of the Sendai Framework from managing 'disasters' to managing 'risks' and the potential offered by addressing health as a driver for action on DRR. The shift in focus to managing risks provides a basis and opportunities for increased coherence and mutual reinforcement across the post-2015 agendas reflected in policies, institutions, goals, indicators and measurement systems for implementation. The strategies for promoting coherence and mutual reinforcement include establishing political recognition of the need for such; linking mechanisms for monitoring and reporting of linked goals and indicators; and promoting cooperation in implementation. The need for health to be a major focus of DRR and management is now recognised within the Sendai Framework as playing a critical role, strongly promoting health resilience. In this context, health is identified as a major driver and the Sendai Framework calls for resilience of national health systems, including by integrating DRR into primary, secondary and tertiary health care, especially at the local level; developing the capacity of health workers in understanding disaster risk and applying and implementing DRR approaches in health work; promoting and enhancing the training capacities in the field of disaster medicine; and supporting and training community health groups in DRR approaches in health programmes, in collaboration with other sectors, as well as in the implementation of the International Health Regulations of the World Health Organization. We recognise that currently, at the local and regional level, health care is not necessarily well connected to civil protection agencies dealing with DRR and environmental/spatial planning agencies dealing with CCA. Our vision includes recognition of the advantages of enhancing these connections consistent with the Sendai Framework. It is evident that while CS are critical for supporting CCA, their full potential in supporting DRR has not yet been exploited. Opportunities for focusing future climate service efforts exist across the DRR cycle, both internationally and nationally. This is also reflected in the number of relevant references within the Sendai Framework including: promoting scientific research of disaster risk patterns, causes and effects; disseminating risk information with the best use of geospatial information technology; providing guidance on methodologies and standards for risk assessments, disaster risk modelling and the use of data; and promoting and supporting the availability and application of science and technology to decision-making. Furthermore, CS are explicitly mentioned in the Sendai Framework under Priority 4: Enhancing disaster preparedness for effective response and to "Build Back Better" in recovery, rehabilitation and reconstruction. In addition, the Sendai Framework defines an early warning system as "an integrated system of hazard monitoring, forecasting and prediction, disaster risk assessment, communication and preparedness activities systems and processes that enables individuals, communities, governments, businesses and others to take timely action to reduce disaster risks in advance of hazardous events". It has been suggested that such a system would be better informed through the effective use of targeted climate services. Of particular importance in this context is the UNFCCC report on "Opportunities and options for integrating CCA with the Sustainable Development Goals and the Sendai Framework for Disaster Risk Reduction 2015–2030".11 This 2017 report specifically refers to the importance of the availability of climate data, climate services, and associated capacity building in delivering the integration across these agreements and frameworks. It is worth noting that in high-level documents related to DRR at the national level, the importance of taking into account longer-term climate change for prevention is often mentioned. The fact that CS are already providing fundamental data to better characterise present-day and evolving risks suggests that these services could be further developed to benefit the preparedness and response aspects of the DRR cycle. Furthermore, experiences within CCA at the national and transnational levels are further promoting the significant role CS should be playing within a 'Build Back Better' approach. The CS community is responding. For example, DRR is one of the five priorities of the GFCS, and the Copernicus Climate Change Service (C3S) has identified DRR as a key sector for the C3S Sectoral Information System. Three leading European initiatives on CS and DRR initiated a discussion on how the DRR community could be best served by new and emerging climate services as well as on the relevant challenges and opportunities. The discussion engaged experts from sectors as different as civil protection, health, insurance, civil engineers and representatives from CS providers and purveyors, including national meteorological services. As a result of the deliberations during the Climate Services for DRR workshop, a number of relevant points were raised, the highlights of which are summarised in the rest of this brief paper. CS can strengthen all phases of the DRR cycle, including through better-informed climate risk and action assessments, early warning systems and response planning. The relevance of longer-term climate risks may be obvious for prevention and recovery. Yet CS that draw on high-resolution exposure and vulnerability information can also support strategic planning for better preparedness and response. CS are providing essential inputs for national adaptation strategies. But persistent low familiarity of DRR practitioners with CS and climate knowledge in general makes their application for operational and strategic DRR purposes less likely. This deficiency is particularly acute considering that adaptation and resilience in relation to extreme events in the context of a changing climate will depend on the extent to which the goals within the Paris Agreement are being achieved. Of all hazards, flooding is probably the one - at least in Europe - for which climate change drivers have mostly been taken into account. Learning from this experience and extending the approach to other hazards could be a way of facilitating the interaction between DRR and the CS communities and for enhancing and demonstrating the value of CS for DRR. There are a number of options for providing CS that can support risk assessment. Longer-term climate projections are for the most part probabilistic, and ensembles, including those provided by a number of CS providers, are amenable to risk assessment. Furthermore, the recent development of a storyline approach suggests that there may be other ways of estimating the likelihood of future events beyond the probabilistic approach. Data availability, whilst improving, is still a critical issue. Information related to damage and losses caused by extreme weather events represents one of the most tangible gaps. Standardisation of climate data and their harmonisation with other datasets such as damage and losses, including the way they are collected, are critical to building an effective interface between CS and DRR. These will require targeted efforts to ensure the compatibility of information sources in the context of supporting decision-making processes, but also co-evaluation to promote future refinement and development. To achieve effective climate services, there is a need to ensure that these services are decision-driven. For example, there is a need for information on vulnerability and exposure that could be included in or linked to CS. Capacity development is required to support informed engagement with the intended and potential users within the DRR community to support co-design, co-delivery and co-evaluation, and to provide a better understanding of what is available and how it can be used. The focus should be on where CS providers can add value - knowledge and evidence to support prevention and recovery - building on strengths from supporting CCA. There are additional added-value areas being developed that can provide opportunities to bridge the gap between short-term weather predictions and climate services. These include improved seasonal to decadal forecasts, and the development of near-present climatologies. Equally important is the need for developing the understanding of the CS providers on the needs and capacities of the users within the DRR community.
International guidelines and good practice examples would be useful for informing and complementing capacity development at national and subnational level. Participants at the above-mentioned workshop recognised that the lack of availability or accessibility of meteorological and impact data in an event catalogue that could be shared was a particular barrier. Progress in this regard is being made, with increased availability of traceable and transparent datasets describing the impacts of past events becoming more common. Efforts include Sendai Monitor, Desinventar and the European Commission's Disaster Risk Management Knowledge Centre, which are open-access repositories of disaster loss and impacts data. As such, future efforts in delivering CS for DRR will need to be linked to advancements in collecting and making available disaster damage and loss data. These data catalogues, along with engaging with national DRR communities when developing CS for DRR, have the potential to improve the quality, relevance and legitimacy of the intended services. The development of markets for public and private CS for DRR depends on the enhancement of both the supply side and demand side. For continuity and legitimacy reasons, relationships across the DRR and CS communities should be sustained over time based on a sound understanding of the targeted decision-making processes and the robustness of the services available. DRR involves a varied and diverse community composed of many sub-communities, including, but not limited to, civil protection and public health, as well as other sectors such as water, agriculture and forestry, infrastructure, tourism, and finance. Different actors involved in DRR have different capacities and needs for climate information, which should be recognised. Moreover, different actors often have very different mindsets, perspectives and roles on reducing and managing risk within the DRR cycle. Effective management and communication of risks remain a critical challenge for DRR, requiring a tailored approach for the supportive CS, which accounts for specific regional, cultural, political and sectoral characteristics of the target audience at national and sub-national level. Workshop participants also highlighted the need for an identifiable and credible champion for CS supporting DRR. Such a champion could be positioned within existing institutions. It could provide leadership and serve as a focal point for the development and delivery of services engaging the DRR community and other providers and purveyors. In Europe, the significant investment the European Commission has put into the Copernicus Climate Change Service represents an important step in this direction, as standardised, high-quality data will be made available operationally and free of charge. While in principle CS for DRR should ideally be free to promote effective decision-making, raise awareness and demonstrate their value, it is recognised that tailoring of information for specific questions related to risk assessment and management would come at a significant price. For these latter cases, private consultants and other intermediaries become important. Finding an appropriate balance between the private and public dimensions of a growing market for CS is fraught with questions related to ethics and social responsibility. This includes dilemmas associated with those countries and actors who need the information the most but may also be those who are less likely to be able to pay for it. In responding to the challenges of delivering CS for DRR, efforts are needed by funders, providers and users of the intended climate services, with the support of those UN Member States that have agreed to the adoption of the Sendai Framework and its global targets. The adoption of a comprehensive resilience agenda and the shift in focus within the DRR community from 'managing disasters' to 'managing risks' provides unique opportunities for better integrating CS into DRR decision making at the strategic and tactical levels. The results of our discussions also suggest that these opportunities are enhanced by addressing public health as a driver for action on DRR. Doing so will not be without challenges, but the advantage to society offered by providing CS that supports decisions and actions taking advantage of these opportunities is critical. We suggest that sustained and informed engagement of the DRR and CS communities in the co-design, co-development and co-evaluation of CS for DRR can be effective in addressing these challenges. In terms of further developing CS, the shift in focus of the Sendai Framework has resulted in a more prominent role of science and technology in providing the evidence and knowledge on risks in all its dimensions of hazards, exposure and vulnerability. This shift and the prominence of science and technology includes expected outcomes, actions and deliverables under each of the four priorities for action of the Sendai Framework. Recognising health as a significant driver can provide impetus and a focus for developing CS that support DRR. The link between public health and DRR, especially in terms of the knowledge and information to inform policy and decisions, should be exploited and further developed to engage stakeholders at all levels towards implementing the UN landmark agreements of 2015 more effectively by 2030. We suggest seizing these opportunities quickly to focus the collaboration between the DRR and the CS communities in both research and application of knowledge to create and deliver relevant, credible, legitimate, useful, usable and used information and CS. Seizing these opportunities will require continued efforts within both the DRR and CS communities. First and foremost, these efforts will need to be reflected in the implementation of the Sendai Framework, the SDGs and the Paris Agreement and by the UN Member States who will be reporting on their respective implementation activities. Funders of research and CS should broaden their perspectives of the role of CS to include those supporting DRR as reflected in the post-2015 agenda, including the S&T Roadmap to support implementation of the Sendai Framework for DRR 2015–2030. One particular challenge that also needs addressing relates to identifying and understanding the value chain for CS in the context of DRR. The European research and innovation Roadmap for Climate Services recognises that this is a major gap for CS in general, and efforts have been underway to address this gap under Horizon 2020. As for all climate services, the CS value chain for DRR is more than likely a web, with providers, intermediaries and users all extracting and adding value for their targeted and subsequent users. The Roadmap also recognises that capacity building for all operating in the value chain is critical to realise the benefits that CS is and could be offering. The DRR and CS communities will need to enhance efforts towards working together. These efforts should include effective engagement at the international and national levels directed at realising and demonstrating potential benefits. There is also a need to recognise these challenges and opportunities within research and innovation efforts nationally and multi-nationally in terms of identifying related research questions and enabling innovations. A useful step forward would be working together within science and operational fora of the respective communities. | In January 2018, three leading European initiatives on climate services (CS) and disaster risk reduction (DRR) initiated a discussion on how the DRR community could be best served by new and emerging CS. The aim was to identify challenges and opportunities for delivery of effective operational disaster risk management and communication informed by an understanding of future climate risks. The resulting discussion engaged experts from civil protection, health, insurance, engineering and the climate service community. Discussions and subsequent reflections recognised that CS can strengthen all phases of the DRR cycle and that there are lessons to learn from experience that could enhance and demonstrate the value of CS supporting the DRR community. For this to happen, however, the supporting information should be relevant, accessible, legitimate and credible and engage both service supply and demand sides. It was also agreed that there was need for identifiable and credible champions recognised as providing leadership and focal points for the development, delivery and evaluation of CS supporting DRR. This paper summarises the identified key challenges (e.g. disconnection between CS and DRR; accessibility of relevant and quality-controlled information; understanding of information needs; and understanding the role of CS and its link to the DRR planning cycle). It also suggests taking advantage of the unique opportunities as a result of the increased coherence and mutual reinforcement across the post-2015 international agendas and the increasing recognition that links between public health and DRR can provide impetus and a focus for developing CS that support DRR. |
31,411 | iGHBP: Computational identification of growth hormone binding proteins from sequences using extremely randomised tree | Circulating growth hormones exist in a partially complexed form with binding proteins. The high affinity growth hormone binding protein (GHBP) is one such predominant GH binding protein that represents the extracellular ligand-binding domain of the GH receptor (GHR). In humans, GHBP is generated by the proteolytic cleavage of the GHR at the cell surface using the tumor necrosis factor-α-converting enzyme, thereby releasing the extracellular domain of GHR. By contrast, GHBP is produced in rodents by the alternative processing of the GHR transcript. Binding GH to the GHR triggers the physiological functions of the hormone. Previous studies suggested that the biological effects of GHBP are dependent on the serum level of GH, as low levels of GH lead to a dwarf phenotype but increase longevity, while high levels lead to acromegaly, kidney damage, and diabetic eye. Therefore, the study of GHBP is gaining momentum from functional proteomics, leading to its clinical identification. Traditionally, GHBPs were identified and characterised using biochemical experiments including immunoprecipitation, ligand immunofunctional assays, chromatography, and cross-linking assays. Identifying GHBP from a protein sequence using these methods is highly expensive, time-consuming, and overly complex to be utilised in a high-throughput manner. Thus, the development of sequence-based computational methods is needed to identify potential GHBP candidates. Recently, Tang et al. developed a support vector machine (SVM)-based prediction model called HBPred, where the authors used an optimal feature set obtained from dipeptide composition using an incremental feature selection strategy. HBPred is the only publicly available method, which was developed using the same data set as our method. Although the existing method has a specific advantage in GHBP prediction, the accuracy and transferability of the prediction model still require improvement. In this study, we proposed a novel sequence-based predictor, called iGHBP, for the identification of GHBPs from given protein sequences. Firstly, we collected GHBPs from UniProt and constructed nonredundant benchmarking and independent data sets. Secondly, we investigated five different machine learning algorithms, five compositions, and 16 hybrid features. In total, we generated 21 models for each ML method and selected the best model. Thirdly, we applied a two-step feature selection protocol on the above selected best model to improve the prediction performance. Finally, we evaluated these models against the state-of-the-art method, HBPred, on the independent data set. Experimental results showed that the ERT-based prediction model produced consistent performance on both the benchmarking and independent data sets; hence, we named iGHBP as the superior model, demonstrated by outperforming the existing predictor as well as other predictors tested in this study. Therefore, it can be expected that iGHBP can be an effective tool for identifying GHBPs. The iGHBP methodology development involved five major stages: data set construction, feature extraction, feature ranking, model training and validation, and the construction of the final prediction model. Each of these major stages is described in the following section. The benchmarking data set S can be formulated as S = S+ ∪ S−, where the subsets S+ and S− respectively contain 123 GHBPs and 123 non-GHBPs, and the symbol ∪ denotes the union operator in set theory. To assess the performance of iGHBP with other related methods, we constructed an independent data set. Firstly, we considered 355 manually annotated and reviewed GHBP proteins from the Universal Protein Resource (UniProt) using hormone-binding keywords in the molecular function category of the Gene Ontology. After this, we used CD-HIT, which is widely used to perform sequence clustering and to remove highly similar sequences, by setting a threshold of 0.6. The final data set contained 31 GHBPs and was supplemented with an equal number of non-GHBPs. Basically, these non-GHBPs are other functional proteins such as cancer lectins and phage virion proteins. Note that none of the protein sequences in the independent data set appeared in the benchmarking data set, ensuring a fair comparison of prediction model performance. CTD was introduced by Dubchak et al. for predicting protein-folding classes. A detailed description of computing CTD features was presented in our previous study. Briefly, the twenty standard amino acids are classified into three different groups, namely polar, neutral, and hydrophobic. Composition consists of percentage composition values from these three groups for a target protein. Transition consists of the percentage frequency of a polar residue followed by a neutral residue, or that of a neutral residue followed by a polar residue. This group may also contain a polar residue followed by a hydrophobic residue or a hydrophobic residue followed by a polar residue. Distribution consists of five values for each of the three groups, and measures the percentage of the target sequence length within which the first, 25%, 50%, 75%, and 100% of the amino acids of a specific property are located. CTD generates 21 features for each PCP; hence, seven different PCPs yield a total of 147 features. The AAIndex database contains a variety of physicochemical and biochemical properties of amino acids. However, utilising all the information present in the AAIndex database as input features to the ML algorithm may affect the model's performance, due to redundancy. To this end, Saha et al. applied a fuzzy clustering method on the AAIndex database and classified it into eight clusters, where the central indices of each cluster were considered as high-quality amino acid indices. The accession numbers of the eight amino acid indices in the AAIndex database are BLAM930101, BIOV880101, MAXF760101, TSAJ990101, NAKH920108, CEDJ970104, LIFS790101, and MIYS990104. These high-quality indices encode the target protein sequences as 160-dimensional vectors. However, the average of these eight high-quality amino acid indices was used as an additional input feature to save computational time. Briefly, we extracted five feature encoding schemes based on composition and physicochemical properties, which include amino acid composition (AAC), dipeptide composition (DPC), CTD, amino acid index (AAI), and physicochemical properties (PCP), respectively generating 20-, 400-, 147-, 20-, and 10-dimensional vectors (a minimal sketch of the CTD encoding for a single property is given below).
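The following is a minimal sketch of the CTD encoding for a single physicochemical property. The three-group split shown is one commonly used hydrophobicity grouping, and rounding conventions for the distribution descriptor vary between implementations, so this is an illustrative approximation rather than the exact code used in the study.

```python
from itertools import combinations

# Illustrative three-group split of the 20 amino acids for one property
# (hydrophobicity); the full CTD descriptor repeats this for seven properties.
GROUPS = {
    "1": set("RKEDQN"),      # polar
    "2": set("GASTPHY"),     # neutral
    "3": set("CLVIMFW"),     # hydrophobic
}

def ctd_one_property(seq):
    seq = seq.upper()
    n = len(seq)
    labels = "".join(g for aa in seq for g, s in GROUPS.items() if aa in s)
    # Composition: percentage of residues in each group (3 values)
    comp = [100.0 * labels.count(g) / n for g in "123"]
    # Transition: percentage of adjacent pairs switching between two groups (3 values)
    pairs = list(zip(labels, labels[1:]))
    trans = [100.0 * sum(1 for a, b in pairs if {a, b} == {g1, g2}) / max(len(pairs), 1)
             for g1, g2 in combinations("123", 2)]
    # Distribution: positions (as % of sequence length) of the first, 25%, 50%,
    # 75% and 100% occurrences of each group (15 values)
    dist = []
    for g in "123":
        idx = [i + 1 for i, lab in enumerate(labels) if lab == g]
        for frac in (0.0, 0.25, 0.50, 0.75, 1.0):
            k = max(1, int(round(frac * len(idx)))) if idx else 0
            dist.append(100.0 * idx[k - 1] / n if idx else 0.0)
    return comp + trans + dist   # 21 features for this property

print(len(ctd_one_property("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")))
```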
In this study, we explored five different ML algorithms, including RF, ERT, SVM, GB, and AB, for binary classification. All these ML algorithms were implemented using the Scikit-Learn package. A brief description of these methods and how they were used is given in the following sections. RF is one of the most successful ML methods and utilises hundreds or thousands of independent decision trees to perform classification and regression. RF combines the concepts of bagging and random feature selection. For a given training data set D, generate a new training data set Di by uniformly drawing N bootstrapped samples from D. Grow a tree using Di and repeat the following steps at each node of the tree until it is fully grown: select mtry random features from the original features, select the best variable by optimising the impurity criteria, and split the node into two child nodes. The tree grows until the amount of data in the node is below the given threshold. Repeat the above-mentioned steps to build a large quantity of classification trees. To classify a test sample, the input features are passed from the root to the end node of each tree based on predetermined splits. The majority class across the forest is considered the final classification. SVM is a well-known supervised ML algorithm, which has been widely used in various biological problems. It maps the original feature vectors into a higher-dimensional Hilbert space using different kernel functions and then searches for an optimal hyperplane in that space. In this study, the radial basis function kernel was utilised to construct an SVM model. Grid search was performed to optimise the regularisation parameter C and the kernel width parameter γ, with the search space described previously. Freund proposed the AB algorithm, which combines several weak classifiers to build a strong classifier. In this study, we treated a decision tree as the base classifier with the default parameters as implemented in the Scikit-Learn package. However, the number of estimators at which boosting terminated was optimised in the range of 50–500 with an interval of 50. Friedman proposed the GB algorithm, which is a forward learning ensemble method that produces a final strong prediction model based on the ensemble of weak models, and which has been widely used in bioinformatics and computational biology. In GB, the two most influential parameters, ntree and nsplit, were optimised with the search space described previously. In addition to the above ML algorithms, we note that other ML algorithms such as deep belief networks, recurrent neural networks, deep learning, and two-layer neural networks have been successfully applied in various biological problems. However, these methods will be considered in our future studies. Generally, three cross-validation methods, namely an independent data set test, a sub-sampling test, and a leave-one-out cross-validation (LOOCV) test, are often used to evaluate the anticipated success rate of a predictor. Among the three methods, however, the LOOCV test is deemed the least arbitrary and most objective, as demonstrated by Eqs. 28–32 of earlier work, and hence it has been widely recognised and increasingly adopted by investigators to examine the quality of various predictors. Accordingly, the LOOCV test was also used to examine the performance of the model proposed in the current study. In the LOOCV test, each sequence in the training data set is in turn singled out as an independent test sample and all the rule-parameters are calculated without including the sequence being tested. Additionally, the receiver operating characteristic (ROC) curve, which is a plot of the true positive rate against the false positive rate under different classification thresholds, is depicted to visually measure the comprehensive performance of different classifiers. To improve the feature representation capability and identify the subset of optimal features that contribute to correctly classifying GHBPs and non-GHBPs, we employed a novel two-step feature selection strategy. Notably, the two-step feature selection protocol employed here is similar to the one used in our recent studies, where the features were ranked according to feature importance scores (FISs) using the RF algorithm in the first step, and feature subsets were selected manually based on the FISs in the second step. In this study, the first step is identical to our previous protocol. However, in the second step, a sequential forward search (SFS) was employed to select the optimal feature subset, rather than using manual feature subset selection. In the second step, we utilised SFS to identify and select the optimal features from a ranked feature set based on the following steps. The first feature subset only contained the first feature in the ranked set D. The second feature subset contained the first and the second feature in D, and so on. Finally, we obtained N feature subsets. All the N feature subsets were inputted to ERT to develop their corresponding prediction models using a LOOCV test. Finally, the feature subset that produced the best performance was considered the optimal feature set. In this study, we considered 21 feature encodings that include individual composition-based features and hybrid features, which were inputted to five different ML algorithms, developing their corresponding models using a LOOCV procedure. In total, 105 prediction models were developed, and the performance of each model in terms of accuracy with respect to the different feature encodings and ML algorithms is shown in Fig. 2.
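The two-step protocol described above (random-forest importance ranking followed by a sequential forward search scored with an extremely randomised tree under leave-one-out cross-validation) could be sketched with scikit-learn roughly as follows. Tree counts, random seeds and the toy data are illustrative assumptions, not the values used in the study.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

def two_step_selection(X, y, random_state=1):
    """Step 1: rank features by random-forest importance scores.
    Step 2: sequential forward search over the ranked list, scoring each
    candidate subset with an extremely randomised tree classifier under
    leave-one-out cross-validation (accuracy)."""
    ranking = np.argsort(
        RandomForestClassifier(n_estimators=100, random_state=random_state)
        .fit(X, y).feature_importances_)[::-1]
    best_acc, best_k = 0.0, 0
    for k in range(1, len(ranking) + 1):
        cols = ranking[:k]
        acc = cross_val_score(
            ExtraTreesClassifier(n_estimators=100, random_state=random_state),
            X[:, cols], y, cv=LeaveOneOut(), scoring="accuracy").mean()
        if acc > best_acc:
            best_acc, best_k = acc, k
    return ranking[:best_k], best_acc

# Toy usage with random data; in practice X would be the 420-dimensional
# DPC + AAI hybrid feature matrix for the benchmarking sequences.
rng = np.random.default_rng(0)
X_toy, y_toy = rng.random((30, 12)), rng.integers(0, 2, 30)
subset, acc = two_step_selection(X_toy, y_toy)
print(len(subset), round(acc, 3))
```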
Among these methods, ERT and RF perform consistently better than the other three algorithms. Here, the model that achieved the highest accuracy was regarded as the best model. Accordingly, five models were selected, one from each ML method. Surprisingly, these five ML models produced their best performances using hybrid features (RF and AB: H5; SVM: H4; and GB: H10), indicating that various aspects of sequence information may be needed for a better prediction. Table 1 shows the performance comparison of the five different ML methods, where the methods are ranked according to the Matthews correlation coefficient (MCC), which can be considered one of the best measures in binary classification (a minimal sketch of the evaluation metrics used here is given at the end of this entry). Among these methods, RF, ERT, and GB produced a similar performance, with an MCC and accuracy of 0.546 and 0.772, respectively, which is slightly better than AB and significantly better than SVM. Therefore, we selected only four ML-based models and applied the feature selection protocol to these models. To identify the most informative features that improve prediction performance, a feature selection protocol was employed to remove noisy and redundant features. In an effort to construct the optimal or best predictive model, we applied a two-step feature selection protocol to identify an optimal feature set from the hybrid features that improves the prediction performance. In the first step, we applied the RF algorithm to rank the features, according to FIS, with hybrid features H5, H8 and H10. The SFS approach was used in the second step to select the optimal feature set from the ranked feature list. Fig. 3A shows the feature importance scores of the 420-dimensional feature vector. These features were ranked according to FIS and generated 420 feature sets. Each feature set was inputted to the ERT algorithm, and the corresponding models were developed using an LOOCV test. We plotted the SFS curve in Fig. 4A with accuracy on the Y-axis and feature number on the X-axis. The maximum accuracy of 84.96% was observed with an optimal feature set of 190 features, while the other metrics, MCC, sensitivity, specificity, and AUC, were 0.701, 88.62%, 81.30%, and 0.896, respectively. Surprisingly, the obtained performance is identical to HBPred, where both methods use identical cross-validation methods and benchmarking data sets; however, the number of features and the choice of ML algorithms are different. We also dramatically reduced the considered features from 420 to 190, indicating that our proposed feature selection technique could pick out the optimal dipeptides and AAI so as to further improve the prediction quality. The above procedure was followed for the other three methods. The best performance in terms of accuracy for RF, GB, and AB peaked at 80.5%, 81.7%, and 83.1%, respectively, with corresponding optimal feature set sizes of 241, 161, and 167. These results show that the two-step feature selection protocol significantly improves the performances of the respective models. Next, we compared the performances of the four different ML-based methods. To be specific, the accuracy of the ERT-based prediction model is ~1.9–4.4% higher than the other three methods, indicating the superiority of the ERT-based method in GHBP prediction. Hence, we named the ERT-based prediction model iGHBP. To show the efficiency of our feature selection protocol, we compared the performance of the optimal model and the control without feature selection (i.e. using all features). Fig. 5 shows that our two-step feature selection protocol significantly improved the prediction performances of all four ML-based methods. Specifically, the accuracy values of ERT, RF, GB and AB were respectively 7.7%, 2.9%, 4.5%, and 6.6% higher than the control, indicating the effectiveness of the feature selection protocol. A similar protocol has been used in previous studies and has shown that the corresponding optimal models improved in performance. Although the feature selection protocol significantly improved the performances of the respective ML-based methods, we specifically investigated the effectiveness of our feature selection protocol on the ERT-based method. Here, we computed the average of each feature for GHBPs and non-GHBPs separately and compared their distributions for the hybrid features and the optimal features. Results show that GHBPs and non-GHBPs were distributed more differentially in the feature space using the optimal feature set when compared to the hybrid features, demonstrating why our feature descriptor led to the most informative prediction of GHBPs. Generally, it is essential to evaluate the proposed model using an independent data set to check whether the prediction model has generalisation capability or robustness. In order to check the robustness of iGHBP, we further compared it against the three other ML methods developed in this study and against the state-of-the-art predictor on the independent data set. To make a fair comparison, we ensured lower sequence identities between the benchmarking and independent data sets, as it would otherwise lead to an overestimation of performance if the sequences in the independent data set had higher identities than those in the benchmarking data set. The results are summarised in Table 2, where the methods are ranked according to MCC. It can be observed that the proposed predictor iGHBP achieved the best performance, with MCC, accuracy, specificity, and AUC values of 0.646, 82.3%, 83.9%, and 0.813, respectively. Specifically, the MCC and accuracy of iGHBP were 17.4–45% and 9.7–22.6% higher when compared to the other methods, thus demonstrating the superiority of iGHBP. Furthermore, we computed a pairwise comparison of AUCs between iGHBP and HBPred using a two-tailed t test and obtained a P-value of 0.009, demonstrating that iGHBP significantly outperformed HBPred. It is worth mentioning that both iGHBP and HBPred produced identical performance with the benchmarking data set, although there was variation in the input feature dimension and ML algorithm. However, only iGHBP produced a similar and consistent performance in both the benchmarking and independent data sets, indicating that the current predictor is more stable and reliable. Notably, the optimal feature set contains 190 features, which is ~3-fold larger than the feature set used in the previous study. It is understandable that a larger and optimal feature set plays an important role in capturing the key components distinguishing the actual GHBPs and non-GHBPs and improving the performance. This is remarkable progress in biological research because a more reliable tool for the identification of biological macromolecules can vastly reduce the experimental cost. Hence, iGHBP can be expected to be a tool with high availability for the identification of GHBPs. As pointed out previously and shown in many follow-up publications, user-friendly and publicly accessible web servers are the future direction for developing more useful predictors. To this end, an online prediction server for iGHBP was developed, and it is available at www.thegleelab.org/iGHBP. All data sets utilized in the current study can be downloaded from our web server. Below, we give researchers a step-by-step guideline on how to use the webserver to get their desired results. In the first step, users need to submit the query sequences into the input box. Note that the input sequences should be in FASTA format. Examples of FASTA-formatted sequences can be seen by clicking on the FASTA format button above the input box. Finally, by clicking on the Submit button, you will get the predicted results on the screen of your computer. The biological significance of GHBPs has motivated the development of computational tools that facilitate accurate prediction. In this work, we developed a novel GHBP predictor called iGHBP. Here, we systematically assessed the use and performance of various composition-based features and their combinations along with various ML approaches in GHBP prediction. Our main findings are as follows: (i) among the five classifiers, ERT performed the best according to our performance measures, based on LOOCV; (ii) of the five different compositions, an optimal feature set using a combination of DPC and AAI achieved the highest performance, emphasising the arrangement of particular local ordering dipeptides and biochemical properties; (iii) experimental results from independent tests show that the proposed predictor iGHBP is more promising and effective for GHBP identification. As an application of this method, we have also made available an iGHBP webserver for the wider research community to use. It is expected that iGHBP will be a useful tool for discovering hypothetical GHBPs in a high-throughput and cost-effective manner, facilitating characterisation of their functional mechanisms. Furthermore, our proposed methods, along with the increasing availability of experimentally verified data and novel features, will greatly improve the prediction of GHBP. The authors declare that there is no conflict of interest. BM and GL conceived and designed the experiments. SB and BM performed the experiments. BM, SB, and TS analyzed the data. GL contributed reagents/materials/software tools. BM, SB, and GL wrote the manuscript. | A soluble carrier, growth hormone binding protein (GHBP), can selectively and non-covalently interact with growth hormone, thereby acting as a modulator or inhibitor of growth hormone signalling. Accurate identification of the GHBP from a given protein sequence also provides important clues for understanding cell growth and cellular mechanisms. In the postgenomic era, there has been an abundance of protein sequence data garnered, hence it is crucial to develop an automated computational method which enables fast and accurate identification of putative GHBPs within a vast number of candidate proteins. In this study, we describe a novel machine-learning-based predictor called iGHBP for the identification of GHBP. In order to predict GHBP from a given protein sequence, we trained an extremely randomised tree with an optimal feature set that was obtained from a combination of dipeptide composition and amino acid index values by applying a two-step feature selection protocol. During cross-validation analysis, iGHBP achieved an accuracy of 84.9%, which was ~7% higher than the control extremely randomised tree predictor trained with all features, thus demonstrating the effectiveness of our feature selection protocol.
Furthermore, when objectively evaluated on an independent data set, our proposed iGHBP method displayed superior performance compared to the existing method. Additionally, a user-friendly web server that implements the proposed iGHBP has been established and is available at http://thegleelab.org/iGHBP. |
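The evaluation metrics reported throughout this entry (MCC, accuracy, sensitivity, specificity and AUC) can be computed as in the following minimal sketch; the toy labels and scores are fabricated purely for illustration and do not reproduce the study's results.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             matthews_corrcoef, recall_score, roc_auc_score)

def evaluate(y_true, y_pred, y_score):
    """Threshold metrics plus AUC from the continuous prediction scores."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "MCC": matthews_corrcoef(y_true, y_pred),
        "accuracy": accuracy_score(y_true, y_pred),
        "sensitivity": recall_score(y_true, y_pred),   # tp / (tp + fn)
        "specificity": tn / (tn + fp),
        "AUC": roc_auc_score(y_true, y_score),
    }

# Toy example shaped like a 31 + 31 independent set.
rng = np.random.default_rng(0)
y_true = np.array([1] * 31 + [0] * 31)
y_score = np.clip(y_true * 0.4 + rng.random(62) * 0.6, 0, 1)
print(evaluate(y_true, (y_score >= 0.5).astype(int), y_score))
```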
31,412 | The Good School Toolkit for reducing physical violence from school staff to primary school students: A cluster-randomised controlled trial in Uganda | Exposure to physical violence in childhood is widespread and associated with increased risk of depressive disorders and suicide attempts,1 poor educational attainment,2 and increased risk of perpetrating or experiencing intimate partner violence in later relationships.3,4 Recent national surveys suggest that, at least in some settings, violence from school staff could be an important but overlooked contributor to the overall health burden associated with violence against children. More than 50% of men and women reported physical violence from teachers when they were aged 0–18 years in Tanzania,5 and in Kenya more than 40% of 13–17-year-olds reported being punched, kicked, or whipped by a teacher in the past 12 months; 13–15% had experienced the same from a parent.6 There are no nationally representative data in Uganda, but our own work in one district shows that more than 90% of children aged about 11–14 years report lifetime physical violence from school staff, with 88% reporting caning, and 8% reporting extreme physical violence such as ever being choked, burned, stabbed, or severely beaten up.7 4% had ever sought medical treatment for an injury inflicted by a staff member.7 In Uganda, corporal punishment has been banned by the Ministry of Education and Sports since 1997, although it is not fully illegal. Assessments of interventions to reduce physical violence from school staff in low-income and middle-income settings are almost entirely absent from the literature.8 One study in Jamaica that tested the Incredible Years intervention in preschools showed a large reduction in negative teacher behaviours9 and improvements in child conduct disorder,10 suggesting that it is possible to change teachers' violent behaviour; we are not aware of any other trials on the topic. Evidence before this study: We are not aware of any other trials of interventions which seek to reduce physical violence from school staff towards primary school children. Existing interventions to prevent violence in schools come mainly from high-income countries and have largely focused on childhood sexual abuse, bullying, and other violence between students, with less emphasis on violence from school staff. A large global systematic review of school and school environment interventions on a range of health outcomes found no studies that addressed physical violence from school staff to students. We did a systematic search of Medline, Embase, and ERIC from first record until January, 2013, and searched websites of various non-governmental organisations working on child protection, and found no trials. We have done updated searches in Medline from Jan 1, 2013, to Jan 13, 2015, with MeSH terms and keyword searches using the terms "corporal punishment", "physical violence", "school", and the clinical trial filter options, and have found no trials. Despite this lack of tested interventions, prevalence data indicate an unmet need. Where national surveys have been done in Kenya and Tanzania, they suggest that more than 50% of adolescents have experienced physical violence from school staff. Added value of this study: To our knowledge, this is the first trial of an intervention to reduce physical violence from school staff to primary school children. We therefore provide the first rigorous evidence that reducing this form of child maltreatment is possible. Implications of all the available evidence: Our results suggest that the Good School Toolkit can reduce physical violence from school staff to primary school children in Uganda. Further research is needed to explore the effectiveness of this intervention over longer time periods, at scale, and to explore other types of interventions to reduce this common form of child maltreatment. We report results of the Good Schools Study, which assessed the Good School Toolkit developed by the Ugandan not-for-profit organisation Raising Voices. Our main objective was to determine whether the Toolkit could reduce physical violence from school staff to students. The Good School Toolkit produces a large reduction in physical violence from school staff, as reported by primary school students in Luwero District, Uganda. There was some evidence that the Toolkit was more effective for male than female students, although it was highly effective for both sexes. The Toolkit also improved students' feelings of wellbeing and safety at school, suggesting that the intervention is effective in changing the school environment. The Toolkit did not affect student mental health or student educational test scores. Qualitative research shows the existence of norms supportive of "beating" as being necessary for positive child development in Tanzania19 and South Africa,20 as well as in some high-income settings such as the USA, where corporal punishment in schools is legal in 18 states.21 Change in attitudes and behaviours related to physical violence and punishment has occurred over several decades in Sweden, where 53% of parents supported corporal punishment of their children in 1965 versus 11% in 1994.22 Our results are highly encouraging because they demonstrate that it is possible to change an entrenched, normative behaviour such as the use of physical violence over the 18-month timescale of programme implementation. The Toolkit also positively affected students' feelings of safety and wellbeing at school, but contrary to our hypotheses, we did not find effects on mental health outcomes or educational test scores. According to our theory of change, improvements in school wellbeing and reductions in mental health symptoms should precede improvements in educational outcomes.11 It is possible that, over the timescale of the intervention implementation, these effects simply did not take place. It is also possible that staff are more likely to punish students with worse symptoms of externalising or internalising disorders. If this is the case, we would not expect reducing violent punishment to automatically reduce mental health symptoms. We also note that both mental health symptoms and educational outcomes are likely to be associated with a range of socioeconomic, familial, and structural factors outside of school, which might not be amenable to a school-based violence prevention programme. Most of the schools involved in the project, similar to other schools in Uganda, are faced with large structural issues related to poverty, for example large class sizes, poor physical infrastructure, and a lack of resources for teaching. Although students felt safer in intervention schools, it might be that improving the atmosphere at school is necessary but not sufficient to improve outcomes in the short term of our evaluation. Our study has a number of strengths, and also some limitations. Our results should be generalisable to other African settings: we sampled schools to be representative of larger schools in Luwero District, 100% of schools agreed to participate, and no schools dropped out of the study. However, students in Ugandan primary schools are slightly older than in some higher-income countries, which might affect the generalisability of our results to other primary school populations with different age profiles. We were well-powered to detect an effect, and had very high response rates and very low levels of missing data. We noted an instance of possible contamination during the trial, which would have biased effect estimates downwards, yet we still detected a large effect. Although we have made use of a standardised, internationally recognised and widely used questionnaire to measure self-reported violence exposure, our main limitation is that violence measures are by necessity reported rather than observed. We used student reports of violence outcomes as the most conservative test of the intervention effect. Reports from school staff about use of physical violence against students are likely to be biased in the same direction as the intervention effect, whereas student reports would likely be biased in the opposite direction. Third-party corroborated reports of violence exposure vastly underestimate prevalence compared with self-report,23 and medical record checks for injuries resulting from staff violence are not feasible in this context. Nonetheless, staff reports and students' reports of past-term exposure show very similar effect sizes and direction, lending support to our results. Similar to other complex social interventions, we were unable to mask participants or data collectors to allocation. This could have introduced bias towards a larger effect, but it is unlikely that this would entirely account for such a large observed difference. Given the prevalence of violence observed in our sample and in other surveys in the region, use of the Toolkit or similar programmes could have a major effect on the burden of child maltreatment in countries where violence from school staff is common. Further analyses are underway to explore the effect of the intervention on other forms of violence, including violence from peers. We note that, although we observed a large reduction in levels of violence, absolute prevalence of physical violence still remained high at 30% and 60% in the past week and past term, respectively. Further research is needed to examine whether the Toolkit can further reduce prevalence if implemented over longer time periods, to examine whether the effects of the Toolkit are sustainable without ongoing support from Raising Voices, and to examine the intervention effect at scale. Further research to address violence happening outside of schools is also needed. The Good Schools Study took place in 42 primary schools in Luwero District, Uganda, from January, 2012, to September, 2014. Luwero District is demographically similar to the rest of Uganda, according to the last Ugandan census in 2002. The intervention was implemented over 18 months, between September or October, 2012, and April or May, 2014. The study consisted of a cluster-randomised controlled trial, a qualitative study, an economic evaluation, and a process evaluation. The study was approved by the London School of Hygiene and Tropical Medicine Ethics Committee and the Uganda National Council for Science and Technology. Our protocol is registered at clinicaltrials.gov and is published elsewhere,11 and we present our main trial results here. We did a two-arm cluster-randomised controlled trial with parallel assignment. A cluster design was chosen because the intervention operates at the school
level.Using the official 2010 list of all 268 primary schools in Luwero as our sampling frame, we excluded 105 schools with fewer than 40 registered Primary 5 students and 20 schools with existing governance interventions implemented by Plan International.The remaining 151 schools were stratified into those with more than 60% girls, between 40 and 60% girls and boys, or more than 60% boys.42 schools were randomly selected using a random number generator in Stata, proportional to size of the stratum.42 was chosen on the basis of the number of schools in which Raising Voices could implement the intervention that would also give us power to detect a reasonably sized intervention effect.Stratified block randomisation was then used to allocate the schools to the two groups of the trial.Allowing for a loss to follow-up of two schools per group, and conservatively assuming interviews with 60 students per school, with a prevalence of past week physical violence of 50%7 and an intracluster correlation coefficient of 0·06,7 we had 80% power to detect a 13% difference in the prevalence of reported violence between the intervention and control groups with 5% statistical significance.All headteachers agreed for their schools to participate in the study and schools were enrolled by Raising Voices staff and JC.Cross-sectional baseline and endline surveys were conducted at schools in June or July, 2012, and June or July, 2014, respectively.We chose this design rather than a cohort design to avoid problems related to attrition of individual students, and because our main aim was to measure prevalence at follow-up.Parents were notified and could opt children out, but children themselves provided consent.Up-to-date lists of all P5, 6, and 7 students were obtained from each school, and a simple random sample of up to 130 P5, 6, and 7 students were invited for individual interviews where surveys were administered.If there were fewer than 130 P5–7 students in a school, all were invited for interview.Implementation of the intervention was school-wide, but data was collected from P5–7 students only, because they were able to respond to questions in survey format.All those who could speak Luganda or English and who were deemed by interviewers to be able to understand the consent procedures were eligible.All school staff were invited to participate and provided informed consent.At least one repeat visit was made to find staff and sampled students absent on the day of the survey.Schools were stratified into 12 blocks on the basis of whether they were urban or rural, whether their baseline prevalence of past week violence was above or below the baseline median of past week physical violence from school staff of 55%, and a qualitative assessment of high or low likelihood of attrition from the trial.LA used block randomisation to generate allocation lists.At a meeting of all school headteachers, a representative from each school within a block was invited to place their school name in an opaque bag.Names were then drawn from the bag by a headteacher nominated by the group and schools were then allocated to intervention or wait-list control conditions in the sequence on the allocation list, recorded by JC.Owing to the nature of the intervention, it was not possible to mask participants.Allocation was not intentionally revealed to those collecting data; however, given the nature of the intervention, they should also be regarded unmasked.The statistician was masked to allocation while performing analyses.Potential risks 
related to the intervention itself were minimal, but we anticipated that during survey data collection we would detect children in need of support from child protective services.Children were informed during the consent process that their details might be passed on to child protection officers.Referrals were based on predefined criteria agreed with service providers, related to the severity and timing of violence reported.12,All children were offered counselling regardless of what they disclosed.Any adverse effects of the intervention itself were monitored during regular visits to schools by the dedicated study monitoring officer.Monitoring data were collected termly via structured classroom observations, formal and informal interviews with staff, and observations of students.No major changes to the trial protocol were made.We made changes to our child protection referral strategy after the baseline survey.These experiences at baseline are published elsewhere,12 and will be described in a separate publication for the follow-up.The Good School Toolkit is a complex behavioural intervention which aims to foster change of operational culture at the school level.The Toolkit draws on the Transtheoretical Model,13 and contains behavioural change techniques that have been shown to be effective in a variety of fields14 and have been included in interventions to change teacher behaviour in primary schools9,15 and to reduce perpetration of intimate partner violence.16,The Toolkit materials consist of T-shirts, books, booklets, posters, and facilitation guides for about 60 different activities.These activities are related to creating a better learning environment, respecting each other, understanding power relationships, using non-violent discipline, and to improving teaching techniques.More details are provided in the panel.The primary outcome was past week physical violence from a school staff member, self-reported by students according to the International Society for the Prevention of Child Abuse and Neglect Screening Tool—Child Institutional.17,Secondary outcomes were safety and wellbeing in school, mental health status,18 and scores on educational tests.All were measured with instruments widely used internationally which have been validated in a variety of settings.Instruments were translated where necessary, and some items and time frames for recall added to the ICAST to capture the Ugandan context.All were pretested for understanding and piloted before the baseline survey.Analysis of our baseline data shows construct validity for our primary outcome in our sample, with children reporting past week physical violence also reporting high levels of mental health difficulties.We did an intention-to-treat analysis using data from our cross-sectional follow-up survey.All analyses were done in Stata/IC 13.1.Data were collected using a survey programmed into tablet computers with algorithms designed to eliminate erroneous skips.Most educational test data were collected on paper; all were double scored, and double entered.All trial outcomes were measured at the level of individual student participants.Analysis was done with individual level student data, accounting for clustering of students within schools using mixed-effects regression models.“Unadjusted” analyses of continuous outcomes control for the school-level mean of the outcome at baseline."Adjusted analyses control additionally for students' sex, whether or not they had a disability, and their school's location and baseline level of past week 
physical violence from school staff.For non-normally distributed continuous outcomes, 95% confidence intervals were estimated by use of 2000 bootstrap replications.All covariates were specified a priori.We planned a priori to conduct subgroup analyses by sex, by urban or rural location of the school, and by baseline levels of past week physical violence from school staff.None of the funding sources played a role in the design of the study, data collection, analysis, interpretation, or writing of the results.The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.42 schools participated in the baseline and endline survey, and 3814 of sampled students were interviewed at endline.School, student, and staff characteristics were evenly distributed across study groups at baseline.Most students were aged 11–14 years, 52% were female, and 7·3% reported some form of disability.More than half reported eating fewer than three meals in the day before the survey.Staff were in their mid-30s, and nearly 60% were female.Demographic characteristics of staff and students at the endline survey are reported in the appendix.At baseline, levels of each primary and secondary outcome were similar across groups.54% of students reported past week physical violence from school staff.The mean score on the Strengths and Difficulties Questionnaire was 0·47, and the mean wellbeing score was 10·9.Educational test scores at baseline were comparable across groups, and showed low levels of reading ability.At follow-up, 80·7% of students in the intervention group had completed the previous grade in the same school that they were currently in, and 89·1% of staff had worked in their current school for more than 1 year and thus would have been exposed to at least some intervention activities.Levels of absenteeism were high, with about 20% of students surveyed indicating that they had been absent in the past week.During the trial, we recorded one major incident of contamination, where an intervention school invited head teachers from three neighbouring control schools to an event about the Toolkit.The control schools did not do any further activities and did not receive any support from Raising Voices, and the intervention school was asked not to invite other schools to its events until after the trial.In the follow-up cross-sectional survey, 595 of 1921 students in the intervention group reported past week physical violence from school staff, versus 924 of 1899 students in the control group.After accounting for clustering between students within schools, there was a 60% reduction in the odds of our physical violence outcome.This corresponds to a 42% reduction in risk of past week physical violence from school staff.The Strengths and Difficulties Questionnaire total scores did not differ between groups at follow-up, and were similar to baseline scores, indicating that there was no detectable effect of the intervention on this outcome after 18 months of implementation.Levels of school wellbeing were higher in the intervention than control group, however.There was no evidence that the intervention had an impact on any educational test scores.There was weak evidence that the intervention had a stronger effect in male students than female students, but there was no evidence that the intervention effect differed by urban or rural location or baseline level of past week violence from school staff."We did supplementary analyses to examine students' 
self-reports of physical violence from school staff in the past term, and staff reports of their use of physical violence against students.Students in intervention schools reported lower levels of past term violence.Staff in the intervention group also reported using less violence in the past week than those in the control group.No adverse effects of the intervention itself were detected via monitoring visits.At follow-up, 434 of 3820 children were referred because they disclosed severe violence in the survey. | Background: Violence against children from school staff is widespread in various settings, but few interventions address this. We tested whether the Good School Toolkit-a complex behavioural intervention designed by Ugandan not-for-profit organisation Raising Voices-could reduce physical violence from school staff to Ugandan primary school children. Methods: We randomly selected 42 primary schools (clusters) from 151 schools in Luwero District, Uganda, with more than 40 primary 5 students and no existing governance interventions. All schools agreed to be enrolled. All students in primary 5, 6, and 7 (approximate ages 11-14 years) and all staff members who spoke either English or Luganda and could provide informed consent were eligible for participation in cross-sectional baseline and endline surveys in June-July 2012 and 2014, respectively. We randomly assigned 21 schools to receive the Good School Toolkit and 21 to a waitlisted control group in September, 2012. The intervention was implemented from September, 2012, to April, 2014. Owing to the nature of the intervention, it was not possible to mask assignment. The primary outcome, assessed in 2014, was past week physical violence from school staff, measured by students' self-reports using the International Society for the Prevention of Child Abuse and Neglect Child Abuse Screening Tool-Child Institutional. Analyses were by intention to treat, and are adjusted for clustering within schools and for baseline school-level means of continuous outcomes. The trial is registered at clinicaltrials.gov, NCT01678846. Findings: No schools left the study. At 18-month follow-up, 3820 (92.4%) of 4138 randomly sampled students participated in a cross-sectional survey. Prevalence of past week physical violence was lower in the intervention schools (595/1921, 31.0%) than in the control schools (924/1899, 48.7%; odds ratio 0.40, 95% CI 0.26-0.64, p<0.0001). No adverse events related to the intervention were detected, but 434 children were referred to child protective services because of what they disclosed in the follow-up survey. Interpretation: The Good School Toolkit is an effective intervention to reduce violence against children from school staff in Ugandan primary schools. Funding: MRC, DfID, Wellcome Trust, Hewlett Foundation. |
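A note on the calculations described above: the cluster-adjusted sample size and the conversion of the adjusted odds ratio into an approximate risk reduction can be reproduced with standard formulas. The sketch below is illustrative only; the trial's own calculations were done in Stata, and the function names, the simple design-effect inflation and the Zhang-Yu odds-ratio-to-risk-ratio approximation are assumptions made for this example rather than details taken from the published analysis.

```python
# Illustrative sketch (not the trial's actual Stata code): a standard two-proportion
# sample-size calculation inflated by the design effect for cluster randomisation,
# plus conversion of an adjusted odds ratio to an approximate risk ratio.
from scipy.stats import norm

def clusters_per_arm(p1, p2, m, icc, alpha=0.05, power=0.80):
    """Clusters needed per arm to detect p1 vs p2 with m individuals per cluster."""
    z_a = norm.ppf(1 - alpha / 2)          # two-sided significance level
    z_b = norm.ppf(power)                   # desired power
    n_ind = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    deff = 1 + (m - 1) * icc                # design effect for clustering
    return (n_ind * deff) / m

def odds_ratio_to_risk_ratio(or_, p0):
    """Approximate risk ratio implied by an odds ratio, given control-group risk p0."""
    return or_ / (1 - p0 + p0 * or_)

if __name__ == "__main__":
    # Trial assumptions: 50% baseline prevalence, 13 percentage-point difference,
    # 60 students per school, intracluster correlation coefficient 0.06.
    k = clusters_per_arm(p1=0.50, p2=0.37, m=60, icc=0.06)
    print(f"~{k:.1f} schools per arm before allowing for loss to follow-up")

    # Reported adjusted OR of 0.40, with 924/1899 past-week violence in control schools
    rr = odds_ratio_to_risk_ratio(0.40, 924 / 1899)
    print(f"implied risk ratio ~{rr:.2f} ({1 - rr:.1%} risk reduction)")
```

Under these assumptions the sketch returns roughly 17 schools per arm before allowing for attrition, in the same range as the 21 schools per group actually randomised, and an implied risk reduction close to the 42% reported; the small differences reflect the approximations used here and the covariate adjustment in the trial's mixed-effects models.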
31,413 | A roadmap for gene system development in Clostridium | The genus Clostridium have long been recognised as a large and disparate grouping of bacteria that has representatives of importance to both human and animal diseases as well as to the industrial production of chemicals and fuels.Whilst the majority of those species responsible for human and animal diseases have been known for decades, in recent years the importance of members of the class clostridia in the gut microbiome has become ever more apparent .On the other hand, the desire to exploit an ever widening diversity of species for biotechnological purposes has intensified.Of particular note, is the growing momentum behind the industrialisation of gas fermentation for chemical and fuel production using clostridial acetogens.Acetogenic bacteria, typified by Clostridium autoethanogenum , are able to capture carbon through gas fermentation, allowing them to grow on a spectrum of waste gases from industry to produce ethanol .They can also consume ‘synthesis gas’ made from the gasification of renewable/sustainable resources, such as biomass and domestic/agricultural waste.Acetogenic gas fermentation can, therefore, produce ethanol in any geographic region without competing for food or land.Indeed, the commercialisation of ethanol production from ArcelorMittal Steel Mill off-gas is now at an advanced stage .When fully scaled, it could enable the production in Europe of around 500,000 tons of ethanol a year.Intriguingly, Clostridium difficile carries the pivotal genes required for CO/CO2 fixation, the Wood Ljungdahl pathway, and is reported to be able to grow on CO as a carbon source .Given the increasing numbers of clostridial species that need to be more thoroughly characterised, either to counter the diseases they cause or to exploit their beneficial properties, the implementation of genetic systems is required.Accordingly, SBRC Nottingham has formulated a roadmap for gene system development in any clostridial species.The roadmap revolves around exploitation of the dual, and opposite, phenotypes conferred on the cell by mutant and wildtype pyrE alleles.The presence of the former makes the host a uracil auxotroph that is resistant to fluoroorotic acid, while the latter confers sensitivity to FOA but is a uracil prototroph.Selective cycling between these two alleles allows recombination-based genome editing by ‘knock-out’ and ‘knock-in’.Importantly, the availability of a mutant pyrE allele may be used as a locus for the rapid genome insertion of DNA for both complementation studies and for the insertion of application specific modules.Using pyrE mutant hosts, therefore, presents considerable advantages for all mutational studies, regardless of the mutagen employed.Implementation of the roadmap is reliant on two fundamental developments being in place, namely: the availability of a fully annotated genome sequence, and; a means of introducing DNA into the clostridial host.The availability of an annotated genome sequence is central to gene system development, not only to identify gene targets, but additional to assist in overcoming RM barriers.The closure of whole genome sequences, however, is hindered by the presence of long stretches of repetitive DNA which can prevent scaffold assembly of the shorter DNA reads generated by commonly used technologies, such as, Illumina MiSeq, Ion Torrent and 454 GS FLX + Titanium.In these cases, the read lengths generated are unable to cover the repetitive sequence lengths of 5–7 Kb commonly found 
in bacteria. Genome closure therefore requires expensive and time-consuming manual finishing. PacBio have developed Single Molecule Sequencing Technology which is capable of generating read lengths in excess of 15 Kb and currently, therefore, represents the technology of choice for determining whole genomes. However, compared to Illumina sequencing the error rate for PacBio sequencing is relatively high, particularly across homopolymer regions between two and fourteen base pairs in length. Accordingly, it is advisable to combine PacBio sequencing with Illumina sequencing, mapping the latter reads to the PacBio-derived reference assembly. Following the correction of any errors in the determined closed genome, it can be submitted to one of a number of online facilities for automated annotation. For example, the Integrated Microbial Genomes system at DOE's Joint Genome Institute provides such an annotation service. It should be noted that, in those instances where the genome sequence of the clostridial species being used is published, the assumption should not be made that the sequence is correct. For instance, the first clostridial genome sequence to be determined was subsequently shown to contain errors, including 48 insertions/deletions. Similarly, the PacBio-derived genome sequence of Clostridium autoethanogenum DSM 10061 was shown to contain 243 SNVs. It is equally important not to make the assumption that the laboratory isolate being used has an identical sequence to that published. SNVs and Indels can arise, particularly if the strain has been passaged through single colonies. Thus, a strain of Clostridium acetobutylicum ATCC 824 in common use in a number of European laboratories was shown to possess 2 SNVs and 1 deletion in comparison to the strain deposited at the ATCC. More telling are the changes identified in the erythromycin resistance derivative, 630Δerm, of the Clostridium difficile strain 630. These equated to 71 differences between the two strains, encompassing 8 deletions, 10 insertions, 2 insertion-deletions, 50 substitutions and 1 region of complex structural variation. Prevention of DNA transfer by host RM systems is highly strain specific. Indeed, there are many instances where restriction has not been a problem, e.g. Clostridium beijerinckii NCIMB 8052, Clostridium perfringens strain 13, C. difficile strains CD37 & CD630 and Clostridium botulinum ATCC 3502. Genome sequencing has shown that many of these organisms carry at least one type II methylase gene, but they lack genes encoding the cognate restriction enzymes. Thus, for instance, the genomes of C. botulinum ATCC 3502 and C. perfringens strain 13 contain orphan copies of methylase genes, and are both readily transformable in the absence of any measures to circumvent restriction barriers. In many instances, however, the successful transfer of extrachromosomal elements, either by transformation or conjugation, has required the circumvention of the activity of endogenous restriction-modification systems. This is achieved through appropriate methylation of the vector DNA to be introduced. Determination of the nature and specificity of the enzymes involved has been achieved in a variety of ways. In early work, experimental approaches predominated, in which restriction activity was initially detected in bacterial lysates, after which the restriction and the methylation specificity of the RM system was determined and then countered through the deployment of an appropriate methylase activity in the Escherichia coli donor with the requisite specificity. Early examples include C. acetobutylicum ATCC 824, Clostridium cellulolyticum ATCC 35319, C. botulinum ATCC 25765 and C. difficile CD3 and CD6, and more recently Clostridium pasteurianum and Clostridium cellulovorans. In recent years, there has been an increasing emphasis on using genome sequences to identify potential RM systems and using available gene knock-out systems to inactivate the identified restriction systems. Thus, successful DNA transfer in C. cellulolyticum was achieved by the inactivation of a putative MspI-like endonuclease gene, ccel2866, while DNA transfer in C. acetobutylicum ATCC 824 and DSM 1731 in the absence of methylation of the transferred plasmid was achieved by inactivation of the gene encoding the type II restriction endonuclease Cac824I using either ClosTron mutagenesis or allelic exchange. In the latter study, inactivation of a second gene encoding an additional type II restriction enzyme led to a further 8-fold increase in electroporation frequency. A similar approach was undertaken to inactivate the previously identified Type II restriction gene CpaAI in C. pasteurianum, dispensing with the need to methylate plasmids in the E. coli donor using M. FnuDII methylase prior to DNA transfer. Whilst the majority of studies have focussed on Type II systems, a recent report demonstrated that the barrier presented by Type I systems can be negated by cloning into the E. coli donor both the methylase and specificity subunits of two type I systems identified in Clostridium saccharobutylicum NCP 262. The resultant protection of the mobilisable cloning vector in the E. coli Top10 donor led to the successful transfer of a shuttle vector to C. saccharobutylicum in a triparental mating using the conjugative donor strain CA434. These same authors went on to individually inactivate the hsdR components of the RM1 and RM2 systems using ClosTron mutagenesis, achieving a 10-fold and 8-fold increase in transfer frequencies, respectively. In the past, the specificity of methylation systems was determined experimentally, for example by showing that homologous genes encoding methylases of known specificity protected the vector to be transferred; more recently, methylation specificities can be inferred from genome sequence data submitted to online analysis services, where the necessary analysis is undertaken free of charge. But for a brief dalliance with PEG-mediated protoplast procedures, only two methods are routinely pursued in terms of obtaining DNA transfer to clostridial hosts: electroporation and conjugative mobilisation from a shuttle host, invariably E. coli. In terms of conjugative plasmid transfer, the method of choice is to use conjugation between an E. coli donor and the clostridial recipient. The majority of methods rely on the oriT-mediated mobilisation of plasmids by the transfer functions of IncP plasmids, either located autonomously or integrated in the genome. The commonest origin of transfer employed is that of RK2. Methods are largely based on the pioneering work of Williams et al. in C. acetobutylicum, later replicated in C. difficile, although in some instances an alternative oriT region, from the broad-host range transposon Tn916, has been used with certain strains of C.
difficile .In the case of electroporation, empirical changes are made to a multitude of parameters to achieve the highest transfer frequencies.These include preparing cells at different phases of growth, including cell weakening agents in the media, the use of different buffers for preparation of competent cells, and the different electrical parameters of the electric pulse amplified, as well as its duration.The allelic exchange methodologies developed in this laboratory are all reliant on replication defective plasmids .When such plasmids encode an antibiotic resistance gene, typically catP, they are maintained within the cell by antibiotic selection, thiamphenicol.Under such situations, the rate of growth of the population is determined by the rate at which the plasmid is segregated to the daughter cells.If the plasmid is endowed with a region of homology to the chromosome, then those rare cells in which the plasmids integrate via homologous recombination now have a growth advantage because every daughter cell carries a copy of the catP gene.The integrated sub-population, therefore, has a growth advantage over those cells in which the plasmid remains autonomous.This growth advantage manifests itself as visibly larger colonies on agar media.We have termed such plasmids, pseudo-suicide vectors .It follows that an early stage in the application of the roadmap to a particular clostridial species is to determine the most defective replicon of those available.This undertaking has been simplified by the creation of the pMTL80000 modular vector series .The pMTL80000 vector series represent a standardised plasmid set in which each module is localised to a defined restriction fragment bounded by one of four rare 8 bp palindromic sequences, corresponding to the restriction recognition sites of the endonucleases SbfI, AscI, FseI and PmeI .Modules correspond to the Gram-negative replication region, the Gram-positive replicon, an antibiotic resistance gene, and an application specific module.These modules are always arranged in the same order, viz., PmeI-SbfI, SbfI-AscI, AscI-FseI and FseI-PmeI.At the time, 18 modules were available, including 5 different Gram-positive replicons, those of the plasmids pBP1, pCB102, pCD6 and pIM13.The modules are numbered to allow the easy identification of the components present in any particular plasmid.This system allows the combinatorial construction of shuttle plasmids from modules in the standard format.It also provides for the quick and easy modification of existing pMTL80000-based plasmids.All of the pMTL allelic exchange vectors, transposon vectors and ACE vectors described in this review conform to this modular format.Moreover, numerous other modules have been added to the system, including new replicons.Updates can be found at http://chainbiotech.com/modular-plasmids/.To determine the most defective replicon, the vectors are transferred to the target clostridial strain and their segregational stability assessed.This can be determined either by measuring the growth rate in the presence of antibiotic, or by growing in the absence of antibiotic and then estimating the number of cells that have lost the plasmid.The latter involves either plating cells on agar media with and without antibiotic and comparing the cfu/ml, or by plating onto media lacking antibiotics and then patch plating onto agar media with and without antibiotic.It should be noted that aside from the different properties of replicons in the different clostridial species, they can also show wide 
variation in different strains of the same species.For example, plasmids based on the pBP1 replicon are the least stable in the C. difficile strain R20291, whereas this represents the most stable replicon in strain 630.The pyrE gene encodes orotate phosphoribosyl transferase, responsible for the conversion of the pyrimidine intermediate orotic acid into orotidine 5′-monophosphate.FOA is an analogue of orotic acid and is converted by the same enzyme into 5-fluoroorotidine monophosphate which is subsequently converted to 5-fluorouridine monophosphate, instead of UMP.Accumulation of 5-FUMP is toxic and leads to cell death .It follows that the inactivation of pyrE prevents the accumulation of 5-FUMP and therefore, confers on the host a FOAR phenotype.The first step of the roadmap is to generate a pyrE mutant using ACE.The isolation of this mutant is facilitated by the fact that such mutants become resistant to FOA.As it is not clear what FOA concentration to use in selective media it is preferable to use ClosTron technology to rapidly generate a pyrE mutant, and then use that mutant to establish the FOA supplemented media required to distinguish pyrE mutants from wild type.The ClosTron is one of the most used clostridial gene knock-out systems and is a derivative of the Sigma Aldrich Targetron system .By making a handful of nucleotide changes to the group II intron encoding region, the intron can be directed to insert into almost any region within the genome.Through the use of a Retrotransposition-Activated Marker based on the ermB gene, successful insertion is selected on the basis of acquisition of resistance to erythromycin.The re-targeted region is designed using an online re-targeting algorithm, and an order placed with DNA2.0 for both the synthesis of the retargeted region AND its custom cloning into the ClosTron vector.Re-targeted ClosTrons are delivered ready for use in 10–14 days, allowing mutants to be isolated 5–7 days after receipt.This dispenses with the need to purchase Sigma Aldrich kits, or pay to use their algorithm.Standard protocols are deployed to implement ClosTron technology in the chosen Clostridium, to generate a pyrE mutant, requiring exogenous uracil.Once obtained, the concentration of FOA required to select for pyrE minus cells is determined empirically.As pyrE mutants are auxotrophic, the media also has to contain exogenous uracil.Generally speaking, the level of exogenous uracil added is in the range of 5–50 μg/ml, while the FOA supplementation can be as low as 800 μg/ml to as high as 3.5 mg/ml.Having established the most defective Gram-positive replicon, an ACE vector is constructed using this replication region based on the pMTL80000 modular format to inactivate pyrE.Following integration of the ‘pseudo-suicide’ plasmid by single-crossover recombination, the system is designed such that during the desired second recombination event, a plasmid borne allele becomes ‘coupled’ to a genome located allele which leads to the creation of a new selectable allele, allowing the isolation of double-crossover cells.The use of highly asymmetric homology arms dictates the order of recombination events.A long, right homology arm directs the first recombination event and a much shorter left homology arm directs the second recombination event.The ease and rapidity of ACE allows the sequential extension of operon size and complexity through repeated cycles of the method, as demonstrated through iterative insertion of the entire lambda genome into the C. 
acetobutylicum genome as well as synthetic mini-cellulosome operons. As indicated, following transfer of the ACE vector into the cell, single cross-over integrants are selected based on faster growing, larger colonies. Such integrants are invariably integrated via the LHA, due to its greater size compared to the RHA. These faster growing colonies are then streaked out onto minimal media lacking thiamphenicol and supplemented with FOA and uracil at the concentration determined using the ClosTron mutant. Those FOAR cells that arise will represent deletion mutants in which the desired second crossover event has occurred and the excised plasmid has been lost. Phenotypically they are, therefore, FOAR, uracil minus and Tm sensitive. Their authenticity is checked by using oligonucleotide primers that flank the pyrE gene to PCR amplify a DNA fragment, which, in addition to being of a smaller size compared to the wild type, is confirmed to encompass the expected deletion event by nucleotide sequencing. As the pyrE mutant is now resistant to FOA, the introduction of a functional copy of a pyrE gene will lead to restoration of sensitivity. As such, the introduced gene can be used as a counter selection marker. Accordingly, KO pseudo-suicide vectors can be constructed carrying a suitable KO cassette, an antibiotic resistance marker, a heterologous functional pyrE gene and the selected defective replicon. The plasmid is introduced into the cell, and clones in which the plasmid has integrated via homologous recombination, between one or other of the two homology arms and the corresponding complementary DNA in the chromosome, are selected on the basis of their larger colony size on agar media supplemented with Tm. Selection and restreaking of the faster growing colonies allows the isolation of single crossover integrants, as determined by PCR screening using an appropriate combination of primers complementary to regions flanking the homology arms and vector encoded sequences. The isolation of pure single crossover populations is essential, as the presence of substantive sub-populations of cells carrying autonomous plasmids can lead to high counts of spurious mutants in the presence of the counter selection agent. Examples of the KO vectors constructed based on this principle are pMTL-YN3 and pMTL-YN4, which are used for KO in C. difficile strain 630 and the PCR-ribotype 027 strain R20291, respectively. The former vector uses the pCB102 replicon, due to its comparatively greater defectiveness in strain 630, while plasmid pMTL-YN4 is based on the pBP1 replicon, which is the most unstable replicon in strain R20291. In the case of C. acetobutylicum ATCC 824, the replicon of choice, and the one used in the KO vector pMTL-ME3, is that of pIM13. The heterologous pyrE gene used in the case of both organisms was that of C. sporogenes ATCC 15579. Traditionally, KO cassettes may be individually generated as separate Left and Right Homology Arms that are sequentially cloned, using created restriction sites, into corresponding vector restriction sites. Alternatively, they can be commercially synthesized, as either the entire DNA fragment or, if appropriately designed, they can be assembled from smaller fragments using procedures such as G-Blocks, USER cloning, ligase cycling or Golden Gate. Alternatively, the two DNA fragments comprising the LHA and RHA may be joined prior to cloning using Splicing Overlap Extension PCR. The robustness and reliability of the method were initially demonstrated in C.
difficile through the creation of in-frame deletions in spo0A, cwp84, and mtlD in strain 630Δerm and spo0A and cwp84 in R20291 , and in C. acetobutylicum ATCC 824 the spo0A, cac824I, amyP and glgA genes .The procedure has proven equally effective using the alternative negative selection marker codA in both C. difficile and C. acetobutylicum .This gene encodes cytosine deaminase, which catalyzes the conversion of cytosine to uracil.It also converts the innocuous pyrimidine analogue 5-fluorocytosine into the highly toxic 5-fluorouracil.FU toxicity occurs as a result of the irreversible inhibition of thymidylate synthase, a key enzyme in nucleotide biosynthesis.Once the pyrE mutant is generated, an ACE correction vector is constructed, designed to restore pyrE to wild type.In this instance, following the isolation of a single crossover integrant, the desired double crossover event is simply selected by plating on media lacking uracil.That is to say, the auxotrophic mutant is converted back to prototrophy.The nature of the deletion is such that reversion to prototrophy cannot occur by any other means than ACE-mediated replacement of the defective allele with a wild type version.In other words, there are no false positives.The prototrophic cells now become FOA sensitive.Crucially, the system provides the in parallel opportunity to complement the mutant at an appropriate gene dosage, through the insertion of a functional wild type copy of the gene, into the genome, either under the control of its native promoter or the strong Pfdx promoter, concomitant with restoration of the pyrE allele back to wild type .The extra effort involved in the deployment of ACE vectors compared to the use of autonomous complementation vectors is minimal.They require the same amount of effort in terms of construction and transfer into the desired bacterial host.Mutants transformed with autonomous complementation plasmids need to be purified by restreaking, whereas an ACE complementation transformant merely needs to be restreaked onto minimal agar media lacking uracil, and those colonies that grow purified by restreaking.The extra effort, therefore, equates to the time it takes for uracil prototrophic colonies to develop, ca. 2–3 days in the case of C. 
difficile for instance.The efficiency of ACE is such that success is assured and moreover, false positives cannot arise as reversion of the pyrE deletion is impossible.Although the effort required for ACE-mediated complementation is minimal, the benefits are considerable.It avoids the phenotypic effects frequently observed with high copy number plasmids and dispenses with the need to add antibiotic to ensure the retention of the complementing plasmid.Such antibiotic addition can affect phenotype and necessitate the inclusion in any phenotypic assessments of the mutant a vector only control.Moreover, the pyrE allele represents an ideal position where other application specific modules may be inserted into clostridial genomes, such as a sigma factor to allow deployment of a mariner transposon , hydrolases for degrading complex carbohydrates , therapeutic genes in cancer delivery vehicles or the addition of an ermB gene to improve the reproducibility of the virulence of the NAP1/B1/027 epidemic strain R20291 in the hamster model of infection .The benefits of the presence of the pyrE locus are such, that there is a rational argument for using pyrE mutant hosts, and their cognate ACE correction vectors with any particular mutagen, including the ClosTron and any of the alternative negative selection markers developed in recent years.These include the E. coli genes codA and mazF exemplified in C. difficile and C. acetobutylicum , respectively, and the Thermoanaerobacterium saccharolyticum tdk and Clostridium thermocellum hpt genes, which were used to make knock-outs in C. thermocellum .The use of pyrE host would also find utility in the recently published recombineering approach developed for use in C. acetobutylicum and C. beijerinckii , as well as allelic exchange mutants made using CRISPR genome editing .A suite of pyrE ACE vectors are available to either: correct the mutant pyrE allele of deletion mutants made in either C. difficile or C. acetobutylicum.In C. difficile strains the correction vectors for strains 630ΔermΔpyrE and R20291ΔpyrE are pMTL-YN1 and pMTL-YN2, respectively.Those vectors that allow the simultaneous complementation of an inactivated gene concomitant with restoration of prototrophy are pMTL-YN1C and pMTL-YN2C, while those that bring about overexpression of the complementing gene are pMTL-YN1X and pMTL-YN2X.In C. acetobutylicum the respective vectors are pMTL-ME6, pMTL-ME6C and pMTLME6X.Equivalent vector sets, are available in this laboratory for C. beijerinckii, C. botulinum, C. perfringens, C. sporogenes, C. pasteurianum, Clostridium ljungdahlii and C. autoethanogenum.Further vectors are also under development for a number of other clostridial species.One exemplification of the utility of ACE and the pyrE locus for inserting application specific modules is that used to derive a universal transposon system for Clostridium sp .In early work, we had demonstrated the utility of a mariner based plasmid system in C. difficile in which the mariner transposase gene was placed under the control of the promoter of the C. difficile toxin B gene, tcdB.This promoter is exclusively recognised by a specialised class of sigma factor, TcdR, which belongs to a family that is unique to a handful of toxinogenic clostridial species .As E. coli does not produce an analogous sigma factor, the promoter is poorly recognised by this host, a feature that prevents transposon activity in E. 
coli prior to transfer of the vector. Avoidance of transposition activity in the donor strain prior to transfer to the clostridial recipient is a desirable attribute, as transferred plasmids could potentially either become devoid of the catP-based mini-transposon or indeed become in some way functionally affected by its insertion into the vector at a new position. Essentially, the system may be considered a conditional expression system, where transposase expression is limited to the clostridial host. As other clostridial species lack a functional equivalent to TcdR, we reasoned that if we were to introduce the encoding gene into the genome at the pyrE locus using ACE, then the pMTL-SC1 transposon vector should be functional in the new host. Accordingly, the tcdR gene was inserted into the genomes of both C. acetobutylicum and C. sporogenes using the ACE complementation vector pMTL-ME6C and a functionally equivalent plasmid developed for C. sporogenes. Successful integrants were selected merely on the basis of restoration of uracil prototrophy. In both cases, the genes were cloned without the tcdR promoter region and were therefore reliant on the upstream promoters responsible for pyrE expression. This level of expression apparently had no effect on the phenotype of the strains generated, based on the observed absence of any effects on growth rate, sporulation frequency and measured metabolic products. The level of expression was, however, sufficient for effective expression of tcdR, as transposition of the mini-transposon was readily detected at a frequency of 2.6 × 10−4 and 3.2 × 10−4 in C. acetobutylicum and C. sporogenes, respectively. As was previously the case with C. difficile, inverse PCR on the C. acetobutylicum and C. sporogenes transposon mutants demonstrated that just a single insertion had taken place in the overwhelming majority of cases and that insertion had taken place principally within protein coding sequences. The latter frequency is consistent with the fact that 80% of the clostridial genome is protein coding. The utility of the system was further exemplified by the isolation of mutants phenotypically affected in sporulation/germination, as well as auxotrophic strains which could no longer grow on minimal media. The utility of the system was further improved by the development and deployment of a plasmid delivery vehicle that was conditional for replication. This was achieved by placing an IPTG inducible promoter upstream of the pCB102 replicon. In the absence of inducer the plasmid replicated normally. Upon addition of IPTG, the plasmid was rapidly lost: 80% in the case of C. sporogenes and 100% in the case of C. acetobutylicum. The ability to rapidly lose the plasmid represented a considerable improvement on pMTL-SC1, which was based on a pseudo-suicide replicon and required a minimum of two passages of the recipient bacteria to eradicate the Himar1 C9 transposase encoding plasmid. The conditional vector has been shown to function effectively in C. beijerinckii, C. botulinum and C. autoethanogenum, but cannot be used in C. difficile. The latter observation is because C. difficile apparently does not take up IPTG. Functionality in this particular clostridial host, or indeed any clostridial host where a similar obstacle is encountered, will require the substitution of the IPTG inducible promoter with a different inducible system, e.g., the anhydrotetracycline inducible system exemplified in C. difficile or the lactose inducible system of C.
perfringens .The key steps involved in the roadmap are summarised in Fig. 4.Through implementation of the outlined procedures it is possible to formulate a generally applicable toolbox for use in potentially any Clostridium spp.To date, provided DNA transfer is obtained, we have not encountered any clostridial species where this technology cannot be applied.Thus, all clostridia contain the requisite pyrimidine pathway, the inactivation of which leads to uracil auxotrophy and FOAR.ClosTron technology appears universally applicable and at least one, most usually all, of the modular replicons available have proven functional.In the unlikely event that they are not, the modular nature of our vectors mean that a new functional replicon can be rapidly substituted.Aside from the use of pyrE alleles as counter selection markers, their most useful attribute resides in the use of the mutant allele, in combination with ACE, as a locus for genome insertion, be it for complementation studies or for the insertion of application specific modules.The case for using such hosts for all mutational studies, regardless of the mutagen, is compelling. | Clostridium species are both heroes and villains. Some cause serious human and animal diseases, those present in the gut microbiota generally contribute to health and wellbeing, while others represent useful industrial chassis for the production of chemicals and fuels. To understand, counter or exploit, there is a fundamental requirement for effective systems that may be used for directed or random genome modifications. We have formulated a simple roadmap whereby the necessary gene systems maybe developed and deployed. At its heart is the use of ‘pseudo-suicide’ vectors and the creation of a pyrE mutant (a uracil auxotroph), initially aided by ClosTron technology, but ultimately made using a special form of allelic exchange termed ACE (Allele-Coupled Exchange). All mutants, regardless of the mutagen employed, are made in this host. This is because through the use of ACE vectors, mutants can be rapidly complemented concomitant with correction of the pyrE allele and restoration of uracil prototrophy. This avoids the phenotypic effects frequently observed with high copy number plasmids and dispenses with the need to add antibiotic to ensure plasmid retention. Once available, the pyrE host may be used to stably insert all manner of application specific modules. Examples include, a sigma factor to allow deployment of a mariner transposon, hydrolases involved in biomass deconstruction and therapeutic genes in cancer delivery vehicles. To date, provided DNA transfer is obtained, we have not encountered any clostridial species where this technology cannot be applied. These include, Clostridium difficile, Clostridium acetobutylicum, Clostridium beijerinckii, Clostridium botulinum, Clostridium perfringens, Clostridium sporogenes, Clostridium pasteurianum, Clostridium ljungdahlii, Clostridium autoethanogenum and even Geobacillus thermoglucosidasius. |
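The segregational stability assessment described above, plating with and without antibiotic and comparing cfu/ml, reduces to simple arithmetic once the colony counts are in hand. The sketch below is a hypothetical illustration: the counts, the number of generations and the function names are invented for the example and are not taken from the paper.

```python
# Hypothetical illustration of the plate-count arithmetic used to rank replicons by
# segregational stability; all numbers below are invented for the example.

def retention_fraction(cfu_selective, cfu_nonselective):
    """Fraction of cells still carrying the plasmid after growth without antibiotic."""
    return cfu_selective / cfu_nonselective

def loss_per_generation(retention, generations):
    """Approximate per-generation probability of plasmid loss, assuming a constant rate."""
    return 1 - retention ** (1 / generations)

if __name__ == "__main__":
    # Example colony counts (cfu/ml on agar with vs without thiamphenicol)
    # after roughly 10 generations of non-selective growth.
    replicons = {
        "pBP1":   (4.0e6, 8.0e7),
        "pCB102": (2.5e7, 8.0e7),
        "pIM13":  (6.0e7, 8.0e7),
    }
    for name, (with_tm, without_tm) in replicons.items():
        r = retention_fraction(with_tm, without_tm)
        loss = loss_per_generation(r, generations=10)
        print(f"{name}: {r:.0%} retention, ~{loss:.1%} loss per generation")
```

In this scheme, the replicon showing the lowest retention (that is, the fastest plasmid loss) would be judged the most defective and hence the preferred pseudo-suicide backbone for that particular strain.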
31,414 | LRRK2 mutations in Parkinson's disease: Confirmation of a gender effect in the Italian population | Leucine-rich Repeat Kinase 2 is one of the genes that is most frequently involved in Parkinson's disease.Many variants in this gene have been described, but only a few of them are certainly pathogenic, including mutations G2019S and R1441C/G/H.The most common mutation is G2019S, whose frequency varies considerably among populations .Penetrance of LRRK2 mutations is incomplete and age-related .Recent studies have shown that gender distribution is even among Ashkenazi Jews LRRK2 carriers and in other genetic forms of PD .These studies also suggested that there may be gender-related differences in the balance between genetic and environmental factors, the genetic load being heavier in women than in men .On the other hand, it is still debated whether PD LRRK2 carriers display a distinctive clinical phenotype compared to idiopathic PD or not .We compared demographic and clinical features between carriers and non-carriers to shed light on the possible impact of LRRK2 mutations on clinical features and to investigate whether or not LRRK2 status influences gender distribution and PD phenotype.We studied 2976 unrelated consecutive patients with degenerative parkinsonism, who contributed to the ‘Parkinson Institute Biobank’ from June-2002 to January-2011.All patients were enrolled consecutively, not selected for any clinical or familial feature.A first group of 1245 patients was described elsewhere .Here, we report an update on 1734 additional patients, reaching a total of 2976 unrelated consecutive patients.All patients were tested for the major LRRK2 mutation G2019S in exon 41.Exon 31, containing the mutations R1441C/G/H, was analyzed in a subgroup of 1190 patients, out of whom 1088 had PD.In addition, when LRRK2-mutations were identified in a proband, we studied all available family members with a definite PD diagnosis.Thus, we enrolled 10 additional living relatives affected by PD and carriers of the G2019S mutation from nine families.Patients found to be carriers of mutations in any other PD-related genes were not excluded from LRRK2 genetic analysis.However, to minimize confounding effects on the phenotype, we excluded such patients from the comparative analysis of clinical features.Clinical diagnosis was made according to established diagnostic criteria .All the 2976 patients had a diagnosis of primary degenerative parkinsonism: 2523 fulfilled criteria for PD, 53 for Dementia with Lewy Bodies, 128 for Multiple System Atrophy, 14 for Frontotemporal Dementia, 103 for Progressive Supranuclear Palsy, and 33 for Corticobasal Degeneration.In the remaining 122 patients the clinical diagnosis was still uncertain and reported as Undefined Primary Parkinsonism.Among the 2523 PD patients, 1488 were male, mean age at onset was 55.76 years, mean disease duration was 14.12 years.Family history in LRRK2-carriers was evaluated during formal genetic counseling.Proband relatives with possible parkinsonism not available for neurological examination were assumed to have PD only in case of previous diagnosis and when prescription of dopaminergic therapy was reported.In non-carriers, family history was collected by means of a questionnaire.Formal genetic counseling sessions occurred in most cases when a relative was reported to be affected by PD.Patients were classified as “familial” if at least one among their 1st, 2nd, or 3rd degree relatives had a formal diagnosis of PD.Clinical features of 
LRRK2-carriers were compared to those of patients whose molecular analysis was negative for the major LRRK2 mutations and for other mutations in known PD genes. Demographic and clinical data were collected from all patients, including the latest Unified Parkinson's Disease Rating Scale scores from part I to III in the medication-On and -Off state, and the Hoehn and Yahr stage. Major milestones of PD progression were explored by transforming specific UPDRS items into dichotomous variables, i.e. falls, postural instability, non-levodopa-responsive freezing of gait, dysphagia and speech difficulties. The study was approved by the local Ethics Committee and written informed consent was obtained from all subjects. R1441C/G/H, G2019S and I2020T mutations were analyzed with standard methods. LRRK2 haplotype was analyzed in all G2019S carriers. We compared demographic and clinical variables between LRRK2-carriers and non-carriers, and we also tested gender differences between groups, using parametric and non-parametric tests as appropriate. In order to better characterize group differences, six variables were analyzed in a multivariate context through multivariate logistic regression, with the presence of LRRK2 mutations as the dichotomous response variable. Furthermore, we compared motor phenotype and dichotomous values of selected UPDRS items. Statistical analyses were conducted using R statistical software. Among the 2523 unrelated consecutive PD patients, 40 were found to be carriers of LRRK2 mutations, G2019S being the most frequent. No mutations were found in any of the other patients with alternative clinical diagnoses. LRRK2 mutations were significantly more frequent in familial than in sporadic PD cases. In all G2019S carriers, genotypes were compatible with the common haplotype. We found several cases with rare genotypes. Notably, one patient was a carrier of the novel I2020L missense variant. The I2020 residue is involved in a well-known mutation, and several in silico tools predict that the I2020L change is damaging. Therefore, we considered this new variant as a mutation. Finally, we identified two synonymous variants in the heterozygous state: c.6054C > T, which has already been described, and the novel c.6021C > T variant. A total of 49 PD LRRK2 carriers were included in the analysis of clinical features, resulting from the sum of N = 40 from the case series and N = 10 affected living relatives, excluding N = 1 patient with a clinical diagnosis of PD whose post-mortem examination revealed a PSP-like tauopathy. Demographic and clinical features of LRRK2-carriers did not differ from those of non-carriers. Gender distribution was the only differential feature between the two groups, as most of the carriers were female. This difference remained significant after excluding the 10 living relatives recruited in addition to the case series. After adjusting for disease duration and age, the frequency and severity of major motor and non-motor symptoms were similar. We compared carrier females to carrier males, and each against the respective non-carrier group. We did not find any significant difference, with the exception of smoking. Major milestones of PD progression did not show any significant gender-related difference, both in the Off- and in the On-state. In an attempt to evaluate the genetic component for PD in males and females regardless of genetic status, we compared family history in all PD cases. Women reported a family history of parkinsonism more frequently than men, but the difference was statistically significant
only considering all relatives up to 3rd degree relatives.However, a clear trend was evident also in 1st degree relatives.To the best of our knowledge, this is the largest consecutive series of patients with primary parkinsonism collected at a single clinical referral centre and tested systematically for major LRRK2 mutations.Our findings confirm that G2019S is the most common mutation in the Italian PD population, while it is virtually absent in patients with other primary parkinsonian syndromes.Critically, it is more common in women.Large epidemiological studies have consistently reported a higher incidence of sporadic idiopathic PD in men than in women .Although the basis of this difference has not been clarified so far, it has been suggested that the predominance in males is due to more frequent occupational or recreational exposure to toxins, but a putative neuroprotective role played by estrogens has also been suggested .According to Mendelian inheritance, autosomal genetic forms of PD should follow an even distribution of gender among affected cases.However, LRRK2 mutations have a low life-time penetrance, approximately 30–40% .Therefore, PD in LRRK2 carriers should still be considered of multifactorial etiology, the intervention of other genetic and/or environmental factors being mandatory for the development of disease.Accordingly, we would expect a similar 60:40 male-to-female distribution.However, this is not the case, as we found a surprising overturn leading to 57% prevalence in LRRK2 female carriers.A relatively higher percentage of women amongst LRRK2-carriers compared to patients with idiopathic-PD has previously been reported, but only in the Ashkenazi Jewish population .Several other studies have reported that LRRK2 carriers were mainly female, but the difference between genders did not reach statistical significance, probably because of small sample size .If LRRK2-carrier women have a greater load than men, women might be expected to develop disease earlier or progress faster than men, or both.A recent study on a large LRRK2 PD cohort confirmed the former hypothesis, demonstrating that onset of disease occurs 5 years earlier in women .On the other hand, our extensive analysis did not reveal any gender-related difference in clinical phenotype, including motor and non-motor symptom severity and measures of disease progression.In this scenario, one could set forth the hypothesis that this gender-related effect applies not only to LRRK2 and to other genetic forms of PD , but also to sporadic idiopathic PD.To further explore this hypothesis, we additionally investigated the family history of PD in the whole cohort of unrelated consecutive PD patients according to gender.The rationale of this analysis followed what is usually described in other complex multifactorial inherited diseases: when subjects of the less commonly affected gender manifest disease, their relatives are at increased risk because of the relatively larger ‘genetic load’ overcoming protective factors .Accordingly, we found that PD is more common in the family history of women than men, regardless of LRRK2 status.Alternatively, it could be speculated that the LRRK2 mutated protein may interact with specific female hormonal or genetic factors, thus potentially explaining the 57% of female LRRK2-carriers, even exceeding the expected 50% rate in the presence of autosomal distribution of the mutation.Our analysis of clinical features confirmed that female carriers do not have more severe symptoms than male 
carriers or female non-carriers and their symptoms do not differ in any way.Their age at onset is similar and their disease does not appear to progress more rapidly.Hence, interaction between LRRK2 and specific female factors does not seem likely.Our data support the hypothesis that LRRK2-associated PD phenotype is not distinguishable from idiopathic PD, as we did not find any remarkable clinical difference between LRRK2-carriers and non-carriers.The missense variant I2020L, which we found in one of our patients, involved the same residue of a well-known mutation, I2020T .Despite in silico predictions suggesting a damaging effect, further studies including co-segregation analysis and functional assays are necessary to include the I2020L variant among LRRK2 pathogenic mutations.Several strengths of our study are worth mentioning.First, LRRK2-carriers were identified within a large consecutive series of patients devoid of bias due to pre-selection based on clinical or demographic features, such as ethnicity, age at onset or family history.Second, the consecutive enrollment from a single tertiary centre enabled an exhaustive and standardized clinical assessment in all cases.Finally, this is the first study investigating possible gender effects on a Caucasian non-Jewish population, including not only distribution, but also a comprehensive investigation of potential differences in motor and non-motor clinical features.In conclusion, we confirm a higher frequency of females among LRRK2-carriers.This might be due to a greater genetic load compared to males, where environmental factors may play a prominent role.Further studies are required to investigate this ‘gender effect’ in larger populations, including not only LRRK2-carriers but also other genetic causes of PD.None of the authors have any disclosures to report. | The relative risk of developing idiopathic PD is 1.5 times greater in men than in women, but an increased female prevalence in LRRK2-carriers has been described in the Ashkenazi Jewish population. We report an update about the frequency of major LRRK2 mutations in a large series of consecutive patients with Parkinson's disease (PD), including extensive characterization of clinical features. In particular, we investigated gender-related differences in motor and non-motor symptoms in the LRRK2 population. Methods: 2976 unrelated consecutive Italian patients with degenerative Parkinsonism were screened for mutations on exon 41 (G2019S, I2020T) and a subgroup of 1190 patients for mutations on exon 31 (R1441C/G/H). Demographic and clinical features were compared between LRRK2-carriers and non-carriers, and between male and female LRRK2 mutation carriers. Results: LRRK2 mutations were identified in 40 of 2523 PD patients (1.6%) and not in other primary parkinsonian syndromes. No major clinical differences were found between LRRK2-carriers and non-carriers. We found a novel I2020L missense variant, predicted to be pathogenic. Female gender was more common amongst carriers than non-carriers (57% vs. 40%; p=0.01), without any gender-related difference in clinical features. Family history of PD was more common in women in the whole PD group, regardless of their LRRK2 status. Conclusions: PD patients with LRRK2 mutations are more likely to be women, suggesting a stronger genetic load compared to idiopathic PD. Further studies are needed to elucidate whether there is a different effect of gender on the balance between genetic and environmental factors in the pathogenesis of PD. 
|
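The methods of the entry above describe a multivariate logistic regression with the presence of LRRK2 mutations as the dichotomous response, fitted over six covariates; the authors ran this analysis in R. Purely as an illustration of that model structure, a minimal Python/statsmodels sketch follows: the covariate names (sex, onset_age, disease_duration, family_history, smoking, updrs_iii) are hypothetical placeholders and not the authors' actual variable set, and "patients.csv" is an assumed input file.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-patient table; 'carrier' is 1 for LRRK2 mutation carriers, 0 otherwise.
df = pd.read_csv("patients.csv")

# Multivariate logistic regression: carrier status as the dichotomous response,
# six illustrative covariates entered jointly (placeholders, not the study's list).
model = smf.logit(
    "carrier ~ C(sex) + onset_age + disease_duration"
    " + C(family_history) + C(smoking) + updrs_iii",
    data=df,
).fit()

print(model.summary())        # coefficients (log-odds), standard errors, p-values
print(np.exp(model.params))   # odds ratios for each covariate

In R the equivalent would be a glm(..., family = binomial) call; the only point of the sketch is to make explicit that carrier status is the response and the covariates are modelled jointly rather than in separate univariate tests.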
31,415 | Iodate in calcite, aragonite and vaterite CaCO3: Insights from first-principles calculations and implications for the I/Ca geochemical proxy | Understanding seawater oxygen content is important, since oxygen minimum zones adversely affect marine fisheries and biological productivity.Oxygen depletion appears to be affecting a greater and greater volume of intermediate ocean waters, and has been associated with ocean warming and climate change."Understanding the climate sensitivity of the oceans' oxygen minimum zone, both today and over geological time scales, demands the development of a geochemical proxy for water oxygen content.As a redox-sensitive element, iodine has been proposed as a suitable proxy for this purpose.Iodine has been recognised as a promising geochemical indicator for oxygen content: the variation of speciation of iodine in seawater, as either iodide or iodate anions, and the iodide/iodate redox potential lies close to that of O2/H2O.As oxygen content decreases, the speciation of iodine in aqueous solutions changes from dominantly iodate to iodide, and iodide is thermodynamically stable in anoxic waters.Iodine accumulates in planktonic and benthic marine calcifiers, and follows a nutrient-like vertical distribution in the oceans.Diagenetic processes may act as a further control on the presence of iodine in pore fluids.Since I− does not enter carbonate minerals, while IO3− does, the iodine content of marine carbonates may be used as a proxy for seawater .More generally, halogens play an important part in Earth’s ecosystems, and volcanic emissions of halogens are known to be considerable, but most studies to date have focussed on chlorine and fluorine.Typical concentrations of iodine in carbonate rocks lie in the ppm range and may range as high as 500 nM in ocean waters.The fates of iodine in the solid Earth, in crustal and mantle rocks and reservoirs, and its sources, sinks and fluxes, remain undefined and largely unknown, although the possible transfer of iodine into the mantle through subducted oceanic sediments remains a clear possibility.Inorganic precipitation experiments have found that I/Ca ratios in calcites crystallised from solution are linearly-dependent upon the IO3− concentration of the parent water, but are independent of I− content of such water.Lu et al. 
carried out experiments in which they grew synthetic calcites spiked with iodine in solution, either as iodide or iodate.While they found little dependence of the I/Ca ratios in their samples for the iodide-bearing solutions, they saw a linear relationship between I/Ca and iodate concentration and concluded that the likely substitution mechanism is IO3− substituting for the CO32− oxy-anion in calcite: since IO3− occurs in oxygenated water, the I/Ca ratio was found to be higher in the test of foraminifera grown in high water.Their interest was in the application of measurements of iodine in calcite as a geochemical proxy for seawater redox potential, given the fact that iodate/iodide speciation changes with the oxidation state of seawater.This is particularly interesting because I/Ca in benthic foraminifera provide a route to measuring the variations in oxygen contents of bottom waters.Additionally, low I/Ca ratios in fossil planktonic foraminifera have been identified with ocean anoxia in the stagnant oceans of the mid-Cretaceous, associated also with organic-rich clay-bearing sediments.In the same way that B, Mg and Na have been seen to vary through the test wall of a foraminifera, so I/Ca ratios also display heterogeneity, in the basis of ICP-MS measurements.If such intra-test heterogeneities are indeed caused by variations in the redox-conditions over the lifetime of a single foraminiferal specimen, it may even be possible to reconstruct relative changes in bottom waters on sub-annual time scales, although the biological controls on iodine incorporation are unclear.These previous studies made no direct measurement, by spectroscopy or other structural means, of the speciation and incorporation state of the iodine in their samples, however.A fully quantitative relationship between dissolved oxygen and foraminiferal I/Ca has yet to be developed, and inferences from I/Ca remain qualitative.None the less, the fate of iodine in biogenic carbonates has been found to be a reliable indicator of ocean oxidation state, with results from foraminiferal calcite being employed to infer bathymetric variations in deoxygenation of ancient and modern oceans, for example.It is worth noting that further interest in the fate of iodine in the near surface solid Earth and in ground waters has arisen due to its prevalence in nuclear waste materials.Iodine has one stable isotope, 127I, but twenty-five radioactive isotopes and 131I is an acute radioactive contaminant.Due to its long half-life, high inventories in typical spent fuel, high bioactivity and high mobility, 129I released into the environment has been identified as a major potential hazard in groundwater near nuclear waste disposal sites.Understanding the interaction between iodine in aqueous solution and carbonate mineral precipitates in sediments is crucial in understanding the pathways and risks from 129I.Much of the 127I and 129I that originally existed as aqueous species in contaminated groundwater at the Hanford nuclear site co-precipitated into groundwater calcite, mainly as iodate, demonstrating the important role in the geochemical immobilization of radioactive iodine played by the interaction between CaCO3 and aqueous iodine species.In order to better understand the incorporation of iodine into carbonates, it is necessary to first determine the ultimate location of iodine within carbonate mineral structures.Several early studies tackled the incorporation of trace and major elements into calcite but they mainly focused on exchange of Ca2+ with 
other cations, such as Mg2+, Na+, Fe2+, Zn2+, As5+, Sr2+.More recently, attention has turned to anion substitutions into carbonates, building on earlier work characterizing sulfate incorporation into carbonate minerals.Electron paramagnetic resonance spectroscopy and X-ray diffraction have been used to study the possible incorporation of SO42−, NO3− and Cl− into the calcite crystal structure and to find out the possible sites and mode of their incorporation.The results indicated that the calcite lattice becomes distorted due to the incorporation of such anions.Based on these studies, it was concluded that sulfate can be incorporated into the calcite lattice and substitute for carbonate ions.Their results were supported by a number of later experimental investigations.Extended X-ray absorption fluorescence spectroscopy was also used to determine how the tetrahedral SeO42− species are accommodated in the bulk calcite structure.Lamble et al. found that tetrahedral SeO42− oxy-anions substitute for trigonal CO32− in calcite.As far as iodine incorporation is concerned, an X-ray single crystal electron density study of calcite by Maslen et al. attempted to infer the influence of dilute iodate incorporation and postulated that it substitutes for the carbonate ion in calcite, based on the fact that the Ca–O distance in calcium iodate, Ca(IO3)2, is similar to that in calcite.Most recently, Podder et al. used a combination of X-ray absorption spectroscopy and first principles calculations to identify the nature of iodate in calcite and vaterite, confirming that this is the dominant incorporation mode.Here, we extend our understanding of the incorporation of iodine in calcium carbonates using ab initio computational simulations of iodine substitution for carbon in the carbonate group as well as for Ca at the large cation site, within all three ambient pressure/temperature polymorphs of calcium carbonate, namely calcite, aragonite and vaterite, with the aim of quantifying the local distortion effects and estimating the likely enthalpic penalties of iodine incorporation.Early simulation studies of carbonates, and calcite in particular, adopted atomistic models to understand trace element substitutions, but more recently ab initio methods such as density functional theory have been adopted successfully to understand the effects locally of trace or minor anionic element substitutions, including those of sulphate, selenite, and bromate oxy-anions within carbonate.The energetics and physical properties of three polymorphs of calcium carbonate, CaCO3, namely calcite, aragonite and vaterite, were calculated using first-principles structure methods based on density functional theory.Calculations were performed employing the Vienna ab initio simulation package.The generalized gradient approximation in the scheme of Perdew–Burke–Ernzerhof was used, alongside projector augmented-wave potentials for electron–ion interactions.PAW potentials with 3s23p64s2, 2s22p2, 2s22p4, and 5s25p5 electrons as valence electrons were adopted for the Ca, C, O and I atoms respectively.A kinetic energy cut-off of 500 eV was chosen, and Monkhorst-Pack meshes for Brillouin zone sampling were selected with a resolution of 0.3 Å−1.Elastic constants were computed from the strain-stress method, and the bulk modulus, shear modulus, Young's modulus and Poisson's ratio were derived from the Voigt-Reuss-Hill averaging scheme (a short sketch of this averaging step is given after this entry).The incorporation of iodine into pure calcium carbonate was treated as an impurity substitutional
defect.Two different basic approaches can usually be used to model an impurity in a crystal.In the first, the full system geometry is relaxed at zero pressure.This approach accounts for the variation of cell parameters observed for relatively high impurity concentrations.However, because of periodic boundary conditions, only an ordered distribution of impurities in the crystal is actually considered and this may artificially affect the symmetry of the whole.Alternatively, a relaxation of the impurity-bearing model can be performed with the volume of the pure crystal maintained, which makes the simplifying assumption the symmetry and molar volume of the bulk crystal should not be affected by the presence of an impurity at low concentration.In this case, computational limits prevent calculation of very large super-cells and a stress is usually observed over the super-cell, due to residual elastic interactions between the impurity and its periodic images.We have adopted both approaches to make a comparison of the enthalpies determined from each of the methods.The starting structure for the computational work on calcite, aragonite and vaterite was that of R-3c, Pnma, and Cc respectively.The structures of R-3c calcite and Pnma aragonite are well-known and among the first to be identified in the early history of crystallographic studies.We used the structures of Graf and De Villiers as starting points for our calculations.The structure of vaterite is much-debated, however, and we therefore did not use any of the experimentally-derived structures.We choose, instead, the theoretically calculated Cc structure, containing 12 formula units of CaCO3 per unit cell for the convenience of making a supercell containing the same number of atoms for all three polymorphs.A super-cell containing 24 formula units of CaCO3 was constructed for each of the polymorphs.In the case of calcite, a supercell was chosen that corresponds to four conventional hexagonal unit cells.The supercell was, therefore, constructed as 2a × 2b × c where a, b, and c are the cell parameters of the hexagonal setting of the calcite R-3c unit cell.For aragonite, the simulation cell comprised six times the volume of the conventional unit cell, with a supercell constructed as 2a × 3b × c, where a, b, and c are the cell parameters of the orthorhombic Pnma unit cell of aragonite.For vaterite, the simulation cell comprised twice the volume of the conventional unit cell, with a supercell constructed as a × 2b × c, where a, b, and c are the cell parameters of the monoclinic Cc unit cell of vaterite.The structures were relaxed, with atomic relaxation terminated when the change in the total energy per atom converged within 1 meV.The enthalpy of pure calcite was found to be lower than that of the aragonite and vaterite.It should be noted, however, that the differences in enthalpies of these three polymorphs of CaCO3 are generally similar to the distribution of estimates of enthalpy that are provided by different density functionals and that calcite is stabilized against the other polymorphs at ambient by entropic effects, especially associated with carbonate rotational disorder at high temperatures.In order to evaluate the reliability of our simulations further, experimental values of lattice parameters for carbonates are also given in Table 1 and show good structural correspondence.The GGA functional, the most commonly used for solids, is well known to overestimate the lattice constants with typical errors amounting up to 2%.Here, the lattice 
constants of our simulation are larger than that of experimental values by 0.8–1.7%.Our simulations are, therefore, in reasonably good agreement with experiment.Using the relaxed structures as a starting point for the next stage of the calculations, the properties of iodine-doped equivalents were calculated by introducing one iodine atom into each of the supercells in two substitution mechanisms.One involved the substitution of iodine for calcium, onto the large octahedral site.The other corresponded to the substitution of iodine for carbon, as an iodate replacing the carbonate group.In each case the modified structures correspond to an average site occupancy of around 4% substitution of iodine onto the site.Our calculations do not assume the oxidation state of iodine, but simply find the lowest enthalpy configuration based on the pseudo-potential model.We adopted two approaches.In the first, the iodine-substituted structures were constructed without imposing any symmetry restrictions and then optimized under no symmetry restrictions.The symmetries of the resultant structures were then analysed and it was found that they were P321 for iodine-substituted calcite, Pm for the iodine-substituted aragonite structure and P2 for iodine-substituted vaterite.It should be stressed that we are not suggesting that iodine substitution at the ppm level, as seen in nature, would result in reduction of the long-range space group symmetry of a carbonate host crystal to these symmetries.Rather, these represent the symmetry of the local distortion that is to be expected around an iodine substituent atom when hosted within such a phase.An alternative approach was then adopted, in which the density of the host carbonate structure was maintained by fixing the unit cell parameters, and a single iodine atom substituted into the structure with the metric unit cell of the host lattice maintained, without any further relaxation of the volume or unit cell parameters.When performing the substitution, every possible atomic replacement needs to be taken into consideration.Since all the carbon atoms are symmetrically equivalent in calcite and aragonite, there is only one way to do the C-site substitution.This is also true for the Ca-site substitution.In vaterite, however, there are three symmetrically distinct carbon atoms in a unit cell, and there are therefore three ways to replace one carbon atom with one iodine atom.We accordingly tried all the three possible substitutions and discuss only the one with lowest energy henceforth.It is worth noting that the space group of the vaterite iodine-substituted structure with lowest energy is P2, while the symmetry of the other two reduced to P1.There are also three possible Ca-site substitutions in vaterite, with the most stable one having P−1 after iodine substitution, and the other two less-favoured structures having P1 symmetry.Calculated enthalpies for each of the reactants and products allow us to determine the enthalpies of reaction for each of the CaCO3 polymorphs.It is worth noting that incorporation of iodine in calcium carbonates is expected to result in some charge deficit that must be compensated, and this can be accounted for using either a homogeneous electrostatic background or a coupled heterovalent substitution.We have used the first of these methods to maintain the electric neutrality of the system, but charge compensation mechanisms in natural iodine-bearing carbonates may play an important role.Iodine itself can adopt multiple valence states, with I3+ and 
I5+ each forming iodite or iodate oxy-anions in nature.In view of this it is quite possible that iodine incorporation and substitution for C4+ in the carbonates that we have considered is accommodated by variable oxidation state of the iodine itself.Alternatively, coupled substitution of Ca2+ at the large octahedral M-site, for example by monovalent Na+, might reasonably occur, via a mechanism such as Na+ + I5+ ⇔ Ca2+ + C4+.Indeed, this mechanism has been considered previously, for carbonates, and shown to be effective.In this case, the incorporation of iodine may enhance the propensity for incorporation of sodium, and vice versa.We are not aware of any studies of the correlation between I/Ca values and other proxy measures, such as Na/Ca, but conducting such combined measurements offers a route to understanding the influence of individual trace elements on the concentrations of others.The details of the possible coupled substitutions are likely to be influenced by vital processes within the calcifying space of a biomineralising organism.Other monovalent alkali ions could just as well take part in the coupled substitution of M+ + I5+ ⇔ Ca2+ + C4+, where M = Li, Na, K, Rb or Cs, for example.It would be instructive to explore correlations between iodine concentrations and the concentrations of any of these alkali ions.We find that the energies of the iodine-bearing structures that we obtain from either method of structural relaxation are essentially identical within the accuracy of the calculations.Furthermore, we have calculated the enthalpies using an alternative exchange-correlation functional within the local density approximation to check that the choice of functional does not affect our conclusions.From the results shown in Table S1 it is clear that the choice of functional does affect the absolute values of enthalpy for the three phases, but does not change the relative sizes of the enthalpies and thus does not alter our conclusions.Hereafter, therefore, we restrict our discussion to the results pertaining to the GGA-derived structures, in which full structural relaxation was allowed under no symmetry or metric cell constraints.These structures most accurately represent the "local distorted symmetry" corresponding locally to the iodine environment.The energies of the two substitution reactions for calcite are +40.5 and +25.7 kJ/mol respectively.Equivalent values for substitutions into the aragonite form of CaCO3 are +38.7 and +30.3 kJ/mol.Finally, equivalent values for substitutions into the vaterite form of CaCO3 are +34.2 and +22.5 kJ/mol.It is worth noting that our calculations correspond to zero pressure and zero temperature conditions.In the absence of any other factors, the bare thermodynamic influence of increased temperature might be expected to favour iodate incorporation due to the entropic advantage, in the same way that the Mg/Ca geochemical proxy for temperature works.However, preliminary results on synthetic iodine-bearing calcites suggested lower partition coefficients for iodate at higher temperatures.Establishing the origin of this observed temperature-dependence of iodine partitioning is beyond the scope of our work, which offers little insight into the potential mechanisms.One might speculate, however, that the experimentally-observed reduction in iodate partitioning into calcite with temperature could be related to the temperature-dependence of iodine speciation in aqueous solutions, as controlled by iodine hydrolysis.Further complicating factors could
include kinetic effects associated with dissolution/re-precipitation reactions of the carbonate itself.Our results demonstrate unequivocally that iodate substitution into calcite involves a lower energy penalty than substitution into aragonite, and that iodine is expected, therefore, to partition as iodate into calcite over aragonite.However, substitution into vaterite appears even more favourable than into either of the two other phases.Thus, although vaterite is metastable with respect to calcite as pure CaCO3, incorporation of iodine as a defect substitution is less prohibitive into the vaterite structure than into either aragonite or calcite.This demonstrates that vaterite has greater capacity for incorporation of minor impurities in solid solution.Furthermore, our results indicate that iodine favours incorporation onto the C-site over the Ca-site in all three polymorphs, that is to say, iodine substitution as IO3−, replacing the CO32− carbonate group, is by far the most favourable substitutional reaction.We therefore focus our analyses on this C-site incorporation.To understand the energy difference, it is first necessary to look at the effect each iodine substitution has upon the crystal lattice.Structural details of iodine-bearing carbonates, including calcite, aragonite and vaterite, are listed in Table 1 along with those of pure carbonate structures.The substitution of IO3− for the CO32− carbonate group results in a local modification of the structure, with local distortions and with a significant change of lattice parameters.In particular, the incorporation of the larger iodine atom induces obvious local distortion around the hosted iodine atom in the calcium carbonate structures, including calcite, aragonite and vaterite, as seen in Fig. 
1.For calcite, the lattice of the iodine-bearing structure remains rhombohedral, even in the immediate vicinity of the iodine substituent, although the space group symmetry is lowered from R-3c in pure calcite to P321 in the vicinity of the defect in iodine-bearing calcite.Iodine sits in planar triangular coordination, with the same coordination geometry as carbon in CO3, but the introduction of IO3 groups causes out-of-plane tilts of both the IO3 and CO3 molecules within the layers lying parallel to the hexagonal planes.For comparison, the unit cell parameters of the supercell of calcite are also given in Table 1.As can be seen from this table, in order to accommodate the iodine atom, whose atomic radius is twice that of the carbon atom, the volume of the unit cell increases by lengthening along both the a and b axes, which define the layers parallel to the plane on which the carbonate groups lie.In iodine-bearing aragonite, the original orthorhombic cell reduces in symmetry to monoclinic after substitution of one in 24 carbon atoms by iodine and relaxation of the unit cell.This is revealed as a very subtle change of the β angle from 90° to 89.847° and results in a local symmetry for the iodine-bearing aragonite of Pm.The incorporation of iodine atoms enlarges the aragonite unit cell locally, in the same way as is seen in calcite, principally as a significant increase of the b cell parameter, along which direction the carbonate/iodate groups are aligned in rows.It is worth noting that, unlike the case of calcite or vaterite, the coordination of iodine by oxygen is irregular trigonal pyramidal, which is similar to the iodate coordination geometry in calcium iodate, although the coordination is distorted and much closer to triangular planar than that of iodate in calcium iodate.The iodate molecule in iodine-bearing aragonite shows two OIO bond angles of 123.6° and one of 103.7°, so all are relatively close to the triangular planar bond angle of 120°.In calcium iodate the equivalent bond angles are 98.7°, 98.7° and 98.3° and the iodate geometry is much more pyramidal.The oxygen atoms' out of plane movements relative to the aragonite-hosted iodine create local dipoles and the structure becomes locally-polar.This is interesting since the same phenomenon was reported in an early work which focused on substitutions onto the cation positions in these carbonates.It seems likely that the incorporation into aragonite, unlike other forms of carbonates, results in the formation of dipoles for a wider range of elemental substitutions.In iodine-bearing vaterite, the phase which displays the smallest energy penalty for iodine incorporation, the symmetry of the host monoclinic structure is reduced around the iodate oxy-anion, resulting in a local P2 symmetry.The incorporation of the iodine atom causes a small increase in the vaterite unit cell volume locally.There is subtle disruption of the orientations of the carbonate anions around the iodine atom, but the changes are not as obvious as for the other two polymorphs of CaCO3 because the vaterite structure is already relatively disordered, with a wider range of CO3 group orientations.Previous work on the substitution of sulfate into vaterite found that the carbonate groups were less disrupted by the substitution but that the sulfate group itself was more distorted than when substituted into calcite or aragonite.We find precisely the same situation for iodine incorporation into vaterite, in comparison with calcite and aragonite, with the iodate group being highly
distorted, with two I–O bonds of 1.975 Å and one of 1.874 Å, and two OIO bond angles of 88° and one of 176°.The effects of iodate replacement for the carbonate ion are readily quantified by inspection and comparison of the radial distribution functions for the pure and iodine-bearing versions of these three polymorphs.The radial distribution functions for these six structures are shown, plotted as g(r), in Fig. 2, where the distribution of electron density away from a central calcium atom is plotted.In aragonite, the first shell of atoms away from calcium is affected by the incorporation of iodine with a noticeable change in the height of this first shell peak for the iodine-aragonite.In comparison, the calcite and vaterite first shell peaks are hardly changed by the incorporation of iodine.This likely accounts for the larger energy cost for iodine incorporation into aragonite compared with incorporation into calcite or vaterite.Comparing these latter two polymorphs, we see that the longer-range structures of iodine-bearing and iodine-free vaterite are rather similar to each other, while the same range of structure in calcite shows more significant perturbation upon iodine incorporation.Indeed, it is apparent that iodine may be accommodated into the vaterite structure with rather a small structural distortion penalty.Finally, we have computed some of the physical properties of the structures that we have predicted.The elastic constants and mechanical properties of each of the three polymorphs, including their iodine-bearing equivalents, are presented in Table 2.Experimental measurements of the elastic properties of vaterite have not been carried out due to the difficulty in performing such measurements on this highly metastable phase, so we have compared our results with the computational results of Demichelis et al.The calculated elastic constants of the iodine-free CaCO3 polymorphs agree well with experimental observations, providing further confidence in our results for the iodate-bearing versions of each phase.The incorporation of iodine acts to slightly soften the structures of calcite and aragonite, but has the opposite effect in the vaterite structure.The consequence is that the iodine-bearing vaterite is stiffer than iodine-bearing aragonite, as well as pure CaCO3 vaterite, further underlining the important nature of this phase in the potential incorporation of iodine in natural systems.Furthermore, we have found that iodine incorporation into calcium carbonate is energetically favoured in vaterite, compared with calcite or aragonite.The sequence vaterite > calcite > aragonite for ease of iodate incorporation is identical to that found for sulfate incorporation previously.Additionally, we find that iodine incorporation into vaterite lends this carbonate improved mechanical properties.This may be important in biomineralogical "deployment" of vaterite in organisms.It has been observed, for many organisms, that vaterite mineralization occurs when organisms suffer unfavourable stress conditions, such as disease or degraded environmental drivers.If such vaterite is also more able to accommodate trace elements, and by doing so able to develop improved strength and mechanical properties, such improvements may help explain the occurrence of vaterite in these organisms under these circumstances.It is interesting to note that, as well as our predicted higher I/Ca ratio for vaterite over calcite or aragonite, Mg/Ca and Mn/Ca ratios are also observed to be higher in vaterite than in aragonite.Similar variations in
geochemical proxy amplitudes between vaterite and aragonite or calcite have been seen in the calcified parts of a number of organisms including bivalves.As well as the implications that such variability has for the importance of identifying the mineralogy of any calcium carbonate used for I/Ca geochemical proxy work, the potential enhancement of I/Ca in vaterite may also be important in understanding the origins of inherited iodine signatures.For example, it has recently been observed that foraminiferal calcite appears to be derived, in the early stages of calcification, from precursor vaterite.In such circumstances, it is reasonable to conclude that the signatures measured in the stable “daughter” calcite phase of the test may well reflect the partitioning, under biological calcification, pertaining to the parent vaterite crystals.The incorporation of iodine into all three naturally-occurring polymorphs of calcium carbonate – calcite, aragonite and vaterite, has been investigated via first-principles computational methods.In each case the incorporation of an iodine atom is favoured most strongly as substituent for carbon in the form of iodate, which causes local distortions of the structure over a length scale of around 10 Å.The local strain field around the iodate appears more extensive in aragonite than in the other two polymorphs, and the incorporation of iodine occurs with the least energy disadvantage into the vaterite structure.Furthermore, iodine-bearing vaterite shows improved mechanical strength compared to pure CaCO3 vaterite.Our results confirm the expectation that iodine is incorporated as iodate within biogenic carbonates, and thus confirms the empirical observation that I/Ca may be employed as a proxy sensor representing the oxidation state of the water bodies within which such carbonates form.Furthermore, our observation that iodate is likely preferentially incorporated into the three polymorphs in order of ease vaterite > calcite > aragonite implies that the presence of vaterite in any biocalcification process, be it as an end-product or a precursor, needs to be taken into account when robustly applying the I/Ca geochemical proxy.The structural data and electron localisation functions for each calculated phase, that support the findings of this study, are available in Mendeley at doi: http://doi.org/10.17632/vvypr24z7k.2 accessible at http://doi.org/10.17632/vvypr24z7k.2. | The incorporation of iodine into each of the three polymorphs of CaCO3 – calcite, aragonite and vaterite, is compared using first-principles computational simulation. In each case iodine is most easily accommodated as iodate (IO3−) onto the carbonate site. Local strain fields around the iodate solute atom are revealed in the pair distribution functions for the relaxed structures, which indicate that aragonite displays the greatest degree of local structural distortion while vaterite is relatively unaffected. The energy penalty for iodate incorporation is least significant in vaterite, and greatest in aragonite, with the implication that iodine will display significant partitioning between calcium carbonate polymorphs in the order vaterite > calcite > aragonite. Furthermore, we find that trace iodine incorporation into vaterite confers improved mechanical strength to vaterite crystals. Our results support the supposition that iodine is incorporated as iodate within biogenic carbonates, important in the application of I/Ca data in palaeoproxy studies of ocean oxygenation. 
Our observation that iodate is most easily accommodated into vaterite implies that the presence of vaterite in any biocalcification process, be it as an end-product or a precursor, should be taken into account when applying the I/Ca geochemical proxy. |
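The computational methods of the entry above derive the bulk modulus, shear modulus, Young's modulus and Poisson's ratio from the calculated elastic constants via the Voigt-Reuss-Hill averaging scheme. The sketch below is a minimal illustration of that averaging step only, assuming the stiffness tensor is supplied as a 6 × 6 Voigt-notation matrix in GPa; it is not the authors' code, and the elastic constants themselves (obtained in the paper from the strain-stress method in VASP) are taken as given.

import numpy as np

def voigt_reuss_hill(C):
    """Voigt, Reuss and Hill moduli from a 6x6 stiffness matrix C (Voigt notation, GPa)."""
    C = np.asarray(C, dtype=float)
    S = np.linalg.inv(C)  # compliance matrix

    # Voigt (uniform-strain) bounds
    K_v = (C[0, 0] + C[1, 1] + C[2, 2] + 2.0 * (C[0, 1] + C[0, 2] + C[1, 2])) / 9.0
    G_v = (C[0, 0] + C[1, 1] + C[2, 2] - (C[0, 1] + C[0, 2] + C[1, 2])
           + 3.0 * (C[3, 3] + C[4, 4] + C[5, 5])) / 15.0

    # Reuss (uniform-stress) bounds
    K_r = 1.0 / (S[0, 0] + S[1, 1] + S[2, 2] + 2.0 * (S[0, 1] + S[0, 2] + S[1, 2]))
    G_r = 15.0 / (4.0 * (S[0, 0] + S[1, 1] + S[2, 2])
                  - 4.0 * (S[0, 1] + S[0, 2] + S[1, 2])
                  + 3.0 * (S[3, 3] + S[4, 4] + S[5, 5]))

    # Hill averages and derived isotropic properties
    K, G = 0.5 * (K_v + K_r), 0.5 * (G_v + G_r)
    E = 9.0 * K * G / (3.0 * K + G)                     # Young's modulus
    nu = (3.0 * K - 2.0 * G) / (2.0 * (3.0 * K + G))    # Poisson's ratio
    return K, G, E, nu

Applied to a computed Cij matrix for, say, pure or iodine-bearing calcite, the function returns the Hill-averaged bulk and shear moduli together with the derived Young's modulus and Poisson's ratio, i.e. the kind of mechanical properties tabulated in the paper's Table 2.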
31,416 | Zn-Al LDH growth on AA2024 and zinc and their intercalation with chloride: Comparison of crystal structure and kinetics | The application of conversion coatings on metallic materials such as steel, aluminum alloys and magnesium alloys is a widely used method for corrosion protection .It generally involves an electrochemical or chemical modification of the material surface leading to the formation of a layer, which creates a physical barrier and improves adhesion to the consecutively applied polymer coatings, allowing a better resistance against corrosion attack.However, in many cases, the physical barrier alone is not sufficient since once the latter is perforated due to defects or scratches, corrosion is inevitable.Therefore, many recent studies have focused on the development of self-healing corrosion protective systems, with the main aim of replacing the detrimental Cr-based conversion coatings due to their toxicity and harmful impact on the environment .The focus of these studies was oriented towards "smart" delivery systems with the capacity to store corrosion inhibiting agents and release them when triggered by changes of pH, presence of corrosive species or mechanical damage .Among these "smart" systems, layered double hydroxides occupy an important position as potential delivery systems for promoting an active corrosion protection of different metal alloys .Layered double hydroxides are part of the class of anionic clays based on brucite-like structures.They can be represented by the formula [MII1−xMIIIx(OH)2]x+(Ay−)x/y·zH2O, where MII and MIII are the divalent and trivalent metallic cations and Ay− the anionic species, respectively .The isomorphous substitution of some of the MII cations in the brucite-like layers by a trivalent MIII cation leads to the generation of a stacking of positively charged sheets.This positive charge is balanced by anions intercalated in the galleries between these mixed hydroxide layers.The flexibility in terms of metal ions variation as well as their ratio and change of anions in the interlayer gives rise to a large class of isostructural materials, which are very attractive in a broad range of applications .In the past decade, the use of LDH nanocontainers in the field of corrosion protection has considerably evolved due to the anion-exchange aptitude of LDH .This ability of easy anion-exchange can be exploited as a means for corrosion protection in two separate but complementary ways: (i) as a host for corrosion inhibitors in anionic form between the interlayers, released on demand when triggered by changes at the interface or the presence of corrosive species , and (ii) as an uptake route for corrosive species into the galleries, behaving as nanotraps , while the inhibitor is released.Following the pioneering works of Buchheit et al.
, a lot of effort has been devoted to LDH growth as a conversion coating on active metals with the main focus on the corrosion protection of aluminium and magnesium alloys .However, recently the method was extended to other metallic substrates such as zinc, aiming equally at the corrosion protection of zinc galvanized steel structures .The cations involved in the in-situ Zn-Al LDH conversion layer formation on both aluminium and zinc substrates are the same.The only difference resides in the source of these cations.In the case of the aluminium substrate, Al3+ is generated from the dissolution of the aluminium substrate itself, whereas on the zinc substrate Zn2+ is generated from the dissolution of the zinc substrate .This could imply an apparent difference in the finally formed LDH structure and morphology.In the present study, we investigated possible discrepancies in the formation of Zn-Al LDH-NO3 on a pure zinc substrate and on an AA2024 aluminium alloy.Furthermore, chloride was chosen as a model anion to investigate the anion exchange reaction in these two systems since it works as a main trigger for release of inhibitors when LDH is used for corrosion protection.The chemicals used in the current work are aluminum nitrate nonahydrate (Al(NO3)3·9H2O, >99%, Sigma-Aldrich, Germany), zinc nitrate hexahydrate (Zn(NO3)2·6H2O, >99%, Carl Roth, Germany), ammonium nitrate, ammonia solution, sodium nitrate, sodium chloride and deionized water (Millipore™, >18.3 MΩ).The test samples and their compositions are described below: a) Aluminum substrate: AA2024; b) Zinc substrate: zinc.The size of the substrates in both cases was 10 × 10 mm and the specimens were polished using 2500 grit SiC paper, rinsed with deionized water and left to dry at room temperature under air conditions.The synthesis procedure employed in this work was reported in a recent publication by Mikhailau et al.
.Briefly, Zn-Al LDH-nitrate was grown on zinc substrates in a 1 mM solution of Al(NO3)3 and 0.1 M NaNO3 at pH 3.2 and a temperature of 90 °C.The source of the Zn element is provided by the dissolution of the zinc substrate.The synthesis time was adjusted to achieve the best result in terms of LDH phase structural uniformity.Care was taken to remove carbon dioxide from the water solutions, reducing its impact on the structural properties of the synthesized LDH.The LDH synthesis was performed according to the procedure described in previous works .Zn-Al LDH-nitrate was grown in a hydrothermal bath containing Zn(NO3)2 and NH4NO3 at 95 °C for 30 min.In this case, the source of the Al element is provided by the dissolution of the AA2024 substrate.The obtained Zn-Al LDH-NO3 treated AA2024 and zinc samples were then moved to the set-up used for the in-situ monitoring and study of the anion exchange reaction with NaCl.The anion exchange step was carried out in a solution of 0.1 M NaCl at room temperature for 30 min.For the sake of interpretation, the LDH-NO3 and LDH-Cl will be denoted Zn-LDH-NO3 and Zn-LDH-Cl when grown on zinc, and Al-LDH-NO3 and Al-LDH-Cl when grown on the aluminum substrate.The studied specimens were examined both in top and cross-sectional views using a Tescan Vega3 SB scanning electron microscope equipped with an eumeX energy dispersive X-ray spectrometer for the elemental composition analysis of the specimens.The study of the growth of LDH films on aluminum and zinc substrates as well as of the ion-exchange processes at the surfaces was carried out at the P08 high-resolution diffraction beamline of the PETRA III synchrotron radiation source with an X-ray energy of 25 keV .The experimental set-up comprised an in-situ flow cell with a transparent window allowing the grazing incidence X-ray diffraction measurements.The cell was also connected to a neMESYS pump composed of a set of syringes, controlling the electrolyte solution flow in and out of the cell.The ion exchange conditions were controlled remotely using software provided by neMESYS.Therefore, it was possible to control the ion-exchange reaction while simultaneously performing the diffraction experiment.The diffraction patterns were collected before the start of the anion-exchange reaction and then continuously recorded at an interval of 0.54 s after the start of the anion exchange.A two-dimensional Perkin Elmer detector with a pixel size of 200 µm was used.The sample to detector distance was set to 1.426 m.The radial integration was performed using the GSAS II package.The results were then analyzed and treated using the FAULTS and AMORPH software packages.FAULTS was utilized to refine the diffraction patterns and identify the atom positions, while AMORPH was used for the quantification of the amorphous phases present.It is worth mentioning that the results described in this paper are specific to this experimental set-up in terms of the cell geometry, flow rate, etc.The detailed study of the influence of these particular parameters, as well as of the features of the anion exchange kinetics, is not in the scope of the current study but will be investigated in future work.Cross section SEM images of LDH-NO3 prepared on zinc and AA2024 substrates are shown in Fig.
2.The LDH conversion layer produced on zinc substrate has approximately 10–12 µm in thickness while the LDH on AA2024 is less than 1 µm, apart from some zones where other effects intervene to produce bigger flakes as demonstrated before .The SEM planar view of the LDH layer grown on zinc and AA2024 are depicted in Fig. 3.Fig. 3.a and 3.b show zinc surface covered with LDH and with a typical plate-like structure.This perpendicular plate-like structure could not be witnessed with the cross-section image Fig. 2.a.The reason being that, the sample preparation may have been slightly aggressive and altered with the layered by crushing and damaging the structure.At lower magnifications, some areas of the surface show a non-homogeneous distribution with respect to the size of the flakes.After an anion exchange reaction in a solution of 0.1 M NaCl, no visible changes in the surface morphology can be observed.By comparing the LDH layers that were formed on the zinc substrate to the LDH formed on AA2024 substrate, a different surface morphology can be seen.This morphology differs in terms of the flakes shape and size.Whereas the Zn-LDH-NO3 and Zn-LDH-Cl flakes form an assembly of bigger and well-defined lamellas, the Al-LDH-NO3 and Al-LDH-Cl present significantly smaller flakes aside from the zones of Cu-rich intermetallic, where large LDH islands can be observed due to the micro-galvanically induced dissolution of aluminum .Similarly, to LDH on zinc, the overall LDH structure is preserved after the anion-exchange with Cl−,The EDS analysis of the surface is presented in form of two-dimensional maps followed by the extracted representative EDS spectra Fig. 5.The EDS maps show an uneven distribution of the elements Al, Zn, N and O on both Zn-LDH-NO3 and Zn-LDH-Cl.Chloride is found only on Zn-LDH-Cl after the ion exchange reaction.Although the LDH layer thickness on the zinc substrate is relatively homogeneous, the presence of Cl is noted only on some specific zones.These specific zones are also enriched in zinc, this might entail that the anion-exchange between NO3– and Cl− has led as well to the formation of intermediate phases with a complex Zn and Cl composition.This can be verified later in the text by the XRD analysis.In the case of LDH film on Al alloy substrate, the same elemental distribution can be observed.But contrary to LDH on Zn substrate, here, the elements are mainly accumulated at the thicker LDH islands.Due to the difference in the thickness of the LDH layers for the different substrates and on the AA2024 alloy itself, less X-ray emission comes from thinner areas in comparison the thicker zones.The difference in the total accumulation of the signals leads to the observation of a clear contrast on the elemental distribution maps.Moreover, it can be seen that after anion-exchange with chlorides, the Cl element is present on the same sites where Zn is found.This is an indication of the successful intercalation of Cl− between the LDH interlayers.The XRD pattern of the LDH-NO3 on the zinc substrate demonstrates a single-phase LDH with a good crystallinity.The crystal structure was successfully refined with FAULTS software , within the rhombohedral space group R-3m.For the refinement three-layered polytypes 3R1 and 3R2 were taken, which denote the trigonal prismatic and octahedral arrangements of hydroxyl groups in adjacent layers, respectively .Moreover, additional non-LDH peaks could be identified.These peaks could not be properly assigned, but this brings us back to the EDS results 
described above.In the case of the pattern in Fig. 5.a-2, the peak indicated by an asterisk and arrow may be assigned to the formation of zinc hydroxide chlorides as intermediates.The latter share some similar features with LDH .It is worth emphasizing that for the refinement process, the water background was subtracted for all patters excluding the pattern in Fig. 5.b-2.The latter representing the Al-LDH-Cl, could not be refined after subtraction of water background.Whereas for the other spectra, the background subtraction did not alter with the crystal structure refinement.The obtained values of the crystal structure parameters are given in Table 3.The cell parameter c for Zn-LDH-NO3 equals to 26.862 Å and is close to that one reported in previous works .This c value allows to conclude that in Zn-LDH-NO3, nitrate ion plane makes a sufficiently large angle with the metal hydroxide layer - close to ~70° as it was established in and is represented in Fig. 7.This situation clearly corresponds to a high cation layer charge and a Zn:Al ratio to be equal to 2:1 .Indeed, refinement of the structures were made with other Zn:Al ratios but only a the ratio 2:1 provided a consistent agreement between the experimental and calculated XRD patterns.Moreover, in a work by S. Marappa et al. , several compositions with different Zn:Al ratios were studied.At a high layer charge, the authors obtained an angle equal to ~70° between NO3– and hydroxide layers.This makes sense since for compensation of the high charged cation layers, it is necessary to pack plain NO3– anions as close as possible.Therefore they tend to be perpendicular to these cation layers.Reflections and were used for the calculation of the average dimensions of the crystallites along c-axis and in the direction of a-b plane respectively.In this way, an average flake dimension along the a-axis and c-axis of La = 107.0 ± 7.5 nm and Lc = 49.4 ± 3.4 nm was obtained Zn-LDH-NO3, respectively.In the course of the intercalation process with Cl−, the average flake size becomes smaller in both directions, reaching La = 78.4 ± 5.5 nm and Lc = 30.9 ± 2.2 nm.The X-Ray pattern for Al-LDH-NO3 has other features.It can clearly be seen, that the LDH reflections, including basal and ones, are asymmetrically broadened.The small sizes of the Al-LDH-NO3 flakes provide this effect, presumably in combination with the two following factors; i) The first one can be associated with the fact that the planar anions NO3– have a coordination symmetry which is different from the one of the interlayer site, hence introducing a significant stacking disorder .This could result in a broadening of the non-basal peaks.ii) The second one can be connected with a considerable disorder in the geometrical alignment of NO3– anions in between the positively charged layers, which leads to interstratification.This in turn would lead to the broadening and asymmetry of the basal reflections.The absence of the same broadening on X-ray patterns of LDH grown on Zn substrate can mean that LDH growth is highly influenced by nature of the substrate, in other words the source of the generated Al3+ and Zn2+ cations.Chloride intercalated Al-LDH-Cl reveals a more ordered structure since the Cl− anion has a symmetry compatible with that of the interlayer site.However, the non-uniform distribution of the metal cation layers remains still could present, along with small flake size leading to the broadening of and reflections for Al-LDH-Cl.Indeed, in the case of Al-LDH-NO3, the estimation according 
expression gives some contradictory values like Lc = 6.9 ± 0.5 nm before intercalation and Lc = 11.5 ± 0.8 nm – after the reaction.Thus the broadening of the peaks cannot be attributed entirely to the size effect, and the average size of the flakes cannot be estimated with proper accuracy.Looking into the SEM images for the Al-LDH-NO3 and Al-LDH-Cl, it can be seen that it is the impractical to draw a conclusion on the basis of the X-ray diffraction pattern solely.The overall morphology and mechanism of LDH growth on AA2024 is governed by a number of factors that can be associated to the complexity of the AA2024 alloy composition .That being said, the shift of the and basal reflections into higher angles, after anion-exchange with Cl− is still a direct consequence of a decrease in the basal spacing.Indeed, the replacement of nitrates in the LDH interlayers with chlorides leads to a contraction of the interlayers that manifests by a decrease in the basal spacing .The time evolution of the diffraction patterns during anion-exchange process is presented in Fig. 8.The emergent appearance of the new reflections near the basal ones – and – indicates the formation of a new crystal phase and coexistence of two crystal phases during an appreciable time.Thereafter, the basal reflections and corresponding to parent crystal phase, eventually disappear.The fast formation of Zn-LDH-Cl phase in the case of the current experiment, can be detected by the broadening of the basal reflection at the ~44th second.This broadening is related to the coexistence of both Zn-LDH-NO3 and Zn-LDH-Cl crystal phases.The FWHM of the new reflection is comparatively broad and equal to 0.180°, while the peak width of reflection for the parent Zn-LDH-NO3 is equal 0.061°.This reflects also the fact that after induction stage, anion-exchange reaction starts from comparatively modest volume, which provides small coherent scattering region.The new reflection, corresponding to Zn-LDH-Cl phase becomes narrower when Cl− replaces NO3– and finally the SCSR size effect disappears, leading to FWHM ≈ 0.073°.For Al-LDH-NO3, the broadening of the 003 reflection at the ~37th second can be observed indicating the start of the anion exchange process.Then as Cl− replaces NO3–, the 003 reflection becomes narrower and more symmetrical in shape.It can be noted, that the anion exchange process on Zn substrate is faster than on Al substrate.On the latter case, the first exchange occurs rather fast during the first 60 s and continues at a much smoother pace for a duration of approx. 200 s. Whereas, for Zn substrate, the kinetics are different since the overall exchange ceases after about ~70 s.In addition to the above processes, a strong amorphous phase appears in a considerable amount during the intercalation step for both Zn and Al substrates.This amorphous phase can be mainly ascribed to the scattering from water.Considering the stability of the crystal structure on both Zn-LDH-NO3 and Al-LDH-NO3 with respect to intercalation, it is important to mention that this process is accompanied by a particular decomposition of crystal phase.The decrease of the integrated intensities of the Bragg reflections of the host compounds witnesses about the decrease of the number of the scattering centers.Thus, some remains of the crystalline substance also contribute to the amorphous halo.The time evolution of the volume fraction of the amorphous phase is also demonstrated in Fig. 
The time evolution of the volume fraction of the amorphous phase is also demonstrated in Fig. 8. The separation of the integral intensities of the crystalline component from the amorphous one in the measured diffraction patterns was made using the AMORPH software. The structure profile refinement of the chloride-intercalated compounds, Zn-LDH-Cl and Al-LDH-Cl, was made using the FAULTS software within the rhombohedral R-3m space group and the polytypes 3R1 and 3R2. Examples of refinement are shown in Fig. 9. The analysis allows preference to be given to the 3R1 polytype for LDH–Cl on the Zn substrate. For Zn-LDH-Cl, the best fit was obtained using a model with two different gallery types, with both Ow and Cl− located at the 18h site but with different coordinates. The cations are located on the 3a site, and the hydroxyl groups are distributed on the 6c site. The analysis did not allow the deduction of any kind of preferential stacking for Al-LDH-Cl. This can be the result of the turbostratic disorder of the cation layers mentioned previously, which is preserved in the intercalated substance. The best fit for chloride-intercalated Al-LDH-Cl gives a model where the Cl atoms occupy the positions corresponding to gallery type 2 in Zn-LDH-Cl. A graphic representation of the galleries' structure is illustrated in Fig. 10. The unit cell parameters for Zn-LDH-Cl and Al-LDH-Cl are specified in Table 3, where they are compared to those of the respective parental LDH–NO3. As expected, the intercalation process has no influence on the structure of the metal hydroxide layers, while the difference between the c-parameter values indicates a decrease of the interlayer distance. The gallery heights h for both LDH layers were estimated by subtracting the value of the hydroxide layer thickness from the basal spacing d, which can be calculated as c/3. The value of the layer thickness for Zn-based LDH was taken from a previous work and is equal to 0.471 nm. The obtained values are also presented in Table 3. Overall, there is a coherent relation between the morphology of the LDH film observed in the SEM images, the elemental analysis obtained by EDS mapping and the crystal structure refinements deduced from the in-situ synchrotron X-ray diffraction data. The driving mechanisms for LDH growth on the zinc and AA2024 substrates are clearly distinct. This can already be seen from the thickness of the LDH layers. A relevant observation to add for the case of the AA2024 substrate is that the presence of intermetallics led to an inhomogeneous size distribution, manifested in zones with large LDH islands, as already stated above and in previous studies. LDH growth on Al involves dissolution of the aluminum oxide layer, liberating the required aluminate ions that then react with zinc cations and other relevant species to form Zn-Al LDH. Around the intermetallic zones, a different dissolution process governs. The latter is much more complex and could have affected the course of LDH growth in terms of kinetics and final LDH composition around this area. This makes it significantly more challenging to obtain concrete information from the crystallite size calculations for Al-LDH-NO3 and Al-LDH-Cl following the anion-exchange reaction with chlorides. In terms of the kinetics of the anion-exchange reaction, additional studies are also required. A change in the crystallite size of the LDH on the Zn substrate, in comparison with LDH on the AA2024 aluminum alloy, was determined based on the evaluation of the XRD results. However, this change cannot be observed from the SEM images.
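As a worked example of the gallery-height estimate described above (h = d − layer thickness, with d = c/3 and a layer thickness of 0.471 nm), the snippet below applies the formula to the c parameter quoted earlier for Zn-LDH-NO3; the c values of the chloride forms are listed in Table 3 and are not reproduced in this excerpt.

```python
def gallery_height_nm(c_angstrom, layer_thickness_nm=0.471):
    """Gallery height h from the hexagonal cell parameter c (three layers per cell)."""
    d_nm = (c_angstrom / 3.0) / 10.0   # basal spacing d = c/3, converted from Angstrom to nm
    return d_nm - layer_thickness_nm

# Zn-LDH-NO3, c = 26.862 Angstrom (value quoted above); the chloride forms give smaller h
print(round(gallery_height_nm(26.862), 3))   # ~0.424 nm
```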
Moreover, other factors related to the rearrangement of the interlayer anion during the exchange reaction could not be clearly explained for now. Regardless, a hypothesis can be proposed: when the charge of the cationic Zn:Al host layer corresponds to a ratio close to 2:1, the NO3– anions are oriented almost vertically, which expands the gallery height and makes it easier for the Cl− anions to enter the galleries. The fast exchange rate between NO3– and Cl− supports this statement. On the other hand, at a lower charge of the cationic host layers, the NO3– anions will be positioned parallel to the cation layer. Therefore, Cl− anions will not be able to enter the galleries as quickly, thus reducing the exchange reaction rate. However, this remains a hypothesis, needs a profound study and investigation, and will be taken into consideration in the future. A thorough crystal structure study has been performed in order to compare LDH grown directly on two different substrates. For this purpose, SEM/EDS examination and X-ray diffraction measurements were used. The combination of these methods allowed the establishment of the following points: The overall parent LDH-NO3 structure is similar in both cases. LDH grown on zinc and on AA2024 both belong to the hexagonal lattice group with rhombohedral space group R-3m. The main difference resides in the flake size of the LDH grown on the Zn and AA2024 substrates. After anionic exchange with Cl−, no changes to the metal hydroxide layers on the different substrates were detected in the SEM images. However, the calculation for Zn-LDH-Cl revealed a decrease of the LDH crystallite size. Moreover, the crystal structure refinement indicated a different interlayer arrangement for Zn-LDH-Cl, while Al-LDH-Cl exhibits the same interlayers. In this work, the importance of the kinetic aspect of the anion-exchange reaction and the influence of the interlayer anion rearrangement on the process became apparent. Aside from performing a more precise study of the influence of the substrate composition on LDH growth, it would also be necessary to carry out a systematic study of anion-exchange reactions with various anions to establish a clear pattern and obtain more valuable mechanistic information. | The dissimilarities and features of the crystal structure of ZnAl LDH-NO3 conversion layers grown directly on pure zinc and aluminum alloy 2024 were investigated in the present paper. Although the nature of the cations in the double hydroxides is the same in both cases (Al3+ and Zn2+), their sources differ according to the substrate. This leads to a difference in the cationic layers and interlayer structure, which consequently influences the anionic exchange reaction. In the frame of this work, the kinetics of the anion exchange of nitrate by chloride was investigated, as well as the crystal structure of the resulting ZnAl LDH-Cl on both substrates. Synchrotron high-resolution X-ray diffraction was the main method used to obtain structural information and was supported by additional calculations and scanning electron microscopy. The current study revealed noticeable changes in the positioning of the interlayer atoms for the ZnAl-LDH-Cl on zinc in comparison with the ones on the AA2024 substrate. |
31,417 | Modeling the response of ON and OFF retinal bipolar cells during electric stimulation | Neuroprosthetic subretinal implants for blind patients have shown that restoration of vision is principally possible. The quality of artificially created vision has not yet reached levels comparable to natural vision in humans, and this goal may be a difficult one to achieve. Subretinal implants in retinitis pigmentosa patients suffering from photoreceptor degeneration are located in the area formerly occupied by rods and cones, between the retinal pigment epithelium and the outer plexiform layer. Due to their location, retinal bipolar cells (RBCs) are assumed to be the primary target of extracellular electrical stimulation through a subretinal multi-photodiode array (MPDA). Voltage-controlled stimulating pulses create a longitudinal voltage gradient along the membrane of RBCs. One of the factors limiting the resolution of artificially generated visual percepts may be the simultaneous and unselective stimulation of ON-type and OFF-type cone bipolar cells (CBCs). These initiate the functionally opposing retinal ON- and OFF-pathways. Simultaneous activation of the two pathways could potentially result in a mutual cancellation, such that no visual perception would be elicited. While an exact annihilation is highly unlikely to be caused by a subretinal implant, there is sufficient reason to be concerned about achievable contrast and resolution. Although several experimental studies have introduced more sophisticated stimulating approaches such as high-frequency stimulation, no appropriate stimulating strategies to avoid co-activation of the ON and OFF pathways have been found so far. The cellular processes occurring during extracellular stimulation under in vivo conditions in retinal implant patients are largely inaccessible to presently available measurement techniques. A better understanding, especially of membrane currents and related biophysical events, is required to improve the stimulation strategies used today. With focus on RBCs, we here describe a modeling approach for deepening our understanding of these processes. Previous conductance-based models of RBC responses to electrical stimulation exist; however, these assumed no presence of voltage-gated ion channels, used single-compartment morphologies or were based on lower vertebrate data. This new model is based on realistic morphological, immunochemical and electrophysiological data from ON-type and OFF-type CBCs of the rat, due to the good availability of experimental rat data today and relatively simple experimental verification possibilities. Furthermore, mammalian retinas have many properties in common, making these modeling results more applicable to human clinical studies. Two voltage-gated calcium ion channels of rat CBCs were integrated into the model through fitting of Hodgkin-Huxley-like equations for ionic currents to published electrophysiological recordings. In combination with a model describing the change of the intracellular calcium concentration [Ca2+]i, this allowed us to investigate the synaptic activation of CBCs during extracellular stimulation. We can therefore make testable predictions about the influence of different stimulation paradigms on retinal activation and their suitability for sustainable, selective stimulation of retinal ON- and OFF-pathways using a subretinal implant. The calculation of the transmembrane voltage Vm and the ionic currents Iion during extracellular stimulation requires quantitative knowledge of the extracellular voltage Ve, which is generated by
the electrodes on the MPDA in a subretinal location. Therefore, the calculation is realized in two separate and methodologically different steps. In the first step Ve is calculated, and in the second step the response of a target cell is evaluated using a multi-compartment model. Monopolar stimulation was realized using a disk electrode with 50 μm diameter and a height of 10 μm attached to the surface of the chip layer. The distant return electrode used in the retinal implant was simulated by setting the boundary conditions of the retinal layer to ground at its outer boundaries. Historically, RBCs were initially assumed to be passively responding neurons. However, during the last two decades, successive new discoveries of active, voltage-gated ion channels in the membrane of RBCs have been made. For the active model, the previously proposed ion channel equipment was simplified to a model that only contains Ca++ T-type channels or Ca++ L-type channels in the synaptic terminals, in order to investigate [Ca2+]i, which is responsible for synaptic activity. T-type and L-type Ca++ currents have been found in rat CBCs. A detailed study on the differential expression of T-type and L-type Ca++ channels proposed that rat CBCs could be divided into two groups: T-rich cells with prominent T-type and weaker L-type Ca++ currents, or L-rich cells with more L-type and less T-type Ca++ currents. Strong T-type Ca++ currents have been found previously in rat type 5 and type 6 CBCs, which are ON cells. To maximize the differences between ON and OFF, the ON model was therefore set to be a T-rich cell while the OFF model was L-rich. The conductance-based calculation of ionic currents in dependence of Vm was performed in a custom-made multi-compartment neuronal membrane model using Mathworks Matlab. A set of differential equations was used for each ion channel type, based on the formalism developed by Hodgkin and Huxley. The Nernst potential of Ca++ depended on [Ca2+]i and was calculated dynamically. However, due to the extremely low [Ca2+]i, the majority of the outward current through Ca++ channels is carried by K+ ions. Therefore, the effective equilibrium potential of Ca++ channels is influenced by the K+ equilibrium potential. Different ratios were used for the T-type and L-type Ca++ channel models. A model developed previously for T-type Ca++ currents in HEK-293 cells transiently expressing human Cav3.3 channel subunits was adapted for the simulation of T-type Ca++ currents in rat CBCs, with one activating gate and one inactivating gate. The model appropriately reproduced experimentally measured rat CBC T-type Ca++ currents. Fig. 3A shows the current density over time for one synaptic compartment during simulated voltage clamp experiments. The holding potential was −80 mV and was increased in increments of 25 mV up to 45 mV. Two different models were combined to optimally simulate voltage-dependent activation and inactivation of L-type Ca++ currents in rat CBCs. The two activation gates were adapted from a model developed for L-type Ca++ currents in feline RGCs, while the inactivation gate was based on a model created for rat hippocampal CA3 pyramidal neurons.
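To make the gating formalism concrete, the sketch below implements a generic two-gate (one activating, one inactivating) T-type current under the voltage-clamp protocol described above (holding at −80 mV, steps in 25 mV increments up to +45 mV). The Boltzmann midpoints, slopes, time constants and reversal potential are illustrative placeholders, not the fitted Cav3.3-based parameters of the paper.

```python
import numpy as np

def steady_state(v, v_half, k):
    """Boltzmann steady-state value of a gating variable."""
    return 1.0 / (1.0 + np.exp((v_half - v) / k))

def simulate_step(v_hold=-80.0, v_step=-30.0, t_step=5.0, t_end=50.0, dt=0.01,
                  g_max=1.0, e_ca=45.0):
    m = steady_state(v_hold, -50.0, 6.0)     # activation gate at the holding potential
    h = steady_state(v_hold, -70.0, -6.5)    # inactivation gate at the holding potential
    i_trace = []
    for t in np.arange(0.0, t_end, dt):
        v = v_step if t >= t_step else v_hold
        tau_m, tau_h = 2.0, 20.0             # ms, assumed voltage-independent here
        m += dt * (steady_state(v, -50.0, 6.0) - m) / tau_m
        h += dt * (steady_state(v, -70.0, -6.5) - h) / tau_h
        i_trace.append(g_max * m * h * (v - e_ca))   # current density (arbitrary units)
    return np.array(i_trace)

# Protocol from the text: hold at -80 mV, steps in 25 mV increments up to +45 mV
traces = {v: simulate_step(v_step=v) for v in range(-55, 46, 25)}
```

The product m·h qualitatively reproduces the transient, low-threshold activation/inactivation behaviour; the published rate equations would replace the placeholder kinetics.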
Fig. 3B shows the current density for one synaptic terminal compartment during a simulated voltage clamp procedure. The holding potential was set to −70 mV and the clamp voltages were −45, −20, +5, +30 and +55 mV. Due to the lack of three-dimensional representations of traced CBC morphologies in publicly available databases, a custom-made Matlab tool was developed for the creation of such morphologies. The x- and z-dimensions of the morphological model can be extracted from 2D print images of traced bipolar cells. The cellular extent in the y-dimension is generated using a confined, normally distributed random variable based on the 2D extent of the neuronal processes. For the ON model, an identified type 9 rat CBC was used. Projections of the ON model morphology are shown in Fig. 4A from a frontal, lateral and top position. The type 3 CBC was used for the morphology of the OFF model. The ON model neuron consisted of 94 compartments, whereas the OFF geometry consisted of 78 compartments. In all simulations, the dendritic tree had a distance of 25 μm to the surface of the stimulating electrode.
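A minimal sketch of the y-coordinate assignment used in the morphology reconstruction just described: each traced process receives a y-extent drawn from a confined (truncated) normal distribution scaled by its 2D extent. The original tool is implemented in Matlab; this Python version and its scaling factor are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_y_extent(xz_extent_um, frac_sd=0.25):
    """Assign a y-coordinate from a confined normal distribution whose spread scales
    with the 2D extent of the process; frac_sd is an assumed scaling factor."""
    y = rng.normal(0.0, frac_sd * xz_extent_um)
    return float(np.clip(y, -xz_extent_um / 2, xz_extent_um / 2))  # confine to the 2D extent

# e.g. a dendritic tip whose traced 2D extent is 12 um
print(sample_y_extent(12.0))
```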
All stimulations were either performed in voltage clamp mode, to explore the behavior of the various ion channel models, or as extracellular stimulation, to investigate the influence of an external electric field on the model fibers. Monophasic rectangular anodic and cathodic pulses were delivered, as well as biphasic anodic-first and cathodic-first stimuli; for the stimulating paradigm see Fig. 5. The response of the model neurons to stimulus bursts and sinusoidal stimulation was also explored. The default pulse length used in the simulations was 0.5 ms or 1 ms, as these are also common in clinical trials. 1 V was set as the default stimulating amplitude, since stimulation amplitudes with the Tübingen subretinal implant typically range between 0–2 V and maximally reach 2.5 V to prevent tissue damage. The choice of 1 V, rather than a value closer to the charge injection limit, leaves additional room for further optimizations. In a first step, the previously described passive model was used to determine the influence of geometric factors on the excitation of the two model neurons. Second, simulations with the active model were conducted to examine which stimulating paradigms might be able to stimulate either ON or OFF CBCs selectively. As described previously, the key assumption of the model was that ON-type CBCs exhibit L-type calcium channels in their synaptic terminals whereas OFF-type CBCs show T-type channels. To test this hypothesis, we computed the response of both cells containing the same ion channel equipment, and we also tested how the two model neurons respond to electric stimulation without any voltage-gated ion channels. Furthermore, monophasic, biphasic, repetitive and sinusoidal stimulation was tested to examine which stimulation paradigms might activate ON and OFF cells differently. For further investigations, the passive model was extended with calcium channels in the synaptic terminals, since the intracellular calcium concentration controls the synaptic activity between bipolar cells and retinal ganglion cells. A previous study did not make any assumptions about whether T-rich or L-rich cells can be divided into subgroups with different functions. However, Ivanova and Müller measured strong T-type currents in type 5b, 6a and 6b CBCs, which are all ON type cells. Furthermore, very weak L-type currents were reported in type 3 CBCs, which are OFF type cells. Therefore, L-type calcium channels were added to the ON CBC terminal compartments and T-type channels to the OFF CBC synaptic compartments, respectively. To investigate how the intracellular calcium concentration [Ca2+]i changes when the axon length is varied, we used the OFF CBC morphology as the standard cell and elongated the axonal and synaptic parts. In Fig. 8, the membrane voltage over time and the corresponding [Ca2+]i for different lengths are shown. A 0.5 V pulse leads to small variations of Vm. The synaptic compartments of the standard geometry depolarize to a maximum of approximately −10 mV, whereas a longer axonal and synaptic portion results in a slightly higher membrane voltage of +5 mV. Intracellular calcium, however, shows large differences, from 0.66 μM up to 1.41 μM. When an identical ion channel equipment is used in both cells, i.e. either L- or T-type channels in the synaptic compartments of both geometries, the synaptic calcium concentration is mainly influenced by the applied pulse and the depolarization characteristic of the model neuron. In Fig. 9A, the L-type channel was implemented in both the ON and OFF CBC. A 0.5 ms, 0.5 V pulse leads to a higher increase of [Ca2+]i in the ON CBC because of its more depolarized membrane. However, when a 1 V pulse is applied, calcium increases to the same value in both geometries, which is again likely because of the maximum inward calcium current and a consequently distinct maximum internal calcium concentration, as can be seen in the corresponding panel. A further panel shows that if both neurons are depolarized equally to about +20 mV, [Ca2+]i also does not differ between the two neurons. Fig. 10 shows the variations in maximum synaptic [Ca2+]i at the terminal compartments for 816 different cell positions relative to the stimulating electrode when a standard pulse is delivered. The center pixel in the bottom row represents the standard position: the cell soma is centered above the electrode, and the distance between the dendritic tree and the electrode is 25 μm in the z-direction. When the electrode is shifted in the x-direction as well as in the z-direction, the maximum synaptic calcium concentration changes. Each pixel represents a 2 μm shift. Note that the two-dimensional maps in Fig. 10 are not exactly symmetric because of the asymmetric cell morphologies. Furthermore, the ON CBC in Fig. 10A does not have its maximum [Ca2+]i at the standard electrode position, unlike the OFF CBC. For the ON CBC the change of [Ca2+]i is fairly small, whereas the OFF CBC shows larger variation.
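The paper computes [Ca2+]i dynamically from the synaptic calcium current; the exact formulation is not reproduced in this excerpt. A generic single-pool coupling of the kind commonly used in compartment models is sketched below, with assumed compartment area, volume and clearance time constant; only the −12 μA/cm2 test current echoes a value reported in the text.

```python
import numpy as np

F = 96485.0          # C/mol
area_cm2 = 1.0e-7    # assumed membrane area of a terminal compartment
vol_l = 1.0e-15      # assumed submembrane volume (1 fL)
tau_ca_ms = 20.0     # assumed clearance time constant
ca_rest = 0.15       # uM, close to the resting level quoted in the text

def ca_step(ca_uM, i_ca_uA_per_cm2, dt_ms=0.01):
    """Advance [Ca2+]i by one time step given the Ca current density (inward < 0)."""
    i_amp = i_ca_uA_per_cm2 * 1e-6 * area_cm2                 # total current in A
    influx_uM_per_ms = -i_amp / (2 * F * vol_l) * 1e6 * 1e-3  # mol/L/s -> uM/ms
    return ca_uM + dt_ms * (influx_uM_per_ms - (ca_uM - ca_rest) / tau_ca_ms)

# e.g. a 0.5 ms inward current of -12 uA/cm2 (cf. the peak amplitudes reported in the text)
ca = ca_rest
for _ in range(50):
    ca = ca_step(ca, -12.0)
print(round(ca, 2), "uM at pulse offset")
```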
The OFF CBC depolarizes less than the ON CBC during the three stimulations, as shown in Fig. 11B. A 0.5 V pulse leads to a maximum membrane voltage of approximately −7 mV, a 1 V pulse to +41 mV and a 2 V pulse to +130 mV. [Ca2+]i only increases slightly, to 0.7 μM, when a 0.5 V pulse is delivered; 1 V and 2 V pulses lead to 3.05 μM and 3.6 μM, respectively. The calcium current is entirely inward during the 0.5 V pulse, whereas during the 1 V and 2 V pulses outward current densities also occur when the cell depolarizes. The inward peak amplitudes are 2 μA/cm2, 12 μA/cm2 and 15 μA/cm2, respectively. Interestingly, although the ON CBC becomes more depolarized than the OFF type cell during all pulses, higher and more sustained calcium concentrations can be evoked in the OFF CBC. This discrepancy has two major reasons: the density of calcium channels in the synaptic terminals differs between the two cells, and the kinetics of the two channel types are not the same. The fairly fast return of [Ca2+]i to the resting state in the ON CBC is caused by de-activation, since the membrane voltage drops back to the resting potential within 1 ms after pulse offset. Because the activation variable m is almost 0 at resting potential, the synaptic calcium influx stops immediately afterwards, and [Ca2+]i quickly returns to its resting value. The time constant of the T-type channel, on the other hand, has larger values at resting potential than in depolarized states. Therefore, the slower change of activation also leads to a slow decrease of the calcium current amplitude and thus to a sustained [Ca2+]i level. The described behavior of the two ion channels, as well as the different de- and hyperpolarization characteristics, are probably the reasons for the fairly large differences in Fig. 10.
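For reference before the biphasic results that follow, the charge-balanced anodic-first and cathodic-first voltage pulses of the stimulation paradigm (1 V amplitude, 1 ms overall duration, cf. Fig. 5) can be sketched as below; equal phase durations and the sampling step are assumptions.

```python
import numpy as np

def biphasic_pulse(amplitude_v=1.0, total_ms=1.0, anodic_first=True, dt_ms=0.01):
    """Charge-balanced rectangular biphasic voltage pulse: two equal phases of
    opposite polarity (equal phase durations are assumed here)."""
    n_phase = int(round(total_ms / 2 / dt_ms))
    first = np.full(n_phase, amplitude_v if anodic_first else -amplitude_v)
    return np.concatenate([first, -first])

anodic_first = biphasic_pulse(anodic_first=True)    # +1 V for 0.5 ms, then -1 V for 0.5 ms
cathodic_first = biphasic_pulse(anodic_first=False) # -1 V for 0.5 ms, then +1 V for 0.5 ms
```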
Anodic-first and cathodic-first pulses were applied to the two model neurons to examine how [Ca2+]i changes. Fig. 12A displays the time course of the transmembrane voltage during 1 V anodic-first and cathodic-first pulses with an overall length of 1 ms. The synaptic compartments of the ON CBC depolarize up to approximately 90–100 mV for both stimulation modes. The resulting peak [Ca2+]i for the ON CBC was 0.39 μM during anodic-first stimulation and 0.77 μM during cathodic-first stimulation. In Fig. 12B it can be seen that the membrane voltage of the OFF CBC depolarizes less than in the ON CBC, to approximately 45 mV. [Ca2+]i in the OFF CBC synaptic compartments increases to about 3.1 μM when an anodic-first stimulus is applied and to 2.9 μM during a cathodic-first pulse. In sum, cathodic-first stimulation leads to a higher [Ca2+]i in the ON CBC, whereas for the OFF CBC anodic-first stimulation results in a higher [Ca2+]i. Since retinal implants use stimulus bursts to evoke visual perceptions in blind people, it was tested how repetitive stimulation would affect synaptic [Ca2+]i in the two model neurons. A monophasic, cathodic pulse with a pulse length of 0.5 ms and a stimulus amplitude of 1 V was applied 5 times. The inter-stimulus intervals between the 5 consecutive pulses varied between 0.5 ms and 49.5 ms. A 1 ms stimulus period results in a slight increase of [Ca2+]i over consecutive pulses. The first pulse raises [Ca2+]i to approximately 0.75 μM, and the following pulses increase [Ca2+]i in the synaptic compartments to a peak value of 1 μM. After pulse offset, [Ca2+]i went back to its resting state within 4–5 ms. When the same stimulation paradigm is applied to the OFF CBC, [Ca2+]i increases to values of up to 3.8 μM after the last of the 5 pulses. After the last pulse, it takes about 40–50 ms to bring [Ca2+]i back to its initial value of 0.1501 μM. To investigate the influence of sinusoidal stimulation on synaptic calcium currents, we simulated how three different ion channels respond to sinusoidal stimulation with different frequencies. The previously presented L- and T-type calcium channels, as well as a sodium channel, were investigated in a mono-compartment model. The sodium channel kinetics were taken from Benison and coworkers. The additional sodium channel was incorporated to investigate which parts of the retina might be preferentially activated by sinusoidal stimulation. This study presents various computer simulations trying to reproduce actual physiological processes in retinal bipolar cells during subretinal stimulation. Results from electrophysiological recordings were investigated and a computational model which was developed previously was modified. Furthermore, one model ON CBC and one model OFF CBC were reconstructed by using a previously presented tool. Monophasic, biphasic, single and repetitive pulses of different lengths and amplitudes were applied in passive and active mode to see how the two model neurons respond to external electric stimulation. To find crucial geometric parameters in the model, several simulations with the passive model were conducted. While many parameters, such as the soma diameter and the diameter of the synaptic compartments, do not have a large influence, the length of the whole fiber does. Since the ON CBC was longer than the OFF CBC, it was depolarized more during stimulation with the same anodic pulses. When the axonal length was shortened consecutively in the axial direction, the depolarization characteristics became similar for both cells. However, if both cells are the same length, the OFF CBC is still less depolarized, which means that other factors than length also play a role for the magnitude of depolarization. Since the ON CBC depolarizes more slowly but more strongly than the OFF CBC, this behavior changes for very short pulse lengths, which are, however, not common in retinal implants. The reasons for the different depolarization characteristics of the synaptic compartments between ON and OFF CBCs seem to be the geometry of the synaptic terminal region and the voltage gradient along the cell's z-axis, which changes for locations further away from the stimulating electrode. This study assumes that ON CBCs exhibit calcium L-type channels in their synaptic terminals whereas OFF
CBCs show T-type channels in their synaptic compartments, as proposed in a previous study. During the applied pulses, the synaptic calcium concentrations reached peak values of up to 1 μM for the ON CBC and up to 3.6 μM for the OFF CBC, which is sufficient to initiate synaptic activity from RBCs to retinal ganglion cells. However, no retinal network activity was investigated in this study; other retinal neurons such as amacrine cells may also have a significant influence on the signaling cascade between RBCs and retinal ganglion cells when stimulated electrically. Using identical ion channel equipment in both model neurons and elongating the axonal and synaptic portions of the OFF CBC showed that the state of depolarization in the synaptic compartments, and therefore again the geometric factors, have the greatest influence on [Ca2+]i during stimulation. As can be seen in Fig. 11A, the synaptic calcium currents are stronger in the outward than in the inward direction. The two outward amplitudes for the 1 V and 2 V pulses are different; however, [Ca2+]i shows the same time course and maximum. Because the initial value of [Ca2+]i was close to the residual value, the initial outward currents did not affect [Ca2+]i strongly. The following inward currents, however, had the same amplitudes, and therefore [Ca2+]i did not differ between the two distinct stimulating amplitudes. When retinal implants, placed either at the inside or outside of the retina, stimulate inner eye neurons electrically, two different activation mechanisms can be found: direct stimulation, which is triggered by the stimulating pulse itself, and indirect stimulation through cell-to-cell interaction. Previous experimental studies reported differential activation of ON and OFF retinal ganglion cells. The presented model only examines the response of bipolar cells to electrical stimulation without taking into account any network activity. Anodal monophasic stimulation using subretinal electrodes results in an increased synaptic calcium concentration and therefore might be able to indirectly activate ganglion cells via synaptic activity. Cathodal stimulation, on the other hand, does not lead to an increase of the internal calcium concentration and is therefore unlikely to mediate any synaptic activation. A possible explanation for the differences between the presented model and experimental findings might be the fact that only bipolar cells, without connecting amacrine and ganglion cells, have been examined. The main axes of bipolar cells are aligned perpendicular to the stimulating electrode and will therefore be activated where the first derivative of the applied voltage is positive. However, amacrine cells and ganglion cells, which are aligned parallel to the stimulating electrode, are activated in regions where the second derivative of the external potential is positive. This concept of the activating function has been presented and discussed previously. Therefore, using the presented model and the two distinct model geometries, an anodal stimulating pulse cannot lead to network activation, which might be the case in actual experiments. However, morphologies which show specific terminal geometries, as well as amacrine cells, might also be activated during simulated anodal stimulation. Thus, it is likely that Jensen and Rizzo activated bipolar cells as well as other parts of the retinal network and thus recorded network activation which cannot be explained in this study. Therefore, a comparison between experimental data of retinal ganglion cells and the presented modeling results is not possible without extending the
model. Combining a bipolar cell, an amacrine cell, a ribbon synapse and a subsequent ganglion cell will give further insights into the mechanisms of direct and indirect stimulation. One of the main goals in the development of retinal implants is to avoid the co-activation of the ON and OFF pathways during stimulation. Biphasic pulses are, on the one hand, an opportunity to avoid charge injection into the tissue, and on the other hand such pulse configurations might be able to focally activate either ON or OFF CBCs. It was shown that [Ca2+]i in the synaptic terminals of the ON CBC is approximately 2 times higher during a cathodic-first pulse than during an anodic-first pulse. The OFF CBC, on the other hand, shows a slightly stronger increase of [Ca2+]i when an anodic-first pulse is applied. In both cases the calcium currents were initiated during the anodic phase. Therefore, it might be possible to activate either the ON or the OFF pathway by using either anodic-first or cathodic-first voltage pulses. The outcome of clinical studies using repetitive stimulation shows various drawbacks. The most frequently reported disturbance for patients is the fading of visual sensations when the stimulating frequency exceeds levels of 10 Hz. Jensen and coworkers suggested that vesicle depletion might be the cause of this phenomenon. In this study, synaptic [Ca2+]i returned to its resting state after 4–5 ms and approximately 40–50 ms, respectively; however, repetitive stimulation was not shown to be sustainable under these conditions. Therefore, there may be other limiting factors that underlie the fading of percepts during clinical application. Since previous studies reported that sinusoidal waveforms with different frequencies might be able to differentially activate bipolar and ganglion cells, respectively, we also examined how such pulses activate the presented ion channels. We reproduced Fig. 7G from Freeman and coworkers to see if the presented ion channels support their assumptions. Our L- and T-type channels show similar characteristics during sinusoidal stimulation. Therefore, as stated before, lower frequencies might indeed activate bipolar cells and photoreceptors more strongly, whereas higher frequencies preferentially activate sodium channels and thus ganglion cells. Modeling and simulation of neuronal tissue during external electric stimulation exhibit several limitations. As mentioned before, this study only investigates the activation of bipolar cells without taking into account other neuron-to-neuron connections in the complex retinal network. Therefore, other neurons such as amacrine cells can have a substantial influence on the synaptic activity between bipolar cells and ganglion cells. Furthermore, to differentiate between ON and OFF CBCs, the ON CBC model was chosen to be T-rich whereas the OFF CBC model was L-rich, which might not be the case. | Retinal implants allowing blind people suffering from diseases like retinitis pigmentosa and macular degeneration to regain rudimentary vision are struggling with several obstacles. One of the main problems during external electric stimulation is the co-activation of the ON and OFF pathways, which results in mutual impairment. In this study, the response of ON and OFF cone retinal bipolar cells during extracellular electric stimulation from the subretinal space was examined. To gain deeper insight into the behavior of these cells, sustained L-type and transient T-type calcium channels were integrated in the synaptic terminals of reconstructed 3D morphologies of ON and OFF cone bipolar cells.
Intracellular calcium concentration in the synaptic regions of the model neurons was investigated as well, since calcium influx is a crucial parameter for cell-to-cell activity between bipolar cells and retinal ganglion cells. It was shown that monophasic stimulation results in significantly different calcium concentrations in the synaptic terminals of ON and OFF bipolar cells. Intracellular calcium increased to values up to fourfold higher in the OFF bipolar model neuron in comparison to the ON bipolar cell. Furthermore, geometric properties strongly influence the activation of bipolar cells. Monophasic, biphasic, single and repetitive pulses with similar lengths, amplitudes and polarities were applied to the two model neurons.
31,418 | Recognizing and engineering digital-like logic gates and switches in gene regulatory networks | Electronic computers contain powerful decision-making circuits, built using switches with well-defined digital characteristics that are connected to produce Boolean logic operators. Synthetic biologists are making progress at replicating digital decision making in living organisms, aiming to program cells for applications in areas such as environmental sensing and medicine. Digital-like behaviour in natural and synthetic biological systems is used to produce in effect all-or-nothing responses: the output signal from digital-like modules switches between low and high output levels over a short range of input signal. Biology is inherently analogue due to the stochastic nature of the molecular interactions that propagate information flow, and so biological switches possess digital characteristics to greater or lesser degrees. Strongly digital-like characteristics are desirable when implementing biological switches in bio-computing circuits as Boolean logic gates. A steep, ultrasensitive transition between OFF and ON states is key, minimising signal degradation when logic gates are layered. A large difference between output levels in the OFF and ON states also reduces noise propagation through the circuit, maintaining signal fidelity. The inputs and outputs from connected gates in a circuit must be composable both in terms of signal type (so information can be transferred) and amplitude (so that the OFF and ON output levels of an upstream gate are below and above the switching threshold for the downstream gate). Ideally the switching threshold and output level of a gate should be tunable. Decision-making also requires that logic gates receive inputs from multiple upstream gates, whilst remaining orthogonal to signals from all other host and synthetic components in the system. Here we review efforts that have been made to identify parts for digital bio-computation, with an emphasis on large part families and those that are amenable to rational redesign, as these will form the basis of future large-scale genetic logic circuits. Improvements to the digital characteristics of existing biological logic gates are necessary to maintain signal fidelity in deeply layered circuits, and we discuss engineering strategies for making these enhancements. Characterisation of a component's switching properties allows key properties such as dynamic range, activation threshold, and transfer function steepness to be determined. The nonlinear, ultrasensitive response to an input signal that characterises digital-like biological parts is usually quantified by fitting the Hill function to the curve, with ultrasensitive mechanisms having an apparent Hill coefficient greater than one. Fundamental knowledge of a biological part's mechanisms of action allows probable candidates for logic gates to be selected: components with known cooperative mechanisms, such as the TetR repressor's ligand-induced weakening of DNA binding affinity, can be chosen to provide sensitive switching; high ON:OFF ratios can be found in part types with low intrinsic leakiness, for example when a part is absolutely required for output, such as a phage RNA polymerase; the requirement for integration of multiple signals can be fulfilled by choosing components with activating or repressing partners, for example transcription factors which need activating chaperones.
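A minimal illustration of the transfer-function characterisation described above: fitting the Hill function to a measured dose-response curve yields the OFF/ON levels, the switching threshold and the apparent Hill coefficient. The data below are synthetic and the parameter values arbitrary; they stand in for a real characterisation experiment.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(x, y_min, y_max, K, n):
    """Hill transfer function: y_min is the OFF level, y_max the ON level,
    K the switching threshold and n the apparent Hill coefficient."""
    return y_min + (y_max - y_min) * x**n / (K**n + x**n)

# Synthetic dose-response data standing in for a measured transfer curve
inducer = np.logspace(-2, 2, 12)                       # arbitrary inducer units
response = hill(inducer, 50, 5000, 1.5, 2.4)
response *= np.random.default_rng(1).normal(1, 0.05, inducer.size)  # measurement noise

popt, _ = curve_fit(hill, inducer, response, p0=[10, 1000, 1, 1])
y_min, y_max, K, n = popt
print(f"dynamic range {y_max / y_min:.0f}-fold, threshold K = {K:.2f}, Hill n = {n:.1f}")
```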
Our lab has investigated the Pseudomonas syringae hypersensitive response pathway regulatory components as a model for engineering orthogonal digital-like control of transcription in Escherichia coli. A great number of similar regulatory modules exist in many different bacterial species, offering a largely untapped resource to construct versatile orthogonal genetic logic devices. Sophisticated digital genetic circuits require a large number of composable parts that act with minimal crosstalk and cause low toxicity to the host. Genomic mining strategies can be employed to screen for orthogonal homologs of useful parts. Stanton et al. produced a set of 16 orthogonal TetR repressor homologs and cognate operators which was used to build NOT and NOR gates. Whilst the design of a single repressor binding site within a strong constitutive promoter was appropriate for library construction and screening, the authors note that this configuration produces gates with a high OFF state and low cooperativity. Future versions could use multiple operators to improve digital characteristics, as will be discussed in the section "Motifs for ultrasensitivity". Biological components require some modification from their native configuration to allow them to connect properly and retain signal fidelity in the context of a large synthetic gene circuit. Whilst largely irrational modification of individual components has been shown to be an effective strategy for isolating variants with enhanced ON:OFF ratios, altered thresholds, and improved orthogonality, more rational approaches, often using in silico models, enable efficient and systematic optimisation. The output and activation threshold of a switch may be tuned to facilitate composition with neighbouring gates, sensors, or analogue synthetic circuitry. This is usually performed by altering the concentration of a gate's constituent components; for example, higher concentrations of an activating transcription factor will decrease the activation threshold of a switch. Tuning can be achieved using a number of mechanisms, though usually via changes to the transcription and translation initiation sequences. For bacteria especially, part libraries and computational tools for ribosome binding site design enable efficient screening of sequences to achieve desired component levels. Transcription and translation initiation sequences suffer from context dependencies which must be minimised to enable predictive design of synthetic gene circuits. Repressive antisense transcription is another technique that could be widely applied to fine-tune transcriptional logic gate activation thresholds. Many part types do not display ultrasensitive responses, so this property must be engineered. Steep switching transitions can occur due to various molecular mechanisms, but some of these are more amenable to intervention by design: whilst introducing cooperative binding of ligand molecules to a receptor would be a difficult protein engineering problem, building gene circuits with motifs that create an ultrasensitive response, such as sequestration, multi-step mechanisms, and positive feedback, is a widely applicable strategy, and one that allows for tuning of the transfer function. The threshold and profile of a transfer function can be modified to have more digital-like characteristics using a 'sequestration' or 'titration' strategy, where high-affinity sequestration of a signal-carrying factor by a buffer of decoy binding sites must be overcome before its effect on the output is observed. This strategy also has the effect of lowering the OFF state and shifting the activation threshold to a higher level.
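A sketch of the sequestration/titration motif just described: a decoy or anti-factor buffer binds the signal-carrying factor tightly, so the output stays near the OFF level until the buffer is titrated away, after which the response rises steeply and the threshold is shifted to higher input. All binding and promoter parameters below are illustrative, not values for any specific part.

```python
import numpy as np

def free_activator(total, decoy, Kd):
    """Free activator after tight 1:1 binding to a decoy/anti-factor buffer
    (exact solution of the binding equilibrium)."""
    b = total - decoy - Kd
    return 0.5 * (b + np.sqrt(b**2 + 4 * Kd * total))

def output(total, decoy=50.0, Kd=0.1, K=5.0, n=1.0, y_max=1.0):
    """Promoter output driven by the free activator via a simple Hill term."""
    a_free = free_activator(total, decoy, Kd)
    return y_max * a_free**n / (K**n + a_free**n)

tf_total = np.linspace(0, 200, 9)
print([round(output(t), 3) for t in tf_total])
# With the decoy buffer the response stays near OFF until ~50 units of activator have
# been titrated away, then rises steeply: an effectively ultrasensitive switch.
```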
The degree of sensitivity and the shape of the response may be modified by using decoys with different binding affinities, or different concentrations of decoy. Sequestration can be performed by a constitutively expressed binding partner: Rhodius et al. identified twenty highly orthogonal extracytoplasmic function (ECF) σ factors and their corresponding promoters, plus cognate anti-σ factors, using genomic part mining. The simple buffer gate that results from inducible ECF expression does not exhibit good digital characteristics, but using low-level expression of the anti-σ to sequester its partner improves the sigmoidicity of the response. Using RNA-RNA interactions for sequestration is an appealing strategy as binding partners can be easily designed. Similarly, it is simple to add decoy binding sites for transcription factors into synthetic DNA. Ultrasensitivity can also arise from multi-step mechanisms, which use an input to regulate multiple levels of a signal cascade, resulting in a steeper multiplicative output response. Implementation of a cascade also allows for signal amplification, increasing the ON:OFF ratio. Xie et al. made use of the programmability of nucleic acid components when applying this motif in their HeLa cell classifier, adding miRNA target sites to the mRNAs of cascading transcription factors. Positive feedback loops have also been successfully employed to increase the steepness of transfer functions and amplify the output signal. Palani and Sarkar made use of a dual-feedback motif which amplified both receptor and transcription factor components of a cascade to improve and tune the threshold, sensitivity, and output of their transfer function. Unwanted bistability is a potential downside of using positive feedback motifs: because the 'reset' transfer function is offset in a bistable system, the range of input concentration over which effective bi-directional switching occurs increases, possibly obscuring the improvements made by steepening the transfer function. The bistable region can however be tuned through sequestration. Logic gates need to assimilate multiple input signals. For transcriptional logic gates, it is often possible to simply combine promoter or operator sequences to control transcription of the output. Similarly, at the RNA level, some cis-acting sequences can be concatenated to allow multiple trans-acting elements to control translation, for example small transcription activating RNA cis-elements or microRNA target sequences. Another general strategy for creating AND or NAND logic gates is to split the carrier of an input signal into parts that are individually inactive. Split parts might recombine to form the active component spontaneously, or can be fused to domains that promote association. Addition of split-intein domains to divided protein components allows the native polypeptide to be reformed, which is useful if there is weak spontaneous association or the activity is sensitive to fusions. Large-scale circuits require large orthogonal sets of switches that are composable, retain signal fidelity, and are functionally complete. Part mining is a promising approach for discovering such sets, but using parts that have programmable specificities enables their creation in a rational manner. Protein tools with customisable DNA-binding specificity, such as transcription activator-like repressors, have been used successfully to build logic gates, but their repeated structure makes them difficult to synthesise. Nucleic acids are
facile to produce with current cloning and synthesis techniques, but many RNA-based part families suffer from weaker binding interactions compared to proteins. In recent years a number of new RNA-based tools have been developed which have overcome previous limitations in dynamic range, but as yet none have all the qualities required for large-scale circuits. A promising compromise is a transcriptional switch based on the Streptococcus pyogenes clustered regularly interspaced short palindromic repeat (CRISPR)-associated Cas9 protein, which combines RNA-based programmability with strong binding. Nuclease-inactive Cas9 (dCas9) retains the ability to tightly bind a target DNA sequence complementary to the spacer of a guide RNA. Transcriptional repression by dCas9-mediated CRISPR interference (CRISPRi) reduces expression by up to 1000-fold. The large ON:OFF ratio means CRISPRi can be used for digital-like gene circuits, where layered logic is produced by controlling the expression of downstream gRNAs. A large ON:OFF ratio alone is, however, not sufficient for deeply layered circuits, and so CRISPRi must be engineered into an ultrasensitive switch. Gander et al. recently fused the Mxi1 chromatin remodeler to dCas9 for improved repression in their yeast gene circuits. The increased cooperativity due to Mxi1 activity enabled the construction of three-layer logic circuits, plus an impressive seven-layer inverting cascade. dCas9 can also act as a scaffold for transcription activation proteins to switch target promoters ON, although generally lower reported ON:OFF ratios combined with a lack of ultrasensitivity have so far limited the use of CRISPRa in digital logic circuits. CRISPRi naturally lends itself to NAND logic through differential expression of the gRNA and protein components as the inputs, and split versions of Cas9 have been developed which will enable greater regulatory control and versatility. Ultimately, dCas9 can effect the decision made by the synthetic computation circuit on the host transcriptome. Improvements in Cas9 specificity will obviously also be beneficial to CRISPRi circuits, but orthogonality and modularity in synthetic systems can also be improved by optimisation of the target sequences.
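To connect the ON:OFF-ratio and ultrasensitivity arguments, the sketch below layers a simple repression (NOT-gate) transfer function, of the kind a CRISPRi inverter provides, and checks whether the OFF and ON levels stay clear of the next gate's threshold. The numbers are illustrative and are not measurements from the cited circuits.

```python
def not_gate(repressor, y_min=20.0, y_max=2000.0, K=200.0, n=2.0):
    """Repression transfer function: high repressor input gives low output."""
    return y_min + (y_max - y_min) * K**n / (K**n + repressor**n)

def propagate(logic_input, layers=3):
    signal = 2000.0 if logic_input else 20.0   # upstream ON/OFF expression levels
    for _ in range(layers):
        signal = not_gate(signal)
    return signal

for x in (0, 1):
    print(f"input={x}  output after 3 inverters = {propagate(x):.0f}")
# With n = 2 the ON/OFF separation survives layering; rerunning with n = 1 shows the
# levels drifting toward the threshold, i.e. signal degradation in deep circuits.
```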
As both part libraries and the scale of desired synthetic circuits grow, the act of choosing the most appropriate parts will become increasingly automated. Nielsen et al. provide an exciting insight into the future of digital bio-programming with their recent work, using computer-aided design to produce functional logic circuits with a 75% success rate in the first design-build-test cycle. The study highlights how part insulation is essential for predictive design, and underlines again that improvements to the digital character of gates are required to combat the trend of increasing failure rate with growing circuit depth. As circuit sizes grow, designers will have to consider the functional modularity of parts in order to mitigate the effects of retroactivity and host burden. Host chassis genome minimisation will hopefully make undesirable interactions between synthetic and host components easier to predict and avoid. Temporal dynamics will also gain importance as circuits become more deeply layered, lengthening the time taken to elicit the ultimate output, and also increasing the likelihood of faults occurring due to signals propagating at different speeds. This will drive the development of new logic gate types with faster switching, perhaps using reversible covalent modification rather than transcription and translation. Digital memory elements, such as bistable switches or gates based on recombinase-mediated DNA flipping, can also be employed to improve signal stability and fidelity. The degree of characterisation that parts are subjected to has so far been fairly ad hoc, on the basis of pragmatic project-specific constraints, but the fabrication of large sets of quality components that can be applied in diverse situations will change this mind-set. Some standardisation is now required for the transition from building at the scale of individual logic gates and simple functions, to the construction of effective, robust systems. Standardisation allows for much of the complexity of biological systems to be ignored at the systems level, abstracting a functional unit to a small set of IN/OUT properties. Canton et al. envisioned datasheets to accompany biological parts, which could include switching thresholds, LOW/HIGH output levels, and signal rise time. This abstraction is best suited to highly insulated components, which those involved in decision-making signal processing ideally are. The question of how best to define a standard for biological logic gates is beyond the scope of this review, but will be influenced by the designs of those who develop component libraries, by the properties of the particular part family, and by the requirements and established practices of the community of end-users. Standardisation of notation in the form of the Synthetic Biology Open Language already facilitates the transferability and uptake of new designs; similar ease of use and reuse of components will enable synthetic biology to start achieving its potential in real-world applications. | A central aim of synthetic biology is to build organisms that can perform useful activities in response to specified conditions. The digital computing paradigm which has proved so successful in electrical engineering is being mapped to synthetic biological systems to allow them to make such decisions.
However, stochastic molecular processes have graded input-output functions, thus, bioengineers must select those with desirable characteristics and refine their transfer functions to build logic gates with digital-like switching behaviour. Recent efforts in genome mining and the development of programmable RNA-based switches, especially CRISPRi, have greatly increased the number of parts available to synthetic biologists. Improvements to the digital characteristics of these parts are required to enable robust predictable design of deeply layered logic circuits. |
31,419 | Computational Modeling of Oxidative Stress in Fatty Livers Elucidates the Underlying Mechanism of the Increased Susceptibility to Ischemia/Reperfusion Injury | The shortage of donor organs for liver transplantation called for the extension of donor organ criteria, so that suboptimal grafts, such as fatty livers, are more and more used for liver transplantation .Moreover, the increasing prevalence of fatty livers in western populations leads to higher numbers of patients with fatty livers subjected to major liver surgery .Fatty livers are characterized by an aberrant fat accumulation within the cytosol of hepatocytes.Transplanting such fat-loaded livers is accompanied by a higher incidence of postoperative complications and transplant rejections leading to higher morbidity and patient mortality .Liver grafts with moderate to high fat accumulation specifically show an increased susceptibility to intraoperative ischemia/reperfusion injury .IRI is triggered by a biphasic process, which activates a series of metabolic adjustments and signaling processes.Ischemic injury originates from the interruption of blood flow, which is associated with an insufficient perfusion of hepatic tissue and, therefore, a reduced supply of cells with oxygen.However, O2 is essential as electron acceptor in the respiratory chain.Consequently, hypoxic conditions caused by ischemia let the cells suffer from ATP depletion .Eventually, the lack of O2 disrupts proper hepatic metabolic function and can trigger the initiation of cell death processes .Additionally, restoration of blood flow after a period of ischemia places the cells at further risk for metabolic dysregulation and the induction of inflammatory processes .Reperfusion aggravates the ischemic insult and may increase the risk for organ failure .Due to the high incidence of steatotic donor organs, a substantial understanding of the key processes responsible for the lower tolerance of these livers to IRI is needed.Despite intense research, we currently do not fully understand the reason why steatotic livers are more prone to IRI than normal livers.Certainly, the principal source can be attributed to the fat-induced metabolic impairments and the disturbed hepatic microcirculation caused by the swelling of the fat-laden hepatocytes .However, on the cellular level one of the main forces driving IRI is an intense formation of reactive oxygen species.An excess formation of ROS is a feature of hepatic oxidative stress, a condition of a serious redox imbalance .During transplantation, steatotic livers suffer particularly from excessive mitochondrial ROS production , an impaired induction of the antioxidative defense system , mitochondrial uncoupling and a disruption of the hepatic stress response to hypoxia mediated by the transcription factors HIFs .All elements together culminate in severe mitochondrial injury and high oxidative stress in steatotic livers , much higher compared to normal livers not overloaded with fat.Cellular damage arises from the high level of intracellular ROS, which causes modifications of DNA and oxidation of cell proteins, as well as initiation of the reaction chain for lipid peroxidation .Although it is well-known that the fat-induced pathological changes in microcirculation and metabolism get further aggravated by ischemia/reperfusion , the underlying mechanism of the excessive mitochondrial ROS formation in steatotic livers under these conditions is not yet clear.Here, computational modeling unifying the current knowledge about 
relevant physiological processes of ROS production and detoxification linked to hepatic fat metabolism will promote our understanding of ischemic injury in steatotic livers. A mathematical model that allows for the in silico simulation of hypoxia and reoxygenation for various degrees of steatosis would be particularly helpful. In this paper, we introduce a mathematical model that links key processes of hepatic lipid metabolism to the formation and detoxification of ROS. The model allows the simulation of hypoxia and reoxygenation conditions and predicts the level of hepatic LPO as a marker of damage caused by oxidative stress. We reveal that the increased susceptibility of steatotic livers to IR can be explained by a feedback loop between processes of H2O2 detoxification and LPO production. This interaction pattern can ultimately cause bistable system behavior in the level of oxidative stress. Here, the first state represents a low level of oxidative stress and occurs in normal, low fat-laden livers, whereas for steatotic livers the system is driven to the second state, with a high level of oxidative stress. This modeling result promotes our understanding of the increased vulnerability of steatotic livers to IRI. Theoretically, our proposed mechanism would support the prediction of a maximal tolerable ischemia duration for steatotic livers: exceeding this threshold would drastically increase the risk for severe IRI and liver failure. We developed an integrated mathematical model of the key pathways of lipid metabolism coupled with ROS metabolism using the software R. The metabolic processes were implemented as rate laws based on current literature knowledge. The model allows the simulation of liver metabolism under normoxia and hypoxia followed by reoxygenation. Thus, it can be applied to elucidate interactions between fat and ROS metabolism under ischemia-like conditions.
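The published model is implemented in R (deSolve/FME; the code is provided in Appendix D of the paper). As a language-neutral illustration of the simulation protocol described below (run to a normoxic steady state, switch the O2 supply to a hypoxic value, then restore it for reoxygenation), the following Python skeleton integrates a placeholder five-variable system; the rate laws and parameter values are loosely inspired by the processes named in the text and are not the published equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, o2_supply, p):
    fa, tg, h2o2, mda, o2 = y                      # placeholder state variables
    ox = p["k_ox"] * fa * o2 / (p["K_o2"] + o2)    # O2-dependent FA oxidation
    dfa   = p["k_up"] * p["fa_plasma"] - ox - p["k_tg"] * fa
    dtg   = p["k_tg"] * fa - p["k_exp"] * tg
    dh2o2 = p["k_ros"] * ox - p["k_cat"] * h2o2
    dmda  = p["k_mda"] * h2o2 - p["k_clr"] * mda
    do2   = p["k_o2"] * (o2_supply - o2) - ox
    return [dfa, dtg, dh2o2, dmda, do2]

p = dict(k_up=1.0, k_ox=0.5, k_tg=0.1, k_exp=0.05, k_ros=0.02,
         k_cat=1.0, k_mda=0.5, k_clr=0.1, k_o2=2.0, K_o2=0.01,
         fa_plasma=0.2)                            # illustrative values only

y0 = [0.1, 0.1, 1e-4, 1e-3, 0.05]
phases = [("normoxia", 0.05, 500), ("hypoxia", 0.002, 200), ("reoxygenation", 0.05, 200)]
for name, o2_supply, duration in phases:
    sol = solve_ivp(rhs, (0, duration), y0, args=(o2_supply, p), method="LSODA")
    y0 = sol.y[:, -1]                              # start the next phase from the end state
    print(name, "end state:", np.round(y0, 4))
```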
We did not specifically include processes leading to reperfusion injury, e.g. the activation of inflammatory processes, in addition to the ischemic injury. The simulation of reoxygenation after a period of hypoxia focuses on how the ischemic injury and the level of oxidative stress are augmented under reoxygenation. Therefore, our model aims to elucidate basic mechanisms of ischemic injury and how the level of cell damage propagates during reoxygenation. For model development, we applied a modular approach, starting with the implementation and calibration of a stand-alone submodel of hepatic lipid metabolism, which is capable of simulating hepatic triglyceride accumulation for different levels of plasma fatty acid supply. In a second step, ROS metabolism and known interactions with FA metabolism and LPO were integrated. LPO leads to the production of toxic intermediates such as malondialdehyde (MDA) and 4-hydroxynonenal. Therefore, in our model, the level of LPO is assessed by the hepatic concentration of MDA, which is typically used as an indicator of LPO damage in biological and medical sciences. The hepatic concentration of MDA is typically determined by a TBARS assay. The integrated model, which finally comprises a system of 5 ordinary differential equations, was calibrated and validated using current literature data. For simulation and model analysis we used the R packages 'deSolve' and 'FME'. The R code of the mathematical model is provided in Appendix D. The robustness of our model output was tested by considering a 10% variation for each parameter. To show the effects of such variation in parameter values, we conducted 100 runs with different parameter values. Before each run, a value was drawn randomly for each parameter from a normal distribution with the value from the original model as mean and a 10% standard deviation. The emerging patterns under normoxic conditions were recorded. In addition, to specifically consider the robustness of MDA formation as the output of our model, we looked at the pattern emerging under a higher parameter variation in this process. For this, we changed the parameter kMDA, involved in MDA formation, by 50% and investigated the system's behavior under normoxic conditions. Our phenomenological model allows a closer look at the consequences of temporary hypoxia on fat metabolism, oxidative stress level and LPO production in the liver. Our intention was not to construct a comprehensive representation of each mechanistic detail of hepatic metabolism; therefore, our model does not allow the simulation of a daily time course of lipid compounds. Rather, we put emphasis on the phenomenological simulation of hepatic lipid metabolism and oxidative stress with regard to the amount of stored fat metabolites. A mathematical model representing key processes of FA metabolism in the liver was established based on previously published models with some modifications. In our model, the following pathways are represented by rate laws: FA and O2 uptake from the blood into the liver cells, mitochondrial FA oxidation, a term representing other oxidative processes, and TG synthesis and export. Details of the mathematical model and its equations, as well as parameter values, can be found in Appendix A and Table A1. Parameter calibration and validation of the metabolic model were conducted using experimental data from the literature. For simulation runs, we used a range from 0.1 mM to 1.4 mM of plasma FA concentration as model input. This range covers conditions from a normal diet up to a chronic HFD.
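The robustness test described above can be sketched as a simple Monte Carlo loop: 100 runs, each with every parameter drawn from a normal distribution centred on its nominal value with a 10% standard deviation. The run_model function is a stand-in for integrating the ODE system to its normoxic steady state (e.g. with the skeleton shown earlier); the nominal values are illustrative, not the calibrated parameters of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

nominal = dict(k_up=1.0, k_ox=0.5, k_tg=0.1, k_cat=1.0, k_mda=0.5)   # illustrative

def run_model(params):
    # Placeholder readout: the real model would return the steady-state MDA concentration
    return params["k_mda"] * params["k_ox"] / params["k_cat"]

outputs = []
for _ in range(100):
    perturbed = {k: rng.normal(v, 0.10 * v) for k, v in nominal.items()}  # 10% SD
    outputs.append(run_model(perturbed))

print(f"MDA proxy: mean={np.mean(outputs):.3f}, CV={np.std(outputs)/np.mean(outputs):.2%}")
# A separate check varies only kMDA by +/-50% to probe the sensitivity of MDA formation.
```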
In our model, the supply of FAs via the blood determines the accumulation of TGs within the liver cells, thereby determining the severity of steatosis. Of note, we do not directly model plasma TG circulation. Simulation runs started at normoxic conditions and the model was executed until the state variables reached a stable steady state. Starting from this stable state, the model was executed under hypoxic conditions until, again, a stable steady state was reached. Reoxygenation was then simulated by setting the O2 supply back to the normoxic value. The lipid submodel was extended by equations representing hepatic ROS formation and detoxification by antioxidative enzymes. We focused on H2O2 because it is more stable than the superoxide anion O2−. The toxicity of O2− is principally based on the generation of further ROS, which then attack biomolecules. Furthermore, H2O2 generation in hepatocytes seems to be largely independent of O2− production by the respiratory chain and instead depends in major part on the activity of FA oxidation. Thus, we decided to implement H2O2 and OH• production. In our model, the production of H2O2 depends directly on the intracellular O2 concentration and the rate of FA oxidation. ROS production under hypoxic conditions does not directly mirror mitochondrial respiratory chain activity; thus we focused our model on ROS production by mitochondrial β-oxidation of FAs. The implementation of rate equations for the antioxidative enzymes catalase (CAT) and glutathione peroxidase (GPx) follows previously published models. Here, the inhibition of CAT activity by a high concentration of its substrate H2O2 is accounted for by an inhibition term. The concentration of H2O2 directly affects the production rate of OH•, which is the most important ROS regarding cellular damage due to its high reactivity. This radical oxidizes intracellular lipids, thereby initiating LPO. The process proceeds as free radical chain reactions leading to the production of toxic intermediates such as MDA. As mentioned above, the level of LPO is assessed by the concentration of MDA in our model. The level of oxidative stress can be assessed by the degree of H2O2 production. To simulate ischemia-like conditions, we also need to account for hypoxia-induced effects in our metabolic model. The oxidation rate of FAs is influenced under hypoxic conditions by the expression of HIFs, which mediate metabolic adaptations during phases of O2 paucity. A decreasing O2 concentration leads to a switch-like response of HIF activation with a plateau at very low O2 levels. To account for the effect of hypoxia in our modeling framework, we adjusted the equation of mitochondrial FA oxidation by adding a sigmoidal term depending on the intracellular O2 concentration. Further details of rate equations and parameter values are provided in Appendices A2 and B2 and Table A1. We validated our phenomenological model by comparing simulation results with a broad range of experimental data and known patterns extracted from various literature sources. Numerical data only reported in figures and plots were extracted via WebPlotDigitizer. The data used for model validation are different from the data used for model calibration. Our newly constructed model, coupling lipid and ROS metabolism and using MDA content as model output, is based on mechanisms described in the current literature. It shows that the system can be directed into two distinct stable steady states, where the direction is determined by the initial level of LPO. Starting from different initial concentrations of MDA, we ran our metabolic model under normoxia with a fixed plasma FA concentration of 0.2 mM.
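Building on the hypothetical skeleton sketched above (and not on the calibrated model of Appendix D), this simulation protocol and the initial-condition scan used to probe bistability could be scripted roughly as follows; whether two separate branches actually appear depends on the calibrated rate laws and parameter values.

```r
# Relax the (hypothetical) model to a steady state for a given O2 supply
run_phase <- function(state, parms, o2, t_end = 2000) {
  parms["O2_supply"] <- o2
  out <- ode(y = state, times = c(0, t_end), func = liverModel, parms = parms)
  out[nrow(out), -1]   # final state (t_end chosen long enough for the sketch to settle)
}

ss_norm  <- run_phase(state0, pars, o2 = 1.0)    # normoxia
ss_hyp   <- run_phase(ss_norm, pars, o2 = 0.1)   # hypoxia, started from the normoxic state
ss_reoxy <- run_phase(ss_hyp,  pars, o2 = 1.0)   # reoxygenation
rbind(normoxia = ss_norm, hypoxia = ss_hyp, reoxygenation = ss_reoxy)

# Probe bistability: scan initial MDA levels under normoxia and record the
# steady-state MDA concentration that is eventually reached
mda0   <- seq(0, 5, by = 0.25)
ss_mda <- sapply(mda0, function(m) {
  s <- state0
  s["MDA"] <- m
  run_phase(s, pars, o2 = 1.0)["MDA"]
})
plot(mda0, ss_mda, xlab = "initial MDA", ylab = "steady-state MDA")
```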
Depending on the initial MDA concentration, the system reaches one of two stable steady states. If the initial level of MDA concentration is low, ROS and MDA formation also stay in a low stable state. If the initial level of MDA concentration is high, the system is driven to a high level of oxidative stress and MDA. Thus, for the same parameter values the modeled system can reach either a low or a high level of oxidative stress. Importantly, the concentration of FAs supplied via the blood determines the threshold MDA concentration that must be exceeded to drive the system to the second stable state. Running the model for an increasing range of plasma FA concentrations revealed that, under high FA supply, the system reaches the second stable state for lower initial concentrations of MDA compared to low and moderate FA supply. This means that steatotic livers encounter a lower MDA threshold for switching from a low to a high level of oxidative stress than normal livers. Moreover, in steatotic livers the MDA concentration in the state of low oxidative stress is higher than in normal livers; thus steatotic livers are undergoing more LPO. The output of a computational model may depend strongly on the chosen parameter values. Therefore, we considered the effect of a 10% standard deviation for each parameter to show the robustness of our model prediction. We conducted 100 runs with different parameter values, each drawn from a normal distribution with the original value as mean and a 10% standard deviation (a minimal sketch of such a loop is given below). The model was run under normoxia for a range of initial MDA concentrations, as in the runs reported for the original parameter values in Section 3.2 of the Results. The emerging system pattern was recorded. The results of all runs are presented in Appendix C. In 95 out of 100 runs a bistable pattern emerged over the simulation time, showing the robustness of our model results with respect to variation in parameter values. Furthermore, to evaluate how parameter variation in the MDA formation process may affect our model results, we changed the parameter value governing MDA generation by 50%. We observed that increasing and decreasing the kMDA parameter value did not contradict our underlying model hypothesis of bistability. However, knowing the exact rate of MDA formation is essential to determine the threshold at which the system shifts into the second stable state. Our novel model can be used to simulate the response of hepatic lipid and ROS metabolism under a lack of O2. We therefore let the modeled system run under ischemia-like conditions. Here, the observed bistable system behavior matters and provides a basis for explaining the increased susceptibility of steatotic livers to hypoxia. Running the metabolic model under hypoxic conditions leads to a rise of the hepatic concentrations of FAs, TGs, H2O2 and MDA compared to the normoxic concentrations. Conditions with a high intracellular concentration of FAs and TGs cause a shift of the system to a high level of oxidative stress and LPO. Finally, under steatotic conditions a lack of O2 supply leads to an increase of MDA formation above the threshold separating the first and second stable state. Thus, evoked by hypoxic conditions, the system is directed to a high state of oxidative stress. To evaluate how the system acts under reoxygenation after transient hypoxia, we reran our model with normal O2 supply values, starting from the steady-state concentrations reached under hypoxia.
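The parameter-perturbation check described in the preceding paragraphs could be scripted along the following lines, again using the hypothetical skeleton from the sketches above; the bistability criterion and the jittering of every parameter (including the model inputs) are illustrative simplifications rather than the procedure of Appendix C.

```r
set.seed(1)
n_runs <- 100

# Crude bistability criterion for one parameter set: do the normoxic runs
# started from different initial MDA levels end up on two well-separated branches?
is_bistable <- function(p) {
  ss <- sapply(seq(0, 5, by = 0.5), function(m) {
    s <- state0
    s["MDA"] <- m
    run_phase(s, p, o2 = 1.0)["MDA"]
  })
  diff(range(ss)) > 1   # arbitrary illustrative threshold
}

# 100 re-runs, every parameter drawn from Normal(original value, 10% SD);
# for simplicity all entries of 'pars' are jittered here, although in practice
# one would perturb only the kinetic parameters
bistable_runs <- replicate(n_runs, {
  p <- pars * rnorm(length(pars), mean = 1, sd = 0.10)
  is_bistable(p)
})
sum(bistable_runs)   # number of runs retaining a bistable pattern

# kMDA sensitivity: change the MDA formation rate constant by +/- 50%
for (f in c(0.5, 1.5)) {
  p <- pars
  p["k_mda"] <- f * pars["k_mda"]
  print(is_bistable(p))
}
```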
After reoxygenation, the steady-state hepatic concentrations of H2O2 and MDA reached values similar to normoxic conditions for low to moderate concentrations of stored TGs. Note that the plasma FA concentration determines the level of stored TGs. Simulations in the case of high TG concentration result in a large increase in the concentrations of H2O2 and MDA. This high level clearly exceeds the concentrations reached under normoxic conditions. In contrast to these high H2O2 and MDA concentrations, the concentrations of the other model state variables are similar to the levels under normoxic conditions. How does the duration of the hypoxic period influence the concentration of MDA? We started simulation runs over the whole range of plasma FA concentrations under hypoxic conditions. A low to moderate supply of FAs showed that the system stays at a low level of MDA, independent of the length of the hypoxic period. However, for steatotic conditions, the model outcome depends on the duration of hypoxia. A high supply of plasma FAs and, therefore, a high hepatic TG concentration is associated with an MDA concentration exceeding the threshold. Thus, the system is driven to the second stable state. The higher the hepatic FA and TG concentrations, the earlier during hypoxia the threshold of MDA is reached, which directs the system to the high state of oxidative stress and LPO. Exemplified by the simulation run with a plasma FA concentration of 1.1 mM, a short duration of hypoxia can be tolerated even by steatotic livers, but longer hypoxic periods force the system into the second stable state of high LPO. This pattern would allow the prediction of a certain cut-off for the maximal tolerable hypoxia duration depending on the observed hepatic FA and TG concentrations. Liver donor organs with a high fat content show increased susceptibility to IRI during transplantation. Up to now, there is no consensus on the risk of using steatotic liver grafts for transplantation, and how much fat accumulation is tolerable is still under debate. The reasons for this are controversial results in studies reporting surgical outcomes for transplantations of steatotic livers and difficulties in the qualitative assessment of the amount and type of lipids in liver grafts. The metabolic and signaling changes triggered by ischemia and aggravated during reperfusion are complex and strongly intermingled with the fat content of the liver. It is known that steatotic livers exhibit an enhanced ROS formation overwhelming the AOD. However, the interaction network between hepatic ROS formation, FA metabolism and LPO remains to be elucidated. Thus, an understanding of the pathophysiological mechanisms in steatosis as well as the adaptations occurring under IR conditions is necessary to evaluate the risk of transplanting liver grafts with moderate to high steatosis grade. We developed a mathematical model of hepatic lipid metabolism coupled to ROS metabolism. Model results clearly show a bistable system behavior emerging from the underlying interaction network, driving the system into a low or high state of oxidative stress and LPO. Generally, keeping a low level of ROS and LPO is beneficial to cells because both act as signaling messengers. On the other side, a high state of oxidative stress is the cornerstone of pathological conditions such as IRI and nonalcoholic steatohepatitis. The term bistability refers to a dynamic system that can stay stable in two distinct states. The switch from one stable state to the other is triggered by stimuli, which do not need to be persistent. Bistability forms the basis for numerous phenomena in biological systems,
among others in cell signaling , gene regulation , cell differentiation , regulation of apoptosis and even in population dynamics .A detailed mathematical model of the activity of the respiratory chain in mitochondria already uncovered a switch-like behavior from low to high ROS formation by the respiratory complex III .In this model, the bistability in ROS formation is triggered by a lack of O2, inducing a highly productive state of mitochondrial ROS formation.Consistent with our model results, the system stays in this high ROS formation state after reoxygenation.Generally, revealing a bistable pattern in a biological system provides a detailed view of how the system is regulated and what are the key components and their interrelations.A biochemical system needs at least 3 structural elements to generate a bistable response : positive feedback, a reaction to prevent explosion, and a reaction to filter out small stimuli.All three elements can be found in the interaction network of hepatic ROS metabolism and LPO, cumulatively determining the level of LPO.First, a positive feedback loop emerges in the formation of LPO determined by H2O2 because LPO reduces the capacity of the AOD, which is responsible for H2O2 detoxification.The degradation of the AOD during IR is well-grounded by the cytotoxicity of LPO and its end products.This includes the induced disruption of subcellular membrane structures accompanied by alterations of membrane permeability, reduction of the glutathione level and enzymatic dysfunction .This dysfunction is caused by the reaction of LPO end products with amino acids and proteins and DNA.Further support can be found by the lowered activity of the antioxidants observed in patients undergoing liver transplantation .Second, the explosion of LPO is prevented by the termination of chain reactions and by cellular repair or protection mechanisms.In the model, this was implemented by a repair equation representing the enzymatic metabolization of MDA .Third and finally, small stimuli are filtered out by the activity of the AOD.GPx detoxifies H2O2 at relatively low concentrations, whereas CAT is active when H2O2 starts to accumulate .Thus, if the rate of FA oxidation and therefore ROS formation gets slightly enhanced, e.g. 
after meals, the activity of CAT and GPx prevents an increase of H2O2 formation.Together, these 3 elements generate a bistable systems behavior in the level of oxidative stress and LPO.The proposed bistable behavior provides a theoretical explanation for the increased susceptibility of steatotic livers to IRI.In our model, transient hypoxia enhances the formation of H2O2 and, thus, also the formation of MDA.If the MDA level exceeds its threshold, the system switches from the low to the high state of oxidative stress.For simulation runs with high concentrations of plasma FAs the threshold was reached already during a short period of hypoxia and forced the system into the stable state of high oxidative stress.The system stayed in this stable state also after reoxygenation and did not return to the low state of oxidative stress.This model behavior is in accordance to experimental studies revealing that the cell damage in IR experiments correlates with the duration of the ischemic period.In case of a short ischemic period, the liver suffers only from cell injury that is reversible, thus after reperfusion the system slips back to normal O2 consumption and energy metabolism .However, in case of a longer period of ischemia the cell damage gets irreversible and the liver suffers from dysfunction after reperfusion .Of course, an experimental validation of our proposed bistable systems behavior needs to be conducted in future.However, our model results do not only provide a possible explanation for the underlying mechanism, it would also offer the possibility to estimate by computational modeling the maximal tolerable ischemia time for steatotic livers.Exceeding this limit during the transplantation process would lead to severe IRI and a considerable increased risk for liver failure.To reach this aim, further quantification of relevant parameters and processes is necessary.After transplantation, the ischemic injury of a donor organ is aggravated by additional ROS formation in the reperfusion phase, i.e. by reoxygenation .In our model, the high state of oxidative stress and LPO reached under hypoxia is a stable one.Therefore, the system persists in this second state also during reoxygenation and a high level of ROS and MDA formation is maintained after reestablishment of the normal O2 supply.Although not having explicitly implemented mechanisms of reperfusion injury, our model showed an enhanced ROS formation during reoxygenation.Further processes determining the level of reperfusion injury, such as the initiation of the hepatic inflammatory response can be implemented in future to allow an even more precise prediction of the level of reperfusion injury.Integration of liver damage caused e.g. 
by neutrophil-mediated oxidative stress would surely improve the prediction of the oxidative stress level for steatotic livers.We are aware, of course, that other processes also influence the degree of IRI in steatotic livers; processes that are not part of the model yet but need to be addressed in future to allow a quantitative prediction of the level of oxidative stress.In the function of fueling the mitochondrial respiratory chain , oxidative processes of the carbohydrate metabolism are key factors in ROS formation."Glycolysis is closely linked to the production of ATP, which showed reduced levels in livers after IR with consequences for the cell's energy metabolism.In addition, a rapid depletion of hepatic glycogen reserves takes place , which influences the level of oxidative damage .Besides the importance of metabolic pathways, their regulation by the cellular signaling network also matters.Of note, metabolic adaptations in response to prolonged hypoxic periods are not fully integrated into our modeling framework.Studies clearly showed the regulation of hepatic lipid metabolism by these transcription factors and a key role of HIF impairment in steatotic livers as one mechanism for the increased susceptibility .Here, additional modeling effort is needed to include further pathways of oxidative metabolism as well as signaling regarding the adaptive response to transient hypoxia to allow an accurate prediction of the maximal tolerable ischemia duration of steatotic donor organs for transplantation.Moreover, we focused our modeling effort primarily on the process of LPO to evaluate IRI, because it is directly linked to the content of FAs and TGs and the TBARS assay is the most frequently used bioassay to determine the level of oxidative stress in medical studies .Thus, this assay provides the possibility of using already published data for model construction and parameter estimation.However, there are also other indices, which are important to evaluate the degree of IRI, namely the grade of protein oxidation and DNA lesions.Protein oxidation seems to be also an important driver for the increased susceptibility of steatotic livers .Patients with steatosis exhibit a higher level of protein oxidation, as measured by the liver content of protein carbonyls, compared to a healthy control group .Here, protein oxidation leads to structural changes within the cells causing a loss of function.This involves enzymes inactivation, which may contribute to the impairment of AOD in a similar way as implemented in our model for the LPO index.Moreover, if the degree of protein oxidation is too large, the repair system for oxidatively modified proteins can become inhibited .Indeed, end products of LPO were already reported to inhibit the 20S-proteasome complex .Altogether, these can contribute to the establishment of a second stable state for the protein index or may lead to cell death due to irreversible cell damage.Further modeling effort is necessary to evaluate the outcome of an excessive protein degradation in relation to the repair term during IR.Oxidative DNA lesions, which means the generation of oxidized bases with high frequency, occur mainly in mitochondria and were reported occasionally in NAFLD and NASH studies .Additionally, there is a correlation between oxidative DNA damage and the grade of inflammation in NASH .In fatty livers, net oxidative DNA damage seems to rely on the efficiency of the repair system rather than on the ROS production rate .Mutations caused by DNA lesions can interfere 
with the transcription of genes coding for antioxidative enzymes and for respiratory chain components and may alter their expression levels.These promotes mitochondrial and cell dysfunction as well as carcinogenesis.Finally, a high level of oxidative stress leads to genome instability reflecting impairments in the DNA damage repair system and to excess DNA damage, which might initiate cell death.The current model version does not yet include these processes, because of a lack of mechanistic understanding of the correlation between protein and DNA degradation with the hepatic fat content under hypoxic/reoxygenation conditions and how both indices affect the susceptibility of fatty livers to IRI.Overall, further research is necessary to evaluate the interaction between fat content, hypoxia and the level of protein and DNA degradation under IR conditions.Generally, the grade of steatosis is characterized by the amount of stored TGs in a liver and it is thought to be the key indicator regarding the susceptibility of steatotic livers to IRI.However, lipotoxicity is mainly promoted by FAs and FAs, not TGs, promote ROS formation by their oxidative degradation .Therefore, we believe that not only the amount of stored TGs is an indicator of the increased susceptibility, but also the intracellular concentration of FAs might be important.In our model, a direct discrimination between the effects of TGs and FAs on ROS formation and LPO is not possible due to the metabolic interrelation between both.Experimental studies however revealed the strong modifying effect of FAs on mitochondrial ROS formation and confirmed that the exposure of liver cells to an increasing amount of FAs does not only lead to an intracellular accumulation of lipids but also to an increased formation of ROS .Generally, FAs can act as key modifiers on the oxidative stress level in three main ways.First, mitochondrial oxidation of FAs fuels the respiratory chain and promotes the formation of ROS .Second, ROS production occurs also directly by the activity of acyl-CoA dehydrogenase , which is the first enzyme during β-oxidation.And third, FAs are the starting point for oxidative deterioration mediated by free radicals, propagating by free radical chain reactions and ending up in the production of reactive aldehydes .These end products have detrimental effects on liver cells.We clearly see a potential field for further research to answer the question how FAs influence IRI in livers, especially in steatotic livers.In conclusion, our novel computational model provides a theoretical prediction of a bistable systems behavior triggered by the level of LPO and FAs.This pattern might explain the increased susceptibility of steatotic livers to IRI and provides the possibility to predict the maximal tolerable ischemia time in respect to the severity of hepatic steatosis.In future, we see the potential of computational models in helping to improve the understanding of metabolic adaptations and how this interferes with the FA and TG content in the liver.This would allow a more detailed consideration at which threshold a steatotic liver is still suitable for transplantation and which grade of steatosis bears a high risk for postoperative liver failure.Such considerations will help to specify selection criteria for organ allocation and, therefore might increase the pool of available donor organs for liver transplantation. | Question: Donor liver organs with moderate to high fat content (i.e. 
steatosis) suffer from an enhanced susceptibility to ischemia/reperfusion injury (IRI) during liver transplantation. Responsible for the cellular injury is an increased level of oxidative stress, however the underlying mechanistic network is still not fully understood. Method: We developed a phenomenological mathematical model of key processes of hepatic lipid metabolism linked to pathways of oxidative stress. The model allows the simulation of hypoxia (i.e. ischemia-like conditions) and reoxygenation (i.e. reperfusion-like conditions) for various degrees of steatosis and predicts the level of hepatic lipid peroxidation (LPO) as a marker of cell damage caused by oxidative stress. Results & Conclusions: Our modeling results show that the underlying feedback loop between the formation of reactive oxygen species (ROS) and LPO leads to bistable systems behavior. Here, the first stable state corresponds to a low basal level of ROS production. The system is directed to this state for healthy, non-steatotic livers. The second stable state corresponds to a high level of oxidative stress with an enhanced formation of ROS and LPO. This state is reached, if steatotic livers with a high fat content undergo a hypoxic phase. Theoretically, our proposed mechanistic network would support the prediction of the maximal tolerable ischemia time for steatotic livers: Exceeding this limit during the transplantation process would lead to severe IRI and a considerable increased risk for liver failure. |
31,420 | Spontaneous focusing on numerosity and the arithmetic advantage | For many children, the development of symbolic number knowledge is a long and arduous process.Learning the number sequence by rote may happen very early on – children typically begin counting around the age of two – but it can take years to grasp the meanings of the words in the count list.While some children start school with a range of numerical skills, others have yet to understand that the last word in their count list represents the numerosity of the set as a whole.In other words, they have yet to acquire the cardinal principle of counting.Recent research has highlighted the role of informal numerical experiences in the acquisition of formal symbolic number knowledge.In particular, Hannula and colleagues have demonstrated that preschoolers show individual differences in their tendency to focus on numerical information in informal everyday contexts."These individual differences in ‘Spontaneous Focusing on Numerosity’ are related to children's counting skills and they predict later arithmetical success. "SFON is a recently-developed construct which captures an individual's spontaneous focusing on the numerical aspects of their environment.The term “spontaneous” is used to refer to the fact that the process of “focusing attention on numerosity” is self-initiated or non-guided.That is, attention is not explicitly guided towards the aspect of number or the process of enumeration."The idea is that “SFON tendency indicates the amount of a child's spontaneous practice in using exact enumeration in her or his natural surroundings”. "The measures used to assess children's SFON differ from typical enumeration measures.Firstly, children are not guided towards the numerical aspects of the tasks; researchers are careful to ensure that the numerical nature of the tasks is not disclosed.Secondly, the tasks always involve small numerosities so that all children have sufficient enumeration skills to recognise the numbers in the activities.This is important for ensuring that the tasks capture individual differences in focusing on numerosity rather than individual differences in enumeration skills.To demonstrate that SFON tasks are not measures of individual differences in accuracy of number recognition skills per se, previous studies have included guided focusing on numerosity versions of the tasks.Hannula and Lehtinen and Hannula et al. showed that low-SFON children could perform the tasks when guided towards numerosity, thus their low-SFON scores can be interpreted as not focusing on numerosity rather than not having sufficient skills needed to perform the tasks."In a three-year longitudinal study, Hannula and Lehtinen tracked preschool children's counting skills together with their SFON. "Results showed that children's SFON, measured at 4, 5, and 6 years, was significantly associated with the development of number word sequence production, object counting and cardinality understanding.Path analyses revealed a reciprocal relationship suggesting that SFON both precedes and follows the development of early counting skills."Follow-up work demonstrated the domain specificity of SFON as a predictor of children's numerical skills. "In another longitudinal study, Hannula et al. 
measured children's SFON together with their spontaneous focusing on a non-numerical aspect of the environment, namely, ‘Spontaneous Focusing on Spatial Locations’.Findings showed that SFON in preschool predicted arithmetic skills, but not reading skills, two years later in school.This relationship could not be explained by individual differences in nonverbal IQ, verbal comprehension or SFOL.Further results from more recent studies have demonstrated an even longer-term role of SFON in predicting school mathematics achievement.Hannula-Sormunen et al. found that SFON in preschool is still a significant predictor of mathematics achievement at the age of 12, even after controlling for nonverbal IQ.This longer-term relationship was found not only for natural number and arithmetic skills, but for rational number conceptual knowledge as well."SFON is emerging as a key factor for explaining variations in children's numerical development.However, the mechanisms behind this relationship are not yet clear.In particular, we do not know why SFON provides a numerical advantage.Hannula et al. proposed that the more children focus on the numerical aspects of their environment, the more practice they acquire with enumeration and thus, the better their counting skills become."To explore this possibility, they looked at the relations between children's subitizing-based enumeration, object counting and SFON. "Regression analyses revealed a direct relationship between children's SFON and their number sequence production skills.In contrast, there was an indirect relationship between SFON and object counting that was explained by individual differences in subitizing-based enumeration skills."This provides some evidence to suggest that SFON promotes perceptual subitizing skills which in turn supports the development of children's counting skills. "Other research has investigated motivational factors in the development of children's SFON and early numerical skills.In one of the first SFON studies to be conducted outside of Finland, Edens and Potter explored the relationship between SFON and counting skills in 4-year-old children in US preschools."They obtained teacher reports of children's motivation, attentional self-regulation, persistence and interest in mathematics. "They also measured children's self-selected activity choices during free-play in the classroom. "In line with the results from Hannula and colleagues, Edens and Potter found a positive correlation between preschoolers' SFON and their object counting and number sequence production skills. "In terms of the motivational factors, they found that teachers' reports of children's motivation and interest in mathematics were significantly correlated with children's counting skills, but not with children's SFON. "Moreover, there was no relationship between children's SFON and their self-selected activity choices during free-play: High-SFON children did not choose overtly number-related activities in their classrooms. "These findings suggest that SFON does not reflect children's interest in mathematics, or at least not their “overt” interest in mathematics. 
"Together these studies indicate that the factors underpinning the relationship between SFON and children's numerical development are more likely to be cognitive than affective.However, the precise mechanisms involved need further investigation.The current literature is sparse and somewhat limited in scope.Thus far, studies exploring the mechanisms of SFON have focused solely on its relationship with early counting skills."We do not know why SFON is related to children's later arithmetical development.We also do not know how SFON relates to more basic numerical competencies such as nonsymbolic processing skills or ‘number sense’."One possibility is that SFON works by increasing children's fluency with number symbols.High-SFON children may get more practice mapping between their newly-acquired symbolic representations of number and pre-existing nonsymbolic representations.As children get practice with, and improve the precision of these mappings, their counting and arithmetic skills may develop.This is theoretically likely because we know from previous research that mapping ability is related to mathematics achievement.For example, Mundy and Gilmore found that children aged 6–8 years showed individual differences in their ability to map between nonsymbolic representations and symbolic representations."These individual differences explained a significant amount of variation in children's school mathematics achievement.Some initial support for this possibility comes from two recent studies.Firstly, Sella, Berteletti, Lucangeli, and Zorzi found that pre-counting children who spontaneously focused on numerosity did so in an approximate manner.Sella et al. suggest that high-SFON children might be more prone to comparing and estimating numerical sets from an early age thus improving the precision of their numerical representations.Secondly, Bull found that high-SFON children performed better than their low-SFON peers on a numerical estimation task, in which they had to assign a symbolic number word to a nonsymbolic array of dots.In other words, children who consistently focused on numerosity were better able to map between nonsymbolic and symbolic representations of number.In addition to these studies, research exploring the transition from informal to formal mathematics knowledge has highlighted the role of mapping ability."In a one-year longitudinal study Purpura, Baroody, and Lonigan demonstrated that the link between children's informal and formal mathematics knowledge was fully explained by individual differences in symbolic number identification and the understanding of symbol to quantity relations.Here, informal mathematics knowledge was defined as “those competencies generally learned before or outside of school, often in spontaneous but meaningful everyday situations including play”.It is important to note that this informal mathematics is a separate construct to SFON.Therefore further research is needed to examine the nature of the relationships between SFON, mapping ability and early arithmetic skills.The aim of the present study was to investigate possible factors which may explain the positive relationship between SFON and symbolic number development."Specifically, we sought to investigate whether the relationship between children's SFON and mathematical skills can be accounted for by individual differences in fluency with nonsymbolic and symbolic representations of number.We gave children aged 4–5 years a battery of tasks designed to assess SFON, nonsymbolic magnitude comparison, 
symbolic comparison, nonsymbolic-to-symbolic mapping and arithmetic skills.We also gave them a digit recognition task to determine their knowledge of number symbols.Furthermore, we obtained measures of visuospatial working memory and verbal skills.The inclusion of these control measures is necessary to show that we are capturing individual differences in SFON, and not just individual differences in working memory or verbal skills."Our predictions for the study were as follows: First, we predicted that SFON would show a significant positive correlation with children's mathematical skills, thus confirming the results of previous studies.We tested this prediction using partial correlation analyses to control for age, working memory skills, verbal skills and Arabic digit recognition."Second, we predicted that the relationship between SFON and mathematical skills would be largely explained by individual differences in children's ability to map between nonsymbolic and symbolic representations of number.We tested this prediction using hierarchical regression analyses with two mathematical outcome measures, symbolic number comparison and standardised arithmetic performance.These outcome measures have been shown to be closely related in previous studies."The inclusion of the symbolic comparison measure allowed us to directly examine the nature of the relationship between children's SFON and their fluency with number symbols.Participants were 130 children aged 4.5–5.6 years.Children were recruited from three primary schools in Nottinghamshire and Leicestershire, UK, which were of varying socio-economic status1: one low, one medium and one high.All children were in the second term of their first year of school.At this stage classes are very informal; learning is play-based and child-led, following the ‘Early Years Foundation Stage’ framework.Participation was voluntary and the children received stickers to thank them for taking part.Study procedures were approved by the Loughborough University Ethics Approvals Sub-Committee.Nine children were excluded from all the analyses for the following reasons: English was not their native language, speech and language difficulties and/or selective mutism, other special educational needs, failure to identify numerical digits beyond 1.A further two children did not complete all of the measures at Time 2, leaving a total of 119 complete datasets.Children took part in two testing sessions scheduled one-week apart.During Session 1 they completed two SFON tasks and a visuospatial working memory task.During Session 2 they completed a series of computer-based numerical processing tasks followed by a standardised measure of arithmetic.Testing took place on a one-to-one basis with the researcher who was present at all times throughout each of the tasks.The tasks were presented in the same order for every child.Each task is described in turn below, in the order in which it was presented.Children were tested individually in a quiet room or corridor outside their classroom.The researcher ensured that the testing area was free from any numerical displays that might have prompted the children to focus on number or helped them to solve a numerical problem.During testing Session 1, children were not told that the tasks were in anyway numerical or quantitative."Likewise, the children's parents and teachers were not informed of the numerical aspects of the study; rather, they were told that the study was focusing on children's general thinking skills. 
"Throughout all tasks children received general praise but no specific feedback was given.At the end of each task children were allowed to choose themselves a sticker.Children completed two SFON measures, an imitation ‘Posting Task’ developed by Hannula and Lehtinen and a novel ‘Picture Task’ adapted from Hannula et al.The order of these tasks was counterbalanced.The materials used in this task were a toy postbox, a pile of 20 blue letters and a pile of 20 yellow letters."The researcher introduced the materials by saying: “Here is Pete the Postman's postbox, and here are some letters.We have some blue letters and some yellow letters.Now, watch carefully what I do, and then you do just the same”.The researcher posted two yellow letters, one at a time, into the postbox followed by one blue letter.They then prompted the child: “Now you do just the same”.On the second trial the researcher posted one blue letter and one yellow letter and on the third and final trial they posted two blue letters and three yellow letters."The researcher progressed from one trial to the next by saying: “Okay, let's go again”. "All of the trials involved small numerosities to ensure that they were within the children's counting range.As outlined in the introduction, it is important that SFON tasks include small numerosities so that all children have sufficient enumeration skills to recognise the small numbers in the activities.This ensures that the tasks capture individual differences in focusing on numerosity and not individual differences in enumeration skills.In line with Hannula and Lehtinen the researcher recorded all verbal and nonverbal quantitative acts.These included utterances including number words, " counting acts, use of fingers to denote numbers, utterances referring to quantities or counting, and interpretation of the goal of the task as quantitative.For each of the three trials children received a score of 0 or 1 depending on whether or not they spontaneously focused on numerosity.Children were scored as spontaneously focusing on numerosity if they posted the same total number of letters as the researcher2 and/or if they presented any of the quantifying acts listed above.Note that because SFON scores for each trial were binary, a child who posted the correct number and a child who posted the correct number plus presented a quantifying act both received the same score of 1.Each child received a total SFON score out of three.Responses were coded by a single observer.A second independent observer coded a random subset of the observation forms to establish inter-rater reliability.The inter-rater reliability was 1.00.The materials used in this task were three cartoon pictures each laminated on A4 card.The pictures are shown in Fig. 
1.The researcher introduced the task by saying: “This game is all about pictures."I'm going to show you a picture, but I'm not going to see the picture.Only you get to see the picture."This means I need your help to tell me what's in the picture.”",On each of three trials, the researcher held up a picture in front of the child and said: “What can you see in this picture?,The researcher wrote down everything the child said.If the child was reluctant to speak, the researcher repeated their request: “Can you tell me what you can see?,If the child spoke too quietly, the researcher prompted them to speak a little louder.There was no time limit for children to respond.When the child finished the researcher asked: “Is that everything?, "When the child was ready to move on the researcher introduced the next trial: “Let's look at another picture.Ready, steady …,The pictures were presented in the same order for each child.Picture 1 showed a girl standing in the rain with a leaf umbrella and baby chicks.Picture 2 showed a boy and a girl in a hot air balloon with houses and trees below.Picture 3 showed a girl with a hat on holding a basket of flowers near the sea.Importantly, all pictures contained several small arrays that could be enumerated, for example, “three chicks”, “two children”, “four flowers”.The set sizes of these arrays ranged from 1 to 9.As with the Posting Task, small numerosities were included so that all children would have sufficient enumeration skills to recognize the numbers in the activities.For each of the three trials children received a score of 0 or 1 depending on whether or not they spontaneously focused on numerosity.Children were scored as spontaneously focusing on numerosity if their description contained any symbolic number word/s, regardless of whether they had enumerated the objects correctly.For example, if a child accurately described “three chicks” in Picture 1 they received a SFON score of 1.Likewise, if a child miscounted and described “four chicks” they too received a SFON score of 1.However, if a child described “some chicks” and made no other reference to number in their description then they received a SFON score of 0.Note that because SFON scores for each trial were binary, a child who mentioned number several times and a child who mentioned number only once both received the same score of 1.As with the Posting Task, each child received a total SFON score out of 3.The inter-rater reliability of two independent observers was 1.00."Children's verbal skills were indexed by the average number of words they uttered on the SFON Picture Task.The SFON Picture Task required children to produce verbal descriptions of the pictures they were presented with.Given these verbal requirements, it is important to show that individual differences on this task were capturing individual differences in SFON, not just individual differences in verbal skills.Verbal skills were thus measured by adding up the number of words children uttered on each Picture Task trial and computing the average across all three trials.The verbal descriptions of the pictures were recorded by the researcher during the testing phase of the Picture Task.The researcher wrote down the descriptions using shorthand allowing her to write as quickly as the children spoke.Overall, children showed large individual differences in the length of their picture descriptions.The word count ranged from 6.67 words to 68.67 words.Verbal skills are controlled for in the analyses presented in the Results section.Visuospatial 
working memory skills were measured using a visual search task adapted from Hughes and Ensor.The materials were a circular silver tray, 11 different coloured paper cups, 9 stickers and an A3 piece of card.The researcher randomly positioned each cup upside down around the rim of the circular tray."They then introduced the task to the child by saying: “Now we're going to play a finding game.Here are some cups.They are all different colours.Can you tell me what colours they are?,This question was intended to check whether the child could distinguish between all of the different colours.The researcher then placed each sticker on top of a cup, pointing out to the child that there were not enough stickers for all of the cups and that two cups would not have stickers.Next, they instructed the child: “Watch carefully whilst I hide the stickers under the cups.Later, you can have a go at finding them.,The researcher hid all of the stickers and then covered the cups with a piece of card."They told the child: “Now, I'm going to spin the cups. "Then you can choose one cup and see if there's a sticker inside.”",The researcher spun the cups and then removed the card for the child to choose a cup.If they found a sticker then they took it out and kept it beside them.The researcher continued by covering up the cups again and spinning them round before allowing the child to choose another.This continued until the child found all 9 stickers, or, until the maximum number of spins was reached.Each child received a score out of 18 depending on the number of errors they made.Children completed four computer-based numerical processing tasks.Task instructions were presented on the laptop screen and they were read aloud by the researcher.Following the computer-based tasks children completed a standardised measure of arithmetic.The researcher was present at all times throughout each of the tasks."This task measured children's ability to compare nonsymbolic numerical stimuli.Children were presented with two arrays of dots and they were asked to select the more numerous of the two arrays.The task was incorporated into a game in which the children saw two fictional characters and were asked to quickly decide who had the most marbles.Numerosities ranged from 4 to 9 and the numerical distance between the two numbers being compared was either small or large.Numerosities 1 to 3 were excluded because they are in the subitizing range.Dot arrays were generated randomly in accordance with previous numerosity experiments, such that no two dot arrays for the same quantity were the same.Stimuli were created using the method by Dehaene, Izard, and Piazza to control for continuous quantity variables such as dot size and envelope area.All dot arrays were black dots on a white circular background as shown in Fig. 
2a.The side of the correct array was counterbalanced.Each of 40 experimental trials began with a fixation cross for 1000 ms, followed by the two dots arrays for 1250 ms, followed by a question mark until response.Stimuli presentation times were chosen based on pilot testing with children of the same age.Children responded by pointing to the character with the most marbles.The researcher recorded these responses via the ‘c’ and ‘m’ keys on a standard keyboard.The order of the trials was randomised and children were prompted to take a break after 20 trials.The experimental trials were preceded by two blocks of four practice trials.In the first practice block children received no time limit; they were presented with a fixation cross followed by the two dot arrays until response.In the second practice block, the researcher introduced the experimental time limit of 1250 ms to prevent the children from counting.The researcher emphasised that it was a speeded game, and children were encouraged to have a guess if they were not sure.Each child received an accuracy score based on the proportion of items they answered correctly."This task measured children's ability to compare symbolic numerical stimuli.Children were presented with two Arabic digits and they were asked to select the numerically larger of the two.Numerosities ranged from 4 to 9.The problems were identical to the nonsymbolic problems, except the numerosities were presented as Arabic digits instead of dot arrays.Symbolic stimuli were black digits on a white circular background as shown in Fig. 2b.Each of 40 experimental trials began with a fixation cross for 1000 ms, followed by the two Arabic digits for 750 ms, followed by a question mark until response.Stimuli presentation times were chosen based on pilot testing with children of the same age.They varied across tasks to avoid floor and/or ceiling effects.In line with the nonsymbolic version of the task, children responded by pointing to the character with the larger number of marbles and the researcher recorded these responses on the computer.The experimental trials were preceded by two blocks of four practice trials.The first practice block had no time limit and the second practice block introduced the experimental time limit of 750 ms. 
All trials were presented in a random order and children were prompted to take a break half-way through.Each child received an accuracy score based on the proportion of items they answered correctly."This task measured children's knowledge of Arabic digit stimuli.Children were asked to read aloud a series of Arabic digits presented one by one in a random order on the laptop screen.Children scored one point for each correct identification giving a total score out of 9."This task measured children's ability to map nonsymbolic numerical stimuli onto symbolic numerical stimuli.Children were presented with an array of dots and they were asked to quickly decide which of two Arabic digits matched the numerosity of the dots.The task was adapted from Mundy and Gilmore.Numerosities ranged from 2 to 9 and the numerical distance between the two symbolic choices was either small or large.The number range included small numerosities within the subitizing range because pilot testing revealed some children to be performing at chance with the larger numerosities.Stimuli were presented simultaneously with the dot array centred at the top and the symbolic stimuli at the bottom left and right hand sides of the screen.Each of 40 experimental trials began with a fixation cross for 1000 ms, followed by the numerical stimuli for 2000 ms, followed by a question mark until response.Stimuli presentation times were chosen based on pilot testing with children of the same age.The dot array disappeared when the question mark appeared to prevent children from counting.Children responded by pointing to the digit that matched the numerosity of the dots.The researcher recorded these responses via the ‘c’ and ‘m’ keys on a standard keyboard.As with the comparison tasks, the experimental trials were preceded by two blocks of four practice trials.Again the researcher emphasised that this was a speeded game and children were encouraged to have a guess if they were not sure.Each child received an accuracy score based on the proportion of items they answered correctly.The arithmetic subtest of the Wechsler Preschool and Primary Scale of Intelligence was administered in accordance with the standard procedure.There were 20 questions in total.Questions 1 to 4 required children to make nonsymbolic judgements about size or quantity, questions 5 to 8 required children to perform counting tasks with blocks and questions 9 to 20 required children to mentally solve arithmetic word problems.Children continued until they had answered four consecutive questions incorrectly.They received a raw score out of 20."First, we present the descriptive statistics for children's performance on each of the experimental tasks. "Next, we explore the correlations among children's performance on the SFON and mathematical tasks. "Finally, we run a series of hierarchical regression models to test whether the relationships between SFON and mathematical skills can be accounted for by individual differences in children's ability to map between nonsymbolic and symbolic representations of number.Fig. 
3 shows the number of children focusing on numerosity from zero to three times on each SFON task. Children's performance on all other tasks is presented in Table 1. Together these demonstrate that children showed individual differences in SFON and a range of performance on the working memory and mathematical tasks. The two SFON tasks varied in terms of difficulty. Scores on the Posting Task were negatively skewed and scores on the Picture Task were positively skewed. As a possible result of this, performance on these two tasks was not significantly correlated. The order of the SFON tasks was counterbalanced; therefore, children's scores were checked for order effects. The results demonstrated no order effects: there was no significant difference in SFON on the Posting Task between children who completed the Posting Task first and children who completed the Picture Task first (−.50, p = .618); likewise, there was no significant difference in SFON on the Picture Task between children who completed the Posting Task first and children who completed the Picture Task first (−1.08, p = .283). Correlations between all variables are reported in Table 2. These show that the SFON tasks were both positively related to performance on the mathematical tasks, thus lending support to Prediction 1. The correlation between SFON and arithmetic was .30 for the Posting Task and .47 for the Picture Task. These correlations are similar in magnitude to those found in previous SFON studies. Importantly, they remain significant even after controlling for age, working memory skills, verbal skills and Arabic digit recognition: children's arithmetic scores were positively correlated with SFON scores on the Posting Task and the Picture Task. To explore the nature of these relationships we ran a series of hierarchical regression models. Specifically, we tested whether nonsymbolic skills and the mapping between nonsymbolic and symbolic representations could account for the relationship between SFON and mathematics achievement. We used two mathematical outcome measures, symbolic comparison performance and arithmetic performance, both of which were highly correlated. For each of these dependent variables, we conducted a set of two models. In the first model, baseline variables were entered in Step 1, followed by SFON in Step 2, and nonsymbolic comparison and mapping performance in Step 3. In the second model, the order of Steps 2 and 3 was reversed. As shown in Table 3, SFON was a significant predictor of symbolic comparison performance when entered in Step 2, before the nonsymbolic comparison and mapping tasks, but not when it was entered after these variables in Step 3. In other words, SFON did not explain significant variance in symbolic comparison performance once nonsymbolic comparison and mapping performance had been taken into account. This demonstrates that the relationship between SFON and symbolic processing skills can be accounted for by individual differences in nonsymbolic skills and mapping skills. With arithmetic performance as the dependent variable, we see a different pattern of results. SFON was a significant predictor of arithmetic when entered into the model at Step 2 and also at Step 3. This shows that SFON explains additional variance in arithmetic performance over that explained by nonsymbolic skills and mapping skills. Therefore, the relationship between SFON and arithmetic skills is only partly accounted for by individual differences in nonsymbolic skills and mapping skills.
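For illustration, the two hierarchical model sets described above could be fitted in R roughly as follows; the data frame and column names are hypothetical, a single SFON score is used for brevity (the study included two SFON tasks), and the exact covariate coding of the published analysis may differ.

```r
# Assumed data frame 'd': one row per child, hypothetical column names
# arithmetic, symbolic_comp, sfon, nonsym_comp, mapping, age, wm, verbal, digits
base <- lm(arithmetic ~ age + wm + verbal + digits, data = d)   # Step 1 baseline

# Model set 1: SFON entered at Step 2, nonsymbolic comparison + mapping at Step 3
m1_step2 <- update(base,     . ~ . + sfon)
m1_step3 <- update(m1_step2, . ~ . + nonsym_comp + mapping)

# Model set 2: order of Steps 2 and 3 reversed
m2_step2 <- update(base,     . ~ . + nonsym_comp + mapping)
m2_step3 <- update(m2_step2, . ~ . + sfon)

anova(base, m1_step2, m1_step3)   # F-test for the R^2 change at each step
anova(base, m2_step2, m2_step3)

# Repeating the same sequence with symbolic_comp as the outcome addresses whether
# SFON still adds variance once nonsymbolic and mapping skills are in the model.
```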
These results add to our limited understanding of how children's informal interactions with number relate to their early mathematical skills. First, they replicate previous studies showing that SFON is associated with an arithmetic advantage. Second, they extend previous findings by providing evidence that this association persists even after controlling for individual differences in Arabic digit recognition, verbal skills and working memory. Third, and most importantly, they advance our theoretical understanding of how SFON may exert its positive influence on arithmetic skills. Specifically, the findings suggest that SFON may lead to increased practice mapping between nonsymbolic and symbolic representations of number, which improves symbolic fluency and, in part, leads to better counting and arithmetic skills. Since mapping ability only partly accounted for the relationship between SFON and arithmetic skills, further research needs to explore the additional factors at play. We highlight two possibilities. One possibility is that SFON improves the precision with which children execute arithmetic procedures. High-SFON children may get more practice counting and as a result they may develop more mature counting strategies, which lead to more accurate arithmetic calculations. We know that as children become more proficient at counting they become less reliant on finger counting and they start to use more mature counting strategies, e.g. ‘counting on’ as opposed to ‘counting all’. Numerous studies have related these advanced counting strategies to improved performance on arithmetic tasks. Therefore, if SFON supports the acquisition of more mature counting strategies then it may also advance children's arithmetic skills, over and above the advantage provided by high-SFON children's mapping ability. A second possibility is that SFON provides an arithmetic advantage because it makes children better at extracting and modelling numerical information from real-world contexts. We know that being able to construct a mental representation of an arithmetic problem is an important process in numerical problem solving. Children with high-SFON tendency may not necessarily have more advanced computational skills; rather, they may be better at working out when these computational skills need to be used. Note that the standardised arithmetic task used in the present study comprised several word-based problems in which children needed to extract and model numerical information from a real-world story context, e.g.
buying apples, sharing sweets and losing toys.As well as testing these possibilities, it would be valuable for future studies to examine issues surrounding causality.Data presented here is cross-sectional thus we can only tentatively specify the causal nature of SFON based on prior longitudinal research."Hannula and Lehtinen showed that children's SFON was reciprocally related to counting skills.This suggests that SFON and arithmetic skills are likely to develop together in a cumulative cycle.Further longitudinal work will allow us to determine whether SFON increases symbolic fluency, and therefore arithmetic skills, and/or vice versa.In addition to these theoretical issues, the findings from the present study generate methodological discussion.Here we introduced a new picture-based task for measuring SFON.Children were shown a cartoon picture and they were asked to describe what was in the picture.The potential advantages of this task are threefold.Firstly, there are several competing dimensions on which one can choose to focus.Children may focus on the number of items in the picture or they may focus on the colours of the items or the emotional content.This contrasts with the Posting Task on which children can focus on little information other than the number of letters posted.Secondly, unlike the pretend play activities of the Posting Task, the Picture Task is suitable for participants of all ages.It may be administered with simple cartoon pictures for preschoolers and primary school-aged children or with more complex visual scenes for older children and adults.Importantly, this means that we can study SFON throughout development in a simple and consistent manner.Thirdly, the Picture Task is quick and easy to run.While the Posting Task needs to be administered on a one-to-one basis, the Picture Task may be flexibly administered in small or large whole group settings.Here participants would be required to write down their descriptions rather than orally responding.Thus, we would first need to consider whether written SFON responses differ from oral SFON responses.Despite these potential advantages, the Picture Task is not without its limitations.In view of the verbal requirements of the task it is only appropriate for children who have developed verbal communication skills.It would not be suitable for measuring SFON in infants or children with speech and language difficulties, and it may need to be used cautiously with bilinguals."Given the verbal demands, it is necessary to control for children's verbal skills when using this Picture Task. "Recall that in our current study verbal skills were measured by calculating the average number of words in children's picture descriptions. "This average word count was positively correlated with children's SFON thus it was entered as a control variable in all our regression models. "Future studies should employ similar controls for the length of children's picture descriptions, ideally by audio recording and transcribing children's verbal responses. "In the current study children's picture descriptions were written down by the researcher at the time of testing which may not be an entirely reliable way of recording the responses. "A further methodological issue is that the Picture Task cannot be used interchangeably with the Posting Task because children's scores on these tasks were not significantly correlated. 
"This lack of correlation may stem from the oppositely skewed distributions of children's performance on the two tasks.Scores on the Posting Task showed a tendency towards ceiling effects while scores on the Picture Task showed a tendency towards floor effects.Alternatively, the lack of correlation between the tasks may be due to the different task demands and response modes.We speculate on four key differences below.First, one noticeable difference between the two SFON tasks is that the Posting Task requires a nonverbal response whereas the Picture Task requires a verbal response."Second, the tasks can be seen to differ in terms of their time frame for focusing on numerosity: In the Posting Task the child needs to focus on number immediately, whilst in the Picture Task the child has as much time as s/he wants to start focusing on numerosity.Third, the ambiguity of the number aspect differs in the two tasks: In the Posting Task the ambiguity is at the level of the general aim of the task, whereas in the Picture Task the ambiguity is at the level of how to proceed in the task.Fourth, the tasks differ in that the Posting Task is an action-based task whereas the Picture Task is a perception-based task.Research suggests that action and perception are functionally dissociable streams of the visual system which may not be linked in early development; thus it is perhaps not surprising that performance on the tasks is not correlated in four- and five-year-old children.In view of these speculations, further research is needed to untangle the subtle differences between the tasks and the underlying SFON constructs that they are measuring.Both tasks show predictive validity of arithmetic skills therefore they both warrant further investigation.Finally we turn to the educational implications of this research."Our findings show that SFON is an important factor in the development of children's early numerical skills.This raises interesting questions as to whether SFON is something that can be trained."Can we increase children's tendency to recognise and use numbers in informal everyday contexts?",If so, do increases in SFON lead to better mathematical outcomes?,Researchers have started to explore these issues."A preliminary small-scale intervention study found that preschool children's SFON was enhanced through social interaction in day care settings. 
"This enhancement was associated with improved cardinality skills suggesting that SFON-based interventions may help to support children's early counting development.The present study reveals one way in which SFON may exert its positive influence on arithmetic skills.Specifically, it shows that the relationship between SFON and arithmetic can be explained, in part, by individual differences in nonsymbolic number skills and the mapping between nonsymbolic and symbolic representations of number.In light of this, adult guidance in helping low-SFON children to recognise and use more everyday numerosities may help children to make the links between symbols and quantities."With more practice making these links, the precision of low-SFON children's mappings between nonsymbolic and symbolic representations of number may increase, and this may support the development of early counting and arithmetic skills.We know that preschoolers show individual differences in numerical knowledge and these individual differences predict mathematics achievement throughout the primary and secondary school years.Moreover, we know that the more children engage with and enjoy informal numerical activities before school, the more they engage with formal mathematics throughout school and higher education."Therefore, the present research exploring children's early engagement with numbers may play a key role in identifying important factors that influence children's later success with mathematics. | Children show individual differences in their tendency to focus on the numerical aspects of their environment. These individual differences in 'Spontaneous Focusing on Numerosity' (SFON) have been shown to predict both current numerical skills and later mathematics success. Here we investigated possible factors which may explain the positive relationship between SFON and symbolic number development. Children aged 4-5 years (N = 130) completed a battery of tasks designed to assess SFON and a range of mathematical skills. Results showed that SFON was positively associated with children's symbolic numerical processing skills and their performance on a standardised test of arithmetic. Hierarchical regression analyses demonstrated that the relationship between SFON and symbolic mathematics achievement can be explained, in part, by individual differences in children's nonsymbolic numerical processing skills and their ability to map between nonsymbolic and symbolic representations of number. |
31,421 | Validation of Structures in the Protein Data Bank | The Worldwide Protein Data Bank is the international consortium that maintains the Protein Data Bank—the single global archive of three-dimensional structural models of biological macromolecules and their complexes as determined by X-ray crystallography, nuclear magnetic resonance spectroscopy, three-dimensional cryoelectron microscopy, and other techniques.wwPDB consortium members include the Research Collaboratory for Structural Bioinformatics, Protein Data Bank in Europe, Protein Data Bank Japan, and Biological Magnetic Resonance DataBank.In an effort to improve efficiency and share the structure deposition workload, the four wwPDB partners recently launched OneDep, a unified system for deposition, biocuration, and validation of macromolecular structure data.The biocuration of PDB entries primarily involves verification, consistency checking, and standardization of submitted data.Biocurators review and annotate polymer sequence information, chemical description of ligands and modified polymer residues, and composition of biological assemblies.In structural biology it has become critically important to supply experimental data along with atomic coordinates to allow validation of the structural model and to support inferences therefrom.Clearly, raw experimental data, before application of any transformations which may lead to loss of information, and devoid of interpretation, would lend the ultimate support of the final model and allow an independent verification of the results, leading to novel validation tools.Efforts to archive such raw data are under way through established archives for X-ray diffraction images, X-ray free electron laser images, NMR free induction decay, and 3DEM images.The wwPDB currently enforces archiving of reduced representations of experimental data, while encouraging deposition of raw experimental data into these method-specific resources.Efforts by the PDBx/mmCIF Working Group to improve and extend the capture of processed diffraction data to include unmerged intensities and details of crystal samples and raw images contributing to integrated intensities are ongoing.Mandatory archiving of structure factors and NMR restraints began in 2008, followed by NMR-assigned chemical shifts in 2010, and 3DEM volume maps in 2016.The availability of experimental data not only enhances the integrity of the PDB archive but also allows systematic validation of atomic structures, and ultimately leads to better validation tools and improved quality of the archived data.Validation tools developed by the community and implemented within the OneDep system help to identify possible issues with experimental data, atomic model, or both, and thus allow depositors the opportunity to review and correct any errors prior to concluding a PDB deposition.In addition, unresolved issues may be uncovered by wwPDB biocurators or by manuscript reviewers, who are provided with access to the official wwPDB validation report.One of the more time-consuming tasks faced at present by wwPDB biocurators is the reprocessing of entries, as occasioned by depositors submitting revised atomic models to address issues uncovered during biocuration or manuscript peer review.The wwPDB stand-alone validation server was developed with the express purpose of enabling depositors to identify problems and resolve them in advance of submission.To incorporate state-of-the-art validation tools into the wwPDB biocuration pipeline, and to provide useful validation 
metrics to depositors and other PDB users, the wwPDB convened Validation Task Forces for crystallography and NMR, and together with the EMDataBank project partners convened a corresponding VTF for 3DEM.A validation software pipeline informed by the recommendations of the three VTFs has been integrated into both the OneDep system and the stand-alone wwPDB validation server.All three VTFs recommended that structures deposited to the PDB be validated against three broad categories of criteria, each of which is discussed in more detail in subsequent sections.The first category involves knowledge-based validation of the atomic model, without regard to the associated experimental data.Examples include the number of residues that are outliers in the Ramachandran plot and the number of too-close contacts between non-bonded atoms.For each of these criteria, the report provides both raw and normalized scores.To the extent possible, structural models from all experimental methods are evaluated with the same criteria in this category.The second category involves analysis of experimental data.Criteria in this category are specific to the experimental technique and sometimes to its “submethods”; they include metrics such as Wilson B value or estimated twinning fraction in crystallography and completeness of chemical-shift assignments in NMR.The third category involves analysis of the fit between the atomic coordinates and the underlying experimental data.Criteria for crystallography include metrics such as R and Rfree and real-space-fit outlier residues.Criteria for NMR and 3DEM models are still under development, and the validation pipeline will be augmented with these when they become available.Some metrics are analyzed across the entire archive so that percentile scores can be derived.It is very important to note that issues highlighted by a validation metric do not necessarily imply errors in the model.Instead they may point to genuine, albeit unusual, features of the structure, which may be of biological interest: e.g., Val50 in the structure of the protein annexin is involved in Ca2+ ion coordination and is consistently flagged as a Ramachandran outlier.Such unusual features should, however, be supported by convincing experimental evidence.The wwPDB is working toward providing depositors with a mechanism for adding explanatory comments to the official wwPDB validation reports.Official wwPDB validation reports provide both overall quality scores for a PDB submission and detailed lists of specific issues.Above-average global scores can sometimes mask local issues; hence it is important to review the entire report, especially during structure refinement.The reports are provided as human-readable PDF files and as machine-readable XML files, and are made available with the public release of the corresponding PDB entry.The machine-readable files contain all of the detailed validation information and statistics."For example, the validation XML file specifies for each protein residue any outlying bond length or bond angle, the residue's rotameric state, its region in a Ramachandran plot, any atoms involved in too-close contacts, and the fit to electron density.These XML files can be read and interpreted by popular visualization software packages, such as Coot, to display validation information for any publicly available PDB entry.Herein, we describe the format and content of the PDF files, which are the more commonly accessed validation report files.A full description of the report content is available at 
https://wwpdb.org/validation/validation-reports.The PDF validation reports are available in two formats: a summary, in which a maximum of five outliers are presented for each metric, and a complete report, in which all outliers are enumerated.The PDF reports are organized as follows.The title page displays the wwPDB logo, specifies the type of the report, shows basic administrative information about the uploaded data or the PDB entry, lists the software packages and versions that were used to produce the report, and provides a URL to access help text at https://wwpdb.org.The executive summary shows key information about the entry, such as the experimental technique employed to determine the structure, a proxy measure of information content of the analyzed data, and a number of percentile scores, comparing the validated structure to the entire PDB archive.Table 1 lists key criteria reported in this section, covering knowledge-based geometric validation scores.For crystal structures, the fit to experimental data is summarized by an overall measure and by the fraction of residues that locally do not fit the electron density well.These criteria were selected because they are not typically optimized directly during structure refinement.Ideally, a high-quality structure will score well across the board.Good values for only one of the metrics with poor scores for others could be a sign of a biased model building/refinement protocol.For each metric, two percentile ranks are calculated: an absolute rank with respect to the entire PDB archive and a relative rank.For crystallographic structures, the relative rank is calculated with respect to structures of similar resolution, while structures derived from NMR or 3DEM are compared against all other NMR or 3DEM structures, respectively.Absolute percentile scores are useful to general users of the PDB to evaluate whether a given PDB entry is suitable for their purposes, while the relative percentiles provide depositors, editors, reviewers, and expert users with a means to assess structure quality relative to other structures derived in a similar manner.The percentile ranks are followed by a graphical summary of chain quality.Each standard polypeptide and polynucleotide residue is checked against ideal bond and angle geometry, torsion-angle statistics, and contact distances.Residues are then color coded based on the results: green if no issues are detected, yellow if there are outliers for one criterion, orange if there are outliers for two criteria, and red for three or more criteria with outliers reported.A horizontal stack bar plot presents the fraction of residues with each color code for each polypeptide or polynucleotide chain.The fraction of residues present in the experimental sample but not included in the refined atomic model is represented by a gray segment, and the fraction of residues “ill-defined” by the NMR ensemble is represented by a cyan segment.For X-ray crystal structures, an upper red bar indicates the fraction of residues with a poor fit to the electron density.This is followed by a table listing ligand molecules that show unusual geometry, chirality, and/or fit to the electron density.The section on overall quality is followed by one on entry composition, which describes each unique molecule present in the entry.For NMR entries, a separate section on ensemble composition is also included.As most NMR structures are deposited as ensembles of conformers, this section reports on what parts of the entry are deemed to be well-defined or 
ill-defined and also identifies a medoid representative conformer from the ensemble, i.e., the conformer most similar to all the others.The section on residue quality highlights residues that exhibit at least one kind of issue, i.e., color coded yellow, orange, or red, as described above.While unusual features are not unexpected even in high-resolution structures, typically occurring with a frequency of 0.5%, they nevertheless should be inspected, and the sequence plots are intended to help users more easily find residues with validation issues.The section that presents an overview of the experimental data is specific to each experimental technique.For X-ray crystal structures, the structure factors are analyzed using the Phenix tool Xtriage to identify outliers, assess whether the crystalline sample was twinned, and analyze the level of anisotropy in the data.The R and Rfree values are presented as provided by the depositor and as recalculated by the wwPDB from structure-factor amplitudes and the model.The Rfree value measures how well the atomic model predicts the structure factors for a small subset of the reflections that were not included in the refinement protocol.It is a useful validation metric showing whether there are sufficient experimental data and restraints compared with the number of adjustable parameters in the model: Rfree values much higher than R could indicate an overfitting to experimental data during refinement.R values provided by the depositor are displayed along with R values recalculated by the DCC tool from the atomic model and structure factors with the same refinement program as was used to refine the atomic model.Good agreement between the depositor R values and those recalculated serves to check whether the data have been uploaded and interpreted correctly within the OneDep system.For NMR structures, the report contains an overview of the structure determination process and the overall completeness of the resonance assignments.For 3DEM structures, if a volume map is available, basic information describing the experimental setup and the map is included.The section on model validation provides further details for each criterion covering polypeptides, ribonucleic acids, small molecules, and non-standard polymer residues."The bond lengths and bond angles of amino acid and nucleotide residues are checked by MolProbity's Dangle module against standard reference dictionaries.Close contacts between non-bonded atoms are analyzed using MolProbity.As MolProbity does not deal with close contacts between symmetry-related molecules in the case of crystallographic experiments, these are checked by the in-house software “MAXIT”.MolProbity also performs protein-backbone and side-chain torsion-angle analysis and RNA-backbone and ribose-pucker analysis.For X-ray crystal structures of proteins, cases where 180° flips of histidine rings and glutamine or asparagine side chains improve the hydrogen-bonding network without detriment to the electron density fit are also reported.The MAXIT software is also used to identify and report cis-peptides and stereochemistry issues, such as chirality errors and polymer linkage artifacts.The geometry of all non-standard or modified residues of a polymer, small-molecule ligands, and carbohydrate molecules is analyzed with the Mogul software.For each bond length, bond angle, dihedral angle and ring pucker, Mogul searches through high-quality, small-molecule crystal structures in the Cambridge Structural Database to identify similar fragments.Each 
bond length, angle, and so forth in the compound is compared against the distribution of values found in comparable fragments in the CSD, and outliers are highlighted.Chirality problems are diagnosed by checking against the wwPDB Chemical Component Dictionary definitions.The fit of the atomic model to experimental data is analyzed by the procedure developed for the Uppsala Electron Density Server.Electron density maps are calculated with the REFMAC program using the atomic model and the structure factors.The fit is assessed between an electron density map calculated directly from the model and one calculated based on model and experimental data.The fit is analyzed on a per-residue basis for proteins and polynucleotides, and reported as the real-space R value.These RSR values are normalized by residue type and resolution band to yield RSRZ.Residues with RSRZ >2 are reported as outliers.At present, this analysis is not possible for non-standard amino acids/nucleotides or ligands, as these compounds are not present in sufficient numbers in the PDB to generate reliable Z scores.For these, therefore, only the RSR value, real-space correlation coefficient, and the so-called Local Ligand Density Fit score are reported.LLDF for a ligand or non-standard residue is calculated as follows: all standard amino acid or nucleotide residues within 5.0 Å distance of any atom of the ligand or non-standard residue are identified by the CCP4 NCONT program, taking crystallographic symmetry into account.The mean and SD of the RSR values for these neighboring residues are then calculated, and these are used with the RSR value of the ligand or the non-standard residue itself to provide a local, internal Z score.If fewer than two neighboring residues are within 5.0 Å of the entity, then LLDF cannot be calculated.LLDF values greater than 2 are highlighted in the reports.The wwPDB partners and the crystallography community are evaluating this and other metrics to reliably assess the fit to electron density for bound ligands, following the recommendations of the wwPDB/CCDC/D3R Ligand Validation Workshop.For NMR structures, the report contains a section on validation of assigned chemical shifts.Each structure can potentially be linked to more than one list of chemical shifts.Therefore, each chemical-shift list is treated independently.For each list, a table summarizing any parsing and mapping issues between the chemical shifts and the model coordinates helps depositors detect and correct data entry errors.For entries containing proteins, the PANAV package is invoked to suggest corrections to chemical-shift referencing.Completeness of resonance assignments per chemical-shift list is calculated for each type of nucleus and location.Unusual chemical-shift assignments are identified according to the statistics compiled by BMRB.Severe chemical-shift outliers are frequently the result of spectral “aliasing,” and these need to be corrected to achieve valid data deposition.Finally, for entries containing polypeptides, the amino acid sequence and chemical shift information is used by the RCI software to calculate a random coil index for each residue, which estimates how likely the residue is to be disordered.In a bar-graph representation of RCI for each polypeptide chain, each residue considered to be ill-defined from the analysis of the NMR ensemble of conformers is colored cyan; this result from analysis of coordinates alone can then be compared with experimental evidence for potential disorder from the RCI.The OneDep 
validation module is used at various points during PDB data deposition and biocuration.When data deposition is concluded, a preliminary validation report is supplied to the depositor, who must review and accept this report before the uploaded data can be submitted for biocuration.Depositors are strongly encouraged to review all issues enumerated in the preliminary validation report and to address them before continuing to the submission step.Data re-upload is possible at this stage in the process.Once the depositor accepts the preliminary validation report, uploaded data are submitted for biocuration, which serves to resolve data integrity and representation issues prior to the final validation step, which results in the official wwPDB validation report for the uploaded entry.These official wwPDB validation reports are watermarked as confidential and contain information describing the entry, including title and PDB accession code, plus a much richer analysis of small molecules and non-standard polymer residues than is possible at the preliminary stage.A growing number of journals require that manuscripts describing biomacromolecular structures be accompanied by the official wwPDB validation report.At the time of public release of the entry, the official wwPDB validation report is updated to reflect any revisions to the entry or to the validation pipeline.Released official wwPDB validation reports are made publicly available via the wwPDB FTP area and the wwPDB partner websites.Population statistics for the entire archive are recalculated annually and the reports for all entries are then updated accordingly.The same validation module is available from the wwPDB stand-alone validation web server and from an application programming interface designed for use by structure determination, refinement, and visualization software.The primary function of the stand-alone validation web servers and the API is to allow checking of the atomic model and experimental data during structure determination and refinement.At the time of writing, these two access modes combined generate on average ∼600 invocations of the wwPDB validation pipeline per week.We expect this number to increase as awareness builds in the community.At present, the wwPDB stand-alone validation web servers/API generate only preliminary wwPDB validation reports, which are not appropriate for submission with scientific manuscripts.The wwPDB validation pipeline orchestrates execution of each community-recommended validation tool, extracts key metrics produced by these tools, and packages this information in both summary reports and detailed XML data files.The pipeline is implemented as a set of modules, each responsible for preparing the inputs in required formats and parsing the outputs of a particular validation tool.The modules access data and validation tools through a collection of APIs shared by all of the wwPDB OneDep system components.These core APIs provide uniform access to the diverse set of pipeline dependencies, including both locally developed and community-supported tools and libraries.As the pipeline executes each module, it records names and versions of each validation tool together with the completion status for the tool.Pipeline results are recorded in data files and summarized in formatted reports.The data file organization is documented in the XSD format schema files.Summary reports are composed using TeX formatting instructions and rendered in PDF format for delivery.Access to the wwPDB validation pipeline is provided 
in three ways: as an anonymous pre-deposition web user interface, as an integral part of the wwPDB deposition and biocuration platform, and as a web-service API.The web user interface implementation makes use of the OneDep software framework, which selects only the subset of the deposition user interface features required to support the validation service.The anonymous wwPDB stand-alone and OneDep deposition validation services both manage computationally intensive workloads using the OneDep internal workflow system.While both services share the same OneDep software stack, these services are independently deployed and hosted on separate compute clusters.Compute resources can be scaled according to demand.The web-service API is supported by both a client-side Python implementation and a Unix command-line interface.Execution of the wwPDB validation pipeline using the API involves multiple steps performed in the context of a validation session.Within a session, the API provides methods to upload data files, queue validation pipeline requests, check completion status, and recover result files.The API steps are summarized in Table 4.The Python client API, bundled by standard Python package management tools, is available from the Python Package Index server.Installation and user documentation for the Python API and CLI are provided at https://wwpdb.org/validation/onedep-validation-web-service-interface.Future resource requirements of the web-service API are anticipated to be significantly greater than those of the web user interfaces.As a result, a different workflow system has been developed to support the web-service deployment.This system uses a message broker to route requests from the web-service API to a distributed collection of task queues.Queued validation task requests are handled by a set of back-end services.The volume of back-end services can be adjusted quickly in response to changes in workload.Our current implementation uses the RabbitMQ message broker and the supporting AMQP Python client library.As the component validation tools and underlying reference datasets of high-quality structures are updated, both raw and normalized scores calculated by the wwPDB validation pipeline are likely to change over time.Moreover, as the PDB continues to grow, percentile ranks of structures also change.To account for such changes, wwPDB validation reports are regenerated annually for the entire public archive, with recalculated statistics underlying the percentile ranks based on the state of the PDB archive on December 31 of the preceding calendar year.Following internal review, the updated reports replace the older versions in the public wwPDB FTP areas.The most recent update took place on March 15, 2017.Older reports continue to be accessible via yearly snapshots of the wwPDB FTP area.For most entries, changes in the percentile ranks are modest year-on-year.However, with improved tools for structure determination and more awareness of the importance of validation, it is hoped that erroneous features will become increasingly rare in newly deposited structures.As a result, the percentile ranks for older structures are expected to slowly decline, reflecting an increase in overall quality of structures in the PDB archive.Official wwPDB validation reports provide an assessment of structure quality using widely accepted and community-recommended standards and criteria.To help deliver the best possible quality in the PDB archive, the wwPDB partners strongly encourage journal editors and referees to 
request these reports from authors as part of the manuscript submission and review process.To achieve this goal, wwPDB partners have formally approached the journals responsible for publishing most structures to request them to implement mandatory submission of official wwPDB validation reports together with manuscripts describing the structures.,Figure 2 lists the 25 journals that published the majority of PDB structures between 2012 and 2016.At the time of writing, submission of official wwPDB validation reports is required by Structure, the Nature Publishing Group, eLife, the Journal of Biological Chemistry, all International Union of Crystallography journals, FEBS Journal, the Journal of Immunology, and Angewandte Chemie International Edition in English as part of their manuscript submission process.Submission of official wwPDB validation reports is further recommended by Cell, Molecular Cell, and Cell Chemical Biology.The interaction between wwPDB and journals is an ongoing effort.More journals have expressed interest recently, and we expect that additional publishers will commence requiring wwPDB validation reports as part of their manuscript review process.To assist the structural biology and wider scientific community in interpreting the valuable information contained in wwPDB validation reports, the OneDep team has made available an extensive set of documentation materials at https://wwpdb.org/validation/validation-reports.These materials include explanatory notes for each kind of validation report, frequently asked questions, and instructions for use of the web-service API.Introduction of wwPDB validation reports for structures determined by X-ray crystallography, NMR, and 3DEM coincided with growing awareness of the importance of validation in each of the experimental communities.The X-ray crystallography community in particular has developed, over a period of more than 25 years, sophisticated validation tools for analysis of experimental data and atomic models, and of the fit between the two.The NMR community has also made significant advances in the validation arena in recent years.The trends described here reflect a growing maturity of structural biology as a field.Figure 3 documents that geometric quality scores for X-ray crystal structures of proteins have improved over the past decade, as the tools for structure determination evolved and structure validation became more commonplace.It was observed 15 years ago, when data deposition was less common, that the “tendency of macromolecular crystallographers to deposit their experimental data is strongly negatively correlated to the free R value of their models.,Thus, another contributing factor to the improving statistics may be the fact that deposition of experimental data has become mandatory since then.This important development enabled better validation of structures, calculation of electron density maps for all crystal structures, and recalculation of structural models.Ramachandran analysis is perhaps the best-known and most widely used geometric quality metric for experimentally determined models.Figure 3A shows that the distribution of the fraction of residues in an entry classified as Ramachandran outliers remained relatively constant until approximately 2005, at which time the distribution started to narrow.Only 25 of the released X-ray crystal structures deposited to the PDB in 2016 had more than 5% Ramachandran outliers.Similar trends are observed for the fraction of residues modeled in non-rotameric conformations 
and for the clashscore of X-ray crystal structures.More detailed statistical analyses of the PDB archive show that X-ray structure quality assessed over 2-year intervals improved between 2012–2013 and 2014–2015.For NMR entries, the analysis of validation metrics reveals fewer trends.There has been no perceptible change in the fraction of Ramachandran outliers, residue side chains modeled in non-rotameric conformations, or clashscores since 2006, and the observed distributions of these metrics are considerably wider than seen for X-ray crystal structures.Nevertheless, the highest-quality NMR structures compare well with crystal structures on these three metrics.Figure 4 shows that the quality of bond lengths and bond angles for ligands and small molecules deposited to the PDB, as assessed with Mogul, has remained unchanged during the past decade.The wwPDB, having become keenly aware of this issue, convened the first wwPDB/CCDC/D3R Ligand Validation Workshop in 2015.This workshop brought together co-crystal structure determination experts from academia and industry with X-ray crystallography and computational chemistry software developers with the goal of discussing and developing best practices for validation of co-crystal structures, editorial/refereeing standards for publishing co-crystal structures, and recommendations for ligand representation across the archive.These recommendations have been published and were endorsed by the wwPDB X-ray VTF at its most recent meeting in November 2015.Implementation of the recommendations is under way.The OneDep validation module will continue to be developed and improved as the wwPDB partnership receives further recommendations from the expert VTFs for X-ray, NMR, and 3DEM, the OneDep system is refined, and feedback is received from PDB depositors and users alike.Analyses of wwPDB biocuration efficiency have suggested that further improvements could be made by encouraging the use of the stand-alone wwPDB validation server.The wwPDB biocurators note that one of the major reasons for depositors to re-refine their models after the first round of biocuration is poor validation metrics pertaining to ligands.This realization informs the ongoing wwPDB efforts to provide richer information about the quality of ligands in the preliminary reports, including an encouragement to submit the refinement dictionaries used by depositors.A recent improvement to the OneDep deposition web pages allows highlighting of major issues pertaining to polymer geometry from the validation reports with the intention of providing this information in a more accessible form.Preliminary data indicate a reduction in the number of data replacements following this change.Table 2 illustrates the bidirectional interaction between depositors and the wwPDB OneDep system.The wwPDB partners strongly encourage depositors to first use the stand-alone validation server and correct their structural model as much as possible prior to deposition.Depositors are also strongly advised to address issues raised by the wwPDB biocuration staff prior to release of the PDB entry.The wwPDB validation reports for NMR structures do not yet include analysis of NMR restraints.To achieve this goal, the wwPDB in partnership with Leicester University has convened a working group for standardization of restraint representation.The resulting NMR Exchange Format will be supported by all major NMR software packages for structure determination and will be unambiguously convertible to the NMR archival format.The NMR-STAR 
dictionary has been updated to handle the data in NEF format, and dictionary version 3.2 has been released in January 2017.A bidirectional translator to interconvert NEF and NMR-STAR files is now also available.The wwPDB validation pipeline will be extended to include analysis of restraint data and of the fit between atomic model and restraints.The wwPDB validation reports for 3DEM structures currently include only assessment of geometric parameters for the map-derived atomic coordinates.In the near future, we will add basic information about the experimental map and map-model fit, integrating some of the features from the EM map visual analysis software.Recent technological breakthroughs in 3DEM have already led to a rapid increase in the number of depositions of electric potential maps in EMDataBank and atomic models in the PDB.The wwPDB and EMDataBank partners are leading community efforts to define the information to be collected at deposition and to solve challenges of validation of 3DEM maps, models, and the fit between the two.At a recent wwPDB PDBx/mmCIF working group meeting, a decision was taken to convene a Subcommittee for Electron Microscopy; we also plan to reconvene the EM VTF to obtain further recommendations.Major contributors to this project are S.G., E.S.G., P.M.S.H., A.G., J.D.W., Z.F., and H.Y.The X-ray validation pipeline was implemented by S.G., Z.F., H.Y., O.S.S., and J.D.W.The NMR validation pipeline was implemented by P.M.S.H., A.G., Z.F., O.S.S., and S.M.The 3DEM validation pipeline was implemented by E.S.G. and O.S.S.The validation pipeline was integrated in the OneDep deposition and biocuration system by J.D.W., T.J.O., E.P., Z.F., E.S.G., P.M.S.H., L.M., O.S.S., and J.M.B.The stand-alone validation web server was implemented by E.S.G., E.P., T.J.O., P.M.S.H., and J.D.W.The validation web-service API was implemented by J.D.W. Annual report recalculations were performed by S.G. and O.S.S. Testing of integrated systems and feedback on the report content were provided by S.S., J.Y.Y., J.M.B., G.S., A.M., C.S., E.P., B.P.H., M.R.S., C.L.L., A.P., A.G., Y.I., N.K., K.B., E.L.U., and R.Y. Project management was provided by A.G., A.P., M.Q., J.D.W., and J.Y.Y. Overall project direction was provided by J.L.M., H.N., H.M.B., S.K.B., S.V., and G.J.K.The manuscript was written by A.G., J.Y.Y., J.D.W., C.L.L., J.M.B., O.S.S., and A.P. | The Worldwide PDB recently launched a deposition, biocuration, and validation tool: OneDep. At various stages of OneDep data processing, validation reports for three-dimensional structures of biological macromolecules are produced. These reports are based on recommendations of expert task forces representing crystallography, nuclear magnetic resonance, and cryoelectron microscopy communities. The reports provide useful metrics with which depositors can evaluate the quality of the experimental data, the structural model, and the fit between them. The validation module is also available as a stand-alone web server and as a programmatically accessible web service. A growing number of journals require the official wwPDB validation reports (produced at biocuration) to accompany manuscripts describing macromolecular structures. Upon public release of the structure, the validation report becomes part of the public PDB archive. Geometric quality scores for proteins in the PDB archive have improved over the past decade. Gore et al. 
describe the community-recommended validation reports, produced by wwPDB at deposition and biocuration of PDB submissions, and integrated into the archive of publicly released PDB entries. The authors also show that the quality of protein structures has improved over the last decade. |
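The Local Ligand Density Fit (LLDF) score described in the validation-report entry above is essentially a local Z score: the RSR of a ligand or non-standard residue is compared against the mean and standard deviation of the RSR values of the standard residues lying within 5.0 Å of any of its atoms. The sketch below re-implements that stated definition for illustration only; it is not the wwPDB pipeline code, and the choice of the sample standard deviation is an assumption.

```python
import statistics

def lldf(ligand_rsr, neighbour_rsr):
    """Local Ligand Density Fit, following the definition given above:
    Z score of the ligand's RSR against the RSR values of standard residues
    within 5.0 A of any ligand atom (symmetry mates handled upstream).
    Returns None when fewer than two neighbours are found, in which case
    the score cannot be calculated."""
    if len(neighbour_rsr) < 2:
        return None
    mean = statistics.mean(neighbour_rsr)
    sd = statistics.stdev(neighbour_rsr)  # sample SD; the report does not state which estimator is used
    if sd == 0:
        return None
    return (ligand_rsr - mean) / sd

# As in the report, values greater than 2 would be highlighted.
score = lldf(0.32, [0.12, 0.14, 0.11, 0.15, 0.13])
print(score, "-> flagged" if score is not None and score > 2 else "-> not flagged")
```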
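The executive summary of the report (described above) gives each metric two percentile ranks: an absolute rank against the whole PDB archive and a relative rank against entries determined at similar resolution. A minimal sketch of that comparison is shown below, assuming a pre-compiled table of archive-wide scores and resolutions; the ±0.25 Å window and the percentile convention are our assumptions, not the published procedure.

```python
import numpy as np

def percentile_ranks(value, archive_scores, archive_resolutions, resolution,
                     window=0.25, lower_is_better=True):
    """Absolute rank against all archive entries and relative rank against
    entries within +/- `window` Angstroms of the query resolution."""
    scores = np.asarray(archive_scores, dtype=float)
    resolutions = np.asarray(archive_resolutions, dtype=float)
    better = scores > value if lower_is_better else scores < value
    absolute = 100.0 * better.mean()
    similar = np.abs(resolutions - resolution) <= window
    relative = 100.0 * better[similar].mean() if similar.any() else float("nan")
    return absolute, relative

# Example: a clashscore of 4.5 at 1.8 A resolution against a toy archive table.
scores = [2.0, 4.0, 6.0, 8.0, 15.0, 30.0]
res = [1.2, 1.8, 1.9, 2.3, 2.8, 3.4]
print(percentile_ranks(4.5, scores, res, 1.8))
```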
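The web-service deployment described above routes requests from the validation API through a RabbitMQ message broker to distributed task queues that are consumed by back-end workers. The fragment below sketches that general producer/consumer pattern with the pika AMQP client; the queue name, message fields and connection details are hypothetical and do not describe the actual wwPDB configuration.

```python
import json
import pika

# Producer: the web-service front end publishes a validation request.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="validation_requests", durable=True)
channel.basic_publish(
    exchange="",
    routing_key="validation_requests",
    body=json.dumps({"session_id": "abc123", "files": ["model.cif", "data.mtz"]}),
    properties=pika.BasicProperties(delivery_mode=2),  # persist across broker restarts
)
connection.close()

# Consumer: a back-end worker runs the validation pipeline for each request.
def handle(ch, method, properties, body):
    request = json.loads(body)
    # ... run the validation pipeline for request["session_id"] here ...
    ch.basic_ack(delivery_tag=method.delivery_tag)

worker = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
ch = worker.channel()
ch.queue_declare(queue="validation_requests", durable=True)
ch.basic_consume(queue="validation_requests", on_message_callback=handle)
ch.start_consuming()
```

Because the broker decouples request submission from execution, the number of consumers can be scaled up or down to match the anticipated API workload, as noted in the entry above.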
31,422 | The diversity of CM carbonaceous chondrite parent bodies explored using Lewis Cliff 85311 | The Mighei-like meteorites are primitive rocks that are rich in water and organic molecules and so are particularly important for understanding the potential for volatiles to have been delivered to the terrestrial planets by asteroids and comets.CMs have spectroscopic affinities to the ∼13% of classified asteroids that belong to the C-complex.The C-complex includes B, C, Cb, Cg, Cgh and Ch types, which are most common in the outer part of the main asteroid belt, and ∼60% of them have spectroscopic signatures of hydrated silicates.Current exploration of the near-Earth asteroids Bennu and Ryugu will greatly enhance our understanding of the links between carbonaceous chondrites and their parent bodies.Bennu is a B-type asteroid with a hydrated surface and has spectroscopic affinities to the highly aqueously altered CMs.Reflectance spectra of the C-type asteroid Ryugu reveal abundant hydroxyl-bearing minerals, and its closest analogues are the thermally/shock metamorphosed carbonaceous chondrites.The nature and diversity of hydrated carbonaceous asteroids can also be explored by investigating variability within the CM meteorites as they are the most abundant group of carbonaceous chondrites.The CMs are typically composed of chondrules and refractory inclusions that are supported in a fine-grained matrix.The chondrules and refractory inclusions characteristically have fine-grained rims.All of the CMs have undergone parent body aqueous alteration.Phyllosilicates are the most abundant alteration product; they comprise 56–91 vol.% of the bulk rock and are the main constituent of the matrices and FGRs.The most reactive of the original components, namely mesostasis glass in chondrules, melilite in refractory inclusions, and amorphous material in the matrix and FGRs, is preserved only in the most mildly altered CMs including Yamato 791198, Paris, Elephant Morraine 96029 and Jbilet Winselwan.The variability in the degree of aqueous alteration of the CMs must reflect differences between parent body regions in properties such as: the initial ratio of anhydrous material to water ice; proximity to regions of fluid flow; the duration and/or temperature of alteration, which may in turn relate to depth in the body or the intensity/frequency of collisional processing.Members of the CM group could therefore have been sourced from different locations within a single parent body, or from two or more bodies that may have evolved in contrasting ways.Here we ask whether Lewis Cliff 85311 can help to define the extent of heterogeneity of a single CM parent body, or provide new insights into the diversity of multiple hydrated carbonaceous chondrite parent bodies with CM-affinities.This meteorite has come to our attention because its bulk chemical composition suggests that it is a mildly aqueously altered CM whereas its bulk oxygen isotopic composition indicates that it is an anomalous carbonaceous chondrite.We have therefore undertaken a petrographic, mineralogical, chemical and isotopic study of LEW 85311 with a focus on understanding the variety of materials that were accreted, and the nature of subsequent parent body processing.In tandem, we have analysed a suite of CMs in order to provide a benchmark for interpreting the results from LEW 85311.LEW 85311 was recovered by ANSMET in 1985, and is paired with LEW 85306, 85309 and 85312.It has a mass of 199.5 g, a weathering grade of Be and a shock stage of S1.LEW 85311 
is classified as CM2. This study used three polished thin sections (LEW 85311,31; LEW 85311,39; LEW 85311,90) and a 2.921 g chip. Fifteen CMs were also studied for comparison with LEW 85311. They are listed in Table 1, along with their petrologic type relative to three classification schemes. None of these CMs have undergone significant post-hydration heating. Each thin section was coated with a thin layer of carbon then studied using a Zeiss Sigma field-emission scanning electron microscope operated at high vacuum. The Sigma is equipped with an Oxford Instruments XMax silicon-drift energy-dispersive X-ray spectrometer operated through Oxford Instruments AZtec/INCA software. Backscattered electron images were obtained at 20 kV/∼1 nA. Point counting was undertaken by traversing the thin sections using the stepwise stage movement function. The apparent diameters of chondrules and refractory inclusions were determined from BSE images. The apparent diameter of these objects is expressed as (long axis + short axis)/2, where the short axis is the diameter of the chondrule/refractory inclusion taken midway along its length. Apparent FGR thickness was calculated as (apparent diameter of the rimmed object − apparent diameter of the enclosed object)/2. X-ray maps were acquired from entire thin sections and from individual objects at 20 kV/3 nA and a 1024 × 768 pixel resolution, with the spectra being processed using AZtec. Quantitative chemical analyses were acquired using the Sigma operated at 20 kV/2 nA, with beam currents monitored using a Faraday cup. Spectra were acquired for 60 seconds, and quantified using INCA software. Calibration used the following mineral standards, with typical detection limits in parentheses: Na, jadeite; Mg, periclase; Al, corundum; Si, diopside; P, apatite; S, pyrite; Cl, tugtupite; K, orthoclase feldspar; Ca, wollastonite; Ti, rutile; Cr, chromite; Mn, rhodonite; Fe, garnet; Ni, Ni metal. Co was not quantified owing to a peak overlap with Fe. Electron-transparent foils for transmission electron microscopy (TEM) and scanning transmission electron microscopy (STEM) were cut and extracted from selected parts of the thin sections using a FEI duomill Focused Ion Beam instrument operated using 30 kV Ga+ ions and over a range of beam currents during the milling process. Foils were initially milled to a thickness of ∼1 μm, then extracted using an in-situ micromanipulator and welded to the tines of a Cu grid using ion and electron beam deposited platinum. They were then milled to ∼100 nm thickness and loaded into a double-tilt goniometer holder. Bright-field images and selected area electron diffraction patterns were acquired using a FEI T20 TEM operated at 200 kV. High angle annular dark-field imaging and quantitative X-ray microanalysis used a JEOL ARM200F field-emission STEM operated at 200 keV. For the analyses the ARM was operated with a 182 pA probe current and spectra were acquired and processed using a Bruker 60 mm2 SDD EDX spectrometer operating Esprit V2.2 software. An ∼85 mg chip of LEW 85311 was powdered using an agate mortar and pestle. Approximately 50 mg of the powder was then packed into an aluminium sample well and analysed using an INEL X-ray diffractometer with a curved 120° position sensitive detector at the Natural History Museum, London. Copper Kα1 radiation was selected and XRD patterns were collected from the meteorite sample for 16 hours, throughout which time it was rotated. Standards of all minerals identified in LEW 85311 were analysed for 30 min. Modal mineral abundances were determined using a profile-stripping method that has now been applied to >30 CM chondrites. Briefly, the XRD pattern of each mineral standard was scaled to the same measurement time as the meteorite sample. The standard pattern was then reduced in intensity until it matched the intensity in the meteorite pattern, at which point it was subtracted to leave a residual pattern. After subtracting all of the mineral standards there were zero counts in the residual, and the fit factors were corrected for relative differences in X-ray absorption to give their final volume fractions in LEW 85311.
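The profile-stripping procedure just described can be illustrated with a short numerical sketch. This is a simplification for illustration: the published method scales and subtracts each standard pattern iteratively against the measured pattern, whereas the version below uses a simple least-squares scale factor, and the mineral names and absorption factors passed in are placeholders.

```python
import numpy as np

def profile_strip(meteorite_pattern, standard_patterns, absorption_factors):
    """Illustrative profile stripping: scale each mineral standard pattern
    (already normalised to the same counting time) so that it matches the
    meteorite pattern, subtract it to leave a residual, then correct the
    fit factors for relative X-ray absorption to obtain volume fractions."""
    residual = np.asarray(meteorite_pattern, dtype=float).copy()
    fit = {}
    for mineral, pattern in standard_patterns.items():
        pattern = np.asarray(pattern, dtype=float)
        scale = max(float(residual @ pattern) / float(pattern @ pattern), 0.0)
        fit[mineral] = scale
        residual -= scale * pattern
    corrected = {m: f / absorption_factors[m] for m, f in fit.items()}
    total = sum(corrected.values())
    vol_percent = {m: 100.0 * v / total for m, v in corrected.items()}
    return vol_percent, residual  # the residual should approach zero counts

# Synthetic two-phase example (approximate, since the sequential fit is order-dependent).
std = {"cronstedtite": np.array([1.0, 0.0, 2.0]), "olivine": np.array([0.0, 3.0, 1.0])}
mix = 0.6 * std["cronstedtite"] + 0.4 * std["olivine"]
print(profile_strip(mix, std, {"cronstedtite": 1.0, "olivine": 1.0})[0])
```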
Oxygen isotopic analysis of LEW 85311 was undertaken by infrared laser fluorination at the Open University. When analysing predominantly anhydrous samples it is normal procedure to reduce the system blank by flushing the chamber with at least two aliquots of BrF5, each held in the chamber for 20 min. However, for samples that contain a significant proportion of phyllosilicates, such as CMs, this protocol can be problematic as it may result in preferential reduction in the hydrated silicate fraction prior to the high temperature fluorination step. This is because phyllosilicates can react with BrF5 at low temperature during this blank reduction procedure. To minimise this problem, LEW 85311 was run in “single shot” mode, with only one standard and one aliquot of LEW 85311 loaded at a time. This involved a four-stage procedure: a 5 min BrF5 measured blank was run prior to the analysis of LEW 85311, a single ∼2 mg aliquot of the meteorite was then analysed, a second 2 min measured blank was conducted after this analysis and finally the isotopic composition of the internal obsidian standard was then measured. After fluorination, the O2 gas released was purified by passing it through two cryogenic nitrogen traps and over a bed of heated KBr. The isotopic composition of the purified oxygen gas was then analysed using a Thermo Fisher MAT 253 dual inlet mass spectrometer. Interference at m/z = 33 by NF+ was monitored by performing scans for NF2+ on the sample gas following initial analysis. As NF2+ was below interference levels no further sample treatment was required. Analytical precision for the Open University system, based on replicate analyses of an internal obsidian standard, is ±0.05‰ for δ17O; ±0.09‰ for δ18O; ±0.02‰ for Δ17O. Oxygen isotopic analyses for LEW 85311 are reported in standard δ notation, where δ18O has been calculated as: δ18O = [(18O/16O)sample/(18O/16O)reference − 1] × 1000 (‰), and similarly for δ17O using the 17O/16O ratio, the reference being Vienna Standard Mean Ocean Water (VSMOW). For the purposes of comparison with the results of Clayton and Mayeda, Δ17O, which represents the deviation from the terrestrial fractionation line, has been calculated as: Δ17O = δ17O − 0.52 × δ18O. Bulk samples of LEW 85311 and LEW 85312 have been chemically and isotopically analysed by Xiao and Lipschutz, Choe et al., Alexander et al., Friedrich et al. and Mahan et al. These datasets can be used to assess the affinity of LEW 85311 to other carbonaceous chondrite groups. The ratio of elements with different volatilities is taxonomically indicative, namely highly volatile (Zn) to moderately volatile (Mn) and refractory (Sc) elements. Using these ratios, Friedrich et al. place LEW 85311 within the range of 15 CMs that they had also analysed. However, data in Choe et al.
and average Sc/Mn and Zn/Mn values from all of the previous studies plot slightly outside of this range, although still much closer to the CMs than other carbonaceous chondrite groups. The CI-normalised elemental composition of LEW 85311 is comparable to Paris for elements with 50% condensation temperatures lower than that of Nb, whereas the more refractory elements and most REEs plot between Paris and the CV carbonaceous chondrites. The bulk oxygen isotopic composition of LEW 85311 was originally determined by Clayton and Mayeda. They classified LEW 85311 as an ungrouped carbonaceous chondrite on the basis of its “exceptional” oxygen isotope composition, but noted that “volatile and labile trace elements in LEW 85311 are typical of CM2 chondrites”. In addition, the matrix separate of LEW 85311 that was analyzed by Clayton and Mayeda plots within the CM2 field, providing evidence for a possible genetic link between LEW 85311 and the CMs. The bulk oxygen isotope analysis of LEW 85311 undertaken for the present study is richer in 16O than that of Clayton and Mayeda and plots well inside the CV-CK-CO field. The XRD pattern and modal mineralogy of LEW 85311,84 are comparable to the least altered CMs. It has a phyllosilicate fraction (PSF) of 0.67, corresponding to type 1.7 on the classification scheme of Howard et al. LEW 85311 stands out from those CMs whose modal mineralogy has been quantified using XRD by virtue of its relatively high abundance of Fe,Ni metal, scarcity of sulphide, absence of calcite, low PSF and high ratio of cronstedtite to Fe, Mg serpentine. The constituents of LEW 85311,39 as determined by SEM point counting are: matrix; FGRs; chondrules, chondrule fragments and refractory inclusions. Calcite-rich objects comprise 0.2 vol.%, in agreement with the low abundance of this mineral as revealed by XRD. A previous study of LEW 85311 found that it contains barred olivine, porphyritic olivine, porphyritic olivine–pyroxene and porphyritic pyroxene chondrules with an average apparent diameter of 190 μm. The 72 chondrules and chondrule fragments measured for the present study have an approximately log-normal distribution with an average apparent diameter of 213 ± 153 μm. Chondrule fragments comprise one or several grains of olivine, pyroxene and/or Fe,Ni metal. Both intact and fragmented chondrules have FGRs. Fayalitic olivine in type II chondrules is very similar in composition to olivine in type II chondrules from Paris and Acfer 094. Olivine and pyroxene grains in intact chondrules are typically pristine, whereas in a few cases grains of fayalitic olivine within chondrule fragments have been partially replaced by phyllosilicates along their contact with the enclosing FGR. No chondrule mesostasis glass has been preserved, and in its place are pores, porous arrays of pyroxene crystallites or phyllosilicates. Chondrules can also contain rounded pores tens of micrometres in diameter. Many type I chondrules contain ‘nuggets’ of kamacite, which have an average composition of Fe94.3Ni5.2Cr0.4. These kamacite grains have been partially or near-completely altered, and two compositionally and petrographically distinct types of alteration products are recognized: Fe-rich/S-rich, which form a narrow rim to kamacite grains and also penetrate into their interior; Fe-rich/S-poor, which occur as a concentrically laminated mantle and veins extending from the mantle into the FGR. The Fe-rich/S-rich material is dominated by Fe, Ni and S, and gives low analytical
totals.It is compositionally comparable to tochilinite that has formed by the alteration of kamacite in CM carbonaceous chondrites.LEW 85311 tochilinite is closer in composition to tochilinite in the mildly altered CMs than in the more highly altered meteorites.The Fe-rich/S-poor mantles yield low analytical totals suggesting significant concentrations of unanalyzed OH/H2O.The detection of Cl suggests that akaganéite is present.Akaganéite is a common weathering product of Antarctic iron meteorites and ordinary chondrites where it has the following compositional range: Fe, Ni; Cl; S.Most analyses of LEW 85311 Fe-rich/S-poor mantles are within the range of Antarctic akaganéite: Fe, Ni, Cl, S.The main difference to akaganéite in Buchwald and Clarke is a wider range of S, although only a few analyses have high S concentrations.The low Cl values probably reflect an intergrowth of akaganéite with Cl-free goethite.In order to assess the abundance, mineralogy and degree of preservation of refractory inclusions, those occurring in a 0.28 cm2 area of LEW 85311,90 were studied in detail.This area contains 41 inclusions.They have an apparent diameter of 20–402 μm and an aspect ratio of 1.2–6.5.The mineralogy and structure of all inclusions occurring in a 0.08 cm2 part of the mapped region was determined.This area contains 15 inclusions.Seven of them are dominated by spinel and pyroxene, four by spinel and perovskite, and each of the others contains one refractory mineral.Most of the inclusions are porous and contain phyllosilicates; the gehlenite grain has been partially replaced by coarse grained phyllosilicate.Among the 15 inclusions are five morphological types: five banded, three nodular, two simple, three simple distended and two complex distended.In addition to occurring in LEW 85311,90, gehlenite is present in a refractory inclusion in LEW 85311,39, where it is accompanied by spinel, perovskite, pyroxene and Ca-carbonate.Its empirical formula is Ca2.0Al1.5Mg0.2Si1.2O7.The presence of pores surrounding the gehlenite, and the occurrence of small relict gehlenite grains suggests that it has survived partial dissolution.Melilite-bearing refractory inclusions were also described from LEW 85311 by Simon et al.They found two inclusions where melilite mantles hibonite, and this relationship was interpreted to show that the melilite formed earlier.Although most LEW 85311refractory inclusions are comparable in mineralogy and morphology to those in the CMs, the LEW 85311,31 thin section contains a rare forsterite chondrule enclosing a simple spinel-perovskite refractory inclusion.Reaction of forsterite with spinel has formed a high-Z material that is rich in O, Al, Si, Ca and Ti.Most LEW 85311 Ca-carbonate occurs as meshworks of needle-fibre crystals, which are present in all three thin sections and most abundant in LEW 85311,39.This Ca-carbonate occurs in two petrographic contexts: small patches within chondrules and chondrule fragments; the main constituent of relatively large rounded, irregular or elongate objects that have a FGR.The needle-fibre crystals were identified as calcite by SAED.On average they are ∼18 μm in length by 2 μm in width.Individual fibres are aggregates of ∼1 μm wide acicular crystals.The fibres may be straight or curved, sometimes to such an extent that they are almost circular.Most fibres are oriented randomly relative to each other, although those at the edges of the meshworks may be aligned with their long axes roughly parallel to each other in a radiating or palisade 
structure.TEM shows that the fibres have a low defect density and no inclusions.In almost all objects the needle-fibre calcite is associated with one or more Fe- and Cr-rich minerals.The kamacite grains are usually rimmed or cross-cut by Fe-rich/S-rich alteration products whose composition is similar to tochilinite after kamacite.The average formula for the P-rich sulphides is Fe2.0Ni2.2S3.2P0.8, which is similar to P-rich sulphides elsewhere in LEW 85311 that were analysed by Nazarov et al.Fe-sulphide, schreibersite and eskolaite crystals tend to be present at the margins of the meshworks, whereas the tochilinite and P-rich sulphide occur within them.Two Fe-rich veins occur in LEW 85311,31.One is 500 μm in length by 15 μm in width and has a FGR.It contains grains of kamacite, Fe-sulphide and schreibersite in a matrix of a Fe-rich/S-rich material that is comparable in composition to tochilinite after kamacite in LEW 85311 chondrules.The other vein is partly wrapped around a chondrule fragment and is composed solely of tochilinite and P-rich sulphide.LEW 85311 matrix and FGRs yield low analytical totals that are consistent with a phyllosilicate-rich mineralogy.Both components plot within the ‘serpentine field’ of a ternary diagram, and specifically between end-member serpentine, and LEW 85311 cronstedtite and tochilinite-cronstedtite intergrowths.The matrix and FGRs have a lower Mg/ than the bulk meteorite.Comparison of LEW 85311 matrix and FGR compositions with 13 CMs also analysed for the present study show a comparable pattern of depletion and enrichment, with LEW 85311 being compositionally closest to those CMs that have been mildly aqueously altered.The MgO/FeO ratio of CM matrices is informative about CM parent body history as it increases with progressive aqueous alteration.The value for LEW 85311 is lower than the CM2.7 meteorite Paris.The apparent thickness of FGRs on chondrules and chondrule fragments ranges from 15 to 101 μm, and is positively correlated with their diameter.FGRs have a sharp contact with the matrix and are distinguished from it by a lower Fe/Si ratio and a finer and more homogeneous grain size.Despite being more compact than the matrix, FGRs characteristically contain fractures and micropores.Fractures are typically oriented normal to the outer edge of the host object and pinch out towards the matrix.Micropores are an average of 6 μm in size and some have a negative crystal shape.They can occur throughout the FGR, but are usually more abundant closer to the matrix.TEM shows that the FGRs contain silicate and sulphide mineral grains of a range of sizes, between which are cylindrical TCI and serpentine crystals.The matrix contains fine silicate and sulphide grains together with clumps of phyllosilicates and rare grains of Ca-carbonate.Particles ∼1 μm in size also occur in the matrix that are similar in size and shape to amorphous domains in Y-791198 and GEMS grains in Paris; they are thus likely to be composed of amorphous silicate, sulphide and metal.Locally developed petrofabrics are evidence for mild ductile compaction of the matrix.TEM and STEM images show that the matrix contains platy and entangled cylindrical crystals.The platy crystals have a chemical composition close to that of cronstedtite in Paris, and such a mineralogy is consistent with their ∼0.7 nm lattice fringe spacing.The cylindrical crystals are intermediate in composition between cronstedtite and tochilinite and so are inferred to comprise an intergrowth of both minerals.Although TCI has been 
identified within the matrix by TEM, the tens of micrometer size TCI clumps that characterize CM matrices are absent.Our petrographic, mineralogical, chemical and isotopic results show that LEW 85311 has both similarities and differences to the CMs.Below we describe the geological history of LEW 85311 and evaluate its affinity to the CM group through a discussion of the material that was accreted, and its subsequent mineralogical and geochemical evolution.We then consider the implications of these results for understanding the nature of its parent body, and specifically whether LEW 85311 comes from the same body as other members of the CM group, or is a piece of a different, and maybe previously unsampled, hydrated C-complex asteroid.The LEW 85311 parent body was formed by accretion of coarse-grained objects, a fine-grained matrix, and probably also water-rich ice.The origin of FGRs on the coarse-grained objects has been debated, with three processes being proposed: accretion of dust within the solar nebula; impact compaction of fine-grained matrix material around chondrules and other objects within the parent body; aqueous alteration of the host object.Clues to the origin of LEW 85311 FGRs come from their structure and composition.They have a sharp outer edge, which separates the compact and equigranular rim material from the more porous and mineralogically heterogeneous matrix.The FGRs also differ in chemical composition to the matrix, and their apparent thicknesses correlates well with the apparent diameter of their host objects.We propose that these properties are inconsistent with an origin by impact compaction, and best explained by accretion in the solar nebula.This conclusion agrees with results of an X-ray tomography study of Murchison FGRs by Hanna and Ketcham.They also observed that the FGRs are compositionally uniform across different chondrule types, thus arguing against an origin by aqueous alteration of their more compositionally variable host objects.Chondrules, chondrule fragments and refractory inclusions therefore had FGRs when they were incorporated into the LEW 85311 parent body.Relative to eight CMs analysed for the present study, LEW 85311 contains the second lowest proportion of matrix and highest proportion of FGRs.It thus somewhat resembles the ‘primary accretionary rock’ lithology that characterises unbrecciated CMs.The proportion of the meteorite that consists of chondrules, chondrule fragments and refractory inclusions is close to the CM average.The size distribution of LEW 85311 chondrules and chondrule fragments can potentially provide valuable information about this meteorite’s relationship to carbonaceous chondrite groups.For example, chondrules in CV and CK meteorites have a larger apparent diameter than those in COs and CMs.However, the only data on CM chondrule sizes that was available to Friedrich et al. was an analysis of 100 chondrules in Murray, which have an average apparent diameter of 270 ± 240 μm.The size distribution of chondrules and chondrule fragments has recently been determined for Jbilet Winselwan.Two different lithologies were studied, whose chondrules and chondrule fragments were of a similar apparent size.Friend et al. 
suggested that the difference between Jbilet Winselwan and Murray was because Rubin and Wasson did not measure chondrule fragments.The apparent diameter of LEW 85311 chondrules and chondrule fragments falls between Jbilet Winselwan and Murray, and is much smaller than the CVs and CKs.In the absence of a larger dataset of the size of CM chondrules and chondrule fragments, the LEW 85311 measurements cannot be used to rigorously test the meteorite’s relationship to the CMs.FGRs in LEW 85311 have an average apparent thickness of 37 μm, which is thinner than Jbilet Winselwan.However, the slope of the correlation between chondrule apparent diameter and FGR apparent thickness is similar between the two meteorites, implying comparable conditions of rim formation.LEW 85311 refractory inclusions are similar in mineralogy to those that have been described from the CMs.They are however smaller on average: 105 ± 88 μm in LEW 85311 versus 130 ± 90 μm and 130 ± 80 μm for the mildly altered CMs QUE 97,990 and Paris, respectively.LEW 85311 refractory inclusions are considerably more abundant than in the other two CMs.This high density of refractory inclusions is the most likely explanation for refractory element and REEs composition of LEW 85311, which falls between Paris and the CVs.As refractory inclusions are 16O-rich, their unusually high concentration within LEW 85311 may also explain why its bulk oxygen isotopic composition plots within the CV-CO-CK field.The difference between our analysis and the bulk measurement by Clayton and Mayeda may simply reflect intra-meteorite heterogeneity in the abundance of refractory inclusions.The low degree of aqueous alteration and commensurately low volume of matrix of LEW 85311 could also contribute to its bulk oxygen isotopic composition because the meteorite’s fine grained matrix has a considerably higher δ17O and δ18O value than its chondrules and refractory inclusions.The close similarity in volatile element compositions between LEW 85311 and Paris shows that the two meteorites sample parent body regions that accreted a similar range of volatile-bearing components, and that the nature of any fractionation during subsequent parent body processing was also comparable.By analogy with the CMs, post-accretionary processing of the LEW 85311 parent body could have included one or more of: shock metamorphism; brecciation; aqueous alteration; post-hydration heating.After falling to Earth the meteorite may also have been weathered.The S1 shock stage of LEW 85311 demonstrates that it has not experienced pressures of >4–5 GPa so that its localized and mild petrofabric can be explained by compaction accompanying one or more low intensity impacts.None of the three thin sections contains clasts, thus showing that these impacts were sufficiently gentle that there was no brecciation or mixing.The intact crystal structures of tochilinite and serpentine constrain the intensity of heating that LEW 85311 could have experienced after aqueous alteration.Estimates of the temperature at which tochilinite breaks down range from 120 °C to 300 °C, and serpentine starts to degrade at ∼300 °C.Kimura et al. 
used metal composition and sulphide texture to explore heating of the CMs.They placed LEW 85311 in their group “A”, showing that it had not been significantly heated before/during aqueous alteration, which would have affected the Fe, Ni metal, or after aqueous alteration, which would have been identifiable by the presence of pentlandite lamellae/blebs in pyrrhotite.This lack of evidence for heating is also consistent with the unshocked and unbrecciated nature of LEW 85311 showing that it did not experience high impact temperatures.We therefore conclude that water-mediated alteration was the main agent of post-accretionary processing of LEW 85311.The principal evidence for aqueous alteration of LEW 85311 is the dissolution of some original components, and precipitation of new phases.Here we evaluate the nature of the geochemical reactions, the environments of water/rock interaction, and similarities to the CMs.Dissolution of chondrule mesostasis glass has left pore spaces between phenocrysts, whereas the relatively large and rounded voids within chondrules may have formed by dissolution of Fe, Ni metal nuggets.The irregular/faceted pores within FGRs were most likely produced by dissolution of one or more primary minerals, and the greater abundance of these pores in outer parts of the rims may indicate that fluids responsible were hosted in the matrix.There is no evidence to determine the environment of chondrule and FGR dissolution.Serpentine, cronstedtite, tochilinite and TCI were identified by XRD and TEM, and are characteristic products of CM parent body aqueous alteration.Where they occur in the matrix and FGRs, these minerals are assumed to have formed at the expense of fine-grained silicate, sulphide and metal, and probably also ∼1 μm size GEMS-like particles.The absence of large TCI clumps distinguishes LEW 85311 from the moderately to highly altered CMs.Evidence for replacement of coarser grained silicates by phyllosilicates is restricted to occasional grains of fayalitic olivine and one crystal of gehlenite.By contrast, nuggets and veins of kamacite have been extensively replaced by tochilinite, P-rich sulphide and schreibersite.Alteration of kamacite to sulphides was a common reaction during the initial stages of aqueous processing of the CMs within geochemical environments with a high S activity.Overall, the processes and products of aqueous alteration of LEW 85311 are closely comparable to ‘Stage 1’ of the four-stage CM aqueous alteration sequence that was described by Hanowski and Brearley and refined by Velbel et al.Thus the geochemical environment within the LEW 85311 parent body was similar to the mildly altered CMs.By analogy with Antarctic ordinary chondrites, the akaganéite-bearing mantles to kamacite grains are interpreted to have formed by terrestrial weathering.Thus, LEW 85311 has been modified by reaction with water both pre- and post-terrestrially, a combination of processes that is common to the CM Elephant Moraine 96029, and probably many other Antarctic carbonaceous chondrites.Ca-carbonate is rare in LEW 85311.It was not detected by XRD, and Alexander et al.
recorded 0.3 wt.% carbonate-hosted carbon in bulk samples of both LEW 85311 and LEW 85312; this value is lower than 42 of the 43 unheated CMs analysed by Alexander et al.Ca-carbonate occurs in three contexts: within a gehlenite-bearing refractory inclusion; scarce small crystals within the matrix; meshworks of needle-fibre calcite.By analogy with calcite in refractory inclusions from the CM falls Murchison and Murray, Ca-carbonate in the LEW 85311 refractory inclusion is interpreted to have formed by parent body aqueous alteration.This conclusion is consistent with experimental results showing that gehlenite alters to calcite when it reacts with carbonate-rich solutions under hydrothermal conditions.As the matrix-hosted Ca-carbonate has been partially replaced by phyllosilicates, in common with calcite in the matrices of CM falls, it is likewise interpreted to be the product of parent body processing.Needle-fibre calcite could have formed on the parent body or by terrestrial weathering, and these two environments can be distinguished using its C and O isotopic composition.Given that the needle-fibre calcite is by far the most abundant type of Ca-carbonate in each of the three LEW 85311 thin sections, its isotopic composition should be close to that of bulk Ca-carbonate in LEW 85311 and LEW 85312.In Supplementary Fig. A4 these values are plotted along with the 43 CMs in Alexander et al., terrestrial calcite in an Antarctic CM, and terrestrial Ca-carbonate from Antarctic ordinary chondrites.The carbon isotopic composition of bulk LEW 85311 and LEW 85312 Ca-carbonate is very different to the Antarctic weathering products, and so the needle-fibre calcite is interpreted to have formed on the LEW 85311 parent body.The LEW 85311 and 85312 data plot towards the low δ13C and δ18O end of the trend defined by the CMs in Supplementary Fig. A4.Alexander et al. 
interpreted this trend to reflect differences in temperature of the carbonate-precipitating fluids, thus implying that LEW 85311 and LEW 85312 Ca-carbonate would have formed in the higher temperature part of the range.Despite being isotopically consistent with the CMs, LEW 85311 needle-fibre calcite is distinct in its petrographic context, and crystal size and shape.CM calcite occurs most commonly in the matrix as relatively coarse and equant grains, suggestive of slow rates of crystal growth from fluids of a low degree of supersaturation.Conversely, the small size of the needle-fibre crystals and their random orientations relative to each other imply rapid crystal growth from highly supersaturated solutions.Terrestrial calcite that is very similar in crystal size and shape is termed ‘moonmilk’; it occurs in caves and burial tombs and forms by evaporation.We therefore propose that LEW 85311 needle-fibre calcite is also a parent body evaporite.The consistent association of needle-fibre calcite with kamacite and its alteration products suggests that the metal was aqueously altered prior to carbonate precipitation, making space for the calcite.These kamacite grains must have been relatively large and free-floating in the solar nebula so that they accreted FGRs.Most of the calcite-rich objects are rounded, but several have a shard-like shape suggestive of brittle breakage of kamacite prior to accretion of the FGR.The geochemical conditions under which kamacite alteration would associate with calcite precipitation are unknown.However, the common intergrowth of calcite with TCI in CM matrices is evidence for a genetic link between carbonate and sulphide precipitation in carbonaceous chondrite parent bodies.LEW 85311 has been mildly altered when assessed relative to the mineralogical and geochemical criteria that are used for the CMs.Kimura et al. found that it has a “low” degree of aqueous alteration.LEW 85311 and its pair LEW 85312 have been assigned to petrologic types of 1.9/1.8 and 1.7.Relative to the petrologic subtypes of Rubin et al., LEW has been tentatively assigned to CM2.6–2.7 and to CM2.3.These two different subtype assignments can be tested by comparison with the petrologic types of LEW 85311.There is a good linear correlation between the three classification schemes for meteorites of petrologic subtypes 2.0–2.6.However, as no meteorites classified at CM2.7 or higher on the Rubin et al. scheme have also been measured by Alexander et al. or Howard et al., the nature of the correlation lines at lower degrees of aqueous alteration cannot be determined.Nonetheless, as a petrologic subtype of 3.0 on the Rubin et al. scale should equal a petrologic type of 3.0 on the other two scales, the regression lines in Supplementary Fig. A5 can be extrapolated between CM2.6 and CM3.0.Using these lines, a petrologic type of 1.9–1.8 corresponds to ∼CM2.7–CM2.6, and 1.7 corresponds to ∼CM2.6.Taking the two comparisons together suggests a subtype of CM2.7–CM2.6 for LEW 85311, in agreement with Choe et al.It is difficult to assign LEW 85311 to a petrologic subtype directly because the meteorite lacks large “PCP clumps”, whose abundance and chemical composition are key metrics in the Rubin et al. scheme.The other criteria of Rubin et al.
are consistent with a subtype of CM2.6 or higher for LEW 85311 apart from the abundance of Fe, Ni metal.They propose that meteorites of subtype CM2.5 and CM2.6 should contain 0.03–0.3 vol.% and ∼1 vol.% metallic Fe-Ni, respectively.As XRD shows that LEW 85311 contains 0.3 vol.% metal, a ∼CM2.5 classification is suggested.However, the only meteorite classified by Rubin et al. as CM2.6 was QUE 97990, which contains 0.2 vol.% Fe,Ni metal as quantified by XRD.Furthermore, the 0.3 vol.% Fe,Ni metal is equal to its abundance in EET 96029, and only one of the 36 CMs plotted in Fig. 3 is richer in metal.Therefore by using XRD-determined metal abundance alone, LEW 85311 is less altered than QUE 97990 and so should be assigned to a subtype of CM2.6 or higher.The lack of chondrule glass that is preserved in EET 96029 and Paris argues against LEW 85311 being significantly more pristine than these two CMs, and so we conclude overall that it has a petrologic subtype of CM2.7.A variety of other properties of LEW 85311 are comparable to those of the most weakly aqueously altered CMs, which further demonstrates that LEW 85311 evolved in a comparable manner.Mild alteration is consistent with the preservation of gehlenite, which is highly reactive in the presence of liquid water.LEW 85311 has a low phyllosilicate fraction, showing that alteration stopped early thereby preserving much of the original olivine and pyroxene.XRD also shows a high ratio of cronstedtite to Mg,Fe serpentine.Little of the early formed cronstedtite had recrystallized to Mg-serpentine because the main sources of magnesium needed for this reaction are Mg-rich olivine and pyroxene, which are more resistant to parent body aqueous alteration than their Fe-rich equivalents.The Mg/ of tochilinite after kamacite in LEW 85311 is lower than the mildly altered CMs Murchison and Murray, again because relatively little Mg had been liberated from Mg-rich olivine and pyroxene that would otherwise have been available to increase its Mg/.This limited supply of Mg during aqueous alteration is also expressed by the MgO/FeO value of LEW 85311 matrix, which is significantly lower than that of Paris.Finally, data in Nazarov et al. show a relationship between the chemical composition of P-rich sulphides and degree of aqueous alteration of their host meteorite, and accordingly LEW 85311 P-rich sulphides have a composition similar to those in the most mildly altered CMs.Therefore, multiple proxies agree that LEW 85311 has been mildly altered, and its geochemical and mineralogical evolution followed a trajectory closely comparable to the most weakly hydrated CMs.This meteorite has many similarities to the CMs, but also some important differences that have prompted us to question its link to the group.The CMs are homogeneous in chemical composition, with any small differences between meteorites being potentially attributable to terrestrial weathering.This compositional homogeneity has been interpreted to demonstrate that the CM3 starting material was chemically uniform and aqueous alteration was isochemical so that water-soluble elements were not leached.Rubin et al.
concluded that the compositional homogeneity of the CMs indicates a single parent body for the group.LEW 85311 sits outside of the narrow range of CM chemical compositions.We contend that this divergence from the CMs is due to a different starting material that was isochemically altered.An alternative explanation is that this meteorite was initially chemically similar to other CMs but changed in composition during open system aqueous alteration.However, in such a scenario LEW 85311 would be expected to differ from the CMs in its water-soluble elements rather than the most highly refractory elements as is observed.Moreover, Mg should have been selectively leached, yet the chemical compositions of Mg-bearing alteration products are consistent with mildly altered CMs.The oxygen isotopic compositions of the CMs are quite variable, in part due to contrasting degrees of aqueous alteration and in a few cases also post-hydration heating.LEW 85311 falls outside of the CM range, but its unusual isotope values can again be explained by abundant refractory inclusions.LEW 85311 is almost indistinguishable from the mildly altered CMs with regards to the processes and products of aqueous alteration.These alteration products also include tochilinite, which is a mineralogical ‘hallmark’ of the CMs.The only evidence for a difference in alteration between LEW 85311 and the CMs is the presence of needle-fibre calcite, which may have formed by evaporation of relatively high temperature parent body fluids, and the absence from the matrix of TCI ‘clumps’.Despite these differences, LEW 85311 is interpreted to have originally contained a similar proportion of water ice to many of the CMs, then was heated and remained warm for a sufficient length of time to be mildly aqueously altered.NWA 5958 is a hydrated carbonaceous chondrite with some similarities to LEW 85311.It has been classified as C2-ung and its bulk oxygen isotopic composition plots within the CV-CO-CK field close to one of the analyses of LEW 85311.However there is no petrographic or geochemical evidence for an elevated abundance of refractory inclusions that could otherwise account for this 16O-rich composition.Jacquet et al. 
found that NWA 5958 is moderately aqueously altered relative to the CMs, and its infra-red spectrum closely resembles that of LEW 85311, especially in the 10 μm region.The similarities between NWA 5958 and LEW 85311 require further investigation, and one intriguing suggestion is that along with other ‘anomalous’ meteorites, NWA 5958 and LEW 85311 may be members of a new group of ‘primitive’ carbonaceous chondrites with CM affinities.We set out to ask whether LEW 85311 can provide new insights into the diversity of hydrated C-complex meteorite parent bodies.Our work has shown that the material accreted to form LEW 85311 was subtly different to the CMs, particularly with regards to the abundance of refractory inclusions, yet this meteorite’s subsequent mineralogical and geochemical evolution was very similar to other members of the group.We therefore conclude that the CM classification is appropriate, and that the diversity within the group shows that the CMs sample two or more parent bodies that accreted in a similar part of the solar nebula, and grew to a comparable size over a common timescale so that they had an equivalent geological evolution.We anticipate that the forthcoming return of samples from the ‘rubble pile’ asteroids Ryugu and Bennu will greatly enhance our understanding of the nature and diversity of C-complex asteroids.LEW 85311 is a CM carbonaceous chondrite, and most of its properties are consistent with other members of the group.It is composed of rimmed chondrules, chondrule fragments and refractory inclusions supported within a fine-grained matrix.The chondrules and chondrule fragments are similar in size to those of the CMs, although their FGRs are relatively thin.LEW 85311 has undergone parent body aqueous processing, and many of the alteration products are indistinguishable in mineralogy and chemical composition from those of the most mildly altered CMs.However, there is evidence for a distinctive geochemical environment during aqueous alteration of LEW 85311 from the presence of meshworks of needle-fibre calcite within objects that were previously rich in kamacite, and the absence of TCI clumps.LEW 85311 is comparable to the CMs in its volatile element composition, but differs in its refractory element, REE and bulk oxygen isotopic composition owing to a high abundance of refractory inclusions.It also accreted an unusually low proportion of water ice, which limited the degree of aqueous alteration.LEW 85311 shows that the CM group samples more than one parent body. | Lewis Cliff (LEW) 85311 is classified as a Mighei-like (CM) carbonaceous chondrite, yet it has some unusual properties that highlight an unrealised diversity within the CMs, and also raises the question of how many parent bodies are sampled by the group. This meteorite is composed of rimmed chondrules, chondrule fragments and refractory inclusions that are set in a fine-grained phyllosilicate-rich matrix. The chondrules are of a similar size to those in the CMs, and have narrow fine-grained rims. LEW 85311 has been mildly aqueously altered, as evidenced by the preservation of melilite and kamacite, and X-ray diffraction results showing a low phyllosilicate fraction and a high ratio of cronstedtite to Fe, Mg serpentine. The chemical composition of LEW 85311 matrix, fine-grained rims, tochilinite and P-rich sulphides is similar to mildly aqueously altered CMs.
LEW 85311 is enriched in refractory elements and REEs such that its CI-normalised profile falls between the CMs and CVs, and its oxygen isotopic composition plots in the CV-CK-CO field. Other distinctive properties of this meteorite include the presence of abundant refractory inclusions, and hundreds of micrometer size objects composed of needle-fibre calcite. LEW 85311 could come from part of a single CM parent body that was unusually rich in refractory inclusions, but more likely samples a different parent body to most other members of the group that accreted a subtly different mixture of materials. The mineralogical and geochemical evolution of LEW 85311 during subsequent aqueous alteration was similar to other CMs and was arrested at an early stage, corresponding to a petrologic subtype of CM2.7, probably due to an unusually low proportion of accreted ice. The CM carbonaceous chondrites sample multiple parent bodies whose similar size and inventory of accreted materials, including radiogenic isotopes, led to a comparable post-accretionary evolution. |
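The delta-notation and Δ17O formulas quoted in the entry above amount to a few lines of arithmetic. The following minimal Python sketch is illustrative only: the sample isotope ratios are hypothetical placeholders rather than measured LEW 85311 values, and the VSMOW reference ratios are rounded literature values.

# Minimal sketch of the delta-notation calculation described above.
# Sample ratios are hypothetical; VSMOW reference ratios are rounded literature values.

R18_VSMOW = 2005.2e-6   # approximate (18O/16O) of Vienna Standard Mean Ocean Water
R17_VSMOW = 379.9e-6    # approximate (17O/16O) of Vienna Standard Mean Ocean Water

def delta(r_sample: float, r_reference: float) -> float:
    """Delta value in per mil: (R_sample / R_reference - 1) x 1000."""
    return (r_sample / r_reference - 1.0) * 1000.0

def cap_delta17(d17: float, d18: float, slope: float = 0.52) -> float:
    """Deviation from the terrestrial fractionation line (Delta17O = d17O - 0.52 x d18O)."""
    return d17 - slope * d18

# Hypothetical measured ratios for a single aliquot
r18_sample = 2009.3e-6
r17_sample = 380.6e-6

d18O = delta(r18_sample, R18_VSMOW)
d17O = delta(r17_sample, R17_VSMOW)
print(f"d18O = {d18O:.2f} permil, d17O = {d17O:.2f} permil, "
      f"D17O = {cap_delta17(d17O, d18O):.2f} permil")

Applying cap_delta17 to measured δ17O and δ18O values reproduces the Δ17O = δ17O − 0.52 × δ18O convention used above for comparison with Clayton and Mayeda.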
31,423 | Dataset on statistical reduction of highly water-soluble Cr (VI) into Cr (III) using RSM | This dataset contains 3 Tables and 5 Figures that represent the statistical optimization of the electrocoagulation process for reduction of Cr to Cr from synthetic wastewater in batch mode of operation using BBD.A total of 15 batch experiments, including three centre points, were carried out in triplicate using statistically designed experiments.The results are shown in Tables 1–3a and 3b.The suitability of the selected model to provide adequate approximation of the real system is also confirmed by the diagnostic plots.Such plots include normal probability plots, residuals versus predicted and the predicted versus actual value plot.The 3D graphs were plotted to identify the optimized reaction conditions and to understand the individual effects of pH, voltage and time for efficient conversion of Cr to Cr.A stock solution of Cr was prepared by dissolving potassium dichromate in distilled water.Sodium hydroxide and sulphuric acid were used to adjust the pH of the solution.Potassium permanganate, sodium azide and 1,5-diphenylcarbazide were used for analysis of the chromium present in synthetic solution .A 95 mL aliquot of Cr solution was tested in a 100 mL volumetric flask.The pH of the sample was maintained below 2 by adding 2 drops of concentrated H2SO4, and then 2 drops of phosphoric acid were added.Then 2 mL of 1,5-diphenylcarbazide was added to the solution, mixed thoroughly and left for 5–10 min for full-color development.After full color development an appropriate amount of the solution was transferred into a 3 mm quartz cell and its absorbance was measured at 540 nm using a UV–vis double beam spectrophotometer .A generic sketch of converting such absorbance readings into a percentage reduction is appended after this entry.The electrolytic cell consists of a glass beaker of 400 mL capacity.Aluminum and iron sheets were used as electrodes.The electrode distance between anode and cathode was maintained constant at 1.5 cm during electrolysis.A direct current was supplied by a DC power source.Agitation was provided to maintain a uniform concentration inside the cell.A stock solution of Cr was prepared by dissolving an appropriate amount of potassium dichromate in distilled water.All the experiments were carried out under potentiostatic conditions at room temperature.The pH of the solution was adjusted using either dilute HCl or NaOH.After each experiment the samples were collected and analyzed for Cr using the 1,5-diphenylcarbazide method.A Box–Behnken design was established with the help of the Design Expert 11 software for statistical design of experiment and data analysis.The three significant process variables considered in this study were: voltage, time and pH as shown in Table 1.The total number of experiments in this study was 15, including three center points that were carried out in triplicate for the estimation of error.The observed and predicted results for each set of reaction parameters are given in Table 2.A quadratic polynomial equation using Design Expert software was fitted to the experimental data obtained according to the Box–Behnken design.Normality plots are illustrated in Fig. 1 for the aluminum and iron electrodes.Fig. 1 shows that the normality assumption is clearly satisfied, with a reduction of 95% which is close to the result obtained by EC experiments, given as a straight line .The actual and the predicted results of the EC process using aluminum and iron electrodes are shown in Fig. 2.
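The quadratic polynomial fit described above was performed in Design Expert, but the same second-order response-surface model can be reproduced with ordinary least squares. The sketch below is a hedged illustration, not the authors' script: it assumes the 15 Box–Behnken runs have been exported to a CSV file named bbd_runs.csv with placeholder column names pH, voltage, time and reduction, and that pandas and statsmodels are available.

# Sketch of a quadratic (second-order) response-surface fit to Box-Behnken data.
# File name and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

runs = pd.read_csv("bbd_runs.csv")  # columns: pH, voltage, time, reduction (assumed)

# Full quadratic model: linear, two-factor interaction and squared terms
model = smf.ols(
    "reduction ~ pH + voltage + time"
    " + pH:voltage + pH:time + voltage:time"
    " + I(pH**2) + I(voltage**2) + I(time**2)",
    data=runs,
).fit()

print(model.summary())                   # coefficients, R2 and p-values, analogous to the ANOVA tables
runs["predicted"] = model.fittedvalues   # predicted vs actual, as plotted in Fig. 2
print(runs[["reduction", "predicted"]])

The fitted summary provides the coefficient estimates and significance tests that correspond to the ANOVA tables, and the fitted values give the predicted-versus-actual comparison shown in Fig. 2.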
Actual values are the measured response data for a particular run, and the predicted values are evaluated from the model and generated by using the approximating function.It is seen in Fig. 2 that the data points lie close to the diagonal line and the developed model is adequate for the prediction of each response.ANOVA studies are presented in Tables 3a and 3b.3D plots suggested time and current as the dominant process parameters for reduction of Cr to Cr. | With its excellent solubility, mobility, bioaccumulation and carcinogenicity, hexavalent chromium Cr (VI) widely exists in various industrial effluents such as chrome plating, metal finishing, pigments, and tanning. Cr (VI) is one of the toxic metal pollutants among all the heavy metals. Therefore, the purpose of this work was to convert highly water-soluble Cr (VI) into Cr (III) species using the electrocoagulation (EC) process. The Box–Behnken design (BBD) was applied to investigate the effects of major operating variables and optimization conditions. The predicted values of the responses obtained using the model agreed well with the experimental data. This work demonstrated that the Cr (VI) is entirely converted into Cr (III) in solid phases in the electrocoagulation process. It was also found that reduction increased with current density, suggesting that the reduction efficiency is closely related to the generation of floc.
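The 1,5-diphenylcarbazide procedure described in this entry yields absorbance readings at 540 nm that must be converted to Cr (VI) concentrations before a percentage reduction can be reported. Neither the calibration constants nor the efficiency formula are given above, so the sketch below uses invented calibration data and the conventional removal-efficiency definition purely for illustration.

# Hypothetical sketch: absorbance-to-concentration calibration and percentage reduction of Cr(VI).
# Calibration points, readings and dilution factors are invented for illustration.
import numpy as np

# Calibration standards: known Cr(VI) concentrations (mg/L) vs absorbance at 540 nm
conc_std = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
abs_std = np.array([0.00, 0.16, 0.33, 0.48, 0.65, 0.81])

slope, intercept = np.polyfit(abs_std, conc_std, 1)  # linear fit within the Beer-Lambert range

def absorbance_to_conc(a540: float, dilution: float = 1.0) -> float:
    """Convert a 540 nm absorbance reading to a Cr(VI) concentration in mg/L."""
    return (slope * a540 + intercept) * dilution

c_initial = absorbance_to_conc(0.72, dilution=100)  # before electrocoagulation (hypothetical)
c_final = absorbance_to_conc(0.05, dilution=100)    # after a given run (hypothetical)

reduction_pct = (c_initial - c_final) / c_initial * 100.0  # conventional removal-efficiency definition
print(f"Cr(VI) reduction: {reduction_pct:.1f} %")

In practice the calibration standards, dilution factors and blank corrections would be those of the actual protocol rather than the placeholder numbers used here.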
31,424 | In situ pPy-modification of chitosan porous membrane from mussel shell as a cardiac patch to repair myocardial infarction | Myocardial infarction is the major cause of mortality amongst cardiovascular diseases worldwide .Following MI, the limited regenerative potential of the heart causes scar formation in and around the infarcted region, often accompanied by abnormal electric signal propagation and desynchronized cardiac contraction, which finally leads to heart failure .The lack of electric connection between healthy myocardium and the scar sites results in most of the progressive functional decompensation .The engineered electrical cardiac patches have proven to be a promising alternative biological treatment to create functional cardiac syncytium, replace dysfunctional parts of damaged myocardial tissue, reduce adverse remodeling and preserve cardiac function .The prerequisites for successful myocardial repair of the implanted ECPs are electrically coupling with host tissue, triggering propagation of electrical impulses throughout the heart, participating in the synchronous contraction of the whole heart, and the assembly of cardiomyocytes into a functional syncytium .The electrically active scaffold is particularly beneficial to improve cardiomyocytes function by elevating connexin 43 expression, which help to regulate cell-cell communication, to increase electrical coupling, and to promote contractile behavior .Conducting polymers such as polypyrrole , carbon nanotube , polyaniline and poly , exhibit many excellent properties such as biocompatibility, conductivity and redox stability.Among these, pPy is one of the most extensively studied conductive polymer in the application of tissue engineering scaffolds due to its easy preparation, better biocompatibility, inherent electrical conductivity, controllability of surface biochemical properties and suitable hydrophobicity for cell adhesion .It has been demonstrated that pPy could sustain the electrophysiological maturation and functionality of CMs with less cytotoxicity and its conductivity could also guide the regular beating of the heart, which are hardly achieved by utilizing the conventional nonconductive polymeric biomaterials .These promising results indicate the potential application of pPy for cardiac regeneration due to its possibility of electrical signal transmission.However, the weak mechanical property of pPy limits its application in cardiac tissue engineering.To overcome this shortcoming, conductive pPy is hybridized with other biomaterials to construct ECPs with stronger contractile and electrical properties .In view of biomimetics, the main purpose is to create cell-compatible or tissue-favored environments for various damaged tissues, and an optimal choice is to utilize the underlying properties of natural biological systems .The architectures at submicrometer scale are abundant in native ECMs and play a key role in regulating cellular behavior.Decellularization of the native omentum or the whole heart can be taken as the native tissue scaffolds for CTE , but sources of these native tissues are rare.Inspired by marine creatures like mussels, a series of ideal materials have been developed, such as extracellular matrix substrates , strong adhesives , and other bioengineering materials .We previously developed a mussel shell-derived scaffold with a multilayer, an interconnected porous structure for wound repair .The main component of this scaffold is chitosan, which exhibits a minimal immune reaction .It is 
possible that the chitosan-based shell material could serve as a 3D matrix scaffold to efficiently deliver host cells to the damaged cardiac tissues.To test the possibility, we crosslinked the pPy with the shell-derived chitosan membrane to construct a novel shell-pPy scaffold for restoring MI.Dopamine, a small molecule containing catechol and amine groups, is similar to adhesive proteins of marine mussels .Under a weak alkaline pH condition, it can self-polymerize to form an adherent polydopamine, which displays striking adhesion property to render materials with secondary modification .As a biocompatible adhesive, the polydopamine could also attenuate toxicity caused by intrinsic pPy .Inspired by this, we constructed two other scaffolds, shell-pPy-PDA and shell-PDA-pPy, and found that PDA-modification showed no advantages on CMs adhesive growth and maturation compared with pure pPy-modification.Mere modification of pure pPy endowed the SP scaffold with desirable biocompatibility.The SP scaffold could promote cell-cell coupling and maturation, along with supporting synchronous contraction of cardiomyocytes.In vivo, the SP-derived ECP could significantly improve heart function when it was transplanted into the MI rats.A large number of neonatal vessels were also observed in the infarcted sites.We found that the SP scaffold could retain suitable porosity and the pPy increased the roughness on the inner wall of the pores.We highlighted that in addition to the flexible and conductive hallmarks of the SP, its porosity and subtle spatial structures are also important for MI repair.Pyrrole and dopamine hydrochloride were obtained from Sigma–Aldrich."Dulbecco's modified Eagle's high-glucose medium, fetal bovine serum and trypsin were purchased from Gibco.The Live/Dead cell staining assay was from Molecular Probes™.The primary antibodies of Von Willebrand factor, F-actin, α-actinin and connexin 43 were purchased from Abcam.Alexa Fluor® 488 Donkey Anti-Rabbit IgG and Alexa Fluor® 568 Donkey Anti-Mouse IgG were procured from Life Technologies.The pure shell scaffold was obtained using our published method .The solid shell derived from scallop was immersed into nitric acid solution for 3 days and then treated with sodium hydroxide for 6 h.pPy was introduced using in situ polymerization on the surface of the chitosan shell scaffold by chemical oxide means.To prepare the SP and SPD scaffold, firstly the pyrrole and glutaraldehyde were dissolved in 0.5% acetic acid aqueous solution to obtain the pre-polymer aqueous solution.The reserved pure shells were added after.With gentle shaking at 4 °C for 1 h, the shell was allowed to be saturated with pyrrole solution and then incubated in FeCl3·6H2O solution at 4 °C overnight, the black SP scaffold was accordingly obtained.The molar ratio between FeCl3 and py was around 2.7.The PDA was added into the prepared Tris–HCl aqueous solution to obtain 2 mg/mL PDA/Tris–HCl solution.The SP scaffold was then immersed into the PDA/Tris–HCl solution at 4 °C overnight to obtain the SPD scaffold.To obtain the SDP scaffold, the pure shell membranes were reacted with the former 2 mg/mL PDA/Tris–HCl aqueous solution and immersed into the pPy-A solution incubated at 4 °C for 1 h, and then incubated in FeCl3·6H2O solution overnight.All of the scaffolds were dialyzed with deionized water for 3 days and the dialyzed water was refreshed three times a day.The membranes were submerged in 75% ethanol at 4 °C overnight.After being rinsed in 0.01 M PBS thrice, the submerged scaffolds 
were utilized for cell culture.The different scaffolds were washed with deionized water thrice and then lyophilized for use.The dried scaffolds were mounted onto aluminum stubs with conductive paint and sputter-coated with an ultrathin layer of gold.They were then imaged and photographed under S-3000N scanning electron microscopy and energy disperse spectroscopy at an accelerating voltage of 20 kV.Triplicate samples were used for each scaffold and three individual views were selected in each sample.The data was then analyzed using Image J and Origin 8 software.After being fixed in 2.5% glutaraldehyde overnight, the ECPs were rinsed in 0.01 mol L−1 PBS three times, subsequently dehydrated in graded series of ethanol,and air dried.Finally, the samples were sputter coated with gold, then imaged and photographed under S-3000 N scanning electron microscopy.Infrared spectroscopic analysis was carried out for the different shell-derived scaffolds using a Fourier transform infrared spectrometer with an attenuated total reflectance attachment under dry air at room temperature.The analysis was executed in the range of 4000 cm−1 to 400 cm−1 with 20 consecutive scans at a resolution of 2.0 cm−1.The tensile properties of hydrated scaffolds with length: 8–10 mm, width: 6–8 mm, and thickness: 0.35–0.70 mm, were measured using a uniaxial tensile tester machine with the cell load capacity of 5 N at 3 mm/min rate.All measurements were operated at room temperature.The stress–strain curves of the membranes were obtained using Analyzer software and the elastic modulus was calculated from the initial 0–10% of the linear region of the stress–strain curves.The conductivity of the shell-derived scaffolds was evaluated at room temperature using a three-probe detector.The dry scaffold coated on the surface of the glassy carbon electrode was taken as the working electrode, a platinum electrode as the counter electrode, and a saturated calomel electrode as the reference electrode.Ten specimens were measured for each scaffold and the recorded values were then analyzed using GraphPad Prism 6 software.In vitro degradation test was used to determine the weight loss of the shell-derived scaffolds incubated in PBS at 37 °C for 1, 2, 3, 4, 5 and 6 weeks.After incubation, samples were washed with de-ionized water, dried overnight in a vacuum drier and then weighted.The morphology of the scaffolds was evaluated by SEM and the degradation percentage was obtained by dividing the weight loss to the initial dry weight.Triplicate samples were used for each scaffold and the data was then analyzed using Origin 8 software.All animals were purchased from the Animal Center of the Southern Medical University, P.R. 
China.All of the animal experiments were performed with the approval of the Southern Medical University Animal Ethics Committee according to the Regulations for the Administration of Affairs Concerning Experimental Animals.Neonatal rat ventricular myocytes were isolated from 1 to 3 day old Sprague–Dawley rat hearts as described previously .Briefly, the rats were anesthetized with isoflurane, thoracic cavity opened, and their hearts were quickly harvested, carefully dissected and enzymatically dissociated.The isolated cells were maintained in high-glucose DMEM supplemented with 10% FBS, 100 U mL−1 penicillin, and 100 μg mL−1 streptomycin at 37 °C in a 5% CO2 incubator.After being cultured for 2 h, the unattached cardiomyocytes were collected and then seeded onto the scaffolds or the culture dishes and cultured in high-glucose DMEM supplemented with 10% FBS, 100 U mL−1 penicillin, and 100 μg mL−1 streptomycin at 37 °C in a 5% CO2 incubator.The culture medium was refreshed every 2 days.The cell viability on the different ECPs was detected by live/dead cell staining assay.Firstly, the samples were rinsed with PBS three times, followed by incubation in the staining solution for 30 min at 37 °C, shielded from light.Photos of the stained samples were obtained using a laser scanning confocal microscope.The calcein-AM green fluorescence was used to stain live cells and the ethidium homodimer-1 red fluorescence was used to detect dead cells.After being cultured for 3 days and 7 days, the cardiac cells on different ECPs in vitro were fixed in pre-warmed 4% paraformaldehyde at room temperature for 20 min.The samples were then washed three times in PBS and permeabilized with 0.2% TritonX-100 in PBS at room temperature for 10 min.After blocking with 2% bovine serum albumin/PBS at room temperature for 30 min, the samples were sequentially incubated in primary antibody, including rabbit anti-α-actinin, mouse anti-CX43, mouse anti-F-actin and rabbit anti-VWF at 4 °C overnight.The samples were washed with PBS three times followed by incubation with the secondary antibodies, including Alexa Fluor488 Donkey anti-rabbit IgG, Alexa Fluor568 donkey anti-mouse IgG and Alexa Fluor488 donkey anti-mouse IgG, at room temperature for 1 h.The samples were then rinsed with PBS, further stained with 4′,6-diamidino-2-phenylindole for 1 h and finally imaged using confocal microscopy.The fluorescent intensity concerning α-actinin and CX43 was normalized by dividing with the intensity of DAPI.Triplicate samples were used for each ECPs and three individual areas were selected within each sample.The data was then analyzed using Image J and Origin 8 software.The beating video of the seeded CMs was obtained using a video capture program.The video sequences were captured at a rate of 20 frames per second.Representative beating signals were analyzed using the image processing program image J and MATLAB based on the previously reported method .In the obtained color maps, the relative amplitude is represented by color and thus the beating behavior of the different sites located on the same x scale and different y scale of the scaffold within 10 s was displayed.In the curves, the beating behavior of the site located in the scaffold at within 10 s was also presented.Six membranes per group were used for spontaneous beating analysis.The calcium indicator assay kit including Fluo-4 AM and Pluronic F127 was utilized to assess intracellular Ca2+ transient within seeded neonatal CMs on the different shell-derived scaffolds.On day 7, the 
samples were washed once with pre-warmed PBS, followed by incubation in the calcium indicator solution for 45 min.Next, the indicator was discarded and the ECPs were soaked in pre-warmed PBS until imaging.Five different sites within each region were selected and the increase in the concentration of Ca2+ ions was captured using a fluorescent microscope at 488 nm wavelength.The fluorescent dye intensity during cells' contractions was normalized by dividing by the background intensity and plotted over time.All experiments were performed in triplicate.Male Sprague–Dawley rats were anesthetized with isoflurane, following which permanent left anterior descending ligation was performed according to the procedure described previously .On the postoperative day 14, the surviving rats underwent an echocardiographic examination.The rats with FS < 30% were selected and randomly divided into 6 groups, namely the sham group, the MI group, the pure shell ECP group, the shell-pPy ECP group, the shell-pPy-PDA ECP group and the shell-PDA-pPy ECP group.For the patch transplantation, the scaffolds were cut into disks of 10 mm in diameter, and the neonatal rat CMs were seeded into the scaffolds at a concentration of 7 × 107 cm−3.Before being seeded in the scaffolds, the CMs were labeled with CM-DiI for tracking the implanted cells.The labeled cardiomyocytes had been seeded into the scaffolds for 7 days, and the functional ECPs based on different shell-derived scaffolds were fabricated, including the S-, the SP-, the SPD- and the SDP-derived ECPs.The ECPs were then implanted into the infarcted area of the heart and were fixed onto the epicardial surface with 6-0 sutures.In the sham group, the rat's chest was opened and the heart was exposed twice as explained above, while the suture needle was put through the corresponding sites like those in the transplanted groups but no ligation was performed.The rats received daily subcutaneous injections of azathioprine and methylprednisolone for immunosuppression.The left ventricular function of animals from all groups was assessed using an IE33 echocardiograph system equipped with a 15-MHz transducer.Two weeks after ligation and 4 weeks after ECPs transplantation, the rats were anesthetized with isoflurane and echocardiograms were recorded respectively.The LV internal dimensions at both diastole and systole, fractional shortening, and ejection fraction were measured.Three cardiac measurements were obtained from the three samples respectively for each group.After the patch was transplanted for 4 weeks, the animals were anesthetized, sacrificed, and the heart explants of the MI area, treated with patches, were embedded in OCT compound, slowly frozen with liquid nitrogen, and then stored at −80 °C.The tissues were cut into 8 μm thick sections in the LV transverse direction and were stained with Masson trichrome.The sections were also used for immunofluorescence detection according to the in vitro immunofluorescence method mentioned above.Under the Masson trichrome staining, the red area indicated the myocardium and the blue area referred to collagen.The area of regenerated tissues in the left ventricle MI region was calculated according to the sum of the red area and blue area.The ratio of the myocardium in the regenerated tissues in the MI area was equal to the ratio of red area versus the area of regenerated tissues.The data was analyzed using Image J and Origin 8 software.
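The area ratios defined in the preceding paragraph reduce to pixel counting once the Masson trichrome image has been segmented into red (myocardium) and blue (collagen) regions. The sketch below assumes such boolean masks have already been produced, for example by colour thresholding in Image J; the file names and the mask-generation step are placeholders, not part of the published analysis.

# Sketch of the area calculation described above, starting from pre-segmented boolean masks.
# How the masks are obtained (e.g. colour thresholding of the trichrome image) is assumed, not shown.
import numpy as np

red_mask = np.load("red_myocardium_mask.npy")   # True where pixels were classified as myocardium
blue_mask = np.load("blue_collagen_mask.npy")   # True where pixels were classified as collagen

red_area = int(red_mask.sum())                  # pixel counts stand in for areas
blue_area = int(blue_mask.sum())

regenerated_area = red_area + blue_area          # regenerated tissue in the infarcted LV region
myocardium_ratio = red_area / regenerated_area   # fraction of regenerated tissue that is myocardium

print(f"regenerated tissue area: {regenerated_area} px")
print(f"myocardium fraction: {myocardium_ratio:.2%}")

Summing and ratioing these two counts per section gives the regenerated-tissue area and the myocardium fraction reported for each group.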
All tests were performed at least three times.Data were presented as mean ± standard deviations.All results were compared using SPSS13.0 software.One-way analysis of variance was used in a comparison of more than 2 means by the Bonferroni method.To determine a statistically significant difference between groups, we used Tukey's multiple comparison test.Cardiac cell therapy suffers from limitations related to poor engraftment and significant cell death after transplantation.In this regard, ex vivo tissue engineering is a promising approach to increase cell retention and survival .Recently, conductive scaffolds have attracted much attention due to their conductive properties in CTE.The pPy, a useful conductive material, was beneficial for enhancing the electrical interactions among CMs as well as promoting the maturity of CMs and tissue regeneration .Our previous flexible scaffold , derived from mussel shell, possessed the macroporous structure and suitable porosity.As the most abundant constituent of chitosan-based matrix structure, this shell-derived scaffold might be utilized for CTE.However, the shell-derived scaffold is electrically insulated, which leads to poor electrical coupling and cell–cell integrity among CMs and thereby diminished the functionality of the constructed tissues.The inclusion of electrically conductive polymer pPy could address the existing shortcoming of the shell-derived scaffold.Here, the flexible shell membrane was fabricated using a simple acid-base treatment according to our published methods .The pPy was successfully immobilized in the chitosan matrix via a chemical oxidative polymerization method .To create an electrically conductive SP scaffold, pyrrole monomers were firstly oxidized to positive monomer radical, grafted onto the chitosan backbone through conjugation between an active hydroxyl group and positive pyrrole radical, and then polymerized into pPy in situ within the shell using FeCl3.To achieve a conductive shell membrane with a fine mechanical property, glutaraldehyde was added to cross-link the chitosan backbone.Accordingly, a novel 3D ECP can be obtained by a combination of neonatal cardiac cells and SP scaffold.The previous study reported that PDA exhibited good cytocompatibility and could attenuate toxicity of pPy.Immersing scaffolds into the PDA/Tris–HCl aqueous solution could coat the PDA onto the surface of the shells.To evaluate the role of PDA modification on the effect of biocompatibility, spatial structure, and conductivity, we fabricated two different hybrids, the SDP and the SPD scaffolds.The SEM observation showed that the pure shell scaffold exhibited a uniform macro-porous structure, whereas the hybridized scaffolds endowed the pure mussel shell matrix with different hallmarks.The SP scaffold could increase the roughness, maintain the high-fidelity microporous structure of chitosan shell, and obtain the desired conductivity.The pPy uniformly coated onto the surface of the inner wall of pores.In the SPD, a thin polydopamine film coating a majority of the pores of the SP resulted in heterogeneous pore sizes.In the SDP, the pPy on the PDA-coated shell scaffold appeared as aggregates, causing most of the porous structures to be diminished.The low porosity in the SPD and SDP led to poor cell adhesion and growth .Energy spectrum results showed that the main components in the shell scaffold were not broken after modification.It has been shown that electrical stimulation through conductive composite could be capable of promoting cardiac cell spread and functionalization, thereby regulating the beating of infarcted
hearts .Additionally, a synthetic dopamine-based scaffold showed conductivity as low as 10−3 S/cm at 100% relative humidity, while specific conductivities in the range of 0.8–61.9 S/cm could be achieved in pure pPy .To confirm whether the electro-activity is attributable only to the polypyrrole component and to evaluate the effects of the polydopamine modification on the conductivity, conductivity measurements of the different scaffolds were carried out.As shown in Fig. 1F, the electrical conductivity of SP was found to be significantly greater than those of the pure shell and pPy/PDA modified shell-derived scaffolds, and the mean conductivity of SP was 0.062 ± 0.0048 S/m.The decreased conductivities of the SPD and SDP scaffolds were mainly due to the coated PDA semiconductor.The aggregated pPy on the surface of the SDP scaffold would cause low transport of electrons along the pPy backbones and lead to weak conductivity .These results implied that the elevated electro-activity of SP corresponded to the uniformly coated pPy and the subtle spatial structure of the shell.Our developed SP scaffold maintained a level of conductivity similar to that of healthy native cardiac tissues, with values in the 10−2 S/cm range, which would be beneficial to the electric integration of the infarcted zone and normal myocardium.According to the above results, we concluded that the abundant porous structures and excellent conductivity made the SP an ideal scaffold for MI repair.In regard to the SPD and SDP, the porosity and conductivity were much lower than those of SP, which indicated that grafting of PDA alone is not always beneficial in tissue engineering.Stretching tests were done to generate experimental stress–strain data of the synthesized shell-derived scaffolds and to calculate their elastic moduli.The pure shell scaffold exhibited elastic and ductile properties in which uniaxial deformation could reach 80% stretching.All three pPy-modified scaffolds showed similar stress–strain behavior in which uniaxial deformation could reach about 40% stretching, beyond which increased stretching did not lead to any further deformation.The modification with pure pPy in particular rendered the shell scaffold relatively stiff.The elastic moduli of the pPy-modified scaffolds increased slightly with the addition of pPy.Accordingly, the elastic modulus of our developed SP scaffold was close to that of the natural mammalian myocardium, which is in the range of 200–500 kPa at the end of diastole .In the infrared spectrum of the chitosan shell, a strong stretching band at 3430 cm−1 was assigned to the axial stretching vibration of O–H superimposed onto the –N–H– stretching band.Bands due to the Cs-NHAc units at 1630 cm−1 and symmetrical angular deformation of CH3 at 1376 cm−1 were also observed in the shell scaffold.Additionally, the specific bands of the β glycoside bridge of chitosan could be detected around 1154 cm−1 and 896 cm−1.As shown in Fig. 
S1, the bands of the β glycoside bridge in the shell were observed at 1130 cm−1.The specific bands could also be exhibited in SP, SPD and SDP .The –N–H– stretching vibration band was observed at 3425 cm−1, the peak at 1610 cm−1 was attributed to C–N stretching, and the peak around 1595 cm−1 was assigned to the –C=C– stretch of the pyrrole ring.As shown in Fig. S1, the specific bands of pyrrole could also be observed in SPD and SDP .When modified with dopamine, the peaks at ∼1250 cm−1 and ∼1600 cm−1 deriving from aromatic rings could be detected in SPD and SDP .In addition, the infrared spectrum of the modified scaffolds showed a broad absorption band between 3270 and 3450 cm−1 due to the overlap of the N–H and O–H stretching vibrations.This band was much broader than the bands observed in the spectra of chitosan, due to the stretching vibrations of bonded N–H…O, O–H…O and O–H…N .The degradation behavior of the S, SP, SPD and SDP scaffolds was evaluated by observing the morphological changes and weight losses of the scaffolds after soaking them in PBS at 37 °C.As shown in Fig. S2, all the shell-derived scaffolds appeared to have some morphological changes displayed with free fragments after being incubated in PBS for 8 days.The pPy remained uniformly on the surface of the inner wall of the pores.The low degradation of the pPy-modified scaffolds might be due to the covalent crosslinking between chitosan and pPy.Additionally, fewer pPy aggregates and less polydopamine membrane were observed in the SPD scaffold, and small polydopamine pieces were peeled from the SDP scaffolds.A previous report has shown that a pPy/chitosan composite was completely degraded within 12 weeks and decreased by 21.6% within 6 weeks of its in vivo implantation.In our study, the degradation of the SP scaffold increased with incubation time and the weight decreased by 23.79 ± 1.49% after soaking in PBS for 6 weeks.A degradable scaffold in cardiac tissue engineering is beneficial to tissue integration and can avoid subsequent surgical removal, while an excessive speed of degradation would result in immediate electrical and mechanical losses which would limit its long-term performance in vivo .As shown in Fig. 
S2F, the SP scaffold displayed relatively stronger elongation and more elastic and ductile behavior than the other scaffolds after 6 weeks of soaking.Interestingly, the overall framework of the SP scaffold stayed intact without cracking when it was stretched to more than 25% deformation, and the deformed scaffold even restored its original shape after being soaked in PBS solution.Besides, the elastic modulus of the SP scaffold was maintained in the range of 200–500 kPa, which was similar to that of the natural mammalian myocardium .In summary, the SP scaffold, with a suitable biodegradation rate and proper mechanical properties, can be taken as a promising scaffold in CTE.Biocompatibility is a basic requirement for a tissue-engineered cardiac patch.To evaluate whether the inclusion of pPy in the shell scaffold was toxic to CMs, Live/dead staining assays of cells seeded on the S, SP, SPD and SDP scaffolds were analyzed respectively.Compared to the pure shell group, more cells attached to the pPy-modified scaffolds, which indicated that all the modified scaffolds are nontoxic and biocompatible.The density and viability of cells on the SP scaffold without PDA were the same as those of cells grown on the pPy/PDA-modified scaffolds.In order to investigate the cytoskeleton organization of the CMs, the cells within the scaffolds were stained for F-actin fibers after being cultured for 3 days.Compared to the S group, more elongated intracellular actin filaments were exhibited and more cells were connected to form a network in the SP, SPD and SDP scaffolds.These results implied that the inclusion of pPy within the shell matrix would not affect cell viability and that pPy modification of the shells, with or without PDA, could promote CMs adhesion and elongation.Compared to non-conductive biomaterials, conductive biomaterials have proved to exhibit a better performance for the functionalization of CMs, including the maturation of the cardiomyocytes and their synchronous contraction .The ability to contract synchronously is mainly based on the expression of cardiac-specific proteins, such as sarcomeric α-actinin and connexin43.Sarcomeric α-actinin, a specific cardiac protein, is involved in the maturation of CMs and regulation of muscle contraction .CX43 protein, a well-known gap junction protein, is mainly responsible for the electric-contraction coupling of the CMs and synchronous contraction of the cells .After MI, gap junction remodeling occurs and vulnerability to arrhythmia is enhanced due to cardiac fibrosis and the loss of gap junction expression in the cardiac cells.Increased levels of CX43 protein could attenuate such post-infarct arrhythmia .As shown in Fig. 
3, the cells mainly displayed round shapes without intact formation of sarcomeres, and the protein of CX43 was internalized in all the ECPs on day 1.On day 3, the SP-derived ECP showed homogeneous alignment, highly organized intracellular sarcomeric α-actinin structures along the longitudinal cell axis, and remarkable cross-striations, representing a hallmark of maturing cardiomyocytes.In addition, the expression and distribution of CX43 proteins exhibited different patterns among different ECPs: almost invisible in the S-derived ECP; low levels around the cytomembranes in the SPD-derived ECP; low levels around the nucleus in the SDP-derived ECP; and high levels along the cytomembranes homogenously in the SP-derived ECP.The data suggested that the pure pPy modification of the shells could improve CMs maturation along with cell–cell coupling compared to the other scaffolds, which was in favor of ECPs construction.Local matrix microenvironment might be a key issue in directing the immature CMs differentiating into the beating CMs.To further evaluate the 3D architecture of SP scaffold on the effects of CMs maturation, the cardiac-specific proteins of CMs on the different layers in SP were assessed.As shown in Fig. 
S3, immature CMs without organized sarcomere structures were present in all of the three layers of the pure shell scaffold.Obvious mature sarcomere structures could be observed in each layer across the SP scaffold, including the deep layer, the middle layer and the superficial layer.In the SPD and SDP groups, only a few mature sarcomeres were observed in the deep layer and the middle layer.This suggested that the cross-linked porous network architecture in the conductive SP scaffold could create an ideal 3D environment for CMs infiltration and that the enhanced surface roughness of the internal wall could also facilitate 3D extension of cardiac cells .Under these conditions, a 3D multi-layer CMs network was formed in the SP-derived ECP, which thereby affected the maturation and differentiation of immature ventricular myocytes, specifically resulting in the emergence of mature sarcomeres in the deep layer .Overall, such promising effects in SP scaffolds were dependent on the pPy providing an electrical signal and the natural shell chitosan matrix giving a suitable 3D multilayer structure .The developed shell-pPy ECP could mimic the in vivo physiological environment and allow the seeded CMs to organize into mature structures.A well-organized structure of functional cardiac syncytium supports the functional beating of the engineered cardiac-like tissues.Consequently, the beating behavior of cardiomyocyte populations, seeded on different shell-derived scaffolds, was further analyzed through real-time video microscopy.All tissues, on either the pure shell scaffold or the pPy/PDA-modified shell scaffolds, demonstrated spontaneous beating activities.Spontaneous beating behavior was recorded in all kinds of shell-derived ECPs on day 5.But only the SP-derived ECP could synchronously contract after 3 days of culture.A relatively stable synchronous beating behavior in the SP-derived ECP and a mussy beating in the SPD- and SDP-derived ECPs, mainly generated by the cell populations close to the center, were recorded.An irregular 
beating, originating from the cell populations cultured in the scattered area, could be observed in the pure shell-derived ECP.Amongst all ECPs, the SP-derived ECP displayed the whole contraction with the largest amplitude along with linear displacements.The change of the calcium signal is often accompanied by heart contraction .In order to investigate calcium signaling within the seeded CMs, we recorded the variation of the intracellular calcium concentration on day 5 using Fluo-4 AM as the fluorescent calcium indicator.The spontaneous increase in intracellular Ca2+ concentration was represented as the fluorescent intensity of the dye divided by the background intensity.Cells grown on the SP scaffold exhibited relatively rhythmical Ca2+ fluctuation and higher Ca2+ amplitudes than those on the other scaffolds.The reason is that an intact 3D-aligned functional cardiac symplasm was formed between the elongated cardiomyocytes and the 3D porous framework in the SP-derived ECP.Such a calcium transient profile further confirmed the formation of 3D network cell-cell communication between CMs in the SP-derived conductive ECP.Specifically, pPy embedded in the chitosan membrane bridged the electrically insulated structure of the matrix and facilitated signal propagation between the cells.However, intracellular Ca2+ puffs occurred at different frequencies at 5 individual spots in the other ECPs.This was due to the poor formation of intact cardiac-like tissues, including scattered CMs with a flat and polygonal morphology grown in the quasi-2D planar context in the pure shell and SDP scaffolds, and isolated flattened CMs in the peninsula-like gaps in the SPD scaffold.Above all, the SP scaffold containing pPy and the chitosan matrix could provide a promising 3D spatial structure for elongated sarcomere formation, cell-cell communication, and cell-scaffold interaction, and improve electrical impulse propagation across the SP based ECPs in vitro.A series of echocardiographic examinations was performed to assess the cardiac function in the MI group and the S-, SP-, SPD- and SDP-derived ECP groups respectively.The echocardiograph images and videos showed that the SP-derived ECP group exhibited enhanced thickness and contractile activity of the free wall in the left ventricle.Mean fractional shortening, ejection fraction and left ventricle internal dimensions at diastole and systole were significantly improved in the SP-derived ECP group as compared to those of rats in the SPD- and SDP-derived ECP groups.Deterioration in cardiac function but slight improvement in the measurements of LVIDd and LVIDs occurred in the S-derived ECP group after left anterior descending occlusion.Although there was no significant difference in either average LVIDd or average LVIDs between the SP and SDP ECP groups, mean fractional shortening and ejection fraction were significantly increased in the SP ECP group, indicating that the SP-derived ECP could better maintain heart function compared to the SDP-derived ECP.To explore the reason why the SP-derived ECPs displayed a promising repair efficacy for MI, the fate of the CMs in the transplanted ECPs was examined.The cardiomyocytes were labeled with CM-DiI and then cocultured with the scaffolds for 7 days in vitro.The CM-DiI labeled ECPs were then delivered to the infarcted region 
of the MI rats for 4 weeks.As shown in Fig. 7, many donor CMs migrated into the infarcted area in the three modified shell-derived ECP groups, while few DiI+ CMs were detected in the S-derived ECP group.A large number of DiI+ cells with uniform dispersion were located in the infarcted area in the SP-derived ECP group, while only a small number of DiI+ CMs were scattered in the infarcted area in the SPD- and SDP-derived ECP groups.The cardiomyogenesis in the infarcted area after transplantation of the different patches was assessed through detecting the α-actinin and CX43 protein levels using immunofluorescence staining.Compared to the other three groups, a more organized α-actinin positive myocardium and enhanced expression of CX43 were detected in the infarcted area of the SP-derived ECP implanted rats.The donor cardiomyocytes could promote neomyocardium formation through paracrine signaling, and a suitable ECP could bridge the healthy and MI regions, assisting the structural integrity and tissue regeneration in MI.The porous structure and electro-activity endowed the shell-pPy scaffold with a greater potential for the survival of donor CMs and a desirable repair efficacy for MI rats.Angiogenesis is a key issue in heart repair, and the newborn vascular system could provide oxygen and nutrients for the infarcted area.The promotion of angiogenesis attenuated cardiac remodeling after MI .After staining for von Willebrand factor, few blood vessels were detected in the infarcted region in the MI group, while many blood vessels were observed in the other four ECP-transplanted groups.A distinct increase in blood vessels was detected especially in the SP-derived ECP group, and no significant difference among the S-, SPD- and SDP-derived ECP groups was recorded.These results implied that obvious angiogenesis could be triggered by the SP-derived ECP, while only a small amount of vessel regeneration was induced by the other three ECPs.Additionally, the large vessels greater than 100 μm in diameter in the SP group would provide adequate oxygen and nutrients to the locally migrated cells and keep them alive.Abundant neo-vasculature in the SP-derived ECP mended area could also discharge toxic waste products in or around the MI region, which could prevent further cell death over time and provide a better microenvironment for recruited cells.The developed SP scaffold possessed uniformly distributed and interconnected pores, and additionally the inside pore size was in an appropriate range for cell accommodation.This highly porous SP scaffold is favorable for cell migration, vascularization and prolonged survival of the ECP .Conductivity is markedly reduced or even lost in the infarct region.The presence of the pPy could partly restore conductive 
microenvironment in the MI regions, thereby facilitating cell growth and neovascularization.Additionally, the polycationic conducting pPy polymer could facilitate adhesion of the negatively charged CMs to the infarct region .Masson's trichrome staining was performed to assess the area of collagen-containing fibrous tissue and regenerated heart tissue after the ECPs had been transplanted into rats for 4 weeks.Obviously, the infarcted region in the ventricle wall of the MI group and the pure shell group was mainly occupied by fibrous tissues, while the fibrotic area was significantly smaller in the modified shell-derived ECP groups, especially in the SP-derived group.The SP ECP group maintained a significantly thicker LV wall, less fibrosis, and a larger area of regenerated cardiac muscle as compared to the other groups, indicating that the SP-derived ECP had performed better in the repair of damaged cardiac tissues under the ischemic environment around the infarcted region.In addition, the implanted patches were still tightly integrated to the host epicardium of the LV wall 4 weeks after transplantation in the SP, SPD, and SDP groups.This indicated that the modified shells could delay the degradation time and still maintained an intact structure after being transplanted into rats for 4 weeks, which would be beneficial for continuous heart repair over a relatively long period.Electro-conductive scaffolds have attracted much attention due to their conductive properties in cardiac tissues.The pPy, an extensively used conductive material, provides an inductive microenvironment to enhance the electrical communications amongst CMs, to promote the maturity of CMs, and to drive functional cardiac regeneration.Here, a flexible conductive matrix scaffold was fabricated in situ in the shell.The shell-pPy scaffold could retain the subtle porous structure of the chitosan shell.The pPy incorporation could even increase the roughness on the inner wall of the pores in the shell and endow the scaffold with a suitable conductivity.Polydopamine, one of the most versatile molecules for functionalizing material surfaces , was utilized for its role in neutralizing the toxicity of pPy in the functional cardiac patches, leading to the development of two different pPy/PDA modified hybrids, SPD and SDP.We found that PDA-coating decreased the pore size and porosity of the shell scaffolds and that the pPy/PDA-modified scaffolds exhibited a 
disorganized sarcomere structure accompanied by a mussy beating.Compared to pPy/PDA modified shell scaffolds, the scaffold which was merely modified by pure pPy, provided a more biocompatible 3D electrophysiological microenvironment for exogenous CMs growth, cell-cell crosstalk, and maturation, along with synchronous contraction of CMs.After being treated with the SP-derived ECP in the MI rats, fibrosis was drastically attenuated, abundant angiogenesis was triggered, and heart function was significantly improved.We summarized that the conductive SP scaffold with favorable porosity and subtle spatial structures, which would assist to provide a suitable 3D microenvironment, could facilitate the maturation and functionalization of recruited CMs and reach a promising cardiac repair efficacy of MI rats.X.Z.Q. was responsible for the concept and design of the study.X.P.S. did the in vitro study and wrote the manuscript.G.L.Y. helped J.M. for in vivo animal experiment.A.A helped X.P.S. to revise the manuscript.L.Y.W. and L.Y. conducted the immunofluorescence staining experiments.X.Z.Q. analyzed the results and corrected the manuscript.All authors reviewed the manuscript. | Polypyrrole (pPy), a widely-used conductive material, can create an electrophysiological condition for the exogenous cardiomyocytes (CMs) via its conductivity, biocompatibility and available processing method. Our previously developed flexible scaffold from the mussel-derived chitosan shell possessed multiscale, interconnected-porous structure and proper stiffness, which was a promising tissue substitute for wound healing. Here, a 3D pristine hybrid scaffold was fabricated by incorporating polypyrrole into the mussel shell-derived membrane (shell-pPy). The pyrrole monomers were grafted onto to the shell membrane and then in situ polymerized to form conjugated pPy-chitosan shell using FeCl 3 . The developed pPy-chitosan shell was used to produce an engineered cardiac patch (ECP). The shell-pPy ECP maintained subtle spatial structure and was granted well cellular viability, aligned morphology, abundant striation and organized CX43, robust contraction and rapid calcium transients in vitro. When transplanted into the infarcted hearts, the developed shell-pPy ECP could also mimic the mechano-electronic function for cardiomyocytes maturation and integrity. Notably, obvious angiogenesis was triggered, and cardiac pumping performance was promoted in the infarcted area by the shell-pPy ECP. The study highlighted a functional pPy-modified cardiac patch with suitable conductivity, subtle 3D structure, and mechanical property can serve as an ideal ECP to mimic cardiac niche for myocardial repair. |
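The calcium-imaging methods above normalize the Fluo-4 signal by dividing the dye intensity by the background intensity and plotting it over time. The following is only a minimal sketch of that normalization applied to an exported intensity trace; the sampling rate, trace values and the `normalize_fluo_trace` helper are illustrative assumptions, not part of the original study.

```python
import numpy as np

def normalize_fluo_trace(intensity, background):
    """Normalize a Fluo-4 intensity trace by the background intensity (F / F_background)."""
    intensity = np.asarray(intensity, dtype=float)
    return intensity / np.asarray(background, dtype=float)

# Illustrative example: a fake 10 s trace sampled at 20 Hz containing three Ca2+ transients.
t = np.arange(0, 10, 0.05)
background = 100.0                                                     # assumed background intensity (a.u.)
trace = background * (1.0 + 0.5 * np.maximum(0, np.sin(2 * np.pi * 0.3 * t)) ** 4)
f_norm = normalize_fluo_trace(trace, background)
print(f"peak normalized intensity = {f_norm.max():.2f}")              # amplitude of the transient
```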
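The echocardiographic endpoints reported above (LVIDd, LVIDs, fractional shortening and ejection fraction) are related by standard formulas. The article does not state which volume model the IE33 system applied, so the Teichholz estimate below is an assumption used only for illustration, and the rat dimensions are invented placeholders rather than study data.

```python
def fractional_shortening(lvidd_mm, lvids_mm):
    """FS (%) = (LVIDd - LVIDs) / LVIDd * 100."""
    return (lvidd_mm - lvids_mm) / lvidd_mm * 100.0

def teichholz_volume(d_mm):
    """Teichholz LV volume estimate (mL) from an internal dimension given in mm."""
    d_cm = d_mm / 10.0
    return 7.0 / (2.4 + d_cm) * d_cm ** 3

def ejection_fraction(lvidd_mm, lvids_mm):
    """EF (%) from end-diastolic and end-systolic Teichholz volume estimates."""
    edv = teichholz_volume(lvidd_mm)
    esv = teichholz_volume(lvids_mm)
    return (edv - esv) / edv * 100.0

# Hypothetical post-infarction rat measurements (mm); illustrative only.
lvidd, lvids = 8.0, 6.2
print(f"FS = {fractional_shortening(lvidd, lvids):.1f} %")   # below the 30 % inclusion threshold
print(f"EF = {ejection_fraction(lvidd, lvids):.1f} %")
```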
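The Masson trichrome quantification above defines the regenerated-tissue area as the sum of the red (myocardium) and blue (collagen) areas, and the myocardial ratio as the red area divided by that sum. The original analysis was performed in ImageJ and Origin; the snippet below only restates that arithmetic from already-segmented pixel counts (the segmentation step itself is not shown, and the counts are invented).

```python
def masson_metrics(red_pixels, blue_pixels):
    """Return (regenerated area in pixels, myocardium ratio) from segmented pixel counts."""
    regenerated = red_pixels + blue_pixels                 # red = myocardium, blue = collagen
    myocardium_ratio = red_pixels / regenerated if regenerated else 0.0
    return regenerated, myocardium_ratio

# Illustrative counts from a segmented LV section (not study data).
area, ratio = masson_metrics(red_pixels=182_000, blue_pixels=54_000)
print(f"regenerated area = {area} px, myocardium fraction = {ratio:.2f}")
```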
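Group comparisons above were run in SPSS with one-way ANOVA followed by post-hoc multiple comparisons. A rough open-source equivalent is sketched below with SciPy and statsmodels; the group labels and values are invented placeholders, and the exact SPSS Bonferroni/Tukey settings used by the authors are not reproduced here.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical triplicate measurements for four scaffold groups (placeholder values).
groups = {
    "S":   [22.1, 24.3, 23.0],
    "SP":  [31.5, 33.2, 32.4],
    "SPD": [26.0, 27.1, 25.4],
    "SDP": [25.2, 26.8, 24.9],
}

# One-way ANOVA across all groups.
f_stat, p_value = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's multiple comparison test between every pair of groups.
values = np.concatenate([np.asarray(v, dtype=float) for v in groups.values()])
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```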
31,425 | Epidemiologic studies and novel clinical research approaches that impact TB vaccine development | Tuberculosis vaccine efficacy trials are large, expensive and complex.In order to show the efficacy of a vaccine to prevent TB disease in a population with 1% TB incidence, at least 100 subjects need to be enrolled to accumulate a single TB disease endpoint (a simple endpoint-accrual sketch is given below).The cost of such trials results in the need to focus on specific populations that will lead to the greatest knowledge with the fewest subjects enrolled, to have an understanding of the epidemiology and endpoints that allow for appropriate powering assumptions, and to use novel designs to decrease clinical sample size.The 4th Global Forum featured a series of sessions that focused on these issues, and brought to light some of the newest developments and ongoing studies in this arena.Dr. Mark Hatherill, in a session entitled "Developing TB vaccines: for what purpose?", gave an overview of the potential indications for TB vaccines, excluding therapeutic studies that would shorten active TB drug therapy.These potential indications for TB vaccines included Prevention of Infection, Prevention of Disease, and Prevention of Recurrence.Of the vaccines currently in clinical testing, most candidate vaccines are aimed at POD .An effective POD vaccine is desirable as such a candidate could be used in Mycobacterium tuberculosis uninfected and infected individuals and have a more rapid impact on the TB epidemic than a POI vaccine, by interrupting transmission of TB from infectious adults.Issues with POD studies include size, cost, and duration of the studies, due to the slow accrual of TB disease endpoints.Meta-analyses have demonstrated that protection against TB disease following BCG vaccination is consistently better in infants and tuberculin skin test negative children than in adults and TST positive children .A concern exists that in adult POD studies, pre-existing anti-mycobacterial immunity, as evidenced in interferon gamma release assay-positive individuals, may block the effects of TB vaccines.This may be particularly problematic for live vaccines such as BCG and new recombinant BCG constructs, as well as live attenuated Mtb vaccine candidates.Such pre-existing immunity might inhibit the limited replicative ability of live vaccines, which is considered to be crucial in inducing optimal protection.The possibility that pre-existing Mtb immunity might also mask the response to boosting with a subunit or viral-vectored vaccine seems less likely, given that Mtb infected individuals have shown greater immunological responses to these experimental vaccines than did uninfected individuals .POD as an indication is also likely to be the preferred indication from a regulatory point of view, since the microbiologically confirmed TB disease endpoint is an accepted gold standard.For POD studies, the question of whether to enroll Mtb uninfected or naïve subjects remains a vexing issue.A study using IGRA as an indicator of Mtb infection that was conducted in over 6000 BCG vaccinated adolescents in South Africa demonstrated that the rate of TB disease in Mtb infected individuals is 3-fold higher than in Mtb uninfected individuals, and 8-fold higher in Mtb negative individuals who become infected compared to those who remain Mtb uninfected.Overall, a meta-analysis of similar studies revealed that IGRA positive subjects have a 2.1-fold increased risk of disease overall, compared to IGRA negative subjects followed for equal lengths of time .For this reason, the 
present ongoing phase 2b M72/AS01E POD study conducted by Aeras and GSK in 3500 subjects at ten sites in three African countries is only enrolling IGRA positive subjects .The premise for the POI indication is that a TB vaccine that effectively prevents TB infection would prevent subsequent TB disease in these individuals .The POI indication is an attractive hypothesis to test, since the high annual rate of infection with Mtb compared to active TB disease requires 8–10-fold fewer subjects in a POI trial compared to an equivalently powered POD study.A number of observational and cohort studies with BCG lend credence to the possibility that BCG vaccination in early life may prevent infection over longer periods of time, with a 27% overall efficacy observed in a recently published meta-analysis of such studies .The major benefit of an effective POI vaccine would be in those who have not previously been Mtb infected, such as neonates, children and adolescents.Whether a POI approach would also be beneficial in areas with high force of infection and therefore high reinfection rates has not yet been demonstrated, but modeling studies suggest that high frequency of Mtb exposure is unlikely to affect the ability to demonstrate efficacy of an effective POI vaccine .There has been ongoing debate over whether the IGRA endpoint could be used for licensure, as it is possible that Mtb infection is only prevented in those who would not otherwise progress to disease, and therefore there would be no effect on actual TB disease and TB transmission rates.Doubts about the suitability of using IGRA as endpoint, due to variability around the test threshold and high rates of reversion in low TB burden settings, have been mostly allayed by the good concordance with TST and strong association between IGRA conversion and risk of TB disease in high TB burden settings .A major issue in POI trials nevertheless remains the fact that IGRA conversions are not always sustained and reversions to negative frequently occur.Data from animal models and human studies from the pre-chemotherapeutic era suggest that TST reversion is associated with reduced risk of incident TB disease .There are conflicting recent data as to whether such ‘reverting’ individuals are more or less likely to develop TB disease than individuals with sustained conversions .To test the POI approach, SATVI and the Desmond Tutu TB Center at the University of Cape Town have completed enrolment of 990 school age adolescents in a phase 2 POI trial testing either the H4:IC31 vaccine or BCG revaccination to prevent Mtb infection .Primary results will be available by the end of 2016.POR studies are attractive for a number of reasons: the TB disease endpoint accrual will be higher than that in POD studies, requiring approximately 4–8 fold fewer participants and shorter duration.This in turn allows these studies to be used, in the same way POI studies could be used, as a method of selecting vaccines for large and expensive phase 3 POD studies.Additionally, immunotherapeutic use of TB vaccines, such as following completion of TB treatment, could be attractive as a public health measure to reduce the incidence of recurrent TB disease, which places an enormous burden on National TB Programs.A POR vaccine would be a particularly attractive prospect for treating multidrug-resistant TB and extensively drug-resistant TB, for which treatment regimens are much longer, more arduous, and expensive.Candidate vaccines for use in POR indications already have demonstrated some 
feasibility in animal models and in humans.In a further discussion on the use of POR, Nunn et al. reported that greater than 90% of recurrent TB cases occurred within 12 months of end of treatment , and a rate of 5.4% in the first twelve months after treatment was reported in Malawi .Recurrence of TB disease after successful treatment could be due to either relapse or reinfection, with reinfection rates remaining static and relapse rates declining with time.Post-treatment relapse could be assumed as a model for reactivation from latent infection.If a vaccine were to protect against relapse, an efficacy trial in prevention of active TB in latently Mtb infected populations would be warranted.Additionally, if a POR vaccine protected against reinfection, it would more likely be able to prevent infection in previously uninfected populations.Dr. Richard White spoke on the potential public health impact of new TB vaccines with POI or POD effects.He noted that new POI vaccines would be expected to have a large impact on Mtb infection, but a slower impact on TB disease and mortality burden than a post-infection POD vaccine.In contrast a post-infection POD vaccine would be expected to have a faster impact on TB disease, have biggest impact before 2035/2050 if targeted at adolescents, adults , or the elderly, and will be needed to reach World Health Organization 2035/2050 goals.He also noted that to reduce TB in 0–4 year olds, targeting adults with post-infection POD may have quicker impact than vaccinating <1 year olds directly.Practically, a clearly significant advantage of the POI strategy may be the ability to utilize already existing infant/childhood vaccination programs for delivery of new TB vaccines.However, a significant issue for implementation of a POD vaccine directed at adults is the lack of an existing vaccination platform such as exists for infants and children.This may be overcome by administration of the vaccine to adolescents at schools along with human papilloma virus vaccines, through mass immunization campaigns, by using existing systems for vaccinating adults such as influenza campaigns, or as part of occupational health systems or well-women clinics.Dr. Tom Evans discussed novel clinical and immunological concepts to assist in reducing risk, cost and sample sizes required for testing candidate vaccines, and methods to accelerate development of the most promising vaccine candidates towards efficacy trials.Possible new trials testing POR and POI indications of candidate vaccines were further discussed, but without known correlates of protection, intensive immunological studies need to be incorporated into the preclinical vaccine evaluation strategies.Such studies could incorporate systems biology analysis, immune cell phenotypes, and in vitro and in vivo mycobacterial growth inhibition assays.They could also be followed by larger studies that would use these tools to evaluate correlates of treatment success or disease recurrence.Evans also emphasized the importance of aerosol delivery of TB vaccines to induce local mucosal immunity in protection against pulmonary TB.He addressed the pressing need for a human TB challenge model and presented the key feature of optimal human TB challenges strain candidates.Given the lack of comprehensive dose–response investigation in the vaccine field, he suggested that TB vaccine developers follow the drug field by developing and using mathematical modeling to drive early trial designs for dose regimen optimization."Dr. 
Grace Kiringa presented progress in implementation, recruitment and follow up of participants in a phase 2b study to test efficacy of GSK's M72/AS01E vaccine in Kenya.Other sites include eight centers in South Africa and two in Zambia.KEMRI has been successful in its participation as a site for the AERAS-402/CrucellAd35 vaccine testing in infants, contributing >50% of enrolled babies .Close collaboration with local administration, village leadership, religious leaders and community members helps attain recruitment and retention in follow-up.Concerns about study participation are centered on the local taboo issues of blood collection and the requirement for contraception prior to and during study participation.In an approach to investigate immunological synergy between two vaccine candidates, Dr. Iman Satti discussed results of a phase 1 trial conducted in TB-negative, BCG-vaccinated adults living in the UK who received one or two intramuscular doses of a hAd35-vectored vaccine containing Ag85A, Ag85B, and TB10.4 followed by intradermal administration of MVA Ag85A, compared to three intramuscular doses of the adenoviral vector alone.CD4+ and CD8+ T-cell IFN-γ responses were significantly increased following the heterologous MVA85A boost of the hAd35 vaccine, but not after a second or third homologous hAd35 vaccination.Similar trial designs utilizing combination approaches may help to inform future optimal vaccination regimens, and lead to novel advances in TB vaccinology.Dr. Xuefeng Yu presented the progress of his company on utilizing mucosal delivery of Ad5Ag85A to boost parental BCG immunization.In animal models, including the mouse, guinea pig, goat and cow, intranasal vaccination with Ad5Ag85A generated memory T-cells on the surface of respiratory mucosa and provided superior lung protection to pulmonary TB compared to either intramuscular vaccination or BCG .Ad5Ag85A given intramuscularly was safe in a phase 1 study conducted in Canada and elicited CD4+ and CD8+ T-cell responses .Unexpectedly, in that study pre-existing anti-Ad5 immunity did not significantly affect the immunogenicity of the Ad5Ag85A vaccine.To support the application of Ad5-vectored vaccine as booster to BCG vaccination, Dr. 
Yu presented preliminary data of Ad5Ag85A challenge studies in the non-human primate model, as well as lessons learned from a recent phase 1 study of Ad5-vectored Ebola vaccine.In the NHP study, vaccination with either intratracheal or aerosol Ad5Ag85A conferred partial protection as compared to mock vaccination with saline .In the human trial, a single dose of IM Ad5-EBOV up to 1.6 × 1011 viral particles had an acceptable safety profile and was robustly immunogenic, even in participants with pre-existing adenovirus immunity .Rebecca Harris modeled epidemiological data to predict the potential impact of vaccination strategies with new TB vaccines in an aging population in China."According to Harris's model, the elderly will suffer most TB cases by 2050 and Mtb reactivation, as opposed to reinfection, is estimated to become the main source of incident disease in both the elderly and overall.The model incorporated elements of possible reduced transmission in the elderly due to a lower number of social contacts than young adults based upon data from contact studies in Southern China.Her results also suggest that the greatest impact would be seen with a vaccine that is efficacious post-infection and targeted to the older adult population rather than young adults and adolescents.Harris highlighted the importance of including older adult and elderly populations in clinical trials as early as possible in vaccine development, to ensure that new candidates will be indicated for the populations that will need them most and provide greatest population-level impact.Based on a study using TST and IGRA to determine LTBI prevalence rates at four rural sites in China, Dr. Lei Gao suggested that current China LTBI prevalence may be overestimated by TST, as compared to that measured by IGRA testing.Based on the China CDC 2000 data, an estimated 44% of Chinese would have LTBI.The newer studies suggest a significantly lower figure, however .Rates may historically have been elevated by repeat dosing of BCG vaccine.This study also identified close contacts, the elderly, and smokers as high risk groups for disease development, and therefore potential target populations for LTBI monitoring and intervention.The investigators are repeating TST and IGRA testing in a follow-up year, which should allow for the first accurate prediction of actual TB infection rates in this population.Initial data from the latency study would also suggest that new infection rates in the elderly are significant and have been markedly underestimated by the use of TST alone.From a retrospective epidemiology study in Southern Mozambique, a region with high TB-HIV co-infection, Dr. Alberto Garcia-Basteiro presented data that estimates an exceptionally high TB incidence in young adults living with HIV.Further, HIV prevalence in children with TB under 3 years of age was 44% , with TB diagnosed by gastric lavage in 30% of suspect cases.Given the low case detection rate of 39% in Mozambique , and because only a small proportion of patients receive antiretroviral therapy, HIV and TB co-infection will continue to be a major public health concern if no drastic improvements are made soon in diagnosing and treating TB/HIV in this region .Dr. 
Vidya Mave introduced RePORT India, a joint initiative between the US and India governments to better evaluate TB infection and disease rates in India.Several cohort studies are being conducted to collect data along with clinical samples to explore specific questions.As an example, Cohort for TB Research by the Indo-US Medical Partnership Multicentric Prospective Observational Study, a five-year prospective observational study consisting of three prospective cohorts – active TB, household contact and control cohorts – that aims to investigate both host and microbial factors associated with TB treatment outcomes, TB disease progression, and TB transmission in India was presented at the Global Forum.Mave discussed a massive undertaking to collect and store samples from study participants, which will be applied to future studies.The sessions emphasized that vaccination strategies should be based on demographic and TB epidemic patterns of a country.In countries such as China, targeting the elderly might achieve the greatest impact, whereas in some African countries where HIV prevalence and burden of TB in young adults are already very high, vaccinating the adolescent population will likely be of greatest benefit. | The 4th Global Forum on TB Vaccines, convened in Shanghai, China, from 21 – 24 April 2015, brought together a wide and diverse community involved in tuberculosis vaccine research and development to discuss the current status of, and future directions for this critical effort. This paper summarizes the sessions on Developing TB Vaccines for Prevention of Disease, Prevention of Infection, and Immunotherapy Indications; Concepts and Approaches in Clinical Research & Development; and Epidemiological Research. Summaries of all sessions from the 4th Global Forum are compiled in a special supplement of Tuberculosis. |
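The endpoint arithmetic that opens the record above (roughly 100 participants per TB disease endpoint at 1% incidence, and 8-10-fold fewer subjects for an infection endpoint) can be restated with a simple expected-endpoint calculation. The sketch below only reproduces that back-of-the-envelope logic; the infection rates in the loop are illustrative assumptions consistent with the 8-10-fold figure quoted in the text, not trial design parameters.

```python
def participants_per_endpoint(annual_event_rate, years_of_follow_up=1.0):
    """Roughly how many participants must be enrolled to expect one study endpoint."""
    return 1.0 / (annual_event_rate * years_of_follow_up)

# ~1% annual TB disease incidence -> ~100 enrolled per disease endpoint, as stated in the text.
print(participants_per_endpoint(0.01))            # 100.0

# If the annual infection (IGRA conversion) rate were ~8-10x the disease rate, a
# prevention-of-infection trial would need correspondingly fewer participants per endpoint.
for assumed_infection_rate in (0.08, 0.10):
    print(round(participants_per_endpoint(assumed_infection_rate), 1))
```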
31,426 | Multiplexed microsatellite markers for seven Metarhizium species | Insect pathogenic species of the fungal genus Metarhizium Sorokin are widely used in biological control of arthropod pests.Recent multilocus phylogenetic analyses of the genus resulted in delineation of a complex of nine species within the species Metarhizium anisopliae.The complex comprises a well-defined inner core, the PARB clade, including M. pingshaense, M. anisopliae, M. robertsii and M. brunneum.Furthermore, it includes a clade consisting of M. majus and M. guizhouense and three additional species, i.e., M. acridum, M. lepidiotae, and M. globosum.In order to further improve our understanding of genotypic diversity and population genetic structures within Metarhizium species, to track introduced strains in the environment, to assess their possible effects on indigenous Metarhizium populations, and/or to characterize cultivars, highly resolving genetic markers are required.Microsatellites, also known as simple sequence repeat markers, have proven to be an ideal tool for such purposes.Forty-one SSR markers have been isolated from three different Metarhizium strains originally identified as M. anisopliae but now recognized as M. anisopliae, M. brunneum and M. robertsii.Selected subsets of these SSR markers have been used to characterize genotypic diversity of Metarhizium isolates from different environments.The goal of this study was to test the applicability of the 41 SSR markers for multilocus genotyping in different Metarhizium species, taking into account the recent taxonomic refinements in this genus.This information was used to compile SSR marker sets applicable to several different Metarhizium species.A collection of 65 Metarhizium strains representing all nine Metarhizium species of the M. anisopliae species complex and two species of the M. flavoviride species complex was genotyped using 41 SSR markers.BLAST searches with reference sequences for these SSR markers demonstrated their presence and broad distribution in the genomes of M. anisopliae, M. brunneum and M. robertsii.Fifty-four of the strains used in this study were included in the most recent revision of Metarhizium and were obtained from the USDA-ARS Collection of Entomopathogenic Fungal Cultures or the Centraalbureau voor Schimmelcultures collection.The remaining eleven strains, among them the frequently used biological control agent M. brunneum strain BIPESCO 5, were obtained from other culture collections.Species affiliation of eleven strains not included in the study by Bischoff et al. was verified by sequencing and comparing the 5′ end of elongation factor 1 alpha as described by Bischoff et al.GenBank accession numbers are provided in Table S1.Each species was represented by at least 4 strains except for M. frigidum and M. 
globosum with only one strain each.The strains derived from insects or soils and originated from 29 countries representing all continents except Africa and Antarctica.Cultures were maintained and fungal mycelia were produced as previously described.Genomic DNA was extracted using Nucleo Spin Plant II DNA extraction kit.Forty-one SSR primer pairs were combined in sets of two or three pairs to perform multiplex touchdown polymerase chain reactions.PCR was conducted in 20 μl reaction volumes containing 10 ng genomic DNA, 0.2 μM of each primer, 0.2 mM dNTPs, 1 × GoTaq® Flexi Reaction buffer, 0.25 U of GoTaq® Flexi DNA Polymerase and 3 or 4 mM MgCl2.One primer of each pair was labeled with NED, HEX or FAM, respectively.Touchdown PCR conditions consisted of 2 min initial denaturation at 94 °C, followed by 12 cycles of 30 s denaturation at 94 °C, 30 s annealing at Ta + 12 °C, and 40 s extension at 72 °C followed by n cycles of 30 s denaturation at 94 °C, 30 s annealing at Ta and 40 s extension at 72 °C.PCR was terminated with a final elongation step of 15 min at 72 °C.PCR fragment sizes were analyzed using capillary electrophoresis as described previously.In seven species at least 21 markers were amplified from ⩾75% of the strains of a species.For the remaining species only 2–12 markers were amplified of which only one marker was polymorphic for M. acridum.For M. globosum and M. frigidum only one strain per species was included, thus results for these two species are considered tentative.The highest percentages of cross-species transferability and the highest numbers of polymorphic markers were obtained for species in the PARB clade.All SSR markers isolated from M. anisopliae were transferable to M. brunneum and M. pingshaense, and all markers obtained from M. robertsii were transferable to M. anisopliae and M. pingshaense.Cross-species transferability of SSR markers isolated from M. anisopliae, M. brunneum or M. robertsii was negatively correlated with phylogenetic distance based on sequence analysis of EF1alpha.Decreasing cross-species transferability with increasing taxonomic distance has also been observed for other fungal genera such as Lobaria and Phytophthora.Cluster analyses performed on SSR marker data did not correspond to the multilocus sequence phylogeny of Metarhizium and no species-specific clustering was obtained.The complexity of evolution of SSR and their flanking region, which may lead to convergence in allele sizes among species, render SSR markers as inappropriate for reconstructing phylogenetic relationships.Therefore, the use of SSR markers for species identification is limited and species affiliation should be based on DNA sequences, i.e. EF1alpha sequence comparison.Nei’s unbiased genetic diversity ranged from 0.21 to 1 and varied substantially among species and SSR loci tested.A significant correlation was observed between He and other indices of diversity such as Shannon index and evenness.PCR amplifications with all 41 SSR primer sets revealed single alleles for all strains, except for all M. majus strains, among which two alleles were obtained at one to eleven SSR loci per isolate.Polymorphism in M. majus strains depended on the locus and the particular strain.These results suggest duplication of the corresponding regions or possibly a diploid genome.Similar observations have been reported in previous studies of M. 
majus isolates using isozymes or genome sequence analyses.To provide robust and generally useful SSR genotyping tools for Metarhizium, i.e. polymorphic SSR markers that amplify reliably from as many Metarhizium spp. as possible, multilocus multiplex PCR methods were identified.For this purpose fifteen SSR markers were selected and grouped into five sets of three markers each.Selection was based on four criteria: applicability to M. anisopliae, M. brunneum, M. guizhouense, M. lepidiotae, M. majus, M. pingshaense, and M. robertsii; high within-species diversity; amplification success; and easily scorable SSR peak patterns.The markers were grouped according to matching PCR conditions and different allele size ranges to simplify marker scorability.Additional markers with high-resolution power for individual species can be selected from Tables S3 and S4.The currently most efficient approach to genotype Metarhizium isolate collections is to first perform SSR marker analyses using the five marker sets, then to determine species affiliation of individual multilocus microsatellite genotypes and finally, if further resolution is required, to use additional SSR markers appropriate for the identified species.This approach will be applicable and useful for strain characterization, tracking introduced BCA strains in the environment, and analyses of population genetic structures of M. anisopliae, M. brunneum, M. guizhouense, M. lepidiotae, M. majus, M. pingshaense, and M. robertsii. | Cross-species transferability of 41 previously published simple sequence repeat (SSR) markers was assessed for 11 species of the entomopathogenic fungus Metarhizium. A collection of 65 Metarhizium strains, including all 54 used in a recent phylogenetic revision of the genus, was characterized. Between 15 and 34 polymorphic SSR markers produced scorable PCR amplicons in seven species, including M. anisopliae, M. brunneum, M. guizhouense, M. lepidiotae, M. majus, M. pingshaense, and M. robertsii. To provide genotyping tools for concurrent analysis of these seven species, fifteen markers grouped in five multiplex pools were selected based on high allelic diversity and easy scorability of SSR chromatograms. |
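As an aside on the per-locus statistics referred to above (Nei's unbiased gene diversity and the Shannon index), the short Python sketch below computes both from a set of allele calls at a single SSR locus, treating the fungus as haploid with one allele per isolate. The allele sizes are hypothetical and the script only illustrates the formulas; the published values were not generated with it.

from collections import Counter
from math import log

def diversity_indices(alleles):
    """Per-locus diversity for haploid SSR data: Nei's unbiased gene diversity and Shannon index."""
    n = len(alleles)
    freqs = [count / n for count in Counter(alleles).values()]
    he_unbiased = (n / (n - 1)) * (1.0 - sum(p * p for p in freqs))  # Nei's unbiased He
    shannon = -sum(p * log(p) for p in freqs)                         # Shannon information index
    return he_unbiased, shannon

# Hypothetical allele sizes (bp) scored at one SSR locus for ten isolates of one species
he, sh = diversity_indices([212, 212, 218, 224, 224, 224, 230, 212, 218, 236])
print(f"He = {he:.2f}, Shannon index = {sh:.2f}")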
31,427 | Powerpath controller for fuel cell & battery hybridisation | There are many types of fuel cell available, all of which are applicable to this research, however, in this paper we will consider a Proton Exchange Membrane fuel cell, usually abbreviated to ‘PEM fuel cell’ .This is a power supply that converts hydrogen and oxygen gas into water and electricity."Unlike a battery which has a finite capacity due to it's self contained nature, a PEM fuel cell will keep supplying electricity as long as the fuel is continually supplied.Oxygen is often provided from using air from the surrounding atmosphere and the hydrogen can be supplied from pressurised tanks or a reformer."This also gives the advantage that a PEM fuel cell's voltage does not change over time as does a battery when it is depleting. "Conversely, every fuel cell's voltage does vary with load.The three mechanisms for losses in a PEM fuel cell which govern the voltage relationship with load are activation, ohmic and mass transport losses .The load and voltage relationship is well known as a polarisation curve and is constructed using steady state loads.This does not account for is a pulsed load, such as Pulse Width Modulation, which creates a duty cycle on the load .Using this technique it is possible to change the perceived load; for example, a 50% duty cycle on a 50 A load may only appear to be a 25 A load if the switching frequency is high enough.This will be important in a hybrid by allowing the partial load on the PEM fuel cell to be controlled, therefore allowing the voltage of the PEM fuel cell to be matched to the battery voltage.In this paper we will consider a LIPO battery in a flexible polymer case), which is well known to be much smaller, lighter and is able to be cycled considerably more than the other battery types such as lead acid and NiMH .In comparison to a PEM fuel cell, a LIPO has considerably higher power density but has a much lower energy density.This lends itself to a hybrid design, as is being used in many applications today .The LIPOs being used have a safe operating range of 3.2 to 4.2 V/cell.A 3 cell and 4 cell LIPO will be considered due to their similarity to the operating voltages of the Horizon H100 PEM fuel cell.Power electronics in hybrids have three main purposes; balancing, protection and regulation.Balancing is required to ensure that both supplies can be connected and is typically achieved using a DC–DC regulator on the PEM fuel cell output, set to match the varying LIPO voltage as it reduces during depletion .Protection is required to ensure each of the supplies only power the output and not each other, achieved using solid state diodes, in order to block any back, downstream capacitor discharge or any situations where once supply becomes higher than the other.Regulation is required to match the output voltage to what is required by the load.Again, a DC–DC regulator may be used .A traditional electronic hybrid is shown in Fig. 
1a.Regulator efficiency depends greatly upon the type of regulator being used and the difference between the input and output voltages.For low power devices, linear low-dropout regulators provide a simple solution to reduce output voltage however their efficiency reduces greatly the more current is drawn through the device.Higher power systems use more efficient switched mode regulators to either reduce or increase voltage or reduce only.With regard to switching regulators, the synchronous type remove the inefficient solid state diode found in non-synchronous types and replace it with a Metal Oxide Semiconductor Field Effect Transistor, greatly improving efficiency, but both still require inductors to smooth the pulsed current output found in even the most capable regulators).However, all types of switching regulator have complex circuits and are not easily scalable.Inductors also create a secondary issue of electromagnetic interference, which is not desirable in many applications today using wireless communications, including Remote Controlled aircraft.Due to inherent inefficiencies in both types of regulator they require heatsinks and/or fans to remove the lost energy dissipated from the electronics as heat, which in turn use up energy to power and control, increasing the system losses.In particular, linear regulators are not traditionally suited to high current applications such as RC aircraft which typically require 50 A .Reverse current protection is traditionally handled with simple solid state diodes .Solid state diodes typically induce a 0.5–1.5 V drop, therefore their efficiency is proportional to the load current using Joules’ First Law.This provides a large source of inefficiency, heat, size and weight for high current applications.Ideal diodes have been proposed before and are being used in a few low power devices such as mobile phones, however are yet to progress into high power applications outside of active AC–DC rectification .The knowledge from switching regulators can be used to control the flow of current more efficiently than semiconductor diodes by using transistor switches.It is well known that current will only flow from a high potential to a low potential, therefore we can use this principle along with a highly efficient Power MOSFET to switch a supply on or off depending on the voltage gradient between the input and output.In this case, if the output has a higher potential than the input, the switch will be off, and vice versa, but without the pronounced voltage drop experienced with a semiconductor diode."The ideal diode MOSFET is controlled by a comparator and amplifier embedded within an integrated circuit such that it's operation is completely autonomous with no user intervention.Moreover, multiple MOSFETs may be used in parallel to either increase efficiency or peak power capability, making the ideal diode system easily scalable.Using the discussed principles it may be possible to hybridise a PEM fuel cell with a battery without the need to force voltages to balance using a regulator, this will be defined as natural balancing.There are three common hybrid strategies which can be used for linking two electrical power sources to a motor; series, parallel and combined.These strategies have been discussed at length in the following papers, with a summary provided below; .The parallel hybrid shown previously in Fig. 
1a is simple and will provide power if either supply fails, and may have a peak power of the sum of the individual supplies.If using a mechanical linkage it requires a complex gearbox and with an electrical linkage it needs a regulator and diodes which, as discussed, can be a large source of electrical inefficiency.It also does not allow for recharging of the battery this system may be more suited to lead acid batteries which are less sensitive to varying input currents where the diodes can be removed to allow charging, with the disadvantage being a much lower power density battery as previously discussed.A series hybrid uses a battery as the direct power source to the motor, and the secondary supply as generator to recharge the battery.A series hybrid has mechanical simplicity but at the expense of added electrical complexity with battery charging circuitry.Battery chargers are also relatively low current devices due to the limitations of how quickly the chemistry within the battery will safely accept charge, meaning that the nominal load must be below the peak charge current for maximum endurance.Peak power is the sum of the battery output and the charger output, therefore may be much less than that of a parallel hybrid.Finally, a combined hybrid includes the complexities of both strategies by implementing allowing the secondary supply to power the load and recharge the battery simultaneously, providing greater system flexibility.The added electrical complexities traditionally introduce inefficiencies with the power regulation and reverse flow protection, however the peak power is the combined output of the two supplies as in the parallel case.When exploring system optimisation, if one supply is a high energy density PEM fuel cell and the other a high power density LIPO, it is found that different strategies are needed for different load cases.In a high load situation it would make sense to have a parallel hybrid to allow for power sharing and therefore an overall increase in peak power compared to having a series hybrid.However, in a low load situation, a series hybrid may be more favourable to allow battery recharge or a complete passthrough from the fuel cell to the load if the battery is charged.In many applications, the design power of the system will be above that of the charger capability, so a combined hybrid solves the excess overhead issue as previously mentioned, to ensure the system can still be optimised for endurance.Using regulators does not exploit the PEM fuel cell ability to operate in a wide voltage window.By duty cycling the load experienced by the PEM fuel cell is might be possible to naturally balance the voltage to the battery without forcing it with a regulator, improving the system output by reducing circuit complexity and the inefficiencies that go with it.Furthermore, using Power MOSFET switches in a dual ideal diode setup would greatly improve system efficiency and voltage output by removing the traditional diodes from the circuitry.The aim is to develop and demonstrate an efficient, naturally balanced fuel cell hybrid powertrain.The objectives are to:Develop a model to test the theory of natural balancing.Test the electronic hardware to prove the plausibility of natural balancing.Demonstrate a naturally balancing PEM fuel cell battery hybrid.Using a dual ideal diode the possibility to link two power supplies together safely without losses is achieved efficiently.However, without forced regulation there is no guarantee that both power sources can have an 
opportunity to power the output.Careful specification and understanding of the power sources is therefore more important in a naturally balancing hybrid.Fig. 2 shows the usable region of the Horizon H100 PEM fuel cell polarisation curve for this paper.This has been obtained from test data and has had a best fit line equated to it, the equation of which will form the model explained in Section Modelling.Overlaid on this curve are the typical LIPO voltages which show the potential naturally balanced regions of the hybrid for a 3 cell or 4 cell battery.It can be seen that with a 3 cell LIPO the naturally balanced region allows a PEM fuel cell partial load of 3.75 to 5.25 A.However, with a 4 cell LIPO this partial load is reduced to 1.6–3.75 A, but at a higher voltage.Analysing the two LIPO cell counts on Fig. 2, it can be seen that if a 4 cell LIPO is used the range at which the two supplies may balance is across the majority of the operational range of the PEM fuel cell from 12.8 to 16.8 V. With a 3 cell this is confined to the high current end of the polarisation curve from 9.6 to 12.6 V.In this paper a 3 cell was chosen in order to allow the PEM fuel cell to operate at its peak power whilst the battery is still at around 50% charge, whereas if a 4 cell was used the battery would be almost flat before peak power is achieved.It is important to note that going below the “flat” voltage of a LIPO, by dropping into a current region below 3.2 V/cell, would cause irreversible damage to the battery so is not considered here.The powerpath controller, shown at the centre of Fig. 3, has been analysed from an electrical flow perspective to understand the operation of the system as a parallel hybrid.In order for either of the supplies to safely power the load, it must have a higher voltage than the other supply.Therefore there are three potential outcomes; FC only, battery only or both.Both transistor switches “on” can only occur if the two supply voltages are equal, i.e. balanced, otherwise reverse flow would occur from one supply into the other.It is this logic that provides the diode functionality of the system by preventing a reverse voltage which would result in a reverse current flow.Analysing the expected dynamics of the system under normal operation gives the following.We are assuming that the current capacity of the PEM fuel cell is exactly 10 A for simplicity:When under no load, the PEM fuel cell voltage will be highest at just over 20 V, and the battery will be fully charged at 12.6 V. 
Therefore the PEM fuel cell transistor switch will be turned on, and the battery transistor switch will be off.If a large load is applied to the PEM fuel cell, it will initially supply the current for a very short time due to its internal capacitance however its voltage will quickly collapse below 12.6 V, at which point in time the switching will be reversed.At this point the battery will be under the large load and the PEM fuel cell under zero load.LIPOs have a very stable voltage under load so the voltage will hold with little reduction and the full load will be delivered to the motor electronics.A short time after this the PEM fuel cell voltage will recover to within 100 mV of the battery voltage, which will be slowly decreasing in proportion to the state of charge of the battery.Therefore the logic will change to the balanced state with both transistor switches on.At this point the PEM fuel cell will be part loaded, in share with the battery, causing its voltage to drop and the PEM fuel cell transistor switch to turn off.Now not under load, the voltage will recover and switch back to the balanced state and so on.This load and recovery cycling will create a duty cycle for each power supply.In this case the LIPO has a 100% duty cycle meaning it is connected to the output all of the time.The PEM fuel cell, however, may have a 50% duty cycle due to it recovering and then being in the balanced state for an equal amount of time.This will mean that the PEM fuel cell voltage measured will be at some point between the open circuit and loaded voltage as defined by the polarisation curve, assuming steady state.The higher the load, the faster the PEM fuel cell discharges when switched on, however recovery takes the same amount of time as driven by the PEM fuel cell chemistry, this therefore reduces the duty cycle.Conversely, if the load is low the PEM fuel cell voltage will be high and sustained thus creating a 100% duty cycle.The duty cycle will determine the load sharing ratio between the two supplies and allow the PEM fuel cell to contribute to the output power even when the output is above the capability of the PEM fuel cell.The voltage of the battery corresponds to the balancing voltage of the system, which has been plotted on the polarisation curve in Fig. 2 to determine the peak current deliverable by the PEM fuel cell in that state.Regulation of the PEM fuel cell voltage is not achieved, however, so the output of the system must be capable of handling up to the open circuit voltage of the PEM fuel cell but this will only ever be for very low load situations at low power.Typically brushless motors used in RC aircraft can handle a wide voltage range, e.g. 
9.6 to 25.2 V allowing for anything from a 3 cell to 6 cell LIPO to be used.Power MOSFETs have a typical "on resistance" of 2 mΩ which gives a theoretical transmission efficiency of 99.83% for the PEM fuel cell powerpath and 98.33% for the battery powerpath when under a maximum source load of 10 A and 100 A respectively.This is clearly a vast improvement on the efficiency experienced when using any type of DC–DC regulator, so it is worth investigating further.Moreover, DC–DC converters in the order of 100 A are not easily available and are very expensive.It is also important to note that this requires no protection diodes, due to the switch severing the connection between a high and low voltage supply, which would typically incur an additional 5 to 15% loss plus any demand from cooling fans and a cruise power increase due to the weight of the heatsinks.Models of the power supplies and electronics have been created in SIMULINK to explore the dynamic behaviour of a powerpath controller when the sources are a PEM fuel cell and a LIPO battery.The schematic for this can be seen in Fig. 4.At this point, if the calculated voltage is below that of the battery, then the battery voltage is used to back calculate the current that would be output by the PEM fuel cell.This simulates the binding phenomenon of the PEM fuel cell to the battery voltage when in a loadsharing mode.Since the controller operates on two transistor switches, binary logic is used.This can be replicated in simulation to create the system as previously explained in Section Theory of natural balancing.To test the theory and the model, a triangular ramp test has been used to understand how the hybrid reacts in the two major modes of operation: PEM fuel cell only and naturally balanced.The former mode will have a wide voltage range and should follow the polarisation curve of the PEM fuel cell and there should be no contribution from the battery.The latter mode should be at the voltage of the battery and the PEM fuel cell voltage should balance at this voltage.Increasing the load in the naturally balanced mode will alter the duty cycle and therefore the load sharing ratio between the two power supplies.The PEM fuel cell output should be constant with the battery filling in the remaining partial load.The simulation results of the natural balancing hybrid strategy are shown in Fig. 5.Firstly it can be seen that for low loads, the only contributor to the output is the PEM fuel cell and it follows the polarisation curve as the load increases and, most importantly, the battery current is not negative, showing the ideal diode setup works.Once the two voltages balance, the partial load provided by the PEM fuel cell remains constant and the partial load from the battery increases in proportion to the demand.When the demand reduces, the opposite occurs, and at the point where the PEM fuel cell voltage rises above that of the battery, the battery output is reduced to zero.As explained in the maths, the battery is modelled to show its depletion, which can be seen as the natural balance voltage reduces along the battery's discharge curve; in this test we are starting with a charged 3 cell LIPO.
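The binding behaviour described above, with the fuel cell latching to the battery voltage and the battery filling the current deficit, can be sketched in a few lines outside SIMULINK. The Python fragment below is not the authors' model: the polarisation-curve anchor points are read only approximately from the description of Fig. 2, the 12.3 V battery voltage is an assumed snapshot of a partly discharged 3 cell LIPO, and battery depletion over the ramp is ignored.

import numpy as np

# Approximate (current, voltage) anchor points for the H100 polarisation curve, taken loosely
# from the text (open circuit ~20 V, ~16.8 V at 1.6 A, ~12.6 V at 3.75 A, ~9.6 V at 5.25 A).
I_PTS = np.array([0.0, 1.6, 3.75, 5.25])
V_PTS = np.array([20.0, 16.8, 12.6, 9.6])

def fc_voltage(i_load):
    return np.interp(i_load, I_PTS, V_PTS)            # fuel cell voltage at a given current

def fc_current_at(v):
    return np.interp(v, V_PTS[::-1], I_PTS[::-1])     # current deliverable while held at voltage v

v_batt = 12.3                                         # assumed 3 cell LIPO terminal voltage
ramp = np.concatenate([np.linspace(0, 10, 6), np.linspace(10, 0, 6)])  # triangular demand (A)
for i_demand in ramp:
    if fc_voltage(i_demand) >= v_batt:
        i_fc, i_batt, v_out = i_demand, 0.0, fc_voltage(i_demand)      # fuel cell only mode
    else:
        i_fc = fc_current_at(v_batt)                                   # latched to battery voltage
        i_batt = i_demand - i_fc                                       # battery fills the deficit
        v_out = v_batt
    print(f"demand {i_demand:4.1f} A -> FC {i_fc:4.2f} A, battery {i_batt:5.2f} A, bus {v_out:4.1f} V")

Running the ramp reproduces the qualitative behaviour of Fig. 5: fuel-cell-only operation at low demand, then a roughly constant fuel cell contribution with the battery absorbing the remainder.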
"Since the reduction in balancing voltage alters the position on the PEM fuel cell's polarisation curve we actually get an increase in the partial load supplied by the PEM fuel cell as we move towards its peak power.Clearly, a battery with a higher capacity would deplete slower and delay this shift towards peak power.Simulation results have shown that the dual ideal diode powerpath controller setup with a PEM fuel cell and battery allows both power supplies to be on at the same time, but only when there is a positive potential difference between the source and the output, ensuring that there is no reverse current flow which might damage the power supplies.This occurs in load situations above the battery voltage shown on Fig. 2.When the load is below the respective battery voltage then the PEM fuel cell is the only supply connected to the output and the LIPO goes unused.As the battery depletes, the balance voltage reduces and so the potential PEM fuel cell contribution naturally increases as the balance point moves to the right on Fig. 2.At no time is a DC–DC regulator or traditional diode required.The PEM fuel cell being used is the Horizon H100, a 100 W hydrogen fuel cell.The hydrogen is fed at 0.5 bar and the air is atmospheric as per the datasheet .The stock controller has been removed and a new custom one fitted; however, for this paper it is set to mimic the purging strategy used by the stock controller.The controller is powered by the hybrid output, rather than a separate battery, so the system is fully closed and not dependant on any external mains supply thus capable of long endurance running.This extra load, in parallel with the output load, is variable and up to 10 W dependant upon processor load at any given time.The controller is also the datalogger and user interface for the whole system.It is responsible for safely controlling the PEM fuel cell, recording results and sending the instantaneous load requirements to the digital loadbank via TCP at each timestep.The battery being used is a Hyperion G3, 3 cell, 3300 mAh, 35C LIPO.This has a potential difference range as previously discussed of 9.6 to 12.6 V, is well worn in and fully charged and balanced to 4.2 V/cell.The power electronics have been custom designed and incorporate Power MOSFETs as the main switches and the Linear Technologies LTC 4416 dual ideal diode controller as the powerpath controller.The battery powerpath only requires a single MOSFET, however the PEM fuel cell powerpath has two in series in opposite polarisations to overcome the body effect of the semiconductor.The only practical difference here is that the resistance of that powerpath will be double that of the other, leading to a slightly lower overall efficiency.The power electronics have shunt resistors on the two supplies in order to measure current through a 16-bit analogue-to-digital converter.The sensor readings are taken and saved in the datalogger which is a Raspberry Pi computer.A digital loadbank is being used to realise the demand from the system which is sent from the user interface on the Raspberry Pi computer to the loadbank over the University network.It is configured to demand a given current rather than power to mimic three-phase brushless motor controllers used in hobbyist remote control applications.It has also been used to help calibrate and verify the ADCs measurements.Three tests have been designed in order to fully verify the dynamics of the system proposed in Sections Theory of natural balancing and Modelling.The first test 
explores the high speed switching behaviour using a bench Power Supply Unit on one powerpath and a LIPO on the other.This will help verify that the hybrid can operate along either powerpath independently, or with both on simultaneously, whilst ensuring no reverse flow, thus still acting as a diode.Test two endeavours to prove that the system does naturally balance if the two supply voltages are the same.This is achieved by using two LIPO batteries at the same state of charge, therefore the same voltage, connected one after the other."The final test integrates the PEM fuel cell into the system, with it's previously discussed voltage dynamics, and a 3 cell LIPO battery.This test will show that the hybrid works as expected from the basic model developed in Section Modelling.In order to protect both power supplies, the controller must ensure that the potential difference from the supply to the output is always positive when the respective transistor switch is on, otherwise it must be off to protect against reverse current flow.Using a stable voltage from a LIPO and an adjustable voltage from a bench PSU, Fig. 6 shows the switching logic of the powerpath controller as the voltage output to the gates of the MOSFET switches.The MOSFETs being used are p-channel so as the gate-ground voltage reduces, the gate-source voltage becomes negative and the switch turns on.Initially the LIPO is a higher voltage than the PSU and as such the battery transistor switch gate is ‘low’, and the PSU gate the opposite, proving that the negative voltage across the PSU transistor switch caused the controller to turn it off to provide reverse current protection.As the PSU voltage is increased there is a point at which the system is balanced before the PSU holds a higher voltage and the switching logic is completely flipped compared to the start to protect the LIPO.After a brief pause, the PSU voltage is ramped down and following a slightly longer balanced period and the logic is switched again in favour of the LIPO taking up the full load and the PSU being protected.Test two explored the balanced mode of the hybrid, shown in Fig. 6 when both switches are on together at 60 to 120 μs and 295 to \u200b380 μs.Using two batteries at the same state of charge, with just battery 1 connected, it can be seen that battery 1 transistor switch gate is biased on, and battery 2 transistor switch gate is off.After 15 ms battery 2 is connected and Fig. 7 shows it takes the full load as battery 1 transistor switch gate is immediately switched off.This is due to the output load causing battery 1 voltage to be slightly reduced, whereas, at the initial point of connection battery 2 is under no load so holds a higher voltage.Once the internal capacitance effect of battery 2 is depleted at 21 ms, battery 1 is switched back on as they are now within 100 mV of each other as defined in Section Theory of natural balancing.Using the Horizon H100 PEM fuel cell and Hyperion 3 cell LIPO, Fig. 8 shows the results of running a triangular ramp test as previously done in simulation in Section Modelling and depicted by Fig. 
5.Of particular interest is that the battery did not discharge as much as expected but a slight voltage dip can be seen when the LIPO is loaded, an effect which was ignored in the battery model approximation."Initially, as the load is applied, the PEM fuel cell takes up the load and it's voltage reduces in line with it's polarisation curve.At the point the battery is not contributing at all.When the load increases to a point at which the PEM fuel cell voltage matches that of the LIPO, the load sharing situation begins as the system is now naturally balanced as explained in Section Theory of natural balancing without any forced voltage regulation."As the load continues to increase, the loadsharing ratio increases in favour of the LIPO however the PEM fuel cell output remains constant, as does it's voltage. "As the load is reduced, the reverse happens where the PEM fuel cell unlatches itself from the LIPO voltage and takes full control of the load with respect to it's polarisation curve.All voltages and loads in the test would typically be accepted by a brushless motor and speed controller combination as discussed in Section Theory of natural balancing.The peak power experienced by the Horizon H100 72 W."It is important to note that the deficit from peak power of ∼18 W is not due to losses, it just isn't generated due to the balancing voltage with respect to the polarisation curve in Fig. 2, thus the fuel is not wasted.Also, it is interesting to note that a PEM fuel cell under part load may be more efficient than when it is at peak power in terms of chemical efficiency .By measuring the voltage drop from input to output of the two powerpaths at different loads, the efficiency of the system can be determined.Results for this are shown in Table 1.The main sources of inefficiency in the hardware are the Power MOSFET switches, the shunt resistors and the PCB tracks.In principle, these inefficiencies will increase with load due to the internal resistance of the components, which will cause these components to become warm.This limits the maximum load for the PEM fuel cell powerpath to 10 A and the LIPO powerpath to 100 A for this particular hardware design.Temperature impacts the internal resistance of the components and for this test no forced or passive cooling was applied, it was only just warm to the touch.Notably, to increase the power capabilities of this hardware only thicker copper PCB tracks, more MOSFETs in parallel and higher power rating shunt resistors would be required.All these modifications would serve to either preserve efficiency or may even improve it, meaning the hardware is easily scalable to larger power applications unlike traditional regulator and diode hybrids.Test results show that the powerpath controller with dual ideal diode acts as an efficient parallel hybrid.In particular, the theory of natural balancing between a PEM fuel cell and a battery has been proven and shows that traditional diodes and regulators are not required.This allows for a highly efficient hybrid using only MOSFET switches as the link between input and output, resulting in reduced losses, size, heat generation and weight.It is also known that the system is scalable with no reduction of efficiency due to simple electronics knowledge of MOSFETs in parallel.In this paper we identified that using regulators to hybridise a PEM fuel cell with a battery does not exploit the natural ability of a motor and PEM fuel cell to operate over a wide voltage range.Moreover, the use of solid state diodes to protect 
one supply from being powered by the other results in not only power losses; but an increase in mass, size and temperature of the power electronics.When at loads that would cause the to have a voltage below the battery, the PEM fuel cell latches to the battery voltage and the battery supplies the current deficit.The voltage is held high by enforcing a high speed duty cycle on the PEM fuel cell output to match it to the battery voltage.By choosing a battery whose nominal voltage aligns with the peak power voltage of the PEM fuel cell, in this case a 3 cell LIPO at a nominal 11.1 V, the PEM fuel cell can contribute more to the output than a higher voltage battery.The system is compatible with a more powerful 4 cell LIPO at a nominal 14.8 V however the PEM fuel cell contribution will be lower, reducing the overall system endurance.In this case you would also need to increase the cell count of the PEM fuel cell to match the battery and modify the power electronics as previously explained in order to recover the system efficiency but at a higher overall power.We have realised a new control strategy to switch highly efficient Power MOSFETs using a dual ideal diode powerpath controller in a high current powertrain, with considerably less losses than traditional regulators and the same protection capabilities of solid state diodes.This methodology also exploits the natural ability of the PEM fuel cell to operate at different voltages and the wide operating range of a brushless motor.The system is also completely self-contained by using power from the output rail to run the PEM fuel cell controller, datalogger and communications.This makes the system completely off-grid, allowing for use in transport applications or anywhere where mains power is not available.Importantly, the results shown in this paper include the full power requirements of the system, so are not invalidated by needing an external power supply for any part. | Proton Exchange Membrane (PEM) fuel cells are a chemically fuelled power supply which generally have a higher energy density than Lithium-Polymer Battery (LIPOs) but a much lower power density. In order for PEM fuel cells to increase the endurance of an in-service battery power supply, without decreasing the peak power, it should be hybridised with a battery. It is key for the market that the overall switch to hybrid technology is low cost in terms of size, weight and money. Hybrid technology tends to be generically designed to suit any power system, using regulators to ensure voltage matching, and diodes to control the direction of electrical flow. Many electric motors are controlled by speed controllers which can regulate the thrust provided by the motor, accounting for fluctuations in voltage usually found in a depleting battery. Using diode and regulator based hybrids for electric motor applications is therefore inherently inefficient even if complicated synchronous DC-DC converters are used due to the increased cost, size and weight. This paper demonstrates the ability to use ideal diodes to control the flow of electricity through the hybrid and that voltage regulation is not needed for a motor in this case. Furthermore, this paper explores the natural balancing strategy created by duty cycling the PEM fuel cell to different points within it's polarisation curve, removing the requirement for DC-DC converters to match it to the battery voltage. The changes made improve the efficiency of the hybrid power electronics to over 97%. |
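The transmission-efficiency figures quoted for the MOSFET powerpaths (99.83% at 10 A and 98.33% at 100 A for a 2 mΩ on-resistance) follow directly from conduction loss alone. The sketch below reproduces them under the assumption of a roughly 12 V rail, which the text does not state explicitly; the two-MOSFET case for the fuel cell path is included for comparison.

def transmission_efficiency(i_load, r_on, v_bus):
    """Fraction of source power delivered through a MOSFET powerpath; conduction loss is I^2 * R_on."""
    return 1.0 - (i_load * r_on) / v_bus

R_ON = 0.002    # 2 mOhm typical on-resistance per device
V_BUS = 12.0    # assumed rail voltage for the worked figures

print(f"Fuel cell path, 10 A, one MOSFET:    {transmission_efficiency(10, R_ON, V_BUS):.2%}")    # ~99.83%
print(f"Battery path, 100 A, one MOSFET:     {transmission_efficiency(100, R_ON, V_BUS):.2%}")   # ~98.33%
print(f"Fuel cell path, 10 A, two in series: {transmission_efficiency(10, 2 * R_ON, V_BUS):.2%}")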
31,428 | Waning protection following 5 doses of a 3-component diphtheria, tetanus, and acellular pertussis vaccine | Pertussis vaccines derived from whole Bordetella pertussis organisms and combined with diphtheria, tetanus toxoid were available from the 1940s and were effective , but were also associated with safety concerns , which ultimately led to the development of acellular pertussis vaccines combined with diphtheria, tetanus toxoid .By the late 1990s, the United States had switched from DTwP to DTaP vaccines for all recommended doses .The Advisory Committee on Immunization Practices recommends 5 doses of the DTaP vaccine at ages 2, 4, 6 and 15–18 months, followed by a 5th dose given between ages 4 and 6 years.Data regarding the efficacy, safety, and immunogenicity of using vaccines from multiple brands in a DTaP vaccination series are lacking; therefore, the Food and Drug Administration and ACIP recommend immunizing with the same DTaP vaccine brand throughout the entire immunization series.If the prior vaccine brand cannot be determined or is not available, ACIP recommends using any of the licensed DTaP vaccines.In the US, there are currently 2 available DTaP vaccines, a 3-component DTaP vaccine and a 5-component DTaP vaccine .Despite high levels of vaccine coverage among children and adolescents, an increase of pertussis incidence has been observed since 1980s, with peaks arising every 3–5 years .California experienced its largest pertussis outbreak in more than 50 years in 2010–2011, followed by an even larger outbreak in 2014–2015.Although not the only factor, waning of vaccine effectiveness following 5 DTaP doses plays a central role in recent outbreaks .We reported that during California’s 2010 pertussis outbreak, protection from pertussis following the 5th dose of DTaP from any manufacturer waned on average 42% each year .To our knowledge, studies to date have not evaluated waning after 5 DTaP doses of the same type of acellular pertussis vaccine or manufacturer.It is not known whether waning after 5 doses of the same brand of DTaP in children vaccinated according to the US schedule is similar to the DTaP waning reported following 5 doses of DTaP from multiple manufacturers .This study assessed the durability of protection against pertussis following California’s 2010 and 2014 pertussis epidemics in children who received 5 doses of 3-component DTaP vaccines in Kaiser Permanente Northern California, according to the US schedule.KPNC is an integrated health care delivery system which provides care to approximately 3.5 million members.KPNC operates 49 medical clinics and 21 hospitals, including pharmacies and laboratories.KPNC electronic databases capture vaccinations, laboratory tests, and inpatient, emergency room, and outpatient diagnoses.KPNC’s single centralized laboratory has identified Bordetella pertussis and Bordetella parapertussis using polymerase chain reaction since 2005.PCR results are categorized as positive for Bordetella pertussis; positive for Bordetella parapertussis; or negative for both.PCR kits were supplied by Roche from December 2005–May 2009 and by Cepheid from May 2009 onwards .Within KPNC, DTaP was introduced for the 5th dose in 1991, the 4th dose in 1992, the 3-dose primary series in 1997, and all 5 childhood doses by 1999.Persons born before 1999 either received all DTwP vaccines or a mix of DTwP and DTaP vaccines.This was a case-control study in which we selected cases and controls from all KPNC members from January 1, 2006 through March 31, 2015.We 
included all KPNC members during the study period who met the following criteria: were born in 1999 or later; and received 5 doses of DTaP vaccines in KPNC between 1 and 84 months of age, with the doses distributed as follows: 3 doses between 1 and 11 months, 1 dose between 12 and 46 months, and 1 dose between 47 and 84 months.We excluded persons who received reduced-antigen-content pertussis vaccine before the PCR test, who received any pertussis-containing vaccine between the 5th dose and the PCR test, who received a PCR test within 2 weeks of the 5th DTaP dose, and children who were not KPNC members for greater than 3 months between the 5th DTaP dose and PCR test.We applied the same exclusion criteria for cases and controls, and excluded individuals as controls if they were an earlier case.Finally, we excluded persons who were older than 12 years of age who had not yet received Tdap or who met the other exclusion criteria above.Cases: We included as potential cases all individuals who tested pertussis PCR-positive and parapertussis negative during the study period and who had received either 5 doses of DTaP3 or 5 doses of DTaP vaccines regardless of type or manufacturer before testing PCR-positive, depending on the study objective.Controls: We utilized two control groups.The first control group consisted of persons who tested PCR-negative for both pertussis and parapertussis and who received 5 doses of either DTaP3 or any DTaP vaccines before testing PCR negative.The second control group consisted of all KPNC members of the same sex, age, race or ethnic group, and medical clinic as each pertussis case and who were members on the date the matched case tested PCR-positive.We retained all KPNC-matched controls who received 5 doses of either DTaP3 or any DTaP vaccines prior to their anchor date.PCR-negative controls were intended to be comparable to the cases with regard to propensity to seek care and to get the PCR test when symptomatic.However, this comparison group is vulnerable to “collider bias” such that the test negative individuals could differ from the cases with respect to unmeasured selection factors.For this reason, we also used the larger comparison groups of KPNC-matched controls who were not vulnerable to this particular bias.We used the PCR-negative comparison group for our primary analysis because we were most concerned about biases related to healthcare seeking behaviors.Collider bias was of less concern because our study population all received 5 doses of DTaP and the timing of the 5th dose seemed unlikely to be driven by confounders.To better put the current results into the context of our prior study on DTaP waning, we also identified an additional study population described previously .Individuals in “any 5th DTaP” met all the above criteria, with the exception of the stringent DTaP vaccine history requirements.The “any 5th DTaP” vaccine requirement was receipt of any type of DTaP vaccine in KPNC between 47 and 84 months of age.This study had two primary aims.The first was to estimate the waning of DTaP3 protection against pertussis infection by comparing time since the 5th DTaP3 dose between cases and PCR-negative controls who had all received 5 doses of DTaP3 vaccines.The second was to estimate the waning of DTaP protection against pertussis infection by comparing time since the 5th DTaP dose between cases and PCR-negative controls who had all received 5 doses of “any DTaP” for their childhood vaccination series.The secondary aims were the same except that waning 
protection following the 5th DTaP dose was estimated by instead comparing cases to KPNC-matched controls.We fit conditional logistic regression models to examine cases versus controls in relation to time since the 5th DTaP dose.We modeled time since the 5th DTaP dose as a continuous variable and estimated the odds ratio per 365 days since the 5th DTaP dose.This OR indicates the average percent increase in the odds of acquiring pertussis per year of additional time since the 5th DTaP dose.We raised the per-year OR to the power of 2 to see how much pertussis risk increased over 2 years since the 5th DTaP dose, to the power of 3 to see how much pertussis risk increased over 3 years since the 5th DTaP dose, and so on.For the primary analysis, we conditioned the logistic model on calendar time intervals that ranged in width from monthly during the epidemic periods to yearly when cases were infrequent.We used covariates to adjust for age, sex, race or ethnic group, and medical clinic.For the secondary analysis, we conditioned the logistic model on the PCR test date and all the matching variables, and we used imputed probabilities of race or ethnic group as covariates for additional adjustment for the strata of children with imputed data.Data on race or ethnic group was available for approximately 98% of the study population, and for the rest we imputed race or ethnic group using the Bayesian Improved Surname Geocoding Method algorithm .This study was approved by the KPNC Institutional Review Board.We used SAS software, version 9.2 or later for all analyses.The final study population consisted of children 4–12 years of age at the PCR test.The DTaP3 population consisted of 340 pertussis cases, 3841 PCR-negative controls, and 4700 KPNC-matched controls, while the “any DTaP” population consisted of 462 pertussis cases, 5649 PCR-negative controls and 7126 KPNC-matched controls.There were differences in the year of PCR test, age, and race or ethnic group comparing cases with PCR-negative controls in both the DTaP3 and “any DTaP” population.Most DTaP vaccines administered to the “any DTaP” population were manufactured by GSK.Cases were more likely to be farther away from their 5th DTaP dose than were PCR-negative controls for children in both the DTaP3 and “any DTaP” study populations.Logistic regression analyses that modeled time since the 5th DTaP dose found substantial waning after the 5th dose.Among children in the DTaP3 population, the odds of being PCR-positive for pertussis each additional year after the 5th dose was 1.27, comparing cases with PCR-negative controls.Findings were similar when comparing cases with KPNC-matched controls.Among children in the “any DTaP” population, the odds of being PCR positive for pertussis per year since the 5th dose was 1.30 when compared with PCR-negative controls.Results when comparing cases with KPNC-matched controls in the “any DTaP” population were similar.Results were similar for the “any 5th DTaP” sensitivity analyses consisting of 706 pertussis cases, 9060 PCR-negative controls, and 17,160 KPNC-matched controls.This study found that among children who had only ever received DTaP3 vaccines, pertussis risk increased, on average, by 27% each year as vaccinees got farther away from their 5th dose of DTaP3.Similarly, when evaluating waning after 5 DTaP doses regardless of manufacturer, pertussis risk increased by 30% for each additional year after the receipt of the 5th DTaP dose.The finding was not unexpected because most of the DTaP vaccines administered in 
KPNC during the study period were DTaP3.Of the children in this study who did not receive 5 DTaP3 doses, most received 3 or 4 doses of DTaP3 and more than 98% of the "any DTaP" population received at least 1 dose of DTaP3.Overall, this study found that there is substantial waning of protection against pertussis as children become more remote from their 5th dose of DTaP.Our findings allowed us to estimate the trajectory of DTaP3's waning effectiveness during the years after the 5th dose, given a reasonable estimate from other studies about initial vaccine effectiveness (VEi).For example, if VEi was 90% then our finding – that pertussis risk increases by 27% per year after the 5th dose – implies that after k years, VE would be: VEk = 100% × [1 − (1 − VEi) × 1.27^k] = 100% × [1 − 0.10 × 1.27^k].Thus, after 5 years, VE5 would amount to 100% × [1 − 0.10 × 1.27^5] = 100% × 0.67 = 67%, indicating that VE had decreased to 67% 5 years after the 5th dose of DTaP3.We also looked at waning immunity using models that specified time since the 5th DTaP dose as a set of 7 indicator variables rather than as a continuous variable.VE estimates decreased monotonically: in each interval, pertussis risk was higher than in the preceding interval.We previously reported that annual waning after the 5th DTaP dose was 42%.In that prior study, we considered receipt of any DTaP dose between 47 and 84 months of age within KPNC as the 5th DTaP dose, and the study period ended in 2011.For the current study, our inclusion criteria were substantially more stringent, requiring that all 5 DTaP doses were administered within KPNC, and the study period extended to 2015.In order to consider the current results in the context of our previous findings, we performed sensitivity analyses using the same inclusion criteria as previously with data from both the 2010 and 2014 epidemics, and found that pertussis risk increased by 37% per year on average.This finding, which includes the additional 2014 epidemic data, is not directly comparable to our previous finding of 42% annual waning, but it demonstrates consistency between the two waning estimates and strengthens our current finding that there is substantial waning of protection after 5 doses of either DTaP3 or any brand of DTaP.This study is also consistent with a recent meta-analysis of DTaP waning that estimated that the odds of pertussis increased by 33% each additional year since the last dose of DTaP.Further, a recent Australian study evaluated children through age 3 years following 3 doses of DTaP3 administered in infancy and found that VE against laboratory-confirmed pertussis was 83.5% after the 3rd dose between ages 6–11 months, 79.2% at age 1 year, and 59.2% at 3 years of age.The Australian study suggests that 3 doses of DTaP3 vaccine provide reasonable protection during the first year of life, but that this protection waned substantially even before administration of their next recommended DTaP dose between ages 4 and 6 years.Until recently, the Australian schedule differed from the US in that they did not administer a 4th DTaP dose during the second year of life.This study had limitations.First, because of our high levels of DTaP vaccine coverage, we could only compare children who were more versus less recently vaccinated with a 5th DTaP dose, rather than compare vaccinated with unvaccinated children.Thus, we were able to estimate how the risk of pertussis associated with the waning VE after the 5th DTaP dose increases over time, but we could not estimate the initial VE of the 5th DTaP dose.Second, PCR testing may misclassify pertussis status for a few
individuals; however, such misclassification is unlikely to be related to time since vaccination.Further, results were consistent when comparing cases with both PCR-negative or KPNC-matched controls.In addition, as discussed above, the results of this study are not fully comparable with our previously published DTaP waning results because this current study included more recent outbreak data.In conclusion, there is substantial waning of protection against pertussis as children become more remote from their 5th dose of DTaP3.Risk of contracting pertussis increased on average by 27% per year since vaccination after the 5th dose of DTaP3.As this cohort of children and adolescents who only received acellular pertussis vaccines ages and continues to grow larger, we will be increasingly vulnerable to pertussis outbreaks until vaccines which provide more enduring protection are developed.Pediarix, Kinrix and Infanrix are trademarks of the GSK group of companies.NPK and RB received research grant from the GSK group of companies for the study conduct.NPK and RB report unrelated research support to their institution from the GSK group of companies, Sanofi Pasteur, Merck and Co, Pfizer, MedImmune, Nuron Biotech, and Protein Science.POB is employed by the GSK group of companies and holds shares in the GSK group of companies as part of his employee remuneration.GK was employed by the GSK group of companies at the time of the study conduct and is currently employed by CSL Behring.GK also reports holding of shares in the GSK group of companies and in CSL Behring as part of her employee remuneration.JB, BF and LA report no conflicts of interest.GlaxoSmithKline Biologicals SA funded this study and was involved in all aspects of the study, including study design and interpretation of the data.GlaxoSmithKline Biologicals SA took charge of any publication costs.All authors were involved in the conception of the design of the study and participated to the development of this manuscript: NK wrote the first draft, JB, BF and RB collected or generated the data, NK, JB, BF, and RB analyzed study data, NK, JB, BF, POB, GK and RB interpreted study data.All authors revised the manuscript critically for important intellectual content.All authors approved the final version before submission, except RB who died prior to manuscript submission.All authors agree to be accountable for all aspects of the work.The corresponding author had final responsibility to submit for publication. | Background The effectiveness of diphtheria, tetanus, and acellular pertussis (DTaP) vaccines wanes substantially after the 5th dose given at ages 4–6 years, but has not been described following 5 doses of the same type of DTaP vaccine. We investigated waning effectiveness against pertussis in California over nearly 10 years, which included large pertussis outbreaks, following 5 doses of GSK DTaP vaccines (DTaP3). Methods We conducted a case-control study (NCT02447978) of children who received 5 doses of DTaP at Kaiser Permanente Northern California from 01/2006 through 03/2015. We compared time since the 5th dose in confirmed pertussis polymerase chain reaction (PCR)-positive cases with pertussis PCR-negative controls. We used logistic regression adjusted for calendar time, age, sex, race, and service area to estimate the effect of time since the 5th DTaP dose on the odds of pertussis. Our primary analysis evaluated waning after 5 doses of DTaP3. We also examined waning after 5 doses of any type of DTaP vaccines. 
Results Our primary analysis compared 340 pertussis cases diagnosed at ages 4–12 years with 3841 controls. The any DTaP analysis compared 462 pertussis cases with 5649 controls. The majority of all DTaP doses in the study population were DTaP3 (86.8%). Children who were more remote from their 5th dose were less protected than were children whose 5th dose was more recent; the adjusted odds of pertussis increased by 1.27 per year (95% CI 1.10, 1.46) after 5 doses of DTaP3 and by 1.30 per year (95% CI 1.15, 1.46) after any 5 DTaP vaccines doses. Conclusions Waning protection after DTaP3 was similar to that following 5 doses of any type of DTaP vaccines. This finding is not unexpected as most of the DTaP vaccines administered were DTaP3. Following 5 doses of DTaP3 vaccines, protection from pertussis waned 27% per year on average. NCT number: NCT02447978. |
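The waning trajectory worked through in the discussion above can be reproduced with a one-line function. This is only a restatement of the paper's own arithmetic: the 1.27 per-year odds ratio is the study's DTaP3 estimate, whereas the 90% initial effectiveness is the illustrative assumption used in the text rather than a quantity estimated by the study.

def waned_ve(ve_initial, or_per_year, years):
    """Vaccine effectiveness k years after the 5th dose, with the odds of pertussis rising by a constant per-year factor."""
    return 1.0 - (1.0 - ve_initial) * or_per_year ** years

OR_PER_YEAR = 1.27   # per-year odds ratio after 5 doses of DTaP3 (this study)
VE_INITIAL = 0.90    # assumed initial effectiveness of the 5th dose

for k in range(6):
    print(f"{k} years after dose 5: VE ~ {waned_ve(VE_INITIAL, OR_PER_YEAR, k):.0%}")
# prints 90%, 87%, 84%, 80%, 74%, 67%

With these inputs the 5-year value matches the 67% quoted in the discussion.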
31,429 | Using Actiwatch to monitor circadian rhythm disturbance in Huntington' disease: A cautionary note | Huntington's disease is a progressive neurodegenerative disorder presenting in midlife with a triad of motor, emotional and cognitive symptoms.It is well established that the mutation causing HD affects the central nervous system, in particular, the medium spiny neurons of the striatum.The motor disorder is most widely known for the characteristic and striking involuntary movements, in particular chorea and dystonia, but the cognitive decline and emotional disturbance are often more debilitating.Moreover, there are a range of other symptoms such as weight loss and sleep disturbance that are commonly reported in HD and can be difficult to manage."Sleep disturbance can be distressing for both patients and carers, may exacerbate the cognitive decline, and is also reported in many other progressive neurodegenerative disorders such as Parkinson's disease and Alzheimer's disease.Sleep disturbance in HD has been reported as occurring very early in the course of the disease in both animal and human studies and is temporally associated with cognitive deterioration.It is a disruptive symptom for both affected individuals and their carers and currently is by necessity treated empirically and often with little success.Although there is some evidence of degeneration of the hypothalamus in HD individuals and evidence of lowered melatonin levels the mechanism underlying circadian rhythmicity disruption and sleep disturbance are still unclear and need to be elucidated in order to develop effective treatments.There is evidence that both mouse and rat transgenic models of HD can recapitulate sleep disorders reported in patients, making them potentially important means for understanding the mechanisms underlying circadian disruption and sleep disturbance.For example, the circadian behaviour of R6/2 mice was found to be disturbed, with increased daytime and reduced nocturnal activity along with disruption of circadian clock genes in the suprachiasmatic nuclei, motor cortex and striatum.A relationship between circadian disruption and cognitive decline in R6/2 HD mice was inferred when the pharmacological imposition of sleep and wakefulness with Alprazolam and Modafinil was shown to improve cognitive function in the mutant animals.Less work has been done in transgenic rat models, although a transgenic rat model of HD mirrors the sleep-wake disturbances seen in the mice, with accompanying reduction in the levels of adrenergic α2 receptors and leptin.The capacity to translate between human and rodent studies is important in facilitating understanding of the biology of circadian disruption in HD, and a number of techniques have been employed to investigate sleep architecture in HD patients and rodents, with polysonography being recognised as the gold standard with EEG as a necessary and important element of this.However, full polysonography, or even simple ambulatory EEG, are cumbersome, difficult to perform over prolonged periods of time for technical and acceptability reasons, and relatively expensive.Thus, there have been attempts use wearable devices such as Actiwatch® to assess movement over one or more 24 h periods as a surrogate for sleep on the basis that individuals tend to move substantially more when they are awake than asleep.Indeed Actiwatch® technology has been used in a number of HD circadian and sleep studies and could represent a simple and acceptable way of recording circadian rhythm and 
sleep in both people and animal models of HD.Here we compare Actiwatch® recording, ambulatory EEG and sleep diaries in small numbers of HD individuals as a prelude to developing a translational platform to assess circadian disruption in this disorder.While the gold standard for sleep analysis is polysomnography, in our study we adapted and confined the test to EEG recording as we were interested in differentiating between wakefulness and sleep this was considered sufficient and found to be pragmatic."Thirteen participants were recruited to this study from the South Wales Huntington's disease clinic, based in Cardiff.Inclusion criteria included a positive genetic test for HD, being above the age of 18 and below the age of 65, and having no concomitant medical conditions."Asymptomatic individuals were defined as having a Unified Huntington's Disease Rating Scale total functional score of 13/13 and an UHDRS motor score of less than 6, and symptomatic individuals having a TFC between 4 and 11 and a total motor score greater than 20.Assessment of disease status were undertaken by an experienced neurologist.Nine community control individuals were recruited.The demographics of the study participants are summarised in Table 1.In the presymptomatic group, one patient was on oral contraceptive and one on buproprion.Of the symptomatic patients, one was on an anti-psychotic, olanzapine, one on an antidepressant, two were on hypnotics, one on asprin and another on aspirin and ibroprofen.In the presymptomatic group there were 2 ex-smokers and 1 non-smokers.In the symptomatic group there were 2 ex-smokers, 1 smoker and 6 non-smokers.One symptomatic patient had a history of previous alcohol abuse and alcohol history was unremarkable for all other participants.No subjects had respiratory comorbidities."Ethical approval for the study was obtained from the South East Wales local research ethics committee and patients were recruited from the Cardiff Huntington's disease clinic.All diagnoses were confirmed with a positive genetic test.Controls were healthy volunteers who were not at risk of HD.All individuals recruited into the study were asked to wear ambulatory EEGs for a 24 h period, to wear an Actiwatch® and to keep a sleep diary for a period of one week, and to donate saliva samples for cortisol measurement.The Actiwatch® Activity monitoring system was worn on the non-dominant hand and recorded activity over a period of seven days.Actiwatch records with a sensitivity of 0.05 g with a bandwidth between 3 Hz and 11 Hz and a sampling frequency of 32 Hz.1 min epochs were put into 5 min bins for comparison with EEG.The Actiwatches® employed an analogue that used the amount of activity to make an estimate as to whether the subject was awake or asleep.All sleep episodes were visually inspected before analysis to screen for artefacts and malfunctioning.The watches were waterproof and participants were asked to wear them continuously.Patients were instructed to press the activation button, which delivered a single recorded pulse, to indicate the time they started to try to sleep and to press again to indicate waking in the morning.EEG electrodes were fixed with collodion and placed according to the 10/20 international system.The continuous recording was obtained using ambulatory EEG system worn for 24 h.The date and time were synchronised for analysis of EEGs and Actiwatch® data.The EEG traces were analysed offline by an experienced neurophysiologist.The EEG was scored manually in epochs of five minute to 
determine wakefulness or sleep.The criterion for wakefulness was the presence of eyeblink artefacts and/or an alpha rhythm in the EEG.Early sleep stage was characterized by lack of eyeblink artefacts, fragmentation or absence of alpha activity and replacement of background EEG by low amplitude mixed frequency EEG activity and presence of slow eye movements.Later stages of sleep were identified by characteristic sleep phenomena such as vertex waves, K complexes, sleep spindles and slow waves.Differentiation between REM sleep and wakefulness was determined by lack of eyeblink artefacts, absence of sustained alpha activity and presence of rapid eye movement artefacts.Patients were given sleep diaries in the form of a booklet with a series of questions for each 24 h period.They were encouraged to make entries into the diary throughout the day with an emphasis on collecting information about night time sleep as soon as possible after rising in the morning.The questions included: time of going to bed at night; time the subject started trying to go to sleep; approximately how long it took them to fall asleep; how many times they woke in the night and the duration of the wakeful period; the time they woke in the morning; the time they rose in the morning; the number and duration of day time naps.All participants collected saliva twice a day, 12 h apart, for a week in microcentrifuge tubes.Subjects were requested to collect at least 1 ml of saliva.Samples were stored at −20 °C until analysis of cortisol levels could be performed by Prof J Herbert and S Cleary.All ANOVAs were undertaken using the Genstat v16.1 statistical package, with unbiased iterative correction for missing values.Comparison of epochs of Actiwatch® wakefulness with EEG and sleep diaries was performed visually on a patient-by-patient basis.Of the 22 participants who took part in the study, 21 completed a sleep diary.There was a trend for deterioration across most parameters with more advanced disease state, in particular for sleep latency, number and duration of wakeful periods in the night, and number and duration of day time naps, but not for time of first attempting night-time sleep.However, when the groups were compared by 2-way ANOVA, none of the differences reached significance on any of the parameters.There was some loss of data caused by water damage due to faulty waterproofing of some watches.Analysis of Actiwatch® data by group to assess differences in circadian rhythmicity did not reveal any significant group differences.The periods of wakefulness as assessed by Actiwatch® were then directly compared to EEG and sleep diary data on a patient-by-patient basis.As the EEG was fitted for the first 24 h of the 7 day experiment, the results obtained from the three different methods were compared over the first 24 h to assess the consistency between them in measuring sleep disturbance.For three of the symptomatic individuals, full Actiwatch® and EEG data were not both available and so they were not included in the analysis.Across all groups, the sleep diary had a tendency to fail to capture night-time waking as recorded by the EEG.All periods of sleep as determined by Actiwatch® agreed with the EEG recording.However, for both symptomatic and asymptomatic HD subjects, periods of night-time wakefulness recorded by the Actiwatch® agreed poorly with EEG recordings in that there were multiple periods of ‘wakefulness’ indicated by the Actiwatch® for which the corresponding period of EEG recording demonstrated
the individual to be asleep.This was not the case for all patients: in patient 1 and patient 10 there was agreement with the EEG recording.Some of these epochs were characterised by excessive movement artefact, and these are indicated in the figure as "sleep plus movement".There was one period of wakefulness recorded by the Actiwatch® that corresponded with a recording of awake in the sleep diary and sleep plus movement on the EEG.This may indicate that the EEG recording of this epoch does not reliably indicate sleep, although this patient was noted to have completed the diary unreliably.Unfortunately, a full data set was available for only one control subject.In that subject there were two periods of Actiwatch® waking; one associated with "sleep plus movement" on the EEG and one also recorded as wakefulness by the EEG.Saliva samples were obtained for all 4 asymptomatic, 6 of the 9 symptomatic, and all nine control individuals.There were missing samples across the 7 days in many cases, and indeed only one of the controls produced a complete set of samples.Missing values were corrected by an iterative unbiased estimator routine within the analyses of variance package.On direct questioning, many subjects of both the HD-affected and control groups reported that they found saliva collection mildly unpleasant and had difficulty producing the required amount of saliva.Analysis of cortisol levels revealed no differences between groups, but a difference in time, with morning cortisol levels being higher than in the evening, and a significant group × time interaction, with a blunted cortisol peak in HD symptomatic subjects.By using three methods to monitor sleep in HD patients, this study highlights the potential shortcomings in diary and actimetry records, in comparison to the more intensive ambulatory EEG approach.The main finding in this study was that ambulatory EEG recordings suggested that caution should be applied when interpreting Actiwatch® recording in individuals with both asymptomatic and symptomatic HD.Specifically, although there was good agreement when the Actiwatches® indicated sleep, and there were periods in which both Actiwatches® and EEG demonstrated the individual to be awake, there were also multiple occasions when the Actiwatch® indicated wakefulness but the EEG indicated that the patient was asleep.This was the case even when those periods in which sleep recordings were punctuated with movement artefact were excluded.Thus, in these cases recording with Actiwatches® alone would overestimate the extent of wakefulness, even in asymptomatic subjects with little day time chorea, which could consequently interfere with assessment of circadian rhythmicity.The most likely reason for this discrepancy is that, despite the fact that most involuntary movements in HD disappear with sleep, people with HD nevertheless display involuntary movements during lighter periods of sleep, and these are more exaggerated than sleep-associated movement in control individuals and are thus recorded as wakeful by the Actiwatch® algorithm.Thus, caution is necessary in interpreting Actiwatch® assessment of wake/sleep cycles in individuals with movement disorders.It is possible that this could be addressed by adjustments to the technology calibration or to its placement, but our data suggest that such technology would need to be verified by EEG recordings before it could be reliably used independently in assessment of wake/sleep cycles.It is unlikely that these limitations would apply to rodent models of HD in the same way, since
overt dyskinetic movements such as chorea and tics are generally not seen in HD animals, although they may exhibit generalised akinesias, hyperactivity and abnormal gait and postural responses.However, our data also indicate that simultaneous Actiwatch® and ambulatory EEG could be useful in the study of nocturnal involuntary movements.The sleep diary tended to underestimate the extent of wakeful periods as recorded by the EEG and this was especially obvious in the symptomatic individuals.Discrepancies between diary recordings and other methods of sleep recording, including accelerometers, have been reported previously, although in healthy populations it appears that there is a tendency for more sleep disturbance to be reported by diary than is recorded by accelerometers.The under-reporting of EEG-confirmed wakeful periods in our study suggests that individuals do not fill in the diary during a wakeful period and either fail to complete the entry or fail to remember the wakeful period the following morning.However, there were trends towards worse night-time sleep and increased day-time napping in HD-positive individuals, which may have reached significance on a group basis with larger numbers.Thus, sleep diaries may still have a useful contribution in this context but, from our data, do not appear to be reliable on an individual basis.All groups exhibited the expected circadian changes in salivary cortisol levels, with morning levels significantly higher than evening levels, indicating that the daily cycles of all subjects were intact.However, we also saw blunting of the morning level in symptomatic individuals.Previous reports have most commonly found cortisol levels to be raised in HD, although this was not replicated in a recent study in which 24 h sampling of asymptomatic and symptomatic HD patients was performed.Kalliolia et al.
suggested that the raised levels in previous studies could be due to the stress of repeated sampling.The blunted levels in this study may be indicative of circadian disruption, but currently remain unexplained and are deserving of further study in a larger cohort of patients.Salivary cortisol has been used in numerous studies to assay cortisol levels and appears to correlate well with serum levels.There were some problems of acceptability in that some individuals found the collection method distasteful, some patients reported difficulty in producing enough saliva to fill the tube to the 1 ml mark, and there were missing samples across the 7 day collection period.However, overall this appears to be a feasible method for collecting community-based samples with relatively little associated stress for the study subjects, as has been noted previously in Alzheimer's disease.In summary, we highlight some of the difficulties in attempting to undertake community-based studies of circadian rhythm and sleep in an HD population.In addition to technical difficulties that included loss of data due to equipment failure, we found poor agreement between Actiwatch® and ambulatory EEG recordings for patients with both asymptomatic and symptomatic HD, which we interpret as most likely to be due to involuntary movement "breakthrough" during sleep.The data presented in our study are not sufficient to suggest the absence of circadian disturbance in HD, and indeed we consider that the accumulating animal and clinical data point towards circadian disturbance being an intrinsic element of this disorder.However, we believe the findings are important to consider when designing studies of circadian activity and sleep in HD and also in other disorders in which involuntary movements are a feature.We could not confirm the usefulness of sleep diaries from our study, but have data to support further assessment of saliva samples for the assessment of cortisol in this condition. | Huntington's disease (HD) is an inherited neurodegenerative disorder that is well recognised as producing progressive deterioration of motor function, including dyskinetic movements, as well as deterioration of cognition and ability to carry out activities of daily living. However, individuals with HD commonly suffer from a wide range of additional symptoms, including weight loss and sleep disturbance, possibly due to disruption of circadian rhythmicity. Disrupted circadian rhythms have been reported in mouse models of HD and in humans with HD. One way of assessing an individual's circadian rhythmicity in a community setting is to monitor their sleep/wake cycles, and a convenient method for recording periods of wakefulness and sleep is to use accelerometers to discriminate between varied activity levels (including sleep) during daily life. Here we used Actiwatch® Activity monitors alongside ambulatory EEG and sleep diaries to record wake/sleep patterns in people with HD and normal volunteers. We report that periods of wakefulness during the night, as detected by activity monitors, agreed poorly with EEG recordings in HD subjects, and unsurprisingly sleep diary findings showed poor agreement with both EEG recordings and activity monitor-derived sleep periods. One explanation for this is the occurrence of 'break through' involuntary movements during sleep in the HD patients, which are incorrectly assessed as wakeful periods by the activity monitor algorithms.
Thus, care needs to be taken when using activity monitors to assess circadian activity in individuals with movement disorders. |
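The entry above compares Actiwatch®-derived wake/sleep calls, binned from 1 min epochs into 5 min bins, against EEG epochs scored over the same 24 h. The Python sketch below illustrates that kind of epoch-by-epoch comparison; the column layout, the majority-vote binning rule and the agreement statistics are illustrative assumptions rather than details taken from the study, which reports a visual, patient-by-patient comparison.

# Minimal sketch (assumptions flagged above): bin 1-min Actiwatch wake/sleep calls into
# 5-min epochs and tabulate agreement with EEG-scored epochs for one participant.
import pandas as pd
from sklearn.metrics import cohen_kappa_score, confusion_matrix

def bin_actiwatch(minute_calls: pd.Series) -> pd.Series:
    """Collapse datetime-indexed 1-min wake(1)/sleep(0) calls into 5-min epochs by majority vote (assumed rule)."""
    return (minute_calls.resample("5min").mean() >= 0.5).astype(int)

def compare_with_eeg(actiwatch_5min: pd.Series, eeg_5min: pd.Series) -> dict:
    """Epoch-by-epoch agreement between Actiwatch and EEG wake/sleep series (both datetime-indexed)."""
    joined = pd.concat({"actiwatch": actiwatch_5min, "eeg": eeg_5min}, axis=1).dropna()
    cm = confusion_matrix(joined["eeg"], joined["actiwatch"], labels=[0, 1])
    return {
        "percent_agreement": float((joined["eeg"] == joined["actiwatch"]).mean() * 100),
        "kappa": float(cohen_kappa_score(joined["eeg"], joined["actiwatch"])),
        "eeg_sleep_called_wake_by_actiwatch": int(cm[0, 1]),  # the discrepancy highlighted above
        "eeg_wake_called_sleep_by_actiwatch": int(cm[1, 0]),
    }

Run per participant, a tabulation like this quantifies how many EEG-scored sleep epochs the Actiwatch labels as wake, which is the pattern the entry reports in HD subjects.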
31,430 | Peritonsillar and deep neck infections: a review of 330 cases | Deep Neck Infections (DNI) are defined as suppurative infectious processes of deep visceral spaces of the neck that usually originate as soft tissue fasciitis and may lead to an abscess.1,Direct extension of an upper aerodigestive infection through fascial planes is the most common cause.DNI are a frequent emergency in Otolaryngology and can be life-threatening, as they may lead to airway obstruction, mediastinitis or jugular vein thrombosis.2,The aim of this study is to review different factors that may predispose to an increased risk of infection and may have an important role in prognosis.We performed a retrospective study of patients diagnosed with cervical infection who were admitted to the emergency room of our hospital from January 2005 to December 2015.We excluded patients with superficial skin infections, limited intraoral infections and cervical necrotizing fasciitis.Finally, 330 patients were enrolled in our study.Although peritonsillar infections are not truly DNI, we decided to include them in our review because of their high incidence and occasional coexistence with other deep neck space infections.We used Excel and SPSS to perform statistical analysis, and Pearson χ2 tests were calculated to obtain p-values.There were 176 men and 154 women.Our population's ages ranged from 6 months to 87 years; the mean age was 32.89 ± 18.198 years.81.51% of them were adults and 19.49% were children.50% were older than 31 years old.The mean number of patients with a neck infection admitted to our hospital per year during the 11 years was 29.82.The distribution by years is shown separately in Fig. 1.Autumn was the period in which most patients presented with a DNI (8.55 ± 4.82 cases).This implies that between the end of September and the first half of December, 2.85 patients were admitted per month due to this pathology.The distribution in seasons is displayed in Fig.
2.The mean hospital stay was 4.54 days.7.3% of the population had an allergy to some antibiotic; penicillin was the most common, followed by aminoglycosides and quinolones.62 patients had previously had a DNI, and 14 had had a tonsillectomy years before.There were 28 patients with underlying systemic diseases.Diabetes Mellitus was the most prevalent in our population.The etiology of the infection was identified in 296 patients.The most common cause was pharyngotonsillar infections, followed by odontogenic infections.The rest of the causes are shown in Table 2.The peritonsillar space was the most commonly affected.The distribution of localizations is shown in Table 3.The most common symptom reported by the patients was odynophagia, in 98.2% of patients, while the most common sign was the presence of trismus, in 55.5%, followed by cervical lymphadenopathies in 53.6%.244 patients had not received antibiotics prior to admission to our hospital.Those who had been treated had been taking penicillins in most cases.The rest of the patients had received macrolides, usually in a 3-day monodose regimen.A fine needle aspiration (FNA) was performed in 277 patients; in 22.74% of the cases purulent material was obtained, classifying the infection as an abscess.In routine blood tests, an abnormal blood cell count with an increase in neutrophils was found in 313 cases.When the physical examination and the FNA were not enough to reach a diagnosis, an imaging technique was performed.Cervical CT with iodinated contrast was the gold standard; DNI was described as a diffuse inflammation area or a hypodense area with the presence of a "rind", an air/fluid level or scattered small gas bubbles.CT was needed in 194 cases, and 48 of them required a second one due to poor clinical evolution during the hospital stay.Usually a second imaging test was performed after 48 h without any improvement with treatment.Cervical ultrasonography was performed in 4 patients: one child under 1 year old and three adults with severe renal failure, in order not to expose them to iodinated contrast material.In two children with suspected retropharyngeal infection we preferred lateral cervical radiography to avoid unnecessary radiation in infants.In these cases an increase of soft tissue in the retropharyngeal space was shown.Bacterial cultures were only possible in 221 patients; however, a positive result was obtained in 61.99% of them.The isolated pathogens and their incidence are shown in Table 4.All of our patients received antibiotics and corticosteroids.In 304 cases we chose a β-lactam associated with a β-lactamase inhibitor.Those who were allergic to β-lactams were treated with an aminoglycoside or a quinolone, in monotherapy or associated with an antibiotic against anaerobic microorganisms.There were three patients who required a drug change because of antibiotic resistance or poor clinical evolution; in those cases we preferred carbapenems.We had one DNI in our population caused by Mycobacterium tuberculosis, so it was treated with tuberculostatic drugs in the same way a respiratory infection is handled.Patients were treated with antibiotics for a mean of 10.92 ± 3.73 days.Those who needed an intensive care unit stay were the ones who required longer antibiotic treatment.245 of them needed surgical drainage: 196 needed a transoral approach, while 36 required a cervicotomy.In 4 patients we opted for a combined approach; it was usually used in multispace infections when the affected areas were not adjacent.When there was tonsillar necrosis or
intratonsillar abscess, we performed a tonsillectomy at the time of surgical drainage.16 of the 245 patients who had been operated on needed a second surgery because of poor clinical evolution.13 of our patients had complications.Mediastinitis was the most frequent one, followed by airway obstruction, cellulitis, pneumonia, acute renal failure and sepsis.Tracheostomy was performed in 6 patients, 3 of them due to acute airway compromise and the other 3 secondary to prolonged orotracheal intubation.We observed a vocal cord paralysis and a Horner syndrome in two patients after surgery.5 of the 13 required intensive care unit admission, with a mean stay of 49 days.One patient died from septic shock.The factors that were related to complications were analyzed.Male patients and those allergic to penicillins had a higher rate of complications and ICU stay.All factors are shown in Table 5.In our review, pharyngotonsillar infections were the most common cause of peritonsillar infections and DNI.This result is consistent with some studies in the literature,1–4 although for the majority, odontogenic infections are the main cause, especially in studies carried out in Asia and Eastern Europe.5–7,This may be related to different oral hygiene conditions between different countries.Although peritonsillar infections are not strictly DNI, we chose to consider them in our review, as other studies have done,2,6,8 because in many cases they were the start of a proper DNI or could have severe complications, as a DNI can.If we counted just strict DNI, we had a population of 91 parapharyngeal and 11 retropharyngeal infections in 10 years.In patients with DNI, it is more common to find cases who had not had a tonsillectomy; this may be explained because tonsils have an increased bacterial load living within their crypts.7,We would like to emphasize that we found an increase in DNI incidence in the second period studied; this could be due to an aging population or the fear of overprescribing antibiotics and developing resistant microorganisms.In fact, 3 out of 4 people had not taken any medication prior to the emergency consult.In this study we found that systemic comorbidities like diabetes mellitus3,4,9,10 or hepatopathy, and allergy to penicillins, are common in patients with DNI who suffer complications or require an ICU stay.DM results in a defect of polymorphonuclear neutrophil function, cellular immunity and complement activation.Consequently, hyperglycemia and high glycosylated hemoglobin are predictors of worse prognosis;10 for this reason, our diabetic patients were studied by the Endocrinology department.The prevalence of penicillin allergy in our review was lower than that of the global population;11 however, it was much higher in patients who required an ICU stay or who suffered complications.S.
viridans was the most common pathogen in our population, as well as in other studies.3,6,12,We did not find Klebsiella pneumoniae in our environment, which differs from studies in Asia.4,8,10,They usually find a high prevalence of this microorganism, especially in diabetic patients.13,We had two ways of obtaining material for culture: either an FNA in the consultation or a sample obtained during surgical drainage.Sometimes neither of them could be performed.Some patients had a severe trismus which hindered the FNA.On the other hand, 245 patients received surgical drainage, which is the best moment to take a sample of the infected material, but it was not always possible, as in some cases the material obtained from the infected area was not enough or was not in suitable conditions.Besides, even when the sample was sufficient, cultures were not always positive.This may be explained by antibiotics taken prior to sample extraction or incorrect sample management.Regarding treatment, we confirm what most studies have already reported.Every patient received antibiotics and corticosteroids.1–11,14,Surgical drainage is still the option when medical treatment is not enough, when there is already a well-formed hypodense area with well-defined margins or an air/fluid level, or when there are signs of complications such as mediastinitis or involvement of multiple regions.15,Complications may appear as a consequence of extension of the infection through neck spaces.Mediastinitis and airway obstruction were the most common ones, as previous studies have shown.1–3,In cases of mediastinitis, thoracic surgeons performed the drainage during the same surgical procedure in which our team performed a cervicotomy.Tracheostomy was needed in a lower percentage than in other studies,3 around 1%, similar to an Indian review.6,The use of corticosteroids decreases tissue edema and the probability of pus gushing into the airway during endotracheal intubation, making the procedure safer and more successful.16,17,Even though there has been an increase in DNI incidence, mortality remains low, as has been previously shown in other studies.17–19,DNI are still common and can develop serious complications.Immunocompromised patients with systemic comorbidities are susceptible to a worse prognosis.In spite of the increase in DNI, mortality has decreased thanks to multidisciplinary attention and improvements in imaging techniques, antibiotics and surgery, which have enabled earlier diagnosis and treatment.The authors declare no conflicts of interest. | Introduction: Deep neck infections are defined as suppurative infectious processes of deep visceral spaces of the neck. Objective: The aim of this study is to review different factors that may influence peritonsillar and deep neck infections and may play a role as bad prognosis predictors. Methods: We present a retrospective study of 330 patients with deep neck infections and peritonsillar infections who were admitted between January 2005 and December 2015 to a tertiary referral hospital. Statistical analysis of comorbidities, diagnostic and therapeutic aspects was performed with Excel and SPSS. Results: There has been an increase in the incidence of peritonsillar and deep neck infections. Systemic comorbidities such as diabetes or hepatopathy are bad prognosis factors. The most common pathogen was S. viridans (32.1% of positive cultures). 100% of the patients received antibiotics and corticosteroids, and 74.24% needed surgical treatment. The most common complications were mediastinitis (1.2%) and airway obstruction (0.9%).
Conclusion: Systemic comorbidities are bad prognosis predictors. Nowadays mortality has decreased thanks to multidisciplinary attention and improvements in diagnosis and treatment. |
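The review above reports that its comparisons of categorical factors (for example, penicillin allergy or male sex) against complications and ICU stay rest on Pearson χ2 tests computed with Excel and SPSS. A minimal Python sketch of that kind of test on a 2 × 2 contingency table is given below; the counts are placeholders, not the study's data.

# Minimal sketch: Pearson chi-square test of a categorical risk factor against complications.
# The 2x2 counts are illustrative placeholders, not values from the review above.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: factor absent / present; columns: no complication / complication (hypothetical counts).
table = np.array([[290, 16],
                  [17, 7]])

# correction=False gives the plain Pearson statistic (no Yates continuity correction).
chi2, p_value, dof, expected = chi2_contingency(table, correction=False)
crude_or = (table[1, 1] * table[0, 0]) / (table[1, 0] * table[0, 1])  # crude odds ratio for a 2x2 table

print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}, crude OR = {crude_or:.2f}")

With only 13 complications among 330 patients, expected cell counts can be small, so scipy.stats.fisher_exact would often be the safer choice for tables like this.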
31,431 | Mechanisms of activation induced by antiphospholipid antibodies in multiple sclerosis: Potential biomarkers of disease? | Multiple Sclerosis (MS) is a chronic demyelinating/degenerating disease affecting predominantly the white matter of the central nervous system.Antibody responses in MS are a prominent characteristic and the types of antibodies detected in MS patients so far comprise a large and heterogeneous collection.Although there is no specific biomarker strongly correlated with the diagnostic criteria, the intensive search for biomarkers in MS continues, and disease-specific patterns of autoantibody profiles obtained by microarrays suggest their further use in the characterization of disease type and direction of research paths.Several cell types are implicated in antigen-specific antibody responses in MS, including neurons, oligodendrocytes and astrocytes.Naturally, myelin-associated antigens were the first to be suspected to elicit antibody responses and numerous studies have confirmed antibody incidence against myelin basic protein, proteolipid protein, myelin oligodendrocyte glycoprotein and more.Strikingly, a study by Ho et al. reported antibody reactivity against phosphate groups of certain phospholipids of the myelin sheath and suggested the pharmacological use of these PL for their protective properties.Prevalence of aPL, including IgG isotype anti-cardiolipin antibodies, in MS has been extensively described by us and others.The location of a particular epitope within the cell, or the site where it is exposed on the surface, can distinguish between highly specific, pathogenic responses and less pathogenic but highly informative ones; this dichotomy may determine the antibody's value as a possible biomarker.Furthermore, the identification of the effects of autoantibodies in autoimmune pathogenesis is a subject of investigation since it implicates different pathways and presents characteristic disease phenotypes.One of the most studied pathways in immune responses is p38MAPK, which plays a key role in activation of the physiological processes of inflammation and oxidative stress.There are a number of potential mechanisms for p38 involvement in MS pathogenesis.Early evidence for its involvement in autoimmune neuroinflammation came from microarray studies showing that the expression of MAPK14 was elevated approximately 5-fold in MS lesions in the CNS.Most importantly, recent studies using pharmacologic inhibitors and genetic approaches have demonstrated a functional role of p38 MAPK in the experimental autoimmune encephalomyelitis (EAE) model and dissected the roles of this kinase in different immune cells.These results suggest that p38 MAPK activity is necessary for the progression of clinical signs and EAE pathology and that this pathway has potential for pharmacologic intervention in MS, although it is unclear which cell types may be involved.Similarly, NFκB has also been shown to be able to signal downstream of MAPK kinases and to be associated with autoimmunity and inflammation.Any identified molecules with a role in the pathways which mediate pathogenesis could comprise possible therapeutic targets.Overall, a variety of cellular and humoral mediators have been identified as implicated in aPL pathology.The highlights of research so far point mostly towards endothelial cells, platelets and monocytes as cellular contributors.The association of MS with an increased risk of venous thromboembolism described in epidemiological studies, in addition to the correlation between
MS and pro-thrombotic factors, including aPL positivity, could imply that the MS pathogenic mechanisms may at least partly implicate thrombotic processes.With this knowledge in mind, it seems promising to investigate the use of p38 MAPK inhibitors to treat MS. Along these lines, further research aimed at elucidating the precise mechanisms of how aPL via p38 MAPK and NFκB may control disease, and a broader understanding of the molecular players and signaling pathways implicated, should provide other novel targets for intervention.Here we examine the molecular mechanisms of how aPL from MS patients may illuminate potential therapeutic targets.Our results presented in this study identify p38MAPK and NFκB signaling as central pathways, arguably the master regulators of the inflammatory responses.Blood samples were collected prospectively from 127 MS patients that fulfilled the revised McDonald criteria.From a detailed history available for each patient, none of the patients included in the study had any underlying autoimmune disease, no neurological manifestations not attributable to MS and no evidence of thrombotic events or pregnancy morbidity, or any clinical manifestations otherwise linked to the Antiphospholipid Syndrome or any other pathological entity.The population of MS patients from which blood was taken, after signing informed consent forms approved by the national bioethics committee, comprised 88 patients in the relapsing-remitting phase of the disease, 11 primary progressive and 28 secondary progressive patients.From the latter, 5 patients also presented relapses.The entire age range of the MS patients was between 22 and 79 years of age.Up to 64 of the 127 patients received treatment, of whom 40 were being administered Interferon-β, 11 were taking Natalizumab and 13 were receiving other types of medication such as mitoxantrone, fingolimod, azathioprine, citalopram, mycophenolate, glatiramer acetate, methotrexate, alprazolam or citalopram.Ninety-two healthy controls, who matched the MS patient population in gender and age, were also included in the study.These individuals did not present with any pathological condition and had no history of a long-term illness or evidence of an autoimmune disorder.The aCL activity of IgG was measured as previously described using international calibrators in G phospholipid units (GPLU).The nine most highly positive serum samples from the cohort of MS patients that were tested by ELISA for IgG anti-CL, and the sera of nine age- and gender-matched healthy controls, were selected for purification.The mean age of the IgG anti-CL positive group was 46.33 years and that of the HC group 48.89 years.IgG was purified from all serum samples by protein G sepharose affinity chromatography and passed through Detoxi-Gel™ Endotoxin removing columns, and the presence/absence of endotoxin was determined by the Limulus Amoebocyte Lysate assay.All IgG preparations tested negative in this assay.The concentration of purified IgG was determined using the Nanodrop ND-1000 Spectrophotometer.For in vitro studies to investigate the effects of IgG from MS patients positive for aPL compared to IgG from HC, we used pooled IgG derived from an equal concentration of IgG from nine individuals in each group.The human astrocytic cell line (U87) was cultured in MEM medium containing 10% foetal bovine serum, 100 units/ml penicillin and 100 μg/ml streptomycin at 37 °C in a humidified atmosphere consisting of 5% CO2.U87 cells were incubated with 100 μg/ml MS or HC IgG, 3 μg/ml LPS, 100 ng/ml TNF-α or media
alone for one hour.The one-hour incubation period has been identified as appropriate for experiments addressing the effects of purified IgG in astrocytes, following time-course experiments carried out for time periods of 10 min, 1, 3, 6 and 12 h.Subsequently, cell extracts were obtained according to standard protocols and were used for western blotting.Cell extracts were resolved by sodium dodecyl sulfate – 10% polyacrylamide gel electrophoresis and then transferred to nitrocellulose membranes.Phosphorylated and total p38 and NFκB protein levels were determined by Western blotting using monoclonal or polyclonal anti-phospho-p38 MAPK, anti-p38 MAPK, and anti-GAPDH antibodies.Protein levels were quantified using the image analysis software ImageJ 1.x.All data are expressed as the mean ± SEM.Statistical analysis was performed using SPSS Statistics for Windows, Version 20.0.A normality test and an equal variance test were performed.If data groups passed both tests, a comparison was made by a parametric test.If the normality conditions were not met, a comparison was made by a non-parametric test.P values <.05 were considered significant.Table 1 summarizes the demographics and clinical features of the study population and controls.In total, 127 MS patients were recruited to participate in this study.Eighty-nine of them were female and thirty-eight were male.Their mean age was 51.69 ± 12.19 years.Ninety-two healthy individuals were recruited; fifty-five were female and thirty-seven were male.The mean age of the healthy individuals was 52.1 ± 17.75 years.Of note, there were no statistical differences in the gender proportion between MS patients and HC.Anti-cardiolipin antibodies of IgG isotype were positive in 23 of 127 patients with MS and in 1 of 92 HC.After purification of total IgG from the serum of MS patients positive for IgG anti-CL and healthy controls, the activity of these fractions was confirmed by ELISA, as performed initially for testing of seropositivity, and the fractions were subsequently pooled to obtain one group for MS and one group for healthy controls.The pooled IgG samples were tested at a concentration of 100 μg/ml in triplicate and anti-CL titers were expressed in GPLU.The mean IgG anti-CL titer of pooled IgG for MS patients was calculated to be 50.64 GPLU, whilst pooled IgG from healthy controls did not bind CL.To establish the effects of exposing astrocytes to IgG, we used pooled IgG samples from MS patients and healthy controls.U87 cells were treated with pooled IgG for 10 min, 1, 3, 6 and 12 h.Maximal differences in phosphorylation of p38MAPK and NFκB were detected between MS patients and healthy controls after one hour of exposure to IgG.Cells cultured with the positive control showed maximal levels of phosphorylation of both p38MAPK and NFκB after one hour of incubation, while medium alone had no effect.In analyzing the mechanism triggered by anti-CL antibodies in MS patients, levels of the phosphorylated form of p38 MAPK, as measured by densitometry, were significantly higher in MS patients compared with controls.From a total of four individual experiments, there was an average 2.5-fold increase in phosphorylation of p38 MAPK and p65 NFκB.Representative Western blots showing phosphorylated and total forms of NFκB from MS patients and healthy controls are depicted in Fig.
2.IgG from MS patients significantly increased p65 phosphorylation in astrocytes at the protein level compared with IgG from healthy controls.Given the evidence that p38 and NFκB are activated in astrocytes by IgG from MS patients, we sought to investigate whether these intracellular pathways are also activated by IgG from MS patients who are not positive for anti-CL antibodies, and to what extent the anti-CL antibodies are involved.Pooled sera from ten patients with MS that were not positive for anti-CL antibodies were applied in all in vitro experiments.Patient IgG preparations selected for this group had no IgG anti-CL activity.We confirmed that the presence of anti-CL antibodies in the serum of patients with MS is responsible for increased phosphorylation of p38 and NFκB compared with those of patients negative for anti-CL antibodies and healthy controls.More specifically, there was a 12-fold increase in phosphorylation of p38 MAPK and a 2-fold increase in phosphorylation of p65 NFκB following stimulation with IgG from MS patients positive for anti-CL antibodies, but no increase at all with IgG from MS patients with no anti-CL antibodies or from healthy controls.The lack of p38 MAPK and p65 NFκB activation with IgG from MS patients negative for anti-CL antibodies provides further proof that the observed NFκB activation is specifically due to anti-CL antibodies in those MS patients.The data obtained in this study strongly suggest that in MS patients, there is upregulation of certain signaling pathways as a consequence of anti-CL antibody activation.This is, to our knowledge, the first study in which aPL-induced intracellular signals that mediate the phosphorylation of signaling pathways in astrocytes have been further described.Use of pooled samples was necessary to carry out the multiple experiments to establish the initial responses with IgG in astrocytes, due to sample limitation.We tested different pools for each clinical group consisting of ten individual samples.In the case of the MS patients that were positive for anti-CL, we selected IgG from the patients with the highest activity against CL.Our results primarily show that IgG fractions from MS patients who are positive for anti-CL antibodies induce p38 and NFκB phosphorylation in astrocytes.This was apparent for both phosphorylated proteins, with phosphorylation of p38 MAPK being more prominent.Stimulation of astrocytes with MS IgG showed an increase in p38 MAPK of about 2.5-fold and an increase in p65 NFκB of about 2.5-fold compared to the healthy control IgG.This study is the first to assess the effects of IgG from MS patients positive for aPL on cells of the human CNS.To date, one study has tested the effects of aPL, namely anti-CL, on astrocytes.The authors reported inhibition of astrocytic proliferation and contribution to the formation of blood clots by activation with anti-CL antibodies from patients with SLE.In that regard, in vitro stimulation of rat astrocytes with IgG from patients with neuromyelitis optica (NMO) led to induction of various inflammatory mediators and immune-related genes.Interestingly, the upregulation of the chemokine CC ligand 5 was disease-specific, in comparison to samples from patients with other diseases, including RRMS.However, samples from progressive MS patients were not included.It would be of great interest to compare the effects of NMO IgG against MS IgG and HC since it is known that NMO is antibody mediated, compared to the much more complex pathophysiology of MS which implicates a wide range of cellular and humoral immune
mediators.Attributing specific molecular processes to certain diseases requires extensive research, assessing the induction of antibody-mediated effects across several clinically distinct conditions.B cells in general have been shown to be highly involved in CNS inflammation and toxicity, as shown in a study by Lisak et al., where B cell supernatants from MS patients exerted pathological effects on oligodendrocyte viability and morphology.Consequently, one could argue that antibody-mediated effects in the CNS are not a result of an isolated mechanism.Given our findings on U87 cells in response to IgG from MS patients and from healthy controls, we can conclude that MS IgG may initiate multiple inflammatory processes in astrocytes.Since the IgG fractions used in the present study originated from patients positive for aPL, our next aim was to confirm that these effects are aPL specific.Interestingly, we observed that IgG from patients who are not aPL positive do not activate astrocytes, indicating that activation of astrocytes is aPL specific.A possible limitation of this study would be that, due to sample limitation, we were unable to deplete the anti-CL antibody fraction from the MS patients.We feel that the clear difference between MS patients positive for anti-CL and those negative for anti-CL allows us to speculate that these differences are attributable to the specific antibodies.Our findings support the hypothesis of astrocytes playing key roles in antibody-mediated pathogenicity.This is in accordance with a study evaluating MS CSF-derived monoclonal recombinant antibodies which described loss of myelin, astrocyte activation and deposition of complement products.As discussed by the authors, neither the abundance of such antibodies nor the exact mechanisms by which they contribute to demyelination are clear.For instance, complement appears to be implicated following the binding of some of these antibodies to astrocytes and/or neurons.There are recent studies available that associate the role of TLR2 with the proinflammatory profile of astrocyte cultures in CNS inflammation, therefore supporting the hypothesis that TLR2 and/or TLR4 can be involved in aPL signaling and astrocytic responses in MS.
TLR-2 and TLR-4 are membrane targets of aPL, mediating the deleterious effects in the APS.In APS, there is extensive evidence from in vitro experiments showing that activation of monocytes and endothelial cells by aPL involves downstream mediators of TLRs.Activation of the NFκB transcription factor family plays a crucial role in inflammation through its ability to induce transcription of proinflammatory genes.Substantial evidence suggests that MAP kinases can contribute to the regulation of NFκB.Moreover, it has been demonstrated that the p38 pathway is implicated in the regulation of NFκB in the cytoplasm as well as in modulation of its transactivating potential in the nucleus.Activation of p38 MAPK is of interest, since this kinase is essential in both inflammation and coagulation, making it an attractive candidate as a potential mediator of thrombotic effects in pathologic conditions.Taken together, the above results suggest that IgG from MS patients positive for aPL promotes phosphorylation of p38MAPK and NFκB signaling molecules.In this way we have extended our findings and observed that astrocytic stimulation with IgG from MS patients positive for IgG anti-CL further supports the hypothesis that aPL may carry out inflammatory pro-thrombotic effects in the CNS of MS patients.Here, we describe how the aPL signaling mechanisms described may provide further evidence for the development of thrombosis in MS. The involvement of aPL in MS has been well documented by us and others, which further supports this principle.Some aspects remain to be confirmed, for example whether following these signaling effects there is subsequent induction of tissue factor (TF), a cellular initiator of blood coagulation.Theoretically, the involvement of TF is an attractive avenue for development and may help to establish new therapeutic approaches, such as selective inhibition of MAP kinases, to reverse the possible prothrombotic state in patients with MS. Although p38 MAPK inhibitors showed great promise in preclinical models of RA and Crohn's disease, so far the clinical results have not been as promising; however, the lack of therapeutic efficacy in those conditions does not necessarily mean the same for MS, and this should be pursued further.At present, thirteen kinase inhibitors have been approved for oncologic indications.Regardless, the number of kinase inhibitors and the range of clinical indications are likely to expand dramatically in the next few years.At this point our results from in vitro stimulations provide further confirmation of the involvement of aPL in the pathogenesis of MS. | Multiple sclerosis (MS) is a chronic, multifactorial, inflammatory disease of the central nervous system where demyelination leads to neurodegeneration and disability. The pathogenesis of MS is incompletely understood, with the prevalence of antiphospholipid antibodies (aPL) speculated to contribute to MS pathogenesis. In fact, MS shares common clinical features with the Antiphospholipid Syndrome (APS) such as venous thromboembolism. Consequently, the presence of aPL, which are associated with blood clot formation in the APS, needs to be further investigated for a possible pro-coagulant role in the development of thrombosis in MS. The effects of IgG aPL from patients with MS upon astrocyte activation have never been characterized. We purified IgG from 30 subjects.
A human astrocytic cell line was treated with 100 μg/ml IgG for 1 h, and cell extracts were examined by immunoblot using antibodies to p38 MAPK and NFκB to further examine intracellular signaling pathways induced by these IgGs. Only IgG from patients who are positive for aPL caused phosphorylation of p38 MAPK and NFκB in astrocytes. These effects were not seen with IgG from MS patients without aPL or from healthy controls. Understanding the intracellular mechanism of aPL-mediated astrocyte activation may help to establish new therapeutic approaches, such as selective inhibition of the mitogen-activated protein kinases, to control MS activity or possible thrombotic states. |
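The entry above quantifies Western blot bands by densitometry in ImageJ, reports phosphorylation as fold change, and picks a parametric or non-parametric comparison depending on normality and equal-variance checks. The Python sketch below mirrors that decision logic only as an illustration: the normalisation to total protein and a loading control, and the specific tests used (Shapiro-Wilk, Levene, t test versus Mann-Whitney U), are assumptions rather than the exact SPSS procedure described.

# Minimal sketch (assumed normalisation and tests): densitometry fold change plus the
# "check normality and equal variance, then choose parametric or non-parametric test" logic.
import numpy as np
from scipy import stats

def phospho_signal(phospho, total, loading):
    """Normalise phospho-band intensity to the total-protein band and a loading control (e.g. GAPDH)."""
    phospho, total, loading = map(np.asarray, (phospho, total, loading))
    return (phospho / total) / loading

def compare_groups(ms_values, hc_values, alpha=0.05):
    """Parametric t test if both groups look normal with equal variances, otherwise Mann-Whitney U."""
    normal = (stats.shapiro(ms_values).pvalue > alpha and stats.shapiro(hc_values).pvalue > alpha)
    equal_var = stats.levene(ms_values, hc_values).pvalue > alpha
    if normal and equal_var:
        name, result = "t-test", stats.ttest_ind(ms_values, hc_values)
    else:
        name, result = "Mann-Whitney U", stats.mannwhitneyu(ms_values, hc_values, alternative="two-sided")
    fold_change = float(np.mean(ms_values) / np.mean(hc_values))  # e.g. the ~2.5-fold increases reported
    return {"test": name, "p": float(result.pvalue), "fold_change": fold_change}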
31,432 | Quantitative geospatial dataset on the near-surface heavy metal concentrations in semi-arid soils from Maibele Airstrip North, Central Botswana | Herein, the data consists of tables and figures which help analyze the near-surface heavy metal contents of soils collected from 1050 geo-referenced points underlain by paragneisses and amphibolites parent materials at the Maibele Airstrip North in Central Botswana.Other heavy metals below the detection limit (dl), including Mo (dl <5 ppm), Cd (dl <10 ppm), Sn (dl <20 ppm), Sb (dl <20 ppm), W (dl <10 ppm), U (dl <5 ppm) and Se (dl <5 ppm), had no values reported in the data.A portable x-ray fluorescence spectrometer in a "soil" mode was used to determine the heavy metals.The average of two readings on two samples collected from the same point on the grid layout was recorded and reported.Soil samples were collected at intervals of 25 m following straight marked lines.The sample line trend was from north to south and a total of 30 lines were sampled, each with 35 sampling points.A total distance of about 875 m was covered for each line.A pit of about 30 cm depth was dug to remove the topsoil and organic material before collecting soil samples.Two soil samples were collected for each point: one sample was sieved using the Fieldmaster soil sampling sieve set before being placed in a labeled transparent sample bag, and the other was put in a sample bag as collected.The two samples collected at a single point were labeled with the same sample number but differentiated by letters at the end.All soil samples were taken to the base camp and allowed to air dry before being analyzed using a portable x-ray fluorescence analyzer.Samples from the same point were analyzed consecutively, and the analyzer made an average analysis from the measurements it obtained from the two samples.The data were downloaded to a computer and an Excel document showing the element contents for each sample was made.A calibration standard, a blank and a duplicate were used after every 20 samples were analyzed, and they are highlighted with a yellow color in the Excel data sheet in the Supplementary table.The correlation values obtained from Karl Pearson's correlation analysis of the data are given in Table 1 below.This table shows the estimated strength of the relationships between the six heavy metals.A correlation value that is closer to 1 can be said to indicate a stronger relationship, where the sign indicates the direction of the relationship – with "+" denoting positive and "−" denoting negative.A "0" value indicates that there is no correlation between the respective variables.The loading values obtained from the PCA of the data are given in Table 2.This table shows the amount of variation that a particular variable contributed to a given factor.A variation value that is more than 0.5 can be said to be a significant contribution to the respective factor.The contribution may vary from moderate to very high, with "0.6 and above" denoting high to very high contribution and "0.5–0.6" denoting moderate contribution.From this table, if Kaiser's rule of taking the number of factors to use as the total number of eigenvalues ≥1 is applied, it can be deduced that the six heavy metals of the data can be grouped only under three factors.From Table 2, heavy metals Co, Cu and Zn can be said to have contributed significantly to Factor 1, although with a moderate contribution of 0.521, 0.500 and 0.518 respectively.Likewise, only metals Cr and Ni contributed significantly to Factor 2, while only metal Pb
contributed significantly to Factor 3.Thus, the six heavy metals of the data are grouped as follows: first factor: Co, Cu and Zn; second factor: Cr and Ni; third factor: Pb.Overall, the three factors explain 83.2% of the total variation in the data. | This article contains a statistically analyzed dataset of the heavy metal (Cr, Co, Ni, Cu, Zn and Pb) contents of near-surface (~30 cm depth) soils in a Cu–Ni prospecting field at Airstrip North, Central Botswana. The soils developed on paragneisses and amphibolites parent materials in a semi-arid environment with hardveld vegetation, "The geology of the Topisi area" (Key et al., 1994) [1]. Grid sampling was adopted in the field data collection. Heavy metals were determined using the relatively new portable x-ray fluorescence spectrometer (Delta Premium, 510,890, USA) technology in a "soil" mode. The data presented was obtained from the average reading of two soil samples collected from the same point but passed through sieves. |
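The entry above derives a Pearson correlation matrix for the six metals (Table 1) and groups them into three factors retained by Kaiser's rule (eigenvalues ≥ 1, Table 2). A minimal Python sketch of that workflow is given below; the file name and column labels are assumptions, and an unrotated PCA on standardised concentrations stands in for whatever factor-extraction settings actually produced the published loadings.

# Minimal sketch: Pearson correlations and PCA with Kaiser's rule on the six heavy metals.
# The file name, column names and the unrotated-PCA choice are illustrative assumptions.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

metals = ["Cr", "Co", "Ni", "Cu", "Zn", "Pb"]
df = pd.read_excel("maibele_airstrip_north_soils.xlsx")[metals].dropna()  # hypothetical file name

corr = df.corr(method="pearson")        # analogue of Table 1

z = StandardScaler().fit_transform(df)  # standardise so eigenvalues refer to the correlation matrix
pca = PCA().fit(z)
keep = pca.explained_variance_ >= 1     # Kaiser's rule: retain components with eigenvalue >= 1

loadings = pd.DataFrame(
    pca.components_[keep].T * pca.explained_variance_[keep] ** 0.5,  # scale eigenvectors to loadings
    index=metals,
    columns=[f"Factor {i + 1}" for i in range(int(keep.sum()))],
)                                       # analogue of Table 2

print(corr.round(2))
print(loadings.round(3))
print(f"Variance explained by retained factors: {pca.explained_variance_ratio_[keep].sum():.1%}")

Flagging loadings above 0.5 as significant would then support a grouping of the kind reported above (Co, Cu and Zn; Cr and Ni; Pb), although exact values depend on the extraction and rotation settings used.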
31,433 | School mobility and prospective pathways to psychotic-like symptoms in early adolescence: A prospective birth cohort study | The Avon Longitudinal Study of Parents and Children is a UK birth cohort study examining the determinants of development, health, and disease during childhood and beyond.The study has been described in detail elsewhere.14,In summary, 14,541 women were enrolled, provided that they were resident in Avon while pregnant and had an expected delivery date between April 1, 1991 and December 31, 1992.A total of 13,978 children formed the original cohort.Ethical approval for the study was obtained from the ALSPAC Law and Ethics committee and the local research ethics committees.Informed consent was obtained from the parents of the children."From the first trimester of pregnancy, parents have completed postal questionnaires about the study child's health and development, while the child has attended annual assessment clinics, including face-to-face interviews and psychological and physical tests.At a mean age of 12.9 years, psychotic-like symptoms were assessed using the semi-structured, face-to-face Psychosis-like Symptoms Interview.The PLIKSi comprises 12 psychotic symptoms, encompassing hallucinations, delusions, and thought interference over the previous 6 months.The items are derived from the Diagnostic Interview Schedule for Children version IV and the Schedules for Clinical Assessment in Neuropsychiatry version 2.0.Trained interviewers rated each item as absent, suspected, or definitely present.The average κ value was 0.72, indicating good interrater reliability.15,Two PLIKS variables9 were considered: probable/definite, and definite.Mothers reported school mobility when children were approximately 9 years of age.In all, 34 mothers reported no school change; 2,698 reported 1 school change; 2,267 reported 2 school changes; and 446 reported ≥3 school changes.Because of the skewed distribution of responses, we constructed a dichotomous variable: “No school mobility” was coded as 0, 1, or 2 different schools and “school mobility” as 3 or more different schools.As indicated by the distribution of the data, most children experienced 1 or 2 school changes.This reflects the progression through the English school system, typically beginning with nursery or preschool at 4 years of age; reception class at 5 years of age; and primary school from 6 to 11 years of age.We used a cut-off point of ≥3 to indicate school mobility as consistent with previously reported definitions of mobile students.16,Residential mobility was “mother-reported” when the child was approximately 5, 6, 7, and 8 years of age.Assessment points were selected to match the period defined for school mobility as closely as possible.A total of 3,748 mothers reported no home moves; 1,565 reported 1 home move; 607 reported 2 home moves; and 218 reported ≥3 home moves.Unlike natural school progression changes, home moves are not normative as the child progresses through school; therefore, we chose a lower threshold of ≥2 moves to indicate residential mobility.Bully victimization was assessed at 10 years by child report with the Bullying and Friendship Interview Schedule.17,Trained psychology graduates asked children about bullying by peers in the past 6 months.Bully victimization was coded as present if the child reported being relationally and/or overtly bullied, either frequently or very frequently at 10 years.Similarly, bully status was coded as present if the child reported relationally and/or overtly 
bullying others frequently or very frequently at 10 years.Bully victimization and bully status at 10 years were very highly correlated.To avoid problems with multicollinearity within the path analysis, we collapsed these variables to create involvement in bullying indices: 0 = no involvement; 1 = involvement as a bully or victim; and 2 = involvement as a bully and victim.Assessment of friendships was based on questions from the Cambridge Hormones and Moods Project Friendship Questionnaire.18,Children were asked five questions during clinic sessions, for example, "Do your friends understand you?" or "Do you talk to your friends about problems?".Responses were summed to create a friendship scale from 0 to 15, with 0 denoting the most positive friends score and 15 the least positive.A number of psychosocial risk factors were assessed.Level of urbanicity was ascertained at birth and was coded in line with previous research as 0 = village/hamlet, 1 = urban/town.19,Multiple social risk factors during pregnancy and from birth to 2 years were assessed using the Family Adversity Index.The FAI consists of 18 items.If an adversity item was reported, it was coded as 1 point, and the points were then summed to derive a total FAI index score for each time point.The 2 FAI scores were summed and incorporated into the analysis as a continuous variable.Ethnic background of the child was based on the ethnicity of the mother and her partner.If the mother and/or her partner reported non-white ethnicity, the child was coded as non-white.Initial analyses were carried out using SPSS version 19 statistical software.Unadjusted and adjusted associations between psychosocial factors, school mobility, peer difficulties, and subsequent psychotic-like symptoms were computed.Unadjusted associations between psychosocial factors and subsequent school mobility, and school mobility and subsequent peer difficulties, were also computed.Results are reported in odds ratios and 95% confidence intervals for dichotomous outcomes and β coefficients for continuous outcomes.Using Mplus version 6, we modeled the pathways via which psychosocial factors and school mobility may be associated with subsequent psychotic-like symptoms.Probit estimation is recommended for path analysis with both categorical and continuous endogenous variables.20,Probit regression is a log-linear approach analogous to logistic regression, producing similar χ2 statistics, p values, and conclusions to logit models.21,Probit regression coefficients indicate the strength of the relationship between the predictor variable and the probability of group membership.They represent the change in the probability of "caseness" associated with a unit change in the independent variable; thus, it is important to keep the scale of the predictor in mind when interpreting probit coefficients.For example, a probit coefficient of 0.034 indicates that each 1-point increase in the Family Adversity Index resulted in an increase of 0.034 standard deviations in the predicted z score of psychotic-like symptoms.Thus, one would expect probit values to be larger for dichotomous predictors, which represent the change from "no caseness" to "caseness" rather than a single value on a continuous scale.The weighted least squares means and variance estimator was used, yielding probit coefficients for categorical outcomes and normal linear regression coefficients for continuous outcomes.Data were available for 6,448 children who completed the Psychosis-like Symptoms Interview15 at the annual
assessment clinic at 12 years.Those who were lost to follow-up were more often boys, non white, of low birth weight, born to single mothers of lower educational level, that is, did not obtain O levels, from families living in rented accommodations, and exposed to family adversity.Those students lost to attrition were also more likely to have moved school ≥3 times, to live in an urban area, and to have been exposed to family adversity.A total of 5.6% of adolescents reported definite PLIKS and 13.7% suspected/definite PLIKS.In all, 13.4% had moved home ≥2 times.School and residential mobility were significantly associated with one another.Mobile students were 3.5 times more likely to have moved home ≥2 times.Unadjusted and adjusted associations between psychosocial factors, school and residential mobility, peer difficulties, and subsequent psychotic-like symptoms are reported in Tables 1 and 2.Urban residence, family adversity, residential mobility, school mobility, and peer difficulties were all significantly associated with PLIKS definite and probable/definite symptoms.Combined bully/victim status was strongly associated with PLIKS definite outcome.In multiple logistic regressions, family adversity, school mobility, bullying, and negative friendship score remained significantly associated with definite PLIKS outcome, whereas urbanicity, family adversity, bullying, and friendship score remained significantly associated with PLIKS probable/definite outcome.After adjustment for all other risk factors, school mobility led to an approximately 1.5 times increased risk, and being both a bully and victim of bullying led to an approximately 2.5 times increased risk of definite PLIKS.Associations between psychosocial factors and school mobility were assessed.Family adversity and ethnicity were significantly associated with school mobility.School mobility was significantly associated with bully status, bully victimization and negative friendship score.We conducted 2 path models using definite and probable/definite psychotic-like symptom outcomes.Based on existing literature, in the first path model we incorporated direct associations between all psychosocial risk factors, sex, school mobility, residential mobility, bullying involvement and subsequent psychotic-like symptoms, and indirect associations from psychosocial risk factors, sex, and school mobility to psychotic-like symptoms.Thus, urbanicity, ethnicity, sex, family adversity, and residential mobility were incorporated as exogenous variables; school mobility and peer difficulties as independent, mediating and dependent variables; and psychotic-like symptoms as the main endogenous variable.The fit indices indicated that there was room for improvement in model fit.Inspection of the modification indices suggested that incorporating a pathway from family adversity to bullying involvement would improve model fit.As this pathway was consistent with the research literature,22 it was incorporated into the final model leading to a considerably improved model fit: definite PLIKS outcome and probable/definite PLIKS outcome.Bullying involvement was incorporated as an ordinal variable consistent with the observed dose–response relationship in the unadjusted analysis.In Mplus, an ordinal variable is treated as a continuous latent variable that exceeds thresholds to give the various outcome categories.One coefficient per ordinal variable is produced.This can be interpreted in the same way as a continuous variable.Direct associations among psychosocial 
factors, school mobility, and peer difficulties are shown in Figure 1.Family adversity and ethnicity predicted school mobility, whereas school mobility predicted bullying involvement and negative friendship score.Boys were more likely to be involved in bullying and to report negative friendships.Direct and indirect pathways to psychotic-like symptom outcome are shown in Table 3 and Table 4.Family adversity, urbanicity, and bullying involvement were independently associated with PLIKS definite and probable/definite symptoms.School mobility was independently associated with PLIKS definite symptoms.There was a significant indirect association between school mobility and PLIKS via bullying involvement, and a significant indirect association between family adversity and PLIKS via bullying involvement.The indirect associations were of a relatively small magnitude, indicating partial mediation.For example, the indirect effect of school mobility on definite psychotic-like symptoms via bullying involvement was 0.018, whereas the direct association between school mobility and psychotic-like symptoms was 0.108.Therefore, the ratio of indirect effect to direct effect was 0.17, that is, the indirect effect was approximately one-sixth of the size of the direct effect.Using data from the ALSPAC cohort study, we explored whether, and how, school mobility might be associated with increased risk of psychotic-like symptoms in early adolescence.First, we found that school mobility is independently associated with an increased risk of psychotic-like symptoms, even when controlling for all other psychosocial risk factors.School change is stressful for students.23,24,Psychologically, it can lead to the formation or exacerbation of negative schemata, such as low self-esteem25 and external locus of control.24,As negative schemata have also been associated with the development of psychotic symptoms,26-28 such schema may represent 1 mechanism by which school mobility could increase the risk of psychotic-like symptoms.In addition, repeated school change may induce feelings of social defeat,29 which, if chronic, may lead to sensitization of the mesolimbic dopamine system, and hence heighten the risk of psychotic-like symptoms in vulnerable individuals.30,Second, school mobility was also associated with an increased risk of psychotic-like symptoms via bullying involvement, indicating a second “indirect” pathway through which school mobility may be associated with increased risk.Consistent with previous research,9,31 we found a significant association between bullying involvement and psychotic-like symptoms; involvement in bullying was the strongest predictor of psychotic-like symptoms, leading to an approximately 2.5 times increased risk.Results here expand on current evidence by highlighting mobile students as an especially “at risk” group for bullying involvement.Consistent with previous research,32,33 we found that mobile students were more likely to encounter negative friendships and bullying.Indeed, research suggests that mobile students tend to view themselves as insecure and to have fewer friends than their less-mobile peers.24,34,These observations are also consistent with the social defeat hypothesis of psychosis, which has been postulated as the mechanism linking social risk factors to psychosis.Therefore, peer problems may add to psychosocial adversities in a cumulative way, presenting a further source of marginalization, exclusion, and social defeat.30,Third, we found that urbanicity, ethnic status, and 
family adversity were independently associated with psychotic-like symptoms.Consistent with previous research, we found that mobile students were more likely to have experienced family adversity and to be of ethnic minority status,13 suggesting that those who experience adversity and marginalization from a young age are more likely to change school more often.However, school mobility was not found to be a mediator of the association between such psychosocial risks and psychotic-like symptoms.Instead, the effects of family adversity were partly mediated by involvement in bullying at school.This confirms previous research that family stresses increase the risk of involvement in bullying22 and adverse mental health outcomes, including psychotic-like symptoms.10,This study has a number of strengths.We used a large, longitudinal data set, and were able to take into account a number of psychosocial factors associated with school mobility and psychotic-like symptoms.Using path analyses, several pathways to psychotic-like symptoms were quantified while taking into account the time ordering of exposures, enabling us to assess the potential temporal associations between school mobility, other risk factors, and subsequent psychotic-like symptoms.There are also limitations to this study.Although we controlled for residential mobility, we were unable to distinguish between school moves with and without concomitant home moves.Although many educators believe that school mobility is an inevitable consequence of moving homes, research suggests that approximately 40% of school moves are not associated with residential changes.23, "Furthermore, we did not control for any pre-existing peer difficulties or individual traits present before the child's first entry into school, which may have contributed to subsequent school mobility35 and bullying experiences.36",Second, there were missing data, resulting in a reduced sample size.This reduces statistical power and therefore works against our hypotheses, rather than inflating effects.37,We found that those who were lost to attrition were more likely to have moved school 3 or more times, to live in an urban area, to be of ethnic minority, and to have been exposed to family adversity.Previous simulations with this longitudinal data resource indicate that selective dropout may underestimate the prevalence of psychiatric disorders but has only a small impact on associations between predictors and outcomes, even when dropout is correlated with predictor variables.38,Nevertheless, selective dropout will have reduced the representativeness of our sample.Third, the psychosis outcome referred to symptoms occurring over the previous 6 months only, and for some adolescents, these phenomena may have been transient and self-limiting.However, recent long-term follow-up indicates that psychotic experiences in childhood highly increase the risk of psychosis in adulthood.4,Our study demonstrates that school mobility is independently and also indirectly associated with psychotic-like symptoms via bullying involvement.As bullying35 and school exclusion39 may significantly contribute to student mobility and are also associated with risk factors for psychosis, including social deprivation,22,40 ethnicity,41 and alienation from mainstream society,40 the impact of school exclusion on mental health outcomes may be a fruitful route of inquiry.Although school moves may be unavoidable, involvement in bullying and isolation from peers are amenable to psychosocial interventions42 and may be 
a focus of attention for mobile students.Reports suggest that teachers may lack the time and resources to ensure that mobile students are adequately established within new school environments.43,Pilot schemes indicate that the addition of dedicated “mobility support workers” may help mobile students to successfully establish themselves within new school environments,44 reducing the risk of bullying involvement and other social difficulties.An awareness of mobile students as a possible high-risk population and routine inquiry regarding school changes and bullying experiences may be advisable in mental health care settings.45,Clinical Guidance,School mobility during childhood may increase the risk of psychotic-like symptoms in early adolescence, both directly and indirectly via increased risk of bullying involvement.When assessing young persons with psychotic disorders, clinicians should explore history of school mobility and its psychological/emotional impact, particularly of bullying and marginalization.Strategies to help mobile students to establish themselves within new school environments may help to reduce peer difficulties and to diminish the risk of psychotic-like symptoms. | Objective Social adversity and urban upbringing increase the risk of psychosis. We tested the hypothesis that these risks may be partly attributable to school mobility and examined the potential pathways linking school mobility to psychotic-like symptoms. Method A community sample of 6,448 mothers and their children born between 1991 and 1992 were assessed for psychosocial adversities (i.e., ethnicity, urbanicity, family adversity) from birth to 2 years, school and residential mobility up to 9 years, and peer difficulties (i.e., bullying involvement and friendship difficulties) at 10 years. Psychotic-like symptoms were assessed at age 12 years using the Psychosis-like Symptoms Interview (PLIKSi). Results In regression analyses, school mobility was significantly associated with definite psychotic-like symptoms (odds ratio [OR] =1.60; 95% CI =1.07-2.38) after controlling for all confounders. Within path analyses, school mobility (probit coefficient [β] = 0.108; p =.039), involvement in bullying (β = 0.241; p <.001), urbanicity (β = 0.342; p =.016), and family adversity (β = 0.034; p <.001) were all independently associated with definite psychotic-like symptoms. School mobility was indirectly associated with definite psychotic-like symptoms via involvement in bullying (β = 0.018; p =.034). Conclusions School mobility is associated with increased risk of psychotic-like symptoms, both directly and indirectly. The findings highlight the potential benefit of strategies to help mobile students to establish themselves within new school environments to reduce peer difficulties and to diminish the risk of psychotic-like symptoms. Awareness of mobile students as a possible high-risk population, and routine inquiry regarding school changes and bullying experiences, may be advisable in mental health care settings. © 2014 American Academy of Child and Adolescent Psychiatry. |
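As a numerical illustration of the probit path-model quantities reported above, the following short Python sketch reproduces the mediation arithmetic using the published coefficients (direct effect of school mobility 0.108, indirect effect via bullying involvement 0.018, 5.6% prevalence of definite PLIKS). The use of scipy here is purely illustrative and does not reproduce the Mplus WLSMV estimation itself; the baseline-probability calculation is an approximation for didactic purposes only.

from scipy.stats import norm

direct_effect = 0.108        # reported direct path: school mobility -> definite PLIKS
indirect_effect = 0.018      # reported indirect path via bullying involvement
print(indirect_effect / direct_effect)              # ~0.17, i.e. roughly one-sixth of the direct effect

# A probit coefficient shifts the latent z score of "caseness" by one unit of the predictor.
# Starting from the reported 5.6% prevalence of definite PLIKS as an illustrative baseline,
# a one-unit increase on a predictor with coefficient 0.108 would change the predicted
# probability approximately as follows:
baseline = 0.056
print(norm.cdf(norm.ppf(baseline) + direct_effect))  # about 0.07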
31,434 | Microclimate and matter dynamics in transition zones of forest to arable land | In ecology, fragmentation is defined as the occurrence of discontinuities in prevalent or native land cover and habitat properties.Although it is a natural process, fragmentation as we observe it today is mainly caused by humans.As fragmentation occurs, it substitutes diverse and biomass-rich ecosystems with intensively used, man-made ecosystems, e.g. agricultural land.Between these ecosystems, i.e. at their edges, transition zones occur through fluxes of matter, energy and information.The processes and effects that occur have been categorised by Murcia into abiotic, direct biological and indirect biological effects of transition zones.Abiotic conditions – such as temperature – affect biological processes and thus habitat functions.In the literature, there is evidence that microclimatic gradients alter processes in transition zones, e.g. litter decomposition.Altered soil and air moisture and temperature in transition zones influence the metabolism of microorganisms, and with that matter dynamics.Wind blowing into transition zones of forests carries nutrients that trees and bushes comb out of the air.This leads to higher nitrogen availability in the transition zone, which enhances wood and leaf litter decomposition.Higher nitrogen deposition might be beneficial for above- and belowground carbon stocks and sequestration in the transition zone, but on the other hand trees are reported to have less wood volume.Fragmentation-related habitat loss is likely to be the most important threat to biodiversity and one reason for the continued extinction of species.Fragmentation is most often caused by an expansion of arable land and increases the ratio of edges to forest interior.Magura et al. have argued that these managed edges with an intensive human impact offer a rather inhospitable habitat in addition to habitat loss caused by fragmentation alone.However, the hospitability of transition zones greatly depends on the species that are investigated.Kark and van Rensburg as well as Lidicker have argued that transition zones can be hotspots for biodiversity and even evolutionary processes as novel niches.Edges caused by roads or with adjacent managed areas can favour exotic species compared to native species.In a review, Fahrig argued that fragmentation has a positive effect on biodiversity.On the other hand, Fletcher et al. argued that this perspective is too onesided and that in fact negative effects on biodiversity occur.Nonetheless, the general mechanisms and influence of processes in transition zones are poorly understood.As Ries et al. have noted, scientists have often merely described the edge effect of a single matrix and then they have extrapolated between matrices.Moreover, many studies focus on the fragment, but Ferrante et al. 
argue that the character of the matrices plays a more important role.In addition, most studies only refer to forested transition, considering it to be 100 m perpendicular to the zero line.Among those studies, few measurements exist for temperate forests.For arable land, Cleugh; Kort and Nuberg reviewed literature on the windbreak effect of forested areas on microclimate, soil conditions and crop productivity.Cleugh and Hughes also provide models based on wind tunnel experiments and analyses of field experiments.Another article by Bird highlights similar positive effects of windbreaks and shelter on pasture.We measured microclimate along different transects between managed continental temperate forests and agricultural land for one year.In addition, we measured soil nitrogen and carbon content as well as litterfall.In this paper, we analyse environmental gradients and their effects on biota and matter dynamics based on the following hypotheses:The width of the transition zone from arable land to forest depends on the measured variable.The abiotic environmental gradients are non-linear across ecosystem boundaries.Biotic effects are the consequences of abiotic environmental gradients in the transition zone.The terminology in this article follows our concept of transition zones in quantitative ecology.The measurements for this study were conducted in northeast Germany in the Federal State of Brandenburg in 2016 and 2017.For a detailed description of methods and data, see Schmidt et al.For hourly microclimatic measurements, an east-facing and a west-facing site were equipped with one transect of five weather stations each – one weather station at the zero line, two within the arable land and two within the forest.For the sake of brevity, positive values are used for distance from the zero line for the arable land, and negative values for the forest.The distances were chosen according to the results of our literature review.At greater distances no significant effects were expected.In our east- and west-facing study design, we wanted to detect environmental gradients for these opposing cardinal directions rather than compare extremes like in north and south direction.Aboveground biomass of oilseed rape, wheat, pea and barley was measured at four 1 m2 plots at different distances from the zero line on the arable land.The aboveground parts of the plants were harvested, oven-dried and weighed.In the forest, the diameter at breast height and the height of trees as proxy for aboveground biomass of pine and larch were measured at three plots.Litterfall was measured at 0 m, -35 m and -70 m in the forest.At each distance, ten litterfall traps were arranged parallel to the zero line with a distance of 1 m towards each other to account for the forest heterogeneity.Soil was sampled at two depths at the transects and analysed for total nitrogen and carbon content.The goal of the analysis of the time series of meteorological and soil parameters was to identify effects that could be ascribed to the position along the transect and separate them from other effects, like e.g. 
measurement imprecisions.To do this, each set of five time series of the same variable measured at different positions along the single transects underwent a principal component analysis.The principal component analysis of time series is meant to decompose the total variance of multidimensional data sets.It yields a set of independent principal components that explain most of the variance of the time series.In terms of microclimatic time series this analysis is done, as the variance can be high and might result in misleading interpretations.In mathematical terms, the principal component analysis performs an eigenvalue decomposition of the covariance matrix of the respective time series.Usually the first principal component is very close to the time series of spatial mean values from all considered sites, and depicts the largest fraction of variance of the total data set.Each of the remaining principal components then describes deviations from that mean behaviour, which can be ascribed to a specific effect.Identification of that specific effect, however, requires additional background data and a sound understanding of the relevant system.Our analysis aimed to identify the principal component that would reflect the effect of position along the transect rather than, e.g., the effect of local soil heterogeneities.We identified the respective component by checking the time series of the relevant principal components for monotonic decrease or increase along the transect.In cases where such a relationship existed, correlation of the single observed time series x with the time series of the relevant principal component PCy was used as a quantitative measure of the strength of the effect.The correlation coefficients rx,PCy were then normalised in such a way that +1 denotes typical time series of the inner forest position, -1 typical time series of positions in the arable land, and any value -1 < x < 1 describing the degree of similarity to either the typical forest or typical arable land time series of the relevant variable.We carried out a Bonferroni-adjusted post-hoc analysis to compare the data on trees, litterfall, soil and above-ground biomass with respect to their position in the transect.To verify whether samples originated from the same distribution, we performed Kruskal-Wallis one-way analyses of variance.The R programming language was used to perform all statistical analyses.The data is available in the accompanying method paper.The measured variables of air pressure, air temperature, precipitation, relative humidity and solar radiation did not follow distinct patterns of a transition zone from arable land to forest at the west-facing site.In the forested transition zone, the relative similarities were rather stable, except for solar radiation.Wind direction, air and soil temperature tended to be more similar to forest patterns; average wind speed was more similar to arable land.Air pressure, maximum wind speed, precipitation, soil moisture and solar radiation did not exhibit a clear pattern along the transect.The main wind direction for this region is southwest.At the west-facing site at 0 and 30 m, the main wind direction tends towards the west, while at 15, -35 and -70 m the direction is south.At the east-facing site, the main wind direction at 15 m is more westerly than the main wind direction of the region.At -35 m, it is the same as for the region as a whole.At 15 and 0 m, the wind direction is more to the south, and is to the south at -70 m.Comparing results from the two transects, 
only average wind speed and direction as well as soil temperature exhibited roughly monotonic patterns along both transects, while solar radiation and precipitation as well as air pressure did so in only one out of the two transects.In terms of absolute values, soil temperature was 2–5 °C higher on average in the arable land of the west-facing site compared to the forest interior in June and July 2016 as well as from March to July 2017.In winter, the forest soil tended to be warmer.Except for January, February and July 2017, soil moisture was lower on average in the forest.Maximum and average wind speeds were higher in the arable land compared to the zero line as well as to the forest interior.At the east-facing site, average soil temperature was approx. 2 °C–4 °C higher on average in the arable land compared to the zero line and the forest interior, except in autumn and winter.The average air temperature tended to be slightly higher in the arable land, except for the period June to September 2016, when arable lands were considerably warmer than the forest interior, by 0.5 °C–2 °C.The average relative humidity was lower in the arable land, while the average wind speed was higher in all months of measurement.The height of the trees per plot is significantly lower at the zero line at both sites with an average height of 18.98 m and 20.52 m compared to the interior plots.This figure does not differ significantly between the plots from 50 to 70 m and 130 to 150 m.The diameter at breast height was not significantly different except for the east-facing site in the 0 to 20 m plot with 24.94 cm compared to 27.8 cm and 25.78 cm.At the east-facing site, the mean dry mass of litterfall of pine was not significantly different with respect to distance to the zero line.The mean dry mass of the litter of larch at the west-facing site was significantly lower in the plot at the zero line compared to 35 m and 70 m towards the forest core matrix.It is not pertinent to compare both sites because of their different tree species and tree ages.For barley, the mean dry biomass was significantly higher at 7.5, 15 and 30 m compared to the zero line.At 7.5 and 30 m, mean dry biomass of barley was not significantly different, while at the 15 m mean, the dry biomass was significantly higher.Pea had significantly higher mean dry biomass at 7.5 and 30 m compared to the plot at the zero line.At 15 m, the mean dry biomass of pea was significantly lower than at 7.5 m and 30 m.The mean dry biomass of oilseed rape was significantly higher at 7.5, 15 and 30 m compared to the zero line.The mean dry biomass at all other distances was not significantly different.Wheat had the statistically highest mean dry biomasses at 15 m, but not different at 30 m. 
However, the mean dry biomass was lowest at the zero line.At 7.5 m, it was also significantly lower than the figures observed at 15 and 30 m.The highest mean values for total soil carbon content were found at the zero line, with 1.56% at the east-facing site and 1.67% at the west-facing site at a 20 cm depth.These values are significantly higher than all other distances except 70 m in the forest.The same holds true for the samples from the 40 cm depth, except for 35 m from the transect in the forest.The lowest values for Ct were found in the arable land, with less than 0.2%.Additionally, Ct was significantly different between 15 m in the arable land and 35 m in the forest at 40 cm depth as well as between 60 m in the arable land and 70 m in the forest at 20 cm depth.In terms of Nt, the highest values were also at the zero line, with 0.13% at both sites.Here, the zero line differs significantly from all other distances.The ratio between total soil carbon and nitrogen content was – with values between 4.17 and 6.12 – the lowest at a depth of 40 cm and in the arable land, except for 105 m in the forest on the west-facing site, where it was 5.13.The widest C:N relationship was found at the 20 cm depth in the forest at both sites, with values between 13.35 and 16.07.We hypothesised that the width of the transition zone from arable land to forest depends on the measured variable.We found that it is smaller for some microclimatic gradients according to the shape of the correlation coefficients of the first principal components compared to other authors.This is in line with other authors.In most cases, the forested transition zone was approx. 35 m, which is only one-third of the extent other authors have assumed.In the arable land, the spatial extent was approximately 15 m at the west-facing, and up to 30 m at the east-facing site.The widths we report here coincide with transition zones of 25 to 50 m for the aboveground space with a maximum of 125 m we reviewed earlier.Differences in the spatial extent compared to other authors might occur due to the physical structure of edges.Moreover, our study comprises measurements for more than one year and covers all seasons.Seasonal differences might be not covered in other studies due to shorter measurement periods.The cardinal direction of measurements in transition zones plays an important role, e.g. for solar radiation.Therefore, results may vary between transition zones for i.e. north- and south- as well as east- and west-facing edges.In our study, we wanted to avoid too strong effects of cardinal directions north and south and use opposing transition zones instead.This might be a reason for differences in microclimatic gradients to other studies.The width of transition zones we report in this article is based on the assumption that the maximum extent of the transition zone in general is not wider than in our measurement design including all other spatial conditions.The first principal component depicts the mean temporal pattern averaged over all positions along the transect.It indicates whether a measurement point is within the assumed maximum transition zone.Although this approach allows separating the spatial effect from other effects, it does not account for the width of microclimatic gradients at the respective positions in the transect and beyond per se.However, the similarities in Fig. 
2 reflect the strength of the spatial effect and a correlation between observed time series and the relevant principal component.Therefore, the monotony of the similarities and its S-shape are the explanatory approach and can be assumed as an approximation to the microclimatic gradients.The strength in our study is therefore not a spatial repetition, but rather a high temporal resolution and the seasonality.The variance is disentangled by the principal component analysis and assigned to the spatial position in the transect.The S-shape and its width figures the similarity of the relationship between measured values and the main behaviour and assigns it to values that are typical for the forest or arable land based on our data.Some of the evaluated microclimatic gradients are S-shaped.On the other hand, for solar radiation, precipitation and some other microclimatic variables, the graphs go up and down and the similarities are not specific to their position in the transect.We especially expected S-shaped gradients for solar radiation in the transition zone.Other authors like Erdős et al. and Wicklein et al. report significant gradients in solar radiation for north- and south-facing transition zones.However, the lacking S-shape of gradients does not mean that there are no relevant gradients per se.In terms of precipitation, the measurement tools tended to be dirty in the forest which might made some measurements inaccurate.For air pressure, there might be no gradient on the measured scale.Shading of trees to a higher distance and the intensity of solar radiation might have influenced gradients in solar radiation.The shape of the gradients may also be inverted over the course of the year: in summer, soil temperature was higher in the arable land compared to the forest.In winter, soil temperatures were lower in the arable land.Ewers and Banks-Leite argued that this is a buffering effect of temperature in the surrounding area of forests.Although they made their argument for tropical forests, we can support this for temperate forests.Another aspect is that the soil in the arable land is bare and unprotected to air temperatures during winter.Like others, we measured higher soil temperatures at the zero line compared to the forest interior.The air temperature was only slightly different over the course of the year.Comparing air temperature gradients for summer months with the results of Erdős et al. or Heithecker and Halpern, we came to similar results: forests are colder when compared to arable or grassland.A change in magnitude over the course of the year was also measurable.This is most probably due to changing foliage and plant cover.For summer months, we can give support for the correlation between distance to the zero line and lower air temperature presented in the meta-analysis by Arroyo-Rodríguez et al. and a review by Tuff et al.Although temperature should be closely related to solar radiation, we were unable to find monotonic patterns along the transects in these time series.At the west-facing site, soil moisture was slightly lower in the forested transition zone relative to arable land and the zero line.This contrasts with the findings of Remy et al. as well as Riutta et al., who have reported drier zero lines.However, Farmilo et al. reported higher soil moisture for small fragments in contrast to continuous forest, which is comparable to a transition zone.The problem with these measurements is that they are difficult to compare accurately, as the two studies from Riutta et al. 
only measured soil moisture occasionally, and Farmilo et al. only four times, while we measured continuously for more than one year.The lack of comparability is problematic, as soil moisture influences the activity of soil biota, which in turn is an important factor for matter dynamics and possible greenhouse gas emissions.Moreover, it was not possible to show precipitation to be a main influencing factor for an altered soil moisture regime, as we did not find clear monotonic shifts along the transects for precipitation.Another microclimatic generalisation can be derived from our results for direction of wind.The wind direction in the transition zone changes due to turbulences caused by obstacles.This is in line with other authors.It is also supported by the average wind speed that changes at both sites and for nearly every month as we report in Schmidt et al.: wind speed at 70 m in the forest was half that of 30 m in the arable land.This penetration distance, the spatial extent of higher wind speed in the forested transition zone compared to forest interior, is also in line with other authors.A transition zone between forest and arable land of altered aboveground biomasses has a width of up to 65 m perpendicular to the zero line.Because of the distances between the plots, this is just an approximation.Nevertheless, the extent appears to be in line with the approximated extent of altered environmental gradients.Considering the whole transition zone, aboveground biomass has an inverted bell shape.With respect to tree height and diameter as an indicator, we found lower aboveground biomass in the forest at the zero line.This was also reported for decreased tree heights at distances of 25 to 30 m by Ibanez et al. and for an urban pine forest by Veselkin et al.Wright et al. found the basal area to be lowest at the zero line but then stabilised at 20 m from the zero line.More generally, Islam et al. have found trees next to the zero line to be smaller and lower in diameter in fragmented forests, which could mean reduced carbon storage or wood volume.This is contrary to Hernandez-Santana et al. and Dodonov et al., who reported an increase in height towards the zero line.Remy et al. argued that wood volume was higher towards the zero line due to increased atmospheric N deposition and favourable light conditions compared to forest interior.Similar results are reported by Wicklein et al. who, in addition, found higher sapling density in north and south-facing transition zones.Most studies like ours only took trees into account, but not the bush and shrub layer.Islam et al. have described this as a problem, albeit a minor one.However, Erdős et al. report the highest vegetation cover in the transition zone between forest and steppe.In the light of this, height and diameter as proxies for lower aboveground biomass in forested transition zones might be not sufficient as shrubs, higher sapling density and herb biomass are not accounted for.These measures should be considered when calculating biomass in transition zones.The influence of this, however, might be case specific.Litterfall was lower at the west-facing site.One reason might be the windward direction of this site, as wind can carry litter into the forest and away from the zero line.In addition, the two to threefold higher average wind speed compared to the interior forest would substantially enhance litter removal in the forested transition zone.Lower litter cover and litter depth was also found by Watkins et al. 
close to roads compared to the forest interior.The biomass in the cropped transition zone increased as distance from the zero line increased.This was also found by Mitchell et al. for soybean, with an increase of 55% to 117% from the zero line to 100 m in the arable land.Mitchell et al. argued that pest regulation has an influence on crop growth, and vice versa.On the other hand, pest regulation is influenced by the distance to forest as well as the general landscape structure.Lower air and soil temperatures and altered solar radiation, as reported by Gray et al. for forest gaps, may cause these effects.Especially, dimmed solar radiation is reported to have a negative influence on crop growth in transition zones, but also affects species composition.Another possible reason for these effects could be an altered soil water regime in the transition zone, e.g. drier transition zones as described in the discussion on microclimate.Kort argued that decreased crop production within 50 m is due to competition between crops and trees for water and solar radiation.In addition, manoeuvring heavy agricultural machinery at field edges might have compacted the soil, which would reduce crop growth.Since we only measured biomass, we cannot make predictions about actual yield.However, we found visible proof that crop anthesis lags behind in the cropped transition zone to up to 15 m from the zero line.That most likely affects the degree of ripeness of crops in the transition zone, and might cause lower yields there, as the harvest is on a fixed date.On the other hand, Ricketts et al. reported increased pollination in the transition zones.In our case, shading by the trees most probably caused delayed flowering.Crop growth in transition zones adjoining forest fragments is influenced by several biotic and abiotic variables.Moreover, the landscape structure plays an important role.However, according to Kort and Mitchell et al. the spatial extent perpendicular to the zero line of decreased crop growth appears between 15 and 50 m.The content of soil carbon and nitrogen was primarily elevated at the zero line.An explanation might be an accumulation of nitrogen from fertilisation and higher atmospheric N deposition.In terms of carbon, a strip of approximately two to three meters with a grassland character directly at the edge might have accumulated carbon in the soil over the years.Therefore, a transition zone can have a maximum width of 50 m perpendicular to the zero line in our experiment.This width is in line with our findings that altered conditions in soils of transition zones occur within 10 to 20 m with a maximum of 50 m.In general, the levels of soil carbon and soil nitrogen were low, most likely due to the sandy soils in this region.This and the rather intensive use of N mineral fertilisers leads to low C:N ratios in the mineral soil.The gradients for C and N levels are most likely bell-shaped, because there was no statistical difference between the arable land and the forest – in spite of what we generally expected and in part due to findings by other authors regarding soil and litter deposition – but there were higher values at the zero line.Higher C and N content levels cannot be ascribed to reduced litter input, as Remy et al. 
found no effect of distance for C and N in needles and leaves.In addition, we only found significantly less litterfall at one site.However, C and N stocks in the mineral soil were higher at the zero line by approximately one-third, which is in line with our findings.For N, the reason might be higher atmospheric N deposition at the zero line, and N being released more quickly from litter and wood.On the other hand, Moreno et al. as well as Vasconcelos and Laurance reported no difference in litter decomposition rates at the zero line relative to the forest interior.It is still unclear what role soil moisture plays in this context.Didham and Remy et al. also found no effect for air temperature.However, Riutta et al. and Simpson et al. reported a correlation between soil moisture, microbial activity and litter decomposition.It could be that the effect of single trees on litter decomposition is underestimated, which makes processes even more complex.Like other authors, we report spatially explicit environmental gradients, their biotic effects and feedback relations.For deeper understandings of landscape processes, researchers often apply mechanistic modelling.In most of the modelling studies that include more than just one ecotope, different ecosystems are modelled independently, without consideration for any lateral connections.Some habitat models have considered at least biotic exchange through individual movement, and hydrological models at watershed level have also included lateral water flows.However, cross-ecosystem relations are rare in models for biomass growth and ecosystem service assessment.Depending on the goal of the model, it may be necessary to account for transition zone gradients and their effects, e.g. when applying forest and crop growth models or biogeochemical models on the landscape scale.Some of the feedback relations seem obvious: soils close to the zero line may contain higher soil carbon content due to litterfall from adjacent trees, while trees are smaller towards the zero line and may store less carbon.Crop yield depressions in the transition zone might result from shading or from competition for water.Higher air humidity at the edge of the forest could decrease evapotranspiration and thus increase the risk of fungal infections, which could consequently affect yields and the quality of agricultural products.These effects – and probably many more – all affect the provision of ecosystem services and hence human wellbeing.With deeper insights into transition zones, we may be able to connect up forest and crop growth models at their ecological boundaries and explore more of these assumed feedback patterns, disentangling some of the complexity.This would be an important step towards a holistic understanding of processes on the landscape scale. | Human-driven fragmentation of landscapes leads to the formation of transition zones between ecosystems that are characterised by fluxes of matter, energy and information. These transition zones may offer rather inhospitable habitats that could jeopardise biodiversity. On the other hand, transition zones are also reported to be hotspots for biodiversity and even evolutionary processes. The general mechanisms and influence of processes in transition zones are poorly understood. Although heterogeneity and diversity of land use of fragments and the transition zones between them play an important role, most studies only refer to forested transition zones. 
Often, only an extrapolation of measurements in the different fragments themselves is reported to determine gradients in transition zones. In this article, we analyse environmental gradients and their effects on biota and matter dynamics along transects between managed continental temperate forests and agricultural land for one year. Accordingly, we found S-shaped microclimatic gradients in transition zones of 50–80 m between arable lands and forests. Aboveground biomass was lower within 65 m of the transition zone, 30 m in the arable land and 35 m in the forest. Soil carbon and nitrogen contents were elevated close to the transition zone's zero line. This paper contributes to a quantitative understanding of agricultural landscapes beyond individual ecotopes, and towards connected ecosystem mosaics that may be beneficial for the provision of ecosystem services. |
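As an illustration of the principal component analysis of the transect time series described above, a minimal Python sketch is given below. The published analyses were run in R rather than Python, and both the choice of the second component as the position-related one and the linear rescaling to the forest and arable-land endpoints are assumptions made here for illustration; in the original analysis the relevant component was identified by checking for a monotonic change of the station correlations along the transect.

import numpy as np

def transect_similarity(data, pc_index=1):
    # data: array of shape (n_samples, 5) holding one microclimatic variable measured at the
    # five transect positions, ordered from the arable land to the forest interior.
    centered = data - data.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))   # eigen-decomposition of covariance
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]                     # sort components by explained variance
    pcs = centered @ eigvecs                                            # principal component time series
    pc = pcs[:, pc_index]                                               # component reflecting transect position
    r = np.array([np.corrcoef(centered[:, i], pc)[0, 1] for i in range(data.shape[1])])
    # rescale so that the forest-interior station maps to +1 and the outermost arable-land
    # station to -1 (a simple linear rescaling assumed here)
    return 2.0 * (r - r[0]) / (r[-1] - r[0]) - 1.0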
31,435 | Helsinki VideoMEG Project: Augmenting magnetoencephalography with synchronized video recordings | Magnetoencephalography is a non-invasive functional brain imaging method that monitors neuronal activity by measuring the associated magnetic fields .In clinical practice MEG is mostly used for pre-surgical localization of epileptogenic zones, where it has been shown to detect sources of pathological activity that are undetectable by other non-invasive techniques .MEG is in many respects similar to the closely related technique of electroencephalography: the two techniques share the underlying sources of the signal, and rely on similar mappings from the activity of these sources to the signals measured by the sensors, yielding similar temporal and spatial resolutions .In clinical practice, EEG examinations of epilepsy patients are routinely augmented with synchronized video recordings —a procedure known as video-EEG or VEEG.In video-MEG—the MEG counterpart of the video-EEG procedure—of epilepsy patients, video has proven useful for identifying artifacts and documenting seizures .Adding time-synchronized video to MEG epilepsy recordings has been demonstrated to significantly affect the interpretation of the data .In addition to clinical applications, video recording of the subject can also be useful in basic research, for example, for verification of subject compliance and performance.Despite the potential benefits, video-MEG recordings have gained little traction with MEG practitioners.One of the main impediments to wider adoption of video-MEG is the lack of practical solutions that would allow an MEG laboratory to integrate video recordings into the workflow with reasonable cost and manpower requirements.While several MEG manufacturers have advertised integrated video recording capabilities in their future products, currently there are no commercial video-MEG solutions on the market.Moreover, these advertised capabilities are only available for new MEG device installations, which makes them irrelevant to existing MEG sites.The Helsinki VideoMEG Project described in this paper aims at remedying this situation by providing MEG practitioners with tools for setting up video-MEG recordings.The project is guided by three main principles:Practicality."The project's goal is to allow a typical MEG facility to establish a video-MEG operation in practice with reasonable monetary and manpower costs.Openness.All the materials are freely available on GitHub1 under an open-source license.The users are free to study, redistribute, and modify these according to their needs.The users are also welcome to contribute modified versions of the materials back to the project.Vendor- and device-neutrality.The project aims at providing a video-MEG solution that is compatible with any MEG device.Establishing a video-MEG operation entails two relatively separate developments: creating an instrumentation setup for recording video and audio of the participant during the MEG experiment in a way that is synchronized to MEG data acquisition, and setting up a facility for the analysis of the resulting video, audio, and MEG data streams.The first task—establishing a video-MEG recording setup—necessarily involves acquisition and installation of video recording hardware.To put video-MEG within a reach of a typical MEG facility, the Helsinki VideoMEG Project has developed a hardware setup that only uses widely available standardized off-the-shelf components and requires no special skills for assembling.The project 
provides all the material necessary for setting up video-MEG recordings.The second task, setting up a video-MEG analysis facility, presents a different set of challenges.On one hand, it does not require any hardware development, as it can be completely addressed by developing appropriate software tools.On the other hand, integration with the existing instruments poses much bigger challenge for video-MEG analysis than it does for video-MEG recordings.Currently, MEG practitioners use a whole spectrum of different tools for analyzing the data."These include open-source packages, such as FieldTrip or MNE , that provide a lot of power and flexibility at the expense of ease-of-use, as well as closed proprietary Graphical User Interface-based software suits heavily optimized for relatively narrow range of workflows, such as Elekta Oy's DANA software.The latter class of MEG analysis tools is especially popular with clinicians, who constitute an important target audience for video-MEG.Ideally, the Helsinki VideoMEG Project should provide software that integrates video analysis capabilities into existing MEG analysis tools, however, in practice this is next to impossible for proprietary software.The project therefore adopts a pragmatic approach of adding video functionality to existing software tools when it is possible, while relying on workarounds when it is not.For the MEG users that use MATLAB- or Python-based scientific computing environments for their data analysis, the project offers MATLAB and Python routines for importing video and audio data and synchronizing it to MEG traces."For MEG practitioners that rely on proprietary GUI-based tools for interactive exploration of the data, the project provides several basic standalone utilities that can be used in conjunction with the user's favorite MEG analysis software.For recording video and audio of the participant, the Helsinki VideoMEG Project employs video recording system, which essentially constitutes a separate instrument that is independent of the MEG device.The only link between the two is the synchronization line that carries the timing information encoded as a sequence of trigger pulses.Thus, any MEG device capable of recording external trigger pulses can be used with the system.Fig. 
1 provides an overview of the video recording setup.Audio and video of the patient are captured with a microphone and one or more video cameras located inside the magnetically shielded room.The microphone and the cameras are connected to the audiovisual computer."The AV computer timestamps the audio and video data and stores it as files on the computer's local hard drive.In addition, the AV computer generates timing trigger sequences that are recorded by the MEG device and used for synchronizing MEG traces to the video and audio streams.The AV computer is a standard amd64-based office PC running a 64-bit version of Linux operating system.The setup was tested with Ubuntu 14.04 LTS and Ubuntu 16.04 LTS distributions of Linux, but other distributions may work as well.The computer needs to be equipped with a parallel port interface for outputting timestamps and an IEEE 1394 interface for connecting the cameras.The firewire link can use either a standard copper cable or an optical fiber for communicating with the cameras, requiring either a standard or an optical interface, respectively.The AV computer runs custom Qt-based software written in C++."The software monitors and records the video and audio streams, and generates the timing signals that are emitted over the computer's parallel port.The project supports firewire machine-vision cameras that implement the industry-standard protocol—IIDC 1394-Based Digital Camera Specifications, also known as the DCAM2—for communicating with the host computer.Either grayscale or color cameras can be used.By default, the software records in a resolution of 640 × 480.If the cameras use an optical fiber for the link to the AV computer, they need a separate power supply, which would typically be located inside the RF-shielded stimulation cabinet of the MEG system.Cameras can be daisy-chained.Currently, the software supports up to six cameras; the performance of the AV computer may also limit the usable number of cameras, depending on the chosen resolution and frame rate.Especially in epilepsy recordings, several cameras can be useful as they provide a better overview of the patient ."The AV computer allows recording the audio from inside the MSR via the computer's sound card. 
"The exact configuration of the audio recording setup depends on the site's specific requirements.One possible choice is an optical microphone, such as Sennheiser MO 2000.The microphone itself does not contain any magnetic parts, and the signal connection is optical, eliminating any sources of interference inside the MSR.For a low-cost alternative, an electret microphone can be used.The system uses the Unix time in milliseconds as timestamps.The recording software timestamps every audio buffer and video frame as soon as they are available.The timestamps are also sent to the parallel port at regular intervals.They are represented as sequences of constant-amplitude pulses, where bits are encoded as delays between subsequent pulses’ rising edges.Values 0 and 1 are represented as short and long delays respectively.Altogether 42 bits are used to represent the POSIX timestamp in milliseconds.The sequence starts with the least significant bit and ends with a parity bit, with a total of 43 bits.Typically the parallel port output is connected to an MEG trigger channel, which is recorded synchronously with other MEG data.Thus a timestamped MEG sample is available every 10 s.In postprocessing, the timestamps for MEG samples between the timing pulses can then be computed by linear interpolation.The video is recorded as individual JPEG-encoded frames; no intraframe coding is currently performed.Thus, individual frames can be easily retrieved independently of each other at the expense of increased disk usage.The JPEG frames are stored consecutively in a single file per camera, interleaved with the timestamps for each frame.The audio is recorded as buffers.Buffer size may depend on the particular sound card and drivers used; a typical buffer size is 1024 samples, corresponding to about 23 ms at a typical sampling rate of 44,100 Hz.Similar to video, audio buffers are stored into a single file, interleaved with their corresponding timestamps."The exact specification of the audio and video file formats is available from the project's website.The Helsinki VideoMEG Project approaches the task of analyzing video-MEG recordings in two ways described below.For the users implementing their own analysis pipeline in MATLAB or Python programming environments, the project offers a complete set of functions for integrating audio and video streams into the analysis, namely routines for loading the audio and video data, and extracting and interpolating timestamps.Additionally it provides utility programs, written in Python, for various ancillary tasks—e.g. exporting video and audio to standard AVI format.The project also offers several documented examples of simple analysis pipelines written in Python.At the current stage, the selection of GUI-based tools for interactive review of the video-MEG data is quite limited."Elekta Oy has demonstrated an experimental version of it's Graph software that allows review of video and audio jointly with the MEG data from the company's VectorView and Triux MEG devices . "This software, however, has never been officially released by Elekta and it's availability to a typical MEG facility is uncertain. 
"The developers of the FieldTrip software have added initial support of the Helsinki VideoMEG Project's video and audio formats to their package.This provides FieldTrip users with a basic tool for interactive review of MEG and EEG traces jointly with video and audio."Notwithstanding the aforementioned options, the lack of a practical GUI-based video-MEG data analysis tool currently constitutes the main hindrance to the project's progress and the main focus of the future development roadmap.We performed a test measurement to assess the accuracy of synchronization between the audio and video streams and the MEG data.The test system consisted of a Dell Precision 490 office PC running the Ubuntu Linux 14.04 operating system.The motherboard integrated sound system based on the Sigmatel STAC9200 chip was used for audio.For video, two Allied Vision Stingray F033 cameras were connected to the system via optical firewire in a daisy-chained configuration.The Elekta TRIUX system in the BioMag laboratory of Helsinki University Central Hospital was used as the MEG device.The MEG was set up to generate trigger pulses once per second.The video cameras were pointed at the trigger interface unit which has LEDs indicating trigger onset.A function generator was set up to output a 1-kHz sinusoidal pulse with a 100-ms duration on the rising edge of the trigger signal.Using an oscilloscope, we verified that the delay between the trigger rising edge and the sinusoid onset was below 1 ms. The output of the function generator was then connected to the line input of the soundcard.Thus the rising edge of the pulse served as a reference event used for testing the synchronization between the three data streams—MEG, video and audio.Each rising edge coincided with the onset of the LED flash and onset of the sinusoidal pulse.Data were acquired continuously for 20 min.Next, the data were synchronized using the VideoMEG Python utilities.After synchronization, tone onsets were localized from the recorded audio data by a correlation with a 1-kHz complex exponential and thresholding.LED onsets were detected from the video data by thresholding: the frame where the intensity of the LED first reached 50% of its maximum was denoted as the LED onset frame."Since the LED may be on for only a part of the video camera's frame acquisition interval, the on/off transition is not necessarily instantaneous in the video.If the synchronization between video, audio, and MEG traces were perfect, the reference event—rising edge of the trigger pulse—would be assigned the same time in all the three data streams.The extent of the discrepancies between the timings assigned to the same event in different data streams characterize the accuracy of synchronization.Fig. 4 shows the histogram of the discrepancies between audio and MEG timings of the reference events.The mean discrepancy was 15.8 ms with a standard deviation 0.157 ms. 
Discrepancies between video and MEG timings of the events varied between 0 and 1 frames.In other words, following a trigger event, the LED was detectable either immediately in the next available frame or the one after that.The probabilities for 0-frame delay were 11.6% and 12.0% for cameras 1 and 2, respectively.Thus most of the LED flashes were detected with 1-frame delay.We have presented the architecture of a low-cost video recording system whose output can be accurately synchronized with the rest of the MEG data stream.The total cost of the system hardware depends on the number and type of cameras and on whether an optical microphone is used; a basic system can be put together for approximately 5000 €.On our setup based on a standard office PC, the system was verified to have relatively short audio and video latencies.Critically, audio and video jitter were small: <1 ms for audio and 1 frame for video in this hardware setup.The fixed audio delay of approximately 16 ms should be low enough for clinical applications.However, for applications demanding maximally accurate synchronization of the external audio stream, we recommend compensating for the fixed delay.This is possible as long as the jitter is negligible.The delay and jitter might depend on the particular hardware used, so we recommend validating individual setups before deployment."Below we outline the rationale for some of the project's design decisions.To synchronize the AV computer to the MEG, the project relies on binary timing signals from the AV computer recorded by the MEG devices’ trigger channel.This approach offers the advantages of being simple and reliable, and, importantly, independent of the internals of the MEG acquisition system.The only requirement for the MEG device is to be able to accurately register external trigger pulses, which in practice is satisfied by any modern MEG system.Encoding the absolute time in a sequence of pulses every 10 s significantly increases the robustness of the synchronization scheme.It allows synchronization in the presence of partially missing or corrupt data and prevents accidental missynchronization of unrelated data streams.The redundancy of such an encoding, complemented by the redundancy provided by parity bits, allows reliable automatic detection of the timing information in the MEG data without the need for the user to explicitly specify the timing channel.To emit the timing signal, the AV computer requires a binary output facility that can produce an arbitrary sequence of pulses and allow accurate enough control of their timing.Whereas there is a multitude of options available for solving this problem—such as digital-to-analog converters, digital input/output cards, etc.—by far the cheapest, simplest and easiest solution employs parallel port.Although the LPT technology is obsolete, parallel ports are still widely available for desktop computers as add-on PCI cards.When selecting a type of camera to be used for recording the patient, we have considered a number of possible alternatives, such as consumer web-cameras, surveillance cameras, and industrial machine-vision cameras.We have finally decided to adopt firewire-based machine vision camera a number of reasons:Such cameras offer much longer product life cycle than most of the alternatives.That means that the cameras, associated materials, and technical support will be available for many years into the future.The cameras rely on open and stable IIDC interface for communicating with the host computer.This allows to avoid a 
vendor/model lock-in, as the camera can be easily replaced with another one from a wide selection of models by different vendors that support the interface. Being designed for integration into third-party systems, machine-vision cameras offer the user extensive control over the camera configuration, both physical and programmatic. This level of control is critical to the project's ability to accommodate a wide range of requirements posed by different clinical and research use cases, such as the demand for recordings in low levels of visible light or experiment-specific requirements for video quality and frame rate. Some firewire-based machine-vision cameras allow the use of optical fiber for the data connection to the host computer instead of the standard copper firewire cable. The cameras still require a conductive cable for supplying power; however, the power supply can be completely independent of the host computer. This greatly simplifies the practicalities of installing the camera inside the MSR. Using an optical fiber instead of a copper cable may also reduce the amount of electromagnetic interference introduced by the setup; however, this was not systematically tested. For the initial clinical validation we constructed a prototype that employs two black-and-white Stingray F-033 cameras at a resolution of 640 × 480 and frame rates of either 30 or 60 fps. This choice of a relatively low-end camera model was motivated by the intent to minimize costs and avoid potential performance problems. In line with previous reports, our experience confirms that even at this modest resolution video provides significant additional clinical value in MEG examinations of epilepsy patients. However, the Helsinki VideoMEG Project can accommodate cameras with higher resolutions, albeit at the expense of higher requirements for the AV computer hardware and storage space. With firewire cameras, for a given frame rate the video resolution is ultimately limited by the firewire bandwidth; however, this limit is well above our current resolution of 640 × 480: there are cameras that attain a resolution of 1388 × 1038 at 30 fps for color video and full HD at 30 fps for black-and-white video. The project employs a simple custom format for storing video and audio data. The audio is stored uncompressed and the video as a sequence of JPEG-compressed individual frames. In the initial stages of the project, we preferred a custom format over commonly used ones because it considerably simplifies the development of video recording and analysis tools. The main drawbacks of using a simple custom data format are the unavailability of the data for analysis by numerous third-party tools, such as video editors, and the increase in storage requirements. To address the first issue, the project provides conversion utilities that allow export of the video and audio data to the widely used AVI format. For the default camera resolution currently provided by the project, the second shortcoming is tolerable, if not negligible. The amount of storage space required by video and audio is comparable to that required by the MEG data; thus most MEG sites should be able to handle the additional storage demands introduced by the video without any significant modifications to their data storage facilities. This situation is, however, bound to change as the project embraces higher-resolution cameras. Hence, transitioning to a more sophisticated video storage format is one of the project's near-term goals. With the exception of a comprehensive GUI-based analysis tool, the
Helsinki VideoMEG Project provides a complete set of instruments for integration of video and audio recordings of the subject into the MEG laboratory workflow. | The primary goal of the Helsinki VideoMEG Project is to enable magnetoencephalography (MEG) practitioners to record and analyze the video of the subject during an MEG experiment jointly with the MEG data. The project provides: (1) hardware assembly instructions and software for setting up video and audio recordings of the participant synchronized to MEG data acquisition, and (2) basic software tools for analyzing video and audio together with the MEG data. The resulting setup allows reliable recording of video and audio from the subject in various real-world usage scenarios. The Helsinki VideoMEG Project allowed successful establishment of video-MEG facilities in four different MEG laboratories in Finland, Sweden and the United States. |
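The synchronization scheme described in the row above (an absolute timestamp emitted as a pulse sequence every 10 s, protected by parity bits and registered on the MEG trigger channel) can be illustrated with a short sketch. The code below is not the Helsinki VideoMEG Project's actual wire format or API: the 8-byte millisecond layout, the one-parity-byte-per-data-byte scheme, and the names encode_timestamp, decode_timestamp and even_parity are assumptions introduced here purely for illustration. It shows how a timestamp could be packed into a redundant byte train suitable for a parallel-port data register and later recovered from the recorded trigger channel, with corrupt or truncated trains rejected rather than missynchronized.

```python
# Illustrative sketch only: NOT the Helsinki VideoMEG Project's actual
# pulse/byte format. It demonstrates the general idea of encoding an
# absolute timestamp redundantly so that corrupt or partial trains are
# rejected instead of producing a wrong alignment.
from __future__ import annotations

import struct
import time


def even_parity(byte: int) -> int:
    """Parity bit that makes the total number of set bits even."""
    return bin(byte & 0xFF).count("1") % 2


def encode_timestamp(ts_ms: int) -> list[int]:
    """Pack a millisecond timestamp into data bytes, each followed by its parity."""
    payload = struct.pack(">Q", ts_ms)        # 8-byte big-endian timestamp (assumed layout)
    train: list[int] = []
    for b in payload:
        train.append(b)                       # data byte for the output port
        train.append(even_parity(b))          # redundancy for error detection
    return train


def decode_timestamp(train: list[int]) -> int | None:
    """Recover the timestamp from a received train; return None if corrupted."""
    if len(train) != 16:
        return None                           # partial train: reject
    data = bytearray()
    for i in range(0, len(train), 2):
        b, p = train[i], train[i + 1]
        if even_parity(b) != p:
            return None                       # parity mismatch: reject
        data.append(b)
    return struct.unpack(">Q", bytes(data))[0]


if __name__ == "__main__":
    now_ms = int(time.time() * 1000)
    pulses = encode_timestamp(now_ms)              # would be emitted roughly every 10 s
    assert decode_timestamp(pulses) == now_ms
    assert decode_timestamp(pulses[:-1]) is None   # truncated train is rejected
```

Once timestamps of this kind have been decoded from the trigger channel, video frames and audio samples can be mapped onto the MEG time axis; a fixed audio delay such as the ~16 ms reported above can then be subtracted as a constant offset, which is valid precisely because the measured jitter is negligible.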
31,436 | Biomass resources and biofuels potential for the production of transportation fuels in Nigeria | The combustion of fossil fuels such as coal, oil and natural gas for the conventional method of producing transportation fuels, chemicals, and power, has been established for many years .This method is a significant global concern as it releases greenhouse gases particularly carbon dioxide into the atmosphere.Petroleum consumption for road transportation is currently the largest source of CO2 emissions .It accounts for 23% of CO2 emissions worldwide and 59.5% of CO2 emissions in Nigeria .In 2013, world CO2 emissions from the consumption of petroleum exceeded 11,830 million metric tonnes World total transport energy use and CO2 emissions are projected to be 80% higher by 2030 than the current levels .The United States Environmental Protection Agency cited in calculated the amount of CO2 emissions from the combustion of gasoline and diesel to be about 8.887×10−3 and 1.0180×10−2 metric tonnes CO2/gallon respectively.According to Howey et al. unless there is a switch from fossil fuel to low-carbon alternative fuel, CO2 emissions from vehicles may not reduce below ~8 kg CO2.One major method which has been studied to reduce CO2 emissions from vehicles is the blending of gasoline with ethanol .It is estimated that about 8.908×10−3 metric tonnes of CO2 are emitted from the combustion of a gallon of gasoline that does not contain ethanol, and 1.015×10−2 metric tonnes of CO2 are emitted from the combustion of a gallon of diesel that does not contain ethanol .Increase in the consumption of ethanol fuel has mitigated increases in CO2 emissions from the transportation sector .To further reduce these emissions, fuel switching to low carbon alternatives such as biomass fuel is essential.This is because, biomass currently offers the only renewable source of energy that can substitute for petroleum fuels as well as reduce CO2 emissions .Globally, biomass fuel is becoming ever more attractive as suitable substitute for fossil fuels due to the increasing demand for clean energy, declining fuel reserves, and its contribution towards reducing dependence on crude oil.The processing of biomass for biofuel, biopower, and bioproducts has important effects on international policy and economy, and on rural development.It reduces the dependence on oil-producing countries and supports rural economies by creating jobs and providing an additional source of income .Hence the purpose of this review.Despite Nigeria having four petroleum refineries with combined crude distillation capacity of 10.7 million barrels per day , an amount that far exceeds the national demand, the country still imports the majority of refined petroleum products.This is due to the low capacity utilization of existing refineries .At 2013, typical capacity utilization for the four existing refineries was about 22%, with crude oil production of 2367 thousand bbl./d.At the same time, approximately 164,000 bbl./d of petroleum, and 82,000 barrels of fuel ethanol were imported .To reduce the nation׳s dependence on imported oil it is important to improve refinery utilization and diversify to other energy resources.Therefore the development of alternative fuels particularly biomass-derived fuels from locally available biomass needs investigation.There is a wide range of biomass conversion processes, at varying stages of technical maturity.Some are commercially available, while others are at demonstration stages.For instance, ethanol production from 
sugar cane is commercially available in Brazil , while biofuel production from algae is at research and development phase .Existing research on biomass resources and the potential for biofuels in Nigeria is focused on power generation and biofuels production from first generation biomass.Typically this substitutes fuel production for food crops.There is currently limited information on the state of biomass conversion technologies for the utilization of non-food crops for transportation fuels production in Nigeria.This paper reviews biomass resources and biofuel potentials to produce transportation fuels, notably biomass resources available from first, second, third and fourth generation feedstocks in Nigeria.It assesses the biomass conversion technologies tested in the country, and the technology readiness level.It also identifies research gaps alongside the policy targets defined for sustainable biofuel production.In addition, the potential for biofuel contributing towards more sustainable production with improved environmental and socio-economic benefits is discussed.More detailed region-specific evaluation of biomass resources can then be used to define the scope for local production of biofuels within Nigeria.The term biomass literally means living matter.However, biomass is often used to describe any organic material obtained from plant and animal tissue .This includes agricultural resources, agricultural residues, forest resources, waste including municipal solid waste, industrial waste, and other wastes, as well as algae.These materials are referred to as feedstocks in bio-refining and are classified into four generations: first, second, third, and fourth.First generation refers to the biofuels derived from agricultural products: sugar or starch-based crops and oilseeds, e.g. sugarcane to produce bioethanol or palm oil for the production of biodiesel.Through fermentation or trans-esterification, first generation biomass feedstocks can be processed into bioethanol or biodiesel respectively.Most common uses are as first generation biofuels.Biomass is abundant in nature and broadly dispersed globally with its distribution being dependent on geographical area.Countries such as Brazil and Nigeria have significant natural resources to produce transportation biofuels, biopower and bioproducts from biomass.Nigeria has substantial biomass potential of about 144 million tonnes per year .According to the U.S. Energy Information Administration most Nigerians, especially rural dwellers, use biomass and waste from wood, charcoal, and animal dung, to meet their energy needs.Biomass accounts for about 80% of the total primary energy consumed in Nigeria ; oil, natural gas and hydro .This large percentage represents biomass used to meet off-grid heating and cooking needs in the rural areas.The total land area of Nigeria is approximately 92,376,000 ha, and is divided into 36 states with available land area of about 91,077,000 ha.Of this agriculture covers 71,000,000 ha .About 37.3% of the agricultural area is arable land, i.e. 7.4% permanent crops and 9.5% forest areas, Fig. 
2, indicating a large share of cultivatable land for biofuel production in the country.As at 2014, the population of Nigeria was 178,517,000 with a density of 193.2 inhabitants per km2 .Rural dwellers account for about 50% of the population .The country Gross Domestic Product at market prices was estimated at US$568.5 billion in 2014 , and annual growth rate at 6.3% of which the agricultural sector contributes about 4.3%.A total of 12.6 million economically active people are engaged in agriculture .The national poverty level in 2009 was 46%, with the human development index at about average in 2013 providing a significant and growing demand for energy.Biomass feedstocks can be converted into different fuels using a range of processes to generate heat or electricity, produce liquid biofuels, or biogas.Most of the emerging bio-refineries in Nigeria use first generation biomass feedstocks.These sources are largely food crops and thus not sustainable for biofuel production.First generation bio-refining is largely driven by legislative targets and favourable taxation to increase biofuel supply Most notable is the directives set by the EU, the US Energy Independence and Security Act of 2007 , and the 2005 Nigerian Biofuel Policy Incentive which sets legislative targets for establishing biofuel markets by providing exemptions for biofuel industries from taxation .The adverse effects first generation biomass feedstock have on global food prices is moving research into the use of lignocellulosic biomass resources, otherwise known as second generation biomass feedstocks.These feedstocks include crop residues, wood residues and dedicated energy crops cultivated primarily for the purpose of biofuel production.Second generation biomass feedstocks are increasingly gaining interest globally as sustainable alternative to fossil fuels because they are not food crops and so not in competition with food .There is a wide variety of photosynthetic and fermentative bacteria and algae that are currently being explored as biocatalysts for biofuel production because of their high carbohydrate or lipid/oil contents.These microbial cells are categorised as third-generation biomass feedstock .In comparison to first- and second-generation feedstock for biofuel production, these microbial cells are more sustainable because, they do not require arable crop lands or other farming inputs for cultivation, and so are not in competition with food.Algae have a fast growth rate compared to other terrestrial plants and can grow in different liquid media – wastewater streams which are common in the Niger Delta region of Nigeria.Algae can also be cultivated in high yields using bioreactors.It is estimated that microalgae could produce about 10 to 300 times more oil than traditional or dedicated energy crops in future .However, the algal-based oil production platform is technologically immature.Biofuels from third generation sources have limitations in terms of economic performance, ecological footprint, reliance on sunlight, geographical location and so are inadequate to substitute for fossil fuels .Metabolic engineering involving biosynthesis can improve alcohol productivity.This process is categorised as fourth generation bio-refining.Genetic modification can be used to increase CO2 capture and lipid production as well as develop low input, fast growing energy crops with reduced fertilizers, insecticides, and water requirements.For instance, genetically modified wheat and barley contain more hydrolysable biomass .Advanced 
biofuels such as biobutanol and biomethane are gaining as much investment support currently as bioethanol, due to their high energy density, low hygroscopic and less corrosive nature.Biomass feedstocks can be grouped into: agricultural resources, crop residues, forestry resources, urban waste and other waste.The Nigerian agricultural sector is characterized by traditional smallholders, who use simple production techniques and bush-fallow systems for cultivation .However, this accounts for about two-thirds of the country׳s total agricultural production .Prior to the discovery of oil and gas, agriculture was the mainstay of the nation׳s economy .It accounts for over 50% of the GDP and 75% of export revenue .But with the rapid growth of the petroleum industry, there has been gross neglect in agricultural development, and this has led to a relative decline in the sector.Agriculture in Nigeria is also influenced by the climatic and vegetative zones.Nigeria is classified into eight agro-ecological zones based on the temperature and rainfall pattern from the north to the south.The variation in rainfall, temperature, and humidity, and its effect on the natural vegetation zones determines the types of indigenous plants that are grown in the country.Apart from the ultra-humid belt along the coast which has an average rainfall of 2,000 mm/year, the climate in the north is semi-arid, while that of the south is humid .The humid tropical zone favours the growth of low base saturation and low solar radiation crops such as cassava, rice, sweet potatoes, and some grasses .The ecological zones are distinguished by the northern Sudan Savannah, southern rain forest, and Guinea Savannah or Middle belt .The savannah land represents 80% of the vegetation zones, and serves as natural habitat for grazing large numbers of livestock such as camels, cattle, donkeys, goats, horses, and sheep .The humid tropical forest zone in the south which has longer rains when compared to the savannah land in the north, has the capacity of supporting plantation crops such as cocoa, coffee, cotton, oil palm, rubber, and staple crops like cassava, cocoyam, cowpeas, groundnut, maize, melon, rice, sweet potatoes, and yam .The increasing rainfall from the semi-arid north to the tropical rain forest south also allows for crop diversity, from short season cereals, millet, sorghum, and wheat in the north to cassava, rice, and yams in the wet zones.Cash crops such as cotton, groundnuts and tobacco are grown in the drier north, while cocoa, coffee, ginger, rubber, sugar, and oil palm are grown in the south .Energy crops such as sugarcane, cassava, sweet sorghum, and corn are plants with high energy content that can be grown specifically as biomass feedstock .They can be grown on marginal or degraded agricultural land.However, their growth is also based on rainfall distribution.Nigeria has huge potential for energy crop cultivation, and for biofuels production due to the availability of arable lands and water .The Food and Agriculture Organization of the United Nations estimates in Table 3 shows substantial cultivation of energy crops in Nigeria over a ten-year period.There have been continuous increases in the areas of land harvested, and tonnes of energy crops produced in Nigeria from 2004 to 2013.Especially with crops such as sugarcane and cassava which are the major feedstocks for emerging biofuel projects in Nigeria.Ethanol production from sugarcane is currently the most attractive alternative to fossil fuel as it achieves significant 
GHG emission reductions .It is obtained from renewable biomass: sugarcane and bagasse.Brazil and the United States are the largest producers of ethanol from sugarcane with both countries accounting for about 86% of total bioethanol production in 2010 .In Brazil, the introduction of ethanol in automobiles reduced carbon monoxide emissions from 50 g/km in 1980 to 5.8 g/km in 1995 .The Brazilian economy has grown its sustainable biofuel production from sugarcane as the government-implemented policies have encouraged the production and consumption of ethanol .Sugarcane is grown in several parts of Nigeria, usually on small holdings of 0.2 to 1.0 ha for chewing as juice and as feed for livestock .Following the increase in demand for biofuel production in Nigeria, sugarcane is now grown on a large scale as an industrial raw material.The Nigerian National Petroleum Corporation identifies sugarcane, cassava, sweet potato and maize as the main raw materials for bioethanol production for its Automotive Biofuel Program .Over $3.86 billion has been invested in sugarcane and cassava feedstock plantations in Nigeria, and in the construction of 10,000 units of mini refineries and 19 ethanol bio-refineries, for the annual production of 2.66 billion litres of fuel grade ethanol .Cassava is grown on a commercial scale with Nigeria being the largest producer of cassava in the world .About 75% of Africa׳s cassava output is harvested in Nigeria .Agricultural areas harvesting cassava increased from 3,531,000 ha in 2004 to 3,850,000 ha in 2013 with increases in production growing from 38845000 t in 2004 to 54,000,000 t in 2013.Production growth at this rate was the result of a transformation from cassava being firstly a famine reserve crop, then a rural staple food crop, to then a cash crop for urban consumption, and finally to use as an industrial raw material .The International Fund for Agricultural Development and the Food and Agriculture Organization of the United Nations review of cassava in Africa shows how the planting of new varieties of Tropical Manioc Selection cassava developed by International Institute of Tropical Agriculture, and given to Nigerian farmers in the 1970s has transformed cassava from being a low-yielding famine-reserve crop to a high-yielding cash crop.Aside from sugarcane, cassava produces the highest amount of carbohydrate.Depending on the cultivar, field management and age, a mature cassava root ranges in length between 0.15 to 1.0 m and weigh 0.5 to 2.5 kg.With time, up to an optimal period of 12 to 15 months after planting, the starch content increases .Cassava requires at least 8 months of warm weather to produce a crop, and may be harvested between 10 to14 months .The Nigerian weather favours its growth especially in the humid tropical zone .Thus, cassava has high potential as industrial raw material for ethanol production in Nigeria.The process for conversion of cassava for biofuel is already established .However, cassava is considered as a first generation biofuel feedstock in direct competition with food.Despite this, cassava residues which are nonedible can be used for biofuel production.Nigeria has a residue potential of about 7.5 million tonnes/annum .The use of sorghum as an alternative bioenergy feedstock increased in interest from the 1970s .With the 2008 Farm Bill classification of the sorghum grain as an advanced biofuel feedstock, ethanol production has developed further as a new and important market .Sorghum cultivars are considered as efficient biomass feedstocks 
for energy conversion because the cultivars possess fermentable sugars that are readily available within the hollow stem of the plant.Hence enzymatic conversion of starch to sugar is not necessary, thus giving sorghum an economic advantage over starch-based crops.Nigeria ranks amongst the top four producers of Sorghum in the world: United States 18.68%, Nigeria 17.12%, India 11.27%, and Mexico 9.81% .In 2013, sorghum was harvested on 5,500,000 ha of agricultural land in Nigeria, with aggregate production of 6,700,000 t.It is one of the most drought-resistant crops cultivated in the central and western areas in Nigeria .Traditionally, the crop is cultivated for food and beverages and roofing in local communities.However, it is an increasingly attractive feedstock for bioethanol production because of its high sugar content.The stalk of sweet sorghum is rich in fermentable sugar which can be extracted as juice .This contains ammonia, acid and minerals enabling it to be used for multiple purposes including fermentation for bioethanol production.Conventional fermentation technology can be used to convert the juice in sorghum into alcohol.The bagasse can be used for co-generation of steam or electricity or as feedstock for cellulosic biofuel production.World Agricultural Supply and Demand Estimate report cited in states that 26% of domestic grain sorghum is utilized in the production of ethanol.Sweet sorghum has the potential to produce up to 1319.82 gallon/ha of ethanol but its use as a feedstock is constrained because of its seasonality.Maize, also referred to as corn, is a major feedstock for the production of liquid biofuel.It accounts for about 8.4% of global ethanol production .In the U.S., it is the major feedstock for the production of ethanol .Ethanol obtained from maize accounts for 48 to 59% greenhouse gas emission savings and 1.5 to 1.8 energy balance in the U.S .However, maize has not been favoured as an ethanol feedstock outside of the U.S. 
because of concerns about competition with food.Ethanol obtained from maize is estimated to reduce greenhouse gas emissions by 40% .Maize can be cultivated widely though it is grown mainly in temperate climates.Maize has high productivity per unit of land.In the right environments the agricultural output of maize is about 7 to 11 t per ha higher than other cereals; its ethanol yield can amount to 769.89 gallons/ha of corn .However it uses large amounts of fertilizers and pesticides, and thus consumes fossil fuel energy .The conversion process also consumes energy of about 41.60 GJ per ha maize .But water consumption is relatively low : between 3 to 4 L of water per litre of ethanol for feedstock production and in the ethanol conversion process .Oil palm is a valuable energy crop comprising a kernel enclosed with pulp and mesocarp.The pulp is edible oil, while the kernel oil is used primarily for the production of soap.Both parts can be used in producing biodiesel.Oil palm is the fourth most produced commodity in Nigeria .It is grown predominantly in the south-eastern part of Nigeria on small-scale farming, and as semi-wild palms .Its 2013 production was about 960,000 t, and market share 3% .Oil palm accounts for about 10% of global biodiesel production, and is rapidly increasing particularly in Indonesia and Malaysia .It is the most efficient source for biodiesel yield per unit of land compared to other oil crops such as soybeans, rapeseed or sunflowers.The average oil yield from oil palm is about 3.74 t/ha/a while for soybean oil it is 0.38, sunflower 0.48, and rapeseed 0.67 .With the increasing demand and high potential for expanded trade opportunities in developing countries, new oil palm plantations for biodiesel production are emerging in Africa and in Latin America .However, Malaysia, Indonesia and Nigeria remain the top three producers of oil palm in the world .Nigeria is ranked 13th largest producer of soybean in the world with average yield of 591,000 metric tonnes .It is estimated that Nigeria had the potential of producing 284.5 ML of biodiesel from 638,000 ha of soybeans cultivated in 2007.Soybean is a major feedstock for biodiesel production in the U.S and in Latin America .China is also a major producer of soybean; however, it does not use it as feedstock for biodiesel production because of its competition with food.Argentina and Brazil are expected to expand to soy oil for biodiesel production due to the availability of land and relatively lower production cost.Though under the current market forces, soybeans tend to be grown as a single crop in these countries, thus posing the challenge of sustainability.Research and development into new feedstocks to support future biofuel expansions from versatile crops and non-edible oil seeds such as jatropha is ongoing in Asia and other countries .However, the large gap between future demand and potential domestic supply requires expanding biofuel production in developing countries, which have the land and the climate required for large- scale production of feedstocks.Jatropha is a second-generation dedicated energy crop.Its cultivation does not compete with food and other cash crops for arable land as the plant has the ability to survive in marginal lands.Following the rainfall distribution, Jatropha can be grown in all ecological zones in Nigeria.Its rainfall requirement is not heavy.The plant can thrive in average annual rainfall of about 250 mm .In Nigeria, Jatropha is yet to be appreciated as a viable economic crop.Its cultivation 
is still limited to its use as decorative plant or as hedge crops in rural communities.However, various development projects are ongoing across the country for its use as feedstock for biofuel production .Environmental concerns on the impact of jatropha cultivation on soil quality as well as the effect of caustic effluents from processing of jatropha oil and the toxicity of jatropha seeds, cake, and extracted oil were initially evident.However, these concerns were reduced after it was discovered that jatropha is a viable option for the remediation of metal contaminated soils/land, following the study of Kumar et al. that assessed the remediation potential of jatropha on soils contaminated by petroleum exploitation and spillages from petroleum transportation and products distribution in the Niger Delta region of Nigeria.The costs and benefit of investing in biodiesel production from jatropha is worth assessing to ensure the benefits outweigh the economic, social, and environmental cost.Although biofuels can be produced from several other energy crops such as Miscanthus, this paper is focused on those currently in use in Nigeria, so as to align the feedstock availability with the technology readiness level in the country.Central to the selected feedstocks are: the production concentration, input requirements and derivable yield.These vary considerably among crops and locations.Table 5 shows a summary of derivable biofuels from ten different feedstocks: cultivated areas, biofuel type, and estimated biofuel production potential.In terms of biofuel production potential, the feedstock with the highest yield is palm oil while coconut has the lowest yield.Some of the other feedstocks, in order of decreasing biofuel yield are: cassava, groundnut, corn, sugarcane, soybeans, sesame, and cotton seed.This estimate affirms the availability of biomass resources for biofuel production in Nigeria.However, the resources considered in Table 5 are first generation biomass feedstocks in competition with food.Nigeria needs to harness its renewable energy potential from non-food biomass feedstocks for sustainable production of biofuel.Biofuels can positively influence agriculture if non-food feedstocks are utilized.The United States Department of Agriculture estimates that net farm income in the U.S. 
can increase from 3 to 6 billion US$ annually if switchgrass is used as an energy crop. Grasses have high fibre content and can be converted into biofuel through various biomass conversion techniques, including cellulosic fermentation to ethanol. Most grass species such as Pennisetum, Andropogon, Panicum, Chloris, Hyparrhenia, Paspalum and Melinis, used as hay and pasture for livestock feed or for soil conservation, can serve as energy crops. It is estimated that 200 million tons of dry biomass can be obtained from forage grasses and shrubs. In Nigeria, grassland occupies about 23% of the total land area and is concentrated mainly in the Guinea savannah, which is situated in the middle of the country and extends southwards to southern Nigeria. These grasses are currently underutilised. Agricultural residues are organic materials produced as by-products in the course of harvesting and processing agricultural crops. They are classified into two categories: crop residues and agricultural industrial by-products. Crop residues produced during harvest are primary or field-based residues, while those produced alongside the product at the time of processing are secondary or process-based residues. Depending on the mode of handling, both field-based and process-based residues have high potential for energy production. Like grasses, they are underused. About 50% of agricultural residues are burnt on cropland before the start of the next farming season. They are usually used as fodder for livestock, as fertilizer for crop regrowth, for soil conservation, or are burnt off. Agricultural residues vary: their bulk density, moisture content, particle size and particle distribution depend on the age of the residue, stage of harvest, physical composition, and length of storage and harvesting practices. Table 6 shows estimates of some major crop residues available in Nigeria. These residues have huge energy potential and can contribute greatly to the nation's economy, particularly those from cassava, rice and maize. The use of non-traditional feedstocks such as straws, stalks, and bagasse for biofuel production has the advantage of still contributing food to the market, since only the crop residues are used for biofuel production. Irrespective of the process technologies used in bio-refining, intractable waste products that are difficult to convert into valuable biofuels or biomaterials will be generated. These spent biomass residues may contain lignin fragments, residual carbohydrates, and other organic matter that need to be treated in an environmentally friendly manner, so as to leave little or no ecological footprint. Such wastes and residues are important energy sources in biorefineries given their chemical
energy content, and are ideal feedstocks for thermochemical conversion to syngas .Nigeria׳s land covers range from tropical rain forest in the south to Sahel savannah in the northern part of the country .The rain-forest area generates more woody-biomass than the savannah areas which generates mostly crop residues .About 9.5% of Nigeria׳s total land area is occupied by forest .But approximately 1200 km2 of the forest is lost annually .Table 7 shows the extent of Nigeria׳s forest, and the annual change rate from 1990 to 2000 and from 2000 to 2010.Forest is distributed across Nigeria as seen in Table 8.About 95% of these conventional forests in Nigeria are government owned .Unfortunately, these forest areas are not properly secured and their resource not conserved.Private individuals easily trespass into the forest and extract its resources for firewood.And so the exact potential of the country׳s forest biomass is not well known because of poor records of the forest resource production and exploitation.Forest biomass is categorized into above-ground biomass, and below-ground biomass.Above-ground biomass comprises all living biomass above the soil, which includes barks, branches, foliage, seeds, stems and stumps .The concentration of woody above-ground biomass in Nigeria is shown in Fig. 5.A large amount of this is in the Niger Delta region of the country – the same region that houses most of the existing petroleum refineries.Below-ground biomass is all living biomass of live roots.Sometimes, fine roots of less than 2 mm diameter are not included as they often cannot be empirically differentiated from soil organic matter or litter.Forest is a major source of biomass that has the potential of contributing substantially to a nation׳s biofuel resources.Global Forest Resources Assessment of Nigeria forest biomass is presented in Table 9.The wealth of forest biomass can be harnessed by utilizing its resources for industrial purposes.Forest-based industries have the opportunity of maximizing renewable energy resources to stir development, create reliable fibre supply, and contribute to domestic economies.For example, forest-based companies are now in the market producing liquid biofuels and other biomaterials through the development of ‘bio-refineries’ .Many countries are providing support for the development of biofuels and bioenergy, which is somewhat directed towards the forestry sector, as it is believed that the forest industry has a feasible future, particularly with the increasing emphasis on ‘green economy’.Canada has paused production at its old pulp and paper mills under its ‘Bio-Pathways’ project, in order for the country׳s forest industry to focus on developing the potential of new sawn wood, and other valued wood products to transform its pulp and paper mills into bio-refineries that can produce bioenergy, valuable chemicals and high-performance fibres for advanced applications .Similarly, Nigeria׳s Biofuel Research Agency is coordinating the biofuel crop production optimization programme in collaboration with Forest Research Institute Nigeria to develop the country׳s biofuel feedstock.Currently, the largest fuelwood sources are forests, communal farmlands and private farmlands .Wood fuel, including wood for charcoal is a major biomass feedstock used in Nigeria to meet household energy needs.It is the highest produced forest biomass in Nigeria.In 2008, over 62.3 million m3 wood fuel was produced and consumed.It is estimated that about 55% of annual global use of wood is utilized as 
fuelwood in developing countries .Forest residues are largely untapped biomass energy resources in most parts of Africa .They consist of wood processing co-products such as wood waste and scrap not useable as timber, that is, sawmill rejects, veneer rejects, veneer log cores, edgings, slabs, trimmings, sawdust, and other residues from carpentry and joinery.They also include green waste from biodegradable waste which can be captured and converted into biofuel through gasification or hydrolysis .Like agricultural residues, forest residues are by-products of forest resources.They can be harvested alongside forest resources, and so do not need additional land for cultivation.The availability of forest residues depends on the productivity of the industry where they are obtained.Typical residue yield from a tropical sawmill for export is between 15 and 20% of the total biomass, or 30–45% of the actual biomass delivered to the sawmill.These biomass types vary in composition, volume and quality, depending on the processing steps and soils of origin .The Nigerian environment is highly polluted with enormous amounts of waste: municipal solid waste, food waste, industrial waste, and animal waste, and these are a major problem in the country.Urban waste and other by-products rich in biomass can be used as feedstocks for biofuel production.The biofuel concept that is capable of producing immediate benefit is biogas from wastes .This does not require irrigation or land input, and could aid reduction in the pervasive use of firewood as well as create a clean environment.Biogas, a methane-rich gas produced by anaerobic treatment of any biomass, is a multi-benefit, flexible technology that can be applied on household scale, community scale or industrial scale .The technology is straightforward and practicable on both small and large scale.Beside electricity generation, biogas produces fertilizer as a valuable by-product.Biogas can also be upgraded to transportation fuel.However, in Nigeria, the preferred use is as cooking fuel, even though electricity generation would seem an attractive option for large-scale applications considering the poor electricity situation in the country.The technology to utilize household waste, sewage, industrial waste, and other organic waste can be implemented virtually everywhere in Nigeria.Thus biogas production is an effective way to dispose organic waste, generate energy, produce fertilizer, and circumvent the issue of land and new cultivated areas.A considerable amount of waste is generated in some major cities in Nigeria.Maximizing this waste for biogas production, instead of open burning, could circumvent the tremendous sanitary problem posed in the country.The rate of municipal solid waste generation is highly influenced by population, income level, and activities .The type, amount, and concentration of household, commercial, and industrial activities determine the volume of waste generated in a municipality.Table 12 shows total solid waste generated in some major cities in Nigeria.Municipal solid waste is comprised of two main components: biogenic and non-biogenic.The separation of the biogenic component from the non-biogenic components is not efficient in developing countries, especially in the rural areas where there are no proper waste management facilities, so the wastes are burned.In the urban areas, they are basically discarded in dump sites, or used in landfills.Anaerobic digestion of organic waste in landfill releases methane and carbon dioxide into the 
atmosphere; this pollutes the environment.The biogenic component can be treated by anaerobic digestion to produce biogas methane.It was estimated that 1 t of MSW deposited in landfill is capable of producing between 160 and 250 m3 of biogas .The non-biogenic component constitutes the non-biodegradable inorganic substance, such as metal and plastics.Most of the MSW in Nigeria contains biodegradable waste materials because of limited industrial activities in the country.Solid and liquid food wastes are generated daily by the food processing industries, hotels and restaurants.These wastes include foods that are not up to the specified quality control standards, peelings and remnants from crops, fruits and vegetables.Hotel and restaurant contributions to the GDP of Nigeria are on the increase.With population of over 170 million, the food industry generates a considerable amount of wastes .Currently, most of these solid wastes end in dumpsites while the wastewater from food industries which usually contains sugars, starch and other dissolved and solid organic matter, constitute environmental pollution.These food processing wastes which include wastes from dairy and sugar industries, and from wine and beer production, can be anaerobically digested to produce biogas or fermented to produce ethanol .The conversion technology depends on the nature and volume of the available waste, and the desired end product.Waste cooking oils can be filtered and used as straight vegetable oil or converted to biodiesel .Also, waste streams with smaller volumes can also be maximized.Large amounts of effluent or wastewater containing organic or inorganic substances are discharged from industries.This may require wastewater treatment depending on the characteristics and amount of wastewater.These industrial wastewater or sewage sludge can be anaerobically digested to produce biogas .However, industrial wastewater treatment in Nigeria is minimal.Most industrial wastewaters are disposed of directly into rivers.Only a few industries carry out primary wastewater treatment by employing either on-site or off-site disposal methods.Nigeria urgently needs to invest in both sewage systems and waste management in order to maximize these resources.It is estimated that Nigeria can produce 6.8 million m3 of biogas daily from fresh animal waste, as 1 kg of fresh animal waste produces approximately 0.03 m3 gas; and Nigeria generates about 227,500 t of fresh animal wastes per day .Animal waste accounts for 61 million tonnes/year of Nigeria׳s energy reserve .Like agriculture residues, animal wastes are a by-product of livestock rearing.The most common domesticated livestock production in Nigeria comprises cattle, pigs, goats, sheep and chickens.The wastes from these animals are one of the most suitable materials for biogas production through the process of anaerobic digestion.Cattle, goats, and sheep are largely reared in the northern part of Nigeria, while pig cultivation is common in the south.On a typical commercial farm in the north, over one thousand cattle can be found, whereas chicken production predominates in the south.Generally, the majority of urban and rural households in the country keep at least three poultry birds among other ruminant livestock .In terms of biogas production potential from animal waste in Nigeria, the north can be considered more sustainable because of the amount of cattle waste in that region generates.Urban waste and other waste from non-food crops have high potential for biofuels in Nigeria.They can 
contribute greatly to supplying a sustainable and clean energy future particularly in the transport sector if technologies to harness these resources are developed.Following recurrent fuel scarcity issues in Nigeria, and increases in petrol and petroleum prices, Nigeria has started to diversify its fuel supply to use its natural resources more effectively.Biofuel is an attractive alternative to substitute for fossil fuel.Solid biofuels which are used mainly in developing countries, especially wood, account for about 69% of world renewable energy supply, while liquid biofuels account for 4% of transportation supply and 0.5% of global Total Primary Energy Supply .Biogas share is about 1.5% and has the highest annual growth rate of 15% since 1990 compared to other biofuels.Liquid biofuels have a significant annual growth rate of 11%, whereas solid biofuels have an annual growth rate of 1% .IEA Statistics reveal that since 1990, bioenergy share has been about 10% of global TPES with an average annual increase of 2% .Bioenergy supply has increased from 38 EJ in 1990 to 52 EJ in 2010 following the rising demand for energy in non-OECD countries and the new policies to increase the share of renewable and domestic energy sources in both OECD non-OECD-countries.Bioenergy is typically the major source of energy in developing non-OECD countries, but covers a minor share of TPES in OECD-countries.China and India were the largest bioenergy producers in 2010, producing 20% and 17%, respectively of the world׳s bioenergy .China׳s share of bioenergy was less than 10% of its TPES while India׳s share was nearly 25%.Nigeria and United States were the third and fourth largest bioenergy producers with shares of over 80% and below 4% respectively .Given that the Clean Development Mechanism of the Kyoto Protocol obligates 15 rich countries to invest in green energy in developing countries, the Nigerian National Petroleum Corporation renewable energy program is likely to attract investment grants.To date, 70,000 Euros grants have been received by NNPC from Germany׳s Renewable Energy, Energy Efficiency Partnership .The programme is expected to improve the ability of the agricultural sector, create jobs in the rural areas, maximize the country׳s carbon credits in line with Kyoto protocol of which Nigeria is a signatory , and attract grants/funds to the NNPC, while creating opportunity for foreign exchange earnings in the country by exporting surplus products and freezing crude oil in the country that otherwise would be used.According to the former Group Managing Director of the NNPC, Kupolokun, Nigeria will earn about $150 m annually from the biofuel initiative after take-off .NNPC has an intricate biofuel production program; effort should be directed towards its actualization, rather than on the importing of ethanol.The August 2005 Nigeria Automotive Biomass Programme was established to develop two major types of biofuels: ethanol from cassava and sugarcane, and diesel from oil palm, as well as integrate the downstream petroleum sector with the agricultural sector.The renewable energy program is expected to expand the country׳s energy base and create commercial opportunities for the corporation through partnerships with the private sector, in the form of Joint Ventures and agencies with the requisite expertise, such as the various agricultural research institutes in the country.NNPC has MOUs in place with two Brazilian companies, Petrobras and Coimex to leverage on their experience and marketing expertise.Talks with 
Venezuela׳s PDVSA were also revived for technology transfer for converting cassava to ethanol .Conversion of starchy feedstocks such as cassava, maize, rice, sweet potato, and yam into bioethanol has been successfully commercialized in several countries.In the U.S., 13.9 billion gallons of bioethanol was produced in 2011, almost all of which was from maize .Nigeria׳s climatic conditions support the production of maize and other starchy feedstocks.Currently, bioethanol, biodiesel, and biogas are the major biofuels produced in Nigeria, of which biogas is more feasible at industrial scale compared to biodiesel which is still under investigation.Biogas also has the advantage of reduced impact on the environment as its feedstock does not pose the threat of deforestation.Ethanol production in Nigeria has been in existence since 1973 with cassava as the main feedstock.One of the major biofuel companies in the country, the Nigeria Yeast and Alcohol Manufacturing Company plans to establish a 200 million USD ethanol plant, with a targeted production of 30 million litres annually .Considering the fact that fossil-based fuel is not keeping pace with the increasing demand for environmentally friendly fuel, it is anticipated that biofuel will significantly impact on the country׳s petroleum products quality.It has the potential of replacing toxic octane enhancers in gasoline, and thus, reduces particulate emission, tailpipe emissions and ozone pollution.Other anticipated benefits of biofuels are increased economic development, more tax revenue for the government from the industry׳s economic activities, job opportunities, rural community empowerment, improved farming techniques, increased agricultural research, and increased crop demand .There are several technologies for the conversion of biomass into biofuels, biopower and bioproducts.Here, commercially available technologies including those currently in use in Nigeria are included.Biomass can be processed through two major conversion pathways: biochemical and thermochemical.The appropriate biomass conversion process is determined by the type and quantity of the biomass feedstock and the desired form of energy .Furthermore, biomass conversion efficiency is dependent on the feedstock particle size and shape distribution and the type of reactors.A review of some of the conversion technologies is given below.Biomass composition can be defined from three major components: cellulose, hemicellulose, and lignin.Biochemical conversion processes involve the breakdown of the hemicellulose components of the biomass for the reaction to be more accessible to the cellulose, while the lignin components remain unreacted .Using a thermochemical conversion process, the lignin can be recovered and used as fuel.Biochemical conversion involves two main processes: anaerobic digestion and fermentation.Anaerobic digestion is a multi-benefit, flexible technology suitable for energy production from agricultural residues and other biodegradable wastes .It is a feasible option for producing renewable energy for both industrial and domestic use .In anaerobic digestion, high-moisture content biomass is converted by microorganisms in the absence of oxygen to produce a mixture of carbon dioxide, methane-rich gas, and traces of other gases such as hydrogen sulphide .The by-product or nutrient rich digestate from anaerobic digestion can serve as fertilizer for agriculture.Biogas produced from anaerobic digestion has an energy content that is about 20–40% of the lower heating value of 
the biomass feedstock .In the modern pursuit for clean energy, anaerobic digestion has been investigated for biogas production and for recycling of CO2 in flue gas .Third and fourth generation biomass feedstock, algae, have the capacity to produce methane and recycle nutrients by direct use of anaerobic digestion .At present, anaerobic digestion is employed primarily on agricultural residues, animal waste and other wastes in Nigeria for fertilizer and biogas production.Fermentation is an enzymatic controlled anaerobic process .It is the third step in the production of bioethanol from lignocellulosic biomass.Raw biomass is first pre-treated, then hydrolysed, before fermentation.Pre-treatment increases the surface area of the biomass, decreases the cellulose crystallinity, eliminates the hemicellulose, and breaks the lignin seal.Enzymatic hydrolysis converts the cellulose component of the biomass into glucose, and the hemicellulose component into pentose and hexoses.The glucose is then fermented into ethanol by selected microorganisms.Fermentation uses microorganisms and/or enzymes for the conversion of fermentable substrates into recoverable products.Currently, ethanol is the most desireable fermentation product, but the production of several other chemical compounds such as hydrogen, methanol, and succinic acid at the moment, is the subject of most research and development programmes.Hexoses, mainly glucose, are the most common fermentation substrates, while pentose, glycerol and other hydrocarbons require the development of customized fermentation organisms to enable their conversion to ethanol .Fermentation technology is established and widely used for waste treatment, and for sugar to ethanol production .Brazil developed a successful bioethanol program based on fermentation of sugar in sugarcane feedstock to ethanol.Brazil produced 5.57 billion gallons of ethanol fuel in 2011, accounting for 24.9% of the world׳s total ethanol used as fuel .Nigeria׳s climate is similar to that of Brazil and can produce large amounts of sugarcane.The emerging biofuel projects in Nigeria propose sugarcane as feedstock for ethanol production.Biodiesel is produced by alcohol transesterification of large branched triglycerides into smaller straight-chain molecules of, for example, methyl esters with enzyme, acid or an alkali as catalyst .The resulting fatty acid methyl esters are easily mixed with fossil diesel.Wood extractives consist of vegetable oils and valuable chemicals.The vegetable oil can be converted to biodiesel by transesterification with methanol .Biodiesel technology is still in the emerging phase in Africa; no commercial biodiesel production has been reported in spite of feedstock availability and biodiesel potential in Nigeria as shown in Table 5, current biodiesel production exists only at research scale.Trial production of biodiesel from palm kernel oil, and other edible and non-edible feedstocks is also being researched in some Nigerian universities.Thermochemical conversion processes involves more extreme temperature than that used in biochemical conversion .Examples are: direct combustion, pyrolysis, gasification and liquefaction.Direct combustion accounts for over 97% of world bioenergy production .It is the most common way of extracting energy from biomass.Direct combustion can be applied to several fuel materials: energy crops, agriculture residues, forest residues, industrial and other wastes .However, this conversion method is not used for biofuel production , as it provides energy 
only in the form of heat and electric power.Pyrolysis is a major biomass conversion process that is precursor to the combustion or gasification of solid fuels.It involves the thermal decomposition of biomass at temperatures of about 350–550 °C, under pressure, in the absence of oxygen .The process produces three fractions: liquid fraction, solid and gaseous fractions.Pyrolysis has been applied for thousands of years in charcoal production but is only considered lately because of the moderate temperature and short residence time .The fast pyrolysis process yields liquid of up to 75 wt% which can be used in engines, turbines and refineries or as energy carriers in a variety of applications .Another attraction is the possibility of co-processing fast- pyrolysis oil in a conventional oil refinery, as hydrogen from the refinery can be used to upgrade the oil into transportation fuels, and some off-gases from the pyrolysis plant can be used in the refinery .The economics of these options depend on the relative price of natural gas, biomass feedstock and incremental capital costs.Co-processing of petroleum with renewable feedstock offers advantages from both technological and economical points of view.By using existing infrastructure and configuration, little additional capital investment is required .However the co-location of pyrolysis plant with a refinery also depends on the availability of land, cost of hydrogen and value of the off-gases.The fast pyrolysis of biomass feedstock to bio-crude and subsequent refining to biodiesel and other drop-in fuels is estimated to have the lowest capital cost at about USD 1/litre/year of production capacity for a plant with annual capacity of 289 ML/a .Fully commercialized fast pyrolysis and bio-crude refining to biodiesel and other drop-in fuels can significantly lower costs.If the plant can prove the stability of the process and meet the design availabilities, their biofuels will also be competitive with petroleum products.The first commercial-scale facility using the fast pyrolysis and bio-crude refining process route is the USD 215 million KiOR Inc. 
plant in Columbus, Mississippi .Currently there are no commercial scale fast pyrolysis plants in Nigeria.Simonyan and Fasina cited two studies on bio-oil production that were carried out in Nigeria using locally cultivated corn cobs.Further research into employing pyrolysis co-processing of bio-oil in the Nigerian refineries is necessary, as the process could reduce the overall capital cost of setting up a stand-alone biorefinery and bring a near-term solution to competitive biofuel production in the country.Gasification is the partial oxidation of biomass into a combustible gas mixture at temperatures of 800–900 °C.The gas produced, known as synthesis gas consists of a mixture of carbon monoxide, hydrogen, carbon-dioxide, methane, and traces of other light hydrocarbons, and steam as well as nitrogen present in the air that was used for the reaction .The low-calorific value gas produced can be burnt directly or used as a fuel for gas engines and gas turbines in generating electricity.It can also be used as feedstock in the production of chemicals.Gasification and anaerobic digestion are the two major processes in which biogas can be produced.Based on the available biomass resources in Nigeria, biomass gasification can be carried out for biogas production in virtually every state in the country.Biomass pyrolysis and direct liquefaction with water are sometimes mixed-up with each other.Both are thermochemical processes where organic compounds in the biomass feedstock are converted into liquid products.In the case of biomass liquefaction, feedstock macro-molecule compounds are decomposed into fragments of light molecules in the presence of a suitable catalyst.At the same time, these fragments, which are unstable and reactive, re-polymerize into oily compounds having appropriate molecular weights.While in pyrolysis, the catalyst is usually not necessary, and the light decomposed fragments are converted to oily compounds through homogeneous reactions in the gas phase .The technology readiness levels of these biomass conversion processes are at different stages: some are at research and development, others at demonstration stage, and a few are commercially available.Conventional biofuels, i.e. 
biofuels obtained from sugar and starchy crops or by transesterification of vegetable oil, are comparatively mature. However, their feedstocks are first generation biomass, and so face the issue of sustainability. Sustainable biotechnologies can be developed to enhance the economic outcomes, land-use efficiency and environmental performance of conventional biofuels. Furthermore, cost improvements can be achieved by co-processing of biofuel and petroleum, that is, by integrating bio-refining with the downstream petroleum processes. Producing conventional and/or advanced biofuels in bio-refineries would promote more efficient use of biomass and bring associated cost and environmental benefits. Generating ethanol from lignocellulosic wastes through hydrolysis and fermentation has the potential to yield an encouraging amount of bioenergy in relation to the required fossil energy inputs, but the technology is yet to be deployed commercially. The conversion of cellulose to ethanol involves two steps: the breakdown of the cellulose and hemicellulose components of the biomass into sugars, and then fermentation to obtain ethanol. The very wide range of estimated fossil fuel balances for cellulosic feedstocks reflects the uncertainty regarding this technology and the diversity of potential feedstocks and production systems. Africa lacks large oilseed infrastructure, storage and crushing facilities, as well as operating commercial-scale biodiesel plants. Small- to medium-scale, decentralized biodiesel production with standards satisfactory to engine manufacturers could therefore be a feasible option for boosting development in Africa, as it would keep more resources and revenue within communities. An agriculture-based biodiesel model, that is, a biodiesel plant located close to an agricultural area with an integrated oil mill, is recommended for sustainable production. It will benefit local communities directly. Such a model increases the scope for regional value creation and, at the same time, introduces biodiesel production into a closed-loop recycling management cycle. The biodiesel plant model reduces the feedstock transportation cost due to the close proximity of the feedstock, making it more efficient from energy and cost perspectives. Maximizing biomass resources for commercial production of biofuel in Nigeria is a controversial issue, especially as the emerging biofuels projects in the country propose the utilization of first generation biomass feedstock. Food security could be challenged as high-yielding energy crops such as sugarcane and cassava may be diverted into biofuel production, which could lead to a food crisis. There is a need for Nigeria to explore other low-yield biomass feedstocks which are abundant in the country, such as jatropha for biodiesel production, and animal waste or MSW for biogas production. Biofuel production is still in its early stages of development in Nigeria. Several stakeholders, including the federal government, NNPC, universities, research institutes, private investors, and local farmers, have been involved in the Nigerian Automotive Biofuel Programme. The programme, which was initiated by the federal government in 2005, gave the NNPC the mandate to carry out 10% blending of biofuel with fossil fuels in the nation's refineries. Currently, there is no commercial biofuel production in Africa. The few ethanol production plants in existence utilize imported crude oil as feedstock. Although in 2011 the Nigeria Export and Import Bank granted loans to some companies to commence
commercial production of biofuel, most of their projects – both state government- and private sector-owned – are still in the planning phase. Also, the proposed locations for the bio-refineries are far away from the existing petroleum refineries, and would require transportation to the refinery location for blending. Three out of four of the existing petroleum refineries in Nigeria are located in the Niger Delta region, and this area is characterized by substantial woody above-ground biomass resources which can be utilized for biofuel production in the existing refineries. Research shows that the production of both gasoline and diesel biofuels employs biomass conversion technologies that produce wide boiling range intermediate oil that requires treatments similar to conventional refining. Thus, bio-refineries can leverage existing petroleum refinery facilities for the finishing of bio-intermediate oils. Developing biomass conversion facilities in proximity to the existing petroleum refinery infrastructure could reduce the cost of setting up new stand-alone bio-refineries. Nationally, the existing petroleum refineries seem to have adequate processing and hydrotreating facilities to convert bio-derived oils to transportation fuels. However, there are several concerns, among which are: the low capacity utilization of the existing refineries in the country, which is a result of operational failures, fires and sabotage, mainly on the crude pipelines feeding the refineries; the need to examine the ideal capital investment locations for biomass conversion facilities, for instance whether existing petroleum refining facilities should be expanded to handle a 'raw' bio-oil intermediate, or whether it would be best to produce finished biofuels at the new project sites; the impact of biomass-derived oil on the existing refinery process, and the ability of the refiners to meet the required product quantity; and the need for comprehensive data on the chemical composition of the expected biofuel intermediates, and experience of their behaviour in refining processes. Addressing these issues requires strong collaboration between the refining industry and the biomass research programmes ongoing in the country, in order to identify priorities and opportunities to satisfy knowledge and experience gaps, as well as to direct investments to support the Nigerian biofuels objectives. Already, the federal government has put in place some incentives in the Nigerian biofuel policy to promote market entry for investors and to support biofuel projects in the country, some of which include pioneer status for all registered businesses engaged in biofuel production, waivers of value added tax and customs duties, and the possibility of obtaining long-term preferential loans. The policy seeks to promote investment in Nigeria's biofuel industry by encouraging the participation of all stakeholders. However, ongoing debates weighing the economic opportunities of biofuel production against environmental and sustainability challenges in the country suggest that the industry is taking too long to move out of the planning stage into commercial biofuel production. Several policy makers view biofuels as key to reducing dependence on imported oil and greenhouse gas emissions, as well as to developing rural areas. Based on current gasoline demand in Nigeria, a policy target of 10% ethanol blending in the nation's refineries has been set. The policy is aimed at integrating the agricultural sector of the nation's economy with the
downstream petroleum industry. The target is to achieve 100% domestic biofuel production by 2020. Market demand for gasoline is estimated at 1.3 billion litres. This is expected to increase to 2 billion litres by 2020. The biodiesel market is estimated at 900 million litres by 2020, compared to a current market potential of about 480 million litres for a 20% biodiesel blend.
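To make these blending targets concrete, the implied biofuel volumes can be checked with a rough back-of-the-envelope calculation. The sketch below (in Python) simply combines the demand estimates quoted above with the 10% (E10) and 20% (B20) blend ratios; the implied diesel-market size is derived from the quoted 480 million litre figure and is an illustrative assumption rather than a value taken from the source.

```python
# Rough blend-volume check based on the demand figures quoted above (illustrative only).
GASOLINE_DEMAND_NOW = 1.3e9    # litres/year, current estimate
GASOLINE_DEMAND_2020 = 2.0e9   # litres/year, projected for 2020
ETHANOL_BLEND = 0.10           # E10 policy target
BIODIESEL_BLEND = 0.20         # B20 figure quoted in the text
BIODIESEL_NOW = 480e6          # litres/year of biodiesel for a 20% blend (quoted)

ethanol_now = ETHANOL_BLEND * GASOLINE_DEMAND_NOW     # ~130 million litres/year
ethanol_2020 = ETHANOL_BLEND * GASOLINE_DEMAND_2020   # ~200 million litres/year

# The 480 million litres for a B20 blend implies a diesel market of roughly
# 480e6 / 0.20 = 2.4 billion litres/year (an inference, not a figure from the source).
implied_diesel_market = BIODIESEL_NOW / BIODIESEL_BLEND

print(f"Ethanol needed for E10 today:   {ethanol_now / 1e6:.0f} ML/a")
print(f"Ethanol needed for E10 by 2020: {ethanol_2020 / 1e6:.0f} ML/a")
print(f"Implied diesel market for B20:  {implied_diesel_market / 1e9:.1f} GL/a")
```

These volumes give a sense of the scale of domestic production capacity that would be required to meet the blending mandate.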
The Energy Commission of Nigeria has set up a national energy policy with the following objectives: to gradually reduce the nation's dependence on fossil fuels while at the same time creating a commercially viable industry that can generate sustainable domestic jobs; to gradually reduce environmental pollution; to firmly establish a thriving biofuel industry utilizing agricultural products as a means of improving the quality of automotive fossil-based fuels in Nigeria; to promote job creation, rural and agricultural development, and technology acquisition and transfer; to provide a framework capable of attracting foreign investment in the biofuels industry; to streamline the roles of various tiers of government in order to ensure an orderly development of the biofuels industry in Nigeria; and to involve the oil and gas industry in the development of biofuels in Nigeria. Under the policy, the nation shall improve the link between the agricultural sector and the energy sector; promote the blending of biofuels as a component of fossil-based fuels in the country as required for all automotive use, with the blend involving the process of upgrading fossil-based fuels; promote investments in the biofuels industry; grant biofuels pioneer status for an initial 10-year period with the possibility of an additional 5-year extension; support the emergence of an industry in which a substantial portion of the feedstock used by biofuel plants will be produced by large-scale producers and out-growers; and ensure that the biofuel industry benefits from carbon credits. These policies are in line with the principles of renewable energy policy design cited in , namely: the removal of non-economic barriers, such as administrative hurdles, obstacles to grid access, poor electricity market design, and lack of information and training, together with the tackling of social acceptance issues, with a view to overcoming them in order to improve market and policy functioning; the need for a predictable and transparent support framework to attract investments; the introduction of transitional incentives that decrease over time, in order to foster and monitor technological innovation and move technologies quickly towards market competitiveness; the development and implementation of appropriate incentives guaranteeing a specific level of support to different technologies based on their degree of technological maturity; and consideration of the impact of large-scale penetration of renewable energy technologies on the overall energy system, especially in liberalised energy markets, with regard to overall cost efficiency and system reliability. To achieve this policy target, short-, medium- and long-term strategies have been planned: encouraging integrated biofuel operators to set up agricultural service companies to support out-grower schemes; mandating biofuel producers to establish public–private partnerships with biofuel feedstock out-growers; facilitating easy market entry for intending biofuel operators through supportive regulations on biofuel activities; granting pioneer status (a tax holiday) to all registered businesses engaged in biofuel-related activities; granting a 10-year import duty waiver for biofuel equipment not produced locally; exempting biofuel companies from taxation, withholding tax and capital gains tax in respect of interest on foreign loans, dividends and services rendered from outside Nigeria to biofuel companies by foreigners; granting single-digit interest on preferential loans to be made available to investors in the biofuels industry, to aid the development of large-scale out-grower schemes and co-located power generating plants; establishing agro-allied industries capable of benefiting from the incentives put in place to foster the development of the agro-allied industry, in addition to other incentives; reviewing, improving and continuing the short-term strategies; establishing a research and development fund to encourage synergy between the private and public sectors in R&D, to which all biofuel companies shall contribute 0.25% of their revenue for research in feedstock production, local technology development and improved farming practices; persuading biofuel producers to use auditable feedstock weighing equipment and methodologies as may be prescribed; and reviewing, improving and continuing the medium-term strategies. However, Ohimain's review of the Nigerian Biofuel Policy and Incentives identified some conflicts, gaps and inconsistencies in the policy that need to be addressed, particularly the limiting of biofuels to biodiesel and bioethanol, whereas there are other energy carriers obtainable from non-food biomass resources in the country. The policy also does not address sustainability issues in terms of environmental and socio-economic impact, as it is based on the utilization of first generation biomass feedstock for biofuel production; this is not sustainable, as it has the potential of igniting a food crisis. Globally, countries are imposing blending of transport fuels with 10% biofuels in order to ensure energy security and reduce greenhouse gas emissions. An expanding biofuels sector poses both opportunities and threats for sustainable development. The opportunities include increased local use of biomass resources, which may induce rural development and facilitate the production of transportation biofuels, as well as create job opportunities and improve air quality in cities, while the threats include food crises, land-use change and tenure security, climate change, and socio-economic implications. The continued volatility of fuel prices, the environmental issues associated with GHG emissions, and the combined effect on food and global economics have incited a sense of resolve amongst stakeholders to seek sustainable and viable solutions for the production of biofuels. Sustainable biofuel production involves the utilization of agricultural residues, forest residues and solid waste. It excludes traditional uses of biomass as fuel-wood. Traditional biomass is not sustainable, and is used in most developing countries as a non-commercial energy source, usually for cooking. Most Nigerians use traditional biomass such as wood and charcoal to meet their household energy needs for cooking and heating. Solid waste management in Nigeria is one of the greatest challenges facing state and local government environmental protection agencies. The volume of solid waste generated increases faster than the ability of the waste management agencies to improve the financial and technical resources needed to keep pace with this growth. This being the case, it is necessary to consider utilizing these
wastes for biogas production for a country with urgent needs for waste management and with ample supply of solid waste across the country.In recent years landfill areas have been a major source of biogas production .Biogas production is a viable option with numerous advantages.Besides being an effective waste management procedure, it is a means of reducing health hazards, indoor air pollution and deforestation.It is flexible in terms of feedstock, does not claim any land, produces fertilizer as a by-product, can be implemented relatively quickly at small scale and is suitable for decentralized use.The technology for biogas production can be implemented almost everywhere with household waste, sewage, industrial waste, agricultural residues and other organic material .Imposing a mandatory 10% blending ratio of biofuels in transportation fuel would, for example, take 85–176 million hectares of arable land, depending on the generation of biofuels – first or second .That is, the crop combination used as feedstock, fraction of residues, and the assumed crop yield.According to , these lands engaged for biofuels production could produce food enough to feed 320–460 million people.It is possible that the further use of land for biofuels production could contribute to loss of biodiversity, increase in GHG emissions and trigger food versus fuel competition.A study carried out by the Centre for International Forestry Research evaluated the impacts such plantations may have on sustainability and found: biodiversity, soil fertility, and water availability as the major issues with utilizing short-rotation crops for biofuels production.However the impact on biodiversity depends considerably on the land use prior to the change.Though deforestation will normally bring about decrease in biodiversity, the usage of degraded land would improve biodiversity and add to species multiplication.Thus landscape conservation could ensure biofuel sustainability in Africa.The growing global demand for clean energy, the concerns about climate change and the need for GHG emissions reduction have challenged most countries to source for alternative forms of energy.Nigeria is not an exception.Environmental sustainability of biofuels is primarily defined in terms of GHG emissions reduction, and other emissions resulting from agricultural practices such as the use of fertilizers and pesticides, irrigation, soil tillage, and harvesting .Additionally, land use prior to biofuel conversion is a critical factor in evaluating the environmental impact.GHG reduction potential suffers markedly if grasslands or forests are used for biofuels.If grasslands or forests are converted into agricultural land to produce biomass, the GHG reduction potential will be different than if biomass production is just started from agricultural land.So far, studies on biomass and GHG emissions assume that land use remains unchanged .Besides GHGs, energy and water resource preservation are other issues to consider when evaluating environmental sustainability.In some circumstances, the quantity of water used and its impact on local water quality and future availability may be the main constraint against biofuels.Linked to water is the problem of fertilizer runoff– especially near streams and rivers.Nationally, increasing fuelwood consumption contributes to deforestation with consequent desertification and soil erosion .There is concern on the sustainability of sugarcane, in terms of land use change.This has been a particular issue in Brazil, the world׳s 
leader in sugar cane ethanol, where sugar cane expansion into grazing areas can push livestock systems into the forest zones.Brazil, being sensitive to these concerns, has placed restrictions on sugar cane expansion areas to minimize the negative impacts .However, oil palm plantations can also pose environmental problems when expansion takes place on sensitive lands.This is a particular concern in Malaysia and Indonesia where some oil palm is planted in drained peat lands, resulting in significant CO2 emissions outweighing any carbon benefits arising from the new palm-oil plantations .Moreover, expansion of corn for ethanol in the USA – which tends to reduce soybean acreage as corn–soybean rotation contracts – pushes up soybean acreage expansion in Latin America .This, in turn, raises concerns over potential undesirable land expansion and even encroachment into forested areas, with potentially negative environmental and GHG emission consequences.Consequently, rice husk constitutes one of the major environmental nuisances as it forms the major municipal solid waste heaps in the areas where it is disposed.Most rice husks generated during rice milling are burnt in the field.This kind of traditional disposal method has caused widespread environmental concerns since it causes air pollution.As a result of the health and environmental concerns, many countries have imposed new regulations to restrict field burning activities.Subsequently, methods to dispose and to use agricultural residues such as rice straw and rice husk have shifted towards the global “waste to resource” agenda .As cited in the socio-economic aspect of sustainability can be evaluated in a number of ways: its impacts on employment, wages, health, and gender inclusion.It relies much on the influence of the different stakeholders.Depending on what is being assessed, the location, and the socio-economic implication assessment may cover studies on: the impacts on indigenous peoples, human rights, community health, and physical resettlement.Usually, biofuels development takes place in rural areas.These areas in Nigeria are characterized by small-scale and subsistence farming.It is believed that biofuels will create jobs and means of livelihood to the rural dwellers by attracting to the agricultural sector, capital investment and new technologies as well as improved access to fertilizers, infrastructure and high yielding varieties.Biofuels production can also increase access to energy services.This implies higher rural wages with positive effects for the local economy .When it comes to access to and control of land and other productive assets, in Nigeria, the level of participation in decision-making and socioeconomic activities of the male child is usually more than the female, as it is believed that the female child would eventually marry and leave the family.This is seen in the country׳s statistics.There are more male population economically active in agriculture than female.Balancing the economic benefits with environmental and social impacts is an important factor.Even when biofuels meet environmental sustainability criteria, they need to also pass economic sustainability standards.That means ensuring production efficiency and profitability requires access to sustainable resources, and reliable output markets.Thus the challenge is achieving all these while ensuring economic viability and minimizing negative environmental and socio-economic impacts .The sustainable production of transportation fuels from biomass resources in 
Nigeria requires alternative feedstocks and new technology development. Currently, it is not clear that non-food biomass feedstocks will be established, as current research provides no evidence of their uptake. The co-processing of bio-intermediate oil with petroleum in conventional refinery infrastructure depends on a number of factors, among which are the feedstock type and availability, the energy potential, the capital cost of integrating a biomass pre-conversion facility with the existing conventional refinery infrastructure relative to the cost of a stand-alone biorefinery, the location of the petroleum refinery, and technology transfer. The transformation of Nigeria into a bio-based economy, in which non-food biomass replaces crude oil, will only emerge if the identified research gaps, policy shortfalls and sustainability issues are addressed. Moving from 10% ethanol blending in the nation's refineries to 100% domestic biofuel production in the country by 2020 will be possible only if the biomass processing routes and sustainability issues are well defined. This review identifies the biomass resources available in Nigeria and the potential to use these resources to meet the country's biofuel demand. Biomass is obtainable from a wide variety of sources: energy crops, agricultural crop residues, forest resources, urban and other wastes, which are distributed throughout the country according to its climatic and vegetative zones. With rising demand for clean energy and recurrent fuel scarcity, Nigeria needs to diversify its fuel supply and maximize its use of natural resources. Biofuel is an attractive substitute for fossil fuel. Nigeria is a net importer of transportation fuel. This makes the country vulnerable to volatility in global fuel prices and dependent on foreign exchange to meet its domestic energy needs. The goal, therefore, is to reduce the high dependence on imported petroleum by maximizing domestic biomass resources for biofuel production. However, this should be achieved sustainably, with minimal environmental and socio-economic impact. With the existing petroleum refineries located in the Niger Delta region of Nigeria, and the large biomass resources obtainable in the same area, it is pertinent for the Nigerian National Petroleum Corporation to consider, as part of its biofuels programme, whether it is better to produce finished biofuels at the new bio-refineries and transport them to the existing refineries for blending, or whether it would be better for existing refining infrastructure in the country to be expanded to process raw biomass into bio-intermediate oil for blending. | Solid biomass and waste are major sources of energy. They account for about 80% of total primary energy consumed in Nigeria. This paper assesses the biomass resources (agricultural, forest, urban, and other wastes) available in Nigeria and the potential for biofuel production from first, second, third and fourth generation biomass feedstocks. It reviews the scope of biomass conversion technologies tested within the country and reports on the technology readiness level of each. Currently, most of the emerging biofuels projects in Nigeria utilize first generation biomass feedstock for biofuel production and are typically located many miles away from the petroleum refinery infrastructure. These feedstocks are predominantly food crops and are thus in competition with food production.
With significant availability of non-food biomass resources, particularly in the Niger Delta region of Nigeria, and the petroleum refineries located in the same area, it is pertinent to consider expanding the use of the petroleum refineries' infrastructure to co-process non-food biomass into bio-intermediate oil for blending with petroleum. This not only addresses the potential food-versus-fuel conflict challenging biofuel production in Nigeria, but also reduces the cost of setting up new bio-refineries, thus eliminating the transportation of ethanol to existing petroleum refineries for blending. In view of this, it is recommended that further research be carried out to assess the feasibility of upgrading existing refineries in Nigeria to co-process bio-based fuels and petroleum products, thus helping to achieve the targets set by the Nigeria Energy Commission for biofuel production in the country. |
31,437 | Sensory Processing Sensitivity in the context of Environmental Sensitivity: A critical review and development of research agenda | To survive and thrive on planet earth it is essential for all organisms to draw on environmental resources, such as food, protection from predators and social support. Animals and humans are programmed to perceive, process, react and adapt to specific social and physical elements of the environment, both positive and negative ones. Of interest, there are substantial inter-individual differences in sensitivity and responsivity to the environment in animals and humans; some are much more sensitive and reactive compared to others. Across populations, a continuum from low to high sensitivity to the environment is observed. In recent years, Sensory Processing Sensitivity, which describes inter-individual differences in trait sensitivity to experiences, and which began as a barely known topic 20 years ago, has become a much discussed facet of Environmental Sensitivity theory. In this review, we discuss the knowns and unknowns in relation to the current conceptualisation of SPS, highlight the relevance and impact of the construct, and describe perspectives for future cross-disciplinary research. SPS is part of a family of theoretical frameworks on Environmental Sensitivity. Environmental Sensitivity is an umbrella term for theories explaining individual differences in the ability to register and process environmental stimuli. These include the theories of Differential Susceptibility, Biological Sensitivity to Context, and Sensory Processing Sensitivity, the topic of the present review. All these theories state that individuals differ in their sensitivity to both aversive as well as supportive environments. Unique to Sensory Processing Sensitivity is that it proposes an underlying phenotypic trait characterised by greater depth of information processing, increased emotional reactivity and empathy, greater awareness of environmental subtleties, and ease of overstimulation. Early studies estimate that about 15–20% of the population can be considered high on the SPS trait. The first measure to assess SPS, the Highly Sensitive Person Scale, is a 27-item self-report questionnaire of positive and negative cognitive and emotional responses to various environmental stimuli including caffeine, art, loud noises, smells and fabrics. SPS is related to other temperament and personality traits reflecting sensitivity to environments. For example, traits such as introversion, neuroticism, and openness to experience have been associated with increased reactivity to environmental influences. Furthermore, the Behavioural Inhibition System and the Behavioural Activation System, which describe the extent of pausing activity for the processing of conflicting information and the urge to approach and satisfy needs, have been related to heightened sensitivity to negative and positive environmental stimuli, respectively. Nonetheless, analyses show that SPS is distinct from these traits. Recent findings suggest that SPS is moderately heritable. Further, research has revealed associations between SPS and cognitive, sensory and emotional information processing in the brain. This points towards a biological foundation for the SPS construct. SPS is conceptualised as a temperament trait, and not a disorder. However, in adverse childhood environments, individuals with high SPS scores may shift from typical to atypical development, with a negative impact on well-being, and higher risk for behavioural problems
and psychopathologies in childhood and adulthood.Conversely, individuals high on SPS exposed to positive events in life may flourish and perform exceptionally well, for example showing more positive mood and intervention responsivity, a result with important implications for policy makers and practitioners.Despite the above described insights from SPS research, several shortcomings remain.SPS brings advantages in terms of capturing a global phenotype through questionnaire-based and behavioural/observational assessment, its weaker point is that biological research on the aetiology and mechanisms underlying SPS is still in its infancy.How the so-far identified neural processes interact and shape sensitivity to the environment is not well understood yet.What is more, the relationship of SPS to existing personality and temperament constructs reflecting sensitivity to environments needs to be further clarified conceptually and empirically.Lastly, sensory sensitivities are also observed in mental disorders, but the relevance of SPS to seemingly related disorders and well-being needs to be further studied.Finally, more work is needed on interventions to foster the potential of high SPS individuals and prevent negative consequences.From a theoretical point of view, studying SPS is important for deepening our understanding of a fundamental aspect of inter-individual differences in sensitivity to the environment, observed in humans and animals.Interestingly, recently SPS has been discussed also in the context of anthropological studies.SPS also has implications for health, education and work: SPS is thought to be a significant factor impacting well-being, quality of life, and also functional difficulties.Thus, it needs to be studied rigorously and with respect to both basic and applied processes to improve well-being and life satisfaction, and preserve human capital, while preventing adverse effects and impairment among highly sensitive populations.From a societal impact perspective, SPS has gained substantial popularity in the public and media, with programmes being developed and professionals trained to coach and support highly sensitive employees, leaders, parents and children.However, basic, translational and applied scientific research on SPS is lagging behind, creating an imbalance between the need for information from society and the scientific knowledge collected so far.This easily leads to misinterpretations of what SPS is, and comes with risk for misinformation and potentially even harm to the public, and neglects the societal responsibility of science.The aim of this paper is to address the above shortcomings by critically discussing the state-of-the-art regarding scientific insights on SPS in a narrative review, and stimulating the field by proposing a future research agenda.We review the origins of the Sensory Processing Sensitivity framework and how it relates to other frameworks of Environmental Sensitivity, how to measure SPS, whether empirical evidence supports a dimensional or categorical conceptualisation of SPS, the relationship of SPS to other temperament and personality traits, what the underlying biological bases of SPS are, and the relevance of SPS to mental health and intervention.We have included all studies focusing on SPS directly, published in indexed journals included in PubMed and Scopus until September 2018, allowing a complete, exhaustive summary of the current literature on SPS and related field.We advocate that some speculation is required to set a comprehensive 
future research framework in which transdisciplinary approaches will be central.This review borrows from team science principles to bring together several authors with diverse areas of expertise to address the increasing complexity in science, requiring increased interdependency and specialisation in order to create more coherent research efforts.This allows the current review to take a broader perspective as well as updated view compared to the previous review on SPS, for instance through a greater focus on neuroscience and biobehavioural mechanisms, including animal work and the operationalisation of core components of SPS, as well as links to mental health and intervention.Since the late 1990s, several theoretical contributions, which have been developed independently from each other, have investigated such individual differences in sensitivity to environments.Initially, sensitivity was primarily seen as a vulnerability.The Diathesis-Stress model, also Dual-Risk model, proposes that individuals characterised by individual risk factors have a predisposition to suffer the negative consequences of environmental adversities more than others.However, subsequent theories proposed that more sensitive individuals experience stronger effects and responsivity to both negative and positive environmental conditions and stimuli.These theories include Differential Susceptibility, Biological Sensitivity to Context, and Sensory Processing Sensitivity.Differential Susceptibility, which has roots in developmental psychology, poses that highly sensitive individuals have a higher susceptibility to the environment, and assumes an evolutionary perspective by positing that individual differences in sensitivity represent two alternative developmental strategies maintained by natural selection to increase diversity and fitness of the species.More recently, the Vantage Sensitivity theory has been put forward, which concerns individual differences in response to positive stimuli, such as supportive psychological interventions without making claims about the potential response to adverse experiences.In essence, Differential Susceptibility integrates the Diathesis-Stress and Vantage Sensitivity frameworks, by suggesting responsivity to both positive and negative environments.Differential Susceptibility puts emphasis on phenotypic temperament characteristics, endophenotypic attributes and genetic variants that may act as plasticity factors that make people more malleable to environmental influences.In contrast, Biological Sensitivity to Context focuses specifically on physiological differences in reactivity to environmental stimuli.It is defined as neurobiological susceptibility to cost-inflicting as well as benefit-conferring environments, and operationalised as an endophenotype reflecting heightened reactivity in one or more stress response systems.In other words, stress response systems increase susceptibility to negative environments, but also to resources and support.Compared to Differential Susceptibility, which emphasises that differences in sensitivity are genetically determined and a result of bet-hedging against uncertain futures, Biological Sensitivity to Context emphasises the role of early environmental pressures in shaping sensitivity as it is based on the evolutionary notion of conditional adaptation, as high sensitivity is thought to develop in response to both extreme negative or positive environments.The Sensory Processing Sensitivity framework has been developed based on extensive review of the 
animal literature, and informed by temperament and personality theories on behavioural inhibition, shyness, and introversion in children and adults.Sensory Processing Sensitivity emphasises that sensitivity can be captured in a phenotypic temperament or personality trait, characterised by greater depth of information processing, increased emotional reactivity and empathy, greater awareness of environmental subtleties, and ease of overstimulation, thought to be driven by a more sensitive central nervous system.Environments in the context of Sensory Processing Sensitivity are broadly defined and include any salient conditioned or unconditioned internal or external stimuli, including physical environments, social environments, sensory environments, and internal events.Recently SPS has been discussed also in the context of anthropological studies.The described frameworks have recently been integrated within the broader Environmental Sensitivity meta-framework, displayed in Fig. 1.Each of these frameworks on Environmental Sensitivity provides a unique contribution to the study of individual differences in response to the environment.The frameworks agree that individuals differ in sensitivity to environments, and that only a minority of the population are highly sensitive, as if a minority is sensitive this holds an evolutionary advantage.Benefits of sensitivity are frequency-dependent in that sensitivity is advantageous when rare but disadvantageous when common.In essence, Differential Susceptibility proposes a mechanism, which is also underlying the Biological Sensitivity to Context and Sensory Processing Sensitivity frameworks.Unique to Sensory Processing Sensitivity is that it is the first framework to propose and develop a psychometric tool that captures sensitivity to environments directly as a phenotypic trait in adults and children, with important theoretical and applied implications for the study of individual differences in response to the environment.Three observational studies were more in line with SPS acting as a vulnerability factor, in line with Diathesis Stress rather than Differential Susceptibility.An early study on SPS and the quality of the environment found an interaction between parenting environment and SPS, such that high SPS adults reporting having had an unhappy childhood scored higher on negative emotionality and social introversion, whereas high SPS adults reporting a happy childhood differed little from the larger population of non-highly sensitive adults on these traits.Furthermore, in adults SPS was shown to moderate the effect of parental care on depression symptoms.Individuals scoring high on SPS reported the highest depression scores when parental care was low, while depression scores were unrelated to SPS when parental quality was high.A study on life satisfaction in adults showed that while individuals high in SPS reported lower life satisfaction when childhood experiences were particularly negative, no evidence was found for differential effects to positive experiences.The other studies were more in line with Differential Susceptibility.A paper by Aron et al for the first time reported a crossover interaction in three studies involving adults.Individuals high in SPS who reported a troubled childhood scored especially high on negative affect measures, but individuals high in SPS without such childhoods scored especially low on negative affect measures.This provided evidence that high SPS scores are linked to benefitting more from positive experience, in line 
with Differential Susceptibility. Furthermore, a six-month longitudinal study assessing SPS in kindergarten children reported that SPS interacted with changes in positive and negative parenting in predicting externalising behavioural problems. Children scoring high on SPS were most responsive to changes in parenting behaviour in both directions, predicting increasing externalising problems when parenting became more negative, as well as decreasing externalising problems when parenting improved, supporting Differential Susceptibility. Recently, laboratory studies have provided additional evidence that individuals high in SPS indeed show heightened responsivity to negative and positive experiences. Adults high in SPS who were exposed to a positive mood induction video-clip have been shown to have greater changes in positive affect compared to those reporting low sensitivity. Furthermore, adults scoring high on SPS have been shown to be more willing to trade off their privacy when viewing terrorism-related pictures compared to high SPS individuals viewing neutral pictures, whereas such a difference was not observed in individuals with low SPS scores. This suggests that individuals high in SPS may be more sensitive to terrorism-related media and community themes. Two intervention studies have provided evidence for greater intervention responsivity related to higher SPS, in line with Vantage Sensitivity. An intervention study in adolescent girls found that girls high in SPS responded more favourably to a school-based resiliency programme based on concepts of cognitive-behavioural therapy and positive psychology techniques. Specifically, girls scoring high on SPS showed a significant reduction in depression symptoms, which was evident at six- and 12-month follow-ups, whereas girls low in sensitivity did not show any significant change. These findings of heightened responsivity to positive experiences in individuals scoring high in SPS have recently been replicated in a large randomized controlled trial testing the efficacy of a school-based anti-bullying intervention. In line with expectations, the results of this study showed that the intervention significantly decreased victimisation and bullying across the entire sample. However, a more in-depth analysis of interaction effects showed that intervention effects were driven primarily by children scoring high in SPS. Conversely, for children scoring low on SPS, no significant effect was reported. Furthermore, SPS has been shown to moderate the impact of early negative parenting styles on behavioural problems, and of positive parenting styles on social competence, at ages three and six, suggesting that SPS is able to capture behavioural traits relating to sensitivity toward both positive and negative environmental stimuli. Overall, these findings provide evidence that SPS, as assessed by questionnaire or behavioural observation, is related to heightened responsivity to negative as well as positive environments. Future studies should expand research on SPS as a marker of sensitivity to both positive and negative environments. Going beyond a correlational approach, more research is needed that manipulates the positive or negative environmental variable in more controlled laboratory contexts or within intervention studies. Furthermore, testing Differential Susceptibility in SPS in the context of daily life is important, to capture ecologically valid assessments of micro and macro stressors. Ecological Momentary Assessments, which involve assessing the participant in real time in their natural environment, would be a particularly useful tool to examine whether high SPS individuals are more responsive to positive and negative events throughout their daily life.
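As a sketch of how such EMA data might be analysed, the example below simulates a simple daily-diary dataset and fits a mixed-effects model with a cross-level SPS × daily-stressor interaction. The variable names (sps, stress, neg_affect), sample sizes and effect sizes are hypothetical, and the code is a minimal illustration under those assumptions rather than a recommended analysis pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_persons, n_days = 100, 14

# Hypothetical person-level SPS scores (e.g. standardised HSP totals) and daily stressors.
sps = rng.normal(size=n_persons)
rows = []
for pid in range(n_persons):
    person_intercept = rng.normal(scale=0.5)
    for day in range(n_days):
        stress = rng.normal()
        # Assumed data-generating model: stress reactivity increases with trait SPS.
        neg_affect = 0.2 + person_intercept + (0.3 + 0.2 * sps[pid]) * stress + rng.normal(scale=0.5)
        rows.append({"id": pid, "sps": sps[pid], "stress": stress, "neg_affect": neg_affect})

df = pd.DataFrame(rows)

# Random-intercept model with a cross-level interaction: does within-person reactivity
# to daily stressors depend on trait SPS? A reliable sps:stress term would be
# consistent with greater daily-life responsivity among more sensitive participants.
result = smf.mixedlm("neg_affect ~ stress * sps", data=df, groups=df["id"]).fit()
print(result.summary())
```

The same logic extends to positive daily events by replacing the stressor variable with an uplift or positive-event measure.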
Future studies testing the interaction between SPS and environmental events or quality ranging from low to high will benefit from conducting more sophisticated analyses, such as Regions of Significance analysis, a method superior to the simple slopes approach for testing Differential Susceptibility, as it is able to identify where the significance of crossover interactions lies. Another approach is the reparametrized equation approach, which allows one to compare different Environmental Sensitivity models based on crossover intersection points and associated confidence intervals.
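A minimal sketch of this interaction-probing logic is shown below, assuming simulated cross-sectional data with a crossover pattern. It fits the SPS × environment interaction, computes simple slopes of the environment at low and high sensitivity, and estimates the crossover point from the fitted coefficients. The data and coefficients are made up, and the snippet is illustrative only, not a substitute for dedicated Regions of Significance or reparametrized-model tools.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500

# Hypothetical variables: 'env' ranges from negative to positive environmental quality,
# 'sps' is a standardised sensitivity score, 'outcome' is some developmental outcome.
env = rng.uniform(-2, 2, n)
sps = rng.normal(size=n)
outcome = 0.1 * env + 0.4 * sps * env + rng.normal(scale=1.0, size=n)  # assumed crossover pattern

df = pd.DataFrame({"env": env, "sps": sps, "outcome": outcome})
fit = smf.ols("outcome ~ env * sps", data=df).fit()
b = fit.params

# Simple slopes of the environment at low (-1 SD) and high (+1 SD) sensitivity.
for s in (-1, 1):
    print(f"slope of env at sps = {s:+d} SD: {b['env'] + b['env:sps'] * s:.2f}")

# Crossover point on the environment axis, i.e. where the regression lines for
# high- and low-SPS individuals meet. A crossover well inside the observed range
# of 'env' is more consistent with Differential Susceptibility than with a pure
# Diathesis-Stress (vulnerability-only) pattern.
print(f"estimated crossover point: env = {-b['sps'] / b['env:sps']:.2f}")
```

Formal Regions of Significance testing would additionally attach confidence bounds to these quantities.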
Furthermore, studies have predominantly used cross-sectional study designs. Longitudinal study designs would allow a more in-depth analysis of causation and of differences at the within-person level, and the study of short- and long-term dynamic changes in response to environments. One pertinent question is whether SPS is a stable trait across development, or whether certain experiences lead to changes in levels of SPS. Lastly, the biological underpinnings of Differential Susceptibility in SPS are only beginning to be unravelled, and it remains unclear whether the same biological systems that support responsivity to negative environments also support responsivity to positive environments in high SPS individuals. We also need a better understanding of how the core hypothesised features of SPS relate to one another, and to increase aetiological and neural understanding underlying the Sensory Processing Sensitivity framework. We make suggestions as to how this can be achieved in Section 6, and in Fig. 1. The first measure for assessing SPS, namely the Highly Sensitive Person scale, developed alongside the theoretical framework of Sensory Processing Sensitivity, was the result of an exploratory and empirical study of what is meant when the term sensitive is used by clinicians and by the public. Elaine Aron and Arthur Aron conducted a series of in-depth qualitative interviews with 39 adults who self-identified as "highly sensitive", "introverted", or "easily overwhelmed by stimuli". The first 60-item HSP Scale was based on these interviews and included statements regarding being highly conscientious, startling easily, having a rich inner life, and being more sensitive to pain, all considered markers of increased sensitivity. This contributed to defining the construct of SPS as referring to a broader sensory processing of information captured by a variety of indicators, rather than simply sensitivity toward sensory stimuli. The questionnaire was tested on a broader sample including 604 undergraduate psychology students and 301 individuals from a community sample, resulting in the self-report 27-item HSP scale currently used for assessing SPS in adults. The psychometric properties and validity of the 27-item HSP scale, as well as of a shorter version, have subsequently been confirmed in multiple studies. Building on the HSP scale for adults, the recent Highly Sensitive Child Scale, a 12-item self-report measure of SPS in children as young as 8 years of age, has been developed. The scale includes items such as "I find it unpleasant to have a lot going on at once" and "Some music can make me really happy". The HSC scale has also been used in a parent-report format in order to assess sensitivity in kindergarten children, based on the same items from the HSC scale but rephrased so that it is the parent reporting on the child's behaviour. The analysis of the factorial structure of the scale indicated that the HSC scale has adequate internal consistency and good psychometric properties across independent samples. Its criterion validity has been confirmed by showing that children scoring high on this scale are more sensitive and responsive to the positive influence of psychological interventions, as well as to both positive and negative parenting quality. There is also another 23-item parent-report questionnaire assessing SPS in children. This questionnaire includes items such as "My child is bothered by noisy places" or "My child seems very intuitive", and has been used to examine its association with daily functioning in a Dutch sample involving parents of children aged 3–16. The items of this questionnaire partially overlap with the HSC questionnaire. However, this questionnaire has not yet been validated with respect to whether children scoring high on the measure are more sensitive to environmental influences and process information more deeply. The HSP/HSC scales have been translated into several languages. Dutch, Italian, German, Turkish, Japanese and Icelandic versions are available, and partial measurement invariance of the HSC has been confirmed across age, gender, and country based on Dutch and UK versions. Both HSC and HSP scores tend to be normally distributed in the population, although some authors have pointed out a slight trend towards a bimodal distribution. Finally, the questionnaire-based assessment of SPS has recently been extended to the study of personality in animals. The Highly Sensitive Dog owner-report questionnaire has been developed for the assessment of a canine-SPS trait in domestic dogs. Animal
models of SPS have also been developed.The HSC Rating-System provides a behavioural observation assessment of SPS in pre-schoolers aged 3–5 years.The development of the measure was guided by a theory-driven approach inspired by the theoretical definition of SPS in children and by the definition of the broader construct of Environmental Sensitivity.The rating system, applied to a series of laboratory episodes derived from the Laboratory Temperament Assessment Battery procedure traditionally used for the coding of temperament, and coded by external observers trained on this method, has been found to capture children’s sensitivity to the rearing environment, moderating the impact of both positive and negative parenting on positive and negative children’s outcomes.The validation of the HSC Rating-System is currently limited to an American middle-class population and to a single study.However, given that it provides a multi-modal, and a more objective behavioural measure, it promises to be a useful tool for future research on SPS in children.Proper administration and coding of behaviour observation is key as external observers may misinterpret a child’s signals and certainly may lack access to the child’s inner world.Initial factor analyses on the HSP scale suggested a unitary sensitivity factor captured by a variety of items.However, subsequent factor analyses exploring alternative solutions found convergence for different components.In recent years, they have been often adopted in SPS studies as a way for describing features characterising the SPS trait.The most extensively psychometrically supported solution across children and adults includes the following components: 1) Low Sensory Threshold, 2) Ease of Excitation, and 3) Aesthetic Sensitivity.The three sensitivity components of LST, EOE and AES have been found to relate differentially to affect variables.More specifically, EOE and LST were both found to be associated with a moderate effect size with self-reported negative emotionality, anxiety and depression, and LST, but not EOE, has been reported to correlate with self-rated sensory discomfort.Conversely, AES was reported to be associated with positive emotionality such as positive affect and self-esteem, but not with negative emotions, both in adulthood and childhood.Importantly however, the LST, EOE and AES subscales were not designed, but emerged when analysing the scale, further their biological validity is unclear, and it is not clear what the components measure or mean when taken separately.Recently, reconciling the apparently contradictory views of the existence of a unique, general, SPS factor or different components of sensitivity, psychometric data across childhood, adolescence and adulthood, provided evidence in support of a bifactor solution.This solution includes a general SPS factor and allows recognition of the multidimensionality of the HSC and HSP scales, as represented by the three sensitivity components of LST, EOE and AES.These results are consistent with findings identifying the summary score of the HSP and HSC scales capturing an increased sensitivity to positive and negative stimuli.The 23-item parent-report HSP questionnaire for children showed two factors: Overreaction to Stimuli, which comprised items associated with overstimulation, emotional intensity and sensory sensitivity and Depth of Processing.Because this parent-report questionnaire includes additional items compared to the more extensively studied HSC scale for children, this result may suggest that 
the inclusion of other items could allow one to capture specific SPS aspects not currently included in the HSC self-report questionnaire. Finally, a one-factor solution emerged for the newly developed observational HSC Rating-System for pre-schoolers. This unique SPS factor correlated moderately and negatively with assertiveness, and moderately and positively with constraint, and all temperament factors together explained only half of the SPS variance. Overall, these results suggest that though SPS is associated with observed temperament to a moderate extent, it is not fully captured by other temperament factors. More objective assessment procedures for SPS would be a very valuable alternative or addition to questionnaire report. For infants and children, this could take the form of observational measures similar to the HSC Rating-System developed for pre-schoolers. From middle childhood onwards and in adulthood, a semi-structured interview on SPS, which remains to be developed, would be valuable. Such an interview would provide a richer and more nuanced assessment of sensitivity, as it includes observer-rated observational data based on the trained interviewer's judgments in interpreting responses. The assessment of SPS could also be made more objective by the addition of cognitive, genetic or biological markers. While the HSP and HSC scales have good psychometric properties and have been validated in multiple ways, the scales need to be validated and optimised further. First, behaviours such as pausing to check in novel situations or taking time to make decisions, cardinal characteristics of individuals high in SPS and associated with depth of processing, are not sufficiently covered in the HSP and HSC scales. The items coming closest to capturing depth of processing are those relating to the AES component. Nonetheless, the SPS scale has been associated with activation of brain areas involved in greater depth of processing, such as greater activation in secondary perceptual processing brain areas in fMRI studies, suggesting that the existing scale does already capture depth of processing. More research is needed to test whether the scale captures the SPS construct fully, or whether additional items on depth of information processing are needed. Second, HSP/HSC scale items are mainly negatively phrased, and may therefore not adequately capture the experience of highly sensitive individuals without psychological problems. Indeed, many of the items on the HSP/HSC scales appear to describe negative consequences of greater depth of information processing. One of the authors of this manuscript has developed a less negatively and more neutrally worded version of the HSP scale, which is currently being validated. Lastly, regarding cultural differences, the analysis of HSC scale invariance across cultures suggests that, while the underlying structure of the scale is conceptualised similarly by Belgian and British people, who attribute more or less the same meaning to the latent construct of the scale, the mean differences may not be comparable. That is, Belgians tend to score higher on some items, a trend that has also been reported for Italian children. This suggests that some items may need to be adapted for cultural sensitivity, while retaining the pure assessment of SPS. The literature on SPS suggests that roughly 20% of the population is assumed to be highly sensitive and 80% less sensitive. A popular metaphor is the Orchid-Dandelion metaphor, where Dandelions reflect the
majority of the population, who are less sensitive to the influence of either positive or negative environments, whereas Orchids are more strongly affected by environmental adversity but also flourish more in positive environments. That 20% of the population is highly sensitive was first proposed by the theory on SPS as an analogy to the work on infant reactivity, as defined by Kagan. These researchers categorised infants into qualitative groups of infant reactivity, based on a theoretical framework concerning differences in the excitability of limbic structures, and applied this model to observational judgments of motor and crying reactions in infants. Taxometric analyses, which are expressly designed to distinguish taxa from dimensions, supported their theoretical framework by showing that a minority of infants were highly reactive to visual, auditory and olfactory stimuli, with the remainder falling into a less reactive group. Moreover, Kagan's work, empirical studies, and computer-based simulations of other temperamental traits related to sensitivity to environments in humans and animals also provided support for the existence of individual traits associated with heightened sensitivity to the environment, as well as putative sensitivity gene variants with a relatively low population frequency of about 10–35%. This was further supported in a Diploma thesis on SPS using taxometric analyses on the HSP scale in N = 898 individuals, which revealed a highly sensitive taxonic group with a base rate of 15–20%, although this work was not replicated in a Master's thesis. Overall, taxometric research across personality and psychopathology has yielded dimensional results more often than taxonic ones, and there is a strong trend for newer studies to reveal dimensional results. This has been suggested to be primarily due to improvements in taxometric practice, rendering early influential taxonic findings spurious. Hence, we expect similar findings to emerge for the HSP/HSC scales. More recently, two studies have applied latent class analysis to the HSC and HSP scales. The first study identified three SPS classes across four ethnically diverse UK-based samples containing 8–19 year olds, using the HSC scale: a low, a medium and a high sensitive group. These latent class findings were replicated in a study on multiple US adult samples using the HSP scale, which also revealed a three-class solution: 31% high sensitive, 40% medium, and 29% low sensitive. The authors labelled this third class Tulips, who are intermediate between Orchids and Dandelions in terms of their sensitivity scores. Together, the studies suggested preliminary cut-off scores differentiating low, medium and high sensitive groups, which were relatively consistent across ages, but characterised by relatively low sensitivity and specificity. In the adult study, the three-group categorisation was subsequently applied to an independent sample of 230 UK-based adults. This revealed that differences between the three detected sensitivity groups in response to a positive mood-induction task were of a quantitative rather than qualitative nature: Orchid individuals scored significantly higher in Neuroticism and emotional reactivity and lower in Extraversion than Dandelions and Tulips, with Tulips also significantly differing from Dandelions and scoring intermediate between Dandelions and Orchids. In both studies, the HSP/HSC scales were relatively normally distributed. Overall, these findings suggest that SPS is a continuously distributed trait, but that people fall into three sensitivity groups along a sensitivity continuum.
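To illustrate the class-enumeration logic behind such latent class analyses, the sketch below fits Gaussian mixture models with one to four classes to simulated total sensitivity scores and compares them by BIC. Published latent class analyses of the HSP/HSC scales are typically run on item-level data in dedicated software; this univariate example, with made-up group means and proportions, is intended only to show the general approach.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Simulated total sensitivity scores drawn from three overlapping groups
# (roughly 30% low, 40% medium, 30% high); all parameters are hypothetical.
scores = np.concatenate([
    rng.normal(3.2, 0.5, 300),   # low sensitive ("Dandelions")
    rng.normal(4.2, 0.5, 400),   # medium sensitive ("Tulips")
    rng.normal(5.2, 0.5, 300),   # high sensitive ("Orchids")
]).reshape(-1, 1)

# Compare 1- to 4-class solutions by BIC (lower is better).
for k in range(1, 5):
    gmm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(scores)
    print(f"{k} classes: BIC = {gmm.bic(scores):.1f}")

# Class means and proportions of the preferred solution can then be inspected.
best = GaussianMixture(n_components=3, n_init=5, random_state=0).fit(scores)
print("class means:", best.means_.ravel().round(2))
print("class proportions:", best.weights_.round(2))
```

Cut-off scores derived from such solutions should be treated cautiously, given the modest sensitivity and specificity reported in the studies above.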
Whether SPS should be considered a dimensional or categorical trait is an important question. Dimensionality would suggest that individuals in the population differ merely quantitatively in level of SPS traits, with normal variation from low to high. In contrast, categorisation would suggest that individuals in the population can be separated into non-arbitrary, qualitatively different sensitivity groups. Clarity about the categorical or dimensional nature of SPS has consequences for how SPS should be assessed, and for the selection of suitable research designs. Overall, the more recent research on the HSP and HSC scales suggests that SPS is a continuous trait, along which individuals fall into different sensitivity classes. In terms of future work, taxometric analyses on the HSC/HSP scales would be a useful addition to the already conducted latent class analyses, as they would address the question of distinguishing taxa from dimensions more directly. Aron and Aron introduced SPS as a trait related to, but distinct from, other temperament and personality constructs. Because the construct was developed on the basis of an extensive review of the animal literature, it has been suggested that SPS may relate to a general trait of sensitivity to the environment, or meta-personality trait of contextual sensitivity, which structures personality differences by determining the degree to which individual behaviour is guided by environmental influence. In light of this, we discuss here SPS within the context of temperament and personality constructs. According to Eysenck, individual differences in personality can be described in terms of two dimensions: introversion and neuroticism. Introversion relates to the optimal level of arousal at which an individual performs best: for those high in introversion, this level is considerably lower than for those high in extraversion. Neuroticism comprises proneness to distress and emotional instability. In a series of seven studies, Aron and Aron examined associations of SPS with introversion and neuroticism. They found low to moderate associations with introversion and fairly high associations with neuroticism. As to introversion, qualitative research by Aron and Aron shows that not all highly sensitive individuals display the profile of being socially introverted. As an alternative to Eysenck's theory, Gray's Reinforcement Sensitivity Theory proposed that individual differences in the sensitivity of basic brain systems underlie individual differences in personality: the Behavioural Inhibition System (BIS), Behavioural Approach System (BAS), and Fight/Flight System (FFS). In the original version of the theory, the BIS was thought to mediate reactivity to conditioned punishment and frustrating non-reward, and to underlie negative emotions, in particular anxiety. The BAS was thought to be reactive to conditioned stimuli signalling reward or relief from punishment and to underlie positive emotions. The FFS was thought to modulate responses to unconditioned aversive stimuli and to underlie fear and defensive aggression. In 2000, Gray and McNaughton published a revision of the RST. In this revised RST, the BAS still functions as a reward system, and modulates responses to all appetitive stimuli. Similarly, the FFS was assumed to modulate responses to all aversive stimuli and was renamed the Flight, Fight and Freezing System (FFFS). The BIS was now thought to be activated by stimuli that activate both the BAS and FFFS, and to be responsible for the inhibition of ongoing behaviour in the service of
conflict detection and resolution. According to Aron and Aron, SPS is especially related to BIS functioning, given the 'pause-to-check' function of this system. Consistent with this assumption, Smolewska et al. reported a positive association of BIS sensitivity with SPS as a global construct, as well as with its three components. In the same study, BAS sensitivity was found to be largely unrelated to SPS. If narrower facets of BAS are differentiated, i.e. positive affect versus approach motivation in response to incentive cues, only the former showed a small significant association with SPS as a global construct and with the EOE and AES components. More recently, Pluess et al. examined the association of SPS with BIS and BAS sensitivity in two samples of children. They found significant positive correlations of both BIS and BAS sensitivity with SPS as a global construct, as well as with the EOE and AES components. Only BIS sensitivity was also positively correlated with the LST component. According to Rothbart et al., temperament can be described as individual differences in emotional, motor, and attentional reactivity (as measured by latency, intensity and recovery of response) and in the self-regulation processes that modulate reactivity. Temperamental reactivity refers to responses to change in the external and internal environment, measured in terms of the latency, duration and intensity of emotional, orienting and motor reactions. Self-regulation refers to processes that serve to modulate reactivity, especially processes of executive attention and effortful control. Depending on the developmental stage, three to five broad temperament domains are distinguished. Positive affectivity/extraversion reflects one's level of pleasurable engagement with the environment and the extent to which a person feels active, happy and enthusiastic; negative affectivity reflects subjective distress and an unpleasurable engagement with the environment; effortful control comprises processes that modulate reactivity, such as attentional control, inhibitory control and activation control. In some developmental stages, affiliative motivation and/or orienting sensitivity/openness are conceived as separate domains. Evans and Rothbart examined the association of SPS components with temperament domains and facets of Rothbart's model in adults. For SPS, a two-factor conceptualization was used: one factor combined the EOE and LST components reported by Smolewska et al.; the other was identical to the AES component. Similar findings have been reported in a sample of gifted young adults. In Evans and Rothbart's paper, the combined EOE/LST component of SPS was found to have a strong positive association with negative affectivity, a moderate negative association with effortful control and a relatively low negative association with positive affectivity/extraversion. The AES component of SPS was found to have a strong positive association with all facets of orienting sensitivity from Rothbart's model, and low to moderate positive associations with positive affectivity/extraversion and affiliative motivation. Sobocko and Zelenski replicated the positive associations between the negative affect component of SPS and negative reactivity in Rothbart's model. Bridges and Schendan likewise replicated the association between the negative affect component of SPS and negative reactivity, based on both Rothbart and colleagues' model and their adult temperament scale. Further, the EOE and LST components of SPS are moderately negatively correlated with Rothbart's
extraversion/surgency but weakly positively correlated with Rothbart's orienting sensitivity, and all components of SPS are weakly negatively related to Rothbart's effortful control. Sobocko and Zelenski also replicated the positive associations between the AES component of SPS and positive affectivity/extraversion in Rothbart's model. Bridges and Schendan also found the AES component of SPS to be positively associated with orienting sensitivity. Pluess et al. reported, in samples of 9–18 year olds, positive correlations of negative affectivity, positive affectivity and effortful control with SPS as a global construct as well as with EOE, LST and AES. The five-factor model of personality comprises five broad personality domains, derived from natural language using a lexicographic approach. The domains include Extraversion, Neuroticism, Openness to experience, Agreeableness and Conscientiousness, and each domain has a number of specific facets. As a global construct, SPS has been found to be positively associated with Neuroticism with a moderate effect size and negatively associated with the domain of Extraversion. Also, in most studies SPS was found to be positively associated with Openness to experience. Five studies examined associations of SPS as a global construct with the domains of Agreeableness and Conscientiousness; in none of these studies were the associations significant. When the three dimensions of SPS are examined separately, a more differentiated picture emerges. Across studies, both EOE and LST were found to have a positive association with Neuroticism. Also, both EOE and LST were found to be inversely related to Extraversion; these associations were, however, generally weaker and less consistent across studies than those with Neuroticism. In one study, in 15–19 year olds, EOE was inversely related to Conscientiousness. One study in undergraduates found both EOE and LST to be inversely related to Openness, while another study in a diverse adult sample found a weak positive relation for LST. A further study in undergraduates likewise found both EOE and LST to be inversely related to Openness. AES was consistently found to be positively associated with Openness to experience. In three studies, AES was also positively related to Conscientiousness, and in two studies AES was positively related to Neuroticism, although much less strongly than to Openness, in line with AES being more strongly related to positive than to negative affect characteristics. In most studies, none of the SPS components were found to be significantly associated with Agreeableness. As an exception, in Lionetti et al.
and Bridges and Schendan, a positive association between AES and Agreeableness emerged in relation to a shortened 12-item version of the HSP scale, while a weak negative relation was found for LST and Agreeableness. Two unpublished pilot studies have moved beyond the predominant focus on the domain level of the five-factor model to a fine-grained examination of which five-factor subdomains are specifically relevant for SPS. In the first pilot study, a community sample of 16 through 26 year olds completed both the HSP and the NEO-PI-3 scales, and both domain- and facet-level associations were examined. At domain level, SPS was found to be positively associated with Neuroticism and Openness and negatively associated with Extraversion, whereas no significant associations emerged with Agreeableness and Conscientiousness. At facet level, however, a more nuanced picture emerged, showing that some of the associations at domain level were driven by associations among some but not all facets. Also, it became clear that non-significant associations at the domain level resulted from opposite patterns of associations for facets within the same domain. These preliminary findings suggest that in order to comprehensively grasp the set of personality facets that characterize high SPS individuals, a facet-level analysis is needed. A second pilot study was conducted in a sample of 13 professionals who registered for a training programme "HSP for Professionals". Prior to the training, they were asked to fill in the NEO-PI-3, which assesses the five-factor model, taking the perspective of a prototypical high SPS individual. Mean raw scores were converted to stanines in order to identify domains and facets that pop up as 'low'/'very low' or 'high'/'very high' compared to population norms (a minimal sketch of this conversion is given below). At domain level, Neuroticism popped up as 'very high', Agreeableness and Openness as 'high' and Extraversion as 'low'. Interestingly, above or below average domain scores were found to be driven by above or below average scores on only part of the facets. Also, an average domain score could be driven by the fact that within that domain, some facets popped up as 'high', whereas others popped up as 'low' or scored average. These preliminary findings suggest that high SPS might be considered as a blend of personality facets across domains. This opens opportunities to further extend and refine the set of tools available for the assessment of SPS, more specifically by constructing a five-factor model-based SPS compound consisting of all the facets that pop up as 'high' or 'low' in prototypical high SPS individuals.
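The following minimal sketch illustrates the kind of stanine conversion described above, assuming that a population norm mean and standard deviation are available for each NEO-PI-3 domain or facet; the numbers used here are illustrative placeholders, not published norms.

def to_stanine(raw, norm_mean, norm_sd):
    # Standard score bands of 0.5 SD, centred on a mean stanine of 5
    z = (raw - norm_mean) / norm_sd
    return int(min(9, max(1, round(2 * z) + 5)))

# Hypothetical example: a raw domain score of 110 against an assumed norm
# of mean 90 and SD 20 (z = 1.0) falls in stanine 7, i.e. 'high'.
print(to_stanine(110, 90, 20))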
In some studies, constructs from different personality theories have been related to SPS simultaneously. For example, Smolewska et al. examined the relative contribution of Neuroticism and BIS sensitivity in predicting SPS. They found that both Neuroticism and BIS sensitivity positively predicted SPS as a global construct, as well as the SPS components EOE and LST. The associations with Neuroticism were markedly stronger than those with BIS sensitivity. In addition, Neuroticism positively predicted AES, although that association was lower in magnitude than the associations with the other two subscales and with SPS as a global construct. In two recent studies in child samples, multiple regression analyses were used to examine associations of BIS and BAS sensitivity, positive and negative emotionality/affectivity and effortful control with SPS as a global construct as well as with the EOE, LST and AES components. The multivariate models predicted 26 to 34% of the variance of the SPS global score, and 15 to 35% of the variance of the SPS components. In the first study, BIS sensitivity and Neuroticism emerged as significant predictors of SPS as a global construct, as well as of EOE. BIS sensitivity also predicted LST. BAS sensitivity, positive emotionality/affectivity and, albeit to a lesser extent, BIS sensitivity predicted AES. In the second study, BIS sensitivity was unrelated to SPS, but Neuroticism was found to positively predict SPS as a global construct, as well as EOE and LST. In addition, BAS sensitivity was inversely related to LST. Finally, positive emotionality positively predicted both SPS as a global construct and AES. Across the two studies, EOE and LST were most consistently predicted by BIS sensitivity and negative emotionality, whereas AES was predominantly predicted by BAS sensitivity and positive emotionality. Nonetheless, these different personality constructs at best explained a modest proportion of the variance of SPS, suggesting that SPS is not fully explained or captured by existing temperament and personality constructs. As reviewed above, SPS shows small to moderate associations with existing temperament and personality traits, even when these are taken together, and also differs conceptually from these traits. There is therefore reasonably good evidence that SPS can be considered a distinct construct. Whether SPS reflects a more fundamental or meta-personality trait of sensitivity to environments remains a hypothesis. Future research should furthermore continue to examine associations of SPS with traditional temperament and personality constructs, as this can aid the understanding of SPS based on what is already known regarding personality constructs. For example, normative data are available for the five-factor model, but are not available for SPS. One potential avenue is to extend the above pilot research on associations between SPS and the five-factor model facets. Different approaches are informative here: a facet-level analysis of associations between HSP or HSC scores and five-factor model traits; and a comparison of the five-factor model domain and facet scores of high SPS individuals to population norms in order to identify domains and facets on which these individuals score either high or low. Understanding the aetiology of any complex trait requires a vast effort combining research from large-scale genetic databases. This often starts with twin research, whereby the heritability of a trait is estimated by comparing twin correlations between monozygotic and dizygotic twins. This classical twin design can give an estimate of the proportion of variance in a trait that is explained by genetic, shared environmental and non-shared environmental factors.
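As a hedged illustration of this decomposition, the sketch below applies Falconer's approximation, in which heritability is inferred from the gap between monozygotic and dizygotic twin correlations; published twin analyses of SPS use full structural equation modelling rather than this shortcut, and the correlations entered here are illustrative, not estimates from the study discussed next.

def ace_estimates(r_mz, r_dz):
    # Falconer-style approximation of the ACE variance components
    a2 = 2 * (r_mz - r_dz)   # A: additive genetic variance (heritability)
    c2 = 2 * r_dz - r_mz     # C: shared environment
    e2 = 1 - r_mz            # E: non-shared environment plus measurement error
    return a2, c2, e2

# Illustrative twin correlations only; prints roughly (0.46, 0.01, 0.53)
print(ace_estimates(r_mz=0.47, r_dz=0.24))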
While this method is useful for elucidating whether genes play a role in a given trait, it cannot specify which genetic variants are implicated in its ontogeny. For this, molecular genetic studies are needed to find associations between traits and specific variants. Candidate gene studies test for associations with genetic variants, such as single nucleotide polymorphisms, that have some known biological function; a priori assumptions are therefore made about the relevance of the gene for the given trait. Genome-wide association studies (GWAS) search for associations across the entire genome and thereby represent a data-driven approach to finding significant genetic variants. GWAS require data from huge samples of the population to account for statistical obstacles such as multiple comparisons. Only one twin study has been conducted assessing the heritability of SPS. This study estimated that 47% of the variance in SPS, assessed using the HSC scale in a UK population-representative sample of adolescents, could be explained by genetic factors, with the remaining variance explained by non-shared environmental factors. Multivariate analyses revealed that genetic influences on the AES component were largely distinct from those underlying LST and EOE. This may reflect an underlying multi-dimensional biological model of sensitivity, and it opens up the possibility that genetic factors may contribute to the development of subgroups of high SPS individuals who in particular score high on either AES or LST/EOE. SPS correlated significantly with five-factor model Neuroticism and Extraversion, and these correlations were largely explained by shared genetic influences. This suggests that the small to modest phenotypic overlap of SPS with these other personality traits is due to shared genes. Only two molecular genetic studies of SPS have been conducted. The first study included 169 individuals and reported an association between SPS and the serotonin transporter-linked polymorphic region (5-HTTLPR). 5-HTTLPR has been shown to increase sensitivity to environmental stimuli, specifically negative but also positive ones. High SPS was related to s/s homozygosity. However, results from this study should be interpreted with caution, given that the association between SPS and the s-allele was quite small and the study had a small sample size. The second molecular genetic study assessed the association between SPS and multiple candidate genes in the dopaminergic system in a sample of 480 college students. Ten polymorphisms were reported to show significant associations with SPS and were included in subsequent regression analyses, which revealed that these polymorphisms together explained as much as 15% of the variance in SPS, with recent stressful life events explaining an additional 2%. Such large effect sizes are rather unusual in molecular genetic studies and require replication. To date, five functional MRI studies of SPS have been conducted in humans, providing evidence for its neural basis. Utilizing the HSP scale as a measure of SPS, two studies examined brain responses to perceptual tasks, while another two investigated SPS responsivity to emotional stimuli. The fifth study examined differences in resting-state brain activity in association with SPS. Additionally, several behavioural studies of SPS have been conducted in humans, providing evidence that awareness of environmental subtleties and emotional reactivity are enhanced in SPS. Furthermore, while
studies have not yet directly addressed depth of processing, empathy, and overstimulation, findings point towards differences in these also. These behavioural studies will be discussed in the context of the associated neuroimaging findings. In one fMRI study examining perceptual responsivity as a function of SPS, participants were scanned while performing a task requiring them to notice subtle differences in photographs of landscapes. Results showed that higher levels of SPS were associated with increased reaction times and increased activation of brain areas implicated in higher-order visual processing and attention, such as the right claustrum, left occipito-temporal, bilateral temporal, and medial and posterior parietal regions, in response to detecting minor changes in stimuli. A behavioural study in a diverse sample of 97 adults likewise found that the high SPS group had higher reaction times for detecting changes to an object in photographs only when the change was subtle, not when it was more obvious. In another fMRI study examining the perceptual aspects of SPS using the HSP scale cross-culturally, Asians and Americans performed visuospatial tasks emphasising judgments that were already known to be either context-independent or context-dependent, such that brain activation is generally higher when performing the more difficult task. It was found that individuals scoring high versus low on SPS showed smaller culture-related differences in task performance. This suggests that SPS is associated with perceptual judgments that are based more directly on the actual incoming stimuli as they are, rather than on a cultural information 'filter'. In line with this, while the Asians and Americans displayed increased activation of the frontal and parietal cortices when performing the more difficult task, this was not found in the high SPS individuals among the Asians and Americans. These results are consistent with a behavioural study involving German undergraduate students, which showed that SPS was positively correlated with enhanced performance in a visual detection task. Interestingly, though no neurobiological correlates were investigated, SPS has been explored in association with other visual stimuli and preferences; specifically, it has been investigated whether it is associated with blur tolerance and preferences for high-chroma colors. No significant association was identified between SPS and the degree of blur tolerance, nor with preferences for high-chroma colors, even though at a descriptive level highly sensitive individuals reported liking high-chroma colors less than individuals who were low on sensitivity. In another fMRI study examining the neural correlates of SPS in response to emotionally evocative face images of a partner or stranger, recently married men and women were scanned twice. The task was specifically designed to measure empathic processes, as participants were first prompted with a sentence describing the context of the face image with corresponding statements such as, "Your partner is feeling very happy because something wonderful has happened to them". The results revealed that across all conditions, SPS was significantly associated with increased activation in brain regions that coordinate attention and action planning. For happy and sad photo conditions, SPS was associated with stronger activation in brain areas involved in sensory integration, awareness and empathy, as well as preparation for action and cognitive self-control. The insula is particularly interesting with respect to SPS because it is responsible
for perceiving and integrating interoceptive sensory stimuli, and has been thought to be the "seat of awareness". Also, activation of the inferior frontal gyrus was found, which is part of a Mirror Neuron System, a network of regions that are involved in empathic processing and facilitate rapid intuition of others' goals. Similarly, the cingulate cortex is involved in attention and the recognition of others' actions. The premotor area finding is also of interest in the context of responses to others' emotions, as this area is involved in unconscious behavioural control and action planning. Finally, the dorsolateral prefrontal cortex is involved in higher-order cognitive processing, decision making, self-regulation and task performance. Accordingly, these data suggest that high SPS individuals may readily intuit, "feel" and integrate information, and respond to others' affective states, in particular to positive emotional states of a close partner. The results are consistent with cardinal traits of SPS, as they highlight depth of processing, awareness and being more affected by others' moods and affective displays. In another fMRI study of SPS, a group of females were scanned while viewing generally positive, negative, and neutral images from the standard International Affective Picture System (IAPS). Participants also completed the HSP scale and provided retrospective reports of childhood quality, measured with a battery of validated scales. Results showed that SPS was significantly correlated with neural activity in areas involved in memory, emotion, hormonal balance, and reflective thinking. Furthermore, results showed that SPS was associated with a stronger reward response to positive stimuli, and this effect was especially amplified for individuals reporting higher quality childhoods. For negative stimuli, the SPS x childhood interaction showed significant activation in brain regions implicated in emotion processing and self-regulation, without diminished reward activity. These results provide a suggestion for how positive childhoods may have long-term impacts on individuals' susceptibility to stimuli, namely through mechanisms related to self-regulation and by buffering individuals from dampened reward effects in response to negative stimuli. Finally, researchers investigated whether resting-state brain activity mediated the effects of dopamine-related genes on SPS. It was found that temporal homogeneity of regional spontaneous activity in the precuneus suppressed the effect of dopamine-related genes on SPS. The precuneus is involved in the integration of higher-order information such as visuo-spatial imagery, episodic memory, and emotional stimuli, especially when self-related mental representations and self-processing are involved. This finding indicates that the relation between SPS and dopamine genes is moderated by precuneus activity. In two behavioural studies with English undergraduate students, high SPS groups differed on controlled and automatic attention tasks. In one study using a standardized test, a high SPS or AES group made more errors only when the task involved incongruent flankers, supporting the association of SPS with greater attention to irrelevant information, which may promote greater depth of processing but can result in errors. In another study, high SPS individuals showed both more interference and more facilitation effects for spatial congruency on an automatic exogenous attention orienting task. Consistent with the idea that greater automatic attention may support greater
awareness of subtle information, another behavioural study suggested that high SPS groups have a greater ability to become consciously aware of subtle higher-order, structured information during an implicit learning task. Neurosensitivity mechanisms, especially lower inhibition and automatic attention, may contribute to creative abilities in individuals high in SPS. Altogether, these findings suggest that SPS is associated with differences in controlled and automatic attention neural processes that have implications for other aspects of cognition, with some being beneficial and some not. Basic research on the neural and physiological mechanisms underlying SPS greatly advances our understanding of the construct. Since genetic evidence underlying personality traits, including SPS, is not yet conclusive, it is argued that basic research on the neural basis of behaviour in experimental animals is needed to further advance mechanistic understanding. Indeed, animal models allow control over environmental factors in ways that are not possible in humans, as well as invasive and causal manipulations. Thus, animal models may provide critical advances on the role of neuromodulators in behaviour and cognition in relation to biologically based traits. Sensitivity to environments is seen across many animal species, with two different behavioural patterns consistently reported: one bold, proactive and more extraverted, and another more cautious, reactive and inhibited. Hence, using animal models to understand the biology underlying SPS is sensible. One potential animal model that can help to advance the understanding of mechanisms is the serotonin transporter (5-HTT) knockout mouse/rat model. It is now widely accepted that these mouse and rat models, which model the 5-HTTLPR s-allele, show behavioural resemblances with people who are high on SPS. For instance, the knockout animals exhibit faster sensory processing, show reduced latent inhibition, which is indicative of increased openness to environmental subtleties, adapt better to changes in the environment, exhibit increased anxiety-related behaviour in response to novel or emotionally conflicting situations, show increased responsivity to rewarding agents, have a better memory for emotionally arousing events, and show depression-like phenotypes upon exposure to uncontrollable stress. There is also evidence that 5-HTT knockout mice behave according to the Differential Susceptibility theory, as cohabitation of male mice with female mice reduced anxiety-like behaviour and increased exploratory locomotion in 5-HTT knockout but not control mice. Although the association between SPS and the serotonin system needs further replication, the phenotypic overlap encourages the use of 5-HTT rodents as a model for Environmental Sensitivity approximating SPS, in order to increase the understanding of the neural mechanisms underlying SPS. In line with the human fMRI studies, functional and structural imaging studies in 5-HTT rodents point to altered activity of the prefrontal cortex, amygdala, insula, nucleus accumbens, and hippocampus. Brain activity responses as measured by fMRI reflect a summation of complex synaptic signalling events. Since information integration depends on the balance between excitation and inhibition in the brain, mediated by the neurotransmitters glutamate and GABA, respectively, the excitation-inhibition balance in the brain may well be the basis of the neural mechanisms driving increased sensitivity to environments. Using 5-HTT knockout rats as an animal model for
Environmental Sensitivity, approximating high SPS, it was found that faster sensory processing was associated with reduced inhibitory control over excitatory principal neurons in the somatosensory cortex, leading to increased excitability and sensory gating. It is possible that the increased excitability extends to other regions beyond the somatosensory cortex, given that GABA system components are reduced in the somatosensory cortex, prefrontal cortex and hippocampus. Of interest, during brain maturation, GABA undergoes a switch from inducing depolarizing to hyperpolarizing responses in postsynaptic cells. This switch depends, among other factors, on increased expression of the K/Cl co-transporter KCC2. In 5-HTT knockout rats, KCC2 expression is reduced in the cortex, which would increase the membrane depolarization of postsynaptic cells receiving GABAergic inputs. This raises the possibility that the behavioural profile of 5-HTT knockout rats, and thereby Environmental Sensitivity, may relate to neuronal immaturity. As suggested by a group of neuroscientists, neuronal immaturity may be associated with increased plasticity and openness to the environment. Besides the brain, related peripheral systems may also contribute to sensitivity to environments. The hypothalamic-pituitary-adrenal (HPA) axis is implicated in the bodily response to environmental insults, allowing the organism to respond in an adaptive manner. Studies using 5-HTT knockout rats revealed that under baseline conditions, plasma corticosterone levels are increased compared to wild-type rats, but reduced after moderate early life stress. This was related to increased adrenal mRNA levels of, for example, the adrenocorticotropic hormone (ACTH) receptor. With the use of an in vitro adrenal assay, naïve 5-HTT knockout rats were furthermore shown to display increased adrenal ACTH sensitivity. Interestingly, no changes in HPA-axis components were found in the hypothalamus and pituitary, suggesting that peripheral systems independent of the brain can contribute to sensitivity to environments. It has been well established that environmental factors have the ability to modify gene expression through epigenetic mechanisms. Epigenetic mechanisms refer to changes in gene expression that do not involve changes in the DNA sequence. One type of epigenetic mechanism through which early life factors can alter gene expression later in life is DNA methylation, which involves the addition of methyl groups to the DNA to convert cytosine to 5-methylcytosine. Highly methylated areas tend to be less transcriptionally active. Using 5-HTT knockout rats, it was found that DNA methylation of the corticotrophin-releasing factor (CRF) gene was increased in the amygdala of 5-HTT knockout rats exposed to early life stress, compared to wild-type control rats and rats not exposed to early life stress. This correlated significantly with reduced CRF mRNA levels. CRF mRNA levels were in turn found to correlate with improved stress coping behaviour, as a manifestation of sensitivity to environments. Thus, while no evidence was found for changes in HPA-axis components in the brain to regulate HPA-axis reactivity to early life stress, environmental factors may influence HPA-axis reactivity through epigenetic mechanisms in the brain. Research on the genetic and environmental aetiologies of SPS is still in its infancy. As candidate gene studies have been criticised for their reliance on a priori assumptions about the biological function of specific genes, knowledge of which is limited at present, a multi-pronged approach is needed
to investigate the aetiology underlying SPS. Also, common and complex phenotypes, such as SPS, are expected to result from multiple genetic variants of small effect size, as well as from synergistic interactions with the environment. To advance the understanding of the aetiology of SPS, we recommend different levels of analysis for future research. First, finding an association between SPS and 5-HTTLPR supports recent theoretical assumptions that high SPS and the s-allele share phenotypes, in terms of heightened sensitivity to environments and emotional reactivity. Research with animals in the laboratory does suggest strong links between serotonergic gene variants and enhanced attention to emotional stimuli, a key feature of SPS. Thus, it will be very relevant to conduct large-scale studies examining serotonin gene variants in humans. Second, twin studies should be conducted in order to extend initial findings regarding the aetiologies of SPS beyond adolescents to children and adults, and to study stability and change over time of genetic and environmental effects. Further, twin-based DeFries-Fulker extremes analyses would be useful in order to assess whether high levels of SPS are quantitatively similar or qualitatively different from normal aetiological variation in SPS, further addressing the continuum versus category question from an aetiological standpoint. Third, the genetic structure of SPS needs to be assessed in a GWAS of sufficient size, in order to develop a basic model for the specific genetic variants associated with SPS. Lastly, we recommend more novel molecular genetic approaches such as genome-wide complex trait analyses and the identification of polygenic scores for SPS, which are created for individuals in a new target sample based on the number of trait-associated alleles weighted by their effect size from the discovery GWAS sample.
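As a purely illustrative sketch of that polygenic score computation, the example below sums allele dosages weighted by GWAS effect sizes; the genotypes and effect sizes are simulated, since no discovery GWAS of SPS is available yet.

import numpy as np

rng = np.random.default_rng(1)
n_people, n_snps = 5, 8
genotypes = rng.integers(0, 3, size=(n_people, n_snps))  # allele dosages 0/1/2
gwas_betas = rng.normal(0.0, 0.05, size=n_snps)          # per-variant effect sizes

# Each person's polygenic score is the effect-size-weighted sum of their alleles
polygenic_scores = genotypes @ gwas_betas
print(polygenic_scores)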
While the fMRI studies have brought substantial advances in understanding the neural underpinnings of SPS, this work is still in its infancy. One direction for future research we recommend is to examine large-scale brain networks. A paradigm shift in the field of cognitive neuroscience emphasises the functioning of the brain as an activity balance between sets of large-scale networks that support unique, broad domains of cognitive functions. These networks include the salience network and the default mode network. Given the function of these networks, they may well underlie the SPS sensitivity facets. For instance, heightened emotional reactivity as observed in SPS could be related to increased salience network activity. Likewise, deep cognitive processing could relate to increased activity of the default mode network. Understanding the highly sensitive brain in terms of large-scale brain networks would significantly advance our insight into the neural basis of SPS. Specifically, it would help in understanding how deep information processing and heightened emotional reactivity in SPS are associated with each other. An open question is whether deep cognitive processing is the central facet of SPS, and other phenotypes are secondary. It is also possible that reduced 'filtering' of sensory information, leading to increased awareness of environmental subtleties, drives subsequent increased emotional and cognitive processing of the sensory information. Specialised large-scale brain network analyses allow the identification of a central node by investigating the directionality of functional connectivity between networks. Of interest for further understanding the function of the high SPS brain is the Embodied Predictive Interoception Coding model. This model postulates that the brain anticipates incoming sensory inputs by generating predictions from past experiences. Detection of a salient stimulus, by comparing the predictions to actual sensory input, can then be used as an alerting/reorienting signal and relayed to the appropriate nodes that can implement a shift in attention or behaviour. This process involves the generation of predictions by agranular cortices and prediction errors by granular cortices in the salience and default mode networks. Since large-scale brain networks have also been identified in rodents, they represent an excellent translational assay to link data derived from animal studies to humans. While studies so far are compatible with the SPS characteristics of greater depth of processing and emotional reactivity, these characteristics have not been directly examined. In the brain, perceptual information processing proceeds hierarchically from low to deep levels; that is, neurons coding low-level features, such as lines, converge onto neurons at more advanced stages of processing to construct higher-level features. Association cortex contains convergence-divergence zones wherein higher-level information feeds back to lower levels, producing richer semantic representations embodied in lower-level perceptual information. This raises the question of how depth of processing is related to perception. Evidence is accumulating that recurrent and top-down feedback processes in frontoparietal regions, which contribute to greater depth of processing, affect perception. Perception may be altered in SPS due to higher sensitivity of perceptual processing itself or to influences on perception from deeper information processing, including attention mechanisms, or top-down influences of high emotional reactivity. These possibilities may be distinguished using perceptual tasks testing bottom-up versus top-down influences with neutral stimuli. The use of empirical tests to assess sensory perception by itself may also reveal how SPS relates to changes in the perception of sensory information and deeper information processing. For instance, the ability to inhibit responses to incoming sensory information is an important feature of a healthy individual, for which many conventional EEG tests, such as sensory or sensorimotor gating paradigms, are at hand. Indeed, in work prior to the definition of SPS, more creative people were found to be more sensitive, defined as habituating more slowly to sensory noise and showing higher skin potentials. In line with this, 5-HTT knockout rats show reduced latent inhibition, also indicative of greater attention to irrelevant sensory stimuli. Furthermore, creative people who are sensitive, defined as having high resting arousal, physiological over-reactivity to stimulation, and poor biofeedback performance, show more variable alpha EEG responses and, on tasks requiring more creativity, less blocking of alpha EEG. Notably, a field of human studies is emerging on the involvement of neuronal coherence and computation in gating and perception, but also in other relevant cognitive functions such as multisensory integration, working memory, and selective attention, which may benefit SPS research. Depth of processing predicts differences in neurobehavioral characteristics of SPS in memory and attention, reflecting greater semantic, elaborative, distinctive, and effortful information processing. For example, regarding memory, individuals high on SPS should perform better on episodic
memory tests, which benefit from greater depth of processing. Consistent with this, groups carrying the s- relative to the l-allele of 5-HTTLPR, as well as 5-HTT knockout rodents, show better episodic memory and attention. Furthermore, high SPS individuals show better episodic memory and recall more details, even following implicit learning, suggesting that automaticity of processes leads to better memory. Finally, further investigation of physiological responses, such as HPA-axis reactivity, is of interest to expand our understanding of the biology of Environmental Sensitivity and its objective measurement. Plasma ACTH and cortisol levels, and DNA methylation levels of genes related to the HPA-axis in blood cells, can readily be measured in humans, and these measurements can be extended to the brain in rodents. Of interest, 5-HTT knockout rats and human 5-HTTLPR s-allele carriers similarly display a decrease in heart rate in response to a threat-predicting cue, and similarly show moderation of the heart rate response by a neural circuitry involving the amygdala and the periaqueductal gray (PAG). Such potential changes in autonomic regulation are supported by human imaging data whereby high SPS is associated with greater activation in the amygdala and PAG in response to emotionally evocative stimuli. While 5-HTT knockout rat data help to fine-tune the understanding of mechanisms underlying high SPS, a drawback is that these rats are genetically defined and not phenotypically selected for high SPS. Therefore, a phenotypic rat model based on extremes in emotionality and increased information processing in a population of wild-type rats is currently being developed. The phenotypes of this new model resemble those of 5-HTT knockout rats, but the underlying aetiology is different. By combining animal and human research we can make significant advances in the mechanistic understanding of high SPS. SPS is conceptualised as a trait rather than a disorder, but in interaction with negative environments high SPS may increase risk for maladaptation and negative developmental outcomes, including mental and physical symptoms. Indeed, research has related SPS to a range of negative outcomes. These include higher levels of psychopathology-related traits, including internalising problems, anxiety, depression, and traits of autism spectrum disorders and alexithymia. SPS has also been associated with lower levels of subjective happiness and lower levels of life satisfaction. It is also related to factors associated with poor stress management, including difficulties in emotion regulation, a greater but more accurate perception of home chaos, increased levels of stress, physical symptoms of ill health, and greater work displeasure and need for recovery. Interestingly, a computational model has been proposed to better understand the association of SPS with feelings of distress and being overwhelmed, pointing to the importance of an external regulating agent in promoting the ability of highly sensitive individuals to gradually learn on their own how to cope with upsetting stimuli. Recently, SPS has also been proposed as a trait associated with frequent nightmares and vivid images in dreams, a hypothesis that has yet to be tested, and has been reported to be higher in individuals with type 1 diabetes. Only part of these studies included interaction effects, but of those that did, most have supported the role of interaction with negative environments in predicting maladaptive outcomes, as reviewed in Section 2.2. Central to the conceptualisation of
SPS as reflecting sensitivity to environmental factors is that SPS is not only relevant to understanding maladaptation, but also to optimal development or even flourishing in positive environments. As such, higher levels of SPS have been related to positive outcomes, including increased positive affect following positive mood induction, increased social competence in interaction with positive parenting styles, reduced depression scores and reduced bullying and victimisation following intervention, as reviewed in Section 2.2, and increased activation in the major reward centres of the brain in response to positive stimuli, such as smiling partner faces or generally positive emotional images, as well as higher creativity. Further, the HSP scale correlates significantly with feelings of awe, which add to pleasure and meaning in life, assessed using a standard 6-item Awe scale. An association between SPS and higher creativity, determined by neurobiological factors, has also been proposed by other authors at a theoretical level. Regarding parenting, high SPS mothers have been shown to score significantly higher on Parenting Difficulties and Attunement to Child, whereas high SPS fathers scored significantly higher only on Attunement to Child. Results remained after controlling for external stressors, negative affectivity, education, marital status, age, and children's age. Similarly, a German study reported a negative association between the transition to parenting and well-being in highly sensitive individuals. Furthermore, in a sample of Chinese parents of children with ASD symptoms, SPS has been reported to negatively impact parental mental health through an indirect effect on parental intolerance of uncertainty. These findings suggest that for those high on SPS it is particularly important for their well-being to have ways to manage the perceived overstimulation of parenting, especially given that this could facilitate the expression of their self-reported benefit of the trait, their greater attunement to their children. SPS is linked to increased risk for atypical development and subsequent mental disorder symptoms. Most work thus far has focused on links of SPS to symptoms of anxiety and depression in non-clinical samples. Borrowing from psychological models of depression, a recent theory explains the association between SPS and psychological distress as a secondary phenomenon of cognitive reactivity to sensory information and related negative emotions. As such, it is not sensory stimuli per se or related negative emotions that are hypothesised to lead to psychological distress, but the secondary cognitive reactions of individuals to stimuli and emotions. This cognitive reactivity has been suggested to distinguish healthy and unhealthy individuals with high SPS. Such a model is trans-diagnostic, as it explains psychological distress associated with SPS independent of specific diagnoses. In support of this, Brindle et al. found that difficulties in emotion regulation partially mediate the link between SPS and depression. Further, Meyer et al.
found that higher SPS is related to more negative cognitive and affective reactions to ambiguous social scenarios, which is a cognitive risk factor associated with anxiety and depression. Next to anxiety and depression, sensitivity to environmental stimuli is also relevant to psychiatric disorders such as ASD, attention-deficit/hyperactivity disorder and schizophrenia. However, the relationships of SPS to these disorders remain to be clarified. Different links are possible, such as that SPS may act as a risk or protective factor, modifying factor, precursor or endophenotype for different disorders, or as a cross-disorder trait. Relevant to the question of similarities and differences to disorders involving sensory sensitivities, a recent review of the brain regions involved in each of the conditions revealed that SPS differs from ASD and schizophrenia in that, in response to social and emotional stimuli, SPS uniquely engages brain regions involved in reward processing, empathy, physiological homeostasis, self-other processing, and awareness. However, no study has compared brain structure or function in high SPS individuals directly to those with disorders involving sensitivity to environments. Such studies are needed before firmer conclusions can be drawn. A vibrant research area is the study of sensory symptoms in ASD, which have been added to the clinical symptoms of ASD in the DSM-5. ASD is a neurodevelopmental disorder that is also associated with hypersensitivities, but unlike SPS it has also been linked to hyposensitivity, whereas hyposensitivity has neither been hypothesised nor examined for SPS. Yet, Jerome and Liss reported that individuals high in SPS experienced low registration, and postulated that this could reflect a compensatory mechanism put into place when an organism was so over-aroused that it shut down. This mechanism has also been hypothesised to occur in individuals with autism. Research is needed to delineate whether low registration in SPS and hyposensitivity in ASD are related. Furthermore, it is unclear whether sensory sensitivities in individuals with ASD reflect basic sensory differences or differences in affective responses to these stimuli. No studies so far have quantified SPS among individuals with ASD or other diagnoses in order to test the extent of overlapping architectures of sensory processing. At the neural level, sensory symptoms are thought to originate from differences in low-level processing in sensory-dedicated regions in the brain of individuals with ASD, whereas SPS is associated with brain regions involved in reward processing, memory, physiological homeostasis, self-other processing, empathy and awareness. This implies that sensory sensitivity has distinct qualities in ASD and SPS. More research is needed to understand whether and how sensory processing in ASD relates to SPS. Lastly, there is a broad literature on sensory processing dysregulation, including factors such as poor registration, sensitivity to stimuli, sensation seeking and sensation avoiding. The relation of SPS to this literature needs further empirical testing. We reason that while individuals high in SPS may have any disorder, including Sensory Processing Disorder, the indication that nearly one third of the population may be high in SPS suggests that SPS is not a disorder. That is, it is unlikely that a disorder would be so prevalent under evolutionary pressure. Furthermore, the perceptual advantages of SPS, such as decreased influence of culturally induced perceptual biases, would seem to
suggest that SPS bestows perceptual processing advantages. Individuals with high levels of SPS have been shown to benefit more from psychological intervention. Intervention approaches may therefore not only be particularly vital for individuals high in SPS, given the association of SPS with psychopathology and stress-related problems, but also particularly effective. Proposed interventions for individuals high in SPS experiencing psychological distress include those focusing on increasing an individual's self-efficacy in dealing with emotions. Given that acceptance of negative affective states has been shown to partially mediate the association between SPS and symptoms of depression, and given that associations between SPS and anxiety were only found when mindfulness and acceptance were low, mindfulness- and acceptance-based programmes may also be valuable. Mindfulness-based interventions are increasingly shown to be effective in the reduction of stress and anxiety and in depression relapse prevention. Based on neuroimaging data showing greater responsivity to affective stimuli as a function of SPS in areas implicated in emotion, mindfulness-based training, and in fact different meditation types linked to deactivation of the amygdala, may be useful for the enhancement of self-control and diminished emotional reactivity in high SPS individuals. A randomised controlled study in 47 highly sensitive individuals, identified using the Orienting Sensitivity scale of the Adult Temperament Questionnaire, which is related to SPS, found that mindfulness-based stress reduction had large effects on stress, social anxiety, personal growth and self-acceptance, and moderate effects on emotional empathy and self-transcendence. Recently, it has been proposed that mindfulness-based cognitive therapy (MBCT) may ameliorate psychological distress in individuals with high levels of SPS by addressing cognitive reactivity, and that MBCT may have transdiagnostic intervention effects through mediation by the cognitive reactivity of individuals high in SPS. Finally, a recent study involving Japanese students reported that physical exercise might moderate the association between SPS and depressive tendencies in young adults, but the result has to be replicated in longitudinal studies to clarify the impact of physical activity on the association between sensitivity and depressive symptoms. Most studies so far are based on non-representative samples. Associations between SPS and mental disorder need to be quantified further, also in relation to clinical samples, longitudinal designs, mental health registries, objective and biological markers of physical health and stress, and the economic impact of SPS in terms of expected health-care costs. An important line of future research is to examine the usefulness of SPS as a cross-disorder trait. Cross-disorder traits are not symptoms of disorder, but are, as neutral traits, uniquely suited to bridge psychiatric disorders with biological substrates of behaviour, to clarify heterogeneity and comorbidity, and to inform cross-disorder interventions, which is not achieved by the current diagnostic systems. SPS may be an ideal cross-disorder trait because it is: a) observed in humans
and animals, b) heritable, and c) associated with traits of mental disorders. Furthermore, there is evidence that aetiological factors involved in SPS partially overlap with those in psychiatric disorders; for example, serotonergic and dopaminergic genes are also involved in the aetiology of ADHD, anxiety and depression. SPS may be a suitable addition to the Research Domain Criteria, as the Sensory Processing Sensitivity framework has been established based on observing stimulus responsivity in >100 animal species, indicating a strong biological foundation. A logical progression is to use human neurocognitive measures, such as electro- or magneto-encephalography and event-related potential studies, in particular to expand this work to human cognitive neuroscience. A critical need is to characterize basic sensorimotor, perceptual, socioemotional and neurocognitive function in relation to SPS, with learning and memory, attention, and emotional reactivity as the abilities expected to vary most with SPS, but also other basic abilities. Further, more complex abilities should be characterised, as differences in basic abilities will affect more complex ones, and widespread neurobehavioral differences, which may affect large-scale brain networks, are predicted based on the neural and developmental mechanisms of SPS. Such neurobehavioral characteristics will be important for defining what SPS is, for developing objective measures of SPS in addition to questionnaires, and for tracking neurobehavioral characteristics in SPS across the lifespan and as a function of different kinds of environments. An unresolved question is to what extent SPS taps into the same construct of sensory reactivity as ASD. High SPS and ASD are both characterised by sensory sensitivities, but there are also important differences: SPS is a temperament trait and not a disorder, differs from ASD in terms of heritability, and higher empathy is expected in high SPS individuals, whereas certain aspects of empathy and social processing are often impaired in many individuals with ASD. Nonetheless, it is conceivable that children with high SPS are misdiagnosed with ASD, for instance when they are exposed to negative environmental factors that precipitate social withdrawal. A crucial caveat is the extreme heterogeneity in symptom constellations and severity across the autistic spectrum. Many studies have addressed the relationship between "clinical" sensory symptoms, often referred to as sensory modulation, and symptom severity and ASD subtype. The construct of SPS opens up the interesting possibility of testing the contribution of normal sensitivity to ASD morbidity. It has been postulated that many different aetiologies converge on final common pathways leading to ASD. Differential sensitivity to the environment might be an interesting factor to add to this list and to explore via SPS-driven research. SPS may be important for informing personalised intervention. Intervention effects may be greater and more long-term in those higher on SPS, as highly sensitive individuals process or internalise stimuli more deeply, which may allow them to apply the acquired intervention strategies continuously. Research on the mechanisms underlying links between SPS and psychopathology, and on the responsiveness of high SPS individuals to intervention, will be important to help understand how interventions work and to develop new interventions derived from such mechanisms, with implications for more as well as less sensitive individuals. As SPS is both
genetically and environmentally determined, it may be possible to target sensitivity to environments in less sensitive individuals in order to facilitate treatment effectiveness, for example by therapy affecting the neurobiological substrate of SPS. As SPS appears to have consequences for predicting intervention success, measurement of SPS in clinical practice should be considered. In addition to developing and testing the effectiveness of interventions for individuals with high levels of SPS and psychological distress, there is also a need for prevention programmes for high SPS individuals to prevent them from shifting to atypical development and to help them flourish, and to examine the conditions leading to psychological flourishing and positive health in individuals high in SPS. A first step would be to educate individuals high on SPS about the trait, similar to psychoeducation programmes used in mental health settings. These individuals can then be followed longitudinally to study the expected beneficial effects of being educated about the trait, either in relation to a control high SPS group not educated about their trait, or compared to the period before being informed. We expect that being aware of being high on SPS is key, as it allows them to adopt appropriate self-care behaviours, such as avoiding overstimulating situations at times and taking enough time to themselves to process their recent experiences. Another important step is to educate parents and teachers of children with high SPS about the trait, and to examine the effects on school performance, well-being and psychosocial adjustment of being raised and supported by parents and teachers who understand the child's sensitivity. With this review we have provided a comprehensive overview of the current status of research on SPS, its knowledge gaps, and suggestions for future research. In Table 1 we have summarized the suggestions for future research aimed at further understanding SPS and improving the management of mental health and well-being. While research on SPS is still in its infancy and there is a need for greater methodological rigour in studies, there is now increasingly good evidence that SPS is distinct from other temperament and personality constructs. SPS allows measurement and mechanistic understanding of why some individuals are more sensitive to environmental influences than others. Since SPS is a basic individual characteristic that is also observed in animals, it has far-reaching implications. It provides the opportunity to explain individual differences in development in the context of environmental experiences, it may explain susceptibility to psychopathologies, and it may allow early detection of individuals at risk and early intervention to prevent aberrant behavioural development and to help high SPS individuals flourish in modern society. We could envision a role for SPS in the Research Domain Criteria that describe behavioural domains across brain disorders. Specifically, its evolutionary roots provide the premise to obtain mechanistic understanding of SPS across species, and thereby to work towards its clinical implementation. | Sensory Processing Sensitivity (SPS) is a common, heritable and evolutionarily conserved trait describing inter-individual differences in sensitivity to both negative and positive environments. Despite societal interest in SPS, scientific knowledge is lagging behind.
Here, we critically discuss how SPS relates to other theories, how to measure SPS, whether SPS is a continuous vs categorical trait, its relation to other temperament and personality traits, the underlying aetiology and neurobiological mechanisms, and relations to both typical and atypical development, including mental and sensory disorders. Drawing on the diverse expertise of the authors, we set an agenda for future research to stimulate the field. We conclude that SPS increases risk for stress-related problems in response to negative environments, but also provides greater benefit from positive and supportive experiences. The field requires more reliable and objective assessment of SPS, and deeper understanding of its mechanisms to differentiate it from other traits. Future research needs to target prevention of adverse effects associated with SPS, and exploitation of its positive potential to improve well-being and mental health. |
31,438 | Similarities and differences in child development from birth to age 3 years by sex and across four countries: a cross-sectional, observational study | Research from various fields of science has established the importance of early childhood development on health and productivity across the lifespan.1,Nevertheless, 43% of children younger than 5 years in low-income and middle-income countries are estimated to be at risk of not reaching their full developmental potential.2,Such estimates have been used to calculate loss of adult productivity and increased health expenditures in LMICs.3,These estimates, however, are indirect measures, based on the proportions of children with stunting and those living in poverty."To guide early childhood development policies, research is underway to substantiate these estimates by creating population indicators of early childhood development based on assessment of children's development.4",Two other pressing needs require methods of assessing child development across LMICs.The first is for health-care systems to be able to assess the development of individual children and identify the need for interventions.5,The second is for research tools to be able to measure the effect of interventions on child development.6,All measurements of early childhood development, whether they are population-based indicators, individual assessments, or research tools, must incorporate information on early developmental milestones.To guide the development of universally applicable tools, it is first necessary to establish when healthy children attain milestones, and which milestones are similarly attained across sexes and countries.Whether child development is similar across sexes and populations is a question that is of fundamental importance for understanding and promoting human development.4–7,One of the UN Sustainable Development Goal indicators is “the proportion of children under 5 years of age who are developmentally on track in health, learning and psychosocial well-being, by sex”.8,The UNICEF Multiple Indicator Cluster Survey7 has incorporated questions to assess the development of children aged 3–5 years at the population level.What is considered developmentally on track in the first 3 years and whether it can be measured universally, however, remains unclear.Previous research9–11 has shown that children attain developmental milestones at substantially different ages across sexes and cultures."This conclusion is derived from studies that had a number of methodological problems, one of the most important being that little attention was given to children's health and the extent to which this might affect their development. 
"The largest international study9 on the attainment of developmental milestones in children, led by WHO, included sites where children's health-related risks were likely to be different.This study also applied different assessment tools across the populations.Other studies have examined narrow age ranges or few domains of development, and have based their conclusions on statistical significance, but not necessarily clinical significance.10,11,On the basis of the conclusion that child development is different across countries, many countries have had to devote substantial resources to the re-standardisation of instruments for the measurement of child development or have been left without methods to assess children.5,12,The WHO Motor Development Study13 examined for the first time a healthy sample of children and found that ages at which children achieve six gross motor milestones were similar across sexes and five diverse countries.Whether the ages of attainment of milestones in other developmental domains vary in healthy children across different countries has not been established.5,Evidence before this study,The absence of information about when healthy children attain developmental milestones and which milestones are attained similarly across sexes and countries that are culturally different remains an important barrier to addressing developmental difficulties and supporting early childhood development within health-care systems in low-income and middle-income countries.The Guide for Monitoring Child Development has been identified as an instrument that meets both psychometric properties and feasibility criteria in LMICs for monitoring child development in seven domains: expressive language, receptive language, fine motor, gross motor, relating, play, and self-help.We did a background search which we began on Jan 11, 2007, for the original manuscript describing the GMCD and the WHO book Developmental Difficulties in Early Childhood: Prevention, Early Identification, Assessment and Intervention in Low and Middle-Income Countries.The search has been updated to July 31, 2017.We searched PubMed, PsychINFO, and Google from inception to July 31, 2017, for original research articles and systematic reviews pertaining to assessment of early childhood development in LMICs.The search terms we used included: “development”, “screening”, “monitoring”, “surveillance”, ”instruments”, “milestone”, “early intervention”, “disability”, “delay”, “disorder”, “risk”, “psychosocial”, “anemia”, “nutrition”, “prematurity”, “chronic illness”, “low birth weight”, “depression”, “poverty”, “gender”, “country”, “low and middle-income”, and “high-income”.Previous research on sex and country differences for early childhood development is inconclusive as a result of several methodological issues.The largest previous study led by WHO, done in the 1990s, concluded that child development could not be compared across countries.More recently, the WHO Motor Development Study for the first time used a healthy sample to assess six motor milestones in five countries and in 2006 concluded that these milestones were attained at similar ages across sexes and countries.However, no study has used a sample of healthy children to examine the ages of attainment of milestones in multiple domains across different country samples.Added value of this study,We enrolled a large sample of healthy children in four countries with different demographic, cultural, and linguistic characteristics, and showed that most developmental milestones in 
early childhood are attained at similar ages.Across countries, the age of attainment of milestones was most similar in the play domain, whereas the largest differences in age of attainment were found in the self-help domain.To the best of our knowledge, our study is the first to provide the ages of attainment of more than 100 milestones in multiple domains in healthy children from four different countries, and to examine the differences between sexes and countries using predefined criteria and regions of practical equivalence.Implications of all the available evidence, "Our study provides information about the age of attainment of early developmental milestones and about the specific milestones that are attained at similar ages across sexes and countries, which fulfils an essential need in addressing children's health and development in LMICs.Further development of assessment tools that incorporate these milestones could potentially enhance services and development of policies and contribute to intervention research that benefits the development of children internationally.The aim of our study was to ascertain when healthy children of both sexes and in four countries that are geographically, culturally, and linguistically different attain key developmental milestones, to establish which milestones are attained at similar ages across sexes and countries, and to identify those milestones for which important differences exist.We did this cross-sectional, observational study in Argentina, India, South Africa, and Turkey.We recruited children between March 3, 2015, and May 18, 2015, from 22 health clinics and identified a subsample of healthy children for generating milestone curves to examine similarities and differences among sexes and countries in ages of attainment of developmental milestones.The study was done in clinics providing routine health care, which, in Argentina, South Africa, and Turkey, were in the Ministry of Health community health centres in the greater urban and peri-urban regions of Rosario, Pretoria, and Ankara."In Mumbai, in addition to similar clinics, children were also recruited at private physicians' offices to ensure that an adequate number of healthy children were recruited.We aimed to recruit children at typical health-care clinics, but did not aim for country-representative sites.The sites are referred to by country name because multiple sites were included within each country.The study was approved by the institutional review board at each site, and by Yale University School of Medicine Human Investigation Committee )."The research assistants obtained written informed consent from the children's caregivers.Children aged 0–42 months and their caregivers who were seen for either routine care or minor illnesses were recruited.Children aged between 37 and 42 months were included to ensure that we correctly identified a median age of attainment for children nearing 36 months of age.A subsample of healthy children was identified by excluding any children who fulfilled one or more of the following criteria: birthweight less than 2500 g; perinatal complications requiring neonatal interventions, prolonged hospital stay or readmission; undernutrition,14 or history of undernutrition; history of chronic health or developmental problems according to medical records or physical examination at the time of the study; or history of anaemia or haemoglobin concentration of less than 105 g/L at the time of the assessment.Children with missing health data were also excluded.Little 
information is available about the sample size needed to assess differences and similarities between ages of milestone attainment.In each country, we aimed to recruit 50 healthy children per 1, 2, and 3 monthly intervals in the 0–6, 7–12, and 13–42 month age groups, respectively.We used the developmental monitoring component of the Guide for Monitoring Child Development15 to assess the ages of attainment of milestones.The GMCD is an open-ended, pre-coded interview with the caregiver, which assesses child development in the domains of expressive and receptive language, fine and gross motor functioning, relating, play, and self-help.The theoretical construct and components of the GMCD have been described previously.15–18, "The development of the GMCD over 10 years in Turkey involved developing the open-ended questions and probes, determining milestones that caregivers provided as responses to the questions, examining face validity to assure that the milestones were robust indicators of children's functioning by selecting milestones that existed in other well used instruments and consulting with international experts, and doing reliability, standardisation, validity, and feasibility studies.The GMCD has received demand internationally, and service providers from over 20 countries have been trained in its use.18,The GMCD was considered appropriate for use in this study because the open-ended interview technique avoids common problems associated with testing children, such as children not complying with tests in unfamiliar circumstances, and questionnaires that pose closed questions, which might result in socially desired answers.Furthermore, the GMCD has been identified in a comprehensive review5 as one of three developmental screening instruments that meet psychometric and feasibility criteria appropriate for LMICs.All 125 original GMCD milestones were used, including 89 that had been standardised and validated for children aged 2 years and younger in Turkey, and 36 milestones for older children that had been piloted and assessed for face validity.We complied with guidelines for translation and adaptation of instruments.19,20,In Rosario and Ankara, the GMCD was applied using the predominant languages of Spanish and Turkish; in Pretoria using isiZulu, sePedi, seTswana, and English; and in Mumbai using Marathi, Gujrathi, Hindi, and English.The original Turkish GMCD was translated to English, checked for quality by two experienced translators, and independently back-translated to Turkish.The remaining translations were done from the English version and back-translated to English.All research staff were fluent in English in addition to their native languages.One author, IOE—developer of the GMCD—trained the research staff on how to use the GMCD in English."To ensure high inter-rater reliability, each research assistant's scoring on English speaking cases was compared with IOE's scoring after training.Agreement with IOE on at least 95% of all scored milestones for ten consecutive GMCD interviews was required of each research assistant.Inter-rater reliability thereafter was checked and corrected monthly in the native languages by the co-investigators who had high inter-rater reliability with IOE and quarterly by IOE through the review of videotapes of interviews that were in English, Spanish, or Turkish.For quality assurance, each research assistant was observed doing two interviews each month.To ensure that caregivers could comprehend the questions and respond, the translated GMCD questions and 
milestones were piloted in samples of 100 children with different languages and age ranges at each site. Subsequently, we omitted from the study nine milestones that caregivers did not report spontaneously and about which, when probed, they stated that they were unclear whether the child had attained the milestone. An international advisory committee comprising experts in child development and representatives from WHO and UNICEF provided feedback on the appropriateness of the milestones and interpretation of the results. The research assistants interviewed caregivers using the GMCD, and obtained data on household sociodemographics. Anonymity was maintained by excluding identifying information from all data. Anthropometry was done using standard methods14 and haemoglobin was measured using HemoCue.21 The clinicians examining the children completed a checklist with information on the child's health status on the basis of their clinical assessments, health records when available, and physical examination results. To estimate the distribution of ages of attainment across sexes and countries, we calculated the ages of the children in months by dividing their age in days by 30. The data consisted of binary measurements and thus logistic and probit regression models—suitable for binary outcomes—were used to provide estimates of the cumulative distribution of the age of attainment for each milestone. Selection between logit and probit models was based on the lowest deviance information criterion for each milestone.22 To assess the ages of attainment of milestones across sexes and countries, and to allow comparison between the results of our study and previous studies,9 we used the 50th percentile age of attainment of milestones. Bayesian point estimates and corresponding 95% credible intervals (CrIs) were generated for the 50th percentile ages of attainment for girls, boys, each country, and the total sample. The CrIs were generated from the 2·5th and 97·5th percentiles of the posterior distribution of the median age at milestone attainment. In Bayesian inference, the probability of attaining a milestone by a particular age was estimated by modelling the logit or probit of the outcome of interest after the data were collected, and by incorporating non-informative (neutral) prior information on the contribution of each predictor to the logit or probit. We used the Markov Chain Monte Carlo package—which contains the MCMClogit function—to output the posterior distribution of the children's age variable corresponding to the 50th percentile of each milestone. No standard definition is available for calculating significant differences when comparing ages of attainment of milestones. Within our large sample, small differences were likely to be statistically significant. We therefore applied criteria to assess the clinical significance of the magnitude of the difference by defining a region of practical equivalence.23 Milestones were considered to be attained at equivalent ages if the absolute difference was 1·5 months or less, 2·5 months or less, 3·5 months or less, and 4·5 months or less, for milestones with 50th percentile point estimates between ages 0 and 6, 7 and 12, 13 and 24 months, and 25 and 36 months, respectively, and if the observed 95% CrIs of the differences were within the region of practical equivalence (a schematic sketch of this equivalence check follows this record). Milestones with 50th percentile point estimates of more than 36 months were omitted. We did statistical analysis using R statistical software and the MCMCpack and BEST statistical packages. The funder had no role in study
design, data collection, data analysis, data interpretation, or writing of the report.The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.Of the 10 246 children recruited, 281 had missing health data and were excluded.From the remaining sample, 5016 children were excluded.Of the 9965 children with health data, the most frequent reasons for exclusions were anaemia and undernutrition.The final sample of healthy children included 4949 of the 10 246 children recruited.Of the 4949 children included in the final sample, 1417 were enrolled in Turkey, 1415 in Argentina, 1215 in India, and 902 in South Africa.In all four countries, fewer girls were recruited than boys, and girls were even less well represented in the healthy sample of 4949 children.Girls accounted for 630 of 1417 included children in Turkey, 369 of 902 children in South Africa, 619 of 1415 children in Argentina, and 396 of 1215 children in India.Characteristics of the healthy sample are shown in table 1.Of the 116 milestones investigated, the 50th percentile point estimates were more than age 36 months for ten milestones and were therefore excluded from further analysis.The 50th percentile point estimates with 95% CrIs for the remaining 106 milestones are shown in table 2.Milestones with non-equivalent differences across sexes and countries are shown also in table 2.For example, for milestone 68 in the fine motor domain, non-equivalent differences were identified between the median age of attainment for boys and girls and between most countries.Across the sexes, most milestones were attained at a younger age by girls than boys, but the differences between sexes were small.Most milestones were equivalent across all four countries.When examined by domain, most milestones were equivalent across countries in the play, fine motor, gross motor, relating, expressive language, and receptive language domains.In the self-help domain, only two of the nine milestones were equivalent.11 of the 25 milestones that were not equivalent across countries involved exposure to tasks such as children taking care of themselves, climbing up and down stairs, and drawing; seven of these were in the self-help domain, two in gross motor domain, and two in the fine motor domain."Differences in two expressive language and all five receptive language milestones were associated with caregivers' understanding of children's speech or interpretation of what children understand such as verbs, commands, objects, prepositions, or stories.Additionally, four milestones associated with the production of words and sentences, and three associated with relating to people did not meet criteria for equivalence across the four countries.The distribution of ages of attainment for selected milestones are shown in figures 1 and 2, illustrating some of the similarities and differences across sexes and countries.We studied a large sample of healthy children in four countries with different cultural and linguistic characteristics to examine the development of children in the first 3 years of life.Our study provides information on developmental milestones that might be used across populations to assess development and also on those that require further investigation or elimination from international instruments.The aim of most research comparing early childhood development across populations has been to describe cultural and ethnic variations and their association with contextual 
differences.20,24,Most studies have included children from high-income countries, ethnic minorities, and small samples from LMICs.By contrast, our objective was to describe the variability in the ages of attainment of milestones and to establish whether enough similarities exist to guide the development of universal instruments, to avoid the costly restandardisation and revalidation of instruments.We therefore used definitions of equivalence to interpret our data rather than statistical significance alone.In a cross-sectional study9 with a similar goal led by WHO in the 1990s, approximately 28 000 children aged 0–6 years were tested in China, India, and Thailand.The prevalence of health risks in the samples was not described and different developmental instruments were applied across sites.Both factors might have accounted for differences in the median age of attainment of milestones across countries and within country urban and rural sites.Nevertheless, when comparing the study led by WHO9 and our study, the median ages of attainment for the milestone of saying one meaningful word in our sample and the samples from urban China and India in the WHO study are similar and for the milestone of saying two meaningful words.The WHO Motor Development Study13 assessed the ages of attainment of six gross motor milestones in healthy children in five countries.This study used both caregiver report and direct observations to establish when children attained milestones.Again, the median age of attainment for the milestones common to both studies in our study compared with the Motor Development Study are similar.Furthermore, to compare our data with data obtained in high-income countries, we examined the Denver II developmental screening test,25 an instrument developed in the USA.The median ages of attainment of our total sample were almost identical for milestones such as “uses six meaningful words”, “walks alone”, “kicks ball”, “reaches to objects”, and “holds pencil and scribbles”.These striking similarities provide further support for the universality of development across countries for some milestones, and also for the validity of the open-ended question technique used in our study.Our study advances the understanding of early childhood development by showing that many milestones in numerous domains are similarly attained across sexes and countries.We found that the attainment of almost all milestones is similar in the first year when environmental and cultural influences might have the smallest effect.The similarity of play across our country samples parallels earlier studies.26,The difference in ages of attainment for pretend play between girls and boys emerging in the third year of life might reflect cultural influences with regard to how boys and girls are expected to play.The ages of attainment of play milestones in healthy children across countries is of utmost importance to integrated interventions that include play and are being highly promoted in LMICs.3,27, "A large proportion of the differences in ages of attainment of milestones was associated with timing of children's exposure to experiences.For example, South African children could drink from a cup at a median age of 8 months compared with Argentinian children who reached this milestone at a median age of 16 months.In South Africa, where early independence is encouraged,28 children attained most self-help milestones at an earlier age than children in the other three countries, whereas in Argentina a more protective parenting style is 
generally adopted,29 which might explain later attainment of these milestones.Culture is not the only factor that determines experiences.South African and Argentinian children attained the milestone of climbing up and down stairs at an older age than Indian and Turkish children, which is probably because most children included in these samples were more likely to live in single-storey houses, whereas Indian and Turkish children were more likely to live in apartment buildings with stairs.The differences between countries in language milestones must be interpreted with caution.Receptive language is known to be difficult to assess because it is dependent on what caregivers expect and think children understand.30, "Consistent with these recognised difficulties in assessing the attainment of receptive language is the finding that most language milestones attained at different ages were associated with caregivers' understanding of children's speech and their interpretation of what children understand.More objectively, interpretable expressive language milestones such as the use of pronouns, the use of past tense, or the ability to recount a story or event were attained at nearly identical ages across countries, suggesting that overall language acquisition was similar.Milestones on acquisition of sentences might reflect differences in syntax.Furthermore, considerable differences were found across countries in maternal and paternal education."Whether differences in language milestones reflect true differences in children, cultural and ethnic differences in caregivers' interpretations of what children convey or understand, caregivers' use of language with young children, or the effect of psychosocial factors requires further study.Our study has important strengths.First, the cross-sectional design avoids potential biases of repeated questioning and retention of compliant families.Second, the countries included are from diverse geographical areas of the world with ethnic, cultural, and language differences.Third, the sample of almost 5000 children is one of the largest to date, providing information on multiple domains of development of healthy children younger than 3 years.Fourth, our criteria for a healthy sample were more stringent than criteria used in previous research.31–33,We excluded children with health conditions associated with potential adverse developmental outcomes.34, "The fact that half of the recruited sample was excluded supports the high prevalence of health problems in LMICs that has been reported previously,2 which has been shown to adversely affect children's development.More girls than boys were excluded from our study because of health problems, which might support evidence for sex-associated health disparities.Further research using such milestones that are attained similarly in healthy children will enable the development of common methods to examine the effect of health-associated risk factors on child development, and comparisons of child development between populations with differences in the prevalence of such risk factors.Our study has important limitations.We did not include a large number of LMICs, particularly those with lower incomes.We chose four countries that were culturally distinct and had collaborating teams with the capacity to do rigorous research and to provide services for children identified with risk factors.Another limitation is that the sample did not include rural sites.Thus, the applicability of our results to rural populations needs to be 
established.The small sample size—particularly the small number of older children enrolled in South Africa—is a limitation that is reflected in the larger confidence intervals in South Africa for some of the milestones attained at an older age, and might require repetition in larger samples.The number of children who were excluded because of health problems was more than we expected in all countries, but particularly in South Africa, where we could not change our recruitment strategy as we did in India, because the sociodemographic characteristics of children attending private paediatric clinics would have been substantially different.We recruited children from health clinics and not from homes to enable application of health criteria.This approach might decrease generalisability because our sample might have included more children with health problems using the clinics than children with health problems in the general population, or an increased number of healthy children that access primary care.Bias in either direction should not affect the results of the healthy sample.Direct measurements of undernutrition and anaemia, detailed questioning of caregivers about birthweight, perinatal and chronic illness, and a health checklist provided by clinicians were the most rigorous health criteria we could apply.Nevertheless, we might have erroneously included some children with unknown health conditions.We did not exclude children with psychosocial risk factors such as poverty, a low level of caregiver education, or depression.34,Further research is required to define the effects of psychosocial risk factors on the ages of attainment of developmental milestones.Our study has identified the median age at which healthy children of both sexes and from four countries attain milestones in multiple developmental domains."These findings might contribute to the construction of internationally applicable tools to assess children's development to guide policy, service delivery, and intervention research that might help narrow the gap between high-income countries and LMICs in addressing early childhood development. | Background: Knowledge about typical development is of fundamental importance for understanding and promoting child health and development. We aimed to ascertain when healthy children in four culturally and linguistically different countries attain developmental milestones and to identify similarities and differences across sexes and countries. Methods: In this cross-sectional, observational study, we recruited children aged 0–42 months and their caregivers between March 3, 2011, and May 18, 2015, at 22 health clinics in Argentina, India, South Africa, and Turkey. We obtained a healthy subsample, which excluded children with a low birthweight, perinatal complications, chronic illness, undernutrition, or anaemia, and children with missing health data. Using the Guide for Monitoring Child Development, caregivers described their child's development in seven domains: expressive and receptive language, gross and fine motor, play, relating, and self-help. Clinicians examining the children also completed a checklist about the child's health status. We used logit and probit regression models based on the lowest deviance information criterion to generate Bayesian point estimates and 95% credible intervals for the 50th percentile ages of attainment of 106 milestones. We assessed the significance of differences between sexes and countries using predefined criteria and regions of practical equivalence. 
Findings: Of 10 246 children recruited, 4949 children (48.3%) were included in the healthy subsample. For the 106 milestones assessed, the median age of attainment was equivalent for 102 (96%) milestones across sexes and 81 (76%) milestones across the four countries. Across countries, median ages of attainment were equivalent for all play milestones, 20 (77%) of 26 expressive language milestones, ten (67%) of 15 receptive language milestones, nine (82%) of 11 fine motor milestones, 14 (88%) of 16 gross motor milestones, and eight (73%) of 11 relating milestones. However, across the four countries the median age of attainment was equivalent for only two (22%) of nine milestones in the self-help domain. Interpretation: The ages of attainment of developmental milestones in healthy children, and the similarities and differences across sexes and country samples might aid the development of international tools to guide policy, service delivery, and intervention research, particularly in low-income and middle-income countries. Funding: Eunice Kennedy Shriver National Institute of Child Health and Human Development. |
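The child development record above estimates the 50th percentile age of milestone attainment from logit or probit models and then judges sex and country differences against a region of practical equivalence (ROPE). The Python sketch below illustrates only the logic of that equivalence check; the study itself used R with the MCMCpack and BEST packages, so the function names here are hypothetical and the posterior draws are simulated stand-ins for real MCMC output.

```python
import numpy as np

# For a logistic model P(attained by age a) = 1 / (1 + exp(-(b0 + b1 * a))),
# the 50th percentile (median) age of attainment is the age where the logit
# equals zero, i.e. a50 = -b0 / b1; posterior draws of (b0, b1) therefore
# translate directly into posterior draws of a50.

def rope_margin(point_estimate_months):
    """Equivalence margin (months) for the age band of the milestone's
    50th percentile point estimate, following the criteria in the text."""
    for upper, margin in ((6, 1.5), (12, 2.5), (24, 3.5), (36, 4.5)):
        if point_estimate_months <= upper:
            return margin
    return None  # milestones attained after 36 months were omitted

def equivalent(draws_a, draws_b, point_estimate_months):
    """Two groups are 'equivalent' if the 95% credible interval of the
    difference in median age lies entirely within the ROPE."""
    lo, hi = np.percentile(draws_a - draws_b, [2.5, 97.5])
    margin = rope_margin(point_estimate_months)
    return margin is not None and -margin <= lo <= hi <= margin

# Illustrative posterior draws (months) for one milestone in two groups,
# standing in for real MCMC output.
rng = np.random.default_rng(0)
girls, boys = rng.normal(14.2, 0.4, 10_000), rng.normal(14.9, 0.4, 10_000)
print(equivalent(girls, boys, 14.5))  # True: the CrI of the difference sits inside +/- 3.5 months
```

In the study, equivalence additionally required the absolute difference in point estimates to fall within the same margin; because a point estimate lies inside its credible interval, the interval-based check shown here is the stricter of the two conditions.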
31,439 | Empirically derived criteria cast doubt on the clinical significance of antidepressant-placebo differences | Over the last few decades, antidepressants have become some of the most widely used and profitable drugs in history. Rates of prescribing have risen throughout the developed world, leading to debates about the inappropriate medicalization of misery. The more fundamental question, however, is whether antidepressants achieve worthwhile effects in depression in general. Guidelines have attempted to consider the issue of the clinical relevance of antidepressant effects, but have not constructed empirically validated criteria. The commonly used method of estimating the 'response' to drug treatment in clinical trials of antidepressants involves the categorisation of continuous data from symptom scales, and therefore does not provide an independent arbiter of clinical significance. Moreover, this method can exaggerate small differences between interventions such as antidepressants and placebo, and statisticians note that it can distort data and should be avoided. Response rates in double-blind antidepressant trials are typically about 50% in the drug groups and 35% in the placebo groups. This 15% difference is often defended as clinically significant on the grounds that 15% of depressed people who get better on antidepressants would not have gotten better on placebo. However, a 50% reduction in symptoms is close to the mean and median of drug improvement rates in placebo-controlled antidepressant trials and thus near the apex of the distribution curve. Thus, with an SD of 8 in change scores, a 15% difference in response rates is about what one would expect from a mean 3-point difference in HAM-D scores (a numerical check of this correspondence follows this record). Lack of response does not mean that the patient has not improved; it means that the improvement has been less, by as little as one point, than the arbitrary criterion chosen for defining a therapeutic response. The small differences detected between antidepressants and placebo may represent drug-induced mental alterations or amplified placebo effects rather than specific 'antidepressant' effects. At a minimum, therefore, it is important to ascertain whether differences correlate with clinically detectable and meaningful levels of improvement. The CGI has been criticised for not reflecting the patient's perspective, and other data such as functioning and quality of life measures are also required to fully assess the value of antidepressant treatment. Cuijpers et al. have proposed a different method of establishing a 'minimal important difference' (MID) based on 'utility' measures derived from quality of life scales. However, the study from which the MID was estimated did not include samples of depressed individuals, and the values obtained were found to be unstable. As a result, the authors were only able to provide a "very rough estimate of the cutoff for clinical relevance". Use of a patient-rated version of the CGI might allow for a more reliable and valid complement to the clinician-rated data used here to assess the clinical relevance of HAM-D scores. In its absence, CGI improvement scores provide the first empirically validated method for establishing the clinical relevance of antidepressant effects. Based on the Leucht et al.
data , empirically derived criteria for minimal clinically relevant drug-placebo differences would be, a 7-point difference in HAM-D change scores, and a drug-placebo effect size of 0.875.Currently, drug effects associated with antidepressants fall far short of these criteria.This leaves the problem of how to treat depressed patients, given data indicating little if any difference in clinically relevant effects between one treatment and another .Patients and healthcare funders need to be aware that all treatments, including placebo, produce at least a minimal average response to treatment on symptom scales, while none outperforms a pill placebo to a meaningful degree.We suggest that decisions about treatment should involve the balancing of criteria including patient preference, safety, and cost.Given the choice, most depressed patients prefer psychotherapy over medication , and with respect to safety, antidepressant medication would be the last choice between empirically assessed treatment alternatives .Until recently there have been no empirically validated criteria for establishing the clinical significance of change scores on scales measuring psychiatric symptoms.In the 2004 National Institute of Health and Clinical Excellence guidelines on treating depression, it was suggested that differences of three points on the HAM-D and standardized mean differences of 0.50 might be clinically significant , but no evidence was cited to support these proposed cut-offs, and they were criticised as arbitrary .The specification of criteria for clinical relevance was removed from the later edition of the Guidance published in 2009, but effects continued to be classified according to their ‘clinical importance,’ apparently using the same criteria proposed in the 2004 Guidance .For example, based on a standardized mean difference of 0.34, the 2009 updated NICE guidance concluded that the difference between SSRIs and placebos is “unlikely to be of clinical importance”.Subsequently, an empirical method of establishing the clinical relevance of change scores has been reported in a number of studies .The method links scores on various scales used in psychiatric outcome trials to scores on the commonly used Clinical Global Impressions-Improvement scale, a scale which rates improvement on a scale of 1 through 4 to 7 .The CGI-I is said to be ‘intuitively understood by clinicians’ and has good inter-rater reliability, between 0.65 and 0.92 .It has been judged to be a useful measure in clinical trials and shown to have concurrent validity with other measures, including CGI severity ratings .Spearman correlations ranging between .70 and .80 have been reported between CGI-I and HAM-D .Thus, this method allows one to align the degree of change on a symptom scale to clinician perception of improvement, and provides a means of establishing an empirically derived criterion for clinical significance.The method has been applied to scales measuring symptoms of schizophrenia , and more recently to depression scales, specifically the HAM-D.We suggest that a CGI-I rating of 3, which indicates that the patient has “minimally improved” provides the most liberal criterion possible, as the next step on the scale is “no change.,Leucht et al. 
used the raw data on the antidepressant mirtazapine gathered from 43 trials in more than 7000 people diagnosed with ‘major depressive disorder’.The data were derived from placebo-controlled, comparative and open label trials that had been sponsored by the drug company, Organon.The linking analysis of absolute change in Hamilton scores to CGI-improvement scores at four time points is presented in Fig. 1.Leucht and colleagues described these data as follows: ‘The results were consistent for all assessment points examined.A CGI-I score of 4 corresponds with a slight reduction on the HAM-D-17 of up to 3 points’."In other words, clinicians could not detect a difference of 3 points on the Hamilton when asked to rate a patient's overall improvement.Examination of the figure reveals that a CGI-I score of 3 corresponded to changes in Hamilton score of around 7 points after two to four weeks of treatment.To attain a CGI score of 2, required a change in Hamilton score of 14 points at the four week assessment.To date, this method has been used to establish the clinical relevance of pre–post treatment differences.We propose that it can also serve as an empirically validated method of evaluating the clinical significance of drug-placebo differences, since these are also frequently calibrated in terms of differences on the Hamilton scale."Applying this to placebo-controlled antidepressant trials, Leucht et al.'s data reveal that the 3-point difference in HAM-D scores proposed by NICE is overly lenient.It results in classifying a difference that cannot be detected by clinicians as clinically important.These data suggest that a difference of 7-points on the HAM-D might be a more reasonable cut-off, as it corresponds to a clinician rating of minimal improvement.Leucht and colleagues also reported that the correspondence of HAM-D change scores to clinical ratings varied somewhat as a function of baseline severity.For less severely depressed patients, a clinician rating of minimal improvement corresponded to a 6-point HAM-D difference, whereas for very severely depressed patients, it corresponded to an 8-point change.One problem with the cut-offs proposed by NICE is that a 3 point difference in HAM-D change scores does not correspond well to the effect size of d = 0.50 that was proposed to indicate clinical significance.The pooled SD of change scores in the Kirsch et al. meta-analysis was 8.0 .However, that meta-analysis did not include the medication assessed in the Leucht et al. analysis.More important, it did not include comparator studies without placebo arms, which were included in the Leucht et al. 
paper.Thus, it seemed important to assess the reliability of our SD estimate using other data.A meta-analysis of 5 placebo-controlled mirtazapine trials yielded change score SDs of 7.7 for mirtazapine and 8.3 for placebo .Reported in the same paper, a meta-analysis of 5 trials comparing mirtazapine to amitriptyline yielded SDs of 7.9 and 7.8, respectively.A later comparator trial reported SDs of 7.5 for mirtazapine and 7.7 for paroxetine.These data reveal substantial consistency in the variance of HAM-D change scores across different trial designs, antidepressants, and placebos.Using an SD of 8.0, the effect size corresponding to a difference score of 7-points is 0.875.For very severely depressed patients, the effect size corresponding to a minimal difference would be 1.00, and for less severely depressed patients it would be a 0.75.These are the effect sizes that are required to indicate a ‘minimal’ difference as rated by clinicians.They are more than twice the magnitude of the effect sizes derived from meta-analyses, including those examining separately people with the most severe levels of depression .Conventionally, an effect size of 0.50 is considered ‘medium’ and 0.80 is considered ‘large.’,However, Cohen proposed these cut-offs with “invitations not to employ them if possible.The values chosen had no more reliable a basis than my own intuition” .The data considered here suggest that with respect to changes on the HAM-D, effect sizes as large as 1.00 may be required to indicate ‘minimal’ differences as rated by clinicians. | Meta-analyses indicate that antidepressants are superior to placebos in statistical terms, but the clinical relevance of the differences has not been established. Previous suggestions of clinically relevant effect sizes have not been supported by empirical evidence. In the current paper we apply an empirical method that consists of comparing scores obtained on the Hamilton rating scale for depression (HAM-D) and scores from the Clinical Global Impressions-Improvement (CGI-I) scale. This method reveals that a HAM-D difference of 3 points is undetectable by clinicians using the CGI-I scale. A difference of 7 points on the HAM-D, or an effect size of 0.875, is required to correspond to a rating of 'minimal improvement' on the CGI-I. By these criteria differences between antidepressants and placebo in randomised controlled trials, including trials conducted with people diagnosed with very severe depression, are not detectable by clinicians and fall far short of levels consistent with clinically observable minimal levels of improvement. Clinical significance should be considered alongside statistical significance when making decisions about the approval and use of medications like antidepressants. |
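Two pieces of arithmetic in the record above are easy to verify: that, with an SD of about 8 in HAM-D change scores, a mean 3-point drug-placebo difference reproduces roughly the familiar 50% versus 35% 'response' rates, and that the proposed 6-, 7- and 8-point margins correspond to standardised effect sizes of 0.75, 0.875 and 1.00. A minimal Python check, assuming normally distributed change scores (an assumption of this illustration, not a claim made in the record):

```python
from scipy.stats import norm

SD = 8.0                 # pooled SD of HAM-D change scores cited in the record
placebo_response = 0.35  # typical placebo 'response' rate

# 1) Shift the change-score distribution by d = 3/8 SD and see how much the
#    proportion crossing a fixed response threshold changes.
threshold = norm.ppf(1 - placebo_response)         # response cut-off in placebo SD units
drug_response = 1 - norm.cdf(threshold - 3.0 / SD)
print(round(drug_response - placebo_response, 3))  # ~0.15, the familiar 15% gap

# 2) Standardised effect sizes for the empirically derived HAM-D margins.
for points in (6, 7, 8):
    print(f"{points}-point difference -> d = {points / SD:.3f}")  # 0.750, 0.875, 1.000
```

Because the response threshold sits near the apex of the change-score distribution, even a small mean shift moves a comparatively large proportion of patients across it, which is exactly the point the authors make about dichotomised outcomes.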
31,440 | Datasets of mung bean proteins and metabolites from four different cultivars | The data here represent different omics approaches to understanding the mung bean metabolic pathways and the compound composition of seed coat and flesh. The dataset is associated with the research article in BBA Proteins and Proteomes entitled "Proteomics and metabolomics-driven pathway reconstruction of mung bean for nutraceutical evaluation" and contains eight lists of proteins and two lists of metabolites obtained from four cultivars originating from different habitats. Mung beans (Vigna radiata (L.) R. Wilczek) from different habitats in Asian countries were purchased from local supermarkets in Tokyo and Yokohama, Japan. These cultivars were referred to as China 1, China 2, Thailand, and Myanmar, respectively, according to their habitats. For protein and metabolite extraction, mung bean seeds were soaked in milliQ water to separate coat and flesh. The coat and flesh were ground to powder in liquid nitrogen using a mortar and pestle and transferred to an acetone solution containing 10% trichloroacetic acid and 0.07% 2-mercaptoethanol. The proteins were extracted as described previously. After enrichment with methanol and chloroform to remove any detergent, proteins were digested into tryptic peptides with trypsin and lysyl endopeptidase. Peptide identification was performed by nanoliquid chromatography MS/MS with a nanospray LTQ Orbitrap mass spectrometer coupled to an Ultimate 3000 nanoLC system. Full-scan mass spectra were acquired in the mass spectrometer over 400–1500 m/z with a resolution of 30,000 in a data-dependent mode as previously described. Proteins were identified with the Mascot search engine against a soybean peptide database constructed from Phytozome. The acquired raw data files were processed by Proteome Discoverer software. Only peptides with a Percolator ion score of more than 13 were used for analysis. Proteins with a single matched peptide were also taken into consideration. Protein abundance in mol% was calculated based on the emPAI value (a schematic calculation follows this record). Appendix file A contains the datasets with identified proteins from seed coat and flesh of the four different cultivars. In total, 449 and 480 proteins were identified in seed coat and flesh, respectively, as described previously. Metabolites were extracted from dried sample powder with 0.4 mL extraction liquid using a ball mill, as described previously. Gas chromatography time-of-flight-MS (GC-TOF-MS) analysis was performed as described previously, using an Agilent 7890 GC system coupled with a Pegasus HT TOF-MS. Peak analysis was performed with Chroma TOF 4.3X software and the LECO-Fiehn Rtx5 database. The similarity value obtained from the LECO/Fiehn Metabolomics Library was used to evaluate whether a compound identification is reliable. If the similarity is less than 200, the compound is defined as an "analyte". A compound with a similarity between 200 and 700 is considered a putative annotation. Appendix file B contains the identified metabolic compounds from seed coat and flesh of the four different cultivars. Proteins were categorized by function using MapMan bin codes. For pathway mapping, identifiers in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database were retrieved from the MapMan system or by KEGG COMPOUND search. | Plants produce a wide array of nutrients that exert synergistic interactions among whole combinations of nutrients. Therefore, comprehensive nutrient profiling is required to evaluate their nutritional/nutraceutical value and health-promoting effects.
In order to obtain such datasets for mung bean, which is known as a medicinal plant with heat alleviating effect, proteomic and metabolomic analyses were performed using four cultivars from China, Thailand, and Myanmar. In total, 449 proteins and 210 metabolic compounds were identified in seed coat; whereas 480 proteins and 217 metabolic compounds were detected in seed flesh, establishing the first comprehensive dataset of mung bean for nutraceutical evaluation. |
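The mung bean record states that protein abundance in mol% was calculated from emPAI values. A minimal sketch of that conversion is shown below, using the standard emPAI definition (10 raised to the ratio of observed to observable peptides, minus 1); the peptide counts are hypothetical and chosen only for illustration, not taken from the dataset.

```python
def empai(n_observed, n_observable):
    # emPAI = 10**(observed peptides / observable peptides) - 1
    return 10 ** (n_observed / n_observable) - 1

def mol_percent(empai_values):
    # protein content (mol%) = emPAI / sum(emPAI) * 100
    total = sum(empai_values)
    return [100 * v / total for v in empai_values]

# Hypothetical counts of identified vs theoretically observable peptides
values = [empai(4, 10), empai(2, 8), empai(7, 12)]
print([round(p, 1) for p in mol_percent(values)])  # relative shares summing to 100%
```

Normalising each protein's emPAI by the summed emPAI of the run converts the spectral evidence into relative molar shares that sum to 100%.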
31,441 | A novel role for NUPR1 in the keratinocyte stress response to UV oxidized phospholipids | The human skin is the organ most exposed to environmental oxidative assaults that cause cell damage, promote aging and result in pathologies.The dominant extrinsic oxidizing factor is ultraviolet A light which can penetrate deeply into the skin and modifies nucleic acids, proteins and lipids .The UVA induced DNA damage is mutagenic and promotes photoaging , the premature aging phenotype of excessively sun exposed skin .Further, UVA causes oxidative modifications of proteins , rendering them dysfunctional and impairing their degradation .Oxidized protein accumulates in photoaged skin and promotes precancerous actinic elastosis which is together with UV-induced constitutive matrix proteolysis a significant risk factor for keratinocyte- derived cancers of the skin .Phospholipids containing unsaturated fatty acid moieties which are present in all cellular membranes are prone to oxidation and yield a wide array of UVA oxidation products .Reactive oxidized lipid species modify DNA and proteins such as histones thereby affecting cell signaling and epigenetics .Bi-reactive lipid oxidation products like bis-aldehydes crosslink macromolecules which can be detected in photo-aged skin .Signaling molecules like receptors are targets of lipid modification , contributing to the increasingly recognized effects of lipids on cellular signaling.Additionally to the chemically reactive lipids, potent lipid signaling molecules are formed by UV through enzymes or non-enzymatically .Non-enzymatically oxidized 1-palmitoyl-2-arachidonoyl-sn-glycero-3-phosphocholine is an established model substance that contains bioactive lipids found in the circulation within oxidized low density lipoprotein but also in the skin .Oxidation of PAPC yields phospholipid hydroperoxides, -hydroxides, isoprostanoids, endoperoxides, cyclopentenones, carbonyls and lysophosphatidylcholines, all identified by mass spectrometric methods .The individual lipids in these classes differ in their structure and chemical reactivity and, consequently in their biological activity.Through agonism or antagonism of pattern recognition receptors, specific lipid classes can elicit quite distinct modulation of innate inflammation .In keratinocytes, phospholipid UV- oxidation products exert local immunosuppression as agonists of the platelet activating factor receptor .As activators of nuclear factor erythroid 2 like 2, specific OxPL can exert additional immunomodulation , and we have found UVA oxidized PL to be formed in cutaneous cells and to act via NRF2 .Further, OxPL initiate autophagy in KC, and genetic deletion of autophagy led to accumulation of crosslinked protein and of oxidized phospholipids .Thus, to understand the contribution of UV-generated bioactive lipids to the impact of UV light on the skin, we need to identify the lipid species, their activity as signaling molecules and chemical modifiers, and how the cells further process the lipids or their adducts.In this study we investigated generation of OxPL in primary human epidermal keratinocytes through UVA, and their contribution to UV -regulation of the transcriptome and proteome of keratinocytes.Of the regulated OxPL we could assign twenty, and propose structures for five novel UV-regulated species.Investigating the transcriptome and the proteome of UVA- or UVPAPC- treated KC we identified NRF2 signaling, a UPR/ER stress signature and induction of lipid detoxifying genes as shared responses to 
both stressors. A bioinformatic analysis of upstream regulatory factors predicted nuclear protein 1 (NUPR1) to be involved in the stress regulation. NUPR1 is implicated in autophagy-, chromatin accessibility-, and transcriptional regulation in various tissues. We here report that expression of NUPR1 and downstream genes was induced by UVA and by exposure to oxidized lipids. Knockdown of NUPR1 increased expression of HMOX1 and of the detoxifying aldo-keto reductase AKR1C1, and impaired cell cycle progression. We localized NUPR1 in nuclei of epidermal keratinocytes and found that exposure of recombinant NUPR1 to oxidized PAPC affected its electrophoretic mobility, potentially by modifying and crosslinking the protein. Our data thus suggest a novel role for NUPR1 in the skin, as a transcriptional regulator of redox responses, lipid metabolism and the cell cycle of epidermal keratinocytes under stress evoked by UV light and bioactive oxidized lipids. Throughout this study we compared the effects of UVA exposure on the keratinocyte lipidome, transcriptome and proteome to the effects of externally added UVA-oxidized phospholipids. While external addition of OxPL does not exactly model their intracellular generation, cells in a UV-exposed microenvironment are likely to encounter these very mediators, be it as highly amphipathic and membrane-permeant lipid species from vesicles, as "whiskers" or danger-associated molecular patterns protruding from the membranes of cells or vesicles, as oxidation products on LDL particles, or among remnants of dead cells. First, we investigated the effect of UVA-1 exposure on phospholipid oxidation in primary human keratinocytes immediately after irradiation and after a twenty-four-hour recovery period. In parallel, we assessed the oxidized phospholipidome of cells to which 25 µg/ml externally photo-oxidized 1-palmitoyl-2-arachidonoyl-sn-glycero-3-phosphocholine (UVPAPC) had been added. We applied a semi-targeted lipidomic method using HPLC-electrospray ionization-MS/MS that we have recently developed, and additionally a high-resolution MS method for structural identification of selected unknown PL species, for these tasks. Further, using microarrays for transcriptomic profiling, we assessed the effect of UVA and UVPAPC treatment on KC seven hours post exposure, a timepoint at which we had previously studied UV- and UVPAPC-mediated gene regulation in dermal fibroblasts. We hypothesized that, similar to what we had observed in FB, UVA-oxidized phospholipids would also account for a part of the transcriptional UVA response in keratinocytes. Additionally, we investigated the effect of both treatments on the proteome at twenty-four hours post exposure using an LC-MS method. Using the data from transcriptomics and proteomics, we performed an analysis of signaling pathway activation and upstream regulators to predict factors responsible for the contribution of OxPL to the UVA effects on the mRNA and protein composition of KC. Finally, we used siRNA silencing to investigate the role of newly identified upstream regulators in KC and their UV response. We followed the changes in the relative abundance of the most prominent known and also unidentified oxidation products derived from PAPC but also from 1-stearoyl-2-arachidonoyl-sn-glycero-3-phosphocholine, 1-palmitoyl-2-linoleoyl-sn-glycero-3-phosphocholine and 1-stearoyl-2-linoleoyl-sn-glycero-3-phosphocholine at 0 h and 24 h post UVA and UVPAPC exposure, respectively. Immediately after exposure to UVA-1, 173 oxPC species were significantly increased as compared to sham
treatment.Addition of UVPAPC to the cells followed by immediate extraction resulted in up-regulation of 205 oxidized PC species of which 141 species were increased in both conditions.UVA and UVPAPC treatment decreased abundance of one and three oxPCs below detection limit, respectively.A principal component analysis of the first two principal components demonstrates high reproducibility within the replicates, whereas the different treatments could be clearly separated within the two dimensions.After a 24 h recovery phase, 84 OxPCs were up-regulated in the UVA treated group, whereas 273 oxidized lipids were up-regulated by UVPAPC treatment.Seventy one of the UVA induced OxPCs were also increased upon UVPAPC exposure after 24 h.PCA analysis and the heatmap at 24 h indicated a high similarity between the UVA- exposed samples and the controls, but both groups were clearly distinct from the UVPAPC treated group.From the 173 species that had been significantly induced by UVA at 0 h, 119 species returned to baseline level after 24 h, whereas from 205 species increased after UVPAPC addition only 8 returned to the baseline.Species that were exclusively elevated over control at 24 h post stress amounted to 30 in the UVA treated cells and to 76 in the UVPAPC treated cells.Together, these results indicate that the oxidized phospholipidome of UVA irradiated KC largely returned to baseline 24 h after exposure to UVA, while a smaller number of distinct OxPC species were induced after 24 h.The earliest products of phospholipid oxidation are PL hydroperoxides, which also represent a gold standard for cellular redox stress .Quantifying these and the less reactive PL hydroxides, which mostly derive from the reduction of the PL –OOH gives an indication whether cells recover from redox stress in a given time.We quantified PC- hydroperoxides and –hydroxides derived from PAPC and PLPC and found in irradiated KC an immediate rise in hydroperoxides.Exposure to UVPAPC led to strong increase in PAPC-OOH, but also PLPC-OOH was immediately increased to a level comparable to what was observed after UVA exposure, an indicator that the cellular oxidative stress level upon both treatments was comparable.After 24 h, PL-OOH levels remained elevated but did not augment further in the UVA exposed cells, whereas they increased strongly in the cells exposed to UVPAPC, indicative of ongoing lipid peroxidation.PL hydroxides were elevated immediately after UVA exposure and remained elevated at the 24 h timepoint.UVPAPC treatment resulted in high levels of PAPC-OH which were further increased at 24 h.We observed immediate generation of PLPC-OH in response to UVA, less so by UVPAPC, but both treatments readily induced PLPC-OH 24 h after exposure.These data indicate that the reduction of PL-OOH and/or the synthesis of PL-OH induced by UVPAPC exposure were slower than that evoked by UVA exposure.Phospholipid oxidation can however yield a broad spectrum of other products that result from cyclization or oxidative fragmentation of the polyunsaturated fatty acid chain , giving rise to bioactive isoprostane-, fragmented carbonyl- or di-carboxylic acid containing PLs, as prominent examples.The kinetics of 1-palmitoyl-2--sn-glycero-3-phosphocholine and 1-palmitoyl-2-glutaroyl-sn-glycero-3-phosphocholine, both products of PAPC show that UVA exposure of KC leads to an immediate rise in these species, but that they return to baseline after 24 h.The same was observed for the fragmented products 1-palmitoyl-2-nonanoyl-sn-glycero-3-phosphocholine 
and 1-palmitoyl-2-azelaoyl-sn-glycero-3-phosphocholine derived from PLPC.UVPAPC supplementation led, as anticipated to an immediate rise in PAPC derived fragmented species which amplified at 24 h. UVPAPC supplementation did however not lead to an immediate rise in PLPC derived fragmented species, and at 24 h to a moderate elevation of PONPC but not PAzPC.Quantification of the corresponding lipids with stearic instead of palmitic acid in the sn-1 position which yielded comparable results, and quantification of the unoxidized precursors, isoprostanoid- and lysophospholipid species are shown in Supplementary Fig. 2.Thus, this first analysis of oxPC species in UVA- stressed human keratinocytes demonstrated that fragmented PC oxidation products, among them reactive aldehydophospholipids, are immediately elevated by UVA, but efficiently restored to baseline level within 24 h, even when there is ongoing lipid peroxidation, as shown by still elevated levels of PL-OOH and OH at 24 h.Further, oxidized lipid stress as elicited by addition of external UVPAPC, does, with delay induce peroxidation of unrelated PL species shown by PLPC-OOH formation kinetics.However, either the formation of fragmented products of unrelated PLs is selective or prevented by induced cellular responses.Besides these identified species, the bulk of the UV regulated lipid signals could however not be unambiguously identified by our screening method.Thus, 20 analytes were selected for further analysis based on the criteria that they were highly inducible by UVA at 0 h or 24 h and that they were detectable also in human epidermis or in dermal fibroblasts.We used an independent high-resolution MS/MS approach combining data from positive and negative ionization modes.Precursor ions corresponding to the previously identified SRM transitions and collision-induced dissociation analysis in positive ion mode confirmed the presence of phosphatidylcholine fragment ion at m/z 184.1.Negative ion mode tandem mass spectra allowed identification of fatty acid composition for modified lipids.Using this approach, we propose structures for five UVA regulated oxidized PCs.Based on the tandem mass spectra for protonated ions at m/z 596.33, 550.35, and 664.42, carbonyl group containing structures were proposed.The signal at m/z 596.33 which we propose as PC carrying docosahexaenoic acid and C1 terminal carbonyl was highly inducible by UVPAPC immediately and after 24 h, by UVA only immediately after exposure.The signal at m/z 664.42, proposed as PC with C9 terminal aldehyde and hydroxy group within the same fatty acid chain, was increased immediately after UVA exposure but not by UVPAPC stress.The signal at m/z 550.35 corresponding to PC with oleic acid and C1 terminal carbonyl exclusively increased 24 h post UVA exposure, thereby having a strikingly different kinetic than all other aldehyde species described here.The lysoPC at m/z 546.36 strongly increased immediately after addition of UVPAPC but returned to baseline level after 24 h. Finally, the proposed hydroxy derivative of PC at m/z 800.58 had kinetic similar to the other PL-hydroxides described in Fig. 
2.We verified the presence of all five newly identified oxidized lipid species in human skin explant biopsies where no significant changes in the relative abundance were observed immediately after UVA exposure.To identify candidate genes, pathways or higher order regulators that may be involved in regulating the partial restoration of lipid oxidation homeostasis after redox stress, we performed transcriptomic and proteomic experiments followed by bioinformatic analysis.UVA irradiation significantly increased the expression of 341 genes more than two -fold and led to a down-regulation of 140 genes.Treatment with UVPAPC increased expression of 143 genes and led to a decrease of 253 genes, respectively.The Venn diagrams show the number of genes whose regulation overlapped upon both treatments.Of the UVPAPC regulated genes, 81 were co-induced, whereas 47 were co-decreased upon UVA exposure.To visualize the mRNA expression pattern of the stressed cells compared to the controls we performed a principal component analysis which together with the heatmap confirmed reproducibility within- and clear separation between the treatment groups.We used “Ingenuity Pathways Analysis” software to investigate whether the regulated gene groups would allow predicting activation or inhibition of canonical signaling pathways.The pathway analysis indicated that both stressors, UVA and UVPAPC, significantly induced the oxidative stress response controlled by NRF2.Additionally, UVA significantly induced the unfolded protein response- and the endoplasmic reticulum stress pathway and inhibited interleukin 17 A and – F signaling.UVPAPC treatment activated, in addition to NRF2, genes attributed with functions in glutathione biosynthesis or methylglyoxal degradation and reduced the “role of tissue factor in cancer” and the “inhibition of angiogenesis by TSP1” pathways.We verified regulation of NRF2 dependent gene expression by qPCR for HMOX1, the autophagy adaptor sequestosome 1 and AKR1C3, and for the UPR marker ATF4.As the dynamic changes in the oxidized phospholipidome over time suggest the involvement of inducible lipid metabolizing enzymes, we screened for candidates carrying the string “lipid” in their gene ontology database entry for biological function.Exemplary lipid metabolism genes that were induced both by UVA and UVPAPC were aldo-keto reductase 1C family genes, patatin-like phospholipase domain containing 8 and the oxysterol binding protein with potential roles in the signaling, detoxification and degradation of reactive oxidatively modifiedlipids.Volcano plots with highlighted members of the NRF2, UPR and “Lipid” groups are provided in Supplementary Fig. 
6.Next, we investigated whether the overlapping responses to UVA and UV-oxidized lipids also would be apparent at the proteome level 24 h after treatment.UVA significantly increased the abundance of 144 proteins more than 1.5 -fold whereas 85 proteins were downregulated after 24 h.After the UVPAPC treatment 346 proteins were found significantly increased and 262 proteins were decreased.Venn diagrams show the co-regulation of thirty one proteins by the two treatments after 24 h, which is less than on mRNA level.We investigated activation or inhibition of canonical signaling pathways with IPA software, and found proteins associated with eIF2 stress signaling and protein kinase A pathway activated, additionally a protein ubiquitination and DNA damage response and epithelial adherens junction signaling was regulated upon UVPAPC.Comparing the changes on mRNA level at 7 h post treatment to the effects on the proteome at 24 h post stress, we observed an overlap of only 12 co-regulated mRNAs and proteins for UVA, and 18 for UVPAPC.Discrepancies between mRNA and protein changes upon a comparable type of stress have been reported recently , so we asked whether there would be functional overlaps in the responses beyond individual mRNAs or proteins.We thus plotted the IPA protein pathways ranked by significance to pathways regulated on mRNA level.Here, it became apparent that the NRF2 system was significantly co-regulated on the pathway level upon UVA or OxPL exposure.Importantly, the low overlap in individual genes co-regulated on mRNA and protein level was contrasted by a large functional overlap, as the UPR signature on mRNA level was matched by a downstream eIF2 signature on protein level in the UVA treated cells.A lipid metabolic pathway was significantly enriched in both proteome and transcriptome after UVA stress treatment.Volcano plots for proteins in the groups NRF2, UPA and “lipid” are presented in Supplementary Fig. 6.In Fig. 
5 G-O, examples of regulated proteins are presented, among them markers for NRF2 activation and lipid metabolization.Besides HMOX1, also SQSTM1, which is both a NRF2 target and a central regulator of autophagy, and KEAP1, the cytoplasmic binding partner for NRF2, were found increased in the stressed KC.Superoxide dismutase 1 was depleted by UVA but induced by UVPAPC at 24 h post stress, whereas the heat shock protein 70 component HSPA1B was moderately but significantly increased by both treatments.UCHL1, an enzyme regulating the availability of mono-ubiquitin, was strongly decreased by UVA, and UVPAPC led to moderate induction of cyclooxygenase 2 protein.In line with the mRNA results, OSBP, a protein regulating intracellular transport of oxysterols, was induced by both treatments.AKR1C1, a NRF2 dependent enzyme that reduces carbonyl groups on lipids, was effectively induced by UVPAPC exposure.While the NRF2 pathway was thereby confirmed as shared regulator of UV- and lipid responses, we used the “upstream regulators” feature of the IPA software to predict further regulators potentially implicated in KC redox stress responses.The algorithm indeed predicted NRF2 as an activated upstream regulator; ATF4, which regulates UPR-dependent transcription, was also predicted to be activated, especially in the UVA treated cells, in line with the previous findings.Another upstream factor predicted to be strongly activated was nuclear protein 1, a stress inducible transcriptional regulator .NUPR1 is overexpressed in various malignancies, induced by cellular stress and is an established regulator of autophagy .Expression of NUPR1 has not been previously described in the skin, and we could detect it in the majority of the nuclei of keratinocytes within the living layers of the epidermis.The nuclear staining of NUPR1 in the positive cells appeared more intensive in the spinous layer as compared to the positive cells in the basal layer.We found that NUPR1 mRNA itself was significantly induced by UVA and to a lesser extent also by UVPAPC in cultured primary epidermal KC.NUPR1 had been associated with functions in autophagy and MAPK signaling, which are both activated by OxPL.We suppressed NUPR1 in KC with stealth siRNA transfection with no obvious effect on cell morphology and viability.Cell cycle analysis revealed an increased percentage of cells in G0/G1 and G2/M phase upon NUPR1 knockdown and a lower percentage of cells in the S phase.This halt in cell cycle progression was in line with a decrease of cyclin dependent kinase 1 in NUPR1 knockdown cells on mRNA and protein level, respectively.We next investigated how the knockdown would affect target genes and proteins which are stress regulated and have been associated with NUPR1 in other cell types, most prominently HMOX1 .Indeed, we found on protein and mRNA level that HMOX1 was up-regulated in NUPR1 knockdown cells compared to scramble transfected cells.On protein level we observed highly elevated expression of HMOX1 in NUPR1 knockdown cells after treatment with UVPAPC.The knockdown of NUPR1 led to a strongly increased basal AKR1C1 protein expression, as one example of a lipid detoxification gene.As several of the genes affected by NUPR1 knockdown are known NRF2 targets, and as NUPR1 is stress regulated, we investigated their interdependence.Knockdown of NUPR1 did not significantly alter expression or nuclear translocation of NRF2.NRF2 knockdown did not affect NUPR1 baseline or UVPAPC induced expression, but blunted stress induced expression of HMOX1
and AKR1C3, compatible with a model in which NUPR1 requires functional NRF2 for target gene induction.The interdependence of the two transcriptional regulators requires, however, further investigation in gene deficient systems that omit lipofection, as the lipofection process possibly causes stress that affects these pathways.NUPR1 contains amino acids that potentially allow modification by electrophilic reactive compounds, thus we investigated whether NUPR1 protein would be modified in vitro by oxidized lipids.We incubated recombinant, purified GST tagged NUPR1 protein with increasing doses of oxidized PAPC or the non-oxidized saturated 1,2-dipalmitoyl-sn-glycero-3-phosphocholine as a control.We also added the singlet oxygen quencher sodium azide NaN3 or the antioxidant butylated hydroxytoluene to the reaction mixture.We then separated the incubation mixtures on an acrylamide gel and performed Western blot.Using an anti-NUPR1 antibody we observed that high molecular weight aggregates were formed with increasing doses of UVPAPC.Incubation with non-oxidized DPPC had no major effects on NUPR1 electrophoretic mobility.While incubation in presence of 1 mM NaN3 reduced HMW aggregates, we observed an additional band at 150 kDa.Incubation in the presence of 0.001% w/v BHT failed to inhibit the formation of oxidized PAPC - HMW aggregates.While we cannot rule out that the GST tag contributed to the observed effects, this is to our knowledge the first time that an interaction of oxidized PAPC with a specific protein is shown to stably modify or crosslink it.In addition, the high molecular weight products can be partially inhibited by sodium azide, but not by the antioxidant BHT.In this study we determined the contribution of bioactive lipids to the responses of epidermal keratinocytes to UV irradiation, the most relevant extrinsic stressor of the skin.We focused on oxidized phosphatidylcholines, which are increasingly recognized as signaling molecules .The semi-targeted HPLC MS/MS screening approach revealed regulation patterns for known and unidentified lipid species.We investigated the latter with a high precision MS method to identify the exact mass and propose potential structures.Using transcriptomic and proteomic profiling, we identified candidate genes and proteins potentially involved in signaling, metabolization, detoxification and de novo synthesis of selected lipid mediators.We identified NUPR1 as a so far unknown upstream regulatory factor in the shared response to UV and oxidized lipids of the skin.The knockdown of NUPR1 indeed affected critical UV responsive genes governing the antioxidant response, lipid detoxification, autophagy and cell cycle and thus suggests NUPR1 to be a central, lipid regulated orchestrator of cutaneous stress responses.Among the lipid species we found significantly regulated were several species previously identified in autoxidized preparations of their unoxidized precursors , but also previously undescribed species.We will now discuss the regulated lipids or lipid classes, their potential biological relevance and how the regulated genes and proteins may act as effectors or degraders of UV generated lipids.All four quantified phospholipid hydroperoxides were immediately elevated in UVA- exposed and also in UVPAPC- treated keratinocytes.Typically, the PL-OOH are reduced to PL-OH, and we found an immediate rise in PC-OH after UVA exposure, while the PLPC-OH peaked 24 h after the stress elicited with externally added UVPAPC.A similar kinetic was observed for the newly discovered
UVA- and UVPAPC regulated lipid species at m/z 800.This indicates that under UVPAPC-induced stress additional, probably enzymatic, generation of PL-OH was favored.Indeed, with PRDX6 and glutathione peroxidases, we found enzymes induced that catalyze the reduction of PLPC-OOH to –OH.While low levels of PL-OOH are permanently formed in metabolism, increasing concentrations initiate apoptotic signaling and very high levels cause structural membrane damage and necrotic cell death .We could recently associate PRDX6 expression with epidermal PL hydroperoxide levels in the mouse , and our new findings strongly support that this is also the case in humans.Lipid species previously assigned as isoPGF2a modifications of PAPC and SAPC were induced weakly by UVA, strongly by UVPAPC, and increased further at 24 h. Isoprostanoid modifications of PAPC were described as most efficient inducers of UPR genes via activation of the transcription factor ATF4 .Individuals with increased skin photoaging have reportedly higher levels of plasma isoprostanes and as a result of UVB exposure .Moving from chain-intact oxygenated species to oxidatively fragmented species, we found carbonyl- and dicarboxylic acid containing species regulated, and identified three previously undescribed species with a carbonyl modification.POVPC and PONPC, aldehydo-PL oxidation products of PAPC and PLPC, respectively , were augmented immediately upon UVA exposure but declined to baseline levels after 24 h, as did the respective di-carboxylic acid PC species derived from the same precursors.Accordingly, UVPAPC-induced di-carboxylic acid PC species of PLPC, SLPC and SAPC returned to baseline after 24 h.With m/z 596 we propose a 22:6, C1 carbonyl -PC that is inducible by both stimuli, returns to baseline 24 h post UVA, and with m/z 664 we found one UVA inducible species with structural similarity to 4-HNE that also returns to baseline after 24 h.The fragmented carbonylic species, the best studied being POVPC exert, apart from their potentially detrimental chemical reactivity, potent signaling functions that require tight control of their bio-availability.POVPC disrupts the endothelial barrier in lung vessels , activates the NLRP3 inflammasome , has PAF-agonistic activity, and thus together with structurally similar alkyl phospholipids are critical regulators of UV induced immunomodulation and photosensitivity .Together, the data show that KC can restore basal levels of UVA-induced fragmented lipid species within 24 h, with the exception of a newly discovered at m/z 550, proposed to be 18:1; C1 carbonyl PC that warrants further biological investigation.Finally, we discovered a regulated lysophospholipd species, 20:3 lysoPC, which was elevated by UVPAPC immediately after treatment but disappeared at 24 h.This lysophospholipid had been found previously in human plasma , and was released by oncogene induced senescent cells into extracellular vesicles , which may also present a way to modulate the cellular levels of specific lipid species and could also provide novel lipid members of the senescence associated secretory phenotype in addition to those we recently found in melanocytes .In line with our previous findings in fibroblasts, the transcriptomes of UVA- and UVPAPC stressed cells revealed a substantial overlap that mainly reflected induction of NRF2 dependent genes and genes involved in lipid metabolism.The major divergence in the transcriptional patterns was the induction of UPR / ER-stress genes 7 h after UVA exposure.The proteomic study 
we performed, to our knowledge the first conducted on UV exposed primary keratinocytes, also identified upregulated NRF2 targets and UPR induction.Of note, the overlaps between UVA and UVPAPC regulation, but also between mRNA and protein regulation, were mostly observed on the pathway rather than on the individual gene level, as found by others in comparable settings .The investigation of upstream regulatory factors confirmed activation of the NRF2 and ATF4 pathways and additionally uncovered NUPR1 as a central factor affecting antioxidant response, lipid detoxification, cell-cycle- and autophagy genes.NUPR1 is regulated by pattern recognition receptor activation , and OxPL are known ligands for TLR2 but also antagonists of TLR4 .NUPR1 induction was observed upon ER stress and the ER stress sensitive transcription factor ATF4 induces NUPR1 in starvation and toxic stress .ATF4 is activated by oxidized PAPC downstream of NRF2 , and is also induced transcriptionally by UVA and UVPAPC in our keratinocyte system.HSPA5/GRP78, a receptor for OxPAPC, is both a target gene of ATF4 and its transcriptional activator .ATF4 activation via NRF2 and/or GRP78 is thus the most likely mechanism for lipid- and redox stress mediated NUPR1 regulation in epidermal cells.The reported downstream effects of NUPR1 suggest cell- and tissue dependence of its function in stress mediated proliferation and metabolism control.NUPR1 knockdown in glioblastoma cells suppressed cell growth , by repressing ERK1/2 and p38 MAPK phosphorylation, two signaling pathways inducible by OxPAPC .A further possible route by which NUPR1 attenuated cell cycle progression was via p53/p21 regulation .NUPR1 was downregulated in sebocyte tumors which derive from epidermal appendages, and was identified as a target of Rac1 GTPase .Regarding stress mediated changes in metabolism, inactivation of NUPR1 increased autophagy in cardiac myocytes and neuronal cells experiencing ER stress .As we have recently shown that autophagy is induced by UVA/UVPAPC in keratinocytes , and that genetic deletion of autophagy results in accumulation or secretion of selected oxidized lipid mediators in cutaneous cells , further studies will address the contribution of NUPR1 to epidermal autophagy.We for the first time demonstrate that exposure to oxidized PL can modify and most likely crosslink a recombinant, tagged form of NUPR1.As OxPL treatment and NUPR1 knockdown both result in induction of HO-1 and AKR1C1, we propose that UV/OxPL induced modification could make NUPR1 unavailable to exert repression of these genes.It will be of interest to further follow the hypothesis that NUPR1, possibly in coordination with NRF2, maintains defense mechanisms of the epidermis in an autoregulatory fashion through reactive lipids generated by stress or in differentiation, and how such a system would adapt during the chronological- and photo aging process, where an accumulation of reactive lipids in the tissue would be expected as a result of decreased redox surveillance of the cells.Taken together, our data add novel aspects to the knowledge on the cutaneous responses to lipid oxidizing stress.The lipid mediators induced by long wave UV light have both signaling function and chemical reactivity towards proteins.We detected redox stress responses, which are likely to contribute to the restoration of lipid homeostasis.NRF2 directed glutathione de-novo synthesis and recycling is essential for PL-OOH reduction, and the identified aldo-keto reductases can reduce lipid
aldehydes.Further, phospholipases that cleave arachidonic acid from the PC were induced by both treatments, and PAFAH1B3, which specifically cleaves fragmented OxPL, and PAFAH prevent lipotoxicity in response to UV stress .Of note, peroxiredoxin 6, which has lipid hydroperoxide reductase, lysophosphatidylcholine acyl transferase and PLA2 functions, was induced by UVPAPC, adding to our previous findings on its role in epidermal lipid homeostasis .With the bioinformatic identification and functional verification of NUPR1 we put forward a novel factor controlling epidermal cell growth and redox defenses relevant in homeostasis, aging and disease.Human neonatal primary keratinocytes were received from Cell Systems or keratinocytes were prepared from adult abdominal skin obtained from plastic surgery as described previously .Cells were cultured in serum-free keratinocyte growth medium at 37 °C and 5% CO2 for further treatments.The collection of the biopsies used for immunohistological analysis in this study was approved by the Ethics Committee of the Medical University of Vienna and written informed consent was obtained from all subjects.Cultured KC were irradiated with UVA-1 emitted from a Sellamed 3000 device at a distance of 20 cm to achieve a total fluence of 20 J/cm2 or 40 J/cm2 as measured with a Waldmann UV-meter, respectively .During the irradiation cells were kept in phosphate-buffered saline on a temperature controlled plate at 25 °C.UVPAPC was generated by exposing dry PAPC to UVA-1 with a fluence of 80 J/cm2 or sham irradiated as described in and cells were treated with 25 µg/ml of UVPAPC.Skin explants – adult skin obtained from plastic surgery was cut into 3 cm2 pieces, floated in PBS and irradiated with 80 J/cm2 of UVA-1, or was sham irradiated to serve as control.Three Stealth siRNAs specific for Nupr1, two stealth siRNAs specific for Nrf2 and a scrambled control were obtained from Invitrogen.RNA duplex sense sequences used for Nupr1 were: 5′- CCUCUAAGCCUGGCCCAUUCCUAC -3′; 5′- CCGGAAAGGUCGCACCAAGAGAGAA -3′; 5′- GGCACGAGAGGAAACUGGUGACCAA -3′; for Nrf2: 5′- UAUUUGACUUCAGUCAGCGACGGAA -3′; 5′- GAGCAAGUUUGGGAGGAGCUAUUAU -3′; and the Medium GC content negative control siRNA: 5′ – GAGUGGGUCUGGGUCUUCCCGUAGA -3′.At 50–60% confluence keratinocytes were transfected using Lipofectamine 2000.5 ml OPTI-MEM medium was mixed with 50 μL Lipofectamine 2000 and 60 μL of a 20 μM siRNA solution or the scrambled control RNA solution.The solution was incubated at room temperature for 30 min and then added to 20 ml KGM-2 and transferred to the KCs.24 h after incubation cells received new KGM-2 for another 24 h before stress treatment.After a recovery time of 48 h upon transfection cells were trypsin digested and prepared as single-cell suspension in a PBS solution.Cell viability was determined by cell counting in the LunaFL cytometer using an Acridine Orange/Propidium Iodide staining system according to the manufacturer's protocol.Cells that are positive for the cell permeable nucleic acid dye AO but negative for the late apoptotic and necrotic cell marker dye PI were counted as viable.Human skin obtained from plastic surgery was fixed with 10% formalin, paraffin embedded and microtome sections were immuno-stained.Primary keratinocytes were fixed with 4% paraformaldehyde and then permeabilized with PBS containing 0.1% Triton X-100.Sections were incubated overnight at 4 °C in phosphate-buffered saline with the primary antibody Nupr1 or Nrf2.As secondary antibody, goat anti-rabbit IgG, conjugated
with Alexa Fluor dyes were used at a dilution of 1:500.For imaging, an Olympus AX 70 was used.All image analyses were performed under the same parameter settings.Nuclear intensity of NRF2 was quantified using ImageJ software."Cell cycle analysis was performed using the BrdU cell-cycle kit according to the manufacturer's instructions.48 h post transfection, cells were incubated with BrdU for 4 h.After fixation with 100 μL of BD Cytofix Cytoperm Buffer for 15 min at 4 °C cells were stained with a fluorescein isothiocyanate conjugated anti-BrdU antibody for 30 min and stained with 7-AAD and immediately analyzed on a FACS-Calibur."Gates were set according to the manufacturer's instructions and data were evaluated using FlowJo software.Cell culture- Immediately after stress treatment or after a recovery time of 24 h keratinocytes were washed with PBS containing DTPA.Keratinocytes from two wells of a 6 well culture dish were scraped on ice in 1 ml of methanol/acetic acid/BHT to obtain material for lipid extraction.Skin explants- 3 cm2 pieces of skin were cut into small pieces and incubated for 1 h at 37 °C in dispase II to separate epidermis from dermis.To dissolve the epidermis it was transferred to Precellys tubes with 2 ml of ice cold methanol/acetic acid/BHT and was shaken 2 times with 5500 rpm for 30 s, centrifuged for 10 min and the supernatant was transferred into a glass tube.Phospholipid isolation – Isolation of lipids from cell culture or skin explants was performed using liquid–liquid extraction procedure, as recently described in Gruber et al.In brief, the experiment was performed on biological triplicate samples and each step was performed on ice.10 ng of internal standard was added into each sample.After washing the samples 3 times with 4 ml hexan/BHT, 4 ml chloroform/BHT and 1.5 ml formic acid were added to the methanol phase and after vortexing the lower organic phase was transferred into a new glass vial, dried under argon and stored at −20 °C until mass spectrometry analysis.Analysis of purified phospholipids was performed at FTC-Forensic Toxicological Laboratory, Vienna as recently described by us .In brief, purified samples were reconstituted in 85% aqueous methanol containing 5 mM ammonium formate and 0,1% formic acid.Aliquots were injected onto a core–shell type C18 column kept at 20 °C and using a 1200 series HPLC system from Agilent Technologies, which was coupled to a 4000 QTrap triple quadrupole linear ion trap hybrid mass spectrometer system equipped with a Turbo V electrospray ion source.Detection was carried out in positive ion mode by selected reaction monitoring of 99 MS/MS transitions using a PC-specific product ion, which corresponds to the cleaved phosphocholine residue.Data acquisition and instrument control were performed with Analyst software, version 1.6.Individual values were normalized to the intrinsic DPPC.Acquity UPLC M-class was coupled online to a Synapt G2-Si mass spectrometer equipped with an ESI source operating in negative ion mode.Eluent A was a mixture of water and acetonitrile containing formic acid, and eluent B was a mixture of isopropanol, acetonitrile, and methanol containing formic acid.Lipids were loaded onto a C18-column Acquity UPLC® CSH™ C18, and eluted with linear gradients from 50% to 90% eluent B and to 99% B.Column temperature was set to 50 °C and the flow rate to 60 μL/min.Sampling cone voltage was set to 40 V, source offset to 60 V, source temperature to 120 °C, cone gas flow to 30 L/h, desolvation gas flow to 650 L/h, desolvation 
temperature to 250 °C, nebuliser gas pressure of 6 bar, and an ion spray voltage of −2.0 kV.Data were acquired in negative and positive ion data-dependent resolution modes.Precursor ion survey scans were acquired for m/z 200–1200.Tandem mass spectra were recorded for the 12 most intense signals in each survey scan using a dynamic exclusion for 30 s.The signal of Leu-enkephalin was acquired as lock mass .Tandem mass spectra were manually analyzed.Total RNA was extracted from human neonatal keratinocytes grown in 12-well culture plates 7 h after stress treatment.Cells were lysed with TriFast Reagent according to the manufacturer's instructions.qPCR – 7 h after stress treatment total RNA from adult KC was isolated using the RNeasy 96 system according to the manufacturer's protocol.RNA quality was assessed with an Agilent 2100 Bioanalyzer and RNA integrity numbers were determined.Samples with a RIN number above 9.0 were used for transcriptomic analysis.Total RNA cleanup and concentration was performed using the RNeasy MinElute Cleanup Kit according to the manufacturer's recommendations.200 ng of each sample were used for gene expression analysis with Affymetrix human PrimeView 3′IVT.Hybridization and scanning were performed according to the manufacturer's protocol.The experiment was performed on biological triplicate samples.The full microarray data was uploaded to the Gene Expression Omnibus with the identifier GSE104870.400 ng of isolated RNA was reverse transcribed using the iScript cDNA Synthesis Kit and was diluted 1:5 for further quantitative PCR.A LightCycler 480 and the LightCycler 480 SYBR Green I Master were used with a standard protocol described before for qPCR.All primer sequences are shown in Supplementary Table 6.Relative quantification of target genes was performed using beta-2 microglobulin as a reference gene.Western blot – 24 h after stress treatment human KCs were washed twice with PBS and then harvested on ice with lysis buffer (with glycerol and 0.005% bromophenol blue) containing protease inhibitor cocktail and Pierce Phosphatase Inhibitor Mini Tablets, and immediately sonicated.Immunoblotting using antibodies for CDK1, AKR1C1, HMOX1, and GAPDH was performed as previously described .As secondary antibody, goat anti-rabbit IgG-HRP or sheep anti-mouse IgG-HRP were used and subsequent chemiluminescent quantification on a ChemiDoc imager was performed.The signal was measured with Image Lab 4.1 analysis software and target bands were normalized to GAPDH.150 ng of the recombinant protein Nupr1 were incubated with oxPAPC, with or without pretreatment with either 0.001% BHT or 1 mM NaN3 in K2HPO4 in a total volume of 22 ml, or were sham treated.30 min after incubation at 37 °C, lysis buffer (with glycerol and 0.005% bromophenol blue) containing 5% mercaptoethanol was added.Immunoblotting using an antibody for NUPR1 was performed as previously described and as detailed above.Cells were washed two times with PBS and scraped in cold PBS 24 h after the stress treatment.The cells were washed again two times with PBS and stored at −80 °C until further analysis.For proteolytic digestion samples were prepared as previously .Briefly, cell pellets were solubilized with urea buffer and sonicated.Protein amounts were estimated with the Pierce 660 protein assay.Fifty micrograms of samples were digested with trypsin using the filter-aided sample preparation as previously described with minor modifications .Tryptic peptides were recovered, and peptides of protein digests were normalized for tryptophan fluorescence.The peptides were
desalted and concentrated with reversed-phase C18 resin.Lyophilized peptides were reconstituted in 5% formic acid and 1 µg of peptides were analyzed by LCMS.Samples were injected onto a Dionex Ultimate 3000 system coupled to a Q-Exactive Plus mass spectrometer.Software versions used for the data acquisition and operation of the Q-Exactive were Tune 2.8.1.2806 and Xcalibur 4.HPLC solvents were as follows: solvent A consisted of 0.1% formic acid in water and solvent B consisted of 0.1% formic acid in 80% acetonitrile.From a thermostated autosampler, 10 μL that correspond to 1 µg of the peptide mixture was automatically loaded onto a trap column with a binary pump at a flow rate of 5 μL/min using 2% acetonitrile in 0.1% TFA for loading and washing the pre-column.After washing, the peptides were eluted by forward-flushing onto a 50 cm analytical column with an inner diameter of 75 µm packed with 2 µm-C18 reversed phase material.Peptides were eluted from the analytical column with a 120 min solvent gradient ranging from 5% to 40% solvent B, followed by a 10 min gradient from 40% to 90% solvent B and finally, to 90% solvent B for 5 min before re-equilibration to 5% solvent B at a constant flow rate of 300 nL/min.The LTQ Velos ESI positive ion calibration solution was used to externally calibrate the instrument prior to sample analysis and an internal calibration was performed on the polysiloxane ion signal at m/z 445.120024 from ambient air.MS scans were performed from m/z 380–1800 at a resolution of 70,000.Using a data-dependent acquisition mode, the 20 most intense precursor ions were isolated and fragmented to obtain the corresponding MSMS spectra.The fragment ions were generated in a higher-energy collisional dissociation cell with first mass fixed automatically and detected with an Orbitrap mass analyzer.The dynamic exclusion for the selected ions was 20 s. 
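To make the data-dependent acquisition logic just described more concrete, a minimal sketch of top-N precursor selection with dynamic exclusion is given below; this is a conceptual toy example in Python, not vendor instrument software, and the m/z values, intensities and scan times are invented for illustration only.

```python
# Toy illustration of top-N data-dependent precursor selection with dynamic
# exclusion, as described for the proteomics acquisition (top 20, 20 s exclusion).
# Conceptual sketch only; real instruments additionally apply m/z tolerances,
# charge-state filtering and intensity thresholds.

def select_precursors(survey_scan, scan_time, exclusion, top_n=20, exclusion_window=20.0):
    """survey_scan: list of (mz, intensity); exclusion: dict mapping mz -> time of selection."""
    # Keep only exclusion entries whose window has not yet expired
    active = {mz: t for mz, t in exclusion.items() if scan_time - t < exclusion_window}
    candidates = [(mz, inten) for mz, inten in survey_scan if mz not in active]
    # Pick the most intense remaining signals for MS/MS fragmentation
    selected = sorted(candidates, key=lambda x: x[1], reverse=True)[:top_n]
    for mz, _ in selected:
        active[mz] = scan_time  # start a new exclusion window for each selected ion
    return [mz for mz, _ in selected], active

# Example: two consecutive survey scans 10 s apart (top_n reduced to 2 for brevity)
exclusion = {}
scan1 = [(445.12, 1e6), (512.30, 8e5), (620.45, 5e5)]
picked1, exclusion = select_precursors(scan1, scan_time=0.0, exclusion=exclusion, top_n=2)
scan2 = [(445.12, 9e5), (512.30, 7e5), (733.80, 6e5)]
picked2, exclusion = select_precursors(scan2, scan_time=10.0, exclusion=exclusion, top_n=2)
print(picked1)  # [445.12, 512.3]
print(picked2)  # [733.8] -- the earlier precursors are still dynamically excluded
```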
Maximal ion accumulation times allowed in MS and MS/MS mode were 30 and 50 ms, respectively.Automatic gain control was used to prevent overfilling of the ion trap and was set to 1 × 10⁶ ions and 5 × 10⁴ ions for a full Fourier transform MS and MS/MS scan, respectively.The acquired raw MS data files were processed in MaxQuant 1.5.3.30 and searched against the human SwissProt protein database version 2015.11.11.The search parameters were as follows: two tryptic missed cleavage sites, mass tolerances of 5 ppm and 20 ppm for the precursor and fragment ions, respectively.Oxidation of methionine and N-terminal protein acetylation were set as variable modifications, whilst carbamidomethylation of cysteine residues was set as a fixed modification.The data was also matched against a decoy reverse database.Peptide and protein identifications with 1% FDR are reported.Protein identifications requiring a minimum of two peptide sequences were reported.The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD008050.The peak intensity was log2-transformed and compared between each condition for timepoints 0 h and 24 h using R version 3.2.2/Bioconductor software package Limma.A linear model was applied for each peak and moderated t-tests were computed.In the model, the "condition" was defined as a factor of 3 levels.The Benjamini and Hochberg procedure was applied to adjust the raw p-values into false discovery rates.An FDR < 0.05 was chosen as the cut-off value.To see the proximities among the conditions in terms of lipids, a Principal Component Analysis was conducted with the conditions in rows and the peaks in columns.Then a heatmap, based on a double Hierarchical Classification Analysis (on centered and reduced data) with Euclidean distance and Ward's method, was generated.Robust multi-array average signal extraction and normalization were performed using a custom chip description file at timepoint 7 h.After exclusion of all Gene IDs with an RMA value of less than 50 in all conditions, log2 transformation was applied.Differential expression between the three conditions was tested using moderated t-tests as described above with a BH multiple testing correction.Gene identifiers covering annotated genes or annotated variants are referred to as "genes" throughout the manuscript.Genes that were regulated more than 2-fold with a p-value < 0.05 were used for principal component analysis.Heatmaps were used to visualize double HCA of the mRNAs based on Euclidean distance with Ward's method.Protein identifications and LFQ intensities from MaxQuant were analyzed using the Perseus statistical package .The LFQ intensity values were log2-transformed and zero intensities were imputed by replacement with values drawn from a normal distribution.Statistical significance of differences in protein levels between groups was evaluated using two-sided t-tests with p < 0.05 and permutation-based FDR.PCA and hierarchical clustering analysis were performed using the Euclidean distance method for both rows and columns, with average linkage and k-means pre-processing.Regulated genes and proteins were analyzed with the software QIAGEN's Ingenuity® Pathway Analysis, which allowed prediction of activated signaling pathways and upstream regulatory events that were likely to cause the observed gene expression changes, both based on literature evidence.Heatmaps and activation z-scores were calculated within the IPA software package and modified for better presentation as recently described .
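For orientation, a rough Python analogue of the differential-abundance workflow described above (log2 transformation, per-feature testing with Benjamini-Hochberg FDR control, PCA, and Ward-linkage hierarchical clustering) is sketched below; the study itself used R/limma, Perseus and IPA, so the simulated input array and the plain Welch t-test shown here are stand-ins rather than the actual pipeline.

```python
# Rough Python analogue of the described analysis (limma/Perseus were used in the
# study itself); the 'intensities' array and group slices are hypothetical inputs.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)
intensities = rng.lognormal(mean=10, sigma=1, size=(6, 200))  # 6 samples x 200 features
group_a, group_b = slice(0, 3), slice(3, 6)                   # e.g. control vs treated

log2_int = np.log2(intensities)

# Per-feature two-sided Welch t-tests (a stand-in for limma's moderated t-test)
_, pvals = stats.ttest_ind(log2_int[group_a], log2_int[group_b], axis=0, equal_var=False)

# Benjamini-Hochberg correction; keep features with FDR < 0.05 and >2-fold change
reject, fdr_vals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
log2_fc = log2_int[group_b].mean(axis=0) - log2_int[group_a].mean(axis=0)
hits = np.where(reject & (np.abs(log2_fc) > 1))[0]

# PCA on the samples (conditions in rows, features in columns)
scores = PCA(n_components=2).fit_transform(log2_int - log2_int.mean(axis=0))

# Ward-linkage hierarchical clustering of standardized features, as for the heatmaps
standardized = (log2_int - log2_int.mean(axis=0)) / log2_int.std(axis=0)
tree = linkage(standardized.T, method="ward", metric="euclidean")
print(len(hits), scores.shape, tree.shape)
```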
| Ultraviolet light is the dominant environmental oxidative skin stressor and a major skin aging factor. We studied which oxidized phospholipid (OxPL) mediators would be generated in primary human keratinocytes (KC) upon exposure to ultraviolet A light (UVA) and investigated the contribution of OxPL to UVA responses. Mass spectrometric analysis immediately or 24 h post UV stress revealed significant changes in abundance of 173 and 84 lipid species, respectively. We identified known and novel lipid species including known bioactive and also potentially reactive carbonyl containing species. We found indication for selective metabolism and degradation of selected reactive lipids. Exposure to both UVA and to in vitro UVA - oxidized phospholipids activated, on transcriptome and proteome level, NRF2/antioxidant response signaling, lipid metabolizing enzyme expression and unfolded protein response (UPR) signaling. We identified NUPR1 as an upstream regulator of UVA/OxPL transcriptional stress responses and found this protein to be expressed in the epidermis. Silencing of NUPR1 resulted in augmented expression of antioxidant and lipid detoxification genes and disturbed the cell cycle, making it a potential key factor in skin reactive oxygen species (ROS) responses intimately involved in aging and pathology. |
31,442 | Isotopic and spectral effects of Pu quality in Th-Pu fueled PWRs | The UK Pu stockpile is the largest separated Pu stockpile in the world, containing 120 tonnes of spent fuel from a variety of sources of varying initial fissile loading, discharge burnups and cooling periods.The result is a varied isotopic mix in SNF which can be recycled using current reactor technology.Using existing technology to reduce the volume of Pu in the stockpile could result in lower developmental costs for an accepted recycle scheme, in addition to the technical advantages associated with decreased radiotoxicity and decay heat burden of stored SNF.The management strategy for UK Pu is expected to focus on the use of MOX fuel in thermal reactors as an interim measure prior to a fast reactor fleet being commissioned.MOX fuel is typically manufactured using U238 as the fertile isotope mixed with plutonium.However, spent U-Pu MOX fuel still contains significant quantities of Pu – roughly 50% of the initial loading – due to Pu production from U238.For the purposes of Pu incineration, it may therefore be beneficial to use Th232 as the fertile isotope rather than U238.Although less prevalent than U-fuels, Th-Pu options have been substantially researched since the 1980s – both from R&D and operational points of view – with results showing that Th-Pu MOX may be an ideal platform for Pu incineration in Light Water Reactors.Th breeds U233 as the fissile component of the fuel which accounts for >90% of isotopes in the Th transmutation chain and results in a significantly lower Pu and MAs content in discharged fuel compared to standard U-Pu MOX fuels.Th fuels show potentially lower levels of radiotoxicity and decay heat compared to standard UO2 and U-Pu fuels after the first 100 years post discharge.This, in addition to volume reduction, holds promise for UK Pu recycle schemes using Th MOX in LWRs.The UK has extensive operating experience of PWRs through civil and naval nuclear programmes.The effect of moderation in PWRs has been well studied and has shown that increased, standard and reduced moderation can have specific benefits relating to Pu incineration depending on the desired objective of Pu disposition. determined that for once-through recycle options using low Pu loadings in Th-Pu MOX, reduced moderation led to unacceptably short cycle lengths due to rapid depletion of Pu239.However, at higher Pu loadings, the production of U233 and the subsequent competition between this and Pu239 led to a decrease in Pu destruction in line with the decrease in moderation.Adding MAs to the Th-Pu MOX showed that increased destruction rates were possible only by increasing moderation.However, the addition of MAs reduced the overall destruction rate by ∼10% compared to Th-Pu only. 
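For reference, the fertile-to-fissile conversion that underpins the Th-Pu MOX options discussed above can be written out explicitly; the half-lives quoted below are rounded standard values and the scheme is included for orientation only (amsmath is assumed).

```latex
% Breeding of fissile U-233 from fertile Th-232 (half-lives rounded)
\begin{equation*}
^{232}\mathrm{Th}(n,\gamma)\,^{233}\mathrm{Th}
\;\xrightarrow{\beta^{-},\; t_{1/2}\approx 22\ \mathrm{min}}\; ^{233}\mathrm{Pa}
\;\xrightarrow{\beta^{-},\; t_{1/2}\approx 27\ \mathrm{d}}\; ^{233}\mathrm{U}
\end{equation*}
```

The roughly 27-day half-life of Pa-233 means that the bred U-233 inventory, and hence its competition with the loaded Pu-239, builds up gradually over an operating cycle.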
considered multiple recycle options and showed that TRU material can be virtually eliminated in Th-Pu-MA fueled, standard moderation PWRs.Further studies showed that reducing moderation in multiple recycle schemes led to improved performance when compared with standard PWRs – notably a ∼ 20% increase in achievable burnup when compared to standard PWRs with the same fissile loading.This is important where the moderator reactivity coefficients limit the maximum fissile loading, and therefore total achievable burnup, in the fuel – particularly where a negative VC/FVR is required for a fully voided core.It is possible to achieve a negative FVR in RMPWRs; however, this requires targeted fuel management strategies.In reality, a fully voided PWR core which has not been scrammed is deemed to be extremely unlikely and, while FVR remains of regulatory importance, it is the MTC which will likely be the limiting factor for fissile loading.It is ultimately for policy makers to decide whether once-through or multiple recycle options are preferred and, as no decision has yet been made in the UK, both standard and reduced moderation options are considered in this study.This assembly-level study will determine the sensitivity of reactivity feedback coefficients to the isotopic composition of fresh fuel and the effects of spectral hardening in PWRs and RMPWRs when loaded with the predicted UK Pu vector and other ‘standard’ grades of Pu.Coefficients will be calculated for each grade of Pu for various Pu loadings to determine the maximum theoretically achievable discharge burnup and Pu consumption taking account of the requirement to maintain negative temperature coefficients of reactivity.This may be used to help inform policy makers of the maximum Pu loading for each Pu vector and, therefore, provide an indication of how quickly the stockpile can be reduced.However, these results merely provide an insight into potential options and cannot be used to make a decision without full-core analyses being undertaken.Many of the parameters affecting the overall reactivity of a core depend on changes in material temperature.These changes can occur as a result of transient or accident scenarios and hence it is crucial to be able to predict the outcome of variations in temperature prior to licensing reactors to operate with new fuel types.While previous studies have shown that it is possible to achieve more favourable temperature coefficients of reactivity in LWRs with Th-Pu MOX fuel than with standard UO2 fuel, no studies have thus far considered in detail how UK Pu – subject to the effect of isotopic decay – may perform during operation.It is vital to understand the impact that the isotopic composition of UK Pu may have on reactivity feedback coefficients from a safety and control point of view, especially given that Th-fuels are associated with greater control requirements than UO2 fuels due to the lower delayed neutron fraction of U233 and Pu239 compared to U235.Reactors with negative temperature coefficients of reactivity are considered inherently stable whereas reactors with positive temperature coefficients of reactivity are unstable and should be avoided.Larger values indicate a greater sensitivity to changes in the parameter that has been varied.The coefficients that will be investigated in this study are DC, MTC and FVR along with BW.DC – change in reactivity per degree change in fuel temperature – is caused by Doppler broadening of resonant absorption peaks of isotopes within the fuel.Resonant peaks 
typically exist in the epithermal energy range and are therefore more likely to affect harder spectrums such as reduced moderation and high Pu content cases.DC must remain negative during operation.MTC – change in reactivity per degree change in moderator temperature – determines the ultimate behaviour of a reactor in response to changes in moderator temperature.In LWRs an increase in moderator temperature causes a reduction in density of water due to thermal expansion.This can result in either a positive or negative reactivity insertion.Several factors can influence MTC:Resonance escape probability: A lower density reduces the effectiveness of the moderator at slowing down neutrons through the resonance region, resulting in an increase in resonance absorption and a consequent decrease in the resonance escape probability.Alternatively, there may be increased fission in the epithermal/fast region.Thermal utilization: The lower density causes the thermal utilization factor to increase due to fewer parasitic absorptions of neutrons by hydrogen nuclei in the light water.Leakage: reduced density affects the non-leakage probability of neutrons across the entire energy range.As density decreases, there will be fewer neutron collisions with light water, resulting in an increase in the average neutron energy in the system.As the spectrum hardens, and the capture cross-sections of fuel nuclides decrease, more fast neutrons will be lost from the core.Interactions with absorbers: this can affect reactivity in a variety of ways depending upon whether they are present in solution or fixed.This study will consider the effect of soluble boron, but no other neutron poisons will be included.MTC is therefore a complex trade-off of positive and negative reactivity contributions.Since the main influencing factor to MTC is typically the resonance escape probability, and each nuclide has a different resonant structure, the MTC is very sensitive to the isotopic composition of the fuel.The contribution of different isotopes must therefore be well understood and, as such, nuclide contribution to MTC will be studied in the later part of this analysis.VC – change in reactivity per percent voiding in the light water of the moderator/coolant – determines the behaviour of a reactor in response to loss of coolant accidents.In this case a near-complete loss of coolant is simulated).Like changes in moderator temperature, voiding also changes the coolant density.As with the MTC in LWRs, an increase in voiding results in a reduction in reactivity if the thermal absorption cross-section of nuclides in the water is not made too large through the addition of soluble boron.In this case, a reduction in density may result in an increase in reactivity due to the reduction in density of boron.Scenarios which result in a fully voided but unscrammed core are deemed unlikely, but this will still be analysed to satisfy regulatory constraints.Since the addition of soluble boron is being considered, BW will also be analysed.BW – change in reactivity per ppm change in boron concentration – typically becomes more negative with burnup as the boron levels are highest at beginning of cycle when the excess reactivity is most substantial.From an operational and regulatory point of view there are a number of key factors that must be addressed.The maximum achievable discharge burnup should be given as this dictates the cycle length of a given fuel type; the rate of Pu destruction should be considered as this may influence policy decisions; and 
information regarding TRU accumulation in SNF should be provided.While the maximum discharge burnup is an important factor from a commercial point of view, it is also crucial to in-core safety.The current cladding limit for PWRs is ~60 GWd/tHM.Although cladding failure can occur above this limit, research is ongoing in the field of advanced cladding technology and, therefore, higher burnups will be considered in the final part of this study.Assembly-level calculations were performed for a standard 17 × 17 PWR lattice with reference geometry and operational parameters outlined in Table 1.The soluble boron concentration was fixed at 500 pm for the purposes of comparison.Each assembly contains 264 fuel pins with zircaloy cladding and 25 water holes.Increasing the fuel pin diameter from 9.5 mm in the PWR to 11.0 mm in the RMPWR causes the hydrogen-to-heavy-metal ratio to decrease and the neutron spectrum to harden.The increased diameter has been shown to allow for a neutronically feasible design which fulfils thermal-hydraulic constraints and allows multiple reloads to take place if required.The cladding and gap thickness have been kept constant between the two models to maintain the basis of comparison.However, it may be the case that for higher burnups the cladding thickness needs to be increased in the RMPWR to account for the higher stresses caused by the increased pin diameter.In addition, an increased gap thickness or larger plenum may be needed to contain the additional fission gas release compared to the standard PWR.This will be considered in future work along with the addition of control rods and fixed burnable absorbers.The theoretical densities of PuO2 and ThO2 were assumed to be 11.5 g/cc and 10.0 g/cc respectively.For all fuel materials used, the assumed density is 95% of their corresponding theoretical density.A 3-batch loading scheme was assumed with fuel being replaced on average every 18 months, equating to a maximum core-time of ∼4.5 years per assembly.Simulations were carried out to compare the effect of varying the Pu vectors, Reactor grade, MOX grade and UK Pu) and %wt Pu on homogeneously mixed Th-Pu MOX fuel in both reactor types.Realistically, one may wish to consider matching the overall reactivity of the assembly rather than simply the composition and percentage loading.However, since the main objective of this study is to understand the effect of isotopic composition and spectrum changes on reactivity feedback coefficients, this was not considered as part of this work.WIMS10A was used to perform the analysis using the ENDF/B-VII data library.The model used to complete this analysis was benchmarked against published data for 5%wt RG Pu in Th-Pu MOX fuel in a 17 × 17 PWR lattice.Good agreement was found between the model used in this study and the published data.The minor discrepancies are attributable to the older, unspecified codes and data libraries used by the states participating in the IAEA study.Additional validation was carried out using SERPENT and results showed good agreement with WIMS in all cases considered.The results of this study were generated by first performing an approximate flux solution using 172 neutron energy groups with geometric approximations inherent to the code, and then by performing a more detailed final solution using 47 groups and the method of characteristics.Thermal and epithermal region energies were split into a larger number of groups than the fast region energies, as these represent the energy ranges where the majority of 
interactions occur in standard and reduced moderation PWRs.Burnup calculations were performed and perturbations to temperature, density and boron concentration were simulated.An initial benchmark study was performed using this method and a limited number of results from the original paper.Good agreement was achieved for the majority of nuclides contributing to the calculated coefficients.Discrepancies were found to exist when comparing contributions from U238 due to improved representation of the U238 resonances in the ENDF/B-VII.0 data library used in this study compared to the ENDF/B-V data library used in the original study.Pu destruction rates have been calculated by dividing the difference in mass of Pu in fresh and spent fuel by the reactivity limited discharge burnup for each Pu vector in each reactor type, while the accumulation of MAs is simply the total amount of MAs present in discharged fuel for each Pu vector in each reactor type.Before analysing the effect of isotopics on reactivity feedback coefficients it is useful to understand how perturbations, such as the increased fuel pin size in the RMPWR, alter the neutron energy spectrum.Fig. 9 illustrates how the spectrum changes in response to the increased fuel pin diameter and how the largest perturbation – a total loss of coolant – causes the spectrum to harden.The neutron flux is normalized per 1000 neutron productions.It is noted that the RMPWR has a lower normalized fast flux than the PWR under nominal conditions.In fact, the total fast flux has increased in the RMPWR relative to the PWR but the total epithermal flux has increased by a greater amount, leading to a slight drop in the normalized fast flux.A higher overall flux is required in the RMPWR to achieve the same number of neutron productions as the PWR because fissile nuclides have a reduced fission cross in the harder spectrum of the RMPWR.At low Pu loadings, such as the 5%wt case shown, the effect of reduced moderation is such that the average neutron energy has shifted to the right.However, the fissile content is low enough that the new average neutron energy has not yet reached sufficiently high energies to correspond with the peak fission cross-sections of the key fissile nuclides.In the fully voided case considered there is a significant increase in the high energy flux in both the PWR and RMPWR.However, the flux in the epithermal energy region has again increased by a proportionally larger amount resulting in the perceived increase in intermediate flux/decrease in high energy flux.The increase in epithermal flux is due to the fact that fewer neutrons are absorbed before they reach intermediate energies.The increase in epithermal neutrons is more significant in the nominal vs perturbed conditions compared to the PWR vs RMPWR because the perturbation itself is more significant.Results show that DC remains negative in all cases considered and, as expected, becomes less negative as more Pu is added.DC is less negative in the PWR than the RMPWR.This is due to the large absorption cross-section of Pu240 at 1 eV, which exceeds the absorption cross-sections for all other isotopes within the fuel and has a more significant effect on reactivity as the fuel temperature rises.There is increased absorption and lower reactivity as the spectrum hardens and the average neutron energy shifts towards the Pu240 absorption peak.DC typically becomes slightly more negative with burnup due to the depletion of fissile isotopes and the build-up of absorbing MAs and fission products.WG 
has the least negative DC, followed by UK Pu, RG and finally MOX grade Pu.This is the case for all loadings in both reactor types and is attributable to the Pu240 content of each of the Pu grades – MOX fuel containing the most and WG the least.Despite containing similar amounts of fissile and absorbing isotopes, UK Pu displays a more negative DC than RG at BOC and a less negative DC than RG at EOC.At BOC this is caused by the presence of Am241 – the strongest neutron absorber of those considered – in UK Pu.At EOC this is caused by Am242m production in UK Pu, resulting in an overall more fissile fuel as the cycle progresses.Fig. 10 shows how DC changes with burnup for all cases with 5%wt Pu content.This represents the most interesting case of those examined as it exhibits the greatest variation.BW displays slightly less predictable trends than DC.In all cases considered, BW is less negative in the RMPWR compared with the PWR.This is to be expected given the combined effect of the ratio of capture to absorption cross-sections of B10 monotonically decreasing with neutron energy and the reduced moderator volume in the RMPWR.Therefore, as the spectrum hardens boron becomes less effective as an absorber.Noticeable differences in BW occur when the Pu vector changes due to the effect of isotopic composition on the neutron energy spectrum.The effect is different between DC and BW as the concentration of boron in the moderator has a smaller effect than the isotopic contributions and their responses within the fuel itself.At low Pu loadings, the RMPWR displays a slightly more negative BW with burnup which levels out as Pu content increases.In the PWR, however, there is a distinct decrease in BW with burnup for low Pu loading cases which levels out with increased fissile loading.At low Pu loadings the RMPWR displays a more negative MTC than the PWR.For the 5%wt case, MTC becomes less negative with burnup for most Pu grades.For both reactor types, WG displays the least negative MTC across the entire cycle whereas UK Pu has the most negative MTC at BOC and MOX, the most negative MTC at EOC.As Pu content increases, the MTC becomes less negative overall and the PWR begins to display a more negative MTC than the RMPWR.The trend with burnup also changes from less negative to slightly more negative as the cycle progresses.In addition, the order in which the Pu grades are ranked from most to least negative MTC begins to change.At 30%wt Pu WG is the most negative in the RMPWR throughout the entire cycle and most negative in the PWR from MOC onwards.Fig. 12 shows the difference in trend with burnup for high and low Pu loadings in the PWR and the variation in MTC by grade.Fig. 
13 shows how the MTC in the PWR and RMPWR differs for a given grade at high and low Pu loadings.This illustrates that at low Pu loadings the MTC is most negative in the RMPWR but, as the Pu content increases, the PWR begins to display a more negative MTC.It is worth noting that 10%wt Pu is the point at which the trend switches from one reactor type to the other.The effect of spectral hardening is clearly more complex in this case than for DC and BW and therefore requires a more detailed analysis.FVR might be expected to display similar trends to MTC, given that both result in a reduction in density of the light water moderator/coolant, but this is shown not to be the case.The perturbation for FVR is far more extreme than the perturbation for MTC and the effect is such that the spectrum in the perturbed FVR case is very different to – and much harder than – the spectrum in the perturbed MTC case.The result is that, for all cases, FVR becomes more negative with burnup, including at low Pu loadings.In addition, the order in which the Pu grades are ranked from most to least negative is different for FVR compared to MTC.The order is also inconsistent across the cases considered, as it depends heavily on Pu content.At 5%wt Pu, WG goes from least to most negative FVR during the cycle in both reactor types, while UK Pu goes from most to least negative FVR.At 10%wt Pu, WG has the least negative FVR and MOX grade Pu has the most negative FVR for both reactor types.At 20%wt, the FVR for most grades is positive, with UK Pu displaying the most positive FVR and MOX grade the least by EOC in both cases.At 30%wt loading, UK Pu has the most positive FVR while WG has the least in both cases.The difference in FVR for high and low Pu loadings in the PWR for each grade is shown in Fig. 
14.As with MTC, FVR displays a complex response to coolant density perturbations, which have been shown to be dependent on the isotopic composition of Pu and the overall Pu content in the fuel.The second part of this study considers the contribution by isotope and energy bin to MTC and FVR, to better understand the system and allow operators to predict likely responses to isotopic variations in Pu vector.It should be noted that these results do not consider the effects of burnable absorbers, which will make MTC and FVR less favourable.This will be considered in future work.When considered by energy group and individual isotope, the fissile isotopes are shown to be dominant contributors to MTC and FVR, as expected.At BOC, Pu239 is the dominant isotope while all others play a minor role.For all Pu grades considered, Pu239 has a negative contribution at thermal energies and a positive contribution at epithermal energies and above as shown by.At lower Pu loadings, Pu239 provides a larger negative contribution in the thermal energy region in the RMPWR, while at higher loadings, there is a larger negative contribution in the PWR.As the Pu content increases, the negative contributions in the thermal region diminish and the positive contributions in the epithermal region increase causing the overall MTC to become positive at high Pu loadings.A key feature in these isotopic contributions is a large positive contribution – present for all grades of Pu at low percentage loadings – which exists between 0.1 and 1 eV.This corresponds to the energy of the largest absorption cross-section of Pu239 and strongly influences the overall MTC.The positive peak diminishes as the Pu content of the fuel increases.However, in a limited number of cases, it means that increasing the Pu content can result in a more negative MTC, as the reduction in the positive peak has a more significant effect in determining the overall MTC than the combined effects of increased positive epithermal contributions and reduced negative thermal contributions.For example, MTC at BOC in a UK Pu fueled PWR is more negative for 6%wt and 7.5%wt Pu than for 5%wt.However, for >7.5%wt the MTC becomes less negative with increased Pu loading.The effect of the positive peak may therefore be useful in terms of maximising Pu loading in the core while reducing the MTC.It should be noted that while the MTC is more negative at 6–7.5%wt Pu compared with 5%wt Pu in the example case, the FVR is significantly less negative for 6–7.5%wt Pu than for 5%wt Pu.The contributions from different isotopes to FVR should therefore be kept in mind if attempting to utilise this feature to maximise Pu content in the fuel.The largest contributions from the positive peaks between 0.1 and 1 eV exist at lower Pu loadings in the RMPWR than the PWR, because the spectrum is already harder in the RMPWR and, therefore, lower Pu loadings are required to shift the average neutron energy towards the key absorption cross-sections.However, these peaks may still be used to achieve the same reduction in MTC with increased Pu content in a limited number of cases.The reduction in magnitude of the positive contribution between 0.1 and 1 eV with increased Pu loading can be explained by the fact that the neutron energy spectrum hardens as the fissile content increases causing a shift in the average neutron energy to the right past the large Pu239 absorption peak between 0.1 and 1 eV and towards the large absorption peak of Pu240 at ∼1 eV.At EOC Pu239 is only dominant at very high Pu 
loadings.At lower loadings, Pu239 is almost completely depleted by EOC and therefore has a much less significant effect.Where this happens, other fissile isotopes become dominant.At 5%wt Pu loading U233 is typically the most dominant isotope at EOC displaying somewhat similar trends to Pu239.U233 shows a similar trend to Pu239 in terms of the reduction in magnitude of the positive peaks between 0.1 and 1 eV as Pu content increases.However, the total magnitude of the contribution from U233 decreases in general as the Pu content increases.This is caused by competition between Pu239 and U233 in higher Pu content cases.At 10%wt Pu loading, Pu241 is more dominant than U233 in the thermal energy region, since there is sufficient initial fissile material to warrant the Pu isotopes being of key importance at EOC.Pu239 depletes preferentially to Pu241 due to the larger peak absorption cross-section between 0.1 and 1 eV.As a result, Pu239 burns up such that by EOC in the PWR this isotope does not exist in quantities capable of having such a significant effect on MTC, resulting in Pu241 becoming more dominant at 10%wt Pu and above.In the RMPWR, the harder spectrum causes the average neutron energy to shift further to the right which increases the likelihood of fission in other isotopes and ultimately causes fewer neutrons to be available for fission in Pu239 specifically.The result is that there is sufficient Pu239 to continue having a significant effect at EOC.Therefore, the remaining Pu241, which has a larger fission cross-section than U233 in the thermal energy region, is of greater significance.In terms of trend, Pu241 shows similar contributions at EOC to those of Pu239 at BOC, albeit on a smaller scale.The main difference to note with regard to the contributions from Pu241 is that the magnitude of the negative contributions <1 eV and the positive contributions >1 eV are much less extreme.The impact of Pu241 is therefore less significant than for other isotopes.The RMPWR displays the same trend, in terms of contributions at specific energies, as the PWR.However, as stated, a lower Pu loading is required to achieve positive peaks of a higher magnitude at 0.1–1 eV, e.g. 
2.5%wt Pu in the RMPWR is comparable with 5%wt Pu in the PWR.The positive peaks that the fissile isotopes display between 0.1 and 1 eV can be held accountable for the change in trend in which reactor type displays the most negative MTC.At 10%wt Pu loading, which represents the transition point, the positive peaks are almost completely non-existent regardless of reactor type.The larger positive contributions in the epithermal region of the RMPWR become dominant causing the RMPWR to display a less negative MTC than the PWR.Below 10%wt Pu loading, the larger positive peaks in the PWR result in a less negative MTC in the PWR than the RMPWR.FVR shows similar results to MTC in that there are negative contributions in the thermal energy region and positive contributions in the epithermal energy region and above.However, positive contributions towards FVR do not exist at energies <500 eV, unlike with MTC.Pu239 remains the dominant isotope, assuming it exists in sufficient quantities to have an effect.Like MTC, Pu241 has a significant effect on reactivity in Pu grades which contain large amounts of this isotope.In terms of overall trend, the contribution from Pu241 is very similar to that of Pu239 although the magnitude of the contributions is notably different.At EOC Pu239 is only dominant in cases where the initial fissile loading is high.If Pu239 depletes, U233 takes over as the dominant isotope.Again, U233 has a larger positive contribution in the epithermal region; however, Pu241 has a larger negative contribution in the thermal region.As the total fissile loading increases, the negative contribution from Pu239 to MTC decreases and the positive contribution increases.However, for FVR both the negative and positive contributions decrease as fissile loading increases since voiding itself causes a significant reduction in H/HM ratio, which leads to a reduced resonance escape probability as the spectrum hardens.This effect is more significant in cases with a high Pu content as the spectrum is already harder than in lower Pu content cases.FVR becomes positive as fissile content increases due to a reduction in the large negative trough between 0.1 and 1 eV.These results show that low Pu loadings correspond to uneconomically low discharge burnups and short cycle lengths.This is more noticeable in the RMPWR as per.For higher Pu loadings, a greater cycle length may be theoretically achievable; however, in most cases the discharge burnup exceeds current cladding technology limits.MTC remains negative for all cases except RG, MOX and UK Pu at 30%wt Pu loading in the RMPWR and does not fall below −60 pcm/K, which is within the typically accepted lower limit for MTC in PWRs.FVR is positive for >30%wt Pu for all grades in the PWR and >20%wt Pu for all grades in the RMPWR.If a negative MTC is required, the maximum Pu loading, and therefore discharge burnup, is higher in the PWR than the RMPWR.Despite the difference in Pu loading, roughly the same amount of Pu is consumed when MOX fuel is loaded into the PWR and RMPWR; however, other grades display noticeable differences.In the PWR ∼30 kg/GWy more WG is destroyed than in the RMPWR whereas for RG and UK Pu this figure changes to ∼10 kg/GWy.The U233 content in SNF is similar for all grades except WG which results in ∼10 kg/tHM more U233 in RMPWR SNF than PWR SNF.However, the total MA content is ∼10 kg/tHM lower in the RMPWR for all grades except WG which is ∼10 kg/tHM higher in the PWR.Where a negative FVR is required, the maximum Pu loading is ∼10% lower in 
the PWR than when the requirement to maintain a negative MTC is the limiting factor.The maximum Pu loading is the same in the RMPWR regardless of whether MTC or FVR are taken as the limiting case.Reducing the Pu content results in a significantly lower discharge burnup.When the maximum Pu content is 20%wt for all Pu grades in both the PWR and the RMPWR, a higher destruction rate is possible in the PWR.It is worth noting that when the FVR limits the Pu loadings to 20%wt the Pu destruction in the PWR is higher for all Pu grades despite the lower Pu loading.This is due to the higher levels of U233 in the MTC limited case and subsequent competition between Pu239 and U233 as well as increased absorption due to higher levels of MAs.In the FVR limited case the U233 and total MA content in SNF is 5–10 kg/tHM higher in the RMPWR than the PWR for all Pu grades.Results illustrate how sensitive operational and safety parameters are to changes in the isotopic composition of the fuel and the effects of spectral hardening.The k∞ curves were shown to be heavily influenced by the presence of Am241 in UK Pu despite UK Pu having a similar Pu composition to RG Pu.DC was determined to be strongly dependent on the Pu240 content of the fuel due to the absorption cross-section of this isotope at 1 eV and its subsequent negative effect on reactivity.BW was shown to be reliant upon spectrum-related effects due to the monotonic decrease in the ratio of capture to absorption cross-sections of B10 with increasing neutron energy.MTC and FVR were both shown to be more complex and difficult to predict.Pu239 was found to have the most significant effect on both MTC and FVR throughout the cycle assuming sufficient quantities remain at EOC.Where Pu239 has depleted by EOC, U233 becomes the most dominant isotope at low Pu loadings and Pu241 the most dominant isotope at higher Pu loadings.Reductions in positive contribution peaks from the fissile isotopes in the energy range 0.1–1 eV were shown to offset the increased fissioning in the epithermal energy region at higher Pu loadings and may be used to make MTC marginally more negative in a limited number of cases.FVR was shown to display some similar trends to MTC.However, key differences were noted – particularly the absence of positive contributions between 0.1 and 1 eV.This was found to be caused by the fact that the spectrum in the FVR case is much harder than the spectrum in the MTC case, due to the more extreme perturbation to the moderator density.Where a negative MTC is required to meet safety criteria, it was found that the PWR can tolerate a higher maximum fissile loading and can therefore achieve a higher discharge burnup and Pu destruction rate than the RMPWR in a once-through cycle.Where a negative FVR is required, the maximum fissile loading and discharge burnup are similar in both reactor types, while increased Pu consumption rates were possible in the PWR compared to the RMPWR due to lower levels of U233 and MAs in the SNF, thereby reducing both competition between Pu239 and U233 and neutron absorption in MAs.To the best of the authors’ knowledge, this paper and references herein contain all the data needed to reproduce and validate the results presented. | UK plutonium (Pu) management is expected to focus on the use of uranium-plutonium (U-Pu) mixed oxide (MOX) fuel. However, research has shown that thorium-plutonium (Th-Pu) may be a viable alternative, offering favourable performance characteristics. 
A scoping study was carried out to determine the effect of isotopic composition and spectral hardening in standard and reduced moderation Pressurised Water Reactors (PWRs and RMPWRs). Lattice calculations were performed using WIMS to investigate safety parameters (Doppler Coefficient (DC), Moderator Temperature Coefficient (MTC), Void Coefficient (VC) – in this case Fully Voided Reactivity (FVR) – and Boron Worth (BW)), maximum theoretically achievable discharge burnup, Pu consumption and transuranic (TRU) composition of spent nuclear fuel (SNF) for the two reactor types. Standard grades of Pu were compared to a predicted UK Pu vector. MTC and FVR were found to be strongly influenced by the isotopic composition of the fuel. MTC was determined to be particularly sensitive to positive ‘peak’ contributions from fissile isotopes in the energy range 0.1–1 eV which diminish as the Pu content increases. The more extreme nature of the perturbation in FVR cases results in key differences in the contributions from fissile isotopes in the thermal energy range when compared with MTC, with no positive contributions from any isotope <500 eV. Where the requirement for MTC to remain negative was the limiting factor, a higher maximum fissile loading, discharge burnup and Pu consumption rate were possible in the PWR than the RMPWR, although the two reactor types typically produced similar levels of U233. However, for the majority of Pu grades the total minor actinide (MA) content in SNF was shown to be significantly lower in the RMPWR. Where FVR is the limiting factor, the maximum fissile loading and discharge burnup are similar in both reactor types, while increased Pu consumption rates were possible in the PWR. In this case, lower concentrations of U233 and MAs were found to be present in the PWR. These results are for a single pass of fuel through a reactor and, while the response of fissile isotopes at given energies to temperature perturbations will not vary significantly, the maximum achievable discharge burnup, Pu consumption rate and TRU build-up would be very different in a multi-recycle scenario.
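The reactivity coefficients and Pu destruction rates referred to throughout the study above are never written out as explicit formulas in the excerpt. The Python sketch below shows one common convention for computing such quantities from lattice-code output; all k-infinity values, temperatures, masses and burnups are placeholders invented for illustration (not results from the paper), and the per-GW-year figure assumes thermal rather than electric output.

```python
"""Illustrative back-of-envelope calculations for the quantities discussed in the
PWR/RMPWR study above. The k-infinity values, temperatures and masses below are
placeholders, NOT results from the paper; the formulas follow common reactor-physics
conventions, which the paper itself does not spell out."""


def reactivity_pcm(k_inf: float) -> float:
    """Reactivity rho = (k - 1)/k, expressed in pcm (1e-5)."""
    return (k_inf - 1.0) / k_inf * 1e5


def feedback_coefficient(k_nominal: float, k_perturbed: float, delta: float) -> float:
    """Generic feedback coefficient: change in reactivity per unit perturbation
    (e.g. pcm/K for DC and MTC, pcm/ppm for boron worth)."""
    return (reactivity_pcm(k_perturbed) - reactivity_pcm(k_nominal)) / delta


# Doppler coefficient: fuel temperature raised by e.g. 300 K (placeholder k values)
dc = feedback_coefficient(k_nominal=1.0500, k_perturbed=1.0450, delta=300.0)   # pcm/K

# Moderator temperature coefficient: coolant temperature raised by e.g. 10 K
mtc = feedback_coefficient(k_nominal=1.0500, k_perturbed=1.0493, delta=10.0)   # pcm/K

# Boron worth: soluble boron concentration increased by e.g. 100 ppm
bw = feedback_coefficient(k_nominal=1.0500, k_perturbed=1.0400, delta=100.0)   # pcm/ppm

# Fully voided reactivity: total reactivity change on complete loss of coolant,
# quoted as a reactivity difference rather than a per-unit coefficient
fvr = reactivity_pcm(0.9800) - reactivity_pcm(1.0500)                           # pcm

# Pu destruction rate, as described in the text: difference in Pu mass between
# fresh and spent fuel divided by the reactivity-limited discharge burnup,
# converted here from per-GWd to per-GW-year (assumed thermal).
pu_fresh_kg_per_tHM = 200.0      # 20 wt% Pu loading (placeholder)
pu_spent_kg_per_tHM = 130.0      # placeholder
burnup_GWd_per_tHM = 60.0        # placeholder discharge burnup
GWD_PER_GWY = 365.25
pu_destroyed_kg_per_GWy = (pu_fresh_kg_per_tHM - pu_spent_kg_per_tHM) \
    / burnup_GWd_per_tHM * GWD_PER_GWY

print(f"DC  = {dc:8.2f} pcm/K")
print(f"MTC = {mtc:8.2f} pcm/K")
print(f"BW  = {bw:8.2f} pcm/ppm")
print(f"FVR = {fvr:8.0f} pcm")
print(f"Pu destruction rate = {pu_destroyed_kg_per_GWy:6.0f} kg per GW-year")
```

The sign convention follows the usual practice of quoting the change in reactivity per unit increase of the perturbed quantity, so negative values indicate the stabilising feedback discussed in the text.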
31,443 | DFT+U study of the structures and properties of the actinide dioxides | The study of the actinides is primarily due to their importance in the nuclear fuel cycle , where actinides are commonly encountered in their oxide form .Compared to the metal, the oxides demonstrate better thermal and chemical stability , which is of paramount importance when we are considering nuclear energy applications .The oxides enable higher operating temperatures and guard against further oxidation of the nuclear fuels, thus limiting the risk of containment failure and thermal excursion, and ensuring the stability of spent nuclear material for long-term storage .It is, however, known, that under specific conditions, the AnO2 may form hyperoxides of the form AnO2+x .In nuclear fission, fissile isotopes of uranium and plutonium are used as nuclear fuel .Over time the composition of nuclear oxide fuels change, either because of the genesis of irradiation-induced defects , or because of the formation of fission products .Actinide oxides of interest include thorium dioxide, neptunium dioxide, americium dioxide, and curium dioxide.Evaluating the properties of these materials has important implications for the development of new processing strategies and the storage of nuclear material, thus improving the generation and safety of nuclear energy .The study of AnO2 properties faces several challenges.The actinides are inherently unstable, highly toxic, and, due to the risk of nuclear proliferation, subject to strict regulatory controls .In addition, their experimental investigation is further limited by their low abundance, isolation, and high radioactivity, which has led to the lack of precise information on fundamental properties .Important properties that require assessment include the lattice constants, band gaps, magnetic states, and the bulk moduli.Theoretical chemistry represents a powerful tool to unequivocally determine these properties, although conventional computational methods, based on density functional theory, often fail due to the relativistic influences and the highly correlated nature of these materials .Hence, treatment by specific methods that are able to asses this type of material is required.The following methods have been developed to calculate highly correlated materials: the self-interaction correction method , modified density functional theory , dynamic mean field theory , and screened hybrid density functional theory .Among these methodologies, DFT+U represents one of the more computationally tractable means of study, since it offers the possibility to accurately reproduce the properties of highly correlated materials at a reasonable computational cost .This is achieved by manipulating the so-called Hubbard Coulombic and exchange modifiers, thus minimizing the DFT self-interaction error.However, U and J have to be fitted to experimental data, and for actinide materials, relativistic effects must also be taken into account.This is the reason why, to the best of our knowledge, few theoretical reports have been published so far that consider SOI and noncollinear magnetic order .Hence, determining an accurate value of U and J would represent a significant advance, allowing in the exploration of these materials.In this paper, we perform a systematic study to determine a set of Hubbard parameters for the DFT+U approximation for different functionals and different materials.Past studies have so far limited themselves to discussion of LDA, PW91, PBE or hybrid functionals."To the authors' 
knowledge, this is the first study of the AnO2 systems that investigates the performance of the more current AM05 and PBE-Sol functionals.In previous work in the literature, these functionals have been shown to improve calculations of transition metal complexes in terms of the lattice constant and surface energetics .It is thus important that their application to AnO2 systems is evaluated as well.In addition, we have chosen to implement the magnetic state determined from experimental procedure and/or predicted from the one electron crystal field and Russel-Saunders coupling scheme .Our paper aims to address the issues encountered in the computational modelling of actinide-based materials to provide a basis on which further research can be built.The electronic structure of these materials can be understood from the crystal field theory.The spin-orbit interaction forces the splitting of the single electron f-orbitals into two different levels, with j = 7/2 and j = 5/2 respectively.Since each actinide cation has a cubic environment, the degeneracy of these levels is subsequently broken due to the crystal field.Although this approximation is technically valid only for the single f-electron case, it is widely used to predict the orbital occupancy .Hence, Th4+, Pu4+, and Cm4+ are predicted to have no effective magnetic moment, whereas U4+, Np4+, and Am4+ present an effective magnetic moment, as illustrated in Fig. 2.The display of these magnetic moments in the actinide dioxides leads to different magnetic structures.If the ions show no magnetic moment, the resulting structure is the non-ordered diamagnetic state.In contrast, if the ions do present an effective magnetic moment, we can have two different situations, depending on whether these magnetic moments are coupled or not.If they are decoupled, there is no ordered distribution and this results in the disordered paramagnetic phase; if ordered or coupled, either ferromagnetic or antiferromagnetic states are obtained.In addition, if magnetic moments are coupled but they show different μB, the resulting magnetic structure is known as the ferrimagnetic state.Further, the ordered magnetic AnO2 exhibit non-collinear behaviour , where the magnetic moments of the ions have contributions in more than one direction.Hence, we distinguish 1k, 2k, and 3k magnetic wave vectors .Specifically, AnO2 are non-collinear 3k AFM materials.For the AFM 3k phase, one can distinguish between three different phases: the longitudinal 3k AFM, and the two equivalent transverse 3k AFM domains, depicted in Fig. 
3.The non-ordered magnetic systems are relatively simple to calculate compared to their ordered non-collinear magnetic counterparts.Experimentally, for ThO2 and PuO2 diamagnetism has been confirmed.It is worth noting, however, that there is a controversy over the exact magnetic ground state for PuO2, where first-principles methods have suggested the possibility of an AFM ground state .However, for this work we have assumed the DM ground state, as supported by the complete active space self-consistent field method and experimental evidence .CmO2 has been reported as PM, although according to CF theory, it should be DM.It is believed that the presence of impurities plus the small energy difference between the DM and the PM phases in CmO2 are the reason the DM phase has been difficult to determine ; here, we have again assumed a DM ground state for CmO2.The ordered magnetic AnO2 have attracted considerable interest, in part due to their non-trivial multi-k and multipolar nature.In UO2, the magnetic ground state is identified as a transverse 3k AFM state at TN = 30.8 K with a corresponding effective magnetic moment of 1.74 μB/U ion.In addition, neutron studies have indicated an internal oxygen displacement, although no external distortion of the cubic structure occurs .One theoretical study has employed SOI to investigate the relative energetics of the transverse 1-3k AFM states, and has indicated a transverse 3k AFM as the ground state , which is the one considered in this study.In NpO2 the effective magnetic moment of the ground state remains conspicuously elusive .Np4+ is a Kramers ion , an ion with an odd number of electrons, and therefore should have a magnetic moment.237Np Mössbauer spectroscopy and muon experiments have thus far established an effective magnetic moment of 0.00–0.15 μB/Np4+ ion.In the PM phase, TN = 60 K, the effective magnetic moment increases to 2.95 μB/Np4+ ion .The longitudinal 3k AFM ground state has been inferred from resonant x-ray scattering and 17O NMR studies on NpO2.Thus, the longitudinal 3k AFM state is employed in these calculations.Finally, experimental information on the magnetic structure of AmO2 is extremely limited.Magnetic susceptibility measurements have indicated an AFM ground state although the effective magnetic moment still eludes detection.Conversely, in the PM phase, an effective moment of 1.32 μB/Am4+ ion at 15–40 K and 1.53 μB/Am4+ ion at 50–100 K has been measured .Independent Mössbauer and neutron measurements have been unable to provide evidence for a magnetically ordered ground state .In 17O NMR studies on AmO2 an effective magnetic moment of 1.38 μB/Am4+ ion is estimated with an internal distortion of the oxygen ions synonymous with transverse 3k AFM order .To our knowledge no study has examined the energetic differences between the longitudinal 3k AFM and transverse 3k AFM phases of AmO2.Thus, the relative energetics of the longitudinal 3k AFM and the transverse AFM states are compared to establish the magnetic ground state for AmO2.In our discussion, the performance of the functionals is addressed with DFT+U where a range of U = 0.00–7.00 eV is investigated, with J = 0.00 eV.Next, the effect of exchange is investigated further with PBE-Sol for U = 0.00–7.00 eV and J = 0.00–1.00 eV.The non-ordered AnO2 materials include ThO2, PuO2 and CmO2.The selection of DFT+U values and the exchange correlation functional focuses first on reproducing the electronic structure as described by the band gap, and second the lattice constant.In general, the
functional has little effect on the calculated band gap, but it, greatly affects the lattice constant.One further observes the near identical performance of AM05 and PBE-Sol.It is, however, noted that for PuO2 and CmO2, the DM state may not be the fundamental magnetic ground state.As already mentioned, this has previously been ascribed to how the exchange-correlation energy is evaluated within the DFT framework .In this paper, our aim is to replicate the properties derived from CF theory in conjunction with experimental analysis.In ThO2, the Th4+ f orbitals to which the U is applied are unoccupied and the DFT+U correction thus has a marginal effect on the electronic structure.Here, the experimental band gap of 5.90 eV is systematically underestimated by DFT and DFT+U.The AM05/PBE-Sol functionals provided the closest approximation when U = 6.00 eV with a calculated value of 4.70–4.80 eV.This represent an underestimation of 1.10–1.20 eV but it still offers an improvement over previous DFT studies .The lattice constant is reproduced by LDA, AM05 and PBE-Sol when U = 3.00–6.00 eV, but remains overestimated by PBE.In PuO2, the band gap is greatly influenced by the choice of U.In DFT, PuO2 is calculated in the DM state as a metal; increasing U enables the formation of a band gap.In addition, past DFT studies have additionally reported a FM metallic ground state .Our focus, however, remains on replicating the experimentally derived DM condition.Given the limited experimental information, previous DFT+U studies referenced the band gap from the activation energy for electronic conduction at 1.80 eV , but recent measurements of optical absorbance on epitaxial films have reported a band gap of 2.80 eV .Assuming the correct band gap is 2.80 eV, this is reproduced by all functionals when U = 6.00–6.50 eV.In this range, the lattice constant is best represented by AM05/PBE-Sol.No experimental data has been published on the band gap of CmO2, although it is believed to be an insulator .Our calculations also point in that direction.As U is increased CmO2 transforms from a semi-conductor to an insulator, with a band gap that increases from 0.50 to 2.50 eV for U = 0.00–7.00 eV.Although none of the functionals calculates the experimental lattice constant exactly, an excellent approximation is made by AM05 and PBE-Sol when U = 6.00 eV.In general, the experimental data is best represented by the AM05 and PBE-Sol functionals.Thus, PBE-Sol is used further to investigate the influence of the exchange modifier on the band gap and the lattice constant.In ThO2, the introduction of J is negligible with respect to the band gap and only has a minor effect on the lattice constant.In PuO2, the J value detrimentally affects the band gap, with a reduction of 0.50 eV for every 0.25 eV of J.In fact, the best band-gap fitting is obtained when J = 0.00 eV, with negligible implications to the lattice parameter.Finally, in CmO2, the J modifier increases the band gap by a maximum of 0.75 eV when J = 0.00–1.00 eV, and again, it has barely any influence on the lattice parameter.In conclusion, for the non-ordered magnetic AnO2, we have observed that the PBE-Sol/AM05 functionals, in a combination of high U values with J = 0.00 eV, provide the most accurate reproduction of both band-gaps and lattice constants.Hence, assuming the PBE-Sol functional, we have selected the following U and J parameters for an accurate description of the non-ordered magnetic AnO2: ThO2, PuO2 and CmO2.It is worth noting that the U and J choice for CmO2 is 
based upon the better fitting to its lattice parameter, thus predicting a bandgap of 2.50 eV.Using these determined values, we have calculated the following variables: band structure, density of states, optical absorbance spectra, and bulk modulus.The band structures of ThO2, PuO2 and CmO2 are presented in Fig. 4.In ThO2, the calculated indirect band gap of 4.63 eV occurs between the respective L and X-Γ points of the valence band maximum and conduction band minimum, compared to the experimental band gap of 5.90 eV .In PuO2, with respect to the band structure, we report a Γ centred direct band gap of 2.81 eV.For CmO2, an indirect band gap of 2.50 eV is predicted between the respective Γ and L of the VBM and CBM.For ThO2 and CmO2, the difference between direct and indirect band gaps is 0.17 eV.The density of states for ThO2, CmO2 and PuO2 are shown in Fig. 5.In each instance the valence band is dominated by the O states with minor contributions from the An and An states.In ThO2, the conduction band comprises of Th, Th, and O states, exemplifying the Mott insulator characteristics of ThO2.In PuO2 and CmO2, the conduction band is primarily formed from the An states, from which we predict these materials to be charge-transfer insulators where electronic transitions occur across the oxygen and actinide ions.Whereas the fundamental band gap represents the transition between the VBM and CBM the optical band gap is restricted by orbital symmetry.In recent studies on In2O3 , AgCuS , PbO2 , Tl2O3 , and SrCu2O2 , it has been shown that the fundamental band gap and optical band gap may differ in value.Our calculated optical absorbance spectra for ThO2, PuO2 and CmO2 are shown in Fig. 6, from which we found that the direct band gap and the optical band gap respectively differ by 0.11 eV, 0.01 eV and 0.19 eV for ThO2, PuO2 and CmO2, as shown in Table 1.Thus, optical absorbance measurements are only representative of the direct band gap in PuO2.Finally, the bulk modulus is notoriously difficult to obtain experimentally, which has led to a large range of values in the literature .However, our calculated values are in excellent agreement with the experimental data for ThO2 and CmO2.For PuO2, where the reported experimental values for the bulk modulus range from 178 to 379 GPa, our value of 217 GPa provides a useful point of comparison for experimentalists.Experimental studies of UO2 and NpO2 indicate a respective transverse 3k AFM and longitudinal 3k AFM ground state, whereas for AmO2, to our knowledge, the domain of the 3k AFM ground state is undetermined.Initially, the relative energetics of the transverse 3k AFM and longitudinal 3k AFM states were calculated with PBE-Sol when U = 0.00–7.00 eV and J = 0.00 eV.DFT calculates a longitudinal 3k AFM ground state, but DFT+U calculates a transverse 3k AFM ground state.Thus, the transverse 3k AFM ground state is used for further calculations of AmO2.As with the non-ordered magnetic AnO2, the choice of functional has a minor effect on the calculated band gap and on the effective An4+ magnetic moment.The sole exception is the band gap of AmO2; here, when U = 7.00 eV, the LDA functional differs considerably from the GGA functionals.The choice of functional, however, has a pronounced effect on the lattice constants.The U modifier is the dominating factor for the band gap and the lattice constant, whereas the magnetic moment is only marginally influenced.In order to be consistent with the analysis of the non-ordered magnetic materials, the influence of J was only 
assessed for the PBE-Sol functional.For NpO2, the experimental band gap of 2.85–3.10 eV is well reproduced by all functionals when U = 5 eV, and the experimental lattice constant of 5.434 Å is reproduced by PBE, LDA, and AM05/PBE-Sol.The effective Np4+ magnetic moment, however, remains an enigmatic issue.Our calculated effective Np4+ magnetic moment ranges from 2.14 to 2.77μB/Np4+ ion depending on the functional rather than on the U parameter.However, these values are not compatible with the experimental values since, as indicated by several experimental papers, the magnetic moment is only 0.00–0.15 μB/Np4+ion .Interestingly, our value does compare with the effective paramagnetic moment of 2.95 μB/Np4+ ion, determined at 60 K .Focusing now on the PBE-Sol functional, the introduction of J enables a better correlation between the band gap, effective Np4+ magnetic moment and lattice constant when U = 5.00 eV and J = 0.75 eV, as collected in Table 2.Although the calculated effective magnetic moment of 1.87 μB/Np4+ ion is still overestimated with respect to the longitudinal 3k AFM structure, it offers an improvement over previous investigations .Finally, no internal oxygen distortions have been observed in this structure, despite suggestions in some studies .In summary, for the ordered magnetic AnO2 the AM05/PBE-Sol functionals provide the closest representation of the experimental band gap, effective magnetic moment of the An4+ ions and the lattice constant.Thus, from the PBE-Sol functional, the band structure, density of states, optical absorption and bulk modulus are calculated for UO2, NpO2 and AmO2.The band structures for UO2, NpO2 and AmO2 are shown in Fig. 7.In contrast to their non-ordered magnetic counterparts, the degeneracy of the bands is perturbed for ordered magnetic AnO2 systems.In UO2, an indirect band gap of 2.06 eV occurs between Γ and R of the VBM and CBM.For NpO2, a direct band gap of 3.08 eV occurs at the L point.For AmO2, an indirect band gap of 1.31 eV occurs between respective Γ and Γ-R of the VBM and CBM respectively.The DoS for UO2, NpO2 and AmO2 is illustrated in Fig. 8.Unlike NpO2, AmO2 and the non-magnetic actinide oxides, in which the valence band is dominated by the O states, UO2 presents a valence band dominated by U states, clearly indicating that UO2 is a Mott insulator.In each compound the An states contribute heavily to the conduction band.The An states feature more prominently when the energy is greater than 4.00 eV.The calculated optical absorption spectra for UO2, NpO2 and AmO2 are shown in Fig. 
9.The direct band gap and optical band gaps respectively differ by 0.14 eV, 0.03 eV and 0.14 eV for UO2, NpO2 and AmO2, as shown in Table 2.Thus, optical absorbance measurements can only be fully relied upon for the fundamental electronic structure of NpO2.Finally, the calculated bulk modulus is in excellent agreement with the experimental data.For AmO2, the calculated value of 196 GPa offers a point of comparison against the large experimental range of 205–280 GPa.In this paper we have presented a systematic and comprehensive computational study of the actinide oxides: ThO2, UO2, NpO2, PuO2, AmO2, and CmO2.We have compared the performance of the LDA, PBE, AM05 and PBE-Sol functionals in combination with the DFT+U methodology.The choice of functional has little effect on the calculated band gap, but it has a major influence on the lattice constant, bulk modulus and the magnetic moment.In the majority of cases AM05 and PBE-Sol behave in an identical fashion providing the best estimate of the lattice constant.We conclude that either AM05 or PBE-Sol would provide the best performance in future research of these oxide materials.The selection of U and J primarily focused on the replication of the magnetic ground state of all oxides to ensure the correct modelling of the magnetic properties as well as the band gap and lattice constants.Where no experimental data was available for AmO2, we have proposed the transverse 3k AFM as the magnetic ground state.For the magnetically ordered oxides, UO2, NpO2 and AmO2, U and J were also parametrized to reproduce the effective An4+ magnetic moment of the materials.In terms of the lattice, displacement of the oxygen ions results in a Pa-3 structure which is only observed in the transverse 3k AFM structure employed in the calculation of UO2 and AmO2, but no such distortion is recorded from the longitudinal 3k AFM for NpO2.With the DFT+U parameters determined for each compound, we have also studied other properties like the band-structure, density of states, and the optical band gap.Only for PuO2 and NpO2 is excellent agreement found between the fundamental and optical band gap.Furthermore, where it has been generally accepted that the AnO2 are Mott insulators, only UO2 displays this characteristic, whereas the remaining AnO2 are calculated to be charge-transfer insulators.This paper provides the basic tools for future computational study of the actinide dioxides.For instance, surface and doping studies are the topic of ongoing research.Unfortunately, due to experimental limitations, detailed information on elastic constants, cohesive energies and enthalpies of formation is thus far absent from the literature, and our models are therefore strictly justified with respect only to the lattice constants and the electronic properties.As such, this paper offers a guide for theoretical researchers and highlights the need for further experimental information to justify our models.Once new experimental data are reported, these parameters may have to be revised.However, at the moment, this work offers as good a set of parameters to calculate the structures and properties of actinide oxides as can be derived from currently available experimental data. | The actinide oxides play a vital role in the nuclear fuel cycle. For systems where current experimental measurements are difficult, computational techniques provide a means of predicting their behaviour.
However, to date no systematic methodology exists in the literature to calculate the properties of the series, due to the lack of experimental data and the computational complexity of the systems. Here, we present a systematic study where, within the DFT+U formulism, we have parametrized the most suitable Coulombic (U) and exchange (J) parameters for different functionals (LDA, PBE, PBE-Sol and AM05) to reproduce the experimental band-gap and lattice parameters for ThO2, UO2, NpO2, PuO2, AmO2 and CmO2. After successfully identifying the most suitable parameters for these actinide dioxides, we have used our model to describe the electronic structures of the different systems and determine the band structures, optical band-gaps and the Bulk moduli. In general, PBE-Sol provides the most accurate reproduction of the experimental properties, where available. We have employed diamagnetic order for ThO2, PuO2 and CmO2, transverse 3k antiferromagnetic order for UO2 and AmO2, and longitudinal 3k antiferromagnetic order for NpO2. The Fm 3¯ m cubic symmetry is preserved for diamagnetic ThO2, PuO2 and CmO2 and longitudinal 3k NpO2. For UO2 and AmO2, the transverse 3k antiferromagnetic state results in Pa3¯ symmetry, in agreement with recent experimental findings. Although the electronic structure of ThO2 cannot be reproduced by DFT or DFT+U, for UO2, PuO2, NpO2, AmO2 and CmO2, the experimental properties are very well represented when U = 3.35 eV, 6.35 eV, 5.00 eV, 7.00 eV and 6.00 eV, respectively, with J = 0.00 eV, 0.00 eV, 0.75 eV, 0.50 eV and 0.00 eV, respectively. |
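The DFT+U study above selects the Hubbard U and exchange J by scanning U = 0.00–7.00 eV and J = 0.00–1.00 eV against experimental band gaps and lattice constants. The Python sketch below illustrates that kind of grid screening in schematic form only: the `dft_plus_u` hook is a stand-in for a real electronic-structure calculation, and the example targets and the equal weighting of the two observables are assumptions of this sketch, not choices taken from the paper.

```python
"""Minimal sketch of a (U, J) screening of the kind described in the DFT+U study
above. Nothing here reproduces the paper's workflow exactly; it only shows how a
grid of Hubbard parameters can be ranked against experimental observables."""

import itertools


def dft_plus_u(material: str, u_eV: float, j_eV: float) -> dict:
    """Placeholder: in practice this would launch and post-process a DFT+U
    calculation (VASP or a similar code) and return the relaxed lattice
    constant in Angstrom ('a_A') and the band gap in eV ('gap_eV')."""
    raise NotImplementedError("hook this up to your electronic-structure code")


def screen_u_j(material, a_exp, gap_exp, u_grid, j_grid, w_gap=1.0, w_a=1.0):
    """Scan a grid of U and J values and rank them by a weighted relative error
    against the experimental lattice constant and band gap."""
    results = []
    for u, j in itertools.product(u_grid, j_grid):
        calc = dft_plus_u(material, u, j)
        err = (w_gap * abs(calc["gap_eV"] - gap_exp) / gap_exp
               + w_a * abs(calc["a_A"] - a_exp) / a_exp)
        results.append((err, u, j, calc))
    results.sort(key=lambda r: r[0])
    return results


# Example usage (targets are illustrative, not the paper's tabulated data):
# best = screen_u_j("UO2", a_exp=5.47, gap_exp=2.1,
#                   u_grid=[x * 0.5 for x in range(0, 15)],   # 0.0-7.0 eV in 0.5 eV steps
#                   j_grid=[0.0, 0.25, 0.5, 0.75, 1.0])[0]
```

In practice the ranking step would also need to respect the intended magnetic ground state (for example discarding grid points that do not converge to the required DM or 3k AFM order), which is exactly the constraint the authors emphasise when choosing their final U and J values.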
31,444 | Biofuel futures in road transport - A modeling analysis for Sweden | The dependence on fossil fuels and the continuous increase of energy use in the transport sector have brought attention to transport biofuels as a measure to mitigate climate change and improve energy security.While biofuels currently only contribute a small share of the energy supply to the transport sector, several governments and intergovernmental organizations have declared policy targets which can lead to a significant increase in transport biofuel utilization.In the EU, energy from renewable sources in the transport sector should reach at least 10% by 2020.In addition, greenhouse gas emissions should be reduced by 20% to the same year, and a long-term ambition of reducing GHG by 80–95% to 2050 has been stated.In Sweden, the government has declared that the vehicle fleet should be independent of fossil fuels by 2030 while Swedish net emissions of GHGs should be zero by 2050.Meeting stringent climate targets will in significant ways change the energy system and will involve a large scale integration of low-carbon fuels and technologies in the road transport sector.Due to limited resources, an increased utilization of alternative energy carriers in the transport sector can be expected to have system effects over sector boundaries.For instance, biomass is used as raw material in the forest product industry and in the chemical industry as well as for both biofuel production and heat/power production.Changes in biomass demand in any of these sectors will affect biomass markets and, thus, imply altered conditions for other biomass applications.In the analysis of efficient ways of meeting climate targets for transport and energy systems, a wide systems approach is therefore imperative.This study investigates cost-efficient fuel and technology choices in the Swedish road transport sector in the presence of rigid climate policies in line with policy ambitions communicated nationally and by the EU.We focus on options that currently receive the major attention in Sweden, and in the public debate are considered as feasible near-to-medium term options in Swedish context.In particular, we study prospects for first and second generation biofuels, but also electricity is included in the analysis as an alternative option to biofuels.The objective is to provide insights and analytical results to how these options can be utilized in road transport to meet stringent climate and energy security targets.The policy objectives in focus are CO2 emission reductions for the national energy system as a whole and phase-out of fossil fuels in the road transport sector.The research questions are:To what extent could biofuels in road transport contribute to a cost-efficient achievement of stringent, system-wide CO2 reduction policy targets to 2050?,How does the attainment of an almost fossil-free road transport sector to 2030 affect cost-efficient fuel and technology choices and system costs?,To address the complex dynamic relationships between different sub-sectors of a national energy system, a systems modeling approach with an integrated view of the energy and transport system is applied.Important linkages between the transport sector and the rest of the energy system include the reliance on a common resource base in regard to biofuels and biomass-based heat and power, and the linkage between electricity generation and utilization of electric vehicles.A system-wide approach is also essential for the possibility to find cost-efficient 
GHG reduction strategies on an overall societal level and to avoid sub-optimized solutions.The study considers an array of technologies in all sectors of the energy system but, as indicated above, not all potential transport sector options are within the scope of the study.Examples of transport sector technologies and fuels that are not considered include: algae biofuels, hydrogen, electrofuels, fuel cell vehicles and electrified roads.The number of systems studies that analyze the co-evolution of the stationary energy system and the transport sector has grown in the scientific literature in recent years.The geographical scope of these studies ranges from regional and national to global.Global studies focusing on the development of the transport sector as an integrated part of energy system include Takeshita, Turton, Gül et al., Azar et al., Grahn et al., Hedenus et al., Gielen et al., Akashi and Hanaoka, Van Ruijven and van Vuuren, Kitous et al., 2010, Anandarajah et al., IEA and Kyle and Kim.Studies with a national scope include Jablonski et al., van Vliet et al., Schulz et al., Martinsen et al. and Yeh et al. covering UK, Netherlands, Switzerland, Germany and USA, respectively.The focuses and results of the studies differ.In terms of transport biofuels, the future market penetration range from low to high levels.Most of the studies show low to intermediate transport biofuel market shares at the end of their studied time horizons, with levels below 40% for climate policy scenarios not applying sector-specific polices.In the case of Sweden, the future development of the energy and transport system has earlier been modeled by, e.g., Börjesson and Ahlgren and Krook Riekkola et al.However, these studies do not investigate the recent and more ambitious policy targets, including an almost complete GHG emission and fossil fuel phase-out in the 2030–2050 timeframe.Further analysis in relation to these targets is thus required.In this section, the methodological approach is presented, including a general description of the model, main input data assumptions for the road transport sector and model scenarios.The analysis is based on MARKAL_Sweden, a dynamic, bottom-up, partial equilibrium, energy system optimization model.The model is an application of the well-established MARKAL model generator and includes a comprehensive description of the Swedish energy system.Under the provided conditions, the model delivers the overall welfare-maximizing1 system solution meeting all defined model constraints over the entire studied time horizon.The time horizon is from 1995 to 20502 and is divided in 5-year model periods, each model period being represented by a model year.While most energy carriers are represented on an annual basis, heat and electricity are further represented with three seasonal periods and two diurnal periods.The model applies perfect foresight, which means that there is full knowledge about the future development in the optimization.A discount rate of 6% is used.The current version of MARKAL_Sweden builds upon earlier MARKAL applications describing Sweden presented by, e.g., Bergendahl and Bergström and Unger and Alm but in particular on the more recent studies by Börjesson and Ahlgren and Börjesson et al.The model representation of the national energy system is structured as a network of energy technologies and energy carriers, covering fuel extraction via different types of energy conversion technologies and distribution chains to end-use demands on energy services, such as 
transportation and heating.The model depends on a large set of input data.Bottom-up technology data include technology costs and performance data such as operation and maintenance costs, investment costs and conversion efficiencies for technologies in all parts of the energy system.The technology representation includes supply technologies/processes, conversion technologies and end-use technologies as well as energy distribution processes.Reference projections for end-use energy service demand over the studied period are inputs to the model, but own-price elasticity of demand applies, making the final demand levels scenario dependent outputs of the model.For road transport, end-used demands are divided in eight different vehicle classes: small and large cars, long and short distance buses, long and short distance heavy trucks, light trucks and motorcycles.End-use service demand elasticities for the road transport vehicle classes are between 0.2 and 0.6 in the model.In the model, all sectors of the national energy system are represented, such as heat and power production, transport, industry, and commercial and residential premises.Thus, the model allows for demands in different sectors to compete for limited energy resources.The transport sector of the model includes road transport, aviation, railway, shipping and working machines.However, road transport is the focus of the study and other modes are modelled in a less detailed manner.The model does not include endogenous modal shift, neither between road transport and other parts of the transport sector nor between different transport classes within road transport.In the study, the limited supply of biomass resources constitutes an important model constraint.While a small increase in imports of biomass and biofuel are allowed in the model during the studied time horizon, the main biomass resource supply is based on estimates of domestic potentials.The main domestic sources of biomass for energy purposes in the model are forestry residues, wood product industry by-products and energy crops.For energy crops, 600 000 ha, or about 20% of the existing agricultural land in Sweden, is assumed to be available for energy crop cultivation.Table 1 presents a summary of the biomass being available for energy purposes in the Swedish energy system as a whole in the model.If energy forest is produced on the available agricultural land, the total bioenergy potential adds up to 178 TWh3 for model year 2050.In the model, the domestic biomass potentials and biomass costs are represented by detailed supply curves.Several first- and second-generation biofuels for transport are included in the study: ethanol, biodiesel, biogas, synthetic natural gas, methanol, dimethyl ether and Fischer Tropsch liquids.Second-generation biofuels here refer to biofuels based on wood biomass as feedstock.For ethanol, two production routes are represented, based on wheat and on wood biomass, respectively.SNG and biogas are produced in different processes but are both methane-based gases.4,An overview of the included biofuel processes and related technology data assumptions are given in Table 2.Properties refer to new plants built from the ground up with thermal input capacities of 200–250 MW, with the exception of biogas plants for which properties represent plant scales of 3 MW.Economical and technical lifetimes of 20 years are assumed for all processes.Table 3 presents assumed costs and energy penalties for distribution from the production site to filling stations and handling 
at the filling stations.Since planning and construction of new biofuel production and infrastructure are linked to long lead times, in particular for large scale projects, the establishment of new production capacity in the near term is constrained in the model.For model year 2015, new biofuel production capacity is restricted to projects already underway.For second generation biofuel production an upper constraint of 10 TWh on production capacity is applied also for model year 2020.From model year 2025 no restrictions on new biofuel production capacity are applied.The carbon intensity of transport biofuels is an important aspect in regard to their ability to mitigate CO2 emissions.Emissions contributing to the carbon intensity can originate from fossil fuel-based energy use in different parts of the well-to-tank biofuel chain.Further, land use change effects related to production of the biomass raw material can be an important contributor.In the model, the input energy for production and distribution of biofuels can be based both on fossil and renewable sources, and is to a large degree determined endogenously.The carbon intensity linked to this energy use can thus vary between different time periods and scenarios.As mentioned previously, the biomass resources available for energy purposes is to large extent by-products/residues from forestry and forest product industries, and in regard to energy crops, already established agricultural land is used.Since no major land-use changes are related to these biomass resources, carbon emissions related to this have been disregarded in the model.Several different vehicle technologies are represented in the model, including: internal combustion engine vehicles, hybrid electric vehicles, plug-in hybrids and battery-powered electric vehicles.Not all technologies are, however, available in all vehicle segments.PHEVs are only available for cars and BEVs are only available for about half of the car market.HEVs are available for light-duty vehicles as well as for heavy-duty transport, although the fuel savings are comparably small for long-distance heavy-duty transport.Most combinations of represented fuel and vehicle technology options are included.However, methanol and compressed methane, i.e., biogas, SNG, and natural gas, are unavailable for long-distance heavy-duty transport due to its comparably low specific energy content.Liquefied methane combined with 5% diesel is available for long-distance heavy-duty traffic.Assumptions on cost and performance data for cars and heavy trucks are presented in Table 4.Further details regarding the vehicle technology part of the model are also available in earlier publications.For the analysis, several different input scenarios are developed: one main analysis scenario with “base assumptions” and ten alternative scenarios, which test the sensitivity of altered conditions compared to the main scenario.The main scenario applies stringent CO2 reduction targets.General trends regarding end-use energy service demand development in different parts of the stationary energy system are in line with long-term forecasts by the Swedish Energy Agency.For the transport sector, reference demand projections are based on travel demand forecasts by the Swedish Transport Administration, which for cars imply a 65% increase in vehicle kilometers travelled from 2006 to 2050.The alternative scenarios simulate different developments in the stationary energy system as well as in the transport sector.In all scenarios, a stylized energy 
Table 5 shows an overview of the scenarios. To investigate the system effects of aiming for a "fossil-independent", i.e., an almost fossil-free, road transport sector to 2030, based on the declared Swedish government vision, an additional fossil fuel phase-out constraint is introduced. Thus, for each of the scenarios, two model cases are carried out: one case without and one case with an additional constraint on road transport fossil fuel use. The additional constraint, here denoted the fossil fuel phase-out (FFP) policy, is defined as an 80% reduction of fossil fuel end-use in the road transport sector to 2030 and a 100% reduction to 2050. This exogenously determined constraint forces the system to replace fossil fuel use in road transport by an increased use of biofuels and/or electricity early on in the studied period. As a reference for the calculation of the incremental system costs linked to the applied policies, model runs without CO2 or fossil fuel use restrictions are also carried out. Scenario GLOB_CA shows a gradual growth of road transport biofuel use throughout the studied time horizon. The system-wide CO2 emission cap, which implies an 80% emission reduction for the national energy system as a whole to 2050, means that no fossil fuels are used in road transport at the end of the studied period. In 2030, the use of biofuels in the road transport sector reaches 15 TWh, corresponding to 23% of total road transport final energy use. By 2050, road transport biofuel use is 42 TWh, corresponding to 78% of the final energy use of the sector. Electricity charged from the grid to PHEVs and BEVs accounts at this point for the remaining part, in total 12 TWh. Several different types of biofuels are utilized. During the studied period, methanol becomes the dominant fuel option for light-duty vehicles. In 2030, methanol use is 11 TWh and by 2040 it has increased to 15 TWh. However, as resource competition increases further, more energy-efficient system solutions are prioritized and, in 2050, methanol use has decreased to 12.5 TWh. At this point methanol is primarily used in PHEVs in the passenger car sector but also in hybrid light trucks and in short-distance heavy-duty traffic. Methane is used from early on, but the origin of the gas changes. Driven by high oil prices and CO2 penalties, fossil natural gas is used as a transport fuel for a large part of the studied period, primarily in the heavy-duty sector and both in compressed and liquefied form. As CO2 constraints are tightened, natural gas is gradually phased out while biogas and SNG increase in importance. In 2050, biogas and SNG use reaches 6.5 TWh and 16.5 TWh, respectively. Organic waste resources with limited alternative use are utilized for biogas production; however, feedstock options that are more exposed to competition, such as grown crops, are not utilized. Ethanol and FT liquids in the form of synthetic gasoline and diesel also appear in the model results. The possibility for ethanol imports is utilized up to its assumed allowed amount throughout the studied period. Domestic ethanol production is, however, phased out at an early stage. In 2050, 3.5 TWh of FT liquids are used in road transport. A certain amount of diesel fuel is required for gas-fuelled heavy-duty CI-ICEVs, and FT diesel is at
the end of the period chosen for this purpose over biodiesel or conventional diesel. Use of FT liquids in road transport is also promoted since FT liquids, in the model, are one of the few low-carbon options for non-road transport. Despite the increase in travel levels, final energy use in road transport decreases significantly during the studied time horizon as a consequence of the introduction of fuel-efficient vehicle technologies. Compared to 2000, road transport final energy use is about 8% lower in 2030 and 25% lower in 2050. From 2030, PHEVs play a large role for passenger cars and from 2045 also BEVs are used to a significant degree. In 2050, BEVs account for 75% of the market in the smaller car segment. HEVs dominate light truck transport as well as short-distance heavy-duty traffic, while CI-ICEVs dominate long-distance heavy-duty traffic for the entire time horizon. The introduction of a rigid, sector-specific policy reducing the use of fossil fuels in road transport by 80% already by 2030 significantly affects the road transport system, in particular in the shorter term. The FFP policy target is met by a large deployment of second generation biofuels, substantially brought forward in time compared to the case without FFP policy, and a larger use of the more fuel-efficient CI-ICE vehicles. Further, slightly lower travel demand levels are noted in 2020–2030 compared to the case without FFP policy. With the FFP policy applied, biofuel utilization reaches 43 TWh already in 2030 and accounts at this point for 72% of the final energy use of the sector. A marginal decrease in transport biofuel use to 42 TWh in 2050 then occurs, but the share of final energy use of the sector increases to 78% by the end of the studied time period due to the increasing use of fuel-efficient vehicle technologies. A large part of the FFP policy is met through a larger use of methanol; already in 2025–2035, methanol shows utilization levels in the road transport sector of 23–24 TWh. Natural gas is utilized only in very small amounts in this case. Instead, the FFP policy promotes an earlier increase of biogas and SNG in the gas mix and their combined use reaches 14 TWh in 2030. Due to the inertia of the system and the early establishment of large methanol utilization, the FFP policy case shows a larger use of methanol at the end of the period and a smaller use of biomethane compared to the case without the policy. In 2050, methanol use is 17.5 TWh, or 5 TWh higher than without FFP policy, and biomethane use is 18.5 TWh, or 4.5 TWh lower than without FFP policy. The electricity use is basically the same in both cases. The use of HEVs, PHEVs and BEVs is similar to the case without the FFP policy. These technologies are assumed to experience decreasing costs during the studied time horizon, and the fact that they are not used at an earlier stage suggests that the assumed cost levels are not competitive until the latter part of the studied time horizon. By definition, further constraining the system implies higher system costs. Thus, an early phase-out of fossil fuels in road transport in addition to system-wide CO2 reductions, as with the FFP policy, increases system costs. For GLOB_CA without FFP policy, the CO2 constraint reducing emissions by 80% for the Swedish energy system to 2050 increases the total system cost by 3.6% compared to a situation without CO2 restrictions. The introduction of the FFP policy gives a total system cost increase, for both CO2 and FFP policy, of 3.8% compared to a situation without CO2 restrictions.
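The roughly 6% figure mentioned next follows directly from these two system cost increases; a quick check is shown below, where the baseline system cost is an arbitrary placeholder since only the ratios matter.

```python
baseline_cost = 100.0                    # total system cost without CO2 restrictions (arbitrary)
cost_co2_only = baseline_cost * 1.036    # +3.6% with the CO2 cap alone
cost_co2_ffp = baseline_cost * 1.038     # +3.8% with the CO2 cap and the FFP policy

# Extra abatement cost of adding the FFP policy, relative to the CO2-only abatement cost.
increase = (cost_co2_ffp - cost_co2_only) / (cost_co2_only - baseline_cost)
print(f"{increase:.0%}")                 # ~6%
```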
Put differently, the implementation of the FFP policy increases total system-wide CO2 abatement costs by about 6%. Travel demand levels for future model years are about 10% and 11% lower for GLOB_CA without and with the FFP policy, respectively, compared to a situation without CO2 restrictions. The reduced travel demand will reduce the consumer surplus, and this constitutes a welfare loss that is a significant part of the additional system costs. Higher investment costs for road vehicles with alternative technologies, as well as investments in biofuel production, also contribute to an increase in system cost. However, even though the specific distribution costs generally are higher for the alternative transport fuels, there is no increase in total distribution costs for the system as a whole in GLOB_CA. For GLOB_CA with the FFP policy, the total system distribution costs are even somewhat lower than without CO2 restrictions. The reason for this is that the total amount of transport fuels is significantly lower in these scenarios compared to the situation without CO2 restrictions, due to higher use of energy-efficient vehicles and lower travel levels. Further, for the system as a whole, the costs of imported energy decrease while the costs of domestic energy increase. The "Other System Costs" category of Fig. 3 summarizes the net effect of a number of cost items, such as investments in the stationary energy sector, operation and maintenance costs and welfare loss due to demand reductions in other parts of the energy system than road transport. The sensitivity analysis shows that cost-efficient fuel choices in road transport, in general, and biofuel utilization, in particular, are sensitive to changes in some parameters while quite robust to other parameter changes. Changes in fuel use for each alternative scenario compared to the main scenario GLOB_CA are visualized for model years 2030 and 2050 in Fig.
6, for cases without FFP policy and for cases with FFP policy. In general, since road transport fuel use is constrained to a higher degree under the FFP policy, the differences between GLOB_CA and the alternative scenarios are smaller than in the absence of the FFP policy. In addition to biofuels, the only options to achieve CO2 and fossil fuel reduction in the model are increased electricity use and reduction of travel levels. Thus, mainly scenarios representing futures with considerably more costly electric vehicles, reduction of travel demand growth or less stringent CO2 policies show notable differences in transport fuel futures compared to the main scenario. In particular, less stringent CO2 policies for the national energy system as a whole result in significantly lower use of transport biofuels in the absence of the FFP policy. With the FFP policy, transport biofuel use in CO2_LR65 and CO2_LR50 is slightly higher than for GLOB_CA due to lower competition for biomass from the stationary energy system in these cases. Not surprisingly, a lower travel demand growth requires less transport fuel, including biofuels, while a slow cost reduction of electric vehicles leads to higher biofuel use. With the exception of the low CO2 reduction and low travel demand growth scenarios, the road transport biofuel use of the alternative scenarios differs only within a range of ±15% compared to GLOB_CA in 2050, both with and without FFP policy. The percentage change in road transport biofuel use compared to GLOB_CA is generally higher for model year 2030 than for model year 2050 for cases without FFP, while generally lower for cases with FFP policy. With stringent CO2 reductions, high investment costs for second generation biofuels lead to the lowest biofuel use for 2030 without FFP policy, almost 50% lower than in GLOB_CA. Scenario NAT_CA shows higher road transport biofuel use than GLOB_CA. The high oil price in NAT_CA implies a larger incentive for replacing oil with biofuels. Further, this scenario shows an advantageous situation for electricity imports, which implies less demand for biomass in the stationary energy system. Scenarios NUC_PO and BIO_LS imply a hardening of the competition for available biomass resources. For NUC_PO this is due to a higher demand for other low-carbon energy sources in the stationary system as nuclear power generation is phased out, while for BIO_LS it is due to lower biomass supply. Scenario MET_NO, which does not allow high-blend methanol fuels, implies that higher-cost biofuels need to be used. For all three of these scenarios, total transport biofuel use is somewhat lower than in GLOB_CA. Marginally higher use of transport biofuels compared to GLOB_CA is noted for PULP_SD, as a shutdown of part of the Swedish paper and pulp industry implies that more biomass is available for energy purposes. Regarding biofuel choices, several scenarios show the same inertia effect when introducing the FFP policy as seen in GLOB_CA. That is, with FFP policy a larger use of methanol is seen also at the end of the period compared to the case without the policy, while the use of SNG is smaller. Such scenarios include 2GEN_HC, BIO_LS, NUC_PO and PULP_SD. For CO2_LR50 and CO2_LR65, use of both methanol and SNG increases with FFP policy applied, while EV_HC, TRAD_SG and NAT_CA only show small changes in this regard. For MET_NO, the restrictions on methanol result in a higher use of other options, in particular SNG but also FT liquids as well as DME, which is an option not seen in any of the other cases; adding FFP
increases use of SNG while decreasing gasoline use. In Fig. 7, the incremental total system cost, or system CO2 abatement cost, for CO2 reduction as well as for CO2 reduction and FFP policy combined, in relation to the corresponding situation without policy restrictions, is visualized for all scenarios. In addition, the percentage increase in incremental system cost when adding FFP policy to CO2 reduction is given. In general, the more stress that is put on the system, e.g., regarding higher cost for alternative technologies or less supply of biomass or low-cost electricity, the higher the incremental system cost. Scenarios that, compared to GLOB_CA, imply narrowed options for the system include NUC_PO, 2GEN_HC, BIO_LS, EV_HC and MET_NO. The highest values are noted for scenario NUC_PO. Scenario conditions that, in comparison to GLOB_CA, put less stress on the system imply lower incremental system costs. Such scenario conditions include lower required CO2 emission reductions, more biomass available for energy purposes or lower travel demand. Among the scenarios that pursue the 80% CO2 reduction target, NAT_CA shows the lowest cost increase. This is due to the high fossil fuel prices assumed in this case, making the additional cost of choosing low-carbon options smaller. While the increase in incremental system costs of implementing the FFP policy for scenarios applying a CO2 reduction of 80% to 2050 is in the range of 5–11%, scenarios with lower CO2 reduction requirements show higher levels, 15–30%. This is in line with results showing a low deployment of alternative fuels in these scenarios when the FFP policy is not applied. The results of the study show that biofuels in the road transport sector can make an important contribution to the achievement of stringent CO2 emission reductions and fossil fuel phase-out targets without considerable system cost increases or excessive reliance on biofuel imports. The methodological approach is of importance for the interpretation of the results and a few notes can be made. The methodology applied is based on scenario analysis supported by bottom-up energy system modeling. The model represents both the transport sector and the stationary energy system and, in this way, important linkages between these sectors are captured. The model provides cost-optimal system solutions, taking a large number of conditions into account, and gives insights about future potentials and related system effects of technologies and policy strategies. Due to its partial equilibrium approach, macroeconomic effects are essentially not captured by the model; however, own-price elasticity is applied for end-use service demands. The model represents technological learning only in an exogenous manner, and does not capture any potential requirements for learning investments that could be required to achieve the assumed technology cost development. This can to a great extent be motivated by the national scope of the study, but is still of significance for the interpretation of the model results. The model calculations are based on direct technological costs and on full knowledge of future developments, and do not account for, e.g., future uncertainties, lack of information or financing issues. Due to the model features of optimization and perfect foresight, the system inertia regarding technological change is lower than in the real world. In the model, technological change occurs as soon as one option gives a lower system cost than the other; there are no "slow adopters" delaying such a change. However, as
it is seldom cost-efficient to retire a technology before the end of its technical lifetime and, due to the technological age structure of the segments, technological change still takes time; e.g., in the model results, the shift from ICEVs to PHEVs in the passenger car segment takes about 15 years. The analysis provides several insights into the future cost-efficient use of transport biofuels. While there are previous estimates of biofuel potentials, few have utilized a dynamic modeling approach as in the present study. Instead, most rely on static calculations based on appraisals of the amount of biomass resources not currently used and therefore potentially available for biofuel production. One recent estimation by Börjesson et al. suggests the potential for transport biofuel production in Sweden to be in the range of 25 to 35 TWh in the medium term, i.e., somewhat lower than what the present study suggests under some conditions, e.g., if the 80% fossil fuel phase-out to 2030 is to be achieved. One reason for this is that the present study allows biomass use to be reduced in one sector if biomass demand is higher in another. In comparison to other model-based studies, the resulting road transport biofuel shares of the present study are in the higher range. There are several reasons for this. One is that the CO2 emission reductions applied are more stringent than what many earlier studies assume for a similar timeframe. Another reason is that Sweden has a comparably high per capita biomass supply and the electricity sector, which is based on hydro and nuclear power, is already to a large degree carbon-free. Although some studies investigate emission reductions of the same magnitude as the present study, this is often done with a longer time horizon, which often leads to non-biomass-based options, e.g., hydrogen-based pathways, being applied in the second half of the century rather than biofuels. As shown in the sensitivity analysis, lower CO2 reductions significantly affect the cost-efficient potential for biofuels in transport. As mentioned, some potential future options, such as algae biofuels, hydrogen, electrofuels, fuel cell vehicles and electrified roads, are not included within the scope of the present analysis. An advantageous technological development for any of these options could lead to less demand for the options in focus in the present study. However, many of these alternatives have significant obstacles ahead. Regarding hydrogen, which may be one of the most promising options not included in the study, cost-efficient infrastructure and distribution will be a challenge, not least in a country like Sweden with a comparably low population density. In terms of biofuel choices, the results indicate, in accordance with previous results based on earlier versions of the model, that methanol is a cost-competitive biofuel option under the assumed conditions and technology characteristics. Advantageous features of methanol include low incremental costs for distribution and vehicles combined with a comparably high efficiency in the production process. Like other second-generation biofuel options, methanol also has the benefit of a biomass feedstock with high availability. Biomethane also accounts for a large share of the transport fuel supply. Regarding biogas, the benefit lies mostly in the possibility of using waste streams with few alternative areas of use. For SNG, one of the advantages is the high conversion efficiency in production, which is also a factor that grows in importance as competition for limited
biomass resources increases with more stringent climate targets. Other biofuel options are also present in the results, but at significantly lower shares than biomethane and methanol. This does not mean that there cannot be important roles for other alternative fuels as well; due to the formulation of the model, even though differences may be small, the lowest-cost option will take the whole market in a specific demand segment unless other constraints apply. Reality is also far more diversified than what could be represented in a model context, and cost-efficient niche markets cannot be ruled out. When methanol is further restricted in the sensitivity analyses, DME as well as FT liquids are seen in the results, although at a higher total system cost. Even though the model only partly captures inertia and lock-in effects linked to fuel and technology choices, the results indicate that the implementation of targets for an almost fossil-free road transport sector to 2030 also affects choices in the longer term. In this case, early targets favor methanol while disfavoring SNG, also in model year 2050, even though the road transport sector is at this point carbon-free whether or not the FFP policy is applied. The study has calculated the system cost increase of CO2 abatement with and without early fossil fuel phase-out in road transport. For stringent CO2 constraints, the increase in system CO2 abatement cost due to early fossil fuel phase-out is not insignificant but, at the same time, perhaps not too discouraging. For less stringent CO2 constraints, the cost increase is significantly higher. It should be noted that the model, other than CO2 emission reductions, does not take potential benefits of a fossil fuel phase-out policy into account. Such benefits could include lowered external costs for local pollution from road transport, less societal sensitivity to oil price shocks, or the development of know-how in a growing business area potentially leading to trade possibilities. The remaining questions may not so much be whether early fossil fuel phase-out in road transport is possible, but whether the benefits are worth the costs involved. The implementation of climate targets aiming at stringent reductions of CO2 in the 2050 timeframe requires substantial measures starting in the near term and also involving the transport sector. Along with energy-efficient vehicle technologies such as PHEVs and BEVs, biofuels can form an important part of cost-efficient system solutions meeting such targets. In the main scenario, the cost-optimized model results show a biofuel use in the Swedish road transport sector of 15 TWh in 2030 and 42 TWh in 2050, corresponding to an annual growth rate of about 6% per year between 2010 and 2050. Second generation biofuels, in particular methanol and SNG, as well as biogas based on anaerobic digestion, are options showing advantageous cost-performance in the results. The implementation of a fossil fuel phase-out policy, aiming at an almost fossil-free road transport sector already by 2030, requires a doubling of the annual growth rate of biofuels until 2030. The impact of a fossil fuel phase-out policy on transport fuel choices is considerable around 2030 but decreases towards the end of the studied period. However, due to early market establishment, methanol, the preferred option in the 2030 timeframe, becomes more advantageous also in the longer term, 2050, while market shares for SNG are affected negatively.
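The growth-rate figures above can be reproduced with a simple compound-growth calculation. The 2010 starting level used below is an assumption of roughly the Swedish road-transport biofuel use at that time and is not a figure taken from the study.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two levels."""
    return (end / start) ** (1.0 / years) - 1.0

use_2010 = 5.0                                   # TWh, assumed starting level
print(f"2010-2050 (42 TWh by 2050): {cagr(use_2010, 42.0, 40):.1%}")           # ~5.5%, i.e. 'about 6%'
print(f"2010-2030 with FFP (43 TWh by 2030): {cagr(use_2010, 43.0, 20):.1%}")  # ~11%, roughly double
```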
The fossil fuel phase-out policy increases system CO2 abatement costs by 5–11% for stringent CO2 reduction scenarios. CO2 reduction levels are of great significance for the cost-competitiveness of transport biofuels and, therefore, the additional CO2 abatement cost for a fossil fuel phase-out policy under less stringent CO2 reductions is notably higher. Both in terms of system-wide CO2 reduction and fossil fuel phase-out in road transport, measures for reduced travel demand growth can, depending on the costs of these measures, imply opportunities for cost savings. | First and second generation biofuels are among the few low-carbon alternatives for road transport that are currently commercially available or in an early commercialization phase. They are thus potential options for meeting climate targets in the medium term. For the case of Sweden, we investigate cost-efficient use of biofuels in road transport under system-wide CO2 reduction targets to 2050, and the effects of implementing targets for an almost fossil-free road transport sector to 2030. We apply the bottom-up optimization model MARKAL_Sweden, which covers the entire Swedish energy system including the transport sector. For CO2 reductions of 80% to 2050 in the Swedish energy system as a whole, the results of the main scenario show an annual growth rate for road transport biofuels of about 6% from 2010 to 2050, with biofuels accounting for 78% of road transport final energy use in 2050. The preferred biofuel choices are methanol and biomethane. When introducing additional fossil fuel phase-out policies in road transport (-80% to 2030), a doubling of the growth rate to 2030 is required and system CO2 abatement costs increase by 6% for the main scenario. Results imply that second generation biofuels, along with energy-efficient vehicle technologies such as plug-in hybrids, can be an important part of optimized system solutions meeting stringent medium-term climate targets. |
31,445 | Feeling Confident and Smart with Webrooming: Understanding the Consumer's Path to Satisfaction | Consumers easily interact with online and offline channels to search for information and buy products.They generally adopt two shopping patterns.With showrooming, they visit physical retailers to search for information and then log on to the Internet to make the purchase.According to eMarketer, 72% of U.S. digital shoppers bought after seeing a product in a store, and 10% of European online users research products offline before buying them online.With webrooming, consumers research products online and then make their purchase offline.Webrooming is the dominant cross-channel combination; 78% of U.S. shoppers and 42% of European online users engage in it.When making large, expensive purchases, consumers may spend over a month researching product information through online and offline sources, and almost two-thirds go through a webrooming shopping experience, whereas 30% prefer to showroom.These shopping patterns afford firms less control of the customer experience and may threaten retailers in the form of free-riding behaviors.However, past research reports that consumers who use multiple channels purchase more products, spend more and are more satisfied than single-channel consumers.Satisfaction is one of the key elements of customer experience management and is the cornerstone for retaining and establishing long-term customer relationships.Yet, however, there are few studies that investigate the specific channel combinations that influence consumer satisfaction.This research aims at filling this gap in the literature.The objective of this research is to analyze how webrooming influences satisfaction with the search experience."We focus on the consumer's feeling of confidence in the adequacy of the product, and his or her feeling of being a smart shopper, as drivers of the influence of webrooming on search process satisfaction.We investigate these relationships in a two-stage methodology, with three studies, combining survey-based data, qualitative-based data and experiment-based data."We contribute to the literature by confirming that webrooming is the most effective cross-channel combination to increase satisfaction, which, in turn, enhances customer loyalty and determines a firm's long-term survival.First, we find that webrooming is undertaken more frequently and is more satisfactory to the consumer than other single-channel and cross-channel behaviors."This may be due to the customer's perception, felt with a high degree of confidence, that he or she has made the right purchase.Second, we demonstrate that webrooming increases satisfaction with the search process, compared to showrooming, and that confidence and smart shopping feelings explain this effect.The perception that money is saved also contributes to search process satisfaction, but convenience appears to have no influence.The results are fairly consistent across different shopping motivations and types of product.Thus, companies who want to integrate channels and take advantage of channel synergies should embrace webrooming as the cross-channel behavior most likely to enhance consumer satisfaction.Allowing the customer to feel in control, feel confident and experience smart shopping feelings is the best path to elicit feelings of satisfaction.Our definition of webrooming is in line with the cross-channel and research shopping streams, and follows previous conceptualizations of two-stage decision-making processes.Webrooming 
is a cross-channel process with a decision-making phase divided into two parts.In the first stage, the consumer searches for and finds on the Internet an alternative product that probably best matches his or her needs; in the second stage, the consumer confirms the information at the physical store and makes the purchase.The previous literature generally adopts an economic perspective to analyze webrooming, in which consumers weigh up the costs and benefits of channel use during the purchase process.Shoppers combine the channels that allow them to minimize their inputs and/or maximize their outputs."In this way, motivations, goals and schemas determine consumers' informational needs and lead to different channel usage.Goal-directed consumers, and those who demand convenience, combine channels to maximize shopping efficiency.For these consumers, the Internet saves time and effort in searching for product information, and the physical store offers immediate possession of the merchandise.Important purchases with high implicit risks affect the use of cross-channel shopping.Consumers gather objective information about attributes and prices online, which reduces purchase risk; the physical channel thereafter provides them with reassurance.Cross-channel shopping may also help price-oriented consumers find better deals and increase the experiential value of shopping for hedonic-oriented consumers.The economic approach has been applied to attribute-based decision-making research."Consumers' perceptions of channel characteristics at different stages of the purchase process determine cross-channel shopping.In this way, Verhoef, Neslin, and Vroomen classify channel attributes in terms of benefits and costs and compare the Internet, catalogs and physical stores.They find that the Internet is the preferred search channel because it provides the highest convenience and facilitates comparisons.However, the physical store is preferred for purchasing due to its enhanced service quality and low purchase risks.These results have been consistently confirmed in the literature.Less is known about the consequences of cross-channel shopping for customer experience management.Previous studies consider the influence of channel synergies on consumer behavior.Using multiple channels during the shopping process produces complementarities and is positively related to consumption, attitudes toward retailers, satisfaction and loyalty.However, previous studies consider cross-channel behavior in general, neglecting the impact of specific channel combinations.We analyze this research question.Our argument is clear: webrooming is the most frequent cross-channel shopping behavior because it provides consumers with more satisfactory search experiences.We focus on search process satisfaction as a particular outcome of the shopping experience.Fig. 
1 shows the research model and the proposed hypotheses. Specifically, we expect that carrying out webrooming behaviors, compared to showrooming behaviors, increases consumer satisfaction with the search experience. Furthermore, we expect that confidence and smart shopping feelings explain this main effect. The possible influence of contextual factors, in terms of shopping motivations and product characteristics, is also controlled. Search process satisfaction is defined as the satisfaction with the actual information search process. Consumers experience satisfaction not only with the chosen product but also with the shopping experience. Previous research notes that satisfaction is a key outcome of cross-channel shopping. Satisfaction with the decision-making process leads to consumption satisfaction and positively influences post-choice behavior. The previous literature addresses the overall impact of channel integration on satisfaction. For example, Pantano and Viassone find that using multiple channels increases perceived service quality, leading to satisfaction, in a multichannel retail environment. Other researchers show that multichannel information searches provide enhanced satisfaction with the experience in comparison to single-channel searches. However, until recently, no study had analyzed the impact of different channel combinations on the consumer's search process satisfaction. In a recent exception, Jang, Prasad, and Ratchford examine price satisfaction as the final search outcome in car purchases, and find no differences based on the number of channels used to search for information. We propose that webrooming has a positive influence on search process satisfaction. Following cognitive fit theory, webroomers take advantage of each channel during their search experience to satisfy their purchase needs. Compared to showrooming, webrooming offers the reassurance of a physical interaction with the product, immediate possession and sensorially engaging experiences. Consumers can derive more satisfaction through the use of self-service technologies during search activities than when purchasing. Therefore: Webrooming has a positive impact on search process satisfaction. Confidence is a mental state of certainty when evaluating a product, brand or purchase situation. Consumers not only form a favorable impression of the product but also feel right about it. In a pre-decision phase, the concept of the strength of the emerging preference is referred to as "confidence in the leading option" and reflects "the confidence that the emerging leader in the choice process will ultimately be one's final decision, even if confronted with additional information". Confidence thus relates to traditional marketing concepts such as attitude strength and feelings of certainty in evaluations. Cross-channel consumers are assumed to have a certain degree of involvement with the product and/or the purchase. They are motivated to choose the best shopping option, which increases the uncertainty associated with the purchase. Cross-channel consumers strive to reduce uncertainty and feel confident that the product is the best match to their needs. When consumers combine channels, they create individuated information that enhances their perception of being in control and their belief that they are making the right choice. We propose that webrooming is the best channel combination to evoke feelings of confidence. Webrooming search enhances the consumer's knowledge of and preferences for the product, reduces information asymmetries and
enhances control over the purchasing process.For consumers who expend effort to find the best alternative, the Internet is superior to other information channels.Showroomers might seek low prices or convenience, in which case confidence may be less of an issue, which it certainly is for webroomers.Although consumers may showroom to make the best purchase decision, they have less control over their purchases than webroomers, given that online purchases may result in delayed delivery or a product that does not meet with expectations.Thus:Webrooming has a positive impact on confidence in making the right purchase.We also note the positive relationship between confidence and search process satisfaction.The more confident consumers are about their judgments, the more favorably they evaluate the search experience.If webrooming provides consumers with confidence, their search process satisfaction will be enhanced.Formally:Confidence mediates the impact of webrooming on search process satisfaction.Consumers feel smart because they have invested time and effort in searching and using promotion-related information to achieve price savings.From a utility perspective, smart shopping feelings arise when the consumer pays a lower price than the internal reference price and feels pleasure, pride or “like a winner”.Recently, smart shopping feelings are shown to be related to outcomes other than paying a low price.Specifically, Atkins and Kim develop a three-dimensional structure of shopping benefits that leads to smart shopping feelings.They show that consumers feel smart not only because they achieve monetary savings, but also because they achieve time and/or effort savings, or because they perceive they are making the right purchase.Smart shopping feelings are likely to occur in cross-channel shopping settings.Using multiple channels helps to affirm personal traits, such as thrift or expertise.Cross-channel consumers may feel smart because they believe that “searching on one channel allows them to make better purchase decisions on another channel due to their own ‘smart’ search behavior”."We propose that webrooming can lead to smart shopping feelings and thus influence the consumer's search process satisfaction.Pauwels et al. 
find that informational websites may help consumers to make smarter purchases.The information availability, transparency and convenience of the Internet reduce information asymmetries and empower consumers in their relationship with vendors.Smart shopping feelings arise when consumers perceive themselves as being responsible for the purchase outcome and having control of the causes that generate the outcome.Therefore, webroomers, in comparison to showroomers, may feel more in control and responsible for purchase outcomes.Although showroomers may feel smart when they find low prices or save time in the purchase process, they are not ultimately responsible for, nor do they have control of, the final outcome of the purchase.Thus:Webrooming has a positive impact on smart shopping feelings.Previous studies establish a positive link between smart shopping feelings and satisfaction.Consumers who feel responsible for purchase outcomes have smart shopping feelings and derive positive consequences from the experience, such as purchase satisfaction.We propose that smart shopping feelings may account for the effect of webrooming on search process satisfaction:Smart shopping feelings mediate the impact of webrooming on search process satisfaction.Channel integration benefits may depend on situational factors.We analyze webrooming as a goal-directed cross-channel behavior where consumers invest time and effort to make the best possible purchase.However, we must take into account that cross-channel consumers may also be driven by other goal-directed motivations.Our research proposal focuses on the maximization of the output of the experience, yet consumers may also be motivated to minimize inputs.Balasubramanian, Raghunathan, and Mahajan argue that cross-channel consumers are driven by their desire for convenience, efficient information and the best value.Previous studies show that cross-channel consumers seek efficiency in their purchases and try to save time and effort in making the decision.Furthermore, consumers use information and purchase channels to buy products at reduced prices and the perception that the Internet allows consumers to search and find low prices intensifies this motivation.Finding low prices is a key reason for online shopping and showrooming.For webroomers, previous research shows that they may be motivated to find better deals at stores but also that price may not be an important variable.Our exploration of webrooming behavior takes into account these different instrumental motives for cross-channel shopping.In a similar vein, we note the possible influence of product type on cross-channel behavior."One key determinant of cross-channel behavior is the consumer's capacity to assess product quality before interacting physically with the product.Consumers may prefer to purchase experience or touch products in a physical store more so than is the case for search and non-touch products.Furthermore, products differ in the amount of time, money and energy that the consumer spends in the purchase process.This product dimension has also been identified as a determinant of cross-channel behavior and smart shopping feelings.The purchase of products that require a high investment of resources are carried out through cross-channel processes to a greater extent than products with low resource investment demand.Consumers perceive higher risk in the purchase of high cost products than in the purchase of low cost products."Therefore, the consumer's need to complete the shopping experience in a 
physical store may be higher for high-cost products. Thus, we consider the influence of these product dimensions in the analysis of webrooming behavior. The research uses a mixed methodology, combining experimental procedures with survey-based data to explore webrooming in depth and test the hypotheses. First, an online survey explores the channel combinations most frequently associated with satisfaction or dissatisfaction outcomes. This requires participants to recall and describe a shopping experience that had different satisfaction levels. Two follow-up analyses offer a comprehensive picture of the webrooming phenomenon and a tentative test of the hypotheses. Taking into account the results of the first exploratory study, we carried out a lab experiment to analyze the impact of webrooming and showrooming behaviors on confidence and satisfaction. Finally, Study 3 consists of an experimental design, involving real consumers, who were asked to evaluate webrooming versus showrooming behaviors in terms of confidence, smart shopping feelings and satisfaction. In all the studies we consider different shopping motivations and/or product characteristics to control for the effects of contextual factors. The results of the analyses involving these contextual factors are reported in the Appendix. The studies were conducted with samples made up of millennials, since their growth as consumers is parallel to that of the Internet as an information and purchase channel. Millennials are a generational cohort born roughly between 1980 and 2000 and comprise the largest consumer segment in the US and Western Europe. Millennials are characterized by their ubiquitous use of technology. Thus, they represent the most suitable population to test our hypotheses. The exploratory research was carried out in collaboration with a national multichannel retailer. We prescreened participants to ensure they were part of the millennial generation. A total of 421 customers took part in the study. Participants had to recall and describe a recent purchase episode. We manipulated the types of experience to obtain different levels of satisfaction. We asked participants to recall a recent purchase experience that made them feel especially satisfied, especially dissatisfied, or that was simply an ordinary purchase experience. They had to describe the product they purchased and give all the information sources used. They were also asked to give the reasons for their satisfaction or dissatisfaction. The shopping episodes were classified into six categories: two single-channel episodes, namely, purely online and purely physical; and four cross-channel shopping episodes, namely, webrooming, showrooming, printed media combined with the physical store, and other combinations of at least two channels. The content analysis also allowed us to refine the data and check the satisfaction manipulation. The final sample was 368 participants. Cross-channel shopping dominated single-channel shopping in all conditions. For the cross-channel episodes, virtual and physical channels in two-stage processes were more frequently reported than any other channel combination or sequence. Also, we created a dummy variable and submitted it to a chi-squared analysis crossed by the experimental groups (χ² = 46.654, p < .001). Webrooming episodes were higher in the satisfaction group than in the other conditions, providing initial support for H1. We developed two follow-up analyses. First, we developed a qualitative analysis to better understand cross-channel behaviors in terms of motivations and outcomes. Second, we collected additional data to investigate the associations between channel combinations, products and satisfaction.
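The chi-squared test reported above compares how often each shopping pattern is recalled across the three satisfaction conditions. A sketch of that type of analysis is shown below; the cell counts are hypothetical and only the table structure mirrors the study.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: recalled shopping pattern; columns: satisfied / neutral / dissatisfied condition.
counts = np.array([[55, 30, 25],    # webrooming episodes
                   [12, 14, 16],    # showrooming episodes
                   [40, 60, 56]])   # other single- and cross-channel episodes
chi2, p_value, dof, expected = chi2_contingency(counts)
print(round(chi2, 3), round(p_value, 4), dof)
```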
Both follow-up analyses shed light on the influence of confidence and smart shopping feelings on satisfaction, as well as on the influence of the contextual factors. We arranged two group interview sessions with 16 participants from the first study. Eight individuals took part in each session. The samples consisted of graduate and postgraduate students and employees with three to five years of work experience. Three experts in marketing, sociology and new technologies conducted the content analysis using ATLAS.ti v6.2 software. The discussions focused on webrooming and showrooming. The motivations behind the behaviors had common characteristics. First, both webroomers and showroomers seek to reduce the uncertainty related to the purchase situation or the channel. Second, they carry out these behaviors to save time and/or effort in the purchase. Showrooming involves using the physical channel to search for product information with convenience and to take advantage of the assistance that might be offered by sales staff. The convenience motivation was also associated with webrooming. Consumers said that they employed the online channel because it conveniently accelerated the information-gathering process and allowed them to check product availability at the physical store. However, differences were also identified. Paying a low price is a main motivation for showrooming. For webroomers, price does not appear to be such an important factor and the effort exerted during the process is not perceived as a cost. In fact, several participants declared they had paid more than they had planned for the product, but that the surcharge was not a problem, because "the effort is well worthwhile". Following this idea, webrooming is carried out to make the best possible purchase. The high level of purchase importance motivates the consumer to exhaustively search the Internet to find the product that may be the best match to his/her needs. Yet this still does not dispel all the uncertainty. The consumer still has to visit the store to verify that the researched information is accurate and confirm that the product is the best choice. Although the participants care about the experience and take an active role, webrooming allows them to control the process and purchase outcomes to a greater extent than showrooming. In sum, convenience is shown to be an important determining factor for cross-channel shopping, both in webrooming and showrooming. However, webrooming and showrooming appear to have different motivations: the wish to make the right purchase leads the consumer to webroom, whereas the wish to save money drives consumers to showroom. In the next follow-up analysis, we explore the consumers' reasons for their satisfaction with webrooming and showrooming. Following the same procedure as in the main study, we obtained 140 additional valid respondents in the satisfactory condition. The final database was 264 respondents. The appendix displays cross-tabulated information about the types of shopping episodes and product categories. Table 2 displays the reasons for satisfaction depending on the type of purchase episode. The reasons are coded in terms of smart shopping perceptions: time/effort savings or convenience, feelings of having made the right purchase and money savings. However, the codification process identifies other
types of reason, also related to the smart shopper construct: effort appreciation, product uniqueness and the hedonic component of the experience. Each statement could be assigned to several codes. Most webroomers indicate that finding the best product/good value for money is the main reason for their satisfaction, whereas they barely mention convenience and price. Webroomers also derive satisfaction from the effort exerted during the process. For showroomers, price is the most important reason for satisfaction; more than three quarters of them highlight the lower prices found online. However, showroomers cite reasons related to having made the right purchase less than the other cross-channel shoppers. Therefore, webroomers and showroomers seem to purchase similar types of products; however, the reasons that lead to satisfaction, which may reflect different shopping motivations, differ. Altogether, the results of this exploratory research reveal that webrooming is undertaken more frequently and is more satisfactory than other single-channel and cross-channel behaviors. Webrooming purchases are important for consumers, who care about making the best possible purchase. Webroomers need to be sure or be convinced about the suitability of an alternative, and lack of confidence may be a key reason why they visit the physical store. The following studies examine whether confidence in making the best purchase can be the driver of consumer satisfaction with webrooming. In Study 2 we compare webrooming to showrooming and analyze the differences between a product under consideration in a cross-channel interaction and a product with no previous interaction. If webrooming instills confidence in the product initially considered as the best shopping option, it will be interesting to examine how stable this preference is when confronted with new, competing information. Also, we take into account the possible influence of product type by analyzing two categories, clothing and electronics, which differ in their search-experience properties. These product categories are the most cited by the Study 1 participants. They are frequently purchased through cross-channel shopping. Previous research categorizes electronics as search goods, and clothing and accessories as experience goods. Moreover, the results of a pre-test confirmed the manipulation of the type of product and helped in the selection of appropriate stimuli for the study. The study context was a simulated purchase in the university store. The design of the experiment was a full factorial, 2 (cross-channel sequence: webrooming vs. showrooming) × 2 (product type: clothing vs. electronics) between-subjects design. A sample of 262 undergraduate students was randomly assigned to one of the four conditions. Previous studies of multichannel consumer behavior use students, who represent a valid sample population. We prescreened all participants to ensure that they had had Internet purchase experience. The experiment was in two parts. In the first, the participants had an initial interaction with a product that they were told had to be considered for purchase. Participants were randomly assigned to either the online or the physical interaction. After interacting with the product information, the participants responded to 7-point Likert scales regarding confidence and search process satisfaction. In the second part, the participants had either a physical or an online interaction with the same product and a new alternative. The rival product had characteristics similar to those of the target. After a few minutes, they completed the second part of the questionnaire on their confidence levels in both products, and search process satisfaction.
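A sketch of the kind of 2 × 2 between-subjects MANOVA used on the three dependent variables is shown below. The data are simulated and the variable names are hypothetical; only the model specification mirrors the analysis.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 262
df = pd.DataFrame({
    "sequence": rng.choice(["webrooming", "showrooming"], n),
    "product": rng.choice(["clothing", "electronics"], n),
})
# Simulated 7-point responses with a small webrooming advantage on all three DVs.
boost = (df["sequence"] == "webrooming") * 0.6
for dv in ["conf_target", "conf_rival", "satisfaction"]:
    df[dv] = (4.5 + boost + rng.normal(0, 1, n)).clip(1, 7)

fit = MANOVA.from_formula(
    "conf_target + conf_rival + satisfaction ~ C(sequence) * C(product)", data=df)
print(fit.mv_test())   # Wilks' lambda / Pillai's trace for each effect
```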
The correlations between the dependent variables were positive and significant, and the results of the subsequent MANOVA revealed a multivariate effect for the cross-channel sequence (F = 7.944, p < .001). Participants in the webrooming condition reported higher levels of search process satisfaction and confidence in the target product than participants in the showrooming condition, supporting H1 and H2. Confidence in the rival product was also significantly higher in webrooming than in showrooming. This result is logical and consistent with previous literature regarding the role of direct experiences in reinforcing consumers' confidence. Regarding the effect of product type, the MANOVA yielded a significant multivariate effect (F = 6.646, p < .001). At the univariate level, no interaction effects were found. The direct effects of webrooming on confidence and satisfaction did not depend on the product type. Next, we used the macro PROCESS v3 to examine the mediating effects of confidence. Confidence in the target and the rival products acted as parallel mediators and the product type was included as a covariate. After controlling for product type, confidence in the target mediated the effect of webrooming on search process satisfaction; yet, the type of sequence still had a direct influence, revealing partial mediation. Overall, the analysis confirms that webrooming participants are more confident in the adequacy of the product and more satisfied with the search process. In fact, confidence drives the customer experience. However, webrooming still has a direct effect on satisfaction. This complementary mediation suggests that there may be another potential mediator omitted from the model. Furthermore, the relationships between webrooming, confidence and search process satisfaction have been demonstrated in a controlled setting and with a limited number of products. To overcome these limitations, Study 3 used real consumers to test the hypotheses and to analyze smart shopping feelings as a complementary mediator of the effects of webrooming on satisfaction. Furthermore, the study analyzes a wider set of products and shopping motivations. By controlling for these contextual factors, we aim to test the robustness of the relationships across different shopping situations. In Study 3 we examine the influence of webrooming on consumer confidence and smart shopping feelings, and whether these variables together explain why webrooming produces more search process satisfaction than showrooming. Feeling smart is a direct consequence of the transfer of power that the proliferation of marketing channels causes in customer–firm relationships, and determines customer satisfaction and customer experience. As previously stated, smart shopping feelings include benefits related to savings in money, time and/or effort and to the perception that one has made the right purchase. These shopping benefits are intertwined with the goal-directed motivations of cross-channel consumers. By manipulating shopping motivation types, we control for this contextual factor in the relationships between webrooming, confidence, smart shopping feelings and search process satisfaction. Specialist retailers were contacted through their trade associations and asked to collaborate in the study. The study pool was formed by 21 multichannel specialist retailers, trading in 6 different product categories. We obtained a final valid sample of 468 real customers. We developed cross-channel shopping
situations with a varied set of motivations that could lead to smart shopping feelings. Specifically, we developed a 2 (cross-channel sequence: webrooming vs. showrooming) × 3 (shopping motivation) between-subjects factorial design. We adapted each condition to the product type sold by each participating retailer; thus, the participants were introduced to a realistic situation. We generated a total of 21 × 2 × 3 = 126 scenarios. The participants were randomly assigned to one of the six conditions. They read an account of a shopping experience that either started with an online search and ended with a purchase at a physical store (webrooming) or started with a visit to a physical store and ended with an online purchase (showrooming). The accounts were manipulated in terms of different shopping motivations, in accordance with the smart shopping literature. We then asked the participants to judge how the consumer would react to the shopping scenario presented. To check the manipulation of the shopping motivation, the participants were asked to answer the three-dimensional smart shopping perceptions scale developed by Atkins and Kim. They also reported smart shopping feelings: Alex feels happy about this purchase; Alex feels it was a smart purchase; Alex feels pride about this purchase. Confidence and search process satisfaction were measured in the same way as in Study 2. The scales were validated in two steps through an Exploratory Factor Analysis and a Confirmatory Factor Analysis. First, we carried out an analysis of reliability and dimensionality. Second, we conducted a Confirmatory Factor Analysis using Structural Equation Modeling with EQS 6.3 software. The initial factor structure revealed that all the item loadings scored above the recommended benchmark of 0.7, with the exception of one item of the perceived time and effort savings scale. This item was removed. The composite reliabilities were above 0.65, supporting the internal consistency of the scales. The Average Variance Extracted (AVE) was higher than 0.5, assuring convergent validity. Finally, discriminant validity was supported, since the square root of the AVE was higher than the shared variance among the constructs. Following the validation of the measurement instruments, we calculated the average values of the items to check the manipulations and test the hypotheses. The type of shopping motivation has a significant effect on perceptions of time/effort savings (F = 31.701, p < .001). The post-hoc Tukey test shows that these perceptions are higher when the consumers seek to reduce the time and effort they expend in shopping than when they are motivated to make the right purchase and to save money. In addition, perceptions of money savings are significantly affected by the shopping motivation (F = 90.437, p < .001). These perceptions are higher for participants motivated to save money than for those motivated to save time/effort and for those with the right purchase motivation. However, the best purchase motivation does not directly produce higher perceptions of having made the right purchase than the other motivations. Further analysis indicates that these perceptions depend on the type of shopping experience (F = 3.794, p < .05). The perception that the right purchase has been made is higher for the right purchase motivation group than for the other motivation groups, but only in the webrooming scenarios. With this exception, the manipulation of the type of shopping motivation was successful. Table 5 shows descriptive data for each treatment and the results of the univariate ANOVAs.
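A sketch of the manipulation-check analysis reported above (a one-way ANOVA on a perception score by motivation condition, followed by Tukey's HSD) is given below, using simulated data and hypothetical effect sizes.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
groups = np.repeat(["right_purchase", "money_saving", "time_effort_saving"], 156)
means = {"right_purchase": 3.5, "money_saving": 5.5, "time_effort_saving": 3.8}
scores = np.array([means[g] for g in groups]) + rng.normal(0, 1.0, groups.size)
df = pd.DataFrame({"motivation": groups, "money_savings": scores})

print(anova_lm(smf.ols("money_savings ~ C(motivation)", data=df).fit()))  # omnibus F test
print(pairwise_tukeyhsd(df["money_savings"], df["motivation"]))           # post-hoc comparisons
```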
and H4 are also supported by the data: confidence and smart shopping feelings are significantly higher among the webrooming than the showrooming participants.No other main or interaction effects are significant.To test hypotheses H3 and H5 a mediation model with parallel mediators was estimated, using the macro PROCESS v3.The model proposes that webrooming has direct effects on confidence, smart shopping feelings and search process satisfaction.In addition, it is posited that confidence and smart shopping feelings mediate the effect of webrooming on satisfaction.Perceptions of time/effort savings, money savings and having made the right purchase are included as covariates.Table 6 displays the results of the mediation model.The model shows an acceptable explanatory power.The direct effect of webrooming on search process satisfaction disappears when the mediators and covariates are included in the model.Confidence and smart shopping feelings significantly influence satisfaction.The results of the bootstrapping for the indirect effects reveal full mediation of both variables.The pairwise comparison between the two indirect effects is not significant, since zero was included in the bootstrap confidence interval.Thus, both confidence and smart shopping feelings equally mediate the impact of webrooming on satisfaction.Hypotheses H3 and H5 are supported.In addition, perceptions of time/effort savings do not affect satisfaction, which is consistent with our previous findings.Webroomers value the effort invested in their purchases, and efficiency/convenience are not related to their search process satisfaction.However, perceptions of money savings have a significant positive influence on satisfaction.Overall, the results generally support our research proposal.Webroomers are more satisfied with their search experience than showroomers.Search process satisfaction is explained by confidence in having made the right purchase and smart shopping feelings.The analysis regarding the influence of type of product on the dependent variables and their relationships is detailed in the appendix.The general model is highly consistent across product characteristics.This is one of the first studies to analyze the impact of the use of specific combinations of online and offline channels on the consumer experience.Table 7 summarizes the main findings of this research.We contribute to the multichannel marketing literature by showing that webrooming is a more satisfactory cross-channel shopping experience than showrooming.This may be a reason why webrooming is the prevalent form of cross-channel shopping behavior.We consistently find this effect in a series of studies combining quantitative and qualitative techniques."Our research focuses on the analysis of this effect and explains the different factors that define the consumer's search process satisfaction with webrooming.The results of the exploratory research reveal that webrooming is associated with satisfactory shopping experiences.Webroomers are motivated to make the best possible purchase, and they search for information intensively with the ultimate goal of having confidence in their decisions.Webrooming is the best channel combination for accomplishing this shopping goal, which has a strong influence on consumer satisfaction.The results of three conclusive studies confirm our propositions.Confidence is higher following a webrooming experience than following other cross-channel experiences.Confidence, then, explains why webrooming leads to search process 
satisfaction. Moreover, webrooming generates smart shopping feelings, which also determine satisfaction. Webroomers weigh the benefits and limitations of each channel to make smart choices. The online information search reduces information asymmetries and empowers consumers during the purchase journey. The physical store provides reassurance to the consumer and immediate possession of the product. Thus, the consumers perceive themselves as being responsible for the purchase outcome and as being in control of the causes that generate it, which, in turn, causes smart shopping feelings. The results of the exploratory study support this notion. Thus, our findings may help researchers to explain the underlying mechanisms of the effects of webrooming on search process satisfaction. Webrooming increases the consumers' confidence that they have made the right purchase and their smart shopping feelings; both mental states equally determine satisfaction. Our findings are fairly consistent across shopping motivations and product categories, but some differences must be noted. First, cross-channel consumers may be motivated to make purchases efficiently. However, saving time and/or effort appears to have no influence on search process satisfaction. This finding is in line with Gensler, Verhoef, and Böhm, who also find that convenience has no significant role in the search and purchase stages of the cross-channel purchase decision-making process. On the contrary, the effort expended may have a positive effect on satisfaction. Nevertheless, the relationships between perceived effort, convenience and satisfaction should be investigated in future research. Second, saving money on the purchase may also be an important motivator of cross-channel shopping and determine satisfaction, which is consistent with previous literature on smart shopping. When consumers want to pay a low price for a product, showrooming appears to be the preferred channel combination. Third, although product characteristics have significant direct effects on smart shopping feelings and satisfaction, the path to search process satisfaction seems not to depend on product type. Our findings have valuable implications for multichannel marketers. In the omnichannel era, retailers must learn to integrate channels to offer seamless and unique experiences that retain consumers throughout the entire purchase experience and avoid free-riding behaviors. Providing customers with satisfactory experiences may be the key to retaining them and establishing long-lasting relationships with them. Our findings advocate that retailers should shift from design orientation to customer orientation in order to deliver satisfaction with the cross-channel shopping experience. Most practitioners focus on delivering convenience in cross-channel experiences, which surely helps consumers make efficient purchases. Convenience represents a critical aspect of online shopping and affects satisfaction in online settings. However, in cross-channel settings we show that providing convenience may not be the best strategy for creating customer satisfaction. In fact, consumers appear to value the time and effort invested in making these purchases. In cross-channel shopping, convenience and efficiency may be hygiene factors, rather than motivators. Instead, this research stresses the importance of developing strategies to foster consumer confidence during the purchase journey. This can be achieved in a number of ways. First, the search for confidence is most associated with important
purchases.Highly involved consumers are driven by the desire to make the best possible choice.These consumers strive to feel confident about their choices."Thus, cross-channel marketers should try to boost the consumer's involvement or engagement with the product, brand or purchase process situation so that he or she will seek to gain confidence in their choices.This will establish the grounds for a more satisfactory experience.Second, our findings show that cross-channel consumers gain more confidence from a webrooming experience than from a showrooming experience.Retailers should take advantage of the different capabilities offered by the channels to design optimal customer experiences.In particular, managers should maximize the unique informational power of the Internet to offer the fullest information about their offerings."For example, reading online reviews before making a physical purchase increases consumers' confidence in their choices.Online channels might be used to promote a visit to the physical store to complete the purchase journey.At the physical store, consumers should be allowed to touch and feel the products; this diagnostic information cannot be obtained elsewhere and increases choice confidence.Furthermore, smart shopping feelings are likely to arise in cross-channel shopping.Our findings show that webrooming makes consumers feel smarter than does showrooming.Cross-channel consumers may feel responsible for the final outcome of a purchase and need high control of the shopping process.Webrooming is the channel combination that better accommodates these determinants of causal attribution."Thus, companies should follow this trend in their communications strategies by appealing to their customers' intelligence.Online retailers may also benefit from our findings.Money savings have traditionally been associated with smart shopping feelings, and online shopping and showrooming are associated with the search for low prices.This research shows that paying a low price is the main reason for satisfaction in online purchases and that money saving perceptions lead to search process satisfaction."Furthermore, by increasing the perceived control of the process and the responsibility for the final outcome of the purchase, online retailers may increase the online shoppers' smart shopping feelings and satisfaction.Overall, our results acknowledge the value of information integrity across channels.Offering accurate product information online and good physical interactions in stores may help retailers integrate their channels more efficiently.Most offline retailers have some form of online component that can be integrated into the consumer experience, in the same way as online retailers are increasingly embracing the offline presence."Facilitating consumers' control of the process and their knowledge acquisition during the purchase journey can have great potential to improve their customer experience in terms of confidence, smart shopping feelings and satisfaction.Despite the importance of our findings for marketing research and practice, some limitations need to be acknowledged.First, the experimental nature of the confirmatory analysis should be noted.In studies 2 and 3, consumers were compelled to take part in either a webrooming or showrooming shopping experience.This manipulation was carried out to maintain a balanced number of participants in both scenarios.A more realistic scenario could have been achieved if the participants had self-selected to do webrooming or showrooming; 
however, the number of webroomers would have greatly surpassed the number of showroomers, as recent reports and previous studies find.Thus, the balance between the conditions required for the experimental design would have been compromised.Nevertheless, future studies should allow participants to choose freely between webrooming and showrooming and analyze their impacts on satisfaction.Second, we focus only on millennials.The use of a homogeneous group of participants guarantees internal validity, and testing the hypotheses over a wide range of customers ensures a certain degree of external validity.However, it would be interesting to validate these results and proposed relationships with other consumer profiles.Third, we focus on the linear online-to-offline sequence better to isolate the effects of webrooming.However, future research should also investigate the cross-channel process as a recurrent path.Mobile technologies allow consumers to use several channels simultaneously at the same stage of the purchase process, turning cross-channel experiences into omnichannel experiences.Further research should analyze how omnichannel environments affect the generation of confidence, smart shopping feelings and satisfaction."Fourth, we examine the impact of cross-channel behaviors on the consumer's search process satisfaction.This optimistic view neglects other outcomes that may be worth investigating.Cross-channel behaviors can also lead to search regret, which results from an unsuccessful search that leads to an undesired purchase decision.Search satisfaction and regret might share some common aspects in cross-channel shopping, but might be caused by different circumstances.For example, the large amount of data that the Internet offers can cause the consumer to suffer information overload, creating confusion and anxiety during the search process, which may lead to heightened feelings of regret if consumers do not find what they need.It would be interesting to examine how consumers evaluate the outcome of the purchase journey, depending on whether it results in a satisfactory or dissatisfactory experience.This work was supported by the Spanish Ministry of Economy and Competitiveness under Grant ECO2016–76768-R; and European Social Fund and the Government of Aragon under Grant S20_17R. | The multichannel marketing literature consistently shows that consumers who use multiple channels in their purchase journeys are more satisfied, loyal, and can be more profitable, than single-channel consumers. However, there is little research investigating how specific channel combinations affect the customer experience. Recognizing that webrooming (research products online, purchase offline) is the prevalent form of cross-channel shopping, this paper examines its influence on the consumer's search process satisfaction. The results of three studies combining qualitative, survey, and experimental methods show that webrooming leads to more satisfaction than showrooming behaviors. Furthermore, we find that webrooming makes consumers feel more confident and like “smart shoppers.” Both factors subsequently determine satisfaction. Perceptions of money savings also affect search process satisfaction. Importantly, saving time and/or effort during the purchase process (convenience) has no influence on satisfaction with cross-channel shopping. The results are robust across shopping motivations and product categories. Theoretical implications and proposals for effective channel integration are offered. |
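The mediation results in Studies 2 and 3 of the entry above were obtained with Hayes' PROCESS v3 macro (two parallel mediators plus covariates, with bootstrapped indirect effects). The fragment below is a minimal, hypothetical sketch of that kind of analysis in Python, using plain OLS regressions and a percentile bootstrap; the data file and column names (webrooming, confidence, smart, satisfaction, and the three perception covariates) are placeholders and are not taken from the original studies.

```python
# Hypothetical sketch of a parallel-mediation analysis (two mediators plus covariates),
# approximating what PROCESS model 4 estimates. All names below are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

COVS = " + time_perc + money_perc + right_perc"   # assumed covariate columns

def effects(data):
    """Return a1*b1 (via confidence), a2*b2 (via smart shopping) and the direct effect c'."""
    a1 = smf.ols("confidence ~ webrooming" + COVS, data).fit().params["webrooming"]
    a2 = smf.ols("smart ~ webrooming" + COVS, data).fit().params["webrooming"]
    y = smf.ols("satisfaction ~ webrooming + confidence + smart" + COVS, data).fit()
    return a1 * y.params["confidence"], a2 * y.params["smart"], y.params["webrooming"]

df = pd.read_csv("study3_hypothetical.csv")        # webrooming coded 1, showrooming 0
ind1, ind2, direct = effects(df)
print(f"indirect via confidence={ind1:.3f}, via smart shopping={ind2:.3f}, direct={direct:.3f}")

# Percentile bootstrap (5,000 resamples) for the indirect effects and their contrast
rng = np.random.default_rng(2024)
n = len(df)
boot = np.array([effects(df.iloc[rng.integers(0, n, n)])[:2] for _ in range(5000)])
for label, col in (("confidence", 0), ("smart shopping", 1)):
    lo, hi = np.percentile(boot[:, col], [2.5, 97.5])
    print(f"95% CI, indirect effect via {label}: [{lo:.3f}, {hi:.3f}]")
lo, hi = np.percentile(boot[:, 0] - boot[:, 1], [2.5, 97.5])
print(f"95% CI, pairwise contrast of the two indirect effects: [{lo:.3f}, {hi:.3f}]")
```

If zero falls outside a bootstrap interval, the corresponding indirect effect is judged significant, which is the same decision rule reported for the PROCESS analyses above.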
31,446 | Substantial gains in word learning ability between 20 and 24 months: A longitudinal ERP study | As children approach their second birthday, most of them undergo an increase in the rate of vocabulary acquisition, often termed the “vocabulary spurt”.In this longitudinal ERP study, we sought to capture age-related changes in children’s ability to learn new object-word relations during the dynamic period of language development between 20 and 24 months.Moreover, we investigated how differences in word processing during learning related to individual differences in vocabulary size.It is often assumed that the vocabulary spurt marks a transition between two distinct stages of vocabulary acquisition, a change which has been attributed to qualitative changes such as the emergence of certain word learning constraints, or the shift from associative to referential acquisition, to mention only a few hypotheses.This account has been challenged by modeling showing that vocabulary growth is often continuous rather than discontinuous, and it has been proposed that improvements in domain-general learning abilities can account for the observed acceleration in word learning.The fact remains, however, that vocabulary learning typically undergoes a dramatic acceleration in the end of the second year, and without assuming two distinct stages of learning, we will use the term vocabulary spurt to describe this acceleration.Improving our understanding of the mechanisms driving this acceleration remains an important focus of language acquisition research.A possible contributor to the vocabulary spurt is the emergence of the ability to map a novel word to its referent from only a few exposures, a process called fast mapping.Fast mapping abilities are readily apparent at the end of children’s second year, and the ability has been shown to emerge already around children’s first birthday.Current evidence suggests that the ability to learn words from limited exposure starts to develop around six months, although at this point word retention is very weak.Following this, fast mapping ability continues to grow more efficient and refined, with longer and better retention, up until at least 2½ years of age.Electrophysiological studies have successfully used the N400 component to investigate the fast mapping process.The N400 is a component modulated by lexical priming and/or ease of integration into semantic context, and is elicited by words as well as other meaningful stimuli.Supporting semantic contexts that create an expectation of an upcoming stimulus attenuate the N400 amplitude, which indicates facilitation of lexical-semantic processing.The component has been observed from 6–9 months of age in picture-word congruity paradigms, but continues to mature up until around 19 years of age.The use of the picture-word congruity paradigm is suitable for studying the process of fast mapping.When children are shown pictures of unfamiliar objects which are paired with an unfamiliar pseudoword, it is possible to test the child’s ability to map the novel word to the object after a certain number of presentations.After familiarization, an incongruity effect on the N400 component indicates successful fast mapping.Using this paradigm, it has been shown that 20-month-olds with large productive vocabularies produce an N400 incongruity effect to newly learned pseudowords presented with incorrectly paired novel objects.In this study, each pseudoword was presented with the same novel object five times in a learning phase, and the 
test phase followed immediately at the end of each learning block.Twenty-month-olds with smaller vocabularies did not show an effect of incongruity on the N400 amplitude for pseudowords, although they did produce such an effect in the control condition using real familiar words and objects.Using a similar task with lower demands, Friedrich and Friederici demonstrated that even 14-month-olds could successfully map novel pseudowords to constantly paired objects and retain this representation for at least a day.This was seen in a larger N400 response to pseudowords presented with incongruous objects compared to the consistently paired object.A modified setup of this experiment was used with 6-month-olds and found evidence of learning the pairings between novel words and objects during the familiarization phase.However, the 6-month-olds did not produce an N400 incongruity effect one day later as the 14-month-olds did.Many other picture-word matching experiments have used real word/object stimuli familiar to young children rather than novel objects and pseudowords.Several of these studies have found interesting links between N400 responses and children’s vocabulary development.In one experiment, 12-month-olds who had high productive vocabularies for their age displayed N400 effects, while those with lower productive vocabularies did not produce an effect even for words their parents had rated as comprehended by the child.Another study found that the lack of an N400 effect at 19 months predicted poor expressive language skills at 30 months.In a task adapted for younger infants, 9-month-olds’ receptive vocabulary was found to correlate with the size of an N400 effect.The lack of an N400 effect in 20-month-old children with familial risk for dyslexia, compared to a control group, has also been demonstrated.A similar paradigm using purely auditory word pairs as stimuli found a relation between N400 responses and productive vocabulary size.Together, these findings suggest that the mechanism behind the N400 response is related to efficient word processing.However, the study by Torkildsen et al. 
described above is the only one reporting a relation between vocabulary size and the N400 incongruity effect to newly learned pseudowords.The use of pseudowords and fantasy objects is important if one wants to control for individual differences in previous exposure to word and picture stimuli, and the types of representations that children might already have begun to build up based on these experiences.The most commonly reported ERP component to show effects of repetition during word or pseudoword familiarization is a frontal negative component known as the N200-500, most commonly conceptualized as reflecting processes of word form recognition.This component is enhanced in response to known words compared to unknown words, and has been found to emerge for originally unknown words as these are repeated and become more familiar.The above findings were from unimodal experiments involving only words, but a few picture-word priming paradigms have seen a similar effect of an increased early negativity during familiarization with novel words paired with pictures.Additionally, several studies have found an effect of congruity where words presented with congruous pictures elicited a larger N200-500 compared to incongruous words.The N200-500 word familiarity effect in infants has been shown to be related to measures of language development, such as vocabulary size.Another study, based on the same experiment as Torkildsen et al., also focused on the dynamics of the learning phase, and demonstrated a relation between vocabulary size and the modulation of the ERP response to the novel words during learning.Children with smaller vocabularies and no evidence of fast mapping in the test phase, showed a different pattern of repetition effects in the training phase than children with larger vocabularies.The familiarization process was reflected in a modulation of a fronto-central negativity, which appeared to increase in amplitude until a certain level of encoding was reached, and then decrease with further repetition.The children with low vocabularies, however, did not reach a stage of attenuation of the negative amplitude due to repetition.This negative component was interpreted as an Nc, a component seen specifically in infants and young children in response to several types of stimuli, but most commonly visual stimuli.It is commonly viewed to reflect attentional processing, where novel and salient events typically result in an increase in amplitude, and stimulus familiarization or repetition leads to a decrease in amplitude.While the N200-500 component is elicited by word stimuli specifically, and is believed to be associated with word form processing, the Nc component seems to reflect more general cognitive mechanisms that are involved in many different types of tasks.Although the N400 has been studied extensively in infants and young children with regards to semantic incongruity, it is unclear how this component changes during the learning process.Especially in a picture-word matching paradigm where words are given a clear meaning, one would expect to see changes in N400 amplitude as children learn the associations between the picture and the word.In adults, the N400 is attenuated with repetition for both real words and for pseudowords that presumably do not activate a semantic representation.N400 repetition effects for words have been reported in school-aged children as well, and this effect seems to grow stronger with age.While most of the word learning or word familiarization studies on infants 
have reported mainly frontal or fronto-central effects, and not any clear N400-effects due to repetition, Friedrich and Friederici’s study with 6-month-olds did find a parietal reduction in negativity in the second part of the training phase for words that had been constantly paired with an object compared to words paired with different objects.They interpreted this as an N400 effect showing that the children learned the semantic relation between the word and the object.Other studies of infants and young children have not reported changes in N400 amplitude with familiarization.The critical difference between the N200-500 component and the N400 is that the former does not necessarily involve processing of the relation between the word form and its referent, which in experimental contexts is the semantic representation.Instead this earlier component is mainly modulated by familiarity of a specific phonological form, regardless of its meaning.The N400 component, on the other hand, is primarily affected by the semantic context, which either facilitates lexical access or makes it more difficult.Therefore, changes in the N400 component during repeated word-object presentations should be more indicative of successful formation of an associative memory representation encompassing the word and the object.The research results presented above suggests that the N400 incongruity effect is a sensitive measure of fast mapping ability.It also seems that measures of change in ERP responses during familiarization and learning can capture individual differences in the learning process due to either age or vocabulary.The purpose of the present study was to investigate the development of electrophysiological responses during fast mapping with a longitudinal design spanning the period in the second year where most children undergo a rapid acceleration in vocabulary growth.To this end, we employed the same paradigm as Torkildsen et al. and Torkildsen et al.In this paradigm pseudowords are linked to novel pictures through repeated co-presentation, and subsequently the picture-word associations are switched, eliciting an N400 incongruity response in children who have learned the mappings.In addition, real familiar words and their referents were presented in the same way.Torkildsen et al. 
found that 20-month-olds with high productive vocabularies, but not those with low productive vocabularies, showed evidence of successful fast mapping as measured by an N400 incongruity effect to the pseudowords after five learning trials.Both groups, however, produced an N400 effect in the real word condition.A follow-up test at 24 months would be able to show whether these individual differences persist even after the most dynamic time period of the so-called “vocabulary spurt”.There have been no longitudinal studies presenting a pseudoword learning experiment to children at two time points around this age.Our first hypothesis was that the size of the N400 incongruity effect to newly learnt pseudowords would predict vocabulary only at 20 months, when many children would still be in an early, slower period of vocabulary development.By 24 months, however, many children with low vocabularies would likely have undergone an acceleration of vocabulary growth and, although their vocabularies would still be relatively small, may have attained fast mapping abilities like those of the high vocabulary group.This would then be reflected in an N400 incongruity effect which was independent of vocabulary size.Furthermore, we expected that, when grouping children according to their vocabulary size, we would see group differences in modulation of the N400 component during learning of the pseudowords, which would indicate differences in the process of establishing a link between a novel word and its referent.If such differences were linked to whether or not the child had undergone the vocabulary spurt, then they should be most apparent at 20 months, and group differences should diminish at 24 months when most children would have productive vocabularies above 100 words.By comparing repetition effects on the N400 component with those on the frontal N200-500 component we would be able to determine whether group differences were due to differences in general word processing, such as familiarity with the word form, or semantic processing.In sum, we had the following hypotheses: that the size of the N400 incongruity effect to newly learned pseudowords would be related to productive vocabulary at 20 months, but not at 24 months, that an N400 incongruity effect to real familiar words would be observed at both 20 and 24 months, that an N200-500 repetition effect would be observed at both ages, independent of vocabulary size or word condition, and that N400 repetition effects during the learning phase would be associated with vocabulary size, primarily at 20 months when a substantial number of children would be at a pre-vocabulary spurt stage.A sample of 77 children were recruited and tested at 20 months of age.The selection criteria were that they were typically developing monolingual Swedish learners born at full term.Reliable electrophysiological data was obtained from 37 children.The remaining participants were excluded from EEG analysis due to fussiness, technical problems, or too few artifact-free trials in one or more of the analyzed conditions.The children were recruited through child health care centers in and around Lund, Sweden, and an information campaign sent by mail to all children in certain areas close to Lund that would fall within the appropriate age range during the study period.The project was granted ethical approval by the Regional Ethical Review Board.The same children that participated at 20 months were invited back at 24 months, and 52 children returned for this second session.In addition, 
two new children were recruited at 24 months in order to obtain a larger sample size at this age.In total 54 children participated in the 24 month session, and 33 of these were included in the EEG analysis.Longitudinal ERP data was obtained from 23 participants who fulfilled inclusion criteria at both ages.Different stimulus sets were used in the 20 and 24 months experiments.The auditory stimulus material consisted of 51 common count nouns, 30 used at each time point and 60 pseudowords which were phonotactically legal in Swedish.Thus, there was a slight overlap of nine real words between the two time points, but these words were paired with different picture referents.Parents reported that their children comprehended on average 21 of the 30 stimulus words at 20 months, and 26 of the words at 24 months.The auditory material was recorded in an anechoic chamber by a female voice, speaking in an infant-directed manner.The visual stimuli consisted of cartoon images of the objects corresponding to the chosen nouns, and fantasy objects and creatures to be paired with the pseudowords, selected from the web-based collection Clipart.Two parent questionnaires were used to assess the children’s general level of development: a Swedish adaptation of MacArthur-Bates Communicative Development Inventories – the SECDI in the “Words and Sentences” version, and the 20 and 24 months versions of a Swedish adaptation of the Norwegian Ages and Stages Questionnaires.The ASQ assesses the infant’s level of development in various areas including language and motor development.Pictures were presented on a 17 inch computer screen approximately 35 cm from the child, and words were presented from a speaker next to the screen.Children sat on their parent’s lap, with a screen placed around them in order to block out distractions.Breaks were taken in between blocks if necessary, with the possibility of showing a short video clip to recapture the child’s attention.EEG was recorded with infant versions of the 128 channel Hydrocel Geodesic Sensor Nets connected to a Net Amps 300, with a sampling rate of 250 samples/s, referenced to the vertex.The child was video recorded throughout the experiment, allowing for exclusion of trials where the child was inattentive.The stimuli were organized into 10 independent blocks, with each block containing 3 real words and objects and 3 pseudowords and novel objects.Each picture-word pair was presented five times in a pseudo-randomized order.The first trial in each block was always a real word, there was at least one interleaved item in between item repetitions, and at most two successive real word trials or pseudoword trials.Each block ended with a test phase, where the picture word pairings were switched.Each word/pseudoword was now presented together with one of the other pictures from the same block, yielding an incongruous pairing.Real objects were always paired with other real words, and fantasy objects with other pseudowords.The test phase also included additional conditions that are not reported in this paper, where modified versions of the original pictures were presented with congruous and incongruous words.These conditions were included to investigate object recognition processes, and results from this study are reported in Borgström, Torkildsen, and Lindgren.Two different trial lists were created, with the same stimuli but in different pairings and presentation order.Pictures were presented for 2150 ms, with a word onset of 1000 ms after each picture onset, and an inter-trial 
interval of 500 ms showing a white screen.Sections of inattentiveness were rejected from the EEG by viewing the video time-locked to the data.A bandpass finite impulse response filter of 0.3–30 Hz was applied, and 1250 ms epochs time-locked to word onset were created, with a 100 ms pre-stimulus baseline.We used an automatic artifact detection procedure in Net Station 4.5 to mark large artifacts and bad channels, and trials with more than 15 bad channels were rejected.All trials were then inspected visually and the artifact identification was adjusted so that artifacts caused by eye blinks and eye movements were left in the data for later correction.Remaining bad channels were replaced using spherical spline interpolation.Data was re-referenced to the average of all electrodes.An average reference has been argued to be the best reference choice for high-density recordings since it does not bias the signal relative to a specific reference location.We used EEGLAB to perform an independent components analysis to identify and remove ocular artifacts, and remaining EEG processing was performed in ERPLAB.Only data from subjects who retained at least ten artifact-free trials per condition were included in the grand average and the statistical analyses.The mean number of accepted trials per condition at 20 months was between 15 and 17 for all analyzed conditions, and between 14 and 16 at 24 months.To our knowledge, all studies investigating the effects of word familiarization or repetition have used electrode montages with a relatively small number of electrodes referenced to the mastoids.Since this reference site is relatively close to the brain areas generating the N400, it is possible that this procedure limits the potential to detect more subtle modulations of the N400.It has been shown that components that have a posterior-temporal distribution are enhanced in amplitude when applying an average reference compared to a linked mastoids reference, while frontal components are larger in amplitude using a linked mastoids reference.The use of an average reference is generally argued to be advantageous, given a large enough number of electrodes, since it “unconfounds estimates of the amplitude and topography of components with the location of the reference electrode”.By using a high-density montage and applying an average reference, we hope to be able to better separate the frontal N200-500 component reported in many previous studies and the N400, and investigate whether word and pseudoword repetition affects the N400 in infants in a similar way as in adults.Children were divided into two productive vocabulary groups according to a median split, and this was used as a between-subject factor in all statistical analyses.Nine regions of interest, each including six electrodes, were selected that covered left, midline, and right sections of frontal, central, and parietal regions.The mean amplitude for the six electrodes in each ROI was used in the statistical analyses.Our hypotheses concerned ERP components that generally emerge between 200 and 800 ms, so statistical analyses focused on this segment of the ERP.This segment was divided into 200 ms consecutive time windows for initial omnibus repeated measures analyses of variance: 200–400, 400–600 and 600–800 ms. 
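The preprocessing and ROI pipeline just described was carried out with Net Station, EEGLAB and ERPLAB. As a rough illustration only, the same steps (0.3–30 Hz FIR band-pass, ICA-based ocular correction, 1250 ms epochs with a 100 ms pre-stimulus baseline, spherical-spline interpolation of bad channels, average reference, and ROI mean amplitudes in 200 ms windows) could be approximated in MNE-Python as sketched below; the file name, event codes and ROI electrode lists are hypothetical placeholders, not the selections actually used in the study.

```python
# Illustrative MNE-Python approximation of the preprocessing described above.
# The file name, event codes and ROI channel lists are hypothetical placeholders.
import mne

raw = mne.io.read_raw_egi("subject01_20m.mff", preload=True)   # 128-ch HydroCel GSN
raw.filter(l_freq=0.3, h_freq=30.0, fir_design="firwin")       # band-pass FIR filter

# ICA to identify and remove ocular artifacts (components chosen by inspection)
ica = mne.preprocessing.ICA(n_components=20, random_state=97)
ica.fit(raw)
ica.exclude = [0, 3]                      # hypothetical blink / eye-movement components
ica.apply(raw)

# 1250 ms epochs time-locked to word onset, with a 100 ms pre-stimulus baseline
events = mne.find_events(raw)             # event extraction depends on the recording setup
event_id = {"real/congruous": 1, "pseudo/congruous": 2,
            "real/incongruous": 3, "pseudo/incongruous": 4}     # placeholder codes
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=1.15,
                    baseline=(None, 0.0), preload=True)

epochs.interpolate_bads(reset_bads=True)  # spherical-spline interpolation of bad channels
epochs.set_eeg_reference("average")       # re-reference to the average of all electrodes

# Mean amplitude per ROI and 200 ms time window (ROI electrode lists are illustrative)
rois = {"frontal_left": ["E23", "E26", "E27", "E33", "E34", "E38"],
        "parietal_mid": ["E61", "E62", "E67", "E72", "E77", "E78"]}
windows = [(0.2, 0.4), (0.4, 0.6), (0.6, 0.8)]
for cond in event_id:
    evoked = epochs[cond].average()
    for roi, chs in rois.items():
        for tmin, tmax in windows:
            amp = evoked.copy().pick(chs).crop(tmin, tmax).data.mean() * 1e6  # in µV
            print(f"{cond:18s} {roi:13s} {int(tmin*1e3)}-{int(tmax*1e3)} ms: {amp:6.2f} µV")
```

A per-trial artifact rejection criterion (the study required at least ten artifact-free trials per condition) would be applied before averaging, for example via the reject argument of mne.Epochs.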
The mean amplitude within each time window was used as a statistical measure. Due to the considerably larger sample sizes that contributed reliable data at each age compared to the sample that contributed data at both time points, our analyses were primarily conducted on each age group separately. However, when results seemed to differ between the age groups, analyses were followed up with an analysis of the longitudinal sample in order to confirm an effect of age. Results from omnibus level ANOVAs on the longitudinal sample are provided in Supplementary Material 3. For the effect of repetition during the learning phase, repeated measures ANOVAs were performed with Word Type, Repetition, Region and Laterality as within-subject factors, and Vocabulary group as a between-subject factor, for all three time windows and both age groups separately. For the effect of congruity, the same procedure was performed, replacing the Repetition factor with a Congruity factor. The incongruous trial in the test phase was compared to the final congruous trial, at which point the picture-word associations would be best established. Follow-up analyses were directed to best capture the two components of interest. The earliest 200–400 ms time window was used to test the frontal N200-500 word familiarity effect, and the two later time windows were collapsed to 400–800 ms to test for the N400 component of semantic processing. This division was chosen based on previous findings in this age group, as well as visual inspection of the waveforms. The results section includes ERP waveforms from selected channels in the ROIs, but detailed topographies in all conditions can be viewed in Supplementary Materials 4–9. The follow-up analyses were performed on the two word types separately, at specific electrode regions. In general, only significant effects and certain effects approaching significance that include the repetition or congruity factors are reported. An alpha-level of .05 was used for all statistical tests. The Huynh–Feldt correction was used when the assumption of sphericity was violated, and in these cases unadjusted degrees of freedom and adjusted p-values are reported. Data from the parent questionnaires are presented in Table 1. The productive vocabulary groups were formed according to a median split at each age. At 20 months, the median in the ERP sample corresponded well with the population median. The low vocabulary group had a maximum score of 58 words, which is around the common 50- to 75-word threshold often associated with a marked acceleration of productive vocabulary growth. At 24 months, the sample median was higher than the estimated population median. Development of ASQ scores followed the same general pattern, where the sample scores were placed below population medians at 20 months, and above at 24 months. Since these data come from two different samples, we compared these results with those from the longitudinal sample. The longitudinal sample showed a similar pattern, with slightly higher results compared to the reference groups at 24 months than at 20 months, although the longitudinal sample had a median productive vocabulary of just over the 50th percentile, which was slightly higher than the entire 20 month sample. At 24 months, the longitudinal sample scored a median in the 70th percentile. Overall, the data suggest that the present sample had a steeper growth curve than the reference groups. The overall effects of repetition and congruity were tested in three 200-ms time windows, for each age group separately. As
shown in Table 2, effects of repetition were similar at both ages, with a main effect of repetition in all three time windows and consistent interactions with region. In general, amplitudes over frontal and central regions became more negative with repetition, while over parietal regions they were not consistently affected in the earliest time window, but became less negative from 400 ms onward. Real words and pseudowords did not elicit different repetition effects in either time window. Congruity effects differed at 20 and 24 months. At 20 months, the first effects of congruity appeared in the 600–800 ms time window, and there was no main effect of congruity, but an interaction with word type that indicated an incongruity effect only for real words. There was also an interaction with region such that parietal regions showed the largest difference in amplitude between congruous and incongruous presentations. At 24 months, however, there was a significant interaction between congruity and region already in the 200–400 ms time window. Moreover, there was a main effect of congruity from 400 to 800 ms, as well as several interactions with region and word type. These interactions indicated that for pseudowords there was a larger negativity to incongruous presentations across all regions, whereas for real words only parietal regions had a larger negativity and frontal regions had a smaller negativity. Further analyses, including ERP plots and follow-up statistical tests on specific regions, are presented for each word type separately. This is motivated both by our theoretical hypotheses and the omnibus ANOVAs showing both effects of region and word type. In order to explore the relation between the two components modulated by repetition, we calculated difference scores between the first and the fifth presentations, capturing the total change in amplitude for each individual participant (a computational sketch of this difference-score analysis is given at the end of this entry). The change in amplitude of the N200-500 component correlated significantly with the change in amplitude of the N400 component, r = −.484, p = .002. A larger increase in the N200-500 component was associated with a larger attenuation of the N400 component. As at 20 months, a larger increase in the N200-500 component was associated with a larger N400 decrease due to repetition across the five pseudoword presentations. The 24-month-olds in this study differentiated between congruous and incongruous pairings of the pseudowords, indicating that they had successfully mapped the novel pseudowords to the novel objects after five exposures. However, there was no such effect at 20 months, suggesting that at this age the children did not fully learn the associations between the novel pseudowords and their referents. Thus, in only four months the children clearly improved in fast mapping ability. This effect of age was not due to sample differences, since the pure longitudinal sample showed the same pattern of development as the full sample. Surprisingly, the response to incongruity was not related to vocabulary size as in Torkildsen et al., where children with large vocabularies at 20 months did show an N400 incongruity effect when associations between novel pseudowords and referents were broken. However, an N400 effect was seen in response to familiar real word-object pairs at both ages, demonstrating that the N400 mechanism with regard to word stimuli was in place, as many other studies have previously established. This incongruity effect for real words was also independent of vocabulary size. The vocabulary groups in Torkildsen et al.'s study
followed a 75-word cutoff, which corresponded to the median-split in the present study.Thus, the present study’s lack of effect of productive vocabulary on the N400 response in 20-month-olds cannot be due to differences in vocabulary size compared to the previous study.It seems reasonable to interpret the incongruity effect to the pseudowords, with its parietal peak around 600 ms, as an N400 response.However, the topography also showed a more temporally sustained negativity over central regions to the incongruous pseudowords compared to the real word condition.The central incongruity effect was statistically significant in an earlier time window than the effect over parietal regions, which was not significant until after 600 ms. That pseudowords elicited a later parietal incongruity effect than real words seems reasonable considering that the pseudowords were newly learnt, and would be expected to be processed more slowly.The central distribution may indicate involvement of additional processes in response to the newly learned pseudowords, perhaps associated with enhanced attention.The negative central component is a commonly observed response modulated by attention, novelty detection and saliency in young children, and might be present in the 24 month sample’s response to the incongruous pseudowords.Although the children established a mapping between the pseudowords and the novel objects, these associations are likely to be weak compared to the previously familiar real words and objects.If the children were not confident about the pseudowords’ referents, they might have responded with increased attention to incongruous presentations because they continued to process these stimuli in an attempt to integrate them into the context.This would explain that the central negativity to incongruous pseudowords was sustained throughout the epoch, without a return to baseline.In comparison, when a dog is labeled “car” it might be so obviously incorrect that the children dismiss it more rapidly.This interpretation is reasonable even without assuming the involvement of an additional component, the Nc.Instead, it is possible that the extra resources required to process the novel pseudowords simply generate an N400 component with a more widespread topography.A question that arises is why the 20-month-olds in this experiment did not respond to the incongruity in the pseudoword condition when a sample of the same age in Torkildsen et al. did, as well as even younger children in a similar paradigm.In Friedrich and Friederici’s study, where 14-month-olds produced an N400 to newly learned pseudowords, a larger number of learning trials were used and there were also fewer words to be learned which reduces the demands compared to the present study.Although five learning trials were used in the current experiment just as in Torkildsen et al., the presentation rate was faster, with a 500 ms inter-trial interval compared to 1000 ms ITI used in both Torkildsen et al. 
and Friedrich and Friederici.Speed of word processing is a measure of linguistic maturity, and thus a faster presentation rate may have presented too much of a challenge to the younger children’s processing capacities.We also only presented one incongruous trial per item in the test phase, while the previous experiment included two trials per item.This, along with the additional trials presenting modified pictures in the test phase may have increased the demands in this experiment and made it too difficult for the 20-month-olds to build up specific expectations about which words would follow each picture.Vocabulary size was not related to the magnitude of the N400 incongruity effect in any of the conditions in this experiment.Previous research have reported mixed results, with some studies showing that better language skills in young children were related to a larger N400 semantic priming effect to real words, while others report no such relation.Similarly, studies using eye tracking measures of word comprehension have shown that differences in the efficiency of online processing of real words are associated with differences in vocabulary size.Torkildsen et al. is the only study to report a correlation between productive vocabulary and an N400 effect to newly learned pseudowords specifically.The difference in demands of the task is the most likely explanation that this result was not replicated.At 20 months, the task was probably too difficult for most of the children, regardless of vocabulary size, while at 24 months, it is possible that most children had attained a similar ability of fast mapping, such that vocabulary size was no longer a relevant dimension of differentiation.Even the low vocabulary group at 24 months had a quite substantial productive vocabulary, with a mean of 173 words.Only four subjects at 24 months had vocabularies under 75 words, which is considered a common milestone of the vocabulary spurt.According to our hypothesis that receptive fast mapping ability is a relevant factor underlying the acceleration of productive vocabulary growth that takes place around 75 words, we had not expected a relation between vocabulary size and an N400 effect to newly learned pseudowords at 24 months.Moreover, individual differences in productive vocabulary size are caused by many factors other than fast mapping ability, such as language exposure, and even if receptive fast mapping ability is one contributing factor, it may be outweighed by others.The data on vocabulary size, as well as general development measured by the ASQ, indicated that the children participating in this study had a slightly steeper growth curve than the reference groups for the instruments.Particularly, they advanced from vocabularies around the 50th percentile at 20 months to the 70th percentile at 24 months.This may be associated with the relatively high educational level of the parents that chose to participate.It is possible that this accelerated growth curve influenced the results in the sense that the neural changes we see in four months in the present sample may be somewhat more protracted in a sample with an average vocabulary growth.During familiarization of both real words and pseudowords, repetition affected the same two components.The first was an early frontal response starting around 200 ms, which can reasonably be interpreted as an N200-500 component.There was an initial positivity in response to the first presentations of both words and pseudowords which changed to an increased negativity as the 
stimuli were repeated.This effect is in line with results from previous studies and is commonly interpreted as an effect of word form recognition and facilitated lexical-phonological processing following familiarization.However, in addition to word form processing, the N200–500 component has been linked to semantic processing, as it in some cases has been sensitive to word-object incongruity.This was not the case in the present study, however.At 20 months, there was a strong linear repetition effect for both real words and pseudowords.At 24 months, on the other hand, most of the increase in negativity happened between the first and the third presentation, with a weaker effect of subsequent repetitions.This could mean that, with age, the children reached a certain level of familiarity or recognition of the stimuli in fewer presentations.A recent study demonstrated that a faster emerging word familiarization effect in infants is associated with better word processing skills 6 months later, which supports the idea that this is a measure of word processing maturity.However, in the present study there was no effect of vocabulary group on any of the N200–500 effects, which indicates that children with small and large productive vocabularies were similarly able to recognize the words and pseudowords after a first presentation.Previous studies have shown a link between a stronger word familiarization effect and language skills, but these studies have involved younger infants, below 12 months.It is possible that individual differences in this component decrease with age, resulting in the word familiarity effect in older infants being less associated with vocabulary skills.A parietal negativity with a peak around 600 ms, characteristic of the N400 component, was also modulated by repetition.For real words, the negative amplitude of this component was attenuated with repetition independent of vocabulary size, and the linear repetition effect was stronger at 20 months than at 24 months.At 24 months, the difference between the first and subsequent word presentations was smaller, suggesting that the words were comprehended and processed fairly easily already at the first presentation.This is reasonable given the picture-word priming paradigm, where pictures of known objects functioned as primes for the upcoming words.Presumably, the older children had stronger semantic representations of the real words used in the experiment, and therefore even the first presentation of the object functioned well as a prime for the word.This also indicates that the real word condition involved a greater amount of learning at 20 months, where each presentation served to further strengthen the association between the object and the word.At both ages, the modulation of the N400 component in response to repetition of pseudowords differed depending on the children’s vocabulary size, shown by a direct correlation between N400 attenuation and vocabulary size.While children with larger vocabularies displayed a clear linear reduction in N400 amplitude to pseudowords, children with smaller vocabularies displayed an initial increase in negativity over primarily right parietal electrodes up to the third presentation, and then a subsequent decrease in negativity.Statistically, in the high vocabulary group already the second presentation of the pseudowords elicited a lower N400 amplitude than the first presentation, while in children with low vocabularies the effect of repetition on the N400 did not appear until the fourth 
presentation at the earliest. This suggests that children with large vocabularies reached a certain level of encoding of the pseudowords, or mapping between the novel object and pseudoword, in fewer trials than those with smaller vocabularies. Assuming the standard interpretation of the N400 amplitude, where larger amplitude reflects greater effort in semantic processing/access, then children with larger vocabularies reached a level of less effortful processing in fewer trials than children with smaller vocabularies. This parallels recent findings on the N200–500 component in 10-month-olds showing that infants displaying more mature word familiarity effects reached a level of word recognition in fewer trials than infants with less mature responses. The present study's results indicate that in this sample of older children, similar differences in modulation of the N400 component are related to language skills. Despite this association between vocabulary size and N400 modulation, we cannot draw any conclusions regarding the causal relationship. As with all correlational results, it may be that a quicker N400 attenuation facilitates word learning and therefore leads to larger vocabularies, or a larger vocabulary could itself lead to improved word processing skills, including semantic processing, which would be expressed by the N400 attenuation. Moreover, the N400 attenuation may be an indication of general brain maturation, which could also facilitate vocabulary growth. This effect of vocabulary size on ERP modulation during novel object-pseudoword learning is in line with the results from Torkildsen et al.'s study. However, in that study the patterns and topographies of the repetition effects were different. Children with low vocabularies showed a pattern of continuously increasing negativity due to repetition starting from 200 ms after word presentation, while the high vocabulary group showed an initial increase in negativity followed by a decreased negativity after the third presentation. This pattern was mainly seen over frontal and central electrodes and was interpreted as an Nc component indicating a decrease in attention in the high vocabulary group following a certain level of encoding. What these two data sets have in common is that children with smaller vocabularies did not display an attenuation of negative amplitude to the same extent as those with larger vocabularies. The differences in topography between the two studies likely lie in the different choices of reference. In the present study, it was possible to choose an average reference due to the large number of electrodes. This may have enabled the clear effects over parietal regions, where Torkildsen et al. did not find significant effects of repetition. Whether the N400-like effect in the present study is analogous to the component interpreted as an Nc in Torkildsen et al., or whether these studies have captured functionally different processes, is difficult to determine. However, since the effect of repetition began around 400 ms over parietal channels, and the negative component that peaked around 600 ms resembled the component elicited by incongruity in the switch phase, it seems reasonable to interpret our results as a modulation of the N400 component during learning. The correlation between the N200–500 component and the N400 component during familiarization of novel words indicates that the two components are present within the same individuals. Children who had a larger word familiarization effect were more likely to show a larger attenuation of the N400 due to repetition. Such a relation between these two components has not been reported previously, and suggests that a more efficient processing of the word form is associated with facilitated semantic processing. In the younger age group, this association was only relevant for pseudowords, while in the older group the two components were related during both novel and familiar word processing. An interesting aspect of our results is that vocabulary size was related only to modulation of the N400 to pseudowords, not to real words, and not to modulations of the frontal N200–500 component. Thus, the link between the two components is only partial. This suggests that toddlers with different vocabulary sizes processed and recognized the actual lexical item similarly, but differed more in semantic processing. In this paradigm the effect of mere repetition of the words and pseudowords cannot be separated from the priming effect of the picture on the word/pseudoword. Thus, the modulation of ERPs during familiarization will likely depend both on stimulus repetition, i.e. decreased novelty of the stimuli, and on a build-up of the association between the picture and word/pseudoword which enables the picture to function as a prime. However, our knowledge about the N400 component as an index of semantic processing supports the interpretation that the children with larger vocabularies were more efficient at encoding a novel word as a label for a novel object, although the children in this sample with smaller vocabularies were equally efficient at encoding the actual word form. A retrospective study of 19-month-olds who would later show poor expressive language and a study of 20-month-olds at-risk for dyslexia both found temporally extended N200–500 effects in these risk-groups. However, our low vocabulary group did not show such a pattern. This is probably because the children in our study were only classified according to their current vocabulary, and most of the children in the low vocabulary group were not at-risk in terms of their current vocabulary. They were not late talkers, but rather at the lower end of the normal range. The fact that the N400 modulation to real words was independent of vocabulary size also underscores that it is the formation of rapid associations between words and referents that differs between high and low producers, and not the processing of items that have a more consolidated status in long-term memory. The longitudinal design of this study has allowed us to demonstrate a remarkable development of fast mapping ability between 20 and 24 months, measured electrophysiologically in a fairly large sample of children. During this four-month period the children on average tripled their productive vocabulary size, an increase which was coupled with changes in the N400 effect to pseudoword-referent associations. Moreover, we have shown that differences in productive vocabulary size are related to differences in the dynamics of semantic processing during novel word learning, where children with larger vocabularies seem to reach a level of less effortful semantic processing of novel words after fewer exposures. Our data also demonstrate that, in toddlers with large vocabularies, the N400 component responds to word and pseudoword repetition in a similar way as in adults, with a linear attenuation of negativity. The clear N400 repetition effect found in this study is quite new in the infant word learning literature. The general pattern of decreasing parietal negativity with repetition, with the exception of the response to pseudowords in the low vocabulary group, is in line with adult research on the N400 component showing attenuation of amplitude due to repetition of both real words and pseudowords. Although children with larger vocabularies showed evidence of more efficient semantic processing of the pseudowords during learning, at both 20 and 24 months, this difference was not directly predictive of successful fast mapping between words and pictures in terms of an N400 incongruity effect. This most likely has to do with the specific learning load in the experiment, which may have been too heavy even for the high producers, and thus the task did not differentiate well between the two groups. Perhaps if the experiment had included more learning trials, or had a slower pace, the high producers at 20 months would have responded to the switched pairings of the pseudowords. | This longitudinal ERP study investigated changes in children's ability to map novel words to novel objects during the dynamic period of vocabulary growth between 20 and 24 months.
During this four-month period the children on average tripled their productive vocabulary, an increase which was coupled with changes in the N400 effect to pseudoword-referent associations. Moreover, productive vocabulary size was related to the dynamics of semantic processing during novel word learning. In children with large productive vocabularies, the N400 amplitude was linearly reduced during the five experimental learning trials, consistent with the repetition effect typically seen in adults, while in children with smaller vocabularies the N400 attenuation did not appear until the end of the learning phase. Vocabulary size was related only to modulation of the N400 to pseudowords, not to real words. These findings demonstrate a remarkable development of fast mapping ability between 20 and 24 months. |
31,447 | Dynamics of Translation of Single mRNA Molecules in Vivo | Precise tuning of the expression of each gene in the genome is critical for many aspects of cell function. The level of gene expression is regulated at multiple distinct steps, including transcription, mRNA degradation, and translation. Regulation of all of these steps in gene expression is important, though the relative contribution of each control mechanism varies for different biological processes. Measuring the translation rate from individual mRNAs over time provides valuable information on the mechanisms of translation and translational regulation. In vitro experiments, mainly using bacterial ribosomes, have revealed exquisite information on ribosome translocation dynamics at the single molecule level, but such methods have not yet been applied in vivo. In contrast, a genome-wide snapshot of the translational efficiency of endogenous mRNAs in vivo can be obtained through the method of ribosomal profiling. However, this method requires averaging of many cells and provides limited temporal information because of the requirement to lyse cells to make these measurements. Single cell imaging studies have succeeded in measuring average protein synthesis rates, observing the first translation event of an mRNA, localizing sub-cellular sites of translation by co-localizing mRNAs and ribosomes, and staining nascent polypeptides with small molecule dyes. While ribosomal profiling and other recently developed methods have provided many important new insights into the regulation of translation, many questions cannot be addressed using current technologies. For example, it is unclear to what extent different mRNA molecules produced in a single cell from the same gene behave similarly. Many methods to study translation in vivo require averaging of many mRNAs, masking potential differences between individual mRNA molecules. Such differences could arise from differential post-transcriptional regulation, such as nucleotide modifications, differential transcript lengths through use of alternative transcriptional start sites or polyadenylation site selection, differences in ribonucleoprotein composition, distinct intracellular localization, or different states of RNA secondary structure. Heterogeneity among mRNA molecules could have a profound impact on the total amount of polypeptide produced, as well as the localization of protein synthesis, but remains poorly studied. Furthermore, the extent to which translation of single mRNA molecules varies over time is also largely unknown. For example, translation may occur in bursts, rather than continuously, and regulation of protein synthesis may occur by modulating burst size and/or frequency, which could occur either globally or on each mRNA molecule individually. In addition, the ability of an mRNA molecule to initiate translation may vary with time or spatial location, for example as cells progress through the cell cycle or undergo active microtubule-based transport to particular cellular destinations. Such regulation could involve changes in the rates of translation initiation and/or ribosome elongation. To address these questions, new methods are required for visualizing translation of single mRNA molecules in live cells over time. Here, we present a method, based on the SunTag fluorescence tagging system that we recently developed, for measuring the translation of single mRNA molecules over long periods of time. Using this system, we have measured initiation, elongation, and stalling on individual mRNA molecules and have uncovered unexpected heterogeneity among different mRNA molecules encoded by the same gene within a single cell. Our system will be widely applicable to the study of mRNA translation in live cells. Observing the synthesis of a genetically encoded fluorescent protein, such as GFP, in vivo is difficult because of the relatively long maturation time required to achieve a fluorescent state. Thus, a GFP-fusion protein typically will not fluoresce until after its translation is completed. To overcome this temporal challenge and to create a sufficiently bright signal to observe protein synthesis from single mRNAs in vivo, we used our recently developed SunTag system. In this assay, cells are co-transfected with a reporter transcript containing an array of 24 SunTag peptides followed by a gene of interest, along with a second construct expressing a GFP-tagged single-chain intracellular antibody (scFv-GFP) that binds to the SunTag peptide with high affinity. As the SunTag peptides are translated and emerge from the ribosome exit tunnel, they are rapidly bound by the soluble and already fluorescent scFv-GFP. Importantly, labeling of nascent chains using the SunTag antibody did not detectably alter protein synthesis rates of a reporter mRNA in human U2OS cells, as determined by FACS analysis. At the same time, the mRNA was fluorescently labeled by introducing 24 copies of a short hairpin sequence into the 3′ UTR and co-expressing the PP7 bacteriophage coat protein, which binds with high affinity to the hairpin sequence, fused to three copies of mCherry. When observed by spinning disk confocal microscopy, the co-expression of a reporter construct, scFv-GFP and PP7-mCherry, resulted in the appearance of a small number of very bright green and red fluorescent spots per cell that co-migrated in time-lapse movies. Spot tracking revealed that these spots diffused with a diffusion coefficient of 0.047 μm²/s, which is slightly slower than previous measurements of mRNA diffusion, consistent with the fact that our reporter mRNA contains a larger open reading frame and thus more associated ribosomes.
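For readers who want to reproduce this kind of estimate, a minimal sketch of extracting a diffusion coefficient from spot trajectories is shown below. It assumes 2D tracking in the image plane and a simple through-origin fit of the mean squared displacement (MSD = 4Dt), which is one common approach rather than necessarily the exact analysis used here; the simulated track and frame interval are illustrative only.

```python
import numpy as np

def estimate_diffusion_coefficient(track_xy, dt, max_lag=10):
    """Estimate D (um^2/s) from one 2D trajectory by fitting MSD(tau) = 4*D*tau.

    track_xy : (N, 2) array of spot positions in micrometres
    dt       : time between frames in seconds
    max_lag  : number of time lags used in the fit
    """
    lags = np.arange(1, max_lag + 1)
    # Mean squared displacement at each time lag
    msd = np.array([
        np.mean(np.sum((track_xy[lag:] - track_xy[:-lag]) ** 2, axis=1))
        for lag in lags
    ])
    t = lags * dt
    # Least-squares fit through the origin: MSD = 4 * D * t
    return np.sum(msd * t) / (4.0 * np.sum(t ** 2))

# Illustrative use: a simulated Brownian track with D = 0.05 um^2/s, 0.5 s frames
rng = np.random.default_rng(0)
dt, D_true = 0.5, 0.05
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(200, 2))
track = np.cumsum(steps, axis=0)
print(f"Estimated D = {estimate_diffusion_coefficient(track, dt):.3f} um^2/s")
```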
In addition, we observed many dim GFP spots that did not co-migrate with an mCherry signal in time-lapse movies. The bright spots rapidly disappeared upon terminating translation by addition of a protein synthesis inhibitor, puromycin, which dissociates nascent polypeptides and ribosomes from mRNA, indicating that they are sites of active translation where multiple ribosomes are engaged on a single mRNA molecule. The dim spots were unaffected by puromycin treatment, suggesting that they represent individual, fully synthesized SunTag24x-Kif18b proteins that had already been released from the ribosome. Thus, this translation imaging assay allows visualization of ongoing translation of single mRNA molecules. Rapid 3D diffusion of mRNAs makes it difficult to track single mRNAs for >1 min, as mRNAs continuously diffuse in and out of the z-plane of observation, and mRNAs regularly cross paths, complicating identification and tracking of individual mRNA molecules over time. To track mRNAs unambiguously for long periods of time, we added a CAAX sequence, a prenylation sequence that gets inserted into the inner leaflet of the plasma membrane, to the PP7-mCherry protein, which served to tether mRNAs to the 2D plane of the plasma membrane. As a result of many PP7-mCherry molecules clustering through their interaction with the multiple recognition sites on a single mRNA, bright red dots appeared on the plasma membrane at the bottom of the cell, each representing a tethered mRNA molecule. Tethered mRNA molecules co-migrated with scFv-GFP foci, indicating that they are sites of active translation. Membrane tethering of the mRNA had minimal effects on the protein expression of a GFP reporter construct as analyzed by FACS. While membrane tethering greatly improves the ability to visualize translation on single mRNA molecules over long periods of time and does not appear to grossly perturb mRNA translation, it is important to note that some aspects of translation, especially localized translation, may be altered due to tethering. We first analyzed the PP7-mCherry spots observed on the plasma membrane to confirm that they contained only a single mRNA molecule. The fluorescence intensities of PP7-mCherry foci were very homogeneous. Their absolute intensity was ∼1.4-fold brighter, on average, than single, membrane-tethered SunTag24x-CAAX proteins bound with scFv-mCherry, each of which is expected to carry 24 mCherry molecules. PP7 binds as a dimer to the RNA hairpin, and each PP7 was tagged with two tandem copies of mCherry. Thus, mRNA spots could be expected to be four times as bright as single scFv-mCherry-SunTag24x-CAAX spots, but previous studies suggested that only about half of PP7 binding sites may be occupied; thus, mRNA spots would be about 2-fold brighter than single mCherry-SunTag24x spots if they contain a single mRNA molecule but ≥4-fold brighter if they contained two or more mRNAs. These results are therefore most consistent with the mCherry-PP7 foci being single mRNA molecules rather than multiple copies of mRNAs. Further supporting this idea, we tracked 63 single mRNA foci for 30–45 min and did not find a single case in which one spot split into two, which would have been indicative of more than one mRNA molecule being present in a single spot. Because single mRNAs were tethered to the plasma membrane through multiple PP7 molecules and thus through many CAAX membrane insertion domains, the 2D diffusion of mRNAs was extremely slow. This slow diffusion made it possible to track individual mRNAs and their associated translation sites for extended periods of time. Furthermore, the very slow diffusion rate of tethered mRNAs allowed us to image tethered translation sites using long exposure times. During this time interval, rapidly diffusing, non-tethered fully synthesized polypeptides only produced a blurred, diffuse image on the camera sensor, which enabled sites of translation to be easily distinguished from fully synthesized molecules. Finally, to confirm that the scFv-GFP was binding to nascent SunTag peptides, we replaced the SunTag epitope peptides in our reporter mRNA with an unrelated nucleotide sequence and found no GFP foci formation near mRNAs. In conclusion, we have developed assays that enable both single mRNAs and their associated nascent translating polypeptides to be imaged over time. This general SunTag-based method can be performed with either freely diffusing mRNAs or mRNAs tethered to the plasma membrane, each of which has unique advantages depending on the specific biological question. For further experiments in this study, we used the membrane-tethered system to follow translation for long periods of time. To estimate the number of ribosomes translating each mRNA, we compared the scFv-GFP fluorescence intensity of translation sites with that of the single, fully synthesized SunTag24x-Kif18b molecules present in the same cell. Several considerations need to be taken into account to calculate ribosome number from the fluorescence intensities of translation sites and fully synthesized single SunTag proteins. First, ribosomes present at the 5′ end of the reporter transcript have translated only a subset of the 24 SunTag peptides, so the nascent polypeptide associated with these ribosomes will have lower fluorescence intensity due to fewer bound scFv-GFPs. We generated a mathematical model to correct for the difference in fluorescence intensity for ribosomes at different positions along the transcript. Second, if scFv-GFP has a slow on-rate for the peptide epitope in vivo, a lag time could exist between the synthesis of a SunTag peptide and binding of a scFv-GFP, which could result in the underestimation of the number of ribosomes per mRNA. To test this, cells were treated with the translation inhibitor cycloheximide (CHX), which blocks ribosome elongation by locking ribosomes on the mRNA and prevents the synthesis of new SunTag peptides, while allowing binding of scFv-GFP to existing peptides to reach equilibrium. The translation site scFv-GFP signal did not substantially increase after CHX treatment, indicating that under our experimental conditions, the lag time between peptide synthesis and scFv-GFP binding does not detectably affect translation-site intensity. Based on the above controls and our mathematical model, we could estimate the ribosome number per mRNA from the fluorescence intensity of the translation site. Approximately 30% of the mRNAs did not have a corresponding GFP signal, suggesting that they were not actively translating. For the remaining 70% of the mRNAs that were translating, the majority had between 10 and 25 ribosomes, corresponding to an average inter-ribosome distance of ∼200–400 nucleotides. We also compared the translation-site intensity of two additional reporter mRNAs, carrying either 5× or 10× SunTag peptides, with that of the 24× peptide reporter. This analysis revealed that ribosome density was very similar on the 5× and 10× reporters, indicating that the long 24× SunTag array does not grossly perturb ribosome loading on the reporter mRNA. Next, we measured the translocation speed of ribosomes on single mRNAs by treating cells with harringtonine, a small molecule inhibitor of translation that stalls new ribosomes at the start of the mRNA coding sequence without affecting ribosomes further downstream. As mRNA-bound ribosomes complete translation one-by-one after harringtonine treatment, the GFP signal on mRNAs decreases. Using a simple mathematical model to fit the decay in fluorescence of a cumulative curve from many mRNAs, we estimate a ribosome translocation rate of 3.5 ± 1.1 codons/s. In a parallel approach, we also measured the total time required for runoff of all ribosomes from individual mRNAs, from which we calculated a translation elongation rate similar to the one obtained through our model. A reporter with only 5 instead of 24 SunTag peptides showed similar elongation kinetics, indicating that translocation rates are likely not affected by SunTag labeling of the nascent chain. Finally, we measured elongation rates of a shorter and codon-optimized reporter gene, which revealed a somewhat faster elongation rate of 4.9 codons/s, indicating that elongation rates may differ on different transcripts. Using the elongation rate and ribosome density described above, we were able to estimate the translation initiation rate to be between 1.4 and 3.6 min⁻¹ on the Kif18b reporter. Together, these results provide the first in vivo measurements of the rates of ribosome initiation and translocation on single mRNA molecules in live cells.
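The arithmetic linking these quantities can be sketched as follows. This is a simplified stand-in for the correction model described above (not the authors' actual model): it assumes ribosomes are uniformly distributed along the ORF and that scFv-GFP binding is at equilibrium, and the reporter geometry (ORF length, epitope-array length) and intensities in the example are illustrative assumptions rather than the measured values.

```python
import numpy as np

def estimate_ribosome_number(site_intensity, single_protein_intensity,
                             orf_length_codons, tag_length_codons):
    """Estimate ribosomes per mRNA from translation-site fluorescence.

    A ribosome that has translated x codons displays roughly
    min(x, tag_length) / tag_length of the full epitope array (ignoring the
    codons hidden in the exit tunnel). Averaged over uniformly spread
    positions, each ribosome contributes (1 - tag_length / (2 * orf_length))
    of a fully synthesized protein's signal.
    """
    mean_fraction = 1.0 - tag_length_codons / (2.0 * orf_length_codons)
    return site_intensity / (single_protein_intensity * mean_fraction)

def estimate_initiation_rate(n_ribosomes, elongation_codons_per_s, orf_length_codons):
    """Steady-state initiation rate (per minute): k_init = N * v / L."""
    return 60.0 * n_ribosomes * elongation_codons_per_s / orf_length_codons

# Illustrative numbers only (assumed, not the actual reporter values):
n = estimate_ribosome_number(site_intensity=12.0, single_protein_intensity=1.0,
                             orf_length_codons=1500, tag_length_codons=700)
print(f"~{n:.0f} ribosomes; initiation ~"
      f"{estimate_initiation_rate(n, 3.5, 1500):.1f} per min")
```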
To study translation over time, we imaged cells for 2 hr and quantified the scFv-GFP signal from single mRNA molecules that could be tracked for >1 hr. The results show considerable fluctuations in the translational state of individual mRNAs over time. Such large fluctuations were not observed when cells were treated with the translation inhibitor CHX, indicating they were due to changes in translation initiation and/or elongation rather than measurement noise. We also observed heterogeneity of behavior between different mRNAs. Some remained in a high translating state for >1 hr. Others shut down translation initiation and lost their scFv-GFP signal, which may account for the population of non-translating mRNAs observed in steady-state measurements. From the progressive decline in scFv-GFP fluorescence, we could estimate a ribosome run-off rate of 3 codons/s, which is similar to that measured after addition of harringtonine. Interestingly, a subset of these mRNAs later reinitiated translation and largely recovered their original scFv-GFP fluorescence. Individual mRNAs even showed repeated cycling between non-translating and translating states. Such cycles of complete translational shutdown and re-initiation occurred 0.29 ± 0.10 times per mRNA per hour, suggesting that most mRNAs will undergo one or more translational shutdown and re-initiation events in their lifetime. Thus, single mRNA imaging reveals reversible switching between translational shutdown and polysome formation. After synchronized expression of the reporter construct using an inducible promoter, we often observed the initial binding events of newly transcribed mRNAs to the PP7-mCherry at the membrane. Of these initial binding events, 44% of the mRNAs were associated with scFv-GFP fluorescence, indicating that they had already begun translation. However, the majority, 56% of mRNAs, initially appeared at the membrane in a non-translating state and subsequently converted to a translating state, usually within 1–5 min. These mRNAs are likely newly transcribed mRNAs that are translating for the first time, rather than mRNAs that have already undergone translation but transitioned temporarily to a non-translating state. In support of this argument, long-term imaging of single mRNAs reveals that mRNAs spend on average only 2.5% of their lifetime in such a temporary non-translating state, which is not sufficient to explain the 56% non-translating mRNAs that appeared at the membrane after synchronized transcription of the reporter. Rapid initiation of translation on newly transcribed mRNAs was described recently, but our assay additionally allows an analysis of polysome buildup on new mRNAs. Our analysis of the increase in scFv-GFP fluorescence indicates that, once the first ribosome begins chain elongation, additional ribosomes initiate translation with a rate indistinguishable from that on polysomes at steady state. We also examined the rate of fluorescence recovery after complete shutdown of translation and subsequent re-initiation. The polysome buildup on new transcripts was comparable to that observed for mRNAs that were cycling between translating and non-translating states. Several studies reported that ribosomes can pause or stall at a defined nucleic acid sequence with a regulatory function, at chemically modified or damaged nucleotides, or at regions in the RNA with a strong secondary structure. We found that a subset of mRNAs retained a bright scFv-GFP signal 15 min after harringtonine treatment, a time at which ribosomes translocating at ∼3 codons/s should have finished translating the reporter. A similar percentage of stalled ribosomes was observed on two additional reporter transcripts, both of which were designed using optimal codon usage. Ribosome stalling was also observed using hippuristanol, a translation initiation inhibitor with a different mechanism of inhibition, indicating that the stalling was not caused by harringtonine. We also observed stalls when examining ribosome runoff from non-tethered cytosolic mRNAs lacking PP7 binding sites. Importantly, stalls were not observed after puromycin treatment, and the prolonged scFv-GFP signal on mRNAs from harringtonine-treated cells rapidly disappeared upon the addition of puromycin, confirming that the observed signal indeed represents stalled ribosomes. The majority of mRNAs with stalled ribosomes could be tracked for >40 min, the typical duration of our harringtonine runoff experiments, indicating that they were not readily targeted by the no-go mRNA decay machinery within this time frame. Ribosome stalls could be due to defective ribosomes causing roadblocks on the mRNA or due to defects in the mRNA. These models can potentially be distinguished by examining how such stalls are resolved. A single defective ribosome will inhibit ribosome runoff until the stalled ribosome is removed, after which the remaining ribosomes will run off at a normal rate. In contrast, if the stalls are caused by defects to the mRNA, such as chemical damage, then each ribosome passing over the damaged nucleotide will be delayed, resulting in an overall slower scFv-GFP decay rate. Long-term tracking of stalled ribosomes on single mRNAs was consistent with the latter model, indicating that ribosome stalling is likely caused by defective mRNA.
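These two scenarios make qualitatively different predictions for the shape of the post-harringtonine decay, which the following toy simulation illustrates. It is only a sketch: positions, rates and stall parameters are arbitrary, the fraction of ribosomes remaining is used as a crude proxy for scFv-GFP signal, and queuing behind a blocking ribosome is ignored.

```python
import numpy as np

def runoff_blocked(positions, L, v, stall_pos, t_resolve):
    """Model A: one defective ribosome sits at stall_pos until t_resolve.
    Ribosomes already past it run off normally; those behind wait, then run off."""
    return np.sort(np.array([
        (L - x) / v if x > stall_pos else t_resolve + (L - x) / v
        for x in positions
    ]))

def runoff_lesion(positions, L, v, lesion_pos, pause):
    """Model B: every ribosome pauses for `pause` seconds when it crosses lesion_pos."""
    return np.sort(np.array([
        (L - x) / v + (pause if x < lesion_pos else 0.0) for x in positions
    ]))

L, v, N = 1500, 3.5, 15                       # ORF length (codons), speed, ribosomes
pos = np.sort(np.random.default_rng(1).uniform(0, L, N))
for name, t_off in [("blocking ribosome", runoff_blocked(pos, L, v, 750, 900)),
                    ("mRNA lesion",       runoff_lesion(pos, L, v, 750, 120))]:
    # Fraction of ribosomes still on the mRNA (signal proxy) at a few time points
    remaining = [(t_off > t).mean() for t in (0, 200, 400, 600, 800, 1000)]
    print(name, [f"{r:.2f}" for r in remaining])
```

The blocking-ribosome case produces a plateau followed by a normal-rate runoff once the block is cleared, whereas the lesion case produces a continuous but slower decay.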
Consistent with the hypothesis that chemical damage to mRNA causes ribosome stalling, treatment of cells with 4-nitroquinoline 1-oxide (4NQO), a potent nucleic-acid-damaging agent that causes 8-oxoguanine modifications and stalls ribosomes in vitro, resulted in a slow runoff on the majority of mRNAs, indicating widespread ribosome stalling. Thus, chemical damage to mRNAs stalls ribosome elongation in vivo. Regulated ribosome pausing occurs both in vitro and in vivo at asparagine 256 in the stress-related transcription factor Xbp1, and this ribosome pausing is important for membrane targeting of the mRNA. To test whether our translation imaging system could recapitulate such translation pausing, we introduced a strong ribosome-pausing sequence into the 3′ region of the coding sequence of our reporter. Harringtonine ribosome runoff experiments on the Xbp1 reporter revealed a delay in ribosome runoff, confirming that our reporter faithfully reproduced the ribosome-pausing phenotype. To study the behavior of individual ribosomes on the Xbp1 ribosome-pausing sequence, we tracked single mRNAs during ribosome runoff. Surprisingly, the fluorescence decay was not linear, as would be expected if each ribosome paused a similar amount of time on the pause site. Rather, fluorescence decay occurred in bursts interspaced with periods in which no decay was detectable. These results indicate that most ribosomes are only briefly delayed at the Xbp1 pause site, but a small subset of ribosomes remain stalled for an extended period of time, explaining the strong ribosome stalling phenotype observed in ensemble experiments. We also applied our assay to study the transcript-specific translational regulation of Emi1, a key cell-cycle regulatory protein. Our recent work reported strong translational repression of Emi1 during mitosis and found that the 3′ UTR of Emi1 is involved in this regulation, but a role of its 5′ UTR in translational regulation was not established. Interestingly, Emi1 has at least two splicing isoforms that differ in their 5′ UTR sequence: NM_001142522.1 and NM_012177.3. We found that a GFP protein fused downstream of the 5′ UTR_long was expressed at 40-fold lower levels than a GFP fused to the 5′ UTR_short. Such a difference in protein expression could be due to a difference in transcription rate, mRNA stability, or reduced translation initiation or elongation rates. To distinguish between these possibilities, we prepared translation reporter constructs bearing either the short or long 5′ UTR of Emi1. Robust translation was observed on ∼50% of mRNAs encoding the short 5′ UTR. In contrast, the majority of transcripts encoding the Emi1 5′ UTR_long showed no detectable translation, and of the translating mRNAs, only very weak scFv-GFP fluorescence was usually detected. Surprisingly, however, a very small fraction of mRNAs containing the 5′ UTR_long was associated with a bright scFv-GFP signal, indicating that they are bound to many ribosomes. This was not due to ribosome stalling and subsequent accumulation of ribosomes on a subset of mRNAs, as this bright scFv-GFP signal rapidly dissipated upon harringtonine treatment, indicating that these mRNAs were translated at high levels. Calculation of the total number of ribosomes associated with the mRNAs, based upon scFv-GFP fluorescence intensity, revealed that 52% of all ribosomes translating the Emi1 5′ UTR_long reporter were associated with the minor fraction of mRNAs with the highest scFv-GFP intensity. These results indicate that the great majority of 5′ UTR_long transcripts are strongly translationally repressed but that a small subset of these mRNAs escape repression and undergo robust translation. Thus, substantial heterogeneity in translational efficiency can exist among different mRNA molecules within the same cell. Interestingly, with the Emi1 5′ UTR_long reporter, we often observed the abrupt appearance of a weak scFv-GFP signal on a transcript that was previously translationally silent. The GFP signal initially increased over time, plateaued, and then was abruptly lost after 6–8 min. This type of signal is best explained by a single ribosome sequentially decoding the 24 SunTag peptides on the mRNA, followed by the release of the newly synthesized polypeptide upon completion of translation. Consistent with this hypothesis, the absolute fluorescence intensity of such translation events at the plateau phase was very similar to the intensity of a single fully synthesized SunTag24x-Kif18b protein. The duration of the scFv-GFP signal per translation event could be converted to a translocation speed of single ribosomes, which revealed an average elongation rate of 3 codons/s. This value is similar to that determined from our bulk measurements of harringtonine-induced ribosome runoff or natural translational initiation shutdown and runoff, indicating that ribosome elongation was not affected by the Emi1 5′ UTR_long. Comparison of translocation rates obtained from single ribosome translation events also revealed heterogeneity in the decoding speed of individual ribosomes in vivo.
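The conversion from signal duration to a single-ribosome elongation rate is simple; in the sketch below, the codon count downstream of the first epitope is an assumed, illustrative value rather than the exact reporter geometry.

```python
# The scFv-GFP signal of a single translation event spans the time from synthesis
# of the first epitope to release of the finished protein, i.e. the time needed to
# translate the ORF downstream of the first SunTag peptide.
codons_after_first_epitope = 1450   # assumed reporter geometry, for illustration
signal_duration_s = 7.5 * 60        # signal visible for roughly 6-8 min
print(codons_after_first_epitope / signal_duration_s, "codons/s")  # ~3 codons/s
```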
Using the SunTag system, we have developed an imaging method that measures the translation of individual mRNAs in living cells. Immobilization of mRNAs on the plasma membrane allows the long-term observation of translation of single mRNA molecules, which enables analyses of translational initiation, elongation, and stalling in live cells for the first time. Under conditions of infrequent translational initiation, we can even observe a single ribosome decoding an entire mRNA molecule. Our observations reveal considerable and unexpected heterogeneity in the translation properties of different mRNA molecules derived from the same gene in a single cell, with some not translating, others actively translating with many ribosomes, and others bound to stalled ribosomes. The SunTag translation imaging assay should be applicable to many different cell types, including neurons and embryos, in which the localization and control of protein translation is thought to play an important role in cell function. Ribosome profiling, a method in which fragments of mRNAs that are protected by the ribosome are analyzed by deep sequencing, has found widespread use in measuring translation. The strength of ribosomal profiling lies in its ability to measure translation of endogenous mRNAs on a genome-wide scale. However, a limitation of ribosome profiling is the need to pool mRNAs from many thousands of cells for a single measurement. Thus, ribosome profiling in its present form cannot be used to study translation heterogeneity between different cells in a population or among different mRNA molecules in the same cell. Furthermore, since ribosome profiling requires cell lysis, only a single measurement can be made for each sample, limiting studies of temporal changes. A number of single-cell translation reporters have been developed based on fluorescent proteins. Such reporters generally rely on the accumulation of new fluorescence after the assay is initiated. Advantages of these systems are that they are generally easy to use and have single-cell sensitivity. However, they do not provide single-mRNA resolution, often do not allow continuous measurement of translation, and do not report on ribosome initiation and elongation rates. Finally, two methods were developed recently to image translation on single mRNAs in vivo. In one approach, the first round of translation is visualized. This method, however, does not allow continuous measurements of translation. The second approach involves measurements of the number of ribosomes bound to an mRNA using fluorescence fluctuation spectroscopy. The advantage of this method is that it can detect binding of a single fluorescent protein to an mRNA, and different subcellular sites can be probed to study spatial differences in translation. The limitation of this method, though, is the inability to follow translation of single mRNAs over time, as these mRNAs cannot be tracked in the cell. SunTag-based translation imaging assays are unique thus far in their ability to follow translation of individual mRNAs over time. This translation assay can be employed with either freely diffusing or tethered mRNAs, the choice of which will depend on the biological question to be addressed. In the study by Wang et al., translation is observed in distinct spatial compartments in neurons using a similar SunTag-based translation imaging method with non-tethered mRNAs. In contrast, for studying ribosome translocation dynamics, the tethering assay provides the ability to track a single mRNA throughout the duration of the ribosome elongation cycle. Using this assay, we could measure polysome buildup rates over time, observe mRNAs cycling between translating and non-translating states, uncover heterogeneity in translation initiation rates, and even observe a single ribosome translating an entire transcript. These measurements were aided by the vastly improved signal-to-noise of the tethered assay and the ability to easily track slowly diffusing tethered mRNAs for an hour or more. These long-term observations allowed us to discover that mRNAs can reversibly switch between a translating and non-translating state and have a high variability in pause duration at the Xbp1 site. Thus, the untethered and tethered SunTag assays provide means to study translation of single mRNA molecules, which will be applicable to a wide variety of biological questions and will be complementary to existing methods of studying translation. A drawback of our assay is the need to insert an array of SunTag peptide repeats into the mRNA of interest to fluorescently label the nascent polypeptide and the need to insert an array of PP7 binding sites in the 3′ UTR to label the mRNA. As is true of any tagging strategy, these modifications could interfere with translation and/or mRNA stability under certain conditions. We have performed a number of control experiments to ensure that binding the scFv-GFP to the nascent chain and tethering of the transcript to the membrane do not grossly perturb translation. We have also shown that ribosome translocation rates and ribosome density are similar when using a reporter with a very short or long SunTag peptide array and comparing tethered and non-tethered mRNAs, indicating that many aspects of translation are not perturbed in our assay. Nevertheless, tethering of certain mRNAs to the plasma membrane may influence translation, especially for those mRNAs that undergo local translation in a specific compartment of the cell. Thus, our assay has unique advantages for certain types of measurements of translation, but appropriate controls should be performed for each experimental system or objective. Using our system, we measured the ribosome translocation speed on single mRNA molecules. Ribosome translocation rates have been measured in bulk previously in mouse embryonic stem cells, which yielded a translocation rate of 5.6 codons/s. Our values of 3–5 codons/s are in general agreement with those published values and very similar to those measured by Wang et al. Our experiments, and those of Wang et al., are the first to measure ribosome translocation rates for a single mRNA species, in single cells and on single mRNAs, which provides new opportunities to study regulation of translation elongation. We also found that translation initiation can shut down temporarily on individual mRNAs and rapidly restart. Such shutdown of translation initiation could be due to transient loss of eIF4E binding to the mRNA cap, mRNA decapping followed by recapping, or transient binding of regulatory proteins. Using our mRNA tethering assay, binding and unbinding of single proteins to translating mRNA could potentially be observed using total internal reflection fluorescence, which could open up many additional possibilities for studying translational regulation at the single-molecule level. The pioneer round of translation, the first ribosome to initiate translation on a newly transcribed mRNA, may be especially important, as it is thought to detect defects in the mRNA, including premature stop codons. A recently developed translation biosensor can detect the location of this pioneer round of translation. However, what happens after the first ribosome initiates translation is unknown. We found that the translation initiation rate on our reporter mRNA was similar on newly transcribed, recently shut down and re-initiating, and polysomal mRNAs, indicating that the initiation rate is independent of the number of ribosomes bound to the mRNA. The presence of introns in a gene may also affect translation initiation on newly transcribed mRNAs, which could be tested in future studies. A subset of ribosomes stall on mRNAs in a sequence-independent fashion. One possible explanation for this is that ribosome stalling is caused by naturally occurring mRNA "damage". Previous studies have found that the 8-oxoguanine modification occurs on mRNA in vivo, and such modifications cause ribosome stalling in vitro and in vivo. Alternatively, while we have performed numerous control experiments, we cannot completely exclude that the observed stalling on a small subset of mRNAs is an artifact of our construct or assay. We also observe ribosome pausing in a sequence-dependent fashion on the pause site of the Xbp1 transcription factor. Such pausing had been observed previously in bulk measurements, but our quantitative analysis of single mRNAs revealed a high degree of variability in ribosome pausing at this site. Finally, we show that the 5′ UTR sequence of one Emi1 transcript isoform severely inhibits translation initiation. A likely explanation for this effect is the presence of several upstream open reading frames in this sequence. Surprisingly, a small number of mRNA molecules encoding this 5′ UTR do undergo high levels of translation. It is possible that highly translating mRNAs are generated through alternative downstream transcription start site selection, which generates an mRNA that lacks the repressive sequence. Alternatively, translation could occur if the 5′ UTR repressive sequence is cleaved off, followed by recapping after transcription, if a repressive protein factor dissociates, or if an inhibitory RNA secondary structure unfolds. Further studies will be required to distinguish between these possibilities. In summary, here we have developed an imaging method that enables the measurement of ribosome initiation and translocation rates on single mRNA molecules in live cells. Future developments of this technology could include simultaneous observation of single translation factors or other regulatory molecules together with mRNAs and nascent polypeptides, which would provide a very powerful system to dissect the molecular mechanisms of translational control. U2OS and HEK293 cells were grown in DMEM/5% with Pen/Strep. Plasmid transfections were performed with Fugene 6, and stable transformants were selected with zeocin. Unless noted otherwise, reporter transcripts were expressed from a doxycycline-inducible promoter, and expression of the reporter was induced with 1 μg/mL doxycycline for 1 hr before imaging. Harringtonine was used at 3 μg/mL. 5 μM 4NQO was added to cells for 1 hr before imaging. Puromycin was used at 100 μg/mL. Hippuristanol was used at 5 μM. Cycloheximide was used at 200 μg/mL. Sequences of constructs used in this study are provided in the Supplemental Experimental Procedures. Cells were grown in 96-well glass bottom dishes. Images were acquired using a Yokogawa CSU-X1 spinning disk confocal attached to an inverted Nikon TI microscope with Nikon Perfect Focus system, 100× NA 1.49 objective, an Andor iXon Ultra 897 EM-CCD camera, and Micro-Manager software. Single z-plane images were acquired every 30 s unless noted otherwise. During image acquisition, cells were maintained at a constant temperature of 36°C–37°C. Camera exposure times were generally set to 500 ms, unless noted otherwise. We note that stable expression of PP7-mCherry, either with or without the CAAX domain, also resulted in an accumulation of mCherry signal in lysosomes, but lysosomes could be readily distinguished from mRNA foci based on signal intensity and mobility. GFP and scFv-GFP, mCherry, PP7-mCherry, or PP7-2xmCherry-CAAX were expressed from a constitutive promoter, while the two reporters, SunTag24x-mCherry and GFP-PP724x, were expressed from an inducible promoter in U2OS cells expressing the Tet repressor protein, and their expression was induced 24 hr after transfection using doxycycline. This ensured that the reporters were translated in the presence of high levels of the scFv-GFP and PP7-2xmCherry-CAAX proteins. Cells were collected one day after doxycycline induction and analyzed by FACS. Cells were gated for GFP and mCherry double positivity, and the mCherry and GFP levels were analyzed using FlowJo v10.1. For a detailed description of image analysis and quantification, see the Supplemental Experimental Procedures. M.E.T. conceived of the project with input from R.D.V.; X.Y., T.A.H., and M.E.T. performed the experiments and analyzed the data. All authors interpreted the results. X.Y. developed the mathematical model. X.Y., M.E.T., and R.D.V. wrote the manuscript with input from T.A.H. | Regulation of mRNA translation, the process by which ribosomes decode mRNAs into polypeptides, is used to tune cellular protein levels. Currently, methods for observing the complete process of translation from single mRNAs in vivo are unavailable. Here, we report the long-term (>1 hr) imaging of single mRNAs undergoing hundreds of rounds of translation in live cells, enabling quantitative measurements of ribosome initiation, elongation, and stalling. This approach reveals a surprising heterogeneity in the translation of individual mRNAs within the same cell, including rapid and reversible transitions between a translating and non-translating state. Applying this method to the cell-cycle gene Emi1, we find strong overall repression of translation initiation by specific 5′ UTR sequences, but individual mRNA molecules in the same cell can exhibit dramatically different translational efficiencies. The ability to observe translation of single mRNA molecules in live cells provides a powerful tool to study translation regulation. |
31,448 | An integrated and quantitative approach to petrophysical heterogeneity | Petrophysics is the study of rock properties and their interactions with fluids. We can define a number of petrophysical properties, for example porosity, saturation, and permeability, and many of these depend on the distribution of other properties such as mineralogy, pore size, or sedimentary fabric, and on the chemical and physical properties of both the solids and fluids. Consequently petrophysical properties can be fairly constant throughout a homogeneous reservoir, or they can vary significantly from one location to another in an inhomogeneous or heterogeneous reservoir. This variation would be relatively easy to describe if petrophysical analysis was only applied at a single scale and to a constant measurement volume within the reservoir. While many petrophysical measurements are typically made in the laboratory at a core plug scale or within the borehole at a log scale, fluid distribution is controlled at the pore scale by the interaction of fluids and solids through wettability, surface tension and capillary forces, at the core scale by sedimentary facies, fabrics or texture, and at bed-to-seismic scales by the architecture and spatial distribution of geobodies and stratigraphic elements. Note we use the words fabric and texture here to indicate generic spatial organisation or patterns. At each scale of measurement various heterogeneities may exist, but it is important to note that a unit which appears homogeneous at one scale may be shown to be heterogeneous at a finer scale, and vice versa. Clearly, as more detailed information is obtained, reservoir characterisation and the integration of the various data types can become increasingly complex. It is important to fully understand the variability and spatial distribution of petrophysical properties, so that we can understand whether there is any pattern to the variability, and appreciate the significance of simple averages used in geologic and simulation modelling. This is especially true in the case of complex hydrocarbon reservoirs that have considerable variability. Carbonate reservoirs often fall into this category, and the term heterogeneous is often used to describe a reservoir that is complex and evades our full understanding. Indeed, an early definition states heterogeneous as meaning extraordinary, anomalous, or abnormal. Most, if not all, of the literature on reservoir characterisation and petrophysical analysis refers to the heterogeneous nature of the reservoir under investigation. Heterogeneity appears to be a term that is readily used to suggest the complex nature of the reservoir, and authors often assume the reader has a pre-existing knowledge and understanding of such variability. No single definition has been produced and consistently applied. Researchers have started to investigate the quantification of various heterogeneities and the concept of heterogeneity as a scale-dependent descriptor in reservoir characterization. Here we review what heterogeneity means, and how it can be described in terms of geological attributes, before discussing how the scale of geological heterogeneity can be related to the measurement volumes and resolution of traditional subsurface data types. We then discuss using a variety of statistical techniques for characterising and quantifying heterogeneity, focussing on petrophysical heterogeneities. We focus here on the principles and controls on the statistics and measures, before applying these to real reservoir data in four case studies. In doing so, we consider approaches used in a range of scientific disciplines to explore definitions and methods which may be applicable to petrophysical analysis. These statistical techniques are then applied to reservoir sub-units to investigate their effectiveness for quantifying heterogeneity in reservoir datasets. Heterogeneity refers to the quality or condition of being heterogeneous, and was first defined in 1898 as difference or diversity in kind from other things, or consisting of parts or things that are very different from each other. A more modern definition is something that is diverse in character or content. This broad definition is quite simple and does not comment on the spatial and temporal components of variation, nor does it include a consideration of directional dependence, often referred to as isotropy and anisotropy. Other words or terms that may be used with, or instead of, heterogeneity include: complexity, deviation from a norm, difference, discontinuity, randomness, and variability. Nurmi et al. suggest that the distinction between homogeneous and heterogeneous is often relative, and is based on economic considerations. This highlights how heterogeneity is a somewhat variable concept which can be changed or re-defined to describe situations that arise during production from a reservoir, and is heavily biased by the analyst's experience and expectations. Li and Reynolds and Zhengquan et al. state that heterogeneity is defined as the complexity and/or variability of the system property of interest in three-dimensional space, while Frazer et al. define heterogeneity, within an ecological model, as variability in the density of discrete objects or entities in space. These definitions suggest that heterogeneity does not necessarily refer to the overall system, or individual rock/reservoir unit, but instead may be dealt with separately for individual units, properties, parameters and measurement types. Frazer et al. commented that heterogeneity is an inherent, ubiquitous and critical property that is strongly dependent on scales of observation and the methods of measurement used. They studied forest canopy structure and stated that heterogeneity is the degree of departure from complete spatial randomness towards regularity and uniformity. This may seem, at first, counterintuitive because heterogeneity is commonly regarded as being complete spatial randomness. Here, the introduction of regular features, such as bedding in a geological context, adds to the heterogeneous nature of the formation in a structured or anisotropic manner. Nurmi et al. suggest that heterogeneity, in electrical borehole images, refers to elements that are distributed in a non-uniform manner or composed of dissimilar elements/constituents within a specific volume. Therefore, as well as looking at a specific element or property, it is also suggested that the volume of investigation influences heterogeneity, alluding to the scale-dependence of heterogeneities. Interestingly, Dutilleul comments that a shift of scale may create homogeneity out of heterogeneity, and vice versa, and suggests that heterogeneity is the variation in density of measured points compared to the variation expected from randomly spread points. In a discussion of the relationship between scale and heterogeneity in pore size, Dullien suggests that, to be a truly homogeneous system, random subsamples of a population should have the same local mean values. Lake and Jensen provide a flow-based definition in their review of permeability heterogeneity modelling within the oil industry. In this latter case, heterogeneity is defined as the property of the medium that causes the flood front to distort and spread as displacement proceeds; in this context the medium refers to the rock, and the fluid front is the boundary between displacing and displaced fluids. Thus many authors provide the foundation from which we begin to see that heterogeneity may be a quantifiable term. Pure homogeneity, with regard to a reservoir rock, can be visualised in a formation that consists of a single mineralogy with all grains of similar shapes and sizes and no spatial organization or patterns present; in this example, similar grain shapes and sizes, together with a lack of spatial patterns, would lead to a uniform distribution of porosity and permeability. Therefore, ignoring the scalar component of heterogeneity for a moment, there are two contrasting examples of heterogeneity in a reservoir rock. The first example is a formation of consistent mineralogy and grain characteristics that has various spatial patterns. The second example has no spatial organisation but has variable mineralogy and grain size and shape, i.e. it is a poorly sorted material.
Both are clearly not homogeneous, but which has the stronger heterogeneity? Quantifying the degree of heterogeneity would enable these two different systems to be differentiated from each other, and in turn these values may be related to other characteristics such as reservoir quality. In attempting to quantify heterogeneity we can consider several approaches. It is probably best, however, to start by defining the degree of heterogeneity in relation to the nature of the investigation; for example, in a study of fluid flow, sedimentological structures may be of more importance than variation in mineralogy. In contrast, in an investigation of downhole gamma ray variability the mineralogical variability would be more relevant than any spatial variation. Lake and Jensen suggest that there are five basic types of heterogeneity in earth sciences: spatial (lateral, vertical and three-dimensional); temporal (one point at different times); functional (taking correlations and flow-paths into account); structural (either unconformities or tectonic elements, such as faults and fractures); and stratigraphic. Formations may have regular and penetrative features such as bedding and cross-bedding, or alternatively less regularly distributed features, including ripples, hummocky cross-bedding, and bioturbation. The intensity, frequency and orientation of such features may additionally reflect repetition or repetitive patterns through the succession. A heterogeneity, in terms of the grain component, may appear rhythmic or repeated, patchy, gradational/transitional, or again it may be controlled by depositional structures. Homogeneity and heterogeneity can be considered as end members of a continuous spectrum, defining the minimum and maximum heterogeneity, with zero heterogeneity equating to homogeneity. There are a number of characteristics that occur in both end-member examples provided above. Neither end-member is obviously more heterogeneous than the other; there may indeed be a relative scale difference between the two examples. Some researchers may perceive a regularly structured system, for example a laminated or bedded reservoir, as homogeneous because these structures are spatially continuous and occur throughout the formation. The presence of structures within a formation is, however, more commonly interpreted as a type of heterogeneity, regardless of how regular their distribution. In this scenario, the structures represent deviation from the homogeneous mono-mineralic 'norm'. Equally, the concept of increased heterogeneity could be viewed as an increase in the random mixing of components of a formation. Here, as the formation becomes more heterogeneous there is less spatial organization present, so that the formation has the same properties in all directions, i.e., it is isotropic. Although the rock is more heterogeneous, the actual reservoir properties become more homogeneous throughout the reservoir as a whole. If grain size alone varies, two possible extremes of heterogeneity may occur. An example where there is a complete mix of grain sizes that shows no evidence of sorting would be classified as a heterogeneous mixture in terms of its components. The mixture itself would appear isotropic, however, because on a larger scale the rock properties would be the same in all directions. If this mixture of grain sizes was completely unsorted, then the grains would be completely randomly distributed and the rock would appear homogeneous at a larger scale. In another example, where a formation has continuous and discontinuous layers of different grain sizes, the individual layers of similar grain size may appear homogeneous; however, if one looks at a contact between two layers, or at the complete formation, then the heterogeneity will be much more obvious. This may be classed as a 'structural' or 'spatial' heterogeneity, again depending upon the scale of investigation. When defining a measure of how heterogeneous a system property is, it is important to consider only those components of heterogeneity that have a significant impact on reservoir properties and production behaviour/reservoir performance. This leads to the discussion of heterogeneity as a scale-dependent descriptor in the next section. Regardless of reservoir type, geological heterogeneity exists across a gradational continuum of scales. Observations from outcrop analogues have been used to characterise and quantify these features. Hierarchies of heterogeneity are now frequently used to classify these heterogeneities over levels of decreasing magnitude within a broad stratigraphic framework. Heterogeneity hierarchies have been developed for wave-influenced shallow marine reservoirs, fluvial reservoirs, fluvio-deltaic reservoirs, and carbonate reservoirs. These hierarchies break the continuum of scales of geologic and petrophysical properties into key classes or ranges. A single property can differ across all scales of observation. Porosity in carbonates is an example of a geological property that can exist, and vary, over multiple length-scales. In carbonate rocks pore size can be seen to vary from less than micrometre-size micro-porosity to millimetre-scale inter-particle and crystalline porosity. Vugs are commonly documented to vary in size from millimetres to tens of centimetres. Additional dissolution and erosion may create huge caves, or "mega-pores". In order to investigate heterogeneity at different scales and resolutions, the concept of "scale" and how it relates to different parameters is considered. Figure 2 illustrates the scales of common measurement volumes and their relationship to geological features observed in the subsurface. While geological attributes exist across the full range of length-scales, subsurface measurements typically occur at specific length-scales depending upon the physics of the tool used; for example, seismic data at the kilometre scale, well logs at the centimetre to metre scale, and petrophysical core measurements at millimetre to centimetre scales. In general the in situ borehole and core measurement techniques are considered to interrogate a range of overlapping volumes, but in reality a great deal of "white space" exists between individual measurement volumes. How a measurement relates to the scale of the underlying geological heterogeneity will be a function of the resolution of the measurement device or tool used. The analyst or interpreter should ensure that appropriate assumptions are outlined and documented. The issue of how the scale and resolution of a measurement will be impacted by heterogeneity can be represented through the concept of a Representative Elementary Volume (REV), which characterises the point at which increasing the size of a data population no longer impacts the average, or upscaled, value obtained. The REV concept lends itself to an extensive discussion on upscaling and the impact of heterogeneity on flow behaviour, which are beyond the current scope of this study. Examples of previous studies into REV, sampling and permeability heterogeneity include Haldorsen, Corbett et al., Nordahl and Ringrose, and Vik et al. Different wireline log measurements, for example, will respond to, and may capture, different parts or scales of geological heterogeneity. The geological features that exist below the resolution of the tools shown in Figure 2 will in effect be averaged out in the data. Figure 3 shows how the heterogeneity of a formation can vary depending on the scale at which we sample the formation. Examples are shown for three distinct geological features: beds of varying thickness only; a set of graded beds, again of varying thicknesses; and a "large" and "small" core sample for two sandstone types. A quantitative assessment of whether a formation appears homogeneous or heterogeneous to the measurement tool as it travels up the borehole is possible. The degree of measured heterogeneity will also change as the measurement volume changes; shallow measurements will sample smaller volumes, whereas deep measurements will sample large volumes. Assessment of thinly bedded siliciclastic reservoirs highlights the issues of correlating geological-petrophysical attributes to petrophysical measurement volumes. Thin beds are defined geologically as being less than 10 cm thick, whereas a "modern" petrophysical thin bed is referred to as less than 0.6 m in thickness, a definition that reflects the vertical resolution of most porosity and resistivity logs. The micro-resistivity logs have higher vertical resolutions and so can recognise thin beds on a scale that is more consistent with the geological scale. Figure 3 illustrates how alternating high and low porosity thin beds that are significantly below the resolution of typical wireline well logs would appear as low variability within the measurement volume. Up-scaling from core measurements to petrophysical well log calibration, and eventually to subsurface and flow simulation models of the reservoir at approximately seismic scale, is a related topic. This process of upscaling represents a change of scale, and hence properties may change from being heterogeneous at one scale to homogeneous at another scale. A discussion of up-scaling is beyond the scope of this paper. To summarise, 'heterogeneity' may be defined as the complexity or variability of a specific system property in a particular volume of space and/or time. Effectively there is the intrinsic heterogeneity of the property itself and the measured heterogeneity as described by the scale, volume and resolution of the measurement technique. Having defined heterogeneity, we consider a variety of statistical techniques that can be used to quantify heterogeneity. Techniques are grouped into two themes: characterising the variability in a dataset; and quantifying heterogeneity through heterogeneity measures. Firstly, we illustrate how standard statistics can be used to characterize the variability or heterogeneity in a carbonate reservoir. Secondly, we use four simple synthetic datasets to illustrate the principles of and controls on three common heterogeneity measures, before applying the heterogeneity measures to the porosity data from two carbonate reservoirs, a comparison of core and well log-derived porosity data in a clastic reservoir, core measured grain density as a proxy for mineralogic variation in a carbonate reservoir, and gamma ray log-derived bedding heterogeneities in a clastic reservoir. The core-calibrated well log-derived porosity data from an Eocene-Oligocene carbonate reservoir are used to illustrate the concepts for characterising heterogeneity. Formation A is c. 75 m in vertical thickness, and is dominated by wackestone and packstone facies, with carbonate mudstone and grainstone interbeds. Formation B is c. 54 m in vertical thickness, and is composed of grain-rich carbonate facies. Micro- and matrix-porosity dominate Formations A and B in the form of vugs and inter- and intra-granular porosity. Metre-thick massive mudstone interbeds are observed toward the top of Formation A. The mudstone is suggested to be slightly calcareous and dolomitic in nature, with trace disseminated pyrite. A simple glance at the wireline data for this reservoir suggests Formation A is more variable or "heterogeneous". An early step in completing a routine petrophysical analysis is often to produce cross plots of the well log data; these give additional visual clues as to the presence of heterogeneities within the data. Formation A has a diverse distribution of values across the bulk density – neutron porosity cross plot, indicating its more heterogeneous character when compared to Formation B, which is more tightly clustered. The bulk density – neutron porosity cross plot reflects the varied facies and porosity systems of Formation A, in comparison to the carbonate packstone-grainstone dominated Formation B with a more uniform porosity system. Basic statistics can be used to characterise the variation in the distribution of values within a population of data. The basic statistics and histogram for the values of wireline log-derived porosity for Formations A and B clearly reflect different variability within the data populations. Log-derived porosity in Formation A is skewed toward lower values around a mean value of 8.5%, with a moderate kurtosis. The statistics for the log-derived porosity of Formation B record a tendency toward higher values around a mean of 21.9% and a stronger kurtosis. The standard deviation of values around the mean is moderate for both formations. This suggests that values are neither tightly clustered nor widely spread around the mean, although we note that the standard deviation for Formation B is one unit lower. These basic statistics can be used to characterise variation within a dataset, producing a suite of numerical values that describe data distributions. However, we need to complete and understand the full suite of statistical tests to achieve what is still a fairly general numerical characterisation of heterogeneity. We note that we could not use a similar suite of statistics to directly compare the variability between different data types that occur at different scales, as the range of values has a strong control on the outputs, for example comparing the variability in porosity with permeability. Thus, when using basic statistics, there is no single value that adequately defines the quantitative heterogeneity of a dataset as being "x" and that would enable direct comparison of different well data, formations and reservoirs. Instead, to achieve a direct heterogeneity comparison that is both robust and useful, we must consider established heterogeneity measures. Measures used in quantifying heterogeneity use geostatistical techniques to provide a single value to describe the heterogeneity in a dataset. Published heterogeneity measures, such as the coefficient of variation and the Lorenz Coefficient, have been in common use throughout most scientific disciplines, and are frequently used in establishing porosity and permeability models in exploration.
wackestone and packstone facies, with carbonate mudstone & grainstone interbeds.Formation B is c.54 m in vertical thickness, and is composed of grain-rich carbonate facies.Micro- and matrix-porosity dominate Formations A and B in the form of vugs, inter- and intra-granular porosity.Metre-thick massive mudstone interbeds are observed toward the top of Formation A.The mudstone is suggested to be slightly calcareous and dolomitic in nature, with trace disseminated pyrite.A simple glance at the wireline data for this reservoir suggests Formation-A is more variable or “heterogeneous”.An early step in completing a routine petrophysical analysis is often to produce cross plots of the well log data; these give additional visual clues as to the presence of heterogeneities within the data.Formation-A has a diverse distribution of values across the bulk density – neutron porosity cross plot, indicating its more heterogeneous character when compared to Formation-B, which is more tightly clustered.The bulk density – neutron porosity cross plot reflects the varied facies and porosity systems of Formation-A, in comparison to the carbonate packstone-grainstone dominated Formation-B with a more uniform porosity system.Basic statistics can be used to characterise the variation in distribution of values within a population of data.The basic statistics and histogram for the values of wireline log derived porosity for Formations A and B clearly reflect different variability within the data populations.Log-derived porosity in Formation A is skewed toward lower values around a mean value of 8.5%, with a moderate kurtosis.The statistics for the log-derived porosity of Formation B records a tendency toward higher values around a mean of 21.9% and a stronger kurtosis.The standard deviation, of values around the mean, is moderate for both Formations.This suggests that values are neither tightly clustered nor widely spread around the mean, although we note that the standard deviation for Formation B is one unit lower.These basic statistics can be used to characterise variation within a dataset, producing a suite of numerical values that describe data distributions.However, we need to complete and understand the full suite of statistical tests to achieve what is still a fairly general numerical characterisation of heterogeneity.We note that we could not use a similar suite of statistics to directly compare the variability between different data types that occur at different scales as the range of values has strong control on the outputs, for example comparing the variability in porosity with permeability.Thus, when using basic statistics, there is no single value to adequately define the quantitative heterogeneity of a dataset as being “x”, that would enable direct comparison of different well data, formations and reservoirs.Instead, to achieve a direct heterogeneity comparison that is both robust and useful we must consider established heterogeneity measures.Measures used in quantifying heterogeneity use geostatistical techniques to provide a single value to describe the heterogeneity in a dataset.Published heterogeneity measures, such as the coefficient of variation and the Lorenz Coefficient, have been in common use throughout most scientific disciplines, and are frequently used in establishing porosity and permeability models in exploration.Four simple synthetic datasets are used to illustrate the impact of common types of variability in a dataset on the heterogeneity measures.These measures are then applied to 
specific heterogeneities in a series of case studies. Of the synthetic datasets, Dataset 1 is homogeneous with no internal variation, Dataset 2 is composed of two values representing a high and low setting, Dataset 3 comprises a simple linear increase in values, and Dataset 4 represents an exponential increase in values. For our synthetic test datasets, we see the coefficient of variation increase with heterogeneity; Cv = 0 (Dataset 1), Cv = 0.35 (Dataset 2), Cv = 0.55 (Dataset 3), and Cv = 2.82 (Dataset 4). The original Lorenz technique was developed as a measure of the degree of inequality in the distribution of wealth across a population. Schmalz and Rahme modified the Lorenz Curve for use in petroleum engineering by generating a plot of cumulative flow capacity against cumulative thickness, as functions of core measured porosity and permeability. Fitch et al. investigated the application of the Lorenz technique directly to porosity and permeability data. In our application of the Lorenz Coefficient, and to allow comparison of the heterogeneity in a single data type between the different measures, the cumulative of the property of interest, sorted from high to low values, is plotted against cumulative measured depth increment. In a purely homogeneous formation, the cumulative property will increase by a constant value with depth; this is known as the “line of perfect equality”. An increase in the heterogeneity of the property will cause a departure of the Lorenz Curve away from the line of perfect equality. The Lorenz Coefficient is calculated as twice the area between the Lorenz Curve and the line of perfect equality; a purely homogeneous system will return a Lorenz Coefficient of zero, while maximum heterogeneity is shown by a Lorenz Coefficient value of one. The Lorenz Coefficients generated for our synthetic test datasets demonstrate some of the key features of the Lorenz technique; Dataset 1 matches the line of perfect equality, returning a Lorenz Coefficient of zero, Datasets 2 and 3 return Lorenz Coefficient values of 0.16 and 0.25, respectively, and the exponential data of Dataset 4 returns a Lorenz Coefficient value of 0.86, and is clearly visible as the most heterogeneous data with the largest departure from the line of perfect equality on Figure 7B. Our synthetic datasets show significant differences in the Dykstra–Parsons plots produced and the resultant Dykstra–Parsons values; Dataset 1 VDP = 0.0, Dataset 2 VDP = 0.31, Dataset 3 VDP = 0.57, and Dataset 4 VDP = 0.99. The key advantage of using a heterogeneity measure is the ability to define the heterogeneity of a dataset as a single value, allowing direct comparison between different data types, reservoir units and fields. The coefficient of variation provides the simplest technique for generating a single-value measure of heterogeneity, with no data pre-processing required. By calculating the standard deviation as a fraction of the mean value we are looking at the variability within the data distribution, removing the influence of the original scale of measurement. As such the coefficient of variation should provide a more appropriate measure of the heterogeneity of a dataset than the basic statistics, one that can be compared between different measurement types and scales of observation. Lake and Jensen comment that the estimate of Cv is negatively biased, suggesting that the Cv estimated from data will be smaller than the value for the true population. Sokal and Rohlf suggest that care should be used in applying the coefficient of variation to ‘small samples’ and provide a simple correction. In addition the coefficient of variation should
only be applied to data which exist on a ratio scale with a fixed zero value, for example it is not appropriate for temperature measurement in Fahrenheit or Celsius.The coefficient of variation increases with heterogeneity to infinity as no upper limit is defined in the calculation.Lake and Jensen suggest that this is a major advantage in use of the coefficient of variation as a heterogeneity measure, in that it can distinguish extreme variation.However, we favour a heterogeneity measure with defined upper and lower limits, allowing a clear comparison of variation in different datasets with different scales, resolutions and hypothetical end-member values across a similarly scaled range.We note that Jensen and Lake suggest that high levels of heterogeneity are compressed in the case of the Dykstra–Parsons and Lorenz Coefficients, and urge caution when using these techniques on small datasets.The Lorenz Coefficient provides a simple graphical-based approach to visualising and quantifying heterogeneity.As heterogeneity in a dataset can only vary between zero and one, all data types can be easily compared, regardless of the scale of original measurement.This effectively removes the influence that the scale of the original data may have on magnitude of variability present, which would be described by the mean, standard deviation and other basic statistics.The Lorenz Coefficient values more accurately reflect the heterogeneity within a formation, and provide a measure that can be directly compared between different data types.Our initial work with the synthetic dataset suggests that low heterogeneity occurs around a Lorenz Coefficient of 0.16, moderate linear heterogeneity is associated with a Lorenz Coefficient of 0.25, and high-level exponential heterogeneity increases heterogeneity up to a Lorenz Coefficient of 0.86.We have not yet been able to generate a sufficiently heterogeneous dataset to return the maximum heterogeneity of Lorenz Coefficient = 1.0.For comparison, Lake and Jensen suggest that typical Lorenz Coefficient values, for cumulative flow capacity against cumulative thickness, in carbonate reservoirs ranges from 0.3 to 0.6.Fitch et al. show that the several orders of magnitude variability in permeability measurements play a major control in the heterogeneity recorded using the traditional Lorenz technique.The Dykstra–ParsonsCoefficient may be considered as a more statistically robust technique, but it is more complex and requires additional application and understanding of mathematical and statistical methodologies.Additionally, unlike the Lorenz plot, the Dykstra–Parsonsplot does not provide a simple graphical approach for visually comparing heterogeneity between datasets.Jensen and Currie and Rashid et al. 
provide discussion of the weakness of using a line of best fit to calculate heterogeneity, rather than the actual “raw” data points, placing weighting on the central portion of the data and decreasing the impact of high or low extreme values.However, as long as the technique is used consistently comparisons can be made between different data types and reservoir settings.A classification scheme based on the Dykstra–Parsons value exists for permeability variation where lower values represent small heterogeneities, while larger values indicate large to extremely large heterogeneities.Results from our initial trial using the synthetic data are comparable; with simple, small heterogeneities varying from VDP values of 0.3–0.6, and the large exponential heterogeneity producing a VDP value of 0.99.Lake and Jensen comment that most reservoirs have VDP values between 0.5 and 0.9.As with any data analysis and interpretation, understanding the measurement device used and what it is actually responding to within the subsurface is key, and this can aid in understanding what heterogeneities are being described and why.This suite of techniques can be easily applied to a range of datasets at a formation scale, providing a comprehensive understanding of heterogeneities and underlying controls.Jensen et al. comment that heterogeneity measures are not a substitute for detailed geological study, measurements and analysis.They suggest that, at this scale, heterogeneity measures provide a simple way to begin assessing a reservoir, guiding investigations toward more detailed analysis of spatial arrangement and internal reservoir structures which may not be shown directly.An overall summary of the heterogeneity measures and the advantages and disadvantages associated with each is provided in Figure 10 for quick reference.Each of these measures provides a quantitative estimate of the heterogeneity in a dataset."There is currently no best practice choice from these heterogeneity measures, indeed it seems that the choice of which measure one should use is based solely upon the analyst's preference, often based on experience, skills, and knowledge.The fact that all measures discussed here point toward similar numerical ranking of the heterogeneity present in the datasets investigated is reassuring.We have a preference for the Lorenz Coefficient as a heterogeneity measure.This uses a simple technique to produce both graphical and numerical indicators of heterogeneity that can be easily compared across a range of datasets, measurement, and reservoir types.In the final section of this manuscript we summarise the findings from four case studies as examples.Jensen and Lake demonstrate that both the Dykstra–Parson and Lorenz Coefficients provide only an estimate of the true heterogeneity, depending on the population size, sampling frequency and location.Sampling frequency and location will play an impact on the measured heterogeneity in a property; this is demonstrated in Case Study 2 below.An additional issue, not addressed by the three static heterogeneity measures discussed here, is spatial organisation of the property, or the non-uniqueness of the heterogeneity measure.Figure 11 provides examples of nine ‘simple’ heterogeneous layered models, each is composed of two sets of fifty layers assigned a value of 1 and 100, respectively.The layers in model A and B are grouped into separate high and low property domains, model Q alternates high and low property layers throughout, and models C to M represent a range in spatial 
organisation of the layers.The standard statistics are identical for each spatial model.The coefficient of variation, Lorenz Coefficient and Dykstra–ParsonsCoefficient are 0.985, 0.485 and 0.856, respectively, for each of the models regardless of spatial organisation of the heterogeneity.In the case of these permeability models, each will behave significantly differently under flow simulation in terms of fluid production, breakthrough time and sweep efficiency.There is a potential for modifying existing techniques to quantify variability while maintaining the spatial organisation of heterogeneity, for example the Stratigraphic Modified Lorenz Plot.The heterogeneity measures have been applied to the Eocene-Oligocene carbonate reservoir described above in terms of how standard statistics can be used to characterize variability in porosity measurements.To summarise the core-calibrated porosity log values describe Formation A as a moderate to highly variable porosity succession composed of predominantly low values around a mean value of 8.5%, and Formation B as a less variable succession of high porosity values spread around a mean of 21.9%.The coefficient of variation values for the porosity of Formation A is 0.532 and is reduced by c.70 % for Formation B.Formation A porosity values have a Lorenz Coefficient of 0.288, and Formation B has a Lorenz Coefficient of 0.085.The Dykstra–Parsons coefficient for the Formation A porosity values returns a VDP of 0.353 and Formation B, again, has lower heterogeneity with a VDP of 0.123.As with results from the synthetic data, it is reassuring that all three heterogeneity measures provide the same relative ranking of the two formations.Differences in the measures ranges by c.50% for both Formations A and B.This highlights that although we can compare heterogeneity between specific techniques, we should not attempt to compare heterogeneity values measured with the different techniques.To provide a comparison of how heterogeneity levels are captured at two scales of measurement we compare the core measured and well log-derived porosity and permeability data from a North Sea Jurassic sandstone reservoir using the Lorenz Coefficient.Permeability is clearly more heterogeneous than porosity in both measurement types.This reflects the difference in scale of measurement for permeability and porosity.Similar observations were made by Fitch et al. 
with regard to carbonate rock property data. Heterogeneity in the well log-derived data is typically lower than that of the core data. This observation relates to the irregular sampling of core measurements in comparison to continuous log measurements down a borehole. Resampling the well log porosity data illustrates that measured heterogeneity depends on sampling frequency and whether the sampling locations capture extreme values in a population. Figure 13c illustrates that decreasing sampling frequency and altering sample locations can enhance the range of heterogeneities recorded, supporting the study by Jensen and Lake. Additional work in this area has the potential of informing best practice sampling protocols in both industrial and scientific drilling. Analysis of grain density and porosity measurements from an Eocene carbonate reservoir allows for a simple comparison of the heterogeneity in grain- and pore-components of the two zones, by using grain density as a proxy for mineralogy and porosity as a proxy for facies, alongside sedimentological descriptions of the core plugs. Reservoir zone X is calcite dominated, with a range in facies from carbonate mudstone to wackestone and packstone. Low variability in the grain density data and large variability in porosity with facies type are observed in the raw data, and are reflected in Lorenz Coefficient heterogeneities of 0.028 and 0.334, respectively. Reservoir zone Y is composed of wackestone and packstone facies, with dolomite and disseminated pyrite observed in thin section. Consequently, porosity variability appears lower with a Lorenz Coefficient of 0.198, while grain density heterogeneity is almost twice as high as that of reservoir zone X. In reservoir characterisation studies, heterogeneity measures are traditionally applied to permeability and porosity data. This pilot study indicates that there is potential to apply the techniques to quantify other types of heterogeneity that are described by any numerical data. These may include other rock property data, digitized sedimentological descriptions, and borehole image facies analysis. The gamma ray log from the North Sea Jurassic sandstone reservoir outlined in Case Study 2 is used to provide an example of how heterogeneity in bedding can be investigated using the Lorenz Coefficient. Figure 15 illustrates how different gamma ray API values can be used as thresholds to define “bed boundaries”. Different threshold values will impact not only the bed locations but also how many beds are identified and the variability in bed thickness through the succession. By converting the presence of consecutive beds into a binary code we can calculate the heterogeneity in bed thickness. As the gamma ray threshold is increased above 50 API the number of beds is decreased, but the thickness of beds is increased, reflected in a decrease in the heterogeneity level. The lowest GR threshold of 40 API identifies two beds with a bedding heterogeneity of 0.14. A gamma ray threshold of 50 API generates a large number of illogically placed bed boundaries, and subsequently has a higher bedding heterogeneity of 0.34. The original gamma ray log gives a Lorenz Coefficient heterogeneity value of 0.288, which is replicated by the bedding succession identified using a threshold of 120 API. Visual comparison suggests that appropriate bed boundaries between mudstone and sandstone layers are picked using this simple technique, supported by a similar level of heterogeneity being captured. Although this is a somewhat simple application, with
a major assumption that the gamma ray signature is only caused by the presence of clay minerals and that bed thickness is greater than the vertical resolution of the gamma ray log, application of this type of analysis could be made to selecting appropriate grid block size in high resolution geological models and subsequent upscaling of rock properties.Further investigations of heterogeneities that occur across a range of length scales in datasets, or with different measurement resolutions may aid our understanding of the scale of variability in reservoir heterogeneity, for example, incorporating core, image logs and numerical sedimentological observations.The term “heterogeneity” can be defined as the variability of an individual or combination of properties within a known space and/or time, and at a specified scale.Heterogeneities within complex hydrocarbon reservoirs are numerous and can co-exist across a variety of length-scales, and with a number of geological origins.When investigating heterogeneity, the type of heterogeneity should be defined in terms of both grain/pore components and the presence or absence of structural features in the widest sense.Hierarchies of geological heterogeneity can be used alongside an understanding of measurement principles and volumes of investigation to ensure we understand the variability in a dataset.Basic statistics can be used to characterise variability in a dataset, in terms of the amplitude and frequency of variations present but a better approach involves heterogeneity measures because these can provide a single value for quantifying the variability.Heterogeneity measures also provide the ability to compare this variability between different datasets, tools/measurements, and reservoirs.Three separate heterogeneity measures have been considered here:The coefficient of variation is a very simple technique, comparing the standard deviation of a dataset to its mean value.A value of zero represents homogeneity, but there is no maximum value associated with extreme heterogeneity.Individual measurement scales will influence the documented heterogeneity level, and therefore comparison between different datasets is limited,The Lorenz Coefficient is a relatively simple yet robust measure that provides graphical and numerical outputs for interpretation and classification of variability in a dataset, where heterogeneity varies between zero and one.The Dykstra–Parsonscoefficient is a more complex technique, requiring greater understanding of statistical methods.Numerical output defines a value of heterogeneity between zero and one.Initial work incorporating synthetic and subsurface datasets allows the prior assumptions and classification schemes for each measure to be tested and refined.Application to a wider selection of subsurface data types, and from a range of complex reservoir types and geographic locations will enhance our understanding of the link between geological and petrophysical heterogeneity.Drawing on a larger volume of examples, this work may also indicate one heterogeneity measure to be of more use than another."At this time, the choice between heterogeneity measures ultimately depends upon the objectives of the analysis, together with the analyst's preference, often based on experience, skills, and knowledge.Beyond the results presented here, but taking account of published research, integration of heterogeneity analysis from outcrop and subsurface examples with geocellular and simulation modelling experiments investigating the impact of 
geologic features on flow behaviour may help streamline both exploration and production phases by focussing attention on what it is important to capture, at what scale and which of the data types is of most use in characterising heterogeneity in petrophysical properties. | Exploration in anything but the simplest of reservoirs is commonly more challenging because of the intrinsic variability in rock properties and geological characteristics that occur at all scales of observation and measurement. This variability, which often leads to a degree of unpredictability, is commonly referred to as "heterogeneity", but rarely is this term defined. Although it is widely stated that heterogeneities are poorly understood, researchers have started to investigate the quantification of various heterogeneities and the concept of heterogeneity as a scale-dependent descriptor in reservoir characterization.Based on a comprehensive literature review we define "heterogeneity" as the variability of an individual or combination of properties within a specified space and/or time, and at a specified scale. When investigating variability, the type of heterogeneity should be defined in terms of grain - pore components and the presence or absence of any dominant features (including sedimentological characteristics and fractures). Hierarchies of geologic heterogeneity can be used alongside an understanding of measurement principles and volumes of investigation to ensure we understand the variability in a dataset.Basic statistics can be used to characterise variability in a dataset, in terms of the amplitude and frequency of variations present. A better approach involves heterogeneity measures since these can provide a single value for quantifying the variability, and provide the ability to compare this variability between different datasets, tools/measurements, and reservoirs. We use synthetic and subsurface datasets to investigate the application of the Lorenz Coefficient, Dykstra-Parsons Coefficient and the coefficient of variation to petrophysical data - testing assumptions and refining classifications of heterogeneity based on these measures. |
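The three heterogeneity measures discussed above (coefficient of variation, Lorenz Coefficient and Dykstra–Parsons coefficient) lend themselves to a short computational illustration. The sketch below is not taken from the paper; it is a minimal Python implementation that follows the definitions given in the text (Cv as standard deviation over mean; LC as twice the area between the sorted cumulative-property curve and the line of perfect equality; VDP as (k50 − k84.1)/k50 read from a best-fit line of log-values against normal scores). The function names and the four stand-in datasets are illustrative assumptions, so the outputs only approximate the values quoted above.

```python
import numpy as np
from scipy import stats

def coefficient_of_variation(values):
    """Cv: sample standard deviation expressed as a fraction of the mean."""
    v = np.asarray(values, dtype=float)
    return v.std(ddof=1) / v.mean()

def lorenz_coefficient(values):
    """LC: twice the area between the Lorenz curve (property sorted high to
    low, cumulated against equal depth increments) and the line of equality."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    cum_prop = np.insert(np.cumsum(v) / v.sum(), 0, 0.0)
    cum_depth = np.linspace(0.0, 1.0, v.size + 1)
    # trapezoidal area under the Lorenz curve; the equality line encloses 0.5
    area = np.sum(0.5 * (cum_prop[1:] + cum_prop[:-1]) * np.diff(cum_depth))
    return 2.0 * (area - 0.5)

def dykstra_parsons(values):
    """VDP = (k50 - k84.1) / k50, with both values read from a best-fit line
    of ln(value) against normal scores of the exceedance fraction."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    if np.allclose(v, v.mean()):          # homogeneous case
        return 0.0
    n = v.size
    exceedance = (np.arange(1, n + 1) - 0.5) / n   # fraction with larger value
    z = stats.norm.ppf(exceedance)
    slope, intercept, *_ = stats.linregress(z, np.log(v))
    k50 = np.exp(intercept)                         # z = 0 at the median
    k841 = np.exp(intercept + slope * stats.norm.ppf(0.841))
    return (k50 - k841) / k50

# Stand-in versions of the four synthetic datasets described in the text.
datasets = {
    "1 homogeneous": np.full(100, 10.0),
    "2 two-valued":  np.r_[np.full(50, 5.0), np.full(50, 10.0)],
    "3 linear":      np.linspace(1.0, 100.0, 100),
    "4 exponential": np.exp(np.linspace(0.0, 9.0, 100)),
}
for name, data in datasets.items():
    print(name,
          round(coefficient_of_variation(data), 2),
          round(lorenz_coefficient(data), 2),
          round(dykstra_parsons(data), 2))
```

On these stand-ins the ranking matches the text: the homogeneous set returns zero for all three measures, the two-valued and linear sets return intermediate values, and the exponential set is flagged as by far the most heterogeneous, with the coefficient of variation unbounded above while LC and VDP remain between zero and one.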
31,449 | Analysis of events related to cracks and leaks in the reactor coolant pressure boundary | The integrity of the reactor coolant pressure boundary is important to safety because it forms one of the three defence-in-depth barriers. For that reason the RCPB is designed and manufactured so as to have an extremely low probability of abnormal leakage, which can be caused by different degradation mechanisms as shown in Fig. 1. Components of the RCPB are designed to permit periodic inspection and testing of important areas and features to assess their structural integrity. Leak detection contributes to the prevention of reactor coolant system loop breaks by detecting any through-wall cracks that may appear in service before they reach a critical size. Evaluation of operating experience is a powerful tool for the safety assessment of nuclear power plants. When applied to cracks and leaks related events, the analysis aims to find answers to critical questions such as: How relevant are the events, treated together, to their categorization? What conclusions can be drawn on the safety impact and the corrective measures taken? What are the lessons learned for each category of event? What are the recommendations to prevent the repetition of such events? The EU Clearinghouse on NPP Operational Experience Feedback, https://clearinghouse-oef.jrc.ec.europa.eu/, carries out on a regular basis technical work to disseminate the lessons learned from past operating experience as well as background scientific research in OEF. Additionally, the EU Clearinghouse is conducting work on exchange of OEF, as well as collaborating with international organisations. The EU Clearinghouse is managed by the JRC of the European Commission and fosters the collection of operating experience from European nuclear regulators and/or operators, assessing the potential value of lessons learned, and providing support for events relevant for the global OEF to be reported systematically and in a consistent manner to the IRS system operated by NEA/IAEA. One of the EU Clearinghouse tasks is to provide topical reports of events with similar features or causes, conducting precursor studies of events at selected European NPPs, facilitating trend analyses and enabling better understanding of the main patterns in operational experience events. Fig.
2 shows the organisations involved in the EU Clearinghouse and their main deliverables.This publication is based on the results of the topical study on cracks and leaks related events performed by the JRC in collaboration with IRSN and GRS for the EU Clearinghouse.Other two independent analyses conducted recently are on events involving emergency diesel generators, see and NPP modifications,.Four different databases were used in this study.Namely, the IAEA International Reporting System database IRS, the US Licensee Events Reports database LER, the French database SAPIDE and German database KomPass.The screening period runs for 20 years, from 1991 to 2011 for IRS and LER, and from 1990 to 2010 for French and German databases.After screening, 145 IRS reports and 75 LERs were found to be applicable, to which 129 French event reports and 61 German event reports were added.The total number of events considered is 409.To help identify generic recommendations, the events were classified into families and sub-families according to their safety significance.To evaluate the lessons learned the following approach was applied:Data preparation: extraction of the relevant information from each database.Categorization of the events according to previously agreed families, such as design, plant status, component, sub-component, event cause and type of detection.Additional information on safety impact and corrective actions were derived for the analysis.Screening of the most relevant cracks and leaks events, classified according to families.Deriving lessons learned for each category of events.Elaboration of recommendations to prevent the repetition of such events.The overall process is depicted in Fig. 3.The event investigation was performed from different angles, and the study was very exhaustive.To limit the length of this publication below are presented only some illustrative results classified by component type.The pipes under pressure in the RCS or connected to RCS are usually made of austenitic or austenitic & ferritic stainless steel.Most connections are welded.The pipes may be exposed to various degradation phenomena.Event screening in the databases showed a total of 116 events.Three main causes for failure were identified, namely, fatigue, corrosion and the presence of manufacturing defects.Human factor induced defects proved to have little impact – less than 10% of the cases could be attributed to operation errors.Fatigue was found being induced by several factors: excessive vibration, pressure shocks and the thermal regime of operating the pipe, as well as by combinations of these factors.Corrosion was induced, in most of the cases, by a non-appropriate choice of alloys while not taking into account the chemical parameters of the fluid inside pipes.Manufacturing defects mostly dealt with welding related problems and deviation from the design documentation during post-weld heat treatment.Screening the databases revealed a total of 66 events involving the reactor pressure vessel and the pressuriser.The main cause of failure identified was corrosion.The events dealing with corrosion pointed to various causes, like steam impingement, high oxygen concentration, or use of materials containing water-soluble chlorides as an insulation material.9 of all the 66 events selected were found dealing with inadequate material selection.For example, all of the six events involving RPV head penetrations recorded into the SAPIDE database were related to the vulnerability of Inconel 600 to stress corrosion 
cracking."Other events were caused by the following deficiencies: deviation from modern design recommendations, inadequacy of the detailed written procedures, improper process of manual electric arc welding, debris on the vessel flange and post weld heat treatment at the manufacturer's works.Lack of feedback from operating experience was also detected in some events.Screening the databases resulted in 88 events involving SGs.The main cause of the SG tube defects was corrosion.It was found that corrosion was induced mainly by improper material selection, and just in few cases by design deficiencies.A second important contributor represents the manufacturing defects, mainly resulted as failures of the Quality Assurance programme implemented during the fabrication process.Some events showed that a circumferential through-wall cracking can occur with a rapid degradation kinetic.In French PWRs, the tubes material varies depending on the year of construction, so the following tubes are found, respectively: “Inconel 600 MA” tubes, “Inconel 600 HT” tubes, and “Inconel 690 HT” tubes.“Inconel 600” and, in particular, “Inconel 600 MA,” are susceptible to stress corrosion, which leads to cracking.For the period 1990–2010, the French operators reported 31 events involving a primary-to-secondary leak rate that was excessive with reference to the thresholds and limits defined in the technical operating specifications.Screening the databases revealed 29 events dealing with reactor coolant pumps.The events recorded into the IRS database were related to inadequate seal design, to inadequate inspection method, to foreign substance intrusion into reactor coolant during operation and/or maintenance, and to the fact that damages on the seal water injection line filters could lead to clogging downstream of the seal housing.19 events were recorded into the French national database alone.By their classification, there were:Events involving damage to dynamic seals and leakage into the nuclear island vent and drain system,Events involving external leakage from a static seal,Events involving lack or loss of screws or bolts tension and generalised corrosion in screws and bolts contributing to maintaining the integrity of the second containment barrier,Events involving inadvertent interruption of coolant flow in the thermal barrier, followed by subsequent damage, events involving damage to the backseat seal, followed by leakage into the nuclear island vent and drain system,One event involving external leakage from a temperature sensor and,One event involving incorrect installation of seal no. 
1.Screening the databases revealed 18 events involving safety relief valves.The events in the IRS database were related to the safety relief valves installed on the pressuriser.The analysis of French national database was limited to the safety relief valves installed on the pressuriser of the French PWRs.The analysis only retained events characterised by a failure or leak affecting the relief valve tandems of the pressuriser.The result of screening the French national database shows 8 events involving safety relief valves, of which 4 events involved “Banjo” fitting and/or “Jet” seal failures.Screening the German national database identified three events involving safety relief valves, all of them occurred at BWRs.The first event was caused by a malfunction of testing equipment whereas the second one was caused due to degradation of a control relay – both of them were leaks.The third safety relief valve event had as cause a broken spring, which was detected during revision, but did not lead to a leakage.The main causes for the German events were, apart for human errors, periodic inspection programme deficiencies and spurious actuation, in combination with minor design deficiencies.All these events underline the importance of instrumentation in and surrounding the relief valves and control cabinets, and their associated measurements and alarms.There were found a total of 65 events in the databases involving these types of valves.Screening IRS database revealed that 5 of the 12 events dealt with maintenance induced corrosion, while corrosion alone caused just 2 events.Regarding the US NRC database, it was found that 4 out of the 6 recorded events dealt with manufacturing defects, caused in half of the cases by design deficiencies.By far, the highest number of events is recorded in the SAPIDE database.The events observed concern internal and external leaks that were collected into tank or systems.Of 14 events involving packing gland failure, six involved pressuriser spray valves.Screening the KomPass database revealed a total of 7 events involving valves.Out of them, 6 events occurred at BWRs, and were caused by corrosion.Although usually there are several causal factors that cause the events, for the purpose of this study it was investigated the proportion of events caused by failure of manual operated valves in events.This examination revealed 23 events, recorded in all 4 databases, involving manual operated valves.So, manual operated valves inflict a large share of leaks.The flange joints and mechanical connections are mainly found in small-diameter pipes and in the auxiliary pipes connected to the RCS.The leaks in the flange joints often are characterised by a slow flow rate and by damage localised in the seal.The observations of leaks, are often followed by the replacement of the seal.Screening the databases revealed 24 events dealing with flanges.Diverse causes were identified.It was found that the “O” rings used by the manufacturers were slightly oversized, so could not be contained inside the groove and was swelling outward under pressure.Other causes for leak were the lack of QA during maintenance and staff training, the utilisation of materials sensitive to intergranular and transgranular stress corrosion cracking on the flange contact surfaces and deficiencies in the installation procedures.All the 409 events analysed involving cracking or leakage of reactor coolant have different root causes or sources.In more cases, the events did not have just one cause, but a combination of 
them. The causes of the analysed events are the following: corrosion of different types (35% of the cases), manufacturing defects (15%), maintenance anomalies (11%), control or operating errors (9%), fatigue (11%), and unknown or other causes (19%). Fig. 4 lists the degradation mechanisms that may be present in the different components of the RCPB. Not all these mechanisms were found in the analysis of events due to the limited number of events in the databases. Corrosion is the main root cause of the events analysed. In France, many corrosion-induced events are due to Inconel 600. If we do not take into account these events, which are numerous, corrosion is not the main degradation mechanism. In France, corrosion is often a consequence of an external leak. Manufacturing defects are the second largest root cause. These events are nearly all related to welding faults, although cases were also found in which the manufacturing defect was just the “apparent cause”, while its precursors were inappropriate QA measures at the manufacturer. Fatigue is also an important degradation mechanism for the analysed events. Both mechanical fatigue and thermal fatigue were identified during the analysis. Mechanical fatigue appeared in certain cases in combination with another cause, like manufacturing or maintenance defects, especially on welds. Thermal fatigue appears mainly in conjunction with the “Farley–Tihange” phenomenon, but could also be triggered by high temperature differences from one side to the other of a tube, for example. The event investigation showed that the crack and leak related events occurred in specific equipment, components or sensitive areas: SG tubes; SG nozzle dams and associated drain plugs required and operated during outages; reactor vessel head penetration areas; small lines for bypass, instrumentation, venting and drainage and corresponding isolation valves; RCS pumps; RCS relief valves and associated control cabinets; valve housings; all the lines, components and accessories located in such areas where they are exposed to various hazards and quality failures during outages when manual operations are required; flange joints and their seals, as well as the mechanical couplings or fittings, in particular those subject to frequent operating or maintenance interventions. The assessment performed shows that cracks have a low safety impact – all the cracks were discovered with the plant in cold shutdown, by performing inspection, and their effect on operation was almost negligible. Besides, it was observed that when a crack is discovered, the operators take the needed precautionary measures to limit or to control the propagation of the crack – usually the cracks discovered during power operation are repaired during the next outage. The mechanism of crack development varies from case to case, based on the type of the plant and on the type of operation and/or maintenance of the equipment. Regarding the cracks only, it was found that fatigue was the main contributor to the appearance of cracks – for the other cases ageing, corrosion or combinations of these three contributors were the initiating stressors of the systems, structures and components affected. Contrary to cracks, once detected, leaks usually require immediate action. The OEF analysis indicates that leaks often have a significant safety impact, mainly on staff radiation protection, as decontamination and cleaning operations usually require numerous
human resources to enter controlled areas.Also, leaks can induce corrosion of base materials and of external surfaces of the pressure boundary, i.e., vessel head had to be cleaned several times because of boron deposits.Leaks also induced production of unaccounted wastes that must be treated according to the waste management plans.Some leaks were difficult to detect, to locate and to stop.Event reporting has become an increasingly important aspect of the operation and regulation of all public health and safety related industries.Diverse industries such as aeronautics, chemicals, pharmaceuticals and nuclear all depend on operating experience feedback to provide lessons learned about safety.For events involving failures in operating devices or in human and organisational performance, it is important to analyse the event, to identify its causes and draw lessons, in order to avoid the recurrence of similar events or to ensure with additional defences that their consequences remain small.Numerous recommendations have been elaborated in the analysis of the cracks and leaks related events and are extensively documented in.The analysis was performed from different perspectives.Only a summary of the methodology used and some relevant results are presented in this paper.Plant operating experience has shown that significant increases in the leakage rates below the limits established in the plant technical specifications, but above the baseline values, may indicate a potentially adverse condition.Plants should periodically analyse the trend in the unidentified and identified leakage rates.Evaluating the increase in the leakage rates is important to verifying that the plant will continue to operate within acceptable limits.Prompt corrective action requires continuous online monitoring for leakage, which is important to ensuring the safe operation of a facility because it provides an indicator during reactor operation that a potentially adverse condition may exist. | The presence of cracks and leaks in the reactor coolant pressure boundary may jeopardise the safe operation of nuclear power plants. Analysis of cracks and leaks related events is an important task for the prevention of their recurrence, which should be performed in the context of activities on Operating Experience Feedback. In response to this concern, the EU Clearinghouse operated by the JRC-IET supports and develops technical and scientific work to disseminate the lessons learned from past operating experience. In particular, concerning cracks and leaks, the studies carried out in collaboration with IRSN and GRS have allowed to identify the most sensitive areas to degradation in the plant primary system and to elaborate recommendations for upgrading the maintenance, ageing management and inspection programmes. An overview of the methodology used in the analysis of cracks and leaks related events is presented in this paper, together with the relevant results obtained in the study. © 2014 The Authors. |
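The screening-and-categorisation workflow described for the cracks and leaks study (assigning each screened event to families such as component, cause and type of detection, then tallying proportions across the four databases) can be sketched as follows. This is not the EU Clearinghouse tooling; it is a hypothetical Python illustration, and the field names and example records are invented for the purpose of the example.

```python
from collections import Counter

# Hypothetical event records after screening the four databases
# (IRS, LER, SAPIDE, KomPass); fields mirror the categorisation families
# used in the study: component, cause and means of detection.
events = [
    {"db": "IRS",     "component": "SG tube",      "cause": "corrosion",     "detection": "inspection"},
    {"db": "LER",     "component": "pipe",         "cause": "fatigue",       "detection": "leak monitoring"},
    {"db": "SAPIDE",  "component": "RCP seal",     "cause": "maintenance",   "detection": "leak monitoring"},
    {"db": "KomPass", "component": "relief valve", "cause": "manufacturing", "detection": "periodic test"},
    # ... one dictionary per screened event report (409 in the study)
]

def share_by(records, key):
    """Percentage of events falling in each category of `key`."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {cat: round(100.0 * n / total, 1) for cat, n in counts.items()}

print(share_by(events, "cause"))      # e.g. corrosion vs fatigue vs maintenance ...
print(share_by(events, "component"))  # distribution across RCPB components
```

With the full set of 409 categorised records, tallies of this kind would reproduce the cause breakdown reported above (corrosion 35%, manufacturing defects 15%, and so on).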
31,450 | Spatially-explicit effects of seed and fertilizer intensification for maize in Tanzania | In Eastern and Southern Africa approximately 22 percent of total caloric consumption came from maize in 2005–2007. Smallholder farmers cultivate much of this maize, and their livelihoods depend on maize productivity. Improving the productivity of these smallholders presents one pathway to better livelihoods. Repeated studies have shown the yield gains associated with improved seed and mineral fertilizer for maize in Africa. A wide range of policies in Africa have focused on increasing crop productivity. These policies have ranged from country-specific policies that subsidize seed and fertilizer to continent-wide commitments on increasing productivity and input use such as the Malabo and Abuja Declarations. The Malabo Declaration aims to at least double crop productivity by 2025 in specific African countries. However, yield trends suggest many African countries may struggle to double productivity before 2025, despite countries often increasing their agricultural budgets. In addition, fertilizer use in many African countries lags behind the 2015 goal of applying 50 kg ha−1 of fertilizer stated in the 2006 Abuja Declaration. Against this backdrop of policies and declarations, spatial heterogeneity exists in biophysical and economic conditions among regions within countries, among farmers within regions, and among fields within farms. Capturing the effect of this spatial heterogeneity on crop productivity and economic measures may help increase the relevance of input intensification studies for discussions on the Malabo and Abuja Declarations. Our simulation study asked three questions for maize monoculture in the Mbeya administrative region of the Southern Highlands of Tanzania: (1) how do changes in seed cultivars and fertilizer application rates affect the yield and partial profit of maize in different districts of Mbeya?; (2) what are the value-cost ratios of fertilizer use for different seed cultivars in different districts of Mbeya?; and (3) how do maize grain prices, producer surpluses, and consumer surpluses change in Mbeya, the rest of the Southern Highlands, and for all of Tanzania if maize seed and fertilizer practices intensify in Mbeya? Maize is Tanzania's main staple crop and Mbeya is Tanzania's largest maize producing administrative region. Our study combines crop simulation modelling with household price and cost data and an economic surplus model. We focused on seed and mineral fertilizer because they are common inputs into maize production. However, using these inputs is often a complementary strategy to applying sustainable farming practices, such as integrated soil fertility management. Multiple approaches exist for achieving the Malabo and Abuja Declarations, such as increasing the use of improved seeds and fertilizers, greater use of sustainable farming practices, or a combination of the preceding two approaches. Our study considered more intensive seed and fertilizer use. Studies on increasing the use of fertilizer have often focused on the value-cost ratio of fertilizer, defined as the value of extra grain yield associated with an additional unit of fertilizer applied. Studies on the value-cost ratio have taken both a field-scale perspective with agronomic experiments and a household-scale perspective with household survey data. Previous studies highlighted the heterogeneity of fertilizer profitability within specific regions, including the importance of planting improved seed to raise
fertilizer-use efficiency.Although microeconomic studies on value-cost ratios provide information on the profitability of inputs at the field-scale, policymakers are often interested in the market-scale effects of widespread changes in practices.At the market scale, studies have focused on the economy-wide implications of changes in crop productivity.Ricker-Gilbert et al. found that lifting maize production by subsidizing fertilizer costs in Malawi and Zambia had a slight negative effect on retail maize prices.Pauw and Thurlow simulated the effect of exogenously-imposed changes in total factor productivity on economy-wide indicators in Tanzania.You and Johnson explored crop investment options for countries in Africa with an economic surplus model.Kassie et al. also applied an economic surplus model to show that combining practices such as improved seed and fertilizer for maize increased producer and consumer surpluses in Ethiopia.Studies in Tanzania have developed targeting domains for scaling out sustainable intensification technologies by identifying sites with similar bio-socio-economic characteristics.Our simulation study adds insights into the studies mentioned above by providing additional evidence on the economic effects of changes in seed and fertilizer use in Tanzania at the field scale, regional scale, and national scale.Our study supplements microeconomic adoption studies by considering the effect of different rates of adoption to input intensification on maize prices and changes in producer and consumer surpluses.Using a cropping systems model to simulate yield effects helps isolate the effects of seed and fertilizer use across heterogenous climates and soils.Our study supplements economy-wide analyses by providing a more granular examination of practices that may increase maize yields and how this yield increase may affect prices and economic surpluses.Our results both complement and supplement Pauw and Thurlow because we address how to achieve yield growth and then examine the market-scale effects of the yield growth.Studies on market-scale effects of agricultural inputs often estimate adoption, yields, and costs using econometric methods with household data.We provide an alternative method by simulating the yield effects and then combining the simulations with survey data on prices and costs to assess the economic effects of input intensification.Finally, we provide a concrete example of maize, seed, and fertilizer focusing on yields, prices, and costs, which helps provide a case study to recently proposed targeting domains in Tanzania.We first simulated maize grain yields for different practices related to seed and mineral fertilizer application rates over multiple years with a cropping systems model to capture spatial and temporal heterogeneity in yields.We then calculated partial profits and value-cost ratios for different practices by combining the simulated yields with household survey data on prices and costs.Finally, we examined the price and economic surplus effects for the different practices at three spatial scales: Mbeya, the rest of Southern Highlands, and the rest of Tanzania.We simulated maize grain yields for a baseline and 3 input intensification practices over 30 years from 1980 to 2009 in the Decision Support System for Agro-technology Transfer v4.5.We used the CERES-Maize model within DSSAT.Our baseline included maize monoculture planted with local seed plus 10 kg N ha−1 applied as fertilizer.The baseline reflects the most common approach to growing maize 
based on data in MoA and the Tanzanian National Panel Survey.The input intensification practices included different combinations of seed and fertilizer: practice 2 included local seed and 40 kg N ha−1 as fertilizer, practice 3 included improved seed and 10 kg N ha−1 as fertilizer, and practice 4 included improved seed and 40 kg N ha−1 as fertilizer.Each simulated practice applied no manure to the field and had 80 percent of crop residues removed from the field, which matched local practices.The planting density of seed varied by district and by seed cultivar.Seed cultivars planted differed among the different districts based on data in MoA.Practices with 40 kg N ha−1 reflect an application rate in line with area-specific fertilizer recommendations for the Southern Highlands that have been historically provided by Tanzania’s Ministry of Agriculture.Appendix A provides additional details on the practices.We simulated the four practices from Table 1 in all districts of Mbeya.We applied a grid-based approach where DSSAT simulated maize growth for each practice in all 5 arc-minute grid cells in Mbeya that reported growing maize.Climate and soils differed among the districts.We parameterized DSSAT using AgMERRA historical daily weather data at a 30 arc-minute resolution overlaid on gridded soil profile data at a 5 arc-minute resolution.Parameterization also included collating district-scale crop management data including seed cultivars and fertilizer application rates, regional-scale maize planting windows, and crop cultivar coefficients.We reset soil parameters at the start of each simulation year to the initial soil parameters.We reset soil parameters because we simulated a monoculture, as opposed to different cropping practices that may involve sizable differences in the accumulation of soil water or nutrients.We then calibrated and evaluated the parameterized model.To calibrate our model we sourced yield data from different districts in Mbeya from MoA that applied a comparable mix of practices to our baseline.The calibration of process-based cropping systems models, such as DSSAT, typical involves an approach where crop cultivar coefficient adjustment occurs within a reasonable range of the initial coefficient values based previous research, knowledge, or experience.Following this approach, we calibrated our model by adjusting crop cultivar coefficients related to cultivar maturity and the planting window.We compared simulated grain yields for our baseline with the district-scale yields reported in MoA after adjusting crop cultivar coefficients.To assess model performance, we calculated three statistics: Normalized Root Mean Squared Error, Willmott index of agreement, and the coefficient of determination.Appendix A provides more details on our model parameterization, calibration, and evaluation, and limitations.Risk avoidance is often a concern for maize growers when considering input intensification.To capture one aspect of risk we examined the effect of input intensification on yield risk, captured by calculating the stability of simulated yields.Our measure of stability was the coefficient of variation, calculated in each grid cell for each practice as the standard deviation of yield over the 30 years divided by the average yield over the same 30 years.In Eq., π is the partial profit in a specific district and year for a specific practice from Table 1.Eq. 
defines GY and FQ, GP is grain price, FC is fertilizer cost, SQ is seed planted, and SC is seed cost.In our study, 1 US$ = 1473 TZS for 2012.We calculated district-scale prices for maize grain and imputed district-scale costs for different seed cultivars and urea fertilizer with data from the Tanzanian National Panel Survey.The unit costs of seed and fertilizer represent the actual costs paid by the farmers in each district, thus they include transportation costs.Many factors beyond prices and yield responsiveness to fertilizer influence the profitability of fertilizer use, including farmer preferences and attitudes, labor dynamics, and management skills.Our simulation approach attempted to nullify the potentially confounding effect of these other factors on fertilizer profitability.Within each grid cell and year all factors remain constant apart from changes in input use.Panel data econometrics with household data can also control for the factors listed above; however, we opted for a simulation approach as we aimed to capture temporal variability over 30 years and link to a market-scale analysis.Moreover, the objectives of household surveys are often not to quantify spatial and temporal yield responses for alternative practices, rather examine adoption.The widespread uptake of yield-boosting practices and its likely effect on prices has the potential to increase economic benefits to both producers and consumers—in the form of producer and consumer surpluses.We used the DREAM model, Dynamic Research EvAluation for Management, to calculate the potential price and economic surplus effects of input intensification at multiple spatial scales.Our DREAM model considered three markets: Mbeya region, other regions in the Southern Highlands, and the rest of Tanzania.The DREAM model calculates the magnitude and distribution of the economic benefits of agricultural research and development, which in our study involves the adoption of already developed practices.The DREAM model follows the principles and practices for agricultural research evaluation and priority setting outlined by Alston et al.The model also follows the economic surplus method where research-induced supply shifts trigger market-clearing adjustments in one or multiple markets that would affect the flow of final benefits to producers and consumers.Researchers have applied the DREAM model in sub-Saharan Africa to examine the economic effects of irrigation infrastructure, the adoption of improved pigeon pea cultivars, and for multi-country assessments of agricultural technology adoption.Appendix B contains additional information on the DREAM model.In the DREAM model, we specified the maximum percentage of farmer’s plots adopting the practices and the years it took for adopters to fully adopt the practices.Our study had no research time lag because seed and fertilizer are inputs already developed and adopted by some farmers, thus our study focused on the adoption ceiling and speed of adoption for already developed practices.We based the adoption ceiling on the input use patterns documented among plots found in the Tanzanian National Panel Survey.Farmers planted 65 percent of their maize plots with local seed, which had an average 10 kg N ha−1 fertilizer applied.Therefore, only 65 percent of plots had the potential to plant improved seed as 35 percent already planted improved seed.From the 65 percent of eligible plots, we simulated the effects of 20, 60, and 100 percent of land eventually intensifying input use.Therefore, we set the 
adoption rate at 13 percent, 39 percent, and 65 percent.Adoption ceilings are often taken from different sources, for example, from literature reviews in combination with expert opinions.Furthermore, we assumed an adoption lag of 5 years, which means it took 5 years to reach the adoption ceiling of 13 percent, 39 percent, or 65 percent.Adoption over time followed a logistic adoption pattern.We calculated the net present value for each practice over the 5 years using a discount rate of 5 percent.The DREAM model also considered spillovers of the practices to the other regions of the Southern Highlands with no time lag for an 80 percent spillover.This implies that 80 percent of the maize area planted in the other regions of the Southern Highlands adopted the intensification practices in the same year as those in Mbeya, for example, if 20 percent of maize land in Mbeya intensified input use, 16 percent of maize land in the other regions of the Southern Highlands also intensified input use.The Southern Highlands is a net seller of maize; however maize farmers can be both consumers and producers of maize in Tanzania.Therefore, we calculated both producer and consumer surpluses with the sum of producer and consumer surpluses being the total economic surplus.To calibrate the DREAM model, we collected maize data on region-specific production and consumption from 2010, prices from 2010, and generic elasticities of supply and demand.We considered 2010 as the baseline year with production data at the region-scale from MoA.National-scale consumption was based on 2010 domestic supply, and then converted to regional consumption using rural population data from CIESIN and CIAT and a rural to urban consumption ratio.Appendix B contains additional details on the calibration of our DREAM model.Historically the production of maize grain has increased because of a combination of area planted expansion and yield growth.Looking at yield growth, Mbeya had a 1.7 percent compound annual growth rate for yields of maize grain from 2002 to 2012.Maize grain yields need to grow at a compound annual growth rate of 5.5 percent if yields are to double by 2025 to reach 2868 kg ha−1, relative to the 2012 yield of 1434 kg ha−1.Yields in 2025 would be 2107 kg ha−1 if growth in yield continued at the compound annual growth rate observed between 2002 to 2012.This extrapolated yield in 2025 is 26 percent less than the aim of doubling yield, relative to 2012 yields.Input intensification provides one possible approach to doubling productivity.Comparing our simulated yields for the baseline against the observed yields reported for the baseline in MoA produced a NRMSE of 17 percent, d-index of 0.86, and R2 of 0.56.Our simulated N-AE values in Fig. A.4.lie within the range of common values found in earlier studies.The association between N-AE and fertilizer rates followed agronomic logic.Fig. 
A.5 shows the temporal variability of yields for each practice.Our simulation results suggest that average yields in Mbeya can double through modest input intensification.Planting local seed and applying an additional 30 kg N ha−1 increased average yields by 40 percent, from 1483 kg ha−1 in the baseline to 2081 kg ha−1.Using more fertilizer plus improved seed lifted yields to 3243 kg ha−1, a 119 percent increase compared with the baseline.Planting improved seed without changing fertilization improved yields by 31 percent.Extra financial costs are incurred to realize the yield gains because improved seed had a greater per unit cost than local seed and using more fertilizer increased per hectare costs.Applying more fertilizer without a change in seed type increased average costs from US$ 28 ha−1 to US$ 60 ha−1.Profits rose by 26 to 112 percent with the introduction of the input intensification practices compared with the baseline.This rise occurred even though the percentage increase in cost exceeded the percentage increase in yield for all practices.The highest yielding practices generated the greatest profits.Input intensification lifted yields and profits without having a noticeable negative effect on the stability of yields or profits, measured as the coefficient of variation.Results suggest inter-district heterogeneity in the value-cost ratios, partly related to differences in the marginal productivity of fertilizer, grain prices, and fertilizer costs.The MP differed by practice and by district, with planting an improved seed cultivar making yield more responsive to additional fertilizer.The average MP was 20 for local seed and 43.1 for improved seed, with a range from 4.1 to 47.4 for local seed and a range from 39.1 to 49.2 for improved seed.The average VCR was 3 for local seed and 7.7 for improved seed, with a range from 1.2 to 6.6 for local seed and 3.4 to 14.2 for improved seed.The ratio of maize prices to fertilizer costs averaged 0.17, with a minimum of 0.09 in Rungwe and a maximum of 0.30 in Chunya.Average fertilizer costs ranged from a minimum of 0.87 US$ kg−1 N in Chunya to a maximum of 1.72 US$ kg−1 N in Rungwe.The inter-district heterogeneity in prices, costs, and MPs resulted in inter-district heterogeneity in VCRs; for example, with local seed the average VCR was 1.2 in Chunya, 3.5 in Mbarali, and 2.2 in Rungwe.With local seed the average MP was 19.6 in Mbarali and 25 in Rungwe; however, price and cost considerations meant that the VCR was 3.5 in Mbarali and 2.2 in Rungwe, i.e., the rankings changed once economic factors were included.Across all districts and years, applying an additional 30 kg N ha−1 while also planting local seed produced an 86 percent chance of the average VCR exceeding 1 and a 57 percent chance of the VCR exceeding 2.Applying an additional 30 kg N ha−1 while also planting improved seed resulted in a 98 percent chance of the average VCR exceeding 2.The chance of the VCR exceeding 1 varied by district; for example, with local seed the VCR always exceeded 1 in Ileje and Mbozi, while the chance was 53 percent in Chunya and 83 percent in Rungwe.Fig.
3 shows the simulated percentage changes in average grain yields and partial profits in different districts between the baseline and planting improved seed with 40 kg N ha−1.Spatial variability exists in the effect of input intensification on yield and partial profit with results suggesting Mbarali benefits the most from input intensification.The ordinal ranking of districts by change in partial profit differs slightly from the ordinal ranking of districts by yields.For example, yields in Mbozi are the least responsive to input intensification and Rungwe is the second-least responsive; however, when prices and costs are considered Mbozi has a slightly larger percentage change in partial profit than in Rungwe.Economic surplus, measured as changes in producer surplus plus changes in consumer surplus, changed at different scales depending on the use of input and its adoption rate.The supply shift associated with the yield increases had a negative effect on maize prices.With an adoption rate for planting improved seed and applying an extra 30 kg N ha−1 of 39 percent in Mbeya and a spillover of 80 percent to the rest of the Southern Highlands the net present value of economic surplus to Tanzania rose by US$ 697 million over five years.With an adoption rate of 39 percent for improved seed without additional fertilizer application and the same spillover we calculated a gain in the net present value of economic surplus of US$ 104 million over five years.If 39 percent of farmers kept planting local seed but applied an extra 30 kg N ha-1 with the same spillover the net present value of extra economic surplus rose by US$ 148 million over five years.Higher adoption rates increased gains and lower adoption rates lowered the gains.Producer surplus expanded in Mbeya and the rest of the Southern Highlands in all practices.Producer surplus declined in the rest of Tanzania for all practices.Consumer surplus increased for all practices as consumers benefited from lower prices induced by greater supply.The increase in production associated with the uptake of yield-boosting practices offset the negative price effect to generate a greater producer surplus and consumer surplus for all of Tanzania.In Mbeya and in the whole of Tanzania, many maize producers are also maize consumers.The economic surplus, which is the sum of producer and consumer surpluses, is one measure of the total benefit to society.The change in the net present value of economic surplus associated with adding an extra 30 kg N ha−1 and still planting local seed was modest compared with the other practices.The increase in yield of 40 percent and the growth in per hectare costs of 60 percent resulted in the size of the supply shift from the extra fertilizer being smaller than in the other practices which had larger differences between the percentage yields gains and the percentage cost increases.Our simulation results from DSSAT support prior studies in Africa that suggest DSSAT can capture maize responses to different management practices.We coupled our DSSAT results with economic calculations to provide insights into the economic effects of seed and fertilizer practices at multiple spatial scales.At the district scale, heterogeneity existed between districts in the effect of input intensification on yields and profits.Although the ordinal ranking of changes in yields among the districts was almost the same as the ordinal ranking of changes in partial profits, Mbozi and Rungwe switched in ranking once prices and costs were considered.This 
switching of ranking highlights the importance of considering economic factors in spatially-explicit assessments of input intensification.The VCR provides one economic measure of input use.Our results suggest a wide range of VCRs exist in Mbeya.Ragasa and Chapoto found VCRs consistently greater than 3 in Ghana, which implied fertilizer application rates could expand if farmers aimed to maximize profit from maize.Low fertilizer costs relative to grain prices contributed to the relatively high VCRs in Ragasa and Chapoto compared with other studies in Africa.In our study, some VCRs for local seed across the years and districts were less than one, related to district-specific grain prices and fertilizer costs and the marginal productivity of fertilizer.Many fertilizer studies of VCRs in Africa examine panel data from farm surveys.Jayne and Rashid argue that although these studies control for unobserved time-invariant factors they typically suffer from respondent measurement error and readers should interpret the results of these cautiously.Our study took a simulation approach to calculate the marginal productivity of fertilizer.Although readers should also interpret our results cautiously, our simulated VCRs complement household survey estimates of VCRs.Our spatially-explicit approach helps highlight the range of VCRs possible within a region that suggests complexity in designing policies to promote input intensification.Household-scale studies have improved our understanding of how changes in economic factors can affect the profitability of fertilizer.For example, Liverpool-Tasie et al. showed how reductions in transportation costs lifted the chance of VCRs exceeding a threshold that deems purchasing more fertilizer profitable.Our simulated VCRs suggest that planting improved seed cultivars lift the profitability of fertilizer and therefore increase the chance that the VCR exceeds a threshold for profitability, which strengthens calls to promote the adoption of improved seed and fertilizer as a package of practices.The benefit of improved seed on fertilizer-use efficiency has also been shown in the agronomic literature.While multiple factors beyond prices and productivity effect fertilizer uptake, designing programs to improve the uptake of improved seed cultivars appears helpful as planting improved cultivars resulted in almost all VCRs exceeding 2.Given the simulated VCRs almost always exceeded two, extension services that show the on-the-ground effects of input intensification through field trials appear helpful.Extension services can influence input intensification.Farmers in Tanzania often have limited access to agricultural extension services—therefore, policy discussions on greater access to extension services that expose farmers to field trials examining the effect of combining improved seed with extra fertilizer appears worthwhile canvassing.A consideration for policymakers is the benefits and costs of scaling out input intensification beyond the field scale to a regional scale.Our economic surplus model offered some insights into the market-scale effects of simulated higher marginal productivity and profitability at the field-scale.Results suggested a minor decline in maize prices, because of the aggregate increase in supply relative to demand.However, the gain from extra production outweighed the negative price effect to provide a positive effect on producer surpluses.Ignoring price effects at the market scale may overstate benefits; this highlights the importance of capturing 
market-scale factors in assessments of input intensification.Simulated gains to consumers were universal because of the lower prices.The constraints beyond fertilizer productivity, grain prices, and fertilizer costs that limit input intensification remain an open question, which require other research methods beyond the scope of simulation approaches such as ours.If the policy implementation cost to lift maize production through adopting the yield-boosting practices was less than the calculated gains in economic surplus, this would suggest a positive economic return on investment.Researchers would need to assess these expected returns against expected returns from alternative investments so that policymakers have a range of options to suggest approaches to intensify agriculture.These alternative approaches could include combining our simulated practices with sustainable farming practices and local adaptation of the practices.The Malabo and Abuja Declarations are ongoing commitments from the governments of African countries that aim to improve food security.These commitments have been made partly because of concerns about slower than desired growth in crop productivity.To contribute to debates on using input intensification as one approach to boosting crop productivity we simulated the economic effects of changes in seed and fertilizer use for maize monoculture in Tanzania.Our examination of historical yield growth rates suggested that a doubling of maize yields by 2025 will remain challenging without simultaneous investments that increase the uptake of improved seeds and fertilizer.Planting improved seed cultivars, as opposed to local cultivars, and increasing fertilizer application rates from 10 kg N ha−1 to 40 kg N ha-1 saw a doubling of simulated maize grain yields in some districts, although heterogeneity in yield effects existed.The profitability of applying extra fertilizer, calculated as the value-cost ratio, increased with the planting of improved seed cultivars.This increase suggests planting improved seed may encourage greater fertilizer application.Promoting the use of improved seeds could provide a useful entry point for discussions on investment prioritization as in some cases the VCR was less than one if farmers planted local seed.Results from the economic surplus model implied that there was no economic compromise between farmers in Mbeya who adopted improved seeds and used extra fertilizer and Tanzania’s consumers.Overall, our findings help build evidence for policy discussions on how to improve the productivity and profitability of maize in Tanzania.This evidence directly supports the goals of the Comprehensive Africa Agriculture Development Programme of providing evidence-based planning for African governments to meet their Malabo Declaration commitments. | Slower than desired growth in crop yields coupled with rising food demand present ongoing challenges for food security in Africa. Some countries, such as Tanzania, have signed the Malabo and Abuja Declarations, which aim to boost food security through increasing crop productivity. The more intensive use of seed and fertilizer presents one approach to raising crop productivity. Our simulation study examined the productivity and economic effects of planting different seed cultivars and increasing fertilizer application rates at multiple spatial scales for maize in Tanzania. We combined crop simulation modelling with household data on costs and prices to examine field-scale and market-scale profitability. 
To scale out our analysis from the field scale to the regional and national scale (market scale) we applied an economic surplus model. Simulation results suggest that modest changes in seed cultivars and fertilizer application rates can double productivity without having a negative effect on its stability. The profitability of applying extra fertilizer, calculated as its value-cost ratio, increased if improved seed cultivars replaced local seed cultivars. Rankings of district-scale profits differed from rankings of district scale yields, highlighting the importance of considering economic factors in assessments of input intensification. At the national scale, simulation results suggest the total benefit could be US$ 697 million over 5 years if there was a 39 percent adoption rate of planting improved seed and applying extra mineral fertilizer. Providing economic assessments of input intensification helps build evidence for progressing the Malabo and Abuja Declarations. |
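As a brief illustration of the field-scale economics described in the maize study above, the Python sketch below computes the marginal productivity (MP) of fertilizer and the value-cost ratio (VCR) from the figures reported in the text. The partial profit and VCR formulas are assumed reconstructions based on the variable definitions given (GY, GP, FQ, FC, SQ, SC) and standard agronomic usage, since the equations themselves are not reproduced in the text; all function names and the price and cost inputs to partial_profit are illustrative placeholders.

```python
# Sketch of field-scale economics for the maize study above.
# The partial profit and VCR forms are assumed reconstructions, not the
# paper's verbatim equations.

def partial_profit(gy, gp, fq, fc, sq, sc):
    """Assumed form: grain revenue minus fertilizer and seed costs (US$ ha-1)."""
    return gy * gp - (fq * fc + sq * sc)

def marginal_productivity(gy_new, gy_base, n_new, n_base):
    """Extra grain per extra kg of N applied (kg grain per kg N)."""
    return (gy_new - gy_base) / (n_new - n_base)

def value_cost_ratio(mp, price_to_fert_cost_ratio):
    """VCR = (extra grain x grain price) / (extra N x N cost) = MP x price/cost ratio."""
    return mp * price_to_fert_cost_ratio

# Reported figures: baseline 1483 kg ha-1 at 10 kg N ha-1, local seed with
# 40 kg N ha-1 gave 2081 kg ha-1, and maize price / fertilizer cost ~0.17.
mp = marginal_productivity(2081, 1483, 40, 10)   # ~20 kg grain per kg N
vcr = value_cost_ratio(mp, 0.17)                 # ~3.4, close to the reported average of 3
pi = partial_profit(2081, 0.20, 40, 1.2, 25, 0.5)  # hypothetical price and cost inputs
print(round(mp, 1), round(vcr, 1), round(pi, 1))
```

Run with the reported averages, the sketch reproduces the order of magnitude of the MP (about 20) and VCR (about 3) quoted for local seed, which is only meant as a consistency check on how the criteria fit together.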
31,451 | Novel criteria for assessing PV/T solar energy production | PV/T collector technology can be utilized for different purposes, depending on the strategy of design and implementation.The first function could be described as cooling of the photovoltaic module; because the PV cell temperature is kept down, its efficiency can be maintained.Heating is a problem for PV systems because it causes drops in voltage, which ultimately impacts their production and efficiency.Hence, the use of pipes or other types of absorbers in contact with the module is beneficial, as heat transfers from the PV into the cooling pipes.The second function is the generation of thermal yield.As elaborated, the cooling pipes will absorb heat from the PV system, which can be extracted and used to supply thermal loads .The interesting element in this technology is its customizability, meaning there is ample room for innovation.Novel research on this technology describes different configurations of PV/T systems , different design materials , and the potential for large-scale implementation .Hence, various classification approaches are followed, such as type of fluid, type of absorber, type of auxiliary, etc.The outputs of this system are either electrical or thermal.The PV aspect is described by parameters such as maximum power, electrical efficiency, maximum voltage, and maximum current .The thermal aspect is described through useful heat gain, thermal power, and thermal efficiency .For a better evaluation of PV/T performance, parameters such as thermal and electrical exergy are introduced.It is crucial that the criteria used to compare PV/T and conventional systems cover all aspects of the two systems and do not reflect just a single perspective.Also, many researchers have compared the two systems in terms of efficiency, power, energy, etc.However, such comparisons are not scientifically accurate because of the many differences between the tested systems, for example PV type, power, efficiency, etc.Therefore, it is essential to adopt a new criterion that takes these differences into consideration.In this paper, the existing evaluation criteria are summarized and explained, then new criteria for the evaluation of PV/T systems are proposed.Finally, a case study is carried out as a proof of concept with regards to the proposed evaluation criteria.To evaluate a PV/T system for comparison's sake, it is critical to establish the right parameters, or criteria, for evaluation.The more detailed and application-oriented the criteria are, the more strategically they can be selected to suit the intended application.The cost of the system has been used to evaluate its feasibility.Another criterion is pumping power.Often the comparison between two different PV/T collectors can be achieved by equalizing the pumping power, although this is mostly implemented for different types of working fluids, as was demonstrated by Purohit et al.
.In their work , both equal pumping power and Reynolds number criteria were used.The study was numerical and focused on testing the rate of heat transfer coefficient of the nanofluid which was utilized as coolant.The majority of the research shows the enhancement in overall PV/T performance when implementing new design configuration or material .The following evaluation criteria are intended for hybrid PV/T collectors but can also be used for independent PV and solar thermal systems.The detection of unintended bias, or perhaps the engineering towards a particular bias in this technology is essential.PV/T collectors can be thermally or electrically biased.Hence, it is critical to establish a strategy to set the bias which corresponds to optimum energy cost-effectiveness.To test the bias of the PV/T it is necessary to understand that it must be done over the entire testing period.Only testing instantaneous electrical and thermal output will not produce accurate findings, given the variations which occur in either electrical or thermal performance due to weather, design, material degradation, etc.Further analysis can be achieved by comparing the two in terms of power output corresponding to solar irradiance in a 2D representation.The utility of bias design is to strategically choose the ultimate path for energy generation to meet the load demand.The strategy could be based on priority to meeting the majority load.For instance, if the majority load is thermal output, then the PV/T is thermally biased, and the PV output is used to feed an auxiliary heater.The experiments were carried out to examine the validity of the evaluation criteria.Three PV/T systems were installed and tested in Bangi, Malaysia.The systems provided are a conventional polycrystalline PV system, water-based PV/T system, and a PV/T with a PCM tank using water as base fluid as shown in Fig. 1 and.The PCM utilized is paraffin wax, and it is used to store heat and thermally regulate the PV module temperature.The polycrystalline PV was used as a reference, while the water-based PV/T was compared to the PV/T with PCM and water as a coolant.These systems are denoted as PV, PV/T Water and PV/T PCM Water, respectively.All systems were installed in close proximity and utilized a supporting structure.The specifications related to criteria - for the proposed systems are provided in Table 1.The photovoltaics used is 120 Wp polycrystalline type, with a standard test condition efficiency of 14%, a maximum voltage of 17.40 V and a maximum current of 6.89 A.The collectors were thermally insulated using glass wool and tested with different sensing equipment.K-type thermocouples were used to measure the temperature of PV cell, fluid inlet, and outlet."A rotameter was used to measure the mass flow rate, which was fixed for the PV/T's at 0.1 kg/s.Also, an Apogee meter pyranometer was used to measure solar irradiance.All sensors were connected to a data acquisition system.In addition, an electronic source was used to assist the measurement of PV voltage and current.Moreover, the schematic diagram of the PV/T Water and PV/T PCM Water are illustrated in Fig. 1.It is important to note that the purpose of the experiments is to study the improvement in PV or PV/T system yield under the proposed criteria in this paper.The experiments were conducted from July 2017 to July 2018.The systems were operated from 7:00 a.m. and until 8:00 p.m. However, the observable changes are found from 10:30 a.m. and 06:00 p.m. 
local time.Once measurements of output were made, the exergy calculations were done, and the yield per area graph was drawn, as shown in Fig. 2.It is clearly observed from the figure that the PV/T PCM Water exhibits the highest yield per area among the proposed systems.This is because the area is the same for all systems, and hence the only change is in the total exergy produced.Higher exergy is observed by PV/T PCM Water due to the role of PCM in maximizing heat storage and minimizing the increase of cell temperature.Given that electrical exergy became higher, the PV/T PCM Water exhibits better cooling than PV/T Water.The second highest is the water-based PV/T which at an irradiance of 800 W/m2, exhibits a total yield of around 72.7 J/m2 while the PV is only at 53 J/m2.For rooftops and limited area installation, the best system is the PV/T with PCM using water as a coolant.However, the yield per space criterion shows different results; illustrated in Fig. 3.The highest yield per space was found for a typical PV system.This situation is because it does utilize less volume.However, when comparing the two proposed PV/T systems, the PV/T PCM Water seems to bypass the PV/T Water in Yield per volume when the irradiance is around 500 W/m2.This is due to better thermal regulation for PV and higher thermal exergy across time; once the PCM begins to discharge its stored heat.If selection criteria are focused on limited space, it would be preferable not to utilize a PV/T, for this case.However, this is seldom the case.In most cases, when implementing a solar energy system, the main limitations are either area of installation or weight of systems."Hence, if the yield per space criterion is utilized to compare two different PV/T's, one of which is water-based and other uses PCM with water, then the later is recommended for selection.At a solar irradiance of 600 W/m2 the performance of the proposed PV/T systems was quite similar; this is attributed to fluctuations in the ambient temperature.It is important to note that other aspects affect the performance of the installed systems which are outside the scope of this study.Moreover, the yields per space in Fig. 3 are extracted from yields per actual volume and set for a volume of 1 m3.The yield per weight, on the other hand, is provided in Fig. 4.Indeed, and again, a typical PV produces higher yield per kg.For this criterion, the PV/T Water system produces a YPW of 3 J/kg at an irradiance of 800 W/m2.At the same irradiance, the PV/T PCM Water only produces 2.68 J/kg; making it the lowest-performing.The weight aspect introduces many essential considerations such as the additional costs which needed for improving the support structure or perhaps the limitations of weight which may occur in specific settings.The differences in the findings for each criterion supports the claims that these criteria are essential and useful for establishing better judgment for PV/T comparison and selection.The PV/T PCM Water and PV/T Water were both examined using the PV-ST curve; to assess the yielding bias.The curve, shown in Fig. 5, relates the electrical and thermal exergy for each collector.The linear equations in Fig. 5 simply describe how much the electrical exergy changes corresponding to change in thermal exergy.As Fig. 
5 shows, both PV/T systems exhibit positive slopes.The slope for PV/T Water is steeper than that of PV/T PCM Water, given that it shows a larger change in electrical exergy corresponding to a change in thermal exergy.The y-intercept of the graph shows a higher value for the PV/T PCM Water; that is to say, the electrical exergy reaches a value of 35.9 before the thermal exergy output becomes meaningful, and this delay is attributed to heat stored in the PCM.For the PV/T Water the intercept is around 19.013; thermal output occurs earlier because heat immediately transfers into the cooling pipes and is extracted by the working fluid.According to this means of representation, the PV/T Water is more electrically biased than PV/T PCM Water, although the latter PV/T system achieved a higher electrical and thermal exergy.Finally, with regards to the Cost per yield criterion, Table 2 provides the LCC, annual total PV/T exergy and Cost of yield for each system.Moreover, over the 25-year lifetime of the system, the yield is expected to be reduced.Hence, reductions of 10%, 20%, and 30% are considered in the years 0–10, 10–20 and 20–25, respectively.According to the cost of yield calculation, the lowest cost per yield is achieved by the PV/T PCM Water collector at around 293 $/MJ, followed by the PV/T Water collector at 404.347 $/MJ; the highest cost of yield is around 478.275 $/MJ for the PV system.In conclusion, this study presents four novel evaluation criteria for hybrid PV/T collectors, which are Yield per area/space, Yield per weight, Yield bias, and Cost of yield.Each evaluation criterion can be utilized to serve a different PV/T design and configuration strategy and can help users select the optimum PV/T design, which corresponds to their load demands, space, and weight limitations.Moreover, these criteria can be utilized to compare different PV/T systems with slight variations in design material, configuration, and operating conditions.The following recommendations are presented for future work:To develop, and examine the utility of, an evaluation criterion based on carbon footprint.To examine the complexity of PV/T systems, in operation and maintenance, for the users and assess the utility of simplistic PV/T configurations with limited operation and maintenance.Further work to prove this concept is also recommended to enrich the analysis with regards to elements of the area, space, weight, yield bias, and cost of yield. | The existing literature in photovoltaic thermal (PV/T) collectors shows a promising development in this technology. The current markers for the technologies' success are examined for various design configurations and materials. However, the lack of an international standard to follow for PV/T installation, testing, and analysis may hinder the field from advancing to the next stage. In order to develop such standards, it is critical to establish a methodology for evaluating PV/T collectors. This paper examines existing evaluation criteria and proposes four novel methods for PV/T evaluation, namely yield per area/space, yield per weight, yield bias, and cost of yield. Outdoor experiments were performed in Bangi-Malaysia, as a proof of concept to the proposed criteria. The tested systems are (i) PV, (ii) Water-based PV/T and (iii) PV/T with Phase Change Material (PCM) and water as a coolant. |
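To make the four PV/T criteria above concrete, the sketch below shows one plausible way to compute yield per area, yield per space, yield per weight, and cost of yield from a collector's measured exergy output. The 10/20/30 percent lifetime reductions over years 0–10, 10–20 and 20–25 follow the text, but treating cost of yield as LCC divided by lifetime exergy is an assumption, and all numerical inputs are hypothetical placeholders rather than the paper's measured values.

```python
# Minimal sketch of the four evaluation criteria; inputs are hypothetical.

def yield_per_area(total_exergy_j, area_m2):
    return total_exergy_j / area_m2      # J m-2 of illuminated surface

def yield_per_space(total_exergy_j, volume_m3):
    return total_exergy_j / volume_m3    # J m-3 of occupied volume

def yield_per_weight(total_exergy_j, mass_kg):
    return total_exergy_j / mass_kg      # J kg-1 of system mass

def cost_of_yield(lcc_usd, annual_exergy_mj):
    """Assumed reading: LCC divided by lifetime exergy, applying the
    10/20/30 % yield reductions over years 0-10, 10-20 and 20-25."""
    lifetime_mj = (annual_exergy_mj * 0.90 * 10 +
                   annual_exergy_mj * 0.80 * 10 +
                   annual_exergy_mj * 0.70 * 5)
    return lcc_usd / lifetime_mj         # $/MJ

# Example with placeholder inputs
print(round(cost_of_yield(lcc_usd=5000.0, annual_exergy_mj=0.8), 1))
```

The yield-bias criterion is graphical (the PV–ST curve) rather than a single ratio, so it is not reduced to a function here; the slope and intercept of electrical versus thermal exergy would be fitted from the measured series instead.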
31,452 | Response of bean cultures' water use efficiency against climate warming in semiarid regions of China | The experiment was conducted at Guyuan Experimental Station in a typical semiarid region of China at N35.14″–36.38″ and E105.20″–106.58″.The annual air temperature during 1960–2014 was 6.3 °C–10.2 °C and the multi-year mean air temperature was 7.9 °C.The air temperature distinctly rose in the recent 50 years and especially after 1998.The annual rainfall volume during 1964–2014 was 282.1–765.7 mm and the multi-year mean rainfall was 450.0 mm.The rainfall volume in the recent 50 years was distinctly decreasing.Wheat, broad bean and corn are the main crops, matured once per year, and the region is a typical semiarid rainfed farming area.The research on the warming effect on broad bean water use efficiency was done by field infrared radiator warming methods.In December 2009, the United Nations Climate Change Conference held in Copenhagen, the capital city of Denmark, fixed the target that the global warming amplitude in the coming 50 years will be controlled at 2.0–2.4 °C.Therefore, the designed warming stages were 0 °C, 0.5 °C, 1.0 °C, 1.5 °C and 2.0 °C.Each plot in the experimental farm was 8 m2 and plots were spaced 3.0 m apart. Each plot was equipped with 2 infrared radiator warming tubes, and the support height was adjusted so that the warming pipe was 1.2 m above the crop canopy.The warming pipe power was fixed according to the warming requirement and local air temperature.The infrared radiator warming pipe powers used in the experiment were 250 W, 500 W, 750 W, 1000 W, 1250 W and 1500 W respectively.Broad beans were warmed continuously, day and night, during the whole growing period.The experimental farm soil was loessal soil with 8.5 g of organic matter, 0.41 g of total nitrogen, 0.66 g of total phosphorus and 19.5 g of total potassium per kg.The experimental farm was fenced on four sides to keep out animals.Farm water consumption was calculated based on the soil moisture data collected in the broad bean seedling stage, ramifying stage, budding stage, blooming stage and podding stage.ET1–2 = Σγi Hi(θi1 − θi2) + P0 + M + K,In the equation above, ET1–2 is the stage water consumption, i is the soil layer number, n is the total number of layers; γi is the soil dry bulk density of soil layer i, Hi is the thickness of soil layer i, θi1 and θi2 are respectively the stage beginning moisture content and stage end moisture content of soil layer i, expressed as a percentage of the dry soil weight, P0 is the effective rainfall volume, M is the water irrigated in a time section; and K is the moisture content compensated by the underground water in a time section.When the underground water is deeper than 2.5 m, K can be neglected; crops in the researched region are not irrigated and the underground water is deeper than 5 m, so M and K can be neglected.Water use efficiency was calculated as per the equation below.WUE = Y/ETα,In the equation, WUE is the water use efficiency, Y is the yield, ETα is the actual moisture consumption during the crop growing stage, i.e.
sum of moisture consumption in all stages.Crops were harvested manually and the yield was measured and recorded actually.Soil moisture content was measured by aluminum box drying method.Soil was sample by a soil drill, each 20 cm was a layer, and the soil sampling depth was 0–100 cm.Each soil sample was put into an aluminum box at the first time and dried to a constant weight at 110 °C before the soil moisture content was calculated.During the broad bean whole growing and warming period, each plot was equipped with an automatic temperature sensor to detect the air temperature at 10 cm, 20 cm and 30 cm to the ground or canopy layer once per 20 min, and results were automatically output and saved in the recorder,Crop yield, soil moisture content, rainfall volume, air temperature and other agricultural and meteorological data were processed and mapped by Microsoft Excel 2003.Broad beans have different features and different requirements for ambient conditions in different growing stages.In the budding stage, dry matter forms and accumulates a lot, and it is also the nutriment growing stage and reproductive growing stage.Temperature influences a lot to the time of broad bean ramifying and budding, as excessively tall plants in the budding stage may bring too much shadow which may cause excessive falling off of pods and lodging of crops.Excessively short plants are not good for rich yield.Broad bean blooming and podding are simultaneous, the blooming and podding stages are the most important growing stages when organs compete for the assimilation product the most severely.This research shows the photosynthesis and transpiration of broad bean in China’s semiarid regions significantly accelerate in the seedling stage, ramifying stage, budding stage, blooming stage and podding stage."With temperature rising, i.e. warmed to 0 °C, 0.5 °C, 1.0 °C, 1.5 °C and 2.0 °C, the broad bean photosynthesis doesn't change too much, but the transpiration changes obviously.Broad bean photosynthesis and transpiration changes in different warming conditions, when warmed by 0.5–1.5 °C, photosynthesis is distinctly faster than transpiration.When warmed by 1.5 °C above, broad bean photosynthesis at the seedling stage and ramifying stage is distinctly faster than transpiration, but transpiration is faster than photosynthesis in the budding stage, blooming stage and podding stage.Drought in broad bean flowering and blooming stage mainly affects the number of pods and number of heavy kernels.Drought in the podding stage and full podding stage decreases the hundred-grain weight.Drought in the blooming and podding stage sharply decreases the yield.Broad bean yield is determined by the number of harvested plants, number of kernels per plant, and the hundred-grain weight.Table 1 shows that warming distinctly affects the number of kernels per plant and hundred-grain weight.Further warming distinctly increases the broad bean number of kernels per plant and hundred-grain weight.But when warmed to 1.5 °C above, number of kernels per plant and hundred-grain weight distinctly drop and caused to a yield decrease.Warming for 0.5–1.0 °C distinctly increases the broad bean yield by 12.9%–16.1%.But the yield decreases by 39.2–88.4% when the temperature was increased by 1.5–2.0 °C.Fig. 4 shows that the water use efficiency distinctly increased in the broad bean seedling stage, ramifying stage, budding stage, blooming stage and podding stage.With warming, i.e. 
warmed to 0.5 °C, 1.0 °C, 1.5 °C and 2.0 °C, the broad bean water use efficiencies are distinctly higher than those in the unwarmed stage.Fig. 5 shows that with warming, i.e. warmed to 0.5 °C, 1.0 °C, 1.5 °C and 2.0 °C, the broad bean yield and water use efficiency increases before decreasing.Broad bean yield increases when warmed to 0.5 °C below and decreases when warmed to 0.5 °C above.The water use efficiency increased when the temperature was increased by 1.0 °C below, and it quickly decreased when the temperature was increased by 1.0 °C above.Climate warming will significantly affect broad bean growth and yield.Climate warming significantly affects the water use efficiency via modifying the plant productivity and evaporation.Warming affects the plant evaporation via modifying the stomatal conductance.Below a certain threshold, warming increases the leaf stomatal conductance, net photosynthesis increases faster than transpiration, and thus the water use efficiency is improved; above a certain threshold, warming increases evaporation and further decreases the water use efficiency.Yepez et al. made a general observation on the semiarid mesquites, and they found warming affected evaporation significantly modified the water use efficiency.Gao et al. researched and found that leaf temperature rising of sorghum sudanense in the arid region improved the net photosynthesis and transpiration, and caused a significant negative correlation between single leaf water use efficiency and leaf temperature.Tenhunen et al. researched and found that climate warming accelerated crop transpiration and soil moisture evaporation, and influenced the crop water use efficiency in semiarid regions.Water use efficiency in the crop ecological system drops with the decrease of soil moisture, and it means, under the extremely arid condition, the crop photosynthesis changes down by some other factors besides the air pore factor.Zhao et al. researched and found that climate warming inhibited photosynthesis and dry matter accumulation and further influenced the water use efficiency of spring wheat in the northwest semiarid region.Wang et al. researched and found that in the northwest semiarid region, climate warming decreased the water use efficiency of main crops corn and spring wheat, the corn and spring wheat water use efficiencies decreased with an index or parabola curve with the increase of moisture supply in the growing period, and warming is negative to crop water use efficiency.Warming is good for crop photosynthesis and can improve the crop water use efficiency.Loader et al. verified that warming increased plant photosynthesis and further promoted the plant water use efficiency.Yao et al. researched and found that winter and spring air temperature in the semi-humid arid region of Loess Plateau significantly increased, winter wheat over-winter death rate distinctly dropped, and the water use efficiency rose.Xiao et al. 
researched and found that water use efficiencies of spring wheat, potato and corn in the northwest semiarid region increased with air warming in the past 50 years.However, excessively warming affected crop photosynthesis, increased transpiration and soil moisture evaporation, and decreased the crop water use efficiency.Jiang and Dong researched and found that aggravating drought gradually increased the plant water use efficiency but decreased it above a certain threshold.Below a certain threshold, warming increases the leaf stomatal conductance, net photosynthesis increases faster than transpiration, and thus the water use efficiency is improved; above a certain threshold, warming increases evaporation and further decreases the water use efficiency.Warming affects crop photosynthesis, transpiration and soil moisture evaporation, and further affects crop water use efficiency.A higher temperature brings stronger crop transpiration and soil moisture evaporation, and may decrease the crop water use efficiency.Loader et al. verified that warming improved the photosynthesis and further improved the water use efficiency of stalkless flowered oak seedlings.Gao et al. researched and found that leaf temperature rising of sorghum sudanense in the arid region improved the net photosynthesis and transpiration, and caused a significant negative correlation between single leaf water use efficiency and leaf temperature.The temperature effect on plant water use efficiency is more or less complicated.Warming affects the plant transpiration via modifying the stomatal conductance.Below a certain threshold, warming increases the leaf stomatal conductance, net photosynthesis increases faster than transpiration, and thus the water use efficiency is improved; above a certain threshold, warming increases evaporation and further decreases the water use efficiency.The research result of this paper shows that the water use efficiency of broad bean in China’s semiarid regions rises and then drops with air warming.When warmed by 0.5–1.5 °C, the broad bean water use efficiency distinctly increases.But when warmed by 1.5 °C above, it distinctly decreases.When researching the plant water use efficiency, plant respiration features can be more favorably explained if the effects of moisture conditions, temperature and other factors are put into consideration, and thus the plant water use efficiency can be recognized more exactly.In view of plant physiology, the change of plant water use efficiency in drought is caused by stomatal restrictive factors and non-stomatal restrictive factors.Stomatal restriction was realized by the regulation of leaf air bores protecting the cell movement, when the plant is under a light or medium moisture threat, air bores will be more sensitive to drought, the net photosynthesis is non-linear to the stomatal conductance which decreases before the decrease of net photosynthesis, and thus the transpiration is decreased, the water use efficiency is improved.Effective moisture in arid and semiarid regions is the most important factor for controlling the plant function, and the decrease of effective moisture will aggravate the plant physiological threat and weakness.Ogaya and Peuelas researched and found crops maintained high water availability in drought to reduce the effect of water deficiency and enhance the competitiveness for moisture in drought.Aridification in semiarid regions of Northwest China aggravated obviously in the past 50 years.In the coming 50 years with global warming, crop 
photosynthesis will be directly affected, crop transpiration and soil moisture evaporation will be highly increased, and thus crop growing will be inhibited, yield and water resource will be degraded, and a new challenge will be thrown to food and water resource security.The simulated experiment of farm warming by infrared radiator shows that broad bean photosynthesis and transpiration changes differently in different warming conditions.When warmed by 0.5–1.5 °C, the broad bean photosynthesis was faster than transpiration.But when warmed by 1.5 °C above, the broad bean transpiration in the budding stage, blooming stage and podding stage was faster than photosynthesis, and warming distinctly affected photosynthesis.Broad bean yield is determined by the number of harvested plants, number of kernels per plant, and the hundred-grain weight.Warming distinctly affects the number of kernels per plant and hundred-grain weight.Further warming distinctly increases the broad bean number of kernels per plant and hundred-grain weight, but when warmed to 1.5 °C above, the number and weight distinctly drop and caused to a yield decrease.The yield decreases by 39.2–88.4% when the temperature was increased by 1.5–2.0 °C.The broad bean yield and water use efficiency increased and then decreased with temperature rising.Broad bean yield increased when warmed to 0.5 °C below and decreased when warmed to 0.5 °C above.The water use efficiency increased when the temperature was increased by 1.0 °C below and it decreased when the temperature was increased by 1.0 °C above.In all, growth and yield of bean cultures in the semiarid regions of Northwest China will be significantly affected by warming. | Farm crop growing and high efficiency water resource utilizing are directly influenced by global warming, and a new challenge will be given to food and water resource security. A simulation experiment by farm warming with infrared ray radiator was carried out, and the result showed photosynthesis of broad bean was significantly faster than transpiration during the seedling stage, ramifying stage, budding stage, blooming stage and podding stage when the temperate was increased by 0.5-1.5 °C. But broad bean transpiration was faster than photosynthesis during the budding stage, blooming stage and podding stage when the temperature was increased by 1.5 °C above. The number of grain per hill and hundred-grain weight were significantly increased when the temperature was increased by 0.5-1.0 °C. But they significantly dropped and finally the yield decreased when the temperature was increased by 1.0 °C above. The broad bean yield decreased by 39.2-88.4% when the temperature was increased by 1.5-2.0 °C. The broad bean water use efficiency increased and then decreased with temperature rising. The water use efficiency increased when the temperature was increased by 1.0 °C below, and it quickly decreased when the temperature was increased by 1.0 °C above. In all, global warming in the future will significantly influence the growth, yield and water use efficiency of bean cultures in China's semiarid regions. |
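A small numerical sketch of the water-balance bookkeeping used in the bean study above is given below. It follows the layered soil-moisture form implied by the variable definitions in the text (bulk density γi, layer thickness Hi, start and end gravimetric moisture contents θi1 and θi2, effective rainfall P0), with M and K neglected as stated; the soil-profile values, yield, and seasonal water use are hypothetical illustrations, not the experiment's measurements.

```python
# Sketch of stage water consumption ET and water use efficiency (WUE)
# following the definitions in the bean study above. Soil-profile numbers
# are hypothetical; M (irrigation) and K (groundwater) are neglected.

def stage_water_consumption(layers, effective_rainfall_mm):
    """layers: list of (bulk_density g cm-3, thickness cm, theta_start %, theta_end %).
    gamma * H * dtheta gives cm of water; * 10 converts to mm."""
    soil_depletion_mm = sum(gamma * h * (t1 - t2) / 100.0 * 10.0
                            for gamma, h, t1, t2 in layers)
    return soil_depletion_mm + effective_rainfall_mm

def water_use_efficiency(yield_kg_ha, et_total_mm):
    """WUE = Y / ETa, expressed here in kg ha-1 mm-1."""
    return yield_kg_ha / et_total_mm

# Five 20 cm layers down to 1 m, matching the sampling depth (values hypothetical)
profile = [(1.30, 20, 18.0, 15.5), (1.35, 20, 17.0, 15.0), (1.40, 20, 16.0, 14.8),
           (1.40, 20, 15.5, 14.6), (1.45, 20, 15.0, 14.5)]
et_stage = stage_water_consumption(profile, effective_rainfall_mm=35.0)
wue = water_use_efficiency(2200.0, 320.0)  # assuming a seasonal ETa of 320 mm
print(round(et_stage, 1), round(wue, 2))
```

Summing the stage values over the seedling, ramifying, budding, blooming and podding stages gives the seasonal ETα that enters the WUE calculation.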
31,453 | Impact of light color on photobioreactor productivity | Microalgae are an attractive source for biofuels and bulk chemicals due to their high photosynthetic efficiency.At low light intensities, microalgae can achieve values up to 80% of the theoretical maximum PE of 0.125 mol CO2 fixed per mol photons absorbed .However, maximum PE values, as measured under low light conditions, will never be realized in microalgae mass cultures exposed to direct sunlight.The reason is the inherent nature of light.Unlike most chemical substances, light energy cannot be dissolved in the culture medium.Therefore there will always be a steep light gradient proceeding from a high level of sunlight to virtual darkness.Because of the high incident light intensity it is practically impossible to obtain the maximum light use efficiency in microalgae mass cultures.In a high density microalgae culture, most sunlight energy is absorbed in a small volume fraction of the photobioreactor on the light-exposed side.In this volume fraction, cells are coerced to absorb more light energy than the amount that can be converted to biochemical energy by their photosynthetic machinery.This leads to oversaturation and, consequently, waste of sunlight energy through heat dissipation .The result is a PE that is dramatically lower than that which can be obtained under low light conditions .As the photosynthetic machinery is easily oversaturated, the key to optimization is to reduce the amount of light energy absorbed per photosynthetic unit.This can be achieved by proper reactor design using the light dilution principle .However, high material costs limit its application.Considering efficient light utilization is a bottleneck of biological nature, modifications to the light harvesting complex of microalgae would possibly be more effective .In our previous study , we evaluated the areal biomass productivity of four different antenna size mutants under simulated mass culture conditions.These mutants were expected to show improved productivity because of their lower pigment content compared to the wild-type thereby assuring less light absorption per cell.However, none of the studied mutants performed better than the wild-type, possibly due to impaired photo protection mechanisms induced by the antenna complex alterations.Another explanation is the inadvertent side effects caused by the actual process of genetic engineering resulting in reduced fitness of the strains.These genetic side effects will have to be eliminated to fully benefit from the potential of antenna size reduction by genetic engineering.In order to demonstrate the potential of antenna size reduction on an experimental scale, light absorption can also be minimized by shifting the wavelength of the emitted light to the weakly absorbed green region.When supplying narrow-beam LED light at high light intensities, it is the wavelength specific absorption capacity of the algae that determines the extent of photosystem saturation and, consequently, the light use efficiency.Although there is a strong and prevalent agreement that red and blue light are optimal for algal cultivation because of the corresponding peaks in the algal absorption spectrum , the opposite could possibly be true for high density cultures.In dilute cultures, not all incoming light energy is absorbed and, therefore, light absorption is the limiting factor for maximizing productivity.On the contrary, high density cultures are characterized by the fact that all incoming light is absorbed anyway by 
direct or indirect control of biomass concentration via chemostat or turbidostat operation .Therefore, since total light absorption is already guaranteed in mass cultures by applying a high biomass density, productivity is limited by the efficiency at which the absorbed light is converted to biochemical energy, and not by the efficiency of light absorption.We hypothesize that in high density mass cultures the utilization of weakly absorbed light maximizes productivity while strongly absorbed light causes more oversaturation and is suboptimal for mass culture cultivation.Indeed, the action spectra of microalgal photosynthesis as determined by Emerson and Lewis and by Tanada indicate that green-yellow light is used at high efficiency once it is absorbed.A microalgal growth model was employed to estimate photobioreactor productivity as a function of light intensity and the spectral composition of light.The model takes into account the change of the spectral composition with increasing reactor depth because of preferential light absorption by microalgae.For example, white light becomes greener.The model allows calculation of the optimal biomass concentration leading to maximal productivity.For each color of light, as well as sunlight, the areal biomass productivity, the biomass specific growth rate, and the optimal biomass concentration were computed.Next to overall reactor productivity, this model provides insight into the light use efficiency at different positions in the reactor and how this depends on light color.In this study, we aim to deliver a proof of concept that the biomass specific light absorption rate determines the volumetric biomass productivity in microalgae mass cultures.We do not consider microalgae cultivation using artificial light as a viable process for producing bulk chemicals as the associated energy costs are high whereas sunlight is at no cost and abundantly available .In this study, we employ artificial light only as a tool to generate different specific light absorption rates by spectral tuning.We measured the areal biomass productivity of cultures exposed to warm white, orange-red, deep-red, blue, and yellow light.The area reflects the illuminated surface area of the photobioreactor.Cultivation took place in continuously operated bench-scale flat plate photobioreactors.For each color of light, the applied light intensity was 1500 μmol photons m− 2 s− 1.The biomass concentration was controlled at a fixed level that was high enough to absorb all incoming light energy.By comparing the biomass specific light absorption rate with the measured productivity of cultures exposed to different colored lights, insight was obtained into the importance of minimizing light absorption per cell to maximize productivity.Chlamydomonas reinhardtii CC-1690 was obtained from the Chlamydomonas Resource Center."The algae were cultivated in a filter sterilized medium with the following composition: urea, 0.99; KH2PO4, 0.706; K2HPO4, 1.465; MgSO4·7H2O, 0.560; CaCl2·2H2O, 0.114 and 20 mL L− 1 of a 100 times concentrated Hutner's trace elements solution .The cultures were pre-cultivated in 250 mL shake flasks containing 100 mL of medium at pH 6.7 and at 25 °C at a light intensity of 200–300 μmol photons m− 2 s− 1.The microalgae were continuously cultivated in flat-panel airlift photobioreactors with a working volume of 0.4 L, an optical depth of 14 mm, and an illuminated area of 0.028 m2.The reactors were equipped with a black cover on the backside to prevent exposure to ambient light.Warm 
white light was provided by Bridgelux LED lamps which are integrated in the Algaemist system. Other colors of light were provided using 20 × 20 cm SL 3500 LED panels of Photon Systems Instruments. The following colors were used: blue, orange-red, deep red, and yellow. The yellow light source was equipped with an optical low-pass filter to cut off red light. Unless explicitly stated otherwise, all cultures grown in yellow light described in this paper were supplemented with ± 50 μmol photons m−2 s−1 of blue light. The rationale behind this procedure is clarified in the results section of this paper. In Fig. 2 the emission spectra of all light sources are shown, and these are supplemented with the solar light spectrum and the wavelength specific absorption cross section of C. reinhardtii. Please refer to Tables S2–S6 of the supplementary material for the light intensity distribution across the illuminated reactor surface, which is provided for each light source. Reactor temperature was maintained at 25 °C, and the pH was kept at 6.7 by means of on-demand CO2 supply. The medium that was fed to the reactor had a pH of 7.0 and, to maintain the setpoint of 6.7 in the reactor, the CO2 supply rate was such that both CO2 and HCO3− were present at concentrations at least an order of magnitude higher than the saturation constant of Rubisco for CO2 and HCO3−. The reactors were operated in turbidostat mode to ensure a constant light regime; a light sensor measured the transmission through the reactor, and if light transmission was below the setpoint, the culture was automatically diluted with fresh medium employing a peristaltic pump. Further details of the photobioreactor setup and its operation are provided in de Mooij et al., with the exception that the gas stream of di-nitrogen was, at all times, 200 mL min−1. To determine the biomass dry weight content, the culture broth was passed through glass fiber filters as described by Kliphuis et al. and, subsequently, the mass difference between the dried empty filters and the dried filters with microalgae was recorded. All measurements on an individual sample were performed in triplicate. Light absorption was measured in a double beam spectrophotometer equipped with an integrating sphere. A reactor sample was transferred to a cuvette with a short light path of 2 mm. The same reactor sample was analyzed for its dry weight content. This allowed normalization of the absorption cross section, resulting in a biomass specific absorption cross section. Additional details of this protocol have been described by de Mooij et al.
Only samples from the cultures grown under yellow and warm white light were diluted with medium because of the higher biomass density. All other samples were not diluted. Photobioreactor productivity was estimated employing a microalgae growth model. The model predicts photosynthetic rates at every location in the reactor based on the local light intensity. The light intensity is calculated for each wavelength at every point in the reactor to account for preferential light absorption by microalgae and the resulting change in spectrum composition. A description of the model and a list of the model parameters used are located in appendix A. The following model calculations are based on parameters used in the experiments performed: an ingoing light intensity of 1500 μmol photons m−2 s−1, a reactor depth of 14 mm, and the absorption cross section of a continuous mass culture grown under warm white light in turbidostat mode. This mass culture was characterized by complete absorption of the incident light, with an outgoing light intensity of 10 μmol photons m−2 s−1. These conditions were chosen to maximize reactor productivity as, for C. reinhardtii, 10 μmol photons m−2 s−1 was found to be the photosynthetic compensation point, where the net photosynthesis rate is equal to zero. At higher biomass concentrations, dark zones are created where cell maintenance is a dominant process, which reduces the reactor productivity. At lower biomass concentrations, light passes the culture without being absorbed and without contributing to the overall productivity. Therefore, in this study light color was studied at biomass concentrations that were optimized for each light color. Running the cultivations at the same fixed biomass concentration would not be a fair comparison as it does not allow us to use the full potential of each color of light. Biomass concentration optimization is essential in a mass culture setup to maximize biomass productivity. Using the model, we estimated the optimal biomass concentration for each color of light to maximize the areal productivity, assuming the bioreactor is operated at a constant biomass density and constant light intensity. As can be seen in Fig. 3, strongly absorbed colors such as blue and deep red result in low biomass concentrations, while weakly absorbed light such as yellow gives a biomass concentration of 2.8 g L−1. By combining the local light absorption rate with the estimated biomass concentrations for all light sources, we calculated the biomass specific light absorption at each position inside the photobioreactor. Under the described conditions, blue light results in the highest qph, while the cultures grown under yellow light absorb the least light energy per unit of biomass. Fig.
4B illustrates the local specific growth rate as a function of the local light intensity in the reactor. Each light intensity corresponds to a certain location in the reactor. A culture exposed to blue light grows at μmax if the light intensity is higher than approximately 100 μmol photons m−2 s−1, while a culture exposed to yellow light requires about 500 μmol photons m−2 s−1 to support maximum growth. Although this shows the high sensitivity to blue light, it does not imply that the reactor productivity of a mass culture will be higher under blue light. The reason is that when grown under blue light, only low biomass concentrations can be supported and the light use efficiency is low, which limits the volumetric productivity. The spatially averaged μ values can be found in Fig. 3. A table with all the estimated model values can be found in appendix B. Fig. 4C depicts the local biomass yield on light energy as a function of reactor depth. In general, it can be observed that, in a mass culture, weakly absorbed light results in higher yields than strongly absorbed light. In the deeper, darker part of the reactor, the biomass yield on light energy decreases for all colors of light as cell maintenance becomes a significant factor relative to the photosynthetic activity. To maximize productivity, the biomass concentration was chosen in such a way that at the back of the reactor the local biomass yield on light is zero. This is at the photosynthetic compensation point. Stated differently, at every position in the reactor there is a positive contribution to the reactor productivity. The biomass productivity expressed per unit of illuminated surface area is presented in Fig. 3. The highest productivity is predicted for cultures exposed to yellow light, while the strongly absorbed blue light is expected to result in a productivity of 27 g m−2 d−1. Warm white light, whose spectrum contains a significant fraction of weakly absorbed light, results in a productivity as high as 51 g m−2 d−1. Deep red light is estimated to result in lower productivity than orange-red light. This is explained by the fact that the deep red light spectrum overlaps the chlorophyll a absorption peak, while the orange-red peak is located in a less absorbing region of the algal absorption spectrum. The light spectrum changes with increasing reactor depth because of the preferential absorption of blue and red light by green microalgae. The light becomes greener as the red and blue fractions are rapidly absorbed. As a consequence, warm white LED light and sunlight are quickly converted into green light with increasing culture depth. As is evident from Fig.
4A and C, the color of light influences the local qph and Yx/ph primarily in the first 2 mm of the culture. In high light conditions at the surface of the reactor, the highest Yx/ph is observed employing yellow light. However, at a depth ≥ 2 mm, higher yields can be obtained with warm white light and sunlight. Considering that 53% of the incoming light energy is absorbed within the first 2 mm, the photosynthetic efficiency in this surface layer has a dominant effect on reactor productivity. Based on the model predictions, a large difference in productivity can be expected between weakly and strongly absorbed light colors. Except for sunlight, we performed reactor experiments with all of the colors of light mentioned. Areal biomass productivity was measured at an ingoing light intensity of 1500 μmol photons m−2 s−1. The outgoing light intensity was maintained at 10 μmol photons m−2 s−1 by turbidostat control. The cultures exposed to yellow light were subjected to 1450 μmol photons m−2 s−1 and supplemented with 50 μmol photons m−2 s−1 of blue light, as will be discussed in detail later. In Fig. 5, the areal biomass productivity, biomass concentration, and the dilution rate are presented. The highest productivity was obtained employing yellow light. A slightly lower value was found for warm white light. Cultures exposed to blue, orange-red, and deep red light all yielded a productivity of approximately 29 g m−2 d−1. For the exact values of light intensity and the experimental data obtained for each experiment, please refer to Table S1 of the supplementary material. The highest biomass concentration was measured for cultures exposed to yellow light and the lowest for cultures grown under blue light. Since all cultures were turbidostat controlled and were, as such, forced to absorb 1490 μmol photons m−2 s−1, the biomass concentration presented in Fig.
5B inherently demonstrates the ability of the algal biomass to absorb light of different colors. A low biomass concentration corresponds to a relatively high biomass specific light absorption rate, which was accompanied by a high rate of energy dissipation. In our experiments, the specific growth rate μ equals the reactor dilution rate D, as can be deduced from the biomass balance over the photobioreactor. As expected, the low biomass concentration in cultures grown under blue light is accompanied by a high dilution rate, because cells cultivated under blue light will be light saturated at relatively low light intensities. Compared to other light colors, the light intensity will be high enough to saturate the cells in a larger volume fraction of the reactor. The result is a higher spatially averaged biomass specific growth rate. However, since volumetric productivity is the product of biomass concentration and dilution rate, the low biomass concentration limits productivity. It is remarkable that, even though the culture exposed to orange-red light exhibited a lower biomass concentration than cultures grown under warm white light, which indicated a higher biomass specific light absorption, this lower biomass concentration was accompanied by a lower dilution rate. The maximum Fv/Fm value of dark adapted samples withdrawn from the reactor represents photosystem II quantum efficiency and is an indicator of photoinhibition or down-regulation of photosystem II activity. The highest values were obtained for the cultures exposed to blue and white light. The lowest Fv/Fm value was obtained for the cultures exposed to yellow light. Cultures exposed to orange-red light also demonstrated reduced values, indicating that photosystems did not function at full capacity. Cultivation under both orange-red and deep-red light was difficult. Several experiments at an Iph,in of 1500 μmol photons m−2 s−1 failed as no stable growth could be obtained. In most cases, there was biomass growth for a few days, after which growth suddenly ceased completely and was accompanied by cell agglomeration. In some cultivations, productivity fluctuated considerably from day to day. Applying orange-red illumination, three out of six experiments were successful, which means that stable, day to day productivity values were obtained for at least six days. Applying deep red light, only one experiment out of five was successful. Assuming that the high light intensity did not allow unconstrained growth of the algae when applying deep red light, two additional experiments were performed at an incident light intensity of 850 μmol photons m−2 s−1 and an outgoing light intensity of 10 μmol photons m−2 s−1. As depicted in Fig. 6, cultures grown under deep red light had a lower biomass concentration and, therefore, a higher biomass specific light absorption rate compared to those grown under warm white light. The dilution rate, however, was not higher compared to cultures grown under warm white light and, therefore, the productivity was also lower. Otherwise stated, at 850 μmol photons m−2 s−1, the light use efficiency of deep-red light was also lower than for warm white light. Maximum Fv/Fm values were low for the deep red culture. The culture grown under white light exhibited an Fv/Fm value of 0.61. Fig.
7 shows the measured light absorption spectra of cultures grown under different light colors. In the continuously operated turbidostat cultures with ingoing light intensities as high as 1500 μmol photons m−2 s−1, the absorption cross section of the microalgae did not markedly change as a function of light color. Up to seven measurements were performed per culture, and the experimental variation within these measurements was higher than the variation between the different cultures. We began our experiments using a single yellow light source. Productivity was far below what was estimated. The cultures were unstable, as productivity and biomass concentration fluctuated from day to day. In addition, maximum Fv/Fm values were low, indicating a low PSII quantum efficiency. Pigment content was also considerably lower than measured for all other light colors. For this reason, the yellow light was supplemented with a moderate quantity of blue light. The total light intensity thus was 1500 μmol photons m−2 s−1. Unless explicitly stated otherwise, all cultures grown in yellow light described in this paper were supplemented with ± 50 μmol photons m−2 s−1 of blue light. By applying blue light supplementation, the areal productivity increased from 37 ± 11 g m−2 d−1 to 52 ± 8 g m−2 d−1, and cultivation was more stable. Furthermore, the maximum Fv/Fm value was clearly higher, indicating improved functioning of photosystem II. The absorption cross section, depicted in Fig. 8C, was demonstrated to be higher in the case of blue light supplementation. The ratio between absorption by carotenoids and chlorophyll a was comparable for both situations. Microalgal photosynthesis is inefficient at high light intensities. Not considering photobioreactor design, two approaches can be distinguished to increase the photosynthetic efficiency: genetic engineering of the microalgae or spectrally tailoring the light source via light engineering. Our previous study showed that the current generation of Chlamydomonas antenna size mutants is not able to outperform the productivity of the wild-type strain under mass culture conditions. To provide a more solid foundation for the hypothesis that biomass productivity is a function of the amount of light absorbed per cell, in this work we shifted the emission of artificial illumination to both the low and high absorption regions of the spectrum by selecting four different colors of light. Our model successfully predicted the biomass productivity for different colors of light. The biomass concentration could be accurately estimated as, in a turbidostat controlled culture, this is a function of the incident light intensity, the outgoing light intensity, and the absorption cross section of the cells. Calculation of the dilution rate and the areal productivity is more challenging since there are many factors that influence the light use efficiency. The model assumes that good mixing prevents severe photodamage and, therefore, photoinhibition is not considered. At very high light intensities, this assumption might not be valid, rendering the model prediction overly optimistic. Our model assumes that the microalgae suspension is exposed to a homogeneous light intensity distribution. In reality, there can be substantial differences between, for example, the middle of the light exposed surface and the relatively dark corners. Depending on the distribution, this may lead to under- or overestimation of biomass productivity. Please refer to Tables S2–S6 of the supplementary material to see the light intensity
distribution of the light sources that were employed in our experiments. In accordance with our model predictions, cultures exposed to yellow light resulted in the highest areal productivity, closely followed by cultures grown under warm white light. The three strongly absorbed colors resulted in areal productivities that were almost half of the areal productivity measured for yellow light. Cultures were difficult to grow under red light, and this affected the productivity. However, our substantial number of successful reactor experiments with different colors of light confirms our model-based expectation that, under mass culture conditions, productivity is inversely correlated with biomass specific light absorption. Analogous to our results, Kubin et al. also showed that maximal productivity with Chlorella vulgaris was obtained using weakly absorbed green light. They also measured productivity values for blue light as being half of that for green and white light. Mattos et al. performed short term oxygen evolution experiments and concluded that weakly absorbed colors of light, such as green, result in a higher photosynthetic efficiency for high density cultures. In these experiments the cells were not allowed to acclimate to the different colors of light and the applied light regime during the measurements, and therefore these conditions do not simulate mass culture conditions. Instead of replacing blue and red light by green light, they suggest that green light should be supplemented to strongly absorbed colors of light. The amount of nitrogen source present in the cultivation medium supports biomass concentrations up to 4.5 g L−1. To ensure that nitrogen limitation did not occur, we increased the urea content for the cultures exposed to yellow light. There was no measurable effect of the urea supplementation and, therefore, we conclude that the medium was indeed sufficient for unconstrained growth. No substantial difference in the absorption cross section of the microalgae was observed after cultivating them under different light colors. Apparently, under mass culture conditions and irrespective of the color of light, the light regime in which the algae rapidly alternate between 10 and 1500 μmol photons m−2 s−1 leads to the same level of pigmentation. The microalgal pigment content is highly dependent on the perceived light intensity. In the process of photoacclimation, the pigment content decreases with increasing light intensity, reaching a plateau at high light intensities. It could have been expected that pigment content correlates to the biomass specific light absorption rate. If this was the case in our experiments, blue light should have resulted in a lower pigment content to compensate for the higher absorption capacity for blue light. Likewise, yellow light should have resulted in an increased pigment content to harvest more of the weakly absorbed yellow light. As the mechanism behind pigment acclimation in response to light quality has not yet been unravelled and the importance of other light acclimation responses has not yet been studied in detail, our observation is difficult to explain. In the literature, statements regarding pigment accumulation under different colors of light are contradictory. This is most likely due to the fact that it is difficult to distinguish between the effect of light intensity and light quality, as the color of light determines the ease of absorption and therefore the biomass specific light absorption rate. For a fair comparison, the pigmentation should be compared
for cultures exposed to different colors of light but with the same biomass specific light absorption rate, which can be challenging to achieve in photobioreactors with steep light gradients. It is remarkable that the lowest Fv/Fm value was obtained for cultures exposed to yellow light, while this culture yielded the highest areal productivity. The areal productivity for yellow light was almost double that for blue light, where an Fv/Fm value of 0.63 was measured. This suggests, therefore, that part of the photosystems became inactive, which reduced the biochemical conversion capacity; however, yellow light could still be used at a higher efficiency than, for example, blue light. However, Fv/Fm values should preferably be measured with the same color of light as the cultivation light as, for higher plants, it was observed that this is required to measure maximum quantum yield values. The rationale behind this statement is that the PSI/PSII stoichiometry is optimized for the light the plant is exposed to, and when there is a sudden change in light spectrum, there might be an imbalanced excitation of the two photosystems. This could have affected our results as we applied red light for our measurements. The hypothesis of this study is that the degree of photosystem saturation dictates the photosynthetic efficiency of the microalgae culture and that photosystem saturation can be controlled by applying different colors of light. The rationale has been previously discussed in the literature and applies to both microalgae mass cultures and to the canopy of horticulture crops. In both situations, weakly absorbed light is expected to increase the photosynthetic efficiency as less energy is dissipated in the surface layer of the photobioreactor or the outer zone of the canopy. Indeed, several experimental studies demonstrated that green light supplementation led to increased productivity of crops. Sforza et al.
used a spectral converter filter to convert the green and yellow light to red light with the intention of maximizing the portion of useful light for photosynthesis. No significant improvement was found. According to our hypothesis, this approach would actually decrease the productivity under high light conditions as the culture will become even more oversaturated. To maximize productivity in such a setup, the red and blue light should be shifted to the green range. It remains ambiguous whether yellow light suffices for optimal growth. To our knowledge, blue light supplementation to yellow or green light has not been studied previously. Yellow light could possibly be more difficult for cultivation than green light, as the emission spectrum of some green light sources partly overlaps with the blue region. In our experiments, cultures that were supplemented with a moderate amount of blue light gave a higher productivity, had more stable cultivation, and had enhanced cell fitness as indicated by a higher Fv/Fm value. The improvement in performance cannot be attributed to the energy content of the additional 50 μmol photons m−2 s−1 of blue light, as this is only a 3.5% increase in total light intensity. This finding posits the following tentative hypothesis: blue light acts as a trigger for metabolic regulatory mechanisms that are essential for stable cultivation under the described mass culture conditions. Higher plants were ascertained to exhibit photoprotection mechanisms that are solely activated by blue light. Authors of the same paper also observed that blue light is exploited by plants as an indicator of over-excitation and the need to switch to a state enhancing thermal energy dissipation. In addition, for the diatom Phaeodactylum tricornutum, blue light was determined to be essential for the activation of photoprotection under high light, as an increased NPQ capacity and a larger pool of xanthophyll cycle pigments could only be observed in cultures grown under blue light. In another study, it was hypothesized that, in Chlorella, blue light produces the same effects that are normally observed for strong white light. Blue light is also known to affect several metabolic pathways and induce gene expression in both algae and plants via blue light receptors. In horticulture, the beneficial effects of blue light supplementation have been demonstrated in several studies. Blue light supplementation was found to double the photosynthetic capacity and prevent abnormal growth in cucumber plants. In spinach, blue light was discovered to enhance the acclimation responses to high light conditions and to increase the chlorophyll content. Other greenhouse plants were found to have increased biomass accumulation, increased vegetative growth, and expanded leaves under blue light supplementation. To conclude, blue light seems to play a key role in the survival and development of photosynthetic organisms. Our experiments with Chlamydomonas also indicate that exposure to blue light is essential for optimal growth under high light conditions, probably caused by wavelength-dependent activation of photoprotection and dissipation mechanisms. Maintaining a stable culture under red light was difficult. Under deep red light at 1500 μmol photons m−2 s−1, only one experiment out of eight was successful. Productivity was slightly lower than was estimated by our model based on the light emission spectrum of the deep red light source. Possibly, 1500 μmol photons m−2 s−1 of deep red light was too intense for the photosystems. On the
one hand, this is striking since the biomass specific light absorption rate is lower than that of blue light. On the other hand, the regulatory mechanisms triggered by the color of light seem to be more complex than initially expected. Therefore, it cannot be excluded that, under high light conditions, a balanced mix of wavelengths is required for optimal growth. At 850 μmol photons m−2 s−1 of deep red light, two experiments were successful and reproducible. At this light intensity, severe damage to the photosystems is less probable. As expected, based on our theory that strongly absorbed light decreases light use efficiency, the biomass productivity was lower than for the culture in warm white light. Also, under orange-red light at 1500 μmol photons m−2 s−1, productivity was lower than our model predicted. The use of orange-red light for microalgae cultivation is common and generally without complications. Kliphuis et al., for example, used the same light source as we did, but worked with light intensities below 100 μmol photons m−2 s−1. The high intensities of red and yellow light in this study have not been previously reported for Chlamydomonas. Therefore, we suggest that the high light intensity must have been the explanation for the poor performance. For cultures exposed to yellow light, blue light supplementation was found to improve reactor performance and productivity. A similar approach could possibly work for red light as well. The cell size of Chlamydomonas is influenced by light color. Continuous blue light is known to delay cell division, which signifies that cells continue to grow in size as biomass is accumulating. Otherwise stated, a larger cell size is required for cell division to occur. A blue light receptor is likely to be involved. The opposite was determined for red light. Under red light, cells undergo a division cycle when they have achieved the minimal cell size required for division. In practice, the consequence is that, compared to white light, the average cell size is larger under blue light and smaller under red light. Cell size and the accompanying geometrical arrangement of the chloroplast, as well as the cellular chlorophyll content, are all factors that may influence light penetration and light scattering. This phenomenon, therefore, complicates modeling reactor productivity. Our productivity measurements may also have been influenced by this unintended effect of blue and red light. In this study, we presented areal biomass productivities of high density microalgae cultures exposed to high light intensities of different colors. Turbidostat control ensured that the total amount of absorbed light was equal for each color. Our results demonstrate that, under mass culture conditions, biomass productivity and the biomass specific light absorption rate are inversely correlated, as oversaturation of the photosystems leads to a waste of light energy and, therefore, a lower biomass yield on light. The highest biomass productivity, measured under continuous illumination, was obtained employing yellow light, closely followed by cultures grown under warm white light. Cultivation under blue, orange-red, and deep red light resulted in biomass productivities of approximately 29 g m−2 d−1, which is nearly half of the productivity measured for yellow light. The microalgae absorption cross section remained the same under all tested conditions. Our approach with different colors of light to investigate photosystem saturation was interfered with by intrinsic biological effects. Cultivation under pure yellow
light was impeded. Minimal supplementation of blue light to the cultures in yellow light was determined to stimulate normal growth and increase productivity. Additional research is required to reveal the underlying mechanism that is responsible for the beneficial effects of blue light supplementation. Taking into account possible wavelength deficiencies, white light with a high green or yellow content in addition to a small blue fraction would result in the highest productivity of microalgae mass cultures. This study provides a solid base for further research on decreasing the biomass specific light absorption in order to maximize productivity. Presently, the creation of antenna size mutants that permanently absorb less light per cell is the most promising solution. | Microalgae perform photosynthesis at a high efficiency under low light conditions. However, under bright sunlight, it is difficult to achieve a high photosynthetic efficiency, because cells absorb more light energy than can be converted to biochemical energy. Consequently, microalgae dissipate part of the absorbed light energy as heat. The objective of this study was to investigate photobioreactor productivity as a function of the biomass specific light absorption rate. A strategy to circumvent oversaturation is to exploit light with a spectral composition that minimizes light absorption. We studied productivity of Chlamydomonas reinhardtii cultivated under different colors of light. The incident light intensity was 1500 μmol photons m−2 s−1, and cultivation took place in turbidostat controlled lab-scale panel photobioreactors. Our results demonstrate that, under mass culture conditions, productivity and biomass specific light absorption are inversely correlated. The highest productivity, measured under continuous illumination, was obtained using yellow light (54 g m−2 d−1) while blue and red light resulted in the lowest light use efficiency (29 g m−2 d−1). Presumed signs of biological interference caused by employing monochromatic light of various wavelengths are discussed. This study provides a base for different approaches to maximize productivity by lowering the biomass specific light absorption rate.
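As a rough, hedged illustration of the kind of spectrally resolved growth model described in the record above (the authors' actual formulation and parameter values are given in their appendix A and are not reproduced here), the Python sketch below combines wavelength-dependent Lambert-Beer light attenuation with an assumed hyperbolic light response plus a maintenance term, and scans the biomass concentration for the areal productivity optimum. The three spectral bands, their absorption cross sections and all other numbers are illustrative assumptions, not the published model.

```python
import numpy as np

# Hedged sketch, not the authors' model: spectral Lambert-Beer attenuation plus an
# assumed saturating growth response, to show how a weakly absorbed band sustains a
# higher optimal biomass concentration. All parameter values below are invented.

wavelengths = np.array([450.0, 550.0, 680.0])   # nm: blue, green/yellow, red bands (assumed)
a_x = np.array([0.30, 0.08, 0.25])              # m2 g-1, biomass specific absorption per band (assumed)
I0 = np.array([500.0, 500.0, 500.0])            # umol photons m-2 s-1 per band (assumed, 1500 total)

mu_max = 3.5 / 24.0   # h-1, assumed maximum specific growth rate
Ks = 100.0            # umol photons m-2 s-1, assumed saturation constant
ms = 0.01 / 24.0      # h-1, assumed maintenance-related decay
depth = 0.014         # m, reactor light path (14 mm, as in the experiments)

def local_intensity(z, Cx):
    """Spectral Lambert-Beer attenuation at depth z (m) for biomass Cx (g L-1)."""
    return I0 * np.exp(-a_x * (Cx * 1000.0) * z)   # Cx converted to g m-3

def local_growth(z, Cx):
    """Assumed hyperbolic light response minus maintenance (h-1)."""
    I = local_intensity(z, Cx).sum()
    return mu_max * I / (I + Ks) - ms

def areal_productivity(Cx, n=200):
    """Depth-averaged volumetric growth rate times depth, in g m-2 h-1."""
    z = (np.arange(n) + 0.5) * depth / n                                 # slice midpoints
    r = np.array([local_growth(zi, Cx) for zi in z]) * (Cx * 1000.0)     # g m-3 h-1
    return r.mean() * depth

# Scan biomass concentration to locate the optimum for this assumed spectrum
for Cx in np.arange(0.5, 4.01, 0.5):
    print(f"Cx = {Cx:.1f} g/L -> {areal_productivity(Cx) * 24:.1f} g m-2 d-1")
```

With these invented parameters the weakly absorbed middle band delivers most of the light reaching the deeper culture layers, which mirrors, only qualitatively, the advantage the study reports for yellow light.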
31,454 | Potential medicinal plants for progressive macular hypomelanosis | Progressive macular hypomelanosis was described by Halder as a disease identified by its symmetrically distributed hypopigmented spots mostly found on the trunk and back. These white patches on the skin occur due to a decreased production of melanin (melanogenesis) in the skin. Westerhof et al. hypothesised that the decreased melanin production in progressive macular hypomelanosis is caused by an inhibitory factor secreted by Propionibacterium acnes. The assumption that P. acnes is the pathogenic factor in PMH was also supported by a study conducted by Relyveld et al., which showed that when both antibacterial and anti-inflammatory treatments were combined with ultraviolet irradiation, the antibacterial treatment was significantly superior in decreasing the hypopigmented lesions in PMH patients. Therefore, eliminating P. acnes with topical antibacterial therapy, such as in acne, could improve re-pigmentation in patients with PMH. Plants are known for their antibacterial activity against several Gram-positive bacteria, or more specifically against P. acnes; therefore, plant extracts could be a possible alternative treatment to antibiotics. Some pharmaceutical drugs such as isotretinoin and benzoyl peroxide, used to treat acne, have extensive side-effects. Furthermore, P. acnes has acquired resistance mediated by bacterial enzymes towards certain pharmaceuticals and antibiotics, such as erythromycin and tetracycline, resulting in the inactivation of the antibiotics. Antibiotic efficiency towards bacterial resistance may be enhanced through structural changes to the aminoglycosides. Zhang et al. investigated the use of antibiotics in combination with other clinically used antibiotics and found that using the combination of antibiotics is a common practice in the treatment of bacterial infections. Combination of antibiotics with other active compounds or plant extracts may lead to a potential enhancement of the overall efficacy of the treatment, thereby reducing the dose of antibiotics or reducing the likelihood of bacteria developing drug resistance. The antibacterial activity of plant extracts, either alone or in combination with the known drug tetracycline, together with the plant extract's ability to stimulate melanogenesis, could possibly lead to a treatment that not only inhibits the bacterial growth, but also accelerates the production of melanin, decreasing the time for the lesions to fade. The selection of plants for the present study was based on the plants' traditional uses. Equisetum ramosissimum Desf. subsp. ramosissimum is traditionally used for its antibacterial potential and against skin infections. The Euclea genus, Combretum molle R. Br. ex G. Don and Momordica balsamina L. have been traditionally used for skin diseases. The dried herb form of Tephrosia purpurea L. Pers. subsp. leptostachya Brummitt var. pubescens Baker was reported for its effectiveness in the treatment of boils and pimples, which are caused by Gram-positive bacteria. Terminalia prunioides M.A. Lawson has many traditional applications, including treating bacterial infections, bilharzia and skin diseases. Crotalaria sp. and Leucas sp.
are traditionally used for skin diseases by the Indians, by means of using the powder of the leaves and root bark to make a paste, which is then applied to treat skin diseases. Other plant extracts chosen based on their traditional uses for skin diseases and wound healing were Ficus glumosa Delile, Ficus lutea Vahl, Ficus sur Forssk., Pelargonium reniforme Curtis, Pelargonium sidoides DC., and Rapanea melanophloeos L. Mez. Ficus religiosa L., Hypericum revolutum Vahl subsp. revolutum and Withania somnifera L. Dunal have been reported for their use in leukoderma, another hypopigmentary disease. The only current treatment available for progressive macular hypomelanosis is the combination of antibiotics with UV radiation; unfortunately, there were cases observed where the patients' white macules reoccurred after some time. Additionally, UV radiation used together with the antibiotics provided some risks, as it increases the possibility of skin cancer and causes premature ageing of the skin, inflammation due to damaged keratinocytes, DNA breakage and the depletion of antioxidants in the cell or the production of reactive oxygen species. For that reason, the objective of the current study was to identify plant extracts that could prevent or inhibit P. acnes, increase the monophenolase activity of tyrosinase and induce melanin production in cultured mouse melanocytes. Four compounds commonly found in most of the plant extracts investigated in the present study, and previously shown to induce melanin production, namely coumarin, quercetin, withaferin and withanone, were docked into the active site of the tyrosinase enzyme to determine the interaction. The lead plant extracts were also evaluated for their antibacterial activity when combined with tetracycline, to optimise the concentrations of the plant and drug necessary to have an optimal effect and to counteract antibiotic resistance. Many phytochemicals have been reported for their antibacterial activity and their inducing effect on melanin production. Therefore, the present study identified the major phytochemical groups present in the selected plants. Tetracycline, theophylline, PrestoBlue, 2,2-diphenyl-1-picrylhydrazyl and vitamin C were obtained from Sigma-Aldrich. Nutrient broth and cow brain and heart agar were purchased from Merck SA Ltd. P. acnes was purchased from Anatech Company South Africa. The cell culture reagents, equipment and the B16-F10 mouse melanocyte cell line were purchased from Highveld Biological, Labotech and The Scientific Group. The aerial parts of eight plants were collected from Lillydale village in the Mpumalanga province of South Africa, and ten plants were collected from the Manie van der Schijff Botanical Garden at the University of Pretoria before noon. The leaves and twigs were collected as they are mostly used in the traditional preparations and due to their sustainability, except for W. somnifera, where the fruit of the plant was also collected. The fruit of W. somnifera is used in traditional preparations for vitiligo and to improve the texture and colour of the human skin. A voucher herbarium specimen number of each plant given by the H.G.W.J.
Schweickerdt Herbarium is depicted in Table 1. Alcoholic extracts of each plant material were prepared by soaking the plant material in ethanol for 48 h, after which the extracts were filtered, dried and stored at 4 °C until further usage. Ethanol was chosen as the extraction solvent due to its acceptability by the pharmaceutical industries. The extract preparation of the plant extracts was done as described by Sharma et al. The major phytochemical groups present in the eighteen plant extracts, at a concentration of 150,000.00 μg/ml, were determined as specified by Mushtaq et al. The presence of tannins was determined through the addition of ferric chloride to the extracts and the observation of a brown precipitate. The formation of a yellow precipitate indicated the presence of alkaloids after HCl and Dragendorff reagent were added to the ethanol extracts re-dissolved in methanol. Ethanol extracts were re-dissolved in water and formed a froth in the presence of saponins after the extracts were shaken. Cardiac glycosides were determined through the addition of glacial acetic acid, ferric chloride and concentrated H2SO4. The formation of a brown ring at the interface indicated the presence of cardiac glycosides. Chloroform together with glacial acetic acid and concentrated H2SO4 was added to the extract dissolved in water to determine the presence of terpenes, which were indicated by a reddish brown interface. The appearance of a magenta red colour, after the addition of concentrated HCl and magnesium turnings, indicated the presence of flavonoids. Phenolics were determined through the addition of ferric chloride and identified by the colour change to blue or green. The ethanol plant extracts were tested against P. acnes by determining the minimum inhibitory concentration (MIC) values obtained through a broth microdilution method. Bacterial cultures were grown on cow's brain and heart agar and incubated at 37 °C for 120 h; thereafter, the cultures were sub-cultured on cow's brain and heart agar and incubated at 37 °C for 72 h under anaerobic conditions. The sub-cultured bacteria were suspended in nutrient broth after incubation, and the bacterial concentration was adjusted to 0.50 McFarland standard turbidity with an absorbance of 0.13 at 600 nm. A stock solution consisting of the plant extract and 900 μl ddH2O was prepared. One hundred microlitres of the samples and the positive control tetracycline were added to the first wells of a sterile 96-well plate, already containing 100 μl broth. Threefold serial dilutions were made in broth to give concentrations of 500.00–3.90 and 50.00–0.30 μg/ml for the plant extracts and the positive control, respectively. The bacterial suspension was added to all the wells. The wells with 2.5% DMSO and bacterial suspension without samples served as the solvent and negative controls, respectively. The plates were incubated at 37 °C for 72 h in an anaerobic environment. The MIC was visually determined after the addition of PrestoBlue. The MIC was defined as the lowest concentration that inhibited bacterial growth. The cell culture was prepared as described by Lall et al., 2015, with a few modifications. The mouse melanocyte cells were cultured in Minimum Essential Eagle's Medium, containing 50.00 μg/ml gentamicin instead of 10.00 μg/ml streptomycin. B16-F10 cells were seeded into a 96-well plate. The cell viability assay was conducted according to Sharma et al., 2014. The positive control and the plant extracts were added to the cells, which were incubated again at 37 °C for 72 h.
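As a brief aside on the data analysis applied to such viability readings: the study used GraphPad Prism's sigmoidal dose-response fit to obtain 50% inhibitory or effective concentrations, as noted further on. The Python sketch below is only an equivalent-in-spirit four-parameter logistic fit, and the concentration and viability values in it are invented placeholders, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch of a sigmoidal dose-response (four-parameter logistic) fit.
# The numbers below are invented placeholders for illustration only.
conc = np.array([3.9, 7.8, 15.6, 31.25, 62.5, 125.0, 250.0, 500.0])     # ug/ml
viability = np.array([98.0, 95.0, 90.0, 80.0, 62.0, 40.0, 22.0, 10.0])  # % of untreated control

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic (Hill) curve decreasing from top to bottom."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

p0 = [0.0, 100.0, 50.0, 1.0]   # rough starting guesses: bottom, top, EC50, Hill slope
params, _ = curve_fit(four_pl, conc, viability, p0=p0, maxfev=10000)
bottom, top, ec50, hill = params
print(f"Estimated EC50 ~ {ec50:.1f} ug/ml (Hill slope {hill:.2f})")
```

The fitted ec50 parameter corresponds to the EC50/IC50 values reported in the study; the same kind of fit can, in principle, be applied to the tyrosinase inhibition and melanin dose-response data.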
Following incubation, 50 μl of XTT (sodium 3′-[1-(phenylaminocarbonyl)-3,4-tetrazolium]-bis(4-methoxy-6-nitro)benzene sulfonic acid hydrate) reagent was added to the wells and incubated for 3 h, after which the optical densities of the wells were measured at 450 nm using a BioTek PowerWave XS multi-well reader. The cell survival rate was assessed by comparing the absorbance of the cells with the plant samples to the control. The statistical program GraphPad Prism 4 was used to analyse the 50% inhibitory concentration of the plant extracts. A protocol similar to that previously published by Wangthong et al., 2007, was followed. Extracts were dissolved in dimethyl sulfoxide to a final concentration of 20,000 μg/ml. This treatment solution was then diluted to 300.00 μg/ml in 50 mM potassium phosphate buffer. Seventy microlitres of each solution of different concentrations was combined with 30 μl of tyrosinase (monophenolase activity) in triplicate in 96-well microtitre plates. After incubation at room temperature for 5 min, 110 μl of substrate was added to each well. The optical densities of the wells were determined at 492 nm over a period of 30 min at room temperature with the BioTek PowerWave XS multi-well plate reader. The concentration necessary to inhibit 50% of the enzyme activity was determined by using GraphPad Prism software. The amount of melanin produced in B16-F10 melanoma cells, after the treatment with the different plant extracts, was determined by following the method of Hill. Briefly, cultured murine B16-F10 melanocytes were trypsinized. Cells were inoculated into 24-well plates using a pipette and incubated for 24 h at 37 °C in the CO2 incubator. After a 24-hour incubation, 100 μl of each sample solution was added to each well in duplicate, and the 24-well plate was incubated for 3 days at 37 °C in the CO2 incubator. The test samples and the positive control (theophylline) were dissolved in DMSO. The final concentration of DMSO was 0.1%. In the control group, the final DMSO concentration was used instead of the sample solution. After incubation, the cultured medium was removed by a pipette and assayed for extracellular melanin as follows: the cultured medium was centrifuged to give a supernatant. One millilitre of a mixture of 0.40 M HEPES buffer and EtOH was added to 1 ml of the supernatant. The OD at 475 nm of the resulting solution was measured, and the amount of extracellular melanin was determined. The remaining melanoma cells were digested by the addition of 400 μl of 1 N NaOH, washed with 100 μl of CMF-D-PBS and trypsinized, and then left standing for 16 h at room temperature. The OD at 475 nm of the resulting solution was measured, and the amount of intracellular melanin was determined. A melanin standard curve was obtained by measuring the optical density of melanin at a wide range of concentrations and was used to determine the melanin produced intracellularly and extracellularly. A linear curve that perfectly fits the captured data was obtained on the basis of Beer's law. Molecular docking was performed using the GOLD program. It uses a genetic algorithm that considers ligand conformational flexibility and partial protein flexibility, i.e.
side chain residues. The default docking parameters were employed for the docking study. This includes 100,000 genetic operations on a population size of 100 individuals and a mutation rate of 95. The crystal structure of mushroom tyrosinase was taken from the Protein Data Bank. It has a crystal structure resolution of 2.78 Å and contained an inhibitor (tropolone) and two Cu2+ atoms in the active site. The structures of the small compounds were sketched using ChemDraw 3D and minimized considering an RMSD cut-off of 0.10 Å. The docking protocol was set by extracting and re-docking tropolone in the tyrosinase crystal structure with an RMSD < 1.00 Å. This was followed by docking of all compounds in the active site, defined as a 6 Å region around the co-crystal ligand in the tyrosinase protein. Furthermore, all compounds were evaluated for possible molecular interactions with tyrosinase active site residues using the PyMOL Molecular Graphics System. The results obtained in the respective experiments were analysed with GraphPad Prism to obtain the effective concentrations derived from a sigmoidal dose response curve. Each experiment was conducted in triplicate and repeated at least three times. Excel was used to generate the isobolograms, and the combination index (CI) values were determined by CompuSyn software using the Chou-Talalay method. The major groups of secondary compounds were determined in the eighteen plants selected. C. molle, Euclea crispa, F. sur, R. melanophloeos and W. somnifera contained most of the phytochemicals tested. Leucas martinicensis was not found to contain any phytochemicals, which could be due to too low a concentration of compounds. The phytochemical groups with many compounds previously identified for their antibacterial activity are alkaloids, terpenes, flavonoids, tannins and phenolics. The extracts containing most of the aforementioned phytochemicals were C. molle and F. sur. However, H. revolutum and W. somnifera showed the best bacterial inhibition. H. revolutum and W. somnifera had minimum inhibitory concentrations of 62.50 μg/ml and 31.25 μg/ml for P. acnes strains 11827 and 6919, respectively. A possible explanation for H. revolutum's effective bacterial inhibition could be the presence of acylphloroglucinols, which are known for their bactericidal effect against honeybee pathogens. The concentration at which H. revolutum showed antibacterial activity was similar to the concentration at which the extract resulted in 50% viable cells in the cytotoxicity assay. The antibacterial activity of xanthones, also present in H. revolutum, against other Gram-positive bacteria has been published. Withanolide isolated from W. somnifera, which is also responsible for W. somnifera cytotoxicity, has been shown to have bactericidal activity. However, W. somnifera only showed antibacterial activity at concentrations higher than the IC50 value obtained in the cytotoxicity assay. The W. somnifera fruit extract exhibited a higher MIC value than the W. somnifera twigs and leaves extract. The reported isolation of withanolide from W. somnifera included the whole plant, but it was not specified if it was during fruiting season; therefore, the concentration of withanolides in the fruit may have been less or even absent. The minimum inhibitory concentration of the positive drug control was determined to be 0.78 and 0.39 μg/ml for the P.
acnes strains 11827 and 6919, respectively. Pretorius mentioned that “antibacterial, antifungal and antiviral properties have been associated with individual or collective groups of flavonoids in the past”; the potency is, however, dependent on the concentration of flavonoids present as well as the extraction solvent. Although P. acnes strains 11827 and 6919 belong to the same serotype, namely group I, they belong to two different biotype groups: B1 and B3, respectively. Both strains 11827 and 6919 contain galactose in their cell walls. There are five different biotypes for P. acnes strains, determined by the fermentation of ribose, erythritol and sorbitol. P. acnes strain 11827 ferments ribose, erythritol and sorbitol, while strain 6919 ferments ribose and sorbitol, but not erythritol. Undoubtedly, different responses from the bacteria towards the treatments were expected. Several factors determine the sensitivity of bacteria, such as the experimental conditions, the growing conditions and density of the bacteria, as well as their sensitivity towards reduction in cell numbers. The plant extracts that showed insignificant or no antibacterial activity were excluded from the combinational study with tetracycline. All the plant extracts showed a significant decrease in minimum inhibitory concentrations when combined with tetracycline, but the initial MICs of tetracycline and the plant extracts needed to be taken into consideration to evaluate whether the combination had a synergistic or merely an additive effect. When the antibacterial activity of the compounds tested together is equal to the sum of their separate antibacterial activities, there is no synergy between the compounds, and this is known as an additive effect. When the MIC of the combination of the two compounds is better than that of the compounds alone, it is known as synergy. Antagonism occurs when the two compounds, together, nullify each other's activity; in other words, the activity is less in combination than for the two compounds separately. Once the most active ratios had been identified, the combination index was determined to ensure statistical verification and to quantitatively describe the synergy between the plant extract and tetracycline. H. revolutum showed the best antibacterial activity with the lowest concentration of tetracycline required. The MIC of H. revolutum decreased from 62.50 μg/ml to 7.03 μg/ml and from 31.25 μg/ml to 7.03 μg/ml for P. acnes strains 11827 and 6919, respectively. The MIC of the positive control decreased from 0.72 μg/ml and 0.39 μg/ml for P. acnes strains 11827 and 6919, respectively, to 0.078 μg/ml. C. molle, E. crispa, H. revolutum, Momordica balsamina, Tephrosia purpurea and W. somnifera showed a significant drop in the MIC values for P. acnes strain 6919, and the concentration of tetracycline was reduced from 0.39 μg/ml to 0.16 μg/ml, 0.16 μg/ml, 0.078 μg/ml, 156 μg/ml, 0.16 μg/ml, and 0.23 μg/ml, respectively. Ultimately, only H.
revolutum showed the most significant synergy. Previous studies have shown that there might be a correlation between the cytotoxicity and the antibacterial activity of a compound, as was the case with fluoroquinolones, a class of broad spectrum antibiotics. Cytotoxicity is one of the first and foremost steps in finding an active plant extract, as the cytotoxicity of the plant not only determines the concentration safe to use in treatments, but also narrows the range of concentrations necessary to test in experimental analysis, as a hypothetical outcome could be predicted. Cytotoxicity is expressed as the effective concentration of a compound at which only 50% of the cells remain viable, known as the EC50 value. The higher the EC50 value, the less toxic the plant extract, which is therefore recommended as safe to use in treatments. The EC50 values of C. molle, E. crispa, F. lutea, F. sur and P. reniforme were higher than their minimum inhibitory concentrations for P. acnes; therefore, the active concentrations of the aforementioned plants are safe to use in treatments to be performed on mouse melanocytes. The EC50 values obtained for C. lanceolata, E. ramosissimum, F. glumosa, F. religiosa, M. balsamina, P. sidoides, R. melanophloeos and W. somnifera were lower than their MICs for P. acnes, and these extracts are, therefore, cytotoxic. The aforementioned plants showed melanin inhibitory effects on mouse melanocytes, as discussed in the following sections, which could be due to the cytotoxic effect of these extracts. T. prunioides had an EC50 value of 302.80 μg/ml, which is higher than the concentration required to significantly stimulate melanin production. Consequently, T. prunioides will be safe to use in potential treatments for hypopigmentation. Aside from the bacterial aspect of the PMH disorder, the objective of this study was to determine which plant extracts could stimulate melanin production in B16-F10 mouse melanocytes and also increase the monophenolase activity of tyrosinase. Of the eighteen plant extracts, F. glumosa, F. lutea, F. religiosa and H. revolutum showed no effect on the monophenolase activity of tyrosinase at the highest concentration tested. Both W. somnifera extracts, at a concentration of 100.00 μg/ml, increased the monophenolase activity of tyrosinase. Certain flavonoids, more specifically quercetin, coumarin, kaempferol and certain saponins, inhibit tyrosinase with their 3-hydroxyl group chelating the copper ions at the active site that are necessary for the enzyme's activity. The inhibition of tyrosinase by quercetin and coumarin was verified through molecular docking. Coumarin and quercetin showed good interaction with docking fitness scores of 40.46 and 44.40, respectively. They showed interactions with the two Cu2+ ions, with a van der Waals distance of < 2.70 Å. In addition to this, quercetin was also observed to make H-bond interactions with residues His263 and Met280, which justified its high fitness score. However, if this 3-hydroxyl group is bound and not available to react, for example when dimerization occurs or when other compounds present in the plant extract interact with the 3-hydroxyl group, no inhibition of the enzyme can occur. Additionally, if the quercetin and kaempferol structures contain 3-O-glycosides, no chelation can take place and tyrosinase inhibition does not occur. Although flavonoids, more specifically quercetin, are present in W.
somnifera, this does not imply that it will also inhibit tyrosinase activity. As mentioned earlier, it is important that the 3-hydroxyl group of quercetin is free to chelate the copper ions, which is not necessarily the case with the quercetin present in W. somnifera. In a previous report, it was mentioned that the tyrosinase activity was not affected by the addition of W. somnifera. Conversely, W. somnifera and its active compounds, withaferin and withanone, have been shown to cause skin darkening, which could possibly be due to tyrosinase activation. In the present study, W. somnifera activated the monophenolase activity of tyrosinase. During molecular docking analysis, withaferin and withanone showed poor docking fitness scores of − 66.50 and − 42.42, respectively. This signified unfavourable binding in the active site of tyrosinase. They showed only single interactions with Cu2+, at a large distance, i.e. > 2.70 Å. In addition, none of the residues were observed to be involved in H-bond interactions, which substantiated the poor fitness scores. It was previously reported that the catalytic activity of tyrosinase was also stimulated by 3-hydroxyanthranilic acid, an intermediate compound produced in the kynurenine pathway of tryptophan metabolism. The kynurenine pathway, which also occurs in plants, produces pyridine alkaloids. This pathway might contain an explanation for the activation of tyrosinase by W. somnifera. Another important pathway in plants is the shikimic acid pathway, which is responsible for the biosynthesis of l-phenylalanine and l-tyrosine, crucial aromatic amino acids that form the substrates for the monophenolase activity of tyrosinase. An increase in tyrosinase activity can lead to induced melanin production in melanocytes. The concentration of pigments synthesised by the melanocytes was estimated spectrophotometrically and was derived from a melanin standard curve. The absorbance increased as the concentration of extracellular and intracellular melanin present increased. Only three of the plant extracts tested increased both the extracellular and intracellular melanin concentration. Theophylline was used as the positive control. The ethanol extracts of H. revolutum, T. prunioides and W. somnifera increased the amount of extracellular and intracellular melanin. Cells treated with 500.00 μg/ml H. revolutum and W. somnifera produced approximately 12.00 μg/ml of melanin, which was far less than the 40.00 μg/ml of melanin for the positive control. H. revolutum contains coumarins, which have been shown to induce melanin production. Extracts containing withanone, as is the case with W. somnifera, have been shown to increase hair melanin in clinical trials conducted by Bone and Morgan in 1996, as published by Widodo et al. Withaferin A and quercetin, both found in W. somnifera, stimulated melanin dispersion mediated by cyclic AMP, which leads to skin darkening. However, both W. somnifera and H. revolutum stimulated melanin production only at concentrations higher than the EC50 values obtained for their cytotoxicity. The stimulation of melanin production could be due to a defence mechanism against the cytotoxic effect of the plant extracts. Melanin is known for its radical scavenging properties. Therefore, if the cytotoxicity of the extracts is due to the presence of reactive oxygen species, then melanin production would be stimulated and the melanin particles would still be detected through spectrophotometry after the cells have broken down. Cells treated with 500.00 μg/ml T.
prunioides produced approximately 150.00 μg/ml of melanin, which is a much higher concentration than the melanin produced in the positive control.No active compounds which stimulate melanogenesis have been isolated from T. prunioides, therefore the activation of melanin production through treatment with T. prunioides is reported here for the first time.The cells treated with R. melanophloeos did not produce any extracellular melanin, but did, however, show a high absorbance for intracellular melanin.The high absorbance for intracellular melanin could be ascribed to the benzoquinones found in R. melanophloeos.γ-L-Glutaminyl-3,4-benzoquinone is a precursor in the production of melanin, therefore, the absorbance reading could have included both the melanin produced and the γ-L-glutaminyl-3,4-benzoquinone present.Consequently, the absorbance gave a false positive; possibly because γ-L-glutaminyl-3,4-benzoquinone was mostly present, there was no mature melanin which could move to the extracellular space.Most of the plants had an insignificant effect on melanin production or inhibited the production of melanin in the melanocytes.As a result of the aforementioned process, it is noticeable that in most graphs, such as for E. crispa, C. molle, M. balsamina, T. purpurea, C. lanceolata and L. martinicensis, the intracellular melanin decreased with the increase of the plant concentration.Together with the decrease in intracellular melanin, the extracellular melanin increased with the increase in extract concentration for E. crispa, C. molle and M. balsamina.The increase in extracellular melanin could possibly mean that the extract does not only stimulate melanin production, but at higher plant concentrations speeds up the melanogenesis process, leading to more melanin being transferred.P. acnes was identified by Westerhof et al., 2004 as the causative bacterium of progressive macular hypomelanosis.The antibiotics currently used for PMH provide short term solutions, but through the investigation of antibacterial plants, an alternative – long term – solution may be identified.Both H. revolutum and W. somnifera are traditionally used for the hypopigmented disorder, Leukoderma, and for skin infections.T. prunioides is traditionally used for bacterial infections and skin diseases.The aforementioned traditional uses led to the investigation of the selected plants for PMH.Although in vitro studies act mostly as preliminary studies, the results obtained provide a good indication of the plant extract's activity and could guide any future in vivo studies, which would strengthen the findings on active plants for use against PMH.H. revolutum and W. somnifera were the most potent extracts against P. acnes when combined with tetracycline.The significant synergy between Hypericum revolutum and tetracycline decreases the chance of the strains becoming resistant and lowered the effective concentrations of H. revolutum and tetracycline below their cytotoxic concentrations, potentially making the combination safer to use.H. revolutum, T. prunioides and W.
somnifera were the only plant extracts that increased the monophenolase activity of tyrosinase.During molecular docking analysis, it was concluded that small molecules like coumarin and quercetin have the potential to inhibit the tyrosinase enzyme as they were buried deep and interacted with Cu2+ ions.However, larger molecules, such as withaferin and withanone, are not expected to inhibit the enzyme.Similarly, dimerization of single small molecules into larger molecules proved to render them inactive against the tyrosinase enzyme.W. somnifera also showed an increase in the amount of extracellular and intracellular melanin.T. prunioides stimulated melanin production even at the lowest concentration tested and was therefore found to be active at concentrations lower than those at which it was cytotoxic to the cells.As a result, H. revolutum, T. prunioides and W. somnifera have been identified as possible plants for PMH and further study would include using chromatography for the identification of the bioactive compounds, which could possibly be incorporated into a formulation and used for progressive macular hypomelanosis.The National Research Foundation provided funding.The University of Pretoria provided the research facilities.Stefan Winterboer, who helped with the collection of the plants at Lillydale village in Mpumalanga, and James Malhore, the traditional healing practitioner who identified the plants at Lillydale village, are acknowledged. | Progressive macular hypomelanosis (PMH) is a hypopigmentation disorder caused by the bacterium identified as Propionibacterium acnes. The current treatments for PMH are antibiotics together with ultraviolet (UV) radiation; however, UV radiation is not a recommended method to increase melanin production. Currently, there are no known plants used traditionally or medicinally for PMH. The objective of this study was to find plants that could stimulate tyrosinase activity, induce melanin production and inhibit P. acnes' growth. Seventeen ethanol plant extracts, used traditionally in Africa for skin diseases, were screened for their antibacterial activity against P. acnes, their effect on the monophenolase activity of tyrosinase and their cytotoxicity and stimulation of melanin production on mouse melanocytes (B16-F10). Hypericum revolutum Vahl subsp. revolutum (Hypericaceae) and Withania somnifera L. Dunal (Solanaceae) (twigs and leaves), combined with the known drug tetracycline, exhibited significant antibacterial activity against P. acnes, with the minimum inhibitory concentration ranging from 5.47 μg/ml to 14.06 μg/ml. The combination of a known drug with other antibacterial compounds not only decreases the concentration needed to inhibit bacterial growth, but also decreases the chances of bacterial resistance. W. somnifera was the only plant extract that resulted in an increase in the monophenolase activity of tyrosinase. Four compounds typically present in plant extracts, namely coumarin, quercetin, withaferin and withanone, were docked into the active site of the tyrosinase enzyme to determine the interaction with active site residues. Mouse melanocytes (B16-F10) treated with H. revolutum, W. somnifera (leaves) and Terminalia prunioides showed an increase in total melanin content as compared to untreated cells at 12 μg/ml, 12 μg/ml and 150 μg/ml respectively. Considering both the antibacterial activity and the stimulatory effect of the treatment on melanin production, H. revolutum and W.
somnifera (leaves) could be considered as potential plants for further studies for PMH. |
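The entry above screens extracts by comparing the EC50 obtained on B16-F10 mouse melanocytes with the MIC against P. acnes, treating an extract's active concentration as usable when it lies below the cytotoxic concentration. The sketch below is a minimal illustration of that comparison and of a derived EC50/MIC ratio; the "selectivity index" framing, the extract names and all numerical values are placeholders for illustration only, not results or terminology from the study.

    # Minimal sketch of the EC50-vs-MIC screen described in the entry above.
    # All numbers are illustrative placeholders, not values from the study.

    def classify(ec50_ug_ml: float, mic_ug_ml: float) -> str:
        """Compare cytotoxicity (EC50 on melanocytes) with antibacterial potency (MIC)."""
        si = ec50_ug_ml / mic_ug_ml  # ratio > 1 means the active concentration is below the toxic level
        if si > 1.0:
            return f"active concentration below cytotoxic level (EC50/MIC = {si:.1f})"
        return f"cytotoxic at its active concentration (EC50/MIC = {si:.1f})"

    # Hypothetical extracts with placeholder EC50 / MIC values in ug/ml.
    extracts = {
        "Extract A": (302.8, 50.0),
        "Extract B": (20.0, 60.0),
    }
    for name, (ec50, mic) in extracts.items():
        print(name, "->", classify(ec50, mic))

An EC50/MIC ratio greater than 1 simply restates the EC50 > MIC condition used in the entry above.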
31,455 | Source code analysis dataset | The code and comment pairs are a compilation of code blocks and their related comments.Doxygen successfully ran on 106,304 different GitHub projects.A total of 16,115,540 code-comment pairs were obtained by running Doxygen on C, C++, Java, and Python projects.The source code in these pairs can be of various granularities, so there are potentially many code-comment pairs per individual source code file.The total count is over each individual code-comment pair, not over the number of contributing source code files.These data provide an association between source code and a description of that code.The data directory contains one directory for each project downloaded from GitHub.These project directories are named with the GraphQL ID from GitHub's GraphQL API.In each of these GraphQL-ID labeled directories, there is a license.txt, a url.txt, and a derivatives directory.The license.txt contains the license for the original project, the url.txt contains a link to the original project on GitHub, and the derivatives directory contains the output of running Doxygen on the project.The Doxygen output is a json file, structured as a dictionary with a "contents" field, where the value of that field is a list of lists containing 3 elements each.A mock example of that structure is given in the hedged parsing sketch that follows this entry.The "path" is a filepath relative to the original project from which the code and comment were obtained.The "snippet" is the actual body of the source code.The "comment" is the corresponding comment.For convenience, there is also an initialize.py python script that iterates through all of the json files in the data directory and stores them in an SQLite database called "all_data.db".The license.txt and url.txt files are necessary to fulfill licensing requirements for redistribution.We used the original license filenames, so they are not always named "license.txt", but always contain "license", "licence", or "copy" in the filename.The code and build artifact pairs are a compilation of source code projects and their related build outputs.The build process, which consisted of running the make command, successfully ran on 3049 different GitHub projects.Over 30,000 build outputs were produced from C and C++ projects.The build outputs are the results of running a particular project's make command.These derivatives include executables, object files, including libraries, and other project-specific build artifacts.The output was accepted as long as the make command completed without error; thus, there is no guarantee that every project will contain every type of artifact.Furthermore, some make files perform cleanup of object files after generating the final executable; for such projects, the object files will not be available.These data provide an association between source code and the build artifacts of that code.The data directory contains one directory for each project downloaded from GitHub.These project directories are named with the GraphQL ID from GitHub's GraphQL API.In each of these GraphQL-ID labeled directories, there is a license.txt, a url.txt, a source directory, and a derivatives directory.The license.txt contains the license for the original project, the url.txt contains a link to the original project on GitHub, the source directory contains the original code, and the derivatives directory contains the outputs from building the project, which include the previously mentioned files.The code and static analysis dataset is a compilation of source code projects and
their outputs from running the static analysis tool, Infer, on 3170 different C and C++ GitHub projects.These data provide an association between source code and a static analysis of that code.The data directory contains one directory for each project downloaded from GitHub.These project directories are named with the GraphQL ID from GitHub's GraphQL API.In each of these GraphQL-ID labeled directories, there is a license.txt, a url.txt, a source directory, and a derivatives directory.The license.txt contains the license for the original project, the url.txt contains a link to the original project on GitHub, the source directory contains the original code, and the derivatives directory contains the output of Infer.We designed our data collection using GitHub's GraphQL API to locate projects that satisfied our requirements.The GraphQL API allowed us to functionally encode our requirements to query the tremendous quantity of source code on GitHub.Our main concerns for the data included the ability to freely redistribute modifications or derivatives of the code and a reasonable expectation of quality for each project.To address redistribution, we manually selected 15 acceptable licenses: MIT, Apache-2.0, GPL-2.0, GPL-3.0, BSD-3-Clause, AGPL-3.0, LGPL-3.0, BSD-2-Clause, Unlicense, ISC, MPL-2.0, LGPL-2.1, CC0-1.0, EPL-1.0, and WTFPL.To address code quality, we used GitHub's starring system to set a threshold of 10 or more stars.We chose this threshold empirically, during the process of setting up our project-mining infrastructure, after viewing many repositories with a range of star values.Additionally, we accepted projects from a variety of programming languages, drawn from GitHub's list of popular languages, that have Doxygen plugins.By setting the license, quality, and language parameters, we were able to receive project URLs from GraphQL.The query string used is shown in the Appendix; an illustrative query in the same spirit is sketched after this entry.Using the project URLs returned from the GraphQL queries, we ran curl commands in parallel to download the master branch of each GitHub repository.We terminated the downloads after 3 weeks, resulting in approximately 8 terabytes of data.After all the downloads completed, we ran three utilities to extract data.These processes were run to completion; we did not terminate them early.We used Doxygen to extract code-comment pairs, which ran and finished in a total of four weeks.We used Doxygen version 1.8.11.We modified the "FILE_PATTERNS" variable in the doxyfile configurations to include the following extensions: .c, .cc, .cxx, .cpp, .c++, .h, .hh, .hxx, .hpp, .h, .java, and .py.We did not make any other modifications to the default settings.We used the make command to build the projects, which ran and finished in a total of two weeks.We did not perform any additional dependency resolution beyond what was available inside the individual source code projects.We also did not attempt to modify any compilation options or flags, as those were defined in the individual make files.The target architecture was Ubuntu 16.04.1 x86_64.We allowed the projects to run any of the four compilers: g++ 4:6.3.0–4 amd64, g++ 6.3.0–18 + deb9u1 amd64, gcc 4:6.3.0–4 amd64, and gcc 6.3.0–18 + deb9u1 amd64.We used Infer to obtain a static analysis of the code, which ran and finished in a total of one week.We chose Infer as opposed to other static analyzers due to its recency and popularity amongst large software projects, which is due in part to its scalability.We used Infer version v0.16.0 with the command "infer run -- make".We did not
change any other parameters of the infer tool.The target architecture and potential compilers are the same as the ones used for the project building.After the artifact generation process, we packaged the data into a legally compliant format.For every project, we created a directory that included the original project's license, a link back to the original project, and any source code that was used in the creation of the artifacts we have provided. | The data in this article pair source code with three artifacts from 108,568 projects downloaded from Github that have a redistributable license and at least 10 stars. The first set of pairs connects snippets of source code in C, C++, Java, and Python with their corresponding comments, which are extracted using Doxygen. The second set of pairs connects raw C and C++ source code repositories with the build artifacts of that code, which are obtained by running the make command. The last set of pairs connects raw C and C++ source code repositories with potential code vulnerabilities, which are determined by running the Infer static analyzer. The code and comment pairs can be used for tasks such as predicting comments or creating natural language descriptions of code. The code and build artifact pairs can be used for tasks such as reverse engineering or improving intermediate representations of code from decompiled binaries. The code and static analyzer pairs can be used for tasks such as machine learning approaches to vulnerability discovery. |
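The entry above describes the derivatives JSON (a dictionary whose "contents" value is a list of 3-element lists holding a path, a snippet and a comment) and an initialize.py script that gathers all JSON files into an SQLite database called "all_data.db". The sketch below is a hedged reconstruction of that loading step, not the authors' actual script: the element order, the glob pattern for derivative filenames, and the table and column names are assumptions.

    # Hedged sketch: load the Doxygen-derived JSON files into SQLite.
    # Assumed shape, following the description in the entry above:
    #   {"contents": [["src/example.c", "int add(int a, int b) { return a + b; }", "Adds two integers."], ...]}
    # The element order (path, snippet, comment), the table schema and the file layout are assumptions.
    import json
    import sqlite3
    from pathlib import Path

    DATA_DIR = Path("data")  # one sub-directory per GraphQL project ID (assumed layout)
    conn = sqlite3.connect("all_data.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS pairs (project_id TEXT, path TEXT, snippet TEXT, comment TEXT)"
    )
    for json_file in DATA_DIR.glob("*/derivatives/*.json"):
        project_id = json_file.parent.parent.name  # the GraphQL-ID directory name
        with open(json_file, encoding="utf-8") as fh:
            contents = json.load(fh).get("contents", [])
        conn.executemany(
            "INSERT INTO pairs VALUES (?, ?, ?, ?)",
            [(project_id, path, snippet, comment) for path, snippet, comment in contents],
        )
    conn.commit()
    conn.close()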
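The entry above states that project URLs were obtained through GitHub's GraphQL API using license, star and language criteria, with the actual query string given in the paper's appendix (not reproduced in this entry). The snippet below is only an illustrative query in the same spirit: the requested fields, the pagination logic and the GITHUB_TOKEN environment variable are assumptions, and the search qualifiers shown are generic GitHub search syntax rather than the authors' exact filter.

    # Illustrative sketch of screening GitHub repositories with the GraphQL v4 API,
    # mirroring the criteria above (>= 10 stars, an accepted license, a given language).
    # This is NOT the authors' appendix query; token handling and fields are assumptions.
    import os
    import requests

    QUERY = """
    query ($searchQuery: String!, $cursor: String) {
      search(query: $searchQuery, type: REPOSITORY, first: 50, after: $cursor) {
        pageInfo { hasNextPage endCursor }
        nodes {
          ... on Repository { id url licenseInfo { spdxId } }
        }
      }
    }
    """

    def fetch_urls(language: str, license_key: str):
        headers = {"Authorization": f"bearer {os.environ['GITHUB_TOKEN']}"}
        variables = {"searchQuery": f"language:{language} license:{license_key} stars:>=10",
                     "cursor": None}
        urls = []
        while True:
            reply = requests.post("https://api.github.com/graphql",
                                  json={"query": QUERY, "variables": variables},
                                  headers=headers).json()
            page = reply["data"]["search"]
            urls.extend(node["url"] for node in page["nodes"])
            if not page["pageInfo"]["hasNextPage"]:
                return urls
            variables["cursor"] = page["pageInfo"]["endCursor"]

    # Example: repositories in C under the MIT license with at least 10 stars.
    # print(len(fetch_urls("c", "mit")))

GitHub's search endpoint caps the number of results returned per query, so a full harvest of the kind described above would presumably split the search, for example by license and language, in line with the selection criteria quoted in the entry.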
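The three extraction steps described above (Doxygen for code-comment pairs, make for build artifacts, and "infer run -- make" for static analysis) can be driven per project with a small wrapper. The sketch below only illustrates that sequencing under the stated acceptance rule for make; the directory handling, the presence of a prepared Doxyfile and the error policy are assumptions, not the authors' tooling.

    # Hedged sketch of sequencing the three extraction utilities for one project directory.
    # Assumes a Doxyfile (with the FILE_PATTERNS extensions listed above) already exists there.
    import subprocess
    from pathlib import Path

    def process_project(project_dir: Path) -> None:
        # 1) Code-comment pairs via Doxygen.
        subprocess.run(["doxygen", "Doxyfile"], cwd=project_dir, check=False)
        # 2) Build artifacts: the entry above accepts output only when make exits without error.
        build = subprocess.run(["make"], cwd=project_dir, check=False)
        print(project_dir.name, "make succeeded:", build.returncode == 0)
        # 3) Static analysis with Infer, using the command quoted in the entry above.
        subprocess.run(["infer", "run", "--", "make"], cwd=project_dir, check=False)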
31,456 | The prospectivity of a potential shale gas play: An example from the southern Pennine Basin (central England, UK) | Between 1872 and 1876 two boreholes drilled in Netherfield were found to not only contain thick gypsum deposits but also inflammable gases from ‘petroleum bearing strata’.These findings were the result of the Sub-Wealden Exploration, an academic effort to complete the knowledge of the range of Palaeozoic rocks underneath the Weald.For more than a century, very little research was directed towards further exploring the prospectivity of this energy resource.Then, following the start of shale gas exploration in the United States in 1982, interest of the research community in the UK was reinvigorated with efforts focused on Carboniferous organic-rich shales.More recently, the Namurian Bowland Shale Formation and its lateral equivalents including the Edale Shale and the Morridge formations, were identified as the most promising targets.These British Namurian-aged shales were deposited in a mosaic of interlinked basins in a proximal position with emergent areas to the north and the south serving as the main sources of terrigenous material.During the Mississippian, the Pennine Basin was located in an equatorial position, proximal to Laurussia with Gondwana stretching to the South Pole, bordered by two emerging land masses.Because of its position, ice sheets were likely to persist on Gondwana which influenced the sedimentation history in the patchwork of sub-basins forming the Pennine Basin.Here we present a high resolution, multiproxy dataset to characterize the potentially prospective intervals of the Mississippian successions in the southern part of the Pennine Basin.We present geochemical, palynological and lithological-mineralogical results from two cored boreholes: Carsington Reconstruction Borehole C3 and Karenight 1.The former was drilled in the Widmerpool Gulf while the latter originates close to the Derbyshire High–Edale Gulf boundary.In both boreholes we studied an interval containing the E2a marine band, which is part of the late Serpukhovian.This interval was chosen because of the availability of pre-existing data on this marine band in the Widmerpool Gulf.The Karenight 1 core is one of very few well-preserved cores to prove the same interval from the Edale Gulf, a neighbouring sub-basin of the Widmerpool Gulf in the Pennine Basin.The aim of the study is to describe the kerogen content of the Namurian mudstones in the Widmerpool–Edale Gulf area; establish the amount of variability in the kerogen content of the mudstones within a single marine band; and evaluate the prospectivity of the mudstones within the E2a marine band in the Widmerpool–Edale Gulf area.The Pennines in the central UK comprise a dissected plateau composed of an asymmetrical anticline with Carboniferous strata gently dipping to the east and more steeply to the west.During the Chadian–early Arundian and late Asbian–Brigantian, a SSE–NNW crustal extension caused a fragmentation of Early Palaeozoic deposits giving rise to a rifted topography of fault-bounded blocks interspersed with developing grabens and half-grabens including the Widmerpool and Edale Gulf.By the end of the Visean, rift-associated extension in the area was replaced by a regime of thermal sag and an epicontinental sea transgressed the Pennine Basin leaving only the Southern Uplands and the Wales-Brabant High emergent.The Namurian was a period of prograding deltas in the Pennine Basin when the basin topography, created during the 
Visean, was gradually infilled.The sediments deposited during this stage correspond broadly to the Millstone Grit Group.Regular marine incursions punctuated the deltaic successions giving rise to a remarkable cyclicity: shale and/or dark-coloured limestone rich in marine fossils, typically goniatites, are overlain by shale and sandstone with fewer fossils.In the 13 million year span of the Namurian around 60 marine bands occur, 46 characterized by the occurrence of a key goniatite species.The average duration of a marine band is estimated at 180 kyr, although considerable variation between the marine bands may occur.For the Pendleian–Arnsbergian interval, marine band periodicities of 111 kyr have been estimated, and they are linked to an eccentricity forcing of glacio-eustatic sea level fluctuations.Superimposed on these minor cycles, eleven longer duration, 1.1–1.35 myr, mesothems have been identified.These mesothems are interpreted as longer term marine transgressions, characterized by the appearance of new ammonoid genera and each one capped by an extensive ammonoid band.In a sequence stratigraphic context, the marine bands can be thought of as parasequences representing the maximum flooding surfaces, while the mesothems correspond to sequences.The Visean–early Namurian Craven Group from the Widmerpool Gulf comprises the Long Eaton, Lockington Limestone and Widmerpool formations.The overlying Bowland Shale Formation, formerly termed the Edale Shales, commences at the base of the Emstites leion Marine Band and has a highly diachronous upper boundary with the feldspathic sandstones typical of the Millstone Grit Group.The Bowland Shale Formation is a dark grey calcareous mudstone with thin turbiditic sandstones which in the southern part of the Pennine Basin, close to the northern margin of the Wales-Brabant High, passes over in the shaly mudstones and pale grey protoquartzitic siltstones of the Morridge Formation.The Morridge Formation is an early Namurian equivalent of the Millstone Grit Series and was introduced by Waters et al. 
to reflect the predominantly southern provenance of the terrigeneous material making up the sandstone intervals: coarse quartz-feldspathic sediments derived from Greenland and Fennoscandia make up the terrigeneous component of the Millstone Grit Series while quartz-rich sediments in the sandstone intervals of the Morridge formation originate from the Wales-Brabant High.The late Visean, Asbian and Brigantian, deposits in the Edale Gulf were described by Gutteridge and consist of the Ecton Limestone and Widmerpool Formations.The Namurian successions of the Derbyshire High–Edale Gulf are poorly understood as they are only described from the Alport and Edale Boreholes that were drilled in the late 1930s.During the late Brigantian–early Pendleian, carbonate production waned over the Derbyshire High and up to 356 m of mudstone-dominated successions of the Bowland Shale Formation were deposited ranging in age from Pendleian to Kinderscoutian.The lower part of the Bowland Shale Formation in the Edale Gulf consists of a succession of grey mudstone with thin beds of calcareous quartzose siltstones which may represent the distal delta-slope equivalent of the Morridge Formation.The Carsington Dam Reconstruction Borehole C3 was drilled in 1990 to assess fluid movements and pressures of the reconstructed Carsington Dam following its failure in 1984.In the palaeogeographic reconstruction of the Namurian, it is located close to the centre of the Widmerpool Gulf half graben.In total, 38 m of mudstones, siltstones, sandstones and limestones belonging to the Morridge Formation were cored across the E2a Cravenoceras cowlingense, E2a3 Eumorphoceras yatesae and E2b1 Cravenoceratoides edalensis Marine Bands.The E2a and E2a3 bands are part of the N1 mesothem, while the E2a3–E2b1 boundary forms the transition between the N1 and N2 mesothems.The Karenight 1 borehole was drilled as a mineral exploration borehole by Drilling and Prospecting International in 1973.It was drilled near the northern boundary of the Derbyshire High towards the Edale Gulf.The borehole was drilled to a terminal depth of 428.37 m and was cored below 59.59 m onwards.In the current study we investigated the 234.70–251.89 m interval, consisting of mostly carbonate-cemented mudstone interspersed with limestone and siltstone.This interval comprises the upper part of the Pendleian Substage and the E2 Eumorphoceras zone with the E2b marine band tentatively recognized at 241.90m.In total, the palynofacies composition of 55 samples assessed: 22 from the Carsington Dam Reconstruction C3 Borehole and 33 from the Karenight 1 Borehole.Approximately 5 g of each sample was processed at the British Geological Survey utilizing hydrochloric and hydrofluoric acid to eliminate carbonates and silicates.Samples were spiked with Lycopodium clavatum to enable concentration and flux calculations utilizing the marker grain method.Subsequently, the kerogen fraction was sieved on a 10 μm nylon mesh and was strew-mounted on microslides using Elvacite™.Optical examination was performed using a Nikon Eclipse Ci-L microscope equipped with a Prior H101A Motorized Stage that was controlled by a Prior™ Proscan III unit connected to a PC with the open source microscopy software μManager preinstalled.Per sample, 300 particles were identified following the palynofacies classification of Tyson and using randomly generated slide positions.All slides were scanned for the presence of spores which were recorded separately from the palynofacies analyses.Furthermore, we studied the 
slides using blue light excitation on a Zeiss Universal microscope operating in incident-light excitation mode with a III RS condenser set and the Zeiss filter set 09.We followed the standard recommendations for epifluorescence observations on palynological slides.Images of palynomorphs in transmitted white light were taken with a Nikon DS-Fi3 camera mounted on the Nikon Eclipse Ci-L microscope and the NIS Elements™ microscope imaging software.Thermal maxima values were determined from the highest yield of bound hydrocarbons.The performance of the instrument was checked every 8 samples against the accepted values of the Institut Français du Pétrole standard and instrumental error was S1±0.1 mg HC/g rock, S2 ±0.77 mg HC/g rock, TOC ±0.04% weight, mineral C ± 0.04% weight, Tmax ±1.4 °C.The mineralogy of 17 samples from Carsington DR C3 and 10 samples from Karenight 1 was determined by quantitative X-ray diffraction analysis."Samples were analysed using a PANalytical™ X'Pert Pro series diffractometer equipped with a cobalt-target tube, X'Celerator detector and operated at 45 kV and 40 mA.Whole-rock analysis was carried out on spray-dried, micronised powders which were scanned from 4.5 to 85°2θ at 2.06°2θ/minute."Mineral phases were identified using PANalytical™ X'Pert HighScore Plus version 4.5 software coupled to the latest version of the International Centre for Diffraction Data database.Quantification was achieved by using the Rietveld refinement technique with the same HighScore™ Plus software and reference files from the Inorganic Crystal Structural Database.The clay mineralogy of the samples was determined using a broadly similar approach to that detailed in Kemp et al.Where whole-rock XRD analysis indicated that the samples were composed of substantial amounts of carbonate species, these were removed using a buffered sodium acetate/acetic acid.For this study <2 μm fractions were isolated, oriented mounts prepared and scanned from 2 to 40°2θ at 1.02°2θ/minute after air-drying, ethylene glycol-solvation and heating at 550 °C.Clay mineral species were then identified from their characteristic peak positions and intensities and their reaction to the diagnostic testing program.Further clay mineral characterization and quantitative evaluation was carried out using Newmod II™ software modelling of the glycol-solvated XRD profiles on all the samples.We determined the organic carbon isotope composition from 30 Carsington DR C3 samples and from 57 Karenight 1 samples.13C/12C analyses were performed by combustion in a Costech Elemental Analyser on-line to a VG TripleTrap and Optima dual-inlet mass spectrometer, with δ13COM values calculated to the VPDB scale using a within-run laboratory standards calibrated against NBS-18, NBS-19 and NBS-22.Replicate analysis of well-mixed samples indicated a precision of ± <0.1‰.The palynofacies analysis, Rock-Eval™ and δ13COM and XRD results are presented below and are summarized in the Appendices 1–10.The palynofacies analyses was conducted on samples that were also used for geochemical analyses.Additional samples were collected for XRD analyses.The palynofacies assessment follows the palynological kerogen classification of Tyson.Structureless amorphous organic matter was distinguished from structured material and any residual mineral matter, mainly pyrite, which was not eliminated during sample preparation.The palynological analysis of the 22 Carsington DR C3 samples is summarized in Appendix 1 and Fig. 
5.Structureless organic constituents dominate the kerogen fractions making up on average 86% of the counts with a minimum of 74% and a maximum of 95%.Within this structureless category, heterogeneous AOM with grumose AOM as the most important constituent is the dominant organic category averaging 62% of the counts, with a minimum of 19% and a maximum of 86%.Homogeneous AOM, with gelified organic matter as the dominant constituent, forms on average 25% of the kerogen fraction, with a minimum of 4% and a maximum of 60%.The abundance of homogeneous AOM surpasses the abundance of heterogeneous AOM in four samples: SSK46355, SSK46311, SSK45634 and SSK45616.Within the structured organic material, phytoclasts are the most abundant organic constituent averaging about 8% of the counts with a minimum of 2% and a maximum of 22%.Phytoclasts are especially abundant below the E2a3 Marine Band.Palynomorphs are rare throughout the section averaging 4% with a minimum below 1% in SSK46311 and a maximum of 14% in SSK46351.Sporomorphs are the most important palynomorph type.Lycospora pusilla is the most common identified spore with minor abundances of Cingulizonates bialatus, Granulatisporites granulatus, Savitrisporites nux, Densosporites anulatus and Crassispora kosankei.The palynological analysis of the 33 samples of Karenight 1 is summarized in Appendix 2 and Fig. 6.Structureless organic constituents dominate all Karenight 1 samples, averaging a relative abundance of 88% with a minimum of 72% and a maximum of 97%.Heterogeneous AOM is the dominant category of structureless material with on average a relative abundance of 77%.Homogeneous AOM, mostly composed of gelified matter and AOM in a gelified matrix, has an average relative abundance of 11% with a minimum of 3% recorded in SSK51212 and a maximum of 24% in SSK51202.The relative abundances of homogeneous AOM are noticeably less than the relative abundances of heterogeneous AOM.The average abundance of heterogeneous-homogeneous AOM is 66% with a minimum of 39% in SSK51180 and a maximum of 91% in SSK5121.Phytoclasts comprise the most abundant structured organic constituent with a relative abundance of 7%.The highest phytoclast abundances are reached during the Pendleian.Palynomorphs are very sparse in the Karenight 1 section averaging only 1% including five samples where no palynomorphs were recorded in 300 fields of view.Spores are the most common discrete palynomorphs, with poor preservation causing most to remain unidentified.Lycospora pusilla is again the most common identified spore with minor occurrences of Granulatisporites granulatus, Florinites spp. and Cingulizonates bialatus.Nine Rock-Eval™ parameters (including OI, Tmax, TOCpd, Remnant Carbon (RC) and Pyrolysable Carbon (PC)) for 169 samples are summarized in Appendix 3 and 30 measurements of δ13COM in Appendix 9.In Fig. 5 we show TOCpd, PC, S1, Tmax, HIpd and δ13COM data alongside the sedimentological log.For the E2a zone up to E2a3, TOCpd and PC are relatively low, with TOCpd values averaging 1.95%.Both parameters are higher in the E2a3 and E2b1 marine bands, with TOCpd reaching an average of 3.52% with a maximum of 5.52% at 24.23 m and a minimum of 0.5%, while PC averages 0.77% with a minimum of 0.06% at 20.10 m and a maximum of 1.66% at 25.68 m.The HIpd of the section from E2a to the E2a3 boundary averages 162 mg/g TOCpd, with a minimum of 28 mg/g TOCpd at 34.28 m and a maximum of 428 mg/g TOCpd at 38.77 m.
For the upper part of the studied section HIpd averages 228 mg/g TOCpd with a minimum of 81 mg/g TOCpd at 27.08 m and a maximum of 366 mg/g TOCpd at 26.78 m.The E2a–E2a3 boundary is also coeval with an uphole rise in S1: S1 averages 0.19 mg/g in the lower part and 0.54 mg/g in the upper part.The Tmax remains relatively constant throughout the entire interval averaging 435 °C.Only in the sandstone and siltstone interval around 34 m has a drop in Tmax to a minimum of 366 °C been recorded.The δ13COM data follow the trend of TOCpd and S1 with higher values in the lower part of the section averaging −26.2‰.In the upper part of the section, δ13COM values average −28.0‰.The same nine Rock-Eval™ parameters were determined for the Karenight 1 samples.In Fig. 6 we show the TOCpd, PC, S1, Tmax, HIpd and δ13COM data alongside the sedimentological log.The Rock-Eval™ parameters show a drop around 242 m near the base of the tentative E2b zone.The TOCpd of mudstone in the lower part of the section averages 7.01% with a minimum of 4.32% at 251.61 m and a maximum of 9.29% at 249.22 m, while PC of the mudstones averages 1.42%.Limestone of the lower section has a much lower carbon content than the mudstone.The TOCpd of the three recovered limestone samples averages 0.84%, and the RC averages only 0.17%.The TOCpd of the upper part of the section between 242.80 and 234.77 m is significantly less than the TOCpd of the lower part, averaging 3.94% with a minimum of 1.65% at 242.15 m and a maximum of 7.08% at 236.66 m.The PC values are low with an average of 0.71%, a minimum of 0.20% at 242.15 m and a maximum of 1.75% at 236.66 m.The δ13COM data follow the trend of TOCpd and S1 with lower values in the 251.89–242.80 m interval, averaging −28.4‰ with a minimum of −29.1‰ and a maximum of −26.7‰.In the upper part of the section, between 242.80 and 234.77 m, δ13COM values average −27.6‰ with a minimum of −29.5‰ and a maximum of −25.6‰.The HIpd and Tmax curves follow this trend: from 251.89 to 242.80 m, low and relatively consistent HIpd values (108–260 mg/g TOC) and lower Tmax (424–442 °C); from 242.80 to 234.80 m, low values for HIpd (82–293 mg/g TOC) and Tmax (425–440 °C).The whole-rock and <2 μm clay mineral XRD analyses for 17 samples from Carsington DR C3 are summarized in Appendices 5 and 7.The XRD analyses for 10 samples from Karenight 1 are summarized in Appendices 6 and 8.Results from both boreholes are summarized in the ternary mudstone classification diagram of Fig. 8.Thirteen samples originate from the bottom part of the section which all plot in the siliceous and argillaceous mudstone fields.These samples are generally carbonate-free with only a single sample containing minor amounts of rhodochrosite.Silicates dominate the composition with on average 41%, with a maximum of 83.6% and a minimum of 23.6%.Pyrite forms up to 7.6% of these samples and the oxidation products jarosite and gypsum were also detected.The phyllosilicate/clay mineral assemblages of the E2a samples are dominated by undifferentiated 'mica' species with minor amounts of kaolinite and traces of chlorite.
"Less than 2 μm analyses and Newmod II-modelling confirm this assemblage and subdivided the 'mica' into discrete illite and an R1-ordered I/S containing 80% illite interlayers.The remaining four samples all plot in the siliceous mudstone field.These samples are characterized by a higher carbonate content and a maximum of 24.2% in the lowermost sample.These samples are also pyritic but are less phyllosilicate/clay mineral-rich than the underlying samples.Although the detected clay minerals in the <2 μm fractions are similar to the shales below E2a3, they noticeably contain a lower proportion of kaolinite and chlorite and higher proportions of I/S and illite.The lowermost six samples 251.84–242.98 m interval,plot in the argillaceous mudstone and siliceous mudstone fields.While low, the carbonate content of this interval is higher than in the bottom part of Carsington DR C3 averaging 4.7% with a maximum of 10.7% at 251.84 m.The four samples from the top part of the section are generally richer in carbonates with a maximum of 49.4% at 238.33 m.This sample plots in the mudstone field of Fig. 8.Less than 2 μm XRD analyses suggest R1 I/S- and illite-dominated clay mineral assemblages with only traces of kaolinite and chlorite, similar to those identified in the upper interval of the Carsington DR C3 borehole.To assess the shale gas prospectivity of the Widmerpool Gulf and Edale Gulf we apply the criteria detailed in Table 2 of Andrews.With the data generated in the current study we can expand the UK data set and discuss four criteria in more detail: organic matter content, kerogen type, original hydrogen index, mineralogy and clay content.The TOCpd or organic richness of a potential source rock was measured using Rock-Eval™ pyrolysis and is reported in dry weight percent.Because organic matter generates hydrocarbons during maturation, TOCpd is generally viewed as an important variable that has a strong influence on the amount of potential hydrocarbons that can be generated.Even though there is consensus that a potential source rock should be rich in organic matter and TOCpd can serve as a proxy for that, reported cut-off values vary from basin to basin and from author to author: e.g. 
>2%, >4%.For the Carboniferous Pennine Basin, Andrews utilizes a TOCpd cut-off of 2% to screen potentially viable shale horizons.However, it is important to acknowledge that measured values relate to TOCpd composed of the pyrolysable fraction of the organic carbon and the remaining carbon after pyrolysis.The PC represents the present day generative part while the RC represents the present day non-generative part of the organic carbon of the sample that was subjected to Rock-Eval pyrolysis.From the moment of sampling to the acquisition of the pyrolysis results, losses of organic carbon occur due to storage, handling and sample processing.Losses also occur due to natural processes associated with basin evolution, including diagenesis and maturation of the sediments and due to migration of formed hydrocarbons.As a result, the original TOC is composed of the original GOC and the original NGOC.Jarvie provides a methodology to calculate TOCo and uses a cut-off value of 1% TOCo as a criterion for prospective shale plays.Thus, the interpretation of TOC values relies on the assumption that the loss of carbon from deposition to analysis is not biased towards GOC or its components, and quantitatively similar for all compared samples.While this may be a reasonable assumption for a set of samples from the same bed, TOCpd can vary by as much as 10% within the same shale system which draws into doubt the validity of inter-basinal comparisons of TOC values.Indeed, some techniques that rely on indirect estimates of the OM component to estimate shale gas resource, such as the Passey Method applied to down-hole geophysical logs, may give unrealistically high estimates of gas generative potential.In the Carsington DR C3 core 142 of a total of 169 samples have TOCpd values above 1% and 97 samples have TOCpd higher than 2%.Biozones E2b1 and E2a3 have the highest TOCpd with only the thin limestone bed at 20.10 m containing less than 1% organic carbon.However when PC is considered, values are much lower and only 7 samples, 6 of which originating from the E2b1–E2a3 interval, show higher than 1% PC and no samples exceed 2% PC.In conclusion, the majority of the Carsington DR C3 samples meet the >2% criterion of Andrews for TOC values of a prospective shale interval, especially in the E2b1 and E2a3 and only in restricted intervals in the rest of the studied interval.When PC is considered, no samples exceed the 2% threshold.In the Karenight 1 core, 71 of a total of 72 samples have TOCpd higher than 1% and 68 have TOCpd higher than 2%.The only samples that do not meet the 2% requirement are limestone that are present throughout the interval.When PC is considered, only 38 samples surpass 1%, concentrated in the lower part of the core, and only 3 samples exceed 2%.The HIo results from the Carsington DR C3 Borehole are summarized in Table 3.Overall, the samples of Carsington DR C3 have an average HIo of 409 mg/g TOCo and an average TOCo of 2.28%.The geochemical trends observed from HIpd and TOCpd measurements are maintained.Downhole, the E2b1–E2a3 interval shows higher HIo and TOCo values compared to the remainder of the E2a interval.The HIo and TOCo results from the Karenight 1 Borehole are shown in Table 4.Both HIo and TOCo show a similar pattern as the present day values obtained by analysis and shown in Fig. 
6.The average HIo value in the Karenight 1 Borehole is relatively stable throughout: 352 mg/g TOCo 607 mg/g TOCo.The upper part of the studied section shows lower HIo and TOCo values averaging 448 mg/g TOCo and 4.43% respectively.The lower part of the section shows an average HIo of 504 mg/g TOCo with an average TOCo of 9.34%.The interval from 243.67 to 245.50 m is notable in that it is characterized by a relatively high average TOCo of 12.11%.The XRD results are plotted as a ternary diagram with carbonates, clay and silicates as end members with the results of similar analyses from producing North American shale reservoirs and the ductile-brittle transition zone from Anderson.Jarvie suggests a clay content <35%, a silicate content exceeding 30% with some carbonates and the presence of non-swelling clays is required to enable hydraulic fracturing.This is again based on observations from the Barnett Shale and there are examples of productive shales with higher clay contents.Indeed, even plastic clays will hydrofracture if the pressurization rate is high enough.The upper part of the Carsington DR C3 borehole is dominantly a siliceous mudstone with a variable carbonate content.On average, these samples contain 28% clay minerals.In contrast, the lower part of Carsington DR C3 is generally carbonate-free and higher clay content.Hence, this interval is considered an argillaceous mudstone.The upper part of the Karenight 1 borehole consists of argillaceous and siliceous mudstones with a considerable carbonate content and a relatively high clay content.Calcite is most commonly developed with the exception of SSK59376 which has a dolomitic content of 27%.The lower part of Karenight 1 consists of siliceous and argillaceous mudstone with a low carbonate content and a clay content 32–58%.These results indicate the high variability of the mudstone mineralogy in both the Widmerpool and Edale gulfs.The same subdivisions that became apparent in the geochemical parameters are reflected in the XRD analyses.A knowledge of the clay mineralogy of these mudstones is important in determining their potential engineering behaviour.No discrete smectite, the most common high shrink-swell clay mineral, was identified in either of the borehole intervals examined.However, the ubiquitous presence of R1-ordered I/S should be noted and influence the design of any hydraulic fracture programmes.In addition, clay minerals can also provide a geothermometer for comparison with more traditional organic maturation indices, as illustrated by the Basin Maturity Chart of Merriman & Kemp.The consistent presence of R1-ordered I/S in borehole intervals suggests burial temperatures of ∼100 °C and places the formation in the late diagenetic metapelitic zone, equivalent to burial of perhaps 4 km at normal geothermal gradients.In terms of hydrocarbon zones, the clay data suggest light oil maturity.This is consistent with the acquired Rock-Eval™ parameters for both studied intervals.The Tmax values in the Carsington DR C3 interval remains constant around 430–440 °C, showing a uniform maturity near the bottom of the oil window.Only the sandstone interval between 33.50 and 34.75 m shows a drop in Tmax to 366 °C.The bulk organic matter in this interval is from a distinctively different origin than the rest of the interval: sample SSK45647 at 33.63 m has the lowest recorded HIpd and a low FS of 2.Hence, the Tmax variations in this instance can be attributed to the origin of the organic matter rather than a change in maturity.In the Karenight 1 
core, Tmax averages 435 °C and remains in the narrow interval 424–445 °C over the entire studied interval, averaging 435 °C.There is an increase of about 10 °C in Tmax occurring around 245.70 m below the lowest occurrence of visible plant material in the core.The lithological composition, the fossil content and the occurrence of some of the most negative δ13COM values all point to a more marine depositional environment.Therefore we attribute the minor change in Tmax at 245.70 m to the nature of the organic matter rather than to a change in maturity.Peters-Kottig et al. investigated long and short term variations in δ13COM that occur in land plant organic matter in the late Palaeozoic.The rise of land plants during the Carboniferous and the Permian has been related to the drawdown of atmospheric CO2 and the initiation of glacial episodes.This is linked to the rise in importance of lycophytes during the Serpukhovian; these plants were very efficient carbon sinks, large in size and they possessed photosynthetic leaf cushion covered stems and leafs.Carbon burial during the late Mississippian was exacerbated by the widespread lignin production since the late Devonian, leading to increased burial of organic matter resulting in a further CO2 drawdown and a large pO2 peak around 300 Ma and thus a lower δ13COM signature of the plant material as the Mississippian advanced to values of around −25‰.Even though extant marine plants are generally characterized by higher δ13COM values than land plants, δ13COM measurements of marine kerogen are generally lower than terrestrial kerogen.This was confirmed for Mississippian shales of the Appalachian Basin where terrigenous organic matter has δ13COM of −26 to −25‰ while the δ13COM of the marine organic matter is around −30‰.Lewan evaluated δ13COM values of amorphous kerogens in Phanerozoic sediments and distinguished ‘h’ amorphous kerogens from ‘l’ amorphous kerogens.Phytoplankton residing in environments with well-circulated water masses dominated by atmospheric-derived CO2 will yield ‘h’ amorphous kerogen while ‘l’ amorphous kerogens were more likely formed in more restricted basins overlain by relatively shallow, well-stratified water masses where carbon in the photic zone is sourced from recycled organic material.These principles can be applied to the Mississippian deposits of the Pennine Basin, where the kerogen fraction is dominated by AOM.The contrasting δ13COM values between terrestrial and marine kerogen can be used to delimit marine and non-marine intervals.Stephenson et al. showed that the bulk δ13C values of the mixed marine-terrestrial sequence in the Throckley and Rowland Gill boreholes is a function of the ratio marine:terrestrial δ13COM.Similarly, Könitzer et al., demonstrated the influences of microfacies, organic matter source and biological activity in a cross plot of δ13COM and TOC for deposits of the Carsington DR C4 Borehole, which covers the same stratigraphic interval considered in the current study.In Fig. 
10 the Karenight 1 and Carsington DR C3 δ13COM and TOC values are shown.The lower part of the Carsington DR C3 core, corresponding to the E2a biozone contains the lowest TOC and variable δ13COM values.The highest δ13COM values correspond with intervals where sandstone and siltstone, most likely deposited as turbiditic flows in the Widmerpool Gulf, carried more terrestrially sourced organic matter in the basin: around 54 m, 41–44 m, 34 m and 31 m.The isotope signature of the interspersed mudstones is lower and more marine-derived organic matter is incorporated in these sediments.The upper part of the Carsington DR C3 shows the influence of increased marine organic matter on the kerogen fraction.There are two outliers: at 20.10 m a limestone has the lowest recorded δ13COM value combined with the lowest TOC value.Limestone is characterized by a very low kerogen content and due to the depositional environment, what little kerogen there is, consists almost entirely of marine sourced organic matter.The other outlier is sample SSK45602 which has a relatively high δ13COM value of −25‰.Despite the marine character of the lithology, some plant fossils were recovered from the interval which could explain the terrestrial nature of the δ13COM signal.The lower part of the Karenight core displays the lowest δ13COM combined with the highest TOC values, reflecting the marine influence in this part of the section.The upper part contains a mix of marine derived kerogen, around 237–238 m and 240 m, interspersed with intervals containing more terrestrially derived kerogen, 236–237 m. Relatively, the Karenight 1 samples contain a higher abundance of marine kerogen compared to the Carsington DR C3 samples.During the Mississippian the Pennine Basin was located close to the equator, proximal to Laurussia with Gondwana stretching to the South Pole and it was bordered by two emerging land masses; the Southern Uplands to the north and the Wales-Brabant High to the south.Because of its position, ice sheets were likely to persist on Gondwana which influenced the sedimentation history in the patchwork of sub-basins forming the Pennine Basin."These sea level fluctuations most likely exerted less of an impact on the contemporaneous Upper Barnett Shales, deposited in the much more distally located Fort Worth Basin and one of the world's most prolific shale gas plays with an estimated resource of 43 tcf, than the mudstones considered in the current study.The Fort Worth Basin was formed as a foreland basin in response to the collision of North and South America during the formation of Pangea and its more distal position means the five lithofacies that are generally recognized in the Barnett Shale – i.e. 
black shale, lime grainstone, calcareous black shale, dolomitic black shale and phosphatic black shale – lack the turbidites that were encountered in the Carsington DR C3 core and the siltstones in the Karenight 1 core of the current study and indeed in most of the Namurian cycles in the Pennine Basin.The more continuous nature of marine deposits of the Fort Worth Basin compared to the mudstone and turbidite successions of the Pennine Basin, reflects the respective position of both basins: the glacio-eustatic sea level fluctuations most likely exerted less of an impact on the contemporaneous Upper Barnett Shales deposited in the much more distally located Fort Worth Basin, than the mudstones considered in the current study.Most of the prospectivity criteria that were used to evaluate the Namurian shales from the UK, summarized in Table 1, are based on observations from the Barnett Shale.The Carsington DR C3 samples cover the E2a, E2a3 and E2b1 marine bands.The lower part of the studied interval corresponds to the E2a marine band below E2a3.The TOCpd of the mudstones of this interval is comparatively low.There is variation in the TOCpd with some intervals characterized by a low TOCpd while some intervals have a TOCpd that surpasses 2.5%.However, the pyrolysable content of none of these samples surpasses the 2% constraint that is required for an unconventional gas reservoir.The kerogen fraction contains a high calculated average of 70% Type II, with an important contribution of Type III organic matter.The HIo for this interval averages 468 mg/g TOC falling in the prospective window of 250–800 mg/g TOC defined by Jarvie with an associated TOCo of 2.28% on average.It should be noted that there is a considerable amount of variability in the TOCo values throughout, with a minimum of 0.46% and a maximum of 4.79%.The XRD analyses show there is a very low amount of carbonate contained in the lower part of Carsington DR C3 and a highly variable amount of silicates.Most samples are considered as argillaceous mudstones while three samples contain enough silicates to be classified as siliceous mudstones.The upper part of Carsington DR C3 covers the E2a3 and E2b1 bands which together correspond to an important transgression.The TOCpd averages 3.5% and TOCo 3.44%.In both instances these values surpass the 2% limit set as a criterion for organic matter content that defines an unconventional play.However, when the reactive, pyrolysable carbon content representing the generative part of the kerogen fraction is considered, no samples surpass the 2% boundary.Assigning AOM to a kerogen Type I is tentatively done by utilizing the autofluorescence properties of the kerogen fraction.Kerogen Types II and III are better constrained by transmitted white light observations and average 62% and 1.8% respectively.This shows marine organic matter dominates over terrestrial organic matter which is significantly lower than the lower part of the section.The HIo index averages 532 mg/g TOCo and is well above the 250 mg/g TOC constraint and falls in the 250–800 mg/g TOC interval suggested by Jarvie.The siliceous mudstones contain variable amounts of carbonate and one sample has a low silica and carbonate content plotting in the ductile to brittle transition zone.No discrete smectite was identified in the sampled Carsington DR C3 interval, as required by Jarvie.Both mineralogy and Tmax values indicate however that the studied material is immature for gas generation.The studied section of the Karenight 1 core covers the E1 to 
E2 biozone transition.The lower part of the section is dominated by mudstone with subordinate and interspersed limestone intervals.All samples of the lower part of Karenight 1 surpass 4% TOC with an average of 6.9%.When the pyrolysable carbon of TOCpd is considered however, the average TOCpd is 1.9% and only three samples pass the 2% threshold.The organic matter mostly consists of heterogeneous AOM with an average calculated 17.4% of Type II and 1.3% Type III reflecting the very limited influx of terrestrially sourced AOM, also reflected in the low δ13COM values.The HIo reaches on average 554 mg/g TOCo with TOCo averaging 9.92%.These values fall in the prospectivity window described by Jarvie.XRD analysis of the bottom part of Karenight 1 shows that the lithology varies from argillaceous to siliceous mudstones, but all samples have a silica content over 30%.The upper part of the Karenight 1 core covers the E2 biozone, possibly containing evidence of the E2b Marine Band around 241.50 m.The TOCpd values are less than in the lower part of the core and the kerogen fraction is composed of 78.6% heterogeneous AOM, a calculated average of 67% Type II kerogen and 2.7% Type III kerogen.Again, these values combined with the relatively light carbon isotopic signature show the dominance of marine conditions.The somewhat higher average values of Type III kerogen show the influence of terrestrial matter which is associated with the presence of siltstone intervals.The average calculated HIo is somewhat lower than the bottom part of the section while TOC0 is significantly lower.These values are still within the constraints of the prospectivity criteria.The upper part of the section contains a highly variable amount of carbonates but all samples have a silicate content that surpasses the 30% threshold.No discrete smectite was identified in the sampled Karenight 1 interval as for Carsington DR C3, the mineralogy and Tmax values indicate that the Karenight 1 material is immature for gas generation.The Arnsbergian mudstones from the Morridge Formation in the Widmerpool and Edale Gulf proved by the Carsington DRC 3 and Karenight 1 boreholes were deposited coeval with active unconventional exploration targets in the Craven and Bowland Basins and the Upper Barnett Shales from the Fort Worth Basin.Karenight 1 contains more marine organic material and is characterized by higher FSI, TOCpd, TOCo and HIo values than Carsington DR C3, which exhibits frequent sandstone intervals and a higher fraction of kerogen Type III.The terrestrial material in the Morridge Formation originates from the Wales-Brabant High to the south of the Pennine Basin.Therefore, we hypothesise that when turbiditic flows entered the Pennine Basin sourced from the Wales-Brabant High to the south, most of the terrestrial material was deposited in the Widmerpool Gulf.The remnant of the south-eastern part of the Derbyshire High, located between Carsington DR C3 and Karenight 1 during the Arnsbergian represents a barrier to sediment input from the south into the Edale Gulf, resulting in the deposition of only a relatively small fraction of these sediments with a terrigenous character at Karenight 1.However, even in the intervals of Carsington DR C3 that are characterized by a terrestrial signature, marine influences are noticeable: Type II kerogen remains important in the palynofacies counts and δ13COM exceeds −24‰.This suggests a continuous sedimentation of marine-derived material, punctuated by influxes of terrestrial material from the 
Wales-Brabant High diluting the marine deposits.This means that prospective intervals in both the Widmerpool and Edale Gulf are relatively thin and consequently high resolution characterization of these intervals is required in future research to quantify reservoir fairways.Given the high diversity of spores recovered from both cores, a comprehensive, quantitative re-evaluation of the spore biozonation from these intervals on a metre- to decimetre-scale may aid in the identification of the prospective intervals.The work flow employed in the current study gives a complete assessment of the organic matter of potentially prospective shale gas plays.This approach allows for a calculation of HIo and TOCo which give a more meaningful evaluation of prospectivity estimates (a hedged calculation sketch follows this entry).In this way, we show that the most prospective part of the Carsington DRC3 borehole is the E2b1–E2a3 interval with an HIo of 465 mg/g TOCo and TOCo of 3.2%, while for the Karenight 1 borehole the most prospective part is the E1–E2a interval with an average HIo of 504 mg/g TOCo and TOCo of 9.3%.In Table 5, these values are compared with the top 10 shale gas systems as reported by Jarvie.The most prospective intervals of both the Carsington DRC3 and the Karenight 1 boreholes have HIo and TOCo values that are comparable to the contemporaneous US shales.The least prospective intervals from the Carsington DRC3 core have values that are well below the contemporaneous US shales, most likely reflecting the higher amount of terrestrial material with on average 30.3% Type III kerogen.For the Barnett Shale, kerogen Type II with a minor admixture of Type III has been reported.Though no actual palynofacies counts were cited, Jarvie et al. use 95% Type II and 5% Type III for their calculation of HIo for the Barnett Shale.We investigated Namurian mudstones from two boreholes drilled in the southern part of the Pennine Basin: the Carsington Dam Reconstruction C3 borehole from the Widmerpool Gulf and the Karenight 1 borehole from the Edale Gulf.Both prove mudstone-dominated intervals of Arnsbergian age, with the Carsington DR C3 borehole comprising the E2b1–E2a marine bands, while the Karenight 1 borehole comprises the E2b and E1 marine bands.We apply a fully integrated, multi-proxy approach to describe the geochemical, palynological and sedimentological properties:Heterogeneous AOM dominates the E2b1–E2a3 samples in the Carsington core with important contributions of Type II kerogen and only minor amounts of Type III kerogen.The E2a interval below E2a3 contains markedly less heterogeneous AOM and a more important Type III kerogen fraction.The highest TOCpd values are reported from the E2b1–E2a3 interval.However, when the pyrolysable content of the Rock-Eval™ analyses is considered, none of the Carsington samples exceeds 2%.The calculated HIo and TOCo for E2b1–E2a3 average respectively 465 mg/g TOCo and 3.2%.For E2a below E2a3, HIo averages 396 mg/g TOC and TOCo 1.0%.Carsington DR C3 contains mainly siliceous mudstones with a variable carbonate content and argillaceous mudstones with a very low carbonate content.The bottom part of the section is further characterized by two sandstone intervals interpreted as turbidites entering the Widmerpool Gulf from the Wales-Brabant High.Based on the criteria for prospective shale gas plays, Carsington DR C3 has reasonably high organic contents and hydrogen indices.Calculated HIo and TOCo are within the limits of known producing plays.However, the XRD results and Tmax values suggest the Namurian deposits of
the Widmerpool Gulf are too immature for gas generation.The Karenight 1 core samples are dominated by heterogeneous AOM throughout while Kerogen Type II is also important and Type III kerogen is of minor importance.TOCpd surpasses 2% in 68 of 72 considered samples, however when the pyrolysable carbon content is considered, only 3 samples surpass 2%.In the bottom part of Karenight 1 HIo averages 504 mg/g TOCo with a TOCo of 9.3% while in the top part HIo averages 448 mg/g TOC with a TOCo of 4.4%.In the Karenight 1 core we find carbonate poor siliceous and argillaceous mudstones in the bottom part and mudstones with a markedly higher carbonate content in the top part.The clearest marine intervals occur in the lower part of Karenight 1 yielding some of the most organic rich Namurian deposits in the Pennine Basin.As for the Namurian deposits in the Widmerpool Gulf, the Namurian strata in the Edale Gulf have organic contents combined with hydrogen indices that fall well within the limits of known shale gas plays, but Tmax suggests the deposits are immature for gas.The terrestrial material that enters the southern Pennine Basin likely originates from the Wales Brabant High and therefore we conclude that most of the terrestrial material was deposited in the Widmerpool Gulf as turbidite flows and only a relatively small fraction reached the Edale Gulf, resulting in a more marine character in the Karenight 1 core.However, there is still a considerable amount of kerogen Type II in Carsington DR C3, even in intervals with a terrigenous sedimentary signature.This points to continuous marine conditions across the southern part of the Pennine Basin, at times diluted by turbiditic deposits, most likely related to.Because of this, the intervals prospective for hydrocarbon generation in especially the Widmerpool Gulf and to a lesser extent in the Edale Gulf, are relatively thin.Consequently, high resolution characterization, at sub-marine band resolution, of these intervals should be the focus of future research.Quantitative spore analysis and fluorescence microscopy may be of considerable help to achieve this goal given the high diversity of well-preserved spores recovered in the current study. | During the Serpukhovian (late Mississippian) Stage, the Pennine Basin, now underlying much of northern England, consisted of a series of interlinked sub-basins that developed in response to the crustal extension north of the Hercynic orogenic zone. For the current study, mudstone samples of the Morridge Formation from two sub-basins located in the south-eastern part of the Pennine Basin were collected from the Carsington Dam Reconstruction C3 Borehole (Widmerpool Gulf sub-basin) and the Karenight 1 Borehole (Edale Gulf sub-basin). Detailed palynological analyses indicate that aside from the dominant (often 90% or more) heterogeneous amorphous organic matter (AOM), variable abundances of homogeneous AOM and phytoclasts are present. To complement the palynological dataset, a suite of geochemical and mineralogical techniques were applied to evaluate the prospectivity of these potentially important source rocks. Changes in the carbon isotope composition of the bulk organic fraction (δ13COM) suggest that the lower part (Biozone E2a) of Carsington DR C3 is markedly more influenced by terrigenous kerogen than the upper part of the core (Biozones E2a3–E2b1). The Karenight 1 core yielded more marine kerogen in the lower part (Marine Bands E1–E2b) than the upper part (Marine Band E2b). 
Present-day Rock-Eval™ Total Organic Carbon (TOCpd) surpasses 2% in most samples from both cores, the threshold suggested by Jarvie (2012) for defining prospective shale gas reservoirs. However, when the pyrolysable component that reflects the generative kerogen fraction is considered, very few samples reach this threshold. The kerogen typing permits for the first time the calculation of an original hydrogen index (HIo) and original total organic carbon (TOCo) for Carboniferous mudstones of the Pennine Basin. The most prospective part of Carsington Dam Reconstruction C3 (marine bands E2b1–E2a3) has an average TOCo of 3.2% and an average HIo of 465 mg/g TOCo. The most prospective part of Karenight 1 (242.80–251.89 m) is characterized by an average TOCo of 9.3% and an average HIo of 504 mg/g TOCo. Lastly, X-ray diffraction (XRD) analysis confirms that the siliceous to argillaceous mudstones contain a highly variable carbonate content. The palynological, geochemical and mineralogical proxies combined indicate that marine sediments were continuously being deposited throughout the sampled intervals and were punctuated by episodic turbiditic events. The terrestrial material, originating from the Wales-Brabant High to the south of the Pennine Basin, was principally deposited in the Widmerpool Gulf, with much less terrigenous organic matter reaching the Edale Gulf. As a consequence, the prospective intervals are relatively thin, decimetre- to metre-scale, and further high resolution characterization of these intervals is required to understand variability in prospectivity over these limited intervals. |
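To make the original-hydrogen-index calculation referred to above concrete, the following is a minimal Python sketch of the weighted kerogen-type mixing approach (for example, the 95% Type II and 5% Type III Barnett mix quoted in the text); the end-member HI values and the function name are illustrative assumptions for demonstration, not figures taken from the study.

```python
# Illustrative end-member original hydrogen indices (mg HC/g TOC); assumed
# demonstration values only, not figures taken from the study.
HI_END_MEMBERS = {"Type I": 750, "Type II": 450, "Type III": 125, "Type IV": 50}

def original_hydrogen_index(kerogen_fractions):
    """Weighted-average HIo for a kerogen mix, e.g. from palynofacies counts.

    kerogen_fractions: dict mapping kerogen type to its fraction of the total.
    """
    total = sum(kerogen_fractions.values())
    return sum(HI_END_MEMBERS[k] * f for k, f in kerogen_fractions.items()) / total

# The Barnett-style mix quoted in the text: 95% Type II and 5% Type III
print(original_hydrogen_index({"Type II": 0.95, "Type III": 0.05}))  # roughly 434 mg/g TOC
```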
31,457 | Innovative 3D and 2D machine vision methods for analysis of plants and crops in the field | Predicted increases in world population, coupled with the effects of climate change, mean that the need for increases in the efficiency of agricultural methods, concurrent with reductions in their environmental impact, is becoming critical for ensuring sustainable production.Although food production has increased significantly over the last century, due largely to the effects of mechanisation and intensive farming methods, this has come at the cost of an increased utilisation of resources such as water, fertilizers, herbicides and pesticides, which is unsustainable for meeting future demands, both economically and ecologically.It is then perhaps fortuitous that advanced technologies are now emerging that offer potential for providing plant-related data and knowledge that can enable increases in productivity while reducing environmental impacts.This can be realised in the form of new generations of autonomous agricultural robotic devices that will operate with the needed levels of information and intelligence.These could take the form of, for example, small tractors fitted with sensor systems that enable them to operate autonomously and to identify features of interest – such as weeds.Upon detection a weed could then be destroyed by a method such as use of a servo system for spraying a small amount of herbicide at its meristem.Such an approach offers obvious cost and ecology benefits compared to general spraying of herbicide, as well as additional environmental benefits associated with producing less soil compaction than a full-sized tractor and use of less fuel.However, significant technical developments are needed before such a device can operate autonomously and effectively in the field; and principal among these is an effective sensor system.While instrumentation such as GPS, inclinometers and accelerometers may play important roles, perhaps the most useful sensor for agricultural automation is that of machine vision.In fact it could be argued that this is a critical enabling technology, and that significant breakthroughs in machine vision will enable dramatic increases in the use of agricultural automation/robotics.In recent decades there has been considerable development of 2D machine vision technologies for plant detection/analysis; however there has only been very limited transfer of these technologies to use in the field/outdoors.Reasons for this include complications resulting from variations in illumination experienced outdoors, as well as the complexities of images captured.Difficulties can occur in image interpretation/analysis and feature recognition/measurement, due to the inherent complexities of plant morphologies with factors such as unexpected occlusion of view and/or shadowing.If methods could be found for alleviating these difficulties, then the effective employment of machine vision for plant analysis, and all the associated benefits mentioned above, could be realised.One such method is that of 3D machine vision; where, for example, leaf surfaces can be recovered in 3D, so that the true area can be calculated; also, the 3D information on the leaf allows its occlusion or shadowing effects to be evaluated, thereby assisting with interpretation of conventional images of the plant.The utility of 3D machine vision in agriculture does in fact go well beyond simply assisting with understanding the general shape and structure of a plant.Depending upon the exact nature of the 3D 
machine technique applied, a range of types of information can be derived that can inform the farmer, depending upon the particular needs and requirements.In this paper this range is illustrated by two very different case studies of 3D vision systems that provide useful information in the field.The first describes a 3D machine vision technique that has undergone extensive development in the Centre for Machine Vision at the University of the West of England – photometric stereo; and goes on to show how this can utilize off-the-shelf components in novel configurations that can generate high-resolution 3D surface data for facilitating directed weed elimination, through to new types of plant phenotyping.The second employs existing low-cost high-performance 3D vision systems and shows how they can be combined with GPS data for providing useful produce and field information during potato harvesting.A great deal of literature exists that reports on studies of computer vision agricultural applications; and currently a growing amount is specifically addressing 3D vision work.There is not space here to review all of this; rather, a selection of work will be reviewed which is particularly relevant to the subject of this paper and the associated research.In 2016 Vázquez-Arellano et al. completed a comprehensive review of 3D Imaging Systems for Agricultural Applications.They identified reduced labour availability, scarcity of natural resources, and consumer demand for quality products as drivers for automation in agriculture; and stated that 3D vision is a key technology for agricultural automation.In their 2010 paper , McCarthy et al. state that field environment precision agriculture applications face the challenge of overcoming image variation caused by the diurnal and seasonal variation of sunlight; and put forward a view that augmenting a monocular RGB vision system with additional sensing techniques potentially reduces image analysis complexity while enhancing system robustness to environmental variables.We suggest that 3D data recovery comprises one such additional sensing technique.In their 2015 review of sensors and systems for fruit detection and localisation , Gongal et al. 
identify occlusions, clustering, and variable lighting conditions as the major challenges for the accurate detection and localization of fruit in the field environment.They say that improved accuracy can be achieved through 3D fruit localisation, but point out that methods such as laser range finding are currently bulky, slow and costly – however these drawbacks do not apply to other 3D vision approaches, such as an RGB-D camera or photometric stereo.In 2017 Binch and Fox reported on an interesting controlled comparison of machine vision algorithms for Rumex and Urtica detection in grassland.They conclude that all the accuracies in their implementations were lower than those of other researchers and suggest a number of reasons for this, which were characteristic of their collecting real data in the field – for example, they required their data to come from a wide mixture of lighting and weather conditions.This provides further evidence of the need to address such factors in order to collect data in the field that will be of real use to farmers.They also conclude that: “…the best performing method for the overall spray/no spray decision is based upon Linear Binary Patterns with Support Vector Machine classification…” – this is in line with our findings in the same area and will form part of a future CMV paper.The demands of in-the-field operation are further illustrated by the example of using machine vision for analysing potato harvesting.Some previous work has been reported on vision inspection for potatoes and in 2015 Rady provided a review of Rapid and/or non-destructive quality evaluation methods for potatoes.However, the systems described in these papers generally operate indoors and often in laboratory-type conditions.In contrast to this, there are many significant challenges associated with real-time in-the-field analysis of potato harvesting production; and these are described in Section 4.2.1.The extent to which these can be successfully addressed is highly dependent upon the manner in which images are captured and how the 2D/3D data are generated – particularly what type of camera is employed and how it is configured.Two types of cameras/imaging were employed in the current work: specifically an RGB-D device and photometric stereo – therefore some discussion is provided below of these devices/techniques.RGB-D cameras such as the Microsoft Kinect have been increasingly employed for machine vision research due to their common availability, low cost and relatively good range-finding performance.For example, the RGB-D camera employed in the current work captured data at a rate of 30 frames per second; and for each frame that was captured by the camera, a 2D colour image and a depth map were generated.The latter gives a distance, in mm from the camera, for each pixel, thereby generating a 3D point-cloud.Two versions of RGB-D camera were experimented with; the first employs a distortion of a projected infra-red pattern to calculate the depth data, while the second uses time-of-flight technology.However, the limitation in the pattern resolution, in combination with the fixed field of view and ranges at which these devices operate, and the limitations of the USB interfaces they employ, mean that the 3D data obtained are of relatively low resolution when compared to the dense arrays of surface normals that can be captured using photometric stereo.Despite this there are some agricultural applications where the RGB-D range data can prove useful – an example is provided by the potato
measurement application described in Section 4.2.However, the analysis of individual plants, recovering features such as true leaf colour/area and leaf veins, for monitoring plant growth and phenotyping, is beyond the capability of the RGB-D device, but can be achieved with a photometric stereo analysis.Photometric stereo (PS) is a technique first described by Woodham in 1980, which employs a single camera and a set of at least 3 lights in known locations.Here, rather than calculating a depth image or a point cloud, a surface normal field is recovered from an object that is illuminated from different directions while the viewing direction is held constant.The fraction of the incident illumination reflected in a particular direction is dependent on the surface orientation, which can be modelled using Lambert's Law.Therefore, when the directions of incident illumination are known and the radiance values are recorded, the surface orientation can then be derived.Woodham observed that three views are sufficient to uniquely determine the surface normals as well as albedos at each image point, provided that the directions of incident illumination are not collinear in azimuth.Four illuminants/views can be employed for improved reconstruction performance.The equations involved in determining the albedo and surface normal vectors from the three recovered images can be derived: writing the three measured intensities at a pixel as a vector I and the corresponding unit illumination directions as the rows of a matrix L, the Lambertian model gives I = ρLn, where ρ is the albedo and n is the unit surface normal; provided L is invertible, ρn = L⁻¹I, so that the albedo is recovered as ρ = |L⁻¹I| and the surface normal as n = (L⁻¹I)/ρ.These equations are derived under the assumptions that 1) the object size is small relative to the viewing distance, 2) the surface is Lambertian, and 3) the surface is exempt from cast-shadows or self-shadows.
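A minimal NumPy sketch of this three-light solve is given below; the function, array layout and synthetic light directions are illustrative assumptions used to demonstrate the standard Lambertian formulation, not the authors' own implementation.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel albedo and unit surface normals (Lambertian model).

    images: (3, H, W) array of grey-scale intensities, one image per light.
    light_dirs: (3, 3) array whose rows are the unit illumination vectors
                (they must not be collinear in azimuth, so L is invertible).
    """
    k, h, w = images.shape
    intensities = images.reshape(k, -1)             # one column of intensities per pixel
    g = np.linalg.inv(light_dirs) @ intensities     # g = albedo * normal for every pixel
    albedo = np.linalg.norm(g, axis=0)              # rho = |L^-1 I|
    normals = g / np.maximum(albedo, 1e-8)          # n = g / rho (avoid division by zero)
    return albedo.reshape(h, w), normals.reshape(3, h, w)

# Synthetic check: a flat surface facing the camera, unit albedo
lights = np.array([[0.0,    0.5,  0.866],
                   [0.433, -0.25, 0.866],
                   [-0.433, -0.25, 0.866]])         # three unit directions, differing azimuths
flat_normal = np.array([0.0, 0.0, 1.0])
frames = (lights @ flat_normal).reshape(3, 1, 1) * np.ones((3, 8, 8))
rho, n = photometric_stereo(frames, lights)         # rho ~ 1 and n ~ (0, 0, 1) everywhere
```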
The PS method reconstructs one surface normal vector per pixel, and therefore it is capable of recovering surface normals in high resolution.3D reconstructions by PS are spatially consistent with PS images captured by a single camera.This eliminates the correspondence problem that places a computational burden upon binocular vision solutions, i.e. the problem of ascertaining how pixels in one image spatially correspond to those in the other image.Furthermore, the resolution of PS reconstructions is flexible and is solely determined by the camera and lens employed; thereby allowing PS to be configured for a specific device or application.In contrast, data obtained by RGB-D cameras are normally of low spatial and depth resolution, which severely degrade as the sensor-object distance increases; and such cameras are generally non user-configurable.In addition, PS reconstructions provide detailed high-frequency 3D texture information.3D depth information can be derived from PS surface normals when necessary.In contrast to PS, binocular stereo is more prone to noise and artefacts, since it directly recovers depth of surface data rather than surface orientations.Although being highly accurate and of high resolution, PS devices can be constructed at a similar or lower cost to the RGB-D or Kinect camera, with the potential flexibility of being portable or long-range; and thus comprise a powerful solution to 3D imaging.Despite these significant potential advantages, utilisation of photometric stereo in machine vision applications has been rather limited in comparison to other techniques such as the RGB-D camera mentioned above.This is perhaps because RGB-D cameras are available off the shelf at low cost in the form of devices such as the Kinect; but this is not true of photometric stereo, where instead the user is obliged to mount and configure a camera and a set of lights in known orientations.In addition, it is necessary to switch each light at high speed and in exact synchronisation with the image capture – all of which can prove to be a considerable challenge in terms of instrumentation and programming.A final reason is that the RGB-D camera, as the name suggests, concurrently produces RGB and depth images for a given scene.In contrast, implementing photometric stereo requires processing and combining image intensities and to ensure maximum resolution when doing this, grey-scale cameras are usually employed – consequently the albedos often emerge as grey-scale images.It is however possible to also capture RGB images by replacing the grey-scale camera with a colour one; and we have done this in the field, for plant analysis.Having given an outline of the capabilities of the cameras employed, the following section describes the experiences of applying our machine vision technologies for crop/plant analysis in the field.Recent crop/plant machine vision work that has been undertaken by CMV in the field is that of 2D weed detection and analysis employing a conventional colour camera mounted on a tractor without any shrouding.This produced data that were usefully processed for detection of weeds in grass, which is less trivial than detection of weeds on dirt – the latter can be achieved through analysis of the green and red components of images from colour cameras, for identification of greenness.Our approach to separating the weeds such as dock from the grass involved: edge detection, texture analysis through use of a local entropy filter to differentiate between areas of high entropy and low entropy, and image processing techniques such as erosion and dilation/thresholding, as shown in Fig. 2.
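A hedged sketch of this kind of entropy-based texture segmentation is given below using scikit-image; the neighbourhood radius, the threshold and the assumption that broad dock leaves appear as low-entropy regions relative to grass are illustrative choices rather than the exact pipeline used.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters.rank import entropy
from skimage.morphology import disk, binary_opening, binary_closing
from skimage.util import img_as_ubyte

def weed_candidate_mask(rgb_image, radius=9, threshold=4.5):
    """Flag candidate broad-leaved weed regions in an image of grass.

    rgb_image: (H, W, 3) float or uint8 colour image.
    Here broad, smooth dock leaves are assumed to produce lower local entropy
    than fine grass blades; the radius and threshold are illustrative and
    would need tuning on real field imagery.
    """
    grey = img_as_ubyte(rgb2gray(rgb_image))
    texture = entropy(grey, disk(radius))      # local entropy per pixel
    mask = texture < threshold                 # low-entropy candidate regions
    mask = binary_opening(mask, disk(3))       # erosion/dilation to remove speckle
    mask = binary_closing(mask, disk(5))       # fill small gaps within leaves
    return mask
```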
The chief complication with this approach is associated with unexpected and dramatic changes in illumination levels.The cause of this can range from the sun emerging from behind cloud, through to the tractor changing direction, resulting in a change in brightness due to a change in the relative position of the sun, or shadows being cast by the tractor or associated equipment.The most serious potential problem associated with light level change is the possibility of image saturation.The chance of this occurring can be minimised through use of an automatic shutter speed adjustment or automatic iris adjustment.The only problem with employing such approaches is that they will take a short time to adjust to the light level change and during this time good image capture may not be possible, resulting in data loss.The second potential drawback associated with light level changes and shadowing is that it introduces a level of complexity in the images which may result in simple image processing techniques being ineffective for weed feature identification/segmentation.A possible solution to this problem involves the application of machine learning to the weed segmentation.In CMV, neural networks trained on extensive sets of grass and weeds-in-grass images have shown considerable robustness to changes in illumination.Many of our data sets have been collected under conditions of dramatic changes in illumination, with shadows commonly present.The resulting complexities of the images have been exacerbated by an observed wide variation in dock sizes, with some docks being clustered and others being sparsely distributed, as well as the presence of grass/dried grass of different lengths.However, analysis of such scenes has benefitted from recent advances in open-source technologies such as TensorFlow, which have offered new opportunities to experiment freely with contemporary neural network architectures.Particularly promising results have been attained with the application of convolutional neural networks, which were found to be able to reliably identify weeds even in the presence of severe changes in image brightness.Although this approach has the disadvantage of requiring large amounts of training data and a potentially significant training time, this is a promising area of research that is currently being intensively investigated throughout the computer vision community.A high-performance computer has been installed in CMV to progress this work, as well as other in-depth image analysis methods, such as Local Binary Pattern modelling with Support Vector Machine classification.In recent tests, we have repeatedly demonstrated reliable detection of dock in images of grass that were captured under a wide range of illumination conditions and when less than 5% of the image concerned was comprised of dock leaf.In addition to conventional 2D imaging, there are 3D machine vision approaches that may provide solutions to illumination change problems as well as potentially offering various types of advanced capabilities.Experiments were conducted on plant analysis using 3D data from an RGB-D camera and it was found that meristem identification was possible for some types of plant.Although the depth resolution limitations resulting from the USB2 interface for the camera used meant that increased accuracy for meristem detection was not demonstrated, such increased accuracy might be attainable for future 3D measurements using the Kinect version 2 with a USB3 interface.It was however found that combining 3D range
thresholding with analysis of 2D image data can improve the reliability of weed identification and enable weed segmentation and measurement of plant row separation .We have also overlaid a grid on images and have applied a dense optical flow method to calculate field maps.As mentioned above, recovering features such as leaf veins, for monitoring plant growth and phenotyping, is beyond the capability of the RGB-D device.This is because the Kinect 1 has a depth resolution greater than 1 mm , while leaf vein thicknesses are often less than 1 mm.The limitation in the 3D resolution of the Kinect is due to the fact that the camera measures range and the limitation in the resolution of the projected pattern and/or bandwidth of the USB camera interface limits the resolution with which 3D features can be detected.However, PS can be usefully employed to analyse such features.In addition to recovering the texture and shape of the surface, photometric stereo provides a robust means of separating surface 3D and 2D textures – thereby providing good quality 3D surface data and accurate colour data in situations where conventional images would reflect hue effects or changes in surface pigmentation.Once the 3D surface shape data has been recovered, it needs to be analysed in order to identify meristems for implementing directed weeding.To do this we experimented with various methods for surface modelling; and employed two metrics: “Shape Index”, as proposed by and “HK Segmentation” .The specific implementation details are beyond the scope of this paper but tests indicated that the HK-measure gives a better indication of where the meristem may fall, compared to the shape index.A major advantage of photometric stereo is that the surface can be analysed in 3D at a generally much higher resolution than is the case for other methods – in fact it is only limited by the imaging resolution of the camera/lens employed.Consequently, the use of photometric stereo also offers potential to employ machine vision for generating new advanced capabilities for plant analysis such as phenotyping.Fig. 3.shows a low-cost photometric stereo rig that we have developed at CMV specifically for imaging plants, to measure phenotypic traits.Fig. 4 shows four images of a leaf captured with the Fig. 3 rig.These are combined with the lighting model to recover the 2D albedo and the 3D surface.In the albedo, lighting effects are eliminated, which allows the vein structure to be more clearly identified – this is useful for plant classification/phenotyping.On the right hand side in Fig. 4, the 3D surface of the leaf has been reconstructed, so that the leaf surface information is known at each point.This allows parallax effects to be eliminated, resulting in a more accurate estimate of the leaf area, which is useful for monitoring plant growth in various growing environments/conditions.Near-infrared light sources are employed in the Fig. 3 rig, which we have found to have the advantage of light reflection being more Lambertian – thereby reducing errors in the recovery of the surface normals, as well as allowing the plants to be imaged 24 h a day over long periods without interfering with their growth cycles.We have in fact employed LEDs of wavelength 940 nm, with matching narrow-bandpass filters, to capture data similar to that shown in Fig. 
4, but which were captured while the leaf was subject to direct and bright sunshine.This approach, which makes use of the natural dip in sunlight intensity at around 940 nm , offers great potential for capturing high-resolution 3D data from plants in greenhouse environments, or outdoors, without the need for controlling background light.We therefore consider that near-infrared photometric stereo has great potential; and it is the subject of on-going intensive research effort in CMV.Our investigations also indicate that photometric stereo can enhance the utility of hyperspectral imaging for revealing physiological and structural characteristics in plants.The spectral response of vegetation is highly affected by its orientation ; however this effect can be allowed for by incorporating photometric stereo surface normal data – thereby increasing the robustness of spectral measurements without increasing the overall price of the system.A four source PS system is incapable of capturing data from a moving rig as motion between the first and last frame would prevent successful calculation of the surface normals.Therefore we instead have implemented a two-source method which allows calculation of the surface gradient in one direction.This method requires lights to be in line with the x-axis of the camera in order to recover surface normals in that plane.We developed and described application of ‘dynamic photometric stereo’ to tile inspection in 2005 .Here we employed two near infra-red line lights to recover surface normals from a tile moving along a conveyor.At that time we found that although this does not allow full 3D reconstruction of the surface, the gradient along the x-direction is sufficient to obtain useful representation of surface shapes – thereby enabling segmentation of moulding defects.The situation for plant analysis is similar, in that again the surface normal data using only two sources is incomplete, but the component in the x plane can still be used for locating the meristem.We achieved this by manually labelling the meristems in a series of plant gradient maps, and then employed localised histograms of gradients for training a classifier, using techniques that included support vector machine.The classifier was then used to scan gradient maps of other plants to identify similar gradient features, thereby locating a meristem.Initial tests showed that although identification of the plant meristems was more challenging than in the case of four-light PS, it could still be achieved.On-going work involves further increasing the resolution of the gradient maps, combined with more powerful neural network modelling; with the aim of increasing meristem location reliability over a wide range of plant types.Following on from the weed detection work, the wide range of tasks for which machine vision can provide useful agricultural information is further illustrated by another application developed in CMV.Here newly emerging low-cost vision technologies are employed to gather 3D potato tuber data in the field, during harvesting.Although the potato metrology system employed off-the-shelf vision system components, its development and implementation was nevertheless non-trivial, due to the technical challenges associated with robust and reliable extended operation of a vision system outdoors.The application is concerned with reliable measurement of the size distribution of potatoes as they are harvested in the field, together with concurrent recording of GPS position data.The challenges 
associated with this can be summarised as follows: (1) Exposure to the elements; also, the system may be subject to cleaning by being sprayed with water from a hose.(2) Dramatic changes in background lighting, both intra- and inter-frame, ranging from direct sunlight to heavy shade.(3) The system will be mounted on a harvester which will be vibrating and moving around.(4) The power supply from the harvester/tractor may be intermittent.(5) Relevant GPS data need to be recorded in real time, associated with the 3D data capture, collated and made easily available to the user.(6) The potatoes will be moving on a conveyor within the harvester, and some will be in contact.Challenges 1 and 2 were addressed by affixing a structured-light RGB-D camera within an IP68-rated weather-proof box, which was mounted within a specially developed rugged shroud, as shown in Fig. 5.While the camera has to be mounted as rigidly as possible in order to withstand the robust environment aboard a harvester, its position also has to be adjustable while not being so heavy as to apply undue loads to the harvester canopy frame during harvesting.Also, to address challenge 3 above, mounting has to be implemented so as to minimise the dynamic effects of the vibrations and movements of the harvester.The solution was to design and manufacture an adjustable and reconfigurable bracket assembly from aluminium alloy, as shown in Fig. 6.Two such assemblies were employed to mount the shroud and camera onto a Grime GT170 harvester that was used for the trials.Waterproofing also involved employment of an IP68-rated protective casing for the ‘Processing Box’, which contained: a small-form computer, which acted as the main processor for the device, a USB hub and an OpenUPS2 uninterruptible power supply.The latter is needed to address challenge 4, so that when the tractor powers down, the device shuts down gracefully, rather than having the power cut immediately.The Processing Box also contained the USB hub and a status LED, used to indicate when USB writing is complete.Fig. 7 shows the contents of the box; and the components on the outside fascia of the box, which include a waterproof gland for the camera and GPS cables, a power D-Socket, status LED and the waterproof USB socket.Finally, challenge 6 was met by employing a relatively fast shutter speed to ‘freeze’ the potatoes’ motion on the conveyor; and any potential problems due to touching potatoes were overcome by employing the 3D data from the surface of each potato to fit an ellipsoid to it, thereby enabling segmentation.This is shown in Fig. 8 below.
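A minimal sketch of how axis lengths might be estimated from the 3D points of one segmented potato is given below; it approximates the ellipsoid fit with a principal-component analysis and uses illustrative size-band edges, so it demonstrates the idea rather than reproducing the fitting routine used on the harvester.

```python
import numpy as np

def ellipsoid_axes(points):
    """Approximate the semi-axis lengths of an ellipsoid through a 3D point cloud.

    points: (N, 3) array of x, y, z coordinates (mm) for one segmented potato.
    Returns semi-axis lengths sorted from major to minor.
    """
    centred = points - points.mean(axis=0)
    # Principal components give the ellipsoid orientation; roughly two standard
    # deviations of the spread along each component approximate the semi-axes.
    _, singular_values, _ = np.linalg.svd(centred, full_matrices=False)
    semi_axes = 2.0 * singular_values / np.sqrt(len(points))
    return np.sort(semi_axes)[::-1]

def size_band(points, band_edges_mm=(45, 55, 65, 75)):
    """Assign one of five size bands from the minor axis, as in a sizing grid.

    band_edges_mm are illustrative sieve edges, not values from the project.
    """
    minor = ellipsoid_axes(points)[-1] * 2.0   # full minor-axis length in mm
    return int(np.searchsorted(band_edges_mm, minor))
```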
In order to test the operation of the device, potatoes were measured in daylight in the field, with the harvester moving.The conveyor was started and a number of potatoes of known dimension were passed along the conveyor, under the device.An example of the results is shown in Table 1, where it can be seen that the majority of measurements obtained are within 10% of their true values.There is often an over-estimation of the major axis, due to potatoes not being true ellipsoids; however, we only use the minor axis and height in our 3D sizing estimations, so this does not affect the quality of the resulting size gradings.The next stage was to investigate what useful data can be provided by the system when the harvester is working in anger in the field, digging potatoes, where we expect to generate GPS maps that fit neatly with the satellite view of the harvested field.The approach employed here involved counting how many potatoes were measured to be within each of five discrete size bands, and then displaying this as a heat map.Five heat maps were generated for each field, one showing the yield for each sizing band.The colours of the heat maps represent the number of potatoes within a corresponding band that have been measured at a particular GPS location, where blue indicates a low number of potatoes, and red indicates a large number of potatoes with the corresponding colours in between.Fig. 9 shows heat maps and a satellite image of a field harvested on 13th July 2016.As can be seen in Fig. 9, the heat maps offer a convenient and potentially powerful way to visualise 3-dimensional data.The potential utility of this is clear; it can enable farmers to easily evaluate yields in a particular field and to relate this data to factors such as soil type/condition, possible usage of fertilizers and pesticides, as well as environmental conditions and water usage over particular periods.Such information offers much potential for farmers to more fully understand performance factors, thereby assisting them with optimizing yields while reducing/minimising resource usage.This project is believed to be the first to demonstrate the utility of 3D machine vision for segmenting potatoes immediately after being dug up, and for determining yield from a field by measuring the number of potatoes and their size distribution.The richer data that can be provided by 3D analysis also allows further information to be generated.By recovering the texture and shape of the potato surface, various characteristics that may be of critical interest can be detected.These include: the presence of disease; damage due to fauna; damage sustained during harvesting; and the presence of substantial soil on the potato.However, effective and reliable identification of such features does require high-resolution recovery of the 3D surface texture of a potato, and one way of achieving this is to apply PS as described above.Application of PS to in-depth potato analysis is a promising area; and it is the subject of on-going research in CMV.In this paper, the literature review and the outline of machine vision agri-tech work being undertaken in CMV have identified the potential benefits of the agricultural application of machine vision; and particularly 3D analysis.The specific benefits of the latter can be summarised by the following characteristics of a 3D approach:Allows for straightforward segmentation of objects for analysis.Can provide accurate surface recovery, including high-resolution 3D texture; with many potential applications, such as
introducing new plant measurement capabilities in plant phenotyping.Illumination invariant – can still recover useful information in a wide range of lighting conditions, including low light, dark conditions, or even direct sunlight.Pose invariant – 3D data can provide objective measurements – thereby eliminating problematic effects of camera position that are associated with 2D imaging – such as perspective and parallax.Provides additional discriminatory information.Can be used to augment 2D imaging.The example 3D vision agri-tech applications described in this paper illustrate how these benefits can be realised with relatively low-cost vision technologies that are currently emerging.Associated practical considerations and potential advanced capabilities are discussed below.The work detailed in this paper outlines our implementation of 2D and 3D weed analysis methods in order to detect and locate weeds at a field level.CMV has produced a system capable of detecting weeds in the field and of enabling close analysis of their structure.The system also includes the capability to produce a “field map” by mosaicking frames of video data together and then highlighting the weeds upon it.Perhaps most importantly, we have also explored the possibility of using 3D features to determine weed location, both on a static rig and under motion.Alongside the implemented code, we have produced a highly functional user-interface, pulling together the algorithms into one place.Users can capture, analyse and export data in a variety of modalities and modify parameters to examine their effect on the resulting estimations.The work investigated both 2D and 3D vision methods for detecting vegetation and localising the meristem of plants.Results were generally good, with many meristems being localised to within a few millimetres of their true location.The 3D data generated from the PS tests are particularly promising – here advanced feature analysis can generate much information for enabling advanced plant analysis capabilities, in both the field and the laboratory.Regarding the latter, a wide range of experiments have been undertaken using the PS apparatus shown in Fig.
3 for accurately monitoring the growth of crops in situ, with detailed measurement of phenotypic properties made possible by the highly detailed surface models we obtain through PS.An example of such a technique is the measurement of plant growth under various soil or fertiliser conditions or at varied levels of irrigation and/or temperature.Work has also included illumination at infra-red wavelengths to reduce the influence of changes in background light; and with a wide range of light wavelengths with a conventional camera.Also, a hyper-spectral camera has been employed for implementing the PS.Plant imaging is well suited to this, since the considerable time that such a camera can take to recover a hyper-spectral image does not present a problem in this application.This is an on-going area of research – our experiments have been very encouraging and this approach promises to offer new advanced capabilities for plant phenotyping – our intention is to present the results of this in an on-going series of journal papers.A potato metrology system was demonstrated to work well at accuracy levels of within 10% both in the lab and in field trials.Measurements are taken based on a virtual sieving algorithm introduced by Lee, which calculates a minimal cross-section of the potato that will pass through a given sizing grid.The system is also able to operate with a relatively wide range of potato sizes.However, a major challenge when implementing a system of this type is how to effectively deal with unexpected problems.Although the work reported here highlighted a number of unforeseen issues, most of these were effectively addressed.An example issue related to the power supply – once the tractor is connected one would have expected a constant supply; but it was found to be very intermittent.The expectation was that the unit could operate from the tractor power supply, with the UPS only being needed to allow the computer to shut down cleanly when it detected that the line in was disconnected, and also to “clean” the signal from the tractor to produce a reliable 19 V output.However, experience has shown that it would be advisable to run the system from a separate 12 V lead-acid accumulator and to only employ the tractor power for charging this battery.To summarise, the potato project has successfully developed and tested a demonstrator of the key technologies and functionalities that would be needed by an eventual production system.As mentioned above, a number of outstanding issues have been identified along the way; other issues worthy of further investigation are a fuller treatment of the effects of foreign materials and of moisture causing increased mud content and specularity in the camera view, and monitoring of tractor speed for interpreting the rate of potato harvesting.The development of any new technology inevitably requires extensive real-world testing and a process of iterative refinement via, in this case, a program of testing and modification using significant field data capture over an extended period.The question a farmer or agri-tech company may wish to ask is: which 3D techniques are most likely to enable the potential benefits to be realised?From a review of supplier literature it becomes clear that there are a wide range of 3D vision technologies available, each with their particular advantages and drawbacks; examples include: stereo vision, laser triangulation, LIDAR, photometric stereo, and RGB-D cameras.The latter device provides colour information as well as the estimated depth
for each pixel.In recent years, RGB-D devices such as the Kinect have become widely available at costs that are around an order of magnitude less than that of previous sensors that had similar functionality.This has resulted in a rapid expansion of their use for 3D machine vision research in various sectors, including agriculture.An example of this is provided by the potato inspection project described above, where the RGB camera provided a cost effective means of deriving the metrics that were of interest to the collaborating company – specifically potato sizes and yields.However, if the emphasis had been more on the individual potato quality characteristics, another method may have been employed to capture surface texture/topology characteristics at high resolution.Photometric stereo is a technique that is well suited to achieving this, while also having the advantage of being relatively low-cost.The resolution with which PS allows 3D plant textures to be recovered, allows detailed inspection of leaf size/shape and characteristics/condition for implementing cost-effective plant identification and/or crop monitoring.The potential to employ this for widespread realisation of advanced capabilities with long term potential benefits to agriculture, is very great, since the low cost of the needed emerging equipment means that it is very accessible and so can be widely applied.Therefore, with world-wide food security becoming an increasingly urgent matter, plant imaging, and particularly 3D plant analysis, is likely to emerge as an increasingly important technology.The authors have no competing interests. | Machine vision systems offer great potential for automating crop control, harvesting, fruit picking, and a range of other agricultural tasks. However, most of the reported research on machine vision in agriculture involves a 2D approach, where the utility of the resulting data is often limited by effects such as parallax, perspective, occlusion and changes in background light – particularly when operating in the field. The 3D approach to plant and crop analysis described in this paper offers potential to obviate many of these difficulties by utilising the richer information that 3D data can generate. The methodologies presented, such as four-light photometric stereo, also provide advanced functionalities, such as an ability to robustly recover 3D surface texture from plants at very high resolution. This offers potential for enabling, for example, reliable detection of the meristem (the part of the plant where growth can take place), to within a few mm, for directed weeding (with all the associated cost and ecological benefits) as well as offering new capabilities for plant phenotyping. The considerable challenges associated with robust and reliable utilisation of machine vision in the field are also considered and practical solutions are described. Two projects are used to illustrate the proposed approaches: a four-light photometric stereo apparatus able to recover plant textures at high-resolution (even in direct sunlight), and a 3D system able to measure potato sizes in-the-field to an accuracy of within 10%, for extended periods and in a range of environmental conditions. The potential benefits of the proposed 3D methods are discussed, both in terms of the advanced capabilities attainable and the widespread potential uptake facilitated by their low cost. |
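Relating to the per-band yield heat maps described above for Fig. 9, the rough Python sketch below bins logged (latitude, longitude, size band) records onto a grid, producing one count map per band; the grid resolution, number of bands and variable names are illustrative assumptions.

```python
import numpy as np

def yield_heat_maps(lats, lons, bands, n_bands=5, grid=(50, 50)):
    """Count potatoes per GPS grid cell, separately for each size band.

    lats, lons: 1D arrays of GPS coordinates logged with each measurement.
    bands: 1D integer array (0..n_bands-1) giving each potato's size band.
    Returns an array of shape (n_bands, grid[0], grid[1]).
    """
    lat_edges = np.linspace(lats.min(), lats.max(), grid[0] + 1)
    lon_edges = np.linspace(lons.min(), lons.max(), grid[1] + 1)
    maps = np.zeros((n_bands,) + grid)
    for b in range(n_bands):
        sel = bands == b
        maps[b], _, _ = np.histogram2d(lats[sel], lons[sel],
                                       bins=[lat_edges, lon_edges])
    return maps  # each band can then be rendered as a heat map, e.g. with imshow
```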
31,458 | WheelerLab: An interactive program for sequence stratigraphic analysis of seismic sections, outcrops and well sections and the generation of chronostratigraphic sections and dynamic chronostratigraphic sections | Sequence Stratigraphy is a template for the analysis of sedimentary deposits with a focus on the patterns resulting from the variations in accommodation and sedimentation and the temporal order in which genetically related sedimentary strata are deposited.Within this template, major time significant surfaces enclose successions of strata to define sequences.The sequences are made up of systems tracts, which can be identified from their geometry, stratal terminations and characteristic stacking patterns; the stratal geometry, stacking patterns and facies heterogeneity of the sedimentary deposits are thus highlighted within a chronostratigraphic framework.The most commonly observed system tracts are the highstand systems tract, the falling stage systems tract, the lowstand systems tract and the transgressive systems tract.Each one of the system tracts is associated with a combination of relative sea level variation and sedimentation and can therefore be used to infer the relative sea level and sedimentation history associated with it.In the past two decades, sequence stratigraphy has revolutionized the field of sedimentary geology and has become a vital tool for the analysis of sedimentary basins.In comparison to sequence stratigraphy, traditional lithostratigraphy emphasizes the organization of strata based on similar lithological characteristics.The resulting difference between the two approaches is that the surfaces separating lithofacies are often diachronous or time-transgressive while the surfaces separating sequences are time-significant.The applications of sequence stratigraphic concepts lead to a better understanding of Earth’s local and global geological history; in addition, these concepts are applied in the petroleum and placer mining industries for predictive purposes .Although there is some consensus on the types of genetic stratal packages or sequence tracts and the architecture of stacking patterns and terminations that characterize them, the boundaries of sequences are often defined differently in different sequence stratigraphic frameworks.Sloss first used the term sequence to define stratal packages bound by unconformities, restricting the use of the term to masses of strata greater than group or supergroup rank; inadvertently limiting its use to regional scale stratigraphic studies .This definition of sequences has since been extended to include correlative conformities as sequence boundaries and is called the depositional sequence .The inclusion of correlative conformities as the continuation of sequence boundaries ensures that the concept of sequences could be extended to an entire sedimentary basin rather than being limited to the basin margin.We note that within the depositional sequence framework, there are divisions based on the position of the sequence relative to forced regression shallow marine deposits .Other frameworks used in sequence stratigraphy include the genetic sequence boundary and the transgressive–regressive sequence boundary frameworks.The genetic sequence boundary framework defined by Galloway uses maximum flooding surfaces as sequence boundaries while the transgressive–regressive framework defined by Embry and Johannessen uses subaerial unconformities as sequence boundaries at the basin margin and the maximum regressive surfaces 
as sequence boundaries within the sedimentary basin.These different frameworks can be classified based on the temporal and spatial scales in which they are most commonly used .The most likely temporal and spatial scales for each framework are influenced by the relative contributions of tectonics and eustasy although many fundamental sequence stratigraphic concepts are independent of temporal and spatial scale.Genetic stratal packages can be flattened and displayed on a time–space axis in the temporal order and within the spatial horizontal limits of deposition, to create a chronostratigraphic section or Wheeler diagram.Wheeler in his paper titled “Time-stratigraphy” first introduced the concept, replacing the vertical spatial dimension of facies in a sedimentary basin with relative geological time.Wheeler’s time-stratigraphy concept or chronostratigraphy has been further developed by extending its use to the interpretation of seismic data and subsequently to other types of geological data.The representation of geological data within a chronostratigraphic framework reveals significant features such as channel-levee complexes and non-depositional or erosional features that may not be obvious in the original domain.Schulz proposed the usage of the term “chronosomes” to describe a single chronostratigraphic unit containing all the facies within a single period of deposition and “supersome” to describe groupings of contiguous and depositionally sequential chronosomes.Subsequently, there have been many developments in the concepts and usage of Wheeler diagrams and in the software algorithms for chronostratigraphic analysis of seismic data.These developments are discussed in more detail in the next section.Modern computational seismic stratigraphy as coined by Stark et al., stems from the contributions of many authors over an extended period of time.Several authors worked on semi-automatic algorithms for interpreting seismic volumes, hence enabling chronostratigraphic analysis.Nordlund and Griffiths presented a program for the automatic construction of chronostratigraphic sections, although it had limitations in determining the correct order of laterally adjacent stratal packages.Marfurt et al. ,building on the work of Finn and Backus developed 3D dip and azimuth attribute volumes for interpreting seismic data.Stark developed the surface slice interpretation technique which favors surface segment interpretation over line interpretation and is based on finding all locations in a 3D survey with a constant wavelet phase.Zeng et al. ,introduced a method of mapping seismic data to a stratal volume using a stratal time model.Stark introduced the age volume technique in which each sample is associated with a relative age value using standard interpretation techniques or the unwrapping of instantaneous phase.Keskes et al. ,defined a transformation of the vertical axis of seismic data to a geological vertical scale based on using an equalization of histograms to determine equalized seismic sections and subsequently the rates of sedimentation and deposition.Lomask and Claerbout , Lomask and Guitton introduced and developed volumetric flattening, which involves calculation of dips everywhere in the data, resolving the dips into time shifts and vertically shifting the data to output a flattened volume.Qayyum et al. 
recognized that two main differences between the commonly used global interpretation techniques are the ways in which the time-line correlations are established and the ways in which the data is stored.We note that for all these methods the dimensions of the chronostratigraphic section generated could be 2D or 3D depending on the dimensions of the input data being 2D or 3D respectively.Qayyum et al. introduced the 4D Wheeler diagrams by including the thickness of stratigraphic units as a fourth dimension represented as a color overlay.There are other software programs for seismic interpretation such as Paradigm SKUA-GOCAD, Total Geotime, Opendtect Sequence Stratigraphic Interpretation System and Eliis PaleoScan, all of which use some semi-automatic or automatic techniques for tracking horizons; these programs however are not open-source codes.For a broader historical perspective on the development of Wheeler diagrams and related software see Stark et al. and Qayyum et al.Apart from being an open-source code, WheelerLab differs from the above-mentioned software, techniques and algorithms in several ways.First, this paper introduces a new method for interpreting chronostratigraphic sections called the dynamic Wheeler diagram or the dynamic chronostratigraphic section, which can be generated using WheelerLab.The dynamic chronostratigraphic section shows the sequential evolution of the chronostratigraphic chronosomes concurrently with the evolution of the genetic stratal packages.The principle of the dynamic Wheeler diagram can be extended to 3D and 4D Wheeler diagrams as well.Secondly, WheelerLab completely relies on the user to interactively define the geometry of the stratal packages.While automatic methods for tracking seismic horizons are useful, they still require a geologist's insight to ensure that the results agree with geological principles.The third difference is that WheelerLab is designed primarily to work on different types of stratigraphic data such as outcrop data, well sections and seismic sections.Lastly, WheelerLab can be used to interpret older seismic data that exist only as images as well as outcrop images and interpreted well sections, which can then be used to generate chronostratigraphic sections in relative geological time and relative distance units.WheelerLab is written in Matlab and has a graphical user interface.The program operates in two modes: The “Layer” mode and the “Surface” mode.The “Layer” button switches WheelerLab between the two modes when clicked.In the “Layer” mode, WheelerLab can be used to draw layers or shapes around sequence tracts.In the “Surface” mode, WheelerLab is used to draw surfaces or lines for interpretational purposes.Flattened chronosomes, which are the basic unit of the chronostratigraphic chart, are constructed using simple logical conditions to determine the horizontal extent of each chronosome and interactive user input to determine the vertical depositional order in relative geological time.The “Load Data” button is used to input and display seismic sections or seismic images in the top axes.When clicked it opens up a dialog box, which allows the user to select a data file in the SEG-Y file format or in a common image file format.The SEG-Y file format is a standard data format of the Society of Exploration Geophysicists for storing seismic data.When the “W” check box is selected, the program displays the seismic data with an overlay wiggle plot.Depending on the current mode of the program the “Add” button is used to interpret a layer or a
surface.If no data is added when the "Add" button is clicked, a dialog box is displayed, which allows the user to set the horizontal and vertical axis limits for a synthetic sequence stratigraphic section.The program is best controlled with a three-button mouse.To draw a layer or surface the user clicks the left mouse button to select points in a sequential order around the sequence tract.The user may delete a point by clicking the middle mouse button.To finish adding the layer or surface the user clicks the right mouse button and the program then draws a line through the points.If in "Layer" mode the line forms a closed shape.A dialog box is then displayed that allows the user to select from a list of common sequence stratigraphic classifications or create a new classification.The "Delete" button is used to delete one layer or one surface at a time, depending on the mode currently selected.To delete a layer, the user clicks the "Delete" button and then clicks within the layer to be removed.To delete a surface the user clicks on the "Delete" button then clicks on or near the surface to be removed.Once all sequence tracts and surfaces have been identified the user clicks on the "Done" button.The layers will be displayed with distinct colors along with an image containing movable lines matching the colors of the layers.The user may then rearrange the lines in the correct chronostratigraphic order with the youngest layer at the top and the oldest layer at the bottom.Once the rearrangement is complete the user clicks the "Done" button again.To display the Wheeler diagram, the user clicks the "Display Wheeler" button and the Wheeler diagram is displayed in the lower axes.If the "U" checkbox is selected, the program displays sequence tracts that have the same classification using the same color.When the "Save Figures" button is clicked the program generates nine output files in the current MATLAB folder, appending the current date and time stamp to the names.The interpreted sequence stratigraphic section and the chronostratigraphic section are saved in Portable Network Graphics and MATLAB FIG formats while the dynamic chronostratigraphic section is saved in Graphics Interchange Format and Audio Video Interleave formats.Lastly, a color map key and numerical coordinates of layers and surfaces are generated and saved.A video demonstrating the usage of the program is included in the supplementary section.WheelerLab can be used to interpret data in the different sequence stratigraphic frameworks and encourages the interpretational flexibility of the user by facilitating the creation of new label categories if necessary.In this section we provide three examples of chronostratigraphic sections generated using WheelerLab, following the depositional sequence framework.The first is a synthetic sequence stratigraphic cross-section of a basin with a shelf slope break and its corresponding chronostratigraphic chart.An unconformity and a condensed section exist between the first highstand systems tract and the transgressive systems tract and between the transgressive systems tract and the shelf margin wedge respectively.The unconformity and condensed section are better observed as erosional/non-depositional gaps in the chronostratigraphic section.The second example shows the application of WheelerLab to the southern part of the Las Mingachas outcrop, which is a late Early–Middle Aptian carbonate platform of the western Maestrat Basin, Iberian Chain, Spain.A condensed section and a smaller subaerial unconformity
can be observed in the chronostratigraphic section.Four systems tracts are identified following the depositional sequence framework: the highstand systems tract, the transgressive systems tract, the forced regressive wedge systems tract and the lowstand prograding wedge systems tract.The Basal surface of forced regression can be clearly observed in this case.The third example is a sequence stratigraphic interpretation of real seismic data and its corresponding chronostratigraphic chart.The data is a seismic section from block F3 of the northern part of the Netherlands North Sea sector.The first package shows a retrogradational pattern of parallel layers and is interpreted as a transgressive systems tract.The second package shows steeply dipping clinoforms with a sigmoid-oblique reflection pattern.The package consists of normal regression deposits and also downlaps on the surface below indicating a slow relative sea level rise.It is therefore interpreted as a highstand systems tract.The third package consists of prograding clinoforms and shows a clear offlap pattern.It consists of forced regression deposits indicating a relative sea level fall and it is therefore interpreted as the falling stage systems tract.The fourth package forms an aggrading wedge at the base of the third package and onlaps onto it.It consists of normal regression deposits hence it is interpreted as a lowstand systems tract.The fourth package is overlain by steeply dipping normal regression deposits with a sigmoid reflection pattern and is interpreted as the second highstand systems tract.The sixth package consists of thin forced regression deposits and is interpreted as the second falling stage systems tract.The seventh package aggrades and onlaps onto the falling stage systems tract, indicating rising sea level and high sedimentation rates.It is interpreted as the second lowstand systems tract.The eighth package consists of parallel onlapping reflectors with an aggradational pattern.It is interpreted as the second transgressive systems tract occurring when relative sea level rise outpaces sediment supply.The final package consists of aggrading/prograding and onlapping clinoforms and is interpreted as the third highstand systems tract.Maximum flooding surfaces exist between the transgressive systems tracts and the highstand systems tracts.A subaerial unconformity exists between the second highstand systems tract and the second transgressive systems tract.It is better observed as an erosional or non-depositional gap in the chronostratigraphic chart.Basal surfaces of forced regression exist between the highstand systems tracts and the falling stage systems tracts.This chronostratigraphic section produced by WheelerLab is comparable to results produced from the same data set using the other software.The dynamic chronostratigraphic section can be viewed as an avi video file or a gif file.WheelerLab is an open-source interactive graphical user interface program for the sequence stratigraphic analysis of stratigraphic data and the subsequent generation of chronostratigraphic sections.This paper and program introduce a new concept for interpreting chronostratigraphic sections called the dynamic chronostratigraphic section.The dynamic chronostratigraphic section shows the sequential evolution of the chronostratigraphic chronosomes concurrently with the evolution of the genetic stratal packages.This facilitates better communication of the sequence stratigraphic process.In addition, WheelerLab is ideal for creating synthetic 
sequence stratigraphic cross-sections and for interpreting older seismic data that may exist only as images, as well as outcrop images and interpreted well sections, which can then be used to generate chronostratigraphic sections in relative geological time and relative distance units. | WheelerLab is an interactive program that facilitates the interpretation of stratigraphic data (seismic sections, outcrop data and well sections) within a sequence stratigraphic framework and the subsequent transformation of the data into the chronostratigraphic domain. The transformation enables the identification of significant geological features, particularly erosional and non-depositional features that are not obvious in the original seismic domain. Although there are some software products that contain interactive environments for carrying out chronostratigraphic analysis, none of them are open-source codes. In addition to being open source, WheelerLab adds two important functionalities not present in currently available software: (1) WheelerLab generates a dynamic chronostratigraphic section and (2) WheelerLab enables chronostratigraphic analysis of older seismic data sets that exist only as images and not in the standard seismic file formats; it can also be used for the chronostratigraphic analysis of outcrop images and interpreted well sections. The dynamic chronostratigraphic section sequentially depicts the evolution of the chronostratigraphic chronosomes concurrently with the evolution of identified genetic stratal packages. This facilitates a better communication of the sequence-stratigraphic process. WheelerLab is designed to give the user both interactive and interpretational control over the transformation; this is most useful when determining the correct stratigraphic order for laterally separated genetic stratal packages. The program can also be used to generate synthetic sequence stratigraphic sections for chronostratigraphic analysis. |
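WheelerLab itself is a MATLAB GUI, so the snippet below is only a schematic Python/matplotlib sketch (not the authors' code; the package extents, labels and ranks are invented for illustration) of the core transformation it performs: each digitized genetic package keeps its lateral (distance) extent but is flattened onto the level given by its rank in the user-defined chronostratigraphic order, so laterally restricted packages leave blank regions that read as erosional or non-depositional gaps in the Wheeler diagram.

```python
# Schematic Wheeler-diagram construction (illustration only, not WheelerLab code).
# Each "package" is a digitized layer: its x-range (relative distance) plus the
# rank it received when the user ordered layers from oldest (0) to youngest (n-1).
import matplotlib.pyplot as plt

# Hypothetical digitized packages: (label, x_min, x_max, chrono_rank)
packages = [
    ("HST-1", 0.0, 6.0, 0),   # oldest
    ("TST",   2.0, 9.0, 1),
    ("FSST",  5.0, 10.0, 2),
    ("LST",   6.5, 10.0, 3),
    ("HST-2", 1.0, 8.0, 4),   # youngest
]

fig, ax = plt.subplots(figsize=(7, 3))
for label, x0, x1, rank in packages:
    # Flatten the package onto its relative-time level: the lateral extent is
    # kept, the thickness is discarded; uncovered x-ranges appear as gaps.
    ax.fill_between([x0, x1], rank, rank + 0.8, alpha=0.7, label=label)

ax.set_xlabel("Relative distance")
ax.set_ylabel("Relative geological time (older \u2192 younger)")
ax.set_title("Schematic Wheeler (chronostratigraphic) diagram")
ax.legend(loc="upper right", fontsize=7)
plt.tight_layout()
plt.show()
```

A dynamic chronostratigraphic section of the kind WheelerLab saves would simply draw the packages one rank at a time and store each frame (for example as a GIF), rather than plotting them all at once as in this sketch.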
31,459 | Optimization method to construct micro-anaerobic digesters networks for decentralized biowaste treatment in urban and peri-urban areas | The world economy is dominated by a linear model of resource consumption that follows a “take-make-dispose” pattern.This model leads to significant losses of valuable products while increasing the anthropogenic impact on the environment, the depletion of resources and the social inequalities.This model is not sustainable and it compromises both the services provided by the ecosystems and the human living conditions.The concept of circular economy rises as an alternative pattern to tackle the current issues of the linear model.The circular economy is defined by Geissdoerfer & al. as a regenerative system in which resource input and waste, emission and energy leakage are minimized by slowing, closing, and narrowing material and energy loops.The model aims to be sustainable with low environmental impacts, and at the same time allowing an economic growth and more employment.Even if the traditional linear end-of-pipe system remains the mainstream, a favorable context enabled by technological and social factors allows nowadays a partial transition towards circular economy models.The enforcement of the Landfill Directive in 1999 was an important step towards circular economy made by EU waste policy.It marked a decisive shift from landfill to a more optimized EU waste management policy which gives priority to waste prevention, followed by reuse, recycling and recovery, and aims to avoid landfilling as much as possible.In this regulatory context, the biowaste management became a priority for the EU as it represents a large fraction of the municipal waste, 37% in average of the total waste stream.The biowaste is defined by the Waste Framework Directive as the biodegradable garden and park waste, food and kitchen waste from households, restaurants, catering and retail premises, and comparable waste from food processing plants.The current policies aim at developing recycling or valorization pathway for the biowaste.The anaerobic digestion is considered as one of the most cost-effective biological treatment of the biowaste.It allows energy recovery and the production of nutrient-rich digestate while reducing environmental impacts of the waste disposal.The number of AD plants for biowaste treatment increased in EU, from 244 plants in 2010 to 688 in 2016.These plants treat the biowaste alone or more generally mixed with other substrates but this distinction is not provided in the above data.Their average capacity is 31,700 t/year, corresponding to large scale installations embedded in centralized management scheme.However, small scale AD plants have recently attracted interest to shift toward a more decentralized biowaste management.This approach offers key advantages compared to the conventional centralized management system with a decrease of transport requirement, a potential increase of community involvement and an opportunity for strengthening local nutrient and energy loops.Moreover, Walker et al. demonstrated that small scale AD can be technically viable with potential biogas production performance similar to large scale AD.A decentralized biowaste treatment network is currently developed by the H2020 DECISIVE project and described in Fig. 
1.It aims at bringing closer the biowaste sources, the treatment sites, and the product outlets to create local biowaste valorization loops.The system relies on decentralized and micro-scale technologies currently under development: micro Anaerobic Digestion, small combined heat and power Stirling and Solid-State Fermentation.The digestate is valorized in the agricultural area embedded in the urban territory or close by.The relevance and the efficiency of the decentralized approach rely on a close integration of the whole treatment chain, from the biowaste generation to the coproducts valorization and on higher proximity between the different stages of the processing pathway.Therefore, the shift to decentralized valorization network implies an adaptation of the biowaste management at local and territory scale that requires a specific approach to optimize its spatial organization.The optimization of waste management systems is well covered in the literature and the findings remain valid for the specific case of municipal biowaste management.Ghiani et al. shows in a review of operations research in solid waste management that studies on strategical issues are generally based on Mixed Integer Linear Program.This method is flexible and allows the integration of several decision variables into a single model.It can support the selection of the location of facilities and the allocation of waste or products, the treatment capacity of facilities and the type of treatment technologies.Few models handle several waste streams simultaneously while most only focus on a single one.Optimization process generally aims at minimizing the overall system costs.But recent studies tend to integrate several and potentially conflicting objectives: economic, environmental and for some, social to better reflect all dimension of sustainability.The design of the DBTN presented in this paper raises the questions of: the number of mAD requested, their location, their respective capacity and, the allocation of the biowaste sources and of the digestate.It is a common capacitated location-allocation problem, compliant with a linear programming approach.However, none of the papers identified previously focused specifically on a DBTN based on mAD.Compared to other wastes management systems, the DBTN proposed is quite simple, with a single waste stream, fully treated in an AD step and with single type of outlets targeted.The corresponding MILP is then a simplified adaptation of the generic MILP proposed by.However, it differs by the very high number of mAD involved in the system which directly impacts the size of the MILP model, the time to solve it and the underlying data management.Due to the scarce information on mAD costs and environmental impacts, the model was built around a single objective of minimizing the payload distances.Moreover, the high proximity between the components of a decentralized system implies a very detailed knowledge of the territories studied.A comprehensive Geographic Information System need then to be incorporated in the MILP to supply data at the target scale.Yadav et al. underlines the potential interest of linking GIS and MILP while they recognize the scarce literature on that matter.This paper describes a method to optimize the location and the size of mAD of DBTN based on a MILP model feed by a GIS analysis of the territory.The method is applied to the case study of The Grand Lyon, France.The design of the decentralized biowaste treatment system comprised 5 steps as described in Fig. 
2.The system targets the following biowaste: the food waste generated by households, restaurants, canteens of public and private activities: schools, hospitals, public administrations and companies, and the green waste from private or public gardens.The locations of the biowaste sources and their annual production have to be accurately estimated.The biowaste collection is supposed to be done door-to-door.Once collected, the biowaste is treated in mAD and the biogas is valorized with a combined heat and power Stirling engine.The mAD and the CHP are enclosed in the same treatment unit.mAD is designed to process from 50 to 200 t/y and needs to be fed by a regular stream of biowaste.Thus, alternative sources should be identified to buffer negative consequences due to fluctuations of non-permanent biowaste sources.Similarly, the fraction of green waste processed should also be restricted as mAD is primarily designed to use FW.The locations of the potential mAD sites are restricted by environmental regulations and urban planning rules but also by proximity constraints related to its accessibility for vehicles and the heat valorization.These sites have to be identified with an appropriate GIS multi-criteria analysis.The digestate is then exported to agricultural lands located near-by the mAD.The amount of digestate that can be used locally has to be estimated based on the agricultural surface, the legal threshold of organic nitrogen fertilization and the current organic nitrogen load in the territory.The transport distances of biowaste and digestate are estimated with the real road infrastructures for better accuracy.Finally, the MILP model is built and solved to design a decentralized system that treats a target amount of biowaste while being well integrated within the existing management systems.Potential mAD site should be at least about 75 m2 to accommodate all the equipment and the required space around.The sites too narrow are also removed based on a surface and a compactness criteria.Biowaste sources, mAD sites and outlets are precisely inventoried which generate a large quantity of data.Their inclusion in the MILP is possible with JuMP, a dedicated package for modeling linear program in Julia language and a tailored data processing.A GIS method was set up to extract information on the biowaste sources, the mAD potential locations, and the digestate outlets at a fine scale level.The ratio of FW production per capita is provided by the literature.The FW production in each building is then calculated by disaggregating the census data at the building scale.The residential buildings are extracted from topographic databases according to their main use or function.The number of dwellings of the buildings are estimated with their floor surface.The surface of the buildings is calculated with their shape and the number of floors is estimated with their height as shown in Fig. 
4.The lawn areas from public parks and private gardens are identified with remote sensing approach, a method used for extracting spatial objects from aerial or satellite images.The analysis is based on ortho-images, available at a higher resolution than the satellite images.Several studies, also showed that the accuracy of the image classification can be improved by the integration of Lidar data or nDSM to the multispectral imagery.Since the Lidar data have become more widely available in the recent years, this additional information is included in the present study.With high-resolution images, the pixels are usually smaller than the spatial objects studied and characterize only a small part of them.In this context, the image classification methods based on the properties of pixels provide suboptimal results.To overcome these issues, a geographical object-based image analysis is applied in this study.The core idea of this method consists in grouping similar pixels into meaningful image-objects, based on their spectral properties: color, size, shape, and texture and the ones from the surrounding pixels.The ortho-images combined with nDSM, created from Lidar data, are segmented with SAGA GIS tool.The segments are filtered and classified in the targeted land covers categories with a Maximum likelihood method.The segments under the shadow of trees or buildings can hardly be identified.A simple de-shadowing method is applied in this study by reclassifying the segments under shadow according to the type of surrounding land covers.The accuracy of the image classification is finally assessed with a confusion matrix.The present inventory includes the following catering activities in urban areas: school and health facility canteens, restaurants, administrative and company canteens.Due to the complexity of getting estimates, the biowaste is assessed as a whole, without distinction between the different types of FW.The lists of health facilities are usually provided by the relevant administration or it can be extracted from the OSM.The number of meals is calculated with the number of people taking the meals in each facility, including the number of patients and, for the hospitals, the number of employees.The list of restaurants and catering services is extracted from the official register of companies or from OSM database.The distinction between the two types of restaurants is done in the register of companies but usually not in other sources as OSM.The Fast-foods are not included in this inventory because: 1) the FW is usually collected with packaging and 2) some consumers take away their food and dispose the waste in unknown locations.The number of meals served per employee is greater in collective catering services than in restaurants.This approach provides only rough estimates, but it remains the most accurate with the data currently available.The digestate is the remaining solid and liquid fractions of the biowaste after the AD process.It contains most of the nutrients from the input material and can be used as fertilizer or organic amendment for agriculture.To close the organic nutrient loop locally, the digestate has to be used in farms located nearby the mAD units.This proximity is achieved by targeting the urban or peri-urban agricultural areas.The UPA usually includes the farms located nearby the urban areas, the vegetable farms located in urban areas, the micro-urban farms, the community gardening, and the emerging rooftop and indoor farming.Only the peri-urban farms, similar to 
conventional farms, are included in the present study.Urban farming is a promising approach but it is still mostly at demonstration stage.A prospective study is hard to perform due to the lack of common information.Moreover, the suitability of the digestate for urban farming is still under assessment.The locations and the areas of the agricultural plots are extracted from the official register of the Common Agricultural Policy in Europe, from land use databases, or from national databases.The small vegetable farms not identified in CAP and in land use databases can be extracted from official registers of companies.In the latter case, the agricultural area is estimated with official farm statistics.The Nitrate directive limits the use of organic fertilizer in agricultural plots presenting a high risk of nitrate pollution.The details of the regulations are country-specific, but usually two criteria are included: the slopes and the distance to surface water.The average slope of each agricultural plot is estimated with a digital elevation model.The surface waters are extracted from national or OSM databases.Moreover, the general regulation states that the spreading should not occur close from housing.The quantity of nitrogen from livestock manure is estimated for each municipality with the livestock census data and the ratio of nitrogen production per capita and per type of animal.In EU countries, the sludge from wastewater can also be used as an organic fertilizer, except in vegetable farms.The location and some characteristics of the wastewater treatments plants are published at the EU level or at national scale.The target sites for mAD have to comply with the regulation that sets distance limits with surrounding elements, to be close from a heat outlet, to be accessible and large enough to cover the space needs of the processing unit.The multi-criteria analysis is summarized in Fig. 
6.Regulations concerning waste treatment plant installations are country-specific.In France, the AD installations are included in the list of Installation Classified for the Protection of the Environment heading 2781 and they have to comply with the corresponding regulation.The Ministerial Order of the 26 November 2009 sets an exclusion zone of 35 m around water bodies and 50 m around populated areas except the ones supplying the installation or benefiting from the heat.The gas storage of the mAD is estimated lower than 1t which does not generate specific constraints from the gas storage regulation.In France, the new constructions are very restricted in a 500 m protective perimeter around historical monuments.These areas are then excluded from the potential sites for mAD.Protected areas are also considered not suitable for the mAD plants.The mAD units have to be accessible for the different waste suppliers, the users of digestate or any transportation services as required by the safety regulation.There are no official criteria to qualify the accessibility in this context and the maximum distance from the roads has been set arbitrarily at 50 m.The biogas produced by the mAD is valorized by cogeneration.The heat power generated is low and each mAD can only serve few consumers located at proximity to avoid heat loss.The electrical power generated is only used to feed the mAD unit.So, no additional spatial constraints are considered for the electrical valorization.The methodology was tested on the case study of Lyon Metropole, the Grand Lyon.The GL is a local authority comprising 59 municipalities around Lyon and located in the Rhône Department and the Auvergne-Rhône-Alpes region.The GL covers 534 km2.In 2014 its population was 1.3 million inhabitants, which represent about 600,000 households.It is located within a greater urban area of 2.2 million inhabitants, the second largest in France.The population density is 2383 inhabitants.km−2, with central city Lyon having density of 10,583 inhabitants.km−2.The agricultural land covers about 10,000 ha, corresponding to 20% of the total case study area.The GL separately collects households recyclables, bulky and hazardous waste.The remaining waste is collected door-to-door as residual waste.There is currently no separated collection of the biowaste, which is disposed in the residual waste bin.The location and the biowaste generation in GL were estimated with the data described in the Table 3 and the parameters in Table 4.The quality assessment of the remote sensing analysis showed that 80% of a sample of segments classified as “grass areas” were accurately identified.There was about 5% of false positive and 11% of false negative.The biowaste generated and sorted by the targeted sources in GL was estimated to 102,013 t y−1.Biowaste sources were mainly composed of household FW and green waste.Other sources were representing less than 5% of the global quantity each.The household and school FWs were well distributed over the urbanized areas.The green wastes were mostly located in peri-urban areas while the restaurant FWs in densely populated areas.The mAD network should then target in priority household and school FW, completed with green waste or restaurant FW according to the location of the treatment unit.The location and the shape of the agricultural parcels were extracted from the Graphical Parcel Register.The RPG was completed for the vegetable farms with the SIREN database1 for their locations and an average of crop surfaces from the Agreste 
database.2,In GL, 64% of the agricultural lands were located inside Nitrate Vulnerable Zones, and for the study, all farms were considered under the directive nitrate regulation.The restricted areas for digestate spreading were built with the IGN BD Topo and BD Alti databases.The remaining agricultural areas covered 9085 ha, 91% of the total agricultural land in GL.Based on the official livestock census and the ratio of nitrogen production per capita, the total production of nitrogen from the livestock in GL was estimated to 104.8 t y−1 and the nitrogen load to 11.2 kg N.ha−1.The location and characteristics of the wastewater treatment plants was made available by the Ministry of Ecological and Solidarity Transition,3 a more accurate source than the generic EU database.The analysis was simplified by selecting only the plants located inside GL.The database details the uses of the sludge for each plant and only the sludge classified as “agricultural spreading” and compost was retained for this analysis.The content of nitrogen of the different materials was given by the MAFOR reference.Due to the complexity and the uncertainties of the wastewater database, the different materials were gathered in three groups with an estimation of their respective nitrogen content: the sludge limed, the dewatered sludge or the urban sludge thickened and the compost of sludge and waste.In GL, 1059 t y−1 of sludge were spread in agricultural areas and 1818 t y−1 were composted, which correspond to 69 t.y−1and 40 t y−1 of nitrogen, respectively.The average nitrogen load was then 11.6 kg ha−1.y−1 for the sludge products.The vegetable farms were not included in the analysis as the use of sludge is prohibited on these types of farms.The nitrogen loading from livestock and wastewater sludge in GL was then about 22.8 kg N.ha−1.y−1.The quantity of organic fertilizer available in the territory was substantially lower than the legal threshold of 170 kg N.ha−1.y−1 of nitrogen even if there were some spatial heterogeneity over the territory.The quantity of nitrogen usable at agricultural plot scale was estimated with the surface suitable for organic fertilization, the legal maximum of nitrogen spreadable and the quantity of nitrogen brought by sludge and the livestock.The quantities of digestate were calculated with the assumption that the digestate contains 10 g N.kg−1 of raw material and that no others mineral fertilizers were used.On average, there was a potential of about 14.7 t y−1 of digestate per hectare of agricultural area and 17 t ha−1 for the vegetable farms.Under the study assumptions, the GL territory could then potentially consume 132,641 t y−1 of digestate.Table 6 summarizes the criteria for the potential location for mAD units in GL.About 6529 sites were identified by the multi-criteria analysis, ranging from 76 m2 to 254,000 m2.Each site was considered as a single potential site.The suitable sites were mainly around commercial or industrial areas or on the outskirts of the residential neighborhoods.In dense areas, there were only a few sites, often located in or close to public gardens.The MILP parameters allow fine adaptation to the specific needs of the analysis: 1) the type of biowaste sources and the quantity of biowaste to be treated by the decentralized valorization network, 2) the maximum collection distances thresholds and 3) the technical constraints of mAD such as their treatment capacity or new restrictions for the biowaste sources.These parameters make possible to define and compare a wide 
range of biowaste treatment scenarios: biowaste from households only combined with a suitable threshold for walking distances; the restaurants and canteens only combined with longer collection distance; etc.The first scenario studied targeted 10% of the biowaste from the territory, all sources included, with a maximal collection distance of 5 km.The final model included 27,169 biowaste sources, 3351 potential mAD sites and 921 outlets.The optimal decentralized treatment network returned by the MILP and shown on Fig. 10 involved 170 mAD plants.The network would handle 9135 t y−1 of biowaste for a payload-distance of 2882 km.t.y−1 and a total collection distances of 1363 km.y−1.The digestate would be valorized in 194 agricultural plots, for a payload-distance of 855 km.t.y−1 and a total transportation distance of 52 km.y−1.The solution would involve mainly sites in the urban periphery which are close to both biowaste sources and agricultural areas.The maximum distance threshold considered in the scenario was high, allowing collection distances up to 5 km.Despite this low constraint in terms of proximity, the mAD plants would be located in average at 364 m from their sources, which is suitable for a waste collection by foot, like the “bring point” waste system.The network would collect mainly household kitchen wastes.This result was expected as households represent about 66% of the total biowaste generation and are evenly distributed over the territory.The program selected preferably small units with 90% of them treating less than 64 t y−1.The second scenario tested take into account biowaste treatment systems already implemented on the territory.The MILP targeted 20% of the biowaste from the territory considering the existence of a centralized biowaste treatment plant on the territory.As in 2017 there were no specific biowaste treatment systems in GL territory, a hypothetical processing plant was virtually located in the peri-urban area with processing capacities between 10,000 and 30,000 t y−1 of biowaste.Fig. 
11 shows the comparison between the treatment network with and without the centralized treatment plant.The optimal network involved 273 mAD plants in the model without central treatment plant and 143 mAD plants when one is added.The central plant handled all biowaste sources located nearby and treat a biowaste quantity corresponding to its minimal capacity of treatment.The model with a central treatment plant has a significantly higher overall payload-distance than a fully decentralized management system.This results support the idea of a better transport efficiency of a decentralized system compared to the centralized one.It also shows that a decentralized biowaste treatment is feasible even in a territory with a mix of dense urban areas and peri-urban areas and in complement to a centralized system.The GIS inventory had to overcome two main challenges: 1) the sources need to be located accurately, at least at the neighborhood level, and 2) the biowaste produced by each source has to be estimated with a method adapted to the type of source.These challenges were tackled by adapting methodologies at the target geographical scale, using suitable databases and disaggregating data when required.These methods involved many individual steps and were based on numerous assumptions that might have impacted the accuracy of the model outcomes.The sources of biowaste were generally accurately located while the estimate of the biowaste generation was much more prone to errors.Some inventory tasks remain complex.Hence, the remote sensing analysis is tedious and its relevance should be assessed in regard to the time invested.The GIS multi-criteria analysis used for the identification of potential mAD sites reflects the legal framework.The criteria only translate partially environmental and social issues, while setting aside the economic ones.Therefore, the number of sites delineated is high, about 6529.Further studies might demonstrate the lack of suitability for a part of these sites, and an improved method could include a broader range of criteria for better integration of local constraints.The linear programming method proposed in this study proved to be a suitable approach to design DBTN.The constraints and objectives of the model explicitly reflected the targeted optimization framework.The method clearly separates the inventory process from the model definition.It allows developing a generic MILP independently of a specific set of data supply by the GIS analysis.As a result, the model can be applied in any geographical contexts, as long as inventory data are available and well-shaped.The accuracy of the results depends on the quality of the input data and the method for distances estimation.The MILP developed in this paper is rather simple compared to MILP generally developed in waste management.However, it differs greatly from the usual methods by the high number of biowaste sources and potential mAD sites considered.This specificity is directly linked to the decentralized approach.The tailored data processing with efficient tools allows building and solving the MILP in reasonable calculation time whatever the size of the problem.The impacts of the biowaste generation uncertainties on the final mAD network design were assessed with a stochastic approach.The MILP was solved 10 times, each time with a random modification of the quantity estimate of each biowaste source in a range of ±20%.The mAD selected in each solution were compared with those selected with the initial inventory.On average, the 
differences were about 2.0% in absolute value for the transportation distances and 1.3% for the payload-distance.76% of the mAD were selected in all the cases and 85% of them in 8 of the 10 tests.A large part of the network shape remained similar even with slight random changes of the biowaste generation.As a result, the solution provided by the MILP displayed low sensitivity to the biowaste generation uncertainties.The objective function of the actual MILP is limited to the minimization of the payload distances and do not include explicitly the associated economic, environmental or social impacts.In literature, most of the MILPs are focusing on economic objectives.To design a system compliant with the sustainable development paradigm, some models also take into account environmental or even social dimension in the objective functions.At the current stage of the mAD development, the information about economic, environmental or social impacts is still too scarce to be used in the model.When these parameters will be available, their inclusion in the model will be the next step of the approach.The present paper described an approach to design decentralized mAD networks for the valorization of urban and peri-urban biowaste.The method was successfully tested in case of the Grand Lyon.The method relies on a close connection between a tailored MILP and a GIS analysis that generated very detailed data.The inventory step proved to be complicated and time consuming, especially due to the size of the study area and the level of details desired.Hopefully the current open data trend, the EU Open data Policy for example, may contribute to simplify GIS methodology while potentially increasing the quality of the results.The optimization step relies on a MILP method that offers key advantages and allows finding an optimal network according to specific scenarios.The link between detailed GIS data and a MILP proved to be an efficient solution for optimizing local and small valorization loop while covering the need of a large territory.The flexibility of the MILP and its combination with GIS analysis make it a very powerful approach for optimizing numerous kinds of waste management systems involving several treatment steps and transport and at different scales.This study also shows that the quantity data to handle and the size of the MILP were not an issue with suitable tools and approaches.The method developed provides the foundation for an operational tool.Key improvements are already identified to increase its maturity.Firstly, the MILP model should include several objectives to explicitly take into account economic, environmental and potentially social dimensions of the system.Secondly, several steps of the method should be refined in collaboration with key stakeholders of the waste management.Their knowledge of the territory and their own development strategy would improve the selection of the potential mAD sites and refine the parameters of the MILP model. | Innovative small scale treatments solutions are currently proposed to handle the growing need of biowaste valorization through a more circular economy. These new approaches are designed to be embedded in a decentralized treatment scheme which raises new challenges for the biowaste management at the territorial scale. This study, aimed at developing a method to design decentralized and micro-scale Anaerobic Digestion (mAD) networks in urban and peri-urban areas. 
A mixed integer linear program (MILP) was set up to identify the number of mAD, their sites and their capacities in order to minimize the payload-distances of biowaste and digestate transportation while taking into account the technical constraints of the system. A Geographic Information System (GIS) methodology was developed to feed the MILP model with very fine-scale data about (1) the location and the characterization of the biowaste sources and of the digestate outlets (agricultural areas), and (2) the location of the potential sites for mAD based on a multi-criteria analysis that includes environmental regulations, urban planning rules, site accessibility and heat outlets for valorization. The method was applied to the territory of The Grand Lyon Metropole (534 km2) in France. Optimized mAD networks were identified through the MILP according to different scenarios tested. |
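The MILP described above is formulated in the paper with JuMP (Julia); purely as an illustration of its structure, the following Python/PuLP sketch (all names, quantities and distances are hypothetical, not taken from the case study) shows the typical layout of such a capacitated location-allocation model: binary variables open candidate mAD sites, continuous variables route biowaste from sources to opened sites and digestate from sites to agricultural outlets, the objective minimizes total payload-distance, and constraints enforce the 50-200 t/y capacity range, a territory-level treatment target and outlet capacities. Maximum collection distances can be imposed simply by dropping source-site pairs beyond the threshold before the model is built.

```python
# Simplified location-allocation MILP for a decentralized mAD network (sketch).
# Toy inputs below stand in for the GIS inventory described in the text.
import pulp

sources = {"s1": 40.0, "s2": 80.0, "s3": 60.0}          # biowaste generated, t/y
sites = ["m1", "m2"]                                     # candidate mAD locations
outlets = {"o1": 120.0, "o2": 150.0}                     # digestate demand cap, t/y
d_src = {("s1", "m1"): 0.4, ("s1", "m2"): 2.1,           # source->site distance, km
         ("s2", "m1"): 1.0, ("s2", "m2"): 0.6,
         ("s3", "m1"): 1.8, ("s3", "m2"): 0.7}
d_out = {("m1", "o1"): 2.0, ("m1", "o2"): 4.0,           # site->outlet distance, km
         ("m2", "o1"): 3.5, ("m2", "o2"): 1.5}
CAP_MIN, CAP_MAX = 50.0, 200.0                           # mAD capacity bounds, t/y
TARGET = 0.5 * sum(sources.values())                     # e.g. treat 50% of the biowaste
DIGESTATE_RATIO = 0.9                                    # t digestate per t biowaste (assumed)

prob = pulp.LpProblem("mAD_network", pulp.LpMinimize)
open_ = pulp.LpVariable.dicts("open", sites, cat="Binary")
x = pulp.LpVariable.dicts("x", d_src, lowBound=0)        # biowaste flow, t/y
y = pulp.LpVariable.dicts("y", d_out, lowBound=0)        # digestate flow, t/y

# Objective: total payload-distance (t.km/y) for collection plus digestate export.
prob += (pulp.lpSum(d_src[k] * x[k] for k in d_src)
         + pulp.lpSum(d_out[k] * y[k] for k in d_out))

for s, q in sources.items():                             # cannot collect more than generated
    prob += pulp.lpSum(x[(s, m)] for m in sites) <= q
prob += pulp.lpSum(x.values()) >= TARGET                 # territory-level treatment target
for m in sites:                                          # capacity bounds on opened sites
    inflow = pulp.lpSum(x[(s, m)] for s in sources)
    prob += inflow <= CAP_MAX * open_[m]
    prob += inflow >= CAP_MIN * open_[m]
    # Mass balance: digestate leaving the site matches the biowaste treated there.
    prob += pulp.lpSum(y[(m, o)] for o in outlets) == DIGESTATE_RATIO * inflow
for o, cap in outlets.items():                           # nitrogen-based outlet capacity
    prob += pulp.lpSum(y[(m, o)] for m in sites) <= cap

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([m for m in sites if open_[m].value() > 0.5])      # opened mAD sites
```

In the Grand Lyon case study the equivalent model is built over roughly 27,000 sources, 3,300 candidate sites and 900 outlets, which is why the tailored data handling around the solver matters as much as the formulation itself.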
31,460 | Protein structure aids predicting functional perturbation of missense variants in SCN5A and KCNQ1 | Of an estimated 20,000 nonsynonymous single nucleotide polymorphisms in each individual's protein-coding genome, approximately 10 are presently predicted to be clinically actionable .nsSNPs in KCNQ1 and SCN5A, are associated with heritable diseases of the heart including dilated cardiomyopathy , cardiac conduction disease , short QT syndrome , sick sinus syndrome , types 1 and 3 congenital long QT syndromes , and Brugada syndrome .However, in aggregate, rare nsSNPs in SCN5A and KCNQ1 also appear at ~2% in the population, being more common than the rare arrhythmia disorders associated with these genes, suggesting only limited roles in disease.Determining the significance and effect size of these nsSNPs will be of increasing importance as more people undergo genome or exome sequencing .Models used to predict the effect of these nsSNPs are most commonly trained on the information-poor inputs of binary disease-inducing/benign classification.Binary classification reduces information.Moreover, the disease-inducing vs. benign distinction ignores penetrance and the underlying molecular phenotype—or potentially multiple overlapping molecular phenotypes—that may be most informative for therapy.A striking example involves patients presenting with type 3 long QT syndrome due to a gain-of-function SCN5A variant that also impairs trafficking of the encoded channel NaV1.5.Therapeutic targeting of this gain-of-function with the antiarrhythmic drug mexiletine can increase cell surface expression of the mutant channel, leading to the unintended consequence of exaggerating the long QT phenotype .Using literature datasets we have recently curated for both IKs and INa, we test the hypothesis that incorporating variant-specific functional features from KCNQ1 and SCN5A nsSNPs and structure-based features into prediction models will improve our ability to predict if previously uncharacterized nsSNPs will result in altered currents.Secondary structural elements are independent predictors of deleterious variants in SCN5A and can improve current prediction models , suggesting the potential utility of structure-based approaches.In fact, the highest densities of disease-associated variants across the entire spectrum of proteins fall largely in structured, functional segments: the structure/function of these molecules are compromised in the disease state .Here, we generated a set of models able to predict INa and IKs variant-specific current phenotypes.Identifying the variant-specific functional perturbation will provide an additional tool to geneticists and physicians to determine if variants are likely disease-causing and to more accurately stratify the degree of risk that carriers who present without a phenotype will eventually develop channelopathy-based heart disease.For INa, we analyzed peak current, steady state V1/2 activation and inactivation, late/persistent current, and recovery from inactivation .For IKs, we analyzed peak current, V1/2 activation, and activation and deactivation time constants .We selected these functional features because these parameters are most consistently reported in the literature.We only included functional data from KV7.1 variants when functional protocols involved homotetrameric mutated KV7.1 coexpressed with KCNE1, since this protocol was most commonly reported in the literature.Details about how each dataset was collected is contained in the original papers. 
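As a minimal illustration of the curation just described (column names and values below are hypothetical, not the published dataset), the pandas sketch reduces per-publication records to one row per variant: each parameter is expressed relative to the WT measurement reported in the same publication and then averaged across publications that characterized the same variant.

```python
# Minimal curation sketch (hypothetical columns, toy values): normalize each
# reported parameter to the WT record from the same publication, then average
# replicate characterizations of the same variant across publications.
import pandas as pd

records = pd.DataFrame([
    # gene, variant, pmid, peak current (pA/pF), V1/2 activation (mV)
    ("SCN5A", "WT",     111, 800.0, -40.0),
    ("SCN5A", "R878C",  111,  10.0, -38.0),
    ("SCN5A", "WT",     222, 650.0, -42.0),
    ("SCN5A", "R878C",  222,   0.0,  None),
    ("SCN5A", "E1784K", 222, 600.0, -50.0),
], columns=["gene", "variant", "pmid", "peak", "v_half_act"])

wt = (records[records.variant == "WT"]
      .set_index("pmid")[["peak", "v_half_act"]]
      .rename(columns=lambda c: c + "_wt"))
mut = records[records.variant != "WT"].join(wt, on="pmid")

# Peak current as a fraction of WT; V1/2 as a shift (mutant minus WT), per paper.
mut["peak_norm"] = mut["peak"] / mut["peak_wt"]
mut["dv_half_act"] = mut["v_half_act"] - mut["v_half_act_wt"]

# One row per variant: average across publications reporting the same variant.
summary = (mut.groupby(["gene", "variant"])[["peak_norm", "dv_half_act"]]
           .mean()
           .reset_index())
print(summary)
```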
Briefly, all variants were normalized to WT measurements included in the same publication, i.e. peak current mutant/peak current WT, or V1/2 activation – V1/2 activation, etc.Most functionally characterized variants in SCN5A were characterized by heterologous expression in human embryonic kidney cells, so we used only patch-clamp data derived in human embryonic kidney cells when available.For KCNQ1-KCNE1, most variants were characterized in CHO cells.We averaged the individual parameters in cases where multiple articles reported functional characterization of the same variant in the same cell system.No experimental structure of transmembrane domains of human KV7.1 exists, so we generated models using the recently released Xenopus structure of a closed pore and open voltage sensor and the human sequence NP_000209.2 with 91% identity .We used comparative modeling within the Rosetta scripts utility in Rosetta 3.8 to build KV7.1 .We rebuilt loops on KV7.1 monomers, followed by rebuilding the functional homotetramer with symmetry for 1000 models.Most best-scoring structures had reasonable Cα RMSDs between 1 and 3.We selected the best scoring model for subsequent analysis.We built models both with, and without, human calmodulin bound; however no significant differences were observed in structure-based features, therefore, we selected KV7.1 with CaM bound for the analysis presented here.We generated two human NaV1.5 structural models using the human sequence NP_000326.2 with the American cockroach sodium channel NaVPaS structure , and electric eel NaV1.4 structure .Models of NaV1.5 were refined with small, unstructured segments rebuilt using established protocols as for KV7.1, generating 1000 models.Most best-scoring structures had reasonable Cα RMSDs between 2 and 4.We selected the best scoring model for subsequent analysis.We tested the performance of structure-based features using both models, with very similar results.Because models based on the NaVPaS structure allow the inclusion of more variants in the analysis, we report here features calculated using those structural models.Our objective was to predict variant-specific functional perturbations for the cardiac ion channels KV7.1 + KCNE1 and NaV1.5.We used the variant classifier models PROVEAN , PolyPhen-2 , and SIFT ; sequence alignment-based rate of evolution , and mutation rates derived from BLAST position specific scoring matrices, and Point Accepted Mutation matrix score ; and several structure-based features including burial propensities, neighbor counts, neighbor identities and what we term functional density.These predictive features are described below and summarized in Table S1.As can be seen in the higher off-diagonal R2s, predictive classifiers were modestly degenerate; functional density weight only, i.e. 
the local enrichment for variants that had been functionally characterized, was more degenerate. NeighborCount is derived from the number of nearest neighbors weighted by distance and within 11.4 Å of the residue of interest, a cutoff found to be optimized to predict protein structure. NeighborVector is a variation of neighbor density, scaled by how evenly distributed the nearest neighbor residues are to the residue of interest. Amino acid neighbor count (aaneigh) and amino acid neighbor vector (aaneighvector) are analogous to NeighborCount and NeighborVector, respectively, modified to account for amino acid-specific propensities for a given degree of burial. The NeighborCount, NeighborVector, aaneigh, and aaneighvector predictive features were generated using the BioChemical Library and the structures described above. Functional density was calculated by summing the functional perturbations of previously characterized variants, each weighted by the inverse of its distance to the residue of interest, ρx,j = Σi Δfunctionx,i/di,j, where ρx,j is the functional density of the jth residue for the xth functional parameter, Δfunctionx,i is the change in functional parameter x for the ith variant, and di,j is the distance between the centers of mass of residues i and j. Variants at residue j itself are included in the sum only when the amino-acid substitution differs from the one being evaluated (i.e. mutationi ≠ mutationj). A graphical representation is shown in Fig. S3. The distribution of neighboring residues is similar between KV7.1 and NaV1.5, with a first shell of contacting residues at ~6 Å and a second shell at ~11 Å. Additionally, we calculated the functional density weights alone to test whether signal derived from functional densities could be attributed to protein region bias in the variants that have been functionally characterized. Because the number of features in our dataset was large relative to the number of variants, regularization was used to fit predictive models. We used a fully relaxed LASSO penalty, which has good predictive performance overall. Prediction models were 10-fold cross-validated. After feature selection, the relaxed generalized linear model was bootstrapped to obtain bootstrapped percentile intervals for quantities of interest. We report the adjusted coefficient of determination, adj. R2, with 95% confidence intervals as a measure of overall prediction of the relaxed LASSO model. We focused on models where LASSO shrinkage yielded at least one significant predictive feature and the lower bound of the naïve 95% confidence interval for the adj. R2 was >0.10. Relatively few models were able to meet these minimum criteria. Note that since the functional density features were calculated from the data, we additionally subjected the fully relaxed LASSO to a higher-level 10-fold cross-validation procedure which included a functional density construction step. This accounts for any variability or overfitting that might result from using data-determined functional covariates. We further classified loss-of-function variants by degree of functional perturbation, for INa defined as <50% peak current and for IKs <50% peak current or >10 mV positive shift in V1/2 activation, to estimate the impact of functional densities on this task. We used commonly available variant sequence-based classifiers PolyPhen2, PROVEAN, BLAST-PSSM, and rate of evolution individually, all combined, and all combined with peak current functional density in a logistic regression model. We generated 95% confidence intervals on AUCs from the candidate models using bootstrap with 2000 replicates and used a two-sided DeLong test to evaluate ROC difference significance. Histograms of all functional parameters analyzed are shown in Figs.
1 and 2 and Table 1.For homotetrameric KV7.1 variants, the distribution of IKs current maxima is skewed towards 0% current compared to WT function, likely a reflection of literature bias.The distribution of INa variant current maxima is bimodal with centers at 0% and 100%.IKs V1/2 activation is also skewed towards more positive values, whereas INa V1/2 activation is more evenly distributed about 0 mV.INa late current is skewed towards higher values.Time constants for IKs activation and inactivation and INa recovery from inactivation are clustered around WT with very wide ranges, populated with few points at extremely long characteristic times.Using a linear model, we could predict peak current, a proxy for overall channel function, for both IKs and INa.Interestingly, sequence-based predictors, especially BLAST-PSSM, had the most significant association with IKs peak current but were not as integral to predicting INa peak current.Conversely, functional density for peak current provided most of the signal for INa but did not contribute meaningfully to IKs peak current prediction.This suggests a spatial dependence of peak current for INa not recapitulated by other published predictive models, contrary to IKs.This difference may be due in part to the comparatively large fraction of reported SCN5A variants that do not perturb peak current yet are still associated with cardiac diseases compared to KCNQ1, such as LQT3 variants with increased late current but no change in peak current; BLAST-PSSM is sensitive to evolutionary fitness of residue changes which may be more homogeneously dependent on peak current for KCNQ1 and more heterogeneous for SCN5A.Alternatively, the spatial distribution of IKs peak current may be more heterogeneous than for INa.The functional density weight, a measure of the number of functionally characterized variants proximal to a residue of interest, was selected out of the IKs peak current model, but not for INa suggesting a modest sampling bias in regions of NaV1.5 sensitive to peak current perturbation.We were able to significantly model IKs V1/2 activation.However, no models could reliably predict INa V1/2 activation or inactivation.The IKs V1/2 activation variance explained is relatively high, 0.29 with a 95% confidence interval lower bound of 0.12.The functional density feature had a significant p-value, suggesting a three-dimensional localization of regions that influence V1/2 activation.Most IKs and INa functional parameters assessed could not be predicted with stable fully relaxed LASSO-regularized linear models and a lower bound of the 95% confidence interval in adj. R2 >0.10.In many cases for these functional parameters, at least one of the 10 folds in the cross validation resulted in only an intercept, i.e. 
β coefficients for all inputted features shrunk to 0.For some functional parameters, such as time constants for IKs activation and inactivation and INa late current and recovery from inactivation times, lower numbers of characterized variants and relatively low dispersion of values mean the data themselves are limiting prediction.Alternatively, or in addition, our chosen feature set may contain little information relevant to the prediction of these values, likely the case for INa V1/2 activation and inactivation, which may be under sampled for the functional density analysis.For comparison with published variant classifiers predicting binary functional perturbation of these two channels , we calculated receiver operating characteristic curves for models trained using only published models as features and models trained additionally with structure-based features.We generated binary classifications of loss-of-function SCN5A and KCNQ1 variants using criteria described above in the methods section.We calculated the ability of several variant classifiers to correctly classify LOF variants.The resulting areas under the curve from logistic models trained to predict KCNQ1 LOF were as follows: PolyPhen-2, rate of evolution, BLAST-PSSM, PROVEAN, all published predictive models, all published predictive models with functional density for peak current.Most variant classifiers performed reasonably well and the addition of structural information did not meaningfully improve classification for this task.However, the resulting AUCs from logistic models trained to predict SCN5A LOF were as follows: PolyPhen-2, rate of evolution, BLAST-PSSM, PROVEAN, SIFT, all published variant classifiers, all published variant classifiers with functional density for peak current.This improvement in classification ability for LOF variants in SCN5A when adding functional density for peak current suggests structure-based features contribute information not contained in other predictive features an observation gaining appreciation elsewhere .Most IKs and INa functional parameters analyzed could not be predicted reliably: IKs time constants of activation and inactivation; and INa V1/2 activation/inactivation, recovery from inactivation, and late current.However, three important functional parameters could be predicted: IKs peak current and V1/2 activation and INa peak current.In two of these models, IKs V1/2 activation and INa peak current, the functional density features have the greatest predictive value, indicating three-dimensional enrichment of regions of the proteins that influence these functional parameters.“Functional densities” are measure of how dense pathogenic variants are near the residue of interest, i.e. are they near “hotspots” that influence a particular function.Given the influence of the functional density calculation in predicting IKs V1/2 activation and INa peak current, there is likely a spatial influence over both of these parameters.As can be seen in Figs. 4 and 5, there are regions where variants that have a large influence on IKs V1/2 activation and INa peak current are localized.Not surprisingly, the greatest perturbations in IKs V1/2 activation are in the regions of the channel known to be functionally critical: the selectivity filter, voltage-sensing helix in the voltage sensing domain, and in the constriction point in the middle of the pore, as we have seen previously. ,The S6 helix in KV7.1 influences activation in part through its intrinsic flexibility, a necessary property for activation. 
,S0 helix has been found to provide stabilization to the voltage sensing domain. ,S4 helix is canonically responsible for voltage-dependent activation .Interestingly, the variants most disruptive to INa peak current are located in the extracellular region of the channel, mostly near the selectivity filter.The pore region of voltage-gated sodium channels is canonically responsible for Na+ conduction and is also enriched BrS1 variants, an NaV1.5 loss-of-function disorder .These data suggest the utility in leveraging combined structural and previously determined functional perturbation datasets to predict functional disruption of previously uncharacterized channel variants.To identify potential commonalities among the most challenging variants to predict, we identified the five least congruent predictions, at extremes both greater and less than experiment, for IKs peak current, IKs V1/2 activation, and INa peak current.All variants, with one exception, occur in the transmembrane region and on structured segments, not flexible loops or linkers.Some commonalities for challenges in predicting IKs peak current and V1/2 activation prediction are the extracellular half of the voltage sensing domain, especially S3 and S4 helices, and the interface between the pore loop helix and helices S5 and S6.The S3 and S4 helices of the voltage sensing domain undergo large conformational changes in response to voltage which are not captured by the static structure we used in this analysis.However, the distribution of predictions both greater than and less than experiment within these two segments suggests changes in function in these regions are heterogeneous possibly due to individual residues in these regions having special roles in voltage-gated activation.Interestingly, several of the challenging IKs peak current variants are located on the S0 helix in KV7.1.We previously observed an anomalous sensitivity to expression level in the S0 helix and suggest the protein is stabilized by intramolecular interactions between the S0 helix and the rest of the voltage sensing domain. ,Challenging variants for INa peak current are more evenly distributed though the protein molecule.Classification of variants inherently reduces the richness of available data, in our case the continuous functional perturbation induced by variants in SCN5A and KCNQ1.However, to assess how well structure-based features contribute to predicting variant loss-of-function classification, we built logistic models trained on variants classified as loss-of-function or not loss-of-function.For INa, structure-based features improve the AUC; for IKs there is no significant improvement.This is consistent with our previous KCNQ1 work suggesting sequence and evolutionary-based features, BLAST-PSSM and residue rate of evolution, yield a competent classification model and suggests alternative features will be needed to further improve prediction of KCNQ1 variants .For SCN5A, structure-based features improve the classification of loss-of-function variants from an AUC of 0.69 to 0.78.Recently.Clerx et al. 
attempted to predict classification of functionally compromised INa for many of the functional parameters we report here .The authors report modest classification ability for INa late current and V1/2 activation/inactivation with better performance predicting complete loss of function.We too find limited ability to predict most functional perturbations; however, we found significant and quantitative correlations between predicted and experimental INa peak current and challenge the use of functional classification in favor of quantitative perturbation prediction.Interestingly, the authors also noted difficulty in predicting late current which we recapitulate here suggesting this feature is a more challenging target to predict.Furthermore, here we put forward a feature based on knowledge of the three-dimensional structure, functional density, and demonstrate its utility in predicting variant phenotype.The field is still evolving on how to include in silico predictions and experimental functional data quantitatively .We suggest the model presented here could be useful in a pipeline whose first-pass filter aims to detect pathogenic variants.Our previous publication suggested the degree to which a loss-of-function variant produces non-negligible penetrance was an INa peak current 50% or less than that of WT.We suggest this implies the need to have a variance explained of experimental data from our predictions >50% such that the probability a variant predicted to be WT actually has <50% peak current is very low.Predicting around 0.2 of the variance in relevant IKs and INa functional parameters we show here is significant; however, further improvement is needed before the predictive models will be useful in classifying variants for clinical use.The dataset used was limited by those variants available in the literature, which are biased towards functionally perturbed variants.We chose to analyze IKs generated with homozygous KV7.1 variants because this configuration is reported most consistently in the literature.In a majority of cases, KV7.1 variants are heterozygous in individuals.Furthermore, we have begun to investigate the influence of variant-specific functional perturbation on clinical presentation , but the exact relationship is complicated and warrants further investigation.Another limitation is that the structural models are imperfect estimates of the functional state they represent and are also only representative of a single functional state in channels known to have at least two functional states.Models reflecting greater conformational diversity may be another source for improved features.We have derived predictive features from three-dimensional structures of NaV1.5 and KV7.1 and have demonstrated these features improve our ability to predict variant-induced functional perturbations in each channel.These predictive features are based on recognizing that residue positions for pathogenic variants are likely to be clustered in three-dimensional space in proximity to other pathogenic residues.Based on this recognition, we can account for approximately 0.2 of the variance in IKs peak current, IKs V1/2 activation, and INa peak current.For IKs V1/2 activation and INa peak current, structure-based features contribute meaningfully to the predictive model and in a way not recapitulated by commonly used sequence, evolutionary features, or genetic variant classifiers methods.For predicting variant-induced loss-of-function, structure-based features contribute meaningfully to INa but not IKs.This 
work was supported by the National Institutes of Health K99HL135442 to B.M.K.; R35GM127087 to J.A.C.; HL122010 to A.L.G., C.R.S., and J.M.; and P50GM115305 to D.M.R. | Rare variants in the cardiac potassium channel K V 7.1 (KCNQ1) and sodium channel Na V 1.5 (SCN5A) are implicated in genetic disorders of heart rhythm, including congenital long QT and Brugada syndromes (LQTS, BrS), but also occur in reference populations. We previously reported two sets of Na V 1.5 (n = 356) and K V 7.1 (n = 144) variants with in vitro characterized channel currents gathered from the literature. Here we investigated the ability to predict commonly reported Na V 1.5 and K V 7.1 variant functional perturbations by leveraging diverse features including variant classifiers PROVEAN, PolyPhen-2, and SIFT; evolutionary rate and BLAST position specific scoring matrices (PSSM); and structure-based features including “functional densities” which is a measure of the density of pathogenic variants near the residue of interest. Structure-based functional densities were the most significant features for predicting Na V 1.5 peak current (adj. R 2 = 0.27) and K V 7.1 + KCNE1 half-maximal voltage of activation (adj. R 2 = 0.29). Additionally, use of structure-based functional density values improves loss-of-function classification of SCN5A variants with an ROC-AUC of 0.78 compared with other predictive classifiers (AUC = 0.69; two-sided DeLong test p = .01). These results suggest structural data can inform predictions of the effect of uncharacterized SCN5A and KCNQ1 variants to provide a deeper understanding of their burden on carriers. |
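The two structure-based ideas described in this record — a "functional density" feature measuring how densely characterized perturbing variants cluster around a residue in three dimensions, and the comparison of loss-of-function classification AUC with and without that feature — lend themselves to a brief illustration. The Python sketch below is not the authors' pipeline: the coordinates, the 12 Å cutoff, the stand-in sequence features, and the labels are all hypothetical, and on random toy data the resulting AUC values carry no meaning; it only shows the general shape of such a workflow.

```python
# Illustrative sketch only: a toy "functional density" feature and its effect on
# cross-validated loss-of-function (LOF) classification AUC. All inputs are hypothetical.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical inputs: C-alpha coordinates for each residue of a channel model,
# residues with known perturbing variants, per-variant sequence/evolutionary scores
# (stand-ins for PSSM, PROVEAN, etc.), and binary LOF labels.
coords = rng.normal(scale=30.0, size=(500, 3))            # 500 residues, x/y/z in Angstroms
known_perturbing = rng.choice(500, size=60, replace=False)
variant_residues = rng.choice(500, size=200, replace=False)
seq_features = rng.normal(size=(200, 3))
lof_label = rng.integers(0, 2, size=200)                   # 1 = loss of function

def functional_density(residues, hotspot_residues, coords, cutoff=12.0):
    """Count characterized perturbing residues within `cutoff` Angstroms of each residue."""
    d = cdist(coords[residues], coords[hotspot_residues])
    return (d <= cutoff).sum(axis=1).astype(float)

density = functional_density(variant_residues, known_perturbing, coords)

def cv_auc(X, y):
    # 5-fold cross-validated probabilities from a logistic model, scored by ROC-AUC.
    p = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, p)

auc_baseline = cv_auc(seq_features, lof_label)
auc_with_density = cv_auc(np.column_stack([seq_features, density]), lof_label)
print(f"AUC without density: {auc_baseline:.2f}, with density: {auc_with_density:.2f}")
```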
31,461 | Nonlinear analysis on progressive collapse of tall steel composite buildings | Progressive collapse is a relatively rare event in which sudden loading, such as an earthquake or explosion, causes local damage in a structure that then spreads to other structural parts. Researchers first examined progressive collapse after the structural failure of the 22-story building at Ronan Point, and especially after the World Trade Centre disaster, more and more research and design effort has been directed to this area. Progressive collapse is defined as the expansion of an initial local breakdown of an element into other elements of the structure, ultimately leading to the collapse of the whole structure or a disproportionately large part of it. The American Society of Civil Engineers and the American National Standards Institute published a document, ASCE 7/ANSI, called Minimum Design Loads for Buildings and Other Structures. In 1972, ASCE 7/ANSI introduced a design provision to avoid progressive collapse, stating that structures should resist progressive collapse due to "local failure caused by severe overloads". Section 1.3 of ANSI, entitled General Structural Integrity, presented a more complete statement of the structural performance needed to resist abnormal loadings such as explosions. Section 1.3 of ASCE 7-93 likewise stated that buildings should resist local damage while remaining stable as a whole; the damage to the structure should be constrained to prevent disproportionate collapse, and the structure should possess alternate load paths to transfer loads, without failure, from damaged regions to adjacent regions. Two design alternatives are provided in the code: direct design and indirect design. In direct design, the structural component is designed with alternate load paths or specific local resistance. In indirect design, the structures are designed implicitly to resist progressive collapse by warranting minimum levels of strength, continuity, and ductility. Under direct design, the alternate load path approach allows for local damage but provides alternate load paths to absorb the damage and prevent major collapse, while the specific local resistance approach provides sufficient strength for critical load-carrying members to withstand failure at the point of explosion or accident, with critical members designed to sustain direct impact or explosion. Under indirect design, resistance is considered implicitly through providing minimum levels of strength, continuity and ductility. The alternate load path method, a significant design approach for reducing progressive collapse, is referenced by a number of design codes including GSA and DoD. APM allows local failure to occur when the structure is subjected to an extreme load, but seeks to provide alternate load paths so that the initial damage can be contained and major collapse averted. The technique can be employed for designing new buildings or for checking the capacity of an existing structure. In the context of progressive collapse, robustness is a desirable property of a structure that helps to reduce its sensitivity to disproportionate collapse. To evaluate the robustness of a structure against disproportionate collapse following a component loss, a measure of robustness is desirable, since such a measure can express the sensitivity and behavior of the structure under progressive failure. Li et al. evaluated the robustness of steel frames against progressive collapse, presenting a finite element modeling study on the progressive collapse of steel frames under a sudden column removal scenario. Their study showed that for a column-instability induced progressive collapse mode, the effect of damping was greater than the effect of the strain-rate sensitivity of the material. Fu investigated progressive collapse in a high-rise steel building, considering four column removals and two stability systems. Kim et al. studied the effects of the concrete slab on the progressive collapse robustness of steel moment frames. Stylianidis and Nethercot considered the modeling of connection behavior affecting the progressive collapse phenomenon. Li and Sasani, and Tsai et al., conducted progressive collapse analyses that showed the maximum displacement response after element failure. Shariatmadar and Beydokhti tested three full scale precast beam-to-column connections with different detailing, i.e., straight spliced, U-shaped spliced, and U-shaped spliced with steel plates within the connection zone, as part of a five-storey frame under reverse cyclic loading, and compared their performance with monolithic connections. Choi et al. studied the prevention of progressive collapse of building structures due to member disappearance caused by accidental actions. Choi et al. proposed a design of precast beam-column connections using steel connectors constructed by bolting steel tubes and steel plates fixed within the precast components. Yang and Tan conducted seven experimental tests focusing on the performance of bolted steel beam-column connections in catenary action; the extremities of the beams in the tests were pinned as a simplified boundary condition, and the experimental results displayed the behavior and failure modes of different bolted connections, especially their deformation ability in catenary action. Kai et al. simulated 3-D buildings including sudden column removal and investigated the resisting contribution of the slab. During the past two decades, significant dynamic analyses have been conducted on the disproportionate collapse of steel structures under fire scenarios or column removal cases. Jiang et al. presented simulations of the progressive collapse resistance of steel moment frames under localized fire. Their research showed that, after the buckling of the heated column, the temperature elevation of the beams has a significant influence on the displacement increase of the frames. Full scale testing of the progressive collapse phenomenon is problematic because of its high cost; however, a considerable number of full-scale component and sub-assembly tests related to progressive collapse have been conducted over the last ten years. Finite element modeling is a good alternative. In the current study, a 3-D finite element model is developed using ABAQUS software. Eight 3-D finite element models, covering two lateral resistance systems, two column removal scenarios and two types of plan, were modeled to investigate progressive collapse. The maximum reaction forces, reaction moments, displacements, accelerations, tensile damage and compressive damage are recorded to evaluate the progressive collapse phenomenon and provide crucial data for additional design guidance. Many researchers use the finite element method to evaluate progressive collapse; therefore, the use of proper modeling techniques is necessary for an accurate examination of structural behavior under progressive collapse. In this study, a 3-D full scale finite element model has been used to evaluate progressive collapse. All models were designed using the commercial multi-story building analysis program ETABS and then rebuilt in ABAQUS. To ensure that the structures in this study match conventional construction, the structural sections were designed using ETABS software. These structures are designed to withstand dead, live and seismic loads based on the applicable design code. ETABS software is widely used to model tall buildings; however, it has shortcomings for assessing progressive collapse, for example, it cannot simulate tensile and compressive failure of concrete materials. To evaluate progressive failure, structures with common dimensions and geometry that are widely used in construction were selected. To evaluate the progressive collapse phenomenon in the high-rise steel building, the steel moment frames were modeled using ABAQUS software. The steel building has seven spans of 5 m in both directions, and the height of each floor is 3.3 m.
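The eight-model matrix described above (two lateral resistance systems × two column removal scenarios × two plan types) can be enumerated programmatically. The short sketch below is purely illustrative; the case labels and their ordering are an assumption, not the study's own case numbering.

```python
# Illustrative only: enumerate the 2 x 2 x 2 study matrix described above.
# The ordering and "Model" labels are hypothetical, not the paper's case numbers.
from itertools import product

plan_types = ["regular plan", "irregular plan"]
lateral_systems = ["moment frame", "moment frame + inverted-V CBF"]
removal_scenarios = ["corner column removal", "side column removal"]

cases = list(product(plan_types, lateral_systems, removal_scenarios))
assert len(cases) == 8  # eight 3-D finite element models

for i, (plan, system, scenario) in enumerate(cases, start=1):
    print(f"Model {i}: {plan}, {system}, {scenario}")
```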
Two types of lateral load resistance systems, including moment frame and moment frame with inversed V centrically braced system, were considered for steel buildings.Also, two types of plans, including the regular plan and the irregular plan, are evaluated.For all models, the concrete slabs with thickness of 200 mm are selected.Furthermore, BOX profiles, I-shape beam, and 2UNP were used for columns, beams, and braces, respectively.The dimensions and properties of steel sections are provided in Tables 1 and 2.In most experiments and studies it has been observed that with the sudden collapse of a key member of the structure such as columns, the structure behavior enters the plastic region.The model possesses non-linear material property, non-linear geometric behavior, and non-linear analysis.Isotropic hardening rule was used, with a Von Mises yielding criterion, to simulate the plastic deformations of the models’ shell and beam components.To define the steel plastic properties, the true stress-strain diagram is entered into the software.The true stress-strain diagram is determined from the engineering stress-strain diagram.Also, rate-effects could change the failure mechanism of joints and influence the collapse capacity .When dynamics analysis is used for models, the density should be defined for members.Density defined with the mass density option, 7850 kg/m3 value is used."To define the elastic phase of steel, Young's modulus and Poisson's ratio are introduced. "The 2.1 × 105 N/mm2 value and 0.3 for the Young's modulus and Poisson's ratio were selected, respectively.The plastic part of the stress-strain curve is described with the plastic option.Steel grade ST37 was applied for all the structural steel.The yield stress of 370 N/mm2 is applied for all steel members’ material.The stress–strain relationship was considered multi-line mode.The concrete’s constitutive behavior is modeled by a three-dimensional continuum, plasticity damage model .The concrete damaged plasticity model may model concrete in all element types including beams, trusses, shells, and solids.Inelastic behavior of concrete is shown via isotropic damaged elasticity concept along with compressive plasticity and the isotropic tensile.The concrete mass density value is 2400 kg/m3.The strength of concrete compressive was considered a nominal 28 N/mm2.The compressive yielding curve was assumed a typical concrete .Conservatively, the tensile cracking stress was approximately assumed 5.6% of the peak compressive stress .The stress-strain relationship in tension softens as load is transferred to the reinforcement after tensile cracking.The concrete tensile strength is neglected after concrete cracking .Since the models of this study have many components and are full scale modeled, so the beams, columns, and braces are simulated employing the B31 linear beam element.Using this element, a large amount of structural calculation is reduced.The slab is simulated using the S4R element, which has four nodes and six degrees of freedom per node.Using the REBAR element from the ABAQUS library, reinforcement was represented in each shell element through defining the reinforcement area at the appropriate depth of the cross-section.The main reinforcement included is the A252 mesh assumed acting 20 mm from the top of the slab and the 20 mm thick at the bottom.This reinforcement is defined in both directions of the slab.The model meshing is shown in Fig. 
2.Mesh sensitivity analysis was performed to determine the size of the mesh in each section of the study system.For this purpose, mesh size was changed in different parts of the study system and the results of structural analysis were obtained for these changes.The mesh sensitivity analysis was used to compare the force-displacement and frame connections rotation.Table 4 shows the mesh size in the model parts which mentioned.Table 4 also shows the results for the selected meshes.According to Table 4, and given that the difference in the response given to C.N.5 is negligible, therefore, based on the mesh sensitivity analysis, the criterion for determining the meshing of the studied system is according to the answer given in C.N.5.Therefore, in order to verify the system, the minimum mesh size of structure parts are 30 mm.The structural beam elements are modeled close to the major beam elements’ centerline, and the concrete slab is modeled via shell elements at the slab centerline.Then, “tie constraint” is used to define the contact between the beam and the concrete slab.In this contact technique, the nodes of the beam cross section are constrained to the nodes of slab edges.Furthermore, the steel beam to column connections is rigid.Join connector described braces to beams and columns connections.The brace ends translate the constraint to columns’ movement by this connector, but the connections may freely rotate.“Encastre” option was selected to define the columns support.Using this option, all rotations and displacement are fixed at the base level.The loads are considered dead loads, containing self-weight of the structure and 25% of the live load 2.5 kN/m2.In this study, two analysis steps were defined; at the first step, static load was applied to the models, and then one of the columns was removed according to the scenario when the static load was still present.Large experimental model is too difficult to assume the structure’s progressive collapse.Finite element is a great option to investigate issues such as progressive collapse.Using finite element method, it is possible to consider different sort of models if the step analyses, elements, contacting prescriptions, and models assumptions are validated by experiment.To validate the proposed models, one story composite steel frame model was made.The model repeated the full scale experiment of a steel-concrete composite frame through Astaneh Asl .The model was developed based on the same modeling techniques of part 2 of this article.As the full sale tests, the frame size, slab thickness, and boundary conditions are exactly the same .W14 × 61, W21 × 44, and W18 × 35 sections sizes are defined for columns, beams on wide direction, and beam on length directions, and the section properties are given in Table 3.In , the floor system was of a steel gravity system with shear tab connections.The connection between slabs and beams connection is modeled as rigid which limits slab’s translation to beam.However, slab nodes are free to rotate.The column bottom was defined constant for the proposed model.Steel A572, grade 60 was defined using the material function of Abaqus.Concrete was modeled by the same method and property explained in part 2.2.Dynamic analysis was chosen for the model.Furthermore, the join connector was selected to describe beams to columns connections.By this connector, the beam ends translate the constraint to columns’ movement; however, the connections are free to rotate.Fig. 
9 depicts the modeling results of force-displacement relationship of column removal compared with experimental result, which is the relationship of force-displacement of the full scale test of .Displacement distribution of finite element model is shown in Fig. 10.Fig. 11 displays the experimental model in maximum displacement time; there is a good agreement between the stiffness and yield strength.Catenary cables effects are ignored in modeling because B31 element cannot model adjunct members such as catenary cable.Also, join connector is selected to model shear tab connection which cannot exactly model its behavior.Therefore, the model without catenary cable has some errors obtained from force-displacement curve.Another two-story composite steel frame was modeled to obtain a realistic modeling.To reach this goal, the experimental model of a steel-concrete composite frame using Wang et al. was selected.Finite element model is shown in Fig. 12.Joint 1 and Joint 2 were selected to compare finite element model and experimental sample.Fig. 13 shows the finite element and experimental results of moment rotation relationship of Joints 1 and 3.As indicate in Fig. 13, J-3 moments magnitudes are 190 kN m and 195 kN m for experimental and numerical models respectively, which shows 2.5% differences.Furthermore, moments magnitudes are 147 kN m and 151 kN m for experimental and numerical models at J-1.By comparing the results, it can be observed that there is a good agreement in the initial stiffness and maximum moment magnitude.The alternate path method offered by DoD and GSA is applied to perform the progressive collapse test of the high rise steel composite buildings.The dynamic effect is incident independent under severe incident, such as blast and impact.Sudden column removal shows a more appropriate design scenario, containing the dynamic effect while it is incident independent.Although such a scenario is not the same in dynamic effect to column damage caused by impact or blast, it has the effect of column failure in a short duration proportionate to the structure’s response time.Therefore, abrupt column removal is applied as the main design scenario in DOD and GSA.The building structure ability under abrupt missing column is tested applying non-linear dynamic analysis with 3-D finite element models.The columns to be removed and the subsequent response of the frames were investigated.Also, two types of plans, including the regular plan and the irregular plan, are evaluated.The maximum forces, moment, displacements, acceleration, tensile damage and compressive damage for each member in the scenario are recorded.Analysis cases in this study and the columns’ removal cases are listed in Table 4.Fig. 1 shows the column removal cases.Fig. 14 shows the acceleration contour under the effect of sudden column removal for all models.In this study the unit of acceleration for all models is m/s2.Fig. 
14 shows the collapse acceleration for all models, including the models with regular and irregular plans, with moment frames and braced frames, under the effects of the two types of column removal scenario. The acceleration contour is a parameter that shows the acceleration of an element as it falls or translates from one point to another. Comparing the models with a moment frame and a moment frame with centric braces, the maximum collapse acceleration occurs at the removed column and directly above it in the moment frame model, whereas the maximum collapse acceleration for the model with a moment frame with centric braces occurs at other points of the structure. This indicates that the braces alter the structural behavior by transferring part of the shock to other parts of the structure. Comparing the corner and side removal cases for all models, it can be seen that the acceleration magnitude for the side removal cases is greater, which means the speed of structural displacement is higher in these scenarios. For cases 1 and 3, when column A1 was removed abruptly, the node on top of the removed column vibrated and reached maximum vertical displacements of 9.7 mm and 14.5 mm, respectively. As indicated in Fig. 15, the displacement peak occurs immediately after column removal. Eventually, the response settles at 7 mm and 10.4 mm. For cases 2 and 4, when column A4 was abruptly removed, the node on top of the removed column vibrated and reached maximum vertical displacements of 9.7 mm and 4.55 mm, respectively. As indicated in Fig. 16, the displacement peak is reached immediately after column removal. Eventually, the response in cases 2 and 4 settled at 7.5 mm and 4.45 mm, respectively. Fig. 14e and f show the acceleration contours under the effect of sudden removal. The displacement of the node on top of the removed column increased over a very short time and reached maximum vertical displacements of 10.9 mm and 11.3 mm. Fig. 17 shows the displacement peak after column removal. The response eventually settled at 8.1 mm and 7.8 mm for cases 5 and 7, respectively. Regarding cases 6 and 8, when column A4 was quickly removed, the node over the removed column vibrated and reached maximum vertical displacements of 10.17 mm and 10.1 mm. Fig. 18 shows the displacement peak after column removal. The response finally settled at 8 mm and 7.7 mm for cases 6 and 8, respectively. The displacement history curves of the node above the removed column for the regular moment and braced moment frames are shown in Fig. 15. As can be seen, the displacement in the moment frame is smaller than in the braced moment frame when these two models are compared under the corner removal case. The displacement history curves of the node above the removed column for the regular and irregular moment and braced moment frames, for both corner and side column removal, are shown in Figs. 17 and 18. As can be seen, the displacements in the irregular moment frame and the irregular braced moment frame are the same in both scenarios. In addition, comparing the corner and side removal cases, the displacement in the side case is greater. The adjacent column was overloaded initially and started deforming nonlinearly. Fig.
19 shows that the force has increased significantly after the collapse of the side column; the force in column A2 increased from 2230 kN to 3500 kN peak before settling down at a fixed value of 3000 kN for case 1.A great redistribution of forces occurred.The force in A2 increased from 2404 kN to 3306 kN peak prior to settling down at a steady value of 2930 kN for case 3.The adjacent column was overloaded initially and deformed nonlinearly.A large redistribution of forces took place; the force in column A5 increased from 2230 kN to a peak of 3510 kN before settling down at a steady value of 3170 kN for case 2.A great redistribution of forces occurred for case 4; column A5 force increased from 2019 kN to a peak of 4568 kN before settling down at a fixed value of 4130 kN.The adjacent column was initially overloaded and nonlinearly began deforming.The force has increased significantly after the collapse of the side column; the force in column A2 increased from 2221 kN to 3517 kN before settling down at a constant value of 2993 kN for case 5.A great redistribution of forces took place.The force in A2 increased from 2482 kN to 3361 kN peak prior to settling down at a steady value of 2920 kN for case 7.The adjacent column was overloaded initially and deformed nonlinearly.A great redistribution of forces took place.The force in column A5 increased from 2306 kN to 3449 kN prior to settling down at 3086 kN for case 6.Furthermore, the force in column A5 for case 8 increased from 2330 kN to a peak of 3437 kN before settling down at a steady value of 2998 kN.In addition, all of reaction forces are listed in Table 6.For the structural members like the column, after the sudden removal of the column, the axial forces are less doubled.As the load combination used in the analysis is DL + 0.25LL, suggested in GSA guidelines.So, the members at the same floor level are designed to have the axial capacity of twice the static axial force of the member under DL + 0.25LL load combination to avoid potential progressive collapse.Structure dead load is an important parameter, having fundamental impact on potential progressive collapse.So, applying material with low mass density in slab can avoid structure progressive collapse.Fig. 23 shows that the moment forces has increased significantly after the collapse of the side column; the moment force in column A2 increased from 3.14 kN m to 61 kN m peak before settling down at a fixed value of 25.7 kN m for case 1.A great redistribution of moment forces happened for case 3; the moment force in A2 increased from 5.2 kN m to 61.7 kN m peak before settling down at 31.45 kN m.A large redistribution of moment forces occurred.The moment force in column A5 increased from 3.14 kN m to 76 kN m peak before settling down at a constant value of 34.2 kN m for case 2.A large redistribution of moment forces took place for case 4.The moment force in column A5 increased from 6.78 to 112 kN m peak, before settling down at 39.8 kN m.A large redistribution of moment forces occurred.The moment force in column A2 increased from 10.08 kN m to 24.8 kN m peak.As shown in Fig. 25, the moment force in column A2 increased from 10.07 kN m to a peak of 20.5 kN m. Fig. 
25 shows that the magnitude of the moment force in column A2 changes frequently during the analyses for cases 5 and 7. A great redistribution of moment forces took place; the moment force in column A5 increased from 9.94 kN m to 32.5 kN m. In addition, the moment force in column A5 increased from 7.29 kN m to a peak of 19.8 kN m for case 8. All of the reaction moments are listed in Table 6. Comparing the increase in moment force in cases 1 and 2, the initial moment before column removal for both columns A2 and A5 is 3.14, but after column removal the values become 61 and 76, respectively, clearly showing that side case removal in the moment frame system is more critical and destructive than corner case removal. Comparing the increase in moment force in cases 3 and 4, the initial moment values before column removal for columns A2 and A5 are 5.2 and 6.7, respectively, but after column removal they become 61 and 112, respectively, indicating that side case removal in the moment plus CBF system is more critical and destructive than corner case removal. Comparing case 1 with case 3, case 2 with case 4, case 5 with case 7, and case 6 with case 8, the dynamic responses of the columns for the two different lateral resistance systems are different, but not significantly so. Comparing all models with regular and irregular plans, the moment differential of the adjacent column in the regular models damped sooner than in the irregular models, which means that in the irregular models the structural fluctuations are larger and last longer. Furthermore, the increase in moment force after column removal in the regular structures is significantly greater than in the irregular structures; this was expected because of the reduced overall mass of the irregular structures. When the concrete slabs are subjected to tensile stresses, their tensile strength decreases after the first crack appears. These cracks may be due to shear force or bending stress. Fig. 27a–d show the tensile damage of the concrete slab for cases 1 to 4, respectively. As shown, the tensile cracks under side column removal in the building with irregular steel frames are more extensive than in the other cases, indicating that case 4 is the most critical condition. Fig. 27d shows that a vast area of the concrete slabs above the removed side column cracked. In addition, comparing the models with regular and irregular moment frames, it can be observed that the models with the irregular plan performed poorly and the concrete slab should be strengthened. The tensile damage areas are listed in Table 6. The compressive behavior of the concrete is such that, after resisting in the elastic region, it exhibits resistance up to a plastic strain of 0.0025, and then, with increasing compressive stress, the resistance collapses. This section examines the failure of the concrete slab for the two types of column removal scenario. Fig. 28a–d show the compressive damage of the concrete slab for cases 1 to 4, respectively. The damaged area under corner column removal in the building with the irregular steel frame is larger than in the other cases, which shows that case 3 is the critical condition. Fig. 28c shows that a part of all the concrete slabs above the removed corner column cracked. In addition, comparing the models with regular and irregular moment frames, it can be observed that the models with the regular plan performed better and the compressive damage is minimal. The compressive damage areas are listed in Table 6. Two full scale experimental models were developed for the validation of the proposed modeling method. The elastic and plastic properties of the steel and concrete materials were introduced. Element failure for steel members and cracking for concrete slabs were considered. All models were analyzed using dynamic explicit analysis. To ensure the accuracy of the modeling, the numerical results were presented and compared with the experimental data; this suggests a reliable and affordable alternative to laboratory testing. The behavior of eight high rise steel composite frame buildings, covering two lateral resistance systems, two column removal scenarios and two types of plan, was investigated using 3-D finite element modeling. The results provide the following information: Side case removal in the moment frame and moment frame with centrically braced frame systems was more critical and destructive than corner case removal. Comparing the models, the dynamic responses of the columns for the two different lateral resistance systems were different, but not remarkably so. Comparing all models with regular and irregular plans, the moment differential of the adjacent column in the models with a regular plan damped sooner than in the irregular models. Across all column removal cases, the increase in moment force in the buildings with a regular plan is greater than in the irregular buildings. Comparing the models with regular and irregular moment frames, the models with the irregular plan performed poorly and the concrete slab should be strengthened. To avoid potential progressive collapse, it is suggested that the columns be designed and checked for the DL + 0.25LL load combination. The authors have no conflicts of interest to declare. | Progressive collapse is defined as the expansion of an initial local failure of an element into another element of the structure, ultimately leading to the collapse of the whole structure or a large part of it in a disproportionate way. Three dimensional modeling using the finite element method was developed and investigated to understand the progressive collapse of high rise buildings with composite steel frames. The nonlinear dynamic analysis examined the behavior of the building under two column removal scenarios. Two different types of lateral resistance systems were selected to be analyzed and compared. The buildings included regular and irregular plans. The response of the building was studied in detail, and measures are recommended to reduce progressive collapse in future designs. The results of this study show that side case removal in the moment frame and moment frame with centrically braced frame systems was more critical and destructive compared with corner case removal. Comparing the models, for the two different lateral resistance systems, the dynamic response of the columns was different, but not remarkably so. |
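This record notes that the steel plasticity data were entered into the software as a true stress–strain curve derived from the engineering stress–strain diagram. A minimal sketch of that standard conversion is given below; the engineering data points are hypothetical placeholders (not the ST37 curve used in the study), and the conversion is only valid up to the onset of necking.

```python
# Minimal sketch (not the authors' script): convert an engineering stress-strain
# curve into the (true stress, plastic strain) pairs used for a *PLASTIC definition.
# The example data points are hypothetical and the conversion holds only before necking.
import numpy as np

E = 2.1e5  # Young's modulus in N/mm^2, as reported in the record

# Hypothetical engineering stress (N/mm^2) and engineering strain pairs
eng_stress = np.array([370.0, 400.0, 440.0, 460.0])
eng_strain = np.array([0.0018, 0.02, 0.08, 0.15])

true_stress = eng_stress * (1.0 + eng_strain)      # sigma_true = sigma_eng * (1 + eps_eng)
true_strain = np.log(1.0 + eng_strain)             # eps_true = ln(1 + eps_eng)
plastic_strain = true_strain - true_stress / E     # subtract the elastic part of the strain
plastic_strain[0] = 0.0                            # first row: yield stress at zero plastic strain

for s, p in zip(true_stress, plastic_strain):
    print(f"{s:8.1f}  {p:8.5f}")                   # rows of the (true stress, plastic strain) table
```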
31,462 | Does specialized psychological treatment for offending reduce recidivism? A meta-analysis examining staff and program variables as predictors of treatment effectiveness | The overarching aim of offense specific psychological treatments for individuals who have offended is to reduce recidivism.Knowing whether such treatments result in meaningful recidivism reduction is crucial for informing future rehabilitative policy."Sexual offense and domestic violence programs comprise the lion's share of specialized psychological programs offered in correctional and community settings, although some programs have emerged targeting general non-familial violence.To date, meta-analyses and reviews have been conducted separately to examine sexual offense and domestic violence programs.Evaluations of general violence programs have tended to either group these in with sexual and domestic violence programs or focus broadly on violent offenders but not violence specific programs per se.As such, no review has yet synthesized all specialized treatments across these three violent offending groups.Meta-analyses examining sexual offense programs appear to indicate some level of treatment effectiveness.The three most comprehensive meta-analyses to date are the best illustrations.Hanson et al. examined 43 evaluations of specialized and non-specialized1 psychological treatment for adults and adolescents who had sexually offended and found significant unweighted average reductions for sexual recidivism and any general recidivism.Although few program variables were examined, Hanson et al. found that specialized treatments produced the best effects.Significant treatment effects were comparable across institutions and community settings.Lösel and Schmucker examined 69 treatment evaluations for individuals who had sexually offended—incorporating biological and psychological treatments as well as adult and adolescent clients—and found significant n-weighted relative reductions for sexual, violent, and any general recidivism.Biological treatments produced the strongest treatment effects, as did treatments specifically targeting sexual offenses.Of the psychological treatments, only CBT and behavioral approaches were effective.Quality of evaluation design did not moderate the results, although studies with smaller samples produced stronger overall effects.Schmucker and Lösel later updated this meta-analysis, restricting the inclusion criteria to only the highest quality research designs.This time, biological treatments did not meet inclusion criteria, and n-weighted treatment effects for recidivism, although significant, were notably smaller.In addition, only community programs significantly reduced sexual recidivism.Specialized psychological treatment targeting sexual offenses and treatment for adolescents also produced stronger effects, as did treatment that was individualized."Schmucker and Lösel's study represents the latest authoritative meta-analysis on psychological treatment for individuals who have sexually offended.One large scale single study evaluation published by Mews, Di Bella, and Purver for the UK Ministry of Justice examined the “Core” sexual offense treatment program delivered to men across prisons in England and Wales from 2000 to 2012.Mews et al. 
propensity matched 87 variables to promote equivalence between the treated and untreated groups and found that sexual recidivism for treated individuals increased by an absolute value of 2% and a relative value of 25% over a mean 8.2-year follow-up.The sheer scale and apparent rigor of this individual study has cast significant international doubt on whether individuals who have sexually offended can be rehabilitated using specialized psychological programs."This is despite the fact that Mews et al.'s findings have not yet been incorporated into a meta-analysis.Several reviews and meta-analyses have been published that focus on treatment for domestic violence, each generating largely equivocal findings.In the first meta-analysis, Babcock et al. reported a “small” treatment effect for studies using police reports as the recidivism outcome.However, they did not publish comparative weighted or unweighted reoffending rates and their study was not limited to specialized psychological treatment.A limited number of moderators were examined showing that, although results did not vary according to treatment approach, experimental designs were associated with a slight reduction in treatment effects.This meta-analysis was relatively large but many comparison groups included treatment dropouts who hold unique risk characteristics that impact recidivism.Two later published meta-analyses have been unable to establish treatment effectiveness for specialized domestic violence programs.Feder and Wilson limited their meta-analysis to court-mandated treatment programs in North America and found a significant reduction in domestic violence recidivism for studies using some type of randomization, but no effects for those conducted without randomization.Smedslund et al. focused their meta-analysis solely on treatments using CBT elements and randomized controlled designs.In this small meta-analysis of North American studies, Smedslund et al. 
concluded that findings were “inconsistent and heterogeneous”.Given the difficulty researchers have had examining domestic violence program effectiveness, it is unsurprising that potential program and staffing moderators have not yet received attention.Further, no meta-analysis has examined how specialized domestic violence programs might impact recidivism more generally.Researchers have typically focused on research design as a key factor hindering knowledge proliferation regarding treatment effectiveness.However, variables relating to the program and its implementation are also important.Correctional policy makers experience huge pressures to provide effective specialized offense treatments on a large scale at low cost.This has resulted in a growing reliance on paraprofessionals—rather than qualified psychologists—to implement treatment.Gannon and Ward hypothesized that programs facilitated by qualified psychologists should produce optimal outcomes.Their predictions centered on the premise that fully trained psychologists hold the level of expertise and associated clinical competencies necessary to expertly detect and respond to complex client need.Problems with treatment delivery may well have underpinned the disappointing results from the British Ministry of Justice sexual offense program evaluation, since fully qualified psychologists were rarely involved in hands-on treatment.Yet, to our knowledge, this variable remains untested."Other staff variables such as the provision of facilitator clinical supervision may also impact upon treatment effectiveness and, as a corollary to Gannon and Ward's predictions, whether or not supervising staff hold psychological expertise.However, again, these variables have not yet been formally tested.Regarding program variables, meta-analyses show that adherence to the Risk, Need, and Responsivity principles of correctional treatment reduce many types of recidivism.For psychological approaches, CBT appears to generate optimal recidivism reductions with the seeming exception of domestic violence programs.Other program variables—except for a small selection investigated in sexual offending—have received less attention.Previous meta-analyses examining offense programs have focused on one single offense type and have often examined a mixture of specialized and non-specialized treatments.No previous work has synthesized specialized psychological offense treatments to examine their impact on both offense specific and non-offense specific recidivism.Our predefined hypotheses are publicly available via the Open Science Framework repository.We predict that individuals treated with a specialized psychological offense program will show reduced offense specific and non-offense specific recidivism.Based on the extant literature, we expect the largest recidivism effects to be associated with sexual offense programs.Previous meta-analyses have not examined the impact of staff variables—in particular qualified psychological input—as a moderator of recidivism outcomes.We examine this and predict that specialized psychological offense treatment facilitated by psychologists will be associated with greater reductions in both offense specific and non-offense specific recidivism.In addition to these key hypotheses, we explore the effects of demographic variables, data source variables, treatment staff, and treatment program variables on both offense specific and non-offense specific recidivism.We report our method in line with the Meta-Analysis Reporting Standards, PRISMA, 
and with our publicly available Open Science Framework study plan.We did not time limit publication or study completion dates when undertaking searches.However, we did limit searches to articles published in English.We electronically searched PsychINFO®, Web of Science™, ProQuest®, MEDLINE, Dissertation Abstracts International, the Cochrane Controlled Trials Register, the National Criminal Justice Reference Service, the UK Ministry of Justice, UK Home Office, Canada Correctional Services, New Zealand Correctional Services, the UK National Archives, and the National Police Library.All keyword combinations used in our searches are available in our Open Science Framework study plan.We searched publication reference lists and sent requests to three international Listservs and one national Listserv.We also sent individual e-mails to key researchers identified in our search strategy asking them to identify unpublished data.We concluded the search process on 1 February 2018; approximately 12 months following our first computerized search.For inclusion, studies needed to evaluate an offense specific psychological treatment provided to adjudicated offenders, examine recidivism as an outcome variable, include a comparison group of adjudicated offenders who did not receive the specialized treatment in question—and for whom recidivism was also examined, and provide descriptive or inferential statistics adequate for effect size calculation.We excluded studies focusing on clients under 18 years since these clients have been associated with strongest treatment effects, clients with learning disability or other cognitive impairment, or those committed to a mental health facility due to a significant mental disorder.2,We also excluded drink driving treatment evaluations since these programs are less usual within clinical-forensic settings.Where multiple studies described the same treatment outcome data or programme, the manuscript outlining the highest quality data and typically the largest and most representative sample was used for analysis.We coded 27 predictor and outcome variables using over 80 categories.Variables were informed by previous offending behavior meta-analyses and research literature gaps.Key variable descriptions are provided below.For each variable, an unknown category was used to incorporate information that could not be classified using preexisting categories.Age; race; gender; offense type; and sample size N.Year of publication or study completion; country of publication origin; type of publication.Facility setting; therapeutic community; primary treatment method used; type of offense targeted in treatment; mode of treatment provision; treatment format; treatment length; treatment site roll out; polygraph usage; treatment quality.For programs targeting sexual offending we also examined whether behavioral conditioning procedures had been used in an attempt to recondition inappropriate sexual arousal.Presence of registered autonomous postgraduate psychologist in hands-on program provision; facilitator supervision; profession of individual providing facilitator supervision.Recidivism source; recidivism type; recidivism follow up time; and recidivism/non-recidivism sample size ns.Matching of the control and treatment participants; study design; and recidivism quality score5.A coding protocol incorporating all variables described above was used to code each individual study.Studies were independently double coded and cross-checked by Theresa A. Gannon and Jaimee S. 
Mallion.Discrepancies stemmed from minor coding oversights and were resolved easily through discussion.When information was missing for key predictor and outcome categories, Theresa A. Gannon used electronic mail to make contact with either the corresponding manuscript author or, if that contact was unsuccessful, another co-author.At least two reminder emails were sent and when contact was unsuccessful, a follow up phone call was made.We attempted to contact the study author of all but three articles6 and obtained a response rate of 79%.Responding authors were not always able to provide all information requested due to job changes or significant time lapses.Categories were purposefully merged with other categories when they were underused prior to hypothesis testing.The final coding protocol is available, upon request, from the first author.Odds Ratios were computed for the treatment and comparison groups, comparing the ratio of recidivists to non-recidivists for each offense specific and non-offense specific recidivism type.ORs were computed so that values below 1.0 indicated lower rates of recidivism for treatment, above 1.0 indicated higher rates of recidivism for treatment, and 1.0 indicated zero effect.We did not include studies that contained treatment drop-outs in the comparison group due to the higher recidivism rates associated with this group.Instead, we included all participants originally assigned to receive the offense specific treatment in the treatment group wherever possible.This is likely to represent a more conservative test of the effects of specialized psychological offense treatment.All effect size calculations were electronically calculated by Mark E. Olver and seven studies were randomly selected and hand recalculated by Mark James.Overall, there was 100% agreement across the 13 effect sizes.ORs were aggregated to generate overall effect sizes with 95% confidence intervals with both fixed and random effects models using Comprehensive Meta-Analysis 3.0.A minimum of k = 3 studies was required to compute a meaningful effect size.Effect size heterogeneity across studies was examined using the Q test with associated p value and I2 statistic.Analyses were conducted including outliers and with outliers removed.Moderator variables were examined through aggregating effect sizes at different levels within moderators and examining the difference in effect size magnitude for a given moderator to ascertain the effects of these variables on recidivism outcomes.Publication bias was examined for each moderator variable that met the criteria for asymmetry testing proposed by Ioannidis and Trikalinos.Three sets of asymmetry testing were conducted: funnel plots of precision, trim and fill, and fail-safe N.As Fig. 
1 shows, our searches initially identified 6633 articles of which 68 articles describing 70 studies met the full inclusion criteria.These studies described the recidivism of 55,604 offenders from 70 independent samples.Studies originated from 39 peer reviewed journal articles, 6 theses/dissertations, 2 poster/presentations, 19 government reports, 1 book chapter, and 3 unpublished materials.Most studies had been published since 2000, with some published in the 1990s and 1980s.Overall, studies were judged to be of reasonable quality with 77.1% holding a recidivism quality score of high or very high.Only six studies used a randomized design, and of the remaining studies just under one third used an appropriately matched treatment and comparison group.Key variables are shown in Table 1.Open access data is available from http://dx.doi.org/10.17632/mvdw7xd9rb.2,Across all program types, using an average follow up of 66.1 months, offense specific recidivism was significantly lower for individuals who received specialized treatment relative to those who had not in both the random and fixed effect models.This represents an absolute decrease in recidivism of 6% and a relative decrease of 30.9%.Table 2 shows meta-analysis results for sexual recidivism.Readers should note that Mews et al. was identified as an outlier for the bulk of analyses, featuring an extremely large sample size.For this reason, we report all findings with this study removed and included.Readers should also note that random effects models are less influenced by outliers than fixed effects models which weight effect sizes strictly by sample size; as such, random effects models were less impacted by inclusion of Mews et al.Sexual offense programs generated a stable and significant treatment effect regardless of whether random or fixed effects models were used.Similar to previous meta-analyses, significant heterogeneity was present across studies.Over an average follow up time of 76.2 months, sexual recidivism was 9.5% for treated and 14.1% for untreated individuals.This represents an absolute decrease in recidivism of 4.6% and a relative decrease of 32.6%.While the Mews et al. evaluation had a limited effect on the random effects model, it impacted the fixed effect model, which maintained significant, but smaller, associations with decreased sexual recidivism.We limit our moderator commentary below to key findings.Treatment was most effective in reducing sexual recidivism when a qualified licensed psychologist was consistently present in treatment.This effect remained when Mews et al. was included.Receiving supervision from other staff when facilitating treatment also led to better reductions in sexual redivism relative to supervision not being provided or its provision being unknown.This effect remained when Mews et al. 
was included in the random effects model but reduced in the fixed effects model.Supervision provided by psychologists held the best associations with reduced sexual recidivism.A k of 1 for non-psychologist provision made it impossible to draw adequate comparisons.However, provision by both psychologists and non psychologists appeared less effective or not effective.All sexual offense treatment was CBT.There were larger reductions in sexual recidivism when treatment service quality was rated as promising or most promising relative to weaker services.The fixed effect for most promising programs was driven by the single large sample study of Mews et al.The association between program intensity and outcome was not uniform, with treatment effects generally observed across programs of various lengths, although 100–200 h programs generated smaller effects.Treatment across institutions and the community produced comparable sexual recidivism reductions.When Mews et al. was included within institutional settings, however, community programs generated comparably larger effects.Group-based treatment, rather than mixed group and individual treament, produced the greatest reductions in sexual recidivism except, again, when Mews et al. was adjusted for in the fixed effects model.Relatively larger treatment effects were observed for programs that incorporated some form of arousal reconditioning.Programs that incorporated polygraph use produced less convincing recidivism reductions; the fixed effects model for polygraph absent programs was driven by Mews et al.Finally, programs provided in New Zealand or Australia and Canada produced substantial reductions in sexual recidivism relative to other countries.One in four of these programs was characterized by consistent psychologist input.With the exception of studies rated fair-moderate studies rated as high or very high on recidivism quality were associated with robust recidivism reductions.The fixed effects model with Mews et al. included was the only exception.Studies that employed matching criteria produced less superior, yet significant, reductions in sexual recidivism.Again, the addition of Mews et al. 
in the fixed effects model was the only exception.Domestic violence programs generated a significant treatment effect regardless of whether random or fixed effects models were used, with significant heterogeneity across studies.Over an average 62-month follow-up, domestic violence recidivism was 15.5% for individuals who received treatment and 24.2% for untreated comparisons.This represents an absolute decrease in recidivism of 8.7% and a relative decrease of 36.0%.As shown in Table 3, ks were < 3 for many staff variables.Similar to sexual offense programs, however, domestic violence treatment appeared most effective when a qualified psychologist was consistently present.The exception was the fixed effects model for consistant psychologist presence driven by a single large sample study.Receiving supervision from other staff when facilitating treatment for domestic violence perpetrators also appeared important in reducing domestic violence recidivism.The relative effects of various professions providing supervision was unclear, however, due to the large number of studies for which supervisor profession remained unknown.All domestic violence programs were provided in groups, mostly closed in format, almost exclusively community based, and of short duration.In addition, none involved therapeutic communities; likely because treatment was largely community based.Interestingly, the association between program quality and domestic violence recidivism ran counter to that for sexual offense programs.The fixed effect for promising programs was driven by a single large sample study with a positive treatment effect.However, the random effects reduced the impact of this study on the overall effect.The so-called “weaker” programs, which tended to feature education based groups, generated strong treatment effects, accounting for large reductions in domestic violence recidivism.CBT treatment methods did not produce convincing reductions in domestic violence recidivism.However, the Duluth model—which itself is a pro-feminist yet also CBT-based program—and psychoeducational models both produced robust reductions in domestic violence recidivism.Programs provided in one location, as opposed to multiple locations, were most effective in reducing domestic violence recidivism.Variations on recidivism quality score were difficult to interpret due to small k in the poor and very high categories.However, studies rated moderate and high were associated with comparably robust reductions in domestic violence.The random effects OR for the high category was driven by Dutton et al.Only one study employed matching criteria making interpretation of this variable difficult.Since four studies employed a randomized design, however, we were able to examine ORs for studies with and without this feature.Both studies that employed randomization and studies that did not employ randomization were associated with robust reductions in domestic violence although randomization was associated with weaker ORs.Programs targetting general violence comprised only a small subcategory of studies and so we could not examine staff or treatment program moderators.However, a stable and significant treatment effect was found regardless of whether random or fixed effects models were used with almost negligible study effect size heterogeneity.Over an average follow up of 25.0 months, general violence recidivism was 29.0% for treated and 38.3% for untreated individuals.We examined the overall ability of all specialized programs to reduce any form of 
violent recidivism, operationalized as a single outcome variable that included both sexual and nonsexual violence, where this information was available.Programs produced a significant reduction in violence in the random and fixed effects models with significant heterogeneity.Across programs, over an average follow up time of 65.4 months, general violence recidivism was 14.4% for treated and 21.6% for untreated individuals, corresponding to an absolute decrease in recidivism of 7.2% and relative decrease of 33.3%.When effects were disaggregated across each of the three program types, similar OR magnitudes were observed, with a little more variation observed for sexual offense programs.Consistent with findings for offense specific recidivism, facilitator input from a qualified psychologist produced superior reductions in violence relative to inconsistent psychological facilitator input.It is unclear what produced the superior ORs noted for the none or unknown category.Reductions in general violence across programs did not appear to be substantively impacted by whether staff supervision was provided.However, when psychologists and non-psychologists provided supervision on the same program, treatment effectiveness diminished substantially.Treatment effects were found across the various levels of service quality although programs classified as most promising were associated with the best violence reductions, except when Mews et al. was entered in the fixed effects model.Treatment effects were also found across the various levels of treatment intensity although programs of lower intensity appeared slightly less effective than higher intensity programs.Treatment that was group-based, rather than a mixture of group and individual modalities, produced the greatest reductions in violent recidivism, except when Mews et al. 
was entered into the fixed effects model.Programs administered at one treatment site also appeared slightly more effective than treatments administered across multiple sites.For recidivism quality ratings, all categories were associated with robust recidivism reductions; however, ratings of very high quality, which included Mews et al., produced slightly weaker associations with violent recidivism.Similarly, whilst both matched and non-matched designs produced notable reductions in violent recidivism, the weakest associations were found for matched designs.Thirty-six specialized programs examined general recidivism (that is, any and all recidivism) operationalized as a single outcome variable.These programs significantly reduced general recidivism in both the random and fixed effects models, with significant heterogeneity.Across all program types, over an average 62.4-month follow-up, any general recidivism was 30.0% for treated individuals and 37.7% for untreated comparisons, corresponding to absolute and relative recidivism decreases of 7.7% and 20.4% respectively.Similar OR magnitudes were observed across the three program types.Here, findings did not always mirror those already reported since treatment effects did not vary according to the presence of a qualified psychologist.However, treatment effects lessened when supervision was provided for the same treatment program by both psychologists and non-psychologists.Co-facilitation of programs appeared beneficial relative to individually facilitated programs.The promising and most promising programs produced the strongest associations with general recidivism reduction relative to programs rated as weaker.For the most part, treatments of varying intensity exerted robust treatment effects, with the exception of the fixed effect for longer-term treatment.Programs across all countries exhibited reductions in general offending, although Canada showed the weakest associations.There did not appear to be a uniform relationship between recidivism quality score and reductions in general recidivism.However, matched designs showed slightly weaker associations with recidivism reduction.We used tests of asymmetry to assess publication bias associated with the file drawer problem for all moderating variables that met Ioannidis and Trikalinos' criteria.Thirteen variables qualified for testing.When visually inspected, funnel plots showed clear symmetrical dispersal of effect sizes around the mean.Based on the funnel plots, trim and fill tests impute any missing effect sizes required to create symmetry and provide an adjusted overall effect size.These analyses are based on the premise that, without publication bias, studies would show natural sampling error and a symmetrical distribution of results.The trim and fill test adds studies hypothetically missing due to publication bias to recreate what an unbiased summary is likely to look like.As shown in Table 6, very few variables required effect sizes to be imputed to obtain symmetry, with the adjusted imputed value not substantially different from the observed effect size.The fail-safe N figures are also impressive, showing that between 6 and 255 missing studies would be needed to diminish significant effect sizes to non-significance.
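To make the publication-bias checks above concrete, the short Python sketch below computes an Egger-type asymmetry test and Rosenthal's fail-safe N for a handful of hypothetical log odds ratios. It is not the authors' analysis code: the effect sizes and standard errors are invented for illustration, a regression-based asymmetry test stands in for visual funnel-plot inspection, and the trim and fill imputation step itself is not shown.

import numpy as np
import statsmodels.api as sm

# Hypothetical per-study log odds ratios and their standard errors (illustrative values only)
es = np.array([-0.42, -0.30, -0.55, -0.10, -0.25])
se = np.array([0.15, 0.20, 0.25, 0.30, 0.18])

# Egger-type asymmetry test: regress the standardized effect on precision;
# an intercept that differs reliably from zero suggests funnel plot asymmetry
z = es / se
precision = 1.0 / se
egger = sm.OLS(z, sm.add_constant(precision)).fit()
print("Egger intercept p-value:", egger.pvalues[0])   # first column is the constant

# Rosenthal's fail-safe N: the number of unpublished null studies needed to drag the
# combined one-tailed z below the 0.05 criterion (z = 1.645)
z_one_tailed = np.abs(es) / se
fail_safe_n = (z_one_tailed.sum() ** 2) / (1.645 ** 2) - len(es)
print("Fail-safe N:", int(fail_safe_n))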
The present meta-analysis is the first to review the impact of various specialized psychological offense treatments on recidivism.In relation to our preplanned hypothesis, we found substantially lower recidivism rates for individuals who received specialized psychological treatment versus untreated comparisons, using a sample of > 55,000 individuals.We hypothesized that the strongest treatment effects would be found for programs targeting sexual offending rather than domestic violence; yet, surprisingly, we found comparable significant treatment effects across domestic violence and sexual offense programs.Indeed, our meta-analysis is the first to suggest that domestic violence programs produce reductions in more general offending.It also differs from previously conducted reviews in that we found evidence of a reduction in domestic violence regardless of whether or not a randomized study design had been used.It is unclear why our results regarding domestic violence programs differ from the previous literature, which presents largely equivocal findings.Our meta-analysis differs from those conducted previously in various ways, all of which are associated with our inclusion criteria.For example, we focused only on specialized domestic violence treatment, used intent-to-treat analyses, included treatments from various countries, and included a range of study designs and treatment approaches.Readers should note that our results in relation to the effects of domestic violence programs on offense-specific recidivism are based on the findings of fourteen studies.This meta-analysis is also the most exhaustive to date that examines the effects of specialized psychological treatments for sexual offending, including 11 new studies since Schmucker and Lösel's original searches in 2010.The sexual recidivism reductions that we found for these programs were higher than, or at the top end of, those reported in previous meta-analyses.This is especially notable given that this meta-analysis included the large-scale study of Mews et al., which has cast significant international doubt on the effectiveness of specialized psychological programs for individuals who have sexually offended.Further, in contrast to the most recent meta-analysis on sexual offending, both prison and community treatments were associated with reduced recidivism.The non-offense-specific recidivism reductions were broadly comparable to those reported previously.Finally, our review also showed that general violence programs were associated with significant offense-specific and non-offense-specific recidivism reductions.This meta-analytic evidence is the first to focus exclusively on offense-specific violence programs, suggesting that they are exerting their intended effects.In line with our preregistered hypothesis, sexual and domestic violence psychological programs characterized by consistent qualified psychologist facilitator input were associated with better outcomes than programs without this feature.This supports previous researcher assertions that qualified psychologists are important for the treatment success of specialized psychological offense programs.Programs that provided clinical supervision for facilitating staff were also associated with better outcomes, with variations in outcome according to supervisor profession.For example, for sexual offense programs, qualified psychologist supervisors were associated with superior sexual recidivism reductions.However, the provision of supervision by qualified psychologists and non-psychologists across the same program appeared to result in reduced effectiveness and—in some cases—ineffective treatment.This suggests that psychologists and non-psychologists may offer guidance that conflicts in some way, resulting in confused facilitation.Our review found that numerous program variables
impacted treatment effectiveness.The clearest results were associated with sexual offense programs.Here, predictors associated with the best sexual recidivism reductions were: treatment rated as higher quality; treatments of shorter or longer duration; a group-based treatment format; polygraph absence; and arousal reconditioning.The first outcome supports previous research indicating that RNR adherence reduces sexual recidivism.The findings regarding treatment intensity are harder to interpret, however, since we did not code treatment participants according to risk level.The superior effects for group-only programs may stem from qualified psychologist facilitators being consistently present most often in the group-only programs relative to the other coded categories for treatment modality.Furthermore, since facilitators knew there were no "mop-up" sessions, this may have forced all critical issues to be discussed within the group, improving group cohesion, which is critical for treatment effectiveness.Our findings on this aspect stand in direct contrast to those of Schmucker and Lösel, who reported that programs with more individualized formats exerted the best effects.Our findings may differ simply because our meta-analysis included more studies in the mixed group and individual category for comparison.Polygraph testing and arousal reconditioning had not been examined in previous treatment meta-analyses, despite widespread use in many programs.Proponents of polygraphy hypothesize that it enables more effective treatment by ensuring clients adhere to program conditions and provide accurate sexual histories.The only single-study research available suggests that combining treatment with the polygraph has little discernible effect on sexual recidivism.Our meta-analytic results are the first, however, to suggest that polygraph use is associated with lower treatment effect sizes.Although the mechanism of this effect is as yet unclear, we anticipate—as others have—that the therapeutic alliance may be negatively impacted when honesty is formally tested and challenged as part of the treatment process.Moreover, the use of arousal reconditioning for addressing inappropriate sexual interests appears to have lost favor in some jurisdictions.Waning enthusiasm may stem from the lack of research examining such techniques, as well as recent research suggesting that pedophilia represents a sexual preference with biological origins.The present findings, however, are the first to report that programs incorporating active behavioral attempts to restructure and manage such arousal are associated with larger reductions in sexual recidivism.Given that inappropriate sexual arousal is a key predictor of re-offending sexually, this finding is particularly pertinent.Due to the relatively small k for the domestic violence programs, establishing more definitive program predictors of decreased recidivism and, hence, improved treatment success was more difficult.However, a set of key predictors did emerge: treatment rated as lower quality; treatments using the Duluth approach; and treatments that were provided at a single institution.Initially, it was unclear why treatments rated as less evidence-based exhibited more effectiveness.A close examination of program content, however, showed that they tended to be Duluth or purely psychoeducational programs.This suggests that it is the provision of educational information—that may or may not be rooted in feminism—that is important for reducing domestic violence, rather than
complex psychotherapeutic manipulations designed according to “best practice”.This may explain why Duluth and psychoeducational approaches produced superior recidivism reductions relative to CBT.However, readers should note these suggestions cautiously since they are just that and are based on relatively small ks.Finally, the superior outcomes associated with treatments administered at a single site suggests that treatments are most effective when administration is tightly focused.Our findings for general violent recidivism, across all programs, showed that qualified psychologist input, receiving supervision, and the absence of conflicting psychologist/non-psychologist supervision were associated with the largest violent recidivism reductions.This mirrored the staff effects found for offense specific recidivism outcomes; however, similar effects were not found for general recidivism.It may be that the effects of qualified psychological input, receiving supervision, and supervisor professions are less visible for general recidivism since the content of specialized offense programs and, by extension, supervision are most likely to focus on offense specific—and typically violent—criminogenic issues.In fact, few program variables emerged as consistent predictors of non-offense recidivism and, when they did, they largely reflected those already targeted for offense specific recidivism.The finding that treatment is associated with best results when administered at a single site suggests that treatment integrity may be a critical, yet neglected, factor associated with treatment success more broadly.Good meta-analyses should represent a complete and accurate picture of the overall study population.Limiting our searches to documents written in English may have omitted a small number of studies from our analyses.Nevertheless, we made every effort to obtain a full cohort of studies.Just under half of the documents we obtained were gathered from materials other than peer reviewed journals and asymmetry tests illustrated that publication bias was not a concern.Previous meta-analyses examining specialized offense treatments have been critiqued regarding the quality of evaluation studies examined, with most authors arguing that stronger randomized designs are required.Our meta-analysis is no exception to such critique since few studies used a randomized design.However, we did record quality of study design through examining whether each study employed matching criteria as well as the overall quality of recidivism variables used within each study.Using these indicators we were able to show that, with the exception of domestic violence programs, study design and matching had surprisingly little impact on recidivism reductions.In fact, since higher recidivism rates are associated with drop-outs, our intent-to-treat meta-analysis is likely to represent a more conservative test of the effects of specialized psychological offense treatment.All meta-analyses, including this one, are affected by potentially confounded moderator effects.Where possible, we examined the individual studies generating each key moderating effect for any obvious patterns of confounding variables.However, we recognize that numerous unidentified confounders could also be present.A further key limitation was that we did not always have enough information to populate both an “unknown” and a “not present” group for each moderating variable.Whilst this could not be avoided, it suggests that study authors could improve upon the quality of 
staffing and treatment program information provided in published and unpublished reports.We know, for example, that many competent professionals would not have been classified as independent registered psychologists.However, information was simply not available to conduct coding and analyses based on facilitator profession.We suggest authors clearly report each of the program and staff variables outlined in Tables 2 and 3 in all future evaluations as an absolute minimum.The outcomes of this meta-analysis are the first to suggest that specialized psychological programs that target various offending behaviors are effective.Although there was significant heterogeneity across the outcomes of individual studies, our review suggests ways that policy makers and program providers might optimize program outcomes.First, the results indicate that program developers should provide qualified psychologists who are consistently present in hands-on treatment; and second, facilitators should be provided with supervision opportunities that are similar across the program.Interestingly, less than one in five programs consistently used qualified psychologists in hands-on facilitation and the majority of these were implemented in the 1970s, 1980s, or 1990s rather than more recently.The provision of supervision was more evenly spread.We recognize the significant pressures that policy makers face providing cost effective programs to large numbers of individuals.As an indication of this, correctional systems in a number of international jurisdictions have been moving away from the direct involvement of psychologists as treatment providers, with therapeutic activities such as running manual-based groups being delegated to correctional program officers who may have little or no formal clinical training.Ironically, it seems that this variable is correlated with optimum behavioral change and yet qualified psychologist hands-on input is lacking in programs implemented in recent years.This may explain why we did not find more modern treatments to bring about improved outcomes.Qualified psychological staff and regular supervision come at a clear financial cost.Program providers could consider the benefits of pruning down staff facilitation numbers as a compensatory financial strategy given that individual and co-facilitated programs seem to be equally beneficial.Program providers might also want to consider methods for tightly controlling program implementation given that we found single site treatments seemed to fare better than multisite treatments.Further offense specific practice implications are available for those involved in sexual offense and domestic violence policy.Regarding sexual offense programming, the results indicate that best practice guidelines in this area should be revised to include cautionary messages regarding polygraph use within the therapeutic context, and further commentary on—and expansion of—the evidence base around behavioral reconditioning as a treatment tool.Those tasked with developing and managing programs for those who have been domestically violent should seek out the best educational materials possible and consider how such materials can be skilfully woven into program facilitation to produce optimal results.Previous researchers have noted that it is difficult to ascertain the exact variables responsible for apparent recidivism reductions when engaging in large scale meta-analytic work; we agree, particularly when heterogeneity of findings is present across studies.However, the 
findings from this review across traditional and emerging specialized psychological offense programs present converging evidence that such programs impact a broad range of offending behaviors in addition to producing impressive reductions in offense specific recidivism.Amidst these findings, however, lies an important moderating variable that has been neglected in previous meta-analyses: program staffing.If specialized psychological offense programs are to be effective, then our review suggests that researchers and clinicians must seriously consider these factors in addition to study design quality.This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.Authors Theresa A. Gannon, Mark E. Olver, and Jaimee S. Mallion designed the study and coding manual.Authors Theresa A. Gannon, Jaimee S. Mallion, and Mark James conducted the literature searches.Author Theresa A. Gannon contacted all authors of identified manuscripts.Authors Mark E. Olver and Mark James conducted the statistical analyses.Author Theresa A. Gannon wrote the final draft of the manuscript and all authors contributed to and have approved the final copy.There are no known conflicts of interest that could have inappropriately influenced or be perceived to have influenced this research manuscript. | A meta-analysis was conducted to examine whether specialized psychological offense treatments were associated with reductions in offense specific and non-offense specific recidivism. Staff and treatment program moderators were also explored. The review examined 70 studies and 55,604 individuals who had offended. Three specialized treatments were examined: sexual offense, domestic violence, and general violence programs. Across all programs, offense specific recidivism was 13.4% for treated individuals and 19.4% for untreated comparisons over an average follow up of 66.1 months. Relative reductions in offense specific recidivism were 32.6% for sexual offense programs, 36.0% for domestic violence programs, and 24.3% for general violence programs. All programs were also associated with significant reductions in non-offense specific recidivism. Overall, treatment effectiveness appeared improved when programs received consistent hands-on input from a qualified registered psychologist and facilitating staff were provided with clinical supervision. Numerous program variables appeared important for optimizing the effectiveness of specialized psychological offense programs (e.g., arousal reconditioning for sexual offense programs, treatment approach for domestic violence programs). The findings show that such treatments are associated with robust reductions in offense specific and non-offense specific recidivism. We urge treatment providers to pay particular attention to staffing and program implementation variables for optimal recidivism reductions. |
31,463 | Direct and indirect influences of executive functions on mathematics achievement | A good understanding of mathematics is essential for success in modern society, leading not only to good job prospects but also a better quality of life.Children develop an understanding of mathematics throughout their primary and secondary education.In order to ensure effective pedagogy that supports the needs of all learners it is critical to recognise the range of factors that contribute to mathematical achievement so that teaching practices can be targeted appropriately.One set of factors that play an important role in mathematics achievement are the cognitive resources that an individual can draw on.Here we evaluate the direct contribution of domain-general skills, in particular executive functions, the set of processes that control and guide our information processing, to mathematics achievement.In addition we explore to what extent the contribution of executive functions to mathematics achievement is mediated by domain-specific mathematical abilities, and whether this changes with age.Addressing these questions will refine our understanding of the ways in which executive functions support mathematics achievement, which can then inform intervention approaches that aim to capitalise on this relationship.Attainment in mathematics rests on success in a number of underlying cognitive skills.Several researchers have proposed a multi-component model in which mathematics is underpinned by both domain-specific mathematical knowledge in addition to more general cognitive processes.For example, Le Fevre’s Pathways Model of early mathematical outcomes includes linguistic and spatial attention pathways in addition to a quantitative pathway.Geary outlined a hierarchical framework in which achievement in any area of mathematics is underpinned by skill in applying the appropriate procedures, and an understanding of the underlying concepts.In turn, these domain-specific processes draw upon a range of domain-general skills, including language and visuospatial skills and in particular executive functions.This model therefore suggests that the influence of executive function skills on mathematics achievement is mediated through its role in domain-specific mathematical competencies.It is well established that an individuals’ procedural skill and conceptual understanding contribute to their mathematical achievement, in addition to their factual knowledge: the ability to recall stored number facts from long-term memory.More recently, a growing body of evidence has demonstrated a link between domain-general executive functions and mathematics achievement.Executive functions, the skills used to guide and control thought and action, are typically divided into three main components following Miyake et al.These are updating or working memory, the ability to monitor and manipulate information held in mind, inhibition, the suppression of irrelevant information and inappropriate responses, and shifting, the capacity for flexible thinking and switching attention between different tasks.Below we review the literature exploring the links between each of these components of executive functions and overall mathematics achievement before going on to consider its contribution to the underpinning skills of factual knowledge, procedural skill and conceptual understanding.Across many studies working memory has been found to be a strong predictor of mathematics outcomes, both cross-sectionally and longitudinally.According to the 
influential Baddeley and Hitch model of working memory, adopted by the majority of researchers in this field, working memory is made up of short-term stores for verbal and visuospatial information in addition to a central executive component that coordinates these storage systems and allows the manipulation and storage of information at the same time.Accordingly, tasks that simply require information to be stored for a short amount of time are used as an index of the capacity of the verbal and visuospatial stores, while tasks that require the simultaneous storage and manipulation of information are used to also tap into the central executive component of working memory.In general, tasks that tap into this executive working memory system show stronger relationships with mathematics achievement than those which simply measure the short-term storage of information.The results from a recent meta-analysis of 111 studies found that verbal executive working memory showed the strongest relationship with mathematics, followed by visuospatial executive working memory and short-term storage, which did not differ, and finally the short-term storage of verbal information.This suggests that it is the central executive component of working memory that is most important for mathematics.The tasks that are typically used to tap into the central executive are not a pure measure of this process however, as the short-term storage and processing of information is also required.To try and isolate the exact components of working memory that contribute to mathematics achievement Bayliss and colleagues adopted a variance partitioning approach whereby they used a complex span combining the storage and processing of information, as typically used to index executive working memory, but also measured storage and processing independently.Using a series of regression models they were able to isolate the unique variance associated purely with the central executive, storage capacity and processing speed, as well as the shared variance between these processes.In one study with 7–9-year-olds, Bayliss, Jarrold, Gunn, and Baddeley found that the executive demands of combining verbal storage and processing explained significant variance in mathematics achievement, but that combining visuospatial storage and processing did not.Moreover, the executive working memory tasks involving verbal storage explained more variance in mathematics achievement than a short-term verbal storage task alone.A follow-up study investigating developmental changes in working memory and cognitive abilities demonstrated that shared variance between age, working memory, storage and processing speed across both verbal and visuospatial domains contributed most to mathematics achievement across ages, explaining 38% of the variance.The central executive accounted for around 5% of unique variance, as did shared variance between age, working memory and storage.Storage alone accounted for 2.5% of the variance, which was attributed to variation in the ability to reactivate items in memory.Processing speed accounted for a small amount of variance both uniquely and shared with working memory and age.Taken together, these findings suggest that all components of working memory play some role in successful mathematics achievement but that the demands of combining the storage of verbal information with additional information processing do seem to be particularly important for mathematics achievement in childhood.The findings of Friso-van den Bos et al. 
and Bayliss et al. suggest that there may be some domain-specificity in the relationship between working memory and mathematics achievement, with verbal working memory playing a larger role than visuospatial working memory.Other researchers have argued for the opposite pattern however, with a stronger relationship between mathematics and visuospatial working memory than verbal working memory, particularly in children with mathematics difficulties but with typical reading and/or verbal performance.In a comprehensive study which tested a large sample of typically developing 9-year-olds on an extensive battery of measures, Szűcs, Devine, Soltesz, Nobes, and Gabriel found that visuospatial short-term and working memory were significant predictors of mathematical achievement, while verbal short-term and working memory were not.Phonological decoding and verbal knowledge were found to be significant predictors however, which may have accounted for some of the variance associated with verbal short-term and working memory.These conflicting findings may be due to the type of mathematics under study, and could also be related to age.Li and Geary found that central executive measures, but not visuospatial short-term memory measures predicted mathematics achievement in 7-year-olds, but that the children who showed the largest gains in visuospatial short-term memory from 7 to 11 years achieved a higher level of attainment in mathematics at 11 years of age.Age-related differences in these relationships could reflect either maturation-related changes in the involvement of working memory, or differences in the mathematical content of curriculum-based or standardised achievement test.For example, verbal working memory may be more important for basic topics such as arithmetic, whereas visuospatial working memory may be more important for more advanced topics, such as geometry.Research with adults also points to a greater role for visuospatial than verbal working memory.This suggests that visuospatial short-term memory becomes increasingly important for mathematics with age.The relationships between mathematics achievement and inhibition and shifting tend to be less consistent than the relationship with working memory, with significant correlations found in some studies, but not others.One suggestion for this variety is that inhibition and shifting contribute unique variance when they are studied as sole predictors, but that if working memory is also included then this accounts for the variance otherwise explained by inhibition and shifting.Another possibility for the inconsistency is that inhibition and shifting make a lesser contribution to mathematics achievement than working memory, which in some studies reaches significance, while in others it does not.The results from the meta-analysis of Friso-van den Bos et al. 
suggest that inhibition and shifting are indeed less important for mathematics achievement than working memory.They found that inhibition and shifting explained a similar amount of variance in mathematics achievement, but significantly less than was explained by both verbal and visuospatial short-term storage and executive working memory.Despite a growing focus on understanding the neurocognitive predictors of overall mathematics achievement, there has been relatively little research investigating the contribution of executive function skills to the component processes that underpin mathematics achievement; retrieving arithmetic facts from long-term memory, selecting and performing arithmetic procedures and understanding the conceptual relationships among numbers and operations.It is important to study the role of executive functions to each of these component processes separately as although they all contribute to successful mathematics achievement, children can show different patterns of strengths and weaknesses across these processes, suggesting that the domain-general processes that support them may also differ.Moreover, it is currently unclear whether the relationship between executive functions and these components of mathematics may in fact mediate the relationship between executive functions and overall mathematics achievement, as Geary suggests.Most research to date has taken place within the domain of arithmetic, therefore below we review the role of executive functions in factual knowledge, procedural skill and conceptual understanding of arithmetic in turn before going on to compare the contribution that executive functions make to each component.According to theoretical models, arithmetic facts are stored in an associative network in long-term memory in a verbal code.Many models of working memory propose that one of its roles is to activate information in long-term memory.Taken together, these models suggest that verbal working memory may be required to recall arithmetic facts and that inhibitory processes may be required to suppress the neighbouring solutions or alternative operations that are co-activated when a fact is retrieved.There is evidence that individuals with low verbal short term and working memory capacity are less likely to choose a retrieval strategy for solving simple arithmetic problems, and are also likely to retrieve them less accurately.In contrast, verbal and visuospatial working memory tasks have not always been found to uniquely predict performance on arithmetic fact fluency tasks in elementary school children over and above basic numerical skills and other domain-general skills, although this may be due to the wide ranging influence of working memory in many of these processes.There is also recent evidence that inhibitory processes play a role in arithmetic factual knowledge in terms of suppressing co-activated but incorrect answers.De Visscher and Noël have demonstrated that a patient with an arithmetic fact retrieval deficit, and 8–9-year-olds with poor arithmetic fact fluency all demonstrate difficulties in suppressing interfering items within memory.There is therefore some evidence for a role of working memory and inhibition in the retrieval of arithmetic facts, although to date there has been little research in this area.The ability to accurately and efficiently select and perform appropriate arithmetic procedures is likely to rely on executive functions in order to represent the question and store interim solutions, select the appropriate 
strategy and inhibit less appropriate ones, as well as shift between operations, strategies and notations.Convincing evidence that working memory, in particular the central executive, plays a key role in using arithmetic procedures comes from experimental dual-task studies which have found that procedural strategies are impaired by a concurrent working memory load.Correlational studies have also demonstrated a relationship between working memory and procedural skill, although there are mixed findings concerning whether simple storage or central executive processes play a larger role, and whether the storage of verbal or visuospatial information is more important.There is some evidence that children with better inhibitory control are better able to select the most efficient strategy and also perform better on tests of procedural skill.Similarly for shifting, children with better cognitive flexibility have been found to have better procedural skill, although evidence that children with a mathematics difficulty show a significant deficit in shifting in comparison to typical controls has been mixed.The contribution of executive functions, in particular inhibition and shifting, to procedural skill may well depend on age- or schooling-related changes in mathematical content and strategies.These domain-general skills may play a greater role in younger, less-skilled children but become less important with age as procedural skills become more automatic and children begin to use fact retrieval and decomposition, breaking a problem down into smaller parts, to solve arithmetic problems.In their meta-analysis Friso-van den Bos et al. found that the contribution of shifting and the visuospatial sketchpad decreased with age, while the contribution of visuospatial working memory increased with age.The role of verbal short-term and working memory and inhibition remained constant.The majority of these studies were based on measures of overall mathematics achievement.Given that procedural skills are required in most of these general mathematics measures, these findings suggest that, for at least some aspects of executive function, their role in procedural skills changes during childhood.This needs to be confirmed with a more specific measure of procedural skill however.Theoretical models suggest that executive functions may be required to switch attention away from procedural strategies to allow underlying conceptual numerical relationships to be identified and also to activate conceptual knowledge in long-term memory.Comparatively little empirical work has investigated the role of domain-general skills in conceptual understanding however.Robinson and Dubé found that 8–10-year-old children with poorer inhibitory control were less likely to use a conceptually-based shortcut than children with good inhibitory control when presented with problems where such a strategy was possible.They suggested that this may be because the children found it difficult to inhibit well-learned procedural algorithms.Empirical studies do not appear to support the role of working memory in conceptual understanding however, at least in the domain of fractions.A total of eighty-four 8–9-year-olds, sixty-seven 11–12-year-olds, sixty-seven 13–14-year-olds and seventy-five young adults took part in the study.The young adults were students at the University of Nottingham and all spoke English as their first language.They gave written informed consent and received course credit or an inconvenience allowance for taking part.The 
8–9-year-olds attended suburban primary schools and the 11–14-year-olds suburban secondary schools in predominantly White British, average socio-economic status neighbourhoods of Nottingham, UK.Primary schools in the UK are attended by pupils aged from 5 to 11 years.UK secondary schools are typically attended by pupils from 11 to 18 years.Parents of all children in the school year groups taking part in the study were sent letters about the study and given the option to opt out.All children were given a certificate for taking part.The study was approved by the Loughborough University Ethics Approvals Sub-Committee.The arithmetic and executive function tasks were created using PsychoPy software and presented on an HP laptop computer.For the mathematics tasks, the experimenters recorded response times for child participants by pressing a key immediately as participants began to give their answer.The Mathematics Reasoning subtest of the Wechsler Individual Achievement Test was administered following the standard procedure.This test provides a broad assessment of curriculum-relevant mathematics achievement and is a good predictor of performance on the national school achievement tests used in the UK.It includes a series of verbally and visually presented word problems covering arithmetic, problem solving, geometry, measurement, reasoning, graphs and statistics.Raw scores were used as the measure of performance.This task assessed participants' knowledge of number facts.On each trial an arithmetic problem was presented on screen for 3 s and participants were asked to retrieve the result without mental calculation.The participants were instructed to give their answer verbally, at which point the experimenter pressed a key and inputted the answer.Participants were instructed to say "I don't know" if they could not retrieve the answer.Participants completed four practice trials and then 12 experimental trials in random order.An additional four easy 'motivational trials' were intermixed with the experimental trials.To ensure that performance was not at floor or ceiling level in any group, we selected a different set of items for each age group.Following pilot testing, the problems given to the primary school students were composed of single-digit addition operations only, while those given to the secondary school students also included subtraction operations.The problems for the 11–12-year-olds involved single-digit numbers, and the problems for the 13–14-year-olds were composed of one single-digit number and one double-digit number.The problems given to the young adults involved addition, subtraction, multiplication and division operations composed of one single-digit and one double-digit number.The measure of performance was the proportion of items answered correctly within the 3 s presentation time.
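As an illustration of how a single timed retrieval trial of this kind might be scripted, a minimal PsychoPy-style sketch is given below. This is not the authors' experiment script: the problem text, colours, key mapping and scoring are assumptions based only on the description above.

from psychopy import core, event, visual

win = visual.Window(fullscr=False, color="white")
problem = visual.TextStim(win, text="7 + 8", color="black", height=0.15)

clock = core.Clock()
problem.draw()
win.flip()          # the problem appears and stays on screen for up to 3 s
clock.reset()

# the experimenter presses 'space' as soon as the participant starts to answer aloud;
# if no response is recorded within the 3 s window the trial is scored as unanswered
keys = event.waitKeys(maxWait=3.0, keyList=["space"], timeStamped=clock)
response_time = keys[0][1] if keys else None

win.flip()          # clear the screen between trials
win.close()
core.quit()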
This task assessed the strategy choice and efficiency with which participants could accurately perform arithmetic procedures.Prior to starting the task, participants were shown pictures representing different strategies to ensure that younger participants understood that any strategy was acceptable in this task.The experimenter described the strategies and told participants that any of these strategies, or others, could be used to solve the task.Following this, on each trial an arithmetic problem was presented on screen and participants were instructed to solve it using any mental method they preferred.Participants were given four practice trials and then 10 or 12 experimental trials.The operations were designed to be age-appropriate, and of a difficulty level where retrieval would be unlikely.The problems for all age groups involved a mix of single and double-digit numbers, with a greater proportion of double-digit numbers for the older groups.The trials given to 8–9-year-olds and 11–12-year-olds were composed of addition and subtraction operations and the trials given to 13–14-year-olds and young adults were composed of addition, subtraction, multiplication and division operations.The items in each version were presented in one of two orders counterbalanced across participants.The participants were instructed to give their answer verbally, at which point the experimenter pressed a key and inputted the answer.The measure of performance on this task was the mean response time for correctly answered trials.This task assessed participants' understanding of conceptual principles underlying arithmetic.As with the other arithmetic tasks, a different set of problems was used for each age group.The operations were designed to be difficult to solve mentally, to discourage the participants from attempting to do so.The 8–9-year-olds watched a puppet solve a double-digit addition or subtraction problem using counters and were shown the example problem written in a booklet.They were then shown four probe problems that were presented without answers and asked whether the puppet could use the example problem to solve each probe problem, or if he would need to use the counters to solve it.Of the four probe problems, one was identical, one was related by commutativity, one was related by inversion and one was unrelated.The children were first asked to decide whether or not the example problem could help the puppet solve each probe problem, and then asked to explain how.The children completed two practice example problems, with feedback, followed by 24 experimental trials.The items were presented in one of two orders counterbalanced across participants.The conceptual task for the 11–12-year-olds, 13–14-year-olds and young adults was presented on a computer.On each trial an arithmetic problem with the correct answer was presented on the screen.Once this was read, the experimenter pressed 'return' on the computer keyboard and a second, unsolved operation appeared below the first problem.The participants were asked to state whether or not the first problem could help solve the second problem, and then were asked to explain how.Participants were given four practice trials and thirty experimental trials.Eighteen of the thirty problem pairs were related.The pairs of problems were related by the subtraction-complement principle, inverse operations, and associative operations.The trials given to the 11–12-year-olds were composed of addition and subtraction problems involving two operands of two- and three-digit numbers.The trials given to the 13–14-year-olds were composed of addition and subtraction problems involving two or three operands of double-digit numbers, as well as some multiplication and division problems involving single and double-digit numbers.The trials for the young adults were composed similarly but they also included some division problems involving two double-digit numbers.The items in each task version were presented in one of two orders counterbalanced across participants.All participants gave their response verbally and the experimenter recorded this.Accuracy measures were calculated for how many relationships were correctly identified, and for how many accurate explanations
each participant provided.The measure of performance used here was the proportion of trials for which the presence or absence of a relationship was correctly identified.Higher scores indicated better performance.All participants completed separate verbal short-term memory and verbal working memory tasks.Verbal short-term memory was assessed via a word span task.Participants heard a list of single syllable words and were asked to recall them in order.There were three lists at each span length, beginning with lists of two words, and the participants continued to the next list length if they responded correctly to at least one of the trials at each list length.The total number of words correctly recalled was used as the dependent variable.Verbal working memory was assessed via a sentence span task.Participants heard a sentence with the final word missing and had to provide the appropriate word.After a set of sentences they were asked to recall the final word of each sentence in the set, in the correct order.Participants first completed an initial practice block with one trial with one item and two trials with two items.The practice trials could be repeated if necessary.They then continued to the test trials where they received three trials at each span test length, starting with a test length of two items.Provided they recalled at least one trial correctly, the sequence length was increased by a single item and three further trials were administered.Participants’ performance on the processing task was also assessed separately in two blocks of 20 trials each.In these blocks they only had to provide the final word of the sentence, without the need to recall the words.Response times were measured for the processing trials and the total number of words correctly recalled was calculated for the storage element of the sentence span task.The participants completed separate visuospatial short-term memory and visuospatial working memory tasks.In the visuospatial short-term memory task participants saw a 3 × 3 grid on the screen.They watched as a frog jumped around the grid and after the sequence finished they had to point to the squares he jumped on in the correct order, which was recorded by the experimenter using the mouse.There were three trials at each sequence length, beginning with sequences of two jumps, and participants continued to the next sequence length if they responded correctly to at least one of the sequences at each length.The total number of correctly recalled locations was used as the dependent variable.Visuospatial working memory was assessed via a complex span task.Participants saw a series of 3 × 3 grids each containing three symbols and they had to point to the ‘odd-one-out’ symbol that differed from the other two.After a set of grids children were asked to recall the position of the odd-one-out on each grid, in the correct order.Participants first completed an initial practice block with one trial with one item and two trials with two items.The practice trials could be repeated if necessary.For the test trials there were three trials at each span length, beginning with a test length of two items, and children continued to the next span length if they responded correctly to at least one of the trials at each span length.Participants’ performance on the processing task was also assessed separately in two blocks of 20 trials each.In these blocks they only had to identify the location of the odd-one-out, without the need to recall the position.Response times were measured for the 
processing trials and the total number of locations correctly recalled was calculated for the storage element of the complex span task.To assess participants’ ability to inhibit irrelevant information in a non-numerical context we used an animal-size stroop task.On each trial two animal pictures were presented on the screen.One animal was selected from a set of large animals and the other animal was selected from a set of small animals.The participants’ task was to identify which animal was the larger in real life.On each trial, one animal image was presented with an area on screen four times larger than the other image.On congruent trials the animal that was larger in real life was also the larger image on the screen, and on incongruent trials the animal that was smaller in real life was the larger image on the screen.Participants were required to ignore the size of the images on the screen and to respond based on the size in real life only.On each trial the images were presented on screen and participants responded as quickly as possible by pressing one of two buttons on the keyboard that corresponded to the side of the screen with the larger animal.Participants completed four experimental blocks each containing 48 trials in random order.The time taken to complete each block was recorded and presented to participants at the end of each block to encourage them to respond quickly.In the first two experimental blocks 75% of the trials were incongruent and 25% were congruent and in the second two experimental blocks 75% of the trials were congruent and 25% were incongruent.Participants had the opportunity to take breaks during the task as needed.Prior to commencing the task participants were shown each of the animal images in one size and asked whether the animal was large or small in real life to ensure they had the necessary real-world knowledge to perform the task.All participants completed this without problem.Median RTs for correctly-solved trials were calculated for the congruent and incongruent trials.Inhibition score was the difference in RT for congruent and incongruent trials.Larger differences indicate lower levels of inhibitory control.To assess participants’ ability to inhibit irrelevant information in a numerical context we used a dot comparison task.On each trial the participants were shown two sets of white dots on a black screen and were instructed to identify which set had the highest number of dots.The dots were created using an adapted version of the matlab script provided by Gebuis and Reynvoet.This method produced four types of trials, of which two were analysed.On fully congruent trials the more numerous array has larger dots and the array encompasses a larger area.On fully incongruent trials the more numerous array has smaller dots and the array encompasses a smaller area.Participants were required to ignore the size of the dots and the array on the screen and to respond based on the number of dots only.The number of dots in each array ranged from 5 to 28 and the ratio between the number of dots ranged from 0.5 to 0.8.Participants completed 6 practice trials and 80 experimental trials in random order.They were given breaks during the task as needed.Mean accuracy was calculated for the fully congruent and incongruent trials.Inhibition score was the difference in accuracy for fully congruent and incongruent trials.Larger differences indicate lower levels of inhibitory control.To assess participants’ ability to formulate basic concepts and shift from one concept to 
another we used the Animal Sorting subtest from the NEPSY-II.The task requires participants to sort eight cards into two groups of four using self-initiated sorting criteria.The cards are coloured blue or yellow and include pictures of animals, and can be sorted in 12 different ways, for example blue vs. yellow cards, one animal vs. two animals, pictures with sun vs. pictures with rain.Following a teaching example, the participants were given 360 s of cumulative sort time to sort the cards in as many different ways as they could.The test was discontinued before 360 s if the participant stated they had finished, or if 120 s elapsed without a response.Sorts were recorded using correct sort criteria and a raw score of the total number of correct sorts was calculated.A larger score indicates better performance.Each participant was tested individually in a 2 h session.The tasks were presented in one of two orders, counterbalanced across participants, with executive function and mathematics tasks intermixed.The children were all tested in their school in a quiet room away from the classroom.Young adults were tested in a lab at their university.Nine participants (four 8–9-year-olds, four 11–12-year-olds and one 13–14-year-old) failed to complete one or two measures from the full battery of tests.Their missing data were replaced using the multiple imputation option in SPSS.Six participants (two 8–9-year-olds, one 11–12-year-old and three young adults) were classed as multivariate outliers using Mahalanobis distance and excluded from the study.A further three 8–9-year-olds were excluded for floor performance on the procedural skills task.This left a final sample of seventy-nine 8–9-year-olds, sixty-six 11–12-year-olds, sixty-seven 13–14-year-olds and seventy-two young adults.The content of the arithmetic tasks varied for each age group to prevent floor or ceiling effects on any tasks.As a result, it was not appropriate to use raw scores in analyses involving multiple age groups.We therefore transformed raw scores on all measures to z-scores within each age group and used these in the subsequent analyses.For measures where a lower score indicated better performance, the z-scores were multiplied by −1 so that for all measures a higher z-score indicated better performance (a brief illustrative sketch of this transformation is given below).The consequence of using z-scores was that overall age differences in mathematics or executive functions between the groups were not assessed, only how the relationships between executive functions and mathematics may differ with age.Descriptive statistics for raw performance on the mathematics and executive function tasks are presented in Table 1.There was a good range of performance on all of the tasks, with no evidence of floor or ceiling effects.Four sets of analyses were conducted.First, we established that the mathematics component skills were related to overall mathematics achievement.Second, regression models were used to determine the relative contribution of working memory, inhibition and shifting to overall mathematics achievement as well as factual knowledge, procedural skill and conceptual understanding of mathematics and establish how this changes with age.Third, a mediation analysis was performed to ascertain if cognitive components of mathematics mediate the relationship between executive functions and overall mathematics achievement.Finally, a variance partitioning approach explored which components of working memory were driving the relationships with mathematics.
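A brief sketch of the within-group standardization referred to above is given here. The data frame layout and column names are illustrative assumptions rather than the authors' variable names.

import pandas as pd

def standardize_within_group(df, group_col="age_group",
                             reverse_scored=("procedural_rt", "stroop_diff", "dots_diff")):
    z = df.copy()
    score_cols = [c for c in df.columns if c != group_col]
    # z-score every measure within its own age group
    z[score_cols] = df.groupby(group_col)[score_cols].transform(
        lambda s: (s - s.mean()) / s.std())
    # flip the sign of measures where a lower raw score means better performance
    for col in reverse_scored:
        if col in z.columns:
            z[col] = -z[col]
    return z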
skill and conceptual understanding all independently contribute to mathematics achievement, we conducted a hierarchical linear regression predicting WIAT mathematics reasoning scores from our measures of factual knowledge, procedural skill and conceptual understanding.To determine if the contribution of these three components changes during development, we also included interaction terms with two nested dummy coded contrasts.The first of these, D1, compared the young adults to all groups of children.The second contrast, D2, compared the primary school pupils to the two groups of secondary school pupils, i.e., the 8–9-year-olds to both the 11–12- and 13–14-year-olds.The age contrasts were entered in the first step of the model along with the measures of factual knowledge, procedural skill and conceptual understanding.The interaction terms were entered in the second step.As shown in Table 2 the three components of arithmetic all explained unique independent variance in mathematics achievement and there were no interactions with age.To assess the role of executive functions in mathematics achievement as well as factual knowledge, procedural skill and conceptual understanding we carried out a series of hierarchical regressions.The dummy coded age contrasts and executive function measures were entered in the first step.For these analyses only the combined storage and processing verbal and visuospatial working memory tasks were included.Interaction terms between the executive function measures and age contrasts were entered in the second step.As shown in Table 3 the executive function measures alone explained 34% of the variance in mathematical achievement, 12% of the variance in factual knowledge, 15% of the variance in procedural skill and 5% of the variance in conceptual understanding.No further variance was explained when interaction terms were added to the model for any of the outcome measures.Verbal working memory was a unique independent predictor of factual knowledge, procedural skill and conceptual understanding as well as mathematics achievement.Visuospatial working memory was also a unique independent predictor of all of the outcome variables with the exception of conceptual understanding.Shifting and non-numerical inhibition did not independently predict any of the outcome variables, while numerical inhibition was a unique independent predictor of factual knowledge and procedural skill.The results so far indicate that working memory skills are related to mathematics achievement and also to the component arithmetic skills of factual knowledge, procedural skill and conceptual understanding.This raises the possibility that these component arithmetic skills mediate the relationship between working memory and mathematics achievement.In order to explore this possibility mediation analyses were performed using the Process macro for SPSS.This calculates bias-corrected 95% confidence intervals using bootstrapping with 10,000 resamples.A confidence interval that does not straddle zero represents an effect that is statistically significant.Two separate models were run for verbal and visuospatial working memory respectively.In both models, the mathematics achievement measure was the dependent variable.Factual knowledge, procedural skill and conceptual understanding were included as potential mediators and all other executive function measures were included as covariates.There were small but significant indirect effects of verbal working memory on mathematics achievement through all three 
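The hierarchical regressions summarised in this section share a common structure: the age contrasts and main effects are entered in a first step, the interaction terms with the two nested dummy-coded contrasts in a second step, and the change in R-squared indicates whether the contribution of the predictors differs across age groups. A compact sketch of that structure with statsmodels is given below; the variable names (wiat for the achievement measure, fact, proc and concept for the arithmetic components, D1 and D2 for the age contrasts) are placeholders for the study's measures, not its actual variable labels.

import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def hierarchical_steps(df):
    # Step 1: age contrasts plus main effects of the three components.
    step1 = smf.ols("wiat ~ D1 + D2 + fact + proc + concept", data=df).fit()
    # Step 2: add all component-by-contrast interaction terms.
    step2 = smf.ols("wiat ~ (fact + proc + concept) * (D1 + D2)", data=df).fit()
    r2_change = step2.rsquared - step1.rsquared
    step_test = anova_lm(step1, step2)  # F test for the added block of interactions
    return step1, step2, r2_change, step_test

The same template applies to the executive function models by swapping the component predictors for the working memory, inhibition and shifting measures and the outcome for factual knowledge, procedural skill or conceptual understanding.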
component arithmetic skills; factual knowledge, procedural skill and conceptual understanding.The size of these indirect paths did not differ significantly from each other.There remained a substantial direct effect of verbal working memory on mathematics achievement however.For visuospatial working memory there were small indirect effects on mathematics achievement through factual knowledge, procedural skill but not conceptual understanding.The indirect path via procedural skill was significantly larger than the non-significant path via conceptual understanding.There was also a substantial direct effect of visuospatial working memory on mathematics achievement.These findings demonstrate that working memory supports mathematics achievement directly, but also indirectly through factual knowledge, procedural skill and conceptual understanding.The measures used to index working memory in these analyses required participants to undertake concurrent storage and processing.Coordinating these two activities is thought to rely on the central executive, however the task is not a pure measure of the central executive and therefore it is possible that the lower-level storage and processing demands of the task are contributing to the relationships with mathematics achievement and components of arithmetic, in addition to the central executive demands of combining the two tasks.In order to investigate this, linear regression modelling was used to partition the variance between the storage, processing and central executive components of verbal and visuospatial working memory.This method helps disentangle the unique contributions each component makes as well as commonalities between them.This allowed us to determine whether it was simply storing information in mind, processing information, or the executive demands of combining the two that accounted for variability in the different components of mathematics as well as overall mathematics achievement.This was done separately for the verbal and visuospatial domains.The proportion of unique and shared variance explained by each combination of the working memory variables for each of the outcome measures is presented in Fig. 
2.The first thing to note is that the pattern was largely similar across verbal and visuospatial domains.Both the verbal and visuospatial working memory tasks accounted for unique variance in mathematical achievement, factual knowledge and procedural skill even once simple storage and processing speed were controlled for.This contribution was largest for mathematics achievement followed by procedural skill and then factual knowledge.Verbal but not visuospatial working memory also accounted for unique independent variance in conceptual understanding.A similar pattern was found for shared variance between the working memory and short-term memory tasks.It contributed the largest amount to mathematics achievement with broadly similar contributions for procedural skill and factual knowledge.The shared variance between verbal short-term and working memory was also linked to conceptual understanding.Unique variance associated with the verbal and visuospatial short-term memory and processing speed tasks differed slightly in the contribution that they made to mathematics outcomes.The verbal short-term memory task accounted for a small amount of unique variance in mathematics achievement and factual knowledge only whereas verbal processing speed did not explain variance in any of the mathematics outcomes.The visuospatial short-term memory task accounted for a small amount of unique variance in mathematics achievement, factual knowledge and procedural skill, whereas visuospatial processing speed accounted for unique variance in mathematics achievement, factual knowledge and conceptual understanding.To summarise, the verbal and visuospatial working memory tasks contributed both unique variance as well as shared variance with short-term storage to mathematics achievement, factual knowledge, procedural skill and conceptual understanding.The unique variance associated with verbal and visuospatial short-term storage differed across components of mathematics, and whereas visuospatial processing contributed unique variance to some mathematical processes, verbal processing did not.This study investigated the role of executive functions in factual knowledge, procedural skill and conceptual understanding as well as overall mathematics achievement in individuals aged between 8 and 25 years of age.The findings support a modified version of a hierarchical framework for mathematics in which domain-general executive function skills, in particular working memory, support domain-specific mathematical processes, which in turn underpin overall mathematics achievement.We extended previous models by demonstrating that working memory also directly contributes to mathematical achievement.This pattern of relationships between domain-general and domain-specific skills was found to be remarkably stable from 8 years of age through to young adulthood.Below we discuss the contribution of executive functions to mathematics, and the resulting theoretical implications, in more detail.We begin with a discussion of the role of executive functions in overall mathematical achievement, factual knowledge, procedural skill and conceptual understanding separately before moving on to compare across components and consider how executive functions contribute to mathematics achievement both directly and indirectly.In line with a large body of literature we found a significant relationship between verbal and visuospatial working memory and overall mathematics achievement.This indicates that the ability to store and manipulate information in 
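The variance partitioning reported in Fig. 2 rests on comparing the R-squared values of nested regressions. A simplified sketch is given below: it returns, for each predictor, the unique variance it explains (the drop in R-squared when it is removed from the full model) and lumps everything else into a single shared component, whereas the figure referred to above separates the individual commonalities. Outcome and predictor names are placeholders.

import statsmodels.formula.api as smf

def r_squared(df, outcome, predictors):
    # R-squared of an OLS regression of the outcome on the given predictors.
    rhs = " + ".join(predictors) if predictors else "1"
    return smf.ols(f"{outcome} ~ {rhs}", data=df).fit().rsquared

def partition_variance(df, outcome, predictors):
    # Unique variance per predictor plus the variance explained jointly.
    full = r_squared(df, outcome, predictors)
    unique = {p: full - r_squared(df, outcome, [q for q in predictors if q != p])
              for p in predictors}
    return {"total": full, "unique": unique, "shared": full - sum(unique.values())}

For example, partition_variance(df, "maths_achievement", ["verbal_wm", "verbal_stm", "verbal_speed"]) would, under these naming assumptions, give the unique contribution of the combined storage-and-processing task once the storage-only and processing-speed tasks are controlled, together with the variance the three measures share.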
mind in the face of ongoing processing is strongly linked to the aptitude to do well in mathematics.The predicted relationship between inhibition and shifting and overall mathematics achievement was not found however.This is partially consistent with evidence that inhibition and shifting account for less variance in mathematics achievement.It also provides support for the suggestion that inhibition and shifting may contribute unique variance to mathematics achievement when they are studied independently, but not when working memory is also included in the model.The executive functions that contributed to factual knowledge and procedural skill were very similar, with verbal and visuospatial working memory as well as numerical inhibition accounting for unique variance in both components.This is consistent with theories that propose that working memory is required to activate and retrieve mathematical facts stored in long-term memory, and also that inhibitory processes are needed to suppress co-activated but incorrect answers.It also highlights the role of working memory in representing a problem and storing interim solutions in procedural mathematics skills, and suggests that inhibitory control may be required in order to select and employ the appropriate procedural strategy.In this study we did not find a relationship between shifting and procedural skill.This conflicts with findings from other studies that have examined the extent to which performance on a cognitive flexibility task predicts performance on a test of procedural skill.Some of these positive findings were found in pre-schoolers indicating that the role of shifting in mathematics may be greater earlier in childhood, as we suggested.Other positive relationships were found when a trailmaking task that involved numerical stimuli was used.Relationships between working memory and mathematics have been found to be stronger when numerical stimuli are used within a working memory task and it is plausible that this could also be the case for measures of shifting.Similarly, we found that inhibitory control measured in a numerical context, but not including Arabic digits, was related to mathematics achievement as well as factual and procedural knowledge, but non-numerical inhibition was not.This is in line with previous research and provides evidence in support of the proposal that there are multiple domain-specific inhibitory control systems, rather than a single inhibitory system which applies across all domains.The predicted relationship between working memory and conceptual understanding was found, albeit only in the verbal domain.This again is consistent with the idea of working memory being necessary to activate information stored in long-term memory.The fact that only verbal working memory was related to the retrieval of conceptual information, whereas both verbal and visuospatial working memory were implicated in the retrieval of mathematical facts could be because conceptual information is stored in a verbal code, whereas mathematical facts perhaps also contain a visuospatial component, related to the way that sums are often presented or the use of visual aids, such as times tables squares, at time of encoding.The predicted relationship between shifting and inhibition and conceptual understanding was not found.This may be because we used a task that required participants to apply conceptual knowledge that they already have.It may be that suppressing procedural strategies and rearranging problems into different formats in order 
to identify conceptual relationships are more important when conceptual information is being learnt rather than once it has been acquired.This is the first study that has directly compared the contribution of executive function skills to factual knowledge, procedural skill and conceptual understanding across both children and adults using a comprehensive battery of executive function tasks.Together the executive function measures predicted more variance in factual knowledge and procedural skill than conceptual understanding, consistent with the findings of Hecht et al., and Jordan et al.We found that executive functions explained a similar amount of variance in both factual knowledge and procedural skill, which is inconsistent with the findings of Cowan and Powell and Fuchs et al. who found that domain-general factors accounted for more variance in procedural skill than factual knowledge.The amount of variance explained was also much lower in our study than that of Cowan and Powell, where domain-general factors accounted for 43% variation in procedural skill and 36% variation in factual knowledge.This difference is likely due to the fact that Cowan and Powell included other domain-general factors in their model, such as visuospatial reasoning, processing speed and oral language.This may also explain the difference in variance explained between factual knowledge and procedural skill.It may be that while the contribution of executive functions is similar in both, other domain-general skills such as reasoning and language are more important for procedural skill than for factual knowledge.Similarly, the role of IQ in explaining variance in each mathematics components has yet to be fully explored.For example, it is possible IQ may explain more variance in conceptual understanding than executive functions.The relationships between executive functions and factual knowledge, procedural skill and conceptual understanding were assessed across four different age groups; 8–9-year-olds, 11–12-year-olds, 13–14-year-olds and 18–25-year-olds.We predicted that executive functions would be more strongly related to procedural skill in the youngest age group in comparison to the older children and adults on the basis that executive functions may be required less with age as procedural skills become more automatic.Contrary to our predictions we found that the relationships between executive functions and all components of mathematics were the same from 8 years of age through to adulthood.There are two possible reasons for this.The first may be due to the nature of the mathematical measures that were used.Raghubar et al. 
distinguished between whether a skill is in the process of being acquired, consolidated or mastered, and suggested that the role of working memory in a particular mathematic process may differ depending on which of these stages the learner is at.By selecting separate age-appropriate content for the mathematics measures for each group it is possible that we were in fact assessing the role of executive functions in performing and applying already mastered mathematical skills and knowledge in all age groups, and that the role of executive functions in doing this is the same at all ages.This is consistent with another recent study that found little variation in the relationship between working memory and mathematics between the ages of 8 and 15 years.Further evidence that executive functions, in particular working memory, are required when individuals of all ages apply already mastered mathematical knowledge and procedures comes from dual-task studies in which solving relatively simple mathematical problems using factual and procedural strategies is impaired by a concurrent working memory load.A second possibility for the stable relationship between executive functions and mathematics across age groups is that executive functions are particularly important for early skill acquisition leading to individual differences in learning arithmetic early in childhood, but that these individual differences remain and are still evident later in life.This would imply that executive functions play a greater role in learning new mathematical skills and knowledge compared to executing already mastered mathematical material.Further research directly comparing how executive functions are involved in mathematics at different levels of skill acquisition, for example when facts, procedures and concepts are first being learned compared to when they are mastered, is required to test these two possibilities, although they may not be mutually exclusive.The regression analyses demonstrated that working memory contributed unique variance to overall mathematics achievement and also to factual knowledge, procedural skill and conceptual understanding.We subsequently carried out a mediation analysis to determine if, in line with hierarchical models of mathematics, performance on the domain-specific mathematics skills of retrieving mathematical facts, applying procedures and understanding concepts mediated the relationship between working memory and overall mathematics achievement.We found that verbal and visuospatial working memory do indeed contribute to mathematics achievement indirectly through factual knowledge, procedural skill and conceptual understanding, but that there is also a substantial pathway directly from working memory to mathematical achievement.A similar mediation analysis was conducted by Hecht et al., who compared the contribution of working memory to basic procedural arithmetic and conceptual understanding of fractions, and in turn to performance on tests of fraction word problems, estimation and computation.Hecht and colleagues found that working memory was a direct predictor of performance on fraction word problems, but not fraction computation.We have already discussed how working memory may support the different factual, procedural and conceptual components of mathematics, but what is its additional direct role in mathematics achievement?,One suggestion is that an additional demand of mathematics achievement tests is the need to identify the mathematical problem that’s presented within a verbal or 
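The mediation analysis recapped here was run with the PROCESS macro, which derives bias-corrected confidence intervals for the indirect effects from 10,000 bootstrap resamples. The sketch below illustrates the underlying idea for a single mediator in Python, using plain percentile intervals rather than the bias-corrected intervals PROCESS computes; x, m, y and covars are placeholder column names, not the study's variable labels.

import numpy as np
import statsmodels.formula.api as smf

def bootstrap_indirect(df, x, m, y, covars=(), n_boot=10_000, seed=1):
    # Indirect effect a*b of x on y through mediator m, with optional covariates.
    rng = np.random.default_rng(seed)
    cov = (" + " + " + ".join(covars)) if covars else ""
    f_m = f"{m} ~ {x}{cov}"        # a path: x -> m
    f_y = f"{y} ~ {m} + {x}{cov}"  # b path: m -> y, controlling for x

    def ab(d):
        a = smf.ols(f_m, data=d).fit().params[x]
        b = smf.ols(f_y, data=d).fit().params[m]
        return a * b

    estimate = ab(df)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(df), len(df))  # resample rows with replacement
        boots[i] = ab(df.iloc[idx])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return estimate, (lo, hi)  # the effect is taken as significant if the CI excludes zero

With three mediators entered simultaneously, as in the analyses above, the b paths come from a single outcome model containing all mediators; the one-mediator version here is kept deliberately short.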
visual description, construct a problem representation and then develop a solution for the problem.It is likely that working memory plays a key role in these processes in terms of maintaining and manipulating these problem representations in mind.In keeping with this, studies have found that working memory is related to performance on word problems.It remains to be established whether this relationship holds once the role of working memory in performing the appropriate arithmetic operation is taken into account.The regression and mediation analyses demonstrated that working memory plays a key role directly in mathematics achievement, but also indirectly through its contribution to factual knowledge, procedural skill and conceptual understanding.These analyses could not reveal which components of working memory are driving these relationships however, and whether they differ depending on the mathematical process involved.This was because the verbal and visuospatial working memory tasks included in these analyses involved short-term storage as well as the executive demands of maintaining that storage in the face of concurrent processing.To that end, a variance partitioning approach was used to isolate the independent contribution of the central executive, short-term storage and processing as well as the shared variance between them.This was done separately for both verbal and visuospatial working memory.Consistent with previous findings the working memory measures accounted for a moderate amount of unique variance in mathematical achievement as well as a smaller amount of variance in factual knowledge, procedural skill and conceptual understanding once short-term storage and processing had been accounted for.This is indicative of the contribution of the central executive and adds further evidence that it has a strong link with mathematics performance.A similar amount of variance in the mathematics tasks was explained by the shared variance between the working memory and short-term storage task however.This is likely to measure the ability to hold information in mind for a short amount of time given that this is a requirement of both the storage only and combined storage and processing tasks.This suggests that simply being able to hold information in mind is as important for mathematics as being able to hold that information while undertaking additional processing.Within the literature there have been mixed findings suggesting that either verbal or visuospatial working memory plays a larger role in mathematics performance.Some previous evidence, including a meta-analysis of 111 studies, indicates that verbal working memory is more important for mathematical achievement in children compared to visuospatial working memory.In contrast, other researchers have suggested that it is in fact visuospatial working memory that plays a greater role, and that its importance may increase with age.Very few studies have directly compared the role of verbal and visuospatial working memory using tasks that require storage alone and combined storage and processing in both domains however.In doing so we found that the contribution of verbal and visuospatial working memory was in fact very similar, both across different components of mathematics and also across age groups.The only major difference was that verbal, but not visuospatial, working memory contributed unique variance to conceptual understanding.Overall, these findings suggest that the ability to store both verbal and visuospatial information in mind in 
the face of ongoing processing is important for successful mathematics achievement.The domain-general central executive skills of monitoring and manipulating information play an important role, as do the domain-specific skills of holding both verbal and visuospatial information in mind.This is consistent with multi-component models of mathematics achievement which include both linguistic and spatial pathways.In addition to the variance shared with the working memory tasks, the verbal and visuospatial short-term memory tasks also both contributed unique variance to mathematical achievement and factual knowledge.The visuospatial short-term memory task also accounted for unique variance in procedural skill.This reflects a process that is not shared between the storage only and combined storage and processing tasks.One possibility is that this reflects the rehearsal of verbal items and visuospatial locations as there was more opportunity for this in the storage only tasks.It has also been proposed that this reflects the ability to reactivate items in memory.Visuospatial, but not verbal, processing accounted for unique variance in all of the mathematics tasks except for procedural skill, consistent with a large body of evidence demonstrating links between spatial skills and mathematics.Taken together, the results from the variance partitioning approach support our hypothesis that all components of working memory; storage, processing and the central executive, contribute to mathematics achievement.We did not find that the central executive was the most important component however.This is not inconsistent with previous findings as many studies use a combined storage and processing working memory task as a measure of the central executive when in fact it involves both the short-term storage components of working memory in addition to the central executive.Their results would perhaps be better interpreted as showing that both the short-term stores and central executive are important for mathematics, which is exactly what we found.The results from this study provide further evidence that working memory capacity is linked to mathematics achievement, but indicate that the mechanisms by which working memory influences mathematics achievement might be varied and complex.This has important implications for current intervention approaches that aim to improve academic outcomes by training working memory capacity.To date, many studies have failed to show any improvement on standardised tests of mathematics achievement following working memory training.Our results suggest that an intermediary approach may be beneficial to first ascertain whether working memory training can successfully enhance factual knowledge and procedural skill, or whether it has any impact on constructing problem representations.Such an approach has the potential to evaluate current interventions but would also further test our theoretical model.More broadly, our findings support multi-component frameworks of mathematics which highlight that there are a wide range of skills, both domain-general and domain-specific, that contribute to successful mathematics achievement.A further corollary of multi-component models is that there are a range of reasons why children might struggle with maths.In terms of interventions it is important to identify the reasons children might be having difficulties, be it problems with factual knowledge, procedural skill, conceptual understanding or underlying working memory or inhibitory control problems such 
that interventions can be tailored accordingly. However, given that these processes are likely to interact, training them in isolation may not be the most beneficial approach. In conclusion, this study has shown that working memory plays a direct role in mathematics achievement in terms of identifying and constructing problem representations as well as an indirect role through factual knowledge, procedural skill and, to a lesser extent, conceptual understanding. Inhibitory control within the numerical domain also supports mathematics achievement indirectly through factual knowledge and procedural skill. Perhaps surprisingly, these relationships appear to be stable from 8 years through to adulthood. The results from this study support hierarchical multi-component models of mathematics in which achievement in mathematics is underpinned by domain-specific processes, which in turn draw on domain-general skills. These findings begin to help us to comprehend the mechanisms by which executive functions support mathematics achievement. Such an understanding is essential if we are to create targeted interventions that can successfully improve mathematics outcomes for all learners. The full dataset for this study is available to download at http://reshare.ukdataservice.ac.uk/852106/. In summary, it can be seen that executive functions do seem to play a role in an individual's ability to recall arithmetic facts from long-term memory, select and perform arithmetic procedures and understand the conceptual relationships among numbers and operations. However, much of this evidence is drawn from separate studies and thus it is not possible to directly compare the contribution of executive functions across these three core competencies of arithmetic. A direct comparison is important theoretically in order to be able to accurately refine multi-component models of arithmetic. It is also of practical importance in order to understand the mechanisms through which interventions aiming to enhance mathematics outcomes via executive function training might be operating, as well as to provide some indication as to how they could be modified and improved. To our knowledge, only a handful of studies have compared the role of domain-general skills across factual, procedural and conceptual components of arithmetic. Cowan and Powell examined the contribution of working memory to fact retrieval and procedural skill at written arithmetic in 7–10-year-olds, alongside other domain-general skills including reasoning, processing speed and oral language, as well as measures of numerical representations and number systems knowledge. They found that domain-general factors accounted for more variation in procedural skill than in fact retrieval and that much of this variance was shared among the domain-general predictors. The unique predictors differed across tasks. While visuospatial short-term memory and verbal working memory predicted procedural skill, only processing speed, naming speed and oral language emerged as significant unique predictors of factual knowledge. Similarly, Fuchs et al.
found that domain-general skills accounted for more variance in procedural skill than factual knowledge.Language and phonological processing were significant unique predictors of fact retrieval and phonological processing also predicted procedural calculation skill.Working memory was not a significant predictor for either component, however teacher ratings of attention uniquely predicted performance on both.Inattention is known to be strongly related to working memory capacity, therefore it is possible that any variance associated with working memory is shared with this measure of attention.The studies of Cowan and Powell and Fuchs et al. suggested that working memory and other domain-general skills play a larger role in procedural skill than factual knowledge.They did not include a measure of conceptual understanding however.Two studies have compared the contribution of working memory to procedural skill and conceptual understanding within the domain of fractions.Both Hecht et al., and Jordan et al. found that working memory was a significant predictor of procedural skill but not conceptual understanding.This adds further evidence that working memory makes a greater contribution to procedural skills than other components of mathematics.Only one study to date has compared the contribution of working memory skills to all three components of mathematics; factual knowledge, procedural skill and conceptual understanding.Andersson tested a large sample of children with and without mathematics and reading difficulties three times between the ages of 10 and 12 years on a large battery of mathematics and cognitive tasks which include measures of factual, procedural and conceptual knowledge, as well as visuospatial working memory, verbal short-term memory, and shifting.Regression analyses revealed that executive functions accounted for more variance in procedural skills than in factual knowledge or conceptual understanding.Processing speed and verbal short-term memory were significant unique predictors of fact retrieval accuracy whereas shifting, but not the working memory measures, predicted procedural skill.Visuospatial working memory was a predictor of conceptual understanding however.This discrepancy with the studies of Hecht et al., and Jordan et al. may be because a visuospatial working memory task was used here, in contrast to the verbal working memory tasks used by Hecht et al. 
and Jordan et al.Taken together, these findings indicate that working memory skills do play a different role in recalling arithmetic facts from long-term memory, selecting and performing arithmetic procedures and understanding the conceptual relationships among numbers and operations.Yet to date no studies have used a comprehensive battery of working memory and wider executive function tasks in order to gain a full picture of the contribution of executive functions to these three components of arithmetic.Moreover, given that performance in these three component skills underpins overall mathematics achievement it is likely that factual, procedural and conceptual understanding may mediate the overall relationship that has been found between executive functions and mathematics achievement.Revealing these subtleties will allow us to pinpoint the mechanisms by which executive functions support mathematics achievement and perhaps refine intervention approaches that build on this relationship.The current study aimed to investigate the role of executive functions in factual, procedural and conceptual knowledge of arithmetic, ascertain how this might change with development, and determine whether these cognitive components of arithmetic mediate the relationship between executive function and overall mathematics achievement.A large sample of 8–9-year-olds, 11–12-year-olds, 13–14-year-olds and 18–25-year-olds were administered a battery of mathematics and executive function measures in addition to a standardised test of mathematics achievement.Three sets of analyses were conducted: The first used regression models to determine the relative contribution of working memory, inhibition and shifting to factual knowledge, procedural skill and conceptual understanding of mathematics and how this changes with age.The second used mediation analysis to ascertain if cognitive components of mathematics mediate the relationship between executive functions and overall mathematics achievement.The final set of analyses used a variance partitioning approach to explore which components of working memory are driving the relationships with mathematics.In light of theoretical models of mathematical cognition and the available empirical evidence we predicted that executive functions would be significantly related to overall mathematics achievement, with working memory contributing more variance than inhibition and shifting.We predicted that all components of working memory would contribute to mathematics achievement but that verbal executive working memory would explain the most variance.We anticipated that the contribution of visuospatial working memory might increase with age.We expected that executive functions would play a greater role in procedural skills than in factual knowledge and conceptual understanding.We predicted that factual knowledge would be demanding of cognitive resources, particularly verbal working memory and inhibition to suppress activated but incorrect answers.We anticipated that all aspects of executive function would be associated with procedural skill, but that the strength of this relationship would change with age, with stronger relationships in 8–9-year-olds in comparison to 11–12- and 13–14-year-olds.For conceptual understanding we anticipated that while working memory may be required to retrieve conceptual information from long-term-memory, inhibition and shifting would play an important part in suppressing procedural strategies in favour of conceptual ones, as well as rearranging problems 
into different formats in order to identify conceptual relationships. | Achievement in mathematics is predicted by an individual's domain-specific factual knowledge, procedural skill and conceptual understanding as well as domain-general executive function skills. In this study we investigated the extent to which executive function skills contribute to these three components of mathematical knowledge, whether this mediates the relationship between executive functions and overall mathematics achievement, and if these relationships change with age. Two hundred and ninety-three participants aged between 8 and 25 years completed a large battery of mathematics and executive function tests. Domain-specific skills partially mediated the relationship between executive functions and mathematics achievement: Inhibitory control within the numerical domain was associated with factual knowledge and procedural skill, which in turn was associated with mathematical achievement. Working memory contributed to mathematics achievement indirectly through factual knowledge, procedural skill and, to a lesser extent, conceptual understanding. There remained a substantial direct pathway between working memory and mathematics achievement however, which may reflect the role of working memory in identifying and constructing problem representations. These relationships were remarkably stable from 8 years through to young adulthood. Our findings help to refine existing multi-component frameworks of mathematics and understand the mechanisms by which executive functions support mathematics achievement. |
31,464 | Can product service systems support electric vehicle adoption? | Increasing awareness of the transport sector’s significant contribution to climate change, oil dependency, particulate matter pollution, nitrogen oxide emissions and noise particularly in urban areas has resulted in activities for road transport electrification.Substituting internal combustion engine vehicles with plug-in electric vehicles, i.e. full battery electric vehicles, range extended electric vehicles and plug-in hybrid electric vehicles, seems a very promising step to cope with the challenges of individual road transport and fits to the smart city paradigm, which has become one of the most important urban strategies to foster green growth and to improve urban sustainability against the backdrop of climate change.A wide range of definitions for the fuzzy smart city paradigm exist.Despite the risks accompanying hyper-connected societies, the common notion is that in smart cities information and communication technologies are used to increase citizens’ quality of life while contributing to sustainability.Highly connected information systems providing real-time digital platform services connecting citizens with urban infrastructures are key resources for smart cities.Policy incentives and car manufacturers’ portfolio decisions have a positive influence on EV adoption.Consequently, registrations of EV have been continuously increasing in industrialized countries on the global scale since 2008.Particularly countries subsidizing EV with pricing incentives and increased access to charging stations, i.e. electric vehicle supply equipment, have comparatively higher growth rates.However, several barriers to widespread adoption of EV have been observed.Sierzchula et al. distinguish techno-economic, consumer specific, as well as contextual factors such as the distribution of EVSE.Thus, further research on decreasing barriers is needed.The commercial sector in Germany seems particularly promising for EV diffusion.Vehicles in the commercial sector perform longer trips than private vehicles and drive more regularly.Both aspects are advantageous for EV usage, as a higher mileage allows a faster amortization and a higher regularity permits to better cope with a limited vehicle range.In addition to that, a large share of annual first registrations, i.e. 65% in Germany, are due to commercial owners.Furthermore, in company fleets, which also include ICEV, EV can be easily replaced for extraordinary long distance trips.Therefore, organizational fleets provide an important lever for the integration of EV into the vehicle stock and thus for mass market introduction.Another driving factor is that organizations might be willing to pay somewhat more for EV than for ICEV.As private users they might have a willingness to pay more due to the recognized positive impact on their green image.The research question how acceptance of EV can be fostered in order to increase sales numbers of EV have been subject to many research activities during the last years.E.g. reviews are provided by Hjorthol and Rezvani et al.More specifically, Wikström et al. 
focus on commercial fleets, Koetse and Hoen on company car drivers and Sierzchula on fleet managers.Furthermore, new mobility concepts and business models could transform the technological advantages of EV into value added for the customers.A function-oriented business model or product service system is a combination of products and services in a system that provides functionality for consumers and reduces environmental impact.In PSS tangible artefacts and intangible services jointly fulfill specific customer needs.Supported by the rapid advance of ICT technologies during the last years, many types of PSS became more economical and practical.As technology has taken its toll on human life and is present everywhere the Internet of Things paradigm evolved combining physical and digital components to create PSS and to enable novel business models.E-mobility business models and therefore e-mobility PSS have experienced increasing attention during the last years.While business models or services in the automotive industry in general are the subject of many scientific contributions, only few authors have focused on business models for EV for a review).EV and corresponding infrastructure increase complexity of business model approaches for multimodal mobility platforms.Yet, there are some qualitative studies on business models for EV.Bohnsack et al. study the evolution of EV business models.Kley et al. present a systematic instrument describing business models for EV charging.Cherubini et al. identify the main sub-systems of the PSS in the electric car industry, i.e. vehicle, on-board electronics, infrastructure and energy.They attribute the following actors and roles to these sub-systems: Automobile manufacturers define the PSS value proposition and corresponding product-service bundle pricing strategies.Electronic system companies develop advanced navigation systems.Public institutions are considered being key actors fostering e-mobility infrastructure solutions.They decide on incentive schemes, can push forward implementations of alternative transport systems and can run advocacy campaigns to inform and acquaint citizens.Both, energy providers and public institutions are considered being responsible for the location and availability of charging points, their ease of use and corresponding standardizations.Stryja et al. provide an overview of existing e-mobility services, classify them and provide a framework to characterize and describe services in the context of EV usage.However, research on PSS is still dominated by conceptual work and additional empirical research is required.Beyond that, specific case studies evaluate costs and benefits of EV.Kosub applies the technique of cost-benefit analysis to evaluate the choice of an organization to incorporate hybrid vehicles into a vehicle fleet.Piao et al. compare lifetime net present values of costs and benefits between EV and ICEV to answer the question whether it is beneficial to purchase EV from a private and societal point of view.Costs and benefits of EV compared to ICEV were already studied profoundly, particularly with total cost of ownership approaches.Some approaches are based on individual driving profiles and some were extended by also considering non-monetary factors as WTPM for EV.Madina et al. change the perspective and focus on TCO assessments of EVSE business models instead.In addition to costs Nurhadi et al. 
consider sustainability effects in their assessment of current car specific business models.However, to the best of our knowledge no studies published so far analyze costs and benefits of e-mobility PSS from a bottom-up user perspective based on empirical data.Consequently, we intend to fill this gap by analyzing actual costs and benefits of organizations who adopted e-mobility PSS in an e-mobility field trial.Based on these results we conclude on the question whether e-mobility PSS can support EV adoption.This paper is structured as follows: In Section 2 the framework and data used to evaluate costs and benefits of e-mobility PSS is described.In Section 3 the results of the cost benefit analysis are presented.In Section 4 methodological aspects and results are discussed.The article ends with a summary, a conclusion, and an outlook in Section 5.In order to evaluate e-mobility PSS in organizational fleets cost-benefit analysis is applied.Organization-specific costs per vehicle for using e-mobility PSS are assessed and compared to the organization-specific WTPM in order to calculate corresponding net benefits.Section 2.1 describes the methodological framework to evaluate costs and benefits of the PSS.Section 2.2 describes the data collected during an e-mobility project in south-west Germany between 2013 and 2015.The project included a large-scale fleet trial with 109 organizations owning 327 EV as well as a regional charging network with 181 interconnected charging points.While all cost parameters are determined by average cost values derived from scientific studies, parameters on WTPM are survey based.All parameters have equal weights as WTPM, costs for the vehicles, EVSE and services have monetary units that are directly capable of being totaled.All organizations interested to get involved in the project’s field trial were asked to participate in a voluntary fleet analysis involving a detailed analysis of driving patterns by logging driving profiles in order to find out about their fleets’ electrification potentials.45 of the interested 234 organizations volunteered to participate.109 participated in the field trial, i.e. decided to purchase at least one EV, project specific EVSE as well as additional hardware connectivity services.26 organizations are represented in both subsamples, i.e. provided driving profile data and participated in the field trial.As this study focuses on early EV adopting organizations we did not expect the organizations’ distributions concerning industrial sectors and size to be representative for south-west Germany.We are positively surprised that industrial sector distributions of participating organizations represent the number of employees in the different industrial sectors in Baden-Württemberg fairly well.38% of the participating organizations belong to the manufacturing sector, 14% to the wholesale and retail sector, 13% to public administration, 8% to information and communication and 6% to the construction sector.Less than 5% of the population are represented in the remaining sectors.Differences can be observed concerning organizations’ sectoral distributions compared to Germany’s new car registrations.About 80% of the organizations in this study employ less than 250 persons, i.e. 
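The cost-benefit framework outlined above compares, per organization and vehicle, the annual costs of the EV, the interconnected EVSE and the e-mobility services with the organization-specific WTPM and, where applicable, the compensation of expenses. A minimal sketch of this bookkeeping is given below; all field names and example figures are illustrative assumptions and not project data.

from dataclasses import dataclass

@dataclass
class PssCase:
    # Annual, per-vehicle view of one organization's e-mobility PSS adoption (EUR/a).
    tco_gap_ev: float          # additional TCO of the EV versus the reference ICEV
    cost_evse: float           # annualized cost of interconnected charging hardware
    cost_services: float       # annual cost of connected e-mobility services
    wtpm_ev: float             # willingness to pay more for the EV itself
    wtpm_evse_services: float  # willingness to pay more for EVSE and services
    compensation: float = 0.0  # monetary incentive granted in the field trial

    def net_benefit(self, include_compensation: bool = True) -> float:
        benefits = self.wtpm_ev + self.wtpm_evse_services
        if include_compensation:
            benefits += self.compensation
        costs = self.tco_gap_ev + self.cost_evse + self.cost_services
        return benefits - costs

example = PssCase(tco_gap_ev=800, cost_evse=600, cost_services=300,
                  wtpm_ev=900, wtpm_evse_services=250, compensation=2000)
print(example.net_benefit(), example.net_benefit(include_compensation=False))

Evaluating the net benefit with and without the compensation term mirrors the distinction made later between results that do and do not depend on the monetary incentives granted in the field trial.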
are small and medium-sized enterprises. This share is comparably high, as only about 50% of employees in Baden-Württemberg work for organizations employing less than 250 persons. The fleet managers and decision makers in the participating organizations are on average 45 years old, are predominantly male and are well educated. About half of them have completed academic studies and about 30% have a degree at university entrance level or a master craftsman diploma. 50% have a technical and about 40% a commercial background. On average, the respondents have been employed for 16 years in their organizations and have an experience level with fleet management activities of 10 years on average. Half of them dedicate more than 10 h per month to fleet management activities, 25% four hours or less and 25% more than 20 h. Field trial participation came along with a monthly compensation of expenses. Constrained by the real costs for EV, the participating organizations received up to 500 Euros monthly per BEV or REEV and 350 Euros monthly per PHEV in order to compensate for additional costs of project specific EVSE, for the still existent economic disadvantages of EV and for providing data. Due to this setup, collecting the substantial amount of high-quality organization-specific information was possible. An overview of the specific datasets used for the assessment of WTPM, costs and net benefits of EV, EVSE and the services considered is provided in Table C.1. The vehicle driving profile dataset is available for the subsample of 45 organizations, who had not necessarily decided to participate in the field trial at the point in time the data was collected. The dataset consists of profiles of all trips of a vehicle within at least three weeks of observation which were collected with GPS-trackers for ICEV to test potential replacements by EV. Several pieces of information about the company as well as the vehicle were collected in a short survey. This dataset contains driving profiles of 223 commercially licensed vehicles of 45 organizations participating in the project. The 223 vehicle driving profiles were collected over an average observation period of 23.1 days with an average daily mileage of 56.6 km. At night, the vehicles are mainly parked at the company site with a dedicated parking spot. Most of the vehicles are fleet vehicles which are used by several users. Similar to the sample of organizations participating in the field trial, most of the 45 companies which volunteered to provide GPS tracks have less than 250 employees and are mostly located in small cities below 100,000 inhabitants. The detailed questions asked in the survey are provided in Appendix E. In order to evaluate costs and benefits of e-mobility PSS we first analyze EV. Second, we analyze EVSE and corresponding connected e-mobility services. Third, the results concerning costs and benefits of EV, EVSE and e-mobility services are combined in order to calculate net benefits of the whole e-mobility PSS. Sensitivity analyses are conducted to analyze effects of parameter variations on TCO and overall net benefits. We first have a look at the TCO differences of the vehicles. For this reason, we compare annual TCO for the cheapest EV with those of the cheapest ICEV. We observe that in 2015, annual TCO are about 800 € higher for most users and hardly amortizable with current prices and driving behavior. Thus in 2015, based on the driving profiles analyzed, EV cannot be paid off. Positive TCO results seem possible only in selected use cases. These cost differences do not include the costs for
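A minimal sketch of the kind of annual TCO comparison referred to above is shown below: the net investment is annualized with a capital recovery factor over the holding period, and annual energy or fuel costs follow from the annual mileage. All parameter values are assumptions chosen only for illustration; they are not the vehicle data used in the study and will differ by vehicle, year and country.

def annuity_factor(rate: float, years: float) -> float:
    # Capital recovery factor used to annualize an up-front investment.
    if rate == 0:
        return 1.0 / years
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def annual_tco(price: float, resale: float, years: float, rate: float,
               km_per_year: float, consumption_per_km: float,
               energy_price: float, fixed_costs: float) -> float:
    # Annual total cost of ownership: annualized net investment plus
    # energy or fuel costs and other fixed annual costs (insurance, maintenance, tax).
    net_investment = price - resale / (1 + rate) ** years  # discounted resale value
    capital = net_investment * annuity_factor(rate, years)
    energy = km_per_year * consumption_per_km * energy_price
    return capital + energy + fixed_costs

# Illustrative comparison over a 3.8-year holding period and 15,000 km per year.
ev = annual_tco(price=30000, resale=14000, years=3.8, rate=0.05, km_per_year=15000,
                consumption_per_km=0.18, energy_price=0.28, fixed_costs=900)
icev = annual_tco(price=22000, resale=11000, years=3.8, rate=0.05, km_per_year=15000,
                  consumption_per_km=0.065, energy_price=1.35, fixed_costs=1100)
print(round(ev - icev))  # a positive value means the EV is more expensive per year

With these illustrative parameters the gap is of the same order of magnitude as the difference of about 800 € per year reported above, whereas the actual calculations rest on vehicle-specific driving profiles and cost parameters.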
EVSE which might be considerable.However, the significant additional EV specific costs might be compensated by companies’ WTPM for an environmental or marketing effect.Our sensitivity analysis focuses on parameters potentially influencing TCO calculations as corresponding cost parameters might heavily change in the future and regional aspects might influence some of the parameters.Fig. 4 shows the effects of parameter variation on TCO calculations and consequently e-mobility PSS specific net benefits.Sensitivities within the range of 25% are highest for battery price, followed by interest rate, fuel prices and electricity price.This study deals with the analysis and evaluation of e-mobility PSS based on an approach that combines techno-economic and user-behavioral aspects in order to answer the central research question whether e-mobility PSS in company fleets can support EV adoption.Section 4.1 discusses and critically reflects methodological aspects of our research.In Section 4.2 results are discussed.In order to combine the techno-economic analysis of e-mobility PSS with non-monetary benefits, we conducted a survey with participants of a field trial with EV and let them estimate the non-monetary value of e-mobility PSS including different service bundles.The TCO approach is often criticized as being inconclusive for a vehicle purchase decision, yet it was repeatedly stated by organizations that it is the most important aspect in a commercial vehicle buying decision.However, this does not necessarily mean that organizations really calculate TCO in their vehicle buying decisions.They might rather use perceived estimates in their decision-making processes.Though the vehicle buying decision is complex and may include several decision making steps, the focus of this study is to analyze net benefits of e-mobility PSS as a whole.Therefore, the approach applied combining techno-economic assessments with behavioral aspects seems reasonable.The design of the questionnaire allowed the fleet managers and decision makers of commercial vehicle users to set non-monetary values of EV, EVSE and e-mobility services in relation to their actual costs.As we asked the fleet managers about WTPM for EV, EVSE, and corresponding services, they tried to monetarize the benefits.However, do these monetarized benefits appropriately represent the real benefits of the e-mobility PSS?,The large spread in the net benefits shows that corresponding perceptions vary significantly between organizations.Consequently, the results should be interpreted carefully.The participating organizations represent a very special early adopter group that received expense compensations for participating.The survey sample consists of early EV adopters so it is difficult to draw conclusions about future adopters of EV who will enter the market later and might have different motivations.For example, the average WTPM for e-mobility PSS might be lower when an early majority is about to enter the market and it is also expected to decrease with increasing market diffusion.Hence, results should be considered as an upper estimate.However, some services could potentially increase their attractiveness with increasing market penetrations potentially resulting in increasing WTPM.Furthermore, in addition to incomplete datasets due to the different subsamples of different datasets, missing value problems reduced sample sizes.We controlled for potential errors by additionally calculating net benefit based on the subsample with full data 
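The one-at-a-time sensitivity analysis described above varies each cost parameter within a ±25% band and records the effect on the result. A generic sketch of this procedure is given below; the toy TCO-gap model and all coefficients and baseline values are purely illustrative assumptions, not the study's cost model.

def oat_sensitivity(model, baseline: dict, spread: float = 0.25) -> dict:
    # One-at-a-time sensitivity: vary each parameter by +/- spread around its
    # baseline and record the resulting deviation of the model output.
    base_value = model(**baseline)
    ranges = {}
    for name, value in baseline.items():
        low = model(**{**baseline, name: value * (1 - spread)})
        high = model(**{**baseline, name: value * (1 + spread)})
        ranges[name] = (min(low, high) - base_value, max(low, high) - base_value)
    return ranges

def tco_gap(battery_price, interest_rate, fuel_price, electricity_price):
    # Deliberately simple stand-in for an annual EV-versus-ICEV cost difference (EUR/a).
    return (0.2 * battery_price - 2000 + 30000 * interest_rate
            - 960 * fuel_price + 2700 * electricity_price)

baseline = dict(battery_price=10000, interest_rate=0.05,
                fuel_price=1.35, electricity_price=0.28)
for name, (lo, hi) in oat_sensitivity(tco_gap, baseline).items():
    print(f"{name:>17}: {lo:+.0f} to {hi:+.0f} EUR/a")

Ranking the parameters by the width of these ranges yields the kind of ordering discussed above, with the result reacting most strongly to the battery price under the assumptions chosen here.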
availability.Differences observed between the two approaches are not significant.Costs for charging infrastructure and services were taken from an early stage e-mobility project."The participating organizations' decisions to adopt and use the interconnected charging infrastructure and services might also be linked to an increased WTPM.However, the comparably high costs for EVSE and e-mobility services provided within this project might represent the current market situation.Our TCO and e-mobility PSS net benefit calculations are based on a large set of parameters and assumptions.These are related to values observed in 2015, the observation country and the specific observation population.In the future preconditions for e-mobility PSS specific net benefits might change increasing net benefits of EV.Battery prices are assumed to continue to decline.Second hand values of diesel and gasoline cars might decrease quicker than assumed due to governments’ plans to ban ICEV from many cities.Although the short investment horizon of 3.8 years used in this study represents the average holding time of cars in the commercial sector of Germany, strong variations depending on organizations’ commercial sector are possible.According to analyses of other studies, long investment horizons are in favor of EV and sensitivities are comparably low.Furthermore, electricity and gasoline prices as well as incentive schemes differ between countries and regions and should therefore be considered when interpreting the results presented.The results presented in Section 3.1 indicate that despite the disadvantages of EV WTPM for EV of organizations participating in the field trial compensate additional EV specific costs.These findings are in line with Plötz et al. and Peters and Dütschke showing that private EV users are willing to pay more for EV compared to ICEV.WTPM for EV can be explained by innovative car pool managers benefitting from EV procurements due to technophilia.Furthermore, organizations benefit from positive effects on employee motivations and EVs’ innovative and environmental image.The results presented in Section 3.2 indicate that net benefits of individually customized service bundles are higher than net benefits of the predefined ones.Individual consulting for individual composition of service bundles could increase corresponding net benefits significantly.The market for e-mobility PSS is still highly diversified.Hence, requirements for individual consultancy services are high.There might not be only one e-mobility PSS to succeed in sales activities in organizational fleets.Creating customized e-mobility PSS offers for different types of potential EV adopting organizations might be a convincing strategy for stakeholders and would fit with the smart city paradigm assuming a higher flexibility of services to accommodate individual needs.As interconnected EVSE of other organizations were used only infrequently in our case study, inter-organizational charging activities could hardly be observed.Nevertheless, it seems that the users interpret this service as a kind of insurance against flat batteries and are therefore willing to pay for this charging platform service even without its actual usage.Furthermore, platform-connected EVSE delivers a sound basis for providing smart energy services and multimodal platform services including offerings as e.g. 
ride sharing and corporate carsharing in addition to the connected charging platform services considered in this case study.Such additional platform services are co-creating value for EV users and providers by using information exchanged in real-time between EV, EV users, service platforms and other stakeholders.This could contribute to balance the negative benefits of EVSE.The results presented in Section 3.3 show that today financial support can be an important incentive for EV adoption.If prices for EV, EVSE and e-mobility services are further decreasing, monetary incentives could also be reduced.The findings of this paper show that annual net benefits for most organizations are clearly positive due to the compensation of expenses granted.For about half of the organizations net benefits of the EV are positive without considering the effect of the monetary incentives.However, net benefits of interconnected EVSE and corresponding services are negative for about 80% of the participating organizations.Sierzchula et al. as well as Harryson et al. show that financial incentives and availability of EVSE are positively correlated with different countries’ EV market shares.The results of this case study point out that the diffusion of EV could be supported not only by providing incentives to vehicle acquisitions but also by incentivizing e-mobility PSS including interconnected EVSE and corresponding platform services being part of publicly accessible charging networks.This could result in positive spillover effects, as additional publicly accessible EVSE offering smart charging services would be put in place that would again positively impact EV sales and developments towards smarter mobility solutions and be in line with the development towards the smart city paradigm.The results of our sensitivity analysis show that EVs’ TCO are particularly sensitive to variations of battery prices.Expected fast decreasing battery prices more than halving until 2020 would significantly increase net benefits of EV.Effects of electricity and fuel price parameter changes within the range of 25% are comparably low.Despite the high sensitivity potential of future battery price developments, the lever of governments’ incentive programs on overall net benefits of e-mobility PSS is comparably high.Incentives amount to more than 2500 €/a in Norway, more than 2000 €/a in France and to more than 2000 €/a in this project’s fleet test.These findings are in line with Palmer et al. showing that government support for low-emission vehicles clearly needs to address financial barriers if EV market share is to break out of the niche market.Recently many field trials with EV intending to counteract climate change, to reduce oil dependency, particulate matter pollution and noise emissions in urban areas by electrifying road transport were carried out in order to develop corresponding technologies.During a field trial with 109 organizations using 327 EV driving profiles, survey data, actual costs for interconnected EVSE solutions as well as information on the compensation of expenses granted to organizations participating were collected.Net benefits for e-mobility PSS, i.e. 
EV, interconnected EVSE and e-mobility services were evaluated based on fleet managers’ perspectives by analyzing costs and WTPM.The central research question addressed in this article whether e-mobility PSS can support EV adoption can be answered as follows: Currently the costs for interconnected EVSE solutions and e-mobility services outweigh corresponding WTPM.However, e-mobility PSS offerings are more promising if they are adapted to individual needs.They might become even more beneficial to EV users, particularly by considering benefits of smart energy services and multimodal platform services in addition.Consequently, WTPM for e-mobility PSS could increase resulting in a higher probability to adopt.In addition to that it is very likely that interconnected EVSE solutions and corresponding services allocated on the market underlie economies of scale and so will become cheaper in the future.Consequently, positive net benefits of e-mobility PSS might be possible for more organizations in the future without government incentives, particularly if increasing inter-organizational usage frequencies of EVSE are taken into account.Therefore, it is very likely that e-mobility PSS will directly support EV adoption in the future.However, in the current market phase the EVSE and e-mobility services offered rather negatively affected the adoption of EV, the high prices of interconnected EVSE in particular.Although the EVSE and e-mobility services offered did not directly contribute positively to higher net benefits of e-mobility PSS in most organizations, extending the e-mobility charging service offering by further additional smart platform services following the smart city paradigm might positively affect overall net benefits.Smart energy demand response services, billing services permitting to charge private EV with photovoltaic energy produced at the home roof top at the workplace, billing services to charge company cars at home with electricity paid by the employer and multimodal platform services are additional services that could enhance e-mobility PSS offerings.In addition, positive effects of publicly accessible EVSE encountering range anxiety should be considered before conclusions are made concerning the research question whether e-mobility PSS can support EV adoption.Considering positive indirect effects of EVSE availability on EV diffusion should be particularly considered when incentive schemes are designed.The financial incentive program of this study’s field trial supported PSS sales activities, i.e. to allocate EV, project specific interconnected EVSE and corresponding charging platform services.This resulted in positive overall net benefits for most of the participating organizations.Future work could in addition to services considered in this analysis focus on further additional, customer-oriented smart services, as interconnected EVSE solutions provide the basis for EV specific EVSE being part of the internet of things fostering possibilities to offer further e-mobility specific charging platform services to organizations and EV users.Future work could focus on evaluating costs and benefits of such advanced e-mobility PSS integrating additional innovative charging platform services forming service bundles supportive to the smart city paradigm. | Plug-in electric vehicles are seen as a promising option to reduce oil dependency, greenhouse gas emissions, particulate matter pollution, nitrogen oxide emissions and noise caused by individual road transportation. 
But how is it possible to foster diffusion of plug-in electric vehicles? Our research focuses on the question whether e-mobility product service systems (i.e. plug-in electric vehicles, interconnected charging infrastructure as well as charging platform and additional services) are supportive to plug-in electric vehicle adoption in professional environments. Our user oriented techno-economic analysis of costs and benefits is based on empirical data originating from 109 organizational fleets participating in a field trial in south-west Germany with in total 327 plug-in electric vehicles and 181 charging points. The results show that organizations indicate a high willingness to pay for e-mobility product service systems. Organizations encounter non-monetary benefits, which on average overcompensate the current higher total cost of ownership of plug-in electric vehicles compared to internal combustion engine vehicles. However, the willingness to pay for e-mobility charging infrastructure and services alone is currently not sufficient to cover corresponding actual costs. The paper relates the interconnected charging infrastructure solutions under study to the development of the internet of things and smarter cities and draws implications on this development. |
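A minimal sketch of the annualized TCO comparison, the net-benefit bookkeeping against WTPM, and the ±25% one-at-a-time sensitivity sweep described in the e-mobility PSS analysis above is given below. All cost and WTPM figures are placeholder assumptions chosen for illustration, not values from the field trial, and the annuity-based capital-cost formula is one common convention rather than the study's exact calculation.

```python
"""Minimal sketch of an annualized TCO / net-benefit comparison for an
e-mobility product-service system (PSS), with a one-at-a-time +/-25%
sensitivity sweep. All figures are illustrative placeholders, not study data."""


def annuity_factor(rate, years):
    """Spread an up-front amount into equal annual payments."""
    if rate == 0:
        return 1.0 / years
    return rate / (1.0 - (1.0 + rate) ** -years)


def annual_tco(purchase_price, residual_value, years, rate,
               km_per_year, energy_use_per_km, energy_price, other_fixed):
    """Annualized total cost of ownership: capital + energy + other fixed costs."""
    capital = (purchase_price - residual_value / (1.0 + rate) ** years) * annuity_factor(rate, years)
    energy = km_per_year * energy_use_per_km * energy_price
    return capital + energy + other_fixed


# Placeholder parameter sets (EUR; kWh/km for the EV, l/km for the ICEV).
base = dict(years=3.8, rate=0.05, km_per_year=15000)
ev = dict(purchase_price=35000, residual_value=14000, energy_use_per_km=0.20,
          energy_price=0.28, other_fixed=900, **base)    # electricity in EUR/kWh
icev = dict(purchase_price=25000, residual_value=11000, energy_use_per_km=0.07,
            energy_price=1.35, other_fixed=1100, **base)  # fuel in EUR/l

WTPM_EV, WTPM_EVSE_SERVICES = 1800.0, 300.0   # assumed annual willingness to pay more
EVSE_SERVICE_COST = 1200.0                    # assumed annual cost of EVSE + services


def net_benefit(ev_params, icev_params):
    """WTPM minus the EV-specific extra TCO and the EVSE/service costs."""
    delta_tco = annual_tco(**ev_params) - annual_tco(**icev_params)
    return (WTPM_EV + WTPM_EVSE_SERVICES) - delta_tco - EVSE_SERVICE_COST


print(f"base case net benefit: {net_benefit(ev, icev):8.0f} EUR/a")

# One-at-a-time +/-25% variation of the parameters highlighted in the text.
sweeps = [("battery/purchase price", "ev", "purchase_price"),
          ("interest rate", "both", "rate"),
          ("fuel price", "icev", "energy_price"),
          ("electricity price", "ev", "energy_price")]
for label, target, key in sweeps:
    for factor in (0.75, 1.25):
        ev_v = dict(ev, **{key: ev[key] * factor}) if target in ("ev", "both") else ev
        icev_v = dict(icev, **{key: icev[key] * factor}) if target in ("icev", "both") else icev
        print(f"{label:24s} x{factor:4.2f}: {net_benefit(ev_v, icev_v):8.0f} EUR/a")
```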
31,465 | Elements for optimizing a one-step enzymatic bio-refinery process of shrimp cuticles: Focus on enzymatic proteolysis screening | Purification of crustacean chitin shells has been studied by many authors and today represents an important economic activity particularly in the context of shrimp shells value-enhancing schemes .In fact the applications of chitin and its derivatives are more and more widespread.However, the process used is purely chemical and allows only an enhancing value of a small portion of the biomass.Efforts were therefore made to limit the use of chemicals and make this type of purification more sustainable.Bio-refining of crustacean shells, especially shrimp, is an economic, technical and scientific objective already described by some authors .Two biotechnological ways are found in literature: fermentation or enzymatic hydrolysis .A bio-refining process in a single step by an exogenous proteolysis in acidic media would enable us to perform chitin purification and deproteination in the same time.Recently, we have shown the promising potential of the bio-refining in a single step of Litopenaeus vannamei shrimp shells.The authors have mainly focused on the kinetics of demineralization and the choice of a suitable acid that could ensure a high demineralization yield for a pH value close to 4.0.Formic acid best fits the selected target criteria.This acid achieves a demineralization yield of 99% at pH 3.5 and 98% at pH 4.0, depending on the selected volume.An increase in solution volume promotes final demineralization.In 6 h, a combination of formic acid and ASP enzyme, in sufficient concentration, allowed to go beyond the 95% protein removalyield, at pH 3.5 or 4.0.The purity of the obtained chitin is respectively 92% at pH 3.5 and 90% at pH 4.0.The resulting chitin purity over 90%, for a single stage process working in 3.5–4 pH range avoids the additional steps of neutralization of both the solid and dissolved phases.Here we focus on determining the effectiveness of ten other commercial proteases compared to the ASP enzyme working in 3.5–4.0 pH range.The determination of an enzyme reaching a maximum deproteination yield after 6 h of hydrolysis in 3.5–4.0 pH range, and preferably at pH 4.0 needing less amount of acid, was first sought.The amount of residual proteins was determined using the sum of the quantitative analysis of 16 amino acids.The amino acid profile was also analyzed.The study of size exclusion chromatographs in conjunction with the molecular weight distribution of the generated peptides was conducted on the dissolved phase.All information collected will provide substantial support for the choice of the enzyme.The raw material used here corresponds to the Litopenaeus vannamei shrimp exoskeleton thawed, peeled by hand, dried, crushed and sieved.The size of the pieces of shell was between 0.5 and 1.0 mm.The protocol for obtaining the raw material is described in the previous article .Composition of the ground cuticle, after mild drying, was: 11.2 ± 2.0% water, 23.4 ± 3.6% minerals, 35.0 ± 2.0% proteins, 25.2 ± 3.0% chitin, and ∼5% others.Composition in brackets are given for 5 g of dried raw material.Ash content was measured gravimetrically, percentages of residual minerals and demineralization yield calculated as described in Baron et al. .Protein content is obtained by summing the concentrations of 16 amino acids which were identified, percentages of residual proteins and deproteination yield were calculated according to Baron et al. 
.For experiments, a fixed initial weight of 5.0 g of mild dried shrimp cuticles was used in a preset volume of acid solution under constant continuous stirring with magnetic stirrers.Temperature was controlled at 50 °C with thermostatic plates.Each time point corresponded to a specific test with 5.0 g of cuticle and the whole reaction volume was collected to ensure the consistency and accuracy of the results.All the solids were removed by filtering with Nylon filters of mesh size 300 μm.Reaction on solids was stopped by rinsing abundantly with 500 mL of distilled water.Formic acid was purchased from Sigma-Aldrich.Solution pH was measured with an analytical pHmeter and with an electrolytic pH electrode.Enzyme activities are either not identical, or expressed in different units, or not supplied by the manufacturer.This makes it difficult to determine the amount of enzyme to be added in order to carry out this comparative work.We have chosen to work with a sufficient amount of enzyme with a weight to weight ratio of enzyme/proteins of 25%.For 5 g of shell, 1.75 g of proteins is assumed to be present.437.5 mg of enzyme are added 5 min after shells were poured in 150 mL reaction volume.Twenty milligrams of lyophilized aqueous phase samples from the hydrolysates were eluted in 10 mL solvent: 30% acetonitrile/0.1% trifluoroacetic acid, and were then centrifuged at 10,000g during 10 min in a Beckman Coulter Avanti J-25 refrigerated at 10 °C.The sludge and the soluble fraction were then separated .Peptides molecular weight distributions of the soluble fraction were determined by gel filtration chromatography on a FPLC Superdex Peptide 10/30 GL column: exclusion size range of 100 − 7.000 Da, eluting solvent.The flow rate was 0.5 mL/min.Detection signal was performed with a Diode Array Detector DAD Shimadzu SPD M20A.Detection of peptide bonds was preferentially measured at an absorbance of 205 nm.Standards injected were Glycine: Gly, Gly–Gly, Gly-Gly–Gly, Gly-Gly-Gly-Gly–Gly, Leupeptin, Substance P, Neurotensin, Insulin Chain B, Aprotinin.A calibration curve between retention time and peptide weight was established using standard peptides in triplicates.The relation between molar ratio and experimental pH value after 6 h at 20 °C was sketched in .On this basis and using the tendency given by the Henderson equation, we approximated the relationship at 50° C by first examining the pH obtained after a 6 h reaction time, for quantities of formic acid, respectively, 25, 30, 35 and 40 millimoles, added to 150 mL of water.A linear relationship of the form pH = −0.74*MR+4.83 fairly approximates pH as a function of MR.The Molar ratio needed to obtain the desired pH is respectively MR = 1.78 and MR = 1.12.An important increase of pH is observed in the first 15 min and, after this period, the pH increases very slowly.Indeed, for a molar ratio MR = 1.78 at 50 °C, the pH values after 15 min, 1 h, 2 h and 6 h were respectively 3.46, 3.48, 3.49 and 3.53.This result is very advantageous for the one-step enzymatic proteolysis process because pH remains constant during reaction time.In order to compare deproteination yields, eleven enzymes were tested in formic acid media at pH 4.0 and pH = 3.5 and at a temperature of 50 °C in a predefined volume solution.Results are shown in Fig. 
1.Two preliminary assays without enzymes were realized at pH 3.5, 4.0 and 7.0, temperature was 50 °C.Residual protein percentages were 75.0 ± 5.0%, 77.1 ± 5.2% and 77.9 ± 5.2% respectively meaning there is no significant difference in residual proteins in solid-phase whatever the pH tested.But the amount of protein extracted when adding enzymes is significantly higher for all assessed proteases as well at pH 3.5 then 4.0.At pH = 4.0, the average residual minerals percentage was 2.0 ±0.3% and only one enzyme lead to less than 5% residual peptides/proteins.Seven enzymes rendered deproteination yields superior to 90%.At pH = 3.5, the final percentage of minerals was 0.48 ± 0.1% and residual aminoacids percentages were around 5% for five enzymes.The amount of peptide/protein recovered in liquid phase is only slightly lower at pH 4.0 than at pH 3.5 with a relative difference of only 4.3%.The amount of minerals that pass in liquid phase is almost complete with a slightly lower value at pH 4.0 with a relative difference of 1.6%.Meanwhile, the amount of acid consumed is 60% lower at pH 4.0.For both pHs, even though important differences between residual amino acids percentage were found to be from 4 to 24%, this didn’t increase the percentage of residual minerals dispersion, meaning that the degree of demineralization is not linked to the degree of deproteination.At pH 4.0 with ASP enzyme, 1.23 ± 0.14 g of chitin containing 0.08 g proteins and 0.02 g of minerals, forming the residual solid, was obtained after filtration.These values indicate, for a biotechnological process, a high degree of purification of the chitin .Moreover, during the process involving chitin transformation into chitosan, its deacetylated derivative, the residual peptides are easily eliminated which allows achieving purity levels of over 98%.In order to compare amino-acids composition of hydrolysates obtained using 10 commercial enzymes and pepsin at pH 3.5, we analyzed the amino-acids composition of the residual solid once the enzymatic reaction had taken place.Results are shown in Fig. 2.Amino-acid profiles obtained at 50 °C are very similar for all enzymes originating from micro-organisms.A significant difference, when compared with pepsin result, was observed for glycine percentage.Amino-acid composition, obtained at pH = 3.5 and 50 °C using pepsin, was similar to the raw material.For all enzymes used in this study, working at pH 4.0 did not significantly affect the observed amino acids composition compared to pH 3.5.All human essential amino-acids are present in shrimp shell in important proportions when compared with those existing in human proteins, except for methionine which represented around 0.7% relative to total quantity of amino-acids.The total of human essential amino-acids represented about 39% of all amino-acids analyzed in shrimp shell, meaning approximately 0.7 g in 5 g of raw material.This percentage is very close to that of the soybeans which contain between 40 and 45 percent of proteins and have a nutritional quality higher than wheat if we consider their chemical score.In order to smooth the effect of the amount of extracted peptides and signal intensity fluctuations observed during repetitions of experiments, the signal was normalized by calculation on the basis of the area under the curve between the retention times from 20 to 50 min.Fig. 
3 illustrates the two major categories of molecular profiles observed.The profile of DP401 was chosen to illustrate the maximum dispersion observed with 10 enzymes besides pepsin.The average profile for the class of “fungal” enzymes is shown by the proximity of curves obtained for the Sumizym, protex 26L enzymes and Asp.The molecular profile when using ASP, 26L protex or Sumizym presents only a very small proportion of peptides below 900 Da.Peptides showed mainly sizes between 400 and 600 Da.The profile obtained with pepsin is clearly different.Its distribution is more spread out and starts at much shorter retention times.This curve is characterized by a maximum size of peptides of around 2000 Da.A significant proportion of peptides is larger than 6500 Da.Conversely, the proportion of peptides around 360 Da is very low.This profile is similar to those observed in previous authors work for lower pH at 40° C with pepsin and formic acid and retranscribed in size class in .Those previous results demonstrate that increasing hydrolysis time to 12 h or 24 h does not alter the molecular profile and does not significantly reduce the amount of residual proteins.The profile we observe is therefore comparable to that obtained in steady state.It is thus clear that for our matrix, enzymes cleavage sites are different in the case of pepsin compared to the other enzymes tested.The use of pepsin alone does not allow to obtain a significant proportion of small peptides, unlike the other enzymes tested.Considering that on one hand biological activity, especially antimicrobial activity, is increased for peptides weighting between 2000 and 300 Da , and that, on the other hand, digestibility of the hydrolyzate is facilitated by small sizes , it is clearly preferable to use “fungal” enzymes instead of pepsin alone.With regard to protein extraction yields, the degree of purification of chitin, the amount of acid used and the specifications generally required to utilize the soluble fraction of the hydrolyzate in animal feed, the results obtained with the “fungal” ASP enzyme at pH 4.0 are the most favorable outcome for the implementation of the bio-refinery process in one step proposed by the authors. | This article complements an earlier work published in 2015 Baron et al. (2015) that showed the interest of a shrimp shells bio-refining process. We compare here the effect of eleven commercial proteases at pH 3.5 or 4.0 on a residual amount of shrimp shells proteins after 6 h at 50 °C. The two pH are obtained when respectively 40 and 25 mmol of formic acid are added to 5 g of mild dried shell. Deproteinisation yield above 95% are obtained. Residual amino acids profile in the solid phase was identical for the eleven proteases except for pepsin which was similar to the raw material profile. A significant relative increase in the proportion of Glycine is observed for the ten other cases. Likewise, shapes of size exclusion chromatograms of the dissolved phase are similar except with pepsin. |
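The acid-dosing relation reported in the shrimp-cuticle study above, pH = −0.74·MR + 4.83 at 50 °C, lends itself to a short worked example: inverting the fit gives the molar ratio, and hence the approximate millimoles of formic acid per 5 g of shell, needed to hold a target pH in the 3.5–4.0 range. In the sketch below, the mmol-per-unit-MR conversion is back-calculated from the reported pairs (MR ≈ 1.78 ↔ 40 mmol and MR ≈ 1.12 ↔ 25 mmol) and is assumed constant; the yield helper simply restates deproteination yield as 100 minus the residual protein percentage.

```python
"""Sketch: formic acid dosing for the one-step shrimp-cuticle bio-refining step.

Uses the empirical 50 degC relation reported above, pH = -0.74*MR + 4.83, where
MR is the molar ratio. The mmol-per-unit-MR conversion is back-calculated from
the reported pairs (MR 1.78 ~ 40 mmol, MR 1.12 ~ 25 mmol for 5 g of dried shell
in 150 mL) and is treated here as a constant -- an illustrative assumption."""

SLOPE, INTERCEPT = -0.74, 4.83      # pH = SLOPE * MR + INTERCEPT (after 6 h at 50 degC)
MMOL_PER_UNIT_MR = 40.0 / 1.78      # ~22.5 mmol formic acid per unit MR and 5 g shell


def ph_after_6h(molar_ratio):
    """Predicted pH after 6 h for a given molar ratio."""
    return SLOPE * molar_ratio + INTERCEPT


def molar_ratio_for(target_ph):
    """Invert the linear fit to get the molar ratio giving a target pH."""
    return (target_ph - INTERCEPT) / SLOPE


def formic_acid_mmol(target_ph, shell_mass_g=5.0):
    """Millimoles of formic acid for a target pH, scaled linearly with shell mass."""
    return molar_ratio_for(target_ph) * MMOL_PER_UNIT_MR * (shell_mass_g / 5.0)


def deproteination_yield(residual_protein_pct):
    """Deproteination yield (%) from the residual protein left in the solid."""
    return 100.0 - residual_protein_pct


for ph in (3.5, 4.0):
    mr = molar_ratio_for(ph)
    print(f"target pH {ph}: MR = {mr:.2f}, ~{formic_acid_mmol(ph):.0f} mmol formic acid "
          f"per 5 g shell (back-check: predicted pH {ph_after_6h(mr):.2f})")

# ~5 % residual protein with the ASP enzyme corresponds to ~95 % deproteination.
print(f"yield at 5 % residual protein: {deproteination_yield(5.0):.0f} %")
```

Running the loop reproduces the reported dosing pairs (MR ≈ 1.78 / ≈ 40 mmol for pH 3.5 and MR ≈ 1.12 / ≈ 25 mmol for pH 4.0), which is the consistency check the sketch is built around.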
31,466 | RARβ Agonist Drug (C286) Demonstrates Efficacy in a Pre-clinical Neuropathic Pain Model Restoring Multiple Pathways via DNA Repair Mechanisms | The identification of an effective therapy for neuropathic pain has been challenging owing to three main factors: first, multiple mechanisms are involved for which no single multifactorial drug has been developed; second, differences in cellular and molecular mechanisms between animals and humans have hampered progress; and third, no single “switch” has been identified that could curtail the pathological cascade and provide a therapeutic target.There are two primary features of NP: hyperalgesia, increased pain from a stimulus that usually evokes pain; and allodynia, pain due to a stimulus that usually does not provoke pain.It appears that there are at least two distinct aspects to the development of these features: peripheral sensitization, involving changes in the threshold of peripheral nociceptors including possible spontaneous firing, and central sensitization, in which there are changes in the responsiveness at the central synapses relaying nociception, especially in the dorsal horn of the spinal cord.There is still debate about the importance of central sensitization and whether it relies, for its maintenance, on the peripherally sensitized input.Although it is generally agreed that there are a profusion of gene expression changes in NP, the underlying general mechanism by which they are induced is still uncertain.One suggestion is that the underlying cause is an inflammatory reaction to injury, which in turn causes DNA damage.Madabhushi and colleagues have shown that even neuronal activity can be sufficient to induce DNA damage, particularly in the promoter region of early response genes, causing their upregulation, and this, in turn, can alter the expression of late response genes, such as brain-derived neurotrophic factor.Their experiments supported the conclusion that DNA DSB formation was necessary and sufficient to induce early response gene expression and that DNA repair could reverse the gene expression.Similarly, Fehrenbacher and her colleagues showed that enhanced DNA repair could reverse the changes in neuronal sensitivity that they observed.In terms of cellular responses, converging lines of evidence support that a specific microglia inflammatory phenotype characterized by the de novo expression of the purinergic receptor P2X4 is critical to the induction of core pain signaling, mediated by the release of BDNF, which produces hypersensitivity in nociceptive neuron in the spinal dorsal horn.It is not understood how this specific spinal microglia phenotype that arises during the acute stage following peripheral nerve injury results in imprinting of the chronic and persistent changes in the spinal nociceptive networks after the acute inflammatory response has subsided.Epigenetic alterations in spinal microglia during the acute inflammatory response presents a favorable paradigm for the imprinting mechanism driving chronicity of the pain state because of the high transcriptional activity induced by the inflammatory response and the associated increase in DNA DSB.Indeed, a wealth of data suggests that the fragility of actively transcribing loci is intertwined with genomic changes that are linked to altered cellular function and disease.This raises the question: could this represent a biological switch and thus a therapeutic target, whereby inducing an increase in DNA repair following PNI would preserve the genomic landscape of 
the spinal microglia during acute activation, when high transcriptional activity is expected, and thus provide an effective way to target NP?,Here we show that a novel drug, Retinoic Acid Receptorβ agonist, C286, prevents NP by restoring pathways that are chronically altered in the spinal cord after PNI and that this is associated with a switch in the spinal microglia P2X4R phenotype via a mechanism dependent on the breast cancer susceptibility gene 1.Since the retinoic acid pathway is highly conserved between species, our findings support C286 as a plausible impending therapy for NP and provide evidence that DNA repair mechanisms are disease-modifying therapeutic targets.RA has been shown to inhibit TNFα and iNOS in reactive microglia, and our previous work shows that stimulation of RARβ hampers astrogliosis after spinal cord injury.We therefore hypothesized that a novel drug RARβ agonist, C286, may modulate the inflammatory response of activated microglia to prevent the onset of the microglia-neuron alterations that underly NP.Because we specifically wanted to investigate the effect of the drug in P2X4R+ microglia and this phenotype has been shown to evoke spinal mechanisms of nerve injury-induced hypersensitivity predominantly in males but not in female rats, we chose male rats only for this study.Using an established rat model of NP, L5 spinal nerve ligation, we assessed the effect of C286 given orally for 4 weeks on mechanical and thermal pain thresholds over the treatment period.C286 treatment reversed the hypersensitivity caused by SNL to levels comparable with the preinjury state.We next used co-expression analysis of genome-wide RNA sequencing of dorsal horns isolated from non-injured and L5-SNL rats that had been treated with vehicle or C286 to delineate pathways that may have a role in the formation of the long-term hyperalgesia-related imprint in the SC.The non-injured tissue was used to establish the normal gene expression with and without C286, whereas the L5-SNL vehicle-treated tissue served as a platform to identify gene expression patterns that were induced by the surgery and peripheral lesion and was used as a control to directly compare gene expression changes that were altered solely owing to the drug treatment.Through analysis of co-expression paths we identified a variety of genes involved in a broad range of cellular functions, including neural transmission, cell adhesion, growth cone and synapse formation, and mitochondrial function.Among differentially expressed transcripts we identified genes associated with pain-related pathways, altered in different models of pain, or encoding products interacting with proteins involved in pain-related pathways.We observed that C286 upregulates pathways that are compromised in NP: cell adhesion, growth cone, and gap junction and downregulates pathways back to non-injured baseline that are upregulated in NP: long-term potentiation, WNT, MAPK, erbB, TRP channels, and cAMP.Because of their prominent role in the regulation of nociceptive signal perception we focused on the MAPK and WNT pathways for further analysis.WNT signaling in the SC stimulates the production of proinflammatory cytokines through the activation of WNT/FZ/β-catenin pathway in nociceptive neurons.MAPK is activated in spinal microglia after PNI and, upon nuclear translocation, activates transcription factors that promote dynamic nuclear remodeling.This results in the transcription and translation of proteins that prolong potentiation and decrease the threshold for 
receptor activation, the molecular underpinnings of clinical allodynia.The WNT receptor Frizzled 10 and the death domain-associated protein, Daxx, components of the WNT and MAPK pathways, respectively, were highlighted by our co-expression analysis owing to the magnitude of their expression changes between vehicle and C286-treated L5-SNL rats.FZD10 has been shown to be expressed in pain pathways, including dorsal horn neurons, and Daxx has a well-established role in apoptosis but can also participate in numerous additional cellular functions as a mediator of protein interactions, as a potent suppressor of transcription, and as a modulator of cargo-loaded vesicles transport, an important emerging factor in neuron-glia cross-talk during NP.Immunohistochemistry confirmed downregulation of FZD10 and Daxx protein levels by C286.Next, we wanted to ascertain if the switch in the microglia phenotype from predominantly P2X4R+ to P2X4R− correlated with higher DNA repair efficiency.We reasoned that an increase in DNA repair during the acute phase of microglia activation, when transcriptional changes are occurring during adaptation to the injury, could prevent the occurrence of transcriptional imprints that contribute to chronic pain.This would favor regaining the non-activated genomic state.The involvement of the DNA repair protein BRCA1 in spinal microglia after injury has been recently described where an initial physiological attempt to repair is seen by an increase in BRCA1 expression, but that is not sustained beyond 72 h post injury.A link between BRCA1 and RA signaling has been highlighted by previous studies; genome-wide analysis suggests a role for BRCA1 in transcriptional co-activation to RA and RAR/RXR-mediated transcription requires recruitment of the BRCA1 co-repressor C-terminal-binding protein 2, which could result in the elevation of BRCA1 transcription, a mechanism already described for estrogen.To assess if C286 could be prolonging BRCA1 expression, we measured BRCA1 levels in the dorsal horn by western blotting and by immunochemistry in spinal microglia and found that C286 significantly increased BRCA1 levels, predominantly in the nucleus.Cellular responses to DNA damage are mediated by an extensive network of signaling pathways.The ataxia telangiectasia mutated kinase responds specifically to DNA DSBs, which are associated with signal-induced transcriptional changes.ATM can be activated by RA and suppresses MAPK pathways via a DSB-induced response whereby MKP-5 is upregulated and dephosphorylates and inactivates the stress-activated MAP kinases JNK and p38.We therefore assessed ATM phosphorylation levels in the SCs and found that C286 significantly increased pATM in spinal microglia.Concomitantly, we observed a significant decrease in the ATM target and DNA damage marker γH2AX.To confirm if the modulation of these two DNA repair mechanisms was a direct effect of the agonist in microglia, we treated lipopolysaccharide-activated microglia cultures with vehicle, C286, an ATM inhibitor alone, or with C286 and found that C286 significantly increased BRCA1 and pATM and significantly decreased γH2AX compared with vehicle.Importantly, the effect on pATM was completely abrogated in the presence of KU55933, suggesting a direct effect on ATM auto-phosphorylation.To functionally validate the RARβ-BRCA1 pathway in pain we used lentiviral transduction of shRNA BRCA1 in our rat model of NP.Treatment with C286 yielded no significant improvement in the pain thresholds when BRCA1 was 
ablated.Confirmation of effective lentiviral transduction was obtained by immunochemistry.Further analysis of BRCA1 expression in spinal microglia showed that this was significantly decreased in LV/BRCA1shRNA + C286-treated rats compared with LV/sc + C286, and the inverse was seen with γH2AX.In agreement with the pain behavioral tests, we found that the calcitonin gene-related peptide, which contributes to the hypersensitization, was significantly upregulated in the dorsal horn of LV/BRCA1shRNA + C286-treated rats.To establish if there was a direct link between BRCA1 and the microglia activation, we assessed the levels of P2X4R in spinal microglia and found a significant increase in the LV/BRCA1shRNA + C286-transduced rats.This effect was also seen for NGF, TNFα, and TNFR1, indicating that an inflammatory environment was still present in the SC.Concurrent protein expression analysis of BDNF and the components of the MAPK and WNT pathways, which had been modified by the agonist in L5-SNL non-transduced rats, showed a significant increase with the suppression of BRCA1 despite the agonist treatment.Collectively, we show that C286 generates a “repair proficient” environment that may influence epigenetic modification of some enhancers in microglia, resetting the transcriptome toward a resting state after injury and thus reducing the long-term transcription of NP-associated genes.C286 modulates DNA repair mechanisms involving BRCA1 and ATM in spinal microglia, the former being directly linked to the P2XR4 phenotype and the development of NP.This supports the concept that transcription-induced persistent damage that is inefficiently repaired could chronically alter the epigenetic landscape, in line with the emerging importance of BRCA1 in neurodegenerative diseases.Current therapeutic strategies generally aim at a single molecular target.These are yielding unsatisfactory results and are thus giving ground to a multifactorial approach targeting the numerous pathways involved, one possibility being to influence DNA repair mechanisms.Here we show that C286 has multiple effects on pathways that contribute to the chronicity of the neuronal sensitivity and thus might prove a more successful approach for the treatment of NP.We found that the WNT pathway is one of the most significantly downregulated pathways by the agonist.The importance of the WNT/FZD signaling in signal transduction and synaptic plasticity alterations, which are essential to SC central sensitization after nerve injury, has been documented before.It is thought that WNT/FZD/β-catenin signaling contributes to the onset and persistence of pain after nerve injury, through activation of signaling pathways that recapitulate development, such as axon guidance, synaptic connection, and plasticity in the spinal cord.Spinal blockade of WNT signaling can inhibit the production and persistence of PNI-induced NP and prevent upregulation of the NR2B receptor and the subsequent Ca2+-dependent signals CaMKII, Src/Tyr418, pPKCγ, ERK, and cAMP response element-binding protein within the SC pain pathways.Curiously, we found that C286 suppresses WNT/FZD signaling and upregulates pathways involved in regeneration, which are also important during development.This may seem an incongruence, but we must consider that the overall biological effect is determined by a network of interacting pathways.WNT is known to interact with ephrinB-EphB receptor signaling, which also activates various developmental processes of the nervous system in response to nerve injury 
and is thought to contribute to pain enhancement.These interactions may result in an exacerbation of neurochemical signs within development pathways that trigger and sustain pain pathways.Therefore, it is likely that the C286-mediated stimulation of the regeneration and development pathways is quite different, both qualitatively and quantitatively, because C286 upregulates transcription of these pathways to preinjury levels but not beyond.This promotes the restoration of homeostasis and prevents activation of pathways that sustain pain.It is interesting that Daxx showed the highest downregulation within the MAPK pathway.Daxx is associated mostly with triggering apoptotic pathways that result in cell death and/or senescence.The agonist prevented the upregulation of Daxx in response to the injury and concomitantly upregulated various other pathways that are associated with normal cellular functions: cell-adhesion, mitochondria function, etc.The counterpart scenario, i.e., the downregulation of these pathways in the vehicle-treated rats, possibly reflects a state of compromised cellular functions in the SC.Therefore, it seems that Daxx could be an important contributor to cell fate in PNI-induced NP in the SC.We found that RARβ activation downregulates TNF-α, which is one of the cytokines that induces phosphorylation and stabilization of Daxx through ASK1 activation.This is essential for activation of the pain signaling pathways, JNLK and p38."Thus, it is possible that the marked downregulation of Daxx by C286 is in part a consequence of the agonist's anti-inflammatory effect.Similarly, the prevention of the reactive microglia P2X4R phenotype could be a direct consequence of a milder inflammatory milieu facilitated by the acute agonist action.Nonetheless, the overall effect of C286 cannot be justified entirely and solely by an initial anti-inflammatory effect.If that was the case, then anti-inflammatory treatment would be a successful therapeutic approach.Arguably, it is a combination of different mechanisms directly and indirectly affecting various intracellular functions: DNA repair, transcription, organelle transport, energy supply, and secretion of signaling molecules, which contributes to the RARβ modulation of NP.C286 also induced an upregulation of cell adhesion and cell junction pathways.This is noteworthy because adhesion proteins, which normally build and modify synapses, also participate in different aspects of synaptic and circuit reorganization associated with NP.We challenge the dogma that nuclear receptor agonists are unpromising therapeutic targets.Nuclear receptor signaling has been overlooked as a therapeutic avenue.Although nuclear receptor signaling regulates many pathways, it is thought that some of these might be detrimental to the cells casting doubt on the overall biological effect.However, effective therapies need to be multifactorial, especially if they are aimed at chronic conditions in which a myriad of cellular functions has been altered.Retinoic acid modulates transcription and exerts its biological activity via the nuclear receptor RAR/RXR heterodimers, of which three isoforms have been identified.Each isoform differs in spatial expression and yields different biological responses.In this regard, it is therefore beneficial to use specific receptor agonists targeted to the particular receptor that will induce the desired/anticipated effect.Because RXRs are promiscuous receptors and partner with various other nuclear receptors integrating their signaling pathways, 
they are less attractive as drug targets.We have demonstrated target engagement previously and shown the upregulation of RARβ in response to treatment with specific RARβ agonists.Our work illustrates an example of where a nuclear receptor agonist provides an effective treatment for a chronic condition without induction of detrimental pathways.C286 is currently undergoing a phase 1 trial and can rapidly progress to further clinical testing proving an attractive therapeutic avenue to explore for NP.DNA damage has recently been proposed to play an important role in transcriptional regulation.Here we show that it is involved in setting an inflammatory state in spinal microglia that triggers NP.Our results demonstrate a novel role for BRCA1 in NP.BRCA1 is a DNA repair protein, best known for its association with breast cancer.We demonstrate that, by increasing DNA repair via BRCA1, NP can be prevented.This revolutionizes the therapeutic exploration for NP, shifting its focus from targets whose modification provides symptomatic and temporary amelioration to a more permanent disease-modifying target: DNA repair.Recovery of normal cellular functions through effective and timely DNA repair might be a successful prophylactic and/or therapeutic approach that is extendable to other chronic conditions similarly associated with an inflammatory etiology.Exploring other drugs that, like C286, modulate BRCA1 and identifying other key DNA repair mechanisms could be a step change in therapeutic development.This study was conducted in male rats only and as such does not address the sexual dimorphism in pain.All methods can be found in the accompanying Transparent Methods supplemental file. | Neuropathic pain (NP) is associated with profound gene expression alterations within the nociceptive system. DNA mechanisms, such as epigenetic remodeling and repair pathways have been implicated in NP. Here we have used a rat model of peripheral nerve injury to study the effect of a recently developed RARβ agonist, C286, currently under clinical research, in NP. A 4-week treatment initiated 2 days after the injury normalized pain sensation. Genome-wide and pathway enrichment analysis showed that multiple mechanisms persistently altered in the spinal cord were restored to preinjury levels by the agonist. Concomitant upregulation of DNA repair proteins, ATM and BRCA1, the latter being required for C286-mediated pain modulation, suggests that early DNA repair may be important to prevent phenotypic epigenetic imprints in NP. Thus, C286 is a promising drug candidate for neuropathic pain and DNA repair mechanisms may be useful therapeutic targets to explore. |
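The transcriptome comparison underlying the C286 study above — non-injured versus L5-SNL + vehicle versus L5-SNL + C286 dorsal horns — amounts to asking, gene by gene (or pathway by pathway), whether an injury-induced expression change is brought back toward the non-injured baseline by the drug. The sketch below illustrates that classification only in outline: the expression values, the fold-change threshold and the "restored" tolerance are arbitrary illustrative placeholders, not the statistics or pipeline used in the study.

```python
"""Toy 'restored to baseline' classification for mean expression values in three
groups: non-injured (baseline), L5-SNL + vehicle, and L5-SNL + C286. The values,
fold-change threshold and restoration tolerance are illustrative placeholders."""

import math

# gene -> (baseline, injured + vehicle, injured + drug) mean expression, arbitrary units
expression = {
    "Fzd10": (10.0, 42.0, 12.0),    # WNT receptor; injury-upregulated, reduced by treatment
    "Daxx":  (8.0, 30.0, 9.0),      # MAPK-pathway associated; likewise
    "Bdnf":  (5.0, 18.0, 7.0),
    "Gapdh": (100.0, 98.0, 101.0),  # housekeeping-style control, essentially unchanged
}

FC_THRESHOLD = 1.0   # |log2 fold change| needed to call a gene injury-responsive
RESTORE_TOL = 0.5    # drug-vs-baseline |log2 FC| below this counts as restored


def log2_fc(a, b):
    return math.log2(a / b)


def classify(baseline, vehicle, drug):
    """Label one gene according to how treatment moves it relative to baseline."""
    injury_fc = log2_fc(vehicle, baseline)
    drug_vs_base = log2_fc(drug, baseline)
    if abs(injury_fc) < FC_THRESHOLD:
        return "not injury-responsive"
    if abs(drug_vs_base) < RESTORE_TOL:
        return "restored to baseline by treatment"
    if abs(drug_vs_base) < abs(injury_fc):
        return "partially restored"
    return "not restored"


for gene, (base, veh, drug) in expression.items():
    print(f"{gene:6s} injury log2FC {log2_fc(veh, base):+5.2f} -> {classify(base, veh, drug)}")
```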
31,467 | Trait-based approaches in rapidly changing ecosystems: A roadmap to the future polar oceans | Climate change is a serious threat to humanity, in particular the rapid changes observed in polar regions have global implications.Although we have gained insights into how certain polar marine species, taxon groups and local assemblages were affected by climate change, we have considerably less understanding of how certain ecosystem processes, let alone overall ecosystem functioning will be affected.A major reason for this uncertainty is that we lack knowledge of community structure-function relationships that could be related to environmental parameters in large-scale approaches.Animals, plants and microbes shape ecosystem functions via their collective life activities or traits; accordingly, we infer that stressor-induced changes in community structure will alter certain functions.Today, polar marine communities are facing drastic changes.The Arctic is warming at twice the rate of the global average, most visibly reflected in the drastic decrease of Arctic sea ice thickness and extent within the last decades.The Antarctic shows a different trend and stronger natural variability than the Arctic.This variability is reflected in the long-term decline of sea ice in the Bellingshausen Sea, and in the increase of sea ice in the adjacent Ross Sea.In austral spring 2016, overall Antarctic sea ice decreased at a record rate, leading to a decrease of sea ice 28% greater than the mean.Sea ice is the central structuring force in polar ecosystems: it serves as a habitat for a variety of taxa, while its seasonal growth and melt rhythms control the stratification of the water column and light availability, and thus the availability of nutrients and the onset of the productive season.Consequently, drastic changes in sea ice affect the entire marine ecosystem, from surface waters down to the deep-sea floor.In addition, warmer water masses have been directly linked to range shifts in the distribution of some polar species and the poleward expansion of boreal species and communities, the latter also influenced by the increased human activity in the polar regions.Traits – here defined as the life history, morphological, physiological and behavioral characteristics of species – provide a link between species and multiple ecosystem-level functions, such as oxygen, nutrient and energy fluxes.Since species influence these functions via their collective traits, such traits are termed effect traits.When the relationship of traits and functions is soundly assessed we can use the collective trait pattern of a community in combination with community abundance or biomass to indicate ecosystem functioning Fig. 
2).This approach benefits studies on large spatial scales, as measuring functions at those scales is inherently difficult, and in many cases impossible.Another advantage of trait-based approaches is that early responses to changes are visible in the functional structure rather than in the taxonomic structure of a community.Recent use in terrestrial ecosystems has demonstrated that trait-based methods enable generalizations of the trait diversity-functioning relationship within and between ecosystems, and aid prediction of the functional consequences of biodiversity loss.Accordingly, those traits showing a response to a disturbance are termed response traits.These traits are highly relevant in assessing species thresholds and the resilience of ecosystems to change and consequently are important indicators for management and conservation.Uses with immediate application potential for polar regions include identifying regions that are most vulnerable to changes, detecting early conservation outcomes in marine protected areas, and identifying functional hotspots.Given their high invasion potential, polar areas would also benefit from using trait-based approaches to assess the response of ecosystems to species invasions as done in European Seas, or to assess the potential of particular species to become invasive.Similarly, Suding et al.’s and Hewitt et al.’s application of trait-based approaches to estimate climate change effects on ecosystem functions is highly relevant to polar regions.The origin of trait-based approaches lies in freshwater and terrestrial ecology.They are, however, used with increasing frequency in marine systems: a survey of 233 peer-reviewed studies on marine communities showed that only 5% where published prior to 2000, and after 2010 the number of publications showed an almost threefold increase despite great challenges in sampling, observation and manipulation of natural assemblages in marine ecosystems.These obstacles are particularly prevalent in polar marine systems, likely contributing to the sparsity of studies, which consider trait-function relationships in these regions.In the Arctic Ocean, only one study from the Canadian Arctic measures a function and relates it to the functional diversity of local benthic communities.A small number of further studies – mostly benthic and very recent – harness trait-based approaches to indicate ecosystem functions, explore the functional responses to human impacts and climate change, or relate functional observations to environmental parameters and gradients.Functional diversity was estimated for Arctic zooplankton, fish, and benthic meiofauna.A majority of studies analyze the correlation of only one or two biological traits – chiefly feeding type, body size, and/or mobility – to certain environmental parameters, or use them to indicate community vulnerability or functional redundancy.From the Southern Ocean we are aware of four studies that included at least two traits each, while a further study investigated the correlation of one trait and sediment parameters.To briefly summarize the current status of trait-based approaches in the marine realm, there is a clear dominance of studies on benthic invertebrates and fish over other ecosystem components.This trend is visible also in trait studies from polar regions, which additionally show a clear majority of Arctic over Southern Ocean studies.Applied methods are mostly of correlative and indicative nature, while studies that estimate functions and trait-function relationships – 
thus providing a foundation for the correlative approaches – are scarce. As such, we are currently limited in our ability to answer ecological questions which are of ever increasing relevance to ecosystem scientists and managers. These include: (1) How do climate-induced changes affect polar marine communities, ecosystem functions, goods and services today and under future scenarios? (2) How functionally redundant, and resistant to stressor effects, are marine communities in polar regions? (3) Which polar marine regions, ecosystems and functions are most threatened or prone to change? Here we discuss the main challenges of the application of trait-based approaches in polar regions and provide a roadmap to overcome the current obstacles. As the present paper resulted from the Arctic Trait Workshop that was attended predominantly by benthic ecologists, we use Arctic benthic ecosystems as model systems by which to approach many of the specific issues we discuss. Nonetheless, the presented roadmap underpins a holistic ecosystem approach and is applicable to all marine ecosystem components. The two polar marine ecosystems of our planet are – despite both being shaped by low temperatures and strong seasonality in light regimes – highly dissimilar. The Arctic Ocean comprises about one half mostly shallow continental shelves, and one half slopes, extensive deep-sea plains and steep deep-sea ridges. Due to its proximity to densely populated northern landmasses, areas of the Arctic have been explored for hundreds of years. The Southern Ocean comprises mostly narrow shelves around Antarctica that are submerged deeply by the burden of the inland ice masses and that drop off steeply towards the basins of the Atlantic, Pacific and Indian Oceans. The exploration of the Southern Ocean started much later in history owing to the harsh conditions and its remote location, far from the nearest human populations. Technological advancement and improved accessibility have facilitated larger sampling campaigns in recent decades, and global efforts like the Census of Marine Life, which included an Arctic and an Antarctic census, improved our knowledge of polar biodiversity. Most campaigns have, however, focused on shallow, easy-to-reach and seasonally ice-free shelf ecosystems, including areas of commercial interest. For the Antarctic, the regions with the longest history of scientific exploration are the islands of the Scotia Sea, the West Antarctic Peninsula, the Eastern Weddell Sea, and the Ross Sea. Given the high cost of polar exploration, in several other regions – such as the Arctic basins and Antarctic regions including the Amundsen Sea and the shelves covered by floating ice shelves – our knowledge of species biology and, especially, ecology remains limited. Systematic monitoring programs provide ongoing quantitative time series in certain areas; however, gaps in pre-impact baselines hinder assessment of the magnitude and speed of change in large regions. Apart from the general lack of biological information from some polar regions, huge knowledge gaps exist when it comes to the ecological characteristics, or traits, of polar species, which form the basic input to all types of trait-based approaches. Even information about fundamental traits, such as body size, is sometimes hard to find, despite authors acknowledging it to be among the most important and interlinked traits. This lack of accessible trait information is not a problem restricted to the polar regions. For example, a study quantifying data availability for the demersal fauna of the United Kingdom reveals
that information about eight fundamental traits was available for only 9% of the benthic community in that study.The authors noted that body size was the best documented trait, while data on fecundity was especially scarce.Publications that provide such ecological information are usually often cited clearly stating their high value to the scientific community.A notable example is the study of polychaete feeding guilds on family level by Fauchald and Jumars which is currently cited >1300 times.A basic requirement when using traits to indicate ecosystem functioning is sound knowledge of trait-function relationships.To date experiments that quantitatively explore specific trait-function relationships in the marine realm are quite rare, and even less data are available to build correlative approaches in polar marine regions.Indeed, those experimental studies which do exist have often only limited explanatory power as they are: 1) conducted in the lab or on small scales the field, 2) include only a few species, and/or 3) usually focus only on a single ecosystem function.By contrast, real world ecosystems are species rich, and species usually contribute to more than one function, while the overall ecosystem functioning is sustained by multiple processes across multiple spatial and temporal scales.This complexity is emphasized in a recent study by Thrush et al. that nested holistic Biodiversity-Ecosystem functioning experiments into a natural landscape.Their results clearly showed that the community-function relationship was not always linear or stable, but could change with changes in either the ecological landscape or the environmental drivers.It is unclear whether trait-function relationships determined at lower latitudes are valid also at high latitudes.Polar ecosystems are unique in several respects, and polar species are often highly adapted to life in small physiological ranges and extreme environmental conditions.Thus, their response to stressors like temperature increase may strongly differ in comparison to their relatives in temperate regions.For example, Sainte-Marie and Węsławski and Legeżyńska suggested gammaridean amphipods might express intraspecific variability in reproductive traits along latitudinal gradients.Decreasing growth rates have been documented for a single sea urchin species with decreasing temperature along a latitudinal gradient in Greenland.For widely distributed species with ranges extending into subarctic/subantarctic or even temperate regions, trait information mostly originates from those lower latitudes.Trait approaches at lower latitudes commonly assume that traits of the same species stay constant across a wider geographical range, enabling large scale assessments or comparison of community functioning among different regions.However, there is evidence that species in polar regions might have developed certain adaptations that influence their trait expression, including slow growth, high longevity, high lipid content or production of anti-freeze agents.Such considerations advise against the use of information from lower latitude trait literature and further restrict the pool of available trait data.Table 2 shows the most commonly used traits per ecosystem component in the marine realm, based on a literature survey.The number of traits and trait categories used in marine studies varies from 5 or less to >50.While a number of traits are common to many trait-based studies and across ecosystem components, the diversity of language which surrounds them grows as 
trait-based studies are adopted more widely.Across literature, traits are named and defined in a myriad of contrasting ways that act to lessen their applicability and ease of use.Non-standardized and even contradictory trait definitions prevent authors from readily comparing their findings.Alternatively, imprecise trait definitions or classifications could mean information is mishandled and wrongly incorporated.Issues are likely to be particularly complex during broad-scale studies investigating ecosystem management strategies or consequences of change.For example, traits such as ‘body size’ are likely to be interpreted differently when not clearly defined depending on the preconceptions of the author, and the morphology of the species in question.Practices may work well within a scientific field or research group, or where the appropriate terminology is apparent for a given taxa, but become ‘lost in translation’ when considered across multiple data sources.This variety limits the potential to generalize findings across studies or compare patterns across spatial and temporal scales.Studies are needed that directly relate environmental change to responses in ecosystem functions in polar regions, as has previously been explored in terrestrial and freshwater ecosystems.A number of polar studies have succeeded in documenting climate change effects on certain species, taxon groups, local assemblages or larger-scale community properties like biodiversity.So far, only one recent study on Arctic fish tackled how these changes affect the communities’ trait pattern and consequently ecosystem function as a whole.Frainer et al. detected that the recent warming period in the Barents Sea triggered a rapid shift from an Arctic fish community characterized by small-sized bottom-dwelling benthivores, towards a boreal fish community characterized by the traits large body size, longer lifespan and piscivory.This phenomenon – also known as the borealization of Arctic fish communities – has the potential to reconfigure Arctic food webs and affect ecosystem functioning in the region.Given this scarcity of studies, there is currently not a sufficient basis from which generalizations can be drawn.The same holds true for past ecosystem changes: few studies from lower latitudes use fossil trait data to estimate how past climate change affected ecosystem functions in order to contextualize contemporary observations, and even fewer focus on the past and present of the polar regions in this context.Vemeij and Roopnarine refer to trans-Arctic invasions that occurred during the warm mid-Pliocene epoch and predict a resumed spreading from the Pacific through the Bering Strait into a warmer Arctic Ocean and eventually into the temperate North Atlantic.The adding of Pacific-derived fast-growing and large bodied mollusk species to the Arctic and North Atlantic species pool might affect the regional food webs and ecosystem functions.Bonn et al. 
used biogenic opal as proxy for paleo-productivity and observed a glacial/interglacial pattern with high productivity during peak warm stages with reduced sea ice coverage back to 400 ka.Here we present a roadmap consisting of six steps to address the identified challenges.The aim is to facilitate the successful application of trait-based approaches on various spatial and temporal scales, and to assess trait and functional patterns in order to understand current and predict future ecosystem functioning in rapidly changing polar marine systems.Recommendations regarding each step of the roadmap are summarized in Fig. 4.Step 1: International network,Much knowledge of species traits is unavailable in the published literature, and is instead dispersed among individual scientists’ personal observations and unpublished notes.The formation of a comprehensive, international network of experts is thus indispensable to integrating expert knowledge and to covering the entire polar regions.Any such network should reach internationally.We suggest an editorial board be formed that is interlinked with the World Register of Marine Species platform WoRMS, today’s largest standardized and expert-curated marine species inventory.Such a board is necessary to organize the network, identify traits experts for particular taxonomic groups, regions, or methods, and document where relevant trait sources such as voucher collections or video material are deposited.Recommendations on standardized protocols for field and laboratory work could be made available for download.The network could also promote the integration of traits data into ongoing Arctic and Antarctic monitoring programs, including the Circumpolar Biodiversity Monitoring Program in the Arctic and the Commission for the Conservation of Antarctic Marine Living Resources Ecosystem Monitoring Program in the Antarctic.Step 2: Trait terminology and methodology,Application of trait-based approaches in the polar regions is a relatively recent development.Though a drawback in many ways, this offers an opportunity to promote standardized trait definitions and best practices before the plethora of approaches seen in other regions also inundates the polar regions.While acknowledging that approaches are case specific, unification of the following aspects will allow enhanced comparability and extrapolation of results.2.1.Choice of traits and terminology,While the choice of traits is driven by a given research question, a comprehensive list of traits and their categories needs to be developed for polar regions using standardized terminology.In the currently developed traits data base linked to WoRMS, Costello et al. have proposed a prioritized list of 10 traits, which resulted from a study on the availability and use of traits, both in literature and existing database, and consultation of experts.Here, we focus solely on biological traits, i.e. 
such that comprise morphological, physiological, life history and behavioral traits of organisms.For taxonomic traits, we refer to specific platforms like the World Register of Marine Species, Integrated Taxonomic Information System, and for biogeographic information to Global Biodiversity Information System or Ocean Biogeographic Information System.Within WoRMS, registration of trait data requires that one should easily be able to apply a trait to any given taxon.More specific research questions however may require the use of additional taxon or ecosystem specific traits.Clarity and standardization of trait terminology should be a foremost priority, in order to facilitate meta-analysis or comparison of results.Successful communication platforms should be developed to ensure this standardization process.An example of how such a platform can be organized is given in Costello et al.2.2.Sources of trait information,A diversified, holistic approach is appropriate when it comes to compiling traits information.Many biological traits can be readily assessed during sampling.These include morphological traits, e.g. body size/length/weight and body form.In situ imaging and diver-based observations may identify some behavioral traits, as well as feeding and movement traits.Other traits can be derived from biochemical analysis.Traits defining trophic roles can be assessed from stable isotopes, fatty acids and other chemical markers.Traits related to energy-storage can be derived from lipid and caloric content analysis.Radionuclides have been used as particle tracers to assess sediment reworking rates by benthic organisms.Stable isotopes can be also used to inform on the migratory habits of pelagic fish.Museum or other voucher collections are of high value especially when containing specimens that are rare or from remote polar regions.Accordingly, a list of relevant collections together with the contact details of responsible authorities/persons should be added to the joint management system.It is noteworthy, however, that museum collections may be biased towards “exceptional” individuals, and organisms preserved in ethanol and formalin experience shrinkage or body deformations.When physical samples are unavailable, the common approach to build up a traits collection is by performing a literature survey of articles relating to either individual traits or groups of traits of species or taxon groups.Often trait information is recorded in grey literature not accessible to the general public, like internal reports, descriptions in museum libraries, ship protocols or lab notes.Such grey literature should be added to a joint database managed via the network.Time- and cost-saving needs fueled the search for surrogate methods in biodiversity assessments.In benthic studies for example, lowering the taxonomic resolution of taxonomic identification to family level or choosing a certain representative taxon gave similar results to those obtained based on all taxa in detecting the effects of natural and anthropogenic disturbance on benthic diversity and distribution patterns.The search for similar surrogate methods in trait and functional diversity assessment may be needed for the >7000 known marine animal and plant species in the Arctic and >8800 in the Antarctic, but only after a thorough assessment of the effect of lowering the functional resolution.2.3.Trait data analysis,Trait-based approaches in the marine realm are diverse and range from the assessment of ecological indices, such as various measures of 
functional diversity and the development of ecological indicators, to more complex methods, often summarized under the term biological trait analysis (BTA). A BTA enables scientists to explore the relationship between species or community trait patterns and environmental characteristics. The choice of which trait-based method and subsequent equation, multivariate statistic or model to use depends very much on the respective research question and environment, and thus cannot be discussed here in detail. The data structure required by all methods, however, comprises the construction of several data matrices, and here some standardizations appear useful. The degree to which a species expresses a trait category can be indicated in the 'traits per species' matrix via a coding procedure. The fuzzy coding procedure offers a pathway from descriptive words to a common code, enabling analysis of diverse kinds of biological information derived from a variety of sources. Fuzzy coding schemes found in the literature include 0–3, 0–4 and 0–6 scales. We promote the use of the 0–3 coding scheme, as it is the most commonly used and provides a compromise between binary codes and many, not clearly delineated, graduations. Including code matrices as supplements to publications enables comparisons and aids the use of consistent coding. The analysis of how uncertainties in input data affect the output of a mathematical model is a central aspect of any model evaluation. In trait-based approaches, on the other hand, a formalized sensitivity analysis is not a standard procedure. Several authors do, however, point out uncertainties potentially affecting the outcome of the respective trait study and suggest ways of addressing this issue. These include the type and number of traits or functional groups used, the weighting of traits, the number of traits not measured "on site", and generally incomplete trait information. Regarding the fuzzy coding of traits, the degree of subjectivity of scientists and the consequent potential differences in fuzzy codes is a concern that has not yet been studied. To get a first insight, we conducted a small experiment among participants of the Arctic traits workshop. Participants coded 27 trait categories of three common Arctic benthic invertebrates, and the final matrices were compared. We found that 83% of the coding was identical, while 17% differed in at least one category per trait. We assume that the differences result from a combination of most participants being new to the use of fuzzy codes and some not being experts on the chosen taxa. The exchange of trait information and fuzzy codes via publications and databases, along with explicit guidelines on how to code, will improve the consensus.
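As an illustration of the coding and agreement check described above, the following minimal Python sketch uses made-up taxa, trait categories and codes (not the actual workshop data) to show how a fuzzy-coded 'traits per species' matrix on the 0–3 scale could be stored, how agreement between two coders can be quantified, and how codes are typically rescaled before analysis.

```python
import pandas as pd

# Hypothetical fuzzy-coded trait matrices (0-3 scale) from two coders;
# rows are taxa, columns are trait categories (names are illustrative only).
categories = ["size_small", "size_large", "deposit_feeder", "suspension_feeder"]
coder_a = pd.DataFrame(
    [[0, 3, 3, 0],
     [2, 1, 0, 3],
     [3, 0, 2, 1]],
    index=["Taxon_1", "Taxon_2", "Taxon_3"], columns=categories)
coder_b = pd.DataFrame(
    [[0, 3, 3, 0],
     [2, 1, 1, 3],
     [3, 0, 2, 0]],
    index=coder_a.index, columns=categories)

# Share of identical codes across all taxon-category cells,
# analogous to the agreement percentage reported for the workshop experiment.
agreement = (coder_a == coder_b).to_numpy().mean()
print(f"identical codes: {agreement:.0%}")

# Fuzzy codes are commonly rescaled so that the categories of one trait sum
# to 1 per taxon before analysis; here both 'size' categories form one trait.
size = coder_a[["size_small", "size_large"]]
size_scaled = size.div(size.sum(axis=1), axis=0)
print(size_scaled)
```

In this toy example 10 of the 12 cells are identical (83%), which happens to mirror the level of agreement found in the workshop exercise; the point of the sketch is only the procedure, not the numbers.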
Step 3: Trait databases. As traits are already being documented within several databases, it is key to look for collaboration opportunities with these initiatives, thereby avoiding duplication of effort. Accordingly, a large number of traits in the WoRMS traits portal are sourced from these databases, which are focused on certain taxonomic groups. The traits compilation currently being tackled by the WoRMS Data Management Team and the Editorial Board is grouped at higher taxonomic levels as much as possible, thereby lessening the workload. The use of 'generalistic' traits, here those that are applicable across ecosystem components, enables analyses across marine realms and large biogeographical scales. This represents a potential for investigating mechanistic relationships between ecological function and the physical environment. The use of such traits also allows more holistic investigations of function that encapsulate several ecological groups, such as invertebrates, fish and mammals. More specific traits – as may be desirable for polar taxa – can be added. For example, the maximum sediment burrowing depth is highly relevant for benthic ecosystem functions, but does not apply to pelagic taxa. Under these circumstances, these pelagic taxa must not be excluded from analysis, but are merely scored a '0' for that respective trait, indicating that it is not expressed. Several options exist regarding the way in which trait information is stored in databases, specifically as text information or coded. The Arctic Traits Database, a trait platform for Arctic benthic taxa, provides both text and fuzzy codes by species for direct download and import into relevant analysis software. The advantage of this approach is that researchers who are less trained in fuzzy coding can also perform rapid trait analyses. This goal is supported by the additional provision of manuals and R-code. Also, detailed records of the information from which the fuzzy codes were sourced are provided to ensure data traceability and reproducibility, and to allow potential revision of codes in the event of new findings. In general, quality assurance for all data repositories should aspire to be of the same rigor as that of printed publications. A best practice example is the Polytraits database, where every trait is referenced, often by directly quoting the exact literature passage which has led to the coding of the information. In addition to trait information, a BTA requires presence/absence, abundance, or biomass data by location, as well as environmental data. Tools have already been developed through the EMODnet Biology Project, allowing users to query a selection of traits in combination with taxonomic and distribution data. Step 4: Trait-function relationships. In order to apply trait-based approaches to study ecosystem functioning in the rapidly changing polar regions, it is crucial that sound links between polar community traits and functions are established. Substantial understanding of trait–function relationships could be gathered from experimental approaches at lower latitudes, but we must question whether these relationships can be directly transferred to polar functions. We consequently recommend performing a series of experimental and observational approaches in contrasting ecosystems and several polar regions. Most naturally, experimental approaches in polar regions should be performed in well-studied regions, and/or where regular monitoring is carried out and a large volume of species and environmental information is already available. In the planning phase of experiments, a critical step is to define the traits that are relevant for the ecosystem function of interest. Given the uncertainty of which traits are the appropriate ones to select, we follow Lepš et al.
and suggest treating the final choice as a hypothesis that needs to be tested.Once trait-function relationships are identified, the use of these particular traits in future studies and their inclusion in trait databases should be encouraged.Not only the traits but also the functions investigated need to be chosen a priori.While some traits are intimately linked to particular functions, others serve only as indirect indicators.Functional diversity should be included among the functional parameters, as it explains variation in ecosystem functions.The use of thresholds was successfully used to identify if overall functioning was sustained in disturbance experiments.Below we list four types of studies that we suggest be carried out in polar regions:4.1.Lab and microcosm experiments,The relation and effect of single and multiple traits of polar species to specific ecosystem functions can be assessed in laboratory or microcosm experiments.Communities, selected based on their expression of certain traits, can be exposed to different environmental conditions.Fluctuations in functions are then measured in order to provide important baseline information on trait-function relationships and on potential thresholds of functions.The microcosm experiment by Braeckman et al., as one specific example, showed a pronounced influence of the trait bioirrigation on benthic respiration, nutrient release, and denitrification, compared to the trait biodiffusion.The traits that explain most of the effects in lab and microcosm studies should be clearly identified so they can be further analyzed, manipulated, and quantified in field studies.While most challenging, field experiments offer important trait-function relationships previously identified in lab experiments to be analyzed under real-world conditions.Complex community interactions resulting in complementarity necessitate multiple aspects of the total community response to be considered simultaneously or integrated into a multivariate index of ecosystem performance.As an example, Thrush et al. manipulated light, nutrient concentrations and species densities in a coastal sandflat and detected a changed interaction network between biogeochemical fluxes, productivity, and macrofauna, once potential thresholds were crossed.Coordinated experiments on different community components could be performed in different regions of the Arctic to provide the basis for a ‘functional atlas’.For example, functional traits response to experimentally disturbed seafloor on different spatial scales can provide insight into the recovery processes of communities and associated functions exposed to various stressors and along stressor gradients.We support the recommendation by Gamfeldt et al. 
that future studies should explicitly consider also manipulations of species density in order to disentangle density-dependent population processes and diversity effects.Observational studies are empirical and do not involve manipulative experimentation.Examples are descriptions of trait patterns in space and time, correlative studies of traits and functions, or broad-scale hypothesis tests of patterns and relationships which may result from observational field studies designed for the question or from data mining.The advantage of such studies is that they can – contrary to field experiments – rather easily be performed on large spatial scales, given the ecological information is available.A disadvantage, however, is that the conclusions drawn are based on statistical correlations and not causality, thus underlying mechanisms might remain unknown.We, therefor, recommend to perform observational studies including those significant traits and functional groups that were previously identified in field and lab experiments from polar regions, to validate the outcomes of the experiments on a larger scale and, in turn, to generate new hypotheses and trait-function relationships.Although a generally positive effect of higher diversity on ecosystem functioning is assumed, experimental findings are often contradictory and the relevance and generality of many experimentally determined B-EF relationships are questioned.We assume that the same will hold true for the outcome of trait-function experiments in polar regions.Accordingly, Srivastava and Vellend highlight the need for rigorous and integrative science if general principles are to be found.Following Thrush and Lohrer, integration can for example be achieved by testing predictions based on theory or small-scale experiments with broad-scale observational studies.Regarding our topic this suggests a meta-analysis of field and lab experiments from polar regions, the identification of the significant trait-function relationships, and a correlative study of these traits and functions on larger, up to pan-Arctic scales.Given the scarcity of trait-based research in polar regions, modelling approaches that incorporate multiple traits are equally rare.A prerequisite to large-scale modelling studies are data of sufficient geographical coverage, which are extremely difficult to obtain in many polar regions, as well as trait information on largely unknown taxonomic units.Access to sufficient data would enable the application of species distribution models, which are commonly used tools to study the distribution of species and support conservation measures.Two different frameworks exist to model species distribution patterns and take into account the traits of species during analysis, i.e. 4th-corner models and joint SDM.The advanced model-based approach to the fourth-corner problem suggested by Brown et al. allows to develop a predictive model for species abundance, as a response to traits and environmental variables.Joint SDM are model-based hierarchical approaches that assess the strength and direction of trait-environment relationships, allow model-selection procedures, and facilitate prediction to new scenarios while propagating data uncertainty.For example, it would be possible to assess which traits are most associated with particular environments and determine their fate if these environments decline or even disappear.In their study on geographic range shifts in an ocean-warming hotspot Sunday et al. 
used multi-model averaging of mixed-effects linear models with maximum likelihood estimation to test the effect of species traits on shifts in poleward range boundaries.Their study showed that including traits more than doubled the variation explained than if climate velocity alone was used as predictor.GLMMs were also used by Lefcheck and Duffy and showed that functional diversity predicted ecosystem functioning better than species richness.Predictive models can be used to assess ecosystem functioning in future scenarios, e.g. when traits and functions are included into ecosystem models.Another relevant method are generalized additive models, which have the advantage of allowing the assessment of non-linear relationships without fitting arbitrarily selected functions.Valdivia et al. used GAMs to test for an effect of environmental gradients on the functional richness of a subtidal community in the Western Antarctic peninsula.Gross et al. promote a new modelling approach to assess trait distributions on global or at between-ecosystem scales, i.e. models that include skewness-kurtosis of trait-abundance distributions.These models have a higher predictive power of multifunctionality, as they enable generalizations across ecosystems and prediction of the functional effect of biodiversity loss.However, although the prediction of multifunctionality would be of highest interest in the polar regions, to our knowledge these approaches have not been applied there yet, nor in the marine realm in general.Size is often regarded as a ‘master’ or ‘key’ trait of an organism as it may determine its functioning in terms of physiology, trophic strategies, life-histories and interactions with abiotic and biotic elements of the ecosystem.At the community level, size spectrum models have been employed, for example to estimate consumer biomass, productivity or predict the changes in community structure through time.In the Arctic, size spectra have been described for pelagic and benthic communities.Recently, a need to include additional functional traits into size-based ecosystem models has been raised, especially in systems where not all processes are size-based.For example, a study assessing the number of dimensions that determine whether individuals of different species interact found that models accounting for only a small number of traits already dramatically improve understanding of the structure of ecological networks.Step 6: Management and conservation,One of the motivations for applying trait-based approaches in the rapidly changing polar regions is that new insight and information are needed to advise management and conservation efforts.Consequently, traits are now included among those “ecosystem Essential Ocean Variables” recommended to address the dynamics and change in Southern Ocean ecosystems.In the Arctic, currently eleven Ecologically and Biologically Significant Areas are defined after seven criteria: 1) uniqueness or rarity, 2) special importance for life history stages of species, 3) importance for threatened, endangered or declining species and/or habitats, 4) vulnerability, fragility, sensitivity, or slow recovery, 5) biological productivity, 6) biological diversity, and 7) naturalness.Recent studies have shown that trait approaches provide new means to define these marine areas of interest, as they can identify functional hotspots and vulnerable regions and assist in the boundary setting of MPAs.BTAs were successfully applied to assist in the formulation of conservation objectives, e.g. 
by testing fisheries effects on ecosystem function.In addition, trait-based approaches were applied in the monitoring of already defined MPAs.A study by Coleman et al. comparing different types of MPAs showed that functional traits can elucidate early conservation outcomes, while traditional multi-metric diversity indices were not able to distinguish between the differently treated habitats.Recently, the very promising approach to integrate traits and functional diversity into ecosystem models was launched.Such models provide a holistic view of ecosystems and the opportunity to assess the impacts of conservation and management as they enable us to project possible states of future marine ecosystems.Several legislative agreements require management schemes to directly address the functioning of ecosystems.The European Marine Strategy Framework Directive, for example, has the objective to “enable the integrity, structure and functioning of ecosystems to be maintained or, where appropriate, restored”.Article 11 states further that coordinated monitoring programs for the ongoing assessment of the environmental status – accounting for the structure and functioning of marine waters – should be established.There are already several studies from other realms where trait approaches were successfully used in that field, mainly in terrestrial, but also marine habitats.However, to date, there are only few multivariate applications of biological traits to support environmental policies in the marine realm.In order to advise marine management, recent studies stress the importance of trait-based approaches for reliable indicator development to assess environmental health or status and to detect tipping points or regime shifts.Still, these studies also highlight obstacles related to missing ecological understanding on trait-function relationships and the inherent difficulties related to large scale approaches, points we have discussed in the previous paragraphs.Trait-based approaches are valuable tools to study the effects of rapid climate change and associated anthropogenic stressors on ecosystem structure and functions in the world’s oceans, to predict future scenarios, and to advise decision makers on the required steps for sound ecological management.However, polar regions and ecosystems are unique and pose scientific challenges – divergent from those in temperate regions – to those aiming to apply trait-based approaches.Most of these challenges concern basic requirements that demand community consensus before any actual methodical approach can be taken.These essential basics comprise trait information of polar taxa, standardized trait terminology and methodology, and knowledge on polar trait-function relationships.In the present paper, we reviewed the existing challenges thoroughly and suggest a six-step roadmap to overcome these obstacles and progress forward.This roadmap comprises the building of a strong and active international and interdisciplinary network, capable of defining basic trait terminology and best practice assessments that will lead to harmonized, well-structured and easy accessible trait databases and coordinated experimental approaches.These first four steps provide the essential baseline for trait-based and modelling approaches on large spatial and temporal scales, appropriate to tackle the pressing questions related to climate change and to predict future scenarios.This new insight then can be used to give sound advice to decision makers and marine conservation.Given the 
complexity and speed of changes that polar ecosystems, and especially those in the Arctic, are facing it might seem illusory to keep pace by following such an apparently long and nested working process.But what are the alternatives?,Isolated research projects and small-scale approaches will not be able to answer the urgent questions related to climate change, inherently a multifaceted and large-scale phenomenon.Nor are they appropriate to satisfy the need of decision makers to consider, prepare for and potentially mitigate future scenarios.Several initiatives included in this roadmap are already ongoing, while others are in progress.With joined forces and commitment of the international research community to permanent exchange and coordinated initiatives all steps of this roadmap can be tackled.A fact we noted during the work on this paper – and that was also highlighted in Beauchard et al. – is a clear predominance of benthic trait studies in the marine realm, and – with exception of fish – less focus on pelagic ecosystems.As climate change effects are not restricted to only one compartment, and coupling between the pelagic and benthic realm is particularly tight and thus crucial in the polar regions, we encourage more holistic ecosystem approaches, including traits of multiple species across the entire marine system.There is no technical limitation to a trait-based approach, even in broad extents when combining algae, invertebrates, fish, mammals, and birds as long as every trait is measurable or possible to code in all organisms.Our literature review showed that several traits are used across all ecosystem components.Additionally, climate change effects do not stop at the shorelines, terrestrial ecosystems might be included in the joint trait efforts.The existing Register of Antarctic Marine Species has recently decided to broaden its scope and to also include the non-marine species from the Antarctic Region and a number of their relevant traits.As we demonstrated with many examples from current literature, trait-based approaches provide the most insight when used in addition to species based methods in marine community ecology.Ongoing efforts to harmonize terminology and methodology, methodical improvements via use and promotion of sensitivity analyses, and the easy access to trait information, best practices, manuals and scripts will make these methods more easily accessible to a broader community of users within the scientific community.Joint trait initiatives, workshops and outcomes like this perspectives paper will aid to their further promotion, potentially making them standard practice in marine ecology in the future. | Polar marine regions are facing rapid changes induced by climate change, with consequences for local faunal populations, but also for overall ecosystem functioning, goods and services. Yet given the complexity of polar marine ecosystems, predicting the mode, direction and extent of these consequences remains challenging. Trait-based approaches are increasingly adopted as a tool by which to explore changes in functioning, but trait information is largely absent for the high latitudes. Some understanding of trait–function relationships can be gathered from studies at lower latitudes, but given the uniqueness of polar ecosystems it is questionable whether these relationships can be directly transferred. 
Here we discuss the challenges of using trait-based approaches in polar regions and present a roadmap of how to overcome them by following six interlinked steps: (1) forming an active, international research network, (2) standardizing terminology and methodology, (3) building and crosslinking trait databases, (4) conducting coordinated trait-function experiments, (5) implementing traits into models, and finally, (6) providing advice to management and stakeholders. The application of trait-based approaches in addition to traditional species-based methods will enable us to assess the effects of rapid ongoing changes on the functioning of marine polar ecosystems. Implementing our roadmap will make these approaches more easily accessible to a broad community of users and consequently aid understanding of the future polar oceans. |
31,468 | Intercropping contributes to a higher technical efficiency in smallholder farming: Evidence from a case study in Gaotai County, China | It is expected that the global demand for food will increase for at least another two decades due to the continued growth in our population and consumption, which will intensify the competition for scarce natural resources.In China, a national policy aimed at attaining self-sufficiency in grain production has put additional pressure on land and water resources in recent decades.Major increases in efficiency are needed to meet the challenge of land and water scarcity in agriculture, particularly as the demand for these resources from non-agricultural sectors is also rising.Intercropping is defined as the simultaneous cultivation of two or more crop species in the same field for either the entire growing period of a part thereof.This practice can achieve higher yields on a given piece of land by making more efficient use of the available resources, while also contributing to soil fertility and soil conservation.Although it is an ancient and traditional cropping system, intercropping is still applied worldwide.High levels of land use efficiency, as measured by the land equivalent ratio, have been reported for various intercropping systems; however, this may be obtained at the expense of the efficiency with which other inputs can be used, such as labour, nutrients and water.To assess the relative efficiency with which all natural resources and other inputs are used in intercropping systems, the technical efficiency of intercropping should be compared with that of monocropping systems.TE is defined either in an output-oriented manner as the ability of a farmer to maximise the output of a specific crop using given quantities of inputs and a given level of technology, or in an input-oriented manner as the ability of a farmer to minimise input use for a given quantity of outputs using a given technology.TE is commonly expressed on a scale of 0 to 1, where a score of 1 means a farmer is perfectly technically efficient and cannot increase output without using more inputs.There are several reasons why the TE of intercrops may differ from that of sole crops grown in the same environment.Systems such as relay intercropping may enable a longer total growth period than that of a sole crop, enabling abiotic resources, especially light, to be captured over a longer period and thereby increase yields per unit land.A second reason is the fact that the temporal spread of activities such as planting, weeding and harvesting in intercropping systems often allows farmers to use only, or predominantly, family labour.Farmers usually prefer family labour over hired labour in the management of intercrops because the activities required for this system call for more care and a sense of discrimination.A third motivation is that intercropping often reduces production risks because the different crop species respond differently to weather extremes and outbreaks of pests and disease.When the growth of one crop in a field is negatively affected, the resources might be used by the other crop in the intercropping system, providing a higher assurance that the available growth resources will be used and transformed into a yield.Moreover, pests and diseases generally spread more easily in monocropping systems than in intercropping systems.Some resources may be used less efficiently in intercropping systems however; for instance, Huang et al. 
found that labour use in a novel wheat–maize/watermelon intercropping system in the North China Plain was much higher than in the traditional wheat–maize double cropping system.This increase in labour was attributed to the fact that the cash crop requires much more labour than the cereal crops, and the maize had to be sown by hand into the intercropped system.Likewise, Mkamilo found that the estimated labour use of maize/sesame intercropping in southeast Tanzania was much higher than that of sole crops.In the wheat–maize/watermelon intercropping system, Huang et al. further found that irrigation water use and nutrient surpluses per unit land were significantly larger than in the wheat–maize double-cropping system, because extra water and fertiliser were given at the watermelon transplanting and fruit production stages.Water and nutrient use efficiencies per unit output were not examined in their study, however.Intercropping may thus have positive and negative effects on the use efficiency of individual resources, and its effect on TE in smallholder farming therefore remains unclear.The impact on TE is likely to differ between regions with different agro-ecological and socio-economic conditions; for instance, the TE of intercropping may be relatively high in regions with a lot of surplus labour.Empirical research comparing the TE of intercropping as a farming practice with that of monocropping practices is currently very limited.Alene et al. estimated that the TE of maize–coffee intercropping systems in southern Ethiopia were very high, and concluded that farmers in the region make efficient use of land and other resources in the innovative and evolving indigenous intercropping systems combining annual and perennial crops.Their study focussed on different methods for estimating TE, and did not present TE estimates of monocropped maize or coffee in the same region.Dlamini et al. found that integrating maize with other species increased the TE of farmers in southeast Tanzania; however, Tchale and Sauer identified a significant negative impact on TE when maize-based smallholder farmers in Malawi practised intercropping.The main focus of the studies by both Dlamini et al. and Tchale and Sauer was on the TE of maize production as a whole.Neither study explained why an intercropping dummy was included as an explanatory variable in the equations explaining the TE levels of maize production, nor did they interpret the estimation result of this dummy variable.The studies mentioned here provide some insights into the TE of intercropping in three regions in eastern Africa, but cannot be generalised to regions with different agro-climatic and socio-economic conditions.The aim of this study is to examine the contribution of maize-based relay-strip intercropping to the TE of smallholder farming under current farming practices in northwest China.To achieve this objective, we applied a stochastic frontier analysis to detailed survey data collected from 231 farmers in Gaotai County, Gansu Province, China.The resulting farm-specific TE scores were regressed against the proportion of land under intercropping and other explanatory variables to determine which factors significantly affect TE.Intercropping is practised in several regions in China; however, there are no official statistics on the prevalence of intercropping in this country."Li et al. 
claimed that around 18% of China's annually sown area is intercropped; however, more recent estimates indicate that intercropping may cover only approximately 2–5% of the total arable land area, a proportion that has not significantly changed in recent years. In northwest China, intercropping techniques are particularly used in irrigated areas where the growing season is not long enough to support two sequential crops but is sufficient for growing two species with partially overlapping cultivation periods. This practice, known as relay intercropping, has become popular in this region. It usually takes the form of strip intercropping, in which two or more crops are grown in strips wide enough to permit separate crop production but close enough for the crops to interact. Wheat/maize relay intercropping was developed as a highly productive system in the 1960s in northwest China, particularly in Gansu Province, and was found to be an efficient way to deal with growing land scarcity. Previous studies of the efficiency of wheat/maize intercropping systems in northwest China focussed on the land-use efficiency and the use efficiency of other single resources, such as nutrients, water or light. These studies were mostly based on field experiments; however, little is known about the farm-level efficiency of wheat/maize intercropping systems or more recent intercropping systems combining maize and cash crops in this region. Due to the relatively high influence of the weather, pests, diseases and other uncontrollable factors on resource-use efficiency in agriculture, a common approach for estimating TE involves the use of stochastic production frontiers. In Section 2.1 we briefly explain this approach, commonly referred to as a stochastic frontier analysis, and in Section 2.2 we describe the way in which the SFA was applied in this study. The data collection method is presented in Section 2.3, while the definitions of the variables used in the empirical analysis and the descriptive statistics of these variables are discussed in Section 2.4. Battese and Coelli suggest that TE should be predicted using its conditional expectation, given the composed random error v_i − u_i, evaluated at the maximum-likelihood estimates of the coefficients of the model (Eq.). We followed earlier studies in using a translog specification for the production frontier f (Eq.). A translog function is flexible and can be interpreted as a second-order approximation of any true functional form. Another popular specification for production functions or production frontiers is the Cobb-Douglas function, which is nested within the translog function. We formally tested which functional form is most appropriate. The variable 'other inputs' comprises the use of pesticides, rented machinery and film mulch, and has a value of 0 for some observations. Translog functions are defined for non-zero right-hand-side variables only. To deal with this problem, we changed these 0 values into a value of 1 and added the dummy variable D_i to the model.
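A schematic rendering of the model just described is given below. It is a sketch consistent with the text rather than the authors' exact notation or equation numbering: the frontier is translog in the logged inputs, the composed error contains a symmetric noise term and a one-sided inefficiency term, and technical efficiency is predicted from the conditional expectation of the inefficiency term.

```latex
% Schematic translog stochastic production frontier (illustrative notation)
\[
\ln y_i = \beta_0 + \sum_{k}\beta_k \ln x_{ki}
        + \frac{1}{2}\sum_{k}\sum_{l}\beta_{kl}\,\ln x_{ki}\,\ln x_{li}
        + \delta_D D_i + v_i - u_i ,
\qquad
\mathrm{TE}_i = \exp(-u_i)\ \text{predicted by}\ E\left[\exp(-u_i)\mid v_i - u_i\right],
\]
```

where y_i is the value of crop output of farm i, the x_ki are the inputs (land, labour, fertiliser, seed, irrigation water and 'other inputs'), D_i is the dummy flagging observations with zero 'other inputs', v_i is a symmetric random error and u_i ≥ 0 captures technical inefficiency.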
Estimates of TIE obtained using Eq. were used as the dependent variable in the model examining the contribution of intercropping and other factors to TIE. A farmer's TIE depends on their farm management practices, such as the timing of sowing, irrigating and harvesting, and the ways in which the inputs are applied. Farm management practices are in turn related to a host of variables, including knowledge, experience and education. The main explanatory variable of interest is the proportion of land used for intercropping. As explained in the Introduction, it is unclear whether intercropping has an overall negative or positive effect on TIE, because arguments can be given for both. The age of the head of household could also have either a positive or a negative effect on TIE. The greater farming experience of older farmers could reduce their TIE; however, older farmers could also be less willing to adopt new practices and thus have a higher TIE. Labour productivity is likely to increase and then decrease with age. The optimal balance of skill and productivity may be achieved at an intermediate age; therefore, we also included the square of the age of the farmer to test for such nonlinearities. Education is expected to reduce TIE, as better-educated farmers are assumed to have a higher ability to understand and apply production technologies. Plant growth, water and nutrient requirements, diseases and pests, and crop damage caused by animals or thieves are easier to monitor on small farms than on large farms. On the other hand, farmers with less land are more likely to be engaged in off-farm employment, which may negatively affect the time spent on monitoring and on appropriately timing sowing, weeding, harvesting and other activities. The impact of farm size on TIE is therefore indeterminate. To examine possible nonlinearities in the impact of land size on TIE, we also included the square of the land size as an explanatory variable. An increase in the number of plots could reduce TIE, because variations in agro-climatic conditions at the micro-level imply that peaks in the demand for labour and other inputs tend to level off, and because diseases and pests are less likely to spread on fragmented farms in comparison with farms of the same size with fewer plots. The efficiency of water management may decrease with an increasing number of plots, however; different crops or crop varieties grown on different plots may need water at different times, but irrigation usually needs to be co-ordinated with farmers cultivating neighbouring plots. Moreover, the amount of time spent travelling between the family house and the plots will usually be higher when the number of plots is greater, and more effort is required to monitor crops across a higher number of plots. The impact of the number of plots on TIE is therefore indeterminate. Finally, a dummy variable for one of the two townships where we held the survey is included in the model to capture the variation in other factors that may systematically differ between the two townships. The computer program FRONTIER 4.1 was used to obtain maximum likelihood estimates of the unknown coefficients of the stochastic production frontier model. We adopted a one-step estimation procedure, in which the relationship between TIE and its explanatory variables (Eq.) is imposed directly when estimating the frontier production function (Eq.) and the TIE level of the farm.
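The inefficiency-effects part of this one-step model can be sketched as follows. This is again an illustrative reconstruction based on the variables listed above, with hypothetical coefficient labels rather than the authors' own notation (education may enter through more than one indicator):

```latex
% Schematic inefficiency-effects equation (illustrative variable names)
\[
u_i = \delta_0 + \delta_1\,\mathrm{IntercropShare}_i
    + \delta_2\,\mathrm{Age}_i + \delta_3\,\mathrm{Age}_i^{2}
    + \delta_4\,\mathrm{Education}_i
    + \delta_5\,\mathrm{Land}_i + \delta_6\,\mathrm{Land}_i^{2}
    + \delta_7\,\mathrm{Plots}_i + \delta_8\,\mathrm{Township}_i + w_i .
\]
```

Because the dependent variable here is inefficiency, a negative estimated coefficient implies that the corresponding variable raises technical efficiency; this is the sign convention used when the estimation results are interpreted later in the text.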
We used household-specific information on the use of each input to estimate the input-output elasticities for each household in our sample. The resulting household-specific estimates were then used to obtain standard errors and t-ratios of the input-output elasticities. During the period August 2014 to November 2014, survey teams from Northwest Agricultural and Forestry University, Yangling, China, and the University of Chinese Academy of Sciences, Beijing, China, conducted a survey of farms in the Heihe River basin in Gansu Province, China. The main goal of the survey was to assemble information on farm household livelihoods and water use in different parts of the basin. Data were collected on the use of inputs in agricultural production, the agricultural output, resource consumption and expenditure, and the farmers' attitudes towards the current water policy. The data used in this study were a sub-sample of the full dataset, involving farmers in Gaotai County, Zhangye City, Gansu Province. Zhangye City is located in an oasis formed by the Heihe River. The annual precipitation is 89–283 mm, 70% to 90% of which falls between July and September, while about 1700 mm of water evaporates each year, resulting in a desert climate. Due to the availability of irrigation water from the Heihe River, the abundant sunshine and the flat and fertile land, the area has become a major agricultural base. Gaotai County is located between 98°57′–100°06′E, 39°03′–39°59′N, and is one of the six administrative counties in Zhangye City. Wheat and maize are the main staple food crops, while beans, cotton, rapeseed and other seed crops are grown as cash crops. Maize intercropped with either wheat or other crops constitutes the major intercropping systems in the region. Wheat/maize intercropping is the conventional practice, used for 40 years, while seed watermelon/maize intercropping started in the early 2000s and cumin/maize was introduced in recent years. Two townships in Gaotai County, Luocheng and Heiquan, were selected for inclusion in this study. In both townships, intercropping systems were historically popular and remain important. Three administrative villages in Luocheng township were randomly selected for inclusion. In addition, we included two natural villages in Heiquan township where intercropping is practiced. Within each village, the interviewees were randomly selected among the households present in the village at the time of the farm survey. In total, 360 farm householders were interviewed about their use of inputs and their outputs during the 2013 agricultural season. A total of 129 respondents did not provide complete information on the major variables in our analysis, particularly labour input, and were therefore excluded from the sample. The final sample included 231 households. Output was measured as the total value of crop output produced by the farm, derived using the prices received for each crop. The most important crops in the two selected townships were cotton, wheat, and maize grown as sole crops, and intercropped wheat/maize, cumin/maize and seed watermelon/maize. Maize and wheat are normally sold at local markets, while cotton, cumin and seed watermelon are commonly sold on contract to firms. Inputs used in the study included the farmland area; labour employed in the cultivation of crops; fertiliser; seed; irrigation water; expenditures on film mulch, pesticides, and hired machinery; and the proportion of good-quality land.
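To make the construction of the output and input variables concrete, the following small Python sketch shows how farm-level output value and the 'other inputs' variable could be assembled from survey records. The column names and figures are hypothetical and only illustrate the procedure described in the text, including the dummy-variable treatment of zero 'other inputs' used in the translog specification.

```python
import pandas as pd

# Hypothetical survey extracts; column names and values are illustrative only.
crops = pd.DataFrame({
    "farm_id": [1, 1, 2],
    "crop": ["maize", "wheat/maize intercrop", "cotton"],
    "quantity_kg": [5200, 7400, 1800],
    "price_yuan_per_kg": [2.1, 2.3, 8.0],
})
inputs = pd.DataFrame({
    "farm_id": [1, 2],
    "land_mu": [11.0, 9.5],
    "labour_days": [108, 95],
    "fertiliser_yuan": [3500, 2800],
    "seed_yuan": [1050, 900],
    "other_inputs_yuan": [1200, 0],  # film mulch + pesticides + hired machinery
})

# Output: total value of crop production per farm, valued at the reported prices.
crops["value_yuan"] = crops["quantity_kg"] * crops["price_yuan_per_kg"]
output = (crops.groupby("farm_id", as_index=False)["value_yuan"].sum()
          .rename(columns={"value_yuan": "output_yuan"}))

data = inputs.merge(output, on="farm_id")

# Zero 'other inputs' are flagged with a dummy and recoded to 1, so the
# translog frontier (defined for positive inputs only) can still be estimated.
data["zero_other_dummy"] = (data["other_inputs_yuan"] == 0).astype(int)
data.loc[data["other_inputs_yuan"] == 0, "other_inputs_yuan"] = 1
print(data)
```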
The average farm size in the sample was 11.0 mu, of which 60% was used for the intercrops. The mean number of plots was almost 9, with an observed maximum of 25, indicating a large degree of fragmentation. The number of labour days worked on a farm in 2013 ranged from 6 days to 432 days, with an average of 108 days. Fertiliser was the biggest cost item, with a mean cost of slightly more than 3500 Yuan. The amount spent on this input varied dramatically, with costs ranging from only 394 Yuan to 37,050 Yuan. Seed was another important input cost, with a mean value of more than 1000 Yuan; high-quality seed was commonly used by the surveyed farmers. The aggregated costs of film mulch, pesticides and hired machinery were also slightly more than 1000 Yuan; however, some farmers did not spend any money on these other inputs in 2013. To begin, we tested whether a Cobb-Douglas or a translog functional form would be more appropriate for the production frontier. We used a likelihood ratio test to test the hypothesis that the coefficients of the interactions between the inputs are jointly equal to 0. The test result indicated that the null hypothesis should be rejected at a 10% significance level, indicating that the interaction terms are jointly significant and a translog specification is more appropriate. The value of the generalised likelihood ratio test exceeded the critical value (24.73) at the 1% level; therefore, the null hypothesis that the sampled farms are perfectly technically efficient was rejected. This result provides statistical support for the presence of a one-sided error component characterising TIE, and suggests that a conventional production function is not an adequate representation of the data. The estimated value of 0.377 for the variance parameter γ indicates that 38% of the variation in the residual output can be explained by TIE and 62% by random disturbances, such as those caused by weather conditions or measurement errors. In order to assess the contributions of each individual input, we calculated the input-output elasticities (Eq.). The amount of land farmed was found to have the largest impact on crop production, with an elasticity of 0.896. Other inputs were also found to be important, with an output elasticity of 0.106. On the other hand, the estimated elasticities for seed, fertiliser, irrigation and labour did not differ significantly from 0. Similar large and significant elasticities for land and small or insignificant elasticities for other agricultural inputs have previously been reported in similar assessments in other parts of China. The sum of estimated input-output elasticities, i.e., the scale elasticity, was 0.898. This suggests that farmers in the region are confronted with diminishing returns as they increase the size of their farm, and can improve their efficiency by reducing the scale of their operations while keeping the same input mix. This is consistent with the findings for agricultural production in other parts of China reported by Tan et al. and Tian and Wan, but not with the constant or increasing returns when increasing the scale of farm operations reported by Chen et al.
and Liu and Zhuang.The TE scores of the sampled farms ranged from 0.18 to 0.97, with a mean score of 0.85; 80% of the farms had a score of 0.8 or more.As mentioned in the Introduction, a score of 1 indicates a perfect TE.The average TE for our sample is comparable to estimates obtained in other studies in rural China, which ranged from 0.80 to 0.91.The estimated TE score means that the average value of the crop production fell 15% short of the level that could be achieved by the most efficient farmers.This suggests that, at the time of the survey, there was still considerable room to increase crop yields in the short term at the current levels of input use, even for the majority of farmers who had above-average TE levels.The dependent variable in the inefficiency function is the estimated TIE; hence, for the TIE estimation, a positive estimated coefficient implies that the associated variable has a negative effect on TE and a negative coefficient means the variable has a positive effect on TE.The results of the TIE estimation equation are shown in Table 4.We will focus on their interpretations in terms of TE.The estimated coefficient of the main variable of our interest, the share of land under intercropping, had a highly significant positive effect on TE.The corresponding elasticity was 0.744, implying that a 1% increase in the share of land used for intercropping would increase TE by 0.744%, on average.Taking into account the relatively high land-use efficiency of intercropping systems observed in many studies, this finding indicates that the potential negative effects of intercropping on the use efficiency of other resources are more than offset by its high land-use efficiency when compared with monocropping.The traditional wheat–maize intercropping system, practiced in the Hexi corridor for almost 40 years, is still used on 23% of the land in the surveyed villages.It may therefore be assumed that farmers have gained much experience with this particular system, and have optimised the overall efficiency of this system and possibly other intercropping systems over time.Official statistics on the cultivated areas of China do not distinguish sole crops from intercrops; there is no official record of intercropping.Our findings suggest that the failure to take intercrops and their contribution to TE into account may have severely biased the outcomes of previous studies on the productivity of Chinese agriculture, particularly in areas where intercropping is practiced on a considerable scale."Regarding the control variables, we found that the coefficient for the age of the head of household had a significant positive effect on TE, while the square of the householder's age had a significant negative effect on TE.These findings indicate that experience plays an important role in crop production, but that its marginal impact on TE declines with increases in age.This result is consistent with the findings of Liu and Zhuang, who investigated farm productivity in Sichuan Province in southwest China.The estimated coefficient of land area had a significant positive effect on TE, while the coefficient of the squared area had a significant negative effect.This finding indicates the existence of a non-linear relationship between farm size and TE, with a maximum TE achieved at a land size of approximately 22 mu, which is twice the average size of farms in the sample.The estimated coefficients for the two education variables did not significantly differ from 0.Although better-educated farmers are expected 
to perform agricultural activities in a more efficient way, they also have a higher probability of being engaged in off-farm employment and may therefore spend less time on crop monitoring and have a suboptimal timing of on-farm activities.We found that the number of plots had a significant, positive impact on TE.In other words, the potential positive TE impacts of having more plots as a way to adapt to variations in micro-level agro-climatic conditions outweighed the potential negative effects for the farmers in this sample.This finding is consistent with the positive effect of land fragmentation on TE previously observed in the Jiangxi and Gansu Provinces of China.Finally, the significant positive effect of the township dummy indicated that farms in the Luocheng township have a lower TE than those in the Heiquan township, when other factors affecting TE remain constant.Differences in agro-climatic factors and market conditions may explain this finding.Intercropping systems generally have higher land-use efficiencies than monocropping systems.It remains unclear, however, to what extent the higher yields per unit of land are obtained at the expense of the efficiency with which other inputs such as labour, water and nutrients can be used.In this study, we examined the contribution of intercropping to the TE of a smallholder farming system in northwest China.TE measures the output obtained for a certain crop, or combination of crops, as a share of the maximum attainable output from the same set of inputs used to produce the crop.The farm-level TE of a cropping system is a key determinant of its profitability, and thus an important determinant of the livelihood strategies of smallholder farmers.Although our analysis is limited to a relatively small region in northwest China, the insights it provides are likely to be relevant for other regions where intercropping methods are practiced, both in China and the rest of the world.The contribution of intercropping to TE was examined by estimating a translog stochastic production frontier and an efficiency equation using farm input and output data collected from 231 farm households in Gaotai County, in the Heihe River basin in northwest China.Our main finding is that intercropping has a significant positive effect on TE, implying that the potential negative effects of intercropping on the use efficiency of labour and other resources are more than offset by its higher land-use efficiency when compared with monocropping.The estimated elasticity of the proportion of land under intercropping was 0.744, indicating that TE goes up by 0.744% if the proportion of land used for intercropping increases by 1%.The large and significant value of this estimate gives strong support to the view that intercropping is a relatively efficient land-use system in the studied region.Our results imply that there is still considerable scope for increasing TE in Gaotai County without bringing in new technologies.Increasing the proportion of land used for intercropping may play an important role in this respect, given that only 60% of the land in this region was under intercropping in 2013 and that the elasticity of TE in terms of the proportion of land under intercropping is close to 0.8.The expected increase in TE will contribute to increasing farm output and farm profits without affecting the availability of scarce land resources.It should be noted, however, that this conclusion only holds under the assumption of constant output prices.If the production of non-grain crops 
like cumin and seed watermelon would increase, this could result in lower prices for these crops and negatively affect the TE, and hence the profitability, of these intercropping systems.Recent price declines for maize in China, on the other hand, have increased the TE of maize-based intercropping systems when compared with single maize crops.Farm size was found to play a key role among the control variables affecting TE.The non-linear relationship between TE and the area of cultivated land implies that ongoing policies aimed at increasing agriculture through the promotion of so-called family farms and the renting of land to co-operatives and private companies may make a positive contribution to the overall efficiency of farming in the region we examined.TE was estimated to be highest for farms that are twice as large as the average size observed in our study.The TE analysis employed in this study takes the available technology at the time of the survey as a given; however, productivity gains could also be made through the development and introduction of new technologies, both in intercropping systems and monocropping systems.In the case of intercropping systems, these changes could involve the promotion of new varieties to replace conventional cultivars of component crops, as well as the development of specialised machinery for intercropping systems to reduce its large labour demand. | Intercropping entails the concurrent production of two or more crop species in the same field. This traditional farming method generally results in a highly efficient use of land, but whether it also contributes to a higher technical efficiency remains unclear. Technical efficiency refers to the efficiency with which a given set of natural resources and other inputs can be used to produce crops. In this study, we examined the contribution of maize-based relay-strip intercropping to the technical efficiency of smallholder farming in northwest China. Data on the inputs and crop production of 231 farms were collected for the 2013 agricultural season using a farm survey held in Gaotai County, Gansu Province, China. Controlling for other factors, we found that the technical efficiency scores of these farms were positively affected by the proportion of land assigned to intercropping. This finding indicates that the potential negative effects of intercropping on the use efficiency of labour and other resources are more than offset by its higher land-use efficiency when compared with monocropping. |
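To make the reported efficiency figures concrete, the following minimal Python sketch translates them into output terms. It is an illustration only, not the authors' estimation code: the stochastic frontier and inefficiency equations were estimated from the survey data, whereas here the sample mean TE of 0.85 and the estimated elasticity of 0.744 are simply plugged into a constant-elasticity approximation, and the 10% increase in the intercropping land share used in the example is a hypothetical value.

```python
# Illustrative sketch: how the reported TE figures translate into output terms.
# The mean TE (0.85) and the elasticity (0.744) are taken from the text above;
# the 10% increase in the intercropping share is a hypothetical example value.

def frontier_shortfall(te_score: float) -> float:
    """Share by which observed output falls short of the frontier output."""
    return 1.0 - te_score

def te_after_share_increase(te_score: float, elasticity: float,
                            pct_increase_in_share: float) -> float:
    """Approximate TE after a small percentage increase in the intercropping
    land share, using the constant-elasticity relation %dTE = elasticity * %dShare."""
    return te_score * (1.0 + elasticity * pct_increase_in_share / 100.0)

if __name__ == "__main__":
    mean_te = 0.85
    print(f"Output shortfall at mean TE: {frontier_shortfall(mean_te):.0%}")  # 15%

    new_te = te_after_share_increase(mean_te, elasticity=0.744,
                                     pct_increase_in_share=10.0)
    print(f"Approximate TE after a 10% increase in the intercropping share: {new_te:.3f}")
```

Note that this local approximation ignores the quadratic farm-size terms and the other covariates of the inefficiency equation, which the full estimated model accounts for.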
31,469 | Electric vehicle charging choices: Modelling and implications for smart charging services | In a context of a progressively decarbonised power sector, electric vehicles can bring significant reductions in CO2 emissions from road traffic.Moreover, EVs’ roll out improves air quality in urban areas.However, EVs bring both challenges and opportunities for power systems.Amongst the challenges that large EV penetration may bring there is the potential increase of peak power demand if charging operations occur in coincidence of current demand peaks.Amongst the opportunities there is the possibility to use EV as flexible loads that can provide balancing services to grids with large shares of intermittent or fluctuating renewable energy generation.In order to fully exploit the potential of EVs as a flexible load smart charging strategies need to be implemented.Smart charging can occur in a centralised way via aggregators or through decentralised control architectures.In the centralised framework EV owners do not have transactions in electricity markets, because of the low power of a single transaction.In this centralised framework EV load aggregators act as an intermediary between vehicle owners and grid markets and contract power demand from several EVs.In the decentralised framework, individual EVs respond to market information made available to them.Typically, a static or dynamic price signal is used to incentivise a particular charging behaviour.An example of a static price signal is time-of-use tariffs that would incentivise charging overnight, similar to current time-of-use domestic tariffs for electricity.The typical aggregator based approach to charging demand management implies direct control.This means that control actions are imposed on electric vehicles without the involvement of the electric vehicle owners.Such actions must, however, respect the constraints imposed by owners’ travel needs.Thus the aggregator must collect charging requirements from each member vehicle.Sundstrom and Binding formalise the requirements that EV users communicate to the aggregator in terms of an energy requirement and a timing requirement.The energy requirement specifies the battery level required by the end of the charging operation while the timing requirement specifies the time by which the charging operation must be completed.Under this scheme, users directly affect the flexibility of the controls that can be imposed on the charging operation through their charging preferences.Therefore it is in the interest of the operator to incentivise charging preferences that allow for more flexible operation.Contracts regulating the service provision could include the option for users to override the control imposed by the aggregator.However, even in a decentralised framework, a central entity might provide these pricing signals to owners of electric vehicles.From this perspective, the centralised and the decentralised frameworks overlap.Despite the importance of users’ behaviour in the context described, current smart charging strategies largely rely on simplistic or theoretical representation of EV charging and travel behaviour.The main contribution of this paper is a charging behaviour model that bridges the gap between the representations of charging behaviour used in integrated transport and energy system analyses for the appraisal of smart charging strategies, and the representations used in charging behaviour studies.In integrated transport and energy system analyses, charging behaviour is 
represented either through charging behaviour scenarios or theoretical models.The former are not policy sensitive and thus are not suitable for assessing the response of EV drivers’ to smart charging services.The latter are policy sensitive, but they are not estimated empirically.The absence of strong empirical foundations may lead to weak behavioural realism of the responses to smart charging services.Empirical evidence from charging behaviour literature shows that charging behaviour is heterogeneous amongst drivers.Such heterogeneity is related to differences in driving patterns, individual attitudes towards risk in dealing with limited range vehicles, and idiosyncratic preferences.However, the charging behaviour literature does not provide operational models of charging behaviour that can be used to analyse driver’s response to smart charging services, because the response of drivers to pricing of charging services in smart grid contexts has yet to be addressed.In the present work we develop a random utility model for charging behaviour that is empirically estimated using discrete choice analysis.Because we model jointly activity-travel scheduling choices and charging choices under the activity-based demand modelling paradigm, our charging behaviour model is well suited to be implemented as a module in integrated transport and energy model systems.However, unlike previous charging behaviour models applied in such model systems, ours captures the behavioural nuances of tactical charging choices in smart grid context.It can do so because the trade-offs involved in tactical charging choices in smart charging contexts are captured empirically by model estimation using discrete charging choice experiment data.Our discrete choice experiment were specifically designed to elicit drivers’ charging preferences in smart charging contexts,In addition, another significant contribution is the introduction of the concept of effective charging time as a dimension of charging choice.The definition of ECT makes it possible to use the same representation of charging choice independent from the charging service under analysis.As, Daina et al. point out in a recent review, by and large the current practice for the appraisal of smart charging strategies assumes predefined charging scenarios and exogenous EV use patterns.The use of predefined charging scenarios prevents analyses that are sensitive to electricity pricing, because the charging behaviour is set using rules.It also limits the representation of users’ behaviour heterogeneity.The reliance on exogenous travel patterns implies that travel patterns are independent from the charging decisions.However, if individuals are flexible in their travel choices, ruling out an interdependence between travel and charging choices may lead to biased estimates of EV use.Moreover, treating EV driving patterns as exogenous inhibits representing potentially relevant interactions between time-of-use electricity pricing and road pricing policies.1,Exceptions to the use of charging behaviour scenarios and exogenous travel patterns are less common, but do exist.Two related works by Galus and Andersson and Waraich et al. 
use agent-based modelling and micro-simulation for plug-in hybrid electric vehicle (PHEV) use and charging modelling. These works integrate the transport simulation environment MATSim with a power system simulator. The integration occurs via a charging behaviour module sensitive to electricity pricing. The electricity price signal, generated by the power system module, affects the PHEV agents' simulated decisions to drive and charge in MATSim. Using an evolutionary algorithm, MATSim modifies the driving and charging schedules of all PHEVs in the simulation, attempting to increase the utility of each agent at each simulation step. Although the charging behaviour model adopted is coherent with a game-theoretical framework, allowing the modelled PHEV agents to compete for limited electric network capacity, it lacks a strong empirical foundation. Kang and Recker apply Recker's Household Activity Pattern Problem framework to enable changes in travel-activity patterns caused by charging patterns. However, they use charging behaviour scenarios to define charging patterns. Other models that take into account the effect of electric vehicle use on EV drivers' activity-travel patterns include a study of vehicle-to-grid operations using activity-based equilibrium scheduling and a study on intra-household interactions when scheduling EV use in households with multiple vehicles. These two studies account for the interaction of EV use decisions and activity-travel scheduling decisions in different ways. Nourinejad et al. adopt and extend Lam and Yin's time-based utility theory model, which does not treat schedule constraints explicitly but models the utility of an activity as time dependent and expresses the scheduling problem as a continuous equilibrium problem. Khayati and Kang use Recker's Household Activity Pattern Problem framework, in which the disutility of travel is minimised under spatial and temporal constraints. In the studies above, the effect of the limited range of EVs is accounted for as a spatial constraint that excludes infeasible trip chains from a driver's choice set. In an analogous manner, limited range availability has also been treated as a spatial constraint in traffic assignment models applied to electric vehicles. However, to define distance constraints, range levels need to be observable. Instead, the range levels perceived by EV drivers are latent. Perceived range levels are only partially indicated by the available energy stored in the vehicle battery. This poses a limitation in approaches that represent the effect of limited range on EV travel and charging decisions purely as a spatial constraint. The dominant tendency to rely on charging scenarios and exogenous travel patterns is a likely result of the dearth of data on EV charging and use from which to estimate empirical models. Revealed preference data about EV use and charging are difficult to access. Data generated in settings with variability in prices and tariff structures for charging services are still limited. In order to analyse the response to charging service pricing, it is necessary to rely on choice experiment data, where hypothetical choice situations are presented to a sample of drivers. These "stated response" (SR) experiments offer the opportunity to collect "stated preference" data when preferences cannot be "revealed" in real world markets. SR tools are powerful in non-market situations but unavoidably carry biases inherent in hypothetical choice situations. While much effort has been devoted to describing aggregate spatiotemporal charging demand
patterns, only a few studies have focused on gaining insights into the factors driving charging behaviour. However, understanding individual behaviour is necessary for developing models to assess how an EV driver may respond to charging service propositions with varied characteristics. Franke and Krems analysed the charging behaviour of participants in a German EV trial. As one would expect, they found strong evidence that range level affects charging decisions. Moreover, they discovered significant heterogeneity in the range levels participants feel comfortable with while driving their EVs. We submit that varying degrees of comfortable range can be represented in terms of heterogeneous preferences for range levels. This heterogeneity in preference can be explained by a number of factors which we will explore later in this paper. Franke and Krems provide compelling evidence of the behavioural mechanisms potentially underlying charging behaviour, but the theoretical framework they adopt to describe them, drawn from control theory and behaviour self-regulation, is difficult to operationalise. Zoepf et al. used a random coefficients mixed logit model to model the occurrence of a charging operation at the end of PHEV journeys. The authors identified the following as significant explanatory variables: the current state of charge of the PHEV battery; the available time before the next journey; the distance travelled in the most recent journey; whether the journey was in fact a tour; and the end time of the most recent journey. The authors also found a significant standard deviation in most of the utility coefficients, bringing additional evidence for heterogeneity in charging preferences. While Zoepf et al.'s model can be used for forecasting the occurrence of charging events, it is not, however, suited to model the behavioural response to tariff structures for charging services, because it is not price sensitive. More recently, Yang et al. analysed data from stated choice experiments and brought further evidence of the importance of state of charge levels in charging decisions. Yang et al.'s work is also interesting because it addresses the study of charging choices contextually with route choice. This highlights the importance of considering the charging choice as a further dimension of travel choices. The study of charging choices in contexts where smart charging options are available deserves further investigation. To date, behavioural studies around smart charging propositions have mainly focused on acceptance of smart charging services rather than trying to assess the users' charging behaviour response. A notable exception is a study by Latinopoulos et al.
who study the risky choice in early reservation of dynamically priced charging and parking services. Our aim is to model charging behaviour within the context of smart charging services. In order to achieve this aim we require an operational definition of charging choice, bridging the perspectives of EV drivers and the charging service provider (CSP). Let us consider the drivers' perspective first. EV drivers are interested in the amount of energy stored in the vehicle's battery at any time. The amount of energy stored in the battery defines the spatial constraints of EV use. The time dimension is important as it determines when EV drivers can reach certain destinations to carry out their activities. The charging operation varies the amount of energy stored as a function of time. The duration of a charging operation depends on the vehicle and charger characteristics and on the charging strategy adopted by the supplier. We thus define charging choice as the decision made by the driver, at a given charging opportunity, to charge their vehicle to a specific charge level, starting from a specific instant, and to be made available by a certain time. This definition assumes that the driver has multiple charging options that are characterised by different battery levels at the end of the charging operation and different charging durations. These options might vary in price depending on pricing policies imposed by the CSP. In turn, the CSP is interested in the time profile of power drawn by the EV it is supplying when charging. This time profile is called the charging profile or charging schedule. In a centralised framework, the CSP defines the charging profile within the constraints determined by the following three pieces of information: the instant the vehicle is connected to the grid; the instant the vehicle must be disconnected; and the total amount of energy required over this period. The three quantities itemised above are easily derived from a charging choice. So the charging choice, as we defined it, reconciles the perspectives of driver and CSP. The definition of charging choice we gave above can be further simplified. Consider the instant the vehicle is connected to the grid. Such an instant can only be coincident with, or delayed with respect to, the arrival time at the charging facility. From the electric car driver's perspective, the start of the charging operation could be viewed as coincident with the arrival time, regardless of when the actual energy transfer may start. The simplification described makes it possible to represent both smart charging and conventional charging through the same simple attributes: the amount of energy available in the battery after charging, hereafter target energy (E); the time it takes to obtain E since the arrival at the charging facility, hereafter effective charging time (ECT); and the charging cost (CC). An EV driver will evaluate alternative charging options based on E, ECT and CC. The utility that EV drivers associate with an option will vary depending on their idiosyncratic preferences and the context in which they make their charging choices. EV users may seek to charge their vehicle more or less quickly depending, for example, on the flexibility of their departure time or their perceived risk of unanticipated vehicle use before the planned departure. They may also seek battery levels consistent with their planned travel distances and with buffers allowing for the uncertainty they associate with their travel plans, as well as the uncertainty they associate with driving range predictions. This conceptual framework,
inter alia, takes into account preferences of EV “charging level” standards, which describe the maximum charging speed a charging point can deliver.The higher the charging levels the shorter charging times.Therefore, “charging level” preferences are expressed by the contribution of ECT in determining the utility a driver can attain from a charging option.Understanding preferences for charging levels can enable planning charging infrastructure requirement based on the demand for the specific standard.In summary, our conceptual framework for charging choice builds on the finding of the charging behaviour literature as follows.It captures the effect of range preferences in charging decisions, by means of the contribution of E to the utility drivers attained by charging their EV.Additionally, it accounts for the effects of travel patterns on charging decisions as observable contributions to the heterogeneity in preference for the three charging attributes.Finally, we account for unobservable heterogeneity in charging behaviour by specifying empirical models using random parameters that are distributed so to describe the variability of preferences for E and ECT across the drivers.To clarify the concept of charging choice adopted in this paper, and its relationship to the users and the CSP perspective, consider Fig. 1.Fig. 1a shows the concept of charging choice as proposed in this study.At a given charging opportunity, drivers choose their target state of charge and effective charging time.The SOC is a representation of the battery level as a fraction of the total battery capacity.The charging choice space is constrained by the maximum charging rate, the SOC before charging and the maximum battery capacity.The maximum charging rate in SOC per unit time is the slope of the left boundary line of the light grey areas in Fig. 1.A particular charging alternative is represented by a point in the feasible charging choice space.We should point out, that in Fig. 1, the maximum charging rates are shown as constant.In reality, however, these are a function of the SOC, typically slowing down for high SOC levels.In the picture, these are presented as constant purely for simplicity of exposition, but whether charging rates are a function of SOC does not affect our concept of charging choice; it simply changes slightly the shape of the choice space.Fig. 1d depicts a stepwise charging profile that a CSP may impose to a vehicle.Such charging profile may be the result of the CSP operational management process that optimises the energy distribution to several EVs.Each EV could connected to the grid at a different time, t0, and request different target states of charge and effective charging times.The vectors of t0, target SOC and ECT would enter as constraints in CSP’s operational management optimisation.As the charging profiles would be chosen by the CSP, the drivers will not know in advance their vehicles state of charge at any moment.This uncertainty may induce the strongly risk averse to prefer choosing to charge at the maximum charging rate, i.e. choosing target SOC, ECT points that belong the left boundary line of light grey area of the graphs of Fig. 
1.These risk averse choices correspond to inflexible load.Less risk averse drivers will make choices away from the boundaries of the light grey area.Choices away from the boundaries of the light grey area allows the CSP to flexibly determine the charging profile.In fact, the CSP might incentivise highly flexible choices with the devising a suitable tariff structure for the charging service.The charging choice space of Fig. 1 is depicted as continuous within its boundaries.However, in practice drivers would face discrete options, when setting their charging preferences via a smart charging device.This justifies modelling charging choices as discrete choices.The standard approach for discrete choice analysis is based on the theoretical framework of random utility.According to RUT, decision makers are assumed to choose the alternative that maximises their own utility from a set of mutually exclusive alternatives.RUT accommodates the analyst’s inability to describe perfectly the utility attained by the decision maker by specifying the utility Ui of the alternative i, as the sum of a function Vi of the attributes of the alternative Xi and an error term εi.The parameters β, in linear-in-parameter utility formulations, represent the marginal utilities with respect to the attributes of the alternative i, and are also called taste parameters.The error term accounts for the unobserved utility as well as for measurement and specification errors.Note that the utility for an alternative, in general, varies across individuals.Thus in theory each individual has his/her own utility function.However in practice, heterogeneity is captured by using individual characteristics as covariates and by opportunely breaking up the error term so that the idiosyncratic component can be meaningfully interpreted as random heterogeneity.2,A driver will chose alternative i if all the other alternatives in his/her choice set give a lower utility.The formulation above as well as that of the theoretical models in Sections 2 and 3 omit representing the fact that utilities are individual-specific to avoid proliferation of subscripts.The empirical models, detailed in Section 4, will however capture individual heterogeneity.Daily activity and travel behaviour and charging behaviour present manifold interrelated choice dimensions.Charging choices, and corresponding values E and ECT, are intertwined with the dimensions of daily activity-travel choices.Amongst these interactions, the relationship between charging choices and timing choices of activities and travel is of particular interest when modelling the effect of smart charging strategies and the tariff structures of the charging services.Consider for example to following two scenarios.A common scenario of residential electricity pricing is a two-period time of use pricing, in which electricity is cheaper at night.In such a case, EV drivers spending enough time at home during the night, would most likely charge overnight and not consider delaying their departure in the morning just to extend the duration of the charging operation, as this second option would not provide any benefits.As alternative scenario we take one in which the electricity generation mix with significant penetration of wind power.In this second scenario, there may be days in which moderate wind is forecasted during the morning hours at times when usually drivers depart their homes.Using such forecasts, the CSP might offer very low charging prices for vehicles plugged-in early in the morning.In this case, 
some drivers might consider delaying their departure, if their travel plans are flexible.The charging choice and departure time choice thus become joint decision.The modelling framework we develop here account for these types of interactions.In order to do so we embed the charging choice dimension into the random utility modelling framework for activity and travel timing decisions traditionally used for choice of time of travel under road pricing.We now extend the scheduling choice model described above to model EV use scheduling and charging choices.To achieve this, we make the following broad assumptions:Individuals make their charging decisions once they arrive at a location where charging is available, having in mind their next travel requirements;,They decide when to depart jointly with the charging decision as the duration of the charging operation may or may not affect their departure time;,Such joint decision affects a portion of an EV driver’s schedule delimited between charging opportunities;,The evaluation of a charging alternative is based on three attributes that characterise it: E, ECT, and CC.The assumption to model jointly the choices related to a portion or an entire day’s activities and travel schedule is common in activity based modelling.In the previous subsection we have mentioned the case of tours, but in general activity-based modelling views demand for activity and travel is “as a choice among all possible combinations of activity and travel in the course of a weekday”.Our approach is thus in line with activity based modelling literature.The modelled charging choice is myopic, because a driver considers at each charging and scheduling decision only the current charging opportunity, disregarding other charging opportunities that may occur after the current.A myopic choice, however, appears consistent with the view of charging behaviour as a coping strategy resulting from range appraisal, as conceptualised and tested by Franke and Krems.Nevertheless, this is a simplification if one considers situations in which a variety of charging opportunities with different electricity prices were available to an electric vehicle driver.Fig. 
2 shows the activity-travel episodes that constitute the setting for each EVSUC choice.Each choice refers to a charging opportunity and the activity-travel episode that spans between the arrival time to the location with the current charging opportunity and the arrival time to the next charging opportunity.In the figure such an episode is illustrated between vertical dashed lines.In order better to explain the meaning of the model presented above, we consider the specific case of a two leg home based tour, with home charging as the only charging opportunity, instead of a general activity-travel episode.The meaning of the expression above is that EV drivers, when planning their EV use, adjust their departure times from home, and their activity participation at their destination, by taking account of travel time and travel cost changes, just as conventional car drivers do.They also, however, choose charging durations and target energy levels by responding to charging costs.The expression above allows for situations in which EV drivers may trade schedule delay late levels with target energy levels.For example, at a charging facility with a fixed charging power, they may opt for a longer charging time, to obtain a higher target energy level, at the cost of a delay holds with the equality sign).The weights that charging and travel attributes have in EV charging and use scheduling choices need to be estimated empirically.The variability of such weights across individuals needs also to be estimated.The empirical estimates of βX determine how charging cost, target energy, effective changing time or schedule delays, etc. are traded one another in such choices.Estimating their heterogeneity across individuals enables capture the extent to which such trade-offs vary across individuals.Indeed the home-based tour version of the model is easily amenable to empirical estimation making use of stated choice experiments.These, in fact, can be designed specifically to estimate the salient parameters of the charging choice, using as a hypothetical situation the choice amongst alternative charging options upon an EV driver’s arrival at home, before undertaking his/her next tour.We adopt the modelling framework presented above in order to analyse home charging choices from stated choice experiments.In the experiments, hypothetical settings were designed such that the subjects were required to charge their vehicle in order to undertake the next home based tour, without having the possibility to charge at the destination.The charging choice data used for the empirical analysis presented in this paper were collected in 2012 as part of a PhD research project on charging behaviour.The choice experiments were part of a broader computer aided personal interview survey called ECarSim.ECarSim is an extension of the reflexive survey concept which was developed in order to collect data about unfamiliar hypothetical situations, while mitigating the potential negative effects of this unfamiliarity on the response reliability.ECarSim is an internet-based interactive tool consisting of three parts:A questionnaire on socio-demographic characteristics of the respondents and a one car diary;,A stated adaptation of the respondents’ travel diary to an EV diary, including the specification of the charge timing, in a setting of conventional charging with constant charging power;,A stated choice experiment section consisting of two types of tasks: a charging and tour scheduling choice.ECarSim is specifically addressed to car drivers 
who in most cases have never operated an electric car.They are thus expected to have little or no familiarity with the charging operation and what this may entail for their travel patterns.In order to mitigate potential bias caused by lack of familiarity with EV we use the stated adaptation section of the survey.The stated adaptation makes respondents think about the potential effects of limited range recharging times and how charging operations can be accommodated in a typical day.Respondents are asked to set up the charging operation and possibly adapt their car diary in order to accommodate the use of an electric car with characteristics matching those of the Nissan Leaf.This section of the questionnaire thus serves as a “reflexive section” to engage respondents and smoothly introduce them to the hypothetical world for the charging choice exercises.The stated choice experiments are the main data generation process of ECarSim in terms of eliciting charging behaviours in a hypothetical smart charging scenario.In this scenario, the smart charger requires as input data only the target battery level and the time this is to be achieved, “Time EV ready”.Once these data are entered, the smart charger dashboard provides information about the cost of the charging operations.Respondents are told that “the price of electricity to charge your electric car will vary depending on how fast you want your electric car ready with a given battery level, and on the time of day you charge.,Respondents face two series of 12 choice situations.In the first series, after being reminded of the features of their first tour of the day, they are asked to choose between two alternative settings for the charging operation occurring during the vehicle dwell time at home before the tour.In this series, the ECT of an alternative never exceeds the original observed dwell time at home.The design variables are E, the ECT and CC.The battery level before charging and the start charging time are fixed across alternatives as well as across choice situations.While the former is also fixed across individuals, the latter is individual-specific.An example of choice situation in DCE1 is shown in Fig. A1.The choice attribute levels are the following:Four target battery levels.These are spread between the minimum energy to charge to make the tour feasible and the maximum energy that can be charged into the battery, given the battery state of charge before charging, the battery capacity and the maximum charging power.Four levels of charging operation duration.These are spread between a minimum given by the time to charge the minimum energy to make the tour feasible at the maximum charging power and the original vehicle dwell time at home.CC levels are obtained by three unit price levels multiplied by the amount of energy charged in the corresponding alternative.The experimental design is based on a respondent-specific efficient design approach.This approach is similar to one proposed by Rose et al. 
for the design of choice experiments in the presence of a reference alternative.Although the alternatives were generated only based on the three design variables described above, these were presented to the respondent in an extended form to ensure sufficient clarity in their description.It is worth mentioning here that the target battery level is also described in terms of the corresponding available driving range.This range is presented as an interval, to remind respondents that for a given battery level the range may vary, depending on their driving style, use of heating or air conditioning and road/traffic conditions.In the second series of choice situations, the charging operation may exceed the original vehicle dwell time at home.Moreover, instead of a simple binary choice, respondents are offered options to partially absorb the schedule delay late by decreasing the time spent at the main destination of the home based tour.Up to four activity contraction options are offered for each of the two charging alternatives, depending on the original dwell time at the destination and the schedule delay level of the alternative.Finally, in the second choice experiment, respondents have the alternative to avoid charging and to use another mode, or to suppress the tour entirely.An example of choice situation in DCE2 is shown in Fig. A2.Note that only activity participation penalties at the destination, corresponding to a decrease in activity duration are retained in the specification, because a reasonable response to a charging-induced schedule delay may be curtailing the activity at the destination.Screen shots of the DCE1 and DCE2 as they appear in ECarSim are provided in Appendix A.Additionally a summary of the attributes presented in the two choice experiments is shown in Table 1.Further details on ECarSim and the design of the choice experiments can be found in Daina’s PhD thesis.In addition to capturing unobserved heterogeneity using mixed logit specifications, we attempt also to capture systematic variations in EVSUC preferences by interacting alternative attributes with,Planned activity-travel characteristics exogenous to the setting of the choice experiments;,Indicators of tour flexibility.While a variety of specifications for these interaction terms were tested, in the results shown in this paper we present specifications that attempt to balance parsimony, goodness of fit and interpretability.We conducted a specification search estimating a large number of models with different specifications for the systematic portion utility, in order to achieve the final specifications presented in Section 4.4.These resulting specifications were chosen to provide a good balance between goodness of fit and parameter interpretability as it is common practice in DCM estimation.In particular, in our specifications’ search the interaction terms between alternative attributes and individual specific variables are chosen to reflect of a priori notions of possible effects.Amongst these effects, the planned travel driving distance is expected to affect the marginal; utility for E; demographic variables as it are expected to affect the cost parameter; flexibility of travel and other journey characteristics such as travel in peak time period are expected to affect the marginal disutility of schedule delays.More details on the interpretation of these interaction terms is reported in the results’ section.The number of car drivers in the dataset used for model estimation is 88.For each choice experiment, a 
respondent faces 12 choice situations, therefore the datasets consist of 1056 observations for DCE1 and 1056 for DCE2.As DCE1 is a binary choice between two generic charging choice alternatives, we observed only a slightly higher number of choices for alternative A than for alternative B.For DCE2 alternatives, availability and choice statistics for the sample are reported in Table 2.We observed that a charging alternative is chosen for 75% of the choice situations.Unsurprisingly, when respondents find charging alternatives unattractive, they tend to state a preference for a shift in mode rather than for travel suppression.The descriptive statistics of respondents’ characteristics, and the tour characteristics used to specify interaction terms in the utility specification, are reported in Fig. 4.Amongst the car drivers in the sample over 47% are aged 35 or younger, and the majority are in full employment.About 55% of the tours respondents are expected to drive after charging have a distance that is between 30 and 40 miles inclusive, while for the remainder the tours are longer than this.The first leg of 38% of those tours is completely or in part driven in peak traffic hours.48% of the tours are considered inflexible in timing.It should be pointed out that selecting drivers who frequently carry out a tour of a fairly long distance and designing the hypothetical charging choice situations for the choice experiment around those tours was a deliberate survey design decision, motivated by the necessity to make the choice situation particularly meaningful to the respondents given the considerable fraction of battery capacity required to drive such distances.We present in this section models’ estimation results.These were obtained using in part BIOGEME and in part using MATLAB codes written by the authors.Specifications, parameter estimates, and relevant model statistics are reported in Table 3 and Table 4 for DCE1; and Table 5 and Table 6 for DCE2.The logit estimates from DCE1 show a positive marginal utility of E, a marginal utility of ECT that changes sign across the sample and a negative cost coefficient.Considering the effect of systematic variations, at planned driving distances above 40 miles, we observe that a positive jump in the marginal utility of E occurs and a variation in sign from positive to negative of the marginal utility of ECT.The cost coefficient varies with age and employment status.This variation is such that the cost coefficient is larger in magnitude for younger people and smaller for individuals in full time employment.These systematic effects are all also significant in the mixed logit specification, bar the effect of the employment status on the cost coefficient.A positive marginal utility of E is unsurprising.Similarly, the strong dependence on the driving distance of the marginal utility of E is unsurprising.When a journey is longer the consequences associated with remaining stranded are higher and thus the utility attached to higher available ranges is larger.The sign variation in the marginal utility of ECT across the planned driving distance levels is an unexpected result and deserve further investigation.Notwithstanding, if we assumed that this unexpected result is “behavioural”, it would mean that individuals planning to drive shorter distances prefer having the vehicle charged just in time for departure, whereas those planning longer travel have a preference for having the charging operation finished well ahead of departure time.We note that preferences for 
shorter ECT and for higher E when planning longer trips are consistent with risk aversion.Longer trips increase the exposure to uncertain events, which might cause delays and require range-consuming detours.Given this uncertainty, a vehicle charged earlier than the planned departure with higher range buffers seems a reasonable cautionary response.Nevertheless, potential hypothetical biases or survey design effects could not be ruled out, as causes of the dependence on distance of marginal utility of ECT.Therefore, our interpretation of this dependence should be treated as speculative.We show the probability density functions of the marginal utilities of E and ECT for the estimation sample in Fig. 5.The empirical distribution of marginal utility of E shows that 78% of the mass of the distribution is above zero, but a still considerable 22% is below zero.Clearly the assumption of a normally distributed parameter necessarily implies that this will have a mass both above and below zero.Observing that nearly the 80% of the sample value E positively is reassuring, as it is reasonable that higher ranges are generally preferred to lower ones.However, a 22% negative mass in the distribution of the marginal utility of E deserve further investigation.This negative mass could be the result of a binary choice in DCE1 that forces a choice even between alternatives that may both be unattractive to the respondents.We note that in DCE2, where respondents can avoid unattractive charging alternatives, the distribution of marginal utility of E shows that 90% of the sample value E positively.The empirical distribution of the marginal utility of ECT highlights a considerable variation in sign across the sample, already highlighted in the logit model.For the models estimated using data from DCE2, we observe that the marginal utility for E is intuitively positive.In the mixed logit model, 90% of the mass of the empirical distribution of the marginal utility of E is positive.Both SDL and CC have negative coefficients.In the mixed logit model 90% of the mass of the empirical distribution of the marginal utility of SDL is negative.The disutility for SDL is intuitive as it signals disutility for late departures w.r.t. 
a preferred departure time. The effect of the activity participation penalty PD is insignificant for the charging alternatives. However, if the activity is totally cancelled, the number of hours lost affects the utility for this alternative. In the models for DCE2 we also estimate constants that capture baseline preferences for EV charging and use, avoid charging and travelling, and avoid charging and shift mode. We observe a stronger preference for the first option if the travel distances are low, all else equal. For longer travel distances, the first option is preferred the least, all else equal. As in DCE1, in DCE2 the variation in planned travel distance captures a systematic variation in preference for E. The taste for SDL is not affected by planned travel distance. Instead, a systematic variation in taste for SDL is observed as a function of whether one plans to travel in peak times and of whether one considers the timing of their travel inflexible. In particular, both peak time travel and travel inflexibility increase the disutility for delays. The systematic heterogeneity in the cost coefficient is captured by age, but not by employment status, consistent with the mixed logit results from DCE1. In terms of goodness of fit, unsurprisingly, also for DCE2, the mixed logit has a better performance than the logit specification. Overall, the results of this first empirical implementation of the conceptual and modelling framework introduced in this paper highlighted a potentially large preference heterogeneity that may affect charging choices in smart grid contexts. An important implication of the potential heterogeneity in charging behaviour uncovered here is that analyses of the impact of electric vehicle deployment typically obtained by making use of charging behaviour scenarios deserve caution. Individual preferences and specific travel needs may induce EV drivers to respond differently to smart charging offerings, and therefore charging behaviour scenarios designed to represent the effect of smart charging strategies in impact analyses may greatly overestimate the effectiveness of such strategies. Our results also suggest interesting implications for the operations of charging service providers. If the valuation of ECT is indeed as heterogeneous as our results suggest, and such heterogeneity is not a spurious effect of our survey instrument, charging service providers could make use of such heterogeneity. Assuming that they will have enough observations to classify drivers along the ECT preference dimension, they could "extract flexibility" from those more inclined to longer ECTs without incentives. On the other hand, they could charge a premium for "inflexibility" to those preferring quick charging operations. Such a premium may induce some drivers to switch to more flexible charging options, depending on their cost sensitivity. More generally, consumer heterogeneity in charging preferences could be exploited in revenue management strategies for charging services. The main limitation of the analytical results presented above is that they are not readily generalised to the entire population of UK drivers due to the small sample available in our dataset. Notwithstanding this, we believe that the useful insights on charging behaviour obtained in this paper provide a basis for further exploration of charging behaviour in smart grid contexts. Our insights can guide the design of additional choice experiments that ideally could be administered to a larger and more representative sample of the driver population over specific geographical areas of interest.
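Since the mixed logit results above are discussed in terms of the share of the coefficient distribution lying above or below zero, the following minimal Python sketch shows how such sign shares follow from a normally distributed random coefficient, both analytically via the normal CDF and by simulation over random draws. The mean and standard deviation used here are hypothetical placeholders, not the estimates reported in the results tables.

```python
import math
import random

# Minimal sketch of how the sign shares quoted above arise from a normally
# distributed random coefficient, as assumed in the mixed logit specifications.
# The mean and standard deviation are hypothetical placeholders, not estimates.

def share_positive_analytic(mean: float, std: float) -> float:
    """P(beta > 0) = Phi(mean/std) for beta ~ N(mean, std^2)."""
    z = mean / std
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def share_positive_simulated(mean: float, std: float, draws: int = 100_000) -> float:
    """The same share approximated from random draws, mimicking the inspection
    of the empirical distribution of marginal utilities across the sample."""
    random.seed(0)
    positive = sum(1 for _ in range(draws) if random.gauss(mean, std) > 0.0)
    return positive / draws

if __name__ == "__main__":
    mean_beta, std_beta = 0.8, 1.0  # hypothetical values
    print(f"Analytic share of positive marginal utilities:  {share_positive_analytic(mean_beta, std_beta):.2f}")
    print(f"Simulated share of positive marginal utilities: {share_positive_simulated(mean_beta, std_beta):.2f}")
```

With these placeholder values roughly 79% of the implied distribution is positive, which is of the same order as the 78-90% positive shares reported above for the marginal utility of E.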
The integration of the road transport and power systems requires the exploitation of EV load flexibility by implementing smart charging services. The effectiveness of smart charging services needs appraisal through modelling. The state of the practice of integrated transport and energy analyses for the appraisal of smart charging services relies on simplistic representations of charging behaviour that are not policy sensitive, or on theoretical models that lack strong empirical foundations. Our work bridges the gap between the representations of charging behaviour used in integrated transport and energy system analyses and the literature of charging behaviour studies. We bridge this gap by developing a random utility model for joint EV drivers' activity-travel scheduling choices and charging choices. Our model easily integrates into activity-based demand modelling systems for the analyses of integrated transport and energy systems. However, unlike previous charging behaviour models used in integrated transport and energy system analyses, our model empirically captures the behavioural nuances of tactical charging choices in smart grid contexts. We estimate empirical versions of the model using data from two discrete choice experiments. Our empirical results provide insights into the value placed by individuals on the main attributes of the charging choice. These attributes are target energy (E), effective charging time (ECT) and charging cost (CC). We found that E tended to have a positive marginal utility for a large majority of drivers in our sample: 80–90%, depending on which choice experiment is considered. This result is intuitive and was in accordance with our expectations, because a positive marginal utility for E means that drivers prefer higher range levels. The result for the marginal utility of ECT is more mixed and therefore more interesting. When charging levels do not induce schedule delays, the marginal utility of ECT is positive for about 60% of the drivers in our sample and negative for the rest. Therefore, when charging does not induce delays, the majority of our sample of drivers prefer keeping the vehicle under charge as long as they are at home. Only 40% of the sample prefer charging as fast as possible. The split in preference sign for ECT across the sample is smallest when the effective charging duration induces late departures: 90% of drivers accrue less utility from alternatives with higher schedule delays, all else being equal. The significant heterogeneity in the valuation of E and ECT levels has important implications. First, heterogeneity in charging behaviour suggests that the use of fixed charging behaviour scenarios in the analysis of the impact of EV charging management strategies may misrepresent the load from EV charging. In addition, heterogeneity implies that charging service providers could incentivise more flexible charging choices with targeted actions for "inflexible drivers". Indeed, charging service providers could exploit the segmentation in charging behaviour for their revenue management strategies. The insights provided by our empirical work should be further validated in studies with larger samples and revealed preference data in addition to choice experiment data. However, the characterisation of charging behaviour preferences enabled by our analytical framework opens the way for further studies on the behavioural response to smart charging and vehicle-to-grid services. We recommend that further research in this direction is pursued, as revealed preference
data becomes available with increased market penetration of EVs. | The rollout of electric vehicles (EV) occurring in parallel with the decarbonisation of the power sector can bring uncontested environmental benefits, in terms of CO2 emission reduction and air quality. This rollout, however, poses challenges to power systems, as additional power demand is injected in a context of increasingly volatile supply from renewable energy sources. Smart EV charging services can provide a solution to such challenges. The development of effective smart charging services requires pre-emptively evaluating EV drivers' response. The current practice in the appraisal of smart charging strategies largely relies on simplistic or theoretical representations of drivers' charging and travel behaviour. We propose a random utility model for joint EV drivers' activity-travel scheduling and charging choices. Our model easily integrates into activity-based demand modelling systems for the analyses of integrated transport and energy systems. However, unlike previous charging behaviour models used in integrated transport and energy system analyses, our model empirically captures the behavioural nuances of tactical charging choices in smart grid contexts, using empirically estimated charging preferences. We present model estimation results that provide insights into the value placed by individuals on the main attributes of the charging choice and draw implications for charging service providers. |
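As a concrete illustration of the random utility framework summarised above, the following minimal Python sketch computes multinomial logit choice probabilities for charging alternatives described by the three attributes E, ECT and CC. It is a sketch only: the taste parameters and the two example alternatives are hypothetical placeholders rather than the estimates reported in the paper, and the estimated models additionally include interactions with travel and socio-demographic characteristics as well as normally distributed random coefficients.

```python
import math

# Minimal multinomial logit sketch of the charging choice described above: each
# alternative is characterised by target energy E, effective charging time ECT
# and charging cost CC, and the chosen alternative is the one with the highest
# random utility. The taste parameters below are hypothetical placeholders,
# not the estimates obtained from the DCE1/DCE2 data.

BETA = {"E": 0.30, "ECT": -0.10, "CC": -0.80}  # hypothetical marginal utilities

def systematic_utility(alternative: dict) -> float:
    """Linear-in-parameters utility V = beta_E*E + beta_ECT*ECT + beta_CC*CC."""
    return sum(BETA[attr] * alternative[attr] for attr in BETA)

def logit_probabilities(alternatives: list) -> list:
    """Multinomial logit choice probabilities P_i = exp(V_i) / sum_j exp(V_j)."""
    v = [systematic_utility(a) for a in alternatives]
    v_max = max(v)                             # shift for numerical stability
    exp_v = [math.exp(x - v_max) for x in v]
    total = sum(exp_v)
    return [e / total for e in exp_v]

if __name__ == "__main__":
    # Two hypothetical home-charging options reaching the same target energy:
    # a quick but expensive charge versus a slower, cheaper overnight-style charge.
    options = [
        {"E": 20.0, "ECT": 2.0, "CC": 6.0},
        {"E": 20.0, "ECT": 8.0, "CC": 3.0},
    ]
    for option, p in zip(options, logit_probabilities(options)):
        print(option, f"choice probability = {p:.2f}")
```

The same structure extends to the mixed logit specifications by drawing the E and ECT coefficients from their estimated distributions and averaging the resulting choice probabilities over the draws.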
31,470 | A mouse model-based screening platform for the identification of immune activating compounds such as natural products for novel cancer immunotherapies | In 2016 cancers in general caused 22% of deaths worldwide with respiratory cancers alone ranked 6th among the top ten causes of death.1,Additionally, the American Association for Cancer Research predicts a global increase in cancer prevalence from 15.2 million cases in 2015 to up to 24 million cases in 2035.2,This indicates a strong need for new treatment approaches.Besides the traditional options of surgery, chemotherapy and radiation, immunotherapy is being applied with increasing success.In the latter case, medications are used to support the immune system to target the cancer without affecting healthy cells.For their pioneering work in this field, James P. Allison and Tasuku Honjo were awarded with the Nobel Prize in Physiology or Medicine in 2018 after discovery of immune checkpoint inhibitors and their usage as anti-cancer drugs.3,Immune checkpoint proteins are expressed on certain types of immune cells like T cells in order to protect from uncontrolled immune activation.Cancer cells can hijack that mechanism by expressing ligands for these proteins and thus prevent anti-cancer immune responses.4,To mount an effective immune response, antigen-presenting cells such as dendritic cells and macrophages sample non-self antigens e.g. from cancer cells and activate T cells expressing T cell receptors specific for that antigen.In the presence of supportive cytokines such as interleukin-12 these specific T cells differentiate and become capable of targeting the cancer cells.This process defines APCs and T cells as key players in anti-cancer immunity.DCs can be subdivided into plasmacytoid DCs and conventional DCs.pDCs are specialized type I interferon producers and are important in antiviral immunity.5,However, their role in cancer development has been increasingly studied in the last few years and pDCs can both promote and inhibit tumor progression.6,7,8,9,They tend to be tolerogenic but can also be immunogenic under certain stimulation conditions.10,cDCs on the other hand comprise roughly all other DCs except for monocyte-derived DCs.DCs have already been intensively investigated for their application in vaccination approaches in which antigen-loaded, activated DCs get injected into patients to induce effective anti-tumor responses.Combinations with for example immune checkpoint therapies seem additionally encouraging but the development of resistances against common medications is still a main challenge in drug development.11,12,Strategies for overcoming intrinsic as well as acquired resistances call for new lead structures.One solution might be provided by natural products.Natural products originate from living organisms which can synthesize secondary metabolites as an adaption to biotic and abiotic stress factors.13,14,15,They provide a wide variety of bioactive molecules and compounds.Here, penicillin and aspirin are widely known examples for natural product derived drugs.16,It can be hypothesized that there are compounds which could serve as immunotherapeutic agents helping the immune system to target the tumor.Natural products are not yet commonly used as immunotherapeutic agents and screening guidelines are lacking.Here we describe a methodical screening pipeline which can be applied to identify immunostimulatory properties of natural products and to select compounds that are promising for further medical research and drug 
development.We tested natural products for cytotoxic and immunomodulatory characteristics on murine bone marrow-derived cell cultures comprising cDCs, pDCs and macrophages and present an outlook on opportunities to further analyze, develop and optimize such compounds.MTT and IL-12p40 experiments were performed with wildtype or with get40 reporter mice17, respectively.OVA-specific, MHC class II-restricted TCR-transgenic mice were used for T cell activation assays.18,Animals were kept under specific pathogen-free conditions in the animal research facility of the University of Düsseldorf according to German animal welfare guidelines.Mice for bone marrow preparation were sacrificed via cervical dislocation.Femurs and tibias were removed from the bodies, kept in PBS, disinfected in 70% ethanol for 3 min and washed in PBS again.The ends of the bones were cut off and the bone marrow was flushed out with FCS-containing medium.To remove erythrocytes, the bone marrow was resuspended in 3 ml erylysis buffer.After 3 min incubation erylysis was stopped with 5 ml medium.The cell suspension was filtered through 100 µm cell strainers.After centrifugation, cells were resuspended in medium and counted for cell culturing.For obtaining cultures with high amounts of cDCs, bone marrow cells were cultured in VLE DMEM containing 10% heat-inactivated FCS, 0.1% 2-mercaptoethanol, and granulocyte–macrophage colony-stimulating factor.GM-CSF cultures were performed as previously described.19,In short, 2 × 106 cells in 10 ml medium were added to 94 × 16 mm non-treated petri dishes and kept for 10 days.On day 3 of the culture 10 ml GM-CSF containing medium was added to the plates.On day 6 10 ml medium was carefully removed and centrifuged.Supernatant was removed and the remaining cell pellet was resuspended in 10 ml medium and added to the dish again.Stimulation was performed on day 9.For pDC-rich cultures bone marrow cells were cultured in medium consisting of VLE RPMI containing 10% heat-inactivated FCS, 0.1% 2-mercaptoethanol, and FMS-like tyrosine kinase 3 ligand for 8 days.Flt3L cultures were performed as previously described.19,20,21,In short, 20 × 106 bone marrow cells in 10 ml were added to 94 × 16 mm non-treated petri dishes.On day 5 medium was refreshed by careful removal of 5 ml medium, centrifugation, resuspension in 5 ml fresh medium and return to the culture dishes.Stimulation was performed on day 7.Cultures mainly containing macrophages were obtained by culturing bone marrow cells in medium consisting of VLE RPMI containing 10% heat-inactivated FCS, 0.1% 2-mercaptoethanol, and macrophage colony-stimulating factor.M−CSF cultures were performed as previously described.19,In short, 1.5 × 106 bone marrow cells in 10 ml were added to 94 × 16 mm non-treated petri dishes and cultured for 7 days.On day 3 5 ml fresh medium were added to the cultures.Stimulation was performed on day 6.After preparation of cell cultures as described in section 2.2, cells were stored at 4 °C for 30 min and afterwards scraped off the dishes.Cells were centrifuged, resuspended in fresh medium and counted for culturing in non-treated 96-well plates.GM-CSF-cultured cells were seeded at a density of 8 × 104 cells/well, Flt3L-cultured cells at 4 × 105 cells/well and M−CSF−cultured cells at 6 × 104 cells/well.Afterwards, cells were stimulated with 0.1, 1 or 10 µM of natural products or as controls with DMSO or staurosporine for a final volume of 200 µl and incubated at 37 °C for 24 h.At the end of stimulation 20 µl of 5 mg/ml thiazolyl blue 
tetrazolium bromide were added to each well.Samples were incubated at 37 °C for 3 h. Afterwards, plates were centrifuged, 170 µl of the supernatant was discarded and 100 µl 5% formic acid in isopropanol were added to the wells and mixed carefully.Absorption was measured at a wave length of 570 nm using a microplate reader.For determination of IL-12p40 production by activated DCs and macrophages, GM-CSF cultures were set up from get40 mice as described in section 2.2.1.On day 9, cells were stored at 4 °C for 30 min and afterwards scraped off the dishes.Cells were centrifuged, resuspended in fresh medium and counted for culturing in non-treated 12-well plates.Cells were seeded at a density of 5 × 105 cells/well for a final volume of 1 ml and initially treated with 5 concentrations of 0–1 µM CpG 2216 or 0–10 ng/ml lipopolysaccharide, respectively, for 16 h to determine suboptimal stimulation conditions.For testing of natural compounds, cells were incubated with 0.1, 1 or 10 µM of natural products with or without 0.1 µM CpG 2216 or 1 ng/ml LPS or left untreated at 37 °C for 16 h. Untreated cells or cells treated with 0.1 µM CpG 2216 or 1 ng/ml LPS were used as controls.At the end of stimulation, supernatant was collected for ELISAs before storing the plates at 4 °C for 30 min.After scraping off, cells were distributed to flow cytometry tubes.Cells were centrifuged to remove the supernatant and Fc binding sites were blocked with an anti-CD16/CD32 antibody at 4 °C for 10 min.Subsequently, cells were stained with anti-CD11b APC, anti-CD11c APC-Cy7, anti-CD86 PE-Cy7 and anti-MHCII biotin at 4 °C for 30 min.After washing the cells with PBS containing 2% heat-inactivated FCS and 2 mM EDTA, they were stained with streptavidin PerCP-Cy5.5.After another washing step, cells were resuspended in DAPI containing buffer.Following this, samples were kept on ice in the dark and measurement was performed at a BDFACS Canto II.Mean fluorescence intensities were calculated by FlowJo version 10.5.3.To test whether IL-12p40-producing DCs are also capable of sufficient T cell priming T cell activation assays were performed.For the initial test experiment GM-CSF cultures were set up and stimulated with 1 µM CpG 2216 on day 9.On day 10 OT-II mice were sacrificed by cervical dislocation and mesenterial, axial, brachial, inguinal, paraaortal and submandibular lymph nodes were removed and collectively kept in PBS for further processing.LNs were then put onto 70 µm cell strainers in a small petri dish filled with 3 ml OT-II medium consisting of RPMI 1640 containing 10% heat-inactivated FCS.Organs were homogenized by pressure applied via a syringe plug.Following, cells were counted and CD4+ cells were separated by MACS according to the supplier’s protocol.In short, cells were Fc-blocked as described in section 2.4 and incubated with biotinylated anti-CD4 antibodies.Subsequently, magnetic anti-biotin beads were added and CD4+ cells were positively selected by running cells along a MACS magnet.CD4+ cells were afterwards added to non-treated 96-V-bottom plates in an amount of 2 × 104 cells/well according to Reinhardt et al.17 GM-CSF-cultured cells were put at 4 °C for 30 min and afterwards scraped off the dishes.Samples were centrifuged, resuspended in OT-II medium and added to the wells at 5 × 104, 2.5 × 104, 1 × 104, 2 × 103, 4 × 102 or 0 cells/well in triplicates.Lastly, cells were pulsed with 300 nM OVA323-339 peptide.Samples were incubated for 5 days before analysis of the supernatant for IL-2 amount by ELISA.Analysis of 
natural products was performed accordingly: 1 × 104 GM-CSF-cultured cells were incubated in technical duplicates with cytochalasin D or 18-dehydroxycytochalasin or manzamine J N-oxide with or without 0.1 µM CpG 2216 or 1 ng/ml LPS or left untreated at 37 °C for 24 h. Untreated cells and cells treated with 0.1 µM CpG 2216 or 1 ng/ml LPS were used as controls.After the incubation time, isolated CD4+ cells were added to the GM-CSF-cultured cells, pulsed with 300 nM OVA323-339 peptide and the supernatant was collected after 5 days.Due to the lack of appropriate guidelines a new pipeline for screening of natural products for immunotherapeutic potential was developed.We discuss which aspects should be considered to decide if a compound might serve in the development of novel anti-cancer drugs.Moreover, the reasons behind the decision for the techniques applied are presented and further methods suggested.Natural products that can be categorized to be potential candidates for immunotherapeutic drug development should fulfill certain criteria in first preclinical screenings.First, determination of cytotoxicity early on in drug development is essential as compounds should be non-toxic for target immune cells.In this study, primary immune cells were used because these are non-transformed and thus biologically close to physiological conditions in vivo.Secondly, screenings for favorable biological activities e.g. toxicity for cancer cells or modulation of autophagy have to be considered.Lastly, natural products chosen on the basis of the first two aspects should induce or enhance immune stimulation to prove promising for immunotherapeutic application.Compounds that fulfill all three criteria and are non-toxic to primary immune cells, show advantageous biological capacities for cancer treatment and are able to potentially stimulate the immune system can then be selected for more in-depth biochemical analysis and optimization.Determination of potential cytotoxicity is fundamental when starting the investigation of natural products without previous knowledge about their biological properties.As the primary aim of this screening pipeline is to find compounds that stimulate immune cells, toxicity against this cell type has to be analyzed.Here, the effects on primary cells derived from GM-CSF, Flt3L and M−CSF cultures were tested.These cultures generate cDCs, pDCs and macrophages in different amounts and thus provide cell types crucial to mount effective innate and adaptive immune responses against cancer.22,10,23,One type of cytotoxicity assay is the MTT assay which is simple and efficient.It’s a widely used and accepted assay which generates highly reproducible data.Alternatives are provided by e.g. 
one-step solutions such as MTS assays. Living cells convert MTT into formazan, which precipitates as violet crystal-like structures. These structures can be dissolved by acidified isopropanol and optical densities (ODs) are determined at 570 nm using a spectrophotometer. These ODs are proportional to the amount of metabolically active cells [24]. Before running actual experiments with natural products, optimal cell numbers for an OD of 0.75–1.25 have to be defined. Cell counts for GM-CSF- and M-CSF-cultured cells were determined for an OD of 1, while Flt3L-cultured cells were adjusted to an OD of 0.75, as the cellular yield of this culture is considerably lower than that of the GM-CSF and M-CSF culture systems. The optimal cell number can be calculated from the formula of a best-fit line. In this case, GM-CSF-cultured cells were seeded at a density of 8 × 10^4 cells/well, Flt3L-cultured cells at 4 × 10^5 cells/well and M-CSF-cultured cells at 6 × 10^4 cells/well. After establishment of optimal cell densities, cells are incubated with natural products at 0.1, 1 and 10 µM for 24 h to provide a range of conceivably effective concentrations. An incubation period of 24 h was chosen because periods of more than 12 h are often necessary for induction of effective immune stimulation, and incubation times should be kept similar across different assays. Incubation of the cells with the protein kinase inhibitor staurosporine, a potent apoptosis inducer, served as a positive control. As negative control, cells were treated with the same amount of solvent as in the 10 µM samples to determine possible toxic effects of the solvent alone. Normalization of the ODs of the samples to the negative control sample, followed by plotting, gave the percentage increases or decreases in metabolic activity. Here, cytotoxic as well as proliferative effects become visible; proliferative effects can indicate positive, desirable outcomes, as an increase in APC numbers supports potent induction of adaptive immune responses. Among the 240 natural products tested in this work, 41 compounds were found to induce a metabolic activity of 120% and above for at least one cell type and concentration. These compounds could also be interesting for further research with a focus on the induction of proliferation of specific cell subsets. However, natural products which are toxic for DCs and macrophages should be excluded according to the guidelines defined in section 3.1. Natural products are defined here as non-toxic if they show a viability rate of more than 80% at all concentrations and for all cell types. In this work, 240 natural products were tested. Of these, a minority of 14 exhibited viability rates below 80% for all three cell types, while 34 and 89 showed viability rates below 80% for only one or two cell types. 103 non-toxic compounds were selected for analysis in further immune stimulation assays on the basis of a viability rate of more than 80% at all concentrations and for all cell types. Besides the impact on viability, conclusions regarding structural similarities of the compounds can be drawn, especially where derivatives exhibit significant differences in cytotoxicity. Such effects might be due to small structural changes and help in drawing conclusions on responsible functional groups. On the other hand, effects of derivatives which show similar properties are probably based on a common basic structure and might be suitable for biochemical modification and optimization. Comparative analyses and the examination of other
biological activities are useful as the tested natural products could potentially show similar resistance mechanisms and thus comparisons can help identifying underlying modes of actions.Lambert et al. hypothesized that resistances evolve from interacting cells in cell communities as they appear in cancerous tissue as well as in bacterial populations.25,Thus, studies of bacterial resistance mechanisms might allow conclusions on those of cancers.Additionally, compounds can be identified which are interesting not only for immunotherapeutic anti-cancer treatment but also for other approaches in this field such as the induction of autophagy.26,This work was conducted as a cooperative effort together with other teams that investigated different types of activities of the compounds against e.g. tumor cells, pathogenic gram-negative bacteria and intracellular parasites.In more detail, toxicity against Jurkat and Ramos lymphoma cell lines, effects on damage response mechanisms, toxicity for Mycobacterium tuberculosis and various gram-negative bacteria strains such as Escherichia coli, toxicity for Toxoplasma gondii and effects on the induction of autophagy have been assessed.Out of 103 non-toxic natural products 41 compounds showed at least one additional favorable biological capacity.These compounds were chosen as ideal candidates for further screening and an examination in IL-12p40 assays detecting immune activation.IL-12 is a cytokine which has been extensively studied over the last years especially with regard to its implications in cancer immunotherapy.It consists of the two subunits p40 and p35 which get upregulated upon Toll-like receptor stimulation.27,TLRs are pattern recognition receptors that detect pathogen-associated molecular patterns.The IL-12 receptor is found on T and NK cells and gets upregulated upon antigen contact.On the cellular level, IL-12 signaling induces interferon γ production by these cell types and thus promotes innate and adaptive immune responses as well as T and NK cell proliferation.28,Recently it has been shown that an intratumoral crosstalk of DCs and T cells involving IL-12 and IFNγ is necessary for successful anti-cancer treatment with the immune checkpoint inhibitor anti-PD-1.29,Thus, IL-12p40 produced upon stimulation can be used as a reliable readout of immune stimulation.Usage of get40 knockin reporter mice which express IL-12p40 linked to GFP facilitates the detection of IL-12p40 production by tracking GFP via flow cytometry.17,GM-CSF-cultured cells are potent in IL-12p40 expression upon stimulation and were therefore used for immune activation assays.22,After staining and flow cytometric measurement events are gated on single, living cells and analyzed for the frequency of IL-12p40/GFP+ cells to detect culture-wide stimulatory effects.In Figure 6 an exemplary gating strategy of a CpG 2216-stimulated sample is shown.Besides stimulation with natural products in the same concentrations as in the MTT assays supplementary TLR stimuli were used.Addition of these TLR ligands ensures a weak immune activation and reflects the physiological situation in the presence of a tumor or during an infection.30,As TLR stimuli, either CpG 2216 or LPS are added to the cells simultaneously to the natural products.CpG 2216 is a synthetic class A oligonucleotide, that mimics e.g. 
bacterial DNA, in which unmethylated CpG motifs are enriched, and signals via TLR9 [31]. LPS, on the other hand, is a component of the outer cell membrane of gram-negative bacteria and is detected via TLR4 [32]. Both substances are strong stimuli of IL-12p40 secretion by DCs. To avoid a saturated response and allow for the detection of an additive or even synergistic effect of the natural compound to be tested, CpG 2216 and LPS have to be titrated on the target immune cells in advance. A range of 5 concentrations is advisable. Here, CpG 2216 was tested at 0–1 μM while LPS was used at 0–10 ng/ml, both in decadic steps as shown in Figure 7. While it cannot be excluded that IL-12p40 frequencies above 20% could be achieved with higher concentrations of either CpG 2216 or LPS, a final dose should be chosen at which a slight but detectable increase in IL-12p40/GFP expression can be seen that is significantly lower than at the highest concentrations of the stimulus used. In this study, for both stimuli the second highest concentration appeared to be most suitable. The suboptimal concentrations of CpG 2216 and LPS, respectively, are also applied as positive controls in the following assays, while untreated cells are used as negative control. Samples stimulated with natural products and TLR ligands are incubated overnight to meet the time requirements of an effective immune stimulation and to minimize the slight cytotoxic reactions which could be seen in the MTT assays after 24 h. Natural products have been defined here to be non-toxic if they show a viability of at least 80% for all cell types. Thus, only compounds that met this criterion are included in immune activation assays. Nevertheless, slight inhibiting effects of the natural products, indicated by a viability rate in the range of 80–99% in the MTT assays, might be compensated by shorter incubation times. Considering mean fluorescence intensities (MFI) can be beneficial in addition to examination of the frequency of IL-12p40/GFP+ cells. MFI values are calculated as the sum of all intensity values divided by the number of events. They display the mean expression strength of a marker in a measured sample [33]. Thereby, direct conclusions can be drawn on the levels of IL-12p40 production. In Figure 8 the MFI of TLR-stimulated samples as positive controls are presented in comparison to the unstimulated negative control. IL-12p40 assays were performed as three independent experiments on material from two mice per experiment, treated separately, to validate the reproducibility of the data and the stability of the compounds, taking variations due to the use of primary cells into account. From the IL-12p40 activation assays, natural products are selected which show at least a 2-fold increase in IL-12p40+ cell frequency or MFI compared to cells treated with CpG 2216 or LPS alone, to ensure an assortment of sufficiently immunostimulatory compounds only. Analysis of further markers which, depending on the cell type, change under activating conditions, such as MHC class II, CD11c and the surface activation marker CD86, can contribute additional valuable information. Different cell types could be more or less sensitive to natural products and thus need other effective concentrations for stimulation. In this work, 3 out of the 41 biologically active compounds showed immune activating potency with regard to IL-12p40 production. Of these compounds, one, manzamine J N-oxide, was immunostimulatory without an additional TLR stimulus, while the two cytochalasins cytochalasin D and 18-dehydroxycytochalasin H
needed additional TLR stimulation by CpG 2216.These 3 were chosen for further examination in T cell activation assays.IL-12p40-producing DCs are capable of inducing T cell activation and thus provide a linkage of innate and adaptive immunity.17,To test whether IL-12p40-inducing natural products are also competent in activating DCs for T cell priming, Ovalbumin-specific CD4+ cells from OT-II transgenic mice are used.OT-II mice are genetically modified in a way that they harbor CD4+ T cells expressing a transgenic TCR specific for an OVA-derived peptide when presented by APCs in the context of MHC class II.18,These mice consequently provide an advantageous source for antigen-specific T cells for T cell activation assays.For establishment of an assay protocol according to Reinhardt et al., optimal DC numbers have to be defined in the first step17.As a readout for efficient OTII CD4+ T cell activation IL-2 production was chosen and analyzed by ELISA.IL-2 is a cytokine which gets secreted by activated CD4+ T cells.34,Its application in cancer therapy has already been tested in clinical settings and due to side effects combination treatment of IL-2 with other immunostimulatory drugs is in discussion.35,For analysis of the T cell activating potential of natural products, a suboptimal amount of GM-CSF-cultured cells has to be chosen to allow detection of an increase in IL-2 secretion after stimulation with natural products.Similar to the IL-12p40 assays, natural products with or without additional TLR stimulation are added to the cells overnight before cocultivation with OTII CD4+ T cells.The suboptimal concentrations of CpG 2216 and LPS, respectively, are here applied again for positive controls, while untreated cells are used as negative control.An induction of IL-2 production after stimulation indicates a successful DC:T cell crosstalk and an initiation of an adaptive immune response which is fundamental for cancer immunotherapy.There might also be compounds which act agonistically with IL-2 and could help eliminating current side effects of treatment of patients with IL-2 alone.As for the IL-12p40 assays, T cell activation assays should also be performed at least three times with a minimum of two separately treated mice each.An overview over the structural and biological properties of the 3 most promising natural products according to the presented screening guidelines is provided in Figure 10.All 3 compounds are non-toxic to the tested target immune cells, possess beneficial bioactivities and show an at least 2-fold increase in IL-12p40 production with or without an additional TLR stimulus.Both cytochalasins, namely cytochalasin D and 18-dehydroxycytochalasin H, induce an elevation in IL-12p40 at 1 µM compound plus CpG 2216 and also exhibited successful T cell activation capacities.Manzamine J N-oxide displayed the highest number of additional bioactivities in the comparative analyses and a relatively strong increase in IL-12p40 production at 10 µM alone but was not able to induce a sufficient DC:T cell crosstalk.Natural products which have been identified to support immune activation of T cells by DCs can be further optimized and analyzed for modes of action as discussed below.In this work a new pipeline for first-line screening of natural products for immunotherapeutic drug development approaches was established with the aim to define guidelines that help in the straightforward identification of promising compounds.The screening pipeline was prototypically performed on 240 natural compounds 
derived from marine sponges and endophytic fungi using bone marrow-derived murine DCs and macrophages.We found 3 out of 240 natural products as potential candidates for further drug development.Following the guidelines, compounds should be non-toxic to target cells, possess beneficial anti-cancer activities and be immunostimulatory.For determination of immune activating capabilities of compounds, assays for IL-12p40 production and T cell activation were established.IL-12p40 can be seen as a marker of activation of innate immune cells such as DCs and macrophages.IL-12p40 expression was analyzed via flow cytometry using cells from cytokine fluorescence reporter mice.17,These mice but also other genetically modified mouse strains like the OTII TCR-transgenic mice used in the T cell activation assays can later be applied for in vivo testing of promising natural products.The direct step into the in vivo model is hereby optimized, as primary murine cells from the very same mouse model are used in all assays instead of cell lines or primary human cells.This is also where the advantage of phenotypic screenings over target-based approaches comes into effect: Instead of an early focus on a specific target as in target-based drug discovery which later on might be difficult to retrieve in physiological conditions, phenotypic screenings are more open with regard to the complexity of possible modes of action and consequently closer to in vivo settings.36,37,38,Phenotypic screening approaches have already been successfully applied to target deconvolution of natural products and are apparently more efficient in drug discovery processes in general.39,Investigation of specific IL-12p40-producing cell subsets from the above mentioned reporter mice by flow cytometry allows deeper insights into biological modes of action.For that, staining of cultures with specific cell markers e.g. CD11b, CD115 and CD135 for the discrimination of cDCs vs. 
macrophages and analysis via flow cytometry can be implemented.22,The presented experimental setup can moreover be complemented with enzyme-linked immunosorbent assays for IL-12p40.ELISAs provide the opportunity to corroborate the obtained results by an independent method from the same experimental sample and provide a direct definition of the amount of secreted IL-12p40.Additionally, ELISAs specific for the heterodimer IL-12p70 consisting of the two subforms p35 and p40 further deliver information about the amount of the bioactive form of the cytokine necessary for T cell activation.IL-12 as a key factor for the induction of adaptive immune responses has already been applied in murine and human cancer studies.40,29,Despite the observation of severe side effects in clinical trials IL-12 is still a focus of research and might be a candidate for combination therapies together with natural product-derived drugs.41,29,Assays using cocultivation of DCs and T cells can be deployed as an in vitro surrogate to determine the effectiveness of immune activation by natural products.IL-2 is a marker for activated T cells after antigen presentation by DCs.34,The TCR-transgenic mouse model used here provides a simple, advantageous tool for elaboration of an effective DC:T cell crosstalk upon antigen contact.18,Successful T cell activation should be confirmed by evaluation of T cell proliferation and measurement of the incorporation of thymidine or a proliferation dye.17,The same immune activation assays as shown in this work with GM-CSF cultures can be carried out additionally with other primary cell types such as Flt3L- and M−CSF−cultures to interrogate immunostimulatory effects of natural compounds on additional types of APCs.The 3 most promising natural products identified by this screening platform are the two cytochalasins cytochalasin D and 18-dehydroxycytochalasin H as well as the alkaloid manzamine J N-oxide.The latter one has shown the most encouraging results of all tested 103 compounds in the comparative analyses as it is active against tumor cell lines as well as bacteria.Despite its capacity to induce IL-12p40 it did not exert any T cell activation potential which led to the decision to exclude manzamine J N-oxide from further screenings.The compound was slightly toxic at 10 µM and IL-12p40 production could be explained by recognition of damage-associated patterns as a reaction to these toxic effects.42,43,One manzamine derivative from the screened library, manzamine F, exhibited no immunostimulatory effects while other manzamines have already been shown to possess antibacterial, antiviral and antiparasitic activities.44,Especially manzamine A and analogs are discussed as anti-cancer therapeutics.45,46,Biochemically optimized derivatives of manzamine J N-oxide might thus provide more favorable characteristics.Cytochalasins are fungal toxins, well established inhibitors of actin polymerization and have been extensively used in studies of the cytoskeleton.47,Different cytochalasins including cytochalasin D have been described to have antitumor capacities which probably can be explained by the rapid proliferation cycle of malignant cells.48–51,Actin plays essential roles in endocytic processes and it has been shown that cytochalasin D forms specific actin aggregates that associate with endosomal proteins.52,53,As in our experiments the strongest increase in IL-12p40 as well as in IL-2 was observed in combination with CpG 2216 it can be hypothesized that the two cytochalasins that were the only 
potentially actin-targeting agents present in the compound library induce a concentration-dependent retention of CpG 2216 in endocytic aggregates and consequently a longer exposition time of the stimulus in this TLR containing compartment.31,Thus, a next step would be to test this hypothesis by e.g. staining of the actin skeleton with phalloidin and to further investigate the immune activating potential of cytochalasins.Here, additional inhibitors of actin polymerization with different chemical scaffolds can be used to functionally verify the molecular mechanism.Beside their immunostimulatory characteristics cytochalasins might also prove promising chemotherapeutic agents especially in combination with other microfilament-directed drugs and have additionally already been shown to have a positive impact on anti-infectious immune responses.54,55,After evaluation of immunostimulatory properties, application of target finding approaches is the next step to generate insights into modes of action of attractive compounds.For target identification, indirect or direct approaches can be used.In indirect target analyses, effects on for example gene expression profiles of primary cells or of ex vivo isolated cells treated with natural products are compared to those of a known substance e.g. by DNA microarray analyses or next generation sequencing.56,57,58,Indirect target deconvolution consequently requires substances with similar phenotypes and gives the first hints on involved pathways but is not as straightforward as direct approaches.38,In direct approaches protein targets directly interacting with a compound can be identified.For instance, possible target proteins get overexpressed, purified and assigned to protein microarrays to examine compound-binding affinities.Still one challenge here is that many small molecules are hydrophobic and thus also bind unspecifically to non-target proteins.59,38,Computational approaches can additionally promote target identification processes by e.g. molecular modelling and machine learning approaches.60,61,There is a large number of possible methods and the appropriate one has to be chosen according to the structure and chemical characteristics of the natural product.Finally, if the target structure has been identified specific assays for validation can be designed which use for example existing target inhibitors.After determination of mechanistic properties underlying immune activation chemical optimization of compounds should be carried out.Such optimization approaches aim to improve natural products in regard of their physiological in vivo effects, their efficacy and accessibility via e.g. modification of functional groups.62,Of course, these derivatives would then have to be tested in the listed assays again with special focus on immune activating characteristics.Natural products which potentially have been chemically modified and which finally have been shown to be non-toxic to the mentioned immune cells, which exert beneficial anti-tumor activities and possess immunostimulatory capacities could then be tested in in vivo tumor mouse models e.g. 
xenograft models.Here, in vivo toxicity and dosing of a compound have to be assessed initially.Following this, effects on the tumor burden and size are analyzed to determine direct anti-cancer reactions besides examination of immunostimulatory responses such as infiltration of IL-12p40-producing DC and T cells into tumor microenvironments.63,29,The exact design of an in vivo experiment depends on the biological accessibility of the compound and the type of mouse model used and affects for example the route of drug administration.Natural products potent in tumor mouse models could subsequently be introduced to clinical trials and inspire immunotherapeutic drug development.The presented pipeline enables screening of natural products especially for academic institutions without access to industrial high throughput facilities.It allows the determination of promising compounds at relatively low financial burden and high success probability according to the presented results.Establishment of a collective compound library with natural products from various sources and with diverse chemical structures affords an unbiased analysis.Particularly when working in consortia which integrate groups working on the same natural products but with different foci, a database centralizing all gained information is highly recommended.64,Gathering researchers from various fields such as biology, chemistry, pharmacy and computer science promotes a fruitful environment for the investigation of distinct natural products from different points of view and is a promising approach for the detection of resistances early on in drug discovery.The allocation of the Nobel Prize 2018 for Medicine or Physiology for novel tumor immunotherapeutics and the increasing interest in this field of research highlight the importance of immune activation for anti-cancer treatments.The current knowledge supports the concept of the broad implication of immune activation assays in screening approaches.Immune activation is essential for fighting cancer in new impressive ways and the screening pipeline presented in this work aims to provide a tool for the detection of immunostimulatory natural products and other compounds early on in drug discovery. | The therapy of cancer continues to be a challenge aggravated by the evolution of resistance against current medications. As an alternative for the traditional tripartite treatment options of surgery, radiation and chemotherapy, immunotherapy is gaining increasing attention due to the opportunity of more targeted approaches. Promising targets are antigen-presenting cells which drive innate and adaptive immune responses. The discovery and emergence of new drugs and lead structures can be inspired by natural products which comprise many highly bioactive molecules. The development of new drugs based on natural products is hampered by the current lack of guidelines for screening these structures for immune activating compounds. In this work, we describe a phenotypic preclinical screening pipeline for first-line identification of promising natural products using the mouse as a model system. Favorable compounds are defined to be non-toxic to immune target cells, to show direct anti-tumor effects and to be immunostimulatory at the same time. The presented screening pipeline constitutes a useful tool and aims to integrate immune activation in experimental approaches early on in drug discovery. 
It supports the selection of natural products for later chemical optimization, direct application in in vivo mouse models and clinical trials and promotes the emergence of new innovative drugs for cancer treatment. |
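The selection logic described in the screening record above (non-toxicity defined as more than 80% viability at every concentration and in every culture type, followed by an at least 2-fold IL-12p40 increase over the TLR stimulus alone) can be summarized in a short script. The sketch below is illustrative only: compound names, OD readings and IL-12p40 frequencies are hypothetical placeholders rather than data from the study; only the 80% viability and 2-fold thresholds are taken from the text.

```python
# Minimal sketch of the two-step selection described above (illustrative data only).
import numpy as np

viability_threshold = 80.0   # % of DMSO control, required at all concentrations and cell types
il12_fold_threshold = 2.0    # required fold increase vs. TLR ligand alone

def viability_percent(od_samples, od_dmso_control):
    """Normalize MTT optical densities to the solvent (DMSO) control."""
    return 100.0 * np.asarray(od_samples) / od_dmso_control

# Hypothetical ODs: rows = concentrations (0.1, 1, 10 uM), cols = GM-CSF, Flt3L, M-CSF cultures
compounds = {
    "compound_A": np.array([[0.98, 1.01, 0.95], [0.93, 0.97, 0.90], [0.88, 0.92, 0.86]]),
    "compound_B": np.array([[0.99, 0.96, 0.97], [0.70, 0.65, 0.72], [0.40, 0.35, 0.45]]),
}
od_dmso = 1.00  # hypothetical solvent-control OD

non_toxic = [
    name for name, ods in compounds.items()
    if viability_percent(ods, od_dmso).min() > viability_threshold
]

# Hypothetical IL-12p40/GFP+ frequencies: (TLR stimulus alone, compound + stimulus)
il12 = {"compound_A": (5.0, 12.5)}
immunostimulatory = [
    name for name in non_toxic
    if name in il12 and il12[name][1] / il12[name][0] >= il12_fold_threshold
]
print(non_toxic, immunostimulatory)
```

In this toy example compound_A would pass both filters and be forwarded to the T cell activation assay, while compound_B would be discarded at the viability step.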
31,471 | Use of cleaner-burning biomass stoves and airway macrophage black carbon in Malawian women | Exposure to carbonaceous particulate matter from the burning of biomass fuels is associated with a range of adverse health effects, including chronic obstructive pulmonary disease in adults, and increased risk of pneumonia in infants and young children.Despite robust data from epidemiological studies, interventions aimed at reducing exposure to household air pollution have not produced the expected benefits to health.First, in a randomised controlled trial in Guatemala, the provision of a woodstove with a chimney did not reduce physician-diagnosed pneumonia in young children compared with open fire using controls, albeit severe physician-diagnosed pneumonia was reduced in a secondary analysis.Second, in an recent open cluster randomised study in Malawi, we found no difference in rates of pneumonia in young children from households in community clusters assigned to cleaner burning biomass-fuelled cookstoves compared with continuation of open fire cooking."Possible explanations for this finding include exposure to smoke from other sources including burning of rubbish, tobacco, and income generation activities and exposure from neighbours' cooking fires since cleaner cookstoves were issued only to households that had a resident child younger than 5 years.An important outstanding question is whether or not use of cleaner burning biomass-fuelled cookstoves reduces inhaled dose of PM in the group most exposed to HAP; i.e. women who do the family cooking.Although measuring long-term personal exposure to PM in adults by portable monitoring is not yet practical, we previously developed a method for assessing inhaled dose of carbonaceous PM by measuring the amount of carbon in airway macrophages obtained using sputum induction.In previous studies, we have found that AMBC is increased in biomass-exposed women in Gondar compared to UK women, and in UK children, found that higher AMBC is associated with impaired lung function.Although the kinetics of AMBC have not been fully defined, since AM are long-lived cells, AMBC is thought to reflect longer-term exposures.Since the cookstove used in the CAPS trial reduces PM emissions by about 75% compared to open fires in field tests, we hypothesised that AMBC would be reduced in women randomised to the intervention arm of the CAPS trial.We therefore sought to compare AMBC in women using the cleaner cookstove with those using a traditional open fire.We recruited these two groups from women nearing end of the CAPS trial who were also recruited into the Malawi Adult Lung Health Study.In order to give comparison with a non-biomass exposed population a small group of British women were also recruited.This cross-sectional study recruited women from Chikwawa, one of the two sites in rural Malawi used for the CAPS trial.Chikwawa is a district in southern Malawi with a surrounding population of approximately 360,000 people, the majority of whom cook over open fires.We approached women from households included in the CAPS trial who were part of a sub-study called the Adult Lung Health Study.ALHS was designed to address the prevalence and determinants of COPD in adults in rural Malawi and the extent to which exposure to HAP explains the rate of decline in lung function.Recruitment of women to the study was carried out over 10 days."Before the study, the communication team from the Malawi Liverpool Wellcome Trust's Clinical Research Programme visited potential 
participants to explain sputum induction and to identify potential participants at the village level. Twenty villages closest to the Chikwawa District Hospital that were broadly representative in structure and income of the wider CAPS trial were included. Those that expressed a wish to take part were transported to the Malawi Liverpool Wellcome Research Centre at Chikwawa District Hospital. On arrival, they were provided with group- and personal-level information prior to obtaining written consent. Women underwent spirometry and sputum induction in accordance with the American Thoracic Society/European Respiratory Society guidelines. Women were excluded if they were: (i) receiving treatment for active pulmonary tuberculosis, or (ii) HIV positive. The Malawi College of Medicine Research Ethics Committee and the Liverpool School of Tropical Medicine Research Ethics Committee (.40) approved the protocol, which was peer reviewed and published by The Lancet and is available in open access at www.capstudy.org. Trial registration ISRCTN 59448623. To compare AMBC in Malawian women with women exposed only to fossil fuel PM, we recruited a small group of healthy British women living in London and working at Queen Mary University of London. They were approached by the research team with written information and completed sputum inductions after written consent was obtained. The same team who did the sampling in Malawi carried out the sputum induction and processing in the UK. Ethical approval for UK controls was granted by the HRA NRES Centre Manchester REC committee (13/LO/0440). Sputum induction was done using a standardised technique with nebulised hypertonic saline. Induced sputum samples were placed on ice and transported to the University of Malawi, College of Medicine, Blantyre, for processing within 4 hours. In the UK, sputum induction was done onsite at the Royal London Hospital and samples were placed on ice and processed within 4 hours. Specimens from Malawi and the UK were processed identically. Briefly, mucolysis was first carried out by vortexing in the presence of 0.1% dithiothreitol, then cells were cytospun as previously described. Slides were imaged by light microscopy at ×100 magnification in oil, digital images were transferred to ImageJ software and analysed for AMBC as previously described. Briefly, digital images of 50 randomly selected AM were analysed for AMBC and data were expressed as the mean AMBC area per subject. Personal exposures of Malawian women to CO (mean ppm) and fine particulate matter (μg/m3) were measured over a 48-h period as part of the ALHS study using Aprovecho Indoor Air Pollution meters. Monitoring of CO and PM2.5 was done once the intervention cookstoves were in place and at least one year before assessment of AMBC; these measurements are indicative of average exposures over the study time-period. From our previous AMBC data, recruitment of 18 subjects in the traditional cookstove group and 18 in the intervention cookstove group gave 80% power to detect a 50% difference in mean AMBC at 5% significance. Data are summarised as medians and compared by Mann-Whitney U test. Age and lung function are summarised as means and compared by t-test. Statistical analysis was carried out using GraphPad Prism version 6. We recruited 58 women (Table 1). One potential participant was excluded as she had failed to disclose her HIV status prior to the sampling. Five healthy non-smoking women were recruited in the UK; all lived and worked in central London, cooked using gas, and commuted to work by public transport or by cycle. Age, FEV1, FVC
predicted and FEV1/FVC were similar between the two Malawian groups.Sputum induction was done in 58 Malawian women, and 5 British women.Aggregates of carbonaceous PM were visible in AMs from all Malawian women, with some cells exhibiting particularly high levels of carbon loading.In contrast, high carbon loading was not seen in AM from any of the British women.Induced sputum samples from 26 Malawian women had either too few AM to calculate AMBC or contained large sheets of bacteria that obscured induced AMs.The few AM that could be visualised under light microscopy in women with bacterial sheets contained phagocytosed bacteria.Since the upper airway does not contain AM, this observation suggests that the bacteria seen originated from the lower airway.There was no significant difference in the baseline characteristics of the participants that had samples suitable for AMBC analysis compared to those that were not.None of the samples from British women had bacterial sheets.Malawian women in the intervention group had lower AMBC compared to those in the control group; median 4.37 μm2 vs. 6.87 μm2, p = 0.028.There were no differences between intervention and controls groups in lung function, personal 48-h CO or PM2.5 exposure.Furthermore there were no differences in lung function, personal 48-h CO or PM2.5 exposure between women in the intervention and traditional groups when analyses included all women who took part in the study, i.e. not only those in whom AM carbon was determined.Malawian women had significantly higher AMBC compared with British women; 5.38 μm2 vs. 0.89 μm2, p = 0.0006.There was no significant correlation between AMBC and either the lung function variables or the exposure variables.This is the first study of the effect of a cleaner burning biomass-fuelled cookstove on inhaled dose of carbonaceous PM.We found that Malawian women using the cleaner cookstove had 36% lower AMBC compared with those who used an open fire for cooking.This is consistent with our previous bronchoalveolar lavage study in Malawi, which found that AMBC reflects the fuel used for heating and lighting in the home.For example, we found highest AMBC in subjects using tin lamps for lighting, and the lowest AMBC in those who used electric lighting.The present study suggests that, in individuals who are regularly exposed to high levels of PM emitted from the burning of biomass, use of a cleaner burning cookstove can reduce exposures which may in turn result in health benefits.Although the nature and extent of these benefits remain unclear, we previously reported that in vivo AM carbon particulate loading is inversely related to capacity to produce an effective antibacterial response and thus speculate that reduced AMBC loading from use of a cleaner cookstove reduces the risk of lower respiratory tract infection.In addition, since chronic exposure to biomass smoke is associated with lung function changes that are compatible with chronic obstructive airways disease, significant reductions in inhaled PM dose from cleaner cookstoves may attenuate the accelerated lung function decline thought to occur in this population of women.The observation of sheets of free bacteria in the induced sputum of a subset of Malawian women was an unexpected finding, since these women were free of respiratory symptoms.Indeed, we have never observed this phenomenon in our extensive experience analysing induced sputum samples from UK subjects.Although an upper airway origin for these bacteria cannot be excluded, a lower airway origin 
is most likely since; i) all women rinsed their mouth and blew their nose prior to induction, and ii) we observed AM phagocytosis of bacteria in subjects with bacterial sheets.Whether excessive free bacteria reflects, as previously discussed, PM-induced impairment of host immune defence is unclear, but this observation is consistent with the recent study by Rylance et al. who found increased abundance of Neisseria and Streptococcus in bronchoalveolar lavage samples from apparently healthy Malawian adults who were exposed to high concentrations of PM.The prevalence and mechanisms for persistent bacterial lower airway colonisation, and alterations of the airway microbiome in biomass-exposed population therefore merits further study.The marked overlap in AMBC between the two Malawian groups, is compatible with our observation reported in the CAPS trial paper that there are other major sources of exposure to carbonaceous PM in this population.For example, women regularly visit other homes, there is frequent open burning of rubbish in villages, and women walk alongside roads where traffic, albeit light, is dominated by diesel cars and trucks with considerable exhaust emissions.Exposure to these other sources may explain why no difference in 48-h personal CO/PM2.5 exposure was found.But the reason for the discrepancy between AMBC and short-term monitored exposure is unclear.We speculate that one explanation is that high peak exposures to biomass PM have a disproportionate effect on inhaled dose, and thus AMBC.Indeed, we previously observed a disproportionate effect of PM peaks on AMBC in a study of two groups of London commuters who were either cycling or walking to work.In this previous study, we found that, although overall 24-h monitored black carbon was not significantly different between the two groups, monitored black carbon during the commute was higher in cyclists, AMBC was higher in cyclists, and there was an association between monitored black carbon peaks and AMBC.A limitation of this study is that, due to the short time period available for sampling by the UK research team, the study is underpowered for other secondary outcome of potential interest.For example, in a group of 65 healthy young people living in Leicester exposed to fossil-fuel emission, we previously found a significant inverse correlation between FEV1 and AMBC.It would therefore be of interest to assess this inverse association in a larger, and adequately powered, study of young people whose AMBC is predominately from exposure to biomass smoke.In summary, we found direct evidence that use of cleaner burning biomass-fuelled cookstoves by women reduced the inhaled dose of carbonaceous PM.We also demonstrated new insights into the possibility of higher bacterial load in lower airway samples than previously thought, this is important when considering the implications for higher pneumonia incidence.We found that it is feasible to induce sputum samples in the field, and subsequently transport and process samples for advanced mechanistic studies.We therefore conclude that use of sputum induction in future studies will provide important insights into the development of respiratory disease in rural populations in low-income countries.The ALHS within which this work was nested was funded by:New Investigator Research Grant from the Medical Research Council.National Institute for Environmental Health Sciences grant,The ALHS was one of the research themes of CAPS funded by:Joint Global Health Trials Grant from the Medical Research 
Council, UK Department for International Development and Wellcome Trust. Additional support was provided by an MRC Partnership Grant "BREATHE-AFRICA" and the Malawi Liverpool Wellcome Trust Programme of Clinical Research. | Exposure to particulate matter (PM) from burning of biomass for cooking is associated with adverse health effects. It is unknown whether or not cleaner burning biomass-fuelled cookstoves reduce the amount of PM inhaled by women compared with traditional open fires. We sought to assess whether airway macrophage black carbon (AMBC) - a marker of inhaled dose of carbonaceous PM from biomass and fossil fuel combustion - is lower in Malawian women using a cleaner burning biomass-fuelled cookstove compared with those using open fires for cooking. AMBC was assessed in induced sputum samples using image analysis and personal exposure to carbon monoxide (CO) and PM were measured using Aprovecho Indoor Air Pollution meters. A fossil-fuel exposed group of UK women was also studied. Induced sputum samples were obtained from 57 women from which AMBC was determined in 31. Median AMBC was 6.87 μm2 (IQR 4.47–18.5) and 4.37 μm2 (IQR 2.57–7.38) in the open fire (n = 11) and cleaner burning cookstove groups (n = 20), respectively (p = 0.028). There was no difference in personal exposure to CO and PM between the two groups. UK women (n = 5) had lower AMBC (median 0.89 μm2, IQR 0.56–1.13) compared with both Malawi women using traditional cookstoves (p < 0.001) and those using cleaner cookstoves (p = 0.022). We conclude that use of a cleaner burning biomass-fuelled cookstove reduces inhaled PM dose in a way that is not necessarily reflected by personal exposure monitoring. |
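The group comparison reported in the record above (AMBC summarised as medians and compared by Mann-Whitney U test) can be reproduced in outline as follows. This is a minimal sketch assuming hypothetical per-subject AMBC values; the study reports only medians, IQRs and p-values, so the arrays below are placeholders chosen to mimic the published group sizes, not the actual measurements.

```python
# Minimal sketch of the non-parametric group comparison described above.
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical AMBC values (um^2); n = 11 open fire, n = 20 cleaner cookstove
ambc_open_fire = np.array([6.9, 5.1, 18.4, 7.2, 4.5, 6.8, 9.3, 12.0, 6.5, 4.9, 8.1])
ambc_cleaner = np.array([4.4, 2.6, 7.4, 3.9, 5.0, 2.9, 4.1, 6.2, 3.3, 4.8,
                         2.7, 5.6, 4.2, 3.7, 7.0, 4.6, 3.1, 5.2, 4.0, 2.8])

# Two-sided Mann-Whitney U test on the skewed, non-normally distributed AMBC values
stat, p_value = mannwhitneyu(ambc_open_fire, ambc_cleaner, alternative="two-sided")
print(f"median open fire = {np.median(ambc_open_fire):.2f} um^2, "
      f"median cleaner cookstove = {np.median(ambc_cleaner):.2f} um^2, p = {p_value:.3f}")
```

A rank-based test is the natural choice here because AMBC distributions are right-skewed, which is also why the record summarises the groups by median and IQR rather than by mean.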
31,472 | Materials Design on the Origin of Gap States in a High-κ/GaAs Interface | The computational study of electronic device materials has played a critical role in the introduction of new functional materials to meet device-scaling requirements. Conventional field-effect transistor (FET) devices are composed of a Si semiconducting channel, a silicide metal electrode source/drain, and a SiO2 insulating gate dielectric with a polysilicon metallic gate for the field control of the channel. Over the last decade, Si-based device materials have rapidly been replaced, with high dielectric constant (high-κ) oxides replacing SiO2 and a metal gate replacing the polysilicon gate. During this rapid process, materials design has played a critical role in guiding potential high-κ dielectric and metal gate material selection from numerous candidate materials. Recently, the US government started the Materials Genome Initiative (MGI) with the goal of accelerating the development and commercialization of new materials for advanced engineering system applications. Rather than following the conventional dependence on an empirical trial-and-error approach, the MGI is targeting the introduction of rational materials design during new functional materials development and is intended to achieve an accelerated materials development cycle. It is worthwhile to note that our research work has demonstrated the MGI concept through a commercial product development from a conceptual design within eight years. From this perspective, the next step in FET device scaling, the introduction of high-mobility channel materials to replace the Si channel, falls into the category of an MGI problem, and a predictive quantum simulation would provide critical guidance to reduce the development cycle by focusing on the most promising material systems selected by materials design. In this review, we discuss the origin of the trap states formed at the interface between a high-mobility channel and a high-κ gate oxide in order to place this FET device challenge within the MGI framework. The low interface quality between a high-κ oxide and a III-V channel remains one of the main obstacles in achieving ultra-high-speed semiconductor devices. Despite considerable work on improving the microscopic process, passivation, self-cleaning, and characterization, the limited understanding of the physical origins of the interface states, which trap the Fermi level at the high-κ oxide/GaAs interface, hinders a breakthrough in enhancing interface quality. This hindrance is largely due to the complicated interfacial bonding configurations, which vary based on the experimental growth conditions. At the high-κ/GaAs interface, Ga may induce various structural disorders such as Ga-oxides, Ga—Ga dimer bonding, Ga dangling bonds, and so forth, which might degrade the insulating properties of the interface. As-oxides can be reduced by (NH4)2S and NH4OH and, more recently, by certain atomic layer deposition (ALD) processes, in a self-cleaning effect. The impact of the residual imperfections on the distribution of the gap states therefore becomes critical. Experimentally, the density of interface states (Dit) is distributed in three parts, that is, P1, P2, and P3, as shown in Figure 1. The origin of such Dit distributions remains an active area of research. For example, there has been a long-term debate over whether Ga 3+ contributes to gap states or not. In addition to Ga 3+, Ga partial charge states, between 3+ and 1+, may occur as an unfixed number of oxygen atoms bond to Ga. These partial charge
states may coexist with Ga 3+ and 1+, and may possibly act as a source to generate gap states.Without completely understanding the impact of various Ga charge states on the distribution of the interface gap states, the mechanism of the gap states’ distribution is likely incomplete.Moreover, the presence of such oxidation states is likely a sign of a bonding disorder at the interface, which would result in further defect generation such as dangling bonds.Theoretically speaking, according to an empirical tight-binding analysis in bulk GaAs, as shown in Figure 1, Ga— dangling bonds generate conducting edge states.As—As dimers and Ga antisites produce mid-gap states—P2 in Figure 1 .Nevertheless, this model provides insufficient insight into the origin of gap states in the GaAs/oxide interface, because interfacial bonding is not considered.In fact, at a high-κ/GaAs interface, high-κ oxides such as HfO2 have ionic bonding without a fixed atomic coordination and fixed bond angles, leading to various charge states of interfacial Ga and As.Such charge states correspond to different types of unsaturated Ga bonds, generating gap states.Intrinsically, it is difficult to achieve an electronically abrupt high-κ/GaAs interface without any structural imperfections because: the GaAs surface is polar, and Ga is tri-valent and As is penta-valent.Each Ga—As bond has 0.75 electron from Ga and 1.25 electrons from As, which makes electron-counting rules, and Ga covalent bonding angle and direction rigid requirements hard to satisfy.This situation implies that the partially charged Ga bonding leads to the interface gap states.Recently, Robertson et al. proposed that-oriented GaAs/HfO2 interfaces be modeled by 1 × 1 unit cells of a GaAs and HfO2 surface, and concluded that an insulating interface is possibly obtained by substituting one O on an As site in the GaAs layer below.However, this interface model contains a large interfacial planar strain originating from the large lattice mismatch between HfO2 and GaAs, which makes the model interface problematic.On the unreconstructed GaAs surface terminated with Ga, the atoms form a square array.Each surface Ga has two partially occupied dangling bonds pointing out of the surface.Neither Hf nor O can directly passivate Ga— dangling bonds based on the Ga—O covalent bonding rigid requirement, unless a large planar strain is applied on the interface, which is not realistic in practice.An alternative approach is to systematically model the effect of oxygen incorporation at the interface .The detection of such bonding and the corresponding oxidation state from high-κ dielectric processes or surface treatments is well-documented experimentally .One example is the interface model labeled O10, shown in Figure 2, in which each oxygen layer has ten oxygen atoms and each Ga layer has four Ga atoms.Thus, five oxygen atoms are required on the HfO2 surface in order to form an insulating HfO2 surface.There are eight Ga— dangling bonds at the GaAs surface, which need to be passivated by three oxygen atoms.Consequently, interface O10 ends up with two excess oxygen atoms at the interface.These two excess oxygen atoms are removed to generate a neutral interface.Although the overall interface is charge neutral, the three oxygen atoms are not likely to assign 1.25 electrons uniformly to each Ga bond, so the situation inevitably creates gap states, trapping the Fermi level.In addition, another neutral interface could be built after removing two interfacial Ga atoms and five oxygen atoms 
from the interface, after which both the GaAs surface and the HfO2 would be insulating.However, this second interface would be unstable due to a much lower degree of Ga—O bonding than in the interface with full-oxygen termination.One important parameter influencing interface stability is the external oxygen chemical potential, which controls interfacial oxidation and essentially governs interface stability.For example, for the state-of-the-art ALD, interface stability strongly relies on the oxidant concentration.In order to explore external oxygen chemical potential impact at the interfacial oxygen concentration, one to seven oxygen atoms are removed from the interface sequentially , and the formation energy of the interfaces as a function of the oxygen chemical potential, constrained by O2 and HfO2, is applied.This exploration reveals that an interface with nine interfacial oxygen atoms has the lowest formation energy within a large oxygen chemical potential range.In addition, we find that the neutral charged interface with eight interfacial oxygen atoms is only stable within 21.6% of the whole growth condition range.Another neutral charged interface with two Ga and five O atoms at the interface is apparently not stable within the whole oxygen chemical potential range.On the contrary, the ~ 49.0% growth condition produces a non-neutral charged interface, O9.This observation explains why atomically and electronically abrupt interfaces are not likely to occur spontaneously.Based on their oxygen-rich conditions, O8, O9, and O10 interfaces are the most relevant in realistic interfaces where oxide formation is not carefully controlled, and are thus presented in Figure 2.For the O9 interface, the most stable model within a large oxygen chemical potential range, one of the Ga—As bonds between the second and third atomic layers of GaAs is broken, and the optimized interface structure spontaneously forms a Ga— dangling bond and two As—As dimer pairs two layers and one layer away, respectively, from the Ga—O interface bonds.A similar effect is found for the O10 interface.Interestingly, the O8 interface has the least “wrong bonding” due to the neutral charge at the interface.To further check Ga and As charge states, Bader charge calculations are performed, as shown in Figure 4 and.For the O10 model interface, interfacial Ga charge states are close to 3+.As oxygen is depleted from the O10 interface, more partial charge states of Ga begin to occur for the O9 and O8 models, as shown in Figure 4.For As charge states, As—As dimer charge states are found for the O9 and O10 interfaces, as shown in Figure 4.Figure 5 shows the bulk GaAs and interface density of states for the O8/Ga4, O9/Ga4, and O10/Ga4 models with a Heyd-Scuseria-Ernzerhof hybrid functional.The bulk DOS is represented by the Ga atoms far away from the interface.Within the bulk GaAs gap region, three interface states are found, causing Fermi-level pinning.In order to explore the origin of the gap states, the partial charge distribution within the gap region is displayed in the inset diagram of Figure 5.As O9/Ga4 shows the most stability within a large growth condition of oxygen chemical potential, we examine the charge states of interfacial Ga and As atoms.We label the four interfacial Ga atoms G1, G2, G3, and G4 with the charge states 1.64e, 1.63e, 2.23e, and 1.63e, respectively.The partial charge clearly indicates that the interface gap states P1, P2, and P3 arise from Ga partial oxidation, Ga— dangling bonds, and As—As dimers, 
respectively.Ga 3+ does not directly contribute to gap states.Essentially, interfacial oxygen attracts electrons from adjacent Ga atoms driven by the strong oxygen electronegativity.As a result, Ga charge states such as Ga 3+, Ga 1+ and some intermediate states form.As atoms located beneath interfacial Ga offer more electrons to their upper bonding partners than to their lower bonding neighbors.To compensate for the As charge loss, As—As dimers form.We find that the formation of Ga— dangling bonds and As—As dimers strongly depends on interfacial Ga charge states.Specifically, the big-charge-loss-induced formation of Ga 3+ drives an As big charge loss as well, and thus As forms As—As dimers.Experimentally, the coexistence of Ga 3+ and As—As dimers makes it difficult to identify the real contributor to gap states.Hypothetically, interfacial Ga charge states reduce to 1+, which donates less charge than 3+; since interfacial As, As—As dimers, and Ga— dangling bonds would also be removed, the gap states are expected to be eliminated accordingly .This finding sheds a bright light on efficient interface passivation mechanisms.The use of amorphous Si appears to be an effective passivation scheme.With the presence of a-Si at the interface, Si donates charges to interfacial Ga and transforms Ga partial oxidation states to 1+; meanwhile, As—As dimers obtain enough charge from the Si to break the dimer bonds .In addition to the interface thermal stability and the DOS, the injection barrier is another critical parameter of a semiconductor device , and is represented by the band offsets between the VB edges of the gate oxide and semiconductor.For device applications, the injection barrier requires greater than 1.0 eV in order to prevent electrons from entering the oxide CB, through which carriers can cross the gate oxide .The valence band offset is accurately predicted using the reference potential method .For the interface model O9, the VBOs are 1.81 eV.As a comparison, the experimental data show diverse values of VBO: 2.00 eV , 2.10 eV , and 2.85 eV .Robertson and Falabretti used the charge neutral level method without considering the observed interface bonding in order to predict a value of 3.00 eV .These findings confirm the promising injection barrier between HfO2 and GaAs.In summary, we apply density functional theory in order to find the origin of HfO2/GaAs interface gap states; that is, As—As dimers, Ga partial oxidation states, and Ga— dangling bonds inducing gap states.Ga 3+ does not directly generate any gap states.In addition, band offset results illustrate that the GaAs/HfO2 interface is a good candidate to prevent carrier injection through oxide thin film.A 10 Å vacuum region was used in order to avoid interactions between the top and bottom atoms in the periodic slab images.The bottom Ga atoms are passivated by pseudohydrogen to mimic As—bulk bonds.The top layer of HfO2 is initially terminated by ten oxygen atoms in the unit cell; half of these are removed in order to generate an insulating HfO2 surface.The applied passivation of the top and bottom surfaces guarantees that the top and bottom surface states are removed and all the calculated gap states originate from the interface.The GaAs slab is 27.16 Å thick with ten layers of Ga and nine layers of As, while the HfO2 slab is 13.42 Å thick with five layers of Hf and six layers of O.This slab thickness is big enough to reduce the quantum size effect .The conjugate gradient was used to perform structural optimization with only the bottom 
of the passivated GaAs layers fixed.The force accuracy reaches 0.01 eV·Å−1.The nature of interface electronic states depends strongly on the details of the atomic structure and bonding at the interface, signifying the importance of an accurate interface model.In the process of building a convincing theoretical model for an HfO2/GaAs interface, several issues must be considered: First, the lattice mismatch needs to be accommodated by the strain; second, the interface should be thermally stable and realistic; third, experimental information must be taken into account and must validate the theoretical model; fourth, the DFT limit of searching the global minimum energy for a relaxed interface needs to be reduced as much as possible.In this work, we consider the interface between cubic HfO2 and GaAs.Although HfO2 exists in cubic, tetragonal, and monoclinic phases, these all have very similar local ionic bonding characteristics, and the atomic structures are closely related .Cubic HfO2 is allowed to change into lower energy structures during the atomic structure optimization.Furthermore, the conclusions from the current analysis would be applicable to other phases, because the key requirement is valence satisfaction, which depends on the local bonding configuration rather than on long-range crystalline symmetry.We use a periodic slab model with Ga—O bonds at the interface, which is supported by experimental data .The-oriented HfO2 surface was compressed by ~ 0.3% and rotated counter-clockwise by 28.04°, that is, 28.04 = 45 − 16.96, to match the GaAs surface.The CG method was applied for the atomic structure optimization.Because the CG method would only lead to the local minimum, it is possible that the optimized structures are metastable interface structures.To investigate diverse interface structures formed between the GaAs and HfO2 surfaces, the HfO2 slab was moved in the x and xy directions relative to the GaAs slab, and the local energy minimum structures were obtained as a function of the relative shift.The interface formation energy was lowered by 1.5 eV at the shift of 1.0 Å along the xy direction, resulting in the lowest total energy.We use this minimum energy interface structure as the initial structure and perform a full relaxation.For high-level electronic structure calculations, HSE , which produces band gaps and equilibrium lattice parameters that are in much better agreement with experimental results than local density approximations or generalized gradient approximations , was applied in order to overcome the well-known DFT limitation : band gap underestimation.A 25% Hartree-Fock exchange potential was incorporated into a Perdew-Burke-Ernzerhof potential, resulting in a 1.40 eV gap being obtained; this result is close enough to the experimental value of 1.42 eV . | Given the demand for constantly scaling microelectronic devices to ever smaller dimensions, a SiO2 gate dielectric was substituted with a higher dielectric-constant material, Hf(Zr)O2, in order to minimize current leakage through dielectric thin film. However, upon interfacing with high dielectric constant (high-κ) dielectrics, the electron mobility in the conventional Si channel degrades due to Coulomb scattering, surface-roughness scattering, remote-phonon scattering, and dielectric-charge trapping. III-V and Ge are two promising candidates with superior mobility over Si. Nevertheless, Hf(Zr)O2/III-V(Ge) has much more complicated interface bonding than Si-based interfaces. 
Successful fabrication of a high-quality device critically depends on understanding and engineering the bonding configurations at Hf(Zr)O2/III-V(Ge) interfaces in order to design optimal device interfaces. An accurate atomic-level insight into the interface bonding and into the mechanism of interface gap state formation is therefore essential. Here, we use first-principles calculations to investigate the interface between HfO2 and GaAs. Our study shows that As−As dimer bonding, Ga partial oxidation (between 3+ and 1+), and Ga− dangling bonds constitute the major contributions to the gap states. These findings provide insightful guidance for optimum interface passivation. |
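As a companion to the formation-energy comparison of the O8, O9, and O10 interface models described in the text above, the short Python sketch below illustrates how interface stability can be ranked as a function of the oxygen chemical potential. All total energies, the reference energy, and the width of the allowed chemical-potential window are placeholder values chosen purely for illustration (the real numbers come from the DFT calculations and from the O2/HfO2 constraints discussed in the paper); this is not the authors' code.

```python
# Sketch: interface formation energy vs. oxygen chemical potential, used to
# compare O8/O9/O10-type HfO2/GaAs interface models. All energies below are
# PLACEHOLDER values, not results from the paper.
import numpy as np

E_O2 = -9.86                       # hypothetical DFT total energy of an O2 molecule (eV)
mu_O_rich = 0.5 * E_O2             # O-rich limit: equilibrium with molecular O2
delta_mu_range = np.linspace(-3.0, 0.0, 301)   # assumed width of the allowed mu_O window

# hypothetical slab total energies (eV) and interfacial O counts
models = {
    "O8":  {"E_slab": -712.4, "n_O": 8},
    "O9":  {"E_slab": -718.9, "n_O": 9},
    "O10": {"E_slab": -724.1, "n_O": 10},
}
E_reference = -660.0               # hypothetical energy of a common O-free reference slab (eV)

def formation_energy(name, delta_mu):
    """E_f(mu_O) = E_slab - E_ref - n_O * (mu_O_rich + delta_mu)."""
    m = models[name]
    return m["E_slab"] - E_reference - m["n_O"] * (mu_O_rich + delta_mu)

# find which model has the lowest formation energy at each chemical potential
stable = [min(models, key=lambda name: formation_energy(name, dmu))
          for dmu in delta_mu_range]

for name in models:
    share = stable.count(name) / len(stable)
    print(f"{name}: lowest formation energy over {share:.1%} of the sampled mu_O window")
```

With this toy parameterization the script simply reports the fraction of the sampled window over which each model is lowest in energy, mirroring how the text quotes stability ranges (e.g., 21.6% and ~49.0% of the growth-condition range) for the different interface terminations.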
31,473 | Sustained expression of CYPs and DNA adduct accumulation with continuous exposure to PCB126 and PCB153 through a new delivery method: Polymeric implants | Polychlorinated biphenyls, a group of chemicals with 209 individual congeners, are persistent organic pollutants as reflected in their ubiquitous prevalence in the environment and lipid solubility .Intentional commercial production of PCBs in the US ceased before 1980, but PCBs remain as minor by-products of dye and paint manufacture , and PCBs are still in use in closed systems, like some transformers, which, according to the Stockholm Convention, are scheduled to be removed by 2025.Humans continue to be exposed to low doses of PCBs because of their presence in the environment, their accumulation in the food chain, and accidental release from disposal sites .Despite the gradual decline, PCBs remain a major contaminant in human tissues .Exposure to PCBs in humans results in gastrointestinal effects, respiratory tract symptoms, mild liver toxicity, and effects on the skin and eyes such as chloracne, altered pigmentation, and eye irritation.Other potential adverse health effects of PCBs such as immunological disturbances, neurological defects and implications in cardiovascular and liver diseases have been described .The IARC recently concluded that there is sufficient evidence of a link between PCBs and melanoma, and limited evidence for PCBs and non-Hodgkin lymphoma and breast cancer sufficient to upgrade the entire class of compounds to Group 1 Human Carcinogens .Several studies have reported an increase in liver cancer among persons occupationally exposed to some PCB formulations .The PCBs’ carcinogenicity is clear in animal models.Using Sprague-Dawley male and female rats, a comprehensive chronic toxicity and carcinogenicity study analyzed the effects of four different commercial PCB mixtures at multiple dietary concentrations, ranging from 25 to 200 ppm, during 24 month exposure .They demonstrated more severe liver toxicity in females than in males and also found incidence of hepatocellular neoplasms was highly sex-dependent.Likewise, chronic carcinogenicity studies with individual PCB congeners, PCB118 and PCB126 found clear evidence of carcinogenicity in female Sprague-Dawley rats .Most of the rodent models of chemical carcinogenesis involve exposure to the test compound either via gavage, intraperitoneal injection, or diet.The bolus doses used are often orders of magnitude higher than the typical continuous environmental exposure through ingestion, inhalation and dermal uptake.These current animal exposure models are good for tumorigenesis studies to understand the kinetics of carcinogen uptake, distribution, metabolism, and elimination.They also provide great insights into the mechanism of action of carcinogens.However, the conditions do not mimic the human scenario where the exposure is generally to low doses for very long period of time.Moreover, application of highly toxic compounds via diet or gavage may put human animal handlers at risk of exposure to these compounds and are very stressful for the animals.We hypothesize that continuous exposure to PCB126 and PCB153 via novel subcutaneous polymeric implants, which provide controlled release for long duration, will provide a more natural and environmentally safe way of exposure which will lead to sustained overexpression of different enzymes and a steady accumulation of DNA damage that can potentiate PCB-induced liver and lung toxicities.PCB126 and PCB153 were selected 
because of their different mode of actions.PCB 126, a known AhR agonist and CYP1A1 inducer, is the most toxic PCB congener and has anti-estrogenic properties.PCB153 is a diortho-substituted congener with CYP2B1 inducing properties and is one of the most highly concentrated congeners found in human tissues .Moreover, in a study conducted by National Toxicology Program, there was clear evidence of carcinogenic activity of a binary mixture of PCB126 and PCB153 in female Sprague-Dawley rats .The objectives of this study were 3-fold: to correlate the release kinetics of PCB126 and PCB153 from polymeric implants in vitro and in vivo, as well as investigate tissue distribution; to determine if co-exposure to PCB126 and PCB153, which have distinct modes of action, could change the pharmacokinetic profiles in rats and affect the DNA damage accumulation; and to investigate the effects of continuous exposure to PCB on two target tissues, liver and lung, as a hypothesis-testing tool for deriving mechanistic insights for the observed results.PCB126 and PCB153 were synthesized and characterized as described previously .ɛ-Polycaprolactone mol.wt.80,000 was purchased from Sigma–Aldrich.Pluronic F68 was a gift from BASF Corp.Silastic tubing was purchased from Allied Biomedical.Bovine calf serum was from Hyclone.Dichloromethane, absolute ethanol and acetonitrile were from BDH chemicals, Pharmco-AAPER and Sigma–Aldrich, respectively.Materials used in the 32P-postlabeling assay for DNA adduct analysis were as described .All other chemicals were of analytical grade.All solvents used for the PCB analysis were pesticide grade and obtained from Fisher Scientific.All analytical PCB standards were purchased from AccuStandard, New Haven, CT, USA.Polymeric implants were prepared as described previously .Briefly, first, we prepared polycaprolactone:F68 blank implants by extrusion method .These blank implants were then coated with 20–25 layers of 10% of PCL dissolved in DCM containing 0.15% and 5.0% PCB126 and PCB153, respectively.The weight of the implant coatings was 60 mg each thus containing 90 μg PCB126 or 3 mg PCB153, respectively.The coatings were achieved by dipping the blank implants with intermittent drying .Sham implants were prepared in the absence of PCB.The implants were dried overnight and stored under argon until use.The release of PCB153 was measured as described for other compounds .Briefly, implants were separately placed in 20 ml amber vials containing 10 ml phosphate buffered saline and 10% bovine serum.The vials were incubated at 37 °C with constant agitation in a water bath.The medium was changed every 24 h. Media containing PCB153 were extracted using acetonitrile and chloroform.The release was measured spectrophotometrically and the concentration was calculated against the standard curve.The absorbance was measured directly at 209 nm.A calibration curve of PCB153 was generated by spiking PBS containing 10% bovine serum and 10% ethanol with known concentrations of the test PCB.Since the amount of PCB126 in each implant was very small, they were not assayed for PCB release in vitro.Five- to 6-week-old female Sprague-Dawley rats were purchased from Harlan Laboratories.All procedures were conducted after obtaining approval from the Institutional Animal Care and Use Committee and animals were maintained according to the IACUC guidelines.Animals were housed in cages and received 7001 Teklad 4% diet and water ad libitum.The diet was purchased as pellets from Harlan–Teklad, Inc. 
and stored at 4 °C till use.After a week of acclimation, animals were randomized into six groups.Under anesthesia sham or PCB implants, one per animal, were grafted into the subcutaneous cavity on the back of the animals and closed using a sterile 9-mm clip.Body weight and diet consumption were recorded twice a week.Four groups were euthanized after 15 d of treatment, additional two groups were euthanized after 6 and 45 d. Rats were euthanized by CO2 asphyxiation, blood and tissues were collected, snap frozen and stored at −80 °C until use.Blood was collected by cardiac puncture, plasma was separated and stored at −80 °C.Implants were also recovered from the animals, wiped, dried under vacuum and stored at −80 °C until analysis.Implants collected from animals after 6, 15, and 45 d were analyzed for residual PCBs as described previously .Briefly, implants were dried overnight under vacuum, weighed, and dissolved in 5 ml DCM, followed by the addition of 5 ml ethanol to completely dissolve the PCB.The solution was then diluted and analyzed using a UV spectrophotometer at 209 nm.The cumulative release from the implant was calculated by subtracting the residual amount from the initial amount.The daily release was calculated by dividing the total release by time in days.Total PCB released in vivo was compared with cumulative release in the same period in vitro.Extraction and clean-up of PCBs from serum, lung, liver and mammary tissue were performed after mixing with pre-extracted diatomaceous earth using a Dionex ASE 200 system as described previously .2,3,4,4′,5,6-Hexachlorobiphenyl was added to all samples as a surrogate standard.The concentrated extract was subjected to a sulfur clean-up following a published procedure .PCB126 and PCB153 were quantified with 2,2′,3,4,4′,5,6,6′-octachlorobiphenyl as internal standard using an Agilent 6890 N gas chromatograph with a 63Ni μ-ECD detector equipped with a SPB™-1 column.The oven temperature program was as follows: 100 °C, hold for 1 min, 5°/min from 100 to 250 °C, hold for 20 min, 5°/min to 280 °C, hold for 3 min.Injector and detector temperatures were 280 °C and 300 °C, respectively, with a carrier gas flow rate of 1 ml/min.The detector response for PCB126 and PCB153 was linear up to concentrations of 1.0 μg/ml.The limit of detection calculated from blank samples was 3.6 ng for PCB126 and 3.9 ng for PCB153.The recovery of the surrogate standard was 90 ± 11%.The recoveries of PCB126 and PCB153 from spiked blank samples were 84 ± 10% and 85 ± 9%, respectively.Corrections were made for recoveries lower than 100%.Mammary tissue samples were pooled from four animals for the PCB analysis.DNA was isolated from the liver and lung tissues by a solvent extraction procedure involving removal of RNA and proteins by digestion of isolated crude nuclei with RNases and proteinase K, respectively, followed by sequential extractions with phenol, phenol:Sevag and Sevag.DNA was recovered by precipitation with ethanol, washed, dissolved in water and its concentration and purity was estimated by spectrophotometry.DNA adduct profiles were determined by 32P-postlabeling/TLC systems to assess DNA damage comprised of polar adducts, including 8-oxodG and lipophilic adducts.Briefly, following enzymatic digestion of DNA, adducts were enriched by treatment with nuclease P1, 32P-labeled and resolved by 2-D PEI-cellulose TLC by development with 1 M formic acid in the presence of increasing concentration of sodium phosphate and isopropanol:4 M ammonium hydroxide, 1:1.8-OxodGp was 
enriched by PEI-cellulose TLC, 32P-labeled, and resolved by 2-directional TLC as described .Normal nucleotides were labeled in parallel with adducts and separated by 1-D PEI-cellulose TLC.Adduct and normal nucleotide radioactivity was measured by a Packard InstantImager.Adduct levels were calculated as relative adduct labeling and expressed as adducts per 109 or 106 nucleotides.The level of thiobarbituric acid reactive substances, expressed in terms of malondialdehyde, was used as index of liver and serum lipid peroxidation and performed according to the methods described previously .Briefly, liver homogenate or the serum lipid fraction was incubated with 0.8% or 0.67%, respectively of thiobarbituric acid at 95 °C for 60 min and the red pigment produced in the reaction was extracted using a n-butanol–pyridine mixture or n-butanol.The pigment concentration was determined spectrophotometrically at 535 nm for liver samples and spectrofluorometrically at excitation 515 nm and emission 553 nm for serum samples.The total serum antioxidant capacity was determined using the Ferric Reducing Ability of Plasma assay .This assay measures the reduction of ferric-tripyridyltriazine to the ferrous form as putative index of antioxidant or reducing potential in the sample.Aqueous solutions of known Fe2+ concentration were used for calibration and results were expressed as μmol/l Fe2+ equivalents.Paraoxon and phenylacetate were used as two individual substrates in PON1 activity measurements as described .Briefly, the enzyme activities were determined spectrophotometrically following the initial rate of substrate hydrolysis to p-nitrophenol or phenol, respectively.The units of enzyme activity were calculated from the molar extinction coefficients, E412 and E270, respectively, and expressed as U/ml serum or U/mg protein in liver homogenate."Total RNA from tissue samples was isolated using an RNeasy Mini KitTm following the manufacturer's instructions.An on-column DNase digestion was performed to further remove residue genomic DNA contaminants.The quantity and quality of RNA was determined by the absorbance at 260 nm and ratio between A260 and A280 in 10 mM Tris buffer at pH 7.0."For each sample, 2.5 μg total RNA was reverse-transcribed into cDNA in a 25 μl reaction volume using the High Capacity RT KitTm following the manufacturer's instructions. "qPCR was performed as described earlier in a 20-μl reaction with 4 ng of cDNA template and 900 nM primer using a SYBR Green Master Mix kit from Applied Biosystems Inc. 
according to the manufacture's protocol.The primers used were taken from previous publications as indicated in supplementary materials and synthesized by Integrated DNA Technologies Inc.The relative gene expression levels were calculated using the relative standard curve method.The target gene expression levels were adjusted to the house keeping gene, ribosomal protein L13a.Final results are fold change derived by dividing the expression level of each gene in the treatment groups by that in the control group.Western-blot analysis was carried out as described .Briefly, microsomal proteins were resolved on a 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis and transferred onto a PVDF membrane.After blocking with 5% non-fat dry milk in blocking solution, the membrane was incubated with CYP1A1, CYP1A2 and CYP1B1 antibody overnight at 4 °C.The membrane was then incubated with the appropriate horseradish peroxidase-conjugated secondary antibody, and the immuno-reactive bands were visualized using the Pierce chemiluminescent substrate kit.To ensure equal protein loading, each membrane was stripped and re-probed with β-actin antibody to normalize for differences in protein loading."Data for DNA adducts represent average ± SD of 3–5 replicates and statistical analyses were performed using the Student's t-test. "Data for gene expression, PON activity and TBARS are presented as means ± SD with statistical analysis performed by Student's t-test.All calculations were performed with GraphPad Prizm statistical software.In each assay a p value of <0.05 was considered to be statistically significant.The polymeric implants of PCB126 and PCB153 were prepared by the coating method with a PCB load of 0.15% and 5%, respectively.When 1.5-cm PCB153 implants were shaken in PBS containing 10% serum to simulate in vivo extracellular fluid conditions, a gradual release of PCB153 was observed.The PCB153 release varied between 75 and 90 μg/d for the first 7 d reaching a cumulative release of nearly 23% in the first week.The release gradually declined subsequently, reaching a cumulative release of about 45% at the end of 3 weeks.Due to the low load of PCB126, we were unable to measure its in vitro release kinetics.The in vivo releases of PCB126 and PCB153 were assessed by measuring the residual PCBs in the implants recovered from the animals at the time of euthanasia.At the end of the 6, 15 and 45 d, PCB126 was released from the polymeric implants by 26%, 36% and 49%, respectively.PCB153 was released at nearly 50% higher rates, i.e., 40%, 53%, and 73%, respectively.These data show that both PCB126 and PCB153 continued to be released for more than 6 weeks, with over 25–50% of the compound still present in the implants at the end of the study.No physical change to the polymeric implants or any sign of toxicity at the site of implantation in the animals was observed.A comparison of the in vitro and in vivo release kinetics suggest that almost twice as much PCB153 was released from the implants in vivo compared with the in vitro release.Based on the release kinetics in vivo, the average daily doses of PCB126 and PCB153 over 45 d of treatment were 0.98 ± 0.10 and 48.6 ± 3.0 μg, respectively.Average body weights of animals treated with PCB126 or PCB153 for 15 d were not statistically different when compared with sham-treated animals.However, a slight reduction in body weight was observed when animals were treated with a combination of PCB126 and PCB153.There was no treatment-related effect on food 
consumption.Compared to sham controls, liver weights of animals treated with PCB126 alone or in combination with PCB153 increased significantly.However, treatment with PCB153 alone did not affect liver weights.The lung weights were non-significantly elevated by treatment with PCB126 alone or in combination with PCB153.In contrast, PCB126, with or without PCB153 co-exposure, caused a dose- and time-dependent reduction in thymus weight.In fact, the relative thymus weight was reduced by three quarters after 45 d treatment with a combination of PCB126 and PCB153.Some reduction in the weights of ovary and mammary tissues were also observed in these treatment groups although not significant.PCB153 alone did not influence the lung or thymus weights, but caused a small non-significant increase in the relative weights of ovary and mammary.Kidney weights were essentially unaffected by any of the PCB treatments.The levels of PCB126 and PCB153 were determined in different tissues and serum by GC analysis.No detectable levels of PCB126 were found in the serum and lung.However, PCB126 accumulated in the liver after 15 d of the PCB126 alone treatment.The levels of PCB126 in the liver were somewhat higher when PCB126 was combined with PCB153.The liver PCB126 levels increased from 6 d of treatment to 15 d but then remained unchanged up to 45 d of treatment.Compared to the livers, nearly 10-fold lower levels of PCB126 were detected in the mammary tissue.PCB153 was readily detected in the serum and tissues analyzed since the dose of PCB153 administered was about 50 times higher than that of PCB126.In serum PCB153 was present at the level of 150 to 220 ng/g serum after exposure to PCB153 alone or in combination with PCB126.Lung and liver showed similar levels of PCB153 accumulation and the levels were essentially sustained from 6 d to 45 d of the treatment.Liver levels were nearly doubled when PCB153 was given in combination with PCB126.The maximum accumulation of PCB153 was found in the mammary tissues, particularly at the early time point.After 15 d of exposure PCB153 treatment alone resulted in 7150 ng PCB153/g tissue while levels were much lower when PCB153 was combined with PCB126.The levels thereafter remained unchanged until 45 d of the treatment in this treatment group.DNA adduct analysis was performed using the highly sensitive 32P-postlabeling technique.A wide array of DNA adducts, ranging from highly polar to highly lipophilic, were detected in the liver and lung of PCB126- and PCB153-treated animals.Based on the chromatographic properties, the adducts were grouped into P-1, P-2, PL-1, and PL-2 subgroups, where P stands for polarity and PL is used to designate adducts that have both polar and lipophilic properties based on requirement of salt concentration during the chromatography .No qualitative difference in adduct profiles among the various treatment groups were observed.However, significant quantitative differences occurred for some subgroups of adducts.Time and treatment effects on mean adduct levels of the different adduct groups in liver tissue are depicted in Fig. 
2B and Supplemental Figure 1A.Liver samples showed high basal levels of P-1 adducts that did not change with PCB153 and/or PCB126 treatments.P1 adduct levels after 6 d of co-exposure were almost identical to sham treatment and did not change even after 45 d of treatment.The known oxidative lesion, 8-oxodG, which constitutes part of the P-2 subgroup, showed significant increases in DNA adducts by PCB126 treatment.PCB153 also produced a modest but non-significant increase in adducts.Co-treatment produced a doubling in 8-oxo-dG levels which remained below the level of significance.The adduct levels did not change when different time points were tested in co-treatment.Highly lipophilic PL-1 adducts increased with the treatment of PCB153 compared to sham, however, the increase was not significant.The increase in the adduct levels was similar with PCB126.Non-significant increases were also seen in PL-2 adducts levels with either treatment which seemed to be higher with PCB126 compared to PCB153 and at the earlier time point with the co-treatment than at the later time point.These different groups of adducts were also analyzed in the lungs of PCB-treated animals.Irrespective of the adduct type, the baseline levels were almost 3-fold lower in the lung samples compared to liver.P1 adducts were almost unchanged compared to sham by the treatment with either PCB153 or PCB126 alone.However, adduct levels increased when PCB126 and PCB153 were co-administered.In the co-treatment group, adduct levels peaked after 15 d and remained essentially unaltered after 45 d.As seen for the liver samples, 8-oxodG increased significantly following treatment with PCB126.The adduct levels in co-treatment groups were maximum after 6 d and decreased gradually as time of treatment increased.There was no difference in the PL-1 adduct levels by any of the treatment but the level was somewhat decreased with time in the co-treatment group.Although in the liver PL-2 adducts seemed increased in groups exposed to PCB126 either alone or in combination with PCB153, this effect was not significant.Interestingly, the increased PL-2 adduct levels in co-treated groups remained the same even after 45 d of treatment.Liver and serum TBARS levels, expressed in terms for MDA, and serum antioxidant capacity measured as the ferric-reducing ability were measured in serum and liver of the various treatment groups.There was no effect of any treatment on total serum antioxidant capacity.Similarly, PCB126 or PCB153 alone or in combination did not increase liver TBARS values.No significant effect of PCBs on serum TBARS was observed either initially, although non-significant increases after 15 and 45 d exposure to PCB126 together with PCB153 were visible.Two substrates, paraoxon and phenylacetate were used to measure the PON1 activity in serum and liver tissues of the various treatment groups.PON1 activity in the hepatic tissue of control animals was about 2.4 and 4.8 U/mg protein with paraoxon and phenylacetate substrate, respectively.Hepatic levels for PON1 activity were unaffected by PCB153 treatment.However, the PON1 activities increased 3–5-fold with PCB126 alone and in combination with PCB153.This effect was evident during the entire 45 d of treatment.Serum PON1 paraoxonase and arylesterase activity in sham groups was found to be 370 U/ml and 270 U/ml, respectively, and remained unchanged following treatment with PCB153.However, treatment with PCB126 alone or in combination with PCB153 significantly increased the PON1 activity in serum.The 
effect was even more pronounced with paraoxon as substrate.Hepatic gene expression analysis of PON1, PON2, and PON3 by qRT-PCR showed no difference in sham and PCB153-treated animals.However, an almost 1.5-fold increase in the mRNA levels of PON1 was observed in animals treated with PCB126 alone and in combination with PCB153 groups.Similarly, PCB126 alone and in combination also increased the amount of PON3 mRNA by almost 2-fold.In contrast, PON2 and APOA1 mRNA expressions were unaffected by the PCB treatments.A significant increase in the mRNA levels of AhR was observed after exposure to PCB126 alone and in combination with PCB153.Cytochrome P450 levels were determined by mRNA analysis and also tested at protein level by western blot analysis.As expected, CYP1A1 mRNA was upregulated very substantially by PCB126 treatment.A similar effect was observed when PCB126 was co-delivered with PCB153.This overexpression was sustained during the entire course of the treatment.In these treatment groups we also observed the same trend with more than 200-fold increases in CYP1A1 protein levels and similar increases in CYP1A2 and CYP1B1 protein levels.CYP2B1/2 mRNA was elevated by PCB153 treatment.PCB126 did not alter the CYP2B1/2 mRNA expression but interestingly, a significant increase of the PCB153-induced levels of CYP2B1/2 gene transcription was noted after co-exposure to PCB126.The effect was maximal after 6 d of exposure to PCB153 given in combination with PCB126, but decreased gradually after 15 and 45 d of treatment.Dose response is a well-established phenomenon, but studies almost always use bolus doses of test compounds to determine their DNA-damaging and carcinogenic potential.Such bolus doses, however, are far from the normal scenario in which humans are generally exposed to very low doses of toxicants for long durations.The polymeric implant-delivery system provides continuous exposure to low doses.To assess the effect of such continuous, low-dose exposure, PCBs were delivered via the implant route to rats.In published studies bolus doses were administered by daily gavage or multiple injections .The implant delivery method used in this study slowly but continuously released over 45 d an average daily dose of 49 μg PCB153 and 0.98 μg/d PCB126, equivalent to a daily dose of about 200 μg PCB153/kg and 5 μg PCB126/kg from a single polymeric implant.Thus, the implants delivered comparable daily doses but at a continuous rate and without the need of daily handling of the animals for gavage or i.p. 
injections. This is less stressful for the animals and also lowers the risk of accidental human contact with the toxic test compounds. The organ weights were differentially affected depending on the type of PCB used. Liver weights were significantly increased by PCB126 alone or in combination with PCB153, and a similar, non-significant, effect was also observed in lung tissues. Elevated liver weight and reduction in thymus weight are well-known consequences of AhR-mediated effects, such as an increase in the endoplasmic reticulum, and PCB126 is the most potent AhR agonist of all 209 PCB congeners. The release is largely sustained in the coated polymeric implants used in this study. The in vitro release of PCB153 from the coated polymeric implants was fairly constant during the first 7–8 d and then gradually declined. We have previously described that during in vivo release, extracellular fluid from the site of implantation enters the polymeric matrix, dissolving the compound, which then diffuses out into the surrounding tissue. In this study the average in vivo release of PCB153 was almost 1.6-fold higher than the average in vitro release over the first 15 d. The higher release in rats presumably results from differences in ‘sink’ conditions, where the large volume of circulating body fluid may cause higher releases. PCBs released from implants are expected to enter the systemic circulation and to be distributed to different tissues. In this study, PCB153 accumulation in the liver and lung was similar at all time intervals, but much higher in mammary tissue, most likely due to the very high fat content of mammary tissue. Even PCB126, which was below the detection limit in the lung and serum, was present in measurable amounts in mammary tissue. It is intriguing, however, that despite the 50-fold higher average daily dose of PCB153 compared to PCB126, these PCB congeners were present at essentially the same levels in the liver during the entire course of the study. The higher hepatic accumulation of PCB126 compared with PCB153 may reflect their respective binding sites in the liver. PCB126 levels were 10-fold higher in the liver than in mammary tissue, supporting the concept of hepatic sequestration of PCB126 found in previous studies with mice. The protein responsible for the hepatic sequestration of PCB126 has been shown to be CYP1A2, an AhR-inducible gene. In contrast, PCB153 levels in mammary tissue on day 15 were lower when PCB126 was co-administered, while liver levels were higher than in the PCB153-only group. To our knowledge, the mechanism of this increase in the liver-to-fat ratio of the very lipophilic PCB153 during co-exposure with AhR agonists has not been elucidated; however, a role for the induction of metabolism or of metabolic capabilities cannot be completely ruled out. Studies on hepatic sequestration of PCB153 with CYP1A2 knock-out and wild-type mice did not show any difference in tissue levels, unlike PCB126, which was higher in the liver of wild-type mice only. Delivery of PCBs with the implant system resulted in near steady-state DNA adduct accumulation for most types of adducts tested in this study, even after 6 weeks of exposure. This is consistent with steady DNA adduct accumulation following long-term exposure of A/J mice to cigarette smoke and multiple low-dose administration of benzopyrene. One type of adduct, the P-2 group, which is mostly composed of 8-oxo-dG, showed significantly increased levels in the liver and lung of animals treated with PCB126 alone after 15 d of exposure. Co-exposure to
PCB126 and PCB153 resulted in an almost doubling of P-2 adducts in the liver and lung of treated animals compared to sham controls on day 6.Although this increase was not statistically significant, it was sustained over the whole treatment period in the liver, while it decreased in the lung.To the best of our knowledge this is the first report of significantly increased 8-oxodG levels in PCB126-exposed rats.Other types of adducts, particularly PL-1, seemed elevated by PCB126 and PCB126 plus 153 treatment, but none reached statistical significance.Similarly, other investigators reported increased M1dG adducts in rat livers co-exposed daily to 300 ng/kg PCB126 and 3000 μg/kg PCB153 for a year .It is not clear how PCB126 and other toxicants increase oxidative stress, but uncoupling of idle high levels of CYPs, changes in the metabolic pathway of endogenous compounds like estradiol leading to redox- cycling estradiol quinones, and others have been suggested as the possible mechanisms .The elevated 8-oxo-dG levels suggest increased oxidative stress in the liver and lung of PCB-exposed rats.However, we did not observe a significant increase in liver and serum TBARS values.Total serum antioxidant capacity was not significantly reduced, although it was higher when serum TBARS was lower and vice versa.One antioxidant parameter in tissues and serum are paraoxonases.PON1 prevents lipid peroxidation and we observed a significant increase in the levels of PON1 activity in serum and livers of animals exposed to PCB126 alone or in combination.Thus elevated PON1 may have suppressed increased oxidative damage to lipids as measured in TBARS without being equally efficient in preventing oxidative damage of DNA."PON1, a member of a three gene family which includes PON2 and PON3, is the body's first line of defense against exposure to certain toxicants like paraoxon .Similar to published report , PCB126-treated groups showed significantly increased PON1 activities in serum and livers while PCB153 failed to increase PON1 activity.The PCB126-induced increase in PON1 activity was sustained and not time-dependent from day 6 to day 45, reflecting the continuously sustained exposure to PCB126.Similarly, proteome analysis showed a large increase in serum paraoxonase/arylesterase levels in male mice livers after exposure to PCB126, a high dose of PCB153, or a combination of the two PCBs .To analyze the cause of the increase in PON1 activity and to compare of effects of PCB126 and PCB153 on CYPs with those on the members of the PON family and others, we measured PON1, PON2 and PON3, CYP1A1, CYP2B1/2, AhR and APOA1 mRNA levels in livers, and CYP1A1, 1A2 and 1B1 protein levels in livers and lungs.PCB153, a CAR agonist, only affected the expression of CYP2B, an effect that was significant only with coexposure to PCB126.This is consistent with our observation that co-exposure with PCB126 increased PCB153 accumulation in the livers, possibly through a sequestration mechanism like the one described for PCB126 and CYP1A2.This finding is also in agreement with the combined effect observed with PCB126 and PCB153 for cancer induction and M1dG adduct formation and our finding may provide the explanation for this combined effect, underscoring the need for more mixture experiments to achieve realistic risk assessments.PCB126 is the most potent AhR agonist among all PCB congeners tested .Consistent with published data, CYP1A1 was nearly 1000-fold upregulated at all time-points and a large increase in CYP1A1/2 protein was seen in the 
livers and also in the lungs of PCB126-exposed rats.PON1 transcription was nearly doubled in the livers of all PCB126 groups.The PON1 gene promoter has XRE-like sequences, suggesting AhR activation of PON1 transcription .However, the mechanism and the ligand-specificity are not fully understood .Surprisingly, PCB126 had an even stronger inducing effect on PON3 transcription.PCB126 also significantly increased the mRNA level of the AhR in the liver.Sustained AhR activation is believed to be the mechanism of the many negative health effects of dioxin like compounds .Implant studies with lower doses of PCB126 or other AhR agonists may be a useful tool to further explore the long term consequences of low dose exposure to these compounds via food, air or water and possible intervention strategies with antagonists like resveratrol.In summary, the use of polycaprolactone implants is well established for the delivery of chemopreventives and contraceptives .However, their use in the delivery of persistent environmental pollutants like PCBs is novel.Data presented in this manuscript are the first demonstrating steady accumulation of DNA adducts in potential target tissues, liver and lung, with continuous exposure to PCB126.This study also demonstrates for the first time that low-dose exposure to PCBs leads to sustained overexpression of known CYP1A1/2, and new, PON1/3, marker genes.Finally, these studies show that the combined genotoxicity and carcinogenicity of PCB126 with PCB153 may lie in the increased sequestration of PCB153 in the target organs.Thus, polymeric implants provide an efficient, low cost, and safe delivery mechanism to explore the carcinogenicity of compounds and mixtures that we encounter in everyday life.The authors declare no conflict of interest.The Transparency document associated with this article can be found in the online version. | A new delivery method via polymeric implants was used for continuous exposure to PCBs. Female Sprague-Dawley rats received subcutaneous polymeric implants containing PCB126 (0.15% load), PCB153 (5% load), or both, for up to 45 d and release kinetics and tissue distribution were measured. PCB153 tissue levels on day 15 were readily detected in lung, liver, mammary and serum, with highest levels in the mammary tissue. PCB126 was detected only in liver and mammary tissues. However, a completely different pharmacokinetics was observed on co-exposure of PCB153 and PCB126, with a 1.8-fold higher levels of PCB153 in the liver whereas a 1.7-fold lower levels in the mammary tissue. PCB126 and PCB153 caused an increase in expression of key PCB-inducible enzymes, CYP 1A1/2 and 2B1/2, respectively. Serum and liver activities of the antioxidant enzymes, PON1 and PON3, and AhR transcription were also significantly increased by PCB126. 32P-postlabeling for polar and lipophilic DNA-adducts showed significant quantitative differences: PCB126 increased 8-oxodG, an oxidative DNA lesion, in liver and lung tissues. Adduct levels in the liver remained upregulated up to 45 d, while some lung DNA adducts declined. This is the first demonstration that continuous low-dose exposure to PCBs via implants can produce sustained tissue levels leading to the accumulation of DNA-adducts in target tissue and induction of indicator enzymes. Collectively, these data demonstrate that this exposure model is a promising tool for long-term exposure studies. |
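The release-kinetics bookkeeping described in the implant study above (cumulative in vivo release inferred from the residual PCB left in recovered implants, then divided by the exposure time to give an average daily dose) can be summarized in a few lines of Python. The initial loads (90 µg PCB126 and 3 mg PCB153 per implant) and the cumulative released fractions at 6, 15, and 45 d are taken from the text; the function name and tabular layout are illustrative assumptions, not the authors' analysis code.

```python
# Sketch of the cumulative-release and average-daily-dose calculation described above.
initial_load_ug = {"PCB126": 90.0, "PCB153": 3000.0}   # per implant, from the 0.15% / 5% coatings

# fraction of the initial load released in vivo by each euthanasia time point (days)
released_fraction = {
    "PCB126": {6: 0.26, 15: 0.36, 45: 0.49},
    "PCB153": {6: 0.40, 15: 0.53, 45: 0.73},
}

def average_daily_dose(congener: str, day: int) -> float:
    """Average release rate (ug/day) between implantation and the given time point."""
    total_released = initial_load_ug[congener] * released_fraction[congener][day]
    return total_released / day

for congener in initial_load_ug:
    for day in (6, 15, 45):
        print(f"{congener}, day {day}: {average_daily_dose(congener, day):.2f} ug/day")
```

For the 45-d time point this calculation gives roughly 0.98 µg d−1 for PCB126 and 48.7 µg d−1 for PCB153, consistent with the average daily doses of 0.98 ± 0.10 and 48.6 ± 3.0 µg reported in the text.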
31,474 | Agroforestry creates carbon sinks whilst enhancing the environment in agricultural landscapes in Europe | Increased market price volatility and the risks of changing climate are - according to the EU Agricultural Markets Briefs – the biggest challenges European farmers will face in near future.Facing the complex relationship between competitive farming and sustainable production, the current Common Agricultural Policy, supports farmers’ income, market measures and rural development.In spite of cross-compliance mechanism and the recently introduced greening measure that links environmental standards to subsidies, the agricultural sector is still one of the prime causes of pressure on natural resources and the environment.To address these environmental problems, the European Commission has issued policies such as the Nitrate Directive in 1991, the Water Framework Directive in 2000 and the Biodiversity Strategy in 2010 244).Nonetheless, major environmental problems persist and are still linked to or caused by intensive agricultural production on the one hand, and by land abandonment on the other.Most recently and in line with the COP21 Paris Agreement the Effort Sharing 2021–2030 2018/842) includes agricultural practices, aiming to reduce greenhouse gas emissions or balance with an equal amount of GHG sequestration.In this context, the future CAP for the next funding period after 2020 proposes three focal areas: a) “natural” farming, b) sustainable water management and use and c) dealing with climate change.This will require strategies to manage the above mentioned financial and environmental risks of production, ideas to expand the agricultural product range, and a focus on sustainable farming systems with climate adaptation and mitigation functions.Agroforestry, the integrated management of woody elements on croplands or grasslands, may become part of those strategies because it provides multiple products while simultaneously moderating critical environmental emissions and impacts on soil, water, landscapes, and biodiversity.In addition, it is highlighted as one of the agricultural practices with the greatest potential for climate change mitigation and adaptation.For example, agroforestry can enhance the sequestration of carbon in woody biomass and in the soil of cultivated fields, increase soil organic matter, improve water availability, protect crops, pastures, and livestock from harsh-climate events.Against this background, our study aimed to evaluate the potential contribution of agroforestry towards achieving zero-GHG emissions agriculture in pursuit of the ambitious Paris Agreement COP21 and CAP targets.Using a transdisciplinary approach including scientific and practical knowledge, the study focused on three key questions: I. Where and to what extent is European agricultural land affected by environmental pressures that could be reduced through agroforestry?,II.Which regional types of agroforestry can be used to reduce these environmental pressures and provide multiple products?,and – as an example of an ecosystem service that agroforestry can provide – III.What is the impact of the proposed systems on European climate change targets, in particular on carbon storage and GHG emissions?,The study was conducted in three main phases: First, the agricultural areas most seriously affected by environmental pressures were identified using various spatially explicit datasets on e.g. 
soil erosion, water pollution, and pollination pressures.In a second step, local agroforestry experts were consulted to propose suitable agroforestry practices for their regions suffering from environmental pressures.Finally, the annual carbon storage impact of the proposed systems was identified and evaluated in the light of European agricultural GHG emissions.In the next subsections, these three main phases are described more in detail, while we address advantages and limitations of the adopted approach, as well as possible improvements, in the Discussion section.Bearing in mind that agroforestry is only one aspect of a diversified agriculture, our focus was on agricultural areas facing combined environmental pressures, in which agroforestry can mitigate several environmental pressures.Fig. 1 illustrates the conceptual background of the Priority Area approach.The analysis uses the Corine Land Cover 2012 to identify the area of European arable and pasture land.From this farmland layer, the areas of high nature value such as Natura 2000, High Nature Value Farmland, and the existing agroforestry areas were subtracted.The remaining “Focus Areas” were the starting point for the pressure analysis.The indicator selection passed three stages as visualised in Fig. 2.First, indicators characterizing benefits provided by agroforestry systems were chosen.Torralba et al. summarized them into i) timber, food, and biomass production, ii) soil fertility and nutrient cycling, iii) erosion control, iv) biodiversity provision and Hart et al. completed the list with v) climate change mitigation.Given that continental spatial datasets covering most of the European countries and addressing the indicators in a consistent way were limited, the selection focussed in a second step on the CAP 2014–2020 context indicators.The CAP monitoring and evaluation framework is composed on a set of socio-economic, sectorial, and environmental indicators to reflect the impact and provide information of the performance of the strategy.Within this list four context environmental indicators were related to agroforestry benefits.These were i) water abstraction in agriculture addressing pressures on available fresh water resources, ii) water quality dealing with agricultural water pollution by nitrates and phosphates, iii) soil organic matter in arable land as SOC influences soil structure, aggregate stability, nutrient availability, water retention and resilience, and iv) soil erosion by water the most widespread form of soil degradation.Third, as indicators for climate change mitigation and biodiversity were not addressed within the CAP monitoring, we reviewed the literature and identified relevant datasets.In conclusion, only consistent spatial datasets, which were available with a wide European coverage, were included in the analysis.Accordingly, environmental pressures related to: i) soil health, ii) water quality and abstraction, iii) climate change, and iv) biodiversity were identified.Individual pressure maps were spatially aggregated and combined into the “Pressure Areas” map showing all regions where one or several environmental pressures occur.To identify the “Priority Areas” for intervention, the sum of pressures per spatial unit was expressed as an accumulation map or a “heatmap of environmental pressures”.The European water erosion map and the Swiss soil erosion risk map together with the European wind erosion map were used to locate areas with potentially critical loads of soil losses.According to Panagos et al. 
a critical threshold is reached if the soil loss is more than 5 t soil ha−1 a−1.The analysis of potential wind erosion was limited to arable land, which is more affected than grassland.The Soil Organic Carbon saturation capacity provided at European level by Lugato et al., 2014b) expresses the ratio between actual and potential SOC stocks.Regions with a ratio of less than 0.5 were identified as Pressure Areas, meaning that these soils contain less than half of their SOC storage potential.Irrigated fields regardless of whether they were pasture or arable land were included in the pressure analysis.Irrigation maps were provided by the JRC Water Portal and the Farm Structure Survey and expressed the proportion of irrigated land on the total agricultural area.Regions with more than 25% of the agricultural area under irrigation were included as Pressure Area.The nitrogen surplus, which can lead to both high levels of nitrate leaching and denitrification to gaseous nitrous oxide, was assessed for the European Union using the CAPRI model by Leip et al.For Switzerland data were obtained from modelled accumulated nitrogen losses.According to the German Ministry of Environment, there is a critical load if the annual nitrogen surplus exceeds 70 kg N ha−1 a−1 and this threshold was used to identify areas with high nitrogen surplus.Annual mean temperatures from the current climate and the forecast for 2050 were used to derive the predicted regional temperature increase up to 2050.According to Hart et al., agroforestry systems remain robust within an average temperature increase of up to 4 °C.Therefore, all areas with a predicted increase of temperature of more than 2 °C and less than 4 °C were qualified as Pressure Areas where agroforestry could potentially be beneficial.Soil fauna, microorganisms and biological functions derived from the spatial analysis by Orgiazzi et al. were used to assess soil biodiversity.The areas identified with “high” and “moderate-high” levels of risk were defined as Pressure Areas.The pollination assessment was based on the indicator of landscape suitability to support pollinators by Rega et al.The indicator is a dimensionless score; areas with “very low” and “low” suitability were defined as Pressure Areas.The pest control index was used as input for the assessment of regions with potential pressures in natural pest control.Again, the indicator is a dimensionless score; areas with “very low” and “low” suitability to support natural pest control, corresponding to the first two quintiles of the values’ distribution, were combined and defined as Pressure Areas.Using the thresholds previously mentioned, the nine environmental pressures were spatially combined using GIS.In each spatial unit the number of pressures were added together by weighting each indicator equally.Implications, advantages and drawbacks of this methodological approach are addressed in the discussion section.In the resulting “heatmap”, the 10% of the area with the highest number of pressures were defined as the Priority Area for the implementation of agroforestry.Based on Mücher et al. 
the Priority Areas were clustered into seven biogeographical regions: Atlantic; Continental lowlands, Continental hills; Mediterranean lowlands, Mediterranean hills, Mediterranean mountains; and Steppic.The spatial analysis was performed in ArcGIS10.4.The outcomes were processed in R with packages plyr, Hmisc, and ggplot2.Potential agroforestry practices, which are: 1) of interest to farmers and the most likely to be adopted by them, 2) the most adapted to mitigate the prominent environmental issues in the region, 3) the most developed in the region and 4) the most suitable to face climate change, were compiled by local experts and the authors for each Priority Area.A total of 20 experts, mainly national delegates of the European Agroforestry Federation or associated researchers, were asked for their contribution.We used a uniform emailing consisting of an explanation letter, maps of the Priority Area and a structured template.The template was divided into eight questions: i) type of agroforestry, ii) title and a short description of the system, iii) tree and hedgerow species, iv) number of trees per hectare or the percentage of woody cover per hectare, v) planting scheme and management system, vi) crop species and products, vii) tree products, and viii) harvesting year.The outcomes were summarized by biogeographical region.The total biomass production of the woody elements and the carbon storage potential of the proposed agroforestry systems were assessed based on literature data and from test sites .Herein the values represented an average potential per year of tree life and did not consider any dynamics of tree growth over time, or other impact factors such as water and nutrient availability, temperature, tree density, etc.Potential minimum and maximum values of carbon storage in biomass of each agroforestry practice for each biogeographic region were extracted separately for pasture and arable land.These values were used for upscaling the results to the “Priority Area”, assuming that in those regions, the total available farmland would be converted into agroforestry with one of the recommended agroforestry practices.In EU and Switzerland, the total area of European agricultural land is 1,544,022 km2.Subtracting existing agroforestry and nature protection areas, the analysis was then restricted to 1,414,803 km2 as Focus Area.This area consisted of 1,071,179 km2 of arable land and 343,624 km2 of pasture.Fig. 
3 gives an overview of the size of the individual “Pressure Areas” in relation to the Focus Area.Soil loss risks over 5 t soil ha−1 a−1 from water erosion were identified on 11.9% of the arable area and 9.5% of the pasture.Areas suffering from an annual loss greater than 5 t soil ha−1 a-1 by wind erosion were relatively small, whereas a low SOC saturation capacity was present on 58.7% of arable lands and on 12.8% of pastures.In total, 8.4% of the arable areas and 1% of the pastures had irrigation levels greater than 25%.High nitrogen pollution risk was mapped on 20.6% of arable lands and on 34.5% of the pastures.Around 63.0% of arable lands and 53.6% of pastures were located in regions where temperature is expected to rise between 2 and 4 °C by 2050 according to the HadGEM2-ES forecast scenario.Pressures in biodiversity and resulting potential underprovision of ecosystem services are widely spread all over European agricultural land.In total, 66.4% of the arable lands and 36.8% of pastures in the Focus Area were predicted to have low or very low natural pest control potential, whilst 41.8% of the arable areas and 21.0% of pastures were predicted to be not suitable for supporting pollinators.Potential soil biodiversity pressures were mapped on 11.5% of arable lands and on 18.7% of pastures.By combining the nine individual pressure maps, we created a heatmap for environmental pressures.For the total Pressure Area, a lower proportion of pasture areas were identified than of arable lands.Only 4% of the arable lands in the Focus Areas had no pressures, while in pasture it was around 12%.More than half of the pasture areas had less than three pressures, while 35% of arable area were affected by more than four pressures, and 9% had more than five pressures.Whilst we defined the Priority Areas as arable lands with more than five pressures, we set the threshold to only four pressures for pasture, as we evaluated only eight pasture pressure indicators.Together, they represent the worst 10% of the Pressure Area.These combined Priority Areas for arable and pasture land amounted to 136,758 km2, which corresponds to about 8.9% of the total European agricultural land.Table 2 gives an overview of the Priority Areas according to country and biogeographical region.In total, 64 agroforestry practices were proposed by the authors and local experts.They cover a wide range of practices from hedgerow systems on field boundaries to fast growing coppices or scattered single tree systems.Table 3 lists, for each biogeographical region, the proposed system with the lowest, medium, and highest carbon sequestration potential.In line with the largest Pressure Areas, the highest number of agroforestry practices was proposed for Atlantic regions followed by Mediterranean arable lands.The complete list can be found in Supplementary material.For each system the annual carbon storage potential of the woody elements was identified using data from the literature and in each geographical region, the minimum and maximum storage potential were determined.The wide range of practices selected corresponded to a wide range of carbon storage potentials, between 0.09 and 7.29 t C ha−1 a−1.In Table 4 these data were upscaled to the entire Priority Area of each biogeographical region.Overall, implementing the proposed agroforestry practices in the Priority Areas could mitigate between 2.1 and 63.9 million t C a−1 depending on the systems chosen, which is between 7.7 and 234.8 million t CO2eq a−1.In 2015, the 28 members of the European 
Union together with Switzerland emitted 4,504.9 million t of greenhouse gases, with agriculture contributing 12%.Converting the conventionally used farmland in the Priority Area to agroforestry could therefore capture between 1.4 and 43.4% of the European agricultural GHG emissions.This research investigated three questions: I) Where and to what extent is European agricultural land affected by environmental pressures?,II) Which regional types of agroforestry can be used to reduce environmental pressures?,and III) What is the potential contribution of the proposed systems to the European zero-emission agriculture climate targets?,In response to the first question, several environmental pressures that can be mitigated by establishing agroforestry practices were selected.According to Alam et al. and Torralba et al. these include soil conservation, the improvement of water quality, nutrient retention, climate regulation, and enhanced biodiversity.We investigated nine environmental pressures and mapped their occurrence in European agricultural land, based on existing spatially explicit databases at a continental European scale.The best available data were used, although it should be noted that differences in scales, time periods and models existed that might result in spatial inaccuracies.However, as other authors have pointed out, this is an intrinsic limit of all pan-European, spatially explicit studies: “as fully harmonized data on the different aspects are not available, the possible bias from inconsistencies between the different data layers is unavoidable”.All the datasets used, required some degree of modelling and the maps therefore show predicted rather than measured environmental pressures.Moreover, not all the existing environmental problems in agricultural areas could be addressed.Methane emissions, ammonia emissions, and zoonoses contamination, for example, were not included in the analysis presented here.In addition, biodiversity aspects in terms of quality and diversity, the amenity value of the landscape, and natural hazards, such as avalanches, floods, droughts, and landslides were not considered.Recommendations from the literature were used to define the thresholds for delimiting the Pressure Areas.The definition of thresholds is always arbitrary to some extent: different thresholds exist and modifying these or using different models would affect the size and spatial location of the Pressure Areas.For erosion, we used 5 t soil ha−1 a−1 as a threshold for erosion caused by water and erosion caused by wind, whereas for example, adopting a “tolerable” soil erosion rate of 0.3 to 1.4 t soil ha−1 a−1 as recommended by Verheijen et al. would strongly have increased the Pressure Area.The 5 t soil ha−1 a−1 threshold was uniformly used for the whole of Europe.However, soil erosion threshold values could also be defined by the nature of the soils in a particular area, depending for example, on soil quality and depth, with lower quality and shallower soils given lower thresholds to reflect their already precarious state and the relative importance of conserving what remains.Surplus regions for nitrogen have also been defined in different ways by the European states.The Nitrate Directive limits the nitrate content in ground and drinking waters to 50 mg NO3 l−1, and uses this limit for national governments to identify Nitrate Vulnerable Zones.In an earlier study on arable target regions for agroforestry implementation, based on soil erosion risk and NVZs, Reisner et al. 
identified 51.6% of the European arable land as Pressure Area.Yet the delimitation of NVZs was partly also a political process.In some countries they are limited to areas where the nitrate content in groundwater regularly exceeded the 50 mg NO3 l-1 threshold.In other countries, entire territories or regions were designated where special actions for nitrate reduction are compulsory for farmers.For example, almost the entire territory of Germany is labelled as NVZ.To allow for a spatially more differentiated analysis, we opted to locate areas with modelled annual nitrogen surplus above 70 kg N ha−1.Together, they accounted for 22% of arable lands and 36% of pastures, which is substantially lower than the 51.6% of European arable land identified by Reisner et al. as Pressure Area for nitrate emissions.The most prominent pressure in terms of area affected was the impact of rising temperature and climate change.This is in line with Olesen et al. and Schauberger et al. who modelled effects of climate change on crop development and yields.They found an earlier start to the growing and flowering period followed by enhanced transpiration in combination with water stress resulted in a reduction of maize yield of up to 6% for each day with temperatures over 30 °C.In fact, already during the summer of 2017 the potential impact of climate change was revealed by drought and heat waves, which impeded cereal production in various parts of Europe, mainly in southern and central Europe.However, by contrast, Knox et al. predicted positive effects of between 14–18% on the yields of wheat, maize, sugar beet, and potato by 2050 in Northern Europe.To identify Priority Areas, we accumulated all indicators.This simple addition implied assigning the same weight to all the environmental pressures addressed, and not considering the magnitude of each pressure and its relevance for the local context.For instance, soil erosion could be more damaging for agricultural practices than pests in a particular region or vice versa.However, a more sophisticated approach incorporating these two aspects, would have introduced a further level of arbitrariness in the study, in relation to the assignment of different weights.The approach used here has the advantage of being straightforward and immediate to understand and interpret for decision-makers.Indeed, our methods and results are in line with other pan-European studies, e.g. Mouchet et al. and Maes et al., that both analysed the ecosystem service provision of European landscapes.Mouchet et al. aggregated bundles of ecosystem services and found a longitudinal gradient of decreasing land use intensity from France to Romania.Maes et al. 
assessed the quantity of green infrastructure that maintained regulating ecosystem services and showed that regions with intensive agricultural production generally had lower levels of regulating ecosystem services provision.Both studies referred to the sum of all assessed indicators.The similarity among the three studies for the spatial output gives confidence to the overall outcomes of this study.To address the second research question, the collection of agroforestry practices, we hypothesized that agroforestry could mitigate the environmental pressures identified and that for each region, suitable practices could be proposed.Although agroforestry provides multiple ecosystem services, there is a general lack of uptake by farmers.Therefore, instead of trying to propagate the most suitable agroforestry for a particular pressure area and environmental pressure, we argue that the highest impact could be achieved by proposing an array of agroforestry practices that are locally adapted and attractive for farmers.This was how the experts selected the proposed practices.The suitable combination of tree and crop species is highly dependent on soil, water, and climate conditions at specific locations.For this reason, we have provided only a list of examples of agroforestry practices.The composition, implementation, and management of the agroforestry systems needs to be discussed with regional agroforestry experts and developed in partnership with the farmers themselves1 .For soil conservation, silvoarable alley cropping systems have been evaluated in earlier studies.Palma et al. and Reisner et al. estimated that their introduction on eight million hectares of arable land subject to water induced erosion risks would reduce soil erosion in those areas by 65%.Similar findings were provided by Ceballos and Schnabel and McIvor et al., who analysed how agroforestry can contribute to soil protection and preservation.Hedgerow systems lowered wind speed and consequently soil erosion by wind.Regarding the reduction of nitrate leaching, Nair et al. and Jose showed that agroforestry reduced nutrient losses by 40 to 70%.The conversion of 12 million ha of European cropland in NVZ to agroforestry with high tree densities could reduce nitrogen leaching by up to 28%.Moreno et al.; Birrer et al.; Bailey et al.; and Lecq et al. investigated the potential of agroforestry to provide multiple habitats for flora and fauna and enhance biodiversity.Flowering trees, such as orchards with fruit trees, were especially important in providing nesting and foraging habitats for pollinators and could enhance pest control.In general, findings from recent literature suggest that green infrastructure, such as agroforestry, enhances the overall provision of multiple ecosystem services.Our third research question focussed on the most prominent pressure “climate change” in pursuit of a zero-emission scenario in European agriculture.To do this, we estimated the carbon storage potential of the proposed agroforestry systems in the above- and belowground biomass of the woody elements.Whilst we are aware that agroforestry can also increase soil organic carbon, soil carbon storage is difficult to quantify.E.g. Feliciano et al. 
reported inconsistent results for temperate agroforestry ranging from a decrease of -8 t C ha−1 a−1 to an increase of 8 t C ha−1 a−1.They affirmed that different climatic conditions and the previous land management had a higher impact on soil carbon storage than the established agroforestry system.At the scale of this study it was therefore not sufficiently reliable to account for soil carbon storage.We found an overall average carbon sequestration potential of agroforestry of between 0.09 to 7.29 t C ha−1 a-1.The lower values were related to systems involving fewer woody elements per area.The higher values were mainly related to systems with higher densities of fast growing tree species and good soil conditions, which would also be associated with some reduction in food and feed production.Previous studies estimated a sequestration range of between 0.77 and 3 t C ha−1 a-1 for alley cropping, and Aertsens et al. proposed an average sequestration of 2.75 t C ha−1 a-1.Our estimates ranged from 0.09 to 7.29 t C ha-1 a−1 for implementing different agroforestry systems across Europe.In comparison, European forest stands sequestered 167 million t C in 2015 on 160.93 million ha.This value is a continental average and also comprises trees grown at latitudes and altitudes where growth is relatively slow.In general, the competition between trees, e.g. for light and nutrients, is higher in forests than for trees in agroforestry systems.The hotspots of environmental pressures were mainly located, as was expected, in intensively managed agricultural regions mostly correlated with a high level of production.The implementation of agroforestry in these regions would have the greatest environmental benefits.In spite of the rising awareness of the importance of improving the environment and the investment in supporting measures of the European and national Rural Development Programs of the EU Member States, the impact on green infrastructure is mixed.For example in the UK, whilst the area of woodland is increasing, the area of hedgerows declined from 1998 to 2007.Agroforestry, landscape features, agro-ecological systems, and green infrastructure are still in decline.This implies that the established incentives are insufficient or do not adequately address the problem and actors.In contrast, a promising trend can be observed in Switzerland, where since 1993 agroforestry trees and hedgerows in open landscapes are qualified as ecological focus areas.This measure and the related payments have allowed a consolidation of the area under agroforestry.There might be a trade-off between the introduction of agroforestry on arable and grassland, food production and the challenge of food security over the coming decades with a rising human population.For example, for a poplar silvoarable system in the UK, García de Jalón et al. predicted that crop yields would be 42% of those in arable systems, and that timber yields would be 85% of those in a widely-spaced forest system.Thus, the crop production and hence the production of food for human nutrition would be reduced.In the case of silvopastoral practices, Rivest et al. 
showed that trees did not compromise pasture yields, though the impact of future drought pressures on yield would strongly be related to the chosen species.In addition, no significant correlation between the number of semi-natural vegetation on agricultural output was found.The potential reduction of agricultural yields after the introduction of trees is an argument that is often put forward by farmers, who see themselves foremost as producers of food and fodder.However, under Mediterranean conditions, Arenas-Corraliza et al. predict that crop production could be reinforced under silvoarable schemes compared to open fields if the recurrence of warm springs keeps increasing.In addition, farmers are increasingly being asked to provide environmental goods and services beyond food production and policy makers and researchers are seeking for ways to sustainably intensify agricultural production, which necessitates increasing productivity whilst at the same time reducing environmental damage and maintaining the functioning of agro-ecosystems in the long-term.In many cases, this will require a shift towards more complex and knowledge intensive agro-ecological approaches.Trees on farmland have been identified for a long time as key elements in the design of sustainable agricultural systems and can contribute to multiple ecosystem services beyond carbon sequestration in combination with other types of semi-natural vegetation.Agroforestry implementation in the Priority Areas, which made up 8.9% of total European farmland, would capture between 1.4 and 43.4% of European agricultural GHG emissions, depending on whether the focus is on increasing tree cover in hedgerows as field boundary or supporting within field silvoarable and silvopastoral systems.These values support the observation by Hart et al. and Aertsens et al. 
who championed agroforestry as the most promising tool for climate change mitigation and adaptation in agriculture.Consequently, agroforestry can contribute significantly to the ambitious climate targets of the EU for a zero-emission agriculture.Finally, implications of this study are not restricted to the agricultural sector.Promoting agroforestry should be part of a more general land use policy aiming at the design of multifunctional agricultural landscapes.Scholars maintain that this will require coordinated actions at scales larger than individual farms and suggest that mechanisms for coordination and integration between spatial planning and agricultural measures will need to be put in place.This is also in line with the European Biodiversity Strategy and the Communication on Green Infrastructure (COM(2013) 249 final), which advocates for the integration between green infrastructure and spatial planning to achieve the Strategy’s Target 2 objectives – ecosystem services enhancement and ecosystem restoration.In this frame, agroforestry should be considered as a key component of green infrastructure and, in turn, green infrastructure can offer a suitable policy frame, beyond the CAP, to promote agroforestry.We investigated the potential for implementing agroforestry in agricultural areas subject to multiple environmental pressures of agricultural land in Europe and its contribution to European climate and GHG emission reduction targets.We found around one quarter of European arable and pasture land to be affected by none or only one of nine analysed environmental pressures and not primarily in need of restoration through introduction of agroforestry.Pastures were less affected than arable lands.For the Pressure Areas, we propose a wide range of agroforestry practices, which could mitigate the environmental pressures.The collection confirms the huge potential of agroforestry to be introduced and established in nearly every region in Europe and to adapt to various contexts, ideas, and needs of farmers.The estimated potential carbon storage depends on the selected agroforestry practice.The evidence from this study, that agroforestry on 8.9% of European agricultural land could potentially store between 1.4 and 43.4% of the total European agricultural GHG emissions, is encouraging and demonstrates that agroforestry could contribute strongly to prepare the ground for future zero-emission agriculture.Imposing, for example, carbon payments or penalties for nutrient pollution or soil loss, as presented, would make agroforestry a more financially profitable system.Future analysis should regionalize the approach to individual countries making use of data of higher spatial and thematic resolution, and ultimately to the farm scale, accompanied by extension and advice.In sum, agroforestry can play a major role to reach national, European and global climate targets, whilst additionally fostering environmental policy and promoting sustainable agriculture, particularly in areas of intensive agricultural management where environmental pressures accumulate.Future policy and legislation, e.g. the future Common Agricultural Policy, should explicitly promote and strengthen agroforestry. | Agroforestry, relative to conventional agriculture, contributes significantly to carbon sequestration, increases a range of regulating ecosystem services, and enhances biodiversity.
Using a transdisciplinary approach, we combined scientific and technical knowledge to evaluate nine environmental pressures in terms of ecosystem services in European farmland and assessed the carbon storage potential of suitable agroforestry systems, proposed by regional experts. First, regions with potential environmental pressures were identified with respect to soil health (soil erosion by water and wind, low soil organic carbon), water quality (water pollution by nitrates, salinization by irrigation), areas affected by climate change (rising temperature), and by underprovision in biodiversity (pollination and pest control pressures, loss of soil biodiversity). The maps were overlaid to identify areas where several pressures accumulate. In total, 94.4% of farmlands suffer from at least one environmental pressure, pastures being less affected than arable lands. Regional hotspots were located in north-western France, Denmark, Central Spain, north and south-western Italy, Greece, and eastern Romania. The 10% of the area with the highest number of accumulated pressures were defined as Priority Areas, where the implementation of agroforestry could be particularly effective. In a second step, European agroforestry experts were asked to propose agroforestry practices suitable for the Priority Areas they were familiar with, and identified 64 different systems covering a wide range of practices. These ranged from hedgerows on field boundaries to fast growing coppices or scattered single tree systems. Third, for each proposed system, the carbon storage potential was assessed based on data from the literature and the results were scaled-up to the Priority Areas. As expected, given the wide range of agroforestry practices identified, the carbon sequestration potentials ranged between 0.09 and 7.29 t C ha −1 a −1 . Implementing agroforestry on the Priority Areas could lead to a sequestration of 2.1 to 63.9 million t C a −1 (7.78 and 234.85 million t CO 2eq a −1 ) depending on the type of agroforestry. This corresponds to between 1.4 and 43.4% of European agricultural greenhouse gas (GHG) emissions. Moreover, promoting agroforestry in the Priority Areas would contribute to mitigate the environmental pressures identified there. We conclude that the strategic and spatially targeted establishment of agroforestry systems could provide an effective means of meeting EU policy objectives on GHG emissions whilst providing a range of other important benefits. |
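The headline mitigation figures in the article above follow from a few unit conversions on the reported totals. The short Python sketch below reproduces that arithmetic; the carbon range, the 2015 emission total and the 12% agricultural share are taken from the text, the 44/12 molecular-weight ratio for converting C to CO2 equivalents is standard, and the small differences from the published CO2eq values reflect rounding of the reported intermediates.

```python
# Unit conversions behind the headline figures: the region-by-region upscaling (Table 4)
# gives 2.1-63.9 million t C per year stored in woody biomass on the Priority Areas.

MT_C_RANGE = (2.1, 63.9)          # million t C a-1, as published
CO2_PER_C = 44.0 / 12.0           # molecular-weight ratio for converting C to CO2 equivalents

co2eq_range = [c * CO2_PER_C for c in MT_C_RANGE]
# -> roughly 7.7 and 234.3 million t CO2eq a-1; the published 7.7-234.8 differs only
#    because the conversion was applied before the C totals were rounded.

# EU-28 plus Switzerland emitted 4,504.9 million t CO2eq of greenhouse gases in 2015,
# of which agriculture contributed 12%.
agri_emissions = 4504.9 * 0.12    # ~540.6 million t CO2eq a-1

share = [100 * x / agri_emissions for x in co2eq_range]
# -> roughly 1.4% and 43.3%, matching the reported 1.4-43.4% capture potential.

print([round(x, 1) for x in co2eq_range], [round(s, 1) for s in share])
```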
31,475 | Motor correlates of phantom limb pain | Following arm amputation individuals generally perceive vivid sensations of the amputated limb as if it is still present, with varying ability to voluntarily move this phantom hand.In up to 80% of arm amputees these phantom sensations are experienced as painful and can manifest as an intractable chronic neuropathic pain syndrome.Phantom limb pain often does not respond to conventional analgesic therapies and poses a significant medical problem.A large number of studies have associated PLP with plastic changes in the sensorimotor nervous system.Following this, a surge of behavioural therapies that aim to normalise the representation of the phantom hand have been developed in recent years.The overarching objective of these behavioural therapies is to relieve PLP by improving the ability to move the phantom limb .The assumption behind these therapies is that increased motor control over the phantom hand would cause PLP relief.Despite the large number of PLP therapies relying on this notion, the link between PLP and phantom hand motor control is only recently starting to be uncovered behaviourally, or using neuroimaging.Systematic evidence for the role of phantom hand motor control in predicting PLP is lacking.The current study aimed at characterising the assumed link between PLP and phantom hand motor control in fourteen upper-limb amputees suffering from chronic PLP.Functional magnetic resonance imaging was used to further examine the neural correlates of deteriorated phantom hand motor control.Specifically, we investigated the relationship between deteriorated motor control and the representation of the phantom hand in primary sensorimotor cortex.Fifteen unilateral upper-limb amputees who experienced PLP episodes more than once a week in the month preceding recruitment and fifteen age- and sex-matched controls were recruited through the Oxford Centre for Enablement and Opcare.In this study, we specifically targeted amputees suffering from relatively high chronic PLP.As such, the variance and range of chronic PLP sampled was reduced in the current study compared to our previous study that demonstrated a relationship between chronic PLP and primary sensorimotor phantom hand representation."However, we note that this difference in chronic PLP variance was not significant, as assessed using Levene's Test of Equality of Variances .Ethical approval was granted by the NHS National Research Ethics service and written informed consent was obtained from all participants prior to the study.Data from one amputee was discarded due to inability to perform the motor task with the phantom hand.Amputees participated in four consecutive testing sessions that were separated by at least one week, as part of a larger study.Here, only methods related to results reported in the current paper are detailed.One amputee completed only three testing sessions.Control participants took part in a single session.To compare between the amputees and controls, the phantom hand was matched to the non-dominant hand of controls, and the intact hand was matched to the dominant hand of controls.At the start of the first testing session, amputees rated the frequency of PLP, as experienced within the last year, as well as the intensity of worst PLP experienced during the last week.Chronic PLP was calculated by dividing worst PLP intensity by PLP frequency."This approach reflects the chronic aspect of PLP as it combines both frequency and intensity.A similar measure was obtained for 
non-painful phantom sensation vividness and stump pain.Ratings of transient PLP intensity were obtained in each testing session prior to the finger-tapping test.Motor control was assessed using the ‘finger-to-thumb opposition task’.In this task, participants sequentially opposed each of the four fingertips to the tip of their thumb, starting with the index finger.Participants were instructed to repeat this movement cycle five times, and verbally indicated the ending of each cycle.Participants first performed the finger-tapping task with their intact hand and then repeated the task using their phantom hand.Importantly, phantom hand movements are distinguishable from imagined movements, as is supported by empirical evidence demonstrating that phantom limb movements elicit both central and peripheral motor signals that are different from those found during movement imagery.As such, emphasis was given to making “actual” instead of imagined phantom hand movements.Participants were encouraged to perform the finger-tapping task as well as possible, given their volitional motor control over the fingers.If it was impossible to make the full finger-to-thumb movements with the phantom fingers, participants were asked to attempt to perform the instructed movement.During the task, participants were requested to keep their eyes closed, their intact hand relaxed in their lap and all other body parts still.Note that this task has no spatial components, and therefore the intact hand position was not expected to modulate task performance.Participants were further asked to perform the finger-tapping task bimanually, where they used their intact hand to mirror the precise degree and speed of movement of the phantom hand.Lastly, participants were asked to perform the finger-tapping task using imagined intact and phantom hands movements separately."Response timing for completing the five movement cycles was recorded in real time by an experimenter using a stopwatch, based on participants' verbal reports.To establish a normalised measure for phantom hand movement response time accounting for inter-subject response variability, the intact hand movement response time was extracted from the phantom hand movement response time.Upon completion of each trial, participants were asked to rate the movement difficulty, as well as whether the movement induced transient PLP.Performing the phantom hand finger-tapping task increased transient PLP in 38% of all trials, with an average PLP increase of 10 points.Intact hand finger-tapping never induced PLP.The bimanual finger-tapping task elicited PLP in 44% of all trials, with an average PLP increase of 10 points.The imagined phantom hand finger-tapping task increased transient PLP in 13% of all trials, with an average PLP increase of 2 points.Imagined intact hand finger-tapping induced PLP in 3% of all trials, with an average PLP increase of 1 point.Participants were visually instructed to make simple feet, lips, intact hand, and phantom hand movements, in a block-design fashion.Each movement condition was repeated four times in a counterbalanced protocol, alternating 12 sec of movement with 12 sec of rest.The movement pace was instructed at .5 Hz.Participants were clearly instructed to make actual rather than imagined phantom hand movements.If it was impossible to perform full phantom hand movements, participants were asked to attempt to perform the movements.By asking amputees to perform phantom hand movements, we directly targeted otherwise latent phantom hand 
representation in the primary sensorimotor missing hand cortex.We have previously shown that this task is successful in producing primary sensorimotor cortex activity across a heterogeneous group of upper limb amputees.Instructions were delivered visually using Presentation software.Head motion was minimized using padded cushions.MRI data acquisition, preprocessing and analysis followed standard procedures, as detailed in Appendix A: Supplementary materials.Functional images were obtained using a multiband T2*-weighted pulse sequence with an acceleration factor of 6.This provided the opportunity to acquire data with increased spatial and temporal resolution.Data collected for individuals with an amputated right hand was flipped on the mid-sagittal plane before all analyses, such that the hemisphere contralateral to the phantom hand was consistently aligned."Common pre-processing steps for fMRI data were applied to each individual run, using FSL's Expert Analysis Tool FEAT.First-level parameter estimates were computed using a voxel-based general linear model based on the double-gamma hemodynamic response function and its temporal derivatives.Two main contrasts were specified between different task movement conditions: 1) intact hand versus feet, and 2) phantom hand versus feet.To investigate a potential relationship between chronic PLP and activity in the cortical phantom hand area, phantom hand movements were also contrasted with rest."Hand regions of interest were selected based on the control group's average hand movement activity, as detailed in Appendix A: Supplementary materials.The percent signal change was extracted for all voxels underlying the hand ROIs and then averaged across scans for each amputee.Statistical analysis was carried out using SPSS software and Matlab.For each measure, cases more than 3 standard deviations from the mean were replaced with within-participant means.Data were inspected for violations of normality using the Shapiro–Wilk test.If normality was violated, non-parametric statistical tests were utilised.Two-tailed significance testing was applied unless stated otherwise and standard approaches were used for statistical analysis, as mentioned in the results section and detailed in Appendix A: Supplementary materials.Here we focus on the normalised measure for phantom hand movements, i.e., phantom minus intact hand response times.To confirm that the results were not driven by intact hand response times, results were also examined for phantom hand response times and intact hand response times separately.These results are summarised in Table A.1.All results reported below were similar to phantom hand response times only, unless stated otherwise.Below, we only report results based on a priori hypotheses derived from previous research, as described in the introduction.Specifically, we focus on correlations between chronic PLP, phantom hand movement response times and activity in the primary sensorimotor phantom hand cortex.Secondary control analyses showing null results were not adjusted for multiple comparisons.More exploratory analyses are reported in Appendix A: Supplementary materials.No significant difference in phantom hand movement response times was found across the four sessions .Phantom hand movement inter-session consistency was further confirmed using intraclass correlations.ICC values range from 0 to 1: ICC values <.4 are considered poor, .4 to .59 are fair, .6 to .74 are good, and >.75 suggest excellent inter-session consistency.For phantom hand 
movements, this measure indicated good inter-session consistency with an ICC value of .64 and 95% confidence interval = .37–.86 .Inter-session consistency was only fair for imagined phantom hand movements.Average response times across sessions were used for further analysis.Good inter-session consistency was found for phantom hand activity in the primary sensorimotor phantom hand cortex ."Phantom hand movement response times were greater in the amputee group compared to the control group .When considering phantom and intact hand response times separately, motor control over the phantom hand was deteriorated, as demonstrated by increased phantom hand movement response times."Amputees' phantom hand response times were slower both compared to intact hand response times and compared to controls' non-dominant hand response times.Intact hand response times were not significantly different between amputees and controls = .70, p = .49) and no difference in response times was found between dominant and non-dominant hand movements in controls.These results are consistent with previous reports.Phantom hand movement response times associated with chronic PLP.This result is consistent with previous studies.Amputees experiencing worse chronic PLP were slower at performing phantom hand movements.The linear regression line denoting the relationship between chronic PLP and phantom hand movement response times in Fig. 1B can be defined by y = 2.3962x + 17.251.This means that for every 1 sec increase in response times there was a 2.3962 point increase in chronic PLP.As an exploratory test, we also examined the links between phantom hand movement response times and other measurements relating to chronic PLP, such as chronic non-painful phantom limb sensations and transient PLP.We observed that the relationship with phantom hand movement response times did not translate to chronic non-painful phantom sensation experience.Furthermore, no significant correlation was found between transient PLP and phantom hand movement response times in the individual sessions.The observed correlation between chronic PLP and phantom hand movement response times was not driven by PLP evoked by the task, as shown using a partial correlation including task-evoked PLP as a nuisance regressor.A further exploratory analysis revealed that there was no significant correlation between imagined phantom hand movement response times and chronic PLP.These results extend previous findings, by showing that the link between phantom hand movement response times and chronic PLP is non-transmutable.Activity in the primary sensorimotor phantom hand cortex associated with phantom hand movement response times.Amputees who were slower in performing the finger-tapping task with the phantom hand outside the scanner activated the primary sensorimotor phantom hand cortex more during flexion and extension of all phantom fingers.The linear regression line denoting the relationship between phantom hand movement response times and cortical phantom hand activity in Fig. 
1C can be defined by y = .0754x + 1.0439.This means that for every 1 sec increase in response times there is a .0754% signal increase in phantom hand activity.When regressing out task-evoked PLP using a partial correlation, a strong trend towards a correlation between phantom hand activity in the primary sensorimotor phantom hand cortex and phantom movement response times was observed.Correlations between activity in the primary sensorimotor phantom hand cortex and chronic PLP reached significance in the first and second scanning sessions, but not in subsequent scanning sessions.Note that variations in primary sensorimotor phantom hand cortex activity levels across participants did not result from inter-subject differences in task difficulty: First, phantom hand movements used in the neuroimaging task were customised per participant such that they were comfortable to perform for all participants.Second, the correlation between phantom hand movement response times and cortical sensorimotor activity was independent of difficulty ratings in the finger-tapping task.This confirms that the observed increased activity in the primary sensorimotor phantom hand cortex reflected movement representation, and not difficulty.The correlation between response times and activity in the primary sensorimotor cortex was not significant for the intact hand or for controls.Although suggestive, the observed relationship with phantom hand movements might reflect abnormal movement representation, potentially pointing at aberrant processing.Previous studies reported that chronic PLP positively correlated with the duration of movement execution with the phantom hand, as well as difficulty.Furthermore, it was shown that this relationship with chronic PLP did not hold for imagined phantom hand movements.In the current study, we confirm and extend these initial findings.First, we validate the reliability of phantom hand movement response times in the finger-tapping task by demonstrating good inter-session consistency.We therefore propose that this measure offers a means to quantify phantom hand motor control.Second, we show that deteriorated phantom hand motor control positively associated with the strength of cortical sensorimotor phantom hand representation, suggesting that deteriorated phantom hand motor control may be rooted in aberrant cortical representation of the phantom hand.Third, we demonstrate that phantom hand movements are associated with chronic PLP, but not transient PLP or chronic non-painful phantom sensations, thus consolidating the exclusive link between phantom hand motor control and chronic PLP.Over the past decades various theories have been proposed to explain the neural mechanisms underlying chronic PLP within the context of motor control and sensory inputs.For example, PLP has been suggested to be caused by a incongruency between motor and sensory signals, problems in the cortical body matrix representation, a vicious cycle between pain and avoidance behaviour or prediction errors.We wish to highlight the maintenance of nociceptive peripheral signals following amputation, previously shown to drive PLP, as a potential source for the observed association between PLP and deteriorated motor control.It is possible that aberrant inputs from the residual nerves to the primary sensorimotor phantom hand cortex also disrupt the functioning of the sensorimotor system, leading to deteriorated phantom hand motor control.As such, the current results are in line with our previous neuroimaging findings that 
link chronic PLP with activity in the primary sensorimotor phantom hand cortex during phantom hand movements and replicated in Kikkert et al.).Here we did not observe a consistent significant correlation between chronic PLP and activity in the cortical phantom area.This could potentially be explained by the restricted range of chronic PLP sampled in the current study, as we specifically targeted individuals with relatively high chronic PLP.When the variation in chronic PLP is reduced, this can explain less variation in brain activity, leading to a lower correlation coefficient.Indeed, lower variability is known to reduce the sensitivity of identifying correlations.As such, further research is needed to determine whether the observed relationship between deteriorated phantom hand motor control and chronic PLP is mediated by the cortical sensorimotor representation of the phantom hand.The accumulating evidence for a correlation between phantom hand motor control and chronic PLP highlights the importance of studying phantom hand motor control as a feature of chronic PLP, and provides opportunities for refining currently available clinical applications.Current behavioural therapies aiming to relieve PLP through phantom limb movement therapy have shown mixed effectiveness.While these therapies are based on the assumption that increased motor control over the phantom hand can cause a change in PLP, many of these therapies make use of motor imagery, rather than motor execution.Despite the mounting evidence linking phantom hand motor execution and PLP, the existence of a link between phantom hand motor imagery and chronic PLP remains tenuous, and our current findings highlight the diminished consistency of motor imagery performance.It is therefore possible that phantom limb movement therapy outcomes could be improved when using actual, instead of imagined, phantom movements in rehabilitation approaches.An alternative explanation for the limited effectiveness of phantom limb movement therapies is that the observed link between phantom hand movements and chronic PLP may not be causal.Indeed, insufficient evidence currently exists to support the assumed causality of this link.The motor test investigated in this study provides an option for implicit, and potentially more objective, measurement of chronic PLP.Since no implicit measure currently exist for assessing chronic PLP, clinicians rely solely on self-report for diagnostics and monitoring of treatment outcomes.Self-report is known to sometimes be unreliable, biased and influenced by mood states.In certain circumstances our motor task may provide an implicit proxy measure that is more resistant to the confounds sometimes inherent to self-report, as has been shown to be useful in several previous studies exploring analgesic efficacy.A potential confound of our approach is that performing the phantom hand finger-tapping test increased transient PLP in a subset of the amputees, and one participant was unable to perform the task.For amputees who are unable to move the phantom hand, performing the task using motor imagery could be an alternative option, but more research is needed to validate this approach. | Following amputation, individuals ubiquitously report experiencing lingering sensations of their missing limb. While phantom sensations can be innocuous, they are often manifested as painful. Phantom limb pain (PLP) is notorious for being difficult to monitor and treat. 
A major challenge in PLP management is the difficulty in assessing PLP symptoms, given the physical absence of the affected body part. Here, we offer a means of quantifying chronic PLP by harnessing the known ability of amputees to voluntarily move their phantom limbs. Upper-limb amputees suffering from chronic PLP performed a simple finger-tapping task with their phantom hand. We confirm that amputees suffering from worse chronic PLP had worse motor control over their phantom hand. We further demonstrate that task performance was consistent over weeks and did not relate to transient PLP or non-painful phantom sensations. Finally, we explore the neural basis of these behavioural correlates of PLP. Using neuroimaging, we reveal that slower phantom hand movements were coupled with stronger activity in the primary sensorimotor phantom hand cortex, previously shown to associate with chronic PLP. By demonstrating a specific link between phantom hand motor control and chronic PLP, our findings open up new avenues for PLP management and improvement of existing PLP treatments. |
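The core behavioural analysis of the preceding article can be sketched in a few lines: the normalised response-time measure (phantom minus intact hand), a zero-order correlation with chronic PLP, a partial correlation that residualises out task-evoked PLP, and the regression line reported for Fig. 1B (chronic PLP = 2.3962 x normalised response time + 17.251). All participant values below are hypothetical; only the regression coefficients and the structure of the analysis come from the text.

```python
# Illustrative sketch of the behavioural analysis; all per-participant values are hypothetical.
import numpy as np
from scipy import stats

phantom_rt = np.array([21.0, 35.5, 18.2, 42.7, 27.9])   # hypothetical, seconds per 5 cycles
intact_rt  = np.array([10.1, 11.4,  9.8, 12.0, 10.7])   # hypothetical
chronic_plp = np.array([30.0, 60.0, 25.0, 85.0, 45.0])  # hypothetical chronic PLP scores
task_evoked_plp = np.array([0.0, 10.0, 0.0, 15.0, 5.0]) # hypothetical transient increase

# Normalised measure: phantom minus intact hand response time
norm_rt = phantom_rt - intact_rt

# Zero-order correlation between chronic PLP and the normalised response time
r, p = stats.pearsonr(norm_rt, chronic_plp)

# Partial correlation controlling for task-evoked PLP: correlate the residuals left after
# regressing each variable on the nuisance covariate
def residualise(y, covariate):
    slope, intercept = np.polyfit(covariate, y, 1)
    return y - (slope * covariate + intercept)

r_partial, p_partial = stats.pearsonr(residualise(norm_rt, task_evoked_plp),
                                      residualise(chronic_plp, task_evoked_plp))

# Regression line reported for Fig. 1B: chronic PLP = 2.3962 * normalised RT + 17.251
predicted_plp = 2.3962 * norm_rt + 17.251

print(round(r, 2), round(r_partial, 2), predicted_plp.round(1))
```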
31,476 | An essential role for neuregulin-4 in the growth and elaboration of developing neocortical pyramidal dendrites | The neuregulins are widely expressed pleiotropic growth factors related to epidermal growth factor that signal via the ErbB family of receptor tyrosine kinases.Very extensive work on the first neuregulin discovered, has revealed that its numerous isoforms play many roles in the development and function of neurons and glia, including regulating the assembly of neural circuitry, myelination, neurotransmission and synaptic plasticity.Likewise, numerous studies on NRG2 and NRG3 have revealed that they participate in synaptogenesis, synaptic function and aspects of neuronal development."Importantly, the Nrg1, Nrg2, Nrg3, ErbB3 and ErbB4 genes have been identified as susceptibility genes for schizophrenia, depression and bipolar disorder and numerous genetic and functional studies have directly implicated the Nrg1, Nrg2, Nrg3 and ErbB4 genes in the development of psychotic behaviour.Although much less work has been done on the latest neuregulins to be identified, NRG5 and NRG6, both are highly expressed in brain.NR6 plays a role in radial neuronal migration in the neocortex and is a potential susceptibility gene for schizophrenia.In contrast with other neuregulins, NRG4 is expressed in a limited number of adult tissues, such as brown adipose tissue, and has been reported to have no or negligible expression in adult brain.NRG4 functions as a secreted endocrine factor in vivo produced and released by brown adipose tissue.NRG4 decreases hepatic lipogenesis, increases fatty acid β-oxidation and increases energy expenditure.While NRG4 has been implicated in the regulation of metabolic homeostasis, it has no known function in the brain.Our analysis of mice in which the Nrg4 locus has been disrupted reveals a very striking phenotype in neocortical pyramidal neurons both in vitro and in vivo.As such, we provide the first evidence that NRG4 plays a major role in the brain.Mice were housed in a 12 h light-dark cycle with access to food and water ad libitum.Breeding was approved by the Cardiff University Ethical Review Board and was performed within the guidelines of the Home Office Animals Act, 1986.Nrg4 null mice in which the Nrg4 locus was disrupted by retroviral insertion of a gene trap between exons 1 and 2 were purchased from the Mutant Mouse Resource Centre, UC Davis.These mice were backcrossed from a C57/BL6 background into a CD1 background.Nrg4+/− mice were crossed to generate Nrg4+/+ and Nrg4−/− littermates.Primary cortical neurons were prepared from E16 embryos.The protocol for culturing hippocampal pyramidal neurons was used with modifications.Briefly, dissected cortices were mechanically triturated in Neurobasal A medium supplemented with 2% B27, 0.5 mM GlutaMAX 1, 100 units/ml penicillin and 100 μg/ml streptomycin.15,000 cells/cm2 were plated on poly-l-Lysine-coated 35 mm dishes and incubated at 37 °C in a humidified atmosphere with 5% CO2.In some cultures, the culture medium was supplemented with 100 ng/ml recombinant NRG4 after plating.Neurons were cultured for either 3 or 9 days in vitro.Neurons were fluorescently labelled in 3 day cultures by treating the cultures with the fluorescent dye calcein-AM for 15 min at 37 °C.In 9 days cultures, the neurite arbors of a subset of the neurons were visualized by transfecting the neurons with a GFP expression plasmid using lipofectamine 2000 after 7 days in vitro.Briefly, 1 μg of DNA was mixed with 2 μl of lipofectamine.After 20 
min, this mixture in 2 ml of Opti-MEM media was added to the cultures.After 3 h at 37 °C, the cultures were washed with culture medium and incubated for a further 2 days.At the end of the experiment, the neurons were fixed for 30 min with 4% paraformaldehyde.Images of fluorescent-labelled neurons were acquired with an Axiovert 200 Zeiss fluorescent microscope.Neurite length and Sholl analysis were carried out using Fiji software with the semi-automated plugin Simple Neurite Tracer.Brains were fixed overnight using 4% paraformaldehyde in 0.12 M phosphate-buffered saline at 4 °C, washed in PBS and cryoprotected in 30% sucrose before being frozen in dry ice-cooled isopentane.Serial 30 μm sections were blocked in 1% BSA, 0.1% Triton in PBS and then incubated with 1:500 anti-MAP2 and anti-NRG4 antibodies at 4 °C overnight.After washing, the sections were incubated with 1:500 rabbit polyclonal Alexa-conjugated secondary antibodies for 1 h at room temperature.Sections were washed, incubated with DAPI and visualized using a Zeiss LSM710 confocal microscope.Neurons were fixed for 10 mins in 4% paraformaldehyde in 0.12 M phosphate-buffered saline, washed 3 times in PBS and blocked in 1% BSA, 0.1% Triton in PBS for 1 h, then incubated with primary antibodies against NRG4, ErbB4, Emx1 overnight at 4 °C.After washing, the neurons were incubated with polyclonal Alexa-conjugated secondary antibodies 1:500 for 1 h at room temperature.Cells were then washed, incubated with DAPI and visualized using a Zeiss LSM710 confocal microscope.The levels of Nrg4 mRNA was quantified by RT-qPCR relative to a geometric mean of mRNAs for the house keeping enzymes glyceraldehyde phosphate dehydrogenase, succinate dehydrogenase and hypoxanthine phosphoribosyltransferase-1.Total RNA was extracted from dissected tissues with the RNeasy Mini Lipid extraction kit."5 μl total RNA was reverse transcribed, for 1 h at 45 °C, using the Affinity Script kit in a 25 μl reaction according to the manufacturer's instructions.2 μl of cDNA was amplified in a 20 μl reaction volume using Brilliant III ultrafast qPCR master mix reagents.PCR products were detected using dual-labelled hybridization probes specific to each of the cDNAs.The PCR primers were: Nrg4 forward: 5′-GAG ACA AAC AAT ACC AGA AC-3′ and reverse: 5′-GGA CTG CCA TAG AAA TGA-3′; ErbB4 forward: 5′-GGC AAT ATC TAC ATC ACT G-3′ and reverse: 5′-CCA ACA ACC ATC ATT TGA A-3′; Gapdh forward: 5′-GAG AAA CCT GCC AAG TAT G-3′ and reverse: 5′-GGA GTT GCT GTT GAA GTC-3′; Sdha forward: 5′-GGA ACA CTC CAA AAA CAG-3′ and reverse: 5′-CCA CAG CAT CAA ATT CAT-3′; Hprt1 forward: 5′-TTA AGC AGT ACA GCC CCA AAA TG-3′ and reverse: 5′-AAG TCT GGC CTG TAT CCA ACA C-3′.Dual-labelled probes were: Nrg4: 5′-FAM-CGT CAC AGC CAC AGA GAA CAC-BHQ1–3′; ErbB4: 5′-FAM-AGC AAC CTG TGT TAT TAC CAT ACC ATT-BHQ1–3′; Gapdh: 5′-FAM-AGA CAA CCT GGT CCT CAG TGT-BHQ1–3; Sdha: 5′-FAM-CCT GCG GCT TTC ACT TCT CT-BHQ1–3, Hrpt1: 5′-FAM-TCG AGA GGT CCT TTT CAC CAG CAA G-BHQ1–3′.Forward and reverse primers were used at a concentration of 250 nM and dual-labelled probes were used at a concentration of 500 nM.PCR was performed using the Mx3000P platform using the following conditions: 95 °C for 3 min followed by 45 cycles of 95 °C for 10 s and 60 °C for 35 s. 
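The relative quantification described above, target mRNA expressed relative to the geometric mean of the Gapdh, Sdha and Hprt1 transcripts, can be sketched as follows. Reading quantities off a Ct-versus-log10(input) standard curve is one common way to implement the dilution-series calibration described in the next sentence; the threefold dilution scheme matches the study, while every Ct value in the example is hypothetical.

```python
# Sketch of standard-curve based relative quantification, normalised to the geometric
# mean of Gapdh, Sdha and Hprt1 (all Ct values here are hypothetical).
import numpy as np
from scipy.stats import gmean

def quantity_from_standard_curve(ct_sample, ct_standards, dilutions):
    """Interpolate a relative quantity from a Ct-vs-log10(input) standard curve."""
    log_q = np.log10(dilutions)
    slope, intercept = np.polyfit(log_q, ct_standards, 1)   # Ct = slope*log10(Q) + intercept
    return 10 ** ((ct_sample - intercept) / slope)

# Serial threefold dilutions of reverse-transcribed adult brain RNA (relative input amounts)
dilutions = np.array([1, 1/3, 1/9, 1/27, 1/81])

# Hypothetical standard-curve Cts for each assay, and hypothetical sample Cts
standards = {
    "Nrg4":  np.array([24.0, 25.6, 27.2, 28.8, 30.4]),
    "Gapdh": np.array([16.0, 17.6, 19.2, 20.8, 22.4]),
    "Sdha":  np.array([20.0, 21.6, 23.2, 24.8, 26.4]),
    "Hprt1": np.array([21.0, 22.6, 24.2, 25.8, 27.4]),
}
sample_ct = {"Nrg4": 29.5, "Gapdh": 18.0, "Sdha": 22.5, "Hprt1": 23.4}

quantities = {gene: quantity_from_standard_curve(sample_ct[gene], cts, dilutions)
              for gene, cts in standards.items()}

# Target level relative to the geometric mean of the three housekeeping transcripts
housekeeping = gmean([quantities["Gapdh"], quantities["Sdha"], quantities["Hprt1"]])
relative_nrg4 = quantities["Nrg4"] / housekeeping
print(round(relative_nrg4, 4))
```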
Standard curves were generated for each cDNA for every real time PCR run, by using serial threefold dilutions of reverse transcribed adult mouse brain total RNA.Relative mRNA levels were quantified in whole brain, BAT and various brain regions dissected from at least 3 animals at each age.Primer and probe sequences were designed using Beacon Designer software."Modified Golgi-Cox impregnation of neurons was performed using the FD Rapid GolgiStain kit according to the manufacturer's instructions on 150 μm transverse sections of P10, P30 and adult brains of Nrg4+/+ and Nrg4−/− mice.Total dendrite length, branch point number and Sholl analysis was carried out separately on the apical and basal dendrite compartments of cortical pyramidal neurons of P10 and P30 littermates using the plugin Sholl Analysis of Fiji software after neuronal reconstruction with the plugin Simple Neurite Tracer.To begin to explore the possibility that NRG4 functions in the developing brain, we used qPCR to investigate if Nrg4 mRNA is expressed in the embryonic mouse neocortex.This revealed that Nrg4 mRNA is clearly detectable in the embryonic neocortex, although its level is some 400-fold lower than that in adult brown adipose tissue.During development, there is an approximate 3-fold increase between E14 and birth.Measurement of Nrg4 mRNA in newborn brain regions revealed that Nrg4 mRNA is widely expressed, with the highest levels in the cerebellum and olfactory bulb.Measurement of ErbB4 mRNA, which encodes the receptor for most neuregulins, including NRG4, revealed that this is also widely expressed in the newborn brain, with the highest levels in the neocortex.To investigate the significance of Ngr4 mRNA expression in the developing brain, we compared the brains of Nrg4−/− and Nrg4+/+ mice.Golgi preparations of transverse sections were made of the most rostral part of the neocortex, including the frontal/motor cortex and the most rostral region of the somatosensory cortex of postnatal day 10 mice.These revealed that the size and complexity of the dendritic arbors of pyramidal neurons throughout the full thickness of the neocortex were dramatically reduced in Nrg4−/− mice compared with Nrg4+/+ littermates.Representative high power images show that the reduction of dendrite size and complexity in Nrg4−/− mice affected both apical and basal dendrite compartments of these neurons.Analysis carried out separately on the apical and basal dendrite compartments revealed a highly significant four-fold reduction in total dendrite length and a significant two-fold reduction in the number of branch points in both dendrite compartments.Reductions in dendrite length and branching were reflected in the Sholl analyses of apical and basal compartments.These findings suggest that NRG4 plays a major and unexpected role in promoting the growth and elaboration of pyramidal neuron dendrites in the developing neocortex.To determine whether the pronounced phenotype observed in neonatal Nrg4−/− mice is retained into adulthood or whether compensatory changes occur with age, we repeated the Golgi studies in juvenile and adult littermates.In P30 mice, the difference in the size and complexity of neocortical pyramidal dendrites between Nrg4+/+ and Nrg4−/− mice was still evident, though less pronounced than in neonates.Quantification confirmed this impression, and showed that both the apical and basal dendrites of neocortical pyramidal neurons of P30 Nrg4−/− mice were shorter and less branched than those of Nrg4+/+ littermates.Compared with P10 
preparations, the proportional differences in dendrite length and branching between Nrg4−/− and Nrg4+/+ mice were less pronounced.While the differences were still highly significant for apical dendrites, they had lost statistical significance for basal dendrites by this age.The differences in apical and basal dendrite length and branching between Nrg4−/− and Nrg4+/+ mice were reflected in the Sholl analyses of apical and basal compartments.In adult mice there did not appear to be pronounced differences in the size and complexity of neocortical pyramidal dendrite arbors of Nrg4−/− and Nrg4+/+ mice.However, the large size and complexity of neocortical pyramidal dendrites in adults precluded accurate quantification.Taken together, these findings suggest that compensatory changes in the pyramidal dendrites of Nrg4−/− mice occur with age.This may be mediated by other members of neuregulin family, which are expressed at high levels in the mature brain.Various phenotypic changes have been reported in neurons in genetic studies of other neuregulins and ErbB receptors.Although constitutive deletion of the Nrg1, ErbB2 and ErbB4 genes results in embryonic lethality, conditional deletion of ErbB2/B4 in the brain decreases pyramidal neuron dendritic spine maturation in the cortex and hippocampus without affecting the gross dendrite morphology.Spine density is significantly reduced in cortical pyramidal neurons when ErbB4 is conditionally deleted in these neurons and synaptic spine size and density is likewise reduced in CA1 hippocampal pyramidal neurons by RNAi knockdown of ErbB4 in these neurons.While impaired dendritic spine formation has not been consistently observed following ErbB4 deletion in pyramidal neurons, deletion of ErbB4 in parvalbumin-positive GABAergic interneurons results in reduced spine density on hippocampal pyramidal neurons.Decreased dendritic spine density and impaired growth and elaboration of the basal dendrites of cortical pyramidal neurons have been reported in mice with targeted disruption of type III Nrg1.Retroviral knockdown of Nrg2 in granule cells impairs dendrite growth and branching from these neurons in vivo.As well as interfering with the growth and elaboration of cortical pyramidal dendrites, in further work it will be interesting to ascertain whether deletion of Nrg4 affects dendrite spine density and maturation and the functional properties of synapses.Moreover, we focused on neocortical pyramidal neurons in our current study.Given the widespread expression of Nrg4 mRNA in multiple brain regions in the brain of newborns, it will be interesting to explore the consequences of Nrg4 deletion more widely in the developing brain.To ascertain whether the dendrite phenotype observed in developing Nrg4−/− mice in vivo is replicated in vitro and can be rescued by NRG4 treatment, we set up dissociated neocortical cultures from Nrg4−/− and Nrg4+/+ littermates.Cultures were established from the E16 cerebral cortex, a stage at which the predominant neuron type in culture is the pyramidal neuron, as shown by staining our cultures for Emx1, a homeodomain protein that is specifically expressed by pyramidal neurons in the developing cerebral cortex. 
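Before the culture results that follow, a minimal sketch of the Sholl quantification used above for the Golgi material and below for the cultured neurons: traced neurite paths are reduced to counts of intersections with concentric circles of increasing radius around the soma. The traced coordinates here are hypothetical stand-ins for Simple Neurite Tracer output, not actual data.

```python
# Minimal Sholl counting: number of times traced neurite segments cross concentric
# circles centred on the soma (all coordinates below are hypothetical, in micrometres).
import numpy as np

def sholl_intersections(segments, soma, radii):
    """segments: list of (N_i, 2) arrays of traced x,y points; returns crossing counts per radius."""
    counts = np.zeros(len(radii), dtype=int)
    for path in segments:
        d = np.linalg.norm(np.asarray(path) - np.asarray(soma), axis=1)
        for i, r in enumerate(radii):
            # a crossing occurs wherever two consecutive traced points straddle the circle of radius r
            counts[i] += np.sum((d[:-1] - r) * (d[1:] - r) < 0)
    return counts

soma = (0.0, 0.0)
radii = np.arange(10, 160, 10)                                    # 10 um steps, a typical Sholl spacing
segments = [np.array([[0, 0], [30, 5], [60, 20], [90, 45]]),      # hypothetical apical path
            np.array([[30, 5], [35, 40], [40, 80]]),              # hypothetical branch
            np.array([[0, 0], [-25, -10], [-55, -30]])]           # hypothetical basal path

print(sholl_intersections(segments, soma, radii))
```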
>80% of cells in our cultures were positive for Emx1 after 3 days in culture.An added advantage of studying cultured pyramidal neurons is that dendrites and axons can be distinguished and studied separately.After 3 days in culture, the single long axon that emerges from these neurons is clearly distinguishable from the multiple, short dendrites.After 9 days in culture, the axon remains the longest process and MAP-2-positive dendrites become well-developed.Pyramidal neurons cultured from Nrg4−/− mice were clearly smaller and less branched compared with those cultured from Nrg4+/+ littermates in both 3 day and 9 day cultures.Quantification of axon length and total dendrite length in the arbors of individual pyramidal neurons revealed significant reductions in axon and dendrite length in neurons cultured from Nrg4−/− mice compared with those cultured from Nrg4+/+ littermates.Quantification of the number of branch points in the dendrite arbors of 9 day cultures also revealed a significant reduction in cultures established from Nrg4−/− mice compared with Nrg4+/+ littermates.The differences in the size and complexity of neurite arbors of neurons cultured from Nrg4−/− and Nrg4+/+ littermates were reflected in the Sholl plots of these neurons, which provide graphic illustrations of neurite length and branching with distance from the cell body.The reductions in axon and dendrite length and dendrite branching in pyramidal neurons cultured from Nrg4−/− mice were completely restored to wild type levels by treatment with recombinant NRG4.These results show that the pyramidal neuron phenotype observed in vivo in NRG4-deficient mice is replicated in cultured neurons and is rescued by soluble NRG4.Our findings also raise the possibility that NRG4 plays a role in regulating the growth of pyramidal axons as well as being required for the growth and branching of pyramidal dendrites.However, because the full extent of pyramidal neuron axons cannot be reliably discerned in Golgi preparations, we cannot definitively conclude that NRG4 enhances the growth of pyramidal axons in vivo.Although we have shown that recombinant NRG4 rescues the stunted phenotype of neocortical pyramidal neurons cultured from Nrg4−/− mice, we cannot exclude the possibility that the metabolic changes that occur in these mice contribute to the development of the in vivo phenotype that we observed in the neocortex.To ascertain the identity of the cells that produce NRG4 in neocortex, we studied NRG4 immunofluorescence in cortical sections and dissociated cortical cultures.In sections of the neocortex of P10 mice, NRG4 immunoreactivity was observed throughout the cortex.In dissociated E16 cortical cultures, double labelling for NRG4 and the pyramidal neuron marker Emx1 revealed that 82.1 ± 2.3% of the Emx1-positive cells were co-labelled with anti-NRG4 after 3 days in vitro.NRG4 immunoreactivity was not observed in cultures established from Nrg4−/− mice, demonstrating the specificity of the anti-NRG4 antibody.The majority of Emx1-positive cells were also co-labelled with antibodies to ErbB4, the principal NRG4 receptor.While ErbB4 is abundantly expressed by neocortical and hippocampal GABAergic interneurons, lower levels of ErbB4 have been convincingly demonstrated in pyramidal neurons of the neocortex and hippocampus.For example, ErbB4 immunoreactivity is observed in a subset of CamKII-positive cortical pyramidal neurons, staining which is eliminated in mice with conditional deletion of ErbB4 in the brain.The above observations
suggest that the majority of embryonic cortical neurons co-express NRG4 and ErbB4.To formally demonstrate co-expression, we double labelled E16 cortical cultures with antibodies to NRG4 and ErbB4.80.3 ± 5.5% of the cells exhibiting a neuronal morphology were double labelled with these antibodies.This finding increases our confidence that many cortical pyramidal neurons co-express NRG4 and ErbB4, at least in culture, and raises the possibility that NRG4 exerts its effects on pyramidal neurons in vivo at least in part by an autocrine/paracrine mechanism.NRG1 autocrine/paracrine signaling has been shown to promote remyelination following peripheral nerve injury and an ErbB4/NRG2 autocrine signaling loop has been demonstrated in inhibitory interneurons.Neuregulin autocrine signaling has also been implicated outside the nervous system, for example, in promoting the proliferation of certain cancers.In future work, it will be interesting to ascertain how the putative NRG4/ErbB4 autocrine loop is regulated in pyramidal neurons, which will provide us with a better understanding of why it is employed in developing brain.Our demonstration that NRG4 is a major physiologically relevant regulator of the growth and elaboration of pyramidal neuron dendrites in the developing neocortex raises a host of important questions for future work, especially what are behavioural consequences of the major cellular phenotype observed in NRG4-deficient mice and whether NRG4 contributes to the pathogenesis of particular neurological disorders.Given the widespread expression of NRG4 in brain, it will be interesting to investigate whether the lack of NRG4 elsewhere in the brain affects other neurons or circuits and whether axons are affected in addition to dendrites.This work was supported by grant 103852 from the Wellcome Trust.BP did the culture and Golgi studies, SW did the qPCR and AD supervised the work and wrote the paper.The authors declare no conflicts of interest. | Neuregulins, with the exception of neuregulin-4 (NRG4), have been shown to be extensively involved in many aspects of neural development and function and are implicated in several neurological disorders, including schizophrenia, depression and bipolar disorder. Here we provide the first evidence that NRG4 has a crucial function in the developing brain. We show that both the apical and basal dendrites of neocortical pyramidal neurons are markedly stunted in Nrg4−/− neonates in vivo compared with Nrg4+/+ littermates. Neocortical pyramidal neurons cultured from Nrg4−/− embryos had significantly shorter and less branched neurites than those cultured from Nrg4+/+ littermates. Recombinant NRG4 rescued the stunted phenotype of embryonic neocortical pyramidal neurons cultured from Nrg4−/− mice. The majority of cultured wild type embryonic cortical pyramidal neurons co-expressed NRG4 and its receptor ErbB4. The difference between neocortical pyramidal dendrites of Nrg4−/− and Nrg4+/+ mice was less pronounced, though still significant, in juvenile mice. However, by adult stages, the pyramidal dendrite arbors of Nrg4−/− and Nrg4+/+ mice were similar, suggesting that compensatory changes in Nrg4−/− mice occur with age. Our findings show that NRG4 is a major novel regulator of dendritic arborisation in the developing cerebral cortex and suggest that it exerts its effects by an autocrine/paracrine mechanism. |
31,477 | Proteome response of Phaeodactylum tricornutum, during lipid accumulation induced by nitrogen depletion | In the last few decades there has been a growing interest in developing microalgae as the third generation biofuel feedstock .However, in order to develop economically viable processes for biofuel production using microalgae, a greater understanding of microalgal metabolism and its organization in effecting the accumulation of biofuel precursors is necessary.One of the most widely employed strategies that triggers the storage of energy reserves in microalgae is nitrogen limitation or depletion in the growth medium, which has been described for several species .Recent studies have attempted to further understand this stress at the ‘-omic’ level, primarily using the model algal species Chlamydomonas reinhardtii.However, given the diverse lineage of organisms classified under ‘microalgae’ , such investigations are required in other lineages to develop a broader understanding of biofuel precursor synthesis and accumulation.Diatoms play a significant role in the global carbon cycle, accounting for ~ 20% of total photosynthesis , and are of ecological significance.In addition, diatoms are also very interesting for conducting studies in algal physiology and applied phycology.Specifically, the marine diatom Phaeodactylum tricornutum has been used for aquaculture and as a model for cell morphological investigations .The marine nature of this organism is also of interest as a biofuel crop, as it allows for surmounting the water resource limitations associated with fresh water cultivations .In this sense, P. tricornutum has been recommended as a favorable species for biodiesel production, with high lipid content and lipid productivity being reported , as well as having suitable lipid profiles for the derivation of biodiesel with desirable octane rating, iodine number and cloud point.The fact that its genome is sequenced , with descriptive information available in UniProt and KEGG, makes this species an excellent model organism for studying diatom based biofuel production .As with many other microalgal species, P. tricornutum has been shown to increase lipid content in response to nitrogen stress .Therefore, it is an excellent candidate to investigate the metabolic effect of the nitrogen trigger in diatoms, allowing its comparison with previous investigations from other taxonomic groups, such as Chlorophyta, and enabling a broader understanding of lipid accumulation in microalgae under this condition.The effect of nitrogen stress has been examined previously at the molecular level in P. 
tricornutum, but this has been predominantly at the transcriptomic level using microarrays and RNAseq .In these investigations, changes in the proteome has been inferred from transcript expression profiles.Such approaches only provide assessment of the transcriptional control, disregarding the fact that both translational and degradation controls also affect the amount of protein present inside the cell .This is of particular relevance in a nitrogen stress environment where protein degradation, as a way of nitrogen recovery, may play a significant role in fulfilling cellular nitrogen demands .Hence, transcriptomic studies themselves cannot be relied upon to represent the true protein cellular levels .These should either be supported with targeted protein analysis, such as western blots or multiple reaction monitoring, or a global proteomic investigation.Whilst several proteomic investigations have been published in the Chlorophyta , there is limited information in other phyla.Within diatoms, Thalassiosira pseudonana and P. tricornutum have been the species most investigated.Some of these studies have referred to nitrogen stress in some form .Among these studies, the recent investigation by Ge et al. reported proteomic changes using isobaric tags for relative and absolute quantitation.iTRAQ utilizes amine linking isobaric tags to allow quantitative comparison of numerous proteins in an unbiased way and has become a popular tool for proteomics, being a major improvement compared to 2D SDS PAGE gel methodology .Proteins detected in the work by Ge et al. showed an increase in the carbohydrate metabolic processes and branched-chain amino acid catabolism, and a decrease in enzymes involved in cellular amino acid biosynthesis and photosynthesis.However, the proteomic analysis was done when P. tricornutum growth was well advanced and lipid accumulation was triggered by the natural depletion of nitrogen in the medium after 60 h of growth.In that kind of setting, the physiological state of P. tricornutum would be the result of the simultaneous change of other components in the medium in addition to the nitrogen concentration, and therefore, the observed proteomic changes could not be solely attributed to the nitrogen limitation.In the present analysis we aimed to study the effect of nitrogen starvation as the sole trigger of lipid accumulation in P. tricornutum by controlled removal of this key element from the culture medium, and by observing the changes relatively earlier than other investigations so far.The dynamics of P. 
tricornutum proteome reorganization were analysed using the iTRAQ methodology at 24 h after nitrogen removal, when lipid production in the nitrogen starved culture, compared to the nitrogen replete control conditions, was noticed to be at its highest, and when lipid accumulation appeared to take precedence over carbohydrate accumulation.The choice of the time point to analyse was based on the criteria to observe changes early enough under nitrogen depletion, but sufficiently delayed so as to differentiate the changes attributable to carbohydrate accumulation.We believe this aspect has not been addressed in previous investigations on the subject and would offer a more informed access to the relevant metabolic changes.Through the use of this mass spectrometry based proteomic quantification method and the unique design mentioned above, we aimed to increase current understanding of the relationship between nitrogen stress and lipid accumulation within microalgae.The results are also compared with previous analysis in C. reinhardtii, to gain insights into metabolic differences and similarities between different taxonomic affiliations.The ultimate goal is to acquire a better knowledge of the universality of the molecular mechanisms underlying the induction of lipid accumulation in microalgae that will lead to improved strategies for biofuel production from microalgae.P. tricornutum was obtained from the Culture Collection of Algae and Protozoa.F/2 + Si medium was prepared as described by CCAP diluting in seawater made with 33.6 g Ultramarine synthetic salts per liter.Cultures were grown in either F/2 + Si medium or F/2 + Si medium omitting sodium nitrate.P. tricornutum was cultured in 250 mL bubble columns sparged with air at 2.4 L min− 1.Filtered air was first passed through sterile water for humidification, before being introduced by silicone tubing to the bottom of the column providing both mixing and gas transfer.The top of the bubble column was sealed using a foam bung.The columns were placed in a water bath maintained at 25 °C and under 24 h continuous lighting with two side facing halogen lamps).The lamps were placed horizontally across a series of columns.This arrangement resulted in an average light intensity of 200 μE m− 2 s− 1 for each column that varied by ± 50 μE m− 2 s− 1 along the length of the column, as measured using a Quantum Scalar Laboratory Radiometer in a water filled column.All columns in the experimental set-up received similar light exposure along the lines indicated above, such that the average for each column fell within sufficient to saturating light intensity for P. 
tricornutum).Given that a considerable culture volume was required for proteomics, two batch cultures were carried out for each condition in triplicate.The first batch was used to generate sufficient biomass for profiling chlorophyll a, carbohydrate and lipid profiles, and the second batch was used to generate the biomass for proteomics.The biochemical analyses were also carried out on a single time point from the second batch to ensure comparability of the batches.For both batches, the cultures were grown in nitrogen replete medium for 48 h reaching an optical density > 0.4.Culture from four columns was then pooled and the combined optical density used to calculate the culture volume required for giving an OD750nm of 0.2 upon re-suspension to 250 mL.The calculated culture volume was then harvested from the pooled culture by centrifugation at 3000 g for 5 min and resuspended in F/2 + Si medium with or without nitrate to generate the nitrogen replete and deplete treatments, respectively.These treatments were then sampled at 0, 6, 12, 18, 24, 36, 48, and 72 h, post resuspension, in the first batch to generate the biochemical profiles.Similarly sampling was done at 24 h for proteomic analysis and 72 h for biochemical analyses, with the second batch.For the first three time points the sample volume was 20 mL, whilst it was 15 mL for the subsequent five time points, for each biological replicate, for each treatment.Culture samples were pelleted in a pre-weighed 1.5 mL eppendorf tube by centrifugation at 3000 g for 5 min.Pellets were frozen before freeze drying for > 12 h in a Modulyo freeze drier.Dried samples were weighed to determine the dry cell weight and stored at − 20 °C.Chlorophyll a, carbohydrate and lipid analysis were conducted in the stored samples using modified versions of the Wellburn , Anthrone and Nile red methods respectively, as described in Longworth et al. 
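The resuspension step described above reduces to a simple dilution calculation, V_harvest = OD_target × V_final / OD_pooled; a brief sketch follows, in which the pooled OD value is invented purely for illustration.

```python
def harvest_volume_ml(od_pooled, od_target=0.2, final_volume_ml=250.0):
    """Volume of pooled culture to pellet so that resuspension in
    `final_volume_ml` of fresh medium gives `od_target`; assumes optical
    density scales linearly with biomass over this range."""
    if od_pooled <= od_target:
        raise ValueError("pooled culture must be denser than the target OD")
    return od_target * final_volume_ml / od_pooled

# Illustrative pooled OD750 of 0.45 after the 48 h nitrogen-replete pre-culture:
print(f"harvest {harvest_volume_ml(0.45):.0f} mL of pooled culture per treatment column")
```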
.In the same manner, chlorophyll a, carbohydrate and lipid analysis were conducted on the single time point sample collected from the second batch.There were thus four replicate data sets that were combined for the data analysis.Samples for microscopy were taken from the proteomic experimental set at 24 h post resuspension in each treatment condition by centrifugation of 1 mL sample at 3000 g for 5 min.After removing 950 μL, the pellet was resuspended in the remaining 50 μL and 10 μL was then placed on a glass slide with a cover slip on top.Visualization was done on an Olympus BX51 microscope and images captured by using ProgRes CapturePro 2.6.At 24 h after starting the treatments, two 50 mL aliquots were taken in each biological replicate and centrifuged at 3000 g for 10 min at 4 °C, then resuspended in 1 mL 500 mM triethylammonium bicarbonate buffer and transferred to protein low bind tubes.Samples were then stored at − 20 °C till all harvests were completed.Protein extraction was achieved by liquid nitrogen grinding.Stored cell samples were resuspended with 500 μL 500 mM TEAB.Samples were immersed in a cooled sonication water bath for 5 min and subsequently ground using a mortar and pestle cooled by liquid nitrogen.Samples were then collected into a fresh protein low bind tube and then immersed in a cooled sonication water bath for a further 5 min and sonicated for two cycles with a Micro tip Branson sonifier.Subsequently, samples were centrifuged at 18,000 g for 30 min at 4 °C to separate the soluble and insoluble fractions.After quantifying using RCDC, 100 μg of protein was acetone-precipitated before being resuspended in 30 μL 500 mM TEAB with 0.1% sodium dodecyl sulphate."Proteomic samples were then reduced, alkylated, digested and labelled with the 8-plex iTRAQ reagents, as described in the manufacturer's protocol.To assess the proteomic changes occurring within P. 
tricornutum under nitrogen stress an 8-plex iTRAQ experiment was designed.iTRAQ labels 114, 113 and 119 were used for nitrogen replete biological triplicate cultures and 116, 117 and 118 for nitrogen depleted biological triplicate ones.Note that labels 115 and 121 were intended for analysing the samples from a silicon stress experiment.However, silicon was not effectively depleted and thus these proteomic results were incorporated into the nitrogen replete ones.High-resolution hydrophilic interaction chromatography was carried out using an Agilent 100-series HPLC.One iTRAQ labelled sample was resuspended in 100 μL buffer A).The resuspended sample was loaded onto PolyHydroxyethyl A column, 5 μm particle size, 20 cm length, 2.1 mm diameter, 200 Å pore size.With a flow of 0.5 mL min− 1 buffer A was exchanged with buffer B) to form a linear gradient as follows: 0% B, 0–15% B, 15% B, 15–60% B, 60–100% B, 100% B, 0% B.Fractions were collected every minute from 18 min through to 41 min followed by three, 3 min fractions to 50 min."The fractions were vacuum centrifuged, before being cleaned up using C18 UltraMicroSpin Columns according to the manufacturer's guidelines.RPLC-MS was conducted using an Ultimate 3000 HPLC coupled to a QStar XL Hybrid ESI Quadrupole time-of-flight tandem mass spectrometer, Framingham, MA, USA).Samples were resuspended in 20 μL buffer A before loading 9 μL onto a Acclaim PepMap 100 C18 column, 3 μm particle size, 15 cm length, 75 μm diameter, 100 Å pore size.With a flow of 300 μ min− 1, buffer A was exchanged with buffer B to form a linear gradient as follows: 3% B, 3–35% B, 35–90% B, 90% B, 3% B.The mass detector range was set to 350–1800 m/z and operated in the positive ion mode saving data in centroid mode.Peptides with + 2, + 3, and + 4 were selected for fragmentation.The remaining sample was subsequently injected in the same manner to acquire two RPLC-MS runs for each submitted fraction.Proteomic identifications were conducted using Mascot, Ommsa, X!Tandem, Phenyx, Peaks and ProteinPilot for searching against the Uniprot reference proteome for Phaeodactylum tricornutum.Each search was conducted with a decoy database formed using reversed sequences or randomized sequence.Searches were restricted to a peptide false discovery rate of 3% prior to decoy hits being removed and peptide spectral matches from the six search engines being merged using an R based script that was also used to remove those showing disagreement in terms of peptide assignment or protein identification between the search engines.Where protein groups were clustered, such as with Mascot, the most common identification between the search engines was selected.Separately, for quantification, the reporter ion intensities for each peptide spectral match were extracted and matched to the merged results."Thus only reporter ion intensities from PSM's matched by the above merging contributed to the protein reporter ion intensities, each PSM match having equal weighting whether identified by single or multiple search engines.Variance stabilization normalization, isotopic correction and median correction were performed on the label intensities before averaging by protein and performing a t-test between replicate conditions to determine significance and fold change.KEGG analysis was derived using the KEGG “Search&Color Pathway” tool .Proteins with a significant positive fold change were labelled with “blue” whilst proteins with a significant negative fold “red”.Gene ontology annotations were identified using the 
functional annotation tool DAVID .The GO terms were then grouped into biological concepts as shown in Supporting Information Table S1.To determine the relative change, the number of proteins identified as increasing within a class was divided by the number of proteins identified as decreasing with the change being log transformed.This provides an observation of the relative change observed in each species balanced on 0 for each grouping of GO terms.The assessment of P. tricornutum biochemical changes under the exclusive influence of nitrogen deprivation is shown in Fig. 1.Here, the ratio of the relative biomass normalized response of the variables under nitrogen depletion with respect to the control can be studied.As can be seen from the plot, both carbohydrates and lipids are produced at higher levels under nitrogen depleted conditions compared to the replete scenario, in the initial stages of the exposure.The carbohydrate levels peak initially reaching a maximum of 3 fold increase under nitrogen depleted condition.Neutral lipid levels are significantly higher in relative terms at all times, and peak later than carbohydrates, at 24 h.This initial increase in carbohydrates followed by increase in lipids is as was observed in C. reinhardtii under nitrogen stress .As can be seen from the upper panel of the figure, the ratio of chlorophyll a response decreased rapidly over the first 24 h.This was confirmed by the visible decrease in chloroplast content in the nitrogen depleted treatment as observed under the microscope.Considering the results observed in Fig. 1 and in order to investigate changes in the proteome associated with the lipid accumulation, a sampling point of 24 h post resuspension in nitrogen free medium was chosen for conducting the proteomics analysis.The chosen time point is one where the lipids were being accumulated at a rate higher than in the control condition, but one where the relative carbohydrate accumulations were minimal, suggesting a switch in resources from carbohydrate accumulation to lipid accumulation.A snapshot of metabolism at this time point can be considered to reflect changes that are more relevant to lipid accumulation than those attributable to carbohydrate accumulation.To ensure culture comparability to the biochemical profile data set, samples for biochemical and microscopy analysis were also taken along with those for proteomic analysis at 24 h post resuspension.A t-test showed a statistically significant increase in carbohydrates and lipids when cultures were under nitrogen stress for 24 h.
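The reporter-ion quantification route described above (per-protein averaging of the normalised channel intensities, followed by a t-test and fold change between replicate conditions) can be sketched as follows; the channel assignments are taken from the experimental design, but the column names, the pandas/scipy calls and the use of Welch's t-test are illustrative assumptions rather than the exact pipeline used.

```python
import pandas as pd
from scipy import stats

REPLETE = ["113", "114", "119"]   # iTRAQ channels of the nitrogen-replete replicates
DEPLETE = ["116", "117", "118"]   # iTRAQ channels of the nitrogen-depleted replicates

def protein_level_changes(psm_table: pd.DataFrame) -> pd.DataFrame:
    """Collapse a PSM-level reporter-ion table to protein level and test
    deplete versus replete channels.

    `psm_table` is assumed to hold one row per peptide-spectrum match with a
    'protein' column and one already-normalised intensity column per channel.
    """
    proteins = psm_table.groupby("protein")[REPLETE + DEPLETE].mean()
    rows = []
    for protein, row in proteins.iterrows():
        replete, deplete = row[REPLETE].values, row[DEPLETE].values
        _, p_value = stats.ttest_ind(deplete, replete, equal_var=False)
        rows.append({"protein": protein,
                     "fold_change": deplete.mean() / replete.mean(),
                     "p_value": p_value})
    result = pd.DataFrame(rows)
    result["significant"] = result["p_value"] < 0.05
    return result
```

Likewise, the relative change reported for each GO grouping reduces to a log-transformed ratio of the number of proteins increasing versus decreasing in that grouping; the log base below is an arbitrary choice and the example counts are invented.

```python
import math

def go_relative_change(n_increased, n_decreased):
    """log2 ratio of proteins increasing vs. decreasing within a GO grouping;
    positive values mean the grouping is biased towards increased abundance,
    and 0 means a balanced response."""
    if n_increased == 0 or n_decreased == 0:
        raise ValueError("both counts must be non-zero for a finite log ratio")
    return math.log2(n_increased / n_decreased)

# Invented counts for two groupings, for illustration only:
for grouping, up, down in [("photosynthesis", 4, 21), ("glycolysis", 12, 3)]:
    print(f"{grouping:>15}: {go_relative_change(up, down):+.2f}")
```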
Conversely, pigmentation showed a significant reduction in the nitrogen depleted treatment.Concurrent with proteomics and biochemical analysis, 1 mL of culture was also prepared for microscopy.The nitrogen stressed cells were observed to have reduced pigmentation, which is in accordance with the observations made for the chlorophyll a and carotenoid concentrations.Within the proteome dataset, 23,544 spectra were matched to peptide and protein without disagreement among the six search engines, each of which were limited to a false discovery rate of 3% at the peptide level.The derived PSM list represented 7777 unique sequences matched to 1761 proteins of which 1043 had two or more unique peptides.To assess sample arrangement, hierarchical clustering and principal component analysis was performed on the merged PSM list.From this analysis, it can be seen that the nitrogen stress replicates cluster apart from the replete cultures and this separation is responsible for > 80% of the variation between the samples.The list of PSM was then processed to provide the degree and significance of the change between the two treatments.Between the nitrogen replete and deplete conditions, 645 significant changes were observed, which corresponds to 62% of the confidently identified proteins.Though double that observed by Ge et al. this high level of statistically significant change is comparable with other studies of nitrogen stress in algae.For biological description two sets of statistically significant proteins were used.The 645 changes identified as showing a significant difference were used for pathway and gene ontology analysis, which requires deduction of hypotheses based on protein clusters rather than individual observations.A more stringent significance level comprising 498 differences was used for direct hypothesis derivation in Table 1.Significant changes between nitrogen replete and deplete conditions were used to colour KEGG maps.The overall map of the metabolism is shown in Fig. 4.Given limited annotation of KEGG available for P.
tricornutum, the most significant changes were further investigated individually.Within the dataset, the abundance of 498 confidently identified proteins was significantly altered.These were matched to protein names using UniProt.Discounting those described as ‘Predicted Protein’ or ‘Predicted protein’ 193 identifications with descriptive names were grouped using the protein name and information provided on the UniProt entry page.Both KEGG and individual analysis showed significant trends in the reorganization of P. tricornutum proteome under nitrogen stress, mostly towards maximizing the use of the remaining nitrogen.Among others, those pathways involved in increasing the availability of the intracellular nitrogen and minimising its loss were favoured.Amino acid synthesis was reorganized between the different families, as is suggested by the decrease in the synthesis of the families of the aromatic-like, aspartate-like and pyruvate-like amino acids.There was, however, observation of an increase in serine tRNA, suggesting that whilst decreased in general, proteins associated with some amino acid synthesis may have increased.In contrast to previous reports that suggest a general decrease of amino acid synthesis in P. tricornutum, grouping the amino acid production based on their type did not reveal any meaningful trend.The ample coverage of the decrease of ribosomal proteins confirmed the reduction of protein synthesis associated with nitrogen stress that has been reported previously .This would be linked to the cellular need to economize the use of the available nitrogen.Given the nature of the stress condition, it was also expected that nitrogen scavenging would be strongly promoted within the cell as a way of supplying nitrogen demands.In this sense, focusing on the nitrogen metabolism pathway, proteins with greater abundance in the nitrogen depleted treatment included aliphatic amidase and formidase, both of which are known to free ammonia from other macromolecular compounds .Conversely, nitrate reductase, responsible for converting the available nitrate in the medium to nitrite in the initial step of nitrate assimilation, was decreased, contrasting with recent studies in P. tricornutum, likely due to the fact that in these studies the effect of nitrogen limitation rather than nitrogen starvation was addressed.Similar down-regulation has been reported for the diatom T. pseudonana under nitrogen starvation and iron stress that also coincided with an increase of the enzyme urease in the former, matching the increased abundance of the urea transporter found in this study.The possession of a complete urea cycle by the diatoms has been suggested to be a way of increasing the efficiency of nitrogen re-assimilation from catabolic processes .An increased abundance of the proteins involved has been reported to be linked to the increase in the glycolytic pathway of P. 
tricornutum facing nitrogen deprivation .In conclusion, this increase in nitrogen scavenging when seen with the reduction in the nitrogen assimilation enzyme suggests a more active rather than a passive response to the nitrogen stress focused on intracellular nitrogen recycling.The possession of this active nitrogen scavenging strategy might also be demonstrated by the increases in proteasome proteins and the changes of endocytosis and phagosome.KEGG analysis showed an increase in ‘Endocytosis’ and ‘Phagosome’ activity under nitrogen stress."Such increases in phagosomal activity have previously been reported for other algae under nitrogen stress, for example in Bihan et al.'s proteomic study on Ostreococcus tauri, This would suggest a scavenging response of microalgae under nitrogen deprivation.In this sense, when facing reduced nitrogen availability, P. tricornutum cells might enhance the intake and processing of extracellular debris and perhaps attempts to consume other organisms such as bacteria to obtain additional nitrogen supplies.Thus, nitrogen stress could be suggested to induce phagotrophy , In addition to external nitrogen retrieval, many of the proteins associated with endocytosis and phagocytosis have been reported to be similarly involved in autophagy .Transcriptional evidence of a link between nitrogen stress and autophagy induction has been previously shown in the chlorophyta Neochloris.Pathways associated with fatty acid metabolism were also significantly changed under nitrogen stress, coinciding with the previously described enhancement in the lipid content.Increases in KEGG pathways included ‘biosynthesis of unsaturated fatty acids’, ‘fatty acid biosynthesis’ and “short chain fatty acids”; and a relative decrease was observed in ‘fatty acid elongation’ and ‘fatty acid metabolism’.Coinciding with previous reports , individual protein changes also displayed an active dynamism of the proteome involved in this metabolic pathway, implying an increased abundance of enzymes key to lipid biosynthesis, such as acyl-carrier proteins and malonyl-CoA:ACP transacyclase.Additionally, a decrease in fatty acid catabolism related proteins was found, suggesting that a down-regulation in the degradation of fatty acids might be a key metabolic route for explaining lipid accumulation under nitrogen stress conditions.These results have been shown previously and are supported by recent reports of the preservation of existing triacylglyderides after nitrogen stress situations .Similar dynamism of the proteins related to the fatty acid synthesis and degradation has been reported previously for Chlorophyta .These results contradict those shown by the transcriptomic study conducted in P. tricornutum by Valenzuela et al. , highlighting the inappropriateness of using transcriptomic data to infer proteomic changes, as has been previously reported .The discord between these findings might suggest a translational control for proteins associated with fatty acid biosynthesis and degradation that would not be necessarily reflected at the transcriptomic level.The photosynthetic pathway was significantly down-regulated under nitrogen stress in P. 
tricornutum, as observed by a decrease in the relative abundance of the most important enzyme in the carbon fixation pathway, and the general decreased abundance of key proteins of photosynthesis such as the light harvesting proteins and the photosynthetic electron transport system.This observation matches a similar trend detected by the KEGG analysis and the decrease in pigment content described previously.Further, it is in agreement with previous studies both in P. tricornutum and other algae, supporting ample evidence on the close linkage between carbon and nitrogen metabolism .Such degradation of the photosynthetic pathway would be due to the fact that photosynthetic proteins have a high content of nitrogen, and therefore, under conditions of nitrogen scarcity, cells tend to actively down-regulate their synthesis in order to preserve the little nitrogen that is left and to divert it to the synthesis of those proteins that are essential for cell maintenance .The reorganization of the proteome under nitrogen starvation would also have an impact on the central energy metabolism.Acetyl CoA plays an important role in the carbon partitioning for oil accumulation within the cell, and therefore, metabolic pathways would be redirected to increase of the availability of this metabolite in the cell.In addition, fatty acid synthesis requires high levels of ATP and NADPH that would be generated through a switch from a gluconeogenic to a glycolytic metabolism."In this sense, in our study an increased abundance of those proteins involved in the Kreb's cycle, the glycolysis and the oxidative pentose phosphate pathways were observed.Conversely, those enzymes regulating the glycolytic and the gluconeogenic pathways reported decreased abundance, confirming previous reports for diatoms and cyanobacteria under nitrogen stress .Finally, nine proteins with antioxidant properties were increased under nitrogen stress, suggesting a change in the concentration of reactive oxygen species within the cellular environment.An increase in ROS has been reported to be a major source of cellular damage under abiotic and biotic stresses in plants .Specifically, ROS increases under nitrogen starvation conditions are closely linked with the malfunctioning of the photosynthetic pathway.Nitrogen uptake and metabolism require reducing equivalent power and ATP that under nitrogen deprived conditions tend to accumulate, causing metabolic imbalance and leading to the generation of oxidative stress.Nitrogen is also required for the synthesis of photosynthetic proteins, especially light harvesting proteins, and, as has been explained before, its lack tends to slow-down the electron flow through the photosynthetic apparatus, in turn causing the production of more ROS.Therefore, it can be hypothesized that the observed increase in antioxidant proteins is a mechanism used by P. tricornutum to limit this oxidative stress damage, as has been reported for algae facing other or similar stressful conditions .Another indication of the stress to which P. 
tricornutum was subjected to under nitrogen starvation is the increased abundance of the heat-shock protein HSP20.Heat shock protein expression has been reported to be triggered in microalgae growing under stressful conditions , including nitrogen stresses .However, it is also interesting that, while HSP20 was increased, other heat shock proteins, which have also been described to be present in stress responses, showed an opposite pattern, suggesting their possible differential role in the cell.To investigate differences in the proteome response under nitrogen stress between very different microalgae taxonomic affiliations such as Bacillariophyceae and Chlorophyceae, the results obtained in this study for P. tricornutum and the published earlier work of ours for C. reinhardtii were compared.Although there were differences in terms of sampling time points and culture conditions between the studies, both were conducted under active increase of cellular lipid content and thus this comparison is of interest.As far as we know this is the first study aiming at such comparison under situations of nitrogen starvation.Observing the changes in the GO groupings did not show any strong unidirectional change between the two species, however observation of the relative changes in proteins captured showed some differences.The direction of the protein abundance change in the number of proteins was the same for both species with two exceptions, those proteins that are involved in energy metabolism and protein degradation.Both showed increases in C. reinhardtii and decreases in P. tricornutum.P. tricornutum also demonstrated more consistent protein abundance changes involved in photosynthesis, pigment metabolism, carbohydrates metabolism, central energy metabolism and glycolysis than C. reinhardtii; suggesting that the reorganization of the proteome in this species towards these metabolic pathways was more important.Of special note is the markedly larger number of proteins involved in the photosynthetic pathway that were reduced in abundance in P.
tricornutum.This might be due to the differences in the photosynthetic machinery between both species in terms of energy dissipation pathways and photosynthetic components of the electron transport system.Accessory pigments are very important in diatoms for dissipating excess energy due to the photosynthetic activity and, given their high nitrogen content, tend to be scavenged very early in the onset of nitrogen starvation .The larger number of proteins with increased abundance in central energy metabolism, mainly the GO terms acetyl-CoA and acyl-CoA metabolic processes, and glycolysis in P. tricornutum also suggest the relevance of these pathways in the cellular response to nitrogen starvation.These are likely involved in increasing the availability of the acetyl-CoA, chemical energy and reductant power required for lipid biosynthesis.The relative higher increase of glycolysis and carbohydrate catabolism also might indicate that P. tricornutum tends to mobilize carbon stores rather than increase them under nitrogen scarcity, as has been previously reported .Conversely, C. reinhardtii had more proteins regulated that relate to cellular homeostasis, respiration, phosphorous metabolism, DNA metabolism and cell organization compared to P. tricornutum; of practical note are the relatively large number of proteins involved in respiration and cellular organization.In our previous work C. reinhardtii was grown in the presence of organic carbon and the observed higher number of respiratory proteins could be explained by the diversion of the metabolism towards heterotrophy as a consequence of the compromise of the photosynthetic pathway in conditions of nitrogen scarcity.This switch from photoheterotrophic to heterotrophic metabolism has been described before for this species under conditions of Iron deprivation .The respiratory pathway would be used for generating chemical energy and reductant power needed for lipid biosynthesis.Induction of gametogenesis in C. reinhardtii under nitrogen stress has been reported , and the active increased abundance of cellular organization proteins observed here might play an important role in such physiological response.Finally, C. reinhardtii seemed to be more susceptible than P. tricornutum to the oxidative stress caused by nitrogen starvation, as suggested by the observed relatively higher number of oxidative stress proteins.Oxidative stress increase in microalgae under nitrogen starvation conditions has been described widely in the past , and has been related to the damage of the photosynthetic electron system proteins due to the nitrogen scarcity.However, the results of our comparison would suggest that there would be differences in both species in the way they counteract the oxidative stress damage, with a higher protein response in C. reinhardtii that might be associated to a different source of oxidative stress.While P. tricornutum remained photoautotrophic when growing under nitrogen starvation and therefore mostly the oxidative stress was caused by an inefficient functioning of the photosynthetic pathway and the xanthophyll cycle, C. 
reinhardtii growth conditions were mixotrophic and in conditions of Nitrogen starvation would switch towards a heterotrophic growth and the oxidative stress associated to the increase in respiration would be added to that caused by the damaged photosynthetic pathway.It must be noted that the above comparison is not comprehensive, taking into consideration all the relevant physiological and biological differences between the organisms and cultivation conditions.Nevertheless, it provides vital clues that will enable us to explore and develop a better understanding of microalgal metabolism needed for developing viable strategies for bioenergy generation.In the present study, the biochemical and proteomic changes associated with nitrogen starvation as a trigger for enhancing lipid production was addressed in P. tricornutum and compared with those previously described for C. reinhardtii.From biochemical analysis, it can be concluded that nitrogen stress increases energy storage molecules in P. tricornutum.This increase would be coupled with a decrease in photosynthetic pigments.We examined the proteome at an earlier stage of exposure to exclusive nitrogen starvation than has been reported, but at a time point when changes attributable to lipid accumulation can be captured in preference to those due to carbohydrate accumulation.Through the use of an iTRAQ methodology, 1043 proteins were confidently identified, of which 645 were shown to be significantly altered abundance under nitrogen stress.This represents a 17-fold increase with respect to the number of proteins detected in previous nitrogen stress assessments of P. tricornutum, and as such provides greater understanding of the effects of nitrogen stress in this model diatom species.The extent to which the proteome changes in response to nitrogen stress has been demonstrated to be > 60%, with over 60% of the confidently identified proteins being significantly changed in abundance.Several patterns of response have been identified within the proteome highlighting increased scavenging of nitrogen and the reduction of lipid degradation, as well as stimulation of central energy metabolism in preference to photosynthetic pathways.The GO comparison of P. tricornutum and C. reinhardtii conducted here highlights important differences in the degree of protein investment among the different metabolic pathways.In this sense, under nitrogen starvation, whilst P. tricornutum might reorganize its proteome by largely decreasing the number of photosynthetic proteins and increasing the ones involved in central energy metabolism, C. reinhardtii appears to invest in cellular reorganization, respiration and oxidative stress response.The following are the supplementary data related to this article.Reference table for Gene ontology groupings.Peptide table of six merged search engines.Fold change and significance table.Supplementary data to this article can be found online at http://dx.doi.org/10.1016/j.algal.2016.06.015.The authors declare no competing financial interest.JL contributed to the conception, design, data acquisition and drafting of the article, DW contributed in the data acquisition and drafting of the article, MHO contributed to the design, and drafting of the article with critical insights and input, PCW contributed to the design, conception and drafting of the article, and SV contributed to the conception, design, supervision and drafting of the article.All authors give their final approval of the submitted manuscript. 
| Nitrogen stress is a common strategy employed to stimulate lipid accumulation in microalgae, a biofuel feedstock of topical interest. Although widely investigated, the underlying mechanism of this strategy is still poorly understood. We examined the proteome response of lipid accumulation in the model diatom, Phaeodactylum tricornutum (CCAP 1055/1), at an earlier stage of exposure to selective nitrogen exclusion than previously investigated, and at a time point when changes would reflect lipid accumulation more than carbohydrate accumulation. In total 1043 proteins were confidently identified (≥. 2 unique peptides) with 645 significant (p < 0.05) changes observed, in the LC-MS/MS based iTRAQ investigation. Analysis of significant changes in KEGG pathways and individual proteins showed that under nitrogen starvation P. tricornutum reorganizes its proteome in favour of nitrogen scavenging and reduced lipid degradation whilst rearranging the central energy metabolism that deprioritizes photosynthetic pathways. By doing this, this species appears to increase nitrogen availability inside the cell and limit its use to the pathways where it is needed most. Compared to previously published proteomic analysis of nitrogen starvation in Chlamydomonas reinhardtii, central energy metabolism and photosynthesis appear to be affected more in the diatom, whilst the green algae appears to invest its energy in reorganizing respiration and the cellular organization pathways. |
31,478 | Comparison and clinical utility evaluation of four multiple allergen simultaneous tests including two newly introduced fully automated analyzers | The detection of allergen-specific IgE, along with the patient’s chief complaints and medical history, is diagnostically valuable for allergic diseases, such as allergic rhinitis, atopic dermatitis, and asthma .Although in vivo skin test has been traditionally used in the clinical environments, there are several limitations of in vivo skin test including error-prone results in patients with anti-histamine medication or skin diseases such as dermographism, possibility of subjective interpretation, and the lack of standardization for protocols .Therefore, in vitro allergen-specific IgE measurements have been developed using various principles of radioimmunoassay, enzyme immunoassay, fluorescent enzyme immunoassay, immunoblot, and chemiluminescent assay .Among the commercially available in vitro allergy tests, multiple allergen simultaneous tests have been continuously developed with the improvements in smaller amounts of serum consumption, shorter turnaround time, and wider spectrum of allergens included in the test .Since the difference in prevalence of allergic diseases according to age, sex, and ethnicity is prominent, the selection of multiple allergen screening panels should be modified in the context of geographical regions and race of the target populations .At the same time, the change of environmental substances in modern society must be considered for the progressive development of MAST assays .Moreover, there is no appropriate medical evidence to define any assay as the standardized reference method due to variability of allergen original materials, extraction methods, attachment processes, and detection techniques .Therefore, it is very difficult to analyze true sensitivity, specificity, positive predictive value, and negative predictive value of a specific MAST assay.Nevertheless, actual comparison of new MAST assay with currently used MAST assays can appropriately provide important information in the practical clinical settings.Recently, two fully automated analyzers with high-throughput were developed and introduced in the market; AdvanSure Allostation Smart II which is the upgraded version of previous AdvanSure AlloScreen by LG Life Science, and PROTIA Allergy-Q which was newly developed by ProteomeTech.Herein, we compared the diagnostic performances of these assays with two most commonly used MAST assays in Korea, today.In addition, we evaluated propensity of each assay to give positive results for certain allergen, which we defined as “positive propensity”.We randomly selected the study samples from MAST assay requested serum samples of patients who visited Severance Hospital with symptoms of allergy including urticaria, sneezing, and itching for diagnosis of allergic disease in all age ranges.Additionally, we excluded patients with chronic comorbid diseases such as autoimmunity, malignancy, chronic infection, and other immune-related diseases.Since two different panels were evaluated, we classified patients into two groups so that appropriate panel could be tested based on clinical symptoms and medical records.Due to the variety of allergen types included in the panel of four assays and lack of sufficient sample volume in some patients, different samples were analyzed by different numbers of analyzers with various combinations of allergens.Therefore, only pairs of matched allergens by the same sample were compared among 
four analyzers.Serum aliquots were tested by four different systems; AdvanSure AlloScreen, AdvanSure Allostation Smart II, PROTIA Allergy-Q, and RIDA Allergy Screen.All the test procedures were performed following the manufacturer's instructions.Although detection ranges varied among the four analyzers, results were identically classified into 7 levels and were interpreted as class 0–6 in all analyzers.We compared a pair of analyzers each time in order to maximize the comparison efficiency because different allergen lists are available from the four analyzers.Furthermore, we focused on comparison of two specific analyzers, because AdvanSure Allostation Smart II is the upgraded version of AdvanSure AlloScreen, both of which are developed by the same manufacturer.Afterwards, we compared two newly introduced analyzers with currently widely utilized assay as reference values.No standardized specific cut-off level for positive result is defined worldwide until today .Moreover, previous studies which compared various MAST assays utilized different cutoff levels.For instance, several studies used class 1 as the cutoff level for positive results , whereas class 2 was adopted as the cutoff level for positive results in other studies .Considering the natural characteristics of semi-quantitative results in MAST assays, comparison of different cut-off levels in the paired results might provide clinical clues for more precise diagnostic interpretation.Therefore, we applied cut-off levels of class 1, class 2, and class 3 as minimal requirement for positive results for all comparison analyses.We analyzed the concordance degree by calculating total agreement percentage following the same methodology used in a previous study ; total agreement percentage = (number of concordant results between the two assays) × 100/total number of results.Additionally, concordant positive rates were calculated with the proportions of agreement for positive responses because low frequency of positive results can affect the total agreement percentage.Furthermore, agreement of detection results between two analyzers was determined by Cohen's kappa analysis .Finally, the presence of propensity toward positive results in specific assay for certain allergen was determined when the difference between discrepant results accounted for over 10% of all pairs.For example, when assay A and assay B are compared for allergen C, (number of pairs positive only in assay A − number of pairs positive only in assay B) × 100/total number of results ≥ 10% can be interpreted as the positive propensity of assay A for allergen C compared to assay B.For all statistical analyses, we used MedCalc 11.0 and SPSS 18.0.The serum samples from a total of 53 and 104 patients were tested for food panel and inhalant panel in this study.Characteristics of study participants are summarized in Table 2.Although several patients presented multiple allergic symptoms, urticaria was the most common clinical feature for participants in food panel while allergic rhinitis was the most frequent clinical symptom for participants in inhalant panel.As mentioned earlier, different numbers of matched pairs were compared in each comparison analysis among four analyzers.When we compared qualitative results between AdvanSure AlloScreen and AdvanSure Allostation Smart II, we used class 2 as the cut-off level for positive result since the manufacturer suggested the possibility of class 1 result indicating insufficient clinical significance to trigger allergic progression.A total of 43 and 90 paired serum samples were tested for 39 and 41 allergens in food and inhalant panel, respectively.All allergens showed total agreement
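The paired-comparison statistics defined above can be computed directly from the matched class results; the sketch below assumes results coded as classes 0–6, reads the concordant positive rate as concordant positives over all pairs with at least one positive call, and uses scikit-learn's Cohen's kappa in place of MedCalc/SPSS, with invented example data.

```python
from sklearn.metrics import cohen_kappa_score

def compare_assays(classes_a, classes_b, cutoff=2):
    """Paired comparison of two MAST assays for a single allergen.

    `classes_a` and `classes_b` are matched lists of class results (0-6);
    a result is called positive when its class is >= `cutoff`.
    """
    pos_a = [c >= cutoff for c in classes_a]
    pos_b = [c >= cutoff for c in classes_b]
    n = len(pos_a)

    both_pos = sum(a and b for a, b in zip(pos_a, pos_b))
    both_neg = sum(not a and not b for a, b in zip(pos_a, pos_b))
    only_a = sum(a and not b for a, b in zip(pos_a, pos_b))
    only_b = sum(b and not a for a, b in zip(pos_a, pos_b))

    any_pos = both_pos + only_a + only_b
    return {
        "total_agreement_%": 100.0 * (both_pos + both_neg) / n,
        "concordant_positive_%": 100.0 * both_pos / any_pos if any_pos else None,
        "kappa": cohen_kappa_score(pos_a, pos_b),
        # a value of >= 10 % is read as a positive propensity of assay A over assay B
        "propensity_of_A_%": 100.0 * (only_a - only_b) / n,
    }

# Invented class results for one allergen in eight paired sera:
print(compare_assays([0, 3, 2, 0, 4, 1, 0, 2], [0, 3, 0, 0, 4, 0, 0, 1], cutoff=2))
```

Re-running the same comparison with cutoff=1 or cutoff=3 reproduces the cut-off sensitivity analysis described later in the text.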
percentages over 93.0% and 92.2% in food and inhalant panel, respectively, which indicates good concordance between old and new versions of AdvanSure assays.However, 6 allergens in food panel and 4 allergens in inhalant panel showed no concordant positive result, possibly due to rare frequency of specific IgE antibodies to these allergens among Koreans and restricted number of paired samples in this study.On the contrary, two most common allergens in both food and inhalant panels which were Dermatophagoides pteronyssinus and Dermatophagoides farina showed high total agreement percentages of over 95.0% and high agreement levels with kappa indices over 0.9.However, total agreement percentage and kappa index decreased to 93.0% and 0.8, respectively, for house dust, which was the third most common allergen.We evaluated concordance rate of two newly developed fully automated assays with results by RIDA Allergy Screen considered as the reference values in this study utilizing class 2 for the cut-off level for positive result.Total agreement percentages were over 90.0% in most allergens in both assays for food and inhalant panels.However, allergens with the most frequent positive results presented concordance rates ranging from 69.6% to 90.0% for both AdvanSure Allostation Smart II and PROTIA Allergy-Q in food panel as well as inhalant panel.Furthermore, several allergens which showed propensity toward positive result in specific assay were noticed in both comparison analyses.While AdvanSure Allostation Smart II and PROTIA Allergy-Q showed positive propensity for some allergens when compared with RIDA Allergy Screen, RIDA Allergy Screen did not show positive propensity for any allergens with 10% discrepant results.For evaluation of AdvanSure Allostation Smart II, three allergens with the highest positive propensity results were D. farina, D. pteronyssinus, and house dust in both food and inhalant panels.Similarly three highest positive propensity results for PROTIA Allergy-Q were observed in D. farina, D. pteronyssinus, and house dust in inhalant panel.However, D. 
farina and dog showed highest positive propensity results in food panel of PROTIA Allergy-Q.Interestingly, the allergen with largest class difference between AdvanSure Allostation Smart II or PROTIA Allergy-Q and RIDA Allergy Screen was pupa silk cocoon in food panel although it showed positive propensity of 10.0%.To evaluate the effects of various cut-off levels for positive result determination, we applied two more cut-off levels other than the conventional criteria of class 2 as minimal requirement for positive result; class 1 and class 3 as cut-off levels.Total agreement percentages and concordant positive rates were fairly influenced by application of both higher and lower cut-off levels.Since higher cut-off level led to more negative results, concordant positive rates decreased naturally.However, the changes of total agreement percentage according to the increase in cut-off level varied among allergens by different analyzers.Since different analyzers include various allergens, analyzer-specific allergens present diverse frequencies among patients.Among the accretional allergens introduced in Advansure Allostation Smart II, Acarus siro and apple in inhalant panel showed significant positive rates of 20.0% and 8.9%, respectively, when class 1 was utilized as cut-off level.When cut-off level was increased to class 2, these positive rates decreased to 12.2% and 4.4%, respectively.Because multiple allergen positive results might indicate cross-reactivity between similar allergens, frequency of patients with two or more positive results was analyzed according to four different analyzers.By application of class 2 as the cut-off level for positive result, AdvanSure AlloScreen and AdvanSure Allostation Smart II presented highest frequency of patients with multiple positive allergens with maximum positive allergen numbers of 23 and 34 in food panel and 28 and 32 in inhalant panel, respectively.During the last decade, there have been several remarkable introductions of new MAST assays by different manufacturers into the clinical field of allergic diseases.Accordingly, evaluation and comparison studies of these novel MAST analyzers were reported by few groups .Until today, a total of four MAST assays were frequently evaluated with each other and showed comparable clinical performances .Recently, Lee and colleagues presented favorable performance of newly developed PROTIA Allergy-Q .In this current trend, we evaluated four MAST analyzers including two newly developed and fully automated assays.This study is the first evaluation report for AdvanSure Allostation Smart II and only the second comparison study for PROTIA Allergy-Q.Also, our study is unique for evaluating upgraded version of specific assay to ensure the improvement by including both AdvanSure AlloScreen and AdvanSure Allostation Smart II.Based on our results, most results of comparison analyses presented good concordance levels by means of total agreement percentages over 90.0%.Satisfactory agreements were observed not only in the comparison between AdvanSure AlloScreen and AdvanSure Allostation Smart II, but also in the evaluation of AdvanSure Allostation Smart II and PROTIA Allergy-Q compared with RIDA Allergy Screen.Although four allergens with the most frequent positive results, which were D. farina, D. 
pteronyssinus, house dust, and storage mite, showed slightly lower concordance rates, these different results could be sufficiently overcome by careful interpretation of MAST results in association with clinical manifestations. One interesting finding we focused on in this study was the positive propensity of each analyzer. In the midst of the various available MAST analyzers with comparable diagnostic performance, it is important for laboratory physicians to recognize the unique propensity of each analyzer, which might easily lead to positive results for a particular allergen. Our study suggests that AdvanSure Allostation Smart II and PROTIA Allergy-Q are more sensitive, or more prone to report positive results, for three common allergens in both food and inhalant panels than RIDA Allergy Screen. Considering the frequencies of multiple positive results related to cross-reactivity among similar allergens as a possible explanatory mechanism, the positive propensity of each analyzer should be cautiously understood. Moreover, variations in the allergen extraction methods of different manufacturers might have caused this phenomenon of diverse positive propensity in each analyzer. Adding new allergens to the panel list is another issue for the future development of MAST analyzers. Candidate allergens should be assessed based on evidence for continuous and dramatic changes in the environment and socio-behavioral lifestyle of modern individuals. At the same time, a cost-effective approach is required for the choice of clinically efficient allergens with reference to the epidemiologic results of geographically characteristic allergen studies. Our results support the significant positive rate for Acarus siro among the Korean population, which is a unique allergen included only in the AdvanSure Allostation Smart II inhalant panel. Further investigations of Acarus siro as an inhalant allergen in the general population might highlight the advantage of AdvanSure Allostation Smart II. One of the most important approaches we performed in this study was the re-evaluation of cut-off levels in order to avoid false positive results. Besides the conventional cut-off level of class 2 as the minimal positive result criterion, we analyzed the changes of total agreement percentages and concordant positive rates according to a cut-off level decrease to class 1 or increase to class 3. Although the increase of the cut-off level seemed to make the clinical circumstance simpler and more concise by presenting only the definitive positive allergens, this modification resulted in lower concordant positive rates with the possibility of missing potentially critical allergens. On the other hand, a decrease of the cut-off level produced lower total agreement percentages in most allergens, which might obscure physicians from clear identification of clinically relevant allergens. A detailed evaluation with a similar approach for the optimal cut-off class should be conducted for each analyzer according to the specific regional frequency and distribution of allergens in the future. A critical limitation of this study was the use of the RIDA Allergy Screen assay as the reference value for the evaluation of the newly developed analyzers. Among the comparison studies published to date, most included the ImmunoCAP system for comparison analyses as an empirical reference method. However, the ImmunoCAP system is neither the official nor the definitive reference procedure for the measurement of allergen-specific IgE antibodies, despite its good reliability and reproducibility. While the ImmunoCAP system might be impractically expensive
for efficient clinical service in small- to medium-sized clinical laboratories, the RIDA Allergy Screen assay has been continuously evaluated and reported to show favorable clinical correlation not only with the ImmunoCAP system, but also with serum total IgE. We anticipated that an objective comparison between currently available MAST analyzers might provide sufficient information for clinical use in the practical medical field. In conclusion, AdvanSure Allostation Smart II maintained steady concordant performance in the upgrade process from AdvanSure AlloScreen, with a uniquely extended allergen list including Acarus siro, which showed certain positive rates. AdvanSure Allostation Smart II and PROTIA Allergy-Q presented favorable agreement performances with RIDA Allergy Screen, although positive propensities were noticed in some allergens. The conventional cut-off level of class 2 as the minimal positive result criterion appeared to be suitable for current MAST analyzers in clinical interpretation. All authors declare no conflict of interest. | Background: We compared the diagnostic performances of two newly introduced fully automated multiple allergen simultaneous tests (MAST) analyzers with two conventional MAST assays. Methods: The serum samples from a total of 53 and 104 patients were tested for food panels and inhalant panels, respectively, in four analyzers including AdvanSure AlloScreen (LG Life Science, Korea), AdvanSure Allostation Smart II (LG Life Science), PROTIA Allergy-Q (ProteomeTech, Korea), and RIDA Allergy Screen (R-Biopharm, Germany). We compared not only the total agreement percentages but also positive propensities among the four analyzers. Results: Evaluation of AdvanSure Allostation Smart II as an upgraded version of AdvanSure AlloScreen revealed good concordance with total agreement percentages of 93.0% and 92.2% in the food and inhalant panel, respectively. Comparisons of AdvanSure Allostation Smart II or PROTIA Allergy-Q with RIDA Allergy Screen also showed good concordance performance, with positive propensities of the two new analyzers for common allergens (Dermatophagoides farinae and Dermatophagoides pteronyssinus). The changes of cut-off level resulted in various total agreement percentage fluctuations among allergens by different analyzers, although the current cut-off level of class 2 appeared to be generally suitable. Conclusions: AdvanSure Allostation Smart II and PROTIA Allergy-Q presented favorable agreement performances with RIDA Allergy Screen, although positive propensities were noticed in common allergens. |
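To make the agreement statistics used in this comparison concrete, the sketch below computes the total agreement percentage, the concordant positive rate under a class >= 2 cut-off, and Cohen's kappa for one allergen from paired class results of two analyzers. The input values are made up for illustration and are not data from the study, and dichotomizing the classes before computing kappa is a simplifying assumption.

```python
def agreement_stats(classes_a, classes_b, cutoff=2):
    """Total agreement %, concordant positive rate and Cohen's kappa for
    paired MAST class results (0-6) of one allergen from two analyzers."""
    n = len(classes_a)
    pos_a = [c >= cutoff for c in classes_a]
    pos_b = [c >= cutoff for c in classes_b]

    both_pos = sum(a and b for a, b in zip(pos_a, pos_b))
    both_neg = sum(not a and not b for a, b in zip(pos_a, pos_b))
    total_agreement = 100.0 * (both_pos + both_neg) / n

    any_pos = sum(a or b for a, b in zip(pos_a, pos_b))
    concordant_positive = 100.0 * both_pos / any_pos if any_pos else 0.0

    # Cohen's kappa on the dichotomized (positive/negative) results
    p_obs = (both_pos + both_neg) / n
    p_exp = (sum(pos_a) / n) * (sum(pos_b) / n) + \
            (1 - sum(pos_a) / n) * (1 - sum(pos_b) / n)
    kappa = (p_obs - p_exp) / (1 - p_exp) if p_exp < 1 else 1.0
    return total_agreement, concordant_positive, kappa

# Hypothetical paired results for ten sera:
print(agreement_stats([0, 2, 3, 0, 1, 4, 0, 2, 0, 5],
                      [0, 2, 2, 0, 0, 3, 1, 3, 0, 4]))
```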
31,479 | Use of a monitoring tool for growth and development in Brazilian children - Systematic review | The function and use of a child health monitoring tool have been discussed in the context of primary health care policy over the past three decades in Brazil.1–5 This tool's form, features, and content have gone through many changes. Furthermore, it had its goals and target audience expanded in an attempt to become an effective tool in child health promotion.3,6,7 In those same three decades, economic, social, and demographic transformations have changed the epidemiological profile of the population.8,9 These were accompanied by changes in the country's policy and health system,10 which caused a reordering of priorities in the Brazilian public health agenda.4,5 There have been many advances in the indicators of primary care, such as increased access to prenatal and immunization services and breastfeeding rates, and all contributed to the decline in child mortality.8,11 All these changes have posed new challenges to ensure the health of a growing and developing individual.12–15 They also caused the transition from a model of care focused on acute illness to one based on the integration of health services and intersectoral health promotion.8,10,16 In this transition, the Family Health Program has been the key strategy to restructure the care model of the Brazilian Unified Health System since 1994.10 The first contact of the population with the local health system is through the family health teams, which coordinate care and seek to integrate health services. The health promotion activities go beyond the walls of the health centers and take place in the territory, that is, in the homes and community,10 and it is in the performance of such activities that the child monitoring tool recovers its historical function.17 The actions carried out in the child's primary health care are essential for the early detection of potential growth and development changes, as well as to decrease morbidity and mortality risks. Child growth is a dynamic and continuous process of differentiation from conception to adulthood, which depends on the interaction of biological characteristics and life experiences in the environment.2,17 The best monitoring method is the periodic record of the child's weight and height18 and, currently, the body mass index.5 Development, in turn, is broad and refers to a progressive transformation that also includes growth, maturation, learning, and psychic and social aspects.2 Its monitoring involves activities that assess steps or milestones of the psychomotor development of children in each age group and can detect problems and changes in child development.19 Originally, the Child Health Card, proposed for the country in 1984,2 was the tool for monitoring the basic child health actions of the Ministry of Health. From 1984 to 2003,2,3 the CHC was modified and revised, with the addition of children's rights and some milestones of child development. The adoption of the CHC was explicitly mentioned in 2004 in the Agenda of Commitments for Complete Health and Mortality Reduction.4 In 2005, the CHC took the form of a booklet and is now called the Child Health Record.6,7 In this booklet, new information has been added for families and healthcare professionals in order to expand knowledge in child care and facilitate the understanding of aspects related to their growth and development. The CHR is considered by the MOH a key tool for monitoring the promotion activities of the child's full
potential of growth and development and preventing prevalent childhood diseases. Currently, the MOH distributes three million copies of the CHR to the municipal departments, which must pass them on to public and private hospitals. It is a free document delivered to the newborn's family. There is no quantitative study compiling evidence from previous studies regarding the use of the CHC/CHR.17,20–26 Therefore, the purpose of this article is to perform a systematic review to assess the completeness of the CHC or CHR filled out by health professionals in Brazil, based on evidence published in the literature, with emphasis on the variables for monitoring the growth and development of the child. The search was performed without restriction on year of publication in the following electronic databases: Cochrane Brazil, Latin American and Caribbean Health Sciences, Scientific Electronic Library Online, and Medical Literature Analysis and Retrieval System Online, as well as reference lists of articles, according to the Preferred Reporting Items for Systematic reviews and Meta-Analyses.27 The following descriptors and keywords were used: "growth and development", "child development", "child health record", "child health handbook", "health record and child", and "child handbook". The articles included met the following criteria for methodological quality28: hypotheses or defined objectives, outcome description, characteristics of participants, studied variables, main results and characteristics of losses, and adequacy of the statistical tests used. This review includes only works performed in Brazil and published in indexed journals, which measured the use of the growth and development monitoring tool prepared and distributed by the Ministry of Health from 1984 onwards, and which quantitatively assessed the filling out of the booklets. Exclusion criteria were: review articles, manuals, and course completion papers; studies whose method of data analysis was qualitative; studies restricted only to vaccination; and those whose sample consisted of specific risk groups, such as low birth weight and prematurity, or genetic and underlying diseases. The 1984 version of the CHC is a brochure on coated paper, printed in different colors and sizes for boys and girls, which can be folded in three, with spaces for child identification data, consultations, weight measurement according to age, a growth monitoring chart up to 5 years old, and the immunizations done. Since 1995, the CHC has included 11 milestones of child development, with spaces to record the age at which they were achieved. The CHR, in a booklet format that has been reprinted since 2005, has spaces for recording information on the basic health care of children from gestation to 9 years old, complications, treatments, and graphics to indicate the variation of weight-for-age, height, head circumference, and BMI. It also provides a space for recording the presence of the psychomotor developmental milestones according to the child's age. The CHR should be filled in during the routine follow-up visits. The Ministry of Health recommends seven visits in the first 12 months, two in the second year, and, from that age on, one visit per year.7 Sixty-eight non-repeated articles were identified in the electronic databases and reference lists. In the first screening stage, four qualitative theses and 29 articles were excluded by reading the titles. Of these, 12 studies were restricted to vaccination, nine involved risk groups and/or underlying disease, three were of instructional materials, three were copies of booklets, one was a review, and one was a professional training study in primary health
care. In the second screening stage, 17 articles were excluded after reading the abstracts because they did not assess CHC/CHR completion. Eleven articles could be grouped as evaluation studies: three of nutritional indicators, three of Supervised Practical Activities, two of care practices, two of records analysis, and one of professionals' knowledge. Five articles could be grouped as qualitative studies: two studies of the meaning of child care, one discourse analysis, one experience report, and one multidisciplinary approach to growth and development follow-up. In addition to these, a literature review of the role of nurses in children's nutritional health was excluded. Eighteen articles remained for full-text reading. Ten articles29–38 were excluded for not quantitatively assessing the children's health monitoring tools. Of the eight included articles, five evaluated the filling out of the CHC17,20–23 and three24–26 of the CHR. The studies were conducted in the Northeast,17,21,23 Southeast,20,24,25 South,26 and Midwest regions.22 Information was obtained from questionnaires addressed to the mother or child's guardian, or to the directors of health services, or was collected directly from the instrument studied. The surveys were made in services within the public health network and in home visits. The variability of the measured items and of the evaluation criteria for filling out the tool made it difficult to compare the filling frequency for all items of the CHC or CHR. The percentage of tools filled out with data regarding identification, pregnancy monitoring, and birth is presented in Table 3. In 2005, only 55.6% of the CHR had the name of the child filled in.24 The authors reported that the mean age of these "unnamed" children was 68 days, with a median of 59 days, a time at which this information should have been filled out by health professionals after several opportunities to see the child, both in the maternity hospital and in primary care visits. We also noted that there was an increase in the percentage of CHC/CHR filled out between 2005 and 2008 for all the identification variables, except for the number of the Certificate of Live Birth. The highest increase was in the number of Birth Certificates. Only one study evaluated the serology data filled in during prenatal care24 and found that this was the lowest filling percentage of the pregnancy monitoring variables: about 50% of the CHR studied. Birth weight was the most frequently described record among the variables related to the child's birth. There was an increase in the filling percentages among the studied CHC/CHR, but there was a decrease when the tool changed, as for gestational age, for example. Between 2001 and 2006, there was an increase in the filling out of the Apgar score and little variation in the filling out of height and head circumference. The results for the monitoring variables of growth and development are shown in Table 4. Only two studies20,26 reported consultation records concerning growth. The lowest percentage of CHR filling out was 74.6%, for weight monitoring in 1998.20 However, 10 years later, the weight, height, and HC records were more than 80% filled out in the work by Linhares et al.26 Records of weight and HC at birth in the graphs showed a low frequency of CHR filled out. In works performed in Pernambuco, birth weight was only indicated on the chart in 36.9%17 and 44.1%21 of the cards, although it was recorded in 86.8%17 and 89.4%21, respectively, of these cards. Similarly, in Belo Horizonte,25 only 69.3% and 15.5% of the CHR had markings on the charts of weight and HC at birth,
respectively. The filling out percentage of the weight-for-age chart showed great variation between studies, due to the criteria used to consider the filling out as appropriate. For children up to one year, when a record every three months was required, Vieira et al.23 reported 41.1% of adequate filling out of the weight-for-age chart. In the study that considered a single marking as sufficient, a percentage of 96.3% was reported.26 In the Federal District,22 21.1% of correct filling out was found, according to the schedule recommended by the Ministry of Health. It was found that the filling out percentage decreased with age, from 53.8% in the age group up to five months to 6.6% in the age group of 48–60 months. In Pernambuco,17 59.9% of the CHC had a record in the weight chart on the day of consultation. In this same work, according to the child's age, 38% of the CHC had none or only one weight record in the chart. The condition "no point recorded" was similarly distributed in all age groups: 27.8%, 21.7%, and 27.2%. However, 40.5% of the CHC had two to six points on the chart. Of these, 46% were in the age group under one year and 29.7% between 48 and 60 months. Linhares et al.26 were the only ones to observe the filling out of the length/height-for-age chart. Of the 107 CHR, 42.1% had at least one record, regardless of the child's age. There was no report on records of the BMI-for-age chart by the authors of the works included in this review. Only two studies assessed the presence of records in the development monitoring tool. In Feira de Santana,23 22.1% of the CHC had records in the chart, but only 7.8% were complete, considering the child's age. In Belo Horizonte,25 only 18.9% of the CHR met the criteria of presenting records in three or more age groups. For three decades, the child health programs in Brazil have proposed as a strategy a tool to monitor and promote child health. The results presented in this study have identified important issues in the use of this instrument in the child's primary health care. Although studies report that most children have the CHC or CHR, the monitoring of child growth seems not to receive the proper attention from health teams. Of the three studies that assessed the CHR,24–26 two presented results regarding the filling out of the HC chart, one regarding length/height, and none regarding BMI for age, regardless of the epidemiological nutritional profile in Brazil. Currently, the coexistence of two antagonistic situations justifies the conduct of different clinical and epidemiological approaches: nutritional deficiency and, at the opposite pole, the combination of problems related to overeating and unhealthy lifestyles.39,40 As the occurrence of malnutrition declines, the prevalence of anemia, overweight, and obesity increases in the Brazilian population.39 The BMI has been validated as a marker of adiposity and overweight in children and as a predictor of obesity in adulthood.41 Therefore, its use is recommended from the child's birth.42 To assess the cranial growth rate and its internal structures in childhood, systematic HC measurement and recording on the HC-for-age chart are needed. The filling out of only 30.7%25 and 35.5%26 for a parameter that reflects the state of child neurodevelopment43–45 draws attention, since it should be routinely used for the individual follow-up of children up to 24 months, the period of greatest postnatal growth.5,45 Low birth weight is one of the best indicators of the quality of health and life of children due to its close relationship with child
mortality and damage to linear growth, weight, and mental and motor development.46 However, the low recording of weight at birth in the chart shows the underestimated role assigned to this indicator in monitoring the child's health status at the places evaluated by the works reviewed here. Another problem found in this review is the poor result in the filling out of the chart of child development milestones. The monitoring action consists of performing a physical examination, a thorough neuropsychomotor evaluation, the identification of risk factors, and the recording in the CHR of all procedures performed on the child, as well as the findings of the medical visits.5 This action is a form of preventive intervention that includes activities related to the promotion of normal development and the detection of problems in the process.47 It brings together different evaluations that include the perception of parents, teachers, and health professionals.33,36,48 An estimated 200 million children worldwide under the age of five are at risk of failing to achieve their development potential.49 With the use of the CHR, Alvim et al.36 were able to trace 35% of children with probable or possible developmental delay when evaluating 122 children from two months to two years old in the city of Belo Horizonte. Costa et al.29 found failures in the filling out of the CHR when assessing the health care provided to children by the Family Health Program in the city of Teixeiras. The authors reported that most children had the CHC, but all were incomplete. There was no information on weight and height, or records in the growth chart, and many mothers did not understand the meaning of the curve. The card worked just as a record for vaccine control, and not as a child health monitoring tool. We also found that the monitoring tools of younger children have more records. The schedule of routine medical visits is most frequent in the first months, a period of risk and need for regular monitoring. Over time, the preventive visits are gradually replaced by visits due to health problems. The child's health monitoring tool led to operational changes in the health services. Since 2005, hospitals and maternities have become responsible for the distribution and recording of information regarding pregnancy, childbirth, and the neonatal period. The CHR, as a health promotion tool, also caused changes in the health status perceived by the population.24 Demand for health services can no longer be motivated only by the presence of disease or vaccination, as reported by Vitolo et al.50 in 2010. The findings of this study indicated that 66.2% of those responsible still considered the monitoring of the child by the childcare service unnecessary in the absence of disease. This frequency is in contrast with the high coverage of the up-to-date immunization schedule. The results presented in this review should take into account that the methodology used in the articles reviewed to assess the filling out of the CHC and CHR was not uniform. In some studies, the criterion was based on at least one record in the three months preceding the interview. Certainly, the values would be lower than those reported if the criterion used were more restrictive, such as the minimum consultation timetable proposed by the MOH. Another issue to consider is the comparison between surveys performed in different socioeconomic and cultural realities. In any case, the absence or incorrectness of records suggests a weak link of professionals with basic health care actions and a discontinuity between the actions initiated in maternity and
the proposals for primary care. Health professionals often become overwhelmed in their routines. Beyond the universe of care, the work involves filling out various forms demanded by the institution. The filling out of a CHR cannot be considered an additional administrative record, but a tool for child health promotion and for obtaining good quality information to better target the actions of services. However, it is important to emphasize that the absence of records does not necessarily mean the non-performance of medical procedures.30,51,52 Nevertheless, the importance of records for building the epidemiological profile of a population and as a channel of communication between health professionals in the development of their actions is recognized. When done right, it allows the practice of personalized care and reflects the quality of care.25 In the child health monitoring program, the professional focus should be on missing no opportunities for action, whether in promotion, prevention, and/or assistance, on keeping a bond with the family, and on encouraging continuous and joint responsibility between service and family.53 Co-responsibility of families, professionals, and services can be the key to better use of the CHR25 in child care. The act of providing explanations, involving the family, and recording information about the child's health conditions is a way of caring for and encouraging the continuity of care. The understanding by families of this tool's function in child health monitoring is essential for them to take hold of it and appreciate it. Thirty years after the implementation of the Children's Health Integral Assistance Program, the use of the child health monitoring tool is not consolidated, according to research reports. The lack of awareness of health professionals regarding the filling out of the studied instrument was evident. This review also shows that the diagnosis of the use and filling out quality of such tools in Brazil is restricted to a few local works, which do not evaluate all the variables considered essential for child health monitoring. Therefore, further studies are desirable, with a methodology consistent with previous studies, that allow drawing a national and more updated picture. This knowledge could be enhanced if combined with other qualitative studies, in which professionals from the basic units and FHP teams express their views on the relationship of the promotion and monitoring actions for the child's complete health with the filling out and appreciation of the CHR. This study was funded by the Fundo Nacional de Saúde through the agreement signed by the Coordenação Geral da Saúde da Criança e Aleitamento Materno with the Instituto Nacional de Saúde da Mulher, da Criança e do Adolescente Fernandes Figueira. The authors declare no conflicts of interest. | Objective To assess the use of a health monitoring tool in Brazilian children, with emphasis on the variables related to growth and development, which are crucial aspects of child health care. Data source A systematic review of the literature was carried out in studies performed in Brazil, using the Cochrane Brazil, Lilacs, SciELO and Medline databases. The descriptors and keywords used were "growth and development", "child development", "child health record", "child health handbook", "health record and child" and "child handbook", as well as the equivalent terms in Portuguese. Studies were screened by title and summary and those considered eligible were read in full.
Data synthesis Sixty-eight articles were identified and eight articles were included in the review, as they carried out a quantitative analysis of the filling out of information. Five studies assessed the completion of the Child's Health Record and three of the Child's Health Handbook. All articles concluded that the information was not properly recorded. The filling out of growth monitoring charts varied widely among studies, reaching up to 96.3% in the case of weight for age. The use of the BMI chart was not reported, despite the growing rates of childhood obesity. Only two studies reported the completion of development milestones and, in these, the milestones were recorded in approximately 20% of the verified tools. Conclusions The results of the assessed articles disclosed underutilization of the tool and reflect low awareness by health professionals regarding the recording of information in the child's health monitoring document. |
31,480 | Computational methods for predicting genomic islands in microbial genomes | Lateral gene transfer (LGT) is the transfer of genes from one organism to another in a way that is different from reproduction. Its ability to facilitate microbial evolution has been recognized for a long time. Despite the ongoing debate about its prevalence and impact, the accumulation of evidence has made LGT widely accepted as an important evolutionary mechanism of life, especially in prokaryotes. As a result of LGT, recipient genomes often show a mosaic composition, in which different regions may have originated from different donors. Moreover, some DNA sequences acquired via LGT appear in clusters. These clusters of sequences were initially referred to as pathogenicity islands (PAIs), which are large virulence-related inserts present in pathogenic bacterial strains but absent from other non-pathogenic strains. Later, the discoveries of regions similar to PAIs but encoding different functions in non-pathogenic organisms led to the designation of genomic islands (GIs). GIs were then found to be common in both pathogenic and environmental microbes. Specifically, a GI is a large contiguous genomic region that arose by LGT, which can contain tens to hundreds of genes. The size of known GIs varies from less than 4.5 kb to 600 kb. Laterally acquired genomic regions shorter than a threshold are also called genomic islets. GIs often have a phylogenetically sporadic distribution. Namely, they are present in some particular organisms but absent in several closely related organisms. As shown in Fig. 1, GIs have several other well-known features that distinguish them from the other genomic regions, such as sequence composition different from the core genome, the presence of mobility-related genes, flanking direct repeats, and specific integration sites. For example, tDNA is well known as a hotspot for GI insertion. However, not all these features are present in a GI, and some GIs lack many of these features. As a consequence, GIs were also considered a superfamily of mobile elements with core and variable structural features. In addition to this restricted GI definition, GIs are often seen as a broad category of mobile genetic elements. They can be further grouped into subcategories by mobility: some GIs are mobile and hence can transfer further to a new host, such as integrative and conjugative elements (ICEs), conjugative transposons, and prophages; but other GIs are not mobile any more. GIs can also be classified by the function of the genes within, as follows: PAIs with genes encoding virulence factors; resistance islands with genes responsible for antibiotic resistance; metabolic islands with genes related to metabolism; and so on. However, the latter classification may not be definite, since the functions of genes within GIs may not be clear-cut in practice. GIs play crucial roles in microbial genome evolution and in the adaptation of microbes to environments. As part of a flexible gene pool, the acquisition of GIs can facilitate evolution in quantum leaps, allowing bacteria to gain large numbers of genes related to complex adaptive functions in a single step and thereby confer evolutionary advantages. Remarkably, the genes inside GIs can influence a wide range of important traits: virulence, antibiotic resistance, symbiosis, fitness, metabolism, and so on. In particular, PAIs can carry many genes contributing to pathogen virulence, and potential vaccine candidates were suggested to be located within PAIs. Thus, the accurate identification of GIs is important not
only for evolutionary study but also for medical research. GIs can be predicted by either experimental or computational methods. Herein, we focus on the in silico prediction of GIs: given the genome sequence of a query organism, identify the positions of GIs along the query genome via computer programs alone. Additional input information may also be incorporated, such as the genomes of other related organisms and genome annotations. Langille et al. gave a comprehensive review of GI-related features and different computational approaches for detecting GIs. Recently, in 2014, Che et al. presented a similar review for detecting PAIs. Here, we want to provide an up-to-date review of representative GI prediction methods in an integrative manner. Firstly, we highlight the general challenges in predicting GIs. Then, we subdivide existing methods based on input information, and describe their basic ideas as well as their pros and cons. We also propose promising directions for developing better GI detection methods. It is a non-trivial task to find laterally transferred regions of relatively small size in a long genome sequence. Two prominent challenges in GI prediction are the extreme variation of GIs and the lack of benchmark GI datasets. It may seem easy to predict GIs given the various well-characterized features associated with them. However, the mosaic nature and extreme variety of GIs increase the complexity of GI prediction. The elements within a GI may have been acquired by several LGT events and are likely to have undergone subsequent evolution, such as gene loss and genomic rearrangement. Consequently, the composition, function, and structure of GIs can show various patterns. This can be illustrated by GIs in the same species, GIs in Gram-negative bacteria, and GIs in both Gram-positive and Gram-negative bacteria. The diversity of GIs prevents an effective way of integrating multiple features for prediction. Choosing only a few features as predictors may discard many GIs without those features. Even if the fundamental property of GIs, their lateral origin, can be used for prediction, it is still challenging since LGT itself is difficult to ascertain. There are still no reliable benchmark GI datasets for validating prediction methods or for supervised prediction. With more GIs being predicted and verified, several GI-related databases have been deployed and regularly updated, such as Islander, PAIDB, and ICEberg. However, these databases are mainly for specific kinds of GIs, such as tDNA-borne GIs, PAIs, and ICEs. There are also two constructed GI datasets based on whole-genome comparison, which were used as training datasets for machine learning methods. But the scale of these datasets is still not large enough, and their reliability has not been verified by convincing biological evidence. In spite of the above challenges, previous methods have made considerable progress in GI prediction. They usually use the two most indicative features of the horizontal origin of GIs: biased sequence composition and sporadic phylogenetic distribution. Based on these two features, these methods roughly fall into two categories: composition-based methods and comparative genomics-based methods. For ease of discussion, we categorize GI prediction methods into two large groups based on the number of input genomes: methods based on one genome and methods based on multiple genomes. Methods in the former group are often composition-based, while methods in the latter group are usually comparative genomics-based. We also include ensemble methods
which combine different kinds of methods, and methods for incomplete genomes, which predict GIs in draft genomes. Fig. 2 shows an overview of the methods included in this paper. For reference, we list the available programs discussed under each category in Table 2. Most methods based on one genome utilize sequence composition to identify GIs, but several methods based on GI structural characteristics have also been developed. According to the units used for measuring genome composition, composition-based methods can be divided into methods at the gene level and methods at the DNA level. In the following sub-sections, we present the basic idea of composition-based methods before discussing methods at the gene and DNA level separately. The major assumption of composition-based methods is that mutational pressures and selection forces acting on microbial genomes may result in species-specific nucleotide composition. Thus, a laterally transferred region may show atypical composition which is distinguishable from the average of the recipient genome. Under this assumption, most compositional methods try to choose certain sequence characteristics as discrimination criteria to measure the compositional differences. Several features have been shown to be good criteria, including GC content, codon usage, amino acid usage, and oligonucleotide frequencies. Based on these criteria, single-threshold methods are often adopted for GI prediction. The atypicality of each gene or genomic region is measured by a score derived from the comparison with the average of the whole genome via similarity measures. The genes or genomic regions with scores below or above a certain threshold are considered atypical. The consecutive atypical genes or genomic regions are finally merged to get candidate GIs. Methods based on gene sequence composition are often designed to detect LGT, or laterally transferred genes, and only a few methods are specifically developed to detect GIs. The methods for LGT detection can be utilized to identify GIs by combining clusters of laterally transferred genes, but they are supposed to be less sensitive, since some genes inside a GI may not show atypicality, preventing the whole GI from being captured. Here we mainly discuss specific methods for GI detection. Some GI detection methods combine multiple discrimination criteria, such as Karlin's method and PAI-IDA. Karlin's method and PAI-IDA predict GIs and PAIs by evaluating multiple compositional features. Karlin's method is a single-threshold method, while PAI-IDA uses iterative discriminant analysis. Both methods use a sliding window to scan the genome, and the sequences or genes inside each window are used for computation. Other methods use only a single discrimination criterion, such as IslandPath-DINUC and SIGI-HMM. IslandPath-DINUC uses a single-threshold method to predict GIs as multiple consecutive genes with only dinucleotide bias. SIGI-HMM predicts GIs and the putative donors of laterally transferred genes based solely on the codon usage bias of individual genes. As an extension of SIGI, an earlier method based on scores derived from codon frequencies, SIGI-HMM substitutes the previous heuristic method with a Hidden Markov Model to model the laterally transferred genes and native genes as different states. Methods based on gene sequence composition are generally easy to implement and apply. However, what they actually find are compositionally atypical genomic regions in terms of certain criteria. So there are many false positives and false negatives. Native regions may easily be detected as false positives owing to their atypical composition for reasons other than LGT, such as highly expressed genes. At the same time, ameliorated GIs or GIs originating from genomes with similar composition may not be detected. But the false positives can be reduced by eliminating well-known non-GIs. For example, by filtering out putative highly expressed genes based on codon usage, SIGI-HMM was reported to have the highest precision in a previous evaluation. For methods performing comparisons with the genomic average, laterally transferred regions may contaminate the genome and reduce the accuracy of predictions. Furthermore, the predicted boundaries of GIs are not precise, since the boundaries between laterally transferred genes and native genes can be compositionally ambiguous. Additionally, these methods at the gene level require reliable gene annotations. Thus, they may not be applicable to newly sequenced genomes, which have no or incomplete annotations. The increase of newly sequenced genomes without complete annotations necessitates GI prediction based on DNA sequences alone. Without the aid of gene boundaries, the large genome has to be segmented by other measures. According to the genome segmentation approaches, methods based on DNA sequence composition can be classified into two major kinds: window-based methods and windowless methods. Window-based single-threshold methods are commonly used for GI detection. These methods use a sliding window to segment the whole genome sequence into a set of smaller regions. There are several representative programs, including AlienHunter, Centroid, INDeGenIUS, Design-Island, and GI-SVM. The major differences among them are in the size of the sliding window, the choice of the discrimination criterion and similarity measure, and the determination of the threshold. Both AlienHunter and GI-SVM use a fixed-size overlapping window with a fixed step size. AlienHunter is the first program for GI detection on raw genomic sequences. It measures segment atypicality via relative entropy based on interpolated variable order motifs (IVOMs). The threshold can be obtained by either k-means clustering or the standard deviation. GI-SVM is a recent method using either fixed or variable order k-mer frequencies. It detects atypical windows via a one-class SVM with a spectrum kernel. An automatic threshold can be obtained from one-dimensional k-means clustering.
Centroid partitions the genome with a non-overlapping window of fixed size. The average of the k-mer frequency vectors of all the windows is taken as the centroid. Based on the Manhattan distances from each frequency vector to the centroid, outlier windows are selected by a threshold derived from the standard deviation. INDeGenIUS is a method similar to Centroid, but it uses overlapping windows of fixed size and computes the centroid via hierarchical clustering. Design-Island is a two-phase method utilizing k-mer frequencies. It incorporates statistical tests based on different distance measures to determine the atypicality of a segment via pre-specified thresholds. In the first phase a variable-size window is used to obtain initial GIs, whereas in the refinement phase a smaller window of fixed size is used to scan over these putative GIs to get the final GI predictions. Some of these methods are designed to alleviate the problem of genome contamination. Design-Island excludes the initially obtained putative GIs when computing parameters for the entire genome in the second phase. GI-SVM measures the atypicality of all the windows simultaneously via a one-class SVM, and only some windows contribute to the genomic signature. To deal with the imprecise GI boundaries that result from a large step size, AlienHunter uses an HMM to further localize the boundaries between predicted GIs and non-GIs, but most other programs do not consider this issue. The few windowless methods mainly include GC Profile and MJSD. GC Profile is an intuitive method to calculate the global GC content distribution of a genome at high resolution. An abrupt drop in the profile indicates a sharp decrease of GC content and thus the potential presence of a GI. This method was later developed into a web-based tool for analyzing GC content in genome sequences. However, other features have to be used together with GC Profile for GI prediction due to the poor discriminative power of GC content. MJSD is a recursive segmentation method based on the Markovian Jensen-Shannon divergence measure. The genome is recursively cut into two segments by finding a position where the sequences to its left and to its right have statistically significant compositional differences. Subsequently, each segment is compared against the whole genome to check its atypicality via a predefined threshold.
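To make the window-based, single-threshold scheme described above concrete, the following is a minimal sketch of scoring fixed-size, overlapping windows against the genome-wide k-mer background and merging atypical windows into candidate GIs. It is a generic illustration under assumed parameter values (k, window size, step, and a standard-deviation threshold rule), not a re-implementation of AlienHunter, Centroid, GI-SVM, or any other specific program; scoring individual annotated genes instead of fixed windows would give the gene-level variant of the same idea.

```python
from collections import Counter

def kmer_freqs(seq, k=4):
    """Normalized k-mer frequencies of a DNA sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values()) or 1
    return {kmer: c / total for kmer, c in counts.items()}

def window_scores(genome, k=4, win=5000, step=2500):
    """Score each window by its Manhattan distance to the genome-wide k-mer profile."""
    background = kmer_freqs(genome, k)
    scores = []
    for start in range(0, len(genome) - win + 1, step):
        freqs = kmer_freqs(genome[start:start + win], k)
        dist = sum(abs(freqs.get(m, 0.0) - f) for m, f in background.items())
        scores.append((start, start + win, dist))
    return scores

def candidate_gis(scores, n_sd=2.0):
    """Flag windows scoring above mean + n_sd * SD (a simple alternative to
    one-dimensional k-means thresholding) and merge overlapping atypical windows."""
    vals = [s for _, _, s in scores]
    mean = sum(vals) / len(vals)
    sd = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
    merged = []
    for start, end, s in scores:
        if s <= mean + n_sd * sd:
            continue
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], end)   # extend the previous atypical region
        else:
            merged.append((start, end))
    return merged
```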
Methods based on DNA sequence composition have similar advantages and disadvantages to methods based on gene sequence composition. Specifically, window-based methods can be highly sensitive with appropriate implementations. For example, AlienHunter was reported to have the highest recall in a previous evaluation, and GI-SVM was recently shown to have even higher sensitivity than AlienHunter. But their precision is quite low due to the limited input information. They are also inherently incapable of identifying the precise boundaries between regions with compositional differences. In contrast, windowless methods can delineate the boundaries between GIs and non-GIs more accurately. GC Profile has successfully discovered a few reliable GIs in several genomes. But it seems subjective to assess the abruptness of the jump in the GC profile, and only GIs with low GC content can be detected. MJSD is better at predicting GIs of size larger than 10 kb, but the procedure to determine segment atypicality still suffers from the contamination of the whole genome. The presence of compositional bias is usually not sufficient to assure the foreign origin of putative GIs. Thus, it is necessary to develop methods based on multiple GI-related structural features. According to the approaches used for integrating different features, methods based on GI structure can be divided into direct integration methods and machine learning methods. The direct integration methods adopt a series of filters to get more reliable GIs. But some integrated features are only used for validation, since it is difficult to use them systematically for prediction given the extreme GI structural variation. There are mainly two representative programs: IslandPath and Islander. IslandPath is the first program integrating multiple features to aid GI detection. But IslandPath only annotates and displays these features in the whole genome, leaving it to the user to decide whether a region is a GI or not. Based on these computed features, a GI can be identified as multiple consecutive genes with both dinucleotide bias and the presence of mobility-related genes. Islander incorporates a method to accurately detect tDNA-borne GIs. Islander seeks a specific tDNA signature to find candidate GIs. Several filters are used to exclude potential false positives, such as regions without integrase genes. Recently, the filtering algorithms were refined by incorporating the more precise annotations now available. Several machine learning approaches based on constructed GI datasets have been proposed, including the Relevance Vector Machine (RVM), GIDetector, and GIHunter. The major differences among them are in the choices of training datasets, GI-related features, and learning algorithms. RVM is the first machine learning method to study structural models of GIs. It is based on the datasets constructed from comparative genomics methods. Eight features of each genomic region are used to train GI models: IVOM score, insertion point, GI size, gene density, repeats, phage-related protein domains, integrase protein domains, and non-coding RNAs. GIDetector utilizes the same features and training datasets as RVM, but it implements a decision-tree-based ensemble learning algorithm. GIHunter uses a similar algorithm to GIDetector, but adopts slightly different features and datasets. GI size and repeats are replaced by highly expressed genes and average intergenic distance. The training datasets are replaced by the IslandPick datasets. The predictions of GIHunter for thousands of microbial genomes are available online at http://www5.esu.edu/cpsc/bioinfo/dgi/index.php. Methods utilizing GI structure can generate more robust predictions. For example, the high reliability of GIs inserted at tDNA sites leads to very few false positives in the predictions from Islander. But these methods depend on the accurate identification of multiple related features, such as tRNA genes, mobility-related genes, and virulence factors. Direct integration methods are straightforward, but many GIs may be filtered out due to the lack of certain features. For example, IslandPath-DIMOB was shown to have very low recall in spite of high accuracy and precision. Conversely, machine learning approaches can systematically integrate multiple GI features to improve GI prediction. This can be partly reflected by the high recall and precision of GIHunter. However, the performance of supervised methods is closely related to the quality of the training datasets.
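The supervised, structure-based approach can be sketched as follows. The feature names mirror those listed above, but the labelled training table, the file name, the choice of a random forest, and its parameters are illustrative assumptions rather than the published RVM, GIDetector, or GIHunter pipelines.

```python
# Assumes scikit-learn and pandas are installed.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-region feature table: one row per genomic region,
# with a binary 'is_gi' label derived from a reference GI dataset.
FEATURES = ["ivom_score", "gene_density", "near_trna", "integrase_domain",
            "phage_domain", "highly_expressed_genes", "avg_intergenic_distance"]

def train_gi_classifier(csv_path="gi_training_regions.csv"):
    """Train a tree-ensemble classifier on labelled GI / non-GI regions."""
    data = pd.read_csv(csv_path)                 # assumed file layout
    X, y = data[FEATURES], data["is_gi"]
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
    return model.fit(X, y)

# For a query genome, the same features would be computed for candidate regions,
# and model.predict_proba(regions[FEATURES]) would give their GI probabilities.
```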
Methods based on several genomes detect GIs based on their sporadic phylogenetic distribution. They compare multiple related genomes to find regions present in a subset but not in all of the genomes. The comparison procedure often involves analyzing results from sequence alignment tools, such as the local alignment tool BLAST and the whole-genome alignment tool MAUVE. BLAST and MAUVE can be used to find unique strain-specific regions, whereas MAUVE can also be used to find conserved regions. For example, Vernikos and Parkhill performed genome-wide comparisons via all-against-all BLAST, and then applied manual inspection to find reliable GIs for training GI structural models. They also differentiated gene gain from gene loss via a maximum parsimony model obtained from MAUVE alignments. Despite the tediousness of manual analysis, there are only two automatic methods based on several genomes: tRNAcc and IslandPick. The tRNAcc method utilizes alignments from MAUVE to find GIs between a conserved tRNA gene and a conserved downstream flanking region across the selected genomes. It was later integrated into MobilomeFINDER, an integrative web-based application to predict GIs with both computational and experimental methods. Complementary analysis is also incorporated in tRNAcc to provide additional support, including GC Profile, strain-specific coding sequences derived from BLAST analysis, and dinucleotide differences. But appropriate genomes to compare have to be selected manually. To facilitate genome selection, IslandPick builds an all-against-all genome distance matrix and utilizes several cut-offs to select suitable genomes to compare with the query genome, making it the first completely automatic comparative genomics method. The pairwise whole-genome alignments are done by MAUVE to get large unique regions in the query genome. After being filtered by BLAST to eliminate genome duplications, these regions are considered putative GIs. Due to the inaccuracies of composition-based methods, methods based on several genomes are preferred if there are appropriate genomes for comparison. But uncertainties still exist in their predictions. Firstly, the results depend on the genomes compared with the query genome. Secondly, it is hard to distinguish between gene gain via LGT and gene loss. Thirdly, genomic rearrangements can cause difficulties in accurate sequence alignments. In addition, the applications of methods based on several genomes are limited, since the genome sequences of related organisms may not be available for some query genomes. Different kinds of methods often predict non-overlapping GIs and complement each other. To make the best of the available methods, ensemble methods have been proposed to combine different methods. One way of combination is to merge the predictions from multiple programs. This approach is implemented in IslandViewer and EGID. IslandViewer is a web-based application combining three programs: SIGI-HMM, IslandPath-DIMOB, and IslandPick. It provides the first user-friendly integrated interface for visualizing and downloading predicted GIs. Newer versions of IslandViewer include further improvements, such as improved efficiency and flexibility, additional gene annotations, and interactive visualizations. But the underlying integration method is mainly a union of the predictions from the individual programs. Unlike IslandViewer, EGID uses a voting approach to combine predictions from five programs: AlienHunter, IslandPath, SIGI-HMM, INDeGenIUS, and PAI-IDA. A user-friendly interface for EGID is provided in the program GIST.
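A voting-style combination of interval predictions, in the spirit of EGID, can be sketched as follows. The per-base voting scheme, the majority rule, and the minimum island size used here are illustrative simplifications, not EGID's exact algorithm.

```python
def vote_consensus(predictions, genome_length, min_votes=3, min_size=8000):
    """Merge GI interval predictions from several programs by per-base voting.

    predictions: list of lists of (start, end) intervals, one list per program.
    Returns intervals supported by at least `min_votes` programs and longer
    than `min_size` bp.
    """
    votes = [0] * genome_length
    for program in predictions:
        for start, end in program:
            for pos in range(start, min(end, genome_length)):
                votes[pos] += 1

    consensus, region_start = [], None
    for pos, v in enumerate(votes):
        if v >= min_votes and region_start is None:
            region_start = pos
        elif v < min_votes and region_start is not None:
            if pos - region_start >= min_size:
                consensus.append((region_start, pos))
            region_start = None
    if region_start is not None and genome_length - region_start >= min_size:
        consensus.append((region_start, genome_length))
    return consensus
```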
Another way of combination is to filter the predictions from one method by other methods. This approach is common for PAI prediction, since it is critical to utilize multiple features to discern PAIs from other GIs. Several PAI detection programs adopt this approach, including PAIDB, PredictBias, and PIPS. These programs often combine composition-based methods, comparative genomics methods, and homology-based methods. Both PAIDB and PredictBias first identify putative GIs based on compositional bias. For PAIDB, the putative GIs homologous to published PAIs are seen as candidate PAIs. SIGI-HMM and IslandPath-DIMOB were later integrated into PAIDB for GI predictions. To overcome the dependency on known PAIs, PredictBias constructs a profile database of virulence factors (VFPD). If the putative GIs have a pre-specified number of significant hits to the VFPD, they are seen as potential PAIs. PredictBias also integrates comparative analysis to validate the potential PAIs. PIPS integrates multiple available tools for computing PAI-associated features. It filters out the initial predictions from comparative genomics analysis via empirical logic rules on selected features. Combining the predictions of several programs is supposed to perform better than the individual programs. Indeed, IslandViewer was shown to increase the recall and accuracy without much sacrifice of precision, and EGID was reported to yield balanced recall and precision. The available ensemble methods are mostly characterized by user-friendly interfaces, but the combination procedures do not seem to be sophisticated enough. Some valuable predictions made by one method may be discarded in the ensemble method. For example, PredictBias was shown to have lower sensitivity and accuracy than PIPS on two bacterial strains, which reflects to some extent the effects of different integration strategies on performance. Thanks to low-cost high-throughput sequencing, an increasing number of microbial genomes are being sequenced. However, many of these genomes are in draft status. So there is a need to predict GIs in incomplete genomes. Currently, there are only two programs for this purpose: GI-GPS and IslandViewer 3. Both programs first assemble the sequence contigs into a draft genome, and then use methods similar to those for predicting GIs in complete genomes. GI-GPS is a component of GI-POP, a web-based application integrating annotations and GI predictions for ongoing microbial genome projects. GI-GPS uses an assembler within GI-POP for genome assembly. Then an SVM classifier with a radial basis function kernel is applied to segments obtained from a sliding window of fixed size along the genome. The classifier is trained on the IslandPick datasets and selected GIs from PAIDB. GI-GPS utilizes compositional features in model training to tolerate potential errors in the assembled genome. The predictions from the classifier are filtered by homology searches to keep only sequences with MGE evidence. Then the boundaries of the filtered sequences are refined by repeats and tRNA genes. IslandViewer 3 maps the annotated contigs to a completed reference genome to generate a concatenated genome. Then it uses this single genome as input to the normal IslandViewer pipeline. GI-GPS and IslandViewer 3 make it feasible to predict GIs for draft genomes. But they are still simplistic and limited. For example, IslandViewer 3 is restricted to genomes with very few contigs and with reference genomes of closely related strains of the same species. Furthermore, it seems inappropriate to apply methods similar to those developed for complete genomes, since draft genome sequences do not have as high quality as whole genome sequences. Since their discovery in microbial genomes, the importance of GIs has been gradually appreciated. Extensive research has demonstrated multiple GI-associated
signatures, but these features show great variation in different genomes. Nevertheless, several of these features have been shown to be effective in GI detection and applied in many computational methods, including compositional bias, structural markers, and phylogenetically restricted distribution. Based on the input data, we classify these methods into four large groups, which are further divided into subgroups based on the features utilized or the methodology adopted. It should be noted that some methods may belong to multiple categories. For example, tRNAcc and GI-GPS can also be classified as ensemble methods. In short, distinct kinds of methods detect GIs based on diverse features and assumptions, and thus generate predictions of different reliabilities. Methods based on the gene or DNA composition of a single genome provide only rough estimations, since they usually take advantage of very limited information. Methods based on GI structure utilize multiple lines of evidence, and are supposed to be more reliable. But compositional or structural features in a single genome can only provide static information for GI prediction. Instead, methods based on several genomes can reveal the genetic flux among closely related genomes and provide dynamic information. Therefore, they can be more accurate. To get more comprehensive and reliable results, it seems desirable to use methods based on more evidence, such as ensemble methods and methods based on GI structure. This can be illustrated by the evaluations of some methods on the well-studied S. typhi CT18 genome. Nineteen reference GIs were obtained from the literature, excluding two GIs of size smaller than 5 kb. The predictions of each program were either downloaded from the corresponding website (e.g., tRNAcc, GIHunter) or obtained by running the program on a local machine with optimal parameters. The evaluation metrics were measured as described previously. All the relevant data and scripts can be found at https://github.com/icelu/GI_Prediction. Although the sophistication and performance of GI prediction methods have been steadily improved, there is still room for further improvement. For instance, the precision and recall of current methods are still not high enough, suggesting the presence of many false negatives and false positives. This can be improved either by more advanced integration of multiple kinds of methods or by refinement of a single kind of method. For GI prediction based on a single genome, machine learning methods may help. On one hand, DNA composition-based prediction can be seen as contiguous subsequence-based anomaly detection, whose goal is to find anomalous contiguous subsequences significantly different from the other subsequences in a long sequence. From this perspective, many computational approaches for outlier detection may be adapted for GI prediction. On the other hand, it seems feasible to apply more sophisticated supervised learning algorithms for structure-based GI prediction, since the accumulation of reliable GIs can provide a more solid basis for model training. For GI prediction based on incomplete genomes, methods directly applied to sequence contigs without initial genome assembly may be developed. Despite the challenges in analyzing short sequences, a method has been proposed to detect LGT in metagenomic sequences, which consist of contigs from different species in an environment. This approach may provide inspiration for predicting GIs directly from contigs.
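The kind of evaluation described above can be made concrete with a nucleotide-level comparison of predicted and reference GI intervals. The sketch below is a generic illustration of precision, recall, and F-score over genome positions, not the exact metric definitions used in the cited evaluations.

```python
def to_base_set(intervals):
    """Expand (start, end) intervals into the set of covered genome positions."""
    covered = set()
    for start, end in intervals:
        covered.update(range(start, end))
    return covered

def evaluate(predicted, reference):
    """Nucleotide-level precision, recall and F1 of predicted GI intervals."""
    pred, ref = to_base_set(predicted), to_base_set(reference)
    overlap = len(pred & ref)
    precision = overlap / len(pred) if pred else 0.0
    recall = overlap / len(ref) if ref else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```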
| Clusters of genes acquired by lateral gene transfer in microbial genomes are broadly referred to as genomic islands (GIs). GIs often carry genes important for genome evolution and adaptation to niches, such as genes involved in pathogenesis and antibiotic resistance. Therefore, GI prediction has gradually become an important part of microbial genome analysis. Despite inherent difficulties in identifying GIs, many computational methods have been developed and show good performance. In this mini-review, we first summarize the general challenges in predicting GIs. Then we group existing GI detection methods by their input, briefly describe representative methods in each group, and discuss their advantages as well as limitations. Finally, we look into the potential improvements for better GI prediction. |
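For benchmarks like the S. typhi CT18 comparison described in the review above, both the predictions and the reference GIs are sets of genomic intervals, so the evaluation reduces to interval overlap. The base-level recall and precision below are one reasonable reading of such metrics, not a re-implementation of the scripts in the linked repository, and the coordinates are toy values.

```python
def interval_overlap(a, b):
    """Number of bases shared by two half-open intervals (start, end)."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))


def base_level_metrics(predicted, reference):
    """Base-level recall and precision of predicted GI intervals against
    reference GI intervals; assumes intervals within each list do not overlap."""
    ref_len = sum(e - s for s, e in reference)
    pred_len = sum(e - s for s, e in predicted)
    shared = sum(interval_overlap(p, r) for p in predicted for r in reference)
    recall = shared / ref_len if ref_len else 0.0
    precision = shared / pred_len if pred_len else 0.0
    return recall, precision


# Toy coordinates only -- a real benchmark would use published GI boundaries.
reference_gis = [(120_000, 170_000), (800_000, 860_000)]
predicted_gis = [(118_000, 150_000), (400_000, 405_000), (810_000, 870_000)]
print(base_level_metrics(predicted_gis, reference_gis))
```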
31,481 | Physical and biogeochemical controls on the variability in surface pH and calcium carbonate saturation states in the Atlantic sectors of the Arctic and Southern Oceans | Polar regions are particularly sensitive to rising temperatures and increasing atmospheric CO2 concentrations.Arctic sea-ice has decreased in summer extent and thickness over the last few decades, and observed warming in this region is currently almost twice that in the Northern Hemisphere as a whole.In contrast, Antarctic sea-ice extent has remained steady or even increased, and warming follows the general global trend, but strong regional differences exist.In particular, the Western Antarctic Peninsula experiences higher rates of warming with decreasing sea-ice and retreating ice shelves.Ocean acidification in polar regions adds pressure to already stressed ecosystems.The uptake of CO2 by the oceans alters the chemistry of seawater and increases dissolved inorganic carbon and bicarbonate ion concentrations, and reduces carbonate ion concentrations, calcium carbonate mineral saturation states and pH.High-latitude oceans have naturally lower pH, Ω and buffering capacity due to a higher solubility of CO2 in their cold waters, and are expected to be the first to experience undersaturation for calcium carbonate minerals.In fact, surface waters of the Arctic Ocean already experience seasonal undersaturation due to increased sea-ice melt, river runoff and Pacific water intrusion.Undersaturated waters can be corrosive to calcifying organisms which lack protective mechanisms and/or rely on seawater pH for calcification.Pteropods living in polar waters will be most affected by a decrease in aragonite saturation state, and live shell dissolution has been reported in the Southern Ocean.However, the response of marine organisms and ecosystems to ocean acidification is still unclear and also depends on other factors, such as nutrient availability, species interactions, and the previous exposure history of an organism to high pCO2 waters.Therefore it is important to characterise the spatial variability of carbonate chemistry in polar oceans in order to determine the environmental conditions that marine organisms currently experience.The carbonate system in polar oceans has high natural variability and strong spatial gradients, which makes it challenging to distinguish natural processes from perturbations resulting from a gradual uptake of anthropogenic CO2.In order to predict the future impact of climate change on the polar carbonate system we must therefore understand the interactions between the physical and biogeochemical processes that drive the spatial variability.Biological processes are reported to have a strong influence on variations in DIC, pH and Ω.During summer months polar surface waters have typically higher calcium carbonate saturations states due to intense primary productivity, as shown in the Arctic shelf seas including the Chukchi Sea, the Amundsen Gulf and the Barents Sea Opening, and also in western and eastern Antarctica.This results in more favourable pH and Ω conditions for organisms during summer.Robbins et al. 
found sea-ice melt to be an important driver for the carbonate system in areas with low biological activity in the Arctic, such as the Canada Basin and Beaufort Sea.Glacial melt and river runoff were also found to drive low saturation states in the sub-polar Pacific.In this study, we determine the processes that drive carbonate chemistry, including pH and aragonite saturation state, in the Atlantic sectors of the Arctic and Southern Oceans during their respective summers.A unique aspect of this study is formed by the availability of dissolved iron data, which allows a detailed assessment of the conditions for biological productivity, relevant to CO2 uptake.Our aim is to focus on the spatial variability of the carbonate system in surface waters.For this, we include the use of water column data to infer changes from the subsurface to surface waters, and determine the processes that result in large horizontal gradients in the study regions.We first present the environmental setting for the cruises followed by total alkalinity and DIC distributions, before presenting pH and Ωar distributions.The physical and biogeochemical processes controlling surface pH and Ωar are investigated with respect to seasonal changes in the Southern Ocean and through horizontal gradients in the Arctic Ocean.Variable reduction through a principal component analysis confirmed the different forcing factors in the two polar oceans.As the seasonal ice zone is changing rapidly, an improved understanding of how this will affect the carbonate system in these regions is critical.Therefore, we finally focus on these areas and investigate in detail the processes that drive pH and Ωar in both polar oceans upon sea-ice retreat.This manuscript, finally, also provides a carbonate chemistry context for other manuscripts in this special issue.Data were collected on two cruises of the UK Ocean Acidification Research Programme to the polar regions.The cruises covered the Atlantic sectors of the Arctic and Southern Oceans: the Nordic and Barents Seas in the Arctic, and the Scotia and Weddell Seas in the Southern Ocean.The main aim of the cruise programme was to investigate the effects of ocean acidification on biogeochemical processes and polar ecosystems, through field observations across carbonate chemistry gradients and CO2 perturbation experiments, Rees et al., Peck et al., Hartmann et al., Cavan et al. 
and Le Moigne et al.).The Nordic Seas connect the Arctic and Atlantic Oceans, and are the site of heat, freshwater and mass exchange between the two basins.Warm salty Atlantic waters flow northwards into the Arctic Ocean, and cold fresh Arctic waters southwards, forming a strong natural gradient of water masses.The Norwegian Atlantic Current splits at the Barents Sea Opening into the West Spitsbergen Current which flows northwards into the Arctic Ocean through the Fram Strait, and the North Cape Current which feeds Atlantic waters into the Barents Sea.Part of the WSC recirculates westwards at the Fram Strait and mixes with the southflowing polar waters.Polar waters exit the Arctic Ocean on the western side of the Fram Strait and follow the East Greenland Current along the Greenland Shelf towards the Denmark Strait.Part of the polar waters from the EGC branch eastwards into the central Nordic Seas as the Jan Mayen Current and the East Iceland Current.Sea-ice cover in the Nordic Seas is largely confined to the EGC, extending as far as the Denmark Strait in winter, and progressively retreating northwards towards the Fram Strait during summer melt.The majority of Arctic sea-ice is exported through the Fram Strait and the high interannual variability of sea-ice cover in the Greenland Sea is correlated to the variability in sea-ice export.Unfavourable winds result in low sea-ice cover, while years of low summer Arctic sea-ice cover result in a higher extent and concentration in the Greenland Sea due to easier movement of the Arctic ice mass.In the Barents Sea, sea-ice covers the northern polar waters during winter, and exposes them completely during summer.The winter sea-ice edge has retreated north-eastwards over the last decades, due to advection of warm Atlantic waters further north.The Nordic Seas and Barents Sea are some of the most productive seas in the Arctic Ocean due to a combination of high macronutrient and iron supply, deep winter mixing and extensive ice free regions during summer periods creating favourable light conditions.In the seasonal ice zones of the Fram Strait, along the East Greenland Shelf and in the Northern Barents Sea, sea-ice edge blooms are short-lived but intense, with high biomass accumulation due to strong stratification.In contrast in the deep basins of the Nordic Seas and the Southern Barents, where freshwater inputs are minimal, increased vertical mixing results in lower chlorophyll concentrations that can be sustained over a prolonged period.The eastward-flowing Antarctic Circumpolar Current dominates the circulation in the Southern Ocean, facilitating exchange with the major ocean basins.Wind-driven upwelling in this region is a key component of the meridional overturning circulation.The ACC separates sub-tropical waters to the north from Antarctic waters to the south.It consists of current jets associated with north to south fronts in water mass properties: the Sub-Antarctic Front, the Polar Front and the Southern Antarctic Circumpolar Current front.The ACC and associated water masses occupy most of the Scotia Sea.The southern boundary of the ACC, south of the SACCF, marks the southern limit to these water masses.South of the ACC lies the Weddell Sea, separated from the ACC by the Weddell-Scotia Confluence, a mixture of waters from the Scotia and Weddell Seas with shelf water from the Antarctic Peninsula.During austral winter, sea ice fully covers the Weddell Sea and the south-eastern part of the Scotia Sea, including the South Sandwich Islands.Summer 
sea-ice melt exposes the eastern Weddell Sea, while the western side remains covered throughout the year.However, recent years have seen anomalously high sea-ice concentrations in the Weddell Sea, with summer sea-ice extending further north and west, and this was also the case in 2013.Large parts of the Southern Ocean are classified as ‘high-nitrate low-chlorophyll’, which is related to limitation of phytoplankton growth by a limited supply of iron in these macronutrient replete waters.The Scotia and Weddell Seas have a relative high productivity compared to rest of the HNLC waters in the Southern Ocean.Favourable topography and eddies supply iron to the region, facilitating bloom formation.In particular, the South Georgia bloom in the Scotia Sea is the largest and most prolonged bloom in the Southern Ocean.The two polar cruises were conducted on board the RRS James Clark Ross.The Arctic cruise, JR271, covered the Nordic Seas, Barents Sea and Denmark Strait, and the Southern Ocean cruise, JR274, covered the Scotia and Weddell Seas.We collected water column and underway surface water samples for all variables, but the focus of this paper is on spatial variability of surface waters.Water column data was used to infer changes in surface water and is not fully described.In the interest of clarity it is worth mentioning that unless specifically stated, description of variables throughout the paper refers to surface water properties.Spatial resolution of surface water sampling during the Arctic cruise was higher than during the Southern Ocean cruise, due to the successful deployment of a pH sensor during that cruise, as detailed below.This sensor was also deployed during the Southern Ocean, but due to technical issues the data was not of sufficiently high quality and therefore not used for the analysis in this study.Surface ocean temperature and salinity from the underway seawater supply were logged continuously using a shipboard thermosalinograph.Measurements were averaged to one minute resolution, and calibrated using the conductivity-temperature-depth profiler data.During the Arctic cruise, a spectrophotometric pH instrument sampled every 6 min from the ship׳s underway seawater supply; samples for surface DIC and TA were collected every one-to-two hours from this supply.Discrete samples for DIC and TA in the water column were obtained from the CTD casts using 20 L Ocean Test Equipment bottles.All DIC and TA samples were collected into 250 mL Schott Duran borosilicate glass bottles using silicone tubing and poisoned with 50 µL of saturated mercuric chloride solution after creating a 2.5 mL air headspace.Samples were immediately sealed shut with ground glass stoppers and stored in the dark until analysis.All Arctic samples were analysed on-board within 36 h of collection using a VINDTA 3C instrument.The DIC was measured by coulometric titration and TA by potentiometric titration and calculated using a modified Gran plot approach.Due to malfunctioning of the coulometer on the VINDTA 3C during the Southern Ocean cruise, one-third of samples were analysed on-board with a DIC analyser which uses non-dispersive infrared detection, with subsequent TA analysis on the VINDTA 3C system.The remainder of the samples were analysed at the National Oceanography Centre, Southampton using a VINDTA 3C instrument for both DIC and TA.Measurements were calibrated using certified reference material obtained from A.G. 
Dickson.The 1σ measurement precision was calculated as the absolute difference between sample duplicates divided by 2/√π, and was ±3.8 and ±1.7 µmol kg−1 for DIC and TA, respectively, for the Arctic Ocean.For the Southern Ocean, overall DIC precision was ±1.3 µmol kg−1 for measurements with the Apollo and ±3 µmol kg−1 for measurements with the VINDTA; TA precision was ±2 µmol kg−1.The measurement technique was based on a colorimetric method using thymol blue as a pH indicator.pH was determined on the total pH scale.Measurements were made every 6 min with a precision of ±0.001 pH.Bottles of tris pH buffer and DIC/TA CRM provided by Prof. A.G. Dickson were analyzed at 25 °C during the cruise to determine the accuracy and stability of the pH measurements.No trend was detected in the tris buffer pH measurements and the accuracy was determined to be 0.006 pH units.The thymol blue extinction coefficients were determined in the laboratory and the indicator׳s dissociation constant taken from Zhang and Byrne.The full suite of carbonate system variables was calculated from the discrete DIC and TA measurements, with accompanying phosphate, silicate, temperature and salinity data, using version 1.1 of the CO2SYS programme for MATLAB.We used the carbonic acid dissociation constants of Mehrbach et al. refitted by Dickson and Millero, the boric acid dissociation constant of Dickson, the bisulphate ion acidity constant of Dickson and the boron-to-chlorinity ratio of Lee et al.pH is reported using the Total pH scale).The carbonic acid dissociation constants of Mehrbach et al. are only characterised above temperatures of 2 °C, while temperatures below this were observed in some regions in both the Arctic and Southern Ocean cruises.We tested the validity of the constants of MER for the cold water regions by comparing pH and Ωar values obtained using dissociation constants characterized for low temperatures; Lueker et al., 2000; Roy et al., 1993).Studies into internal consistency of measurements of carbonate chemistry variables have shown that the constants of ROY yield better results in cold polar waters, while the constants of MER provide better results in warmer waters.In an intercomparison study, Ribas-Ribas et al. 
found that for warmer water, calculations using constants by ROY were inconsistent with those by MER and LUK.Our dataset includes results from areas with cold waters, but the majority of our datapoints were in areas with sea-surface temperature above 2 °C.While we found no significant difference when using either ROY, MER, LUK or GOY constants, with the maximum difference within calculation error, we decided to use the constants of MER since the mean temperature during both cruises was above 2 °C.A salinity-temperature relationship for TA was derived for the Arctic cruise from discrete underway surface water samples, and this was used to increase the spatial resolution of surface water TA using the high-resolution temperature and salinity data of the underway logger.The calculated TA was matched with the underway pH measurements, and used to calculate all other carbonate system variables in the Arctic surface ocean, again using CO2SYS, with the same constants as for the discrete DIC and TA samples.Water samples for dissolved oxygen were collected from selected CTD casts to calibrate the CTD oxygen sensor.Seawater was drawn directly into volume-calibrated glass bottles via Tygon tubing and then fixed by addition of manganese chloride and sodium hydroxide/sodium iodide solutions.After thorough mixing and time for the fixing reaction to complete DO was determined by Winkler titration, using a Winkler Ω-Metrohm titration unit with amperometric determination of the titration end-point, and reagents prepared prior to cruise following Dickson.Precision was ±1 μM or better.Apparent oxygen utilisation for the water column profiles was calculated as the difference between the oxygen concentration expected at equilibrium with the atmosphere at the temperature and salinity observed in-situ, and the oxygen concentration measured.Discrete samples for inorganic nutrients were collected from CTD casts and the underway water supply at the same intervals as the DIC and TA samples.Inorganic nutrients were measured using a Skalar Sanplus segmented-flow autoanalyser following Kirkwood.Precision was ±0.04 μM for nitrate+nitrite, ±0.007 μM for phosphate and ±0.01 μM for silicate, and accuracy was 0.003 μM, 0.001 μM and 0.002 μM respectively.Samples for dissolved iron were collected using trace metal clean towed fish positioned at 3–4 m depth, an acid-cleaned PVC hose and a Teflon bellows pump.The seawater was pumped into a clean container and filtered using a 0.2 µm poresize filter cartridge.The samples were stored in acid cleaned 125 mL low density polyethylene bottles and acidified on-board.Sample analysis was conducted at NOCS using an isotope dilution technique in conjunction with ICP-MS analysis, following Milne et al.Seawater standards were preconcentrated and analyzed with each batch of samples, in order to validate our sample concentration.Values obtained for the seawater standards agreed with reported values for the GEOTRACES and the SAFe standard seawater, GEOTRACES D: 1.00±0.04 nmol Fe L−1).The precision for replicate analyses was between 1% and 3%.During the Arctic cruise, samples were collected to measure the stable isotopic composition of DIC following the same procedure as for DIC and TA except that smaller, 100 mL bottles, were used and these were poisoned with 20 µL of saturated mercuric chloride solution and had 1 mL air headspace.These samples were analysed at the Scottish Universities Environmental Research Centre – Isotope Community Support Facility in East Kilbride using a mass spectrometer 
coupled to a sample introduction system.The measurements are reported relative to the Vienna Peedee Belemnite international standard, and have a 1σ precision – based on duplicates – of 0.1‰.Further details of the analytical and calibration procedures can be found in Humphreys et al.Daily fields of sea ice concentration were produced from satellite data by the Operational Sea Surface Temperature and Sea Ice analysis run by the UK Meteorological Office.The sea ice data at a spatial resolution of 1/20° were spatially interpolated onto the cruise track to provide a time series of daily sea ice concentrations.The mean positions of the ACC fronts for the Southern Ocean study region were defined from satellite-derived sea surface height fields.Daily fields of near-real time absolute dynamic topography, gridded at 1/4° resolution, were obtained from Aviso for the period of the cruise.The mean of the daily fields was contoured to represent the frontal positions.The subsurface Winter Water layer south of the Polar Front in the Southern Ocean was used to estimate the sea surface distributions of variables in the preceding winter, and so calculate seasonal changes.By using the core of the winter water layer the effects of diffusion and vertical mixing on physicochemical properties was minimised.However, surface waters can be subject to stronger advection than subsurface waters, and the winter water layer then may not be representative of the winter conditions of the surface water directly above it.The temperature of the winter water layer in the southern part of the cruise track was colder than waters further north and was close to the freezing point of seawater, reflecting the presence of sea-ice during the previous winter.Therefore it is valid to assume that winter water layer in the Weddell Sea represents conditions during the previous winter.Winter water temperatures in the ACC in the northern part of our transect ranged between –0.8 °C and 1.9 °C, becoming warmer from south to north, in agreement with climatological temperature distributions for August across the Scotia Sea.Therefore these temperatures agree over broad spatial scales and we assume in our WW analysis that advection of the surface and subsurface waters was similar and that the subsurface WW waters reflect surface water conditions in the preceding winter.Principal component analysis of standardized variables was performed using MATLAB.This approach provides a good representation of variation within the data.In case there are associations between the variables, the first two or three components will usually explain the majority of the variation in the original variables, which can then summarise the patterns in the original data based on a smaller number of components.Principal component analysis is an ordination in which samples, regarded as points in a high-dimensional variable space are projected onto a best fitting plane.The axes capture as much variability in the original space as possible, and the extent to which the first few PCs allow an accurate representation of the true relationship between samples in the ordinal high-dimensional space is summarised by the percentage variation explained.The water masses encountered in the surface ocean during the cruises and their physicochemical properties are presented in Table 1.Warm salty Atlantic Waters, were observed in the Norwegian Seas, southern Barents Sea and along the west coast of Svalbard.Recirculating Atlantic Waters were found in the Greenland Sea.Two major fronts, evident 
from salinity variations, separated the Atlantic waters from cold Polar waters: the Polar front in the northernmost region of the Barents Sea and the East Greenland Front in the Fram Strait, northeast of Jan Mayen Island and in the Denmark Strait.Very cold Polar Surface Water was found on the western side of the Fram Strait below the ice.Sea-ice melt over the Greenland Shelf and in the Northern Barents Sea, reduced salinity and contributed to the formation of warm Polar Surface Water in these areas.Glacial melt lowered salinity in Kongsfjorden in Svalbard, and the reduced salinity signal in the southernmost region of the Barents Sea transect is characteristic of the freshwater influenced Norwegian Coastal Current.In the Southern Ocean, the ACC defined a clear north–south trend in surface water mass properties.The region is divided into water mass zones according to the temperature-salinity relationships, separated by fronts approximated from surface dynamic height.The Polar Frontal Zone, between the sub-Antarctic Front and the Polar front, the Southern Zone, between the Polar front and southern Antarctic Circumpolar Current front, and the Antarctic Zone between the SACCF and the Southern Boundary, form the ACC.South of this lies the Weddell Scotia Confluence and Weddell Sea.Salinity only varied slightly across the ACC, but temperature gradually decreased southwards from the still relatively warm waters of the PFZ, through the SZ to the colder AZ.The coldest waters were found in the WS, where sea-ice melt had lowered salinity.The low nutrient concentrations in the Arctic contrasted with the high levels in the Southern Ocean.In the Arctic, nitrate concentrations were generally low with near depletion in the warm PSW, NCC and Kongsfjorden.High silicate concentrations and low nitrate-to-phosphate ratios indicate the Pacific origin of the PSW of the EGC found in the Fram Strait.In the Southern Ocean, nitrate concentrations were high, although lower concentrations were observed around South Georgia and the South Sandwich Islands.Silicate concentrations increased from ca. 5 µmol kg−1 in the PFZ to ca. 65 µmol kg−1 in the Weddell Sea, reflecting the pronounced and also preferential uptake of silicate by diatoms in Southern Ocean waters.Surface water TA and DIC distributions for the Arctic and Southern Ocean cruises are shown in Fig. 
2.In the Arctic, higher TA values corresponded to the AW in the central Norwegian Sea, southern Barents Sea and the region west of Svalbard.Slightly lower TA was found in the AW in the southern Norwegian Sea and Greenland Sea.In the Fram Strait and Denmark Strait the inflow of low salinity polar waters, combined with localised sea ice melt inputs, reduced TA to ca.<2200 μmol kg−1.Freshwater river runoff was observed close to Svalbard, with the enhanced salinity Atlantic water TA signal decreasing near Kongsfjorden by ~40 μmol kg−1.Fresh polar waters also decreased TA in both the Barents Sea opening and off the coast of Jan Mayen.These values agree with previous reports of higher TA for the Atlantic inflow to the Nordic and Barents Seas, and lower values for colder Arctic waters.In the Southern Ocean, the highest TA values were found in the Weddell Sea and the lowest in waters north of the Polar Front, in the region between South Georgia and the Falkland Islands.This range is similar to that previously reported for the Weddell-Scotia Confluence area.While the overall trend was a southwards increase in TA across the circumpolar jets, large variability was observed within each of the water mass zones.Sea-ice melt decreased TA around the South Orkney Islands and in the Weddell Sea area.Terrestrial freshwater inputs close to South Georgia and the Falklands Islands also decreased TA locally.North of South Georgia an area of low TA was also observed.TA-Salinity relationships indicate conservative mixing in the Arctic, with two different regimes, as typically observed in the Nordic Seas.Waters on the western side of the Fram Strait had a high zero-salinity end-member of 1230 μmol kg−1, while the remaining areas had a lower TA0 of 403 μmol kg−1.In the Southern Ocean TA-Salinity relationships indicate conservative mixing in the Weddell Sea and southern part of the Scotia Sea with a low alkalinity end-member.In other areas, mixing appears to be non-conservative, with TA concentrations varying by ca. 
100 μmol kg−1 for minor variations in salinity.The different mixing regimes for TA in the surface waters of the Arctic and Southern Oceans indicate control by different processes.The low TA end-member in the Arctic is consistent with sea ice melt, while the higher TA end-member is consistent with freshwater inputs from Siberian rivers with elevated TA.Siberian river runoff is carried by the Transpolar Drift across the Central Arctic and out through the Fram Strait.Polar waters under the ice on the western side of the Fram Strait had high silicate content, low N:P ratios, and high TA0, consistent with the transport and mixing of Pacific Ocean-origin waters with Siberian river waters across the Arctic.In the Southern Ocean, conservative mixing of TA and salinity in the Weddell Sea and southern Scotia Sea indicates mixing with sea-ice melt.The non-conservative behaviour of TA in the ACC can be partly explained by the southwards shoaling of deep water masses.These deep waters have higher TA concentrations, resulting in a gradual increase in TA with dynamic height across the water mass zones in the ACC, with little change in salinity.However, large variations in TA were observed within the Southern and PF Zones.The decoupling of TA and salinity in the Southern Ocean observed in the PCA is consistent with a non-conservative behaviour and indicates other processes drive these variations in TA.These are discussed in detail in Section 5.1.2.In the Arctic, the highest surface DIC concentrations were observed in the Greenland Sea, somewhat higher than values reported by Miller et al.Atlantic waters in the Norwegian Sea and southern Barents Sea had slightly lower DIC concentrations than those in the Greenland Sea but with larger variability, comparable to values of 2080 μmo kg−1 reported by Findlay et al.Similarly, large variations were observed on the Atlantic influenced WSC around Svalbard.The fresher Norwegian Coastal Current showed lower DIC off the coast of Norway.The above values agree with the summer range reported for Atlantic waters in the Nordic Seas.Lower DIC concentrations in the northern Barents Sea were due to the intrusion of Polar waters, which have lower DIC than Atlantic waters.Other areas with Polar waters, such as west of Jan Mayen and north of Iceland, also had lower DIC concentrations.Particularly large freshwater inputs in the Denmark Strait resulted in very low DIC concentrations.Polar waters in the Fram Strait had very large variability: those on the westernmost side had high DIC while those close to the polar front had much lower values.This range is in agreement with values reported for the northern EGC by Yager et al.DIC concentrations ranged between 2100 μmol kg−1 and 2150 μmol kg−1 over much of the Southern Ocean transect.Higher DIC concentrations were observed in the southern region of the Drake Passage, southeast of South Georgia and in the southernmost area in the Weddell Sea.Lower DIC concentrations were observed close to the South Sandwich Islands, northwest of South Georgia and close to the Falkland Islands.The range of values observed is slightly lower than reported previously for a region spanning the ACC and Weddell Sea, but within the variability driven by physical and biogeochemical processes of the system.In this section we describe the overall distributions and patterns of surface pH and Ωar, and in Sections 5.1 and 5.2 we discuss the finer-scale variations and processes that drive them.Surface pH and Ωar were typically lower in the Southern Ocean than in the 
Arctic.Arctic pH ranged between 8.00 and 8.45, and Ωar between 1.1 and 3.1.These values are in agreement with previous observations in the Norwegian Sea and at the entrance to the Barents Sea.In the Southern Ocean pH ranged between 7.90 and 8.30 and Ωar between 1.2 and 2.6, consistent with summer values found in Western Antarctica and in Prydz Bay.The Arctic spanned a wider range with pronounced differences between the water masses.The Norwegian Sea, Greenland Sea and southern Barents Sea had pH of ca. 8.15, with higher values in the northern Barents Sea and around Svalbard.Elevated pH was also observed in the southern part of the Norwegian Sea, in the Denmark Strait and around Jan Mayen.The lowest and highest pH values were found in the polar waters of the Fram Strait.Overall, Ωar distributions followed a similar pattern to pH distributions.In the central Norwegian Sea and Barents Sea Ωar was between 2 and 2.3 on average, while the waters south and west of Svalbard generally had higher Ωar.The Greenland Sea had slightly lower Ωar, although higher values were observed west of Jan Mayen.The southern Norwegian Sea and the area north of Iceland just east of the Denmark Strait also had higher values, but Ωar decreased to 2.0 in the Denmark Strait.Near corrosive waters occupied the westernmost part of the Fram Strait, while the adjacent waters in the middle of the Strait had higher Ωar values.In the Southern Ocean, the overall distribution pattern of pH was similar to Ωar, although the increase in pH with decreasing temperature across the fronts was not evident in Ωar values.Spatial variability within the water mass zones was observed.In general, waters north of the polar Front had the lowest pH and Ωar values, but similar low values were also observed north of South Georgia and in the southern part of Drake Passage.The highest pH and Ωar values were observed in the vicinity of the major landmasses, South Georgia and the South Sandwich Islands.It is necessary to understand the processes controlling changes in DIC and TA before considering the changes in derived variables like pH, Ω and pCO2sw that may be of more general interest to the ocean acidification community.In the Southern Ocean, these underlying processes have been inferred from seasonal changes, using the WW layer as a proxy for surface conditions during the preceding winter, as described in Section 3.4 and following e.g. Jennings et al.The net changes in DIC, TA and DIN from winter to summer are shown in Fig. 6, with the upper water column DIC and TA distributions shown in Fig. 
S5.At virtually every sampling station, all of these variables had decreased to some extent from winter to summer.Freshwater is known to dilute DIC, TA and DIN, as was observed in the southernmost part of the cruise, where sea-ice had recently retreated, and at the single sampling station closest to South Georgia.The size of the freshwater component in these areas relative to the net winter-to-summer change was most important for TA, followed by DIC and DIN.The biological component of ΔDIN was high, falling entirely between 80% and 100% of the net winter-to-summer DIN change.However, converting this ΔDINbio into ΔDICbio revealed additional variation.The ΔDICbio can be considered to take a background value of around 100% of net ΔDIC, and it deviates from this in areas that can be put into two categories.Firstly, the areas that have been identified as having a high freshwater influence – near the recently-retreated ice edge and directly adjacent to South Georgia – had a lower ΔDICbio relative to net ΔDIC than this background value, with ΔDICbio reaching a minimum of around 20%.This suggests that in these areas, there had not been any significant primary production in the surface ocean following sea-ice retreat; this is supported by satellite observations of chlorophyll-a, which remained very low in these areas throughout the cruise period.Conversely, to the north of South Georgia and near the South Sandwich Islands, the ΔDICbio component was over 150% of the net ΔDIC, indicating enhanced primary productivity and biological uptake of inorganic carbon in these blooms.The HNLC waters of the Southern Ocean are characterised by very low dissolved iron concentrations as a result of low atmospheric Fe supply and an unfavourable N:Fe ratio in deep waters supplied to the surface ocean through deep winter mixing and advective processes with rapid dFe depletion following the on-set of spring phytoplankton blooms.Where inputs of iron do occur, such as in the shelf region surrounding South Georgia and the South Sandwich Islands, they provide a relief to iron limitation of phytoplankton.This was the case during our study, as indicated by enhanced iron in waters downstream of South Georgia and the South Sandwich Islands, coinciding with enhanced chl-a in both regions.A seasonal depletion of DIC and fCO2 in the bloom region north of South Georgia has also been reported by Jones et al.Further supporting the attribution of this component, in situ measurements of mixed layer particulate organic carbon production rates during the Southern Ocean cruise revealed relatively high values at stations in both of the bloom regions.However, these instantaneous measurements of POC production are not expected to perfectly mirror the seasonally-integrated changes that are measured by ΔDICbio, because of the different timescales.The ΔTAbio was mostly negative with respect to net ΔTA – that is, the biological impact on TA – excluding calcification – had been to increase TA.The most extreme ΔTAbio values, relative to net ΔTA, were associated with the bloom to the north of South Georgia.The ΔTAbio was a smaller fraction of net ΔTA in the South Sandwich Islands bloom because of the higher net ΔTA there.Subtracting the components ΔTAfw and ΔTAbio from net ΔTA leaves a residual that can be attributed to calcium carbonate precipitation and dissolution, ΔTAca.As a fraction of net ΔTA, the ΔTAca has a more complex pattern than the other components so far discussed.The highest values, sometimes exceeding 100%, were observed in the 
blooms to the north of South Georgia and near the South Sandwich Islands, and values near 100% were observed in the southwestern part of the cruise.Satellite observations provide some evidence of the presence of particulate inorganic carbon production in this area during and immediately preceding the cruise.A third independent line of evidence is that relatively high in situ calcification rates were measured during the cruise at stations in the South Georgia and South Sandwich Islands bloom.The different lines of evidence do not agree in all places; low in-situ calcification rates were observed in the southwestern part of the cruise.Satellite observations of high PIC concentrations which are indicative of preceding enhanced rates of calcification, and in-situ sampling, have shown that elevated abundances of the coccolithophore Emiliania huxleyi in the vicinity of South Georgia can be sustained over a prolonged period.Northwest of South Georgia, the area of low current speed within the cyclonic flow around the Georgia Basin acts to retain this material.Taking a bloom abundance of 200 cells mL−1, this requires an accumulated TA uptake of 0.2 μmol kg−1 in this area, which does not nearly account for the high ΔTAca, even if multiple preceding similar blooms successively grew, calcified and then sedimented out.Advection of lower TA waters from north of the Polar Front through eddies followed by mixing could also cause a TA decrease in this area, which would necessarily be included in the calculation of the ΔTAca component in case the eddies did not influence the WW layer.In the regions strongly affected by freshwater inputs – adjacent to South Georgia and where sea-ice had recently retreated – ΔTAca took negative values relative to net ΔTA.This can possibly be explained by calcium carbonate dissolution occurring in these regions.Dissolution of ikaite crystals during sea-ice melt could have contributed TA to the surface waters.The freshwater that was supplied hence did not have zero TA, contrary to the assumption that was made in the calculation of the ΔTAfw component.Glacial freshwater inputs in the vicinity of South Georgia are likely to have enhanced TA, thereby contributing to the observed ‘dissolution’ signal.Finally, the ΔDICresid was calculated from the net ΔDIC and its components that have been discussed.The sum of the ΔDICfw, ΔDICbio and ΔDICca was typically greater than net ΔDIC, leading to ΔDICresid taking negative values as a fraction of net ΔDIC.This means that there was mostly a net uptake of atmospheric CO2 by the ocean, by direct air-sea gas exchange, from winter to summer.The strongest negative ΔDICresid, relative to net ΔDIC, was found in the South Georgia and South Sandwich Islands bloom, and this was dominantly driven by the very high ΔDICbio in those areas which was stimulated by shelf derived iron inputs; primary productivity is therefore a key control on air-sea CO2 exchange and seawater carbonate chemistry in this region, as previously reported.Values of ΔDICresid close to zero were observed in the areas that had recently experienced sea-ice retreat such as the Weddell Sea; here, the net winter-to-summer change in DIC was effectively entirely accounted for by the freshwater input, and there was not any significant primary productivity.As mentioned in the previous section, it appears ikaite may have been present in the sea-ice in the Weddell Sea, contributing TA and DIC to surface waters upon sea-ice melt.The assumption of a zero end member for DIC results in an 
overestimation of the freshwater component, ΔDICfw, and an underestimation of the residual.However the contribution from ikaite was very small and did not affect the calculation of the residual term.The physical and biogeochemical processes controlling carbonate chemistry in sea-ice influenced area are discussed further in Section 5.4.Integration of ΔDICresid over the depth of the mixed layer observed at the Southern Ocean sampling stations reveals an increased dominance of the South Georgia and South Sandwich Islands blooms for atmospheric CO2 uptake.It is important to first note that this result is sensitive to the mixed layer depth, and this calculation depends on the mixed layer depths observed during a single cruise.In case they are not representative of typical conditions, then this integrated value will also be erroneous.Nevertheless, if we assume that the ΔDICresid is evenly distributed through the mixed layer, we find net seasonal air-to-sea CO2 fluxes of up to −50 g-C m−2 in these bloom areas.This is consistent with CO2 flux estimates for this region based on a global climatology of pCO2sw measurements.Our observations also indicate that the Scotia Sea was overall a sink of CO2 with net seasonal air-to-sea CO2 fluxes of up to ca.−12 g-C m−2 in agreement with a report by Schlitzer, with areas in the central region acting as a source, in agreement with austral summer observations by.The high pH and Ωar values in the vicinity of South Georgia corresponded to areas with a strong uptake of DIC due to primary production, which considerably increased pH and Ωar.While the decrease in DIC and TA in this area attributed to calcium carbonate production also decreased pH and Ωar, the comparatively larger organic carbon production counteracted this and resulted in enhanced pH and Ωar values north of South Georgia.In the waters immediately north of South Georgia, low pH and Ωar corresponded to areas where the calcium carbonate term was negative, but biological production was relatively low compared with the region further north of South Georgia.Southwards advection, through eddy formation, of waters with decreased pH and Ωar from north of the polar front to the region directly north of South Georgia, is a possible supply mechanism.These processes resulted in an area north of South Georgia with strong horizontal gradients.Freshwater inputs had only minor effects on pH and these were limited to the southern areas where sea-ice had melted, or to the glacial runoff close to South Georgia.A similar pattern was observed for the calcium carbonate saturation state but with larger effects.Variations in pH and Ωar due to sea-ice melt are discussed in detail in Section 5.4.In contrast to the Southern Ocean, our study area in the Arctic does not experience the temperature inversion that allows the calculation of winter water values and a seasonal evaluation of carbonate chemistry controls.Here we use horizontal variations in surface carbonate chemistry in combination with upper water column data to elucidate the processes that drive the surface gradients in pH and Ωar.The upper water column in the Arctic showed strong gradients in both TA and DIC resulting from the presence of different water masses and sea ice across the Norwegian and Greenland Seas, moving from warm, salty Atlantic waters to cold, ice-covered Polar waters.In addition, the strong gradients in carbonate chemistry variables in the Fram Strait, highlighted the role of advection in this area, where strong stratification prevents vertical 
mixing.The surface gradients in pH and Ωar are equally as sharp and appear to correlate with temperature and salinity changes, particularly in the polar water influenced Greenland Sea and Fram Strait, suggesting that the physical processes controlling water mass composition largely determined surface pH and Ωar in these regions.Interestingly, a decoupling between pH and Ωar was observed over strong temperature and salinity gradients in waters featuring low productivity, both in the northern part of the Barents Sea transect and in the Denmark Strait.In these regions, the salinity derived TA and DIC decreases coincided with a temperature decrease from 6 to 0 °C.In the northern Barents Sea, for example, the pH increased relative to the south due to the temperature decrease, with pHT=6 ca. 8.15 and pHT=0 ca. 8.25.However, Ωar did not show a similar increase due to the much smaller temperature sensitivity, and since both TA and DIC were reduced in equal proportion, Ωar was only slightly lower as a result of the lower calcium ion concentrations in the fresher waters.Elevated surface pH and Ωar values along the ice edge corresponded to lower nitrate concentrations indicating biological CO2 uptake.Abundant iron and high chl-a concentrations corresponded to low DIC providing further evidence for the effect of autotrophy on surface pH and Ωar.Indeed, during the Arctic cruise phytoplankton blooms were observed along the sea-ice edge, over shallow shelves and in coastal areas as confirmed by chl-a concentrations.The surface dissolved iron concentrations in the Arctic region were high compared to the Southern Ocean, with only 3 out of 125 datapoints falling under 0.2 nM, indicating that iron limitation of primary productivity was not likely in this study region.Enhanced dissolved iron levels for the Arctic Ocean have also been reported by Klunder et al. 
mainly attributed to benthic supply in the shelf regions.The strong influence of biological CO2 uptake on pH and Ωar is clearly illustrated by maxima near Jan Mayen, in the southern Norwegian Sea close to Iceland, along the Norwegian Coast and around Svalbard, coinciding with reduced DIC and enhanced chlorophyll concentrations.Enhanced iron inputs from Svalbard are evident in its coastal waters, stimulating phytoplankton bloom development), nutrient uptake and increased pH and Ωar.A previous analysis over a full annual cycle in the Barents Sea Opening also showed the large impact of biological activity on Ωar, for 45–70% of the difference between winter and summer values.PCA was applied to allow visualisation of the relationships between the environmental forcing variables and showed different forcing factors on each of the components for the two polar regions.DIC, nitrate, phosphate and calcium carbonate saturation states determined PC1 in both the Arctic and Southern Ocean.This component was taken as an indicator of biological activity.PC2 was influenced by temperature, salinity and TA in the Arctic, but by temperature, TA and silicate in the Southern Ocean.This component was taken as an indicator of water masses.For the Arctic, PC1 explained 44.1% of the variation whilst PC2 captured 26.7% of the variation.For the Southern Ocean, PC1 explained 38.8% of the variation whilst PC2 captured 32.6% of the variation.The relative variations of the two principal components along the cruise sections indicate a strong coupling between physical processes and biological activity in the Arctic, while this was not the case for our transect in the Southern Ocean.In the Arctic, the identified principal components varied almost identically, i.e. a change in the biological signal was accompanied by a change in water masses.This is consistent with the strong correlation of gradients in pH and Ωar with temperature and salinity as discussed above.In the Southern Ocean, the principal components did not follow such a clear trend, most times displaying opposite trajectories.This indicates that biological activity did not always correlate with a change in water masses, as discussed above for South Georgia and the South Sandwich Islands, where iron supply stimulated biological activity in the otherwise HNLC Southern Ocean.Different physical and biogeochemical processes controlled the carbonate system following sea-ice melt in the Fram Strait and Weddell Sea.Temperature forms an important control on pH changes, however the strong pH gradients in the Fram Strait far exceeded the pH change caused by temperature alone.Additionally, the lowest pH values were observed in the areas with coldest waters, and corresponded with low Ωar values suggesting organic matter respiration determined these variations.TA and DIC in the Arctic region showed large gradients but they each had different relationship with salinity.TA changes corresponded largely with salinity, decreasing from the salty Atlantic waters further east to the fresher Polar waters further west.A similar trend was not evident for DIC.The low DIC concentrations in the warm PSW on the ice-edge corresponded to a pronounced phytoplankton bloom, supported by satellite chl-a.Summer blooms are known to cause strong drawdown of DIC in this region.The high DIC concentrations in PSW were observed in waters below thick ice, indicating accumulated DIC under the ice.Apparent oxygen utilisation can be used to derive biological activity as oxygen is produced on 
photosynthesis.Air-sea oxygen equilibration times can be relatively quick and therefore AOU may not fully reflect biological activity in areas which have recently been in contact with the atmosphere.The waters in the western side of the Fram Strait were under ice and are assumed to have been covered since the previous winter as these waters come from regions further upstream in the Arctic Ocean.Although gas exchange with the atmosphere can occur through sea-ice this is not well constrained and for the purposes of our study we assume air-sea exchange was limited under ice.Water column pH and Ωar showed two divergent profiles in the Fram Strait: in open waters, surface pH and Ωar values were higher with respect to subsurface waters, while in ice-covered stations they were lower.AOU correlated negatively with pH and Ωar, with higher surface pH and Ωar in productive waters and lower where respiration was apparent.A pronounced sea-ice edge bloom resulted in low DIC concentrations due to biological uptake, with consequent enhanced δ13CDIC due to preferential uptake of light δ13C by photosynthesising autotrophs, and depleted nutrient concentrations.As a result pH increased by 0.33 and Ωar by 1.6 with respect to subsurface values, as has been observed in other areas of the Arctic Ocean following biological production after sea-ice melt.The surface waters under the ice had a positive AOU due to organic matter remineralisation, as also reported by Bates et al for the western Arctic Ocean, with higher DIC concentrations suggesting respired CO2 is responsible for lowering the pH and Ωar.This respiration signal is considered to be derived from degradation of organic matter upstream in the Arctic Ocean, which accumulates in surface waters under the ice and is transported to the Fram Strait.This is also evident in the δ13CDIC observations with the high DIC concentrations matching reduced δ13CDIC due to remineralisation of biological debris with light δ13C signatures and comparatively larger DIN concentrations.In contrast, in the Weddell Sea, temperature and salinity variations were minor, and mainly the result of recent sea-ice melt over the most southern area.However, pronounced horizontal gradients in pH and Ωar were observed with minor changes in water mass properties.We use the seasonal drawdown described in Section 5.1, to explain these gradients.While pH and Ωar increased from winter to summer across the Weddell Sea, regional differences were evident.In the vicinity of the South Sandwich Islands, a large increase in pH and Ωar corresponded to a large decrease in DIN concentrations attributed primarily to the biological component.Increased shelf-derived iron inputs from the islands facilitated a bloom after sea-ice retreat and the uptake of DIN and DIC.This is in agreement with previous reports of enhanced biological activity in the Weddell Sea, and the strong control by biological activity on seasonal DIC variations.The biological activity resulted in dFe uptake, such that dFe concentrations in the vicinity of the South Sandwich Islands had been depleted whilst pH and Ωar values were enhanced.In contrast, in regions beyond 2800 nm with low iron concentrations, no bloom was observed, despite stratification and increased light availability after sea ice melt.The depth of the mixed layer determines bloom initiation, however this depth was constant across the Weddell Sea and therefore bloom formation in this case was determined by iron availability.Mattsdotter Björk et al. 
found that sea-ice melt in the Ross Sea supplied enough iron, and together with enhanced stratification, promoted an ice-edge bloom resulting in high pH and Ωar values.The difference in iron concentrations in Antarctic sea-ice can be explained by the variability of iron supply at the time of sea-ice formation.Overall, pH and Ωar in the naturally iron-fertilised region around the South Sandwich Islands increased over three times as much as in the non-fertilised area in the Weddell Sea.The effect of freshwater inputs was negligible on pH and Ωar, while temperature slightly decreased pH, as expected from the warming of surface waters from winter to summer.Biological activity was the largest factor contributing to the increase in pH and Ωar across the Weddell Sea.The residual indicated an increase in pH and Ωar, and was larger in the iron-fertilised area than in the non-fertilised area.The residual represents physical processes plus a disequilibrium term, unaccounted for by the other terms.However sources of error in the biological term, for example from the use of an inaccurate C:N ratio, could result in an unaccounted biological impact, which would be carried in the residual.If this is the case, areas with higher biological activity would necessarily have larger residuals.Spatiotemporal deviations from the Redfield ratio occur due to species composition and stage of the bloom.The C:N ratio in the South Sandwich Island bloom may have been higher than Redfield as observed in other areas of the Southern Ocean, underestimating the drawdown in DIC subsequently used to calculate the biological impact on pH and Ωar, and producing a larger residual in the iron fertilised area.Horizontal gradients in surface pH and Ωar in the Arctic and Southern Ocean were due to the prevalence of different physical and biogeochemical processes in each ocean.In the Arctic, variations in water mass temperature and salinity determined pH and Ωar through changes in TA, while biology did so primarily by forcing DIC changes.In the Southern Ocean, variations in salinity controlled pH and Ωar but these variations were small, and larger changes in TA arising from a combination of calcification, advection and upwelling, had a more pronounced impact on pH and Ωar.While biological activity in the HNLC Southern Ocean was limited, a large impact on pH and Ωar was observed in regions with enhanced iron supply, including the region north of South Georgia.The major contrasts between the ice-covered areas in the Southern Ocean and the Arctic can be attributed to differences in biological activity related to iron availability.In the HNLC Southern Ocean, sea-ice retreat only resulted in pronounced blooms and DIC uptake in regions with enhanced iron supply.In the Arctic in contrast, iron was replete, which facilitated bloom development along the ice edge, where nutrient and/or light availability may control productivity.Therefore, the Fram Strait is almost invariably a sink for CO2 upon ice retreat, while the formation of a CO2 sink in the Weddell Sea ultimately depends on iron availability.The increasing marginal ice-zone in the Arctic Ocean may favour the formation of more frequent ice-edge blooms in light-limited areas such as the Fram Strait, temporarily increasing saturation states and pH. 
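A compact way to see the budgeting behind these statements (Section 5.1 and the Weddell Sea discussion just above) is to split the net winter-to-summer DIC change into a freshwater dilution term, a biological term scaled from DIN drawdown, and a residual, as sketched below; the contrast with the Southern Ocean marginal ice zone continues after the sketch. The zero-DIC freshwater end-member, the fixed C:N of 6.6 and the use of the raw DIN change (rather than a freshwater-corrected ΔDINbio) are simplifications for illustration, and the example numbers are not cruise data.

```python
def decompose_dic_change(dic_ww, dic_s, din_ww, din_s, sal_ww, sal_s, c_to_n=6.6):
    """Split the net winter-water (WW) to summer change in surface DIC
    (umol/kg) into freshwater, biological and residual components."""
    d_net = dic_s - dic_ww
    # Dilution/concentration of winter DIC by the salinity change,
    # assuming a zero-DIC freshwater end-member.
    d_fw = dic_ww * (sal_s - sal_ww) / sal_ww
    # Biological term from DIN drawdown scaled by a C:N ratio; the article
    # first removes the freshwater dilution of DIN, which is skipped here.
    d_bio = (din_s - din_ww) * c_to_n
    # Residual: air-sea CO2 exchange plus anything unaccounted for
    # (calcification, non-Redfield uptake, advection, ...).
    d_resid = d_net - d_fw - d_bio
    return {"net": d_net, "freshwater": d_fw, "biological": d_bio, "residual": d_resid}


# Illustrative numbers only (not cruise data): sea-ice melt lowers salinity and
# a modest post-retreat bloom draws down DIN; a positive residual would be read
# as DIC added by uptake of atmospheric CO2.
print(decompose_dic_change(dic_ww=2180.0, dic_s=2120.0,
                           din_ww=28.0, din_s=22.0,
                           sal_ww=34.2, sal_s=33.6))
```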
In contrast, ice-edge blooms do not always form at the marginal ice-zone in the Southern Ocean since both iron and light co-limit primary production.The mass island effect of the South Sandwich Islands should continue to promote bloom formations after a potential future sea-ice retreat in this area, with the associated increase in pH and saturation states in downstream waters.Iron inputs from increased calving of glaciers can lead to local blooms in the HNLC Southern Ocean and could potentially increase pH and saturation state in areas of the Weddell Sea further away from the continent.These increases in both the Fram Strait and Weddell Sea can provide some temporal relief for organisms, from the expected global decrease in pH and saturation states due to anthropogenic CO2 uptake.The Arctic Ocean has been postulated to be more vulnerable than the Southern Ocean to ocean acidification, due to lower alkalinity and a less pronounced seasonal cycle, which prevents large increases in pH and Ω.In our areas of study, the Arctic had in general higher pH and Ωar than the Southern Ocean, suggesting this is not always the case.Shadwick et al. pointed out their study is not representative of the Barents Sea and Atlantic inflow areas in the Arctic, and indeed, we have previously reported large seasonal increases in Ωar in the Barents Sea Opening due to biological activity.In the present study, we also found very high pH and Ωar due to primary production in areas close to the ice-edge in the Arctic.These studies suggest that the response of the carbonate system to anthropogenic forcing will not only vary between the two polar oceans but also across them.Finally, this study provided a carbonate chemistry framework for the biological and biogeochemical variables measured during the UKOA cruises and as such, it highlights the large variability of pH and Ωar.to which polar ecosystems may be exposed.Pronounced gradients over small spatial scales may mean that organisms thriving in favourable conditions can easily become exposed to lower pH and Ωar.This may have detrimental effects on, for example, calcifying organisms with insufficient protective mechanisms.Alternatively, it may be the case that polar ecosystems have adapted to the large variations in pH and Ωar and are therefore more resilient to anthropogenic ocean acidification. | Polar oceans are particularly vulnerable to ocean acidification due to their low temperatures and reduced buffering capacity, and are expected to experience extensive low pH conditions and reduced carbonate mineral saturations states (Ω) in the near future. However, the impact of anthropogenic CO2 on pH and Ω will vary regionally between and across the Arctic and Southern Oceans. Here we investigate the carbonate chemistry in the Atlantic sector of two polar oceans, the Nordic Seas and Barents Sea in the Arctic Ocean, and the Scotia and Weddell Seas in the Southern Ocean, to determine the physical and biogeochemical processes that control surface pH and Ω. High-resolution observations showed large gradients in surface pH (0.10-0.30) and aragonite saturation state (Ωar) (0.2-1.0) over small spatial scales, and these were particularly strong in sea-ice covered areas (up to 0.45 in pH and 2.0 in Ωar). In the Arctic, sea-ice melt facilitated bloom initiation in light-limited and iron replete (dFe>0.2 nM) regions, such as the Fram Strait, resulting in high pH (8.45) and Ωar (3.0) along the sea-ice edge. 
In contrast, accumulation of dissolved inorganic carbon derived from organic carbon mineralisation under the ice resulted in low pH (8.05) and Ωar (1.1) in areas where thick ice persisted. In the Southern Ocean, sea-ice retreat resulted in bloom formation only where terrestrial inputs supplied sufficient iron (dFe>0.2 nM), such as in the vicinity of the South Sandwich Islands where enhanced pH (8.3) and Ωar (2.3) were primarily due to biological production. In contrast, in the adjacent Weddell Sea, weak biological uptake of CO2 due to low iron concentrations (dFe<0.2 nM) resulted in low pH (8.1) and Ωar (1.6). The large spatial variability in both polar oceans highlights the need for spatially resolved surface data of carbonate chemistry variables but also nutrients (including iron) in order to accurately elucidate the large gradients experienced by marine organisms and to understand their response to increased CO2 in the future. |
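The C:N discussion in the entry above implies a simple linear sensitivity of the inferred biological DIC drawdown to the assumed C:N ratio; a minimal sketch follows, with the elevated ratio and nitrate drawdown purely illustrative.

```python
# Minimal sketch of the C:N sensitivity discussed above: the biological DIC
# drawdown inferred from nitrate uptake scales linearly with the assumed C:N
# ratio, so a higher-than-Redfield bloom implies a larger biological
# contribution to the pH / saturation-state increase.  Values are illustrative.
REDFIELD_C_TO_N = 106.0 / 16.0       # ~6.6 mol C per mol N

def dic_drawdown(nitrate_drawdown_umol_kg, c_to_n=REDFIELD_C_TO_N):
    """Biological DIC drawdown (umol/kg) implied by a given nitrate drawdown."""
    return nitrate_drawdown_umol_kg * c_to_n

delta_no3 = 10.0                                  # umol/kg, illustrative
print(dic_drawdown(delta_no3))                    # Redfield ratio: ~66 umol/kg
print(dic_drawdown(delta_no3, c_to_n=8.0))        # elevated C:N:   ~80 umol/kg
```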
31,482 | Correlation between changes in functional connectivity in the dorsal attention network and the after-effects induced by prism adaptation in healthy humans: A dataset of resting-state fMRI and pointing after prism adaptation | The data represent a detailed characterization of the relationship between the primary motor cortex and the cerebellum by prism adaptation in healthy adults using fMRI. The data represent the correlation between the change ratio of FC and the amplitude of the after-effect. The group statistical analysis included a total of 19 healthy participants. A bar-shaped target with a length of 15 cm and a width of 3 mm was displayed on a 25 in. touch panel display. The target was randomly presented at 3 locations. Participants wore prism glasses that shifted the visual field to the right by 20 diopters. At the baseline, participants performed six pointing movements without a prism. In the prism adaptation (PA) phase, 90 pointing movements were performed wearing prism glasses. At the post PA phase, six pointing movements were performed without prism glasses. At the baseline and in the PA phase, the distance between the touch panel display and the desk was 5 cm. Participants were able to see their finger at the end of movements. In the post PA phase, the distance between the desk and display was 0 cm. Therefore, the participant could not see their finger at the end of movements in this phase. The fMRI data described here were recorded with a 1.5-T MR scanner, during resting state, before and after the PA session. Regions that were activated during the PA session in previous studies were selected as regions of interest (ROIs). ROIs included the primary motor cortex and the cerebellum. Preprocessing of the fMRI data was performed using Analysis of Functional NeuroImages ver. 16.0.11. The first five volumes of each fMRI scan were discarded due to unstable magnetization. Thereafter, 8800 functional volumes were corrected for the slice timing differences in each volume, head movement, and the image drift between sessions. Spatial smoothing with a 3 mm Gaussian kernel and bandpass time filtering between 0.01 and 0.1 Hz were applied. We tested FC between the M1 and cerebellum. The two correlation coefficients were converted to a normally distributed z-score by Fisher's transformation. Thereafter, they were averaged and used for ANOVA with Bonferroni correction for multiple comparisons. To evaluate the after-effect of the participants in the pointing tasks, pointing errors were recorded immediately after the PA session. Lateral displacement of the movement endpoints relative to the respective target was measured in cm for each pointing movement. The after-effect was defined as the average of the pointing errors in the six trials after the PA session. We calculated the ratio relating FC before the PA session to the FC after the PA session and evaluated the correlation between the change ratio of FC and the amplitude of the after-effect. Pearson correlation coefficients were analyzed using MATLAB. P < 0.05 was considered statistically significant. A change in FC can also be observed between the right primary motor cortex and the left dentate nucleus, and between the left primary motor cortex and the right dentate nucleus. The red line indicates the functional connectivity between the left M1 and the right dentate nucleus. The blue line indicates the functional connectivity between the right M1 and the left dentate nucleus. The green line indicates the functional connectivity between the right dentate nucleus and the left dentate nucleus. The orange line
indicates the functional connectivity between the right M1 and the left M1. Vertical axes indicate the correlation coefficient values and horizontal axes indicate the experimental phases. A further figure plots the change ratio of FC against the amplitude of the after-effect for the following pairs: the left frontal eye field (FEF) and the left IPS (blue dots), the right FEF and the left IPS (red dots), the left MFG and the left STG (orange dots), and the right MFG and the right STG (green dots). Another figure plots the change ratio of FC against the amplitude of the after-effect for interhemispheric pairs: the left FEF and the right FEF (blue dots), the right IPS and the left IPS (red dots), the left MFG and the right MFG (orange dots), and the right STG and the left STG (green dots). Fig. 4 shows the correlation between the change ratio of FC and the amplitude of the after-effect in the following pairs: the right frontal eye field and the right anterior cingulate cortex; the left frontal eye field and the left anterior cingulate cortex; the right middle frontal gyrus and the right frontal eye field; and the left middle frontal gyrus and the left frontal eye field. The blue dots indicate the left ACC and right FEF pair, the red dots the right ACC and right FEF pair, the orange dots the left MFG and left FEF pair, and the green dots the right MFG and right FEF pair. Fig. 5 shows the correlation between the change ratio of FC and the amplitude of the after-effect in the following pairs: the right primary motor cortex and the left primary motor cortex; the right dentate nucleus and the left dentate nucleus; the right primary motor cortex and the left dentate nucleus; and the left primary motor cortex and the right dentate nucleus. The blue dots indicate the right M1 and left dentate nucleus pair, the red dots the left M1 and right dentate nucleus pair, the orange dots the left M1 and right M1 pair, and the green dots the left and right dentate nucleus pair. In all of these plots, vertical axes indicate the change ratio of FC and horizontal axes indicate the amplitude of the after-effect. | It has been reported that it is possible to observe transient changes in resting-state functional connectivity (FC) in the attention networks of healthy adults during treatment with prism adaptation, by using functional magnetic resonance imaging (fMRI) (see "Prism adaptation changes resting-state functional connectivity in the dorsal stream of visual attention networks in healthy adults: A fMRI study" (Tsujimoto et al., 2018) [1]). Recent neuroimaging and neurophysiological studies support the idea that prism adaptation (PA) affects the visual attention and sensorimotor networks, which include the parietal cortex and cerebellum. These data demonstrate the effect of PA on resting-state functional connectivity between the primary motor cortex and cerebellum.
Additionally, it evaluates changes of resting-state FC before and after PA in healthy individuals using fMRI. Analyses focus on FC between the primary motor cortex and cerebellum, and the correlation between changes in FC and its after-effects following a single PA session. Here, we show data that demonstrate the change in resting-state FC between the primary motor cortex and cerebellum, as well as a correlation between the change ratio of FC and the amplitude of the after-effect. |
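A minimal sketch of the FC analysis described in this entry (Pearson correlation between ROI time courses, Fisher z-transformation, pre/post change ratio, and its correlation with the pointing after-effect) is given below with synthetic data; the original analysis was performed in AFNI and MATLAB, and all array shapes, ROI stand-ins and values here are illustrative.

```python
# Sketch of the FC analysis described above: Pearson correlation between two
# ROI time courses, Fisher z-transform, pre/post change ratio, and the
# across-participant correlation with the pointing after-effect.
# Synthetic data only; shapes and values are not taken from the paper.
import numpy as np
from scipy import stats

def fisher_z_fc(roi_a, roi_b):
    """Fisher z-transformed Pearson correlation between two ROI time courses."""
    r, _ = stats.pearsonr(roi_a, roi_b)
    return np.arctanh(r)

def change_ratio(fc_pre, fc_post):
    """Ratio relating FC after the PA session to FC before it."""
    return fc_post / fc_pre

rng = np.random.default_rng(0)
n, t = 19, 200                                   # participants, volumes per run
after_effect = rng.normal(-2.0, 1.0, n)          # pointing error in cm (synthetic)
ratios = np.empty(n)
for i in range(n):
    pre_a, pre_b = rng.standard_normal((2, t))   # stand-ins for M1 / dentate
    post_a, post_b = rng.standard_normal((2, t))
    ratios[i] = change_ratio(fisher_z_fc(pre_a, pre_b),
                             fisher_z_fc(post_a, post_b))

r, p = stats.pearsonr(ratios, after_effect)      # correlation with after-effect
print(f"r = {r:.2f}, p = {p:.3f}")
```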
31,483 | Decadal persistence of cycles in lava lake motion at Erebus volcano, Antarctica | Persistently active lava lakes are a spectacular but rare form of open-vent volcanism found at only a handful of volcanoes around the world."An active lava lake is the exposed top of a volcano's magmatic plumbing system.Longevity of such lakes has been argued to reflect either effective transfer of magma between the lake and the deeper system, or a supply of gas bubbles from depth."It can be shown experimentally that processes occurring at depth will manifest themselves at the surface as changes in the lake's behaviour, for example its surface level or gas flux.It follows therefore, that observations of lake properties can yield valuable insights into the processes occurring at depth in the magmatic system, where direct measurements are not possible.This link with the deeper magmatic system makes the study of active lava lakes of particular importance.Erebus is a 3794-m-high stratovolcano located on Ross Island, Antarctica.It is often claimed to be the southernmost active volcano in the world and is known to have hosted an active phonolite lava lake since at least 1972.Although other small lakes have appeared intermittently over this period, the main, “Ray” lake, has been a permanent feature of the crater throughout.The stable convective behaviour of the Erebus lava lake is punctuated by intermittent Strombolian eruptions associated with the rupture of large gas bubbles at the lake surface.Phases of increased and more intense Strombolian activity recur, lasting 1–10 months and are followed by more extended intervals during which gas bubble bursts are less frequent and of a smaller size).The chemical and mineralogical composition of erupted lavas has remained constant for approximately 17 ka and the abundance of unusually large anorthoclase crystals is indicative of sustained shallow magma convection throughout this period.Indeed, the presence of such large crystals may be a significant influence on the behaviour of the shallow convection at Erebus.Other properties of the lake also demonstrate remarkably consistent long-term behaviour, for example SO2 flux and radiant heat output."On shorter time scales, many of the lake's properties exhibit a pronounced pulsatory behaviour.Oppenheimer et al. observed that the radiative heat loss, surface velocity and certain magmatic gas ratios all oscillated with a period of ∼10 min.The cycles appeared to be phase locked with each other, suggesting a common mechanism was responsible for the oscillations in each property.Evidence of similar cyclicity has also been observed in the SO2 flux, and the H2/SO2 ratio, but these have yet to be linked definitively to the cycles observed by Oppenheimer et al.One possible explanation for the observed behaviour is pulsatory exchange flow of hot, degassing magma into the lake from the subjacent conduit.It has been shown experimentally that given two liquids flowing in opposite directions in a vertical pipe, under certain flow conditions an instability occurs which results in a pulsed flow.Oppenheimer et al. 
suggested that such behaviour may explain the cycles at Erebus volcano, with bubbly and degassing, low density magma rising up the conduit into the lake whilst degassed, denser magma sinks back down the conduit again.The resulting pulsatory flow delivers packets of fresh magma into the lake quasi-periodically, giving rise to the observed cycles in lake properties.The period of the cycles would be expected to reflect the rheological properties and velocity of the bubbly flow and geometry of the conduit.The previous studies at Erebus have analysed only very short time-series of data, and no investigation of the long-term behaviour of the cycles has yet been conducted."However, thermal infrared images of the Erebus lava lake have been collected almost every year since 2004 during the Mount Erebus Volcano Observatory's annual austral summer field campaigns.Using a similar technique to that of Oppenheimer et al., we have extracted mean surface speed estimates from the usable portions of the now substantial IR dataset."Using the mean surface speed as a proxy to assess the cyclicity of the lake motion, we present an overview of its behaviour between 2004 and 2011 and compare this to visible changes in the lake's appearance.Using a dataset recorded at higher time resolution in 2010, we identify times when bubbles arrive at the surface of the lake and compare this to the phase of the cycles.Our specific aims are to identify the persistence of the cyclic behaviour within and between field seasons; to search for any variability in cycle length that might point to changes in lake/conduit configuration or rheological characteristics of the magma; and to probe further the origins of the remarkable cyclic behaviour of the lava lake.We also compare observations at Erebus with those for other active lava lakes.In the following analyses, data from field campaigns between 2004 and 2011 have been used.Although the general behaviour of the lava lake at Erebus is fairly consistent from year to year, there are some observable variations.It is therefore important to set the results presented here within the context of the state of activity of the lake during each of the respective field campaigns.During the 2004 field season, there were two separate lava lakes present in the crater.By the 2005 field season, only the “Ray” lake remained, and no additional lakes have since been observed.All data presented here are from the “Ray” lake, and henceforth, we refer to it simply as the lava lake.Fig. 
1 shows how the visible surface area of the lava lake has changed throughout the period of study.Possible reasons for this change are discussed in detail in Section 4.Despite the reduction in visible surface area from 2004 onwards, there have been no other apparent changes in the behaviour of the lava lake.Stable convective behaviour has been maintained throughout."This is characterised by the lateral migration of cracks in the lake's crust across the surface.These typically move radially outwards from the approximate centre of the lake, but more complex flow patterns with several “convection cells” and abrupt reversals of flow direction are also common.Lobes of fresh lava are occasionally observed to spread across the surface of the lake from upwelling regions, and there is visible subduction of the surface crust at the downwelling regions.These behaviours are all evident in the animation provided in the supplementary material for the electronic version of this manuscript.Bubbles of a variety of sizes are observed to surface in the lake.We describe bubbles as “large” if they result in significant ejection of material from the lake.Such bubbles are typically 10–30 m in diameter and cause a visible emptying of the lake.We classify such events as being distinct from the far more frequently occurring “small”, metre and sub-metre scale bubbles in Fig. 5) which arrive at the surface of the lake, but do not rupture violently.A study of explosive events between 2003–2011 using seismic data shows that, with the notable exception of the period from late 2005 to early 2007, their frequency has remained fairly constant at a few per week, with ejecta being entirely confined to the crater.During 2005–2007 however, there were several explosions per day, often of sufficient magnitude to propel ejecta out of the crater, the frequency of these events then gradually declined and by the 2007 field season the lake had returned to its more typical level of activity.Fieldwork on Erebus volcano is limited to the austral summer, and typically takes place from late-November to early January.Where we refer to a field season by year, we are referring to the year in which it began.The logistics involved in reaching the crater rim, combined with frequent bad weather conspire to limit the interval of IR image data acquisition to a few weeks each year.The intervals of useful data are further reduced due to fluctuations in the IR transmission between camera and lava lake.When the gas/aerosol plume is highly condensed the IR transmission in the camera waveband is poor and the images of the lake are of unusable quality.The latest IR camera system, which was deployed in December 2012, is capable of year-round operation.The data from this fully automated system will be analysed in future work.Interruptions to the recording of IR images on Erebus are common.The Agema and P25 cameras both required their memory cards to be changed regularly and equipment failure was frequent due to the harsh operating conditions.These factors have resulted in a segmented data set with many gaps.The first step in data selection was to split the data into groups of continuous acquisition that contained no two images more than 40 s apart.Groups spanning less than one hour of acquisition were discarded.Subsequent data processing was performed on a per group basis.High winds at the summit of Erebus cause camera shake, potentially introducing large errors into the velocity estimates calculated by the motion tracking algorithm.This problem is 
particularly acute in data from the Agema and P25 cameras, which did not have such stable tripod mounts as does the new SC645 system.Attempted stabilisation of the images in post-processing failed due to the lack of distinctive stationary features in the images.Instead, a simpler approach was followed, in which only periods of data with little or no camera shake were analysed.Due to the large volume of available data, an automated routine for identifying such periods was developed.This involved first defining the bounding box of the lake in each image by thresholding the image at a predetermined level, and identifying the non-zero region.Images in which the bounding box could not be found, or was unusually small were rejected, as these characteristics point to poor visibility of the lake."The centre coordinates of the bounding boxes were then assigned to clusters using SciPy's fclusterdata function.To reduce the run time of the clustering algorithm, duplicate bounding box positions were discarded before clusters were computed.Using the standard deviation of the bounding box coordinates in each cluster as an indicator of camera shake, the best clusters for each year were selected.As a final check of data quality, the images in each cluster were compiled into a video, which was then viewed to ensure good visibility of the lake and minimal camera shake throughout.Since the focal plane of the thermal camera is not parallel to the surface of the lava lake, perspective effects mean that each pixel in the image represents a different distance in the plane of the lake.To correct for this distortion, each image was rectified before the motion tracking was carried out.The required transformation was calculated by matching points in the image to points in a terrestrial laser scan of the lake."OpenCV's cvFindHomography function was then used to calculate the required transformation matrix, and the cvWarpPerspective function used to apply it.Correcting the images in this way also accounts for any lens distortion.Terrestrial laser scan data of the lava lake were only available for 2008 onwards.For thermal images from earlier years, the homography matrix was calculated from the viewing angle of the camera and the size of the lake.Although this method neglects lens distortion, we expect the effects to have little impact on the results obtained.The significant temperature contrast between the lake and the surrounding crater causes problems for the feature tracking algorithm.As the strongest feature in the image, the lake boundary tends to dominate over the structure within the lake that we are actually interested in.This issue can be overcome by masking the regions outside of the lake with Gaussian-distributed white noise with a mean and variance similar to that of the pixels within the lake.Random noise is used rather than a fixed value to prevent the output of the bandpass filters used in the wavelet decomposition from being exactly zero, as this causes the algorithm to fail.Finally, the mean surface speed of the lake was found by averaging the magnitudes of the computed velocity vectors.To avoid possible edge effects, only velocity vectors from the central region of the lake were included in the averaging.An animation showing results of the motion tracking methodology described above is provided in the supplementary material for the electronic version of this manuscript.As can be seen in Fig. 
2, the mean surface speed time-series obtained are highly non-stationary.To evaluate the periodic components of the series with time, we therefore use a Morlet wavelet transform to produce spectrograms of the data.Our implementation of the Morlet transform is the same as that of Boichu et al.The mean speed data were interpolated to a uniform 1 s time step prior to the Morlet transform using simple linear interpolation.As illustrated by the expanded regions in Fig. 2, some of the ∼5–18 min cycles are of much greater amplitude than others, and will result in a very high modulus in the Morlet transform.Longer time-series tend to exacerbate this problem, since they often contain at least a few very high amplitude oscillations, which then saturate the colour scale and mask much of the other detail.In this way, the cyclicity of the lake may not be apparent even if it exists.However, creating a spectrogram of just the data from the “non-cyclic” time period, reveals that there are indeed still ∼5–18 min period components present – they are simply of lower amplitude.Bubbles breaking the surface of the lake manifest themselves as sharp peaks in the mean surface speed time-series.The poor time resolution of the Agema and P25 datasets mean that most bubbles are not recorded.However, much of the SC645 data from 2010 was recorded at 2 Hz, which is more than sufficient to capture the arrival of bubbles at the surface.Bubble events were located in time by comparing the mean speed time-series to a low-pass filtered copy of itself.Bubbles were classified as events where the speed was greater than 1.2 standard deviations above the low-pass filtered value.The value of 1.2 was chosen by comparing bubble events detected by the algorithm to those located manually in a test set of data spanning three hours.The analysis was conducted on a continuous time-series of good quality data from 24 December 2010, spanning approximately 13 h. 
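The resampling, Morlet spectrogram and bubble-detection steps described above can be sketched as follows. PyWavelets' continuous wavelet transform stands in for the authors' Morlet implementation (which follows Boichu et al.), and the Butterworth low-pass design and cutoff are assumptions, since only the 1.2-standard-deviation criterion is specified in the text.

```python
# Sketch of (i) resampling the mean-speed series to a uniform 1 s grid and
# computing a Morlet spectrogram, and (ii) flagging bubble events as samples
# more than 1.2 standard deviations above a low-pass filtered copy of the
# series.  The wavelet library and filter design are stand-ins/assumptions.
import numpy as np
import pywt
from scipy.signal import butter, filtfilt

def resample_1hz(t, speed):
    """Linear interpolation onto a uniform 1 s time step."""
    t_u = np.arange(t[0], t[-1], 1.0)
    return t_u, np.interp(t_u, t, speed)

def morlet_spectrogram(v, min_period=120.0, max_period=1800.0, n_scales=80):
    """Modulus of the Morlet CWT for periods of ~2-30 min (fs = 1 Hz)."""
    periods = np.geomspace(min_period, max_period, n_scales)
    scales = periods * pywt.central_frequency("morl")      # dt = 1 s
    coeffs, freqs = pywt.cwt(v, scales, "morl", sampling_period=1.0)
    return 1.0 / freqs, np.abs(coeffs)                     # periods, modulus

def detect_bubbles(v, cutoff_hz=0.02, threshold_sd=1.2):
    """Indices where the speed exceeds its low-pass trend by 1.2 SD."""
    b, a = butter(4, cutoff_hz / 0.5)      # cutoff normalised to Nyquist (0.5 Hz)
    v_low = filtfilt(b, a, v)
    residual = v - v_low
    events = np.flatnonzero(v > v_low + threshold_sd * residual.std())
    return events, v_low
```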
By visually inspecting the IR images corresponding to each of the bubble events, we determined that all events were small.The bubble events detected are uniformly distributed in time.However, this tells us nothing of how they are related to the pulsatory behaviour of the lake.What is of real interest is how bubble events relate to the phase of the speed cycles; for example, do more bubbles surface during periods of fast surface movement?,In order to evaluate a possible relationship between the cyclicity and bubble events we use the method of delays to reconstruct our time-series data into a phase space representation.If the bubble events are somehow correlated to the phase of the speed cycles then we argue that their distribution in phase space will differ from that of a random sample taken from the time-series.We can imagine this as being due to a clustering of bubble events at certain positions in phase space.Details of the phase space reconstruction are given in Appendix A where we show that in order to accurately represent our time-series, a 4-dimensional phase space is required.The data were low-pass filtered prior to phase space reconstruction to remove noise and the spikes due to bubbles.The time-series analysed contains 141 bubble events.We compared the cumulative distribution function of the bubble events to a reference CDF in each of the phase space dimensions.The reference CDF is the CDF of the time-series itself.As an indicator of the expected variation in CDFs, the standard deviation of 10,000 CDFs, each constructed from 141 points randomly sampled from the time-series, was computed.A significant variation of the bubble event CDF from that of the reference in any of the dimensions, would indicate some correlation to the phase of the cycle.Differences between CDFs were quantified using the two-sample Kolmogorov–Smirnov test.The computed critical value for the K–S test at 90% confidence is 0.102.To verify the technique, we created a set of 95 fake bubble events located at the peaks of the mean speed cycles.These events were then subjected to the same analysis as the real bubble events.The critical value for the K–S test at 90% confidence is 0.125 for the fake bubble sample size of 95.As shown in Fig. 4, the CDFs for the fake bubble events show a strong deviation from that of the random samples in each of the phase space dimensions, with K–S test results of 0.50, 0.15, 0.16 and 0.13 respectively.Hence, the technique correctly identified the correlation between the fake bubble events and the phase of the speed cycles.The 2010 field season was characterised by exceptional visibility of the lava lake.In addition to the IR images captured, several short time-series of visible images were captured using a digital SLR camera equipped with a telephoto lens.Fig. 5 shows a short time-series of mean surface speed and mean surface temperature data calculated from IR images, with visible images corresponding to peaks and troughs in the speed also shown.There are no consistent differences observed between the appearance of the lake surface during periods of high speeds and periods of low speeds.Oppenheimer et al. found a strong correlation between the phase of cycles in mean surface speed and radiative heat loss in their data set.This correlation is further demonstrated by the time-series shown in Figs. 
5 and 6.Note however, that since we have not attempted an accurate temperature calibration of the IR images, we present mean surface temperatures normalised to their maximum value.What is not clear from these data alone, is whether the temperature variations observed are due to an increase in the surface temperature of the lava in the lake, or an increase in the area of cracks in the surface crust of the lake caused by the increased motion.Additional cracks will expose more of the underlying lava to the surface and will therefore cause an increase in the mean temperature recorded by the IR camera.Increased cracking during periods of higher surface speed is not obvious in the images shown in Fig. 5, suggesting that the changes in recorded temperature are indeed due to variations in the mean temperature of the lake surface.However, we feel that a qualitative argument such as this is insufficient to rule out increased cracking as a cause.In an attempt to identify more rigorously the reason for the temperature cycles, we compared the histograms of the thermal images at the minima and maxima of the cycles.If the cycles are caused by an increase in surface temperature, then we would expect the histograms at the cycle maxima to be shifted relative to those at the minima.If increased cracking is the cause, we would expect more high temperature pixels, resulting in a skewing of the histograms at the maxima compared to those at the minima.Unfortunately, the results obtained were ambiguous, with greater differences between histograms from the same point in the cycles than found by comparing those at maxima to those at minima.The cause of the measured temperature fluctuations remains elusive, however, it seems likely that they are caused by a combination of both increased surface cracking and increased surface temperature.It is important to realise that even if increased surface cracking is insignificant, cycles in the surface temperature do not necessarily reflect periodic variations in the internal temperature of the lake itself.It is possible for example, that during periods of increased surface motion the mean surface age of the lake is lower.Younger crust is likely to be thinner, and hence the conductive heat flow through it will be larger, resulting in higher measured surface temperatures despite the bulk temperature of the lake remaining static.Fig. 6 shows a short time-series of mean surface speed and mean surface temperature calculated from IR images captured in 2010.The pulsatory behaviour was particularly pronounced during the period shown, and the waveform of the cycles is clear.The peaks in speed and temperature are approximately Gaussian in shape, with rising and falling edges that are symmetric about the centre of the peak.The peaks tend to be shorter lived than the troughs, suggesting a system with a stable baseline state that is being perturbed, rather than a system that is oscillating about a mid-point.There also appears to be a correlation between the magnitude of the cycles and their period, with longer period cycles having a greater amplitude.However, such a relationship is not always observed, and there are many instances where the opposite is true.Morlet spectrograms of the mean speed data from the 2007–2011 field seasons are provided as supplementary material to the online version of this article.What is clear from the data is that the cycles in speed are not strictly periodic.Instead, there tends to be a broad range of periodic components present, centred at around 900 s. 
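The phase-space test described in the preceding passage (time-delay embedding of the low-pass filtered speed series, followed by a dimension-by-dimension comparison of values at bubble-event times against randomly sampled values) can be sketched as below; the embedding delay used here is an assumption, as the actual embedding parameters are given in the paper's Appendix A.

```python
# Sketch of the phase-space test: embed the filtered speed series by the
# method of delays, then compare, per embedding dimension, the distribution of
# embedded values at bubble-event times with randomly sampled values using a
# two-sample Kolmogorov-Smirnov test.  The delay (tau) is an assumption.
import numpy as np
from scipy.stats import ks_2samp

def delay_embed(x, dim=4, tau=30):
    """Return an (N, dim) array of delay vectors [x(t), x(t-tau), ...]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[(dim - 1 - d) * tau: (dim - 1 - d) * tau + n]
                            for d in range(dim)])

def bubble_phase_test(x_filtered, bubble_idx, dim=4, tau=30, seed=0):
    """K-S statistic per dimension for bubble-time vs randomly sampled points."""
    emb = delay_embed(x_filtered, dim, tau)
    offset = (dim - 1) * tau                 # embedding shortens the series
    valid = bubble_idx[bubble_idx >= offset] - offset
    events = emb[valid]
    rng = np.random.default_rng(seed)
    random_pts = emb[rng.integers(0, len(emb), size=len(events))]
    return [ks_2samp(events[:, d], random_pts[:, d]).statistic
            for d in range(dim)]
```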
However, these components appear to be fairly consistent across the dataset and have not changed appreciably during the period of study.Fig. 7 further illustrates this point, showing the time average of the modulus of all the Morlet spectrograms from each field season.The general trend towards higher modulus at longer periods is due to the fact that long period variations in mean speed tend to be of greater amplitude than short period variations.Despite this, the broad peak around 900 s is evident in the data from the 2007–2011 field seasons.The time-series from the 2004 and 2006 field seasons were of insufficient duration to allow analysis for long period behaviour, and as a result do not show the same behaviour as the other years.It is unfortunate that during the 2005 and 2006 field seasons, which covered the period of increased explosive activity, IR data were either not collected at all, or are of insufficient length to compare to other years.However, as shown in Fig. 8, the pulsatory behaviour of the lake appears to be unperturbed by large bubble bursts.The figure shows a short time-series of mean surface velocity data from 29–30 December 2010, during which a large bubble arrived at the surface of the lake.Despite a significant ejection of material from the lake, the mean speed data show that the pulsatory behaviour appears to be uninterrupted on timescales greater than that of lake refill.It is interesting to note that at the time of the explosion, the Morlet spectrogram shows a particularly strong periodic component at ∼1000 s.We believe that this may be caused by increased surface speeds in the build-up to the explosion and also during the recovery phase as the lake refills.The IR images show that the lake level rises rapidly immediately prior to a large bubble reaching the surface, likely causing an increase in the recorded surface speed.Rapid flow of lava into the lake during the refill phase of an explosive event is also likely to cause elevated surface speeds.In addition to the apparent stability of cycles in surface speed, the magnitude of the surface speed has also remained approximately unchanged since 2004.Although the mean surface speed can exhibit considerable variability on a timescale of days, no systematic change was observed over the period of study.Whilst the behaviour of the mean surface speed has remained remarkably stable, the visual appearance of the lava lake has changed significantly.Fig. 1 shows how the surface area of the lake has decreased since the first measurements in 2004.Overall the surface area has reduced by a factor of approximately four."The terrestrial laser scan data also show that since at least 2008, the decrease in area has been accompanied by a 3–4 m per year drop in the lake's mean surface elevation.The dramatic reduction in surface area cannot be accounted for by the drop in surface elevation since the lake walls are observed to have a near-vertical profile.It seems likely therefore, that the observed decrease in surface area is due to a change in the geometry of the lake basin, either due to accretion of lava onto its sides, or deformation of the surrounding rock.The apparent lack of influence of lake geometry on the cyclic behaviour would tend to suggest that the cycles are driven by processes occurring in the magma conduit or a connected reservoir rather than in the lake itself.Fig. 
4 shows the cumulative distribution functions of the bubble events, and fake bubble events, in each of the four phase space dimensions.The shaded areas delimit one standard deviation on either side of the reference CDFs.As discussed in Section 3.5, the CDFs for the fake bubbles show a strong deviation from the reference, correctly identifying the correlation between the phase of the speed data and the fake bubble events.In contrast, the CDFs for the real bubble events are very similar to the reference in all but the first dimension.The K–S test gives values of 0.15, 0.05, 0.07 and 0.06 for the four dimensions, respectively.Apart from the first dimension, these are all below the critical K–S value at 90% confidence, indicating that the bubble events are from the same distribution as the speed data itself and that there is, therefore, no correlation between the phase of the speed cycles and the bubbles.In the first dimension, the CDF of the bubble events appears to be the same shape as that of the mean speed data, but shifted slightly to the right.We believe that this is caused by the failure of our low-pass filtering to remove fully the spikes caused by bubble events in the mean speed data, rather than any correlation with the phase of the cycles.As a result, bubble events appear to occur at slightly higher speeds than they actually do, shifting the CDF to the right.We tested this hypothesis by plotting the CDF of 141 randomly selected points from the speed data with a linear offset of 0.002 ms−1 added.The results showed a CDF that matched that of the mean speed data in all dimensions except the first, which showed a linear offset to the right as expected."We therefore conclude that the bubble events are not correlated to the phase of the velocity cycles and that the deviation we observe in the first dimension is due to the low-pass filter's inability to remove completely the effects of bubble events from the underlying mean speed signal.A common conclusion of multi-year studies conducted at Erebus volcano is that its behaviour is remarkably stable.Observations of radiant heat output, SO2 flux and seismicity have all found very little variation during the past decade."Our findings that the pulsatory behaviour of the lava lake has been a persistent and unchanging feature since at least 2004 fit well with these previous findings and further emphasise the remarkable stability of Erebus's magmatic system.The preservation of cycles in surface speed despite large perturbations to the system is indicative not only of the stability of the responsible mechanism, but also that it is likely sourced at a deeper level than the lake itself.This argument is further supported by the consistency of the motion cycles despite a reduction in the area of the lava lake, exposed at the surface, over the period of observation.The broad width of the peak in the spectrograms that we observe is consistent with the findings of Oppenheimer et al. 
who found the period of the fluctuations in mean lake speed to vary between ∼5–18 min for the 2004 dataset.Although short sections of our data appear to contain several discrete frequency bands within this range, such fine scale structure is never observed consistently over time periods of more than a few hours.No clear pattern to the variation of the period of fluctuations measured is evident from the spectrograms.However, it is important to consider how well the mean speed of the lake surface represents the underlying process responsible for the pulsatory behaviour.Even if this process contains well defined, discrete periods, complex flow dynamics and other forcing mechanisms may result in a highly non-linear response in the surface speed.It is possible that the broad distribution in period of the cycles in speed observed is due to the complex coupling between a periodic driving mechanism and the dynamics of the bubbly flow within the lake.Given the correlation between surface motion and gas composition ratios reported by Oppenheimer et al., we believe that the variability in period stems primarily from the variability in the underlying driving mechanism.Current theories on driving mechanisms for lava lake fluctuations can be grouped into three main categories; instability in density driven, bi-directional flow of magma in the conduit feeding the lake, “gas pistoning” caused by gas accumulation either beneath a solidified crust on the surface of the lake or as a foam layer at the top of the lava column, and gas bubble-driven pressurisation changes.In the latter mechanism, the upflow of bubbly magma in the conduit is interrupted by excess hydrostatic pressure in the lake.Stagnation of the flow allows bubbles in the conduit to coalesce into large gas slugs that rise to the surface independently of the melt.The loss of large gas slugs at the surface of the lake causes an increase in pressure at the base of the conduit.If this exceeds the pressure in the magma chamber then downflow occurs, suppressing the ascent of bubbles in the conduit.As the lake drains, the downflow reduces until it can no longer suppress the ascent of bubbles, and the cycle repeats.Witham et al. were able to demonstrate this mechanism by bubbling air through a water column with a basin attached to the top to represent the lake.They observed cyclic variations in the depth of water in the basin, consisting of a logarithmic increase in depth followed by a rapid, linear decrease.As shown by Orr and Rea, gas pistoning, as observed at Kīlauea, is also an asymmetric process, consisting of a relatively slow, cumulative deviation from the baseline state of the system as bubbles are trapped in the foam layer or beneath the solidified crust, followed by a sudden release of the accumulated gas and rapid return to the baseline state.The temporal symmetry of the perturbations in the Erebus lava lake is not consistent with either of these mechanism.It may be argued that the complex geometry of the upper magmatic system of Erebus could lead to a more symmetric variation than observed by Witham et al. and Orr and Rea.However, our finding that the arrival of small bubbles at the surface of the lake is uncorrelated with the phase of the speed cycles is only consistent with the bi-directional flow mechanism.Both bubble-driven mechanisms require a periodic release of bubbles prior to lake draining and, in the case of the Witham et al. 
mechanism, a significant decrease in the number of bubbles during lake draining.Large bubbles, typically occur at Erebus only a few times per week, and cannot therefore be responsible for the ∼5–18 min cycles.Since no periodic release of small bubbles is observed either, we argue that the pulsatory behaviour of the lava lake at Erebus volcano is driven by magma exchange between a shallow magma chamber and the lake via bi-directional flow in the connecting conduit.It is interesting to note that, on average, bubble events in the data presented in Fig. 3 occur every 5.5 min.This is comparable to the cycles in surface speed, which range from ∼5–18 min in period.However, given that some cycles occur without any bubbles surfacing and given the random distribution of bubbles with respect to the phase of the cycles, we believe the similarity in timescales to be coincidental.Pulsatory behaviour deriving from bi-directional flow in a conduit has been demonstrated for single-phase systems using two fluids of different densities.However, any exchange of magma occurring at the Erebus lava lake will clearly be multi-phase, and its dynamics will be influenced not only by the presence of gas bubbles but also by the large anorthoclase crystals which constitute 30–40% of the melt volume.Indeed, numerical simulations of the Erebus magmatic system indicate that the inclusion of crystals has a very significant effect on the flow dynamics.While it is likely that gas bubbles play an even more significant role than the crystals, a complete multi-phase flow model of the Erebus system is not yet available.Whilst it is possible that the dynamics observed by Huppert and Hallworth may not be applicable to a complex multi-phase system such as that at Erebus, the lack of compelling evidence for an alternative mechanism leads us to conclude that instability associated with density-driven bi-directional flow is the most likely explanation for the observed cyclic behaviour.As noted by Oppenheimer et al., the density contrast driving the flow is likely to result primarily from magma degassing.Bouche et al. 
observed that bubbles in the lava lake at Erta ‘Ale volcano may be trapped beneath the cooled crust at the surface of the lake and forced to travel laterally until they encounter a crack in the crust before they can surface.If such a process were also occurring in the Erebus lake, then it would invalidate our comparison of the bubble events to the cycles in surface speed.The variable duration of lateral migration of bubbles would prevent any direct comparison of the timings of the bubble events and the phase of the cycles, since it would tend to randomise their arrival at the surface.However, it can be observed in the IR images that even small bubbles break the surface of the Erebus lake in areas with no visible cracks.We do not therefore believe that the crust on the Erebus lake inhibits bubble ascent, nor that it causes significant lateral displacement of bubbles.These differences likely reflect the contrasting rheologies of the magmas involved, which in turn reflect the differences in composition, temperature, viscosity and crystal content.In our analysis of the correlation of bubble events to lake cycles, we have only looked at small bubbles in detail, since the dataset did not contain any large events.Small bubbles may be sourced within the lake itself, whereas large bubbles are thought to have originated at greater depths.It is possible that the passage of large bubbles through the conduit may perturb the bi-directional flow of magma, causing variations in the period of lake surface speed fluctuations.Although no such variation was observed in Fig. 8, we do not believe this to be sufficient evidence to discount such a possibility."Since the arrival of large bubbles is relatively infrequent, a time-series spanning several months would need to be analysed to achieve a statistically significant sample size with which to investigate possible effects of large bubbles on the lake's motion.We are presently working on an autonomous camera installation on Erebus that we hope can provide such data.We have reported an analysis of thermal infrared image data of the active lava lake at Erebus volcano that spans seven field campaigns from 2004–2011.In total 370,000 useful images were acquired representing 42 “field days” of observations and spanning contiguous observations of up to 44 h duration."The images were analysed using a feature-tracking algorithm to determine surface motion vectors from which the mean speed of the surface of the lake was derived, and this parameter used to monitor the lake's pulsatory behaviour.Shot noise in the mean-speed data was found to indicate bubbles arriving at the surface of the lake, allowing an analysis of the timings of bubble events with respect to the phase of the surface speed cycles.Since 2004, the apparent size of the Erebus lava lake has decreased by a factor of four."Despite these changes in the lake's appearance, its pulsatory behaviour has remained constant over the period of study, exhibiting cycles in mean surface speed with periods in the range ∼5–18 min.Mean surface speeds are typically between 3 and 20 cm s−1.No obvious long-term progression of the cycles was observed.Surface speed time-series are not symmetrical about their mean, suggesting that the pulsatory behaviour is due to intermittent perturbations of the system, rather than an oscillatory mechanism.Bubbles arriving at the surface of the lake show no correlation to the phase of the surface speed cycles.We therefore conclude that the pulsatory behaviour of the lake is associated primarily 
with the flow dynamics of magma exchange within the shallow plumbing system rather than by a flux of bubbles.While we have analysed a substantially larger dataset than Oppenheimer et al., we have still been limited by the intermittent coverage.We hope that our recently-installed autonomous thermal camera system will yield much more extended time-series, facilitating investigations into the effect of large bubbles on the pulsatory behaviour of the lake. | Studies of Erebus volcano's active lava lake have shown that many of its observable properties (gas composition, surface motion and radiant heat output) exhibit cyclic behaviour with a period of ~10 min. We investigate the multi-year progression of the cycles in surface motion of the lake using an extended (but intermittent) dataset of thermal infrared images collected by the Mount Erebus Volcano Observatory between 2004 and 2011. Cycles with a period of ~5-18 min are found to be a persistent feature of the lake's behaviour and no obvious long-term change is observed despite variations in lake level and surface area. The times at which gas bubbles arrive at the lake's surface are found to be random with respect to the phase of the motion cycles, suggesting that the remarkable behaviour of the lake is governed by magma exchange rather than an intermittent flux of gases from the underlying magma reservoir. © 2014 The Authors. |
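A sketch of the image-processing steps described earlier in this entry (homography rectification with OpenCV, masking of non-lake pixels with Gaussian noise, and averaging of speeds over the central region) is given below; Farneback dense optical flow is used as a stand-in for the wavelet-based feature tracker actually employed, and the point correspondences, masks and scale factors are illustrative.

```python
# Sketch of the per-frame processing: rectify each thermal frame with a
# homography, replace non-lake pixels with noise matched to the lake
# statistics, and compute a mean surface speed over the central region.
# Farneback optical flow is a stand-in for the paper's feature tracker.
import cv2
import numpy as np

def rectify(frame, image_pts, lake_plane_pts, out_size):
    """Warp a frame so pixels are evenly spaced in the plane of the lake."""
    H, _ = cv2.findHomography(np.float32(image_pts), np.float32(lake_plane_pts))
    return cv2.warpPerspective(frame, H, out_size)

def mask_outside_lake(frame, lake_mask):
    """Fill non-lake pixels with noise matching the lake's mean and variance."""
    lake = frame[lake_mask]
    out = frame.astype(np.float32).copy()
    out[~lake_mask] = np.random.normal(lake.mean(), lake.std(),
                                       np.count_nonzero(~lake_mask))
    return out

def mean_surface_speed(prev, curr, centre_mask, dt, metres_per_pixel):
    """Mean speed (m/s) over the central region from dense optical flow."""
    to_u8 = lambda f: cv2.normalize(f, None, 0, 255,
                                    cv2.NORM_MINMAX).astype(np.uint8)
    flow = cv2.calcOpticalFlowFarneback(to_u8(prev), to_u8(curr), None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed_px = np.hypot(flow[..., 0], flow[..., 1])
    return speed_px[centre_mask].mean() * metres_per_pixel / dt
```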
31,484 | Difficult action decisions reduce the sense of agency: A study using the Eriksen flanker task | The sense of agency refers to the feeling that we voluntarily control our actions and, through them, events in the outside world.This involves establishing a link between our intentions and our actions, and between our actions and their external outcomes.It has been suggested that our experience of agency colours the background of our mental lives, but we become especially aware of it when the smooth flow from intention to action to outcome is disrupted.Much research has focused on the second link, between actions and outcomes.This has revealed an important signal that informs the sense of agency - the comparison between expected and actual action outcomes.If outcomes match our expectations, we feel that “I did that”; while a mismatch signals a loss of agency.While mismatch signalling partly relies on predictive processes, based on internal signals related to the action system, it is essentially retrospective since the action outcome must be known for the comparison to be made.Recent studies have shown that a metacognitive signal about the fluency of action selection also contributes to the sense of agency.This signal serves to establish a link between our intentions and our actions, and is available before the action is even made, so it can inform our sense of agency prospectively.These studies used subliminal priming to manipulate action selection in an agency task.Here, participants make left or right actions according to a target arrow, which are followed by coloured circles – the action outcomes.Participants are then asked to judge how much control they felt over these circles.Unbeknownst to the subject, a small arrow – a prime – is briefly flashed before the target.When the prime is congruent with the target, and points in the same direction, action selection is easy; but when the prime is incongruent with the target, and points in the opposite direction, action selection is impaired, leading to slower reaction times and more errors.Results showed that the sense of agency over action outcomes was higher following congruently primed actions, compared to incongruently primed actions.Importantly, outcomes could not be predicted by the action or the prime alone, but depended on the congruency between prime and target.Further, the effects of action selection on sense of agency could not be explained by participants relying on a retrospective monitoring of RTs, as these were not correlated with agency judgements.Tellingly, a further experiment manipulated the timing of stimuli to induce either a normal priming effect or a “negative compatibility effect”.In the NCE, congruent primes impair rather than facilitate motor performance.This manipulation reversed the effects of primes on RTs, as expected, but judgements of agency were always higher for congruent priming, in both normal and NCE priming.The authors proposed a model in which the very initial action intention, triggered by the prime, could be compared with the executed action.Congruency between the initial intention and action would facilitate a metacognitive signal about action selection, and thus lead to a higher sense of agency.The later motor inhibitory processes that caused NCE would occur downstream of this metacognitive readout of initial intention.Since these primes were subliminal, participants were not aware that selection fluency was manipulated, and could not strategically decide to use fluency as a cue to 
agency.Fluency can be thought of as a continuum between easy, or fluent, perceptual or cognitive processing, to effortful, or dysfluent, processing.Response conflict is an instance of highly effortful processing.Although the experience of selection fluency/dysfluency may be relatively weak, people may have a sense of “something going right/wrong” in congruent or incongruent trials respectively, without being able to identify why they have this feeling.It has been shown that people can reliably introspect on their experience of ease/difficulty in action selection, using a similar subliminal priming task, as well as with conflicting supraliminal stimuli.This feeling could then become associated with subsequent events, such as action outcomes.Interestingly, similar effects are found when measuring agency at the end of a trial and at the end of a block.This suggests that the association between fluency experiences and outcomes could build up over time.Alternatively, the learning of action-outcome relations may be disrupted by dysfluent action selection.In fact, the studies that used subliminal priming to manipulate selection fluency differ considerably from previous research on the sense of agency, as they are focused on the instrumental learning of the relation between specific actions and a number of possible outcomes.From this perspective, expertise with a given environment leads to a growing sense of ease, or flow, in selecting an action, which becomes associated with more predictable outcomes.On the other hand, research on the sense of agency has often focused on the attribution of agency.In such studies, action-outcome associations are often well known, and may be violated, and/or there may be ambiguity about “who” caused a specific outcome, i.e. me vs. another agent."Response conflict induced by conscious stimuli has been shown to lead to a reduced sense of agency over one's actions.However, it remains unclear whether conscious stimuli that influence action selection might also alter the sense of agency over action outcomes.One suggestive study set out to manipulate the visibility of primes, while measuring judgements of agency over outcomes.Participants were aware of some primes, but not others.Prime words were presented for a short or long duration, producing subliminal or supraliminal priming, respectively.Participants freely chose whether to press a left or right key once the following mask disappeared.Their action triggered a high or low tone after a variable delay, and participants judged their agency over the tone.For the subliminal priming condition, judgements of agency followed the pattern previously reported, i.e. higher ratings for trials in which the action was congruent with the prime, relative to prime-incongruent actions.However, for supraliminal primes, the effects were reversed, and higher ratings were found for prime-incongruent actions."The authors argued that awareness that one's choice might have been biased by external input would reduce one's sense of freedom and, in turn, one's sense of agency.Importantly, Damen et al. 
study showed effects of priming on the sense of agency, despite showing little or no effect of either subliminal or supraliminal primes on reaction times.Priming of choices was only found for supraliminal primes, in one of two experiments.Thus, there is little evidence that primes influenced action selection processes in their study."This contrasts with previous reports in which even subliminal primes reliably biased free choices.Instead, Damen et al. argued that action primes might influence agency judgements independently of influencing action selection, by affecting higher-order, conceptual representations of action and agency.The present study aimed to clarify the contribution of action selection processes to sense of agency, using supraliminal stimuli to manipulate action selection across 3 experiments.To additionally test the generalisability of these effects, a novel task was used – the Eriksen flanker task.This is widely used to induce response conflict, and assess cognitive control dynamics.The flanker task was adapted and combined with the design from the aforementioned subliminal priming studies.Participants responded according to a target letter, which could appear flanked by congruent or incongruent flankers.A coloured circle appeared after a variable delay, and participants judged their control over that colour.In the incongruent flanker condition, the presence of flankers associated with the alternative action should lead to response conflict, and thus an increase in RTs and errors.Experiment 1 aimed primarily to test how supraliminal stimuli relevant to action selection would affect the sense of agency in a situation where each action could produce one of a number of outcomes."Damen et al.'s results might suggest that the highest sense of agency would be found in the incongruent condition, when participants had to overcome conscious response conflict.However, if selection fluency has a general effect on the sense of agency then the highest sense of agency should be found in the congruent flanker condition.Additionally, we included a neutral condition, with task-irrelevant flankers to try to distinguish facilitation and conflict effects on action, and on the sense of agency.Finally, some previous studies measured agency ratings at the end of each trial, while others measured agency ratings at the end of a block.In this study, we exploratorily tested half of the participants with each method, though we did not have any strong prediction about interactions involving rating method.Importantly, free vs. 
instructed choice could modulate how awareness of priming stimuli would influence the sense of agency.For subliminal priming, having a higher or lower proportion of free choice trials, relative to forced choice, did not interact with the effects of action selection on agency.However, this may be different for conscious priming.A participant who consciously perceives a prime might recruit cognitive control resources to resist its influence, potentially increasing their sense of agency.This possibility was assessed in Experiment 2.Forced choice trials were randomly intermixed with free choice trials.A task-irrelevant target letter indicated a free choice trial, and appeared surrounded by task-relevant flankers.Hence, actions could be congruent or incongruent with the flankers, whether the action was instructed by the central, attended stimulus, or was endogenously chosen.Additionally, the timing of stimuli affecting action selection, and thus response conflict, could be important.A sufficient amount of time may be needed between the appearance of biasing information and an instruction/go-signal to develop a clear awareness that one is either following or going against that information.One might then come to have a stronger sense of agency for overcoming external biases.Similarly, if there is enough time, cognitive control processes can inhibit the automatic motor activation induced by primes or flankers, thus abolishing their effects on motor performance.In this case, choosing to go against the prime does not require any additional effort over choosing to go with the prime."Nonetheless, awareness of an external suggestion could still influence one's sense of agency.To test the impact of the timing of conflicting stimuli, Experiment 3 parametrically varied the stimulus onset asynchrony between flankers and target.Flankers could precede the target by 500 ms or 100 ms, be simultaneous with the target, or follow the target after 100 ms.Maximal congruency effects on performance are found for − 100 and 0 SOA conditions, but only small or no effects are found for the − 500 and + 100 SOA conditions.We hypothesized that the − 500 SOA condition would allow sufficient time for suppression of the flankers, and potentially alter effects of conflict on sense of agency.The − 100 SOA condition was expected to still show important effects on action selection, but the clear precedence of the flankers to the target might alter the subjective experience of conflict and agency.The 0 SOA condition should replicate our previous effects.In addition, the + 100 SOA condition would serve to assess whether the temporal precedence of flankers or target might influence agency processing.If congruency between a first intention and the action performed is the important comparison for agency, as suggested by Chambon and Haggard, then this condition should not affect agency even if it showed minor effects on performance.Since choice did not interact with fluency effects on agency in Experiment 2, only forced choice trials were used.The study was approved by the UCL Research Ethics Committee.Twenty-five participants were recruited, based on an a priori power calculation.For this, we used previous reports of prime compatibility on agency in ratings in operant reaction-time tasks, since no previous study to our knowledge had investigated flanker congruency effects on sense of agency over action outcomes."With a Cohen's dz of 0.66, power = 0.8, and alpha = 0.05, a minimum sample size of 21 was indicated, but a slightly larger 
number were recruited, in anticipation of possible attrition.Participants gave written informed consent to participate in the study and received payment of £7.5/hour.All were right-handed, with normal or corrected-to-normal vision, did not suffer from colour blindness, and had no history of psychiatric or neurological disorders.There were two groups of participants: odd-numbered participants rated agency on every trial, while even-numbered participants rated agency at the end of each block.One participant in the block-wise rating group was excluded due to difficulties in distinguishing outcome colours.Participants were seated approximately 50 cm from a computer screen.The experiment was programmed and stimuli delivered with Psychophysics Toolbox v3, running on Matlab.During a trial, stimuli were presented in a mono-spaced font, Lucida Console.A fixation cross was presented in 18 point font size."Target letters consisted of S's or H's, while flankers consisted of S's, H's or O's.These were presented in 30 point font size, with the 5 letter array subtending 3.2° visual angle.Participants responded by pressing one of two keys on a keyboard.Outcome stimuli consisted of a circle of 2.8° presented in one of 6 colours.Different colours were used in the training phase.All participants gave agency ratings on a 9-point Likert scale.The trial-wise ratings group completed the rating procedure on the computer.For the block-wise ratings, participants were first asked to rank order the coloured circles on a sheet of paper, and then gave a Likert rating for each colour.The task involved making actions in response to targets, which were surrounded by distracting flankers.The action triggered the appearance of a coloured circle – the action outcome.Participants were instructed to pay attention to the relation between their actions and the outcomes that followed, as they were required to judge these relations at the end of each trial or each block, for the respective group.Participants had to respond with a left or right key press according to a central target letter.The assignment of target letters to a left or right action was counterbalanced across participants.Participants were instructed to ignore the flankers and focus on the central letter.Flankers could be congruent with the central target – e.g. HHHHH, and thus with the required action; incongruent – e.g. SSHSS; or neutral – e.g. 
OOHOO.Flanker-target congruency was randomly varied across trials.Outcome colours were dependent on both the congruency condition and the action performed.Thus, each action was associated with three outcomes, one for each congruency condition.The condition-to-colour mapping varied across the blocks, so participants had to learn the action-outcome relations anew in each block, and were informed of this.The six outcome colours were rotated in a Latin square across the 6 blocks, and the block mapping was randomised.Each colour appeared once in each experimental condition, thus cancelling out any idiosyncratic colour preferences.To ensure that the frequency of each coloured outcome was equal despite differences in error rates across flanker-action congruency conditions, error trials were replaced at the end of a block.Additionally, the action-outcome interval was varied orthogonally to the congruency factor.This was not a variable of interest, but served as a dummy variable, ensuring that participants were exposed to a range of experiences, varying from low sense of agency to high sense of agency.Participants were asked to judge how much control they felt over the coloured circles that were triggered by their actions.For the trial-wise rating group, a 9-point Likert scale was presented at the end of each trial, where 1 was labelled “No Control” and 9 was labelled “Total Control”.The block-wise ratings group completed a ranking and rating procedure on a paper sheet at the end of each block.Participants were instructed to rank order coloured circles on the sheet across 6 rankings, from “Most Control” to “Least Control”.After ranking, participants gave a rating of their sense of control on the Likert scale described above.The study started with a training block of 24 trials, to allow participants to get acquainted with the experiment and the agency ratings procedure.Participants were given a chance to ask questions and repeat the training if desired.To avoid colour mapping repetitions, different colours were used during the training and experimental phases.At the end of the study, participants completed a short debriefing questionnaire.Each trial started with a fixation cross presented for 500 ms. The flankers and target array appeared for 100 ms.Participants responded to the target within a 1.2 s window.If the response was correct, an outcome colour followed the response after a variable delay of 100, 300 or 500 ms. Outcome duration was 300 ms. If an incorrect response or no response was given, a black cross was presented for 300 ms. For the trial-wise rating group, the agency rating scale appeared after 800 to 1200 ms, and remained on the screen until a response was given.For both groups, the inter-trial interval varied randomly between 1 and 1.5 s. 
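The Latin-square rotation of outcome colours described above can be illustrated with a short sketch. The following Python snippet is a minimal illustration only: the colour labels and function names are placeholders, and the original experiment was programmed in Matlab with Psychophysics Toolbox, so this is not the authors' stimulus code.

```python
import random

def cyclic_latin_square(n):
    # Row b, column c gives the colour index used in block-mapping b for condition c;
    # each colour index appears exactly once in every column across the n rows.
    return [[(b + c) % n for c in range(n)] for b in range(n)]

# Placeholder labels: the six experimental colours are not listed in the text.
colours = ["colour1", "colour2", "colour3", "colour4", "colour5", "colour6"]
conditions = [(hand, flank) for hand in ("left", "right")
              for flank in ("congruent", "neutral", "incongruent")]

square = cyclic_latin_square(6)
row_order = random.sample(range(6), 6)   # randomise which mapping each block receives

for block, row in enumerate(row_order, start=1):
    mapping = {cond: colours[square[row][c]] for c, cond in enumerate(conditions)}
    print(f"Block {block}: {mapping}")
```

Because each colour serves each action by congruency condition exactly once across the six blocks, idiosyncratic colour preferences cancel out when agency ratings are aggregated over blocks.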
Each block consisted of 72 trials, and there were 6 blocks overall. At the end of each block, the block-wise rating group completed the ranking/rating procedure. All participants were allowed to take short breaks between blocks. For the block-wise ratings group, rating sheets were coded and the data computerised. Any blocks where mistakes were made in the ranking/rating procedure were excluded from analysis. Mistakes could involve mismatches between the ranking and rating, or the repetition of a colour name. This resulted in the exclusion of 1 block in 2 participants, and 2 blocks in another participant. Reaction times, error rates and agency ratings were submitted to 2 × 3 mixed-design analyses of variance. The between-subjects factor was group: trial- or block-wise ratings group; and the within-subjects factor was flanker-action congruency: congruent, neutral or incongruent. Planned comparisons were used to test differences between congruency levels. For the block-wise ratings group, agency ranks were submitted to Friedman's non-parametric test to assess the main effect of flanker-action congruency. Wilcoxon pairwise tests were used for planned comparisons. Within-subjects 95% confidence intervals were obtained for the main effect of congruency. Analyses of RTs showed a significant effect of flanker-action congruency (F = 64.46, p < 0.001, ηp² = 0.75; see Fig. 2.a), but no effect of group and no interaction. Planned comparisons revealed that RTs were significantly slower in the incongruent condition compared to the neutral and congruent conditions. RTs were also significantly slower in the neutral compared to the congruent condition. Analyses of error rates revealed a significant main effect of congruency (F = 18.55, p < 0.001, ηp² = 0.46, Greenhouse-Geisser correction; see Fig. 2.b). Planned comparisons showed that participants made significantly more errors in the incongruent compared to the neutral and congruent conditions. The neutral condition also led to significantly more errors than the congruent condition. Additionally, there was a significant main effect of group (F = 5.73, p = 0.026, ηp² = 0.21), as the trial-wise ratings group made significantly more errors than the block-wise ratings group. This presumably reflects higher task difficulty for the trial-wise rating group, as they had to give agency ratings in each trial, which meant they had to press different keys. In contrast, the block-wise rating group could focus exclusively on responding to the target, and could keep their fingers on the response keys throughout a block. Finally, there was no significant interaction between group and congruency (F = 2.65, p = 0.10, ηp² = 0.11, Greenhouse-Geisser correction). However, this result should be interpreted with particular caution, because our study may not have had sufficient statistical power to investigate interactions involving between-subjects effects of group.
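The a priori power calculation and the mixed-design analyses described above can be sketched as follows. This is a minimal Python sketch, not the original analysis: the file name and column names (participant, group, congruency, rt, agency_rank) are hypothetical, and the use of the pingouin and statsmodels packages is an assumption.

```python
import pandas as pd
import pingouin as pg
from statsmodels.stats.power import TTestPower

# A priori sample size for the reported effect size (Cohen's dz = 0.66, alpha = .05,
# power = .80); solve_power returns roughly 20-21, consistent with the minimum of 21.
print(TTestPower().solve_power(effect_size=0.66, alpha=0.05, power=0.80,
                               alternative="two-sided"))

# Hypothetical long-format cell means: one row per participant x congruency condition.
df = pd.read_csv("exp1_cell_means.csv")

# 2 (group, between-subjects) x 3 (congruency, within-subjects) mixed ANOVA on mean RTs
print(pg.mixed_anova(data=df, dv="rt", within="congruency",
                     between="group", subject="participant").round(3))

# Planned comparison between two congruency levels (paired t-test on cell means)
incon = df.query("congruency == 'incongruent'").sort_values("participant")["rt"]
congr = df.query("congruency == 'congruent'").sort_values("participant")["rt"]
print(pg.ttest(incon, congr, paired=True))

# Non-parametric analysis of agency ranks for the block-wise ratings group
blockwise = df[df["group"] == "block"]
print(pg.friedman(data=blockwise, dv="agency_rank",
                  within="congruency", subject="participant"))
ranks_i = blockwise.query("congruency == 'incongruent'").sort_values("participant")["agency_rank"]
ranks_c = blockwise.query("congruency == 'congruent'").sort_values("participant")["agency_rank"]
print(pg.wilcoxon(ranks_i, ranks_c))   # planned Wilcoxon pairwise comparison
```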
The ANOVA on agency ratings revealed a significant main effect of congruency (F = 4.70, p = 0.014, ηp² = 0.18; see Fig. 2.c). Planned comparisons confirmed that the incongruent condition led to significantly lower ratings compared to the congruent and the neutral conditions, whereas the congruent and neutral conditions were not significantly different. There was no significant effect of group (F = 1.29, p = 0.29, ηp² = 0.013), nor a significant group × congruency interaction (F = 0.30, p = 0.59, ηp² = 0.055). For the block-wise group, agency ranks were also analysed, and results showed a significant main effect of congruency (χ² = 8.73, p = 0.013). Planned comparisons replicated the pattern of results seen for the agency ratings: the incongruent condition led to significantly lower agency ranks than the congruent condition, and the neutral condition; whereas there was no significant difference between congruent and neutral conditions. Experiment 1 showed that flanker-action congruency influenced action selection as predicted. The sense of agency over action outcomes was significantly reduced following dysfluent action selection, compared to fluent selection. This replicates recent work demonstrating a prospective contribution of action selection processes to the sense of agency, and generalises the finding across different behavioural tasks. So far, most studies used subliminal priming to manipulate action selection, or assessed agency over the action. To the best of our knowledge, the present study is the first to show a reduction in the sense of agency over action outcomes following dysfluent action selection, even though participants could consciously perceive the stimuli that influenced action selection. Previous studies used subliminal priming to manipulate action selection in order to preclude the explicit awareness that one's action was manipulated. Additionally, this increased uncertainty about the outcomes, since they were contingent on both the action and the congruency between the prime and the action. That is, as the primes were not consciously perceived, the relation between prime-action congruency and specific outcomes could not be represented, hence outcomes were never fully predictable. In contrast, as participants were aware of the flankers in the present study, they could learn the full contingency schedule between the letter strings and outcome colours. For example, in a given block, participants could learn that the letter array "SSSSS" was followed by a green circle, whereas "HHSHH" was followed by a red circle. Debriefing confirmed that most participants were aware of this relation. Moreover, the causes of difficulties in action selection, i.e. incongruent flankers, were now clearly available to participants. Nevertheless, the same effects of action selection fluency on agency ratings were found, irrespective of perceptual awareness of the stimulus trigger. Furthermore, there was no significant difference in the fluency effects on agency across the two rating procedures, i.e. trial- vs.
block- wise ratings.While the same effects had been shown using both procedures, this was the first study to combine them.Previous studies suggest that action selection fluency affects agency online.Additionally, the association between different fluency experiences and ensuing outcomes can be retained in memory, at least for long enough to accumulate over the course of a block of trials, as seen here and in Wenke et al.The inclusion of a neutral condition allowed us to distinguish an enhanced sense of agency due to facilitation of action selection, from a reduction of agency due to response conflict.Only the effect of conflict in action selection yielded a significant modulation of agency ratings.When flankers were congruent with the central target, participants were faster and made less errors, than when the flankers were neutral.Additionally, incongruent flankers led to significantly slower RTs and more errors, compared to neutral flankers.However, while agency ratings were significantly lower following incongruent flankers, compared to neutral and congruent flankers, the trend for higher ratings following congruent compared to neutral flankers was not statistically significant.It should be noted that other baseline conditions, and different tasks, could yield a different pattern of facilitation/conflict.The present study used task-irrelevant stimuli as neutral flankers, which yielded both facilitation and conflict effects on performance.As congruency effects on agency ratings are smaller than congruency effects on RTs, the absence of a facilitation effect could result from a lack of statistical power within-subjects.Additionally, between-subjects design resulted in a small sample in each group, giving relatively low statistical power for investigating between-subjects effects and interactions.These considerations mean that null between-subjects effects should be interpreted with particular care.Importantly, however, these between-subjects effects did not form the focus of our predictions.The key predictions, and therefore the key results, come from main effects of congruency on agency ratings.In our design, these are based on within-subjects comparisons.Further, our results are consistent with those obtained with the subliminal priming paradigm.There, the reduction in agency ratings following incongruent, compared to neutral primes, was larger than the increase in ratings following congruent primes, though neither was statistically significant.A positive sense of agency may be a “default state”.Reduced agency may be triggered by disruptions in the intention-action-outcome chain, which may produce a salient experience relevant to agency judgement.Our results contrast sharply with those of Damen et al.That study reported higher agency ratings when participants chose an action incongruent with a supraliminal prime, compared to when they chose a prime-congruent action.Importantly, free choice trials were used in their study, whereas here participants had to follow the instruction of a central flanker.Experiment 2, therefore, investigated whether choice may interact with the effects of flanker congruency on sense of agency, when biasing stimuli are consciously perceived.Free and forced choice targets were randomly intermixed, such that actions could be congruent or incongruent with the flankers, whether the action was instructed by the central, attended stimulus, or was endogenously chosen.Participant recruitment and study approval was as in Experiment 1.Twenty-four participants were 
tested.Testing conditions and stimuli were the same as in Experiment 1, except that instead of a neutral flanker condition, the letter O now served as a neutral target in free choice trials.In free choice trials, the neutral target was surrounded by flankers associated with a left or right action.For example, if the array “SSOSS” was presented, participants could choose whether to act congruently with the flankers and make a left action, or act incongruently with the flankers and choose a right action."Thus, flanker-action congruency was not related to the stimuli, but rather reflected the participants' action choice.In forced choice trials, the congruent or incongruent conditions were as described in Experiment 1.The new 2 × 2 design meant that 8 outcome colours were used, 4 associated with each hand, 1 per choice × congruency condition.The colours were Latin square rotated across 8 blocks of 64 trials, and the condition-colour block mappings were randomised.All participants gave agency ratings at the end of each trial, thus the trial timeline was the same as the trial-wise group in Experiment 1.Only 2 action-outcome intervals were used, to reduce the overall number of conditions.As in Experiment 1, the study began with a training block of 32 trials, and ended with a debriefing questionnaire.Reaction times were submitted to a 2 × 2 ANOVA, with choice and flanker-action congruency as within-subjects factors.Agency ratings were submitted to a similar ANOVA, with action-outcome interval as an additional within-subjects factor.For free choice trials, the proportion of flanker congruent choices was analysed with a one-sample t-test against a 0.5 chance level.For forced choice trials, error rates were analysed with a paired-samples t-test comparing congruent and incongruent conditions.Within subjects 95% confidence intervals for pairwise comparisons were calculated separately for free and forced choice trials.Analyses of RTs revealed no significant main effect of choice = 1.65, p = 0.21, ƞp2 = 0.067), a significant main effect of congruency = 20.76, p < 0.001, ƞp2 = 0.47; see Fig. 3.a), and a significant choice × congruency interaction = 5.67, p = 0.026, ƞp2 = 0.20)."Simple effects t-tests showed a significant congruency effect for forced choice trials, i.e. slower RTs for the incongruent than the congruent condition, and a similar modest trend for free choice trials = − 1.72, p = 0.050, Cohen's dz = − 0.35; forced: t = − 4.68, p < 0.001, Cohen's dz = 0.96). "Additionally, incongruent trials led to significantly slower RTs in forced compared to free choice = − 2.18, p = 0.040, Cohen's dz = 0.44).Choice did not affect RTs in congruent trials.In free choice trials, flanker congruent choices were made in 57.47% of trials."A one sample t-test showed that the proportion of flanker-congruent choices was significantly different from chance = 6.40, p < 0.001, Cohen's dz = 1.31). "For forced choice trials, a paired samples t-test on error rates showed that the incongruent condition led to significantly more errors than the congruent condition = − 4.39, p < 0.001, Cohen's dz = − 0.90; see Fig. 
3.b).An ANOVA on agency ratings revealed a significant main effect of congruency = 12.70, p = 0.002, ƞp2 = 0.36).Flanker-incongruent actions led to lower agency ratings than flanker-congruent actions.Critically, there was no significant main effect of choice = 1.48, p = 0.24, ƞp2 = 0.061), nor a significant choice by congruency interaction = 2.32, p = 0.14, ƞp2 = 0.092).There was a marginal effect of action-outcome interval = 3.65, p = 0.069, ƞp2 = 0.14), such that ratings for the long interval were higher than for the short interval.These results are inconsistent with previous findings using other tasks."In previous studies, using a wider range of intervals, higher ratings were found for shorter intervals, recalling Hume's concept of temporal contiguity as a cue for causation.Importantly, action-outcome interval did not interact with the factors of interest – choice and congruency.Since action-outcome interval was not a factor of interest, this factor will not be discussed further.Experiment 2 showed that action selection was influenced by flankers in both free and forced choice trials.Flankers biased choice, such that participants were ~ 7% more likely to ‘freely’ select actions corresponding to the flanker suggestion, compared to against it.Similar biases have been found using subliminal priming.Flanker-incongruent actions led to significantly slower RTs in forced choice trials, with a similar trend in free choice trials.Additionally, incongruent forced choice trials led to significantly slower RTs than incongruent free choice trials.Hence, the cost on performance of freely choosing an action incongruent with the flankers was smaller than the cost of following an instruction with incongruent flankers.Consistently, a greater flexibility for changes of mind has been shown for free, compared to forced, choices.Crucially, response conflict, induced by supraliminal flankers, significantly reduced the sense of agency over action outcomes for both instructed and freely chosen actions.Our results additionally show that the discrepancy between our findings and those of Damen et al. cannot be explained by whether participants could freely choose which action to perform, or had to follow an instruction.Although null effects should be interpreted with care, the absence of an interaction between choice and congruency seen here is consistent with a previous subliminal priming study."In Wenke et al.'s study, free and forced choice trials were intermixed, and free choices were effectively biased by subliminal primes, similarly to our results.On the other hand, Damen et al. found little effect of sub- or supraliminal primes on choice, possibly due to the exclusive use of free choice trials.This could have allowed participants to decide which action to make before the beginning of a trial, and thus before the prime was presented.In fact, it has been shown that priming effects seen in blocks of intermixed free and forced choice trials are abolished in blocks with only free choice trials.Nonetheless, Damen et al. 
did find priming effects on agency.The authors argued that the observed reduction in the sense of agency when following a conscious prime could have been due to a reduced sense of freedom.Using only free choice trials could have potentially increased the overall sense of freedom experienced in the task, relative to mixed conditions, rendering a reduction in that perceived freedom, due to conscious biases, more salient.This sense of freedom may affect agency at a higher, conceptual level, and independently of action selection.Another relevant difference between the two studies, which is related to action selection, lies in stimulus timing.In Damen et al., the prime preceded the go signal by 250 ms in the supraliminal priming condition, and there was no time limit for response.In contrast, in our study, flankers and targets were presented simultaneously, speed was emphasised, and a tight response window was imposed."Hence, a ‘sufficient’ amount of time may be necessary for a realisation that one's actions are being biased, and thus override the normal relation between selection fluency and sense of agency.To assess whether the timing of conflict stimuli may influence the sense of agency, the interval between flankers and target onset was parametrically varied in Experiment 3.Participant recruitment and study approval was as in Experiments 1 & 2.Twenty-six participants were tested.One participant was excluded as she did not follow instructions, and sometimes used only one hand to press the left and right key.Testing conditions were the same as in Experiment 2, but with only forced choice trials.Additionally, the flanker-target stimulus onset asynchrony was randomly varied across the trials.Flankers could appear: 500 ms before target onset; 100 ms before target onset, simultaneously with the target; or 100 ms after the target.To accommodate the varying SOA conditions, target duration was now set to 150 ms.Flankers were displayed until the target duration elapsed.Action-outcome intervals were also changed to 100 and 500 ms to enhance the discriminability of the 2 intervals, while keeping the experimental session short.Each block included 4 outcome colours, one per action × congruency condition, orthogonal to the flanker-target SOA conditions.To obtain a similar number of trials per SOA × congruency condition to the previous experiments, 12 blocks of 64 trials were used.To ensure that each outcome colour appeared only once for each action x congruency condition, 12 colours were used overall in the experiment.These were rotated with a Latin square across the 12 blocks, in groups of 4, and the block mappings were randomised.The 12 colours were shown to participants at the beginning of the study to confirm that they could reliably distinguish them.Participants were also instructed that the colours or the relation between action and colours could change across blocks, so they needed to learn them anew in each block.As in the previous experiments, the study began with a training block of 32 trials, and ended with a debriefing questionnaire.RTs and error rates were submitted to a 4 × 2 repeated measures ANOVA with the factors flanker-target SOA and flanker-action congruency.Agency ratings were submitted to a similar ANOVA that additionally included the factor action-outcome interval.Greenhouse-Geisser corrections were used whenever the sphericity assumption was violated.Bonferroni adjusted post-hoc tests were used to probe the main effect of SOA.The SOA × congruency interactions were investigated with 
paired samples t-tests, with a Bonferroni adjustment, to test congruency effects across SOAs.Within subjects 95% confidence intervals for the pairwise differences between congruency conditions were calculated separately for each SOA.Analyses of RTs revealed significant main effects of SOA = 240.77, p < 0.001, ƞp2 = 0.91), and congruency = 60.40, p < 0.001, ƞp2 = 0.72), and a significant SOA × congruency interaction = 9.28, p < 0.001, ƞp2 = 0.28).Post-hoc tests to explore the main effect of SOA showed that all pairwise comparisons between SOAs were significant.As Fig. 4.a shows, RTs were faster with earlier presentation of the flankers.Probing the SOA × congruency interaction revealed significant congruency effects at each SOA, except at − 500 SOA = − 1.04, p = 0.31, Cohen’s dz = − 0.21).Analyses of error rates showed no significant effect of SOA = 1.08, p = 0.36, ƞp2 = 0.04), a significant main effect of congruency = 31.61, p < 0.001, ƞp2 = 0.57), and a significant SOA × congruency interaction = 5.01, p = 0.003, ƞp2 = 0.17)."Post hoc tests revealed significant congruency effects for − 100 and 0 SOA = − 5.08, p < 0.001, Cohen's dz = − 1.02; 0 SOA: t = − 3.54, p = 0.002, Cohen's dz = 0.71), but not for − 500 or + 100 SOA = − 0.39, p = 0.70, Cohen's dz = − 0.078; + 100: t = − 1.58, p = 0.13, Cohen's dz = − 0.32; see Fig. 4.b).Analyses of agency ratings revealed a marginal main effect of congruency = 3.99, p = 0.057, ƞp2 = 0.14), in the predicted direction: incongruent flankers led to lower ratings compared to congruent flankers.Notably, there was no main effect of SOA = 0.87, p = 0.46, ƞp2 = 0.035), and no interaction between SOA and congruency = 0.40, p = 0.75, ƞp2 = 0.017).The absence of SOA effects on agency ratings can be clearly observed in Fig. 4.c.Finally, there was a trend towards a main effect of action-outcome interval = 3.27, p = 0.083, ƞp2 = 0.12), with long intervals leading to higher agency ratings than short intervals.There was also a marginal interaction between congruency and action-outcome interval = 3.48, p = 0.074, ƞp2 = 0.13), which was not a focus of prediction, and so was not explored further.The remaining interactions were not significant.Both action-outcome interval results are inconsistent with previous priming studies.Even though the difference between the two intervals was increased, relative to Exp.2, varying the flanker-target SOA may have changed the perception of the subsequent action-outcome interval, and disrupted its normal effects on agency.Since action-outcome interval was not a manipulation of interest, this will not be discussed further.Results showed that flanker effects on action selection were modulated by the flanker-target SOA.As predicted, flankers had no effect on action selection at − 500 SOA, but incongruent flankers did lead to performance costs with the other SOAs.Additionally, there was a gradual increase in RTs with increasing SOA, possibly due to an alerting effect of early flankers, also found in previous studies.Critically, there was no significant interaction between flanker-target SOA and congruency on agency ratings.That is, incongruent conditions led to lower agency ratings than congruent conditions, but did so similarly across flanker-target SOAs, including SOAs where flankers had no performance effects.These results are inconsistent with the hypothesis outlined above of an interaction between the timing of conflict during action selection and the direction of fluency effects on agency.That hypothesis suggested that SOAs favouring 
successful inhibitory cognitive control might lead to higher agency ratings for incongruent, rather than congruent flankers.At − 500 SOA, we found efficient inhibitory cognitive control, resulting in no congruency effect on RTs or error rates, yet sense of agency was still higher for congruent than incongruent trials.Therefore, the results of Damen et al. cannot be explained by a longer time delay between a biasing influence and action allowing the recruitment of cognitive control to efficiently overcome those biases.The dissociation seen here between congruency effects on motor performance and on agency ratings is, however, consistent with Damen et al., where priming influenced agency but not action selection.The authors argued that the effects were independent of selection fluency, but rather due to priming of conceptual representations of action, or to influencing the experience of freedom.A dissociation between motor effects and agency was also found in a subliminal priming study, using NCE priming."It was proposed that congruency between an initial prime's suggestion and the executed action could serve as a fluency signal that would increase the sense of agency.However, neither of these proposals can fully account for our results, since they would predict that only congruency between the first intention and the action should matter.Our results show that the appearance of incongruent flankers 100 ms after the target still affected the sense of agency, even though the action performed remained congruent with the first intention, which was presumably triggered by the target.Therefore, it seems that holding conflicting intentions is key for the observed reduction in the sense of agency, rather than the precise dynamics of the selection process.Importantly, this condition still led to congruency effects on motor performance, consistent with earlier reports.Action selection processes take time, and will be susceptible to disruptions occurring within a given time window.When using arrow stimuli in the flanker task, no performance effects were found with a + 100 SOA.Thus, the window in which action selection can be disrupted may vary depending on whether the stimulus is imperative in nature.Our results are compatible with a view of the sense of agency as resulting from an integration of information about conflict over a wider time-window than the time-window of action selection.It has been argued that fluency/conflict signals are relatively non-specific with respect to their sources, and have only a general influence.The temporal sensitivity of such signals, and of their integration in the sense of agency, may be low relative to the precise temporal dynamics of action selection and execution.To better characterise this window of temporal integration, future studies could include more flanker-target asynchrony values.In particular, one might ask whether flankers continue to influence the sense of agency even when presented so late that they no longer influence reaction times.Overall, our results suggest that the sense of agency over an action outcome is informed by cognitive processes occurring prior to action execution, particularly those processes involved in initiating a correct rather than an inappropriate action.In many situations, action control requires identifying an appropriate target, and then selecting and initiating the corresponding action, while avoiding the influence of distractors.The feeling of control over the consequences of action is influence by these processes.Part of the 
content of agency judgements appears to derive from monitoring processes that detect response conflict during action selection.Interestingly, we found that sense of agency was insensitive to the specific dynamics of conflict at the level of motor performance.Thus, the prospective, premotor signals that influence sense of agency appear to signal a disruption in action selection whenever conflict emerges, regardless of whether the conflict is successfully resolved, and of how performance is affected.Additionally, this putative monitoring system can integrate information about action selection in a time window that is broader than that which affects selection at a motor level."Moreover, the effects of action selection on the sense of agency can be independent of the effects of choice, and of the effects of being aware of influences on one's action or choice.That is, regardless of whether we have a choice in what to do, and whether we are aware of stimuli that could bias our decisions, dysfluent or difficult action selection can lead to a reduction in our sense of agency over action outcomes.Finally, we have shown that these effects generalise across tasks.Our results imply that the sense of agency depends on some internal signal related to selecting between alternative actions.In that regard, our results are compatible with ‘metacognitive’ theories of agency.Where might these internal signals be found within the motor system?,The supplementary motor area is necessary for triggering the automatic inhibition processes thought to underlie NCE priming, whereas upstream regions such as the pre-SMA are not.Such automatic inhibition processes were not found to disrupt the sense of agency.The pre-SMA has in turn been implicated in monitoring response conflict, elicited both by conscious and unconscious stimuli.Relatedly, the premotor cortex, but not the primary motor area, has been shown to contribute to metacognitive judgements of perceptual confidence.More specific to the present findings, an fMRI study used the subliminal priming paradigm to study congruency effects on the sense of agency.This study showed that the dorsolateral pre-frontal cortex was sensitive to response conflict, and was associated with the angular gyrus, wherein higher activity was linked to a greater reduction in agency ratings.Together, these studies suggest that the metacognitive monitoring of action selection that informs the sense of agency, may rely on higher-order action representations in premotor and prefrontal areas, rather than low-level motor signals in the primary motor cortex.Importantly, the congruency effects on agency seen here are not due to a retrospective inferential process, but rely on prospective signals from action monitoring processes.As the flankers were clearly visible, one might be tempted to think that the observed effects could result from a retrospective comparison between the flankers and the target, or action, namely at a conceptual level.However, this would imply that neutral flankers would lead to a loss of agency, as they were visibly different from the target.Instead, the effects seen here appear specifically related to conflict in action selection.Experiment 1 showed no significant difference between congruent and neutral flankers, but only a significant reduction in agency following incongruent flankers.Although such null effects should be interpreted with care, especially due to potentially low statistical power, they suggest that a perceptual or conceptual mismatch may not be sufficient 
to explain our results.Rather, an incongruent action plan should be triggered at some stage, for a reduced sense of agency.In fact, subliminal priming was used in previous studies to manipulate action selection but preclude such post-hoc, conceptual inferences.This method showed a consistent trend for a larger cost of conflict on agency ratings than a facilitation effect.Our Experiment 3 is also consistent with a prospective account: the presence of conflicting motor plans during the trial led to a loss of agency, even when the interval between flankers and target was sufficient to resolve the conflict.The subjective experience of conflict may linger, even after the motor conflict has been resolved.Conflict signals are especially motivationally significant since they can indicate a need to adjust subsequent behaviour.As such, they may have a greater impact on the sense of agency than fluency experiences.Additionally, a positive sense of agency may be a ‘default’, and thus we are especially sensitive to disruptions to the normal flow of voluntary action.Our results clearly contrast with some reports that effort or difficulty can enhance sense of agency.Why, then, do effort and conflict sometimes increase sense of agency, and sometimes reduce it?,The relation between fluency or effort and the sense of agency is complex and remains poorly understood.Often when intentional actions unfold without any obstacles, the sense of fluency can result in a strong sense of agency, as “everything went according to plan”.Yet, effort can also enhance the sense of agency.When a need for cognitive control can be anticipated, some proactive conflict processing may become part of the action plan.This may highlight the sense of self, and of being engaged with task at hand.In contrast, when disruptions are unexpected, executive control will be triggered reactively by conflict signals.We speculate that these two sources of cognitive control may have different effects on sense of agency.In particular, proactively embedding effort into the action plan may be associated with an increase in the sense of agency, however, the unexpected or unwanted need for added effort could instead lead to a reduction in our sense of agency.In addition, the context or the framing of a task could modulate how conflict influences agency."In Damen et al.'s study, each action triggered a specific outcome after a variable delay.Participants were instructed that sometimes they would cause the beep to occur, but other times it would be caused by the computer.Thus, the task and the agency question were framed in terms of attributing the cause of the outcome to the self, or to another.Also, subliminal and supraliminal priming were randomised, so participants presumably experienced wide variations in degree of influence from the primes.In contrast, our studies focused on the instrumental aspect of agency, as participants were asked to judge the strength of the relation between various actions and outcomes, rather than invoking alternative agents."That is, our study focused on ‘concomitant variation’ between a single agent's different instrumental actions and their outcomes, rather than on attribution of outcomes to agents.Both processes are relevant to agency, but conflict between alternative actions might have different effects on each of them.Further research is needed to clarify the conditions under which conflict can enhance, rather than reduce, the sense of agency.Our results are consistent with previous proposals that the sense of agency 
integrates information from multiple sources, and over time.In addition to retrospective processes related to outcome monitoring, there is also a prospective component related to action selection.Action selection monitoring can detect conflicting intentions and prospectively signal a loss of agency.After this, outcome monitoring can assess action outcome intervals and outcome identity for a mismatch with predictions or expectations, and retrospectively signal a loss of agency.If the smooth flow between intention – action – outcome remains unperturbed, the sense of agency can remain at a default level.Additionally, higher-order beliefs and contextual information can also influence the sense of agency.We found that choice, awareness of biases and timing of conflict did not interact with the effects of selection fluency.However, they may make independent contributions to the sense of agency, depending on context, or other cues.Across the experiments reported here, the sense of agency was prospectively informed by monitoring the processes of action selection.When conflicting intentions were present, the sense of agency over action outcomes was reduced.The effect of conflict on the sense of agency was independent of awareness of the causes of conflict, of free vs. instructed action selection, and of the timing of conflicting information during action selection.Finally, these effects generalised across tasks, from subliminal priming of actions, to the Eriksen flanker task, thus revealing a new approach for further investigating prospective contributions to the sense of agency.These findings support the view that the sense of agency is especially sensitive to a disruption in the normal flow of intentional action, from an intention or goal to its corresponding action, to the desired/expected consequences.Importantly, fluency of action selection was independent of the actual statistical contingency between actions and outcomes in these experiments.Selection fluency does not guarantee successful agency: one can know exactly what to do, and still fail to produce an intended outcome.However, selection fluency may serve as a useful heuristic to guide our sense of agency, as it often predicts successful outcomes.Prospective agency processes based on action selection may thus help to bridge the time gap between action and outcome. | The sense of agency refers to the feeling that we are in control of our actions and, through them, of events in the outside world. Much research has focused on the importance of retrospectively matching predicted and actual action outcomes for a strong sense of agency. Yet, recent studies have revealed that a metacognitive signal about the fluency of action selection can prospectively inform our sense of agency. Fluent, or easy, action selection leads to a stronger sense of agency over action outcomes than dysfluent, or difficult, selection. Since these studies used subliminal priming to manipulate action selection, it remained unclear whether supraliminal stimuli affecting action selection would have similar effects. We used supraliminal flankers to manipulate action selection in response to a central target. Experiment 1 revealed that conflict in action selection, induced by incongruent flankers and targets, led to reduced agency ratings over an outcome that followed the participant's response, relative to neutral and congruent flanking conditions. Experiment 2 replicated this result, and extended it to free choice between alternative actions. 
Finally, Experiment 3 varied the stimulus onset asynchrony (SOA) between flankers and target. Action selection performance varied with SOA. Agency ratings were always lower in incongruent than congruent trials, and this effect did not vary across SOAs. Sense of agency is influenced by a signal that tracks conflict in action selection, regardless of the visibility of stimuli inducing conflict, and even when the timing of the stimuli means that the conflict may not affect performance. |
31,485 | Paying for efficiency: Incentivising same-day discharges in the English NHS | Many healthcare systems reimburse hospitals through prospective payment systems in which the price for a defined unit of activity, such as a Diagnosis Related Group, is set in advance and is equal across hospitals.Economic theory predicts that hospitals will expand activity in areas where price exceeds marginal costs and minimise activity in areas where they stand to make a loss.1,This form of reimbursement should encourage hospitals to engage in efficient care processes and cost reduction strategies to improve profit margins.Despite these recommendations and financial incentives, SDD rates are lower than is clinically recommended for a wide range of conditions.The reasons for these low rates may relate to financial constraints on hospitals that limit their ability to invest in dedicated same-day facilities or reluctance by doctors to change established working practices.One way to encourage hospitals and doctors to increase uptake of SDD care is to increase the SDD price.This has been the approach taken in England under a payment reform known as the SDD bonus policy.Hospitals receive an SDD bonus on top of the base DRG price for treating a patient as an SDD compared to an overnight admission.Starting in 2010, the reform has been progressively applied to 32 different conditions.Our analysis of this policy reform makes two main contributions to the literature.First, it contributes to our understanding of economic incentives in the health sector by exploiting unique features of the SDD policy that relate to the economic importance of the bonus and the focus on efficiency.It is designed to incentivise technical efficiency, by paying hospitals extra to reduce length of stay and use of care inputs, such as staff time and hospital beds, by shifting care delivery from more expensive overnight wards to less costly same day settings.A distinctive feature of the SDD bonus policy is that the incentive scheme is high-powered, in that it pays more for the less costly SDD treatment.This contrasts with the common form of PPS in which prices are set at average cost, either pooled across SDD and overnight stay, or separately for each admission type.In England, the cost advantage varies across the 32 conditions from 23% to 71% lower for SDD than for an overnight hospital stay in the pre-policy period.The SDD bonus compounds this advantage and is also economically significant, varying from 8% to 66% more than for an overnight stay.We are able to exploit this heterogeneity in the size of the incentive to assess whether it predicts changes in behaviour.We also contribute to analytical studies that employ relatively new synthetic control methods and compare these to more traditional difference-in-difference methods.To evaluate the effectiveness of the policy we exploit the fact that incentives have been applied to 32 conditions, using non-incentivised conditions as control groups.SC methods are a potentially useful addition to the analytical armoury in situations where it is possible to draw on a large number of potential control groups.Following the pioneering work by Abadie and Gardeazabal, Abadie et al., SC methods are receiving increasing attention in the wider economic literature.Within health economics, SC methods have been applied to study the effect of co-payments, tax incentives, public health interventions such as malaria eradication, and expansion of health insurance.SC methods have been very rarely applied to 
provider incentives.We are only aware of one study by Kreif et al., which applies SC methods to evaluate the effect of a regional pay-for-performance scheme in England on mortality rates.These studies all consider a single policy initiative with associated idiosyncrasies, which provides limited evidence on the general applicability of SC methods for policy evaluations typically considered in health economics.In contrast, we evaluate 32 policy variants of a particular payment reform following a common analysis plan.This yields insights into whether DID and SC methods generate consistent conclusions in terms of point estimates and statistical inference under a range of different scenarios.Our key findings on the effectiveness of the policy are as follows.We find that the policy led to a statistically significant increase in SDD rates of 5 percentage points for planned conditions and 1 pp for emergency conditions.However, there is considerable heterogeneity across conditions with eight out of 13 planned conditions showing statistically significant positive effects in DID analysis.Estimated effects range from −2 to +22 pp changes in SDD rates.Results are more mixed for emergency conditions, where we find that the policy had a statistically significant positive effect on six out of 19 emergency conditions but caused reductions in SDD rates for two conditions.The range of estimated effects is also narrower and more centred around zero.The median elasticity of SDD rates to price is 0.24 for planned conditions and 0.01 for emergency conditions.Elasticities are larger for conditions with larger post-policy price differences between SDD and overnight care, and, for planned conditions only, with bigger profit margins.In relation to the methods employed, our analysis suggests that DID and SC methods provide similar point estimates when there is a large pool of potential control conditions to choose from, as is the case for planned conditions.However, even in such favourable instances, inference from SC methods are still considerably more conservative, resulting in fewer statistically significant findings than in DID analysis.Our analysis relates to two strands of the literature within the broader area of hospital incentive schemes.First, we contribute to studies that focus on the effect of changes in prices designed to encourage hospitals to reduce LoS.It is well established that PPS encourages reductions in LoS compared to either fee-for-service or global budgeting arrangements, by making hospitals more cost-conscious than the alternative funding regimes.This was examined in pioneering work by Rosko and Broyles, Salkever et al., Long et al., Lave and Frank and others in the US Medicare and Medicaid systems, and has subsequently been confirmed in a range of other countries.As well as finding general reductions in LoS, Farrar et al. 
estimated that the introduction of PPS in the English NHS led to an 0.4–0.8% increase in SDD rates for planned surgery.Much less is known about the ability of payers to influence LoS through deliberate price setting within a PPS arrangement.Shin exploits the 2005 Medicare change in its definition of payment areas that generated exogenous area-specific price shocks.The study found that the higher price did not affect volume, LoS and quality of services but it induced shifting patients into higher-paying DRGs.This is in line with Dafny, who found that a 10% increase in price due to the removal of an age criterion in the allocation of patients to DRGs led to upcoding without significant change in LoS.Verzulli et al. study the effect of a one-time price increase for a subset of DRGs in the Emilia-Romagna region of Italy.They find evidence that hospitals expand the provision of surgery in response to more generous reimbursement but this has no effect on waiting times or LoS.More closely related to our setting, Januleviciute et al. examine the choice of SDD care versus overnight stay in the Norwegian context, where prices are differentiated by admission type.They find no evidence that hospitals respond to intertemporal variation in the price mark-ups for overnight stays relative to SDD care by changing their discharge practice.In none of the above-mentioned settings were prices set with the explicit aim to reduce LoS.A noteworthy exception is the study by Allen et al., who considered the impact of the SDD bonus policy in England on a single incentivised condition, cholecystectomy, within a DID framework with a control group of all non-incentivised procedures recommended for SDD care.This study found an increase in SDD rates of 5.8 percentage points in the first 12 months following the policy introduction.As well as comparing DID and SC methods, we extend this earlier analysis to 31 additional conditions, allowing us to examine the generalisability of the previous result and study the determinants of the potentially heterogeneous responses to the SDD bonus.Furthermore, we examine longer-term effects, up to five years after the introduction of the bonus, allowing us to examine whether short-term effects are maintained over time.Our study also contributes to a second strand of literature evaluating P4P programmes.A recent study reviews 34 hospital sector P4P schemes in high-income countries.Most of the P4P schemes reviewed focus on incentivising quality, either through rewarding health outcomes or process measures of quality, and involve small or moderate bonuses of 5% or less.Effects are generally modest in size, short-lived and sometimes associated with unintended consequences.In contrast to the existing P4P literature, the policy we evaluate has two distinct features.First, few P4P schemes incentivise technical efficiency directly, so this study contributes to the small literature on what we label “pay-for-efficiency” schemes.Second, the SDD bonus policy is much more high-powered than previous P4P schemes and, therefore, our analysis can shed light on whether limited responsiveness to P4P schemes as documented in the literature is simply due to insufficient financial incentive, as has been hypothesised.The study is organised as follows.Section 2 provides the institutional background and the SDD pricing policy.Section 3 describes the data.Section 4 outlines the empirical methods.Section 5 describes the results.Section 6 is devoted to discussion and concluding remarks.The English NHS is funded by 
general taxation and residents have to be registered with a general practitioner."There are two routes to hospital: either patients are referred by their general practitioner for care ‘planned’ in advance or they are admitted for immediate ‘emergency’ care after attending the hospital's emergency department.The SDD bonus policy applies to both planned and emergency conditions.NHS patients face no charges for hospital care, whether in publicly owned NHS hospitals or the small number of private hospitals that provide care to NHS patients.All NHS hospital doctors are salaried and do not share in hospitals’ profits or losses.The NHS adopted a PPS for hospital reimbursement in 2003.Hospitals are paid a pre-determined price for treating NHS-funded patients, differentiated by Healthcare Resource Groups.Patients are assigned to a HRG based on diagnoses, procedures and, in some cases, other characteristics such as age.Initially limited to a small number of planned conditions, PPS has been extended progressively over time and now covers most hospital activity.From 2010, the English Department of Health has gradually introduced explicit incentives in the form of the SDD bonuses, which give a stronger financial incentive to reduce LoS.For patients allocated to the same HRG, the policy involved increasing the payment for someone treated on an SDD basis, with an offsetting reduction in the base HRG price for those who stay overnight.The difference between these two prices constitutes the SDD bonus.The specific conditions to which the SDD bonuses apply are drawn from a list compiled by the British Associations of Day Surgery and for Ambulatory Emergency Care for which overnight stay is considered unnecessary and where there is clinical consensus about the appropriate level of SDD.5,The BADS and BAAEC both produce directories listing 191 clinical conditions between them that are deemed suitable for SDD with recommended rates of SDD that are considered safe and appropriate.The SDD bonuses apply to all public and private hospitals providing publicly-funded care.The selection and design of the bonuses was informed by discussions with clinical stakeholders and varies across clinical areas.The general criteria for potential selection are volume,6 the national SDD rate being below the RR for this condition, and evidence of variation in the SDD rate across hospitals.Not all clinical conditions meeting these general criteria have an SDD bonus but by April 2014, 13 planned and 19 emergency conditions were covered by the incentive scheme.To qualify for the bonus payment, the patient has to be admitted and discharged on the same day.In addition, for planned treatments, the care has to be scheduled as SDD in advance of admission.New conditions to be incentivised are announced six months in advance of introduction.Table 1 provides an overview of the incentivised SDD conditions, the financial year in which the incentive was introduced,8 the price with and without the SDD incentive, the average cost of care reported by NHS hospitals in the year prior to the policy, as well as the SDD rate and the number of patients eligible in the twelve months prior to announcement of the incentive for that condition.Notice that in the pre-policy period hospitals already had a financial incentive to treat planned patients as SDD up to the recommended rate given that the cost of SDD is nearly always lower than the cost of an overnight stay.But as shown below in Section 3, hospitals had very low planned SDD rates in the pre-policy 
period, and always well below the RR. This could be due to the motivations of the doctor providing treatment or the constraining features of the hospital in which the doctor works, which we discuss in turn. The hospital in which the doctor works may be constrained in its ability to extend SDD to more patients. To a limited extent, SDD treatments can be offered in a normal hospital setting. However, scaling-up the provision of SDD treatment requires dedicated physical space and facilities. The hospital may have to invest in a dedicated facility, either by opening up new buildings or by engaging in re-organisation of existing wards. This would involve fixed costs, which would be justifiable to senior managers only if the investment offers the prospect of long-term financial returns. Hospitals may not undertake this investment, particularly if they face borrowing constraints that restrict their access to capital funds. Moreover, managers faced with the various day-to-day issues of running a hospital may find it difficult to allocate the necessary time and resources to engage in more strategic re-organisations. Paying a bonus for activity conducted on an SDD basis may be sufficient to overcome both clinical and managerial resistance. In stylised terms (writing Q for patient volume, s0 and s1 for the pre- and post-policy SDD rates, ΔpS and ΔpO for the increase in the SDD price and the reduction in the overnight price, cS and cO for the average costs of SDD and overnight care, and ΔcS and ΔcO for any changes in those average costs), the change in hospital profit induced by the policy can be written as Δπ = ΔpS·s1·Q − ΔpO·(1 − s1)·Q + (cO − cS)·(s1 − s0)·Q − [ΔcS·s1 + ΔcO·(1 − s1)]·Q. Under the assumptions outlined above, the first term is positive and gives the additional revenues for every treatment which is provided as SDD. The second term is negative and is given by the reduction in revenues due to a reduction in the overnight price. The third term is positive if the SDD price induces an increase in the SDD rate, which is less costly. The fourth and last term, in square brackets, relates to changes in the average costs, which can be due to patient composition or external factors, the sign being generally indeterminate. We could argue, for example, that patients who are treated as SDD after the policy are at the margin more severe, so that this will translate into an increase in the average cost of SDD and a reduction in the average cost of an overnight stay (see Hafsteinsdottir and Siciliani for more formal theoretical models). However, we assume that the increase in average costs for SDD is relatively small, so that an increase in SDD rates leads to a reduction in overall costs. We use data from Hospital Episode Statistics (HES) on all NHS-funded patients aged 19 or older admitted to English hospitals between April 2006 and March 2015 for care which could be delivered as SDD according to the BADS/BAAEC directories. HES is an admission-level dataset that contains detailed information on patients' clinical and socio-demographic characteristics, the admission pathway and its timings, and whether care was scheduled as SDD in advance. A patient is considered to have received SDD care if admission and discharge date coincide. Fig. 1 shows the SDD rate and the RR for each of the 32 incentivised conditions in the year 2009, prior to the start of the SDD pricing policy. Observed rates for planned conditions are highlighted in light grey, and those for emergency conditions in dark grey. There is marked heterogeneity both in terms of the observed SDD rate and the remaining gap towards the RR, i.e.
We use data from Hospital Episode Statistics (HES) on all NHS-funded patients aged 19 or older admitted to English hospitals between April 2006 and March 2015 for care which could be delivered as SDD according to the BADS/BAAEC directories. HES is an admission-level dataset that contains detailed information on patients' clinical and socio-demographic characteristics, the admission pathway and its timings, and whether care was scheduled as SDD in advance. A patient is considered to have received SDD care if the admission and discharge dates coincide. Fig. 1 shows the SDD rate and the RR for each of the 32 incentivised conditions in the year 2009, prior to the start of the SDD pricing policy. Observed rates for planned conditions are highlighted in light grey, and those for emergency conditions in dark grey. There is marked heterogeneity both in terms of the observed SDD rate and the remaining gap towards the RR, i.e. the potential for growth. Hospitals are consulted on any changes to the payment system, including the introduction of SDD bonuses applied to other conditions, approximately six months prior to the change. This gives them time to adapt to the new policy before the actual implementation, which may bias observed pre-policy rates. We therefore exclude data for the six months prior to the condition being incentivised. For some conditions, eligibility criteria were refined over time to restrict the incentive to a more tightly defined patient population, in which case we apply the criteria that were valid when the financial incentive first applied to ensure consistency throughout the study period. The overall sample includes 11,336,138 patients with incentivised conditions and 21,121,500 patients with non-incentivised conditions. Descriptive statistics for case-mix variables by incentivised condition are available in Table 6 in the Appendix. Each hospital is observed for up to 34 quarters per condition. The number of hospital-quarter observations varies across the incentivised conditions and ranges from 3022 to 9245. All models are estimated as linear probability models with standard errors clustered at hospital level.
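Purely as an illustration of the estimation approach just described, and not the authors' code, a condition-level difference-in-differences linear probability model with hospital-clustered standard errors could be set up roughly as follows; the file and column names (sdd, incentivised, post, quarter, hospital) are hypothetical.

```python
# Illustrative DID linear probability model for one incentivised condition and its
# control condition; column and file names are hypothetical. Requires pandas, statsmodels.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hospital_quarter_condition.csv")  # hypothetical analysis file

# sdd: case-mix adjusted SDD rate (or a patient-level 0/1 indicator);
# incentivised: 1 for the treated condition, 0 for the control condition;
# post: 1 in quarters after the SDD bonus applies to the treated condition;
# C(quarter): calendar-quarter fixed effects.
model = smf.ols(
    "sdd ~ incentivised + incentivised:post + C(quarter)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["hospital"]})

# The coefficient on the interaction term is the DID estimate of the policy effect.
print(model.params["incentivised:post"])
print(model.conf_int().loc["incentivised:post"])
```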
The validity of our DID estimates may be compromised by two challenges. First, in our study, we consider a large pool of potential control conditions, several of which may be suitable to model the counterfactual outcome. The results of the DID analysis may be sensitive to the choice of control condition, for example because of idiosyncratic shocks or measurement error in the control condition. Second, while we select DID control conditions based on pre-policy trends, the assumption of parallel trends applies to unobserved counterfactual outcomes and can therefore never be tested (Abadie et al.). If the relationship between time-invariant unobservables and the outcome changes over time, the parallel trend assumption is violated. The SC method proposed by Abadie and Gardeazabal and by Abadie et al. can address both of these challenges. The method constructs a synthetic control condition as a weighted combination of all potential control conditions, thus considering all relevant information in predicting the counterfactual outcome and thereby lifting reliance on a specific control condition. Furthermore, by matching on levels, the SC method provides reassurance that the synthetic control condition is well matched to the incentivised condition on time-invariant unobservables and that both have similar scope for improvement. All computations are performed using the user-written synth command in Stata 14.
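The computations above are reported as being carried out with the user-written synth command in Stata 14. As a hedged illustration of the underlying idea only, the synthetic control weights can be viewed as the solution to a constrained least-squares problem over the pre-policy quarters, sketched here in Python with hypothetical inputs.

```python
# Illustrative synthetic control weights: non-negative weights summing to one are chosen
# so that the weighted combination of control conditions tracks the treated condition
# over the pre-policy quarters. Requires numpy and scipy.
import numpy as np
from scipy.optimize import minimize

def synthetic_control_weights(y_treated, Y_controls):
    """y_treated: pre-policy SDD rates of the incentivised condition, shape (T_pre,).
    Y_controls: pre-policy SDD rates of candidate control conditions, shape (T_pre, J)."""
    J = Y_controls.shape[1]
    loss = lambda w: np.sum((y_treated - Y_controls @ w) ** 2)
    constraints = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, 1.0)] * J
    res = minimize(loss, x0=np.full(J, 1.0 / J), bounds=bounds,
                   constraints=constraints, method="SLSQP")
    return res.x

# The post-policy effect in quarter t is then y_treated_post[t] - Y_controls_post[t] @ w,
# and placebo inference repeats the whole procedure treating each control condition in
# turn as if it had been incentivised.
```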
Table 2 presents descriptive statistics for the 32 incentivised conditions and corresponding control conditions under the two methodologies. For each incentivised condition we calculate the pre-policy trend in case-mix adjusted SDD rates, as well as the same information for the control conditions, which serve as a diagnostic device of the parallel trend assumption of the DID method. We also calculate pre-policy levels, which are informative about the level equivalence assumption of the SC method. Time-series graphs of SDD rates for incentivised and control conditions are presented in the online appendix. For the planned conditions, the results of the DID analysis suggest that the policy led to a statistically significant increase in SDD rates for 8 of the 13 incentivised conditions. The estimated policy effects are heterogeneous in size, ranging from −1.6 to 21.7 percentage points (pp), with three instances of more than 10 pp. However, the results of the SC analysis call for a more conservative interpretation. Although the point estimates under both methods are typically quite similar, the confidence intervals around the SC estimates are substantially wider, even in instances where a large number of potential control conditions exist. As a result, there is only one planned condition where a statistically significant increase in SDD rates can be ascribed to the policy. This is shown as an example in Fig. 4. For emergency conditions, the DID analysis identifies statistically significant positive effects for six conditions and negative effects for two conditions. The size of the effects is generally smaller than those estimated for planned conditions, with no point estimate exceeding 6 pp. Given the small number of potential control conditions, the SC estimates are less reliable and deviate substantially from the DID results. Moreover, the placebo tests cannot reject the possibility that these results reflect chance variation, as evidenced by very wide confidence intervals. Fig. 6 plots out the development of the policy effects for each of the 32 incentivised conditions over time, based on the DID model with interactions. The estimated developments are generally non-linear, with some conditions experiencing an immediate response to the change in financial incentives and a subsequent flattening out, whereas others show a slow increase in SDD over time. There is no single pattern to these developments, with all possible permutations present.
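The exact specification of the "DID model with interactions" behind Fig. 6 is not given in this excerpt. One hedged way to obtain condition-specific effects over time is to interact the treatment indicator with dummies for quarters since the incentive was introduced, as in this illustrative variant of the earlier sketch (column names again hypothetical).

```python
# Illustrative event-study variant of the earlier DID sketch: the single post indicator is
# replaced by dummies for the number of quarters since the SDD bonus was introduced.
# Column and file names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hospital_quarter_condition.csv")  # hypothetical analysis file

# quarters_since_policy: 0 before the incentive applies, then 1, 2, ... afterwards.
event_model = smf.ols(
    "sdd ~ incentivised + incentivised:C(quarters_since_policy) + C(quarter)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["hospital"]})

# The interaction coefficients trace the estimated policy effect over time and can be
# plotted condition by condition, as in the development profiles discussed above.
time_effects = {name: coef for name, coef in event_model.params.items()
                if "incentivised" in name and "quarters_since_policy" in name}
```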
We conduct two robustness checks to rule out alternative explanations of our results, which are presented in Table 5. First, the introduction of incentives to increase SDD rates for some conditions might lead to changes in SDD provision more broadly. These spillovers might be positive, for example if clinicians apply their new skills to non-incentivised clinical conditions, or negative, for example if increasing the provision of SDD care requires resources which might be in demand for other patients, such as specialised day surgery beds. Spillover effects are most likely to occur within the same clinical department, as departments are where hospital resources such as clinical personnel and beds are managed on a day-to-day basis. To test for spillovers, we re-estimate our analyses excluding potential control conditions that are performed in the same clinical department as the incentivised condition. We find our results to be substantively unchanged, suggesting that spillovers are unlikely to drive our main estimates. Second, for planned conditions, hospitals only receive the higher SDD price if they both schedule and provide SDD care. Hospitals that are already achieving high SDD rates prior to the policy, but record poorly whether they have scheduled that care in advance to be delivered on the same day, may therefore be able to increase their payment simply by better recording scheduling plans. If so, observed changes in the incentivised outcome may not reflect changes in patient care but just coding practice. We therefore also estimate models where the dependent variable is a simple indicator of SDD, independent of scheduling. Our findings are broadly similar across DID analyses. In general, policy effects on LoS = 0 rates are larger than those based on meeting the exact conditions for the SDD bonus, suggesting our main analysis is conservative in measuring the impact of the policy on patient care, as hospitals did not always plan, or report the planning of, SDD despite carrying it out. Exceptions to this general finding are conditions #1 Cholecystectomy, #11 Tonsillectomy and #12 Septoplasty, where effects on LoS = 0 are smaller but still positive. Furthermore, for conditions #6 Laser prostate resection and #8 Therapeutic arthroscopy of shoulder, our LoS = 0 estimates indicate large negative effects of the policy which are also significant. Comparisons of SC analyses indicate generally similar magnitudes of policy effects. We also explore whether responses appear to be driven by clinical reasons. We hypothesise that responses to the SDD bonus are more pronounced if SDD pre-policy rates are lower and the gap to the RR is higher, therefore giving more scope for improvement. Fig. 7d provides some support that larger elasticities occur for planned conditions with lower pre-policy SDD rates. However, somewhat counterintuitively, Fig. 7e suggests a negative relationship between the elasticities and the gap between existing practice and the recommended rate for planned SDD care. One potential mechanism for this finding is that the size of the gap between existing practice and the recommended rate is larger when the costs or other limitations to higher SDD rates discussed above are larger. In such cases, the additional incentive created by the policy may still be insufficient for a larger number of hospitals, reflected in a lower national response. We have assessed the long-term impact of a generous pricing policy designed to encourage hospitals to treat patients as a 'same day discharge', involving admission, treatment and discharge on the same calendar day. Despite SDD being considered clinically appropriate and having lower costs, English policy makers have been frustrated by the low rates of SDD for many conditions. Consequently, in order to encourage behavioural change by doctors and hospitals, policy makers have set prices for SDD that are well above average costs and are also higher than the price for patients allocated to the same DRG who have an overnight stay. Economic theory predicts that a significant price differential would result in greater provision of treatment on an SDD basis. An early study into the policy impact for one condition, cholecystectomy, suggested that the SDD pricing policy met short-term policy objectives. Since this study, the policy has been rolled out to 31 more conditions. Our study set out to assess how far these earlier findings would be generalisable to these other conditions, whether short-term impacts would hold over the longer term, and what design features of the policy might explain the magnitude of any response. Based on the results of our DID analysis, we find a positive policy response for 14 of the 32 incentivised conditions, translating into approximately 28,400 more patients treated on an SDD basis per year. However, perhaps surprisingly, we do not find a consistent positive response across all incentivised conditions. Indeed, for two conditions the response is negative: despite the enhanced price advantage, fewer SDD treatments are provided post-policy than predicted. For others there is no apparent response. Nor are we able to identify any general temporal pattern in the policy response, with both rapid and delayed uptake of SDD practices being observed. These mixed results mirror those of the literature on pay-for-performance (P4P), which provides inconclusive evidence for the effectiveness of using financial incentives to drive quality. This lack of generalisability cautions against drawing firm conclusions from a single analysis. Indeed, cholecystectomy turns out to be the condition exhibiting the second greatest positive response among the 32 conditions. Moreover, while Milstein and Schreyögg suggested that P4P arrangements are most appropriate for emergency care, where hospitals have less opportunity to select patients, we find that the SDD pricing policy was more effective for planned care than emergency care. This may be because clinicians have ethical concerns about discharging patients in urgent need of care without a period of observation, whereas such concerns are less prominent when care is scheduled in advance. Also, emergency admissions occur at unpredictable points in the day, making it difficult to achieve SDD for some patients, particularly those admitted late in the evening. This may limit the scope for rapid increases in SDD rates in emergency conditions compared to planned conditions. It has been argued that the limited impact
of P4P schemes is due to incentives being too small.In this study, for all conditions, the price incentive was more high-powered than that typically associated with P4P schemes.But there was significant variation across the conditions in terms of the relative size of the incentive, and we exploit this to investigate the association of incentive size and the estimated clinical response across 32 conditions.There is suggestive evidence that the response to the incentive was greater for conditions with higher SDD prices post policy and with lower SDD rates pre policy.There does not appear to be an association between the size of the price differential, i.e. the marginal reimbursement that hospitals attract from adopting SDD care, and the size of the response.However, there is a positive association, especially for planned conditions, when both price and cost advantages of SDD care are taken into consideration.On the methodological side, our study highlights an important shortcoming of the SC method compared to more traditional DID analysis in a policy evaluation context commonly encountered by applied health economists.Because the SC method aims to make inference about a treatment based on a single treated unit followed over time, the scope for statistical inference is limited to placebo tests.The quality of inference is thus dependent on the number of potential control conditions over which these placebo tests can be conducted.Even for planned SDD conditions, where there are as many as 85 potential control conditions, we only found one statistically significant result at the usual 5% critical level; compared to eight in DID analysis.This is not due to fundamentally different findings about the effectiveness of the SDD pricing policy, as point estimates were generally similar for both methods.The literature on statistical inference techniques for SC methods is rapidly evolving but has not yet reached a consensus on statistical testing.Until then, analysts should remain cautious about drawing conclusions about policy interventions based on traditional inference thresholds, or should interpret SC results as robustness checks for more traditional causal inference methods such as DID.There are two important limitations to our study that should be addressed by future research.First, while we do not find evidence of spillovers from incentivised to non-incentivised SDD conditions, we cannot rule out that spillovers among the 32 incentivised conditions contribute to the limited overall policy effect that we observe.For example, hospitals may find it difficult to increase SDD rates for a condition that starts to be incentivised if dedicated inputs are limited and have already been allocated to another condition where the incentive has been in place for longer.Our analysis treats all 32 incentivised conditions as independent and therefore cannot detect such spillovers.To address this, future research would need to develop a more complex model of inter-hospital allocation of resources that also incorporates the changes in incentive structure over time, but this goes beyond the scope of the current paper.Second, our analysis focusses on changes in discharge behaviour and does not analyse effects on patients’ health outcomes.The assumed welfare effects of the SDD policy are predicated upon the clinical consensus and existing evidence, Marla and Stallard, Vaughan et al., NICE) that SDD care is as safe and effective as care involving overnight stays.Future research should seek to confirm this assumption.In 
conclusion, we find some evidence that hospitals respond to price signals and that payers, therefore, can use pricing instruments to improve technical efficiency.However, there appears to be substantial variation in hospitals’ reactions even among similar types of financial incentives that is not explained by the size of the financial incentive or the clinical setting in which it is applied.It has been said that a randomised controlled trial demonstrates only that something works for one group of patients in one particular context but may not be generalisable.Similarly, a pricing policy that appears to work as intended in one area may not be effective when applied elsewhere, hence the need for continued experimentation and evaluation. | We study a pay-for-efficiency scheme that encourages hospitals to admit and discharge patients on the same calendar day when clinically appropriate. Since 2010, hospitals in the English NHS are incentivised by a higher price for patients treated as same-day discharge than for overnight stays, despite the former being less costly. We analyse administrative data for patients treated during 2006–2014 for 191 conditions for which same-day discharge is clinically appropriate – of which 32 are incentivised. Using difference-in-difference and synthetic control methods, we find that the policy had generally a positive impact with a statistically significant effect in 14 out of the 32 conditions. The median elasticity is 0.24 for planned and 0.01 for emergency conditions. Condition-specific design features explain some, but not all, of the differential responses. |
31,486 | The Genetic Links to Anxiety and Depression (GLAD) Study: Online recruitment into the largest recontactable study of depression and anxiety | Anxiety and depression are the most common psychiatric disorders worldwide, with a lifetime prevalence of at least 30%.Both are highly comorbid with each other and with other psychiatric disorders, and account for 10% of all years lived with disability.The World Health Organisation now considers depression to be the number one disorder by burden of disease.Many risk factors for depression and anxiety are shared, including psychological), environmental), and genetic influences).These findings on aetiology have been accompanied by an increasing evidence base of effective treatments, especially psychological therapies.Nevertheless, despite advancements in psychological therapies as well as medication, clinicians are unable to predict which treatment will work for whom.This means that the choice of first and subsequent treatments currently progresses by trial and error at the cost of prolonged disability, reduced hope of success/engagement, and increased risk of adverse events.Decades of work estimate twin study heritability of both anxiety and depression at ∼30–40%, rising to ∼60–70% for the recurrent forms of these disorders across several years.Recent UK Biobank analyses confirm that common genetic variants account for a smaller but still significant ∼15–30% of variation in lifetime anxiety.Depression has a SNP-heritability of ∼12–14% and shows substantial genetic overlap with anxiety.Recent genome-wide association studies have identified 102 genetic variants for depression and closely related phenotypes and 5 genetic variants for anxiety, indicating that we are now entering the era where it will be possible to finally discover new biology for both disorders.In addition, these genetic advances suggest that it may be time to include genetic risk factors in research aimed at developing new treatments or predicting which therapies will work best for each patient.Both GWAS and sequencing studies have shown that there are few, if any, individual genetic variants with a large effect size.Instead, the overall heritability is made of the effect of many genetic variants that individually have small effects.This type of genetic architecture is now commonly referred to as being polygenic.As such, almost all disorders affecting >0.5% of the population that show moderate to high heritability are now referred to as polygenic disorders.The use of genome-wide methodology has also confirmed that psychiatric genetics has moved beyond its replication crisis, as its results have proven much more replicable than those of candidate gene studies.Despite the broad range of established risk factors, it is unclear how different types of influences combine to increase risk and influence treatment response.In particular, clinical/psychological, demographic, environmental, and genetic influences have been studied largely independently, despite their evident interplay.As such, more multidisciplinary research is needed.However, efforts to better understand the aetiology of these disorders require large sample sizes and detailed information on symptom presentation.Many studies exist worldwide with either the scale of participants or the thorough phenotyping needed, but few have both.In response to this, the Genetic Links to Anxiety and Depression Study was developed to recruit >40,000 individuals into the newly established National Institute for Health Research Mental 
Health BioResource.Our sample size was based on the available funding and our aim to be one of the largest single studies of individuals with a lifetime experience of clinical depression and anxiety, providing sufficient power for discovery analyses and for recall studies.The NIHR Mental Health BioResource is an integral part of the overall NIHR BioResource for Translational Research, which has previously recruited approximately 100,000 individuals.However, a large majority of these volunteers were either healthy controls or have a physical health condition with no reported mental health disorder.This gap led to the creation of the NIHR Mental Health Bioresource.GLAD is its first project and has the specific goal of recruiting a large number of participants with anxiety and depression to facilitate future recall and secondary data analysis studies.Recall studies would involve recontacting participants to take part in further research, such as clinical trials aimed at developing better therapies and interventions for anxiety and depression.The GLAD Study will also directly explore both genetic and environmental risk factors for depression and anxiety disorders, including the potential of polygenic risk scores created from analyses of related phenotypes to predict response to treatment and prognosis.GLAD aims to facilitate studies which need to combine genetic, phenotypic, and environmental exposure data by creating a large, homogeneously phenotyped cohort of individuals with depression or anxiety with all these types of data.A focused and intense social media campaign was utilised to inform the public about the GLAD Study.We hoped that individuals with anxiety and depression would be encouraged to join given the online methodology, allowing them to respond to the questionnaires in a convenient time and location.The timeline leading up to the media launch is outlined in Fig. 
1.Experts in the field, patients, and individuals with lived experience of depression or anxiety were consulted at each stage of study development.The study team also consulted regularly with collaborators from the Australian Genetics of Depression Study who provided guidance and advice based on their experience of successfully implementing a similar study design in Australia.The website was developed in collaboration with a local company in various stages.During the early stages of development and the design phase, we conducted user testing on the website by contacting individuals with lived experience through various routes.The first user testers were patient volunteers within the Improving Access to Psychological Therapies service at South London and Maudsley National Health Service Foundation Trust."The second group of volunteers were members of the King's College London Service User Research Enterprise and the Feasibility and Acceptability Support Team for Researchers.Third were volunteer individuals with lived experience who were actively involved with the charity Mind.Finally, staff from the charities Mind, MQ Mental Health, and the Mental Health Foundation gave feedback.All individuals provided input on the website content and usability, appearance, and study information and all volunteers were compensated £10 for their time.All feedback was reviewed by the study team and the lead investigators and incorporated into the final version of the website.A wide range of charities, professional bodies, companies, influencers, and individuals with lived experience were identified based on previous advocacy, involvement, or openness of mental health and treatment.These individuals and organisations were approached by the public relations company, the study team, and/or the co-investigators to introduce the study and invite them to collaborate.We received support from many UK charities including Mind, MQ, Mental Health Foundation, Anxiety UK, OCD Action, Bipolar UK, HERO, No Panic, the Charlie Waller Memorial Trust, BDD Foundation, Maternal OCD, Rethink Mental Illness, SANE, MHFA England, Universities UK, and UK Youth.We also received significant support from two UK professional bodies for psychologists and psychiatrists: the British Psychological Society and the Royal College of Psychiatrists."Finally, three organisations agreed to circulate study information internally to their employees, and Priory Group additionally shared study information on social media.Many of the charities, professional bodies, and influencers provided a quote about the GLAD Study to be shared with news outlets and on social media.Other forms of support from charities or professional bodies included circulating study information in member newsletters or magazines, publishing study information on their websites, and sharing the study on social media.Influencers helped to promote the study on their social media channels or blogs.Individuals with lived experience allowed the GLAD Study team to share their stories on social media and news outlets.Of utmost importance, these individuals were available for interviews with broadcast and newspaper agencies and gave a personal voice to the campaign.Spokespeople who became actively involved in the publicity and circulation of the GLAD Study in the media and/or online were vital to the success of the campaign.We created a variety of standard operating procedures and response templates to prepare for the logistical and administrative aspects of study launch and management.These 
included SOPs for saliva sample kit preparation and posting, website management, data protection, and participant contact.We established guides and SOPs for both email and social media responses to participant questions, concerns, and expressions of distress.These guidelines were reviewed by clinicians and by the Mental Health Foundation to ensure responses were appropriate, straightforward, and clear.Social media content and visuals were designed with the aid of the PR company, an independent videographer, and an animation company.These included a range of infographics and short videos, each designed to be informative and provide basic details for the public about the aims of GLAD and how to join the project.The campaign was targeted to a younger demographic to enable more prolonged collection of longitudinal data and reflect the young age of onset of depression and anxiety disorders.The infographics emphasised the simplicity and ease of taking part.The consent animation outlined key elements of the consent process to provide a simple and clear way to learn about the study, consistent with guidance from the Health Research Authority regarding multimedia use as a patient-friendly method of delivering vital consent information.The consent animation was also included with the website version of the information sheet.The second video was a short film which outlined the study sign-up steps in a real-life, relatable way.Finally, the animation was an engaging ‘call to action’ to join the community of volunteers participating in the GLAD Study.We created a six-week social media schedule for planned posts across Facebook, Instagram, and Twitter platforms which included both organic and promoted content.Most of the posts were accompanied by a video, infographic, or other imagery developed by the study team.The PR company circulated the press release and study information to news and broadcast agencies across England, and later across the rest of the UK.Interested parties organised interviews with the study investigators and/or influencers, charity or professional body representatives, and individuals with lived experience.The press releases and broadcasts were embargoed until the launch date to prevent pre-emptive publicity and provide uniform, widespread coverage across different media.The PR company helped us to organise a widespread national media campaign which included traditional media and social media outreach during the study launch.Examples from the campaign can be viewed at this link: http://wke.lt/w/s/Ymfhz.In the first 24 h, 8004 participants had registered to the website, demonstrating the effectiveness of this strategy.The prepared social media schedule helped to maintain our recruitment numbers over the six-week period.Between 300–1500 participants signed up to the study daily, with numbers fluctuating based on campaign expenditure and social media support from influencers and organisations.Five months after the initial launch, the GLAD Study opened in Northern Ireland, Scotland, and Wales, in collaboration with researchers at Ulster University,2 the University of Edinburgh, and Cardiff University.The press offices of the respective universities organised media outreach to local and national news outlets, and the PR company assisted in targeted social media advertisements for the various countries.In the 10 months following the launch, many NHS organisations in England contacted us interested in supporting the GLAD Study.As of July 2019, these included 12 Clinical Research Networks, 
7 family doctor/general practitioner practices, 96 Trusts, 1 Clinical Commissioning Group, and 2 GP Federations.Sites were classified in 2 different categories with varying levels of involvement: 1) advertising sites which displayed study posters and leaflets in clinics and waiting areas, and 2) recruiting sites which conducted mail-outs, approached patients in the clinic or over the phone, and assisted patients through the study sign-up process.The sign-up process for enrolment in GLAD is given in Fig. 3.Personal information and phenotypic data are collected entirely online through the GLAD Study website.Participants register on the website with their name, email address, phone number, date of birth, sex, and gender.They are then able to read the information sheet and provide consent.As part of the consenting process participants agree to long-term storage of their sample, requests to complete follow-up questionnaires, anonymised data sharing, recontact for future research studies based on their phenotype/genotype information, and access to their full medical and health related records.Following consent, participants complete the sign-up questionnaire to assess their eligibility.Eligibility criteria are restricted to participants that meet DSM-5 criteria for MDD or any anxiety disorder, based on responses to screening measures in our online sign-up questionnaire.More information about the measures in the sign-up questionnaire are included below.Eligible participants are then sent an Isohelix saliva DNA sample kit.Saliva samples are sent via Freepost to the NIHR National Biosample Centre in Milton Keynes for processing and storage.Once participants have returned their saliva sample, they become full members of the GLAD Study and are eligible to take part in future studies.Email reminders are sent to participants who do not complete their sign-up steps.Up to four automated reminder emails are sent through the website up to six months post starting sign-up.Once online sign-up steps are completed, a maximum of four reminder emails are sent to participants regarding the saliva samples up to six months from the date the kit was sent.We conduct additional phone call or text message reminders to participants who have not returned their saliva kits within three months of signing-up.Clinical data will be linked to genotype and phenotype data to provide additional information about participants’ medical history, diagnoses, and treatments relevant to current and future research projects.Eligibility for collaborating studies can then be assessed utilising all phenotypic, genetic, and clinical data.Members may then be contacted up to four times per year to take part in future studies, either through the website or by the GLAD Study or NIHR BioResource teams, with access granted via established NIHR BioResource access protocols.As part of the community of members, participants will also have access to useful information and links on the “Useful Links” page of the study website.Members will also be invited to take part in follow-up questionnaires to provide longitudinal data on their symptoms.The sign-up questionnaire was designed to assess core demographic, mental and physical health, comorbidities, and personal information as well as detailed psychological and behavioural phenotyping relevant to anxiety and depression.We included lifetime measures of major depressive disorder, atypical depression, and generalised anxiety disorder, adapted from CIDI-SF and ICD-11 checklists, supplemented with items 
enabling lifetime assessment of DSM-5 specific phobia, social phobia, panic disorder and agoraphobia.These items were adapted from the Australian Genetics of Depression Study questionnaire.We also assessed current depressive symptoms), and those with a score of five or above on the PHQ 9 were asked additional questions related to current treatment-resistant depression).Other measures of current psychopathology included possible mania/hypomania), current general anxiety symptoms), post-traumatic stress disorder symptoms), alcohol use), psychotic symptoms), personality disorder symptoms), and work and social adjustment).Additional measures were included to assess subjective well-being, recent adverse life events, five childhood trauma items representing the five subscales of the full Childhood Trauma Questionnaire, domestic violence, and catastrophic trauma.To facilitate future meta-analyses with other cohorts, measures were selected when possible to align with the UK Biobank Mental Health Questionnaire.Participants were also asked to report if they have taken part in the UK Biobank.Some aspects of the UK Biobank MHQ were not selected for the sign-up questionnaire in order to make space for additional measures more relevant here.Furthermore, detailed questions on self-harm and suicide were not asked during sign-up due to concerns from the study team and the SLaM Research and Development department of insufficient clinical support in the case of reported adverse events, although the single suicidal ideation item in the PHQ-9 was retained as this measure is validated and widely used.Once participants completed the sign-up questionnaire, they were invited to take part in additional, optional questionnaires.Optional questionnaires to assess a wider number of psychiatric phenotypes and symptoms included measures of fear), drug use), obsessive-compulsive disorder), post-traumatic stress disorder), trauma, postnatal depression), body dysmorphic disorder), eating disorders, and vomit phobia).Other optional projects relate to lifestyle and personal history, asking detailed questions on participants’ experience of healthcare, life events, work and sleep, general health and lifestyle, gambling, and headaches and migraines.Optional questionnaires were made available to all participants, except those on vomit phobia and postnatal depression which were only offered to participants based on responses to screening questions in the sign-up questionnaire.Additional follow-up questionnaires will be sent to participants annually to provide longitudinal data on symptoms and outcomes.All GLAD samples will be genotyped as part of core NIHR BioResource funding.For genotyping we are using the UK Biobank v2 Axiom array, consisting of >850,000 genetic variants, designed to give optimal information about other correlated genetic variants.This careful design means that imputation of the data with large whole genome sequencing reference datasets will yield >10 million common genetic variants per individual.We use Affymetrix software and the UK Biobank pipeline software to assign genotypes and perform standard quality control measures in PLINK or equivalent software packages and R.Analyses will be conducted in PLINK in the first instance.The GLAD Study was approved by the London - Fulham Research Ethics Committee on 21st August 2018 following a full review by the committee.The NIHR BioResource has been approved as a Research Tissue Bank by the East of England - Cambridge Central Committee."Prior to submission for ethical 
approval, this research was reviewed by a team with experience of mental health problems and their carers who have been specially trained to advise on research proposals and documentation through the Feasibility and Acceptability Support Team for Researchers: a free, confidential service in England provided by the NIHR Maudsley Biomedical Research Centre via King's College London and South London and Maudsley NHS Foundation Trust.The GLAD data can be used alongside UK Biobank for meta-analyses, with UK Biobank participants and/or other NIHR Bioresource participants as healthy controls, to maximise statistical power.One of our primary genomic aims is to utilise polygenic scores created from very large genome-wide analyses of related traits as potential predictors of depression, anxiety, and treatment response in our sample.There are a number of well-powered polygenic scores now available in the field not only for psychiatric traits but also intelligence and its proxies and other relevant predictors.We aim to combine these polygenic scores with significant clinical predictors to produce a combined clinical/genetic risk index.We envision the GLAD sample to also be incorporated either in large genome-wide association studies, contributing to meta-analyses, or in clinical trials, for example researching genetic, epidemiological or social risk factors of anxiety and depression.Researchers wishing to access GLAD Study participants or data are invited to submit a data and sample access request to the NIHR BioResource to request a collaboration, following the procedures outlined in the access request protocol.Applications will be reviewed by the NIHR Steering Committee to assess the study aims and to check ethical approvals and protocols.Collaborations can range from sharing of anonymised data or samples to recontact of participants for additional studies, including experimental medicine studies and clinical trials.Eligibility for future research can be targeted to specific genotypes and/or phenotypes of interest.Recruitment of the GLAD Study is ongoing.All results that follow are from participants recruited before 25 July 2019.As of this date, 42,531 participants had consented to the GLAD Study, with approximately 35% drop-off rate at each stage of the sign-up process.This resulted in 27,991 having completed the questionnaire, with 27,776 screened as eligible and sent a saliva kit.Of those participants which were sent a kit, 18,663 have returned a saliva sample thus far.In the current sample, the mean age is 38.1 with most participants being female, white, and in paid employment or self-employed.The GLAD sample is younger and more female and has a higher proportion of individuals with a university degree than the general UK population.For example, 13.1% of our sample is female aged 25–29 compared to only 4.2% of the UK population.This difference is seen across all age groups under 60.Similarly, in the age-range 25–34 years, 68.9% of the sample have a university degree, whereas in England and Wales this is only 40.4%.In total, 54.8% of the GLAD sample have a university degree compared to 27.2% of the English and Welsh population.Matching educational attainment data was not readily available for Northern Ireland and Scotland.A high proportion of GLAD participants report the occurrence of at least one form of trauma or abuse throughout their lives.Child abuse was most commonly reported, with emotional abuse by parents or family being endorsed by 42.1% of the sample).Other forms of life stress are 
also frequently endorsed with 62.8% of the sample reporting at least one traumatic experience in their lives.Of note, few participants reported periods of inability to pay rent, reflecting low rates of poverty in the sample.As shown in Fig. 6, the majority of participants reached diagnostic criteria for major depressive disorder, followed by panic disorder and generalised anxiety disorder.The majority of participants with depression reported recurrent episodes across the lifespan.A figure demonstrating self-reported clinician-provided diagnoses of mental health disorders in the GLAD sample can be found in supplementary materials.Of note, self-reported clinician-provided diagnoses of GAD are twice as high as cases assessed by the questionnaire.The sample also has high rates of comorbidity, with 67.0% of screened MDD cases also screening positively for at least one of the anxiety disorders, and 95.1% of screened GAD cases also screening positively for MDD.Retrospective reports of age of onset indicate a young average onset for the lifetime measures of MDD, GAD, specific phobia, social phobia, panic disorder, and agoraphobia.Average age of onset across the anxiety disorders was ∼15.Age of onset for GAD was added to the questionnaire in May 2019, therefore the descriptives below are based on responses from a subset of participants recruited after that date.Measures of current symptoms of psychopathology included in the questionnaire demonstrate high rates of current clinical presentation of MDD and GAD, with high levels of functional impairment.In order to assess the success of the media campaign, participants were asked to report how they found out about the GLAD Study at the end of the sign-up questionnaire.Results of the campaign were assessed 3 months after the initial launch in England.Participants who completed the questionnaire reported that the three most common ways of hearing about the study were through Facebook, print newspaper, and Twitter.The effectiveness of each recruitment strategy varied by age.Of participants aged 16–29, the majority learned about the study through social media, with only 12% receiving study information through traditional media.Participants aged 30–49 also primarily learned about the study through social media, but 22% within that range learned about the study through traditional media.Social media was less effective in reaching individuals aged 50+, with traditional media being the primary means of recruitment above that age.The GLAD Study represents a large and comprehensive study of anxiety and depression and a valuable resource for future research.By achieving our goal of recruiting 40,000 participants with a lifetime occurrence of one of these disorders into the NIHR Mental Health BioResource, the study will collect detailed, homogenous phenotype and genotype data, thereby increasing power for genetic analyses.Importantly, the recontactable nature of GLAD means that researchers will be able to conduct new studies integrating psychological and genetic data.Why integrate genetic data?,GLAD provides researchers with a way of investigating polygenic and environmental effects in studies with participants.As such, the study is not merely a way to increase GWAS sample size, but also a way to increase the potential utility of genomics within psychological studies of all kinds.Previous efforts in depression and anxiety genetics have been hampered by the unreliable methodology of candidate gene studies.However, Border also concluded that modern and successful 
genome-wide studies of depression, rigorously testing millions of genetic variants in large samples, provide reliable evidence.The implications that these highly complex, but highly replicable, polygenic effects have for the field need to be investigated.GLAD exists to enable recruitment of participants into studies not only on the basis of their self-reported information and/or clinical records, but also on the basis of the increasingly well powered method of polygenic risk scoring.This approach combines risk alleles across the genome into a single quantitative measure for any one individual.For instance, a clinical trial could be undertaken with selection of participants based on past medical history and/or polygenic scores for a relevant trait.External researchers will be able to apply for access to anonymised data and will have the opportunity to recontact participants for additional data collection.This offers the prospect of a wide range of future recall studies, including clinical trials, observational studies, neuroimaging, and experimental medicine.Recruitment for recall studies could also be targeted to participants with phenotypes or genotypes of interest by utilising GLAD data to screen for specific eligibility criteria, thereby simplifying and expediting the recruitment process for future studies.Furthermore, measures were in part selected to match other cohorts and facilitate meta-analysis with samples such as the UK Biobank, Generation Scotland, and the Australian Genetics of Depression Study, although demographic differences between the samples would need to be taken into account.We propose that the GLAD data could be useful for stratifying cases into distinct phenotypes of depression and anxiety to reduce heterogeneity, a strategy which has been shown to increase the power of genetic analyses to detect significant effects.Initial descriptive analyses reveal that the current GLAD sample does not reflect the demographics of the general population of the UK.The limited ethnic diversity within the sample is not representative, with 95% of participants being white compared to 87% of the population.Similarly, although common mental disorders are more than twice as common in young women compared to young men, and 1.5 times more common in women overall, the GLAD sample remains disproportionately female.Finally, roughly double the proportion of the study sample report having a university degree compared to the English and Welsh population.The GLAD sample as a whole also shows severe psychopathology.Our lifetime questionnaire diagnostic algorithm indicated that the majority of the sample have recurrent depression, and 20.9% report possible mania/hypomania.Over half the sample screen positively for a lifetime anxiety disorder, with panic disorder being the most common.Participants also show substantial comorbidity, in particular for GAD cases reporting a concurrent diagnosis of MDD.Our results show significantly higher comorbidity rates for GAD cases compared to a previous epidemiological study whereby 62.4% of GAD cases had a lifetime occurrence of MDD."However, this is likely further indication of the sample's severity, reflecting previous reports of higher rates of comorbid anxiety in patients with recurrent lifetime depression.In terms of current symptoms, GLAD participants report moderate symptoms of depression, mild to moderate symptoms of anxiety, and significant functional impairment, indicating poor work and social adjustment.Finally, it was interesting to note that 55.0% of 
participants screened positive for mild-moderate personality disorder.Severity levels are also demonstrated by the 96.1% of GLAD participants who report receiving treatment for depression or anxiety, although the study is open to individuals regardless of treatment receipt.In Western countries, it is estimated that between one-thirds and one-half of patients with depression or anxiety disorders do not receive a diagnosis and/or treatment, a group not represented in our sample.However, even in those patients receiving treatment, recent research suggests that only 20% get minimally adequate treatment, meaning that follow-up research on treatment efficacy in the GLAD sample would be highly valuable.An unanticipated benefit of the campaign was the interest it generated from UK NHS sites around the country.Over 100 NHS sites, including Trusts and GP practices, contacted the study team with interest in supporting recruitment, and many had learned about the study from the media campaign.The study was also adopted onto the NIHR Clinical Research Network Portfolio which provides support and funding for NHS organisations that are involved in research.The combined approach of both general and clinical recruitment reaches a wider demographic of participants than either strategy alone.Patients recruited through clinics can also be assisted in signing up by a local clinician or research team to provide support throughout the process if needed.In addition to collecting a large amount of data, the GLAD study represents a template for future online studies.The media campaign proved highly effective in recruiting a large number of participants in a short amount of time and demonstrates the success of such a broad outreach approach.Caution must be taken when interpreting the success of the different media campaign strategies, given the prolonged use of social media promotions in comparison to traditional media outlets; nonetheless social media was the most effective strategy for recruiting individuals under 50.Recruiting younger participants not only reflects the young average age of onset of anxiety and depression, approximately 11 and 24 respectively, but also helps to facilitate long-term follow up and recontact.As previously mentioned, the current demographics of study participants who completed the questionnaire are not representative of the population of the UK.This suggests a selection bias comparable to what has been observed in similar studies such as the UK BioBank.Results from analyses of the current GLAD sample therefore may not be generalisable to the whole population.Smaller studies could overcome these biases by selectively over-sampling males and individuals at the lower end of educational attainment to be more representative of the population.However, efforts are being taken to recruit a wider demographic of participants into the study.Specifically, we will develop a targeted social media campaign to reach young men and collaborate with young male influencers to appeal to that audience.We will additionally prepare a future media campaign that focuses on depression and anxiety separately, with wording specific to each disorder, to attempt to reduce the high comorbidity rates in the sample.Furthermore, we are collaborating with local branches of the charity Mind.These are typically placed within the community and provide a range of mental health support services.Branches will be displaying posters and leaflets on site, posting on social media, and answering questions or providing assistance to 
potential participants interested in signing up.Recruitment through NHS services and the availability of local research teams within those sites will also help reach the general population and recruit individuals with a wider range of educational attainment.Of particular importance, we are working with Black, Asian, and minority ethnic charities and influencers to conduct additional user testing to understand the barriers to participating for non-white individuals and increase outreach to diverse communities.Previous genetic studies have involved primarily individuals of white European ancestry, and findings from these studies may not apply to individuals of non-white descent.By actively recruiting a diverse sample, our objective is to additionally facilitate research on typically underrepresented groups.Another challenge of the study design is the drop-out rates following recruitment.At each stage of the sign-up process, a substantial number of participants did not complete the next step despite multiple reminders.Unfortunately, we are unable to assess the demographics of participants who do not complete the sign-up questionnaire, but it is possible that the drop-outs are non-random and represent a certain phenotype, such as individuals with more severe symptoms or lower health literacy.The GLAD Study offers a recontactable resource of participants with a lifetime occurrence of depression and anxiety disorders to facilitate future health research.The online study design and media recruitment strategy were effective in recruiting a large number of individuals with these disorders into the NIHR Mental Health BioResource.Recruitment is ongoing with the goal of completing recruitment of 40,000 individuals, making this the largest single study of depression and anxiety.We hope that this paper will not only demonstrate the effectiveness of the study methodology but will also raise awareness of the availability of this cohort to researchers in the field and promote future collaboration.This work was supported by the National Institute of Health Research BioResource, NIHR Biomedical Research Centre , HSC R&D Division, Public Health Agency , MRC Mental Health Data Pathfinder Award, and the National Centre for Mental Health funding through Health and Care Research Wales. | Background: Anxiety and depression are common, debilitating and costly. These disorders are influenced by multiple risk factors, from genes to psychological vulnerabilities and environmental stressors, but research is hampered by a lack of sufficiently large comprehensive studies. We are recruiting 40,000 individuals with lifetime depression or anxiety and broad assessment of risks to facilitate future research. Methods: The Genetic Links to Anxiety and Depression (GLAD) Study (www.gladstudy.org.uk) recruits individuals with depression or anxiety into the NIHR Mental Health BioResource. Participants invited to join the study (via media campaigns) provide demographic, environmental and genetic data, and consent for medical record linkage and recontact. Results: Online recruitment was effective; 42,531 participants consented and 27,776 completed the questionnaire by end of July 2019. Participants’ questionnaire data identified very high rates of recurrent depression, severe anxiety, and comorbidity. Participants reported high rates of treatment receipt. The age profile of the sample is biased toward young adults, with higher recruitment of females and the more educated, especially at younger ages. 
Discussion: This paper describes the study methodology and descriptive data for GLAD, which represents a large, recontactable resource that will enable future research into risks, outcomes, and treatment for anxiety and depression. |
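Relating to the polygenic risk scoring and the proposed combined clinical/genetic risk index described in the GLAD entry above: purely as a hedged illustration, and not the study's actual analysis pipeline, a polygenic score can be computed as a dosage-weighted sum of external GWAS effect sizes and then entered alongside clinical predictors in a logistic regression. All file names, column names and predictors below are hypothetical.

```python
# Illustrative polygenic score plus clinical predictors; not the GLAD analysis pipeline.
# Requires numpy, pandas and statsmodels; all inputs are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

dosages = pd.read_csv("dosages.csv", index_col="participant_id")     # 0-2 risk-allele dosages per SNP
weights = pd.read_csv("gwas_effect_sizes.csv", index_col="snp")       # external GWAS effect sizes ("beta")
clinical = pd.read_csv("clinical_predictors.csv", index_col="participant_id")

# Polygenic risk score: dosage-weighted sum of external effect sizes over overlapping SNPs.
snps = dosages.columns.intersection(weights.index)
prs = pd.Series(dosages[snps].to_numpy() @ weights.loc[snps, "beta"].to_numpy(),
                index=dosages.index, name="prs")

clinical = clinical.join(prs)
clinical["prs_z"] = (clinical["prs"] - clinical["prs"].mean()) / clinical["prs"].std()

# Combined clinical/genetic index: logistic regression of a binary outcome (for example,
# screening positive for recurrent depression) on the standardised PRS and clinical covariates.
X = sm.add_constant(clinical[["prs_z", "age", "sex", "childhood_trauma"]])
fit = sm.Logit(clinical["outcome"], X).fit()
print(fit.summary())
```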
31,487 | Real time control data for styrene acrylonitrile copolymerization system in a batch reactor for the optimization of molecular weight | The dataset in this article describes the experimental studies for styrene acrylonitrile copolymer polymerization in a batch reactor with varying temperatures. Real time temperature monitoring using the DAQ with the desired setpoint is shown in Fig. 1. Experimental conditions with varying temperatures are shown in Table 1. GPC analysis results are shown in Table 2. Table 3 describes the data acquired during the course of the reaction using the DAQ system. Results of the GPC calibration in the bench scale experiment for the optimum molecular weight are shown in Fig. 2. Fig. 3 shows the comparison of the reactor and jacket temperatures. Fig. 4 shows the heat duty and control action during the reaction. Fig. 5 shows the GPC analysis report for the polymer in the pilot scale experiment. Experiments were conducted for the styrene acrylonitrile polymerization process in a batch reactor with the conditions mentioned in Table 1. Styrene and acrylonitrile of technical grade were used as the reactants. The solvent used is xylene and the initiator is AIBN. The experiment was carried out for a constant set point of 343 K. The total experiment was carried out for six hours. The product is cooled to deactivate the polymerization and then precipitated into methanol to isolate the polymer. The sample at the end of the reaction is subjected to GPC analysis for molecular weight determination. The monomer used in this study was obtained with 99% purity from Sigma Aldrich. The feedstock was maintained at a temperature of 4 °C in a cold room. The solvent used was xylene purchased from Aldrich with a purity of 99%, and was used without any further purification. Initially, the mixture of styrene and acrylonitrile was heated up to the desired initial temperature of 70 °C. The data were monitored using the data acquisition software "LabVIEW", a commercial software package installed on the personal computer to acquire, monitor, handle, analyze and log the data. All the control logic and the program for data logging are written using functions in the block diagram. All the controls and indicators are displayed in the front panel, where online monitoring of the process in the form of charts is also available at runtime. In the block diagram the process parameters are read using the input DAQ Assistant function. These values are logged into text files using the Write to Measurement File function. The process variable is given to the PI control function and the obtained controller output is fed back to the output (O/P) modules using the output DAQ Assistant function (an illustrative PI loop is sketched at the end of this entry). The required process values are given to a chart to provide an online trend while running the process, which can be seen on the front panel. Free radical polymerization experiments were carried out in a 250 ml glass reactor to establish and optimize the process conditions. The experiments conducted with varying temperatures are listed in Table 1. The molecular weight is analyzed through GPC analysis and a molecular weight of approximately 39,900–40,000 is observed. The GPC analysis tabulated in Table 2 indicates conformity of the results with the simulated results (Anand et al., 2013, 2014). Real time temperature monitoring using the DAQ with the desired set point is shown in Fig. 1. Results of the GPC calibration in the bench scale experiment for the optimum molecular weight are shown in Fig. 2. Fig. 3 shows the comparison of the reactor and jacket temperatures.
Fig. 4 shows the heat duty and control action during the reaction. Fig. 5 shows the GPC analysis report for the polymer in the pilot-scale experimentation. The polymer synthesized at the end of the reaction time was subjected to GPC analysis for determination of molecular weight. The sample is first dissolved in solvent and injected into a continually flowing stream of tetrahydrofuran without disturbing the continuous mobile phase flow at a flow rate of 0.05 ml/min. The mobile phase flows through millions of highly porous, rigid particles tightly packed together in a column. The column separates sample components from one another, allowing rapid analyses. A refractive index detector is used to monitor the molecular weight distribution, whose signal is directly proportional to concentration. In this procedure, the molecular weight of the sample is determined based on standards obtained by plotting the retention time against the log of molecular weight. The operating conditions for this method were established as a flow rate of 0.05 ml/min, a refractive index detector, a temperature of 24 °C and a column porosity of 10,000 Å. The molecular weight of the SAN copolymer obtained through this method is 30,000. The operating conditions of a reaction time of 6 hrs and a temperature of 343 K, which yield a molecular weight of 39,900 g mol−1 (close to the desired molecular weight of 40,000 g mol−1), were selected as the best operating conditions. The experiment with the optimal operating conditions was repeated in a 2 L batch reactor with a PC for online monitoring with data acquisition software. The experimental setup consists of a stainless-steel reactor of 2 L capacity with an external jacket with an electric coil for heating, and also with an internal cooling coil. A total volume of 1200 ml of mixture was used to carry out the experiments. The experiments were carried out by monitoring and controlling the reactor temperature using the manipulated heater with the aid of a PID controller algorithm. The optimum operating conditions used in the pilot-scale experiments were: temperature 343 K, styrene 160 ml, acrylonitrile 160 ml, xylene 800 ml and azobisisobutyronitrile 1.6 g. | This work describes an approach towards experimental implementation of real time control studies conducted on a batch polymerization reactor. The information is related to the controlling of molecular weight for styrene acrylonitrile copolymer polymerization system in a batch reactor generated under a varied range of temperatures, reactant concentrations and retention times. The operating conditions of 6 hrs and temperature of 343 K for yielding a molecular weight in the range of 39,900–40,000 g mol−1 are established using simulation studies. A real time control facility consisting of a batch reactor, data acquisition software “LabVIEW” and a PC for monitoring and control is used to implement these operating conditions. The resulting product is analyzed by gel permeation chromatography (GPC). The data generated can be used by researchers, academicians and industry for generating control strategies, automation and scale up of polymerization reactors. |
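As a rough illustration of the PI temperature-control loop described for the LabVIEW setup above (read the process variable through the DAQ, apply the PI law, write the heater output back), the sketch below runs a discrete PI controller against a toy first-order reactor model driven toward the 343 K set point. The gains, sampling time, heater range and plant constants are illustrative assumptions, not values taken from the study.

```python
# Minimal discrete PI control sketch; the plant model and tuning are made-up
# placeholders standing in for the real reactor, DAQ reads and heater writes.

def pi_step(error, integral, kp=8.0, ki=0.05, dt=1.0, u_min=0.0, u_max=100.0):
    """One PI update; 'integral' carries the accumulated error term."""
    integral += error * dt
    u = kp * error + ki * integral          # PI control law
    return max(u_min, min(u_max, u)), integral   # clamp to heater range (%)

def simulate(setpoint_K=343.0, t_final_s=3600, dt=1.0):
    temp, integral, log = 300.0, 0.0, []    # start near ambient (~300 K)
    for step in range(int(t_final_s / dt)):
        u, integral = pi_step(setpoint_K - temp, integral, dt=dt)
        # toy first-order plant: heater input raises temperature, heat losses
        # pull it back toward ambient (both coefficients are illustrative)
        temp += dt * (0.02 * u - 0.01 * (temp - 300.0))
        log.append((step * dt, temp, u))
    return log

if __name__ == "__main__":
    trace = simulate()
    print("final reactor temperature: %.1f K" % trace[-1][1])
```

In the real rig the read and write steps would go through the DAQ Assistant input and output functions and the logged values would be written to measurement files, rather than the toy plant model used here.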
31,488 | Tuberculosis control interventions targeted to previously treated people in a high-incidence setting: a modelling study | Worldwide, an estimated 10·4 million people developed tuberculosis and 1·8 million deaths were attributable to the disease in 2015.1,Substantial innovation in tuberculosis control is needed to reach the targets of the new global End TB Strategy, which aims to eliminate the disease by the year 2035.2,The rates of tuberculosis decline must accelerate in settings with the highest disease incidence, some of which are located in southern Africa and are facing the dual burden of tuberculosis and HIV.3,In these settings, the prevalence of untreated tuberculosis remains high, and conventional control approaches that rely on passive case finding can fail to identify infectious cases early enough to prevent transmission.4–6,Active case finding and wide-scale use of preventive therapy have been considered as enhanced activities for improving tuberculosis control, but these approaches require substantial investment.7,Furthermore, disappointing results from community-randomised trials of population-wide case finding and preventive therapy interventions8,9 have tempered enthusiasm for untargeted use of these interventions.It remains unknown whether targeting of case finding and preventive therapy to high-risk groups could be an effective approach for disease control in communities.The broader effect of a targeted approach depends on whether it is possible to prevent disease or reduce the duration of infectiousness among an easily identifiable subgroup that experiences a high relative risk of disease and is responsible for a substantial proportion of transmission.One subgroup that might be attractive for targeted interventions is individuals with a history of previous tuberculosis treatment.10,Studies from southern Africa show a high incidence of recurrent tuberculosis even after previous successful treatment,11–14 resulting from both endogenous reactivation and exogenous reinfection.15,We recently documented a large burden of prevalent tuberculosis in previously treated adults in 24 high tuberculosis burden communities in southern Africa, consistent with the hypothesis that this risk group drives a substantial proportion of transmission in these settings.16,Evidence before the study,Up to now, no empirical studies have been done of the population-level effect of interventions that aim to prevent recurrent disease or more rapidly detect tuberculosis in previously treated people.To establish whether population-based mathematical models have been employed to estimate the effect of tuberculosis interventions targeted to previously treated people, we did a PubMed search of relevant articles published in any language through March 7, 2017, using the search terms “ AND AND”.We also reviewed titles and abstracts of mathematical modelling studies identified through an earlier comprehensive systematic literature review of studies describing mathematical and economic modelling of tuberculosis published through March 30, 2013.While mathematical models have considered the effect of improving treatment outcomes as a means of reducing relapse and associated transmission, none has addressed preferential targeting of tuberculosis control interventions to former tuberculosis patients.Added value of this study,We developed a transmission-dynamic mathematical model of the tuberculosis epidemic and calibrated it to epidemiological and demographic data from a setting with a high incidence of 
tuberculosis in suburban Cape Town, South Africa.High rates of recurrent tuberculosis and a high prevalence of tuberculosis in previously treated people have previously been reported from this setting.We presented estimates of the potential effect of tuberculosis interventions targeted to people who completed an episode of tuberculosis treatment and noted that targeted prevention and case finding efforts could yield substantial benefits for tuberculosis control at the population level.Implications of all the available evidence,Our results suggest substantial public health potential for control interventions targeted towards individuals with a history of previous tuberculosis treatment in settings with a high disease incidence.In these settings, previously treated people are especially attractive for targeted control interventions because they remain at an increased risk of active tuberculosis after apparent cure, contribute substantially to onward transmission, and should be readily identifiable by national tuberculosis programmes.Efforts to establish the feasibility and costs of such targeted interventions are needed to establish their cost-effectiveness in tuberculosis and HIV endemic settings.In this study, we used a transmission-dynamic model to project the effect of two targeted control interventions—targeted active case finding and secondary isoniazid preventive therapy—in individuals who previously completed tuberculosis treatment in a high-incidence setting in suburban Cape Town, South Africa.We estimated the effect of these targeted interventions on tuberculosis incidence, prevalence, and mortality over a 10-year period.In this study, we used a calibrated population-based mathematical model to project the effect of two types of interventions targeted to previously treated people in a tuberculosis high-incidence setting.Our data suggest that, if targeted active case finding and secondary isoniazid preventive therapy were introduced to complement existing tuberculosis control efforts in this setting, the burden of tuberculosis could be substantially reduced.Our study supports the idea that efforts for prevention and prompt detection of recurrent tuberculosis35 could offer novel opportunities for tuberculosis control in settings of high tuberculosis incidence.We propose these targeted control interventions during a time when untargeted efforts, such as population-wide enhanced case finding and household-based screening8 and mass isoniazid preventive therapy9 have yielded insufficient evidence of effect, and where novel approaches are urgently needed to reduce the burden of tuberculosis in communities most affected by the disease.Targeting control efforts to groups at high risk of tuberculosis could enable health services to make more efficient use of available resources.In many high tuberculosis prevalence settings, previously treated people can be easily identified and experience an elevated risk of tuberculosis,16 therefore they might be an attractive target for focused interventions.We project that within 10 years in this setting, a combination of targeted active case finding and secondary isoniazid preventive therapy could avert more than a third of incident tuberculosis cases and tuberculosis deaths.Targeted active case finding alone could have a notable effect on tuberculosis prevalence and mortality, but is expected to have a smaller effect on incidence; our simulations suggest that a marked effect of targeted active case finding is achieved when it can be coupled with 
secondary isoniazid preventive therapy.Our projections show that much of the effect of targeted active case finding and secondary isoniazid preventive therapy accrues in the first few years after their implementation.The diminishing effect over time suggests a saturation effect, which might imply that such targeted interventions could be used within an adaptive control strategy.21,Our study constitutes a first step towards better understanding the effect of interventions targeted to previously treated people in high-incidence settings.However, several limitations must be noted.We applied our model to a specific setting with a high tuberculosis incidence and where high rates of recurrent tuberculosis due to relapse and re-infection had been previously reported.12,14,36,We note that the effect of interventions targeted at previously treated people, which we project for this setting, might not be easily generalised to other high-incidence settings for several reasons.High rates of recurrent tuberculosis have been reported from several other high-incidence settings.10,11,13,However, the population-level effect of targeted interventions will also depend on the size of the target group and their contribution to tuberculosis transmission in the population.In this particular setting, persistently high rates of incident tuberculosis have generated a large subgroup of people who had previously been treated for tuberculosis and who constitute a substantial proportion of the prevalent tuberculosis burden in the population.Although our projections are consistent with the epidemiology of tuberculosis in other high-incidence communities in South Africa,5,16 we expect interventions among previously treated people to be less effective in settings with lower tuberculosis incidence, and where a smaller proportion of the tuberculosis burden is attributable to former tuberculosis patients.For example, previously treated people accounted for 4·1% of the adult population and for 13% of prevalent tuberculosis cases in Lusaka, Zambia,6 and for 1·5% and 15%, respectively, in Nigeria37—two settings with lower tuberculosis incidence than our study setting.Nonetheless, given that new approaches for tuberculosis control are most needed in areas where tuberculosis incidence has been persistently high, our results suggest that efforts to both prevent and rapidly detect and treat recurrent disease will produce important health benefits.In our scenario analysis, for which we lowered the force of infection by 50%, we noted that targeted active case finding in combination with secondary isoniazid preventive therapy reduced the expected number of incident tuberculosis cases and deaths to a lesser extent, but still averted a third of incident cases.Differences in the prevalence of HIV in a population might influence the effect of interventions targeted to previously treated people in several ways.Communities with higher HIV prevalence might experience more recurrent tuberculosis given the elevated risk of re-infection with tuberculosis among HIV-infected individuals,38 and thus benefit more from similar interventions.Survival after a first tuberculosis episode might be reduced among those not on ART; those on ART may be subject to more regular clinical follow-up that would limit the benefit of additional case finding interventions in this group.The population-level effect of targeted active case finding and secondary isoniazid preventive therapy will be dependent upon existing patterns of passive health-care seeking 
behaviour.In settings where there are longer delays to diagnosis, additional interventions to more rapidly identify and treat recurrent cases would be more effectual, whereas in areas where individuals self-present quickly after onset of symptoms, we would expect more modest returns from investment in combined targeted active case finding and secondary isoniazid preventive therapy interventions.This is consistent with our sensitivity analysis, which showed that the time to passive tuberculosis detection among treatment-experienced adults correlated with the projected effect.Uncertainty around parameters of the natural history of tuberculosis, particularly those determining re-infection, disease progression, and mortality among previously treated individuals, leads to substantial uncertainty in the modelled outcomes.To avoid bias towards higher estimates of effect, we used conservative prior ranges of parameters for treatment-experienced adults, similar to those among treatment-naive adults.Specifically, we did not enforce higher susceptibility, lower partial immunity, or higher disease progression risk among those with a history of previous tuberculosis, but did allow posterior parameter values derived from calibration to vary by treatment history.While posterior distributions of our model are consistent with treatment-experienced people being more likely to become productively re-infected than treatment-naive people, we did not explicitly model differential risk of exposure, which could also be a mechanism driving increased risk of recurrent disease.39,Our study is further limited by uncertainty around the efficacy of secondary isoniazid preventive therapy towards preventing recurrent tuberculosis.As shown in our sensitivity analysis, higher effects of secondary isoniazid preventive therapy would result in higher effect at the population level.Only two studies—a randomised trial30 and a cohort study31—have assessed the effect of preventive therapy on recurrent tuberculosis.Both were limited in size and focused on people living with HIV.More available data from the field would improve our projections.We used a simple mathematical model that does not enable us to explore specific intervention designs or consider many practical issues related to implementation.In particular, in our main analysis we assumed that interventions could be aggressively rolled out in these suburban settings—ie, that individuals with previous treatment could be effectively identified, enrolled, and screened for tuberculosis on average every 12 months, that 90% could be enrolled in secondary isoniazid preventive therapy upon completing treatment, and 15% would drop out from secondary isoniazid preventive therapy every year.Although we believe high coverage levels of the interventions could be achieved in this relatively small suburban setting, the effect of these interventions would clearly be lower if interventions were less vigorously applied or if some individuals were not reachable by the intervention.In conclusion, our study provides impetus for further research to better understand the individual and population-level benefits of tuberculosis control interventions targeted at previously treated people.Studies and trials of the feasibility, safety, effect, and population-level effect of targeted active case finding and secondary isoniazid preventive therapy in previously treated people in high-incidence settings would be particularly useful.Other interventions to prevent recurrent tuberculosis such as adjuvant 
immunotherapy during tuberculosis treatment,40 extending the duration of tuberculosis treatment for certain high-risk patients,34 or post-treatment vaccination might be considered in the future.Further mathematical modelling, in which detailed costs of interventions are also included, would be useful for policy makers as they could establish whether such interventions are cost-effective and how investment in these approaches may compare with alternatives.We developed a stochastic compartmental transmission-dynamic model of the tuberculosis and HIV epidemic in a high-incidence setting of roughly 40 000 residents in suburban Cape Town, South Africa; the appendix provides details about the study setting.The tuberculosis component of our model followed the conventions of earlier models,17–21 with additional structure to distinguish between individuals who were never treated for tuberculosis and those who were previously treated for tuberculosis.We adopted previous ranges for parameters that allowed for differential partial immunity against re-infection and differential reactivation rates in treatment-experienced and treatment-naive, latently infected individuals, and differential delay in detecting tuberculosis in individuals with and without history of tuberculosis treatment.We also allowed for higher infectiousness in treatment-experienced compared with treatment-naive tuberculosis cases, as suggested by local tuberculosis prevalence surveys that reported that treatment-experienced individuals with tuberculosis were more likely to report cough and more likely to be smear-positive than treatment-naive individuals without the disease.16,Among individuals with incomplete tuberculosis treatment, we assumed that up to 20% remained infectious, consistent with findings from a retrospective cohort study done in the study setting.26,Table 1 shows a list of key model parameters describing differences in treatment-naive and treatment-experienced individuals.The HIV component of the model accounts for HIV infection, progression to a state of immunocompromised HIV infection, and antiretroviral treatment.We also implemented a model subcomponent for children aged 0–14 years.Additional model details including the subcomponent for children are described in the appendix.We calibrated the model to data between 2002, and 2008; model simulations were initiated in 1992 to allow for a 10-year burn-in period.We specified an initial population size of 32 889, informed by local census data and projections of population growth.The values of many parameters in tuberculosis and HIV co-epidemics models are not known with certainty.Therefore, we adopted a Bayesian calibration approach27 to identify parameter sets that resulted in simulated trajectories with good fit to available epidemiological data.To implement this approach, we specified previous distributions for each parameter.Multiple parameters sets were randomly and independently selected from these distributions.We used each of these parameter sets to simulate epidemic trajectories, and measured the goodness-of-fit for each of these simulations against several calibration targets.These calibration targets were operationalised as the likelihood of recording the epidemiological data conditional on the simulated values.The appendix provides additional details about the likelihood function used and the methods to characterise the posterior parameter distributions.Figure 2 displays the fit of simulated trajectories against the calibration targets listed in table 2.We 
used the model to project the effect of two targeted interventions: targeted active tuberculosis case finding and secondary isoniazid preventive therapy.For targeted active case finding we assumed that all adults who previously completed tuberculosis treatment were re-evaluated for active tuberculosis on average once per year and referred for tuberculosis treatment.We modelled targeted active case finding by increasing the rate of diagnosis, resulting in reductions in the average diagnostic delay, and the expected period of infectiousness.For secondary isoniazid preventive therapy, in the first year of intervention, we modelled a catch-up treatment campaign that reached 90% of individuals with previously completed tuberculosis treatment in the population.Subsequent to this catch-up period, we assumed that secondary isoniazid preventive therapy was offered to individuals after the completion of a full course of tuberculosis treatment and that an average of 90% of individuals completing treatment were enrolled.Secondary isoniazid preventive therapy reduces the rate of tuberculosis reactivation and the risk of progression to disease following re-infection.We allowed the preventive effects of secondary isoniazid preventive therapy to vary between 45% and 85%, a range informed by two previous studies.30,31,We assumed that the relative effect of secondary isoniazid preventive therapy was independent of HIV infection, but the absolute effect associated with this intervention remains greater for those with HIV in view of their higher reactivation rate and risk of progression.Secondary isoniazid preventive therapy was intended as a lifelong intervention but we assumed that, on average, 15% of people currently on secondary isoniazid preventive therapy drop out every year, and that the protective effect of secondary isoniazid preventive therapy does not extend beyond the cessation of treatment.32,We projected trends in tuberculosis incidence, prevalence, and mortality for 10 consecutive years—ie, 2016–25, under the baseline scenario and under two interventions scenarios: targeted active case finding alone and targeted active case finding plus secondary isoniazid preventive therapy.The effect of these interventions was defined as the cumulative number of tuberculosis cases and deaths averted during the 10-year period relative to the baseline scenario.The results are presented as the mean and 95% uncertainty intervals.To assess how sensitive the projected effect of targeted active case finding and secondary isoniazid preventive therapy was to input parameters of our model, we calculated partial rank correlation coefficients.33,34,The coefficients measure the correlation between an input parameter and the projected model outcome while adjusting for other parameters in the model.Additionally, we did the following types of scenario analyses: the projected effect of both targeted interventions under different periodicities of targeted active case finding, different probabilities of secondary isoniazid preventive therapy enrolment, and different annual rates of drop-out from secondary isoniazid preventive therapy were assessed.Furthermore, to provide additional insight on how well these targeted interventions might perform in communities with lower transmission rates, we report results for a hypothetical scenario where we reduced the force of infection by 50% relative to that in our study setting.The funders of the study had no role in study design, data collection, data analysis, data interpretation, or 
writing of the report.The corresponding author had full access to all of the data and the final responsibility to submit for publication.We estimated that in 2016, 13% of all adults in this population had previously been treated for active tuberculosis.The estimated prevalence of untreated tuberculosis was 2·2% in treatment-experienced adults, about 5·5 times higher than that in treatment-naive adults.The identified parameter posterior distributions suggested that HIV uninfected treatment-experienced people were, on average, 1·6 times more susceptible to re-infection than were HIV uninfected people who were latently infected and tuberculosis treatment-naive.HIV uninfected adults who had completed tuberculosis treatment experienced, on average, a 35 times higher rate of tuberculosis reactivation than people who were latently infected and tuberculosis treatment-naive.The appendix provides posterior distributions of key parameters of the natural history of tuberculosis for treatment-experienced and treatment-naive individuals.In the absence of targeted interventions, we projected 4457 incident tuberculosis cases and 623 tuberculosis-associated deaths between 2016 and 2025.In this period, 1423 incident tuberculosis cases will occur among adults who had completed a prior episode of treatment, representing 32% of all incident cases.Figure 3 shows trends in tuberculosis incidence projected for treatment-naive and treatment-experienced adults over a 25-year period.Among treatment-naive adults, mean tuberculosis incidence per 100 000 people was 903 in 2016 and was projected to decrease to 787 by 2025.Mean tuberculosis incidence among treatment-experienced adults was 4926 per 100 000 people in 2016, 5·5-times higher than among treatment-naive adults, and is expected to fall to 4353 by 2025.The projected average annual decrease in tuberculosis incidence between 2016 and 2025 was 1·3% in treatment-naive and 1·2% in treatment-experienced adults.With regards to the epidemiological effect of the interventions, our model suggests that annual targeted active case finding among individuals who had completed tuberculosis treatment would reduce the average duration of infectious disease in this group from 9·7 months to 5·0 months.Figure 4 shows trends in tuberculosis incidence, prevalence, and mortality under the baseline scenario, under targeted active case finding alone, and under combined targeted active case finding and secondary isoniazid preventive therapy.The average annual decline in tuberculosis incidence between 2016 and 2025 relative to 2015 was 1·6% at baseline, 3·0% under annual targeted active case finding, and 5·4% under annual targeted active case finding in combination with secondary isoniazid preventive therapy.Targeted active case finding alone would avert a total of 621 incident tuberculosis cases between 2016 and 2025, 14% of all incident tuberculosis cases projected under the baseline scenario.Over the same time period, targeted active case finding would avert a total of 138 tuberculosis deaths, 21% of all tuberculosis deaths projected under the baseline scenario.The implementation of targeted active case finding in combination with secondary isoniazid preventive therapy would avert 1805 incident tuberculosis cases between 2016 and 2025, 40% of all incident tuberculosis cases projected under the baseline scenario.The combined targeted intervention would avert a total of 267 tuberculosis deaths, 41% of all tuberculosis deaths projected under the baseline scenario.Findings of sensitivity 
analysis showed that the projected effect of targeted active case finding and secondary isoniazid preventive therapy was most sensitive to the tuberculosis reactivation rate after completion of tuberculosis treatment, the time between tuberculosis disease onset and detection in the target group, the natural mortality rate in treatment-experienced relative to treatment-naive adults, and the efficacy of secondary isoniazid preventive therapy, among other parameters.Lower periodicity of targeted active case finding and lower uptake of secondary isoniazid preventive therapy, as well as higher drop-out from secondary isoniazid preventive therapy, resulted in reduced effect.In a hypothetical scenario in which we reduced the force of infection to 50% of the baseline value, we noted that annual targeted active case finding in combination with secondary isoniazid preventive therapy averted 34% of 2811 incident tuberculosis cases and 36% of 444 tuberculosis deaths estimated at baseline. | Background: In high-incidence settings, recurrent disease among previously treated individuals contributes substantially to the burden of incident and prevalent tuberculosis. The extent to which interventions targeted to this high-risk group can improve tuberculosis control has not been established. We aimed to project the population-level effect of control interventions targeted to individuals with a history of previous tuberculosis treatment in a high-incidence setting. Methods: We developed a transmission-dynamic model of tuberculosis and HIV in a high-incidence setting with a population of roughly 40 000 people in suburban Cape Town, South Africa. The model was calibrated to data describing local demography, TB and HIV prevalence, TB case notifications and treatment outcomes using a Bayesian calibration approach. We projected the effect of annual targeted active case finding in all individuals who had previously completed tuberculosis treatment and targeted active case finding combined with lifelong secondary isoniazid preventive therapy. We estimated the effect of these targeted interventions on local tuberculosis incidence, prevalence, and mortality over a 10 year period (2016–25). Findings: We projected that, under current control efforts in this setting, the tuberculosis epidemic will remain in slow decline for at least the next decade. Additional interventions targeted to previously treated people could greatly accelerate these declines. We projected that annual targeted active case finding combined with secondary isoniazid preventive therapy in those who previously completed tuberculosis treatment would avert 40% (95% uncertainty interval [UI] 21–56) of incident tuberculosis cases and 41% (16–55) of tuberculosis deaths occurring between 2016 and 2025. Interpretation: In this high-incidence setting, the use of targeted active case finding in combination with secondary isoniazid preventive therapy in previously treated individuals could accelerate decreases in tuberculosis morbidity and mortality. Studies to measure cost and resource implications are needed to establish the feasibility of this type of targeted approach for improving tuberculosis control in settings with high tuberculosis and HIV prevalence. Funding: National Institutes of Health, German Research Foundation. |
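To illustrate the Bayesian calibration step described above (sample parameter sets from prior ranges, simulate epidemic trajectories, weight each run by the likelihood of the calibration targets, and characterise the posterior), the sketch below applies the same sampling-importance-resampling idea to a deliberately simplified two-parameter transmission model. The toy SIS-style model, the prior ranges and the single prevalence target are illustrative assumptions and stand in for the study's far richer tuberculosis-HIV model structure and multiple calibration targets.

```python
# Schematic sketch of prior sampling + likelihood weighting + resampling.
import random, math

def simulate_prevalence(beta, recovery, years=25, dt=0.1):
    """Toy closed SIS model; returns final infectious prevalence."""
    s, i = 0.99, 0.01
    for _ in range(int(years / dt)):
        new_inf = beta * s * i * dt
        new_rec = recovery * i * dt
        s += new_rec - new_inf
        i += new_inf - new_rec
    return i

def calibrate(n_samples=5000, target=0.02, sd=0.005, seed=1):
    random.seed(seed)
    draws, weights = [], []
    for _ in range(n_samples):
        beta = random.uniform(0.5, 3.0)        # prior on transmission rate
        recovery = random.uniform(0.5, 2.0)    # prior on recovery/detection rate
        prev = simulate_prevalence(beta, recovery)
        # Gaussian likelihood of the calibration target given the simulation
        weights.append(math.exp(-0.5 * ((prev - target) / sd) ** 2))
        draws.append((beta, recovery))
    # sampling-importance-resampling to approximate the posterior
    return random.choices(draws, weights=weights, k=1000)

if __name__ == "__main__":
    posterior = calibrate()
    mean_beta = sum(b for b, _ in posterior) / len(posterior)
    print("posterior mean transmission rate: %.2f" % mean_beta)
```

The same pattern scales to many parameters and targets: each simulated trajectory gets one likelihood value built from all targets, and parameter sets are retained in proportion to that likelihood.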
31,489 | Data on partial polyhydroxyalkanoate synthase genes (phaC) mined from Aaptos aaptos marine sponge-associated bacteria metagenome | These data provide detailed information on the isolation and identification of phaC from marine bacteria metagenome in Aaptos aaptos sea sponge at Bidong Island, Terengganu, Malaysia.Table 1 shows tabular data on the similarity comparison of sequenced phaC genes against the BLAST sequence databases.Table 2 shows data on the nucleotide sequences of the three putative, partial phaC genes identified from A. aaptos marine sponge-associated bacteria metagenome.The protein identifiers assigned by GenBank to uncultured bacterium phaC 2 and 2B are ASV71961.1 and ASY93340.1 respectively.Fig. 1 shows a phylogenetic Neighbour-Joining tree on the evolutionary relationships of identified and known phaC genes from variable sources.The marine bacteria metagenome was extracted from the tissue of the sea sponge Aaptos aaptos, which was collected in the waters of Bidong Island, Terengganu, Malaysia at a depth of 15 m on June 16, 2016.The metagenome was extracted from 1 cm3 sponge tissue using phenol-chloroform isoamyl alcohol according to modified protocols by Beloqui and co-workers .Whole genome amplification was then carried out on the extracted metagenome using REPLI-g Mini Kit.The purity and concentration of the metagenome before and after WGA were measured using Nanodrop™ 2000 Spectrophotometer."The reaction mixture for PCR was prepared using EconoTaq® PLUS 2X Master Mix according to the manufacturer's instructions prior to the PCR amplification process, which was proceeded in the sequence of pre-denaturation at 95 °C for 3 min, denaturation at 95 °C for 30 s, annealing at 56 °C for 1 min, extension at 72 °C for 90 s, and final extension at 72 °C for 5 min using Applied Biosystems™ Veriti 96-Well Thermal Cycler.The degenerate primers that targeted the Class I and II phaC genes were applied in the PCR process, which were forward primer, CF1TCTACTCTGACCT-3′), and reverse primer, CR4GACTAGTCCA-3′) .A semi-nested PCR was then carried out using forward primer, CF2TTCTTCTGGCGCAACCC-3′), and reverse primer, CR4, with similar protocols to amplify the target gene.The amplified PCR product was separated by 0.7% w/v agarose gel electrophoresis using PowerPac™ Basic power supply, and visualised using Gel Doc™ EZ Imager.The amplified phaC genes were sequenced via submission to First BASE Laboratories Sdn Bhd, which used Applied Biosystems™ Genetic Analyzer with Sanger sequencing method, prior to alignment and refinement using BioEdit software 7.2.6.The query sequences were compared against the sequence databases using the BLAST tool.The sequences were then released in the GenBank nucleotide sequence databases on September 4, 2017, under accession numbers MF457754, MF457753, and MF437016.The phylogenetic tree shows the evolutionary relationship among the three identified, putative partial phaC genes and previously reported phaC genes with complete cds released in GenBank database, comprising of 36 ingroup nucleotide sequences and 1 outgroup nucleotide sequence, which was constructed using the Neighbour-Joining method .The optimal tree with the sum of branch length=10.39764912 is shown.The percentage of replicate trees in which the associated taxa clustered together in the bootstrap test are shown next to the branches .The tree is drawn to scale, with branch lengths in the same units as those of the evolutionary distances used to infer the phylogenetic tree.The evolutionary 
distances were computed using the Maximum Composite Likelihood method and are in the units of the number of base substitutions per site.All positions containing gaps and missing data were eliminated.There were total 19 positions in the final dataset.Evolutionary analyses were conducted in MEGA7 . | We report data associated with the identification of three polyhydroxyalkanoate synthase genes (phaC) isolated from the marine bacteria metagenome of Aaptos aaptos marine sponge in the waters of Bidong Island, Terengganu, Malaysia. Our data describe the extraction of bacterial metagenome from sponge tissue, measurement of purity and concentration of extracted metagenome, polymerase chain reaction (PCR)-mediated amplification using degenerate primers targeting Class I and II phaC genes, sequencing at First BASE Laboratories Sdn Bhd, and phylogenetic analysis of identified and known phaC genes. The partial nucleotide sequences were aligned, refined, compared with the Basic Local Alignment Search Tool (BLAST) databases, and released online in GenBank. The data include the identified partial putative phaC and their GenBank accession numbers, which are Rhodocista sp. phaC (MF457754), Pseudomonas sp. phaC (MF437016), and an uncultured bacterium AR5-9d_16 phaC (MF457753). |
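As a small illustration of the distance and bootstrap ideas behind the Neighbour-Joining tree described above, the sketch below computes pairwise p-distances from an alignment and resamples alignment columns to build bootstrap pseudo-replicates. The toy sequences are placeholders (only the names echo the reported phaC entries), and the simple p-distance stands in for the Maximum Composite Likelihood distance used in MEGA7.

```python
# Pairwise p-distance matrix and column-resampling bootstrap (toy example).
import random

def p_distance(a, b):
    """Proportion of differing sites, ignoring alignment gaps."""
    pairs = [(x, y) for x, y in zip(a, b) if "-" not in (x, y)]
    return sum(x != y for x, y in pairs) / len(pairs)

def distance_matrix(seqs):
    names = list(seqs)
    return {(i, j): p_distance(seqs[i], seqs[j]) for i in names for j in names}

def bootstrap_alignment(seqs, rng):
    """Resample alignment columns with replacement (one pseudo-replicate)."""
    length = len(next(iter(seqs.values())))
    cols = [rng.randrange(length) for _ in range(length)]
    return {name: "".join(seq[c] for c in cols) for name, seq in seqs.items()}

if __name__ == "__main__":
    toy = {"phaC_2": "ATGGCTTGCAAC",
           "phaC_2B": "ATGGCATGCAAT",
           "Pseudomonas_phaC": "ATGACTTGGAAC"}   # placeholder sequences
    rng = random.Random(7)
    print("p-distance phaC_2 vs phaC_2B:",
          p_distance(toy["phaC_2"], toy["phaC_2B"]))
    replicate = bootstrap_alignment(toy, rng)
    print("bootstrap replicate of phaC_2:", replicate["phaC_2"])
```

In the full workflow, a Neighbour-Joining tree would be built from each bootstrap replicate's distance matrix, and the percentage of replicates recovering each cluster gives the bootstrap support values shown next to the branches.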
31,490 | Cerium oxide-monoclonal antibody bioconjugate for electrochemical immunosensing of HER2 as a breast cancer biomarker | Cerium belongs to the lanthanide series and is one of the most abundant rare earth elements in the Earth's crust. Several cerium minerals have been mined and processed for industrial applications and pharmaceutical uses. Cerium is a rare-earth element contained at high concentration in monazite minerals, which are widespread in Indonesia and have been produced as a side-product of tin processing. The element is an important material for industrial use in catalytic converters for removing toxic gases, solid oxide fuel cells, electro-chromic thin-film applications, glass polishing, catalysts, and sensors. Cerium has also been developed for pharmaceutical uses and has shown a role as an antioxidant in biological systems, which makes cerium nanoparticles well suited for applications in nano-biology and regenerative medicine. Cerium oxide has been reported to have advantageous properties such as non-toxicity, good electrical conductivity, chemical inertness, large surface area, negligible swelling, oxygen transferability, and good bio-compatibility. These characteristics have attracted increasing attention to the fabrication of high-performance electrochemical biosensors and immunosensors. A nano-structured cerium oxide film-based immunosensor has been developed for ochratoxin detection. A nano-structured cerium oxide film was fabricated onto an indium‑tin-oxide (ITO) coated glass plate, and then rabbit-immunoglobulin antibodies and bovine serum albumin were immobilized onto the film. Electrochemical studies revealed that nano-CeO2 particles provide increased electron communication between the r-IgGs and the electroactive electrode surface. The resulting immunosensor had a linear response to ochratoxin in the range of 0.5–6.0 ng/dl, a low detection limit of 0.25 ng/dl, and a fast response time of 30 s. The high value of the association constant, Ka = 0.9 × 10¹¹ l/mol, indicates the high affinity of the BSA/r-IgGs/nano-CeO2/ITO electrode for ochratoxin. A label-free amperometric immunosensor based on a multi-wall carbon nano-tubes-ionic liquid‑cerium oxide film-modified electrode has been proposed for the detection of myeloperoxidase in human serum. The cerium oxide was dispersed by chitosan, coated on the glassy carbon electrode, and then antibodies were attached to the nano‑cerium oxide surface. With a non-competitive immunoassay format, the antibody-antigen complex was formed between the immobilized anti-MPO and MPO in a sample solution. Under optimal conditions, the immunosensor showed a current change, which was proportional to MPO concentration in the range of 5 to 300 ng/mL, with a limit of detection of 0.2 ng/mL. An improved electrochemical immunosensor for MPO detection based on a nano‑gold/cerium oxide-l-cysteine composite film has been developed. Cerium oxides were dispersed in 1-butyl-3-methylimidazolium hexafluorophosphate and then were immobilized on the l-cysteine film. The cerium oxides have a positive charge that may be able to bind more anti-MPO. The immunosensor improved some characteristics, such as a linear range for the MPO concentration between 10 ng/mL and 400 ng/mL, and a limit of detection of 0.06 ng/mL. Yang et al.
used ceria nanoparticles in the design of an amplification electrochemical immunosensor for influenza biomarker determination. The sensor was constructed based on cerium oxide/graphene oxide composites as a catalytic signal amplifier for the detection of influenza, and using 1-naphthol, which can be hydrolyzed to naphthoquinones by the O-acetylesterase enzyme of the influenza virus. The immunosensor showed a linear range of 0.0010–0.10 ng/mL with a detection limit of 0.43 pg/mL. An electrochemical biosensor has also been developed based on CeO2 nano-wires for the determination of Vibrio cholerae O1. The antibodies of VcO1 were immobilized on the surface of a CeO2 nanowire. Interaction of bacterial cells with the anti-VcO1 was analyzed through the decrease of the current peak of the biosensor. The biosensor had a linear range of response between 10² CFU/mL and 10⁷ CFU/mL with a detection limit of 100 CFU/mL for VcO1. Another development of cerium oxide in immunosensors is cerium nanocomposite biosensors. A nanocomposite with a core-shell structure, cuprous oxide@ceric dioxide (Cu2O@CeO2), was proposed for the quantitative detection of prostate-specific antigen. The amino-functionalized Cu2O@CeO2-NH2 core-shell nanocomposites were prepared to bind gold nanoparticles by constructing a stable Au–N bond between the AuNPs and the -NH2 groups. The amplified signal sensitivity was achieved by the synergetic effect existing in the Cu2O@CeO2 core-shell loaded with AuNPs. It showed good electro-catalytic activity towards the reduction of hydrogen peroxide and was used as a transducing material to efficiently capture antibodies and achieve triple signal amplification in the proposed immunosensor. The developed immunosensor showed a wide linear range between 0.1 pg/mL and 100 ng/mL, with a low detection limit of 0.03 pg/mL. The Co3O4@CeO2-Au@Pt nanocomposite has also been developed as a label to conjugate with secondary antibodies for signal amplification in a sandwich-type electrochemical immunoassay for squamous cell carcinoma antigen detection. The amino-functionalized cobaltosic oxide@ceric dioxide (Co3O4@CeO2) nanocubes with core-shell morphology were prepared to combine with sea-urchin-like gold@platinum nanoparticles. The glassy carbon electrodes, modified using electro-deposited gold, were used as antibody carriers and sensing platforms. Because of the synergetic effect present in Co3O4@CeO2-Au@Pt, amplified sensitivity was achieved towards the reduction of hydrogen peroxide. The proposed immunosensor exhibited a wide linear range, from 100 fg/mL to 80 ng/mL, with a low detection limit of 33 fg/mL for detecting SCCA. As discussed, studies of various developments of ceria-based immunosensors have been extensively reported. The functionalized ceria or cerium oxides have attracted much attention in the fabrication of bio-sensing systems due to their unique properties. The use of a linker in the controlled assembly of nanoparticles to a target is one important aspect of surface modification and functionalization in biomedicine, where nanoparticles serve as diagnostic and therapeutic agents in cells or tissues. Polyethylene glycol (PEG) has been reported as a long, compatible linker for nanoparticles. The main advantage of using PEG is that it provides enough space to bind more recognition-element molecules, such as monoclonal antibodies, to the nanoparticles and allows them to stand apart, resulting in a more effective combination with the target molecules. Anti HER2 is a monoclonal antibody that can bind the HER2 protein, which is known as one of the breast cancer biomarkers because HER2 is over-expressed in some breast cancer
patients. HER2 is a receptor tyrosine kinase, a member of the epidermal growth factor receptor family involved in cellular signaling pathways, which may lead to proliferation and differentiation. The overexpression of HER2 in some breast cancer patients can be used as a key prognostic marker and an effective therapeutic target for breast cancer, which generally occurs in adult females. This research concerns the use of PEGylated nano-structured cerium oxides attached to different concentrations of anti HER2 to form bioconjugates. A screen-printed carbon‑gold nanoparticles electrode was used for designing a label-free platform, and the bioconjugates were stabilized covalently on the electrode surface to assay HER2 antigen in serum samples. This highly sensitive and straightforward electrochemical analysis method holds great potential for the detection of this biomarker in clinical diagnoses. Anti HER2 Ab was purchased from F. Hoffmann-La Roche Ltd. Human HER2/CD340, 3-aminopropyl trimethoxysilane (APTMS), bovine serum albumin (BSA), cysteamine, 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC), potassium ferricyanide, N-hydroxysuccinimide (NHS), 2-iminothiolane, 3-mercaptopropionic acid (MPA), and polyethylene glycol-α-maleimide-ω-NHS were purchased from Sigma-Aldrich Ltd. Toluene, hydrochloric acid, ethylene diamine tetraacetic acid (EDTA), ammonium hydroxide, potassium hydrogen phosphate, sodium hydrogen phosphate, cerium sulfate, sodium hydroxide, and methanol were purchased from Merck. Voltammetric measurements were done using a μAutolab type III Potentiostat/Galvanostat with NOVA 1.10 software. The screen-printed carbon‑gold nanoparticles electrodes (DRP-110GNP) were used for the electrochemical experiments. Other instruments used were a scanning electron microscope (JEOL JSM-6360LA) and an FT-IR spectrophotometer. NS CeO2 was synthesized based on a simple method. Briefly, 2 g of cerium sulfate, Ce(SO4)2, was dissolved in 25 mL double-distilled water, and then a drop of sodium hydroxide solution was added to the 25 mL solution. The mixture was continuously stirred for about 2–3 h at room temperature until a pale yellowish-white precipitate was obtained. The precipitate was separated and washed several times with double-distilled water, after which it was dried at 150 °C for an hour. The obtained yellowish particles were dried at 250 °C for 3 h. The Mal-PEG-NHS-NS CeO2 was prepared based on a ferric oxide functionalization procedure with modification. Briefly, to an amount of 0.1 g of NS CeO2 powder, 35 mL of toluene and 25 μL of APTMS were added. Then the mixture was sonicated for 30 min and heated at 60 °C for 5 h in an oven. The obtained APTMS-coated NS CeO2 was separated by decantation and redispersed in 50 mL of methanol. The resulting amino-NS CeO2 was dispersed in 10 mL of redistilled water, and then 31 mg of NHS-PEG-Mal was added to the solution. The mixture was sonicated for 30 min and stirred for 6 h. At the end of this step, the PEGylated nanostructures were separated and redispersed in 5 mL of redistilled water. The bioconjugate was synthesized based on reported methods. The synthesis started with the thiolation of anti HER2 with 2-iminothiolane. The thiolation was done by incubating 1000 μg/mL of anti HER2 in 0.1 M PBS pH 8.0 with 2-iminothiolane, at a molar ratio of 1:200. Then 5 mM of EDTA was added to the reaction mixture to protect the thiol groups from oxidation. The mixture was stirred at room temperature for 1 h using a mini spin. The thiolated anti HER2 thus formed was purified by dialysis against 20 mL of PBS pH
8.0. The purified thiolated anti-HER2 was subsequently added to 200 μL of 5 mg/mL NHS-PEG-Mal-NS CeO2, and the mixture was kept under constant shaking for 6 h at room temperature. In this way, the thiol groups of the thiolated anti HER2 were covalently attached to the unsaturated bonds of the maleimides linked to the NS CeO2 to form the bioconjugates. The resulting bioconjugates were then separated by using magnetic decantation and, finally, redispersed in 1 mL of PBS pH 7.4. Fig. 1 shows schematic reaction mechanisms of the preparation of the cerium oxide-anti HER2 bioconjugate. The electrode was immersed in 0.1 M MPA for 10, 30, 60, and 120 min. Then, the generated electrochemical response was measured by cyclic voltammetry using a redox system of 10 mM [Fe(CN)6]3−/4− in 0.1 M KCl at a potential range of −0.6 V to +0.6 V, with a scanning rate of 50 mV/s. A 200 μL aliquot of Mal-PEG-NHS-NS CeO2 solution was put into a microtube, and then 2.5 μg/mL of anti-HER2 antibody was added to the solution. The mixture was stirred using a mini spin, and then its electrochemical response was measured by cyclic voltammetry using a redox system of 10 mM [Fe(CN)6]3−/4− in 0.1 M KCl at a potential range of −0.6 V to +0.6 V with a scanning rate of 50 mV/s. The whole experiment was repeated three times, each using a different concentration of anti-HER2 antibodies, i.e., 5.0, 7.5, and 10.0 μg/mL. The electrode was immersed in 0.1 M MPA for 30 min, then dipped in a 0.1 M cysteamine solution and rinsed with redistilled water. After that, a 40 μL solution containing 0.1 M EDC and 0.1 M NHS with a mole ratio of 1:1 was dropped onto the electrode, which was subsequently incubated for one hour at room temperature. Then, after dropping 40 μL of bioconjugate onto the electrode, it was incubated for one hour at room temperature. The electrochemical response was measured by cyclic voltammetry using a redox system of 10 mM [Fe(CN)6]3−/4− in 0.1 M KCl at a potential range of −0.6 V to +0.6 V, with a scanning rate of 50 mV/s. The whole process was repeated three times, each with a different length of time for the final incubation of the bioconjugate. The electrode was immersed into 0.5 mL of 0.1 M MPA for 30 min at room temperature. This step led to the formation of MPA-GNPs on the electrode. The electrode was then rinsed with double-distilled water to eliminate any unwanted physically adsorbed material. Following this, to activate the carboxyl groups of MPA, 40 μL of a solution containing 0.1 M EDC and 0.1 M NHS with a mole ratio of 1:1 was dropped onto the MPA-GNPs electrode surface. After that, the electrode was incubated for one hour at room temperature. After washing with double-distilled water, the electrode was immersed into a 0.1 M cysteamine solution to form Cys-MPA-GNPs on it, via amide formation, and rinsed with double-distilled water. Thereafter, 40 μL of the bioconjugate was dropped onto the electrode and incubated for two hours at 4 °C. Stable carbon‑sulfur bonds were formed between the carbon double bonds of the free maleimides of the bioconjugate and the thiol groups of Cys. Any possible excess of the bioconjugate was rinsed off with a 0.1 M PBS pH 7.4 solution and redistilled water, consecutively. Next, 20 μL of 1% BSA solution was added to the modified electrode, which was subsequently incubated for 45 min at room temperature, followed by rinsing the electrode with 0.1 M PBS pH 7.4 solution and redistilled water, consecutively. It will be noted that BSA was used to block any
possible non-specific binding sites. To continue the experiments, 20 μL of HER2 solution was dropped onto the bioconjugate-modified electrode, which was then left for 30 min at room temperature. Finally, the electrode was rinsed with 0.1 M PBS pH 7.4 solution and redistilled water, consecutively. To test the performance of the resulting electrode as an immunosensor, cyclic voltammetric measurements were done by using a redox system consisting of 10 mM [Fe(CN)6]3−/4− with 0.1 M KCl in 0.05 M PBS pH 7.4 solution, at a potential range of −0.6 V to +0.6 V, with a scanning rate of 50 mV/s. Each measurement was carried out after the addition of MPA, bioconjugates, and BSA. The whole experiment was finally accomplished by measuring some other HER2 solutions of different concentrations. Fig. 2 shows the general schematic diagram of the immunosensor platform for the detection of HER2. Serum samples were obtained from the Bio Fit clinical laboratory in Bandung, Indonesia. Each of the samples was diluted 20 times with 0.01 M PBS pH 7.4 solution. Then 30 μL of a diluted sample was dropped onto the bioconjugate-electrode, which was subsequently incubated for 30 min at 37 °C. After rinsing the electrode with PBS and redistilled water, it was used to measure the electrochemical response by cyclic voltammetry using a redox system of 10 mM [Fe(CN)6]3−/4− with 0.1 M KCl in 0.05 M PBS pH 7.4 solution, at a potential range of −0.6 V to +0.6 V, and with a scan rate of 50 mV/s. Three replicate measurements were done, and the HER2 level of the sample was calculated using a previously prepared calibration regression equation. Recovery tests were carried out for two samples. Each sample was spiked with two different concentrations of HER2. The standard addition method was used to assay the spiked concentrations. Characterization of the NS CeO2 was carried out by using scanning electron microscopy to determine its morphology and size distribution. Fig. 3A displays the results of NS CeO2 analysis at 12,000× magnification, which show an average size of the NS CeO2 of about 100 nm. In order to prove that the APTMS had been coated on the NS CeO2, FT-IR spectroscopic analyses were carried out to record the spectra of the APTMS as a reference, and those of the APTMS-NS CeO2. The resulting IR spectra of the two compounds are overlaid and presented in Fig. 3B.
In the IR spectrum of APTMS, there are several peaks in the frequency region of 3460–3280 cm−1, which originate from the stretching vibration of OH groups of water molecules or from those of surface OH groups. The peak at 1500 cm−1 is the bending vibration of NH and thus confirms the presence of a primary NH bond. The peak at 2930.18 cm−1 is assigned to the CH stretching band, and the peak at 1111.96 cm−1 corresponds to SiO stretching. Moreover, the peak at 1430 cm−1 is assigned to the bending vibration of CH, while the stretching vibration of the aliphatic CN bond is characterized by the peak around 1023.33 cm−1. Meanwhile, the spectrum of APTMS-NS CeO2 shows several bands. The very large and strong peak around 2557.0 cm−1 originates from the –CH functional group. The peak at 1505.7 cm−1 is the bending vibration of the –N-H bond; the moderate peak at 1130.7 cm−1 can be assigned to the SiO bond, and the sharp peak with strong absorption intensity at 579.8 cm−1 is assigned to the CeO stretching band. The intense band observed around 852 cm−1 is due to the CeOC stretching vibration. The peak at 3410 cm−1 is related to OH groups of water molecules or those of surface OH groups. From the analysis of the two spectra given above, i.e., the spectra of APTMS and those of APTMS-NS CeO2, it can be concluded that APTMS had successfully been attached to the surface of NS CeO2. Cyclic voltammetry has been used in this study for the electrochemical characterization of the electrodes. The cyclic voltammograms of SPCE-GNPs with MPA, of SPCE-GNPs-MPA-NS CeO2, and of SPCE-GNPs-MPA bioconjugates are shown in Fig. 3C. As can be seen in Fig. 3C, the presence of NS CeO2 decreased the intensity of the redox signal from 116.018 μA to 91.131 μA, i.e., about 25%. This is evidence that the nanostructured CeO2 is electroactive and facilitates electron transfer to the electrode. Therefore, NS CeO2 has excellent characteristics for forming a bioconjugate with the antibody to be used for good electrochemical detection of HER2. Immunoreagents, including the anti-HER2 antibody and the HER2 antigen, are non-electroactive compounds, and thus they reduce the current response of the K3[Fe(CN)6] redox probe on the electrode surface. For this reason, optimal experimental conditions resulting in the lowest current were chosen in the experiments. In the present study, the electrode was modified by immobilizing the anti-HER2 bioconjugates on the electrode, which was conducted by using the covalent bonding method through the amine coupling system. However, the formation of the covalent bonds in this step is reported to be slow. It has also been noted that each important step plays an important role and needs to be optimized, including the incubation time of MPA. Each measurement was carried out in three replicates. The gold electrode was first immersed in a 0.1 M MPA solution, forming a robust Au–S bond because the MPA is impregnated on the gold surface. The carboxylate group of MPA facilitates the electron transfer process between the bioconjugates and the electrode. Fig. 4A shows that the optimum MPA incubation time was 30 min, with an average current of about 52 μA.
Fig. 4B shows that the optimum concentration of anti-HER2 in the bioconjugates was 5 μg/mL because it gave the lowest current response. The bioconjugation process was carried out by conjugating NS CeO2 with thiolated anti-HER2. The method to bind thiolated anti HER2 at various concentrations with PEG-NHS-Mal and NS CeO2 was done as follows: for the first experiment, the concentration ratio between PEG-NHS-Mal-NS CeO2 and anti HER2 was 1:0.05 μg/mL. Under this condition, the current obtained was too high, i.e., about 138 μA. Next, anti HER2 of different concentrations was used, and the results are presented in Fig. 4B. The data in Fig. 4B show that the optimum concentration was 5 μg/mL. If the concentration was higher than the optimum, the excess of anti-HER2 on the NS CeO2 probably resulted in saturation of the nanoparticle surface, which interfered with the binding of the bioconjugate maleimide groups to the electrode surface. However, if the concentration of anti-HER2 was lower than the optimum, the current would be increased because less anti-HER2 covered the electrode surface. Fig. 4C shows data on the optimization of the incubation time of the bioconjugates on the electrodes. The data show that the optimum incubation time of the bioconjugate on the electrode surface was two hours, i.e., the time giving the lowest peak current, which was about 115 μA. In the present study, the responses of the prepared immunosensor towards HER2 at seven different concentrations were studied by cyclic voltammetry using the redox system of 10 mM [Fe(CN)6]3−/4− in 0.1 M PBS pH 7.4 solution containing 0.1 M KCl, under the optimal experimental conditions. Triplicate experiments for each HER2 solution concentration were done, and the resulting voltammograms are presented in Fig. 5A. A calibration curve was then prepared using peak current data from the seven resulting voltammograms. As can be seen in Fig. 5B, the calibration curve consisted of two linear segments with different slopes.
5B, the calibration curve consisted of two linear segments with different slopes. The first segment of the calibration curve is linear over a HER2 concentration range of 1–500 pg/mL. The calibration regression equation for this concentration range is ΔIpa = 0.0179C + 0.9515 (where C is the HER2 concentration), with R2 = 0.9898. Meanwhile, the second segment of the calibration curve is linear over a 5.0–20.0 ng/mL range of HER2 concentration, with the regression equation ΔIpa = 0.303C + 9.8006 and R2 = 0.9974. The limit of detection of HER2 using the developed electrochemical immunosensing was evaluated as 3s/m, where s is the standard deviation of the triplicate measurements of the blank peak current, and m is the slope of the calibration curve for the HER2 concentration range of 1–500 pg/mL. Using the developed sensing method and this procedure, the limit of detection of HER2 was found to be 34.9 pg/mL. This limit of detection is much lower than those previously reported using a sandwich immunosensor without bioconjugate. Thus, the near-future use of the NS CeO2 immunosensor developed in the present study for non-invasive monitoring of the HER2 biomarker in breast cancer patients is very promising and would result in a much better quality of information. Table 1 shows a comparison of the analytical characteristics of the developed immunosensor with other existing types of HER2 immunosensor. From the information given in Table 1, it can be noted that the immunosensor developed in the present study performs better than most of the existing immunosensors used to detect HER2. The immunosensor method developed in the present study was applied for the determination of the HER2 concentration in blood serum samples collected from suspected breast cancer patients. Conclusions on whether a patient may suffer from breast cancer are based on the concentration of HER2 in their blood serum. If a patient's blood serum contains HER2 at more than 15.0 ng/mL, the HER2 concentration is considered excessive, and the patient presumably has breast cancer. The procedure used to measure the HER2 concentration in blood serum samples in this study was similar to that applied to standard HER2 as a reference compound. Two different blood serum samples were analyzed for their HER2 contents using the developed immunosensor. It was found that the concentration of HER2 in one blood serum sample was 51.28 ± 0.46 ng/mL, and in the other 4.67 ± 0.30 ng/mL. A recovery test was carried out to evaluate the consistency of the new method with respect to its precision. This was done using the standard addition method. Thus, the two serum samples were spiked with standard HER2 at known concentrations. Calibration curves were prepared as usual, and the concentration of HER2 in the samples was determined from these curves. Using this standard addition method, with three replicate measurements, the percentage recovery of one sample was 101.58 ± 1.03% and that of the other sample was 102.37 ± 0.70%. It can be concluded that the resulting recovery figures verify the good precision of the new method. Based on the results, it can be concluded that cerium oxide nanostructures were successfully synthesized and conjugated with anti-HER2 to form a CeO2-anti-HER2 bioconjugate for the detection of HER2. The voltammetric immunosensor using the CeO2-anti-HER2 bioconjugate was selective and sensitive for detecting HER2. It
could be concluded that the proposed immunosensor for the determination of HER2 has good measurement sensitivity and could be applied in developing alternative clinical bioanalysis. | Metal oxide-based sensors have the advantage of rapid response and high sensitivity for detecting specific active biological species, and are relatively inexpensive. This report of the present study concerns the development of a cerium oxide – monoclonal antibody bioconjugate for its application as a sensitive immunosensor to detect a breast cancer biomarker. A cerium oxide-anti HER2 bioconjugate was constructed by adding anti HER2 onto cerium oxide that had been previously reacted with APTMS and PEG-NHS-Maleimide. The FTIR spectra of the reaction product showed that the cerium oxide-anti HER2 bioconjugate was successfully synthesized. The resulting bioconjugate was then immobilized on a screen-printed carbon-gold nanoparticles electrode surface by using the amine coupling bonding system. The interaction of the synthesized cerium oxide-anti-HER2 bioconjugate with HER2 was found to inhibit electron transfer and cause a decrease in the voltammetric Fe(CN)63-/4- peak current, which was proportional to the concentration of HER2. The optimal response of the current signal was generated at an anti-HER2 concentration of 5.0 μg/mL. Two linear ranges of HER2 concentration were found: 0.001 to 0.5 ng/mL and 0.5 to 20.0 ng/mL. By using the first calibration curve, the limit of detection was 34.9 pg/mL. The developed label-free immunosensor was used to determine HER2 in a human serum sample with satisfactory results, as shown by consistent results with the addition of standard. Thus, the resulting immunosensor in this study is promising and has a potential application in clinical bio-analysis. |
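The detection-limit arithmetic reported above for this immunosensor (LOD = 3s/m, using the slope of the 1–500 pg/mL calibration segment) can be sketched in R as follows. The triplicate blank peak currents are hypothetical placeholders, and the slope of 0.0179 is assumed to be expressed in μA per pg/mL; only the formula and the slope value come from the text.

```r
# Hypothetical sketch of the LOD calculation described for the HER2 immunosensor.
# Only the formula (LOD = 3*s/m) and the slope are taken from the text above;
# the blank currents are invented for illustration.
blank_ipa <- c(116.2, 116.4, 116.0)   # triplicate blank peak currents (uA), hypothetical
s <- sd(blank_ipa)                    # standard deviation of the blank measurements
m <- 0.0179                           # slope of the first calibration segment (uA per pg/mL, assumed)
lod <- 3 * s / m                      # limit of detection in pg/mL
lod                                   # with these placeholder blanks, ~34 pg/mL (the paper reports 34.9)
```

The placeholder blanks were chosen only so that the formula returns a value of the same order as the reported 34.9 pg/mL; they do not reproduce the authors' measurements.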
31,491 | Fading positive effect of biochar on crop yield and soil acidity during five growth seasons in an Indonesian Ultisol | Biochar amendment to soils offers a method to sequester carbon in soil with the co-benefits of waste management, pollutant immobilization, fertility increase and/or N2O emission reductions of degraded soils.The mechanism behind this fertility increase can be improved water retention, improved soil structure, improved nutrient retention, increased robustness towards pests, improved nutrient transport by mycorrhizae, alleviation of soil acidity, or combinations of these mechanisms.For less degraded soils, enrichment of the biochar with nutrients by co-composting or mixing with urine or mineral nutrients can still result in positive biochar effects on crop yield, especially in those cases where nutrient availability of the main growth-limiting factor.Large variations in biochar effectiveness on crop harvest in the tropics have been shown, from minor, generally insignificant effects to strongly positive effects, with the median effect being an increase of about 20%.The effect of biochar is usually strong in tropical soil in comparison to soils in temperate zones where the effect of biochar on the yield and soil properties is usually low.Soils of high fertility have shown to benefit less from biochar addition.Effects tend to be a bit more strongly positive for acidic and weathered soils with coarse or medium/heavy texture which are characteristic of tropical soils.The effect of biochar seems to be thus strongly connected to the soil properties and the climate, but thus far correlations with crop yield are not completely clear.Several authors state the yield increases are related to an overall improvement of soil qualities, also in tropical soils, however pin pointing the exact mechanism behind the increase in yields can be challenging.In extensive four-season field trials in Thailand and the Philippines with rice husk biochar, Haefele et al. observed increased yields of 16–35%, and hypothesized that the increase was a result of improvements in water retention and increased available K and P. Steiner et al. tested biochar effects over four planting seasons in an acidic soil in Brazil, and found positive effects of biochar that faded over time in multiple seasons.Major et al. studied biochar effects in an acidic oxisol in Colombia for 4 years, and did not find any effects in the first year, but maize yield increases in the three subsequent seasons.Griffin et al. investigated the amendment of walnut shell biochar over four years in a field experiment, and found a short-lived effect on maize crop yield in the second year.A long-term wheat/maize field experiment in a calcareous soil with extremely high biochar dosages revealed a slight increase in cumulative yield over four seasons, due to lower bulk density, improved soil moisture and K addition.Jones et al. 
did a three-year study of biochar on maize and grass yield, in pH-neutral sandy clay loam in Wales, UK.Biochar effects were stronger in year two than in year one.After three years in the field, biochar had caused beneficial changes in the microbial community.Despite their merit of drawing general conclusions from a plethora of data, the meta-analyses on biochar effect on crop yield have necessarily pooled the available data without considering the time since biochar application or inter-season variation for studies carried out over multiple years.The reason is that there are too few studies carried out for longer time spans.A recent review reported that 60% of the 428 data points reviewed were based on one year trials or simply used data corresponding to the first year of multiple-year studies.Thus, there is a need for well-controlled, replicated and longer-term field studies on representative soils.Here, we contribute to closing this gap, as information will be obtained related to trends observed for yields from a highly acidic soil up to five seasons since biochar application, with two very different biochars.The mechanism explaining the soil enhancement effect of biochar will also be investigated, as well as and how often one would need to replenish the biochar in order to maintain the positive soil fertility effects.Ultisols in the humid tropics such as the presently studied soil require significant liming or addition of organic matter to remediate Al toxicity, which is acknowledged as one of the major causes for crop failure.Biochar often contains a major ash component, which is alkaline in nature, and may be used as an alternative for lime, with the co-benefits of carbon sequestration and other improved soil characteristics.The two biochars tested for their effects on crop yield and soil properties were made from cacao shell and rice husk, strongly differing in acid neutralization capacity and cation exchange capacity.A high ANC of a biochar can probably alleviate soil acidity and reduce available Al concentrations.Also P availability can be positively impacted by an increasing pH.The hypotheses for this study were 1) that the agronomic effects of biochar in this soil could be explained by reduced soil acidity, as expressed by reduced exchangeable Al3+ concentrations as well as increased pH, Ca/Al ratios, and base saturation.As a result it was also hypothesized that the biochar with highest ANC would give the strongest yield effects in a soil where crop growth is mainly limited by soil acidity, and 2) that the biochar effectiveness on crop yield would decline over time, due to continued nutrient leaching and rapid depletion of the alkalinity added via the biochar.To investigate the longevity and mechanism of biochar effects on maize production in highly acidic soils of the humid tropics, an extensive field trial was carried out over five cropping seasons, with two biochars and five replicates at an experimental farm in the Lampung district, South Sumatra, Indonesia.The soil was classified as a Typic Kanhapludult Ultisols with high levels of exchangeable aluminum and very low pH.The Lampung district has high rainfall and temperatures throughout the year, and thus a high soil leaching and weathering potential.Both biochars were applied in dosages of 0, 5 and 15 t ha−1 and mixed into the upper 10 cm of the soil.Soil bulk density was 1.30 g cm−3.Percent addition of biochars was thus 0.4% and 1.2% for the 5 and 15 t ha−1 additions, respectively.Both soil chemical parameters and maize 
yields were monitored over the five growth seasons.Biochars were prepared from rice husk and cacao shell, two common agricultural wastes in Indonesia.Pyrolysis was carried out in a simple kiln without a retort function, and the procedure and conditions for making the biochars have been extensively described in refs., where the same biochars were studied.Pyrolysis temperatures, as determined via Thermogravimetric Analyses were between 400 and 500 °C.Biochar characteristics were reported in earlier work and reported in Table S1.Field trials were carried out on a strongly acidic, sandy loam Ultisol in Lampung province, over five planting seasons: season 1: July–October 2012; season 2: December 2012–April 2012; season 3: April–August 2013; season 4: November 2013–February 2014; season 5: April–July 2014.Precipitation occurs throughout the year in Lampung province, with relatively low rainfall in June–November.In the relatively dry seasons, the plots were irrigated when necessary, in order to keep the seasons as comparable as possible, and because the effect of biochar on moisture retention was neither the topic of the study nor a mechanism expected to be of importance for biochar effects in this humid region.In between cropping seasons, the land was tilled to 15–20 cm depth with a hand held mini-tractor, fertilized, weeded with a generic maize weed killer, and replanted.All plots were on flat terrain on the experimental farm Tamanbogo, belonging to the Indonesian Soil Research Institute.Selected soil properties are presented in Table S1.Five blocks were established in a completely randomized block design.A total of 30 experimental plots of 4 × 4 m size were thus established.One-meter spaces were kept between the plots.Both biochars were applied in single dosages of 0, 5 and 15 t ha−1 in each of the five blocks, prior to the first growth season only, by manual mixing into the 0–10 cm soil layer.The biochars were not enriched with organic nutrients as this did not fit with common agricultural practice in the area due to limited availability of manure.Each treatment in each of the five blocks was sampled for 5 seasons.Maize was planted at 20 cm × 75 cm spacing.Hand-weeding was carried out when required.Mineral fertilizer was applied three times per planting season: just before planting, in the early vegetative growth stage, and in the early generative growth stage.Insecticide application was also carried out before planting.No lime was applied as one of the purposes of the experiment was to investigate the pH effect of biochar amendment.Yield was normalized to grains dried overnight at 110 °C.Statistical testing of effects of treatments or additions was done by the statistical package “R”, version 3.4.4.Linear mixed-effects models were fitted using the R extension package lme4 to evaluate differences between biochar type, biochar dosage and season.Variation in yield between the different blocks was modeled by introducing random effects associated with each of the blocks.Likelihood ratio tests were used to simplify the fixed effects structure of the models.Model checking was based on visual inspection of residual and QQ plots.Differences between the management practices were assessed by means of pairwise comparisons using model-based approximate t-tests with adjustment for multiplicity.Variation between biochar types was assessed by comparing 5 t ha−1 Cacoa shell vs. 5 t ha−1 Rice husk per season and 15 t ha−1 Cacoa shell vs. 
15 t ha−1 Rice husk per season.Variation with biochar dosage was assessed by comparing 5 t ha−1 vs. 15 t ha−1 Cacoa shell per season and 5 t ha−1 vs. 15 t ha−1 Rice husk per season.Changes with season were assessed by comparisons of seasons for each of the biochar type and dose combinations separately.After each planting season, soils from all individual plots were sampled and stored at 4 °C.Per individual plot, five 100 g soil samples, taken from 0 to 10 cm depth with a small spade, were pooled into one 500 g mixed sample per plot.Thus, five replicate samples per treatment were obtained.The following parameters were measured for the first three planting seasons: CEC, pHH2O and pHKCl, exchangeable base cations in the CEC extracts and base saturation, exchangeable H+ and Al3+, available P, and elemental composition , all using standard methods as described in refs. and in the footnote of Table S1.During season four and five, due to funding limitations soil analyses were restricted to pHH2O, exchangeable K, and CEC.Differences in molar exchangeable Ca/Al ratios and pH were assessed as described above.In addition, 0 t ha−1 was included for comparisons of biochar type, biochar dosage and season.Linear regression was used for exploring relationships between grain yield and selected soil variables.The soil was a strongly acidic sandy loamy Typic Kanhapludult, with a low base saturation.Associated with the low soil pH, the soil had a relatively high exchangeable Al3+ content, low exchangeable Ca2+ and thus low Ca/Al molar ratio.The organic carbon content was low.Cocoa shell biochar exhibited a higher pH than rice husk biochar, as well as a much higher CEC.Importantly, the cacao shell biochar exhibited a much higher acid neutralizing capacity than the rice husk biochar, resulting in a much higher alkalinity.The cacao shell biochar had a lower ash content and a higher organic C content than the rice husk biochar.This could be due to the high silicate content, yet small base cation content of the rice husk biochar, resulting in a relatively high ash content, but low ANC of the charred material.Cacao shell biochar and rice husk biochar were both used in two dosages, under maize cropping for five seasons.Without biochar amendment, hardly any emergence of maize plants occurred and thus no maize yield was obtained.This is likely a result of the low soil pH and high levels of exchangeable Al.The smallest cacao shell biochar amendment alleviated the deleterious effects of Al3+ and improved maize emergence and resulted in grain yields of 1.3 and 2.9 t ha−1 in seasons 1 and 2, respectively.A dosage of 15 t ha−1 resulted in significantly higher grain yields than a dosage of 5 t ha−1 in all seasons except the first one.At a dosage of 15 t ha−1, rice husk biochar was significantly less effective than cacao shell biochar in all seasons except season 1, and for a dosage of 5 t ha−1 the same was observed for seasons 1, 2 and 3.The biochar trials were continued for five seasons.Variation of biochar effectiveness in the seasons following application revealed an interesting pattern.The maize yield obtained with the 15 t ha−1 cacao shell biochar peaked in season 2 at 4.3 t ha−1, continued to have a good effect in seasons 3 and 4; the production stimulating effect faded during season 5 at the 15 t ha−1 dosage.For the lower 5 t ha−1 dose of cacao shell biochar, a decline in maize yield was already seen from season 3 onwards.The less strong yield effect of 15 t ha−1 rice husk biochar amendment was only significant in 
season 1, and already faded from season 2 onwards.Results for maize stover biomass showed similar trends as for grain yield.Soil properties were measured after each planting season.The full soil data set can be found in Tables S8–S15.Soil pHH2O and base saturation increased significantly with the application of cacao shell biochar at both addition rates.In contrast, the addition of rice husk biochar had a less pronounced effect on soil pH, Ca/Al ratios and base saturation, which is probably the main explanation for the difference in crop yield effects between the two biochar types.Similar to the trends observed for maize yield, soil pH and base saturation gradually decreased with season in the cacao shell biochar treated plots.This indicates that the fading effect of biochar on crop yield may be related to soil acidity.In particular, the exchangeable molar Ca/Al ratios, analyzed in field samples for seasons 1–3, showed largely similar patterns as grain yields, especially with biochar type and dosage, and to a certain extent with season.That is, they were significantly higher for cacao shell biochar than for rice husk biochar.Also, Ca/Al ratios were significantly higher at the high dosage of 15 t ha−1 cacao shell biochar than for the lower dosage of 5 t ha−1 of the same biochar and higher for seasons 1 and 2 than for season 3.For seasons 4 and 5 we do not have Ca/Al ratios, but as observed previously, pH and Ca/Al ratios are significantly correlated, so here the measured pH values and their trends can be relied upon.The relation between pH and Ca/Al is related to the fact that base saturation is positively correlated with pH, whereas exchangeable Al is negatively correlated with pH.This has been demonstrated explicitly.The relation between crop yield and pH was not entirely straightforward across seasons, as e.g. pH decreased from season 1 to 2 for both cacao shell biochar dosages, whereas crop yields showed a strong increase from season 1 to 2.However, when comparing dosages, the pH effect of 5 t ha−1 cacao shell biochar started to fade after season 2, and the same was observed for crop yield.Similar observations were made for the 15 t ha−1 cacao shell biochar amendment, where the pH effect was significant for the first 4 seasons but no longer for season 5, when also the crop yield effect faded and was back on the level observed during season 1.In Fig. 3a the significant relationship between maize grain yield and Ca/Al ratios is shown.The zero maize grain yield points were included as these data were actual results: as mentioned above, without biochar amendment no grain yields were obtained.p < 0.001 indicates a highly significant relationship, but, as typical for a field study, there was a lot of scatter in the observations, reducing r2-values to 0.4–0.5 for crop yield vs. 
pH and Ca/Al.Similarly significant but slightly less strong relationships were observed between grain yield and the other acidity-related parameters: pH, BS, and exchangeable K.Also here the zero grain yield values were included, as these were actual measurements.The same relationship was not observed for available P, as P availability was not affected by pH in this low pH range between 3.5 and 5.We observed a much stronger effectiveness in increasing crop yield for the cacao shell biochar than for the rice husk biochar, even though both were made in the same manner.This observation could be explained by its higher pH and ANC.The cacao shell biochar can thus improve the acidity-related soil properties more effectively than the rice husk one.Recently, Gruba and Mulder showed that the exchangeable Al concentration in acid soils reaches maximum values at pHH2O ≈ 4.2, while declining with pH increase.Thus, the pH for the unamended soil was far below this threshold and thus explains the high Al availability.Soils treated with 5 t ha−1 cacao shell biochar still had pHH2O ≈ 4.2, still close to the Al release threshold.After addition of 15 t ha−1 cacao shell biochars, pH was 4.5–5.0, well above the pH where extensive Al dissolution occurs.Since Ca concentrations increase sharply from near-zero above pH 4.2, this implies that Ca/Al ratios increase strongly with increasing pH, associated with the application of cacao shell biochar."Our first hypothesis was that the biochars' agronomic effects could be explained by reduced soil acidity, and that the biochar with highest ANC would give the strongest yield effects.This hypothesis was largely supported by our data, both with regard to acidity alleviation explaining the biochar effects on crop yield, and with regard to the higher-ANC biochar having stronger effects than the low-ANC one.Although confirmed in the large picture, not all individual pH observations were in agreement with the hypothesis, e.g. 
relatively low pH in season 2 for both cacao shell biochar dosages whereas crop yields were surging, and a significant increase in pH from season 4 to 5 for the 5 t ha−1 cacao shell biochar trial.These exceptions in individual measurements are often encountered in field studies, and the differences between the two biochars, and the trends with biochar dosage, were fully in line with the acidity alleviation hypothesis.Our most significant findings with regard to time trends of biochar effects on crop yield are i) that the biochar effect was stronger in the second season than in the first one, and ii) that the effectiveness started to fade after three to five cropping seasons, partly because the acidity alleviation effect started to fade.Thus, our second hypothesis that biochar effectiveness on crop yield would decline over time, was supported by the observations.This could be due to continued nutrient leaching and rapid depletion of the alkalinity added via the biochar, and the effects varied with biochar type and dose.We hypothesize that the first observation of initial increased effectiveness over time may be explained by “aging” of the raw, non-enriched biochar in soil, where biochar improves soil structure over time, leading to improved soil aggregation as well as decreased soil density and root penetration resistance.Spectroscopic evidence has indicated that an organic coating is formed on the biochar surface over time, resulting in improved nutrient retention and creating a more optimal habitat for soil microorganisms.In addition, precipitation of the dissolved Al may be a slow process.Better crop yield effects in season 2 than immediately after application have been observed in a few studies before.Major et al. studied biochar effects in an acidic oxisol in the humid tropics of Colombia for 4 years, and did not find any strong effects in the first year, but maize yield increases of +28, +30 and +140% in the three following seasons.The authors attributed the greater crop yield primarily to increases in available Ca and Mg, but indeed the small but important increases in soil pHKCl and corresponding increases in molar available Ca/Al ratios probably also contributed to the observed biochar effects, similar to observations in the present study.Compared to the present study, where molar Ca/Al ratios around 0.2 resulted in almost no crop yield, the crop yields of Major et al. of >2 t ha−1 in the absence of biochar, were surprisingly high.Also Griffin et al., studying walnut shell biochar over four years in a field experiment, under dry conditions in CA, USA, on a silty clay loam with pH 6.7 and CEC of 22.3 cmolc kg−1, observed positive yield effects in the second year but not in the first one.The authors ruled out pH and nutrient retention effects, and attributed the yield effect to short-lived increases in available K through direct additions of these nutrients via the biochar), analogous to the present study and other observations of higher K in both soil solution and plant tissue after biochar addition.In addition, moisture retention effects could have explained the biochar effect, and the best effect being in the second year could be explained by gradual improvements in soil structure.Jones et al. 
investigated the 3-year effect of the amendment of 25 and 50 t ha−1 biochar on maize and grass yield as well as on soil parameters, in pH-neutral sandy clay loam in Wales, UK.In accordance with our study, biochar effects were stronger in year 2 than in year 1.The second important observation, the fading biochar effect over multiple seasons, can probably be attributed to leaching of the alkaline ashes in the humid tropical climate with high rainfall.Analogous to our study, Steiner et al. tested 15 different treatments over four planting seasons in an acidic soil in Brazil, and found that biochar doubled the grain yield in the presence of mineral fertilizers, but that the effect faded after multiple seasons.The authors hypothesized that the positive yield effect resulting from the biochar amendment could partly be caused by significantly lower exchangeable Al concentrations in biochar-amended soil.However, the Ca/Al molar ratios that can be calculated for their untreated soil were much larger than the ones observed presently, in accordance with their soil pH being above the limit of pH 4.2 where exchangeable Al contents are substantially increased.These results cast doubt on the hypothesis that reduced Al toxicity was the mechanism behind the biochar effect in the Brazilian study.A fading effect of the biochar over time after season 2 was also observed by Jones et al.: after 3 years in the field, the alkalinity associated with the biochar had been fully neutralized, similar to our observations, with the pH of the soil being back at 6.6, and the pH of the biochar itself having decreased from 8.8 to 6.6.However, the pH effect of the biochar probably was not of much importance in these near pH-neutral soils, and the authors hypothesized that the main effect of biochar was beneficial changes in the microbial community, stimulating fungal and bacterial growth as well as soil respiration.On the other hand, in extensive four-season trials in Thailand and the Philippines with rice husk biochar on a poor, dry, nonacidic soil, Haefele et al. observed increased yields of 16–35%, although without any clear trends with season, and hypothesized that the increase was a result of improvements in water retention and increased available K and P, not of pH. Yamato et al. investigated the effect of biochar produced from Acacia mangium on maize and peanut yield and soil properties in South Sumatra.They observed a significant increase in yield, as well as the density of the rooting system, soil pH, total N, available P2O5, cation exchange capacity, and observed a strong reduction in soil Al3+ concentration, analogous to the present study on the same island.A meta-analysis undertaken by Jeffery et al. 
has shown the significant positive effect of biochar application on crop productivity, with a grand mean increase of +20–25% for tropical soils. This increase was attributed to a liming effect and improved water holding capacity of the soil, along with improved crop nutrient availability when biochar is added to soil. Biederman and Harpole, in their meta-analysis, reported a statistically significant positive yield effect of biochar amendment of +20%, citing pH and soil quality increases as the main reasons for this. These conclusions have also been confirmed by a meta-regression analysis carried out by Crane-Droesch et al. They concluded that soil cation exchange capacity and organic carbon also were strong predictors of crop yield response. Despite the fact that the previous meta-analyses have shown that biochar amendment leads to a significant effect on crop yield, individual literature studies are still quite variable, showing inconsistency between laboratory and field tests and among field studies. For example, Bass et al. investigated the effect of biochar amendment on banana and papaya crop yield and soil properties in Australia. Although there was a positive effect of biochar on soil properties, where cation exchange capacity, K+, Ca2+, soil C content and water retention all increased, there was no effect, or a negative effect, on fruit yield. Further studies are thus necessary in order to fully understand the effect of biochar on soil properties and yield, despite the fact that increases in soil alkalinity and nutrient availability seem to be the most reasonable and statistically significant explanations. Comparing biochar amendment to conventional liming, the ANC of the cacao shell biochar was about 0.2 CaCO3-equivalents (see the short check below). In addition, the cacao shell biochar added 127 cmolc kg−1 exchangeable + soluble K, 37 cmolc kg−1 Ca and 32 cmolc kg−1 Mg to the soil. In comparison, dolomite would add much more of both Ca and Mg, but not K.
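A short arithmetic check of this lime equivalence, assuming the stated ANC of roughly 0.2 t CaCO3-equivalents per tonne of biochar and the 15 t ha−1 dosage used in the trial:

$$
15\ \mathrm{t\,ha^{-1}\ biochar} \times 0.2\ \mathrm{t\ CaCO_3\text{-eq}\ per\ t\ biochar} \approx 3\ \mathrm{t\,ha^{-1}\ CaCO_3\text{-equivalents},}
$$

which is the minimum calcite/dolomite requirement stated in the next sentence.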
From the ANC of the cacao shell biochar it can be calculated that minimally 3 t ha−1 calcite or dolomite would be needed for the same pH effect as 15 t ha−1 biochar.However, small-scale tests with various dolomite additions over 10 d in the same soil revealed that a higher dolomite dosage around 6 t ha−1 was needed for a pH increase to 4.5–5.0, the pH after 15 t ha−1 cacao shell biochar amendment.At a dolomite price of 250 to 500 US$ t−1, this is a major cost for the small-scale farmers in Lampung district.Biochar can be made for as little as 100 US$ t−1 by these smallholder farmers, especially when using clean, fast and free-of-charge flame curtain kilns, and provides the farmer with the added advantages of K addition and improvement of soil structure and microbiology, as well as the global advantages of carbon sequestration and nitrous oxide suppression.The main conclusion of our study is that the primary cause of increased crop production in an Ultisol of the humid tropics due to biochar addition was related to its acid neutralizing capacity.Thus, the role of biochar as a soil enhancer was mainly associated with its liming effect, causing a significant decline in toxic Al.Our hypothesis that this effect fades over time was supported, and thus multiple biochar amendments are necessary.In this case biochar would need to be applied approximately every third season, similar to conventional liming in the study area.Also, moderate additions of 5 t ha−1 biochar did not suffice for acidity alleviation, and high dosages of 15 t ha−1 were necessary.Such dosages are better amenable with intensive small-scale horticulture or kitchen gardening than more extensive maize farming.In current controlled field trials in the study area we are comparing biochar amendment to liming and ash amendment in multi-season trials, also testing the longevity of the various amendment effects, in order to come to the best farmer recommendation for biochar implementation. | Low fertility limits crop production on acidic soils dominating much of the humid tropics. Biochar may be used as a soil enhancer, but little consensus exists on its effect on crop yield. Here we use a controlled, replicated and long-term field study in Sumatra, Indonesia, to investigate the longevity and mechanism of the effects of two contrasting biochars (produced from rice husk and cacao shell, and applied at dosages of 5 and 15 t ha−1) on maize production in a highly acidic Ultisol (pHKCl 3.6). Compared to rice husk biochar, cacao shell biochar exhibited a higher pH (9.8 vs. 8.4), CEC (197 vs. 20 cmolc kg−1) and acid neutralizing capacity (217 vs. 45 cmolc kg−1) and thus had a greater liming potential. Crop yield effects of cacao shell biochar (15 t ha−1) were also much stronger than those of rice husk biochar, and could be related to more favorable Ca/Al ratios in response to cacao shell biochar (1.0 to 1.5) compared to rice husk biochar (0.3 to 0.6) and nonamended plots (0.15 to 0.6). The maize yield obtained with the cacao shell biochar peaked in season 2, continued to have a good effect in seasons 3–4, and faded in season 5. The yield effect of the rice husk biochar was less pronounced and already faded from season 2 onwards. Crop yields were correlated with the pH-related parameters Ca/Al ratio, base saturation and exchangeable K. The positive effects of cocoa shell biochar on crop yield in this Ultisol were at least in part related to alleviation of soil acidity. 
The fading effectiveness after multiple growth seasons, possibly due to leaching of the biochar-associated alkalinity, indicates that 15 t ha−1 of cocoa shell biochar needs to be applied approximately every third season in order to maintain positive effects on yield. |
31,492 | Modelling the concentration of chloroform in the air of a Norwegian swimming pool facility-A repeated measures study | Chlorine is the most used water disinfectant worldwide. In Norwegian pool facilities, chlorine is often used in combination with UV treatment. Proper water disinfection is necessary in order to prevent the growth of hazardous microorganisms, but disinfection of water with oxidizing biocides also leads to the formation of unwanted disinfection by-products. More than 600 DBPs have currently been identified in disinfected water. Even though only a small amount of water is ingested during swimming, dermal penetration and inhalation are considered the most important routes of exposure. Although there is disagreement, exposure to volatile chloramines is considered to be the main reason for the increased prevalence of respiratory conditions, such as voice loss, sore throat, phlegm and asthma, observed in pool workers and swimmers. As a result, the World Health Organization has suggested a provisional guideline value for chlorine species, expressed as NCl3, in the ambient air of swimming facilities of 0.5 mg/m3. Quantitatively, one of the most important groups of DBPs is the trihalomethanes, with chloroform, bromodichloromethane, dibromochloromethane, and bromoform being most common. The two THMs CHCl3 and CHCl2Br are, according to the International Agency for Research on Cancer, classified as group 2B, i.e., they are possibly carcinogenic to humans. The high volatility and dermal penetration potential of the four THMs suggest that both dermal penetration and inhalation are important pathways for exposure. In Norway, a declaration that the legal requirements for free and combined chlorine in swimming pool water are met must be made. However, unlike many other countries, no upper acceptable limits for the four THMs in pool water exist. A typical indoor swimming pool ventilation system in Norway consists of supply diffusors at floor level along the window facade and return grills in the ceiling or on one wall. The ratio between fresh air and recirculated air is controlled using set points for air temperature and air relative humidity. Traditionally, this ventilation strategy was chosen to prevent condensation on windows due to the cold climate in Norway and the subsequent large difference in temperature and enthalpy between indoor and outdoor air. However, stricter energy requirements now mandate the use of better-insulated windows, and condensation along the window facade is no longer considered to be of great importance. No legal requirements concerning air volume and air circulation in Norwegian swimming pool facilities exist. However, the Norwegian Industrial Technological Research Centre has proposed some guidelines, one of which is to change the air volume 4–7 times per hour in pool facilities in general, and 8–10 ACH in rooms with hot water pools. The suggested fresh air supply is 2.8 l/s per m2 of water surface, which is well below the 10 l/s per m2 of water surface proposed by the WHO. To reduce the evaporation rate from humid skin and the water surface, it is suggested that the air temperature be kept between 1 °C and 3 °C above the water temperature, with a maximum air temperature of 31 °C. Accordingly, the air velocity above the water's surface should be <0.15 m/s. In recent years, research has shown that poor air quality in indoor swimming pool facilities, caused by volatile DBPs off-gassing from pool water, results in an increased prevalence of irritative symptoms and
asthma among workers, swimmers, and users who visit swimming pools on a regular basis."Still, recommendations concerning ventilation focus on how to reduce water evaporation and energy consumption rather than on how to ensure proper air quality in the swimmers' breathing zone.The modelling of DBPs has been a focus in many different articles, and one of the most frequent technique used in their analyses has been multivariate regression.The aims of the present study are to,Document the distribution of the four THMs 0.05 m and 0.60 m above the water surface in various locations in the poolroom in the morning and afternoon, and,By the use of a linear mixed effects model, identify the most important determinants of exposure.Repeated measures design was chosen to study one pool facility located outside the city of Trondheim, Norway.This facility consists of seven swimming pools: one sports pool with a diving springboard and platforms, three therapy pools, one baby pool, one wave pool, one Jacuzzi, and two fountains.Samples were collected during morning and afternoon, once or twice per week between the 2nd of October and 6th of November 2017.The number of visitors to this facility per year is approximately 120,000."On sampling days, the pool facility was used mainly for school children's swimming lessons and for water aerobics for elderly people.The swimming pool water was disinfected using electrolysis of NaCl in combination with ultraviolet treatment during sampling.The water supply was from the municipal water works.The total ventilation rate, i.e., the sum of recirculated air and fresh air, was adjusted to deliver between 29,000 m3/h and 44,000 m3/h of air.The total air volume in the pool facility is approximately 12,000 m3.Air samples were collected on six days using a test stand with two heights: 0.05 m and 0.6 m above the water surface.In the morning, samples were collected from location 1, 2, 3 and 4, and, in the afternoon, samples were collected from location 1, 2, 5, and 6, see Table 1.In total, 16 samples were collected each day over time and space to represent the air quality.The samples collected from locations 1–4 were collected simultaneously from 0.05 m and 0.6 m above the water surface and by the two long sides of their respective pools, where locations 2 and 3 were on each long side of the sports pool, and locations 1 and 4 were on each long side of the therapy pool.The samples collected from locations 5 and 6 were collected only 0.6 m above the floor and 1.5 m from the pool edges bordering each side of the centre of the pool facility.The results are based on 93 out of 96 collected air samples.Three samples were rejected due to tube leakage during analysis.Information on air temperature and RH was obtained using one EasyLog USB.This logger was attached to the test stand 0.4 m above the floor or water surface and logged information about absolute air temperature and RH at intervals of 120 s. 
Information about free and combined chlorine, pH, and water temperature was received from the supervisory control and data acquisition system located in the pool facility. This online logging system collects information on water quality every second minute during the day. Information on fresh air supply, recirculated air, extracted air, and total air supply was collected from the air handling unit, which records the different damper positions and how much air is being extracted from and supplied to the pool facility every minute during the day. Sampling, analysis, and quality assurance were in accordance with the published US EPA Method TO-17. The method used for active air sampling was to collect ambient air onto automatic thermal desorption tubes of stainless steel containing 0.20 g of Tenax TA 35/60. At 20 °C, CHCl3 and CHCl2Br have reported breakthrough volumes of 3.8 l and 3.4 l per 200 mg Tenax TA, respectively. The breakthrough volume is reduced by a factor of 2 for each 10 °C rise in temperature and is also affected by the pump flow. To find the safe sampling volume for the THMs in the air, different pump flows were tested for 20 min at 0.05 m above the water surface. During these tests, the test tubes were coupled in series with an identical back-up tube to analyse whether >5% of the THMs could be identified on the back-up tube. In the EPA's TO-17, it is recommended that the pump flow be above 10 ml/min in order to minimize errors due to ingress of volatile organic compounds via diffusion. In the present study, two ACTI-VOC low-flow pumps were used, adjusted to deliver a flow rate of 40 ml/min for 20 min. This pump flow rate provided a satisfactory result and was chosen to keep the uncertainty related to the flow calibration as low as possible. The pumps were calibrated in situ before and after each sample. Determination of THMs in the air was performed with a Unity thermal desorber coupled with an Agilent Technologies 5975T LMT-GC/MSD. Thermal desorption was carried out for 10 min at 284 °C with a flow rate of 30 ml/min, and the collected THMs were sent to a cold trap packed with Tenax TA. Secondary desorption was then carried out with a carrier gas flow rate of 20 ml/min from the trap. The separation was performed on a capillary column. The oven temperature was adjusted with a temperature program to go from 35 °C to 90 °C in 5 °C/min steps and maintain a post-run temperature of 230 °C. Selected ion monitoring mode was used for identification and quantification of the collected THMs. Both external and internal calibration methods were utilized. For the internal calibration, the sorbent tubes were spiked with 250 ng 8260 Internal Standard Mix 2 containing chlorobenzene-d5, 1,4-dichlorobenzene-d4 and fluorobenzene in methanol. For external calibration, a five-point calibration curve, ranging from 0.5 ng to 500 ng, was created for each of the four THMs. A THM calibration mix in methanol was used for this purpose. All duplicate measures and volume pairs of tubes were within a precision of 5%. Once per week, one test tube, 0.05 m above the water surface, was coupled in series with an identical back-up tube to verify that no breakthrough occurred. Identification and quantification of THMs were performed in selected ion monitoring mode in the laboratory of the division of Health, Safety and Environment at the Norwegian University of Science and Technology. The water activity, air and water temperature, number of users, RH, pH, free and combined chlorine, supplied and extracted air volume, and amount of fresh air and recirculated air
were recorded during sampling. Statistical analyses were performed using the Statistical Package for Social Sciences 25.00. One-way analysis of variance was used to study whether the measured variables varied significantly between the different days of sampling. Since CHCl3 was the only component detected in all the collected samples, CHCl3 was the only component included in the modelling of the air concentration. The concentration of CHCl3 was positively skewed and was ln-transformed prior to statistical analysis. To account for the correlation between the repeated measures, the concentration of CHCl3 was modelled using a linear mixed effects model. Judging from the likelihood ratio test, the covariance structure of the first-order autoregressive (AR(1)) model for the repeated samples produced the best fit for the data. Determinants were treated as fixed effects and kept in the model if the p-value was <0.05 and if they could justify the more complex model, as judged by the likelihood ratio test. The interest of this study was not in the effects present only at individual sampling locations but rather in the effects present within the poolroom. Sampling locations were therefore treated as a subject, including the random subject-specific intercept of location, in the model. To estimate the variance components, the method of restricted maximum likelihood was used since this method is considered to be more precise, i.e., it reduces the standard error, for mixed effects modelling compared to maximum likelihood. The contribution of the fixed effects was estimated by comparing the variance components of the final model to the variance components estimated in the initial model, in which only the subject-specific intercept was included. The final model included ACHfreshair, height above the water surface, day of the week, concentration of combined chlorine and RH. All water quality parameters obtained in this study were in accordance with the Norwegian regulations. In Table 1, the quantified air quality parameters are presented as mean ± standard deviation, along with their sampling locations. In general, CHBr3 was not detected in any of the collected air samples, and CHClBr2 was either not detected or below the limit of quantification. CHCl2Br was quantified in 53 of the 93 collected air samples. In these 53 samples, CHCl2Br accounts for 0.05%–2.6% of the tTHM, while the rest of the quantified tTHM was CHCl3. All variables, except the number of bathers and the air and water temperatures, differed significantly according to day of sampling. ACH represents how many times the air is exchanged per hour in the poolroom, regardless of whether the air consists of fresh air, recycled air, or a mixture of the two. ACHfreshair represents how many times per hour the air in the poolroom is exchanged with outside air. This value was estimated based on the valve position opening recorded in the ventilation log and information from the ventilation supplier, who were able to read off the exact fresh air supply from their logging system. ACH and ACHfreshair for the different days of sampling are listed in Table 2, along with information on the mean CHCl3 and CHCl2Br concentrations measured in the morning and afternoon. As shown in Table 2, the ACH was always lower than the Norwegian recommended ACH of 4-7. During night-mode ventilation, from 8 PM to 6 AM, between 2.5 and 2.9 ACH was supplied to the swimming facility, and, of this, between 0% and 33% was fresh air. On the first day of sampling, there was an issue with the fresh air dampers, and almost no fresh air was
supplied to the pool facility during the morning. Day-mode ventilation was switched on at 6 AM, and the supplied air volume increased slowly from 6 AM to 8 PM in the evening. The linear mixed effects model for the concentration determinants of CHCl3 is presented in Table 3. Before any of the fixed variables were accounted for, i.e., when only the subject-specific intercept was included in the model, the estimated between- and within-location variabilities were σb2 = 0.015 and σw2 = 0.12, respectively. The intraclass correlation was also considerable, with an AR rho value of 0.30, indicating that repeated scores within each location were dependent on one another. After the determinants improving the fit of the model were adjusted for, σb2 decreased to 0.008 and σw2 decreased to 0.079; hence, it is clear that σw2 has greater weight than σb2. Approximately 47% and 34% of the observed between- and within-location variability, respectively, can be attributed to the determinants identified in Table 3. This study describes the variation in repeated samples of CHCl3 obtained from different stationary sampling locations within the same poolroom. In some previous studies, results have been based on a limited number of air samples, often collected from only one stationary sample location above the swimming pool. However, the assumption that one sampling location can represent the air quality for the entire facility may be incorrect. In a recently published study of one indoor swimming pool in Canada, results showed that some zones have appropriate air renewal, while others are poorly ventilated or even over-ventilated. It is also known that parameters such as water temperature, water turbulence, water surface, RH and air temperature can impact air quality. As this pool facility consists of swimming pools with different surface areas, water temperatures and activity levels, it is reasonable to assume that local air contamination will vary, despite ventilation system efficiency. During the morning, air samples were collected from locations 1 and 4 by the therapy pool. As shown in Table 1, the average concentration of CHCl3, RH, and air temperature were always slightly higher at location 4 than at location 1, although not significantly. As shown in Table 1, there was a greater difference between the air quality parameters measured at locations 2 and 3 by the sports pool. However, when considering heights, only 37% higher values of CHCl3 were obtained 0.6 m above the water surface at location 2 compared to location 3, and the variability shows no significant difference between the two locations at this height. When we looked at samples collected 0.05 m above the water surface, 88% higher values were obtained at location 2 compared to location 3, a statistically significant result. This finding suggests that there is a dead zone by location 2, where the mean age of air is greater than the mean age of the air observed at location 3. Although the evaporation mass flow increases with decreasing RH, the concentration of CHCl3 was found to increase with increasing RH. RH was also found to be one of the most important predictors of the air contamination level of CHCl3. Another important predictor identified for the concentration level of CHCl3 was height above the water surface. On average, between 8% and 57% higher concentrations of CHCl3 were obtained at 0.05 m than at 0.60 m.
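Returning to the variance components quoted above: the variance-explained percentages follow directly from the reported values, and an R analogue of the repeated-measures model (the authors used SPSS; nlme is one package that allows a random location intercept combined with an AR(1) residual correlation and REML estimation) could look roughly as follows. The data frame 'pool' and all variable names are assumptions, not the authors' code.

```r
library(nlme)

# Share of the between- and within-location variance explained by the fixed effects,
# computed from the variance components quoted in the text:
(0.015 - 0.008) / 0.015   # ~0.47 -> about 47 % of the between-location variability
(0.120 - 0.079) / 0.120   # ~0.34 -> about 34 % of the within-location variability
(0.135 - 0.087) / 0.135   # ~0.36 -> about 35.5 % of the total variability (cf. the discussion)

# Sketch of the model: ln(CHCl3, assumed in ug/m3) with a random intercept per sampling
# location and an AR(1) correlation between consecutive samples at the same location.
fit <- lme(log(chcl3) ~ ach_fresh + height + weekday + combined_cl + rh,
           random = ~ 1 | location,
           correlation = corAR1(form = ~ sample_order | location),
           method = "REML", data = pool)
summary(fit)       # fixed-effect estimates
VarCorr(fit)       # variance components, for comparison with an intercept-only model
```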
Higher concentrations have also been measured closer to the water surface in previous studies in which samples at different heights have been collected.However, in a French study, in which the air concentrations of tTHM were measured from two different heights above the water surface, the authors did not find any statistically significant difference between the chosen sampling locations.This might be explained by the difference in chosen heights and possibly a different ventilation strategy.Even though air velocity was not measured in the present study, the ventilation strategy is designed to deliver low air velocities above the water surface to reduce the evaporation rate from the water.This might result in a layer above the water surface where the air is not changed as often as the air in the rest of the poolroom.To collect representative information about the exposure among the swimmers, it is therefore essential to collect air samples as close to the water surface as possible.Limited information about the importance of proper ventilation in preventing the accumulation of DBPs above the water surface exists.In a previous study, it was found that the ventilation rate was strongly associated with the measured level of the volatile NCl3 in the air.The authors estimated that >2 ACHfreshair was necessary in order to keep the level of NCl3 below the French limit value of 0.3 mg/m3.In our study, the ACH in the pool facility was below the Norwegian recommendations of 4–7 ACH per hour, but this variable was not found to be an important predictor variable for the air concentration of CHCl3."ACHfreshair, however, was estimated to be an important determinant and a minimum requirement for ACHfreshair is considered to be necessary in order to ensure proper air quality in the swimmers' breathing zone.No upper acceptable contamination limit for tTHM in the air of indoor swimming pool facilities exists in Norway.The German Federal Environmental Agency recommends that the concentration of CHCl3 in a swimming pool facility be ≤200 μg/m3 air.In our study, 33% of the air samples exceeded this value, and, of these, 75% were observed 0.05 m above the water surface.If we are exposed 0.05 m above the water surface together with five other bathers on a Monday, with a combined chlorine concentration in the water of 0.24 mg/l and an RH of 58%, we need an ACHfreshair of approximately 3.1 to keep the concentration of CHCl3 below 200 μg/m3.The filters and dehumidification unit in the ventilation system manage, to some extent, to remove particles and keep the humidity and the air temperature in the recirculated air under control, and the variability observed in these variables was low compared to the variability observed in CHCl3.Gases off-gassing from the pool water will pass through the filters and therefore may be recirculated back into the room with the recirculated air.Considering the determinants of concentration identified in Table 3, the supplied air should be balanced with respect to the water quality as well as the bather load, and not just the RH and air temperature, as it is today."The variations obtained within the pool room highlight the need for a new ventilation strategy, as supplied ventilation air should provide proper air quality in the users' breathing zone and not along the window facade.For future studies, the use of absorbent filters in the air handling unit should also be tested to see if they reduce the gas concentration in the recirculated air sufficiently.One of the main advantages of using 
a linear mixed effects model is the ability to account for the correlation between the repeated measures using covariance structures.The determinants identified in Table 3 explained about 35.5% of the total variability observed in CHCl3, and these determinants should also be prioritized if hazard control is considered necessary.When all determinants improving the model were accounted for, the correlation between the repeated measures was estimated to be 0.69 using AR.Therefore, the observations are highly dependent and, in order to enhance the precision in the estimates of exposure, handling this dependence is important in terms of preventing biased estimates of the point estimate and confidence interval."Another advantage is the model's ability to adjust for factors that might unfold during the experiment, such as the free and combined chlorine and fresh air supply.Being able to make adjustments allows us to investigate in naturalistic settings and not just under controlled experimental conditions.Adjusting for variables that might influence the variable of interest is important for the credibility of the study and for estimating the influence from different effects.The concentration of CHCl3, RH, and air temperature vary within the pool facility and around the same swimming pool, and the within-location variability suggest that repeated samples over time are necessary in order to understand the long-term mean concentration.The chosen ventilation strategy does not ensure the same air exchange for all locations in this pool facility, and, based on the identified predictor variables, hazard control should focus on increasing the air renewal of the layer above the water surface.ACH did not explain the variability in the observed concentration of CHCl3; however, ACHfreshair did.Based on the identified determinants of contamination, the supplied air should be balanced with respect to bather load and water quality, and not just RH and air temperature, as it is today.The authors have no competing interests to declare. | Certain volatile disinfection by-products (DBPs) off-gassing from pool water can cause eye and skin irritations, respiratory problems, and even cancer. No guidelines or recommendations concerning DBPs in the air exist in Norway. Traditionally, ventilation strategies in indoor swimming pools are based on reducing condensation on the windows rather than ensuring proper air quality in the users’ breathing zone. A total of 93 air samples of airborne concentrations of trihalomethanes (THMs) were collected via stationary sampling. We investigated the distribution of total THM (tTHM) 0.05 m and 0.60 m above the water surface at six different locations in the poolroom and the covariation between the water and air quality parameters. Based on a linear mixed effects model, the most important determinants in terms of predicting the air concentration of CHCl 3 were height above water surface, air changes of fresh air per hour, concentration of combined chlorine in the water, relative humidity (RH) and day of the week. Approximately 36% of the total variability could be attributed to these variables; hence, to reduce the average exposure in the poolroom, hazard control should focus on these variables. Based on the identified predictor variables, the supplied air should be controlled based on water quality in addition to the traditional control censors for RH and air temperature used in the ventilation system of Norwegian swimming facilities. |
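Continuing the hypothetical sketch given earlier, the kind of scenario calculation quoted in the discussion (the fresh-air exchange rate needed to keep predicted CHCl3 below 200 μg/m3 at 0.05 m on a Monday, with 0.24 mg/l combined chlorine and 58% RH) could be set up as follows. All variable names and factor levels are carried over from that hypothetical sketch, and the coefficients live inside the fitted object, so nothing here reproduces the paper's actual estimates.

```r
# Hypothetical use of the fitted model 'fit' from the earlier sketch.
pred_chcl3 <- function(ach) {
  nd <- data.frame(height = "0.05 m", weekday = "Monday",
                   combined_cl = 0.24, rh = 58, ach_fresh = ach)
  exp(predict(fit, newdata = nd, level = 0))   # back-transform from the ln scale
}

# Fresh-air ACH at which the predicted concentration equals 200 ug/m3;
# the paper reports roughly 3.1 for this scenario.
uniroot(function(a) pred_chcl3(a) - 200, interval = c(0.5, 10))$root
```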
31,493 | Categorizing global MPAs: A cluster analysis approach | Marine Protected Areas are clearly defined areas of the ocean and coastal environments that are governed or managed with the distinct purpose of, “the long-term conservation of nature with associated ecosystem services and cultural values” .Under this definition, MPAs help conserve key areas for marine biodiversity, aid in the recovery of degraded areas, and also help increase the resilience of some ecosystems to the impacts of climate change .These benefits are often achieved by restriction of potentially harmful activities, like fishing, within the MPA, and they give MPAs the potential to be highly effective tools for marine conservation around the world."Global commitments have been made to protect 10% of the ocean by the year 2020, including many other regional or national level protection targets.MPA proposals, designations, and implementations have accelerated over the last ten years , with conservation databases reporting nearly 15,000 MPAs.With such a high number of MPAs in many regions of the world, the diversity of MPA characteristics is not surprising.For example, the internationally managed Ross Sea MPA that covers 1.55 million km2 in the remote Southern Ocean has features that are quite distinct from those of a small indigenous-governed MPA in a tropical developing country such as Naru Reef in the Solomon Islands .There are also cases where some MPAs are distinguished by a key policy or management characteristic.Examples of subcategories of MPAs include “no-take” MPAs which prohibit all forms of capture fishing within their boundaries, or “Large Scale Marine Protected Areas” which are defined as MPAs exceeding a given area-based threshold .In this paper, rather than grouping MPAs by a single variable, we developed a more comprehensive perspective of MPA characteristics and diversity by considering multiple attributes simultaneously.Other studies have conducted multivariate analyses for analyzing MPA characteristics around the world with relation to MPA management and performance.Two notable recent works include Edgar et al. and Gill et al. 
.However, these studies investigated the characteristics that most influence ecological performance and impacts of MPAs, whereas here we investigated how MPAs fall into representative groups or categories based on similar key attributes.Our research also differs from many past analyses in that we used a comprehensive global database rather than a relatively small sample of MPAs.Our primary aim in conducting this analysis was to deliver valuable insight into the present status or state of affairs of MPAs around the world through the lenses of our chosen variables.These insights are highly relevant in light of high rates of MPA proposals, designations, implementations, and expansions in recent years.Investigating the commonalities of MPAs that cluster together may also serve as a useful platform for investigating the potential status, strengths, and challenges for MPA management within each cluster.As a secondary outcome, such a study may also have practical relevance for MPA research.Many highly cited studies pertaining to MPA management and governance have been based on case study focused research .MPA diversity is especially consequential for case study research when findings and outputs are intended to have broad scale relevance and global implications.Case study based research can only truly be scaled up to a global level if the studies selected are representative of the diversity of MPAs around the world.Therefore, our process and findings will provide objective guidance towards identifying representative groups of MPAs from which case studies may be selected.Looking across representative groups can also help investigate the effect of certain features on performance metrics, or control for influential variables that may affect outcomes.Thus, in addition to applying this approach to obtain a global view of MPA characteristics, diversity, and implications for management, we also describe its use as a replicable, systematic method of identifying representative groups to help guide future MPA research.We classified MPAs based on the results of a cluster analysis using publicly available, globally tracked MPA data.Our methodological approach builds upon a cluster analysis conducted for urban parks , which we found analogous to MPAs as another form of spatial planning and management for uses other than industrial or exploitative activities.Ibes used a two-step clustering method, combining Principal Components Analysis and a cluster analysis.This approach is considered statistically rigorous .We conducted a PCA first to help verify the contributions of our selected variables to variance within the chosen data set.We then performed a k-means analysis as a partitional clustering method using Euclidean Distance to establish clusters among the finalized list of explanatory variables.The PCA and k-means analyses were performed in R using the packages ‘FactoExtra’ and ‘FactoMiner’ .Following cluster assignment, clusters were verified and analyzed with ANOVA, pairwise t-tests, and descriptive statistics.The analysis was performed using R version 3.5.1 – “Feather Spray”.We selected the following initial set of variables based on the combination of analytical practicality, tractability, global availability, and relevance to MPA policy and management:Proportional No-Take Area,Human Development Index,Edgar et al., 2014 demonstrated that highly effective MPAs are often large, isolated, old, well-enforced, and have no-take status .Our analysis tracked “old”, “large”, and “no-take” MPAs by inclusion of 
area, age, and proportional no-take area as variables.Area and no-take status have also been demonstrated to influence MPA management costs .Gill et al., 2017 also identified sufficient staff capacity and operating budgets as especially important indicators of MPA performance .Some of the influential variables in these studies, among factors in other literature deemed relevant to MPA management and performance , were not included in our analysis, including enforcement, isolation, and governance type.Variables were excluded due to limited data availability and restrictions of combining categorical and numeric data in k-means analyses, among other reasons.We also considered that patterns may emerge based on differentiating habitats and environments.For example, MPAs in cooler polar seas may have different characteristics than those in tropical environments, such that tropical MPAs may generally be more attractive for industries like tourism.Tropical regions also typically contain higher biodiversity according to a latitudinal diversity gradient hypothesis .Latitude was thereby selected as a proxy to broadly represent the range of environments and their associated characteristics.Human Development Index has been used in many studies on MPAs and MPA management .HDI permits ranking of the level of overall societal development of a country based on a collection of variables including income, standard of living, education, and health.HDI has also been used as an informative indicator of economic development, and the strength of government frameworks including legal and judicial systems .GDP, while widely available, is solely focused on economic production and output rather than measuring broader societal well-being."Past research has also identified a strong positive correlation between HDI and Yale's Environmental Performance Index, a ranking index comprised of 24 indicators that gauges national progress towards established environmental policy goals .HDI may therefore serve as a useful indicator of environmental progress of a given country including for MPAs, as well as the potential ability to further progress on MPAs and marine conservation goals."One recent MPA management study identified a weak correlation between HDI and the authors' chosen management indicators .However, other studies effectively incorporated HDI into MPA research to contextualize MPA management and governance scenarios .The Atlas of Marine Protection uses the marine portion of the WDPA database, but independently validates the data and includes some additional useful metrics.Experts have cited MPAtlas specifically for its accuracy and wide acceptance among MPA databases .MPAtlas provided shapefiles, from which the attribute table was exported to Excel and used in our analysis.MPA Area was taken directly from MPAtlas.Latitude, Proportional No-take Area, and Age were also gathered via the MPAtlas dataset, but required varying degrees of manipulation.MPAtlas also segmented some multi-use MPAs based on their separate management zones, for example the Great Barrier Reef was included as seven distinct MPAs according to its seven different management areas that have unique goals and policies.Recent expansions were also sometimes included as distinct MPAs, such as the 2016 expansion of Papahanaumokuakea.For consistency, we considered these distinctions as individual MPAs within our analysis as well.We also limited the analysis to locations that, per the verification and validation process of MPAtlas, were considered true 
MPAs.This procedure resulted in the exclusion of areas that protect only a single species group, such as shark sanctuaries, which, while a form of spatial protection significant to marine conservation, are often distinguished from MPAs in the literature .MPAtlas lists intention to protect the entire ecosystem as an important component of the definition of MPAs.Other types of spatial protection that MPAtlas determined were not truly MPAs included marine mammal sanctuaries, bottom trawl closures, and general fishery management areas, among others.When these were removed, our data set included 10,825 “true” MPAs.HDI indices and calculation methodology have occasionally been revised, especially in 2010 and 2014 .However, HDI reports often ignore many Small Island Developing States with large Exclusive Economic Zones, including autonomous governments that do not officially have UN representation.Using the most recent HDI reports would therefore remove globally significant MPAs including the Coral Sea in New Caledonia, the St. Helena Marine Protected area, and the Cayman Islands.A separate 2009 report expanded the 2008 HDI calculation methodology to include many typically unreported SIDS and other countries not typically included in HDI reports, and we used those HDI metrics in our study.Our selected variables were defined and calculated or modified as follows:Age – Status Year subtracted from 2018.Area – Total coverage of the MPA, taken from Calculated Marine Area.Proportional No-Take Area – No-Take Area divided by Calculated Marine Area, multiplied by 100.MPAs with non-reported No-Take Area were removed from the analysis.Latitude – Centroids were calculated in ArcGIS via shapefiles provided in the MPAtlas dataset.The y axis coordinates of the centroids defined latitude.Human Development Index – Taken from a 2009 report that expanded HDI to many countries not typically accounted for under the official HDI reports, but are highly relevant to marine conservation.All variables were standardized and assigned z-scores relative to their respective variance to remove influences of differing scales across variables.The z-scores were used for the PCA and k-means analysis for assigning clusters.After cleaning the data and removing MPAs with incomplete data, our final data set with which we conducted the analysis included a total of 2938 MPAs.Lack of data on the area in No-take zones and age of the MPA were primarily responsible for the decrease in sample size.The PCA and k-means process identified a preferred analysis using four variables and 7 clusters.The 7 clusters contained a total of 2938 MPAs, and were converted from numeric designations to alphabetical from A to G, from highest to lowest number of MPAs within the cluster.For example, cluster A contains nearly half of the MPAs in the data set, while Cluster G has the fewest.A series of ANOVAs identified significant differences across cluster means for all variables, with pairwise means comparison t-tests revealing statistically significant differences between individual cluster pairings, suggesting that the clusters represent distinct groups.Among cluster and variable results, some patterns were notably distinct.For example, the average area of MPAs in Clusters F and G were much larger than that of other clusters with average areas of 345,282 km2 and 1,142,051 km2 respectively, compared to a sample-wide average of 4420 km2.The other clusters contained a wide range of MPA sizes from 1 to nearly 100,000 km2 or more, though Clusters B and E were 
notably smaller on average than others, at 191 and 186 km2 respectively.However, we also tracked the proportional distribution of MPAs per cluster across different size ranges.The majority of MPAs in Clusters A and B are less than 1 km2.Clusters F and G exclusively contain MPAs greater than 100,000 km2.The majority of MPAs in the remaining clusters are less than 10 km2 and 100 km2.MPAs in Clusters B and E were notably older than the others, with average ages of 35 and 95 years respectively.In contrast, MPAs in Clusters F and G had an average age of only 4 and 2 years, with a maximum of 12 and 6 years.The other clusters had similar MPA age distributions, with averages ranging from ~13 to 20 years.Cluster C had a distinctly high proportion of MPAs with high no-take area with a minimum of 76% no take and an average of over 99% no-take.Some other clusters contained fully no-take MPAs, but all averaged at 50% or less no-take.MPAs in Clusters A and B averaged less than 1% no-take.Based on HDI, Clusters A, B, and E were exclusively located among more developed countries with an average HDI of 0.952–0.961, and minimum of 0.811 in Cluster E. Cluster D was centered around developing countries with an average of 0.711 and maximum of 0.827.The statistically significant and verified clusters from the k-means analysis lead us to describe MPAs on a global scale in seven general groups.With thousands of MPAs around the world, these classifications are not intended to account for all relevant features and characteristics.But they can function as lenses through which to assess the current state of affairs of MPAs around the world via factors relevant to management and performance.Each cluster had at least one major defining characteristic within our results that distinguished it from its peers, and sometimes had less prominent secondary characteristics that deviated from sample wide averages or otherwise merited consideration.These characteristics are used to define the nature of MPAs within each cluster and potential consequences for management."We have also selected sample MPAs from each cluster within our dataset that are characteristic of their respective cluster's key defining features.Cluster A is the largest cluster by number of MPAs, which at n = 1260 amounts to 43% of the MPAs in the final data set.According to our results, MPAs in this cluster ranged from 0 to nearly 150,000 km2 in area, but more than half of the MPAs in the cluster were less than 1 km2.MPAs in Cluster A ranged from 0 to 25 years old.Cluster A contained some partial no-take areas up to ~50% of total area, but the vast majority of MPAs in this cluster did not report any no-take area, which averaged less than 1% across the cluster.HDI had a relatively narrow range within Cluster A, centered on more developed countries which can be observed visually with most of Cluster A dispersed around North America, Western Europe, and Australia.The average HDI score for the cluster at 0.951 was similar to that of countries such as Spain and the United States.The lower range of development in this cluster includes more developed countries in Latin America and lesser developed countries in Europe.Based on HDI, we refer to Cluster A as MPAs in highly developed countries.From a management perspective, MPAs in Cluster A may be located in countries with better government infrastructure and greater institutional capacity.MPAs in this cluster may also have better access to financial resources to support management activities than some other groups of 
MPAs due largely to the wealth of the specific countries as indicated by HDI.Prior research also suggests that MPAs in more developed countries receive a greater proportion of their financial support from the national government .Education is another contributor to HDI, indicated by variables such as literacy and school enrollment rates, may also suggest that MPAs in this group are located in countries with a more educated population and/or better education infrastructure.Educating the populace on the importance of the marine environment and MPAs as a method of public outreach is often an important indicator of success and therefore a typical focus of MPA administration .Prior research has also found that level of education can directly influence the amount that people are willing to pay to support marine conservation efforts, even when controlling for income .These inferences on the relation between HDI and MPA management do not mean that MPAs in this cluster are necessarily better managed and enforced.For example many MPAs in Mexico are reported to be ineffectively managed .Rather, MPAs within Cluster A may be particularly well positioned for potential success.To the extent that HDI is correlated with the Environmental Performance Index , governments in countries with high HDIs may also more frequently achieve their respective environmental goals and policies including national targets for ocean protection.We discuss Cluster B and E simultaneously due to their similar characteristics and distinction based on their age compared with other clusters.According to our results, both consisted of older MPAs than other clusters, with averages of 35 and 95 years respectively.Cluster B MPAs ranged from 26 to 65 years old, whereas Cluster E encapsulated all MPAs older than 65.These age ranges influence our decision to refer to Clusters B and E as “Middle-aged” and “Senior” MPAs.MPAs in Clusters B and E were generally located in more developed countries and returned a mean, standard deviation, and min/max range for HDI closely resembling Cluster A."While some exceptionally large MPAs existed within clusters B and E, such as in Greenland, both of these clusters had similarly small average areas, which at ~190 km2 is nearly an order of magnitude smaller than Cluster A's average.In all, they contributed to only ~1% of the total MPA coverage despite making up ~31% of MPAs in our sample.But like Cluster A, the majority of MPAs in Cluster B are less than 1 km2, while Cluster E MPAs were actually more evenly distributed from 0 to 1000 km2.Therefore, we attribute the smaller average size of these clusters not to “smaller” MPAs, but rather a lack of the larger, more expansive MPAs that are more frequent in all other clusters.Prior research has suggested that MPAs were historically small extensions of terrestrial PAs in coastal regions, designed to protect an adjacent local feature like an individual bay .That approach contrasts modern efforts to specifically protect marine environments, including under the UN SDGs, and protecting larger swaths of the ocean that encompass entire ecosystems or ocean regions.The larger MPAs covering thousands of km2 or more, which are more prevalent in the other younger clusters, are more likely to provide that type of protection.Our findings for Clusters B and E thereby provide some objective insight and support of the claim that early MPAs, as extensions of or adjacent to terrestrial PAs, seldom protected larger areas of ocean that are the emphasis of current marine 
conservation efforts.Clusters B and E also suggest that MPAs, in their modern legal form, were largely exclusive to more developed nations until recent decades.Though some cultures in developing countries conducted spatial closures long before any of these developed country MPAs existed, they have not been registered and are therefore not included in global databases on which we based our analysis.Should consistent data become available from these MPAs, they could be included in future cluster analyses using our methods.Stakeholder or communal participation and cooperation is often heralded as key to effective management , and MPAs which have been established for several decades or more may have further integrated within coastal communities and culture than their younger generational counterparts.This consideration is particularly relevant for Cluster E, which contains MPAs that have been in existence for multiple generations and predate all but the very eldest of local community members.Environmental performance is also known to increase with age .Therefore, Cluster B and E MPAs may receive the benefits of having more time to demonstrate environmental benefits, in addition to being located primarily in more developed countries and having political and societal longevity.These features would all be beneficial towards effective management.Cluster C is defined by a high proportion of no-take area and contained most MPAs with full no-take coverage.And while Cluster C included MPAs with as little as 75% no-take in our results, the mean of 99.7% no-take with standard deviation of 2.29% was consistent with a similar observation from Clusters A, B, and E that partial no-take MPAs are rare and that MPAs are typically either fully no-take or did not have any no-take area at all.Though this observation may have partially been a result of the way MPAtlas segments multi-use MPAs as distinct data points.But consistent with prior published results, we still observed fully no-take MPAs to be the minority of MPAs around the world , and Cluster C represented only 17% of examples in our data set.Other clusters did contain some fully no-take MPAs, but these were few and only if one of the other of our 4 variables were especially prominent.Previous studies have demonstrated that no-take MPAs are substantially more expensive to manage than MPAs that do not fully protect from fishing when controlling for other factors that affect the cost of operations .As a result, management activities for Cluster C are likely to require more financial resources to enforce the more restrictive nature of these MPAs.No-take MPAs may also face greater political opposition from sectors restricted by no-take status.However, no-take status has been identified as a key feature for achieving conservation goals , and all MPAs in this category have this as a strong attribute towards effective management.Therefore, if these challenges can be surmounted, then MPAs in this category may be particularly well positioned for effective management and subsequent performance.Cluster D was most distinguished in the results by having low HDI values among its assigned MPAs, which indicated an association with developing countries.American Samoa, Ecuador, and Colombia were the most developed countries to contain a Cluster D MPA, with HDIs of 0.827, 0.816, and 0.812 respectively."Examples of countries within Cluster D closer to the cluster's average HDI of 0.711 included Indonesia, Egypt, and Tuvalu.Visually, we observed Cluster D MPAs to be 
distributed primarily in more tropical zones of Latin America, Africa, South East Asia, and parts of Oceania.Cluster D MPAs also had a modest amount of no-take area, including over 50 fully no-take MPAs, or ~20% of the cluster sample.However, these were exclusively in lesser developed countries within the cluster, at or below the average HDI.Recent research has suggested that MPAs in developing countries with lower HDIs often rely on funding from international sources or at the sub-national level , likely due to limited resources within their national governments.Another study on MPA governance theorized that MPAs in developing countries have trended towards various forms of decentralization due to weaker state capacity .Research conducted on some MPAs within Cluster D have highlighted some of these alternative management and financial strategies that reduce reliance on the central government .The findings from this prior research, combined with our objective results from the cluster analysis and implications of HDI, suggest that countries that contain Cluster D MPAs may frequently have fewer financial resources than MPAs in other clusters, or otherwise are less likely to get the support needed from their respective national government.This may be of particular concern for the no-take MPAs within the cluster, which may both require more financial resources than non no-take counterparts while also being located in the least developed countries in this group.We discuss Clusters F and G simultaneously due to their similar characteristics and distinction based on area compared with other clusters.Clusters F and G only contained 17 and 3 MPAs respectively, but combined encompassed more than 70% of the entire area in our data set.The most distinctive characteristic of these clusters was the size of the MPAs, with a minimum area of 180,300 km2 for Cluster F and 989,842 km2 for Cluster G, with average areas ~2 orders of magnitude greater than the sample mean.These were very young groups of MPAs, the oldest in Cluster F being only 12 years old and Cluster G even more recent at a maximum of 6 years old.In addition, while having a mix of no-take coverage ranging from 0 to 100%, 12 of 20 MPAs in the two clusters combined had at least some no-take area, 7 of which were completely no-take.These clusters also included many high profile MPAs such as Papahanaumokuakea in the USA and the Phoenix Islands Protected Area in Kiribati .Management needs and approaches for Clusters F and G are likely to differ from the other four clusters primarily due to the expansive ranges that such large MPAs can encompass.While larger MPAs are overall more expensive to manage, they are far less expensive than smaller MPAs on a per area basis .It is therefore difficult to project just how much greater the costs of managing MPAs in clusters F and G may be compared to others.Also due to their expansive range, Cluster F and G MPAs likely cover wide areas of more remote offshore waters.This may require, as well as enable the use of, management surveillance and equipment appropriate for such remote conditions, which can include offshore-equipped vessels and satellite monitoring.Additionally, the performance of such MPAs that cover large swaths of ocean has also been the subject of scientific debate , and is all the more difficult to ascertain considering the younger age of these MPAs.Therefore, we speculate that MPAs in Cluster F and G may require special emphasis on scientific monitoring in the short to medium term.All of these 
implications may be especially magnified for Cluster G due to the particularly expansive ranges of MPAs in that cluster.Our results also contribute to the growing body of literature on Large Scale Marine Protected Areas, especially towards better defining the group.LSMPAs are MPAs beyond a certain size threshold, but the minimum size that constitutes an LSMPA remains under debate with different sources ranging from 30,000 km2 to as high as 250,000 km2 .Our analysis suggests that, with our other variables considered, MPAs become large enough to statistically distinguish themselves by area alone at approximately 180,000 km2.While within range of historical LSMPA definitions, this definition differs from previous literature by arriving at a minimum size threshold via mathematical methods rather than by arbitrary selection .The results also identified an even larger threshold for Cluster G at ~1,000,000 km2, suggesting that an additional group of especially large LSMPAs has recently emerged.While Cluster G is a small group, it may continue to grow as countries pursue MPA protection goals of 10% or more of the ocean.Some known MPAs that may have qualified for Cluster G were also excluded due to restrictions related to HDI.Our findings suggest that perhaps the minimum threshold for LSMPAs be in the range of 180,000 km2 as per Cluster F, while a sub-class of especially large LSMPAs be designated with a minimum of around 1,000,000 km2 as according to Cluster G, which we refer to here as Giant MPAs.While only a small number of MPAs currently populate this statistically identified cluster, the addition of future, new LSMPAs to the database will assist in evaluating the robustness of this group as a distinct cluster.Our analysis could not make use of all potentially relevant or informative variables due to technical constraints and data limitations.For example, k-means analyses can be used with categorical data like “Governance Type”, but via a process that is distinct from the numeric based approaches used in this study .Therefore, “Governance Type” would be unable to be paired with others like “Area in km2” in the same cluster analysis.Data on MPAs is also notoriously scarce, especially on a global scale, a pre-requisite for our analysis.Potentially relevant variables like governance or management type remain limited in the few global MPA databases that exist, including MPAtlas.Isolation is another variable that is considered key to MPA performance and would have been appropriate to include.But it is difficult to incorporate because of data availability and reliability.Some studies have measured isolation in quantitative terms .But the reliability of these calculations remains unverified, they are difficult to attain, and overall there remains no consensus on how to objectively measure isolation within the marine science and conservation community.The complexity of measuring and interpreting isolation also means that it is not an easily tractable variable.Given the limitations of not including these potentially relevant variables, future research using our results should expand upon the variables noted here as needed when applying our findings and methods for individual MPA analysis, comparisons, or case study selection.Another limitation in the study was the exclusion of some MPAs from consideration because data were not available for all variables in our analysis.Most MPAs were removed due to a lack of no-take data, and some countries were left out because of a lack of an assigned HDI.For this 
reason, high seas MPAs like the Ross Sea in Antarctica were not included in our results, nor were MPAs in some SIDS such as Bonaire National Marine Park.However, this highlights one of the many advantages of using such widely available and easily tractable variables, in that it is fairly easy to assign such examples to our MPA clusters based on other key characteristics where data are available.For example, the Ross Sea would likely belong to Cluster G due to its expansive range of 1.55 million km2 .Opportunities exist for improvement and refinement of our methodology, including further insight on the definitions of each cluster and how one arrives at those definitions.For example, how might the method be improved, especially with more complete and better access to data that may allow us to explore other variables more easily?,Such examples may include performing additional cluster analyses within our defined clusters, and including additional variables that become available with improved data access and refined methodologies for measurement.Our study also provides a platform from which to steer future research in MPA management and policy.Might we be able to compare performance indicators through the lens of these cluster assignments such that different outcomes might be associated with each of the clusters?,Would these outcomes verify or refute our speculations on the management implications from these cluster assignments?,For example, researchers could investigate if certain clusters contribute more towards biodiversity targets.Or perhaps some MPA clusters may demonstrate performance in different ways such that some may be better performing in strictly environmental parameters, whereas others can contribute more towards socioeconomic goals.There is also the potential to directly use cluster results for future case study based research on MPAs.By isolating different ‘types’ of representative MPAs that may be more predisposed to different management outcomes as explained above, one can more fairly compare performance indicators across clusters and control for certain factors that may influence performance outside of management decision making.The clusters can also help guide case study selection by defining representative groups of global MPAs, from which case studies can be selected from to maximize diversity within case study samples.The k-means analysis successfully segmented over 2938 MPAs around the world into seven statistically significant clusters from the perspective of MPA management based on age, area, proportional no-take area, and HDI of the host country.These clusters were derived from a comprehensive global database to allow us to view the present state of affairs of MPAs on a global scale through the lenses of these variables.Each cluster held at least one characteristic of intrigue from which they could generally be described.We were also able to infer the potential differences in management needs and approaches for each cluster based on their given defining characteristics.Among these, three groups of MPAs emerged that embody management practices of no-take and Large-scale MPAs that have become increasingly popular among MPAs due to the demonstrated positive contributions of large area and no-take status to MPA performance.We also identified a new threshold from which to define LSMPAs, as well as a subcategory of especially large LSMPAs dubbed here as GMPAs.Our findings also provide valuable practical contributions to future research.Our approach provides objective 
guidance in case study selection for MPA research.Using these clusters, researchers can track MPA performance and management across clusters and relate any differences in management or performance to our selected variables.By accounting for factors that may influence management or performance beyond ground level management activities, these clusters may provide a more objective comparison of performance outcomes and lead to better planning for MPA viability. | Marine Protected Areas (MPAs) are a widely used and flexible policy tool to help preserve marine biodiversity. They range in size and governance complexity from small communally managed MPAs, to massive MPAs on the High Seas managed by multinational organizations. As of August 2018, the Atlas of Marine Protection (MPAtlas.org) had catalogued information on over 12,000 Marine Protected Areas. We analyzed this global database to determine groups of MPAs whose characteristics best distinguished the diversity of MPA attributes globally, based upon our comprehensive sample. Groups were identified by pairing a Principal Components Analysis (PCA) with a k-means cluster analysis using five variables; age of MPA, area of MPA, no-take area within MPA, latitude of the MPA's center, and Human Development Index (HDI) of the host country. Seven statistically distinct groups of MPAs emerged from this analysis and we describe and discuss the potential implications of their respective characteristics for MPA management. The analysis yields important insights into patterns and characteristics of MPAs around the world, including clusters of especially old MPAs (greater than 25 and 66 years of age), clusters distributed across nations with higher (HDI ≥ 0.827) or lower (HDI ≤ 0.827) levels of development, and majority no-take MPAs. Our findings also include statistical verification of Large Scale Marine Protected Areas (LSMPAs, approximately >180,000km2) and a sub-class of LSMPA's we call “Giant MPAs” (GMPAs, approximately >1,000,000km2). As a secondary outcome, future research may use the clusters identified in this paper to track variability in MPA performance indicators across clusters (e.g., biodiversity preservation/restoration, fish biomass) and thereby identify relationships between cluster and performance outcomes. MPA management can also be improved by creating communication networks that connect similarly clustered MPAs for sharing common challenges and best practices. |
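As a hedged illustration of the two-step PCA and k-means workflow summarised in the row above, the sketch below uses Python's scikit-learn rather than the R packages (FactoMineR/factoextra) named in the text. The file name mpatlas_clean.csv and the column names age, area_km2, no_take_pct, latitude and hdi are hypothetical stand-ins for the MPAtlas-derived variables; only k = 7 and the use of z-scores and Euclidean distance come from the study itself.

```python
# Illustrative sketch only: z-score standardisation, PCA screening, and
# k-means clustering (k = 7) of MPA attributes, with hypothetical column names.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

candidates = ["age", "area_km2", "no_take_pct", "latitude", "hdi"]
mpas = pd.read_csv("mpatlas_clean.csv").dropna(subset=candidates)

# Standardise to z-scores so differing scales (km2 vs. index values)
# do not dominate the Euclidean distances.
z_all = StandardScaler().fit_transform(mpas[candidates])

# Step 1: PCA over the candidate variables to check their contribution
# to the overall variance before settling on the final variable set.
pca = PCA().fit(z_all)
print("Explained variance ratio:", pca.explained_variance_ratio_.round(3))

# Step 2: k-means with Euclidean distance on the four retained variables,
# using k = 7 to match the preferred solution reported in the text.
final_vars = ["age", "area_km2", "no_take_pct", "hdi"]
z_final = StandardScaler().fit_transform(mpas[final_vars])
mpas["cluster"] = KMeans(n_clusters=7, n_init=25, random_state=0).fit_predict(z_final)

# Per-cluster sizes and means: the raw material for describing clusters A-G.
# Differences would then be checked with ANOVA and pairwise t-tests.
print(mpas["cluster"].value_counts().sort_index())
print(mpas.groupby("cluster")[final_vars].mean().round(2))
```

Standardising before clustering mirrors the study's handling of very different variable scales, and the per-cluster summaries are the inputs to the ANOVA and pairwise comparisons described in the text; the exact cluster labels depend on the random initialisation, which is why a fixed seed and multiple restarts (n_init) are used here.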
31,494 | Detecting knee osteoarthritis and its discriminating parameters using random forests | Osteoarthritis rates are rising, in part a reflection of our growing ageing population.Currently OA is the second leading cause of disability , and one of the most common forms of arthritis worldwide, accounting for 83% of the total OA burden .The global prevalence of knee OA is over 250 million people , according to Vos et al.Currently diagnosis of OA is based upon patient-reported symptoms and X-rays.The alternative is MRI but this is associated with high cost and is rarely used until symptoms progress and patients are referred for specialist surgical opinion.Thus effective management and early identification of knee OA is a key health issue and is of interest to the population at large as well as a range of clinicians and health service managers.The method presented here represents an effective solution with significantly lower costs compared to MRIs and ultimately aims to be used as a part of standard clinical assessment for the general population, in contrary to imaging that requires severe symptoms to be present.For all the aforementioned reasons, our vision and our long-term motivation is to develop a diagnostic tool for automatic detection of early markers of knee OA that does not act as a black box for the clinical personnel, as is the common case today.In this paper, we propose a computer system that uses computational methods from the area of machine learning to estimate the degree of knee OA.This approach overcomes limitations of previous methods, such as Astephen et al. , Federolf et al. , Beynon et al. , Deluzio and Astephen , and Mezghani et al. , in the sense that it automatically estimates the degree of knee OA by recognising patterns that are more discriminating of knee OA; discriminates the most important parameters for reaching its decision; and produces a set of rules that have a clear clinical rationale.Machine learning concerns the construction of computer systems that are able to learn from data.Such approaches have recently been adopted by the biomechanical field with great effect.The common trend in biomechanics research is to consider individual parameters such as flexion moment peak value, or rotation moment, as done by Kaufman et al. and then statistically test if there are significant differences in each parameter between the patients and normal subjects.However, machine learning looks at the complexity of the data as a whole , overcoming limitations that arise from hypothesis testing using individual parameters, thereby losing the richness and complexity of the data.For example, machine learning can be used to interpret electromyographic, kinematic and kinetic data from the knee, hip and ankle joints during gait and has been shown to be able to separate healthy patients, mild, and severe knee OA according to Haber et al. .Federolf et al. identified systematic differences between healthy and medial knee-osteoarthritic gait using principal component analysis.In this study we analyse parameters of ground reaction forces to estimate using an objective scale the degree of knee OA and to extract parameters that differentiate more effectively between normal and knee OA subjects.To the best of the authors’ knowledge, this is the first study on detecting knee OA via analysing the GRFs using random forests.We believe that a purely data-driven approach yields objective measures and patterns useful for both biological and clinical advancement as suggested by Faisal et al. 
.Emphasis is given on detecting parameters with physical meaning and in inducting rules that remain fully interpretable even to non-data analysis experts.The guidance rules may be adopted in a routine clinical visit to provide support to healthcare professionals during decision-making.Our final aim is to derive a software tool that can be used either to assist the physician when diagnosing new patients or to train students to diagnose patients.Previous biomedical studies by Beynon et al. , Deluzio and Astephen , Moustakidis et al. , and Mezghani et al. have discriminated between subjects with knee OA versus normal subjects, as detailed below.For example Beynon et al. explored the use of sagittal/frontal/transverse plane range of motion and the peak vertical ground reaction force during the stance phase of gait and cadence.They were able to discriminate knee OA subjects using the Dempster-Shafer theory of evidence."Depending on whether the proposed method's heuristic values are computed by descriptive statistics or provided by an expert, the system had a performance of 90% or 96.7% respectively.In another study by Deluzio and Astephen 50 patients with end-state knee OA and 63 control subjects performed five walking trials.Knee flexion angle, flexion moment, and adduction moment were classified using linear discriminant analysis after principal component analysis, achieving a 93% correct classification.More recently, GRFs have been studied.Wavelet analysis by Moustakidis et al. has shown that a reduction in peak anterior–posterior ground reaction forces during the stance phase occurs in knee OA subjects.They were grouped in no, moderate, and severe OA categories with a 93.4% performance.A second study by Mezghani et al. calculated the coefficients of a polynomial expansion and the coefficients of wavelet decomposition for 16 healthy and 26 tibiofemoral knee OA subjects.A nearest neighbour classifier achieved accuracies ranging from 67% to 91%, depending on the set of parameters.The main objective of this work is to give emphasis to clinicians’ rationale.That is the reason why we refrain from abstract mathematical approaches such as wavelet packet decomposition as done by Moustakidis et al. , as they lack a direct physical interpretation.Moreover, we consider all the trials provided by each subject, rather than averaging across trials in order to calculate the mean GRFs, as is the case of Mezghani et al. .Averaging disregards the intra-subject variability.While previous work focussed on predicting discrete outcomes, our approach provides a continuous number between 0 and 2, since we felt that clinicians would value a continuous output, rather than a yes/no answer, whilst at the same time reflecting the progressive degenerative nature of osteoarthritis.Very few previous studies provide an alternative to discrete predictions.Beynon et al. provided a level of belief that a subject has knee OA or is normal and the associated level of uncertainty.Finally, our approach does not adopt any ad hoc heuristics, like the one proposed by Beynon et al. .It is worth mentioning that the focus of machine learning does not have to be knee OA prediction.For example, the authors Favre et al. applied neural networks to predict knee adduction moment during walking based on ground reaction force and anthropometric measurements, whereas Begg and Kamruzzaman applied support vector machines to discriminate young from elderly subjects exploiting kinetic and kinematic parameters, and Muniz et al. 
evaluated Parkinson disease exploiting GRFs.Accordingly, the proposed system here is tackling the problem of estimating the presence of knee OA via a rule based approach that concurrently estimates the most discriminating features of the pathology.However, it could also be utilised to analyse additional musculoskeletal diseases, like back pain, given the respective kinetic parameters for its re-training.In this study, subjects diagnosed with OA were recruited, along with gender and age matched control subjects.We collected locomotion data from 47 subjects with knee osteoarthritis and 47 healthy subjects.The mean value and the standard deviation between normal and knee OA subjects of the age, height, weight, and sex for the 47 controls and the 47 knee OA subjects are depicted in Table 1.Ethical approval for this study was obtained from the South West London Research Ethics Committee and written informed consent was obtained from all participants.Control subjects were recruited from local university and hospital staff and students.OA subjects were recruited from hospital clinics and local General Practitioner clinics.Presence of OA was confirmed from medical reports and clinical examination by their practitioner.Subjects were excluded from the study if they reported any neurological or musculoskeletal condition other than knee OA, rheumatoid or other systemic inflammatory arthritis, morbid obesity or had undergone previous surgical treatment for knee OA.Subjects were asked to walk at their self-selected walking speed along a 6 m walkway embedded with two force plates.Kistler Type 9286B force plate exploits piezoelectric 3-component force sensors.It has 4 measuring elements, one at each corner of the 600 mm × 400 mm force plate.It has a rigidity of ≈12 N/μm for the x and the y axes and of ≈8 N/μm for the z axis.The linearity for all GRFs is <±0.2% FSO and the respective hysteresis equals <0,3% FSO.Measuring range is −2.5 to 2.5 kN for GRFX and GRFY, whereas the respective range for GRFZ is 0 to 10 kN.Each subject was barefoot and unaware of the force plates embedded in the walkway.Each subject was asked to walk along the walkway three times.Trials with no clean force plate strike were excluded.A maximum of three trials were recorded for the left and right foot.The signals from the force plates were recorded using an analogue signal data acquisition card provided with the Vicon system and the Vicon Nexus software at a sampling rate of 1000 Hz."GRF data was extracted, normalised to the subject's body weight, to reduce inter-subject variability due to weight, and time-normalised to the entire gait cycle using linear interpolation.Next, statistical parameters were extracted for each axis.A list of those that are common among the three axes is available in Table 2.Additionally, axis-specific parameters are extracted.For the Z-axis the first peak, second peak, and minimum of the mid stance values were calculated along with the time stamps of those events.Furthermore, the differences between the values recorded from each leg were calculated.Also, the difference between the first peak and the second peak was calculated.Finally, two ratios were calculated: the ratio of the 1st peak value over the minimum value during mid-stance and the ratio of the 2nd peak value over the minimum value during middle stance.The difference between the two aforementioned ratios was also calculated.The aforementioned parameters are graphically depicted in Fig. 
1.For the X-axis, the minimum during loading response, the maximum of mid stance, the maximum of terminal stance, and the minimum of mid stance and terminal stance were considered.Once again the time stamps of those values are taken into account.Those parameters can be seen in Fig. 1.Accordingly, for the Y-axis, the maximum and the minimum values are taken into account along with the respective time stamps, as is demonstrated in Fig. 1.For each GRF several slopes are defined between two successive extremes.The asterisks in Fig. 1 denote the extremes.Additional extremes exist at the beginning and the end of the stance phase.For example, the GRF of the Z-axis has one slope defined from the beginning of the gait cycle to the 1st peak.This protocol also applies for the GRFs for X and Y-axes.More specifically, 6 slopes were calculated for the GRF over X-axis and 3 for the Y-axis.The advantage of this parameter extraction method is that these parameters bear a physical meaning.The more abrupt the slopes, the quicker that phase occurred relative to the gait cycle.Interquartile range, as well as median is more robust to outliers than the mean.Spearman correlation between left and right legs estimates the strength of the associations of the gait patterns, since knee OA sufferers tend to overload one leg at the expense of the other, as evidenced in Duffell et al. in .It is normal to assume that even if just one knee suffers from OA the patterns of the other knee may be altered.GRF-Z demonstrates two peaks, the first reflects weight transfer from the heel to the mid-foot and the second one is related to the ball of the foot for push-off, as mentioned by Alaqtash et al. in .Also, there is a minimum during the stance phase.These three extremes define an M-shape.The ratios that are calculated for GRF-Z are estimations of its M shape, as explained by Alaqtash et al. and Takahashi et al. 
.With respect to the ensemble, random forests take the input parameters, traverse them with every tree in the forest, and then average the responses over all the trees. Specifically, each tree considers a different random subset of the parameters. By this procedure, called bagging, different trees have different training parameter sets. Moreover, for each tree node a subset of the training parameter set is considered. The final regression value is obtained by averaging the regression values of the random trees, as proposed by Breiman. Random forests need no separate cross-validation according to Breiman; this validation happens inherently through the selection of a subset of parameters for every tree and node. Random forests perform parameter selection automatically. If a feature has poor discriminating ability, it will not appear in any node of the trees comprising the forest. Accordingly, if a feature is highly informative, it will not only appear in several trees but will also tend to appear in nodes closer to the root, as explained by Chen and Ishwaran. Here, a Matlab implementation of random forests is utilised. To select the most informative parameters, we compute the increase in prediction error when the values of that parameter are permuted across the out-of-bag observations. Out-of-bag observations are those that are left out during the construction of each tree. Since we construct each tree using a different bootstrap sample from the original data that includes two-thirds of the cases, the remaining one-third is left out, constituting the out-of-bag observations. The increase in the prediction error when the values of that parameter are permuted across the out-of-bag observations is computed for every tree, then averaged over the entire ensemble and divided by the standard deviation over the entire ensemble. For this work, we report the 3 most informative parameters per axis. We used half of the subjects’ trials for creating the random forest and the other half for testing in a subject-independent manner. This means that the two sets are disjoint, to ensure good generalisation ability. The output of the method is a regression value ranging from 0 to 2, in order to support clinicians with their decisions. We focus on regression instead of classification since we believe that for a clinician it is more useful to obtain a continuous value rather than whether the subject does or does not have knee OA. Also, OA is a degenerative disease. The closer this value is to 0, the more probable it is that the subject under consideration is a healthy one, i.e. exhibits no knee OA. A value of 2 equates to both knees suffering from severe OA. In all, a patient may be considered to exhibit no OA if the system calculates a value less than 0.5. The performance of our system was assessed in a subject-independent manner, i.e.
by completely separating the training data from the test data.Specifically, we trained each regression forest on half the number of trials, which corresponded to 48 subjects.Half of them were suffering from knee OA and the remainder were healthy.We then tested the efficiency of the proposed approach on the remaining trials carried out by 46 subjects.This means that the testing data has never been seen before by the regression forest, rendering the system robust to generalisation and handing of new, unknown subjects.The experimental protocol is subject-independent."If a subject's trial is included in the training set, then all the trials of this subject are part of the training set and are not used in the test set.This way, the system is able to handle efficiently an unknown subject; is robust; and permits generalization.Since each subject provides up to 3 gait cycles, the output is averaged over the gait cycles, so as to have one final regression value per subject per GRF plane.For visualisation purposes, one tree out of the ten that comprise the random forest is depicted.Accordingly, a tree that traverses GRF-Z is depicted in Fig. 2 and the respective trees for GRF-X and GRF-Y are depicted in electronic supplementary material, Figs. S1 and S2, respectively.A close up of one branch of the tree demonstrated in Fig. 2 can be seen in Fig. 3.The aim of Fig. 3 is to focus solely on one branch of the tree, so as to provide a better insight to the nature of the binary rule induction implemented by trees.With respect to GRF-Z the 3 parameters that bear the most discriminating power are: the ratio of the peak push off value over the minimum value during mid-stance.As it is demonstrated in Fig. 4 subjects that suffer from knee OA have a tendency to apply less force during mid-stance. The slope defined from the first peak to the minimum value between the first and second peak) that is related to the reduction on the GRF-Z due to knee flexion. The slope defined from the beginning of the gait cycle to the first peak, as depicted in Fig. 4 that is related to weight acceptance.This means that OA subjects have flatter GRF patterns when compared to normal subjects and that knee OA subjects have a more gradual weight acceptance.For the GRF-X axis the most important parameters are: the minimum value obtained before the end of the stance phase for the left leg, as depicted in Fig. 5 that is related to the medio-lateral force at toe off. The slope between the second peak and the toe off of the right leg), that is related to moving medially from the peak lateral force. The slope defined from the first minimum value of the gait cycle to the first peak), that is related to development of the lateral force during weight acceptance.For the GRF-Y axis the most important parameters are: the difference in standard deviation between the two legs of anterior–posterior force. The time stamp of the minimum value of the left leg, as shown in Fig. 6, that is the time of the peak push off in posterior direction. The slope from the maximum value to the minimum value for the left leg, as demonstrated in Fig. 
6, that is the shear force moving from the peak anterior breaking force to the peak posterior push off force.We can consider that the proposed approach classifies a subject correctly if: the subject declares that he/she has no OA and the proposed approach output≤0.5 or the subject suffers from knee OA and the proposed approach output > 0.5.In any other case a misclassification occurs.The results for this protocol are depicted in Table 3 for GRF-Z, Table 3 for GRF-X, and Table 3 for GRF-Y.Table 3 refers to the linear combination per subject for all GRFs, i.e. the final regression value for each subject is the mean of the regression values calculated for GRFZ, GRFX, and GRFY.Additional figures of merit are calculated for the confusion matrixes presented in Table 3.Those comprise sensitivity, specificity, accuracy and F1 score and are demonstrated in Table 4.With respect to regression accuracy the mean squared error for the GRF-Z is 0.64, for the GRF-X it is 0.67, and for GRF-Y it is 0.64.If we combine the three axes in a linear manner, i.e. if we consider as final regression value per subject the mean value over all three axes, then the mean squared error drops to 0.59.It is noted that the regression values are averaged across trials for the same subject due to the subject-independent protocol.Also, to prove the stability, robustness, and generalisation ability of the proposed method, a 5-fold cross validation is performed.Once again, the subjects for the 5 different training/testing splits are selected in a subject-independent manner.In the cross-validated case, the combined over the three axes mean squared error is 0.44 ± 0.09, whereas the mean accuracy equals 72.61% ± 4.24%.To overcome the limitations that bilateral knee OA subjects are introducing, an alternative configuration of the dataset is tested.In this case, all subjects with OA in both knees, along with their age and gender matched were removed.This leaves us with 36 subjects that exhibit OA in one knee along with their 36 respective age and gender matched subjects.The rest of the computer system configuration remains the same.The results for this protocol are depicted in Table 5 for the linear combination per subject for all GRFs, whereas the figures of merit are demonstrated in Table 5.To comment on those results, accuracy for all GRFs has risen from 65.22% to 77.78%.This can be attributed to the fact that the exclusion of the subjects that have OA in both knees leads to a more homogeneous dataset, so the discrimination between the two categories is more consistent.To compare the trials, for the case of unilateral knee OA and their controls, we calculated the frequency of subjects that had 3 trials classified correctly, had 2 trials classified correctly, had 1 trial classified correctly, and had no trial classified correctly.22 subjects belong to the first category; 6 subjects belong to the second category; 4 subjects belong to the third category; and 4 subjects belong to the third category.This paper presents a novel computer system that automatically estimates the degree of knee OA based on GRFs; discriminates the most important parameters for reaching its decision; those parameters are in fact in line with the literature, as detailed later on this Section and produces a set of rules, presented in this paper as binary decision trees, that can be alternatively seen as a set of if-then-else arguments; these rules we propose are easy from a clinical perspective.Additional experimental results demonstrating the effect of 
thresholding as well as using alternative machine learning techniques, namely support vector machines, as well as additional training/testing splits, namely leave-one-subject-out, are presented in Supplementary Material. The presented protocol leads to a high number of false negatives. Approximately 20% of subjects that claimed they did not have knee OA presented with gait patterns similar to those of subjects that suffer knee OA. This may potentially be attributed to the fact that our healthy population were not investigated for joint abnormalities using imaging. As such, they may have had early unknown signs of knee joint changes that led them to walk with a gait pattern that bears some resemblance to that seen in people with knee OA. However, this is a speculation and would require further research to validate. Our findings clearly indicate that for verification an imaging assessment of the healthy subjects is required. Radiographic assessment of the healthy subjects is part of our proposed future work. Our method has its own limitations. First of all, the method is not validated against radiographic imaging, such as X-rays or MRIs, which are often used for OA diagnosis. However, figures in the scientific literature indicate that less than 50% of people with evidence of OA on plain radiographs have symptoms related to these findings, as proved by Hannan et al. Therefore, the ‘clinical endpoint’ is more difficult to establish, as explained by Hunter et al. To conclude, the work of Zhang et al. proves that there is no gold standard in the diagnosis of knee OA. However, the knee OA subjects were identified by experienced orthopaedic clinicians and GPs based on their clinical examination findings and medical records. A fraction of knee OA subjects had been referred from their GP for an X-ray or MRI. Healthy volunteers were assessed for any exclusion criteria such as knee pain or limitation in functional ability, but did not have this confirmed through imaging; as such they may have had early signs of OA that were undetected. However, this study aims to work as a proof of concept, rather than a validation study. The next step is to obtain ethics approval and funding to recruit a larger number of subjects, all of whom will undergo MRI at the respective hospital department at the time of data collection. This will allow us to confirm the presence or absence of imaging signs of knee OA. Also, the results, although clinically relevant, cannot be used in everyday clinical practice without further work, including validating the suitability of the selected features as knee OA markers and, ultimately, risk factors. On the advantage side, the parameters that we discriminate as most informative in this study are in line with the findings in the related literature. OA subjects are thought to adopt gait compensation strategies to reduce pain or the moments generated about the knee. Such strategies may provide insight into the altered parameters noted here. For example, reduced gait speeds may be adopted by patients in order to reduce medial compartment loading in OA subjects, as suggested by Mündermann et al., through reduction of GRF-Z peak amplitude and loading rate, as demonstrated by Zeni and Higginson. Reduced knee excursion in the sagittal plane during the stance phase of gait has been reported in knee OA subjects by Childs et al.
as well as by Schmitt and Rudolph and related to weakness of the quadriceps muscle; this would also affect the rate of force development in GRF-Z.Other strategies are thought to alter medio-lateral knee loading according to Simic et al. , including increased varus thrust as proposed by Chang et al. and lateral trunk lean, which is thought to change the location of the centre of the mass in the frontal plane as explained by Mündermann et al. and Hunt et al. and would therefore alter GRF-X.Increased trunk lean was also associated with pain in OA subjects as shown by Bechard et al. .Finally, alterations in foot angle, are postulated to mediate medio-lateral knee forces and pain as suggested by Bechard et al. , Lynn and Costigan , and Simic et al. and would alter shear forces both in the medio-lateral and antero-posterior directions.Comparing the work shown here with the previous research presented by Kotti et al. in , the main difference lies on the research focus and methodology.The work of Kotti et al. focused on understanding the motor behaviour by deconstructing its complexity.In more detail, it was studied how to deconstruct GRFs into a low-dimensional space and if this deconstruction of GRFs was capable of discriminating between subjects with and without knee OA.Considering the methodology, probabilistic principal component analysis was used for dimensionality reduction and the classification was done by means of a Bayes classifier.All the axes were considered concurrently, that is no results were available per axis, and no feature engineering took place.The use of PPCA means that a direct physical interpretation of the results was not possible.Moreover, the approach presented by Kotti et al. was not designed exclusively for GRFs and could be transferred to other signals, such as EMGs, since no feature engineering is required.On the common methodology side, both works are subject-independent and use a cross-validated protocol.The advantages of our method compared with the related research summarized in the Introduction Section that also uses GRFs, are that a greater number of subjects is exploited; the experimental protocol is subject-independent; and the experimental protocol is 50% training/50% testing.However, this has an effect on the accuracy of the results presented here.For example, Moustakidis et al. report an accuracy of 93.4%, using a subject-dependent 10-fold cross validation protocol over 214 trials of just 36 subjects.Accuracy is boosted since the experimental protocol is both subject-dependent and 90% training/10% testing, thus less challenging than the subject-independent 50% training/50% testing exploited in this work and also due to the feature engineering, rendering the features not directly clinically interpretable.Also, the 2 force plates used by Moustakidis et al. are embedded into a treadmill, rather than in a walkway, as in the presented approach.There is an argument in the research community whether treadmill gait data are different from overground walking gait according to Warabi et al. .Referring to the system presented by Mezghani et al. , the experimental protocol in this case is leave-one-subject out, so subject independent, but still less challenging than the leave-half-the-subjects-out tested here.The number of subjects is 42, so less than half of those tested for this paper.Moreover, we consider all the trials provided by each subject, rather than averaging across trials in order to calculate the mean GRFs, as is the case of Mezghani et al. 
.Averaging disregards the intra-subject variability, rendering the problem less complex.One of the main advantages of our approach is that it simultaneously discriminates between subjects that have knee OA by extracting the most informative parameters.Our aim is to create a clinically relevant tool that enables the physician to see the influence of each parameter upon discrimination, as suggested by Beynon et al. .Also, in both cases we need to identify whether the proposed tool makes decisions in line with clinical opinion.Additionally, our study has a common point with that of Moustakidis et al. , since they both decompose the complex knee OA problem into simpler binary sub-problems via tree structures.However, for the random forest approach, its robustness is mathematically proven, it is robust to overfitting, and it does not utilise heuristics that are subjectively defined.An additional advantage of this study is that since we do not transform our initial parameters we do not need to map them back to the original space, where they have a physical meaning.Such a mapping is subjective and may lead to ambiguities.For example, the parameters derived by Deluzio and Astephen in , namely the knee flexion moments during stance, knee adduction moments during the stance phase, and knee flexion ranges of motion throughout the gait cycle are qualitative observations.In our work the parameters are strictly, quantitatively defined.The same argument applies to discrete wavelet decomposition, where a mother wavelet Symlet is utilised to capture the temporal information in the work of Mezghani et al. .However, it is unclear which temporal information was retained and why.Finally, this study takes extra care to use a subject-independent protocol to boost generalization.Subject dependent protocols can lead to systems of higher accuracy, since a subject already seen during training is re-tested during the testing phase, as done by Beynon et al. .However, such systems may not be robust when they actually see a subject outside of the training population.To conclude this paper has proved the suitability of random forests for analysing ground reaction forces in order to distinguish knee OA patients from healthy ones.Moreover, it has managed to provide a set of 9 features, 3 per axis, that are more discriminative of knee OA.The suitability of those features has been verified by the related bibliography.However, our method manages to combine those features in a rule-based way, instead of using them independently.Moreover, the rule-based core of the proposed system is close to the clinical rationale.To boost intra-subject consistency subjects were asked to walk along the walkway 3 times.Mean squared error is 0.44 ± 0.09, whereas the mean accuracy equals 72.61% ± 4.24% in a subject-independent protocol.However, further studies are needed to validate those findings as well as to collect data whose ground truth is derived through imaging.Our ultimate clinical vision is to create an objective, sensitive, diagnostic tool and to personalise health care, since each individual patient traverses the regression trees in a unique way.Ethical approval for this study was obtained from the South West London Research Ethics Committee and written informed consent was obtained from all participants. | This paper tackles the problem of automatic detection of knee osteoarthritis. 
A computer system is built that takes as input the body kinetics and produces as output not only an estimation of presence of the knee osteoarthritis, as previously done in the literature, but also the most discriminating parameters along with a set of rules on how this decision was reached. This fills the gap of interpretability between the medical and the engineering approaches. We collected locomotion data from 47 subjects with knee osteoarthritis and 47 healthy subjects. Osteoarthritis subjects were recruited from hospital clinics and GP surgeries, and age and sex matched healthy subjects from the local community. Subjects walked on a walkway equipped with two force plates with piezoelectric 3-component force sensors. Parameters of the vertical, anterior–posterior, and medio-lateral ground reaction forces, such as mean value, push-off time, and slope, were extracted. Then random forest regressors map those parameters via rule induction to the degree of knee osteoarthritis. To boost generalisation ability, a subject-independent protocol is employed. The 5-fold cross-validated accuracy is 72.61% ± 4.24%. We show that with 3 steps or less a reliable clinical measure can be extracted in a rule-based approach when the dataset is analysed appropriately. |
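To make the evaluation protocol of this paper concrete, the following is a minimal, illustrative sketch (not the authors' code) of the subject-independent pipeline described above: a 10-tree random forest regressor per GRF axis, per-subject averaging of the regression outputs over the gait cycles, a linear (mean) combination of the three axes, a 0.5 decision threshold, and grouped 5-fold cross-validation. The feature matrices, variable names and data-loading step are assumptions made for illustration only.

```python
# Illustrative sketch only; feature extraction from the force-plate signals is assumed
# to have produced one row of hand-crafted parameters per trial and per GRF axis.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold

def cross_validated_accuracy(X_by_axis, y, subject_ids, n_splits=5, seed=0):
    """X_by_axis: dict {"GRF-Z": ..., "GRF-X": ..., "GRF-Y": ...}, each (n_trials, n_features).
    y: 0 (healthy) / 1 (knee OA) per trial; subject_ids: one id per trial, so that all
    trials of a subject fall in the same fold (subject-independent protocol)."""
    y = np.asarray(y)
    subject_ids = np.asarray(subject_ids)
    X_ref = next(iter(X_by_axis.values()))
    accuracies, mses = [], []
    for train_idx, test_idx in GroupKFold(n_splits).split(X_ref, y, groups=subject_ids):
        # One 10-tree regression forest per GRF plane, combined by a simple mean.
        axis_preds = []
        for X in X_by_axis.values():
            forest = RandomForestRegressor(n_estimators=10, random_state=seed)
            forest.fit(X[train_idx], y[train_idx])
            axis_preds.append(forest.predict(X[test_idx]))
        trial_pred = np.mean(axis_preds, axis=0)

        # Average the up-to-3 gait cycles of each test subject into one regression value.
        groups, y_test = subject_ids[test_idx], y[test_idx]
        subj_pred = np.array([trial_pred[groups == s].mean() for s in np.unique(groups)])
        subj_true = np.array([y_test[groups == s].max() for s in np.unique(groups)])

        # Threshold at 0.5: <= 0.5 -> healthy, > 0.5 -> knee OA.
        label = (subj_pred > 0.5).astype(int)
        tp = np.sum((label == 1) & (subj_true == 1))
        tn = np.sum((label == 0) & (subj_true == 0))
        fp = np.sum((label == 1) & (subj_true == 0))
        fn = np.sum((label == 0) & (subj_true == 1))
        accuracies.append((tp + tn) / len(subj_true))
        mses.append(np.mean((subj_pred - subj_true) ** 2))
        # Sensitivity, specificity and F1 follow from the same counts:
        # tp/(tp+fn), tn/(tn+fp), 2*tp/(2*tp+fp+fn).
    return np.mean(accuracies), np.std(accuracies), np.mean(mses)
```

A single run with a fixed 50% training/50% testing subject split, as in the main protocol, can be obtained in the same way by replacing the cross-validation loop with one grouped split.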
31,495 | Mulberry leaf active components alleviate type 2 diabetes and its liver and kidney injury in db/db mice through insulin receptor and TGF-β/Smads signaling pathway | Diabetes is a systemic metabolic disease caused by insufficient insulin secretion or a decreased biological effect of insulin in the body, which is characterized by disorders of glucose metabolism, abnormal hepatic glucose output, and insulin resistance. The pathogenesis of diabetes is complex and may be related to multiple risk factors including family history, aging, obesity, hypertension, and energy intake. As a typical metabolic disease, its development involves not only the abnormal metabolism of sugar, fat, proteins and other substances, but also pathological changes such as inflammatory reactions, oxidative stress and alterations of the gastrointestinal flora. Chronic hyperglycemia often causes various organ diseases, including kidney, liver, nerve and cardiovascular diseases. The liver is a major site of insulin clearance and of the production of inflammatory cytokines, and plays an important role in maintaining normal glucose concentrations in fasting and postprandial states. The kidney is an important organ for excreting waste and toxins; it regulates electrolyte concentrations and maintains acid-base balance. Diabetic nephropathy often leads to mesangial expansion, increased glomerular filtration rate and increased basement membrane thickness, which in turn lead to glomerular sclerosis and interstitial fibrosis, eventually resulting in renal failure. It can therefore be seen that, as diabetes mellitus develops, it harms the body in many ways, and liver and kidney injury is a serious part of this. Mulberry leaf is the dried leaf of Morus alba L. and is one of the commonly used traditional Chinese medicines. Mulberry has been widely recognized for its therapeutic effect on diabetes and its complications. Many studies on the hypoglycemic mechanism of mulberry leaves have laid a foundation for our in-depth exploration. The main active ingredients in mulberry leaves are flavones, polysaccharides, and alkaloids. The hypoglycemic mechanism of the alkaloids has been basically clarified: polyhydroxy alkaloids represented by DNJ, known as α-glycosidase inhibitors, have a hypoglycemic effect and inhibit adipogenesis; they can prevent type II diabetes, inhibit lipid accumulation and reduce postprandial hyperglycemia by inhibiting α-glucosidase in the small intestine. A large number of studies have documented that mulberry flavonoids have anti-oxidative, anti-bacterial, anti-inflammatory, anti-viral, blood sugar lowering and blood pressure lowering effects, and can improve heart and liver function; mulberry polysaccharides also have a significant hypoglycemic effect and inhibit increases in blood lipids. Both flavonoids and polysaccharides have broad prospects in the treatment of type II diabetes, but their hypoglycemic mechanisms are not yet clear. In this study, a db/db mouse model of spontaneous obesity and diabetes was used to evaluate the therapeutic effect of mulberry leaf components on type II diabetes and the mechanism by which they improve the associated liver and kidney injury. In addition, based on UPLC/QTOF-MS technology, a metabolomics method was applied to identify potential biomarkers in serum and the related metabolic pathways, and to provide a basis for mechanism research and new drug development. 40 SPF grade 6–8 week old male db/db mice and 10 db/m mice
were purchased from the Institute of Biomedical Sciences, Nanjing University, License No. SCXK 2015-0001. The animal experiment ethics committee of Nanjing University of Traditional Chinese Medicine confirmed that the experiment conformed to the Regulations on the Administration of Laboratory Animals issued by the State Science and Technology Commission and the Detailed Rules for the Implementation of the Administration of Medical Laboratory Animals issued by the Ministry of Health. All animals were kept in the Nanjing University of Traditional Chinese Medicine Drug Safety Evaluation Research Center, fed conventional feed with free access to food, with temperature and humidity maintained at 23 ± 2 °C and 60 ± 2%, respectively, and a 12 h light cycle. Metformin hydrochloride tablets were purchased from Sino-American Shanghai Squibb Pharmaceuticals Ltd.; TG, T-CHO, INS, mALB, Cre, ALT and AST kits were purchased from Nanjing Jiancheng Bioengineering Institute Co., Ltd. Acetonitrile, formic acid and methanol were chromatographically pure. RIPA lysate, BCA protein quantification kit, sodium dodecyl sulfate-polyacrylamide gel electrophoresis gel configurator, 5×SDS protein loading buffer, enhanced chemiluminescence kit, electrophoresis solution, transfer solution, blocking solution, color pre-stained protein marker and antibodies were all produced by Shanghai Wei'ao Biotechnology Co., Ltd. The extracts of flavonoids, polysaccharides and alkaloids were prepared in our laboratory. The contents of DNJ and fagomine in the alkaloids were 27.7% and 3.78%, respectively, the total polysaccharides were 18.8%, and the contents of neochlorogenic acid, chlorogenic acid, cryptochlorogenic acid, rutin, isoquercitrin and astragaloside in the flavonoids were 3.85%, 3.43%, 4.38%, 3.56%, 1.58% and 1.02%, respectively. The instruments included an ACQUITY UPLC System, a Xevo mass spectrometer and MassLynxTM mass spectrometry workstation software; a small vertical electrophoresis tank, a small Trans-Blot transfer tank and a basic power supply; a tissue homogenizer, an ultrasonic cell disrupter, a shaking table, a high speed refrigerated centrifuge and a full-wavelength microplate reader. Anke LXJ-IIB and TDL-240B centrifuges were purchased from Shanghai Anting Scientific Instrument Factory; an EPED ultra-pure water machine was purchased from Nanjing Yeap Esselte Technology Development Co., Ltd. The mice were adaptively fed for 18 days. Fasting blood glucose was measured once a week, and mice whose blood glucose was stable and greater than 16.7 mmol/L were regarded as successful models. The 10 db/m mice served as the control group, and the db/db mice were randomly divided into five groups: a diabetic model group, a metformin control group, a mulberry flavonoids group, a mulberry polysaccharides group and a mulberry alkaloids group. The doses given were based on previous experimental studies in our laboratory. Extracts and metformin were dissolved in normal saline. All mice were given intragastric administration at a dose of 10 ml·−1 for 6 weeks; the control group and model group were given the same volume of saline at the same time. At the end of treatment, 24-hour urine was collected using metabolic cages, and all overnight-fasted mice were anaesthetised. Whole blood was collected into 1.5 mL micro-centrifuge tubes and centrifuged at 13,000 rpm for 10 min. The supernatant was transferred to new 1.5 mL micro-centrifuge tubes and stored at −80 °C. The livers were quickly weighed and stored in two portions: the left outer lobes were stored in formalin for HE and Masson staining and the rest were stored at −80 °C for Western blotting. The kidneys were
removed and the left kidney was preserved in formalin for HE and PAS staining, and the rest were stored at −80 °C for Western blotting. The liver index and kidney index were calculated according to the following formulas: liver index = liver weight/mouse weight × 100%; kidney index = kidney weight/mouse weight × 100%. At the end of weeks 1, 2, 3, 4, and 6 after modeling, FBG was measured after 12 h of fasting. Following the kit instructions, the contents of TG, T-CHO, ALT, and AST in the final collected serum and of Cre in urine were determined, and the contents of INS in serum and mALB in urine were determined by ELISA. The insulin resistance index was calculated according to the following formula: HOMA-IR = FBG × INS/22.5. Three liver and kidney tissue samples were randomly selected and cut into pieces; 20 mg of tissue was mixed with 100 μL RIPA lysate and ground in a homogenizer. The supernatant was collected after centrifugation at 3000 rpm for 15 min. The protein concentration was determined with a BCA protein quantification kit and adjusted with 5× loading buffer to obtain a sample solution. The sample solution was boiled at 100 °C for 5 min to denature the protein, cooled on ice and then centrifuged at 3000 rpm for 1 min. A 10% separating gel and a 5% stacking gel were prepared, and 20 μL of sample solution was added to each well for electrophoresis. After the proteins were transferred onto polyvinylidene fluoride membranes, the membranes were blocked with 5% bovine serum albumin for 2 h. Primary antibodies against InsRβ, IRS-1, PI3K, NF-κB, β-actin, TGF-β1, CTGF, smad2, smad3 and smad4 were diluted in blocking solution and incubated overnight at 4 °C; the horseradish peroxidase-conjugated goat anti-mouse secondary antibody was incubated for 2 h at room temperature. After a 2 min reaction with chemiluminescence detection reagents, the membrane was exposed and developed with X-ray film in a darkroom. Six serum samples from each group were randomly selected. After mixing 50 μL of serum with 150 μL of acetonitrile, the mixture was vortexed for 60 s and centrifuged at 13,000 rpm for 10 min at 4 °C. Equal volumes of serum from each sample were mixed together as QC samples. At the beginning of sample injection, the QC sample was injected six times continuously to adjust and equilibrate the system; the QC sample was then injected every 8 samples to monitor the stability of the analysis. Chromatographic conditions: ACQUITY UPLC BEH C18 column; the mobile phase was composed of formic acid solution (A) and acetonitrile (B), using a gradient elution of 5–45% B at 0–3 min, 45–95% B at 4–13 min, and 95% B at 13–14 min. Flow rate: 0.4 mL·min−1; injection volume: 2 μL; column temperature: 35 °C. MS conditions: ESI mass spectra were acquired in both positive and negative ionization modes by scanning over the m/z range 100–1000. Capillary voltage: 3.0 kV; cone voltage: 30 V; extraction voltage: 2.0 V; ion source temperature: 120 °C; desolvation temperature: 350 °C; cone gas flow rate: 50 L/h; collision energy: 20–50 eV; desolvation gas flow rate: 600 L/h; activation time: 30 ms; collision gas: high-purity nitrogen. Leucine-enkephalin was used as the lock mass solution, at a concentration of 200 pg/mL and a flow rate of 100 μL/min. SPSS 21.0 software was used to perform statistical analysis of the data. The data were expressed as mean ± SD, and P < 0.05 was considered statistically significant. The raw data obtained by mass spectrometry were processed with MassLynx v4.1 software. The parameters used were set as RT range 0–15 min, mass range 100–1000 Da, mass tolerance 0.05 Da, peak
intensity threshold 50, mass window 0.05 Da, RT window 0.2 min, and automatic detection of 5% peak height and noise. The intensity of each ion was normalized with respect to the total ion count to generate a list of data consisting of retention time, m/z value and normalized peak area. The data were imported into EZinfo 2.0 for supervised PLS-DA and OPLS-DA. For the identification of potential markers and metabolic pathways, the following databases were used: HMDB, MetaboAnalyst and KEGG. As shown in Fig. 1a, HE-stained hepatocytes in the control group are tightly arranged, the hepatic plate structure is clear, and there is no abnormality in the central vein and portal area. The hepatocytes in the model group show degeneration, swelling, loose cytoplasmic staining and vacuolization; local tissue showed necrotic foci with deeply stained, fragmented or dissolved nuclei; a small number of cells were accompanied by steatosis, with fat vacuoles visible in the cytoplasm. Masson-stained hepatocytes in the control group contained a small amount of blue staining, while a large amount of blue collagen fibers was present in the model group, mainly deposited in the blood vessels, portal areas and Disse space. All the treatment groups reduced these pathological changes to different degrees: the vacuolar degeneration and necrotic cells were obviously reduced, the cells were arranged more closely and orderly, and the blue collagen fibers were reduced to some degree. Fig. 2a shows the staining of kidney tissue. In the control group, the renal cortex and medulla are clearly demarcated, and the morphology of the glomeruli and renal tubules is normal. The tubular epithelial cells in the model group are edematous and swollen with loosely stained cytoplasm; some renal tubular epithelial cells are necrotic, with pyknotic and hyperstained nuclei. The renal capsule is narrow and the epithelium is thick, and a small number of casts can be observed. The thickened epithelium and edematous cells were reduced in each administration group, and casts could hardly be observed. Compared with the control group, PAS staining shows that the glomerular mesangial cells and the mesangial matrix in the model group exhibited some hyperplasia, and the basement membrane was thickened. In addition, the IOD of the basement membrane and the pixel area of the glomerular vascular plexus in PAS staining were analyzed, and the average optical density values were calculated. Each administration group showed significant differences compared with the model group. The weight ratio of liver and kidney in the model group was lower than that in the control group. On the one hand, this is related to the body weight of the db/db mice, as that of the model group was slightly higher than that of the other groups; on the other hand, the pathological changes occurring in organs under diabetes mellitus cause organ atrophy. However, these changes were alleviated after administration, and the regulation of the liver was particularly evident. The results of FBG and related indicators are shown in Figs.
3 and 4. Compared with the control group, the FBG, TG, T-CHO, mALB/Cre, ALT, AST and insulin resistance index increased significantly in the model group. After the intervention, the value of FBG in the MP and MA groups and the ratio of mALB/Cre in the MP and MA groups decreased obviously, while MF and MA had a significant effect on reducing the levels of ALT and AST. These enzymes are believed to leak from the cytosol into the bloodstream as a consequence of damage to hepatic tissue. Hypercholesterolemia, hypertriglyceridemia and enhanced glomerular lipid synthesis have been implicated in diabetic glomerulosclerosis and are known to exacerbate kidney diseases. Prevention of oxidative stress-related hyperlipidemia and/or lipid peroxidation is considered crucial in preventing disorders associated with diabetes. The mulberry-treated groups were shown to normalize serum lipid levels compared with the model group. Compared with the control group, the levels of InsRβ and PI3K protein in the liver were significantly decreased and IRS-1 and NF-κB protein were increased in the model group, suggesting that db/db mice had developed insulin resistance and inflammatory reactions, which lead to a disorder of insulin receptor pathways and insulin signal transduction. The drug-administered groups showed different degrees of recovery of InsRβ, PI3K and NF-κB; the effect of MA and MP was especially obvious, but they had no significant effect on IRS-1 protein. The levels of TGF-β1, CTGF, smad2, smad3 and smad4 protein in the kidney were significantly increased in the model group. TGF-β1 and its signal transduction proteins, the smads, play a key role in renal fibrosis. Connective tissue growth factor (CTGF) also plays an important role in the formation of renal interstitial fibrosis; it can stimulate the proliferation of renal interstitial fibroblasts and ECM synthesis. The effect of MF and MP was especially obvious. The stability and trend of the QC sample and the RSD of the relative peak areas were examined, demonstrating the quality of the QC sample. Validation of this method was evaluated by calculating the m/z of ten random molecules, and repeatability was evaluated by using six replicates of the QC sample. The results show that the method has good repeatability and stability. UPLC-QTOF/MS was used to collect the metabolic information of serum samples in positive and negative ion modes. In order to examine the overall changes of metabolites in diabetic mice, the data were analyzed by orthogonal partial least squares discriminant analysis to obtain the OPLS-DA score map, S-plot and VIP distribution map. From the OPLS-DA score map, it can be seen that the two groups can be clearly separated under both positive and negative ion modes, indicating that metabolic abnormalities occurred in the model mice. A total of 13 potential biomarkers were identified by retrieving and confirming the mass spectrum data in the METLIN, HMDB and KEGG databases. The mass spectrometry data and their changes in the control and model groups are shown in Table 2. The relative contents of the 13 potential biomarkers in all groups were tested by one-way analysis of variance to compare the differences between the model group and the other groups. The administered groups had significant differences compared with the model group, suggesting that the intervention of the three effective components could effectively adjust the abnormal changes of these potential biomarkers. In order to study the effects and mechanisms of mulberry components in diabetic mice, PLS-DA was applied to obtain the changes in the control group, model
group and administration groups. The t-test was performed on the relative distances between the control group and the other groups in the PLS-DA score map. The three effective components of mulberry can be clearly distinguished from the model group, are closer to the normal group than metformin, and recovered to a level close to the control group to varying degrees, with the flavonoids and alkaloids showing better improvement effects. This suggests that the multi-effect and multi-target action mode of the mulberry components is different from that of traditional hypoglycemic agents. The 13 identified endogenous metabolites were input into the MetaboAnalyst database to construct metabolic pathways. The pathways with impact values above 0.1 were screened as potential target pathways. As shown in Fig. 10c, among the metabolic pathways constructed, ether lipid metabolism, dicarboxylate metabolism, and arachidonic acid metabolism are the most important metabolic pathways. The db/db mouse is derived from inbreeding of the heterozygous C57BL/KsJ strain. The lack of a functional leptin receptor results in a failure to elicit an effect upon food intake, leading to obesity, high insulin and insulin resistance. The symptoms of hyperglycemia develop within 6 weeks, severe hyperglycemia develops after 8 weeks, and some diabetic complications such as nephropathy and liver injury develop gradually with age. The db/db mouse has been extensively used for the study of type 2 diabetes and insulin activity. The aim of the present study was to assess the therapeutic efficacy of mulberry components and to try to elucidate their antidiabetic mechanism through metabolic profiling. After 6 weeks of treatment, the mulberry components were shown to modulate blood glucose and lipid levels in db/db mice and to improve liver and kidney function and structural disorders. Specifically, they down-regulated blood glucose and blood lipids, reduced urinary microalbumin, transaminase levels, and insulin resistance, improved hepatic steatosis and hepatocyte necrosis, and protected glomeruli and renal tubules. Western blot results for the liver showed that the insulin receptor pathway was dysregulated in the db/db group. The insulin receptor pathway is the main molecular mechanism of insulin resistance, and insulin-signaling disorder is one of the causes of insulin resistance in type 2 diabetes. The mulberry components may play a role in lowering blood glucose in diabetic mice by promoting glucose metabolism in peripheral tissues, increasing insulin sensitivity, and improving insulin resistance. Western blot results for the kidney showed that the TGF-β/Smads signaling pathway was activated in the db/db group. Studies have shown that abnormality of the TGF-β/Smads signaling pathway can affect fibroblast differentiation, migration and the expression of fibrosis-related factors, and CTGF acts as a mediator of TGF-β-induced fibroblast proliferation by regulating cyclin-dependent kinase activities. Mulberry may inhibit kidney injury by inhibiting the expression of CTGF and related genes. Metabolomics is a systematic method for studying the overall metabolic changes in the body. By analyzing the composition and changes of endogenous metabolites in biological samples, it reflects the physiological and pathological conditions of the subjects and the patterns of overall metabolism under the influence of internal and external factors. The metabolomics results revealed that the content of PGH3 increased in the model group. PGH3 belongs to the prostaglandins, which can be formed by arachidonic acid metabolism and stereospecific oxidation of
arachidonic acid through the cyclooxygenase pathway. It is associated with pathological processes such as inflammation, allergic reactions and cardiovascular diseases, as lipid peroxidation products can regulate the conversion of arachidonic acid to secondary metabolites, while lipid metabolism disorders and oxidative stress can increase lipid peroxides and stimulate the metabolism of arachidonic acid to promote the occurrence of inflammation. Palmitic acid, the most abundant fatty acid in the body, was reduced in the model group. The metabolic disorder of fatty acids is one of the main causes of inflammation and insulin resistance. It has been reported that leptin can promote lipolysis and inhibit lipogenesis, but leptin has no lipolytic effect on the adipocytes of leptin receptor-deficient mice, so the levels of free fatty acids in the model mice are lower than in the control group. In addition, insulin can promote the synthesis and storage of fat and promote the dephosphorylation of hormone-sensitive lipase by enhancing hormone sensitivity, leading to decreased lipolysis of adipose tissue and reduced levels of free fatty acids in blood. Ox-LDL is an important substance that causes dysfunction of vascular endothelial cells, and LysoPC is its main component. LysoPC is an intermediate product of PC metabolism, and it plays a significant role in oxidative stress and immune inflammation. LysoPC can increase oxidative stress and can induce an inflammatory response by inducing antibody formation and stimulating macrophages to influence the inflammatory state of the organism, which leads to disturbances of phospholipid metabolism and vascular endothelial dysfunction in the inflammatory state. The low PC content combined with high total cholesterol and triglyceride levels shows that lipid homeostasis in the diabetic mice was disturbed, which is consistent with the high correlation between insulin resistance and the reduction of various glycerophosphocholines observed by Gall and colleagues. The disorder of LysoPC combined with the increase of arachidonic acid metabolism also implies abnormal activation of the PKC pathway in glucose metabolism. Activation of the PKC pathway can increase the synthesis of extracellular matrix and inhibit the activity of nitric oxide, promoting local hypoxia and increased vascular permeability and causing kidney damage. The content of cis-aconitic acid increased in the model group. Cis-aconitic acid is an intermediate in the TCA cycle, produced by dehydration of citric acid. The TCA cycle is the final metabolic pathway of the three major nutrients and a pivotal link between carbohydrate, lipid and amino acid metabolism. The increase of cis-aconitic acid indicates that turnover of the TCA cycle is inhibited, resulting in an accumulation of metabolic intermediates and making energy and lipid metabolism abnormal. Additionally, the accumulation of pyruvate during the tricarboxylic acid cycle can promote the production of reactive oxygen species and increase the oxidative stress response, while glyoxylate and dicarboxylate metabolism can also serve as a marker of inflammation. Studies have shown that DNJ can increase glycolysis in diabetic individuals and reduce gluconeogenesis, thus improving the TCA cycle, which is consistent with our finding that the content of cis-aconitic acid in the alkaloid group was significantly lower than that in the model group. Therefore, we speculate that mulberry leaf components can improve the metabolism of arachidonic acid and fatty acids, reduce the production of lipid
peroxidation products, and regulate oxidative stress and inflammation. In addition, they can improve glucose metabolism, maintain vascular endothelial function and reduce complications caused by microvascular injury. Metformin was selected as a positive control. Its hypoglycemic effect is obvious, it has a good regulating effect on TG and T-CHO, and it also reduces the content of AST and ALT. Western blot results showed that metformin has no direct relationship with the insulin receptor pathway, but it can regulate NF-κB. It also plays a role in the TGF-β/Smads signaling pathway. According to the literature, metformin acts directly on glucose metabolism: it promotes anaerobic glycolysis and increases the uptake and utilization of glucose by peripheral tissues such as muscle and fat; it inhibits intestinal absorption of glucose and hepatic gluconeogenesis and reduces hepatic glucose output, thus lowering blood sugar. The relationship between metformin and the AMPK pathway has been studied extensively. A study conducted by Lu and co-workers indicates that activation of AMPK has potential value in the treatment of CKD. The study also demonstrates that the activation of AMPK by metformin suppressed renal fibroblast collagen type I production in response to TGF-β1, while AMPK inhibited the TGF-β1-stimulated upregulation of CTGF. From the metabolomics results, although metformin has a certain regulatory effect on the potential biomarkers we identified, in general, in the PLS-DA score plots metformin is far away from the mulberry leaf components, and the mulberry leaf components are closer to the control group. In summary, we speculate that in the diabetic state, the disorder of the insulin receptor signaling pathway in the liver causes inflammation, inhibition of glucose uptake and an increment of gluconeogenic genes, and the disorder of the TGF-β/Smads signaling pathway in the kidneys increases CTGF, which in turn leads to cell hypertrophy and mesangial matrix expansion. On the other hand, the disorders of phospholipid metabolism, fatty acid metabolism and tricarboxylic acid metabolism cause inflammatory reactions, oxidative stress, insulin resistance and vascular endothelial dysfunction, eventually leading to liver and kidney injury. The protective effect of mulberry on liver and kidney injury is mainly mediated by the insulin receptor pathway and the TGF-β/Smads signaling pathway, improving insulin resistance and oxidative stress-induced renal fibrosis. However, the three effective components of mulberry have different effects on this model. This suggests that the multi-effect and multi-target action mode of the mulberry components is different from that of traditional hypoglycemic agents, which provides a scientific basis for further revealing the mechanism and characteristic advantages of mulberry leaves and for developing new therapeutic agents. The listed authors contributed to this work as follows: S.S. and D.J. conceived and designed the experiments; Z.L. performed the experiment and wrote the paper; Z.Y., G.J. and G.S. analyzed the data; O.Z. revised the article for important intellectual content and contributed materials; S.S. and D.J.
acquired funding for the research. This work was supported by the National Natural Science Foundation of China; the Construction Project for Jiangsu Key Laboratory for High Technology Research of TCM Formulae; the Priority Academic Program Development of Jiangsu Higher Education Institutions; the 2013 Program for New Century Excellent Talents by the Ministry of Education; and the 333 High-level Talents Training Project funded by Jiangsu Province. No potential conflict of interest was reported by the authors. | Mulberry leaf is one of the commonly used traditional Chinese medicines and has been shown to exert hypoglycemic effects against diabetes. The aim of this study is to investigate the effects and mechanisms of mulberry leaf flavonoids (MF), polysaccharides (MP) and alkaloids (MA) on diabetes and its associated liver and kidney injury. db/db mice were adopted, and the results showed that the FBG (fasting blood glucose) of the model group continued to increase, with associated liver and kidney injury. After the intervention, MP and MA exhibited the most obvious hypoglycemic effect on FBG. MF and MP had an obvious improving effect on kidney injury, reducing the content of mALB/Cre (microalbumin/creatinine) in urine and improving the edema of tubular epithelial cells and the renal cystic epithelial thickening, while MF and MA had a significant effect on liver damage, manifested in reduced levels of ALT (alanine aminotransferase) and AST (aspartate aminotransferase) and reduced pathological changes of the liver in db/db mice. Through metabolomics analysis, 13 endogenous potential biomarkers were identified in serum. The three effective components of mulberry can regulate the 13 potential biomarkers and the corresponding metabolic pathways. Collectively, the components of mulberry leaf have a clear hypoglycemic effect and a protective effect against liver and kidney injury, and these effects are related to the insulin receptor and TGF-β/Smads signaling pathways. |
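As a brief illustration of two calculations stated in the Methods of this paper, the sketch below implements the organ index and HOMA-IR formulas quoted in the text, plus the total-ion-count normalisation applied to the UPLC-QTOF/MS peak table. It is an assumed helper written for this summary, not code from the study, and the example values are hypothetical.

```python
import numpy as np

def organ_index(organ_weight_g, body_weight_g):
    # Liver or kidney index = organ weight / mouse weight x 100% (formula from the text).
    return organ_weight_g / body_weight_g * 100.0

def homa_ir(fbg, ins):
    # HOMA-IR = FBG x INS / 22.5, with FBG and INS in the units used by the assay kits.
    return fbg * ins / 22.5

def tic_normalise(peak_areas):
    # Normalise each ion intensity with respect to the total ion count of its sample,
    # yielding the (retention time, m/z, normalised peak area) table used for PLS-DA/OPLS-DA.
    peak_areas = np.asarray(peak_areas, dtype=float)
    return peak_areas / peak_areas.sum(axis=1, keepdims=True)

# Hypothetical example values:
print(organ_index(1.25, 40.0))   # liver index in %
print(homa_ir(18.5, 3.2))        # insulin resistance index
```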
31,496 | The multifunctionality of mountain farming: Social constructions and local negotiations behind an apparent consensus | In the last 20 years, there has been growing attention among both scholars and policymakers to the idea that “agriculture is multifunctional, producing not only food, but also sustaining rural landscapes, protecting biodiversity, generating employment and contributing to the viability of rural areas”.The notion of the multifunctionality of agriculture emerged out of the 1994 Uruguay Round Agreement on Agricultural Trade, when agriculture was first integrated into the ongoing reforms of international trade liberalisation.This notion was introduced in the Common Agricultural Policy in 1999 through specific subsidies for agricultural production, but abandoned a few years later, because it was considered as trade-distorting by the World Trade Organisation.Meanwhile, the notion had proliferated within academic communities working on contemporary agricultural and rural changes, such as rural geography, rural sociology and agricultural economics.Like most successful concepts, which are used by a large range of community types, the idea of agricultural multifunctionality is far from being uniformly understood.Marsden and Sonnino distinguish between three paradigms, which lead to different definitions and conceptualisations.In the first paradigm, agricultural multifunctionality is restricted to pluriactivity, i.e. the combination of agricultural and non-agricultural incomes within the farm household.Multifunctionality of agriculture is merely understood as the multifunctionality of farmers.In the second paradigm, based on post-productivism, rural areas are perceived as consumption spaces with cultural and social amenities that have value for growing urban populations.Agriculture is no longer central, with the focus instead on the multiple functions of rural landscapes rather than the multifunctionality of agriculture itself.The third paradigm of agricultural multifunctionality, which Marsden and Sonnino align with the rural development and agro-ecology paradigms, re-emphasizes food production and the central role of agriculture to sustain rural economies and local societies.In this paradigm, farmers are strongly connected with other local actors and activities through the social and environmental functions of their farming activity.Despite their differences, the range of scholars working under these paradigms use the idea of agricultural multifunctionality as a normative concept.They perceive agricultural multifunctionality as something inherently positive, and reflect on the conditions to get closer to this ideal.Some scholars such as McCarthy or Potter and Tilzey prefer to comprehend multifunctionality as an object of study and consider it as a “highly politicized, essentially discursive and deeply contested policy idea”.These studies depicted in particular a critical and political understanding of discourses surrounding agricultural multifunctionality in agri-environmental policies in western Europe.Following these authors, we approach the multifunctionality of agriculture as a socially-constructed idea, or more precisely, the subject of multiple social constructions.People use, interpret and transform the idea of agricultural multifunctionality with different ideas in mind, based on various interests, norms and values, in the context of complex social interactions.However, while previous research analysed these social constructions in the political national and 
international arenas, within the policy struggles and resistances about agricultural liberalisation, we suggest to focus on the local action arenas, within which farmers, other local resource users, and managers interact about land-use management.These local action arenas have been impacted by the international and national policies built around the idea of agricultural multifunctionality, so the concept has permeated local discourses."This is especially true with regard to mountain livestock farming in European uplands, an economically marginal form of agriculture that has been supported by policies over previous decades for its multifunctional character.Indeed, since the 1970s, there has been a consensus among scientists and policymakers regarding the need to maintain extensive livestock farming in mountains to prevent land abandonment and its negative consequences on patrimonial landscapes, tourism economy and biodiversity.However, this consensus sometimes conveys an idealistic view of mountain livestock farming that overlooks that different types of mountain agricultural systems provide different types of economic, social, and/or environmental functions, and that not all functions are necessarily compatible with each other.For example, a farmer who is involved in agritourism will have less time for his farming activity and might be forced to abandon some grasslands.There are trade-offs at both farm and landscape levels, and therefore choices that are made, either explicitly or implicitly.This study seeks to understand the social mechanisms underlying these choices.Drawing on a case study from the National Park of the Pyrénées, in the south west of France, where there is a local political consensus around the need to maintain multifunctional livestock farming, this paper aims to analyse the way that the idea of agricultural multifunctionality has been appropriated, re-constructed and negotiated in the local arenas dedicated to land-use and natural resource management.More precisely, it aims to understand how the consensus around the multifunctionality of livestock farming was locally built and negotiated, and what lies behind this consensus.To do so, we analyse the diversity of discourses around the roles of livestock farming held by local stakeholders, and unpack the way that these different discourses interact with each other in local action arenas.Through these discourses, we look at the representations of the different functions of agriculture and rural landscapes, as well as the diverse meanings of multifunctionality.This is a way to address a research gap mentioned by several authors, which is the need to go beyond abstract considerations of multifunctionality to produce data on concrete preferences and representations regarding rural areas and agriculture."Going one step further, we analyse not only the diversity of people's representations, but also the way that people holding these different representations interact with each other in the local fabric of rural landscapes, and how they negotiate trade-offs between the multiple functions of agriculture and rural areas.Our research draws on a political ecology framing, which looks at how competing discourses and power relations affect land-use and natural resource management, generating winners and losers.Discourses are defined as “webs of meanings, ideas, interactions and practices that are expressed or represented in texts, within institutional and everyday settings”.These discourses are shaped by social relationships, power 
relations and institutions, and vice versa, they have impacts on social relationships.Understanding this nexus of discourses and social relationships is central to analyse the social constructions and negotiations surrounding natural resource management.In this study, we combined this political ecology approach with the concept of ecosystem services as an analytical tool to identify the discrete functions of agriculture and rural areas mentioned in discourses.ES are broadly defined as the benefits people obtain from ecosystems, and three main types of ES are commonly distinguished: provisioning services, such as food and timber, regulating services, including water quality regulation or pollination, and cultural services such as an aesthetically pleasant landscape or recreational activities.In early stages of ES mainstream research, agriculture was mainly considered as an external driver that degraded ecosystem and diminished ES provision.Since then, however, most branches of ES research have recognised the active role of farmers in the co-production of ES.Ecosystem dis-services are also increasingly accounted for.They are defined as “the ecosystem generated functions, processes and attributes that result in perceived or actual negative impacts on human wellbeing”.Throughout the remainder of the paper, mention of the ES concept will implicitly also include ecosystem dis-services.In this study, we could have adopted a conceptual framework based on the concept of agricultural multifunctionality to identify the different functions of agriculture under debate.The reasons why we have instead adopted an ES lens are two-fold.Firstly, the ES concept induces a shift and displaces agriculture from its central position.Within the ES framework, the goods and services are produced by the ecosystems, which can be transformed by agricultural activities.Within the paradigms of agricultural multifunctionality, even in the post-productivist paradigm that broadly looks at the functions of rural landscapes, the focus is on the benefits that are directly or indirectly derived from agricultural activities.People do, however, value rural landscapes for multiple reasons, and some of these reasons might not be related to agriculture.Some people might even prefer rural landscapes without agriculture, because they value wilderness and biodiversity.These values are a blind spot of studies based on the concept of agricultural multifunctionality.The second reason why we adopted an ES lens is to reveal the people behind the functions of agriculture, both the ones who benefit from these functions and the ones who provide them.Research on agricultural multifunctionality tends to focus on the functions of agriculture, and not on the beneficiaries of these functions.The joint products of agricultural activity are considered as public goods that widely benefit society as a whole.Conversely, the identification of ES beneficiaries is central in ES research.As Huang et al. 
note, a service is not considered a service unless a beneficiary has been identified.On the provision side, farmers are seen as providers or co-providers of the joint products of agriculture in both agricultural multifunctionality and ES research.However, ES frameworks potentially embrace a larger range of providers, by looking at all the people who shape ecosystems, including for example, foresters or hunters.The ES concept has generated multiple controversies, in particular the ES economic valuation approaches that convey a Western and utilitarian perception of nature, which could lead to its commodification.However, the last decade has also seen the rise of numerous alternative approaches to ES that emphasize its heuristic dimension to foster sustainable development and biodiversity protection, and that aim to include a wide diversity of values beyond the sole utilitarian and economic values.Aligned with these critical but constructive approaches to ES, we adopted in this paper the conceptual framework developed by Barnaud et al., which is based on a relational and constructivist approach to the ES concept.As a constructivist approach, the adopted framework considers that ES do not exist per se, but are subjective perceptions of ecosystems.It aims to understand what ES are important to which people, and why, according to what interests and values.These can be instrumental values, intrinsic values or relational values.The adopted framework is also a relational approach to ES, which focuses on social interactions.The framework uses indeed an ES lens to uncover social interdependencies between people1 that were not previously visible or explicit, in order to reflect on existing or potential conflicts and collaborations.For example, when two ES are antagonist, i.e. 
when the provision of an ES decreases the provision of another, this can create a conflict of interest between the beneficiaries of these ES.This notion of social interdependency is critical for collaborative processes; people who feel that they depend on others to solve a problem or to improve their situation are indeed more inclined to engage in a collaborative process."In the adopted framework, the key social interdependencies of the system are identified and in turn analysed according to four dimensions: the degree of stakeholders' awareness of interdependencies; the formal and informal institutions that regulate these interdependencies; the levels of organisation at which actors operate; and the power relations affecting them.This conceptual framework has been applied to other case studies related to natural resource management in agro-ecosystems, but issues of power relations and contesting discourses were not central in these applications.In this paper, we provide an application that fully explores the conceptual association between political ecology and the ES concept which underlies this framework, with ES identifying the particular ecological categories as objects of investigation, and political ecology supplying the theoretical frame for thinking about power relations and discursive contestation around these categories.The Pyrénées is a range of mountains in southwest France that forms a natural border between France and Spain.Traditional livestock farming in the French Pyrénées is based on pastoralism, defined as an extensive exploitation of seminatural grasslands for fodder harvesting and grazing."The specificity and richness of Pyrénées' landscapes and ecosystems is widely considered as the result of these farming activities, which have built and maintained these typical open landscapes of grasslands.However, since the 1950s, the Pyrénées, like other European mountains, have experienced a rural exodus, a declining number of farmers, and land abandonment, which has promoted massive land-use changes and high rates of spontaneous reforestation in former grasslands.This process of forest encroachment onto open landscapes is, in France, referred to as the fermeture des paysages.In the 1970s, this process began to be perceived very negatively, for reasons such as the loss of aesthetics and the cultural values of open landscapes, or the increase of natural hazards like forest fires, and more generally as the symbol of the decline of the local farming communities.As a result, as early as the 1970s, agriculture in the French mountains was supported through direct government subsidies to farmers, due to a scheme that explicitly supported agriculture in less-favoured areas for their contributions to society, including maintaining a local, social fabric, and preserving the landscapes.Later on, in the mid- 1990s and early 2000s, in the context of the greening of the Common Agricultural Policy, agri-environmental measures started to support extensive grassland practices, in order to preserve the specific biodiversity of grasslands.These various policies have strongly impacted upon the professionals of the farming sector in the Pyrénées, who have widely adopted the ideas and language of agricultural multifunctionality in order to justify the need to support their activity.This provided therefore, an interesting place to study how the idea of agricultural multifunctionality was locally adopted, re-constructed and negotiated.Our case study is the upper part of the Aure Valley, located in the 
central part of the French Pyrénées, with elevations ranging from 728 to 3134 m above sea level.The landscape is dominated by open grasslands, forests and rocks in the upper altitudes.The main economic activity of the valley is tourism, with two ski centers in winter, and multiple outdoor activities available during the summer.Hydroelectricity is also an important source of local incomes.Although its economic importance has declined since the 1950s, extensive livestock farming remains the main land-use activity of the valley.Sheep and cattle are raised extensively, mainly for meat production, using grasslands in the pastures of high altitude in summer, and hay is produced in the lower parts of the valley during the winter.In terms of land ownership, summer pastures at high altitude are owned by municipalities, while the rest of the land is mainly privately-owned, characterised by high land fragmentation and multiple owners.This valley is also characterised by a high density of biodiversity conservation schemes.It is part of the National Park of the Pyrénées, and it includes two Nature Reserves and three Natura 2000 sites under the Habitats Directive, protecting in particular grassland habitats.Among the key species of conservation interest are the Bearded Vulture, the Western Capercaillie, and the Pyrenean Desman.This valley was chosen as a case study because of the concentration of multiple activities with potentially competing objectives, i.e. livestock farming, tourism, forestry and biodiversity conservation, in the same place.This provided an interesting location to study the social interactions and discourses related to the multifunctionality of livestock farming.This paper draws on a series of semi-structured interviews that were conducted in this valley between 2012 and 2015.After a few exploratory interviews conducted in 2012 with regional and local stakeholders, a more structured set of interviews was conducted in 2013 with the stakeholders of two action arenas within the valley, namely a Natura 2000 site and a Nature Reserve.Two other sets of interviews were then conducted in 2015, one exploring in greater depth the previous Natura 2000 site with complementary interviews, and the other focusing on one village, chosen because it included diverse action arenas such as another Nature Reserve, local associations of livestock farmers, and the community council.83 interviews were conducted in total, with 66 individuals.The objective of the sampling was to gather a diversity of perspectives on livestock farming and spontaneous reforestation issues.We therefore interviewed people with diverse professional or recreational activities in relation with the local environment, including livestock farmers, hunters, biodiversity conservation managers, agricultural advisors, elected representatives, forest managers, tourists, tourism professionals, and residents.For each set of interviews, we adopted a snowball sampling method, starting with an initial list of interviewees based on the suggestions of local experts, and subsequently asking at the end of each interview for suggestions and contact details of people with different or important perspectives.Tourists were interviewed when encountered either in local shops and restaurants, or on the walking trails in the mountains.The interviews were all semi-structured interviews, conducted in a conversation mode rather than following a pre-defined list of questions.The ES concept was never explicitly used in the interviews, it was only a conceptual 
lens to analyse the transcripts."All interviews started with broad questions on the interviewee's life history and the description of his/her activities in relation with to the local environment, including a description of his/her organisation where relevant, and his/her specific roles in this organisation.This part was common to all the interviews.It enabled us to see what issues and functions of livestock farming, and which ecosystem services were spontaneously evoked by the interviewee, according to his/her life experience, activities, interests and values.The second part of the interview was different according to the different sets of interviews.In the first two sets, the interviewees were asked whether or not it was important, in their opinion, to maintain livestock farming, and why.Similar questions were also asked regarding the importance of protecting the environment, including questions in which the interviewees were asked to specify which elements of their local environment they were keen to protect.In the other two sets of interviews, the entry point was the observed environmental changes.The interviewees were asked about their general relationship with nature, including what was important for them in the surrounding environment and why, what were the main changes that had affected the local ecosystems in the past 10–20 years, what were the causes and consequences of these changes, and for whom."Finally, in the four sets of interviews, questions were asked about the social interactions in the action arenas under study, including identification of the main stakeholders' roles and positions, narratives of conflicts, negotiations, or collaborations.Other written sources of information were also consulted, such as reports of meetings of working groups and councils, press articles, and information bulletins available online.These documents were critical complements to the interviews for the analysis of historical social interactions.The interviews ranged from 1 to 3 h in duration and were transcribed in full.The qualitative analysis of these interviews aimed at elaborating typologies of representations and discourses, and then analysing how these diverse representations interact with each other within social relationships.The discourses and social interactions were analyzed with an ES lens, as presented in the conceptual framework section.In the fourth set of interviews, we undertook a systematic coding of the ecosystem services and dis-services that were evoked by the interviewees, either spontaneously or in response to questions regarding their local environment."The questions related to the causes and consequences of environmental changes were used to gain access to the interviewees' representations of ES providers and beneficiaries.The human causes of environmental changes indicate the people whose direct or indirect actions have an impact on ecosystems, i.e. 
the providers of ES, and the consequences of these changes indicate the people who are affected by these changes, who are often the ES beneficiaries. Finally, we identified and analysed in more depth some of the key social interdependencies among the beneficiaries and providers of ES. This analysis provided insights into the social construction of agricultural multifunctionality in the local action arenas, as we shall see in the next section. A key initial emerging result from the interviews was the existence of a dominant and apparently consensual narrative around the need to maintain livestock farming to stop forest expansion. This narrative was encountered in almost all the interviews; only a few first-time visitors did not mention it. This narrative comprises four main elements. First, there is the statement that the forest is expanding at the expense of former meadows and grasslands, especially at mid-altitudes and around the villages. Second, there is a negative judgment on this process, which is considered a problem: “If you could see a picture of here 60 years ago and a picture today, you could see that there is no more meadow. The forest has taken over. If it is not forest, it is shrub or moorland, but anyway it is nothing good.” The third consensual element of this narrative refers to the cause of this process, which is the decline of livestock farming: “In the mountains, when you have no more livestock, you can see the trees growing, and the shrubs that take over on the grasslands”. Finally, this dominant narrative leads to the conclusion that it is important to maintain livestock farming in order to maintain open landscapes. The cultural and aesthetic value of open landscapes was the most frequently evoked ES, mentioned by a majority of interviewees. However, our analysis shows that behind this dominant narrative and apparent consensus around the need to maintain livestock farming and open landscapes, there are actually different types of representations and discourses. People do not want to maintain livestock farming for the same reasons, and they have different sets of priorities. We have identified five main types of discourses on spontaneous reforestation and livestock farming. This typology aims to highlight poles or extremes, which are sometimes caricatured. Individuals are often situated at a certain distance from, and sometimes at the meeting point of, these poles. Moreover, even in a single interview, an individual may oscillate between different viewpoints, and adopt different views depending on which ‘hat’ he/she is wearing at a given moment. Some interviewees also experienced internal contradictions. For example, a farm advisor may support sheep farming in a professional environment when facing a conflict between those who defend the presence of wolves and sheep farmers, and yet may recognise, for personal reasons, the importance of wolves in the preservation of cultural and natural heritage. In the same vein, when we indicate that a given type of discourse is predominantly held by a category of stakeholder, for example livestock farmers, it does not mean that all the interviewed livestock farmers adopted such a discourse. There are actually diverse types of livestock farmers who adopt diverse types of discourses. The Type A discourse emphasizes the productive functions of livestock farming, which should be supported as an economic activity adapted to the hard mountain conditions, which aims to produce food, and which should allow farmers to earn their living through adequate
agricultural prices.In terms of ES, open ecosystems are also seen as productive.Rangelands and meadows provide grass and hay that are essential to feeding livestock.The expansion of forests is therefore seen as a loss of forage: “We lose grass every year”.This type of discourse was predominantly held by agricultural technical advisors and livestock farmers, who often expressed that they were not at ease with the evolution of agricultural policies and the subsidies that are increasingly justified by the social and ecological functions of livestock farming.The Type B discourse emphasizes the need to maintain livestock farming for the benefits it provides to local communities, in terms of the local tourism economy and regulation of natural hazards.The cultural and aesthetic dimensions of open landscapes of grasslands are considered a key element of the attractiveness of the valley for tourism, the primary local economic activity and a source of employment.In terms of natural hazards, the role of grasslands and meadows as traditional firebreaks is stressed, as well as the role of grazing for the prevention of avalanches, also considered economically important since there are two ski centers in the valley.This type of discourse was predominantly held by locally-elected representatives and tourism professionals, but also by some of the livestock farmers.For the latter, this was a way to justify the need to maintain their activity, and to gain support in local decision-making arenas.This Type B discourse reflects an instrumental vision of livestock farming.The idea is not to maintain livestock farming for itself, but for the benefits it brings to the people who live in the valley.As a result, it is not necessarily the livestock farmers that are needed, but the presence of livestock in the mountains for their grazing action, as illustrated in this quote: “We, tomorrow, cannot have brush cutters that replace livestock”, said an elected representative.This is a major difference with the type A discourse that focuses on the livelihoods of local farmers.The Type C discourse suggests the need to maintain livestock farming and open landscapes for cultural heritage."While in the previous discourse, this cultural heritage is a way to maintain the valley's attractiveness for tourism, in this discourse, the cultural heritage is an end in itself.The two discourses highlight the same ES, i.e. the cultural and aesthetic dimension of open landscapes, but the previous discourse demonstrates instrumental values, whilst this discourse emphasizes relational values, i.e. 
a personal attachment to this type of landscape.The interviewed livestock farmers also expressed their sense of responsibility to maintaining this landscape.The expansion of shrubs and forests is negatively perceived as the symbolic representation of their failure to maintain the pastures that they inherited from their predecessors.Some interviewees predominantly held this type of discourse, for example some residents or tourism professionals who were passionate about the local, cultural identity, but this type C discourse was more often combined with other types of discourses, and appeared as consensual.For example, many livestock farmers were attached to the cultural heritage associated with their activity, and yet thought that livestock farming should be seen primarily as a productive activity.The Type D discourse emphasizes that extensive livestock farming should be supported in order to maintain the biodiversity of open ecosystems, since shrub and forest expansion leads to a loss of fauna and flora that are specific to open grasslands.What is sought is not necessarily a landscape with only open grassland ecosystems, but a balance between open, shrub, and forest ecosystems.Like the Type B, this discourse corresponds to an instrumental vision of livestock farming.It was predominantly held by conservationists and hunters, which is interesting since these two types of actors usually hold very conflicting positions.There were also some livestock farmers who adopted it.These were farmers who are shifting from a productive vision of their activity to be more in alignment with the expectations of society, and to be at ease with the subsidies that they obtain from agri-environmental policies: “We know that we are not competitive, we know it.Our role is more environmental than productive”.The Type E discourse radically differs from the first four, because it does not consider that livestock farming should necessarily be maintained.This discourse questions the relevance of spending public money to maintain an activity that is no longer economically viable.It considers that if the livestock farming continued to decline, the shrubs and forest that were originally occupying the mountain would return, and this would lead to ecosystems and biodiversity that would be as interesting as – if not more interesting than-those currently in existence in the valley.In terms of ES, this discourse values the existence of woodland biodiversity.It is critical towards the previous discourse, questioning, in particular the argument of maintaining livestock farming for the sake of biodiversity.This type of discourse is a taboo in the local action arenas.The very few interviewees who expressed these ideas would not openly claim this type of opinion in a local council meeting, for instance.These interviewees were mainly some conservationists, foresters and second-home residents, who were less involved in local decision-making arenas than most of the other interviewees.However, as mentioned, many individuals oscillated between different types of discourses.The interviewees who held the Type E discourse often expressed that they had actually mixed feelings and representations.A conservationist, for example, explained that as a conservationist, he questioned the necessity to maintain livestock farming for biodiversity reasons, but as an individual going hiking during the weekend, he enjoyed the presence of livestock in the mountains."This illustrates the limits of thinking only in terms of stakeholders' 
categories.Regarding these categories, our study also confirms that people belonging to the same broad category can hold very different perspectives.This is particular noticeable for livestock farmers and conservationists.Overall, this discourse analysis shows that behind the apparent consensus around the need to maintain livestock farming and open landscapes, there are actually different people who want to maintain livestock farming for different reasons, emphasizing different functions of livestock farming and the different ES of open landscapes.It also shows us that the dominant narrative surrounding the need to maintain livestock farming is hiding a taboo in this valley: an unvoiced perspective, which believes that the decline of livestock farming could be an opportunity for recovering woodland biodiversity.In this section we will consider how these different discourses interact with each other in the local action arenas dedicated to the concerted management of ecosystems, and how these discourses shape and are shaped by the social interactions, institutions and power relations.We study in particular the social interactions related to the emergence of the apparent consensus, and the tensions that persist behind this consensus.To do so, we rely on the analysis of the social interdependencies that are derived from the functioning and dynamics of ES in the studied action arenas, in particular around a Natura 2000 site in the valley.The local apparent consensus on the multifunctionality of livestock farming locally emerged a decade ago in the negotiations regarding the Natura 2000 site.When the Natura 2000 site was first announced by the state authorities and the regional elected representatives, it was strongly and unanimously rejected by local people."It was considered a top-down process that did not acknowledge the local community's ability to sustainably manage local natural resources, and as an infringement of liberty, with a fear of restrictions on human activities in the perimeter of the Natura 2000 site.Later, as in all Natura 2000 sites in France, local elected representatives were invited by the state authorities to preside over the council of the Natura 2000 site.Some locally-elected individuals explained in the interviews that they accepted in order to defend the rights of local people to maintain all human activities in the valley, especially tourism, as the main local economic pillar.A concerted process began, with several working groups and council meetings, involving a range of local stakeholders, such as hunters, livestock farmers, tourism professionals, pastoral advisors, conservationists, etc.During these meetings, the conservationists in charge of the management of the Natura 2000 site said that their intention was not to restrict human activities, and that on the contrary, they needed livestock farming in order to maintain the grassland habitats, which were of community importance.They drew on a diagnosis conducted by pastoral advisors which had shown that “the environmental issues were 95% the same as the livestock farming issues”.Locally-elected representatives who considered the maintenance of open landscapes as an essential element of tourism saw it as an opportunity for the local communities, and all the involved stakeholders agreed a consensus regarding the need to maintain open landscapes through livestock farming.While diverse types of habitats of this Natura 2000 site are designated for protection under the Habitats Directive, the efforts of the team managing 
the site were targeted towards the grassland habitats rather than the forest ones. Within a few years, several actions were implemented under the consensual goal of maintaining open landscapes, most of them funded by the second pillar of the Common Agricultural Policy, i.e. mechanical clearing actions, modernization and consolidation of pastoral equipment, and agri-environmental contracts paying the salary of shepherds during the summer. In terms of social interdependency, this consensus corresponds to a synergy of interests between the multiple beneficiaries of the bundle of ES provided by open grassland ecosystems. They are not all interested in the same ES, but since these ES are in synergy, this led to a convergence of interests that in turn led to a consensus. This consensus can also be seen as a consensus between the first four types of discourses that we have identified, which all wanted to maintain livestock farming, but for different reasons. Finally, it is also a synergy between the interests of local stakeholders and the interests of society as a whole, for which the biodiversity of open landscapes is preserved. This justifies the investment of public money to implement actions to support local livestock farmers. We can see how the notion of agricultural multifunctionality was locally socially constructed and negotiated, and how the local discourses and the social and institutional interplays mutually shaped each other in this process. It is clear that the emergence of a dominant discourse based on the idea of multifunctional agriculture has impacted the local action arenas and institutions, and has enabled the concrete implementation of actions that successfully supported livestock farming. Vice versa, the social interactions and institutions have shaped the discourses, which increasingly integrated the ideas of agricultural multifunctionality. This has led to changes in local people's discourses and, to a certain extent, in their representations. Several farmers said that the Natura 2000 process increased their awareness of the positive impacts of their practices on the environment. Some of them not only accepted their environmental role, but also claimed it with a sense of pride: “I am a farmer doing environmental excellence”. In response to the rise of agri-environmental policies, the pastoral advisors who traditionally support the productive dimensions of livestock farming also integrated the environmental functions in both their discourses and actions. “We do not have purely environmental missions, that's clear. It's more the environmental issues that arise,” said a technical pastoral advisor. Overall, through the consensus-building processes and in the context of the agri-environmental policies, the idea of agricultural multifunctionality has been internalized by local stakeholders and appropriated in different ways, serving their various interests. The implementation of concrete actions undertaken to maintain open landscapes required collaboration between ES providers and intermediaries, including: the groups of farmers exploiting common pastures and receiving collective agri-environmental subsidies, the shepherds hired by these groups to look after the livestock, the managers of the Natura 2000 sites, the pastoral advisors providing technical advice and facilitating group discussions, and the elected representatives of the local collectives owning the common land and funding infrastructure. Farmers and shepherds are considered direct ES providers, whilst advisors, Natura
2000 managers, and elected representatives are intermediary stakeholders that indirectly contribute to ES provision.While these stakeholders all had a common goal, tensions between them arose in the concrete implementation of actions.These tensions crystallized around the hosting of external livestock on the common pastures.The local elected representatives considered that the local farmers did not take good care of the mountain, because they had allowed the bush and woodland to take over the grassland, which threatened the attractiveness of the valley for tourism.Together with the Natura 2000 site managers, they wanted to increase the number of external livestock, namely the livestock owned by non-local farmers living outside the valley who pay an access to the summer pastures.Local farmers and shepherds were not in favour of this option, fearing for a lack of grass in the good pastures and contamination of their livestock by infectious disease.Once more, a study conducted by the pastoral advisors helped ease the tensions."This study suggested that local infrastructure was not adapted to local farmers' needs.The municipality decided to support them and built several new huts and paths.The local farmers benefited from the new infrastructure, but in exchange they had to accept an increase in the number of external livestock.“They build us four huts within two years … We cannot say that they did not listen to us.They do everything right, but after in return they ask me to increase the livestock in the mountain so that the huts are not made for nothing”.This is an interesting illustration of the tensions that lie behind the apparent consensus on the multifunctionality of livestock farming.These tensions arise from the fact that these people do not want to maintain livestock farming for the same reasons.On the one side, the Type B and D discourses have an instrumental vision of livestock farming.They primarily aim at maintaining open landscapes for tourism economy and biodiversity.For that purpose, they need livestock grazing, but not necessarily local livestock farmers – even though this is a rather extreme statement that is rarely voiced this way.On the other side, the Type A and C discourses want to maintain local livestock farmers and their livelihoods.This is also an illustration of the power imbalance between local elected representatives and local farmers.The power held by the former has two main sources.Firstly, the local authorities have the capacity to invest in infrastructure.Secondly, they own the land of the common pastures.Even if the exploitation of the land is theoretically within the responsibility of the group of farmers, as a farmer said, “the boss, in the end, is the one who owns the land!,.One of the sources of power held by the livestock farmers is that they are the ones who maintain the landscape.They know they cannot be replaced by machines.There are therefore both strategic and symbolic dimensions in the tensions around the hosting of external livestock.If the local farmers can be replaced by external livestock, they lose a source of power in the negotiations with local authorities.Another illustration of the existence of tensions behind the apparent consensus is the conflict between farmers and elected representatives regarding the increasing number of building permits that are granted by the elected representatives on former arable land.Indeed, in the local dominant consensus, there is a strong narrative around the synergy between livestock farming and tourism: 
farming shapes attractive landscapes for tourism, and vice versa, tourism provides additional income for pluri-active farmers. However, behind this claimed synergy lies a strong antagonism. The hay meadows in the bottom of the valley have been increasingly replaced by tourist residences. This is a major problem for the economic viability of local farmers, who suffer from hay scarcity and must buy or grow their hay outside the valley. This also reduces the time and energy that local farmers can spend maintaining the landscape in their own valley, which can be seen as a counter-effect of the policy of the local authorities. This process highlights the ambiguity of the discourses shared by the locally elected representatives. Livestock farming is central to their narrative, but their priority is the economic dynamism of the valley: “In each meeting they say ‘we need farmers!’ But they don't really listen to us”. We see again here that behind the consensus to maintain livestock farming, there is a tension between people who want to maintain livestock farming for different reasons, with a divide between those who want to maintain the livelihood of the livestock farmers and those with an instrumental vision of livestock farming. The case of the building permits highlights that the latter viewpoint could potentially lead to actions undermining the viability of local farms. The consensus described in section 4.2.1 relied on a perceived synergy between local and public interests, the latter being the conservation of the biodiversity of open habitats of community importance. This synergy was, however, questioned by some conservationists: “I have heard feedback from conservationists who said ‘ah, Natura 2000, they actually used it to support livestock farming’”. The interviewed conservationist referred here to other conservationists who criticised the focus of Natura 2000 on the biodiversity of open ecosystems and suggested that the decline of livestock farming could be an opportunity to recover the biodiversity of forest ecosystems. In terms of ES, there is an antagonism between the ES provided by open and forest ecosystems, leading to a conflict of interest between their respective beneficiaries. Since society as a whole is considered to be the main beneficiary of the existence value of biodiversity, this raises the question of which type of biodiversity should be prioritised in the public interest. However, this Type E discourse is a taboo in the local decision-making arenas. It was indeed neither voiced nor heard in the working groups and steering committees of the Natura 2000 site. And yet, the process aimed to be collaborative and to include a diversity of perspectives. One of the interviewed conservationists who played a key role in the studied Natura 2000 site considered that these people should have talked when they were given the chance: “We gave everyone the opportunity to speak, they had to take it; in the working groups, in the steering committees, everyone was represented, and the people who said nothing, they cannot say that we did not listen to them. They had to talk”. There are several possible interpretations of this phenomenon. Firstly, the Type E discourse is in opposition to the views of the majority of local people and can be interpreted as giving up on the local efforts to maintain a social fabric and economic vitality in the valley. It obviously requires strength and motivation to voice such a controversial view. Secondly, power relations likely also played a
role. The conservationists and managers of the Natura 2000 site had not actually had much choice: faced with the initial strong rejection of the scheme and the coalition of local actors in favour of open landscapes, they had no regulatory power at their disposal to impose measures that would have put the focus on the value of shrub and forest ecosystems. Thirdly, some conservationists expressed difficulties related to the lack of clear conservation objectives and biodiversity indicators: “We are not even able to do quantification. What do we do? Preserve species? Ecosystems? On which indicators do we base our decisions?” In this unclear context, it was also a pragmatic choice to focus on objectives that were aligned with the interests of local actors. However, some conservationists also explicitly acknowledged that their position in favour of grassland habitats corresponded to a personal commitment and not necessarily to a choice based on the objective superiority of grasslands and moorlands over forest ecosystems. “I understand that some people might like a wild mountain, without animals, the forest. I defend the grass! I do not say that it is an absolute value. That's more of a personal commitment”. Finally, the emergence of this local taboo around the idea that farmland abandonment could be worthwhile in terms of biodiversity might also be related to the context of the conflict between naturalists and livestock farmers regarding the reintroduction of Eurasian brown bears in the Pyrénées. This conflict has indeed probably reinforced the taboo, since any discourse that appears to be a pro-bear discourse is strongly rejected locally. On the other hand, the existence of a taboo doesn't help solve the conflict. Because it is a taboo, it is little discussed in local decision-making arenas. It is considered by many local people as a fight between “us and them”, where ‘us’ are the local people and ‘them’ are the urban conservationists and the national and European environmental policies. Overall, our analysis suggests that the social construction of a consensus around the multifunctionality of livestock farming in the local decision-making arenas went hand in hand with the social construction of a local taboo around the idea that farmland abandonment might be an opportunity in terms of biodiversity. We called it the taboo of a rewilding scenario. We made the choice in this study to use an ES lens to decipher the social construction mechanisms related to the multifunctionality of agriculture. Incorporating the two concepts of ES and MFA, this study allows us to discuss their respective strengths, limits, and implications. This discussion section is organised as follows. Firstly, we return to the strengths and limits of the chosen conceptual framework, and we discuss the respective heuristic values of ES and MFA as conceptual lenses to study interactions between agriculture, environment and society. Secondly, we discuss the normative dimensions and policy implications of these concepts for the future of mountain farming and rural areas. The two concepts of ES and MFA emerged and gained importance in parallel, but are rooted in different scientific disciplines. The concept of MFA is anchored in the rural social sciences, while the ES notion was born from an alliance between ecology and economics. While these two concepts and their diverse branches share many features, they rely on different assumptions, and they can lead to different understandings of agricultural and rural changes. As a conceptual lens, and
compared to the MFA, the ES concept displaces agriculture from its central position.It allows therefore a more thorough analysis of the discourses of people who emphasize the negative impacts of agriculture, or who envision scenarios of a rural space with less or without agricultural activity.Moreover, compared to the MFA concept, the ES concept naturally leads one to think in terms of stakeholders, with the identification of ES providers and beneficiaries.Because it gives the same attention to ES related to agriculture as to other sectors, such as tourism or biodiversity conservation, the ES concept also enables the integration of a larger range of sectors and stakeholders in an area, and see how they interact.This added-value of the ES concept was particularly remarkable in the conceptual framework we adopted, since it focused on social interdependencies related to ES dynamics.It might therefore not be true for other ES approaches found in the literature.However, the ES concept also presents several conceptual weaknesses in order to analyze representations of relationships between agriculture, society and environment."Firstly, in our interviews, the ES concept was less closely aligned to local people's representations than the MFA concept. "A study of Lamarque et al. using the ES concept to study local stakeholders' representations in a mountainous agro-pastoral system led to the same observation.During our interviews, people spontaneously listed the multiple functions of livestock farming.Conversely, except for a few interviewees who had an instrumental vision of nature that fitted well with the ES concept, it was quite difficult for most of the interviewees to spontaneously think about the range of benefits they get from nature."This kind of information was obtained indirectly by analyzing the interviewees' discourses.Moreover, the MFA concept corresponds to a more integrated vision of human-nature interactions."The ES concept sometimes leads to artificially separate social and ecological elements that are inherently interwoven with each other in people's representations.This is especially true in the case of extensive livestock farming that is often described as a symbiotic relationship between the grassland, the animals, and the shepherd.Thinking in terms of ES requires addressing conceptual questions like: should we consider the animal as part of an animal-grassland ecosystem, or as part of a symbiotic shepherd-animal team that shapes and uses a grassland ecosystem?, "While such questions are important to address, they illustrate the fact that the ES concept doesn't fit easily with most local people's representations.The fact that people think more spontaneously in terms of MFA than in terms of ES should also be attributed to the long history of agricultural policies based on MFA.As we have seen in this study, people have appropriated and internalized this idea.The same might happen with the ES concept in the future.In the previous section, we have discussed the interests and limits of ES and MFA as conceptual lenses to study discourses on rural changes.We shall now discuss their normative implications in terms of policy and management options regarding the future of mountain farming.In particular, we will critically discuss the implications of different ways to comprehend ES and MFA, since both concepts are open to multiple interpretations.To do so, we draw a parallel between our empirical results on the competing visions of mountain farming, and the debates in the literature about 
the normative implications of different ES and MFA paradigms.Firstly, there is the local opposition between the visions that wish to maintain agriculture at the centre of rural development and the more instrumental visions of agriculture.We have seen that the local consensus on the need to maintain livestock farming has been accompanied by a dominance of narratives justifying livestock farming in instrumental terms, paradoxically leading to at times an ineffective support of livestock farming.Although some actors do want to maintain livestock farming for its intrinsic values, our discourse analysis has shown that the key actors with a strong influence in the negotiations were mostly interested in the ES provided by the landscapes shaped by grazing.As a result, although livestock farming is central to the discourses of the powerful elected representatives, supporting local livestock farmers is not always a priority in their concrete actions when there are trade-offs and choices to make, especially in the case of trade-offs with tourism economy.These local processes of appropriation, negotiation, and the social construction of the MFA concept echoes debates in scientific and political arenas between competing paradigms of MFA and ES.As noted in the introduction of this paper, in the MFA paradigm based on rural development and agro-ecology, agriculture remains at the centre of any project on land-use management.In terms of policy, in this paradigm, the MFA becomes a tool for place-based rural development, where the social, ecological, cultural and economic functions of agriculture are integrated in local and regional development.Tilzey adds a distinction here between counter-hegemonic and income support paradigms.The first one, which was not observed in our case study, corresponds to the strongest definitions of MFA, i.e. 
a radical path to counter the hegemony of capitalist forms of agriculture, echoing the most political definitions of agroecology and food sovereignty, which exclude support from state policies.In the income support paradigm, which clearly resonates with the discourses observed in our case study, MFA is conceived as a way to financially support farms which lack competitive capacity in international markets, due to their small size and/or topographic constraints.In both paradigms, the goal is to support farmers and to maintain agriculture at the centre of rural development."In contrast, post-productivism emphasises the services provided by the rural landscapes for society, and agriculture is not necessarily a central part of this 'rural'.This paradigm corresponds to what Tilzey labelled ‘embedded’ neoliberalism, which recognizes that certain forms of farming generate positive externalities, but which considers that farming is only contingently rather than necessarily required for the delivery of these beneficial environmental outcomes.It corresponds to the instrumental visions of livestock farming that we observed in our case study.In terms of concepts, this paradigm prompts a shift from the idea of multifunctionality of agriculture to the idea of multifunctionality of agro-ecosystems, rural landscapes or countryside.In terms of policy, it is associated with agri-environmental subsidies, and is congruent with ES approaches based on payments for ES which reward farmers for their efforts in providing ES such as water quality, soil conservation, or biodiversity preservation.There have been many debates and fears raised regarding the idea that the ES concept might lead to a commodification of nature.In the case of agri-environmental policies, it seems that the ES concept is mainly facilitating a shift towards the development of result-based approaches, where farmers are not paid to compensate the costs of changing their farming practices, but for the effective provision of environmental benefits.Simoncini suggests that a “shift of focus, from the multifunctional character of agriculture to that of agro-ecosystem, could overcome the difficulties in assessing the environmental benefits of the majority of the current agri-environmental programs”.Although these schemes are used to support farmers financially, this corresponds to an instrumental vision of agriculture, with potentially negative consequences for farmers as illustrated by our case study.In this vision, agriculture loses its centrality.Some PES in Central America for example give payments to farmers in exchange for a conversion of their farm land into forestry plantations, for carbon sequestration and biodiversity conservation purposes.This can be seen as an extreme version of post-productivism, a third paradigm in which the focus is on the conservation and/or recreational functions of rural areas, for the benefit of the wider society, especially the people living in the cities.This echoes what Tilzey named the “radical neoliberal” discourse, which denies the “exceptionalism” of agriculture.This third paradigm echoes the last type of discourse we have encountered in our study, which is the taboo idea that farmland abandonment might be worthwhile in terms of biodiversity.As described, some conservationists considered that this idea was not taken into account in the implementation of the Natura 2000 scheme, which has in the end served the interests of agriculture more than biodiversity.Whilst such an idea was still a taboo in local 
decision-making arenas for land-use management in the studied valley when we conducted interviews, it echoes recent academic controversies among ecological scientists on the effects of farmland abandonment on biodiversity.Two main schools of thought are traditionally distinguished in landscape ecology: the American one and the European one.The first sees farmland abandonment as an opportunity for ecosystems recovery and biodiversity conservation.Forest transition is a common framework under this paradigm that considers that forest expansion on abandoned marginal farmland is a sign of economic development, which enables the recovery of natural ecosystems.Conversely, the European school of landscape ecologists sees land abandonment and spontaneous reforestation as a threat to biodiversity, because of landscape homogenization, and the loss of open-habitat species of conservation value.There are cultural and historical reasons behind this school of thought.In Europe, most rural landscapes have been shaped by humans for thousands of years, leading to a traditional agricultural landscape comprised of a land-use mosaic, and that European landscape ecologists and agri-environmental policies have urged to maintain until now.However, there is an emerging movement among European scholars that highlights the high social cost of these agri-environmental subsidies, and invites policy-makers to consider farmland abandonment in European remote areas as an opportunity for “rewilding” ecosystems.They identify numerous species that could benefit from forest regeneration, as well as benefits in terms of carbon sequestration and recreation.This concept of rewilding is now spreading in Western Europe with increasing numbers of rewilding projects, sometimes associated with reintroductions of species of big predators.It is, however, a very controversial and divisive idea, questioning the very place of humankind in ecosystems."To conclude, we suggest that whether we talk about multifunctionality of agriculture, multifunctionality of landscapes, agro-ecology, ecosystem services, landscape services or nature's contributions to people, any concept is socially-constructed within complex relationships between scientists, politics and society.Whilst they hold different internal meanings and values, they can be appropriated and reconstructed differently in different arenas, serving sometimes other purposes than their initial conceptualisation.Moreover, each concept highlights specific dimensions of reality, and hides others.Nonetheless, some concepts are more integrative than others.For example, Holmes suggests to characterise the multifunctional transition in rural areas as “a shift from the formerly dominant production goals towards a more complex, contested, variable mix of production, consumption and protection goals”.However, none of the existing concepts or frameworks are able to integrate the existing diversity of social representations and values.It is therefore necessary to maintain a multiplicity of concepts.More importantly, there is a need to support dialogue between citizens, scientists and policy makers working with these different concepts.It is not the role of the scientists alone to decide which concepts should prevail, because this entails the future of rural spaces, which is essentially a social and political choice."Our study illustrates that there are people with highly competing views who don't understand each other, because their visions are based on very different values.This highlights the 
necessity of more dialogue and collaboration between these groups, and also among the different academic communities and the diverse policy sectors, to avoid implementing contradictory policies at the local level. Policies should instead be co-designed to ensure that they are as adaptive, negotiable and integrative as possible. | The multifunctionality of agriculture is often understood as a normative political notion aimed at fostering the sustainable development of rural areas. Considering it as a locally, socially-constructed concept, the objective of this paper is to analyse how the idea of agricultural multifunctionality was appropriated, re-constructed and negotiated in local arenas dedicated to land-use management. Conceptually, we adopt a political ecology approach which takes a constructivist and relational view of the concept of ‘ecosystem services’. Drawing on a case study in the French Pyrénées mountains, we analyse the diversity of discourses on the roles of livestock farming held by local stakeholders and unpack the ways that these different discourses interact with each other in the local action arenas. We show that a coalition of interests led to the emergence of a dominant and apparent consensus around the need to support livestock farming to maintain open landscapes. We also show that behind this apparent consensus, there are in fact tensions between people who want to maintain livestock farming for different reasons, with some having more instrumental visions than others. Finally, we demonstrate that the dominant consensus has generated a local taboo, hiding an unvoiced pro-rewilding perspective which considers that farmland abandonment could be an opportunity in terms of biodiversity. Incorporating the two concepts of ecosystem services and agricultural multifunctionality, this study allows us to discuss their respective heuristic values and policy implications.
31,497 | Low Free Testosterone and Prostate Cancer Risk: A Collaborative Analysis of 20 Prospective Studies | Experimental and clinical evidence implicates testosterone in the aetiology of prostate cancer. Nearly all metastatic prostate tumours overexpress the androgen receptor, and androgen deprivation therapy is the mainstay treatment approach for many prostate tumours. Two large randomised controlled trials of 5α-reductase inhibitors showed a reduction in prostate cancer risk. Genome-wide association studies and animal models also support an association between androgens and risk. Despite the strong biological evidence of an association between testosterone concentration and prostate cancer risk, previous epidemiological studies have not found evidence of an association. This may be because the association is nonlinear; variations across the normal range of circulating testosterone may not lead to alterations in prostate growth because the stimulation of prostatic androgen receptors may remain relatively constant, due to relatively constant intraprostatic DHT concentrations and/or saturation of the androgen receptors. However, when the supply of testosterone to the prostate is abnormally low, prostate growth may decrease. Therefore, we hypothesised that men with very low circulating testosterone concentrations may have a reduced risk of prostate cancer but that, above these low concentrations, prostate cancer risk is not associated with further increases in circulating testosterone concentrations. Less than 2% of testosterone circulates unbound to carrier proteins or “free”, and is able to pass out of the blood into the prostate tissue; therefore, the focus of our analysis was on free testosterone. The Endogenous Hormones, Nutritional Biomarkers and Prostate Cancer Collaborative Group is a pooled individual participant dataset of prospective studies and prostate cancer risk. A previous analysis by this group found no association between prediagnostic androgen concentrations and prostate cancer. However, this dataset has since been expanded to include almost double the number of prostate cancer cases and now comprises 20 prospective studies with a total of 6933 cases and 12 088 matched controls with calculated free testosterone data. This large dataset now provides sufficient power to examine whether men with very low concentrations of circulating free testosterone have a reduced risk of prostate cancer. Individual participant data were available from 20 prospective studies by dataset closure on 31 August 2017. Principal investigators were invited to contribute data to the EHNBPCCG if they had published or unpublished data on concentrations of endogenous hormones and/or nutritional biomarkers from blood samples collected prior to the diagnosis of prostate cancer. Studies were identified using literature search methods from computerised bibliographic systems and by discussion with collaborators, as described previously. Data were harmonised in a central database. Studies were eligible for the current individual participant analysis if they had prospective data on prediagnostic circulating concentrations of testosterone and sex hormone-binding globulin, from which an estimate of free testosterone concentration could be calculated. Participating studies are listed in Supplementary Table 1. Further details of data collection and processing are provided in the Supplementary material.
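The derived exposure referred to above is a calculated free testosterone value obtained from total testosterone and SHBG. As a purely illustrative sketch, and not the collaboration's actual code, the snippet below implements a commonly used law-of-mass-action calculation of the Vermeulen type with a fixed albumin concentration; the binding constants, function name, and unit conventions are assumptions of this sketch.

```python
# Hedged illustration of a law-of-mass-action (Vermeulen-type) calculation of
# free testosterone; constants below are commonly cited literature values and
# are assumptions here, not values taken from the collaboration's analysis.
import numpy as np

K_SHBG = 1.0e9    # association constant of SHBG for testosterone (L/mol); assumed
K_ALB = 3.6e4     # association constant of albumin for testosterone (L/mol); assumed
ALB_MW = 69000.0  # approximate molar mass of albumin (g/mol); assumed

def calculated_free_testosterone(total_t_nmol_l, shbg_nmol_l, albumin_g_l=43.0):
    """Calculated free testosterone (nmol/l) from total testosterone and SHBG
    (both nmol/l), assuming a fixed albumin concentration (default 43 g/l)."""
    t = np.asarray(total_t_nmol_l, dtype=float) * 1e-9  # convert to mol/l
    s = np.asarray(shbg_nmol_l, dtype=float) * 1e-9     # convert to mol/l
    n = 1.0 + K_ALB * (albumin_g_l / ALB_MW)             # albumin-binding term
    a = n * K_SHBG
    b = n + K_SHBG * (s - t)
    # positive root of the mass-action quadratic a*FT**2 + b*FT - t = 0
    ft = (-b + np.sqrt(b * b + 4.0 * a * t)) / (2.0 * a)  # mol/l
    return ft * 1e9                                        # back to nmol/l

# e.g. total T = 17.3 nmol/l and SHBG = 35 nmol/l gives roughly 0.35 nmol/l free
print(round(float(calculated_free_testosterone(17.3, 35.0)), 3))
```

With typical adult male values (total testosterone around 17 nmol/l and SHBG around 35 nmol/l) this returns roughly 0.35 nmol/l, i.e. about 2% of the total, consistent with the proportion of unbound testosterone quoted above.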
Principal investigators were asked to provide data on prostate cancer case or noncase status and, if applicable, a matched-set identifier. Data were also supplied on participant and tumour characteristics, circulating concentrations of total testosterone and SHBG, and other biomarkers that may be potential confounders or sources of bias. The majority of the studies were matched case-control studies nested within either prospective cohort studies or randomised trials. Four studies were cohort or case-cohort analyses. To apply a consistent statistical approach across all studies, the cases from the case-cohort studies were matched to up to four participants who were free of prostate cancer at the age at diagnosis of the case, on the basis of our minimal matching criteria. Each study individually obtained ethical approval; therefore, separate ethical approval for this secondary reanalysis of data was not necessary. Details of participant recruitment, study design, and case ascertainment are summarised in Supplementary Table 1 and assay details in Supplementary Table 3. Free testosterone concentrations were calculated from total testosterone and SHBG concentrations using the law of mass action, assuming a constant albumin concentration of 43 g/l. Prostate cancer cases were defined as early stage if they were tumour-node-metastasis (TNM) stage ≤T2 with no reported lymph node involvement or metastases, and advanced stage if they were TNM stage T3 or T4 and/or N1+ and/or M1. Aggressive disease was categorised as “no” for TNM stage ≤T3 with no reported lymph node involvement or metastases, and “yes” for TNM stage T4 and/or N1+ and/or M1 and/or stage IV disease or death from prostate cancer. Prostate cancer was defined as low-intermediate grade if the Gleason score was <8 or equivalent and high grade if the Gleason score was ≥8. More detail can be found in the Supplementary material and previous publications. Conditional logistic regression was used to calculate the odds of prostate cancer diagnosis by hormone concentration. The analyses were conditioned on the matching variables and adjusted for age at blood collection, body mass index, height, usual alcohol consumption, smoking status, marital status, and education status as categorical variables, with an additional category for missing data, except for age. As we were interested a priori in the risk for prostate cancer in men with very low free testosterone concentrations, we categorised free testosterone concentrations into study-specific tenths, with cut points defined by the distribution in control participants, to allow for any systematic differences between the studies in assay methods and blood sample types, using the highest tenth as the reference category. To explore the association with greater power, these tenths were also grouped, with the 8th–10th tenths combined as the reference category. In all further analyses, the 2nd–10th tenths were combined and used as the reference category. Where more than two categories of exposure were compared, variances were used to calculate floating confidence intervals, which facilitate comparisons between any two exposure groups. PSA, IGF-I, and C-peptide concentrations at blood collection were available for subsets of participants. The main analyses of the relationships between low free testosterone and prostate cancer risk were examined in these subsets before and after further adjustment for these variables. Heterogeneity among studies was assessed by comparing the χ2 values for models with and without a study × analyte interaction term.
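To make the categorisation and modelling steps above concrete, the following is a hedged sketch of how study-specific tenths with control-defined cut points, and a conditional logistic regression comparing the lowest tenth with the rest, might be set up; the column names, helper functions, and the use of statsmodels' ConditionalLogit are assumptions of this illustration, not a description of the collaboration's analysis (which was run in Stata).

```python
# Hedged illustration only: column names (study, case, match_set, free_t) and
# helper names are assumptions, not the collaboration's code or variable names.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

def add_study_specific_tenths(df, value_col="free_t", study_col="study", case_col="case"):
    """Assign each man to a tenth of `value_col`, with cut points taken from the
    distribution among controls within his own study (assumes complete data)."""
    out = df.copy()
    out["tenth"] = 0
    for _, grp in out.groupby(study_col):
        controls = grp.loc[grp[case_col] == 0, value_col]
        cuts = np.quantile(controls, np.linspace(0.1, 0.9, 9))  # study-specific decile cut points
        out.loc[grp.index, "tenth"] = np.digitize(grp[value_col], cuts) + 1  # tenths 1..10
    return out

def odds_ratio_lowest_tenth(df, case_col="case", set_col="match_set"):
    """Conditional logistic regression of case status on an indicator for the
    lowest study-specific tenth, conditioning on the matched sets."""
    exog = pd.DataFrame({"lowest_tenth": (df["tenth"] == 1).astype(float)})
    model = ConditionalLogit(df[case_col].values, exog, groups=df[set_col].values)
    result = model.fit()
    # exponentiated coefficient = odds ratio for lowest vs 2nd-10th tenths combined
    return float(np.exp(np.asarray(result.params))[0])
```

In practice the adjustment covariates listed above (age at blood collection, BMI, height, and so on) would be added as further columns of the design matrix, and the exponentiated coefficients read off as odds ratios.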
Tests for heterogeneity for case-defined factors, in which controls in each matched set were assigned to the category of their matched cases, were obtained by fitting separate models for each subgroup and assuming independence of the odds ratios, using a χ2 test, which is analogous to a meta-analysis. Tests for heterogeneity for non–case-defined factors were assessed with χ2 tests of interaction between subgroups and the binary variable. All tests of statistical significance were two sided, and statistical significance was set at the 5% level. All statistical tests were carried out with Stata statistical software, release 14.1. Further details of the statistical analysis can be found in the Supplementary material. A total of 20 studies, comprising 6933 cases and 12 088 controls, were eligible for this analysis. Mean age at blood collection in each study ranged from 33.8 to 76.2 yr, and the year of blood collection ranged from 1959 to 2004. Study participants were predominantly of white ethnic origin. The average time from blood collection to diagnosis was 6.8 yr, the average age at diagnosis was 67.9 yr, and most cases were diagnosed between 1995 and 1999. Prostate cancers were mostly localised and/or low grade. The free testosterone concentration cut points used for each study are shown in Supplementary Table 4. Men in the lowest study-specific tenth of free testosterone were older and had a higher mean BMI and lower PSA at blood collection than men with higher free testosterone concentrations. Fig. 1 shows the associations of free testosterone, total testosterone, and SHBG concentrations with overall prostate cancer risk. Men in the lowest tenth of free testosterone had a lower risk of prostate cancer compared with men in any other tenth of the distribution. We next combined the tenths into a smaller number of categories; here, men in the lowest tenth had a 23% lower risk compared with men in the 8th–10th tenth group. When categories 2nd–10th were combined, the risk estimate remained very similar, with no evidence of heterogeneity between studies. Two studies included organised prostate cancer screening, but there was no evidence of heterogeneity between studies that included organised screening and those that did not. PSA concentration at blood collection was available for 48% of matched sets. In this subset, men with low free testosterone had a reduced risk of prostate cancer; further adjustment for PSA attenuated the association to the null. In men with data for IGF-I and C-peptide, further adjustment for these analytes made no appreciable difference in the associations. There was evidence of heterogeneity by tumour grade; a low concentration of circulating free testosterone was associated with a reduced risk of low-grade prostate cancer, while there was a nonsignificantly increased risk of high-grade prostate cancer. Our results indicate that men in the lowest study-specific tenth of calculated free testosterone concentration have a 23% reduced risk of prostate cancer compared with men with higher concentrations. Above this very low concentration, prostate cancer risk did not change with increasing free testosterone concentration. We also found evidence that this association varied by tumour grade. This is the largest collection of data on hormones and prostate cancer risk available, and is the first large-scale prospective evidence supporting an association between low free testosterone concentrations and prostate cancer risk. The observed association between low free testosterone and lower prostate cancer risk may be due to a direct biological effect. Across the normal range of circulating free
testosterone concentrations, stimulation of prostatic androgen receptors may remain relatively constant, due to stable intraprostatic DHT concentrations and/or saturation of androgen receptors .Therefore, variation across the normal range of circulating free testosterone concentrations may not be associated with a prostate cancer risk.However, when circulating concentrations are very low, reduced androgen receptor signalling may lead to a reduction in prostate cancer risk .An alternative explanation for the main findings may be detection bias.Controls with low free testosterone concentrations had low PSA concentrations at blood collection, and adjustment for PSA concentration in a subset of our dataset attenuated the association of low free testosterone and prostate cancer risk towards the null.However, there was no evidence of heterogeneity in the associations between men diagnosed before and after 1990, before which there was relatively little PSA testing .PSA is partly regulated by the androgen receptor ; therefore, it is difficult to disentangle the relationship between these variables in this observational analysis .While there was no evidence of heterogeneity in the association of free testosterone with prostate cancer risk by tumour stage or aggressiveness, there was evidence of heterogeneity in this association by tumour histological grade; a low free testosterone concentration was associated with a lower risk of low-intermediate–grade prostate cancer, and there was a nonsignificantly increased risk of high-grade disease.Although it is possible that this heterogeneity is a chance finding due to the multiple tests conducted and the relatively small number of high-grade tumours, this pattern has been reported previously in the Health Professionals Follow-up Study , several clinical case studies , and the PCPT and Reduction by Dutasteride of Prostate Cancer Events trials.These two trials investigated the effect of 5α-reductase inhibitors, which can reduce intraprostatic DHT concentration by approximately 80–90%, on prostate cancer risk .Both trials reported a 23–25% reduction in overall prostate cancer .However, the PCPT reported a 27% increase in high-grade tumours , and the REDUCE trial reported a 58% increased risk of high-grade tumours .There are several possible explanations for the observed heterogeneity in the associations by tumour grade.Prostate tumour grade stays stable over several years , suggesting that high-grade tumours develop de novo rather than from the dedifferentiation of low-intermediate–grade tumours.Mechanistically, prostatic androgen-androgen receptor binding is an important modulator of cell differentiation ; thus, prostate cells with reduced androgen exposure may be less differentiated and more likely to develop into high-grade tumours .Alternatively, this may be a differential growth response of early low-grade cancer lesions to a low androgen environment.Another possibility is differential detection bias as discussed in relation to PCPT and REDUCE .Owing to the clinical importance of high-grade tumours, this observed heterogeneity by grade, with a possible higher risk of high-grade tumours, requires further investigation.Our study has a number of limitations.Free testosterone was calculated using the law of mass action , which is based on testosterone and SHBG concentrations and assumes a constant albumin concentration.Although this is a commonly used method of estimating free testosterone concentration, it has not been validated within each individual 
study via equilibrium dialysis .The assay methods used to measure analytes varied, with the majority of studies using nonextraction assays to measure testosterone.While this may introduce some misclassification, this would be expected to be nondifferential and therefore tend to bias any association towards the null.Mass spectrometry is often considered the gold standard method to measure sex hormone concentrations , but high-quality immunoassays are able to measure reliably low adult male testosterone concentrations .Although these assays may not be suitable for determining absolute clinical cut points, they are considered appropriate for the determination of relative concentrations within studies .Our study relied on single measurements of testosterone and SHBG, with an average time from blood collection to diagnosis of 6.8 yr, to represent participants’ hormone concentrations over medium to long term.While several studies show that a single measure of these analytes has moderately good reproducibility over periods of up to 1 yr, it is unknown whether these measures are reliable over the longer term.In summary, the findings from this pooled prospective analysis of 6933 prostate cancer cases and 12 088 controls support the hypothesis that very low concentrations of circulating free testosterone are associated with a reduced risk of prostate cancer.Further research is needed to elucidate whether the association is causal or due to detection bias, and explore the apparent differential association by tumour grade. | Background: Experimental and clinical evidence implicates testosterone in the aetiology of prostate cancer. Variation across the normal range of circulating free testosterone concentrations may not lead to changes in prostate biology, unless circulating concentrations are low. This may also apply to prostate cancer risk, but this has not been investigated in an epidemiological setting. Objective: To examine whether men with low concentrations of circulating free testosterone have a reduced risk of prostate cancer. Design, setting, and participants: Analysis of individual participant data from 20 prospective studies including 6933 prostate cancer cases, diagnosed on average 6.8 yr after blood collection, and 12 088 controls in the Endogenous Hormones, Nutritional Biomarkers and Prostate Cancer Collaborative Group. Outcome measurements and statistical analysis: Odds ratios (ORs) of incident overall prostate cancer and subtypes by stage and grade, using conditional logistic regression, based on study-specific tenths of calculated free testosterone concentration. Results and limitations: Men in the lowest tenth of free testosterone concentration had a lower risk of overall prostate cancer (OR = 0.77, 95% confidence interval [CI] 0.69–0.86; p < 0.001) compared with men with higher concentrations (2nd–10th tenths of the distribution). Heterogeneity was present by tumour grade (phet = 0.01), with a lower risk of low-grade disease (OR = 0.76, 95% CI 0.67–0.88) and a nonsignificantly higher risk of high-grade disease (OR = 1.56, 95% CI 0.95–2.57). There was no evidence of heterogeneity by tumour stage. The observational design is a limitation. Conclusions: Men with low circulating free testosterone may have a lower risk of overall prostate cancer; this may be due to a direct biological effect, or detection bias. Further research is needed to explore the apparent differential association by tumour grade. 
Patient summary: In this study, we looked at circulating testosterone levels and risk of developing prostate cancer, finding that men with low testosterone had a lower risk of prostate cancer. We found that men with low circulating free testosterone had a 23% reduced risk of overall prostate cancer, but there was some evidence that these men had an increased risk of developing high-grade disease. |
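The statistical analysis described in the record above pools subgroup odds ratios and compares them with a χ2 test described as "analogous to a meta-analysis". The short Python sketch below is only an illustration of that kind of comparison, not the study's Stata analysis of individual participant data: it takes the published subgroup odds ratios for low-grade and high-grade disease from the abstract, back-calculates approximate standard errors from the reported 95% confidence intervals, and computes an inverse-variance pooled estimate together with a Cochran's-Q-style χ2 heterogeneity statistic.

import numpy as np
from scipy.stats import chi2

# Illustrative heterogeneity check between subgroup odds ratios, in the spirit of the
# chi-squared comparison described above. The ORs are the low-grade and high-grade values
# quoted in the abstract; the standard errors are back-calculated from the reported 95% CIs
# and are approximate. This is not the study's conditional logistic regression in Stata.
log_or = np.log(np.array([0.76, 1.56]))   # low-grade, high-grade subgroup ORs
se = np.array([0.070, 0.254])             # approx. SEs from CIs 0.67-0.88 and 0.95-2.57

w = 1.0 / se ** 2                          # inverse-variance weights
pooled = np.sum(w * log_or) / np.sum(w)    # pooled log OR across the two subgroups
q = np.sum(w * (log_or - pooled) ** 2)     # Cochran's Q statistic
p_het = chi2.sf(q, df=len(log_or) - 1)     # chi-squared test of heterogeneity

print(round(float(np.exp(pooled)), 2), round(float(q), 2), round(float(p_het), 3))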
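The same record notes that free testosterone was calculated from total testosterone and SHBG with the law of mass action, assuming a constant albumin concentration. The sketch below shows a commonly used (Vermeulen-type) form of that calculation; the association constants and the albumin concentration are typical literature values adopted here as assumptions, and the example inputs are invented, so the function is a sketch of the approach rather than the study's implementation.

import math

# Sketch of a mass-action (Vermeulen-type) calculation of free testosterone from total
# testosterone and SHBG, assuming a constant albumin concentration. The binding constants
# and albumin level below are typical literature values used as assumptions here, not
# values taken from the study above.
K_ALB = 3.6e4     # assumed testosterone-albumin association constant, L/mol
K_SHBG = 1.0e9    # assumed testosterone-SHBG association constant, L/mol
ALBUMIN = 4.3e-4  # assumed constant albumin concentration, mol/L (about 43 g/L)

def free_testosterone(total_t_nmol_l, shbg_nmol_l):
    """Return calculated free testosterone (nmol/L) from total T and SHBG (both in nmol/L)."""
    tt = total_t_nmol_l * 1e-9          # nmol/L to mol/L
    shbg = shbg_nmol_l * 1e-9
    n = 1.0 + K_ALB * ALBUMIN           # effective albumin-binding factor
    # The mass-action balance gives a quadratic in free testosterone (FT):
    #   K_SHBG*n*FT**2 + (n + K_SHBG*(shbg - tt))*FT - tt = 0
    a = K_SHBG * n
    b = n + K_SHBG * (shbg - tt)
    ft = (-b + math.sqrt(b * b + 4.0 * a * tt)) / (2.0 * a)
    return ft * 1e9                     # mol/L back to nmol/L

# Invented example: total testosterone 15 nmol/L, SHBG 40 nmol/L
print(round(free_testosterone(15.0, 40.0), 3))   # roughly 2% of total, a plausible free fraction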
31,498 | Prediction of the crystal structures of axitinib, a polymorphic pharmaceutical molecule | Organic crystals are a key component of the formulated products that are manufactured in many industrial sectors including pharmaceuticals, agrochemicals, foods, paints, and explosives.The efficacy, stability and other end-use properties of such products are largely influenced by the precise structure of the organic crystals because the molecular packing arrangement affects numerous physical properties such as color, mechanical strength, flowability and solubility, to name but a few.In view of the importance of crystal structure, the propensity of many organic molecules to crystallize readily in multiple metastable structures creates significant challenges in many aspects of product development and manufacturing.A well-known example of the problems that can arise as a result of polymorphism is that of ritonavir, an active pharmaceutical ingredient marketed as a HIV drug by Abbott Laboratories from 1996.In 1998, a previously unknown form, Form II, appeared, and it became impossible to revert to the production of Form I. Form II was found to be more stable than Form I, with a significantly lower solubility.The product had to be recalled from the market, leading to an interruption of supply, and was eventually re-developed as a liquid formulation.Further investigation revealed three further forms of ritonavir.The existence of polymorphs also poses intellectual property challenges, as patent protection relates to the form of the product.For example, the crystal structures of cefdinir, a drug molecule with at least five polymorphs, have been the subject of multiple patents and of prolonged legal battles.The magnitude of the risks arising from insufficient knowledge of polymorphism has motivated increasing investment in polymorph screening, the experimental investigation of the so-called polymorphic landscape of organic molecules.This has been complemented by computational crystal structure prediction methodologies aiming to identify possible crystal structures with little or no experimental input.While it has long been clear that achieving this goal would require a very significant research effort, CSP is increasingly used in combination with experimental screening.The blind tests organized by the Cambridge Crystallographic Data Centre since 1999 provide a useful series of snapshots of the state-of-the-art and of the progress made in the field.In each blind test, participants are asked to predict the most stable crystal structure for a handful of molecules, salts or co-crystals of varying complexity.The degree of difficulty of each system depends on the number of molecules it contains, the types and number of atoms, the presence of charged species and the flexibility of the molecules.Of particular note in previous blind tests are two milestones: the consistent success achieved with the GRACE approach by Neumann, Leusen, Kendrick, in the fourth blind test and by Neumann, Leusen, Kendrick, and van de Streek in the fifth blind test, in predicting the polymorphs of small molecules; and the successful prediction in the fifth blind test, by two groups, of the most stable structure of “Molecule XX”-1,3-thiazol-2-yl)phenyl)carbamate), a molecule whose structure, size and flexibility are representative of those of pharmaceutical compounds.As a result of the increasing reliability of CSP, several promising applications to industrially-relevant compounds have been reported in the literature, focusing on the 
identification of known and potential polymorphs.In the area of pharmaceuticals, these have included studies of some of the compounds shown in Fig. 1, namely naproxen, GlaxoSmithKline׳s molecule GSK269984B, Pfizer׳s crizotinib, a melatonin agonist, Eli Lilly׳s olanzapine and Eli Lilly׳s tazofelone.In these different cases, crystal energy landscapes, in which every putative crystal structure is characterized in terms of its energy and density, were generated.The computed crystal structures were ranked in terms of their thermodynamic stability, usually based on the predicted lattice energy, rather than the more difficult to compute Gibbs free energy.The studies of pharmaceutical compounds reported in the literature to date have focused on “small molecule pharmaceuticals”, typically with up to 10 rotatable bonds.They have generally resulted in the correct identification of all known polymorphic structures as low-energy minima on the energy landscape.The relative stability of the computed polymorphs, however, often differs from the experimental relative stability, as extrapolated to 0 K. Furthermore, many structures that have not been identified experimentally are often found as low-energy minima.This can arise for a number of reasons, including the fact that some computed structures may be found to be unstable when entropic effects are taken into account and the fact that some structures may be difficult to crystallize experimentally.Despite these limitations, CSP has found several applications of practical relevance beyond the scientific goal of achieving the blind prediction of all likely polymorphs.Thus, it can be used to provide reassurance that all likely polymorphs have been identified; to guide the search for further polymorphs by suggesting the specific crystal structures that might be observed, thereby helping to identify appropriate crystallization conditions; to support crystal engineering by providing an understanding of the link between the motifs observed and molecular structure or crystal composition; to help crystallographers interpret data gathered on specific compounds.This broad array of uses provides impetus for methodological improvements aimed at increasing the accuracy of the predictions and at broadening the range of molecules, co-crystals, salts and solvates that can be tackled in terms of size and complexity.Several reviews have recently been published on the current state-of-the-art in CSP, covering one or more methodologies.Together with the papers summarizing the results of the five blind tests to date, these provide an excellent survey of the field.In the present paper, we focus on a specific systematic approach that has been developed in our group.The algorithms on which this approach is based have been successfully used in several of the examples discussed so far.We aim to provide an introduction to the approach and a perspective on future developments via its application to axitinib, a Pfizer anti-cancer API that has been noted for its numerous crystal forms, including 5 neat ones and 66 solvates.This provides a great challenge for CSP and a fertile learning ground allowing us to assess the current status of the methodology and to identify directions for further research.In Section 2, we review previous work on the crystal structures of axitinib.In Section 3, we provide an overview of our CSP methodology, and in Section 4 we discuss its application to the prediction of polymorphism in axitinib.Section 5 discusses various key aspects of the performance of the 
CSP approach when applied to axitinib, aiming to draw some lessons from the results obtained.Section 6 concludes with some general remarks on the current status of CSP methodologies and their limitations, and identifies some relevant research priorities in this area.This section provides a review of the available information on the polymorphism of axitinib.Published experimental data on the five neat polymorphs are summarized, and previous computational work is discussed.From the point of view of crystallography, axitinib is notable because of its large number of neat polymorphs, solvates and hydrates.The 71 forms that have been reported in the literature to date are labeled using Roman numerals, i.e. Form I to LXXII, with no form being assigned to number V.As a consequence of this rich polymorphic landscape, this molecule has proved a challenge for the pharmaceutical industry, as has been well documented in the context of crystallization process development.There are five known non-solvated forms of axitinib.Four polymorphs have one molecule in the asymmetric unit, while Form IV has two independent molecules.The main crystallographic information for these five polymorphs of axitinib, based on spectroscopic data, is summarized in Table 1.All structures have been resolved to a high degree of confidence, as indicated by the low values of the relative R-factor.Form IV was thought to be the most suitable for development until the discovery, following further experimental screenings, of Form XLI, which is currently considered to be the most stable form of axitinib.Campeta et al. further reported that Form XLI is monotropically related to all other forms, while forms IV, VI and XXV are enantiotropically related.Furthermore, Chekal et al. found that Form IV is more stable than Form XXV at temperatures above 75 °C, based on solubility experiments in a 80:20 water/methanol solution.The transition temperature between Forms XXV and VI has not been determined: the two structures are so close in energy that calorimetric experiments result in conflicting evidence on their relative stability.Finally Form I is the least stable among the five polymorphs, and is known to be unstable in a humid environment and to transform to the monohydrate Form IX.Abramov estimated the relative stability of Forms XLI, XXV, VI and IV by applying six computational models, including molecular mechanics, density functional theory, two versions of dispersion-corrected DFT, and two versions of the quantum theory of atoms in molecules.QTAIM was used to model molecular clusters derived from experimental crystallography data, with DFT-optimized hydrogen positions.Based on QTAIM, the charge density and the electronic potential energy density, at a point defined as the bond critical point along the bond path of one or more hydrogen bonds, were calculated based on a B3LYP/6-31G-derived wave function.The relative order of stability computed by DFT+d was found to be in agreement with the experimental order based on ΔHf, while the densities calculated by QTAIM were found to correlate well with the experimental relative stability.The other methodologies tested did not provide good agreement.Interestingly, of the 10 pharmaceutical compounds studied with these different approaches, axitinib was found to present the greatest challenge.At least four of the six computational approaches were found to yield good agreement with experimental order for each of the other molecules.Finally a limited CSP study was carried out by Lupyan et al.The 
authors initially developed an updated parameter set for the intramolecular S…O interaction for the OPLS_2005 force field.They then used this force field to perform a conformational search using the low-mode search method, in order to find low energy conformations of axitinib.Finally, using the conformations identified by this search that are closest to the experimental conformations, they performed a CSP study for each conformation, restricting the search to the corresponding experimental space groups.The Polymorph Predictor CSP module in Materials Studio 5.5 with the COMPASS force field was used for this purpose.No prediction was attempted for Form IV because it has two molecules in the asymmetric unit.Forms I, VI and XLI were correctly identified as lattice energy minima with rms10 values equal to 0.66 Å, 0.54 Å and 0.47 Å respectively.Form XXV was not found to correspond to a lattice energy minimum.The computational studies of polymorphism in axitinib carried out to date have allowed the investigation of the effects of flexibility and hydrogen bonding on the stability of the known polymorphs.There have been limited attempts to explore the crystal energy landscape for this challenging molecule.In the remainder of this paper, we investigate the applicability of the approach developed in our group to the Z′=1 polymorphs of axitinib.This section provides a brief overview of the CSP problem and its mathematical formulation, and of the CSP methodology that will be used for the ab initio prediction of the polymorphic landscape of axitinib.The crystal structure prediction challenge can be stated as:“Given the molecular diagrams for all chemical species or ions) in the crystal, identify the thermodynamically most stable crystal structure at a given temperature and pressure, and, in correct order of decreasing stability, other crystal structures that are also likely to occur in nature.,To be relevant to the pharmaceutical and related industries, a CSP approach must be applicable to organic molecules involving multiple rotatable bonds and having a molecular weight of at least a few hundred daltons.In addition, it is necessary to be able to predict the crystal structures of salts, co-crystals and solvates as such systems are frequently used to enhance product effectiveness or to facilitate manufacturing.In a recent publication, we set out the design requirements that systematic CSP methodologies should meet in order to find wide applicability in practice, such as a high degree of automation, with limited dependence on user insight; a consistent and general physical basis; a high degree of reliability; a high degree of accuracy, but with reasonable computational cost.This last requirement is particularly challenging: experience in crystal structure prediction has shown that it is essential to carry out an exhaustive search of the energy landscape covering millions of potential structures, and that the relative energies of computed crystal structures are highly dependent on the accuracy of the energy model, with electronic structure calculations providing the most reliable results.In practice, the evaluation of the Gibbs free energy of a known crystal structure is a very challenging problem, and it cannot reasonably be performed within an extensive search for putative crystal structures.It is thus common practice to neglect the entropic contribution and to focus on minimizing the crystal enthalpy at 0 K.This is usually justified on the grounds that entropy makes a relatively small contribution to 
the overall energy at room temperature, the combined contribution of entropy and zero-point energy being estimated to be of the order of 2–5 kJ mol−1.As mentioned in Section 1, the assumption that entropy can be ignored may result in some inaccuracies in the relative ordering of the structures identified, as well as in an overestimation of the number of structures.Furthermore, by neglecting the entropic contribution, it is not possible to investigate enantiotropically-related polymorphs.Thus, there are emerging attempts to take some account of entropic contributions.A further approximation is usually made to neglect the pV term, as this is very small at low pressures, only becoming practically important at pressures of the order of GPa.As can be expected based on the close proximity of atoms in the crystalline environment, the specific model chosen for the computation of the lattice energy strongly influences the outcome of the calculations.Among the approaches commonly used are molecular mechanics force fields, plane-wave dispersion-corrected electronic structure calculations, and hybrid models that combine electronic structure calculations) and empirical terms.Within this latter approach, electrostatic interactions can be modeled in different ways, including the use of point charges or distributed multipoles, and may even incorporate an anisotropic model of repulsion.Polymorphism is prevalent in many molecules of practical relevance and often arises from the presence of rotatable bonds with a deformation energy of the same order of magnitude as the energy change on packing.It is thus essential to account accurately for molecular flexibility during the course of energy minimization; a discussion of the pitfalls of neglecting or overly restricting flexibility can be found in Pantelides et al.The need to explore conformational variation greatly increases the complexity of the optimization problem and has been a key driver for recent theoretical and algorithmic developments of our CSP framework.Multistage CSP methodologies are based on the idea that one can extract key features of the crystal energy landscape with relatively simple models, and then improve the accuracy of the results based on much more detailed and computationally expensive models.Using a simple model in the global search stage is needed to ensure the computational tractability of the extensive search that is necessary to identify all polymorphs of potential practical interest.On the other hand, if this model is not sufficiently accurate, the global search may also miss important polymorphs because they do not happen to correspond to local minima of the lattice energy surface; or, even if it does succeed in identifying them, it may rank them so highly in energy that they would not be considered by the subsequent refinement stage unless it is applied to a very large number of structures.Therefore, a fine balance needs to be struck between the accuracy and the cost of the model used for global search, especially in the consideration of molecular flexibility and its effect on the intramolecular energy contributions.As illustrated in Fig. 
3, the CSP methodology involves three main stages; these are reviewed in more detail below. As already explained, a key success factor for CSP is the correct handling of configurational flexibility at the global search and refinement stages. Therefore, as a first step of the CSP methodology, we attempt to establish the degree of conformational flexibility of each CDF that needs to be considered in the CSP context. In particular, in vacuo molecular conformations correspond to global or local minima in the conformational energy landscape. Intermolecular interactions in the crystalline environment may cause some CDFs to deviate appreciably from their in vacuo values, leading to a conformational energy increase of up to about 20–30 kJ mol−1 in most cases. In some instances where intramolecular hydrogen bonds are broken, intramolecular energy increases greater than 50 kJ mol−1 have sometimes been observed. Potentially flexible CDFs normally include a subset of the torsion angles and some of the bond angles. They can often be identified using basic chemical understanding, complemented where possible by empirical evidence, such as the geometry of these angles in similar molecules appearing in crystal structures stored in the Cambridge Structural Database. Their flexibility can be confirmed by performing a 1-dimensional conformational scan for each CDF under consideration; during a scan, the corresponding CDF is fixed at a sequence of values, and at each such point an isolated-molecule quantum mechanical calculation is performed to minimize conformational energy with respect to all other CDFs. The range of values of the CDF for which the configurational energy increase is within 20 kJ mol−1 is considered to be of interest for the purposes of CSP in this work. In some cases, there may be several non-overlapping ranges satisfying this condition for a given CDF. Another useful, and computationally inexpensive, indication of the validity of the chosen level of QM theory and basis set is provided by the conformational scans mentioned above. The variation of conformational energy over the values of a given torsion angle often correlates well with the frequency of occurrence of these values in crystal structures occurring in nature, as reported in the CSD. In particular, the most likely values are expected to be those in regions of low conformational energy. For the repulsion/dispersion term, an exponential-6 functional form is usually adopted, with parameters obtained from the literature, or fitted to existing crystallographic data. We shall return to discuss the implications of these decisions in Section 5.3. The intermolecular electrostatic interactions are modeled via point charges. Within CrystalPredictor, it is possible to use conformationally-dependent charges, which are fitted to the electrostatic potential obtained at each grid point. For the study reported in this paper, we use conformationally-invariant atomic charges that are fitted to the charge density of the gas phase conformation with the CHELPG algorithm, as implemented in GAUSSIAN09 (a schematic sketch of this exp-6 plus point-charge energy form is appended at the end of this record). Within the current implementation of CrystalPredictor, a search can be carried out in up to 63 space groups. The exploitation of space group symmetry allows the number of variables in the lattice energy minimization to be reduced further, contributing to computational efficiency. A large number of candidate structures are generated within the space groups selected by the user; a low-discrepancy Sobol' sequence (see the sampling sketch appended at the end of this record) is used to obtain the initial values of the independent CDFs, the
lattice parameters and positions and orientations of the molecules in the asymmetric unit.The use of such a quasi-deterministic sequence to generate several hundreds of thousands or millions of structures ensures comprehensive coverage of the search space.The sampling of the different space groups under consideration is performed according to their frequency of occurrence among all organic crystals in the CSD.Appropriate space group constraints are imposed for each local minimization, but this does not restrict the ability of the overall algorithm to search over a wide range of space groups.For each candidate structure, a local lattice energy minimization is carried out subject to space group symmetry constraints.A successive quadratic programming algorithm for constrained problems is used for this purpose, making use of analytical partial derivative information for the reliable and efficient identification of the optimal solution.Distributed computing hardware is used to carry out the minimization of multiple structures in parallel.Once all calculations are complete, the generated structures are post-processed to identify any duplicates, and ranked in order of increasing lattice energy.At this stage, the lowest lattice energy structures generated at Stage 1 are re-optimized with a more accurate energy model, using the CrystalOptimizer algorithm.The structures selected to undergo such refinement are typically those within +20–30 kJ mol−1 of the lowest energy structure.CrystalOptimizer also employs a much more accurate description of electrostatic intermolecular interactions based on distributed multipole expansions up to hexadecapole rather than point charges.The conformational dependence of these multipoles on the independent CDFs θ is also described via LAMs.The lattice energy minimization in the current implementation of CrystalOptimizer is formulated as a bilevel optimization problem in which the independent CDFs θ are treated as outer variables, while the remaining variables, X and β, are considered in an inner-level minimization using the DMACRYS code.Whenever the lattice energy or its gradients must be evaluated at a given point θ, a test is applied to determine whether this point is within the range of validity of an already existing LAM, or whether a new LAM needs to be constructed via a new QM calculation.This approach ensures that, even with large and flexible molecules involving large numbers of independent CDFs θ, the number of LAMs generated is very small compared to the total number of lattice energy evaluations.At the end of the refinement stage, the final structures are post-processed via clustering to remove multiple occurrences of the same structure.The CSP methodology described in Section 3 is now applied to the axitinib molecule, aiming to identify polymorphs with one molecule in the asymmetric unit.The only input of this study relating specifically to axitinib is the molecular diagram shown in Fig. 2; the primary result of the study is a list of possible crystal structures ranked on the basis of the calculated lattice energy at 0 K.As outlined in Section 3.2.1, we start by identifying the CDFs that are likely to be affected significantly by intermolecular interactions in the crystalline environment.In this case, these include the 7 torsion angles indicated in Fig. 
2.We perform a set of 1-dimensional conformational energy scans using isolated-molecule QM calculations, varying one of the 7 torsion angles at a time while minimizing conformational energy with respect to all other CDFs.All calculations are performed in GAUSSIAN09 using DFT with the M06 functional and a 6-31G basis set.This model was selected mainly on the basis that it offers a reasonable balance between predictive accuracy and computational cost given the size of the axitinib molecule and the large number of QM calculations that would need to be performed with it during the subsequent stages of the CSP procedure.Fig. 4 shows the results for the conformational scans for torsions d26, d27 and d28.As might be expected, the methyl group rotation has only a minor effect on conformational energy.Since this rotation has a relatively small effect on atomic positions in the crystal, d28 belongs to the category of very flexible CDFs that can be fixed at their in vacuo values during the global search.In contrast to d28, torsion d27 has a strong effect on conformational energy, exhibiting minima at 0° and 180°.This is corroborated by the evidence from the CSD shown in Fig. 5 which confirms that this angle is near-planar.Torsion d26 also has a significant effect on conformational energy.As both d27 and d26 exhibit non-negligible ranges of variation over which the conformational energy is within +30 kJ mol−1 of the global minimum, their variation will need to be considered explicitly during the global search.Torsion angles d8 and d10 are also found to be in this category.The repulsion/dispersion interactions are modeled using the semi-empirical Buckingham potential with the transferable ‘FIT’ parameters for C, N, O, H developed by Cox et al., Williams and Cox, and Coombes et al.To the best of our knowledge, there exists no generic transferable parameter set for the sulfur atom.We therefore model the sulfur intermolecular interactions using the potential parameters that were developed by the group of Price for the S atom of the thiophene group of 5-cyano-3-hydroxythiophene.This is likely to be an appropriate choice as the S atom in that molecule has a similar environment to the sulfur of axitinib.The conformational analysis of Stage 0 identified the CDFs that need to be treated as flexible during the global search.These are the 6 torsion angles listed in Table 2, which also shows the domain of variation that needs to be searched for each angle; these domains are selected so that their Cartesian product includes all points with conformational energy up to +30 kJ mol−1 above the global in vacuo minimum.In the case of torsion d27, the search is restricted to a small range around the lowest minimum at 180°.As mentioned in Section 3.2.2, a multi-dimensional Hermite interpolant is used for the computation of intramolecular energy contributions during the global search.The interpolant is constructed over a regular grid with the spacing indicated in the penultimate column of Table 2.The last column of Table 2 shows the number of points in the corresponding dimension of the grid.From this, it can be seen that a 6-dimensional grid that would be sufficiently accurate over the entire domain of interest would involve 1,249,248 grid points, each requiring an isolated-molecule QM calculation.This would clearly be prohibitively expensive from a computational point of view.Therefore, taking account of axitinib׳s molecular structure and the results of the conformational scans, we divide the 6 torsion angles into three 
groups, each assumed to have an independent effect on conformational energy: Group 1, d8 and d10, described by a 2-dimensional grid of 132 points; Group 2, d19, d20 and d26, described by a 3-dimensional grid of 1183 points; and Group 3, d27, described by a 1-dimensional grid of 8 points. In particular, it is assumed that torsion d27 can be treated as independent because only small deviations around the minimum are considered. This reduces the required number of grid points considerably. Overall, this decomposition allows us to approximate conformational energy with a total of 1323 QM calculations (an illustrative sketch of this group decomposition is appended at the end of this record). Fig. 6 shows some aspects of these grids and the variation of intramolecular energy over them. The energy variation over the d8×d10 space indicates that planar conformations are favored in the part of the axitinib molecule on the right of Fig. 2. In contrast, Fig. 6b shows that a wide range of combinations of d20 and d26 would result in similar energy values, and therefore these angles may deviate considerably from their in vacuo values. The global search was performed over the 59 space groups that appear with the highest frequency in CSDSymmetry. A total of 4,800,000 candidate structures, each involving one molecule in the asymmetric unit, were generated and used as initial guesses for lattice energy minimization subject to space group symmetry constraints. Clustering was used to eliminate any duplicates among the final structures. Fig. 7 shows the resulting lattice energy landscape, which has 5960 unique structures within +30 kJ mol−1 of the global minimum, and 1830 unique structures within +25 kJ mol−1. Structures corresponding to all four Z′=1 polymorphs of axitinib are found in this landscape, and their main characteristics are summarized in Table 3. The global minimum of the landscape corresponds to Form VI, whilst Form I is the least stable among the experimental polymorphs, with an energy +23.49 kJ mol−1 above the global minimum. The quality of experimental structure reproduction is quantified via the root mean squared deviation of the 15-molecule coordination sphere, calculated using COMPACK as implemented in Mercury. The computed and experimental structures are in reasonable agreement, especially if one considers the relative simplicity of the computational model used during the global search stage. Given the relative simplicity of the lattice energy model used in the global search, and in order to ensure that no structures of practical importance are missed, the CrystalOptimizer refinement would normally need to be applied to all structures appearing within 20–30 kJ mol−1 of the global minimum in the lattice energy landscape of Fig.
7.In this case, this would involve the refinement of several thousands of structures, which is impractical given the complexity of the axitinib molecule and the high degree of detail incorporated in the lattice energy model used by CrystalOptimizer.In view of the above, we adopt a two-stage approach that first attempts to establish a more reliable lattice energy estimate of the structures identified in the global search, before applying the full CrystalOptimizer refinement to the most promising structures.These two steps, referred to as Stages 2a and 2b respectively, are described in more detail below.A significantly improved estimate of the lattice energy for a structure derived by the global search can be obtained by taking a single iteration of CrystalOptimizer.Although a single iteration does not in itself generate a converged crystal structure, it does allow us to take advantage of the much improved accuracy of the CrystalOptimizer model to get a better assessment of the relative values of the lattice energies of the various structures that are candidates for refinement.In turn, this allows us to apply a pre-screening test aiming to significantly reduce the number of structures that will undergo full refinement.A single iteration of CrystalOptimizer is not entirely cheap as it involves at least one isolated-molecule QM conformational energy minimization.However, the information generated by these QM calculations is used to construct LAMs which are stored in a database for potential use during Stage 2b.Here we apply the above procedure to the 3765 structures found within 28 kJ mol−1 of the global minimum in Stage 1.The resulting lattice energy landscape is shown in Fig. 8.Extensive re-ranking of the structures obtained at the end of Stage 1, with the ranks of experimental forms VI, I XXV and XLI now being 14, 77, 95 and 258 respectively.We now apply full refinement to the 500 lowest-energy structures determined at Stage 2a, spanning a range of about +20 kJ mol−1 of the global minimum in Fig. 8.Each of these structures is used as an initial point for a full minimization of lattice energy using CrystalOptimizer.Following the identification of duplicate structures using COMPACK as implemented in Mercury and with default settings, 139 minimized structures are removed.This illustrates how allowing a wider range of conformational flexibility allows multiple initial structures to relax into the same final structure.The final lattice energy landscape, shown in Fig. 
9, contains 361 distinct structures spanning a range of just over 20 kJ mol−1 of lattice energies.The density of the landscape confirms the propensity of axitinib to pack in many different ways, and suggests that at least some polymorphs that have not yet been observed experimentally may exist in nature.Other characteristics of the four predicted structures are summarized in Table 5.The results of a local minimization of the Z′=2 polymorph, Form IV, are also included in Table 5 to allow a more complete assessment of the quality of the computational model.Form IV is found to have the second lowest energy, less than 2 kJ mol−1 above that of Form VI.The computed densities for all polymorphs are found to be in good agreement with the measured values, with Form XXV being the densest, followed by Form XLI.Forms I and VI are found to be less dense, with nearly equal densities.All computed densities are within 2% of the corresponding experimental values.Overlays between the predicted structures and the experimental structures are shown in Fig. 10.The reproduction of the geometrical features, as measured by rms1 for the molecular conformational and rms15 for the crystal structure, is of good quality for Forms VI, XXV and XLI, with rms1 values around 0.1 Å and rms15 values less than 0.35 Å.For Form I, a poorer reproduction is observed in terms of both conformation and crystal structure.In order to analyze the impact of computational choices on the accuracy of reproduction of the experimental structures, CrystalOptimizer computations starting from the Stage 1 structures that best match the experimental structures and using different levels of theory were carried out.The approaches used, in addition to M06/6-31G, were HF, PBE0, B3LYP with a 6-31G basis, and M06/6-31G with the polarizable continuum model with a dielectric constant of 3.The resulting rms15 values were found to be similar or worse in all cases.Furthermore, none of these calculations resulted in a better match of the experimentally-determined relative stability of the polymorphs: Form XLI remained the least stable structure across all levels of theory, with energy differences from the consistently most stable form, Form VI, ranging from 9.9 to 19.2 kJ mol−1.In this section, we focus on some key aspects of the approach and results presented in Section 4.We consider the quality of the predictions and its interactions with the underlying physical models.We also discuss some methodological and algorithmic considerations that are found to be important in light of the observed performance of the CSP approach when applied to axitinib.The general CSP methodology described in Section 3.2 has been successful in that, starting only from axitinib׳s molecular structure shown in Fig. 
2, it has managed to identify low-energy crystal structures corresponding to all four known experimental polymorphs for the class considered.The geometry of the predicted structures is in very good agreement with the experimental structures for three of these polymorphs, and in reasonable agreement for the fourth.The predicted energy ranking is also in good agreement for all but one polymorph.Overall, the results highlight the significant impact of conformational flexibility on crystal structure, with the conformational energies of the axitinib molecule within the experimental polymorphs being +10–20 kJ mol−1 above the in vacuo value.A large number of other low-energy minima are identified, indicating that other polymorphs may exist, which would be consistent with the already observed propensity of axitinib to pack in different crystals.On the other hand, it is also characteristic of current CSP techniques that many more polymorphs are predicted than are measured.This may be partly a result of inaccuracies in the lattice energy model.Also some structures may be found to disappear once entropic effects are considered, while kinetic considerations could make some metastable structures unlikely to ever form in nature.As mentioned in Section 4.2, the global search was performed over 59 space groups.In fact, only 15 space groups actually appear in the final energy landscape of Fig. 9, with the relative frequencies shown in Fig. 11.Approximately 80% of the structures belong to three space groups, P1̄, P21/c and C2/c, with nine space groups having a frequency of no more than 2%.Furthermore, only 12 space groups appear among the 100 lowest-energy structures in the final landscape, with 79% of the structures in a P1̄, P21/c or C2/c unit cell.The significant differences in the underlying computational model of lattice energy between Stages 1, 2a and 2b have a relatively limited impact on the quality of reproduction of the experimental structures, as measured for example by the rms15.However, the accuracy of lattice energy calculations does affect significantly the relative ranking of different structures.This is arguably the weakest aspect of the current approach.Inaccuracies can arise from the neglect of specific contributions to the crystal energy.In fact, such contributions can be captured partly within the empirical term of the computational model used here whose parameters are fitted to energetic and structural data from a number of crystal structures.In this context, it is interesting to note that, while the importance of the accurate modeling of the electrostatic and intramolecular contribution has gained wide understanding and acceptance in recent years, much less attention has been given to the impact that the empirical term contribution may have on the final outcome.More specifically, a possible source of error is the fact that the FIT parameters used here were derived from experimental data using models of the intramolecular and electrostatic contributions that are very different to those employed by current CSP methodologies.In particular, consideration of conformational flexibility was limited, QM calculations were performed only at the HF level of theory, and electrostatic interactions were modeled via atomic point charges rather than distributed multipoles.Thus, there is a potentially severe mismatch between the different contributions to the lattice energy.To explore this issue, the various contributions to the lattice energy are summarized in Table 6 for the predicted structures 
that correspond to the four experimental forms.The table shows results obtained using two different repulsion/dispersion potentials, namely the FIT one used throughout this paper, and the W01 potential, which includes a foreshortening of the interaction site for hydrogen; the same sulfur parameter set is used in both cases.Considering the FIT results first, it is evident that the variability of the repulsion/dispersion contributions across the four structures is significantly larger than the variabilities of the intramolecular and electrostatic contributions.A very similar effect is observed in the results obtained using the W01 potential.It is also observed that, with this potential, the predicted lattice energy difference between the most and least stable polymorphs is reduced from 10.42 kJ mol−1 to 7.31 kJ mol−1.The last four rows of Table 6 show a comparison of the predictions obtained using the two repulsion/dispersion potentials.As a result of the change in the repulsion/dispersion potential, the structures obtained are slightly different in terms of both molecular conformation and relative positioning of the atoms in different molecules in the crystal, and this indirectly leads to small differences in the intramolecular contributions and somewhat larger ones in the electrostatic contributions.The effect on the quality of the experimental structure is generally small.More interestingly, the differences in the repulsion/dispersion component of the energy between the two sets of results are in the range of about 17.8–20.9 kJ mol−1.These are much larger than the lattice energy differences across the four different polymorphs as determined using either the FIT or the W01 potential.The impact of the choice of repulsion/dispersion potential can also be seen by carrying out an extensive re-minimization of all structures found in the final lattice energy landscape with the W01 potential.The main impact is the stabilization of Form XLI, whose rank is reduced from 108th to 52nd.The least stable experimental polymorph is now Form I, ranked 58th.The above observations emphasize the importance of accurately characterizing the empirical term contributions in CSP.In the context of empirical potentials, such as FIT and W01, it is not expected that the parameters used are optimal for axitinib and indeed for the level of theory used.This suggests that the corresponding parameters may need to be re-estimated using models that are consistent with the intramolecular and electrostatic descriptions that are employed during the CSP.The introduction of a re-ranking step 2a within the refinement stage is based on the assumption that the ranking of structures following a single CrystalOptimizer iteration is a better indicator of final ranking than the ranking obtained following the global search using CrystalPredictor.To investigate this, we consider the final 361 unique polymorphs identified, and trace their rankings through the CSP process.Figs. 13a and 13b plot the rankings of these structures at the ends of Stages 1 and Stage 2a respectively, against the final rankings.The data in Fig. 13b are visibly better correlated than those in Fig. 
13a, the corresponding R² coefficients of correlation being 0.387 and 0.078 (a sketch of this type of rank comparison is appended at the end of this record). Therefore, the lattice energy values after a single CrystalOptimizer iteration provide a better indicator of “promising” structures than the values established by CrystalPredictor during the global search. This may be an important consideration for CSP applied to large, flexible molecules where full structure refinement is expensive and needs to be limited to these promising structures only. The computational cost of the different steps is reported in Table 7. The total time spent is approximately 17.4 CPU years, an amount of computation which is rendered practically feasible only via the extensive exploitation of distributed computing architectures. Stage 1 accounts for approximately 40% of the total cost, although the number of structures minimized is 3–4 orders of magnitude larger than that in Stage 2. Once the grid has been generated in Stage 1, the marginal cost of additional structure minimizations is very small. Given the rapidly increasing cost of individual structure minimizations as the computational model increases in accuracy from Stage 1 to Stages 2a and 2b, it is clear that the overall cost could be reduced if a more accurate energy ranking could be achieved in Stage 1. More specifically, the lattice energy model used for the global search in the current work is based on several approximations that could be removed without an excessive increase in the computational cost. In particular, the electrostatic model is assumed to be conformationally invariant, and so are all CDFs other than those explicitly considered as “flexible”. These deficiencies may be addressed by adopting the LAM-based approach within CrystalPredictor; this will also allow the consideration of higher degrees of molecular flexibility within the global search. Overall, this can be expected to lead to improved rankings by the end of the global search, thereby reducing the number of structures to be refined during Stage 2. This paper has reviewed a CSP methodology that has been used fairly extensively both by the authors' research group and by others over the past decade. Its application to axitinib, a relatively large pharmaceutical molecule, has demonstrated the significant progress that has been achieved in recent years in terms of methodological improvements, the effective exploitation of high-performance computing architectures for performing this demanding task, and, ultimately, the quality of the results obtained. The results of the axitinib case study also provide a fairly good illustration of the limitations of current methodologies. Some of these deficiencies are algorithmic, e.g.
the large numbers of structures that need to be refined at Stage 2 because of the relatively low accuracy of the model employed during the global search at Stage 1; or the use of a bilevel optimization at Stage 2 especially if the inner-level optimization does not provide exact partial derivative information needed by the outer-level one.It should be possible to overcome most of these issues via improvements in the underlying algorithms and their implementation.In addition, the systematic application within our CSP approach of further tests of the viability of the predicted structures, such as mechanical stability, could be used to eliminate some of the putative crystals.A more serious shortcoming concerns the correct ranking of the polymorphs identified.This is severely affected by the accuracy of the lattice energy calculation.As discussed in Section 5.3, given the significant recent improvements in the descriptions of intramolecular and electrostatic contributions, further progress may be predicated on improving the characterization of repulsive/dispersive interactions.In this context, it should be recognized that, albeit usually thought of as “repulsion/dispersion potentials”, the empirical terms used within current CSP methodologies actually attempt to compensate for a multitude of physical and numerical approximations; and this mismatch between model and reality is likely to persist even if ab initio descriptions are improved.We therefore believe that there is a need for more systematic approaches for exploiting all available experimental information within the theoretical framework of CSP.At a more fundamental level, another likely source of error in the relative ranking of different polymorphs is the use of lattice energy as a proxy measure for free energy, which means that, strictly speaking, any predicted rankings pertain to 0 K rather than to room temperature.The computation of free energies for crystals of organic molecules is now beginning to be feasible, and this should allow an entropic correction to be post-calculated for, and applied to, each of the lowest-energy structures identified by the CSP. | Organic molecules can crystallize in multiple structures or polymorphs, yielding crystals with very different physical and mechanical properties. The prediction of the polymorphs that may appear in nature is a challenge with great potential benefits for the development of new products and processes. A multistage crystal structure prediction (CSP) methodology is applied to axitinib, a pharmaceutical molecule with significant polymorphism arising from molecular flexibility. The CSP study is focused on those polymorphs with one molecule in the asymmetric unit. The approach successfully identifies all four known polymorphs within this class, as well as a large number of other low-energy structures. The important role of conformational flexibility is highlighted. The performance of the approach is discussed in terms of both the quality of the results and various algorithmic and computational aspects, and some key priorities for further work in this area are identified. |
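The axitinib record above models intermolecular energy during the global search as an exponential-6 (Buckingham) repulsion/dispersion term with empirical 'FIT'-type parameters plus atomic point-charge electrostatics. The sketch below illustrates only that functional form for a rigid pair of molecules; the parameter values, charges and toy geometry are invented placeholders (not the FIT or W01 parameter sets), and the code is not the CrystalPredictor implementation, which also handles lattice periodicity and space-group symmetry.

import numpy as np

# Minimal sketch of the intermolecular energy form described in the record above:
# exp-6 (Buckingham) repulsion/dispersion plus point-charge electrostatics, summed over
# atom-atom pairs between two rigid molecules. All parameters, charges and coordinates
# are invented placeholders; they are not the FIT or W01 values used in the study.
KCOUL = 1389.35  # Coulomb prefactor in kJ mol-1 Angstrom e-2 (approximate)

def pair_energy(r, A, B, C, qi, qj):
    """U(r) = A*exp(-B*r) - C/r**6 + KCOUL*qi*qj/r for a single atom-atom pair."""
    return A * np.exp(-B * r) - C / r ** 6 + KCOUL * qi * qj / r

def intermolecular_energy(coords1, types1, charges1, coords2, types2, charges2, params):
    """Sum exp-6 plus Coulomb pair energies between two molecules.

    params maps a sorted atom-type pair, e.g. ('C', 'H'), to (A, B, C) with A in kJ/mol,
    B in 1/Angstrom and C in kJ/mol Angstrom**6.
    """
    energy = 0.0
    for xi, ti, qi in zip(coords1, types1, charges1):
        for xj, tj, qj in zip(coords2, types2, charges2):
            r = float(np.linalg.norm(np.asarray(xi) - np.asarray(xj)))
            A, B, C = params[tuple(sorted((ti, tj)))]
            energy += pair_energy(r, A, B, C, qi, qj)
    return energy

# Toy demo with placeholder parameters and two C-H fragments 3.5 Angstrom apart.
params = {("C", "C"): (350000.0, 3.6, 2400.0),
          ("C", "H"): (130000.0, 3.9, 550.0),
          ("H", "H"): (12000.0, 3.7, 140.0)}
m1 = ([(0.0, 0.0, 0.0), (1.1, 0.0, 0.0)], ["C", "H"], [-0.1, 0.1])
m2 = ([(0.0, 0.0, 3.5), (1.1, 0.0, 3.5)], ["C", "H"], [-0.1, 0.1])
print(round(intermolecular_energy(*m1, *m2, params), 2))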
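The global search in the record draws initial values for the flexible torsions, lattice parameters and molecular position and orientation from a low-discrepancy Sobol' sequence, generating millions of candidate structures across space groups sampled according to their CSD frequency. The snippet below sketches only the quasi-random sampling step using scipy's quasi-Monte Carlo module; the choice of 18 degrees of freedom and their bounds are illustrative assumptions, and the space-group assignment and subsequent lattice energy minimizations are omitted.

import numpy as np
from scipy.stats import qmc

# Sketch of Sobol'-based generation of starting points for the global search described
# above. The choice of 18 variables (6 torsions, 3 cell lengths, 3 cell angles, 3
# fractional position coordinates, 3 orientation angles) and their bounds are
# illustrative assumptions; the real search used 4,800,000 candidates over 59 space groups.
lower = np.array([-180.0] * 6 + [3.0] * 3 + [60.0] * 3 + [0.0] * 3 + [0.0] * 3)
upper = np.array([180.0] * 6 + [30.0] * 3 + [120.0] * 3 + [1.0] * 3 + [360.0] * 3)

sampler = qmc.Sobol(d=len(lower), scramble=False)   # deterministic low-discrepancy sequence
unit_points = sampler.random_base2(m=17)            # 2**17 = 131,072 points in [0, 1)^18
candidates = qmc.scale(unit_points, lower, upper)   # rescale to the variable bounds

print(candidates.shape)   # (131072, 18): each row is one starting guess for local minimization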
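As referenced earlier in the record, the intramolecular energy over the six flexible torsions is approximated on precomputed QM grids that are decomposed into three independent groups (2-D, 3-D and 1-D), reducing the requirement from over a million grid points to 1323. The sketch below reproduces only the decomposition idea: the grid shapes (12×11, 13×13×7 and 8 points) match the counts quoted in the record, but the torsion ranges, spacings and energy values are random placeholders, and plain multilinear interpolation is used instead of the Hermite interpolant described.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Sketch of the group-decomposed intramolecular energy surrogate described above:
#   E_intra(d8, d10, d19, d20, d26, d27) ~ E1(d8, d10) + E2(d19, d20, d26) + E3(d27)
# Grid shapes match the 132-, 1183- and 8-point grids quoted in the record, but the
# energies are random placeholders standing in for isolated-molecule QM results, and
# linear interpolation replaces the Hermite interpolant used in the actual method.
rng = np.random.default_rng(0)

g1_axes = (np.linspace(-180.0, 180.0, 12), np.linspace(-180.0, 180.0, 11))   # d8, d10
g2_axes = (np.linspace(-180.0, 180.0, 13), np.linspace(-180.0, 180.0, 13),
           np.linspace(-180.0, 180.0, 7))                                     # d19, d20, d26
g3_axis = np.linspace(150.0, 210.0, 8)                                        # d27, near 180 deg

E1 = RegularGridInterpolator(g1_axes, rng.random((12, 11)) * 30.0)
E2 = RegularGridInterpolator(g2_axes, rng.random((13, 13, 7)) * 30.0)
g3_values = rng.random(8) * 5.0

def intra_energy(d8, d10, d19, d20, d26, d27):
    """Approximate conformational energy (kJ/mol) as a sum of independent group terms."""
    e1 = E1([(d8, d10)])[0]
    e2 = E2([(d19, d20, d26)])[0]
    e3 = np.interp(d27, g3_axis, g3_values)
    return float(e1 + e2 + e3)

print(round(intra_energy(10.0, -20.0, 5.0, 90.0, -60.0, 180.0), 2))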
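Finally, Section 5.4 of the record compares how well the Stage 1 and Stage 2a rankings anticipate the final Stage 2b ranking of the 361 unique structures, quoting R² values of 0.078 and 0.387 respectively. The snippet below shows the kind of calculation involved on synthetic rank lists; it illustrates the comparison only and is not intended to reproduce the reported numbers.

import numpy as np

# Illustration of comparing provisional rankings (Stage 1 and Stage 2a) against the final
# Stage 2b ranking via the R^2 of a least-squares fit, as in Section 5.4 of the record.
# The rank arrays here are synthetic stand-ins; the record reports R^2 = 0.078 and 0.387.
def rank_r_squared(provisional_rank, final_rank):
    """R^2 of a linear fit of the final rank on the provisional rank."""
    x = np.asarray(provisional_rank, dtype=float)
    y = np.asarray(final_rank, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    ss_res = np.sum((y - (slope * x + intercept)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(1)
final = np.arange(1, 362)                          # 361 structures in the final landscape
stage1 = rng.permutation(final)                    # synthetic, weakly informative ranking
noisy_score = final + rng.normal(0.0, 60.0, size=361)
stage2a = noisy_score.argsort().argsort() + 1      # synthetic ranking correlated with the final one

print(round(rank_r_squared(stage1, final), 3), round(rank_r_squared(stage2a, final), 3))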
31,499 | Decisions and delays within stroke patients' route to the hospital: A qualitative study | Stroke is a leading cause of morbidity and mortality worldwide, with an estimated 5.7 million deaths and approximately 50 million disability-adjusted life years lost every year.1,Urgent treatment with intravenous thrombolysis using alteplase for acute ischemic stroke can markedly improve patient outcomes for eligible patients.Timely access to therapy depends on patients’ and health service providers’ recognizing symptoms early, facilitating prompt arrival in the hospital, and accessing specialist assessment and treatment, ideally as soon as possible after symptom onset, and within the “therapeutic window” of 4.5 hours.2-4,Editor’s Capsule Summary,What is already known on this topic,Stroke is a time-dependent condition, but patients and families are sometimes delayed in seeking care.What question this study addressed,This qualitative study analyzed how 30 patients with acute stroke decided to seek care, how they engaged with the health care system, and what influenced those decisions.What this study adds to our knowledge, "Delays arose from 3 sources: patients' lack of recognition of the significance of their symptoms, a decision to first contact primary rather than emergency medical services care, and lack of recognition of the significance of the presentation by the initial health care providers.Bystander advice was associated with more rapid recognition and care.How this is relevant to clinical practice,These results suggest that efforts should focus on broader public awareness of critical signs of stroke, and specific directives to engage emergency rather than primary services.There is wide variation in the proportion of people with symptoms of stroke who contact emergency medical services as opposed to other health service providers such as family practitioners.5-8,Delays at any stage of the care pathway can have a major influence on the proportion of patients who receive timely assessment and treatment in the hospital.9-12,Previous work has shown that individuals who do not call EMS are delayed in arriving at the hospital and has principally considered the way in which recognition of symptoms influences initial help-seeking behavior.13-17,Similarly, public health campaigns have concentrated on the recognition of the symptoms of stroke and the importance of promptly calling EMS.18,In the United Kingdom, as with other health services, patients’ first contact with health services can be calling EMS, directly attending the hospital emergency department, or contacting primary care.Subsequent transportation alternatives include ambulance and private or public transport.No previous studies have addressed how patients navigate through these multiple options or their experiences when first contact with the health service does not result in immediate transfer to the hospital.This study aimed to understand through patients’ narratives how decisions are made and delays occur en route to the hospital after the onset of stroke symptoms.This qualitative study was part of a larger mixed program of work that recruited patients with stroke who attended 2 urban hospitals within the West Midlands, United Kingdom,19 with an ethnically diverse catchment population.Both participating hospital trusts offered a 24-hour thrombolysis service, 7 days a week, but in the case of the second trust, this was achieved by combining an “in hours” service, 9 am to 5 pm, Monday to Friday in the lead hospital, with 
out-of-hours care at a separate site.A summary of the patient pathway for acute stroke in the United Kingdom is detailed in Figure E1.At the study, a 4.5-hour maximum window for thrombolysis was in operation.The prevalence of stroke in West Midlands is estimated to be approximately 17 per 1,000 population, similar to national rates.20,Participants were purposively recruited on the basis of their route to the hospital and demographic characteristics.Patients who had experienced a stroke within the last 6 months were contacted either directly on the ward or by invitation letter postdischarge from the hospital.Patients were excluded if they had previously stated they did not want to be contacted about the interview study, required a consultant to consent for them, were non-English speakers, or were unable to communicate.Participant characteristics were collected from the patients or their hospital records.After informed consent, semistructured interviews with a topic guide were conducted by 4 female interviewers, each trained in qualitative methods, who were not part of the patients’ health care team.The topic guide was developed by the study team, with the first draft based on information gained from reviewing the literature, but then was influenced by data from the interviews; for example, asking participants specifically about awareness of a stroke campaign, as well as generally about their previous knowledge of stroke.When present, partners were invited to participate to fill in any gaps in patients’ accounts, with the emphasis of the interview on patients’ accounts.Patients were asked about their experience of having an acute stroke and of health services, with particular emphasis on their route to the hospital.Patients chose their interview setting and were interviewed once, with the exception of 1 participant who received a follow-up interview.Interviews were conducted between January 2011 and July 2013, and ranged from 15 minutes to 2 hours in length, mean 46 minutes.Interviews were audiorecorded and transcribed verbatim.Field notes were recorded at the end of each interview and similarly transcribed.Transcripts were checked for completeness and accuracy.NVivo 921 was used to manage the data.Researchers took an interpretive approach to data analysis, acknowledging that patients were recalling their perspectives of their experience rather than the “empirical truth,” and with the knowledge that they had experienced a stroke.Initial analysis was conducted with the “1 sheet of paper” method, where for the first interviews all the points raised about patients’ route to the hospital within each interview were noted on a sheet of paper, along with the participants’ pseudonym.22,This allowed the points to be grouped and summarized and provide a basis for development of the main themes.It gave insight into variation in responses between interviews and how themes linked.This 1 sheet of paper method provided the structure for further analysis, onto which the rest of the interview data were added as they were collected.A constant comparison analysis approach was taken, in which sections of data were compared to establish differences and similarities.23,Analysis was conducted at the individual level and by the initial health service provider contacted.This provided the components of the 3 themes outlined below.To ensure analytic rigor, both R.M.M. and S.B. 
Furthermore, R.M.M. and R.J.M. reviewed summary data, discussed it in light of the literature and clinical experience, and referred to the original transcripts to ensure that the emerging interpretation remained grounded in the original data; through this process the final delay categorization was reached. Interviews ceased when data saturation was reached, that is, when no new theme emerged. This happened after 30 interviews had been carried out, which is consistent with the recommended sample size to allow saturation to be achieved in this type of study.24,25 Participants have been sent a lay summary of all study findings, but member checking, either of the study findings or of individual transcripts, has not been conducted. Quotations give the patient's sex, age, and initial service contacted. The London-Queen Square Research Ethics Committee approved this study.
Thirty stroke patients were interviewed, including 6 with their partner. They all lived in an urban area, and the majority of interviewees were men, were white British, were younger than 65 years, and experienced their strokes at home. More than half contacted only 1 service before arriving at the ED and receiving stroke treatment; the remainder had more circuitous routes. Less than half arrived within 3 hours of symptom onset, but many had no onset time recorded in their hospital records. Delays en route to the hospital were defined at 3 levels on the acute stroke pathway: primary delays, which included a lack of recognition of stroke or serious symptoms or a lack of response to those symptoms; secondary delays, which included initial contact with nonemergency health services; and tertiary delays, in which the patient's presenting symptoms were not initially interpreted as indicating a stroke by the health service provider. Patients could potentially be subject to 1, 2, or all 3 levels of delay. The flow of decisions from onset of symptoms until hospital arrival is summarized in the Figure.
For primary delays, the lack of recognition of, or response to, stroke symptoms was influenced by bystanders and by the perceived seriousness of those symptoms. Bystanders were frequently mentioned in accounts of the route to treatment. They became involved because they were present at the time, the patient sought them out, the patient saw them by chance, or they recognized symptoms that the patient was unaware of. Patients frequently reported seeking advice or help from friends, family, or others present at the time to confirm that something was wrong and determine necessary action. "I managed to get on the side of the bed and lift myself up and then I just fell back and I managed to ring … I rang my brother." In other instances, patients were not aware or resisted the idea that something was seriously wrong, and it took other bystanders to persuade or "force" them into seeking help. "I said, 'No, no, I'm all right. I'm all right.' And they sort of bullied me into taking me to …" "I was angry because I mean the girls had persuaded me, or forced me, to go into hospital and I didn't want to go into hospital, let alone be kept overnight."
Several factors affected whether bystanders were able to influence the patient to seek help: the patient's relationship with them, whether they were seen to have some "medical knowledge," their perception of the patient's ability to make a decision at that time, and their level of proactiveness in the situation. In a minority of accounts, a bystander delayed the help-seeking process. Implicit reasons for this included not wanting to take responsibility for the decision and instead contacting someone else who they viewed as able to take it; perceiving the situation to be less urgent or serious than the patient did; or misinterpreting the symptoms and thinking the situation was not serious. Some patients were alone at symptom onset. Depending on the severity of symptoms, such individuals were able to decide whether they wanted to seek help themselves or to wait for someone else to assist; they may not have had the physical or practical ability or mental clarity to contact services and communicate their symptoms on their own. "I was putting the groceries away and I fell … He came unexpectedly … Otherwise I would have lain there, you know, for a long time."
Patients were influenced in their actions by their perception of the seriousness of the symptoms. Moderate symptoms were described as feeling weird or dizzy or having a headache or migraine, whereas patients with limb numbness or facial droop often reported that their symptoms were serious. People who believed the symptoms were serious called EMS, made their own way to the hospital, or telephoned a nonemergency telephone triage service to confirm the significance. "He said the room was spinning round, and I said, well, 'Do you want me to call the doctor?' 'No' was his answer again … and on the third occasion, when he does do it again, he comes back into the room, tries to sit on the bed and, whether he didn't see the bed, or he thought it was there, and the next thing, he's on the floor … I said, 'This is ridiculous; I'm going to call the doctor.'" Symptoms were not perceived to be serious if patients thought they could self-medicate; if they could relate them to a previous illness that had not been serious; if they were in denial; or if their judgment had been clouded. Some younger patients reported that they thought they were too young to have a stroke; therefore, their symptoms could be attributed to something less serious, such as a migraine. "I came downstairs and I was met with …, who said I'd got a migraine. I've never had a migraine before and so I thought, you know, that that's pretty plausible and I'll just go home."
Secondary delays (initial contact with a nonemergency health service) were influenced by uncertainty about the seriousness of the symptoms, previous hospital experience, and ease of access to services. Ideally, patients would contact EMS to take them immediately to the hospital, but some arranged private transportation.26 A minority of patients initially contacted non-EMS health service providers, who were unable to treat or provide direct access to treatment for symptoms of stroke: a nonemergency telephone triage service, family practice, or a walk-in center. Non-EMS providers could refer patients on to a more appropriate service. The bystander quoted below contacted the nonemergency telephone triage service to confirm the seriousness of the symptoms, which resulted in a physician callback, delaying the EMS call.
Similarly, access to family practice could result in an initial delay if stroke symptoms were not recognized when an appointment was booked. "So I called national health helpline; we had a good discussion … They said they would ring us back, which they did, and a doctor spoke to me and said, 'Yes, call an ambulance straightaway,' which we did."
Previous experience of hospitalization could affect desire to attend. One patient had reported a good hospital experience, which reinforced his choice to travel to the hospital; however, another reported a particularly unpleasant recent stroke experience, which contributed to her convoluted route: after initially calling EMS, she did not use the ambulance that arrived but rather waited a day before going to her family practitioner. "We got the ambulance again on Sunday night, and the driver said, 'Oh, how are you feeling …? You know, you're looking all right,' and I said, 'Yes, I feel not too bad actually,' and I did not want to go and spend another night in that horrible ward, so I said I'd stay at home and see." One patient delayed accessing services because he already had a family practice appointment booked. Other patients gave specific reasons for making their own way to the hospital as opposed to calling EMS: going by car would be faster, and it would be easier because there was a car on hand. Some had not considered calling EMS, whereas others were concerned about wasting health service resources.
Tertiary delays, in which health care providers did not initially interpret the patient's presenting symptoms as serious or suggestive of stroke, could occur within the emergency health service or within primary care and result in multiple providers being involved before the patient received appropriate treatment. Most patients contacted EMS or made their own way to the ED, which should have led to urgent treatment. In a minority of cases, participants reported that EMS providers did not interpret their presenting symptoms as serious or suggestive of stroke. As noted earlier, there was one instance when an ambulance crew was involved in the patient's decision not to go to the hospital. Two patients reported that the EMS operations center suggested they contact their family practice. These instances were unusual: one patient was ill on New Year's Eve, and the other had stated to the EMS operations center that he was an alcoholic. Furthermore, conveying information over the telephone potentially leads to poor understanding of symptoms. "Then I rang the 999 straightaway, which in turn put me onto the ambulance station, who told me to go and ring the mobile doctor, which I contacted. He said, 'Well, he's on his way, but he won't be coming for some time yet and it could be 2 hours.'" Two men reported receiving a misdiagnosis in the ED and leaving the hospital rather than being admitted. Hospital staff had thought it was a less serious diagnosis, ie, a virus. Both returned to the ED later. As discussed in the quotation below, although the patient thought his symptoms were serious, he was concerned about being a "bad patient" and about questioning the physician, and this created reluctance to seek further care. "It was playing with my head because I didn't want to waste anybody's time or thinking that I'm like a hypochondriac: 'You know this guy: he's coming but he's not letting the medication sort of take its course or anything,' but it wasn't improving and I was getting worse …"
Although some primary care physicians immediately called EMS on recognizing an individual with symptoms of stroke, others did not organize an emergency admission. Patients who did not get a sense of urgency from primary care could delay further. The patient quoted below refused the offer of an ambulance and delayed her hospital attendance in order to cancel her exercise class. Her example involves both a primary delay, because she deviated from the advice to attend the hospital immediately, having earlier reported that she did not perceive the symptoms to be serious or urgent, and a tertiary delay, because the nurse involved did not insist on the use of EMS. "[She] said, 'I'm going to write a note and I'm going to phone them and say you're on your way.' I said, 'But I've got to let them know at tai chi because they'll wonder what's happening and it's only round the corner at ….' She says, 'You need to go now.' I said, 'Oh, all right.' … And then when I got outside I thought I've got to let them know at tai chi, so I walked from round …"
Three patients attended the family practitioner between 1 day and 2 weeks after the initial stroke. This delay might have influenced the family practitioner's decision not to insist on EMS use. In one case, the family practitioner had concerns about the patient's general health and thus advised against hospital attendance in case the patient contracted an infection. Another was given the choice of an ambulance or making his own way to the hospital; he chose to use public transport. The third patient was told to go to the hospital and was asked whether he was able to get there. However, the patient's means of transport required him to walk home and ask his neighbor to drive him to the hospital. He attributed his decision to use private transport to it being the best use of resources, given the lack of certainty over his diagnosis. From his account, it would appear that he did not disclose to the family practitioner the convoluted route that he would take to the hospital. "She wrote me a letter and sent me straight down to the hospital." When the interviewer asked whether the doctor had suggested calling an ambulance, he replied: "No, because I don't think she was sure that I'd actually had a stroke. I'm sure she suspected; she did ask me did I have somebody with me, and did I have a means to get to the hospital, and I had, you know … Ambulances are for people who really need them." Whenever services redirected patients, the decision on how to proceed depended on the patient's or bystander's response. Sometimes this led to a more convoluted route to the hospital, with 2 or more services contacted before arrival at the ED.
Patients who received a final diagnosis of stroke were purposively recruited according to the initial health service provider contacted at the onset of stroke symptoms.19 However, despite purposive mailings, it was difficult to recruit individuals who used non-EMS routes, and recruitment depended on patients' responding to written requests for participation. Similarly, fewer women agreed to be interviewed, and patients who required consultant consent, could not speak English, had severe aphasia, or were too ill or had died were excluded from the study, so their perspectives are not represented. Because the sample was restricted to patients with a final diagnosis of stroke, excluding those with symptoms of stroke but a different diagnosis, it is not possible to comment on the implications for their treatment, for which a less urgent response may be more appropriate. Similarly, patients with more severe stroke were less likely to be included and may have had different experiences. It is also possible that patients with less positive health service experiences were more likely to agree to be interviewed because they wanted to be able to tell their story.
Although patients were recruited from a limited sample of 2 hospitals, the local stroke services available were reflective of current national practice.27 Health care organization varies from country to country, but the ability to call an ambulance or instead contact another health care provider is common to most Western countries, and hence the delays considered here are widely relevant, albeit potentially involving different providers in different countries. For example, a health maintenance organization might require initial contact with a triage service in some circumstances, potentially leading to delays should a patient or triage officer not recognize symptoms immediately. A further limitation was that some patients had difficulty recalling the details of their route to the hospital. Reasons for this included interviews being conducted several weeks after the event and patients being asked about a time when they were unwell and hence had impaired recollection. Furthermore, by the time of the interview, participants had received a diagnosis of stroke, and this knowledge may have influenced their perceptions of their earlier memories. The presence of partners in 6 of the interviews may have influenced how patients presented their narratives28; however, it assisted in filling any gaps in patients' memories, and their presence was appreciated in terms of moral support.29 Furthermore, all of the partners who attended had been present during the patient's route to the hospital.
Patients experienced a range of out-of-hospital delays: primary delays because of a lack of recognition of stroke symptoms or of an appropriate response to them; secondary delays because a nonemergency health service was initially contacted; and tertiary delays, in which the health service did not recognize the stroke. Key to patient decisionmaking and primary and secondary delays were the presence and influence of significant bystanders, who could expedite or delay access to treatment. Decisions to choose a certain route were influenced by the perception of the seriousness of symptoms, previous hospital experience, and ease of access to services. Tertiary delays were influenced by whether the health service provider interpreted the patient's presenting symptoms as serious or suggestive of stroke.
Previous studies have focused on primary patient-related delays slowing down stroke patients' route to the hospital.16,17 The present study highlights that delays can occur on a number of additional levels, including secondary delays caused by initial misdirection and tertiary delays related to the health service. Even when patients reacted immediately and contacted appropriate services, misdirection by health service providers had a significant influence. Previous studies have noted that some family practices can delay patients' arrival at the hospital by organizing a home visit,30 by not arranging for the patient to be taken to the hospital, or by not stressing the urgency of arriving at one.15 This study has found additional sources of delay farther along the stroke pathway, up to and including the ED. This study highlighted the importance of bystanders in primary and secondary decisions in the route to the hospital, mostly in a positive way, although some of our patients actively resisted bystanders' making decisions. Mackintosh et al15 reported patients using bystanders to avoid taking responsibility, which generally caused delay; patients perceived that having bystanders contact EMS removed the responsibility from themselves. Moloczij et al,17 Jones et al,16 and Harrison et al30 also reported negative instances. The present study highlights the range and importance of patients' perceptions of symptoms. Mackintosh et al15 reported a range of perceptions, with some patients ignoring symptoms in the hope that they would "go away," and found that patients who did not regard their symptom onset as significant might delay seeking attention. Moloczij et al17 emphasized the importance of feeling pain and how the lack of this in most stroke patients could result in initial contact with nonemergency services. Quantitative studies have linked neurologic severity with delay in arriving at the hospital.13,14,31,32 Our findings and the stroke-specific literature show striking similarities to several decades of research on help-seeking behavior among people experiencing acute myocardial infarction. For example, Kirchberger et al33 found misinterpretation of symptoms of heart attack to be associated with delaying the call for help. Dubayova et al34 reported from their systematic review that intensity of fear was associated with earlier help-seeking. Classic studies from Nottingham, United Kingdom,35 and Rotterdam, the Netherlands,36 reported significantly longer delays in hospitalization and initiation of reperfusion therapies when patients sought advice from their primary care physician rather than calling an ambulance.
The decision by health service providers on how best to respond to initial patient presentation is crucial and is often made by receptionists or ambulance dispatchers. The present study highlights the importance of nonemergency services in directing patients toward emergency care in acute stroke. Family practitioners should emphasize the urgency of ED attendance and arrange ambulance transportation when referring patients with suspected stroke to the hospital. Warning hospitals or providing a patient referral letter to expedite the patient's journey to the hospital after initial secondary delays may not be as effective as ambulance alerting.12,37,38 Further training in stroke recognition should be considered for nonclinically staffed, nonemergency telephone services to avoid secondary delays being compounded, leading to worse outcomes. This is particularly important, given that only 3% of EMS calls for stroke include more than 1 of the symptoms of facial asymmetry, arm weakness, or speech disturbance,39 although a balance needs to be struck to ensure that service providers do not become overly risk averse and send too many patients to emergency care, which could overload the system. Patients may not be the best judges of the seriousness of their symptoms; therefore, bystanders can be extremely important in their seeking care. Campaigns could encourage members of the public to assist when symptoms of stroke are suspected. Furthermore, current campaigns aimed at ensuring the correct use of EMS must be cautious not to dissuade people from seeking emergency care if they are uncertain whether their symptoms are serious. Members of the public should not be expected to always make the best decision during a medical crisis; rather, the health service organization should direct them appropriately, whatever the initial point of contact.40
Limited data from a recent systematic review of UK literature on awareness of and response to stroke symptoms revealed a good level of knowledge of the 2 commonest stroke symptoms and of the need for an emergency response among the general public and at-risk patients. Despite this, less than half of patients recognized they had experienced a stroke. Symptom recognition did not reduce time to presentation. For the majority of patients, the first point of contact for medical assistance was a primary care physician.41 The English mass media campaign Act FAST aimed to raise stroke awareness and to emphasize the need to call emergency services at the onset of suspected stroke. Although some stroke patients and witnesses reported that the campaign affected their stroke recognition and response, the majority reported no effect. Clinicians have often perceived the campaign as successful in raising stroke awareness, but few have thought it would change response behaviors.42 These findings were confirmed in a subsequent systematic review by the same research group.43
In summary, there are several points en route to the hospital at which patients or health service providers can potentially delay access, affecting patients' ability to receive timely assessment and treatment. Patients have described delays caused both by themselves and by the health professionals who responded to their initial presentation. Bystanders appear to be important in the decisionmaking process, both in initiating action in the face of symptoms of stroke and in deciding what action to take. Future stroke public awareness campaigns should encourage members of the public to assist when signs of stroke are recognized and to direct patients to emergency services. Potential delays caused by health professionals could be reduced through training for first-point-of-contact health service providers to assist them in recognizing symptoms and in ensuring that patients with possible stroke are treated as emergencies.