700
Modelling to inform prophylaxis regimens to prevent human rabies
Considerable efforts are underway to reduce the burden of rabies, with the goal of reaching zero human deaths due to dog-mediated rabies by 2030. The main strategies to prevent human rabies are canine vaccination, to eliminate rabies at its source, and Post-Exposure Prophylaxis (PEP), which comprises wound washing, vaccination and, when indicated, Rabies Immunoglobulin (RIG), for individuals bitten by suspect rabid animals. An additional preventive strategy is Pre-Exposure Prophylaxis (PrEP), whereby vaccine is given to prime the immune system. Individuals who have received rabies PrEP still require PEP, but need fewer doses than unprimed individuals and do not require RIG. To support the development of practical and feasible recommendations for human rabies prevention, WHO established a Strategic Advisory Group of Experts (SAGE) Working Group on rabies vaccines and immunoglobulins in 2016. As part of this process, we compared PEP regimens, approaches to RIG administration and PrEP to assess their relative merits. PEP is extremely effective if administered promptly to exposed individuals, and several regimens have been recommended. Intradermal (ID) regimens are more cost-effective than intramuscular (IM) regimens because smaller volumes of vaccine are used to elicit a clinically equivalent immune response. So far only a few countries have adopted ID vaccination, with several factors likely contributing. Fractionated vials should be discarded within 6–8 h to minimize risks of bacterial contamination, which is often perceived as waste. Inexperienced clinicians may consider ID vaccination to require more skill, and fear that smaller doses are less protective. Rabies vaccine is available in 0.5 mL or 1 mL vials. Using standard syringes with mounted needles, clinicians usually obtain four ID doses of 0.1 mL from 0.5 mL vials and eight from 1 mL vials, with wastage of 20%. Use of more expensive insulin syringes, with built-in needles and no dead space, prevents such wastage, and the same needle can be used to withdraw vaccine and inject the patient. RIG is recommended for severe exposures to potentially rabid animals and all bat exposures, providing passive immunity while the vaccine elicits an active immune response. The previous WHO recommendation calculates dosage according to patient body weight, with as much RIG as is anatomically possible administered into and around the wound and the remaining product injected intramuscularly distant from the wound. New evidence led to revised protocols for administering RIG at the wound only. RIG vials can be shared between patients using single-use injection devices, but opened vials should also be discarded at the end of each day. A systematic review on the safety, immunogenicity, cost-effectiveness and recommendations for use of rabies PrEP concluded that PrEP “is safe and immunogenic and should be considered: where access to PEP is limited or delayed; where the risk of exposure is high and may go unrecognized; and where controlling rabies in the animal reservoir is difficult”. Rabies PrEP programmes have been implemented in Peru and the Philippines. Offering widespread PrEP, for example within routine EPI programmes in rabies-endemic countries, raises practical and operational difficulties, as delivering multiple doses within short time scales lies outside the standard programme. However, if PrEP could be a cost-effective method to prevent human rabies, ways to overcome these challenges should be considered. We developed models to quantitatively assess the potential benefits and costs of these strategies
for prevention of dog-mediated rabies.We updated a simulation to compare PEP regimens .Briefly, our algorithm involved: assigning patient presentation dates uniformly based on clinic throughput; generating patient return dates based on specified regimens and patient compliance; calculating daily vial use; iterating steps 1–3 to capture variation.Direct costs for vaccines and their administration and indirect costs were taken from published data and expert consultation.We assumed that vaccine administration time is equivalent for all regimens.For this analysis we did not include RIG because it is rarely available to bite victims in endemic countries .We explored vial use according to:Clinic throughput: monthly numbers of bite patients presenting to clinics to initiate PEP.Total presentations depend on the regimen, its schedule requirements, clinic accessibility and patient compliance.Vial size: most rabies vaccines are sold in 0.5 mL or 1 mL vials, at equal cost.Vial size affects numbers of doses that can be withdrawn, as does syringe type.Vaccine wastage: vaccine from opened vials must be used within 6–8 h or discarded.We assumed use of WHO pre-qualified rabies vaccines, with 0.1 mL doses for all regimens.Syringe type: We compared costs of using insulin syringes that reduce waste compared to standard syringes.For all regimens we assumed use of an additional syringe per vial to reconstitute the vaccine.Patient compliance: the probability of a bite patient completing PEP vaccination.Whatever its cause, poor compliance has consequences for vaccine use, vial sharing and PEP efficacy.We did not consider variability in return dates.We ran 1000 realisations for each scenario to capture variation in patient presentation dates and vial sharing.We compared costs for bite victims depending upon pricing strategies and indirect costs.We assume bite victims travel further to reach rural clinics compared to urban clinics and incur correspondingly higher costs, spanning the range from $2.5 to $15 per clinic visit .To investigate limited vaccine supply we assessed the maximum number of patients that could be treated with a given volume of vaccine under different regimens.We undertook a similar analysis for RIG using data collected over 12-months from Himachal Pradesh, India.Due to limited RIG availability, patients were administered RIG under a dose-sparing approach of infiltration of the wound only.All survived on follow-up .We used bootstrap sampling of these data, where patient weight was measured, to capture variability in RIG use under two scenarios: infiltration at the wound with the remainder administered intramuscularly distant from the wound; and infiltration of the wound only.We assumed opened RIG vials were discarded at the end of each day and examined a range of clinic throughputs using 5 mL ERIG vials containing not less than 300 IU/mL, as available in Himachal Pradesh.We took two approaches to quantify the potential benefits and relative costs of including rabies PrEP within a routine EPI schedule in endemic settings:We developed a simple simulation model to estimate the relative cost of PrEP plus PEP versus PEP alone.This cost ratio largely depends on the incidence of dog bites and the cost per course of PrEP plus PEP vs PEP alone.Bite incidence: The incidence of dog bites in endemic settings has been reported to vary from around 12 per 100,000 population to around 1200 per 100,000 population .A more recent systematic review covering 2013–2015 reported typical bite incidence in the range 
10–130 per 100,000 per year. The highest reported bite incidence we identified in the literature is 4840 per 100,000 in rural Cambodia, which is far higher than reported in any other setting. Since PEP costs are only relevant for individuals who seek care, crude bite incidence should be modified by the proportion of people seeking care. We modelled a typical range of care-seeking bite victims of 10–500 per 100,000 population per year. Costs of PrEP and PEP: We did not model all regimens considered for PEP and PrEP individually, assuming any differences in rabies prevention would be marginal. We assumed costs of PEP for those who have been primed to be the same as for PrEP, but this does not fully capture the variation in all of the recommended regimens. Although it is the relative rather than absolute costs that are important, for simplicity we assumed that dose-sparing regimens would be used, giving PrEP costs of between $5 and $20 and PEP costs in naïve individuals of $10–160, with the upper end of this range high enough to include RIG provision. We did not consider tetanus vaccination, which is often given alongside PEP. Simulation: Assuming that both bite incidence and costs of PrEP and PEP followed a uniform distribution, we ran 10,000 simulations to estimate the ratio of costs for a hypothetical cohort of 100,000 children with high EPI vaccine uptake. We assumed, optimistically and based on limited data, that protection from PrEP lasted for 20 years, and that dog bites are most common in children. Future costs were not discounted. A model was previously developed to estimate the cost-effectiveness of PEP alone to prevent human rabies deaths versus PEP plus dog vaccination for N’Djaména, Chad. To evaluate the cost-effectiveness of PrEP in the same setting, we compared a scenario incorporating PrEP vaccination of a yearly cohort of children with these published scenarios. For all scenarios we assumed maximum access to PEP and that communication between the veterinary and human health sectors guided treatment. The number of suspected rabies exposures was taken from data collected in 2012 in approximately 30% of all health facilities in N’Djaména, including all public health centres and hospitals, prior to a mass dog vaccination campaign. Suspicion was defined based on the animal status and was attributed to bites from unvaccinated animals and animals that died, vanished, or were killed without laboratory diagnosis within 10 days of the bite. The Essen 5-dose IM regimen is used in N’Djaména, with PEP costing 198 USD including transport and personnel, but not RIG, which is unavailable in Chad. We considered PrEP costs of 83 USD covering 3 vaccine doses, transport, loss of work time and personnel for administration. We assumed that pre-vaccinated children require 2 additional vaccine doses if exposed, amounting to 66.5 USD. We assumed PrEP coverage of 55% based on observed "measles 1" vaccination coverage in Chad. This is probably optimistic, as a full rabies PrEP schedule requires 2 visits rather than just 1 for measles. To achieve this coverage, approximately 57,270 children must be vaccinated annually in N’Djaména. We simulated the change of PrEP coverage in children under 15 years over a 20-year period using a demographic model with data from the 2009 national census. Multiplying this coverage by the percentage of children among exposure victims generated an increasing number of children requiring two PEP doses instead of 5. The overall cost of this scenario is the sum of costs for PrEP, PEP for pre-vaccinated
children and PEP for unvaccinated children and adults. Cumulative costs were discounted at a rate of 0.04, and DALYs averted were based on a 19% risk of developing rabies after exposure by a suspect rabid animal. Analyses were conducted using R version 3.4.1, except for the N’Djaména demographic model, which was conducted in Vensim. Code and data for reproducing the PEP and RIG simulations are available at https://github.com/katiehampson1978/Modelling_rabies_regimens. Overall, ID vaccination is always more cost-effective than IM vaccination: it is less costly for health providers and costs patients either less than or the same as IM vaccination. The cost-effectiveness of IM vaccination does not change with clinic throughput, whereas for ID vaccination cost-effectiveness increases with throughput. The 1-week 2-site ID regimen is the most cost-effective regimen in all settings, because fewer vials are used per course than for IM vaccination, even without vial sharing. The requirement for patients to return to clinics for subsequent doses means opportunities for vial sharing occur even in low-throughput settings. In high-throughput clinics the 1-week 2-site ID regimen uses 85% fewer 1 mL vials than IM regimens. Use of insulin syringes rather than standard syringes with mounted needles further reduced costs, particularly in clinics receiving >10 new bite patients each month. The same qualitative patterns were observed with lower patient compliance; these results are therefore not presented. ID regimens are dose-sparing and have greater potential to treat more patients given limited vaccine supply. Given an annual supply of 3000 vials, the 1-week 2-site ID regimen can treat five times as many patients as IM regimens. Where PEP is provided free-of-charge, the 3-week IM and 1-week ID regimens are preferable for patients, who incur only indirect costs, as both require just 3 clinic visits. When patients must pay for PEP, the most preferable regimen depends on pricing strategies and relative travel costs, but ID regimens are always preferable to IM. Infiltration of RIG at the wound only results in considerable savings that increase with patient throughput, as vials can be more effectively shared between patients using single-use injection devices. Moreover, when available vials are limited, many more patients can be treated if RIG is only administered at the wound site. In the clinic in Himachal Pradesh, around 270 patients are seen per month, requiring approximately 262 RIG vials if injected at the wound only, versus 370 vials if the remainder is administered distant to the wound, a 40% reduction. Use of PrEP plus PEP boosters was at least twice as expensive as PEP alone in 75% of simulations. In some simulations where bite incidence was low and PEP costs in naïve individuals were also relatively low, the ratio was in the range of 100–200. In 4% of simulations the ratio was ≤1, meaning that PrEP plus PEP was less expensive than PEP alone; here both bite incidence and relative PEP costs in naïve individuals were very high. The annual number of suspect rabies exposures for N’Djaména based on reported bite patients was 374, of which 42% were children. This is a conservative estimate, since the survey did not cover all pharmacies and private medical facilities. We estimate that after 20 years the cumulative cost of PrEP plus PEP was over fifty times higher than the cumulative cost of either PEP alone or PEP with mass dog vaccination. The cost per DALY averted is 3242 USD, versus just 43 USD using only PEP. In practice, 100% access to PEP is impossible to achieve,
but even when compared to observed PEP use in N’Djamena , which prevents rabies at a cost of 171 USD per DALY averted, PrEP remains much less cost-effective.PrEP coverage among children stabilizes at around 35% after 35 years, whereas coverage in adults stabilizes at 35% only after 40 years.The models we developed provide evidence to support practical and feasible recommendations for human rabies prevention in different clinical settings.The safety and efficacy of ID administration of rabies PEP has been recognized for decades .New data supports the clinical efficacy of an abridged ID regimen , and confirms that ID vaccination can be safely completed within one week .Our results further establish that ID vaccination could considerably reduce vial use, mitigate shortages and provide more equitable access by making PEP more affordable.Specifically, the 1-week 2-site ID regimen was the most cost-effective regimen, with further advantages of enabling PEP to be completed within 1 week, requiring only 3 clinic visits and treating more patients when vaccine is in short supply.We found similar advantages of RIG administration to the wound only, an advantage that is more pronounced in higher throughput clinics.We found that PrEP would be substantially more expensive than other measures to prevent human rabies deaths, such as PEP and mass dog vaccination in almost all settings.Clinic throughput affects capacity for vial sharing, and therefore the cost-effectiveness of ID versus IM vaccination.But, ID regimens always use at least 25% fewer vials than IM, and as throughput increases, ID regimens become increasingly cost-effective, using up to 85% fewer vials.ID vaccination is safe, well-tolerated and clinically efficacious .Health workers routinely deliver BCG immunizations intradermally , so there should be no technical difficulty in switching to ID administration.Use of insulin syringes should further reassure clinicians and reduce wastage as more accurate vaccine volumes can be injected.Single-use prefilled injection devices could eliminate vaccine waste and would be more user-friendly therefore should be considered for future development if costs can be kept low.Similarly, research into vaccine preservation could enable more economical vaccine use .However, most critically, only two human rabies vaccines are currently WHO prequalified .Prequalification is an important step to accelerate widespread adoption of the new abridged 1-week 2-site ID regimen and should be encouraged.Our model has several simplifications: we assume that the day of the week does not affect the likelihood of presenting for PEP.But patients may be less likely to present on Sundays and/or more likely to present on Mondays or other days, which may affect vial sharing.We also only consider a range of travel costs to clinics but in practice the location of clinics and transport access would affect indirect costs and delays to PEP provision.Moreover, we do not consider clustering of presentations as frequently occurs due to the same dog biting multiple people , which increases opportunities for vial sharing.Finally, an opened vial may be additionally used for PrEP in non-bitten family members or individuals whose occupation increases their risk of exposure.We also find major advantages of RIG administration to the wound site only.This allows many more patients to be treated given limited RIG availability, an advantage that is more pronounced in high throughput settings.This analysis assumes patients comparable to those from 
Himachal Pradesh in terms of body weight and wounds.Preservatives could enable opened RIG vials to be safely stored at 2–8 °C for up to 28 days increasing vial sharing opportunities, which should be further investigated.Our results showed that mass pre-exposure vaccination for rabies is unlikely to be an efficient use of scarce resources.We assume PEP will always be necessary after exposure to a potentially rabid animal, therefore there are limited health benefits and substantial costs associated with PrEP.Use of PrEP in the EPI schedule targets many more children than are likely to be exposed to rabies and unlike most other infectious diseases, this risk is identifiable.Of course, this assumes that PEP is available, which may not always be the case.However, in view of vaccine shortages observed in some countries, diverting vaccine from PEP to PrEP could be fatal for exposed victims and marginalized communities that already have limited PEP access and would likely also have limited PrEP access, compounding health inequalities.Any country seriously considering routine PrEP should assess relative effectiveness and cost-effectiveness of this approach compared to other measures, using models informed by local epidemiology.We used two modelling approaches, one generic and one context-specific, to address the potential costs and cost-effectiveness of PrEP.For the hypothetical birth cohort we simulated a range of bite incidence and relative costs.We did not take into account differential immunogenicity as all regimens are expected to be clinically equivalent, nor did we account for age-specific variation in dog bites.In the specific example, parameters were based upon extensive field studies.We did not model PrEP use in other settings but there are additional studies reported in the literature.One from Thailand similarly found that bite incidence would need to be much higher than has been observed to make PrEP cost-comparable .These analyses suggest that investing in PEP, and/or mass dog vaccination, will be preferable to PrEP.Our findings are in agreement with the recent systematic review of PrEP .Even if the price of rabies vaccine were considerably lower, the marginal cost-effectiveness of PrEP is still likely to be less favourable than PEP or dog vaccination, simply because many more individuals need to be targeted.Overall, we find that ID is more economical than IM vaccination.The 1-week 2-site ID regimen is the most cost-effective and could enable many more bite victims to be equitably treated with the same volume of vaccine.We recommend use of insulin syringes to efficiently administer ID vaccination and reassure clinicians.Moreover, we encourage further efforts to prequalify rabies vaccines to accelerate adoption of ID vaccination.Where RIG is available, it could be delivered to most patients by infiltration at the wound only.We find that PrEP as part of the EPI programme is highly unlikely to be an efficient use of resources and should only be considered in extreme circumstances, where incidence of rabies exposures is high in populations which cannot access timely PEP.Modelling could be used to support decision making in specific high-exposure contexts.Caroline Trotter reports consulting payments from GSK in 2018 and a consulting payment from Sanofi-Pasteur in 2015 on unrelated topics.The Institut Pasteur in Cambodia received non-nominative grants from Sanofi to develop rabies prevention materials for dog bite patients and World Rabies Day information campaigns for the general public.ML 
was supported by the Gavi Alliance Learning Agenda.
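The published code is available at the GitHub repository cited above. Purely as an illustration of the daily vial-sharing logic described in the Methods (presentation dates drawn uniformly according to clinic throughput, regimen-specified return visits, daily vial use with opened vials discarded at the end of the day, and many realisations per scenario), the following R sketch is not the authors' code; the function and argument names are hypothetical, and the 1-week 2-site ID schedule (two 0.1 mL injections on days 0, 3 and 7) and four doses per 0.5 mL vial with standard syringes follow the description in the text.

simulate_pep_vials <- function(monthly_throughput = 10,
                               visit_days = c(0, 3, 7),   # assumed 1-week 2-site ID schedule
                               doses_per_visit = 2,        # two 0.1 mL ID injections per visit
                               doses_per_vial = 4,         # 0.5 mL vial with standard syringes
                               compliance = 1,             # probability of completing the course
                               days = 30) {
  # 1. Assign presentation dates uniformly across the month (clinic throughput)
  start <- sample.int(days, monthly_throughput, replace = TRUE)
  # 2. Generate return dates from the regimen schedule and patient compliance
  completes <- runif(monthly_throughput) < compliance
  visits <- unlist(lapply(seq_len(monthly_throughput), function(i) {
    start[i] + (if (completes[i]) visit_days else visit_days[1])
  }))
  # 3. Daily vial use: opened vials are shared within a day, then discarded
  per_day <- table(factor(visits, levels = seq_len(max(visits))))
  sum(ceiling(as.integer(per_day) * doses_per_visit / doses_per_vial))
}

# 4. Iterate to capture variation (the text describes 1000 realisations per scenario)
set.seed(1)
summary(replicate(1000, simulate_pep_vials(monthly_throughput = 25)))

Increasing monthly_throughput in this sketch shows the qualitative effect reported in the Results: the more patients present, the more often partially used vials can be shared before being discarded.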
Background: The Strategic Advisory Group of Experts (SAGE) Working Group on rabies vaccines and immunoglobulins was established in 2016 to develop practical and feasible recommendations for prevention of human rabies. To support the SAGE agenda we developed models to compare the relative costs and potential benefits of rabies prevention strategies. Methods: We examined Post-Exposure Prophylaxis (PEP) regimens, protocols for administration of Rabies Immunoglobulin (RIG) and inclusion of rabies Pre-Exposure Prophylaxis (PrEP) within the Expanded Programme on Immunization (EPI). For different PEP regimens, clinic throughputs and consumables for vaccine administration, we evaluated the cost per patient treated, costs to patients and potential to treat more patients given limited vaccine availability. Results: We found that intradermal (ID) vaccination reduces the volume of vaccine used in all settings, is less costly and has potential to mitigate vaccine shortages. Specifically, the abridged 1-week 2-site ID regimen was the most cost-effective PEP regimen, even in settings with low numbers of bite patients presenting to clinics. We found advantages of administering RIG to the wound(s) only, using considerably less product than when the remaining dose is injected intramuscularly distant to the wound(s). We found that PrEP as part of the EPI programme would be substantially more expensive than use of PEP and dog vaccination in prevention of human rabies. Conclusions: These modeling insights inform WHO recommendations for use of human rabies vaccines and biologicals. Specifically, the 1-week 2-site ID regimen is recommended as it is less costly and treats many more patients when vaccine is in short supply. If available, RIG should be administered at the wound only. PrEP is highly unlikely to be an efficient use of resources and should therefore only be considered in extreme circumstances, where the incidence of rabies exposures is extremely high.
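The cohort cost-ratio simulation described in the Methods above (uniform sampling of care-seeking bite incidence and of PrEP and PEP costs, a cohort of 100,000 children, 20 years of assumed PrEP protection, no discounting) can be sketched in a few lines of R. This is an illustration under the stated assumptions rather than the authors' model; in particular, pricing each bite in the PrEP arm as a boosted PEP course at the PrEP cost follows the simplification described in the text, and the exact bookkeeping of the published model may differ.

set.seed(1)
n_sims <- 10000
cohort <- 100000     # hypothetical birth cohort with high EPI uptake
years  <- 20         # assumed duration of PrEP protection

bite_rate <- runif(n_sims, 10, 500) / 100000   # care-seeking bite victims per person-year
prep_cost <- runif(n_sims, 5, 20)              # PrEP course; also assumed cost of boosted PEP
pep_cost  <- runif(n_sims, 10, 160)            # full PEP course in unprimed individuals

bites <- cohort * bite_rate * years            # expected care-seeking bites over 20 years
cost_prep_plus_pep <- cohort * prep_cost + bites * prep_cost
cost_pep_alone     <- bites * pep_cost

ratio <- cost_prep_plus_pep / cost_pep_alone
summary(ratio)    # distribution of the cost ratio across simulations
mean(ratio <= 1)  # fraction of simulations in which PrEP plus PEP is cheaper than PEP alone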
701
Granulocyte Macrophage Colony-Stimulating Factor-Activated Eosinophils Promote Interleukin-23 Driven Chronic Colitis
Chronic intestinal inflammation is characterized by dysregulated T helper 1 (Th1) and Th17 cell and innate lymphoid cell responses with excessive production of inflammatory cytokines, leading to increased production of granulocyte-monocyte progenitors (GMPs) and accumulation of inflammatory myeloid cells in the target tissue. Previously, we described an interleukin-23 (IL-23)-granulocyte macrophage colony-stimulating factor (GM-CSF) axis as a key driver of dysregulated hematopoiesis in colitis; however, the relative contribution of distinct innate effector cells downstream of this pathway remains unknown. Neutrophils are considered a major culprit in IL-23-Th17-cell-type-mediated tissue damage, while the pathogenic role of eosinophils has primarily been established for Th2 cell-mediated conditions such as allergic skin and lung disease. Eosinophils, which arise from GMPs through an eosinophil progenitor intermediate, are rare in the blood but more abundant in tissues such as the gastrointestinal tract, although their contribution to intestinal homeostasis remains enigmatic. Beyond their role in Th2 cell immunity, eosinophils secrete various inflammatory mediators and have been implicated in activation of dendritic cells and neutrophils. They can also release anti-microbial compounds toxic for viruses and bacteria and promote the survival of immunoglobulin A-secreting plasma cells in the intestine, suggesting a possible anti-microbial function. A dysregulated eosinophil response can cause immune pathology, and this is most evident in atopic diseases such as asthma and eczema, Th2 cell-mediated eosinophilic esophagitis, and hypereosinophilic syndrome. However, the molecular signals that drive eosinophils from protective to tissue-damaging cells are ill-defined and require further characterization. Similar to neutrophils, eosinophils produce a range of cytotoxic mediators: matrix metalloproteinases and reactive oxygen species, as well as specific proteins such as eosinophil peroxidase and eosinophil cationic protein. These molecules are toxic for invading microorganisms but can also lead to collateral damage to host tissues, including the intestinal epithelium. Indeed, intestinal eosinophil accumulation has been implicated in the pathogenesis of a chemically induced model of acute colonic injury, and increased eosinophil numbers and activation have been reported in inflammatory bowel disease. However, despite their abundance in the intestine, the regulation of eosinophils by colitogenic cytokines and their functional role in chronic intestinal inflammation are not known. Our previous work identified IL-23-driven GM-CSF as a key mediator of chronic inflammation in T cell transfer colitis. GM-CSF promoted intestinal inflammation at several levels, including skewing of hematopoiesis toward granulo-monocytopoiesis and accumulation of highly proliferative GMPs in the intestine. Using experimental models of chronic colitis, we now show that GM-CSF promoted IL-23-driven intestinal inflammation through local accumulation of activated eosinophils and potentiation of their effector functions. In addition, it promoted bone marrow eosinopoiesis in synergy with IL-5. Because IL-23 is a well-known driver of the Th17 cell response, these results provide evidence of a link between the Th17-cell-type response and eosinophils in intestinal inflammation and suggest that targeting the GM-CSF-eosinophil axis might have therapeutic utility in some forms of IBD. To investigate the relative contribution of granulocyte subsets to chronic intestinal inflammation, we
used a well-characterized T cell transfer model of IL-23 driven colitis.In this model, chronic colitis develops ∼6 weeks after transfer of T cells into Rag1−/− mice and is accompanied by increased granulopoiesis.Standard markers were used to discriminate eosinophils from Siglec-F−Gr1hi neutrophils.We also used the level of expression of Siglec-F as a measure of eosinophil activation.Thus, Siglec-Fint cells are immature or resting eosinophils that reside in lymphoid organs and in uninflamed tissues, whereas Siglec-Fhi cells are mature or activated eosinophils mostly found in extralymphoid tissues and increased during inflammation.In chronic colitis, we found both populations among colonic lamina propria leukocytes, with high granularity and expression of the eotaxin receptor CCR3 confirming them as eosinophils.Percentages of Siglec-Fhi eosinophils were as high as those of neutrophils in the inflamed intestine, both of which were ∼2-fold increased in colitic compared to control Rag1−/− mice.This increase was equivalent to a ∼40-fold increase in absolute numbers.The abundance of intestinal eosinophils was confirmed in situ, with a high density of Siglec-F+ cells observed in inflamed colons.IL-23-deficient Rag1−/− mice, which only develop mild colitis after T cell transfer, had a reduced absolute number and percentage of eosinophils among CD45+ leukocytes compared to colitic IL-23 competent mice, suggesting a link between intestinal eosinophil accumulation and IL-23-driven inflammation.Eosinophils in the inflamed intestine showed increased activation based on a number of parameters.First the majority express high amounts of Siglec-F, a phenotype associated with activation in inflammatory lung disease.Consistent with their increased activation state, Siglec-Fhi eosinophils in the inflamed intestine also expressed higher amounts of other eosinophil activation markers such as CD11b, IL-33R, and Gr1.These activated eosinophils in colitis also expressed the degranulation marker CD63.Such cells were 1.7-fold higher in the colon of colitic mice compared to controls with a 60-fold increase in total numbers.Finally, while resting Siglec-Fint eosinophils in the BM and spleen secreted negligible amounts of TNF, 14% of eosinophils in the inflamed colon were positive for TNF, a crucial colitogenic cytokine.Interestingly, protection from colitis by co-transfer of Foxp3+ regulatory T cells with colitogenic CD4+ T cells reduced not only cLPL neutrophil numbers but also eosinophil accumulation and activation, indicating similar inhibitory effects of Treg cells on neutrophils and eosinophils in this model.Taken together, these data show that eosinophils are a major constituent of the IL-23-driven intestinal inflammatory network.Although a predominant population of eosinophils in the normal intestinal mucosa had a resting Siglec-Fint phenotype, a shift toward an activated Siglec-Fhi population with signs of degranulation occurred during chronic intestinal inflammation.As we observed sustained accumulation of eosinophils in chronic intestinal inflammation, we next sought to investigate the role of eosinopoiesis in this process.Eosinophils can increase their lifespan from a few days to a few weeks within inflamed tissues, therefore we could not exclude that the abundant stock of preformed eosinophils in the BM fuelled tissue accumulation without major changes in eosinopoiesis.We first examined this in T cell transfer colitis and found that a striking 3.5-fold expansion of EoPs correlated with a substantial 
increase of eosinophils in the BM of colitic mice compared to controls.There was also a 2-fold increase in the percentage of BM eosinophils positive for the gut-homing α4β7 integrin.Consistent with these results, ∼7% of CD4+ T cells in the inflamed intestine expressed the eosinopoietin IL-5.GM-CSF, which can act in synergy with IL-5 to stimulate eosinopoiesis, was also increased in the inflamed colon compared to controls and ∼40% of CD4+ T cells were GM-CSF+ in colitis.Next we investigated eosinopoiesis in a lymphocyte replete model of colitis to ensure our results were not a consequence of altered myelopoiesis in Rag1−/− hosts.For this we used a well-described model of colitis following Helicobacter hepaticus infection and concomitant blockade of the IL-10-IL-10R pathway.In this model, there was a similar increase in EoPs and BM eosinophils, as well as accumulation of activated TNF-secreting eosinophils in the colon.Mature granulocytes in peripheral tissue are described as post-mitotic and indeed eosinophils that accumulated in the inflamed colon stained negative for BrdU after a 16 hr pulse-chase assay, whereas almost half of eosinophils developing in the BM had incorporated the dye at this time.In contrast, BrdU+ eosinophils appeared in the intestine only 2–3 days after initial BrdU pulsing, suggesting that the increase in colonic eosinophils is supported by sustained BM eosinopoiesis.Thus, the large accumulation of eosinophils in the inflamed intestine was supported by a significant increase in eosinopoiesis, giving rise in the BM to newly formed eosinophils that were preferentially tagged for intestinal migration.GM-CSF can directly stimulate eosinopoiesis and eosinophil survival.To test whether the GM-CSF-eosinophil pathway is pathological in colitis in a lymphocyte-replete setting, we turned to the Hh and anti-IL-10R colitis model described above.Lack of a GM-CSF-Rβ signal in Csf2rb−/− mice reduced EoP and eosinophil increases in the BM in this chronic model of colitis.A change in eosinopoiesis was accompanied by a ∼90% decrease in percentages of intestinal eosinophils in Csf2rb−/− versus WT infected mice, whereas percentages of neutrophils and CD4+ T cells were similar.This alteration in the composition of the cellular infiltrate correlated with a significantly reduced colitis score in Csf2rb−/− compared to WT mice and a decrease in the ratio of activated to resting eosinophils in the intestine.It was notable that the decrease in intestinal eosinophils in Csf2rb−/− mice occurred at steady state, whereas the lack of IL-23R signaling had no effect on the accumulation of eosinophils in the normal intestine.Next we utilized mixed WT and Csf2rb−/− BM chimeras to distinguish cell-intrinsic from non-cell-autonomous secondary effects.While cell-intrinsic GM-CSF-Rβ signaling was not required for accumulation of neutrophils and monocytes in the inflamed intestine, cell autonomous GM-CSF-Rβ signaling was required for eosinophil accumulation in colitis.Together, these data indicate that GM-CSF-Rβ chain signaling promotes eosinophilia and colitis and differentially regulates the accumulation of neutrophils and eosinophils in the inflamed intestine.Because the lack of colitis observed in Csf2rb−/− mice correlated with a decrease in the frequency of eosinophils, but not neutrophils, we next investigated the relative contribution of these distinct innate effectors to the pathogenesis of colitis.We employed two strategies to deplete eosinophils, either blockade of the eosinopoietin IL-5 or 
antibody-mediated depletion of Siglec-F+ cells.In the presence of IL-10R blockade, Hh infected mice treated with anti-IL-5 had a 50% reduction in total cLPL compared to isotype treated controls and an 87% decrease in the percentage of Siglec-Fhi eosinophils.Most importantly, this was accompanied by a significant reduction in colitis severity compared to isotype treated controls.The IL-5R is constitutively expressed by eosinophils but also by some B cell subsets.Therefore, to further increase the specificity of treatment, we used an anti-Siglec-F depletion approach shown to selectively eliminate eosinophils.Although alveolar macrophages in the lung express Siglec-F, intestinal and peritoneal macrophages do not and are therefore not affected by anti-Siglec-F depletion.Treatment with anti-mouse Siglec-F immune serum reduced colitis severity to the same extent as IL-5 blockade.This treatment regimen led to an 85% decrease of eosinophils based on reduced CD11b+CCR3+SSChi cells in the colon of anti-Siglec-F versus pre-immune serum treated mice, whereas there was only a 28% reduction in the percentage of colonic neutrophils.The small reduction in neutrophils was most likely secondary to reduced overall inflammation, because uninfected mice treated with anti-Siglec-F serum did not display a decrease in neutrophils or any leukocyte populations other than eosinophils.By contrast with eosinophil-depleting strategies, depletion of neutrophils with an anti-Ly-6G antibody did not have a significant effect on colitis.Together these results reveal differential roles for eosinophils and neutrophils in chronic colitis.While eosinophils play a non-redundant role in disease, neutrophils are dispensable for the development of chronic intestinal inflammation.In order to further understand the colitogenic role of GM-CSF, we investigated whether GM-CSF and IL-5 had differential effects on eosinophil production and activation.Because GM-CSF-R and IL-5R share the same β-receptor subunit, we tested whether GM-CSF blockade would reproduce the decrease in eosinophil activation and accumulation observed in Csf2rb−/− mice.Interestingly, IL-5 and GM-CSF were produced at steady state by ILCs but were not increased in lymphocyte-replete colitis.By contrast, GM-CSF production by CD4+ T cells was increased in the inflamed colon compared to controls, while percentages of IL-5 producers were unchanged.Accordingly, colonic GM-CSF, but not IL-5, mRNA and protein levels were augmented in chronic colitis, possibly highlighting a more homeostatic role for IL-5 compared with the more activation-induced functions of GM-CSF.Regarding eosinophil chemoattractants, eotaxin-1 and RANTES were increased in early and late phases of colitis, respectively.When treated with anti-GM-CSF, Hh infected and anti-IL-10R-treated mice exhibited significantly reduced colonic infiltrates and colitis score compared to mice treated with isotype control and displayed a striking 50% decrease in the frequency and activation status of colonic eosinophils.While anti-IL-5 treatment inhibited the general accumulation of colonic eosinophils, GM-CSF blockade only decreased the most activated population suggesting a role for GM-CSF in intestinal eosinophil activation in the inflamed colon.Blockade of either GM-CSF or IL-5 led to reductions in the number of eosinophils in the BM; however, only GM-CSF blockade inhibited the accumulation of GMPs and downstream EoPs.These results indicate that during intestinal inflammation, GM-CSF sustains eosinophilic granulopoiesis, 
whereas IL-5 mediates a more specific function promoting the terminal differentiation of EoPs into Siglec-F+ cells.A differential effect of GM-CSF and IL-5 was also evident on the “gut-tagging” of newly produced BM eosinophils, as upregulation of α4β7 integrin in colitis was only inhibited by IL-5 blockade.Overall, these results highlight the synergy between GM-CSF and IL-5 in the regulation of eosinopoiesis and reveal the key role of GM-CSF in driving chronic intestinal inflammation through accumulation of activated eosinophils in the colon.We next sought to characterize further the differential regulation of eosinophils in the periphery by GM-CSF and IL-5.Both cytokines can promote eosinophil survival, and we confirmed this observation in colitis.Annexin-V staining on freshly isolated peripheral eosinophils was decreased in colitis and increased in the presence of GM-CSF or IL-5 blockade.As GM-CSF production was increased during colitis, while IL-5 levels stayed constant, we hypothesized that GM-CSF would be a key driver of eosinophil effector functions in the inflamed intestine.Indeed, anti-GM-CSF treatment inhibited the increase in CD11b and increase in side scatter, while IL-5 blockade did not have a significant effect on these markers of activation.In addition, cell-sorted eosinophils exhibited morphological changes in vitro in the presence of GM-CSF, notably increased diameter as a sign of activation.Interestingly CD64, which is increased on neutrophils in IBD, was induced on eosinophils during colitis in a GM-CSF-dependent but IL-5-independent manner.Furthermore, the amount of CD64 was higher on Siglec-Fhi than Siglec-Fint eosinophils, consistent with their more activated status.Regarding expression of cytokines involved in epithelial cell dysregulation and damage, intestinal eosinophils expressed higher amounts of Tnf, Il6, and Il13 mRNA in colitis compared to uninflamed controls.In vitro analysis of cell-sorted intestinal eosinophils showed that GM-CSF stimulated Tnf and Il13 mRNA expression, but had no effect on Il6.Altogether, these data demonstrate that GM-CSF and IL-5 promote the survival of peripheral eosinophils, but only GM-CSF promotes their activation and inflammatory cytokine production, revealing one of the key colitogenic effects of GM-CSF during chronic intestinal inflammation.Because TNF, IL-6, and IL-13 are expressed by various leukocytes, we decided to investigate whether eosinophil-specific products could also drive chronic intestinal inflammation.For this purpose, we tested whether EPO, which is produced exclusively by eosinophils and can be tissue-toxic, contributed to chronic colitis.EPO levels and activity in the intestine were greatly increased during chronic inflammation, confirming substantial eosinophil degranulation.In addition, a reduction in eosinophil numbers during anti-GM-CSF or anti-IL-5 treatment was accompanied by a significant decrease in EPO.Resorcinol is a potent inhibitor of EPO leading to decreased anti-bacterial activity of eosinophils.In Hh-induced colitis, daily treatment of WT mice with resorcinol led to significantly reduced EPO activity and decreased colitis.This was accompanied by decreased markers of colonic inflammation compared to PBS treated mice, including reduced leukocyte infiltration, lower neutrophil percentages, and a trend toward reduced IFN-γ+ CD4+ T cells.EPO inhibition, however, did not affect the frequency of eosinophils among cLPL, consistent with previous in vivo observations.Overall, the pathogenic effect of 
uncontrolled accumulation of activated eosinophils in chronic colitis could be attenuated by inhibition of EPO, an enzyme well known to mediate oxidative tissue damage in eosinophil-dependent inflammatory diseases.Our study newly identifies a GM-CSF-eosinophil axis as a crucial component of IL-23-driven chronic colitis.Our previous work described GM-CSF as a pivotal downstream effector of IL-23 in the inflammatory cascade that drives aberrant responses to commensal microbiota through increases in myelopoiesis in T cell transfer colitis.Neutrophils are widely accepted as tissue-toxic cells in IL-23-mediated colitis.However, our results challenge this view and indicate a more prominent and unexpected role for eosinophils in this response.We show a marked accumulation of activated eosinophils in the colon of colitic mice, supported by increased eosinopoiesis, and a direct colitogenic role through production of eosinophil peroxidase and inflammatory cytokines.Although eosinophils are abundant in the intestine, their role in chronic intestinal inflammation is rarely considered.Here we identified GM-CSF as a key molecular switch diverting eosinophils from a tissue-protective to a tissue-toxic state of activation.These results extend the paradigm of eosinophil-mediated immune pathology beyond Th2 cell-type responses to effectors of IL-23-GM-CSF-driven dysregulated tissue immunity.GM-CSF is emerging as a central cytokine at the crossroads of various types of effector T cell responses and can be produced by Th1 and Th2 cells to stimulate increased myeloid cell activity.More recently it was shown that IL-23 stimulated Th17 cells to produce GM-CSF, which was pathogenic in EAE although its functional role was not established.In the inflamed intestine, IL-23 stimulated polyfunctional IFNγ+IL-17A+Th cells to produce GM-CSF, which triggered extramedullary hematopoiesis.In this report, we show that GM-CSF increased eosinopoiesis and numbers of highly activated eosinophils in the inflamed intestine.GM-CSF promoted increases in GMPs and downstream EoPs, which both express GM-CSF-R-α and β chains.Increased EoPs have been observed in Th2 cell-mediated asthma and anti-helminth responses, however, our study constitutes the first report of chronic EoP accumulation during IL-23-Th17 cell type-mediated immune disease, extending our previous observation of dysregulated hematopoiesis in colitis to the eosinophilic lineage.IL-5 or GM-CSF blockade resulted in a substantial decrease of eosinophils in the inflamed intestine, however there were marked differences in their action.Although both cytokines promoted eosinopoiesis during colitis, IL-5 specifically increased the differentiation of EoPs into Siglec-F+ eosinophils and promoted imprinting of α4β7 integrin expression.However by contrast with GM-CSF it had no effect on the upstream GMP.Resident populations of intestinal leukocytes contribute to the maintenance of basal eosinophil numbers as ILCs constitutively produce IL-5 in the normal intestine and resident macrophages express eotaxins.However, intestinal IL-5 does not appear to be controlled by IL-23 because IL-5 expression was not increased in colitis or reduced in IL-23 deficiency.These results are in contrast to the findings that IL-23 promoted IL-5 and Th2 responses in asthma models suggesting differences in IL-5 regulation in distinct tissue sites.In contrast with IL-5, IL-23 increased GM-CSF expression by CD4+ T cells and ILC in colitis, pinpointing an IL-23-GM-CSF-eosinophil axis in colitis that can boost 
basal IL-5 dependent eosinophilia.A recent study showed a role for eosinophils in maintaining intestinal integrity toward the gut microbiota through stimulating IgA+ plasma cells and Foxp3+ Treg cells.Importantly, here we show that the unchecked production of GM-CSF during chronic colitis is a key driver of the eosinophil switch from a resident and homeostatic phenotype to an over-activated and tissue toxic phenotype.Siglec-Fhi eosinophils were also increased in lung inflammation and were more resistant to apoptosis than Siglec-Fint eosinophils.Eosinophil activation and cytokine secretion that accompanied colitis was inhibited by GM-CSF but not IL-5 blockade.Furthermore in vitro, GM-CSF acted directly on eosinophils to induce production of colitogenic cytokines TNF and IL-13.Together the data suggest that IL-5 plays a homeostatic role maintaining basal levels of eosinophils in the intestine, whereas GM-CSF promotes their activation and deleterious effector functions in chronic colitis.It is worth noting that GM-CSF-Rβ deficiency did not affect the percentage of neutrophils in the intestine in colitis and BM chimera experiments, despite inducing a severe decrease in eosinophil percentages.Thus, GM-CSF is not absolutely required for the neutrophil increase probably owing to the compensatory role of G-CSF, which is a potent inducer of neutrophilia and is increased in T cell transfer colitis.Unexpectedly, while eosinophil depletion dampened colitis, no such effect was provoked by depletion of neutrophils, highlighting a dichotomy in the role of these granulocyte populations in chronic colitis.Amelioration of chronic colitis by pharmacological inhibition of EPO, which is implicated in cytotoxic oxidant generation, pinpointed one of the molecular mechanisms by which eosinophils specifically mediate intestinal damage.This pathway has also been implicated in the DSS model of acute colonic injury, suggesting broad relevance in intestinal damage.Interestingly, regulation of eosinophils in acute versus chronic intestinal inflammation is not identical as we found that IL-5 depletion inhibited chronic colitis, whereas IL-5 deficiency had no significant effect in the DSS model contrary to the protective effect of eotaxin deficiency.This suggests that in an acute damage model, mobilization of mature eosinophils from the BM to intestine is sufficient, whereas sustained chronic colitis requires IL-5-dependent eosinopoiesis.In human, treatment of eosinophils with GM-CSF in vitro led to increased release of EPO and ECP, providing further evidence that GM-CSF can directly increase the cytotoxic functions of eosinophils.Conversely, IL-10 inhibited LPS-induced TNF release and increased survival of human eosinophils in vitro.Treg cells play an important role in intestinal homeostasis and suppress colitis in part via IL-10.We found that Treg cell-mediated control of colitis correlated with a reduction in eosinophil accumulation and activation.Based on those results, it is tempting to speculate that under homeostatic conditions eosinophils in the intestine are hyporesponsive to TLR activation as a consequence of the IL-10 rich environment, which might be over-ridden by sustained increases in GM-CSF production in chronic inflammation.There are several reports of increased GM-CSF in Crohn’s disease and ulcerative colitis patients, and concomitant Th17 and IL-5 and IL-13 T cell responses have been observed in ileal CD suggesting a more polyfunctional T cell response in certain patients.However GM-CSF activity is a 
double-edged sword and when produced in a controlled fashion plays an important role in the steady state accumulation of mononuclear phagocytes and Foxp3+ Treg cells and in promoting host protective immunity in the gut.Consistent with this, intestinal injury in a DSS model was exacerbated in Csf2rb−/− mice further illustrating differences in the mechanisms of acute and chronic intestinal damage.Increased levels of anti-GM-CSF autoantibodies have been observed in pediatric and some forms of adult CD leading to the idea that GM-CSF is protective in IBD.However three clinical trials of recombinant GM-CSF administration failed to show demonstrable protective effects.It is highly likely that GM-CSF will play both protective and pathological roles in IBD and that the context in which it is produced, such as where and for how long, might determine its ultimate functional role.Several studies have reported increased eosinophil numbers and activation in both UC and CD.ECP was also increased in the faeces of IBD patients suggesting eosinophil degranulation.Our results in model systems taken together with an emerging picture in humans suggest that blockade of the GM-CSF/eosinophil axis might be a therapeutic target in particular patient subsets.The fact that sustained depletion of eosinophils in patients with hyper-eosinophilic syndrome treated for up to 6 years with anti-IL-5 did not lead to adverse effects is encouraging for considering this approach in IBD.B6.SJL-Cd45.1 mice or C57BL/6 mice: wild-type, Csf2rb−/−, Il23r−/−, Rag1−/−, or Rag1−/−Il23p19−/− were bred and maintained under specific pathogen–free conditions in accredited animal facilities at the University of Oxford.All procedures involving animals were conducted according to the requirements and with the approval of the UK Home Office Animals Acts, 1986.Mice were negative for Helicobacter spp. and other known intestinal pathogens and were used when 7–12 weeks old.Naive CD4+CD25−CD45RBhi T cells and regulatory CD4+CD25+CD45RBlo T cells were sorted by flow cytometry from enriched CD4+ single-cell spleen suspensions to a purity of >99%.For the induction of colitis, 4 × 105 naive T cells were injected intraperitoneally into C57BL/6.Rag1−/− recipients.Where indicated, 2 × 105 protective T reg cells were co-injected i.p.Colitis was induced in WT C57BL/6 mice by infecting with Hh on 2 consecutive days with 5 × 107–2 × 108 CFU Hh and i.p. injection of 1 mg 1B1.2 mAb on days 0 and 7 after Hh infection.Mice were killed 7 days after the last anti-IL10R mAb treatment.Where indicated, mice were i.p. injected two times per week with 0.4 mg of anti-GM-CSF or isotype control or 0.5 mg of anti-IL-5, or three times per week with 0.25 mg anti-Ly6G or isotype control starting from the first day of Hh infection.Where indicated, mice were i.p. injected two times per week with sheep preimmune serum or sheep anti-Siglec-F serum.Where indicated, mice were i.p. 
injected daily with resorcinol or PBS. Proximal, mid-, and distal colon samples were fixed in buffered 10% formalin solution. Paraffin-embedded sections (5 μm) were cut and stained with hematoxylin and eosin, and inflammation was scored in a blinded fashion. In brief, four parameters of inflammation were assessed: epithelial hyperplasia and goblet cell depletion, leukocyte infiltration in the lamina propria, area of tissue affected, and markers of severe inflammation such as submucosal inflammation. Aggregate scores were taken for each section, to give a total inflammation score of 0–12. Colon inflammation scores represent the average score of the three sections taken. Single cell suspensions were prepared from spleen, MLN, and cLPL as previously described. In brief, colons were longitudinally opened, cut into 1 cm pieces, and incubated in RPMI 1640 with 10% FCS and 5 mM EDTA at 37°C to remove epithelial cells. Tissue was then digested with 100 U/ml of type VIII collagenase in complete RPMI medium containing 15 mM HEPES for 1 hr at 37°C. The isolated cells were layered on a 30/40/75% Percoll gradient, which was centrifuged for 20 min at 600 × g, and the 40/75% interface, containing mostly leukocytes, was recovered. BM cell suspensions were prepared by flushing the marrow out of femur and tibia and were resuspended in PBS with 2% BSA. Flow cytometry and cell sorting, quantitation of gene expression using real-time PCR, in vitro stimulation assays, EPO ELISA and EPO colorimetric assay, in vivo BrdU labeling and immunofluorescence were performed as described in Supplemental Experimental Procedures. Statistical analysis was performed with Prism 6.0. The nonparametric Mann-Whitney test was used for all statistical comparisons. Differences were considered statistically significant when p < 0.05. T.G. and I.C.A. planned and performed experiments and wrote the paper. C.P., T.K., C.S., F.F., and J.S. performed particular experiments. F.P. wrote the paper and supervised the study. F.P. and T.G. designed the study. B.S.M. and P.R.C. provided essential materials and were involved in data discussions.
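The statistical comparisons above were run in Prism 6.0; purely as an illustration, the equivalent Mann-Whitney (Wilcoxon rank-sum) test in R is shown below, using hypothetical colitis scores on the 0–12 aggregate scale defined above. The values are invented for the example and are not data from this study.

# Hypothetical colitis scores (0-12 aggregate scale); not data from this study
control_scores   <- c(8, 9, 7, 10, 8, 6)   # e.g. Hh + anti-IL-10R, isotype control
treatment_scores <- c(3, 4, 2, 5, 3, 4)    # e.g. Hh + anti-IL-10R, anti-GM-CSF treated
# Two-sided Mann-Whitney test; differences considered significant when p < 0.05
wilcox.test(control_scores, treatment_scores, exact = FALSE)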
The role of intestinal eosinophils in immune homeostasis is enigmatic and the molecular signals that drive them from protective to tissue damaging are unknown. Most commonly associated with Th2 cell-mediated diseases, we describe a role for eosinophils as crucial effectors of the interleukin-23 (IL-23)-granulocyte macrophage colony-stimulating factor (GM-CSF) axis in colitis. Chronic intestinal inflammation was characterized by increased bone marrow eosinopoiesis and accumulation of activated intestinal eosinophils. IL-5 blockade or eosinophil depletion ameliorated colitis, implicating eosinophils in disease pathogenesis. GM-CSF was a potent activator of eosinophil effector functions and intestinal accumulation, and GM-CSF blockade inhibited chronic colitis. By contrast neutrophil accumulation was GM-CSF independent and dispensable for colitis. In addition to TNF secretion, release of eosinophil peroxidase promoted colitis identifying direct tissue-toxic mechanisms. Thus, eosinophils are key perpetrators of chronic inflammation and tissue damage in IL-23-mediated immune diseases and it suggests the GM-CSF-eosinophil axis as an attractive therapeutic target.
702
Structure of the mammalian ribosome-Sec61 complex to 3.4 Å resolution
Sample Preparation
Porcine pancreatic microsomes were prepared as previously described, resuspended in membrane buffer and flash frozen for long-term storage at −80°C. Microsome flotation and sucrose gradient experiments showed that all ribosomes in the preparation were membrane bound and primarily in polysomes. Details of additional characterization are shown in Figure S1. To convert polysomes into monosomes, microsomes were adjusted to 1 mM CaCl2 and 150 U/ml micrococcal nuclease, incubated at 25°C for 7 min, adjusted to 2 mM EGTA, and flash frozen in single-use 50 μl aliquots. A 50 μl aliquot of nuclease-digested microsomes was adjusted with an equal volume of ice-cold 2X solubilization buffer and incubated for 10 min on ice. Earlier reports using similar solubilization conditions at both higher and lower salt observed no loss of nascent chains from the mammalian Sec61 channel as judged by protease protection assays. Samples were spun for 15 min at 20,000 × g to remove insoluble material, and the solubilized material was fractionated by gravity flow over a 1 ml Sephacryl-S-300 column pre-equilibrated in column buffer. Roughly 100 μl fractions were manually collected and the void fraction containing ribosome-translocon complexes was identified by A260 measurements. The sample was centrifuged again as above to remove any potential aggregates before being used immediately to prepare and freeze cryo-EM grids. It is worth noting that we efficiently recovered nontranslating ribosome-Sec61 complexes despite using 400 mM KOAc during solubilization. Two factors probably contributed to this high recovery: solubilization and fractionation at a very high sample concentration, favoring an otherwise weak interaction; and re-association of ribosomes with Sec61 when the salt concentration was reduced upon entering the gel filtration resin. Note for example that dissociation of translocon components was greater using sucrose gradient sedimentation than the more rapid Sephacryl-S-300 separation. A second question is why a large proportion of our ribosomes contained no tRNA (some of these instead contained eEF2). We do not know for certain, but differences from most earlier ribosome preparation protocols include isolating ribosomes from a microsomal subcellular fraction derived from a native tissue, the nuclease digestion reaction, and the specific conditions used for solubilization and purification. Most earlier ribosome purification protocols start from total cell lysates, do not employ high detergent concentrations, often involve greater fractionation, and yield ribosomes bound to Stm1-like proteins.
Grid Preparation and Data Collection
Ribosome-Sec61 complexes were diluted in column buffer to a concentration of 40 nM and were applied to glow-discharged holey carbon grids that had been coated with a ∼50–60 Å thick layer of continuous amorphous carbon. After application of 3 μl of sample, the grids were incubated at 4°C for 30 s, blotted for 9 s, and flash-cooled in liquid ethane using an FEI Vitrobot. Data were collected on an FEI Titan Krios operating at 300 kV, using FEI’s automated single particle acquisition software. Images were recorded using a back-thinned FEI Falcon II detector at a calibrated magnification of 104,478, using defocus values of 2.5–3.5 μm. Videos from the detector were recorded using a previously described system at a speed of 17 frames/s.
Image Processing
Semi-automated particle picking was performed using EMAN2, which resulted in selection of 83,839 particles from 1,410 micrographs. A smaller second data set was collected
later from another grid containing the same sample in order to increase the number of particles containing tRNAs and nascent chain.Data set 2 contained 726 micrographs that led to the selection of 37,061 particles.Contrast transfer function parameters were estimated using CTFFIND3 for both data sets, and any micrographs that showed evidence of astigmatism or drift were discarded at this stage.All 2D and 3D classifications and refinements were performed using RELION as described below.Unsupervised 2D class averaging was used to discard any nonribosome or defective particles, which resulted in a final data set of 80,019 80S particles, and an additional 36,696 particles from data set 2.Each data set was individually refined against a map of the S. cerevisiae ribosome filtered to 60 Å resolution, utilizing statistical movie processing in RELION as described previously.As the 40S subunit was in several distinct conformations, a mask that included only the 60S subunit was used during refinement of the complete initial data set.This resulted in a final resolution of 3.35 Å using 80,019 particles for the 60S subunit as judged by the gold-standard FSC criterion.In parallel, 3D classification of the initial 80,019 80S particles was performed using angular sampling of 1.8 degrees and local angular searches.Ten possible classes were allowed and resulted in the following.Class 4: representing ∼13% of the data set, contained ribosomes in a ratcheted conformation with A/P- and P/E-site tRNAs and a nascent polypeptide in the ribosomal tunnel.Classes 6,7,9: three identical classes, together representing ∼46% of the data set, contained ribosomes with eEF2 in a partially ratcheted orientation.Classes 2 and 3: together representing ∼19% of the data set, contained eEF2 bound to a ribosome in the canonical, unratcheted conformation.Classes 1,5,8,10: together comprising ∼22% of particles, contained empty ribosomes, without tRNAs or translation factors.Given the large percentage of particles that contain eEF2 in this sample, weak density was observed for the factor in refinements using the complete data sets.However, continuous density for the factor could only be observed in refinements of appropriately grouped particles, described below.The apparent composition of the sample was similar for data set 2, from which 4,168 particles containing hybrid state tRNAs and a nascent peptide chain were combined with class 4 above to produce a larger set of particles for the translating ribosome-Sec61 structure.Particles identified from 3D classification were combined according to biological state, and subjected to a final 3D refinement resulting in the density maps used for model building as described in Figure S2.The idle ribosome-Sec61 map was obtained using the 69,464 particles that did not contain tRNAs or a nascent peptide, refined using a 60S mask to 3.4 Å resolution.The map of the translating ribosome-Sec61 complex, obtained by combining the first and second data sets, resulted in 14,723 particles that produced a 3.9 Å density map.The 40S subunit and the ribosomal stalk were best resolved in the 36,667 particles containing eEF2 and the 40S subunit in a defined orientation, which extended to 3.5 Å resolution.All maps were corrected for the modulation transfer function of the detector, and then sharpened using a negative B-factor, which was estimated using previously reported procedures.Local resolution of the final unsharpened maps was calculated using ResMap.Model Building and Refinement,The porcine 80S ribosome was 
built using the moderate resolution model for the human ribosome.Sec61 bound to both the idle and translating ribosome was built using the crystal structure of the archaeal SecY and the low-resolution model of the canine Sec61 bound to the ribosome.All models were built in COOT, and refined using REFMAC v5.8 as previously described.Registry and other errors to the ribosomal proteins were corrected manually, and each chain was refined individually against an appropriately cut map.Secondary structure restraints were generated in ProSMART, and nucleic acid base-pairing and stacking restraints were generated as before and were maintained throughout refinement to prevent overfitting.Ramachandran restraints were not applied, such that backbone dihedral angles could be used for subsequent validation of the refined models.To test for overfitting, we performed a validation procedure similar to that described previously.In brief, the final model was refined against an unsharpened density map calculated from only one half of the data using empirically determined chemical restraints.The resulting model was then used to calculate FSC curves for both halves of the data, one of which had been used during the refinement, and the other which had not.The two FSC curves nearly overlap, and we observe significant correlation beyond the resolution used for refinement, demonstrating that the model has predictive power and has not been overfitted.The models for the 60S and 40S subunits were then refined using these same restraints against the highest resolution sharpened maps for each subunit.The resulting models for the 60S subunit, the 40S body, and 40S head were individually rigid-body fitted into the maps for the remaining classes.All figures were generated using Chimera and PyMOL.The maturation of nascent polypeptides relies on many factors that dynamically associate with the translating ribosome.These factors include modification enzymes, chaperones, targeting complexes, and protein translocons.While many fundamental aspects of protein translation are now understood in chemical detail, far less is known about how these exogenous factors cooperate with the ribosome to facilitate nascent chain maturation.A major class of proteins that rely extensively on ribosome-associated machinery are secreted and integral membrane proteins.In all organisms, a large proportion of these proteins are cotranslationally translocated across or inserted into the membrane.The exceptional prominence of this pathway in mammals is underscored by the original discovery of ribosomes as a characteristic feature of the endoplasmic reticulum membrane.Thus, understanding the nature of membrane-bound ribosomes and their role in secretory protein biosynthesis has been a long-standing goal in cell biology.After targeting to the membrane, ribosomes synthesizing nascent secretory and membrane proteins dock at a universally conserved protein conducting channel, called the Sec61 complex in eukaryotes and the SecY complex in prokaryotes and archaea.The PCC has two basic activities.First, it provides a conduit across the membrane through which hydrophilic polypeptides can be translocated.Second, it recognizes hydrophobic signal peptides and transmembrane domains and releases them laterally into the lipid bilayer.These activities rely on binding partners that regulate PCC conformation and provide the driving force for vectorial translocation of the nascent polypeptide.The best characterized translocation partners are the ribosome and the 
prokaryote-specific ATPase SecA.Extensive functional and structural studies of the SecA-SecY posttranslational translocation system, in parallel with the cotranslational ribosome-Sec61 system, have coalesced into a general framework for protein translocation.Over the past two decades several crystal structures and cryo-EM reconstructions have led to numerous mechanistic insights into these events.High-resolution crystal structures of the large ribosomal subunit visualized the exit tunnel, whose conserved conduit was shown to align with a bound Sec61 complex.While structural analysis of the prokaryotic ribosome and translation cycle progressed rapidly, the lower resolution of parallel PCC structures posed a challenge to identifying changes in its conformation at different stages of translocation.A major advance was the crystal structure of the archaeal SecYEβ complex, which made several predictions about the nature and function of the translocation channel that were supported by later studies.The ten transmembrane segments of SecY are arranged in a pseudosymmetric orientation such that the two halves surround an hourglass-shaped pore occluded by the plug domain.Six conserved hydrophobic residues from multiple surrounding transmembrane helices form a pore ring that lines the narrowest part of the channel and stabilize the conformation of the plug.Polypeptide translocation occurs through this central channel, with the pore-ring residues contributing to maintenance of the membrane permeability barrier during translocation.Lateral egress of hydrophobic sequences from the SecY pore toward the membrane bilayer occurs through a lateral gate formed by the interface of helices 2 and 3 with helices 7 and 8.Crosslinking and cryo-EM studies support this as the site of signal peptide and transmembrane domain recognition and insertion.Accordingly, impeding gate opening by crosslinking or mutagenesis impairs PCC function.Together these studies identify the key structural elements of the Sec61/SecY channel that allow it to open across the membrane for translocation or toward the lipid bilayer for transmembrane domain insertion.How these basic functions of the PCC are regulated by a translocation partner and the specific nascent polypeptide is incompletely understood.An X-ray structure of the SecA-SecY complex shows that interactions between the cytosolic loops of SecY with SecA induce a partial opening of the lateral gate and displaces the plug.These changes are thought to “prime” the channel for the ensuing polypeptide translocation.The analogous priming event with the ribosome has only been visualized at low-resolution, and thus is poorly defined.It is clear however, that ribosome interaction occurs via cytosolic loops between TM helices 6 and 7 and TM helices 8 and 9.The precise nature of these interactions and how they affect key functional elements such as the plug or lateral gate remain unknown.The subsequent stages of cotranslational translocation also remain to be resolved mechanistically.The various ribosome-PCC structures show that protein translocation is not accompanied by any major structural changes to the PCC.By contrast, engagement of a signal peptide or transmembrane domain opens the lateral gate to varying degrees, which may result in a conformation similar to that observed when a symmetry-related protein partially parted the lateral gate of SecY.However, molecular insight into these regulatory events in a physiologic context require high-resolution structures of complexes engaged at 
different stages of the translocation pathway.A number of recent technological advances in cryo-EM have permitted structure determination by single-particle analysis to unprecedented resolution.These advances include the use of direct electron detectors, algorithms to correct for radiation-induced motion of particles, and improved computational methods for image processing and classification.Collectively, these advances have facilitated structure determination of the ribosome and associated factors, even when the relevant complex is present as a small percentage of a heterogeneous mixture.In some instances, sufficient resolution can be achieved to build structures de novo and visualize the molecular details of key interactions.We reasoned that applying similar methods to a native membrane-bound ribosome solubilized from the endoplasmic reticulum could simultaneously provide mechanistic insights into both the mammalian ribosome and the associated translocation channel.At present, mammalian ribosome structures are limited to ∼5.4 Å resolution and have been bound to Stm1-like inactivating factors.Furthermore, features such as a native translating polypeptide and an A/P hybrid tRNA, characteristic of active elongation, have been difficult to trap in any system.A sample from an actively translating tissue, if sorted suitably, could overcome these limitations.Similarly, a native sample of the PCC will also contain heterogeneity, due in part to the presence of associated factors such as the translocon-associated protein and oligosaccharyl transferase complexes; however, all particles should contain a single Sec61 complex.Furthermore, the linked nature of translation with translocation suggests that the translation state could indirectly inform on the status of the PCC.This could allow computational sorting of translating from idle PCCs on the basis of the ribosome.Thus, the recent methodological advances may allow sample heterogeneity to be transformed from an impediment to an advantage.Here, we have determined structures of a porcine 80S ribosome-Sec61 complex in both an idle and translating state, determined to 3.4 and 3.9 Å resolution.These structures allow the detailed interpretation of the mammalian ribosome, the interaction between the Sec61 complex and the 60S subunit, and the conformational changes that occur to the channel during protein biogenesis.The ribosome-translocon specimen was generated by fractionation of detergent-solubilized rough microsomes from porcine pancreas.Rough microsomes typically contain a mixture of actively translocating and quiescent ribosomes.The presence of translationally active ribosomes in our microsomes was verified by labeling of their associated nascent polypeptides with puromycin.Subsequent fractionation demonstrated that over 90% of puromycin-released nascent polypeptides were larger than ∼18 kD and cosedimented with the microsomes.The vast majority of these polypeptides were efficiently extracted by alkaline sodium carbonate, a treatment that did not extract integral membrane proteins.Thus, on average, the active translocon prior to solubilization contains a hydrophilic polypeptide passing through its central channel.In an attempt to capture these active ribosome-translocon complexes, we prepared our specimen with minimal time and manipulation between solubilization and freezing.The structures described here help refine our understanding of several steps during cotranslational protein translocation and provide mechanistic insights into the two stages 
for fully activating the Sec61 channel.In the quiescent state presumably represented by the isolated crystal structure, the channel is fully closed to both the lumen and lipid bilayer.The first stage of activation involves binding of the ribosome, which primes the channel by opening of the cytosolic side of the lateral gate, thereby decreasing the energetic barrier for translocation.The movement of helix 2, implicated as part of this priming reaction, may provide a hydrophobic docking site for the arriving signal peptide in this region.Importantly, this primed state leaves the channel largely closed to membrane and entirely closed to the ER lumen.In the second stage of activation, a suitable substrate can now exploit the primed Sec61 by binding to and further opening the lateral gate.Signal peptide engagement at the lateral gate results in destabilization of the plug from the pore ring, either by sterically pushing the plug out of position, or by opening of the lateral gate, which shifts the helices surrounding the plug.Such a state appears to have been captured at low resolution in the E. coli system.This model would rationalize why promiscuously targeted nonclients are rejected by Sec61, prior to gaining access to the lumenal environment.The model would also explain how a small molecule that seems to bind near the plug can allosterically inhibit a signal sequence from successfully engaging Sec61.Once the plug is destabilized, the translocating nascent chain can enter the channel, which sterically prevents the plug from adopting its steady-state conformation.A dynamic plug no longer stabilizes the surrounding helices at the central pore, permitting a more dynamic lateral gate.This flexibility may permit sampling of the lipid bilayer by the translocating nascent chain, thereby allowing suitably hydrophobic elements to insert in the membrane.This model for activation provides one explanation for why transmembrane segments within a multispanning membrane protein can be far less hydrophobic than those that engage the Sec61 channel de novo: the latter would need to fully open a nearly-closed lateral gate stabilized by the plug, while the former could take advantage of a gate made dynamic by plug displacement.Both before and during translocation, a constant feature of the native ribosome-translocon complex is the substantial gap between the ribosome exit tunnel and Sec61.This gap has been consistently seen in many earlier structures and presumably provides a site for release of cytosolic domains of membrane proteins.Secretory proteins are also accessible to the cytosol via this gap, and may be exploited for quality control of stalled or translocationally aborted nascent polypeptides.Analysis of heterogeneous mixtures of particles visualized by cryo-EM is facilitated by improvements in image processing, in particular the use of maximum likelihood classification techniques.Our initial data set contained 80,019 ribosomal particles.In silico classification of these particles agrees with several aspects of its biochemical characterization.First, nearly all ribosomes contained a bound translocon, as classification of the final sample could not isolate any translocon-free ribosomes.Second, while the density for the area surrounding the translocon was heterogeneous due to a combination of accessory factors and the detergent-lipid micelle, very high occupancy was observed for the central Sec61 complex.Third, multiple classes of particles could be sorted based on the conformation of the ribosome and 
included translating and idle populations.The complete data set and individual classes were separately analyzed to extract their best features, which were incorporated into a composite model for the complete 80S-Sec61 complex.An initial reconstruction using the entire data set was calculated using a mask for the 60S subunit to avoid interference in the angular assignment by the heterogeneous conformation of the 40S.The resulting map, determined to 3.35 Å resolution, was used to build the ribosomal RNA and proteins of the 60S subunit.A distinctive class of ∼13% of particles contained two tRNAs bound in the A/P and P/E hybrid state.These particles were used to generate a 3.9 Å resolution map of the translating ribosome-translocon complex, within which density for the nascent polypeptide was observed throughout the ribosomal tunnel.The remaining 69,464 particles lacking tRNA and a nascent peptide were considered nontranslating ribosomes.This class was processed using a 60S mask to build the idle ribosome-Sec61 map at 3.4 Å resolution.Finally, this idle class was further subdivided by the degree of ribosomal ratcheting, and the presence or absence of the translational GTPase eEF2.One of these subclasses contained 36,667 particles and was used to produce a 3.5 Å resolution map used for building of the 40S ribosomal subunit and a well-ordered lateral stalk region.Thus, by leveraging major advances in both image detection and in silico analysis, a relatively small and heterogeneous data set could be used to build a near-complete atomic model of the mammalian 80S ribosome and high-resolution structures for the Sec61 complex bound to the translating and idle ribosome.We will begin by presenting the structure of the 80S ribosome, followed by discussion of the Sec61 complex structure and its functional implications.Throughout this study, we use the new unified nomenclature for ribosomal proteins.The porcine ribosome described in this study was determined to an average resolution of 3.4 and 3.5 Å for the 60 and 40S, respectively, as judged by the “gold-standard” Fourier Shell Correlation criterion.Notably, much of the core of the 60S subunit is at 3.0 Å resolution or better, while the head of the 40S subunit, given its inherent flexibility, is at somewhat lower resolution.The distal regions of several metazoan-specific rRNA expansion segments, such as ES27L, protrude from the ribosome and are presumably dynamic.As in the earlier study, these regions of rRNA were not visualized in our averaged maps.As the sample was prepared from an actively translating tissue, there was no evidence for binding of Stm1 or other sequestration factors that were observed in previous studies.Using a recent model of the human ribosome generated at ∼5.4 Å resolution as a starting point, we have rebuilt each ribosomal protein and the rRNA, including many amino acid side chains, RNA bases, and over 100 Mg2+ ions.Our density map allowed de novo building of many regions that were previously approximated due to lower resolution.Additional eukaryote-specific extensions of ribosomal proteins previously modeled by secondary structure predictions were also visible and built de novo.The ribosome stalk was stabilized in the class of particles containing eEF2, which facilitated modeling at high resolution in this region.As a result, we were able to build a near-complete 80S mammalian ribosome at atomic resolution.The marked improvement in the model is evident from the reduction of Ramachandran outliers within the ribosomal proteins from 
∼13% to ∼5.4% for the 60S subunit and ∼7.5% for the 40S.The low percentage of Ramachandran outliers suggests the quality of our mammalian cryo-EM model is comparable to that of the seminal S. cerevisiae ribosome crystal structure determined to 3.0 Å resolution.Unlike in bacteria, the eukaryotic ribosome relies on extensive protein-protein interactions, and the improved model presented here illustrates many of the detailed chemical interactions that stabilize the mammalian ribosome.For example, ribosomal proteins eL21 and uL30 together each contribute one strand of a β sheet, while stacking interactions are observed between a phenylalanine in eL20 and the 28S rRNA.Additionally, though eEF2 was bound in a nonphysiological state without P-site tRNA, its interactions with ribosomal proteins uL10 and uL11 can be observed at high resolution.Given the high degree of confidence we now have in the model, and the extremely high sequence conservation of the ribosome in all mammals, this structure will serve as a resource for future biochemical and structural experiments.The translating ribosome-translocon structure contained hybrid state A/P- and P/E-site tRNAs and a nascent polypeptide.The conformation of the P/E tRNA is similar to earlier reports and stabilizes the L1 stalk inward.However, as previous reconstructions of an A/P tRNA were limited to ∼9 Å resolution, our structure represents the first high-resolution visualization of an A/P tRNA bound to the ribosome.Though the sample contains a mixture of tRNA species, it was nevertheless possible to infer the global conformational changes required to adopt this hybrid conformation.In order to simultaneously bind the A-site mRNA codon and the 60S P site, the body of the tRNA must bend by ∼13° when compared to a canonical A-site tRNA.Notably, the CCA tail of the A/P tRNA does not superimpose with the 3′ end of a canonical P-site tRNA, presumably because in the hybrid state the 60S subunit is in a different orientation relative to the 40S.Thus, the hybrid A/P conformation is accomplished by an ∼9 Å displacement of the CCA tail, comparable to that observed in reconstructions of the bacterial complex, and by bending in two regions of the tRNA: the anticodon stem loop, and the acceptor/T-stem stack.Similar regions have been implicated in binding of tRNAs to the ribosome in other noncanonical conformations.In particular, mutations in the anticodon stem loop have profound functional effects, as these mutations perturb the flexibility of the tRNA body and thus the energy required for adoption of these distorted conformations.Similarly, the A/P tRNA is undoubtedly a high-energy state stabilized by the presence of a nascent chain, which is discussed in further detail below.The instability of these intermediate tRNA conformations may favor movement of tRNAs and mRNA through the ribosome, facilitating translocation.Thus visualization of an A/P hybrid state further supports the notion that flexibility within the tRNA body must be precisely tuned to the requirements of the ribosome during protein synthesis.In addition to the high-resolution model of the ribosome presented above, analysis of the 80S-Sec61 complex afforded new insights into the role of Sec61 in translocation.The final models of a porcine ribosome-Sec61 complex in both an idle and translating state were determined to 3.4 and 3.9 Å resolution.Local resolution analysis of a cut away of the 60S subunit bound to Sec61 showed that the cytosolic regions of the idle Sec61 complex are at a similar resolution 
to the ribosome, and the resolution falls off only modestly toward the lumenal end.Notably, the density threshold at which the ribosome was well resolved also afforded visualization of individual helices of the core Sec61 complex with almost no surrounding micelle or accessory factors.At a lower threshold, a large lumenal protrusion, which was previously identified as the TRAP complex was observed together with the surrounding toroidal detergent-lipid micelle.Thus, these heterogeneous accessory components were either present at relatively low occupancy or highly flexible, with only the Sec61 complex well ordered in nearly every particle.All three subunits of Sec61 are present, and have been unambiguously built into the density, including many amino acid side chains in the essential Sec61α and γ subunits.Notably, the two ribosome-associating cytoplasmic loops in Sec61α, between transmembrane helices 6 and 7 and transmembrane helices 8 and 9, have been built de novo, as they have changed conformation compared to isolated crystal structures of SecY.These loops were modeled only approximately in previous lower-resolution studies.Density for the nonessential Sec61β subunit is only visible in unsharpened maps displayed at low threshold, suggesting that it may be conformationally heterogeneous.We have therefore modeled only the backbone of the transmembrane helix of this subunit.The overall architecture of the ribosome-bound mammalian Sec61 complex is similar to previously reported structures of the prokaryotic SecY determined by X-ray crystallography.Earlier moderate resolution cryo-EM maps fit with homology models of the X-ray structures also show the same general architecture.However, given the significant improvement in resolution over these reconstructions, it is now possible to describe the atomic interactions of Sec61 with the ribosome and the nature of relatively subtle conformational changes that may occur within Sec61 during protein translocation.Sec61 interacts with the ribosome primarily through the evolutionarily conserved loop 6/7 and loop 8/9 in the α subunit, as well as the N-terminal helix of Sec61γ.The most extensive interaction surface is composed of loop 8/9 and Sec61γ, which together contact the backbone of the 28S rRNA and ribosomal proteins uL23 and eL29.Earlier structures implicated Sec61 interactions with uL29.Although loop 6/7 packs against a loop of uL29, we could not observe specific contacts.Specific interactions involve several conserved basic residues in loop 8/9, including His404, which interacts with Thr82 of uL23, and the universally conserved Arg405, which forms a stacking interaction with rRNA residue C2526.The hydroxyl group of Thr407 in helix 10, whose role in ribosome binding has not been previously predicted, is also within hydrogen bonding distance of the side chain of Asn36 of eL19.This may represent a conserved interaction, as the presence of a polar residue at position 407 has been evolutionarily retained.Finally, Arg20 of the γ subunit forms a salt bridge with Asp148 of uL23.These hydrogen bonding interactions stabilize the conformation of loop 8/9, and anchor the translocon at the exit tunnel.This observation is consistent with biochemical studies, which demonstrate that mutations to conserved residues in this loop cause a marked decrease in affinity of the translocon for the ribosome.Conversely, very few specific hydrogen-bonding interactions are observed for loop 6/7.Arg273 and Lys268 interact with phosphate oxygens within the 28S rRNA, while Arg273 
appears to be stacking on Arg21 from protein eL39.Inverting the charge of Arg273 causes a severe growth defect in yeast, consistent with the observed interaction with the rRNA.While it is clear that loop 6/7 is playing an important role in protein translocation due to its proximity to the ribosome, and its sequence conservation, the relatively small number of contacts suggest that it is unlikely to provide the primary stabilization of Sec61 to the ribosome.This is supported by the observation that although mutations within loop 6/7 cause profound defects in protein translocation and cell growth, they do not appear to affect ribosome binding.In all of the isolated crystal structures of SecY, cytosolic loops 6/7 and 8/9 are involved in a crystal contact or interact with either a Fab or SecA.These loops appear to provide a flexible binding surface, likely due to their large number of charged and polar residues, which is exploited in both physiological and nonphysiological interactions.It has long been predicted that ribosome binding must prime the translocon to accept an incoming nascent chain.The idea is attractive because the channel must prepare to open toward the lumen or the membrane, requiring at least partial destabilization of the contacts that prevent access to these compartments.To gain insight into this priming reaction, we compared our idle ribosome-Sec61 structure to previous crystal structures from either archaea or bacteria.The implicit assumption in this comparison is that the crystal structures approximate the preprimed quiescent state in the membrane.With this caveat in mind, we propose the following hypothesis for how ribosome binding could trigger a series of conformational changes that result in Sec61 priming.In the ribosome-bound state, loop 6/7 is displaced relative to the isolated crystal structures, resulting in a rotation of the loop by 20–30 degrees.Were the loop to remain in the conformation observed in the isolated structures, it would clash with either ribosomal protein uL29 or the 28S rRNA.It is likely that the extensive contacts between loop 8/9 and the ribosome, along with the clash with uL29 and the rRNA, constrain loop 6/7 into the observed conformation.Similarly, loop 8/9 is shifted by ∼6 Å, and the N terminus of the gamma subunit by ∼3 Å, compared to the isolated SecY in order to interact with the 28S rRNA and ribosomal proteins.The ribosome-constrained conformation of these loops transmits a small, but concerted distortion to their adjoining helices, which appears to be propagated helix to helix through the Sec61 channel.As the interhelical contacts in Sec61α are likely weakest at the lateral gate, these movements result in a slight opening between the cytosolic halves of helices 2 and 8.For example, residues G96 and T378 move from 4.4 Å apart in the isolated structure, to 11 Å apart on the ribosome.However, the intramembrane and lumenal portions of the lateral gate are largely unchanged and remain closed.An earlier model in which helix 8 bends substantially upon ribosome binding could not be supported by our higher-resolution map.Furthermore, the plug is virtually unaltered from the conformation observed in the isolated structures.The positions of helices surrounding the plug, which contribute pore-ring residues, also remain essentially unchanged.This suggests that the overall stability of the plug is not markedly altered by ribosome binding, although it is possible subtle differences in pore-ring interactions partially destabilize this region.In total, 
these conformational changes may represent the priming of Sec61 upon binding of the ribosome.Though we cannot exclude the possibility that these movements are the result of sequence differences between archaea and mammals, this seems unlikely given the high degree of sequence conservation in the regions interacting with the ribosome and the interhelical contacts that change upon priming.Relative to the isolated crystal structures, the primed Sec61 has prepared for protein translocation by decreasing the activation energy required to open the lateral gate without altering the conformation or stability of the plug.Since targeting to the Sec61 complex is mediated by either a signal peptide or transmembrane domain, a cytosolically cracked lateral gate is ideally positioned to receive these forthcoming hydrophobic elements from SRP.Indeed, a transmembrane domain stalled at a preinsertion state site specifically crosslinks to residues lining the cytosolic region of the lateral gate.Insertion of a signal peptide or transmembrane domain into this site would further open the lateral gate, presumably destabilizing the plug.In this way, the channel’s opening toward the lumen would be coupled to successful recognition of a bona fide substrate.Interestingly, movements of the lateral gate in Sec61, as described here, closely resemble those that occur upon binding of another translocation partner, SecA, to the cytosolic face of SecY.As with the ribosome, SecA interactions with the cytosolic loops 6/7 and 8/9 also partially separate helix 8 and 2 at the lateral gate.These conformational changes may thus represent a universal mechanism for preparing the channel for translocation.However, the movements in the lateral gate with SecA are more exaggerated than with the ribosome: helix 7 shifts to increase the extent of lateral gate opening, while the plug is displaced toward the periplasm.Snapshots of the lateral gate and plug in a more open or closed form are also seen when SecY interacts with either an adjacent protein molecule or a Fab, respectively.Thus, the lateral gate interface would appear to be rather pliable and easily modulated by any number of physiologic or artificial interactions, particularly with the cytosolic loops.Though the translationally active ribosome-Sec61 structure contains a heterogeneous mixture of translating polypeptides, it was possible to visualize near-continuous density in the ribosomal exit tunnel beginning at the tRNA and approaching the translocon.No density in the exit tunnel was observed in the population of ribosomes without tRNAs.Through the majority of the tunnel, the observed density would be most consistent with an extended polypeptide chain.However, within the wider region of the ribosomal tunnel near the exit site, the density for the peptide broadens, suggesting that alpha-helix formation may be possible.As our sample contains an ensemble average of nascent chains, representing endogenous polypeptides, it suggests that all peptides follow a universal path through the ribosome, regardless of sequence or secondary structure tendency.The density for the peptide first encounters Sec61 adjacent to loop 6/7, providing further evidence for the critical role this loop plays in protein translocation.Several studies have hypothesized that there may be communication between the ribosomal tunnel and translocon to potentially prepare the channel for the handling of specific upcoming sequence domains.As the rRNA lining the tunnel is relatively fixed, it has been proposed that 
such communication would involve the ribosomal proteins.The only protein that directly contacts Sec61 and partially lines the tunnel is eL39, which is positioned at the distal region of the tunnel, where the peptide could begin to adopt secondary structure features.It is plausible that the conformation or hydrophobicity of the nascent peptide chain can be communicated via eL39 directly to loop 6/7 of the translocon.Alternatively, this communication could be transmitted via uL23, which forms extensive interactions with both eL39 and Sec61 at the surface of the ribosome.The ability to visualize at near-atomic resolution both a defined nascent polypeptide and the Sec61-interacting ribosomal proteins surrounding the exit tunnel should allow these hypotheses to be directly tested.Given the presence of the hybrid state tRNAs and nascent peptide, this class of particles clearly contains an actively translating ribosome-translocon complex.However, at a threshold at which nascent chain density is visible in the ribosomal tunnel, density was not observed within the Sec61 channel.One reason may be that upon exit from the ribosome, nascent chains have more conformational freedom inside a dynamic Sec61 than within the ribosomal tunnel.We cannot exclude the alternative possibility that nascent chains have slipped out of the Sec61 pore during sample preparation.However, several lines of evidence suggest that most translating ribosome-Sec61 complexes in our sample contain a nascent chain within the Sec61 channel.First, the majority of polypeptides in this sample represent soluble proteins of at least ∼150 residues, a length more than sufficient to span the aligned conduits of the ribosome and Sec61 channel.Second, folded lumenal domains in most of these nascent chains would prevent back sliding through the pore during solubilization.Third, solubilization of pancreatic microsomes under conditions comparable to those used here retain nearly all endogenous nascent chains within the translocon.Fourth, sample preparation after solubilization was very brief with minimal manipulations, in contrast to the multistep purification that resulted in partial loss of nascent chains.For these reasons, we provisionally interpret this structure as an “active” Sec61 channel in the discussion below; definitive proof must await a structure that permits direct nascent chain visualization.Though the resolution of this active Sec61 channel structure in many regions does not allow the same type of atomic level analysis as is possible for the idle translocon, it is still feasible to examine its main characteristics.In agreement with earlier studies, the translocating state of Sec61 has no large-scale changes in its architecture.Helices 2, 7, and 8 do not appear to have undergone substantial rearrangement, and the lateral gate is largely unchanged from the primed state.Additionally, helices 1 and 10 have shifted, and the density for helix 3 is very weak, suggesting it has become mobile.At a threshold where all the surrounding helices were visualized, density for the plug was no longer visible in the center of the channel and a continuous conduit now runs through Sec61α.The central pore was sufficiently large to house a model of an extended polypeptide without clashes.While the plug’s canonical position was not occupied in the active state, we could not unambiguously assign it to an alternate location.It is possible the plug adopts a variety of conformations in this sample or becomes disordered to allow translocation.Given that the 
plug can be crosslinked to several disparate residues within an active SecY, it is likely dynamic once freed from its interactions with the pore ring.This flexibility may be facilitated by the observed movements in helix 1.In the static situation of a stalled nascent chain, the plug may settle at its lowest energy state, perhaps explaining why it was apparently seen near its original location.However steric constraints would require at least a nominal shift in the plug to accommodate the nascent peptide within the central pore.Although fewer particles for the active Sec61 complex led to a lower-resolution map than that for the idle complex, some areas are better resolved than others.Helices 6-9, along with loops 6/7 and 8/9, display the highest resolution within the structure as judged by atomic B-factor.This provides confidence in concluding that this part of Sec61 has few if any substantive conformational changes relative to the idle state.Thus, the C-terminal half of Sec61 effectively forms a stable platform for ribosome interaction.By contrast, the density for helices 2-4 is significantly weaker than for either this same region in the idle Sec61 structure, or for helices 6-9 in the active structure.This observation strongly argues that the position of helices 2-4 in the active Sec61 is heterogeneous.Several nonmutually exclusive explanations are possible: heterogeneous clients at different stages of transloction; different accessory proteins acting during translocation; and inherent flexibility in this region when the plug is displaced.Irrespective of the specific explanation, it would seem clear that helices 6-9 provide a ribosome-stabilized fulcrum, which allows movements within the remaining portion of the molecule to accommodate the nascent chain.The structures of the mammalian ribosome-Sec61 complex highlight the types of experiments made feasible by contemporary cryo-EM techniques.By studying a native, actively translating ribosome, it was possible to obtain high-resolution information for the conformation of an A/P tRNA and polypeptide within the exit tunnel, two states that are particularly challenging to capture using a reconstituted system.Furthermore, by using subsets of particles for different facets of the structure, otherwise dynamic elements such as the ribosome stalk could be visualized at high resolution.We anticipate that similar strategies will reveal the mammalian ribosome in various stages of its functional cycle, as well as translation-related regulatory events that impact human physiology.Analysis of a functionally heterogeneous mixture of particles also permitted direct comparisons of an idle and translating ribosome-Sec61 complex from the same sample.These structures allowed the detailed analysis of the interaction between Sec61 and the 60S subunit and the conformations acquired by the channel upon ribosome binding and protein translocation.These insights suggested a two-stage model for activation of the Sec61 channel, and provide a timeline for molecular changes leading to channel opening for peptide translocation or insertion.The challenge ahead will be to test these and other mechanistic hypotheses regarding the function of Sec61.Structures containing defined nascent peptides, stalled at intermediate stages of translocation, will allow us to precisely trace the sequence of events that accompany a nascent peptide’s transit from the ribosomal peptidyl transferase center into the ER lumen or membrane.Additional details can be found online in Supplemental 
Information. Porcine pancreatic microsomes were solubilized in 1.75% digitonin for 10 min on ice, clarified by centrifugation, and fractionated using Sephacryl S-300 resin in 50 mM HEPES, 200 mM KOAc, 15 mM MgOAc, 1 mM DTT, and 0.25% digitonin. The void fraction was immediately processed for microscopy. Ribosome-Sec61 complexes at 40 nM were applied to glow-discharged holey carbon grids, coated with a layer of amorphous carbon, and flash-cooled in liquid ethane using an FEI Vitrobot. Data were collected on an FEI Titan Krios operating at 300 kV, using FEI's automated single particle acquisition software. Images were recorded using a back-thinned FEI Falcon II detector at a calibrated magnification of 104,478, using defocus values of 2.5–3.5 μm. Videos from the detector were recorded at a speed of 17 frames/s as previously described. Particle picking was performed using EMAN2, contrast transfer function parameters were estimated using CTFFIND3, and all 2D and 3D classifications and refinements were performed using RELION. The resulting density maps were corrected for the modulation transfer function of the detector and sharpened as previously described. The porcine 80S ribosome was built using the moderate resolution model for the human ribosome, while the Sec61 channel bound to both the idle and translating ribosome was built using the crystal structure of the archaeal SecY and the models of the canine Sec61 bound to the ribosome. All models were built in COOT, and refined using REFMAC v5.8. Secondary structure restraints for the Sec61 channel were generated in ProSMART. To test for overfitting, we performed a validation procedure similar to that described previously. The final models for the 40S and 60S subunits were rigid-body fitted into the maps for the remaining classes, and refined. Figures were generated using Chimera and PyMOL. R.M.V. and R.S.H. conceived the project. R.M.V. prepared and characterized samples, optimized them for EM analysis, and collected data. Particle selection, classification, and generation of initial maps were by R.M.V. with guidance from S.H.W.S. and I.S.F. Ribosome structure building and analysis were done by I.S.F. with help from R.M.V. Analysis of the Sec61 structure was by R.M.V. with guidance from R.S.H. R.M.V. and R.S.H. wrote the paper with input from all authors.
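The resolution estimation and map sharpening summarized above can be illustrated with a short, self-contained sketch. This is a minimal illustration, not the RELION or ResMap code used in the study: the half-map arrays, voxel size, helper names and the handling of the 0.143 threshold are assumptions, and a real workflow would also apply a soft mask and a phase-randomization correction before reading off the crossing.

```python
# Minimal sketch (not the RELION/ResMap code used in the study) of a
# gold-standard FSC resolution estimate and negative-B-factor sharpening.
# Half maps, voxel size and threshold handling are assumptions.
import numpy as np

def fsc_curve(half1, half2, voxel_size):
    """Fourier shell correlation between two half maps (cubic arrays, same shape)."""
    n = half1.shape[0]
    f1 = np.fft.fftshift(np.fft.fftn(half1))
    f2 = np.fft.fftshift(np.fft.fftn(half2))
    grid = np.indices(half1.shape) - n // 2            # voxel offsets from the origin
    shells = np.sqrt((grid ** 2).sum(axis=0)).astype(int)
    freq, fsc = [], []
    for shell in range(1, n // 2):
        sel = shells == shell
        num = np.real((f1[sel] * np.conj(f2[sel])).sum())
        den = np.sqrt((np.abs(f1[sel]) ** 2).sum() * (np.abs(f2[sel]) ** 2).sum())
        freq.append(shell / (n * voxel_size))          # spatial frequency in 1/Angstrom
        fsc.append(num / den)
    return np.array(freq), np.array(fsc)

def resolution_at(freq, fsc, threshold=0.143):
    """Resolution (Angstrom) where the FSC first falls below the threshold."""
    below = np.where(fsc < threshold)[0]
    return 1.0 / freq[below[0]] if below.size else 1.0 / freq[-1]

def sharpen(volume, bfactor, voxel_size):
    """Apply a B factor in reciprocal space; a negative value boosts high resolution."""
    n = volume.shape[0]
    f = np.fft.fftshift(np.fft.fftn(volume))
    grid = np.indices(volume.shape) - n // 2
    s = np.sqrt((grid ** 2).sum(axis=0)) / (n * voxel_size)   # s = 1/d in 1/Angstrom
    f = f * np.exp(-bfactor * s ** 2 / 4.0)            # amplitude scaling exp(-B*s^2/4)
    return np.real(np.fft.ifftn(np.fft.ifftshift(f)))
```

Calling resolution_at(*fsc_curve(half1, half2, voxel_size)) on two independently refined half maps mirrors the gold-standard criterion quoted above, and the same FSC machinery underlies the half-map overfitting check applied to the refined models.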
Cotranslational protein translocation is a universally conserved process for secretory and membrane protein biosynthesis. Nascent polypeptides emerging from a translating ribosome are either transported across or inserted into the membrane via the ribosome-bound Sec61 channel. Here, we report structures of a mammalian ribosome-Sec61 complex in both idle and translating states, determined to 3.4 and 3.9 Å resolution. The data sets permit building of a near-complete atomic model of the mammalian ribosome, visualization of A/P and P/E hybrid-state tRNAs, and analysis of a nascent polypeptide in the exit tunnel. Unprecedented chemical detail is observed for both the ribosome-Sec61 interaction and the conformational state of Sec61 upon ribosome binding. Comparison of the maps from idle and translating complexes suggests how conformational changes to the Sec61 channel could facilitate translocation of a secreted polypeptide. The high-resolution structure of the mammalian ribosome-Sec61 complex provides a valuable reference for future functional and structural studies. © 2014 The Authors.
703
An objective approach to model reduction: Application to the Sirius wheat model
Simulation models that predict the yield of agricultural crops from weather, soil and management data have provided a focus for crop physiological research over the last three decades and have contributed to current understanding of crop-environment interactions.Many such models have been developed for a wide range of crops, for example, STICS, APSIM, and DSSAT.The growth and development of agricultural crops in the field is the result of non-linear and inter-related processes, and as a result crop models are necessarily complex.Even when a model approximates individual processes by relatively simple relationships, there are a large number of inter-acting processes that have to be considered.Therefore the complexity of crop models typically arises from the inter-relationships between modelled mechanisms rather than the sophistication of individual process representation.The level of detail, the number of processes considered, and the means whereby they interact are all choices to be made in the design of the model leading to a very diverse range of crop model designs and, as a result, a need for effective methods of model evaluation.The purpose of a model is vital in defining the approach taken to its design and evaluation."For Jamieson et al. the explicit aim of a crop model was improved understanding of the crop's response to its environment.They described a crop model as a ‘…collection of testable hypotheses…’ and viewed model inter-comparison as a method of testing the hypotheses embedded in the models through an examination of their ability to predict detailed within season measurements of the crops growth and development.For example Jamieson et al. presented a comparison of the performance of 5 wheat models with respect to crop and environmental data from within the growing season.Their conclusions were directed towards the mechanistic basis of the different models.For example the assumptions about the role of root distribution on water uptake and the influence of water stress on canopy expansion were highlighted as important differences in the models considered.Many researchers have emphasised the need for a systematic evaluation of model structure.Crout et al. 
proposed a conceptually simple method for undertaking an evaluation of model structure by reducing a model through the replacement of variable quantities with constants.By iteratively replacing different variables in combination with one another a set of alternative model formulations were created.The ability of the reduced models to predict observations was then compared with the original model in order to test the importance of the replaced variables in the model.To date published work with this approach has considered relatively simple models.In this work we extend the approach to the more challenging case of a full crop simulation model with the aim of explicitly testing the hypotheses of the model.The usefulness of the analysis is dependent on the reliability and comprehensiveness of the observational data used.Inevitably the data available are partial, and therefore any model analysis is limited to some extent.Nevertheless we argue that this approach provides greater support for the model design than simply comparing the predictions of the full model with observations.A typical example of a process-based wheat model is Sirius.This model calculates biomass production from intercepted photosynthetically active radiation and grain growth from simple partitioning rules utilising nitrogen response and phenological development sub-models described by Jamieson and Semenov.Sirius is an actively developing model currently being applied at a number of levels, from basic research to on-farm decision support.The reduction approach requires a quantitative comparison between model predictions and observations.In principle this can be based on any relevant data series available.Our purpose was to mechanistically evaluate the performance of the model in reproducing the pattern of growth and development within a growing season as well as between sites and seasons.We therefore selected data from trials where detailed growth analysis had been conducted including cases where the major abiotic stresses of nitrogen and water limitation were present.Data from 9 trials have been used for the analysis: a spring wheat study at Lincoln New Zealand with four levels of water supply winter wheat trials at three sites in the UK with high nitrogen application rates and a further UK winter wheat trial with two levels of nitrogen application.Typically crop models are calibrated for use with particular cultivars and require site specific inputs for weather and soil conditions.Sirius had been previously applied to the New Zealand field data by Jamieson and Semenov and their cultivar and site parameters were employed for this work.The UK field trials used the cultivar Mercia, for which Sirius had been calibrated previously using field experiments in the UK.Soil and weather characteristics used were as reported by Gillett et al.The approach was to compare the predictive performance of a large number of alternative model formulations.These were based on the original full model but with specific variables replaced by constant values."In this context model variables were defined as internal quantities calculated using an assumed relationship expressed in terms of the model's parameters, input variables and other model variables.This definition of model variables was partially subjective because intermediate steps in a model calculation could be defined as individual model variables, or combined into a larger relationship as a single model variable.Such choices often depend upon the requirements of specific computer 
implementation. However, we regarded each model variable as having a specific mechanistic role in the model and defined a variable as a specific model component whose value was allowed to change during the run of the model. We therefore tested the effect of fixing a model variable to a constant on the model's skill. If the variable was important for model prediction one would expect replacing it with a constant to have a detrimental effect on the comparison between observations and predictions. As the assessment of model skill was based on a comparison of observations and model predictions, the approach was explicitly reliant on observational data. The utility of the analysis was therefore directly dependent on the quality and scope of the data used. The key steps in the analysis were as follows. (1) Implementation. The analysis was undertaken using a software environment which manages the variable replacement on the fly under programmatic control. Therefore the first step was to implement the model within this environment and check its behaviour to ensure it was the same as the original source model. This was accomplished through comparisons to the original hard-coded Sirius model. (2) Evaluation of model structure, to enable the identification of candidate variables for reduction analysis. Not all the variables in the model merited investigation. For example, many variables in the model code were for diagnostic or output purposes and not part of the model's functional structure. Use of the reduction software facilitated a syntactical analysis of the model structure to identify variables which had a technical or operational function in the model code. The remaining 111 variables, representing the modelled processes, were considered in the screening analysis. (3) Screening analysis. The computational requirements of the reduction procedure increase with the number of variables considered. Therefore a screening analysis was conducted to assist in refining the list of candidate variables. This involved replacing variables individually rather than in combination. (4) Multi-factorial reduction analysis. In this stage the candidate model variables were replaced in combination in order to explore the set of possible model replacements as fully as possible. The use of the informal Qi values as pseudo-likelihoods required cautious interpretation. However, our aim was to assess the mechanistic basis of the model, not obtain absolute probability values. The effect of changing the value of α on the interpretation was investigated by calculating the Qi values with α set at 2.5, 5.0 and 10% of the value of RSSfull. In the screening analysis each variable was replaced individually. The value of the replacement constant was estimated by minimising the model residual sum of squares, subject to the constraint that the value must be within the range the variable takes in the run of the full model. These replacement values were used throughout the subsequent analysis.
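To make the screening step concrete, the sketch below fits a replacement constant for each candidate variable and records how the resulting residual sum of squares compares with the full model. It is an illustrative reconstruction rather than the authors' software: run_model, its replacement interface, the trace of internal variable values and the observation vector are hypothetical stand-ins for the Sirius implementation, and the bounded scalar optimiser is one reasonable choice, not necessarily the one used in the study.

```python
# Illustrative sketch of the screening analysis: each candidate variable is
# replaced, one at a time, by a constant fitted to minimise the residual sum
# of squares, constrained to the range the variable spans in the full model.
# run_model and its interface are hypothetical stand-ins for Sirius.
import numpy as np
from scipy.optimize import minimize_scalar

def rss(predicted, observed):
    return float(np.sum((predicted - observed) ** 2))

def screen_variables(run_model, candidates, observed):
    """run_model(replacements) -> (predictions, trace of internal variables).

    candidates is a list of variable names; observed is the vector of field
    observations matched to the model predictions.
    """
    full_pred, trace = run_model({})              # full model, no replacements
    rss_full = rss(full_pred, observed)
    results = {}
    for name in candidates:
        lo, hi = trace[name].min(), trace[name].max()   # range in the full run

        def objective(c):
            pred, _ = run_model({name: c})        # model with this variable fixed
            return rss(pred, observed)

        fit = minimize_scalar(objective, bounds=(lo, hi), method="bounded")
        results[name] = {"constant": fit.x, "rss_ratio": fit.fun / rss_full}
    return rss_full, results
```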
Variables whose replacement had little or no detrimental effect on model performance were considered for inclusion in the multi-factorial analysis. Where a number of the potential variables related to the same process, switch variables were introduced to the code, which enabled the effect of replacing the result of a whole process by a constant to be considered. For example, the adjustment of soil maximum temperature from observed maximum air temperature was modelled using a relationship involving two model variables calculated elsewhere in the model. Rather than replace these individually, it was more convenient to modify the model code and introduce a switch variable to reduce the modelled value of soil maximum temperature to the maximum air temperature, thereby testing the role of ENAV and TADJ simultaneously. Switch variables of this type were introduced for soil minimum temperature, soil maximum temperature and adjustment of canopy temperature from air temperature. In each case replacing the switch variable with zero reduced the temperature variable to the appropriate air temperature. Further switch variables were introduced to reduce the diurnal variation in temperature used for many temperature-dependent processes in the model to a daily mean temperature and to switch off the vernalisation sub-model. There are 2^N − 1 different possible combinations of replacements for N candidate variables. Searching this replacement space exhaustively is not possible for even moderate values of N. Therefore a stochastic search based on the Metropolis–Hastings principle was used. The state of each candidate variable was either normal or replaced. The procedure moved through the search space by changing these states. For each iteration a step was made to a new model by changing a given number of the variable states. The number of states to change was adjusted operationally to achieve a random walk through the search space rather than just a random sample. In the case of Sirius, we found that allowing just one state to change per iteration gave the most efficient search. The results of the analysis were very similar when up to two or three state changes per iteration were allowed, but the procedure required a larger number of iterations to converge. This Metropolis–Hastings random walk through the replacement space has the ability to accept moves which reduce the model likelihood, allowing the walk to escape local minima in the search space.
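A schematic version of this search is sketched below. It is not the authors' code, and because the defining equation for the pseudo-likelihood is not reproduced in this text, the form Q = exp(−RSS/α) used here is an explicit assumption (chosen so that α can be expressed as a fraction of RSSfull, as described above); run_model, candidates and constants follow the same hypothetical interface as the screening sketch.

```python
# Schematic sketch of the Metropolis-Hastings walk through the space of
# variable replacements. The pseudo-likelihood form Q = exp(-RSS / alpha) is
# an assumed stand-in for the equation referred to in the text; run_model,
# candidates and constants follow the hypothetical interface sketched above.
import numpy as np

def mh_search(run_model, candidates, constants, observed, alpha,
              n_unique=10_000, seed=0):
    rng = np.random.default_rng(seed)
    cache = {}                                   # state -> RSS, reused on revisits

    def evaluate(state):
        if state not in cache:
            reps = {v: constants[v] for v, on in zip(candidates, state) if on}
            pred, _ = run_model(reps)
            cache[state] = float(np.sum((pred - observed) ** 2))
        return cache[state]

    state = tuple(False for _ in candidates)     # start from the full model
    current = evaluate(state)
    while len(cache) < n_unique:
        flip = int(rng.integers(len(candidates)))             # one state change per step
        proposal = tuple(on ^ (i == flip) for i, on in enumerate(state))
        new = evaluate(proposal)
        # Accept with probability min(1, Q_new / Q_old): slightly worse models
        # are accepted quite often, seriously worse ones only rarely.
        if new <= current or np.exp(-(new - current) / alpha) > rng.random():
            state, current = proposal, new

    # Normalise pseudo-likelihoods over the evaluated models and sum them for
    # the models in which each candidate variable was replaced.
    best = min(cache.values())                   # subtracted for numerical stability
    q = {s: np.exp(-(r - best) / alpha) for s, r in cache.items()}
    total = sum(q.values())
    return {v: sum(q[s] for s in q if s[i]) / total
            for i, v in enumerate(candidates)}
```

The cache both counts unique model evaluations (the analysis reported results from 10,000 of them) and avoids re-running models the walk revisits, matching the behaviour described in the text that follows.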
Although the walk through the search space may return to a previously evaluated model, this does not adversely affect the search efficiency in our case, as the previously calculated pseudo-likelihood value was used, avoiding the need to re-evaluate the model. The pseudo-likelihoods were normalised to unity over the models considered. A replacement probability was calculated for each individual variable by summing the normalised pseudo-likelihoods for the models where the given variable was replaced. The results presented were based on 10,000 unique model evaluations. The replacement probabilities were reported at suitable intervals to ensure they had stabilised after this number of iterations. The replacement constants used were those obtained for each of the candidate variables in the screening analysis. Predicted and observed total biomass, grain weight and leaf area were compared for the full model and summarised as Nash–Sutcliffe statistics. The trends in biomass and grain weight were well reproduced, although leaf area was less satisfactory. The overall timing of the canopy was well described in the model simulations, although the canopy size was often over-predicted. In practice, over-prediction of leaf area tends to be disconnected from the prediction of biomass and grain, as light interception does not increase linearly with canopy size. For example, over-prediction of leaf area index from four to five increases fractional interception by only 5% and therefore has little effect on predicted crop production. Under-prediction of leaf area would be expected to have detrimental effects on biomass and grain yield prediction. The behaviour of the reduced models in the screening analysis was summarised using the ratio of RSS for the reduced model to that of the full model. The distribution of these values for the 111 variables considered is shown in Fig. 3.
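Continuing the sketch, the replacement probabilities described above can be obtained by normalising the pseudo-likelihoods of the visited models to unity and summing the probability mass of the models in which a given variable is replaced; the joint probabilities referred to later in the analysis follow the same pattern. The visited argument is the dictionary returned by the search sketch above.

```python
# Sketch: summary probabilities from the visited models, where `visited` maps
# each state tuple (True = variable replaced) to its pseudo-likelihood.
def replacement_probabilities(visited, n_vars):
    """Normalise pseudo-likelihoods to unity and sum the mass of the models in
    which each variable is replaced."""
    total = sum(visited.values())
    return [
        sum(q for states, q in visited.items() if states[i]) / total
        for i in range(n_vars)
    ]

def joint_replacement_probability(visited, i, j):
    """Probability mass of the models in which variables i and j are both
    replaced, used to check whether replacements are interdependent."""
    total = sum(visited.values())
    return sum(q for states, q in visited.items() if states[i] and states[j]) / total
```

Comparing a joint probability with the product of the corresponding individual probabilities indicates whether the replacement of one variable depends on the state of another.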
The individual replacement of 29 of the considered variables by a constant increased the ratio of RSS for the reduced model to that of the full model to greater than 1.1. Of the remaining 82 replacements, 60 resulted in a smaller RSS than the original full model; the remaining 22 had a small detrimental effect. These 82 variables were considered as potential candidates in the factorial analysis. The reduction of RSS in this screening analysis was expected, as the values of the replacement constant for each variable were selected by fitting them to the observed data. However, this did suggest that these variables may not be important for the predictive performance of the model. Up to this point the analysis was entirely automatic. However, at this stage mechanistic interpretation of the role of each of the 82 potential variables in the model was required to ensure that the replacement of specific variables was meaningful. For example, some of the variables are intermediate steps in a calculation whose reduction would be best accomplished through the replacement of the final end-point variable. The replacement of some variables would break the mass balance of the model, for example allowing it to create nitrogen or dry matter to translocate to the grain irrespective of the status of the crop. On this basis 54 variables were eliminated from the analysis. The remaining 28 model variables, together with 5 switch variables, were identified for inclusion in the multi-factorial analysis. The evolution of the estimated replacement probability for selected variables is shown in Fig. 4 to illustrate the gradual convergence of the computational analysis. The models comprising the uppermost 75% of the model probability distribution are shown in rank order in Fig. 5. This shows a small number of relatively better performing models, followed by a gradual decline in model performance. In comparison to previously published work using the replacement method, the results here are notable for the large number of models with relatively similar performance. The replacement probabilities are shown in Table 3. Values tending to unity imply that model performance improved when the variable was replaced. Values of 0.5 imply that model performance was unaffected when the variable was replaced by a constant. Values tending to zero imply that model performance was worse when the variable was replaced. The results in Table 3 were considered in the context of the mechanistic basis of the model design with a view to defining a minimum form of the Sirius model whose overall performance could be compared to the full model. Dry matter for grain filling was derived from a combination of photosynthesis during the grain-filling period and translocation of stem and leaf biomass. The model recorded biomass at anthesis and this was used to define the rate at which stem and leaf biomass could be translocated to the filling grain. In effect, translocation potential was related to biomass at anthesis. The reduction analysis suggested that biomass at anthesis is redundant and that translocation potential could be a constant across sites and treatments. In the potential and drought treatments the effect of this replacement was to reduce the contribution of translocation to grain yield. In the case of N-limited treatments, where anthesis biomass was relatively low, the effect was to increase the relative contribution of translocation to grain yield. The model allowed for the expansion of the canopy to be reduced under water stress through the variable DrFACLAI and, similarly,
the rate of canopy senescence to be increased through the variable GAKILR. These variables generally had low replacement probabilities, implying that they contributed to the model's predictive skill. The model used several interesting temperature adjustments to account for the differences between air temperature and soil minimum, soil maximum and canopy temperatures. Three model variables related to these adjustments were identified as redundant and, as outlined earlier, their overall importance was assessed through the inclusion of three switch variables which had the effect of replacing each of these temperatures with the appropriate air temperature. All three were found to be redundant. Another interesting feature of Sirius is that the diurnal variation in temperature is used to estimate the rates of progress of a variety of plant processes rather than simply using the mean temperature. This feature was found to be redundant, with replacement probabilities of approximately 0.5. Sirius used plant nitrogen status to influence nitrogen uptake and to drive nitrogen translocation between the stem and leaf. The variable MINSTEMDEMAND was calculated for each day and represented the minimum nitrogen demand which must be supported if growth was to occur. If this nitrogen was not available from uptake it was obtained through translocation of nitrogen from the leaf to the stem. MAXSTEMDEMAND provided an upper limit on crop nitrogen uptake in cases where stem nitrogen was high, causing nitrogen uptake to cease. Related to these variables, LEAFDEMAND calculated the nitrogen required for the expected leaf expansion and used this to calculate the appropriate transport of nitrogen between leaf and stem if that was required to sustain leaf expansion. MAXSTEMDEMAND was redundant in the analysis and MINSTEMDEMAND was borderline redundant. Given the values of the replacement constants, the effect of these replacements was to remove the influence of plant nitrogen status on nitrogen uptake; the crop simply removed whatever nitrogen was available to it. The replacement probabilities for LEAFDEMAND varied between the likelihood methods but overall were all <0.4. Therefore the minimum model simplified nitrogen uptake, ignoring the effect of plant nitrogen status on uptake, but retaining the use of leaf demand to drive internal nitrogen allocation. These variables do not relate to the link between crop nitrogen status and growth, which remained unchanged in the minimum model. Two variables associated with the prediction of nitrogen mineralisation were identified as redundant. These related to the influence of soil moisture and temperature on the rate of mineralisation. Both were replaced at the low end of their range, with the effect of reducing nitrogen mineralisation to a low constant value. The representation of crop development in Sirius combined the effect of temperature, including the effect of low temperatures, and daylength through simulation of the crop's leaf number. The approach is fully described by Jamieson et al.
and is only briefly summarised here. Leaves are produced at a constant thermal time interval, which is a cultivar-specific parameter. During the course of the growing season the model sets a final leaf number depending on the vernalisation and daylength experienced by the growing crop, according to the parameters defined for the cultivar. Anthesis occurred three phyllochrons after the point when the simulated crop leaf number equalled the final leaf number. For cultivars with a vernalisation requirement a potential final leaf number was calculated in the vernalisation procedure. If the cultivar was daylength sensitive this potential final leaf number was further modified by a daylength function to set the final leaf number. Two variables related to the vernalisation submodel were found to be redundant; moreover, the switch variable that turns off vernalisation entirely had a replacement probability of c. 0.7, indicating that model predictions were improved when this variable was replaced. Several variables related to soil surface evaporation were redundant, such that EVsoil was effectively replaced by a constant low value of 0.32 mm day−1. However, ignoring soil surface evaporation completely had a detrimental effect on model performance, with the implication that although the model may be over-estimating evaporation, it did need to be considered. In the model, soil evaporation influences both the calculation of soil maximum temperature and the soil water budget. Replacing soil evaporation with a constant continued to have an advantage for model performance even when soil maximum temperature was set equal to the maximum air temperature, implying that the changes to the water budget are beneficial to model performance. In addition to the replacement probabilities for individual variables shown in Table 3, joint probabilities were calculated to indicate whether there were cases where the replacement of one variable was dependent on whether another variable was, or was not, replaced. These results showed no notable interactions for the redundant variables. The analysis described is partial, as the observational data used do not provide a test for all aspects of the model, nor do they represent a fully comprehensive range of site conditions. The interpretation of the results of the analysis needs to reflect these limitations if useful insights into the model design are to be gained. For example, although setting translocation potential to a constant improved model predictions in our analysis, the replacement would be problematic if modelled anthesis biomass was lower than the proposed constant for setting the potential for translocation, as might be the case under extreme stress. This finding may imply that translocation potential is not linearly related to biomass and, rather than simply setting the variable to a constant, it may be more productive to consider this feature of the crop's behaviour more carefully in future model development. Similar arguments apply to the redundancy of variables related to vernalisation. We are not suggesting that there is no such process as vernalisation, rather that the modelled modification of leaf number to account for vernalisation gives a worse result than that obtained by ignoring the process in the model. This may provide an indication that the representation of the process in the model is inappropriate. However, caution is required: the findings were dictated by the response of the model to the range of conditions experienced over the three UK sites, as the New Zealand trials used a spring wheat
cultivar. Redundancy in the variables related to nitrogen mineralisation also illustrates the effect of partial data on the analysis. These variables had little effect on model performance as the crop's nitrogen supply was high relative to the rate of nitrogen mineralisation. This is true even in the treatment with no nitrogen fertiliser additions, where the nitrogen supply is dominated by the residual soil inorganic nitrogen at the time of planting. On the basis of the above analysis a variant of Sirius was developed in which all the identified redundant model variables were reduced, with the aim of comparing the resultant predictions with those of the full model. The differences were that canopy and soil temperatures were taken as equal to the air temperature; there was no allowance for the effect of diurnal variation in temperature on development or other processes; nitrogen uptake was simplified such that the plant simply removes all nitrogen available to it in each time step; the effect of vernalisation was ignored, with development being driven solely by temperature and photoperiod; soil nitrogen mineralisation and soil surface evaporation were ignored; and translocation potential was considered a constant. The resulting model predictions are compared to the full model in Fig. 1 and the summary Nash–Sutcliffe statistics are presented in Table 2. Notwithstanding these simplifications of the model, the performance was almost identical to the full model, with slightly improved performance for leaf area and grain weights.
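The Nash–Sutcliffe statistic used to summarise both the full and the minimum model can be computed as follows; the function is the standard definition of model efficiency, and the variable names in the usage comment are hypothetical.

```python
import numpy as np

def nash_sutcliffe(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 - sum((obs - pred)^2) / sum((obs - mean(obs))^2).
    A value of 1 indicates a perfect fit; 0 indicates predictions no better than
    the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 1.0 - np.sum((observed - predicted) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical usage: compare full and minimum model predictions for one output.
# nash_sutcliffe(obs_biomass, full_model_biomass)
# nash_sutcliffe(obs_biomass, minimum_model_biomass)
```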
In previous applications of the variable replacement approach to model reduction all the models investigated were found to have redundant variables. Similarly, Sirius was found to contain variables whose use was redundant for predicting the data we have used in this analysis. However, most of the model's variables could not be reduced to a constant; of the 111 variables considered, 16 were ultimately identified as potentially redundant. The areas of the model where there was evidence of redundancy were carbon translocation; nitrogen physiology; adjustment of air temperature for various modelled processes; allowance for diurnal variation in temperature; vernalisation; soil nitrogen mineralisation; and soil surface evaporation. A minimum form of the model in which these features were either removed or replaced by constants performed slightly better than the full model with these data sets. This does not imply that these processes are not important in the real crop system. Rather, it indicates that the model's predictive performance was not improved through their representation in the model. The outcomes of the work we have described depended on our choice of comparison data. In our case this was within-season measurements of multiple components of the crop over a relatively small number of trials. We focussed on challenging the mechanisms within the model at a relatively detailed level in order to evaluate which of the modelled processes are contributing to the overall prediction of growth and development over the growing season. Therefore the approach is analogous to the type of detailed model inter-comparison described by Jamieson et al. However, our work could be described as a model intra-comparison as it was based on the comparison of many simplified forms of the same model. The approach provides automation to increase the efficiency of the evaluation and is a systematic means of increasing the rigour of the evaluation. However there is, as yet, no way to avoid the need for mechanistic model understanding and interpretation if model performance is to be critically evaluated. The analysis is dependent on the observational data used. Subject to this limitation it provides a test of whether a particular formulation of model variables contributes to the model's predictive performance. The aim should not be to simply find a simpler model and use it, but to use the identification of redundant variables as a means to challenge and improve the formulation used in the model.
An existing simulation model of wheat growth and development, Sirius, was evaluated through a systematic model reduction procedure. The model was automatically manipulated under software control to replace variables within the model structure with constants, individually and in combination. Predictions of the resultant models were compared to growth analysis observations of total biomass, grain yield, and canopy leaf area derived from 9 trials conducted in the UK and New Zealand under optimal, nitrogen-limiting and drought conditions. Model performance in predicting these observations was compared in order to evaluate whether individual model variables contributed positively to the overall prediction. Of the 111 model variables considered, 16 were identified as potentially redundant. Areas of the model where there was evidence of redundancy were: (a) translocation of biomass carbon to grain; (b) nitrogen physiology; (c) adjustment of air temperature for various modelled processes; (d) allowance for diurnal variation in temperature; (e) vernalisation; (f) soil nitrogen mineralisation; (g) soil surface evaporation. It is not suggested that these are not important processes in real crops; rather, that their representation in the model cannot be justified in the context of the analysis. The approach described is analogous to a detailed model inter-comparison, although it would be better described as a model intra-comparison as it is based on the comparison of many simplified forms of the same model. The approach provides automation to increase the efficiency of the evaluation and a systematic means of increasing the rigour of the evaluation.
704
Directional Switching Mechanism of the Bacterial Flagellar Motor
Many bacteria possess flagella to swim in liquid media and move on solid surfaces. Escherichia coli and Salmonella enterica serovar Typhimurium are model organisms that have provided deep insights into the structure and function of the bacterial flagellum. The flagellum is composed of basal body rings and an axial structure consisting of at least three parts: the rod as a drive shaft, the hook as a universal joint and the filament as a helical propeller. The flagellar motor of E. coli and Salmonella consists of a rotor and a dozen stator units and is powered by an electrochemical potential of protons across the cytoplasmic membrane, namely proton motive force. Marine Vibrio and extremely alkalophilic Bacillus utilize sodium motive force as the energy source to drive flagellar motor rotation. The rotor is composed of the MS ring made of the transmembrane protein FliF and the C ring consisting of three cytoplasmic proteins, FliG, FliM and FliN. Each stator unit is composed of two transmembrane proteins, MotA and MotB, and acts as a transmembrane proton channel to couple the proton flow through the channel with torque generation. The flagellar motor rotates in either the counterclockwise (CCW) or clockwise (CW) direction in E. coli and Salmonella. When all the motors rotate in the CCW direction, flagellar filaments together form a bundle behind the cell body to push the cell forward. Brief CW rotation of one or more flagellar motors disrupts the flagellar bundle, allowing the cell to tumble, followed by a change in the swimming direction. Sensory signal transducers sense temporal changes in extracellular stimuli such as chemicals, temperature and pH and transmit such extracellular signals to the flagellar motor via the intracellular chemotactic signaling network. The phosphorylated form of CheY, which serves as a signaling molecule, binds to FliM and FliN in the C ring to switch the direction of flagellar motor rotation from CCW to CW. Thus, the C ring acts as a switching device to switch between the CCW and CW states of the motor. The stator complex is composed of four copies of MotA and two copies of MotB. The MotA4/MotB2 complex is anchored to the peptidoglycan layer through direct interactions of the C-terminal periplasmic domain of MotB with the PG layer to become an active stator unit around the rotor. A highly conserved aspartate residue of MotB is located in the MotA4/MotB2 proton channel and is involved in the energy coupling mechanism. The cytoplasmic loop between transmembrane helices 2 and 3 of MotA contains highly conserved Arg-90 and Glu-98 residues, which are important not only for torque generation but also for stator assembly around the rotor. FliG is directly involved in torque generation. Highly conserved Arg-281 and Asp-289 residues are located on the torque helix of FliG and interact with Glu-98 and Arg-90 of MotAC, respectively. Since the elementary process of torque generation caused by sequential stator–rotor interactions in the flagellar motor is symmetric in CCW and CW rotation, HelixTorque is postulated to rotate 180° relative to MotAC in a highly cooperative manner when the motor switches between the CCW and CW states of the C ring. This mini-review article covers current understanding of how such a cooperative remodeling of the C ring structure occurs. FliF assembles into the MS ring within the cytoplasmic membrane. The C ring, consisting of a cylindrical wall and inner lobes, is formed by FliG, FliM and FliN on the cytoplasmic face of the MS ring, with the inner lobes connected to the MS ring.
FliF requires FliG to facilitate MS ring formation in the cytoplasmic membrane. FliG binds to FliF with a one-to-one stoichiometry. FliM and FliN together form the FliM1/FliN3 complex consisting of one copy of FliM and three copies of FliN, and the FliM1/FliN3 complex binds to the FliG ring structure through a one-to-one interaction between FliG and FliM to form the continuous C ring wall. Most of the domain structures of FliG, FliM and FliN have been solved at atomic resolution, and possible models of their organization in the C ring have been proposed. FliG consists of three domains: N-terminal, middle and C-terminal domains. FliGC is divided into two subdomains: FliGCN and FliGCC. FliGN is involved in the interaction with the C-terminal cytoplasmic domain of FliF. Inter-molecular interactions between FliGN and FliGN and between FliGM and FliGCN are responsible for the assembly of FliG into the ring structure on the cytoplasmic face of the MS ring. FliGM provides binding sites for FliM. A highly conserved EHPQR motif of FliGM is involved in the interaction with FliM. FliGCC contains HelixTorque, and highly conserved Arg-284 and Asp-292 residues of Aquifex aeolicus FliG, which correspond to Arg-281 and Asp-289 of E. coli FliG involved in the interactions with conserved charged residues of MotAC, are exposed to solvent on the surface of HelixTorque. FliM consists of three domains: N-terminal, middle and C-terminal domains. FliMN contains a well conserved LSQXEIDALL sequence, which is responsible for the interaction with CheY-P. FliMN is intrinsically disordered, and the binding of CheY-P to FliMN allows FliMN to become structured. FliMM has a compactly folded conformation, and side-by-side associations between FliMM domains are responsible for the formation of the C ring wall. The binding of CheY-P to FliMN affects inter-molecular FliMM–FliMM interactions, thereby inducing a conformational change in the C ring responsible for switching the direction of flagellar motor rotation. A well conserved GGXG motif of FliMM is involved in the interaction with FliGM. FliMC shows significant sequence and structural similarities with FliN and is responsible for the interaction with FliN. FliN is composed of an intrinsically disordered N-terminal region and a compactly folded domain, which structurally looks similar to FliMC. FliN exists as a dimer of dimers in solution and forms the FliM1/FliN3 complex along with FliM through an interaction between FliMC and FliN. CheY-P binds to FliNC in a FliM-dependent manner. Leu-68, Ala-93, Val-113 and Asp-116 of E.
coli FliN are responsible for the interaction with CheY-P. The binding of CheY-P to FliN affects interactions between FliMC and FliN, inducing the conformational change of the C ring responsible for directional switching of flagellar motor rotation. FliNN seems to control the binding affinity of FliNC for CheY-P, although it is dispensable for the function of FliN. FliN also provides binding sites for FliH, a cytoplasmic component of the flagellar type III protein export apparatus, for efficient flagellar protein export and assembly. Val-111, Val-112 and Val-113 of FliN are responsible for the interaction with FliH. Electron cryomicroscopy image analysis has shown that the C ring structures of the purified CCW and CW motors have rotational symmetry varying from 32-fold to 35-fold, and the diameter varies accordingly. The C ring diameters of the CCW and CW motors with C34 symmetry are 416 Å and 407 Å, respectively, and so the unit repeat distance along the circumference of the C ring is shorter in the CW motor than in the CCW motor. The C ring produced by a Salmonella fliF–fliG deletion fusion strain missing FliFC and FliGN lacks the inner lobe, suggesting that FliFC and FliGN together form the inner lobe. In agreement with this, cryoEM images of the C ring containing the N-terminally green fluorescent protein (GFP)-tagged FliG protein show an extra density corresponding to the GFP probe near the inner lobe. The fliF–fliG deletion fusion results in unusual switching behavior of the flagellar motor, suggesting that the inner lobe is required for efficient and robust switching in the direction of flagellar motor rotation in response to changes in the environment. The upper part of the C ring wall is formed by FliGM and FliGC. FliGM binds to FliGCN of its adjacent FliG subunit to produce a domain-swap polymer of FliG to form a ring in both CCW and CW motors. Since HelixTorque of FliGCC interacts with MotAC, FliGCC is located at the top of the C ring wall. Since FliMM directly binds to FliGM, the continuous wall of the C ring with a thickness of 4.0 nm and a height of 6.0 nm is formed by side-by-side associations of the FliMM domains. A continuous spiral density with a diameter of 7.0 nm along the circumference at the bottom edge of the C ring is made of FliMC and FliN. In E.
coli and Salmonella, the flagellar motor is placed in a default CCW state. Mutations located in and around HelixMC of FliG, which connects FliGM and FliGCN, cause unusual switching behavior of the flagellar motor, suggesting that HelixMC is involved in switching the direction of flagellar motor rotation. HelixMC is located at the FliGM–FliMM interface and contributes to hydrophobic interactions between FliGM and FliMM. In-frame deletion of three residues, Pro-Ala-Ala at positions 169 to 171 of Salmonella FliG, which are located in HelixMC, locks the motor in the CW state even in the absence of CheY-P. The crystal structure of the FliGM and FliGC domains derived from Thermotoga maritima with this CW-locked deletion has shown that the conformation of HelixMC is distinct from that of the wild-type. In the wild-type Tm-FliGMC/Tm-FliMM complex, Val-172 of HelixMC of Tm-FliGMC makes hydrophobic contact with Ile-130 and Met-131 of Tm-FliMM. In contrast, disulfide crosslinking experiments have shown that HelixMC is dissociated from Tm-FliGM in the presence of the CW-locked deletion. Consistently, the CW-locked deletion of Tm-FliG reduces the binding affinity of Tm-FliGMC for Tm-FliMM by about 400-fold. Therefore, it seems likely that the binding of CheY-P to FliM and FliN induces conformational rearrangements of the FliGM–FliMM interface, thereby causing dissociation of HelixMC from the interface to facilitate the remodeling of the FliG ring structure responsible for directional switching of the flagellar motor. HelixMC interacts with HelixNM, which connects FliGN and FliGM. The E95D, D96V/Y, T103S, G106A/C and E108K substitutions in HelixNM of Salmonella FliG result in a strong CW switch bias. A homology model of Salmonella FliG built based on the crystal structure of FliG derived from A. aeolicus has suggested that Thr-103 of HelixNM may make hydrophobic contacts with Pro-169 and Ala-173 of HelixMC. These observations lead to a plausible hypothesis that a change in the HelixNM–HelixMC interaction mode may be required for conformational rearrangements of the C ring responsible for directional switching of the flagellar motor. A FliF–FliG full-length fusion results in a strong CW switch bias of the E. coli flagellar motor. Intragenic suppressor mutations, which improve the chemotactic behavior of the E.
coli fliF–fliG full-length fusion strain, are located at the FliGN–FliGN interface, suggesting that a change in inter-molecular FliGN–FliGN interactions may be required for flagellar motor switching. Therefore, there is the possibility that conformational rearrangements of the FliGM–FliMM interface caused by the binding of CheY-P to the C ring influence the HelixNM–HelixMC interaction, thereby inducing conformational rearrangements of FliGN domains responsible for the switching in the direction of flagellar motor rotation. The elementary process of torque generation by stator–rotor interactions is symmetric in CCW and CW rotation. A hinge connecting FliGCN and FliGCC has a highly flexible nature at the conserved MFXF motif, allowing FliGCC to rotate 180° relative to FliGCN to reorient the Arg-281 and Asp-289 residues in HelixTorque to achieve a symmetric elementary process of torque generation in both CCW and CW rotation. Structural comparisons between Tm-FliGMC of the wild-type and Tm-FliGMC with the CW-locked deletion have shown that the CW-locked deletion induces a 90° rotation of FliGCC relative to FliGCN through the MFXF motif. Consistently, the binding of CheY-P to the C ring induces a tilting movement of FliMM, resulting in the rotation of FliGCC relative to FliGCN. Therefore, it is possible that such a tilting movement of FliMM may promote a detachment of HelixMC from the FliGM–FliMM interface, resulting in the 180° rotation of FliGCC relative to FliGCN. Switching between the CW and CCW states of the flagellar motor is highly cooperative. The cooperative switching mechanism can be explained by a conformational spread model, in which a switching event is mediated by conformational changes in a ring of subunits that spread from subunit to subunit via their interactions along the ring. The binding of CheY-P to FliM and FliN affects subunit–subunit interactions between FliMM domains and between FliMC and FliN in the C ring to induce a 180° rotation of FliGCC relative to MotAC, thereby allowing the motor to rotate in the CW direction. HelixMC of FliG, located at the interface between FliGM and FliMM, plays an important role in highly cooperative remodeling of the FliG ring structure. However, it remains unknown how HelixMC coordinates cooperative rearrangements of FliG subunits with changes in the direction of flagellar motor rotation. The C ring of the CCW motor can accommodate more FliM1/FliN3 complexes without changing inter-subunit spacing, and directional switching of the motor induces the dissociation of several weakly bound FliM1/FliN3 complexes from the C ring. Consistently, the CW-locked deletion weakens an interaction between FliGM and FliMM. Because there is no difference in the rotational symmetry of the C ring between the purified CCW and CW motors, it remains unclear how several FliM1/FliN3 complexes weakly associate with the C ring when the motor spins in the CCW direction. The elementary process of the torque-generation cycle is symmetrical in the CCW and CW directions. However, the output characteristics of the CW motor are distinct from those of the CCW motor. Torque produced by the CCW motor remains almost constant in a high-load, low-speed regime of the torque–speed curve and decreases sharply to zero in a low-load, high-speed regime. In contrast, torque produced by the CW motor decreases linearly with increasing motor speed. This suggests that directional switching of the flagellar motor may affect stator–rotor interactions in a load-dependent manner. However, nothing is known about the molecular
mechanism. Furthermore, the switching rate of the flagellar motor also depends on the motor speed. A recent non-equilibrium model of flagellar motor switching has predicted that the motor sensitivity to CheY-P increases with an increase in motor torque. However, it remains unknown how stator–rotor interactions modulate the binding affinity for CheY-P. High-resolution structural analysis of the C rings in the CCW and CW states by cryoEM image analysis will be essential to advance our mechanistic understanding of the directional switching mechanism of the flagellar motor. FliM and FliN alternate their forms between localized and freely diffusing ones, and the copy number of FliM and FliN in the CCW motor has been found to be about 1.3 times larger than that in the CW motor. Consistently, fluorescence anisotropy techniques have shown that the CCW motor accommodates more FliM1/FliN3 complexes without changing the spacing between FliM subunits. Such exchanges depend on the direction of flagellar rotation but not on the binding of CheY-P to the C ring per se. The timescale of this adaptive switch remodeling of the C ring structure is much slower than that of the rotational switching between the CCW and CW states. Such a structural remodeling of the C ring is important for fine-tuning the chemotactic response to temporal changes in the environment. The CW-locked deletion of FliG considerably reduces the binding affinity of FliGM for FliMM, presumably due to detachment of HelixMC from the FliGM–FliMM interface. Because FliM binds to HelixMC of FliG in the E. coli CCW motor, the dissociation of HelixMC from the FliGM–FliMM interface may promote the dissociation of several weakly bound FliM1/FliN3 complexes from the FliG ring when CheY-P binds to the C ring to switch the motor from the CCW to the CW state.
Bacteria sense temporal changes in extracellular stimuli via sensory signal transducers and move by rotating flagella towards a favorable environment for their survival. Each flagellum is a supramolecular motility machine consisting of a bi-directional rotary motor, a universal joint and a helical propeller. The signal transducers transmit environmental signals to the flagellar motor through a cytoplasmic chemotactic signaling pathway. The flagellar motor is composed of a rotor and multiple stator units, each of which acts as a transmembrane proton channel to conduct protons and exert force on the rotor. FliG, FliM and FliN form the C ring on the cytoplasmic face of the basal body MS ring made of the transmembrane protein FliF and act as the rotor. The C ring also serves as a switching device that enables the motor to spin in both counterclockwise (CCW) and clockwise (CW) directions. The phosphorylated form of the chemotactic signaling protein CheY binds to FliM and FliN to induce conformational changes of the C ring responsible for switching the direction of flagellar motor rotation from CCW to CW. In this mini-review, we will describe current understanding of the switching mechanism of the bacterial flagellar motor.
705
Existing environmental management approaches relevant to deep-sea mining
To date there has been no true commercial deep-sea mining, yet the sector already faces challenges in obtaining support and approval for developments. In some cases societal concerns have stopped or delayed planned seabed mining projects. The deep-sea environment, although vast, is poorly known and may be particularly sensitive to disturbance from anthropogenic activities. Perceptions about the likely environmental impacts of deep-sea mining have been based on this sensitivity and concern over previous impacts caused by allied industries, such as terrestrial mining and offshore oil and gas operations. The social and environmental effects of mining on land feature regularly in the media, and the reputational and financial risks of environmental damage at sea are enormous, as demonstrated by the US$55 billion cost of the 2010 Deepwater Horizon oil spill. Therefore, corporate responsibility is a key issue in sustaining a profitable business and for the DSM sector as a whole. This demand for social licence is coupled with the overarching legal requirements of the United Nations Convention on the Law of the Sea, which sets forth the environmental aim of ensuring effective protection from harmful effects of seabed mining, plus a legal obligation to avoid serious harm. While definitions for these key terms are still evolving, it will be imperative for the DSM industry to transparently demonstrate its commitment to environmental sustainability in order to obtain and keep its social licence to operate. It must comply with international legal requirements as well as national legislation, follow good-practice guidance, learn from the experience of allied industries and take all steps to minimise environmental impacts. To do this effectively, the industry needs to develop and maintain high standards of operations throughout the development cycle. Such management of processes is not straightforward and relies on a continuous cycle of developing, documenting, consulting, reviewing and refining activities. Increased environmental standards are often assumed to impose significant costs on industry, impacting productivity adversely. This view has been challenged by an alternative hypothesis that well-designed environmental regulations encourage innovation, potentially increasing productivity and producing greater profits. The benefits of establishing regulations and binding recommendations include: 1) increased efficiency in the use of resources, 2) greater corporate awareness, 3) lower risks that investments in environmental practices will be unprofitable, 4) greater innovation, and 5) a levelling of the playing field between operators. This hypothesis applies principally to productivity and market outputs, with other benefits to reputation and social licence. When these benefits are considered together, evidence-based studies suggest that improved environmental requirements bring positive outcomes for industry. Compelling examples of such positive outcomes in the offshore oil industry can be found in the management of routine safety and environmental activities. Reductions in safety incidents and environmental hazards and their consequences have been made through advances in operational management, including regular improvements made through an iterative cycle of planning, implementation, monitoring and review. Protocols for good practice in operations have been developed, tested and refined over time. Effective operations have been taken up by trade organisations and made into industry-wide standards.
Increasingly rigorous legal regimes and pressures from stakeholders have enforced changes. The DSM industry has the opportunity to learn from developments in safety and environmental management practices in other industries. DSM is still predominantly in the planning stage, offering a unique opportunity to implement good-practice approaches proactively from the outset. Although DSM will face some unique challenges, many of the key environmental management issues (environmental management planning, baseline assessment, monitoring and mitigation) have been considered and documented in detail already by allied industries. DSM has the potential to select and optimize recognised and documented good practices and adapt them. However, DSM is different from other industries. There is a particular lack of knowledge of the environments of industry interest, and very little information on the potential effects of mining activities. DSM is also unlike many other marine industries in having an international legal framework that prescribes the need to avoid serious harm. A major advantage in developing good practices for DSM is that there is one principal global regulator. Unlike most deep-water industries, it is likely that a significant amount of DSM will be carried out in areas beyond national jurisdiction. The Area and its mineral resources have been designated as the "Common Heritage of Mankind". Mining there is controlled by the International Seabed Authority, an international body composed of States party to the United Nations Convention on the Law of the Sea, which is charged with managing the Area and its resources on behalf of all mankind, as a kind of trustee on behalf of present and future generations. The legal status of the Area and its resources influences every aspect of the ISA regime, including the determination of an adequate balance between facilitating mining and protecting the marine environment. The concept of the common heritage of mankind promotes the uniform application of the highest standards for the protection of the marine environment and the safe development of activities in the Area. States encouraging DSM within their Exclusive Economic Zones must ensure that national rules and standards are "no less effective" than international rules and standards; thus approaches adopted by the ISA should be incorporated into national legislation and regulations. Here existing environmental management approaches relevant to the exploitation of deep-sea minerals are identified and detailed. Environmental management will be principally guided by ISA rules, regulations, procedures and guidelines. However, the legal landscape governing DSM has been widely discussed and is outside the scope of this review. Instead, this review focuses on the mechanisms that can be used to improve the management of DSM. These include good practices adopted by allied industry and professional organisations. Drivers for increasing sustainability are considered, followed by an assessment of management approaches that may reduce the environmental impact of operations. There are many reasons for improving environmental management beyond compliance with environmental regulation. All industrial activities involve a range of stakeholders that exert direct and indirect pressure on parties active in the industry; this review concentrates on drivers from those stakeholders that can exert direct legal or financial pressure on those involved in DSM activities. In the case of DSM in the Area, companies need a state sponsor.
The sponsor should exercise due diligence to ensure that the mining company complies with ISA rules, regulations, standards and procedures. However, there is no specific guidance on meeting this requirement and no examples exist of acceptable practice. All sponsoring states may need to enact and enforce new laws (for example, national legislation was enacted to enable Singapore to become a sponsoring state), and implement administrative procedures and resources to regulate their enterprises, or be held liable for damage to the marine environment. Many DSM operations will require external funding from large organisations, including international financial organisations and institutional investors. Increasingly, financial backing for companies or projects is dependent upon meeting key environmental criteria or performance standards. Rules and advice are given by the World Bank and the International Finance Corporation on criteria that should be used when considering projects for finance and the performance standards that must be achieved. Projects for the World Bank are assessed on whether they are likely to have significant adverse environmental impacts and whether the ecosystems they affect are sensitive or particularly diverse. If the project is unprecedented, such as in the case of DSM, consideration might be given to the degree to which potential environmental effects are poorly known. The Equator Principles have been adopted by approximately 70% of organisations providing project finance for any industry across 36 countries. This group of 81 Equator Principles Financial Institutions has agreed that for a company to receive investment or finance it must demonstrate that it meets eight Environmental and Social Performance Standards developed by the International Finance Corporation. The Performance Standards provide guidance on how to identify risks and impacts, and are designed to help avoid, mitigate, and manage risks and impacts as a way of doing business sustainably. Of key relevance is Performance Standard 6 on biodiversity conservation and sustainable management of living natural resources. Appropriate mitigation, following the mitigation hierarchy, is emphasised, particularly for avoiding biodiversity loss. These appraisals take into account the level of stakeholder engagement and participation in decision-making. Although the effect on DSM may be minor, there is evidence that an increasing number of individual investors are using environmental considerations to inform their investment decisions. These ethical investment funds invest in companies based on objective environmental performance criteria. As a result, an increasing percentage of the ownership of a public company may be concerned with corporate sustainability and the share price may be partially driven by environmental performance. While a mining company may only directly benefit from this as part of an initial public offering, managers are usually shareholders and benefit from a high share price. Furthermore, the market for eventual mineral products of DSM may be driven in part by social or environmental considerations. National and international policy has been augmented substantially by developments in international good practice guidance. A good example of such guidance was developed for Pacific Island States' Exclusive Economic Zones through a joint programme of work at the Secretariat of the Pacific Community (SPC), supported by funding from the European Commission. They have developed a Regional Legislative and Regulatory Framework, a
Regional Environmental Management Framework and Regional Scientific Research Guidelines for Deep-Sea Mineral Exploration and Exploitation. In assessing the impact of DSM activities and any associated activities, the SPC reports recommend an "ecosystem services" approach in all their guidance, recognizing that ecosystems provide a wider variety of services than just resources. For DSM in the Area, the ISA is considering issues of corporate social responsibility as part of its development of a framework for the exploitation of deep-sea minerals. This may become a particularly important issue owing to the participation of many developing nations in the ISA, several of which will have faced social and environmental issues from mining activities on land. A Voluntary Code for the Environmental Management of Marine Mining has been created through the International Marine Mining Society (IMMS), and the ISA has encouraged its contractors to apply the code. As the ISA notes: "The Code provides a framework and benchmarks for development and implementation of an environmental programme for a marine exploration or extraction site by marine mining companies and for stakeholders in Governments, non-governmental organizations and communities in evaluating actual and proposed applications of environmental programmes at marine mining sites." The Code also assists in meeting the marine mining industry's requirement for regulatory predictability and risk minimization and in facilitating financial and operational planning. The emerging exploitation regulations can be expected to cover many of the same elements as the Code, making them mandatory. The Code can also help to guide business practices within national waters until regulatory systems catch up. Companies adopting the IMMS Code commit themselves to a number of high-level management actions: to observe all laws and regulations, apply good practice and fit-for-purpose procedures, observe the Precautionary Approach, consult with stakeholders, facilitate community partnerships on environmental matters, maintain a quality review programme, and report transparently. The Code also contains guidance on responsible and sustainable development, company ethics, partnerships, environmental risk management, environmental rehabilitation, decommissioning, the collection, exchange and archiving of data, and the setting of performance targets, reporting procedures and compliance reviews. The IMMS Code foresees the need for companies to develop environmentally responsible ethics by showing management commitment, implementing environmental management systems, and providing time and resources to demonstrate environmental commitment by employees, contractors and suppliers of equipment, goods and services. Specific recommendations are made on reviewing, improving and updating environmental policies and standards, as well as communicating these at business and scientific meetings. Companies are encouraged to evaluate their environmental performance regularly using a team of qualified, externally-accredited environmental auditors. Deep-sea mining is planned to occur in areas that are generally poorly known, especially with regard to their ecology and sensitivities. This leads to great uncertainty in the estimation of impacts and hence in establishing management activities. Managers and regulators need ways to address and reduce this uncertainty. The first approach is to reduce uncertainty through baseline data collection, experimentation and monitoring of activities. This is important, but will
take a long time, particularly because of the difficulties of sampling in remote deep-sea environments, but also because effects must be measured over large timescales in order to capture the long response times in many deep-water systems. Area-based management tools (ABMTs) are a second important approach. By protecting a proportion of an area representative of the environment suitable for deep-sea mining, it is likely that many of its key attributes, such as structure, biodiversity and functioning, are also being protected, particularly if all available information is taken into account in a systematic approach. ABMTs are often set up at a broad scale in regional environmental management planning and at a finer scale in EMPs. Two other important approaches for dealing with uncertainty are applying the precautionary approach and adaptive management. The precautionary approach is widely adopted in a range of international policies. The precautionary approach is to be implemented when an activity raises threats of harm to human health or the environment, and calls for precautionary measures to be taken even if some cause and effect relationships are not fully established scientifically. It is a crucial tool to address the environmental protection challenges posed by deep seabed mining, both at a regulatory level and for management by the contractor. The precautionary approach is applicable to all decisions relevant to DSM, including assessments of the environmental risks and impacts, the effectiveness and proportionality of potential protective measures, as well as any potential counter-effects of these measures. Precautionary decision-making includes consideration of scientific knowledge and the identification and examination of uncertainties. The precautionary approach is valuable in many stages of both the preparation and evaluation of EIAs and EMPs. The RLRF and REMP developed by the SPC address the application of the Precautionary Approach by stressing the need to avoid the occurrence of irreversible damage. Seeking out alternatives to the proposed action as well as ongoing monitoring and research are also essential components of the precautionary approach. Where there is a possibility of an adverse effect, the provision of evidence that the nature or extent of this will be acceptable will rest with the operator. For environmental management in projects of high uncertainty, adaptive management has been suggested as a suitable approach. In DSM, uncertainty exists in a wide range of aspects, particularly the impacts of mining and their effects on the environment. This results in uncertainty about the efficacy of mitigation measures proposed in an EMP. Adaptive management is a form of structured decision-making that addresses this uncertainty by monitoring the effects of the management plan and assessing the results of the monitoring, with the intention to learn from the results and incorporate findings into revised models for management actions. The SPC considers the application of adaptive management in its RLRF and REMP; adaptive management techniques are recommended to allow some activities to proceed despite uncertainty, provided appropriate checks and risk-minimizing controls are in place. The application of adaptive management is complicated in the Area as a result of the vulnerability of most deep-sea environments to serious and irreversible impacts from commercial-scale DSM, combined with the requirement to avoid serious harm. Adaptive management could be applied both by the regulator, in the setting of
regulations, policies and guidelines, and by the contractor, in improving their environmental management activities throughout the project. While widely acknowledged as a useful management tool, it is not clear how adaptive management approaches will be incorporated by the ISA into regulations or implemented for DSM in the Area. However, adaptive management has been applied successfully by a regulator to manage chemosynthetic deep-sea communities associated with SMS deposits in national jurisdictions. Adaptive management should form part of the contractors' environmental management planning and, based on the results of careful monitoring, activities may be adjusted as information improves. Although DSM will likely occur in different geographic, ecological and geological settings, such as the Clarion-Clipperton Zone in the equatorial eastern Pacific, at mid-ocean ridge systems and at a few selected seamounts, there are many environmental issues that are common to DSM development in all of these areas that would benefit from harmonizing environmental management measures. For example, potential environmental risks may extend beyond the boundary of a single mining site, while others may result in cumulative impacts from multiple mine sites within a region and from interactions with other uses of marine space. Environmental risks may need to be considered at a broad scale, and environmental management procedures may need to be tailored to the resources and ecosystems under pressure and require coordination with other stakeholders and regulatory bodies. As a result, it is important to develop approaches for environmental management at a more strategic level, for example within a region. The broad scales of planned mining activities and potential impacts highlight the need to manage the marine environment across business sectors and at broader scales than any one activity. Management at scales greater than individual projects is usually termed strategic or regional management. The generally accepted processes for this are Regional Environmental Assessment (REA) and Strategic Environmental Assessment (SEA). Both SEA and REA are assessments and, as such, processes. The outcome of this process is typically twofold: a report that documents the process and a management plan that describes the implementation of the management approach. The ISA has already begun setting high-level strategies, which include protecting the marine environment and encouraging scientific research. However, its focus for detailed assessment appears to be at the regional level, and some elements of a regional environmental management plan already exist for the CCZ, focussed on area-based management. The ISA has also held workshops with a view to developing REMPs for the Mid-Atlantic Ridge and North Pacific Seamount areas. As a result, this paper focuses on regional environmental assessment, which refers to an evaluation of the wider regional context within which multiple and different activities are set. REA can be viewed as a subset of SEA. These processes are early management actions that allow biodiversity and other environmental considerations to be included in the development of new programmes. A REA for DSM might include an assessment of the probability, duration, frequency and reversibility of environmental impacts, the cumulative and transboundary impacts, the magnitude and spatial extent of the effects, the value and vulnerability of the areas likely to be affected, including those with protection status, and the extent of uncertainty in any
of the above. These approaches represent the need for a transparent, broad (or strategic) planning view. Such assessments and resulting documents are therefore ideally formulated at an early stage, but are ongoing and should be adapted with time. For example, REAs may include provisions for representative networks of systems of Marine Protected Areas before specific activities commence, and for adjustments in MPA provisions with time. This may already be challenging for DSM when contractor exploration areas are defined and exploration activities have begun. Regional or strategic assessments have guided a number of industries similar to DSM in how they operate, particularly as a result of the EU SEA Directive. SEA has been undertaken for the offshore oil and gas exploration and production sector for several years. Not all industries follow the SEA approach explicitly, but have adapted it to meet their particular needs, for example 'Zonal Environmental Appraisal' (ZEA) for the UK East Anglia Offshore Wind Farm development and REA for the UK Marine Aggregate Regional Environmental Assessments. Both ZEAs and REAs consider cumulative impacts; in the former case taking into account the effects of multiple wind turbine structures and in the latter case numerous and repeated dredging operations. In the case of dredging, the impacts of existing claim areas up for renewal are considered alongside applications to develop new areas. The ISA has begun strategic planning. It has adopted a regional environmental management plan in the CCZ in the equatorial Eastern Pacific Ocean. The CCZ EMP incorporates some of the aspects of an REA process for polymetallic nodule mining. The CCZ EMP was adopted in 2012 to set aside c. 1.5 million km2 of seabed, of a total of approximately 6 million km2, in order to protect the full range of habitats and biodiversity across the CCZ. The EMP adopts a holistic approach to the environmental management of the CCZ in its entirety, including, where appropriate, consideration of cumulative impacts, and incorporating EIAs of new and developing technologies. The CCZ EMP aims to 1) maintain regional biodiversity, ecosystem structure and ecosystem function across the CCZ, 2) manage the CCZ consistent with the principles of integrated ecosystem-based management and 3) enable the preservation of representative and unique marine ecosystems. For this purpose, the CCZ EMP establishes, on a provisional basis, an initial set of nine "Areas of Particular Environmental Interest" (APEIs) as no-mining areas based on expert recommendations, a set which has since been recommended for expansion. The CCZ EMP does not include any APEIs within the central section, which has the highest nodule concentrations and greatest mining interest, primarily because exploration contracts had been issued prior to the APEIs being established. The CCZ EMP has left some flexibility, as the boundaries may be modified based on improved scientific information about the location of mining activity, measurements of actual impacts from mining operations, and more biological data, if equivalent protection can be achieved. The EMP should be subject to periodic external review by the ISA LTC at least every five years. In 2013, the United Nations General Assembly invited the LTC to prioritize the development of EMPs for other regions of mining interest, and development of further regional environmental management plans is now a priority for the ISA. This will build on the ISA's experience with the establishment of the environmental management plan for the CCZ. Environmental
management at a project level involves detailed management of a clearly defined project location and activities within known environmental conditions, with the aim of minimizing impacts according to strategic environmental objectives.Most industries have accepted processes for the incorporation of environmental management into the planning and execution of projects, with defined project phases and associated deliverables, and roles and responsibilities for involved parties ; such a process has been suggested as part of the IMMS Code and detailed for DSM .Project-specific environmental assessments, an important component of management, are common for most major developments; internationally-approved approaches involve environmental impact and risk assessment to identify, avoid, mitigate and, potentially compensate for environmental impacts .Environmental impact assessment is a key aspect of the planning and environmental management of a project .EIA is a process that is documented in a report.EIA aims to describe the major impacts of an activity on the environment in terms of its nature, extent, intensity and persistence ; a plan can be developed to mitigate the impacts using this assessment, and an overall decision can be made as to whether the project should take place and what conditions should be observed if it does.EIA addresses the sensitivity and/or vulnerability of all habitats and species that may be affected and the ability of those habitats to recover from harm, including cumulative effects.Cumulative effects may occur from a number of repeated impacts, the sum of different impacts, and/or the combined effects of human impacts and natural events.Environmental assessments should include characteristics of the ecosystems that may warrant extra protection .The ISA draft exploitation regulations require a site-specific EIA to be completed and an environmental management plan for DSM to be developed prior to the commencement of mining operations .A draft template for environmental impact statements for exploration has also been developed by the ISA .An ideal EIA process has recently been detailed for DSM .EIA should be a transparent process that involves independent experts and encourages public participation .EIA is typically divided into stages, which are directly applicable to DSM .Screening is the process by which a project is assessed to determine whether or not the production of a statutory EIA Report is required .It is expected that most DSM activities will require an EIA .The scoping phase should determine the content or scope, extent of the issues to be covered, the level of detail required in the EIA and identify actions to be taken to compile the required information .Scoping is an important part of the EIA process in most jurisdictions and formal scoping opinions are important in clarifying the focus and direction of the EIA process .Scoping studies may include a project description, project location with mapping, a list of receptors expected to be affected at each stage and by each activity, the identification of potential environmental impacts and information on how assessment will be carried out, data availability and gaps, as well as suitable survey, research and assessment methodologies .Scoping studies are also required to consider transboundary effects .EIAs generally include an environmental baseline against which the effects of the project can be assessed .The baseline study describes the physical, chemical, biological, geological and human-related environmental 
conditions that will prevail in the absence of the project, together with interactions between elements of them.Typically, the baseline study will identify the pre-project conditions, and highlight habitats and species that may be vulnerable to the impacts of the planned project.The study will describe and quantify environmental characteristics and may provide predictive modelling of some aspects to inform judgements about the quality, importance, and sensitivity of environmental variables to the impacts identified during the scoping process.Although it has been challenging to implement , the European Marine Strategy Framework Directive uses the concept of good environmental status, with multiple descriptors to define the baseline and thresholds for significant effects.All DSM projects are expected to acquire new baseline data specific to the project prior to test operations and full-scale mining .The baseline study will form the basis for subsequent monitoring of environmental impact during mining.The ISA has issued guidance to contractors on the elements required in an environmental baseline study covering all three main mineral resource types: polymetallic nodules, sulphides and cobalt-rich crusts.To ensure a degree of standardization and quality, the guidance on baseline study elements includes the definition of biological, chemical, geological and physical measurements to be made, the methods and procedures to be followed, and location of measurement such as the sea-surface, in mid-water and on the seabed.Scientists have made further suggestions on parameters to include .These data are required to document the natural conditions that exist prior to mining activities, to determine natural processes and their rates, and to make accurate environmental impact predictions.Baseline survey for DSM may have some specific characteristics that differentiate it from other industries .There is very little knowledge of potential effects of large-scale mining activities and the ecology of the areas likely to be impacted by mining is likewise poorly known .As a result, baseline surveys will necessarily have to target a wider range of investigations.Building the knowledge-base of how ecosystems respond to mining disturbance is also critical and measures of initial impacts, ecosystem effects and the rate of recovery of faunal communities and ecosystem function will be important.Residual uncertainty will be high, at least in the EIA phase, and statistical and probability analyses will be important to assess the likelihood of occurrence of a particular outcome .A comparison of the mining site and reference areas to wider knowledge of biological communities in the region should be made.Area based or spatial management options are likely to be an important component of managing residual impacts .The guiding principle for environmental management is to prevent or mitigate adverse impacts on the environment ."The tiered “Mitigation Hierarchy” is becoming an accepted tool for operationalizing this principle and is integral to the International Finance Corporation's Performance Standards .The first two tiers of the hierarchy, avoidance and minimisation, prevent the impacts from occurring and thus deserve particular emphasis.Indeed, these principles are referred to throughout guidance for DSM.The last tiers of the hierarchy, restoration and offsetting, are remediative, as they seek to repair and compensate for unavoidable damage to biodiversity.These stages have been little explored in the case of DSM and are 
expected to be costly and have uncertain outcomes .An EIA Report brings together all the information generated from environmental baseline studies, the planned industrial activities, the EIA, and proposals for mitigation of impacts.The details of the planned industrial activities should include a description of the proposed development, its objectives and potential benefits, compliance with legislation, regulation and guidelines, stakeholder consultations and closure plans .The EIA Report contains a set of commitments to avoid, and to minimise or reduce the environmental impacts of a project to an acceptable level.While an EIA Report is generally specific to one project it may have to take into account other activities, environmental planning provisions and business sectors in the region and the possible cumulative impacts of the proposed activity with these other operations.It may also have to take into account effects of any reasonably foreseeable future impacts.Guidance for the preparation of EIA reports for DSM in the exploration phase has been provided by the ISA and further elaborations are to be expected as part of the exploitation regulations and associated documents.An initial guide on EIA for prospective developers planning mineral exploitation activities has now been refined by guidelines for EIAs relating to offshore mining and drilling in New Zealand waters .These guides highlighted some concerns specific to DSM, in particularly the high levels of uncertainty associated with DSM.Sources of uncertainty, such as uncertainties in environmental conditions, mining plans, impacts of activities or efficacy of mitigation actions, should be identified and mitigation should be precautionary.Uncertainty may be addressed in part with the use of predictive models, which should be described, validated, reviewed and tested against other models as was done in some existing EIAs for DSM .Every plan of work for marine minerals must include a plan for management and monitoring, the EMP.The aim of the EMP is to ensure that harmful effects are minimized, no serious harm is caused to the marine environment and the more specific requirements of ISA rules, regulations and standards as well as the environmental goals of the actions planned in the EIA are achieved.The EIA Report should contain at least a provisional EMP or a framework for one .Both the EIA Report and the final EMP are generally required to obtain regulatory approval to begin and continue operations; the ISA has provided some instructions for the content of an EMP for DSM .An EMP is a project-specific plan developed to ensure that all necessary measures are identified and implemented in order to ensure effective protection of the marine environment, monitor the impacts of a project and to comply with ISA environmental rules, regulations and procedures as well as relevant national legislation .Such plans should clearly detail how environmental management and monitoring activities will be accomplished through the elaboration of specific objectives, components and activities, inputs and outputs .The EMP must include monitoring before, during and after testing and commercial use of collecting systems and equipment.This will require the development of relevant indicators, thresholds and responses in order to trigger timely action to prevent serious harm.Monitoring will demonstrate whether the predictions made in the EIA are broadly correct, show that mitigation is working as planned, address any uncertainties, demonstrate compliance with the 
approval conditions, allow the early identification of unexpected or unforeseen effects, and supports the principle of ‘adaptive management’."A clear budget and schedule for implementation is also required, with identification of the agencies responsible for financing, supervision and implementation, and other relevant stakeholders' interests, roles and responsibilities .The monitoring plan should allow for impacts to be evaluated and compared with the scale of variation expected from natural change, which should be assessed in the baseline study .Within site management and monitoring plans provide the opportunity for specifying more local area-based management approaches.For example, it looks likely that exploitation monitoring will require establishment of impact reference zones and preservation reference zones in keeping with the ISA exploration regulations .Dedicated protected areas within a claim area, either based on criteria of representativity or importance, may help meet management objectives by mitigating impacts, at least at the scale of the claim area.Environmental management plans also offer the opportunity for even finer-scale mitigation options, such as leaving protected recolonisation networks or including technological approaches to reducing the impact.Nautilus Minerals Inc. have engaged in advance planning for SMS mining in the Exclusive Economic Zone of Papua New Guinea at the ‘Solwara 1’ site .The approach taken by Nautilus Minerals is similar to that outlined here for other related industries.Nautilus Minerals collected environmental data to inform the EIA and improve management.Their environmental plan allows for mitigation strategies to assist the recovery of benthic ecosystems, although it is not clear if these strategies will be carried out.Mitigation strategies include the preservation of similar communities, in terms of species, abundance, biomass, diversity and community structure, at a locality within 2 km upstream to allow monitored natural recolonisation of the mined area.They also include potential active restoration through the translocation of faunal groups from areas about to be mined to those areas where mining is complete .A monitoring plan is to be submitted by Nautilus to PNG as part of an EMP before mining begins .They will monitor and report on compliance with regulatory permits and licenses, including the validation of predicted impacts, the documentation of any unanticipated events and the introduction of additional management measures.Such a project is inevitably controversial , but has received authorisation to proceed from the PNG government.Environmental impact assessment has been carried out for other mining-related projects."Some details of the EIS are available for a SMS project in either Okinawa Trough or Izu-Bonin Arc in Japan's national waters .This work focusses on the environmental baseline data for the sites.There have also been two recent EIS produced for a nodule collector test in two claim areas of the Clarion-Clipperton Zone.These provide detail on small-scale tests in the German Federal Institute for Geosciences and Natural Resources and Belgian Global Sea Mineral Resources NV claims as part of the Joint Programming Initiative-Oceans science and industry project MiningImpact .The responses to these documents is as yet unknown.A key characteristic of a modern sustainable business is a clear focus on sustainability in the corporate strategy.To achieve this focus, the senior management team of an organisation must include 
environmental considerations in all aspects of the business and create policies that embody broad sustainability principles.Clear management responsibilities and commitment at the highest level are vital to integrate environmentally responsible and sustainable management practices into all operations within a company, from exploration, through design and construction to operations.Staff dedicated to environmental responsibilities report directly to senior management , and environmental goals are embedded in the job descriptions of all managers."As recommended by the IMMS code , a senior executive environmental manager should be appointed to monitor the company's marine mining activities, products or services, as well as monitoring internal environmental performance targets and communicating these to employees and sub-contractors.Both internal initiatives and external advice can be used for development, implementation and refinement of sustainability strategies actions and indicators.An environmental management structure that formalises reporting is used in industries similar to DSM to improve sustainability across operations .This is particularly critical as companies become larger and environmental initiatives need to be maintained across multiple projects or divisions.Corporate transparency is important in improving sustainability, both within and outside the company particularly for DSM .An increase in anticipated or real scrutiny provides the business case for sustainability and enhances innovation.This is vital for public companies that are obliged to report to investors and disclose material aspects.Integrated reporting is becoming more common, in which sustainability metrics are included in annual financial reports.The International Integrated Reporting Framework sets out guidelines for this.Reports and performance metrics should encourage sustainability and efforts should be made to quantify and monitor environmental impacts .Reporting initiatives such as the Global Reporting Initiative , the Sustainability Accounting Standards Board and the Shared Value Initiative should be encouraged.A long-term focus is also important for sustainability and reporting and metrics that focus on the short term should be avoided, for example quarterly profit reports .It is recommended that during periodic review key areas for improvement and specific actions should be identified and defined to increase sustainability.This may be done through function or issue-related policies, which are disseminated internally and externally.Sustainability policies should be regularly reviewed and updated .Larger companies may adopt an operational management system, which is a framework aimed at helping it to manage risks in its operating activities."The OMS brings together a company's needs and internal standards on a range of matters such as health and safety, security, environment, social responsibility and operational reliability.OMS are commonplace in the oil and gas industry, where there are established guidelines for the creation and improvement of OMS .Environmental Management System are thought to have an important role in improving overall corporate environmental performance , particularly if clearly linked to environmental management planning .EMS is a formal and standardised approach to integrate procedures and processes for the training of personnel, monitoring, summarizing, and reporting of specialized environmental performance information to internal and external stakeholders of the company .In other 
industries EMS is often a component of an overarching Health, Safety and Environmental management system that governs all of its activities .Aspects of an EMS are encouraged by the IMMS Code and implemented by companies involved in DSM , but no detailed EMSs have yet been presented for DSM.Evidence suggests that having a formalized and certified EMS in place increases the impact of environmental activities on corporate performance, more so than informal and uncertified systems .Several important areas for development of protocols and standards have been identified in this review.These represent current gaps that key stakeholders for deep-sea mining could consider targeting as a priority.These have been generally grouped into approaches for environmental management, environmental assessment and mitigation.Environmental management standards and guidelines for deep-sea mining are in their infancy.Some progress has been made for EIA and the contents of EIS, but further detail is required, particularly as deep-sea mining assessments have already begun.REA is likely an important process for broad-scale management and has already started for the CCZ.Unifying the approach for REA across regions and optimising the development of REMPs will improve management and provide further guidance for EIA.Operational decision making, particularly by the ISA, is currently untested as no developments have started but will become necessary once exploitation is closer.It is not clear what the process for this will be but clear approaches, timeliness and consistency may be important.Efficient management also requires access to quality information and data and is improved by transparency.Further to this, companies may want to develop improved approaches for their internal management of DSM projects, such as EMS.Effective environmental management needs good information, particularly to predict and assess mining-related impacts.In the deep-sea much of this information is currently unknown.However, the scientific tools and expertise are available, in the majority, to collect appropriate information.Optimising data collection during baseline assessment and monitoring is important to ensure cost-effective yet robust assessment of impacts.This optimisation requires improvements in survey approaches and sampling designs, using the latest data collection and analysis tools .Quantitative prediction approaches, including modelling, are likely to be important.This prediction and effective monitoring will rely on the establishment of robust specific environmental indicators, determining what represents good environmental status and establishing appropriate thresholds for impact.Clear guidance for EMP would help ensure impacts can be detected if they occur and facilitate broad-scale data analysis by making datasets more comparable between projects.Approaches for estimating cumulative impacts also need to be developed.Effective management relies on appropriate mitigation approaches.The general approaches for mitigation, as outlined in the mitigation hierarchy, are well known.Developing specific approaches for reducing the potential negative impacts of deep-sea mining on the environment is a priority as potential mitigation actions are untested and may not correspond with those appropriate for other environments .It is clear that there is a pressing need for environmental management of the DSM industry.There is already much international and national legislation in place that stipulates key environmental management principles and 
requirements.There is also substantial pressure from both direct and indirect stakeholders for procedures to be put in place that reduce the magnitude and likelihood of environmental risks.In many cases the regulator for DSM activities is clearly identified.The ISA and many national regulators have implemented some environmental procedures, which are being further developed and updated regularly.There is a well-developed set of tools for reducing industrial environmental impacts that can be applied to DSM.In some cases these have been tested, for example the Solwara 1 development has already undertaken an EIA.In other cases it is not clear how some tools, for example strategic environmental assessment, will be implemented in the case of DSM.Currently the DSM industry is small and facing much international scrutiny.As a result, environmental impacts and the sustainability of the industry will be high on the corporate agenda.As the industry develops and becomes larger, potentially with companies managing multiple projects across the world, environmental management may become more difficult and critical.Incorporating lessons from the offshore oil and gas industry in creating systems for both organizational and environmental management of DSM will help reduce environmental impacts and risks.It is important to act now in developing and reviewing the guidance for this fledgling industry because standards and protocols set at the outset quickly become precedents.Lessons learned from other marine policy and industries can be applied to DSM, while considering the higher level environmental obligations of UNCLOS.This can result in clear, robust and precautionary protocols and standards to guide the DSM industry as it develops.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Deep-sea mining (DSM) may become a significant stressor on the marine environment. The DSM industry should transparently demonstrate its commitment to preventing serious harm to the environment by complying with legal requirements, using environmental good practice, and minimizing environmental impacts. Here, existing environmental management approaches relevant to DSM that can be used to improve performance are identified and detailed. DSM is still predominantly in the planning stage and will face some unique challenges, but there is considerable environmental management experience in existing related industries. International good practice has been suggested for DSM by bodies such as the Pacific Community and the International Marine Minerals Society. The inherent uncertainty in DSM presents challenges, but it can be addressed by the collection of environmental information, area-based/spatial management, the precautionary approach and adaptive management. Tools exist for regional and strategic management, which have already begun to be introduced by the International Seabed Authority, for example in the Clarion-Clipperton Zone. Project-specific environmental management, through environmental impact assessment, baseline assessment, monitoring, mitigation and environmental management planning, will be critical to identify and reduce potential impacts. In addition, extractive companies’ internal management may be optimised to improve performance by emphasising sustainability at a high level in the company, improving transparency and reporting, and introducing environmental management systems. The DSM industry and its regulators have the potential to select and optimise recognised and documented effective practices and adapt them, greatly improving the environmental performance of this new industry.
706
Topological attributes of network resilience: A study in water distribution systems
Water distribution systems are critical infrastructures for the safe and secure provision of drinking water and are vital to our society and economics.They span over long distances and are subject to various threats, yet they are buried underground, which hinders timely diagnosis, maintenance and repair.WDS failures are reported to occur regularly and the associated economic loss was estimated to be $93-$198/capita/day in the U.S.A.Under climate change and urbanisation, WDSs are becoming increasingly vulnerable to unusual and unforeseeable yet often unavoidable failures.As such, conventional risk management that deals with known and quantifiable hazards and aims for failure prevention is no longer sufficient.Resilience management is an emerging and complementary concept of anticipating failures and minimising their impacts, and a body of research has explored its definition and measurement.Despite the differences in detailed definitions, resilience is widely interpreted as the capacity of a system to resist, absorb and withstand, and rapidly recover from exceptional conditions.As the level of resilience only manifests under disturbances, stress-strain tests have been used for its assessment by evaluating the performance of the system when it is subject to varying levels of perturbation.In particular, methods that simulate stress by degree of system malfunction are a pragmatic and valuable approach, since it is daunting to enumerate and represent all threats, but the failure modes of infrastructure systems are more easily identifiable.For example, pipe breaks are simulated rather than their causal threat.The idea of introducing stress to test the performance of WDSs has been widely applied for the evaluation of reliability and in recent years for resilience assessment.However, the exerted stress is often within a limited, low impact range and/or is generated based on the probability of failure of components, and the resulting system performance is usually assessed in a simplistic manner by single performance metrics.Global Resilience Analysis, first proposed for resilience analysis of sewer systems, is a form of stress-strain test especially suited for network analysis that incorporates random sampling and considers a wide range of failure scenarios including extreme levels of system malfunction.Diao et al. 
applied GRA to assess resilience of WDSs in a detailed manner and performance metrics such as time to strain, failure duration and failure magnitude have been used.However, the rate of change in system performance, which is a key aspect of resilience, was not considered and needs to be explicitly measured to understand system absorption and recovery capacities.For a deep understanding and building of resilience in WDSs, it is essential to explore its underlying mechanisms, i.e., the link between system performance and inherent properties such as topological characteristics.Graph theory is a valuable mathematical tool for the study of network topology, by which a WDS is abstracted as a graph of nodes and links and details of the physical system and processes are omitted.The graph can be drawn differently as long as the pairwise relations between nodes/links are unchanged.It has been widely applied in the study of social, biological, business, communication and transportation networks, where topological attributes such as network connectivity, centrality and diversity in nodal degree are shown to have strong impacts on system resilience.However, resilience has generally been evaluated in a simplistic manner by single performance indicators.Moreover, most of the investigated networks are non-spatial networks where links between nodes represent relations rather than physical connections, thus resulting in different topological features compared to water distribution networks.As such, it is questionable if the aforementioned topological attributes are critical for resilient WDSs.There have been limited topological studies on WDSs and graph theory has mainly been applied to establish simplified system models for pressure/water quality management, reliability assessment, and vulnerability analysis.Statistical topological metrics for complex networks in graph theory have been used in previous research as surrogate indicators of resilience of WDSs.However, no comprehensive studies have been reported on the interplay between resilience and various topological attributes of WDSs and the appropriateness of the topological metrics for resilience assessment is unknown.These knowledge gaps need to be addressed so that effective measures can be developed to build resilience into practice.The aim of this paper is to propose a comprehensive analysis framework for examining the resilience pattern of WDSs against topological characteristics, i.e. 
the correlations between resilience and topological features.It is based on a detailed and systematic assessment of resilience and topological attributes, selection of representative WDSs, and generation of network variants to find out the implications for system design.Though the framework is employed for and illustrated by the study of WDSs in this paper, it should be readily applicable to other network systems.The proposed framework consists of three modules for assessing resilience and topological attributes and their correlations.A detailed description is presented as follows.As resilience addresses dynamic system performance under stress rather than solely the occurrence of failure, strain is represented by the following metrics to explicitly measure system resistance, absorption and restoration capacities respectively: time to strain, failure magnitude and failure rate, and recovery rate.In addition, failure duration and severity, despite being lumped indicators, are assessed due to their wide application in literature.Definitions of the six metrics are provided below and illustrated in Fig. 1.Time to strain: Time between the application of stress and the start of service failure, i.e. when the level of service at a node drops below a predefined performance threshold, QLOS.Failure duration: Time taken from the occurrence of service failure triggered by stress to recovery to normal performance.Failure magnitude: The most severe drop in system service at a node following the application of stress.Failure rate: This measures the rate of degradation of system service under stress, and is calculated as failure magnitude divided by the time between the start of failure and occurrence of the worst system performance.Recovery rate: This shows the speed of system bouncing back to normal performance after stress, and is calculated as failure magnitude divided by the time between the worst performance and return to QLOS.Severity: The area between the quality of performance response curve and the QLOS line during the period of service failure, as illustrated in Fig. 1.Each strain measure defined above is computed for every node, the value of which is zero if no failure occurs at a node, and then summed or averaged to derive the metric value for the whole network.Severity is obtained by summing severity values at each node across the network, while the other metrics are calculated by taking mean values.As the recovery rate of nodes at the ‘un-failed node’ cannot be assigned a number, results for the network are based on average strain values of failed nodes only.It is also difficult to assign values of time to strain to un-failed nodes as it should be infinite in theory.However, it is assigned as zero in this study to measure the spatial scale of failure as time to strain is found to be very similar at the failed nodes in the investigated WDSs.For comparison of results from different networks, normalisation of metric values is made if they are affected by the scale of the WDS.Each attribute is assessed using one or more representative metrics: link density, algebraic connectivity and clustering coefficient for connectivity, average path length for efficiency, central point dominance for centrality, heterogeneity for diversity, spectral gap for robustness, and modularity indicator for modularity.Detailed definitions of the metrics are provided in Table S1.Some widely used metrics are not employed due to lack of suitability in WDSs, e.g. 
the common tree-like branches make indicators like ‘density of bridges’ not as useful and informative as in many other networks; also, a preliminary analysis is made to examine the correlations between metrics describing the same attribute based on the studied networks so that only one metric is used if two or more metrics are strongly correlated.Details on the selection of metrics are provided in Section S2.Despite the intuitive appeal of connectivity, it encompasses multiple aspects of system topology such as to what extent and how nodes are connected.As such, three metrics are chosen to describe the different facets of network connectivity.As defined in Table S1, link density measures how close the number of links in a network is to the maximum possible number for a given number of nodes.Algebraic connectivity is strongly correlated with link density but is still used here as it is an indicator of structural robustness and failure tolerance against efforts to cut a network into isolated parts; also it can distinguish differences in the connectivity of networks with the same number of nodes and links but different ways of connecting the nodes, where link density would be the same.The clustering coefficient measures network redundancy by the degree to which neighbours form a clique.More complex mathematical models can be used to represent WDS in more detail – for example, links in a graph can be directed to describe flow directions in pipes and the links and nodes can be weighted to represent characteristics of pipes and nodes.Undirected and unweighted graphs are sufficient for the calculation of the eight statistical metrics selected for this study; however inclusion of directions and weights can be very useful in system design as discussed in Section 4.2.1) Networks of various sizes and topologies: Benchmark networks based on existing WDSs are used for the correlation analysis.However, to increase the data set and ensure a more comprehensive analysis, many realistic, virtual networks have also been developed.These are example networks produced by the HydroGen model, which generates WDSs automatically according to a pre-defined algorithm with user defined settings for network size and characteristics.The virtual networks are generated independently and vary in size, distribution of customers/water demands and layout representing the diverse nature of real WDSs.Two example virtual networks are shown in Fig. 1.2) Network variants by pipe addition: A WDS generation model is developed to produce different designs of pipe addition to the same WDS.By performing a similar correlation analysis between resilience and topology based on the network variants, insights can be gained on the appropriateness of using topological attribute metrics as surrogate resilience indicators in guiding the rehabilitation/extension of an existing WDS.As shown in Fig. 1, Networks D-1 and D-2 are variants of Network D with two added pipes."To generate a network variant by the model, two nodes are randomly chosen in the network and a pipe is added if it does not intersect with other pipes in the network; this procedure is repeated until the given number of pipes to be added is reached. 
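The variant-generation procedure and the statistical topological metrics described above lend themselves to a short computational illustration. The sketch below, written in Python with networkx and NumPy (SciPy is needed for the spectral routines) and assuming an undirected, unweighted, connected graph with known node coordinates, shows one plausible way to add random non-crossing pipes to a layout and to evaluate a subset of the attribute metrics (link density, algebraic connectivity, clustering coefficient, average path length, central point dominance, spectral gap and modularity). It is an illustrative approximation rather than the WDS generation model or metric code used in the study; the function names, the rejection-sampling loop and the simple segment-intersection test are assumptions introduced here.

```python
import random

import networkx as nx
import numpy as np
from networkx.algorithms import community


def _segments_cross(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2 (a shared node is allowed)."""
    if {p1, p2} & {q1, q2}:
        return False

    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)

    return (orient(p1, p2, q1) != orient(p1, p2, q2)
            and orient(q1, q2, p1) != orient(q1, q2, p2))


def add_random_pipes(G, pos, n_new, seed=None, max_tries=10_000):
    """Return a variant of G with n_new extra non-crossing edges (pipes).

    G is the original network layout as an undirected graph; pos maps each node
    to (x, y) coordinates.  Candidate node pairs are drawn at random and rejected
    if the new pipe already exists or would intersect an existing pipe.
    """
    rng = random.Random(seed)
    H = G.copy()
    nodes = list(H.nodes)
    added = tries = 0
    while added < n_new and tries < max_tries:
        tries += 1
        u, v = rng.sample(nodes, 2)
        if H.has_edge(u, v):
            continue
        if any(_segments_cross(pos[u], pos[v], pos[a], pos[b]) for a, b in H.edges):
            continue
        H.add_edge(u, v)
        added += 1
    return H


def topology_metrics(G):
    """A subset of the statistical topological attribute metrics (connected graph assumed)."""
    n, m = G.number_of_nodes(), G.number_of_edges()
    bc = nx.betweenness_centrality(G)                      # normalised betweenness
    b_max = max(bc.values())
    comms = community.greedy_modularity_communities(G)     # community detection for modularity
    eig = np.linalg.eigvalsh(nx.to_numpy_array(G))         # adjacency eigenvalues, ascending
    return {
        "link_density": 2.0 * m / (n * (n - 1)),                     # connectivity
        "algebraic_connectivity": nx.algebraic_connectivity(G),      # resistance to being cut apart
        "clustering_coefficient": nx.average_clustering(G),          # local redundancy
        "average_path_length": nx.average_shortest_path_length(G),   # efficiency
        "central_point_dominance": sum(b_max - b for b in bc.values()) / (n - 1),  # centrality
        "spectral_gap": float(eig[-1] - eig[-2]),                    # robustness indicator
        "modularity": community.modularity(G, comms),                # modularity
    }


# Hypothetical usage: generate a Net3-like variant with 5 added pipes and compare metrics.
# G0, xy = load_network("net3")          # placeholder loader returning graph and coordinates
# G5 = add_random_pipes(G0, xy, n_new=5, seed=1)
# print(topology_metrics(G0), topology_metrics(G5))
```

In practice each added pipe would also need hydraulic attributes (diameter, roughness) before resilience could be re-evaluated in a hydraulic solver, and any site-specific constraints on where pipes may be laid could be added as further rejection criteria in the sampling loop, as noted above.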
"Further constraints can be made on the addition of pipes for a specific network if sufficient information is available on the case study area, such as where it is or isn't convenient to lay pipes due to the land use and compatibility/interference with existing infrastructures.Five benchmark WDSs and 80 virtual WDSs are used for the correlation analysis.The virtual networks are of various sizes covering that of the benchmark WDSs, as shown in Table S2.Their topological attribute metrics are calculated and presented in Fig. 2, along with the layouts of four benchmark WDSs.It can be seen from Fig. 2 that the topological attributes of the benchmarks largely fall within the value bounds of the virtual networks, with only three WDSs having larger/smaller values on single attribute metrics.This suggests that the topological features of the virtual networks are realistic, hence they are suitable for this study.The benchmark network Net3 is used to generate network variants due to its relatively small size.For validation purposes, two sets of 85 WDSs with different topologies are developed by randomly adding 5 pipes and 10 pipes to Net3.No further restrictions are applied with regards to where the pipes can be added as no ground information is available on Net3.The diameter and roughness coefficient of all added pipes are assumed to be 100 inches and 130 respectively.For valid comparison between WDSs, results of failure magnitude, failure rate and recovery rate for each WDS are normalised with respect to the average nodal demand and severity results are normalised with respect to the total water demand in the network.Link density, algebraic connectivity, average path length and modularity are found to be strongly influenced by network size, with linear correlation coefficients being −0.91, −0.59, 0.78 and 0.90, respectively.As the sizes of the investigated WDSs cluster around 100, 200, 200 and 400 nodes, the investigated WDSs are divided into four groups according to the network size.Analysis of the correlations between the topological attribute metrics and resilience metrics is performed for each group separately and the mean correlation coefficient values are used to better understand the relationships.The related correlation coefficient values presented in the results section, therefore, are mean correlation coefficients for the four network groups.As mentioned earlier, recovery rate is excluded from the correlation analysis.Nevertheless, it is found to be strongly correlated with failure magnitude when recalculating failure magnitude of all WDSs in the same manner as recovery rate.This may be because in the investigated failure scenarios, recovery largely happens when the pipe malfunction is repaired which is rapid among all WDSs determined by the hydraulic processes in pipes; hence failure magnitude is the dominating factor influencing recovery rate of WDSs.The findings can be different if longer system malfunction is simulated and need further studies.The correlation results between all topological attribute metrics and five resilience metrics are presented in colour maps in Fig. 
3 for a clear comparison of their relative significance.The colour of a grid cell shows the strength of correlation between the topological attribute metric displayed on the left of the corresponding row and the resilience metric shown at the bottom of the corresponding column.Positive correlation is shown in red and negative correlation in blue, and the darker the colour the stronger the correlation.Those with strong correlations are marked with ‘s’ in the corresponding cell.Results based on the mean, minimum and maximum stress-strain curves are presented separately in Fig. 3a to c respectively.It can be seen that the correlations based on the maximum stress-strain curves are slightly weaker than the other two.This may be because WDSs can fail with the worst possible results even at very low stress levels due to the lack of connectivity of reservoirs to the main body of the network in most cases, hence providing weaker differentiation between systems with different topological features.Nevertheless, the resilience patterns presented in Fig. 3a to c are very similar; therefore, the following sections of this work focus only on results obtained from the mean stress-strain curves for system resilience.As shown in Fig. 3a, link density and modularity show strong correlations with two metrics of system resilience, i.e. time to strain and failure duration.This demonstrates that the more connected and less modular a WDS is, the lower the failure impacts in terms of the spatial and temporal scales of affected nodes.The pattern observed between connectivity and resilience here is different from many other types of networks – for example, higher connectivity is found to boost the spread of diseases in social networks.This may be because the failures tested in WDSs here are passive results of system malfunction, which tend to sprawl to more surrounding nodes if fewer alternative pathways exist for re-routing water supply.Although modularity is found to be a positive attribute in some systems such as biological or technological networks, it has different implications for WDSs.The investigated WDSs are found to be most easily segmented into 10 to 20 communities, which is much more than the number of reservoirs/water tanks available in the networks.Given the essential role of water sources for each community to function, higher modularity indicates higher vulnerability of communities to disconnection of water supply.Besides the four pairs of strong correlations as discussed, a few other correlations are close to being strong.Also, it is found that the gradient of the stress-strain curves approaches zero under high stress magnitudes.Hence, the differences in resilience of various WDSs become smaller if the area under the entire stress-strain curve is used for resilience assessment which may mask the interplay between resilience and network topology.As such, a sensitivity analysis is conducted to re-assess the correlations based on the response to a restricted maximum stress magnitude, i.e. using only part of the areas under the stress-strain curves for the correlation analysis with topological attribute metrics.This produces 100 sets of correlation coefficient values between each pair of resilience and topological attribute metrics, which are summarised in boxplots in Fig. 
4.The minimum, maximum, 25th percentile, 75th percentile and median values are shown by the left and right whiskers, left and right bounds of the box, and the line within the box respectively.The results corresponding to a maximum stress magnitude of 100% are identified in Fig. 4 with red diamonds for comparison.Fig. 4 shows that the correlations between time to strain/failure duration and algebraic connectivity/average path length/central point dominance become strong when certain values of maximum stress levels are used in the derivation of resilience values.Details on the range of maximum stress levels where strong correlations are observed are presented in Table S3.Network efficiency is greatly affected by connectivity and the existence of long range connections, which highlights the importance of connectivity between water sources and user nodes and among the nodes themselves.Though scale-free networks have very high centrality and are found to be robust to random failures, the positive correlations between central point dominance and time to strain/failure duration suggest that higher centrality tends to decrease the resilience of lattice networks like WDSs.The clustering coefficient, despite being a measure of network connectivity, is still only weakly correlated with resilience metrics in the sensitivity analysis.This may be because triangular loops, which are a key element in calculating the clustering coefficient, are not common in grid-like WDSs.Compared to time to strain and failure duration, the influence of network topology on other resilience metrics is much weaker.As can be seen in Fig. 4, failure magnitude is strongly correlated with connectivity, efficiency and modularity under a limited range of maximum stress magnitude.This shows that the influence of these three topological attributes in mitigating/worsening failure magnitude is less evident under higher stress levels as more nodes are failing in the network hence the differences in the average failure magnitude become smaller.Similar relationships with the topological attribute metrics are observed on severity, but the range of maximum stress magnitude showing strong correlations are wider.In comparison, failure rate cannot easily be estimated from a single topological attribute, although strong correlations with link density and average path length are observed under a very narrow range of stress levels.Spectral gap, which is considered an indicator of system robustness against node/link removal and has been widely used in prior literature, has the weakest correlations with all resilience metrics.This may result from the fact that spectral gap is greatly affected by the existence of bottleneck links, yet their malfunction may not lead to significant drop in service in WDSs as there are usually water sources in each big cluster.Heterogeneity is another key attribute affecting resilience identified in previous studies, for example by enhancing the spread of diseases or improving ecological system resilience against the extinction of species.Though heterogeneity is negatively correlated with the five resilience metrics among the investigated WDSs, the correlations are not strong.This may be due to the grid-like structures and rare existence of hubs in WDSs, which result in low heterogeneity values compared to other network systems.For example, the heterogeneity values of the 14 ecological networks investigated in Gao et al. 
range from approximately 5 to 100, while those of the 85 investigated WDSs are in the range 0.11–0.34.It is also worth noting that the correlations between resilience metrics and the five topological attribute metrics are weak when only very small maximum stress magnitudes are considered.This raises an interesting point as, in the traditional regime of water system management, relatively high probability system malfunctions dealt with are often of low magnitude; yet the weak correlations shown here indicate that modifying the topological properties of a system are more likely to enhance system capability under severe system malfunctions rather than events of low magnitude.The changes in the values of the topological attribute/resilience metrics of the two sets of network variants with respect to the original WDS are summarised in Fig. 5 in red and cyan boxplots.Values of link density are the same for WDSs in each set hence are not presented in the figure.The trend of change in system topology/performance can be determined from the median values and the ranges between the maximum and minimum values and between the 25th percentile and 75th percentile values.As shown in Fig. 5, addition of 5 or 10 links provides an evident improvement in algebraic connectivity and clustering coefficient, although link density values only increase by 4% and 8% respectively.The increase in network efficiency and heterogeneity are also clear, although not as great as connectivity.Spectral gap, like the other examined spectral metric algebraic connectivity, is very sensitive to link addition and shows the most evident changes, yet the direction of change is not predictable as more than 25% of the generated WDSs have negative results; similarly, variation in centrality is obvious but it can increase or decrease depending on where the added links are laid.Changes in modularity are negligible compared to other topological attribute metrics.Despite the relatively small variations in resilience results compared with some topological attribute metrics, pipe addition is demonstrated to be effective in reducing failure impacts, e.g. time to strain, failure duration, failure magnitude, failure rate and severity can be abated by 33%, 17%, 21%, 22% and 21% respectively by increasing the number of pipes in the system by 4%.The benefits are shown to diminish with more pipe additions, as the corresponding results are 42%, 22%, 34%, 30% and 34% respectively when the number of pipes in the system is increased by 8%.Recovery rate and failure magnitude are strongly correlated in both sets of WDSs, which is consistent with the findings in Section 3.1.It should be noted that the observed trends are not indicative for single cases, as there are great overlaps in the results of the two sets of networks, hence adding more pipes will not necessarily increase system connectivity/efficiency/failure impacts unless carefully designed.Fig. 6 shows the comparison between the correlation values based on the two sets of network variants and the 85 networks in Section 3.1.No correlation results of ‘many WDSs’ are presented in Fig. 6e as the correlation was not performed between topological attribute metrics and recovery rate in Sections 3.1 for reasons explained in Sections 2.1 and 2.3.2.Results suggest that no strong correlations exist between the topological properties and resilience performance among the two sets of modified WDSs.These conclusions are unchanged after performing a sensitivity analysis similar to that in Fig. 
4, where resilience is determined based on partial strain results obtained from the GRA.The weaker correlations for the sets of modified WDSs can be explained by the smaller metric value ranges compared with those of the WDSs in Section 3.1, especially for the resilience results.Three WDSs with similar average path lengths but different resilience results are shown in Fig. 7 in the order of decreasing failure impacts.Many of the pipes added in the WDSs in Fig. 7a and b are of trivial importance, while most of the pipes added in the WDS in Fig. 7c are effective in reducing the system vulnerability to failures of trunk mains or decreasing the reliance of nodes on local water sources.This suggests that it is crucial to differentiate water sources from other nodes in analysis for the extension/rehabilitation of WDSs and it can be misleading if only the topological attribute metrics are used.Future work can be conducted to explore the potential of modifying the topological metrics for resilience assessment.The proposed analytical framework is applicable to other network systems such as sewer systems or even systems in other fields if numerical models are available for detailed resilience assessment of the network systems.The method of GRA is suitable for any network systems, the six attributes examined in this study are key topological aspects of any networks, and the recommend metrics can be applied for the quantification of each attribute subject to a correlation analysis as suggested in this work.The use of many networks supports reliable statistical analysis of the correlations between the topological attributes and resilience.The idea of developing network variants from a single network can be followed in future studies to examine the usefulness of using topological attribute metrics in guiding resilient system design.Despite topological attribute metrics having been used in many studies as surrogate resilience indicators for WDS management, a crucial, fundamental question remains to be answered.That is - does network topology greatly influence the resilience of WDSs; if yes, what topological attributes are key contributing factors?,To bridge the knowledge gap, a novel and logical analytical framework was developed for a deep analysis of their relationships.The framework provided a detailed assessment of WDS resilience as all key aspects of resilience were considered and measured by stress-strain tests.The framework also enabled a systematic study of WDS topology by a comprehensive review of network attributes and their descriptive metrics.The mapping between resilience and topological attributes was achieved by correlation analysis between their descriptive metrics, based on a large number of benchmark and virtual WDSs.The value of the resilience patterns on the rehabilitation/extension design of WDSs was uncovered by developing network variants from a single WDS and conducting similar correlation assessment between resilience and topological attribute metrics.The methods and thinking provided in the framework are not specific to WDSs, hence should be readily adaptable to other networks of environmental and social significance such as drainage and/or sewerage systems.The resilience patterns identified in WDSs are summarised and discussed as follows.It is crucial that the concept of resilience is comprehensively understood and assessed before the analysis of its interplay with topology.Resilience is a term that encompasses multiple aspects of system performance, which are too complex to be 
represented by a single performance indicator.For example, a trade-off was found between absorption and restoration capacities of the investigated WDSs under the failure scenarios simulated, i.e. a WDS that fails badly is also likely to recover quickly.Strong correlations were only observed between certain metrics of resilience and topological attributes.This suggests that topology strongly affects the spatial and temporal scales of failure impacts, while other metrics that are closely linked to physical processes and system dynamics are influenced to a lesser extent.Due to the unique characteristics of WDSs, some network attributes such as diversity in nodal degree and robustness are not critical for WDS resilience, unlike some other systems reported in literature.Not all topological attribute metrics are suitable for the description of the topological features of WDSs.For example, clustering coefficient was found to be a poor indicator of connectivity of WDSs, which can be explained by the uncommon triangular loops in WDSs.No strong correlations were shown between WDS performance and topological attributes at very low stress levels, suggesting that the study of topological properties is more meaningful for resilience management than the traditional risk management.Results suggest that it can be misleading to use topological attribute metrics as surrogate resilience indicators in the design of WDSs.This is supported by the study on network variants, where the extent of value changes in topological attribute metrics were shown to be much larger than resilience metrics subject to pipe additions and weak correlations were found between the values of all topological attribute and resilience metrics.Performance of the three typical network variants indicated the importance of considering details of a WDS in system design which was not captured by the statistical topological metrics.It should be noted that the resilience patterns identified in this study are only applicable to mechanical failures of WDSs.The findings are expected to be different for water quality-related failures, as it is likely that higher connectivity promotes the spread of contaminants and recovery of WDSs from contamination would not be as rapid as pipe failures.Hence, the water quality-related resilience and its potential conflict with mechanical resilience should be investigated in future research.As only random failures are investigated in this paper, it is also worth considering targeted attacks in future studies and examining their impacts on resilience.Nevertheless, the resilience patterns revealed from WDSs in this paper are different from many other types of networks reported in the literature, suggesting the need for explicit and critical analysis of systems with distinctive topological features and dynamics."The research data supporting this publication are openly available from the University of Exeter's institutional repository at: https://doi.org/10.24378/exe.483
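As a complement to the framework described above, the following minimal sketch illustrates how the strain metrics defined earlier (time to strain, failure duration, failure magnitude, failure rate, recovery rate and severity) could be computed for a single node from a simulated quality-of-performance time series. It is a simplified, hypothetical Python/NumPy implementation, not the code used in the study: the array inputs, the thresholds and the handling of nodes that do not recover within the simulation horizon are assumptions made here for illustration.

```python
import numpy as np


def nodal_strain_metrics(t, q, q_normal, q_los, t_stress):
    """Strain metrics for one node from its quality-of-performance time series.

    t        : 1-D array of simulation times (e.g. hours)
    q        : 1-D array of nodal performance at those times (e.g. fraction of demand supplied)
    q_normal : normal, pre-stress performance level
    q_los    : minimum acceptable level of service (QLOS)
    t_stress : time at which the stress (component malfunction) is applied
    """
    t, q = np.asarray(t, float), np.asarray(q, float)
    below = q < q_los
    if not below.any():                      # un-failed node: strain metrics set to zero
        return dict(time_to_strain=0.0, failure_duration=0.0, failure_magnitude=0.0,
                    failure_rate=0.0, recovery_rate=np.nan, severity=0.0)

    i_fail = int(np.argmax(below))                       # first step below QLOS
    i_worst = i_fail + int(np.argmin(q[i_fail:]))        # worst performance
    later = range(i_worst + 1, len(q))
    i_qlos = next((i for i in later if q[i] >= q_los), len(q) - 1)     # back above QLOS
    i_norm = next((i for i in later if q[i] >= q_normal), len(q) - 1)  # back to normal service

    eps = 1e-9                                            # guard against zero-length intervals
    failure_magnitude = q_normal - q[i_worst]
    return dict(
        time_to_strain=t[i_fail] - t_stress,
        failure_duration=t[i_norm] - t[i_fail],
        failure_magnitude=failure_magnitude,
        failure_rate=failure_magnitude / max(t[i_worst] - t[i_fail], eps),
        recovery_rate=failure_magnitude / max(t[i_qlos] - t[i_worst], eps),
        # severity: area between the QLOS line and the response curve while service is failed
        severity=float(np.trapz(np.clip(q_los - q, 0.0, None), t)),
    )


# Network-level values would then follow the conventions stated above: mean over nodes for
# most metrics, sum over nodes for severity, and mean over failed nodes only for recovery
# rate.  The correlation analysis across many WDSs then reduces to Pearson coefficients
# between these network-level resilience values and the topological attribute metrics, e.g.
#     r = np.corrcoef(link_density_values, time_to_strain_values)[0, 1]
```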
Resilience has been increasingly pursued in the management of water distribution systems (WDSs) such that a system can adapt to and rapidly recover from potential failures in the face of a deeply uncertain and unpredictable future. Topology has been assumed to have a great impact on the resilience of WDSs, and is the basis of many studies on assessing and building resilience. However, this fundamental assumption has not been justified and requires investigation. To address this, a novel framework for mapping between resilience performance and network topological attributes is proposed. It is applied to WDSs here but can be adapted to other network systems. In the framework, resilience is comprehensively assessed using stress-strain tests, which measure system performance on six metrics corresponding to system resistance, absorption and restoration capacities. Six key topological attributes of WDSs (connectivity, efficiency, centrality, diversity, robustness and modularity) are studied by mathematical abstraction of WDSs as graphs and measured by eight statistical metrics in graph theory. The interplay between resilience and topological attributes is revealed by the correlations between their corresponding metrics, based on 85 WDSs with different sizes and topological features. Further, network variants from a single WDS are generated to uncover the value of topological attribute metrics in guiding the extension/rehabilitation design of WDSs towards resilience. Results show that only certain aspects of resilience performance, i.e. the spatial and temporal scales of failure impacts, are strongly influenced by some (not all) topological attributes, i.e. network connectivity, efficiency, modularity and centrality. Metrics for describing the topological attributes of WDSs need to be carefully selected; for example, the clustering coefficient is found to be weakly correlated with resilience performance compared to other metrics of network connectivity (due to the grid-like structures of WDSs). Topological attribute metrics alone are not sufficient to guide the design of resilient WDSs, and key details such as the location of water sources also need to be considered.
707
HIV-1 Activates T Cell Signaling Independently of Antigen to Drive Viral Spread
Many viruses exploit direct cell-cell infection to replicate most efficiently.HIV-1 is no exception and has evolved to take advantage of the frequent interactions between immune cells in lymphoid tissue to disseminate at sites of T cell-T cell contact.Indeed, cell-cell spread is the predominant mode of HIV-1 replication that ultimately leads to T cell depletion and the development of AIDS.HIV-1 manipulation of immune cell interactions in lymphoid tissue, where T cells are densely packed, allows for rapid HIV-1 spread and evasion of host defenses, including innate and adaptive immunity as well as antiretrovirals.Importantly, ongoing viral replication likely prevents an HIV/AIDS cure.Cell-cell spread of HIV-1 occurs across virus-induced T cell-T cell contacts and is a dynamic, calcium-dependent process that appears highly regulated, culminating in polarized viral egress and rapid infection of neighboring cells.The molecular details of how HIV-1 co-opts the host cell machinery to drive maximally efficient spread between permissive T cells remains unclear.Moreover, whether cell-cell spread induces signals that potentiate viral replication has been little considered but has major implications for therapeutic and eradication strategies.Phosphorylation-mediated signaling controls many cellular functions, including immune cell interactions and cellular responses to the environment and infection.Quantitative phosphoproteomics analysis by mass spectrometry allows for global, in-depth profiling of protein phosphorylation kinetics.When coupled with functional analysis, such studies have helped define the pathways leading to T cell activation, differentiation, and gain of effector function, paving the way to understanding the molecular details of T cell signaling and the immune response.So far, analysis of signaling during immune cell interactions has generally employed reductionist approaches; for example, cross-linking individual cell-surface proteins such as the T cell receptor or co-stimulatory molecules with antibody.Such approaches mimic the process of antigen-dependent stimulation that occurs when a T cell encounters antigen-presenting cells expressing cognate peptide in the context of major histocompatibility complex molecules.However, the unmet challenge is to globally map cellular signaling pathways activated when two cells physically interact, a more complex setting that recapitulates the uncharacterized complexity of receptor interactions that take place between immune cells and synergize to drive a cellular response.To gain insight into the molecular mechanisms underlying HIV-1 spread between T cells, we developed an approach that employs triple SILAC with quantitative phosphoproteomics to map cellular signaling events simultaneously in two distinct cell populations.We have used this strategy to perform an unbiased and comprehensive analysis of how HIV-1 manipulates signaling when spreading between CD4 T cells.By simultaneously mapping real-time phosphorylation changes in HIV-1-infected and HIV-1-uninfected CD4 T cells with kinetic resolution, we identified the host cell pathways and cellular factors modified during HIV-1 dissemination.Remarkably, our results reveal that HIV-1 subverts canonical TCR signaling in the absence of antigen to drive spread at T cell-T cell contacts.Manipulation of T cell signaling by HIV-1 in this way represents a previously unknown strategy to promote efficient replication with important implications for disease pathogenesis.To obtain an unbiased and global 
overview of manipulation of host cell signaling during HIV-1 spread, we used SILAC coupled with quantitative phosphoproteomics analysis by MS. Jurkat CD4 T cells, a well-characterized model of HIV-1 infection and T cell signaling, were labeled using either “heavy” or “light” amino acids for at least six doublings.SILAC-labeled R10K8 T cells were infected with HIV-1 by spinoculation to synchronize infection, achieving 90% infection after 48 hr.HIV-1-infected heavy- labeled and uninfected light-labeled target T cells were mixed to optimize contacts and either lysed immediately or incubated at 37°C for 5, 20, or 40 min prior to lysis to allow for cell-cell contact and cross-talk.We expected rapid dynamics of cellular signaling and HIV-1 cell-cell spread during T cell-T cell contact.To enable inter-time-point comparison and temporal analysis of dynamic signaling, each time point was supplemented post-lysis with an internal standard consisting of a pooled sample of mixed infected and uninfected T cells both labeled with “medium” amino acids and collected from each time point.All samples were processed and analyzed by MS with quantification of abundance changes based on MS signal intensities of the triple-SILAC-labeled peptides.Raw MS data were processed using MaxQuant for protein assignment, quantification of peptides, phosphorylation, and phosphosite localization.We identified a total of 28,853 phosphopeptides corresponding to 5,649 independent proteins.This is the largest single dataset from a lymphocyte or hematopoietic cell analysis.We captured proteins across numerous subcellular localizations, including the cytoplasm, nucleus, and plasma membrane, in both T cell populations.Protein function analysis revealed a broad spectrum of host cell pathways modified in both infected and uninfected cells, demonstrating this approach yields an unbiased capture of the T cell phosphoproteome.Phosphorylated serine and threonine were significantly more abundant than phosphorylated tyrosine, in agreement with their relative prevalence and key role in T cell signaling.Co-culturing HIV-1-infected and uninfected T cells results in >80% of uninfected target T cells becoming infected by contact-dependent cell-cell spread.To determine the temporal changes in cellular signaling during HIV-1 spread from donor to target T cells, we curated the data to consider only high-confidence phosphorylation sites, proteins that were identified at all four time points and those showing >1.5-fold change in the abundance of phosphorylation compared to the internal medium-labeled reference.The relative abundance of phosphorylation states for all phosphosites was quantified and the change over time calculated.Statistically significant changes over time were detected in 938 phosphopeptides corresponding to 434 proteins in HIV-1 donor cells and 851 phosphopeptides corresponding to 430 proteins in target cells.Consistent with rapid activation of signaling pathways, the largest changes in distribution and frequency of phosphopeptides from both cell populations were seen within the first 5 min.Temporal signaling changes defined groups of early- and late-responsive phosphosites and distinct clusters of responsive proteins, indicative of activation of specific cellular pathways in each cell population and downstream propagation of signaling cascades.To confirm the data and obviate potential label bias, we repeated the experiment with reversed SILAC labeling.Phosphorylation changes were confirmed in 163 phosphopeptides corresponding to 
134 proteins in the HIV-1 donor cell and 141 phosphopeptides corresponding to 124 proteins in the target cell.This represents an average 29% overlap between replicate experiments, in excellent agreement with the reproducibility of similar screens.Of these, 108 phosphorylation sites were unique to infected cells, 86 phosphorylation sites were unique to the target cell, and 55 phosphorylation changes were common to both donors and targets.This implicates specific host cell factors that may regulate the late steps of viral assembly and budding, early infection effects and common factors that may regulate T cell-T cell interactions and VS formation.We took an unbiased, Ingenuity Pathway Analysis approach to analyze the host signaling networks and pathways modified during HIV-1 spread.This revealed that TCR signaling in donor T cells was the top canonical pathway modified over time during HIV-1 spread.This was followed by CD28, inducible T cell costimulator-inducible T cell costimulatory ligand, and actin cytoskeleton signaling.In uninfected target T cells, the top canonical pathways were TCR, CD28, Cdc42, RAC, and actin signaling.Motif-X analysis of phosphosites predicted the kinases most active in HIV-1 donor cells as CaMKII, PAK, and proline-directed kinases, compared to PAK, CDK5, and proline-directed kinases in target cells.The fact that TCR signaling was the most highly activated pathway in infected cells is surprising because HIV-1 mediated T cell-T cell contact during viral spread does not involve TCR-pMHC interactions and as such is antigen-independent.Rather it is driven by Env expressed in infected cells engaging viral entry receptors on opposing cells during T cell-T cell interactions with additional contributions from adhesion molecules.Figure 2A graphically summarizes the phosphoproteins we identified in HIV-1 donor cells mapped onto signaling pathways associated with canonical T cell activation at T cell-APC contacts.This visual representation highlights the significant overlap between the well-established TCR/co-stimulatory signaling pathway and phosphorylation changes identified in HIV-1 donor cells during contact with targets.To explore this further, we compared our phosphoproteome data with studies where the TCR was directly cross-linked on Jurkat T cells, and signaling was analyzed across similar time points.We found a 44% overlap between the phosphorylation profile of HIV-1 donor cells during co-culture with target cells and TCR-specific responses reported by Chylek et al. 
and a 30% overlap with Mayya et al.KEGG database analysis also reported substantial overlap between our phosphoproteome results and phosphorylation of TCR-associated proteins.Interestingly, we identified multiple proteins in our data with phosphorylation changes that mapped to early plasma membrane proximal and intermediate/late components of TCR signaling, as well as downstream regulators of gene expression.Many of the residues modified were known activating sites.T cell signaling modulates the host cell cytoskeleton and the protein trafficking that is required for T cell activation and secretion of effector molecules.Consistent with the notion that HIV-1 cell-cell spread is an active, cytoskeletal-dependent process and that virus infection drives this process, we found dynamic phosphorylation changes to many actin regulators, polarity proteins and components of vesicle trafficking and fusion, most of which have not been previously described as host cofactors for HIV-1 replication and spread.HIV-1 predominantly spreads at virus-induced cell-cell contacts but can also disseminate less efficiently via classical diffusion-limited cell-free infection.Comparative analysis of our results obtained from target T cells with a study mapping phosphorylation in T cells exposed to cell-free virus showed a 23% overlap in modified proteins, with 41% of the phosphorylation changes in these proteins mapping to the same site.Since the molecular processes of HIV-1 entry and the early steps of infection are similar between cell-free and cell-cell spread, some overlap is expected; however, differences implicate additional signaling pathways specifically activated during T cell-T cell contact and other unique responses occurring when target cells encounter greater numbers of incoming virions during cell-cell spread.Having identified changes in phosphorylation of key components of classical T cell signaling during HIV-1 spread, which strongly indicates activation, we sought to validate this observation directly using western blotting to visualize protein phosphorylation and quantified this from multiple experiments using densitometry analysis.Proteins were chosen that represented upstream kinases, cytoskeletal proteins, and transcriptional regulators involved in T cell receptor signaling that showed dynamic phosphorylation changes at defined sites that dictate protein function.Other components of the top canonical pathways activated were also included.Figures 3A and 3B show that contact between HIV-1-infected and HIV-1-uninfected T cells increased phosphorylation of the actin regulators PAK1S204 and CFLS3.While PAK1 activation was specific to contacts mediated by HIV-1-infected cells, CFL phosphorylation appeared to be infection-independent and was also triggered by contact between uninfected T cells.However, as T cells do not usually form sustained cell-cell contacts in the absence of retroviral infection or previous antigenic stimulation, this may be unlikely to occur under normal conditions of transient cell interactions.PAK1 influences cytoskeletal dynamics and T cell activation and is activated through phosphorylation at Ser204 via TCR-dependent and TCR-independent mechanisms.CFL, a downstream target of the PAK1 cascade, stimulates actin severance and depolymerization to increase actin turnover and is inactivated by LIMK phosphorylation at Ser3, potentially stabilizing cell-cell contacts.Modulation of cytoskeletal dynamics is thus consistent with the requirement for actin remodeling during immune
cell interactions and HIV-1 cell-cell spread.Next, we examined Lck and ZAP70, which are TCR-proximal kinases and key initiators of T cell signaling.Lck activity is regulated by multiple phosphorylation states and intracellular localization.ZAP70 activity is positively regulated by Y319 phosphorylation.Consistent with activation of TCR signaling, rapid and dynamic changes to both LckY394 and ZAP70Y319 were seen over time during HIV-1-induced cell-cell contact, with identical patterns of phosphorylation indicative of Lck-dependent ZAP70 activation.A slight dip in Lck and ZAP70 phosphorylation was seen at 20 min, although the reasons for this are unclear.By contrast, activation of LATY191 was unchanged in both MS and western blotting.Supporting our phosphoproteome data showing downstream propagation of signaling cascades, strong activation of ERKT202/Y204 during HIV-mediated cell-cell contact was observed by 40 min.Finally, having found phosphorylation of the serine/threonine kinase AKT and a number of downstream targets by MS, we tested phosphorylation of AKTT308 and AKTS473.AKTT308, which lies in the activation loop of AKT and is most correlative with kinase activity, showed a 1.5-fold increase in phosphorylation during HIV-1-mediated cell-cell contact.By contrast, AKTS473 that contributes to further kinase function, and phosphorylation of additional downstream targets appeared to be activated by cell-cell contact independent of HIV-1 infection.Next, we extended the analyses to primary CD4 T cells purified from healthy donors that were infected with HIV-1 ex vivo and mixed with autologous CD4 T cells as well as mock-infected controls.Primary T cells showed similar patterns of HIV-dependent, contact-mediated phosphorylation over time but more rapid propagation of signaling and more robust AKTT308 activation, in agreement with previous data indicating HIV-1-infected primary T cells are highly responsive to contact with target cells.However, western blotting of total cell lysates from primary cells did not reveal global changes in Lck phosphorylation, consistent with high basal levels of Lck phosphorylation in primary T cells.Signaling through the TCR is considered a tightly controlled checkpoint to ensure T cell activation only occurs in response to foreign antigen displayed by MHC.It is therefore striking that antigen-independent, HIV-1-induced T cell-T cell interactions should trigger classical TCR signaling cascades and phosphorylation of numerous pathway components.To probe the relationship between the TCR complex and contact-induced activation of signaling in HIV-1 donor cells, TCR/CD3-negative T cells were infected with HIV-1 and phosphorylation examined.Notably, HIV-1-infected TCR-negative cells did not phosphorylate PAK, Lck, ZAP70, or ERK in response to contact with target T cells, implicating the TCR in signal activation.As a control, we confirmed TCR-negative cells retained expression of Lck and that HIV-1-infected cells did not downregulate cell-surface expression of the TCR/CD3 complex.Seeking a role for TCR-dependent signaling in HIV-1 spread, TCR-negative cells were infected and their ability to support viral replication measured.TCR-negative cells were readily susceptible to initial infection with HIV-1 and showed no defect in cell-free virus production over a single round of infection, as measured by quantifying release of viral Gag and particle infectivity.Remarkably, when infected cells were incubated with wild-type target T cells, we observed a significant defect in their 
ability to transmit virus by cell-cell spread.Reconstituting TCR expression using lentiviral transduction resulted in >85% of cells expressing the TCR complex at the cell surface and rescued HIV-1 cell-cell spread.Failure of TCR-negative cells to efficiently transmit virus by cell-cell spread indicates an important role for the TCR in VS formation and virus spread.Quantitative immunofluorescence microscopy revealed TCR-negative cells were indeed impaired in VS formation and could not recruit the viral structural proteins Gag and Env to sites of cell-cell contact to polarize viral budding toward the target cell (Env enrichment at contact site: WT, 10-fold ± 2.6-fold, n = 17; TCR negative, 1.6-fold ± 0.5-fold, n = 16; Gag enrichment at contact site: WT, 18.3-fold ± 5.7-fold, n = 14; TCR negative, 1.7-fold ± 0.3-fold, n = 16).We hypothesized that close and sustained contact between infected and uninfected cells may be inducing TCR coalescence at the contact site as a mechanism of receptor triggering.In support of this model, analysis of contacts formed between HIV-1-infected primary T cells and autologous uninfected T cells showed that 70% of VSs displayed co-enrichment of the TCR and Env on infected cells at the contact zone, despite the absence of direct antigen-dependent TCR engagement by opposing uninfected targets.Quantification of fluorescence revealed the TCR was enriched 3.3-fold ± 0.6-fold and Env 6.9-fold ± 1.5-fold at the contact site.The kinase Lck is a key upstream initiator of TCR signaling and activation of cytoskeletal dynamics at immune cell contacts.To test whether signaling during HIV-1-induced T cell contact was Lck dependent, Lck-negative JCAM1.6 cells were infected with virus and mixed with wild-type target T cells, and protein phosphorylation was analyzed.Figures 4A–4G and quantification of western blots revealed that Lck-negative HIV-1-infected cells were unable to initiate signaling and activate PAK1S204, ZAP70Y319, ERKT202/Y204, and AKTT308, whereas CFL remained responsive.To examine whether Lck and the downstream kinase ZAP70 contribute functionally to HIV-1 replication, Lck- and ZAP70-negative T cells were infected, and virus assembly, budding, and spread were quantified.We used VSV-G-pseudotyped virus to overcome variable expression of the receptor CD4.Notably, both Lck- and ZAP70-negative Jurkat cells failed to support efficient cell-cell spread.In agreement with data using TCR-defective cells, impaired cell-cell spread in Lck- and ZAP70-deficient Jurkat cells was not due to a block in virus infection or a defect in virus production, since the cell-free virus budding and particle infectivity were equivalent to that of WT Jurkat cells.However, as expected, there was a significant defect in VS formation and failure to recruit Env and Gag to the contact interface but no effect on cell-cell contact, demonstrating that Lck and ZAP70 are not mediating their effect through altering T cell-T cell interactions.Reconstituting cells with exogenous Lck and ZAP70 significantly increased cell-cell spread and restored VS formation as measured by Env and Gag recruitment to the cell-cell interface.The viral determinants of contact-induced antigen-independent T cell signaling remained unclear.HIV-1 Env expressed on the surface of infected T cells binds cellular entry receptors on opposing cells, leading to sustained cell-cell contact, plasma membrane remodeling, receptor clustering, and VS formation.Consistent with antigen-independent T cell signaling being driven by close, sustained physical cell contact mediated
by Env-CD4/coreceptor interactions, T cells infected with Env-deleted VSV-G-pseudotyped virus did not activate phosphorylation of T cell signaling components following incubation with target cells, with the exception of CFL and AKT473.Similar results were observed when primary CD4 T cells were infected with ΔEnv VSV-G-pseudotyped virus.We postulated that failure to activate signaling was because TCR clustering did not occur in the absence of HIV-1 Env-mediated cell-cell contact.Concordantly, we observed a significant reduction in the number of cell-cell contacts displaying TCR clustering in the absence of HIV-1 Env expression on infected primary CD4 T cells, with only 16% of contacts showing TCR enrichment when cells were infected with HIV ΔEnv virus compared to 70% using WT virus.The HIV-1 accessory protein Nef has been reported to modulate T cell signaling and induce hyper-responsiveness to stimulation.To test whether Nef was potentiating antigen-independent signaling, T cells were infected with Nef-deleted virus and signaling examined.Figures 5A–5G shows that deletion of Nef resulted in failure to activate ERKT202/Y204, LckY394, ZAP70Y319, and PAK1S204 following incubation with target cells, with AKTT308 phosphorylation remaining responsive to cell-cell contact.However, in contrast to Env, Nef appeared dispensable for TCR clustering at the VS, suggesting Nef potentiation of signaling acts downstream of cell-cell contact.Taken together, these data demonstrate HIV-1 infection induces antigen-independent TCR signaling that is activated by Env-dependent cell-cell contact and further potentiated by the HIV-1 virulence factor Nef.Here, we have developed an approach to globally map phosphorylation-based dynamic signaling events in complex mixed cell populations and performed an analysis of host cell signaling pathways that are manipulated during HIV-1 spread between CD4 T cells.Cell-cell spread of HIV-1 potently enhances viral dissemination, but many aspects of this host-pathogen interaction remain obscure.Our identification of >200 host cell factors that are manipulated during highly efficient HIV-1 spread is thus timely and provides a wealth of information with implications for pathogenesis.Furthermore, the experimental approach we describe has broad applicability and can be readily applied to complex biological analyses such as signaling during intercellular communication or to define host cell responses to the sequential steps of pathogen replication.Notably, we make the unexpected discovery that HIV-1 subverts classical TCR signaling pathways independently of antigen during cell-cell contact to drive viral spread from infected to uninfected T cells.Specifically, we found that cell-cell contact mediated by Env uniquely activates the TCR/CD3 complex and downstream kinases Lck and ZAP70 in infected T cells and that this process is required to transmit virus to neighboring cells.We propose a paradigm of TCR signaling in which the close apposition and sustained physical contact between HIV-1-infected and -uninfected T cells, which are mediated by Env-receptor interactions and remodel the synaptic plasma membrane, lead to aggregation and enrichment of the TCR/CD3 complex at the contact site and initiation of TCR signaling.Specifically, the presence of plasma-membrane-exposed Env engaging CD4 and coreceptor on target cells results in coalescence and enrichment of cross-linked Env at the VS.Concomitantly, we observe Env-dependent clustering of TCR-containing plasma membrane microdomains at the 
cell-cell interface and activation of Lck-dependent signaling.We propose that this activated signaling could then drive the recruitment of additional Env and Gag to the contact zone as we found here by quantifying viral protein enrichment, resulting in polarized HIV-1 assembly and budding of virions across the synaptic space toward the engaged target cell.The TCR, Lck and ZAP70 comprise a triad of early T cell signaling initiators that trigger cytoskeletal remodeling and intracellular trafficking.Consistent with this, these host proteins were all required to direct the active transport of HIV-1 structural proteins Env and Gag to sites of cell-cell contact.In support of our results, ZAP70 has previously been implicated in HIV-1 spread and VS formation.While our data reveal a compelling role for antigen-independent TCR triggering for synaptic signaling, VS formation, and rapid HIV-1 transmission, we cannot discount the contribution of other cell-surface receptors, such as LFA-1 or indeed other as-yet-unidentified pathways, in contact-induced signaling during HIV-1 spread.Unfortunately, ablation of LFA-1 expression significantly impaired the stability of cell-cell contacts, meaning we were unable to assess the contribution of adhesion molecules to signaling.Future work will undoubtedly be informative to further define these processes.Jurkat cells have been used extensively to interrogate T cell signaling and HIV-1 cell-cell spread; however, they are constitutively activated and can lack components of the T cell signaling machinery.For example, PTEN deficiency results in higher levels of basal AKT activation in transformed cell lines, including Jurkats.This is reflected in our results, where we detected more robust contact-dependent AKTT308 phosphorylation in primary T cells compared to Jurkat cells.By contrast, primary T cells show high basal Lck activation, making it difficult to detect differential Lck phosphorylation by western blotting, as we also observed.However, having performed experiments using both Jurkats and primary T cells, we provide compelling data attesting to contact-mediated activation of T cell signaling that is dependent on HIV-1 infection and Env expression.This was supported by our observation that the TCR complex is recruited to the VS formed between primary T cells in an Env-dependent manner.Furthermore, cell lines lacking the TCR or Lck were unable to activate signaling, form VSs, and drive viral spread.That we were previously unable to detect significant enrichment of CD3 at the VS in an earlier study using T cell lines is likely due to suboptimal staining conditions and the choice of cells in that study.Here, we find that cell permeabilization prior to staining with commercial antibody and the use of primary CD4 T cells greatly improved the intensity of CD3 staining and revealed robust and reproducible enrichment of TCR/CD3 at the VS.Our simultaneous analysis of phosphorylation changes in HIV-1-infected and HIV-1-uninfected T cells during viral dissemination revealed widespread modulation of host cell pathways by HIV-1 that support viral replication by activating unique replication-enhancing signals.In addition to the requirement for physical cell-cell contact mediated by Env, the viral accessory protein Nef was necessary, but not sufficient, for contact-induced TCR signaling.Nef is a multifunctional modulator of T cell signaling that has been implicated in aberrant Lck activation independent of the TCR during T cell-APC interactions and perturbation of immune synapse
formation.However, conflicting reports about Nef’s ability to potentiate or suppress signaling means the biological consequences for viral spread remain poorly understood.Intriguingly, the Nef proteins of HIV-1 and its ancestor, SIVcpz, unlike most other simian immunodeficiency viruses, do not downregulate expression of the TCR on infected T cells.Why HIV-1 does not employ this potential immune-evasion strategy has remained enigmatic.We propose that HIV-1 has instead evolved to preserve expression and exploit the TCR through molecular reprogramming of classical T cell signaling pathways during cell-cell contact, allowing for optimal viral replication and spread between human CD4 T cells.That most SIVs downregulate the TCR on infected cells raises the intriguing question of how those viruses disseminate between cells.Whether they exploit an alternate mechanism for driving spread between T cells in contact, and if this contributes to differences in immune activation and disease pathogenesis seen in natural SIV infections, is unknown.Future studies to address this would be extremely interesting and undoubtedly shed new light on unresolved questions surrounding the pathogenicity of lentiviral infection.We envisage that the insights our data provide into the manipulation of T cell signaling by HIV-1, coupled with the identification of >200 host cell factors modified during viral spread, will inform future studies aimed at defining the molecular processes regulating successful HIV-1 replication in both infected and susceptible target T cells and the associated immunological dysfunction causing AIDS.In an era in which ex vivo manipulation of T cells is increasingly deployed for diverse immunotherapy strategies, our findings have clear importance beyond the sphere of HIV-1 and define concepts of T cell activation that could be considered in future immunomodulatory strategies.Jurkat T cell lines and HeLa TZM-bl and 293T cells were cultured, infected, or transduced as described in Supplemental Experimental Procedures.Primary CD4 T cells were isolated from peripheral blood of healthy donors by Ficoll gradient centrifugation and negative selection, cultured, and infected as described in Supplemental Experimental Procedures.HIV-1 was prepared from the molecular clone pNL4.3 by transfecting 293T cells using Fugene 6 and infectious virus titered on HeLa TZM-bl cells using Bright-Glo Luciferase assay.Jurkat cells or primary CD4 T cells were infected with HIV-1 NL4.3 by spinoculation and incubated for 48 hr prior to use.Alternatively, cells were infected with VSV-G-pseudotyped virus generated by transfecting 293T cells with pNL4.3, pNL4.3 ΔNef, or pNL4.3 ΔEnv and pMDG.Infection was quantified by flow cytometry staining for intracellular Gag.Triple SILAC was performed on Jurkat CE6-1 cells and incorporation confirmed by MS. 
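As an illustration of the data curation described earlier (high-confidence phosphosites, quantification at all four time points, and a >1.5-fold change against the medium-labeled internal standard), the following pandas sketch shows one way such filtering could be written; the column names and the 0.75 localization cut-off are assumptions for illustration, not the exact MaxQuant output or the authors' pipeline.

# Hypothetical sketch of SILAC phosphopeptide curation; column names and the
# localization threshold are placeholders, not the exact MaxQuant headers.
import pandas as pd

TIME_POINTS = ["0min", "5min", "20min", "40min"]

def curate_phosphosites(sites: pd.DataFrame,
                        min_localization: float = 0.75,
                        min_fold_change: float = 1.5) -> pd.DataFrame:
    # Keep confidently localized phosphosites.
    sites = sites[sites["localization_prob"] >= min_localization]
    # Heavy(infected)-to-medium(reference) ratio columns, one per time point.
    ratio_cols = ["ratio_HM_" + t for t in TIME_POINTS]
    # Require quantification at every time point.
    sites = sites.dropna(subset=ratio_cols)
    # Require a >1.5-fold change (up or down) at some time point.
    changed = sites[ratio_cols].apply(
        lambda r: r.max() >= min_fold_change or r.min() <= 1.0 / min_fold_change,
        axis=1)
    return sites[changed]

An analogous set of light-to-medium ratio columns would be filtered the same way to obtain the responsive phosphosites in the uninfected target cells.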
Jurkat cells labeled with heavy amino acids were infected with HIV-1 and mixed with uninfected target Jurkat cells labeled with light amino acids.Medium-labeled cells were used as an internal reference.Infected and uninfected T cells were mixed and incubated for 0, 5, 20, or 40 min prior to lysis.All time-point samples were buffer exchanged, reduced, and alkylated and subsequently digested using filter aided sample preparation.Digested peptides were fractionated via hydrophilic interaction chromatography and enriched for phosphopeptides using titanium immobilized metal affinity chromatography and analyzed by high-resolution nano-liquid chromatography electrospray ionization MS/MS.Raw MS data were processed using MaxQuant for protein assignment, quantification of peptides, phosphorylation, and phosphosite localization.Refer to Supplemental Experimental Procedures for more information.Cell lysates were prepared as described for MS. Proteins from an equal number of cells were separated by SDS-PAGE, transferred to nitrocellulose, and subjected to western blotting for total and phosphorylated proteins as described in Supplemental Experimental Procedures.Blots are representative of two or three independent experiments.Densitometry quantification of bands was performed using ImageJ.The band intensity for each phosphoprotein was normalized to the corresponding total protein at each time point and plotted as the mean fold change in protein phosphorylation from multiple experiments.Jurkat T cells were infected with VSV-G-pseudotyped HIV-1.Forty-eight hr post-infection, viral supernatants were harvested and Gagp24 quantified by ELISA.Virion infectivity was determined by luciferase assay using HeLa TZM-bl reporter cells.HIV-1 cell-cell spread was measured by quantitative real-time PCR, and data are shown as the fold increase in HIV-1 DNA compared to the albumin housekeeping gene and normalized to baseline, reflecting de novo reverse transcription in newly infected target T cells during cell-cell spread.Alternatively, Jurkat 1G5 target T cells containing a luciferase reporter gene driven by the HIV-1 long terminal repeat were used and cell-cell spread measured by luminescence assay.Quantification of T cell-T cell contacts and VS was performed as described previously.Conjugates were defined as two closely apposed cells consisting of one HIV-1-infected T cell and one target T cell.VSs were defined as conjugates showing enrichment of HIV-1 Env and Gag to the site of cell-cell contact.Images were acquired using the DeltaVision ELITE Image Restoration Microscope coupled to an inverted Olympus IX71 microscope and a CoolSNAP HQ2 camera, deconvolved with softWoRx 5.0 and processed using Huygens Professional v4.0 and Adobe Photoshop C3.Quantification of fluorescence intensity was performed using ImageJ.A region of interest at the contact site was selected and compared to a region on the opposite side of the cell.The integrated density was adjusted for the size of region of interest and for background, and the fold enrichment of fluorescence signal at the contact zone was determined for at least 20 contacts from two independent experiments.Statistical significance was calculated using the Student’s t test or the Mann-Whitney test.For multiple comparisons, statistical significance was calculated using the parametric ANOVA test with Bonferroni correction.Significance was assumed when p < 0.05.All tests were carried out in GraphPad Prism 6 software.C.J. conceived the study.A.C.L.L. 
designed, performed, and analyzed the quantitative phosphoproteomics and western blotting experiments.S.S. performed the viral replication experiments and analyzed the data.M.S. performed the TCR reconstitution experiments and western blotting and analyzed the data.C.J. and A.C.L.L. wrote the paper, with contributions from all authors.
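The densitometry and fluorescence quantification described in the methods above reduce to simple ratio calculations; the sketch below restates them in code purely for clarity, with invented example numbers that are not measurements from the study.

# Hypothetical restatement of the quantification described above; all values
# are invented examples, not data from the study.
import numpy as np

def phospho_fold_change(phospho_bands, total_bands):
    # Normalize each phospho band to total protein, then express as fold
    # change relative to the first (0 min) time point.
    norm = np.asarray(phospho_bands, float) / np.asarray(total_bands, float)
    return norm / norm[0]

def contact_enrichment(contact_density, contact_area, opposite_density,
                       opposite_area, background_per_area=0.0):
    # Background- and area-adjusted integrated density at the contact zone
    # divided by the same quantity on the opposite side of the cell.
    contact = contact_density / contact_area - background_per_area
    opposite = opposite_density / opposite_area - background_per_area
    return contact / opposite

print(phospho_fold_change([1.0, 1.4, 1.9, 2.6], [1.0, 1.1, 1.0, 1.05]))
print(contact_enrichment(5200.0, 40.0, 1600.0, 40.0, background_per_area=5.0))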
HIV-1 spreads between CD4 T cells most efficiently through virus-induced cell-cell contacts. To test whether this process potentiates viral spread by activating signaling pathways, we developed an approach to analyze the phosphoproteome in infected and uninfected mixed-population T cells using differential metabolic labeling and mass spectrometry. We discovered HIV-1-induced activation of signaling networks during viral spread encompassing over 200 cellular proteins. Strikingly, pathways downstream of the T cell receptor were the most significantly activated, despite the absence of canonical antigen-dependent stimulation. The importance of this pathway was demonstrated by the depletion of proteins, and we show that HIV-1 Env-mediated cell-cell contact, the T cell receptor, and the Src kinase Lck were essential for signaling-dependent enhancement of viral dissemination. This study demonstrates that manipulation of signaling at immune cell contacts by HIV-1 is essential for promoting virus replication and defines a paradigm for antigen-independent T cell signaling.
708
Energy and exergy analysis of chemical looping combustion technology and comparison with pre-combustion and oxy-fuel combustion technologies for CO2 capture
Combustion of carbonaceous fuels to produce electricity in power plants emits CO2, which causes climate change .Coal will continue to dominate power production in the near future due to its lower price and 113 years of reserves globally , although it is highly polluting .Therefore, developing clean and cheap energy from coal has been an issue of international concern and a challenge for engineers and researchers .Substantial efforts are being made worldwide to find new technologies to use coal in an environmentally-friendly manner .Integrated gasification combined cycle coupled with chemical looping combustion and direct coal chemical looping combustion are promising technologies to produce clean electricity from coal by efficiently incorporating CO2 capture .Chemical looping combustion systems consist of two interconnected fluidised bed reactors which separately effect the oxidation and reduction reactions of an oxygen carrier .The OC particles are continuously circulated to supply oxygen for combustion of solid or gaseous fuel.This arrangement prevents dilution of products of combustion with nitrogen.Steam is condensed out, to obtain a pure stream of CO2 for transport and storage.Cormos and Erlach et al. presented a detailed plant concept and methodology for an IGCC–CLC process which would exhibit a net electrical efficiency of 39% and a CO2 capture rate of ∼100%.Cormos and Cormos evaluated thoroughly plant configuration and operational aspects for CDCLC processes.Their study showed that CDCLC can achieve a net electrical efficiency of 42.01% with a 99.81% CO2 capture rate.Physical absorption-based pre-combustion capture, and capture following oxy-fuel combustion, are another two promising technologies for CO2 capture, which are commonly studied .Oxy-fuel combustion involves burning the fuel in a mixture of pure oxygen and recycled CO2.Pre-combustion capture includes the gasification of the fuel, which involves reaction with oxygen in sub-stoichiometric quantities to produce a mixture of carbon monoxide, CO2, methane, hydrogen and steam/vapour.The CH4 from the gasification is subsequently reformed in the presence of steam to a mixture of CO and H2, and finally the CO undergoes the water–gas shift reaction with H2O to produce a mixture of CO2 and H2, with the CO2 being removed and the H2 then burned in a modified gas turbine or used to power a fuel cell.Chiesa et al. investigated physical absorption-based pre-combustion capture for an IGCC plant producing 390–425 MW electricity.They observed a net electrical efficiency between 36 and 39% with ∼91% CO2 capture rate and a 6.1–7.5% efficiency penalty against reference cases without CO2 capture.Padurean et al. 
compared different types of physical and chemical solvents by applying physical absorption-based pre-combustion capture technologies to a base-case IGCC plant with 425–450 MW power output.They concluded that the IGCC process using Selexol is the most efficient capture technology, with a net electrical output of 36.08% and a CO2 capture rate of 91.43%.A collaborative project between Vattenfall Group, Energy Research Centre of the Netherlands and Delft University of Technology has successfully developed and validated a methodology and tool for retrofitting a Selexol based pre-combustion capture method for an IGCC plant at Groningen, Netherlands .Studies on oxy-fuel combustion are mainly restricted to pulverised coal power plants.However, some studies have proposed new process designs to incorporate oxy-fuel combustion into IGCC plants.For instance, Oki et al. demonstrated an innovative oxy-fuel IGCC process using pure oxygen instead of air in the gas turbine combustor unit of an IGCC.This configuration prevents dilution of the GT exhaust stream with N2 and produces a pure CO2 stream.Experimental apparatus constructed for their new oxy-fuel IGCC process was used to test the performance of different types of fuels for power generation.Kunze and Spliethoff have also proposed an advanced process design, using a combination of IGCC and oxy-fuel combustion technologies, that purports to achieve a net electrical efficiency of 45.74% and is capable of capturing 96–99% CO2.Supplementary information Table S1 shows a comparison of different CO2 capture technologies through published literature.Existing work discussing individual capture technologies is difficult to use for direct comparison due to variation in modelling assumptions such as type of fuel used, scale of power output and efficiencies of individual process units.Furthermore, there is no study available, to our knowledge, which provides a comparison between CLC and oxy-fuel combustion technologies for IGCC power plants.In this work, we compared IGCC–CLC, CDCLC, pre-combustion and oxy-fuel combustion technologies through simulation studies using common modelling assumptions and considerations.A detailed comparison with the conventional post-combustion technology was not included in this work.Interested readers are directed to publications such as Kunze and Spliethoff for a comparison between this and other technologies.The above four capture technologies were analysed against a conventional IGCC process without capture in order to estimate the energy penalty associated with CO2 capture.An exergy analysis was also performed for IGCC–CLC, CDCLC and conventional IGCC processes to understand the fuel conversion mechanism and to find out the sources of irreversibilities in each process.Power production, power consumption, electrical efficiency, CO2 capture efficiency and exergy are the key parameters studied in this work.This work is solely based on the technical aspects of the CO2 capture technologies.A comparison of cost of electricity for different capture technologies and commercial feasibility of carbon capture and storage will be considered in future publications.It should also be noted that the aim of this work is not to produce a plant design better than those proposed previously.Instead, the focus is on producing a consistent comparison between a number of technologies reported in the literature.Flowsheet models of five large-scale IGCC processes with and without CO2 capture were developed in Aspen plus in order to investigate the effect of CO2 capture on
net electrical efficiency and to allow comparisons of the capture technologies.The Aspen plus flowsheet models for the CO2 capture cases are provided in Supplementary information Fig. S1–S4.A nominal net power output between 400 and 500 MW was selected for all five cases.Table 2 shows the composition of Illinois #6 type coal used as fuel in all five cases.A conventional IGCC plant without CO2 capture is considered as the base case.Cases 2 and 3 represent IGCC processes with pre- and oxy-fuel combustion based CO2 capture, respectively.Case 4 is the IGCC–CLC process and Case 5 is the CDCLC process with CO2 capture.A block diagram of Cases 2–5 is shown in Figs. 1–4.Process configuration for Cases 2–5 is discussed in Section “Plant configuration”.Individual unit models used in the processes for existing technology, such as water–gas shift reactor, gasifier, GT, steam turbine, heat exchangers, are mostly verified by supplier data described in .Development of Aspen plus flowsheet models for all five cases is explained in Section “Developing an industrial level flowsheet model in Aspen plus” while the methodology followed for exergy analysis is described in Section “Exergy analysis”.Chemical and phase equilibrium based on Gibbs free energy minimisation is utilised to develop the reactor models in our simulations.Input parameters and design assumptions such as flow rates, pressure, temperature, equipment efficiency and fuel composition used for developing the process flowsheet models are collected from the published literature and are presented in Table 3.No external energy sources were used apart from the coal feed.This section describes the detailed plant configuration for IGCC with pre-combustion capture technology, Case 2.A dry-feed entrained flow gasifier by Shell operating at 1300 °C and 30 atm is fed with crushed Illinois #6 type coal and oxygen from the top .A stand-alone ASU produces 95% pure oxygen at 2.37 bar; the oxygen stream is compressed to 1.2 times the gasifier pressure before being fed into the gasifier .The purge N2 stream from the ASU is compressed to 22 atm and fed to the GT combustor for nitrogen oxides control.Inside the gasifier, the oxygen partially oxidises solid coal into syngas, which mainly consists of CO and H2, with a conversion efficiency of 99.99% .All reactions in the gasifier are assumed to achieve equilibrium.The ash present in coal feed melts at 1300 °C and exits the bottom of the gasifier, along with syngas .The gasification of coal is highly exothermic and produces more heat than actually required to maintain the operating temperature of 1300 °C inside the gasifier.This excess heat is removed from the gasifier by passing pressurised water through cooling coils; water is ultimately converted to steam and used in the ultra-supercritical Rankine cycle for power generation .Hot and pressurised raw syngas from the gasifier is cooled to 350 °C in the heat recovery steam generation unit.This cooled raw syngas along with pressurised steam generated in the HRSG is fed to a water–gas shift reactor operating at 350 °C and 30 atm, where CO is partially converted to CO2 by reaction with steam.The exhaust gases from WGS-1 containing partially converted syngas and steam are cooled to 178 °C in the HRSG and fed to reactor WGS-2 which operates at 178 °C and 30 atm, for further conversion of CO into CO2 .The cumulative CO conversion efficiency for both the WGS reactors is 98% .The shift reactions inside the two WGS reactors are exothermic in nature, therefore any excess
heat generated is extracted by pressurised feed water in order to maintain the operating temperature conditions inside the reactors .The exhaust gaseous stream from WGS-2 reactor is principally composed of a mixture of CO2, CO, H2 and steam, which is cooled to 40 °C in the HRSG before being fed into the acid gas removal unit; where 99.99% hydrogen sulphide and 94.8% CO2 is removed.The AGR unit uses a Selexol-based physical solvent, which is regenerated in a steam stripper in the H2S removal unit and pressure flash chambers in the CO2 removal unit .Steam required in the steam stripper is generated in the HRSG at 130 °C.CO2 recovered from Selexol solvent is compressed to 150 atm for transportation and storage.Clean syngas after H2S and CO2 removal is heated in the HRSG to 300 °C and fed to the GT combustor for combustion in the presence of compressed atmospheric air.Excess air supply along with N2 from the ASU maintains the required temperature of 1300 °C inside the GT combustor.Flue gas leaving the GT at 597 °C is sent to the HRSG for heat recovery before venting to the atmosphere.In the Rankine cycle, pressurised steam is generated at 600 °C through a two-step process .In step one, only supercritical steam is produced at 600 °C and 285 bar by using the excess heat generated in the reactor units; WGS reactors and gasifier in Case 2.In step two, heat available from the cooling of process streams in the HRSG is used for generating supercritical steam at 600 °C, and reheating intermediate pressure and low pressure steam to 600 °C.The supercritical steam produced in both steps is mixed and supplied to the HP steam turbine.The exhaust steam from the LP ST exits at 0.046 bar and 90.5 °C .It is then condensed at 25 °C using cooling water at 15 °C and pumped back to the relevant process units after pressurising to 285 bar.The steam generation approach followed is similar for all five cases.In IGCC with oxy-fuel combustion technology, the coal gasification process is same as in Case 2 described in Section “IGCC with pre-combustion based CO2 capture technology”.Downstream of the gasifier, raw syngas is cooled to 40 °C in the HRSG and sent to the Selexol-based AGR unit for sulphur removal.The clean syngas after H2S removal is re-heated to 300 °C in the HRSG before being fed to the GT combustor operated at 1300 °C and 21 atm.A stream of pure oxygen generated in the ASU is compressed and supplied to the GT combustor for complete syngas conversion .This arrangement prevents dilution of flue gas with N2.The GT exhaust consisting primarily of CO2 and water vapour is sent for heat recovery in the HRSG, where the vapour is condensed and separated from CO2.Nearly 80% of this CO2 stream is compressed and recycled to the GT combustor to maintain the operating temperature of 1300 °C .The remaining 20% of CO2 is compressed to 150 atm for storage.In the Rankine cycle, pressurised steam is produced using a two-step process.In step one, supercritical steam at 600 °C and 285 bar is produced using the excess heat generated in the gasifier.In step two, heat available from cooling of raw syngas and GT exhaust in HRSG is used for supercritical steam generation and for reheating IP and LP steam to 600 °C.Steam produced in both the steps is mixed and supplied to the three STs.The IGCC–CLC process follows the same process configuration as in Case 3, which is discussed in Section “IGCC with oxy-fuel combustion technology for CO2 capture”, until the sulphur removal unit.The sulphur free syngas is heated to 300 °C in the HRSG 
and sent to the counter-current fluidised bed fuel reactor, where it is completely oxidised following reactions shown in Eqs. and.The oxygen required for syngas conversion is supplied by the OC particles which are reduced to wustite.Fe2O3 is supported by 15% aluminium oxide and 15% silicon carbide to enhance its thermal and physical properties .A syngas conversion efficiency of ∼100% is achieved in the fuel reactor .The exhaust from the fuel reactor is passed through a cyclone separator where the reduced OC particles are separated from gaseous products of syngas conversion primarily consisting of CO2 and vapour.This hot gaseous product stream is cooled in the HRSG to condense the vapour and produce a pure CO2 stream for compression and storage.Supplementary information Table S2 and Fig. S5 show the details on the composition and thermodynamic state of key process streams and the heat transfer diagram, respectively, for Case 4 as an example case.In the CDCLC process, the pulverised coal is directly fed to the fuel reactor, eliminating use of any separate gasifier.Coal is oxidised by the OC in the fuel reactor.Almost complete coal conversion is calculated by the equilibrium based fuel reactor model.The simulation results were validated via comparison with available literature on experimental data and modelling .Exhaust product gas from the fuel reactor consisting primarily of CO2 and water vapour is separated from the reduced OC particles in a cyclone.Reduced OCs are sent to the air reactor for regeneration, whereas hot product gas is cooled to 40 °C in the HRSG and sent to gas clean-up and the AGR unit for sulphur removal.Steam required for Selexol regeneration is generated in the HRSG.Any remaining vapour is condensed, yielding a stream of pure CO2 for compression and storage.A 100% CO2 capture rate was observed .Regenerated OC from the air reactor follows the same path as in Case 4 described in Section “IGCC–CLC process for CO2 capture”.The supercritical steam at 600 °C and 285 bar is generated in the HRSG by using a portion of heat available from the product gas and flue gas cooling.The IP and LP steam are heated to 600 °C in the HRSG.Tables 2 and 3 present the key parameters and operating conditions used in development of the flowsheet models for the cases mentioned in Section “Plant configuration”.Stream class MIXCINC is selected for all the cases considered in this study.The Peng–Robinson–Boston–Mathias property method is used for the conventional components whereas coal enthalpy models HCOALGEN and DCOALIGT are used for the two non-conventional components coal and ash .OC is entered as solid particle in the component list.The equilibrium reactor model RGIBBS is used for modelling the coal gasifier, fuel reactor, air reactor and combustor.The RGIBBS model requires temperature, pressure, stream flow rate and composition as its key inputs.The REQUIL model is used to design the two WGS reactors.A PUMP model with an efficiency of 90% is used to pressurise the feed water in the process.A counter-current MHeatX type heat exchanger is used to represent the HRSG.An MCOMPR model with an isentropic efficiency of 83% represents the four-stage compressor used in the processes to compress the gas streams such as air, oxygen, N2 and CO2.The performance of the IGCC–CLC process and conventional IGCC process are compared on the basis of net electrical efficiency and CO2 emissions.Detailed simulation results for both cases are summarised in Table 5.Case 1 is not equipped with CO2 capture and emits 
328.3 t/h of CO2.In contrast, Case 4 with CO2 capture emits only 0.60 t/h of CO2 capturing 99.8% CO2 emissions.This syngas conversion efficiency obtained by the equilibrium reactor models for CLC fuel reactor in our simulation is similar to the efficiency given in Jerndal et al. and Chiu and Ku .Case 4 gives an overall electrical efficiency of 39.69%, which is somewhat higher than is observed in other work .This could be due to the use of a supercritical steam cycle in our case whereas the other studies used sub-critical steam.CO2 capture in Case 4 contributes to a net electrical efficiency penalty of 4.57%-points compared to Case 1, which has 44.26% efficiency.Results obtained for the efficiency of Cases 1 and 4 can be compared to various other studies including IEA reports .The above comparison concludes that almost 100% CO2 can be captured from an IGCC power plant with ∼10% reduction in the net electricity produced per unit of fuel input.Fig. 5 shows the relations between power produced and consumed in different units for Cases 1 and 4.The overall heat produced from syngas conversion in both the cases is same.In Case 1, this overall heat is completely produced in a single combustor reactor, the exhaust gases of which are first used in the GT unit for power generation and then in the HRSG for heat recovery.In contrast to Case 1, Case 4 combusts the syngas in two separate reactors; the air and fuel reactors,.The GT in Case 4 receives less energy input for power production since it is supplied with the exhaust gases from the air reactor only and hence it produces 76 MW lower power than the GT in Case 1.The heat generated in the fuel reactor in Case 4 is used in Rankine cycle which results in 41.2 MW more power output in STs compared to Case 1.Case 1 generates 51 MW higher net power than Case 4 because in Case 1 most of the heat available from syngas conversion is used in the GT cycle which is more efficient than the Rankine cycle to convert heat into power.In addition, the parasitic energy consumption in Case 4 is 20% higher than in Case 1, which is mainly because of the extra energy required for CO2 removal and compression.The competitiveness of CLC with pre-combustion and oxy-fuel combustion technologies is investigated here on the basis of CO2 capture and net electrical efficiency.The calculated process outputs summarised in Table 5 indicate that Case 4 with the IGCC–CLC process has the highest net electrical efficiency of 39.74%.The next highest is 37.14% for IGCC with physical absorption based pre-combustion capture technology followed by 35.15% for the IGCC with oxy-fuel combustion process.The efficiency penalty associated with CO2 capture in Cases 2, 3 and 4 compared to the conventional IGCC process without capture ranges between 5 and 9%-points, which is comparable to other literature values .The net electrical efficiencies achieved by the above three capture technologies are found to be substantially higher than those for amine based post-combustion capture technology .Cases 3 and 4 capture ∼100% CO2 compared to 94.8% for Case 2.The CO2 capture efficiencies for Cases 2–4 are comparable to various other studies .The lower capture efficiency in Case 2 is due to incomplete absorption of CO2 by the Selexol solvent, and 98% CO conversion in the two WGS reactors.The unconverted CO from the WGS-2 reactor is not captured by the Selexol solvent in the AGR unit and is converted into CO2 in the GT combustor.This CO2 along with the un-captured CO2 from the AGR unit is expanded in the GT 
before it is finally vented to the atmosphere with other flue gases.Fig. 6 shows the variation in power production and power consumption in Cases 2–4.Case 2 produces 293.12 MW power from GT which is higher by 36.6 and 33.27 MW than Cases 3 and 4, respectively.In spite of producing the highest GT power, Case 2 manages to generate only 523.31 MW of gross power, owing to the lowest power output in the Rankine cycle or the STs compared to Case 3 and 4.However, the net power output is lowest for Case 3 and not Case 2 since the parasitic energy consumption in Case 3 is 32.2% more than Case 2 due to the high oxygen demand and compression effort for recycling CO2 into the GT combustor to assist in temperature moderation.On the other hand, Case 4 with the highest gross power of 540.93 MW and lowest parasitic power of 93.21 MW along with ∼100% CO2 capture proves to be superior in all aspects.Table 6 shows the energy and efficiency penalty associated with CO2 capture in Cases 2–4 with reference to the conventional IGCC process without capture.Table 6 indicates that the relative decrease in net electrical efficiency in Case 2, 3 and 4 against Case 1 is 16.08, 20.58 and 10.21%, respectively.These values are lower compared to the amine based post-combustion capture methods described in a report of the IEA , which indicates that IGCC with pre-combustion, oxy-fuel combustion or CLC technologies is likely more efficient than the PC power plants with post-combustion capture technology for CO2 capture.Per MW decrease in the net energy production with reference to Case 1, the IGCC–CLC process captures significantly higher CO2 compared to pre-combustion and oxy-fuel combustion capture technologies.These results suggest that CLC is a more favourable option from energetic point of view to capture CO2 from IGCC power plants compared to pre-combustion and oxy-fuel combustion technologies.However, pre-combustion and oxy-fuel combustion are practically proven and commercially available technologies whereas CLC requires a considerable amount of further research to make it available for commercial use .Hanak et al. , Xu et al. and IEA reported a loss of ∼10% points, 11–16% points and 10–12% points in the net electrical efficiency, respectively, for a supercritical coal-fired power plant using monoethanolamine based post-combustion CO2 capture technology, which is higher compared to our IGCC-CLC process showing a loss of 4.52% points.Furthermore, a 14.5–15.0% loss in the net electrical efficiency was observed by Sanpasertparnich et al. and Kanniche et al. 
for pulverised coal power plants using amine-based post-combustion CO2 capture technology.It is also noted in the above-mentioned studies that the amine-based post-combustion capture technologies captured up to 90% of CO2, which is ∼10%-points lower than the IGCC–CLC process studied in our work.Based on the above comparisons it can be concluded that the IGCC–CLC process is more efficient than the amine-based post-combustion process from an energetic point of view.One of the main objectives of this work is to indicate the potential improvements or changes that could be made to the process configuration for CLC technology in order to make it more energy efficient, and simultaneously compare it with the conventional IGCC process without CO2 capture and other CO2 capture technologies.In order to achieve this objective, the IGCC–CLC process was modified to the CDCLC process, which uses coal directly in the CLC fuel reactor instead of syngas and hence completely eliminates the additional coal gasifier used in all other cases.Table 7 shows the key performance characteristics of the CDCLC process along with the results of its comparison to Cases 1–4.No significant difference was observed in the net electrical efficiencies of the CDCLC process and the conventional IGCC process.A similar trend was observed for exergetic efficiency by Anheden and Svedberg, with syngas used as fuel instead of coal.They found that CLC causes lower destruction of fuel exergy upon combustion compared to a conventional combustion process, which could result in higher net power generation in the CLC process.A more detailed discussion of the power output trend of the CDCLC process in comparison to the conventional IGCC process is presented in Section “Exergy analysis of conventional IGCC, IGCC–CLC and CDCLC processes” through exergy analysis.The net electrical efficiency obtained for the CDCLC process in our work is higher than that observed in other available literature .As mentioned previously, this is likely owing to the assumed steam cycle conditions simulated here.The CDCLC process shows an increase of 4.67%-points in the net electrical efficiency compared with the IGCC–CLC process, which indicates an improvement in the overall performance of the CLC technology while maintaining a ∼100% CO2 capture rate.The coal conversion efficiency obtained in the fuel reactor in our simulation is similar to the efficiency obtained by Fan et al.
The gross power output of the CDCLC process is 526.33 MW, which is lower than in Cases 1, 3 and 4; however, it still produces 1.8–82 MW more net electrical power.This is because the CDCLC process has a very small parasitic power consumption of 25.98 MW, primarily due to the absence of an ASU and a N2 compressor.IGCC with oxy-fuel combustion technology has the highest relative decrease in net electrical efficiency with respect to the CDCLC process.To summarise, using coal directly in the CLC fuel reactor instead of syngas can significantly reduce the energy penalty associated with CO2 capture and produce a similar amount of net electricity to that produced by a conventional IGCC process without CO2 capture.It can be concluded from the above discussion that CDCLC is more energy efficient than IGCC–CLC.However, the solid–solid reactions between coal and the OC in the fuel reactor of the CDCLC process are comparatively more complicated and slower than the gas–solid reactions in the fuel reactor of the IGCC–CLC process, which gives the IGCC–CLC process an advantage over CDCLC.An exergy analysis has been conducted to identify the sources of irreversibilities in the conventional IGCC, IGCC–CLC and CDCLC plant designs.Exergy destruction within each process unit can be estimated using an exergy balance equation.A Visual Basic application for Microsoft® Excel 2013 developed by Querol et al. has been used to calculate the exergies of individual streams in the Aspen plus simulation models.It is worth noting that the actual exergy values could be marginally different from the exergy values calculated in the current work.This is because only the physical and chemical exergies of the streams were considered, whereas the exergy of mixing is excluded from the total exergy calculations.The authors consider that, since this work is a comparative study, the exclusion of the exergy of mixing from the total exergy calculations does not have any significant impact on the results or conclusions of the current work.The total exergy lost in the overall process is a combination of the exergy contained in the material streams discharged from the overall process without further usage and the exergy destruction in various process units such as the gasifier, CLC reactors, combustor, turbines, compressors and pumps.We considered the recovered sulphur stream as an exergy loss in our analysis, although sulphur is a useful by-product and can be sold under some circumstances.Table 8 lists the exergy destruction rates in the main process blocks and the energetic efficiency of the overall process for Cases 1, 4 and 5.The exergy analysis results indicate that the fuel reactor of the IGCC–CLC process can oxidise syngas more efficiently, with an exergy destruction rate of only 4%, compared to the combustor of a conventional IGCC process, which experiences an exergy destruction rate of 17.5% for oxidising the same amount of syngas.Anheden and Svedberg obtained an exergy destruction rate of 2.6–4.2% in the CLC fuel reactor and 22.9% in the IGCC combustor, which is comparable to our observations.In the IGCC–CLC process, the overall syngas conversion is a combination of the oxidation and reduction reactions taking place in the CLC fuel and air reactors.Therefore, considering only the fuel reactor does not provide a fair comparison of the syngas conversion capabilities of the IGCC–CLC and conventional IGCC processes, and it is essential to include the exergy destruction or losses of the CLC air reactor as well.The addition of the air reactor in the
syngas conversion analysis increases the overall exergy destruction to 17.4%, which is similar to what is obtained for the combustor unit.This implies that both IGCC–CLC and conventional IGCC are equally efficient in syngas conversion.The ASU, oxygen compressor and gasifier have a combined exergy destruction rate of 26.15% and are common to both the IGCC–CLC and conventional IGCC processes.The gasifier in both the IGCC–CLC and conventional IGCC processes, with an exergy destruction rate of 24.4%, is the most exergetically inefficient of all the process units.The above discussion indicates that both the IGCC–CLC and conventional IGCC processes are equally efficient in coal gasification and syngas conversion.Compared to the conventional IGCC process, the higher N2 compression pressure in the IGCC–CLC process results in a rate of exergy destruction 0.12%-points higher.The exergy destruction rate in the GT and air compressor of the IGCC–CLC process is 0.87%-points higher than in the conventional IGCC process.This is due to the increased airflow in the CLC air reactor caused by the higher enthalpy of oxidation of the OCs compared with conventional syngas combustion.The larger volumetric flow of water and steam through the pumps, HRSG and STs results in 1.16%-points higher exergy destruction in the Rankine cycle of the IGCC–CLC process compared to the conventional IGCC process.The extra CO2 compressor used in the IGCC–CLC process creates 0.35%-points of additional exergy destruction in compressing the captured CO2 to 150 atm.Table 8 shows that the total exergy losses in the IGCC–CLC process are 51.5 MW higher than for the conventional IGCC process.The points discussed above collectively explain the lower overall exergetic efficiency of 35.6% in the IGCC–CLC process, compared to 39.7% in the conventional IGCC process.The exergy destruction rates and overall exergetic efficiency obtained for the IGCC–CLC and conventional IGCC processes in our study are comparable to those reported in other literature .The CDCLC process has an overall exergetic efficiency of 39.8%, which is 4.2%-points higher than the IGCC–CLC process and similar to the conventional IGCC process.The CDCLC process has the lowest exergy destruction of 756.1 MW, compared to 757.3 MW for the conventional IGCC process and 808.8 MW for the IGCC–CLC process.It is seen from Table 8 that in the CDCLC process coal is more efficiently oxidised in the fuel reactor, with an exergy destruction rate of 40.8% compared to 41.9% in the conventional IGCC process and 41.8% in the IGCC–CLC process.The absence of an ASU, oxygen compressor and N2 compressor benefits the CDCLC process by saving the additional exergy destruction associated with these units.The exergy destruction in the Rankine cycle of the CDCLC process is 2.57%-points and 3.73%-points lower than in the conventional IGCC and IGCC–CLC processes, respectively.This could be due to the lower mass flow of water/steam in the Rankine cycle and the smaller number of streams available for heat transfer in the HRSG, which reduces the number of heat exchangers and thus minimises the chances of exergy losses.The points discussed above explain the higher exergetic efficiency of the CDCLC process compared with the IGCC–CLC process.It is concluded that gasifying coal directly in the CLC fuel reactor ultimately reduces the total exergy losses in the process by 6.5% and makes CLC technology as efficient as the conventional IGCC process in terms of exergy.This article evaluates the competitiveness of CLC technology against pre-combustion and oxy-fuel combustion technology for
IGCC plants with CO2 capture producing electricity from coal, in five different process configurations.Chemical Looping Combustion was studied for two different process configurations, IGCC–CLC and CDCLC, in order to fully explore its potential.The work also examines the CLC technology through a detailed exergy analysis.The key conclusions obtained from this work are as follows:For the IGCC cases with CO2 capture, the IGCC–CLC process achieves the highest net electrical efficiency of 39.74%, followed by pre-combustion capture at 37.14% and oxy-fuel combustion capture at 35.15%.These figures are relative to a net electrical efficiency of 44.26% for the unabated plant.Modification of the IGCC–CLC process to the CDCLC process increases the net electrical efficiency by 4.67%-points, while maintaining the CO2 capture rate at 100%.The net electrical efficiency is then approximately equivalent to that of the base case unabated system.The detailed comparative analysis performed in this work demonstrates that, regardless of the configuration used, CLC technology is a more suitable option for CO2 capture than physical absorption-based pre-combustion capture and oxy-fuel combustion capture technologies from the thermodynamic perspective.However, it is necessary to examine the economic aspects before drawing firm conclusions regarding the selection of a capture technology.Furthermore, the impact of degradation of the OC for CLC and of Selexol for physical absorption technologies should be examined for a complete technoeconomic evaluation; life cycle analysis is also necessary for a complete understanding of the environmental impact of the process.The authors declare no competing financial interest.
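As a quick arithmetic check of the headline comparisons above, the short Python sketch below recomputes the efficiency penalty, the relative efficiency decrease and the CO2 capture rate from the figures quoted in the text (net electrical efficiencies of 44.26% and 39.74%, and CO2 emissions of 328.3 t/h and 0.60 t/h). It is a minimal illustration of the bookkeeping only and is not taken from the authors' Aspen plus models.

# Arithmetic check of the headline comparisons quoted above; the input numbers are
# taken from the reported results, the calculations themselves are generic.
def efficiency_penalty(eta_ref_pct, eta_capture_pct):
    """Absolute penalty in %-points and relative decrease in % versus the reference case."""
    absolute = eta_ref_pct - eta_capture_pct
    relative = 100.0 * absolute / eta_ref_pct
    return absolute, relative

def capture_rate(co2_ref_tph, co2_capture_tph):
    """Percentage of the reference CO2 emission avoided by the capture case."""
    return 100.0 * (1.0 - co2_capture_tph / co2_ref_tph)

absolute, relative = efficiency_penalty(44.26, 39.74)   # Case 1 vs Case 4 (IGCC-CLC)
print(f"Penalty: {absolute:.2f} %-points, relative decrease: {relative:.2f} %")
# -> about 4.5 %-points and ~10 %, matching the values reported above
print(f"CO2 capture rate: {capture_rate(328.3, 0.60):.1f} %")
# -> ~99.8 %, matching the reported capture efficiency for Case 4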
Carbon dioxide (CO2) emitted from conventional coal-based power plants is a growing concern for the environment. Chemical looping combustion (CLC), pre-combustion and oxy-fuel combustion are promising CO2 capture technologies which allow clean electricity generation from coal in an integrated gasification combined cycle (IGCC) power plant. This work compares the characteristics of the above three capture technologies to those of a conventional IGCC plant without CO2 capture. CLC technology is also investigated for two different process configurations - (i) an integrated gasification combined cycle coupled with chemical looping combustion (IGCC-CLC), and (ii) coal direct chemical looping combustion (CDCLC) - using exergy analysis to exploit the complete potential of CLC. Power output, net electrical efficiency and CO2 capture efficiency are the key parameters investigated for the assessment. Flowsheet models of five different types of IGCC power plants, (four with and one without CO2 capture), were developed in the Aspen plus simulation package. The results indicate that with respect to conventional IGCC power plant, IGCC-CLC exhibited an energy penalty of 4.5%, compared with 7.1% and 9.1% for pre-combustion and oxy-fuel combustion technologies, respectively. IGCC-CLC and oxy-fuel combustion technologies achieved an overall CO2 capture rate of ∼100% whereas pre-combustion technology could capture ∼94.8%. Modification of IGCC-CLC into CDCLC tends to increase the net electrical efficiency by 4.7% while maintaining 100% CO2 capture rate. A detailed exergy analysis performed on the two CLC process configurations (IGCC-CLC and CDCLC) and conventional IGCC process demonstrates that CLC technology can be thermodynamically as efficient as a conventional IGCC process.
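The exergy analysis summarised above rests on a stream-by-stream exergy balance. The sketch below shows the textbook form of that balance in Python, with stream exergy taken as physical plus chemical exergy and the exergy of mixing neglected, as stated in the text; the dead-state temperature, unit structure and example numbers are illustrative assumptions rather than the actual implementation of the Querol et al. spreadsheet tool.

# Textbook-style exergy balance for one process unit, mirroring the approach described
# in the exergy analysis: stream exergy = physical + chemical exergy (mixing neglected),
# and unit-level destruction from an in/out balance. All numbers are illustrative.
T0 = 298.15  # assumed dead-state temperature, K

def physical_exergy(h, h0, s, s0):
    """Specific physical exergy in kJ/kg: (h - h0) - T0*(s - s0)."""
    return (h - h0) - T0 * (s - s0)

def stream_exergy(mass_flow, ex_physical, ex_chemical):
    """Exergy flow of a stream in kW (mass flow in kg/s, specific exergies in kJ/kg)."""
    return mass_flow * (ex_physical + ex_chemical)

def exergy_destruction(inlets, outlets):
    """Exergy destroyed in a unit = sum of inlet exergy flows - sum of outlet exergy flows."""
    return sum(inlets) - sum(outlets)

# Hypothetical reactor with two inlet streams and one outlet stream (all in kW).
inlets = [stream_exergy(10.0, physical_exergy(1200.0, 300.0, 7.2, 6.9), 500.0),
          stream_exergy(2.0, physical_exergy(900.0, 300.0, 6.8, 6.9), 21000.0)]
outlets = [stream_exergy(12.0, physical_exergy(1500.0, 300.0, 7.8, 6.9), 3200.0)]
E_dest = exergy_destruction(inlets, outlets)
print(f"Exergy destruction: {E_dest:.0f} kW ({100 * E_dest / sum(inlets):.1f}% of inlet exergy)")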
709
Effect of a mass radio campaign on family behaviours and child survival in Burkina Faso: a repeated cross-sectional, cluster-randomised trial
Scenario-based projections suggest that, to achieve the Sustainable Development Goal target of 25 or fewer under-5 deaths per 1000 livebirths by 2030, about two-thirds of all sub-Saharan African countries will need to accelerate progress in reducing under-5 deaths.1,Poor coverage of effective interventions for preventing child deaths has been attributed to weaknesses in both provision of and demand for services.2,While much effort towards achieving the Millennium Development Goals has focused on health systems and the supply side,3 including community case management of childhood illnesses,4,5 less attention has been paid to increasing demand for services.However, it has been acknowledged that behaviour change has an important part to play in enhancing child survival in low-income and middle-income countries.6,Evidence before this study,Four reviews, done before this study, concluded that targeted, well-executed mass media campaigns can have small to moderate effects not only on health knowledge, beliefs and attitudes, but on behaviours as well.However, much of the evidence for an effect comes from non-randomised designs and the limited number of randomised studies that have been reported have often failed to demonstrate an effect.Hornick has suggested that in many of the randomised trials the exposure to the media was too small to result in an effect.Development Media International's experiences in delivering mass media campaigns corroborate this crucial implementation principle and indicate that implementation at sufficient scale and intensity is the most important factor.However, evaluations of DMI's Saturation+ approach, prior to this study, relied on pre-post designs and on self-reported knowledge or behaviours only.Using the Lives Saved Tool, DMI predicted that a sustained, comprehensive campaign of sufficient scale and intensity could reduce under-5 mortality by between 16% and 23% during the third and subsequent years of campaigning through increases in coverage of key life-saving interventions.Added value of this study,To our knowledge, this study was the first attempt to conduct a randomised controlled trial to test the effect of mass media on a health outcome in a low-income country.From March, 2012, to January, 2015, DMI implemented a comprehensive, high-intensity radio campaign to address key family behaviours for improving under-5 child survival in Burkina Faso.Using a repeated cross-sectional, cluster-randomised design, we report on the effect of the campaign on child mortality and family behaviours after 32 months of campaigning.Implications of all the available evidence,The available evidence supports the view that mass media campaigns can lead to changes in some behaviours linked to child survival.However, some behaviours are likely to be less amenable to change than others.Furthermore, that some mass media campaigns can produce changes in behaviour should not be interpreted as meaning that any and every mass media campaign can change behaviour.The “dose” delivered and received by the target audience as well as the quality of the messages are likely to be key determinants of the effectiveness of any campaign.These findings have important policy implications, suggesting that saturation-based media campaigns should be prioritised by governments and belong in the mainstream of public health interventions.Behaviour change interventions encompass a wide range of approaches including interpersonal-based, community-based, media, and social marketing approaches.Compared with other
approaches, mass media campaigns have the potential to reach a large audience at relatively low cost.A recent review of evaluations of mass media interventions for child-survival-related behaviours done between 1960 and 2013 in low-income and middle-income countries concluded that so-called media-centric campaigns can positively affect a wide range of child health behaviours.7,Of the 32 evaluations that relied on moderate to stronger designs, all but six were reported to show some positive effects on behaviours.However, the researchers acknowledged likely publication bias towards successful campaigns.Additionally, all but six evaluated programmes included interpersonal communication components or implementation of community-based activities, but none could disentangle the effect of different components.To our knowledge, there have been no attempts to do a randomised controlled trial to test the effect of mass media on any health outcome in a low-income country.From March, 2012, to January, 2015, Development Media International implemented a comprehensive radio campaign to address key family behaviours for improving under-5 child survival in Burkina Faso.8, "In 2010, Burkina Faso ranked 161 of 169 countries in United Nations Development Programme's Human Development Index with 44% of the population living below the poverty line and 77% living in rural areas.9",The under-5 mortality rate was high, estimated at approximately 114 deaths per 1000 livebirths in 2010, with malaria, pneumonia, and diarrhoea the leading causes of child death.10,Burkina Faso was chosen both for its high child mortality before the campaign and its unique media landscape.We have previously reported on the coverage of the campaign and its effect on behaviours at midline.11,Here we report on the effect of the campaign on child mortality and behaviours at endline—ie, after 32 months of campaigning.The pre-intervention period was defined as the 2 years before the campaign, from March, 2010, to February, 2012.The post-intervention period was split into three periods: from March, 2012, to December, 2012, January, 2013, to October, 2013, and November, 2013, to October, 2014.Full pregnancy history data collected at the endline survey were used to calculate both pre-intervention and post-intervention cluster-level mortality estimates.Cluster-level estimates of post-neonatal under-5 child mortality and under-5 child mortality were computed using the Demographic and Health Survey synthetic cohort life-table approach.Missing months of birth were randomly imputed according to the DHS method.16,This method relies on the construction of logical ranges for each date, which are refined in three steps, resulting in successively narrower ranges.In the final step, months of birth are randomly imputed within the final constrained logical range.An analysis of covariance was performed on a log-risk scale to estimate the risk ratio for the effect of the intervention adjusted for pre-intervention mortality.The Wild bootstrap test, recommended when there are few clusters,17 was used to test for evidence of an intervention effect and for evidence of effect modification by post-intervention period.We did a cluster-level difference-in-difference analysis to assess the change from baseline to follow-up survey in self-reported behaviours.For each of the 17 target behaviours, cluster-level differences in prevalence from baseline to follow-up survey were calculated and regressed on the intervention status of clusters and the cluster-level baseline 
prevalence to account for regression to the mean.Wild bootstrap tests were done to test the null hypothesis of no intervention effect.No formal adjustment was made to account for multiple testing.Analyses of maternal and newborn related behaviours at midline and endline were restricted to pregnancies ending after June, 2012.At baseline, the mean post-neonatal under-5 mortality risk during the 2 years preceding the intervention was estimated at 112·3 per 1000 children in the intervention group versus 82·9 per 1000 children in the control group.Three covariates, expected to predict mortality, were particularly imbalanced between groups at baseline: the distance to the capital, Ouagadougou, as a proxy for general level of development; the median distance to the closest health facility; and the facility delivery prevalence.These covariates were combined using principal component analysis to produce a single cluster-level summary confounder score.After controlling for the confounder score, the pre-intervention mortality risk difference between groups estimated at baseline was reduced from 30·9 to 6·8 per 1000 children.To control for imbalance between groups, analyses of behaviour and mortality were adjusted for the confounder score.Three categories of radio ownership were defined: no radio in the compound, radio in the compound but not in the household, and radio in the household.The Wild bootstrap test was used to test for evidence of effect modification by radio ownership.With respect to care-seeking behaviours, three categories of distance to the closest health facility were also defined to look for evidence of effect modification by distance on service-dependent behaviours, using the same analysis as described above.In a first analysis, the absolute number of attendances was calculated by yearly period and by cluster.For each post-intervention period, the ratio of the absolute number of attendances over the absolute number in the year before the intervention was then calculated in each cluster, a mean ratio to baseline was then computed by group, and a Wild bootstrap test, adjusted for confounder score, was used to compare the mean ratio to baseline between groups.In addition, an interrupted time-series analysis was also done using mixed-effects Poisson regression of monthly counts of attendances per cluster, from January, 2011, to February, 2016.The model included fixed effects allowing for a long-term secular trend, for month of the year to account for seasonal variation, for intervention status of the cluster to account for systematic differences between groups at baseline, for confounder score, and for intervention effect by period, with cluster treated as a random effect.18,To obtain 95% CI and p values, we used bootstrap resampling.19,This trial is registered with ClinicalTrial.gov, number NCT01517230.The funders of the study had no role in study design, in the collection, analysis, and interpretation of data, in the writing of the report, and in the decision to submit the paper for publication.The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.Pregnancy histories were completed for 102 684 women aged 15 to 49 years at endline.At baseline and endline, respectively, the behavioural questionnaire was completed for 5043 and 5670 mothers of a child younger than 5 years and living with them across the 14 clusters.Baseline sociodemographic characteristics have been reported in detail 
elsewhere.7,Briefly, while many characteristics were similar across groups at baseline, there were some important differences with respect to ethnicity, religion, and distance to the closest health facility.There was little change in sociodemographic characteristics between surveys.Household radio ownership was similar in both groups at baseline, about 60%, and changed little at endline."Across surveys, women's radio listenership in the past week averaged 52% in the intervention clusters and 46% in the control clusters.In the intervention clusters, listenership, in the past week, to the radio station that was broadcasting the intervention varied from 74% in March, 2011, prior to the implementation, to 43% at endline, reflecting possible seasonal variation."Contamination was reported in one control cluster with, respectively at midline and endline, 33% and 37% of women in the Gayeri control cluster reporting having listened in the past week to the campaign's partner radio station in the neighbouring Bogande intervention cluster.At endline, 2269 of 2784 interviewed women in the intervention group reported recognising spots played at the end of the interview and 1968 reported listening to a long format programme.In the control group, around 20% of women also reported recognising spots and long format programmes."When asked on which radio station they listened to these broadcasts, 1782 of 2269 women mentioned DMI's radio partners in the intervention group compared with 208 of 606 women in the control group.Before the campaign, the post-neonatal under-5 child mortality risk was 93·3 per 1000 livebirths in the control group versus 125·1 per 1000 livebirths in the intervention group.The between-cluster coefficient of variation was 0·34 across all clusters.We recorded similar substantial decreases in risk in both groups over time, to 58·5 per 1000 livebirths in the control group versus 85·1 per 1000 livebirths in the intervention group during the last period.After controlling for pre-intervention mortality and confounder score, there was no evidence of an intervention effect across the intervention period.There was no suggestion that the effect of the intervention increased or decreased over time.Results were similar for under-5 child mortality.At baseline, most service-dependent behaviours tended to be reported more commonly in the control group than in the intervention group, while home-based behaviours were more similar between groups.In both groups, the proportion of children who had fever, fast or difficult breathing, or diarrhoea in the 2 weeks preceding the interview and who were reported to have received appropriate treatment was quite low at a third or less.We noted a similar low prevalence for early breastfeeding initiation and sanitation-related behaviours.Other home-based behaviours were more common, reported by about 40–60% of mothers.We previously reported some evidence, at midline, of an effect of the intervention on self-reported appropriate family responses to diarrhoea and fast or difficult breathing, and on saving during the pregnancy.7,At endline, the only self-reported behaviour for which there was some evidence of an intervention effect was saving during the pregnancy.For the other target behaviours, baseline prevalence and confounder-score-adjusted difference-in-differences ranged from −11·3% for recommended antimalarials for fever to 22·0% for breastfeeding initiation within 1 h after birth.Table 5 summarises the absolute numbers of attendances in primary facilities located 
in trial clusters for antenatal care consultations, deliveries, and under-5 consultations by group and time period.Figure 5 shows the same data by month.New antenatal care attendances remained relatively constant in both groups over the entire study period.Facility deliveries seemed to increase slightly in the intervention group while remaining relatively constant in the control group.Under-5 consultations increased in the first year of the intervention by 40% in the intervention group compared with 21% in the control group.In the second and third years of the intervention, the number of consultations remained steady in the control group and fell slightly in the intervention group.Despite a much larger increase in attendances in the intervention group, a simple analysis based on cluster-level summaries did not provide any statistical evidence for an intervention effect.Table 6 shows the estimates of the intervention effect by period computed from the interrupted time-series analysis.In the first year of the campaign, there were small increases in new antenatal care attendances and deliveries in the intervention group compared with the control group.We noted a substantial increase in under-5 consultations in the intervention group.In the second and third years, the estimated effect on new attendances to antenatal care and deliveries remained relatively constant, although without statistical evidence for the former in year 3.The effect on under-5 consultations seemed to decrease over time, but evidence of an intervention effect remained.There was no evidence that the effect of the intervention on post-neonatal under-5 child mortality varied with radio ownership after controlling for pre-intervention mortality and confounder score.With respect to self-reported behaviours, we only did tests for effect modification on care-seeking behaviours for childhood illness to investigate whether patterns were consistent with the routine health facility data.There was no evidence for effect modification by radio ownership after controlling for baseline prevalence and confounder score.We noted strong evidence that the effect of the intervention on self-reported care-seeking behaviours for childhood illness varied with distance to the closest health facility, with baseline prevalence and confounder-score-adjusted difference-in-differences of around 23% and 14% for families within 2 km and 2–5 km from a facility, respectively, compared with an estimated difference-in-difference of −13·7% among those living further away.We found no evidence of an effect of a mass media campaign on child mortality.This finding comes against a background of rapidly decreasing mortality in both groups, which will have reduced our power to detect an effect on mortality.The decrease in mortality we recorded is broadly consistent with estimates for Burkina Faso as a whole from the UN Inter-Agency Group for Child Mortality Estimation.Recent improvements in child survival could reflect changes in national health policies, in particular two rounds of free national distribution of insecticide-treated bednets, and the addition of the pneumococcal and rotavirus vaccines to the expanded programme for immunisation in 2013.However, routine health facility data did provide evidence of increased utilisation of health services in intervention clusters relative to control clusters, especially with respect to care seeking for childhood illness."Self-reported behaviours might have been over-reported due to socially desirable bias, especially in 
the intervention group as a consequence of DMI's campaign itself.Nevertheless, we observed some evidence of improved care seeking and treatment in the midline survey.9,Although no overall difference was apparent at the endline survey, the survey data are consistent with increased care seeking among families living within up to 5 km of a facility, with no effect at greater distances.With only a limited number of clusters available, a major limitation of our trial is that, despite randomisation, important differences between the intervention and control groups at baseline were not unlikely.14,The use of pre-intervention mortality estimated at baseline survey was precluded by the intervention timeframe, and we therefore used a pair-matched randomisation procedure based on geography and estimated radio listenership.Nevertheless, intervention communities had a different ethnic and religious mix, tended to live further away from health facilities, and experienced higher mortality than the control communities.We generated a confounder score to account for imbalance between groups, but cannot exclude the possibility of residual confounding.Furthermore, contamination of one of the control areas occurred due to an increase in the strength of the transmission signal of the neighbouring radio partner, above that permitted by the national authorities.However, excluding women living in villages where contamination occurred had little effect on the results."The DMI campaign seems to have reached a high proportion of the primary target population as a high proportion of mothers interviewed in the intervention group reported recognising DMI's spots and listening to the long format programmes.One in five women in the control clusters also reported recognising the spots or long format programme."Excluding the control cluster in which contamination occurred, only a few women mentioned one of DMI's radio partners when asked on which radio station they listened to these broadcasts, which could suggest courtesy bias or confusion with other radio programmes.In interpreting these results it should be considered that our survey data are likely to have much lower power than the facility data to detect a change in care seeking.While the survey data include 1000 or fewer sick children per group, the facility data record tens of thousands of consultations.However, both sources of data are prone to errors.Retrospective reporting of illness episodes and care seeking in surveys is known to have important limitations.We used a recall period of 2 weeks, as used in DHS, but it has been shown that recall of disease episodes tends to decline after a few days,20–24 as well as reporting of clinic visits.20,Thus, our population-based surveys almost certainly missed some episodes of recent illness.However, the routine facility records might also be subject to recording errors and come without precise and up-to-date denominator data.The population of Burkina Faso is estimated to be increasing by about 3% per year11 and it is therefore likely that the under-5 child population served by the facilities for which we have data was increasing over time.Interpretation of the observed differences between intervention and control groups as being attributable to the intervention requires the assumption that any increases in the underlying populations served by the facilities were of similar magnitude in both groups.However, we have no reason to believe that population growth differed between groups.The facility data suggest a large increase 
in under-5 consultations in the intervention group in the first year of the intervention.The estimated impacts in subsequent years are smaller.While this apparent decline could be a chance finding, it might reflect attenuation in the effect of the intervention.In Burkina Faso, in-depth interviews with health workers and patients have revealed low satisfaction with the quality of care in public facilities.25,26,The low use of and dissatisfaction with community-based insurance in northwest Burkina Faso has been attributed, in part, to the suboptimal quality of care provided, including poor health worker attitudes and behaviours.27,In the same area, Mugisha and colleagues found that, while many factors influence initiation of the demand for services, only perceived quality of care predicted “retention” in modern health-care services.28,They concluded that increasing patient initiation and patient retention require different interventions and that the latter should focus on improving the perceived quality of care.A possible, admittedly speculative, explanation for our findings is that women were initially encouraged by the campaign to take their children to a facility, but that poor perceived quality of care may have discouraged some from returning for subsequent illnesses.Our findings showed no effect of the campaign on self-reported habitual behaviours, such as child feeding practices, handwashing, and child stool disposal practices."The campaign's broadcasts were heavily weighted to care seeking rather than home-based behaviours, and as we have discussed previously, it might be harder to achieve sustained changes in habitual behaviours that need to be performed daily with little obvious immediate benefit, than for behaviours that are only performed occasionally and for which some immediate benefit may be perceived.9",The confidence intervals for the effect of the intervention on habitual behaviours are wide and do not preclude modest but important changes in these behaviours.While we detected evidence that the intervention was associated with an increase in care-seeking in facilities we did not detect any evidence of a reduction in mortality.There are several possible explanations for this apparent inconsistency.First, our mortality data do not exclude the possibility of an impact on mortality with the lower bounds of the 95% confidence interval for the mortality risk ratio compatible with an important reduction in mortality.The impact of the campaign on child mortality has been modelled using the Lives Saved Tool and showed an estimated 8% reduction in the first year, and 5% reduction in the second and third years.In addition, mortality at baseline differed between the two groups despite randomisation.Although we adjusted for pre-intervention mortality risk and a confounder score, which performed reasonably well in explaining the baseline mortality imbalance, we cannot exclude the possibility of residual confounding which might have masked an intervention effect.Second, while the numbers of consultations with diagnoses of malaria, pneumonia and diarrhoea, three of the leading causes of child death in Burkina Faso, all increased, we have no data on the severity of the episodes for which children were taken to facilities.If most of the increase in consultations was due to children with mild self-limiting illness, then limited impact on mortality might be expected.In some parts of Burkina Faso a preference for traditional care has been reported for some severe manifestations of illness, such 
as cerebral malaria.29,30,Third, if the quality of care received at the facility was low, this could limit any mortality reduction through increased care seeking.An evaluation of the quality of care at health facilities for children under-5, conducted in 2011 in two regions in the north of Burkina Faso, found that on average only six of ten tasks that should be performed as part of IMCI were performed.31,Only 28% of children were checked for three danger signs, and 40% of children judged to require referral by an Integrated Management of Childhood Illness expert were referred by the health worker.In addition, the 2010 DHS indicated that among children in rural areas who were taken to a public primary health facility, 54% of those with fever received an antimalarial, 35% of those with diarrhoea received oral rehydration solution, and 77% of those with cough and fast or difficult breathing received an antibiotic.Fourth, mortality data were collected by interviewing women about their pregnancy histories.Such data are subject to measurement errors.We did a number of checks on the data, similar to those routinely performed by DHS.Apart from heaping of deaths at age 12 months, which occurred to a similar degree in both groups and should not have affected the under-5 mortality estimates, these analyses did not identify any major concerns.The cluster-level estimates of mortality risk at baseline correlated well with subnational estimates from the 2010 DHS and the time trend in mortality is broadly consistent with that estimated by the UN Inter-agency Group for Child Mortality Estimation."In summary, there is evidence that DMI's campaign led to increased use of health facilities, especially by sick children.However, we noted no effect of the campaign on child mortality.The small number of clusters available for randomisation together with the substantial between-cluster heterogeneity at baseline, and rapidly decreasing mortality, limited the power of the study to detect modest changes in behaviour or mortality.Caution should be exercised in interpreting these results since, despite randomisation, there were important differences between intervention and control clusters at baseline.Nevertheless, this study provides some of the best evidence available that a mass media campaign alone can increase health facility utilisation for maternal and child health in a low-income, rural setting.This study was done using a repeated cross-sectional cluster randomised design by an independent team from the London School of Hygiene & Tropical Medicine and Centre Muraz in Burkina Faso.The intervention and evaluation design have been described previously.8–12,Briefly, women of reproductive age and caregivers of children younger than 5 years were the main targets of the campaign, which covered 17 behaviours along the continuum of care."Women were told the surveys were about their children's health, without any mention of the radio campaign, and they recorded their consent to participate in the survey on a Personal Digital Assistant.The study was approved by the ethics committees of the Ministry of Health of Burkina Faso and the London School of Hygiene & Tropical Medicine.Of 19 distinct geographical areas, 14, each centred around a community FM radio station, were selected by DMI based on their high listenership and minimum distances between radio stations to exclude population-level contamination.For evaluation purposes, clusters around each radio station were identified using the last national census to provide an 
evaluation population of about 40 000 inhabitants per cluster.We included villages located around the selected community radio station, with a good radio signal but limited access to television.We did this by excluding communities likely to be served by the electricity grid—ie, the towns from which the selected radio stations were broadcast, villages within 5 km of these towns, other villages with electricity or with a population larger than 5000 inhabitants.Seven clusters were then randomly allocated to receive the intervention or control using pair-matched randomisation based on geography and radio listenership.Specifically, we defined three radio listenership strata, and within each stratum we paired the areas geographically closest with each other, one of which was randomly assigned to receive the intervention.Randomisation was done by SS and SC, independently of DMI.The randomisation sequence was generated using computer-generated random numbers.Because of time constraints, randomisation was done before the baseline survey.The nature of the intervention precluded formal masking of respondents and interviewers.The average number of villages included per cluster was 34 in the control group and 29 in the intervention group.In all clusters the government was the main health service provider and, with the exception of Kantchari, a regional or district hospital was located in the town with the community radio station.The trial population also had access to primary health facilities in villages across each cluster."DMI's radio campaign launched in March, 2012, and ended in January, 2015.A description of the theory of change and the Saturation+ methodology used to design and implement the campaign is provided elsewhere.12,Short spots, of 1 min duration, were broadcast approximately ten times per day, and 2 h, interactive long-format programmes were broadcast 5 days per week.All materials were produced in the predominant local languages spoken in each intervention cluster.The dramas were based on message briefs that DMI drew up for each target behaviour.The long-format programmes were followed by phone-ins to allow listeners to comment on the issues raised.Behaviours covered by spots changed weekly, while the long-format programme covered two behaviours a day and changed daily.Table 1 shows the campaign resources allocated to each of the target behaviours.During the trial period, no other radio campaigns related to child survival and of comparable intensity were broadcast in any of the clusters included in the trial.Various health programmes operated in similar numbers of clusters per group.From 2010 to 2013, community case management for malaria, pneumonia, and diarrhoea was supported by the Catalytic Initiative to Save a Million Lives in one of the intervention clusters and one of the control clusters, although the independent evaluation of this rapid scale-up programme concluded that it did not result in changes in coverage or mortality.13,Cross-sectional household surveys were performed in all clusters at three time points: at baseline, from December, 2011, to February, 2012; at midline, in November, 2013, after 20 months of campaigning; and at endline, from November, 2014 to March, 2015, after 32 months of campaigning.At baseline and endline surveys, a census of villages selected for the survey was performed with GPS coordinates recorded.For all households with at least one woman aged 15–49 years, the household head was interviewed to collect socioeconomic data and all women aged 15–49 
years were interviewed on their pregnancy history.At baseline, due to time and cost constraints, the census took place in a simple random sample of villages covering half of the population in each cluster and pregnancy history data collection was truncated to cover the period from January, 2005, to the date of the interview.At endline, the census took place in all villages included in the trial and a full pregnancy history was recorded.At each survey, about 5000 mothers with at least one child younger than 5 years living with them were interviewed regarding their demographic characteristics, radio listenership, and family behaviours of relevance to child survival.9,To test recognition of the campaign at midline and endline, the two spots broadcast in the last 2 weeks of the previous month were played at the end of the interview and women were asked whether they had heard them on the radio.With respect to long format programmes, recognition was tested by referring to its title.At baseline and endline, mothers for the behavioural interviews were selected using systematic random sampling of all women interviewed about their pregnancy history.At midline, a two-stage sampling procedure was used.9, "Before each survey, fieldworkers received 2 weeks' training.At baseline and endline, 84 fieldworkers were deployed across clusters in teams of six fieldworkers.At midline, 56 fieldworkers were deployed in teams of four fieldworkers.Each team was managed by a supervisor.Interviews were performed using Trimble Juno SB Personal Digital Assistants using Pendragon forms software.Data were backed up twice a week by a team of seven data managers and checked for consistency and completeness.Re-interviews were requested in cases of missing or inconsistent responses.The trial was designed to detect a 20% reduction in the primary outcome with a power of 80%.8,We assumed a baseline mortality rate of 25 per 1000 per year, a coefficient of variation between clusters of 0·18, that mortality would decline in all clusters by 5% over the course of the study, and that the analysis would be based on cluster-level summaries with adjustment for pre-intervention mortality.Simulations indicated that, given a total of 14 clusters, a sample size of 7000 under-5 children per cluster would be required.The sample size of 5000 mothers was calculated assuming a design effect of 2 with a view to providing an absolute precision of within 10% or better for all behaviours.Routine health facility data from January, 2011, to February, 2016, were obtained from the Direction Générale des Etudes et des Statistiques Sanitaires of the Ministry of Health.For 78 primary health facilities located in trial clusters, monthly numbers were provided for: pregnant women attending for a first antenatal consultation, facility deliveries, and all-cause under-5 child consultations.The primary outcome was all-cause post-neonatal under-5 child mortality, the secondary outcome was all-cause under-5 child mortality, and intermediate outcomes included the coverage of the campaign and family behaviours targeted by the campaign as listed in table 1."Primary analyses were performed on an intention-to-treat basis, and followed an analysis plan agreed in advance with the trial's Independent Scientific Advisory Committee.All analyses were performed on cluster-level summary measures14 and adjusted for pre-intervention levels to control for imbalances between groups and improve precision.The matching procedure was ignored, as recommended for trials with fewer than 
ten clusters per group.15,All clusters were given equal weight in all analyses.All analyses were done with Stata.
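The mortality and behaviour analyses described above are cluster-level regressions: an analysis of covariance of post-intervention mortality on a log-risk scale adjusted for pre-intervention mortality and the confounder score, and a difference-in-differences regression of the change in behaviour prevalence adjusted for baseline prevalence and the confounder score. The Python sketch below illustrates one way such cluster-level analyses can be set up with statsmodels; the column names and toy data are assumptions for illustration only, the trial itself used Stata, and the Wild bootstrap tests used for inference are not reproduced here.

# Illustrative cluster-level analyses in the spirit of the methods described above.
# Toy data and column names are assumptions; Wild bootstrap inference is omitted.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 14  # 7 intervention + 7 control clusters
df = pd.DataFrame({
    "intervention": [1] * 7 + [0] * 7,
    "pre_risk": rng.uniform(0.08, 0.13, n),    # pre-intervention mortality risk
    "post_risk": rng.uniform(0.05, 0.09, n),   # post-intervention mortality risk
    "confounder": rng.normal(0.0, 1.0, n),     # cluster-level summary confounder score
    "prev_base": rng.uniform(0.2, 0.5, n),     # baseline behaviour prevalence
    "prev_end": rng.uniform(0.2, 0.6, n),      # endline behaviour prevalence
})

# 1) ANCOVA on the log-risk scale: exp(coefficient) is the adjusted risk ratio.
df["log_post"] = np.log(df["post_risk"])
df["log_pre"] = np.log(df["pre_risk"])
ancova = smf.ols("log_post ~ intervention + log_pre + confounder", data=df).fit()
print("Adjusted risk ratio:", float(np.exp(ancova.params["intervention"])))

# 2) Difference-in-differences for a behaviour, adjusted for baseline prevalence
#    (to allow for regression to the mean) and the confounder score.
df["diff"] = df["prev_end"] - df["prev_base"]
did = smf.ols("diff ~ intervention + prev_base + confounder", data=df).fit()
print("Adjusted difference-in-differences:", float(did.params["intervention"]))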
Background: Media campaigns can potentially reach a large audience at relatively low cost but, to our knowledge, no randomised controlled trials have assessed their effect on a health outcome in a low-income country. We aimed to assess the effect of a radio campaign addressing family behaviours on all-cause post-neonatal under-5 child mortality in rural Burkina Faso. Methods: In this repeated cross-sectional, cluster randomised trial, clusters (distinct geographical areas in rural Burkina Faso with at least 40 000 inhabitants) were selected by Development Media International based on their high radio listenership (>60% of women listening to the radio in the past week) and minimum distances between radio stations to exclude population-level contamination. Clusters were randomly allocated to receive the intervention (a comprehensive radio campaign) or control group (no radio media campaign). Household surveys were performed at baseline (from December, 2011, to February, 2012), midline (in November, 2013, and after 20 months of campaigning), and endline (from November, 2014, to March, 2015, after 32 months of campaigning). Primary analyses were done on an intention-to-treat basis, based on cluster-level summaries and adjusted for imbalances between groups at baseline. The primary outcome was all-cause post-neonatal under-5 child mortality. The trial was designed to detect a 20% reduction in the primary outcome with a power of 80%. Routine data from health facilities were also analysed for evidence of changes in use and these data had high statistical power. The indicators measured were new antenatal care attendances, facility deliveries, and under-5 consultations. This trial is registered with ClinicalTrial.gov, number NCT01517230. Findings: The intervention ran from March, 2012, to January, 2015. 14 clusters were selected and randomly assigned to the intervention group (n=7) or the control group (n=7). The average number of villages included per cluster was 34 in the control group and 29 in the intervention group. 2269 (82%) of 2784 women in the intervention group reported recognising the campaign's radio spots at endline. Post-neonatal under-5 child mortality decreased from 93.3 to 58.5 per 1000 livebirths in the control group and from 125.1 to 85.1 per 1000 livebirths in the intervention group. There was no evidence of an intervention effect (risk ratio 1.00, 95% CI 0.82–1.22; p>0.999). In the first year of the intervention, under-5 consultations increased from 68 681 to 83 022 in the control group and from 79 852 to 111 758 in the intervention group. The intervention effect using interrupted time-series analysis was 35% (95% CI 20–51; p<0.0001). New antenatal care attendances decreased from 13 129 to 12 997 in the control group and increased from 19 658 to 20 202 in the intervention group in the first year (intervention effect 6%, 95% CI 2–10; p=0.004). Deliveries in health facilities decreased from 10 598 to 10 533 in the control group and increased from 12 155 to 12 902 in the intervention group in the first year (intervention effect 7%, 95% CI 2–11; p=0.004). Interpretation: A comprehensive radio campaign had no detectable effect on child mortality. Substantial decreases in child mortality were observed in both groups over the intervention period, reducing our ability to detect an effect. This, nevertheless, represents the first randomised controlled trial to show that mass media alone can change health-seeking behaviours. Funding: Wellcome Trust and Planet Wheeler Foundation.
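For the routine facility data, the interrupted time-series analysis described above used a mixed-effects Poisson regression of monthly attendance counts per cluster, with terms for secular trend, calendar month, baseline group differences, the confounder score and an intervention effect by period. The sketch below is one hedged way to approximate that model in Python, using a GEE Poisson model with exchangeable within-cluster correlation as a stand-in for the mixed-effects model; all column names and simulated counts are illustrative assumptions, not the trial's actual data or code.

# Interrupted time-series style model for monthly facility attendances, approximating
# the mixed-effects Poisson regression described above with a GEE Poisson model.
# Simulated data and variable names are assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
months = pd.date_range("2011-01-01", "2016-02-01", freq="MS")
rows = []
for cluster in range(14):
    intervention = int(cluster < 7)                        # 7 intervention, 7 control clusters
    for t, date in enumerate(months):
        period = int(date >= pd.Timestamp("2012-03-01"))   # campaign started March, 2012
        mu = 400 * np.exp(0.002 * t + 0.2 * intervention * period)
        rows.append({"cluster": cluster, "intervention": intervention, "trend": t,
                     "month": date.month, "period": period, "count": rng.poisson(mu)})
df = pd.DataFrame(rows)

model = smf.gee("count ~ trend + C(month) + intervention + period + intervention:period",
                groups="cluster", data=df,
                family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
# exp(coefficient) of the interaction term is the rate ratio attributable to the campaign.
print("Estimated rate ratio:", float(np.exp(result.params["intervention:period"])))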
710
10 years of 25-hydroxyvitamin-D testing by LC-MS/MS: trends in vitamin-D deficiency and sufficiency
In 2015, we reported a pediatric case of vitamin-D intoxication with vitamin-D3 supplements.Briefly, a four-month-old infant was admitted to Mayo Clinic for significant dehydration, lethargy, and weight loss.A routine blood chemistry panel revealed a total calcium level of 18.7 mg/dL, indicating severe hypercalcemia.PTH levels were suppressed and serum phosphorus was 1.9 mg/dL (normal range: 2.5–4.5 mg/dL).Further medical evaluation revealed hypercalciuria and nephrocalcinosis.During the discussion with the mother it was discovered that in the previous two months the infant had been receiving a daily dosage of oral vitamin-D3 supplementation that was greater than the manufacturer's recommendation.It was estimated that the baby was receiving ~50,000 IU of vitamin-D per day while the recommended dose on the label was 2,000 IU.It was also determined that the actual amount of vitamin-D per drop was 6,000 IU, 3-fold higher than stated on the label.Vitamin-D metabolites were tested in the infant's blood and were as follows: 25D3, 293 ng/mL (reference range for 25D: 20–50 ng/mL); 1,25(OH)2D3, 138–111 pg/mL; the ratio of 24,25(OH)2D3 to 25D3 was 0.11–0.14, suggesting normal CYP24A1 function.The baby was treated with fluids and calcitonin to reverse the hypercalcemia and lower the vitamin-D levels, and both biomarkers reached normal levels within 3 months.In 2013, Kara et al. also described vitamin-D toxicity in children, but in those cases the error was exclusively caused by the manufacturer producing fish oil supplements with a concentration of vitamin-D that was 4,000 times the concentration stated on the label.Seven children, below the age of 4.2 years, were admitted to the hospital due to significant hypercalcemia.Serum phosphorus and PTH levels were found to be in the normal range.All children presented with similar clinical manifestations: vomiting, dehydration, constipation and weight loss.The median concentration of serum 25D was 620 ng/mL.It was later estimated that the children were receiving a daily vitamin-D3 dosage of between 266,000 IU and 8,000,000 IU, which was significantly above the normal recommendation.Calcium levels in all children were corrected within 3 days and vitamin-D levels normalized within 3 months.In 2015, Kaur et al.
described 16 patients who overdosed with vitamin-D supplements to treat their vitamin-D deficiencies.All patients presented with similar symptoms of vitamin-D toxicity: vomiting, weight loss, nausea and constipation.Upon admission, patients were noted to have a median serum calcium level of 13.0 mg/dL, a median serum 25D of 371 ng/mL and normal phosphorus and PTH results.It was determined that patients were taking ~77,000 IU of vitamin-D3 each day over a period of 4–7 weeks and this mega-dose resulted in vitamin-D toxicity in all patients.Vitamin-D and parathyroid hormone are the principal regulators of calcium homeostasis in all tetrapods and play an important role in bone metabolism.Vitamin-D exists in two major forms, vitamin-D2 and vitamin-D3.Both are formed by UV irradiation of either 7-dehydroergosterol or 7-dehydrocholesterol.The two forms differ only in the substitution on the side chain.It is believed that vitamin-D2 first emerged in phytoplankton about 750 million years ago and served as the vitamin-D source for marine vertebrates until the transition to terrestrial life occurred and vitamin-D3 production commenced in the skin of tetrapods.In modern humans, primary sources of vitamin-D include diet or supplements and vitamin-D3 derived from 7-dehydrocholesterol in the skin after exposure to UV light.Endogenous or dietary vitamin-D binds to vitamin-D binding protein and is transported to the liver, where it is hydroxylated at the carbon 25-position, creating 25-hydroxyvitamin-D (25D).25D is the most abundant circulating form of vitamin-D.Vitamin-D and 25D have no established bioactivity and can be viewed as pro-hormones.25-hydroxyvitamin-D-1α-hydroxylase (CYP27B1) in the kidneys converts 25D to the active hormone, 1,25-dihydroxyvitamin-D (1,25(OH)2D).1,25(OH)2D belongs to the superfamily of thyroid- and steroid hormones, and exerts its effects through changes in gene transcription.It binds to a nuclear receptor (the vitamin-D receptor, VDR) that dimerizes with the retinoid X receptor (RXR) before binding to gene regulatory DNA elements.There are between 2000 and 8000 VDR response elements in the human genome.The transcriptional response depends on the number of available VDR and RXR, the concentrations of their respective ligands, the nature of the response element, the availability of co-factors and the transcriptional accessibility of the respective genes.Consequently, the actions of 1,25(OH)2D across various tissues of an entire animal are complex and in most cases require detailed study to detect clinically relevant changes.The areas that are exceptions to this rule are calcium and phosphate metabolism and bone metabolism.The obvious principal role of 1,25(OH)2D is to increase intestinal calcium absorption, to increase calcium and phosphate reabsorption in the kidneys, to increase calcium release from bones in concert with PTH, and to downregulate PTH production.The activity of CYP27B1 in turn is regulated by PTH and other factors including calcium demand.Due to these feedback mechanisms, production of 1,25(OH)2D remains constant over a wide range of serum 25D concentrations, with excess 25D and 1,25(OH)2D being converted to the inactive metabolites 24,25(OH)2D and 1,24,25(OH)3D by 25-hydroxyvitamin-D-24-hydroxylase (CYP24A1).Additional, independent inactivation is catalyzed by 3-epimerase, which isomerizes the C-3 OH group of 25D and 1,25(OH)2D from the α to the β orientation, reducing bioactivity.Since vitamin-D3 and vitamin-D2 are either stored in adipose tissue or rapidly metabolized to the corresponding 25-hydroxylated metabolites, their serum levels fluctuate widely and there is no clinical value in
monitoring these forms of vitamin-D in the circulation.Among the >40 vitamin-D metabolites discovered so far, three have been shown to be the most clinically relevant: 25(OH)D, 1,25(OH)2D and 24,25(OH)2D.25(OH)D serum concentrations are useful for assessing vitamin-D reserves.25(OH)D levels increase steadily upon exposure of skin to UV-containing light or after consuming supplements containing cholecalciferol or ergocalciferol.In recent times, many investigators have explored associations between vitamin-D metabolism and cardiovascular disease, obesity, cancer, and autoimmune diseases; however, no clear recommendations for clinical interventions have yet emerged from these studies.By contrast, studies of bone health and 25(OH)D levels have at least in part resulted in clinically useful, albeit not always uncontested, conclusions.One study, which investigated the effect of calcium and vitamin-D supplementation on bone density in men and women older than 65 years, found a moderate reduction in bone loss in the femoral neck, spine, and total body over the three-year study period and a reduced incidence of non-vertebral fractures in the group supplemented with 500 mg of calcium plus 700 IU of vitamin-D3 per day compared to placebo.Vitamin-D supplementation was also shown to reduce the risk of falls by >20% among ambulatory or institutionalized older individuals with stable health.Benefits of vitamin-D supplementation on fracture prevention are related to its effect on intestinal calcium absorption and bone mineral density.However, in the years that followed, the findings for falls and fractures in the elderly were mixed when supplementation with intermittent single large doses of vitamin-D was examined in randomized controlled trials.A meta-analysis of nine RCTs found that supplementation with intermittent, high-dose vitamin-D might not be effective in preventing overall mortality, fractures, or falls among older adults.The paradoxical increase in fracture risk in some of the reviewed studies has been hypothesized to be caused by an up-regulation of the CYP24A1 enzyme and an increased clearance of 1,25(OH)2D.The Institute of Medicine has recommended that, at the low end, a serum 25(OH)D level of at least 20 ng/mL is sufficient for 97.5% of the population for effective prevention of bone disease and fractures, while a level of 50 ng/mL is considered a safe upper cut-off for a healthy population.Serum 25(OH)D levels <20 ng/mL represent deficiency.For every 100 IU of vitamin-D supplement administered, 25(OH)D levels rise by 0.5 to 1 ng/mL.Deficiency of 25(OH)D can cause bone pain and muscle weakness, and in extreme cases osteomalacia in adults and rickets in children.However, mild deficiency may not necessarily be associated with overt symptoms.On the other end of the spectrum, sustained levels of 25(OH)D > 50 ng/mL might lead to hypercalciuria, stone formation and ultimately decreased renal function.Frank vitamin-D toxicity might occur at even higher levels and is characterized biochemically by hypercalcemia, hyperphosphatemia, suppressed serum PTH concentrations, and markedly elevated 25(OH)D levels.Clinical manifestations of severe toxicity include vomiting, nausea, abdominal pain, fatigue, and weakness, and sometimes rapidly developing nephrocalcinosis.The tight regulation of 1,25(OH)2D production means that its levels fluctuate little.Except in cases of extreme vitamin-D deficiency or toxicity, 1,25(OH)2D serum concentrations typically remain within 22–65 pg/mL.Therefore, serum 1,25(OH)2D measurements are in the main only indicated when there is
suspicion of unregulated conversion of 25(OH)D to 1,25(OH)2D, as might be seen in some granulomatous diseases, or if patients have advanced renal impairment with or without 1,25(OH)2D replacement therapy.A combination of measurements of serum 25(OH)D, 1,25(OH)2D, 24,25(OH)2D and C3-epi-25(OH)D might be necessary in the differential diagnosis of acquired, iatrogenic and genetic causes of non-PTH-driven hypercalcemia.In recent years, 24,25(OH)2D has received a lot of attention in this context.It is a marker for CYP24A1 function when used in conjunction with 25(OH)D measurements.In normal individuals, 24,25(OH)2D is 7–35% of the total 25(OH)D.A 25(OH)D/24,25(OH)2D ratio of 99 or greater is indicative of CYP24A1 deficiency.Once the cause of vitamin-D toxicity has been established, serum vitamin-D levels might need to be monitored until they fall to about 30–50 ng/mL.Methodologies for 25(OH)D measurements include high-performance liquid chromatography (HPLC), radioimmunoassay (RIA), automated immunoassays and liquid chromatography-tandem mass spectrometry (LC-MS/MS), while current 1,25(OH)2D and 24,25(OH)2D measurements involve RIAs or LC-MS/MS.Analytical challenges have been reported for all of these methods, but currently measurement of vitamin-D compounds by HPLC with MS/MS detection has been established as the gold standard for vitamin-D metabolite testing.Despite this, automated immunoassays are the mainstay for the majority of high-volume analytes in clinical laboratories.They offer high throughput and automated sample handling and require minimal manual labor.Consequently, 90% of routine 25(OH)D analyses today are performed by automated immunoassay.However, most automated immunoassays suffer from the inherent narrow dynamic range and specificity limitations of competitive immunoassays.The former results in these assays frequently either underestimating or overestimating 25(OH)D concentrations at both the low and the high end of their measurement range, i.e.
precisely at the concentrations where accuracy would be most important, while the latter manifests itself as unequal affinity for 25(OH)D2 versus 25(OH)D3, and occasional interferences.In concert, these issues also result in significant differences between the results of different immunoassays.The discordance between 25(OH)D values from different assays is magnified by differences in the standardization of each assay.LC-MS/MS overcomes these issues and has become widely accepted for routine use for many low-molecular-weight analytes in clinical laboratories due to improved analytical specificity and sensitivity and a wider dynamic range compared to immunoassay methods.LC-MS/MS vitamin-D assays offer better accuracy at medical decision levels to correctly classify patients as vitamin-D deficient or sufficient.Several LC-MS/MS methods for measurement of all clinically relevant metabolites of vitamin-D, including 25(OH)D, 1,25(OH)2D and 24,25(OH)2D, have been reported.However, although LC-MS/MS is considered the gold standard for 25(OH)D testing, according to the Accuracy-Based Vitamin-D 2016 Survey, only 74 out of 364 US laboratories used LC-MS/MS for 25(OH)D testing.Initial capital investment, the labor- and time-intensive nature of development and implementation of clinical LC-MS/MS assays, and slower turnaround time might be the key impediments to global adoption of LC-MS/MS for vitamin-D metabolite quantitation.The College of American Pathologists and Vitamin-D External Quality Assessment Scheme surveys are used to monitor the performance of laboratories using various methods for testing of 25(OH)D.The survey feedback does not assess the accuracy of 25(OH)D measurements by laboratories, but scores laboratories for agreement within the group using a particular method.Additionally, lack of standardization has been recognized as a challenge in steroid hormone testing.The Centers for Disease Control and Prevention has established a Vitamin-D Standardization Certification Program (VDSCP) focused on providing reference measurements for 25(OH)D, to assess the accuracy and precision of vitamin-D tests, to monitor their performance over time, and to provide technical support to external quality assurance programs, proficiency testing programs, and research studies.A recent VDSCP study established core performance criteria, namely CV ≤ 10% and mean bias ≤5%, for 25(OH)D quantitation.Inter-laboratory performance of 25(OH)D measurement was compared by providing a set of 50 individual donor samples to 15 laboratories representing national nutrition survey laboratories, assay manufacturers, and clinical or research laboratories.Samples were analyzed using immunoassays and LC-MS/MS.All LC-MS/MS results achieved the VDSCP criteria, whereas only 50% of immunoassays met the criterion for a ≤10% CV and only three of eight immunoassays achieved the ≤5% bias.Perhaps some of these issues might be addressed with the availability of National Institute of Standards and Technology standard reference materials, but even then the inherent problems of immunoassays will likely continue to negatively impact vitamin-D laboratory testing quality.During the first decade of the 21st century, vitamin-D deficiency was shown to be highly prevalent in the USA.In the years that followed, several studies echoing these findings were reported.Consequently, the current decade has witnessed an increased public awareness of vitamin-D supplementation.The clinical laboratory at the Mayo Clinic in Rochester, MN is a referral laboratory for a vast network of clinical providers in the US.During
the 2007–2017 period, ~5,000,000 patient samples were tested for 25(OH)D by LC-MS/MS in our laboratory.Patient results were categorized into the following groups: <10 ng/mL, 10–24 ng/mL, 25–80 ng/mL and >80 ng/mL.Frequencies were calculated using the formula ((number of patients in a given 25(OH)D category / total number of patients tested in the corresponding week) × 100), in order to gauge whether there have been any changes in the frequency of 25(OH)D results in the four categories over time.The frequency of patients in each serum 25(OH)D category is shown in Fig. 2.Seasonal variation, as a reflection of the amount of sunlight to which a person is exposed, was observed in the <10 ng/mL, 10–24 ng/mL and 25–80 ng/mL categories.As expected, serum concentrations of 25(OH)D were highest in late summer and lowest in spring.At the end of summer of 2006, 4.3% of the population being tested had serum 25(OH)D levels <10 ng/mL.This number increased to 8.5% by the end of winter of 2007.After 10 years, a significantly lower percentage of the population had serum 25(OH)D levels <10 ng/mL.Similarly, the percentage of patients with 25(OH)D in the 10–24 ng/mL range decreased steadily between 2007 and 2017.By contrast, the percentage of patients with 25(OH)D levels of 25–80 ng/mL increased from 72.5% to 82.4% post-summer and from 60.6% to 72.9% post-winter during the ten-year period.Of note, the percentage of patients with levels >80 ng/mL remained constant, without seasonal variation, until the end of 2012.Since then, it has been slowly increasing to 2–2.5%, probably due to an increased awareness of vitamin-D deficiency and an increased use of over-the-counter supplements and prescriptions of high-dose vitamin-D.Given that our 25(OH)D assay is a laboratory-developed test, we have verified that there were neither shifts in the calibration over time, nor biases or trends in quality controls, nor biases due to reagent lot-to-lot changes.Based on our data obtained with this assay of demonstrably stable analytical performance, it appears that vitamin-D supplementation is driving improvements in population 25(OH)D levels.Our findings are consistent with a recent study, which also found modest increases in serum 25(OH)D concentrations in the US population from 1988 to 2010.The observation that serum 25(OH)D levels in the general US population have increased during the last decade has important implications.Population-based and basic science studies exploring the role of vitamin-D metabolism in health and disease pathways have raised public awareness about effective modes of vitamin-D supplementation.Of note, the increase in the percentage of patients who have circulating 25(OH)D concentrations of >80 ng/mL warrants further attention.Long-term physiological effects of persistently elevated 25(OH)D may need to be investigated in clinical studies.Over the last decade, the frequency of measurements of 25(OH)D in the healthy population has significantly increased due to an increased awareness of vitamin-D deficiency and its potential association with many diseases beyond its role in maintaining bone health.This has also resulted in an increased demand for vitamin-D metabolite measurements.Quantification of 25(OH)D in serum is the best indicator of the vitamin-D status of individuals.25(OH)D accurately reflects the body's vitamin-D stores.At present, LC-MS/MS assays offer the best accuracy for vitamin-D metabolite analysis.Accurately measuring vitamin-D metabolite levels is important to classify patients as vitamin-D deficient so that appropriate supplementation can be recommended.Monitoring patients who receive high-dose vitamin-D supplementation is
also of clinical value.Our retrospective study has shown seasonal changes in 25(OH)D as well as an overall change in the frequency of patients with vitamin-D deficiency and sufficiency.Of note, our data show that the frequency of patients with 25(OH)D deficiency decreased over the last decade.The flip side of this is that the percentage of the patients undergoing 25(OH)D testing who have serum 25(OH)D > 80 ng/mL has slowly increased during that time.While the proportion of patients with potentially toxic levels remains relatively low, the consequences of toxicity can be severe, and prospective population studies to investigate the clinical impact and long-term safety of higher circulating levels of 25(OH)D might be warranted.I have no conflict of interest for the data in this publication.
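To make the weekly frequency calculation described above concrete, the following is a minimal sketch of how 25(OH)D results could be binned into the four reported categories and converted into weekly percentages. It is not the authors' analysis code; the column names, the pandas-based implementation, and the handling of the category boundaries are assumptions for illustration.

    import pandas as pd

    # Category edges reported in the study: <10, 10-24, 25-80, >80 ng/mL
    # (boundary handling is approximate in this sketch).
    BINS = [-float("inf"), 10, 25, 80, float("inf")]
    LABELS = ["<10 ng/mL", "10-24 ng/mL", "25-80 ng/mL", ">80 ng/mL"]

    def weekly_category_frequencies(results: pd.DataFrame) -> pd.DataFrame:
        """Percentage of results in each 25(OH)D category per week.

        `results` is assumed to have a datetime column 'collection_date' and a
        numeric column 'd25_ng_ml' with the measured 25(OH)D concentration.
        """
        df = results.copy()
        df["category"] = pd.cut(df["d25_ng_ml"], bins=BINS, labels=LABELS, right=False)
        df["week"] = df["collection_date"].dt.to_period("W")
        counts = df.groupby(["week", "category"]).size().unstack(fill_value=0)
        # (patients in category / total tested that week) * 100
        return counts.div(counts.sum(axis=1), axis=0) * 100

    # Hypothetical usage:
    # freqs = weekly_category_frequencies(
    #     pd.read_csv("d25_results.csv", parse_dates=["collection_date"]))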
In the early 2000s, vitamin-D deficiency was shown to be prevalent in several countries including the United States (US). Studies exploring the role of vitamin-D metabolism in diverse disease pathways generated an increased demand for vitamin-D supplementation and an immense public interest in measurement of vitamin-D metabolite levels. In this report, we review the role of vitamin-D metabolism in disease processes, the clinical utility of measuring vitamin-D metabolites including 25-hydroxyvitamin-D (25(OH)D), 1,25-dihydroxyvitamin-D and 24,25-dihydroxyvitamin-D, and discuss vitamin-D assay methodologies including immunoassays and liquid chromatography-tandem mass spectrometry (LC-MS/MS) assays. We also provide examples of vitamin-D toxicity and insight into the trends in serum 25(OH)D levels in the US population based on 10 years of data on serum 25(OH)D values from ~5,000,000 patients who were tested at the Mayo Medical Laboratories between February 2007 and February 2017.
711
A review of types of risks in agriculture: What we know and what we need to know
Farmers constantly cope with and manage different types of agricultural risks.1,Risk inherently involves adverse outcomes, including lower yields and incomes and can also involve catastrophic events, such as financial bankruptcy, food insecurity and human health problems, although higher expected returns are typically one of the positive rewards for taking risk.Farmers therefore cope simultaneously with and manage multiple risks that can have compounding effects.The compounding effects may affect decisions and outcomes at scales well beyond the farmer.One initial cause of the 2007/08 world food price crisis was production risk related to severe droughts but the impacts of the ensuing price spikes were exacerbated by some governments imposing export restrictions.During this crisis farmers faced production risk, market risk, and institutional risk all within a short period.Thus, risk outcomes can have cascading effects where one type contributes to another type occurring—for example, excessive rainfall during harvest is an event that can engender another set of risks such as financial risks associated with being unable to repay loans.Given that multiple types of agricultural risks are likely to occur simultaneously, several policy-driven initiatives have begun to address these risks more holistically.These initiatives examine risk management issues and strategies that concentrate on multiple sources of risk.They include the Platform on Agricultural Risk Management, the World Bank’s Forum for Agricultural Risk Management in Development, and programs in the Center for Resilience.2,Funders of agricultural research are also beginning to support more projects that focus on the multiple risks that farmers encounter.Examples include the SURE-Farm project and the INFORM index for risk management.In addition, both academics and policy researchers are taking a more earnest focus on risk, such as the PIIRS Global Systemic Risk research community and the recent efforts by the OECD’s risk management and resilience topic group.This new focus and reorganization of human and financial resources, often in the context of the resilience of farms and the agricultural sector to adverse events, suggests that a growing appreciation exists that multiple types of risk are important.Farmers have always faced multiple risks; for example, in premodern Iceland major concerns for farmers included weather variability and personal illness.Campbell et al. 
argue that the growing number of studies that focus almost exclusively on the link between weather variability and crop yields provide only marginal increases in knowledge and by only studying one risk we only gain an inadequate picture of all the types of risk farmers encounter.The implication of this argument is that analyses of multiple concurrent sources of risks are likely to generate more useful insights.The IPCC reinforces this view by discussing how diverse types of risks co-occur or reinforce each other and how such co-occurrence can limit the effectiveness of adaptation planning for climate change.The IPCC indicates a possible remedy may be policymaking that considers multiple risks.Other researchers have also argued that the risks associated with climate change, economic volatility, globalization, and political instability have become more pronounced and severe.Whether farmers’ exposure to risks, in general, has increased over time remains an open question as the quantitative evidence seems mixed and context specific, especially for weather and commodity prices.However, unanticipated events with considerable impacts on farmers continue to occur, which suggests that the nature of risk has changed over time.The challenges to the agricultural sector from a growing world population, from changing diets with higher demand for animal-source foods, and from climate change, make managing multiple risks more important than ever.Given this context, the objective of our study is to examine the extent to which the existing peer-reviewed literature provides sufficient support for a more holistic approach to risk management that includes examining multiple types of risk and evaluates their joint effects.The focus is on farmers and the types of risks relevant to them on their farms.Ideally, new initiatives that seek to promote and support holistic risk management should be underpinned by evidence on how farmers cope with multiple risks.However, the evidence from our study indicates that the existing literature may not adequately provide such support.Our study describes and synthesizes the trajectory and status of the peer-reviewed literature on the types of agricultural risks that researchers have examined.We use a literature search procedure in the Web of Science for all available years.We include five general types of risk in agriculture: 1) production, 2) market, 3) institutional, 4) personal, and 5) financial.The first four of these risks are business risks and in important ways are independent of financial risks associated with how a farm may be financed.Our current study complements earlier reviews that have examined the theoretical models and empirical methods used to examine specific types of risk.These reviews reinforce the importance of understanding risk, as for example, technology choices are strongly affected by risk-related issues.However, Just argues that agricultural risk research “has failed to convince the larger profession of the importance of risk averse behavior”, that “agricultural risk research has focused too much on problems in which risk is less likely to be important”, and that there has been an over emphasis on “characterization of the production problem that does not support risk research”.Researchers have reflected that the treatment of multiple sources of risks appears limited in the literature.Researchers have also reflected that the literature has often focused on the types of risk that are “easy” to study, such as weather shocks in Africa rather than market or 
institutional risks.Managing these less “easy” risks possibly provides more opportunities for long-term livelihood improvement.Our study therefore examines these reflections in more detail through a literature review and analysis, in light of the recent initiatives on risk and because farmers face multiple risks simultaneously.We conducted a literature search to identify an initial database of peer-reviewed studies that possibly examined type of agricultural risk.Every one of these studies was then manually assessed for eligibility to retain in the database based on an eligibility criteria.After removing ineligible studies from the initial database, we arrived at our database.For each study in our database we recorded the type of risk studied and the geographic focus.To provide context to our literature search we first define risk and some of its interpretations, and then overview the five general types of risk in agriculture.There can exist multiple sources of risk within a type of risk, for example production risk is a type of risk and the source of risk that generates the production risk might be a drought or a pest outbreak.The risk management option could include crop yield insurance for a drought or integrated pest management for a pest outbreak.To identify studies on types of risk we set a boundary on the words and terms associated with types of risk.Here definitions and interpretations of risk in the literature informed our choice of search strings.Knight defined risk as the case where the distribution of outcomes is known either a priori or statistically through experience, and uncertainty as the case where probabilities cannot be quantified.This definition implies that decisionmakers have imperfect information about whether a given outcome associated with a course of action will occur but act as if they know the probabilities of the relevant alternative states of nature that each lead to different outcomes.Nevertheless, probabilities used by decisionmakers are usually unavoidably subjective.Hardaker lists three common interpretations of risk: 1) the chance of a bad outcome, 2) the variability of outcomes, and 3) uncertainty of outcomes.Building on the word stability in the second interpretation, other words that characterize risk include robustness, vulnerability, and resilience.Finally, the Society for Risk Analysis has a Glossary of Risk-Related Terminology for key terms related to risk analysis.The preceding definitions, interpretations, and glossary informed our search.The five general types of risk in agriculture are as follows:Production risks stem from the uncertain natural growth processes of crops and livestock, with typical sources of these risks related to weather and climate and pests and diseases.Other yield-limiting or yield-reducing factors are also production risks such as excessive heavy metals in soils or soil salinity.Market risks largely focus on uncertainty with prices, costs, and market access.Sources of volatility in agricultural commodity prices include weather shocks and their effects on yields, energy price shocks and asymmetric access to information are additional sources of market risk.Other sources of market risk include international trade, liberalization, and protectionism as they can increase or decrease market access across multiple spatial scales.Farmers’ decision making evolves in a context in which multiple risks occur simultaneously, such as weather variability and price spikes or reduced market access.Institutional risks relate to 
unpredictable changes in the policies and regulations that effect agriculture, with these changes generated by formal or informal institutions.Government, a formal institution, may create risks through unpredictable changes in policies and regulations, factors over which farmers have limited control.Sources of institutional risk can also derive from informal institutions such as unpredictable changes in the actions of informal trading partners, rural producer organizations, or changes in social norms that all effect agriculture.Farmers are increasingly supported by and connected to institutions, especially as farm production becomes more market focused.Personal risks are specific to an individual and relate to problems with human health or personal relationships that affect the farm or farm household.Some sources of personal risk include injuries from farm machinery, the death or illness of family members from diseases, negative human health effects from pesticide use, and disease transmission between livestock and humans.Health risks are a major source of income fluctuation and concern for farmers.Farmers often cope with the interconnectedness of personal and institutional risks; for example, divorce or death of a husband can lead to the appropriation of land or livestock, due to institutional risks created by customary laws.In the literature, the words “personal”, ”human”, and ”idiosyncratic” generally refer to the same type of “personal” risks we considered.Financial risk refers to the risks associated with how the farm is financed and is defined as the additional variability of the farm’s operating cash flow due to the fixed financial obligations inherent in the use of credit.Some sources of financial risk include changes in interest rates or credit availability, or changes in credit conditions.The literature search used a combination of search strings to retrieve studies in the Web of Science Core Collection.The SCC is part of the Institute for Scientific Information Web of Knowledge Database.The search covered all Citation Indexes in the Database.The Indexes included the Science Citation Index Expanded, Social Sciences Citation Index, Arts & Humanities Citation Index, and the Emerging Sources Citation Index.The search included peer-reviewed English-language journal articles published between 1974 and 2019.The first available year in the Database was 1974.We conducted the search on August 5th, 2019 and the search included two Research Areas defined by the Web of Science: agriculture and economics and business.The literature search excluded studies in the Web of Science Category of Forestry and Fisheries because our primary focus was to identify studies on types of risks in agriculture relevant to crops and livestock.Table 1 displays the search strings used to identify the initial database of studies.Our first search string specified words related to agriculture.Subsequent searches were composed of two inclusion terms linked by a proximity operator, following procedures in other agricultural-focused reviews.The string of words in the first inclusion term ensured the search results pertained to risk, guided by methods Section 2.1.The second inclusion term captured specific sources of risk for each type of risk—searches 3–7 in Table 1.The scale of the study was left unrestricted, and therefore included studies at the plot, farm, household, country, regional, and global scale.We linked the inclusion terms for risk with each type of risk using a proximity operator.The proximity operator 
was set to 2, which meant the two inclusion terms were within two words of each other.This approach was less restrictive than searching for exact phrases.As a result, the search retrieved more studies than if we had applied exact phrases; for example, we captured "risk of climate change" as well as "climate risk".Setting the proximity operator to 2 also helped to reduce false hits that would have occurred by using the "AND" operator between the inclusion term for risk and the type of risk.For example, to retrieve production risk studies the Web of Science syntax combined the agriculture string with the two inclusion terms, of the form #1 AND (risk-related term NEAR/2 production-related term), where NEAR/2 is the proximity operator.We experimented with different words in the search strings and with setting the proximity operator to values greater than or less than 2.The strings in Table 1 suited our study's objectives.Changing the search strings changes the number of studies in each category; however, the search strings described in Table 1, coupled with the proximity operator, produced an initial database that we believe reflects the literature between 1974 and 2019.We report data for all years but focused our analysis of temporal trends on the following decades: 1979–1988, 1989–1998, 1999–2008, and 2009–2018.The search retrieved an initial database of studies for which the study's title, abstract, or keywords indicated the study examined a type of risk.We then manually assessed every study against a set of eligibility criteria.The assessment involved examining the study's title, abstract, or keywords against these criteria.If the study's title, abstract, or keywords contained insufficient information to assess the study's eligibility, we examined the full text of the study.We retained in our database only studies that met our eligibility criteria.First, the study provided a quantitative or conceptual analysis of a type of agricultural risk.Examples of quantitative analyses included studies based on manipulative experiments, monitoring of sources of risk, scenario analysis with simulation models, statistical analyses, or studies that combined multiple methods.Statistical analyses included examining survey data on perceptions of risk types or the effect of risk types on farmer behavior, or econometric analysis of commodity price volatility.Conceptual analysis of types of risk included reviews and overview studies, theoretical studies, and qualitative assessments.Second, the actual analysis or argument of the study earnestly included risk, i.e., a major part of the study covered a type of risk and risk was not only mentioned in the framing or motivation of the study.For example, we excluded a study that writes "climate change is a growing problem" but then does not actually examine climate change.We also excluded a study that writes "Key uncertainties about price trends include" but then does not actually examine price uncertainty.A study, in general, earnestly included risk if the study's title or abstract listed an objective, research question or result related to a risk type.We excluded studies that were in the initial database for nomenclature reasons but were unrelated to a type of risk.For example, we excluded studies that included terms like "modelling uncertainty" or "aggregate stability" but had no more details on a type of risk.Third, the study focuses on crops or livestock or both.We excluded studies only on forestry or fisheries.Studies on integrated aquaculture and agroforestry were eligible.If multiple commodities were studied for market risk, the agricultural commodity had to be earnestly studied and not just part of the list of
commodities studied.For example, we retained studies on how oil price shocks affect cereal grain prices.To be eligible the study focuses more on the agricultural production aspects of risk types than on the consumer aspects of risk types.We excluded studies on consumer choice of foods and excluded studies that provided a cursory mention to “human health” with no other focus on a type of risk.After we assessed all the studies in the initial database for eligibility, we examined the eligible studies by recording the type of risk the study focused on, and the geographic focus of the study.This involved examining the title, abstract, or keywords, or full text version if required.The geographic focus of the study was based on the United Nations Standard Country codes.We listed the country or region where the study focused.For theoretical studies, studies that presented stylized numerical examples without a geographic focus, or studies where the geographic focus was unclear, we listed the geographic location as Not Applicable.Our literature search identified 5294 studies published between 1974 and 2019 that potentially examined risk.We then examined the studies in this initial database for their eligibility.Through this examination, we excluded 2011 studies from the initial database, resulting in a database of 3283 studies.Fig. 1 shows the distribution of the 3283 studies among all the combinations of risk types.A total of 2160 studies focused solely on production risks, accounting for 66% of the total sample.Among studies that examined only one of the five risk types, market and then institutional risks were the next most widely examined.Thirteen percent of the total sample considered only market risks, but only 2.4% of all studies considered only institutional risks.Only 1.8% of the studies in the total sample considered personal risks only and 2.0% of all studies focused solely on financial risks.Fifteen percent of the studies in total sample examined at least two types of risk.Among these 484 studies, 405 considered two risk types, 50 considered three types, 11 considered four types, and 18 considered all five types.Risks in production were the most likely to be examined in combination with another type of risk.The combination of production and market risk was the pair that occurred most frequently, consisting of 236 studies that was 7.2% of the total sample and 48.7% of the subsample of studies examining multiple types of risk.Production, market, and institutional risk was studied in 26 of the 50 studies on three types.Financial risks are only incurred by farmers who actually have financial obligations like a loan.These obligations are reflected in financial risks being least numerous in Fig. 1.Fig. 
2A shows the distribution of studies for each risk type over the past four decades.The number of studies in the dataset increased over time.The number of studies published between 1989 and 1998 was 284.However, between 1999 and 2008 the number of studies in our database increased by 120% to 626, and between 2009 and 2018 it increased by 245% to 2158, also highlighting that the number of studies increased at an increasing rate.Changes occurred in the allocation of studies among the risk types over the past four decades.The percentage of all studies on only production risk increased modestly from 59% in 1989–1998 to 63% in 1999–2008, and then 66% in 2009–2018.The increased prevalence of production-only studies occurred together with a decline in the percentage of studies on only market risk.Eighteen percent of studies considered only market risks in 1989–1998, but this percentage decreased to 14% in 1999–2008 and 12% in 2009–2018.The proportions of studies in the other categories remained relatively constant over the entire period.For example, the proportion of studies on at least two types of risk was between 14% and 18% in all decades.One notable change was that in the decade 1979–1988, 4 of the 21 studies were on financial risk only, whereas in all subsequent decades only 2% of studies were on financial risk only.Seventy-nine percent of all studies in the database included production risk, and 2160 of those 2759 studies considered production risk only.To the extent that production risk features in studies from other risk types, we found 64% of all studies on multiple types of risk included at least production and market risk.We observed no discernible change in the percentage of all studies considering multiple types of risk over time.Market risks, in isolation from or in combination with other risks, were the next most widely examined.Forty-six percent of all studies on market risk considered those risks in combination with at least one other type of risk, but, for example, only 16% of all studies on production risk considered production risks in combination with at least one other type of risk.Fewer studies included institutional, personal, or financial risks, compared with production and market risks.The percentage of studies that considered institutional risk was 6.2%, personal risk 5.1%, and financial risk 4.2%.Looking at studies of two or more risks, institutional risks were mostly studied in combination with market risks and personal risks with production risks.For example, 29 studies considered only production risk with institutional risk and 39 studies considered only market risk with institutional risk.For the geographic focus of the 3283 studies, 823 were in the Americas, 726 in Asia, 672 in Europe, 526 in Africa, 251 in Oceania, 181 in multiple regions, and 104 were allocated to the NA category.Most of the studies listed as NA were theoretical studies or studies with stylized numerical examples without a geographic focus.The geographic focus changed over time, with the major trend among regions being a faster increase in the number of studies in Asia compared to the Americas.The percentage of all studies from Asia was 16.5% in 1989–1998, 16.8% in 1999–2008, and 24.1% in 2009–2018, and the percentage of all studies from the Americas was 38.4% in 1989–1998, 32.1% in 1999–2008, and 21.6% in 2009–2018.Across all years and for studies that were specific to one country, the number of studies in the top ten countries by number of studies was United States of
America 541, Australia 218, India 195, China 178, Canada 97, France 74, Germany 70, Brazil 67, Ethiopia 62, and Spain and the United Kingdom both with 59.Across all countries and years, the proportion of studies in developing countries increased over time, for example in 1989–1998 34% of country-specific studies were in developing countries, but in 2009–2018 this percent rose to 50%.Table 3 summarizes the 18 studies that considered all five risk types.Thirteen of these studies used questionnaires to ask what types of risk farmers perceived as most important.The other five studies were qualitative or conceptual.Many of the questionnaire-based studies ranked the types of risk based on farmers scores from a 5-point Likert scale.The types of risk perceived as most important varied by context, for example famers in Europe reported institutional risks associated with policy uncertainty as a major concern.None of the studies on all five types of risk examined directly the effect of changes in sources of risk on farm indicators.Our literature search and subsequent database of eligible studies provides insights into the types of risks studied in agriculture between 1974 and 2019.Previous reviews highlight the extent to which agricultural studies focus on risk, for example, 29% of studies that used farm-scale models in the European Union between 2007 and 2015 included risk or stochasticity.Between 1957 and 2015 the topic “uncertainty and risk” had the greatest number of studies in the Australian Journal of Agricultural and Resource Economics.Our results, however, also indicate that studies have overwhelmingly clustered around production and market risks.The focus on production risk is understandable given that productivity in agriculture is closely connected to biological processes and can be studied in relatively controlled experiments.These experiments permit a better understanding of cause and effect.For example, the analysis of long-term agronomic trials can help identify how weather variability affects crop yield stability.Moreover, farmers often perceived production risks as being one of the most important types of risk, but this perception is context specific with surveys from farmers in Europe often suggesting policy uncertainty as a major concern.The focus on market risks is also reasonable.Markets, prices, and price volatility are at the center of theories and models developed in agricultural economics.Researchers have recognized the importance of risks beyond production risks, but the rate of increase in studies on multiple risks was less than the rate of increase in studies on single types of risk over the past two decades.The literature has focused less on institutional, personal, and financial risks, compared with production and market risks.The focus on production and market risks may also be related to the greater availability of open access data on weather and prices.This focus has in turn shaped the methods available to study risk.Only a limited number of studies examined personal risk.One example was Zhen et al. 
who reported survey results from 270 farmers that implied cropping systems on the North China Plain are economically viable.These farmers also reported the overuse and inappropriate handling of mineral fertilizer and pesticides, resulting in 20% of farmers reporting headaches and fatigue.These health problems are a concern for human welfare and may affect agricultural production through reduced work productivity.Quantifying these human health problems is a challenge, but identifying the risk is an important first step in quantifying the cost of the risk.The extensive focus on production and market risks raises questions about whether the current literature adequately addresses the information needs of farmers, and of the institutions and agencies working to assist them, as they prioritize among all available options to cope with risk.These considerations suggest that a refocus of research towards studying strategies that address these additional sources of risk may be useful.This refocus would address concerns raised by several researchers about the limited focus on multiple sources of risk—see, for example, Chambers and Quiggin and Dercon.Further, the OECD argues that because different types of risks are often linked, a holistic approach is needed to manage them.Without studying all the types of risks that farmers encounter, practitioners and policymakers will continue to have challenges identifying appropriate risk management options and policies.Yet, as discussed above, evidence seems limited about how multiple risks affect farm indicators and about the effectiveness of different risk management options.The small number of studies that jointly consider multiple sources of risk also suggests that the focus of the current literature is too narrow.Many of the studies that examined multiple types of risk applied quantitative methods.For example, Lien examined the variability of gross margins on a Norwegian dairy farm and studied production and market risks through the examination of stochastic dependencies between prices and yields.Pacin and Oesterheld studied the combined effect of production and market risks on the income stability of farmers in Argentina.Some studies have examined more than two risks jointly, often using simulation models that take a system view.Taking a system view through using simulation models has been considered one of the best approaches to examine risks since at least the 1970s.A recent example is the use of a recursive programming model to examine farmer responses to production, market, and institutional risks in Uzbekistan.In this Uzbekistan example, production risks came from irrigation water variability and market risks stemmed from price fluctuations.Variability in irrigation water and prices was based on realized observations over time, and the institutional risks were considered using scenarios in the programming model.The institutional risk focused on farmers having land expropriated because of their failure to fulfill cotton production targets set by the government.Among the 18 studies that considered all five major risk types, most examined how farmers perceive the importance of each type of risk using a ranking-based Likert scale.The importance of risk is context-specific; for example, market risks contributed more than production risks in explaining revenue variability among Californian and New Zealand farmers.However, blueberry farmers in Chile were more concerned about production risk than market risk.The studies on risk perceptions indicate that farmers make important distinctions
between issues at the farm scale and farm household scale, with health risks for household members generating issues for the farm.Further, several studies reported that concerns about family relationships and the health condition of family members are important issues for farm households.Some of the 18 studies also canvassed the importance of risk management options.The farm scale versus farm household scale issue emerged again with off-farm income being one risk management option.Using off-farm income is especially important in response to institutional risks, such as the hypothetical ending of all Common Agricultural Policy payments.The approach of initiatives by the World Bank and the International Fund for Agricultural Development aims to build the capacity to develop and implement comprehensive risk-related contingency plans and to promote the implementation of multiple strategies to manage risks.As Holling observed, these types of approaches do not require “a precise capacity to predict the future, but only a qualitative capacity to devise systems that can absorb and accommodate future events in whatever unexpected form they may take”.Nevertheless, policymakers desire evidence on the joint effects of multiple risks and on the comparative efficiency of different risk management strategies.Therefore, studies that examine coping strategies and current and potential responses to multiple risks could supplement our understanding of decisions under risk and improve stakeholders’ capacity to manage risk at the farm and policy scale.A retrospective look at responses by governments to the 2007/08 world food price crisis buttresses the need for more research on jointly managing multiple risks.Governments changed their storage and trade policies to help manage the price spikes during the crisis in a range of countries.In some cases, these policy changes stabilized domestic markets but destabilized global markets.One overall lesson from the crisis was that time could be spent between food price crises to generate evidence to help improve policy decisions.Not all risks are equally important in specific contexts.For example, property rights are often more secure for land and water in developed countries, with Feder and Feeny arguing that institutions should be considered in developing countries when assessing how property rights effect resource allocations.Therefore, although we outline a need to examine multiple sources of risk jointly, the importance of each risk will differ by context.The effects of risk on farm indicators still need to be examined within specific contexts, if these effects remain unexamined the information available for prioritizing risk management options will remain limited.In addition, some sources of personal risks may be more relevant in developing countries because, for example, health insurance is often less available than in developed countries, and labor laws and occupational safety policies in developed countries are often more stringent and enforced.These laws and policies may result in farmers having less work-related injuries or less exposure to harmful levels of pesticides.We advocate for a greater focus on studying multiple types of risk, but considerable challenges exist to further our understanding of the problem.Some of these challenges include access to relevant and reliable data, appropriate methods to account for the stochastic dependency between the different sources of risk, and how to obtain more relevant probabilities for risk research.Probabilities 
can come from either a frequentist view or a subjectivist view.The frequentist view considers probabilities as the limit of a relative frequency and the subjectivist view sees a probability as the degree of belief in an uncertain proposition.Here we offer our thoughts on some of these challenges, mainly regarding data.Often variance in gross margins, revenue, or income is an indicator of risk and this indicator is often examined as a single stochastic process, rather than as a joint distribution of separate stochastic variables for yields, prices, and costs.A single stochastic process is commonly used because greater levels of disaggregation can lead to an increase in the number of “messy” dependencies.As such one approach for examining production and market risk consists of using a time series of weather data to examine how weather variability affects farm production and then conduct a sensitivity analysis of the subsequent farm income to changes in prices or to use simulation models for scenarios related to risk types, such as institutional risk.However, the application of sensitivity analysis could be made more relevant for risk research if probability distributions for the risk types were specified.A priority for future research lies in developing databases that capture all five risk types and developing methods to account quantitatively for simultaneous changes in multiple types of risk.Databases available for risk research may be incomplete for the examination of all types of risks.For example, financial information at the household scale are absent in the main farm-scale accounting database in Europe, the Farm Accountancy Data Network, despite the FADN being a valuable source of panel data.Thus, the analysis of financial risk is impossible with data from the FADN.Improving data collection in agriculture is becoming a more visible issue, especially given the changing nature of the family farm and the increased complexity of the agricultural sector.Risk analysis is the “art of the possible”, and as such understanding which risks are important for farmers is a crucial step in risk analysis.Judging by the 18 studies that examined all five types of risk, farmers displayed a concern with all of them even though their importance is context specific.Given the greater availability of open access data on weather and prices, along with panel data tracking individual farms and farm households such as the FADN and the Living Standards Measurement Study, a possible path forward is to apply simulation models that have a core of production and market risk and conduct farmer-relevant “what-if” scenarios for institutional, personal, and financial risk.The coupling of these panel data that provide year-on-year variability in farm indicators with other data on risks, such as satellite data on weather or complementary surveys on institutional or financial risks and their probabilities, may help uncover trends between risks and indicators.These panel data often contain self-reported data on a range of risks, such as drought severity and any deaths or major illnesses in the farm household.These trends could also provide data for the calibration and evaluation of simulation models and inform scenario design.We recommend that “what-if” scenarios for risk analysis are informed by data on the probability distribution of the risk type, or at least the range of possible values for sources of risk are considered.Across the five types of risk, production and market risk data appear more readily available and 
could be viewed as having frequentist distributions, but data appear scarcer for institutional, personal, and financial risks, where a subjectivist view may be more appropriate for generating probability distributions.When data are scarce, an option for developing probability distributions is to combine the frequentist and subjectivist views, whereby probability distributions are generated based on scarce data and expert judgments.Case studies that integrate the frequentist and subjectivist views are emerging.An understanding of Bayesian decision theory may also help researchers revise probability distributions after new information is obtained, such as after the realization of an uncertain event.Our study examines the trajectory of the literature on the types of agricultural risks studied since 1974.Starting with a literature search in the Web of Science, we identify 3283 studies on types of risk in agriculture.Unexpected events continue to affect farmers and we know that farmers manage multiple risks jointly.Thus, our study focuses on the distribution of studies by type of risk and the number of studies that examined more than one type of risk.Our results reflect the types of risk that researchers have studied and do not necessarily reflect the importance of different risks as perceived by farmers.We found only a limited number of studies that examined multiple sources of risk.This limited number means that there may be opportunities to better align risk research with the needs of farmers who manage multiple risks jointly, and the agencies, institutions, and donors that work to support them.Adopting a multi-risk research agenda faces challenges, including the intense data requirements needed to understand how risks are connected.One pragmatic approach, among several options, is to use simulation models that combine observed data on weather variability and price volatility with the design of "what-if" scenarios related to institutional, personal, and financial risks.Some simulation models consider market risks through the use of a sensitivity analysis, but we require a greater understanding of how to better account for the stochastic dependencies between types of risks and the probability distributions of variables for risk types, especially given the differences between the frequentist and subjectivist views on probabilities.Moreover, to use simulation models and conduct scenarios for combinations of types of risk, data on the effects of all the types of risk are required.Our results also highlight that the types of risk are often relevant at differing scales, with personal risks often stemming from the farm household scale but negatively affecting farm operations.This scale issue highlights the need for risk research to consider the interactions between on-farm production activities and household family members.Despite these challenges, our study raises awareness of the apparent disconnect between risk research and the multi-risk realities encountered by farmers and policymakers.This greater awareness is a first step towards developing a research agenda that overcomes technical challenges in analyzing multiple risks, such as the stochastic dependencies between types of risk, and provides much-needed information to farmers and policymakers regarding risk management priorities.The data and code required to replicate the figures and tables reported in the results section of this study are available online: http://dx.doi.org/10.17632/ppy2x4yy7k.1.The authors declare no conflict of interest.
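As a minimal illustration of the simulation approach advocated above, the sketch below draws correlated yield and price realizations to represent production and market risk and overlays a simple "what-if" institutional scenario. It is an assumption-laden toy example, not a model from the reviewed literature; all parameter values, the negative yield-price correlation, and the 10% subjective probability of losing a support payment are hypothetical.

    import numpy as np

    rng = np.random.default_rng(42)
    n_draws = 10_000

    # Production and market risk: correlated yield (t/ha) and price ($/t) draws.
    mean = [4.0, 200.0]                      # hypothetical mean yield and price
    cov = [[0.5**2, -0.3 * 0.5 * 30.0],
           [-0.3 * 0.5 * 30.0, 30.0**2]]     # sd_yield=0.5, sd_price=30, corr=-0.3
    yields, prices = rng.multivariate_normal(mean, cov, size=n_draws).T

    variable_cost = 450.0                    # $/ha, assumed fixed in this sketch

    # Institutional "what-if" scenario: a subjective 10% chance that a policy
    # change removes a 100 $/ha support payment in a given draw.
    support_payment = 100.0
    policy_removed = rng.random(n_draws) < 0.10

    gross_margin = (yields * prices - variable_cost
                    + np.where(policy_removed, 0.0, support_payment))

    print(f"mean gross margin: {gross_margin.mean():.0f} $/ha")
    print(f"5th percentile (downside risk): {np.percentile(gross_margin, 5):.0f} $/ha")

Personal and financial risks could be layered in the same way, for example as a subjective probability of a household health shock that reduces labour availability, or a stochastic interest rate applied to a loan, provided the relevant probability distributions can be elicited.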
This study examines the scope and depth of research on the five major types of risks in agriculture, and the extent to which those studies have addressed the impacts of, and policies to mitigate individual types of risk as opposed to more holistic analyses of the multiple sources of risk with which farmers have to cope with. Risk is at the center of new paradigms and approaches that inform risk management initiatives and shape investments in many countries. Although the literature includes several substantive reviews of the methods available for risk analysis and their empirical applications have been extensively scrutinized, limited information exists about which types of risks have received sufficient attention, and which have not. This limited information is perplexing because farmers manage multiple risks at the same time and unanticipated events continue to have substantial impacts on farmers. We identify 3283 peer-reviewed studies that address one or more of the five major types of risk in agriculture (production risk, market risk, institutional risk, personal risk, and financial risk) published between 1974 and 2019. We conduct a literature search and then apply an eligibility criteria to retain eligible studies from the search. We then classify those eligible studies based on risk type and geographic focus. We placed no limit on the temporal scale, geographic focus, or study method for inclusion in our search. Results show that 66% of the 3283 studies focused solely on production risk, with only 15% considering more than one type of risk. Only 18 studies considered all five types of risk and those either asked how farmers perceived the importance of each risk or were focused on conceptual issues, rather than assessing how exposure to all the risks quantitatively affects farm indicators such as yields or incomes. Without more detailed analyses of the multiple types of risks faced by farmers, farmers and policymakers will lack the information needed to devise relevant risk management strategies and policies. A shift in research focus towards the analysis of multiple contemporaneous types of risk may provide a basis that gives farmers greater options for coping with and managing risk. We discuss some of the challenges for studying multiple risks simultaneously, including data requirements and the need for probability distributions and the role of simulation approaches.
712
Norovirus prevalence and estimated viral load in symptomatic and asymptomatic children from rural communities of Vhembe district, South Africa
More than 70% of African people who live in poverty reside in rural areas.Consequently, illiteracy, malnutrition, inadequate water supplies and poor sanitation, as well as poor health and hygiene practices, affect a large proportion of rural communities in the African continent.With the considerable decline of rotavirus-associated diarrhea in countries that have introduced rotavirus vaccines, NoV is increasingly recognized as a leading cause of acute gastroenteritis.The symptoms associated with NoV infection, which manifest after an incubation period of 1–2 days, are typically self-limiting, characterised by nausea, vomiting, abdominal pain and non-bloody diarrhea.The duration of NoV illness is typically 12–72 h but the illness can be prolonged in the very young or old, and immunocompromised persons.However, reports have revealed that not all individuals develop symptoms and a significant proportion remains asymptomatic after NoV infections.Several studies have suggested that the semi-quantitative measure of real-time RT-PCR, using the threshold cycle (CT) value as a proxy for fecal viral load, may distinguish asymptomatic viral shedding from clinically relevant disease.Studies have shown that, among children from poor communities in developing countries, poor standards of hygiene, including unsafe disposal of faeces and the use of contaminated water supplies, can facilitate the transmission of NoV.Nevertheless most of the NoV studies in Africa have been carried out in urban settings, likely due to the lack of laboratory capacity for Human NoV detection in rural settings.In South Africa, little has been reported on the prevalence and circulating NoV genotypes across the country.The objectives of this study were to determine the prevalence of NoVs in asymptomatic and symptomatic children in rural communities of Vhembe district, South Africa, and to compare the differences in viral burden as suggested by the RT-PCR CT value.This study was a cross-sectional, clinic-based investigation of out-patients, conducted from July 2014 to April 2015.Stool samples were randomly collected at different clinics situated within the rural communities of Vhembe District in Limpopo Province, South Africa.In South Africa, most cases of intestinal gastroenteritis are seen by the PHC centres situated in the rural communities and only the severe cases are directed by the clinic nurses to the hospitals.A total of 40 clinics were designated sampling sites for this study.Samples were transported to the University of Venda Microbiology laboratory and tested for NoV by RT-PCR.The study protocol and consent procedures were approved by the Ethics committees of the Department of Health in the Limpopo Province and University of Venda.Written, informed consent was given by the parent or guardian of the child before stool sample collection.
After consent was given, personal details as well as clinical data such as presence of fever, vomiting, abdominal pain or dehydration were collected.The consistency of the stool was documented.The parent employment status as well as the family living conditions such as the source of water, presence of livestock and toilet seat use were also recorded.One stool sample from each child under 5 years of age, who presented to the clinic with diarrhea, was collected by the clinic nurse and kept at +4 °C.Diarrhea was defined as three or more episodes of watery stool in the previous 24 h.Stool specimens were collected from clinics on a weekly basis, transported on ice to the laboratory within 6 h and stored at −20 °C until tested.A total of 253 stool samples from symptomatic cases were collected for this study.Stool samples from patients with bloody diarrhea were excluded.Fifty stool samples from healthy controls were also collected.The Boom method was employed to extract NoV RNA as previously described.The method is based on the lysing and nuclease inactivating properties of the chaotropic agent guanidinium thiocyanate, together with the nucleic acid-binding properties of silica particles.RIDA© GENE NOROVIRUS I & II real-time RT-PCR kits were used to detect NoV from clinical samples in this study.This PCR assay offers qualitative detection and differentiation of NoV genogroups I and II in human stool samples according to the manufacturer and it is not thought to cross-react with other common enteric pathogens.The RIDA© GENE kit can also detect the GIV genogroup.The assay has 98% sensitivity and specificity and includes an internal control to monitor for extraction efficiency and amplification inhibition.The test is carried out in a one-step real-time RT-PCR format in which the reverse transcription of RNA is followed by the PCR in the same tube.The real-time PCR program was performed on a Corbett Research Rotor Gene 6000 with the following cycling conditions: reverse transcription for 10 min at 58 °C; initial denaturation step for 1 min at 95 °C followed by 45 cycles of 95 °C for 15 s and 55 °C for 30 s with continuous fluorescence reading.Separate rooms were used for the pre- and post-amplification steps to minimise the risk of amplicon carry-over and contamination of samples.Randomly selected stool RNA extracts, which tested NoV positive, were subjected to RT-PCR amplification using primers from previously published work, for the purpose of sequencing to confirm the detection results.The One step Ahead RT-PCR was used, utilising specific oligonucleotide primer sets G1SKF/G1SKR to amplify 330 bp of the GI capsid fragment and G2SKF/G2SKR for 344 bp of the GII capsid fragment as previously described.The PCR products of the amplified fragments were directly purified with a master mix of ExoSAP.Using the same specific primers, Sanger sequencing was performed on the ABI 3500XL Genetic Analyzer POP7™.The nucleotide sequences were compared with those of the reference strains available in the NCBI GenBank using the BLAST tool available at http://www.ncbi.nlm.nih.gov/blast and then analysed for their genotypes using the Noronet typing tool available at http://www.rivm.nlm/norovirus/typingtool.Data was initially recorded in Microsoft Excel.All analyses were done in STATA v13.Logistic regression of NoV positivity was performed using the following predictors: type of water source, specific symptoms and whether or not the patient had watery stool.Mann-Whitney U, Wilcoxon W, Z tests and a t-test comparing
CT values in cases and controls were performed.Non-parametric receiver operating characteristic analyses to assess the association between CT values and illness were also performed.A P-value of < 0.05 was considered to be statistically significant.From July 2014 to April 2015, a total of 303 fecal samples, including 253 specimens from cases and 50 from healthy controls, were collected and examined for NoV.The median age was 10 months in the symptomatic group and the sex distribution was 53.4% male, 46.6% female.In the control group the median age was 13 months and this cohort consisted of 50% male and 50% female participants.The most common clinical presentations of the symptomatic children were diarrhea only and diarrhea with vomiting.The demographic profiles and clinical characteristics of study participant children are described in Tables 1 and 2.Of the 253 fecal samples from symptomatic children, 104 were positive for NoV.Of these positive samples 62 were GII only, 16 were GI, and 26 were GI/GII mixed in symptomatic children.Of 50 control samples 18 were positive for NoV including 9 GII, 2 GI and 7 GI/GII mixed.The prevalence of NoV was higher in cases though this was not statistically significant.Looking at each genotype whether as single agent or in combination, GI was detected in 42 of cases and 9 of controls and GII in 88 of cases and 16 of controls.These differences were also not statistically significant.The highest detection rate of NoV, in case patients, was found in the age group of 13–24 months.NoVs were predominantly detected from children presenting with liquid stool.There is a suggestion that liquid stool is associated with NoV positivity, but this was not statistically significant.Also, no NoV genogroup was found to be a predictor of symptomatic cases.As can be seen from Table 2 there is no difference in reported symptoms between case patients positive for NoV and case patients negative for NoV.Temporal distribution of NoV genogroups between July 2014 and April 2015 showed NoV detection every month throughout the study period with a possible peak in October 2014.NoV-G2SKF/G2SKR amplicons of samples number 30, 45, 148 and NoV-G1SKF/G1SKR amplicons of samples number 139, 168, H011 were sequenced.A BLAST search confirmed that the sequenced samples were Human NoV.The Noronet genotyping tool identified, respectively, the following Norovirus strains: GII.4 variant, GII.14, GI.4 and GI.5.There was a considerable variation in NoV CT values in positive samples from both symptomatic cases and asymptomatic controls.The median CT value of the NoV GII genogroup in symptomatic children was lower than in asymptomatic children and this was statistically significant.However, there was no difference in median CT value between symptomatic and asymptomatic participants for NoV GI.The association between viral load, as estimated by CT values, and illness was further investigated using non-parametric ROC analyses.For GII, it can be seen that there was a reasonable predictive power of CT values, but not for GI.Table 4 shows the sensitivity and specificity of using different CT values for GI and GII as predictors of symptoms.It can be seen that although the sensitivities of the GI and GII analyses are similar, the specificity for GI is much lower than for GII across all CT values.Overall it would appear that the CT values for GII adequately predict illness whereas this is not the case for GI.Specificity is poor, even for GII, except for CT values below 20.The main objective of this study was
to assess the NoV prevalence and compare the estimated viral load in asymptomatic and symptomatic children in rural communities of Vhembe district, South Africa.The results of this study revealed that the detection rate of NoV in symptomatic cases was high but was not statistically different when compared to the controls.Evidence that NoV-positivity was more common in the symptomatic compared to the asymptomatic children was not established in this study.Furthermore, NoV positivity was not found to be a predictor of symptoms.Comparison of CT values of NoV genogroups revealed a lower median CT value of NoV GII detected in symptomatic children, compared to that recorded for the asymptomatic children, and this was statistically significant.However, there was no significant difference in CT values between NoV positive cases and controls for the NoV GI genogroup.Even though the prevalence of GII is roughly the same in cases and controls, the estimated viral load is higher in cases.We note that the NoV GI genogroup, detected in both groups, did not exhibit the same trend, suggesting that GI is not a cause of disease in the study population.The ROC analyses also revealed considerable predictive power of CT values for GII-positive diarrhea, but not for GI.NoV-induced gastroenteritis has previously been associated with lower CT values than asymptomatic infections in several studies.However, to our knowledge this is the first study reporting on the differences in estimated viral load of GII and GI NoV positive cases and controls.In real time PCR, CT levels are used as a surrogate measurement of viral load in combination with standards of known quantities.In this study, inhibition that may have affected the target CT values was monitored by the use of an internal control and all control CT values were within the 30–32 cycle range.The findings of the study are concordant with several studies that reported NoV GII as the predominant genogroup involved in clinical cases, and circulating in communities worldwide.The observation that the prevalence of Human NoV excretion in stools is similar in both symptomatic and asymptomatic children has been previously reported and raises questions about its pathogenic role in Africa.These findings also indicate that asymptomatic infections could be a source of NoV outbreaks.Similarly, Ayukekbong et al.
reported that in developing countries NoV infections are very common with comparable detection rates observed in diarrhea cases and controls.However, in a cross-sectional study, it is easy to mis-classify substantial numbers of post-symptomatic infections as asymptomatic infections even when the controls are defined as absence of diarrhea symptoms in the preceding 4 weeks.The high detection rate of NoV in children living in rural communities is likely to reflect their substantial exposure to enteric pathogens, probably as a result of poor sanitation and hygiene practices.Most of the children in the study population were from households with a very low income and poor living conditions, although comparable rates of NoV detection from outpatient children in rural communities and semi-urban settings have been reported previously in other developing countries such as Bolivia, China, Brazil and Mexico.The findings of this study are inconsistent with previous studies that found a substantial difference in the NoV detection rates of both groups.However, these studies were carried out in semi-urban settings, which differ from rural settings.Children aged 13 to 24 months had the highest rates of NoV positivity relative to those of other age groups in this study.This finding is consistent with other studies of outpatient children in developing countries.Young children between 13 and 24 months of age may have more opportunities to be exposed to NoV-infected environments than children of other age groups, coupled with the absence of toilet training.One of the limitations of this study is the restricted number of stool specimens from healthy controls.Also, we did not look for other causes of gastroenteritis such as adenovirus, astrovirus or bacterial and parasitic causes.Though we performed nucleotide sequencing of the amplified capsid fragment on some samples at low virus concentration, the assay used in this study cannot differentiate Norovirus genotypes.Our findings suggest that the difference between asymptomatic and symptomatic children in African populations may relate to the NoV viral load.The difference in estimated viral load of NoV GI relative to GII observed in this study also supports the concept that transmissibility via the fecal-oral route and viral infectivity may be lower for GI than GII.The study findings may have implications for the diagnosis of NoV disease and future vaccine development, which may only need to consider GII as the genogroup associated with diarrhea in the African population.
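A minimal sketch of the two analyses discussed above, the Mann-Whitney comparison of CT values between symptomatic and asymptomatic children and the ROC analysis of CT value as a predictor of illness, is given below. The CT arrays are illustrative placeholders rather than the study data, and scipy and scikit-learn are assumed to be available.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score, roc_curve

# Placeholder CT values for NoV GII-positive children (not the study data).
ct_symptomatic = np.array([18.2, 21.5, 23.0, 24.8, 26.1, 27.3, 29.0, 30.5])
ct_asymptomatic = np.array([26.4, 28.9, 30.2, 31.5, 32.8, 33.1])

# Compare CT values between cases and controls (lower CT ~ higher estimated viral load).
stat, p = mannwhitneyu(ct_symptomatic, ct_asymptomatic, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")

# ROC analysis: how well does the CT value discriminate symptomatic from asymptomatic?
# Use -CT as the score so that higher scores correspond to higher estimated viral loads.
labels = np.r_[np.ones_like(ct_symptomatic), np.zeros_like(ct_asymptomatic)]
scores = -np.r_[ct_symptomatic, ct_asymptomatic]
print(f"AUC = {roc_auc_score(labels, scores):.2f}")
fpr, tpr, thresholds = roc_curve(labels, scores)
for f, t, thr in zip(fpr[1:], tpr[1:], thresholds[1:]):
    print(f"CT cut-off < {-thr:.1f}: sensitivity = {t:.2f}, specificity = {1 - f:.2f}")
```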
Background Human Norovirus (NoV) is recognized as a major etiological agent of sporadic acute gastroenteritis worldwide. Objectives This study describes the clinical features associated with Human NoV occurrence in children and determines the prevalence and estimated viral burden of NoV in symptomatic and asymptomatic children in rural South Africa. Study design Between July 2014 and April 2015, outpatient children under 5 years of age from rural communities of Vhembe district, South Africa, were enrolled for the study. A total of 303 stool specimens were collected from those with diarrhea (n = 253) and without (n = 50) diarrhea. NoVs were identified using real-time one-step RT-PCR. Results One hundred and four (41.1%) NoVs were detected (62[59.6%] GII, 16[15.4%] GI, and 26[25%] mixed GI/GII) in cases and 18 (36%) including 9(50%) GII, 2(11.1%) GI and 7(38.9%) mixed GI/GII in controls. NoV detection rates in symptomatic and asymptomatic children (OR = 1.24; 95% CI 0.66–2.33) were not significantly different. Comparison of the median CT values for NoV in symptomatic and asymptomatic children revealed a statistically significant difference in estimated GII viral load between the two groups, with a much higher viral burden in symptomatic children. Conclusions Though not proven predictive of diarrheal disease in this study, the high detection rate of NoV reflects the substantial exposure of children from rural communities to enteric pathogens possibly due to poor sanitation and hygiene practices. The results suggest that the difference between asymptomatic and symptomatic children with NoV may be at the level of the viral load of NoV genogroups involved.
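For reference, the odds ratio and confidence interval quoted in this abstract follow from a standard Wald calculation on the reported two-by-two counts; the short sketch below reproduces those figures (numpy assumed available).

```python
import numpy as np

# Reported counts: NoV-positive / NoV-negative among cases and controls.
a, b = 104, 253 - 104   # symptomatic: positive, negative
c, d = 18, 50 - 18      # asymptomatic: positive, negative

odds_ratio = (a / b) / (c / d)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")  # OR = 1.24, 95% CI 0.66-2.33
```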
713
Tailoring emission spectra by using microcavities
The spectrum of an emitter in crystals and glasses plays an important role in its applications in optoelectronics and information technology.Erbium-doped glasses, thulium-doped glasses, and neodymium-doped glasses have emission peak wavelengths around 1530 nm, 1470 nm and 1300 nm in the telecommunication windows region, respectively.On the one hand, for optical signal amplification, the emission spectrum should be as broad as possible, because broader emission spectra have wider gain bandwidth, so more channels can be amplified in an optical fiber amplifier.On the other hand, for an optical fiber laser, the emission spectrum should be as narrow as possible, as a narrower emission width has a larger gain coefficient, leading to a lower lasing threshold.Moreover, in order to explore new quantum optics applications, a suitably modified dielectric surrounding, in which vacuum fluctuations control spontaneous emission, is indispensable.Spontaneous emission is the basic natural property of the material involved in the generation of light.Therefore, it has been very important to control and tailor the behavior of spontaneous emission when designing devices for optical communication, quantum-information systems, displays and solar energy conversion technologies.Quantum theory suggests that SE results from the transition of an excited state electron toward its available ground state.Recently, photonic cavities have seen great improvements and found applications in ultra-high efficiency single photon emitters and thresholdless nanolasers.The direction-dependent emission spectra and radiative transition rates are highly affected by the surroundings of the emitter.The local density of states (LDOS) has played an important role in the development of novel photonic devices.At the specific location of the emitter, the LDOS determines the number of electromagnetic modes available for the emission of photons.Therefore, it is very important to analyze the LDOS of different microcavity geometries.In this paper, we design several different microcavities and calculate their LDOS using the finite-difference time-domain (FDTD) method to study emission spectra from emitters embedded in circular/elliptical metal cavities and square/rectangular cavities.We show that the emission spectrum in free space can be tailored by using different microcavity geometries surrounding the emitter, and show the relationship between the LDOS and the emission spectrum of the luminescent ion.Fig. 1 shows the microcavities with different geometries.A: square cavity and rectangular cavity; B: circular cavity and elliptical cavity.In order to allow the fields to propagate away from the cavity, the structures have a small notch on one side.FDTD simulations are used to calculate the LDOS and emission spectrum.The numerical resolution is 100 pixels/a.A Gaussian source in time is launched at the center of the microcavities, and the power flux is recorded at a monitor placed at the notch to observe the emission spectrum.For the square cavity, the side length is 1.2a and the thickness is 0.2a. For the rectangular cavity, the length and width are 1.4a and 1.2a, and the thickness is 0.2a. For the circular cavity, the radius is 0.6a and the thickness is 0.2a. For the elliptical cavity, the long-axis and short-axis lengths are 1.2a and 0.8a, respectively, and the thickness is 0.2a, where a is a length unit, e.g. one micrometer.The dashed lines in Fig.
2 illustrate the emission spectra in free space, and the solid lines present the power spectral density flowing through the monitors of the square, circular, rectangular and elliptical metallic cavities.It is shown in Fig. 2 that in the square metal and circular metal cavities, the emission spectra recorded at the notch are similar to that in the free-space environment.However, in the rectangular metal and elliptical metal cavities, the spectra are much narrower compared to that in the free-space environment; thus the emission spectra can be modified into sharp spectra, which has potential applications for designing low threshold solid lasers and gas lasers.Fig. 3 illustrates the emission spectra in free space and in the square, circular, rectangular and elliptical silicon cavities.It is shown in Fig. 3 that in the square silicon and rectangular silicon cavities, the emission spectrum recorded at the notch is a convolution of three smaller emission spectra; the main one is much higher than the others and its width is narrower than that in a free-space environment, especially in the square cavity, where the width is much narrower.In the circular silicon cavity, the spectrum is sharp and much narrower than that in the free-space environment; thus a broad emission spectrum in free space can be modified into a sharp spectrum by using this circular silicon microcavity.However, in the elliptical silicon cavity, the emission spectrum recorded at the notch includes two intense peaks.The central frequency of one peak is much lower than the free-space emission frequency, and its line width is much narrower than in the free-space environment.The central frequency of the other peak is much higher than the free-space emission frequency, and its line width is even narrower.Thus, the emission spectrum in free space can be modified into a broadband spectrum by further tailoring the elliptical silicon cavity geometry, which has potential applications for designing broadband optical devices such as fiber amplifiers and fiber sources.The fields inside the square, rectangular, elliptical and circular metallic cavities are shown in Fig. 4.Fig. 5 shows the structure and field of the double-square silicon microcavities and illustrates the emission spectra of the double-square silicon and double-square metal microcavities.For the silicon microcavity, the spectrum is a convolution of two emission spectra; the main peak is much higher than the second peak, and the whole spectral width is much wider than that in a free-space environment.For the metal microcavity, the spectrum includes one peak, and its width is nearly equal to that of the spectrum in a free-space environment.The structure and field of the double-elliptical silicon microcavities, and the emission spectra of the double-elliptical silicon and metal microcavities, are shown in Fig.
6.For the double-elliptical silicon microcavity the emission spectrum comprises two peaks.The double-elliptical silicon microcavity supports high-order modes and its emission spectrum possesses two peaks of opposite sign: the positive peak is due to the positive electric field and the negative peak is due to the field in the opposite direction.The double-elliptical silicon cavity halves the emission spectrum of the luminescent ion.The metallic double-elliptical cavity has one sharp peak.The metallic cavity can be used for designing low threshold solid lasers and gas lasers.We demonstrate the possibility of tailoring the emission spectra of luminescent ions by using geometrical microcavities and describe the mathematical relationship between the emission spectrum of a luminescent ion in free space and embedded in a microcavity.The numerical results show that the emission spectrum of an active ion from a metallic cavity is equivalent to the dot product of its emission spectrum in free space and the geometry-dependent LDOS of the cavity.Furthermore, the numerical results show that the metallic rectangular, metallic elliptical and metallic double-elliptical cavities and the circular silicon cavity can modify the spectra into sharp spectra, enabling the luminescent ions to be used for designing low threshold solid lasers and gas lasers.Square and rectangular silicon cavities modify the emission spectra into convolutions of different emission spectra of different widths.The elliptical silicon cavity and the double-square silicon cavity can broaden the spectra, enabling the luminescent ion to be used for designing broadband waveguide amplifiers and sources.Square, circular and double-square metallic cavities do not produce considerable change in the emission spectra of the luminescent ion.The double-elliptical silicon microcavity halves the emission spectrum of the luminescent ion.These microcavity geometries will be key for the development of new luminescent materials with more useful spectral properties than conventional bulk materials.
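The stated relation, that the spectrum emitted from a metallic cavity is the product of the free-space emission spectrum and the geometry-dependent LDOS, can be illustrated with a minimal numerical sketch. The Lorentzian line shapes and all parameter values below are placeholders rather than the FDTD-computed LDOS of any of the cavities above; the sketch only shows how a sharp LDOS resonance narrows the emitted line.

```python
import numpy as np

def lorentzian(w, w0, gamma):
    """Normalized Lorentzian line shape centred at w0 with half-width gamma."""
    return (gamma / np.pi) / ((w - w0) ** 2 + gamma ** 2)

def fwhm(w, s):
    """Approximate full width at half maximum of a single-peaked sampled spectrum."""
    above = w[s >= s.max() / 2]
    return above[-1] - above[0]

w = np.linspace(0.2, 0.6, 2001)                        # frequency axis in units of 2*pi*c/a

S_free = lorentzian(w, w0=0.40, gamma=0.05)            # broad free-space emission line (placeholder)
ldos_cavity = 1.0 + 25.0 * lorentzian(w, 0.41, 0.005)  # sharp cavity resonance (placeholder LDOS)
S_cavity = S_free * ldos_cavity                        # in-cavity spectrum: elementwise product

print(f"free-space line width: {fwhm(w, S_free):.4f}")
print(f"in-cavity line width : {fwhm(w, S_cavity):.4f}  (narrowed by the sharp LDOS peak)")
```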
We present a theoretical model to describe the emission spectrum of an emitter in a micro-cavity. Our model proposes that the spectrum in a metal micro-cavity depends on the product of the emission spectrum in free space and the local density of states (LDOS) of the cavity in which the emitter is placed. Since the Purcell effect can enhance the LDOS of a micro-cavity, the emission intensity of an emitter is directly proportional to the LDOS, which depends on the geometry of the microcavity. Thus, the model predicts that the spectrum of an emitter can be modified into a narrow spectrum in a microcavity possessing a sharp LDOS spectrum, and into a broadband spectrum in a cavity with a broadband LDOS spectrum. Finite-difference time-domain (FDTD) numerical simulations support our prediction.
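For context, the Purcell enhancement invoked in this summary is usually quantified, for a single cavity mode on resonance, by the textbook expression F_P = (3 / (4 pi^2)) (lambda / n)^3 (Q / V). The parameter values in the sketch below are placeholders, not results from this work.

```python
import numpy as np

def purcell_factor(wavelength_um, refractive_index, quality_factor, mode_volume_um3):
    """Single-mode, on-resonance Purcell factor F_P = (3/(4*pi**2)) * (lambda/n)**3 * (Q/V)."""
    return (3.0 / (4.0 * np.pi**2)) * (wavelength_um / refractive_index) ** 3 * quality_factor / mode_volume_um3

# Placeholder example: a silicon microcavity at 1.53 um with Q = 5000 and V = 0.1 um^3.
print(f"F_P = {purcell_factor(1.53, 3.48, 5e3, 0.1):.0f}")
```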
714
Diverse Brain Myeloid Expression Profiles Reveal Distinct Microglial Activation States and Aspects of Alzheimer's Disease Not Evident in Mouse Models
Microglia and other mononuclear phagocytes of the CNS are the primary cellular responders to CNS injury or infection.These cells originate during embryonic development when erythromyeloid progenitor cells from the yolk sac colonize the CNS and give rise to microglial cells in parenchymal tissue and to perivascular, meningeal, and choroid plexus macrophages at the interfaces between CNS and peripheral tissues.After embryonic establishment, CNS myeloid cells are stable, long-lived populations that persist into adulthood as tissue-resident macrophages.A common histological feature of many neurological diseases is “microgliosis,” involving changes in microglial cell morphology, marker expression, and sometimes proliferation.However, the scope of information gained from histological studies can be limited or even misconstrued when microglial cells are classified into purported states of polarization—such as the formerly used “M1” and “M2” macrophage states—using only a handful of markers.Transcriptional profiling of mouse microglia acutely isolated from CNS tissue has emerged as a popular method for genome-wide surveillance of microglial activation states in diverse biological settings.Many datasets with interesting findings have been generated, but how these transcriptional profiles interrelate has not been approached systematically.Here, we compared microglial/myeloid cell expression profiles from a wide range of CNS disease models including ischemic, infectious, inflammatory, tumorous, demyelinating, and neurodegenerative conditions, including new microglial expression profiles from two models of tau pathology.From this meta-analysis emerged modules of co-regulated genes related to proliferation, interferon response, endotoxin response, and neurodegenerative settings.We also used reference profiles of myeloid cells from peripheral tissues, different brain regions, different stages of brain development and aging, and single-cell analyses to provide further context for disease-related expression profiles.We then used these modules to further improve our understanding of brain myeloid activation states in Alzheimer’s disease and AD models.First, we assessed the degree to which genes from different modules depended on Trem2 for their modulation in an AD model.Second, we probed a recent single-cell RNA sequencing dataset from the same model to learn whether our modules could identify distinct microglial activation states.Finally, we analyzed expression profiles from human neurodegenerative disease tissues—including new data from fusiform gyrus of Alzheimer’s and control patients—and observed upregulation of not only the neurodegeneration-related gene modules but also the LPS-specific and neutrophil/monocyte modules in AD tissues.We also provide a public platform for exploring gene expression in all these datasets.Our analyses identified multiple dimensions of CNS myeloid cell activation that differentially respond to specific conditions.Understanding how these dimensions can be modulated may open new opportunities for neurological disease therapy.We searched the literature for gene expression studies of acutely isolated microglia/myeloid cells from adult mouse brains.The details of purification varied from study to study, but the most common strategies were selection of CD11b+, CD11b+;CD45int, or Cx3cr1::GFP+ cells by fluorescence-activated cell sorting.We also prioritized studies with at least 3 replicates per treatment group.From these, we considered but excluded “germ-free” and “nerve injury” 
perturbations, since these exhibited little reproducible signal and/or few significant changes.Our final database included 18 datasets spanning 69 different conditions and 336 individual expression profiles across a range of neurodegenerative, neoplastic, inflammatory, and infectious disease models, along with reference profiles from different developmental stages, brain regions, and myeloid cell subtypes.We also generated new RNA-seq data from Cx3cr1::GFP+ cells from the PS2APP model of AD and CD11b+ cells from hippocampi of two mouse models of tauopathy.The PS2APP data resembled what we previously described in CD11b+ cells.In the Tau models, which express either the P301L or P301S mutations associated with frontotemporal dementia and parkinsonism, we found hundreds of differentially expressed genes in the more aggressive P301S model at 6 months, but only a few tens of genes in the P301L model at the age of 12 months.Overall, the genes upregulated in P301S tended to show slight elevation in P301L, suggesting that myeloid cells in the P301L model undergo a milder form of the same phenotypic changes that occur in the P301S model.We next considered these changes in light of the other transcriptional profiles in our database.Each study was separately analyzed, whenever possible starting from raw data and using standardized pipelines.Absolute expression values were then Z score normalized within each study, and these Z scores were finally bound together across all of the studies to create one master gene expression matrix.This normalization results in relative gene expression values within each study, thus providing a statistical control for the many differences that exist between studies.In order to explore common types of microglial activation in an unbiased manner, we selected the genes differentially expressed in the greatest number of conditions.These 777 genes captured on average 36% of DEGs in the individual studies.We performed hierarchical clustering on this “master matrix” to separate these genes into enough co-regulated modules to allow us to categorize the major patterns of gene expression discernible by eye.In certain cases, we named modules of co-regulated genes based on the genes they contained or on their pattern of expression in these datasets.One such module, module 26, consisted of 82 genes mostly associated with cell proliferation, such as Mki67, Cdk1, and Plk1.These genes showed correlated expression within the various datasets.For example, they were largely unchanged in brain myeloid cells after lipopolysaccharide injection but robustly induced following injection of the virus LCMV.The fold inductions of these genes likely reflect the relative proliferative state of brain myeloid cells in these different conditions, from unchanged or only slightly elevated in most neurodegenerative models, to very high proliferative gene expression after virus treatment and during development.Beyond the proliferation module, we interpreted an additional six groupings of co-regulated genes.Three of these showed increased overall expression in multiple models of neurodegenerative disease: the core “neurodegeneration-related” module, the “interferon-related” module, and the “LPS-related” module.It should be noted that many hundreds of genes responsive to LPS and/or virus were not elevated in neurodegenerative models and did not appear in any of our modules since they did not meet the threshold of differential expression in 7 datasets.The interferon-related module included many well-known 
interferon-stimulated genes, such as members of the Oas and Ifit families, Isg20, and the transcription factors Irf7 and Stat2.These genes were most highly induced in response to virus, but also high in LPS and glioma.The module was modestly elevated in several neurodegenerative disease models, in cerebellar relative to cortical microglia, and in microglia from aged mice.The LPS-related gene set included certain inflammatory genes such as Ikbke, Cd44, Ccl5, and Tspo.They were induced highly in response to LPS and glioma, to a lesser extent in some of the neurodegenerative disease models, and hardly at all in response to virus.Notably, Tspo, which encodes the target of the widely used “microglial activation” PET tracer 11C-PK11195, was robustly induced in response to LPS but, like many LPS-stimulated genes, showed little change in the neurodegenerative models.The core neurodegeneration-related gene set was broadly elevated across most or all of the neurodegenerative disease models but changed little in response to either LPS or virus.The main grouping—like the interferon-related and LPS-related modules—typically showed elevated expression in the demyelination, ischemia/reperfusion, and glioma models and tended to be more abundantly expressed in peripheral, infiltrating, or perivascular macrophages than in bulk brain myeloid cells under normal conditions.The smaller grouping showed the peculiar combination of being normally expressed more highly in brain myeloid than peripheral/infiltrating myeloid cells, being induced to even higher levels of expression in neurodegenerative models, and typically being repressed in LPS-injected animals.Of the 134 genes comprised by the core Neurodegeneration-related gene set, 101 are annotated with a Gene Ontology term associated with either the plasma membrane or extracellular space.This suggests that microglia in neurodegenerative settings change the manner in which they interact with their environment.Possible regulators include transcription factors encoded by four genes in the set: Bhlhe40, Rxrg, Hif1a, and Mitf.10 of the genes, including the cathepsins Ctsb, Ctsl, and Ctsz are specifically associated with the “lysosome” GO term, suggesting higher lysosomal activity in these cells.Other genes of note in this set include Gpnmb, which is genetically linked to Parkinson’s disease, a biomarker for Gaucher Disease, and a known target of Mitf; Igf1, thought to be a secreted neuroprotective factor; and Apoe, the foremost genetic determinant of late-onset AD.The CD11c-encoding gene Itgax is also in the neurodegeneration-related gene set.The transcriptomes of CD11c-positive and CD11c-negative microglia isolated from mouse brains with β-amyloid pathology were recently described.Although the raw data were not publicly available, a supplemental table contained 240 genes enriched in CD11c-positive relative to CD11c-negative microglia.65 of our core neurodegeneration-related genes were in the table of CD11c-enriched genes.By comparison, this table included only 1 gene from our proliferation module, 6 genes from the LPS-related modules, and no genes from our other groupings.All together, these data indicate that core neurodegeneration-related modules represent a special activation state of brain myeloid cells largely distinct from that induced by microbial challenge and characterized by altered environmental engagement and lysosomal activity.Surprised that relatively few genes were differentially expressed in microglia from the P301L tau model, we directly 
investigated the neurodegeneration-related gene set in the P301S and P301L models.As expected, many of the genes were robustly activated in the more severe P301S model.While only a handful of the individual genes showed clear increases in P301L microglia, overall the gene set was clearly elevated.Therefore, even in the P301L model we see a signal of this distinctive, neurodegeneration-related microglial signature.Modules 2, 3, 5–7, and 9 were elevated in resident brain myeloid cells relative to infiltrating and peripheral macrophages.Among these, genes of the microglia module were unique in their specific elevation in parenchymal microglia relative to perivascular macrophages.Some of these, such as P2ry12 and Tmem119, have already been described as distinguishing microglia from other brain-resident myeloid cells.Previously published "microglia-specific" gene sets ("MG400" and "Chiu MG" genes in Figure S1) also included many genes expressed generally by brain myeloid cells, not only by microglia.Virtually all perturbations reduced the expression of the microglia module, with modest decreases in neurodegenerative models and pronounced reductions with LPS treatment.In theory, this could be due either to a change in gene expression or to partial replacement of the sorted myeloid compartment with non-microglial cells.However, in at least some of the datasets, the macrophage signature was also decreased, all but ruling out the most likely suspect for such a replacement.Recent parabiosis experiments confirmed that any contribution of blood-derived cells to the brain's myeloid population is negligible in β-amyloid models.Therefore, the decreased expression of the microglia module in neurodegenerative models likely reflects frank cell-intrinsic transcriptional modulation.The macrophage genes, including Mrc1 and F13a1, were elevated in perivascular, brain-infiltrating, and peripheral macrophages relative to microglia.Of all the disease models tested, only glioma showed pronounced elevation of these genes.Interestingly, the expression of the microglia and macrophage modules was inversely coordinated during brain myeloid cell development, with macrophage expression gradually reduced and microglia expression gradually increased from embryonic through perinatal to adult brains.By contrast, myeloid cells from cerebella, as well as from older mice of any brain region, showed the opposite pattern: slightly lower expression of the microglia module accompanied by slightly increased macrophage module expression.Finally, genes of the neutrophil/monocyte modules including Ngp and Mmp8 were identified by their elevation in neutrophils and, to a lesser extent, monocytes relative to macrophages and other immune cell types.Though mostly unchanged in neurodegeneration models, these genes were robustly elevated in LPS and glioma models, as well as in cerebellum, suggesting an increased abundance of neutrophils or monocytes in these conditions.This highlighted that preparations of myeloid brain cells, although dominated by parenchymal microglia, are complex mixtures of different cell subtypes unless extra measures are taken to exclude other myeloid cell types.Having established these gene modules relating to brain myeloid subtypes and activation states, we next explored three ways the modules could be used to better understand neurodegenerative disease.First, we studied whether the different modules depended on Trem2 for their induction in an AD model.Second, we looked at whether our modules could identify unique
subsets of microglia in a single-cell RNA-seq dataset from the same model.Third, we analyzed bulk tissue RNA profiles from human neurodegenerative disease samples to assess the degree to which the information from mouse models reflected brain myeloid activation states observed in the human diseases.Since mutations in TREM2 are among the strongest known genetic factors that elevate risk of AD, we asked whether our myeloid gene modules showed differential dependence on Trem2 for their activation.We calculated the “percent Trem2 dependence” for each DEG in the 5XFAD model.This ranged from 0%, for genes showing similar induction in Trem2KO and Trem2WT microglia, to 100%, for genes that were induced in Trem2WT but showed no induction in Trem2KO.Most genes fell between these two extremes—a diminished but not ablated response in Trem2KO animals.Notably, Apoe was among a small number of genes, also including Cd9, in the neurodegeneration-related gene set whose fold induction in 5XFAD was Trem2 independent.Interestingly, the DEGs in our proliferation, interferon-related, and LPS-related modules showed as much or greater dependence on Trem2 for their induction in 5XFAD microglia, compared to DEGs in the neurodegeneration-related gene set.The decreased expression of several genes in the microglia module, including P2ry12, also showed considerable Trem2 dependence, and this was true for most downregulated genes throughout the brain myeloid modules.These data only partly agree with a recent interpretation of single-cell RNA-seq data, also from the 5XFAD model, which reported that Apoe induction was Trem2 independent, while Cd9 induction was Trem2 dependent and repression of P2ry12 and other “homeostatic” microglia genes was Trem2 independent.Differences aside, our analysis indicates that Trem2 is required for the full transcriptional response to β-amyloid pathology across all tested gene modules.This was perhaps unexpected for DEGs in the LPS-related gene set, since Trem2 is known to restrain, not enhance, the direct LPS response.Trem2 dependence was also observed for induction of the neurodegeneration-related and LPS-related modules in the cuprizone demyelination model.Thus, context is essential for understanding the possible outcomes of Trem2 activity.The modules we have described provided information about population-wide transcriptional changes in brain myeloid cells in various settings.However, it was unclear whether these modules could be induced concurrently within individual cells or whether they represented discrete activation states.To further validate these modules and better understand their utilization, we examined their expression in a recently published single-cell RNA-seq survey of CD45+ immune cells from the 5XFAD mouse model.We recapitulated the cell clusters originally reported by the authors.As expected, the DAM cells were present almost exclusively in 5XFAD, not non-transgenic, brains and expressed the core neurodegeneration-related gene set.We also identified other interesting clusters of microglial cells.When we probed these for expression of our modules, we were able to pinpoint unique clusters of microglia expressing the interferon-related module, the proliferation module, or module 8, which consisted of the immediately early genes Fos and Egr1.These cell clusters were clearly distinct from the DAM cells that expressed neurodegeneration-related genes, indicating these modules represent discrete, possibly exclusive, microglial states.Indeed, looking within each of these cell 
clusters for expression of the other modules revealed no apparent overlap.The LPS-related gene set, in contrast, was not upregulated in one discrete cell cluster but rather appeared modestly increased in the DAM cluster, the cluster expressing interferon-related genes, and the cluster expressing proliferation genes, relative to resting microglia.Decreased expression of the microglia module was obvious only in the DAM cells but not in other microglial clusters.Other than the DAM cells, most of the microglial clusters showed no clear difference in cell numbers between 5XFAD and non-transgenic brains, with the exception of one other cluster.Cluster 13 cells expressing the interferon-related module were roughly twice as abundant in 5XFAD brains than in controls, and we also observed a similar ∼doubling of this population using single-cell RNA-seq in the PS2APP model.Our gene sets therefore aid in interpreting these complex single-cell data and highlight that interferon-related activation occurs independently and in parallel to neurodegeneration-related activation.The difficulties of post-mortem tissue acquisition have, to date, limited the availability of sorted cell expression data in human neurodegenerative disease, so we examined the expression of our gene modules in bulk tissues.We sequenced RNA from frozen specimens of fusiform gyrus of 33 neurologically normal controls and 84 autopsy-confirmed Alzheimer’s cases obtained from the Banner Sun Health Research Institute Brain and Body Donation Program.We cross-checked our findings in a previously published microarray dataset of Alzheimer’s patient “temporal cortex”.We did not analyze a highly referenced dataset from frontal cortex in which microglial content was confounded with age, which itself was not well controlled between AD and control cohorts.We also explored expression profiles in datasets from other neurodegenerative disease bulk tissues, including spinal cord anterior horn pool of sporadic amyotrophic lateral sclerosis patients, frontal cortex from PGRN mutant FTLD patients, and caudate nucleus from Huntington’s disease patients.Because signals from bulk tissue can be dominated by changes in the relative abundance of different cell types, we first examined changes in cellular composition, using cell-type-specific gene sets as a proxy for CNS cell-type abundance.We created gene sets for the major CNS cell types from mouse and human expression data and then scored these gene sets in bulk tissue expression profiles to analyze cell-type enrichment in each sample.Compared to mouse models, a greater range of cell-type variability was observed within both neurologically normal and diseased human tissues.Nonetheless, significant changes in the distribution of cell-type scores were associated with various diseases.For example, both excitatory and GABAergic neuron scores were significantly lower in most of the disease conditions relative to their controls.All diseases were associated with higher myeloid scores, although the effect was surprisingly modest in the AD datasets.Astrocyte scores were typically higher in neurodegenerative tissues but only in one of the two AD datasets.Keeping in mind the variability in cellularity, we next examined the expression of the mouse brain myeloid gene modules in the RNA expression data from human neurodegenerative and mouse model bulk tissues.Interestingly, all seven modules were at least modestly elevated in bulk tissue from PS2APP and/or SOD1G93A mouse models, even though some were not elevated or, in 
the case of the microglia gene module, even slightly lower in purified myeloid cells from these transgenic models.This highlights the challenges of untangling changes in cellularity from changes in expression using bulk tissue expression data.In human neurodegenerative tissues, many of the myeloid activation gene sets were modestly elevated in the disease conditions.However, compared to the cell-type marker gene sets, genes within in the myeloid activation modules were not as well correlated.This was not surprising since the myeloid modules were defined irrespective of gene expression levels in non-myeloid CNS cell types.Thus, trying to assess myeloid activation states in bulk tissue profiles using our gene modules was confounded by at least two factors—artificial elevation due to increased abundance of myeloid cells and obfuscation due to gene expression in non-myeloid cells.We next tried to correct for these confounding factors in AD datasets in order to more accurately assess whether our myeloid gene modules were in fact elevated in AD tissues.We noticed that almost half of the genes in the neurodegeneration-related module were enriched in AD bulk tissue, many were essentially unchanged, and a smaller number surprisingly showed lower expression in the AD cohorts.However, many of the “down” genes, but few of the “unchanged” or “up” genes, showed enriched expression in neurons relative to other cell types.Since the decreased abundance of these mRNAs in bulk tissue likely reflected neuronal loss rather than reduced microglial expression, we excluded neuron-enriched genes from our analysis.Next, since many of the remaining genes in the neurodegeneration-related gene set are enriched in myeloid cells, we created subsets of the control and AD cohorts with similar myeloid content.Even in these myeloid-balanced datasets, the neurodegeneration-related module scores were significantly higher in AD samples than controls.Therefore, elevated neurodegeneration-related scores in AD cohorts reflected, at least in part, frank transcriptional activation, and the myeloid reaction observed in mouse models also likely occurs in human patients.Following the same analysis for the other myeloid gene modules—removing neuron-enriched genes and examining myeloid-balanced cohorts—we found that some modules were still elevated in bulk AD tissue.In particular, the LPS-related and neutrophil/monocyte modules showed higher expression in both the fusiform gyrus and temporal cortex datasets.Since many genes in the LPS-related modules are somewhat elevated in neurodegenerative models, we tested whether genes elevated only in myeloid cells from LPS-treated animals, but not in any neurodegenerative models or following LCMV infection, also showed elevated expression in AD tissues.Even these “LPS-specific” genes were elevated in bulk patient tissue.Although it is not possible to confidently deduce whether this signal in bulk tissue RNA arises from altered microglial gene expression or from increased presence of peripheral myeloid cells, these results suggest an important difference between myeloid activation or recruitment in AD patient brains compared to existing mouse models of neurodegenerative disease.To enable others to explore these data, we have assembled an Excel file giving all gene annotations from all the figures, including those derived from other sources, as well as those developed in this manuscript, average expression levels in each experimental group of every dataset, and statistics for every differential 
expression analysis.Each column is “filter-ready,” enabling further mining of these data.Each row corresponds to a human gene.Data S3 contains similar data, but organized with one row per mouse gene.We also provide two smaller files, containing just the myeloid activation modules, and just cell-type markers.We have also built an interactive website at http://research-pub.gene.com/BrainMyeloidLandscape.The website provides reports for each gene and for each study.The gene reports include an overview of differential expression results across all of the studies, followed by expression plots showing the gene’s expression levels across samples in each study.The study reports include plots showing the genome-wide differential expression results.This should be a user-friendly go-to resource for scientists and enthusiasts interested in brain myeloid gene expression.We have compared the genome-wide transcriptional responses of brain myeloid cells obtained from diverse models of neurodegenerative disease, aging, viral infection, inflammatory stimulus, ischemic injury, demyelinating disorder, and brain tumor growth.From these profiles, we identified modules of genes that show similar response in multiple settings, and we have highlighted the proliferative, interferon-related, LPS-related, and core neurodegeneration-related modules.Using these modules to analyze a published single-cell RNA-seq dataset, we recognized clusters of activated microglia, showing that the interferon-related, proliferation, and core neurodegeneration-related modules represent independent activation states while the LPS-related gene set was enriched in all three activated clusters.As the identification of these modules required the genes in a given set to be differentially regulated in multiple comparisons, we also performed targeted analyses identifying LPS-specific and LCMV-specific genes.All of these gene sets, as well as the individual datasets, can be further explored using the web resource and supplemental tables we have provided.While transcriptional responses in various disease settings were diverse, one change that was almost universal in all comparisons was the downregulation of most genes whose expression in microglia/brain myeloid cells distinguishes these cells from myeloid cells/macrophages in peripheral tissues.It is curious that with any perturbation—even normal aging—the genes that set microglia apart from other tissue macrophages and are presumably involved in CNS-specialized microglial functions show reduced expression.Looking at broad trends across modules 10–25, we see that the majority of genes in the interferon-related, LPS-related, and core neurodegeneration-related modules show higher expression in various macrophage populations than in microglia, in cerebellar microglia than in cortical microglia, and in microglia from aged mice than from young mice.The adoption of certain macrophage-like properties in normal adult cerebellar microglia suggests that such changes should not be presumed to be pathological in aging or neurodegenerative settings.An important question is whether the identified brain myeloid activation signatures impact disease progression.Although many assume the microglial activation in mouse neurodegenerative models has pathogenic consequences, certain lines of evidence suggest that in fact the neurodegeneration-related response is neuroprotective.Whereas microglia from wild-type mice robustly upregulate these genes in response to β-amyloid pathology or cuprizone-induced demyelination, 
the response is notably attenuated in Trem2-deficient mice.Correlated with this attenuated response, Trem2-deficient mice had poorer outcomes: increased phosphorylated tau and axonal dystrophy in the 5XFAD and APPPS1-21 β-amyloid models and persistent demyelination and axonal dystrophy after withdrawal from prolonged cuprizone treatment.These examples of protective, Trem2-dependent microglial activation in neurodegenerative settings are consistent with human genetic evidence indicating that TREM2 hypomorphism or deficiency is associated with increased AD incidence or Nasu-Hakola disease, respectively.Further preclinical experiments will be necessary to understand how activation or suppression of the neurodegeneration-related response may alter the course of disease at different stages, whether this response is Trem2 dependent in models of tau-driven or Sod1-driven pathology, and whether Trem2 function is protective or detrimental in those models.Our analysis of bulk AD tissues suggested both similarities and differences with existing mouse models.On the one hand, the core “neurodegeneration-related” module genes were elevated in these tissues, suggesting that this type of activation is common to both mouse and human.However, genes of the “neutrophil/monocyte” module and the specially prepared “LPS-specific” module were also elevated in bulk human tissues.This suggested that human AD could involve more classical inflammatory signaling and/or peripheral immune cell infiltration than is apparent in expression profiles from mouse neurodegenerative disease models.β-amyloid pathology is known to prime microglia for augmented inflammatory response to systemic infection, which is clinically associated with accelerated cognitive decline in AD patients.The housing of laboratory mice in pathogen-free conditions results in a naive immune system with less innate immune activation, possibly reducing the transmission of inflammatory signals from the periphery into the CNS.It is also tempting to associate the lack of classical inflammatory gene expression in mouse models of β-amyloid pathology with the lack of overt neurodegeneration in those models, both in contrast to human AD tissues.Further studies on human tissue with more refined technologies—profiling purified brain myeloid cells or nuclei as a population and at the single-cell level—will clarify the extent to which these phenotypes actually occur in disease and guide attempts to model them in preclinical research.Two notable caveats of the human datasets deserve mention.First, the human expression data are from end-stage tissue and may not inform the pathogenic mechanisms of earlier disease stages.Second, stress-induced changes in inflammatory gene expression may occur post-mortem while cellular energy stores and temperature permit, and microglia from neurodegenerative tissues may be “primed” for an augmented inflammatory response.These caveats for human expression data are somewhat insurmountable given current technological and ethical constraints; thus, addressing these questions will likely require better preclinical models of human disease.Understanding the dimensions of brain myeloid cell activation defined herein, and learning how to manipulate them, may lead to novel therapeutic approaches for human neurological disease.Further details and an outline of resources used in this work can be found in Supplemental Experimental Procedures.All protocols involving animals were approved by Genentech’s Institutional Animal Care and Use Committee, in 
accordance with guidelines that adhere to and exceed state and national ethical regulations for animal care and use in research.For GSE89482, Cx3cr1::GFP mice were crossed to our PS2APP colony, and microglia from the cortex of Cx3cr1GFP/+;PS2APPnegative mice and Cx3cr1GFP/+;PS2APPhomozygous mice were compared.Hippocampal microglia were collected from hMAPT-P301Lhomozygous and hMAPT-P301Shemizygous mice and their non-transgenic littermate controls, for GSE93179 and GSE93180, respectively.At 6, 12, or 14–15 months of age, animals were perfused and processed in control and transgenic pairs using two BD FACSAria sorters simultaneously.Frozen fusiform gyrus tissue blocks and pathology/clinical reports, including age, sex, diagnosis, and Braak stage, were obtained from the Banner Sun Health Research Institute Brain and Body Donation Program in accordance with institutional review boards and policies at both Genentech and Banner Sun Health Research Institute.RNA was extracted from approximately 300 μg of frozen sections from each tissue block as described and standard polyA-selected Illumina RNA-seq analysis was performed as described on samples with RNA integrity scores at least 5 and post-mortem intervals no greater than 5 hr.Relevant gene expression datasets were identified using the criteria in Results by a combination of searches on GEO and PubMed databases with terms such as “microglia” and “neurodegeneration” and also naturally discovered as we followed the literature and presentations at scientific symposia.Datasets were processed and Z score normalized separately, and then Z scores were combined into a master matrix for hierarchical clustering, from which gene modules were defined.Differential expression statistics were calculated using limma, voom+limma, DESeq2, or Mann-Whitney tests, as described in Supplemental Experimental Procedures.Unpaired t tests were used to compare gene set scores and immunohistochemistry intensities, as described in figure legends.
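The module-derivation workflow described above (per-dataset Z-scoring of expression matrices, concatenation into a master matrix, and hierarchical clustering of genes into co-regulated modules) can be illustrated with a minimal sketch. This is not the authors' pipeline: the dataset contents, the correlation-distance/average-linkage choices and the number of modules below are assumptions for illustration only.

```python
# Minimal sketch of gene-module derivation: Z-score each dataset separately,
# combine Z scores on shared genes, then cut a hierarchical clustering of genes
# into modules. All inputs below are random placeholders.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

def zscore_by_gene(expr: pd.DataFrame) -> pd.DataFrame:
    """Z-score a genes x samples matrix within one dataset (gene-wise mean 0, SD 1)."""
    return expr.sub(expr.mean(axis=1), axis=0).div(expr.std(axis=1), axis=0)

def derive_modules(datasets: list[pd.DataFrame], n_modules: int = 20) -> pd.Series:
    """Combine per-dataset Z scores on shared genes and cluster genes into modules."""
    shared = sorted(set.intersection(*(set(d.index) for d in datasets)))
    master = pd.concat([zscore_by_gene(d.loc[shared]) for d in datasets], axis=1)
    master = master.dropna()  # drop genes with zero variance in any dataset
    tree = linkage(master.values, method="average", metric="correlation")
    labels = fcluster(tree, t=n_modules, criterion="maxclust")
    return pd.Series(labels, index=master.index, name="module")

# Example with two placeholder "datasets" over the same genes
rng = np.random.default_rng(0)
genes = [f"gene{i}" for i in range(200)]
d1 = pd.DataFrame(rng.normal(size=(200, 8)), index=genes)
d2 = pd.DataFrame(rng.normal(size=(200, 12)), index=genes)
print(derive_modules([d1, d2], n_modules=5).value_counts())
```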
Microglia, the CNS-resident immune cells, play important roles in disease, but the spectrum of their possible activation states is not well understood. We derived co-regulated gene modules from transcriptional profiles of CNS myeloid cells of diverse mouse models, including new tauopathy model datasets. Using these modules to interpret single-cell data from an Alzheimer's disease (AD) model, we identified microglial subsets—distinct from previously reported “disease-associated microglia”—expressing interferon-related or proliferation modules. We then analyzed whole-tissue RNA profiles from human neurodegenerative diseases, including a new AD dataset. Correcting for altered cellular composition of AD tissue, we observed elevated expression of the neurodegeneration-related modules, but also modules not implicated using expression profiles from mouse models alone. We provide a searchable, interactive database for exploring gene expression in all these datasets (http://research-pub.gene.com/BrainMyeloidLandscape). Understanding the dimensions of CNS myeloid cell activation in human disease may reveal opportunities for therapeutic intervention. Ready to move beyond M1 and M2? In this meta-analysis of CNS myeloid cell expression profiles, Friedman et al. identify gene modules associated with diverse microglial activation states. These modules identify distinct subsets of microglia in an Alzheimer's model and reveal aspects of the human disease not apparent in mouse models.
715
Experiment and simulation calculation of micro-cavity dielectric barrier discharge
Dielectric barrier discharge (DBD) readily produces uniform and stable low-temperature plasma and has been widely used in fields such as aerodynamics, aerothermodynamics, material surface modification, disinfection, waste gas treatment and plasma chemical vapor deposition .Many researchers have studied the discharge mechanism and characteristics of DBD through experimental measurement, modeling and simulation .Blennow established a simulation model based on the breakdown electric field of a uniform field in air and obtained the charge distribution in the DBD discharge space .Boeauf established a simulation model using the Poisson equation and the particle continuity equation and analyzed the related plasma kinetics and the discharge process .Valdivia established a simulation model of a wire-tube electrode structure based on a voltage-controlled current source and analyzed the effect of power-frequency variation on the discharge characteristics .Shao and Song have also carried out detailed simulation studies of DBD .DBD has generally been studied using plate-plate or coaxial electrode structures, whereas electrodes with small curvature radius and micro-structure have rarely been reported.Micro-structure discharges also exhibit many interesting phenomena .In this paper, micro-cavity dielectric barrier discharge (MDBD) with each cavity smaller than 1 mm is studied.Because of its special micro-structure, MDBD has stable pulses and low mean electron energy, so it forms a more uniform and stable discharge than conventional plate-plate DBD, whose filaments carry high current pulses.In order to study the discharge mechanism and parameters of MDBD, an experimental platform based on a grid micro-structure electrode device on the surface of a dielectric plate was built.The MDBD equivalent circuit was established on the basis of the characteristics of the micro-cavity structure and of DBD, and the Kirchhoff voltage equation was obtained from the equivalent circuit.Because of the rapidity and complexity of the discharge process in the micro-cavity and the limitations of the measurement methods, an accurate and comprehensive picture of the variation of the discharge parameters cannot be obtained through experiment alone.It is therefore important to study the discharge mechanism of MDBD by modeling and simulation.Fig. 1 is the schematic diagram of the micro-cavity electrode device, which consists of a high-voltage electrode, a grounding electrode, micro-cavities and a polyimide dielectric layer.The high-voltage electrode and the grounding electrode are located at the center of the front and back of the dielectric layer, respectively, and both are 30 mm long.The length of a single micro-cavity l and the thickness of the polyimide dielectric layer d are both 1 mm, the distance between adjacent micro-cavities l1 is 0.2 mm, and the thickness of the copper-clad electrode and dielectric layer l2 is 70 µm.Fig. 2 is the schematic diagram of the electrical wiring of the MDBD experimental device at atmospheric pressure.A and B are the high-voltage electrodes corresponding to Fig.
1, and C is the grounding electrode.The experiment was carried out at atmospheric pressure.The plasma power supply used in the experiment was a Coronalab CTP-2000K.The amplitude and frequency of the applied voltage were 13.53 kV and 8.5 kHz, respectively.The oscilloscope and the high-voltage probe were a UTD2052CL and a Tektronix P6015A, respectively, and the attenuation coefficient of the high-voltage probe is 1000.The voltage signal was attenuated 1000 times by the high-voltage probe and then fed into channel CH1 of the oscilloscope.A measurement capacitor and a resistor are connected in series in the discharge circuit to obtain the discharge charge and discharge current; their values are 0.22 µF and 50 Ω, respectively.The voltage signal on the measurement capacitor is fed into channel CH2 of the oscilloscope.Under the applied electric field the electrons move extremely fast, because the mass of the electron is very small.The moving electrons collide constantly with gas molecules, producing a large number of new electrons.When the applied voltage is large enough, the discharge begins.Electrons that reach the dielectric layer accumulate on its surface and interact with the positive charges in the micro-cavity air gap to form an opposing additional electric field.This additional electric field increases gradually due to the continuous accumulation of electrons, and its direction is opposite to the applied electric field, so the total electric field of the discharge gap decreases gradually.The discharge is interrupted when the total electric field of the discharge gap falls below the breakdown electric field.The initiation and interruption of the branch discharge can be treated as equivalent to a change of the plasma resistance.Electrons accumulate on and are released from the surface of the dielectric layer with the periodic change of the applied voltage, so the accumulated electrons on the dielectric surface can be regarded as a virtual electrode that affects the discharge process .The variation of the MDBD discharge characteristic parameters, such as air gap voltage, dielectric surface voltage, air gap equivalent resistance, current, electron density and electron temperature, can be obtained with Matlab/Simulink software.Figs.
4 and 5 show the MDBD U–I characteristic curves under a sine-wave voltage obtained by experiment and by simulation, respectively.As can be seen from the figures, the discharge begins when the externally applied voltage reaches the air gap breakdown voltage.A large number of high-speed electrons accumulate on the surface of the dielectric layer.The accumulated electrons interact with the positive charges in the micro-cavity air gap to form an opposing additional electric field, whose strength gradually increases due to the continuous accumulation of electrons, so the air gap voltage slightly decreases even though the applied voltage increases.The discharge current reaches its peak before the applied voltage does and rapidly decreases to zero as the applied voltage decreases, so the phase of the discharge current leads the phase of the applied voltage.This phenomenon is explained in Section 4.2.The current waveform obtained by simulation is the envelope of the discharge pulses and does not fully reflect the capacitive current or individual discharge pulses.This is because the simulation model is established under ideal conditions and cannot fully reflect the effect of single filament discharges.Comparing Figs. 4 and 5, the U–I characteristic curves and the discharge times are basically consistent.The peaks of the discharge current obtained by simulation and by experiment are 23 mA and 26 mA, respectively, so the simulation model established in this paper is accurate and effective.Fig. 7 shows the variation of the air gap voltage and the dielectric surface voltage with time obtained from the simulation.In Fig. 7, the air gap voltage increases gradually with the applied voltage before air gap breakdown, but decreases slightly during discharge.Comparing Figs. 5 and 7, the phase of the discharge current agrees with the slight change of the air gap voltage during discharge.The air gap voltage reaches its peak value at about 3.5 kV, while the discharge current reaches its peak at about 23 mA.Fig. 8 shows the variation of electron density at different discharge times.The initial electron density before breakdown of the discharge gap is 1.06 × 1013 m−3.Under the electric field the electrons move extremely fast because the mass of the electron is very small.The moving electrons collide constantly with gas molecules, producing a large number of new electrons.Consequently the electron number density increases rapidly and reaches its maximum value, about 1.6 × 1016 m−3, when the current reaches its peak.Comparison with the U–I characteristic curve shows that the variation of electron density is consistent with that of the discharge current, indicating that the discharge current of the gap is mainly carried by electron motion.Fig. 9 shows the variation of plasma resistance obtained by simulation.The dynamic change of plasma resistance during the discharge process reflects the initiation and interruption of the discharge.As shown in Fig.
9, the plasma resistance reaches its maximum value, about 0.27 MΩ, before breakdown of the discharge gap.According to the conductivity formula, the electrical conductivity increases with increasing electron density, which causes the plasma resistance to decline rapidly.The resistance falls to its minimum value, about 0.03 MΩ, when the discharge current reaches its peak.Fig. 10 shows the variation of the electron temperature with time, obtained through BOLSIG+ software and the simulation calculation.As shown in Fig. 10, the electron temperature reaches its peak when the electron density reaches its peak during the discharge process.The peak and average values of the electron temperature are 3.0 eV and 1.6 eV, respectively, and the trend of the electron temperature at different discharge times is consistent with that of the current.In this paper, an experimental platform based on the micro-structure electrode device on the dielectric panel surface was built, and the equivalent circuit was established based on the physical discharge process and the experimental results.A simulation model was established based on the Kirchhoff voltage equation, the Boltzmann equation, electrical conductivity calculations and the electron continuity equation, and the variation of the MDBD air gap voltage, dielectric surface voltage, discharge current, plasma resistance, electron density and electron temperature was obtained with Matlab/Simulink and BOLSIG+ software.The following conclusions can be drawn. The experimental platform and the equivalent circuit were established on the basis of the physical process of the MDBD discharge and the experimental results.The MDBD U–I characteristic curves were obtained by experiment and by simulation.Comparison of the simulation results with the experimental results shows that the simulation model can be used to describe the discharge characteristics of MDBD. Under the experimental conditions of this paper, the simulation results show that the air gap voltage remains at about 3.5 kV after air gap breakdown; the periodic change of electron density is consistent with that of the discharge current, and at the peak of the discharge current the electron density reaches its maximum value of 1.6 × 1016 m−3. The electron temperature was obtained through BOLSIG+ software and the simulation model.The electron temperature reached its peak of 3.0 eV when the electron density reached its peak during the discharge process.The variation of the electron temperature is consistent with that of the current.The reduced electric field is about 40 Td after breakdown of the discharge gap, so the electron temperature of MDBD is lower than that of conventional DBD.
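To make the equivalent-circuit approach concrete, the following is a minimal sketch of a lumped DBD/MDBD circuit model driven by Kirchhoff's voltage law: a dielectric-barrier capacitance in series with an air gap represented by a gap capacitance in parallel with a two-state plasma resistance that switches at a breakdown threshold. This is not the authors' Matlab/Simulink model; the component values (C_d, C_g, R_on, R_off, V_break) are illustrative assumptions, and only the applied-voltage amplitude and frequency are taken from the text above.

```python
# Minimal sketch of a DBD/MDBD equivalent-circuit simulation based on Kirchhoff's laws.
# All component values below are assumed for illustration, not the paper's parameters.
import numpy as np
from scipy.integrate import solve_ivp

C_d = 50e-12      # equivalent dielectric-barrier capacitance [F] (assumed)
C_g = 10e-12      # equivalent air-gap capacitance [F] (assumed)
V_break = 3.5e3   # gap breakdown voltage [V] (assumed)
R_off = 0.27e6    # plasma resistance before breakdown [ohm] (assumed)
R_on = 0.03e6     # plasma resistance during discharge [ohm] (assumed)
V0, f = 13.53e3, 8.5e3   # applied-voltage amplitude [V] and frequency [Hz]

u  = lambda t: V0 * np.sin(2 * np.pi * f * t)            # applied voltage
du = lambda t: 2 * np.pi * f * V0 * np.cos(2 * np.pi * f * t)

def R_p(u_g):
    """Crude two-state plasma resistance: low once the gap voltage exceeds breakdown."""
    return R_on if abs(u_g) >= V_break else R_off

def rhs(t, y):
    u_g = y[0]
    # barrier capacitance in series with (gap capacitance || plasma resistance)
    du_g = (C_d * du(t) - u_g / R_p(u_g)) / (C_d + C_g)
    return [du_g]

sol = solve_ivp(rhs, (0, 3 / f), [0.0], max_step=1e-8, dense_output=True)
t = np.linspace(0, 3 / f, 5000)
u_g = sol.sol(t)[0]
i_total = C_d * (du(t) - np.gradient(u_g, t))   # external-circuit current
print(f"peak current ~ {1e3 * np.abs(i_total).max():.1f} mA")
```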
In order to study the discharge mechanism and the evolution of the discharge parameters of micro-cavity dielectric barrier discharge (MDBD), an experimental platform based on a dielectric panel surface grid micro-structure electrode device was built. A discharge equivalent circuit of the MDBD was established based on a detailed analysis of the physical discharge process and the experimental results. Then, using Matlab/Simulink and BOLSIG+ software, we solved Kirchhoff's voltage equation, the Boltzmann equation and the electron continuity equation to obtain the variation of the discharge characteristic parameters, including the air gap voltage, the dielectric surface voltage, the electron density and the electron temperature. The results show that the gas gap voltage and the dielectric surface voltage decrease slightly during discharge, and that the electron temperature and electron density vary consistently with the discharge current. The maximum electron temperature is about 3.0 eV, the average value is about 1.6 eV, and it is lower than that of conventional dielectric barrier discharge (DBD).
716
The phage growth limitation system in Streptomyces coelicolor A(3)2 is a toxin/antitoxin system, comprising enzymes with DNA methyltransferase, protein kinase and ATPase activity
Bacteria have evolved a plethora of diverse mechanisms to evade killing by bacteriophages.The mechanisms can act at any stage of the phage life cycle from preventing phage adsorption right through to inhibition of cell lysis and release of progeny phage.The phage growth limitation system of Streptomyces coelicolor confers resistance against the temperate bacteriophage ϕC31 and its homoimmune relatives.This system is called ‘growth limitation’ as phage infecting a Pgl+ strain for the first time undergoes a normal single burst to produce progeny phage but the progeny is attenuated for growth in a second round of infection.The progeny is however able to form normal plaques on a Pgl− host.The mechanistic explanation of the Pgl phenotype proposed by Chinenova et al. is that during the initial round of infection the progeny phage are modified and then restricted in the second round of infection.This system differs fundamentally from typical R–M systems, where, if the phage DNA becomes modified, there can be an escape from restriction and rapid spread of infection through sensitive bacteria.In Pgl, however, even if modification fails during the first burst, it is likely that it will occur in subsequent cycles thereby severely limiting spread of infection.Furthermore Pgl may confer an added advantage to a clonal population as being Pgl+ amplifies phage that might infect and kill phage-sensitive competitors.Previous work has identified four genes, located in two operons 6 kbp apart, required for Pgl in S. coelicolor.Bioinformatic searches on the products of the pgl genes revealed several protein motifs.The product of PglX is predicted to bind AdoMet and has an N6-adenine DNA methyltransferase motif, PglW is a putative serine/threonine protein kinase that contains a typical Hanks-like protein kinase domain, PglY possesses putative Walker A and Walker B motifs for the binding and hydrolysis of ATP/GTP and PglZ is predicted to have a conserved alkaline phosphatase-like fold.While the predicted function of PglX as a DNA methyltransferase fits well with the proposed mechanism by Chinenova et al. 
the Pgl system appears to involve additional activities that are novel to R–M systems, not least the putative kinase activity of PglW.Transcriptional analysis of pgl genes indicates that both operons are transcribed, even in the absence of phage infection.While transcriptional up-regulation of these genes is plausible during infection, we cannot rule out a role for the control of protein function through phosphorylation given that PglW is a predicted protein kinase.Indeed it has been shown that two isoforms of PglZ were detected with different isoelectric points in 2D-PAGE gels, suggesting that this protein could be post-translationally modified.Another feature of the Pgl system is that it is subject to high frequency phase variation in which a Pgl+ strain gives rise to a Pgl− strain with a frequency of 10−2 to 10−3 and Pgl− to Pgl+ with a frequency of 10−3 to 10−4.The phase variation has been attributed to a variation in length of a G tract in pglX.Switching the phage resistance on and off may help to ease the strong selection within the phage population to mount a counter defence.Here we set out to test the major bioinformatic predictions of Pgl protein functions.In the process we identified a toxin/antitoxin system comprising a toxic PglX protein, shown to be a DNA methyltransferase, and an antitoxin, PglZ.We also demonstrate that the protein kinase activity of PglW, and the Walker A motif of PglY are required for a functional Pgl system.We present a model as to how the Pgl proteins might confer phage resistance in S. coelicolor through an elaborate and novel R–M-like system.We set out to test whether mutations in the predicted functional motifs of the Pgl proteins were required for the phage defence phenotype.The strategy used was to generate knock-out or null mutations in each pgl gene and then complement these null mutants with either wild type or mutant alleles introduced ectopically into the chromosome using the ϕC31 int/attP system.Where possible we obtained further proof of Pgl protein function by heterologous expression of Pgl protein in Escherichia coli and biochemical assay of the partially purified extracts.Bioinformatic analysis of the PglX sequence suggested the presence of an N6-adenine methyltransferase motif at 378–381 aa.A ΔpglX null mutant, SPHX, was constructed using REDIRECT technology that involves recombineering of the kanamycin-resistant cosmid SCIF2, replacing the pglX coding sequence with an apramycin resistance gene.The apramycin resistance gene is flanked by loxP sites and also contains an origin of transfer to enable conjugation of the cosmid into the Pgl+ S coelicolor strain, M145.Double crossovers that retain the apramycin marker but had lost the kanamycin resistance marker were then infected with Cre-phage in a transient infection to remove the apramycin resistance marker.SPHX was sensitive to ϕC31 and could be complemented by the introduction of pPS8003 encoding a His-tagged version of PglX.To test whether the putative methyltransferase domain of PglX was necessary for Pgl function, the tyrosine residue of the conserved NPPY motif was targeted by site directed mutagenesis.This tyrosine residue in other methyltransferases is structurally essential for catalysis where it is required for flipping out the target base from the DNA double helix prior to methylation.The resulting plasmid, pPH1002 was conjugated into SPHX to test in vivo activity of the mutated pglX allele.PglXY381A did not restore phage resistance to SPHX suggesting that the putative 
methyltransferase motif is essential for the Pgl system.To assay DNA methyltransferase activity of PglX, attempts were made to express a C-terminally His-tagged PglX fusion in E. coli and to enrich extracts by affinity chromatography.The 136 kDa PglX–His6 from E. coli was barely detectable using a 6× His antibody in Western blots of the enriched proteins.Nevertheless, low levels of methyltransferase activity using ϕC31 DNA as a substrate were observed.In a time course of methyltransferase activity, incorporation of 3H-methyl groups into TCA precipitable material increase over the period of 60 min and the level of incorporation was dependent on the amount of protein added.An extract of PglXY381A–His6 was prepared in an identical procedure to the expression and enrichment of PglX–His6, but methyltransferase activity was almost undetectable, ruling out the possibility that the observed activity was due to endogenous E. coli enzymes.Further controls confirmed that incorporation of label into TCA precipitable material was dependent on DNA addition and could be competed by addition of unlabelled AdoMet.These data strongly suggest that PglX can methylate DNA in an in vitro assay.In silico predictions on the function of PglZ are limited to a region annotated as a ‘PglZ domain’, which falls within a family of proteins called the ‘alkaline phosphatase clan’.We attempted to create a knockout mutant of pglZ in M145.REDIRECT technology was used to replace the pglZ ORF in the cosmid SC4G2, with the apramycin marker generating a cosmid SC4G2:ΔpglZ::apra.When this cosmid was introduced into M145 by conjugation, an extremely low frequency of double recombinants was obtained.Three putative M145::ΔpglZ colonies were propagated and were found to be phage sensitive as expected.However, introduction of pglZ–His6 encoded by the integrating plasmid, pPH1001, was unable to complement the phage sensitive phenotype of any of these recombinants.To demonstrate that pPH1001 was able to complement a pglZ− defective allele, the plasmid was introduced into J1934, a Pgl− strain constructed by Bedford et al. 
by insertional inactivation resulting in the deletion of the 3′ end of pglZ encoding the C-terminal 130 amino acids.The plasmid pPH1001 complemented the pglZ1-834 allele in J1934 to give phage resistance.These data suggest that the M145::ΔpglZ strains made by the REDIRECT approach had acquired a secondary mutation, possibly in one of the other pgl genes.The most likely site for a second site mutation is the G-tract present within pglX that had been shown previously to inactivate Pgl.Sequencing through the G-tract indicated that in two of the three M145::ΔpglZ strains the number of G residues had contracted, while in the third strain the G-tract was as in the Pgl+ wild type M145; however, this strain was also not complemented by pPH1001, indicating that the secondary mutation must be elsewhere in the genes required for Pgl.The provision of a second copy of pglZ in the Pgl+ wild type strain M145 enabled the disruption of the native pglZ at a frequency of recombination that is typical when an inessential gene is targeted, such as pglW by the cosmid SCIF2:ΔpglW::apra.To confirm the essentiality of an intact pglZ, we first performed alignments of related PglZ proteins from the sequence databases to identify conserved residues that we could target for mutagenesis.Alignment of PglZ homologues indicated the presence of conserved residues D535 and D694.These residues fall within the predicted alkaline phosphatase fold.D535 and D694 in PglZ were both changed to alanine by site directed mutagenesis in the plasmid pPS5045 and the mutant alleles were subcloned into the integrating vector pIJ6902 to generate pPH1007 and pPH1008.After conjugation of pPH1007 and pPH1008 into J1934, containing the pglZ1-834 allele, the exconjugants failed to complement the Pgl− phenotype, indicating that both pglZD535A and pglZD694A alleles were defective.Plasmid pPH1008 was then conjugated into M145 and exconjugants were used as recipients for the SC4G2:ΔpglZ::apra cosmid, but the formation of pglZ knockout strains was again prevented.These experiments confirmed that the pglZD694A allele was unable to confer antitoxin activity in the presence of an otherwise intact Pgl system.These observations indicate that PglZ may be interacting with PglX to inhibit a toxic activity.To test whether mutations in other pgl genes would permit the disruption of pglZ, the cosmid SC4G2:ΔpglZ::apra was conjugated into strains SPHX, SPHW and SLMY.Deletions of pglZ were obtained at low frequency, similar to that observed for the deletion of pglZ in a wild type background, except in SPHX, where deletion of pglZ was obtained at the normal frequency.The G tracts in one SPHW::ΔpglZ strain and one SLMY::ΔpglZ strain were sequenced and they had suffered a contraction and an expansion, respectively.These data are indicative of a specific suppression of a toxic activity of PglX by PglZ.We hypothesise that the pglZ1-834 allele in J1934 might retain the protective activity required against an intact pglX.If PglZ prevents the toxic activity of PglX, then the presence of a signalling mechanism that regulates Pgl activity would enable tight control of the system and prevent unwanted toxicity.Bioinformatic analysis of the PglW sequence reveals the presence of two putative protein kinase domains; a tyrosine kinase domain at 195–490 aa and a putative serine/threonine protein kinase domain at 530–816 aa.However only the putative STPK domain has a predicted ATP binding site that includes the central core of the catalytic loop and the invariant
lysine.This residue was targeted by asymmetric PCR mutagenesis.It has previously been shown that the equivalent residue to K677 is absolutely conserved in the ATP binding domain of such proteins, and it is believed to be essential for autophosphorylation and the phosphotransfer reaction.The resulting plasmid pPH1012 was conjugated into SLMW to test whether the pglWK677A–His6 allele could complement for the deletion of pglW and restore phage resistance.The PglWK677A–His6 mutation resulted in a phage-sensitive phenotype, indicating that the putative Hanks-like protein kinase domain is both functional and required for a Pgl+ phenotype.The gene encoding PglW–His6 was cloned into the E. coli expression vector pT7-7 to generate pPS5012 and introduced into E. coli BL21 DE3.No expression of PglW–His6 was detected.Examination of the codon usage at the start of the pglW ORF showed the presence of several codons that are rarely used in E. coli.The first 19 codons were therefore optimised for expression in E. coli generating the expression plasmid pPS5025.The frequency of transformation of E. coli BL21 DE3 by pPS5025 was very low and expression trials of the few colonies that were obtained indicated the presence of insoluble and truncated proteins.It seems likely that PglW–His6 is toxic in E. coli BL21 and only plasmids that have suffered mutations can be established, explaining the poor transformation frequencies.At this stage we do not understand the basis for the toxicity of PglW although PglW has a putative, but as yet uncharacterised, N-terminal nuclease-related domain motif.The plasmid pPS5025 was used as a template in an in vitro expression system to generate sufficient full length PglW–His6 to test for autokinase activity.Incubation of PglW–His6 with ATP resulted in autokinase activity of the protein, resulting in incorporation of radioactive phosphate.A control reaction, incorporating 35S-methionine into in vitro expressed PglW–His6 showed that the autophosphorylating band had the same mobility as PglW–His6.The K677A mutant allele of PglW was also tested in the same assay for its ability to autophosphorylate; however, no radioactive signal could be detected, indicating that this residue is essential for the autophosphorylation reaction.These data are consistent with PglW forming part of a signal transduction system that senses the modification state of phage DNA during infection of a Pgl+ cell.Bioinformatic analysis of the PglY sequence suggested the presence of a Walker A motif at 75–82 aa.It has previously been shown that these motifs are involved in nucleotide binding, and is found in many protein families.We decided to test whether the ATPase motif is required for resistance to ϕC31 as a means to determine the role PglY might have in the Pgl phenotype.The essential consecutive lysine and serine residues were targeted by site directed mutagenesis in the plasmid pPS8008, which encodes PglY.The resulting plasmid pPH1003 was conjugated into SLMY.The K81A/S82A double substitution in PglY resulted in a phage-sensitive phenotype suggesting that ATP binding and/or hydrolysis is essential for conferring resistance to bacteriophage infection via the Pgl system.N-terminally His-tagged PglY was expressed in E. 
coli and purified by nickel affinity chromatography.A single band was observed at ~160 kDa; this band was excised and subjected to peptide mass fingerprinting by MALDI–TOF mass spectrometry and positively identified as PglY.The same approach was subsequently used for purifying His6–PglYK81A/S82A.The His-tagged wild-type and mutant PglY proteins were tested for their ability to bind and hydrolyse ATP.His6–PglY was found to hydrolyse ATP with a substrate affinity for ATP of 0.5 mM, the KM of the mutant protein His6–PglYK81A/S82A was 200-fold higher.These data indicate that PglY is a functional ATPase.PglY nucleotidase activity was specific for ATP, and was found not to hydrolyse the other nucleotides tested.The non-metabolisable ATP analogue, ADP-NP was tested for its ability to inhibit the activity of PglY in vitro.The addition of equivalent molar amounts of ADP-NP and ATP, and subsequent 2-fold dilutions of ADP-NP in each assay resulted in a severely inhibited rate of hydrolysis of ATP, indicative of ADP-NP binding and inhibition of activity.Taken together these data indicate an essential role for the ATPase activity of PglY in the Pgl system.Previous work describes the phenotype of Pgl in which an infection of S. coelicolor Pgl+ by phage ϕC31 undergoes a single burst but subsequent infections by progeny phage of S. coelicolor Pgl+ are attenuated.A logical mechanistic explanation of these observations is that the phage is modified in the first infection but the second infection is restricted.Modified phage is able to proceed through a normal infection cycle in a Pgl− strain.In this work we demonstrated that PglX is indeed a DNA methyltransferase, as predicted by the bioinformatics searches.A strong genetic interaction between pglX and pglZ implies that PglX is toxic and that toxicity is suppressed in strains that are pglZ+ or contain the truncated pglZ1-834 allele in J1934.This interaction resembles a toxin/antitoxin system.Many phage resistance mechanisms rely on some type of toxin/antitoxin pair to enable phage restriction and host immunity.Examples include the restriction–modification systems where host protection from the endonuclease is usually conferred by DNA modification.However S. coelicolor is unusual as it is known to contain methyl-specific endonucleases.It is therefore feasible that it is the DNA methyltransferase activity of PglX that is toxic in S. coelicolor and the antitoxin, PglZ, inhibits this.Although we have shown that PglX has methyltransferase activity in vitro we were not able to demonstrate the presence of methylated DNA from progeny phage or from M145 undergoing an infection using antibodies against DNA containing N6-methyladenine.Recently a phage defence system from Bacillus cereus called bacteriophage exclusion or BREX, has been described, that is mediated in part by a homologue of PglX.In the BREX system the target for the methyltransferase encoded by the B. 
cereus PglX homologue was elucidated by PacBio sequencing.The target, TAGGAG, is modified in uninfected cells, but infecting phage DNA was not modified despite containing multiple target sites.In the BREX system phage replication is prevented in the first infection, apparently through cessation of phage DNA replication.Thus although BREX and the Pgl systems are mediated by a core of orthologous proteins it seems there are significant differences in their mechanisms of resistance.We hypothesise that Pgl is adapted to cause phage resistance in the context of a host in which there is no detectable DNA methylation and in which there appears to be general methyl-specific restriction.In this context host DNA might be unmethylated during growth of uninfected host cells and the Pgl system would be inherently toxic unless the methyltransferase activity is highly regulated.We propose therefore that any DNA methylation occurs only during phage infection.The ATPase activity of PglY was also shown to be required for phage resistance, possibly implying the need for a motor to drive a processive activity, similar to Type I R–M systems.The model proposed by Chinenova et al. suggested that Pgl would differ from all known methyl-specific restriction systems owing to the proposed marking of phage or phage DNA and flagging it for restriction in later rounds of infection.We propose a model as to how the activities of the Pgl proteins could mediate the Pgl phenotype in vivo.The model resembles a Type I R–M system in which the modification and restriction activities are governed by the modification status of the DNA.We propose that the Pgl proteins switch between three activity states: resting, modifying and restricting.In uninfected cells we propose that the Pgl proteins are in a ‘resting complex’ in which PglZ suppresses the toxic activities of PglX.Evidence that the Pgl proteins are transcribed was obtained in previous work and PglZ was detected in the S. 
coelicolor proteome.In the model infection by phage coming from a Pgl− host causes a change in the activities of Pgl proteins to modify progeny phage, most likely by N6-adenine methylation through the activity of PglX.The putative N6-adenine methyltransferase activity could modify all the DNA in the infected cell or just targets in ϕC31 and its relatives.The trigger for switching between resting and modifying activities of the Pgl proteins might also be specific to infection by ϕC31 and its relatives.To avoid restriction of the modified phage DNA in the first infection cycle, we propose that Pgl proteins in the modifying state cannot switch directly to a restricting state or be reversed back to the resting state.We also propose that host methyl-specific restriction enzymes either do not recognise the modification in progeny phage or that the modification occurs late in the phage replication cycle.Thus in agreement with the observations of Chinenova et al., modified phage progeny are released in a normal burst.Infection of Pgl+ strains with this modified phage triggers the activation of the restricting activity of Pgl.The mechanism of restriction is not clear but could be mediated by the PglW NERD domain, the host cell methyl-specific restriction endonucleases or an unidentified motif in PglX that is responsible for conferring toxicity.If the latter is the case, PglX could resemble some Type IIS restriction/modification systems that are contained within a single polypeptide.We propose that PglW and PglZ sense the phage infection and/or presence of modified or unmodified phage DNA and control the activity of PglX and other Pgl proteins.As PglW has kinase activity and as PglZ have been identified in two isoforms in a proteomics experiment, we propose that control of the Pgl activity state is dependent on phosphorylation.It is known that bacteriophage are an important force for driving bacterial evolution, and the evolution of several types of phage resistance mechanisms, important for avoiding infection and lysis suggests evading the deleterious effects of phage is highly selective.Data presented here indicate that Pgl is a complex R–M-like system that demonstrates yet further biological novelty in the cellular mechanisms of defence against bacteriophages.The S. coelicolor strains used in this study are summarised in Table 1.All strains were cultivated on mannitol and soya flour agar.Plaque assays were performed as in Kieser et al. on Difco nutrient agar.Bacteriophage ϕC31 cΔ25 was used throughout this work as described in Kieser et al.Plasmids were conjugated into Streptomyces from the E. coli strain ET12567 containing pUZ8002 to provide the transfer functions.Plasmids used are summarised in Table 2.Plasmids for expression of the His-tagged Pgl proteins in E. 
coli were made as follows starting with the PglX–His6 expression plasmid, pPS5032: A 5772 bp BamHI fragment encoding PglX from pPS1001 was inserted into BamHI cut pARO191 to generate pPS2002.An NdeI site was introduced at the pglX start codon by replacing a 482 bp HindIII–FseI fragment with a 408 bp PCR fragment generate using primers MMUTF and MMUTR and cut with HindIII and FseI to generate pPS3009.DNA encoding the His6 tag was added to the 3′ end of pglX by replacing a 46 bp AatII fragment with a PCR fragment cut with AatII generated from primers IF2 new and MENDSQ to form pPS5028.The ORF encoding PglX–His6 was then cut out from pPS5028 with the NdeI and BamHI sites and the fragment was inserted into NdeI–BamHI cut pT7-7 to form pPS5032.The PglZ–His6 expression plasmid, pPS5045 was made as follows: A 5269 bp SstI–BsiWI fragment from pPS5001 was replaced with a 218 bp SstI–BsiWI fragment generated by PCR with primers ZNT and ZHisR to introduce an NdeI site at the start of pglZ, creating pPS5042.A HindIII site was then introduced just before the stop codon in pglZ by replacing a 282 bp SphI fragment with a PCR fragment from primers ZCHisF and ZCHisR, generating plasmid pPS5043.The NsiI–HindIII fragment was then inserted into pPSCHis fusing the 3′ end of pglZ to DNA encoding an in frame His6-tag, forming pPS5045.The PglW–His6 expression plasmid, pPS5012, was constructed as follows: The 1 kbp XbaI–NotI fragment from pPS1001 was replaced with a PCR fragment generated using primers KtipF and KtipR resulting in an NdeI overlapping the start of the pglW ORF, generating pPS3005.A DNA fragment encoding the His6-tag was then added at the 3′ end by replacing the BamHI–EcoRI fragment in pPS3005 with a PCR fragment made using primers KiHis/IRT3 and generating pPS5007.The NdeI–EcoRI fragment from pPS5007 was then subcloned into pT7-7 to generate pPS5012.pPS5025 containing the optimised codons at the start of the pglW ORF was created by replacing the NdeI–NotI fragment with a PCR product, generated using primers KCODON and KiseqR, also cut with NdeI–NotI.The His6–PglY expression plasmid was generated as follows: pPS5070, encoding an N-terminal His-tag fused to a truncated PglY, was constructed by inserting a PCR fragment, generated using primers YHisF and YHisR and then digested with AatII and SphI, into pGEM7.The entire pglY ORF frame was then reconstructed in pPS5071 by inserting the SphI–PvuII fragment from pPS5060 into SphI/SmaI-cut pPS5070.The His6–pglY gene was then inserted into pT7-7 using the NdeI–BamHI restriction sites.The pIJ6902-derived, pglZ-containing integrating vector, pPH1001, was constructed by subcloning the NdeI/EcoRI fragment from pPS5045.Primers for PCR reactions are listed in Table S1.The pglX and pglZ null mutants were created according to the protocol of Gust et al. 
using the pIJ774 apramycin resistance cassette, flanked with loxP sites.In the double mutants the marker was removed as described prior to disruption of the second target gene.Oligonucleotides for creating the disruptions are listed in Table S1.Point mutants in the Pgl proteins were created by site-directed mutagenesis, performed using the QuikChange XL site-directed mutagenesis kit; primer sequences are detailed in Table S1.The exception to this protocol was pglW, where the parental vector was too large for QuikChange.The PglW K677A allele was produced by asymmetric PCR of a 900 bp region using the primers in Table S2.A first round of PCR created a product containing the mutant allele with a flanking natural StuI site, and a second PCR created the mutant allele with a flanking SrfI site.The products were then mixed and a final round of PCR using the outer PCR primers resulted in a final product of 900 bp representing a region of pglW with the desired mutation.This fragment was cloned into pGEM-T-Easy, and the mutation was confirmed by sequencing.The resulting plasmid, pPH1004, was cut with SrfI and StuI, giving a 460 bp product, which was used to replace the natural fragment in pPS8002, creating the vector pPH1012.Pgl proteins were overproduced as His6-tagged fusions.The plasmids were introduced into E. coli Rosetta and protein expression was induced by the addition of 0.1 M IPTG to exponentially growing cells.The resulting His-tagged proteins were purified by nickel affinity chromatography, and their identity was confirmed by MALDI–TOF mass spectrometry.The exception to this was PglW, which could not be produced in any DE3 lysogens.The protein was expressed from the plasmid pPS5025 using the EcoPro T7 coupled in vitro transcription-translation system, with the mass of the expressed protein confirmed by incorporation of 35S-methionine into the translation product according to the manufacturer's instructions.In vitro phosphorylation of PglW was carried out according to Molle et al.Briefly, PglW was incubated for 1 h at 30 °C in a 20 μl reaction containing 25 mM Tris–HCl, pH 7.0, 1 mM DTT, 5 mM MgCl2, 1 mM EDTA, 2 μg protein and 200 μCi ml−1 ATP.The reaction was stopped by the addition of an equal volume of 2× sample buffer, followed by heating at 98 °C for 5 min.The protein was visualised by autoradiography following separation on 4–12% SDS-PAGE gels.The ability of wild-type and mutant PglY proteins to hydrolyse nucleotides was assayed at 30 °C in 40 mM HEPES·HCl, 10 mM MgCl2, 10 mM dithiothreitol, and 0.1 mg ml−1 bovine serum albumin.Nucleotides and protein were added at the indicated concentrations.Reactions were pre-incubated at 30 °C for 10 min with PglY prior to the addition of the nucleotide.Reactions were stopped by the addition of 2.5 µl of stop buffer (5% SDS, 200 mM EDTA, and 10 mg ml−1 proteinase K) and incubation at 37 °C for 20 min.The hydrolysis of ATP was detected by measurement of the release of inorganic phosphate using acidic ammonium molybdate and malachite green according to the method of McGlynn et al.The methyltransferase reaction mixture contained 50 mM Tris–HCl, 7 mM 2-mercaptoethanol, 1 mM EDTA, 12 μM S-adenosyl-l-methionine, 2 μg ϕC31 DNA and 2 μg protein.Reactions were incubated for 1 h at 30 °C.Protein and unincorporated label were removed by phenol extraction and ethanol precipitation.Labelled DNA was detected by liquid scintillation counting.
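The ATPase measurements described above (initial rates of phosphate release at different ATP concentrations) lend themselves to a standard Michaelis–Menten fit to estimate KM and Vmax, as was done for wild-type and mutant PglY. The sketch below is a generic illustration of such a fit, not the authors' analysis; the rate data points are made-up placeholders.

```python
# Minimal sketch: estimate KM and Vmax for an ATPase from initial-rate data by fitting
# the Michaelis-Menten equation v = Vmax*[S]/(KM + [S]). Data points are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial rate as a function of substrate (ATP) concentration."""
    return vmax * s / (km + s)

# Placeholder data: [ATP] in mM and initial rates in arbitrary units
atp = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0])
rate = np.array([0.9, 1.7, 3.2, 4.8, 6.5, 7.8, 8.6])

(vmax, km), cov = curve_fit(michaelis_menten, atp, rate, p0=[rate.max(), 0.5])
vmax_err, km_err = np.sqrt(np.diag(cov))
print(f"Vmax = {vmax:.2f} ± {vmax_err:.2f} a.u., KM = {km:.2f} ± {km_err:.2f} mM")
```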
The phage growth limitation system of Streptomyces coelicolor A3(2) is an unusual bacteriophage defence mechanism. Progeny ϕC31 phage from an initial infection are thought to be modified such that subsequent infections are attenuated in a Pgl+ host but normal in a Pgl- strain. Earlier work identified four genes required for phage resistance by Pgl. Here we demonstrate that Pgl is an elaborate and novel phage restriction system that, in part, comprises a toxin/antitoxin system where PglX, a DNA methyltransferase is toxic in the absence of a functional PglZ. In addition, the ATPase activity of PglY and a protein kinase activity in PglW are shown to be essential for phage resistance by Pgl. We conclude that on infection of a Pgl+ cell by bacteriophage ϕC31, PglW transduces a signal, probably via phosphorylation, to other Pgl proteins resulting in the activation of the DNA methyltransferase, PglX and this leads to phage restriction.
717
The influence of the cross-linking process on the physicochemical properties of new copolyesters containing xylitol
Polyester materials are very popular among many research teams around the world.This is because they can be obtained from readily available raw materials and their synthesis can be modified toward the desired properties.Among the many types of polyesters, succinic acid copolyesters and sebacic acid copolyesters deserve attention.These copolymers belong to the group of biodegradable aliphatic polyesters.Despite many advantages, the widespread use of biodegradable polyesters is limited by their restricted range of mechanical properties and the difficulty of modifying them due to the lack of reactive functional groups.One study describes a new class of multiblock copolymers composed of PBF, poly PBS and poly PCL, with hexamethylene diisocyanate as a chain extender.According to the authors, such copolymers could be used in the field of biodegradable polymeric materials.PBS and PCL do not show miscibility in the amorphous phase.However, two melting temperatures of the crystalline phase can be observed in the high-temperature region.The thermal stability of the copolymers increases with increasing PBS content.The mechanical properties can be regulated over a wide range by changing the proportion of PBS and PCL, from rigid materials characterized by a high stress at break to elastic elastomers characterized by a high strain at break.The authors emphasize that copolymers with 10–30% PCL have optimal mechanical properties and impact resistance.Other examples of multiblock copolymers are those consisting of crystalline poly and amorphous poly, which were synthesized by chain extension using hexamethylene diisocyanate .The authors confirmed by 13C NMR that copolymers with a sequential structure were obtained.They also confirmed that the block copolymers have very good mechanical and thermal properties as well as excellent impact strength.The use of the amorphous soft PPSu segment not only gives the copolymers higher impact resistance without lowering the melting point, but also increases the rate of enzymatic degradation.In another article , the authors describe a biodegradable copolymer of poly and a butylene ester of dilinoleic acid, which was used as a coating for a multi-component controlled-release fertilizer.It has been shown that the studied PBS/DLA copolymers are very interesting materials for nutrient-release processes due to their biodegradability, which can be extremely important in agricultural cultivation.Poly polyesters are biocompatible and biodegradable elastomers with a wide range of potential biomedical applications, such as scaffolds for treating cartilage defects , vascular tissue engineering , myocardial tissue engineering , and retinal progenitor cell grafting .They also have potential applications as contact guidance materials , hollow conduit neural guides , and drug delivery .Polyesters with properties similar to poly can be synthesized by substituting glycerol with natural polyols such as mannitol, sorbitol and xylitol .One of the possible monomers for biodegradable polyester synthesis, xylitol, is a sugar alcohol found naturally in fruits and vegetables such as lettuce, cauliflower, raspberry, grape, banana and strawberry.It can also be found in yeast, lichens, mushrooms, and seaweed.It is an intermediate in human carbohydrate metabolism, produced by human adults at between 5 and 15 g/day .Xylitol-based polymers are of great interest to biomaterials science due to their biocompatibility and biodegradability.Poly
has been shown to possess in vitro and in vivo biocompatibility comparable to poly .Furthermore, both the mechanical properties and the degradation rate of polyesters containing xylitol can be fine-tuned by adjusting the xylitol:dicarboxylic acid ratio .Those properties can also be fine-tuned by adjusting the curing time and the dicarboxylic acid chain length .To the best of our knowledge, poly and poly have not been previously synthesized by other authors and are novel materials.Using butylene glycol as an additional monomer in the polyester synthesis allowed us to obtain copolyesters with improved mechanical properties compared to polyesters synthesized using only a polyol and a dicarboxylic acid.All chemicals except xylitol were purchased from Sigma–Aldrich.Two copolymers containing xylitol were synthesized: poly with a sebacic acid:butylene glycol:xylitol ratio of 2:1:1, and poly with a succinic acid:butylene glycol:xylitol ratio of 2:1:1.Monomers were melted in a round-bottom flask at a temperature above 100 °C under a blanket of N2.Following that, the esterification reaction was performed for 13.5 h at 150 °C under a blanket of N2, catalyzed by Ti4.Then, the polycondensation reaction was conducted at 150 °C under vacuum.Prepolymers were then cross-linked in a vacuum dryer at 100 °C under 100 mbar for 288 h. Samples were taken directly after the polycondensation reaction ended and at consecutive stages of the cross-linking process.Fourier transform infrared spectrometry (FTIR) was used to examine the chemical structure of all materials obtained at subsequent stages of the crosslinking process.FTIR transmission spectra were recorded between 400 and 4000 cm−1, with 2 cm−1 resolution.The results were processed using Omnic software.Thermal properties were determined using differential scanning calorimetry (DSC).The measurement was carried out in a heating cycle over the temperature range from −100 to 200 °C.Mechanical tests were carried out with an Instron 3366 instrument equipped with a 500 N load cell in accordance with standard PN-EN-ISO 527/1:1996.Hardness of the materials after 288 h was measured using a Zwick/Material Testing 3100 Shore A hardness tester.The water contact angle was measured using a KRUSS DSA100 digital goniometer.Static contact angle measurements were performed on the surface of degreased materials after 288 h of crosslinking by placing a 2 μL droplet of deionized water using the automatic dispenser of the goniometer.The contact angle was calculated using Kruss drop shape analysis software.Determination of the gel fraction of the elastomers after 288 h was made by the extraction method according to PN-EN 579:2001.Material samples after 288 h of crosslinking were placed in a Schott type P2 crucible and subjected to extraction in boiling tetrahydrofuran for 3 h.After extraction, the samples were dried in a vacuum oven at 25 °C for 3 h and then in a desiccator.Three determinations were made for each elastomer.The gel fraction content was calculated from the formula X = (m1/m0) × 100%, where m1 is the sample mass after extraction and m0 is the sample mass before extraction, and reported as the mean of the three measurements.As a result of the described synthesis, two ester elastomers were obtained; the corresponding reactions are shown in Figs. 1 and 2.Tables 1 and 2 compare the basic physicochemical properties of PXBS and PXBSu after 288 h of crosslinking.The obtained spectra show bands typical of a polyester structure.For both materials four transmittance peaks can be observed (Fig.
4): a C–O–C band at about 1150 cm−1, a C=O band at about 1706 cm−1, a C–H band at about 2944 cm−1, and a band of intermolecularly associated OH groups at about 3435 cm−1.The peak intensity of the OH groups decreases, and the peak intensity of the C–O–C groups increases, as cross-linking progresses.This is the result of bonding between molecules in adjacent polymer chains.Table 3 contains the values of the characteristic temperatures of phase transitions and their thermal effects at subsequent stages of PXBS crosslinking.On the first-heating thermograms two melting temperatures can be observed: Tm1 is the result of melting of poly, and Tm2 is the result of melting of poly.The second melting enthalpy decreases as cross-linking progresses.On the second-heating thermograms Tm2 can only be observed for the non-crosslinked polymer.The glass transition temperature can be observed on the second-heating thermograms.Cooling thermograms show the crystallization temperatures.Both the temperatures and the enthalpies of crystallization decrease as cross-linking progresses.Table 4 contains the values of the characteristic temperatures of phase transitions and their thermal effects at subsequent stages of PXBSu crosslinking.On the first-heating thermogram a glass transition temperature and a melting temperature can be observed.The glass transition temperature increases as cross-linking progresses.The melting temperature increases, and the melting enthalpy decreases, as cross-linking progresses.For the completely cross-linked material no melting temperature can be observed.On the cooling thermograms the glass transition temperature can be observed.Fig. 9 shows a broad glass transition region, which is probably associated with the overlap of effects related to the crystallization of other crystalline forms.On the second-heating thermograms only the glass transition temperature can be observed.No melting temperature can be observed, which is probably the result of thermal cross-linking that occurred during the DSC analysis.The mechanical properties of the PXBS and PXBSu elastomers after 288 h of crosslinking are shown in Tables 1 and 2.After 288 h of the crosslinking process, the elastomer based on succinic acid (PXBSu) has a much higher tensile strength, 1.5 MPa, than the PXBS elastomer.The elongation of the PXBS material is higher than that of PXBSu, as is evident from the observed tensile test values.Tables 1 and 2 show the hardness results for PXBS and PXBSu after 288 h of crosslinking.Both materials have a very low Shore A value and can be classified as soft elastomers similar to cross-linked silicone rubber.The wetting properties of polymeric materials are very important for their applications in medicine.It is therefore important to determine the hydrophilicity or hydrophobicity of the designed materials.Hydrophilic surfaces potentially improve biocompatibility.To investigate the wetting characteristics of the surfaces of our materials, water contact angle measurements were made for the ester elastomers after 288 h of the crosslinking process and the results are presented in Tables 1 and 2.Both the PXBS and the PXBSu material are characterized by a hydrophilic surface.However, the surface of the ester elastomer based on sebacic acid exhibits a significantly higher wettability than that of the elastomer based on succinic acid.The results of the gel fraction for the ester materials presented in Tables 1 and 2 after 288 h of cross-linking indicate that the materials have
a high degree of cross-linking.PXBS and PXBSu achieved gel fraction values of 91% and 96%, respectively.The slightly lower value observed for the PXBS material is directly related to the fact that this material shows an endothermic melting transition visible on the DSC thermograms after the crosslinking process.The PXBSu material does not show such a transition, while its DSC thermograms show a glass transition temperature, which is typical for elastomeric materials.The copolymers synthesized by us exhibit better mechanical properties than materials synthesized using only xylitol and a dicarboxylic acid, while retaining their biocompatibility and biodegradability.The mechanical properties, and the change in thermal and chemical properties with the progress of the cross-linking process, were investigated.Different temperatures and thermal effects corresponding to various phase transitions were recorded at consecutive stages of the cross-linking process.In the FTIR analysis, bonding between adjacent polymer chains leads to a decreasing peak intensity of intermolecularly associated OH groups and an increasing peak intensity of C–O–C groups as the cross-linking process progresses.The authors declared that they have no conflict of interest.
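For reference, the gel-fraction calculation defined in the experimental section, X = (m1/m0) × 100% averaged over triplicate extractions, can be written as a short script. The mass values below are illustrative placeholders, not measured data for PXBS or PXBSu.

```python
# Minimal sketch of the gel-fraction calculation X = (m1/m0) * 100%, averaged over
# triplicate extractions. Mass values are assumed placeholders.
from statistics import mean, stdev

def gel_fraction(m_before: float, m_after: float) -> float:
    """Gel fraction in percent from sample mass before (m0) and after (m1) THF extraction."""
    return 100.0 * m_after / m_before

samples = [(0.2010, 0.1832), (0.1985, 0.1811), (0.2043, 0.1870)]  # (m0, m1) in g, assumed
values = [gel_fraction(m0, m1) for m0, m1 in samples]
print(f"gel fraction = {mean(values):.1f} ± {stdev(values):.1f} %")
```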
The goal of this research was to develop biodegradable and biocompatible xylitol-based copolymers with improved mechanical properties, and to investigate the change in their thermal and chemical properties with progress of the cross-linking process. Using a raw material of natural origin, xylitol, a prepolymer was obtained by esterification and polycondensation. Then, at subsequent stages of the crosslinking process in a vacuum dryer, samples of the materials were taken to determine the progress of the process using Fourier transform infrared spectroscopy. Differential scanning calorimetry was also used to track changes in the phase transitions occurring at each stage of the crosslinking process. After the crosslinking process, the ester materials based on sebacic and succinic acid were characterized in terms of mechanical and surface properties.
718
CEO power, government monitoring, and bank dividends
Powerful CEOs can invest in non-value maximizing projects to pursue managerial objectives including empire-building, expense preference behavior and the like.As such, shareholders monitor CEOs in order to prevent such expropriation, but this can be costly if ownership is dispersed.A partial solution to this problem is provided by dividend payouts.These can act as a monitoring device for shareholders because they reduce the amount of cash that CEOs can dissipate in non-value maximizing projects and also increase the frequency of CEO scrutiny from outside investors.The U.S. literature related to non-financial firms documents that CEO entrenchment leads to higher dividend payout ratios.This behavior is ascribed to the incentive of entrenched CEOs to discourage minority shareholder monitoring.Where corporate governance is weak, dividends act as a pre-commitment device: a promise to regularly pay cash to shareholders reduces agency costs since it reduces the likelihood that these funds will be wasted on projects that increase the private benefits of CEOs without maximizing shareholder value.However, the incentive to pay larger dividends also depends on whether entrenched CEOs can fend off take-over threats, and on the degree to which monitoring from the board of directors is effective.A possible reason for weak shareholder monitoring and low dividend payouts relates to the protection of the rights of minority shareholders.In their seminal paper, La Porta et al. provide evidence that in countries with stronger minority rights payout ratios are higher, suggesting that high payout ratios are an outcome, rather than a substitute, of strong minority rights.Consistent with this hypothesis, Adjaoud and Ben-Amar find a positive link between the quality of corporate governance and payout ratios.There is also evidence that dividends dampen expropriation in group-affiliated firms, as investors anticipate the risk of expropriation from the controlling shareholder and require higher payouts.Moreover, shareholders in countries with strong creditor rights tend to be more sensitive to possible expropriation from insiders, suggesting that firm insiders set dividend policies with the objective of minimizing agency costs of both equity and debt.This is an important finding – in equilibrium, payout ratios should reflect the monitoring incentives of all stakeholders.Building upon this literature, we aim to investigate the relationship between CEO power and dividend payouts in the banking sector.This is of interest because, unlike in non-bank firms, the objectives of managers and shareholders can conflict with those of other powerful stakeholders such as depositors and government regulators.Bank executives are subject to the scrutiny of different stakeholders.For instance, Schaeck et al.
provide evidence of shareholder discipline for risky institutions, while there is no evidence of such discipline from debt holders and regulators.Monitoring by minority shareholders may well influence CEO behavior less than oversight from the government.In this case, the government may favor lower payouts since this could improve bank capital positions, resulting in safer institutions.Bank safety is a primary concern for the government, because bank failures result in long-lasting negative effects on economic growth.Because of government monitoring, the relation between CEO power and dividend payouts in banking is not necessarily positive.Banks with entrenched CEOs may have relatively low payout ratios to deter greater government scrutiny.Dividend policy can shape the features of principal-agent issues in banking and as such is worthy of further investigation.Characteristics of CEO power and dynamics are strongly intertwined with the role of bank corporate governance.This topic has recently drawn attention from academics and policy makers alike, because poor corporate governance can increase the probability of bank failure, with potentially large negative externalities due to contagion risk, disruption of the payment system, and costs deriving from deposit insurance payouts.Academic and policy interest in bank dividend policy has increased because of the importance of retaining earnings for bank soundness, especially during a recessionary period.Recent developments in banking regulation also impose restrictions on dividends for undercapitalized banks.This is necessary because banks can transfer default risk to their creditors and to the taxpayer, a phenomenon known as risk-shifting.For this reason, the government has incentives to monitor bank dividend policy.While there are studies that investigate bank CEO incentives, and the link between government ownership and bank performance and risk-taking, so far the literature has neglected the impact of government monitoring on bank dividend policy, and how it interacts with CEO power and incentives.In this paper, we intend to fill this important gap in the literature.We investigate the association between CEO power, internal monitoring from the board of directors, government monitoring, and dividend payouts for banks operating in 15 European Union member states.We restrict our analysis to EU countries given the more uniform bank regulatory framework.Three main proxies for CEO power are investigated: CEO ownership, CEO tenure, and unforced CEO turnover.Internal monitoring from the board of directors is proxied using director ownership, which has been found to be positively related to firm performance in the non-financial literature.While CEO power proxies are expected to be negatively correlated with performance, internal monitoring proxies are expected to be positively correlated with performance.Our modeling approach also controls for a variety of determinants of dividend payout ratios.Unlike the previous literature on bank dividend policy, we can exploit data on government monitoring at the bank level, in terms of ownership and the presence of government officials represented on bank boards, so as to see how the authorities monitor dividend policy.Our study presents various innovations.First, we use a new hand-collected dataset on bank ownership structure and corporate governance for 109 listed banks operating in EU-15 countries and combine this sample with data from Bankscope, Bloomberg, Datastream, Factset, SNL financial, and
LexisNexis.Second, we employ Instrumental Variables estimation to identify the impact of CEO power on bank payout ratios.In particular, we employ a dummy identifying CEOs that are also among the founders of the bank as an instrument for CEO ownership, unforced turnovers, and CEO tenure.Being a founder of the bank is positively correlated with CEO ownership and CEO tenure and negatively correlated with unforced CEO turnovers, satisfying the relevance condition.Moreover, since being a founder is not a decision the CEO makes each year, this variable is clearly exogenous to dividend policy decisions.Finally, following Hirtle, we also consider the effect of share repurchases.Our main finding is that powerful CEOs tend to be detrimental to bank performance and distribute lower dividend payouts.In particular, we find a negative relation between CEO ownership and payout ratios and between CEO tenure and payout ratios, and a positive relation between unforced CEO turnover and payout ratios.Stronger internal monitoring from the board of directors, as proxied by the average shareholding of board members, improves performance and decreases payout ratios.These findings suggest that entrenched bank CEOs tend to distribute lower payouts, and that stronger internal monitoring from board members decreases payout ratios, in contrast with what has been found in the non-financial literature.Moreover, when the government is a large owner or there is a government official on the bank board, payout ratios are lower, while performance does not change.These results suggest that monitoring from the government is detrimental to minority shareholders because the government is incentivized to put bank safety and the interests of creditors before the interests of minority shareholders.The remainder of the paper is structured as follows.Section 2 reviews the literature and develops the hypotheses.Section 3 describes the methodology and the data sample.Section 4 reports the results and robustness checks and Section 5 summarizes and concludes.This section briefly reviews the literature on the dividend policy of nonfinancial firms and banks.Dividend policy is one of the cornerstones of financial economics and an extensive literature has evolved since Miller and Modigliani's seminal work on the irrelevance of dividend policy.In the presence of taxes, a zero-dividend policy would be optimal.Yet, firms do pay dividends.Subsequent studies have sought to test Miller and Modigliani's proposition to see if the results derived from theory hold in real financial markets.The empirical literature spans an array of areas covering dividend policy and how it relates to: tax clienteles, agency costs, signalling effects, life-cycle factors, catering incentives, and behavioral factors.One branch of the literature focuses on the relation between managerial entrenchment and dividend policy.The entrenchment hypothesis argues that managers who fear disciplinary actions tend to pay higher dividends as a protection against such actions.This hypothesis is grounded in the principle that dividends are paid to decrease agency costs between managers and shareholders.By paying dividends, managers increase the utility of outside shareholders and decrease monitoring incentives.The literature on non-financial firms typically supports the entrenchment hypothesis.However, the incentive to pay dividends as a monitoring device is negligible for CEOs that can fend off take-over threats.In general, entrenched CEOs are less incentivized to pay large amounts of dividends
in the absence of monitoring from minority shareholders, and when shareholder rights are weak.On the other hand, in the presence of laws that insulate managers from takeovers, dividend payout ratios fall.In banking, dividend policy is an under-researched area.Early studies focus on the signalling power of bank dividends.More recently, bank dividend policy has been investigated because of possible risk-shifting behavior.Abreu and Gulamhussen confirm the importance of size, profitability, growth opportunities, and agency costs in determining bank dividend policy both before and during the financial crisis.In banking, monitoring can come from the government as well as outside shareholders.The government has incentives to monitor bank dividend policy so as to minimize the likelihood that excessive dividend payouts lead to inadequate equity capital buffers.For this reason, restrictions on dividend payments and share repurchases for under-capitalized banks are part of the Basel III framework.All other things being equal, a low dividend payout ratio can reduce the strength of government monitoring of the CEO, because of the positive impact on bank stability.A low dividend payout ratio could reduce potential losses for the deposit insurance provider, and in the case of a capital shortfall the government is incentivized to exert monitoring pressure on the bank.In the words of Abreu and Gulamhussen: ‘the pressure associated with holding capital levels near or below the minimum requirement will lead banks to plow back earnings to recapitalize themselves.’For these reasons, there is a clear conflict of interest between the government and outside shareholders, as the government has a preference for low dividend payouts while outside shareholders prefer high payouts.All other things being equal, the sign of the relationship between CEO power and dividend payout ratios depends on whether entrenched CEOs wish to discourage monitoring from the government or from outside shareholders.To the knowledge of the authors, there is currently no theoretical contribution that can help us predict the sign of this relationship.We expect that in Europe, a combination of weak protection of minority shareholders and government monitoring may allow entrenched bank CEOs to pay lower dividend payout ratios than CEOs with less power.To examine these relationships we employ three proxies for CEO power: CEO Ownership, Unforced CEO Turnover, and CEO Tenure.As CEO Ownership and CEO Tenure increase, the CEO becomes more powerful, in the sense that they acquire a stronger position in the decision-making process of the bank.CEO Ownership has two types of effects: an entrenchment effect, because of the voting power associated with the ownership of bank shares, and an incentive effect deriving from the right to receive dividends.The positive correlation between CEO Ownership and CEO power is substantiated by recent research showing that an increase in CEO Ownership decreases the likelihood of a CEO dismissal.CEO Tenure is the natural logarithm of the number of years for which the CEO has been in office.Finkelstein and Hambrick argue that some determinants of CEO power take time to develop, and for this reason CEO power tends to increase with tenure.Since the relationship between tenure and dividend payout ratios may be nonlinear, we consider the natural logarithm of tenure.While CEO Ownership and CEO Tenure increase CEO power, CEO turnover events should reduce it.This is because the new CEO may need time to entrench and
pursue policies that do not maximize shareholder value.However, CEO Turnover may depend on dividends, since dividend cuts may lead to CEO dismissal.For this reason we consider only unforced CEO turnover as a proxy for CEO power, creating a dummy variable equal to one if a turnover that cannot be classified as forced takes place, and zero otherwise.In a nutshell, unforced turnovers refer to turnovers that are not a result of dismissal, for instance, cases in which the CEO has retired.Our CEO power proxies are likely to be related to “bad” corporate governance.Apart from CEO Ownership, for which the incentive effect could dominate the entrenchment effect, the stronger the CEO, the greater the agency costs between the CEO and shareholders.We expect dividend payouts to be negatively related to CEO Ownership and CEO Tenure and positively related to CEO Unforced Turnover.This is in contrast with the received wisdom in the non-financial literature, which posits that firms with entrenched CEOs should have larger dividend payout ratios to discourage monitoring from outside shareholders.What happens if we consider the impact of stock ownership of board members?This variable should be a proxy for “good” governance because, as suggested by Bhagat et al., director ownership improves monitoring of the CEO and other executives.Empirical studies show that director ownership consistently correlates with good performance.For this reason, we also investigate the impact of this variable on payout ratios and performance.Theoretically, larger director ownership should lead to less entrenchment, and therefore should lead to a decrease in agency costs and dividend payouts.Following Bhagat and Bolton, we employ the proxy Director Ownership €, which consists of the average value of the stake of board members.Pressure from the government could lead to lower dividend payout ratios, as a result of potential political and reputational damage associated with bank failure.In the following section we outline the impact of government monitoring on dividend payout ratios in the form of both government ownership and the representation of government officials on bank boards.The recent financial crisis has prompted a reconsideration of the role of government monitoring in the banking system, with the objective of aligning private incentives with the public interest.Of particular interest is the case of government ownership.While government ownership of banks can provide the authorities with an additional tool for crisis management, it may also give rise to agency problems – for instance, politically motivated lending can lead to inefficiencies and cronyism.Iannotta et al.
provide evidence of a negative effect of government monitoring, in the form of government ownership, on bank performance.In our analysis, we are interested in the effect of government ownership on agency costs and private incentives.According to Gugler, when the government acquires ownership of a firm, there is a double principal-agent problem: between citizens and the government, and between the government and managers.Government ownership should result in increased monitoring and therefore higher dividend payout ratios; however, the government's objectives can be twofold: maximizing shareholder value, and protecting depositors' rights.The latter objective, as mentioned above, is likely to be a consequence of possible reputational and political damage in the case of bank liquidation, or it may be associated with concerns over potential losses deriving from deposit insurance schemes or other types of guarantees.Since high dividend payout ratios reduce the ability of a bank to pay back its creditors, government monitoring can also lead to lower dividend payout ratios.We employ two proxies for government monitoring.The first proxy is the percentage stockholding of the government, Government Ownership.This proxy is highly positively skewed, and for this reason we take this variable in natural logs.The second proxy considers both government ownership and the presence of a government official on the board of directors of the bank.We construct the dummy variable Government monitoring, which takes the value one if either the government owns at least 3% of the bank shares or there is a government official on the board of directors of the bank, and zero otherwise.These variables are assumed to be positively correlated with the extent of government monitoring.This section describes the methodology and data set.Section 3.1 describes the econometric framework.Section 3.2 describes our instrumental variables.Section 3.3 outlines the data set.The empirical literature on CEO entrenchment for non-financial firms is heterogeneous in terms of econometric methodology and variables chosen.Since the government is likely to be concerned about safety, and common equity is a key component of regulatory capital in banking, we employ the dividend to equity ratio as the dependent variable, following previous literature on bank dividend policy.Using equity in the denominator rather than earnings has an additional advantage: it eliminates the problem of dealing with negative dividend payout ratios.However, since the ratio of dividends to equity is highly skewed to the right, in our main regressions we use this variable in logs, DPE.To better identify whether the CEO is effectively entrenched or not, we must also investigate the determinants of the performance variables.If the proxies for CEO power that we employ increase performance, then the CEO is unlikely to be entrenched.On the other hand, if there is a negative or insignificant relation between the CEO proxies and performance, then the CEO is likely to be entrenched.For example, in the case of CEO Ownership and the performance proxies, a positive relation would suggest absence of entrenchment.As described above, we also consider the effect of director ownership, using the proxy Director Ownership €.
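To make the variable definitions in this and the preceding paragraphs concrete, the sketch below shows one way the CEO power proxies, the government monitoring proxies, and the dependent variable DPE could be constructed from a bank-year panel. This is a minimal illustration, not the authors' code: all column names (ceo_shares_pct, years_in_office, turnover, forced, gov_shares_pct, gov_official_on_board, dividends, equity) are hypothetical, and the log1p treatment of zero government holdings is an assumption.

```python
import numpy as np
import pandas as pd

def build_variables(df: pd.DataFrame) -> pd.DataFrame:
    """Construct CEO power proxies, government monitoring proxies, and DPE.

    Assumed (hypothetical) input columns: ceo_shares_pct, years_in_office,
    turnover, forced, gov_shares_pct, gov_official_on_board, dividends, equity.
    """
    out = df.copy()
    # CEO power proxies
    out["ceo_ownership"] = out["ceo_shares_pct"]                   # entrenchment + incentive effects
    out["ceo_tenure"] = np.log(out["years_in_office"])             # log to allow a nonlinear relation
    out["unforced_turnover"] = ((out["turnover"] == 1) &
                                (out["forced"] == 0)).astype(int)  # turnover not due to dismissal
    # Government monitoring proxies
    out["log_gov_ownership"] = np.log1p(out["gov_shares_pct"])     # log1p keeps zero holdings defined (assumption)
    out["gov_monitoring"] = ((out["gov_shares_pct"] >= 3.0) |
                             (out["gov_official_on_board"] == 1)).astype(int)
    # Dependent variable: dividends over equity, logged because of right skew;
    # defined only for dividend-paying bank-years
    ratio = out["dividends"] / out["equity"]
    out["dpe"] = np.where(ratio > 0, np.log(ratio), np.nan)
    return out
```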
We include several controls in our regression equations.Size, profitability and growth opportunities are believed to be the main drivers of dividend policy for non-financial firms.As stated above, we proxy for profitability and growth opportunities using the performance variables Market-to-book and Tobin Q.We proxy for Bank Size using the natural logarithm of total bank assets.The non-bank literature documents that large firms tend to pay higher dividends.Thus, we expect the coefficient on Bank Size to be positive.The stage of the bank life-cycle, represented by earned equity, is proxied by the Retained earnings ratio.Banks with large values of earned equity are likely to be at a more mature stage of their life-cycle, and thus should have more cash available for distribution to shareholders.In robustness tests, we also specifically consider the impact of the tax differential between capital gains and dividends, and we employ the total payout ratio.To allow for the effects of the Eurozone sovereign debt problems and the Capital Requirement and Bonuses Package, which came into force on January 1, 2011, we include the dummy Year > 2010, which is equal to one for the years 2011, 2012 and 2013, and zero otherwise.For the regressions on bank performance, we include the following set of controls: Board Size, RetVol, Size, and Year > 2010, in addition to Treasury securities.These control variables are also justified by the non-financial literature, in particular Bhagat and Bolton.Because dividend policy, performance, and corporate governance/ownership structure variables are endogenous, we must rely on an Instrumental Variables setup for our econometric strategy.The CEO power proxies are instrumented by the dummy Founder CEO, which takes on the value one when the CEO of the bank is also one of the founders.This variable is likely to be positively correlated with the degree of clout of the CEO within the bank, and we expect Founder CEO to be positively correlated with CEO Ownership and CEO Tenure, and negatively correlated with Unforced Turnover.The CEO cannot decide every year to be a founder, suggesting that Founder CEO is exogenous to dividend policy.For the performance proxies, we employ the instrument Treasury securities, equal to the ratio of securities issued by governments to loans, and we expect a positive relationship between this ratio and performance because banks are likely to buy these securities when government bond yields are high and bank stock prices are also high.Conversely, in periods of high risk-aversion, “flight to quality” occurs, and investors move from stocks to government bonds, leading to lower Treasury bond yields and lower Market-to-book and Tobin Q.
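The mechanics of the instrumental variables step described above can be written out as a manual two-stage least squares, as in the sketch below. This is only illustrative: variable names (founder_ceo instrumenting ceo_ownership, plus a few controls) are hypothetical, it is not the authors' estimation code, and the naive second-stage standard errors it reports are not the correct 2SLS standard errors, so a dedicated IV estimator would be used for inference.

```python
import statsmodels.api as sm

def two_stage_least_squares(df, dependent="dpe", endog="ceo_ownership",
                            instrument="founder_ceo",
                            controls=("bank_size", "retained_earnings", "year_gt_2010")):
    """Manual 2SLS: instrument an endogenous CEO power proxy with Founder CEO.

    Note: second-stage standard errors from this naive approach are not the
    correct 2SLS standard errors; use a dedicated IV routine for inference.
    """
    data = df.dropna(subset=[dependent, endog, instrument, *controls])
    exog = sm.add_constant(data[list(controls)])

    # First stage: regress the endogenous regressor on the instrument and controls
    first_stage = sm.OLS(data[endog], exog.join(data[[instrument]])).fit()
    data = data.assign(endog_hat=first_stage.fittedvalues)

    # Second stage: regress the log dividend-to-equity ratio (DPE) on the fitted values
    second_stage = sm.OLS(data[dependent], exog.join(data[["endog_hat"]])).fit()
    return first_stage, second_stage
```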
Given that the level of Treasury securities depends mainly on current conditions in the bond and stock markets, this variable is likely to be exogenous to dividend policy.A positive correlation between performance and investing in government securities rather than loans for sample periods including the 2007–2009 financial crisis is consistent with Beltratti and Stulz.For the proxy of director ownership, we choose the value of the stake of the CEO as an instrumental variable, CEO Ownership €.We expect CEO Ownership € to be positively correlated with Director Ownership €, simply because of cross-sectional differences in the average emoluments paid to board members and executives at the bank level.Therefore, this variable is likely to be exogenous to dividends and performance.We build a new hand-collected data set with information on board composition and ownership structure for 109 listed banks located in 15 EU countries for the period 2005–2013.The sample period starts in 2005 to reduce the impact of different accounting standards on cross-country comparability, since in this year International Financial Reporting Standards became compulsory for all EU listed companies.We start with the universe of European publicly quoted banks listed on Bankscope.For the sake of comparability, we focus on banks which use IFRS accounting standards.We focus on institutions classified as: commercial banks, bank holding companies, holding companies and cooperative banks.A total of 127 banks satisfy these selection criteria.Next, we exclude institutions for which data on gross loans are unavailable.Finally, to allow hand-collection of information on corporate governance and ownership structure, we stipulate that there is at least one annual report for the period 2005–2013.These criteria result in a sample of 109 banks.The geographic distribution of our sample is similar to that in the related literature.Table 1 presents the main steps of our sample construction.Table 2 provides a breakdown of the number of banks per country and type of bank, and the sample representativeness relative to the population of listed banks in the EU-15 over the sample period.Our final sample is an unbalanced panel with 913 bank-year observations for 109 banks.However, data availability for the main variables reduces the number of bank-year observations to 775, as shown in Table 3.In our analysis, we concentrate on payout ratios as well as the decision to pay a dividend.We calculate the dividend payout ratio as dividends paid for a given year divided by bank equity.Because this variable is positively skewed, our main regressions are based on the natural logarithm of DPE.Table 3 reports statistics for the decision to pay a dividend, DPE, and proxies for CEO power and performance.We report the statistics for the whole sample and for the regressions on DPE, considering only the cases for which cash dividends are paid by the bank.Government shareholding is on average 2.7% for the whole sample, including cases in which the government does not hold any bank shares.However, when we consider only cases where the government has an ownership stake, the average value increases to 19.81%.Therefore, as said above, once the government buys bank stocks, the ownership structure becomes immediately more concentrated and the HHI increases.Cases in which there is a government official on the board of directors account for 7.7% of the total sample.Fig.
1 shows the geographical distribution of the average DPE across countries over the sample period.The financial crisis of 2008–2009 and the subsequent Eurozone sovereign debt problems elicited heterogeneous responses from banks in different European countries.All countries except for Belgium and Sweden experienced a reduction in mean DPE from 2008 onwards.When we compare 2005–2007 with 2008–2009, the mean DPE increases even for Danish banks.Sharp declines in DPE occurred for the countries that were most affected by the crisis.For Portugal, the mean DPE dropped from 5.22% in 2005–2007 to 2.44% in 2008–2009.For Italy it fell from 4.70% to 2.91%.However, Irish banks were the most affected: the mean DPE was 6.82% just before the crisis in 2007 and 0% from 2010 until the end of the sample period.In Fig. 2, we compare the trend of DPE over the sample period for Ireland, where there is a sharp drop after the crisis, and Sweden, where DPE is overall stable.In this section, we report the results of our main regressions.We employ the econometric procedure described in Section 3.1.Section 4.1 reports the main results with respect to the effect of CEO power on payout ratios and performance, and the effect of director ownership on payout ratios and performance.Section 4.2 reports the main results for the impact of government monitoring on payout ratios and performance.Section 4.3 reports robustness checks.Table 4 reports our main results for the 2SLS regressions on the relation between CEO power and the payout ratio, DPE, allowing for the effect of bank performance as proxied by Market-to-Book and Tobin Q.The Kleibergen-Paap tests for weak IV and the coefficients on the IVs for the first stage of the regressions support the hypothesis that our instruments, Founder CEO and Treasury securities, are strongly correlated with the CEO power and bank performance proxies, respectively.The results for the coefficients on the proxies for CEO power suggest a negative relation between CEO power and payout ratios.The results for the coefficients on the bank performance proxies show a positive relation between performance and payout ratios.To what extent are the results reported in Table 4 a result of sample selection bias?Table 5 reports the results for Heckman selection models investigating the impact of CEO power, proxied by CEO Ownership, and performance on the payout ratio, allowing for possible sample selection bias.Moreover, to increase the robustness of our results, we also run 3SLS models on the relation between CEO power, dividends, and bank performance.In these regressions, we also add a key variable to our framework: volatility of returns.This variable is found to negatively affect performance.We posit that this variable has an indirect effect on the payout ratio: firms with higher information costs are likely to have more volatile earnings, and therefore they will favor lower payouts to reduce the likelihood that earnings fall below the threshold needed to remit dividends.For this reason, RetVol enters the selection equation in the Heckman models, and the equation on Market-to-Book and Tobin Q in the 3SLS models.The results in Table 5 show that CEO Ownership is still negatively correlated with DPE.Moreover, as expected, higher RetVol results in lower performance and a lower propensity to pay dividends.The Likelihood Ratio test on the significance of the correlation between the residuals of the outcome and selection equations for the Heckman selection model confirms that some degree of sample selection bias exists.
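The two-step logic behind the Heckman selection models described above can be sketched as follows. This is a simplified illustration with hypothetical variable names in which return volatility (ret_vol) enters only the selection equation, as in the text; the second-step standard errors are not corrected as they would be in a full Heckman estimator.

```python
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(df, outcome="dpe", pays_dividend="pays_dividend",
                     outcome_exog=("ceo_ownership_hat", "bank_size"),
                     selection_exog=("ceo_ownership_hat", "bank_size", "ret_vol")):
    """Illustrative two-step Heckman selection model for the payout ratio."""
    # Step 1: probit for the decision to pay a dividend (selection equation)
    z = sm.add_constant(df[list(selection_exog)])
    probit = sm.Probit(df[pays_dividend], z).fit(disp=0)
    xb = z @ probit.params  # linear index z'gamma
    # Inverse Mills ratio, to be added to the outcome equation
    imr = pd.Series(norm.pdf(xb) / norm.cdf(xb), index=df.index)

    # Step 2: OLS on dividend payers only, augmented with the inverse Mills ratio
    payers = df[pays_dividend] == 1
    x = sm.add_constant(df.loc[payers, list(outcome_exog)])
    x = x.assign(inverse_mills=imr[payers])
    outcome_model = sm.OLS(df.loc[payers, outcome], x, missing="drop").fit()
    return probit, outcome_model
```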
The 3SLS models also confirm the positive relation between performance and payout ratios, and indicate that CEOs increase their ownership as performance increases, but the opposite is not true.The latter results suggest that CEO Ownership is a good proxy for “bad” corporate governance or, in other words, CEOs with large stockholdings are “entrenched”.However, the negative coefficient on CEO Ownership for the regressions on the payout ratio, DPE, is inconsistent with the findings of the non-financial literature on CEO power and the payout ratio, which documents that CEO entrenchment leads to higher dividend payout ratios.Does stronger internal monitoring from the board of directors improve bank performance?In Table 6 we examine the impact of Director Ownership € on bank payout ratios using Heckman selection models and 3SLS models that allow for the interactions between director ownership, dividend payout ratios, and performance.As said above, this variable has been found to be positively correlated with performance in previous empirical studies.The results reported in Table 6 confirm that this is the case: Director Ownership € is positively related to both Market-to-Book and the Tobin Q ratio.However, our results also provide some evidence of a negative impact on dividend payout ratios, although the coefficients on Director Ownership € are significant for the 3SLS regressions but not for the Heckman selection model.To dig deeper into the relationship between performance and “good/bad” corporate governance, Table 7 reports the results of 2SLS regressions of current and future performance on Director Ownership €, using CEO Ownership € as the IV.The results confirm that Director Ownership € increases both current and future performance.Among the control variables, RetVol and Size decrease current performance, while the variable Treasury securities increases performance.The dummy Year > 2010 is related to a drop in performance, most likely because of the consequences of the financial crisis and the ensuing Eurozone sovereign debt problems.Besides Year > 2010, Director Ownership € is the only variable that affects both Market-to-Book and Tobin Q even in the following years.Tables 8 and 9 report the same results but for CEO Ownership, Unforced CEO Turnover and CEO Tenure.The results suggest that CEO Ownership decreases current performance and the performance of the next year.However, the effect ceases to exist at t + 2.The results for Unforced CEO Turnover and CEO Tenure confirm the negative impact of our CEO power proxies on performance.To recap, the results for Director Ownership € are consistent with those provided by the entrenchment literature on non-financial firms: director ownership improves performance.The results for CEO Ownership suggest that the entrenchment effect dominates the incentive effect, and entrenched CEOs do not increase payout ratios to discourage monitoring from bank shareholders.These findings can be interpreted as evidence that performance-based incentives based on ownership stakes work well when they are applied to board members, but less well when applied to bank CEOs, for whom the entrenchment effect appears to dominate.Moreover, they suggest that entrenched bank CEOs have little incentive to discourage monitoring from shareholders by increasing payout ratios.In the next section, we examine the impact of government monitoring on payout ratios and bank performance.Table 10 reports the results for 2SLS and 3SLS regressions
investigating the impact of government monitoring on payout ratios.For the 2SLS models, the Kleibergen-Paap tests for weak IV and the coefficients on the IVs for the first stage of the regressions support the hypothesis that HHI is a strong instrument for both Government Ownership and Government monitoring.The results for our 2SLS and 3SLS regressions suggest that government monitoring decreases payout ratios, contrary to what is argued by Gugler.These results back up the view that governments are keen to reduce bank payout ratios for fear of potential reputational damage and financial losses deriving from bank defaults.Is government monitoring good for bank performance?We answer this question in Table 11.The results suggest that government monitoring has little impact on current and future bank performance.These results are somewhat consistent with the findings reported by Iannotta et al., who find that in Europe government-owned banks do not outperform privately-owned banks.What happens if the government changes as a result of elections?To answer this question, we collect data on elections from the Elections Database.This website contains information on the outcomes of parliamentary elections in terms of total votes and percentage of votes for each of the main parties in the elections.For presidential elections, understanding whether there is a change in power is straightforward.However, the parliamentary elections information needs to be supplemented with other data sources because governments can be formed by various coalitions between two or more parties.As such, we combine the aforementioned information with other sources to determine whether the elections led to a change in government.We then run probit regressions where the dependent variable is the first-difference of the dummy Government Official on the Board and the independent variable is either Elections or Change in Government.The results, untabulated but available upon request, suggest that Change in Government does not have any effect on the probability of a change in Government Official on the Board.However, this may be due to the low number of cases for which there was a change in government.When we consider the results for the probit regressions with Elections, we find that elections increase the probability that there will be a government official on the board of a bank in the next year.However, when we include Elections in the regressions on dividend payouts, Elections does not have any impact on DPE, and the results for the other coefficients remain substantially unaltered.In this section we present robustness tests to allow for other determinants of dividend policy that may not have been considered in the regressions reported in Sections 4.1 and 4.2.We start with the possibility of tax clienteles.In Table 12 we report robustness tests using 2SLS models considering the effect of the tax differential between capital gains and dividends.To further increase the robustness of our findings, we also consider the effect of using the natural logarithm of CEO Ownership.The coefficients on Tax Differential tend to be negative but statistically indistinguishable from zero.This finding is consistent with DeAngelo et al.
who found that taxes may not be a first-order determinant of the choice between dividends and stock repurchases.The magnitude and significance of the coefficients on the other variables are substantially the same as those reported in Tables 4 and 5.In Table 13, following Hirtle, we investigate the impact of considering total payouts on our analysis.We replace DPE with the sum of cash dividends and the cash distributed through stock repurchases divided by total equity.The coefficients on the CEO power proxies remain significant and with the expected sign.We further examine the sensitivity of our results to changes in the dependent variable by substituting DPE with the ratio of dividends to total assets.The results for the CEO power proxies and for director ownership remain substantially unaltered, as shown in Table 14.Finally, we carry out further robustness tests on the effect of capital requirements, loans growth, and the proxy for profitability.First, to examine the impact of capital requirements, we include in the 2SLS regressions with the CEO power proxies the dummy variable Close, which takes the value one if the Tier 1 ratio is less than or equal to six percent and zero otherwise.The coefficients on Close are negative and either weakly significant or insignificant, and the magnitude and sign of the coefficients on the CEO power proxies remain virtually unaltered.Second, we rerun the 2SLS regressions with the CEO power proxies after replacing Market-to-book and Tobin Q with the variables ROA and Loans Growth.ROA is a proxy for performance and Loans Growth is a proxy for growth opportunities.The coefficients on the CEO power proxies remain negative and significant.The coefficient on ROA is insignificant, while the coefficient on Loans Growth is negative and significant.A negative coefficient on growth opportunities is consistent with the findings reported in the non-financial literature.When we repeat estimations using the proxy CEO Duality, instrumented by the number of independent directors divided by the total number of board members, the results remain qualitatively the same.In this paper, we investigate the effects of CEO power on dividend policy in banks from EU-15 countries.We use a unique hand-collected data set with information on board composition and ownership structure for European listed banks over the period 2005–2013.We exploit detailed bank-level data on government ownership and officials represented on bank boards to investigate the impact of government monitoring on bank dividend policy.This sample is merged with data from Bankscope and bank annual reports, which provide information on dividends and other financial characteristics.According to the managerial entrenchment literature, dividend payout ratios are positively related to CEO power since dividends discourage monitoring from minority shareholders.The non-bank evidence from Europe also suggests that dividends dampen expropriation of minority shareholders, consistent with a positive relation between dividend payout ratios and expropriation incentives.However, we find that monitoring from the government leads to an inverse relation between CEO power and dividend payouts.Entrenched bank CEOs pay lower dividends and in doing so are less likely to attract undesired attention from government regulators.Our main findings document a negative relation between CEO ownership and CEO tenure and payout ratios, and a positive link between unforced CEO turnover events and dividend payout ratios.CEO ownership and CEO tenure
are also negatively related to bank performance, while unforced CEO turnover events are associated with increases in bank performance.These findings suggest that entrenched CEOs in European banks do not have the incentive to increase payout ratios to discourage monitoring from shareholders.We also provide evidence that director ownership improves performance and reduces dividend payout ratios.According to the non-financial literature, the more effective internal governance mechanisms are, the larger the payouts required for entrenched managers to discourage monitoring.Our findings, on the other hand, suggest that in European banks the members of the board of directors tend to prefer low dividend payout ratios to support the capital position of the bank.We also document that government monitoring does not have a significant effect on bank performance but impinges on payout ratios.In line with the view that the government puts the interests of depositors before those of bank shareholders, we provide evidence of a negative relation between government ownership and payout ratios.When there is a government official on the board, banks make lower dividend payouts.In conclusion, these results are consistent with the view that in banking, entrenched CEOs do not have a strong incentive to pay large dividends, because of a combination of weak protection of minority shareholders, an inefficient market for corporate control, and government concerns over bank soundness.These factors lead to the negative relation between CEO power and dividend payouts.
We investigate the role of CEO power and government monitoring in bank dividend policy for a sample of 109 European listed banks for the period 2005–2013. We employ three main proxies for CEO power: CEO ownership, CEO tenure, and unforced CEO turnover. We show that CEO power has a negative impact on dividend payout ratios and on performance, suggesting that entrenched CEOs do not have the incentive to increase payout ratios to discourage monitoring from minority shareholders. Stronger internal monitoring by the board of directors, as proxied by larger ownership stakes of the board members, increases performance but decreases payout ratios. These findings are contrary to those from the entrenchment literature for non-financial firms. Government ownership and the presence of a government official on the board of directors of the bank also reduce payout ratios, in line with the view that the government is incentivized to put the interests of bank creditors before those of minority shareholders. These results show that government regulators are mainly concerned about bank safety, and this allows powerful CEOs to distribute low payouts at the expense of minority shareholders.
719
Comparing strengths and weaknesses of three ecosystem services modelling tools in a diverse UK river catchment
Ecosystem services modelling tools allow the quantification, spatial mapping, and in some cases economic valuation, of ecosystem services.The output from these tools can provide essential information for land managers and policy makers to evaluate the potential impact of alternative management options or land-use change on multiple services.Such tools are now being used around the world, at a range of spatial scales, to address a wide variety of policy and management questions.For example, they have been used to investigate the possible effects of climate change on water provisioning and erosion control in a Mediterranean basin, to provide guidelines for water resource management in China, and to examine the potential impact of agricultural expansion on biodiversity and carbon storage in Brazil.Ecosystem service decision support tools range in complexity, with the simpler models requiring less user time and data inputs while the more complex models require more technical skill but can result in greater accuracy and utility.The simplest include spreadsheets, and mapping overlay tools based on land-cover based lookup tables.Intermediate complexity spatial tools provide information on the relative magnitude of service provision, and the more complex tools allow spatial quantification and mapping of services, for example InVEST, LUCI and ARIES.With an ever increasing variety of tools available, there are now a number of reviews and comparisons that help potential users make informed decisions on which tool might be appropriate for their needs.These typically focus on tool capabilities, ease of access/use, time requirements and generalisability.For example, model outputs from ARIES and InVEST for carbon storage, water and scenic viewshed services were compared for a semi-arid river basin in Arizona, USA, and northern Sonora, Mexico, under different management scenarios.Vorstius and Spray investigated similarities in mapped outputs from three different tools in relation to service delivery at a local scale.Turner et al., focusing on methods to assess land degradation, briefly reviewed a range of decision support tools and other models whose outputs have been evaluated in the context of ecosystem services.There are also online toolkits available, for example, the National Ecosystem Approach Toolkit, providing guidance on selecting an appropriate modelling tool.At first glance, many of the ecosystem services modelling tools appear to produce similar outputs; they can model multiple services, and are designed to be used for scenario analysis and decision-making.However, the approaches taken and underlying assumptions made for the models within each tool are often different, the appropriate resolution and scale of their application can vary and, since the models are in continuous development, reviews can become rapidly outdated.Therefore, there is an ongoing need for comparison studies that compare multiple models for the same service and study site, along with a need to evaluate models in new biophysical settings.In particular, this paper demonstrates how three such tools differ, highlighting unique aspects and discussing their strengths and weaknesses, at a level of detail which is not met in most previous reviews.In this paper we compare three spatially explicit ecosystem services modelling tools, using examples of provisioning and regulating services.The models are parameterised for the UK and applied to a temperate catchment with widely varying altitude and land use in North Wales.While two of 
the tools have previously been compared, LUCI has not been evaluated in a tool comparison.Additionally, we focus on an aspect receiving little attention in previous reviews, i.e. that the modelling tools produce a range of different outputs for each ‘service’; these differing outputs may inform the choice of tool for a particular application.Lastly, since ecosystem services modelling tools are often used to evaluate the impacts of land-use change, we assess their sensitivity to varying severities of land-use change.The Conwy catchment in North Wales, UK, is 580 km2 in area.It is a small catchment in global terms, but is characterized by a diverse range of elevation, climate, geology and land uses.Predominantly rural, the land-use comprises sheep farming in the upland areas to the west and mixed dairy, beef and sheep farming in the lower areas to the east.The lowland flood plain area also contains some arable land.There is a large afforested area to the mid-west.Most of the sub-catchments contain some semi-natural woodland, including areas of riparian woodland.In the uplands to the south of the catchment lie extensive areas of blanket bog, protected under the European Natura 2000 biodiversity designation.More information on the Conwy catchment can be found in Emmett et al.We have chosen examples from both provisioning and regulating services, including those where the spatial context is important to the flow of services and where it is less directly important.We did not include a cultural service as ARIES and LUCI do not have readily available cultural models parameterised for the UK.ARIES, InVEST and LUCI were chosen as spatially explicit ecosystem services modelling tools that provide quantitative output, can be applied in different contexts, and can work at local or national scale, depending on the available data.InVEST combines land use and land cover data with information on the supply and demand of ecosystem services to provide a service output value in biophysical or economic terms.The models, written in Python, are available as stand-alone applications.LUCI is a decision support tool that can model ecosystem service condition and identify locations where interventions or changes in land use might deliver improvements in ecosystem services.Output maps are colour-coded for ease of interpretation: in default mode green is used to indicate good opportunity for changes, and red to mean “stop, don't make changes here”.The models incorporate biophysical processes, applying topographical routing for hydrological and related services, and use lookup tables where appropriate, e.g.
for carbon stock.The models are written in Python, and run in an ESRI GIS environment.LUCI has a unique, built-in trade-off tool, which allows the user to identify locations where there is potential for “win-wins”, i.e., where multiple services might benefit from interventions, or where there may be a trade-off, with one service benefitting from interventions while another is reduced.In contrast, ARIES was developed as an online platform to allow the building and integration of various kinds of models.This allows the most appropriate ecosystem services model to be assembled automatically from a library of modular components, driven by context-specific data and machine-processed ecosystem services knowledge.ARIES focuses on beneficiaries, probabilistic analysis, and spatio-temporal dynamics of flows and scale, aiming to distinguish between potential and actual benefits.While InVEST and LUCI focus on using known biophysical relationships to model physical processes, ARIES, in addition to standard modelling approaches incorporated by model wrapping, can also use probabilistic methods if there are insufficient local data to use in biophysical equations.A key feature of ARIES is its conceptualisation of ‘source’ elements within a landscape that contribute to service provision, and ‘sink’ elements that detract from service provision.Models were parameterised for the UK and then applied to the study catchment.Although conceptually similar in some ways, differences in modelling approach can create differing requirements for some input data.Our aim was also to run each model realistically, as users would do in the real world, rather than as a direct comparison with identical input data.ARIES and InVEST were run using 50 m by 50 m resolution digital elevation data and the UK Land Cover Map 2007.LUCI used vector format LCM 2007 and soil type data, and 5 m by 5 m resolution digital elevation data, as the accurate simulation of overland and near-surface flow mitigation requires detailed simulation of catchment hydrology using high resolution topographic data.This is not required for InVEST and ARIES due to their approach of aggregating outputs at catchment or sub-catchment scale.Due to differences in required input format between the models and LUCI's requirement for higher resolution spatial data, as mentioned above, it was not possible to use the same sources for all input data.More detail on data inputs for all models is available in the Supplementary material.The InVEST water yield model provides a value for annual water yield per grid-cell by subtracting the water lost via evapotranspiration from the average annual precipitation.Evapotranspiration is based on an approximation of the Budyko curve, and information on vegetation and rooting properties.The value per grid-cell is then summed to provide a total yield for the watershed.Water abstractions can also be included.The model can calculate the value of the energy that would be produced if the water reached a hydropower facility, thereby providing both biophysical and economic outputs.The LUCI flood mitigation and water supply services model calculates direction of flow over the landscape using GIS functions.The model then combines this with spatial data on hydrologically effective rainfall, calculated by subtracting estimated evapotranspiration from precipitation, and simulates accumulation of this water across the landscape using flow accumulation routines.Average flow delivery to all points in the river network is simulated, and can be used to estimate water supply.
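To illustrate the kind of calculation both water models rest on, the sketch below computes hydrologically effective rainfall (precipitation minus an estimate of actual evapotranspiration) per grid cell and accumulates it downslope with a simple D8 routing scheme. This is a minimal illustration assuming positive precipitation and a depression-free DEM; it uses a generic Budyko approximation rather than the exact InVEST formulation, and it is not LUCI's routing code.

```python
import numpy as np

def budyko_aet(precip, pet):
    """Approximate actual evapotranspiration from the classic Budyko curve.

    AET/P = sqrt( (PET/P) * tanh(P/PET) * (1 - exp(-PET/P)) )
    Generic Budyko approximation (assumes precip > 0), not the InVEST equation.
    """
    phi = pet / precip  # aridity index
    return precip * np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

def d8_flow_accumulation(dem, her):
    """Accumulate hydrologically effective rainfall (her = P - AET) downslope.

    Cells are processed from highest to lowest elevation; each cell passes its
    accumulated water to its steepest-descent (D8) neighbour. Assumes a
    depression-free DEM with no flat areas.
    """
    rows, cols = dem.shape
    acc = her.astype(float)
    order = np.argsort(dem, axis=None)[::-1]          # high to low
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for idx in order:
        r, c = divmod(idx, cols)
        best, target = 0.0, None
        for dr, dc in offsets:
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                if drop > best:
                    best, target = drop, (rr, cc)
        if target is not None:                        # pass water downslope
            acc[target] += acc[r, c]
    return acc
```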
The model identifies “mitigating features” that enhance infiltration and retention of water, such as woodland or wetlands, based on land use data.The model also identifies areas where flow is routed through the mitigating land use features, and maps these areas as “mitigated”, i.e. much less water is expected to travel to the watercourse as overland or other rapid flow.A few options have become available to model water supply under the ARIES framework, as the product has evolved.Process-based hydrological models are currently supported and maintained for this service.However, the probabilistic approach, using Bayesian networks, has been applied more often.We created two water supply models in ARIES: a probabilistic model, calibrated at UK scale and designed to accommodate the influences of land cover on evapotranspiration to enable sensitivity to land-use change to be evaluated, and a ‘Flow & Use’ model that can track service flows through the landscape using mechanistic routing algorithms and accounts for abstraction.The latter was implemented in an independent GIS environment as a variation of the Service Path Attribution Network modules, designed to map the flow of services, and its components, in previous distributions of ARIES.Using this method, it is possible to incorporate both point and diffuse water ‘sources’ and abstractions and to follow the fate of the service across the landscape.Spatial data on rainfall and evapotranspiration are handled in a deterministic way through flow routines, while source/abstraction points are accounted for as contributing masses.Both the Bayesian and the ‘Flow & Use’ models provide annual water supply as output for any location in the catchment.For all models, UK precipitation data from the CEH-GEAR dataset were used.Potential evapotranspiration data for the UK were calculated from the CHESS meteorological dataset using the Penman-Monteith equation.An annual average over the period 2000–2010 was used for the climate data, accounting for variability between years.Data on UK water abstractions were taken from annual estimates by the UK Department for Environment, Food and Rural Affairs.The InVEST carbon storage and sequestration model calculates carbon stocks within the landscape using lookup tables containing one value per land cover type.Carbon in four stores is summed: above-ground biomass, below-ground biomass, dead organic matter, and soil carbon.The depth of soil assumed for carbon stock depends on available data: two depths, 30 cm and 1 m, were used for the Conwy catchment.There is an option to provide current and projected land cover maps, which allows the net change in carbon stock resulting from land-use change over time, interpreted as either sequestration or loss, to be mapped.The model can perform an uncertainty analysis if the mean and standard deviation for each carbon estimate are given.The market or social value of the carbon stored in the study area can also be calculated.
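The lookup-table logic behind the InVEST carbon model just described can be illustrated as follows; the pool values and land-cover codes here are hypothetical placeholders rather than the parameterisation used in the study.

```python
import numpy as np

# Hypothetical carbon pools per land cover class (kg C per m2):
# above-ground biomass, below-ground biomass, dead organic matter, soil (to 30 cm)
CARBON_POOLS = {
    1: (8.0, 2.0, 0.5, 9.0),    # broadleaf woodland (illustrative values only)
    2: (0.3, 0.8, 0.1, 7.0),    # improved grassland (illustrative values only)
    3: (0.5, 1.0, 0.3, 25.0),   # blanket bog (illustrative values only)
}

def carbon_stock_map(landcover, cell_area_m2=50 * 50):
    """Map total carbon stock (kg C per cell) from a land cover raster.

    Each cell's stock is the sum of the four pools for its land cover class,
    multiplied by the cell area, mirroring a lookup-table approach.
    """
    stock = np.zeros(landcover.shape, dtype=float)
    for code, pools in CARBON_POOLS.items():
        stock[landcover == code] = sum(pools) * cell_area_m2
    return stock

def carbon_stock_change(landcover_now, landcover_future):
    """Net change between two equilibrium stock maps (sequestration or loss)."""
    return carbon_stock_map(landcover_future) - carbon_stock_map(landcover_now)
```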
The LUCI model can calculate total carbon stocks at steady state, i.e. assuming that soil and vegetation carbon are at equilibrium, using data on average carbon stock in above- and below-ground biomass, dead matter and the top 30 cm of soil for different soil and land use combinations.The lookup table aggregates land use into four types to provide sufficient data points for each soil and land use combination.If spatial data on historic or scenario-based land-use change are available, the model can be used to calculate change in carbon stock between two equilibrium values.In the absence of land-use change data, the model allows a comparison of carbon stock, with the potential value under the current land use assigned as the maximum soil carbon stock for that soil type, highlighting areas with potential to increase carbon stocks.LUCI output maps present opportunities to increase soil carbon stocks, or to protect areas where carbon stocks are already high within the landscape.Carbon regulation can be modelled in ARIES in different ways.When enough data are available, carbon budgeting simulated through biomass dynamics is handled by the LPJ-GUESS model, which has been ported to ARIES.However, in data-poor situations, other modelling choices are available.We opted for the Bayesian network approach, applied by Balbi et al.Carbon concentration in the upper 15 cm of soil was calculated using available data to train the model.Explanatory variables were topography, growing period in degree days, precipitation, soil group and land use type.The model was calibrated and validated on measured carbon concentration from ~ 2500 sample points across the UK, then applied spatially to the Conwy catchment.Carbon stock was not calculated using the ARIES model.Annual average runoff in InVEST is calculated using the water yield model.The model then determines the quantity of pollutant exported and retained by each grid-cell, based on a lookup table containing the nutrient loading and the filtering capacity of each land cover type.The nutrient loading value is adjusted by a Hydrologic Sensitivity Score, which helps to account for differences between the conditions under which the export coefficients were measured and those at the study site.Natural vegetation, for example forests and wetlands, retains a high percentage of the nutrient flowing through the cell, while urban areas have low retention.The final output is total annual nutrient export and retention for the watershed.Mapped outputs include, for each grid-cell, the nutrient export to stream, which reflects the nutrient released from each cell that reaches the stream, and the nutrient retention, with values based on the filtering capacity of the cell and the total load coming from upstream.The model can also calculate the economic saving that habitats within the ecosystem have provided due to avoided water treatment costs.The LUCI diffuse pollution mitigation model estimates nutrient loading in the landscape based on land cover, average stocking density and fertiliser input.Further functionality that also considers the influence of slope, soil type, and details of land management has recently been developed, but is not yet parameterised for the UK.Accumulated loading over the landscape is calculated by combining the nutrient loading estimates with the flow direction layer calculated from topography and applying flow accumulation routines.Nutrient flow accumulation for near surface flow is calculated similarly, by weighting spatial data on flow direction with nutrient export coefficients and a factor for the solubility of nitrogen.For overland and rapid near-surface flow, “mitigating features”, as identified by the water supply model, are assumed to remove all or part of the N and P entering them.
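A toy version of the export-coefficient logic common to both nutrient models is sketched below. The coefficients and retention fractions are placeholders, not values from either tool, and real implementations additionally weight loads by hydrological routing and, in InVEST, by a hydrologic sensitivity score as described above.

```python
# Hypothetical annual export coefficients (kg N per ha) and retention
# efficiencies (fraction of the incoming load removed) per land cover class
EXPORT_KG_PER_HA = {"arable": 20.0, "improved_grassland": 10.0,
                    "woodland": 2.0, "urban": 5.0}
RETENTION_FRACTION = {"arable": 0.1, "improved_grassland": 0.25,
                      "woodland": 0.8, "urban": 0.05}

def nutrient_export_along_flowpath(landcover_path, cell_area_ha=0.25):
    """Load delivered to the stream from a single downslope flow path.

    landcover_path lists the land cover of each cell from source to stream.
    Each cell removes a fraction of the load arriving from upslope and adds
    its own export, so vegetated cells act as 'mitigating features'.
    """
    load = 0.0
    for cover in landcover_path:
        load *= (1.0 - RETENTION_FRACTION[cover])          # retention of upslope load
        load += EXPORT_KG_PER_HA[cover] * cell_area_ha     # this cell's own export
    return load

# Example: an arable field draining through a woodland buffer strip to the stream
print(nutrient_export_along_flowpath(["arable", "arable", "woodland", "woodland"]))
```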
rapid near-surface flow, “mitigating features”, as identified by the water supply model, are assumed to remove all or part of N and P entering them.The combined output from routed overland and near surface flow provides simulated values of spatially distributed annual mean in-stream loading and concentrations of dissolved nutrients within the stream network.As InVEST and LUCI only model diffuse pollution, a post-hoc estimation of point-source phosphorus entering the catchment, based on the number of people served by sewage works in the catchment, can be added to the final model output.At the time of writing, no modules for nutrient regulation have been formally released and supported within the ARIES framework, therefore no ARIES model was run for this service.However, users are free to implement customised models or adopt models made available by the ARIES user community.Barquín et al. discuss insights into the future development of ARIES, including broader scope for water quality modelling.To test the sensitivity of the models to land-use change, scenarios of varying severity were compared to a baseline of no change.The first three scenarios were 5%, 10% and 30% of the catchment changing from grassland to woodland.These scenarios were inspired by Welsh Government targets to increase the extent of forests by an additional 5% of the land area of Wales.Semi-natural grasslands were merged to form a single grassland habitat type.Random patches of woodland were placed within current grassland habitats to create input layers.Patch size ranged between 5 ha and 100 ha, based on an average field size of ~ 4 ha and an average farm size of ~ 40 ha in Wales.Given the structure of the landscape, it is unlikely that any larger patches would emerge.The fourth scenario used the ‘Managed Ecosystems’ scenario from the DURESS project.The DURESS scenarios were developed through discussions with stakeholders and experts on current and future drivers of land-use change in Wales.The ‘Managed Ecosystems’ scenario envisages an upland landscape with focus on management for carbon and biodiversity, expansion of woodlands and wetlands, restoration of peatlands and de-intensification of pastures.In terms of land-use change, this leads to a 2.3% increase in woodland, a 16% decrease in pasture and a 13% increase in semi-natural grassland within the Conwy catchment.Using ARIES and InVEST, post-hoc examination of the spatial pattern of service provision across the landscape under a variety of future scenarios can demonstrate trade-offs across multiple services.However, LUCI is the only tool in this study that currently has a module for evaluating trade-offs.The model applies an equal weighting to each service as a default, but users can increase the weighting of services that are of particular interest and/or exclude services.Units are normalised prior to input to the trade-off tool, with classifications based on user-defined thresholds.Model outputs were validated against measured data collected in the Conwy catchment.Flow data were taken from two sites within the gauging station network of the UK National River Flow Archive, which is coordinated by the Centre for Ecology and Hydrology.The boundary for each NRFA catchment was defined using the CEH Integrated Hydrological Digital Terrain Model.Soil carbon, above and below-ground biomass data were collected from up to 18 sites with varying land-use in the catchment, Smart et al.)."Water quality data for one site within the Conwy catchment were extracted from the UK 
Environment Agency's Harmonised Monitoring Scheme database and annual loads calculated using river flow data following Dunn et al. (2014).All of the tools provide comparable maps of annual water yield per grid-cell, with the lowest yield consistently seen on the eastern half of the catchment.The key InVEST output is annual water yield per sub-catchment, as calculations are performed at this scale.Grid-cell level maps are for model checking purposes only.The LUCI traffic light map of flood interception shows areas providing flood mitigation in red, including trees and deep permeable soils, while green areas have high flood concentration and could benefit from mitigation.LUCI also provides quantitative outputs, including overland flow accumulation, showing the accumulation of water over the landscape according to topographic hydrological routing of hydrologically effective precipitation, and average flows estimated as the annual flow at each point in the stream network.The ARIES ‘Bayesian’ model can also produce an uncertainty map, while the ARIES ‘Flow & Use’ model delivers the flow of water available for use through the catchment (named “Actual source” in the model; see Supplementary material).When model outputs were compared with observed annual flow data from two gauging stations in the catchment, model performance was similar, with LUCI and the ARIES ‘Flow & Use’ model providing the closest estimates to the measured values.The ARIES ‘Flow & Use’ model showed that demand was equal to water use at all abstraction points, while dummy data was used to illustrate how the model could report unmet demand.Water use at the Llyn Cowlyd abstraction point was particularly high.For the carbon models, InVEST and LUCI provide broadly comparable mapped outputs for total carbon stock.However, while the maps for ‘biomass + 1 m depth’ are similar between the models, the maps for ‘biomass + 30 cm soil depth’ show some differences in both the spatial pattern and the magnitude of carbon stocks, reflecting differences between the carbon approaches used by the two models.The ARIES output is carbon concentration in the top soil.Both InVEST and ARIES provide maps of uncertainty, using the standard deviation and the coefficient of variation respectively, for the carbon estimates per land class.The LUCI ‘carbon sequestration potential’ map identifies areas where existing carbon stock is already high, and where there may be potential for increasing carbon stocks under different land use.Modelled outputs for total carbon stock for the catchment using InVEST and LUCI were very similar, within 10% of each other, despite the differences in spatial distribution of carbon shown in the mapped output.When compared to total carbon calculated from measurements taken in the catchment, both models showed over-estimates; however, values were on the same order of magnitude.Total measured carbon at points within the catchment was also compared with LUCI modelled values, with a mean difference of 14.35 kg m−2 for biomass + 30 cm soil depth and a mean difference of 5.47 kg m−2 for biomass + 1 m soil depth.The InVEST and LUCI nutrient retention models produce slightly different mapped outputs, which are not directly comparable.Fig.
4 shows examples for phosphorus but outputs are also available for nitrogen.While the InVEST Adjusted Loading Value and LUCI phosphorus load maps are based on nutrient exports per land class, the InVEST map also takes into account the flow upstream of the grid-cell.InVEST provides an output of the vegetation filtering capacity of each land class and the load from each grid-cell that eventually reaches the stream.LUCI outputs the nutrient concentration for any point in the stream network and the accumulated P loading for each point, considering the P contribution from uphill sources.When the average annual load was calculated using measured concentration and flow data and compared to model outputs, both models showed considerable underestimates for the study catchment, particularly the InVEST nitrogen model.The LUCI trade-offs tool showed that, when all modelled services were considered, there is some opportunity to enhance multiple services, particularly in the north and east of the catchment.The potential for possible gains was explored using maps of trade-offs between pairs of services.For example, when pairing carbon and flood mitigation, the area mapped in dark green indicates opportunity to enhance both services, while areas in the south and west of the catchment have existing high provision for both services.The sensitivity of the models to land-use change depended on the service, with greater changes seen for carbon stocks and nutrient loads than water yield.The change in annual water yield per watershed was minimal for all models.However, the area of mitigating features increased greatly and was 85% greater than the baseline for the 30% grassland to woodland scenario.Change in mitigated area is reported for the LUCI model as opposed to change in water yield, because the functionality available for UK applications does not yet include a function to adjust evapotranspiration for land-use change scenarios.For InVEST and LUCI, the change from grassland to woodland led to an increase in total carbon stocks, as did the DURESS scenario.ARIES predicted decreases in soil carbon for the GW scenarios.Nitrogen load generally decreased with increasing woodland, while the DURESS scenario for InVEST and LUCI showed large reductions in annual N load.While phosphorus load increased with increasing woodland using InVEST, LUCI showed a gradual decrease, however both models had similar outputs for the phosphorus DURESS scenario.To increase our understanding of ecosystem services modelling tools, there is a need for quantitative comparisons, for the same services and study area, in a variety of environmental conditions.Run for a temperate UK catchment, the three tools in this study were found to have broadly comparable quantitative model outputs for each service, as Bagstad et al., 2013b also concluded when using ARIES and InVEST for a semi-arid environment.However, the modelling tools also have unique features and strengths.InVEST has been used widely, has a comprehensive user manual and provides example input data per model.In addition to biophysical outputs, InVEST also provides estimates of valuation, based on user inputs, highlighting areas with high levels of provision for particular services, e.g. 
Fu et al., 2014."LUCI's traffic light maps allow quick and easy interpretation of the model output.The LUCI flood mitigation map has been applied as part of the Glastir Monitoring and Evaluation Programme, simulating impacts of interventions, such as riparian planting, to provide fast feedback to the Welsh government.LUCI is also the only tool with a trade-off module, providing a useful visual output of the impacts of land-use change on multiple services, and the only tool that respects fine-scale spatial configuration of landscape elements.ARIES represents a good option in data scarce areas and its probabilistic approach can cope with data gaps, providing maps of modelled outputs along with associated uncertainty.When analysing abstraction and water use, tracking the flow of service provision across the landscape is necessary.The ARIES ‘Flow & Use’ model allows for detailed mapping of the various flow components.Model validation revealed that performance against observed data was variable.The water yield models performed well, as Redhead et al. found when running the InVEST model for 42 catchments across the UK.Annual average flow values from the LUCI model also compare very well with measured values from the NRFA at Wales national scale.The InVEST and LUCI carbon models provided overestimates when total carbon in the catchment was considered, however values were on the same order of magnitude.This is to be expected as input data was extracted from a variety of literature sources and national scale spatial data was used.Also, the measured C data from the catchment did not include an estimate of dead matter, although this would represent a small percentage of the overall total.When modelled and measured carbon was compared for individual points, the LUCI model showed reasonable performance given the same constraints of generalised inputs and spatial data which did not match the observed land use or soil type for all points.All of the nutrient retention models performed less well, particularly for InVEST, partly due to the difficulties in assigning suitable export coefficients.However, at national scale, LUCI values for N in Wales compare well to measured values from the Water Information Management Solution database.All of the modelling tools share some limitations; these are ongoing areas for development.The water and nutrient retention models work on an annual basis, meaning that more detailed temporal changes in water supply, hydropower production and nutrient concentration are not considered.Sparsely sampled measured phosphorus data may not be representative of the annual load, due to high variability and the tendency for sampling during base-flow dominated conditions whereas much of the load may actually come from events.The water models also do not allow for surface water – groundwater interactions, where streams can either gain or lose water through the streambed.The use of average inventory values for carbon fails to account for variation within a land use type, due to many factors, including land management history, temperature or elevation.Chaplin-Kramer et al., 2015 adapted the InVEST carbon model to allow for edge effects on carbon storage in forests.There are also difficulties with calculating carbon emissions for land-use change scenarios, as soil type will affect preferred land use and inventories may thus not actually be indicative of the impact of land use on soil type.This space for time substitution is currently necessary due to a lack of appropriate process-based 
models, which do not require site specific calibration; incorporation of simplified process-based approaches would be advantageous.The nutrient retention models are highly dependent on the accuracy of the export coefficient values used.Published export coefficients tend to be derived from only a few case studies, and may not be directly applicable to the study area.Many factors can influence nutrient export within a land use type, including management practices, livestock density, topology, soil type and rainfall.Also, published export coefficients implicitly include the retention element, while InVEST decomposes the coefficients into export and retention factors, which may add further uncertainty.There was only one water quality monitoring station with associated flow data in our study area, which may not be sufficient to validate the models.Discrepancies between reality and export coefficients based on variations in land management may be expected to average out at larger scales.While an estimate of point source P was added to the InVEST and LUCI outputs, this value was based on the human population served by the sewage works within the catchment and did not account for the export of phosphorus from septic tanks.InVEST is currently easily accessible and free to download, but LUCI is not yet freely available for public use although it can be accessed by contacting the model developers, and a fully accessible, free to download version is planned for release in April 2017.ARIES is currently rooted on a shared and open source development, so is available at no cost for non-profit use, while its k.Lab technology, the technical documentation and the development environment are freely accessible to registered users.There is also an ARIES online modelling tool under development, which is due to be released in 2017.InVEST and LUCI are straightforward and simple to use for those with basic GIS skills; the gathering of input data is often the most time consuming step for application of either tool.The planned online ARIES tool is intended to be simple for new users, however the development of customised models in ARIES and further new algorithms through its k.Lab technology requires a high degree of technical skill.The InVEST water and nutrient retention models run at the grid-cell scale and summarise by sub-watershed/watershed, while the LUCI and ARIES models can provide information for every point in the landscape.The choice of modelling tool therefore depends on the required scale of the outputs.Also, as the key output for the InVEST nutrient retention model is annual load, both measured flow and nutrient concentration data are required for model validation, whereas the LUCI model outputs nutrient concentration as well as load per point.The differences between the carbon stock maps for the InVEST and LUCI maps reflect differences in the modelling approach; LUCI calculates carbon based on soil type and aggregated land use, whereas InVEST uses only land use, but splits this into more categories.In theory, InVEST could also allocate carbon based on the soil-type and land use combination, however this approach is not currently applied.As seen here, spatial variation in model output may cancel out at catchment scale, given that lookup table values are based on averages.In terms of spatial allocation of demand, in InVEST one value for consumptive demand is ascribed to each land class, although this could vary greatly within the same land class.Variation in demand could be incorporated by defining 
additional land classes, but only to a very limited extent.The LUCI tool does not currently consider demand.The ARIES ‘Flow & Use’ module can explicitly model demand spatially, if local information is available.As the sensitivity of the models to land-use change depended on the service, an assessment of different scenarios will depend on which services are being prioritised within the catchment.Bagstad et al., 2013b found broadly similar gains and losses for each service when comparing the impacts of land-use change scenarios using ARIES and InVEST.In the current study, the outcome of the GW scenario for the phosphorus nutrient retention model varied between InVEST and LUCI.This may be due to varying model assumptions on nutrient uptake/retention or slight differences in the default export coefficients used for each model.This highlights the importance of using data input values that have been collected under as similar conditions as possible to the study site and also demonstrates a need to be aware of and understand differences between default parameterisations for the models.For the current study, small changes in water yield are mainly due to the small amount of evapotranspiration relative to precipitation, so that change in vegetation does not greatly affect the amount of water flowing downstream.The same models applied to hot and/or dry regions may be expected to provide differing results.The placement of land-use change is also important and may affect the simulation of flood mitigation in LUCI due to influence of hydrological routing, and carbon modelling due to influence of soil and other site specific factors.Both extent and placement in the landscape should be considered when designing land-use change scenarios, with ARIES and LUCI providing added value for use in assessing the impact of spatially explicit land-use change."Using LUCI's trade-off module, the variation between maps indicates that, depending on the services considered, appropriate placement of interventions and protective measures may differ significantly.This unique feature is particularly useful for stakeholder participatory exercises, allowing visualisation of the impacts that different scenarios could have on multiple services.This comparison study also demonstrates how the use of a suite of modelling tools can deliver extensive information on service provision within a catchment.All of the tools emphasise the high carbon storage capacity of the catchment, especially the Migneint blanket bog to the south.Stream phosphorus concentrations are generally low, although the LUCI outputs suggest that any mitigation efforts should be targeted to the north-east of the catchment.LUCI output on mitigation services shows that large areas are already providing mitigation from nutrient runoff and overland flow but, due to placement of these features, a relatively small additional area receives benefit.The benefitting areas are mostly uplands in the west of the catchment, which have relatively low flow concentration and nutrient accumulation; hence current service provision may be considered somewhat limited compared to potential service provision.Analysis of flows using the ARIES models suggests a sustainable use of water resources in the catchment overall.Having a high proportion of water use compared to the total available, may imply serious consequences for both ecosystems and users downstream.In particular it could impact river habitats directly and ecological processes directly or indirectly."On the users' side, an 
overexploitation may hamper continuity of service provisioning.There may be room for increasing abstraction from some of the other reservoirs, in case of further demand, but that would likely involve trade-offs with other services.Ecosystem services modelling tools can provide useful decision support outputs.While the three tools highlighted the key areas of service provision within the catchment, each has unique strengths.The choice of tool therefore depends on the study question and user requirements.Based on our experience of using these three ecosystem services tools, we outline the characteristics that we judged most useful for each tool.As InVEST is freely available, with detailed documentation and example data, it is recommended for users with time constraints.It is also the only one of the three tools with well-developed economic valuation models, so is recommended to those requiring economic valuation as an output.LUCI, available for public use in 2017, would benefit users seeking fine scale outputs or requiring trade-off maps for multiple services.Once parameterised for international use, LUCI will be particularly well suited to explore impacts of detailed rural change.ARIES, with an easy-to-use online tool under development, currently allows the customisation of models and is particularly useful when data is scarce.Studies are beginning to assess the sensitivity of these modelling tools to the scale of input data, and to the local or national relevance of input data compared with global default values.Further work is required in both of these areas.There is still a lack of tools to map or quantify cultural ecosystem services.Although InVEST contains some tools such as ‘Scenic Quality’ and ‘Recreation and Tourism,’ further development of this aspect of ecosystem service modelling is desperately needed.Similarly, the majority of modelling tools focus on the supply side or potential ecosystem service delivery and do not focus sufficiently on the beneficiaries.While ARIES incorporates ‘Users’ within its conceptual approach, much more work is required to develop tools which adequately incorporate spatial mapping of the demand side, i.e. to map where, and how much, services are actually used by beneficiaries.For future validation studies, it would be useful to compare modelling tools across multiple scales and also to further develop coefficients and look-up tables for a variety of climates and regions.A tool comparison including more diverse services, for example cultural services, would further inform users in their choice of tool.
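To make the export-coefficient bookkeeping described above concrete, the following minimal Python sketch reproduces the general idea rather than the actual InVEST or LUCI implementation: nutrient loading is read from a lookup table per land-cover class, scaled by a hydrologic sensitivity score (the ratio of a cell's runoff index to the catchment mean), and a fraction is retained according to the filtering capacity of the cover type. The land-cover codes, coefficient values, cell size, and runoff indices are illustrative assumptions, and downslope flow routing is deliberately omitted.

import numpy as np

# Illustrative land-cover grid (codes: 1 = improved grassland, 2 = woodland, 3 = bog)
landcover = np.array([
    [1, 1, 2, 2],
    [1, 3, 2, 2],
    [1, 1, 1, 3],
])

CELL_AREA_HA = 6.25  # hypothetical 250 m x 250 m grid-cells

# Hypothetical export coefficients (kg N ha-1 yr-1) and retention efficiencies (0-1)
export_coeff = {1: 15.0, 2: 2.0, 3: 0.5}
retention_eff = {1: 0.2, 2: 0.8, 3: 0.9}

# Hypothetical per-cell runoff index (e.g. taken from a water yield model output)
runoff_index = np.array([
    [1.2, 1.1, 0.9, 0.8],
    [1.0, 1.3, 0.9, 0.7],
    [1.1, 1.0, 1.0, 1.2],
])

# Nutrient loading per cell, adjusted by a hydrologic sensitivity score
# (ratio of the cell's runoff index to the catchment mean)
base_load = np.vectorize(export_coeff.get)(landcover) * CELL_AREA_HA
hss = runoff_index / runoff_index.mean()
adjusted_load = base_load * hss

# Crude retention step: each cell keeps back a fraction of its own load
# according to its land cover; the remainder is treated as export to stream.
retained = adjusted_load * np.vectorize(retention_eff.get)(landcover)
export_to_stream = adjusted_load - retained

print(f"Total adjusted loading: {adjusted_load.sum():.1f} kg N yr-1")
print(f"Total retained:         {retained.sum():.1f} kg N yr-1")
print(f"Total export to stream: {export_to_stream.sum():.1f} kg N yr-1")

In the actual tools the export from each cell is additionally attenuated along its topographic flow path before it reaches the stream, which is the routing step omitted here.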
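The validation step, in which annual nutrient loads are derived from paired flow and spot concentration data, can likewise be sketched as a simple flow-weighted integration. This is a generic interpolation approach, not the specific procedure of Dunn et al. (2014); the flow series, sample concentrations, and station details below are synthetic.

import numpy as np
import pandas as pd

# Hypothetical daily mean flow (m3 s-1) for one water year at a gauging station
days = pd.date_range("2013-10-01", "2014-09-30", freq="D")
rng = np.random.default_rng(42)
flow = pd.Series(rng.gamma(shape=2.0, scale=3.0, size=len(days)), index=days)

# Sparse monthly spot samples of total phosphorus concentration (mg L-1)
sample_dates = pd.date_range("2013-10-15", "2014-09-15", freq="MS") + pd.Timedelta(days=14)
conc_samples = pd.Series(rng.uniform(0.01, 0.06, size=len(sample_dates)), index=sample_dates)

# Interpolate concentration onto the daily flow record (linear in time),
# then integrate load = concentration x flow over the year.
conc_daily = conc_samples.reindex(days.union(sample_dates)).interpolate(method="time")
conc_daily = conc_daily.reindex(days).bfill().ffill()

SECONDS_PER_DAY = 86400
# mg L-1 * m3 s-1 * s day-1 = g day-1; divide by 1e6 to obtain tonnes
daily_load_t = conc_daily * flow * SECONDS_PER_DAY / 1e6
print(f"Estimated annual TP load: {daily_load_t.sum():.2f} t yr-1")

Because the spot samples are sparse and often taken under base-flow conditions, such an estimate carries the same representativeness caveat noted above for the measured phosphorus data.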
Ecosystem services modelling tools can help land managers and policy makers evaluate the impacts of alternative management options or changes in land use on the delivery of ecosystem services. As the variety and complexity of these tools increases, there is a need for comparative studies across a range of settings, allowing users to make an informed choice. Using examples of provisioning and regulating services (water supply, carbon storage and nutrient retention), we compare three spatially explicit tools – LUCI (Land Utilisation and Capability Indicator), ARIES (Artificial Intelligence for Ecosystem Services) and InVEST (Integrated Valuation of Ecosystem Services and Tradeoffs). Models were parameterised for the UK and applied to a temperate catchment with widely varying land use in North Wales. Although each tool provides quantitative mapped output, can be applied in different contexts, and can work at local or national scale, they differ in the approaches taken and underlying assumptions made. In this study, we focus on the wide range of outputs produced for each service and discuss the differences between each modelling tool. Model outputs were validated using empirical data for river flow, carbon and nutrient levels within the catchment. The sensitivity of the models to land-use change was tested using four scenarios of varying severity, evaluating the conversion of grassland habitat to woodland (0–30% of the landscape). We show that, while the modelling tools provide broadly comparable quantitative outputs, each has its own unique features and strengths. Therefore the choice of tool depends on the study question.
720
A Research Review on the Key Technologies of Intelligent Design for Customized Products
Product digital design involves completing a product design process using advanced digital technologies such as geometry modeling, kinematic and dynamic simulation, multi-disciplinary coupling, virtual assembly, virtual reality, multi-objective optimization, and human-computer interaction.Although there is no universal definition for customized design, its basic meaning is that a customized product is designed to satisfy the customer’s individual and diversified requirements as quickly and at as low a cost as possible.Many scholars have carried out research into the methodology and key technology of product design .Customized design usually involves a strategy in which customer-oriented design is separated from order-oriented design .Customer-oriented design is based on an analysis of customer requirements, and involves a modular preformed product family that is developed through serialization.Order-oriented design, which is based on an existing product family, rapidly designs a product’s structure in order to satisfy the customized requirements of customers by configuration methods when customer orders arrive.Customer-oriented design influences the cost and time required to market new products.Designing for customer orders affects the delivery of individual customized products.Customized products are designed and manufactured on a per-order basis.Complex equipment—such as computer numerical control machine tools, cryogenic air-separation units, plate-fin heat exchangers, and injection-molding equipment—has many characteristics such as demand diversity, fuzzy dynamics, a cumbersome design response, and a complex design process.The question of how to satisfy customers’ individual requirements and achieve rapid design and innovation of complex customized equipment has become an important factor that determines the survival and competitiveness of equipment-manufacturing enterprises.Therefore, it is urgently necessary to develop an intelligent design platform in order to support the development of manufacturing products.In this way, the digital design of products will develop in the direction of intelligence and customization.Experts have predicted that more than half of future manufacturing will involve personal customization.Although the studies conducted by the Chinese Mechanical Engineering Society indicate that manufacturing enterprises will generate stronger demands for product development and changes, the harsh reality is that modern enterprises lack advanced design ability.It is worth mentioning that many institutions have carried out research into big data and the design technology of customized products.Stanford University’s structured design model combines requirements, technology, and product performance mapping.Yale University has also developed analysis methods based on big data to support design research.The key technologies of intelligent design for customized products include: the description and analysis of CRs; product family design for a customer base; configuration and modular design for customized products; variant design for customized products; and a knowledge push for product intelligent design.CRs usually include obvious features such as fuzziness, uncertainty, or dynamism.It is important to describe fuzzy CRs in an accurate way for the realization of customized design.Designing for customization involves forming customized requirements to meet the CRs by means of analysis, data mining, and prediction.Common design methods include the analytical methods based on the
Kano model and quality function deployment.In the Kano analytical method , CRs are divided into basic requirements, expected requirements, and exciting requirements.Customized design should first satisfy the basic requirements, and then satisfy the expected and exciting requirements as much as possible.The QFD method is a multi-level deductive analysis method that translates CRs into design requirements, part characteristics, process requirements, and product requirements.It then builds a product planning matrix called a “house of quality”.At this point, the difficulty of requirements-based design lies in how to analyze, predict, and follow the potential requirements of customers.Regarding the description and analysis of CRs, Jin et al. investigated information representativeness, information comparativeness, and information diversity and proposed three greedy algorithms to obtain optimal solutions for the optimization problem.Wang and Chin proposed a linear goal programming approach to evaluate the relative weight of CRs in QFD.Juang et al. proposed and developed a customer requirement information system in the machine tool industry, by using fuzzy reasoning and expert systems.Haug developed a conceptual framework based on 10 industrial designers’ interviews and studies on reference projects; this framework defined the overall CR emergence models and associated communicative issues, enabled designers to elicit CRs more efficiently, and allowed designers to reduce delay in the emergence of client requirements and avoid wasting effort on design paths.Wang and Tseng proposed a Naïve Bayes-based approach to describe clients’ technical functional requirements and subjective preferences, and to map them according to detailed attributes and design parameters.Raharjo et al. proposed a novel systematic approach to deal with the dynamics of customer demands in QFD.Elfvengren et al. studied the usefulness and usability of a group decision support system in the assessment of customers’ needs in industrial companies.Çevik Onar et al. proposed a hesitant fuzzy QFD that could reflect a human’s hesitation more objectively than the classical extensions of other fuzzy sets; they then applied it to computer workstation selection problems.Osorio et al. proposed the extension of a universal product data model to mass customization and sustainability paradigms in order to meet the requirements of supporting a sustainable mass-customized product design process.Regarding the description of product requirements, research has focused on the following: the description of requirements based on set theory, the broader description of requirements based on ontology, and the description of requirements based on fuzzy clustering.Requirements-based design faces the following challenges: Modeling generalized requirements for customization.To rapidly improve the standardization of customized requirements and guarantee the accuracy and consistency of the design process for an understanding of CRs, it is necessary to build a multi-level model of the generalized requirements from the time dimension, space dimension, process dimension, and so on. Predicting and mining customized requirements.With the development and maturation of big data, it is possible to collect data through the Internet and the Internet of Things.It is important to mine users’ behavior patterns and consumption habits from massive data in order to forecast customized requirements and determine hidden customized requirements. 
Mapping and transforming customized requirements.To ensure consistency, accuracy, and timeliness in the transformation from CRs to technical requirements, it is necessary to build a model that automatically maps and transforms CRs, including dynamic, fuzzy, and hidden CRs, into technical requirements. Creating value design for the customized requirements of customers.It is difficult to predict and create new CRs based on analyses of existing CRs, and it is also difficult to build customization while considering factors such as cost, feasibility, and urgency.The description and analysis of CRs form the basis of intelligent design for customized products.The layout scheme design for a lathe-mill cutting center is shown in Fig. 1.PFD refers to the extraction of product variant parameters in accordance with CRs for a specific customer base, and the formation of a variable model of dynamic products that includes the main structure, main model, main document, and so forth.According to different variant-driven modes, the PFD method can be module driven or parameter driven .A module-driven product family includes a series of basic, required, and optional modules, and can satisfy different requirements from customers through a combination of different modules.A parameter-driven product family includes a series of products that have the same public variables but different adjustable variables; the structure and performance of products can then be changed by scaling the adjustable variables up or down while maintaining the same public variables, in order to satisfy the individual CRs.PFD focuses on ensuring product family optimization, data consistency, and traceability in the product life cycle.The challenges of PFD include: A design program for the product family.Given the preference to and importance of the requirements from customers and the performance characteristics of the products, it is difficult to program the rational variant parameters of the product family and value range so as to achieve integrated optimization of cost and competitiveness for the product family. Modularization of the product family.The design of the modularized product family focuses on forming a series of functional and structural modules along with a main structure based on the design constraint.It is difficult to form individual products that satisfy different customized requirements using a combination of different modules. A dynamic model of the product family.Due to market factors, technological innovation, maintenance, recycling, and other reasons, product family data changes during the product life cycle.The construction of a dynamic model for product family technology can ensure the consistency, accuracy, and traceability of life cycle data for the product family. An evolutionary genetic algorithm model of the product family.It is difficult to build a genetic algorithm model of the product family, which is done through analysis and mining of the evolution history and current situation of the product family.It is also difficult to achieve reuse of the product family and self-organized evolution, which is based on the principle of biological evolution and the corresponding evolutionary algorithm. 
A design evaluation of the product family.It is difficult to evaluate PFD and to direct the product family to obtain, for example, the lowest cost and best market competitiveness; these are done by using big data such as the reuse frequency and the maintenance service of products and parts.Configuration design means conducting a rational variant for a customer-oriented dynamic product model, in order to form an individual product structure that satisfies the CRs for MC .Current research into configuration design focuses on three aspects: the representation of configuration knowledge, the modeling of configuration knowledge, and the solutions to configuration problems .In future, the main problem of configuration design will be how to mine configuration knowledge, in order to improve the automation and intelligence and increase the optimization of configuration design.Regarding the design of the configuration and modules of customized products, Stone et al. proposed three heuristic methods: dominant flow, branching flow, and conversion-transmission function chains.Fujita discussed design and optimization problems in product variety.Carnduff and Goonetillake proposed a configuration management pattern in which configurations are managed as versions.Jiao et al. proposed a generic genetic algorithm for PFD and developed a general encoding scheme to accommodate different PFD scenarios.Tsai and Chiu developed a case-based reasoning system to infer the main process parameters of a new printed circuit board product, and used the secure nearest neighbor search method to objectively retrieve similar design situations.Yadav et al. amalgamated component modularity and function modularity in the product design in order to address design-for-supply-chain issues using a generic bill of materials representation.Schuh et al. proposed a three-stage holistic approach to develop modular product architectures.Pakkanen et al. proposed a method of rationalizing the existing product variety for a modular product line that supports product configuration, which is known as the Brownfield Process.Chen and Liu constructed a strategic matrix of interface possibilities in modular product innovation using the internal and external aspects of the product and the openness of the interface.Dahmus et al. proposed a method of building a product portfolio to exploit possible commonality by reusing modules throughout the product family on the basis of the functional modeling of products using function structures.Dou et al. proposed an interactive genetic algorithm with interval individual fitness based on hesitancy in order to achieve a fast and accurate response to users’ requirements for complex product design and customization.Du et al. developed a Stackelberg game theory model for the joint optimization of a product series configuration and a scaling design, in which a two-tier decision-making structure revealed the coupling decision between the module configuration and the parameter scaling.Ostrosi et al. proposed a fuzzy-agent-based approach for assisting product configuration.Khalili-Araghi and Kolarevic proposed a conceptual framework for a dimensional customization system that reflects the potential of a constraint-based parametric design in the building industry.Modrak et al. 
developed a methodological framework for generating all possible product configurations, and proposed a method for determining the so-called product configuration complexity by specifying the classes and sub-classes of product configurations.They also calculated product configuration complexity using Boltzmann entropy theory .Chandrasekaran et al. proposed a structured modular design approach for electro-mechanical consumer products using PFD templates.The abovementioned research studies thus focused on a module planning method for customized products based on a design structure matrix, the rule-based configuration design method, the instance-based configuration design method, and so forth.The following challenges are encountered during configuration and modular design: Achieving a configuration design for MOO.When meeting the requirements of customer orders, configuration design needs to comprehensively consider the manufacturing cost, service mode, low-carbon and green characteristics, and many other product goals, in order to achieve MOO. Mining configuration knowledge.Using big data technology, it is difficult to mine the existing historical data of enterprises and translate the findings into configuration knowledge that can provide a basis for the product configuration design. Using inference and decision technology for configuration design.With the increase of individual-level configuration-knowledge complexity, inference and decision technology has influenced the efficiency of configuration design and the effectiveness, economy, and feasibility of the configuration results. Achieving a configuration design based on VR.Given progress in VR, augmented reality, and mixed reality technologies, configuration design will provide online awareness and an experience function that is matched and received, thereby greatly improving customer satisfaction.The total structural deformation of the third and fourth order modal of a gantry is shown in Fig. 2 and Fig. 3, respectively.Modal analyses of the base frame and the vertical column were carried out in order to investigate dynamic characteristics.The frequencies of the first, second, third, fourth, and fifth order modal are 72.042 Hz, 78.921 Hz, 115.390 Hz, 162.860 Hz, and 163.680 Hz, respectively.Variant design refers to the completion of a design of either a geometric structure or a product module in order to produce more design schemes corresponding to CRs.Nidamarthi et al. proposed a systematic approach to identify the basic design elements of a profitable product line.Snavely and Papalambros proposed a method to reduce the size of configuration problems by abstracting components to higher levels of abstraction.Yu et al. proposed a joint optimization model for complex product variant design according to changes in customer demand for maximized customer satisfaction and minimized cost.Gero presented a number of computational models for creative designing.Hong et al. proposed a two-step similarity comparison method for boundary representation files in order to compare similarities between mechanical components in the design process.Fowler noted that variant design is a technique for accommodating existing design specifications in order to meet new design goals and constraints, and proposed barriers of variant design in current systems in order to improve current systems in their support of variant design.Chen et al. 
proposed a property-based, object-oriented approach for effectively and comprehensively implementing change impact analysis tasks in variant design.Lo et al. proposed a holistic methodology, based on three-dimensional morphological diagrams of QFD, to support the variant design of serialization products and to simplify the traditional cascading QFD process in order to meet the special needs of technically mature and highly modularized products.Modrak et al. investigated and presented a novel methodology for creating all possible product configurations and variations, based on a given number of base components and an optional number of complementary components.Wang et al. presented an assembly variant design system architecture and a complementary assembly method.Ketan et al. introduced three different types of variant feature models based on the concept of engineering description for variant features.Prebil et al. studied the possibilities of design process methods related to the capabilities of a computer-aided design system used for the manufacture of rotational connections and the design of workshop documentation.Nayak et al. proposed the variation-based platform design method for PFD, which uses the smallest variation of the product designs in the family to enable a range of performance requirements.Jiang and Gao proposed a class of drawing tool: the conicoid.The scope of a 3D diagram that can be drawn with a conicoid is larger than what can be drawn using only planes and spheres.After adding a conicoid, the designers can draw a figure that can be described by a sequence of equations of a degree that is less than nine.Lee proposed a degree-of-freedom-based graph reduction approach to geometric constraint solving for maximizing the efficiency, robustness, and extensibility of a geometric constraint solver.The challenges in variant design include: establishing the variant design of a structure based on multi-domain mutual-use models; customizing a design based on evolution; and developing a performance-enhancement design for complex equipment.To complete the design of either a geometric structure or a product module in order to produce more design schemes corresponding to CRs, Table 1 shows a comparison of two design schemes for multi-axis machine tools.Scheme 1 and Scheme 2 of a lathe-mill cutting center are respectively shown in Fig. 4 and Fig. 5.The kinematic chain of Scheme 1 is WCOYZXAD, and the kinematic chain of Scheme 2 is WCYOZXAD, where W represents the workpiece, O represents the machine bed, D represents the machining tool, and X, Y, Z, A, C represent the coordinate axes of the machine tools.Regarding product support from intelligent design, Younesi and Roghanian proposed a comprehensive quality function deployment for environment, a fuzzy decision-making trial-and-evaluation laboratory, and a fuzzy analytic network process for sustainable product design in order to determine the best design standards for a specific product.Pitiot et al. studied a preliminary product design method based on a primitive evolutionary algorithm called evolutionary algorithm oriented by knowledge.Costa et al. presented the product range model, which combines rule-based systems with CBR to provide product design decision support.Winkelman proposed an intelligent design directory that consists of a virtual design environment associated with standard component catalogues.Hahm et al. 
proposed a framework to search engineering documents that has fewer semantic ambiguities and a greater focus on individualized information needs.Akmal et al. proposed an ontology-based approach that can use feature-based similarity measures to determine the similarity between two classes.Morariu et al. proposed a classification of intelligent products from the perspective of integration, and introduced formalized data structures for intelligent products.Li et al. proposed a knowledge training method based on information systems, data mining, and extension theory, and designed a knowledge-management platform to improve the quality of decision-making.Diego-Mas and Alcaide-Marzal proposed a neural-network-based approach to simulate the consumers’ emotional responses for the form design of products, and developed a theoretical framework for the perceptions of individual users.Tran and Park proposed eight groups of 29 scoring criteria that can help designers and practitioners compare and select an appropriate methodology for designing a product service system.Kuo et al. used a depth-first search to create a predictive eco-design process.Andriankaja et al. proposed a complete PSS design framework to support integrated products and services design in the PSS context.Muto et al. proposed a task-management framework that enables manufacturers to develop various PSS options for their product-selling business.Ostrosi et al. proposed a proxy-based approach to assist with the configuration of products in conceptual design.Chan et al. proposed an intelligent fuzzy regression method to generate a model that represents the nonlinear and fuzzy relationship between emotional responses and design variables.Challenges affecting knowledge push design include: establishing an instance-based product design method; utilizing the intelligent design method based on knowledge-based engineering; and developing a knowledge push using task-oriented requirements.An electroencephalogram measures and records the electrical activity of the brain, using biofeedback and the biological effects of an electromagnetic field.Special sensors called electrodes are attached to the head.Changes in the normal pattern of electrical activity can show certain conditions, such as an epiphany, imagination, and reasoning.Fig. 6 shows the intelligent design of machine tools utilizing an EEG for the purpose of intelligent design.Fig. 7 shows the graphical user interface for measuring EEGs, and Fig. 
8 shows the relative voltage of the EEG along with spectral analysis for a knowledge push.Existing intelligent design methods for customized products usually require the establishment of design rules and design templates in advance, and the use of knowledge matching to provide a design knowledge push and to enhance design intelligence.It is still difficult to complete a design for customized products with individual requirements.The following technical difficulties still hinder the achievement of rapid and innovative design for complex customized equipment: It is difficult to adapt the excavation of requirements based on big data.In a big data environment, the data source of individual requirements is mainly information from pictures, video, motion, and unstructured data in the form of radio frequency identification, which is not limited to structured data.It is difficult to establish a matching and coordinated relation of individual requirements between heterogeneous and unstructured multi-source data, so the accuracy of the requirements data analysis is affected. It is difficult to achieve many individual design requirements, and to rapidly respond to and support the design innovation schemes of customized products for individual requirements.It is difficult to complete intelligent design that comes from different specialties and different subject backgrounds through swarm intelligence in order to develop the intelligence of public groups. It is difficult to master the inherent knowledge and experience of designers.Existing intelligent design needs push knowledge based on learning specialties, owned skills, and existing design experience.We have researched intelligent design theory and the method and application of customized equipment for precise numerical control machine tools, a super-large cryogenic ASU, a PFHE, injection molding equipment, and low-voltage circuit breakers .In recent years, humankind has entered the big data era, with the development and application of technologies such as the Internet, cyber-physical systems, and more.Based on an Internet platform, China’s Internet Plus initiative, which began in 2015, crossed borders and connected with all industries by using information and communication technology to create new products, new businesses, and new patterns.Big data has changed the product design and manufacturing environment.These changes strongly influence the analysis of personalized requirements and methods of customized equipment design.The specific product performance is as follows.Designing customized equipment is different from designing general products, as it usually reflects particular requirements from customers by order.This CR information usually shows non-regularity.The relationship among different orders is not strong, which leads to situations in which the type of product demand information is extremely mixed up and the amount of information is very great.With the development of e-commerce concepts, such as online-to-offline, business-to-customer, business-to-business, and so forth, a large amount of information on effective individual needs becomes hidden in big data.An essential question in product design is how to mine and transform individual requirements in order to design customized equipment with high efficiency and low cost.Customized equipment design is usually based on mass production, which is further developed in order to satisfy the customers’ individual requirements.Modular recombination design and variant design are carried out for the 
base product and its composition modules, in accordance with the customers’ special requirements, and a new evolutionary design scheme that is furnished to provide options and evolve existing design schemes is adopted.An individual customized product is provided for the customers, and the organic combination of a mass product with a traditional customized design is achieved.In the Internet age, the design of customized equipment stems from the knowledge and experience of available integrated public groups, and is not limited to a single designer.In this way, the innovation of customized equipment is enhanced via swarm intelligence design.As a result, the Internet Plus environment has transformed the original technical authorization from a manufacturing enterprise interior or one-to-one design into a design mode that fuses variant design with swarm intelligence design.Intelligent design using intelligent CAD systems and KBE is a new trend in the development of product design.This is a gradually deepening process of data processing and application, which moves from the database to the data warehouse, and then to the knowledge base.Fig. 9 shows the GUI of an accuracy allocation design for NC machine tools.Fig. 10 shows the GUI of a design integration for NC machine tools.Fig. 11 shows surface machining using a five-axis NC machine center with a 45° tilt head.The process of intelligent design corresponding to individual requirements includes achieving individual mutual fusion of the requirements and parameters, and providing a foundation to solve the dynamic response and intelligent transformation of individual requirements.The basic features of future customized equipment design are numerous, incomplete, noisy, and random.Unstructured design requirement information is equally mapped between individual requirements.A mutual fusion-mapping model of the different requirements and design parameters from the big data environment is urgently needed.The process of customized design using swarm intelligence includes achieving the drive and feedback of a swarm intelligence platform design, and providing technological support for a further structural innovation design platform for Internet Plus customized equipment.The future design of customized products lies in the process of cooperation between multiple members of the public community and in swarm intelligence design, which is not limited to a single designer.Swarm intelligence design can be integrated into the intelligence of public groups.The process of intelligent design for customized products with a knowledge push includes achieving the active push of a design resource based on feedback features, and enhancing the design intelligence of complex customized equipment.In future, intelligent design for customized products can be achieved by design status feedback and scene triggers based on a knowledge push.With the development of advanced technology such as cloud databases and event-condition-action rules , future intelligent design for customized products will be more requirement-centered and knowledge-diversified, with appreciable specialty and higher design efficiency.
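As an illustration of the module-driven configuration design discussed above, the short Python sketch below enumerates combinations of alternative modules, filters them against a compatibility rule and against cost and performance requirements taken from a customer order, and returns the feasible configurations. The module names, attribute values, and rules are hypothetical and are not drawn from any of the cited systems.

from itertools import product as cartesian

# Hypothetical module library for a module-driven product family:
# each slot offers alternative modules with (cost, performance score) attributes.
MODULE_LIBRARY = {
    "bed":     {"cast_iron": (20, 6), "welded_steel": (14, 4)},
    "spindle": {"8k_rpm": (10, 5), "12k_rpm": (18, 8)},
    "control": {"basic_cnc": (8, 4), "five_axis_cnc": (25, 9)},
}

# Simple compatibility constraint expressed as forbidden module pairs
FORBIDDEN = {("welded_steel", "five_axis_cnc")}


def feasible(combo):
    """Reject configurations that violate the compatibility rules."""
    chosen = set(combo)
    return not any(set(pair) <= chosen for pair in FORBIDDEN)


def configure(budget, min_performance):
    """Enumerate module combinations and keep feasible ones that satisfy the order."""
    slots = list(MODULE_LIBRARY)
    candidates = []
    for combo in cartesian(*(MODULE_LIBRARY[s] for s in slots)):
        if not feasible(combo):
            continue
        cost = sum(MODULE_LIBRARY[s][m][0] for s, m in zip(slots, combo))
        perf = sum(MODULE_LIBRARY[s][m][1] for s, m in zip(slots, combo))
        if cost <= budget and perf >= min_performance:
            candidates.append((dict(zip(slots, combo)), cost, perf))
    # Prefer the cheapest configuration that meets the requirement
    return sorted(candidates, key=lambda c: (c[1], -c[2]))


for config, cost, perf in configure(budget=50, min_performance=17):
    print(config, f"cost={cost}", f"performance={perf}")

For a small library an exhaustive search is sufficient; for realistic product families the same feasibility and fitness checks would typically be embedded in a genetic algorithm of the kind discussed in the configuration design literature cited above.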
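The knowledge-push mechanism based on design status feedback, scene triggers, and event-condition-action (ECA) rules can be sketched in a similarly minimal way. The events, conditions, thresholds, and knowledge items below are invented for illustration only.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class EcaRule:
    """A minimal event-condition-action rule for a design knowledge push."""
    event: str                         # design event that triggers evaluation
    condition: Callable[[Dict], bool]  # predicate over the current design context
    action: Callable[[Dict], None]     # e.g. push a knowledge item to the designer


@dataclass
class KnowledgePushEngine:
    rules: List[EcaRule] = field(default_factory=list)

    def on_event(self, event: str, context: Dict) -> None:
        # Fire every rule whose event matches and whose condition holds
        for rule in self.rules:
            if rule.event == event and rule.condition(context):
                rule.action(context)


def push(item: str) -> Callable[[Dict], None]:
    return lambda ctx: print(f"[push to {ctx['designer']}] {item}")


# Hypothetical rules: names, thresholds and knowledge items are illustrative only.
engine = KnowledgePushEngine(rules=[
    EcaRule(
        event="spindle_speed_changed",
        condition=lambda ctx: ctx.get("spindle_rpm", 0) > 10000,
        action=push("Guideline: check bearing thermal limits for high-speed spindles"),
    ),
    EcaRule(
        event="module_selected",
        condition=lambda ctx: ctx.get("module") == "five_axis_cnc",
        action=push("Past case: accuracy allocation template for five-axis machining centres"),
    ),
])

# Simulated design events raised by the CAD environment
engine.on_event("spindle_speed_changed", {"designer": "alice", "spindle_rpm": 12000})
engine.on_event("module_selected", {"designer": "alice", "module": "five_axis_cnc"})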
The development of technologies such as big data and cyber-physical systems (CPSs) has increased the demand for product design. Product digital design involves completing the product design process using advanced digital technologies such as geometry modeling, kinematic and dynamic simulation, multi-disciplinary coupling, virtual assembly, virtual reality (VR), multi-objective optimization (MOO), and human-computer interaction. The key technologies of intelligent design for customized products include: a description and analysis of customer requirements (CRs), product family design (PFD) for the customer base, configuration and modular design for customized products, variant design for customized products, and a knowledge push for product intelligent design. The development trends in intelligent design for customized products include big-data-driven intelligent design technology for customized products and customized design tools and applications. The proposed method is verified by the design of precision computer numerical control (CNC) machine tools.
721
Topoisomerase I in Human Disease Pathogenesis and Treatments
Topoisomerase 1 is a highly conserved enzyme that can be found in both prokaryotes and eukaryotes.In the mammalian system, TOP1 is an essential enzyme for normal development .A major function of TOP1 is to relax supercoiled DNA and alleviate the DNA helical constraints .This is achieved by the binding of TOP1 to the supercoiled DNA, followed by the cleavage of one strand of the duplex DNA to create a nick, allowing the duplex DNA to untwist and relax .DNA supercoiling is a naturally-occurring biological process when a DNA replisome or an RNA polymerase unwinds and translocates on the DNA to synthesize DNA or RNA.If not removed, these supercoiled DNA can hinder the progression of the replication fork or RNAP.In addition, negatively supercoiled DNA can facilitate the formation of RNA:DNA hybrids, or R-loops, between DNA template and the newly-synthesized RNA.If not resolved, R-loops can stall further transcription and DNA replication forks, leading to DNA double-strand break formation .TOP1 is known to interact directly with the active form of RNAPII and localize to transcriptionally-active regions of the genome .It has been suggested that TOP1 may aid to suppress R-loop formation by removing supercoiled DNA during RNAPII-dependent transcription .In addition to its function in relaxing supercoiled DNA, cumulative evidence supports a direct role of TOP1 in transcriptional regulation.For example, during transcription, RNAPII pauses at initiation and splice sites , while TOP1 has been proposed to hold RNAPII at the promoter-proximal pause site .Nonetheless, the exact molecular mechanism by which TOP1 pauses RNAPII at the initiation site remains to be defined.Furthermore, TOP1 has been shown to promote the recruitment and assembly of spliceosome at TARs , and this function may be contributed by a potential TOP1-associated kinase activity to phosphorylate splicing factors .Efficient recruitment and coupling of RNA processing factors to the TARs are critical for ensuring uninterrupted production of full-length mature mRNA.In addition, spliceosome assembly onto nascent RNA transcript has important implications for genome stability as well, because the binding of RNA processing factors to the newly-transcribed RNAs can also prevent these RNA strands from invading the DNA template to generate R-loops .The involvement of TOP1 in spliceosome assembly may explain why TOP1 is important for transcriptional progression and R-loop suppression.Nonetheless, whether TOP1 functions as a protein kinase for the spliceosome assembly remains in great debate, as evidence also suggests that TOP1 is unlikely the only or the primary kinase that phosphorylates splicing factors .The dynamic functions of TOP1 in DNA replication and transcription provide important clues to why TOP1 is essential for development in the mammalian system.However, because TOP1 forms a covalent link intermediate, known as TOP1–DNA cleavage complex, with the 5′ phosphate group of the DNA during the topoisomerase reaction, the TOP1 activity can generate toxic DNA lesions due to a naturally-aborted topoisomerase reaction, leaving the TOP1 covalently trapped on the DNA .Alternatively, single-strand breaks accumulate due to irreversible DNA cleavage by TOP1 adjacent to a misincorporated ribonucleotide .The presence of these TOP1cc and DNA lesions may lead to cell death or mutagenesis, a precursor for tumorigenesis.Therefore, the topoisomerase activity of TOP1 is a double-edged sword and can have both positive and negative consequences on genome 
integrity and normal cell growth. In addition, the potential direct involvement of TOP1 in transcriptional regulation suggests that TOP1 dysfunction may alter the transcriptional landscape, leading to abnormal cellular functions. It is therefore not surprising that several human diseases have been linked to TOP1 regulation and activity. In this review, we will discuss the human diseases that may be linked to TOP1 and the mechanisms by which TOP1 activity may contribute to the etiologies of these diseases. In addition, we will also overview how the poisonous effect of TOP1cc on cell growth has benefited cancer treatments and how the ability of TOP1 to change the transcriptional landscape has become a focus for developing possible novel strategies to treat genetic diseases. In yeast, TARs are prone to mutations that arise from erroneous repair of TOP1cc created by TOP1-mediated removal of supercoiled DNA or from the irreversible DNA nick generated by TOP1 cleavage next to a misincorporated ribonucleotide. The mutagenic potential of the TOP1 activity demonstrated in yeast suggests that, if the same activity were to exist in humans, TOP1 activity could be a significant contributor to tumorigenesis. However, to date, very little research has been done to evaluate the connection between TOP1 activity and cancer risk. It is possible that in human cells TOP1 activity is regulated differently at TARs, such that TOP1 in human cells does not produce a high mutation rate during transcription. Indeed, new studies from our laboratory have recently shown that human cells actively suppress the topoisomerase activity of TOP1 at TARs via novel SUMO modifications at the lysine residues K391 and K436, thereby reducing TOP1-induced DNA damage. These SUMOylation sites are located within the catalytic core of the enzyme and are found only in mammals but not in yeast. Therefore, our studies suggest that humans have evolved a mechanism to minimize this type of transcription-associated genome instability caused by the TOP1 activity. Nonetheless, the protective effect of TOP1 K391 and K436 SUMOylations against TOP1-induced DNA damage during transcription also strongly points toward the possibility that a SUMOylation defect on these residues could lead to genome instability, mutagenesis, and cancer. This defect could be a consequence of a mutation within the SUMOylation motif sequence for either K391 or K436. Alternatively, mutations that lead to a defect in the interaction between TOP1 and its SUMO conjugation enzymes may also contribute to elevated TOP1 activity at TARs and an increase in transcription-induced mutagenesis. Clearly, more studies on the TOP1 mutations that affect these inhibitory SUMOylations are needed to establish a connection between tumor pathogenesis and a dysfunction in the regulation of TOP1 activity in human cells. While the accumulation of TOP1cc on DNA can lead to cell death, paradoxically, it is this toxic effect that has made the TOP1 activity a prime target for cancer therapy since ancient times. Camptothecin (CPT) is a natural herbal compound derived from the Camptotheca tree native to China and has been used in traditional Chinese medicine for thousands of years due to its anti-tumor activity. It was not until the 1980s that TOP1 was identified as the target of CPT. Since then, synthetic analogs of CPT, such as irinotecan and topotecan, have been developed as chemotherapeutic drugs, which have been approved both in the United States and in Europe for treating several aggressive and metastasized cancers. CPT and its
analogs are TOP1 poisons that have high affinity to the DNA-bound TOP1 molecules that are actively catalyzing the removal of supercoiled DNA .The binding of TOP1 poisons to the active TOP1–DNA complex prevents the completion of the topoisomerase reaction and traps TOP1 covalently onto DNA to create DNA damage and induce cell death .In addition, TOP1 poisons were found to sensitize cells to radiation therapy , increasing their potential usefulness in cancer therapies.Nonetheless, while fast-growing cells, such as cancer cells, are more vulnerable to DNA damage-induced cell death, the current dosages of CPT and its analogs used in chemotherapies can induce life-threatening side effects, including hematological toxicities, neutropenia, and diarrhea .Therefore, the development of new strategies to improve the efficacy of TOP1 poisons by increasing the sensitivity of fast-growing cancer cells to these drugs is an active research area.One way to sensitize cells to TOP1 poisons is to prevent the repair and removal of the TOP1 covalent adducts on the DNA.However, this approach may be complicated by the fact that there are several redundant DNA repair pathways that are potentially involved in repairing TOP1-induced DNA damages .Alternatively, since TOP1 poisons are thought to target only those TOP1 molecules that are actively catalyzing the topoisomerase reaction on the DNA, increasing TOP1 activity in cancer cells may enhance their sensitivity to killing by TOP1 poisons.Indeed, it has been observed that patients with higher TOP1 activity level responded to irinotecan- or topotecan-based chemotherapy better than those individuals with lower TOP1 activity level .However, the question is, is it possible to transiently increase TOP1 activity in a cell?,Our studies have shown that a defect in TOP1 K391 and K436 SUMOylations increases TOP1 activity , thereby causing more TOP1cc on the DNA and sensitizing human cells to the effects of TOP1 poisons.We thus suggest that developing a mechanism to block TOP1 K391 and K436 SUMOylations may be a useful therapeutic strategy to hypersensitize cells to TOP1 poisons during chemotherapy.The removal of TOP1cc and the repair of TOP1cc-induced DNA SSB lesions require the activation of ATM-dependent DNA damage response, which phosphorylates and activates tyrosyl-DNA phosphodiesterase-1 to remove the covalently-trapped TOP1 from DNA .Mutations in ATM and TDP1 have been linked to neurodegenerative diseases known as ataxia telangiectasia and spinocerebellar ataxia with axonal neuropathy, respectively .Brain functions are significantly impaired in both diseases, and one of the symptoms in these diseases is difficulty in speech, or dysarthria.Because both ATM and TDP1 are important for repairing DNA damages induced by TOP1 poisons , a recent study used mouse model to demonstrate that the accumulation of TOP1cc-associated DNA lesions due to defective ATM or TDP1 contributes to the pathogenicity of the neuronal degeneration phenotypes in neural tissue .Interestingly, transient development of dysarthria has been reported in rare cases during TOP1 poison-based chemotherapies due to their neurotoxicity .More studies should be done to understand the genetic backgrounds of those patients, who suffered dysarthria or other neurotoxic side effect during chemotherapeutic treatments using TOP1 poisons, to see if ATM or TDP1 are potential biomarkers for their susceptibility to these symptoms.High titer TOP1 autoimmune antibodies are among the most common features of scleroderma and are 
associated with a poor prognosis and a high mortality rate as well .Scleroderma describes a group of diseases characteristic of hardening of the skin and connective tissues caused by production of autoimmune antibodies.The majority of scleroderma patients produce autoimmune antibodies against their own nuclear constituents, which are not normally accessible to the immune system in healthy individuals .An autoimmune response can be triggered by an abnormally high level of apoptosis or a defect in the clearing of apoptotic cells, both of which can lead to an increase in the presentation of the apoptotic nuclear contents to the immune system .In addition, unusual post-translational modifications may cause the immune system to no longer recognize and tolerate the polypeptide as a self-protein .Indeed, the degree of SUMOylation, such as in the case of TOP1, is significantly elevated in scleroderma tissues .Epitope mapping indicates that α-TOP1 autoantibodies are highly reactive to its catalytic domain .However, the reason for which an individual develops a chronic autoimmune response against TOP1 and the consequence of the binding of these autoimmune antibodies to TOP1 remain unclear.Interestingly, patients with autoimmune antibodies against RNAPII are often positive for α-TOP1 autoantibodies as well , but the reason for the high frequency of RNAPII-TOP1 co-autoimmune response is not known either.The contribution of TOP1 to scleroderma is not limited to the production of α-TOP1 autoantibodies.In many scleroderma tissues, there are also a decrease in TOP1 catalytic activity and an increase in TOP1 SUMOylation , but the nature of this SUMOylation and the significance of these phenomena to scleroderma pathogenesis are yet to be defined.Since studies from our laboratory revealed that transcription-associated TOP1 K391 and K436 SUMOylation suppresses TOP1 activity while facilitating the TOP1–RNAPII interaction, it would be interesting to determine if TOP1 K391/K436 SUMO modification is deregulated in scleroderma.While a defect in transcription-associated TOP1 K391/K436 SUMOylation could lead to DNA damage and genome instability, hyper K391/K436 SUMOylation would be predicted to enhance the level of TOP1–RNAPII complexes in cells and alter transcriptional landscape, leading to transcriptional stress and increased programed cell death.The elevated level of cells undergoing apoptosis is expected to lead to increased presentation of the TOP1–RNAPII complex to the immune system, resulting in autoimmunity.The increased cell deaths could also contribute to organ failure and fibrosis observed in scleroderma patients.Interestingly, in addition to being widely used in cancer therapy, TOP1 poisons were recently shown to alleviate Angelman syndrome, a subtype of autism spectrum disorders by suppressing the exceptionally long, antisense RNA transcript UBE3A-ATS .UBE3A-ATS blocks the expression of its sense gene UBE3A that is important for preventing the disease .Nonetheless, how TOP1 poisons affect UBE3A-ATS expression remains unclear.Treatment with TOP1 poisons, such as CPT or topotecan, can lead to the reduced expression of many genes in both yeast and human cells .This transcriptional blockade was originally attributed to either the presence of unresolved supercoiled DNA or the accumulation of covalently-trapped TOP1 on the genomic DNA .However, recent observations demonstrated that TOP1 poisons only reduce the expression of exceptionally-long and highly-transcribed genes with median gene length of 66 kb, 
while up-regulating the expression of shorter genes that are normally expressed at low levels .In addition, similar transcriptional interference can also be achieved by TOP1 depletion , suggesting that the effect of TOP1 poisons on transcriptional progression is not due to DNA damage caused by the formation of TOP1cc, which requires the presence of catalytically-active TOP1.In addition, the level of transcriptional suppression by TOP1 poisons not only depends on the gene length but also positively correlates with the number of introns in the gene .Since TOP1 has been implicated in the recruitment and the assembly of spliceosome at TARs to promote efficient transcriptional progression , it is possible that TOP1 poisons may influence the spliceosome assembly to exert inhibitory effects on gene expression in an intron-dependent manner .Since spliceosome assembly on the newly-synthesized mRNA also contributes to suppressing R-loops , the possible effect of TOP1 poisons on spliceosome assembly is consistent with the observation that topotecan stabilizes R-loop formation, which correlates with the inhibition of the expression of UBE3A-ATS .In summary, TOP1 poisons could be useful for the treatment of Angelman syndrome or other genetic disorders that may be suppressed by blocking expression of long genes.However, these compounds are toxic chemotherapeutic drugs and are not safe for long-term use.Therefore, a better understanding of the mechanism by which TOP1 poisons block long gene expression is necessary in aiding researchers to identify novel alternative strategies to target TOP1 in gene expression regulation.The authors have declared that no competing interests exist.
Mammalian topoisomerase 1 (TOP1) is an essential enzyme for normal development. TOP1 relaxes supercoiled DNA to remove helical constraints that can otherwise hinder DNA replication and transcription and thus block cell growth. Unfortunately, this exact activity can covalently trap TOP1 on the DNA that could lead to cell death or mutagenesis, a precursor for tumorigenesis. It is therefore important for cells to find a proper balance between the utilization of the TOP1 catalytic activity to maintain DNA topology and the risk of accumulating the toxic DNA damages due to TOP1 trapping that prevents normal cell growth. In an apparent contradiction to the negative attribute of the TOP1 activity to genome stability, the detrimental effect of the TOP1-induced DNA lesions on cell survival has made this enzyme a prime target for cancer therapies to kill fast-growing cancer cells. In addition, cumulative evidence supports a direct role of TOP1 in promoting transcriptional progression independent of its topoisomerase activity. The involvement of TOP1 in transcriptional regulation has recently become a focus in developing potential new treatments for a subtype of autism spectrum disorders. Clearly, the impact of TOP1 on human health is multifold. In this review, we will summarize our current understandings on how TOP1 contributes to human diseases and how its activity is targeted for disease treatments.
722
Small RNA Sequencing Reveals Dlk1-Dio3 Locus-Embedded MicroRNAs as Major Drivers of Ground-State Pluripotency
Embryonic stem cells are derived from the inner cell mass of blastocyst-stage embryo and provide a perpetual cell source to investigate pluripotency and stem cell self-renewal in vitro.ESCs were originally derived and maintained in serum-containing media on feeder cells.Further studies revealed that feeder cells provide leukemia inhibitory factor whereas serum provides bone morphogenetic protein signals, which inhibit ESC differentiation into mesendoderm and neuroectoderm, respectively.Based on these findings, ESC cultures supplemented with BMP and LIF signals have been used to maintain ESCs in an undifferentiated state and to suppress endogenous differentiation-promoting signals.Notably, pharmacological inhibition of endogenous pro-differentiation ESC signals allows maintenance and establishment of ESCs from different mouse and rat strains.Such culture conditions are defined as 2i, whereby two small-molecule inhibitors are used to block the glycogen synthase kinase 3 and fibroblast growth factor-extracellular regulated kinase pathways, allowing indefinite growth of ESCs without the need for exogenous signals.This so-called ground state of pluripotency displays robust pluripotency due to efficient repression of intrinsic differentiation signals and shows a remarkable homogeneity compared with ESCs kept in serum.Recently, we devised alternative culture conditions, dubbed R2i, which allow ground-state cultivation and efficient generation of ESCs from pre-implantation embryos.R2i conditions feature inhibition of transforming growth factor β and FGF-ERK signaling instead of GSK3 and FGF-ERK blockage used in the 2i approach.Compared with GSK3 inhibition, suppression of TGF-β signaling reduces genomic instability of ESCs and allows derivation of ESCs from single blastomeres at much higher efficiency.Since 2i and R2i ESCs both represent the ground state of ESC pluripotency, a systematic comparison of similarities and differences might aid in the understanding of core mechanisms underlying ground-state pluripotency.MicroRNAs are ∼22-nt long non-coding RNAs that post-transcriptionally regulate a large number of genes in mammalian cells, thereby modulating virtually all biological pathways including cell-fate decisions and reprogramming.In ESCs, ablation of miRNA-processing enzymes impairs self-renewal, rendering ESCs unable to differentiate.Individual miRNAs play important roles in ESC regulation.miR-290–295 cluster or let-7 family members, for example, promote or impair ESC self-renewal, respectively.Moreover, miRNAs enriched in ESCs promote de-differentiation of somatic cells into induced pluripotent stem cells.So far, most studies have focused on the expression and functional significance of miRNAs in ESCs kept in serum, which leaves a critical gap about the functional importance of miRNAs in ESCs cultured in ground-state conditions despite many insights into the transcriptome, epigenome, and proteome of ground-state pluripotency.In the present study, we analyzed the global expression patterns of miRNAs in ESCs cultured in ground-state conditions of 2i and R2i compared with serum using small RNA sequencing.We provide a comprehensive report on the “miRNome” of ground-state pluripotency compared with serum cells, which enabled us to identify miRNAs specific to each cell state.Furthermore, we found that selected ground-state miRNAs contribute to the maintenance of ground-state pluripotency by promoting self-renewal and repressing differentiation.To obtain a comprehensive expression profile of 
miRNAs in ground-state ESCs, we used the RB18 and RB20 ESC lines maintained under feeder-free conditions in serum, 2i, or R2i cultures. RB18 and RB20 ESC lines were initially derived from C57BL/6 mice using the R2i + LIF protocol. Isolated R2i cells were then transferred to 2i or serum-containing medium and passaged at least 10 times to derive stable 2i and serum cell lines. Pluripotency of the established cell lines was confirmed by chimera formation and germline contribution as previously reported. Small RNA-sequencing data were obtained for both the RB18 and the RB20 ESC lines, each time using pools of three independently grown cultures. Analysis of the small RNA profiles of the ESC samples revealed a total of 79,520,099 raw reads. After removal of adaptors and of reads shorter than 15 or longer than 35 bases, the processed reads were aligned to the mouse genome, which yielded a total of 35,287,315 mappable reads. We found that on average 29.15% of the processed reads matched known mouse miRNAs. Analysis of the length distribution revealed two peaks. The minor peak represented Piwi-interacting RNAs, whereas the major peak represented mature miRNAs. A Pearson correlation coefficient heatmap revealed a remarkably high degree of correlation between the miRNA profiles of the serum, 2i, and R2i cell lines, which was also supported by hierarchical co-clustering of the samples. The heatmap revealed that 2i and R2i cells showed greater similarity to each other compared with serum cells, which suggests that 2i and R2i cultures share a similar miRNA profile that might reflect the ESC ground state. Moreover, two-dimensional principal component analysis showed that 2i and R2i cells were located in close proximity to each other, but quite distinct from serum cells, demonstrating that ground-state pluripotency features a unique signature of small RNA expression. Since both cell lines showed virtually identical results, we merged the data from both ESC lines for the subsequent analyses. Bioinformatics analysis identified a set of 20 miRNAs that were abundantly expressed in ESCs under all conditions. Members of the miR-290–295 cluster represented the most highly expressed miRNAs, as previously reported. Interestingly, expression of most members of the miR-290–295 cluster did not change significantly across the different culture conditions. Likewise, we observed that several other pluripotency-associated miRNAs were expressed at similarly high levels under all conditions, suggesting that the most abundant miRNAs in ESCs did not undergo major expression changes under different culture conditions. In contrast, members of the miR-302–367 cluster were mostly upregulated in 2i/R2i cells compared with serum ESCs. This finding is in disagreement with results reported by Parchem et al., who suggested that miR-302b is expressed at higher levels in serum compared with 2i cells. However, in the previous study 2i chemicals were added to serum-containing medium, whereas in the current study 2i cells were cultured serum-free, which might explain the discrepancy. Analysis of the expression patterns of the pluripotency-associated miR-17 family revealed that most members of the miR-17–92 and miR-106b-25 clusters did not change their expression patterns in different ESC media. In contrast, most members of the miR-106a∼363 cluster were upregulated in serum ESCs compared with 2i and R2i. We next asked whether miRNAs associated with differentiation were differentially expressed. We found that the majority of differentiation-affiliated miRNAs were upregulated in serum cells compared
with 2i and R2i, suggesting that serum + LIF failed to completely suppress differentiation-associated processes.Interestingly, we found that a small number of other differentiation-associated miRNAs including let-7g-5p were more abundant in ground state compared with serum cells, although their read numbers were very small.This finding is consistent with a previous study showing increased expression of let-7 miRNAs in 2i compared with serum.We hypothesize that some differentiation-associated miRNAs might promote some features of ground-state pluripotency and/or render ground-state cells “poised” for rapid differentiation once 2i/R2i + LIF components of the ESC medium are removed from culture.To validate the expression patterns of miRNAs obtained by small RNA sequencing, we performed TaqMan miRNA qRT-PCR assays of RNA isolated from serum, 2i, and R2i cells, and observed highly similar results compared with small RNA sequencing.However, we noted statistically significant differences in the expression of let-7a-5p and miR-16-5p between the two platforms.Nevertheless the differences were minor, indicating that the small RNA-sequencing data are suitable for further analysis.Our small RNA-sequencing approach detected 1,233 miRNAs in each ESC state.Scatterplots indicated that miRNAs were significantly differentially expressed under different conditions.Some miRNAs such as miR-127-3p and miR-410-3p were upregulated in 2i versus serum, whereas miR-467c-5p and miR-10a-5p were upregulated in serum.We observed that miR-21a-5p and miR-203-3p were expressed significantly higher in serum than in R2i, whereas miR-381-3p and miR-127-3p were expressed much higher in R2i than in serum.Moreover, although the miRNome of 2i and R2i states was strikingly similar, some miRNAs such as miR-211-5p and miR-142a-5p were differentially expressed between 2i and R2i."These data show that ESCs' miRNA expression changes dynamically in response to changes in extrinsic signals.A global assessment of miRNAs exhibiting >2-fold differences in expression revealed that 179 miRNAs were differentially expressed between serum and 2i cells.Additionally, when comparing R2i with serum cells, we found that 95 miRNAs were upregulated in R2i and 84 miRNAs were upregulated in serum.In addition, 2i cells expressed nine miRNAs more abundantly than R2i cells, whereas R2i cells expressed six miRNAs more abundantly than 2i cells.Next, we sought to pinpoint miRNAs associated with serum or ground-state pluripotency.We defined ground-state-associated miRNAs as miRNAs that are significantly upregulated in both 2i and R2i.Our data identified 91 miRNAs associated with serum and 101 miRNAs associated with the ground state.Top serum-enriched miRNAs included miR-21a-5p and miR-467c-5p, while top miRNAs upregulated in ground state included miR-381-3p and miR-541-5p.To identify the biological pathways that are potentially modulated by serum and ground-state miRNAs in ESCs, we performed miRNA target prediction using TargetScan for the miRNAs enriched in each ESC state.A total of 878 transcripts were predicted to be targeted by miRNAs under serum conditions and 1,241 transcripts were potentially targeted by miRNAs specific to ground state.To identify only pathways that are “specifically” regulated by state-specific miRNAs, we excluded all transcripts co-targeted by both sets of miRNAs.This rationale reduced the number of predicted targets to 649 and 1,012.DAVID analysis revealed that miRNAs associated with the serum condition did not modulate critical 
pathways associated with fate decision in ESCs, while miRNAs associated with ground-state pluripotency were predicted to control key developmental processes predominantly related to differentiation.Our gene ontology analysis using Enrichr also indicated that in contrast to serum-up miRNAs, which did not seem to modulate ESC fate decisions, ground-state-up miRNAs targeted several crucial differentiation-associated pathways including neurogenesis and organ morphogenesis.We reasoned that miRNAs upregulated in ground-state cells might contribute to the inhibition of differentiation, which is in line with the concept that ground-state pluripotency is achieved by repression of differentiation processes.Next, we sought to profile the global chromosomal distribution of miRNAs detected in the different ESC samples.miRNAs were mapped to different chromosomes and the relative expression of mature miRNAs per chromosome was determined.miRNAs were encoded on and expressed in ESCs by all chromosomes other than the Y chromosome.Four chromosomes were observed to transcribe miRNAs more actively than other chromosomes and, interestingly, these four chromosomes expressed the most abundant miRNAs in ESCs: chromosome 6 harbors miR-182, miR-183, and miR-148a species; chromosome 7 harbors the miR-290–295 cluster; chromosome 14 harbors the miR-17–92 cluster and miR-16; and chromosome X harbors the miR-106a∼363 cluster.These observations indicate that chromosomes 6, 7, 14, and X code for and express the most abundant miRNAs in ESCs.Next, we determined how many miRNA genes are active per chromosome in different ESCs.As shown in Figure 4A, two chromosomes in serum ESCs and three chromosomes in 2i/R2i ESCs activated the largest number of miRNA genes.Since chromosomes 2 and 12 showed a significant difference in the number of miRNA genes they activated under different cultures, we focused on these two chromosomes and found, interestingly, that most members of a repetitive miRNA cluster within the 10th intron of the imprinted Sfmbt2 gene located on chromosome 2 were expressed much higher in serum compared with 2i/R2i.Moreover, we found that approximately all members of a large miRNA cluster embedded in an imprinted region overlapping with the developmentally important Dlk1-Dio3 locus on chromosome 12 were expressed much higher in 2i/R2i than in serum.Surprisingly, these Dlk1-Dio3 locus-embedded miRNAs constituted the majority of miRNAs upregulated in ground-state cells.In conclusion, we uncovered a ground-state-specific, imprinted genomic region coding for miRNAs that can serve as a signature for ground-state pluripotency and might be of functional importance.The association of chromosome 12-located miRNA gene expression with ground-state ESCs raised the intriguing question of whether these miRNAs directly affect acquisition or maintenance of the ground state.To evaluate this possibility, we selected three ground-state-associated miRNAs located within the imprinted Dlk1-Dio3 locus based on the following: their high abundance compared with other ground-state-specific miRNAs and their putative ability to target diverse differentiation pathways.After confirmation of efficient delivery of the candidate miRNAs into cultured mouse embryonic fibroblasts, we treated undifferentiated serum-grown ESCs with individual miRNAs and analyzed morphology and expression of different stemness and differentiation genes 3 days after transfection.ESCs treated with candidate miRNAs exhibited typical ESC colony formation with more compact 
morphology compared with the Scr control. qRT-PCR analysis revealed that miRNA-transfected ESCs expressed higher levels of stemness genes and lower levels of differentiation genes. In addition, we found that miR-541-5p, miR-410-3p, and miR-381-3p promoted the viability of ESCs 3 days post treatment as measured by the MTS assay. Moreover, transfection of miR-541-5p, miR-410-3p, and miR-381-3p into ground-state R2i ESCs cultured for 3 days in the absence of R2i/LIF significantly enhanced viability. Additional assessments of viability using the Live/Dead Viability/Cytotoxicity Kit validated these results. Furthermore, we observed an increase in the number of alkaline phosphatase (AP)-positive ESC colonies 5 days post transfection in both the presence and absence of LIF. Notably, overexpression of the miRNAs also enhanced AP activity in ESCs cultured without LIF for 5 days, further supporting our conclusion that ground-state-associated miRNAs promote pluripotency and partially "rescue" LIF-free ESCs from differentiation. Since ground-state-specific miRNAs promoted ESC viability and 2i and R2i cells exhibit a much higher cloning efficiency than serum cells, we wanted to determine the colony-forming ability of ESCs transfected with miR-541-5p, miR-410-3p, and miR-381-3p. Five days after reseeding of day-3 transfected serum ESCs, we analyzed AP activity by counting AP-positive and -negative colonies and observed a significant increase in the number of AP-positive colonies with all three miRNAs, but particularly with miR-541-5p, as well as a general enhancement of AP staining intensity. These results indicated that ground-state-associated miRNAs boosted AP activity and markedly promoted ESC clonogenicity. Ground-state miRNAs might influence ESC viability and clonogenicity by modulating the ESC cell cycle. We focused specifically on the G1 phase, which is shortened in ESCs compared with differentiated cells and is extended upon differentiation. To induce ESC differentiation, we removed LIF from the cultures and concomitantly added miRNAs. Three days after transfection, differentiating ESCs were analyzed by flow cytometry. As expected, we observed prolonged G0/G1 phases in Scr-treated control cells after LIF withdrawal, indicative of the exit from pluripotency. In contrast, transfection with either miR-541-5p, miR-410-3p, or miR-381-3p shortened the prolonged G1 phase of differentiating ESCs, which reached values characteristic of LIF-grown ESCs. Similarly, all three miRNAs reduced the sub-G1 phase under LIF-free conditions, which implies improved survival upon initiation of differentiation. To corroborate our results, we asked whether inhibition of the identified miRNAs would disrupt the ESC ground state. Transfection of ground-state ESCs with miR-541-5p, miR-410-3p, and miR-381-3p antagomirs induced upregulation of differentiation-associated genes 3 days after treatment. Inhibition of miR-541-5p and miR-381-3p also reduced cellular viability in MTS assays. Moreover, inhibition of ground-state miRNAs not only reduced the number of AP-positive 2i/R2i ESC colonies 3 days after miRNA inhibitor delivery but also reduced the clonogenicity of the cells 8 days post transfection. Of note, we also observed that inhibition of miR-541-5p remarkably decreased the colony size in both 2i and R2i ESCs 3 days and 8 days post transfection. Taken together, our data indicate that miR-541-5p, miR-410-3p, and miR-381-3p promote ESC self-renewal. We hypothesized that miR-541-5p, miR-410-3p, and miR-381-3p might contribute to the maintenance of ground-state
ESCs by inhibiting differentiation, which would fit with the observed downregulation of these miRNAs 7 days after initiation of differentiation. For analysis of the effects of miR-541-5p, miR-410-3p, and miR-381-3p on ESC differentiation, undifferentiated serum ESCs were seeded 1 day prior to miRNA transfection and ESCs were induced to form embryoid bodies (EBs) in LIF-free medium 1 day after transfection. EBs were harvested at day 7 and subjected to qRT-PCR expression analysis of key genes representing pluripotency and differentiation. Interestingly, we found that all candidate miRNAs repressed the majority of genes characteristic for ESC differentiation into different germ layers, suggesting that they block multi-lineage differentiation of ESCs. We also detected increased numbers of AP-positive EBs compared with the Scr control 7 days after differentiation. Similar to our previous results with undifferentiated ESCs cultured in monolayers, we observed increased AP activity of miRNA-treated EBs. Next, we examined whether inhibition of ground-state miRNAs stimulates multi-lineage differentiation of ground-state ESCs. To this end, we treated the cells with miRNA inhibitors 2 days prior to EB induction and then harvested the cells for qRT-PCR analysis 10 days post treatment. We observed a significant upregulation of most of the tested differentiation-associated genes, which is consistent with the data obtained using miRNA mimics. Taken together, our results indicate that miR-541-5p, miR-410-3p, and miR-381-3p contribute to the maintenance of ESC pluripotency and self-renewal by blocking differentiation. Since most differentially expressed miRNA genes are located in imprinted regions of the genome and serum cells have higher DNA methylation levels than 2i cells, we reasoned that differences in DNA methylation might contribute to differential miRNA gene expression from the Dlk1-Dio3 locus in ESCs. To test this hypothesis, we reanalyzed previously published DNA methylome data of serum and 2i cells. The average DNA methylation ratio at the Dlk1-Dio3 locus was 73.2% in serum and 37.6% in 2i ESCs, indicating that the Dlk1-Dio3 locus is hypomethylated in 2i compared with serum. To confirm this result, we subjected DNA from serum, 2i, and R2i ESCs to methylation-sensitive restriction enzyme digestion. The resulting genomic fragments flanking specific restriction sites within the Dlk1-Dio3 locus were analyzed by qPCR using specific primers. We found that ground-state ESCs compared with serum cells exhibited a significantly lower DNA methylation level at the three differentially methylated regions present in the Dlk1-Dio3 locus. We hypothesize that the higher DNA methylation levels at the Dlk1-Dio3 locus in serum cells explain the significantly lower expression of miRNAs embedded in this imprinted region. We next wanted to determine potential target genes of the candidate miRNAs. To achieve this goal, we intersected putative miRNA targets predicted by TargetScan with proteins that were downregulated in 2i and R2i ESCs relative to serum cells. This approach narrowed the putative target list down to 10, 4, and 5 genes for miR-541-5p, miR-410-3p, and miR-381-3p, respectively. Using qRT-PCR, we confirmed that miR-541-5p overexpression reduced prolyl 4-hydroxylase (P4ha1), miR-410-3p overexpression reduced serine-arginine rich splicing factor 11 (Srsf11) and poly(rC)-binding protein 2 (Pcbp2), and miR-381-3p overexpression downregulated microtubule-associated protein 4 (Map4). P4ha1 codes for a key enzyme implicated in the synthesis of collagen,
which has been shown to function as a barrier in iPSC generation.The extracellular matrix is of critical importance to ESC maintenance, and the expression of collagens and other ECM components have been reported to be increased upon exit from ground-state pluripotency.Srsf11 is a nuclear protein involved in pre-mRNA splicing.Pcbp2 encodes an RNA binding protein that regulates pre-mRNA splicing in the nucleus and mRNA stabilization in the cytoplasm and is implicated in the regulation of collagen synthesis.Map4 is a non-neuronal microtubule-interacting protein promoting tubulin polymerization.Map4 phosphorylation modulates microtubule assembly and dynamics, and cell cycling.Although further investigation is required to uncover the exact function of these genes in serum and ground-state pluripotency, we can speculate that the Dlk1-Dio3 locus-embedded miRNAs contribute to the maintenance of ground-state pluripotency by regulating ECM, cytoskeletal dynamics, RNA processing, and cell cycling.Small RNA sequencing has been used to profile miRNAs in different plant and animal species, diverse cell types, and human tissue fluids, but similar datasets for ground-state ESCs have been missing so far.In the present study, we used small RNA sequencing to obtain expression profiles of miRNAs in ground-state ESCs versus serum ESCs.Several miRNAs were differentially expressed between ground-state ESCs and serum ESCs.Most miRNAs upregulated in both 2i and R2i ESCs are located within the Dlk1-Dio3 locus.Interestingly, activity of the Dlk1-Dio3 locus seems crucial to maintain pluripotency of iPSCs, and its reduced expression is associated with incomplete iPSC reprogramming and poor contribution to mouse chimeras.We assume that ESCs undergo a gradual decline of Dlk1-Dio3 locus activity during the transition from ground to serum state and differentiation.Interestingly, the majority of miRNAs that are preferentially expressed in serum compared with 2i and R2i ESCs are located in the imprinted locus Sfmbt2.Our study, therefore, identifies large sets of differentially expressed miRNAs at imprinted loci, which can serve as specific markers of ground-state pluripotency versus serum state.The lower DNA methylation level observed at the Dlk1-Dio3 locus in ground-state cells might lead to a more accessible chromatin for pluripotency factors to bind and activate the miRNAs embedded in the locus.Of note, the imprinted Sfmbt2 locus, which harbors a large miRNA cluster, does not seem to carry a classic DMR, suggesting that different mechanisms regulate the expression of miRNAs embedded in the Sfmbt2 locus.Results from a previous study indicated that miRNAs from the Sfmbt2 locus are expressed at higher levels in serum ESCs compared with epiblast stem cells.Since we found higher levels of Sfmbt2-embedded miRNAs in serum ESCs versus 2i/R2i, it is tempting to speculate that expression of miRNAs embedded within Sfmbt2 locus presents a key miRNA signature of naive pluripotency allowing distinction of different mouse pluripotent states.We demonstrated using miRNA gain- and loss-of-function analyses that ground-state miRNAs within the Dlk1-Dio3 locus contribute to the maintenance of ground-state pluripotency by enhancing viability, clonogenicity, AP activity, and ESC cycling as well as inhibiting differentiation.Our analysis revealed that the putative targets of miR-541-5p, miR-410-3p, and miR-381-3p are involved in cytoskeletal organization, ECM dynamics, control of RNA processing/decay, and cell cycling.These processes have 
previously been found to be important for pluripotency in general and ground-state pluripotency in particular.Our findings provide a useful perspective to temporarily lock ESCs in the ground state either by exogenous introduction of cell-permeable miRNAs or by the identification of small molecules that increase ground-state miRNA expression.Mouse ESC lines were cultivated on gelatinized tissue-culture plates and dishes and passaged every other day.Serum ESCs were cultured in the presence of 15% ES-qualified fetal bovine serum, and 2i/R2i cells were cultured in serum-free N2B27 medium.Additional experimental procedures are detailed in Supplemental Experimental Procedures.H.B. and S. Moradi conceived and designed the study.S. Moradi, H.B., and T.B. designed experiments and analyzed and interpreted the data.S. Moradi performed most of the experiments and wrote the manuscript.H.B., T.B., and S.A. provided financial and administrative support, discussed the results, and approved the manuscript.S. Mollamohammadi and A.S. contributed to cell-culture and cell-cycle analysis.A.S.-Z.performed bioinformatics analysis.A.A. contributed to qRT-PCR and DNA methylation analysis.S.A., G.H.S., and S.G. contributed to data analysis and interpretation.All authors reviewed and confirmed the manuscript before submission.
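The state-specific miRNA sets described above rest on a simple quantitative rule: a miRNA is called ground-state-associated when it is more than 2-fold higher in both 2i and R2i than in serum, and serum-associated in the converse case. The short Python sketch below illustrates that filtering logic on a hypothetical table of normalized read counts; it is not the authors' actual analysis code, and the miRNA names and count values are placeholders.

import numpy as np
import pandas as pd

# Hypothetical normalized read counts (e.g., reads per million) per miRNA and culture condition.
# In the study these values would come from the small RNA sequencing of serum, 2i, and R2i ESCs.
counts = pd.DataFrame(
    {"serum": [5200.0, 12.0, 310.0],
     "2i": [5100.0, 95.0, 150.0],
     "R2i": [5300.0, 110.0, 140.0]},
    index=["miR-295-3p", "miR-381-3p", "miR-467c-5p"],
)

pseudocount = 1.0  # avoids division by zero for lowly expressed miRNAs

def log2_fold_change(a, b):
    # log2 ratio of condition a over condition b, with a pseudocount
    return np.log2((a + pseudocount) / (b + pseudocount))

lfc_2i = log2_fold_change(counts["2i"], counts["serum"])
lfc_r2i = log2_fold_change(counts["R2i"], counts["serum"])

# Ground-state-associated: >2-fold up in BOTH 2i and R2i relative to serum.
ground_state = counts.index[(lfc_2i > 1) & (lfc_r2i > 1)]
# Serum-associated: >2-fold up in serum relative to BOTH ground-state conditions.
serum_specific = counts.index[(lfc_2i < -1) & (lfc_r2i < -1)]

print("ground-state miRNAs:", list(ground_state))
print("serum miRNAs:", list(serum_specific))

The same count matrix could also feed the sample-level comparisons mentioned above, for example a Pearson correlation heatmap (numpy.corrcoef on the count columns) or a two-dimensional principal component analysis.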
Ground-state pluripotency is a cell state in which pluripotency is established and maintained through efficient repression of endogenous differentiation pathways. Self-renewal and pluripotency of embryonic stem cells (ESCs) are influenced by ESC-associated microRNAs (miRNAs). Here, we provide a comprehensive assessment of the “miRNome” of ESCs cultured under conditions favoring ground-state pluripotency. We found that ground-state ESCs express a distinct set of miRNAs compared with ESCs grown in serum. Interestingly, most “ground-state miRNAs” are encoded by an imprinted region on chromosome 12 within the Dlk1-Dio3 locus. Functional analysis revealed that ground-state miRNAs embedded in the Dlk1-Dio3 locus (miR-541-5p, miR-410-3p, and miR-381-3p) promoted pluripotency via inhibition of multi-lineage differentiation and stimulation of self-renewal. Overall, our results demonstrate that ground-state pluripotency is associated with a unique miRNA signature, which supports ground-state self-renewal by suppressing differentiation. Ground-state pluripotency is a cell state in which pluripotency is maintained through inhibition of differentiation. In this paper, Baharvand and colleagues report that ground-state pluripotency is associated with a unique microRNA signature. They find that ground-state microRNAs, which are mostly encoded by the Dlk1-Dio3 locus, contribute to the maintenance of ESCs through stimulating self-renewal and inhibiting differentiation.
723
Antibacterial activities of the methanol extracts and compounds from Uapaca togoensis against Gram-negative multi-drug resistant phenotypes
Medicinal plants have been used since ancient times for the management of various human and animal ailments. It is estimated that about 80% of the world's population depends wholly or partially on traditional medicine for its primary healthcare needs. The important advantages claimed for the therapeutic use of medicinal plants are their safety, besides being economical, effective and readily available. Several plants are traditionally used to treat infectious diseases. Infectious diseases, including bacterial infections, continue to be a serious health concern worldwide, the situation being complicated by the appearance of multidrug-resistant (MDR) pathogens. Several African medicinal plants previously displayed good antibacterial activities against Gram-negative MDR phenotypes. Some of them include Dichrostachys glomerata, Beilschmiedia cinnamomea and Olax subscorpioïdea, Lactuca sativa, Sechium edule, Cucurbita pepo and Solanum nigrum, Piper nigrum and Vernonia amygdalina, Beilschmiedia obscura and Peperomia fernandopoiana, Capsicum frutescens, Fagara tessmannii. In our continuous research on antibacterial plants, we designed the present work to investigate in vitro the antibacterial activity of the methanol extracts of the fruits, bark and leaves of Uapaca togoensis Pax. against MDR Gram-negative bacteria. The study was extended to the assessment of the antibacterial activity of compounds previously isolated from the most active extract. U. togoensis is a medicinal plant used in sub-Saharan Africa as an emetic preparation and a lotion for skin disorders. The leaves, fruits and bark are also used in Ivory Coast as a remedy for pneumonia, cough, fever, rheumatism, vomiting, epilepsy and bacterial diseases. This plant previously displayed cytotoxic effects against a filarial worm, Loa loa, and against cancer cell lines, as well as antibacterial activity against Streptococcus pneumoniae, Escherichia coli, Pseudomonas aeruginosa, Staphylococcus aureus, Enterococcus faecalis, Streptococcus pyogenes and Bacillus subtilis. To the best of our knowledge, the antibacterial evaluation of this plant is being reported here for the first time against MDR bacteria expressing active efflux pumps. The leaves, bark and fruits of U. togoensis were collected in April 2013 in Bangangté. The plant was identified by a specialist of the National Herbarium in Yaoundé, Cameroon, and compared with a voucher specimen formerly kept under the registration number 33630/HNC. Compounds previously isolated from the fruits of U. togoensis included β-amyryl acetate, 11-oxo-α-amyryl acetate, lupeol, pomolic acid, futokadsurin B, arborinine and 3-O-β-D-glucopyranosyl sitosterol. Their isolation and identification were previously reported. Compounds 1–5 and 7 were tested in the present study. Chloramphenicol (≥ 98%) was used as the reference antibiotic against Gram-negative bacteria. p-Iodonitrotetrazolium chloride (INT, ≥ 97%) was used as the microbial growth indicator. The studied microorganisms included sensitive and resistant strains of P. aeruginosa, Klebsiella pneumoniae, Enterobacter aerogenes, Enterobacter cloacae, E.
coli and Providencia stuartii. They were clinical strains and strains obtained from the American Type Culture Collection. Their bacterial features were previously reported. Nutrient agar was used for the activation of the tested Gram-negative bacteria, while Mueller Hinton Broth was used for the antibacterial assays. The MIC determinations on the tested bacteria were conducted using the rapid p-iodonitrotetrazolium chloride colorimetric assay according to described methods, with some modifications. The test samples and the reference antibiotic (RA) were first dissolved in DMSO/Mueller Hinton Broth or DMSO/7H9 broth. The final concentration of DMSO was lower than 2.5% and did not affect microbial growth. The solution obtained was then added to Mueller Hinton Broth and serially diluted twofold. One hundred microliters of inoculum (1.5 × 10⁶ CFU/mL) prepared in the appropriate broth was then added. The plates were covered with a sterile plate sealer, agitated to mix the contents of the wells using a plate shaker, and incubated at 37 °C for 18 h. The assay was repeated thrice. Wells containing adequate broth, 100 μL of inoculum and DMSO at a final concentration of 2.5% served as negative controls. The MIC of the samples was detected after 18 h of incubation at 37 °C, following addition of 0.2 mg/mL INT and incubation at 37 °C for 30 min. Viable bacteria reduced the yellow dye to a pink color. The MIC was defined as the lowest sample concentration that prevented the color change of the medium and exhibited complete inhibition of microbial growth. The minimal bactericidal concentration (MBC) was determined by adding 50 μL aliquots of the preparations that did not show any growth after incubation during the MIC assays to 150 μL of adequate broth. These preparations were incubated at 37 °C for 48 h. The MBC was regarded as the lowest concentration of sample that prevented the color change of the medium after addition of INT as mentioned above. Compounds tested in this study included four triterpenoids, namely β-amyryl acetate (1), 11-oxo-α-amyryl acetate (2), lupeol (3) and pomolic acid (4); one lignan, futokadsurin B (5); and one steroidal glucoside, 3-O-β-D-glucopyranosyl sitosterol (7). Their isolation and identification from the fruits of U. togoensis were previously reported. They showed very low cytotoxicity against normal AML12 hepatocytes. Also, the antibacterial activity of compound 6 was previously reported against MDR Gram-negative bacteria. This compound was therefore not tested again in the present study. Compounds 1–5 and 7, as well as the crude extracts from the fruits, leaves and bark of U. togoensis, were tested for their antimicrobial activities on a panel of bacterial strains, and the results are reported in Tables 2–3. As results of the MIC determinations, the crude extracts from the fruits, bark and leaves of U. togoensis inhibited the growth of 26/28, 24/28 and 24/28 tested Gram-negative bacteria, respectively, with MICs ranging from 8 to 1024 μg/mL. The established antibacterial drug chloramphenicol displayed MIC values ranging from 8 to 256 μg/mL on 25/28 tested bacteria. The best activity was obtained with the fruit crude extract, MIC values below 100 μg/mL being recorded on 7/28 tested bacteria. Furthermore, the lowest MIC value of 8 μg/mL was recorded with the fruit extract against P. stuartii PS2636. MIC values obtained with the fruit extract were lower than or equal to those of chloramphenicol against E. coli AG102, E. aerogenes EA294, EA27, EA3, K. pneumoniae KP63, P. stuartii PS2636, NAE16 and E.
cloacae ECCI69. Interestingly, the fruit extract previously showed higher cytotoxic activities against cancer cell lines than the bark and leaf extracts. Consequently, compounds 1–5 and 7, previously isolated from the fruits of this plant, were also tested for their antibacterial activity against 10 bacterial strains including MDR phenotypes. Results summarized in Table 3 indicated that MIC values were above 256 μg/mL for compounds 2, 5 and 7 against all 10 tested bacteria, whereas compounds 1 and 3 were selectively active. Compound 4 inhibited the growth of 100% of the ten tested bacteria, with MICs ranging from 32 to 256 μg/mL. MBC values were also recorded with compound 4 against 6/10 of the tested bacteria. Structures of compounds 1, 2 and 4 are related, and it was noted that the carboxylic acid function at C-28 and presumably the OH at C-19 are factors responsible for the improvement in potency of pentacyclic triterpenes. Similar results were previously reported for structurally close skeletons. The antibacterial activity of a crude plant extract has been defined as significant when MIC is below 100 μg/mL, moderate when 100 μg/mL < MIC < 625 μg/mL, or low when MIC > 625 μg/mL. The corresponding threshold values for compounds are: significant when MIC is below 10 μg/mL, moderate when 10 μg/mL < MIC < 100 μg/mL, and low when MIC > 100 μg/mL. Therefore, the extract from the fruits of U. togoensis could be considered a promising herbal drug, as MIC values below 100 μg/mL were obtained against 7/28 of the tested bacteria. Compound 4 can be considered a moderate antibacterial agent, as MIC values between 10 and 100 μg/mL were obtained against 6/10 tested bacteria. However, MIC values obtained with compound 4 were equal to those of chloramphenicol against E. coli AG100Atet, E. aerogenes CM64, K. pneumoniae KP55, P. aeruginosa PA124 and P. stuartii PS2636, highlighting the antibacterial activity of this compound against MDR Gram-negative bacteria. Data in Table 2 indicated that many MBC/MIC ratios for the crude extracts were above 4, suggesting bacteriostatic effects on several Gram-negative bacteria. However, a closer look at the MICs and MBCs of compound 4 indicated that it rather exerted bactericidal effects on 60% of the tested bacteria. It is worth highlighting that the triterpenoid identified as pomolic acid was in general as active as the crude extract of the fruits from which it was isolated. Also, the acridone alkaloid arborinine, isolated from Oricia suaveolens Engl., previously displayed low activity against E. coli AG100 and E. aerogenes EA27, E. coli AG102 and P. stuartii ATCC29916. These data suggest that the secondary metabolites of the fruits of U. togoensis may interact synergistically to produce the observed effects. Regarding the involvement of MDR bacteria in treatment failures and the re-emergence of infectious diseases, the antibacterial activity of the crude extracts, mostly that of the fruit extract, as well as that of compound 4, could be considered promising. P. aeruginosa is an important nosocomial pathogen highly resistant to clinically used antibiotics, causing a wide spectrum of infections and leading to substantial morbidity and mortality. MDR Enterobacteriaceae, including K. pneumoniae, E. aerogenes, E. cloacae, P. stuartii and E.
coli have also been classified as antimicrobial-resistant organisms of concern in healthcare facilities. To the best of our knowledge, the antibacterial effect of the most active compound, pomolic acid, against MDR bacteria is being reported for the first time. However, this plant previously displayed cytotoxic effects against a filarial worm, L. loa, and antibacterial activity against S. pneumoniae, E. coli, P. aeruginosa, S. aureus, E. faecalis, S. pyogenes and B. subtilis. Data reported herein therefore provide additional information on the potential of the various parts of this plant to combat MDR bacteria and identify pomolic acid as the main active antibacterial constituent of the fruit extract. The results of the present study are important, taking into account the implication of the studied microorganisms in therapeutic failure. These data indicate that the crude extracts from U. togoensis, as well as some of its constituents, and mostly pomolic acid, should be explored further to develop potential antibacterial drugs to fight MDR bacterial infections. The following are the supplementary data related to this article. Chemical structures of the compounds from the fruits of Uapaca togoensis: β-amyryl acetate; 11-oxo-α-amyryl acetate; lupeol; pomolic acid; futokadsurin B; arborinine; 3-O-β-D-glucopyranosyl sitosterol. Supplementary data to this article can be found online at http://dx.doi.org/10.1016/j.sajb.2015.08.014. The authors declare that there are no conflicts of interest.
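Two numerical conventions in the assays above are easy to make concrete: the twofold serial dilution that generates the tested concentration range (here 8–1024 μg/mL), and the MBC/MIC ratio used to call an effect bacteriostatic (ratio above 4) or bactericidal (ratio of 4 or below). The Python sketch below illustrates both; the example MIC/MBC pairs are hypothetical and are not taken from Tables 2–3.

def twofold_dilution_series(top_conc_ug_ml, n_wells):
    # Concentrations (in ug/mL) obtained by serial twofold dilution from a starting concentration.
    return [top_conc_ug_ml / (2 ** i) for i in range(n_wells)]

def classify_activity(mic, mbc):
    # Bactericidal if MBC/MIC <= 4, otherwise bacteriostatic (the convention used in the text).
    if mic is None or mbc is None:
        return "not determined"
    return "bactericidal" if (mbc / mic) <= 4 else "bacteriostatic"

# A plate serially diluted twofold from 1024 ug/mL over 8 wells spans the reported MIC range.
print(twofold_dilution_series(1024, 8))  # [1024.0, 512.0, 256.0, 128.0, 64.0, 32.0, 16.0, 8.0]

# Hypothetical MIC/MBC pairs (ug/mL); placeholders only, not values from the study.
samples = {"extract sample A": (8, 64), "compound sample B": (32, 64)}
for name, (mic, mbc) in samples.items():
    print(name, "->", classify_activity(mic, mbc))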
Uapaca togoensis is a medicinal plant used in sub-Saharan Africa to treat various ailments including bacterial infections. In the present study, the methanol extracts from the fruits (UTF), bark (UTB) and leaves (UTL) of this plant as well as six compounds previously isolated from UTF were tested for their antimicrobial activities against a panel of Gram-negative bacteria including multidrug resistant (MDR) phenotypes.As results of the minimal inhibitory concentration (MIC) determinations, the fruit extract displayed the best activity, the MIC values below 100. μg/mL being recorded on 25.0% of the 28 tested bacteria. The lowest MIC value of 8. μg/mL with this extract against Providencia stuartii PS2636. Pomolic acid (4) inhibited the growth of 100% of the tested bacteria with MICs ranged from 32 to 256 μg/mL. The present study demonstrates that U. togoensis can be explored more for the development of herbal drugs to tackle MDR bacterial infections. Compound 4 is the main antibacterial constituent of the fruit of the plant.
724
Absence of Neuronal Response Modulation with Familiarity in Perirhinal Cortex
Many studies have provided evidence for a role for the medial temporal lobe (MTL) in familiarity memory, a form of recognition that signals whether a stimulus has been previously encountered. In particular, lesion studies in animals have indicated a major role for the perirhinal cortex (PRH), an area in the MTL, as necessary for object novelty memory. Moreover, studies in humans with lesions to the PRH have confirmed the importance of this region for recognition memory. Indeed, experiments carried out mainly in monkeys have identified a population of 'familiarity neurons' within the PRH that respond to a visual stimulus by either decreasing or increasing their firing rate. In all studies investigating neural changes in PRH activity, the animals were familiarized with an object for extensive periods of time before neuronal recordings took place. For example, familiar objects were shown to rats every day for at least 5 days prior to the electrical recording. In most behavioral studies investigating the effects of PRH dysfunction on recognition memory, habituation to the sample object occurs over a relatively shorter period of time. One aim of the current study was therefore to characterize changes in the primary visual cortex (V1) and the PRH following relatively short periods of exposure to visually presented cues. While lesion studies have consistently highlighted a role for the PRH in object novelty/familiarity discriminations, other evidence has suggested this cortical region plays a more significant role in object processing when stimuli have overlapping features. A second aim of the current study, therefore, was to characterize V1 and PRH neural activity using simple gratings and more complex images of everyday objects. We used head-restrained animals in all conditions to minimize the impact of exploratory or motivational factors on V1 or PRH responses to passively presented visual stimulation. C57BL/6N mice, sourced from Charles River, were bred and maintained in-house on a C57/B6 background. The animals were kept on a normal 12:12-h light cycle, with lights on at 08:00, and were given access to food and water ad libitum. The housing room had a temperature of 19–21 °C and a relative humidity of 45–65%. Both female and male mice between the ages of 10 and 16 weeks were used for the experiments. General anesthesia was induced in an induction box with a delivery of 4% isoflurane in 2 L/min 100% O2. The animal was then transferred to a stereotaxic frame where it received 3% isoflurane, which was gradually reduced to 2–1.5% during the course of the surgery, while ensuring that the animal remained anesthetized and maintained a stable breathing pattern. The depth of anesthesia was gauged during the surgery by checking the hind paw withdrawal and tail pinch reflexes. The temperature of the animal was monitored and maintained at 37 °C with a homeothermic heat blanket. The animal's head was shaved using electric clippers. Then, the skin was disinfected with a povidone-iodine solution to maintain a sterile surgical area. A paraffin-based eye lubricant was applied to both eyes. Then, an incision was made to the scalp from the back of the skull to between the eyes using surgical scissors. The connective tissue covering the skull was carefully removed using sterile surgical swabs. Bregma and lambda were then identified as the intersections between the vertical suture and the front horizontal and posterior horizontal sutures, respectively, and their stereotaxic coordinates were measured using a needle held by a stereotaxic
manipulator arm.Then, the mice were implanted with electrodes in the areas of interest.For LFP acquisition, two depth electrodes were implanted, one in the visual cortex, and one in the perirhinal cortex.A ground/reference screw was placed above the frontal sinus.For unit recordings, a silicone probe was mounted onto a mini-drive and was implanted in the PRH.Then, postoperatively the probe was slowly lowered into the recording area.The implantation sight was in a radius of about 100 µm around the intended implantation area, depending on brain vasculature.Two screws placed above the cerebellum were used as ground and reference.After surgery, any loose skin flaps were sutured using braided 0.12-mm silk sutures.The wound area was then washed with saline an antiseptic powder was applied around the incision site.The anesthetic flow was then ceased and the animal left to breathe pure oxygen for a few seconds, until it regained its pinch reflex.Then, the animal was carefully removed from the stereotaxic frame and allowed to recover under heating light until it regained its righting reflex.It was moved back to the holding room.Animals were given a week to recover before any experimental procedure took place.After implantation, rest and habituation, the animals were placed on linear treadmill, where they were head-restrained and free to run as previously described, while recording electrical activity from PRH and/or primary visual cortex.The sessions were 20 minutes long and comprised of presentation of visual stimuli on the screen to the left of the mouse.The stimuli were presented for one second with one-second inter-stimulus interval.All the sessions were comprised of the presentation of 500 stimuli.The stimuli were horizontal and vertical gratings or full-sized black and white pictures of different objects.The contrast and frequency of the gratings was chosen as the one eliciting the strongest response in previous studies.Each trial consisted of 2 stages.At the first stage a stimulus, referred to as the ‘control’ stimulus – either a stationary grating or a picture – was presented 500 times.After a retention interval of either 2 min or 24 h, at the second stage the stimulus from the first stage, now designated the ‘familiar’ stimulus, was presented 250 times, interleaved with a novel stimulus.Under conditions in which pictures were used, another test consisted of a slightly different second stage, where the familiar stimulus was presented 250 times interleaved with 50 cases of different novel pictures.For the 2-minute retention period, the mouse stayed in the apparatus, with the screen turned on but without any stimulus.For the 24-hour retention interval, the mouse was returned to its home cage.During the inter-stimulus interval, the screen was a uniform and constant light gray color.Object images were drawn from a standardized image bank Natural images were taken from a free stock photo website.Care was taken that images were not too similar when they were used for the same task, in terms of general contour and texture patterns.The images were resized to fit the entire presentation screen.A custom-made automatic script was used to find the evoked potentials in both V1 and the PRH.All results were later verified visually.The average signal for all the trials in the different cases was averaged for each animal.For V1, the most prominent trough was identified.The time of this trough relative to presentation onset was defined as the latency and the amplitude of the evoked potential was defined as 
the difference in amplitude between this trough and the peak directly preceding it.In the PRH, the first prominent peak was identified.The latency of this peak relative to stimulus onset was defined as the evoked-potential latency and its amplitude was defined as the difference between this peak's amplitude and the trough immediately preceding it.Movement was recorded by a motion detector attached to the wheel on which the animal was placed.The movement recorded was the angular rotation of the wheel.To obtain an index of locomotor changes related to visual presentations, the movement that occurred within 1 s of stimulus presentation was divided by the activity in the 1-s bin before the presentation for each stimulus.Since work in humans has shown that event-related potentials are modulated by familiarity, we performed both ERP and single-unit recordings.Mice were familiarized with a stimulus by presenting it 500 times, with both the presentation time and the interval between successive stimuli set to 1 s.After a retention interval of 2 min or 24 h, 250 presentations of either the familiar or a novel stimulus were interleaved.The visual stimuli were either simple gratings or natural images of objects.Neuronal responses were recorded during the two presentations.To determine whether the ERPs were modulated by familiarity, we compared the amplitude and latency of the ERPs of the first 250 presentations of a stimulus with the following 250 presentations of the same stimulus and with the presentations of the familiar and novel stimuli after the retention interval.For multi-unit recordings, we compared the firing rate before and during stimulus presentation under the different conditions described above.As expected, ERPs were present in V1, indicating that both the gratings and the complex object pictures elicited neural activity in the early visual system.Both gratings and complex pictures also evoked a robust ERP in the PRH.We next tested for the emergence of familiarity/novelty-related differences in ERPs.We found no evidence for a difference in neural responses to familiar/novel stimuli in the PRH, either in the amplitude of the grating ERP (F = 2.11, p = 0.14, n = 10; ANOVA) or in its latency (F = 0.81, p = 0.49, n = 10).Similarly, no change in these parameters was observed when animals were exposed to natural images (amplitude: F = 0.66, p = 0.58, n = 12; latency: F = 1.28, p = 0.29, n = 12).In all cases, the mouse did not show any change in motor activity during the novel stimulus with either gratings (F = 1.45, p = 0.25, n = 10) or pictures (F = 1.26, p = 0.30, n = 12).The absence of a reliable change in motor activity or neural activity in response to novelty might suggest that the stimuli were either not processed effectively by the animal or that the item designated as ‘novel’ became ‘familiar’ very rapidly during the procedure.We therefore increased stimulus ‘novelty’ during the test stage by randomly presenting 5 novel objects on the second trial.Under these conditions, a familiarity effect was observed in V1, whereby the ERP elicited by novel stimuli was smaller in amplitude than those elicited by the familiar stimuli and the Control 2 stimuli (F = 5.28, p < 0.01, n = 13; Fig. 1H, I).In contrast, no change was detected in the latency (F = 0.07, p = 0.81, n = 13; Fig. 1H, J).
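For illustration, the ERP measurements and the locomotor index described above can be summarized as a minimal analysis sketch; the array layout (single-trial traces aligned to stimulus onset), the sampling rate and the post-onset search window are assumptions made for the example and are not specified in the text.

import numpy as np

def erp_features(trials, fs, search_window=(0.02, 0.30), polarity="trough"):
    """Average single-trial traces and extract ERP latency and amplitude.

    trials   : (n_trials, n_samples) array, each trace aligned to stimulus onset.
    fs       : sampling rate in Hz (assumed known).
    polarity : "trough" for the V1 convention (most prominent trough measured
               against the preceding peak) or "peak" for the PRH convention
               (first prominent peak measured against the preceding trough).
    """
    erp = trials.mean(axis=0)                      # grand-average ERP
    lo, hi = (int(t * fs) for t in search_window)  # restrict to a post-onset window
    seg = erp[lo:hi]
    if polarity == "trough":
        idx = lo + np.argmin(seg)                  # most prominent trough
        ref = np.max(erp[lo:idx + 1])              # maximum preceding the trough
        amplitude = ref - erp[idx]
    else:
        idx = lo + np.argmax(seg)                  # first prominent peak
        ref = np.min(erp[lo:idx + 1])              # minimum preceding the peak
        amplitude = erp[idx] - ref
    latency = idx / fs                             # seconds from stimulus onset
    return latency, amplitude

def movement_modulation(wheel_speed, onsets, fs):
    """Ratio of wheel movement in the 1 s after vs. the 1 s before each stimulus,
    with onsets given as sample indices into the wheel-speed trace."""
    win = int(1.0 * fs)
    post = np.array([wheel_speed[t:t + win].sum() for t in onsets])
    pre = np.array([wheel_speed[t - win:t].sum() for t in onsets])
    return post / np.maximum(pre, 1e-9)            # guard against division by zero

A modulation ratio near 1 therefore corresponds to no stimulus-related change in locomotion, which is the pattern reported below.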
Despite stimulus novelty-related changes in V1, there was, nevertheless, no change in ERP amplitude (F = 1.79, p = 0.17, n = 13) or latency (F = 0.43, p = 0.70, n = 13) in the PRH.Again, there was no difference in movement in response to the different stimulus categories (F = 2.04, p = 0.125, n = 13).Following damage to the PRH, rats show deficits in the NOE task only for intervals greater than approximately 15 min.This observation suggests that the PRH response to novelty/familiarity may be influenced by a long retention interval.Therefore, to determine whether familiarity responses emerged with a retention interval, the same tests were repeated after a 24-h delay.Despite this longer interval, there was no change in the PRH ERP in response to familiar and novel gratings (amplitude: F = 0.71, p = 0.93, n = 15; latency: F = 0.32, p = 0.81, n = 15), complex pictures (amplitude: F = 1.23, p = 0.31, n = 10; latency: F = 0.08, p = 0.92, n = 10) or 5 novel complex pictures (amplitude: F = 1.81, p = 0.16, n = 15; latency: F = 0.66, p = 0.53, n = 15).Interestingly, similarly to the short-delay experiments, in V1 the 5 novel natural images evoked a smaller ERP than the Control 2 stimuli (F = 4.73, p < 0.01, n = 18; Fig. 1K, L), while no changes were observed in their latency (F = 0.95, p = 0.37, n = 18; Fig. 1K, M).In all cases, there was no change in motor activity (F = 1.95, p = 0.13, n = 15; F = 1.34, p = 0.27, n = 10; F = 0.76, p = 0.52, n = 18).Since previous studies have reported the presence of a subpopulation of ‘familiarity’ neurons in the PRH, it could be that the lack of changes in ERP observed in the present study resulted from the inability of our stimuli to engage a large enough neuronal ensemble to affect the ERP, or that subpopulations may have their activity modulated in opposing directions.Consequently, we next recorded simultaneously from many individual PRH neurons using a silicon probe, while the mouse was presented with various visual stimuli.Overall, 218 units in the PRH were isolated using klusta-kwik from the gratings, pictures and 5 novel pictures conditions.On average, 19.2 ± 2.7% of the recorded neurons showed stimulus-related modulation of their firing in the PRH.The remaining neurons showed no change in their firing rate in response to any stimulus.Averaged across all sessions, 68 ± 15% of the responsive PRH neurons increased their firing rate during stimulus presentation, while the others decreased their firing rate.There was no difference in the firing rate prior to stimulus presentation among the non-responsive (NR), visually excited (VE) and visually inhibited (VI) neuronal populations (F = 1.142, p = 0.32; NR: 170, VE: 26, VI: 11).Interestingly, the response latency of the VE neurons was shorter than that of the VI neurons (test statistic = 3.375, p < 0.01).Importantly, none of the neurons showed familiarity-induced modulation in their response to stimuli.The present study showed that both ERPs and single-neuron responses in the mouse PRH were not modulated by stimulus familiarity when animals were passively exposed to simple gratings or more complex visual images.One important difference between the current and previous studies that noted familiarity-related changes in the PRH is the amount of exposure to the familiar cues.In this study, and in most NOE studies, the animal is typically exposed to the familiar stimulus over a relatively brief period.In previous electrophysiological experiments where a familiarity-modulated response in the PRH was observed, the animal was exposed to the stimulus over days prior to testing.Thus, it might be that the familiarity response reported in previous studies reflected extended exposure to a familiar
stimulus.However, previous work has found that repeated exposure to a stimulus modulates both the ERP and multi-unit activity in V1.Similarly, in our experiments we have shown, that ERPs in V1 but not the PRH were modulated by familiarity, under some conditions.Thus, although there was evidence of familiarity-related changes in V1 in the current study, there were no changes observed in the PRH.Previous work using c-Fos as an indirect measure of neural activity has revealed increased expression of protein in the PRH when rodents were exposed to novel objects, but not when familiar objects were presented in novel locations.This evidence clearly suggests that the PRH is involved in some aspect of novelty processing.However, our own study suggests that this is not the case with passively exposed visual cues.Object-based recognition memory procedures differ from the current study in several ways.Perhaps one of the most important is the fact that object novelty paradigms involve an active process in which the animal samples the cue not only with the visual senses but also through other senses, such as olfactory and tactile information.It remains possible that the PRH is involved in familiarity/novelty discriminations but predominantly in situations involving an integrated multi-sensory representation of cues.On the other hand, other evidence has shown that lesions of the PRH caused disruption of recognition memory only whenever visual cues were available but not when olfactory or tactile information was available.This evidence suggests that the PRH is primarily involved in novelty/familiarity discriminations based on visual information.The absence of modulation of PRH activity when using passively presented visual cues is thus surprising; although not without precedent.One other important difference between the current method and object recognition paradigms is the opportunity in the latter to explore/sample different visual properties of an object.Although speculative, perhaps exploration of an object provides an opportunity to integrate visual information about an object from different perspectives, thereby minimizing interference between objects.The PRH may contribute to this higher level integrative process and the patterns of stimulation used in the present experiment may not have been sufficiently complex to engage this putative process.Although it is worth noting that we did vary stimulus complexity using gratings and more complex images of real-world objects, this did not reveal evidence of familiarity/novelty responses in the PRH.Finally, one other way in which the current study differs from standard tests of object familiarity in rodents is in the discrimination between novel and familiar cues presented concurrently on a trial.The comparison between familiar and novel cues may be an important component of the PRH neural response.Indeed, evidence has shown that while rats with lesions of the PRH were unable to perform simultaneous object novelty/familiarity discriminations, the same animals were able to perform a similar, successive, object novelty task.In the latter condition, familiar or novel objects were presented separately and successively on test trials, as in the present study.Further work is clearly required to investigate the conditions under which the PRH is engaged by familiarity v novelty comparisons at the neural level.In conclusion, the results of the present study are important in showing that neural activity in PRH cortex was not modulated by the familiarity/novelty of 
visual cues – despite changes in activity in V1.These results confirm and extend other evidence that PRH activity does not reflect a simple familiarity/novelty code but may reflect more complex processes contributing to the integration of visual information and/or assigning a familiarity/novelty signal to a cue in a simultaneous visual discrimination.
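For illustration, the group comparisons and the unit classification reported in the Results above can be sketched as follows; the one-way ANOVA across per-animal values follows the statistics quoted in the text, whereas the paired Wilcoxon test used to label units as responsive is an assumed choice, since the exact criterion is not stated.

import numpy as np
from scipy import stats

def compare_erp_amplitudes(control, familiar, novel):
    """One-way ANOVA on per-animal ERP amplitudes across the Control, Familiar
    and Novel stimulus conditions (returns the F statistic and p value)."""
    f_val, p_val = stats.f_oneway(control, familiar, novel)
    return f_val, p_val

def classify_unit(pre_counts, stim_counts, alpha=0.05):
    """Label a unit as visually excited (VE), visually inhibited (VI) or
    non-responsive (NR) by comparing per-trial spike counts in the 1-s windows
    before and during stimulus presentation. The Wilcoxon signed-rank test is an
    assumed criterion for this sketch; it raises an error if all paired
    differences are zero."""
    _, p = stats.wilcoxon(pre_counts, stim_counts)
    if p >= alpha:
        return "NR"
    return "VE" if np.mean(stim_counts) > np.mean(pre_counts) else "VI"

Applying such a per-unit test across all isolated units yields the proportions of responsive and non-responsive neurons of the kind reported above.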
The perirhinal cortex (PRH) is considered a crucial cortical area for familiarity memory and electrophysiological studies have reported the presence of visual familiarity encoding neurons in PRH. However, recent evidence has questioned the existence of these neurons. Here, we used a visual task in which head-restrained mice were passively exposed to oriented gratings or natural images. Evoked potentials and single-unit recordings showed evoked responses to novelty in V1 under some conditions. However, the PRH showed no response modulation with respect to familiarity under a variety of different conditions or retention delays. These results indicate that the PRH does not contribute to familiarity/novelty encoding using passively exposed visual stimuli.
725
Vaccination against the human cytomegalovirus
The human cytomegalovirus, here abbreviated as CMV, is perhaps the most ubiquitous of human infections.Although better hygiene and lesser close contact between children and adults have decreased prevalence of CMV in developed countries, virtually 100% of adults in low- and middle-income countries have been infected when young.CMV infects T cells and modifies their responses.It is suspected in contributing to arteriosclerosis and immunosenescence and may promote cancers through an oncomodulatory effect.However, its principal medical importance is as the most common congenital infection throughout the world, causing most commonly hearing loss but also in some cases microcephaly, mental retardation, hepatosplenomegaly and thrombocytopenic purpura.As a generalization, between 1 in 200 and 1 in 30 newborns are infected by CMV transmitted from the mother.The seriousness of the infection in the fetus depends on whether the mother is seropositive or seronegative for CMV.Infections in seronegative pregnant women transmitted to the fetus carry the worst prognosis, but fetuses infected by seropositive mothers may also suffer serious consequences .In addition, CMV is the most common infection complicating transplantation.Solid organ transplant patients who receive a transplant from a seropositive donor may suffer disease and seropositive hematogenous stem cell recipients may reactivate CMV due to immunosuppression .These infections may result in serious disease and rejection of the transplant.In this article we review efforts to prevent CMV infections and their consequences in susceptible populations through immunization.Such efforts have been pursued for almost 50 years and today there is a wide range of candidate vaccines.Cytomegalovirus is the most frequent cause of congenital infection and an important cause of non-hereditary hearing loss and neurodevelopmental disabilities in U.S. 
and northern Europe .An accumulation of data over the past decade from resource-limited settings also show that congenital CMV infection is also a significant cause of neurologic morbidity in those populations .However, the awareness of this important fetal infection remains low despite the fact that in the U.S., the number of children who suffer long-term sequelae from congenital CMV approaches the disease burden from well-known childhood conditions such as Down syndrome and fetal alcohol syndrome and far exceeds that caused by other infectious diseases including pediatric HIV/AIDS or invasive Haemophilus influenzae type b infection prior to the introduction of vaccination .It is estimated that between 20,000 and 30,000 infants are born each year with congenital CMV infection in the U.S .Approximately 10% to 15% of congenitally infected infants exhibit clinical abnormalities at birth and these findings include petechial and purpuric rash, hepatomegaly, splenomegaly, jaundice with conjugated hyperbilirubinemia, microcephaly, seizures and chorioretinitis .In contrast to the involvement of the central nervous system, the hepatobiliary and hematologic abnormalities resolve spontaneously.The vast majority of infected infants have no detectable clinical abnormalities at birth and therefore, are not identified early in life .Most infants with clinically apparent or symptomatic congenital CMV infection and approximately 10–15% of those with subclinical or asymptomatic infection develop long-term sequelae .Sensorineural hearing loss is the most common sequela of congenital CMV infection and is seen in about half of symptomatic infants and in about 10–15% of children with asymptomatic congenital CMV infection .Other sequelae seen mostly in children with symptomatic infection include cognitive and motor deficits, vision loss and seizures .The diagnosis of congenital CMV infection is confirmed by demonstrating the presence of infectious virus, viral antigens, or viral DNA in saliva or urine from infected infants .PCR-based assays for the detection of CMV DNA in saliva or urine from neonates are now considered the standard diagnostic methods to confirm congenital CMV infection .The absence of clinical findings at birth coupled with the difficulty in confirming congenital CMV diagnosis retrospectively inasmuch as positive CMV testing of specimens from infants after 3 weeks of age can be the result of postnatally acquired CMV infection are important contributors to the underestimation of the disease burden caused by congenital CMV infection.A number of studies in populations in resource-limited settings over the past decade have demonstrated that congenital CMV infection is an important cause of childhood morbidity .As a higher prevalence of congenital CMV infection is seen in populations with high and often near-universal seroimmunity, a substantial number of infants born in the LMICs are CMV-infected.Based on an estimated 1% birth prevalence of congenital CMV infection, approximately 250,000 and 35,000 infected babies are born annually in India and Brazil, respectively.Of those, about 10–15% develop permanent sequelae.The natural history of congenital CMV infection in LMIC has not been well defined except for the data from prospective newborn screening studies in Brazil.These studies demonstrate that the prevalence of CMV-associated hearing loss is similar to that seen in the U.S. 
and northern Europe .The frequency and severity of other neurodevelopmental sequelae in these populations have not been well characterized.An important determinant of congenital CMV infection is the prevalence of maternal CMV seropositivity in the population .The prevalence of congenital CMV infection is directly proportional to the maternal seroprevalence such that higher rates of congenital CMV infection are consistently observed in populations with high maternal seroimmunity, which may in part reflect higher exposure.This is unlike rubella and toxoplasmosis where primary infection during pregnancy accounts for most vertically transmitted infections .Even within a geographic region, CMV seroimmunity varies among women from different racial, ethnic and socioeconomic backgrounds, translating into distinct epidemiologic patterns of congenital infection.Studies in the U.S. have documented that young maternal age and African American race are independent risk factors for delivering an infant with congenital CMV infection .A recent study from France also reported that young maternal age is a risk factor for having an infant with congenital CMV infection .Although it has been known since the 1980s that congenital CMV infection can occur in children born to mothers who are CMV-infected prior to pregnancy, the relative contributions of primary and non-primary maternal infection to congenital CMV infection and CMV-associated hearing loss and other neurologic sequelae have been recognized only recently.A systematic review and modeling of the data in the U.S. suggested that about two-thirds to three-quarters of all congenital CMV infections occur in children born to women with non-primary maternal infection .However, data from prospective studies in the U.S. to confirm these predictions are not available.Thus, it can be assumed that at least half of congenitally infected infants in high income countries are born to women with preexisting seroimmunity and in populations with high seroprevalence such as low-income minority women in the U.S. 
and the vast majority of women in LMIC, most infected infants are born to women with non-primary CMV infection.A recent large newborn CMV screening study at two different maternity units in Paris, France where the overall maternal seroprevalence was 61% showed that about half of all CMV-infected babies were born to women with non-primary CMV infection during pregnancy .The study also reported that a similar proportion of infected infants were symptomatic in both primary and non-primary groups.In a subset of 2378 women at one of the maternity units, prenatal sera collected between 11 and 14 weeks gestation were tested for CMV antibodies.The rate of congenital infection was fourfold higher in infants born to originally seronegative mothers compared to women seropositive before pregnancy.A retrospective study of mothers with non-primary infection during pregnancy in Italy showed that 3.4% of newborns were infected in utero .Studies in France , Italy the United States and Brazil have shown that intrauterine transmission of CMV is considerably lower in women with preexisting immunity compared to those with primary infection during pregnancy.This conclusion is derived from comparisons of the rates of fetal infection after primary infection with the rates of fetal infection from previously seropositive women.Thus, maternal immunity provides considerable protection against transmission to the fetus.Thus, maternal immunity is partly protective against fetal infection.The risk factors for acquiring CMV during pregnancy in seronegative women include increased exposures to CMV such as direct care of young children, sexually transmitted infections and other indices of sexual activity .Although the risk factors for non-primary maternal CMV infection have not been defined, it is likely that similar to primary maternal infection, increased exposure to other individuals excreting CMV is associated with non-primary infection.While the mechanisms have not been defined, reactivation of endogenous virus or reinfection with a new virus strain have been suggested as possible virus sources leading to intrauterine transmission of CMV in non-primary infections .Recent studies have demonstrated that exposure to a new strain of virus can lead to reinfection of seropositive women, intrauterine transmission and congenital infection .The characteristics of antiviral immune responses that provide protection against intrauterine transmission are also not well understood.In women with primary CMV infection during pregnancy, lower neutralizing antibody levels, slow development of antibody to the viral pentamer proteins, and a slow increase in IgG avidity have been associated with fetal infection .A lag in the development of CMV-specific CD4+ and CD8+ T-cell responses was also observed in women with primary maternal CMV infection who transmitted virus to their infants.However, no differences in the duration of viremia and the peak viral load were observed between women with and without intrauterine transmission.CMV infection is the most common infectious complication of transplantation, both solid organ and hematogenous stem cell.In the case of solid organ transplantation, including kidney, liver, lung and other, the most dangerous situation is when a CMV seronegative recipient receives an organ from a CMV seropositive donor .In that situation CMV infection is almost certain, and disease is common.In the case of kidney transplants, without antiviral prophylaxis, about one third of seronegative recipients of a kidney from a 
seropositive donor will have CMV disease.Interestingly, even seropositive recipients may have CMV disease when transplanted with an organ from a seropositive donor.Inasmuch as seropositive recipients who receive an organ from a seronegative donor have much less CMV disease, it suggests that the problem for seropositive recipients is superinfection with a new strain under the influence of immunosuppression, rather than reactivation.The situation after hematogenous stem cell transplant is different.In that case it is reactivation under the influence of immunosuppression that seems to be the danger.Inasmuch as latency of CMV occurs not only in circulating T cells but also in lymph nodes, endothelial cells, macrophages and other sites, it is not surprising that reactivation is a problem .Antiviral prophylaxis and/or treatment are practiced routinely in transplant centers to prevent serious CMV disease and has had considerable but not complete success.As discussed in the section on vaccines below, vaccination has had some early success in reducing the severity of CMV disease and definitive trials are underway to determine if such approaches are sufficiently efficacious.It appears that antibodies are necessary to prevent acquisition and spread of CMV by seronegatives, but T cells responses are needed to suppress reactivation of the virus in seropositives.The development of CMV vaccines began in the 1970s soon after the toll of the virus on infants in utero and transplant recipients became obvious.Two vaccine strains were attenuated starting with viruses that had been isolated for laboratory work: AD-169 and Towne .The AD169 attenuated strain was soon abandoned, but the Towne attenuated strain went on to extensive testing in solid organ transplant recipients and normal male and female volunteers .Recipients of kidney transplants who were administered the Towne attenuated strain virus were shown to be highly protected against serious CMV disease and rejection of the graft.Protection against infection, however, was not statistically significant.The investigational Towne strain vaccine could protect humans against a challenge with unattenuated CMV, but naturally acquired immunity protected against a higher dose challenge than did the vaccine .Also, the attenuated strain failed to prevent natural acquisition of CMV by women exposed to children in day care .The reason for the latter failure is unknown.The next important development was the purification of a surface protein of CMV called glycoprotein B, or gB, because of homology with a glycoprotein of other herpesviruses.When combined with the MF59 oil-in-water adjuvant, good levels of neutralizing antibodies were produced in humans after three injections over a six-month period .This regimen was tested twice in comparison with placebo in young women naturally exposed to CMV, and in both cases there was moderate reduction in acquisition, but antibodies and efficacy faded quickly.A booster injection did restore antibody levels.In addition, when the subunit gB protein was combined with the AS01 adjuvant that stimulates toll-like receptor 4, higher and more prolonged levels of anti-gB antibodies were elicited in humans, but that adjuvanted vaccine was never tested for efficacy.Significantly, the investigational subunit gB vaccine gave remarkable protection against CMV disease in solid organ transplant patients, suggesting the importance of antibodies in that situation .The fact that gB is a trimeric fusion protein suggests the possibility that a more 
immunogenic prefusion form may exist, but this has not yet been demonstrated.In the year 2000 an important event took place: the publication of a vaccine priority document by the Institute of Medicine of the United States .CMV was placed in its highest priority for vaccine development.This event strongly stimulated vaccine manufacturers and biotechnology companies to work in this field.Another event that has proved important is the discovery by researchers at Princeton University that a pentameric complex of proteins was present on the surface of CMV and that this structure, consisting of glycoprotein H, glycoprotein L, and the products of genes UL128, 130 and 131, elicited far more neutralizing antibodies than gB .Parallel work done by a team at the University of Pavia in Italy showed that in pregnant women infected by CMV a rapid response to the pentameric complex was associated with protection against transmission to the fetus .This discovery has since driven much of the vaccine field.Table 3 lists both live and inactivated candidate vaccines against CMV.An attempt was made to increase the immunogenicity of the Towne attenuated virus by making recombinants with the Toledo low passage “wild” CMV.Four recombinants were tested in small numbers of humans and one turned out to be suitably immunogenic .However, another attractive approach that in principle combines safety with immunogenicity is a replication-defective virus.This candidate is made in cell culture using a CMV with two proteins rendered potentially unstable by chemical combination but stabilized by a chemical called Shld 1.On injection into humans in the absence of Shld 1, the virus cannot form infectious particles but does express immunogenic proteins.In phase 1 trials the replication-defective virus gave good immune responses .A number of vaccine candidates are based on vectored genes of CMV, in particular gB and the tegument phosphoprotein 65.Immunogenicity has been demonstrated, and safety in some cases.In principle they should be protective in transplant patients and perhaps in seronegative normal subjects .Inactivated candidate vaccines are also listed in Table 3.Aside from the investigational gB subunit vaccines mentioned above, peptides, DNA and mRNA vaccines are significant candidates .DNA plasmids coding for pp65 and gB have shown preliminary evidence of efficacy in transplant recipients .In addition, a virus-like particle with gB on the surface has shown surprisingly high induction of neutralizing antibodies in animals, pp65-derived peptides combined with a tetanus toxin epitope have been immunogenic in man, and so-called dense bodies harvested from cell cultures of CMV contain all of the viral antigens .Thus, there is no lack of candidate CMV vaccines for use in humans.For prevention of infections in all forms of transplantation, induction of both antibody and T-cell responses are essential.At this stage of our knowledge it appears that both gB and pentamer should be included in vaccines designed to prevent fetal CMV infection and/or disease.There are several unanswered questions about the feasibility of a CMV vaccine, but there are also some clear answers.CMV is acquired by contact with saliva, sexual secretions and transplantation.In principle, the populations that could benefit from protection against CMV are four: seronegative women of child-bearing age, seropositive women of child-bearing age, recipients of solid organs donated by CMV seropositive individuals, and seropositive hematogenous stem cell 
recipients.The case for the two transplant populations is most evident: morbidity from CMV is considerable, and antiviral prophylaxis is expensive, not completely effective, and cannot be continued indefinitely.Ideally, a CMV vaccine would be given before transplantation, but for HSCT patients, who acquire a new immune system, vaccination should continue after transplant.Although not 100% certain, it appears that CMV antibodies are needed by solid organ transplant recipients , while HSCT recipients need reinforcement of T-cell immunity against CMV .The state of vaccine development arguably is such that definitive evidence for efficacy of these two approaches could be obtained in 1–3 years.The situation for women of child-bearing age is less clear.However, the multiplicity of candidate vaccines discussed above argues that we are in the position of being able to induce neutralizing antibodies against gB and pentamer as well as CD4+ and CD8+ T-cell responses against those two surface antigens, plus the pp65 matrix protein.Evidence for the importance of immune responses to these antigens in prevention of acquisition and transmission to the fetus has been acquired, and although controversial, passive antibodies may protect the fetus .Thus, CMV vaccination in North America, Europe and elsewhere where many women approach pregnancy without antibodies to CMV is justified.In addition, modeling suggests that vaccination of toddlers, similar to the practice for rubella vaccine, would offer strong indirect protection to women, many of whom are infected by their first child during a subsequent pregnancy .If the duration of vaccine-induced protection is long enough, CMV vaccination could be offered to pre-adolescents at the same time as HPV vaccine.At this point, uncertainty exists about the immunological deficits that allow reinfection of seropositive individuals, including pregnant women, and studies to define those deficits are urgently needed.Although the incidence of abnormalities in infants born to reinfected mothers is lower than in those born to mothers who had primary infection, reinfection can cause serious consequences .Is reinfection the result of low pentamer antibodies, low T-cell responses, or a high force of infection?Answers to these questions are badly needed, as the vast majority of women in the world who live in LMICs have been infected with CMV in childhood and are seropositive.Nevertheless, there is a burden of fetal and newborn disease that should be prevented by vaccination, as contact between asymptomatically infected children and mothers cannot be eliminated.
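Returning to the burden estimates quoted earlier in this article, the arithmetic behind them (an approximately 1% birth prevalence of congenital infection, with 10–15% of infected infants developing permanent sequelae) can be made explicit; the annual birth cohort sizes used below are rounded, illustrative assumptions rather than figures from the article, which itself cites approximately 250,000 infected infants per year for India and 35,000 for Brazil.

def congenital_cmv_burden(annual_births, birth_prevalence=0.01,
                          sequelae_rate=(0.10, 0.15)):
    """Rough burden estimate: infected infants per year and the range expected
    to develop permanent sequelae, using the rates quoted in the text."""
    infected = annual_births * birth_prevalence
    sequelae = tuple(infected * r for r in sequelae_rate)
    return infected, sequelae

# Illustrative, rounded birth cohorts only (assumptions, not from the article):
for country, births in {"India": 25_000_000, "Brazil": 3_000_000}.items():
    infected, (lo, hi) = congenital_cmv_burden(births)
    print(f"{country}: ~{infected:,.0f} infected infants/year, "
          f"~{lo:,.0f}-{hi:,.0f} with permanent sequelae")

With these assumed cohorts the calculation reproduces the order of magnitude of the published estimates, which is the point of the exercise rather than the exact national figures.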
The human cytomegalovirus (HCMV) is the most important infectious cause of congenital abnormalities and also of infectious complications of transplantation. The biology of the infection is complex and acquired immunity does not always prevent reinfection. Nevertheless, vaccine development is far advanced, with numerous candidate vaccines being tested, both live and inactivated. This article summarizes the status of the candidate vaccines.
726
Effect of multi-pass friction stir processing on textural evolution and grain boundary structure of Al-Fe3O4 system
The conventional monolithic aluminum alloys are losing their industrial application to aluminum matrix composites.The AMCs are increasingly replacing them in various applications including aerospace, renewable energy and, automotive sectors and marine and nuclear engineering .Substitution happens because of the excellent physical, mechanical and tribological properties of AMCs such as low thermal expansion coefficient, high strength to weight ratio, stiffness, and good wear resistance .Conventionally AMCs are fabricated using a variety of solid and liquid phase processing techniques, including stir casting , powder metallurgy and squeeze casting .Besides conventional processing techniques, the variants of specialized processing techniques were also utilized for AMCs .Recently, friction stir processing which is a relatively new solid-state process adapted from friction stir welding, has been widely explored as a surface modification technology .Also, it has attracted much attention as a new process to fabricate metal-matrix nanocomposites .Heretofore during the past decade, a large number of investigations have been carried out to process MMCs by FSP.Recent researches showed that the combination of FSP and thermite powder, which undergoes through an exothermic reduction–oxidation reaction, could be resulted in the formation of AMCs reinforced by in situ Al2O3 nanoparticles.Various oxide systems such as Al–Fe2O3 , Al–CeO2 , Al–TiO2 , Al–CuO and Al–Fe3O4 have been used to fabricate the AMCs reinforced by Al2O3 nanoparticles by using FSP, where reactive mechanisms are utilized to form in situ Al2O3 particles.The production of aluminum matrix nanocomposites is the result of a chemical reaction between Al and proper metal oxides.The reduction reaction between oxides and Al results in the production of another metal and aluminum oxide.The metal can form as an intermetallic phase with Al or act as alloying constituent inside the Al matrix, which can affect as reinforcements element.The aluminum oxides as the other reaction products, specifically Al2O3, is a beneficial reinforcement for AMCs .Oxides of iron are a proven candidate in self-sustaining aluminothermic reaction.Fe3O4 is considered as suitable for its lower cost and high free energy of reaction .The Al–Fe3O4 system is known for the high exothermic reaction that can be exerted during thermal and/or mechanical treatments according to the following stoichiometric reaction :3Fe3O4 + 8Al → 4Al2O3 + 9Fe ΔH° = −3021 kJ,The final phases, α-Fe and α-Al2O3, are formed based on previously mentioned in situ chemical reactions wherein the iron oxide is reduced by aluminum.In relation to the thermite mixture the stoichiometric reaction has been suggested when a powder mix of 8Al and 3Fe3O4 is accumulated; nevertheless the final product can be manipulated by non-stoichiometric compositions.Additionally, the presence of extra Al can result in the formation of Al–Fe intermetallics.The reaction products, iron aluminide intermetallics and in situ formed Al2O3, can act as fine reinforcements.The distribution of them is homogenous within the Al matrix, and they are capable of contributing to strengthening because of their high strength performance.It is well accepted the FSW/FSP can result in the fabrication of fine/ultrafine and equiaxed grains in the nugget zone due to the dynamic recrystallization .In addition, the preferred orientation are varied around the centerline of stir zone which experienced frictional heating and severe plastic deformation 
.Texture is induced mainly by the rotating action of the tool shoulder during FSP, which imposes compressive and shear effects .It is also well known that second-phase particles play an important role in recrystallization.Fine dispersoids tend to hinder boundary motion and slow down recrystallization and grain growth through a Zener drag effect ; texture can therefore be affected by the presence of nano-sized inclusions within the aluminum matrix, through their control of the particle-stimulated nucleation and Zener pinning mechanisms .Therefore, studying the microstructure transformation and the texture evolution in the center of the sintering zone is of great importance.Another important subject is the nature and character of grain boundaries, as they have a strong effect on the properties of polycrystalline materials .Kronberg and Wilson proposed the first model for identifying special grain boundaries in 1949, and it is known as the coincidence-site lattice model.Their model is based on the energy of the grain boundary: the grain boundary energy is low when the coincidence of atomic positions in the two neighboring grains is high, because only a small number of bonds are broken across the boundary.This can be understood by noting that the Gibbs energy of the system is at a minimum when the atoms sit in perfect lattice positions, so a grain boundary has lower energy when the coincident atoms are positioned as in a perfect crystal than when they are not.In this description, the two grains are misoriented by a chosen angle θ around a chosen axis.When the two crystal lattices are superimposed, some atomic sites coincide; these are known as coincidence sites.As these sites are arranged regularly throughout the superposition, they create a superlattice, which is named the coincidence-site lattice .More precisely, the misorientation is designated Σn, wherein the value of n is the reciprocal of the density of coincident lattice sites with respect to the principal lattice points, and thus carries the information about the misorientation relationship.n is always odd, and it can be calculated when the plane of the boundary is characterized by symmetric and asymmetric tilt grain boundaries .The CSL model is extensively utilized to categorize GBs into three classes: low angle grain boundaries, with misorientation angles of less than 15°; low ΣCSL boundaries, with 3 ≤ Σ ≤ 29; and general boundaries, which include both random boundaries and high-ΣCSL boundaries .The limit of 15° is based on measurements of the contact angle at the grain boundary trace at a free surface in bismuth, which exhibited the transition between low and high angle grain boundaries; a recent study that measured the migration of planar grain boundaries in aluminum confirmed a sharp limit between low-angle and high-angle 〈112〉 and 〈111〉 tilt grain boundaries at 13.6° .
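For illustration, the three-way classification just described can be written as a short sketch; the boundary list, the Σ value assigned to each boundary and the use of the Brandon criterion (a standard Δθ = 15°/√Σ tolerance that is not discussed in the text) are assumptions made for the example rather than details taken from this study.

import numpy as np

def classify_boundaries(misorientations_deg, sigmas=None, lagb_limit=15.0):
    """Bin grain boundaries into the three CSL-model classes described above:
    LAGB (misorientation < 15 deg), low-SigmaCSL (3 <= Sigma <= 29) and
    general boundaries.

    misorientations_deg : misorientation angle of each boundary, in degrees.
    sigmas              : optional Sigma value assigned to each boundary
                          (np.nan where no CSL relationship was identified).
    """
    theta = np.asarray(misorientations_deg, dtype=float)
    labels = np.full(theta.shape, "general", dtype=object)
    labels[theta < lagb_limit] = "LAGB"
    if sigmas is not None:
        sig = np.asarray(sigmas, dtype=float)
        with np.errstate(invalid="ignore"):        # NaN entries simply stay "general"
            low_csl = (theta >= lagb_limit) & (sig >= 3) & (sig <= 29)
        labels[low_csl] = "low-SigmaCSL"
    return labels

def brandon_tolerance(sigma, theta0=15.0):
    """Maximum angular deviation (degrees) usually allowed when counting a
    boundary as a SigmaCSL boundary: theta0 / sqrt(Sigma) (Brandon criterion,
    not taken from the text)."""
    return theta0 / np.sqrt(np.asarray(sigma, dtype=float))

In practice the Σ assignments are produced by the EBSD post-processing software, so a sketch of this kind only mirrors the bookkeeping behind the reported boundary fractions.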
Usually the term “general grain boundaries” is used when the behavior of the interface is studied.Boundaries with Σ ≤ 20 are known as special boundaries ; a low value of Σ thus identifies a special grain boundary.The term is used for boundaries that display sharp extremes in the orientation dependence of properties such as fracture toughness, diffusivity, the tendency to segregation, migration rate, sliding rate and corrosion rate .Recent orientation mapping studies have suggested that materials with a high fraction of low-ΣCSL boundaries, particularly Σ3 boundaries, exhibit superior properties owing to the potential for structural order in the boundary plane .It can therefore be stated that the improvement of certain properties results from special grain boundaries .The objective of the study demonstrated in this article is to evaluate the fundamental concepts of grain boundary evolution and microtexture development in the center of the sinter zone of in situ nanocomposites formed by FSP of the Al–Fe3O4 system.Accordingly, to provide a profound understanding of the grain boundary transformation and crystallographic texture evolution, the electron backscattered diffraction technique was used in conjunction with field emission scanning electron microscopy.To allow a consistent interpretation of the microstructure and the variation of properties in the final composite produced from the Fe3O4–Al system, an AA 1050 rolled sheet with a thickness of 5 mm and a high Al content was selected.The nominal chemical composition of the Al rolled sheet is presented in Table 1.The sheet was provided by Arak Aluminum Co., Arak, Iran.To prepare the workpieces, the Al sheets were cut to dimensions of 210 × 70 × 5 mm3 and a groove with a depth of 3.5 mm and a width of 1.4 mm was machined along the middle of the length.Afterwards, the milled powder mixture was inserted into the machined grooves.Fe3O4 and Al powders were mixed based on the stoichiometric combination in Reaction.The morphology of the as-received powders is illustrated in Fig. 1.Mechanical milling was used to prepare the powder mixture.The powders were milled for 1 h in a high-energy planetary mill under an argon atmosphere.Table 2 presents the mechanical alloying process parameters.Immediately after milling, the powder product was wrapped in a double layer of titanium foil together with zirconium powder to avoid oxidation.
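As a point of reference for this powder preparation, the stoichiometric 3Fe3O4 + 8Al mixture of the thermite reaction quoted in the introduction can be converted from a mole ratio into weight fractions using standard molar masses; the short calculation below is only an illustrative sketch with rounded atomic weights and is not taken from the article.

# Stoichiometric 3 Fe3O4 + 8 Al thermite mixture: convert the mole ratio to weight fractions.
M_AL = 26.98                           # g/mol (rounded)
M_FE = 55.85                           # g/mol (rounded)
M_O = 16.00                            # g/mol (rounded)
M_FE3O4 = 3 * M_FE + 4 * M_O           # ~231.55 g/mol

mass_al = 8 * M_AL                     # ~215.8 g of Al per formula unit of the reaction
mass_fe3o4 = 3 * M_FE3O4               # ~694.6 g of Fe3O4
total = mass_al + mass_fe3o4

print(f"Al:    {mass_al / total:.1%} by weight")     # ~23.7%
print(f"Fe3O4: {mass_fe3o4 / total:.1%} by weight")  # ~76.3%

For the stoichiometric mixture this gives roughly 24 wt% Al and 76 wt% Fe3O4, which is the kind of mass-based recipe a weighing step for the milled charge would rely on.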
Fig. 1 shows the morphology of the milled powder product after 1 h of milling.To prevent any powder oxidation, the FSP was carried out quickly after placing the milled product into the groove.The grooves, which had been prepared in the specimens using a vertical milling machine, were filled with the powder mixture produced by mechanical alloying.The samples were then subjected to FSP, with the grooves on the workpieces first sealed carefully to encapsulate the powder and restrain powder dispersion during the process.This sealing pass was performed with a pin-less tool with a 10 mm shoulder diameter at a rotational speed of 1120 rpm and a traverse speed of 125 mm/min.Thereafter, FSP was applied for four passes with 100% overlapping using an H13 steel tool with an 18 mm shoulder diameter, 5 mm pin diameter and 4 mm pin height.The pin had threads with a depth of 0.5 mm and an angle of 30°, and a 2.5° nutation angle was used.The parameters used during these FSP passes were w = 1400 rpm and v = 40 mm/min.To provide a comparison sample, a workpiece was also processed by FSP without powder introduction using the same parameters.An electronic top-view macro-image of the processed nanocomposite is shown in Fig. 2c, indicating a consistently processed specimen.The microstructural studies were performed on specimens collected from the FSPed workpieces by cutting sections transversely.Grinding and polishing were carried out up to a final polishing step performed with diamond paste and a polishing pad.Finally, the specimens were etched using modified Poulton's reagent.Optical microscopy, scanning electron microscopy and transmission electron microscopy were used to study the microstructural evolution.The SEM was equipped with energy-dispersive spectroscopy and electron back-scattered diffraction detectors.Automatic grinding and polishing steps were employed as standard metallographic procedures, followed by a 45 min polishing step using colloidal silica to prepare the test samples for EBSD study.The Mambo and Tango software were used to process the EBSD data and plot misorientation distribution curves.As mentioned in the previous article on the EBSD study of different Al/oxide systems, a cleanup procedure was used to re-index the data points; in this case, a cleanup with a 5° grain tolerance angle and 0.1 minimum confidence index was applied to re-index the EBSD patterns.Misorientations in the range of 3–15° were considered as low-angle grain boundaries, indicated with bright contrast in the EBSD images.Boundaries with higher angles were defined as high-angle grain boundaries and are illustrated with dark contrast in the EBSD images.An X-ray diffraction unit was used to identify the phases formed in the composites within the SZ.To collect the specimens for microscopy tests, the samples were taken at least 30 mm from the FSP start/finish points.The ball milling process was used to provide an active and uniform mixture of powders from the initial powder mixture; the product powders have higher reaction kinetics during FSP.From the first trials, it was concluded that the conventional mixing process is unsuccessful when dissimilar powders are mixed, since the density of Fe3O4 is much higher than that of aluminum.Therefore, to achieve a uniform mixture of powders, the ball milling process was utilized according to the
stoichiometric composition presented in Reaction.Fig. 3a indicates the XRD pattern of the milled Al–Fe3O4 powder mixture after 1 h milling.As can be seen, the individual Al and Fe3O4 peaks are the only ones which have been detected, illustrating that almost no reaction between Al and Fe3O4 took place during milling.The exception is related to the existence of peak broadening; this is related to reducing the crystallite size and extending the lattice strain.The milling time effect on Fe3O4–Al reaction determines by using differential thermal analysis.It can be noticed that only the melting peak of aluminum at about 670 °C was revealed, specifying that no reaction occurred between Al and Fe3O4 after 1 h milling.As noted previously, to achieve a higher rate of reaction kinetics during FSP, the mechanical milling was employed to produce uniform and active powder mixture.Fig. 4 shows the FE-SEM images related to FSPed AA1050 specimens with and without powder mixture addition.It demonstrates a fully uniform distribution of particles is achieved in the SZs of FSPed AA1050 after powder mixture addition, without any evidence of particle clustering.Generally, the composite is fabricated by the FSP in the SZ after adding the milled powder mixture.Fig. 4c illustrates FESEM micrographs and elemental mapping analyses for the friction stir processed nanocomposite specimen.It can be seen, different contrasts are displayed by the particles in the composite due to their matrix.As stated by the result of EDS analyses, particles with light gray, pale gray, and bright contrasts are steady iron aluminide compound, iron oxide, and aluminum oxide compositions, respectively.The formation of iron aluminide compound as a result of solid-state reactions of Al with Fe3O4 is notable.Consequently, it becomes evident that the solid-state reaction between Fe3O4 particles and Al during FSP form Al–Fe intermetallic particles.It should be noticed that since two particles are small, it is difficult to detect them by the EDS, considering the resolution.It is suggested to use the chemical analyses based on qualitative elemental partitioning which can be described by partition coefficient as concentration ratio of an element between two phases, and it extracted from X-ray mapping instead of quantitative point analyses.As stated, the Al/Fe3O4 reaction occurs as the presence of particles is confirmed by the mentioned chemical analyses.Although the reaction is not kinetically finalized since even after four passes of FSP, the final produced composite contains some iron oxide particles.X-ray diffraction experiments were carried out to investigate the fabricated phases after FSP.Fig. 5 shows the XRD patterns of AA1050 with powder addition.As can be seen, dominant Al, and minor phases peaks related to Al13Fe4, Al2O3 and Fe3O4 were detected.These results illustrate and prove that the reaction between Al and Fe3O4 took place during four pass FSP and in situ hybrid nanocomposite has been formed.Although the ratio of the peak intensity related to particles to compare with background intensity is low, the particles volume fraction is not very low.The appearance of low peak-to-background ratios is expected because of the nano-sized particles formation and the associated peak broadening effect.The correspondent diffraction patterns related to iron oxide showing that after four passes of FSP, the reaction between Al- Fe3O4 is not terminated.Figs. 
6–8 are exhibited the EBSD analyses of the microstructural details related to the as-rolled AA1050 base alloy and sinter zone of the FSPed samples with/without additional powder mixture.Also, Table 3 is presented the microstructural statistics for different specimens.Fig. 6– are included the main EBSD results for as-rolled AA1050 samples which are showing the grain boundary, grain orientation and recrystallization maps and misorientation angle distribution and restoration frequency histograms.Fig. 6c illustrated the grain boundaries when misorientation angles are larger than 15°.They highlighted by black color.White color is used for indication while the misorientation angles are lower than 15°.As it is mentioned before, the grain boundary with, misorientation angles larger and lower than 15° are known as HAGBs and LAGBs, respectively.This examined base material exhibit an elongated grains microstructure, a high ratio of LAGBs, a low proportion of recrystallized grains and comparatively small misorientation angle.Fig. 6d shows the distribution histogram of the misorientation angle.The illustrated distribution is close to the random MacKenzie distribution curve while the plot of Fig. 6f shows a high fraction of deformed grains.These appearances are typical for the rolling microstructures.By employing FSP, the equiaxed grains are formed with a mean size of ∼7.8 μm, where the elongated grains of base materials were diminished.In fact, the formation of the equiaxed grain structure is due to the effect of dynamic restoration phenomena and happened after friction stir modification without the presence of particles.In addition, the mean misorientation angle is cut down in comparison with AA1050 base alloy from 31° to about 29.5°.It appears that the grains size are refined to sizes down to 7.8 μm due to severe plastic deformation while the high angle grain boundaries have been formed because of dynamic recrystallization during FSP.Grain refinement happens in FSP/FSW due to existence of different dynamic restoration mechanisms .Furthermore, During FSP, a high level of geometrically necessary dislocations density can be created.Those resulted in a complex stress field with a very large strain gradient.The high GND density leads to the strain incompatibilities which can be functioned as initial and preferred nucleation sites because of the dynamic restoration phenomena .Besides that, it is apparent that the microstructure in the stir zone is mainly composed of an uncommon mixture of high and low angles of grain boundaries.The deform microstructure will be consumed by new dynamically recrystallized grains; this has occurred when the new distinctive grains are formed from dynamic formation and extermination of LAGBs; in other words, the dynamic recrystallization is related to the transformation of sub-grains into new grains .Eventually, it will be indicated that the fraction of LAGBs are reduced when correspondingly, the fraction of HAGBs are increased due to the formation of new individual grains.The increment of HAGBs number during dynamic recrystallization is due to the continued dislocations accumulation in the subgrain boundaries when the misorientation is kept at a low level .In fact, dynamic recrystallization Continuous occurs by the progressive accumulation of dislocations into LAGBs which increase their misorientation, and eventually, HAGBs are formed when the misorientation angles reach a critical value θc .Fig. 
8 illustrates the EBSD mapping of the grain size and the orientation; also, contains the distribution histogram plots measured in the mid-thickness section for the nugget zone of the hybrid nanocomposite.Some other features can be noted such as the equiaxed grain structure that is formed with a mean grain size of 2.1 μm; and a slight increase that is occurred when the average misorientation angle is measured up to ∼31.5°.The further increment has happened while the fraction of recrystallized grains reaches ∼37%.In addition, the Al and Fe3O4 powder mixture has intensified grain refinements, and this is seen in the plots as the grain size distribution became narrow.Whereas the in situ precipitates is formed through changing the FSP to a reactive process by adding the milled powder.Those precipitates serve as reinforcements, and the matrix grains will refine by grain boundary pinning mechanism.These outcomes support the statement that the produced particles as a result of the reaction between Al and Fe3O4 had an extra effect on reducing the matrix grain size.As mentioned before, the presenting of various dynamic restoration mechanisms is the main reason for grain refinement happening during FSP/FSW.The static and dynamic restoration events can be affected significantly by the presence of nanoparticles, such as Al13Fe4 and Al2O3.Inserting inclusions during the process can make a higher number of nucleation sites.It can happen at the beginning based on the particle stimulating nucleation mechanism.An additional obstacle occurs on the short-range motion of grain boundaries, which conforms to the Zener–Smith Pinning mechanism .All those lead to the formation of a finer structure from HAGBs during static conditions of heat treatments within heating or cooling cycles.Likewise, the increment of the grain refinement within the sintering zone along with multiplication of the preferred pinning sites occurs from hard ceramic particles production during reactive FSP and in situ phase formations.A simple comparison between the EBSD results for the AA1050 base alloy and SZ regions of the FSPed specimen and hybrid nanocomposite can clarify the effects of adding milled powder on the microstructural details; it is shown in Figs. 
The results show that the fraction of HAGBs increases during FSP of both samples. Grain refinement and recovery mechanisms compete actively during FSP, and this competition is related to the prior cold work and microstructure of the rolled alloy. The development of a fine sub-grain structure, which occurs through dislocation re-arrangement, is responsible for grain refinement. The transformation of LAGBs into HAGBs in aluminum alloys takes place as a result of continuous dynamic recovery, which also involves the progressive accumulation of dislocations at LAGBs. It is reported in the literature that special boundaries are those with Σ ≤ 20, whereas the other boundaries, with Σ ≥ 29, are classed as random. Separating special boundaries from random ones at Σ29 is based on the correlation between the special fraction and the observed properties. The special fraction is calculated by dividing the number of boundaries in the category 1 ≤ Σ ≤ 29 by the total number of boundaries. The distribution of the different boundary types with respect to Σ is referred to as the grain boundary character distribution. Special boundaries, which have low Σ values and occur at well-defined misorientation angles, exhibit exceptional properties in several respects, including kinetic, electronic, mechanical, chemical, and energy characteristics. Fig. 9 shows the particular Σn boundaries in the BM and in the FSPed specimens without and with the Fe3O4/Al powder, and Fig. 10 presents a quantitative estimate of the different boundary types in all of the specimens. The Σ threshold above which the particular properties of grain boundaries are lost depends on the characterization method and the investigated property; pressure and, mainly, temperature are among the external conditions that can affect this threshold. Accordingly, the CSL boundaries are used to derive the boundary fractions for Σ20, Σ29 and Σ3n, and these derived data are summarized in Table 4. Regarding the boundary fractions, it is notable that the numbers are very low for the low ΣCSL boundaries of the BM: as indicated in Table 4, the fraction of Σ ≤ 20 boundaries is only about 0.7%. Comparing the BM with the FSPed specimen without powder shows that the fraction of low ΣCSL boundaries increases due to FSP. The interaction of pre-existing Σ3 boundaries through grain boundary migration, which occurs during the dynamic restoration mechanisms, may form new Σ3 boundaries; several investigations have attributed the formation of the new boundaries to atomic interactions associated with grain boundary migration during DRX.
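The special-fraction definition given above is again a simple counting exercise over the Σ values assigned to each boundary. A minimal sketch, assuming a hypothetical list of Σ assignments (None marks boundaries not indexed to any CSL; all values are invented), might look as follows:

# Hypothetical Sigma (CSL) assignments for a set of boundaries; None = not a CSL boundary.
sigma_values = [3, 3, 9, 27, 5, 7, 11, 13, 29, None, None, None, 3, 17, None, 31, 45, None]

total_boundaries = len(sigma_values)

# Special fraction as defined in the text: boundaries with 1 <= Sigma <= 29 over all boundaries.
special = [s for s in sigma_values if s is not None and 1 <= s <= 29]
special_fraction = len(special) / total_boundaries

# Sigma-3^n family (3, 9, 27), often tracked separately, as in Table 4 of the article.
sigma3n = [s for s in special if s in (3, 9, 27)]
sigma3n_fraction = len(sigma3n) / total_boundaries

print(f"Special (1 <= Sigma <= 29) fraction: {special_fraction:.2f}")
print(f"Sigma-3^n fraction: {sigma3n_fraction:.2f}")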
On the other hand, as can be seen in Fig. 10 and Table 4, the fraction of low ΣCSL boundaries is further enhanced by adding the milled powder mixture. It can therefore be concluded that additional mechanisms acting on dynamic recrystallization are activated by the presence of nanoparticles; grain boundary pinning and particle-stimulated nucleation are examples of such mechanisms. Altogether, for these reasons, a higher fraction of low ΣCSL boundaries can form, and grain refinement follows. The thermal exposure during FSP and the severe plastic deformation inherent to the process produce a fine recrystallized grain structure and establish a specific texture. During FSP, the texture develops mainly because of the active deformation mechanisms and the microstructural transformation caused by the restoration phenomena. Diverse texture components are expected to form in the microstructure because the grains of the SZ respond differently to the imposed strain and temperature. In addition, the stacking fault energy (SFE) of F.C.C. metals and alloys such as aluminum and its alloys is comparatively high; the high SFE strongly affects the operating dynamic restoration mechanisms, namely recovery and recrystallization during FSP, and controls the resulting deformation textures. The inverse pole figure (IPF) coloring maps of the BM, the FSPed sample without the milled powder and the hybrid nanocomposite are presented in Fig. 11. IPF maps show the crystallographic orientations of individual grains with respect to the rolling direction (RD); the color code for orientations parallel to the RD axis is given in the basic triangle at the top right corner, and neighboring grains with identical orientations share the same color. The rolled structure is characteristic of the 1050 aluminum alloy and indicates the intense rolling process performed on the specimens, whereas for the FSPed specimens the IPF maps show variations in the preferential grain orientations. Figs. 12–14 show the orientation distribution functions (ODFs) of the specimens derived from the EBSD results; they are used to analyze the effect of FSP and of the powder addition on the crystallographic texture evolution of the 1050 aluminum alloy. Fig. 12 shows the ODF maps of the BM, which display a typical rolling texture for wrought 1050 aluminum alloy. Fig. 13 shows that the main texture components developed in the FSPed sample are CubeND 〈310〉, BR〈385〉 and R. The CubeND component has an approximate 40°〈111〉 rotation with respect to the Cu orientation. Concurrent precipitation can give the CubeND orientation a growth-rate advantage over the other orientations: the incubation time of CubeND is shorter than that of the other orientations, and this shorter time results in less precipitation upon these nuclei. The R texture is often observed after annealing of cold-rolled commercial-purity aluminum and certain alloys; it is usually known as the retained rolling texture because of its similarity to the S deformation texture.
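As a side note, the 40°〈111〉 relationship quoted above for the CubeND component can be checked with the standard relation θ = arccos((tr(Δg) − 1)/2), where Δg is the rotation matrix relating the two orientations. The short sketch below ignores the cubic crystal symmetry operators (so it returns the raw rotation angle rather than the crystallographically reduced disorientation) and simply verifies that a 40° rotation about 〈111〉 is recovered; the orientations involved are assumed for illustration only.

import numpy as np
from scipy.spatial.transform import Rotation as R

# Build a rotation of 40 degrees about the normalized [111] axis, as in the 40deg<111> relation.
axis = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
g_delta = R.from_rotvec(np.deg2rad(40.0) * axis).as_matrix()

# Misorientation angle from the rotation matrix: theta = arccos((trace - 1) / 2).
theta = np.degrees(np.arccos(np.clip((np.trace(g_delta) - 1.0) / 2.0, -1.0, 1.0)))
print(f"Recovered rotation angle: {theta:.1f} deg")  # approximately 40.0 deg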
Fig. 14 shows the ODF maps of the hybrid nanocomposite. The microtextural evolution within the SZ of the hybrid nanocomposite is different: the dominant component is only CubeTwin. As mentioned before, the particle-stimulated nucleation and pinning mechanisms operating during dynamic recrystallization are affected by the presence and distribution of the hard nanoparticles, and this governs the texture development. The incorporation of nanoparticles such as Al13Fe4 and Al2O3 throughout the matrix changes the shear textures to the CubeTwin component. The cube component is the most general recrystallization texture compared with the other textures. DRX nucleation can initiate from cube bands formed by the onion-ring flow pattern of nanoparticles, and the changing shear direction during the stirring motion of the rotating tool during FSP further increases the chance of inducing the cube (C) texture component in the stir zone of the hybrid nanocomposite specimen. The partial completion of DRX in the nugget zone, promoted by the hard nanoparticles and precipitates, can cause the change of texture components in the stir zone of the nanocomposite sample. Table 5 presents the texture intensity together with the fractions of HAGBs and LAGBs; careful evaluation and comparison of these data reveal a correlation between the HAGB/LAGB fraction ratio and the texture strength. In other words, the grain boundary evolution and the texture transition within the SZ are closely related. It is apparent that the hybrid nanocomposite shows a higher texture strength and a higher LAGB fraction than the FSPed AA1050 sample without additional powder. Generally, a high texture strength results from only slight differences in the crystallographic orientation of a large number of grains. A strong texture means that the orientations of neighboring grains are close to each other, i.e., the dissimilarity between the orientations of adjacent grains is small; in that case the boundary between adjacent grains is classified as a low-angle boundary. Consequently, texture analysis is an important parameter and a significant tool for characterizing the grain size, grain orientation, and type of grain boundaries. The in situ Al-matrix hybrid nanocomposite was synthesized by multi-pass overlapping FSP, in which the milled powder mixture of Fe3O4 and Al was inserted into the stir zone and a reduction reaction took place. The fabricated hybrid nanocomposite was studied in terms of microstructure evolution, grain boundary transformation and texture variation, and compared with the BM and the FSPed 1050 aluminum alloy. The obtained results can be summarized as follows. The in situ formation and dispersion of nanoparticles in the aluminum matrix accelerate the dynamic restoration process, act as a significant hindrance to grain growth during dynamic recrystallization and, finally, result in a reduction of the grain size; the average grain size of the hybrid nanocomposite was about 2.1 μm. The high angle grain boundary fraction in the SZ increases during FSP, which is an indicator of the dynamic restoration process. The fraction of Σ3n boundaries is increased by FSP, indicating that a large proportion of the Σ3 boundaries are newly nucleated during the restoration process. The addition of the milled powder mixture and the formation of hard nanoparticles intensify the fraction of low ΣCSL boundaries. The main recrystallization texture components change to the cube texture component in the presence of the nanoparticles in the fabricated hybrid nanocomposite.
The authors declare no conflicts of interest.
A mixture of pre-milled Fe3O4 and Al powder was added to the surface of an aluminum alloy 1050 substrate to obtain hybrid surface nanocomposites using friction stir processing. In situ nano-sized products were formed by the exothermic reaction of Al and Fe3O4, which is triggered by the hot-working character of the process. The microstructure, crystallographic microtexture transition and grain boundary evolution of the fabricated nanocomposite were investigated using optical microscopy, X-ray diffraction, field emission scanning electron microscopy, and electron backscattered diffraction analyses. The matrix mean grain size decreased to ∼8 and 2 μm in the specimens processed without and with the introduction of the powder mixture, respectively. In addition, the fraction of high angle grain boundaries increased markedly, demonstrating the occurrence of dynamic restoration phenomena in the aluminum matrix. Moreover, the fraction of low ΣCSL boundaries increased (remarkably so in the presence of hard particles); these boundaries play the main role in dynamic recrystallization. The incorporation of nano-sized products such as Al13Fe4 and Al2O3 in the dynamically recrystallized aluminum matrix produced a predominantly CubeTwin texture component induced by the stirring action of the rotating tool. As a result, the effect of the nano-sized products is constrained.
727
Comfort food: A review
The term comfort food refers to those foods whose consumption provides consolation or a feeling of well-being; foods, in other words, that offer some sort of psychological, specifically emotional, comfort. It is often suggested that comfort foods have a high calorie content and that they tend to be associated with childhood and/or home cooking. Indeed, comfort foods are often prepared in a simple or traditional style and may have a nostalgic or sentimental appeal, perhaps reminding us of home, family, and/or friends, nostalgia being an important aspect of many celebratory meals such as Thanksgiving in the States. Comfort foods tend to be the favourite foods from one's childhood, or else linked to a specific person, place or time with which the food has a positive association, as in: "Grandma always made the best mashed potatoes and gravy, they've become a comfort food for me", or "We always got ice cream after we won at football as kids". The suggestion is that those who are alone tend to eat more comfort foods than those who are not. According to the results of one recent North American survey, the majority of those asked either agreed, or else strongly agreed, that eating their preferred comfort food would make them feel better. On the downside, though, many females, when questioned, report that consuming comfort food leaves them feeling less healthy as well as quite possibly guilty. Although the Oxford English Dictionary traces the origins of the term comfort food back to a 1977 article that appeared in The Washington Post, Cari Romm recently suggested that the phrase "comfort food" has been around at least as early as 1966, when the Palm Beach Post used it in a story on obesity: "Adults, when under severe emotional stress, turn to what could be called 'comfort food': food associated with the security of childhood, like mother's poached egg or famous chicken soup".
Given that regular eating also results in a feeling of well-being, it is perhaps important here to distinguish what is special about comfort eating. The latter would seem to differ in terms of its emotional/affective associations and/or perhaps also the relatively narrow range of foods that are involved. These days, there is growing interest in the therapeutic use of comfort foods for those older patients who may well not be consuming enough to maintain their health and/or quality of life. In this group, comfort foods can also serve an important role in terms of triggering nostalgia. Given the above, it should come as little surprise to find that many food companies are interested in trying to engineer new "comfort foods". However, the relatively idiosyncratic way in which foodstuffs take on their role as comfort foods means that it is probably going to be quite a challenge for the food companies to achieve this goal. That said, restaurateurs have certainly been known to put more comfort foods on the menu when times are hard. NASA, too, has become interested in the topic, given the planned space mission to Mars, comfort food probably being just what the astronauts will need on their undoubtedly stressful ultra-long-haul flights. Given that many comfort foods are associated with what our parents or grandparents may have given us to eat when we were ill as children, there tends to be a lot of variation across both individuals and cultures in terms of the foods that people think of as comforting. That said, chicken soup often comes top-of-mind. According to a survey of more than 1,000 North Americans reported by Brian Wansink and Cynthia Sangerman, the top comfort foods were potato chips, ice cream, cookies, pizza and pasta, beef/steak burgers, fruits/vegetables, soup, and other. Intriguingly, however, these averages hide some striking gender differences. When asked to agree or disagree on whether particular foods were comfort foods to them, the top choices amongst females were ice cream, chocolate, and cookies. By contrast, the top three comfort foods for men were ice cream, soup, and pizza/pasta. Notice the place of hot main meals as comfort food for men, or as one newspaper headline put it: "Women like sugar, men like meat". Importantly, it was not just the foods that differed by gender; differences were also identified in those situations that were likely to elicit comfort eating. Based on the results of a web-based survey of 277 participants, loneliness, depression, and guilt were all found to be key drivers of comfort eating for women, whereas the men questioned typically reported that they ate comfort food as a reward for success. So, while the clichéd view may well be that people reach for comfort food when their mood is low, the evidence reported by Wansink and Sangerman suggests instead that comfort foods are consumed when people find themselves in a jubilant mood, or else when they want to celebrate or reward themselves for something. Only 39% of those questioned in this study chose to eat comfort foods when they had the blues or were feeling lonely. Wansink and Sangerman also identified some interesting differences in what constitutes comfort food amongst the different age groups they polled: while 18–34 year-olds preferred ice cream and cookies, those aged 35–54 preferred soup and pasta, and those aged 55 and over tended to prefer soup and mashed potatoes instead. Wansink et al.
also found that older people were more likely to report positive emotions after having eaten their favourite comfort food. Here it is perhaps worth adding that people tend to remember and focus more on positive emotions and situations as they age. But, in all cases, it was the past associations that an individual had with the foods that turned out to be key. Here, the question is whether any specific sensory cues can be identified that are especially strongly associated with those foods that are typically considered comfort foods. Are there particular tastes, textures, smells, etc., for instance, that tend to be overrepresented in the most commonly mentioned comfort foods? Now, as we have just seen, the fact that different people identify different foods as comforting hints at the difficulty of identifying any common feature across such a disparate range of foodstuffs. And while it may be true to say that many comfort foods are calorie dense, that is certainly not always the case. So, does one sense dominate over the others as far as comfort foods are concerned? Well, a large body of psychological research has shown that we are, generally speaking, visually dominant creatures. That is, no matter whether we want to know what something is, or where it is located, it is the input from our eyes that dominates over that from the other senses. It is also clear from the many studies that have been conducted over recent years that the visual appearance of food is very important to us. Indeed, as the Roman gourmand Apicius once put it: "We eat first with our eyes". So, the natural question to ask here is whether visual cues also dominate when it comes to defining those foods that we consider comforting. I would, however, wish to argue that the answer is probably not. One might also think that comfort food ought not to make any noise; or, as Rufus put it: "My comfort food must never draw attention to itself". However, the fact that potato chips came top of Wansink and Sangerman's survey of comfort foods would seem to nix that idea, as the latter are amongst the noisiest of foods. That said, across the whole range of comfort foods, I would dare to suggest that noisy foods are perhaps underrepresented as compared to what one might expect if one had people list, say, their most preferred foods; the latter would, I guess, on average, make more noise when consumed. Instead, in order to understand what, if anything, is special about comfort foods, one really needs to consider the role of the more emotional senses. In fact, it can be argued that what is common about those foods that we come to think of as comforting relates to their oral-somatosensory qualities; that is, what they feel like in the mouth. As Rufus puts it: "most of us are soothed by the soft, sweet, smooth, salty and unctuous". Spence and Piqueras-Fiszman note that comfort foods typically have a soft texture, and Dornenburg and Page suggest that foods having this texture are seen as both comforting and nurturing.
Social psychologists have reported that warmth in the hand makes other people seem warmer; that is, there appears to be a link between physical and social warmth. As such, one could imagine that those who are feeling lonely might well benefit, psychologically speaking, from holding something warm in their hands. Similarly, olfactory cues can also deliver a powerful emotional lift, having been shown to help aid relaxation. Here, it is interesting to note that various essential oils make an appearance both as relaxing aromas in aromatherapy practice and also, on occasion, in food; think here only of lavender, lemongrass, or rosemary. The point here, as far as the link between aromatherapy and eating/drinking is concerned, is just to stress the overlap in terms of the key aromatherapy oils that are also edible. Whether such aromatic compounds are overrepresented in comfort foods is an open question, though introspection does not seem to support the idea. In terms of basic tastes, sweet and salty would seem to be much more prevalent amongst a wide range of comfort foods than sour or bitter tastes. Remember here that the foods we are drawn to as kids tend to differ from those that taste most appealing as adults. And given that children do not tend to like bitter tastes, nor cruciferous vegetables much, this might also help to explain why it is so hard to think of a green comfort food. In summary, though, there is little clear evidence to support a particular sensory profile across a range of common comfort foods other than, perhaps, that on average they tend to be soft, smooth, sweet, and possibly have a salty/umami taste. According to the research, one important trigger leading to the consumption of comfort foods occurs when people experience negative emotions, or else try to regulate their emotions. That is, people appear to comfort eat as a means of getting themselves into a more positive emotional state, or, at least, that is the effect that they wish to achieve. As we will see later, though, some have questioned whether, in fact, eating comfort food achieves this objective. Both our sensory-discriminative and hedonic responses to different basic tastes, food aromas, flavours, and possibly also food textures, change somewhat as a function of our mood, anxiety and stress levels. So, for example, under stressful conditions the hedonic appeal of sweetness has been shown to increase, as has the perceived bitterness of saccharin. Indeed, over the years, a number of studies have reported that people consume more sweet foods when stressed, the evolutionary story being that the energy signalled by sweetness might be just what an organism needs in order to deal with whatever is causing the stress in the first place, or else may act as what is known as a 'displacement activity'. Such changes could perhaps provide one physiological explanation for why people might find sweeter comfort foods more appealing when they are stressed or depressed than when they are not. Indeed, Kandiah et al.
reported that stress influenced North American college women's preferences in terms of the specific foods that they find most comforting. A number of the women questioned also reported eating more, on average, when stressed, with stress often leading to more sweet foods, desserts, chocolate, candy and ice cream being eaten. Researchers have also addressed the question of whether individual differences, specifically in terms of attachment style, influence the extent to which people reach for comfort food. In one study reported by Troisi et al., those individuals who classified themselves as securely attached were found to rate potato chips as tasting better after they had been encouraged to describe a fight that they had recently had with someone close to them. By contrast, no such effect was observed in those individuals who diagnosed themselves as having an insecure attachment style. Meanwhile, in a second study, Troisi et al. had 86 US students keep daily diaries over a two-week period. In this case, analysis of what the participants had written revealed that those with a secure attachment style consumed more comfort food in response to naturalistic feelings of social isolation. Taken together, then, the results of these two studies suggest that the emotional benefits of comfort food are more likely to be experienced amongst those for whom consumption brings back positive associations of early social interaction. Consistent with this view, Troisi and Gabriel had already reported that the stronger an individual's emotional relationships, the more satisfying they tended to find chicken soup. The suggestion emerging from the latter research was that the "comfort" element of comfort foods comes from its affective associations with social relationships rather than anything else. Put simply, the claim is that comfort foods help alleviate loneliness. Indeed, Troisi and Gabriel were able to demonstrate that the consumption of comfort foods automatically leads to the activation of relationship-related concepts. That said, Ong et al. subsequently documented some important cross-cultural differences here. For, while the latter researchers were able to replicate Troisi and Gabriel's findings in a North American cohort, no such link between writing about comfort food and reduced feelings of loneliness was found amongst securely attached individuals after a belongingness threat in those from either Singapore or Holland. The latter null results would therefore appear to hint at an important cross-cultural component to the role and meaning of comfort food. Unpacking such cross-cultural differences, though, will likely require a good deal more research. It would, for example, be a good idea to repeat the study across a much wider range of cultures in order to determine how widespread the two response patterns identified by Ong et al.
actually are. There may also be a neuropsychopharmacological angle to comfort foods. It has, after all, been reported that eating palatable foods can lead to the release of trace amounts of mood-enhancing opiates. Similarly, the consumption of sweet, high-calorie foods has been linked to the release of opiates and serotonin, which, once again, may help to elevate mood in certain populations. The direct infusion of a fatty acid solution into the gut can also help to reduce the negative emotional impact of watching a sad film clip. Finally, there is evidence that drinking black tea can reduce stress; in one double-blind UK study, it led to a significant reduction in cortisol. That being said, researchers have, in recent years, turned to the question of whether comfort foods really do, in any meaningful sense, provide a psychological benefit to those who consume them. For instance, Wagner et al. had their participants watch upsetting movie scenes for 18 minutes in order to induce a bad mood, assessed via a mood questionnaire. Next, the participants ate their own preferred comfort food, another equally liked food, a neutral snack, or else were given nothing to eat. Three minutes later they were given another mood questionnaire. The surprising result to emerge from this study was that the mood of the participants improved equally in all four of the conditions. That is, no specific evidence was garnered to support the claim that consuming comfort food conveyed any special emotional benefit over the other foods. So, should it be concluded from Wagner et al.'s research that consuming comfort food doesn't provide any kind of emotional benefit, as suggested by some of the newspaper headlines covering the story? I would like to argue instead that one might want to nuance the claim: at this stage, at least, perhaps it is safer to say that comfort food might simply not be all that effective at alleviating the short-lasting negative mood induced by watching depressing movie clips. It should, after all, be kept in mind that the relatively mild mood induction procedure used by Wagner et al. is unlikely to have tapped the extremes of stress that may serve as the trigger for many everyday examples of comfort, or for that matter other forms of emotional, eating. Furthermore, it should also be remembered that the second mood questionnaire was given just 3 min after the participants had been offered the food. The possibility must therefore remain that the beneficial effects of consuming comfort food emerge rather more slowly; for the sake of comparison, it is worth bearing in mind that many neuropsychopharmacological effects take 1–2 h to kick in. Alternatively, however, based on the research reported by Troisi, Gabriel, and their colleagues, it could also be argued that comfort foods actually work by alleviating social isolation, rather than by improving mood per se. The movie clips chosen by Wagner et al.
presumably did not induce any kind of social isolation, and so may not have been especially relevant to assessing this particular claim. As is so often the case, then, more research is needed in order to know if, under what conditions, and for which specific populations, the consumption of comfort food really does provide some sort of measurable psychological benefit. The concept of comfort food is one that is familiar to most people. That said, what constitutes comfort food differs widely from one individual to the next, and from one culture to another. Questionnaire-based research suggests that men and women tend to reach for somewhat different comfort foods. Furthermore, what constitutes comfort food for younger people differs from the foods that are typically chosen by older individuals. Certainly, the clichéd notion that comfort foods tend to be calorie-dense is not always correct; or, as Romm puts it: "certain foods promise solace as much as fuel". That said, generally speaking, comfort foods are not characterized as tasting especially good, nor are they characterized by their 'healthfulness'. And nor, for that matter, do there appear to be any specific sensory characteristics that help to distinguish comfort from other classes of food. Indeed, given the wide variety of different foods that people describe as comforting to them, it would seem unlikely that there are any particular components that one can point to as having a physiological impact on whoever is consuming them. Rather, it would seem that certain foods take on their role as comfort food through association with positive social encounters in an individual's past. So, to the extent that comfort foods work, it is not so much a matter of lifting people out of a bad mood as of priming thoughts of prior positive social encounters when exposed to a belongingness threat. In other words, one important reason why people reach for comfort foods is because they feel lonely. That said, anything else could perhaps be swapped in the place of food; according to Shiri Gabriel, it could be "anything else that brings the same soothing sense of familiarity, like re-reading a beloved book or watching a favorite TV show". Consistent with this view, the participants in one study were shown to feel less lonely after simply writing about comfort foods. Finally, it should be remembered that comfort foods are actually consumed under a relatively heterogeneous range of environmental conditions in order to achieve a variety of different psychological outcomes, of which alleviating loneliness may be but one. And, as we have seen in this review, simple mood elevation, when we are in a bad mood, may not be one of them. It remains for future research to demonstrate whether comfort foods can induce some form of robust mood enhancement under other, more ecologically valid, conditions, and who amongst us might benefit most.
Everyone has heard of comfort foods, but what exactly are they, and what influence, if any, do they actually have over our mood? In this review, I summarize the literature on this important topic, highlighting the role that comfort foods play in alleviating loneliness by priming positive thoughts of previous social interactions, at least amongst those who are securely attached. The evidence concerning individual differences in the kinds of food that are likely to constitute comfort food for different sections of the population is also highlighted. Intriguingly, while most people believe that comfort foods elevate their mood, robust empirical findings in support of such claims are somewhat harder to come by. Such results have led to some influential headlines suggesting that the very notion of comfort food is nothing more than a myth. While this may be overstating matters somewhat, it is clear that many uncertainties still surround if, when, and for whom, the consumption of comfort food really does provide some sort of psychological benefit. This represents something of a challenge for all those marketers out there waiting to associate their products with the appealing notion of comfort food.
728
Soil seed bank dynamics and fertility on a seasonal wetland invaded by Lantana camara in a savanna ecosystem
Invasive alien plants have drawn attention in plant ecology, as they have emerged as one of the biggest threats to global biodiversity and ecosystem stability. Invasive alien plants can directly affect the species composition and structure of ecosystems, alter water availability and degrade soil quality. Although some empirical studies have been carried out on the impact of invasive species on soil properties, the majority lack comparison with native, non-invaded plant communities. Lantana camara L. is one of the major problematic IAP species across the globe. Although native to South America and the West Indies, to date it has naturalized in at least 60 countries. Despite campaigns and efforts to control L. camara, it remains one of the biggest problem invaders worldwide. L. camara produces large quantities of seed, which are consumed by a number of endozoochorous birds, causing rapid and long-distance spread. The seeds can stay in the soil for years and germinate once conditions are suitable, an essential prelude to persistence. Drake et al. emphasized the need for additional research focused on the general effects of individual IAP species on ecosystems. Indeed, a lack of knowledge regarding seed banks in southern African savannas has been acknowledged. Although studies have looked at the influence of fire and endozoochory on L. camara seed viability and germination, little has been done on soil seed bank dynamics. Understanding soil seed bank dynamics is important for IAP species management, because reinvasion of cleared areas has been observed to occur largely from the soil seed bank. Knowledge of seed bank size before planning any management activities on invasive species facilitates effective proactive management of important ecosystems such as wetlands. Wetland ecosystems perform several important functions, including water purification and control of flooding. In addition to occupying low-lying areas in the landscape, they are highly productive ecosystems due to an excess accumulation of resources. As a result, they are utilized extensively by both humans and animals, making them susceptible to disturbance-mediated invasions. In this regard, soil fertility and seed bank dynamics were investigated in a seasonal wetland invaded by L. camara. Specifically, the objectives of this study were to determine whether the abundance of L. camara seeds in the soil changes with increasing soil depth and whether soil nutrient levels differ under L. camara stands compared with open areas. The study was carried out at New Gada wetland, located at 17° 53′ 24′′ S and 31° 8′ 51′′ E, approximately 15 km from Harare, Zimbabwe. Common plant species on the wetland include Eragrostis enamoena, Nymphea sp., Scirpus raynalii, Scirpus sinutus, Typha latifolia and Juncus sp. The average rainfall ranges between 650 and 850 mm per annum, and mean temperatures are 9 °C in winter and 40 °C in summer. The wetland was stratified into upper, middle and bottom sections based on altitude, following the direction of water flow. Three L. camara patches, each at least 400 m2 in area and with a canopy dominated by the target species, were sampled in each stratum. Five 5 m × 5 m sampling plots were marked, one at the centre of each sampled patch and the other four located 5 m away in the four cardinal directions. A 5 m × 5 m control plot was randomly marked at least 20 m away from each L. camara patch in order to sample reference soil; a new location was chosen only if the first control location fell within another L.
camara infested patch or was not at least 20 m from any nearby infested patch. Four 5 m × 5 m plots were marked around the central control plot, mirroring the layout in the L. camara infested patches. Soil was collected at a depth of 15 cm using a soil auger from the centre and the four corners of the central plot only; the other four surrounding plots were used to sample the soil seed bank. The samples were thoroughly mixed to obtain a composite sample, and 500 g was taken from this mixture for laboratory analysis. A total of 18 soil samples were tested for NH4+, NO3−, resin-extractable P, pH, and exchangeable Ca, Mg, Na and K at the Department of Research and Specialist Services, Chemistry and Soil Research Institute in Harare, Zimbabwe. In the laboratory, soil samples were air dried at room temperature before analysis. Soil pH was obtained using the CaCl2 method. Exchangeable Ca, Mg, K, and Na were extracted using the aqua regia digestion method; the resulting compound was dissolved in concentrated HCl, filtered, and the solution diluted with distilled water. Using a spectrophotometer, total Ca and Mg were determined at 460 nm and 595 nm, respectively, and flame emission was used for K and Na. Total N was determined using the Kjeldahl method, and plant-available phosphorus was determined using the molybdenum-blue colorimetric method. Data on the soil seed bank were collected post-dehiscence in April. Soil seed bank samples were collected from four random positions in each 5 m × 5 m plot using a rigid steel quadrat, 30 cm × 30 cm in area, marked every centimetre to indicate depth. Samples were air-dried, spread onto large trays and the seeds picked out using forceps; firm seeds were counted, with firmness determined by pinching with forceps. Ninety-eight percent of the samples collected deeper than 15 cm contained no seed; therefore, we ended our assessment at 15 cm. Data were tested for normality and homogeneity of variance and transformed as necessary. An independent t-test was used to compare soil chemical properties between L. camara infested plots and the reference soil. Data on seed bank densities from the different soil depths and wetland strata were analysed using a balanced three-way ANOVA. Data are presented as mean ± standard error. Post-hoc multiple comparison of means was conducted using the Tukey honest significant difference test to detect differences among slope positions and depth levels. Tests were considered significant at p < 0.05. Soil from L. camara invaded patches had significantly higher levels of Ca, Mg, Na, and NH4+ than the reference soil on the wetland, by factors of 8.5, 4.8, 2.5 and 3.1, respectively. However, the reference soil had higher concentrations of NO3− and available P than the L. camara invaded patches, by factors of 3.7 and 1.6, respectively. There was no significant difference in K concentration between L. camara invaded patches and reference sites on the wetland. Soil pH was significantly lower in L. camara infestations than in the reference soil.
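The statistical workflow described above (independent t-tests for invaded versus reference soil chemistry, and an ANOVA followed by Tukey HSD for seed density across slope positions and depths) can be reproduced with standard libraries. The sketch below is only an illustration: the numbers are invented, the column names and factor levels are assumptions, and only two of the factors (slope and depth) from the study's three-way design are included.

import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)

# Hypothetical soil Ca concentrations (invaded vs reference plots); independent t-test.
ca_invaded = rng.normal(8.5, 1.0, 9)
ca_reference = rng.normal(1.0, 0.3, 9)
t_stat, p_val = stats.ttest_ind(ca_invaded, ca_reference)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# Hypothetical seed densities (seeds per m2) by slope position and depth class.
slopes = ["upper", "middle", "bottom"]
depths = ["0-5", "5-10", "10-15"]
rows = []
for s, base in zip(slopes, (150, 200, 260)):
    for d, frac in zip(depths, (1.0, 0.4, 0.1)):
        for _ in range(4):  # four replicate quadrats per combination
            rows.append({"slope": s, "depth": d,
                         "density": rng.normal(base * frac, 15)})
df = pd.DataFrame(rows)

# Two-factor ANOVA with interaction, followed by Tukey HSD on slope position.
model = smf.ols("density ~ C(slope) * C(depth)", data=df).fit()
print(anova_lm(model, typ=2))
print(pairwise_tukeyhsd(df["density"], df["slope"]))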
A total of 3576 seeds were sampled on the wetland, of which 26%, 32% and 42% were sampled in the upper, middle and bottom sections of the wetland, respectively. Consequently, seed density differed significantly with slope position, with the greatest densities at the bottom slope, followed by the middle slope and the least at the upper slope. A similar pattern was observed at each soil depth; as a result, there was a significant interaction between slope position and depth on seed density. Depth had a significant influence on seed density, with a consistent decline in seed density with increasing depth across all sections of the wetland. Most of the seeds were located in the top 5 cm of the soil profile, declining to almost zero at depths of 10 to 15 cm, with 98% of the samples collected below 15 cm containing no seed. Invasive alien plants have been shown to alter soil nutrient pools, salinity, moisture, and pH in many ecosystems. Consistent with the general trend of increased nutrient pools in topsoil in response to invasion reported in other studies, soil nutrient concentrations increased by a factor of 2.5 for Na, 3.1 for NH4+, 4.8 for Mg and up to 8.5 for Ca. These changes in the concentration of base cations may alter the distribution and concentration of base cations at soil exchange surfaces, potentially affecting cation exchange processes, soil pH, and soil organisms. In contrast, the concentrations of NO3− and available P were lower on L. camara invaded patches than on un-invaded patches, which suggests a higher plant uptake of these elements owing to the higher plant biomass on L. camara invaded patches. Additionally, invasive alien plants are also associated with reduced soil moisture content. Thus, considering that reduced soil moisture content can suppress nitrification rates, it is possible that the low soil NO3− concentration on L. camara invaded patches is due to moisture suppression of nitrification. However, we did not examine this possibility, which suggests a direction for future research. The high concentration of NO3− and available P on un-invaded patches suggests a high likelihood that L. camara will proliferate on the wetland, since systems with abundant resources, particularly those resources that limit plant growth, are susceptible to invasion. The concentration of K was similar between L. camara invaded patches and reference sites, implying that invasion did not affect the prior concentration of K in these soils. The soil pH was significantly lower in L. camara infestations than in un-invaded patches, contrary to the findings of other studies. This change in pH may have important implications for many soil properties, since soil pH is central to many soil processes. Small changes in pH have been shown to result in large effects on soil properties; thus, the effect of pH on soil properties should be considered even if the actual pH difference is rather small. This suggests that the differences in soil nutrient concentrations observed between L. camara invaded patches and un-invaded patches were probably due to differences in pH between treatments. For example, the low NO3− recorded on L.
camara invaded patches could be attributed to slowed nitrification due to a decline in pH. The concentration of base cations in most soils decreases when soil pH decreases. In contrast, we found that the concentration of base cations increased with invasion, suggesting that other factors that change with invasion, such as litterfall, nutrient uptake and soil moisture availability, could explain the observed differences in soil nutrient concentrations. Additionally, animals and frugivorous birds visiting fruiting L. camara plants may concentrate nutrients through faecal droppings, which may also lead to elevated nutrients in invaded patches. While changes in soil nutrient concentrations occur with exotic invasions, the general direction of change is unpredictable because findings have varied among studies. Similarly, in this study there was an increase, a decrease or no change in the concentration of soil nutrients with L. camara invasion. This suggests that the effects of IAP on soil properties depend on the element under examination. Alternatively, it could be because soil cores, as used in this study, are point samples that miss nutrient dynamics over time; thus, further research is required to establish the temporal variation in soil nutrient concentrations with invasion. The seed densities reported here, which ranged from 2 to 657 seeds m−2, are greater than those reported by Vivian-Smith et al. The high seed quantity recorded in this study can be attributed to L. camara fruiting at least twice a year due to the availability of resources on the wetland such as moisture, light and soil nutrients. Consistent with the findings of other studies, we found that seed density increased down the gradient of the wetland. It is likely that the soil seed bank is eroded down gradient from the upper sections of the wetland and ends up concentrating in the lower sections. As a result, soil seed bank density was higher in lower slope microsites that trap eroded soil than in the upper slopes. This may limit L. camara recolonization on the upper slopes, since seed loss is a limiting factor in the revegetation of eroded slopes. In this regard, management and eradication efforts to control L. camara invasion should focus on the lower slopes of the wetland. Seed density decreased with depth across all sections of the wetland, consistent with the findings of other studies; as a result, most of the seeds were located in the top 5 cm of the soil profile. We attribute the decline in seed density with soil depth to very little vertical dispersion of seed within the soil by soil transport or soil organisms. Alternatively, the fine texture of the soil likely reduced seed movement through the soil profile by reducing the action of percolating water, or did not facilitate seed penetration into the soil profile, as also suggested by Hopkins and Graham. The high seed density in the top 0 to 5 cm has important implications for the spread of L. camara in the wetland, since the buried seeds are likely to be brought to the surface with minimal levels of disturbance. Disturbance leads to increased germination of L. camara seeds from the soil seed bank because it increases resource availability such as light and nutrients. Additionally, although seed density declined with soil depth, the presence of seed at depths below 10 cm suggests that L. camara may have a persistent seed bank in this wetland. A persistent seed bank, together with the high concentration of nutrients, is likely to predispose the wetland to continued invasion and also suggests that L.
camara may be difficult to eradicate on this wetland. However, buried seeds face several mortality factors, including decay caused by bacterial and fungal microorganisms. Thus, the potential to germinate depends on the viability of the seed, which was not tested and which suggests a direction for future research. In conclusion, L. camara invasion altered the concentration of soil nutrients, but the direction of change depended on the nutrient being examined, with an increase, a decrease and no change all recorded in the concentrations of soil nutrients with L. camara invasion. Seed density decreased with soil depth and along the altitudinal gradient, which suggested a surface-soil persistent seed bank on the lower slopes of the wetland. Collectively, these results suggest that the wetland may still be predisposed to continued invasion by L. camara, particularly on the lower slopes, and thus these areas should be the focus of management and eradication efforts.
Knowledge of seed bank status and dynamics is crucial for effective management of desirable and undesirable plant species in natural ecosystems. We studied the soil seed bank dynamics and soil nutrient concentrations in Lantana camara invaded and uninvaded patches at New Gada wetland in Harare, Zimbabwe. Soils were tested for pH, ammonium (NH4+), nitrate (NO3−), phosphorus (P), calcium (Ca), magnesium (Mg), sodium (Na) and potassium (K). We also assessed the soil seed bank density to a depth of 15 cm over varied altitudinal zones. Soil nutrient concentrations increased by a factor of 2.5 for Na, 3.1 for NH4+, 4.8 for Mg up to 8.5 for Ca with L. camara invasion. In contrast, L. camara invaded patches had a lower concentration of NO3− and P than uninvaded patches. Seed density significantly declined with both soil depth and slope with high seed density in the upper surface soil of the lower slopes of the wetland. The elevated soil nutrient concentrations along with a high soil seed bank density suggest that the wetland may still be susceptible to continued invasion by L. camara particularly on the lower slopes of the wetland. Thus, management and eradication efforts should focus on the areas that receive or trap the eroded soil seed bank.
729
The lost stone — Laparoscopic exploration of abscess cavity and retrieval of lost gallstone post cholecystectomy: A case series and review of the literature
Laparoscopic cholecystectomy is the gold standard treatment for symptomatic gallstones. One of the common complications of LC, which is less discussed in the literature, is gallbladder perforation, the incidence of which varies from 1.3% to 40%. Gallbladder perforation can cause gallstone spillage and, in most cases, unsuccessful retrieval of the stones. Most spilled stones remain clinically asymptomatic; however, adverse events have been reported in 0.04% to 19% of cases, with intra-abdominal abscess formation the most prevalent complication. Today, the use of minimally invasive techniques is growing and has expanded well beyond traditional surgical cases. In this article, we describe a novel technique to retrieve lost gallstones via laparoscopic exploration of an abscess cavity and review the relevant literature. The research work has been reported in line with the PROCESS criteria. A 74-year-old male presented to the emergency room with six months of vague right upper quadrant pain that had been exacerbated during the week prior to his arrival. His past medical history was remarkable for ischemic heart disease, chronic obstructive lung disease, diabetes mellitus, hypertension and an LC performed ten years earlier. Radiologic studies confirmed the presence of an abdominal abscess between the liver and the abdominal wall. Under ultrasound guidance the area of the abscess was marked, and the patient was taken to the operating room for laparoscopic exploration of the abscess cavity by our staff. Under general anesthesia the abscess cavity was drained and irrigated using a percutaneous drain. A 5-mm port was inserted parallel to the drain, and exploration of the abscess cavity revealed bile stones. A 10-mm port was then inserted into the abscess cavity parallel to the previous port. The abscess cavity was irrigated and the stones were retrieved using laparoscopic forceps. During the procedure there was an air leak into the peritoneal cavity, which was drained using a Veress needle. At the end of the procedure a drain was left in the abscess cavity. The patient received 24 h of prophylactic antibiotics and was discharged home two days after the procedure. At a follow-up visit in the clinic the drain was removed, and he is now 4 years post-surgery and symptom free. A 41-year-old woman presented to the ER with one month of vague RUQ pain. Her medical history was remarkable for an LC three years before her current admission. Radiologic studies revealed a large abscess close to the liver, adherent to the abdominal wall and containing two gallstones. An ultrasound-guided percutaneous drain was placed and, owing to technical problems, the patient was scheduled for explorative laparoscopy by our staff during the following week. A week later, under general anesthesia, a 5-mm port was inserted into the abscess cavity parallel to the drain. Laparoscopic exploration of the abscess cavity was performed using the 5 mm and 10 mm ports; the cavity was irrigated, the gallstones were retrieved, and a drain was left in the abscess cavity. The patient was discharged home after 48 h. She returned seven days after the procedure with a clinical and radiological picture of an intra-abdominal abscess adjacent to the previous one. A percutaneous drain was inserted and the patient was discharged home. At the follow-up visit in the clinic, both drains were removed and the patient remained asymptomatic. In contrast to open cholecystectomy, where the entire operative field is fully visualized and spilled stones can immediately be retrieved, in the
laparoscopic era the chances of misdiagnosis or incomplete retrieval of spilled stones are much higher. Spilled gallstones can lead to numerous long-term complications. Zehetner et al., in their review of the literature, found 44 different types of complications due to spilled gallstones, with abdominal wall abscess and intra-abdominal abscess the most frequent. Peritoneal gallstones create an inflammatory process that can lead to partial or complete reabsorption of the stone, abscess formation, granulomatous reaction and even erosion into other abdominal organs. Infected stones, which are more likely in the case of pigmented stones, intensify this process. Unfortunately, perforation of the gallbladder during LC and, especially, spillage of gallstones are poorly reported in the operation note. This can cause a delay in diagnosis, especially in cases that present several years after the operation. A late complication of gallbladder perforation should therefore be considered in any patient who has had an LC in the past. The treatment of abscess formation due to a lost gallstone requires drainage and complete stone removal. The stone can be removed endoscopically, percutaneously or by open surgery. The advantages of the minimally invasive technique over the open one include safe and controlled exploration of the abscess cavity and the avoidance of unnecessary exploration of the peritoneal cavity. In a review of the literature we found two techniques for abscess cavity exploration that are similar to ours: the authors used a nephroscope and a combined percutaneous approach with a choledochoscope. Both relied on endoscopic rather than laparoscopic techniques, unlike the approach described here. Nowadays, the use of minimally invasive techniques, which include a camera port and working ports, has expanded to different cavities in the human body. We describe a novel technique, not previously described, that uses laparoscopic equipment to explore the abscess cavity and retrieve the lost gallstone. The use of laparoscopic equipment enables controlled inflation of the abscess cavity, which in turn allows a meticulous exploration of the cavity for stones and stone fragments. Our technique, performed by a skilled minimally invasive surgeon, enables a safe and thorough exploration of the abscess cavity; this exploration extracts any gallstone fragment that could act as a nidus for continuing infection. Lost gallstones can cause long-term complications even several years after surgery, and proper documentation can shorten the time to diagnosis. Our novel technique enables meticulous exploration of the abscess cavity using laparoscopic equipment and adds another treatment option for the minimally invasive surgeon when draining an abscess caused by a lost gallstone. Uri Kaplan, Gregory Shpoliansky, Ossama Abu Hatoum, Boaz Kimmel and Doron Kopelman have no conflict of interest. Uri Kaplan, Gregory Shpoliansky, Ossama Abu Hatoum, Boaz Kimmel and Doron Kopelman have no financial ties to disclose. As my institution's IRB policy states that studies of fewer than four subjects are not considered Human Research, this submission is exempt from IRB and Ethics approval. Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request. UK designed the report, reviewed the literature, and drafted the manuscript. GS performed the study conception and design. OAH and BK participated in designing the report. DK carried out the surgical
procedure and participated in critical revision. Not commissioned, externally peer reviewed.
Background: Laparoscopic cholecystectomy (LC) is considered the gold standard operation for symptomatic gallstones. Gallbladder perforation occurs in 6–40% of operations. It can lead to spillage of gallstones into the abdominal cavity, with possible long-term complications. We report two cases in which a unique laparoscopic technique was used to explore the abscess cavity and retrieve lost gallstones without penetrating the peritoneal cavity. Case presentation: We report two cases of peri-hepatic abscess treated with laparoscopic cavity exploration, using 5 mm and 10 mm ports, to retrieve lost gallstones. This was done without entering the peritoneal cavity. Discussion: Today, minimally invasive techniques are used in a wide variety of surgical cases. We report a novel technique, using laparoscopic skills, to drain abscesses caused by lost gallstones after LC without entering the peritoneal cavity. Using minimally invasive surgical techniques to explore abscess cavities not only helps to extract the cause of the abscess but also prevents another operation in the abdominal cavity. Conclusion: Laparoscopic exploration of an abscess cavity is a feasible and safe technique for treating long-term complications of gallbladder perforation after LC.
730
Space-time information analysis for resource-conscious urban planning and design: A stakeholder based identification of urban metabolism data gaps
The notion of urban metabolism has inspired new ideas about how cities can be made sustainable and it has fostered quantitative approaches to the analysis of urban resource flows.UM refers to the processes whereby cities transform raw materials, energy, and water into the built environment, human biomass, and waste.UM can be traced back to Marx in 1883, who used the term metabolism to describe the exchange of materials and energy between society and its natural environment.In 1965 Wolman re-launched the term as he presented the city as an ecosystem, and later others also used the term UM in representing a city as an organism.Since Wolman’s early study of urban metabolic processes, two distinct quantitative UM approaches have developed that aim to describe and analyse the material and energy flows within cities.One describes the UM in terms of solar energy equivalents.Related school of scholars emphasizes the earth’s dependence on the sun as an energy source and the qualitative difference of mass or energy flows.The second and most widely used approach, is associated with the fields of industrial ecology and engineering.Related research largely consist of empirical studies that account for the energy and material/mass flows of a city, using methods such as material flow analysis, mass balancing, life cycle analysis and ecological footprint analysis.Multiple scholars have argued that the latter type of UM analyses, the flow quantifications associated with the mainstream UM approach, are useful for urban planning and design.However, these authors also argue that major efforts are still needed to make UM analyses useful for informing urban planning and design aiming at optimization of urban resource flows.Indeed, only three examples of application of UM for designing more sustainable urban infrastructures are referred to in literature, of which just one is a peer-reviewed article.1,The only recent scientific contributions on this topic all discuss the planning support system developed in the BRIDGE project.In professional literature, some other recent examples can be found.In the Netherlands, research on the resource flows of Rotterdam was conducted and used as a basis for urban design strategies, in the context of the International Architectural Biennale 2014 Urban by Nature.In the Circular Buiksloterham project in Amsterdam an ‘Urban Metabolism Scan’ was performed and used as foundation for a vision for the Buiksloterham area, including site-specific technical interventions and a design concept.So, although the theoretical potential of UM analysis for urban planning and design is increasingly addressed in the scientific literature, scientific reports that illustrate how this potential can be realised with practical implementation remain limited thus far.Possibly, UM analyses are still of limited use for urban planning and design because they are performed on a scale level that does not match urban planning and design practice.The UM is usually analysed for a period of a year on city or regional scale; analyses on a more detailed level are said to be hampered by lack of data.Such large-scale analyses, however, do not reveal which metabolic processes and functions are operating at various spatial and temporal scales.Yet, planners and designers need such information to decide upon the appropriate interventions to realize a resource-conscious strategy.In other words, they need this information to inform their planning and design decision-making regarding interventions aimed at urban climate 
adaptation, climate mitigation and/or resource efficiency.To be useful for urban planners and designers, UM analyses should thus provide detailed and spatial and temporal explicit data on the scale at which these practitioners work.Therefore, the study presented here aims to answer the following questions: “at which spatial and temporal resolution should resource flows be analysed to generate results that are useful for implementation of urban planning and design interventions?,and “is UM analysis at this desired level of detail currently hampered by a lack of data?”.To answer these questions the “Space-time Information analysis for Resource-conscious Urban Planning” tool was developed and applied in a case study of the city of Amsterdam, the Netherlands.The SIRUP tool enables an analysis on two levels: I) assessing on which level of detail in space and time stakeholders need information on resource flows to inform urban planning and design decision-making aimed at developing resource-conscious strategies, and II) evaluating whether existing data can provide the information needed or that there is a data gap.The qualitative tool facilitates information and knowledge sharing and discussion between stakeholders.Stakeholder involvement in UM research is essential to leverage availability of and access to urban resource data and it allows identifying the information needs of urban planning and design practitioners.The “Space-time Information analysis for Resource-conscious Urban Planning” tool is based on the work of Vervoort et al., who developed the tool Scale Perspectives to elicit societal perspectives and generate dialogue on governance issues.Their tool consists of a frame with pre-defined spatial and temporal scales in which stakeholders can outline the relevant scales for a particular governance issue.For the SIRUP tool, this frame is adapted for the purpose of identifying on which spatiotemporal resolution stakeholders need information on resource flows and for assessing whether existing data can provide this information on the resolution needed.The SIRUP tool is applied in four steps.These steps aim to generate an inventory of UM interventions, determine the information needed for implementing each of these interventions, describe the spatiotemporal resolution of existing data relevant for the intervention and identify whether the resolution of identified data can satisfy the stakeholders’ intervention information needs.The SIRUP tool was applied in a case study of Amsterdam.As part of this case study, stakeholders were involved that are engaged with urban planning and design decision-making aimed at developing resource-conscious strategies for the city of Amsterdam.The stakeholders comprised researchers, environmental managers from utilities, landscape architects and urban planning & design practitioners.Eleven of these stakeholders were interviewed, using semi-structured interviews, and thirteen stakeholders participated in a workshop.Step I and II of the SIRUP tool were used to identify on which spatiotemporal resolution stakeholders need information on resource flows.In step I, the stakeholders were asked to describe a resource-conscious intervention that they envision to be implemented in Amsterdam.Participants were also asked to specify in the SIRUP frame at which spatial scale level the intervention would take places and which time frame they envisioned for implementation.In step II, participants were asked to specify the information needed for implementing the intervention 
mentioned and to indicate the required spatial and temporal resolution of this information in the SIRUP frame.The interviewer or workshop facilitator had to ensure that participants described the information on resource flows that is necessary to enable the intervention.Pen-and-paper format was used because this allows for greater flexibility than a digital setting.After the workshop, all contributions were digitalized and labelled to enable the selection of interventions that are within the scope of the research.The interventions were labelled according to the type of intervention and the resource flow for which information is needed.We limited the research to spatial and technical interventions aimed at urban climate adaptation, climate mitigation and/or resource efficiency, focussing on energy and water flows because these are strongly related to such interventions.In step III a desk study was conducted to identify which data exist on Amsterdam’s energy and water flows and to compose an overview of these data, including a description of the spatiotemporal resolution of these data.In the study, data portals, databases and reports were considered that contain open or restricted data on Amsterdam’s energy and water flows.Expert consultation was used to identify relevant datasets and to obtain access to restricted datasets.Metadata was described for all datasets obtained, using a format that was based on the ISO 19115 and the INSPIRE metadata standards to ensure compatibility with other datasets in the world.The mandatory elements of the metadata standard used were, amongst others, the level of detail of the data in the spatial dimension and in the time dimension, i.e. the spatial and temporal resolution of the data.Moreover, it was required to state the limitations and rules on accessing, use and publishing of the existing data to indicate whether the data is open or restricted.Based on the metadata-description on spatial and temporal resolution, the datasets were placed in the SIRUP frame.In the final step, step IV, the data inventory was used to analyse whether the identified data can satisfy the stakeholders’ information needs.For each intervention it was evaluated whether the attributes of each data set were relevant for the information needed.Then, the relevant data and the information needs for the intervention were combined in one SIRUP frame and arrows were drawn from the dataset to the information needed.When the spatiotemporal resolution of information needs are equal to the resolution of existing data, these exact matches were indicated by a circle in the SIRUP frame.Subsequently, the size and dimension of the arrows were analysed to assess the presence and severity of data gaps.An arrow either indicates a two-dimensional data gap, when both the spatial and temporal resolution of existing data are insufficient, or it indicates a one-dimensional data gap, when either the spatial or the temporal resolution of existing data is lower than required.A two-dimensional data gap is indicated by arrows pointing towards the lower left corner.Arrows pointing downwards or to the lower right corner indicate a one dimensional data gap, in the spatial dimension only.The reason for this is that the arrows indicate that the temporal resolution is equal to or higher than needed.Because aggregation from higher temporal resolution to lower resolution is possible without an information loss, the required temporal resolution can be derived from this information.For example, hourly totals can be 
derived from data on minute level by summing all available minute data points.Arrows pointing to the left or higher left corner represent a gap in the temporal dimension only, because the spatial resolution of the data is sufficient or higher than needed.There is no data gap when existing data has either the right resolution in one dimension and a higher resolution in the other or a higher resolution in both dimension, implying that data can be aggregated to get to the required resolution.These matches are indicated by arrows pointing up, to the right, or diagonally in the upper right direction.In this case study a total of 52 different interventions were suggested by the stakeholders during the interviews and workshop.We selected fourteen of these interventions for further analysis, namely the spatial and technical interventions for which information on energy and/or water flows is required.The selected interventions have a total of 26 information needs that relate to energy and/or water flows.These information needs were categorized into four different clusters to facilitate interpretation: I) piped water, including waste water and drinking water; II) non-piped water, including groundwater, surface water, storm water and rainwater; III) energy demand; and IV) energy supply.Results show that out of the five information needs related to piped water, two can be met by existing open data and one by restricted data.Fig. 2a shows that the information needs that were expressed for piped water are scattered over the SIRUP frame.However, no information needs appear in the lower left corner of the frame, up to 12 h and district, nor at the highest scale levels, that is metropolitan region and five years and higher.The SIRUP frame of existing data, on the other hand, shows a different pattern.In terms of temporal resolution, these data fall in the range ‘minutes’ to ‘one year’.In terms of spatial resolution, open data ranges from the scale of a small neighbourhood to the metropolitan region.Additionally, one restricted access database provides drinking water quantity data on building level.No piped-water data has been identified that has both high temporal and spatial resolution.When the resolution of information needs and existing data are compared, it appears that two information needs can be met: I) the quantity and II) the quality of waste water that enters Amsterdam’s waste water treatment plants at seasonal up to yearly level.For the remainder of the data gaps, it shows that the size and dimension of the gaps depend on which existing data source is considered.The drinking water data gaps, for instance, are either two dimensional, using dataset 3, or with a spatial dimension only, when using the open data from source 4 or 6.Yet, with access to restricted datasets, there is a data gap with a temporal dimension only.Nevertheless, when using the restricted data, the size of the gap in the temporal dimension remains the same as when dataset 3 is used, from ‘one year’ to ‘month’.On the other hand, restricted data can close the data gap regarding waste water quantity at municipal and minute level entirely.Regarding waste water quality, there is a two dimensional data gap when using open data and a data gap in the spatial dimension only when using restricted data.The size of the gap in the spatial dimension remains equal when restricted data can be used.In the case of non-piped water, the spatiotemporal resolution of existing open data meets the resolution of six out of the twelve information 
needs.Regarding the resolution of these twelve information needs, a cluster of six shows on the right side of the field at the temporal scales ‘quarter of a year’ till ‘five years’.Of the remaining six information needs, four appear in the lower left corner of the SIRUP frame, delineated by week and district.Two of these information needs are rainwater related, the other two relate to groundwater quality and quantity.For these four information needs a data gap exists, because existing rainfall and groundwater data have a lower resolution than required.In the case of groundwater quantity, this is an exception because the resolution of the existing data − small neighbourhood to small district at seasonal level − is sufficient for the other three groundwater quantity related information needs.By contrast, the resolution of groundwater quality data, which is the metropolitan region and one to four year resolution, is insufficient for the two related information needs.Likewise, the resolution of rainfall data − in between municipality and metropolitan region for a day to half a year − is inadequate for all rainwater related information needs.The resolution of existing surface water data is sufficient to meet the three related information needs.Overall, six data gaps for non-piped water remain, including two data gaps in both dimensions, one data gap with a temporal dimension only and three gaps with a spatial dimension only.Out of the five information needs on energy demand detected, one can be met by existing restricted data.The information need that can be met, electricity demand of a neighbourhood at yearly basis, is the only one that is not part of the cluster in the lower left corner of the SIRUP frame, delineated by week and district.Existing data sources on Amsterdam’s energy demand, on the contrary, primarily provide data on yearly totals, within a spatial range of building to country level.The exception to this is a restricted dataset that provides data on the electricity demand of a streetlight per day.Accordingly, for all four information needs in the high-resolution cluster there is a data gap.Although these data gaps are similar because they have a temporal dimension only, they differ in the size of the gap.The data gap is smallest for household cooling demand, from one year to month resolution, whereas the data gap regarding total urban electricity demand is more substantial, from one year to week up to from one year to one hour.Results for energy supply show that out of the four defined information needs, one can be met by existing open data.All four information needs appear on the lower half of the SIRUP field, that is a spatial resolution of district level or higher.In terms of temporal resolution the information needs cover a larger range, namely from ‘seconds’ to ‘one year’.When the temporal resolution of existing energy supply data is considered, it appears that data is primarily available for yearly totals.The exceptions to this are a restricted database that provides electricity supply data on a monthly temporal resolution and open data on drinking water cooling supply with a half yearly resolution.In terms of the spatial resolution of existing data, findings show that open data with a resolution as high as the building level exists.As a result, energy supply related data gaps have a temporal dimension only.When only open data is considered, the gap for household cooling supply is the smallest, from one year to month resolution.The data gaps regarding total urban 
electricity demand range from one year to one hour or minutes.These gaps reduce in terms of the number of temporal scale levels to be bridged when there is access to the restricted database of the electricity supply of the waste-to-power plant.The information need that can be met, potential yearly electricity supply by PV panels on neighbourhood level, relates to the same intervention for which the energy demand related information need can be met, namely “PVs on roofs for public lighting”.To inform resource-conscious urban planning and design, information on water and energy is required on a higher spatiotemporal resolution than the resolution of current UM analyses.For 12 out of 14 interventions, stakeholders require information on a higher level of detail than the city/region scale and the annual time interval at which UM analyses are currently performed.In detail, three of the 26 expressed information needs are on the city-annual resolution.Ten information needs have either only a temporal resolution that is higher than annual or only a spatial resolution that is higher than city level, including six information needs on the neighbourhood-annual level.Another 13 information needs are on a high resolution in both the temporal and spatial dimension.The temporal resolution of these information needs is within the range seconds to week and the spatial resolution is between building and district level.The required spatiotemporal resolution appears to be linked to the resource flow targeted by an intervention.The resolution of water related information needs is scattered across the SIRUP frame, including a large range of both low and high spatiotemporal levels, whereas energy related information needs are on a high spatiotemporal resolution.That water related information needs cover a large range of scale levels could be indicative for current developments towards total water cycle management, also known as sustainable or integrated urban water management.Such an integrated management approach requires an understanding of the dynamics of urban water flows and the processes that affect these flows at multiple scales in time and space.These different scales are required because of the complexity of the urban water cycle, which includes sewerage, drinking water and drainage as well as surface water runoff, open water bodies and rainwater.All of these flows have different dynamics in space and time and therefore the level of spatiotemporal detail at which information is needed, varies with the water flows targeted by an intervention.This can be illustrated by comparing the interventions “Water square” and “Dike reinforcement”.High spatiotemporal resolution rainwater data up to small neighbourhood − hourly level, is needed for the storm water management intervention “Water square”.The information need for “Dike reinforcement”, on the other hand, covers the range months to five years, and large district up to provincial level.The relatively high spatiotemporal resolution of rainwater related information needs is known to be prerequisite to assess and predict urban runoff behaviour.Likewise, for urban flood management it is essential to understand the dynamics of surface water flows at different spatiotemporal scales, including the scale of the catchment level and a long-term perspective.In contrast, energy-related interventions require information on a high spatiotemporal resolution.Unlike water, decentralisation of energy services is becoming more frequent.This shows, for example, in the 
increasing penetration of renewable energy in the urban energy system − a consequence of current efforts to decarbonize the urban energy infrastructure.The two-way flow of energy that comes with this decentralization and the periodicity in both energy demand and in generation of renewable energy, call for a design and management of energy infrastructure that avoids negative impacts on the network, such as fluctuations in voltage or power output.Accordingly, to enable an optimal management of energy generation, distribution and storage, highly detailed data is needed about when and where energy is generated as well as when and where it is required.This is evident for the intervention “Regional smart grid” that aims to optimize energy management on the metropolitan scale.To implement this intervention, energy demand and supply data is needed on a spatial resolution of the building up to the district level and a temporal resolution of seconds to one hour.The inventory of existing data reveals another difference between water and energy flows, namely, in Amsterdam, high-temporal resolution data is available for water but not for energy.This gap is partly due to diverging data protection policies of water and energy utilities.Our energy data providers indicated that a strict data sharing policy is applied.One of the stakeholders indicated that this is due to their high stakes in the energy market—data have high commercial value in a competitive open energy market.As water utilities operate in a natural monopoly, their data have less commercial value.Further investigation is needed in order to facilitate a more in-depth explanation of these findings.With regard to the inventory of existing data in Amsterdam it should be noted that more sources might exist, especially databases with restricted data.Moreover, the scope of the present research was limited to an analysis of existing data on its spatiotemporal resolution.Nevertheless, the usefulness of data for urban planning and design may also be affected by the accuracy of the data and the spatiotemporal extent of the data, i.e., the geographical area and time period that the data cover.When aiming to compare different datasets on their information value, the SIRUP tool can also be employed to plot datasets according to their spatiotemporal extent.The findings show that the majority of resource-conscious interventions envisioned in Amsterdam require information on a more detailed spatial and temporal resolution than existing data can provide.Data gaps are absent for four out of the 14 interventions, including three water-related interventions: “Dike reinforcement”, “Recovery of protein from sewage”, and “Water-robust vital infrastructure”.The intervention “PVs on roofs for public lighting” is the only energy-related intervention without a data gap.One should keep in mind that the presence and/or size of data gaps of an intervention can depend on the objective of the stakeholder.The information need of the designer for the intervention “PVs on roofs for public lighting”, for example, was related to a hypothetical demand-supply matching.Namely, supplying a yearly amount of energy by PV panels in a neighbourhood that is equal to the amount used by the streetlights in that area on a yearly basis.When the objective would have been to implement PVs to make the neighbourhood self-sufficient in terms of its electricity for public lighting, a more detailed insight in the temporal differences in electricity supply and demand would be needed to design a 
reliable energy system.In that case, there would have been a data gap.Overall, the findings seem to imply that water-related interventions face fewer data barriers for implementation compared with energy-related interventions, such as “Parking garage as battery” and “Regional smart grid”.For energy-related interventions, the combination of high-resolution information needs and a lack of data may impair implementation.It must be emphasised that there are possibilities to close identified water and energy-related data gaps.In both fields technological advancements in sensor technology and modelling are likely to generate more high-resolution data in the future.For households monitoring, instalment of water smart-meters could yield water consumption data on the building level on a real-time or near real-time basis.These high resolution water demand data can inform both the planning of drinking water and waste water infrastructure, such as the intervention “More concentrated sewage flows”.Smart meters could also provide high-resolution energy demand data."The privacy issues that come with the sharing and use of high spatial-temporal resolution smart meter data can be minimized when appropriate data selection and ‘privacy friendly' processing techniques are applied.For non-piped water too, new high-resolution monitoring systems are being developed.The potential of X-band radar and the microwaves that serve mobile networks as new sources for measuring precipitation on a high resolution is currently being researched.Furthermore, modelling techniques are advancing fast for both water and energy to provide high time resolution and high spatial resolution data.To assess the cooling supply of drinking water for the intervention “Usage of cold from drinking water for cooling”, for example, a model that calculates the temperature change of drinking water in the supply system could be used.Although models are a simplification of reality and are therefore not fully accurate, the information value of these data may be accurate enough to inform the implementation of interventions.Further investigations are needed to understand which data accuracy is necessary for different resource-conscious interventions and UM analyses, and which data sources can provide data at this level of accuracy.Besides data accuracy, the potential impact of an intervention on the UM as well as the cost-effectiveness of an intervention are two other relevant aspects to consider when aiming to evaluate the feasibility and urgency of closing the different data gaps.These aspects were beyond the scope of this paper.Finally, results indicate that a fine scale approach to UM analyses alone will not suffice to disclose UM knowledge for urban planning and design practice.There is not ONE scale level of analysis that will serve all information needs.Rather than pursuing a linear, fine-scale approach to UM analysis, we suggest that a multi-scale, systemic approach to UM analysis is needed to provide the required information on resource flows from fine to coarse scale levels.The need for a systemic understanding of urban resource flows is supported by stakeholders expressing for half of the interventions that, next to insights on resource flows, information about the urban infrastructure is needed too.One of the stakeholders explicitly indicated that insight in the urban infrastructure is essential to evaluate the effects of an intervention on the functioning of the entire system.Moreover, this systemic approach should also account for 
social and ecological processes that influence the actual resource flows of cities. After all, a better understanding of the multi-scale processes of human-environment interaction that affect these flows is essential for sustainable resource management. Stakeholder input revealed that insight into biophysical processes underlying the urban system, such as solar irradiation, rainfall and the infiltration capacity of the soil, is needed for implementing resource-conscious interventions. Indeed, it has been suggested that these processes should be accounted for to improve the usefulness of UM analyses for urban planning and design that contributes to the sustainable management of resource flows. UM analyses should describe the complexity of urban systems more accurately, by linking physical, quantitative knowledge of resource flows to its interaction with environmental, social and economic conditions. To disclose UM knowledge for resource-conscious urban planning and design, it is therefore of key importance to develop a systemic understanding of urban resource flows. This systemic understanding should provide insight into the social and ecological processes that affect resource flows and into the interlinkages between processes and resource flows at different spatial and temporal scale levels. In conclusion, our results suggest that there is no single scale level of UM analysis that will generate meaningful results for urban planning and design aimed at optimizing resource flows. The relevant scale of analysis appears to depend on the nature of the intervention and the resource flow targeted. The findings do show that the current resolution of UM investigation, at city level and per year, is of insufficient detail to provide the information needed to inform resource-conscious urban planning and design decision-making. Moreover, the spatiotemporal resolution of existing data is a limiting factor for performing UM analyses that provide useful information for the implementation of resource-conscious interventions. The SIRUP tool proposed in this study proved to be a practical instrument for identifying these data gaps, and it may also prove helpful for understanding UM information needs and data gaps in other cities. Rather than performing conventional UM analyses at a finer, more detailed scale level, other types of UM analysis are required to disclose UM knowledge for urban planning and design. Further research is needed to investigate which types of analysis can provide a systemic understanding of resource flows and are tailored to inform urban planning and design aimed at optimizing resource flows. In such research, the accessibility of UM analyses for urban planners and designers should have a central position.
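The data-gap classification applied in Steps III and IV of the SIRUP tool, described in the methods above, can be summarised in a few lines of code: a gap exists in a dimension only when the existing dataset is coarser than the information need, because finer data can always be aggregated to a coarser level. The sketch below is illustrative only; the scale ladders and the function name are assumptions chosen to mirror the levels mentioned in this paper, not part of the published tool.

```python
# Illustrative sketch of the SIRUP gap classification (Steps III-IV): compare
# the spatial and temporal resolution of an information need with that of an
# existing dataset. Scale ladders run from fine to coarse and are assumed
# approximations of the SIRUP frame, not the published levels.

SPATIAL = ["building", "street", "small neighbourhood", "neighbourhood",
           "small district", "district", "large district", "municipality",
           "metropolitan region", "province", "country"]
TEMPORAL = ["seconds", "minutes", "hour", "12 hours", "day", "week", "month",
            "quarter", "half year", "year", "five years"]


def classify_gap(need, data):
    """Return the gap type for a (spatial, temporal) need versus a dataset.

    Mirrors the arrow directions in the SIRUP frame: 'two-dimensional gap',
    'spatial gap', 'temporal gap', or 'no gap' (equal or finer resolution,
    which can be aggregated without information loss).
    """
    need_s, need_t = SPATIAL.index(need[0]), TEMPORAL.index(need[1])
    data_s, data_t = SPATIAL.index(data[0]), TEMPORAL.index(data[1])

    spatial_gap = data_s > need_s    # dataset spatially coarser than required
    temporal_gap = data_t > need_t   # dataset temporally coarser than required

    if spatial_gap and temporal_gap:
        return "two-dimensional gap"
    if spatial_gap:
        return "spatial gap"
    if temporal_gap:
        return "temporal gap"
    return "no gap"


# Example: rainfall data at municipality/day cannot serve a water-square
# design that needs small-neighbourhood/hourly information.
print(classify_gap(("small neighbourhood", "hour"), ("municipality", "day")))
# -> 'two-dimensional gap'
```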
The research presented here examined at which spatial and temporal resolution urban metabolism should be analysed to generate results that are useful for implementation of urban planning and design interventions aiming at optimization of resource flows. Moreover, it was researched whether a lack of data currently hampers analysing resource flows at this desired level of detail. To facilitate a stakeholder based research approach, the SIRUP tool – “Space-time Information analysis for Resource-conscious Urban Planning” – was developed. The tool was applied in a case study of Amsterdam, focused on the investigation of energy and water flows. Results show that most urban planning and design interventions envisioned in Amsterdam require information on a higher spatiotemporal resolution than the resolution of current urban metabolism analyses, i.e., more detailed than the city level and at time steps smaller than a year. Energy-related interventions generally require information on a higher resolution than water-related interventions. Moreover, for the majority of interventions information is needed on a higher resolution than currently available. For energy, the temporal resolution of existing data proved inadequate, for water, data with both a higher spatial and temporal resolution is required. Modelling and monitoring techniques are advancing for both water and energy and these advancements are likely to contribute to closing these data gaps in the future. These advancements can also prove useful in developing new sorts of urban metabolism analyses that can provide a systemic understanding of urban resource flows and that are tailored to urban planning and design.
731
Data on the removal of fluoride from aqueous solutions using synthesized P/γ-Fe 2 O 3 nanoparticles: A novel adsorbent
High concentrations of fluoride are toxic and cause digestive disorders, fluorosis, endocrine, thyroid and liver damage, and also decrease growth hormone . In addition, fluoride influences the metabolism of some elements such as calcium and potassium . Fluoride must therefore be properly reduced before discharge to water bodies. Adsorption can be considered an effective method for the removal of fluoride . The applicability of P/γ-Fe2O3 nanoparticles for fluoride removal was reported. The Fourier transform infrared (FTIR) spectrum of the P/γ-Fe2O3 nanoparticles is given in Fig. 1. Fig. 2 shows a schematic illustration of the synthesis of P/γ-Fe2O3 nanoparticles. The functional groups present in the P/γ-Fe2O3 nanoparticles before and after fluoride adsorption are given in Table 1. The estimated adsorption isotherm and kinetic parameters are presented in Table 2. The adsorption experiment was conducted in batch mode using the one-factor-at-a-time method, that is, keeping one factor constant and varying the other factors to obtain the optimum condition of each variable. First, a stock solution of fluoride was prepared with distilled water, from which the other fluoride concentrations were prepared. The stock solution was made by dissolving 2.21 g NaF in 1000 mL distilled water. A known mass of adsorbent was added to 1 L of the water samples containing different concentrations of fluoride. The pH of the water sample was adjusted by adding 0.1 N HCl or NaOH solutions. The removal efficiency was determined by varying the different adsorption process parameters, namely pH, contact time, initial fluoride concentration and P/γ-Fe2O3 nanoparticle dosage. To create optimal conditions, the solutions were agitated with an orbital shaker at a predetermined rate. After each experimental run, the solution was filtered and the filtrate was analyzed for the residual fluoride concentration. The initial and residual fluoride concentrations in the solutions were measured with a UV–vis recording spectrophotometer at an absorbance wavelength of 570 nm . In this research, the influence of pH, contact time, initial fluoride concentration and P/γ-Fe2O3 nanoparticle dosage on the removal efficiency was investigated. The highest removal efficiency was obtained at pH 7, an adsorbent dosage of 0.02 g/L, an initial fluoride concentration of 25 mg/L and a contact time of 60 min. The optimum conditions of pH 7, adsorbent dosage 0.02 g/L, contact time 30 min and initial fluoride concentration 25 mg/L gave an efficiency of 99%. An important physicochemical aspect of the evaluation of adsorption processes is the adsorption isotherm, which provides a relationship between the amount of fluoride adsorbed on the solid phase and the concentration of fluoride in the solution when both phases are in equilibrium . To analyze the experimental data and describe the equilibrium state of adsorption between the solid and liquid phases, the Langmuir, Freundlich, and Temkin isotherm models were used to fit the adsorption isotherm data. Several kinetic models have been applied to examine the controlling mechanisms of adsorption processes, such as chemical reaction, diffusion control, and mass transfer . Three kinetic models, namely the pseudo-first-order, pseudo-second-order, and intraparticle diffusion models, were used in this study to investigate the adsorption of fluoride on P/γ-Fe2O3 nanoparticles. The estimated adsorption isotherm and kinetic parameters are presented in Table 2. Fig.
6 shows the adsorption kinetic plot for fluoride removal on P/γ-Fe2O3 nanoparticles. The removal of fluoride on P/γ-Fe2O3 nanoparticles followed the Ho (pseudo-second-order) kinetic model with a correlation coefficient of 0.999 at 25 mg/L, suggesting that the rate-limiting step is a chemical adsorption process . The isotherm data fitted the Freundlich, Langmuir and Temkin models, but fitted the Langmuir isotherm best, which indicates monolayer adsorption on a homogeneous surface . This paper is the result of an approved project at Zabol University of Medical Sciences.
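The quantities used above follow standard relations: removal efficiency is 100·(C0 − Ce)/C0, equilibrium capacity is qe = (C0 − Ce)·V/m, and the Langmuir and pseudo-second-order (Ho) models are commonly fitted in their linearized forms. The sketch below is a generic illustration of that data treatment; the arrays are hypothetical placeholder values, not measurements from this study.

```python
# Generic sketch of removal efficiency, adsorption capacity, a linearized
# Langmuir isotherm fit and a pseudo-second-order (Ho) kinetic fit.
# Example arrays are hypothetical, not data from this study.
import numpy as np
from scipy.stats import linregress

def removal_efficiency(c0, ce):
    """Percentage of fluoride removed, with C0 and Ce in mg/L."""
    return 100.0 * (c0 - ce) / c0

def adsorption_capacity(c0, ce, volume_l, mass_g):
    """Equilibrium uptake qe in mg adsorbate per g adsorbent."""
    return (c0 - ce) * volume_l / mass_g

# --- Langmuir isotherm, linearized as Ce/qe = Ce/qmax + 1/(KL*qmax) ---
ce = np.array([2.0, 4.5, 9.0, 15.0, 22.0])        # equilibrium conc. (mg/L)
qe = np.array([40.0, 70.0, 95.0, 110.0, 118.0])   # adsorbed amount (mg/g)
lin = linregress(ce, ce / qe)
q_max = 1.0 / lin.slope                           # monolayer capacity (mg/g)
k_l = lin.slope / lin.intercept                   # Langmuir constant (L/mg)
print(f"Langmuir: qmax={q_max:.1f} mg/g, KL={k_l:.3f} L/mg, "
      f"R2={lin.rvalue**2:.3f}")

# --- Pseudo-second-order kinetics, linearized as t/qt = 1/(k2*qe^2) + t/qe ---
t = np.array([15, 30, 45, 60, 90, 120])                   # time (min)
qt = np.array([60.0, 85.0, 95.0, 100.0, 104.0, 106.0])    # uptake (mg/g)
kin = linregress(t, t / qt)
qe_fit = 1.0 / kin.slope
k2 = kin.slope**2 / kin.intercept
print(f"Pseudo-second-order: qe={qe_fit:.1f} mg/g, "
      f"k2={k2:.5f} g/(mg*min), R2={kin.rvalue**2:.3f}")
```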
High concentration of fluoride above the optimum level can lead to dental and skeletal fluorosis. The data presents a method for its removal from fluoride-containing water. P/γ-Fe 2 O 3 nanoparticles was applied as an adsorbent for the removal of fluoride ions from its aqueous solution. The structural properties of the P/γ-Fe 2 O 3 nanoparticles before and after fluoride adsorption using the Fourier transform infrared (FTIR) technique were presented. The effects of pH (2–11), contact time (15–120 min), initial fluoride concentration (10–50 mg/L) and P/γ-Fe 2 O 3 nanoparticles dosage (0.01–0.1 g/L) on the removal of F − on P/γ-Fe 2 O 3 nanoparticles were presented with their optimum conditions. Adsorption kinetics and isotherm data were provided. The models followed by the kinetic and isotherm data were also revealed in terms of their correlation coefficients (R 2 ).
732
The relationship between inflammasomes and the endoplasmic reticulum stress response in the injured spinal cord
Traumatic spinal cord injury is generally considered to progress in two stages.The primary injury is the mechanical damage caused by a direct external force.It is followed by the secondary injury, which refers to the delayed spread of damage brought about by factors such as inflammatory cytokines, tissue acidosis, glutamate, and dysregulation of electrolyte homeostasis, leading to further functional deterioration .Oligodendrocytes, being particularly susceptible to the inhospitable environment after SCI, undergo necrosis and apoptosis which leads to demyelination and impairment of axon function.Although oligodendrocyte precursor cells have been shown to proliferate around the injured area in response to the loss of oligodendrocytes, many OPCs undergo apoptosis before differentiation into mature oligodendrocytes .Furthermore, studies have shown that OPC apoptosis inhibits remyelination and expands the damaged area of the injured spinal cord .Therefore, numerous studies have focused on pharmacological interventions that would mitigate glial cell apoptosis and secondary injury, and would hopefully improve the paralysis after SCI.We have focused on the role of endoplasmic reticulum stress as a trigger of glial cell apoptosis in the injured spinal cord.After SCI, accumulation of unfolded proteins in the ER in response to stressors such as tissue acidosis and electrolyte imbalance induces the activation of the unfolded protein response .The ER chaperone glucose-regulated protein 78 acts to reduce ER stress, but cell death is induced by the proapoptotic C/EBP homologous transcription factor protein when the effects of GRP78 are overwhelmed by ER stress .OPCs exposed to excessive ER stress promote CHOP and caspase-12 expression, which downregulates expression of the pro-survival proto-oncogene Bcl-2 and elevates the expression of the pro-apoptotic Bax, leading to apoptosis .Through immunohistochemistry of the injured spinal cord in a rat contusion model, we demonstrated that the expression of GRP78 is lower and the expression of CHOP is higher in OPCs compared to oligodendrocytes, astrocytes, and neurons .This indicates that OPCs have lower tolerance to ER stress compared to other cell types, which may be one of the many factors that lead to the observed apoptosis of OPCs after SCI.Another avenue through which neural cell death occurs in the central nervous system is through inflammasome-mediated pyroptosis.The inflammasome is a multiprotein construct generated through the oligomerization of inactive monomeric proteins from the nucleotide-binding domain, leucine-rich repeat protein family.Inflammasomes are formed upon activation by different stimuli, such as bacterial infections or endogenous danger signals, which induce the activation of IL-1β, IL-18, and caspase-1 that leads to mitochondrial damage and ultimately to cell death referred to as pyroptosis .Several inflammasome complexes have been reported which are denoted by the core NLR protein.The most intensively studied is the NLRP3 inflammasome which is formed when NLRP3 associates with the adaptor protein ASC and procaspase-1.In the biological process of inflammation there are multiple contributory factors, and there are often interactions between them.Recently, Shin et al. 
reported that the ER stress brought about by bacterial infections activates thioredoxin-interacting protein, which triggers the association of NLRP3 and caspase-2 to generate inflammasomes that ultimately lead to pyroptosis.In the injured spinal cord, it was reported that NLRP3 inflammasomes rapidly increase after SCI, and is a deteriorating factor for functional recovery after SCI .However, the association between ER stress and inflammasomes in SCI is unknown.Therefore, we studied the interaction between the ER stress response and inflammasomes in the injured spinal cord and the effect of this association on neuronal apoptosis.All animal experiments were conducted according to the protocol approved by the Center for Animal Research and Research Support at Tokai University School of Medicine.Female Sprague Dawley rats were purchased from Nippon Crea.Surgery was performed under aseptic conditions and 4% isoflurane anesthesia.After laminectomy of the tenth thoracic vertebra and exposure of the dura mater, a spinal cord contusion injury was created using the Infinite Horizon spinal cord injury device.We generated 3 experimental groups: a low-impact group subjected to 100 kdyne, a high-impact group subjected to 200 kdyne, and a sham group that only underwent laminectomy.As a countermeasure against dysuria after SCI, each rat was subjected to bladder massage for urination twice a day.The injured spinal cord was exposed under 4% isoflurane anesthesia at 1, 3, 7, or 14 days postinjury, and a 5-mm section of the cord was excised under the microscope.The spinal cord was placed on ice immediately after removal, washed with cold PBS, and then processed with a cell lytic nuclear extraction kit.For electrophoresis, 7.5% and 12.5% SDS polyacrylamide gels were used, and 5 μl of protein was loaded in each well.After electrophoresis, the proteins were electrotransferred to nitrocellulose membranes.Membranes were blocked with 5% BSA in TBST and then incubated overnight with anti-NLRP 3, anti-caspase-2, anti-ASC, or anti-TXNIP antibodies at 4 °C.Membranes were washed for 7 h with 0.05% Twin-20 in PBS, incubated with horseradish peroxidase-linked anti-Rabbit IgG at 25 °C for 60 min, and labeled with Immobilon Western Chemiluminescent HRP.Films were scanned by densitometry and analyzed using the software CS analyzer.As internal controls, β-actin and GAPDH labeled with mouse monoclonal antibodies in the same manner were used.For sham animals, 5 mm of the spinal cord just above the tenth thoracic vertebrae was excised and processed by the same procedure and used as a normalization control.The expression levels of NLRP3, caspase-2, ASC, and TXNIP at 1, 3, 7, and 14 days postinjury were compared between the 3 groups.At 1, 3, 7, and 14 days postinjury, perfusion tissue fixation was conducted using 2% paraformaldehyde in 0.1 M phosphate buffer under general anesthesia with 4% isoflurane.The spinal cord was removed and post-fixed in 2% PFA in 0.1 M PB at 4 °C for 2 days.After fixation, serial dehydration of the samples was completed with 7%, 15%, and 20% sucrose in water.Frozen spinal blocks were prepared using O.C.T compound, and a cryostat was used to cut frozen sections at a thickness of 10 μm A 2 mm area at the center of the lesion, which corresponds to the width at the tip of the IH impactor, was defined as the epicenter and the tissue 7 mm caudal from the epicenter was sectioned.Sections were washed 3 times in PBS for 10 min and then blocked for 60 min at 24 °C with 5% normal goat serum in PBS.After washing for 
10 min, the sections were incubated overnight at 4 °C with markers for inflammasome constitutive proteins and cell markers.The sections were washed again in PBS and then incubated with the following fluorescent secondary antibodies for 60 min at 24 °C: Alexa Fluor594 for NLRP3, anti-rabbit, 1:800; Alexa Fluor594 for ASC, anti-rabbit, 1:800; Alexa Fluor594 for Caspase-2, anti-rabbit, 1:800; Alexa Fluor488 for NG2, anti-mouse, 1:800; and Alexa Fluor488 for GFAP, anti-mouse, 1:800.Nuclear staining was conducted using VECTASHIELD with DAPI.The stained sections were examined using fluorescence microscopy, and the number of positive cells in the dorsal cord was counted.The positive cell ratios of NLRP 3, caspase-2, and ASC in NG2- or GFAP-positive cells were counted.The mean value of 5 consecutive sections was calculated and compared between groups.We certify that all applicable institutional and governmental regulations concerning the ethical use of animals were followed during the course of this research.The expression of NLRP3 in the injured groups was significantly higher than that in the sham group on days 1, 3, and 7 postinjury.The expression of caspase-2 and ASC was also significantly higher in the injured groups on days 1, 3, and 7 postinjury when compared with that in the sham group.However, there was no significant difference between the LI and HI groups.The expression of TXNIP was significantly higher on days 1 and 3 postinjury and significantly lower on days 7 and 14 in both LI and HI groups when compared with the sham group.No significant differences were found between the LI and HI groups.The expression of NLRP 3, ASC, and caspase-2 in OPCs was significantly higher in the LI and HI groups when compared with the sham group on days 1, 3, and 7 postinjury.In addition, NLRP3 and ASC expression in astrocytes was significantly higher in the LI group on day 3 postinjury but not on days 1, 7, and 14 when compared with that in the sham group.Regarding injury severity, the ASC expression in OPCs on day 1 postinjury was significantly higher in the HI and LI groups when compared with the sham group.The comparison of NLRP3 expression between OPCs and astrocytes revealed that OPCs exhibited a significantly higher expression than astrocytes on day 1 postinjury.ASC and caspase-2 expression levels were also significantly higher in OPCs than in astrocytes on day 1 postinjury.In recent years, an inflammasome-based response has garnered attention as a novel pathway for the induction of inflammation .The constituent molecules of inflammasomes, such as NLRPs, caspase, and ASC usually exist separately.However, these molecules aggregate to form inflammasomes when a trigger such as infection stimulates them .The generated inflammasomes cleave the precursors of IL-1β and IL-18, which activate them and lead to pyroptosis .Historically, inflammasome-mediated pyroptosis was discovered to be a process induced upon infection with intracellular pathogens, and the subsequent inflammatory response was often effective against the infection.Inflammasome have also been reported to control intestinal microflora, protect the intestinal epithelial barrier, and contribute to the maintenance of intestinal homeostasis, but negative aspects of inflammasomes have also been identified .Depending on the stimulating factor, excessive inflammasome formation induces sustained inflammation that precipitates the onset of various diseases, such as arteriosclerosis, gout, type 2 diabetes, and Alzheimer’s disease .Caspase-1, the most 
significant caspase in NLRP3-based inflammasomes, activates the IL-1β and IL-18 precursors and causes pyroptosis . ER stress can also trigger inflammation through NLRP3, which activates caspase-2 and leads to mitochondrial dysfunction. Damaged mitochondria release mitochondria-derived damage-associated molecular patterns that activate inflammasomes, leading to caspase-1 activation and pyroptosis . Therefore, caspase-2 activated through ER stress indirectly modulates caspase-1 activity. Here, we confirmed earlier reports that the expression of inflammasome proteins such as NLRP3, ASC, and caspase-1 in the spinal cord is elevated after SCI . With regard to injury severity, there was no significant difference between the LI and HI groups, suggesting that inflammasomes are activated regardless of whether the injury is mild or severe. When the expression of inflammasome proteins was examined by cell type, we found that it was high in OPCs and low in astrocytes. We previously reported that OPCs are vulnerable to ER stress, whereas astrocytes have a more robust ER stress response . The situation appears to be similar for inflammasomes, with OPCs being more vulnerable to inflammasome-mediated cell death and astrocytes being resistant to this type of cell death. The resistance of astrocytes to inflammasome-mediated cell death may be one reason why astrocytes survive after spinal cord injury, leading to the formation of glial scars . However, even in astrocytes, the expression of inflammasome proteins significantly increased on day 3 postinjury. Although the expression was low compared with that in OPCs, our data suggest that inflammasome-mediated cell death also occurs in astrocytes. TXNIP is an important link between the ER stress and inflammasome pathways . TXNIP is induced via the PERK and IRE1α pathways of the ER stress response and then activates caspase-1, promotes IL-1β secretion, and causes pyroptosis via the NLRP3 inflammasome . Along with the inflammasome proteins, TXNIP expression was also elevated in the early stages of spinal cord injury, suggesting an association between the ER stress response and the inflammasome pathway in the injured spinal cord. This also suggests that suppression of the ER stress pathway may act to suppress inflammasome-mediated cell death, providing a further means to ameliorate the secondary injury process in SCI. Hopefully, future studies will provide further insight into the association between the ER stress and inflammasome pathways and will pave the way toward the development of drugs that target these processes. In summary, inflammasome protein expression is promoted after spinal cord injury. The expression of inflammasome proteins is high in OPCs and low in astrocytes, which may be related to the high rates of OPC cell death after spinal cord injury. Inflammasome formation is associated with ER stress, which may increase neural cell death in the injured spinal cord. All authors declare no conflict of interest, either potential or real.
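The Western blot quantification described in the methods (densitometric band intensities normalized to β-actin or GAPDH loading controls and expressed relative to sham samples) follows a standard calculation that can be sketched as below. The function names and example intensities are hypothetical placeholders and are not taken from the CS analyzer output of this study.

```python
# Minimal sketch of densitometry normalization: each target band (e.g., NLRP3)
# is divided by its loading control (beta-actin or GAPDH) from the same lane,
# then expressed relative to the sham control. Example values are hypothetical.

def normalized_expression(target_band, loading_band):
    """Target band intensity corrected for loading differences."""
    return target_band / loading_band

def fold_change_vs_sham(sample_target, sample_loading,
                        sham_target, sham_loading):
    """Relative expression of an injured sample versus the sham control."""
    sample = normalized_expression(sample_target, sample_loading)
    sham = normalized_expression(sham_target, sham_loading)
    return sample / sham

# Example: NLRP3 band intensities (arbitrary densitometric units)
li_day1 = fold_change_vs_sham(sample_target=1850.0, sample_loading=910.0,
                              sham_target=760.0, sham_loading=880.0)
print(f"NLRP3, LI group day 1: {li_day1:.2f}-fold vs sham")
```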
Study Design: Animal study. Objectives: The aim of this study is to investigate the influence of inflammasomes in the injured spinal cord of a rat spinal cord injury model. Setting: University laboratory in Kanagawa, Japan. Methods: A thoracic contusion spinal cord injury (SCI) was induced in female Sprague Dawley rats using an IH-impactor to create a moderate injury group (LI) and a severe injury group (HI). Using a sham group as a control, the injured spinal cords were removed at several time points after injury to evaluate the levels of inflammasome component proteins in the injured spinal cord by immunohistochemistry and Western Blot. Results: Western blot analyses revealed that the expression of inflammasome component proteins leucine-rich repeat protein 3 (NLRP3), apoptosis-associated speck-like protein containing a CARD (ASC), and Caspase-2 significantly increased in the SCI animals compared to the sham animals. Thioredoxin interacting protein (TXNIP), which is a protein induced by ER stress that activates the NLRP3 inflammasome, was also significantly higher in the SCI animals. Immunohistochemistry revealed significantly higher expression of NLRP3, ASC, and Caspase-2 in oligodendrocyte progenitor cells (OPCs) of the SCI groups compared to astrocytes of the SCI groups and OPCs of the sham group. Conclusions: Inflammasome component protein expression increases after SCI in association with increased ER stress. OPCs had significantly higher levels of inflammasome proteins compared to astrocytes, which may be associated with the high rates of OPC cell death after SCI.
733
Three-step surgical treatment of aortoesophageal fistula after thoracic endovascular aortic repair: A case report
Endovascular techniques were first reported for abdominal aortic aneurysms by Parodi in 1991 . The first successful treatment of a thoracic aortic aneurysm, reported by Dake in 1994, was referred to as thoracic endovascular aortic repair . Since then, TEVAR has become an established treatment for aortic aneurysms and aortic dissection because it is minimally invasive and therapeutic outcomes are good . On the other hand, TEVAR is also associated with several complications, including paraplegia, renal failure, stroke, post-implantation syndrome, device migration and aortoesophageal fistula formation . The formation of an AEF after TEVAR was first reported in 1998 by Norgren, and the number of reports describing AEF has increased as TEVAR applications have widened and post-treatment follow-up periods have lengthened. Aortoesophageal fistulae develop after TEVAR in 1.7%–1.9% of patients, at a median of 11.6 months . The main causes of death are fatal bleeding, mediastinitis and sepsis . The reported mortality rates after surgical and conservative therapy for AEF after TEVAR are 64% and 100%, respectively . Therefore, the prognosis of AEF after TEVAR is almost as poor as that of AEF arising in the absence of TEVAR, and only surgery can save the life of a patient with AEF after TEVAR. However, treatment strategies, including surgical approaches, remain controversial. We describe a patient with AEF after TEVAR who was treated via a three-step surgical approach with a good outcome. This work has been reported in line with the SCARE criteria . A 71-year-old man with a history of TEVAR for Stanford B aortic dissection and aortic aneurysm rupture 20 months earlier presented at a local medical clinic with fever over 38 °C. Laboratory findings revealed elevated infection markers, and he was prescribed antibiotics. However, he presented at his primary care hospital one week later without symptomatic improvement. Contrast-enhanced computed tomography at that time identified a fistula between the esophagus and an aortic aneurysm, and upper gastrointestinal endoscopy revealed an esophageal ulcer. He was therefore diagnosed with AEF after TEVAR and immediately transferred to our hospital for surgical therapy. On admission, he did not have hematemesis and was hemodynamically stable. He required emergency surgery to control the spread of infection and prevent fatal bleeding. We planned a three-step surgical approach. The first step, performed on the day of admission, comprised esophagectomy via a right thoracotomy at the fourth intercostal space with the patient in the left lateral position. We transected the esophagus above the AEF and above the diaphragm and resected the intervening segment, which included the AEF. Intraoperative findings revealed extensive inflammation of the mediastinal tissue and leakage of infected old blood from the aortic fistula, without massive bleeding. We placed drains in the right thoracic cavity and in the mediastinum beside the aortic fistula. The patient was then placed in the supine position and the residual esophagus was brought out to the left cervical region as an esophagostomy. A feeding jejunostomy tube was then placed via a small abdominal incision. He was admitted to the surgical intensive care unit thereafter, and infection control was started with abscess drainage and antibiotic administration. Gross examination of the resected esophagus showed a perforation site with a maximum diameter of 1.0 cm. No bacteria were identified in blood culture; however, Klebsiella pneumoniae and Prevotella melaninogenica were
identified in mediastinal tissue culture. Antibiotics, abscess drainage and pleural lavage were continued; however, they did not completely resolve the inflammatory response after the first surgery. Therefore, we removed the residual infected foci as soon as the patient's condition had stabilized. The second step of the procedure was performed one month later to remove the thoracic aortic aneurysm and the artificial stent-graft and to restore the aorta in situ with a synthetic vascular prosthesis through a left thoracotomy. The prosthesis was infiltrated with rifampicin before graft replacement to prevent recurrent infection. Thereafter, the inflammatory response and the general status of the patient gradually improved with antibiotics, drainage and pleural lavage. He regained strength through postoperative rehabilitation and enteral nutrition management. Three months after the second step, the third step addressed the esophageal defect. A narrow gastric tube fashioned by laparotomy was brought up through the antethoracic route, and cervical esophagogastrostomy was performed. The patient recovered uneventfully, resumed oral intake and was discharged on postoperative day 37. The patient remains free of disease and adverse events 24 months after completion of the three-step procedure. Although AEF is rare, it is fatal when it causes sudden massive hematemesis. However, the number of reported patients who have been able to return to society has gradually increased since the first life-saving surgical procedure was reported in 1983 . The causes of AEF include aortic aneurysm, esophageal cancer, esophageal foreign body and others, including trauma and surgical complications . Recent reports have indicated that TEVAR can cause AEF . Proposed mechanisms of AEF development after TEVAR include infection of the stent-graft and aortic aneurysm, direct erosion of the stent-graft through the aorta into the esophagus, necrosis due to continuous pressure from the aortic stent-graft and a large aneurysm, and ischemic esophageal necrosis due to occlusion of the esophageal arteries that feed the esophagus. More cases of AEF after TEVAR can be expected given the recent broadening of TEVAR applications. Symptoms of AEF comprise not only hemorrhage or severe chest/back pain, but also vague non-specific symptoms such as fever and an elevated inflammatory response . The frequency of massive bleeding associated with AEF after TEVAR is relatively low because the fistula is located between the esophagus and the false lumen of the aorta after TEVAR. This can delay initial treatment, as in our patient. When patients with a history of TEVAR present with non-specific symptoms, AEF should be considered. Control of infection and of fatal bleeding is mandatory to save the lives of patients with AEF after TEVAR, and sources of infection such as the esophagus, aortic wall and artificial stent-graft must be removed. Thereafter, antibiotics and sustained drainage with lavage are required for continued infection control. Moreover, re-implantation of a synthetic vascular prosthesis with protection against re-infection, such as a graft infiltrated with antibiotics and omental packing, is necessary for revascularization . Previous reports have described simultaneous resection of the esophagus and aortic stent-graft via a left thoracotomy followed by a two-step surgical reconstruction of the esophagus . Here, we applied a three-step procedure consisting of resection of the esophagus and of the aortic stent-graft on separate occasions, followed by esophageal reconstruction,
because massive bleeding did not occur in AEF after TEVAR in this patient.The first procedure in the three-step approach is less stressful than that of the two-step approach.Furthermore, we could restore the aorta during the second procedure using a synthetic vascular prosthesis under conditions of considerable infection control.Thereafter, esophageal reconstruction can be planned as the third step after total infection control and adequate improvement in the general physical status of a patient with AEF.The general status of such patients is often too poor to endure highly invasive surgery.Therefore, we considered the need to improve safety as much as possible during each highly invasive step.The shortcoming of the three-step surgical approach is the possibility of difficult infection control after the first step due to an unresected infected aorta and risk of bleeding from the fistula.Therefore, the second step might need to be implemented as soon as possible if difficulties are encountered with infection control or bleeding.The main advantage of two-step surgery is better infection control because of complete removal of the infected tissue.From this point of view, two-step surgical approach may be more suitable for patients who can endure high operative stress.Reports describing AEF after TEVAR remain scant and optimal therapeutic strategies remain controversial.Here, we found that a three-step surgical approach improved the safety of each step of the procedure by reducing surgical stress.This resulted in a good outcome for this patient with AEF.Thus, this surgical strategy might be a useful option for treating AEF after TEVAR.Optimal therapy could save the lives of patients with AEF after TEVAR.Treatment strategies remain controversial, but we feel that the three-step surgical approach described herein could be a useful therapeutic option for AEF after TEVAR.All authors have no conflict of interest.No funding was received.The Institutional Review Board at Hiroshima University – This investigation is exempt from ethical approval at our institution.The patient provided written, informed consent to the publication of this case report.AK wrote the manuscript.YH, YI and ME supervised writing the manuscript.All authors were part of the surgical team that treated this patient.All authors read and approved submission of the final manuscript.Not commissioned, externally peer-reviewed
Introduction: Aortoesophageal fistula (AEF) is a fatal complication that results in sudden massive hematemesis. Although thoracic endovascular aortic repair (TEVAR) is an established method of treating aortic aneurysms or aortic dissection, the number of AEF cases after TEVAR has recently been increasing as the use of TEVAR has spread. However, the therapeutic strategy for AEF remains controversial. Presentation of case: We describe a 71-year-old man with Stanford B aortic dissection and aortic aneurysm rupture treated by TEVAR who developed AEF between the thoracic aorta and upper thoracic esophagus 20 months thereafter. We applied a three-step surgical procedure for this patient comprising resection of the esophagus as the infectious source, removal of the aortic aneurysm and stent-graft with replacement of the aorta, and final reconstruction of the esophagus. Thereafter, the patient resumed oral intake and has remained relapse-free for 24 months without adverse events. Discussion: Previous reports have described simultaneous resection of the esophagus and aortic stent-graft via a left thoracotomy followed by a two-step surgical reconstruction of the esophagus. We applied a three-step procedure consisting of resections of the esophagus and aortic stent-graft on separate occasions followed by esophageal reconstruction in this patient. The first procedure in the three-step approach is less stressful than that of the two-step approach. Conclusion: The three-step surgical approach to treating AEF after TEVAR resulted in a good outcome for this patient. Thus, this surgical strategy is a useful option for treating AEF after TEVAR.
734
Caveolin-1 Modulates Mechanotransduction Responses to Substrate Stiffness through Actin-Dependent Control of YAP
The integral membrane protein Caveolin-1 engages in crosstalk with the actin cytoskeleton and connects directly to actin cables through the protein FLNA.CAV1 controls focal adhesion stability, actin organization, and actomyosin contraction through RHO GTPases and contributes to mechanosensing and adaptation in response to various mechanical stimuli, such as membrane stretching, shear stress, hypoosmotic shock, and cell detachment.However, current understanding remains limited regarding the mechanisms by which these phenomena are integrated with overall cell function.The transcriptional cofactor yes-associated protein operates downstream of the canonical Hippo pathway, a highly conserved pathway regulating organ growth control, tissue homeostasis, and tumorigenesis.YAP regulates the transcription of specific gene sets mainly through its interaction with TEA domain transcription factors.A cascade of kinases, including LATS1 and LATS2, lead to YAP phosphorylation and curb its nucleocytoplasmic shuttling, mediating its cytosolic retention through interaction with 14-3-3 proteins, thus downregulating YAP transcriptional output.This regulatory network is controlled by upstream cues related to tissue architecture and cellular context, such as cell-cell adhesion, cell density, and cell polarity.YAP is also controlled by mechanical signals, such as extracellular matrix stiffness, shear stress, and stretching.Stiff environments favor YAP nuclear localization, whereas attachment to soft substrates increases cytoplasmic retention.This mechanical control, which determines cell proliferation and differentiation, depends on RHO GTPase function and actomyosin-driven contractility but is largely independent of kinase regulation, because depletion of LATS1/2 kinases does not alter the mechanical responsiveness of YAP and non-phosphorylatable mutants are nonetheless sensitive to substrate stiffness.The adaptation of nuclear pore units to mechanical tension also contributes to the regulation of YAP nuclear entry.However, understanding is limited about the exact molecular mechanisms by which ECM stiffness controls YAP activity.Here, we identify CAV1 as an upstream positive regulator of YAP that affects the response to changes in ECM stiffness through a mechanism dependent on F-actin dynamics.The mechanical regulation of YAP underpins pathophysiological processes such as cardiovascular disease, inflammation and tissue regeneration, and cancer.YAP activation by ECM stiffness promotes cancer-associated fibroblast activation and subsequent peritumoral ECM remodeling and stiffening, establishing a positive-feedback loop that favors cancer progression.Here, we show that overexpression of constitutively active YAP mutants rescues the blunted contractility and ECM remodeling previously reported for Cav1 genetic deficiency.The positive impact of YAP activity on tumor initiation and progression is further showcased by its critical contribution to pancreatitis-induced acinar-to-ductal metaplasia, which favors pancreatic ductal carcinoma initiation.We further demonstrate CAV1-dependent positive regulation of YAP in vivo, showing that Cav1-knockout pancreatic parenchyma fails to upregulate YAP in response to induced pancreatitis and exhibits blunting of changes associated with YAP activation, such as ADM.Our results provide important insight into the mechanisms regulating YAP function.We identify CAV1 as an upstream regulator of YAP, controlling its transcriptional activity through the control of actin cytoskeleton 
dynamics.Conversely, YAP underpins an important share of CAV1-dependent phenotypes.We propose this CAV1-YAP regulation has important implications in the progression of some pathologies, such as cancer, and will allow us to better understand the principles governing processes driven by substrate stiffness in health and disease.ECM stiffness mediates CAV1 internalization.We confirmed that CAV1 was internalized in cells grown on soft substrates and trafficked to a RAB11-positive recycling endosome.Thus, cell detachment from integrin-ECM-mediated adhesions and cell growth on soft substrates both trigger the same translocation of CAV1 from the plasma membrane toward a recycling endosome.These observations suggest that CAV1 could mediate the response to changes in substrate rigidity.To evaluate the potential contribution of CAV1 to ECM stiffness mechanotransduction, we performed RNA sequencing in wild-type and Cav1KO mouse embryonic fibroblasts cultured on rigid or compliant polyacrylamide hydrogels.Using Ingenuity Pathway Analysis software and the Enrichr open-source tool, we queried our datasets for canonical functional programs and Gene Ontology terms responsive to substrate rigidity, classifying them according to their specificity for WT or Cav1KO backgrounds.This analysis identified a stiffness-induced increase in genes related to the regulation of actin cytoskeleton, focal adhesions, and cell junctions exclusively in WT cells.To explore the molecular mechanisms mediating this effect of CAV1 on gene expression, we focused on YAP because this transcriptional cofactor is a prominent transcriptional driver of genes involved in cell adhesion and actin cytoskeleton organization and is also positively regulated by mechanical cues such as ECM stiffness.To assess whether YAP function was controlled by substrate stiffness in our system, we first analyzed the expression of a panel of 61 genes previously characterized as YAP targets in MCF10A and NIH 3T3 cells.A Fisher exact test confirmed statistically significant upregulation of endogenous YAP targets by ECM stiffness in WT cells, but not in Cav1KO cells.This finding was supported by qRT-PCR analysis of the YAP targets Ankrd1 and Ctgf and by orthogonal assays to monitor TEAD activity based on the 8xGTIIC luciferase reporter.To explore the mechanism of this CAV1 dependency, we first studied YAP subcellular distribution, which was classified as cytosolic, nuclear, or evenly distributed.As expected, YAP was predominantly nuclear in WT cells plated on stiff substrate and retained in the cytosol in cells plated on soft substrate.However, in Cav1KO MEFs, YAP was predominantly retained in the cytoplasm independently of substrate rigidity and compliance.Defective YAP nuclear localization in Cav1KO cells was confirmed by biochemical fractionation.These results indicate that the positive regulation of YAP transcriptional activity by environmental rigidity is CAV1 dependent.To rule out a cell-specific effect on YAP-CAV1 functional interactions, we used small interfering RNA duplexes to transiently knock down CAV1 in epithelial MDA-MB-231 human breast carcinoma cells.CAV1 silencing significantly decreased Ctgf and Ankrd1 expression.Moreover, qRT-PCR profiling of immortalized neonatal mouse hepatocytes revealed a similar reduction in YAP target gene expression in cells harvested from Cav1KO mice compared with those from WT mice.To further assess the robustness of the CAV1-YAP interaction, we used the SEEK open-access resource to query known YAP target genes for 
coexpression patterns against the whole genome across extensive datasets from different tissues and cell lines.Cav1, whose mRNA levels highly correlate with its protein expression, showed one of the highest expression correlations with our YAP target list query.These observations were upheld by the analysis of an independent dataset, generated by assessing the correlation between the expression of Cav1 and the rest of the genome across 300 cell lines; in this analysis, 79% of YAP target genes correlated positively with Cav1 expression and 11% correlated negatively.Together, these observations suggest that CAV1-dependent regulation of YAP transcriptional activity is a general mechanism operating across different experimental systems.Simultaneous siRNA-mediated knockdown of YAP and TAZ, to prevent potential compensatory mechanisms, effectively blocked expression of the canonical targets Ctgf and Ankrd1 in WT MEFs.Consistent with a pivotal role for CAV1 in the positive regulation of YAP, YAP/TAZ silencing in Cav1KO cells did not further decrease Ctgf, Ankrd1, and Cyr61 expression.Notably, CAV1 absence did not alter total YAP protein levels, suggesting that the relationship between CAV1 and YAP-dependent transcriptional programs relies on CAV1-dependent regulatory mechanisms upstream of YAP and not on the regulation of YAP protein expression.Cell spreading modulates YAP activity such that YAP is predominantly nuclear in cells spread over large areas and cytosolic in cells with limited spreading.Moreover, cell polarization and spreading in MEFs is controlled by CAV1.To rule out the possibility that CAV1-dependent differences in YAP activity were secondary to differential cell spreading, we cultured MEFs on printed fibronectin micropatterns of fixed area and shape.As expected, YAP was predominantly cytosolic in WT MEFs spreading over small micropatterns, whereas growth on large micropatterns promoted a marked nuclear accumulation.This regulation was blunted in CAV1-deficient cells.These observations confirm that CAV1-dependent YAP modulation is not an indirect consequence of changes in cell geometry.To assess this relationship in the context of other mechanical cues, we evaluated the role of CAV1 in cell stretching, another established YAP-activating stimulus.Using a stretching device, we exposed cells to uniaxial cyclic strain.Stretching induced significant increases in Ctgf and Ankrd1 expression in WT MEFs, but not in Cav1KO cells, suggesting that CAV1 modulates YAP activity in response to different stimuli.We observed that YAP phosphorylation at S112 was increased in Cav1KO MEFs, and this increase was partly blocked by exogenous CAV1 expression.Previous reports proposed the existence of nuclear pools of S127-phosphorylated YAP in human cells, but our biochemical partition assays suggested that the phosphorylated form of the mouse homologous residue S112 is largely excluded from the nucleus in our cellular model.YAP phosphorylation at serine 127 promotes the retention of this transcription factor in the cytosol.We evaluated the involvement of YAP phosphorylation in CAV1-dependent regulation ectopically expressing YAP-FLAG and the non-phosphorylatable mutant YAP-5SA.We transiently transfected these constructs into WT and Cav1KO MEFs and analyzed their subcellular distribution by both immunofluorescence and subcellular fractionation.In WT MEFs, FLAG-tagged WT YAP was predominantly nuclear but was mostly retained in the cytosol in Cav1KO MEFs.However, FLAG-tagged YAP-5SA accumulated in the 
nucleus in both WT and Cav1KO MEFs, suggesting that cytosolic retention of YAP in Cav1KO MEFs is at least partially dependent on its regulated phosphorylation.Constitutive nuclear translocation of YAP-5SA proteins in Cav1KO MEFs correlated with the rescue of its downstream transcriptional output.YAP-5SA nuclear accumulation in Cav1KO MEFs correlated with increased canonical YAP-TEAD transcriptional activity, assessed by 8xGTIIC-luciferase reporter assay.In contrast, whereas WT YAP enhanced TEAD activity in WT MEFs, it did not in Cav1KO MEFs, consistent with the cytosolic sequestration of WT YAP-FLAG and endogenous YAP in Cav1KO MEFs.These results were confirmed by qRT-PCR analysis.It is important to note that while the fold increase was higher in Cav1KO cells, YAP-5SA overexpression in Cav1KO cells did not reach the levels observed in WT cells, suggesting that phosphorylation-independent mechanisms could also be involved.Our observations indicate that YAP serine phosphorylation has an impact on CAV1-dependent control of YAP localization and activity.YAP serine phosphorylation can be mediated by the kinases LATS1 and LATS2.Knockdown of LATS1/2 increased Ctgf and Ankrd1 mRNA expression and TEAD-driven luciferase reporter activity in both WT and Cav1KO cells, with comparable fold increases.To further evaluate the implication of Hippo canonical kinases in the differences observed between WT and Cav1KO cells, we analyzed the role of neurofibromin 2.NF2 silencing led to an increase in Ctgf and Ankrd1 expression in WT cells, but not in Cav1KO cells, supporting the existence of alternative regulation upon suppression of LATS1/2 kinase activity by NF2 knockdown and precluding rescue of YAP activity in Cav1KO cells.Taken together, these results suggest that LATS1/2 kinases are not essential for CAV1-dependent YAP activity regulation.Since F-actin and RHO are necessary for YAP nuclear translocation and transcriptional activity, we next checked whether changes in actin cytoskeleton and RHO signaling could explain the altered YAP regulation in Cav1KO MEFs.For the analysis of actin dynamics and architecture, WT and Cav1KO MEFs were cultured on large fibronectin micropatterns to ensure the same spreading area for both genetic backgrounds and thus exclude effects on actin dynamics of differences in cell-cell interaction, spreading, and cell shape.Actin dynamics and architecture were also analyzed in cells cultured on stiff substrates.Actin fiber organization was inferred by anisotropy analysis of microscopy images to measure the degree of departure from a homogeneous distribution toward an increasingly discrete intensity distribution.Confirming CAV1 as a regulator of actin cytoskeleton organization, actin fibers were less organized in Cav1KO cells.We assessed the potential contribution of actin dynamics to CAV1-dependent YAP regulation by using the actin polymerization inhibitor cytochalasin D and jasplakinolide, an enhancer of F-actin polymerization.CytD decreased stress fiber density, reducing YAP nuclear accumulation and YAP target transcriptional output in WT cells to levels akin to those in Cav1KO cells.Conversely, jasplakinolide enhanced actin polymerization, restored YAP nuclear translocation in Cav1KO MEFs to WT levels, and increased YAP target gene expression in both WT and Cav1KO cells.A constitutively active DIAPH1 mutant, capable of boosting actin polymerization rates, significantly upregulated YAP target expression in Cav1KO cells.These data strongly suggest that actin 
polymerization is a key component of the YAP regulatory machinery in our system.Altered actin dynamics in Cav1KO cells are the direct cause of the reduced YAP activity observed in this genetic background.We next explored the contribution of RHO signaling to actin- and CAV1-dependent regulation of YAP using Y27632, a well-established inhibitor of the upstream kinase ROCK1/2.Exposure to Y27632 strongly reduced YAP target gene expression in WT cells, reproducing the effect of CytD; in contrast, Y27632 had only modest effects in Cav1KO cells.Transient transfection with a constitutively active form of RHOA that rescues RHO activity in Cav1KO MEFs further increased Ctgf and Ankrd1 expression in WT cells but did not enhance YAP target gene expression in Cav1KO cells.Our observations thus suggest that while RHO signaling is necessary for the CAV1-dependent positive mechanoregulation of YAP activity, it is not sufficient, since defective RHO cannot explain the deficient YAP activity in Cav1KO cells.To characterize the molecular mechanisms underpinning the effect of CAV1-dependent actin dynamics on YAP activity, we profiled the YAP interactome by YAP immunoaffinity purification and mass spectrometry of control and CytD-treated WT and Cav1KO cells.We identified several previously described YAP-interacting proteins: AMOTL2, POLR2A, TBX5, RUNX1, 14-3-3 proteins, and known members of the Hippo pathway interactome.Interestingly, only WT cells showed interactions between YAP and nuclear pore and/or transport complexes, presumably reflecting effective nuclear translocation.Conversely, both Cav1KO and CytD-treated cells were enriched for interactions with 14-3-3 proteins, which are reported to retain phosphorylated YAP in the cytosol.To assess the contribution to YAP regulation of each component of these context-specific YAP interactomes, we carried out an image-based RNAi focused screen by knocking down 89 identified YAP interactors and comparing YAP subcellular distribution in Cav1KO and WT cells.Setting a stringent threshold of |Zq| > 2.5, we identified hits specific to WT cells for 10 genes, whose knockdown blunted YAP nuclear translocation.These included siRNA pools targeting most nuclear pore components previously shown to selectively interact with YAP in WT cells.Conversely, 8 hits were identified as specific to Cav1KO cells, and siRNA-mediated depletion of these genes enhanced YAP nuclear translocation.This second subset included two Cav1KO-specific YAP interactors, the 14-3-3-domain proteins YWHAH and YWHAB.We confirmed by western blot that YWHAH interacts preferentially with YAP in Cav1KO cells in CytD-treated WT cells compared with control WT cells.Accordingly, efficient YWHAH siRNA-mediated depletion partially rescued the expression of YAP targets in Cav1KO cells and CytD-treated WT cells.Notably, this rescue was not effective in cells grown on soft substrates, indicating that additional mechanisms might be involved in this regulation.Taken together, these unbiased approaches suggest that CAV1 determines YAP activity through the control of actin dynamics, via mechanisms involving inhibition of the interaction between YAP and 14-3-3 proteins such as YWHAH.YAP and caveolins are involved in a number of pathophysiological processes, such as liver regeneration, muscular dystrophy, and ECM remodeling.We hypothesized that impaired ECM remodeling in CAV1-deficient cells might be caused by deficient YAP activity.To test this, we transfected Cav1KO cells with either non-phosphorylatable YAP-5SA or WT 
YAP.ECM remodeling was assessed by collagen gel contraction assay and quantitative image analysis of collagen fiber organization by second harmonic generation microscopy.As expected, ECM remodeling activity was blunted in Cav1KO MEFs.Interestingly, YAP-5SA overexpression restored the ability of these cells to remodel the matrix, whereas WT YAP was ineffective.Furthermore, we observed a clear correlation between CAV1 expression and YAP nuclear localization in human cancer-associated fibroblasts from pancreatic tumors, and CAV1 silencing in these cells induced YAP cytosolic retention, supporting a major role for CAV1-YAP regulation in determining the activation state of stromal cell populations in vivo.Based on these observations, we propose that CAV1 and YAP nucleate a signaling pathway that drives ECM remodeling and stiffening.Pancreatitis causes tissue damage and desmoplasia, promoting the development of ADM and potentially contributing to PDAC onset and progression.We chose pancreatitis as a model to study the potential contribution of CAV1-YAP regulation in vivo, because YAP is required for pancreatitis-induced ADM and CAV1 expression is upregulated in pancreatic cancer and associated with decreased survival.Mild and reversible acute pancreatitis was induced in WT and Cav1KO mice by intraperitoneal administration of the cholecystokinin receptor agonist caerulein.At 2 hr and 4 days after caerulein treatment, nuclear YAP expression was significantly higher in WT preparations.This correlated with an increase in the areas presenting extensive ADM and fibrosis, assessed by αSMA expression in pancreatic stellate cells 4 days after treatment.Taken together, these data suggest that CAV1 is required for YAP activation in the context of caerulein-induced pancreatitis and that this activation correlates with increased ADM in pancreatic tissue.Our results identify CAV1 as an upstream regulator of YAP-dependent adaptive programs, working through mechanisms dependent on the control of actin dynamics.This CAV1-dependent control of YAP activity relies, at least in part, on the reversible phosphorylation of YAP, evidenced by the association of blunted YAP activity in Cav1KO cells with increased YAP phosphorylation and its rescue by exogenous expression of non-phosphorylatable YAP.We found no role for LATS1/2 in this mechano-dependent negative regulation, observing no YAP activity recovery either upon transient silencing of both kinases or upon silencing of NF2.YAP might also be a substrate for JNK and Abl or as-yet unidentified kinases that could be responsible for YAP regulation.Another possible explanation for increased YAP phosphorylation in the absence of CAV1 is a protection of phosphorylated YAP from dephosphorylation through interaction with 14-3-3 YWHA proteins.This interpretation is supported by our interactome profiling and systematic functional screening studies, which showed increased interaction of YWHA proteins with YAP in Cav1KO cells and specific rescue of YAP translocation upon their siRNA depletion.YAP retention in the cytosol upon interaction with YWHAH proteins led to deficient YAP transcriptional activity in these cells.Furthermore, YWHAH proteins positively control Yap expression, adding a new level of complexity to the control of YAP activity.Our studies also identify several regulators of YAP nuclear translocation, including nuclear pore components and proteins involved in nucleocytosolic transport.Changes in stromal stiffness and architecture can enhance tumor 
aggressiveness, promote resistance to therapy, and favor metastasis.During tumor progression, CAFs surrounding the tumor may favor an increase in the stiffness of the tumor mass.In CAFs, ECM stiffness itself is an activating cue, thus potentially enabling a mechanically driven feedforward loop in which YAP nuclear translocation is necessary for this activation.CAV1 expression in CAFs correlates with higher remodeling capacity and facilitates tumor invasion.Our results provide the first evidence of a functional connection between these nodes.3D assays show that exogenous expression of a constitutively active YAP mutant reverts the impairment of ECM remodeling associated with CAV1 deficiency.This proposed CAV1-YAP regulation is therefore likely a significant driver of key events in tumor progression.Pancreatitis is characterized by immune cell infiltration, interlobular and interacinar edema, and fibrosis and is a major risk factor for the development of pancreatic cancer.YAP contributes to acinar cell dedifferentiation in ADM and prevents the regeneration of injured areas.Furthermore, inflammation increases stiffening, and the increased tissue stiffness in caerulein-induced acute pancreatitis could explain the differences in YAP activation between WT and Cav1KO mice.Our results thus support an important role for CAV1-YAP regulation in vivo and suggest a potential link between inflammation-induced stiffness and disease progression.These results suggest the interesting possibility that CAV1-YAP regulation could determine pancreatic cancer progression, since YAP is required for the initial stages of PDAC development.Our results demonstrate that CAV1 regulates YAP activity, determining the mechanical response to changes in ECM rigidity and other mechanical cues.CAV1-YAP regulation modulates pathophysiological processes such as ECM remodeling and the response to acute pancreatitis.These findings suggest that this regulation could determine the onset and progression of different physiological and pathological processes, such as tumor development, through multiple mechanisms.Further information and requests for reagents may be directed to, and will be fulfilled by the Lead Contact, Miguel Ángel del Pozo.Cav1KO C57BL/6 mice were bred under specific pathogen-free conditions at the CNIC.Experiments were performed with 8-12-week-old males.All animal protocols were in accordance with Spanish animal protection law and were authorized by the corresponding local authority.MEFs were isolated from WT and Cav1KO littermate mice, immortalized, and cultured as described.Neonatal hepatocytes from WT and Cav1KO littermates were isolated, phenotyped, and kindly provided by Dr. 
Martín-Sanz.The human MDA-MB-231 breast carcinoma and HeLa cell lines were obtained from ATCC.HeLa cells expressing CAV1–GFP were kindly provided by Lukas Pelkmans.MEFs, hepatocytes and HeLa cells were grown in Dulbecco’s modified Eagle’s medium and MDA-MB231 cells were grown in DMEM/F-12; growth media were supplemented with 10% fetal bovine serum and 100 μg/ml penicillin and streptomycin.Primary pancreatic cancer associated fibroblasts were a gift from Manuel Hidalgo.PanCAFs were grown in Roswell Park Memorial Institute medium supplemented with 20% FBS, 100 μg/ml penicillin and streptomycin, and 5% glutamine.All cells were maintained in a humidified atmosphere at 37 °C and 5% CO2.Polyacrylamide gels with tuneable stiffness were prepared on glass coverslips as previously described.3-aminopropyltrimethoxysilane was applied over the surface of a coverslip using a cotton-tipped swab and another coverslip was treated with Sigmacote®.The coverslips were then washed thoroughly with sterilized water and dried.Acrylamide/bis-acrylamide solutions were prepared using appropriate concentrations to obtain stiff matrices and soft matrices as previously defined.Polymerization initiators were added to the bis-acrylamide mixture.A drop of this mixture was deposited on top of the silanized glass and covered with the sigmacote-treated coverslip; 183 μL was deposited for round coverslips and 50 μL for square coverslips.After polymerization, the upper coverslip was removed and the polyacrylamide surface was photo-activated by exposing the sulfo-SANPAH crosslinker to UV light.Finally, the surface was coated with fibronectin for 1 h at 37°C.Fibronectin was then removed and cells were seeded at low confluence.Experiments were performed 24h after seeding.YAP subcellular distribution was analyzed with the Columbus Image Data Storage and Analysis System or imageJ.Nuclei were segmented using the Hoechst signal.Mitotic and aberrant nuclei were then eliminated based on Hoechst intensity and nuclear roundness and area.Cells located at the image borders were also eliminated.The cytosol was segmented growing the nuclear segmentation.The cytosolic ROI for cytosolic YAP intensity calculation was built as a 4 pixel ring of cytoplasm grown radially from the segmented nuclear border.Finally, the ratio between nuclear and cytosolic YAP was calculated.Fibrillary collagen in non-fixed cell-embedded collagen gels was imaged using the SHG technique with a Zeiss LSM780 multiphoton microscope fitted with a short pulse laser.Luciferase assays to monitor TEAD transcriptional activity with the 8xGTIIC-luciferase reporter were as described.Cells were transiently co-transfected with 8xGTIIC-luciferase and pLVX-CMV-CherryFP-P2A-MetLuc.Luciferase activity was monitored with the Dual-Luciferase® Reporter Assay System in an ORION II microplate luminometer.Firefly luciferase was quantified in cell lysates by adding Luciferase Assay Reagent II, and MetLuc was quantified in culture medium by adding Stop & Glo reagent.Firefly luciferase activity was normalized to MetLuc activity to control for variability in transfection efficiency across samples.Contraction assays to monitor matrix remodeling were as described.Briefly, 1.5 × 105 MEFs were included in a collagen type I gel in an Ultra-Low Attachment 24-well plate.After gel polymerization, normal culture medium was added, and collagen gel borders were detached from the border of the plate.Gels were cultured at 37 °C, 5% CO2 for 48h.Gel contraction was monitored by quantifying the gel surface area 
on photographs with ImageJ.The fold-change with respect to the contraction observed in a control condition was calculated for each sample.Acute pancreatitis was induced by caerulein treatment as described.Before the experiment, mice were starved for 12h with unrestricted access to drinking water.Acute pancreatitis was induced by 7 intraperitoneal injections of caerulein dissolved in PBS; injections were given at 1-h intervals on 2 consecutive days at a dose of 50 μg caerulein/kg body weight per injection.Control animals received injections of PBS only.At defined intervals, animals were sacrificed and the pancreas excised for immunohistochemical analysis.Immunostained preparations were scanned with Hamamatsu Nanozoomer 2.0 RS and digitized with NDP.scan 2.5.Images were viewed and quantified with NDP.analyzer and NDP.view2.RNA was extracted from cell samples with the RNeasy micro kit.For each sample, 1 μg RNA was reverse transcribed using the Omniscript RT kit and random primers.qPCR was performed with SYBR green.Appropriate negative and positive controls were used.Results were normalized to endogenous GAPDH and HPRT1 expression using qBase plus.Primer sequences are summarized in Table S4.Next generation sequencing experiments were performed at the CNIC Genomics Unit.Total RNA was extracted as for qRT-PCR.RNA integrity was determined with an Agilent 2100 Bioanalyzer.Two RNA samples per condition were analyzed by single read sequencing in an Illumina HiSeq 2500 System.Data were analyzed in the CNIC Bioinformatics Unit.Enrichment analysis was conducted using Ingenuity Pathway Analysis software and the Enrichr web tool.Protein G–agarose beads bound to immunoprecipitated proteins were incubated at room temperature for 2h in a 60 μL volume of 2 M urea, 50 mM Tris-HCl pH 8.5, and 10 mM TCEP with gentle vortexing.Iodoacetamide was then added and the incubation continued in the dark.After dilution to 0.5 M urea with ammonium bicarbonate, 3 μg of trypsin were added and samples were incubated for 6-8 h at 37°C.Samples were then acidified to 1% TFA, and the supernatants were desalted on C18 minispin columns and dried down for further analysis.Experiments were performed with 5 independent replicates.Peptides were analyzed by LC-MS/MS using a C-18 reversed phase nano-column in a continuous acetonitrile gradient consisting of 0%–32% B over 80 min, 50%–90% B over 3 min at 50°C.Peptides were eluted from the nanocolumn at a flow rate of 200 nL/min to an emitter nanospray needle for real-time ionization and peptide fragmentation in a QExactive HF mass spectrometer.The chromatographic run analyzed an enhanced FT-resolution spectrum followed by the MS/MS spectra from the 15 most intense parent ions.Dynamic exclusion was set at 40 s. 
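The per-cell nuclear:cytosolic YAP quantification described in the image analysis paragraph above (nuclear segmentation from the Hoechst channel, a thin cytosolic ring grown from the nuclear border, and the ratio of mean intensities) can be illustrated with a minimal Python/scikit-image sketch. This is a simplified stand-in for the Columbus and ImageJ pipelines used in the study, not a reproduction of them; the Otsu thresholding, the ring width and the synthetic input images are illustrative assumptions only.

import numpy as np
from skimage import filters, measure, morphology, segmentation

def yap_nuc_cyto_ratios(hoechst, yap, ring_px=4, min_area=200):
    # Per-cell nuclear:cytosolic YAP intensity ratios from two matched 2D images.
    # Segment nuclei from the Hoechst channel (Otsu threshold plus size filter).
    nuc_mask = hoechst > filters.threshold_otsu(hoechst)
    nuc_mask = morphology.remove_small_objects(nuc_mask, min_size=min_area)
    nuclei = measure.label(nuc_mask)
    # Grow each nucleus outward to obtain a thin cytosolic ring around it.
    expanded = segmentation.expand_labels(nuclei, distance=ring_px)
    rings = expanded * (nuclei == 0)  # expanded label area minus the nucleus itself
    ratios = {}
    for region in measure.regionprops(nuclei, intensity_image=yap):
        ring_pixels = yap[rings == region.label]
        if ring_pixels.size == 0:
            continue  # e.g., no measurable cytosol around this nucleus
        ratios[region.label] = region.mean_intensity / ring_pixels.mean()
    return ratios

# Synthetic example field (illustrative only): one bright fake nucleus.
rng = np.random.default_rng(0)
hoechst = rng.random((256, 256)) * 0.1
yap = rng.random((256, 256)) * 0.1
hoechst[100:140, 100:140] += 1.0
yap[100:140, 100:140] += 0.8
print(yap_nuc_cyto_ratios(hoechst, yap))

In a full analysis, mitotic, aberrant and border-touching nuclei would additionally be filtered on Hoechst intensity, nuclear roundness and area, as described above.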
For peptide identification, all spectra were analyzed with Proteome Discoverer using SEQUEST-HT.For searching the Uniprot proteome database containing all sequences from mouse and frequently observed contaminants, the following parameters were selected: trypsin digestion with 2 maximum missed cleavage sites; precursor and fragment mass tolerances of 2 Da and 0.02 Da, respectively; carbamidomethyl cysteine as a fixed modification; and methionine oxidation as a dynamic modification.Peptides were identified by the probability ratio method, and false discovery rate was calculated using inverted databases and the refined method with an additional filtering for a precursor mass tolerance of 15 ppm.Proteins were quantified for each condition based on the number of scans/peptides identified at 1% FDR.The smart-pool siRNA library for selected YAP interactors detected by mass spectrometry was purchased from Dharmacon.Four different sequences per gene were used.siRNAs were transfected by reverse transfection in 384-well plates.Cells were fixed and stained for YAP detection 48h post transfection as described previously.Immunofluorescence images were acquired with an Opera automated confocal microscope.Three replicates were performed, with four wells per siRNA in each replicate.Two different ON-TARGET non-targeting siRNA controls were used.Transfection efficiency was validated by transfection with INCENP siRNA, which promotes the appearance of multinucleated cells and cells with aberrant nuclei.YAP nucleo:cytosolic ratios were calculated using Columbus as described above, and Z-scores were calculated as Z = (sample mean - control mean)/control standard deviation (see the sketch at the end of this section).The mean Z-score of the three replicates was calculated.Glass slides with pre-printed micropatterns were purchased from Cytoo.Designs for customized patterns with specific grid sizes were described by Dr. Piccolo and colleagues.Fibronectin coating was performed as specified by the supplier.Cells were plated, and 24 h later, fixed and stained following standard protocols.SEEK is a computational coexpression gene search tool.We queried this web tool with the list of previously published YAP target genes and used all human expression datasets from tissue samples and cell lines included in SEEK for coexpression analysis.The program gives a ranked list of genes ordered from the strongest positive correlation with the query to the weakest.With Enrichr, we analyzed the enrichment in KEGG annotated pathways and gene ontology terms of the 200 genes showing the highest positive coexpression with YAP targets.Twenty-four hours after plating on fibronectin-coated 6-well plates, cells were subjected to uniaxial cyclic stretching for 24h on a programmable Flexcell® FX-5000™ Tension System under standard culture conditions.The ON-TARGETplus SMARTpool siRNAs were purchased from Dharmacon; the siRNA targeting human CAV1 was custom made.Silencing was allowed to proceed for 48h before terminating the experiment.p2xFlag CMV2-YAP2 was a gift from Dr. Sudol.pCMV-flag YAP2 5SA was a gift from Dr. Guan.8xGTIIC-luciferase was a gift from Dr. 
Piccolo.The lentiviral backbone for pLVX-CMV-CherryFP-P2A-MetLuc was derived from pLVX_shRNA2 and was provided by the CNIC Viral Vectors Unit.CMVCherryFPP2A was obtained from pRRL_CMV_CherryFP_P2A.The Metridia luciferase was amplified from the pMetLuc reporter and cloned in-frame with the CherryFP-P2A peptide.pEGFP–mDia1 and pcDNA3-HA-RHO were as described.All transient transfections were by electroporation with 5 μg plasmid DNA and 35 μg UltraPure salmon sperm DNA solution at 350 V and 550 ohms for 10 ms.Drugs were added to cells 3h after plating, followed by incubation for a further 21h.The ROCK inhibitor Y27632 and cytochalasin D were from Sigma-Aldrich.Jasplakinolide was from Santa Cruz Biotechnology.Monoclonal antibodies were sourced as follows: anti-YAP and anti-TEF-1 from Santa Cruz Biotechnology; anti-CAV1 XP and anti-YWHAH (#5521) from Cell Signaling; anti-Flag M2 from Sigma-Aldrich; and anti-glyceraldehyde-3-phosphate dehydrogenase and anti-cortactin from Millipore.Polyclonal antibodies to phospho-YAP, LATS1, and LATS2 were from Cell Signaling; anti-Histone H3 was from Abcam; and anti-RHO GDI was from Santa Cruz Biotechnology.For immunohistochemistry, we used anti-YAP XP from Cell Signaling and αSMA from Thermo Fisher Scientific.For immunofluorescence procedures, cells were fixed in 4% paraformaldehyde at 37°C for 10 minutes, permeabilized and blocked with 0.2% Triton X-100 in 1% BSA for 10 min, and then immunostained with specific antibodies for 1h.Alexa647 phalloidin and Alexa546- and Alexa488-labeled secondary antibodies were from Invitrogen.Images were acquired either on a Zeiss LSM700 confocal microscope or an Opera automated confocal microscope.For subcellular fractionation, the cells were lysed.Nuclear and cytoplasmic fractions were separated by centrifugation.The cytosolic fraction was precipitated with acetone, and nuclei were lysed and centrifuged at 13,000 rpm to remove the DNA.Both fractions were eluted with sample buffer and analyzed by western blotting.For immunoprecipitation, cells were lysed.Cell lysates were centrifuged for 10 min at 4°C.Supernatants were mixed with the specific antibody or control IgG for 2h, and protein G–agarose beads were added for a further 2h.Beads were washed with washing buffer and processed for mass spectrometry or western blotting.For western blotting, immunoprecipitated proteins were eluted with sample buffer and analyzed on nitrocellulose membranes with primary and HRP-conjugated secondary antibodies using standard protocols.Proteins were detected by enhanced chemiluminescence.Nuclear and cytosolic subcellular fractions were prepared as described.Statistical details of experiments are reported in Figure Legends.Significance was evaluated by paired Student’s t test, using GraphPad Prism.Differences were considered statistically significant at ∗p < 0.05, ∗∗p < 0.01, ∗∗∗p < 0.005, and ∗∗∗∗p < 0.0005.YAP-target gene enrichment on stiff versus soft substrates in the RNA-Seq analysis was compared by the Fisher exact test using an online Fisher exact test calculator.The accession number for the RNA-seq data reported in this paper is GEO: GSE120514.
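The screen hit-calling and the stiffness enrichment test described in this methods section can be sketched briefly in Python. The well-level ratios, gene names and contingency counts below are invented placeholders, and scipy's fisher_exact simply stands in for the online calculator mentioned above; the sketch only illustrates Z = (sample mean - control mean)/control standard deviation with hits called at |Z| > 2.5, and a 2 x 2 Fisher exact test of YAP-target enrichment on stiff versus soft substrates.

import numpy as np
from scipy.stats import fisher_exact

def screen_z_scores(ratios_by_gene, control_ratios, threshold=2.5):
    # Z-score each siRNA pool's mean nuclear:cytosolic YAP ratio against the
    # non-targeting controls and flag hits exceeding the |Z| threshold.
    mu = np.mean(control_ratios)
    sd = np.std(control_ratios, ddof=1)
    z = {gene: (np.mean(vals) - mu) / sd for gene, vals in ratios_by_gene.items()}
    hits = {gene: score for gene, score in z.items() if abs(score) > threshold}
    return z, hits

# Invented example values (four wells per siRNA, as in the screen design).
controls = [1.02, 0.95, 1.05, 0.98, 1.01, 0.99, 1.04, 0.96]
ratios = {"Ywhah": [1.45, 1.52, 1.38, 1.49],   # would be flagged as a hit here
          "GeneX": [1.01, 0.97, 1.03, 0.99]}   # indistinguishable from controls
z_scores, hits = screen_z_scores(ratios, controls)
print(z_scores, hits)

# Fisher exact test on an invented 2 x 2 table:
# rows = YAP target gene / other gene; columns = upregulated on stiff / not.
table = [[40, 21], [800, 2400]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio {odds_ratio:.2f}, p = {p_value:.2e}")

In the screen itself, this calculation would be run per replicate and the mean Z-score across the three replicates taken before applying the threshold, as stated above.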
The transcriptional regulator YAP orchestrates many cellular functions, including tissue homeostasis, organ growth control, and tumorigenesis. Mechanical stimuli are a key input to YAP activity, but the mechanisms controlling this regulation remain largely uncharacterized. We show that CAV1 positively modulates the YAP mechanoresponse to substrate stiffness through actin-cytoskeleton-dependent and Hippo-kinase-independent mechanisms. RHO activity is necessary, but not sufficient, for CAV1-dependent mechanoregulation of YAP activity. Systematic quantitative interactomic studies and image-based small interfering RNA (siRNA) screens provide evidence that this actin-dependent regulation is determined by YAP interaction with the 14-3-3 protein YWHAH. Constitutive YAP activation rescued phenotypes associated with CAV1 loss, including defective extracellular matrix (ECM) remodeling. CAV1-mediated control of YAP activity was validated in vivo in a model of pancreatitis-driven acinar-to-ductal metaplasia. We propose that this CAV1-YAP mechanotransduction system controls a significant share of cell programs linked to these two pivotal regulators, with potentially broad physiological and pathological implications. Moreno-Vicente et al. report that CAV1, a key component of PM mechanosensing caveolae, mediates adaptation to ECM rigidity by modulating YAP activity through the control of actin dynamics and phosphorylation-dependent interaction of YAP with the 14-3-3-domain protein YWHAH. Cav1-dependent YAP regulation drives two pathophysiological processes: ECM remodeling and pancreatic ADM.
735
Stakeholder management in complex product systems: Practices and rationales for engagement and disengagement
Business and management research has focused increasing attention on external stakeholder engagement in business activities in different contexts, including innovation management, marketing, complex product systems, service-based value creation, project management and supply chain management.These studies emphasize the increasingly collaborative nature of value creation in contemporary business, where organizations are more dependent than ever on external resources and inputs in meeting complex market needs. In addition to internal stakeholders, who belong to the formal decision-making coalition of a complex system, external stakeholders' participation in value-creating and decision-making activities can be crucial for firm performance and long-term survivability.Previous research on stakeholder management in the context of complex product systems is founded on a firm-centered perspective, in which stakeholder engagement is explicated from the perspective of a single firm, where the focus is on the performance outcomes of that firm, emphasizing its value capture possibilities.This research has explored the distinct reasons for a firm to engage external stakeholders, the varying roles of actors in the system, and the engagement strategies utilized.Also, the challenges of engaging external stakeholders have been studied extensively.Recent advancements have shifted to a network-level perspective, in which stakeholder engagement is explicated from the incorporated views of a network of actors.This has been supported by a systemic view that shifts the outcome focus to system-wide benefits, such as joint benefits to a network of actors and overall value created for the system.This research has explored factors and conditions that can lead to effective engagement of a network of stakeholders, the influence of different forms of engagement interaction, and the varying generic engagement processes that can lead to systemic value outcomes.While recent advancements have advocated the benefits of engaging external stakeholders, it is clear that less attention has been directed to the disengagement of external stakeholders, and particularly to the interaction of stakeholder engagement and disengagement over time, which can be considered a salient feature of governing a complex product system.To augment the explanatory power of previous research in the field, we analyze the inter-organizational practices and rationales through which internal stakeholders engage external stakeholders in, and disengage them from, the decision-making and value-creating activities in a complex product system over time.By inter-organizational practices, we mean the routines and activities that occur at a detailed, fine-grained level between different stakeholders, including both internal and external sides.Moreover, we investigate the schemes of reasoning, the rationales, for engaging and disengaging external stakeholders in a timely manner in CoPS.We pose the following research question for our empirical analysis: How and why do internal stakeholders engage and disengage external stakeholders over time in a complex product system?To address our research question, we have taken on a qualitative and inductive research approach.Specifically, we utilized stakeholder theory as a lens for our theory elaboration approach.We drew on a single case study design, and conducted loosely structured interviews with informants from several organizations over many years.We also gathered archival data for triangulation purposes.Our case context is a district 
development megaproject located in Europe inside a metropolitan area.Megaprojects that contain physical constructs, intangible services or hi-tech engineering solutions and systems are special cases of CoPS.Hence, a megaproject provides a highly dynamic, multi-actor environment, which is suitable for our empirical enquiry.This megaproject started in 2004 and is estimated to be completed in 2020.The district is known as a spacious garden district and a cultural cradle of the metropolitan area.The cultural and historical heritage is to be valued and preserved in the development project.The scope of the megaproject is to demolish the entire district center and rebuild a commercial shopping center and residential complex with multiple modern transportation facilities as well as an environment that conveys the cultural heritage of the area."The total development volume over the project's lifecycle exceeds EUR 3.4 billion.In the empirical study, we found distinct practices that the internal stakeholders employed to engage the external stakeholders in the decision making and further development of the megaproject.Conversely, we also found practices used to uncouple the external stakeholders from the decision-making process of the megaproject.In addition, framing of the system, legitimating the governance structure of the system, maintaining dynamic stakeholder interaction in the system, and expanding the design rights within the system, were identified as rationales for whether or not to engage the external stakeholders in a timely manner.Our study has three major contributions for stakeholder management literature.First, our findings highlight the crucial role of timely disengagement of external stakeholders in governing CoPS.This means that while it is certainly true that stakeholder engagement is important for overall value creation and system-wide benefits in CoPS contexts, it is just as important to timely disengage external stakeholders for reaching the systemic outcomes.Second, our four novel rationales are empirically driven and bound to the lifecycle of CoPS forming a processual description, which elaborates the more theoretically oriented general rationales found in literature in this specific context of CoPS.More importantly, our findings suggest a temporal ordering for these schemes of reasoning and show how the rationales may change when the CoPS proceeds on its lifecycle, providing new knowledge of the justification for engaging and disengaging stakeholders timely from a systemic view.Third, our overall findings provide new fine-grained understanding of the nuances of stakeholder management in CoPS contexts, particularly, by adding causal logics, empirical grounding and elaborating new conceptual relationships.The paper is organized as follows.First, we review stakeholder management literature in CoPS contexts as the necessary background for our research.Next, we outline the research design, methods and analysis protocol for our empirical enquiry.We then provide a synopsis of our key findings in form of a narrative with necessary results-figures.We conclude by discussing and translating our findings into theoretical contributions and practical implications for managers along with research limitations and future research suggestions."Stakeholder theory ultimately deals with the question of how different stakeholders should be managed and taken into account in a firm's decision-making.However, the presented rationales and approaches for stakeholder engagement and disengagement have 
differed considerably across different schools of thought.The dominant traditional instrumental approach in the context of CoPS assumes a bargaining mode by adopting the “management of stakeholders” perspective."It focuses on a single firm's performance outcomes and value capture in dyadic stakeholder relationships, highlighting the boundaries of decision-making between internal and external stakeholders. "In this discourse, the rationale for stakeholder engagement is the prioritization and balancing of most salient stakeholders' interests and requirements in a manner that ensures the attainment of the goals of the single firm. "Frooman accentuates the resource-based view as rationale for engaging stakeholders, where stakeholders are considered merely as valuable resource and information providers for the firm's self-centric purposes.The identity-based rationales for stakeholder engagement have broadened our understanding of the symbolic role that stakeholder involvement may play in the formation of desired organizational identity."Instead of value contributors, external stakeholders, such as citizens' associations, are often portrayed through a conflict-driven approach and considered in a negative light as an opposing and homogeneous group of actors who should be approached primarily through disengagement or symbolic engagement.The disengagement of non-salient stakeholders is therefore considered as rational, since the engagement of those actors who do not possess critical resources for the project’ survival is not beneficial from the perspective of the focal firm.Aaltonen and Kujala in turn show that the rationales to use certain strategies to engage or disengage external stakeholders in CoPS have revolved around short-term related project efficiency indicators of time, budget and scope.Contemporary business, however, increasingly shifts toward the collaboration of multiple actors, including a combination of for-profit and non-profit actors.In line with this change, the more modern systemic approach emphasizes the importance of shifting from the management of stakeholders to the broad engagement of stakeholders."This perspective focuses on system-wide benefits and overall value created for the network of actors, where even the peripheral or external stakeholders' participation in value-creating and decision-making activities can be crucial for firm performance and long-term survivability.For instance, the value of peripheral stakeholder engagement in the development and diffusion of new ideas and innovations has been found to be an important rationale for stakeholder engagement in the context of new product development and innovation research, where the concept of open innovation has gained particular prominence.Moreover, a shared knowledge base and knowledge sharing have been identified as rationales for stakeholder engagement in Public-Private-Innovation processes.Further, Rampersad et al. 
found that the rationale for stakeholder engagement in innovation networks is to distribute power and create trust among stakeholders that eventually lead to network-level efficiency.Finally, the institutional perspective has highlighted the role of stakeholder engagement in the formation of organizational legitimacy and reputation.Nevertheless, the knowledge of the rationales for stakeholder engagement, and particularly for stakeholder disengagement in systemic perspective is rather limited and unilateral.Understanding these rationales in-depth is relevant for developing a more contextualized understanding of stakeholder management in CoPS.There exists a broad spectrum of stakeholder management strategies ranging from disengagement through symbolic engagement, to genuine engaged participation, where stakeholders can truly affect the decision-making processes of CoPS."Research has identified various general strategies to include external stakeholders into an organization's decision-making processes, although empirical and, in particular, processual investigations on the actual practices through which these strategies are enacted in the context of CoPS have been more limited.In their empirical analysis of two complex projects, Aaltonen et al. show how proactive influence strategies consisting of active dialogue and early stakeholder engagement shifted the opposing external stakeholders into neutral ones, and also provide early indications of how the use of stakeholder management strategies may actually change over time."Savage, Nix, Whitehead, and Blair's and Olander and Landin's typologies also suggest that managers should differentiate their stakeholder management strategies based on the position and attributes of stakeholders. "For example, collaboration and informing strategies can be used to increase the most crucial stakeholders' degree of supportiveness, while the strategy of defending can be used to decrease the power of non-supportive stakeholders.Even though stakeholder management literature offers a set of tools and frameworks for stakeholder analysis and classification, decisions on whom to engage in the decision-making, as well as when and how to engage them, are highly challenging in practice and also constantly debated among scholars in the field of CoPS."For instance, Missonier and Loufrani-Fedida argue that transparency and broad engagement of external stakeholders as early as possible contribute to a successful CoPS as all stakeholders' opinions and interests are incorporated into the success criteria and objective definition.However, in practice, many practitioners experience this kind of boundaryless and inclusive approach to stakeholder management as extremely resource-intensive and costly, and they perceive the risk of extremely painful and challenging decision-making with lock-ins and dead ends.Therefore, exclusion and disengagement approaches can also be favored during the early lifecycle phase by the internal stakeholders to secure the go-decision for the CoPS.In practice, the internal stakeholders in CoPS have the complicated task of balancing, and still timely engaging, a heterogeneous group of external stakeholders, ranging from authorities to neighborhood associations, who often have conflicting goals.Prior research on stakeholder management strategies, however, tends to portray their use as rather static and dependent on the attributes of the stakeholders instead of associating the rationales for the engagement and disengagement to the actual context of the 
developing multi-stakeholder system. Consequently, what is almost completely missing in prior literature is an in-depth and fine-grained portrayal of how external stakeholders are engaged and disengaged over time in practice, and of how the interplay of engagement and disengagement practices may unfold over time. Furthermore, what makes stakeholder management particularly challenging in the context of CoPS is the evolving inter-organizational nature of operations. In this context, the practices that are employed to engage and disengage external stakeholders are not enacted and coordinated by one single organization in a dyadic relationship with its stakeholders, as suggested in the traditional hub-and-spoke stakeholder models, but are formed and enacted through the interactions of internal, and even external, stakeholders in a networked setting. The temporal dynamics of stakeholder engagement and disengagement over the system lifecycle and deciphering the paradox of engaging versus disengaging external stakeholders are therefore particularly important for developing a more contextualized understanding of stakeholder management in CoPS. We investigate CoPS in a megaproject context with a theory-elaboration approach. We approach this context using stakeholder theory as a lens and seek to deepen existing concepts and their relationships regarding engaging and disengaging external stakeholders, using empirical context and theory concurrently in a balanced manner. In so doing, we employ a single-case study design. Following the principles of theoretical sampling, we selected a district infrastructure development megaproject as our case context. We consider this kind of megaproject a theoretically suitable context for this study, because it involves operations of a highly dynamic, inter-organizational and temporary nature that enable us to illuminate and deepen existing concepts and their relationships regarding engaging and disengaging external stakeholders. Our case context is a district development megaproject located inside a European metropolitan area that started in 2004 and is scheduled for completion in 2020. The initial plan was modest: renovate two district center buildings and enhance the district center's streets with new pavement flagstones and streetlights. However, the scope gradually expanded during the project's lifecycle.
The final plan is to demolish and rebuild the entire district's center area. This means the demolition of five massive buildings and the rebuilding of a new commercial shopping center and residential complex with modern transportation and surveillance facilities, including a new metro station, centralized car parking for over 2000 vehicles, centralized area surveillance and a new regional main bus terminal. All these new facilities are integrated together with several park-and-ride systems and other interfaces. The new complex includes 12 stories spread across five massive buildings. This district is internationally famous for its spacious garden district ambience and for being the cultural cradle of the metropolitan area. The district's architecture highlights post-war modernism: the famous center tower, public pool and fountains, culture center, modern art museum, theater, library and many other cultural sites that all served inhabitants during the World War II recovery period. This cultural heritage is to be valued and preserved in the project at the national level, which contributes to a very complex stakeholder environment. The Park City and a major investor of the megaproject, the real estate investment and development department of Insurance Company, are the owners of the project, who together invest more than 3.4 billion euros. In 2004, the project involved only a few stakeholders, which included the real estate owners of the district center and the department store tenants. However, during the gradual expansion, the stakeholder network broadened to several stakeholders, including new customers, real estate owners, the residents' association, political actors, contractors, end-users, private investors, and consultant, architect and designer companies. We introduce the megaproject's stakeholders with a short description of their role, whether they are internal or external stakeholders, and our collected interview data in Table 1.
Our study analyzes and describes the megaproject from the perspective of the internal stakeholders' key representatives. Our analysis investigates the context from the early project initiation in 2004 up to mid project execution and early operation in 2016. We collected data through loosely structured interviews that lasted approximately 60 to 90 min. We supported our interview data by collecting an archive of open- and closed-access data for triangulation. We selected knowledgeable informants purposefully, and during the interviews we used the snowball sampling method to identify other knowledgeable interviewees. We interviewed some informants more than once, as they were highly relevant informants at multiple points in time and were key personnel related to significant events in the case context. We interviewed nine organizations from both the internal and external stakeholders' sides, and persons in several different roles, to cover a large range of different perspectives for transverse coverage and reduced bias. In total, we organized four interview rounds that ensured longitudinal data coverage and limited post-hoc rationalization. We audio recorded and transcribed the interviews for further analysis. We had a common interview guideline agreed among the interviewers. First, we focused on the interviewee's personal history and career background in the studied organization and project. Next, we asked the informant to provide her/his own rich unfiltered narrative of the project, event by event. We focused on the interviewee's own interpretation of all kinds of stakeholder interactions: decisions, actors, events, actions and activities that included multiple actors, with as accurate dates as possible. We intervened with open-ended and guiding follow-up questions to stimulate dialogue, keep focus and gather details. We fostered a confidential, transparent and communicatively active atmosphere. We also utilized a critical incident technique by asking the interviewee to recall certain positive or negative events during the project to advance our understanding of the temporality of significant stakeholder-related events and activities. We also collected a data archive of documented material, retrospectively from 2004 and then in real time from 2011. The archive contains more than 200 unique sources of newspaper articles, project reports, presentations, brochures, company reports, and detailed plans. We used these data for triangulation and for producing valid background information. In practice, we verified the chronology of key events, actions, decisions and activities that had a crucial role in how the project and its stakeholder landscape developed. We conducted an inductive thematic analysis of the interview data in three phases. Simultaneously, we used archival data for triangulation. We performed the analysis at the organizational level using ATLAS.ti and MS Office software. In the first phase, we inductively recognized general themes and patterns from the raw interview data to produce a broad depiction and proper background comprehension of our research context. The different themes and patterns were recurring and non-recurring activities, actions, events, relationships and roles described at a very empirical level with as accurate timestamps as possible, ranging from years to exact dates, for example different project and company meetings and operation plans. We first analyzed from a single stakeholder and interviewee perspective and then combined the different accounts to reduce biases. We concurrently triangulated the timestamp
information from our data archive whenever possible to ensure the trustworthiness and chronology of our descriptions. In the second phase, we identified the practices that internal stakeholders used to engage and disengage external stakeholders, based on the empirical themes and patterns of the previous analysis phase. We developed these practices with information on who did what, how, why, when, where, with whom and to whom, to provide fine-grained descriptions. For instance, in 2004, the Park City's Development director and Property manager and Insurance Company's former CEO founded a joint decision-making board together with other real estate owners to draw the stakeholder boundaries between internal and external, and to ease planning procedures through more unified decision-making. We reflected on similarities and differences among the practices to distinguish them properly from each other. In the third phase, we interpreted and developed more abstract reasons for the found practices, that is, what the internal stakeholders' rationale was, from a systemic view, behind the practices employed to engage or disengage external stakeholders. For instance, the internal stakeholders sought to frame the CoPS toward the external stakeholders by establishing a joint decision-making organ to delineate decision-making boundaries and by implementing a novel planning tool to actively inform external stakeholders about the developing megaproject concept. All this contributed to system-wide benefits such as successful governance and timely progress of the megaproject planning. We followed several best practices to assess the trustworthiness of our methodology and findings. We have reported illustrative interview quotations from multiple stakeholder perspectives grounding our findings in the data, we have utilized constant data triangulation, and we organized a formal validation workshop to discuss the initial findings of this study. The workshop participants from Insurance Company shared our findings and saw that their practical experience resonated very much with our analysis results. Further, this manuscript version has been sent for review to Insurance Company's representatives and they have been given a chance to comment on our final findings. Our analysis resulted in a detailed narrative of the case, distinct engagement and disengagement practices, and related rationales. In total, we found nine practices and four rationales that internal stakeholders used for both engaging and disengaging external stakeholders. Based on these steps we built two models, first an empirical depiction of the found practices and then a theoretical model of the rationales and enacted practices. We present our findings in the following section. In 2004, the former CEO of Insurance Company had discussions with the Development director and Property manager from Park City, and rounded up all the real estate owners of the district to a joint meeting held in the district's old premises. The real estate owners agreed upon and established a joint organization and decision-making body, District Area Development (DAD). In terms of engaging external stakeholders, there were two main purposes. First, this kind of umbrella organization constituted and formalized the boundaries between the internal and external stakeholders of the project. As a closed system, DAD would be used to disengage external stakeholders, such as the End-users of the district, the residents' association and the Local Cultural and Environmental Bureau, from the project's early-phase decision-making. In practice,
internal stakeholders did not communicate the project idea creation or planning to the external stakeholders or listen to their proposals. The scope of the project in 2004 was ambiguous and the internal stakeholders iteratively envisioned and developed it. DAD formed a manageable organizational ensemble and unified decision-making, and thus made it easier for internal stakeholders to organize preliminary studies and divide responsibilities. Even though the planning ideas were modest in the beginning of the project, the internal stakeholders realized that it could be easier to keep the planning in their own hands, instead of trying to satisfy every stakeholder. An excerpt from the data illustrates this thinking: "We are not there, because residents are not 'actors'. Real estate owners, landowners and the commercial community are. There they decide what is to be done. Reputedly, they write memos in the DAD, but the memos are torn afterwards and no information is left on paper for outsiders." Second, this joint organization represented the collective interest of the internal stakeholders, such as Park City, Insurance Company and the other real estate owners, to authority-related external stakeholders, such as the Construction office, Urban planning unit authority and Building inspection authority, who acted as gatekeepers for project development ideas in the political decision-making. In other words, the internal stakeholders engaged authority-related external stakeholders in the project's early value-creating activities. The district's fragmented ownership caused challenges, because real estate owners proposed multiple parallel but divergent ideas to the authorities, which hindered project initiation and the choice of a development direction. DAD as the new communication channel would unify these ideas, represent the collective interest of the internal stakeholders, and ease the communication and interaction with authority-related external stakeholders. The CEO from Insurance Company described this: "We collectively thought that we must generate some kind of community over the development project where actors together contemplate issues. Then was established, and I would say that it was the coalition of the most significant actors, and still is. It became the essential communication channel toward . It is easier to organize concrete dialogs since is the conversation partner toward ." Soon after forming DAD, the internal stakeholders set up a so-called designer team that consisted of key representatives from Insurance Company, Consultant Company, Designer Company and Architect Company. The designer team invented a novel planning tool, reference planning, which they utilized over the project lifecycle to engage external stakeholders in value-creating activities. In particular, it was an idea developed by Designer and Consultant to overcome stakeholder interaction challenges related to the regular bureaucratic town planning procedure, which would have been too slow and would have hindered development. The reference plan visualized the 'big picture' of the district in 2D and 3D forms: how the district would look in the future in various phases, with alternatives. This big picture was available to anyone on a public website and it steadily introduced the planning ideas to external stakeholders, such as the Residents' association, End-users and Customers, who had the possibility of providing feedback, decreasing the chance of rebuttals. The Building inspection and Surveillance authority, Urban planning unit authority and Local Cultural and
Environmental Bureau gave tentative acceptance, feedback and guidance in choosing the planning direction before official lock-in decisions. Two interview quotes from different stakeholders describe the new planning approach: "In my opinion, a critical starting point for solutions was the use of reference planning… it has surely been an innovative tool, but from the authority perspective, undeniably… it has developed some teething problems with the administrative proceedings." "The formal town planning procedure was too burdensome for timely progression. Thus, we invented this so-called reference planning, which shows to anyone who's interested how looks like in 2020 and in 2030." The designer team's Consultant and Architect invented and started to utilize a second novel planning tool, master planning, as a reinforcement tool to disengage authority-related external stakeholders from decision-making activities. In particular, the master planning idea emerged from the interactions between Consultant and the external stakeholders' Building inspection authority to overcome external stakeholders' opposition regarding building permits. Some buildings lacked detailed analyses, which the authorities could not inspect from the reference planning, which was too abstract and generic a visualization, and they were therefore unable to grant building permits. The master planning was a more detailed in-depth analysis of the district built upon the reference planning, which would show the building-specific analyses. The designer team used this master planning protocol to actively argue, justify and defend the planning development and their decisions, particularly against external stakeholders such as the Building inspection and Surveillance authority and Urban planning authority, to gain acceptance for building permits with minimal opposition, bureaucracy and changes. The designer team described this: "This started when the building inspection authority's head said to me that this building will not get construction permits, unless we can indicate that it is a functioning entity… Then we piloted this master planning tool toward the authorities… It includes these functional plans, there was human safety, fire safety, heat and smoke venting, and everything… But with this we actually managed to work the holdouts into this." "It was a challenge to depict all the necessary safety and fire precautions toward authorities and other . We came up with the idea of master planning that shows detailed 3D plans of everything in the . And then everything has to be done according to the master planning and everything needs to be connected to it accordingly. This has been the tool to justify what we do and gain clearance from ." The Park City's Trade promoter and Project manager held several different briefings for external stakeholders, especially for the Residents' association, the National and Local Cultural and Environmental Bureaus and the End-users of the district, to actively engage them in value-creating activities by providing platforms for external stakeholders to influence and engage in project planning.
The purpose of organizing such briefings arose from interactions with the National and Local Cultural and Environmental Bureaus and residents of the district, who strongly opposed the more modern planning development because of the district's cultural and historical heritage value. Internal stakeholders needed a straightforward, transparent and honest way of communicating the planning to these external stakeholders and of collecting feedback. Park City's Project manager and the Chairman of the urban planning unit board organized various information seminars open to anyone at the district's famous movie theater and art museum. The Trade promoter created a website for the project to introduce even more graphical material and analyses and to provide information openly, such as PowerPoint presentations of the plans that anyone could comment on. To illustrate, two quotes from both stakeholder sides emphasize the role of the platform for communication: "There was also this Tower Seminar that was held in , if you remember. It dealt with issues regarding high-rise construction…" "Actually, in the quite early phases, yet over several years, the chairman from organized all sorts of workshops among authorities and other . These workshops were held in the old movie theater of and all participants openly brainstormed ideas for future development, which were incorporated into future plans to some extent." However, the transparency and active informing about the project planning had their downsides. The designer team, on the other hand, disengaged external stakeholders from value-creating activities by concealing specific information and by not responding to all references from external stakeholders. This practice emerged from the interactions between Park City's representatives, the Residents' association, End-users and the Local and National Cultural and Environmental Bureaus in the distinct briefings. Many of these external stakeholders wanted to know more specific details about the project's development, especially concerning economic aspects, such as total construction volume, floor and square meter prices, and other financial factors behind the planning development.
The internal stakeholders interpreted that, in order to have a manageable ensemble and to protect and proceed with the planning robustly, not every stakeholder could be satisfied or listened to, and some issues were better kept in their own hands. Thus, the internal stakeholders concealed certain information on purpose and did not respond to certain references from these external stakeholders. Two quotes illustrate the ways selective referencing occurred: "Residents' Association is not against the development basically, but they support the Local Environmental and Cultural Bureau's opinion that high-rise construction should be forbidden. But we know that there exists no profitable economic function for demolishing low-rise buildings and re-building low-rise buildings; hence, we have to hold some information about the total construction volume to ourselves and believe that we can see this through even after appeals." "We discussed with them the architectural aspects and how this district is going to be developed. We also highlighted the boundary conditions . But we realized that the economic and financial boundary conditions are the stumbling block. These were never discussed properly with us, and we then of course think that they try to only maximize efficacy and profits. The transparency particularly ends at the side of the real estate owners and the private sector. The economic and financial parameters are never brought up, even though they de facto affect ." The Residents' association, Construction office, Building inspection and Surveillance authority, Urban planning authority and Local Cultural and Environmental Bureau still fostered and held tightly to the cultural heritage values, which contradicted the project plans that aimed at creating modern high-rise buildings and businesses, slowing down the late project planning phase. Previously, a representative, Designer, from Designer Company had been the specific individual who would have discussions and present the master and reference planning to these external stakeholders for timely engagement and disengagement. However, this Designer was not effective enough in the long term, and the designer team decided to change their tactic and put another individual forth, this time Architect and partner from Architect Company, who held a professorship at a local university and was nationally famous for his designs and references. This specific individual had the capability and prestige to arbitrate diverging interests among internal and external stakeholders. Architect could especially engage these external stakeholders by opening a communication channel and an opportunity for external stakeholders to influence, but concurrently disengage them by defending how the new plans would also take into account the cultural and historical heritage values with novel park and garden areas. Several informants described the role of this Architect in the project: " is now here thinking about the district center and the display of the architecture. He then for instance justifies these for the authorities, how they look and why they look like that. And when they provide distinct comments, he is of course very capable of addressing them." " from has for sure been a significant person in gaining acceptance from for future plans and operations." The Project manager from Park City and Consultant and partner from Consultant Company also participated in several meetings and discussions and acted as heralds in arbitrating the interests between the
external and internal stakeholders. These heralds simultaneously provided a communication channel and an opportunity for external stakeholders to influence, yet still defended existing ideas and plans. To illustrate, several stakeholders praised these two persons: "If I remember correctly, the wanted to be the trustworthy contact person toward . He for instance conveyed information from the authorities to us." " has been a significant person indeed. He and their organization possess capabilities in following through these kinds of large projects. He is excellent in terms of contractual issues but also in integrating other to the process of doing this project." However, the opposition from the Local Cultural and Environmental Bureau, the Residents' association and End-users was still very strong during the final planning phase, resulting in the rejection of the first town plan proposal in court. During the first town plan proposal, the internal stakeholders had active discussions with the external stakeholders about boundary conditions for development, even though this first proposal ended up being rejected. The designer team interpreted that they had to engage external stakeholders better in the planning to get the official town plan proposal approved in the future. Thus, the designer team's representatives invented a novel practice, the so-called District's Development Theses, for engaging external stakeholders. In practice, the Park City's Project manager wrote theses about district development that functioned as a common guideline or ground rule, and published them on the previously opened website. The contents of the Theses described how the new development direction would take into account the external stakeholders' boundary conditions and integrate the cultural and historical heritage values. The Theses of course required the internal stakeholders to somewhat restrain their own planning ideas, for instance regarding high-rise construction. Both external and internal stakeholders characterized the role of the development theses: "It is of course partly official propaganda, but there are of course good things included. There are good things… such as enlivening the district." "The were important and still are, as they set the common direction for the development and, in a positive way, we can advance things according to it." During the construction and early operations phase, the internal stakeholders comprehended that they needed to engage external stakeholders even more to progress further. The internal stakeholders' Consultant and partner, Architect and partner, Designer, Project manager, Trade promoter, Development director and Manager of real estate development collectively altered their approach, and started to actively organize meetings and discussions with the external stakeholders' Local Cultural and Environmental Bureau, Construction office, Building inspection and Surveillance authority and Urban planning authority in particular. These representatives of the internal stakeholders listened to their expectations about future phases, similarly as in the early project phases in the different briefings, but now even more personally and transparently. The dialogs enhanced joint benefits for internal and external stakeholders and created harmony, securing the official town plan in the near future. Two quotes illustrate this: "I think it was who personally went to meet them and tried to discuss ideas of the future development." "Well, in regard to some smaller issues, I have personally visited and had discussions with the Park City's
preparatory officer. Especially when has turned up, then I have familiarized myself with it and maybe provided some viewpoints to it." The designer team began to engage the Residents' association, Customers and End-users even more by incorporating their ideas during the later construction phase and early operations phase. In particular, Insurance Company's CEO, Real estate investment manager, Fund manager and Head manager of real estate investment hired commercial consultants to conduct a commercial enquiry of the district area. The hired consultants conducted a large survey in the district by interviewing and collecting data and feedback from the district's current and potential End-users and Customers. They also gathered similar data from other rival districts for comparison. The aim was to develop a more advanced commercial profile and garden district atmosphere, including interior designs for the future premises, by listening to the actual End-users and Customers and comparing the existing profile to other districts. The internal stakeholders could also interpret from the feedback and dialogs what was working in the current solutions and what was not, helping to adjust the contents of the Theses accordingly. Two quotes from different stakeholders illustrate the gathering of feedback: "I think that everything started to fall into place when we started to collect feedback via the market research." "Actually, they conducted this kind of enquiry about what is needed in this district and what kind of End-users we have here, and then they compared the results with the capital downtown's profile. They did interviews and approached this issue, a bit, if I may say so, more from a psychological perspective. Like what would the new atmosphere be like in the new district in the future, so that it would serve the people who visit there. They depicted the people that visit the district in four elements, whether you are rational or more impulsive and so forth. They developed this kind of synthesis, and based on that we would start to create the district, or actually the feeling there, for End-users and Customers." We have depicted and summarized the identified nine practices in Fig. 1. We divided the practices by whether the internal stakeholders utilized them for engaging or disengaging the external stakeholders. Here, we have grouped external and internal stakeholders into one entity for clarity in analyzing the practices. Doing so also served an analytical purpose, because creating the model would have otherwise been impossible. However, in practice, stakeholders do not form a monolithic group of actors. Founding a joint organization and utilizing heralds were two practices that the internal stakeholders used simultaneously for both engaging and disengaging the external stakeholders. The lifecycle axis is relational and we have used it as a means of illustrating the order of the practices. As illustrated in Fig.
1, the practices to engage or disengage the external stakeholders changed back and forth over time through the interactions among the external and internal stakeholders. The figure demonstrates that the practices did not evolve linearly, but were interrelated and contributed to each other through a highly dynamic nonlinear process. The establishment of DAD both formalized stakeholder boundaries and set up a mutual communication channel between the external and internal stakeholders. The internal stakeholders instigated this; by implementing the visualization tool for informing external stakeholders and simultaneously implementing the reinforcement tool, they also defended their governance structure to the external stakeholders in a timely manner. These tools ultimately set up two rather closed systems of internal and external stakeholders, which led to the need for discussion platforms between the two. We can see from the figure that, even though the discussion platforms actively engaged the external stakeholders in the decision-making, there were downsides due to this rather boundaryless approach. As a result, some specific information had to be concealed and some external stakeholders' references were not answered. The interesting thing is that the aforementioned heralds facilitated this dichotomy, meaning that they controlled the discussion platforms as gatekeepers and decided which information and ideas were shared and listened to, and, ultimately, which were not, as depicted by the double-headed arrows in Fig. 1. Due to this mediating process, one of the heralds established the Development Theses as ground rules for future development, which met the external and internal stakeholders' requirements halfway. This led rather linearly to maintaining active personal dialogs and gathering feedback from the external stakeholders. However, as seen in Fig. 1, the last three practices were interrelated and contributed to each other in the following way: the Theses set the guidelines for the contents of further dialogs, which led to the gathering of feedback about how they functioned. The internal stakeholders then received this feedback in the personal dialogs and fed it back into the Theses, which they updated over the system lifecycle. The dynamic pattern formed from both engagement and disengagement practices in Fig. 1 is what we named the pacing strategy. This pacing strategy takes into account the temporal perspective of stakeholder engagement and disengagement practices. We understand that the contents of these practices are contextually specific, but the logic behind the pacing strategy is to represent the dynamic pattern, which goes back and forth between engagement and disengagement, highlighting the crucial and timely role of disengagement in governing CoPS for systemic benefits. Based on our analysis, we interpreted four rationales for the identified practices. These rationales describe the more abstract reasons for engagement and disengagement practices from a systemic perspective. The internal stakeholders of the CoPS communicated widely about the system via two practices, which facilitated the framing of the megaproject's identity: founding a joint organization and implementing a visualization tool.
This framing initially involved the search for a common goal and the development of the project concept within a limited collective, which included the megaproject's key architects and the area's asset owners. In this process, clear boundaries between the internal and external stakeholders were drawn by founding the DAD organ, which also provided a clear basis for the identification of the governance structure. The exclusion of the external stakeholders from the CoPS during its initial phases ensured a focused framing process of the CoPS. The reason for this exclusion was to eliminate the need for complex negotiation and bargaining processes among external stakeholders with conflicting interests and to avoid the uncertainties, instability and progress delays that could arise from overly complex organizational arrangements. When the initial project concept was developed, the internal stakeholders implemented the novel visualization tool to actively communicate and frame the newly established megaproject's identity. Furthermore, it supported the interpretation processes of the external stakeholders about what the project is about, how they could attach themselves to it and what their roles could be within it. The framing of the complex product system was conducted in a sequenced manner to manage the growth of the organizational network and the boundary arrangements. The internal stakeholders introduced and utilized the visualization and reinforcement tools for the megaproject to legitimate the stakeholders' roles in the governance of the CoPS. The two planning tool practices formalized the roles of the internal stakeholders as the key architects in the system and, on the other hand, legitimized and marginalized the roles of the external stakeholders as those whose project-related activities and input provision would be strictly controlled by the internal stakeholders. The visualization tool was used to inform the external stakeholders that the internal stakeholders were the focal designers of the megaproject and in charge of its governance. The reinforcement tool assigned such value-adding roles to the external stakeholders that mainly supported the purposes of the internal stakeholders. That is, the internal stakeholders handled the external stakeholders as instruments to achieve the desired organizational structure for the megaproject and to implement the megaproject in a timely manner. These tools also established the communication channels that governed the interaction and relationships between the internal and external stakeholders. To activate the external stakeholders' contributions to the development of the CoPS, the internal stakeholders concurrently created urgency for cooperation, but also signaled to the external stakeholders that certain activities were being handled by the internal stakeholders only. The mobilization of the external stakeholders and the maintenance of dynamic interaction with them contributed to the desired governance structures of the megaproject and its timely progress. Three diverse practices (forming discussion platforms, ignoring references and utilizing heralds) maintained the momentum for participation. The formed discussion platforms structured communication and participation processes with clear input and decision-making windows. This formalized and scheduled "gate-based" participation process supported the creation and maintenance of momentum for participation by regularly creating a sense of urgency around selected issues during the system lifecycle, such as high-rise
construction. The active dialogs with the external stakeholders also emphasized the positive societal implications that their participation in the provision of system inputs and complementary activities would have, further maintaining the momentum for participation. It is notable that, to maintain the momentum for participation, "planned" disengagement practices, such as ignoring certain references or requests from the external stakeholders, were also crucial, especially to keep the external stakeholders active and on the alert. The disengagement activities may therefore also have played a significant role in establishing urgency among the external stakeholders to mobilize and motivate them. For instance, when the Residents' association noticed that their voices were not being heard properly but they still had the opportunity to influence certain issues, they raised their interests to influence the complex system, which prevented collective inaction and maintained the active momentum for participation. To balance this concurrent engagement and disengagement in a timely manner, the use of specialists as heralds seemed crucial in arbitrating the external and internal stakeholders' interests regarding the matters that were open for debate, while still "favoring" the internal stakeholders' governance structure and the timely progress of the megaproject. That is, the heralds provided further opportunities for the external stakeholders to participate and kept them active, at least ostensibly, i.e., providing a belief in an opportunity to influence. When entering the later lifecycle phases of the megaproject, all the major and significant issues and challenges regarding the system's scope had been solved. Thus, to proceed with the CoPS in its later lifecycle phases, the interactions between the internal and external stakeholders could be shifted to active engagement of external stakeholders via three practices: forming development theses, organizing personal meetings and organizing inquiries. The rationale behind these practices was to expand the internal stakeholders' governance structure to also include the external stakeholders, and to empower them with design rights for the system concerning the remaining minor and medium issues and details, without creating too much complexity or uncertainty anymore for the timely progress of the megaproject. Expanding the design rights and opening up the governance structure of the CoPS had positive implications of mutuality and collaboration between the internal and external stakeholders, as the actors were able to aggregate those remaining conflicting interests that contributed to the greater good. The development Theses were a means of making the value of collaboration explicit during the later phases of planning and of engaging a variety of external stakeholders widely and openly in the planning of the CoPS, with positive societal impacts. In the early operations phase, the construction site and its logistics caused noise emissions and problems for the district's accessibility. Thus, the personal meetings with transparent and active dialogs between the internal and external stakeholders offered additional means to influence the design, open up the governance, and make the value of collaboration explicit, which further motivated the external stakeholders. Lastly, to fully guarantee access to influence the CoPS's design, organized inquiries and feedback requests were effective means of collaboration, showing every stakeholder that their contribution was needed for the long-term governance,
timely progress and problem solving of the megaproject's remaining challenges. Based on our analysis of the rationales and enacted practices in the investigated megaproject, we suggest a processual model that delineates the findings. The developed processual model of rationales and enacted practices in CoPS is presented in Fig. 2. Our empirical investigation of the district development megaproject revealed that stakeholder engagement and disengagement in CoPS is a multifaceted phenomenon. We identified several practices and rationales that internal stakeholders used for engaging and disengaging external stakeholders in a timely manner. We discuss the theoretical contributions and managerial implications of this study in the following two sub-sections. We conclude with limitations and suggestions for further research. Stakeholder engagement and disengagement practices in a complex product system form a cyclical process. Our observation of the pacing strategy includes varying practices that cycle back and forth between engagement and disengagement of external stakeholders. Our findings about practices contribute to previous knowledge of stakeholder management in CoPS in three major ways. First, our Fig. 1 highlights the crucial role of timely disengagement of external stakeholders in governing CoPS. This means that while our findings demonstrate that stakeholder engagement is certainly important for overall value creation and system-wide benefits in CoPS contexts, it is just as important to disengage external stakeholders in a timely manner for a balanced approach to reaching the systemic outcomes. Our findings here denote that internal stakeholders can utilize the same engagement or disengagement practices toward the same or different external stakeholders in a value-adding manner without any significant damage to the inter-organizational relationships. This indicates that it is not worthwhile for the internal stakeholders to develop a completely open or closed stakeholder system with clear-cut boundaries for decision-making, but that it is beneficial to have some degree of permeability in the system. This permeability accentuates the importance of timely disengagement of external stakeholders. Previous research on stakeholder management in CoPS contexts from the systemic view has focused unilaterally on stakeholder engagement, particularly on the benefits of stakeholder engagement in reaching systemic outcomes. However, the potential downsides of engagement, the benefits of disengagement, and a balanced view of both stakeholder engagement and disengagement in reaching systemic outcomes have remained mainly unexplored. Thus, our findings here are rather antithetical and add to this literature by showing the importance of timely disengagement of external stakeholders for enhancing systemic outcomes, and by providing a balanced approach to the interplay of stakeholder engagement and disengagement in governing CoPS. We further argue that this timely disengagement and then re-engagement of specific external stakeholders is a novel and specific feature of CoPS when compared to other kinds of contexts, where this kind of behavior may deteriorate stakeholder relationships and result in forms of interaction that make it impossible to reach the desired outcomes. Second, our narrative of engagement and disengagement practices is described at a very fine-grained level and our practice findings are empirically grounded. Indeed, our processual description provides a temporal perspective on the engagement and disengagement
practices that covers the entire CoPS lifecycle from the early front-end phase to the operations phase. In practice, the internal stakeholders did not have any pre-set toolkit from which they could have chosen practices and deliberately utilized them for specific external stakeholders. In fact, the practices arose from the networked interactions including both internal and external stakeholders. Previous research on stakeholder management in CoPS contexts from the systemic view has provided knowledge of generic abstract engagement processes, such as coordination, consultation and compromising, that can lead to systemic value outcomes. Additionally, previous research has highlighted the influence of different general forms of engagement interaction, namely collaboration and cooperation. Descriptions of such processes and forms of engagement interaction have remained rather theoretical, static and distant from empirical data. Moreover, extant research on stakeholder management in CoPS contexts from a firm-centered perspective has identified various general strategies to manage external stakeholders, ranging from engagement to disengagement, even though empirical and processual investigations of the actual practices through which these strategies are enacted have been more limited. Therefore, our findings here elaborate previous knowledge of stakeholder management in CoPS contexts by providing empirical grounding, causal logics and temporal dynamism over the system lifecycle. This is also unique in the sense that prior research on stakeholder management in CoPS contexts has tended to focus on the interactions of internal stakeholders in early lifecycle phases, instead of analyzing the interactions between internal and external stakeholders over the entire lifecycle. Third, our findings address the importance of utilizing both engagement and disengagement practices on a continual basis to maintain interaction and timely progression throughout the entire system lifecycle. Previous stakeholder management literature in CoPS contexts has highlighted a paradox of whether to engage or disengage external stakeholders, particularly in the early lifecycle phases. That is, some scholars argue that transparency and broad engagement of external stakeholders in early lifecycle phases contribute to systemic benefits, as all stakeholders' perceptions are included in the success criteria and objective definition. Nonetheless, other scholars argue that this kind of boundaryless and inclusive approach to stakeholder management may be extremely resource-intensive and costly, with the risk of extremely complicated decision-making with lock-ins and dead ends. Therefore, internal stakeholders can favor disengagement approaches during the early lifecycle phases to secure the timely progress of the CoPS. Our findings show that the answer to this paradox lies not in a dichotomy of engagement or disengagement, but in the gradual employment of both engagement and disengagement approaches simultaneously. The rationales for stakeholder engagement and disengagement in complex product systems are bound to the system lifecycle. Based on our analysis of the rationales for engaging and disengaging external stakeholders in the present case, we suggest a processual model for stakeholder management in CoPS. The model in Fig.
2 identifies four groups of rationales: framing of the system, legitimating the governance structure of the system, maintaining dynamic stakeholder interaction, and expanding the design rights within the system, which are bound to the system lifecycle and the evolution of stakeholder interaction in the CoPS context. Previous research on stakeholder management in CoPS contexts from the systemic perspective has provided limited knowledge of such CoPS-context-specific rationales and their temporal order. The resource-based perspective has highlighted the rationale of external stakeholder engagement in the diffusion of new ideas and innovations. The knowledge-based perspective has highlighted the role of a shared knowledge base and knowledge sharing as important rationales for stakeholder engagement. Lastly, the institutional perspective has accentuated the rationale of stakeholder engagement in the formation of organizational legitimacy and reputation. These rationales in the existing literature are primarily theoretically oriented general insights about engaging external stakeholders and largely dismiss the temporality and the disengagement aspect. The four novel rationales that we identified are empirically driven and specific to the lifecycle of a complex product system. The value of our findings is that we elaborate the previously identified, more theoretically oriented general insights on engaging external stakeholders in the specific context of CoPS with the disengagement component. More importantly, our findings suggest a temporal ordering for these schemes of reasoning and show how the rationales may change as the CoPS proceeds through its lifecycle. A processual understanding of the functions and reasons behind stakeholder engagement and disengagement in CoPS contexts is valuable for further enlightening the understanding of stakeholder management dynamics in inter-organizational systems. Which stakeholders to engage and disengage, and when and how, are highly relevant challenges for managers of CoPS. The findings of this study on the interplay of stakeholder engagement and disengagement practices, and particularly the pacing strategy, suggest that managers of CoPS need to adjust their stakeholder management strategies to the changing nature of the context. This means that managers need to adopt a flexible and balanced approach to stakeholder management, and be able to change back and forth from stakeholder engagement to disengagement when circumstances change. Additionally, both engagement and disengagement practices can be used toward the same stakeholder in a value-adding manner over time. This is an important lesson for managers' stakeholder management process, as practical and academic advice has typically either advocated an in-depth and inclusive engagement of external stakeholders for sustained value creation throughout the system lifecycle or suggested that managers should consistently disengage the non-value-adding external stakeholders over the lifecycle. Another crucial lesson for managers is that the rationales for stakeholder engagement and disengagement in CoPS contexts should be approached from a systemic perspective instead of focusing only on a single organization's short-term cost and scope effects.
Instead of trying to optimize the stakeholder management activities from their own organizations' perspective, managers need to be concerned about the system-level value creation and outcome of the CoPS, and develop the stakeholder engagement and disengagement activities in co-operation with other actors, including both internal and external stakeholders. Concrete actions can be derived from our findings concerning the above two lessons for managers. Managers can establish, together with other actors and stakeholders, jointly controlled inter-organizational bodies and working groups, where further round table discussions and collaborative meeting routines can be set up with external stakeholders. Additionally, joint planning tools and principles among stakeholders can help to establish design rules for development. These activities are particularly relevant during the early stages of CoPS. They serve as platforms for receiving inputs and collecting ideas from a wide range of stakeholders that contribute to timely planning. Managers maintain these activities throughout the CoPS lifecycle, but the key is to use feedback to identify whose inputs and engagement are needed in a timely manner for further progress, and whose are not. In later CoPS lifecycle phases, managers can design, together with other actors, distinct workshop and seminar formats to have active dialogs with external stakeholders, distribute information, test novel ideas preliminarily, and, more importantly, receive feedback about what is currently working and what is not. These kinds of workshops and seminars have at least symbolic value, meaning that stakeholders have at least an ostensible opportunity to influence. Finally, hiring prestigious persons as heralds to arbitrate different stakeholder interests in all of these activities can be crucial for timely progress. While our study reports an in-depth analysis of stakeholder management practices and rationales in a complex product system, it has some limitations. First, the study focuses only on one CoPS in the specific context of a district development megaproject. Other contexts might have different challenges and stakeholder activities and practices. Hence, one should be cautious in generalizing the findings. Second, we suggest that the pacing strategy, a cyclical engagement and disengagement pattern, can be found in other contexts, even though the content of this pattern, the actual practices and interactions, will likely be different across contexts. Third, our analysis grouped external stakeholders into one entity for clarity in analyzing the practices pursued by the internal stakeholders. However, stakeholders do not form a monolithic group of actors in practice, and more research on stakeholder-specific practices is therefore advised. Fourth, we acknowledge that the rationales for expanding or narrowing down collaboration in CoPS may vary for several reasons related to, for example, risks, growth aspirations, and factors in the competitive environment. Thus, we call for more research in different contexts with both quantitative and in-depth qualitative analyses to identify other possible practices and rationales for stakeholder engagement and disengagement. Further research could, for example, assess engagement and disengagement of external stakeholders in other industry and cultural contexts, which would provide an avenue for further enhancing the contingency approach to stakeholder management. Moreover, future research could dig deeper into the patterns of engagement and disengagement, assessing, for example, how
the organizational architecture of the CoPS may affect the interplay of these two and the value creation of the multi-stakeholder system. Finally, building stronger linkages between the stakeholder management, practice theory and strategy-as-practice research streams would provide a fruitful avenue for the development of stakeholder theory.
Collaboration with stakeholders has become a cornerstone of contemporary business; however, absolute collaboration is not trouble-free. The present study explores how and why firms engage and disengage external stakeholders in their value-creating activities in complex product systems over time. From the existing research on stakeholder management, we know that actor roles, strategies, reasons and challenges of engaging external stakeholders in innovation and business activities vary across contexts. However, additional research is needed to construct a more comprehensive understanding of the practices, as well as their rationales, by which firms engage or disengage external stakeholders in complex product systems. Our empirical study of a European district development megaproject improves the current understanding of stakeholder management in complex product systems contexts. We derive nine practices and four rationales that describe the engagement and disengagement of external stakeholders over time. The study develops a processual model of stakeholder management in complex product systems with implications for both stakeholder management literature and managerial practice.
736
The cannibalization effect of wind and solar in the California wholesale electricity market
Increasing penetration of zero marginal cost variable renewable energy (VRE) technologies in wholesale electricity markets pressures electricity prices downwards due to the merit-order effect. This has far-reaching implications, not only for the wholesale electricity market itself, but also for VRE generators, policy makers and the electricity system as a whole. California has seen a significant increase of solar and wind electricity generation in recent years, reaching daily penetration records of 23.5% and 14.7% respectively between January 2013 and June 2017. This has led to the decline of wholesale electricity prices, with this decline being stronger at the times when solar and wind are generating more, undermining their unit revenues and value factors. We estimate the absolute and relative cannibalization effect of solar and wind technologies in the California wholesale electricity market for the period January 2013 to June 2017. We first calculate daily unit revenues and value factors from hourly data, and then estimate a time series econometric model of the four dependent variables as a function of solar, wind, natural gas and net import penetration, gas prices and electricity consumption. We further explore non-linearities to test the stability of the parameters across different consumption and penetration ranges. With this model we can estimate not only how increasing a technology's penetration undermines its own value, but also the cross-cannibalization effects between technologies. The absolute cannibalization effect indicates by how much the revenues per MWh decline for generators as their technology's penetration increases. The relative cannibalization effect shows by how much the value of the technology-specific electricity drops with respect to the average value of electricity in the wholesale electricity market as penetration increases. In other words, it represents the cost of the generation variability. Our primary data are provided by the California Independent System Operator (CAISO) for the day-ahead wholesale electricity market. However, these data do not include solar generation from distributed small-scale installations. Therefore, we use centralized-only data from CAISO, but also combine it with distributed solar generation estimations by the Energy Information Administration, and find that excluding distributed generation would lead to an overestimation of the solar cannibalization effect. This paper contributes to the merit-order literature by going one step further and jointly quantifying the effect of wind and solar penetration on their own and each other's unit revenues and value factors ex-post with historical market data. This is the first paper to our knowledge to jointly estimate cannibalization and cross-cannibalization effects within and between technologies for both solar and wind based on actual market data. Finally, by including distributed generation we find that the literature on the merit-order effect might be overestimating the effect of solar penetration when distributed generation is omitted and it represents a significant share of the total solar generation. The remainder of the paper is structured as follows: Section 2 reviews the literature on the merit-order effect and the value of renewable electricity. Section 3 presents the data and method. Section 4 explains the results, focusing first on the general results of the relative and absolute cannibalization effects and then exploring potential non-linearities related to the effect of different consumption and penetration levels on wind and solar value factors. Section 5 discusses the results and their implications, and compares them with the previous literature. Section 6 concludes.
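To make the calculation described above concrete, the following is a minimal sketch in Python (pandas) of how daily unit revenues and value factors could be computed from hourly day-ahead prices and generation. The file and column names are hypothetical placeholders rather than the actual CAISO/EIA field names, and the simple daily average price is used as the reference value; the paper's exact aggregation choices may differ.

```python
import pandas as pd

# Hourly day-ahead market data; column names are illustrative placeholders.
# Expected columns: timestamp, price ($/MWh), solar_mwh, wind_mwh, load_mwh.
hourly = pd.read_csv("caiso_dam_hourly.csv", parse_dates=["timestamp"])
hourly["date"] = hourly["timestamp"].dt.date

def daily_unit_revenue(df: pd.DataFrame, gen_col: str) -> pd.Series:
    """Generation-weighted average price received by a technology each day ($/MWh)."""
    revenue = (df["price"] * df[gen_col]).groupby(df["date"]).sum()
    generation = df[gen_col].groupby(df["date"]).sum()
    return revenue / generation

# Time-weighted average day-ahead price as the daily reference value of electricity.
avg_price = hourly.groupby("date")["price"].mean()

daily = pd.DataFrame({
    "ur_solar": daily_unit_revenue(hourly, "solar_mwh"),
    "ur_wind": daily_unit_revenue(hourly, "wind_mwh"),
    "avg_price": avg_price,
})

# Value factor: unit revenue relative to the average market price (dimensionless).
# A value factor below 1 means the technology earns less than the average MWh sold.
daily["vf_solar"] = daily["ur_solar"] / daily["avg_price"]
daily["vf_wind"] = daily["ur_wind"] / daily["avg_price"]
```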
The cannibalization effect is caused by the merit-order effect: for any given demand, zero marginal cost electricity technologies entering the market shift the supply curve to the right and therefore the marginal matched price declines. This effect has been widely identified and quantified in the literature for markets with high penetration of variable renewables such as Texas, Germany, Italy or Spain. Although these studies focus on different aspects of the merit-order effect, such as its relation to support policies or distributive effects, they all find conclusive evidence of the effect of variable renewables in lowering wholesale electricity prices. The most common methods to quantify the merit-order effect are either market simulations or time series econometric models. Electricity is a perfectly homogeneous commodity only for a given time, location and lead-time between contract and delivery. This entails that electricity prices vary across these three dimensions even holding all other factors constant, and therefore the value of VRE electricity depends on the time and place of generation and the level of uncertainty about future production. Since VRE technologies have a higher level of uncertainty and generate electricity only when and where the resource is available, their value significantly differs from the value of conventional dispatchable electricity generation technologies. The value of variable renewables has been quantified in the literature either ex-ante, with dispatch or dispatch and investment models, or ex-post, with econometric models and historical market data. Most of these studies agree that the value of VRE tends to decline as penetration increases. For instance, Zipp studies how the drop in prices due to the merit-order effect translates into a decline in the revenues of VRE generators in Germany. Likewise, Clò and D'Adamo estimate the effect of solar penetration on solar and gas unit revenues and value factors in the Italian day-ahead wholesale electricity market. For the specific case of California, Woo et al. find evidence of a small but significant merit-order effect caused by both wind and solar in both the day-ahead and the real-time electricity markets. Likewise, both Borenstein and Lamont identify value factors above 1 for solar, declining with penetration at low penetration levels. The most comprehensive study assessing the value of VRE in California has been done by Mills and Wiser, who use a dispatch and investment model to estimate ex-ante the long-term value of wind and solar up to 40% penetration, also finding evidence of a downward trend as penetration increases. We build upon this previous literature to estimate the cannibalization and cross-cannibalization effects of solar and wind technologies, i.e.
how increasing solar and wind penetration affects their own and each other's unit revenues and value factors. In the discussion section we expand this brief literature review by comparing our results with the most relevant findings of the aforementioned references. We use hourly data from the California day-ahead wholesale electricity market (DAM) provided by CAISO for the period January 2013 to June 2017: day-ahead electricity prices, day-ahead demand, day-ahead electricity imports and exports, solar and wind day-ahead generation forecasts, and gas prices. Since we study the day-ahead electricity market, it is more accurate to use day-ahead forecast generation and demand rather than realized data, because DAM agents base their decisions on these forecasts given the uncertainty regarding actual future generation and demand. While the correlation between day-ahead forecast and actual demand is almost perfect, the correlation between forecast and actual generation is lower for solar and, to a larger extent, for wind, so using realized data rather than day-ahead forecasts would likely bias our results. CAISO does not provide, however, data on natural gas generation, which we obtained from the Environmental Protection Agency Air Markets Program Data. These data have two caveats: they are realized rather than day-ahead values, and they measure gross load rather than net generation. This entails that gas penetration is overestimated and therefore its parameter is likely to be underestimated. Therefore, although the specific parameter of the gas penetration variable should not be trusted, it is still a useful control for our variables of interest: wind and solar penetration. Another limitation of the data concerns the electricity trade between electricity markets. Although we control for net imports, it would be more accurate to add solar/wind imports to solar/wind generation to obtain exact solar/wind penetration. Unfortunately, there is no import/export decomposition at the hourly level, so controlling for net imports is the best we can do with the available data.
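The penetration shares used as regressors could be assembled along these lines; this is a minimal sketch assuming hourly CAISO-style series with placeholder column names, and an optional daily EIA-based estimate of distributed solar generation that is added to both generation and consumption, as described in the data section below.

import pandas as pd

# hourly: DataFrame with columns ["solar_gen", "wind_gen", "gas_gen", "net_imports", "load"] in MWh
# dist_daily: optional daily estimate of distributed solar generation (MWh), indexed by date
def daily_penetration(hourly, dist_daily=None):
    daily = hourly.groupby(hourly.index.date).sum()
    pen = pd.DataFrame(index=daily.index)
    pen["solar_pen"] = daily["solar_gen"] / daily["load"]
    pen["wind_pen"] = daily["wind_gen"] / daily["load"]
    pen["gas_pen"] = daily["gas_gen"] / daily["load"]
    pen["net_imports_pen"] = daily["net_imports"] / daily["load"]
    if dist_daily is not None:
        # distributed-included variant: distributed solar enters numerator and denominator
        pen["solar_pen_incl_dist"] = (daily["solar_gen"] + dist_daily) / (daily["load"] + dist_daily)
    return pen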
Fig. 1 shows the daily solar and wind unit revenues and value factors observed in the California DAM for the period January 2013 to June 2017, which are the dependent variables of our models. While solar and wind unit revenues have a strong positive correlation, solar and wind value factors move in opposite directions. Both solar and wind unit revenues (UR) declined between January 2013 and June 2017. Although the average solar UR were $2.9/MWh higher than those of wind in 2013, solar UR decreased faster than wind UR and started to lie below wind UR from 2015 onwards, being $4/MWh lower than wind UR in 2016. Within four years, solar UR fell by almost $20/MWh, from $45.9/MWh to $26/MWh, and wind UR dropped by $13/MWh, from $43/MWh in 2013 to $30/MWh in 2016. Solar UR have even reached negative values due to the negative prices observed in the wholesale electricity market at peak solar generation times. A summary of the descriptive statistics is provided in Table 1; see Appendix A for detailed descriptive statistics. The average solar value factor (VF) was 105.5% in 2013, meaning that PV electricity was worth 5.5% more than the average unit of electricity traded in the wholesale electricity market during that year, thanks to the positive correlation between the solar generation profile and the demand load. At the same time, the wind VF was 99%, 1% less than a hypothetical flat generation profile, because wind generation is more randomly distributed across the day instead of being concentrated around the demand peak as solar generation is. However, while the solar VF decreased over the studied period, down to 85.7% in 2016 and only 63.7% in the first half of 2017, the wind VF slightly increased, up to 102% in 2016, meaning that the wind VF has exceeded the solar VF since 2014. In summary, while solar value was 6.5 percentage points higher than wind in 2013, their opposite evolution during this time meant that in 2016 the wind VF was 16.3 p.p. higher than solar. The evolution of unit revenues and value factors is determined by the levels and hourly distribution of solar/wind generation and of wholesale electricity prices. Electricity prices declined in California during the studied period due to three main causes: the merit-order effect, the decline of gas prices and the decline of net electricity consumption. Whereas total consumption increased when distributed electricity is considered, net consumption declined due to the stronger increase of self-consumption. The downward trend of unit revenues is a reflection of the merit-order effect, and therefore their evolution mimics that of wholesale electricity prices. Fig. 2 shows the average hourly distribution of wind and solar generation during the years 2013 and 2016. While both wind and solar have the same seasonal pattern, they have opposite daily patterns: solar generation is concentrated around noon, whereas wind generation is more randomly distributed across the day, with its minimum generation around noon. The comparison between both panels of Fig. 2 shows that whereas wind generation was higher than solar in 2013, solar installed capacity and generation increased faster than wind, surpassing it in 2014.
Fig. 3 likewise shows the average hourly distribution of wholesale electricity prices per month and year between 2012 and June 2017. We can observe not only that price levels generally fell during this period, but also that the hourly distribution changed. As solar penetration increases, prices drop at noon and spike in the evening when solar generation declines, creating a pattern that resembles the "duck curve" caused by solar generation in the net load profile. Therefore, whereas at low penetration solar generation is usually correlated with electricity prices, the drop in prices caused by the merit-order effect at the times of high solar penetration reverses this correlation, causing the downward trend of the solar value factor observed in Fig. 1. From the centralized-only dataset and the daily solar distributed generation interpolation we build the distributed-included dataset by summing up, on the one hand, centralized and distributed generation and, on the other hand, daily consumption and distributed generation. Thus, when computing the solar penetration of the distributed-included dataset we take distributed generation into account both in the numerator and in the denominator. The omission of distributed solar generation has two opposite effects. On the one hand, omitting distributed generation underestimates the effect of solar penetration, since the drop in wholesale prices caused by the lower net load would be attributed to lower demand rather than to the higher penetration derived from the self-consumed electricity. On the other hand, omitting distributed generation overestimates the cannibalization effect, since the drop in prices occurs at an apparently lower solar penetration level than is actually realized when distributed generation is considered. As we will see in the results section, the latter effect is stronger, so the omission of distributed generation causes an overestimation of the solar cannibalization effect. Finally, we do not take into account curtailment, understood as a "reduction in the output of a wind or solar generator from what it could otherwise produce given available resources". By ignoring curtailment, we estimate the cannibalization effect of actually generated electricity. An alternative approach would be to include curtailment and estimate the cannibalization effect of potential rather than actual electricity generation. Once we have the UR and VF time series, as well as wind and solar penetration and all the control variables, we test for the presence of unit roots. We perform augmented Dickey–Fuller (ADF) and Phillips–Perron (PP) unit root tests with constant and linear trend and the number of lags determined by the Akaike Information Criterion on each of the time series, in levels and first differences. Although the ADF test does not reject the null hypothesis of a unit root for the solar penetration, gas price and consumption time series in levels, the PP test safely rejects the unit root null for all the time series in both levels and differences at 1% significance. Since the PP test is non-parametric and asymptotically more efficient, we take our data as stationary. In both cases, we use heteroskedasticity- and autocorrelation-consistent standard errors when the Durbin–Watson and Breusch–Pagan tests detect autocorrelation and/or heteroskedasticity, respectively.
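As an illustration of these stationarity checks, the sketch below runs ADF and Phillips–Perron tests with constant and linear trend on a daily series. It assumes the statsmodels and arch packages are available; the series names in the usage comment are placeholders.

from statsmodels.tsa.stattools import adfuller
from arch.unitroot import PhillipsPerron

def unit_root_tests(series):
    """ADF (lag length by AIC) and Phillips-Perron tests, both with constant and linear trend."""
    clean = series.dropna()
    adf_stat, adf_p, *_ = adfuller(clean, regression="ct", autolag="AIC")
    pp = PhillipsPerron(clean, trend="ct")
    return {"ADF p-value": adf_p, "PP p-value": pp.pvalue}

# hypothetical usage on a daily value-factor series and its first difference
# print(unit_root_tests(solar["value_factor"]))
# print(unit_root_tests(solar["value_factor"].diff()))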
This section presents the main results of the analysis in terms of absolute and relative cannibalization and cross-cannibalization effects of wind and solar technologies, for centralized-only and distributed-included solar generation, with two econometric approaches: OLS and Prais–Winsten FGLS, with HC/HAC standard errors when there is autocorrelation and/or heteroskedasticity. Fig. 4 summarizes the main results, which are presented in detail in Tables 3–6. First, the consistency between both econometric approaches, widely used in the literature, confirms the robustness of our estimation. Additionally, Appendix B presents the results of the same regressions using total generation rather than penetration, showing consistent results, which again supports the robustness of our approach. Second, we observe that omitting distributed solar generation leads to an overestimation of the solar cannibalization effect and an underestimation of the wind cannibalization effect, while barely affecting the cross-cannibalization effects. It is therefore likely that the merit-order effect caused by solar penetration is overestimated in the literature when distributed generation is not included and it represents a significant share of total solar generation. Given these preliminary observations, we will focus on the results of the Prais–Winsten FGLS estimation with the distributed-included dataset. Our results confirm both absolute and relative cannibalization effects for both technologies, being stronger for solar than for wind. The same applies to the absolute cross-cannibalization effects. The value factor effects between technologies, however, are opposite: while wind penetration reduces the solar VF, solar penetration increases the wind VF. We define the absolute cannibalization effect as the decline of the technology-specific unit revenues as their respective market penetration increases. The upper panels of Fig. 4 show the partial effects of solar and wind penetration on solar and wind UR according to the model presented in Eq., and Tables 3 and 4 present the detailed regression results for solar and wind UR respectively. Regarding the control variables, both consumption and gas prices have positive effects on both solar and wind UR, as expected, since their positive effect on electricity prices, already identified in the merit-order literature, is directly reflected in the technology-specific UR. Gas penetration also has a positive effect, but of very small magnitude. Finally, net imports penetration negatively affects both solar and wind UR. The evolution of the UR is generally a reflection of the evolution of electricity prices, and therefore our results are consistent with the previous merit-order literature. More interesting, however, is the evolution of the value factors, on which we focus from now on. We define the relative cannibalization effect as the decline of the technology-specific value factors as their respective market penetration increases.
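The daily UR/VF regressions described in this section could be estimated along the following lines. This is a minimal sketch, assuming the dependent variables and regressors have already been assembled into one daily DataFrame with placeholder column names; it uses OLS with Newey–West (HAC) standard errors, plus statsmodels' GLSAR with AR(1) errors as a stand-in for the Prais–Winsten FGLS estimator, so it only approximates the authors' exact specification.

import statsmodels.api as sm

def estimate_cannibalization(df, dep="solar_vf"):
    """Regress a daily UR or VF series on penetration shares, gas prices and consumption."""
    regressors = ["solar_pen", "wind_pen", "gas_pen", "net_imports_pen",
                  "gas_price", "consumption"]
    X = sm.add_constant(df[regressors])
    y = df[dep]
    # OLS with heteroskedasticity- and autocorrelation-consistent (HAC) standard errors
    ols_hac = sm.OLS(y, X, missing="drop").fit(cov_type="HAC", cov_kwds={"maxlags": 7})
    # feasible GLS with AR(1) errors, similar in spirit to Prais-Winsten
    glsar = sm.GLSAR(y, X, rho=1, missing="drop").iterative_fit(maxiter=10)
    return ols_hac, glsar

The own-cannibalization coefficient is then the one on the technology's own penetration (e.g. solar_pen when dep="solar_vf"), while the cross-cannibalization coefficient is the one on the other technology's penetration.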
The lower panels of Fig. 4 show the partial effects of solar and wind penetration on solar and wind VF according to the model presented in Eq., and Tables 5 and 6 present the detailed regression results for solar and wind VF respectively. Regarding the control variables, both gas penetration and consumption have a positive effect on the solar VF but a negative effect on the wind VF, whereas neither net imports penetration nor gas prices have a significant effect on the solar/wind VF. As already mentioned, when testing a quadratic functional form of penetration and an interaction term between penetration and consumption, the results, although inconclusive, suggested potential non-linearities regarding these two variables. Instead of including quadratics and interactions, we decided to subset the dataset into smaller chunks for different penetration and consumption ranges and to estimate the regressions again for each subset, in order to obtain a more detailed illustration of these non-linearities. For the analysis of the consumption non-linearities we subset the dataset into four chunks corresponding to four quartile ranges with the same number of observations and then estimate a regression on each of the subsets. Since the subsetting process eliminates the time series nature of the data, we now estimate the regressions specified in Eq. by OLS with heteroskedasticity- and autocorrelation-consistent standard errors, which has been demonstrated to be robust in the previous sections. Fig. 5 shows the relative cannibalization effect across consumption levels, with point estimates located at the average value of each range and 95% confidence intervals, and Tables 7 and 8 present the detailed regression results. The left panel of Fig. 5 shows the effect of solar and wind penetration on the solar VF. Although at different magnitudes, the effect of both solar and wind penetration on the solar VF weakens as consumption increases, becoming insignificant in the highest consumption quartile. The right panel of Fig. 5 shows a similar pattern for the wind VF. Although the trend is not as smooth for the wind VF, we can also observe that the effect of solar/wind penetration is weaker at high consumption levels. The main differences are in the extremes. The effect of both solar and wind penetration is strong when consumption is low. When consumption is high, the positive effect of solar penetration on the wind VF vanishes, whereas the relative wind cannibalization effect is weaker than the average but still significant. Finally, we explore potential non-linearities regarding solar and wind penetration. For that purpose, we define three penetration ranges for solar and for wind of equal spread within each technology. Fig. 6 illustrates the results presented in Tables 9 and 10. The left panel of Fig. 6 shows the effect of solar penetration on solar and wind VF, indicating that the effect of solar penetration becomes stronger as penetration increases. This effect is, however, of opposite sign for solar and wind VF, as already suggested by the previous results. The positive effect of solar penetration on wind VF is only significant in the higher range of solar penetration.
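The subsetting exercises described above could be implemented as follows; this is a minimal sketch with the same placeholder DataFrame and column names as before, re-estimating the VF regression by OLS with HAC standard errors on each consumption quartile. The same pattern, with pandas.cut and equal-width bins on solar_pen or wind_pen, applies to the penetration-range analysis.

import pandas as pd
import statsmodels.api as sm

def effects_by_consumption_quartile(df, dep="solar_vf"):
    """Split the sample into consumption quartiles and re-estimate the VF regression on each."""
    df = df.copy()
    df["cons_q"] = pd.qcut(df["consumption"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
    results = {}
    for q, sub in df.groupby("cons_q"):
        X = sm.add_constant(sub[["solar_pen", "wind_pen", "gas_pen",
                                 "net_imports_pen", "gas_price", "consumption"]])
        fit = sm.OLS(sub[dep], X, missing="drop").fit(cov_type="HAC", cov_kwds={"maxlags": 7})
        results[q] = fit.params[["solar_pen", "wind_pen"]]
    return pd.DataFrame(results)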
The right panel of Fig. 6 shows the effect of wind penetration on solar and wind VF. The effect of wind penetration is in both cases stronger at high than at low wind penetration, although the trend of an increasing marginal effect as penetration increases is clearer for the solar than for the wind VF. The cannibalization effect of solar is generally stronger than that of wind because solar generation is concentrated in a few hours around noon, whereas wind generation is more evenly distributed during the day; this concentration is what reverses the initially positive correlation between the solar generation profile and the demand load, and therefore electricity prices. We also find evidence of decreasing solar/wind cannibalization effects as consumption increases and increasing solar/wind cannibalization effects as their respective penetration levels increase. Whereas the cross-cannibalization effects between technologies are generally negative, we find a positive effect of solar penetration on the wind value factor, at least at low consumption levels and high solar penetration levels. As shown in Fig. 2, solar and wind have opposite daily patterns. The increase of solar penetration at noon entails that wholesale electricity prices plummet at that time due to the merit-order effect. When the sun sets, solar generation declines fast, causing spikes in the wholesale electricity market. Bushnell and Novan show that the solar generation ramp-up in the morning and ramp-down in the evening cause a shift in the type of gas power plants generating during those times, from efficient combined cycle gas turbines to more flexible but higher marginal cost gas turbines, therefore causing an increase in wholesale prices at those times, as shown in Fig. 3. There is a chain of causation: whereas the immediate cause of the price spike, and of the consequent increase of the wind value factor, is the shift from combined cycle to gas turbines, this shift is in turn caused by the increased flexibility needs arising from the sudden drop of solar generation. Therefore, the gas turbine shift is the mechanism through which increasing solar penetration increases the wind value factor.
Indeed, looking at Figs. 2 and 3 together, we can see that the change in the hourly distribution of electricity prices caused by increasing solar penetration shifts the correlations between the hourly distribution of electricity prices and the hourly distribution of solar and wind generation. Thus, whereas at low penetration the positive correlation between solar generation and electricity prices entails a solar value factor above one, the solar VF declines as this correlation becomes negative. Conversely, the value factor of wind increases because the correlation between prices and wind generation turns positive when solar penetration is high. This explains the trends in wind and solar value factors observed in California in recent years. Whereas an increase in supply causes, ceteris paribus, a drop in prices in any market, the cannibalization effect is specific to variable renewables due to the combination of five factors: the zero marginal cost and non-dispatchability of variable renewable energy technologies; electricity being a non-storable and perfectly homogeneous good; and the power system stability constraint. Zero marginal cost and the non-storability of electricity entail that an increase in supply translates directly into prices. Additionally, the stability constraint entails that the floor is not even zero: prices can become negative when there is oversupply, because supply must equal demand at every moment. Finally, non-dispatchability entails that producers do not have any control over their supply once capacity has been installed, beyond curtailment. For the case of California, our results are consistent with the early findings of Borenstein and Lamont, who identified VF above 1 for solar at low penetration levels, declining with penetration. Likewise, the results observed for the solar and wind unit revenues are a direct reflection of the merit-order effect caused by these technologies and identified by Woo et al. in the day-ahead electricity market.
Mills and Wiser carried out the most comprehensive study of the value of renewables in California. They use a dispatch and investment model, which accounts for the long-term evolution of simulated "energy-only" day-ahead and real-time wholesale electricity markets. This entails that our results have to be compared with caution, since the methods differ significantly: while we estimate the ex-post cannibalization effect based on historical market data, their ex-ante model allows for the optimal capacity adaptation of the electricity system and therefore for endogenous mitigation of the cannibalization effect. Our conclusions are still consistent in several ways: the solar VF is above 1 at low penetration thanks to its capacity value, but drops considerably as its penetration increases; the unit revenues of solar are higher than those of wind, but due to the stronger decline of the solar UR, they are at some point surpassed by the wind UR; and both solar UR and VF decline as their respective penetration levels increase. After estimating the merit-order effect of wind and solar penetration in Austria and Germany, Zipp compares the evolution of wholesale electricity prices with the evolution of solar and wind unit revenues. Our results are also consistent with his, since he finds that the unit revenues of solar decline faster than the wholesale electricity price. The decline of the wind UR was lower than that of wholesale prices from 2015, causing the increase of the wind VF, the same phenomenon observed in the California electricity market. Finally, we can compare our VF results with the comprehensive review of Hirth, who presents three different estimations: a review of dispersed results found in previous literature; a simple econometric model with annual VF of several European countries; and a dispatch and investment model of the Northwestern European power market. The review results suggest the declining trend of the solar VF, as confirmed by our results. The ex-post estimation with market data indicates a higher cannibalization effect for solar than for wind, as also confirmed by our results. Although ex-ante results based on dispatch and investment models tend to present lower cannibalization effects, the results provided by Hirth based on the EMMA model are more pessimistic than those presented here for California, since they estimate that the VF of wind will drop to almost 50% at 30% penetration, and to less than 50% for solar at 15% penetration. The cannibalization effect has far-reaching implications for variable renewables in particular and for the whole electricity system in general. Although the levelized cost of solar has been declining rapidly during the last decades, even reaching, or being about to reach, grid parity in many countries, if the value of solar falls faster than its cost, the value-adjusted levelized cost would increase, thus jeopardizing the competitiveness of photovoltaics. Conversely, the positive effect of solar on the wind value factor suggests that there could be some level of complementarity between both technologies thanks to their opposite daily patterns. The cannibalization effect could be considered in cost–benefit analyses assessing the optimal penetration of variable renewables. The increasing cannibalization effect as penetration increases entails that the value of flexibility will increase in the future. Therefore, all measures oriented to increasing flexibility will mitigate the cannibalization effect.
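To make the value-adjustment argument concrete, the following small sketch illustrates one common way of combining a levelized cost with a value factor (dividing the cost by the value factor, so that a VF below 1 inflates the effective cost per unit of market value delivered). The numbers are purely illustrative assumptions and are not estimates from this study.

def value_adjusted_lcoe(lcoe, value_factor):
    """Scale a levelized cost by the inverse of the value factor."""
    return lcoe / value_factor

# illustrative only: a $40/MWh solar LCOE at a 0.85 value factor is equivalent,
# in value terms, to roughly $47/MWh of average-priced electricity
print(value_adjusted_lcoe(40.0, 0.85))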
In this sense, Mills and Wiser estimate that geographical diversification has the highest potential to mitigate the cannibalization effect of wind at high penetration levels, while low-cost storage has the highest potential in the case of solar. Finally, policies that guarantee a specific price for the electricity sold over a determined number of years will have an increasing cost for the government due to the widening gap between unit revenues and the guaranteed price. Likewise, once socket parity has been achieved, in terms of policy costs net billing self-consumption regulation is preferable to net metering, since it sets a price incentive for prosumers to self-consume the maximum possible amount of the electricity generated, thus minimizing the impact of distributed solar on the electricity system. We have first calculated daily solar and wind unit revenues and value factors from hourly data of the day-ahead wholesale electricity market in California for the period January 2013 to June 2017. While both solar and wind unit revenues declined during this period, solar and wind value factors evolved in opposite directions: the wind value factor slightly increased whereas the solar value factor strongly declined. We then estimated a time series econometric model with solar centralized-only and distributed-included data. We find that omitting distributed solar generation leads to an overestimation of the solar cannibalization effect. Our results confirm both the absolute and relative cannibalization effects of both solar and wind technologies. In other words, increasing solar and wind penetration undermines their own unit revenues and value factors. Both solar and wind cannibalization effects are stronger at high solar/wind penetration and low consumption levels. We have also identified cross-cannibalization effects between technologies. While wind penetration generally undermines the value of solar, the effect of solar penetration on the wind value factor is positive. This is caused by the opposite daily patterns of wind and solar generation and the fact that the latter is more concentrated around noon. The cannibalization effect has far-reaching implications. First, it jeopardizes the competitiveness of variable renewables if their value falls faster than their cost. It might also increase the policy costs of promoting renewables. Since VRE electricity has practically zero marginal cost, the stronger the cannibalization effect, the higher the value of flexibility. Storage and demand management allow the transfer of electricity loads between periods of low and high value, thus flattening the hourly distribution of electricity prices, and interconnections allow regions with complementary supply and demand patterns to be linked geographically. Finally, these results could be useful to adjust the levelized cost of electricity of wind and solar for the value of their electricity and to perform more accurate cost–benefit analyses, as well as to calibrate ex-ante dispatch and investment models. Further research should explore how measures to mitigate the cannibalization effect, such as storage, demand management and interconnections, possibly acknowledging seasonal patterns, would affect the value of variable renewables. The authors declare no competing financial interests.
Increasing penetration of zero marginal cost variable renewable technologies cause the decline of wholesale electricity prices due to the merit-order effect. This causes a “cannibalization effect” through which increasing renewable technologies’ penetration undermines their own value. We calculate solar and wind daily unit revenues (generation weighted electricity prices) and value factors (unit revenues divided by average electricity prices) from hourly data of the day-ahead California wholesale electricity market (CAISO) for the period January 2013 to June 2017. We then perform a time series econometric analysis to test the absolute (unit revenues) and relative (value factors) cannibalization effect of solar and wind technologies, as well as the cross-cannibalization effects between technologies. We find both absolute and relative cannibalization effect for both solar and wind, but while wind penetration reduces the value factor of solar, solar penetration increases wind value factor, at least at high penetration and low consumption levels. We explore non-linearities and also find that the cannibalization effect is stronger at low consumption and high wind/solar penetration levels. This entails that wind and (mainly) solar competitiveness could be jeopardized unless additional mitigation measures such as storage, demand management or intercontinental interconnections are taken.
737
Improving the rheometry of rubberized bitumen: experimental and computation fluid dynamics studies
Rheological properties and their measurement are of paramount importance for the development, performance and application of products across a wide range of industries. More specifically, bitumen technologists are used to monitoring the high-temperature viscosity of these binders during manufacture, compaction and quality control. Furthermore, the use of bituminous binders modified with polymers is a common practice used to enhance the performance of road pavements and roofing membranes. Nevertheless, measurements of their viscosity/rheology can be challenging due to the often heterogeneous structure of these complex systems, especially if these materials suffer from phase stratification within the time frame of typical viscosity measurements, as in the case of rubberized bitumen. A common instrument used to perform these measurements is the rotational viscometer, typically by means of the coaxial cylinder testing geometry. This setup consists of a static outer cylindrical vessel into which the test fluid is poured and a concentric cylinder which is inserted and then rotated at a given angular velocity so that the applied torque can be measured. A standard cylindrical spindle that can be used in the Brookfield viscometer is shown in Fig. 1. This arrangement, however, is incapable of providing reliable viscosity measurements of multiphase systems containing suspended particles with a density different from that of the continuous phase. In fact, when standard cylindrical spindles are used to measure the viscosity of fluids with suspended solid particles, if the two phases have very different densities the higher density component will tend to settle to the bottom during the measurement, rendering the data acquired of very little use. These types of scenarios are encountered in many types of complex systems, such as chocolate, plastics, rubber, ceramics, food, cosmetics, detergents, paints, glazings, lubricants, inks, adhesives and sealants. Rotational viscometers are provided with supplementary spindle designs that help in some of these cases. For instance, a vane spindle allows measurements to be performed on paste-like materials, gels, and fluids where suspended solids migrate away from the measurement surface of standard spindles. Furthermore, the Brookfield Helipath Stand is designed to slowly lower or raise a Brookfield T-bar spindle so that it describes a helical path through the test sample. Nevertheless, these accessories are not designed to minimize the heterogeneity of multiphase blends, especially when the sample has the tendency to stratify due to phase density differences. Fig. 1 shows the inefficiency of the vane spindle when used for viscosity measurements of suspensions. In an effort to improve the rheometry of these scenarios by overcoming the sample's phase separation issues, Lo Presti et al.
successfully designed, manufactured and tested a prototype of a Dual Helical Impeller (DHI) for Brookfield rotational viscometers. Experimental studies were carried out to evaluate whether the DHI is able to improve the degree of homogenisation of highly viscous fluids in order to obtain more realistic viscosity measurements of a blend of fluid with suspended particles. In comparison to the Brookfield standard cylindrical geometry, the DHI always predicted a different "apparent" viscosity. This result was explained by the capability of the DHI to create what have been likened to convective flows, as opposed to the axisymmetric swirling flow induced by the standard SC4-27 spindle. Rubberised bitumen is a complex system of the type described above, where the bitumen is the fluid matrix and the particles are the swollen tyre rubber crumbs. These two components have a moderate difference in densities, and for this reason phase separation may not occur within standard rotational viscosity measurements at 135 °C. However, where long equilibration times, high percentages of modifier, higher testing temperatures or high spindle speeds are involved, the phase separation issue is very likely to occur, especially for a wet-process, high-viscosity binder. Furthermore, this issue is particularly relevant within the product development of rubberised binders, where the rotational viscometer is used as a mixing device offering continuous monitoring of the viscosity. In fact, in this scenario the processing is carried out at high rotational speed and at a temperature where the bitumen viscosity is quite low and the rubber particles, not yet swollen, tend to agglomerate in layers, mainly at the bottom of the tube. The DHI presented above was developed specifically to solve this type of issue within the low-shear development of rubberised bitumen. That research showed that carrying out measurements of rubberised bitumen with the DHI at 135 °C reduces the initial effort needed to accelerate a bitumen-rubber blend from a stationary position and provides more stable viscosity readings. This allowed the authors to describe these measurements as "more realistic". However, despite these satisfying results, the mechanisms behind the enhanced mixing efficiency provided by the DHI, and the actual level of enhancement, were not clear, and this is what the present study aims to clarify. In order to provide the reader with further information for the interpretation of the presented results and conclusions, the following sections provide a background on measuring viscosity by means of a rotational viscometer and a brief review of the use of CFD for modelling the mixing of complex fluids. So, since all the variables on the right-hand side of Eq. are measurable, the viscosity can easily and reliably be determined.
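Since the equation itself is not reproduced in this text, the sketch below shows the standard coaxial-cylinder (Margules) relation that such a measurement relies on: the torque M on a cylinder of radius R_i and immersed length L rotating at angular velocity omega inside a cup of radius R_o gives the Newtonian viscosity directly. The numerical values in the example are illustrative assumptions only, not instrument specifications.

import math

def newtonian_viscosity_from_torque(torque, omega, r_inner, r_outer, length):
    """Margules equation for a coaxial-cylinder viscometer:
    eta = M * (1/Ri^2 - 1/Ro^2) / (4 * pi * L * omega)"""
    return torque * (1.0 / r_inner**2 - 1.0 / r_outer**2) / (4.0 * math.pi * length * omega)

# illustrative numbers only (SI units): 0.1 mN*m of torque at 100 rpm
omega = 100 * 2 * math.pi / 60  # rad/s
eta = newtonian_viscosity_from_torque(
    torque=1e-4, omega=omega, r_inner=0.0059, r_outer=0.0095, length=0.033)
print(f"apparent viscosity = {eta:.3f} Pa s")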
For a design such as the DHI, this is a simplification, but one which can produce values of apparent viscosity that are of practical use. Lo Presti et al. attempted, with some success, to match the behaviour of the DHI to the range of SC4-XX spindles offered by Brookfield. The SC4-XX spindles are of the cylindrical type and each has unique SRC and SMC values. The Brookfield viscometer allows only the selection of a spindle code, which has associated SRC and SMC values, thus limiting fine adjustment to a discrete set of values. Lo Presti et al. were nevertheless able to find a spindle code that closely matched the DHI, and did this by testing the DHI against a number of standard liquids of known viscosity. The SC4-28 spindle was found to most closely match the DHI, with an SRC of 0.28 and an SMC of 50. The validity of this approach is open to question because of the radically differing spindle geometries of the SC4-28 and the DHI. Thus, computational fluid dynamics (CFD) was seen as an alternative method of obtaining values of SRC and SMC for the DHI geometry. The mixing of two or more miscible phases, whether solid/liquid, gas/liquid or liquid/liquid, or combinations thereof, is widely encountered in mineral, food, pharmaceutical, polymer, metallurgical, biochemical and other industrial processes. The mixing of highly viscous fluids is often carried out in the laminar and transition regimes. Many impellers have been proposed by researchers, based either on large impeller diameters or on close-clearance designs like anchors and helical ribbons. Generally, this mixing is carried out in stirred vessels, and it has been reported that the so-called helical ribbon and helical screw impellers are most appropriate for efficient mixing of high-viscosity Newtonian and non-Newtonian liquids. Before the development of numerical simulations, mixers were normally evaluated experimentally using various measures: power consumption, mixing time and circulation time. However, none of these measures gives an understanding of the spatial variation of the phases or the nature of the transport processes, making the understanding, and hence the efficient optimisation, of the mixer design difficult. Numerical simulations offer greater flexibility in analysing and visualising the mixing. In recent years, the simulation of mixing vessels has been widely used to optimize mixer geometries and gain better insights into the complex flow patterns generated by the impeller-vessel wall interaction. From the perspective of numerical analysis, one of the pioneering CFD works focused on the mixing performance of helical ribbon impellers in cylindrical vessels is the contribution made by Tanguy et al. They developed a three-dimensional model, validated experimentally, based on the finite-element method for the analysis of a helical ribbon-screw impeller. The authors reported good liquid circulation at low impeller speeds and showed evidence of poor pumping in the vessel bottom. They noticed that segregation increased upon increasing the impeller speed. In subsequent work, numerical models were developed for several helical ribbon geometries and for fluids of various rheological behaviours. The numerical modelling of mixing in a stirred tank has attracted a great deal of attention, and a review of the state-of-the-art in CFD simulations of stirred vessels can be found in Sommerfeld and Decker. The rapid development of numerical techniques and computational power has unleashed the possibilities of computational fluid dynamics in this area. CFD is now an important tool for understanding the mixing in stirred tanks. Nevertheless, modelling the complex flow in the presence of a rotating impeller is a computational challenge because of the complex geometry of the impeller and the nature of the flow in stirred tanks. Although CFD codes have made remarkable steps towards the solution of such engineering problems over the last decade, it still remains a difficult task to use such codes to help the design and analysis of stirred tanks. Iranshahi et al.
investigated the flow patterns and mixing progress in a vessel equipped with a Maxblend impeller in the case of Newtonian fluids. In that study, they found that the Maxblend impeller showed good performance when used with baffles in the transition and laminar regimes. In another study, a CFD characterization of the hydrodynamics of the Maxblend impeller with Newtonian and non-Newtonian inelastic fluids in the laminar and transition regimes was carried out by Devals et al. In that study, the effects of the impeller bottom clearance and the Reynolds number on the power characteristics, the distribution of shear rates and the overall flow condition in the vessel were investigated. Yao et al. performed a numerical analysis of the local and total dispersive mixing performance in a stirred tank with a standard type of Maxblend and double helical ribbon impellers. They showed that the double helical ribbon cannot be an efficient dispersive mixer; however, the results were not validated by experimental tests. Iranshahi et al. investigated the fluid flow in a vessel stirred with an Ekato Paravisc impeller in the laminar regime using CFD. The viscous mixing characteristics of the Ekato Paravisc were compared with those of an anchor and a double helical ribbon. They were able to show, through a number of experimentally validated criteria, that the Paravisc impeller was capable of producing homogeneous mixtures. Numerical modelling was carried out by Bertrand et al. to predict a rise in power draw due to elasticity. They explained the numerical methodology and compared the results of the simulation with experimental tests in the case of a stirred tank with a helical ribbon. Barailler et al. performed CFD modelling of a rotor–stator mixer in the laminar regime. In this study they investigated the characteristics of the rotor–stator mixing head in the case of viscous Newtonian fluids. Delaplace et al.
developed an approximate analytical model based on the Couette flow analogy to predict power consumption for the mixing of pseudoplastic fluids with helical ribbon and helical screw ribbon impellers in the laminar regime. They presented extensive comparisons between the predicted results and the data reported in the existing literature. This paper presents a computational fluid dynamics model able to reproduce the observations obtained from a laboratory investigation aimed at resembling a bitumen-rubber system during product development over a wide range of testing conditions. In order to allow visual inspection, the experimental programme was performed using tyre rubber particles and transparent fluids having viscosities similar to the bitumen phase of rubberised binder at manufacturing and testing temperatures. The overall objective of the present study is to assess whether a CFD model is able to simulate this complex scenario and to couple these outcomes with visual images to shed light on the results obtained in a previous study, where the DHI provided "more realistic" viscosity measurements when compared to the standard spindle. By doing this, we aim to highlight the benefits and limitations of using the DHI geometry to perform viscosity measurements of complex fluids, as well as to use the model to validate the empirical calibration procedure. Section 2 introduces a brief description of the experiments along with the numerical framework and governing equations for the CFD model. The results of the experimental and numerical modelling, along with a discussion thereof, are presented in Section 3, which ultimately aims to validate the numerical model. Finally, conclusions are presented in Section 4 with some thoughts about the next steps in this programme of work. As mentioned earlier, in a previous investigation experimental studies were carried out to calibrate the DHI and to evaluate the improved mixing when used in a Brookfield viscometer. In order to allow visualisation of the movement of particles in the system, the viscometer was customised with a transparent outer cylinder. The experimental programme used multi-phase fluids made up of a range of standard-viscosity fluids and tyre rubber particles. Thus, nine different complex fluids were made from the combination of different standard-viscosity fluids and different diameter tyre rubber particles, all fluids being tested at 10, 100 and 200 rpm at ambient temperature. The range of viscosities and particle diameters was chosen in order to recreate systems similar, in terms of viscosity and physical composition, to tyre rubber-bitumen blends at temperatures between 100 and 200 °C. This is considered to be representative of the possible scenarios occurring during the low-shear modification of bitumen with crumb rubber particles. Due to the expected high torque, which would be close to the viscometer's limits, the tests with the f500 fluid were performed only up to 100 rpm.
Tests were conducted according to Subhy et al., which are based on international standards for viscosity measurements of rubberized bitumen. These were undertaken for durations of between 15 and 20 minutes, depending on how quickly the mixing was seen to reach a steady distribution. The particles were added to the fluid and the blend was shaken to produce an even distribution of the particles. In these conditions, the impellers were quickly submerged in the blend and the viscometer was turned on to carry out the test, during which torque and angular velocity were measured and the viscosity calculated from Eq. Results were compared against those obtained with a standard spindle, SC4-27, to establish which geometry produced the highest degree of homogenisation and hence the more realistic, repeatable and reliable viscosity measurements. In parallel with these experiments, a CFD study was conducted with the aim of qualitatively comparing the two sets of results. Version 14 of ANSYS Fluent, the commercially available CFD software, was used in this work. The modelling involves the solution of the Navier–Stokes equations, which are based on the assumptions of conservation of mass and momentum within a moving fluid. In order to simulate the behaviour of the complex, two-phase fluids in the present application, the mixture model was used. Here, the momentum and continuity equations are solved for the mixture, the volume fraction equations for the secondary phases, while algebraic expressions are used for the relative velocities and inter-phase drag. The mixture model lends itself to particle-laden flows with relatively low volume loading, which is the situation in the present work. The mixture model in ANSYS Fluent uses an algebraic slip formulation, which is based on the work of Manninen et al. It uses the slip velocity, which is related to the drift velocity defined in Eq. The velocity of the kth phase is calculated from algebraic expressions, rather than from a separate momentum equation. So, the final term on the right-hand side of Eq. imparts a momentum source or sink to the mixture momentum equation, based on the relative motion of the primary and secondary phases. In the present work, the viscosity of the two phases is assumed to be equal to that of the primary, bituminous phase. For the spindle geometry, smooth, no-slip wall boundary conditions were applied to the inner, outer and bottom walls. The upper boundary was set as a symmetry plane, as the shear between air and bitumen is insignificant. A moving mesh approach was used, where the entire fluid domain was rotated about the vertical z axis at the appropriate angular velocity, while the outer wall was held stationary relative to the moving zone. The reason for doing this, rather than simply moving the outer wall relative to a stationary fluid domain, was so that animations for the DHI impeller could be produced with the helices being seen in motion. There was no significant computational overhead associated with the approach used. The geometry of the spindle was developed based on the Brookfield SC4-27 and realised with ANSYS DesignModeler. A mesh consisting of tetrahedra and triangular prisms was generated using ANSYS Meshing, and the volume mesh associated with the surface mesh shown in Fig. 5 contains 155,000 cells.
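The algebraic slip formulation assumes that the secondary phase quickly reaches a local equilibrium between drag and body forces, so the slip velocity is essentially a terminal settling velocity. A back-of-the-envelope Stokes-law estimate, with illustrative (assumed, not measured) particle size and phase densities, shows why settling is slow in the more viscous fluids and faster in the less viscous ones:

def stokes_settling_velocity(d_particle, rho_particle, rho_fluid, mu, g=9.81):
    """Terminal velocity of a small sphere in a viscous fluid (Stokes regime), in m/s."""
    return (rho_particle - rho_fluid) * g * d_particle**2 / (18.0 * mu)

# assumed values: 0.5 mm rubber crumb (~1150 kg/m^3) in a fluid of ~990 kg/m^3,
# for two illustrative viscosities within the range considered in the simulations
for mu in (0.1, 0.5):  # Pa s
    v = stokes_settling_velocity(5e-4, 1150.0, 990.0, mu)
    print(f"mu = {mu} Pa s -> settling velocity = {v * 1000:.3f} mm/s")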
The DHI geometry was designed to create a convective-like flow within the sample, which allows the uniform distribution of suspended solids within a viscous fluid. The idea is that the outer helix pumps the fluid downwards while the inner helix pumps it upwards. Based on the prototype design, DesignModeler and Meshing were used to produce a mesh which, due to the increased complexity of the design, contained approximately 1,300,000 tetrahedral and triangular prism cells. The CAD model and associated surface mesh can be seen in Fig. 6. A mesh convergence study was conducted with the DHI geometry, and key flow parameters at monitoring points were found not to change as the number of cells increased above the 1.3 million of the mesh shown in Fig. 6. Standard solver settings were used throughout: the SIMPLE algorithm for pressure–velocity coupling, the Least Squares Cell Based discretization for gradients, and second order differencing for the momentum and volume fraction equations. Due to the use of a moving mesh, the solution was necessarily transient, and it was found that the simulations had to be run for approximately 60 s of real time to achieve a stationary solution, where the mean velocity and volume fraction did not drift with time, as measured at a number of monitoring points. The exact run time varied depending on fluid properties and rotation rate. In the manufacturing of rubberized bitumen, the process can take up to 2 h, at which point the particles have swollen to their maximum extent. However, in these experiments, in certain cases the particles were already settling after a few seconds. It was decided that a testing period of 10 minutes was sufficient to resolve the settling process without the swelling of the particles compromising the results. Figs. 7 and 8 show images of the distribution of the particles within the blend at the start of the tests and after 10 minutes of rotation, for the f100 and f500 fluids respectively. These figures show that with the lower viscosity fluid, f100, at 10 rpm neither the standard spindle nor the DHI maintains particles in suspension. At 100 and 200 rpm it is interesting to note how, due to its shape, the standard impeller creates two layers of crumb rubber particles, on top and bottom. In this case, the particles are forced to migrate from the narrow gap into regions where the swirl is less intense. However, at these higher rotation rates, the DHI creates a more even distribution of particles. Observations confirmed that this was the result of a combination of effects, consisting of the inner spiral transporting the particles upwards while the outer thread moves the particles back down, confirming that the design was working as intended. As a result, particles are circulated throughout the container and phase separation is avoided for the whole duration of the experiment. These results show that the rotational speed is of fundamental importance when considering the extent to which the sample is homogenised, and plays a crucial role in determining the efficacy of the impeller. This, of course, has significant consequences for the viscosity readings, which are a function of testing geometry, rotational speed and applied torque. Fig. 9 shows that the apparent viscosity measurements are less dependent on the rotational speed for the DHI than for the SC4-27 spindle. The fact that the curves for 100 and 200 rpm converge for the DHI echoes the observations made about Fig. 7, where the higher rotational speeds produce a less heterogeneous blend.
At 10 rpm, there is clearly some intermittency in the viscosity measurements, as the particles do not reach a steady-state distribution. In any case, both the viscosity values over time and the shear dependency favour the DHI, which seems to provide more stable rheological information for this complex system, itself an indication of better mixing efficiency over the timescale of the test. For the higher viscosity fluid, f500, over the same timescale, these observations are not so clear. In this test, all viscosity measurements are still increasing after 600 s, suggesting that the particles were still undergoing redistribution or that the swelling process was possibly starting, especially at the higher rotational speed. Indeed, the absence of a 200 rpm result prevents the same conclusion being drawn about convergence of the viscosity results as the rotation speed is increased above 100 rpm. Indeed, after 20 minutes the viscosity had still not settled down to a consistent value. Unfortunately, running the tests for a longer period meant that the temperature of the mixture would begin to drop and viscosity measurements would then have a temperature component. Again, however, the viscosity trend with time still favours the DHI. While not presented here, it should be noted that the CFD model of the SC4-27 spindle produced values of SRC and SMC that were very close to those quoted in the Brookfield literature for a range of rotational speeds and fluid viscosities. Then, for the DHI geometry, in an attempt to reproduce the single-phase experimental testing of Lo Presti et al., a number of single-phase simulations were performed. The rotational speed of the impeller was again varied, as was the viscosity of the fluid. Values of both the rotational speed and the dynamic viscosity were matched as closely as possible to those used by Lo Presti et al. The dynamic viscosity values used were 0.0094, 0.098, 0.488, 0.970 and 5.040 Pa s, while rotational speeds of 10, 50, 100 and 200 rpm were chosen. Thus, in total, 20 simulations were performed. In order to be able to compare across this range of viscosity and rotational speed, the mixing Reynolds number, Eq., was used with an equivalent diameter of 11.7 mm. The choice of the diameter is somewhat arbitrary and corresponds in this case to the diameter of the SC4-27 spindle, although it is a value roughly halfway between the inner and outer helices of the DHI. The SMC values are calculated from a re-arrangement of Eq., via a conversion of the torque that Fluent reported to an equivalent percentage torque, T%, as reported by the Brookfield viscometer. Fig. 11 shows a clustering of the SMC values around a value of 50, across a 5-decade range of Reynolds numbers. Note that the value of SMC that Lo Presti et al. found for the DHI was 50, and so there is quantitative agreement between the experiment and the simulations.
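For reference, a mixing Reynolds number of the usual stirred-vessel form, Re_m = rho * N * D^2 / mu with N in revolutions per second, can be tabulated for the 20 viscosity/speed combinations listed above. The fluid density used below is an assumed placeholder, since it is not given in this text.

def mixing_reynolds(rho, n_rpm, d_equiv, mu):
    """Re_m = rho * N * D^2 / mu, with N converted from rpm to rev/s."""
    return rho * (n_rpm / 60.0) * d_equiv**2 / mu

viscosities = [0.0094, 0.098, 0.488, 0.970, 5.040]  # Pa s, as in the simulations
speeds = [10, 50, 100, 200]                         # rpm
d_equiv = 0.0117                                    # m, equivalent diameter
rho = 1000.0                                        # kg/m^3, assumed density

for mu in viscosities:
    for n in speeds:
        re_m = mixing_reynolds(rho, n, d_equiv, mu)
        print(f"mu={mu:6.4f} Pa s, N={n:3d} rpm -> Re_m = {re_m:8.3f}")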
What the numerical modelling reveals, however, is that at a fixed rotational speed there is a dependency on the Reynolds number, the extent of which is reduced at the highest rotational speed of 200 rpm. Similarly, we see that for a fixed value of viscosity there is a marked increase in SMC as the rotational speed is increased. This can be seen in the clustering of results into lines comprised of the four symbols from left to right. There are several explanations for this functional dependence of SMC on the mixing Reynolds number. First, since the same mesh was used for all simulations, the local cell Reynolds numbers would differ from simulation to simulation, which may result in numerical errors being introduced. Second, from the physical perspective, it is thought that the variations are due to the flow patterns changing across the range of Reynolds numbers. There is some experimental evidence to back this up. With reference back to Fig. 7, it can be seen that the DHI produces more consistent mixing at rotational speeds greater than 100 rpm. This may be attributed to the impeller producing a different flow field, one that is more efficient from the mixing perspective, than at 10 rpm. This difference may explain the functional dependence of SMC on the mixing Reynolds number. Before considering the mixing efficiencies of the two designs, it is instructive to look at the velocity fields in both cases. Fig. 12 shows contours of the steady-state velocity magnitude for the SC4-27 spindle. The SC4-27 creates a Taylor–Couette flow between the rotating spindle and the stationary wall. There is a large circumferential or swirling component to the flow but not a vertical one. As such, particulates in the flow are not driven vertically through the gap but rather tend to sink to the bottom under the effects of gravity. On the other hand, the instantaneous velocity field for the DHI, Fig. 12, presents a far more complex picture. Here the plot shows a snapshot of the velocity magnitude, and what cannot be gleaned from this plot is that the outer helix is moving fluid downwards while the inner one is moving fluid upwards. This becomes clear in Fig. 13, in which the vector length is constant but the colour represents the local velocity magnitude. While the figure shows only the vectors on a vertical plane and the view focusses on the mid-height region, the flow is now entirely three-dimensional in nature and the simple observations and theory associated with Taylor–Couette flow are no longer possible. To gain a better understanding of the degree of homogenisation, the volume fraction of the particulate phase was evaluated once it had reached a steady state for both types of spindle. For the DHI, while the velocity is not steady, the distribution of particles does reach something very close to a steady state because of the very low settling velocity of the particles in the highly viscous fluids under consideration. Qualitatively, the results from the numerical simulations compared very favourably with the experimental results.
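One simple way to turn "degree of homogenisation" into a number is to export the cell-wise volume fraction of the particulate phase, together with the cell volumes, and compute a volume-weighted coefficient of variation: a value near zero indicates a well-mixed suspension. The CSV layout below is a hypothetical export format, not a Fluent default, and this metric is offered only as an illustration alongside the qualitative comparison made here.

import numpy as np
import pandas as pd

def homogeneity_index(csv_path):
    """Volume-weighted coefficient of variation of the particulate volume fraction."""
    cells = pd.read_csv(csv_path)  # assumed columns: cell_volume, particle_vof
    w = cells["cell_volume"] / cells["cell_volume"].sum()
    mean_vof = np.sum(w * cells["particle_vof"])
    std_vof = np.sqrt(np.sum(w * (cells["particle_vof"] - mean_vof) ** 2))
    return std_vof / mean_vof

# e.g. compare exports for the two spindles at the same speed (file names hypothetical)
# print(homogeneity_index("sc4_27_100rpm.csv"), homogeneity_index("dhi_100rpm.csv"))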
Fig. 14 shows the results achieved using the f100 and f500 fluids at different rotational speeds for both the CFD model and the experimental tests. With reference to the figure, the contour plots of the volume fraction of the particulate phase do bear some similarity to the photographs from the experiments that are shown below them. The highly concentrated zone of particles near the bottom of the container becomes less apparent when the angular velocity of the spindle and the viscosity of the fluid are increased. In fact, the contour range is clipped to a maximum of 0.08, so although the zone for the f100, 10 rpm case looks small, it is in fact a very small region of very high concentration. In this case, the suspended particles are able to sink to the bottom and concentrate in the central region where fluid motion is minimal – see Fig. 12. There is still some stratification of the secondary phase in all the plots in Fig. 14, which is also apparent in the experimental results. Two effects are not captured by the CFD model, however. First, the floating scum of particles on the surface of the liquid is not seen, because the top boundary of the CFD domain is a symmetry boundary – these particles are held there by surface tension effects, which are not included in the numerical model. Second, the CFD model does not show that some particles stay above the narrow Couette-flow region, as well as falling below it. The gap between the spindle and the container wall seems to prevent the particles from settling out, at least on the time scales over which the experiments were run. The CFD model confirms that the main issue with using the standard spindle is that it causes phase separation between the particles and the bitumen, leading to misleading viscosity measurements. This is due to the absence of vertical velocity components in what is essentially a flow regime with cylindrical symmetry. Increasing the fluid viscosity can also lead to a more homogeneous distribution of particles in the liquid and reduces the concentration of particles at the bottom. However, it is not clear that this enhanced mixing is not simply due to the lower terminal velocity of the particles in the more viscous fluid.
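The terminal-velocity argument can be made concrete with a minimal sketch based on Stokes' law, which is appropriate here because the particle Reynolds number in such viscous fluids is very small. All property values below are placeholders, since the particle diameter and the densities are not reported here; the point is only that the settling velocity scales inversely with the fluid viscosity.

```python
def stokes_terminal_velocity(d_p, rho_p, rho_f, mu, g=9.81):
    """Stokes terminal settling velocity v_t = (rho_p - rho_f) * g * d_p^2 / (18 * mu),
    valid for creeping flow around the particle (particle Reynolds number << 1)."""
    return (rho_p - rho_f) * g * d_p**2 / (18.0 * mu)

# Placeholder particle and fluid properties (assumed, not taken from the paper).
d_p = 0.5e-3       # particle diameter [m]
rho_p = 1150.0     # particle density [kg/m^3]
rho_f = 1000.0     # fluid density [kg/m^3]

for mu in [0.1, 0.5, 5.0]:   # example dynamic viscosities [Pa s]
    v_t = stokes_terminal_velocity(d_p, rho_p, rho_f, mu)
    print(f"mu = {mu:4.1f} Pa s -> v_t = {v_t*1e6:.2f} micrometres/s")
```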
A look at Fig. 10 would indicate that the apparent viscosity for the f500, 100 rpm case had not settled to a constant value, and thus a longer test may have produced more stratification. A very different picture generally emerges when considering the DHI impeller. There is qualitative agreement between the CFD and experimental data in this case. However, at 10 rpm, the DHI still produces a noticeably stratified particulate phase. Again, because the volume fraction is clipped at 0.08, the fact that the red region is larger than for the SC4-27 cases indicates that there is some agitation of the fluid in this region. This agitation is sufficient to keep the particles partially suspended above the base of the viscometer. Nonetheless, this level of mixing is not sufficient to render any apparent viscosity measurements reliable. At 100 rpm, a level of homogeneity is seen in the particulate phase, which indicates that the impeller is performing its intended task of mixing the particulate phase throughout the device. In this study, the authors used customised laboratory testing and computer simulations to gain a deeper understanding of the fundamental mechanisms behind the improved viscosity measurements obtained when using the DHI with rubberised bitumen. Furthermore, this study looked closely at the phase separation issues that can occur in a wide range of complex fluids. Overall, the following conclusions can be drawn: The experimental programme provided a clearer picture of the mixing enhancement provided by the DHI when measuring multiphase fluids composed of a liquid and a suspended particulate phase. Furthermore, the DHI geometry allows a steady apparent viscosity measurement to be obtained and requires lower torques. This extends the range of measurable viscosity when compared to the standard spindle, SC4-27. CFD simulations clarified the mechanisms behind the previously assumed convective flow created by the DHI. In fact, the analysis of the velocity fields confirms that the central screw of the DHI drags the complex system upwards while the external screw transports the mixture downwards. The results also highlighted that, with the current design, the central screw of the DHI could be more effective at pumping the fluid vertically – this points the way to enhancements in the design. The single-phase CFD simulations produced values of SMC that were in close agreement with the experiments of Lo Presti et al. In summary, CFD helped to gain insights into the complex flow regimes and shows potential to be used as a platform to design new testing geometries for complex fluids as well as for virtual rheology measurements. In the near future, researchers can therefore look with confidence at using CFD as a platform for improving the rheometry of complex fluids. The DHI proved to be a significant step forward from the testing geometries currently used for viscosity measurements of complex fluids, especially within product development. The authors are looking at improving the current design, as well as performing a campaign aimed at experimentally testing the viscometer with a variety of standard-viscosity fluids with different particle loadings and diameters, and with complex systems with different rheological properties. This will allow a library of results to be produced, meaning this rheometry can be used effectively in the bitumen and other industries for product development and quality control of these types of complex systems.
Multi-phase materials are common in several fields of engineering, and rheological measurements are intensively adopted for their development and quality control. Unfortunately, due to the complexity of these materials, accurate measurements can be challenging. This is the case for bitumen-rubber blends used in civil engineering as binders for several applications such as asphalt concrete for road pavements and, more recently, roofing membranes. These materials can be considered heterogeneous blends of a fluid and particles with different densities. Due to this nature, the two components tend to separate, and this phenomenon can be enhanced by inappropriate design and mixing. This is the reason behind the need for efficient dispersion and distribution during their manufacturing, and it also explains why real-time viscosity measurements can provide misleading results. To overcome this problem, in a previous research effort, a Dual Helical Impeller (DHI) for a Brookfield viscometer was specifically designed, calibrated and manufactured. The DHI was shown to provide a more stable trend of measurements, and these were identified as being "more realistic" when compared with those obtained with standard concentric-cylinder testing geometries, over a wide range of viscosities. However, a fundamental understanding of the reasons behind this improvement was lacking, and this paper aims at filling these gaps. Hence, in this study a tailored experimental programme resembling the bitumen-rubber system, together with a bespoke Computational Fluid Dynamics (CFD) model, is used to provide insights into the applicability of the DHI to viscosity measurements of multiphase fluids, as well as to validate its empirical calibration procedure. A qualitative comparison between the laboratory results and the CFD simulations proved encouraging, and this was enhanced with quantitative estimations of the mixing efficiency of both systems. The results proved that the CFD model is capable of simulating these systems, and the simulations gave insights into the flow fields created by the DHI. It is now clear that the DHI uses its inner screw to drag particles vertically within a fluid of lower density, while the outer screw transports the suspended particles down. This induced flow helps keep the test sample less heterogeneous, which in turn allows more stable viscosity measurements to be recorded.
738
Impacts of using the electronic-health education program ‘The Vicious Worm’ for prevention of Taenia solium
Transmission of Taenia solium is associated with low economic development, and the prevalence varies with sanitation standards, pig husbandry practices, and eating habits. Endemic regions include Latin America, South and South-East Asia, and sub-Saharan Africa. Taenia solium is responsible for the highest burden of parasitic foodborne diseases in the world and is considered one of the major contributors to death caused by foodborne diseases worldwide. Transmission occurs when humans eat undercooked T. solium-infected pork, leading to tapeworm development in the intestines (taeniosis). When tapeworm eggs are excreted via the stool into the environment, pigs can ingest the eggs via contaminated feed or water, upon which the eggs hatch and the larvae migrate to the tissues. In the tissues, the larvae encyst, causing porcine cysticercosis. If humans accidentally ingest T. solium eggs, larval migration can occur as described for pigs, causing human cysticercosis and, more specifically, neurocysticercosis (NCC) if the cysts are located in the central nervous system, which may cause symptoms such as severe headaches and seizures. A systematic review showed that approximately one third of all epilepsy cases in endemic regions could be ascribed to NCC. There are several intervention tools available to control T. solium, and the World Health Organization points to preventive chemotherapy, improved pig husbandry, and health education among the candidates. Access to technologies such as mobile phones is becoming increasingly common in Africa, and it is estimated that 93% of the population have access to a mobile phone. Smartphones continue to make up a larger proportion of mobile phones; thus, the potential exists for health education to reach widely and broadly through smartphone apps. Existing computer-based education tools could be transformed into apps to make them more accessible to the public. The computer-based educational tool ‘The Vicious Worm’ (TVW) was developed to provide education regarding the prevention and control of T. solium taeniosis/cysticercosis. It was designed to target stakeholders across disciplines and sectors, providing information regarding the transmission, risk factors, diagnosis, prevention, and control of the diseases caused by T. solium. TVW is programmed with three levels, visually represented as a village, a town, and a city. Each level is designed to provide different information to different types of recipients, e.g. the village level for laypersons, the town level for professionals, and the city level for decision makers. The design of TVW was rigorously described in 2014. In 2014, Ertel et al.
assessed the short-term knowledge uptake among health and agricultural professionals from Tanzania using TVW. The authors tested the knowledge level of the study population with a pre-test, then performed health education using TVW, where the participants were allowed to use TVW for 1.5 h, followed by a post-test and a second post-test to assess the knowledge uptake. The study showed a highly significant short-term knowledge uptake and a positive attitude towards the program. The aim of this study was therefore to assess the long-term knowledge uptake and potential practice changes within the same study population one year after their exposure to the electronic learning tool TVW. The study was conducted in three different regions of Tanzania from June to August 2015 and consisted of a test (the same test used in 2014), a questionnaire survey, and interviews. All activities were conducted approximately one year after participants were first introduced to and tested on the English version of TVW. The study population was fixed, based on participation in the previous study by Ertel et al., thereby limiting the sample size. Originally, 79 professionals consisting of veterinarians, veterinary students, meat inspectors, agriculture/livestock extension officers and students, health officers, medical officers, and assistant medical officer students participated in the study. They were chosen due to their key role in health education of the general population. All professionals had a moderate to high level of English obtained through their education, although a translator was present at all times during the study to facilitate communication. To conduct the evaluation, the professionals were reached using the contact details they had handed in at the intervention one year earlier. The test used by Ertel et al. was originally reviewed and pilot-tested on experts from the Section of Parasitology and Aquatic Diseases, University of Copenhagen, and three experts in the field with a cultural background similar to that of the participants. A hardcopy of the test in English was handed to the professionals in person when possible; otherwise, the test was done via the phone or by email. The test was self-administered and, if a question needed to be translated, an interpreter as well as the researcher were present. The time for completing the test was estimated to be 30 min, but no actual time limit was set. The test, as described in Ertel et al., consisted of 24 questions in English, divided into eight specific aspects addressing acquisition and transmission of T. solium infections, acquisition of NCC, human taeniosis (HT) in general, NCC in general, porcine cysticercosis (PC) diagnosis, PC treatment, the relation between PC, HT and NCC, and prevention of PC, HT, and NCC. In addition to the test, a questionnaire containing five questions was included. The questions aimed to verify whether the participants had accessed the USB flash drive with TVW given to them after the study in 2014. The questionnaire also contained three open-ended questions, which gave the professionals the possibility to explain what they had done in regard to prevention of T.
solium following the intervention, and whether they had been inspired by TVW to change practices. The final question explored associations between the obtained test scores and having used TVW in the previous year. Semi-structured in-depth individual interviews or group interviews were carried out after the participants had filled out the test. An interview guide was followed, but additional questions were asked if found appropriate. The interviews investigated the participants' current knowledge of T. solium, their work practices regarding preventive measures for HT, PC, and NCC, whether they had experienced any cases of infection within the last year, their use of TVW, and whether they had taught others about T. solium infections. The interviews were conducted in the Mbeya, Iringa, and Kituli regions. The interviews were audio recorded using the "ALON Dictaphone" app for iOS. For the group interviews, the participants were divided into groups according to their professional background to ensure that they all knew each other and felt comfortable in the group forum. This aimed to minimise bias by providing the participants with a forum where they could speak as freely as possible and provide in-depth answers from a group perspective. Due to the limited sample size, participants were divided into two groups for a descriptive analysis based on their professional background: a health sector group and an agricultural sector group. Knowledge persistence was assessed using a paired t-test. Data were analysed using R. The development in the specific professional groups was described. 'Usage of TVW' was the participants' estimate of how much they had used TVW after they were handed the USB flash drives. They chose between the categories 'not used', 'used once', 'used several times in private', and 'used and introduced to others'. The individual and group interviews were transcribed by meaning, excluding repeated words and fillers used during the interviews. Transcribed interviews were uploaded to NVivo and analysed using Malterud's editing analysis style. Quotations in the results were corrected for English grammar while preserving the meaning of the statement. The study was embedded in the Danida-funded project "Securing rural livelihoods through improved smallholder pig production in Mozambique and Tanzania". Sokoine University of Agriculture, Morogoro, Tanzania, approved the study, including the informed consent form used, and provided ethical approval of the study as well as a research associateship for the principal investigator. The aim of the study was thoroughly explained to all participants, who were asked to give oral consent if tested on the phone, or written consent if tested and interviewed in person or by email. All participants were emailed the results of the study.
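For illustration, the knowledge-persistence comparison described above (a paired t-test of follow-up versus baseline scores, run by the authors in R) can be sketched as follows; the scores below are hypothetical, and the Python/SciPy call is only an equivalent of the original R analysis.

```python
from scipy import stats

# Hypothetical per-participant percentage of correct answers at baseline and at the
# one-year follow-up (the real scores are not reproduced here).
baseline  = [70, 75, 68, 80, 72, 77, 74, 69]
follow_up = [78, 80, 74, 85, 76, 79, 80, 75]

# Paired t-test of knowledge persistence, analogous to the analysis run in R.
t_stat, p_value = stats.ttest_rel(follow_up, baseline)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```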
In total, 64 out of the original 79 participants in the study by Ertel et al. participated in the follow-up, divided into professional groups. Thus, 15 of the participants were lost to follow-up. Since the previous year, the student participants had finished their education in Mbeya and moved to other regions of the country for work. This meant that the health and agricultural officers, during their participation in this study, worked in 16 out of the 21 administrative regions in Tanzania, compared to only one region before. The participants had a significant increase in test scores one year later compared to baseline, but a reduction in test scores compared to the second, two-week follow-up performed by Ertel et al. Although there was an overall improvement in test scores, a few of the participants did score lower in the one-year follow-up than at baseline. However, as seen in Fig. 2, the distribution shifted upwards towards higher test scores from baseline to one year after. The significant changes in test scores from baseline to the one-year follow-up are illustrated in Fig. 2, which shows the majority of the population falling between a 1 and 7 point improvement. The informants had improved their test score from baseline (75% correct answers) to the third follow-up (79%). In the health group, all 13 participants had improved their score, from 73% correct answers at baseline to 85% at the third follow-up. In the agricultural group, the baseline was higher, but 55% of the 49 participants had increased their test score compared to baseline. Since the second follow-up, 82% of the participants stated that they had used the program for personal learning or to engage others in TVW: 38% had used and introduced it to others, 26% had used it once, and 18% had used it several times. The remaining 18% explained during the interviews that reasons for not using TVW included lack of access to a PC or inability to access the USB flash drive because it was corrupted or had been lost. Health education was part of the work duties of all the participants included in this study. No suggestions on how to disseminate knowledge were provided to the participants during the first study. The participants presented their individual ways of disseminating knowledge. Three main methods were outlined. The most frequently mentioned way of educating was group meetings in the villages, where they would talk about preventive methods against NCC, PC and HT. In addition, education of individual farmers was mentioned. They taught by using simple drawings explaining disease transmission pathways. The third way was to show TVW to the farmers, either in small groups on a PC or through a projector at village meetings. A participant described the way of introducing TVW as: "We started showing the farmers 'the village' because that is the area we live in, and we wanted to show them how they can be affected by this issue." Five of the participants told students about prevention of T.
solium. A medical doctor showed medical students TVW after they found a cyst on a CT scan, so that the students would understand the disease. One participant shared his knowledge on a bus ride, and another at a bar selling pork. These findings showed that the participants were engaged in distributing the knowledge they had gained through using TVW. This was in accordance with their answers to question 28 in the test, where 81% of the participants confirmed that they had acted to prevent HT, PC, or NCC, and 79% of the participants supplied examples of how they had disseminated their knowledge to stakeholders such as co-workers, family members, friends, or farmers. When questioned about changes in their daily work, 31% of the participants stressed that increased knowledge was the largest change; however, 5% of the participants found no change in their work or private life, even though they had used the program. Of the agriculture/livestock officers, 3% said that their confidence in their work had improved; one of them said he had convinced the farmers that it was a real disease and that it could be prevented. One veterinarian explained how his increased knowledge had enabled him to deliver the same information as he had previously, but in a simpler and more specific manner after using the program. There were several examples of changed work routines or practical changes in the areas where the participants worked. An agriculture extension officer said that he educated the village on how to build latrines and implemented a by-law whereby the villagers had to build a latrine for their family or receive a fine. Another had encountered challenges with farmers allowing free roaming of pigs, but after introducing TVW he taught farmers to build pig pens and afterwards saw positive changes within the community. Four participants said that the largest change was that the farmers now kept pigs confined, compared to roaming before. In order to prevent HT, 35% of the participants mentioned that they taught people to eat only properly cooked pork, following the village-level messages of TVW. There was no general agreement on how to cook the pork: one participant said it was enough to boil the pork, while another said that the pork had to be boiled for twenty minutes. During the previous year, 34% of the participants had seen pigs infected with porcine cysticercosis. In the agricultural group, 11% had specifically told the owner to condemn the pork. In a group interview, the participants expressed concerns regarding their relationship with the public if they recommended the pork be condemned. Washing vegetables in clean water as a way of preventing disease, and teaching the farmers that eating or drinking products contaminated with eggs could cause disease, were mentioned in the interviews by 5%. It was a focus point for several informants to educate local farmers on how to build hand-washing stations and to use soap when washing their hands. One participant told farmers to build tables in order to get food and cookware off the ground to avoid contamination. This study showed that participants, after 1.5 h of using TVW on a computer and with subsequent access to the program through a USB stick, were able to maintain a significantly improved level of knowledge regarding T.
solium and implement preventive measures one year later. This is in line with a study from Northern Tanzania where a significant knowledge increase was detected, in the form of a school-based health study, one year after the intervention. A study evaluating the efficacy of teaching methods in the prevention of NCC found that if there was a second visit with health education including a one-on-one talk with the researcher, the knowledge did not decline after six months, and it was 11 times more likely that the farmer had heard about T. solium. In this study, the participants said that they were using their knowledge actively when educating farmers and patients in the prevention of T. solium, and the participants felt more confident when educating others after using TVW. The study showed an increase in knowledge regarding T. solium. A study done in 2002–2004 in Mbulu District, Tanzania, suggested that health education reduced the consumption of cyst-infected pork and the incidence of porcine cysticercosis, but at the same time found no improvement in human behaviour preventing infection, such as the use of latrines and whether the latrines had closing doors. In our study, only seven of the 21 participants who had encountered infected pigs in the previous year specifically instructed the farmers to condemn cyst-infected pork. This could reflect that, although knowledge is gained, social or economic barriers to implementing change still exist, as the test showed that all participants knew how to handle cyst-infected pork. More specifically, maintaining a good relationship with the public was mentioned in a group discussion as a barrier. A study involving schoolchildren found that their attitude towards pork condemnation was more positive after an intervention, but they remained unlikely to contact a veterinarian upon finding an infected pig. This supports the view that knowledge regarding best practice exists, but that barriers such as economic loss were more important than the benefits of pork condemnation. The participants also included cooking pork properly as part of the preventive measures for HT. Another study showed that the lack of knowledge regarding T. solium may result in farmers assuming that cysticercosis originates from feeding pigs maize bran, and that cysticercosis is only transmitted between pigs, therefore making pork safe to consume. Maridadi and colleagues reported that participants ate pork in local brew bars where pork was served fried but undercooked. That study also found that educated people gained knowledge more easily regarding the life-cycle and prevention of T. solium. Targeting professionals to spread knowledge regarding T. solium appears to be a good approach, as they are in contact with the public and they understand the basics of preventing T. solium, resulting in potential practice changes as explained in the interviews. This applies especially to the training of students, since the results from this study showed that after finishing their education they dispersed to more than three quarters of Tanzania's administrative regions. The study would have benefitted from a control group, but this was not included in the original study, nor were data available on official or unofficial training regarding T.
solium during the time before the evaluation. Another major constraint was the limited sample of 79 participants, which limits the validity and reliability of the study. In addition, communication was problematic. The tests and interviews were conducted in English, not the official language Swahili, although participants did at least possess a moderate level of English. Still, several participants needed help translating the test and during the interview. The participants engaging in the interviews were mainly from the agricultural sector, which probably influenced the themes brought forward during the interviews. Six participants chose to answer the test by e-mail. They could potentially have had access to TVW while answering; however, their answers were in no measure better than those of the rest of the participants. Social desirability bias might affect the self-reported measures, which was an important limitation of the study. This study showed an increase in knowledge for the health sector through use of TVW, but with the limited sample size it was not possible to generalize these results. Although participants reported changed practices and using TVW for educational purposes, the study was limited by the lack of observational data. Before TVW can be implemented and upscaled as a component of a control programme, this should be confirmed by a more rigorous evaluation. The study found that training participants by giving them access to an electronic learning tool provided long-term knowledge uptake, and that the gained knowledge was used to educate others. The education of students was demonstrated to be effective in spreading individuals with knowledge to the majority of regions in Tanzania. Changed practices were identified through the interviews, consisting of by-laws implemented and practical workshops on building latrines, tables for food and cookware, pig pens and hand-washing stations using locally available resources. Many of the participants pointed to the fact that TVW was an English program, but it has now been translated into Swahili, so it is accessible to a larger part of the population in Eastern Africa. The authors are grateful for the financial support of the study received from a Danida Travel Grant and the Augustinus Foundation.
Background: Taenia solium is a zoonotic tapeworm widely distributed across sub-Saharan Africa. Specific health education is regarded as a central element in controlling T. solium. In 2014, an electronic health education tool called ‘The Vicious Worm’, which was concerned with prevention of T. solium was introduced to health and agricultural professionals in Mbeya, Tanzania, an endemic setting. Introduction to ´The Vicious Worm’ of 1.5 hours significantly improved the participants’ knowledge. This study revisited the same study subjects one year later to assess persistence of knowledge regarding T. solium taeniosis/cysticercosis and to assess if the health education had changed work practices for the participants and the public. Methods: The study was conducted in Tanzania between June and August 2015, with a fixed population of health and agricultural professionals recruited from a previous study testing ‘The Vicious Worm’. The study used a test, a questionnaire survey, as well as semi-structured group and individual interviews. Results: The 79 study subjects, all health or agricultural professionals, had within one year relocated from Mbeya to 16 of 21 administrative regions of Tanzania. Sixty-four agreed to participate in the test and 48 to an interview. The test showed significant improvement in knowledge regarding T. solium taeniosis/cysticercosis, compared with the baseline knowledge level of the participants. Interview data found that the participants had used ‘The Vicious Worm’ as an educational tool and applied the knowledge from the program to implement new practices consisting of by-laws and practical workshops on building latrines, pig pens and hand washing stations in their communities. Conclusion: Introduction to ‘The Vicious Worm’ led to changed practices and persistence in knowledge regarding T. solium. Incorporating health education as a specific health intervention tool should be encouraged and implemented at national or programmatic level.
739
Precursor processes of human self-initiated action
Functional and neuroanatomical evidence has been used to distinguish between two broad classes of human actions: self-initiated actions that happen endogenously, in the absence of any specific stimulus, and reactions to external cues.Endogenous actions are distinctive in several ways.First, they depend on an internal decision to act and are not triggered by external stimuli.In other words, the agent decides internally what to do, or when to do it, without any external cue specifying the action.Second, we often deliberate and consider reasons before choosing and performing one course of action rather than an alternative.Thus, endogenous actions should be responsive to reasons.Many neuroscientific studies of self-initiated action lack this reasons-responsive quality.They often involve the paradoxical instruction to ‘act freely’ e.g., “press a key when you feel the urge to do so”.However, this instruction has been justifiably criticised.Here, we adapted for humans a paradigm previously used in animal research, which embeds endogenous actions within the broader framework of decision-making.Participants responded to the direction of unpredictably-occurring dot motion stimuli by pressing left or right arrow keys.Importantly, they could also choose to skip waiting for the stimuli to appear, by pressing both keys simultaneously whenever they wished.The skip response thus reflects a purely endogenous decision to act, without any direct external stimulus, and provides an operational definition of a self-initiated action.Self-initiated ‘skip’ responses were compared to a block where participants made the same bilateral ‘skip’ actions in response to an unpredictable change in the fixation point.Controversies regarding precursor processes have been central to neuroscientific debates about volition.The classical neural precursor is the readiness potential).The RP is taken to be “the electro-physiological sign of planning, preparation, and initiation of volitional acts” and was considered a pre-requisite of the conscious intention to act.Classical studies explicitly or implicitly assume that the RP reflects a putative ‘internal volitional signal’, with a constant, characteristic ramp-like form, necessarily preceding action initiation - although this signal is heavily masked by noise on any individual trial.However, the idea that the RP reflects a specific precursor process has been recently challenged.Instead, the time of crossing a threshold for movement could depend in part on stochastic fluctuations in neural activity.Crucially, averaging such fluctuations time-locked to action initiation reproduced the “build-up” pattern of the mean RP, suggesting that the classical interpretation of RP as a stable precursor of voluntary action could be deceptive.On this account, RP is not a specific, goal-directed process that triggers action, but is rather an artefact of biased sampling and averaging of neural noise.However, classical and stochastic models offer different explanations for the variability of EEG signals prior to self-initiated action.On the stochastic model, neural activity eventually and necessarily converges because stochastic fluctuations must approach the motor threshold from below.The degree to which the EEG signal converges prior to action and the timing of that convergence should depend only on the parameters of the accumulator, and the temporal structure of the noise input to the accumulator.In contrast, classical models would attribute the convergence of single trial RPs to consistent 
precursor processes of action preparation that reliably precede self-initiated action.While variability of RP activity has rarely been studied previously), several studies of externally-triggered processing have used variability of neural responses to identify neural codes.For example, variability goes down in the interval between a go-cue and movement onset, and during perceptual processing.We thus compared EEG variability prior to self-initiated skip actions with variability prior to externally-triggered actions occurring at a similar time.We used a systematic modelling approach to show that a stochastic accumulator framework could indeed explain the pattern of EEG variability, but only by assuming an additional process modulating the level of neural noise.24 healthy volunteers, aged 18–35 years of age, were recruited from the Institute of Cognitive Neuroscience subject data pool.Two participants were excluded before data analysis.All participants were right handed, had normal or corrected to normal vision, had no history or family history of seizure, epilepsy or any neurologic or psychiatric disorder.Participants affirmed that they had not participated in any brain stimulation experiment in the last 48 h, nor had consumed alcohol in the last 24 h. Participants were paid an institution-approved amount for participating in the experiment.Experimental design and procedure were approved by the UCL research ethics committee, and followed the principles of the Declaration of Helsinki.Participants were placed in an electrically shielded chamber, 55 cm in front of a computer screen.After signing the consent form, the experimental procedure was explained and the EEG cap was set up.The behavioural task was as follows: participants were instructed to look at a fixation cross in the middle of the screen.The colour of the fixation cross changed slowly and continuously throughout the trial.This colour always started from ‘black’ and then gradually changed to other colours in a randomised order.The fixation cross changed colour gradually, taking 2.57 s.The fixation cross was initially black, but the sequence of colours thereafter was random.At the same time, participants waited for a display of randomly moving dots, initially moving with 0% coherence with a speed of 2°/s, to move coherently towards the left or right.They responded with the left or right hand by pressing a left or right arrow key on a keyboard, accordingly.The change in dot motion coherence happened abruptly.Correct responses were rewarded.Conversely, participants lost money for giving a wrong answer, for responding before dots start moving, or not responding within 2 s after dot motion.The trial was interrupted while such error feedback was given.Importantly, the time of coherent movement onset was drawn unpredictably from an exponential distribution, so waiting was sometimes extremely long.However, this wait could be avoided by a ‘skip’ response.Participants could lose time by waiting, but receive a big reward if they responded correctly, or could save time by ‘skipping’ but collect a smaller reward.The experiment was limited to one hour, so using the skip response required a general understanding of the trade-off between time and money.Participants were carefully informed in advance of the rewards for responses to dot motion, and for skip responses, and were clearly informed that the experiment had a fixed duration of one hour.There were two blocked conditions, which differed only in the origin of the skip response.In the 
‘self-initiated’ condition blocks, participants could skip waiting if they chose to, by pressing the left and right response keys simultaneously.The skip response saved time, but produced a smaller reward than a response to dot motion.Each block consisted of 10 trials.To ensure consistent visual attention, participants were required to monitor the colour of the fixation cross, which cycled through an unpredictable sequence of colours.At the end of each block they were asked to classify the number of times the fixation cross turned ‘yellow’, according to the following categories: never, less than 50%, 50%, more than 50%.They lost money for giving a wrong answer.At the end of each block, participants received feedback of total reward values, total elapsed time, and number of skips.They could use this feedback to adjust their behaviour and maximise earnings, by regulating the number of endogenous ‘skip’ responses.In the ‘externally-triggered’ condition blocks, participants could not choose for themselves when to skip.Instead, they were instructed to skip only in response to an external signal.The external signal was an unpredictable change in the colour of the fixation cross to ‘red’.Participants were instructed to make the skip response as soon as they detected the change."The time of the red colour appearance was yoked to the time of the participant's own previous skip responses in the immediately preceding self-initiated block, in a randomised order. "For participants who started with the externally-triggered block, the timing of the red colour appearance in the first block only was yoked to the time of the previous participant's last self-initiated block.The colour cycle of the fixation cross had a random sequence, so that the onset of a red fixation could not be predicted.The fixation cross ramped to ‘red’ from its previous colour in 300 ms. 
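The yoking of cue times described above can be sketched as follows. This is only an illustration of the logic, not the authors' Psychophysics Toolbox code, and the exponential foreperiod mean is a placeholder since its value is not given in the text.

```python
import random

def draw_coherent_motion_onset(mean_wait_s=6.0):
    """Draw an unpredictable coherent-dot-motion onset time from an exponential
    distribution (the mean used here is a placeholder, not the value from the study)."""
    return random.expovariate(1.0 / mean_wait_s)

def yoke_red_cue_onsets(previous_self_initiated_skip_times):
    """Externally-triggered blocks: the red-fixation cue onsets are the participant's
    own skip times from the immediately preceding self-initiated block, in random order."""
    cue_onsets = list(previous_self_initiated_skip_times)
    random.shuffle(cue_onsets)
    return cue_onsets

# Example: skip times (s) recorded in a hypothetical self-initiated block of 10 trials.
previous_block_skip_times = [4.2, 9.8, 3.1, 12.5, 6.7, 5.0, 8.3, 2.9, 7.4, 10.1]
print(yoke_red_cue_onsets(previous_block_skip_times))
print(draw_coherent_motion_onset())
```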
Again, a small reward was given for skipping.The trial finished and the participant lost money if s/he did not skip within 2.5 s from beginning of the ramping colour of the fixation cross.The ‘red’ colour was left out of the colour cycle in the self-initiated blocks.To control for any confounding effect of attending to the fixation cross, participants were also required to attend to the fixation cross in the self-initiated blocks and to roughly estimate the number of times the fixation cross turned ‘yellow’.Each externally-triggered block had 10 trials, and after each block feedback was displayed.Each self-initiated block was interleaved with an externally-triggered block, and the order of the blocks was counterbalanced between the participants.The behavioural task was designed in Psychophysics Toolbox Version 3.While participants were performing the behavioural task in a shielded chamber, EEG signals were recorded and amplified using an ActiveTwo Biosemi system.Participants wore a 64-channel EEG cap.To shorten the preparation time, we recorded from a subset of electrodes that mainly covers central and visual areas: F3, Fz, F4, FC1, FCz, FC2, C3, C1, Cz, C2, C4, CP1, CPz, CP2, P3, Pz, P4, O1, Oz, O2.Bipolar channels placed on the outer canthi of each eye and below and above the right eye were used to record horizontal and vertical electro-oculogram, respectively.The Biosemi Active electrode has an output impedance of less than 1 Ohm.EEG signals were recorded at a sampling rate of 2048 Hz.EEG data preprocessing was performed in Matlab with the help of EEGLAB toolbox.Data were downsampled to 250 Hz and low-pass filtered at 30 Hz.No high-pass filtering and no detrending were applied, to preserve slow fluctuations.All electrodes were referenced to the average of both mastoid electrodes.Separate data epochs of 4 s duration were extracted for self-initiated and externally-triggered skip actions.Data epochs started from 3 s before to 1 s after the action.To avoid EEG epochs overlapping each other any trial in which participants skipped earlier than 3 s from trial initiation was removed.On average, 5% and 4% of trials were removed from the self-initiated and externally-triggered conditions, respectively.RP recordings are conventionally baseline-corrected by subtracting the average signal value during a window from, for example, 2.5 until 2 s before action.This involves the implicit assumption that RPs begin only in the 2 s before action onset, but this assumption is rarely articulated explicitly, and is in fact questionable.We instead took a baseline from −5 ms to +5 ms with respect to action onset.This choice avoids making any assumption about how or when the RP starts.To ensure this choice of baseline did not capitalize on chance, we performed parallel analyses on demeaned data, with consistent results.Finally, to reject non-ocular artefacts, data epochs from EEG channels with values exceeding a threshold of ±150 μv were removed.On average 7% and 8% of trials were rejected from self-initiated and externally-triggered conditions, respectively.In the next step, Independent Component Analysis was used to remove ocular artefacts from the data.Ocular ICA components were identified by visual inspection.Trials with artefacts remaining after this procedure were excluded by visual inspection.Preliminary inspection showed a typical RP-shaped negative-going slow component that was generally maximal at FCz.Therefore, data from FCz was chosen for subsequent analysis.Time series analysis was performed in 
Matlab with the help of the FieldTrip toolbox. We measured two dependent variables as precursors of both self-initiated and externally-triggered skip actions: the mean RP amplitude across trials, and the variability of RP amplitudes across and within trials, measured by the standard deviation (SD). To compare the across-trial SD between the two conditions, data epochs were divided into four 500 ms windows, starting 2 s before action onset: −2 to −1.5 s, −1.5 to −1 s, −1 to −0.5 s, and −0.5 to 0 s. All p-values were Bonferroni corrected for four comparisons. To get a precise estimate of the standard error of the difference between conditions, paired-samples t-tests were performed on jack-knifed data. Unlike traditional methods, this technique compares the variation of interest across subsets of the total sample rather than across individuals, by temporarily leaving each subject out of the calculation. In addition, we also performed cluster-based permutation tests on the SD. These involve a priori identification of a set of electrodes and a time-window of interest, and incorporate appropriate corrections for multiple comparisons. Importantly, they avoid further arbitrary assumptions associated with selecting specific sub-elements of the data of interest, such as individual electrodes, time-bins or ERP components. The cluster-based tests were performed using the following parameters: a premovement time interval of interest, a minimum of 2 neighbouring electrodes, and 1000 draws from the permutation distribution. To measure the variability of RP amplitudes within each individual trial, the SD of the EEG signal from FCz was measured across time in a 100 ms window. This window was applied successively in 30 time bins from the beginning of the epoch to the time of action onset. We used linear regression to calculate the slope of the within-trial SD as a function of time. This was performed separately for each trial and each participant. Slopes greater than 0 indicate that the EEG within the 100 ms window becomes more variable with the approach to action onset. Finally, we compared the slopes of this within-trial SD measure between the self-initiated and externally-triggered conditions in a multilevel model with single trials as level 1 and participants as level 2 variables. Multilevel analysis was performed in R. For the spectral analysis, power at each frequency and time point was expressed as the percentage change relative to the average power at the same frequency during the first 500 ms of the epoch; values > 0 thus indicate that power at a specific frequency and a specific time is higher than in this baseline period. Finally, we asked whether the percentage change in power relative to baseline differs between the self-initiated and externally-triggered skip conditions in the beta band. Beta-band event-related desynchronization (ERD) during action preparation is a well-established phenomenon. Beta power was calculated in a 500 ms window starting from 1 s and ending 0.5 s prior to the skip action. We avoided analysing later windows to avoid possible contamination from action execution following presentation of the red fixation cross that cued externally-triggered responses. The average normalised power across all pixels within the selected window was then calculated for each participant and compared across conditions using paired-samples t-tests. Parameter estimation for self-initiated skip actions was performed by fitting the leaky stochastic accumulator model against the real mean RP amplitude of each participant in the self-initiated condition. First, 1000 unique trials of Gaussian noise, each of 50,000 time steps, were generated for each participant and fed into the model. The initial values of the model's parameters were derived from a previous study.
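The model-fitting procedure just described can be illustrated with a minimal leaky stochastic accumulator sketch. This is not the authors' code: the update rule below is the commonly assumed Schurger-style form, the parameter values are placeholders rather than the fitted per-participant values, and the 3 s epoching is only meant to mimic the time-locking to threshold crossing described in the text.

```python
import numpy as np

# Minimal sketch of a leaky stochastic accumulator, assuming the common form
#     dx = (drift - leak * x) * dt + c(t) * sqrt(dt) * N(0, 1),
# with a noise-scaling factor c(t) changing linearly from c1 by delta_c over the trial.
def simulate_rp_exemplars(drift, leak, c1, delta_c, threshold,
                          n_trials=1000, n_steps=50_000, dt=0.001, seed=0):
    rng = np.random.default_rng(seed)
    # Time-varying noise level, standing in for the hypothesised premovement noise change.
    c = c1 + delta_c * np.linspace(0.0, 1.0, n_steps)
    epochs = []
    for _ in range(n_trials):
        x, trajectory = 0.0, np.empty(n_steps)
        for t in range(n_steps):
            x += (drift - leak * x) * dt + c[t] * np.sqrt(dt) * rng.standard_normal()
            trajectory[t] = x
            if x >= threshold:
                if t >= 3000:  # keep trials with at least 3 s (3000 steps) before crossing
                    epochs.append(trajectory[t - 3000:t])
                break
    return np.asarray(epochs)

# Placeholder parameters; fitting would adjust these per participant against the mean RP.
epochs = simulate_rp_exemplars(drift=0.1, leak=0.7, c1=0.05, delta_c=-0.02, threshold=0.2)
mean_rp = epochs.mean(axis=0)          # analogue of the simulated mean RP used for fitting
across_trial_sd = epochs.std(axis=0)   # analogue of the across-trial SD used for testing
print(epochs.shape, float(across_trial_sd[0]), float(across_trial_sd[-1]))
```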
The output of the model was then averaged across trials and down-sampled to 250 Hz to match the sampling rate of the real EEG data. A least-squares approach was used to minimise the root mean squared deviation between the simulated and real mean RP, by adjusting the free parameters of the model for each participant. Note that this procedure optimised the model parameters to reproduce the mean RP, rather than individual trials. To fit the model to our externally-triggered skip condition, we fixed the threshold of each participant at their best-fitting threshold from the self-initiated condition. We wanted to keep the threshold the same in both conditions so that we could test the effect of changing noise levels for a given threshold. Importantly, we also fixed the value of c1 at its optimal value from the self-initiated condition. By using this strategy, we can ask how the noisiness of the signal changes from its initial value, and we can compare this change in noise between conditions. We additionally performed parallel simulations without the assumption of a common initial noise level, and obtained essentially similar results. Specifically, Δc in the all-parameters-free model was similar to the Δc in the model with c1 and threshold fixed. The remaining parameters were optimised by minimising the deviation between the simulated mean RP and the real mean RP in the externally-triggered condition. Finally, we tested the model on the across-trial variability of RP epochs, having fitted the model parameters to the mean RP. All parameters of the model were fixed at each participant's optimised values for the self-initiated condition and for the externally-triggered condition, respectively. The model was run 44 times with the appropriate parameters, and 1000 separate trials were generated each time, each corresponding to a putative RP exemplar. The Gaussian noise element of the model ensured that these 1000 exemplars were non-identical. The standard deviation across trials was calculated from these 1000 simulated RP exemplars, for each participant and each condition. Importantly, this procedure fits the model to each participant's mean RP amplitude, but then tests the fit on the standard deviation across the 1000 simulated trials. Finally, to assess the similarity between the real and predicted SD reduction, the predicted SD in the self-initiated and externally-triggered conditions was plotted as a function of time and the area between the two curves was computed. We then compared the area between the SD curves in a 2 s interval prior to self-initiated and externally-triggered skip actions for all participants' simulated data and actual data, using Pearson's correlation. Participants waited for a display of random dots to change from 0% to 100% coherent motion to the left or right. They responded by pressing a left or right arrow key on a keyboard accordingly, receiving a reward for correct responses. However, the time of movement onset was drawn unpredictably from an exponential distribution, so waiting times could sometimes be extremely long. In the ‘self-initiated’ condition blocks, participants could choose to skip waiting, by pressing both left and right response keys simultaneously. This produced a smaller reward than a response to coherent dot motion. Participants were informed that the experiment was limited to one hour, so that appropriate use of the skip response implied a general understanding of the trade-off between time and money. Crucially, this design meant that the skip response reflected a purely endogenous decision to act, without any direct
external instruction or imperative stimulus, but rather reflecting the general trade-off between smaller, earlier vs later, larger rewards.This operational definition of volition captures some important features of voluntary control, such as the link between internally-generated action and a general understanding of the distributional landscape for reasons-based decision-making.We compared self-initiated skip decisions to skips in ‘externally-triggered’ blocks, where participants could not choose for themselves when to skip.Instead, they were instructed to make skip responses by a change in the fixation cross colour, yoked to the time of their own volitional skip decisions in previous blocks.Thus, self-initiated and instructed blocks were behaviourally identical, but differed in that participants had internal control over the hazard function in the former, but not the latter condition.On average participants skipped 108 and 106 times in the self-initiated and externally-triggered conditions, respectively.They responded to coherent dot motion in the remaining trials, with a reaction time of 767 ms.Those responses were correct on 86% of trials.The average waiting time before skipping in the self-initiated condition was similar to that in the externally-triggered condition, confirming the success of our yoking procedure.The SD across trials had a mean of 3.17 s for self-initiated skips.Our yoking procedure ensured similar values for externally-triggered skips.In the externally-triggered condition, the average reaction time to the fixation cross change was 699 ms.On average participants earned £2.14 from skipping and £2.78 from correctly responding to dot motion stimuli.This reward supplemented a fixed fee for participation.The mean and distribution of waiting time before skip actions of each participant are presented in Table S1 and Fig. S1.EEG data were pre-processed and averaged separately for self-initiated and externally-triggered conditions.Fig. 
2A shows the grand average RP amplitude in both conditions. The mean RP for self-initiated actions showed the familiar negative-going ramp. Note that our choice to baseline-correct at the time of the action itself means that the RP never in fact reaches negative voltage values. This negative-going potential is absent from externally-triggered skip actions. The morphology of the mean RP might simply reflect the average of stochastic fluctuations, rather than a goal-directed build-up. However, these theories offer differing interpretations of the variability of individual EEG trajectories across trials. To investigate this distribution, we computed the standard deviation of individual-trial EEG, and found a marked decrease prior to self-initiated skip actions. This decrease is partly an artefact of the analysis technique: individual EEG epochs were time-locked and baseline-corrected at action onset, making the across-trial standard deviation at the time of action necessarily zero. However, this premovement drop in EEG standard deviation was more marked for self-initiated than for externally-triggered skip actions, although the analysis techniques were identical. Paired-samples t-tests on jack-knifed data showed that this difference in SD was significant in the last three of the four pre-movement time bins before skip actions: that is, from −1.5 to −1 s (t = 4.32, p < 0.01, dz = 0.92; p values Bonferroni corrected for four comparisons), −1 to −0.5 s (t = 5.97, p < 0.01, dz = 1.27), and −0.5 to 0 s (t = 5.39, p < 0.01, dz = 1.15). To mitigate any effects of arbitrary selection of electrodes or time-bins, we also performed cluster-based permutation tests. For the comparison between SDs prior to self-initiated vs externally-triggered skip actions, a significant cluster was identified extending from 1488 to 80 ms premovement. This suggests that neural activity gradually converges towards an increasingly reliable pattern prior to self-initiated actions. Importantly, this effect is not specific to FCz but could be observed over a wide cluster above central electrodes. However, the bilateral skip response used here makes the dataset suboptimal for thoroughly exploring the fine spatial topography of these potentials, which we hope to address in future research. We also analysed the mean and SD of EEG amplitude prior to stimulus-triggered responses to coherent dot motion. Importantly, because coherent dot-motion onset is highly unpredictable, any general difference in brain state between the self-initiated skip blocks and the externally-triggered skip blocks should also be apparent prior to coherent dot-motion onset. We did not observe any negative-going potential prior to coherent dot motion. More importantly, the SD of the EEG prior to coherent dot-motion onset did not differ between conditions in any time window. This suggests that the disproportionate drop in SD prior to skip actions cannot be explained by some general contextual difference between the two conditions, such as differences in expectation of dot stimuli, task difficulty, or temporal monitoring related to discounting and the hazard function. If the decreased variability prior to self-initiated skips had merely reflected a background, contextual process of this kind, low variability should also be present when this process was unpredictably interrupted by coherent dot motion. However, this was not found. Rather, reduced variability was associated only with the period prior to self-initiated action, and not with any difference in background cognitive processing between the
conditions. Finally, variability in the reaction time to respond to externally-triggered skip cues could potentially smear out stimulus-driven preparation of skip actions. Such jitter in RT would have the artefactual effect of increasing EEG variability across trials. To rule out this possibility, we checked whether across-trial EEG convergence was correlated across participants with variability in behavioural reaction time to the skip response cue, but found no significant correlation between the two variables. This suggests that the difference in EEG convergence between the self-initiated and externally-triggered skip conditions could not be explained by mere variability in RT to skip cues. Leaky stochastic accumulator models have been used previously to explain the neural decision of ‘when’ to move in a self-initiated task. A general imperative to perform the task shifts premotor activity up closer to threshold, and then a random threshold-crossing event provides the proximate cause of action. Hence, the precise time of action is driven by accumulated internal physiological noise, and could therefore be viewed as random, rather than decided. However, the across-trial variability of cortical potentials in our dataset suggests that neural activity converges on a fixed pattern prior to self-initiated actions, to a greater extent than for externally-triggered actions. This differential convergence could reflect a between-condition difference in the autocorrelation function of the EEG. The early and sustained additional reduction in SD before self-initiated actions motivated us to hypothesise an additional process of noise control associated with self-initiated actions. To investigate this hypothesis, we first performed a sensitivity analysis by investigating how changing key parameters of the model could influence the across-trial variability of its output. We modelled the hypothesised process of noise control by allowing a gradual change in noise prior to action. We also explored how changes in the key drift and leak parameters would influence the trial-to-trial variability of the RP. We gradually changed each parameter while holding the others fixed, and simulated RP amplitude in 1000 trials time-locked to a threshold-crossing event. SD was then measured across these simulated trials. Simulated across-trial SDs showed that lower drift rates and shorter leak constants were associated with a higher across-trial SD. Conversely, reductions in noise were associated with a lower across-trial SD. Thus, for the model to reproduce the differential EEG convergence found in our EEG data, either the drift or the leak should be higher, or the change in noise parameter should be lower, in the self-initiated compared to the externally-triggered skip condition. We next fitted the model to the mean RP amplitude of each participant, separately for the self-initiated and externally-triggered conditions. The best-fitting parameters were then compared between the two conditions. The drift was significantly lower (t = −4.47, p < 0.001, after Bonferroni correction for the three parameters tested) in the self-initiated compared to the externally-triggered condition. The leak was also significantly lower (t = −4.20, p < 0.001, Bonferroni corrected) in the self-initiated compared to the externally-triggered condition. The change in noise was negative in the self-initiated but positive in the externally-triggered condition. This difference was significant between the conditions (t = −5.38, p < 0.001, Bonferroni corrected). Finally, to investigate which
Finally, to investigate which parameters were most sensitive to the difference between self-initiated and externally-triggered conditions, we expressed the effect of condition on each parameter as an effect size. Importantly, the effect size for the between-condition difference in the change in noise parameter was larger than that for the drift or the leak parameters. So far, we fitted model parameters to the mean RP amplitude, and noted through a separate sensitivity analysis their implications for across-trial SD. Next, we directly predicted the drop in across-trial SD of simulated RP data in the self-initiated compared to the externally-triggered condition, using the optimal model parameters for each participant in each condition. We therefore simulated 22 RP data sets, using each participant's best-fitting parameters in each condition, and computed the SD across the simulated trials. We observed a marked additional drop in simulated across-trial SD in the self-initiated compared to the externally-triggered condition. The differential convergence between conditions in the simulated data closely tracked the differential convergence in our EEG data. Optimum parameter values from the model suggest that a consistent process of noise reduction reliably occurs prior to self-initiated actions. This theory predicts that, compared to externally-triggered actions, EEG variability should reduce more strongly not only across trials but also within each single self-initiated action trial. To test this prediction we measured SD within a 100 ms sliding window for each trial and each condition. We then used linear regression to calculate the slope of the within-trial SD change for each trial, and compared slopes between the self-initiated and externally-triggered conditions using a multilevel model with single trials as level 1 and participants as level 2 variables. While EEG variability decreased within self-initiated skip trials, it increased within externally-triggered trials. The between-condition difference in slopes was highly significant (t = 3.39, p < 0.001; Fig. 6B), consistent with a progressive reduction of EEG variability prior to self-initiated actions.
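The within-trial analysis described above can likewise be sketched in a few lines. The function below assumes a hypothetical array of action-locked epochs (trials × samples), computes the SD of the signal within a sliding 100 ms window for each trial, and fits a straight line to obtain the slope of the within-trial SD change; the multilevel model used for inference in the study is replaced here by a simple comparison of mean slopes.

```python
import numpy as np

def within_trial_sd_slopes(epochs, fs=250, win_s=0.1, step_s=0.05):
    """Slope of within-trial EEG SD over time, one value per trial.

    `epochs` is a (n_trials, n_samples) array time-locked to action onset.
    """
    win, step = int(win_s * fs), int(step_s * fs)
    starts = np.arange(0, epochs.shape[1] - win + 1, step)
    centres = (starts + win / 2) / fs                     # window centres in seconds
    slopes = np.empty(epochs.shape[0])
    for k, trial in enumerate(epochs):
        sd = np.array([trial[s:s + win].std(ddof=1) for s in starts])
        slopes[k] = np.polyfit(centres, sd, 1)[0]         # linear slope of SD vs time
    return slopes

# Hypothetical epochs for the two conditions (replace with real data).
rng = np.random.default_rng(2)
epochs_self = rng.standard_normal((120, 500))
epochs_ext = rng.standard_normal((120, 500))

print("mean slope, self-initiated:", within_trial_sd_slopes(epochs_self).mean())
print("mean slope, externally-triggered:", within_trial_sd_slopes(epochs_ext).mean())
```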
Previous discussions of amplitude variation in EEG focussed on synchronised activity within specific spectral bands. Preparatory decrease in beta-band power has been used as a reliable biomarker of voluntary action. While time-series methods identify activity that is phase-locked, spectral methods identify EEG power that is both phase-locked and non-phase-locked, within each specific frequency band. Since motor threshold models simply accumulate all neural activity, whether stochastic or synchronised, we reasoned that a reduction in the noise scaling factor within an accumulator model might be associated with a reduction in synchronised activity. We therefore also investigated the decreasing variability of neural activity prior to self-initiated action using spectral methods. Specifically, we focused on the event-related desynchronization (ERD) of beta-band activity. We compared ERD between the self-initiated and externally-triggered conditions in a 500 ms pre-movement window. Beta power in this period decreased prior to self-initiated skips, but not before externally-triggered skip actions. Importantly, the percentage change in beta power was significantly different between the two conditions (t = −4.16, p < 0.001). The capacity for endogenous voluntary action lies at the heart of human nature, but the brain mechanisms that enable this capacity remain unclear. A key research bottleneck has been the lack of convincing experimental paradigms for studying volition. Many existing paradigms rely on paradoxical instructions equivalent to “be voluntary” or “act freely”. In a novel paradigm, we operationalized self-initiated actions as endogenous ‘skip’ responses embedded in a perceptual decision task, with a long, random foreperiod. Participants could decide to skip waiting for an imperative stimulus by endogenously initiating a bilateral keypress. Although previous studies in animals also used ‘giving up waiting’ to study spontaneous action decisions, we believe this is the first use of this approach to study self-initiated actions in humans. The skip action in our task has many features traditionally associated with volition, including internal generation, reasons-responsiveness, freedom from immediacy, and a clear counterfactual alternative. Crucially, operationalising self-initiated voluntary action in this way avoids explicit instructions to “act freely”, and avoids subjective reports about “volition”. We compared such actions to an exogenous skip response to a visual cue in control blocks. The expectation of visual stimulation and the occurrence and timing of skip responses were all balanced across the two blocks, so the key difference is that participants had voluntary control over skips in the self-initiated, but not the externally-triggered, blocks. We noted above that voluntary control in turn involves a number of components, including decision, initiation, and expectation of action. We cannot be certain how much each of these components contributes to our electrophysiological results. However, these different components are all considered important markers of self-initiated voluntary action. The neural activity that generates self-initiated voluntary actions remains controversial. Several theories attribute a key role to medial frontal regions. Averaged scalp EEG in humans revealed a rising negativity beginning 1 s or more before the onset of endogenous actions, and appearing to originate in medial frontal cortex. Since this ‘readiness
potential’ does not occur before involuntary or externally-triggered movements, it has been interpreted as the electro-physiological sign of planning, preparation, and initiation of self-initiated actions.RP-like brain activities preceding self-initiated actions were also reported at the single-neuron level.However, the view of the RP as a causal signal for voluntary action has been challenged, because simply averaging random neural fluctuations that could trigger movement also produces RP-like patterns."Such stochastic accumulator models were subsequently used to predict humans' and rats' self-initiated actions in a task similar to ours.Thus, it remains highly controversial whether the RP results from a fixed precursor process that prepares self-initiated actions, or from random intrinsic fluctuations.We combined an experimental design that provides a clear operational definition of volition, and an analysis of distribution of pre-movement EEG across and within individual trials.We report the novel finding that self-initiated movements are reliably preceded by a process of variability reduction, measured as a decreasing variability in individual trial RPs, over the 1.5 s prior to movement.Importantly, this variability reduction was specifically associated with the premovement period before a self-initiated action: First, variability reduction was stronger prior to self-initiated skip actions than prior to externally-triggered skip actions.Second, and crucially, the variability reduction did not reflect any general contextual factor that might differ between these two conditions.In our task, the onset of coherent dot motion provides an unexpected snapshot of the brain state in the specific context provided by each condition, but not at the time of the skip event.Fig. 3 showed that any such contextual differences between conditions did not affect EEG variability, and thus could not explain the reduced variability prior to self-initiated skip actions.Thus, the reduced variability in our self-initiated skip condition is linked to the impending action itself, and not to any general difference in the cognitive processing or task demands between the two conditions.This pattern of results suggests a neural precursor of self-initiated action, rather than other background contextual factors unrelated to action preparation.Measurement of inter-trial variability has been extensively used in the analysis of neural data.For example, presenting a target stimulus decreases inter-trial variability of neural firing rate in premotor cortex.Interestingly, RTs to external stimuli are shortest when variability is lowest, suggesting that a decrease in neural variability is a marker of motor preparation.Moreover, reducing neural variability is characteristic of cortical responses to any external stimulus, and could be a reliable signature of conscious perception.Importantly, in previous studies, the decline in neural variability was triggered by a target stimulus, i.e. 
decreasing neural variability was triggered exogenously.Our results show that inter-trial variability also decreases prior to a self-initiated action, in the absence of any external target.Classical models might attribute variability reduction prior to self-initiated action to a consistent process of preparation.In contrast, stochastic fluctuation models have been recently used to account for the neural activity preceding self-initiated actions in humans and rodents.We did not aim in this experiment to compare these models directly, but to investigate their predictions regarding the shape and variability of the RP.Our modelling showed variability reduction could be explained within a stochastic fluctuation model, with the additional assumption of progressive decrease in the input noise level.In the absence of external evidence, stochastic models depend only on internal physiological noise to determine the time of action.Thus, Schurger et al.’s model first shifts premotor activity closer to a motor threshold, while the actual threshold-crossing event is triggered by accumulating stochastic fluctuations."By fitting a modified version of the leaky stochastic accumulator model on each participant's mean RP amplitude, we observed that integration of internal noise evolves differently prior to self-initiated and externally-triggered skip actions.The rate of the drift and the leak was lower and the change in noise was negative prior to self-initiated actions, compared to externally-triggered actions."Importantly, by fitting model parameters to each participant's mean RP, and testing the variability of EEG data generated with those parameters, we found that variability reduction before self-initiated action was mainly driven by a gradually-reducing noise level.Previous studies show that changes in noise level influence choice, RT and confidence in accumulation-to-bound models of perceptual decision making.Interestingly, the motivating effects of reward on speed and accuracy of behaviour were recently shown to be attributable to active control of internal noise.In general, previous studies show an important role of active noise control in tasks requiring responses to external stimuli.We have shown that similar processes may underlie self-initiated action, and that a consistent process of noise reduction may be a key precursor of self-initiated voluntary action.This additional process of noise control may make a stochastic approach more similar to classical models of voluntary action.Finally, we showed that a decrease in premotor neural variability prior to self-initiated action is not only observed across-trials, but is also realised within-trial and as a reduction in EEG power in the beta frequency band.The observed reduction in beta-band power is entirely consistent with the proposed reduction in neural noise preceding self-initiated action that was suggested by our modelling.Clearly, any natural muscular action must have some precursors."Sherrington's final common path concept proposed that descending neural commands from primary motor cortex necessarily preceded voluntary action.However, it remains unclear how long before action such precursor processes can be identified.Our result provides a new method for addressing this question.The question is theoretically important, because cognitive accounts of self-initiated action control divide into two broad classes.In classical accounts, a fixed, and relatively long-lasting precursor process is caused by a prior decision to act.In other recent 
accounts, stochastic fluctuations would dominate until a relatively late stage, and fixed precursor processes would be confined to brief, motoric execution processes.The precursor processes that our method identifies may be necessary for self-initiated action, but may not be sufficient: identifying a precursor process prior to self-initiated movement says nothing about whether and how often such a process might also be present in the absence of movement.On one view, the precursor process might occur quite frequently, but a last-minute decision might influence whether a given precursor event completes with a movement, or not.Our movement-locked analyses cannot identify any putative precursor processes or precursor-like processes that failed to result in a movement.However, our spectral analyses make this possibility unlikely.They show a gradual decline in total beta-band power beginning around 1 s prior to self-initiated action.Any putative unfulfilled precursor processes would presumably produce partial versions of this effect throughout the epoch, but these are not readily apparent.Lastly, there might be a nonlinear relation between the recorded signals and the decision process.Our analyses assumed a simple, linear relation between the decision process and the measured variables.This assumption may be simplistic, but almost all analyses of neural data make similar assumptions at some level.Interestingly, our endogenous skip response resembles the decision to explore during foraging behaviour.That is, endogenous skip responses amounted to deciding to look out for dot-motion stimuli in forthcoming time-periods, rather than the present one.This prompts the speculation that spontaneous transition from rest to foraging or vice-versa could be an early evolutionary antecedent of human volition.In conclusion, we show that self-initiated actions have a reliable precursor, namely a consistent process of neural variability reduction prior to movement.We showed that this variability reduction was not due to a background contextual process that differed between self-initiated and externally-triggered conditions, but was related to self-initiated action.We began this paper by distinguishing between a classical model, in which a fixed preparation process preceded self-initiated action, and a fully stochastic model, in which the triggering of self-initiated action is essentially random – although the artefact of working with movement-locked epochs might give the appearance of a specific causal signal such as the RP.We found that the precursor process prior to self-initiated action could be modelled within a stochastic framework, given the additional assumption of a progressive reduction in input noise.Future research might usefully investigate whether the precursor process we have identified is the cause or the consequence of the subjective ‘decision to act’.Conceptualization, N.K., P.H., A.S., and A.D.; Methodology, N.K., A.S., and A.D.; Formal Analysis, N.K., L.Z., and P.H.; Investigation, N.K., L.Z.; Writing-original draft, N.K., Writing-review & editing, P.H., A.S., Supervision, P.H. and A.S.
A gradual buildup of electrical potential over motor areas precedes self-initiated movements. Recently, such “readiness potentials” (RPs) were attributed to stochastic fluctuations in neural activity. We developed a new experimental paradigm that operationalized self-initiated actions as endogenous ‘skip’ responses while waiting for target stimuli in a perceptual decision task. We compared these to a block of trials where participants could not choose when to skip, but were instead instructed to skip. Frequency and timing of motor action were therefore balanced across blocks, so that conditions differed only in how the timing of skip decisions was generated. We reasoned that across-trial variability of EEG could carry as much information about the source of skip decisions as the mean RP. EEG variability decreased more markedly prior to self-initiated compared to externally-triggered skip actions. This convergence suggests a consistent preparatory process prior to self-initiated action. A leaky stochastic accumulator model could reproduce this convergence given the additional assumption of a systematic decrease in input noise prior to self-initiated actions. Our results may provide a novel neurophysiological perspective on the topical debate regarding whether self-initiated actions arise from a deterministic neurocognitive process, or from neural stochasticity. We suggest that the key precursor of self-initiated action may manifest as a reduction in neural noise.
740
Homoeologs: What Are They and How Do We Infer Them?
Many plants – and virtually all angiosperms – have undergone at least one round of polyploidization in their evolutionary history .In particular, numerous important crop species, such as Arachis hypogaea, Avena sativa, Brassica juncea, Brassica napus, Coffea arabica, Gossypium hirsutum, Mangifera indica, Nicotiana tabacum, Prunus cerasus, Triticum turgidum, and Triticum aestivum, exhibit allopolyploidy, a type of whole-genome duplication via hybridization followed by genome doubling .This hybridization usually occurs between two related species, thus merging the genomic content from two divergent species into one.Allopolyploidization has been studied since at least the early 1900s.Some of the first investigations were about chromosome numbers and pairing patterns of hybrid species .The term homoeologous was coined to distinguish chromosomes that pair readily during meiosis from those that pair only occasionally during meiosis .However, the definition of homoeology has varied and at times been used inconsistently.Homoeology has been broadly used to denote the relationship between ‘corresponding’ genes or chromosomes derived from different species in an allopolyploid.Accurately identifying homoeologs is key to studying the genetic consequences of polyploidization; knowing the evolutionary correspondence between genes across subgenomes allows us to more accurately estimate gene gain or loss after polyploidization and to study the major structural rearrangements or conservation between homoeologous chromosomes.Additionally, we can study the functional divergence of homoeologs on polyploidization, particularly in terms of expression, epigenetic patterns, alternative splicing , and diploidization.From a crop improvement viewpoint, identifying homoeologs that may have been functionally conserved is important for elucidating or engineering the genetic basis for traits of interest .This high interest in the genetic and evolutionary consequences of polyploidization has driven the development of several methods for homoeolog inference.However, because of their highly redundant nature polyploid genomes have been notoriously challenging to sequence and assemble .Recent breakthroughs in sequencing and assembly methods suggest that we are finally overcoming this hurdle and as increasing numbers of polyploid genomes are sequenced there will be a growing interest in homoeology inference.Thus, it is necessary to establish a common framework.Here we examine the current and common definitions of homoeology and point out imprecise usage in the literature, from historical definitions to modern understandings.We advocate a precise and evolutionarily meaningful definition of homoeology and connect homoeology and orthology inference.We then review homoeolog inference methods and discuss advantages and disadvantages of each approach.It is first important to make the distinction between homology and homoeology.The prefix ‘homo-’ comes from the Latin word for ‘same’, whereas the prefix ‘homoeo’ means ‘similar to’ .Homoeology has alternatively been spelled as ‘homeology’.Both terms have a history of varied and, at times, inconsistent usage in different fields, but in biology it is now generally accepted that homology indicates ‘common ancestry’; by contrast, ‘homoeology’ is more ambiguous.The term homoeologous was first used in a cytogenetics study of allopolyploid wheat, where Huskins defined it as ‘phylogenetically similar but not strictly homologous chromosomes’ in a hybrid.Huskins goes on to explain further:To 
distinguish between chromosomes which come within the commonly accepted meaning of the term homologous and those which are, as evidenced by their pairing behavior, similar only in part, the latter might be referred to as homœologous chromosomes, signifying similarity but not identity…This term would include chromosomes of different ‘genomes’ which pair occasionally in allopolyploids, often causing the appearance of mutant or aberrant forms, and also, as a corollary, chromosomes which pair irregularly in many interspecific hybrids. ,Two decades later, in the 1949 Dictionary of Genetics, R.L. Knight defines homoeologous chromosomes as ‘chromosomes that are homologous in parts of their length’ .Thus, in its historical context, a pair of homoeologous chromosomes is thought of as being similar but exhibiting only infrequent pairing during meiosis.In a survey of 93 studies of autopolyploids and 78 studies of allopolyploids, multivalent pairing on average occurred more in autopolyploids than in allopolyploids .Although chromosome pairing patterns give a good indication of homology type, this should not be used as a criterion.Over the years, the definition of homoeology has evolved and diverged to have different usages depending on the scientific field of study or topic.The term homoeologous can mean different things and may not be as simple as ‘genes duplicated by polyploidy’ .Table 1 highlights the differences between the different definitions of homoeology depending on the context in which it is used.The variation among definitions depends on the level of biological analysis: at the chromosome, gene, or sequence level.Even in modern evolutionary biology contexts, the term homoeolog has been used inconsistently.For instance, some have used it not just in the context of allopolyploids but to relate duplicates created by autopolyploidy as well.This is, however, at odds with the original description of homoeologs as belonging to an allopolyploid genome .There are biological differences between genes that arise due to speciation versus duplication and thus also, conceivably, between allo- versus autopolyploids.Autopolyploids by definition are created by genome doubling, with an exact copy of the genome formed.By contrast, allopolyploids are formed by the merger of closely related species that have already started to diverge.Although still poorly understood, these fundamental differences could have significant effects on the genome of the polyploid.Hybridization can induce a ‘genome shock’ prompting epigenetic or expression changes that might not be present with strictly genome doubling per se .The functional consequences of genes duplicated by allo- vs autopolyploidy still needs to be investigated, which is why a clear distinction of terminology between the two is important.Furthermore, this usage of homoeolog overlaps with another term – ohnologs – used to denote genes resulting from whole-genome duplication .The term homoeolog has even been used to refer to similar chromosomal regions in different species .Although closely related species do have similar chromosomes and gene content, this latter usage is unorthodox: the term homoeolog has been overwhelmingly used to denote relationships within polyploids, and therefore within a single species rather than between closely related species.A cross-species definition of homoeology is also redundant with that of orthology.Consequently, there is a need for a unifying, evolutionarily precise definition of homoeology, formulated in terms of the key events 
that gave rise to the genes in question. The ideal definition should be as consistent as possible with the widespread usage of the term and should complement the other ‘-log’ terms, which have served the community well. We define homoeologs as pairs of genes or chromosomes in the same species that originated by speciation and were brought back together in the same genome by allopolyploidization. Figure 1 depicts how this definition complements the other ‘-log’ terms. In particular, the analogy between homoeologs and orthologs implies that homoeologs can be thought of as orthologs between subgenomes of an allopolyploid. Note that the term ‘paleolog’ is sometimes used to denote genes arising from ancient polyploidization events. The term is convenient for plants such as soybean, where the polyploidization event occurred more than a few million years ago and where it is unknown whether these were auto- or allopolyploidization events. Because of the analogy between homoeology and orthology, homoeologs are subject to the same common misconceptions that afflict orthologs: the notion that homoeologs are necessarily in a one-to-one relationship, or that they have remained strictly in their ancestral positions since speciation. Since homoeology is characterized by an initial speciation event, once the progenitor species of the future allopolyploid begin to diverge, the corresponding genes in each new species that descended from a common ancestral gene start diverging in sequence. The sequence divergence will depend on the time since the progenitor divergence and other factors. In addition to genic sequence divergence, other smaller-scale evolutionary events may occur, including single-gene duplications, deletions, and rearrangements. As a consequence, orthologous relationships are not necessarily one-to-one between species and may exist in one-to-many or many-to-many relationships, especially among highly duplicated plant genomes. The same is true for homoeologous relationships. Depending on the duplication rate since the divergence of the progenitor species, there may be more than one homoeologous copy of a given gene per subgenome. In many plant species, a high degree of collinearity, or conservation of gene order, has been observed between homoeologous chromosomes in polyploids. Genes tend to stay in their ancestral position since divergence, leading to the concept of positional orthology and, analogously in allopolyploids, of positional homoeology. However, there may be rearrangement of homoeologs via single-gene duplication/translocation either before or after polyploidization, going against the widespread notion that homoeologous genes are always positional. Although we can expect that most homoeologs remain positionally conserved and in a one-to-one relationship after polyploidization, these are only a subset of the homoeologs. The frequency of homoeolog duplication may be significantly underestimated in some species due to use of the best bidirectional hit – an approach inherently limited to inferring one-to-one relationships. In general, homoeology inference reduces to identifying similar genes within a polyploid genome and inferring whether pairs of homologs started diverging from one another through speciation, in which case they are homoeologs, or through duplication, in which case they are paralogs. The methods for doing so have changed over time with advances in technology, from low-throughput laboratory techniques to high-throughput computational ones. In this section we survey these techniques and highlight
their relative strengths and limitations.Although whole-genome sequencing has become commonplace thanks to next-generation sequencing techniques, many species do not yet have a fully sequenced reference genome.Techniques used to isolate homoeologous genes from polyploid species before NGS were based on hybridization, using a probe or primer as a template to retrieve the homoeologs of interest.However, due to the high sequence similarity of homoeologs as well as paralogs, one would obtain a mixture of DNA molecules representing homoeologous and paralogous copies, which then needed to be separated.One method of separating homoeologous copies from each other in a pool of highly similar DNA molecules is by using the mixture of homoeologs obtained from PCR to transform into bacteria, resulting in only a single copy of either homoeolog in each bacterial colony.Colonies can then be isolated, sequenced, and assigned to subgenomes by using knowledge from diploid progenitor species, specifically differentialgenome restriction patterns .Note that the true progenitors may no longer exist in nature and that the term ‘progenitor’ may refer to their extant, unhybridized descendant or close relative .Another way of separating homoeologous sequences makes use of restriction-digested DNA followed by size fractionation on a gel .Minor differences among homoeologous copies can be expected to result in sequence differences at restriction sites and thus digestion cuts homoeologous copies into different sizes.This is followed by isolating the DNA from the separated bands and then amplifying these homoeologous copies by cloning.Alternatively, isolated homoeologs can be obtained using PCR primers to produce a mixture of homoeologous copies, and after size fractionation the same primers can be used to amplify individual homoeolog copies .The above techniques are all performed on a gene-by-gene basis with molecular methods and therefore are small scale and relatively time-consuming and laborious.A more recent and larger-scale technique to separate homoeologs, based on hybridization of genomic DNA to an array, is able to target hundreds or thousands of genes at a time, each individually spotted on the array.Salmon et al. used this technique to capture homoeologous pairs in G. 
hirsutum.After hybridization, the probes on the chip, enriched for homoeologous pairs, were then sequenced with NGS.Homoeologs could be distinguished by sequence polymorphisms between them.These experimental techniques have several limitations.First, they are appropriate for studies focusing on a small number of genes but scale poorly to entire genomes.Additionally, they all require prior sequence information for the gene of interest.If cDNA is used as the starting point, one can combine homoeolog inference with differential expression studies.However, this works only for genes that are expressed in the particular condition from which the cDNA library was made.Homoeologs are assigned to a subgenome by comparing the individual homoeolog sequences from the polyploid to their orthologous counterparts in the diploid progenitors.Therefore, these experiments need to be performed on the progenitor species as well, which may not always be readily available.Finally, it can be difficult to distinguish homoeologous from paralogous sequences, as the degree of sequence divergence between the two can be slight and thus not result in a clear difference in hybridization pattern.Thus, these techniques do not perform well on large gene families.Before the era of whole-genome sequencing, molecular markers were used to detect synteny and collinearity between chromosomes.However, molecular mapping is more complicated in a polyploid than a diploid, as there needs to be sufficient allele polymorphism to distinguish among the different homoeologs.Several techniques exist to circumvent this problem by comparative mapping in diploid relatives or by using aneuploid lines .Many studies have been published using mapping to identify homoeologous relationships between chromosomes or genes in several allopolyploids, including Gossypium , B. napus , A. hypogaea , and T. 
aestivum .Wheat researchers played a major role in popularizing the term homoeology in the 1990s, with many molecular mapping papers showing the collinearity between wheat homoeologous chromosomes.Although conservation of position in the genome can be used as another layer of evidence above sequence similarity to infer homoeology, there are several inherent problems with homoeology inference based solely on this approach.Mapping homoeologs is possible only if the molecular markers are able to distinguish sequence polymorphisms between homoeologs.Additionally, conservation of relative genomic location in itself is not a requirement for homoeology, which depends only on the type of event that gave rise to the sequences.Due to potential duplications, chromosomal rearrangements, or other events leading to gene movement , relying on positional conservation to infer homoeology may lead to a substantial fraction of missed homoeologous relationships and introduce a bias.Like orthologs or paralogs, positional and non-positional homoeologs could differ in their biological characteristics.For example, orthologous genes maintained in the same position have slower evolution rates, are less likely to undergo positive selection, and are more likely to have a conserved function .Additionally, positional orthologs have been shown to maintain a higher expression level and breadth compared with non-positional orthologs .Paralogs that have inserted into distant regions of the genome tend to have a more divergent DNA methylation pattern and expression than tandem duplicates .High-throughput sequencing allows fast and affordable production of genome-wide sequence information, making it possible to identify similar regions and infer homoeology computationally at a genome-wide scale.However, despite rapid improvements in sequencing technology it remains a challenge to obtain a high-quality, fully assembled reference genome sequence for many plant species .This is mainly because of their large, complex genomes, which are highly repetitive due to duplication and transposon activity .With entire chromosomes in multiple copies, this difficulty is compounded in polyploid genomes.Because of these issues, most polyploid plant genome sequences remain in a draft, highly fragmented state, usually comprising small contigs harboring only a few genes .The identification of homoeologs thus first requires assembling short sequences at low stringency followed by homoeolog discrimination based on sequence polymorphisms between the reads.For example, Udall et al. assembled ESTs from allotetraploid cotton and the two diploid progenitors.Most assembled contigs contained four copies: two orthologs from the progenitors and one from each of the homoeologs.They then assigned the homoeolog ESTs to their appropriate subgenome based on sequence comparison with the progenitors .In another example , homoeologs were distinguished in hexaploid wheat by first assembling, at a relatively low stringency, transcriptome NGS reads into clusters of sequences containing homoeologs and close paralogs.The second step was to reassemble each cluster separately using a more stringent assembler to separate homoeologs.After discriminating between homoeologous genes, it is generally necessary to map the reads back to the progenitor species to infer to which subgenome they belong.For example, Akama et al. 
sequenced and de novo assembled both Arabidopsis halleri and Arabidopsis lyrata.They identified homoeologs by aligning the allotetraploid reads to both the A. halleri and A. lyrata genomes and considered high-scoring alignments as homoeologs.A similar technique was performed in hexaploid wheat taking advantage of the recently sequenced diploid progenitors Triticum urartu and Aegilops tauschii .Another method of separating contigs into individual homoeolog copies employs the strategy of ‘post-assembly phasing’ using remapped reads, which detects polymorphisms in reads and determines whether they were inherited together .Provided that the progenitors’ genomes are known and well separated, techniques based on short reads and sequence polymorphisms to infer homoeologs can be effective.Because they tend to be based on RNA-seq reads, one can simultaneously quantify their expression.However, there will be false negatives if one or both of the homoeologs is unexpressed.Also, it can be costly to first sequence the progenitor species.Another disadvantage is that, again, these methods do not establish one-to-many or many-to-many relationships.Additionally, as with experimental hybridization methods, it can be difficult to distinguish homoeologs from paralogs.We indicated above that homoeologs should be defined as pairs of genes within an allopolyploid that originated by speciation and were reunited by hybridization.Thus, fundamentally, the relationship between homoeologs is based on evolutionary relationships rather than sequence similarity.Furthermore, the parallel between homoeologs and orthologs suggests the possibility of repurposing orthology inference methods – a relatively mature area of research with many well-established computational methods .These methods, which all work at the genome-wide scale, are divided into phylogenetic-tree-based and graph-based.Methods based on phylogenetic trees use the process of gene/species tree reconciliation, which determines whether each internal node of a given gene tree is a speciation or duplication node using the phylogeny of the species tree.With this information one can determine whether any two genes are related through orthology or paralogy; pairs of genes that coalesce at a speciation node are orthologs, whereas pairs of genes that coalesce at a duplication node are paralogs .To our knowledge, the only phylogenetic tree-based homoeology inference approach taken so far is that of Ensembl Genomes, which has repurposed their Compara phylogenetic tree-based pipeline to distinguish orthologs, paralogs, and homoeologs in wheat .This is achieved by treating each subgenome as a different species, running their usual orthology pipeline, and finally relabeling orthologs inferred among subgenomes as homoeologs.This information is found in the ‘location-based display’ on their website under ‘Polyploid view’.In general, graph-based orthology methods comprise inferring and clustering pairs of orthologs based on sequence similarity .Graph-based orthology methods have also been adapted to infer homoeologs.One of the simplest and most widely used methods of ortholog detection is by finding BBHs between pairs of genomes .This method uses BLAST or another sequence alignment algorithm to find the set of reciprocally highest-scoring pairs of genes between two genomes.Such an approach was used to infer homoeologs between the subgenomes of hexaploid wheat, identifying triplets of best bidirectional protein hits between subgenomes .However, the BBH method has inherent 
drawbacks.By identifying only the ‘best’ pair, it cannot identify one-to-many or many-to-many homoeology.This is particularly problematic for highly duplicated plant genomes .As a result, BBH between subgenomes will at best infer a subset of the homoeologous relationships, thereby yielding false-negatives.Additionally, differential gene loss among the subgenomes can cause erroneous inference of paralogs as homoeologs .Finally, using alignment scores is suboptimal in the presence of many fragmentary genes and sequencing errors .Another graph-based homoeolog inference approach to analyze the wheat genome was performed in the Orthologous Matrix database – a method and resource for inferring different types of homologous relationships between fully sequenced genomes .This technique identifies mutually closest homologs based on evolutionary distance while considering the possibility of differential gene loss or many-to-many relationships .Again, the application of the orthology inference pipeline was achieved by treating each subgenome as a different species, running the standard pipeline, and, finally, calling orthologs between subgenomes homoeologs.Compared with the BBH approach, the OMA algorithm has the advantages of considering many-to-many homoeology, identifying differential gene losses, and relying on evolutionary distances rather than alignment score.The main issue limiting the use of repurposed orthology methods such as Ensembl Compara and OMA is the requirement for a priori delineation of the subgenomes.If there have been rearrangements across subgenomes since hybridization of the progenitors occurred, this will cause errors in homoeolog inference because subgenomes can no longer be straightforwardly treated as individual species.Another problem with both similarity- and evolution-based techniques is that they are highly dependent on the quality of sequence assembly and annotation used to infer homoeologs.Polyploid species are widespread throughout the plant kingdom.There is much interest in polyploidy and accurately identifying homoeologs allows us to better study the genetic and evolutionary consequences on genomes of polyploids.Many exciting findings have been published recently that provide insights into the structural and functional divergence of homoeologs and the chromosomes they reside on .As a result, polyploidy has emerged as potentially a major mechanism of adaptation to environmental stresses .The term homoeologous was first used in 1931 to describe chromosomes related by allopolyploidy.Since then, the definition has changed over the years and now suffers from inconsistent interpretation, usage, and spelling.In recent decades there has been increasing interest in polyploidy and the word homoeology has experienced an increase in usage.There has been a surge in sequenced plant genomes and polyploid genomes are not far behind, despite their increased complexity and challenges due to their repetitive nature .Thus, just as it was important to establish clear definitions of orthology and paralogy , now is the time to establish common and consistent definitions for homologs that exist in a polyploid.Based on our survey of the usage of the term and related concepts in evolutionary biology, we advocate defining homoeologs as pairs of genes that started diverging through speciation but are now found in the same species due to hybridization.This evolution-based definition has several implications that call for a fundamental shift in the way we as biologists, plant breeders, and 
bioinformaticians think of homoeology.First, homoeolog inference may suffer from false negatives if inferred solely on the basis of positional conservation.This is because genes can move and, by definition, different types of homologous relationships are based on how the genes originated and not where they are located in the genome.Syntenic conservation is helpful to infer homoeologs but should be used only as a soft criterion to provide additional evidence that a pair of genes are homoeologs.We recommend using the term positional homoeolog when referring to the subset of homoeologs with a conserved syntenic position.Furthermore, looking at homoeology from an evolutionary perspective has an impact on the relationship cardinality.Homoeology is not necessarily a one-to-one relationship, especially in highly duplicated plant genomes.This conceptual change is important because one-to-one positional homoeologs are likely to have significantly different biological characteristics than one-to-many, non-positional homoeologs – as has been previously observed with orthologs.The establishment of a clear and meaningful definition of homoeology is timely.With rapid progress in sequence technology, we are at the cusp of an explosion of sequenced polyploid genomes.However, although assembling allopolyploid genomes might no longer be ‘formidable’ , unraveling the evolutionary history of the genes they contain remains resolutely so.Overcoming this challenge will require a major coordinated effort among plant, evolutionary, and computational biology scientists.A common definition and framework constitutes a first essential step toward that goal.Can dependable computational methods be devised to infer from genome sequence alone whether a polyploid species originated by allopolyploidization or by autopolyploidization?,Certain computational pipelines need delineation of subgenomes before homoeology inference.This will, however, not work if there has been considerable chromosomal rearrangement between subgenomes after polyploidization.Can one simultaneously detect rearrangement, separate subgenomes, and infer homoeologs?,In general, what are the functional differences between homoeologs and ohnologs?,There is a growing body of research looking at the functional implications of polyploidization, but so far a clear answer remains elusive.
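To make the graph-based inference discussed above concrete, the sketch below implements its simplest variant, the best bidirectional hit, between two subgenomes of an allopolyploid, starting from a small table of hypothetical pairwise alignment scores. As emphasised in the text, BBH recovers only one-to-one candidate homoeologs and can be misled by one-to-many relationships or differential gene loss (gene A3 is left unpaired in the toy example below); it is shown purely as an illustration, not as a substitute for evolution-aware pipelines such as OMA or Ensembl Compara.

```python
# Hypothetical alignment scores between genes of subgenome A and subgenome B
# of an allopolyploid (identifiers and values are invented for illustration).
scores = {
    ("A1", "B1"): 950, ("A1", "B2"): 310,
    ("A2", "B2"): 870, ("A2", "B3"): 860,
    ("A3", "B3"): 640, ("A3", "B1"): 120,
}

def best_hits(scores, forward=True):
    """Best-scoring partner for every query gene (A->B if forward, else B->A)."""
    best = {}
    for (a, b), s in scores.items():
        query, target = (a, b) if forward else (b, a)
        if query not in best or s > best[query][1]:
            best[query] = (target, s)
    return {query: target for query, (target, _) in best.items()}

def best_bidirectional_hits(scores):
    """Pairs that are each other's best hit: candidate one-to-one homoeologs."""
    a_to_b = best_hits(scores, forward=True)
    b_to_a = best_hits(scores, forward=False)
    return sorted((a, b) for a, b in a_to_b.items() if b_to_a.get(b) == a)

print(best_bidirectional_hits(scores))
# [('A1', 'B1'), ('A2', 'B2')] -- A3 and B3 are not reciprocal best hits,
# so this potential homoeologous pair is missed by the BBH criterion.
```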
The evolutionary history of nearly all flowering plants includes a polyploidization event. Homologous genes resulting from allopolyploidy are commonly referred to as 'homoeologs', although this term has not always been used precisely or consistently in the literature. With several allopolyploid genome sequencing projects under way, there is a pressing need for computational methods for homoeology inference. Here we review the definition of homoeology in historical and modern contexts and propose a precise and testable definition highlighting the connection between homoeologs and orthologs. In the second part, we survey experimental and computational methods of homoeolog inference, considering the strengths and limitations of each approach. Establishing a precise and evolutionarily meaningful definition of homoeology is essential for understanding the evolutionary consequences of polyploidization.
741
Mechanical vibration bandgaps in surface-based lattices
Vibration can be a significant issue in precision engineering, contributing to measurement uncertainties and limiting manufacturing precision.When the amplitude is high enough, vibration can cause physical damage, especially when the frequency of the incident wave is at or close to the natural frequency of the mechanical system in question.Conventional practice is to design the mechanical system to have a natural frequency much greater than or lower than the frequencies of the input waves .Using the conventional practice, our previous work has shown that additively manufactured lattice structures can be used for vibration isolation in one degree of freedom by designing the lattice to have a resonant frequency lower than a particular frequency of interest .Further to this, Wang et al. showed how topology optimisation and density grading could be implemented with AM lattice structures to provide isolation in a selected frequency region .One of the drawbacks of the conventional vibration isolation practice is that it does not guarantee complete elimination of vibration in the frequency of interest.An alternative approach is to design structures that exhibit phononic bandgaps.AM gives us the freedom to design and manufacture these phononic bandgap structures and, more importantly, to tailor the structural parameters and to tune the properties for specific applications.A phononic bandgap is a range of frequencies in which the propagation of elastic waves is prohibited by Bragg scattering.The bandgap is caused by the destructive interference of reflected waves of certain frequencies as they propagate through a periodic medium .Bandgap structures have been reported for use in a range of applications.Recent examples relevant to the aerospace sector include the work of Ampatzidis et al. , who presented a bandgap structure to act as an acoustic isolator.Design parameters of bandgaps have been studied by Richards and Pines , who used the principles of stop-band/pass-band for the reduction of vibration in a mesh of mechanical gears.Sigmund and Jensus presented a design of a waveguide , Diaz et al. designed a bandgap structure using non-structural masses as design parameters to control features in the dispersion curves , Lucklum et al. presented additively manufactured lattice structures on the millimetre-scale, Kruisova et al. tested bandgaps in ceramic lattices, and Wormser et al. presented an approach for maximisation of bandgaps through gradient optimisation.In addition, Maurin et al. presented a statistical analysis of the issues associated with restricting the detection of bandgaps to the contour of the irreducible Brillouin zone instead of the full IBZ.They have reported that restricting the detection to only the contour of the IBZ provides accurate results when the lattice is of high structural symmetry.Recent studies that have investigated cellular structures for use as bandgap structures include the work of Ruzzene et al. , who studied grid cellular structures and presented a method for the guidance of waves, Abueidda et al. , showed that primitive cell, IWP and Neovious triply periodic minimal surface lattices can develop three-dimensional bandgaps and presented a method for controlling them, Matlack et al. , presented structures that provide vibration absorption at frequencies as low as 3 kHz–4 kHz using lattices of different stiffness , and Li et al. 
, examined the dispersion curves of two-dimensional phononic crystals using a finite element method.The mechanical properties of TPMS lattices can be modified by controlling their volume fraction .TPMS lattices have complex morphologies making their fabrication by conventional manufacturing methods challenging, if not impossible.AM enables the fabrication of TPMS lattices for a range of applications, but the literature to date has focussed mainly on their load-bearing capability .There exists a wide range of TPMS lattice cell types .In this paper, four types of TPMS structures are considered for the development of 1D phononic bandgaps under 15 kHz, which represents part of the acoustic frequency range.The nature of the bandgaps presented in this work is realised by the tessellations of the unit cells along one single direction which form a beam-like lattice, hence the name ‘1D’.However, the dispersion curves, from which the bandgaps will be extracted, rely on three degrees of freedom of the nodes associated with the 3D unit cell model; this ensures that the dispersion curves pick up the transverse waves as well as the longitudinal waves in the structure.The TPMS lattices examined in this paper are the network gyroid, network diamond, matrix gyroid and matrix diamond.These lattice types have proved to provide high resistance to compressive failure , have higher manufacturability than other strut-based lattices due to less stress concentration during AM , and provide high structural stiffness for use in different applications .To the authors’ knowledge the phononic behaviour of the considered lattices has not been studied before.The novelty of the presented manuscript lies in the discovery of 1D bandgaps in these surface-based structures, which has not been presented before, and in providing numerical results that can be used to design an AM lattice structure with a desirable bandgap.Many applications in different industries are expected to exploit the ability of TPMS structures to provide vibration bandgaps.For example, the transport sector could make use of TPMS lattice structures for sound absorption in vehicles, while benefitting from their inherently light-weight nature.Structural frames for precision machines would also benefit from TPMS lattices; they could be used to isolate environmental vibration within certain frequency ranges; for example, those associated with laboratory or workshop equipment.This paper is structured as follows: Section 2 provides the theoretical background of the finite element method and introduces the TPMS lattice unit cells used in the study.Section 3 presents the method of obtaining the dispersion curves associated with each of the TPMS lattices.Section 4 presents and discusses the dispersion curves of the TPMS lattices.The dependance of the frequency and bandwidth of the bandgaps on the cell size and volume fraction is presented in Sections 4.1 and 4.2, respectively, with the aim of providing a simple tool for designing bandgaps at desired frequencies.Prototype structures are fabricated with additive manufacturing to demonstrate the manufacturability of surface-based lattices; the results are reported on in Section 4.3.Conclusions are provided in Section 5.AM has enabled the design of structures with tailorable mechanical properties using lattice structures.The properties of a lattice are not solely dependent on its constituent material, but also on its cellular geometry, the connectivity of features, unit cell size, number of cell tessellations and volume 
fraction .The lattice unit cells used in this study are the network and matrix phases of the gyroid and diamond surface, as shown in Fig. 1.Network phase cells have one void region and one solid region, both of which retain their connectivity in every part of the structure.Matrix phase lattices have two non-connected void regions separated everywhere by a solid wall or sheet.In addition, matrix phase lattices are known to have higher specific stiffness than their network phase equivalents .The determination of phonon dispersion curves requires analysis of a single lattice unit cell.The unit cells are designed using software developed at University of Nottingham called the Functional Lattice Package .The volume fraction and size of the cells shown in Fig. 1 is 20% and 15 mm, respectively.These values are based on the properties of unit cells that have provided vibration isolation in previous work .The geometrical specifications of the unit cells are shown in Table 1.The minimum feature size of the matrix unit cells is the sheet thickness.For network type unit cells, the thickness differs across the unit cell.The parameter t for the network unit cells is, therefore, defined as the thickness in the slimmest regions.Design equations of gyroid and diamond TPMS can be found in the AM work of Maskery et al. and chemistry work of Gandy et al. , respectively.The finite element method used in this paper relies on Bloch theorem, which governs the displacement of the element nodes.Floquet boundary conditions are used, which simulate an infinite tessellation of the unit cell .The work uses 3D lattice models with three DOF at each node to capture all the possible modes of vibration.This technique is common in the analysis of 1D dispersion characterisation, including the prediction of bandgap formation in elastic mechanical structures .By solving Eq., frequency modes for each wave vector in the first BZ can be obtained.The mass and stiffness matrices of the unit cells are rearranged with the help of the nodes numbering obtained from a commercial finite element package.The mass and stiffness matrices are then arranged in the form shown in Eq.The generalised eigenvalue problem of Equation is constructed.The frequency eigenvalue problems are solved for 100 equally spaced wave numbers spanning the first BZ of the TPMS unit cells.All wavebands below 15 kHz in each lattice were included in the analyses.The properties of lattice structures that can be tuned to potentially induce a phononic bandgap include cell size, volume fraction and cell geometry.Here, the four unit cells identified in Section 2.1 will be analysed first, with the most promising candidate for bandgap development then being chosen for bandgap tuning.The characteristic wavebands for the initial settings of the chosen cell found under 15 kHz are examined under different volume fractions and cell sizes.The range of volume fractions used in this study extends from 20% to 40%, while the examined cell sizes are of 15 mm, 20 mm, 25 mm, 30 mm and 40 mm.The mass and stiffness matrices on which the phonon dispersion curves depend were found to have converged with respect to the mesh density.The bandgap dispersion curves for the network gyroid lattice, shown in Fig. 
3, show a total of four bandgaps in the sub-15 kHz region. The broadest is formed between the 6th and 7th wavebands, is 1047 Hz wide and starts from 7905 Hz. A bandgap of similar width spans 978 Hz from 11,349 Hz to 12,327 Hz and is formed by the 9th and 10th wavebands. A bandgap narrower than the previous two appears in the range of 9340 Hz to 9506 Hz, and another appears in the range of 10,134 Hz to 10,238 Hz. The scattering of mechanical waves in a structure relies on the impedance mismatch between two adjacent geometrical features. As shown in Figs. 3 and 4, the network gyroid lattice possesses phononic bandgaps below 15 kHz while the network diamond lattice does not. This can be explained by considering the differing internal geometries of the respective cells. As a wave travels from a thicker to a thinner solid region of the cell, or from the solid phase to the void phase, it is partially reflected, owing to the change in local impedance. This process is repeated for each reflected wave, giving rise to complex dispersion curves such as those in Figs. 3 and 4. The lowest-frequency bandgap is usually formed by one acoustic waveband and one optical waveband. Although Bragg bandgaps can also be formed by two optical wavebands, which is the case for all the bandgaps in this paper, it is impossible for a Bragg bandgap to be formed below the cut-off frequency of the acoustic wavebands. We compare the ability of the network diamond and the matrix diamond lattices to form bandgaps by examining the cut-off frequency of their acoustic wavebands. As can be seen in Figs. 4 and 6, respectively, the acoustic wavebands cut off at a higher frequency in the matrix diamond cell and at a much lower frequency in the network diamond cell. The network diamond lattice also showed a larger number of wavebands within the tested frequency region. However, similar to its matrix counterpart, the network diamond cell did not possess bandgaps within the examined frequency range. Similar behaviour is observed for the network gyroid and the matrix gyroid cells; the cut-off frequency of the acoustic wavebands of the network gyroid cell is around 7000 Hz, while the corresponding frequency in the matrix gyroid cell is around 9600 Hz. The matrix gyroid formed a bandgap spanning from 12,952 Hz to 13,220 Hz. This bandgap is higher in terms of starting frequency and narrower in terms of width than the lowest-frequency bandgap observed in the network gyroid cell. In addition, matrix type lattices have an almost constant wall thickness across the inner parts of the cell. This suggests that matrix cells have a reduced capacity to hinder wave propagation from one end of the cell to the other compared with network type lattices. This is because wave reflection, which is the mechanism by which Bragg-induced bandgaps are formed, is expected to be stronger when there is a large difference in densities, or a large difference in wall thickness. The dispersion curves of the matrix lattices support the claim presented by Kapfer et al. that matrix type lattices have higher stiffness than network type lattices. The examined lattices are of identical volume fraction and cell size and, therefore, identical mass. Thus, the natural frequency of a matrix type lattice is higher than that of its corresponding network counterpart. In wave reflection by Bragg scattering, the bandgap does not appear at frequencies lower than the natural frequency of the structure. Thus, the high natural frequency of the matrix gyroid lattice, as can be seen from Fig.
7, prohibits the opening of bandgaps at lower frequencies than the network gyroid lattice.This is seen in Fig. 3 and 5, where one bandgap appears in the matrix gyroid dispersion curves, while several appear within the same frequency range using the network gyroid lattice.The reader is referred elsewhere for more information on matrix and network type TPMS lattices.The network gyroid lattice represents a suitable candidate to examine the control of bandgaps, because we have established that it supports multiple bandgaps at a practical cell size and volume fraction, as demonstrated in Section 4.The absolute bandgaps frequencies arising from the network gyroid lattice with cell sizes of 15 mm, 20 mm, 25 mm, 30 mm and 40 mm, at constant volume fraction of 20%, are calculated.Fig. 8 shows the dependency of the absolute bandgaps frequencies on the cell size of Nylon-12.The bandgap with the largest bandwidth was seen for the 15 mm cell, where the bandgap spanned approximately 1048 Hz from 7905 Hz to 8953 Hz.Of the examined network gyroid cell sizes, the 40 mm lattice showed a bandgap of the lowest frequency.This bandgap is formed between wavebands 6 and 7.The starting frequency of this bandgap is around 60% lower than the corresponding band gap in the 15 mm cell.The phonon dispersion curves of these lattices are presented in Figs. 3 and 10.Fig. 11 shows the dependence of the bandgaps on the volume fraction of the 15 mm cell.The width of the bandgap between the 9th and the 10th wavebands was the largest at a volume fraction of 25% and spanned a frequency range of around 1900 Hz.Increasing the volume fraction above this value reduced the width of this bandgap.In addition, the starting frequencies of all bandgaps increased with the increase in volume fraction, except between wavebands 9 and 10, where the starting frequency showed a reduction of approximately 1% over that of 20% volume fraction network gyroid.The bandgap between the 8th and 9th wavebands disappeared when the volume fraction went from 20% to 25%, but it returned when the volume fraction was 30%, 35% and 40%.Similar behaviour is observed by the bandgap of wavebands 6 and 7; this one does not appear in the 35% and 40% volume fraction dispersion curves.Thus, the bandgap formed by the 9th and the 10th wavebands and the bandgap formed by the 7th and the 8th wavebands are the only bandgaps that sustained the variation of the volume fraction and the cell size.The network gyroid cell with 40% volume fraction shows a bandgap between wavebands 9 and 10 which appears at a starting frequency 45% greater than that of the 20% volume fraction cell.These results indicate a means to control the frequency and width of phononic bandgaps in lattice structures by controlling their volume fraction.The bandgap behaviour of AM surface-based lattice structures has not received much attention.Of relevance to our investigation is the work of Matlack et al. , who used internal resonators lattices, allowing the development of bandgaps with starting frequencies of 3000 Hz–4000 Hz.Our work shows that TPMS structures have the ability to open up bandgaps at similar starting frequencies with the potential to go even lower by choosing an appropriate cell size and volume fraction.Using multi-material unit cells can result in large differences in impedance and ultimately a wider bandgap than those reported in this work; Ampatzidis et al. 
presented a structure of Nylon-12 glued to a composite panel that provided a 1D bandgap.The normalised bandgap frequency of their structure was from 0.24 to 0.27.In comparison to the first bandgap of 20% volume fraction gyroid examined in this work, the structure of Ampatzidis et al. is higher by 160% in terms of the starting normalised bandgap frequency and wider by 50%.Lucklum et al. presented a bandgap of normalised frequency between 0.15 to 0.25.In this work, the normalised frequency of the first bandgap formed by the 20% gyroid unit cell is from 0.09 to 0.11.This bandgap is lower by 40% in terms of the starting frequency and narrower by 90% in terms of width than the bandgap presented by Lucklum et al.Network gyroid prototype samples were fabricated on a selective laser sintering system using a 21 W laser of a scan speed and hatch spacing of 2500 mm·s−1 and 0.25 mm, respectively.The nominal spot size of the laser is 0.3 mm and the layer thickness is 0.1 mm.Nylon-12 material is used to fill a 1320 mm × 1067 mm × 2204 mm powder bed at a temperature of 173 °C.The measured values are measured using a vernier caliper for length and a mass balance for mass.Each measurement was repeated four times and the standard error of the measurements are shown alongside the mean properties in Table 3.The measured volume fraction is calculated as the ratio between the measured mass and the mass of a solid structure of dimensions identical to the measured lattice assuming a 950 kg˖m−3 density .Future advances in the accuracy and minimum feature sizes of SLS systems are expected to reduce the gap between the nominal and fabricated lattices.These improvements may also push the theoretical cell size limit for gyroid lattices below the fabrication limits which are set in Fig. 
13, for example, below 7.8 mm for 20% volume fraction gyroid cells.This will provide an opportunity to open bandgaps at higher frequencies by manufacturing unit cells of lower cell sizes.We demonstrated that TPMS lattice structures can induce mechanical bandgap behaviour which can be tailored for vibration isolation purposes.The novelty of the presented work lies in predicting the 1D bandgaps of beam-like surface-based lattices, which have not been studied before.Our analysis showed that:At reasonable cell sizes and volume fractions for AM capabilities, the network gyroid and the matrix gyroid lattices have bandgaps, while other examined lattice types do not.Changing the lattice cell size and volume fraction of surface-based lattices can alter the width of a pre-existing bandgap, the starting frequency, or both.In addition, the potential to open up bandgaps that did not exist previously between two wavebands was demonstrated.The network gyroid and the matrix gyroid TPMS lattices have several bandgaps under 15 kHz when their volume fraction is 20% and cell size is 15 mm.Bandgaps at frequency regions as low as 3000 Hz are demonstrated to be achievable using a cell size of 40 mm and 20% volume fraction.Fabrication of prototype lattice structures was done using SLS system which fabricated 4 × 1 × 1 lattices of cell sizes of 15 mm, 25 mm, and 40 mm.The SLS system fabricated the lattices with a maximum deviation of 1.8% and 10% from the nominal cell sizes and volume fractions, respectively.The measured minimum feature t showed a maximum difference of 3.2% from the nominal value.All the differences between measured and nominal values are below the laser spot size in SLS which is 0.3 mm.Introduced here are new design factors for tuning bandgaps of phononic structures which are realised by the nature of TPMS lattices and the manufacturing freedom of AM.The designer of lattice structures can now use these results to design and fabricate structures with AM that exhibit inherent vibrational isolation properties for the use in different engineering applications.More work, including the control of more TPMS parameters for tuning mechanical bandgaps, will follow.Additional work for opening bandgaps in more propagation directions and at lower frequencies using AM lattice structures is also in progress.
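As a concrete illustration of the dispersion calculation used throughout this work (assembling unit-cell mass and stiffness matrices, applying Bloch/Floquet boundary conditions, and solving a generalised eigenvalue problem for each wave number in the first Brillouin zone), the minimal sketch below runs the same workflow on a one-dimensional two-mass spring chain instead of the 3D TPMS finite element models. The masses, spring stiffness, cell size and the use of SciPy are placeholder assumptions of this sketch, not values or tools taken from the study.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical unit-cell parameters for a 1D diatomic spring-mass chain
# (a stand-in for the reduced K and M matrices of a lattice unit cell).
m1, m2 = 0.002, 0.005      # nodal masses [kg], arbitrary
k = 5.0e6                  # spring stiffness between neighbouring nodes [N/m], arbitrary
a = 0.015                  # unit-cell size, here 15 mm

M = np.diag([m1, m2])      # unit-cell mass matrix

def bloch_stiffness(q):
    """Unit-cell stiffness matrix with Floquet (Bloch) boundary conditions:
    neighbouring cells enter only through the phase factor exp(-i*q*a)."""
    phase = np.exp(-1j * q * a)
    return np.array([[2.0 * k, -k * (1.0 + phase)],
                     [-k * (1.0 + np.conj(phase)), 2.0 * k]])

# 100 equally spaced wave numbers spanning the first Brillouin zone [0, pi/a]
qs = np.linspace(0.0, np.pi / a, 100)
bands = np.array([
    np.sqrt(np.abs(eigh(bloch_stiffness(q), M, eigvals_only=True))) / (2.0 * np.pi)
    for q in qs
])  # shape (100, 2): frequency [Hz] of each waveband at each wave number

# A Bragg-type bandgap exists where the lower band's maximum stays below
# the upper band's minimum across the whole Brillouin zone.
gap_lo, gap_hi = bands[:, 0].max(), bands[:, 1].min()
if gap_hi > gap_lo:
    print(f"Bandgap from {gap_lo:.0f} Hz to {gap_hi:.0f} Hz "
          f"(width {gap_hi - gap_lo:.0f} Hz)")
else:
    print("No complete bandgap for these parameters")
```

Replacing the two-by-two matrices with the assembled K(q) and M of a full unit-cell mesh, and sweeping the wave vector along the edges of the 3D Brillouin zone, follows the same pattern as the dispersion-curve procedure described in the methods.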
In this paper, the phonon dispersion curves of several surface-based lattices are examined, and their energy transmission spectra, along with the corresponding bandgaps are identified. We demonstrate that these bandgaps may be controlled, or tuned, through the choice of cell type, cell size and volume fraction. Our results include two findings of high relevance to the designers of lattice structures: (i) network and matrix phase gyroid lattice structures develop bandgaps below 15 kHz while network diamond and matrix diamond lattices do not, and (ii) the bandwidth of a bandgap in the network phase gyroid lattice can be tuned by adjusting its volume fraction and cell size.
742
Data on study of hematite nanoparticles obtained from Iron(III) oxide by the Pechini method
The data contain a description of the synthesis of hematite nanoparticles via the Pechini method. The synthesis procedure is shown in a flowchart. The precursor material, an industrial waste, was obtained from the steel industry. Characterization of the samples was carried out using various analytical techniques. Thermal analysis methods were used to study the endothermic as well as the exothermic processes, see Fig. 2. Additionally, structural characterization was carried out by XRD. FTIR spectra of the samples are presented in Fig. 4. A more detailed analysis was carried out using a deconvolution of the FTIR spectra into bands at wavenumbers lower than 800 cm−1. All corresponding tables and figures are provided with this article. This article reports the detailed data analysis of α-Fe2O3 nanoparticles. A ferric oxide precursor from the steel industry was used to synthesize hematite nanoparticles. The reagents were obtained from Mallinckrodt Pharmaceuticals and Merck and used as received. For characterization of the samples, thermal analysis was performed using a TA Instruments analyzer under N2 atmosphere, X-ray diffraction patterns were obtained using a PANalytical X'Pert Pro diffractometer, and an infrared spectrophotometer was used to obtain the IR spectra of the samples. Fig. 1 shows the flowchart used for the synthesis of hematite nanoparticles via the Pechini method. The ferric oxide precursor is a byproduct of the steel industry. The procedure consists of the dissolution of the precursor material in an acidic medium, brought to a basic pH, under stirring at 140 °C. In this procedure, the yield was around 40% for each sample. Thermal analysis was carried out from room temperature to 1200 °C in order to determine the phase formation and decomposition occurring during heat treatment of the synthesized samples. Fig. 2 displays the thermal analysis for the pre-calcined samples. In the TG thermogram, three mass changes were observed and associated with the three anomalies shown in the DTA thermogram. In the TG curve, a first weight loss step occurred gradually between room temperature and 208 °C. The mass loss was ∼ 4.1%, which is attributed to the elimination of water present on the α-Fe2O3 nanoparticles. Additionally, the DTA analysis revealed an endothermic peak at 65 °C, confirming the water elimination. A second step corresponds to a mass loss of ∼ 22.18% occurring at around 208–400 °C, with an exothermic peak at 368 °C in the DTA curve, which is associated with volatile compounds, oxidation of the organic phase present in the samples and crystallization of the oxide. In addition, the DTA curve shows a third peak at around 820 °C, which is associated with a small mass gain arising from an oxidation possibly related to the loss of magnetism. Based on the TG/DTA analysis of the ceramic powders obtained by the Pechini method, the most suitable calcination temperature for the material was determined to be higher than 450 °C. The obtained solids, precalcined at 300 °C, were thermally treated based on the results of Fig. 2. The temperatures used for the thermal treatment were 500 °C and 1200 °C, for 4 h. Fig.
3 shows the XRD patterns of the starting precursor material and the synthesized samples. For the starting precursor material and the sample precalcined at 300 °C, a mixture of two phases was observed. In both samples, magnetite and hematite phases coexist. The intense peaks at 2θ values of 18.1°, 30.1°, 35.12°, 37.22°, 43.1°, 53.23°, and 57.64°, corresponding to crystal planes of the cubic structure, are characteristic of magnetite. By treating the sample thermally at 500 °C, a greater crystallization is achieved. The XRD pattern matches well with PDF no. 75–469 of the rhombohedral structure of pure hematite, as indicated by the DTA analysis. This phase is stable over a wide range of temperatures, but its magnetic properties decrease with increasing temperature. The average crystallite size was determined using the Debye-Scherrer formula (see below) and is reported in Table 1. Fig. 4 shows the FTIR spectra for α-Fe2O3 samples synthesized by the Pechini method at different temperatures. For the precalcined sample, the typical O–H bands at ∼ 3410 cm−1 and ∼ 1606 cm−1 indicate that not all water present in the sample was eliminated during resin formation, in agreement with the thermal studies. These bands appear strongly in the spectrum at 300 °C, but their % transmission decreases as the temperature increases. This shows that the presence of OH groups in the samples decreases as the temperature increases, disappearing in the sample treated at 500 °C. The wide band around 1370 cm−1 can be associated with the vibrational band of residual C–H groups. Apart from the typical absorption bands around 3400 cm−1 and 1600 cm−1, due to stretching and bending vibrational modes of hydroxyl groups, a series of absorption bands is present in the range of 800 to 400 cm−1. In this region, the Fe–O vibrational bands of α-Fe2O3 at around 627, 580 and 485 cm−1 are present in all samples. The vibrational band at 627 cm−1 is due to longitudinal absorptions, while the bands at 580 cm−1 and 485 cm−1 are due to transverse absorptions of the α-hematite structure. These bands are present in the FTIR spectra of Fig. 4. A detailed analysis was carried out in the region of interest using a deconvolution of the FTIR spectra, see Fig. 5. For the samples calcined at 500 °C and 1200 °C, the organic species were removed by the thermal treatments, while for the sample precalcined at 300 °C, the organic phase was not completely removed. Consequently, the spectrum of the precalcined sample shows additional vibrational bands compared with the samples containing α-Fe2O3 as the predominant phase.
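For reference, the Scherrer relation behind the crystallite-size estimates in Table 1 is written out below; the shape factor and Cu-Kα wavelength given in the comments are commonly assumed values, not parameters stated in this article.

```latex
% Scherrer estimate of the mean crystallite size D from XRD peak broadening.
% K      : dimensionless shape factor (often taken as ~0.9, an assumption here)
% \lambda: X-ray wavelength (e.g. 1.5406 \AA for Cu-K\alpha, an assumption here)
% \beta  : full width at half maximum of the diffraction peak, in radians
% \theta : Bragg angle of the peak
D = \frac{K \, \lambda}{\beta \, \cos\theta}
```

Broader peaks (larger β) therefore correspond to smaller crystallites.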
This article presents the data on α-Fe2O3 nanoparticles synthesized via Pechini method using iron(III) oxide precursor from steel industry. It is important to highlight the added value that is given to an industrial waste. The samples were characterized by thermal analysis (DTA, TG), X-ray diffraction (XRD), and Fourier transform infrared spectroscopy (FTIR). The TG showed three mass changes, whereas DTA resulted in three anomalies. X-ray diffraction pattern of the samples disclosed rhombohedral structure characteristic of the nanocrystalline α-Fe2O3 phase. The crystallite size was estimated for each thermal treatment. Fourier transform infrared spectroscopy confirms the phase purity of prepared nanoparticles. A detailed study on the local structure of the samples was carry out in the region of 800 and 400 cm−1, where the associated bands of Fe–O bonds are presents. The data have not been reported nor discussed for now.
743
Deglacial variability in Okhotsk Sea Intermediate Water ventilation and biogeochemistry: Implications for North Pacific nutrient supply and productivity
Today, no new deepwater is formed in the subarctic North Pacific because the surface water masses are not dense enough to sink to significant water depths and initiate convection, owing to low surface salinities and an associated strong halocline. Accordingly, the deep North Pacific is only weakly ventilated and carries high concentrations of macronutrients, in particular nitrate and silicate. In contrast, the intermediate-depth level is occupied by a relatively fresh and oxygen-enriched layer of North Pacific Intermediate Water. NPIW is today mainly characterized by a ventilated precursor water mass, the Okhotsk Sea Intermediate Water (OSIW). Modern NPIW and OSIW are highly variable in their biogeochemical characteristics, even on relatively short instrumental timescales, and keep mid-depth waters in the North Pacific moderately oxygenated. While the North Pacific overall is a large modern oceanic sink for atmospheric CO2, surface utilization of nutrients by primary producers and export production of carbon remain incomplete in the Western Subarctic Pacific (WSAP) Gyre, mainly due to the rapid seasonal depletion of iron as a micronutrient and of silicate (Si(OH)4; Harrison et al., 2004). On glacial-interglacial timescales the export production and degree of surface nutrient utilization, as evidenced by studies of δ15N, are anti-correlated, implying more effective use of available nutrients during glacial periods. This more effective utilization allowed a lower primary production to export carbon from the surface into the deep ocean more efficiently. Thus, indicators for export production varied in phase with atmospheric CO2 concentrations for at least the last 800 ka. However, on shorter, millennial timescales this relatively straightforward relation does not necessarily hold. During the last glacial termination, surface nutrient utilization and export production became partially decoupled in the WSAP. During the deglacial warm Bølling-Allerød and Preboreal phases, widespread productivity peaks were accompanied by more effective nutrient utilization, contrary to glacial-interglacial patterns. Such millennial-scale structures call for additional mechanisms that regulate the nutrient dynamics in the mixed layer and their biological utilization. Potential candidates are changes in the ratio of macronutrients relative to bio-available iron and changes in the physical stratification of the mixed layer. In line with these deglacial changes in upper ocean processes, the North Pacific underwent drastic and rapid changes in intermediate to deep water circulation and ventilation, which are also thought to be closely linked with changes in nutrient utilization and biogeochemistry. During the cold Heinrich Stadial 1 (HS-1), convection and the increased formation of new intermediate–deep water masses were proposed for the North Pacific, although newer evidence alternatively suggested that a potential Pacific overturning cell did not extend beyond 1400–2000 m water depth and was initiated through increased mid-depth ventilation in the marginal subarctic seas. This early deglacial episode is sharply contrasted by subsequent oxygen declines in intermediate water depths during the B-A and the PB, leading to widespread anoxia along the North American Pacific margins, in the Bering Sea, and on the western Pacific margin off Japan. In the pelagic abyssal North Pacific, no Oxygen Minimum Zones developed during this time. To explain the unusually severe and widespread O2-depletion of intermediate waters during the Bølling-Allerød and Preboreal,
increases in export production have been invoked.Alternatively, reduced or ceased ventilation of NPIW could have decreased O2 concentrations across the North Pacific., or a combination of these processes was responsible for the decline in mid-depth O2 concentrations by increased respiration of organic matter in initially well-ventilated NPIW, along its pathway in the Pacific.Previous works from the Okhotsk Sea and Bering Sea have shown that mixed layer stratification, sea ice action and export production varied on millennial timescales during the deglaciation, implying a combination of these potential factors, in addition to changes of freshwater runoff and the flooding of continental shelf areas by sea level rise.However, information used to infer changes often stemmed from single site locations, sometimes relatively distant to source regions of water formation processes or were affected by insufficient temporal resolution.In this study, we analyzed a suite of high-resolution sediment cores retrieved directly downstream of the main ventilation region of OSIW, on the eastern continental margin of Sakhalin island in the Okhotsk Sea.We use a multi-proxy approach to discuss changes in ventilation and export production within the Okhotsk Sea as the most prominent modern NPIW ventilation source region.We put these outcomes into context with XRF-scanning based qualitative assessments of terrigenous supply of Fe as essential micro-nutrient and compare our results with earlier, published data that showed millennial-scale variations in ventilation and nutrient utilization during the last glacial termination.We constrain potential causes for the observed rapid changes in OSIW/NPIW ventilation and productivity patterns, and assess their potential consequences for open-ocean WSAP nutrient supply and biogeochemistry.North Pacific Intermediate Water is defined as a salinity and potential vorticity minimum and spreads between the density lines of c. 
σΘ 26.6–27.2 east- and southward across the north Pacific down to a latitude of about 20° N.Because water masses with these densities are at present not outcropping on the surface in the open North Pacific, NPIW is ventilated by one of its main precursor water mass, the Okhotsk Sea Intermediate Water, through largely diapycnal mixing processes.The principal origin for high O2 of OSIW is Dense Shelf Water.At present, relatively warm WSAP Water is transported in the East Kamchatka Current southward as part of the larger Western Subarctic Pacific Gyre and enters the Okhotsk Sea mainly via Kruzensthern Strait and Bussol’ Strait.WSAP Water flows within the main cyclonic Okhotsk Gyre to the shallow northeastern shelf areas, where DSW is formed during the sea ice season north of Sakhalin by brine rejection in large persistently recurring polynias.DSW is characterized by relatively low salinity, very low temperatures and high O2 content, and is subsequently entrained into deeper water layers along the northern Sakhalin margin by vertical tidal and diapycnal mixing while being transported by the East Sakhalin Current."It finally leaves the Okhotsk Sea as OSIW through deep passages between the Kuril Islands and is transported with the Oyashio Western Boundary Current into the Mixed Water Region and the Western Pacific Subarctic Gyre circulation.This well ventilated, cold, low salinity OSIW and the subsequent export of this water mass into the open North Pacific is critical for maintaining sufficiently high O2 levels in the intermediate water layer of the extra-tropical North Pacific.Below the layer of OSIW, the Okhotsk Sea has a weakly developed OMZ today along the Sakhalin margin between ca. 1000 and 1400 m.Not only does OSIW ventilate the intermediate-depth North Pacific, it also transports high amounts of both dissolved and particulate terrigenous matter along the Sakhalin margin into the open North Pacific within a turbid layer that bears extremely high concentrations of organic carbon, lithogenic particles, silicate and other suspended material.This material is entrained by vigorous tidal mixing on the northeastern shallow continental shelf into the turbid water layer and transported along the Sakhalin margin on the density surfaces of OSIW for long distances.The remnants of this entrainment are visible in the pelagic subarctic North Pacific as intermediate maxima in both Dissolved and Particulate Organic Carbon.In addition, a subsurface to intermediate-depth concentration maximum has been identified in both dissolved and particulate iron in the North Pacific.Its origin was tracked to the Okhotsk Sea, where Fe is entrained both from surface sediments by tidal currents and surface river runoff during intense mixing processes on the shallow northern shelf areas into DSW/OSIW and is transported with OSIW into the open North Pacific.The amount of Fe flux to the North Pacific surface water by OSIW and NPIW today is substantial.Results from time-series and modeling studies suggest that it is comparable to the modern Fe flux derived from long-range atmospheric dust transport, which has been conventionally regarded as the major delivery mechanism of Fe to the surface North Pacific to relieve micronutrient limitation.Thus, the Okhotsk Sea also acts as a modern potential source region for both micronutrients, organic matter and macronutrients to distal pelagic regions in the western North Pacific.During wintertime wind-induced deep mixing both soluble and particulate Fe are transported into the 
subsurface and surface water layer, where they become bio-available and induce additional wintertime extensive phytoplankton blooms in this Fe-micronutrient-limited High Nutrient - Low Chlorophyll region, effectively replenishing depleted stocks of atmospherically-derived Fe.We use three sediment cores in this study: gravity core LV29-79-3, gravity core LV28-4-4 were recovered onthe northeast Sakhalin continental margin from undisturbed, continuous hemipelagic sediment sequences in a number of expeditions with R/V Akademik M. A. Lavrentiev and R/V Sonne carried out within the framework of the bilateral Russian–German KOMEX projects between 1998 and 2004.As the cores used here were retrieved from a principally dynamic sedimentary environment with relatively active bottom currents, significant care was taken to obtain undisturbed cores from continental margin sites along Sakhalin that were free of both underlying large-scale fault features, as well as small-scale sedimentary re-working or re-distribution events.Extensive use was made of existing Russian bathymetric and seismic survey data, complemented by the use of high-resolution shallow sub-bottom profiling systems on expeditions LV29 and SO178 to exclude sites probably affected by sediment re-distribution.In the case of the externally mounted sub-bottom profiling system SES2000DS during LV29, e.g. an average penetration depth of 20–30 m was achieved with a resolution of 25 cm or better, depending on pulse length and frequency.With a total of 860 km of profiles obtained, core 79 was retrieved from a facies with parallel, undisturbed high-amplitude reflectors with good continuity.In the case of SO178, the shipboard Atlas Parasound sub-bottom profiling and SIMRAD 120 multibeam echosounder systems were used to search for and select sites for coring operations, in the course of which a total of more than 2000 nautical miles of bathymetric profiles were obtained.Like in previous expeditions, core 13 was retrieved from a flat, protected site with no indications of sediment re-working or disturbances.Details for survey lines shot across sites were given in the respective cruise reports.All cores feature similar lithofacies, consisting mainly of silty clays with sand and occasional larger dropstones derived from sea ice-transported terrigenous matter.Cores were sampled with varying resolution between 2 and 20 cm, depending on the respective sample protocols for each expedition.In some cases such changing sampling protocols over the course of the years led to different sample resolutions depending on which proxies were used.Age control was achieved through radiocarbon dating of planktic foraminifera, and benthic fauna in a few cases if the available samples did not yield enough foraminifera.For radiocarbon analyses N. pachyderma and G. bulloides were picked from the 150–250 μm size fraction.Most radiometric dates were measured in the Leibniz Laboratory for Radiometric Dating and Isotope Research, Kiel according to established standard workflow protocols.Dating of two samples at a later stage was carried out by the W. M. 
Keck Carbon Cycle Accelerator Mass Spectrometry Laboratory at University of California, Irvine according to established protocols.Conventional 14C ages are reported according to Stuiver and Polach.To better assess changes in the ventilation of intermediate water masses we supplement our data with a set of deepwater ventilation ages derived from published data and measurements of planktic foraminifera and benthic organisms from the same core depths.We use raw Benthic-Planktic ages, expressed in 14C years.This technique, while not without errors e.g. due to changing surface reservoir age changes has nonetheless been successfully used in a number of studies in this region before.We do not use the alternative projection age method, as one of the underlying assumptions of this method, is the single-source origin of the originating water mass may be violated in this region of active water formation.The data used and their respective sources are listed in Supplementary Tables S2 and S3.To deduce changes in the ventilation of OSIW we use stable isotopes of benthic foraminifera.The δ13C of various epibenthic Cibicidoides species is widely used as reliable and well understood proxy for the δ13C ∑CO2 and thus ventilation properties of bottom waters.These species have an epibenthic to slightly elevated epibenthic lifestyle.In addition, they exhibit relatively low tolerances against longer hypoxic periods in their environment and are often absent in OMZs in the Okhotsk Sea.In the Okhotsk Sea, mainly two cibicid species have been found, C. mundulus and C. lobatulus.In this paper, we only use the former due to its more regular abundance in samples from all three cores and to obtain species-specific time series.This species has been shown to reliably record ambient average bottom water δ13C of the dissolved inorganic carbon in the Pacific region, even under high organic matter flux and deposition of phytodetritus layers on the sediment surface.In the Okhotsk Sea, subsurface and deeper water mass δ13CDIC is roughly linearly correlated with water column PO4 and O2 content, and thus allows using δ13CCib data as proxy for ventilation of OSIW.In addition, we analyzed samples of the species Uvigerina peregrina as qualitative indicator for sediment oxygenation and changes in pore water O2 concentrations as a function of organic carbon flux.Sediment samples were weighed, freeze-dried and subsequently washed over a 63 μm screen.Dried samples were sieved into 63–150, 150–250, 500–1000, and >1000 μm fractions and weighed.Specimen for isotope analyses were picked from the 250–500 μm fraction, between one and five tests were used for each depth interval.All specimens were inspected for preservation; only well-preserved tests with translucent and non-corroded walls were used.Measured foraminifera were clean, did not show signs of re-crystallisation or overgrowths and had lustrous, unpitted test surfaces with open and original pores, as checked under light and scanning electron microscopy on selected samples.Foraminiferal abundances were principally low in all cores.In some samples, we were not able to find the desired species.Some clearly defined, limited intervals of the cores had indications of carbonate concretions and diagenesis detected in the shipboard sampling and description procedures.We avoided using samples from these intervals.All stable isotope analyses were carried out in the Paleoceanography Stable Isotope Laboratory at GEOMAR, Kiel with a Thermo Finnigan MAT 252 mass spectrometer, coupled online to an 
automated Kiel II AUTO CARBO carbonate preparation device. Long-term analytical precision was better than ±0.08‰ for δ18O and ±0.05‰ for δ13C. Calibration was achieved via an in-house Solnhofen limestone standard and National Institute of Standards NBS-19. Results are reported in the δ notation as per mille with reference to Vienna Pee Dee Belemnite. Productivity proxy analyses followed established procedures in the Sedimentology/Inorganic Geochemistry Laboratory at GEOMAR. All methods have been used successfully on sediments from the Okhotsk Sea. Chlorins were extracted with acetone under sonication and subsequent centrifugation in three consecutive steps; samples were ice-cooled after each extraction. Pigments were acidified with HCl to transform chlorophyll-a into phaeopigments. The sediment extracts were measured with a Turner TD-700 fluorometer immediately after the third extraction under low light conditions to hinder decomposition. Chlorophyll-a, acidified with 2 ml HCl, was used as the standard. To check extractions and instrument drift, an internal standard sample was measured after every dozen measurements. Precision of the method is about ±2% and all chlorin concentrations are reported in ng/g. Measurements of biogenic opal concentrations followed the automated leaching method, in which opal is extracted from the previously freeze-dried, homogenized bulk sediment with NaOH at about 85 °C over c. 45 min. The leaching solution is continuously analyzed for dissolved silicon by molybdate-blue spectrophotometry, and a mineral correction was applied. The long-term accuracy of the method is ±0.5 wt%, deduced from replicate and in-house standard measurements. A CARLO ERBA CNS analyzer was used for measuring the carbon and nitrogen content of sediment core samples. The total carbon (TC) content was measured on bulk sediment previously homogenized by milling, while the Total Organic Carbon (TOC) content was measured on bulk sediment samples previously decalcified with 0.25 M HCl. The CaCO3 content was calculated by subtracting the TOC from the TC value, using the formula CaCO3 = 8.333 ⋅ (TC − TOC). Long-term analytical precision is around 2%. For chlorophycean freshwater algae counts, 32 samples were analyzed for Pediastrum spp. and Botryococcus cf.
braunii. Bulk sediment samples of one to two grams were taken at 5–20 cm intervals from core 4. Preparation included 10% HCl treatment to dissolve carbonates, sieving with deionized water through a 6 μm mesh, washing with 40% cold hydrofluoric acid and H2O, hot 10% KOH treatment to remove humic acids, and acetolysis treatment with an acetic anhydride mix and concentrated sulphuric acid to remove cellulosic matter. The sediment was washed in glacial acetic acid and subsequently rinsed with H2O. Samples were counted under a Zeiss Axiophot light microscope with phase contrast at 400× magnification. Entire slides were counted to avoid effects of uneven grain distribution across the slide, and between 229 and 529 pollen grains were counted in each sample. For obtaining algae concentrations, the freeze-dried sediments were spiked with Lycopodium clavatum tablets and the calculation followed: C = (Xadded / Xcounted × k) / m, where Xadded is the number of Lycopodium clavatum spores added to the sediment, Xcounted the number of added Lycopodium clavatum spores counted, k the number of counted palynomorphs in the sample, and m the weight of freeze-dried sediment in grams. We used XRF scanning to determine bulk sedimentary geochemistry with high resolution. Cores were measured on the CORTEX and AVAATECH XRF scanners in the IODP Bremen Core Repository at 1 cm scanning steps. Details of the setup and analytical protocols of both scanners have been described previously. On the CORTEX system a single run at 10 kV was used for cores 4 and 79; the AVAATECH scanner was used for core 13 with dual runs at 10 kV and 50 kV. We use data from XRF scans as qualitative indicators of relative concentrations in the form of count ratios. However, quantification and inter-calibration of XRF relative scanning intensities was obtained by measuring a set of representative discrete sediment core samples, taken at count minima and maxima from core 4, on an X'Unique XRF spectrometer with Rh tube housed at GEOMAR. All reported elements were inspected to check whether they followed a linear regression between discretely sampled and XRF-scanned values; all showed excellent correlation, with R2 values between 0.91 and 0.97. We apply XRF-scanning-based iron counts normalized to potassium to qualitatively determine the changing amount of sedimentary iron within the terrigenous sediment fraction. We use potassium instead of other commonly employed elements for normalizing ratios, as it is reliably recorded by both scanners. More importantly, potassium is as well suited as the more frequently used aluminium for reflecting background terrigenous sedimentation, and is least influenced by differential transport processes. In comparison to potassium, titanium, an element frequently used for normalization in terrigenous pelagic settings, may not change fundamentally in its abundance between glacial and interglacial conditions, but is influenced by preferential transport processes, in our case sea ice vs.
river transport, which we think play fundamental roles in the distribution of the lithogenic fraction on the Sakhalin margin.All ages are given in calibrated years before present with reference to 1950 CE as either “cal.yr BP” or “ka”.Ice core record timescales originally referenced to 2000 CE or “b2k” were corrected accordingly.Age models for the three cores are based on AMS 14C dates on planktic foraminifera and in two cases, where abundance of planktic foraminifera was too low to permit dating, on benthic mollusk shells.We used a regional reservoir age correction value of 500 yr ± 50 yr, in line with published correction values for the Okhotsk Sea and earlier works.While ΔR values used here potentially changed over the last 18 kyr, recent studies in various locations of the North Pacific, the Okhotsk and Bering Sea have provided substantial evidence that variations were of small enough magnitude to justify keeping values constant over the entire age scale reported here.In the lower part of core 4 we used two benthic ages from mollusks.The benthic 14C age in the lowermost part of the core that we took to supplement the benthic-planktic ventilation age differences was not used in the construction of the age model.We set an additional offset for the ventilation age correction for benthic values of 360 14C years, based on our own measured benthic-planktic age differences from the same depth interval in nearby core 13 and published data.In the case of core 79 we had to use one additional benthic age control point, this value was corrected by an additional 500 14C years based on published values for the same time interval from corresponding water depths and nearby cores."We checked potential age-depth relationships by comparing the chlorin records of the cores on their respective independent depth scales against each other, constrained by each cores' individual AMS 14C age tie points.We assumed that major changes in surface productivity patterns at almost identical locations should be broadly similar in timing to each other within the overall error of the individual age models.In the case of core 13, we were thus able to transfer a set of 14C ages measured in core 79 to core 13, in a mode similar to earlier stratigraphic correlations from this region.We then used this set of newly transferred ages together with original 14C dates used in an earlier study to establish a revised age model.For core 4, no correction of the initial independent age model was deemed necessary.We used the program CALIB 6.1.1 and Calib 7.0.1 MARINE 13 calibration curves for initial age determinations.To establish final age-depth relationships between our age control points we used the routine CLAMS written for the software package R as a compromise between manually fitting regressions between age control points and more sophisticated, statistically superior age-depth modeling routines.The final best-fit runs were performed in CLAMS with 10,000 iterations using a smooth spline function for matching all age control points in their respective 2 σ calibrated age distribution ranges.Alternative control runs in CLAMS with linear interpolations and higher order polynomial fits displayed only minor offsets within the principal error of the age control points, but the latter yielded higher regression residuals and were thus discarded.Due to the nature of CLAMS and the chosen fitting procedure, the age-depth relationship is based on the entire core and thus reported here; however, in this study we only focus on the deglacial 
and early Holocene time.In the case of core 13, our new age model differs slightly from the one reported earlier that encompassed only the deglacial part of the core, as that earlier model was based on linear interpolation between single age control points and less AMS 14C dates available.However, in general, age deviations are less than a few hundred years across the deglacial section discussed here.According to the age models, average sedimentation rates in the studied time interval vary from 15 to 75 cm/ka, from 58 to 250 cm/ka, and from 88 to 300 cm/ka, thus yielding high, centennial resolution for most proxy series over the deglacial interval.Both cores 4 and 79 show significant variations in their epibenthic δ13CCib signatures that follow a similar trend, albeit with smaller-scale differences between the two records.Our southern core 4 records very negative minima of −1.5 and −1.2‰, respectively, during the onsets of the B-A and PB warm phases, whereas its other data fall nearly in line with the records of the northerly core 79 and corresponds to published values δ13CCib of the closely related species C. lobatulus in core 13.The observed amplitude of stadial-interstadial δ13CCib variations is higher than 0.5‰ for both transitions, significantly exceeding the glacial-interglacial carbon isotope shift of about 0.3‰.As modern values for δ13CDIC in the OS commonly range between −0.3 and −0.5‰, we note that deglacial δ13CCib values never reached modern values during the glacial termination after observed ventilation maxima during HS-1, suggesting a deglacial OSIW on average less oxygenated than today, while apparently not reaching the extreme oxygen depletions in other regions.The endobenthic δ13CUvi curves of both northerly cores 13 and 79 resemble the epibenthic values in their patterns, though amplitude variations are more pronounced between interstadial low δ13CUvi and high δ13CUvi data, with average differences of 0.7‰ and more.Absolute minima are reached during the PB and middle to late Allerød, though this assessment might be biased due to the absence of Uvigerina spp. 
from the benthic species assemblage in the peak Bølling interval, indicating potentially even lower O2 bottom and pore water concentrations.The compiled OSIW ventilation ages help to establish connections between OSIW formation and concurrent deglacial southerly NPIW dynamics, as evidenced in sites from the NE Japanese margin.Both the upper and lower OSIW B-P ages show remarkably little variation of less than 200–300 yr over the deglaciation into the Holocene.Absolute B-P ages are comparable to, or generally slightly lower than modern values in the North Pacific.In fact, if error values are included, the OSIW B-P ages remain nearly unchanged between the cold HS-1, warm B-A and the Holocene.In addition, during the Bølling-Allerød warm phase, the internal water mass structure between the upper and lowermost cores stays relatively invariant, indicating no large changes occurred in the physical structure and vertical stratification.This is contrasted by NE Japan margin B-P ages that show clear variations in B-P ages in the upper NPIW.Values similar to the upper OSIW during the cold HS-1 are followed by drastic and rapid drops to much lower values of 1200–1400 yr during the B-A and a recovery to better ventilated NPIW waters in the subsequent YD and PB.During both the B-A and the PB the B-P ages on the NE Japan margin appear older than in the Okhotsk Sea at comparable water depths.Both total organic carbon and chlorin concentrations follow a pattern evident in all cores, with maxima during the mid-late Allerød and peak values during the PB, especially in the more northerly cores 13 and 79.Deep Core 79 records maximal chlorin values during the PB and the highest amplitude variations.Notably, southern core 4 does not record a distinct productivity peak in chlorin during the B-A, in contrast to northern cores and other sites further downstream.CaCO3 concentrations in cores 4 and 79 follow the pattern recorded in the other bulk productivity proxies, while biogenic opal concentrations show a late increase from uniformly low values of about 5% to steadily increasing early Holocene maxima of about 25%.In particular, opal concentrations do not exhibit the rapid millennial-scale variations expressed in all other biological productivity proxies.The studied interval is characterized by rapid changes in C/N ratios in cores 4 and 79.Elevated C/N ratios are recorded in both cores during the warm interstadial B/A and PB periods of the termination, with maxima in both cores during the PB phase and early Holocene.Northern core 79 shows higher, maximum values of 11–14 in the late PB, with a second, albeit smaller peak of about 11 in the late Allerød, indicating a major influence of lateral transport of terrigenous particulate organic carbon to the core site.Southern core 4 shows a general pattern of elevated C/N ratios during inceptions of the warm Bølling and PB phases, however, the overall amplitude of the signal appears more muted with values reaching maxima of only 9–9.5, significantly lower than in core 79 and in line with a more distal location to the northern source areas for terrigenous POC.However, we note that the TOC concentration and C/N ratios are not linearly correlated in either of the two cores.XRF-derived Fe/K ratios indicate an increase in Fe delivery to the core sites during interstadial warm phases, with an overall increasing trend towards the Holocene.The observed increases are interrupted by minimal values during the cold HS-1 and YD stadials, whereas local maxima occur during the Allerød 
and the late PB around 10.2 ka.The differing ratios we report are due to the fact that we, unfortunately, had to use two different XRF scanner types for the study, an older CORTEX Scanner for the cores 79 and 4, and an Aavatech XRF scanner for core 13.The sedimentation rates are more similar between cores 79 and 13, with core 4 having a lower sedimentation rate during the deglacial, due to its more distal location to the main sediment source areas, explaining the very slightly higher count ratios in core 79 than in core 4, which were both measured on the same scanner.Our sedimentary Fe/K ratios are assumed to reflect delivery of Fe to the core sites by the turbid OSIW and DSW, respectively, that passes directly over our sites.Its origin was tracked to the northern shelf areas and the Amur river discharge, where Fe is entrained both from surface sediments by tidal currents and surface river runoff during intense mixing processes on the shallow northern shelf areas into DSW/OSIW.A number of recent studies based on sediment traps, water column measurements as well as sediment surface sample data have consistently shown that the concentrations and processes of Fe deposited along the Sakhalin margin are directly related to upstream entrainment dynamics into intermediate-depth concentration maxima both in dissolved and particulate iron in OSIW that is ultimately transported into the North Pacific.Low-resolution freshwater algae counts from our southernmost core 4 location indicate that a first, smaller discharge occurred during the B/A with Pediastrum spp. concentrations reaching 2335 spec/g, followed by a higher freshwater discharge event between 11 and 9 ka, where concentrations in two samples peaked at 21,380 and 12,700 spec./g sediment, respectively.Freshwater algae concentrations during these two peaks are more than an order of magnitude higher than most Holocene and modern values, with uppermost samples in core 4 yielding concentrations of 51–149 spec./g sediment over the last 1 ka, confirming that neither Pediastrum nor Botryococcus taxa occur in substantial numbers at site 4 today.As a result of long-range transport from the Amur river, green algae are often viewed as pure freshwater forms, but Pediastrum species are also known to occur in brackish environments with salinities higher than 10 psμ.Previous work on surface sediments in the Arctic Ocean has shown that the concentration of the algae Pediastrum spp. decreases with the distance from the river mouth, whereas Botryococcus spp. in general is more evenly distributed into outer shelf and deep sea sediments.This behavior is partly caused by a high buoyancy, but also Botryococcus spp. colonies tolerate higher salinities than Pediastrum spp. 
and are likely to survive longer in the brackish conditions of a freshwater plume.Two principal, non-exclusive mechanisms could account for the observed interstadial ventilation minima of the OSIW layer.Either formation of DSW and thus well-ventilated OSIW ceased due to physical forcing and circulation pattern different from today, leading to little to no export of OSIW from the Okhotsk Sea into the open North Pacific.Such a collapse of OSIW production would be caused by an absence of seasonal sea ice, and thus polynia-induced brine rejection, leading to DSW formation.Alternatively, OSIW was still produced and exported out of the Okhotsk Sea similarly to modern conditions, but its initial oxygen content was lower due to shortened atmosphere-ocean exchange and more intense organic matter remineralization after formation because of higher nutrient loads.In the latter case, it likely faced increased POC and DOC consumption and respiration already on the Sakhalin margin soon after formation, rapidly decreasing the already lower-oxygenated OSIW even further.We favor the second explanation as more important factor for the following reasons: Previous works have shown, based on IRD provenance and abundance studies, that sea ice was present during all phases of the deglaciation in the northeastern part of the Okhotsk Sea and on the Sakhalin margin, even if overall abundances increased and decreased with cold and warm phases.We thus presume that necessary physical preconditions for sustained formation of DSW masses were fulfilled, i.e. opening of polynias and formation of brine rejections during cooling in winter.However, the maximum duration of the sea-ice season and thus the intensity of atmosphere-ocean exchange and DSW ventilation in winter polynias were likely shortened due to the stronger seasonality and stronger deglacial freshwater flux into the Okhotsk Sea via the Amur.The latter inhibits sea ice growth in winter seasons due to the high amounts of sensible heat it carries in its late summer discharge peaks from lower latitude monsoon-influenced regions.Still, some ventilation and O2 entrainment into OSIW likely persisted even in warm deglacial phases, supported by the notion that despite high productivity, the occurrence of prominent laminated intervals in the mid-depth Okhotsk Sea is not observed in the cores 13 or 4, like at nearly all other mid-depth North Pacific locations where they indicate transient anoxia during the B/A and PB warm interstadials.In addition, our δ13CCib data reported here and earlier indicate that the vertical extension of OSIW between our shallower and the deeper sites remained surprisingly homogenous over stadial-interstadial transitions.We thus presume that if a substantial weakening of OSIW formation rate or flow volume during warm interstadial phases had happened, an increasing δ13CCib gradient between deeper core 79 and shallower cores 4 and 13 should have appeared, because core 79 is at the deepest potential extension of intermediate depth water and more sensitive to switches between better ventilated OSIW and deeper, oxygen-poorer PDW.However, no such dynamic change can be observed.To the contrary, during shorter intervals in the warm B/A around c. 13.4–13.8 ka, and in the PB around c. 
9.4–10.8 ka and 10.8–11.8 ka, we observe lower δ13CCib values in upper core 4 compared to lower core 79.Such a pattern is not easily explainable by mixing between deeper, low-δ13C PDW and shallower mid-depth, high δ13C OSIW end members alone, pointing instead to additional processes that influence the δ13C signal and thus core layer OSIW ventilation.Indeed, we recorded co-variations of epifaunal δ13CCib and shallow infaunal δ13CUvi data, which indicate that the δ13CDIC and also O2 concentrations of OSIW were to some extent determined by mesopelagic respiration.As such remineralization processes are most active within the core layer of the high-OM turbid OSIW core layer, i.e. in the 400–1000 m water depth range, the oxygenation of OSIW was likely significantly reduced already close to the formation region during deglacial warm phases, corroborating earlier hypotheses based on export production records from the pelagic North Pacific."In analogy to modern observations, a minimum of about 30% of the mesopelagic waters' initial O2 concentration was likely consumed relatively quickly by organic matter respiration.The Benthic–Planktic ventilation ages derived from our and other cores in the Okhotsk Sea from water depths of 675 m and 1300 m support our hypothesis, because they show no major coherent changes towards previously proposed simply higher B-P ages during the B-A and PB.We observe that while increases in NPIW ventilation and formation did occur during the cold stadial phases in the North Pacific, the warm interstadial phases in contrast did not result in persistent older water masses compared to the modern situation in the OSIW source region, implying a mode similar in strength and volume to modern circulation patterns or even stronger.This reasoning is in line with the notion that the global Meridional Overturning Circulation as one forcing for variations in North Pacific circulation was comparable to modern conditions or stronger as evidenced in records from the Atlantic region during the B/A.However, a collapse of NPIW ventilation off Japan between 900 and 1400 m is indicated by the drop of B-P ages to much older values characteristic of deeper PDW values around 2100 m water depth.This pattern implies that either upwelling of old, deeper water masses occurred at the Japan margin into intermediate depths without reaching the surface ocean, or that circulation patterns in the intermediate water layer off Japan significantly changed to old water masses during the B/A, while leaving deeper PDW around 2000 m relatively unaffected at the same time.Some works indicate that the boundary between Kuroshio and Oyashio currents shifted by several degrees in concert with stadial and interstadial variations during the last glacial and the last termination, probably affecting the mid-depth waters at the northern Japan margin sites, switching them between a more northern and southern-derived source.As paleo-ventilation ages in the Okhotsk Sea and by inference the WSAP gyre region, i.e. 
northward of the Kuroshio-Oyashio extension, are apparently unaffected by these changes and show no coherent signature, but only minor variations over the glacial termination, we presume that during transient warm interstadial phases southern, subtropically-influenced intermediate water became more isolated from the northern, OSIW-influenced WSAP gyre than today. This reasoning is in line with previous evidence from a well-dated mid-depth core in the WSAP-influenced Bering Sea and indicates an increased injection of OSIW into the northerly subarctic gyre circulation that may have more effectively decoupled deglacial subarctic and subtropical mid-depth water masses in the mid-latitude North Pacific than today. In contrast, during the remainder of the deglaciation, similar B-P ages on the Okhotsk and Japanese continental margins argue for a southward propagation of OSIW-influenced intermediate waters into lower latitudes, analogous to modern conditions. In any case, our compilation outlines a more complex pattern of deglacial North Pacific circulation changes in both large gyre regions than previously assumed and warrants more detailed study with both proxy reconstructions and modeling. The deglacial O2 decline during the warm B-A indicated in our δ13CCib data is accompanied by pronounced maxima in surface ocean productivity. However, in contrast to neighboring regions and more southern, lower-resolution sites, our results place the maxima in primary productivity in the PB and the Allerød, rather than in the Bølling or the entire B-A phase. This timing is not strictly correlated with the rapid warming and MOC changes known from the Atlantic. Instead, a first small productivity peak occurs slightly earlier, around ca. 15 ka, followed by a second, stronger peak centered towards the late Bølling to early Allerød. This timing is corroborated by multi-proxy and benthic species assemblage observations from nearby sites. There, relatively late increases in mainly carbonate-bearing primary producers, occurring only after the Bølling around 13.6–13.1 ka, were tentatively ascribed to a potentially lagged transfer of climatic forcing from the North Atlantic region and changes in the AMOC to the North Pacific, but the timing and causation could not be further verified due to potential age model uncertainties in the investigated core. Our new, high-resolution set of cores confirms this particular delayed occurrence in the Okhotsk Sea. In other circum-Pacific regions, sudden deglacial spikes in productivity have been ascribed to various causes, such as eustatic sea level rise, mixed layer stratification, lateral material transport or circulation changes. With regard to the effect of these potential causes in the Okhotsk Sea, we assume that flooding of the extensive northern shallow shelf areas during the early deglaciation and the pronounced MWP Ia likely enhanced southward transport of lithogenic terrestrial material with the East Sakhalin Current. It has been shown that such shelf-derived supply of additional refractory nutrients supports primary production further offshore. In the Okhotsk Sea, the first pre-Bølling and Allerød productivity peaks were likely supported by such a mechanism. However, fertilization of marine productivity by sea-level rise alone was likely not the main cause of the productivity peaks in the Okhotsk Sea, for the following reasons: absolute deglacial productivity maxima peak later than the maximum sea level changes at all sites, namely during the PB, with initial spikes around 11.6 to 11.2 ka. Thus, the reported productivity maxima would be offset in timing and lag the sea-level rise maxima by several thousand years. In addition, hypothetically mobilized terrigenous organic matter was highly refractory and, at least during the early phase of the deglaciation, inundated shelf areas were presumably still frozen and thus relatively inert to sediment mobilization, due to their location within the glacial permafrost region. For an explanation of these discrepancies, we consulted speleothem and loess records from the low-latitude hinterland region, which are indicators of the East Asian Summer Monsoon (EASM) temperature and/or precipitation history. The Dongge Cave speleothem record correlates well in overall shape and pattern with our proxy reconstructions of OSIW ventilation and export production, but diverges in the fine structure. Remarkably, the loess-derived precipitation reconstruction shows that the deglacial increases in EASM precipitation were more gradual than the speleothem δ18O series suggests and thus started later than the initial, rapid onset of the EASM temperature increases. To establish a link between these reference records of hinterland climatology and the Okhotsk Sea freshwater influx, we turned to our freshwater algae counts from the southernmost core location. Results showed that the first, smaller discharge peak during the B/A was followed by a larger Preboreal freshwater discharge event. Freshwater algae concentrations during these two peak periods are significantly higher than modern values, even though modern Amur river discharge and fluvial sediment load are quite substantial. Even if changes in sedimentation rates, and thus overall accumulation, may have played a role in changing the concentrations throughout the last 18 ka, we tentatively infer that precipitation increases during the B/A and PB warm phases caused sustained fluvial discharge peaks into the Okhotsk Sea via the Amur. Land-based records from NE-China peat bogs support this scenario, indicating maxima in effective precipitation in the Amur catchment area. Concurrent maxima in terrigenous nutrient and detrital iron load derived from the Amur started between 14 and 12 ka and peaked during the PB, as evidenced by maxima in Fe/K ratios and matching peaks of terrestrial biomarkers in nearby marine cores. These higher-than-modern peaks in terrigenous sediment load were in all probability buttressed by melting of permafrost in the Amur hinterland during the late deglaciation. Though temporally poorly constrained, the melting was mostly dated in terrestrial records to the Preboreal. Hinterland records and nearby terrestrial organic biomarker data, combined with our sedimentological and freshwater algae data, provide evidence that high amounts of fresh organic matter, nutrients and suspended sediment were supplied to the upper water column of the NW Okhotsk Sea during the warm B-A and PB phases. These elevated nutrient loads were optimally delivered to the photic zone, as indicated by maxima in mixed layer stratification and longer sea ice-free summer seasons during the deglacial warm B-A and PB, thereby supporting the increases in primary production. This conditioning of the upper mixed layer is also assumed to have given carbonate producers a competitive advantage over diatoms during the deglaciation, thus explaining the lack of an early deglacial bio-siliceous productivity peak, which was reported in a number of previous studies and is therefore a coherent feature throughout the Okhotsk Sea.
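The terrigenous-input proxies invoked above (e.g. the Fe/K ratios from XRF core scanning) boil down to simple elemental log-ratios. The following minimal Python sketch uses purely hypothetical count rates and ages, not the measured data, and illustrates only the proxy arithmetic, not the authors' processing workflow. Log-ratios are used because ratios of simultaneously measured elements largely cancel the closed-sum and matrix effects that affect raw scanner counts.

```python
import numpy as np

# Hypothetical XRF core-scanner count rates and calibrated ages (illustrative only)
age_ka    = np.array([9.0, 10.0, 11.4, 12.5, 13.5, 14.7, 16.0, 18.0])
fe_counts = np.array([5200, 6100, 7400, 6000, 6800, 6500, 4300, 4100])
k_counts  = np.array([2100, 2200, 2150, 2050, 2100, 2150, 2000, 1950])

# Elemental log-ratio: tracks relative terrigenous (detrital Fe-bearing) input
ln_fe_k = np.log(fe_counts / k_counts)

# Flag samples with clearly elevated terrigenous input relative to the record
elevated = age_ka[ln_fe_k > ln_fe_k.mean() + ln_fe_k.std()]
print(np.round(ln_fe_k, 2), elevated)
```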
Together, our data indicate that freshwater supply by the Amur was a particularly important factor in rapidly supplying relatively fresh terrigenous matter and nutrients for maximum primary production to the Okhotsk Sea continental margin during the B-A and PB warm phases, while sea level rise was likely another important contributor to the elevated terrigenous refractory organic matter content. Our results also support a scenario in which the environmental forcing is regionalized and differs between particular NW Pacific areas, rather than being strictly correlated with North Atlantic millennial-scale climatic variability as recorded in Greenland ice cores and AMOC changes. Apparently, in the Okhotsk Sea, low-latitude forcing by hinterland changes, through the migration of the East Asian Summer Monsoon front and deglacial changes in the terrestrial cryosphere, played an additional critical role in shaping the deglacial hydrography and biogeochemistry. Our results provide evidence for significantly higher organic matter and nutrient loads within the OSIW core layer during the warm interstadial phases of the last glacial termination. Based on a number of modern sediment trap and time series results, locations on the Sakhalin margin reliably record the amount of terrigenous and biogenic material entrained in the DSW formation region, which is then transported within a highly turbid OSIW layer southward into the North Pacific. Today, the amount of laterally transported POC and DOC entrained within DSW/OSIW is considerably larger than the amount of POC that settles through the water column from surface biogenic productivity. Thus, lateral entrainment and subsequent transport of suspended material, rather than vertical settling of nutrients, preconditions the biogeochemistry of OSIW. Our multi-proxy data imply that the flux of the entrained POC, DOC and terrigenous lithogenic material was even higher than today during the B-A and PB warm phases. We thus hypothesize that significant amounts of dissolved and particulate terrigenous carbon, as well as nutrients such as dissolved silicate (Si(OH)4), were laterally transported within the OSIW during the B-A and PB, and exported from the Okhotsk Sea to the pelagic WSAP. While modern Si(OH)4 concentrations in the mid-depth water column of the Okhotsk Sea are already relatively high, a decrease in deep- to intermediate-depth water mass stratification during deglacial warm phases may have helped to load the OSIW during the B/A and PB with higher Si(OH)4 concentrations from the deep ocean compared to today. Concurrent Fe maxima observed in all sediment records along the pathway of the southward-flowing ESC indicate that, in addition, the amount of lithogenic material within the OSIW increased, thus creating an “optimal nutrient mix” by adding iron to the sediment suspension. Even if the enhanced terrigenous Fe supplied to the suspension load of OSIW may not have become fully bio-available, a number of recent studies have provided evidence that a close coupling exists on instrumental time scales between fluvial Amur Fe discharge and bio-available Fe in the pelagic subarctic Pacific and its marginal seas. These higher concentrations of suspended macronutrients and Fe were further reinforced by the observed low primary production of biogenic silica within the Okhotsk Sea, especially during the B-A, likely caused by local stratification processes, e.g.
due to high freshwater runoff. In combination with maximum riverine supply, such biogenic under-utilization probably further conserved Si(OH)4 and bio-available Fe concentrations in the mid-depth, highly turbid, weakly ventilated water column. These OSIW masses may thus have contributed to the partial decoupling of silicate- and nitrate-based isotope proxies in the WSAP during the Bølling-Allerød. In addition to these hypothesized changes in nutrient availability, stratification changes induced warm mixed layer maxima, which further fostered the productivity maxima. Independent support for this scenario is provided by data from cores located in the southern Okhotsk Sea, distal from the immediate influence of local fluvial transport. There, maxima in terrigenous detrital OM accumulation were observed during the deglaciation. The authors concluded that sea ice transport, fluvial supply by the Amur and continental debris from rising sea levels and inundated shallow shelf areas would all constitute likely causes. Combined with evidence from our productivity and OSIW ventilation data and the established modern oceanic transport mechanisms of suspended material in the turbid OSIW layer, we suggest that OSIW acted as an efficient lateral transport mechanism for nutrient export over wide distances into the open mid-depth North Pacific Ocean. There, the low-O2, nutrient- and Fe-enriched OSIW was upwelled into the upper mixed layer during wintertime turbulent mixing, in analogy to modern conditions, and facilitated the transient repletion of nutrients, including Fe, during the B/A and PB. Our hypothesis helps to explain the rather enigmatic pattern of higher-than-modern nutrient utilization during the deglacial productivity peaks throughout the subarctic NW Pacific and its marginal seas, based on nitrogen isotopes and other proxy data. In addition, we presume that, due to the observed ventilation minima in OSIW during the B/A and PB, denitrification was regionally enhanced, supporting a slightly elevated δ15N signature of OSIW. At the same time, the decreased O2 content of OSIW and the northward-displaced circulation potentially fostered the establishment of oxygen minimum zones in the North Pacific Ocean, including northerly subarctic regions like the Bering Sea. Lastly, although the subarctic Pacific is a sink for atmospheric CO2 today and constitutes one of the three main modern High Nutrient/Low Chlorophyll regions globally, transient Fe-replete conditions caused by lateral subsurface input would re-adjust the efficiency of the “biological pump” towards more effective sequestration of atmospheric CO2 into the deep ocean through better nutrient utilization and higher primary production. The productivity spikes in the North Pacific during the last glacial termination likely contributed to shaping millennial-scale deglacial atmospheric CO2 changes through a varying interplay between the release of old, CO2-rich deep water masses to the atmosphere and the sequestration of CO2 into the deep ocean through primary production and changes in the efficiency of carbon export from the upper ocean to the abyss. Notably, our timing of changes in OSIW biogeochemical and ventilation characteristics correlates both with a North Pacific maximum in nutrient utilization/export production and with a transient slowdown in the deglacial atmospheric CO2 rise during the late phase of the Bølling-Allerød period, i.e.
after maximal releases of old carbon into the atmosphere, as hypothesized earlier. Thus, intermediate water masses likely constitute important facilitators of larger, regional-scale modulations in ocean biogeochemistry, with the potential to affect the deglacial global carbon cycle, warranting a more detailed understanding and future studies on both glacial-interglacial and shorter timescales. We carried out a comprehensive multi-proxy-based reconstruction of Okhotsk Sea Intermediate Water ventilation and biogeochemical characteristics over the last glacial termination, based on a set of high-resolution AMS 14C-dated sediment cores. Based on ventilation ages and epibenthic stable carbon isotopes, we have provided evidence that decreases in OSIW ventilation were largely driven by increases in organic matter content and remineralization rates, rather than by changes in formation rates and overturning of OSIW in the Okhotsk Sea. From differences in ventilation ages between the northerly Okhotsk Sea and the southerly open Japan margin, we deduce that the OSIW influence on lower-latitude subtropical North Pacific Intermediate Water waned during the warm Bølling-Allerød and Preboreal phases. These regional differences explain the large scatter observed in earlier data compilations and indicate more complex intermediate water circulation patterns than previously assumed, in analogy to the recently observed, regionally differing surface and mixed layer hydrography during the last glacial termination. At the same time, OSIW was not a source of oxygen for NPIW during the Bølling-Allerød, thus providing a positive feedback for the basin-wide intensification of Oxygen Minimum Zones during the last glacial termination, in line with hypotheses that invoked elevated respiration rates in the open North Pacific to decrease the O2 content of NPIW. We observed millennial-scale OSIW ventilation decreases during deglacial warm periods to be in phase with increases in terrigenous suspension load and deposition in the Okhotsk Sea. To some extent, terrigenous sediment was likely supplied by continental shelf flooding through eustatic sea-level rise, mainly during the early to mid deglacial. However, we speculate that during the warm Allerød and Preboreal phases, the marine productivity maxima on the Sakhalin margin were fuelled by iron- and nutrient-rich terrestrial material that was mainly sourced from the Siberian hinterland and delivered via Amur freshwater discharge peaks. Support for this assumption of maxima in river runoff comes from the close correlation with deglacial increases in hinterland precipitation and heat transport linked to the northward propagation of the East Asian Summer Monsoon, which were conceivably connected to widespread deglacial thawing of Siberian permafrost in the vast Amur catchment area. Such melting likely mobilized significant amounts of terrigenous organic matter and detritus that were transported from the catchment into the Okhotsk Sea basin, where they in turn fostered OSIW oxygenation decreases through maxima in respiration, while increasing the dissolved and particulate nutrient loads in mid-depth waters. This closely coupled continent-ocean system thus provides an example of low-latitude East Asian monsoon forcing exerting an influence on mid-latitude and subarctic North Pacific oceanography and marine biogeochemistry. We suggest that the correlation of the observed ventilation minima with maxima in primary productivity and nutrient utilization in the open North Pacific implies an increase in OSIW-sourced lateral
transport of macronutrients and iron into the subarctic North Pacific during the Bølling-Allerød and Preboreal. Variations in this export mechanism presumably contributed to the pelagic productivity peaks and simultaneously led to a higher utilization of nutrients during deglacial warm phases than is observed today. In the subarctic North Pacific region, this scenario would have temporarily relieved the upper ocean from micro-nutrient limitation. The upwelling of the mid-depth, nutrient-enriched OSIW during wintertime mixing likely switched the region to a transient, more efficient CO2 sink during the Bølling-Allerød and Preboreal, two intervals that are marked by a slowdown in the atmospheric CO2 rise. LLJ measured and compiled data and wrote the paper. LLJ, RT and DN designed the study and measured isotope data. RK measured opal and XRF on core 4, UK carried out the freshwater algae analyses, LM contributed to the isotope and AMS 14C data, UR contributed XRF measurements, and SG provided samples and data. All authors participated in the discussion of the results and conclusions and contributed to the final version of the paper.
The modern North Pacific plays a critical role in marine biogeochemical cycles, as an oceanic sink of CO2 and by bearing some of the most productive and least oxygenated waters of the World Ocean. The capacity to sequester CO2 is limited by efficient nutrient supply to the mixed layer, particularly from deeper water masses in the Pacific's subarctic and marginal seas. The region is in addition only weakly ventilated by North Pacific Intermediate Water (NPIW), which receives its characteristics from Okhotsk Sea Intermediate Water (OSIW). Here, we present reconstructions of intermediate water ventilation and productivity variations in the Okhotsk Sea that cover the last glacial termination between eight and 18 ka, based on a set of high-resolution sediment cores from sites directly downstream of OSIW formation. In a multi-proxy approach, we use total organic carbon (TOC), chlorin, biogenic opal, and CaCO3 concentrations as indicators for biological productivity. C/N ratios and XRF scanning-derived elemental ratios (Si/K and Fe/K), as well as chlorophycean algae counts document changes in Amur freshwater and sediment discharge that condition the OSIW. Stable carbon isotopes of epi- and shallow endobenthic foraminifera, in combination with 14C analyses of benthic and planktic foraminifera imply decreases in OSIW oxygenation during deglacial warm phases from c. 14.7 to 13 ka (Bølling-Allerød) and c. 11.4 to 9 ka (Preboreal). No concomitant decreases in Okhotsk Sea benthic-planktic ventilation ages are observed, in contrast to nearby, but southerly locations on the Japan continental margin. We attribute Okhotsk Sea mid-depth oxygenation decreases in times of enhanced organic matter supply to maxima in remineralization within OSIW, in line with multi-proxy evidence for maxima in primary productivity and supply of organic matter. Sedimentary C/N and Fe/K ratios indicate more effective entrainment of nutrients into OSIW and thus an increased nutrient load of OSIW during deglacial warm periods. Correlation of palynological and sedimentological evidence from our sites with hinterland reference records suggests that millennial-scale changes in OSIW oxygen and nutrient concentrations were largely influenced by fluvial freshwater runoff maxima from the Amur, caused by a deglacial northeastward propagation of the East Asian Summer Monsoon that increased precipitation and temperatures, in conjunction with melting of permafrost in the Amur catchment area. We suggest that OSIW ventilation minima and the high lateral supply of nutrients and organic matter during the Allerød and Preboreal are mechanistically linked to concurrent maxima in nutrient utilization and biological productivity in the subpolar Northwest Pacific. In this scenario, increased export of nutrients from the Okhotsk Sea during deglacial warm phases supported subarctic Pacific shifts from generally Fe-limiting conditions to transient nutrient-replete regimes through enhanced advection of mid-depth nutrient- and Fe-rich OSIW into the upper ocean. This mechanism may have moderated the role of the subarctic Pacific in the deglacial CO2 rise on millennial timescales by combining the upwelling of old carbon-rich waters with a transient delivery of mid-depth-derived bio-available Fe and silicate.
744
Deactivation study of the hydrodeoxygenation of p-methylguaiacol over silica supported rhodium and platinum catalysts
Bio-oils upgrading can be performed using a variety of different approaches.In order to blend with crude oil, or to drop-in to existing petroleum processes, the oxygen content of the bio-oil has to be reduced.Deoxygenation of the bio-oils can be achieved using a zeolite cracking approach or catalytic hydrodeoxygenation .Reductive media such as hydrogen or a hydrogen donor solvent are typically used for hydrodeoxygenation or the hydrogen transfer reaction.While hydrodeoxygenation of bio-oils has been studied for decades, the catalytic mechanisms and reasons for catalyst deactivation are still not fully understood .The chemical composition of the bio-oils is extremely complex and depends on the amount of cellulose, hemicelluloses and lignin in the biomass feedstock and the pyrolysis conditions.During the pyrolytic process, celluloses and hemicelluloses produce sugars and furans which undergo additional decomposition to generate esters, acids, alcohols, ketones and aldehydes .The phenolic compounds are produced from the lignin component.Amongst all the compounds present in the bio-oil, the phenolics are by far the most studied.The reasons are their multiple functional groups, their high proportion in the bio-oil and their tendency to promote catalyst deactivation.Another reason of the extensive use of phenolics as model compounds for bio-oil upgrading relies on the higher bond dissociation energy required to break aryl-hydroxyl or aryl-methoxy linkages compared to alkyl hydroxyl or alkyl ether linkages .Within the pyrolysis of aromatic compounds, guaiacol has received the most attention .During the upgrading process, guaiacol can undergo demethoxylation, demethylation and partial or complete hydrogenation.Various catalysts have been studied for the hydrodeoxygenation of guaiacol.In a previous study, noble metals catalysts such as Pt, Pd or Rh, when compared to conventional sulfided CoMo/Al2O3, showed better performance and exhibited a lower carbon deposit .A comparative study of Pt/Al2O3, Rh/Al2O3 and presulfided NiMo catalysts for the HDO of microalgae oil reported the better stability of the noble metal catalysts reaching a steady state after 5 h time on stream.The NiMo catalyst which did not reach steady state after 7 h reaction was prone to higher carbon deposition .Catalyst supports also play a significant role in the stability of the catalysts.Previous works reported that use of basic magnesia supports reduced the coking of the catalyst when compared to acidic alumina supports .In this paper we report on the HDO reaction of p-methylguaiacol over silica-supported rhodium and platinum catalysts.Silica was selected as the catalyst support for this study due to its less acidic properties with the aim of reducing carbon deposition.Instead of guaiacol, HDO was performed using p-methylguaiacol as the model compound, as it is one of the main components of the pyrolytic oil formed from lignocellulosic feedstocks.Also unlike guaiacol, the methylation in the para position allowed discrimination of different reaction pathways via the generation of m- or p-cresol as illustrated in Fig. 
1. The complete list of product names is given in Table S.1. Two 2.5% Rh/silica catalysts and a 1.55% Pt/silica catalyst were tested in this study. p-Methylguaiacol and reference products were purchased from Sigma-Aldrich. A 2.5% Rh/SiO2 catalyst was obtained from Johnson Matthey; it was prepared by incipient-wetness impregnation of a rhodium chloride salt on a Grace-Davison silica support. A 1.55% Pt/SiO2 catalyst and a 2.5% Rh/SiO2 catalyst were prepared by incipient-wetness impregnation of aqueous ammonium tetrachloroplatinate ((NH4)2PtCl4) and rhodium chloride, respectively, over fumed silica. Detailed protocols for the Pt/SiO2 and Rh/SiO2 catalysts prepared at Aston University were described in previous work . Each catalyst was ground and sieved to between 350 and 850 μm before use. The characteristics of the catalysts are listed in Table 1. All other reagents and solvents were purchased from Sigma-Aldrich and used without further purification. In a previous study, the eluent gas stream of guaiacol HDO was quantified using on-line GC analysis . However, in our system the complexity of the HDO product mixture was not compatible with on-line GC analysis of the deoxygenated/hydrogenated oil. For example, para- and meta-cresols could only be differentiated by GC after a silylation step. Therefore, a collector was required to sample the condensable products at different times on stream without interrupting the reaction. In a previous paper, the liquid products were collected by bubbling the vaporised products into a cold liquid such as isopropanol . This technique has the advantage of giving an absolute value for each product; however, it was felt to be more suitable for a low-pressure system. In the present study, liquefaction of the product gas stream was achieved by passing it through a condenser at 5 °C.
As illustrated in Fig. 2, after passing through the condenser, the gas and liquid were separated and the liquid was collected by gravity in ¼ inch stainless steel tubing before filling the collector from the bottom. A system of valves permitted isolation of the collector for sampling without disturbing the pressure of the system. The light products were also collected in a U-shaped pipe cooled to −60 °C, connected after the pressure relief valve. Analysis of the light trap showed that only 5% of the toluene was not condensed after passing through the condenser. No other products were detected to have passed the condenser in the gas phase, except a trace of p-methylguaiacol due to its large excess in the product stream. However, due to the lack of precision in the sampling volume, an exact mass balance and an absolute quantification of each product could not be achieved. As a consequence, only a relative molar quantification of the products was performed, with the conversion, yield and selectivity defined in Eqs. (1)–(3) as: Conversion = (moles of PMG converted)/(moles of PMG fed); Yield = (moles of product)/(moles of PMG fed); Selectivity = (moles of product)/(Σ moles of products). The catalytic test was performed in a continuous-flow, fixed-bed reactor over 0.45 g of silica-supported noble metal catalyst. Similar catalyst bed volumes of 0.84–0.88 cm3 were estimated from the bulk densities of the catalysts of 0.51, 0.52 and 0.54 g cm−3 for the Rh/SiO2, Pt/SiO2 and Rh/SiO2, respectively. With a reactor inner diameter of 0.40 cm, the catalyst bed length was around 6.7–7.0 cm. The catalyst was pre-reduced in situ before reaction at 300 °C for 2 h under 100 mL min−1 of 40% H2/argon. After the catalyst was reduced, p-methylguaiacol was pumped into the gas flow and vaporised at 200 °C. The reaction temperature was 300 °C with a hydrogen partial pressure of 4 barg, giving a H2:PMG molar ratio of 15. The total pressure was made up to 10 barg using argon. The weight hourly space velocity (WHSV) of PMG was 2.5 h−1, while the gas hourly space velocity was 7200 h−1 with a gas flow rate of 100 mL min−1. Gas mass flow controllers were used to feed hydrogen and argon, while a Gilson HPLC pump was used to feed the p-methylguaiacol. In order to avoid condensation, the gas lines before and after the reactor were heated to 220 °C. A condenser at 5 °C was used to liquefy the products before sampling. The HDO products were diluted in 5 mL of dichloromethane. In order to fully quantify the products, and due to the significant variation in product abundance, three distinct solutions were prepared from the same mixture of products/internal standards. First, an aliquot of the HDO products in DCM was mixed with 50 μL of IS. Then, 20 μL of this mix was silylated, while the remaining mixture was diluted with 0.5 mL of dichloromethane. Finally, 5–10 μL of the diluted solution was also silylated to quantify the PMG, methylcatechol and cresol products. The non-silylated solutions were injected to quantify the light products such as methylcyclohexane and toluene, as well as 4-methyl-2-methoxycyclohexanone, which co-eluted with the trimethylsilyl methylcyclohexanol. This approach permitted a full quantification of minor and major products. Qualitative analyses of the HDO products were performed on a Shimadzu GC-2010 coupled to a MS-QP2010S. Samples were injected on a ZB-5MS capillary column. The quantitative analyses were performed on an HP 5890 gas chromatograph fitted with a Supelco DB-5 capillary column. Quantification was obtained using decane (C10) and heptadecane (C17) as internal standards, and the relative response coefficients were based on the exact products when possible or on response coefficients of products with similar structures for non-commercial products.
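To make the relative molar quantification above concrete, the sketch below (Python) converts internal-standard-normalised GC areas into moles via relative response factors and then evaluates conversion, yield and selectivity in the sense of Eqs. (1)–(3). All peak areas, response factors and the 1:1 PMG-to-product molar basis are illustrative assumptions, not the authors' calibration data.

```python
# Minimal sketch of the relative molar quantification (hypothetical numbers,
# not the authors' data). Peak areas are normalised to an internal standard
# and converted to moles via assumed relative response factors (RRF).
areas = {"PMG": 120.0, "p-cresol": 40.0, "m-cresol": 18.0,
         "4-methylcatechol": 25.0, "toluene": 6.0}          # arbitrary units
area_is = 50.0            # internal standard peak area
mol_is = 5.0e-6           # moles of internal standard added
rrf = {"PMG": 1.00, "p-cresol": 0.95, "m-cresol": 0.95,
       "4-methylcatechol": 0.90, "toluene": 1.10}           # assumed RRFs

moles = {c: (a / area_is) * mol_is / rrf[c] for c, a in areas.items()}

products = {c: n for c, n in moles.items() if c != "PMG"}
total_products = sum(products.values())
pmg_fed = moles["PMG"] + total_products      # 1:1 molar basis: each product derives from one PMG

conversion = total_products / pmg_fed
yields = {c: n / pmg_fed for c, n in products.items()}
selectivity = {c: n / total_products for c, n in products.items()}
print(f"X = {conversion:.2%}", {c: f"{s:.1%}" for c, s in selectivity.items()})
```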
Temperature programmed oxidation (TPO) was carried out using a combined TGA/DSC SDT Q600 thermal analyser coupled to an ESS mass spectrometer for evolved gas analysis. A sample loading of 10–15 mg was used, and samples were typically heated from 30 °C to 900 °C at a ramp rate of 10 °C min−1 under 2% O2/Ar with a flow rate of 100 mL min−1. For mass spectrometric analysis, various mass fragments were followed, such as m/z 18, 28 and 44. All TGA work was kindly carried out by Andy Monaghan at the University of Glasgow. Nitrogen porosimetry was conducted on a Quantachrome Nova 4000e porosimeter and analysed with the NovaWin version 11 software. Samples were degassed at 120 °C for 2 h under vacuum prior to analysis by nitrogen adsorption at −196 °C. Adsorption/desorption isotherms were recorded for all parent, Pt-impregnated and Rh-impregnated silicas. The BET surface areas were derived over the relative pressure range 0.01–0.2. Pore diameters and volumes were calculated using the BJH method applied to the desorption isotherms for relative pressures >0.35. Pt and Rh dispersions were measured via CO pulse chemisorption on a Quantachrome ChemBET 3000 system. Samples were outgassed at 150 °C under flowing He for 1 h prior to reduction at 150 °C under flowing hydrogen for 1 h, before room-temperature analysis. A CO:Pt surface stoichiometry of 0.68 was assumed according to the literature ; the CO–Rh interaction is much more complicated because the CO bond is very sensitive to the particular electron distribution, such as the spin states and the initial occupations of the Rh 5s electronic states ; therefore, the CO:Rh ratio was difficult to determine. For estimation, a CO:Rh surface stoichiometry of 1 could be assumed on the basis of the literature . Carbon loadings were obtained using a Thermo Flash 2000 organic elemental analyser, calibrated against a sulphanilamide standard, with the resulting chromatograms analysed using Thermo Scientific's Eager Xperience software. Vanadium pentoxide was added to aid sample combustion. Raman spectra of post-reaction catalysts were obtained with a Horiba Jobin Yvon LabRAM High Resolution spectrometer. A 532.17 nm line of a coherent Kimmon IK series He-Cd laser was used as the excitation source. Laser light was focused for 10 s using a 50× objective lens and a grating of 600. The scattered light was collected in a backscattering configuration and was detected using a nitrogen-cooled charge-coupled device (CCD) detector. A scanning range of 100–4100 cm−1 was used.
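The metal dispersions quoted later (Table 1) follow from the CO uptakes measured in this way. The short Python sketch below shows the usual dispersion arithmetic under the stoichiometries stated above (CO:Pt = 0.68, CO:Rh = 1); the uptake values are hypothetical placeholders, not the measured data.

```python
# Minimal sketch of the dispersion arithmetic from CO pulse chemisorption.
M = {"Pt": 195.08, "Rh": 102.91}          # molar mass, g mol-1

def dispersion(co_uptake_umol_per_g, metal, wt_fraction, co_to_metal):
    """Fraction of metal atoms exposed at the surface."""
    surface_metal = co_uptake_umol_per_g * 1e-6 / co_to_metal   # mol surface metal per g catalyst
    total_metal = wt_fraction / M[metal]                        # mol metal per g catalyst
    return surface_metal / total_metal

# e.g. a 1.55 wt% Pt/SiO2 sample with an assumed uptake of 3.9 umol CO g-1
print(f"Pt dispersion ~ {dispersion(3.9, 'Pt', 0.0155, 0.68):.1%}")
# and a 2.5 wt% Rh/SiO2 sample with an assumed uptake of 16 umol CO g-1
print(f"Rh dispersion ~ {dispersion(16.0, 'Rh', 0.025, 1.0):.1%}")
```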
The activity/selectivity of the three catalysts was studied with time on stream (TOS). The main purpose of these experiments was to determine whether the catalyst activity reached a steady state or whether deactivation was continuous. Low catalyst activity was not an issue, but identification of the variation in catalyst activity and product selectivity was critical for a future kinetic study of the hydrodeoxygenation of p-methylguaiacol. Catalyst testing was performed at 300 °C, a WHSV of 2.5 h−1, 4 barg hydrogen and a H2:PMG molar ratio of 15:1. All three catalysts (the two Rh/silica catalysts and the Pt/SiO2 catalyst) were studied over several days. Rh/SiO2 showed fast deactivation initially, but this was followed by a period of constant activity, whereas, although the Rh/SiO2 showed the same initial deactivation profile, no steady state was observed. The deactivation profile of the Pt/SiO2 catalyst was different from that of the rhodium catalysts in that it exhibited a constant loss of activity. This linear deactivation has previously been reported for the HDO of guaiacol over Pt/Al2O3 and Pt/MgO . The deactivation of the Pt/silica catalyst was plotted using the relationship ln[ln(1/(1 − Xt))] = ln(k·τw) − kd·t, where Xt is the conversion of the reactant at time t, k is the rate constant, τw represents the weight time, kd the deactivation rate constant and t is time (see the fitting sketch below). The deactivation plot gave a deactivation rate constant of 0.02 h−1. However, the Rh/silica data fitted a logarithmic curve with a regression coefficient of 0.99, which showed that the deactivation mechanism was not time-independent. The catalysts' initial selectivities and those after ∼12 h and ∼32 h TOS are shown in Fig. 5. Compared with the Pt/SiO2 and the Rh/SiO2, the Rh/SiO2 was the only catalyst that showed constant selectivity from 10 h to 33 h TOS. After 32 h on stream, 42 mol% of the products were p-methylcatechol with Rh/SiO2, whereas with Rh/SiO2 the selectivity to p-methylcatechol was only 12 mol%. This significant variation may be explained by the different nature of the silica supports used. As illustrated in Fig. 6, Rh/SiO2 produced p-methylcatechol at the same rate from 12 to 72 h TOS. While the deoxygenation and hydrogenation reactions were deactivated over time, the demethylation of the PMG was not affected. The high demethylation activity of the Rh/silica may be attributed to the higher acidity of the silica support. For the Pt/SiO2, the selectivity toward 4-methylcatechol increased from 8 to 25 mol% between 1 h and 32 h TOS, while the selectivities toward m- and p-cresol decreased from 22.6 and 40.1 mol% to 13.6 and 34.0 mol%, respectively. Comparing the two rhodium catalysts, some notable differences are observed. Initially, the Rh/silica shows a high selectivity to toluene and a low selectivity to 4-methylcatechol, but by 32 h TOS the selectivity has reversed. This behaviour raises the question as to whether it is possible to remove two OH groups before desorption, effectively by-passing the formation of p-cresol as an intermediate. This behaviour is not seen with the Rh/silica catalyst, where the selectivity is relatively unchanged over the TOS. There are two significant differences between the rhodium catalysts: different silica supports were used, and the metal crystallite sizes are different. Their preparations were identical, so the significant difference in product selectivity between the two rhodium catalysts could be attributed either to the nature of the silica support or to a metal particle size effect. A more in-depth investigation of these effects will be required to fully interpret these changes in selectivity. The variations in the molar yields of the principal products are shown in Fig. 6 for the three catalysts. With the Rh/SiO2 and Pt/SiO2 catalysts, p-cresol was the main product. For the Rh/SiO2, p-cresol was the main product for the first 24 h TOS; subsequently, methylcatechol was the main product. As illustrated in Fig. 6, Pt/silica showed a similar rate of conversion of p-methylguaiacol to 4-methylcatechol to that of Rh/silica from 24 h to 72 h.
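A minimal Python sketch of the deactivation fit referred to above is given here. The conversion–time data are synthetic values generated from assumed parameters (kd = 0.02 h−1, k·τw = 1.5) purely to illustrate the linearisation and regression; they are not the experimental data.

```python
import numpy as np

# Hypothetical conversion-vs-time data for a linearly deactivating catalyst
t = np.array([3., 12., 24., 36., 48., 60., 72.])            # time on stream, h
kd_true, k_tau = 0.02, 1.5                                   # assumed values for the synthetic data
Xt = 1.0 - np.exp(-k_tau * np.exp(-kd_true * t))             # X(t) from ln[ln(1/(1-X))] = ln(k*tau_w) - kd*t

# Linearise and fit: y = ln(ln(1/(1-X))) = ln(k*tau_w) - kd*t
y = np.log(np.log(1.0 / (1.0 - Xt)))
slope, intercept = np.polyfit(t, y, 1)
print(f"kd = {-slope:.3f} h-1, k*tau_w = {np.exp(intercept):.2f}")   # recovers ~0.020 and ~1.5
```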
However, the Pt/silica catalyst showed a different deactivation profile for the three main products, with the yield of 4-methylcatechol decreasing more slowly than the yields of both para- and meta-cresol suggesting that catalyst deactivation affected the demethylation of the p-methylguaiacol less than the demethoxylation and direct deoxygenation.Platinum has been reported to favour the demethylation of guaiacol and this could explain the high demethylation activity in the early stages .In the case of both Rh catalysts, there was low conversion of p-methylguaiacol to 4-methylcatechol initially, which then increased and stabilized to a constant production after 10 h or 5 h on stream for Rh/silica or Rh/silica respectively.Indeed, for the Rh/silica catalyst, 4-methylcatechol becomes the principal product.This is explained by the concomitant loss of deoxygenation of the methylcatechol to m- or p-cresol.The yields of the p-cresol and m-cresol also stabilized after 10 h on stream for the Rh/silica catalyst consistent with it achieving steady state.On the other hand, the Rh/silica and Pt/silica did not reach a steady state within the time of the study.It could be speculated that Rh/silica required a longer reaction time in order to reach a steady state as illustrated by the deactivation profile in Fig. 3.However, in the case of the Pt/silica catalyst, the continuous deactivation profile suggested that the catalyst may not reach a steady state condition.Extended testing would be required to determine whether a low activity steady state was reached or whether the system was subject to continuous deactivation.As illutrated in Fig. 1, the production of m-cresol or p-cresol required the demethylation and direct deoxygenation.The p-cresol can also be produced directly from the demethoxylation of the methylguaiacol.As illustrated in Figure S.1, the ratios p-cresol/m-cresol were 3.1, 2.4 and 2.2 after 12 h TOS for Rh, Rh and Pt catalysts, respectively.The production of p-cresol via demethoxylation was more pronounced in the case of the Rh and can be explained by the low activity of the catalyst toward demethylation.On the other hand, the Rh and Pt catalysts produced a p-cresol/m-cresol ratio closer to two suggesting that both demethoxylation and demethylation/direct deoxygenation pathways were more balanced.This can be explained by the higher activity of both catalysts toward the demethylation.However, while the Rh showed a constant ratio between the m- and p-cresol, the Rh and Pt showed an increase of the ratio with TOS.As consequence, the pathways for the production of m-cresol and p-cresol were not affected the same way with the deactivation of the Pt and Rh catalysts.It could be suggested that the demethoxylation was less affected than the direct deoxygenation.However, different rate of hydrogenation of the p-cresol and m-cresol could also explained this difference and will be discussed in the next section.Hydrogenation can be affected differently according to the nature of the intermediate products and the catalyst used.According to the reaction pathways illustrated in Fig. 1, the molar ratio of p-cresol, m-cresol, p-methylcatechol and the hydrogenated products of the p-cresol, m-cresol, p-methylcatechol was calculated for the three catalysts.As illustrated in Fig. 
7A, the hydrogenation of the p-cresol and m-cresol on the Pt/silica catalyst was in the same range and followed the same loss of activity.From 3 h to 72 h on stream the ratio increased from 3.9 and 3.7 to 8.5 and 6.4 for the p-cresol and m-cresol, respectively.In the case of both Rh catalysts, the hydrogenation rate of m-cresol was around twice the rate of hydrogenation of the p-cresol after 48 h. For the Rh/silica catalyst, while the loss of hydrogenation activity for the p-cresol was in the same range than for Pt/silica catalyst, the loss of hydrogenation activity related to m-cresol was far more pronounced.Finally, the Rh/silica catalyst showed a loss of hydrogenation activity for both cresols up to 10 h on stream followed with a constant ratio around 18 and 10 for p-cresol and m-cresol.As illustrated in Fig. 7, the evolution of 4-methylcatechol hydrogenation was less affected for the Pt/silica and Rh/silica catalysts than the Rh/silica catalyst.In the case of Rh/silica, the molar ratio of methylcatechol:hydrogenated products of methylcatechol increased from 2 to 9 after 56 h on steam while the ratio only increased from 2 to 2.8 for Rh/silica.In contrast, the ratio slightly decreased from 2.2 to 1.4 for the Pt catalyst.While the hydrogenation of the cresols over the Pt catalyst decreased with time on stream, the hydrogenation of the catechol inversely increased leading to a constant overall selectivity to hydrogenated products.In all cases, the ratio between non hydrogenated:hydrogenated products showed that the catalysts were more active for the hydrogenation of the methylcatechol than the cresols.This suggested that the presence of vicinal alcohol favoured the adsorption of the catechol on the catalysts.Previous work had suggested that catalyst deactivation was due to carbon deposition on the surface of the catalyst with the initiation of coke formation suggested to be located at the acid site of the support .As illustrated in Fig. 
8, TPO analysis of the spent catalysts clearly showed the presence of carbonaceous deposits.It is interesting to note that the Rh/silica catalyst showed the highest amount of carbon laydown yet it reached a steady state in contrast to the other catalysts.It is also notable that the two catalysts with the same support show quite similar mass loss, which could be expected if the carbon deposit was principally associated with the support.The extent of overall carbon laydown however is very low when considered as a percentage of the feed.Over the Rh/silica 0.18% of the feed was deposited on the catalyst, while for Rh/silica only 0.03% was deposited.Over Pt/silica the amount deposited was only 0.04% of the feed.Looking in detail at the TPO the Rh/silica catalyst showed mass loss at low temperature suggesting the released of adsorbed species as there is no concomitant generation of carbon dioxide.There are weight losses resulting in carbon dioxide evolution at ∼250 °C and 300 °C.At these temperatures the surface species are likely to be pseudo-molecular with a significant H:C ratio.There are then two weight loss events at 445 °C and 469 °C, which are accompanied with carbon dioxide evolution.These weight losses reveal the presence of two similar carbonaceous deposits; the lower temperature species is unique to the Rh/silica catalyst, while the higher temperature event is common to all three catalysts.The rapidity of the weight loss at 469 °C indicates fast combustion of the deposit suggesting that this deposit is hydrocarbonaceous in nature and is associated with the metal.There is a further weight loss event at 583–640 °C, on all the catalysts, which is accompanied with carbon dioxide evolution.This high temperature weight loss can be associated with the combustion of graphitic species on the silica supports, which would be consistent with the loss in surface area as measured by BET.The carbon content of the used catalysts was also determined by CHN analysis and showed the same trend as that found with the TGA.The surface area of the Rh/silica catalyst was nearly twice that of the Pt/silica and Rh/silica catalysts which could explain the higher carbon deposition.Reduction of the surface areas of 20%, 30% and 43% for Rh, Pt and Rh respectively, are attributed to the carbon blocking pores.As illustrated in Table 1, after reaction the metal dispersion was reduced in all three catalysts.The Pt catalyst showed the largest drop with metal dispersion reducing from 7.2 to 4.8% and a concomitant increase in metal crystallite size.Sintering of Pt/silica catalysts under HDO conditions has been observed in a previous study and can be explained by a weak interaction between Pt and the silica support .This sintering, in conjunction with the carbon laydown, would explain the continuing loss of activity of the Pt/silica catalyst.In contrast, the metal dispersion of the Rh/silica and Rh/Silica was only reduced from 2.8 to 2.6 and from 6.8 to 6.1, respectively, indicating a much stronger interaction between support and Rh metal.Finally, by the end of the reaction, the metal surface area of the Rh/silica was three times higher than that of the Rh/silica catalyst, yet the p-methylguaiacol conversion was lower, indicating that there was not a simple correlation between metal surface area and activity.Both Rh/silica catalysts showed both similar deactivation profiles with a fast deactivation at early time on stream followed with slow deactivation for the Rh/silica or constant activity for the Rh/silica.The Pt/silica 
catalyst showed continuous deactivation correlated with metal sintering and carbon laydown.The carbon deposit, higher in the case of the Rh/silica compared to the Pt and Rh/silica, could be explained by the different nature of the silica support.Detailed analysis of the product distributions with time revealed that the specific activity of the catalysts for demethylation, demethoxylation and hydrogenation were affected differently by the catalyst deactivation.The demethylation activity was the least affected by the catalyst deactivation, whereas hydrogenation activity was severely decreased for the Rh/silica catalyst.This behaviour suggests that different sites are responsible for demethylation and hydrogenation activity.The Pt catalyst showed a shift of hydrogenation selectivity from cresols to 4-methylcatechol and the production of 4-methyl cyclohexan-1,2-diol.TPO analysis of the deposited carbon revealed at least three carbonaceous species on the surface of the rhodium catalysts, while only two different carbon species were detected on the platinum catalyst.Only the Rh/silica reached a prolonged steady state after 10 h on stream and modelling of the kinetics of PMG HDO will be reported in a subsequent paper.
Hydrodeoxygenation of para-methylguaiacol using silica supported Rh or Pt catalysts was investigated using a fixed-bed reactor at 300 °C, under 4 barg hydrogen and a WHSV of 2.5 h−1. The activity, selectivity and deactivation of the catalysts were examined in relation to time on stream. Three catalysts were tested: 2.5% Rh/silica supplied by Johnson Matthey (JM), 2.5% Rh/silica and 1.55% Pt/silica both prepared in-house. The Rh/silica (JM) showed the best stability with steady-state reached after 6 h on stream and a constant activity over 3 days of reaction. In contrast the other two catalysts did not reach steady state within the timeframe of the tests, with continuous deactivation over the time on stream. Nevertheless higher coking was observed on the Rh/silica (JM) catalyst, while all three catalysts showed evidence of sintering. The Pt catalyst (A) showed higher selectivity for the production of 4-methylcatechol while the Rh catalysts were more selective toward the cresols. In all cases, complete hydrodeoxygenation of the methylguaiacol to methylcyclohexane was not observed.
745
The effectiveness of capacity markets in the presence of a high portfolio share of renewable energy sources
In this research, we analyze the effectiveness of a capacity market in the presence of a growing share of intermittent renewable energy sources. The European Union is at the forefront of the renewable energy transformation. The increasing reliance on electricity generation from variable renewable energy sources has led to concerns regarding the security of supply. The missing money problem and other vulnerabilities of electricity markets due to intermittent renewable energy resources in the supply mix have been extensively discussed in the literature. Concerns about the security of supply can be addressed by implementing capacity mechanisms to ensure adequate investment in generation capacity. These are sometimes considered a means of providing stability during the transition to a decarbonized electricity system. A capacity market is a quantity-based mechanism in which the price of capacity is established in a market for capacity credits. In a capacity market, consumers, or agents on their behalf, are obligated to purchase capacity credits equivalent to the sum of their expected peak demand and a reserve margin. Capacity credits can be allocated in auctions or via bilateral trade between consumers and producers. The reserve margin requirement is expected to provide a stronger and earlier investment signal, thereby ensuring adequate generation capacity and more stable electricity prices. Capacity markets have been discussed extensively in the literature, for example: Hobbs et al., 2001; Stoft, 2002; Joskow, 2008; Chao and Lawrence, 2009; Cepeda and Finon, 2011; Rose, 2011; Cramton et al., 2013; Finon, 2013; Mastropietro et al., 2015; Meyer and Gore, 2015; Höschle and Doorman, 2016; Bhagwat, 2016; Bhagwat et al., 2016a,b, 2017; Bothwell and Hobbs, 2017; Bushnell et al., 2017; Höschle et al., 2017. In the literature, several types of computer models have been used to study capacity markets. Hach et al., Cepeda and Finon and Petitet et al.
use a system-dynamic approach.Moghanjooghi uses probabilistic model.Botterud et al., Doorman et al., Dahlan and Kirschen, and Mastropietro et al., use an optimization modeling approach.Ehrenmann and Smeers use a stochastic equilibrium model, while a partial equilibrium model is used by Traber.In the existing research, capacity markets are modeled without sufficient granularity to understand the operational dynamics of these policy constructs and to compare different capacity mechanism designs.Moreover, none of the reviewed studies considered the combined effects of uncertainty and path dependence on the development of electricity generation portfolios with a growing share of RES.In reality, the ability of investors to make decisions is bounded and may lead to myopic investment decisions and consequently, suboptimal achievement of policy goals.The use of an agent-based modeling approach allows us to study the development of the electricity market under imperfect information and uncertainty.Moreover, the use of EMLab-Generation allows higher granularity in modeling the capacity market.This work also extends the research on the effectiveness of capacity markets in providing reliability in the presence of demand shocks resulting in load loss and a growing share of renewable energy in the supply mix.In the next section, we describe the EMLab-Generation model and its implications for implementing capacity markets.Section 3 describes the scenarios that we use.In Section 4, we present the results from our simulation of a capacity market implemented under various conditions.The conclusions are summarized in Section 5.EMLab Generation is an open-source agent-based model of interconnected electricity markets that was developed with the aim of analyzing the impact of various carbon, renewable and adequacy policies on the long-term development of electricity markets.EMLab-Generation model was developed at Delft University of Technology.Agent-based modeling utilizes a bottom-up approach in which key actors are modeled as ‘agents’ that make autonomous decisions, based on their interactions with the system and other agents in the model.The advantages of using ABM in modeling complex socio-technical systems has been discussed.In the context of electricity markets, ABM captures the complex interactions between energy producers and a dynamic environment.No assumptions regarding the aggregate response of the system to changes in policy are needed, as the output is the consequence of the actions of the agents.The main agents in this model are the power generation companies.They make decisions regarding bidding on the electricity market, investing in new generation capacity and dismantling existing power plants.Their decisions are based on factors endogenous to the model as well as exogenous factors.As the model is designed to analyze the long-term development of electricity markets, the simulation is run for a period of several decades, with a one-year time step.Power-plant investment decisions are based on expected net present value.There are 14 different power generation technologies available for the agents to choose from in the model.The attributes of the power generation technologies, such as operation maintenance costs and fuel efficiencies, are based on data from IEA World Energy Outlook 2011, New Policies Scenario.The assumptions regarding the power generation technologies are presented in Table 3 of the Appendix.Electricity demand in the model is represented as a load-duration curve developed which is 
based on empirical data and approximated by a step function with multiple segments of variable length.The advantages of using the load-duration curve approach in this model are described in.In this model demand is inelastic to price.The government sets annual targets for electricity generation from RES.In case the competitive generation companies do not invest enough in RES to meet the government target, a specific renewable energy investor will invest in the additional RES capacity needed to meet the target RES capacity, regardless of its costs.This way, the current subsidy-driven development of RES capacity is simulated.The variability or intermittency of renewables is approximated by varying the contribution of these technologies to the different segments of the load-duration function.The segment-dependent availability of RES is varied linearly from a high contribution to the base segments to a very low contribution to the highest peak segment.A detailed description of how intermittency is modeled is available in De Vries et al. and in Richstein et al.The power companies make price-volume bids for all power plants in their portfolios for each segment of the load-duration curve.The bids equal the variable cost and the available capacity of the underlying power plants.The electricity market is cleared for every segment of the load-duration curve in each time step.The market price for each segment is set by the highest clearing bid.If the supply is lower than demand, the clearing price for the segment is set to the value of lost load.This causes high price volatility; demand elasticity would dampen prices, which in turn might reduce the propensity toward investment.We consider an isolated electricity market.A detailed description of this model is available online in the EMLab-Generation technical report1 and other previously published work.The capacity market module in EMLab-Generation is a simplified representation of the NYISO-ICAP capacity market.We chose this for its relatively simple design, because it was one of the first capacity markets and because it is arguably meeting its policy goals.It is projected that no new resources would be required in the NYISO region till 2018.We start our description with the consumer side.Load-serving entities are obligated to purchase the volume of unforced capacity that has been assigned to them.UCAP is defined as the installed capacity adjusted for availability, as provided by the Generating Availability Data System.NYISO has defined two six-month capability periods during which it tests the maximum generation output of parties that have sold capacity credits: a Summer capability period and a Winter capability period.The NYISO determines the volume of unforced capacity that the load-serving entities must buy as a function of forecast peak load plus an Installed Reserve Margin, a security margin that is intended to limit the risk of generation shortfalls.The IRM is defined as the required excess capacity and is established such that the loss-of-load expectation is once in every ten years, or 0.1 days/year.The LOLE represents the probability that the supply would be lower than demand, expressed in time units.In NYISO, days per year are used.The LSEs do not actively purchase the required capacity themselves; instead, the ISO contracts the required capacity from the capacity market on behalf of load serving entities and passes the cost along to them.To this end, once per year, the ISO organizes mandatory auctions for capacity for the coming year.In 
these auctions, supply-side bids of capacity are cleared against a sloping demand curve, which is administratively determined by the ISO. The parameters of the sloping demand curve are reviewed every three years. Market parties are allowed to correct their positions in secondary markets. Imports are allowed to bid into the capacity market, provided that they adhere strictly to rules regarding transmission capability, electricity market bidding, and availability. Market parties are also allowed to conclude bilateral contracts. A detailed description of the market rules is available. In the capacity market module of EMLab-Generation, the capacity for the coming year is traded in a single annual auction and is administered by an agent whom we call the capacity market regulator. Users set the IRM, the capacity market price cap, and the parameters for generating the slope of the demand curve. The supply curve is based on the price–volume bid pairs submitted by the power generators for each of their active generation units. The agents calculate the volume component of their bids for a given year as the generation capacity of the given unit that is available in the peak segment of the load-duration curve. We use a marginal cost-based approach to calculate the bid price. For each power plant, the power producers calculate the expected revenues from the electricity market. If the generation unit is expected to earn adequate revenues from the electricity market to cover its fixed operating and maintenance costs, the bid price is set to zero, as no additional revenue from the capacity market is required to remain operational. For units that are not expected to make adequate revenues from the energy market to cover their fixed costs of remaining online, the bids reflect the difference between the fixed costs and the expected electricity market revenue, i.e. the minimum revenue that would be required to remain online. Renewable energy generators are allowed to sell capacity, but their UCAP is set equal to their contribution to peak load, which is only a small percentage in the case of solar and wind energy. The capacity market-clearing algorithm is based on the concept of uniform price clearing. The bids submitted by the power producers are sorted in ascending order by price and cleared against the above-described sloping demand curve. The units that clear the capacity market are paid the market-clearing price. When making investment decisions, both commissioning and decommissioning, the power generators take into account the expected revenues from the capacity market.
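The bid construction and uniform-price clearing just described can be summarised in a short sketch. The Python code below is an illustrative simplification with made-up plant data and an assumed piecewise-linear demand curve, not the EMLab-Generation implementation: each unit bids its peak-available capacity at max(0, fixed O&M cost − expected electricity market revenue), and bids are cleared in ascending price order against the sloping demand curve.

```python
# Illustrative simplification of the capacity market module described above
# (made-up plant data and demand-curve parameters; not the EMLab-Generation code).
def capacity_bid(fixed_om_cost, expected_em_revenue):
    """Bid price (EUR/MW/yr): the revenue still needed to stay online, floored at zero."""
    return max(0.0, fixed_om_cost - expected_em_revenue)

def demand_price(volume, target, price_cap, lower=0.025, upper=0.025):
    """Sloping demand curve: price cap up to the lower margin, zero beyond the upper margin."""
    lo, hi = target * (1 - lower), target * (1 + upper)
    if volume <= lo:
        return price_cap
    if volume >= hi:
        return 0.0
    return price_cap * (hi - volume) / (hi - lo)

def clear(bids, target, price_cap):
    """bids: (price EUR/MW/yr, peak-available capacity MW) pairs; uniform-price clearing."""
    cleared = 0.0
    for price, capacity in sorted(bids):                 # ascending price (merit) order
        if price > demand_price(cleared + capacity, target, price_cap):
            break                                        # this bid, and all above it, does not clear
        cleared += capacity
    return demand_price(cleared, target, price_cap), cleared

# Example: peak demand 100 GW, IRM 9.5% -> target 109 500 MW, price cap 60 000 EUR/MW/yr
bids = [(capacity_bid(35_000, 40_000), 80_000),   # baseload fleet: no missing money -> bids 0
        (capacity_bid(20_000, 5_000), 28_000),    # mid-merit fleet -> bids 15 000
        (capacity_bid(18_000, 1_000), 6_000)]     # peakers -> bid 17 000
price, volume = clear(bids, target=109_500, price_cap=60_000)
print(f"clearing price = {price:,.0f} EUR/MW/yr at {volume:,.0f} MW")
```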
start of the simulation run, their power generation portfolios consist of four conventional generation technologies: OCGT, CCGT, coal, and nuclear power.The energy producers may consider investing in other available technologies while making their investment decisions during the simulation period.The supply mix is roughly based on the portfolio of thermal generation units in Germany.We introduce a renewable energy policy that causes rapid growth in the share of intermittent renewable energy resources over the period of the simulation.The renewable energy trends are based on the German renewable energy action plan until 2020 and extrapolated after then.The price trends for the various fuels and demand growth are modeled stochastically, based on a triangular, mean-reverting probability function.The coal and gas prices are based on fossil fuel scenarios published by the Department of Energy and Climate Change.The biomass prices are based on Faaij.The initial load-duration function is based on 2010 ENTSO-E data for Germany.Demand grows by 1.5% per year on average.Estimating the value of lost load is difficult.The estimates of the value of lost load in the literature vary widely depending on the location and nature of the load.In this modeling, VOLL was chosen at the relatively low level of 2000 €/MWh.We also chose this level to take into account demand flexibility that might occur during periods of high prices.The scenario consists of a capacity market with a capacity maximum price of 60 000 €/MW per year.We assume that the capacity market regulator requires a reserve margin of 9.5%2 based on the NYISO-ICAP reserve margin requirement, which we lower to reflect the fact that we do not model generation outages.Lower and upper margins of 2.5% are introduced to generate a sloped demand curve.The parameters specified for each power generation company are - the look-forward period, the look-back period for making forecasts in the investment algorithm, the look-back period for dismantling, equity interest rate, loan interest rate, and equity to debt ratio.In the scenarios used for this research, 30% of the investment is financed with equity with an expected return on equity of 12%, and 70% is financed with debt at an interest rate of 9%.In the investment algorithm, power generation companies use a look-forward period of 7 years, while the lookback for forecasting is set at 5 years.In the case of dismantling the look-back period is 4 years.The values used were based on Richstein.We use the following indicators for the evaluation of the effectiveness of the capacity market:The average electricity price: the average electricity price over an entire run.Shortage hours: the average number of hours per year with scarcity prices, averaged over the entire run.The supply ratio: the ratio of available supply over peak demand.The cost of the capacity market: the cost incurred by consumers for contracting the mandated capacity credits from the capacity market, divided by the total units of electricity consumed.The cost to consumers: the sum of the electricity price, the cost from the capacity market and the cost of the renewable policy per unit of electricity consumed.3,Fig. 
3 provides an overview of the results of the simulation runs.The results are also presented numerically in Table 4 of the Appendix.At the start of the simulation run in the baseline scenario, we observe a decline in the supply ratio.This is caused by the dismantling of excess capacity that exists in the system due to the high supply ratio set in the initial scenario settings.Moreover, demand response is not considered in this study.The presence of even a small level of demand response would lead to considerable reduction in shortage hours observe in the baseline scenarios.We test the effectiveness of a capacity market in the absence of renewable energy policy by comparing it to the baseline case without a capacity market.In our model, the capacity market exceeds the adequacy goals: an average supply ratio of 1.11 is observed in the presence of a capacity market, which is 1.5% higher than the adequacy target of 9.5%.In this figure and others, the mean is indicated by a solid line, the average with a dashed line, the 50% confidence interval with a dark gray area and the 90% confidence interval with the lightly shaded area.The average capacity price is 36,496 €/MW.The observed overshoot in adequacy can be attributed to the configuration of the demand curve used in this analysis.The capacity market clears at a level where it becomes economically viable for excess idle capacity above the targeted IRM to remain available.The higher supply ratio that is induced by the capacity market leads to a reduction in the average number of shortage hours from 21.7 h/year in the baseline scenario to nil.The electricity price is 11% lower and volatility is also reduced, as can be seen in Fig. 5.The net cost to consumers increases slightly, as the lower electricity prices are offset by the capacity payments.The main impact of implementing a capacity market on the generation mix is a substantial increase in ‘peaker’ plants: on average there is 19.9 GW of OCGT capacity in the scenario with a capacity market as compared to 6.1 GW of OCGT in the baseline scenario.This is due to the low utilization rate of the last plants in the merit order in the presence of a capacity market.The revenue from the capacity market is sufficient for OCGT capacity to remain online even when these units have very little or no revenue from the electricity market.Fig. 
6 illustrates the development of OCGT generation capacity over the simulation.The presence of intermittent renewable energy generation in the supply mix reduces the supply ratio from 0.97 to 0.92 in the baseline scenario.As a result, the average numbers of hours of supply shortage more than double, from 21.7 to 62.6 h/year.The reason is that the presence of a high share of renewables in the system reduces the number of dispatch hours and therefore the revenues of thermal generators.This leads to a reduction in investment and causes the dismantling of some existing power plants that no longer receive adequate revenue from the electricity market.The higher number of shortage hours offsets the reduction in costs to consumers due RES.A capacity market can compensate for this effect.A supply ratio of 1.12 is maintained fairly consistently in the model, which is 2.5 - percentage points higher than the adequacy target of 9.5%, also in high RES scenarios.This overshoot indicates that the current configuration of the capacity market provides greater incentive than what is required to maintain the adequacy target.The average capacity market clearing price is 31,558 €/MW per year.We also observe that the capacity market is less volatile in terms of capacity prices in the presence of renewables as compared to the TM-CM scenario and that the average capacity price is lower.However, the additional cost of RES support leads to higher net costs to consumers in the RES scenarios as compared to the thermal-only scenarios.In our model, the presence of additional capacity eliminates shortages entirely.Consequently, the average electricity price declines by 24% in RES-CM, as compared to RES-BL.A significant reduction of electricity price volatility is also observed in RES-CM.The total cost to consumers is 9% lower in the presence of a capacity market in the high-RES scenario.To understand this reduction, we analyze the impact of a capacity market on electricity prices and the cost of renewable energy policy.The presence of a high supply ratio leads to a steep decline in shortages, which has a substantial damping effect on the electricity prices.However, the lower electricity prices increase the need for RES subsidy by 14% due to the lower electricity market revenues of the renewable generators.The cost savings from the electricity market, which stem mainly from avoiding outages, are larger than the costs of the capacity market plus the higher renewable energy subsidy.To provide insight on the effect of RES on the system, Fig. 9 illustrates the shares of different technologies in the generation mix of the system in both a case without and with a capacity market. 
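To make the clearing mechanism described above more concrete, the following sketch shows how a uniform-price capacity auction can be cleared against an administratively set sloping demand curve defined by a peak-load forecast, an IRM, lower and upper margins and a price cap. It is a minimal illustration in Python, not the EMLab-Generation implementation; the function names and the numbers in the usage example are invented for the purpose of the sketch, and the marginal bid is accepted in full rather than pro-rated.

def demand_curve_price(volume_mw, peak_load_mw, irm=0.095, lower_margin=0.025,
                       upper_margin=0.025, price_cap=60000.0):
    # Administrative demand curve: price cap below the lower bound, zero above
    # the upper bound, linear in between (volumes in MW, prices in EUR/MW/yr).
    lower = peak_load_mw * (1.0 + irm - lower_margin)
    upper = peak_load_mw * (1.0 + irm + upper_margin)
    if volume_mw <= lower:
        return price_cap
    if volume_mw >= upper:
        return 0.0
    return price_cap * (upper - volume_mw) / (upper - lower)

def clear_capacity_market(bids, peak_load_mw, **demand_params):
    # bids: list of (price, volume) pairs, accepted in ascending price order
    # while the bid price does not exceed the demand curve at the cumulative
    # volume. All accepted units receive the uniform clearing price.
    accepted = 0.0
    for price, volume in sorted(bids):
        if price <= demand_curve_price(accepted + volume, peak_load_mw, **demand_params):
            accepted += volume
        else:
            break
    return demand_curve_price(accepted, peak_load_mw, **demand_params), accepted

# Illustrative usage with hypothetical bids (EUR/MW/yr, MW):
bids = [(0.0, 40000.0), (12000.0, 8000.0), (25000.0, 6000.0), (55000.0, 5000.0)]
clearing_price, contracted_mw = clear_capacity_market(bids, peak_load_mw=50000.0)

With these illustrative numbers the auction contracts 54 GW at a clearing price of 48 000 €/MW per year; the clearing price is set by the intersection of the ascending bid stack with the sloped demand curve rather than by a fixed requirement, which is the feature discussed in the results above.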
In the scenario with a capacity market, the average annual electricity generation is 201 GWh more than in the baseline scenario. The additional supply eliminates the shortages that occur in the baseline scenario RES-BL. The installed capacity of the various generation technologies at the end of the simulation is presented in Table 7 in the Appendix. In this scenario, the capacity market mainly results in more investment in ‘peakers’. On average, the volume of OCGT capacity rises from 5.4 GW in the baseline scenario to 28 GW in the presence of a capacity market. The additional revenue from the capacity market is sufficient for additional OCGT capacity to remain online even when these units receive very little or no revenue from the electricity market. Due to the high share of renewables in the system, thermal units operate fewer hours than in a scenario without renewables. These conditions make OCGT plants more attractive for peak capacity. However, it also appears that the capacity requirement is set too high, given that plant outages are not simulated. Too high a margin would lead to investment in plant that rarely runs, in which case the choice for OCGT, as the technology with the lowest capital cost, is logical. Fig. 10 illustrates the development of OCGT capacity over the length of the simulation. The comparison of the scenarios with and without a growing share of renewables suggests two more observations. First, in neither scenario is the remuneration from the capacity market sufficient to stimulate investment in nuclear power. This finding suggests that countries that desire new investment in nuclear power will need to implement a support policy, as corroborated by the UK, which has a feed-in tariff for nuclear power in addition to its capacity market. Secondly, the average capacity market-clearing price is lower when there is more renewable energy generation capacity in the electricity system. As we allow renewable power producers to offer the peak-available capacity of their renewable resources to the capacity market, the presence of renewable energy generation capacity dampens capacity market prices, as renewables push out some of the expensive peak capacity from the capacity market. This effect depends on the assessment of the contribution of variable renewable energy to peak demand and on the way that renewable energy is treated in the capacity market. To understand the sensitivity of the model results to the assumed peak contribution of renewable energy generators, the model was also run in a configuration in which the contribution of intermittent renewables to the peak segment was set to zero and intermittent renewable energy generators therefore do not receive capacity credits. We observe a modest impact on the model results. In the baseline scenario with zero peak contribution of RES, higher average electricity prices are observed as compared to RES-BL, which is expected due to the reduction in available peak capacity. The implementation of a capacity market in a configuration with zero peak contribution of RES results in a supply ratio that is similar to the RES-CM scenario. There is an increase in the net cost to consumers, as no capacity from the renewable resources is traded on the capacity market, leading to a higher capacity-clearing price. The results of these runs are presented in Fig.
12. Establishing a strategic reserve is an alternative to implementing a capacity market. In earlier work, the effectiveness of a strategic reserve in the presence of a growing share of renewable energy in the supply mix was analyzed. To compare the results from the two capacity mechanisms and to maintain the consistency of all scenario settings, the model was run with a strategic reserve, while all other scenario parameters were kept the same as in RES-BL. In our model, both capacity mechanisms reduce the net cost to consumers in the presence of imperfect information and potentially myopic decision-making. However, unlike the strategic reserve, the effectiveness of the capacity market in providing the required reserve margin does not decrease with an increase in the share of intermittent renewable energy. Capacity markets should help avoid uneconomic investment cycles. A comparative analysis between the performance of strategic reserves and capacity markets in the context of interconnected power systems with cross-border effects was presented in Bhagwat et al. However, capacity mechanisms such as capacity subscriptions and reliability contracts, which were not included in this study, may also prove to be effective because they too control the total volume of capacity. Decentralized capacity mechanisms, such as capacity subscriptions, could be more effective in reducing free-riding, as consumers choose and pay for the adequacy level that they require. Reliability contracts may have a better operational performance with regard to mitigating market power as compared to a centralized capacity mechanism such as a capacity market. In EMLab-Generation, strategic behavior of generators, such as the exercise of market power, was not modeled. Correspondingly, consumer behavior was also not modeled. Therefore, capacity subscriptions and reliability contracts are outside of the scope of this research. As a sensitivity analysis, we assess the effectiveness of a capacity market with respect to differences in electricity demand growth and with demand shocks. We also test the impact of changes in several capacity market parameters, such as the targeted reserve margin, the capacity market price cap and the slope of the demand curve. Table 2 provides an overview of the scenarios for the sensitivity analysis. To evaluate the robustness of the capacity market with respect to demand growth uncertainty, model runs were performed with the four different demand development scenarios that are described in Table 2. All other parameters and scenario variables, including the growth of intermittent renewable sources, are the same as in the RES-CM scenario. The ability of a capacity market to meet its adequacy targets is not strongly affected by the average demand growth rate. A decline or no growth in demand, combined with a high share of renewables in the generation portfolio, leads thermal generators to submit higher bids in the capacity market, as they require greater remuneration from the capacity market to cover their fixed costs. Consequently, consumer costs are also higher as compared to scenarios with medium or high growth rates. A reserve margin of 11% is observed in the scenario with declining demand, which is higher than the required reserve margin target of 9.5% but still within the bounds of the upper margin. If demand growth is moderate or high, the revenues from the electricity market increase, which allows the generators to offer their capacity at a lower price to the capacity market, thereby damping capacity market
prices and reducing costs to consumers.While the average demand growth rate affects the net cost to consumers, the capacity market is robust enough to provide an adequate reserve margin under widely varying demand growth conditions.In a declining demand scenario, more support from the capacity market is needed to maintain a given supply ratio.The opposite is true in a high demand-growth scenario.The model was run with an IRM between 6% and 18% in increments of 3 percentage points.All other parameters were kept the same as in the RES-CM scenario.The results are illustrated in Fig. 16.The IRM targets are met.A higher IRM requirement leads to a higher capacity market clearing price and hence to an increase in the net cost to consumers.A well-designed capacity auction can be used to achieve any reserve margin, but high reserve margins increase the cost to consumers without a significant increase in the security of supply.However, an IRM that is too low may not be able to handle any unforeseen events, including demand shocks, and thus lead to an adverse impact on consumer costs.The capacity market price cap is the value at which the capacity market clears in the event that demand is higher than the available supply in the capacity market, and is expected to affect investment incentives.It has been suggested that the price cap should be set somewhat higher than the cost of new entry for the marginal generator.We changed the level of the capacity market price cap in increments of 20 k€/MW per year, while keeping all other scenario parameters the same as in the RES-CM scenario.The capacity market price cap impacts the slope of the demand curve A higher price cap makes the demand curve steeper, which has two implications.First, for the same volume of generation capacity, the market would clear at a higher price.Second, a steeper demand curve would make the capacity market price more sensitive to changes in capacity levels.We observe that the price cap has a significant impact on the volatility of the capacity market prices, as can be observed in Figs. 17 and 18.In all scenarios, the required reserve margin targets are achieved.The supply ratio in a scenario with a lower capacity price cap is more stable but lower on average than in the scenarios with higher price cap values.See Fig. 
19. If the price cap is set too low, the capacity market may not be able to provide adequate incentive to attain the IRM target. Thus, a price cap close to the cost of new entry indeed provides the required adequacy and also minimizes volatility in the capacity market. In the initial years of the scenarios with a price cap greater than 40 k€/MW, we observe a dip in the average capacity price, which can be attributed to a high capacity clearing price in the starting year caused by the initial scenario set-up. This causes an overshoot in generation capacity investment and thus a consequent dip in the capacity market clearing price when this capacity becomes available. Another design aspect that may affect the performance of a capacity market is the slope of the demand curve. As explained in Section 2.2, this is determined by the upper and lower margins. In this section, we increase the UM and LM in two increments of 2.5 percentage points. See scenarios 15–17 in Table 2. All other scenario parameters are kept the same as in the RES-CM scenario. As discussed before, a steeper demand curve makes the clearing price more sensitive to changes in the demand and supply of capacity, as compared to a gentler slope. No significant difference is seen in either the average supply ratio or the average capacity market-clearing price. However, the volatility of the capacity market prices declines with increasing values of the upper and lower margins. We modeled a demand shock to test the ability of a capacity market to cope with extreme events. The simulated demand trajectory is shown in Fig. 21. After 14 years of 1.5% demand growth, the system experiences a sudden drop in demand, followed by zero growth for several years. These trends are still the averages of 120 runs; individual runs may deviate significantly. Eventually, in the last 11 years of the simulation, demand grows again at 1.5%. This scenario simulates the impact of the 2008 economic crisis on electricity demand in Western Europe, with the assumption that demand growth eventually will return to its pre-crisis level. Fig.
22 shows that the sudden drop in demand followed by zero growth leads to a long cycle that continues up to year 30. The initial drop in demand in year 15 causes a sudden increase in the supply ratio. As demand growth does not rebound, we see a gradual dismantling of excess capacity over the next years. We also observe an increase in the volatility of capacity prices. The high supply margin after the demand drop causes the capacity price to fall. This causes an overshoot in dismantling and consequently a spike in the capacity prices as the supply ratio goes below the administratively set lower margin. This reinforces the investment cycle. In this scenario, the high IRM protects consumers from shortages, despite the investment cycle. However, in a system with a lower IRM requirement, these swings threaten security of supply. Thus the optimal level of the IRM depends on the expected volatility of electricity demand growth: the higher the uncertainty, the higher the IRM that is justified. The uncertainty about the magnitude of future demand changes and investment cycles poses a difficulty for the regulator: setting the margin too high will cost the consumers money, while setting it too low may result in shortages despite the implementation of a capacity market. However, the social cost of over investment is much smaller than the cost of shortage. To limit computational time, electricity demand is modeled as a segmented load-duration curve. As a result, the temporal relationship between different load hours is lost. Thus, short-term operational constraints such as ramping and unplanned shutdowns of power plants were ignored. Furthermore, due to the inflexibility of demand, the clearing prices in the electricity market are set either by the marginal generator or by the VOLL. These abstractions may cause underestimation of the effect that intermittent renewable generation has on the development of the electricity market. The effects of non-coincident renewable energy generation and peaks in demand are also lost. These modeling assumptions, along with the segmented nature of the load-duration curve, make capturing the short-term dynamics less precise and explain the overshoot in adequacy that is observed in the model results. However, they do not change the effect caused by investment that is based on extrapolation of historic trends and combined with construction delays. The purpose of the model is to simulate realistic imperfections in investment behavior. In this study, we focused on uncertainty and a demand shock in order to analyze the robustness of a capacity market under these conditions. However, the capacity market in the model is idealized. Real capacity markets are vastly more complex, and the many associated rules entail risks of regulatory failure. The model does not include policy uncertainty, which may have a substantial impact on investment decisions. There is no period of regulatory uncertainty around the introduction of the capacity market, nor are there incremental modifications to the capacity market in the model. Therefore the model simulates a well-functioning capacity market within a suboptimal electricity market. Network congestion and market power were left out of the scope. Therefore the dynamics that may arise due to the strategic behavior of various market participants, e.g., during shortages, are not captured. These effects may create further challenges for the implementation of a capacity market in practice. Demand response and storage have also been left out of the scope of this research because their impact is
limited currently.They have a stabilizing impact on electricity prices and may reduce the need for a capacity mechanism in the long term.We present a model of a capacity market in an isolated market with an ambitious renewable energy policy.While an energy-only market within an optimized investment equilibrium is optimal in theory, we show that a capacity market can be an effective remedy when less-than-optimal circumstances might lead to too little or too late investment in generation capacity.We simulate three types of conditions that may cause investment not to be optimal: imperfect information and uncertainty; a demand shock; and a growing share of renewable energy in the generation portfolio.Under these circumstances, a capacity market may provide a significant reduction in the number of shortage hours, as compared to an energy-only market.Due to the high social cost of outages relative to the limited additional investment in generation capacity, total social cost can be reduced.The net cost to consumers of a capacity market is sensitive to the growth rates of demand.In a declining demand scenario, higher support from the capacity market is required to maintain a given supply ratio.The opposite is true in a high demand-growth scenario.If the administratively determined reserve margin is high enough, security of supply is not significantly affected by uncertainty or a demand shock.Uncertainty about future demand and investment needs presents a difficulty for the regulator: setting the margin too high will cost consumers money, while setting it too low may result in shortages despite the implementation of a capacity market.However, the social cost of over investment is much smaller than the cost of shortage.Capacity markets mainly lead to more investment in low-cost peak generation units.It does not provide sufficient incentive for investment in nuclear power plants.Investment in nuclear power requires separate policy support, as is implemented in the UK.We also find that a lower price cap reduces capacity market price volatility without affecting its ability to reach the target IRM, as long as the price cap is above the cost of new entry.Therefore a capacity market price cap close to the cost of new entry should provide the required adequacy while minimizing capacity market price volatility.Extending the upper and lower margins of the demand function also reduces capacity market price volatility.A capacity market provides a more stable supply ratio and is therefore more effective than a strategic reserve in providing the required reserve margin, especially in the presence of a demand shock and a growing share of renewable energy.However, the EM-Lab model we used simulates a relatively ideal capacity market in the presence of imperfect investment behavior and markets.This leads to an optimistic assessment of capacity markets.Our results thus illustrate the potential benefits of a well-implemented capacity market without accounting for the inevitable complications, such as regulatory uncertainty or market power.
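As a compact illustration of the energy-market module described earlier, the sketch below clears each segment of a load-duration curve at the variable cost of the marginal unit, falls back to the value of lost load when supply is insufficient, and derives two of the indicators used above (average electricity price and shortage hours). It is a simplified Python sketch under stated assumptions, not the EMLab-Generation code; the segment and bid data are hypothetical, and consumer cost is approximated as price times demand in every segment.

def clear_segment(bids, demand_mw, voll=2000.0):
    # bids: list of (variable_cost_eur_per_mwh, available_mw); merit-order dispatch.
    supplied, price = 0.0, 0.0
    for cost, capacity in sorted(bids):
        if supplied >= demand_mw:
            break
        supplied += min(capacity, demand_mw - supplied)
        price = cost  # the marginal accepted unit sets the uniform price
    if supplied < demand_mw:
        price = voll  # scarcity: price set to the value of lost load
    return price, supplied

def yearly_indicators(segments, bids, voll=2000.0):
    # segments: list of (hours, demand_mw) approximating the load-duration curve.
    energy = cost = shortage_hours = 0.0
    for hours, demand in segments:
        price, supplied = clear_segment(bids, demand, voll)
        energy += demand * hours
        cost += price * demand * hours
        if supplied < demand:
            shortage_hours += hours
    return cost / energy, shortage_hours  # average price (EUR/MWh), hours/year

# Illustrative usage with hypothetical data (hours sum to 8760):
segments = [(6000.0, 40000.0), (2500.0, 55000.0), (260.0, 70000.0)]
bids = [(5.0, 20000.0), (35.0, 25000.0), (60.0, 15000.0), (110.0, 8000.0)]
avg_price, shortage = yearly_indicators(segments, bids)

In this toy example the highest-demand segment exceeds the installed capacity, so it clears at the VOLL and contributes 260 shortage hours, which is the mechanism behind the scarcity prices and shortage-hour statistics reported in the results.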
The effectiveness of a capacity market is analyzed by simulating three conditions that may cause suboptimal investment in electricity generation: imperfect information and uncertainty; a sudden demand shock; and a growing share of renewable energy sources in the generation portfolio. Implementation of a capacity market can improve supply adequacy and reduce consumer costs. It mainly leads to more investment in low-cost peak generation units. If the administratively determined reserve margin is high enough, security of supply is not significantly affected by uncertainty or demand shocks. A capacity market is found to be more effective than a strategic reserve for ensuring reliability.
746
Assessing private R&D spending in Europe for climate change mitigation technologies via patent data
Research and development spending in energy-related technologies is part of one of the key pillars of the European Energy Policy for 2020 and post-2020 frameworks . The combined effort of public institutions, academia and companies is required in order to accelerate the energy transition and to contribute to a cleaner, sustainable and secure energy system. In particular, the private sector, with its capacity and willingness to invest, plays a crucial role in this process . The Mission Innovation initiative also calls for a closer collaboration between companies and national governments in order to increase investments and boost innovation in the energy sector.3 Moreover, the Energy Union strategy has identified private R&D investment as one of the key indicators to show progress already made in the transition to a low-carbon, secure and competitive energy system and to design future actions . However, in order to mobilise private investments in specific technological and geographical areas via appropriate policies, policy makers need insights on how the private sector invests in R&D and what triggers or hinders this activity. But, as for other technology areas, the measurement of private R&D investment in the energy sector has proven to be difficult , due to lack of data availability, quality and granularity. Consequently, evidence is scarce and it is very challenging to effectively support the policy-making process when the latter needs to be tailored to specific technological or geographical areas. Therefore, there is a clear need to gain insights on private R&D investments, considering the central role of industry in carrying out and financing innovation in the energy sector. Few scientific studies have tried to estimate private R&D investment in the field of energy technologies. This can be attributed both to a lack of interest and of a mandate to do so, but, more importantly, to the lack of appropriate and readily accessible information sources. As a result, studies concentrate on specific technologies or pockets of activity, trying to derive insights from the best available datasets rather than building a methodology and information sources to address the entire sector. Two studies have contributed to this research line. Ref. proposes a four-step procedure, combining qualitative and quantitative information to assign R&D investment to companies active in more than one sector. The method uses several datasets but it introduces uncertainties due to lack of data coverage, resulting in several assumptions to fill gaps. Ref.
provides estimates of private R&D investment and R&D workforce through a top-down patent analysis.However, the assessment only focuses on photovoltaic-related R&D investments, overlooking other, still relevant, low carbon energy technologies.By strengthening and combining elements of these two approaches, this paper proposes a method to estimate private R&D expenditures, considering the geographical location of private companies, which may also be simultaneously active in multiple technology areas.The method consists of a multi-step procedure and it combines together a tailored patent analysis with companies’ financial information.Intermediate results give the opportunity to measure several aspects of the R&D activity in the private sector, detailed by year, technology area, sector and country.These are: unitary cost for a patent, R&D exposure to specific technology, total R&D spending and geographical distribution of R&D budget in multinational corporations.This method is applied to the European private sector and its activities related to research and development of climate change mitigation technologies.The methodology is scalable to other geographical areas and sectors.Accordingly, the main contribution of this paper is the proposal of a new methodological approach to estimate R&D expenditure in the private sector which provides a highly granular dataset of R&D activity not available otherwise.As a result, policy makers can be better informed on the private R&D spending, facilitating the whole policy cycle.The structure of the paper is the following.The research context is explained in the next section.It discusses why provision of R&D data from companies is scarce, and why patents could provide a valid proxy for the estimation of R&D spending, despite the controversial literature.Section 3 illustrates the way in which patent data are analysed, and presents the steps in the estimation procedure.Section 4 presents the data and discusses results concerning R&D expenditure in Europe related to CCMTs, and the geographical distribution of R&D investments in European MNCs.Closing remarks and future developments of the research are given in section 5.Dissemination of financial information by companies depends on their strategies and legal obligations.On the one hand, companies may be reluctant to disclose figures on the amount and destination of their R&D spending since it can unveil strategic choices .Thus, information is treated as confidential despite the fact that companies might benefit from announcing an increment in R&D expenditure, since it anticipates market growth opportunities , especially for those active in high-tech industries or in concentrated markets .On the other hand, publicly-traded companies are legally bound to produce and disclose detailed periodic statements on their economic performance, filed in compliance with formal and legal standards.However, private companies with limited liability of the shareholders, albeit requested to report their accounts, are subject to dissimilar requirements.4,In some cases, companies may even be exempt from any obligation.It follows that the sole source of information on private R&D investment consists of annual reports and financial statements provided by listed companies, meaning that small and medium enterprises are underrepresented, even though they are recognised as important players in the innovation process .The main drawback of this information is the lack of provision of insights on the allocation of private R&D, making it very 
difficult to split private investments among activities, especially when these impact multiple sectors ."Furthermore, in large corporations, it is also difficult to assess R&D contribution of associated subsidiaries, since MNCs only report the group's consolidated financial statement.To work around all these difficulties and to compose a richer dataset related to private investments, this paper proposes a methodological approach which is based on the assumption that a relationship exists between patents and R&D expenditure , and the assumption is also valid in the energy sector .Patent data are used as a proxy of invention and the objective is to estimate R&D expenditure through the inventive activity of companies.This method combines together elements of two approaches in a more systematic way , trying to expand their range of applicability and to avoid uncertainties introduced due to lack of consistent data, expert judgement, or assumptions.As in the top-down approach proposed in Ref. , we estimate a cost per patent which is then used to quantify R&D expenditure for private companies.In our method, the cost per patent is estimated considering several cost determinants, therefore requiring a more elaborated procedure compared to the cited approach.Nevertheless, this method extends the scope of the analysis to more technologies and reveals differences among countries over time.In addition, we strengthen the third step of the bottom-up R&D estimation process in the field of low-carbon energy technologies proposes in Ref. ."Here a four-step procedure is proposed, including identification of key industrial players, gathering relative information on total R&D investments, allocation of R&D investments to energy technologies for each player, and summing up individual company's R&D investment by technology.While Ref. 
uses qualitative information and/or proxy-indicators to assign R&D investment to those companies that are active in more than one technological field, the approach presented in this paper uses primarily patent statistics for this purpose.This is because patent data provide quantitative evidence that reduce uncertainties due to assumptions during data collection.However, patent data are complex and their use as proxy of inventive activity and technological progress generates controversy among the scientific community and lack of consensus between opponents and advocates .Patents are a mean to protect inventions , but not all inventions are patented .The decision to patent is dependent on business strategies and on the possible industrial application of the invention .Further, patent propensity varies across countries and industries .Patents are assets for organisations, hence they can be acquired or licensed depending on the agreement between parties .For example, data show that IBM, the American technology company, tripled the number of patents in the period from 2011 to 2013 while the R&D expenditure slightly declined during the same time span.Hence, higher R&D does not automatically imply higher patenting activity, as proven for the pharmaceutical sector, but it might determine higher patent quality .Companies that are looking for external investments have more incentives to patent, since these are now becoming important financial signals for investors .Patents play a dual role in the innovation process, either as an incentive or as an obstacle .This dual perception is fuelling the debate on the role of patents in energy-related inventions .On the one hand patents are considered as a mean to increase market opportunities, which create stimuli for companies towards incremental innovations."On the other hand, since patents prevent others to use such protected inventions, diffusion of climate friendly technologies is influenced by companies' strategies.This could have a negative impact on global efforts against climate change if market opportunities are not expected in all areas worldwide.Therefore, using patent statistics requires careful consideration and interpretation of the data.However, acknowledging these controversies, patents remain a “measurable” proxy of the inventive work , p.7] and they represent a very rich set of information which should be exploited ."In fact, they are often used to evaluate country's and industry's technological performance , technology diffusion and technology transfer and internationalisation of technology .Nevertheless, it is required to be as transparent as possible in the use of patent statistics since there are multiple ways to perform patent analysis .Accordingly, section 3.1 below presents the way in which patent data are used in order to feed the multi-step procedure proposed in this paper.The methodology presented in this paper measures different aspects of R&D activity.In particular, by focusing on the activity of MNCs it analyses globalisation or concentration of R&D and how investments flow from parent companies to their subsidiaries .Previous studies have found that there are several determinants of the decision to establish and support a subsidiary in another country : better policy regimes , market opportunities , knowledge spillovers , increasing collaborations and networking opportunities , and ensuring knowledge and technology transfer .In the context of MNCs, the role of national policies is also crucial since regulators must consider 
that internal policies may determine the activity of companies that are active across borders.Therefore, it is important to provide robust and detailed evidence on how MNCs allocate their R&D budget and to study their level of internationalisation.This is also relevant to find evidence concerning national systems of innovation , their level of sectorial-dependency and technological specialisation .For this study, patent data are retrieved from PatStat 2018 Spring Edition, the worldwide patent statistical database created and maintained by the European Patent Office.Given the issue of data “accuracy and completeness” in PatStat , the automatic data clean-up process is applied in order to eliminate blank entries, typos, errors and inconsistencies .To restrict the analysis to the CCMTs sector, the Y02 and Y04 schemas of the Cooperative Patent Classification are used since these codes identify all patents related to CCMTs .Table 1 below summarises the CCMTs analysed in this paper.Given the focus of this research, patent analysis is restricted to patent applicants since they are considered the owners of the patent and, thus, those directly investing and financing R&D activities.Applicants are categorised as organisations or as physical persons.Applicants can apply for one or more patent and patent applications are submitted to patent offices following different routes.Applications to all offices are considered, with no restrictions regarding national or international route, which are therefore treated with the same level of relevance; the main reason for this lies in the fact that the focus is on where and when the R&D activity is financed rather than where and when an applicant seeks protection for the invention.Together with the filing date, a priority date is also assigned to the patent application, which corresponds to the filing date of the earliest application in the patent family.A patent family5 is a group of patent applications, which share the same priority and so they refer to the same invention .In other words, a patent family defines one invention that is composed by one or more patent application.The earliest patent of the family sets the priority date of the invention itself and we assume this date as the date in which the inventive activity started, hence financed.Subsequent patents belonging to the family, filed at a later stage, define the duration of the inventive activity.Thus, the use of the priority year as the moment in which R&D is financed implicitly considers the time-lag between R&D expenditure and invention/patent production .The inventive activity can involve more than one applicant and can target the development of one or more aspects of a technology.In order to estimate proportionally the effort committed by each participant to the development of each technology aspect, the well-established technique of fractional counting is used .6,The sum of the fractional contributions attributed to applicants quantifies their total inventive activity, and can be detailed by specific technology areas, country of provenance or applicant categorisation."The company's technology fractional is used as a proxy of inventions produced and financed by the company itself.In other words, the patent analysis results in a list of all companies active in developing CCMTs and it quantifies the number of inventions.This information is the starting point of the R&D estimation procedure described in the next section.In conclusion, the proposed estimation procedure provides a method to address 
the lack of R&D data broken down by company and technology, especially for those MNCs that include subsidiaries located around the globe. This methodology uses patent statistics that allow for the inclusion of cross-country, cross-sector and cross-technology heterogeneity in the mathematical formulation. The two processes (Eqs. – and –) are complementary and together they allow the estimation of the R&D expenditure for the full list of companies active in the sector of CCMTs, detailed per technology and year. The two parts of the estimation procedure calculate two distinct unitary expenditures per invention that are used to estimate R&D per company and technology. The first one is the sector unitary expenditure. It considers the specificity of MNCs, such as the residence country of the parent company as well as the sector of economic activity, and this value is associated to all subsidiaries. The second is the technology unitary expenditure, which captures two more cost determinants, in this case related to subsidiaries: residence country and technology area. This value is then associated to the remaining companies. Therefore, in the mathematical procedure the subsidiaries' heterogeneity, in terms of technologies and countries, is initially neglected in order to reach a homogeneous country- and sector-level unitary expenditure for MNCs, which is then used to estimate heterogeneous country-technology levels of expenditure. The total value of R&D increased over time, confirming the increasing propensity of the European private sector to invest in activities related to CCMTs. In total, from 2003 to 2014 European companies have invested about 250 €bn in R&D activities concerning CCMTs. European companies that are active in the CCMTs sector have consistently invested more than 60% of the CCMT R&D budget to develop technologies related to Energy and Transportation. Interestingly, the R&D invested in Transportation is higher than that in Energy, despite the fact that the number of companies in this sector is lower. This implies that the R&D effort per company changes in relation to technology: in 2014, on average, a company active in Transportation spent about 10.5 €mn, in comparison to the 3.5 €mn spent by a company active in Energy. Among European countries, companies resident in Germany have the highest total R&D expenditure, about three times that of the private sector in France, the second largest R&D investor. The countries' portfolios of private investments in CCMTs vary and show the technological specialisation and strengths of their national systems of innovation . Accordingly, the private sector in Austria invests about 60% of its R&D budget in activities related to Transportation, followed by France and Germany, which also devote the largest proportions of their spending to this area. Denmark and Spain have their maximum share of R&D expenditure in Energy, showing an inclination to invest more where their level of specialisation is higher. A way to validate the methodology is to check whether the estimation procedure provides comparable results to already existing analyses. Ref.
estimate 1.9 €bn from the private sector in the 28 European Member States only and with a focus on the SET-Plan priority energy technologies, which mainly include renewable energy technologies.The authors also find that 90% of the total R&D investment is concentrated in six countries: Germany, France, UK, Denmark, Spain and Sweden.Our method, instead, estimates 4.2 €bn invested by 38 countries in the Energy-related CCMTs area, which includes more technologies than RETs.However, when this method is restricted to EU28 and to RETs, it estimates 1.9 €bn, directly comparable with previous evidences, and it also indicates that the same six European Member States account for 82% of the total investment .A second comparison can be done on the estimated cost per invention.Table 2 shows technology unitary costs, as calculated in Eq.These values represent the estimated effort in terms of R&D expenditure by the European corporate sector to produce one invention in a specific technological area.They indicate that inventive activities require a different R&D effort, in relation to technology tackled and year.Apart from the individual unitary expenditure per technology area, although important to evaluate the learning process, the average R&D cost per inventive activity in CCMTs is estimated to be about 3.3 €mn.Other studies have also tried to measure an average cost per patent.Ref. calculates 3.24 €mn as the average R&D investment per patent family in the PV industry.Ref. estimates that the R&D unitary expenditure per patent in the manufacturing sector in Germany, France and Italy is about 4.89 US$mn.10,Ref. reports the estimation of the R&D cost per patent awarded by the top four patenting companies in three different sectors in USA: manufacturing, electronics and pharmaceutical.This value is about 3.8 US$mn.11,Even though these estimates are derived from different assumptions and analyses and they cover different sectors, they are comparable, in order of magnitude, from the one found in this study.In the period 2010–2014, there were 442 European MNCs financing activities related to CCMTs, classified in distinct ICB sectors.Fig. 5 shows the share of the total CCMTs R&D expenditure allocated to different technology areas by ICB sector.This analysis confirms that, although MNCs are active in multiple CCMTs sectors, they mainly maintain their sectoral specialisation.European MNCs whose sector of economic activity is Aerospace & Defence or Automobile & Parts dedicate more than 80% of their CCMTs budget to activities related to Transportation.Similarly, MNCs active in the Alternative Energy or Electricity sectors allocates respectively 82% and 71% of their R&D to Energy.The ICT technology area is the one preferred by related ICB sectors, such as Fixed Line & Telecommunication, Media and Technology Hardware & Equipment.MNCs in Households Goods & Home Construction sector invest about 68% of CCMTs R&D budget into the Building technology area.Other sectors, instead, given their heterogeneous nature of economic activity, invest across all CCMTs areas.This is the case of, among others, General Industries, Food Producers and Support Services.Another way to evaluate R&D spending in MNCs is to assess their exposure to CCMTs, which is the share of the estimated R&D budget allocated by MNCs towards CCMTs over the total R&D investment declared in their financial statements.Fig. 
6 shows the sector exposure of European MNCs, together with the relative unitary cost.There is a high level of heterogeneity among sectors.European MNCs active in the Alternative Energy industry, on average, allocate more than 70% of their overall R&D budget to CCMTs.These MNCs, are considered mono-technology companies, meaning that R&D activity is dedicated almost entirely to the CCMTs sector, resulting with a very high exposure.On the contrary, multi-technology corporations, such as those in the Automobile & Parts industry, have lower exposure to CCMTs, meaning that this sector could put more effort towards environmentally friendly technologies, such as increasing R&D activity related to less polluting fuels or development of electric vehicles.Only two other sectors, namely Electricity and Gas, Water & Multiutilities, have exposure to CCMTs higher than 40%, on average.This is consistent with the fact that activities in these sectors are somehow related to CCMTs in contrast to other sectors like, for example, Pharmaceuticals & Biotechnology, which, being very distant to CCMTs-related fields, also shows the highest sector unitary cost.Considering the residence of MNCs, the R&D exposure to CCMTs can be also calculated at country level.As shown by the map on the left in Fig. 7, it varies substantially among European countries.On average, European countries have an R&D exposure to CCMTs equal to 12%, and, among these, only Austria and Denmark have an exposure higher than 20%.In particular, Denmark is the European country with the highest share of exposure to CCMTs, about 41%.This result is explained by the fact that the core business of the two most important MNCs located in Denmark is highly energy-oriented with a strong renewable energy component, or, in other words, since these two MNCs are mono-technology, the overall country exposure is relatively very high.In MNCs, the strategic decision to allocate R&D investment to subsidiaries in different locations depends on characteristics of the destination-country and it is technology- or sector-dependent .In other words, country attractiveness varies in relation to CCMTs.Table 3 shows, for each technology area, the five most prominent destination countries for the R&D investment in CCMTs allocated by MNCs resident in different countries.Germany is the favourite country in five of the nine areas, with very high share in Energy and Transportation.Switzerland is the top destination-country related to CCS and ICT, where more than 70% and 30% of R&D investment is allocated respectively.In the remaining technology areas, other countries also have elevated high level of attractiveness: Belgium and Italy are, respectively, the most targeted country in respect to Adaptation and Waste.Given the lack of data regarding private R&D spending, particularly for companies active in multiple technology areas simultaneously, and driven by the clear need of this information to provide more robust quantitative analyses and evidence, this paper proposes a methodological approach to estimate private R&D investments.The method is based on a multi-step procedure.The proposed circular feedback between patent data and companies’ information improves the quality and quantity of data and allows the assessment of R&D spending for both large firms and small and medium enterprises.Results are consistent with previous works, as shown by contrasting R&D-related indicators.Furthermore, the resulting information set can be used to analyse aspects of R&D investment with respect to 
time, industrial sector, country and technology.In the context of this paper, the method is applied to assess the R&D activity of the private sector in Europe in the area of climate change mitigation technologies.Nevertheless, the estimation procedure is scalable to other research fields, with no specific restrictions on the nature of the companies, type of business or technology area.In the period 2003–2014, the European private sector has made considerable efforts to increase R&D towards CCMTs: about 250 €bn have been invested in total.The most targeted R&D activities regard CCMTs related to energy generation, transmission and distribution and those related to transportation.The private sector in Germany is the one investing the most, followed by the one in France and in the United Kingdom.Each country invests mainly where its level of specialisation towards CCMTs is higher.The analysis of European multinational corporations in the context of CCMTs has highlighted the propensity to allocate large share of R&D budget to subsidiaries located in the same country of their parent company.By studying the flow of investments that crosses national borders, it emerges that countries interact mostly when there is geographical proximity and when there is a common technology specialisation.Despite the advancements in estimating private R&D, there are still some key aspects that need to be considered for further improvement.Only companies present in PatStat are included in the process.This means that if a company is performing R&D activities in the CCMTs sector, but it has not patented any invention, this company does not feature in the analysis.Although this might create biases, these are assumed not to be significant since the major investors, especially for those foreseeing market opportunity, are very prone to patent and considered in the estimation procedure.There is also a need to improve the way in which the structure of MNCs is defined, that is the recognition of subsidiaries in all corporations.Currently this is done manually but it is potentially subject to errors since it is based only on the names of patentees as they are given in PatStat.Combining different data sources together is an option.For example, it would be sufficient to connect via identification numbers the ORBIS database with the PatStat database to considerably improve the quality of this assessment and of the overall estimation procedure.The latter, in effect, is one of the limitations of this approach, together with others that require further discussion.As stated before, MNCs are manually built through the grouping exercise, which tries to link subsidiaries to the respective parent company.This activity impacts the first part of the estimation procedure, namely the calculation of the sector unitary expenditure.The more subsidiaries recognised as subordinated of a MNC, the higher the accuracy of this calculation.In fact, it is assumed that the entire R&D budget of a MNC is allocated to develop the full portfolio of inventions, which is the sum of inventions of all subsidiaries.However, this assumption also has another limitation since it is not always true that the entire R&D expenditure is allocated to the production of technology inventions.Other limitations relate to the first part of the procedure, which focuses on the calculation of the sector unitary expenditure.There are two important assumptions concerning this value.The first is that, in a country and in a sector of economic activity, the same value of unitary 
expenditure is assigned to inventions in different technology area, either if they concerns CCMTs or other fields.A follow-up of this research would be recommended to assign a weight to technology areas where patenting activity may require higher R&D intensity than in others.The second concerns the assumption that the sector unitary cost is assigned to all subsidiaries that are subordinated of MNCs located in a specific country and active in a specific sector, with no consideration of subsidiaries’ heterogeneity, namely their technology or sector specialisation or their country of residence.It would be important to investigate the effect of these characteristics in future improvements of this estimation procedure.The opinions expressed in this paper are the authors’ alone and cannot be attributed to the European Commission.
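To make the two building blocks of the estimation procedure more tangible, the sketch below combines fractional counting of patent families over applicants and technology areas with a unitary expenditure per invention, yielding an R&D estimate per company and technology. It is a minimal Python illustration of the general idea only; the data structures, company names, CPC codes and unitary costs in the example are hypothetical, and the actual procedure described in the paper involves further steps, such as the separate sector and technology unitary expenditures derived for MNCs and their subsidiaries.

from collections import defaultdict

def fractional_counts(patent_families):
    # Each family carries a total weight of 1, split equally over its
    # applicants and the technology areas it is tagged with.
    counts = defaultdict(float)  # (applicant, technology) -> fractional inventions
    for family in patent_families:
        share = 1.0 / (len(family["applicants"]) * len(family["technologies"]))
        for applicant in family["applicants"]:
            for technology in family["technologies"]:
                counts[(applicant, technology)] += share
    return counts

def estimate_rd(counts, unitary_expenditure):
    # Multiply fractional inventions by a unitary R&D expenditure per invention
    # (here given per technology area) to approximate R&D spending.
    rd = defaultdict(float)
    for (applicant, technology), n in counts.items():
        rd[(applicant, technology)] = n * unitary_expenditure[technology]
    return rd

# Illustrative usage with hypothetical families and unitary costs (EUR per invention):
families = [
    {"applicants": ["Company A", "Company B"], "technologies": ["Y02E", "Y02T"]},
    {"applicants": ["Company A"], "technologies": ["Y02E"]},
]
rd = estimate_rd(fractional_counts(families), {"Y02E": 3.3e6, "Y02T": 4.0e6})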
Detailed information on research and development (R&D) spending of the private sector is very limited, particularly when the interest is on small and medium enterprises or focuses on companies active in multiple technology areas. This lack of data poses challenges on the robustness of quantitative analyses and, as a consequence, on the reliability of evidences needed, for example, to support policy-makers in policy design. This paper proposes a patent-based method to estimate R&D expenditure in the private sector. This approach is then applied to assess private R&D spending in Europe in the context of climate change mitigation technologies. Findings show that R&D strategies in Europe are both country and technology dependent. Furthermore, R&D budget does not easily flow across national borders, but, when it is allocated to other countries, it is mainly done because of geographic or technological proximity.
747
Johnson-Cook parameter evaluation from ballistic impact data via iterative FEM modelling
It is well known that metals exhibit strain rate-dependent plastic flow behaviour. The mechanisms responsible for this sensitivity are known, but it cannot be predicted in any fundamental way and it is not easy to capture experimentally. Nevertheless, simulation of this strain rate dependence is essential for FEM modelling of many situations involving high strain rates. Since such modelling is routinely carried out on a massive global scale, often with little scope for cross-checking of its reliability, validation of the procedures, and accurate quantification of the characteristics for a range of materials, are very important. It may, however, be noted that, for purposes of establishing the value of the strain rate sensitivity parameter, C, it's not essential to use an analytical expression for the first term of the constitutive equation. Since this σ(ε) relationship is taken to be fixed, an experimentally-obtained set of data pairs can be employed instead of a functional relationship. This is done in the present work, since it turned out to be difficult to represent the plasticity accurately with an analytical expression. The equation as a whole is considered to provide a fairly realistic representation of the behaviour, at least in the regime below that in which shock waves are likely to have a significant effect. Several minor variations to the Johnson–Cook formulation have been put forward, but the basic form is in general considered to be quite reliable. It should nevertheless be recognised that it is essentially just an empirical expression, and also that several other formulations have been proposed. There have certainly been criticisms of the J-C formulation. Moreover, a recent investigation has covered the effect of the exact shape of the yield envelope, in a study oriented towards FEM simulation of ballistic impact. Conventional mechanical testing procedures have certain limitations for this purpose and the maximum strain rates achievable in a controlled way during uniaxial tensile or compression testing are below those that are often of interest. Jordan et al. reported on such testing carried out at strain rates of up to 10⁴ s⁻¹, although displacements were measured on the cross-head and are unlikely to have been very accurate at such high strain rates. These authors tested as-received and annealed copper samples, reporting that an increase in the strain rate by a factor of 10⁶ raised the flow stress by factors of ∼30% and 100% respectively. These values may not be very reliable, although it is plausible that softer material may be more susceptible to strain rate hardening. In practice, the split Hopkinson bar test is commonly employed, and can create relatively high strain rates, although it is subject to significant levels of uncertainty, arising from various sources. There is also the Taylor cylinder test, although this is similar in concept to the SHB test and subject to the same type of limitations. Despite these issues, there have been many publications reporting on outcomes of testing of this type, often involving evaluation of C. Values obtained in this way include 0.001 for a 1000 series Al alloy, 0.023 for a 7000 series Al alloy, 0.039 for an AISI-1018 low carbon steel and 0.011 for an AISI-4340 low alloy steel, 0.048 for an AISI-1018 low carbon steel and 0.009 for an ultra-fine grained copper. It's difficult to rationalise such outcomes in any way and even this small cross-section of results indicates that they are not always consistent.
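The equation numbers referred to in this text have not survived extraction, so it may help to restate the constitutive law under discussion. The expression below is the standard form of the Johnson–Cook relation; the symbols for the reference strain rate and the temperatures follow common usage and are assumptions rather than the paper's own notation, and the first (hardening) bracket is the term that the present work replaces by tabulated data pairs.

```latex
% Standard Johnson-Cook flow stress (hardening bracket replaced here by tabulated data pairs)
\[
\sigma \;=\; \left(A + B\,\varepsilon^{\,n}\right)
       \left[1 + C \ln\!\left(\frac{\dot{\varepsilon}}{\dot{\varepsilon}_{0}}\right)\right]
       \left[1 - \left(\frac{T - T_{0}}{T_{m} - T_{0}}\right)^{m}\right]
\]
```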
"Nevertheless, it's clear that some metals exhibit considerably greater strain rate sensitivity than others.There have been many publications involving use of FEM for simulation of impact testing procedures, with the objective of investigating the strain rate sensitivity of the material.These include FEM studies of the SHB and similar types of test.Furthermore, several studies have involved FEM simulation of the impact of a sharp indenter to explore the strain rate dependence of the yield stress.However, none of these studies has treated the evolving stress and strain fields in detail, many do not take account of temperature changes during the test and few involve any systematic iterative FEM: the focus has tended to be on effective average strain rates during tests.Nevertheless, the inverse FEM method is a potentially powerful technique.Its application to ballistic indentation is particularly attractive, since a wide range of local strain rates can be generated and there are few uncertainties in the formulation of the model.Furthermore, the requirements in terms of sample dimensions and preparation procedures are less demanding.It is also potentially easier than SHB-type tests in terms of experimental implementation.One of the challenges in implementing iterative FEM procedures concerns the methodology for converging on an optimied set of parameter values."With multiple parameters, this can be complex, although it's clear that the “goodness-of-fit” between modelled and experimental outcomes should be quantified and progress has been made on systematic optimization of this for instrumented indentation with spheres, aimed at characterizing quasi-static plasticity.It can be seen from Eq. that, when employing the J-C constitutive law, only one parameter needs to be evaluated, provided that the quasi-static plasticity parameters have already been obtained.This simplifies the convergence procedure considerably.An overall methodology is proposed in the current work and illustrated via its implementation for two different materials, using two outcomes for the optimization procedure.Two materials were employed in the present investigation.An extruded OFHC copper bar was used both in the as-received state and after an annealing treatment of 2 h at 800 °C, in an inert atmosphere.This treatment caused recrystallization and hence a substantial drop in the hardness of the material.Both conventional uniaxial compression testing and dynamic indentation were carried out on both materials, along the extrusion axis.The grain structures are shown in Fig. 
It can be seen that the grain size was of the order of 30–50 µm in the as-received material, but had coarsened to about 300–400 µm after annealing. Some annealing twins are also present. Such grain structures, which are far from uncommon, present challenges in terms of using indentation to obtain properties, since it's clear that these can only be obtained by mechanically interrogating a representative volume. The indents were therefore created using relatively large cermet spheres, obtained from Bearing Warehouse Ltd. The resultant indent size depends, of course, on both the impact velocity and the sample hardness, but in general they were of the order of 1 mm in diameter, so multi-grain volumes were being tested in all cases. Of course, projectiles in this size range are common, and much smaller ones would have been difficult to use, but instrumented indentation is frequently carried out on a much finer scale than this. For present purposes, it's important to be able to simulate the stress-strain curve over a wide range of strain. This is well beyond the levels to which conventional uniaxial testing can be carried out. This is not such a problem for the as-received material, since the rate of further work hardening is low and the flow stress will tend to remain approximately constant up to large strains. For the annealed material, however, the initial work hardening rate is high and extrapolating this behaviour to strains beyond the measurable regime is subject to considerable error. This problem was tackled by applying three swaging operations to the annealed material, each inducing a well-defined level of plastic strain, extending up to about 200%. These materials were tested in compression and the yield stress was taken as a flow stress level for the annealed material at the strain concerned. This allowed the stress-strain curve to be simulated over the complete strain range of interest. In order to obtain the “correct” plasticity parameter values for these two materials, samples were subjected to uniaxial compression testing between rigid platens. Cylindrical specimens were tested at room temperature, using MoS₂ lubricant to minimise barrelling. Displacements were measured using an eddy current gauge, with a resolution of about 25 µm. Testing was carried out under displacement control, using an Instron 5562 screw-driven testing machine, with a load cell having a capacity of 30 kN. The strain rate generated during these tests, which was taken to be the reference rate for use in the J-C equation, was thus about 5.5 × 10⁻³ s⁻¹. Tests were done up to displacements of about 1.5 mm, so that each test took about 45 s to complete. It was confirmed that barrelling was negligible over this strain range. Tests were carried out over a range of temperature, up to 300 °C.
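As a minimal illustration of the data reduction implied by plotting these tests as both nominal and true values, the sketch below converts nominal (engineering) compression data to true stress and true strain, assuming uniform deformation and volume conservation (negligible barrelling); the numerical values are placeholders, not measured data.

```python
import numpy as np

# Nominal (engineering) compressive strain and stress -- placeholder values, not measured data
eps_nom = np.array([0.00, 0.05, 0.10, 0.15, 0.20])       # (L0 - L) / L0
sig_nom = np.array([0.0, 180.0, 230.0, 260.0, 280.0])    # F / A0, in MPa

# Assuming uniform deformation and constant volume (A * L = A0 * L0):
# true strain = ln(L0 / L) = -ln(1 - eps_nom); true stress = F / A = sig_nom * (1 - eps_nom)
eps_true = -np.log(1.0 - eps_nom)
sig_true = sig_nom * (1.0 - eps_nom)

for en, et, st in zip(eps_nom, eps_true, sig_true):
    print(f"nominal strain {en:.2f} -> true strain {et:.3f}, true stress {st:.1f} MPa")
```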
The set-up employed is depicted in Fig. 2. The gas gun used is based on three coaxial components - a 2 m barrel and two high-pressure chambers. The barrel is separated from one high-pressure chamber by a thin Cu membrane, with a similar membrane between it and a second chamber. Both chambers are filled with nitrogen, with pressure drops between barrel and first pressure chamber, and between the two pressure chambers, both set to values insufficient to burst the membranes. The first chamber is then evacuated, creating pressure differences across both membranes that are sufficient to cause bursting. The expanding gas then drives the projectile, held inside an HDPE sabot, along the barrel of the gun. At the end of the barrel the sabot is stripped from the projectile by a "sabot stripper", so that only the projectile strikes the sample. The impact velocity is controlled, at least approximately, via manipulation of the thickness of the membranes and the pressure in the chambers. All impacts were at normal incidence, with samples rigidly supported at the rear, employing impact speeds in the range 50–300 m s⁻¹. The samples were obtained from the copper rods by machining into cylinders of diameter 25 mm and height 30 mm. It may be noted that it was found to be important to secure the sample rigidly on its rear surface. The modelling covered everything happening within the sample, but one of the boundary conditions was that it was supported on an immoveable surface and it was important to ensure that this condition was closely approached in practice. A massive, rigidly-held steel plate was used to provide this support. A Phantom V12.1 high-speed camera was used to record impact events, with a time resolution of ∼1.4 µs and exposure time of 0.285 µs. Linear spatial resolution of ∼50 µm per pixel was achieved and images comprised 128 × 24 pixels. From video sequences and known calibration factors, time-displacement histories were then extracted for the projectile motion, with attention being focussed on the location of the rear of the projectile. A Taylor Hobson profilometer, with a wide-range inductive gauge and 20 µm radius cone recess tip, was used to measure residual indent profiles. Scans were carried out in two perpendicular directions, both through the central axis of the indent. The height resolution of these scans is about 25 µm. Tilt correction functions were applied to the raw data, based on the far-field parts of the scan being parallel. The average profile from the two orthogonal scans was used in the g-screening exercise. An axi-symmetric FEM model for simulation of impact and rebound was built within the Abaqus package. Both projectile and target were modelled as deformable bodies and meshed with first order quadrilateral and/or triangular elements. The projectile is expected to remain elastic throughout, although it can be important in high precision work of this nature not to treat it as a rigid body: not only is it possible for its elastic deformation to make a significant contribution to the overall displacement, but its lateral Poisson expansion could affect the outcome, particularly if attention is being focused on the shape of the residual impression. Of course, such modelling also allows a check to be made on whether there is any danger of the projectile being plastically deformed. The volume elements in the model were CAX4RT types, with about 5000 elements in the sample and about 2000 in the projectile. Meshes were refined in regions of the sample close to the indenter. Sensitivity analyses confirmed that the meshes employed were sufficiently
fine to achieve convergence, numerical stability and mesh-independent results. The complete sample was included in the simulation, with its rear surface rigidly fixed in place. In modelling the complete sample, contributions to the displacement caused by its elastic deformation are fully captured. A typical set of meshes is shown in Fig. 4. Simulation required specification of an initial velocity for the projectile, after which it moved in free flight to strike the sample at normal incidence. Any effects of air resistance between sample and projectile were neglected. Displacements of the projectile were output at a series of specified time values throughout penetration and immediately after rebound. The residual indent shape, and the surrounding fields of residual stress, plastic strain, strain rate and temperature were also predicted in each case. All material properties were assumed to be isotropic. The Young's moduli, E, and Poisson ratios, ν, of projectile and sample were respectively taken to be 650 GPa and 0.21 and 120 GPa and 0.30. The density of the cermet was measured to be 14,800 kg m⁻³. The thermal conductivity, κ, of the copper was taken to be 401 W m⁻¹ K⁻¹ and its heat capacity, c, to be 3.45 MJ m⁻³ K⁻¹. The fraction of the plastic work converted to heat was set at 95%. Heat transfer from sample to projectile was neglected. The plasticity properties of the sample were simulated using the J-C constitutive relation, with the first term represented by a set of data pairs - see §4. As with any FEM implementation of a constitutive law in the form of a family of curves, a rationale is required concerning the progressive deformation of individual volume elements. As the projectile penetrates, any particular element in the sample will experience changes in both temperature and strain rate. At any stage during the process, the values of these two in the location concerned will define the appropriate stress-strain curve for deformation during the next time interval. In the current work, it has been assumed that the cumulative strain defines the ‘state’ of the material. This then fixes the point on the appropriate stress-strain curve where the gradient is to be evaluated. By using the von Mises stress and strain in an expression based on uniaxial testing, the von Mises yielding criterion is implicitly being used to predict the onset of plasticity. This is common, although the effect of varying this criterion between von Mises and Tresca limits has been explored in the recent paper by Holmen et al. There is effectively only a single material property parameter to evaluate, so the operation of converging on an optimal solution is relatively straightforward. However, there is the issue of the coefficient of friction, which, realistically, cannot be obtained via any separate experimental study and so has to be treated as another parameter to be iteratively optimised. In the present work, however, the effect of these two parameters was decoupled, so that optimization was carried out first for μ, using a fixed value of C, and then vice versa. This process was repeated a couple of times. Since two experimental outcomes (displacement history and residual indent shape plots) are being studied, there is scope for a more systematic convergence operation, simultaneously incorporating both C and μ, but it was not considered necessary for present purposes. It may be noted at this point that, while repeated FEM simulation is integral to the procedure being used, there are good prospects for this to be integrated into user-friendly automated software packages. This is already happening for inferring quasi-static stress-strain relationships from load-displacement indentation data - see reference 26 - and something similar would be possible for evaluation of C.
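The decoupled treatment of μ and C described here can be sketched as a simple alternating scan over trial values. In the sketch below, run_fem_and_score is a hypothetical stand-in for running the FEM impact simulation and returning a goodness-of-fit value (higher meaning better agreement); the dummy scoring surface and the trial values are illustrative only.

```python
# Illustrative sketch of the decoupled optimisation of friction coefficient (mu) and
# strain rate sensitivity (C). run_fem_and_score is a hypothetical placeholder for
# "run the FEM impact simulation and return a goodness-of-fit value" (higher = better).

def run_fem_and_score(C, mu):
    # Dummy scoring surface, peaked at C = 0.016 and mu = 0.10, used purely so the
    # example runs; a real implementation would drive an FEM job and compare
    # predicted displacement histories / indent shapes with experiment.
    return 1.0 - 50.0 * (C - 0.016) ** 2 - 5.0 * (mu - 0.10) ** 2

C_trials = [0.005, 0.010, 0.016, 0.022, 0.030]
mu_trials = [0.00, 0.05, 0.10, 0.15, 0.20]

C_best, mu_best = 0.02, 0.1            # starting guesses
for _ in range(2):                      # "repeated a couple of times"
    mu_best = max(mu_trials, key=lambda mu: run_fem_and_score(C_best, mu))  # fix C, scan mu
    C_best = max(C_trials, key=lambda C: run_fem_and_score(C, mu_best))     # fix mu, scan C

print(f"converged (illustrative): C = {C_best}, mu = {mu_best}")
```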
Data from typical compression tests with each material are shown in Fig. 5, plotted as both nominal and true values. The variation between tests was in general very small. It can be seen that, as a true stress – true strain relationship, the as-received material exhibits little or no strain hardening. This is not unexpected, since the extrusion process probably left the material in a heavily cold-worked state. The annealed material, on the other hand, exhibits substantial strain hardening from the outset, with the relative change in flow stress during straining being much greater than that for the as-extruded material. This also is unsurprising for an annealed material. However, it does lead to a complication in the present context, in terms of representing the behaviour with an analytical expression. It should be noted that, while these uniaxial experiments often cannot be regarded as reliable beyond strains of the order of 20–25%, strain levels well above this can be generated during projectile impact, and are thus likely to be employed in the FEM model. Under these circumstances, use of the equation with L-H parameter values fitted over the low strain regime leads to prediction of unrealistically high flow stresses at high strains. In practice, the flow stress is not expected to exceed that of the as-received material - they are, of course, basically the same material, apart from work-hardening effects. This behaviour therefore can't be represented realistically over a large strain range using the L-H equation. The solution adopted for the annealed material has therefore been to use sets of data pairs in the FEM model, conforming to the experimental outcome for low strains and constrained to conform to the yield stress values for the swaged samples. This is illustrated in Fig. 5, which compares experimental data with extrapolated sets of data pairs, extending in both cases up to very high strains. A comment is needed here with regard to the outcome of this procedure for the annealed material. It's clear that the curve does not have the expected shape around the transition between the directly measured range and the regime in which the flow stress has been obtained from the swaged samples. In practice, a smoother transition in gradient is expected. There are possible explanations for this discrepancy. For example, the swaging would have created more heating than the quasi-static loading, which may have promoted a degree of microstructural recovery during the process, hence softening the material somewhat. However, it would be difficult to compensate in any way for such effects and it seems simpler to just follow the described procedure, accepting that there are inevitably limits on the reliability of the stress-strain curves.
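The construction of these extended data-pair sets can be sketched as follows. The low-strain curve and the swaged-sample anchor points below are placeholder numbers, and the piecewise-linear join between the two regimes is an assumption rather than the authors' exact scheme.

```python
import numpy as np

# Placeholder low-strain data from quasi-static compression of the annealed material (true values)
eps_lo = np.array([0.00, 0.05, 0.10, 0.15, 0.20])
sig_lo = np.array([50.0, 150.0, 210.0, 250.0, 280.0])     # MPa, illustrative only

# Placeholder anchor points from swaged samples: yield stress measured after known prior strains
eps_swage = np.array([0.6, 1.2, 2.0])
sig_swage = np.array([330.0, 355.0, 365.0])               # MPa, illustrative only

# Merge into a single tabulated hardening curve for the FEM model, sampled on a regular grid
eps_all = np.concatenate([eps_lo, eps_swage])
sig_all = np.concatenate([sig_lo, sig_swage])
eps_grid = np.linspace(0.0, 2.0, 21)
sig_grid = np.interp(eps_grid, eps_all, sig_all)          # piecewise-linear join (assumed)

data_pairs = list(zip(eps_grid.round(2), sig_grid.round(1)))
print(data_pairs[:5], "...")
```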
The effect of anisotropy is illustrated in Fig. 6, which compares the plots obtained by loading in axial or radial directions, for both materials. It can be seen that there is an effect, which is slightly more noticeable for the annealed material. Of course, the ballistic indentation was carried out only in the axial direction, but in that case the deformation is much more multi-axial than during compression testing, so that the overall response is expected to lie between the axial and radial extremes. If the objective were to obtain the quasi-static stress-strain curve from indentation data, and comparisons were being made with uniaxial outcomes, then this anisotropy would need to be taken into account. However, since the focus here is on the strain rate sensitivity, the exact shape of the base stress-strain curve is not expected to have a strong influence and so the data from axial testing were used in the modelling. True stress – true strain plots are shown in Fig. 7 for the four temperatures employed, together with best-fit modelled curves. In the temperature sensitivity term, the melting temperature, Tm, was taken to be 1356 K and the ambient temperature, T0, to be 295 K. The dependence of the flow stress on temperature is reflected in the value of m, with a low value giving a high sensitivity. It can be seen in the figure caption that the best-fit values of m were respectively 1.09 and 1.05 for as-received and annealed material. Of course, there are difficulties associated with only being able to obtain experimental data over a strain range that is considerably smaller than the range likely to be experienced during an impact event, and also with the fact that this is purely an empirical curve-fitting exercise. Nevertheless, these modelled curves probably capture the quasi-static behaviour reasonably well. The as-received material does appear to undergo a small degree of initial strain softening, perhaps associated with liberation of some dislocations as straining starts. The local conditions after different degrees of penetration naturally depend on both the incident velocity and the hardness of the sample. The present work covers two materials with very different hardness levels and, in each case, a range of impact velocities. It is helpful to at least be broadly aware of the nature of these fields in different cases, since this will give an indication of the ranges of strain, strain rate and temperature over which the stress-strain curves are expected to affect the response of the material. Such predicted outcomes can, of course, only be obtained if a value is assumed for C. However, while this is unknown a priori, simply taking a value in the range that is broadly expected is acceptable for these purposes. A set of illustrative outcomes is shown in Fig. 8, which refers to the annealed material subjected to impact at 70 m s⁻¹, at three different times after initial impact. The cumulative strains are shown in Fig. 8, where it can be seen that these peak at around 60%, with the region that has experienced fairly substantial strains extending by the end of penetration to significant depths below the surface. The strain rates peak at ∼3 × 10⁵ s⁻¹, but these occur only transiently in a small volume and most of the plastic deformation takes place at rates below 10⁵ s⁻¹. Nevertheless, the figure does confirm that, even with this relatively low velocity, most of the plastic deformation takes place above 10⁴ s⁻¹.
This is related to Fig. 8, which shows that the flow stress at which much of the plastic deformation occurs is above the quasi-static value in the strain range concerned, which is ∼300 MPa - see Fig. 7. This confirms that strain rate hardening effects are significant. Finally, Fig. 8 confirms that the temperature rises are not very significant. Of course, this is a relatively low impact velocity. The influence of projectile velocity is illustrated by Fig. 9, in which the corresponding fields to those in Fig. 8 are presented for 200 m s⁻¹. As expected, penetration is much deeper and the strains, strain rates, stresses and temperatures also reach higher values. However, some are increased more than others. It can be seen in Fig. 9 that the cumulative strains are raised considerably, reaching peaks of over 200% in places and exceeding 100% in relatively large volumes of material. Strain rates are also somewhat higher than for the lower velocity, peaking at nearly 10⁶ s⁻¹, although again this is only for short periods in small volumes. The peak stress levels, on the other hand, are rather similar to those for the lower velocity impact and they drop off more quickly as the ball penetrates. This is due to the effect illustrated in Fig. 9, which shows that the temperature rises more quickly, and reaches relatively high values (>150 °C) in a fairly large volume, bringing down the stress levels. The material response for these two impact velocities will thus be sensitive to different parts of the family of stress-strain curves, with the main difference being that in the high velocity case there will be a greater sensitivity to the high strain regime. For the as-received material, the behaviour will be different again, with strains being lower, but stresses being higher. Furthermore, the change in flow stress as straining occurs will be less. Of course, the two materials may have different strain rate sensitivities. There are no well-established ground rules for even approximate prediction of the value of C in different cases, although there might be an argument for expecting softer materials to have higher values. Finally, it may be noted that the peak strain rates are less important than the distribution of values that are effective locally while plastic deformation is occurring. This distribution is illustrated in Fig. 10, which provides data for both materials, with two different impact velocities. As expected, the average strain rate is higher for the higher impact velocities, although the differences are not very great (for example, 2.4 × 10⁴ and 7.1 × 10⁴ s⁻¹ for the two velocities in one of the cases). Of course, a higher strain rate makes the material harder, tending to limit the amount of strain that occurs and hence reduce somewhat the amount of deformation occurring at such rates. On the other hand, with the initially softer material, while more deformation occurs, the strain rates tend to be lower than for the harder material. These plots demonstrate that the predominant strain rate range in these experiments is of the order of 10⁴–10⁵ s⁻¹, with values up to ∼10⁶ s⁻¹ being generated in the harder material.
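To put these strain rates in context, a short worked estimate of the rate-hardening factor implied by the J-C rate term is given below, taking the quasi-static reference rate quoted earlier (≈5.5 × 10⁻³ s⁻¹) and C values simply representative of the range discussed in this work; the output is indicative only.

```python
import math

C_values = [0.016, 0.030]     # representative strain rate sensitivities discussed in this work
rate_ref = 5.5e-3             # quasi-static reference strain rate, s^-1
rates = [1e4, 1e5, 1e6]       # strain rates typical of the impact events, s^-1

# Johnson-Cook rate term: flow stress multiplier = 1 + C * ln(rate / rate_ref)
for C in C_values:
    for r in rates:
        factor = 1.0 + C * math.log(r / rate_ref)
        print(f"C = {C:.3f}, strain rate = {r:.0e} s^-1 -> flow stress raised by ~{100*(factor-1):.0f}%")
```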
Finally, the significance of the frictional work is illustrated by Fig. 10, which compares the strain rate distribution of the plastic work, obtained with the best-fit value for μ of 0.1, with that in the absence of friction. Of course, the plastic work done is lower when friction is included. It can be seen that this is a small, but not insignificant, fraction of the total work. It is also apparent that the frictional work is more significant in the higher strain rate regime, which is consistent with this taking place under conditions such that the normal stress at the interface is higher. Illustrative comparisons for the as-received material are shown in Fig. 11 between model outcomes and experimental data, with three different incident velocities, in terms of projectile displacement histories and residual indent shapes. These predictions are for a particular value of C. It can be seen that, in both cases, the agreement is fairly good. Such comparisons were made for a range of C values, with the goodness-of-fit parameter, g, being evaluated in each case. The outcome of this set of comparisons is summarised in Fig. 12, which shows plots of g as a function of C, for each type of comparison, and for each of the three impact velocities. While the outcome is not entirely consistent, optimum values of C are mostly around 0.016. It should be recognised that this procedure constitutes a comprehensive examination, not only of the value of C, but also of the reliability of the J-C formulation. The outcome does suggest that it is at least approximately valid, with, for this material, the appropriate value of C apparently being ∼0.016 ± 0.005. Corresponding plots to Figs. 11 and 12, for the annealed material, are shown in Figs. 13 and 14. The comparisons in Fig. 13 are for C = 0.030. It can be seen that agreement is again quite good, with this value of C, for both high-speed photography and profilometry data. It is also clear from the g plots in Fig. 14 that a higher value of C than for the as-received material gives the best agreement for the annealed samples. Again, the agreement is not perfect. In particular, the plots for the V = 70 m s⁻¹ case appear to be a little inconsistent, apparently indicating a best-fit C value above 0.04 for the displacement data and below 0.02 for the indent shape data. This could be at least partly attributable to the fact that the strain rates were relatively low in this case, which is likely to introduce errors into the inferred value of C.
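The text does not spell out how the goodness-of-fit parameter g is defined, so the sketch below assumes a simple normalised sum-of-squared-residuals form (g approaching 1 for perfect agreement), applied to a projectile displacement history; both the definition and the data are illustrative assumptions.

```python
import numpy as np

def goodness_of_fit(measured, predicted):
    """Assumed form: g = 1 - (sum of squared residuals) / (sum of squared measured values).
    This is an illustrative definition, not necessarily the one used by the authors."""
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    return 1.0 - np.sum((predicted - measured) ** 2) / np.sum(measured ** 2)

# Placeholder projectile displacement histories (mm) at common time points
measured  = [0.00, 0.15, 0.28, 0.38, 0.45, 0.47]
predicted = [0.00, 0.14, 0.27, 0.39, 0.46, 0.49]

print(f"g = {goodness_of_fit(measured, predicted):.3f}")   # close to 1 indicates good agreement
```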
Taken overall, the results for the annealed material indicate that the most appropriate value of C is about 0.030 ± 0.010. While it is difficult to compare these values with anything in a systematic way, they are of a similar magnitude to those reported in a number of previous publications. Furthermore, that the softer material should be more susceptible to strain rate hardening than the harder material certainly appears to be plausible: there is clearly more scope for relatively greater hardening with softer materials and it thus seems likely that the effect of an increased strain rate would be more noticeable. In fact, the data presented here are more comprehensive than those of earlier studies, both in terms of the spatial and temporal variations in local strain rate being fully incorporated into the modelling and because two independent sets of experimental measurements have been obtained in each case. The fact that, in general, both types of measurement point to similar values of C in each case does allow increased confidence in their reliability. In detail, there are certainly some discrepancies, notably in terms of the results for the softer material, for which the outcome with the lower impact velocity appears a little inconsistent with those dominated by higher strain rates. It's clear that the J-C formulation is simplistic, with complete decoupling of the base shape of the stress-strain curve, the softening effect of raising the temperature and the hardening effect of raising the strain rate. From a mechanistic point of view, it is the mobility of dislocations that is the key factor, and, while this will be enhanced by high temperature and reduced by imposing a high strain rate, it's quite likely that there would be some kind of inter-dependence between the two effects. Furthermore, studies aimed at exploring dislocation dynamics over a range of strain rates have indicated that there is often a transition in the rate-determining process as the strain rate is increased. It is therefore not unreasonable to expect that the apparent strain rate sensitivity would be different in two experiments in which the strain was imposed at substantially different average strain rates. However, there may be a danger of over-analysing these results, which do, in general, confirm that the J-C formulation appears to provide a broadly reliable description of the strain rate sensitivity, and also that the proposed methodology allows this sensitivity to be quantified in approximate terms. Of course, the methodology could also be used to check on the reliability of alternative formulations. The following conclusions can be drawn from this work: A novel procedure has been developed for experimental evaluation of the strain rate sensitivity parameter, C, in the Johnson–Cook equation. It involves impact of the sample by a hard spherical projectile, followed by monitoring of its penetration and rebound by high-speed photography and/or profilometry of the residual indent shape. Iterative FEM simulation is then carried out, using trial values for C, with quantification of the level of agreement between predicted and measured outcomes. Input requirements for the model include data characterising the quasi-static plasticity behaviour of the material and also the effect of interfacial friction. This procedure has been carried out on two different materials, in the form of as-received and annealed copper. In both cases, three different impact velocities were used, with both high-speed photography and residual indent
profilometry being employed. Good levels of agreement were obtained, over the range of velocity employed, for both types of experimental data. The strain rates operative during the plastic deformation were predominantly of the order of 10⁴–10⁶ s⁻¹. The values obtained for C were 0.016 ± 0.005 for the harder material and 0.030 ± 0.010 for the softer material. Using these values, the level of agreement observed between predicted and observed experimental outcomes is good, with goodness-of-fit parameter values mostly around 90–95%, allowing a reasonable level of confidence to be placed in both the broad reliability of the Johnson–Cook formulation and the accuracy of the inferred values of C. The procedure employed, while involving iterative FEM modelling runs, is one that is amenable to automated convergence. User-friendly software packages for its implementation, requiring no FEM expertise or resources, are likely to become available in the near future.
A methodology is presented for evaluating a strain rate sensitivity parameter for plastic deformation of bulk metallic materials. It involves ballistic impact with a hard spherical projectile, followed by repeated FEM modelling, with predicted outcomes (displacement-time plots and/or residual indent shapes) being systematically compared with experiment. The “correct” parameter value is found by seeking to maximise the value of a “goodness of fit” parameter (g) characterizing the agreement between experimental and predicted outcomes. Input for the FEM model includes data characterizing the (temperature-dependent) quasi-static plasticity. Since the strain rate sensitivity is characterised by a single parameter value (C in the Johnson–Cook formulation), convergence on its optimum value is straightforward, although a parameter characterizing interfacial friction is also required. Using experimental data from (both work-hardened and annealed) copper samples, this procedure has been carried out and best-fit values of C (∼0.016 and ∼0.030) have been obtained. The strain rates operative during these experiments were ∼10⁴–10⁶ s⁻¹. Software packages allowing automated extraction of such values from sets of experimental data are currently under development.
748
Perturbation of Ribosome Biogenesis Drives Cells into Senescence through 5S RNP-Mediated p53 Activation
Most mammalian somatic cells lose proliferative capacity as a consequence of a finite number of population doublings, activation of oncogenes, inactivation of tumor suppressor genes, or treatment with DNA damage-inducing drugs, a state termed cellular senescence.Although cellular senescence acts as a barrier to tumor formation by preventing proliferation or by inducing immune clearance of pre-malignant cells, recent studies have revealed that senescent cells are associated with age-related dysfunction through the inflammatory response.Several cellular events, including telomere dysfunction, DNA damage response, and oxidative stress response, activate p53-p14/p19Arf and p16INK4A-RB pathways during senescence.The tumor suppressor p53 acts as a vital regulator of stress response by inducing distinct classes of target genes for cell-cycle arrest, apoptosis, DNA repair, or cellular senescence.Under normal physiological conditions, p53 is maintained at low levels by its interaction with E3 ubiquitin ligases such as MDM2.MDM2 is inhibited in response to cell stress, followed by upregulation of p53 transcriptional activity and the production of a number of downstream effects.For example, Arf binds and inhibits MDM2, thus preventing the degradation of p53 during replicative senescence and oncogene-induced senescence.Recent studies have revealed that the nucleolus senses various stressors and plays a coordinating role in p53 activation.The most well-known function of the nucleolus is ribosome biogenesis, which involves transcription of precursor ribosomal RNA, pre-rRNA processing, and assembly of mature rRNA with ribosomal proteins.The nucleolus comprises RNA and a large number of proteins, some of which are released during stress.For example, nucleophosmin, nucleolin, nucleostemin, and the ribosomal protein L11, RPL5, RPL23, and RPS7 directly bind to MDM2 and prevent ubiquitin-mediated p53 degradation, which delays proliferation under nucleolar stress.RPL11 is a well-studied participant in the p53-nucleolar stress response pathway.Translational upregulation of RPL11 is observed during impaired 40S ribosome biogenesis, resulting in p53 activation.Several studies have revealed that RPL11 regulates MDM2 as part of a 5S ribonucleoprotein particle consisting of RPL11, RPL5, and 5S rRNA.Nucleolar proteins, including RRS1, BXDC1, and PICT, are implicated in 5S RNP production.Importantly, Arf and 5S RNP components, particularly RPL11, interact with each other in Arf-overexpressing cells.Overexpression of Arf enhances the RPL11-MDM2 interaction, leading to p53 activation.However, the relevance of 5S RNP to cellular senescence remains unknown.In this study, we show that 5S RNP components are involved in p53 activation and cellular senescence under oncogenic and replicative stresses.Oncogenic stress accelerated rRNA transcription, whereas replicative stress delayed rRNA processing, both of which triggered p53 activation and cellular senescence.Indeed, we extended the replicative lifespan of mouse embryonic fibroblasts by exogenously expressing rRNA-processing factors.Moreover, accelerated rRNA transcription by overexpressing transcription initiation factor IA or delayed rRNA processing by depleting rRNA-processing factors induced RPL11-mediated p53 activation and cellular senescence, indicating that 5S RNP couple a perturbed ribosomal biogenesis with p53 activation during senescence.Aberrant activation of oncogenes or persistent replicative stress induces cellular senescence, accompanied by high expression 
levels of p53, p21, and p16.Senescent cells also display drastic changes in morphology, such as enlarged and flattened shapes and enlarged nucleoli.Consistent with these observations, the transduction of oncogenic HRasG12V or E2F1 increased p53, p21, and p16 protein levels in primary MEFs and MCF-7 human breast cancer cells.Consequent replicative stress at a population doubling level of 14 increased p53, p21, and p16 protein levels.Next, we evaluated induction of cellular senescence in HRasG12V- or E2F1-expressing cells and PDL14-MEFs by microscopy and senescence-associated β-galactosidase staining, a widely used marker of cellular senescence.These cells displayed a marked increase in SA-β-gal activity, flattened morphology, and enlarged nucleoli, suggesting that these stresses induced cellular senescence.Immunofluorescence staining using upstream binding factor and Myb-binding protein 1A as nucleolar markers revealed that cells treated with control vector or PDL4-MEFs had multiple nucleoli, whereas oncogenic HRasG12V- or E2F1-expressing cells and PDL14-MEFs had a large merged nucleolus.This result suggests that senescence-inducing stresses affect ribosome biogenesis in the nucleolus.Thus, we examined rRNA production by northern blotting using probes specific for the internal transcribed spacer region.Increases in 47/45S, 41S, 30S, and 21S pre-rRNA species were observed in oncogenic HRasG12V- or E2F1-expressing cells and PDL14-MEFs.To further investigate the quantity of RNA species within the nucleolus, we isolated nucleoli and determined nucleolar RNA content.Nucleolar RNA content in MEFs and MCF-7 cells increased following exposure to oncogenic and replicative stresses.Next, we examined rRNA transcription and processing under these stress conditions using nuclear run-on assays and a pulse-chase labeling experiment.The nuclear run-on assays indicated that oncogenic HRasG12V or E2F1 accelerated rRNA transcription, whereas replicative stress did not.To investigate the cause of replicative-stress-induced increase in nucleolar RNA content, we examined the expression of rRNA-processing genes.Oncogenic stress scarcely affected expression of the rRNA-processing genes: dyskeratosis congenita 1, ribosomal RNA processing 5, Rrs1 ribosome biogenesis regulator homolog, RNA terminal phosphate cyclase-like 1, superkiller viralicidic activity 2-like 2, and NOP2 nucleolar protein.In contrast, the expression of these genes was significantly lower in PDL14-MEFs compared with that in PDL4-MEFs.The 47/45S pre-rRNA levels in PDL14-MEFs after 60 min in the pulse-chase labeling experiment were higher, and mature 18S and 28S rRNA levels were lower in PDL14-MEFs than those in PDL4-MEFs.These results indicate that rRNA processing is delayed in PDL14-MEFs.A similar cellular response was observed in WI-38 human lung fibroblasts, indicating that this phenomenon is not cell type specific.Taken together, oncogenic-stress-induced acceleration of rRNA transcription and replicative-stress-induced delay in rRNA processing led to increased nucleolar RNA content and nucleolar size.Recent studies have revealed that the nucleolus senses various stresses and plays a coordinating role in activating p53.We considered the model that aberrant ribosome biogenesis; accelerated rRNA transcription and delayed rRNA processing, may induce p53 activation and cellular senescence under oncogenic and replicative stresses.However, many signals and genes related to telomere dysfunction, DDR, and oxidative stress response have been implicated 
in these processes, and their effects cannot be excluded.To evaluate our senescence induction model, we first examined the effect of accelerated rRNA transcription observed under oncogenic stress on cellular senescence.We overexpressed the RNA polymerase I transcription factor TIF-IA in MEFs to accelerate rRNA transcription.We confirmed that overexpressing TIF-IA enhanced rRNA transcription and increased nucleolar RNA content.Overexpression of TIF-IA also induced accumulation of p53 and its downstream target p21.PUMA, a BH3-only protein essential for p53-dependent apoptosis, was not induced by this treatment.Overexpression of TIF-IA increased the protein level of p16, the number of SA-β-gal-positive cells, and enlarged nucleoli, indicating the induction of cellular senescence.It is well recognized that the senescence-associated secretory phenotype is a marker for senescent cells.Therefore, we evaluated the expression levels of SASP factors: Matrix metallopeptidase 1, Matrix metallopeptidase 10, Plasminogen activator inhibitor-1, Leptin receptor, chemokine receptor 2, TIMP metallopeptidase inhibitor 1.Overexpression of TIF-IA increased the expression of the SASP factors, except that of Mmp-1.Taken together, these data show that oncogenic-stress-induced upregulation of rRNA transcription causes aberrant ribosome biogenesis, resulting in the activation of p53 and cellular senescence.Further, we depleted the rRNA-processing factors Dkc1, Rrp5, and Rrs1, which play a distinct role in rRNA maturation and were downregulated in PDL14-MEFs, to investigate whether delayed rRNA processing activates p53 and cellular senescence during replicative stress.Depletion of these genes inhibited rRNA processing and increased nucleolar RNA content in MEFs and MCF-7 cells, which was similar to results observed in PDL14-MEFs.Immunoblotting revealed that depletion of these factors led to the accumulation of p53, p21, and p16.In addition, the cells depleted of these factors also displayed a flattened morphology, elevated SA-β-gal activity, and an enlarged nucleolus.Furthermore, depletion of these factors increased the expression of SASP factors, with an exception of depletion of Rrp5, which decreased Pai-1 expression in MEFs.When we depleted the other rRNA-processing factors, such as pescadillo ribosomal biogenesis factor 1, WD repeat domain 3, BMS1 ribosome biogenesis factor, and UTP6 small subunit processome component in MCF-7 cells, we found that all the cells accumulated p53, p21, and p16, increased SA-β-gal activity, and developed an enlarged nucleolus.These observations are most consistent with the notion that aberrant ribosome biogenesis activates p53, resulting in cellular senescence under replicative stress.Our results indicate that upregulation of rRNA transcription or inhibition of rRNA processing increased the level of p53 and its downstream target p21 and induced cellular senescence.The question is whether p53 and/or p21 were involved in the cellular senescence induced by accelerating rRNA transcription or inhibiting rRNA processing.To clarify this question, we overexpressed TIF-IA in MEFs and depleted p53 and p21.Overexpression of TIF-IA increased the number of SA-β-positive cells, which was counteracted by p53 or p21 depletion.Next, we assessed the contribution of p53 or p21 in inducing senescence in MEFs depleted of the rRNA-processing factors, Dkc1, Rrp5, or Rrs1.As with the case of TIF-IA overexpression, the increased number of SA-β-positive cells by the depletion of the rRNA-processing factors 
was counteracted by p53 or p21 depletion.Similar results were obtained when p53 was depleted from MCF-7 cells.Taken together, the results show that the p53-p21 pathway plays a critical role in cellular senescence induced by accelerating rRNA transcription or inhibiting rRNA processing.Ribosome-free RPL11 and RPL5 reportedly diffuse into the nucleoplasm during ribotoxic stress where they inhibit MDM2 and promote the activation of p53.We observed aberrant ribosome biogenesis under oncogenic and replicative stresses.Here we predicted that RPL11 and RPL5 are involved in activating p53 and cellular senescence under these stresses.Thus, we evaluated the quantities of endogenous ribosome-free RP11 and RPL5 in MEFs by isolating ribosomal and nonribosomal fractions from unstressed and stressed cells.The obtained cell lysates were subjected to sucrose gradient centrifugation followed by immunoblotting with anti-RPL11 and anti-RPL5 antibodies.As a striking feature, we detected a marked increase in the quantity of ribosome-free RPL11/RPL5 in HRasG12V-expressing cells and in PDL14-MEFs.Similarly, co-immunoprecipitation experiments showed that the oncogenic and replicative stresses reduced the interaction between MDM2 and p53.In contrast, the interaction between MDM2 and RPL11/RPL5 was enhanced.These results indicate that oncogenic and replicative stresses increase the levels of ribosome-free RPL11 and RPL5, which bind to and inhibit MDM2.Oncogenic and replicative stresses increase the expression of the nucleolar protein Arf, which leads to the accumulation of p53 by binding to MDM2.Immunoblotting showed that Arf levels increased in the presence of oncogenic and replicative stresses.The co-immunoprecipitation experiments indicated an enhanced interaction between Arf and MDM2 under these stresses.These results suggest a connection between Arf and the ribosomal proteins that activate p53 in response to oncogenic and replicative stresses, which is consistent with reports showing a functional connection between Arf and RPL11.Next, we assessed the contribution of RPL11 and RPL5 in p53 activation under oncogenic and replicative stress conditions.RPL11 or RPL5 depletion suppressed the accumulation of p53 and p21 induced by the oncogenic and replicative stresses.The increased p16 level was also compromised by RPL11 or RPL5 depletion.Depleting RPL11 or RPL5 decreased the number of SA-β-gal-positive cells under these stresses.These data indicate that oncogenic and replicative stresses induce the accumulation of RPL11 and RPL5 in the ribosome-free fraction where they bind to MDM2, resulting in the activation of p53 and cellular senescence.RPL11 and RPL5 comprise 5S RNP together with 5S rRNA.Recent studies have revealed that 5S RNP is essential for activating p53 in response to ribotoxic stresses.We then examined whether 5S rRNA, the other 5S RNP component, was required to activate p53 under oncogenic and replicative stresses.The depletion of TFIIIA, a cofactor specifically required for 5S rRNA transcription, effectively inhibited 5S rRNA biogenesis without affecting tRNA production, which was consistent with a previous report.We found that TFIIIA depletion abrogated the accumulation of p53, p21, and p16, as well as the number of SA-β-gal-positive cells in HRasG12V- or E2F1-expressing MEFs and PDL14-MEFs.Taken together, any of the 5S RNP components, such as RPL11, RPL5, and 5S rRNA, are necessary for oncogenic-stress- or replicative-stress-induced activation of p53 and cellular senescence.We investigated whether 
experimental upregulation of rRNA transcription or downregulation of rRNA processing, which were observed under oncogenic or replicative stress, induce p53 activation and cellular senescence in a 5S RNP-dependent manner.We first tested whether the 5S RNP component is required for p53 activation and senescence induced by aberrant upregulation of rRNA transcription.Thus, we depleted RPL11 in TIF-IA-overexpressing cells.We found that RPL11 depletion suppressed the accumulation of p53, p21, and p16, and the number of SA-β-gal-positive cells induced by TIF-IA overexpression.Next, we tested whether 5S RNP is required for p53 activation and cellular senescence induced by downregulating rRNA processing.We depleted RPL11 in the presence of small interfering RNAs specific for DKC1, RRP5, and RRS1, any of which induced p53 accumulation, p21 induction, p16 induction, and cellular senescence.We found that RPL11 depletion abrogated these events.Thus, overexpressing TIF-IA or depleting the rRNA-processing factors lead to phenotypes similar to those observed under oncogenic or replicative stress via 5S RNP.Taken together, our results indicated that oncogenic or replicative stresses increase nucleolar size, trigger p53 accumulation, p21 induction, and cellular senescence by perturbing ribosome biogenesis, such as upregulating rRNA transcription or inhibiting rRNA processing, respectively.Our results also show that perturbing ribosome biogenesis induces 5S RNP-mediated p53 activation under oncogenic or replicative stresses.Next, we uncovered the molecular basis for aberrant ribosome biogenesis in senescent cells.Although oncogenic HRasG12V or E2F1 enhances rRNA transcription via extracellular signal-regulated kinase, it remains unclear how the expression of rRNA-processing factors is suppressed under replicative stress.c-Myc upregulates expression of the rRNA-processing factors such as Dkc1, Rrp5, and Rrs1.Arf binds and inhibits c-Myc transcriptional activity, independently of p53.Hence, Arf may block rRNA processing by repressing c-Myc.Consistent with this hypothesis, Arf depletion restored the levels of rRNA-processing factors that were downregulated in senescent MEFs.In contrast, the depletion of p53 or p21 did not markedly influence levels of the rRNA-processing factors, suggesting that Arf regulates expression of rRNA-processing factors independently of p53.Arf depletion also counteracted the increase in nucleolar RNA content following replicative stress.These results suggest that Arf inhibits rRNA processing by downregulating the rRNA-processing factors under replicative stress.Furthermore, Arf depletion suppressed p53 accumulation, p21 induction, SA-β-gal activation, and nucleolar enlargement, which is in agreement with our data, which suggest that downregulation of the rRNA-processing factors led to 5S RNP-mediated p53 activation and cellular senescence.Direct binding of Arf to MDM2 prevents ubiquitin-mediated degradation of p53 and promotes cellular senescence.Consistent with these reports, we demonstrated that replicative stress enhanced the interaction between Arf and MDM2.Along with this pathway in which Arf directly inhibits MDM2, our results indicate that Arf-mediated inhibition of rRNA processing may lead to p53 activation and cellular senescence via 5S RNP.Thus, it appears that both pathways cooperatively activate p53, leading to replicative senescence.Finally, we determined whether exogenous expression of the rRNA-processing factors abrogates replicative senescence, because decreased 
expression of the rRNA-processing genes delayed rRNA processing, increased nucleolar RNA content, and resulted in replicative senescence.We exogenously expressed Dkc1, Rrp5, and Rcl1, whose expression decreased in PDL14-MEFs, and examined the nucleolar content.Exogenous expression of Dkc1 or Rrp5 suppressed the increase in nucleolar RNA content, indicating recovery of rRNA processing.In contrast, Rcl1 overexpression modestly affected nucleolar RNA content of PDL14-MEFs.Under these conditions, immunoblotting showed that exogenous expression of Dkc1 or Rrp5 inhibited the accumulation of p53, p21, and p16 in PDL14-MEFs.The effect of Rcl1 overexpression was not so evident.In addition, exogenous expression of Dkc1 or Rrp5 extended the replicative lifespan of MEFs.Replicative-stress-enhanced SA-β-gal activity and nucleolar enlargement were also abrogated by exogenous expression of Dkc1 and Rrp5.In contrast, the effect of exogenous expression of Rcl1 on replicative lifespan, SA-β-gal activity, and nucleolar size was not as apparent as that of Dkc1 and Rrp5, indicating a correlation between restored nucleolar RNA content and extending replicative lifespan capacity.The effect of exogenous expression of Dkc1 and Rrp5 on SASP was less evident; however, increased expression of SASP factors in old MEFs partially decreased by exogenous expression of the rRNA-processing factors.Taken together, these data strongly support our model that suggests that impaired rRNA processing induces replicative senescence.Normal somatic cells can undergo permanent growth arrest after a finite number of cell divisions or activation of oncogenes, accompanied by p53 activation.In this study, we showed that oncogenic stress accelerated rRNA transcription, whereas replicative stress impaired rRNA processing, both of which triggered 5S RNP-mediated p53 activation and cellular senescence.We also found that expression of the rRNA-processing factors decreased because of replicative stress, which impaired rRNA processing and limited the number of cell divisions.The replicative lifespan of MEFs was extended by exogenous expression of the rRNA-processing factors.5S RNP components activate p53 when ribosome biogenesis is blocked by siRNA-mediated depletion of rRNA transcription or -processing factors.Physiological stresses, such as gamma-irradiation, nutrient restriction, or hypoxia, inhibit rRNA transcription, whereas factors that inhibit rRNA processing have not been identified.Here, we show that replicative stress inhibited rRNA processing and caused cellular senescence.Namely, expression of the rRNA-processing factors decreased in late-passage PDL14-MEFs, which caused defects in rRNA processing.These cells displayed enlarged and flattened shapes, enlarged nucleoli, and accumulated p53 and p16, which are typical characteristics of senescent cells.These phenotypes were rescued by exogenously expressing certain rRNA-processing factors, demonstrating the physiological significance of rRNA-processing defects.Our results showed that exogenous expression of Dkc1 and Rrp5 rescued the processing defects and attenuated cellular senescence.However, it was odd that exogenous expression of the single rRNA-processing gene rescued the processing defects because rRNA processing in eukaryotic cells requires a large number of proteins, some of which decreased in senescent cells.Rrp5 is reportedly involved in pre-rRNA cleavage at sites A0–A2 in the 18S rRNA synthetic pathway and A3 cleavage in the 5.8S/25S rRNA synthetic pathway.Dkc1 also functions 
in multiple rRNA-processing steps by regulating pseudouridine synthesis.In contrast, Rcl1, whose expression partially rescued processing defects and cellular senescence, is mainly required for 18S rRNA biogenesis.Based on these observations, we speculate that Dkc1 and Rrp5 are comprehensively involved in processing; therefore, their exogenous expression may upregulate rRNA processing.Expression of Dkc1 or Rrp5, which restored nucleolar RNA content, extended replicative lifespan.The effect of Rcl1 overexpression on nucleolar RNA content and replicative lifespan was not as evident.These results suggest that rRNA-processing defects are strongly implicated in the induction of senescence under replicative stress.We found that Arf depletion restored the decreased expression of rRNA-processing factors under replicative stress.Arf accumulates in late-passage cells, which is consistent with previous reports, and inhibits c-Myc transcriptional activity.c-Myc transcriptionally upregulates ribosomal components.These results suggest that restoring rRNA processing by Arf depletion is due to cancelation of the Arf inhibitory effects on c-Myc.Along with the direct inhibition of MDM2 by Arf, our results indicate that Arf-mediated defects in rRNA processing may lead to p53 activation and cellular senescence via 5S RNP.This result is consistent with previous reports demonstrating a functional connection between RPL11 and Arf.The senescence response hinders tumor formation by preventing proliferation of pre-malignant cells.However, recent studies have revealed that senescent cells are associated with age-related dysfunction and tumorigenesis through the inflammatory response.Therefore, it is important to understand senescence signaling.Many signals and genes related to telomere dysfunction, DDR, and oxidative stress response activate p53 and induce cellular senescence.It is also well known that the nucleolus senses various stresses and is a central hub for coordinating the stress response and transmitting signals to regulate p53.In this study, we showed the mechanisms of how oncogenic or replicative stress triggers 5S RNP-mediated p53 activation and cellular senescence.In addition, we observed that hydrogen peroxide induced RPL11- and RPL5-mediated p53 activation and cellular senescence.The 5S RNP components promoted p53 activation in all cases.Therefore, it appears that nucleolar surveillance signaling is required for a broad range of senescence induction activities.Understanding these processes will provide necessary insights into senescence and could contribute to new treatments for age-related disorders.MCF-7 human breast cancer cells were maintained in DMEM containing 1,000 mg/l glucose supplemented with 10% fetal bovine serum, 100 units/ml penicillin, and 100 μg/ml streptomycin.MEFs and WI-38 cells were cultured in DMEM containing 4,500 mg/l glucose with 10% FBS, 100 units/ml penicillin, and 100 μg/ml streptomycin.The cells were cultured in 5% CO2 and 20% oxygen conditions.MEFs, WI-38, and MCF-7 cells were transiently infected with the pBabe-HRasG12V, pBabe-TIF-IA, and the corresponding control vector using the 293GP cell line.Eighty percent confluent 293GP cells were transfected, using the Polyethylenimine MAX, with HRasG12V-expressing vector, TIF-IA-expressing vector, or the GFP vector alone, and the obtained replication-incompetent retroviruses were used for the transduction of MEFs, WI-38, and MCF-7 cells.The next day, after infection, the cells were selected with the appropriate antibiotic for a 
day and collected at the indicated times.Nucleoli were isolated from 6 × 10^6 MEFs or 1.2 × 10^7 MCF-7 cells in high purity by density gradient fractionation as previously described.Total nucleolar RNA was prepared from the isolated nucleoli and quantified by spectrometry.Statistical analysis was performed by Student’s t test.MEFs were lysed with lysis buffer and layered onto an 8%–48% sucrose gradient containing 30 mM Tris-HCl, 100 mM NaCl, and 10 mM MgCl2 and centrifuged with a Beckman SW41 rotor for 240 min at 23,000 rpm.Fractions were collected from the top of the gradient, and ribosomal and nonribosomal fractions were determined using 18S/28S rRNA as an indicator.See Supplemental Experimental Procedures for more information.K.N., T. Kumazawa, A.M., J.Y., and K.K. conceived the project and designed experiments.K.N. and T. Kumazawa analyzed the data with significant assistance from T. Kuroda, N.K., M.T., N.G., and R.F. K.N., T. Kumazawa, and K.K. wrote the manuscript.
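The nucleolar RNA quantification described above compares RNA content between early- and late-passage cells using Student's t test. The sketch below shows one way such a two-group comparison could be set up; it is a minimal illustration only, the replicate values, group labels, and per-cell normalisation are hypothetical placeholders rather than data from the study, and the study may have used a different variant of the test.

```python
# Minimal sketch: comparing nucleolar RNA content per cell between early- and
# late-passage MEFs with Student's t test. All values are hypothetical.
import numpy as np
from scipy import stats

early_rna_ng = np.array([310.0, 295.0, 322.0])  # total nucleolar RNA per prep (ng)
late_rna_ng = np.array([480.0, 455.0, 502.0])
cells_per_prep = 6e6                            # cells used per nucleolar prep

# Normalise to picograms of nucleolar RNA per cell
early_per_cell = early_rna_ng / cells_per_prep * 1e3
late_per_cell = late_rna_ng / cells_per_prep * 1e3

# Two-sample Student's t test (equal variances assumed)
t_stat, p_value = stats.ttest_ind(early_per_cell, late_per_cell, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```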
The 5S ribonucleoprotein particle (RNP) complex, consisting of RPL11, RPL5, and 5S rRNA, is implicated in p53 regulation under ribotoxic stress. Here, we show that the 5S RNP contributes to p53 activation and promotes cellular senescence in response to oncogenic or replicative stress. Oncogenic stress accelerates rRNA transcription and replicative stress delays rRNA processing, resulting in RPL11 and RPL5 accumulation in the ribosome-free fraction, where they bind MDM2. Experimental upregulation of rRNA transcription or downregulation of rRNA processing, mimicking the nucleolus under oncogenic or replicative stress, respectively, also induces RPL11-mediated p53 activation and cellular senescence. We demonstrate that exogenous expression of certain rRNA-processing factors rescues the processing defect, attenuates p53 accumulation, and increases replicative lifespan. To summarize, the nucleolar-5S RNP-p53 pathway functions as a senescence inducer in response to oncogenic and replicative stresses.
749
Resistance and resilience responses of a range of soil eukaryote and bacterial taxa to fungicide application
Every community of living organisms is subjected to a range of stresses that can potentially deleteriously impact some or all of the species present, with the potential to affect community structure, function and/or diversity.Such community responses can be considered in terms of resistance, which refers to the capacity of a community to maintain its size, composition, and function in the presence of a disturbance, and resilience, which describes the ability of an impacted community to recover its initial structure and function following a disturbance.Giller et al. proposed two possible relationships between stress levels and microbial community diversity: an “extinction” relationship in which the diversity of a community is negatively correlated with an increase in stress levels, and a “competitive exclusion” relationship in which there is a hump-backed response to stress.In a hump-backed response, a mild stress would enhance the removal of dominant organisms, and promote an increase in diversity as other species proliferate to fill the niche.However, there is limited experimental data to support these responses.It has been suggested that the resistance and/or resilience of soil communities to disturbances could be influenced by the initial biodiversity of a particular system.Girvan et al. observed greater resilience to benzene application in soil with a higher natural diversity, as demonstrated by a quicker recovery in the mineralisation rate of 2,4-dichlorophenol, than in lower diversity soil.Some previous research has indicated the presence of “competitive exclusion” diversity responses to some stresses, e.g. copper or cadmium amendment.However, whether such relationships apply following the addition of organic chemicals such as pesticides remains unknown.The effects of pesticides on non-target organisms and the wider environment as a whole have been a concern for many years due to the biologically active nature of the compounds.Such non-target effects may result either from the direct toxicity of the compound, or from indirect impacts caused by the removal and/or increased proliferation of other species.It is thought that microbial communities may have lower natural resistance and/or resilience to pesticide impacts than plants and other larger organisms.Indeed, previous research using a range of broad-scale and molecular methods has shown that pesticides can significantly alter microbial community structure in different environments.However, such studies have primarily been limited to bacterial and fungal communities.In particular, there is a paucity of information available about the impacts of pesticides on higher trophic level soil microorganisms such as nematodes and protozoa.Such organisms are integral members of soil food webs as both predators and prey and their activities are beneficial to nutrient cycling within the soil, with the potential to impact plant growth.Culture-dependent methods have previously shown impacts of pesticides on non-target eukaryotic microorganisms in soils.However, such studies are limited by the fact that only a small percentage of soil microorganisms are culturable.There has been limited use of culture-independent molecular methods to investigate the non-target effects of pesticides on eukaryotic soil microorganisms.Bending et al.
showed that three fungicides each had specific effects on eukaryote communities, apparently reducing the abundance of specific taxa.However, these effects occurred in the absence of impacts on broad-scale measurements such as microbial biomass.Similarly, Adetutu et al. found that the fungicide azoxystrobin altered the structure of soil fungal communities with impacts still observed up to 84 d after application, by which time over 60% of the applied compound had been degraded.The current study investigated the impacts of pesticide application on the resistance and resilience responses of soil microbial communities from different trophic levels using the strobilurin fungicide azoxystrobin as a model compound.The strobilurin group of fungicides represent one of the most important groups of pesticides currently in use worldwide for the control of fungal crop pathogens.In 1999, sales of strobilurins totalled US$620 million worldwide and this had increased to US$1.636 billion by 2007.Their structures are based on those of natural products secreted by wood-degrading basidiomycete fungi such as Oudemansiella mucida and Strobilurus tenacellus and can be either fungicidal or fungistatic.Azoxystrobin acts by binding to the ubiquinone site of cytochrome b which forms part of the cytochrome bc1 complex in the fungal mitochondrial membrane.This binding disrupts the transfer of electrons from the cytochrome b portion of the complex, to the c1 portion, which stops the mitochondria producing ATP for the cell.Despite their widespread use, little is known about the effects of azoxystrobin and other strobilurin compounds on soil microbial communities, particularly with reference to non-target organisms.Soil biomass-N and dehydrogenase activity analyses were performed to give an indication of broad-scale impacts, whilst molecular methods were used to determine the impacts of azoxystrobin concentration on the structure and diversity of specific microbial groups from different trophic levels.HPLC analysis was used to monitor azoxystrobin degradation/dissipation over the course of the experiment.Soil was collected from Hunts Mill field at the Wellesbourne Campus of the University of Warwick School of Life Sciences, UK, during January 2008.The soil is a sandy loam of the Wick series with a composition of 73.4% sand, 12.3% silt, and 14.3% clay.The field had been managed as set-aside for over a decade and thus had received no pesticide applications.Soil was collected from the top 20 cm to comply with OECD guidelines for soil sampling in agricultural soils.Prior to azoxystrobin application, the soil was re-wetted to a matric potential of −33 kPa.This equated to a soil moisture content of 13.5%.Azoxystrobin was dissolved in acetone and added to the soil at a solvent:soil ratio of 1:20, giving concentrations of 1, 5, 10 and 25 mg kg−1 soil, with 5 mg kg−1 representing the UK maximum recommended dose of azoxystrobin in the top cm of soil and therefore the maximum dose which could reach the soil either directly, such as from spraying prior to canopy closure, or indirectly, following residue wash-off from the canopy.A total of 2.4 kg of soil was required for each pesticide concentration.The azoxystrobin solution was initially applied to one quarter of the soil and mixed with a sterile stainless steel spoon.The soil was then stored at room temperature in a fume hood for 2 h to allow evaporation of the acetone.The remaining three quarters were then mixed in gradually over a 10 min period to ensure an even distribution of 
the compound throughout the soil.Control soils were amended in the same way as the treated soils, but without azoxystrobin.120 g Portions were then transferred to sterile 250 mL glass Duran bottles, wrapped in aluminium foil and stored at 15 °C in the dark.4 Replicates of each treatment were destructively sampled at time 0, and then on a monthly basis for 4 months.Soil biomass-N was measured using the CHCl3 fumigation method of Joergensen and Brookes.Obtained ninhydrin-N values were converted to biomass-N using a conversion factor of 3.1.Dehydrogenase activity was monitored as detailed by Tabatabai.10 g of azoxystrobin-amended soil was added to 50 mL centrifuge tubes and mixed with 20 mL of HPLC-grade acetonitrile.The tubes were shaken by hand and placed on a shaker for 1 h. Following shaking, the samples were left for 30 min to settle and then centrifuged at 4000 rpm for 2 min.2 mL Of the supernatant was transferred into a 2 mL screw-top glass HPLC vial.Samples were analysed using an Agilent 1100 series unit with a diode array detector and LiChrospher® 100 RP-18e HPLC column.A liquid phase composed of 75% HPLC-grade acetonitrile and 25% distilled water was used at a flow rate of 1.30 mL min−1.25 μL of each sample was injected and the concentration of azoxystrobin determined by monitoring the absorbance at 230 nm.The limit of detection of the equipment was 0.04 μg g−1 soil and the limit of quantification was 0.1 μg g−1 soil.DNA was extracted from the soil samples using a FastDNA® Spin Kit.PCR reactions were set up containing MegaMix ready PCR Mix using the manufacturer’s guidelines, and a fluorescently-labelled primer pair specific to the microbial community being studied.Total RNA was extracted from 0 and 25 mg kg−1 soils 1-month post-amendment using the FastRNA® Pro Soil-Direct Kit and reverse transcriptase PCR was performed using the Qiagen® OneStep RT-PCR Kit.The primer pairs used for RT-PCR were: EF4f-FAM/EF3r, 63f-NED/1087r-VIC, and Nem18Sf-VIC/Nem18Sr.All labelled primers were obtained from Applied Biosystems, Warrington, UK and all un-labelled primers from Invitrogen, Paisley, UK.PCR products were prepared for T-RFLP analysis as described by Hunter et al.T-RFLP analysis was carried out using an Applied Biosystems 3130XL Genetic Analyzer, and the results analysed using GeneMarker® software.TRF sizes were determined by reference to LIZ-1200 standards, and the default software settings.Only peaks with intensity values of 50 or over were used for further analysis.Clone libraries were produced for nematodes and fungi from 0 mg kg−1 and 25 mg kg−1 treatment RNA extracts at the 1 month time point as described by Hunter et al.Forward and reverse sequences from each clone were determined using the primers M13R and M13F.Sequencing was performed using an Applied Biosystems 3130XL Genetic Analyzer.Sequences were analysed using the SeqMan™ programme.The sequences were aligned and the insert sequence determined.Sequence homologies were identified using the nucleotide BLAST database.80 and 83 18S rRNA sequences were obtained from the 0 and 25 mg kg−1 fungal samples, respectively.For nematodes, 81 and 79 sequences were obtained from the 0 and 25 mg kg−1 treatments, respectively.The EditSeq™ programme was used to determine the position of HhaI and MspI, and AciI and HaeIII restriction sites in each sequence.Quantitative PCR analysis was carried out for 0 mg kg−1 and 25 mg kg−1 bacterial and fungal DNA samples, 1 month post application.qPCR reactions were set up using the 2× SYBR Green mix according 
to the manufacturer’s instructions.Each qPCR reaction was carried out in triplicate.The primer pairs Eub338f/Eub518r and 5.8s/ITS1F were used for bacterial and fungal community analysis, respectively.All primers were obtained from Invitrogen™.Fungal and bacterial standard samples produced from existing clone libraries were first sequenced as described in Section 2.6.The bacterial standard had a 97% sequence homology to the uncultured α-Proteobacterium clone AKYH1384, and the fungal standard had a 98% sequence homology to the species Phoma exigua var. exigua.Purified products were obtained using the QIAfilter Plasmid Midi Kit following the manufacturer’s instructions.qPCR reactions were carried out using an ABI Prism 7900 HT sequence detection system.All samples were analysed using the SDS 2.1 software.qPCR samples were considered successful if the R2 value was >0.99 and the slope value was between −3.0 and −3.4.qPCR products were run on a 1% agarose gel to ensure that only the intended fragment was amplified.Mean quantities and standard deviations were calculated for each triplicate set using Microsoft Excel 2007.DNA copy numbers were calculated as described by Whelan et al.Two-way analysis of variance with contrast analysis was used to determine significant differences between treatments over the experimental period.The factors used in the ANOVA were azoxystrobin concentration and sampling time.An additional contrast function was added to the ANOVA using the GenStat Version 12 statistics programme to enable the impacts of individual azoxystrobin concentrations to be compared to each other.An example of the GenStat code used for this analysis can be seen in Fig. S1.The time taken for 50% of the applied azoxystrobin to degrade was calculated using the vinterpolate function in GenStat Version 12.T-RFLP data was analysed using non-metric multi-dimensional scaling with Primer6 software, in order to determine the impacts of pesticide concentration on microbial community structure.This method compares the dissimilarity between each sample and plots the distances between each sample on a 2D ordination plot.Similarity percentage analysis was used to determine which TRFs contributed to the community variation between defined treatments.The significance of differences in community structure between 0 mg kg−1 and 25 mg kg−1 treatment bacterial and fungal clone libraries was determined using the Mann–Whitney U-Test page of the Caenorhabditis elegans WWW Server.2-way ANOVA and LSD analyses were used to identify significant differences in fungal and nematode SSU rRNA sequence copy numbers between treatments.In all azoxystrobin treatments, degradation proceeded rapidly within the first month post-application, but had almost ceased after 3 months.The degradation rate in the 25 mg kg−1 treatment was slower than in the other treatments, whereas the compound degraded most quickly in the 5 mg kg−1 treatment.DT50 values ranged from 19 to 47 d.After 4 months, between 10% and 37% of the applied azoxystrobin remained across the different treatments.Azoxystrobin concentration had no significant effect on biomass-N over the course of the experiment.Average biomass-N values during the 4-month experimental period ranged from 112.8 to 228.8 μg g−1 soil.Soil dehydrogenase activity was significantly affected by the concentration of azoxystrobin applied.There was a marked decrease in dehydrogenase activity in all treatments after 1 month, relative to the control.At this time point dehydrogenase activity ranged from 75% of the
un-amended control in the 1 mg kg−1 treatment, to only 45% in the 25 mg kg−1 samples.Dehydrogenase activity remained at these levels up to the 2 month sampling point.However, by 3 months, there had been a marked increase in dehydrogenase activity in all treatments.Indeed, the 1 and 5 mg kg−1 treatments recorded activity levels that were 102.9% and 103.6% those of the control.Values of 88.4% and 98.8% were recorded for 10 and 25 mg kg−1 treatments respectively.After 4 months, dehydrogenase activity in the 10 and 25 mg kg−1 treatments were higher than those of the controls.The 1 and 5 mg kg−1 treatments recorded values 87.1% and 97.8% that of the control.There was no significant difference in the overall number of TRFs recorded in the different treatments or across the different sampling times.64 unique TRFs were observed, with between 12 and 24 recorded in an individual trace.The diversity in the 25 mg kg−1 treatment was significantly lower than all of the other treatments, recording values between 78% and 80% those of the unamended controls for each time point.No significant differences in diversity were observed between any of the other treatments.Although ANOSIM analysis showed no overall significant impact of azoxystrobin on fungal community structure, pair-wise analysis did show that the application of azoxystrobin at a concentration of 25 mg kg−1 significantly changed the fungal community structure, relative to the control, 1, and 5 mg kg−1 treatments across the experimental period.Sampling time did not significantly affect the fungal community structure.TRFs at 143, 146, and 148 bp were dominant in the 0–10 mg kg−1 concentrations, but were absent in the 25 mg kg−1 treatment.SIMPER analysis identified that the absence of these TRFs was responsible for approximately 12%, 23%, and 17% of the total community structure variation between the 25 mg kg−1 and the 0–10 mg kg−1 treatments.T-RFLP analysis of RNA extracts taken from 0 and 25 mg kg−1 treatments 1 month post-application was used to determine if azoxystrobin significantly affected the active fungal community.The application of azoxystrobin significantly affected the numbers of TRFs with amended samples having, on average, 53% the number of TRFs of the unamended controls.This compares with 67% recorded for the DNA-derived samples.Furthermore, NMDS and ANOSIM analyses showed that azoxystrobin significantly affected the active fungal community structure whilst fungal diversity was also significantly affected.The NMDS plot showed no significant difference in the control and 25 mg kg−1 treatments for the DNA-derived samples.However, there was a significant difference in fungal community structure between the control and 25 mg kg−1 treatments for the RNA-derived samples.Furthermore, the DNA- and RNA-derived samples also formed clearly distinct groups.This suggested significant differences in the structures of these groups.ANOSIM analyses supported the NMDS plot observations.There was a significant difference in community structure between amended and un-amended samples for both DNA- and RNA-derived samples.There was also a highly significant difference in the community structure of samples produced using extracted DNA or RNA.A total of 67 different nematode TRFs were obtained with between 8 and 29 being recorded in a single trace.There was no significant impact of azoxystrobin concentration or sampling time on the number of nematode TRFs.However, azoxystrobin concentration did significantly impact nematode community diversity.Contrast 
analysis showed that this effect was due to highly significant differences in diversity in the 25 mg kg−1 treatment compared to the other treatments.ANOSIM analysis showed that azoxystrobin application significantly affected the nematode community structure.Pair-wise comparisons showed that this was due to the community structure in the 25 mg kg−1 treatment being significantly different to those of the control, 1 mg kg−1, 5 mg kg−1, and 10 mg kg−1 treatments.There was no significant effect of sampling time on nematode community structure.Two of the recorded TRFs, 421 bp and 458 bp, showed dramatic reductions in intensity in the 25 mg kg−1 samples, compared to the other treatments.Indeed, TRF 421 bp which was present in all of the 0–10 mg kg−1 treatments, was absent from all of the 25 mg kg−1 treatments, whilst TRF 458 bp was present in all of the 0–10 mg kg−1 treatments, but was only present in the 25 mg kg−1 treatment at the 1 month time point.SIMPER analysis showed that the absence of these two TRFs was responsible for an average of 13% and 12% of the total community structure variation between the 25 mg kg−1 and 0–10 mg kg−1 treatments.As with the fungal samples, azoxystrobin had a significant impact on RNA-derived nematode TRF numbers.RNA from the amended samples contained an average of 70% the TRF numbers present in the un-amended controls.However, there was no significant difference in the RNA-derived active nematode diversity between the control and 25 mg kg−1 treatments.NMDS and ANOSIM analyses showed that there was a significant difference in community structure when the DNA- and RNA-derived samples were compared.Furthermore, azoxystrobin application had a significant effect on the structure of the active nematode community in the RNA-derived analysis.However, no significant differences between nematode community structure in DNA-derived samples were observed.A total of 159 individual TRFs were recorded for the bacterial samples.In a single trace, the number of TRFs varied markedly from 33 to 109.Neither azoxystrobin concentration nor sampling time had a significant impact on bacterial TRF numbers.There was also no significant impact of azoxystrobin concentration on bacterial community structure or diversity.RNA-derived analysis also showed no significant impacts of azoxystrobin on active bacterial diversity, TRF numbers, or community structure.T-RFLP analysis produced a total of 98 individual archaeal TRFs with between 8 and 33 being present in an individual trace.Azoxystrobin concentration was found to have no significant impacts on archaeal TRF numbers, diversity, or community structure.A total of 73 individual pseudomonad TRFs were recorded with between 9 and 19 being present in an individual trace.However, azoxystrobin concentration did not have a significant impact on the pseudomonad community with p values of 0.905, 0.147, and 0.941 recorded for effects on TRF numbers, community diversity, and community structure, respectively.In order to gain a greater insight into the impacts of azoxystrobin on soil fungal and nematode populations, 18S rRNA gene clone libraries were produced from the 1 month RNA samples from the 25 mg kg−1 treatment, and the un-amended controls.Analysis of the fungal clone libraries using the Mann Whitney U-Test indicated that azoxystrobin application had a significant impact on community structure.The fungal libraries consisted of 80 individual sequences for the 0 mg kg−1 treatment and 83 for the 25 mg kg−1 samples.The 0 mg kg−1 library was composed of 
66.5% ascomycetes, 20% zygomycetes, 9% basidiomycetes, and 6% that showed sequence homology to “uncultured fungi”.In contrast, in the 25 mg kg−1 library only 30% of the sequences showed homology to ascomycetes whilst there was an increase in the percentage of zygomycetes to 54%.This change was mostly due to an increase in the prevalence of sequences showing a homology to Zygomycete sp.AM-2008a, from 9.0% in the un-amended treatment library to 49.5% in the 25 mg kg−1 library.Basidiomycete and “uncultured fungi” sequences accounted for 3.5% and 12% of the whole library, respectively.Mann Whitney U-Test analysis of the nematode clone library data showed that azoxystrobin application had a significant effect on community structure.The clone libraries included 81 and 79 sequences for the 0 mg kg−1 and 25 mg kg−1 treatments, respectively.Taxonomic analysis of the 0 mg kg−1 clone library showed that the majority of the clones came from the orders Enoplida and Tylenchida.The remainder were from the orders Araeolaimida, Aphelenchida, and Rhabditida.5% of the clones were classified as “uncultured nematodes”.The most common sequence recorded showed homology to Pratylenchus neglectus which accounted for 26% of the clone library sequences.Sequences showing homologies to Xiphinema rivesi, Achromadora sp. and Trichistoma sp. constituted 17%, 13.5%, and 10% of the clone library sequences, respectively.Following azoxystrobin application the major change that occurred was an increase in prevalence of nematodes from the order Araeolaimida to 26.5% of the total clones.This was due to a large increase in the number of sequences showing homologies to the genus Plectus sp. There was also an increase of 6.5% in the number of clones that showed sequence homologies to the order Tylenchida.Conversely, those with homologies to the order Enoplia decreased by 14.5%.In contrast to the control, there were no clones present that showed sequence homologies to the orders Aphelenchida or Rhabditida.The 25 mg kg−1 clone library consisted of 12 named genera.The dominant sequences in this library were related to P. neglectus, Xiphinema sp., and Plectus rhizophilus.Sequences for the genera Achromadora were not found at all in the 25 mg kg−1 library and Trichistoma sp. sequences only represented 2.5% of the sequences.Conversely, the prevalence of P. 
rhizophilus increased by 9% in the 25 mg kg−1 library samples.qPCR analysis was performed to determine the effects of azoxystrobin concentration on 16S and 18S rRNA gene copy number.Samples were analysed 1 month post-application.Azoxystrobin application did not have a significant effect on bacterial copy number.An average bacterial copy number of 3.15 × 106 was recorded for the 0 mg kg−1 control.Copy numbers in the amended samples ranged from 2.275 to 3.775 × 106.In contrast, azoxystrobin application did have a significant impact on fungal copy number.Fungal copy numbers in the un-amended controls averaged 0.83 × 104 g−1 soil.No significant differences in fungal copy number were observed between the 1, 5, 10 and 25 mg kg−1 treatments.Fungal copy numbers for the 1, 5, 10, and 25 mg kg−1 samples were 0.58, 0.57, 0.55, and 0.43 × 104 g−1 respectively.In this study the different broad and fine scale methods gave sometimes contradictory indications of azoxystrobin impacts on soil microbial communities.18S rRNA analysis of fungal and nematode communities produced largely complementary results indicating significant impacts only in the 25 mg kg−1 treatments across the 4 month experimental period.The exception to this was the fungal diversity analysis which showed a concentration dependent reduction in diversity, followed by a recovery in the 1–10 mg kg−1 treatments after 3 months.This result more closely matched that of the dehydrogenase analysis which showed a significant concentration dependent impact on activity after 1 and 2 months, followed by a recovery to the control levels by 3 months in the 1 and 5 mg kg−1 treatments.In contrast, no significant impacts on soil biomass-N were observed for any azoxystrobin concentration, at any sampling time.There were no apparent relationships between dehydrogenase activity, soil microbial biomass-N, microbial diversity, and azoxystrobin degradation.NMDS analysis based on fungal and nematode 18S rRNA T-RFLP data and Shannon diversity analysis of the nematode community, showed a “threshold” response with no significant impacts observed in the 1–10 mg kg−1 treatments, but significant changes in community structure and reductions in diversity recorded in the 25 mg kg−1 treatments.A “threshold” response was also observed for the fungal qPCR analysis, but at a much lower concentration than was suggested by the diversity analysis, with all treatments exhibiting significantly lower SSU rRNA gene abundance than the unamended controls.These “threshold” relationships between stress levels and community diversity differs from both the “extinction” and “competitive exclusion” responses proposed by Giller et al.However, fungal community diversity only exhibited a “threshold” relationship after 1 and 2 months, after which a response similar to the “competitive exclusion” relationship proposed by Giller et al. 
was observed with the 1, 5, and 10 mg kg−1 treatments exhibiting significantly higher diversities than the control samples.Interestingly, this increase in fungal diversity after 3 months appears to correlate well with the observed recovery in soil dehydrogenase activity.In contrast, the diversity in the 25 mg kg−1 treatments remained significantly lower than the control at all of the sampling times.Soil microbial biomass-N was not significantly affected by azoxystrobin application.NMDS and Shannon diversity analysis showed that the soil bacterial community was similarly unaffected, unlike the fungal and nematode communities.This may suggest that the chloroform fumigation assay used for biomass-N analysis preferentially represented a measure of the bacterial community.If so, this would differ from previous suggestions that fungal populations are more susceptible to chloroform fumigation than bacterial populations.However, the effectiveness of biomass-N analysis as a measure of total biomass can be limited by potentially large variations in the N contents of different microbial groups.As a result, future work may benefit from the use of biomass-C as a more reliable indicator of total biomass.In contrast, dehydrogenase activity showed a proportional, time-dependent effect, with increasing pesticide concentrations resulting in greater effects on the microbial community.This suggests that dehydrogenase analysis could have been a stronger indicator of impacts on nematode and/or fungal communities.In the 1 and 5 mg kg−1 treatments there was an initial decrease in dehydrogenase activity, which indicated that the microbial community had a low initial resistance to azoxystrobin application.However, by the 3 month sampling point dehydrogenase activity had returned to levels comparable to those observed in the un-amended controls.This suggests that the microbial community had recovered from the initial impact caused by the pesticide, despite the HPLC data showing that between 15% and 25% of the initially applied azoxystrobin still remained in the system, suggesting that there was not a direct link between dehydrogenase activity and azoxystrobin degradation.In contrast, in the 10 and 25 mg kg−1 treatments dehydrogenase activity was significantly higher than in the controls after 4 months.This increase in dehydrogenase activity is again indicative of community recovery following removal of the fungicide stress, but the fact that it was elevated over the control could also reflect either the removal of competitive interactions, or new microbial growth following the utilization of biomass killed by the azoxystrobin.Both fungal and nematode communities were found to be susceptible to the application of azoxystrobin, particularly the 25 mg kg−1 treatment.Fungal community structure, diversity, and 18S rRNA gene copy number were all significantly impacted.Additionally, significant community structure and diversity impacts were also observed for nematode communities.SIMPER analysis of fungal T-RFLP traces showed that the TRFs at 143, 146 and 148 bp predominated in the 0–10 mg kg−1 treatments but were absent in the 25 mg kg−1 samples.Predicted TRF sizes within this range were also present in the clone library produced from 0 mg kg−1 samples, but not in the 25 mg kg−1 treatment.These clones showed homologies to Byssoascus striatosporus along with the plant pathogens Gibberella fujikuroi and Fusarium oxysporum.All of these species are ascomycete fungi.This raises the possibility that ascomycete fungi may be more 
susceptible to azoxystrobin application.Indeed, of the 24 taxa from the clone library that decreased following the application of azoxystrobin, 22 were ascomycetes with the other two being basidiomycetes.This appears to contradict the widely-accepted view of the broad-spectrum nature of this compound.In contrast, there was a significant increase in the number of sequences showing homology to zygomycete fungi following azoxystrobin application.In particular, 49.5% of the sequences from the amended samples showed homology to Zygomycete sp.AM-2008a, compared to 9% in the un-amended samples.Many zygomycete species are fast-growing r-strategy fungi and this could explain their rapid growth to fill the niche left by the negatively impacted ascomycete fungi.In contrast, there were no significant changes in basidiomycete communities between the two treatments.However, these changes in fungal community structure, diversity, and 18S rRNA gene copy number were not mirrored by significant changes in the overall bacterial, archaeal or Pseudomonad communities.This suggests that the bacterial community were neither competing with the fungi, nor using fungal biomass killed by the fungicide as a nutrient source.Soil nematode communities were impacted by azoxystrobin application with both community structure and diversity found to be significantly affected.This is important as it represents a non-target impact of azoxystrobin on higher trophic level organisms.Sequences with homology to X. rivesi, Xiphinema chambersi, and Bitylenchus dubius decreased in abundance following the application of azoxystrobin, whereas sequences with homology to P. neglectus and an uncultured Xiphinema sp. became more prevalent.Unfortunately, the grazing habits and other traits of many of the nematode species identified by clone libraries in this paper remain unknown so it is not clear whether these changes reflect direct impacts of the fungicide, or indirect effects associated with changes to the biomass of fungal taxa on which some nematodes graze.This serves to emphasise that current knowledge of the fine-scale aspects of soil nematode community dynamics is very limited, particularly in comparison with other microbial groups such as bacteria and fungi.Edel-Hermann and colleagues raised this point, and allied it to the fact that different microbial groups do not exist in isolation within the soil, but interact extensively in areas such as nutrient cycling, competition and predation.This represents an extensive knowledge gap considering the role that nematodes are considered to play in regulating the structure and function of the soil microbial community as a whole.The main aim of this work was to ascertain the impacts of pesticides on soil microbial resistance and resilience responses at different trophic levels using the widely-used, broad-spectrum fungicide azoxystrobin as a model compound.Azoxystrobin application significantly affected the structure and diversity of fungal and nematode communities over the 4-month period.The only evidence found to support either of the relationships between stress levels and community diversity proposed by Giller et al. 
was observed in the fungal diversity analysis after 3 months.Instead, the molecular analyses mostly appeared to show a “threshold” relationship where community diversity was unaffected between 0 and 10 mg kg−1.However, between 10 and 25 mg kg−1 the diversity decreased rapidly, with no apparent recovery.Similarly, there was a significant concentration-dependent impact on dehydrogenase activity.Resilience responses in dehydrogenase activity were recorded for the 1 and 5 mg kg−1, but not the 10 and 25 mg kg−1 treatments.However, bacterial, archaeal, and pseudomonad communities were unaffected by azoxystrobin application.Current knowledge on the effects of pesticide application on non-target microbial communities in soils remains limited, particularly in relation to higher trophic level microorganisms.The work presented here gives an indication of the resistance and resilience of such communities following perturbation and how stress levels may affect community diversity.However, further research in this area could benefit from the use of meta-genomic approaches to study changes in microbial community structure in response to strobilurin fungicide application, or changes in the expression of genes thought to be involved in stress responses.
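Two quantities that recur throughout the results above are simple to compute: the Shannon diversity index derived from relative T-RFLP peak intensities, and the DT50 obtained from the residue time course (the study reports DT50 values calculated with GenStat's vinterpolate function and ordination analyses run in Primer6). The sketch below only makes the underlying arithmetic explicit; the peak intensities and residue values are hypothetical placeholders, and the linear interpolation between bracketing sampling points is an assumed way of interpolating a DT50, not a reproduction of the authors' exact procedure.

```python
# Minimal sketch (hypothetical data): Shannon diversity H' from T-RFLP peak
# intensities, and DT50 estimated by linear interpolation of residue data.
import numpy as np

def shannon_diversity(peak_intensities):
    """H' = -sum(p_i * ln(p_i)), with p_i the relative intensity of each TRF."""
    p = np.asarray(peak_intensities, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def dt50_by_interpolation(times_d, residues):
    """Time (days) at which residues first fall to 50% of the applied dose,
    interpolated linearly between the two bracketing sampling points."""
    residues = np.asarray(residues, dtype=float)
    half = residues[0] / 2.0
    for i in range(1, len(residues)):
        if residues[i] <= half:
            t0, t1 = times_d[i - 1], times_d[i]
            r0, r1 = residues[i - 1], residues[i]
            return t0 + (r0 - half) * (t1 - t0) / (r0 - r1)
    return None  # 50% dissipation not reached within the sampling period

# Hypothetical T-RFLP peak intensities (fluorescence units) for one sample
print(round(shannon_diversity([850, 420, 300, 120, 60]), 3))

# Hypothetical residue time course for a 25 mg/kg treatment, sampled monthly
print(round(dt50_by_interpolation([0, 30, 60, 90, 120],
                                  [25.0, 11.0, 7.5, 6.8, 6.5]), 1))
```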
The application of plant protection products has the potential to significantly affect soil microbial community structure and function. However, the extent to which soil microbial communities from different trophic levels exhibit resistance and resilience to such compounds remains poorly understood. The resistance and resilience responses of a range of microbial communities (bacteria, fungi, archaea, pseudomonads, and nematodes) to different concentrations of the strobilurin fungicide, azoxystrobin were studied. A significant concentration-dependent decrease, and subsequent recovery in soil dehydrogenase activity was recorded, but no significant impact on total microbial biomass was observed. Impacts on specific microbial communities were studied using small subunit (SSU) rRNA terminal restriction fragment length polymorphism (T-RFLP) profiling using soil DNA and RNA. The application of azoxystrobin significantly affected fungal and nematode community structure and diversity but had no impact on other communities. Community impacts were more pronounced in the RNA-derived T-RFLP profiles than in the DNA-derived profiles. qPCR confirmed that azoxystrobin application significantly reduced fungal, but not bacterial, SSU rRNA gene copy number. Azoxystrobin application reduced the prevalence of ascomycete fungi, but increased the relative abundance of zygomycetes. Azoxystrobin amendment also reduced the relative abundance of nematodes in the order Enoplia, but stimulated a large increase in the relative abundance of nematodes from the order Araeolaimida. © 2014 .
750
Cultivar differences in the grain protein accumulation ability in rice (Oryza sativa L.)
Rice is the staple food for nearly half of the world’s population, primarily in Asia, including many developing countries.Rice is an important source of proteins and calories.Although nitrogen fertilization may increase rice grain protein content, development of high- GPC cultivars is expected to ensure consistently high GPC.However, protein content affects the texture of cooked rice, increases hardness, and reduces stickiness.In Japan, where sticky and tender cooked rice is favored, high GPC is considered to negatively affect rice eating quality and is not desirable.Thus, there are diverse demands in terms of rice GPC in different regions of the world.Different cultivars differ in GPC.GPC is also affected by the nitrogen application rate, nitrogen application method, and other cultivation practices, such as planting density and weed control.GPC in rice is correlated with plant nitrogen concentration at various growth stages.There are genotypic differences in the responses of GPC to nitrogen levels.Highly significant effects of interactions between genotype and environment or management on GPC have been reported.GPC may be affected by genotypic differences in nitrogen uptake ability at a given soil nitrogen availability, and by the ability to incorporate the absorbed nitrogen and accumulate storage protein in seeds.The grain protein accumulation ability is a characteristic independent of plant nitrogen status and thus can be a stable criterion for the evaluation of the effects of genotype on GPC.However, to the best of our knowledge, no evaluation of the effects of genotype on GPA has been reported.GPA is important for determination of grain quality and grain yield.The potential capacity of the sink to accumulate assimilates is suggested to be a measure of sink strength; therefore, GPA can be a good measure of sink strength for nitrogen.GPA may affect nitrogen dynamics during the grain-filling period and remobilization of nitrogen from leaves, which reduces photosynthetic ability.Therefore, characterization of genotypes with different GPA may be useful for optimum nitrogen management for each cultivar.In this study, we hypothesized that GPC is determined by GPA and the amount of nitrogen available for developing grain per unit sink capacity and GPA was defined as the regression coefficient between GPC and logarithm of Nav.The total amount of nitrogen available for grain is the sum of the amount of new uptake during the grain-filling period and the amount which can be remobilized from leaves.Sink capacity affects nitrogen dynamics during the grain- filling period.The objective of this study was to determine and compare GPA among six high-yielding, lodging-tolerant cultivars with different GPC.To produce a wide variation in Nav, nitrogen topdressing at heading and spikelet thinning were conducted.Field experiments were conducted at Ishikawa Prefectural University, Nonoichi, Japan in 2014 and 2015, in Gray Lowland soil.Six Japanese rice cultivars with different genetic backgrounds – Bekoaoba, Habataki, Takanari, Hokuriku193, Momiroman, and Akenohoshi – were grown under irrigation.These are high-yielding lodging-tolerant multipurpose cultivars that have been bred by crossing japonica and indica cultivars.Bekoaoba, Momiroman, and Akenohoshi are japonica-dominant, whereas Habataki, Takanari, and Hokuriku193 are indica-dominant.Seeds were sown in a seedling nursery box.In 2014, 21-day-old seedlings of Bekoaoba, Habataki, and Takanari and 35-day-old seedlings of Hokuriku193, Momiroman, and 
Akenohoshi were transplanted on 23 May.In 2015, 25-day-old seedlings of early cultivars and 35-day-old seedlings of late cultivars were transplanted on 22 May.One seedling per hill was transplanted.Late cultivars were sown earlier to ensure harvesting before rainy and cold weather starts.All cultivars received a total of 8 g N m−2.At heading, half of each plot received 4 g N m−2 as topdressing while the other half did not.Nitrogen was applied as ammonium sulfate.Phosphorus and potassium were also applied to all plots as basal fertilizers.Weeds, insects, and diseases were controlled with standard chemicals as necessary.The experimental plots were arranged in a split-plot design with three replicates.Plants were sampled at the full-heading stage and at maturity.Maturity was regarded as the date at which more than 95% of spikelets became yellow in cultivars except Habataki and Takanari.In Habataki and Takanari, the date at which most leaves senesced was regarded as maturity because it was earlier than the date at which more than 95% of spikelets became yellow.Twelve plants were harvested from each plot.Ten plants were dried for 72 h at 80 °C and weighed.Two plants with the average number of panicles were separated into the leaf blade, leaf sheath + culm, panicle, and dead parts, which were all dried as above and weighed.Each dried sample was ground to a powder with a cyclone sample mill with a 0.5-mm screen.The nitrogen content was measured by the Dumas combustion method.Filled grain was then ground to a powder, and nitrogen content was measured.The protein content was calculated by multiplying the nitrogen content by 5.95.GPC was adjusted to 14% moisture content.Harvest index was calculated as the fraction of grain dry weight relative to total above ground dry weight at maturity.Nitrogen harvest index was calculated as the fraction of nitrogen in grain relative to the total above ground plant nitrogen content at maturity.At full heading, primary rachis branches except the uppermost and the second ones in 2014 and those except the uppermost one in 2015 were removed from all panicles of eight neighboring plants in each subplot.Plants were harvested at maturity.Dry weight, nitrogen content, and GPC were measured as in intact plants.Analysis of variance was performed using SPSS version 21 according to the split-plot design to assess cultivar differences, the effects of nitrogen topdressing at heading, and the effects of cultivar × nitrogen interactions.For each cultivar, the significance of the differences between mean values was analyzed using Tukey’s test.Multiple regression analysis was conducted to determine the contribution of GPA to NHI.The homogeneity of regression coefficients between GPC and Nav was tested according to Gomez and Gomez.The heading dates were 2 Aug to 22 Aug in 2014 and 29 July to 15 Aug in 2015.The range of mean temperatures during the grain-filling period was 21.4–24.5 °C in 2014 and 22.0–25.7 °C in 2015.The grain-filling period was shorter in Habataki and Takanari due to early leaf senescence.The amount of solar radiation was 419–540 MJ m−2 in 2014 and 412–511 MJ m−2 in 2015.Mean temperature tended to be lower and the amount of radiation during the grain-filling period tended to be higher for cultivars with later heading.Dry matter production during the whole growth period averaged over treatments was 1249–1817 g m−2 in 2014 and 1142–1631 g m−2 in 2015; in both years, this parameter was highest in Hokuriku193 and lowest in Bekoaoba.Nitrogen topdressing at heading 
increased dry matter production by 67 g m−2 in 2014 and 81 g m−2 in 2015.Dry matter production during the grain-filling period was 633–823 g m−2 in 2014 and 525–705 g m−2 in 2015.Single-grain weight was approximately 50% larger in Bekoaoba than in other cultivars in both years, whereas the number of spikelets was 40% smaller.Sink capacity was highest in Momiroman and lowest in Bekoaoba and Habataki in both years.Hulled grain yield was 719–832 g m−2 in 2014 and 679–808 g m−2 in 2015; differences among cultivars in hulled grain yield were much smaller than differences in dry matter production.Nitrogen topdressing at heading increased nitrogen uptake during the grain-filling period by 2.9 g m−2 in 2014 and 2.6 g m−2 in 2015.There was no significant cultivar difference in nitrogen uptake during the grain-filling period.Nitrogen topdressing at heading increased nitrogen uptake during the whole growth period by 2.9 g m−2 in 2014 and 2.8 g m−2 in 2015.This parameter was highest in Hokuriku193 in 2014 and in Hokuriku193 and Takanari in 2015 and was lowest in Bekoaoba in both years.In both years, HI varied widely, 39–51% in 2014 and 42–51% in 2015.In both years, HI was highest in Bekoaoba and lowest in Hokuriku193.There was no significant effect of nitrogen topdressing at heading on HI.NHI also varied widely among cultivars, 55–72% in 2014 and 62–72% in 2015.In both years, NHI was highest in Takanari and lowest in Hokuriku193 and Akenohoshi.There was no significant effect of nitrogen topdressing at heading on NHI.There were highly significant differences in GPC among cultivars in both years.GPC of intact plants was in the range of 6.4–7.7% in 2014 and 6.2–7.6% in 2015.GPC was highest in Takanari and lowest in Momiroman in both years.Nitrogen topdressing at heading significantly increased GPC.Interaction between cultivars and nitrogen topdressing was highly significant.The differences in GPC among cultivars were increased by nitrogen topdressing, and the difference between Takanari, the highest, and Momiroman, the lowest, was about 1.3% in both years.Spikelet thinning markedly increased GPC and cultivar differences.GPC of spikelet-thinned plants was 8.0–12.5% in 2014 and 8.4–13.4% in 2015.GPC was highest in Takanari and lowest in Momiroman and Bekoaoba, with a difference of about 3% in both years.In each cultivar, there was a logarithmic relation between GPC and Nav, and the coefficients of determination were higher than 0.915 and highly significant.In the regression equation GPC = A × Ln(Nav) + B, A is the regression coefficient and B is a constant.The A values varied widely among cultivars, from 0.969 in Bekoaoba to 1.820 in Takanari.A test for homogeneity of regression coefficients revealed a highly significant difference in the regression coefficients among cultivars.Multiple regression analysis was conducted to determine the contribution of GPA to NHI.With GPA and the ratio of sink capacity to dry matter production as independent variables, the overall regression was highly significant with coefficients of determination of 0.801 in 2014 and 0.716 in 2015.Regression coefficients for both GPA and the ratio of sink capacity to dry matter production were significant.Partial correlation coefficients for GPA were 0.544 in 2014 and 0.627 in 2015, whereas those for the ratio of sink capacity to dry matter production were 0.736 in 2014 and 0.679 in 2015.However, with GPC as an independent variable instead of GPA, the partial correlation coefficient for GPC was not significant, although the overall
regression was significant with coefficients of determination of 0.609 in 2014 and 0.552 in 2015.The cultivar difference in GPC varied between N application rates and years.For example, in the N− plots in 2014, GPC was higher in Hokuriku193 than in Akenohoshi but not significantly different between Hokuriku193 and Takanari, whereas GPC was lower in Hokuriku193 than in Takanari and there was no significant difference between Hokuriku193 and Akenohoshi in N+ plots in 2014 and 2015.The highly positive interactions between cultivar and nitrogen management are in good agreement with previous studies.The marked increase in GPC by spikelet-thinning treatment supports the effects of sink capacity on GPC, suggesting an association of nitrogen availability per unit sink mass with GPC.The negative correlation between GPC and grain yield may reflect the positive effect of sink capacity on grain yield and its negative effect on Nav.We found a logarithmic relation between GPC and Nav, with different regression coefficients for different cultivars.The environment may affect GPC through Nav, and differing relationships between environment such as soil nitrogen level and Nav for each genotype may explain the interaction between genotype and environment for GPC.Nitrogen uptake ability is one of the major traits determining Nav and is affected by root architecture, morphology, transporter activity and carbon availability.There is wide genotypic variation in nitrogen uptake ability in rice and the response of nitrogen uptake to different soil nitrogen availability may differ among genotypes.Sink capacity is genetically determined but is also highly influenced by the environment.There is also wide genotypic variation in sink production efficiency, i.e. sink capacity per nitrogen uptake at full heading, which directly affects Nav.Nav may be affected by the environment through grain weight.High radiation intensity during grain-filling period increases single- grain weight, which would decrease Nav.There is a cultivar difference in the radiation use efficiency in rice and there may be a cultivar difference in the relation between assimilate availability and grain weight.The environment may affect Nav through differences in the response of growth duration to daylength or temperature.Late-maturing cultivars intercept larger amounts of solar radiation and produce more dry matter, which reduces plant nitrogen concentration, resulting in lower GPC than in early-maturing cultivars.The wide variation in GPC due to different environments and management practices can be well accounted for by the variation in Nav.Although there was only a small cultivar difference in GPC at low Nav, the difference became larger as Nav increased; consequently, there was a wide difference in the regression coefficient A among cultivars.The regression coefficient A indicates the increment in GPC in response to the increment in the logarithm of Nav: higher A values show higher GPC at a given Nav.Therefore, the regression coefficient A represents the intrinsic ability of grain to accumulate protein, or GPA.GPC may be affected by temperature during grain-filling period directly or indirectly.High temperature shortens grain- filling period and reduces the amount of assimilate available for developing grain and thus grain weight, which would increase Nav because a large part of total amount of nitrogen available for developing grain would be determined by the grain-filling period.As a result, GPC would be increased but the regression coefficient 
in the relation between Nav and GPC would not be affected by temperature unless temperature directly affects grain protein accumulation.Yamakawa et al. revealed that high temperature during grain-filling period reduced accumulation of some storage protein but the high temperature used was far beyond the optimum temperature.Thus the direct effect of temperature on the regression coefficient is unknown in the optimum or suboptimum temperature range.GPA affected plant nitrogen dynamics during the grain-filling period.NHI was explained well by multiple regression with GPA and the ratio of sink capacity to dry matter production as independent variables.GPA can be considered as the sink strength for nitrogen and the ratio of sink capacity to dry matter production as the relative sink size.The high NHI indicates a high proportion of nitrogen accumulated in grain relative to that remaining in the vegetative parts at maturity.Although NHI represents only the ultimate result of plant nitrogen dynamics, it indicates the involvement of GPA in plant nitrogen dynamics.The higher GPA means higher allocation of nitrogen to grain, i.e. a higher proportion of nitrogen acquired during the grain-filling period distributed to developing grain or a larger amount of nitrogen remobilized from vegetative organs to grain.However, the partial correlation coefficient for the effect of GPC on NHI was not significant.GPC is only a resultant of Nav and GPA and does not represent sink strength for nitrogen.The importance of GPA in nitrogen dynamics increases in high-yielding cultivars, whose nitrogen uptake is generally higher than that of standard cultivars.Sink production efficiency is larger in high-yielding cultivars.High-yielding cultivars with large sink capacities require more nitrogen during the grain-filling period than standard cultivars.This higher demand for nitrogen is met by remobilizing nitrogen from vegetative parts.The amount of nitrogen remobilized from leaves to panicles is larger in rice cultivars with larger sink size.Nitrogen remobilization from leaves decreases photosynthetic capacity, because approximately 80% of total leaf nitrogen in rice plants is invested in chloroplasts, and the synthesis and amount of Rubisco reflect plant nitrogen status.Therefore, adequate nitrogen management according to GPA of each cultivar is necessary to make the best use of its yield potential.This is especially important for high- yielding cultivars because a small increase in GPA would result in a greater increase in total nitrogen demand in cultivars whose sink capacity is greater than in standard cultivars.GPA can be a good criterion for evaluating genotypes for GPC because GPA is genotype-specific and is unaffected by soil nitrogen availability.GPC as such is not suitable for genotype evaluation because its heritability is low possibly due to its dependence on Nav.Although some QTLs for GPC have been reported, such QTLs should also include QTLs for traits that affect sink capacity and nitrogen uptake, such as spikelet number, grain size, and root profile, because these traits are indirectly associated with GPC.Some of such QTLs may not be detected under certain environments or nitrogen managements.Ye et al. 
compared GPC of 21 single- chromosome substitution lines in 8 environments and found a highly significant interaction between substitution and environment.Some substitutions had a large positive effect in one environment but no or negative effect in another.It would be very interesting to determine which of the substitutions is associated with GPA.High GPA can be targeted in breeding programs for regions where high GPC is preferable from the point of view of nutrition.However, high GPA does not guarantee high GPC.Even in high-GPA cultivars, sufficient Nav is necessary to attain high GPC.Maintaining Nav at a certain level requires sufficient nitrogen uptake during the grain-filling period and in some cases also the control of sink capacity at the expense of yield.Nevertheless, high GPA results in efficient grain accumulation of protein synthesized using absorbed nitrogen.In regions where low GPC is preferred from the point of view of eating quality, such as Japan, low GPA can be targeted.Appropriate nitrogen management for cultivars with low nitrogen demand for grain may enable high yields with relatively low nitrogen input, which would reduce the environmental burden and cultivation cost.In conclusion, we found that GPC is determined by Nav and GPA.GPA is a cultivar-specific parameter and a measure of sink strength for nitrogen, because it affects plant nitrogen dynamics during the grain-filling period.Because GPA does not depend on plant nitrogen status or sink capacity, it would be a good trait for evaluation of the effects of genotype on GPC.Furthermore, GPA is an important trait for optimization of the nitrogen management method for each cultivar.
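The cultivar comparison above hinges on the regression GPC = A × Ln(Nav) + B, with the slope A read as the grain protein accumulation ability. A minimal sketch of estimating that slope per cultivar by ordinary least squares is given below; the Nav and GPC values are invented for illustration (chosen only so that the fitted slopes fall in the neighbourhood of the published values), and the generic least-squares fit stands in for the GenStat analysis and the Gomez and Gomez homogeneity test actually used in the study.

```python
# Minimal sketch (hypothetical data): per-cultivar fit of GPC = A*ln(Nav) + B,
# where the slope A is interpreted as grain protein accumulation ability (GPA).
import numpy as np

def fit_gpa(nav, gpc):
    """Least-squares fit of GPC against ln(Nav); returns (A, B, R^2)."""
    x = np.log(np.asarray(nav, dtype=float))
    y = np.asarray(gpc, dtype=float)
    A, B = np.polyfit(x, y, 1)          # slope first, then intercept
    residuals = y - (A * x + B)
    r2 = 1.0 - residuals.var() / y.var()
    return A, B, r2

# Hypothetical Nav (nitrogen available per unit sink capacity, arbitrary units)
# and GPC (%) across treatments (topdressing, spikelet thinning, years)
observations = {
    "Takanari": ([1.0, 1.6, 2.5, 4.0, 6.5], [6.6, 7.5, 8.3, 9.1, 10.0]),
    "Bekoaoba": ([1.0, 1.6, 2.5, 4.0, 6.5], [6.3, 6.8, 7.2, 7.6, 8.1]),
}
for cultivar, (nav, gpc) in observations.items():
    A, B, r2 = fit_gpa(nav, gpc)
    print(f"{cultivar}: A (GPA) = {A:.2f}, B = {B:.2f}, R^2 = {r2:.3f}")
```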
The demand for rice grain protein content (GPC) differs in different regions of the world. Despite large differences in GPC among cultivars, evaluation of the effects of genotype on GPC is difficult because GPC is influenced not only by cultivar traits (such as nitrogen uptake ability, sink size and heading date) but also by the environment. We hypothesized that grain protein accumulation ability (GPA) also affects GPC. The objective of this study was to clarify the differences in GPA among six lodging-tolerant, high-yielding Japanese cultivars: Bekoaoba, Habataki, Takanari, Hokuriku193, Momiroman, and Akenohoshi. To produce a wide variation in nitrogen availability per unit sink capacity (Nav), we used nitrogen topdressing at heading and spikelet-thinning treatment. In each cultivar, we found a logarithmic relation between GPC and Nav: GPC = A × Ln(Nav) + B, where A is the regression coefficient and B is a constant. A highly significant difference in regression coefficients among cultivars was found (P < 0.01). The regression coefficient was considered to be a measure of GPA; it varied from 0.969 in Bekoaoba to 1.820 in Takanari. This relation suggests that GPC is determined by Nav and GPA and that the environment affects GPC through Nav. GPA is a good criterion for evaluation of the effects of genotype on GPC. Nitrogen harvest index was highly significantly explained by multiple regression with GPA and the ratio of sink capacity to total dry matter production as independent variables, suggesting the influence of GPA on plant nitrogen dynamics during the grain-filling period. Therefore, it would be useful to determine the cultivars' GPA values for optimizing nitrogen management.
751
Mitigation versus adaptation: Does insulating dwellings increase overheating risk?
The buildings sector accounts for 25% of global fossil fuel related greenhouse gas emissions .These emissions arise primarily from the demand for space heating and cooling , hence, improved building insulation lies at the heart of energy reduction policies .Taking the UK as an example, buildings represent the sector with the single greatest emissions, accounting for 37% of total CO2e emissions and, in order to meet the planned national trajectory of emission cuts, considerable reductions are expected from the sector.Increased wall insulation is expected to provide 42% of this reduction, heating-related measures 27%, other measures 24%, and building fabric measures other than wall insulation 6% .Consequently, at 48%, improved insulation/fabric will be the largest contributor and therefore critical in meeting the trajectory.As seen in the European heat wave of 2003, where over 14,000 died inside buildings in Paris alone excessive temperatures in buildings can lead to a severe loss of life.Several studies have suggested that improved insulation might exacerbate overheating, implying a direct conflict between mitigation and adaptation for this key policy.If correct, these studies suggest alternative routes to mitigation will have to be found, or carbon trajectories rethought with much greater cuts from other sectors such as transport or electricity generation .However, other studies have found the opposite.For example, the empirical evidence collected during the Paris heat wave shows higher internal temperatures in rooms without insulation .Given that improved insulation in buildings is one of the central planks of climate change policy in many countries, and a belief that this might exacerbate temperatures would be a serious challenge, these contradictions therefore need to be resolved.Increasing insulation could be regarded as a measure that will reduce the ability of a building to dissipate heat, and hence exacerbate overheating.However, this will only be the case when the external temperature is lower than the internal, and is further complicated by fabric elements that receive direct sunlight reaching much higher temperatures, and therefore additional insulation reducing external heat gains.There is also the need to consider the size of the internal/external temperature difference.In winter this might be 20 K or more, in summer in much of the world it will be considerably smaller.In naturally ventilated buildings in winter, air ingress is likely to be low and fabric heat exchange will play an important role.In summer, however, much larger air flows will be the norm to alleviate high internal temperatures, making air ingress the likely dominant heat path—and more so in well insulated buildings.The situation is also expected to vary over the day as the internal/external temperature difference changes sign.It might also potentially differ with occupant willingness to open and close windows and the internal heat gains.Moreover, other effects such as the dynamic influence of thermal mass or shading further obscures an intuitive characterization of the role of increased insulation in overheating.Several studies have addressed these concerns.Chvatal and Corvacho studied the relationship between overheating and insulation.They altered thermal transmittances, shading and night ventilation for a free-running dwelling in various locations, showing that trends in discomfort hours shifted in sign according to shading conditions in certain circumstances: it could either increase or decrease the 
duration of overheating.They also found that additional insulation was detrimental in cases with extremely high, and probably unrealistic, levels of overheating; whereas it was not for lower, and more realistic, levels of overheating.This shift in the sign of the effect was found for solar energy transmittances ranging from 0.32 to 0.61, but the low summertime purge ventilation rates considered in most of the work suggest these results do not correctly account for occupant behaviour, that would result in much higher ventilation rates.Porrit et al. , performed several studies regarding measures to lessen overheating during heat waves as part of the Community Resilience to Extreme Weather project.Focusing on retrofits and mid-2000 dwellings, they also assessed orientations, wall coatings, glazing types and occupancy profiles, showing that all parameters had an impact on overheating.The research concluded that the control of solar gains was the most effective action to reduce overheating, and that insulation was also beneficial except when placed in the layers closest to the occupied space.Mavrogianni et al. arrived at similar conclusions about insulation when characterising London dwellings and retrofit measures.Gupta and Gregg further supported these findings but stressed that overheating depends highly on how measures are combined.McLeod et al. considered the performance of Passivhaus Institute Standard and Fabric Energy Efficiency Standard compliant dwellings under a changing UK climate.It was shown that the slightly better building envelope of the PHIS case outperformed the FEES variant.The study also included a sensitivity analysis that ranked parameters according to the increase in overheating risk they posed, as follows: glazing ratio > thermal mass > shading device > airtightness.Unfortunately, the study did not include natural ventilation, a key measure against overheating.However, van Hooff et al. found exactly the opposite when looking into changes in U-values from 0.20 W m-2 K-1 to 0.15 W m-2 K-1 with increasing insulation significantly increasing overheating.Besides the potential influence of overheating criteria, it is not clear if the differences in impact are caused by different choices of locations, parameters or assumptions, as there is not enough information in the publications to compare them.Taylor et al. , building on the studies of Mavrogianni et al. 
, focused on the influence of different locations, obtaining significant changes in overheating patterns within the UK.Yet, the performance of each measure remained qualitatively similar for most parameters.Additionally, the study correlated wall retrofits to internal temperature increases of 0.1–3.5 K, a greater effect than the ±1 K variation obtained in the previous study but similar to the combined reduction due to roof and windows retrofits.A further publication, based on the findings from CREW, investigated how overheating changes for different occupancy patterns and considered different levels of engagement with the operation of windows and shading devices .As expected, overheating increased significantly for cases with higher internal gains and lower occupant engagement in the operation of openings, in particular for the pensioners.The work clearly quantified the extent to which occupant behaviour alters overheating, and the implications this can have for people not able to operate the house as advised.Unfortunately, highly insulated dwellings were outside of the scope of these studies, as was the impact of different levels of insulation.Overall, the findings reviewed in the previous section show a tendency towards a holistic characterization of the problem, arriving at the idea that every parameter is equally critical.In addition, some authors have suggested, sensibly, that the combined performance of building elements is not the sum of individual ones.Few studies, however, have characterised the contributions of each parameter concurrently with the changes in others.Taylor et al. specifically focused on the relationship between overheating and the characteristics of London dwellings.Overall, they found similar trends as other studies did, although the ranking of the influence of parameters varied by location in the UK.Unfortunately, this work does not specifically cover the impact of insulation because the aim was to characterise the building stock.Another study focused on the performance of retrofit packages, but it only covered a limited number of variables and it did not include low U-values .On the other hand, McLeod et al. performed a sensitivity analysis of thermal mass, glazing ratio, shading, airtightness and internal gains for the previously mentioned PHIS and FEES variants.They found that the most important factors were glazing ratio, followed by thermal mass, shading devices and airtightness.Although this ranking should be contextualized within the range of the variables under consideration, it provides a good starting point to evaluate the importance of different parameters on overheating.Unfortunately, different purge ventilation strategies were not included in the sensitivity analysis.It has been pointed out that real, rather than modelled, modern buildings might overheat significantly more than older ones .However, increased levels of insulation are only one of the many differences between older and newer buildings, making it hard to connect cause with effect.Pairwise comparisons with different building fabrics do not exist, but there have been several monitoring studies reporting the performance of highly insulated dwellings .Dwellings in these studies developed high indoor temperatures, but the causes pointed to other driving forces, particularly issues with building services, e.g. 
gaps in pipe insulation, poor commissioning or heating on during summer, rather than improved building fabric.On the contrary, during the European heat wave of 2003, it was found that older houses and those lacking thermal insulation were at a higher risk .A further point is that thermal comfort research highlights that indoor conditions should be evaluated by occupants themselves whenever possible ."The abovementioned field studies monitored indoor air properties without the associated occupant's thermal satisfaction, for which they compared results with standard overheating criteria.Therefore, they do not indicate whether occupants wanted to be at a lower temperature.In fact, Baborska-Narożny et al. showed that occupants might not take actions to reduce temperatures.Fletcher et al. suggested that familiarity with the mechanical systems, its configuration and perceived security can play a significant role in these groups, although they also acknowledged the need to further link assessed overheating with actual occupant perception.In this sense, Vellei et al. analysed the differences in indoor conditions between vulnerable and non-vulnerable groups.They showed that the dwellings of vulnerable people were statistically warmer than the non-vulnerable, but also that vulnerable people indeed preferred warmer conditions when questioned.The aim of our work is to clarify whether additional insulation exacerbates overheating.Given the challenges arising in previous research, this covers a complete, realistic and consistent range of building parameters, occupant behaviours, locations across the world and definitions of overheating.Unfortunately, this cannot be achieved via a meta-study due to wide differences in methodology, scope and building parameters used in previous work.Key to doing this, we will present enough information for the results to be reproduced by others, and to cover enough variants of the situation to be comprehensive.In particular, the objectives are:To quantify the combined impact of building features on overheating.To understand the impact of increased insulation on overheating risk.Point to why previous studies have been contradictory.The paper is organised as follows.Firstly, we propose a methodological approach that combines time-resolved simulations of indoor conditions in parametrically-designed dwellings to encompass a wide range of conditions and scenarios.Next, the influence of insulation and every other parameter is analysed and discussed.To this end, techniques such as regression and classification trees as well as classical hypothesis testing techniques will be applied to express the results in a meaningful way and to draw generally-applicable conclusions.The results will allow us to determine the role of increased insulation, with key findings summarised in the last section.Like almost all work on the topic, we calculate overheating performance using mathematical models of buildings because this allows for pairwise comparisons to isolate the influence of changing a particular parameter.In our case, the simulations are based on validated models that replicate the performance of real monitored dwellings.In total 576,000 cases were modelled.Each case comprises specific combinations of the following building parameters: insulation level, location, building type, thermal mass, windows size, shading, natural ventilation rate and control, internal gains, infiltration and orientation.We purposely avoid attempting to weight these samples with their true distributions, as 
these are unknown.Instead, every possible combination is considered, regardless of its propensity to exist.This ensures all possibilities are covered and no bias is introduced.Although overheating is important in all buildings, naturally ventilated buildings and their occupants are at far greater risk due to a lack of any air conditioning or mechanical ventilation system to provide cooling.In addition, due to the greater impact of overheating in vulnerable groups, particularly the elderly, and the greater time spent at home, dwellings are of more concern than commercial buildings.Hence, we concentrate on naturally ventilated domestic properties.In the following the parameter space is described, followed by a description of the overheating metrics, monitoring and validation.There are an infinite number of possible buildings, hence we explore this large parameter space via combinations of fundamental architectural parameters.In total, 576,000 cases, i.e. specific combinations of building parameters and occupant behaviour, have been chosen to span the space, and importantly, have enough variety to cover a greater range than previous work, hence answering some of the criticisms of such work; such as too narrow a range of ventilation, insulation, or shading.This approach aims to clarify how fundamental parameters affect overheating by studying every combination of the parameters involved.Therefore, the models are conceived to bound plausible ranges for the relevant building physics parameters involved in overheating, and do not necessarily reflect the expected prevalence in the real building stock.The buildings were simulated over one year using EnergyPlus within a computer cluster.EnergyPlus is an open source building simulation engine that integrates the three fundamental domains in building physics: surface heat balance, air heat balance and building systems.These domains are coupled and solved at the defined timestep ranging from 1 min to 60 min.The study was based on a worst-case scenario to bound one side of the parameter space and a best-case one to bound the other.An apartment with ventilation from windows on only one façade is selected as the worst case because this form of building is most prone to overheating and because it is a common typology in the various, worldwide, locations considered in this study.It corresponds to a real apartment built in the UK in the late 2000s.A top floor unit was selected due to the greater exposure to solar gains, thereby further exacerbating overheating.A detached house was then used as a best-case scenario, i.e. least likely to overheat, as heat losses are maximized due to external exposure on all four façades, and natural ventilation is maximised due to cross-ventilation.The apartment is surrounded by identical units on either side with the other two faces exposed to the external environment; only the main façade, that with the living spaces, has windows.The model is considered to be in an urban low-rise environment and the conditions for the elements defining each zone are the following:Façades: exposed to wind and sun.Party walls and floor: The adjoined units develop the same temperature as the apartment, i.e.
with no net heat transfer across the separating walls or floors.This simplifies the analysis, it is again the worst case and it is consistent with other studies.Nevertheless, the thermal mass of these elements is still considered.Internal walls: Energy exchanges through these elements are modelled to capture the effects of higher gains in some rooms passing to other rooms.The building is modelled out to the external side of the thermal envelope .Each room constitutes a thermal zone to obtain individual temperature readings and to have complete control over the definition of heat gains.The ventilation model is an airflow network.Here, air exchanges are driven by wind and stack ventilation.The external environment and the internal zones are represented as a set of nodes linked with the windows and other elements such as doors.The chosen window type is sliding and only the upper half of the opening is considered openable and up to 5% of the room area.The system is then solved for pressure and airflow to give temperature, humidity and the resulting thermal loads.This evaluates input parameters and runtime conditions to decide whether windows should be opened or not and, if so, to what extent .External conditions are derived from the weather files and adjusted for height and building context through wind profiles and wind pressure coefficient models."For the latter, the building form-dependent pressure coefficients after Swami and Chandra's low-rise model with rectangular obstructions is used1 .The overall effect of all these conditions can be observed in Fig. 11.The detached house follows the same approach, but every external surface is fully exposed to outdoor conditions.Additionally, there are windows in the main façade and the opposite one to allow cross-ventilation, consistent with a best-case scenario.The wind pressure coefficients are modelled after Grosso1, since it was applicable for the building and urban characteristics at hand .Thus, the particular wind pressure coefficient values at the precise opening location within the façade is accounted for.Like in the apartment, the overall effect of the ventilation parameters can be observed in Fig. 11.The predicted annual heating energy demand and temperature time series provides data for the validation of the parametric models.The heating is the same in every case, although schedules and values vary according to the occupancy under consideration.Heating is provided through an ideal loads system to control the energy demand without explicit modelling of building services, to generalise results.Background ventilation is provided to control CO2 concentrations.Overheating is appraised in the living room and the main bedroom separately.Eight locations across the world were selected to assess overheating risk for different climates and latitudes, i.e. 
solar paths and timings.Within those, we selected reference capitals for representativeness and weather data availability.The weather files used represent a typical year based on historical weather data.Five cases are considered with wall transmittances between 0.60 W m-2 K-1 and 0.10 W m-2 K-1.The thermal resistance of elements was based on both building regulations and best practice standards in the UK to ensure consistency and future relevance of the results.Table 3 defines U-values and glazing properties for each building element present in the models.The U-values of the walls give the names of U-value cases, although the performance of other elements varies consistently with construction practices.Thermal mass has been identified as a potentially important parameter in previous work.Consequently, three cases were established based on the Thermal Mass Parameter, a metric that takes into account the thermally-active depth of a construction.Following the ISO-13790 method, a thermally lightweight construction is taken as having a TMP of 38 kJ m-2 K-1, with medium and heavyweight ones at 281 kJ m-2 K-1 and 520 kJ m-2 K-1, respectively.Short timestep dynamics are accounted for by setting the simulation timestep to 10 min.Construction assemblies were serialized in groups according to their thermal mass.Lightweight construction requires internal insulation whereas it is externally located for the medium and heavyweight cases.Internal blocks of different properties achieve target TMP values.The insulation thickness conforms with the thermal resistance of these layers.Internal partitions are based on a standard drywall assembly.Lastly, the real internal areas and volumes for each of the fifteen combinations were implemented in the model rather than assuming they correspond to the space enclosed by outdoor surfaces.Thus, energy exchanges are invested in the real enclosed air, accounting for changes in building volumes associated with different construction thicknesses.Three different window areas were considered ensuring that solar gains remained constant across other modifications that could influence them.All windows are rectangular and are kept in their original location.For the double-sided detached house, cases at 15%, 20% and 25% window area to floor area ratio were explored.For the single-sided apartment this means three cases at 9%, 12% and 14% wall-to-floor ratio each.Frame thicknesses were set consistently with window U-values.Finally, since different wall thicknesses can affect solar heat gain through different depths of reveals, they were adjusted to remain constant at 5 cm for all simulations.The shading strategies considered were ‘none’ and ‘full’.In the first case, windows are completely exposed to solar radiation, accounting for the worst case.In the second case, openings are shaded via fixed horizontal overhangs and vertical fins to realistically capture the physics of the heat transfer while minimizing model complexity.These were based on the latitude and designed to fully shade windows at the summer solstice.The overall effectiveness of this ‘full’ shading strategy is a median reduction of direct solar radiation of 45% compared to the ‘none’ case.Table 4 summarizes key characteristics for the locations under study and the properties of the shading devices as a function of the opening characteristics.In line with previous studies, two cases were examined to cover different types of behaviour: a working couple ‘away’ from 9:00 to 17:00 and another ‘home’ all-day-long.The former 
concentrates internal gains early in the morning and evenings, and the latter induces lower but sustained internal gains throughout the day.Occupancy was modelled as discrete individuals in specific rooms.Lighting and other gains were based on the current state of the art .These establish a power ‘budget’ spent according to occupancy, but consider residual loads and specific appliances in the kitchen.Resulting average gains were 2.84 W m−2 and 3.38 W m−2 for the ‘away’ and ‘home’ scenarios, respectively.As this is a study of naturally ventilated buildings, a model/algorithm for window opening is needed.Such models are based on the thermal comfort of the occupants, and assume people will, or will not, open windows to restore comfort.Unfortunately, differences in the way this has been accounted for and reported in previous studies precludes a meta-analysis that would shed light on the role of insulation in overheating."We use the two standard thermal comfort models: Fanger's model , which assumes that the majority of occupants are comfortable at a fixed temperature, and uncomfortable at a higher fixed temperature, and the adaptive comfort model , which assumes Tneu and Tmax vary based on the historic time series of external temperature.As it is unknown just how responsive occupants are, we assume occupants might start to use purge ventilation once the temperature of the room is above Tneu, or only once Tmax is reached.We are agnostic to F and A — since both are considered valid representations of the physiology and psychology of occupants — and we allow occupants to adopt either.They can also not react to the temperature in the room at all, leaving the windows only open enough to ensure reasonable air quality.In addition, it is possible, that occupants might behave differently at different times of day.The final requirement is that windows are only opened to provide additional cooling if the external temperature is lower than the internal temperature.Windows are opened to provide purge ventilation based on a set of rules.This provides a transparent approach based on first principles that is coherent with the thermal comfort models mentioned above.This allows us to account for a wide range of scenarios, in light of the known epistemic limitations in window occupant behaviour , while retaining the ability to perform pairwise comparisons across building variants.In the model, windows are opened for purge ventilation if the following conditions are all met simultaneously:A trigger internal temperature is surpassed.This accounts for the natural tendency for occupants to open windows to provide cooling.The external temperature is lower than the internal.In very hot climates the cultural norm is for windows to be left closed during periods of peak external temperature.A rule based on time of the day and occupancy.To stop windows being opened when the building is unoccupied, or if occupants feel nervous about leaving windows open when they are asleep.These rules are: None: Purge ventilation is never available.This constitutes a worst-case scenario and assumes occupants never open windows in response to overheating. Day-O-Tmax: Purge ventilation is available during the day if there are occupants in the dwelling.The trigger temperature is Tmax calculated under the fixed or adaptive model.This represents a minimal reaction to temperatures above the acceptable threshold at times occupants are awake and adaptation is possible. 
Day-O-Tneu: Same as Day-O-Tmax, but the trigger temperature is Tneu calculated under the fixed or adaptive model.This represents occupants that aim for optimal comfort conditions, as suggested by the standard thermal comfort models. Day-A-Tmax: Same as Day-O-Tmax, but purge ventilation is always allowed during daytime regardless of the occupancy.This increases its availability during the hottest periods of the day i.e. windows can be left open, or open via electronic sensors. Occupied-Tneu: Purge ventilation is available day or night if there are occupants in the dwelling.The ventilation set point is the comfort temperature.This represents traditional natural ventilation strategies in hot countries aimed at taking advantage of colder night-time temperatures .Altogether, the combination of the ‘Comfort Algorithm’ and ‘Purge ventilation’ parameters result in ten ways windows can be opened to deliver purge ventilation.Four cases, one per cardinal point, were considered.The models were validated against data recorded in an apartment and a detached house in Southern England.Model performance was appraised through the internal temperature time series in summer and the space heating demand in winter.For the validation, data was collected between 2013 and 2014 and weather conditions reconstructed from public databases .A typical summer week was selected according to weather conditions and occupancy as derived from electricity and gas data, consumption of the MVHR unit and dry bulb temperature and relative humidity in the living room.The simulation model was created with the as-built documentation of the dwellings and building regulations information, adjusting iteratively parameters such as window opening temperature threshold based on monitored air temperatures.Agreement between the real and the simulated internal temperature time series was assessed using the standard procedure by ASHRAE through the mean bias error and the coefficient of variation of the root mean squared error).The MBE was used as the indicator of the average difference, which resulted in deviations of −1.1% and 0.7%.Similarly, the CV was taken as the indicator of the hourly differences and gives 3.2% and 2.4%.These can be interpreted as a strong indication that the models are performing as expected since the ASHRAE standard considers models as validated when MBE is within ±10% and CV is within ±30% when using hourly data .Since our study involves only model-to-model comparisons the differences observed during the “validation” process do not affect their ability to characterise the phenomenon at hand.The winter space heating demand was used to ensure the parametric simulations are within reasonable limits and no gross errors had been made.This was done by selecting those simulations in London with relevant pairs of insulation and airtightness.Results are expressed according to the building fabric standard they represent and are in agreement with expected values .Fig. 
4 summarizes the space heating demand performance of the parametric and base case simulations in London according to the house type.Building characteristics are mapped to equivalent building regulations and standards for low energy buildings and Passivhaus Institute Standard).These can then be compared to energy consumptions in the UK and the specific frameworks of each standard.For the 1985, 1995 and 2006 building regulations, it must be noted that, for comparison purposes, the total heating energy consumption takes into account domestic hot water energy and the efficiency of the equipment.Considering that domestic hot water is about 30% of the demand and a typical boiler efficiency of 85%, values would be 1.5 times greater than those in Fig. 4.FEES and PHIS directly specify their heating energy demand targets; this means that demands beyond these limits are due to cases in the parametric study that are not optimized to satisfy them.Altogether, the results indicate reliable performance of the parametric simulations.In our results, overheating is found to be highly sensitive to the building design parameters and the operation of the building.While in most locations it is possible to ensure low levels of overheating by using good design and behavioural strategies, poor design decisions or poor operation of the building leads to considerable overheating, which underscores the importance of good design in ensuring resilient performance.To understand the impact of the different input parameters on overheating and the role of insulation, a regression approach is used based on regression trees ."This technique finds a model made of simple decision rules based on the study's input parameters by the recursive partitioning of the input parameter space.The algorithm disaggregates data into groups considering one parameter at a time, and chooses the partitioning rule which best explains the difference in the overheating performance of the buildings.One of the key features of the technique for this study is its suitability for a large number of parameters and the depiction of their interactions.In our case, a collection of trees was trained by randomly sampling 70% of the dataset with replacement.These trees are then integrated into an ensemble that makes predictions by averaging the individual predictions of each of its trees.This ‘bagging’ of trees results in a more robust model when evaluating its performance against the remaining 30% of the dataset.The performance of the regression ensemble is appraised with the coefficient of determination R2.Every parameter was found to have some impact on overheating.Altogether, the top four parameters explain 86% and 82% of the overheating variance between buildings for duration and severity, respectively.‘U-value’, i.e. 
the level of insulation, explains only about 3.5% and 2.9% of the variance, respectively.Due to the way the study was designed, it is possible to isolate the exact influence of insulation on overheating by taking pairwise comparisons between buildings identical in every aspect except for the level of insulation.Such pairwise differences in duration of overheating show that greater insulation levels exacerbate the risk in about three-quarters of the cases under study, but reduce it in one quarter.In the case of severity, increased insulation increases overheating in approximately two-thirds of cases and reduces it in about one-third.However, many of the cases represent buildings that are already overheating, often severely.If we select cases with overheating duration below 3% of occupied hours and analyse their pairwise insulation variants, the distribution between positive and negative cases is remarkably different: the groups are approximately equal.It is noteworthy that typical standards recommend an upper threshold limits between 1% and 3% , hence our selection of 3% can be treated as conservative.In fact, when selecting thresholds below 3.7% increased insulation reduces overheating.The question now becomes why increased insulation exacerbates overheating in some buildings but not in others.This is answered by focusing on classification rather than regression.The overheating indicators are continuous variables which are converted to categorical ones with two possible values: ‘increase’ and ‘other’.The classification tree algorithm then finds rules to disaggregate these two groups by considering one parameter and level at a time, and chooses the parameter split with the best performance.The contribution of each parameter in each tree in the ensemble is then aggregated in the same way as before to express their overall influence.The categorical results are directly visualized in Fig. 10.This displays the cases where overheating does not increase with increased insulation and the opposite one.The plot is arranged in four sorted stages, one for each of the four main parameters of importance noted above.Each stage shows the relative proportion of the cases within that parameter, sorted from highest to lowest.For example, Fig. 
10-A1 shows all cases in which overheating does not increase with higher insulation levels; among these, the cases controlled by the ‘Occupied-Tneu’ strategy are approximately three times as many as those controlled by ‘Day-O-Tneu’.This is easily seen through the relative proportions of the respective horizontal segments in the ‘Purge strategy’ stage on top.This is many times the number observed when windows are controlled by strategy ‘None’, where occupants do not react to increased temperatures.This representation not only exposes the internal composition of the results, but it also captures interactions — as individual rays in the bottom stage can be traced to their origins by their colour.When windows are not opened, higher insulation levels almost always increase overheating, and for warmer locations such as Cairo, Shanghai and New Delhi, higher insulation levels reduce overheating duration mainly if windows can be opened during the night.This analysis also holds for severity.Moreover, it also indicates that higher insulation levels are generally useful against severe overheating given that the plot now involves 34% of the dataset.A potential reason for the conflicting results of previous studies can be seen by visually comparing the overheating found in poorly insulated and well insulated buildings as a function of the ventilation provided during occupied hours averaged over the year.This is only possible now that the relative contributions of each parameter have been exposed in the model underlying Fig. 9.The overheating found in un-insulated and super-insulated buildings separates naturally into three distinct groups, dependent on the ventilation strategy.The median overheating hours found for each group reduces linearly as the median ventilation increases.The overall arrangement shows that whilst in general overheating is greater for highly insulated buildings when occupants do not open windows, this is not so if occupants open windows.It is worth reiterating that the ‘None’ case still includes enough ventilation to ensure good air quality.Fig. 11-c shows the pairwise difference in the duration of overheating obtained for otherwise identical but differently insulated buildings (un-insulated minus super-insulated buildings).When the ventilation strategy is modelled in a way that accounts for the most likely response of occupants, i.e. opening the windows, improving fabric insulation does not lead to an increase in overheating.Indeed, as the median for Occupied-Tneu lies just below the axis, increased insulation is found, on average, to slightly reduce overheating.It is noteworthy that in cases where insulation appears to increase the risk of overheating, this is only true in buildings that are already overheating severely due to the ventilation mode selected.Overall, it must be noted in the x-axes of Fig.
11 that the average air changes in these models are rather constrained and well below expected monitored values.This indicates that, to benefit from reduced overheating levels with higher insulation, the airflow magnitude is not the critical factor.What influences one behaviour or the other is when windows are opened.To ascertain the influence of wind-pressure coefficients on our results, the 576,000 cases were also simulated with wind pressure coefficients for isolated buildings.Overall results were similar to those presented here although air changes, expectedly, were significantly higher.This suggests that our findings are robust against the uncertainty associated with wind pressure coefficients.Given the proven relationship between overheating in buildings and mortality, concerns have been voiced over whether increasing fabric standards might entail increased overheating risk.This is a serious question of great interest to those devising energy policies across the world since such improvements in insulation play a key role in climate change mitigation strategies.To resolve this question, a large parametric study was undertaken that correctly accounts for the complete range of variables, including ventilation strategy and climate.The analysis methods used allow the quantification of the relative impact of each parameter in the dataset for the first time while accounting for non-linear effects.A regression-based and a categorical-based analysis both suggest that insulation plays a minor role in overheating even when comparing un-insulated to super insulated buildings.In the dataset, it can at best explain 5% of the difference in overheating performance.However, the key finding is that little evidence was found for increases in insulation levels also increasing overheating, unless access to purge ventilation is either severely or unrealistically curtailed.If purge ventilation is sensibly used, better insulation levels tend to result in both lower durations of overheating and reductions in severity.Our results align with the empirical evidence from the 2003 heat-wave that increased insulation levels counteracted overheating in buildings.It is possible that some social groups might not deploy purge ventilation, either through lack of mobility, concerns about security, or a lack of understanding of the potential dangers of not doing so.Our results do indicate that in such cases improving the insulation can increase overheating.However, this is mainly in buildings in our dataset that are already overheating severely; hence it would be difficult to conclude that insulation is the issue, but rather the lack of window opening or an unfortunate combination of design parameters, such as large unshaded windows in a hot climate.These results suggest that, in cases with acceptable overheating levels, the use of improved insulation levels as part of a national climate change mitigation policy is not only sensible, but also helps deliver better indoor thermal environments.Datasets created during this study are openly available from the data archive at https://doi.org/10.15125/BATH-00390.
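The window-opening logic described above (a trigger temperature, an internal/external temperature check, and a time-of-day or occupancy rule) can be summarised in a short decision function.The sketch below is a simplified stand-alone illustration of those rules, not the EnergyPlus airflow-network implementation used in the study; the daytime band and the example temperatures are assumptions.

```python
# A minimal sketch of the purge-ventilation decision rules described in the text.
# Rule names follow the paper ('None', 'Day-O-Tmax', 'Day-O-Tneu', 'Day-A-Tmax',
# 'Occupied-Tneu'); the function signature and example values are illustrative
# assumptions, not the study's EnergyPlus implementation.

def purge_windows_open(rule, t_in, t_out, t_neu, t_max, hour, occupied):
    """Return True if windows may be opened for purge ventilation."""
    if rule == "None":
        return False                       # never purge (worst case)
    # Shared condition: only ventilate if outside is cooler than inside.
    if t_out >= t_in:
        return False
    daytime = 7 <= hour < 23               # assumed daytime band
    if rule == "Day-O-Tmax":
        return daytime and occupied and t_in > t_max
    if rule == "Day-O-Tneu":
        return daytime and occupied and t_in > t_neu
    if rule == "Day-A-Tmax":
        return daytime and t_in > t_max    # regardless of occupancy
    if rule == "Occupied-Tneu":
        return occupied and t_in > t_neu   # day or night, occupants present
    raise ValueError(f"unknown rule: {rule}")

# Example: a warm evening with occupants at home.
for rule in ["None", "Day-O-Tmax", "Day-O-Tneu", "Day-A-Tmax", "Occupied-Tneu"]:
    decision = purge_windows_open(rule, t_in=28.0, t_out=24.0,
                                  t_neu=25.0, t_max=28.5, hour=21, occupied=True)
    print(f"{rule:>14}: open = {decision}")
```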
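The calibration indices used above to compare measured and simulated hourly temperatures, MBE and CV(RMSE) with the quoted ±10% and 30% acceptance limits, can be computed directly.The following is a minimal sketch using one common ASHRAE-style convention; the hourly temperature arrays are illustrative placeholders rather than the monitored data.

```python
# A minimal sketch of ASHRAE-style calibration indices: mean bias error (MBE)
# and coefficient of variation of the RMSE (CV(RMSE)). Sign convention and
# example data are assumptions for illustration.
import numpy as np

def mbe_percent(measured, simulated):
    """Mean bias error as a percentage of the measured total."""
    measured, simulated = np.asarray(measured), np.asarray(simulated)
    return 100.0 * np.sum(measured - simulated) / np.sum(measured)

def cv_rmse_percent(measured, simulated):
    """Coefficient of variation of the RMSE, as % of the measured mean."""
    measured, simulated = np.asarray(measured), np.asarray(simulated)
    rmse = np.sqrt(np.mean((measured - simulated) ** 2))
    return 100.0 * rmse / np.mean(measured)

# Hypothetical hourly living-room temperatures over one summer day (degC).
measured = np.array([24.1, 23.8, 23.5, 23.4, 23.6, 24.2, 25.0, 26.1,
                     27.0, 27.8, 28.3, 28.6, 28.8, 28.7, 28.4, 27.9,
                     27.2, 26.5, 25.9, 25.4, 25.0, 24.7, 24.5, 24.3])
simulated = measured + np.random.default_rng(1).normal(0.0, 0.4, measured.size)

print(f"MBE      = {mbe_percent(measured, simulated):+.1f} %  (|MBE| <= 10 % passes)")
print(f"CV(RMSE) = {cv_rmse_percent(measured, simulated):.1f} %  (<= 30 % passes)")
```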
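The analysis pipeline described above (regression trees trained on bootstrap samples of 70% of the cases, aggregated into an ensemble, evaluated by R² on the remaining 30%, and used to rank parameter importance) could be reproduced along the following lines.The sketch uses scikit-learn and entirely synthetic data in place of the 576,000 simulated cases; the column names, encodings and the synthetic overheating response are assumptions for illustration only, not the authors' implementation.

```python
# A minimal sketch of a bagged-regression-tree importance ranking on synthetic
# data standing in for the simulated cases. All values here are illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "purge_strategy": rng.integers(0, 5, n),   # encoded ventilation rule
    "location":       rng.integers(0, 8, n),
    "u_value":        rng.choice([0.10, 0.20, 0.35, 0.45, 0.60], n),
    "window_ratio":   rng.choice([0.15, 0.20, 0.25], n),
    "shading":        rng.integers(0, 2, n),
    "thermal_mass":   rng.choice([38.0, 281.0, 520.0], n),
})
# Synthetic 'overheating hours', dominated by ventilation and location with a
# small insulation effect -- purely illustrative.
y = (400 * (4 - X["purge_strategy"]) + 150 * X["location"]
     + 800 * X["window_ratio"] - 200 * X["shading"]
     + 30 * X["u_value"] + rng.normal(0, 50, n)).clip(lower=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
# Bagging of regression trees: each tree is fitted on a bootstrap sample of
# roughly 70% of the training cases, and predictions are averaged.
ensemble = RandomForestRegressor(n_estimators=200, max_samples=0.7,
                                 bootstrap=True, random_state=0)
ensemble.fit(X_train, y_train)
print("held-out R^2:", round(r2_score(y_test, ensemble.predict(X_test)), 3))
for name, importance in sorted(zip(X.columns, ensemble.feature_importances_),
                               key=lambda t: -t[1]):
    print(f"{name:>15}: {importance:.3f}")
```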
Given climate change predictions of a warmer world, there is growing concern that insulation-led improvements in building fabric aimed at reducing carbon emissions will exacerbate overheating. If true, this would seriously affect building regulations all over the world which have moved towards increased insulation regimes. Despite extensive research, the literature has failed to resolve the controversy of insulation performance, primarily due to varied scope and limited comparability of results. We approach this problem through carefully constructed pairwise comparisons designed to isolate the effect of insulation on overheating. We encompass the complete range of relevant variables: latitude, climate, insulation, thermal mass, glazing ratio, shading, occupancy, infiltration, ventilation, orientation, and thermal comfort models — creating 576,000 building variants. Data mining techniques are implemented in a novel framework to analyse this large dataset. To provide confidence, the modelling was validated against data collected from well-insulated dwellings. Our results demonstrate that all parameters have a significant impact on overheating risk. Although insulation is seen to both decrease and increase overheating, depending on the influence of other parameters, parameter ranking shows that insulation only accounts for up to 5% of overall overheating response. Indeed, in cases that are not already overheating through poor design, there is a strong overall tendency for increased insulation to reduce overheating. These results suggest that, in cases with acceptable overheating levels (below 3.7%), the use of improved insulation levels as part of a national climate change mitigation policy is not only sensible, but also helps deliver better indoor thermal environments.
752
Dataset for comparison between single and double pilot injection in diesel-natural gas dual fuel engine
The data presented in this paper was based on the experimental activity which investigates the LTC in a dual fuel light duty engine .The experimental analysis carried out in Ref. was aimed at measuring the performance and emissions of an internal combustion engine fueled with natural gas and natural gas/hydrogen mixtures as main fuels, pilot-ignited by a small amount of diesel fuel, in the case of LTC combustion.To activate this mode of combustion it was necessary to considerably increase the injection advance of the pilot diesel fuel with respect to the end of the compression stroke.Advancing the pilot injection can lead to a strong increase in the maximum pressure gradient which, beyond certain limits, causes unreliable operation of the engine.Therefore, the limit value of 15 bar/CAD was used during the tests to limit the maximum increase in injection advance, as done by other researchers .With this limit, it was decided to use the DPI strategy instead of the SPI, to control the energy release rate in the first combustion stage, allowing operation under LTC.This article reports all the data which support that choice, most of which have not been reported in Ref. .The main characteristics of the engine used for the test are listed in Table 1.Table 2 provides diesel fuel injector parameters, while Fig. 1 represents the drawing of the combustion chamber.The composition and main characteristics of natural gas used as main fuel in DF mode are reported in Table 3.Table 4 provides the specifications of the equipment used for the tests.About the test procedure: Table 5 provides the air mass flow rate and manifold pressure, while Table 6 provides the fuel flow rates and injection conditions, in all the test points.The cycle by cycle acquisitions of the intake manifold absolute pressure are available in the dataset named “Cycle-based acquisitions” of the linked public repository.The average acquisition over 100 consecutive operating cycles of the indicated pressure cycle, of the diesel injector control signal together with the heat release rate curves, obtained by calculation, are plotted in Figs. 2–5 respectively for the 4 cases analyzed: 1) full diesel injection (FDI), 2) DPI with minimum diesel for stable combustion, 3) SPI with the same diesel mass injected as the DPI case, and 4) SPI with minimum diesel for stable combustion.The cycle by cycle acquisitions and calculation of these parameters are available in the dataset named “Cycle-based acquisitions” of the linked public repository.Finally, an analysis of the maximum in-cylinder pressure rise was performed and the values are plotted in Fig.
6, as average value over 100 cycles, and given in Table 7, as percentage of operating cycles affected by a certain value of the maximum pressure gradient.All the data used for that analysis, which are the values of the maximum pressure increase recorded in each of the 100 consecutive cycles acquired under the different test conditions, could easily be obtained from the individual pressure cycles available in the dataset.However, for ease of reference, they are present in the excel file called “Maximum pressure rise.xlsx” which is available in the connected public repository.The “Maximum pressure rise.xlsx” excel file contains all the values of the maximum pressure increase recorded in each of the 100 consecutive cycles acquired in the different test conditions.About the “Cycle-based acquisitions” dataset, it is a folder that contains the acquisitions of in-cylinder and intake-manifold cycle pressure, diesel injector current signal and heat release rate on a crank angle basis for 100 consecutive engine running cycles in different test conditions.Each tested strategy, FDI, DPI, SPI, SPI is a sub-folder of the dataset, while each test condition is a compressed file in the sub-folder.The name of each of the compressed files contains the engine test conditions, the injection and feeding strategy, and the nominal start angle of the diesel fuel injection.In this way it is easy to associate the other data to the file of the test: the air flow rate or the fuel flow rates.A multi-cylinder diesel engine was used, which was converted to operate in the DF mode.The engine was a light-duty four-cylinder diesel type, with the diesel injection split into a pilot and main step and a combustion chamber as shown in Fig. 1.The diesel fuel has been direct injected at 800 bar.Main characteristics of the injectors are reported in Table 2.The diesel injector control signal was detected with a simple amperometric ring.This signal is useful to study and control the pilot diesel injection.However, the actual fuel mass flow rate is related to the control signal, but is delayed and depends on the characteristics of the injection system .The DF mode was achieved by adding a timed natural gas injection system with four injectors, which introduced the fuel at 2.4 bar pressure, close to the intake valve of each cylinder, during the corresponding intake stroke.The compositions and properties of natural gas used are listed in Table 2.The engine was coupled with an eddy current dynamometer and equipped with two pressure transducers, one in the intake manifold and the other in the combustion chamber, to acquire dynamic pressures with the crank angle.The list of instruments used, range of measurement and accuracy is given in Table 4.The engine was operated at 100 Nm of brake torque and 2000 rpm.The eddy current dynamometer was controlled to maintain the engine speed constant, while two electronic modules were used to set the engine load by adapting the fuel flows.The DF operation was performed with the highest possible gaseous fuel percentage, by decreasing the diesel fuel flow rate at the minimum quantity required for a stable combustion start for the cases: DPI and SPI.For the case SPI, the pilot diesel fuel flow rate was set at the same amount of the DPI.The FDI case is in full diesel as reference.A set of experiments were conducted setting the start of the pilot injection at 10, 20, 25, 35 and 45 CAD BTDC, with the limit of not exceeding a pressure gradient of 15 bar/deg.In the cases of FDI and DPI, the second diesel fuel 
injection was delayed by approximately 1.7 ms compared to the first one.For each test, fuel and air average mass flow rate were acquired together with in-cylinder cycle pressure, dynamic intake-manifold pressure and diesel injector current signal, on a crank angle basis, for 100 consecutive engine running cycles.Air mass flow rate measurement and average values of intake-manifold pressure are reported in Table 5, for all the test conditions.Table 6 provides the fuel flow rates and injection operation modes, in all the test conditions.In this last Table, SOPI is the nominal value of diesel injection start angle while the following parameters describe the injection strategies: start of the first pilot injection, end of the first pilot injection, start of the second pilot injection, end of the second pilot injection, start of the main injection and end of the main injection.SOFPI and EOFPI refer to the pilot phase which is present in all the tested injection strategies, SOSPI and EOSPI refer only to the DPI strategy while SOMI and EOMI refer to main injection of the full-diesel cases.Air and fuels flow rates per stroke in Tables 5 and 6 are relative to a single cylinder of the engine.These data were obtained by dividing by four the measured engine hourly average flow rates of air and fuels, and considering the number of intake strokes per hour, carried out by each of the four cylinders.The average acquisition over 100 consecutive operating cycles of the indicated pressure cycle, of the diesel injector control signal, together with the heat release rate curve, obtained by calculation, are plotted in Figs. 2–5 respectively for the 4 analyzed cases: FDI, DPI with minimum diesel for stable combustion, SPI with the same diesel mass injected as the DPI case and SPI with minimum diesel.An analysis of the maximum in-cylinder pressure rise was performed and the results are plotted in Fig. 6 as average value over 100 cycles, and given in Table 7 as percentage of operating cycles affected by a certain value of the maximum pressure gradient.In the case of SPI it was not possible to test SOPI higher than 25 CAD BTDC due to a strong overcoming of the pressure gradient limit and the occurrence of detonation cycles.
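The maximum pressure-rise analysis summarised in Fig. 6 and Table 7 amounts to computing, for each of the 100 acquired cycles, the peak in-cylinder pressure gradient and then reporting the cycle average and the share of cycles exceeding the 15 bar/CAD limit.A minimal sketch of such a calculation is given below; the synthetic pressure traces stand in for the crank-angle-resolved acquisitions in the public dataset, whose exact file layout is not assumed here.

```python
# A minimal sketch of a per-cycle maximum pressure-rise analysis. The cycle
# traces generated below are synthetic placeholders for the acquired data.
import numpy as np

def max_pressure_rise(crank_angle_deg, pressure_bar):
    """Peak dp/dtheta (bar/CAD) of a single cycle."""
    dp_dtheta = np.gradient(pressure_bar, crank_angle_deg)
    return dp_dtheta.max()

def summarize_cycles(cycles, limit_bar_per_cad=15.0):
    """cycles: iterable of (crank_angle, pressure) arrays for 100 cycles."""
    peaks = np.array([max_pressure_rise(theta, p) for theta, p in cycles])
    return {
        "mean_peak_bar_per_cad": round(peaks.mean(), 2),
        "share_above_limit_pct": 100.0 * np.mean(peaks > limit_bar_per_cad),
    }

# Synthetic demonstration: smooth pressure traces with cycle-to-cycle variation.
rng = np.random.default_rng(2)
theta = np.arange(-180.0, 180.0, 0.1)                       # CAD, 0.1 deg step
cycles = []
for _ in range(100):
    peak = rng.normal(75.0, 5.0)                            # peak pressure, bar
    width = rng.normal(25.0, 1.5)                           # combustion width, CAD
    p = 1.0 + peak * np.exp(-((theta - 5.0) / width) ** 2)  # bell-shaped trace
    cycles.append((theta, p))

print(summarize_cycles(cycles))
```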
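The per-stroke values in Tables 5 and 6 follow from a simple conversion of the measured hourly flow rates, as described above.The short example below makes that arithmetic explicit for a four-cylinder, four-stroke engine running at 2000 rpm; the hourly flow value used is an illustrative placeholder, not a measurement from the tests.

```python
# A worked example of converting an engine hourly flow rate into a
# per-cylinder, per-intake-stroke quantity. The 120 kg/h input is illustrative.
ENGINE_SPEED_RPM = 2000
N_CYLINDERS = 4
# Four-stroke cycle: one intake stroke per cylinder every 2 revolutions.
intake_strokes_per_hour = ENGINE_SPEED_RPM / 2 * 60           # = 60,000

def per_cylinder_per_stroke(hourly_flow_kg_h):
    """Convert an engine hourly mass flow (kg/h) to mg per cylinder per stroke."""
    per_cylinder_kg_h = hourly_flow_kg_h / N_CYLINDERS
    return per_cylinder_kg_h / intake_strokes_per_hour * 1e6  # kg -> mg

print(per_cylinder_per_stroke(120.0))  # e.g. 120 kg/h of air -> 500 mg per stroke
```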
The present data article is based on the research work which investigates the low-temperature combustion (LTC) in a dual fuel light duty engine. The LTC mode was activated by means of double pilot injection to control the energy release rate in the first combustion stage, thereby minimizing the increase of the rate of pressure and allowing the operation under LTC. This data article presents all the data which supports the choice for double pilot injection vs single pilot injection for that research. In this experimental work the engine was fueled with diesel, in full-diesel (FD) mode, and in dual fuel (DF) mode with natural gas and natural gas – hydrogen mixtures as main fuel. In DF mode the pilot diesel fuel was injected both with a single and a double injection at the same engine speed and torque. The pressure cycle in one of the four cylinders, the intake manifold pressure and the injector current signal were acquired on a crank angle basis for 100 consecutive engine cycles. Analysis of combustion rate, maximum pressure rise and fuel/air flow rate were performed. The data set, which also includes some engine control parameters, the combustion chamber geometry and some injector features, can potentially be reused to numerically model the combustion phenomena and, in particular, to investigate the effect on the ignition phase of combustion in LTC, also considering the variability from cycle to cycle.
753
Hydrogen from ethanol reforming with aqueous fraction of pine pyrolysis oil with and without chemical looping
Solid biomasses naturally contain variable levels of moisture, which reduce their grindability and thus the efficiency of their conversion through thermochemical processes.They also have high minerals and metals content, causing emission of pollutants and corrosion during combustion.Different degrees of pre-treatments can be applied to produce clean fuels or chemical feedstocks from solid biomasses of diverse origins.The fast pyrolysis process, which utilises moderate temperatures of around 500 °C and vapour residence times below 2 s, is suitable for minimally pre-treated moist biomass, and is tolerant of a variety of feedstock.It generates volatiles with yields in the region of 70 wt%, alongside solid residues, as well as flammable gases.Char and/or gases can be burned to sustain the process energy requirements.After cooling, the volatiles condense into bio-oil, which, with an energy density several times that of the original biomass, is more easily transported and stored, as well as being compatible with catalytic post-processing due to its low boiling point.However bio-oils produced in this way are rarely engine- or boiler-ready owing to high content in water and oxygenates, placing their gross calorific value in the 16–19 MJ/kg range, i.e. roughly half that of standard, non-oxygenated, liquid fuels.Despite being considered a lower quality by-product, the water soluble fraction obtained from the liquefaction of varied types of biomass is also water rich and usually represents liquefaction’s highest yield on a mass basis.The compositions of bio-oil and of water soluble product of liquefaction are representative of the biomass of origin.Carbohydrates derive from the cellulose and hemicellulose biomass content, and aromatics from the lignin.The carbon- and hydrogen-rich lignin derived compounds can be phase-separated by further adding water to the bio-oil.The lignin – also called ‘organic’ – fraction thus obtained can be used as a natural substitute in phenolic derived resin or may be reformulated for gasoline blending compounds.The aqueous fraction, which contains carbohydrate derived compounds and residual aromatics, has few industrial applications as food flavouring, or de-icing agent and would pose disposal challenges as an untreated water stream.More means of recycling the aqueous fraction are sought, which would categorise it as resource rather than waste.It has been proposed that aqueous fractions from bio-oil or liquefaction processes can be upgraded via biological means or a reforming process which uses their water content as reagent.Hydrogen is at present a valuable chemical that will be required in ever increasing amounts mainly due to population increase.This increase is mirrored in the production of ammonia-based fertilisers.It is also reflected in petroleum refinery operations where modern on-site steam methane reforming plants are expected to play a growing role.In refineries, hydrogen is increasingly outsourced, i.e. 
produced elsewhere and imported to the refinery.Hydrogen is gradually more utilised in hydrodeoxygenation operations during the upgrading of biocrudes.Hydrogen is also widely expected to enable the worldwide transition to a hydrogen economy, in which transportation and power generation currently relying on fossil fuels will switch to cleaner and more energy efficient hydrogen-run fuel cells.In the review by, nickel and cobalt catalysts feature prominently as active catalysts of ethanol steam reforming.Cobalt catalysts are shown to achieve ethanol conversions of 100% at temperatures as low as 623 K. However, this is achieved at very high molar steam to carbon ratios, whereas at moderate steam to carbon ratios, only the Ni based catalysts show ability to achieve both high feedstock conversion and selectivity to hydrogen, typically for temperatures above 650 °C.When reforming aqueous bio-fractions, two major problems have been reported: clogging of the feeding line due to vaporisation and coking in reactors from carbon deposits.Incorporating a cooling jacket around the feeding line can help prevent vaporization, while the usual approach to prevent coking is to increase reforming temperatures in order to favour carbon steam gasification and the reverse Boudouard reaction.For instance, employed 850 °C with molar steam to carbon ratio of 7 when steam reforming the aqueous fraction of pine sawdust bio-oil in a fluidized bed reactor. used aqueous fraction of pine bio-oil at 650 °C and S/C of 7.64, where little amounts of oxygen were introduced to gasify the coke, which eventually reduced coke deposits by 50%.The uses of elevated temperatures but in particular S/C in excess of 4, can represent prohibitive energy penalties.This is illustrated for ethanol feedstock in Table 1, which compares enthalpy changes of producing 1 mol of H2 via thermal water splitting with steam reforming of ethanol for S/C between 1 and 12.The calculations assumed atmospheric pressure, reactants initially in liquid phase at 25 °C and products at 650 °C, using equilibrium data generated with the CEA code.The heat demand of producing 1 mol of H2 via SRE at 650 °C increases linearly with S/C in the range studied, and is dominated by raising steam at 650 °C from liquid water at ambient conditions.The heat demand of SRE becomes equal to that of thermal WSP at approximately S/C of 6.4, invalidating the need for ethanol feedstock.Table 1 also lists for S/C of 3 the ratio of total heat demand of SRE to that of thermal WSP for temperatures between 500 and 800 °C, and shows that the minimum ratio of 0.67 is reached between 600 and 650 °C, indicating SRE is at its most advantageous compared to thermal WSP.The present study is motivated by demonstrating the conversion of aqueous fractions of bio-oils to hydrogen by steam reforming at moderate temperature and steam to carbon ratios below 4 without oxygen addition.The medium temperature minimises reverse water gas shift and thus favours a H2 rich syngas.However, using lower steam to carbon ratios than those reported in the literature for bio-oil reforming are expected to lower both the maximum achievable hydrogen yield and hydrogen purity in the syngas, but should increase the thermal efficiency of the process.Given that the aqueous fractions of bio-oils have an organic content of a few weight percent, and thus exhibit S/C ratios much higher than 10, on their own, aqueous fractions of bio-oil intrinsically far exceed the target S/C range of 2–5 for thermally efficient steam 
reforming. Here, hydrogen production by steam reforming is considered thermally inefficient when it requires more energy input than producing the same hydrogen by water splitting. To address the problem of the enthalpic burden of the water reactant, we chose to combine a bio-oil aqueous fraction with another, dry feedstock so as to achieve a feed mixture S/C between 2 and 5. Biomass-derived aqueous fractions can add a renewable contribution to the steam reforming of a fossil feedstock for the production of hydrogen via their use as the steam resource. They can also complement the steam reforming of a water-free biofeedstock. Due to its production routes from both fossil fuels and energy crops, its ease of transport and storage, its lack of toxicity, its high solubility in water permitting a single feed line, and its volatility, ethanol was considered a good candidate as the primary dry feedstock with which to test reforming using the aqueous fraction of pine pyrolysis oil instead of steam. In addition, there is extensive literature on the steam reforming of ethanol, and ethanol offers a high maximum theoretical hydrogen yield (boosted, relative to the hydrogen content of ethanol itself, by the steam contribution to the H2 produced) compared to more oxygenated biofeedstocks. A further aim of this study was to investigate the potential of chemical looping steam reforming of an ethanol/bio-oil aqueous fraction feed mixture, similarly to the authors' prior findings on unseparated bio-oils. The latter investigation was part of a programme of research by this group on the fuel flexibility of the chemical looping steam reforming process using packed beds and alternating feed flows, applied in particular to feedstocks of biomass or waste origin. The chemical looping steam reforming process relies on alternating the oxidation of a catalyst under air feed with its reduction under feedstock and steam feed, allowing steam reforming under near-autothermal conditions while producing a non-N2-diluted syngas, and thus a high H2 content. This is despite using air for the heat-generating oxidation reactions, rather than the pure O2 from a costly air separation unit required by the conventional partial oxidation or conventional autothermal reforming processes. The stoichiometric molar S/C ratio of the coupled SREtOH-WGS reactions is 1.5 for complete conversion to H2 and CO2. The experimental set-up is shown in Fig.
1.The feeding system, reactor, gas and condensates collection and analysis, temperature and flow measurement have been described elsewhere and were used previously to investigate the performance of pyrolysis oils from palm empty fruit bunch and pine wood by chemical looping steam reforming and sorption enhanced steam reforming in packed bed reactor configuration.The aqueous fraction from pine oil was selected over that of PEFB oil in the present study due to the high lignin content in pine, which in theory makes it more suitable for upgrading for the resin industry, leaving its aqueous fraction as by-product.Ethanol with purity ⩾99.5% was purchased from Sigma–Aldrich for steam reforming with the aqueous fraction.The ‘dry’ and ‘wet’ molar compositions for AQ were used to determine the flow rates of AQ to achieve a given molar steam to carbon ratio when in mixture with ethanol, while the assumed individual components’ mole fractions were necessary inputs to the chemical equilibrium calculations.Two types of catalyst were supplied in pellet form by Johnson Matthey Plc.Catalyst ‘A’ consisted of, when fully oxidised, ∼18 wt% NiO on α-Al2O3 support, and catalyst B contained ∼25 wt% NiO on γ-Al2O3 support.Catalyst A was measured for BET surface area before and after steam reforming experiment using a Quantachrome Instrument Nova 2200 and the multipoint BET method.Due to the mesoporosity of catalyst B, the surface area and pore size for the fresh and used catalyst B were obtained using N2 adsorption/desorption isotherm and reported using BJH method.The BET surface areas of fresh catalysts A and B were 3.3 m2/g and 50.5 m2/g respectively.Characterisation of the fresh catalyst A by XRD, SEM and TEM is described in.The ethanol/aqueous fraction of pine bio-oil mixture was prepared prior to experimental runs according to the target range of molar steam to carbon ratio between 2 and 5.The mixture was delivered into the reactor by a programmable syringe pump.The steam reforming experiments were performed at atmospheric pressure and temperature of 600 °C in the down-flow packed bed reactor system as described in, albeit with the difference that a single feed was used here as opposed to separate bio-oil/water feeds of the previous study, due to the high miscibility of alcohol in the bio-oil aqueous fraction.This temperature was within the temperature range for maximum hydrogen yield for ethanol steam reforming at S/C = 3 as predicted by chemical equilibrium calculations using the CEA code.The experimental runs were divided into 3 stages:Steam reforming of ethanol using catalyst A for S/C = 2–5, and catalyst B at S/C = 2.Steam reforming of EtOH/AQ using A, then using B for S/C = 2.05–4.18.Chemical looping steam reforming of EtOH/AQ using A, then B for S/C = 3.33.The pellets of A and B were crushed to 0.85–2 mm size particles and the same amount of 6 g was used for all the experimental runs.The experiment started with a ‘conventional’ catalyst reduction step using 5 vol% H2/N2 at flow rate of 320 cm3 min−1.At this stage all the NiO was converted to the catalytically active phase Ni.The reduction step reached completion when the H2 value indicated 5 vol% again at the H2 gas analyser.The flow of H2/N2 mixture was then stopped and H2 in the system was flushed using a N2 flow of 200 cm3 min−1 until the H2 concentration subsided to zero.Following this step, the syringe pump for the mixture was switched on to start the steam reforming step.The weight hourly space velocities were in the range 2.41–2.94 h−1, 
calculated based on flow feeds in the ranges of 185–200 cm3 min−1 of N2, 0.46–0.9 ml/h of liquid ethanol, and 1.2–1.54 ml/h of liquid AQ, with the ethanol and AQ mixture delivered in mixture by a single syringe pump, both at ambient temperature of 22 °C and using 6 g of catalyst.Individual flow conditions are shown in the results section.Nitrogen gas was deliberately used as an inert diluent in our experiments in order to derive from a nitrogen balance the total dry molar flow of outgoing gases.This allowed performing the calculations of product yields from measurement of their dry mole fractions, as well as the conversions of fuel and of steam from the aqueous fraction.In a ‘real world’ process, the addition of an inert diluent like nitrogen would neither be necessary nor recommended except for purging purposes as it would, amongst other effects, decrease the purity and calorific value of the reformate.After completing the steam reforming phase, N2 was kept flowing until the concentrations of outgoing gases had subsided to zero, then the N2 feed was stopped.The oxidation step of the chemical looping reforming experiments took place at a set temperature 600 °C using an air flow of 970 cm3 min−1.The oxidation reactions gave rise to an increase in reactor temperature.This ended when the temperature in the reactor returned back to the initial 600 °C, indicating combustion of carbon deposits on the nickel catalyst and re-oxidation of the nickel catalyst were no longer taking place.For the experiment using catalyst A, the temperature for reduction, steam reforming and oxidation steps were at 600 °C.Catalyst B was reduced by H2 at 500 °C but underwent steam reforming and oxidation steps at 600 °C.The code Chemical Equilibrium and Applications ‘CEA’ was used to calculate equilibrium conditions of the EtOH/AQ system at atmospheric pressure and temperatures corresponding to the experiments.This code relies on minimisation of Gibbs free energy and therefore requires as set of input data comprising the thermodynamic properties enthalpy, entropy and specific heat, and their variations with temperature for the pool of reactants and products, in addition to pressure, temperature and molar inputs.The code allows taking into account the full population of chemical species in the code’s library as potential equilibrium products, which includes hundreds of stable hydrocarbon species and free radicals, as well as ions, in gaseous and condensed phases.However, the standard version of thermo.inp does not include thermodynamic properties of unusual species that are nevertheless typical of bio-oils, therefore these were found in other sources and incorporated in the program.Accordingly, the thermodynamic properties of acetic acid were from and were present in the original thermo.inp file, but those of levoglucosan and 2-furanone were taken from, whereas vanillin’s were from.The maximum theoretical H2 yields assuming complete conversion to H2 and CO2 were 26.30 wt% for ethanol alone, and 17.05 wt% for the organics in the aqueous fraction.Thus a hydrogen yield efficiency close to 100% would represent a system close to equilibrium and a maximum achievable for the chosen feed and reaction conditions.Xfuel,exp differs from Xfuel,eq in that only the three gas products CO, CO2 and CH4 were taken into account in the experimental fuel conversion value.SelC CO2 exp or eq and SelC CH4 exp or eq corresponded to Eqs. and respectively.with SelH CH4 exp or eq and SelH NH3 exp or eq corresponding to Eqs. 
and respectively.For SelH exp, only the selectivity to H2 and that to CH4 were measured.In SelH eq, SelH NH3 was determined and found to be very small at 600 °C.During chemical looping steam reforming, the reduction of NiO reactions RdEtOH and RdORG precede the catalytic steam reforming reactions SREtOH and SRORG due to the lack of reduced, metallic nickel in the reactor bed, which is the catalytically active material for the SR and WGS reactions.where tss was the time at which steady state gas compositions values were reached following the reduction period.A distinction is made here between calculated and measured extent of NiO reduction, as in some cases a measured value can be derived directly from powder XRD spectra using Rietveld refinement.This was done for our previous bio-oil and model compounds steam reforming experiments.Much of the discussion makes use of comparisons between experimental outputs with their chemical equilibrium counterparts.The first set of results aims to show that uncertainties regarding the composition of the aqueous fraction did not influence the chemical equilibrium outputs for the range of conditions studied.Table 2 lists the main chemical equilibrium outputs assuming four different bio-oil aqueous fraction mixtures, based on varying mole fractions of four model organic compounds identified in the GC–MS of the original oil.These were acetic acid, levoglucosan, and 2-furanone or vanillin, where all four were detected in significant amounts by the GC–MS although the method fell short of full quantification.These mixtures targeted the same elemental molar composition of the original bio-oil and were well within 5% of the target for each of the C, H and O elements.They would have resulted in less than 2% discrepancy in maximum theoretical hydrogen yield compared to the original bio-oil.The results in Table 2 show the chemical equilibrium outputs were found to be remarkably constant across all four assumed mixtures for any given tested in the experiments, despite the water conversion and the selectivity to carbon containing gases varying significantly with changing S/C for a given mixture.In the following section, and for the sake of simplicity, the results of the reforming tests are shown for mixture composition ‘M1’, i.e. where the organic content of the AQ fraction consisted of 40 mol% acetic acid, 30 mol% levoglucosan and 30 mol% vanillin.Note that if this mixture had been attempted in practice, a small amount of preheating of the liquid mixture might have been required to fully dissolve the vanillin.On this basis, the molar inputs used for the chemical equilibrium calculations at the different steam to carbon ratios corresponding to the experiments are listed in Table 3, using an arbitrary total input of 1000 mol.Values in Table 3 are presented with 6 decimals which were deemed required to achieve the target 7.1 wt% of organics in the aqueous phase, caused by the large molar mass of levoglucosan and vanillin.It also demonstrates the increase in mole fractions of the bio-oil compounds with increasing S/C, as the water input was linked with the bio-oil aqueous fraction input and therefore to its organic content.Conventional steam reforming of ethanol has been extensively studied in the literature and the aim for the experiments on ethanol alone of the present study was to provide a benchmark with which to compare the EtOH/AQ experiments.Table 4 and Fig. 
2 contain the results obtained using both catalysts A and B at atmospheric pressure and 600 °C with respect to the equilibrium values in the S/C range 2–5.Table 4 compiles the H2 yields and H2 yield efficiency data for the steam reforming with ethanol and with the EtOH/AQ mixture.Fig. 2 plots reactants conversion and selectivity to carbon products.Both catalysts fulfilled the expectation of processes dominated by SREtOH and WGS reactions, with carbon products composed of, in decreasing order, CO2, CO and CH4 at given S/C ratio, as expected at the medium temperature of 600 °C which still favours WGS over its reverse.The selectivity to CO2 increased with S/C, as an effect of Le Chatelier’s principle on both SREtOH and WGS.Process outputs from catalyst A were consistent with a reactor that was close to, but not quite at equilibrium, with ethanol conversion between 0.9 and 1, and steam conversion approximately 75% of the equilibrium value for the S/C range studied, indicating the presence of rate-limiting undesirable reactions, evidenced by the higher selectivity to methane.In contrast, the experiment with catalyst B at S/C of 2 exhibited all its outputs consistent with chemical equilibrium.The findings were in agreement with, which also used Ni metal on alumina support.Ni is known for its good activity for cleavage of C–C and limited activity for WGS, so, high selectivity to CO2 showed both catalysts and in particular B had good activity for the water gas shift.As expected from Le Chatelier’s principle, the H2 yield increased with S/C at equilibrium and this was also the case in the experiments.For catalyst A, the highest H2 yield was achieved at the highest S/C tested, corresponding to the best oil conversion to the carbon products CO, CO2 and to a much smaller extent, CH4.It is also worth noting that SelCCH4 exp decreased from 9.3% to 3.8% with S/C increasing from 2 to 5.This condition reflected the equilibrium trends which favour the steam methane reforming reaction at the expense of its reverse reaction at this temperature.H2 yield efficiency, which compares the experimental yield with its equilibrium value, increased from 80% at S/C of 2 to 89% at S/C of 5 for catalyst A, confirming conditions not quite at equilibrium, whereas that of catalyst B at S/C was 97%, substantiating B as a more active catalyst for conventional ethanol steam reforming than A.The higher activity of B could be attributed to a surface area of an order of magnitude higher than that of A, as well as to its higher Ni content.The lower part of Table 4 contains the H2 yield outputs over catalysts A and B for the EtOH/AQ mixture, while Fig. 3 presents the variation with S/C of the reactants conversion fractions, as well as the selectivity to CO2, CO and CH4 products.As before, and for both catalysts, increasing S/C in the low range had a beneficial effect on the H2 yield and caused a shift to CO2 as the main carbon product at the expense of both CO and CH4, consistent with progressively more favourable conditions for SREtOH, SRORG and WGS.However, increasing S/C ratio beyond 3 resulted in slightly lower H2 yield, a common occurrence with Ni catalysts, and attributed to adsorption of H2O on the Ni active sites, thus inhibiting adsorption of the fuel, eventually causing less fuel conversion. 
found that organic and steam molecules competed for active sites and increasing S/C ratio caused lack of active site for the organic molecules to be adsorbed.This finding was also supported by previous studies using a Ni-based catalyst for steam reforming of acetic acid and aqueous phase of rice husk at temperature 800 °C and S/C ratio of 4.9.Given that acetic acid is a significant component of the aqueous fraction of our bio-oil, it was not surprising to encounter the same effect during steam reforming of EtOH/AQ on the same catalyst.For a given S/C, the H2 yield from EtOH/AQ for catalyst A was higher than that obtained with catalyst B, in contrast with the ethanol experiments, with A bringing conditions closer to equilibrium than B, as reflected in the H2 yield efficiency.In addition, closeness to equilibrium was realised for the lower S/C, the root of this being evident in the drop in fuel conversion fraction from 1 to 0.85 observed when increasing S/C from to 2 to 4.The respective contents in ethanol and in bio-oil organics were such that the organics content was also increasing with S/C, as seen in Table 3.This would have caused more complex steam reforming conditions than with ethanol alone as S/C increased.Although catalyst A showed overall better performance for catalytic reforming of the EtOH/AQ mixture, its higher selectivity to CH4 and to CO than catalyst B is worth noting.Higher selectivity to CH4 indicates favourable conditions for methanation of CO and CO2, and represents a large penalty in H2 yield, as each mole of CH4 could have potentially steam reformed into 4 mol of H2.High SelCCH4 may also imply poor performance of a catalyst that is not able to steam reform the CH4 produced by thermal decomposition or cracking of the fuel due to the temperature limited to 600 °C.The presence of substantial amounts of solid carbonaceous deposits that were observed at the lower, downstream, part of the reactor when reforming EtOH/AQ using catalyst A, which had persisted after a regenerative air feed step, indicated that thermal decomposition of the fuel to carbon had occurred.Coke deposition has also been reported when steam reforming aqueous fraction at temperatures between 500 and 650 °C.In contrast, and despite its slightly lower fuel conversion, reforming of EtOH/AQ using catalyst B did not result in solid deposits for this set of experiments.The higher SelCCO2, exp for B than A also indicated catalyst B had more water gas shift activity, as found earlier for the experiments with ethanol alone.Based on the outputs of Table 4 and Fig. 
3, and mainly due to the low selectivity to CH4 and to considerations of the cost of raising the AQ feed to the vapour phase, which increases with S/C, the condition of S/C ratio 3.33 at 600 °C was selected for the feasibility tests of chemical looping of the EtOH/AQ mixture. Under these conditions, an industrial process could feature recycle of the unconverted fuel and steam. In contrast, a high selectivity to CH4 resulting from operating at a lower S/C would have remained problematic at the temperature of 600 °C, which is, according to equilibrium predictions, insufficient for complete conversion of methane through steam reforming at atmospheric pressure. Two sets of chemical looping steam reforming experiments were performed using the EtOH/AQ mixture with S/C of 3.33, using catalysts A and B. The performances of catalysts A and B as oxygen transfer catalysts in the chemical looping steam reforming process were investigated in experiments at 600 °C using 6 g of catalyst per run. The mean process outputs at steady state during chemical looping steam reforming of ethanol with the pine bio-oil aqueous fraction are shown in Table 5 and Fig. 4 for a small number of cycles aimed at exploring the feasibility of the process. The time on stream over which the mean values were calculated was around 1 h. The steam reforming outputs after reduction of the catalyst are influenced by the reduction process that preceded it. Therefore, the calculation of the reduction rate efficiency using the EtOH/AQ mixture according to Eqs. and respectively was performed for the four cycles of chemical looping steam reforming. Typical plots of the reduction rate efficiency for the first 2000 s using catalysts A and B are shown in Fig. 5, while the extent of reduction by 1000 s of time on stream, by which time the reduction rates had fallen back to near zero for both catalysts, is listed in the last column of Table 5. According to Table 5, the EtOH/AQ feed was able to maintain the extent of reduction of catalyst A at above 85% before the steady state of steam reforming was reached, while catalyst B achieved below 15% in the same amount of time. For catalyst A, the only gas products during the first 800 s of the reduction period were CO2 and H2O, reflecting that the reduction reactions RdORG and RdEtOH were not competing with the steam reforming reactions SRORG and SREtOH, and bringing the extent of NiO conversion to 70%. A steady rise in CO and H2 in the last 200 s indicated that, in the final reduction period, steam reforming was also taking place. For catalyst B, which exhibited evolution of H2, CO, and CO2 from the start of the experiment, the co-existence of both reduction and steam reforming mechanisms was clear, dominated by steam reforming, given the small reduction rate efficiency and nevertheless high fuel conversion. In the situation where the reduction reactions and steam reforming co-exist, it is to be expected that the latter be affected by the equilibrium shift that the production of CO2 and H2O via reduction would trigger. It may therefore be desirable that reduction and steam reforming be mutually exclusive, with the reduction stage completed as quickly as possible, as observed for catalyst A. One feature that could overcome this particular drawback would be the introduction of in-situ CO2 capture in the reformer, which would introduce new favourable equilibrium states for both reduction and steam reforming. We are in the process of investigating this particular effect for a future publication. The effect of chemical looping, where CLSR effectively relies on
the organic content in the feed mixture to reduce the catalyst, as opposed to externally provided hydrogen, can be seen in the evolution of the outputs from cycle 1 to subsequent cycles.The results in Table 4 indicate that catalysts A and B were affected very differently by reliance on auto-reduction.Catalyst A, which maintained its extent of reduction to around 78%, exhibited fuel and water conversions that retained their initial values by cycle 4, overall resulting in an unaffected H2 yield of ca. 17 wt%, that is, 79% of the equilibrium H2 yield.Surface areas for fresh, reduced and used catalyst A after steam reforming and chemical looping runs are listed in Table 6.Surface area of catalyst A after conventional steam reforming run followed with an air feed at set temperature of 600 °C increased very slightly from 3.3 to 3.5 m2 g−1.In contrast, a more significant loss of surface area was observed after 4 cycles via chemical looping.This was probably caused by sintering via the exotherms resulting from oxidation of Ni, and to a smaller extent, of coke under air feed.Despite this loss in surface area, the H2 yield, as well as the fuel and water conversions, were maintained.Catalyst B, which featured only partial reduction, exhibited a significant drop in H2 yield from 81% to 64% in just 3 cycles, the root of which can be traced to a large slump in fuel conversion.It is proposed that this is directly related to the state of partial reduction of B throughout cycle 2, and therefore evidence of a very significant decrease in active sites for steam reforming and water gas shift compared to the fully H2-reduced catalyst conditions of cycle 1.From cycles 2 and 3, the extent of reduction was maintained at this low level, and most of the process outputs were sustained as well, which reinforces this explanation.Concurrent with this deactivation of catalyst B due to partial reduction, the selectivity to CH4 was found to increase steadily with cycling, as a larger proportion of the fuel would have undergone pyrolysis rather than steam reforming comparing cycle 1 with 3, accompanied by both CH4 and coke as products.These results therefore support the premise that for good operation of chemical looping steam reforming of EtOH/AQ, the extent of NiO reduction needs to be maintained in order to prevent catalyst deactivation and subsequent drop in conversion with build-up of undesirable by-products.It also permits the interesting conclusion that a conventionally ‘better’ steam reforming catalyst shown to perform very well with a simple oxygenated feedstock such as ethanol, i.e. 
catalyst B, does not necessarily maintain high performance under chemical looping steam reforming conditions, where reducing properties under cyclic operation play such a crucial role, nor when using more complex feedstocks. Here, catalyst A, which is a low-surface-area steam reforming catalyst, produced results exceeding expectations and outperformed catalyst B in H2 yield, both during conventional steam reforming of the EtOH/AQ feed and during its chemical looping steam reforming. Clearly, a study with a larger number of cycles on catalyst A, evaluating the energy demand of the process compared with that of the conventional process, would be justified on the basis of this study. Both catalysts A and B efficiently converted ethanol/aqueous fraction of pine pyrolysis oil mixtures. During chemical looping steam reforming experiments using A, the EtOH/AQ feed achieved close to 87% chemical reduction of NiO to Ni over the cycles, maintaining a 17 wt% H2 yield, i.e. 79% of equilibrium. A lower NiO reduction was achieved with B, resulting in a drop in H2 yield and a growing selectivity to the undesirable product CH4, leading to the conclusion that catalyst A appeared more suitable for CLSR of EtOH/AQ than B.
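To make the data reduction behind these results easier to follow, a minimal sketch of the nitrogen-balance bookkeeping described in the experimental section is given below: the total dry outlet flow is obtained from the inert N2 balance, after which the carbon-gas based fuel conversion, the carbon selectivities, and the H2 yield (as wt% of the dry organic feed and as a percentage of its equilibrium value) follow directly. The function and variable names and the numerical inputs are illustrative placeholders, not the measured data of this study; they are merely chosen to give outputs of the same order as those reported above.

```python
# Minimal sketch of the nitrogen-balance data reduction (illustrative only;
# the numbers below are placeholders, not the measured data of this study).

M_H2 = 2.016  # g/mol

def dry_outlet_flow(n_n2_in, y_n2_dry):
    # N2 is inert, so n_dry_out * y_N2,dry = n_N2,in
    return n_n2_in / y_n2_dry

def reduce_run(n_n2_in, y_dry, n_c_in, m_org_in, y_h2_eq_wt):
    """n_n2_in: N2 feed (mol/h); y_dry: measured dry mole fractions;
    n_c_in: carbon fed with EtOH + AQ organics (mol C/h);
    m_org_in: dry organic feed rate (g/h);
    y_h2_eq_wt: equilibrium H2 yield (wt% of dry organics) from the CEA code."""
    n_out = dry_outlet_flow(n_n2_in, y_dry['N2'])
    n = {sp: y * n_out for sp, y in y_dry.items()}   # mol/h of each dry gas
    c_gases = n['CO'] + n['CO2'] + n['CH4']
    h2_yield_wt = 100 * n['H2'] * M_H2 / m_org_in
    return {
        'X_fuel':        c_gases / n_c_in,           # conversion to C-containing gases
        'SelC_CO2_%':    100 * n['CO2'] / c_gases,
        'SelC_CO_%':     100 * n['CO']  / c_gases,
        'SelC_CH4_%':    100 * n['CH4'] / c_gases,
        'H2_yield_wt%':  h2_yield_wt,
        'H2_yield_eff%': 100 * h2_yield_wt / y_h2_eq_wt,
    }

# Example with made-up inputs:
print(reduce_run(n_n2_in=0.5,
                 y_dry={'N2': 0.70, 'H2': 0.21, 'CO': 0.027, 'CO2': 0.055, 'CH4': 0.008},
                 n_c_in=0.07, m_org_in=1.8, y_h2_eq_wt=21.5))
```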
Reforming ethanol ('EtOH') into hydrogen rich syngas using the aqueous fraction from pine bio-oil ('AQ') as a combined source of steam and supplementary organic feed was tested in packed bed with Ni-catalysts 'A' (18wt%/α-Al2O3) and 'B' (25wt%/γ-Al2O3). The catalysts were initially pre-reduced by H2, but this was followed by a few cycles of chemical looping steam reforming, where the catalysts were in turn oxidised in air and auto-reduced by the EtOH/AQ mixture. At 600°C, EtOH/AQ reformed similarly to ethanol for molar steam to carbon ratios (S/C) between 2 and 5 on the H2-reduced catalysts. At S/C of 3.3, 90% of the carbon feed converted on catalyst A to CO2 (58%), CO (30%) and CH4 (2.7%), with 17wt% H2 yield based on dry organic feedstock, equivalent to 78% of the equilibrium value. Catalyst A maintained these outputs for four cycles while B underperformed due to partial reduction.
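A related piece of feed-formulation arithmetic, implicit in Table 3, is the amount of ethanol that must be blended with the aqueous fraction to reach a target S/C. The sketch below estimates the EtOH:AQ mass ratio assuming the model composition 'M1' (40 mol% acetic acid, 30 mol% levoglucosan, 30 mol% vanillin) and the 7.1 wt% organics content quoted above; the result is indicative only and is not claimed to reproduce the exact experimental flow settings.

```python
# Estimate the EtOH : AQ mass ratio required for a target molar steam-to-carbon
# ratio (S/C), assuming the model aqueous-fraction composition 'M1'. Molar
# masses are standard values; the output is indicative only.

M_H2O, M_ETOH = 18.015, 46.07       # g/mol
ORG_WT_FRAC_AQ = 0.071              # 7.1 wt% organics in the aqueous fraction

# Model mixture M1: (mole fraction, molar mass g/mol, carbon atoms per molecule)
M1 = {
    "acetic acid":  (0.40,  60.05, 2),
    "levoglucosan": (0.30, 162.14, 6),
    "vanillin":     (0.30, 152.15, 8),
}
M_org = sum(x * M for x, M, _ in M1.values())            # mean molar mass of organics
C_per_mol_org = sum(x * nC for x, _, nC in M1.values())  # mean C atoms per mole of organics

def mol_C_per_g_aq():
    return ORG_WT_FRAC_AQ / M_org * C_per_mol_org

def mol_H2O_per_g_aq():
    return (1.0 - ORG_WT_FRAC_AQ) / M_H2O

def etoh_to_aq_mass_ratio(target_sc):
    """Grams of ethanol per gram of aqueous fraction giving the target S/C."""
    # S/C = n_H2O / (n_C,AQ + n_C,EtOH), with 2 mol C per mol of ethanol
    n_c_needed = mol_H2O_per_g_aq() / target_sc
    n_c_from_etoh = n_c_needed - mol_C_per_g_aq()
    return n_c_from_etoh * M_ETOH / 2.0

print(round(etoh_to_aq_mass_ratio(3.33), 3))   # ~0.29 g EtOH per g AQ (indicative)
```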
754
An approach to ingredient screening and toxicological risk assessment of flavours in e-liquids
The main users of e-cigarettes are current smokers, especially those who have expressed an interest in quitting or cutting down cigarette consumption.In the same way that the taste of tobacco is important to cigarette smokers, flavour is an important part of the e-cigarette experience, including for regular adoption or conversion.Farsalinos et al. undertook a survey of 4618 Greek e-cigarette users to assess flavour preferences.The median duration of smoking cigarettes was 22 years and of e-cigarettes was 12 months.Respondents reported that having a variety of flavours available was very important to efforts to quit smoking, and almost half felt that restriction of variety would increase cravings for cigarettes.The authors concluded that liquid flavourings in e-cigarettes contribute substantially to the overall experience of persistent users.Similarly, when adults in the US were surveyed about their tobacco use and motivations for starting and stopping e-cigarette use, the study found the most important reason for stopping vaping was the taste of the product.This feature was particularly important to those who had tried e-cigarettes only once or twice, whereas taste played a notably lesser role in stopping vaping for intensive and intermittent users.These findings imply that taste, and hence flavourings, are likely to play a major role in the difference between people only trying e-cigarettes versus actually adopting them for longer term use.Indeed, flavourings might be essential to smoking cessation in e-cigarettes users, because the US study concluded that daily use of e-cigarettes for at least 1 month was strongly associated with quitting smoking after a 2-year follow-up period, compared with intermittent or no use.The market for e-cigarettes has expanded extremely quickly worldwide.Long-term research findings on the health effects of vaping are not yet available, and methods for various assessments, such as toxicology, flavours, respiratory effects and so on, are still to be agreed upon.However, as vaping products are widely available, publication, debate and agreement on risk assessment approaches are becoming increasingly important.Regulations are still developing and are not yet up to date with vaping reality.Therefore, industry can help to develop appropriate product standards and implement robust quality management systems.Much of the focus of studies reported thus far has been related to the risk assessment of the solvents and nicotine in e-liquid.Additionally, screening and risk assessment considerations are generally performed on the in-going ingredients of e-liquids.However, the main consumer exposure to the e-liquid during normal use is to the aerosol.In this paper we focus on responsible product stewardship for the flavours that are essential to create consumer-relevant e-liquids.We suggest an approach to toxicological risk assessment of flavours that takes into account the in-going flavour ingredients and constituents and the identification, measurement and risk assessment of any potential thermal breakdown and reaction products.The aerosolisation process involves a brief heating period during every puff of an e-cigarette.Published data around heater operating temperatures are scarce, but estimates have ranged from 40–65 °C to 170–180 °C, and even up to 350 °C or higher in the absence of e-liquid.Regardless of the exact operating temperatures of individual vaping products under specific conditions, a heating period introduces the potential for pyrolysis of compounds and 
endothermic reactions between them.Additionally, the compounds can respond in varying degrees to the different processes involved in aerosolisation, such as evaporation and condensation.Together these factors might result in changes to the composition of the aerosol versus that of the e-liquid.Appropriately sensitive measurement of the aerosol, therefore, is required for the risk assessment to take into account potential thermal breakdown and reaction products of flavouring ingredients.The first screening step for in-going ingredients relates to the purity of the compounds.As a practical way of minimising risks from potential contaminants in ingredients, we suggest that only food-grade flavouring ingredients are used to provide some reassurance on purity and systemic toxicity.Food flavours, however, are not normally assessed for inhalation exposures and further safety assurance is required.A toxicological risk assessment also requires knowledge through full quantitative disclosure of the individual ingredients and constituents in e-liquid.This requirement sounds obvious, but besides the commercial sensitivity of flavour recipes and sub-flavours, challenges surround consistency and identification of constituents, especially for ingredients of natural origin.The compositions of naturals vary dependent on biological and geographical origins and weather and other environmental factors affecting growth and harvest, and can change over time.Thus, using only naturals that are approved food flavourings ensures that specific limits have been placed on constituents of known toxicological concern.An example of such restrictions can be found in article 6 of the European food flavouring regulations.Exclusion of ingredients from use if they have properties known to be carcinogenic, mutagenic or toxic to reproduction is considered a basic safety precaution.In general, use of only food-grade flavourings should already have ensured they are not CMR, however, because classification criteria can differ per region and several food flavourings have been grandfathered on to approved lists on the basis of historic use, exceptions may exist.Therefore our proposed screening criteria also explicitly exclude any ingredients classed as group 1, 2A or 2B carcinogens in the International Agency for Research in Cancer classification, as well as any classified as CMR by the US Food and Drug Administration or if a harmonised European classification exists.Additionally, ingredients that appear on the REACh list for substances of very high concern, 2014) for human toxicity reasons should also be avoided, as should all compounds that have been identified by the FDA as “harmful and potentially harmful compounds” or HPHC in a tobacco smoke context, 2012).For ingredients that are not evaluated or classified or where only a manufacturer’s self-notified classification exists, a weight-of-evidence approach is recommended that applies criteria to the data as described by the Globally Harmonized System of Classification.Some discussion has taken place about restricting the inclusion of contact allergens in e-liquids.An evaluation process has been proposed that includes a tolerable no effect level of 1000 ppm in e-liquids, below which the chance of induction of contact sensitisation and eliciting effects in pre-sensitised people is considered tolerable.However, the situation is different for respiratory sensitisation.If e-liquids were to contain respiratory sensitisers, inhalation exposure over time could lead to IgE-mediated 
responses, such as are experienced with hay fever and occupational asthma.Although extremely rare, in the very worst case, people might experience anaphylactic responses, including death.The potential severity of symptoms related to respiratory allergens, therefore, sets these substances apart from those causing the more common contact sensitisation.Additionally, although contact sensitisation is a well-understood process with recognised, robust hazard identification tests and quantitative risk assessment processes, no validated hazard identification tests and quantitative risk assessment processes exist for respiratory sensitisation and the recommended approach relies on a weight of evidence evaluation.Some tests are in use for hazard identification, such as the measurement of immunoglobulin E in mice and specific guinea pig pulmonary responses, but their applicability is restricted to certain chemical classes of compounds.Hazard identification and the derivation of tolerable doses are therefore based on a weight-of-evidence approach, where occupational experience especially can form an important hazard alert function.On top of the identification uncertainties, several respiratory sensitisers have very low derived no-effect levels, leading to occupational exposure guidelines being measured in μg/m3 for anhydrides, and even 5–60 ng/m3 for several enzymes.If flavouring ingredients pass the screening stage, a review of the existing toxicological data should follow.This approach will contribute to identifying any evidence of inhalation-specific issues that might make a compound unsuitable for use in an inhalation product.An example is the potential for diacetyl and 2,3-pentanedione to cause bronchiolitis obliterans.Much of the traditional flavour testing has focussed on oral exposure.This can provide valuable information with regards to potential systemic toxicity of the flavour but does not provide information on potential effects on the respiratory tract and does not take into account any possible effect from the brief heating the flavour undergoes in the aerosolisation process.In that context, it is worth noting hundreds of flavours have been tested via the inhalation route as part of cigarette exposures.Individual flavours or groups of flavours were added to the tobacco rod and the resultant smoke was analysed for priority smoke constituents and tested in several in vitro tests as well as 90-day rat inhalation studies.In general, addition of the flavours had no effect on, or reduced the levels of most of the measured smoke constituents.Even in the few instances where some increases in smoke constituents were observed, these changes did not affect the smoke’s in vitro cytotoxicity, in vitro bacterial mutagenicity, in vitro mammalian genotoxicity or inhalation toxicity.Because this testing used the inhalation route of exposure and included heating of the flavours, it provides some qualitative reassurance on those flavours that can add to the weight of evidence evaluation of a flavour compound for use in an electronic cigarette.However, the flavours were tested against a background of cigarette smoke, which itself causes toxicity, and the temperatures reached in a burning cigarette are many hundreds of degrees centigrade higher than that of vaping products.Therefore the main value of this vast body of data is as a source of potential alerts that a flavour may require further investigation if some equivocal or adverse effect were seen.For example, at very high inclusion levels, the addition 
of spearmint oil resulted in equivocal results, the depression of body weight gains and increased atrophy of olfactory epithelia in male rats only.In contrast, some other ingredients, most notably glycerol triacetate and lactic acid, have been seen to ameliorate the inhalation toxicity traditionally seen from cigarette smoke.These effects may well be mediated via interactions that are not relevant to vaping products, but they act as an alert that further investigation is required.Where there is no inhalation data that is suitable for hazard assessment or hazard characterisation of the ingredient itself, local responses observed via other exposure routes might help inform the risk assessment in a qualitative way.For example, if a compound has shown irritation potential via dermal and/or ocular exposures, it is likely that respiratory irritation will also be a relevant end point.Additionally, relevant toxicity data on structural analogues might be available.Considerations from scientific bodies in establishing occupational exposure guidelines can also be very helpful in deliberating the weight of evidence, which is needed to integrate the resulting multitude of, largely qualitative, information.Within a weight of evidence approach, to help strengthen rationales for quantitative interpretation of the information, looking at the rules for mixture classifications laid down in the European Classification Labelling and Packaging regulations can also be helpful.The regulation includes generic cut-off concentrations that are defined per toxic end point.If compounds are present below the levels given, their presence does not need to be taken into account when establishing the hazard classification of the overall mixture via the calculation route.This effectively presents a practical rule of thumb, with some regulatory support, below which a compound’s toxicity is not expected to contribute significantly to the overall toxicity of that mixture.Some food-grade flavouring ingredients have surprisingly little to support them other than generally recognised as safe status, which can on occasion rely only on historic use free from known issues.Such qualitative support does not easily translate into supportable levels in product.In that context, another useful threshold concept that can be helpful when there is a lack of data on local and systemic toxicity is the toxicological threshold of concern.The TTC concept and approach have recently been reviewed by several regulatory authorities and advisory organs et al., 2012).Although the concept was originally proposed in the context of contaminant risk assessment and prioritisation, much extension and validation work has since been published, and TTC is now recommended for chemical risk assessment in many situations beyond contaminants.We propose TTC as a suitable concept for risk assessing two classes of vaping aerosol compounds:Ingredients where the supporting toxicological data is qualitative and does not lend itself to the quantitative derivation of a DNEL;,Thermal degradation and reaction products of ingredients, as long as the compounds do not belong to the classes of chemicals for which the TTC approach is deemed inappropriate, as specified in et al., 2012).A distinction is made between ingredients and thermal degradation and reaction products, because ingredients will be well defined compounds for which CMR hazards have already been excluded and for which there will usually be toxicological data or other product experience available.In contrast, for 
low-level contaminants, chemical identification and toxicological data might be limited and no hazard screening has taken place.Several proposals for inhalation TTCs have been made.They differ largely by the type of compound that was in the databases used to derive the no observed effect levels/no observed effect concentrations from, viz. industrial chemicals, air pollutants or industrial chemicals with inhalation occupational exposure guidelines or ingredients of consumer aerosol products.The TTCs derived from industrial chemicals and organic air pollutants were based on compounds generally considered to be potentially toxic.In contrast, flavouring ingredients will already have been subject to the specific selection criteria described earlier.The more relevant TTCs for this group of compounds, therefore, are those derived from consumer aerosol products.Carthew et al. derived TTCs for systemic and local effects from inhalation exposure.For compounds in Cramer classes 1 and 3, the TTCs derived for systemic effects were lower than those for local effects.We propose that the use of the most stringent of the TTCs derived from inhalation studies are appropriately conservative for risk assessment of flavouring ingredients for e-cigarettes.Because the database did not contain sufficient Cramer class 2 compounds to derive a TTC, the conservative approach is taken of applying the Cramer class 3 TTC to Cramer class 2 compounds.As per the process described in Fig. 1, the aerosol needs to be measured to identify any potential reaction and breakdown products of flavours.Emerging data indicate that vaping the same e-liquid in different devices can result in quite different aerosols.This is no surprise to vapers who are well aware that different devices require different types of e-liquids and a variety of vaping websites provide user experience on the kind of vapour produced by different e-liquids in different types of vaping devices.Any flavour related new compounds can be seen as aerosol contaminants and should be risk assessed for inhalation using the same standard principles as applied to the ingoing ingredients.For example, common findings are the formation of acetals from propylene glycol and flavours with an aldehyde moiety.Such acetals are sensitive to hydrolysis and will most likely hydrolyse back into the ingoing flavour and propylene glycol in the high humidity environment in the respiratory tract or as part of the metabolic pathway.However, one big difference between ingoing flavour ingredients and potential thermal or reaction products is that, where compounds with CMR properties and respiratory toxicity have been excluded from use as ingredients, no such restrictions can be applied to these contaminants that might be found in the aerosol.Therefore, if the risk assessment of the compound relies on a TTC approach, use of TTCs derived from more conservative databases than that used for flavouring ingredients, is appropriate.The inhalation TTCs proposed on the basis of industrial chemical and air pollutant databases show substantial variation.If the type of chemicals is not restricted to those suitable for use in aerosol consumer products, the breadth of potential chemical classes is widened notably.Relative to that breadth, limited numbers of chemicals are included in the databases on which the proposed TTCs are based.For inhalation exposures, the scientific consensus is therefore that route-to-route extrapolation from well-substantiated oral values is the preferred option et al., 2012; 
Williams et al., 2013).A suitable safety factor needs to be derived, therefore, to derive contaminant inhalation thresholds based on the well-established oral TTCs.In this context, it is worth reiterating the observation that for Cramer class 1 and 3 aerosol ingredients, the TTCs derived from inhalation data for systemic effects were lower than those for local effects.This finding is in contrast to the assumption sometimes made that the respiratory tract could have a high sensitivity to local effects and that they would therefore be likely to dictate the overall TTC.This hypothesis was recently investigated by reviewing inhalation studies from an expanded RepDose database for local versus systemic No Observed Effect Concentration values and local versus the most sensitive NOEC regardless of toxic end point.This demonstrated that these three NOEC distributions were not dissimilar and did not confirm a particular sensitivity of the respiratory tract to local end points.The Cramer class 1 TTC derived from inhalation data on consumer aerosol ingredients is approximately half that of the traditional Cramer class 1 TTC derived from oral data.For Cramer class 3 compounds the inhalation-derived TTC was higher than that derived from oral data, but would have been highly influenced by the aerosol database having excluded the most toxic of the Cramer class 3 compounds.The factor 2 difference found in the Cramer class 1 comparison is in line with the default oral to inhalation bioavailability extrapolation recommended by REACh, 2012).It is also qualitatively compatible with considerations expressed at an international workshop in which extrapolation of oral TTCs to inhalation values were discussed.On the one hand it was noted that TTC values for inhalation exposures might be expected to be lower than those from oral exposures due to high absorption and low first-pass detoxification on uptake from the lungs.On the other hand, however, it was recommended an appropriate form of inter-species scaling should be used instead of the default factor of 10 used in deriving the oral values, as in the risk assessment guidance, such as that from REACh, 2012), which will at least partially compensate for the expected higher absorption and lower detoxification effects in deriving human inhalation TTCs from the oral data.In addition to route extrapolation considerations, duration of exposure needs to be taken into account.As a default, comparisons of chronic oral to chronic inhalation exposure convert the data to exposure for 24 h/day.This is unrealistic for exposure from vaping products.If we conservatively assume that approximately half of a user’s waking hours are spent vaping, exposure would be for 8 h instead of 24 h.A combined duration–route extrapolation factor of 0.67 would, therefore, be appropriate.However, because each of the inhalation databases used in the comparisons had its own limitations, we propose an oral to inhalation extrapolation factor of 1 for TTCs for aerosol contaminant risk assessment for vaping products.As a result, the TTCs established for oral exposures, including the restrictions on chemical class, can be directly applied to contaminant aerosol risk assessment.As with ingredients, the number of Cramer class 2 compounds evaluated is highly limited and, therefore, we suggest that Cramer class 3 TTC values be applied to Cramer class 2 contaminants.Thus, a TTC of 1800 μg/day is proposed for Cramer class 1 compounds, and a TTC of 90 μg/day for Cramer class 2 and 3 compounds.In the first 
instance it seems incongruous that the TTC proposed for Cramer class 1 contaminants is higher than that proposed for Cramer class 1 ingredients.However, in the light of the exposure estimates with which these TTCs will be used, the difference becomes justifiable.Ingredients can be expected to consistently end up in the vaping aerosol and, therefore, the consumer will be exposed with every puff.In contrast, the occurrence and level at which reaction and thermal breakdown products might occur in the aerosol will vary.For example, storage and transport conditions of the e-liquid and how long it has been open to the air will influence levels of reaction products.Potential thermal degradation can be influenced by the length of a puff, the air flow of the individual puff, how recently the coil has been replaced, etc.As a default, good risk assessment practice dictates that worst case, measured peak exposures should be the value from which to estimate realistic worst case consumer exposures.As a result of the intermittent presence of contaminants versus the consistent presence of ingredients, chronic exposure estimations derived from peak exposures will thus be more of an overestimate for aerosol contaminants than for ingredients.The TTCs have been based on chronic, low-level exposures.This is an exposure pattern for which the average exposures over time are generally more relevant than peak exposures.Therefore, a higher TTC can be applied to the estimated worst case exposures derived for contaminants from the aerosol measurements than that estimated for ingredients.The TTC concept is also helpful in the practical question of how accurate the analytical data should be that inform the risk assessment of the aerosol contaminants.Daily exposure to an unknown contaminant is generally taken to be tolerable at levels lower than 1.5 μg per day.We propose the limit of detection for the general gas chromatography scan used to identify if there are any contaminants in the aerosol, is based on this concept.For vaping products, the estimated daily number of puffs has been reported as on average 120 puffs/day, based on online questionnaire data from 3587 participants, whose median duration of e-cig use was 3 months.Older internet survey data from the same team indicated the 81 ever-users of e-cigarettes drew a median of 175 puffs per day.A reasonably conservative estimate, therefore, would be 300 puffs/day.To detect an estimated intake of 1.5 μg/day over 300 puffs/day, requires a limit of detection of approximately 1500 ng/300 puffs = 5 ng/puff.Application of the TTC approach to aerosol contaminant risk assessment is only one option.Dependent on the available data and case-by-case considerations, other risk assessment approaches can be applied instead.For example, the estimated exposure level can be compared to those from other exposures that are considered acceptable or tolerable, such as environmental background exposures or exposures via the diet.Comparisons can also be made with exposures from appropriate comparator products.Currently the vast majority of e-cigarette users start vaping when they are smokers, and the vaping generally results in smoking cessation or reductions in cigarette consumption, 2014; Biener and Hargraves, 2014; Brown et al., 2014; Etter and Bullen, 2011).At least some of the small percentage of never-smokers who vape are thought to have also considered cigarette smoking but decided on vaping instead.Thus, the percentage of never smokers among e-cigarette users, especially those 
using them as a smoking replacement, could increase.Overall, for most established users today, vaping is a cigarette replacement activity and therefore exposures from cigarettes are a reasonable comparator.However, this comparator should only be applied to unavoidable aerosol contaminants, such as the thermal breakdown and reaction products from the humectants and nicotine.Although flavours are an essential part of an e-liquid, no one flavour is irreplaceable.Supportability on the grounds of resulting in less exposure than cigarettes should therefore not be applied to individual flavouring compounds.For flavourings, a risk assessment could, for example, support the ingredients based on an absence of effects over and above those seen with a flavour-free version of the same e-liquid aerosol.An appropriate comparator might also be the range of compounds measured in aerosol from good-quality commercially available products, but the market place is currently mixed and quality might be hard to define.Flavouring ingredients are an essential part of vaping products, but inhalation data suitable for setting supportable levels in e-liquids, exist on only a limited number of compounds.We therefore suggested a practical approach to risk assessment of in-going flavouring ingredients in e-liquid and potential thermal breakdown and reaction products in the aerosol.We recommend excluding flavouring ingredients with CMR properties or respiratory sensitisation properties.Additionally, to provide a base level of reassurance on systemic toxicity and restrict the level of potentially toxic contaminants, we recommend that only food-grade flavourings are used.Risk assessment should take into account the published data on ingredients to help exclude compounds with known specific inhalation issues.The application of inhalation TTCs can be useful for compounds with limited toxicological data on which to base a quantitative inhalation risk assessment.The most stringent of the TTCs derived from inhalation data on consumer aerosol ingredients are proposed as applicable for flavour ingredient risk assessment, that is, 970 μg/day for Cramer class 1 compounds and 170 μg/day for Cramer class 2 and 3 compounds.For aerosol contaminants, TTCs derived from other databases are more relevant and other exposure considerations apply.Therefore, a TTC of 1800 μg/day is considered appropriate to apply to worst-case exposure estimates for Cramer class 1 contaminants and 90 μg/day for Cramer class 2 and 3 contaminants.Risk assessment also needs to be informed by appropriate analytical measurements to identify the potential aerosol contaminants.We suggest a gas chromatographic technique is employed with a limit of detection of approximately 5 ng/puff.This work was joint funded by Nicoventures and British American Tobacco, and the authors are full time employees of Nicoventures and British American Tobacco.The Transparency document associated with this article can be found in the online version.
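The threshold arithmetic running through the discussion above can be collected into a short sketch: the generic 1.5 μg/day value sets the untargeted detection limit per puff, while the Cramer-class TTCs quoted in the text (970/170 μg/day for ingredients, 1800/90 μg/day for aerosol contaminants) can be compared against an estimated daily intake. The puff count and the example exposure below are hypothetical; only the threshold values are taken from the text.

```python
# Illustrative TTC arithmetic (threshold values from the text; puff counts and
# example exposures are hypothetical placeholders).

TTC_INGREDIENT_UG_DAY  = {1: 970.0, 2: 170.0, 3: 170.0}   # Cramer class -> ug/day
TTC_CONTAMINANT_UG_DAY = {1: 1800.0, 2: 90.0, 3: 90.0}    # Cramer class -> ug/day
UNKNOWN_UG_DAY = 1.5                                       # generic threshold, ug/day

def per_puff_lod_ng(daily_threshold_ug=UNKNOWN_UG_DAY, puffs_per_day=300):
    """Limit of detection (ng/puff) needed to pick up the threshold daily intake."""
    return daily_threshold_ug * 1000.0 / puffs_per_day

def margin_to_ttc(intake_ug_per_puff, puffs_per_day, cramer_class, contaminant=False):
    """Ratio of the relevant TTC to the estimated daily intake (>1 = below the TTC)."""
    ttc = (TTC_CONTAMINANT_UG_DAY if contaminant else TTC_INGREDIENT_UG_DAY)[cramer_class]
    return ttc / (intake_ug_per_puff * puffs_per_day)

print(per_puff_lod_ng())            # 5.0 ng/puff, as recommended in the text
print(margin_to_ttc(0.5, 300, 1))   # hypothetical Cramer class 1 flavour at 0.5 ug/puff
```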
Flavour ingredients are an essential part of e-liquids. Their responsible selection and inclusion levels in e-liquids must be guided by toxicological principles. We propose an approach to the screening and toxicological risk assessment of flavour ingredients for e-liquids. The screening involves purity requirements and avoiding ingredients that are carcinogenic, mutagenic or toxic to reproduction. Additionally, owing to the uncertainties involved in potency determination and the derivation of a tolerable level for respiratory sensitisation, we propose excluding respiratory sensitisers. After screening, toxicological data on the ingredients should be reviewed. Inhalation-specific toxicological issues, for which no reliable safe levels can currently be derived, can lead to further ingredient exclusions. We discuss the use of toxicological thresholds of concern for flavours that lack inhalation data suitable for quantitative risk assessment. Higher toxicological thresholds of concern are suggested for flavour ingredients (170 or 980 μg/day) than for contaminant assessment (1.5 μg/day). Analytical detection limits for measurements of potential reaction and thermal breakdown products in vaping aerosol should be informed by the contaminant threshold. This principle leads us to recommend 5 ng/puff as an appropriate limit of detection for untargeted aerosol measurements.
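One numerical point in the body above that may benefit from being written out is the combined duration–route extrapolation factor of 0.67, which we read as the product of the REACh default oral-to-inhalation route factor of 2 and the assumed 8-of-24-hour daily vaping exposure window; this decomposition is our reading of the figures given rather than an additional claim:

\[
f_{\text{combined}} \;=\; f_{\text{route}} \times f_{\text{duration}} \;=\; 2 \times \frac{8\ \text{h}}{24\ \text{h}} \;\approx\; 0.67
\]

The text nevertheless adopts an overall oral-to-inhalation factor of 1, so the oral TTC values are applied directly to the aerosol contaminant assessment.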
755
Effectiveness of rutin-rich Tartary buckwheat (Fagopyrum tataricum Gaertn.) ‘Manten-Kirari’ in body weight reduction related to its antioxidant properties: A randomised, double-blind, placebo-controlled study
Rutin is a flavonoid of the flavonol type, which is commonly found in plants.Rutin shows antioxidant effects via scavenging of radiation-induced free radicals.In addition, it has several pharmacological functions such as anti-inflammatory, anti-diabetic, and blood capillary strengthening properties.Kamalakkannan and Prince reported that oral administration of rutin decreased blood glucose levels and increased insulin secretion in streptozotocin-induced diabetic rats.Other reports suggested that oral administration of rutin significantly decreased the levels of lipids in plasma and tissues in streptozotocin-induced diabetic rats.In addition, rutin has cardioprotective effects, which are related to its ability to inhibit platelet aggregation.Although it has been reported that rutin has several pharmacological effects, its exact mechanism and metabolism were not fully elucidated.Buckwheat is recognised as a functional food and a good source of nutritionally valuable amino acids, dietary fibres, and minerals such as zinc and copper.In particular, Tartary buckwheat,contains approximately 100-fold higher amounts of rutin in its seeds compared to common buckwheat.In a double-blind clinical trial, 2-week intake of Tartary buckwheat cookies with high rutin content decreased levels of total cholesterol and myeloperoxidase, an antioxidant marker, as compared to Tartary buckwheat cookies with low rutin content.This finding suggests that rutin-rich Tartary buckwheat can display beneficial functions, including anti-atherosclerotic and antioxidant effects.However, Tartary buckwheat contains a high level of rutinosidase, which hydrolyses rutin.Thus, rutin in Tartary buckwheat is hydrolysed in a few minutes upon addition of water.Hydrolysis of rutin may diminish its beneficial functions and give a bitter taste.These facts have limited the use of Tartary buckwheat in food products.A new variety of rutin-rich Tartary buckwheat ‘Manten-Kirari’ containing only trace amounts of rutinosidase has been developed by the NARO Hokkaido Agricultural Research Center.Therefore, most of rutin remains unhydrolysed, and products developed from ‘Manten-Kirari’, show high hydrophilic antioxidant capacity.These facts suggest that consumption of Manten-Kirari can provide rutin in sufficient amounts to perform its biological functions and at the same time, avoiding the bitter taste.To investigate whether consumption of rutin-rich Tartary buckwheat could reduce arteriosclerosis, display antioxidant effects, and change body composition, we conducted this double-blind, placebo-controlled study.A variety of Tartary buckwheat, ‘Manten-Kirari’, cultivated in Hokkaido, Japan, was used for preparation of the active test food in this trial.Hard wheat flour prepared from ‘Yumechikara’ was used for formulation of the placebo food.The active test food was manufactured and packed under strict quality control at the plant of Kobayashi Shokuhin Co., Ltd. 
in compliance with the Food Sanitation Act.The manufacturing process of buckwheat noodles included the following: mixing of the raw materials, addition of some water, preparation of primary noodle dough, pressing to prepare noodle sheets, cutting, casing, and drying."The manufacturing process of cookies included the following: mixing of the raw materials, shaping with cookie cutter, and baking in oven.Although the rutinosidase activity in ‘Manten-Kirari’ is lower than those in other varieties, ‘Manten-Kirari’ contains trace amounts of rutinosidase.Therefore, the rutin in the dough gradually hydrolysed upon addition of water.The degree of rutin hydrolysis increases with the increase in the dough water concentration, temperature and ratio of Tartary buckwheat flour in the dough.For example, more than 90% of rutin remained in ‘Manten-Kirari’ whereas the majority of rutin was hydrolysed in other varieties within 30 minutes after addition of water.To reduce rutin hydrolysis, it is very important to shorten the processing time from water addition to drying.In this study, casing was completed within 40 minutes after water addition, and raw noodles were immediately dried to decrease the water concentration.As a result, we obtained rutin-rich noodles and cookies.Nutrition facts regarding the active test food and placebo food used in this study are provided in Table 1.Rutin concentration in the test food and placebo food was measured using HPLC.Briefly, 1.0 g of rutin-containing sample was extracted with a mixture of 7.2 mL of methanol and 1.8 mL of 0.1% phosphoric acid at 80 °C for 2 hours.After extraction, the sample was centrifuged at 5000 g for 10 minutes, and the resultant supernatant was filtered through a 0.45-mm filter and assayed using HPLC.HPLC was performed using a Cadenza CD-C18 column at a flow rate of 0.2 mL/min.The elution gradient program was set at 0–20 min with isocratic flow conditions at solvent A : solvent B as 63:37.The chromatogram was visualised at 360 nm.According to the study design, subjects should take 500 mg of rutin every day from the active test food.However, since about 20% of rutin would be lost during the boiling process, therefore, we adjusted the content of rutin in the dry noodles at more than 500 mg.The active test food and the placebo food were identical in appearance.Previous reports suggested that a dose of 5000 mg flour/kg bodyweight was the No Observed Adverse Effect Level determined by in vivo acute and subacute toxicology studies.We recruited 231 volunteers, of whom 230 provided written informed consents to participate in this clinical study.Finally, we selected 149 subjects, 2.25 ± 0.65) through a screening test, excluding the following: individuals with a recent history of gastrointestinal disorders; pregnancy; severe acute or chronic diseases; surgery; severe allergic reaction to food, particularly buckwheat and wheat, and/or current use of any medications including anti-hyperlipidaemic medications.These 149 eligible subjects were randomly assigned to either the active test food or the placebo food group, with adjustments for age, sex, and AI.The randomised allocation sequence was created using a permuted-block randomisation design stratified by age, gender, and AI, where the block size was a multiple of two.Each subject was allocated by a third-party data centre according to the random allocation sequence into a relevant group."The third-party data centre concealed the allocation information, including the subjects' personal data, and kept them 
secure. This information was disclosed only after the laboratory and analytical data were fixed, and the method of statistical analysis was finalised. The clinical study was conducted as a double-blind, placebo-controlled trial. The time schedule for the study is shown in Fig. 1. We performed body composition measurements, including body weight (BW), body mass index (BMI), and body fat percentage (BFP) analyses, at weeks 0, 4, 8, and 12 after the start of rutin ingestion, and 3 weeks after the end of rutin ingestion. At all five time points, a medical interview was conducted along with a check of the vital signs and haematological and urine tests. We asked the subjects to take 80 g of the active test noodles or placebo noodles per day at any time of the day and cook them using any cooking method they liked. When the subjects could not cook the noodles, they were allowed to consume cookies instead for up to 2 days per week. During the course of this study, subjects were asked not to change their daily activities, including food consumption, medications, and exercise. The primary outcomes were AI and oxidised LDL (ox-LDL) levels. The secondary outcomes were the thiobarbituric acid reactive substance (TBARS), urinary 8-hydroxy-2′-deoxyguanosine (8-OHdG), total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), and low-density lipoprotein cholesterol (LDL-C) levels, BW, BFP, and BMI. Blood samples were collected at the following time points: baseline, at weeks 4, 8, and 12 after the start of rutin ingestion, and at 3 weeks after the end of rutin ingestion. In addition to a medical interview, each subject's body composition and blood pressure (BP) were measured. Subjects fasted for 12 hours before blood collection. General blood tests covering antioxidant markers, the lipid profile, complete blood count (CBC), liver function, and kidney function were performed, and urine tests including 8-OHdG levels were carried out. Haematological tests were performed at Sapporo Clinical Laboratory, Inc. Ox-LDL and TBARS were measured using an ox-LDL ELISA kit and a TBARS assay kit. TG, TC, HDL-C, and LDL-C were measured by the free glycerol method, cholesterol oxidase method, selective inhibition method and selective solubilisation method, respectively. AI was calculated as LDL-C/HDL-C. WBC, RBC, Hb, Ht, and Plt were measured by flow cytometry, electrical resistivity measurement, the SLS-Hb method and electrical resistivity measurement. AST, ALT, γ-GTP, ALP, and LDH were measured by the Japan Society of Clinical Chemistry reference methods. BUN, Cr, and UAC were measured by the urease-GLDH method, an enzyme assay and the uricase-POD method, respectively. 8-OHdG was measured with the New 8-OHdG Check ELISA and corrected for creatinine concentration. Each subject's body composition and BP were measured using a Body Composition Analyzer DC-320 and an Automatic Blood Pressure Monitor HEM-7080IC. All subjects provided written informed consent prior to undergoing any of the tests related to this study. The study protocol was approved by the Ethics Committee of Hokkaido Information University in conformity with the Helsinki Declaration. This study was registered in UMIN. The sample size was statistically determined to obtain a power of 80% with an alpha error of 0.05. In order to demonstrate the postulated change in AI at week 12, a sample size of 120 was required. Assuming a 20% loss to follow-up, 149 subjects were included. Mean and standard deviation were calculated for each group. Changes in the subject values were analysed using repeated measures ANOVA between the groups. In addition, changes in the subject values were analysed using Student's t-test to compare the mean of the
active test food group and placebo food group at each evaluation point.Statistical analyses were performed using SPSS Statistics 19.p < 0.05 was considered as significant, and p < 0.10 was considered as marginally different.During the trial, 4 subjects dropped out for personal reasons.As a result, 145 subjects completed this trial, 73 in the active test food group and 72 in the placebo group.One person in the placebo group was excluded from the analysis because of low ingestion rate.As a result, 144 persons were included in the final analysis.The study flow diagram is shown in Fig. 2.Mean age, height, BW, BMI, BFP, AI, and LDL-C for each group are presented in Table 2.No significant differences existed between the active test food and the placebo food groups, showing appropriate assignment of subjects into the two groups.First, we evaluated the effect of rutin-rich Tartary buckwheat on AI and ox-LDL.Table 3 shows that the interaction of group by time did not differ significantly between the groups.In addition, there was no significant difference between the active test food and the placebo food group in the change in AI levels from the baseline to evaluation points.Moreover, ox-LDL decreased at week 8 in the placebo food group compared to the active test food group.We also examined the effect of rutin-rich Tartary buckwheat on oxidative stress markers.Urinary 8-OHdG did not differ between the groups.However, TBARS levels significantly decreased at week 8 in the active test food group.We also evaluated the effect of rutin-rich Tartary buckwheat on lipid metabolism parameters.TC, LDL-C and HDL-C did not differ between the groups.The group × time interaction gave marginally significant difference to TG levels.However, there were no differences between the active test group and placebo group in the changes in TG levels from baseline to each evaluation points.To determine the effect of rutin-rich Tartary buckwheat on body composition, we evaluated the changes in BW, BMI, and BFP.No significant differences in the interaction of group by time of BW, BMI, and BFP were observed between the groups.However, the ingestion of the active test food significantly decreased BW and BMI at week 8.Moreover, BFP significantly decreased in the active test food group compared to the placebo food group at week 4.We evaluated the CBC, liver and renal function, and BP after the ingestion of rutin-rich Tartary buckwheat products.Minimal changes were observed in the CBC parameters, liver function, renal function, and BP.Although few subjects showed adverse events, runny nose/nasal congestion, fever, pharyngeal pain, cough), their symptoms were mild, and they recovered in a few days.Thus, the principal investigator judged that there were no adverse events related to the ingestion of the test food.These results suggested that the ingestion of rutin-rich Tartary buckwheat had no or minimal unfavourable effects even at a dose of 50 g/day.The results of our randomised, double-blind, placebo-controlled, parallel-group trial confirmed the potential effects of rutin-rich Tartary buckwheat, ‘Manten-Kirari’ on lipid metabolism, antioxidation, and body composition.There were no significant differences in AI and ox-LDL levels, and in lipid metabolism parameters between the active test food and the placebo food groups.However, TBARS levels decreased in the test group.In addition, the ingestion of the active test food decreased BW, BMI and BFP.Previous reports indicated that 22-day-continuous ingestion of rutin decreased TBARS 
levels in a dose-dependent manner in rats, although it affected neither the level of lipid metabolism parameters in serum and liver nor the level of steroid excretion into faeces.In addition, 5-week-continuous ingestion of 100 mg/kg rutin improved diabetic neuropathy and decreased TBARS levels in diabetic rat models.It was suggested that the antioxidant mechanism of flavonoids, such as rutin, involves radical scavenging activity.Free radicals that are formed through the auto-oxidation of unsaturated lipids in plasma and membrane lipids, react with polyunsaturated fatty acids, and lead to lipid peroxidation.Our clinical study suggested that rutin-rich Tartary buckwheat had antioxidant effects.However, ox-LDL decreased at week 8 in the placebo food group compared to the active test food group.This may be attributed to a higher initial level of ox-LDL in the placebo food group compared to that in the active test food group.In addition, ox-LDL was positively correlated with serum LDL-C levels.Since serum LDL-C decreased by a higher value in the placebo food group compared to the active test food group, the ox-LDL level decreased by a higher magnitude in the placebo group.Moreover, no significant difference was observed in the ox-LDL/LDL ratio between the two groups.The underlying cause is not clear; however, it is generally considered that the flavonoid glycoside rutin is hydrolysed by the intestinal microflora.Metabolites containing a vicinal hydroxyl structure such as quercetin, 3,4-dihydroxyphenylacetic acid and 3,4-dihydroxytoluene play important roles in the antioxidant effects of rutin.It is well known that Tartary buckwheat contains other functional compounds such as vitamins B1, B2, and B6 and proteins; however, the interaction between rutin and these compounds was not fully investigated.Since the metabolic pathways of rutin in Manten-Kirari and that of refined rutin are different, it is assumed that their metabolites and effects may also be different.Therefore, further research is required on the antioxidant effects of ‘Manten-Kirari’ and rutin metabolites present in ‘Manten-Kirari’ in vivo and in vitro using purified rutin as a control.No significant differences in lipid metabolism parameters such as TC, HDL-C, LDL-C, and TG existed between the active test food group and the placebo food group.However, the ingestion of the active test food significantly decreased BW and BMI at week 8.In addition, BFP decreased at week 4 in the active test food group compared to the placebo food group.A previous study has reported that the ingestion 110 mg quercetin for 12 weeks decreased visceral fat area in subjects whose BMI was between 25 kg/m2 and 30 kg/m2.The mechanism by which quercetin caused visceral fat area improvement was suggested as inhibition of the gene expression of peroxisome proliferator-activated receptor gamma and sterol regulatory element-binding protein 1c and facilitation of gene expression of proteins involved in β-oxidation, as well as inhibition of the mitogen-activated protein kinase signalling factors extracellular signal-regulated kinase1/2, Jun-N-terminal kinase, and p38 MAPK in adipocytes and macrophages.Rutin caused improvement of BW and BFP in rats fed with a high-fat diet through induction of gene expression related to and activation of 5′ AMP-activated protein kinase in skeletal muscles and mitochondrial biosynthesis.In addition, another report suggested that mRNA expressions such as PPARγ and CCAAT/enhancer binding protein-α in 3T3-L1 cells were down regulated by 
rutin treatment. These facts suggested that the improvement of BW and BFP due to the ingestion of ‘Manten-Kirari’ was related to the activation of AMPK and an increase in energy conversion, and would regulate the expression of adipogenic transcription factors in adipocytes. The BW-reducing effect of the ingestion of ‘Manten-Kirari’ was limited in our clinical study. However, the mean BMI of our subjects was 22.2 kg/m2, which is regarded as non-obese. Therefore, we expect that ingestion of ‘Manten-Kirari’ might reduce BW and BFP to a greater extent in obese subjects. On the other hand, in the placebo food group, the change in TBARS levels from baseline to week 8 showed a significant positive correlation with the change in BMI, whereas in the active test food group the change in TBARS levels showed no such positive correlation with the change in BMI. These results suggested that ‘Manten-Kirari’ suppressed the increase in oxidative stress induced by the increase in BMI. Obesity contributes to oxidative stress and inflammation. It has been reported that there is a significant positive association between BMI or waist/hip ratio and urinary levels of 8-epi-prostaglandin F2α. In obesity, adipose tissue increases oxidative stress via increased NADPH oxidase, an enzyme that produces reactive oxygen species, and reduced antioxidant enzymes such as superoxide dismutase, glutathione peroxidase and catalase. Rutin treatment of high-fat diet-fed rats produced not only a reduction in body, liver and adipose tissue weights but also a reduction in TBARS levels. On the other hand, defective secretion of adipokines such as tumor necrosis factor-α (TNF-α), monocyte chemoattractant protein-1, leptin and adiponectin is observed in obesity. Rutin also inhibited the expression of leptin and up-regulated the expression of adiponectin at the protein level in 3T3-L1 adipocytes. Moreover, oxidative stress could also be related to adipokine imbalance. Therefore, the inhibition of TNF-α secretion and the increases in adiponectin secretion and leptin sensitivity related to antioxidant effects may improve obesity through inhibition of inflammation, improvement of insulin resistance, or correction of the imbalance between food intake and energy expenditure. These facts suggested that rutin might have directly or indirectly improved adipokine imbalance and thereby reduced BW and BFP. In conclusion, the results of this study revealed that rutin-rich Tartary buckwheat, ‘Manten-Kirari’, showed potential effects on decreasing BW, BFP, and oxidative stress. Although additional studies are still needed to elucidate the molecular mechanisms underlying these effects and to confirm the results of this study, the present study facilitates the development of new applications of processed foods using ‘Manten-Kirari’.
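As a rough illustration of the sample-size reasoning reported in the methods (80% power, alpha of 0.05, about 120 evaluable subjects, inflated for a 20% loss to follow-up), the sketch below reproduces the arithmetic; the standardised effect size is an assumed value chosen only for illustration, since the paper does not state the one it used.

```python
# Minimal sketch of the sample-size calculation, assuming a two-group t-test on
# the change in AI. The effect size of 0.52 is hypothetical; it is chosen only
# so that the calculation lands near the reported ~120 evaluable subjects.
import math
from statsmodels.stats.power import TTestIndPower

ASSUMED_EFFECT_SIZE = 0.52  # hypothetical standardised difference in AI
n_per_group = TTestIndPower().solve_power(
    effect_size=ASSUMED_EFFECT_SIZE, alpha=0.05, power=0.80, alternative="two-sided"
)
evaluable = 2 * math.ceil(n_per_group)        # roughly 120 subjects in total
enrolled = math.ceil(evaluable / (1 - 0.20))  # inflate for 20% loss to follow-up
print(evaluable, enrolled)                    # approximately 120 and 150
```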
Rutin, a phenolic compound, has antioxidant, anti-dyslipidaemic, and body weight-reducing effects. We evaluated the anti-arteriosclerotic, antioxidant, and body weight-reducing effects of rutin-rich Tartary buckwheat. We randomly divided 144 adult subjects into an active test food group consuming products containing rutin-rich Tartary buckwheat and a placebo food group. Body composition measurements and haematological and urine tests were performed at weeks 0, 4, 8, and 12, and at 3 weeks after termination. The atherosclerosis index and ox-LDL did not differ significantly between the groups. However, TBARS levels, BW and BMI in the active test food group were significantly lower than those in the placebo group at week 8 (p = 0.027, p = 0.030, respectively). BFP in the active test food group at week 4 was lower than that in the placebo group (p = 0.038). Thus, rutin-rich Tartary buckwheat intake may be effective for body weight reduction, related to its antioxidant properties.
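The between-group comparisons behind the p-values reported above can be illustrated with a minimal sketch: an unpaired Student's t-test applied to change-from-baseline values at a given visit. The data below are simulated placeholders, not the trial data; only the group sizes (73 and 71 analysed subjects) mirror the study.

```python
# Sketch of the per-visit between-group comparison: an unpaired Student's t-test
# on change from baseline (here, hypothetical week-8 TBARS changes). The numbers
# are simulated for illustration and do not reproduce the trial data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
change_active = rng.normal(loc=-1.0, scale=3.0, size=73)   # active test food group
change_placebo = rng.normal(loc=0.3, scale=3.0, size=71)   # placebo food group

t_stat, p_value = stats.ttest_ind(change_active, change_placebo, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print("significant" if p_value < 0.05 else
      "marginal" if p_value < 0.10 else "not significant")  # p < 0.10 treated as marginal
```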
756
Systematic comparison of the statistical operating characteristics of various Phase I oncology designs
Phase I trials of a new anti-cancer drug are usually single arm, open label studies conducted on a small number of cancer patients, many of whom do not respond any longer to the standard treatment.Due to the toxic nature of many anti-cancer drugs as well as due to ethical reasons, cancer patients are enrolled in Phase I oncology trials, as opposed to the healthy volunteers used in Phase I trials in other therapeutic areas."The main aim of a Phase I oncology trial is to investigate and understand the toxic properties of the new anti-cancer drug; the drug's efficacy is not traditionally the focus, although the drug's efficacy is often observed and monitored by the oncologist.With regard to safety, the trial helps investigators determine the right dose and dosing interval as well as the best route of administration of the new drug.In order to determine the right dose, an endpoint such as Phase 1 dose limiting toxicities in the first cycle is often considered.For each dose finding Phase I trial, a set of pre-defined adverse events, typically only those possibly related to taking the study drug, constitutes the DLTs for that trial.Patients are traditionally monitored for DLTs during the first cycle of administration of the new anti-cancer drug; however, more recent trials may monitor DLTs for a longer period and may include toxicities in the DLT definition that are not included in the conventional definition of DLTs .The starting dose in these dose finding trials is usually a very conservative dose based on animal studies of the drug, and the subsequent increasing doses to be administered are pre-specified.The number of patients with DLTs in each dose level is used to determine the Maximum Tolerated Dose.For a single anti-cancer drug being tested, the MTD is usually the highest dose level at which the observed DLT rate is equal to or below a specified percent.Phase II patients are generally dosed at the MTD determined in the corresponding Phase I trial.The above method for MTD selection is more applicable to cytotoxic agents where the toxicity and efficacy are assumed to increase monotonically with dose than to some modern molecularly targeted therapies where the MTD may not be reached even at higher doses due to their low toxicity; in such cases, another appropriate dosing endpoint may need to be considered such as the dose at which the key pharmacokinetic and pharmacodynamics parameters are optimal .Dose finding Phase I oncology designs can be broadly categorized as rule based or model based.The 3 + 3 design has been the workhorse dose finding design for Phase I oncology trials for a long time.It is still commonly used due to its simplicity and ease of implementation.However, depending on the target DLT rate of interest, it can be slow and inaccurate in estimating the MTD and can lead to a large portion of patients receiving sub-therapeutic doses that do not produce any clinically meaningful response .Hence, other designs, including model-based designs, have been explored in recent years .The establishment of the MTD for various Phase 1 oncology designs is the main focus of this paper.In this work, we explore extensions of the 3 + 3 design as well as the model based mTPI , TEQR , BOIN , CRM and EWOC designs and compare their performance.There is no unique criterion to evaluate these designs since the performance of each design depends on the true DLT probability at each dose and the target DLT rate of the design.Hence, we systematically compare several statistical operating characteristics 
for the true DLT rates generated at the same doses by three different dose-toxicity curves. In addition, we explore the effect of starting the trial at different dose levels below the true MTD on the accuracy of MTD selection in these designs. The 3 + 3 design and its extensions that we consider target a DLT rate of ∼0.2, and we specify a target DLT rate of 0.2 for the model based designs we consider. Although the results in this paper focus on a target DLT rate of 0.2, we explain in the discussion section the implications of targeting other DLT rates such as 0.1 and 0.33 with the A + B designs considered and discuss other A + B designs that target these rates. We also study the performance of the model based designs considered when the target DLT rate specified is 0.1 or 0.33. In contrast to previous works that compare a limited number of specific designs, our comprehensive comparison across several designs should serve as a practical aid in applying these Phase I oncology designs or in developing new ones. We consider the 3 + 3 design, which targets a DLT rate of ∼0.2, as well as its various extensions that target a DLT rate of ∼0.2. We also include the simple accelerated titration design and the 3 + 3+3 design in our study. We then investigate several of their statistical operating characteristics, such as the accuracy of MTD selection, among others. The formal definition of the MTD is that it is the dose for which the probability of a DLT equals the target probability. For the A + B designs that allow only escalation, the algorithm that we follow is: if, out of A patients assigned to dose level i, the number of DLTs observed is ≤x, then assign A patients to dose level i+1. If the number of DLTs observed out of A patients at dose level i is >x and <y, then assign B more patients to dose level i. If, out of A + B patients, the number of DLTs observed is ≤z, then add A patients to dose level i+1; otherwise stop the trial. If the number of DLTs observed out of A patients at dose level i is ≥y, then stop the trial. We then estimate the MTD to be the dose level immediately below the last dose level examined. For the standard 3 + 3 design, which is a special case of the general A + B design, this implies that the MTD is estimated to be the highest dose at which fewer than 33% of patients experience a DLT. For the A + B designs that also allow de-escalation, the algorithm that we follow is: implement the rules given above for the corresponding escalation-only design and let i be the dose level where the number of DLTs exceeds that allowed by the design. Then, check whether A + B patients have been dosed at dose level i-1. If yes, dose level i-1 is estimated to be the MTD. If not, add B more patients at dose level i-1. If, out of the A + B patients at dose level i-1, the number of DLTs observed is ≤z, then dose level i-1 is estimated to be the MTD even if A + B patients have not been dosed at dose level i-2. If, out of the A + B patients at dose level i-1, the number of DLTs observed is >z and A + B patients have been dosed at dose level i-2, then dose level i-2 is estimated to be the MTD. If A + B patients have not been dosed at dose level i-2, then add B more patients and continue the process. For the 3 + 3 design with de-escalation, the MTD is estimated to be the highest dose at which fewer than 33% of patients experience a DLT and at which at least six participants have been treated with the study drug. For the rule-based designs where no de-escalation is allowed, Table 1 describes the dose finding rules; the specific x, y, and z for each A + B
design can be determined based on the description of these designs in Table 1. To provide a preliminary idea of the properties of these designs, we depict in Fig. 1 the probability of not escalating for a single step for various true DLT rates for the escalation-only designs considered. For example, for the 3 + 3 design that allows only escalation, we can escalate at each step or dose level if 1) 0 out of 3 patients experience a DLT or if 2) 1 out of 6 patients experiences a DLT; the probability of escalating at each step or dose level is q³ + 3pq⁵ and the probability of not escalating at each step is 3p²q + p³ + 9p²q⁴ + 9p³q³ + 3p⁴q², where p is the probability of experiencing a DLT at the current dose level and q = 1 − p. Using these two probabilities and extending the framework to any number of steps, we can then calculate analytically the probability of selecting any dose level as the MTD for the 3 + 3 as well as other A + B designs that allow only escalation. This reference also provides analytic formulae for the probability of MTD selection for the 3 + 3 and other A + B designs that allow de-escalation as well. In terms of model-based designs, we consider the Modified Toxicity Probability Interval (mTPI), Toxicity Equivalence Range (TEQR), Bayesian Optimal Interval (BOIN), Continual Reassessment Method (CRM) and Escalation with Overdose Control (EWOC) designs and explore their statistical operating characteristics. For these designs, we can choose the DLT rate that each design will target; we specify a target DLT rate of 0.2 for all of them, in order to compare their performance with the performance of the 3 + 3 design and its extensions that target a DLT rate of ∼0.2. Note that although the TEQR design is not a model based design, it allows the specification of the target DLT rate. The mTPI design is described in detail in the reference by Ji and others. The mTPI design is a Bayesian dose finding design that uses the posterior probability in guiding dose selection. The mTPI design uses a statistic for the decision rules called the unit probability mass (UPM), defined as the ratio of the probability mass of the interval and the length of the interval. The toxicity probability scale is divided into three portions: (0, pT − ε1), corresponding to under-dosing; (pT − ε1, pT + ε2), corresponding to proper dosing; and (pT + ε2, 1), corresponding to over-dosing. Here pT is the target probability of dose limiting toxicity and ε1 and ε2 are used to define the interval for the target DLT rate. The rules for escalating, staying at the same dose or de-escalating depend on which of these portions has the highest UPM for that dose level, based on a beta-binomial distribution with a beta prior. For example, the next cohort of patients will be treated at the next higher dose level if the UPM is the largest for the under-dosing interval. The trial stops if dose level 1 is too toxic or if the maximum sample size is reached or exceeded. The TEQR design is a frequentist version of the mTPI design and is described in detail in the reference by Blanchard and Longmate. This design is not based on the posterior probability but on the empirical DLT rate. The unit interval is divided into the same three portions: (0, pT − ε1), (pT − ε1, pT + ε2) and (pT + ε2, 1). The rules for escalating, staying at the same dose or de-escalating depend on which of these portions contains the empirical DLT rate for that dose level – if the empirical DLT rate lies between 0 and pT − ε1, we escalate; if it lies in the interval (pT − ε1, pT + ε2), we stay at the same dose; if it lies above pT + ε2, we de-escalate. In both the mTPI and TEQR designs, we stay at the current dose if the current dose is safe but the next higher dose
is too toxic based on the data. A trial using the TEQR design stops if dose level 1 is too toxic or when a dose level achieves the selected MTD sample size. In a trial using the TEQR or the mTPI design, the MTD is determined to be the highest dose level with a DLT rate that is closest to the target DLT rate after applying isotonic regression at the end of the trial. The concept of the BOIN design is similar to that of the TEQR design in terms of dividing the toxicity probability scale into three intervals and using these intervals along with the empirical DLT rate to guide dose finding. In contrast to the TEQR and mTPI designs, where the interval for the target DLT rate is fixed and is independent of the dose level and the number of patients that have been treated at that dose level, the BOIN design is more general and permits this interval to vary with the dose level and the number of patients that have been treated at that dose level. In this design, the probability of patients being assigned to very toxic doses or to sub-therapeutic doses is low. A trial using the BOIN design usually stops at the pre-planned sample size but the design allows the incorporation of early stopping rules. The EWOC design is a Bayesian adaptive dose finding design whose unique feature is over-dose control, i.e. the posterior probability of treating patients at doses above the MTD, given the data, cannot be greater than a certain pre-specified probability α. In mathematical terms, we specify a prior distribution for (ρ0, γ), where ρ0 is the probability of DLT at the minimum dose and γ is the MTD dose, and let Πn be the marginal posterior cdf of γ given the data Dn. The first patient receives the dose x1, and conditional on the event of no DLT at x1, the (n + 1)th patient receives the dose xn+1 = Πn⁻¹(α), which implies that the posterior probability of exceeding the MTD is equal to α. The design also minimizes the under-dosing of patients. This means that the MTD is generally reached rapidly, and after the initial cohorts of patients, the remaining cohorts of patients are treated at dose levels reasonably close to the MTD. In this design, it is also possible to add a stopping rule for excessive toxicity; for example,
the trial will be stopped early if three consecutive DLTs are observed or if the posterior probability at the minimum dose exceeds a certain pre-defined value.For our simulations in SAS of the 3 + 3 design and its extensions, we use a Bernoulli random generator, along with the probability of a DLT at different doses generated by a dose-toxicity curve, to randomly assign each patient a DLT or not depending on the probability of a DLT at the assigned dose.We then implement the assignment rules of each design and follow each simulated trial to its conclusion.For example, for the designs that allow only escalation, we escalate until the number of DLTs at the last dose level examined exceeds that allowed by the specific design, and the MTD is then estimated to be one dose level below the last dose level examined.We perform these simulations 10000 times for each combination of design and dose-toxicity curve.The increase in dose at a new dose level beyond dose level 1 for each dose-toxicity curve investigated is based on the modified Fibonacci series, as commonly used in many oncology trials .A logistic dose-toxicity curve is often used to describe the underlying relation between dose and toxicity in cytotoxic agents .Hence, we specify the true DLT probability at each dose based on a specific logistic curve.In addition to the logistic curve, we consider a specific log logistic and a linear dose-toxicity curve to study the performance sensitivity of these designs to the true DLT probabilities generated by these different dose-toxicity curves.Table 2 shows the true DLT rates at each dose level for each of the three dose-toxicity curves.For determining the two unknown coefficients of each dose-toxicity curve, we use the DLT rates at two different doses – namely we assume a true DLT rate of 0.01 at dose level 1 of 100 units and a DLT rate of 0.2 at the true MTD of 334 units.We assume a DLT rate of 0.2 at the MTD because the 3 + 3 design targets a DLT rate between 0.2 and 0.25 .Hence this choice of 0.2 allows a fair comparison of the simulation results from the 3 + 3 design with those from other A + B designs whose approximate target DLT rate is 0.2.However, we also study the performance of these designs to different target DLT rates, such as 0.1 and 0.33.We choose the following broad range of statistical operating characteristics to compare and evaluate the dose finding schemes considered for these three dose-toxicity curves: the accuracy of MTD selection, the average number of dose levels examined and its standard deviation, the maximum and median number of dose levels examined, the mean and median number of patients and the median number of DLTs per trial, the mean number of patients dosed at the MTD, the mean percentage of patients dosed at the MTD, above the MTD and below the MTD, the average number of patients and DLTs at each dose level, the average trial DLT rate and the average DLT rate at the MTD.Further, we investigate the effect of the location of the starting dose relative to the true MTD on the accuracy of MTD selection for the chosen logistic and log-logistic dose-toxicity curves for e.g. 
when we start our trial simulation at dose level −3, −2 or −1 instead of at dose level 1. In addition, we use three linear dose-toxicity curves with different offsets to investigate the effect of the location of the starting dose relative to the true MTD on the accuracy of MTD selection for the 3 + 3 design. Our SAS programs, available on request, are presently able to provide results for six designs and three dose-toxicity curves. However, the programs are simple and flexible and can be extended to other A + B designs as well as any other dose-toxicity curve. We use R code provided by Ji et al. to implement the mTPI design. The program requires the following inputs: number of simulations, target probability of dose limiting toxicity pT, ε1 and ε2 (which help define the lower and upper bounds of the interval for the target DLT rate, respectively), sample size, cohort size, starting dose and the true DLT rate at each dose. We use the R package TEQR to implement the TEQR design. The program requires the following inputs: number of simulations, target probability of dose limiting toxicity pT, ε1 and ε2 (which help define the lower and upper bounds of the interval for the target DLT rate, respectively), the DLT probability deemed to be too toxic, desired sample size at the MTD, cohort size, maximum number of cohorts, starting dose and the true DLT rate at each dose. We use the R package BOIN to implement the BOIN design. The program requires the following inputs: number of simulations, target probability of dose limiting toxicity pT, cohort size, number of cohorts, starting dose, the cut-off to eliminate an overly toxic dose for safety and the true DLT rate at each dose. Although the design allows the possibility of rules for stopping prior to reaching the planned sample size, we did not implement these early stopping rules, to permit fair comparisons between designs. We use a CRM trial simulator to implement the various scenarios for the CRM design. The program requires the following inputs: number of simulations, maximum sample size, cohort size, number of doses, starting dose, target probability of dose limiting toxicity, stopping probability and the true DLT rates at the various doses. The probability of DLT at dose i is modeled as pi^exp(α), where pi is a constant and α is distributed a priori as a normal random variable with mean 0 and variance 2. The initial default prior probabilities of DLT used in the software are given in Appendix Table 3. The trial stops when the planned sample size is reached or if the lowest dose is too toxic. We use a web based program to implement the EWOC design. The program requires the following inputs: number of simulations, target probability of dose limiting toxicity, maximum acceptable probability of exceeding the target dose, variable α increment, cohort size, sample size, minimum dose, maximum dose, number of dose levels and the true probability of DLT at each dose. Although the EWOC design allows the possibility of rules for stopping prior to reaching the planned sample size, the current implementation of the EWOC design does not include early stopping rules. The parameters used for the mTPI, TEQR, BOIN, CRM and EWOC designs are shown in Appendix Tables 2, 3, 4 and 5. Note that the sample size is an output of the rule-based A + B designs as well as the TEQR design. For the mTPI, BOIN, CRM and EWOC designs, we use the same sample size that the TEQR design yields for each of the three sets of true DLT rates. For all the simulation results in this section, dose level 1 is the lowest dose and
dose level 3 is the true MTD.For the logistic dose-toxicity curve constructed, there is a very clear separation between the true DLT rate at the MTD and the rates at the dose levels below and above it: the DLT rate of 0.2 at the MTD versus 0.04 at the dose level below and 0.71 at the dose level above.The DLT rate of 0.2 at dose level 3 aligns with the range of toxicity rates that the escalation-only A + B designs target and is the target DLT rate specified for the model-based designs.Hence all the designs pick dose level 3 as the MTD the largest percentage of times in our simulations, while incorrectly picking the other dose levels substantially less frequently.The 4+4a design with and without de-escalation, the mTPI design, the CRM design and the 3 + 3+3 design correctly pick dose level 3 as the MTD ∼79%, ∼80%, ∼76%, ∼76% and ∼76% percent of the time respectively.The median number of patients enrolled in the trial ranges from 6 for the simple accelerated titration design to 25 for the 5 + 5 a design.As expected, with the 3 + 3 design, about half of the patients are given doses below the MTD.The BOIN design and the 5 + 5 a design with and without de-escalation also treat a large percentage of patients at doses below the MTD – about 50%, 48% and 49% respectively.On the other hand, the simple accelerated titration design over-doses a large percentage of patients.The model based designs generally treat a large percentage of patients at the MTD.The average trial DLT rate ranges from 0.17 for the TEQR design to 0.4 for the simple accelerated titration design; the median number of DLTs per trial ranges from 2 for the 2 + 4 design without de-escalation to 5 for the 4+4a design with de-escalation and the 5 + 5 a design, among the extensions of the 3 + 3 design considered.For the log-logistic dose-toxicity curve constructed, there is a clear separation between the true DLT rate at the MTD and the rates at the dose levels below and above it: the DLT rate of 0.2 at the MTD versus 0.06 at the dose level below and 0.42 at the dose level above.Although this separation is not as large as it is in the logistic dose-toxicity curve considered, all the designs still pick dose level 3 as the MTD more frequently than they pick any other dose level.The CRM, mTPI, BOIN and 5 + 5 a with and without de-escalation designs correctly pick dose level 3 as the MTD ∼74%, ∼63%, ∼59%, ∼58% and ∼58% percent of the time respectively.The median number of patients enrolled in the trial ranges from 7 for the simple accelerated titration design to 30 for the 5 + 5 a design with de-escalation.For this dose-toxicity curve, about 49% of patients are given doses below the MTD in the 3 + 3 design.The BOIN, TEQR and 5 + 5 a design with and without de-escalation also treat a large percentage of patients at doses below the MTD – about 50%, 47%, 47% and 47% respectively.On the other hand, the simple accelerated titration design over-doses a large percentage of patients.The model based designs generally treat a large percentage of patients at the MTD.The average trial DLT rate ranges from 0.17 for the TEQR design to 0.34 for the simple accelerated titration design; the median number of DLTs per trial ranges from 2 for the simple accelerated titration design, reflecting the very small sample size for this design, to 5 for the 4 + 4 a design and the 5 + 5 a design with de-escalation, among the extensions of the 3 + 3 design considered.For the linear dose-toxicity curve constructed, the DLT rate at dose level 3 is 0.2 and the DLT rate at 
dose level 4 is 0.34.Although this separation is even smaller than that in the logistic and log-logistic dose-toxicity curves considered, all the designs except the accelerated titration design pick dose level 3 as the MTD more frequently than any other dose level.The CRM, mTPI, 5 + 5 a with and without de-escalation and TEQR designs correctly pick dose level 3 as the MTD but only ∼54%, ∼45%, ∼45%, ∼45% and ∼45% percent of the time respectively.The median number of patients enrolled in the trial ranges from 8 for the simple accelerated titration design to 30 for the 5 + 5 a design with de-escalation.For this dose-toxicity curve, about half of the patients are given doses below the MTD in the 3 + 3 design.The BOIN, TEQR, CRM, mTPI designs and the 5 + 5 a design with and without de-escalation also treat a large percentage of patients at doses below the MTD – about 58%, 50%, 50%, 48%, 48% and 48% respectively.On the other hand, the simple accelerated titration over-doses a large percentage of patients.The model based designs generally treat a large percentage of patients at the MTD.The average trial DLT rate ranges from 0.16 for the TEQR design to 0.31 for the simple accelerated titration design; the median number of DLTs per trial ranges from 2 for the simple accelerated titration design to 5 for the 4 + 4 a and 5 + 5 a designs, among the extensions of the 3 + 3 design.Results for the accuracy of MTD selection for the 3 + 3 design for all the three dose-toxicity curves considered are presented in Fig. 3; results for some of the other designs are presented graphically in Appendix Figs. 1–3.In the previous section, our simulations are started at dose level 1 for all the rule-based designs, and dose level 3 is the true MTD for all the designs.This means that it takes only two escalations from the starting dose to reach the true MTD in the escalation only designs.However, the accuracy of MTD selection could depend on where the starting dose is located relative to the true MTD, for example if it is located six dose levels below the true MTD versus two, because some dose finding designs may be slow to escalate while others may be fast to do so.Thus, we investigate the effect of starting at lower dose levels on the accuracy of MTD selection in the 3 + 3 design and its extensions that allow only escalation, using the logistic dose-toxicity curve in Table 2.We find that the number of patients on the trial and the percentage of patients who are under-dosed, both of which are outputs of the program for the rule-based designs, increase when we start at the lower doses, but the accuracy of MTD selection is largely unaffected for all these designs.We find similar results for the model based designs.We also find similar results for the log-logistic dose-toxicity curve in Table 2 to those described for the logistic dose-toxicity curve.The result that the location of the starting dose relative to the true MTD does not affect the accuracy of MTD selection may not be surprising since the true DLT rates at dose level −1, −2 and −3 are very small for the logistic and log-logistic dose-toxicity curves used.In general, the accuracy of MTD selection will be affected when the true DLT rates at these lower dose levels are much greater than 0.01.We have demonstrated this for the 3 + 3 design using three linear dose-toxicity curves with different offsets.In practice, the starting dose of the trial is usually an extremely conservative estimate based on animal studies, and the DLT rates at the first few dose levels are 
expected to be very low.1,In this case, the accuracy of MTD selection should not be affected even when the true MTD is several doses above the starting dose in the rule-based escalation only designs considered, and we can enroll patients at the same low starting dose for these designs.In this work, we have systematically compared via simulations the statistical operating characteristics of various Phase I oncology designs, namely the 3 + 3 design and its extensions that target a DLT rate of ∼0.2 as well as the mTPI, TEQR, BOIN, CRM and EWOC designs with a pre-specified target DLT rate of 0.2, for three sets of true DLT rates.Although this is not an exhaustive comparison of all the current Phase 1 oncology designs, we have covered multiple commonly used ones.The 3 + 3 design is very simple and easy to implement and hence is still commonly used.However, our simulations show, not unexpectedly, that it under-doses a large percentage of patients, and is also not the design that picks the MTD most accurately for any of the dose-toxicity curves examined, with or without de-escalation.All the designs examined select the MTD fairly accurately when there is a clear separation between the true DLT rate at the MTD and the rates at the dose level immediately below and above it, as is the case for the DLT rates generated using the chosen logistic dose-toxicity curve.However, when this separation is small, as is the case for the DLT rates generated using the chosen linear dose-toxicity curve, the accuracy of MTD selection is much lower.The separations in these true DLT rates depend, in turn, not only on the functional form of the dose-toxicity curve but also on the investigated dose levels and the parameter set-up.The considered A + B designs with de-escalation generally pick the MTD more accurately than the corresponding escalation-only design for the true DLT rates generated using the chosen log-logistic and linear toxicity curves, but not for the logistic one.Some of the other rule based designs examined pick the MTD more accurately than the 3 + 3 design, depending on the true DLT rate at each dose.For example, the 5 + 5 a design is as accurate as the model based designs in picking the MTD for the true DLT rates generated using the chosen log logistic and linear dose-toxicity curves but requires enrolling a larger number of patients compared to the other designs considered and under-doses a large percentage of patients for these dose-toxicity curves.Among the designs investigated, the simple accelerated titration design over-doses a large percentage of patients.Over-dosing of patients in oncology trials is an important issue that needs to be considered carefully in terms of study design since the toxicities at the higher doses can be very harmful to patients.The EWOC design explicitly takes this into consideration; in this design, one can control the expected proportion of patients receiving doses above the MTD by pre-specifying the maximum acceptable probability of exceeding the target dose.Although some model-based designs can be more difficult to implement than rule based designs, the model based designs studied, mTPI, TEQR, BOIN, CRM and EWOC designs, perform well and assign the maximum percentage of patients to the MTD, and also have a reasonably high probability of picking the true MTD.In our simulations, we assumed a true DLT rate of 0.2 at the MTD because it has been shown that the standard 3 + 3 design targets a toxicity rate between 0.2 and 0.25 .However, when a DLT rate of 0.1 is specified 
as the target DLT rate, the various A + B designs considered would not, in general, select the MTD accurately because 0.1 is not within their target range, and when a DLT rate of 0.33 or 0.4 at the MTD is assumed, A + B designs that target a higher DLT rate would pick the MTD correctly more often than the 3 + 3 design.For example, for the linear dose-toxicity curve in Table 2, dose level 2 is the true MTD if the target DLT rate is 0.1.In this case and for the extensions of the 3 + 3 design considered, percentages for correct MTD identification for dose level 2 are lower than those for dose level 3 and range from 14% to 29%; percentage for 3 + 3 is 27%.If we consider a 5 + 5 design that targets a DLT range of 0.1–0.15, it selects dose level 2 as the MTD ∼43% of the time, which is much higher than the percentages with which the 3 + 3 and the other A + B designs with a target DLT rate of ∼0.2 select dose level 2 as the MTD.Dose level 4 is the true MTD if the target DLT rate is 0.33.If we consider the 4 + 4 b design and 5 + 5 b design, they both select dose level 4 as the MTD ∼40% of the time.This is much higher than the percentages with which the 3 + 3 and the other A + B designs with a target DLT rate of ∼0.2 select dose level 4 as the MTD for the chosen linear dose-toxicity curve.Results for the accuracy of MTD selection for the model based designs for the linear dose-toxicity curve given in Table 2 and for the target DLT rates of 0.1 and 0.33 are provided in Appendix Tables 6 and 7 respectively.The accuracy of MTD selection decreases as the target DLT rate increases from 0.1 to 0.33 for the mTPI, TEQR, BOIN and CRM designs, but not for the EWOC design, for the chosen linear dose-toxicity curve.Our simulations for the A + B and model based designs show that for designs where the approximate DLT rate targeted by the design is known, it is critical to pick a design that is aligned with the true DLT rate of interest.We also showed that as long as the true DLT rates at the first few dose levels are very low, the accuracy of MTD selection is largely unaffected by the number of escalations it takes to reach the true MTD, for the rule-based escalation only designs considered that target a DLT rate of ∼0.2.For the standard 3 + 3 design, our simulations, where the starting dose is two levels below the true MTD, show that the maximum number of dose levels examined varies between 5 for the logistic dose-toxicity curve and 7 for the linear and log-logistic dose-toxicity curves considered, while the median number of dose levels examined is 4 for all the three dose-toxicity curves.In comparison, a literature review of 41 trials that were performed using the standard 3 + 3 design found that the median number of dose levels examined was 6, about 45% of the patients were under-dosed and about 20% of the patients were over-dosed .These empirical results are consistent with our simulation findings that the 3 + 3 design under-doses about 50% of the patients and over-doses about 22% of the patients on the trial, for all the three dose-toxicity curves.The average number of patients enrolled in trials that are based on the 3 + 3 design is, however, much higher in the literature review with a mean of 44 patients than in our simulations, where we found a mean of ∼14 patients for all the three dose-toxicity curves.However, this literature review is based on trials of targeted anti-cancer agents that reached the MTD and we do not know the exact percentage of trials that included expansion cohorts, and if the initial 
cohorts started at very low doses; hence, the above comparisons are not exact.Nevertheless, it is clear from clinical trial data as well as our simulations that Phase I trials are very small and thus may not provide good estimates of the MTD.If we consider designs with a higher average sample size, say 50–60 patients, they will have a much higher accuracy of MTD selection.In the future, it may be worthwhile investing in the enrollment of a larger number of patients even in a Phase I trial to obtain more accurate estimates of the right dose to be used for later Phase trials, although there is always a trade-off between costs and more accurate estimates.In conclusion, our comprehensive study compares and contrasts the 3 + 3 design with multiple other Phase I oncology designs with an approximate target DLT rate of 0.2 for various scenarios of true underlying DLT rates, in order to understand which designs pick the true MTD most accurately, which under-dose and over-dose the maximum percentage of patients, which assign the maximum number and percentage of patients to the MTD cohort, which explore the maximum number of dose levels and enroll the most number of patients in each case.Our SAS programs are flexible and can be extended to include other A + B designs, other dose-toxicity curves as well as other evaluation criteria.The summaries in this paper provide considerable information on design property trade-offs, and the means to explore additional settings.These may be useful aids in choosing a Phase I design for a particular setting.
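To make the rule-based simulation approach concrete, the sketch below implements a minimal escalation-only 3 + 3 trial under the logistic-curve DLT rates reported for dose levels 1 to 4 (with assumed rates for higher levels); it is an illustrative Python re-implementation under stated assumptions, not the authors' SAS code.

```python
# Minimal sketch of an escalation-only 3 + 3 trial simulation, following the
# rules described above: escalate after 0/3 DLTs or 1/6 DLTs at a dose level;
# otherwise stop, and estimate the MTD as one level below the last level examined.
# The first four true DLT rates are those reported for the logistic dose-toxicity
# curve (dose level 3 is the true MTD); the last two are assumed for illustration.
import random

TRUE_DLT_RATES = [0.01, 0.04, 0.20, 0.71, 0.95, 0.99]  # dose levels 1..6

def simulate_3plus3(dlt_rates, rng):
    """Run one escalation-only 3 + 3 trial; return (MTD index or None, n patients)."""
    n_patients = 0
    for level, p in enumerate(dlt_rates):
        dlts = sum(rng.random() < p for _ in range(3))
        n_patients += 3
        if dlts == 0:
            continue                      # 0/3 DLTs: escalate
        if dlts == 1:
            dlts += sum(rng.random() < p for _ in range(3))
            n_patients += 3
            if dlts <= 1:
                continue                  # 1/6 DLTs: escalate
        # more than 1 DLT at this level: stop; MTD is one level below
        return (level - 1 if level > 0 else None), n_patients
    return len(dlt_rates) - 1, n_patients  # highest level reached without stopping

rng = random.Random(0)
picks = [simulate_3plus3(TRUE_DLT_RATES, rng) for _ in range(10_000)]
correct = sum(1 for mtd, _ in picks if mtd == 2)          # index 2 = dose level 3
mean_n = sum(n for _, n in picks) / len(picks)
print(f"P(select dose level 3 as MTD) ≈ {correct / len(picks):.2f}, mean patients ≈ {mean_n:.1f}")
```

Extending this skeleton to other A + B rules or to de-escalation only requires changing the cohort sizes and the DLT thresholds in the loop, which mirrors how the paper's own programs are described as flexible.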
Dose finding Phase I oncology designs can be broadly categorized as rule based, such as the 3 + 3 and the accelerated titration designs, or model based, such as the CRM and Eff-Tox designs. This paper systematically reviews and compares through simulations several statistical operating characteristics, including the accuracy of maximum tolerated dose (MTD) selection, the percentage of patients assigned to the MTD, over-dosing, under-dosing, and the trial dose-limiting toxicity (DLT) rate, of eleven rule-based and model-based Phase I oncology designs that target or pre-specify a DLT rate of ∼0.2, for three sets of true DLT probabilities. These DLT probabilities are generated at common dosages from specific linear, logistic, and log-logistic dose-toxicity curves. We find that all the designs examined select the MTD much more accurately when there is a clear separation between the true DLT rate at the MTD and the rates at the dose level immediately above and below it, such as for the DLT rates generated using the chosen logistic dose-toxicity curve; the separations in these true DLT rates depend, in turn, not only on the functional form of the dose-toxicity curve but also on the investigated dose levels and the parameter set-up. The model based mTPI, TEQR, BOIN, CRM and EWOC designs perform well and assign the greatest percentages of patients to the MTD, and also have a reasonably high probability of picking the true MTD across the three dose-toxicity curves examined. Among the rule-based designs studied, the 5 + 5 a design picks the MTD as accurately as the model based designs for the true DLT rates generated using the chosen log-logistic and linear dose-toxicity curves, but requires enrolling a higher number of patients than the other designs. We also find that it is critical to pick a design that is aligned with the true DLT rate of interest. Further, we note that Phase I trials are very small in general and hence may not provide accurate estimates of the MTD. Thus our work provides a map for planning Phase I oncology trials or developing new ones.
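For the interval-based designs discussed above, the dose-assignment logic can be sketched as a small decision function. The version below follows the TEQR-style rule of comparing the empirical DLT rate with the interval (pT − ε1, pT + ε2), using pT = 0.2 as in the paper and illustrative values of ε1 and ε2; it is a simplified illustration, not the published R implementation.

```python
# Sketch of a TEQR-style interval decision rule: compare the empirical DLT rate
# at the current dose with the target interval (pT - eps1, pT + eps2).
# pT = 0.2 matches the target used in the paper; eps1 and eps2 are illustrative.
def interval_decision(n_dlts, n_treated, p_target=0.20, eps1=0.05, eps2=0.05):
    """Return 'escalate', 'stay', or 'de-escalate' for the next cohort."""
    rate = n_dlts / n_treated
    if rate < p_target - eps1:
        return "escalate"       # empirical rate in the under-dosing portion
    if rate <= p_target + eps2:
        return "stay"           # empirical rate within the target interval
    return "de-escalate"        # empirical rate in the over-dosing portion

# Examples:
print(interval_decision(1, 6))   # 1/6 ≈ 0.17, inside (0.15, 0.25) -> 'stay'
print(interval_decision(0, 6))   # 0.0 -> 'escalate'
print(interval_decision(3, 6))   # 0.5 -> 'de-escalate'
```

The full TEQR and mTPI rules add refinements omitted here, such as staying at a safe dose when the next higher dose already looks too toxic and applying isotonic regression for the final MTD estimate.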
757
Assessment of a cracked reinforced concrete beam: Case study
Beam-column joint plays an important role in the structural response of reinforced concrete structures, especially when these structures are subjected to cyclic or seismic loads.These joints usually experience high flexural and shear stresses from vertical and lateral loads.Shear stresses may arise from shear forces and/or from torsion straining actions in some cases such as edge beams, especially if loaded by heavy cladding.The stress distribution due to flexural and shear forces produce a diagonal crack pattern in the panel which leads to crush of the compressive strut, and consequently, to deterioration of strength and stiffness of the joint .The behavior of beam-column joint in reinforced concrete structures was investigated by many researchers over last decades through experimental tests as well as finite element analysis to address many aspects such as failure modes, ductility, energy dissipation capacity and sudden decrease of strength .There are two common types of joint failure based on the ductility of the joint.These types are non-ductile joint shear failure prior to beam yielding or ductile joint failure after beam yielding.The global capacity of the structural system is controlled by shear strength of beam-column joints .For reinforced concrete beams with plain steel bars, the sliding of steel bars may govern the failure mode of the beam and the diagonal shear failure has less effect .Rules and guidelines for design structure to have better response considering the beam-column joint are presented in many building codes ,Many researchers considered different techniques in strengthening reinforced concrete beams.Some used external layers of Ultra High Performance Fiber Reinforced Concrete, UHPFRC, on the tensile side or compressive side or three side jacket of the beam .M.A. Al-Osta et al .investigated the effectiveness of strengthening reinforced concrete beams by sand blasting RC beams surfaces and casting UHPFRC around the beams and by bonding prefabricated UHPFRC strips to the RC beams using epoxy adhesive.Other researchers used steel angles prestressed by cross ties for seismic retrofit ,Esmaeel et al. used a combination of GFRP sheets and steel cage for seismic strengthening of reinforced concrete beam-column joint without perforating the concrete elements.Carbon-fiber-reinforced polymer, CFRP strips and sheets were used to strengthen defected beam-column joint.The failure characteristics of the defected joint were effectively enhanced by using the CFRP strips and sheets.It was found that strengthening using CFRP increased the joint ultimate capacity and reduced the ductility .The objective of this research is to present the results of investigation that the author conducted based on the owner request to check the accuracy and appropriateness of the builder analysis and conclusion.The request was done after observing a crack in the beam-column joint in a newly constructed building and during the finishing period.The building was not opened for use yet.The investigation is based on the detailed analysis using PLAXIS and SAP2000 to check the stresses developed in the beam and determine whether the stresses developed in the beam exceed those expected by the design and to propose a repair method, if needed.A crack of more than 10 mm width was detected in an edge reinforced concrete beam of the second floor of a three-story office building in Riyadh.This beam has a clear span of 16.5 m with a section of 300 mm x 1500 mm.A plan showing this beam is presented in Fig. 
It was reported that the crack was first noticed after erection of the heavy exterior cladding on the affected beam. The crack is located at the beam-column joint. The adjacent beam has a cross section of 400 mm width and 700 mm height with a span of 4.0 m. Review of the beam design loads indicated that the beam carries its self-weight, a centric brick wall of about 4.5 m height, plus heavy external cladding of about 4.5 m height weighing about 15 kN/m. This heavy cladding represents more than 40% of the load on the beam. Accordingly, the initiation of the crack right after installation of the cladding should be considered carefully in a structural sense during this investigation. It is worth noting that the beam does not carry floor self-weight, since it is on the edge of the building, above the back entrance, which extends over the whole building height. After observing the crack, the builder diagnosed it and concluded that it is non-structural in nature, with no associated risk. The builder attributed the crack to temperature effects combined with a lack of the side reinforcement that would otherwise have prevented it, and suggested that the crack needs mere aesthetic treatment for architectural reasons. Visual inspection revealed that the crack is located exactly at the 90-degree joint between the top of the beam and the interior supporting column, and that it migrated downwards on both sides towards the column over some 300 mm and back again towards the bottom joint with the beam. The crack is widest at the top of the beam and reduces in width towards the other side of the beam. The crack is shown in the photograph presented in Fig. 3. It should be noted that the crack on the other side of the beam is nearly identical to the one shown in Fig. 3.
The builder's report attributed the crack to a temperature drop of 32 °C around the cracked beam, arguing that this variation must have induced a high tensile force on the cracked side of the beam, assumed to be the most restrained. This blunt explanation of the crack overlooks two important considerations: (i) accurate modeling of the temperature variation entails accounting for its nonlinear, gradient nature, as advocated by several codes; and (ii) the effect of temperature variation on a beam that has already developed minute hairline cracks due to gravity loading is expected to be very small, since these cracks relieve the stress that the temperature variation would otherwise develop. In order to understand the possible reasons for the crack, three types of analysis were performed: (1) a 3-dimensional, linear finite element analysis using PLAXIS to investigate the response of the cracked beam under consideration together with the connected beams and columns, with solid elements used to represent the reinforced concrete beams and columns; (2) a 3-dimensional, linear frame analysis using SAP2000, which provided the internal forces in the beam and the column for design-check purposes under factored dead load, with an additional analysis for temperature variation to rule it out as a possible cause of the crack; and (3) a section analysis to compare the section capacity with the demand obtained from Step 2, covering resistance to flexure, torsion and shear in the beam as well as the column capacity. From these analyses, the following observations and results were obtained. The 3-dimensional, linear finite element analysis using PLAXIS showed clearly that the beam-column joint is the region most stressed in tension, indicating the importance of providing adequate reinforcement, with careful attention to curtailment of the reinforcement to ensure ample development length. The maximum flexural tensile stress at the beam-column joint is around 7.0 MPa at the top fibers under serviceability loads. The maximum vertical deflection of the cracked beam is around 12 mm at mid-span. According to Fig. 5, considerable out-of-plane deformation is noticed in the column, which can be related to the torsional stresses from the heavy cladding on the beam. Figs. 6 and 7 show the shear stress developed in the cracked beam on both sides; the maximum shear stress is around 3.5 MPa on both sides of the beam at the beam-column joint under serviceability loads. In order to estimate the bending moment values at different locations of the model, beam elements with minimal stiffness were added to the model to obtain their deformations without altering the overall stiffness of the model; the straining actions were then calculated after making the necessary adjustments for the actual stiffness. The bending moment diagrams and values are shown in Figs. 8 and 9.
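The 7.0 MPa flexural tensile stress reported above can be put in perspective against the cracking strength of the concrete. As a minimal check (the concrete grade is not reported here, so a typical cylinder strength of 30 MPa and normal-weight concrete, with lambda = 1, are assumed), the ACI 318 modulus of rupture gives
\[
f_r = 0.62\,\lambda\sqrt{f'_c} \approx 0.62\sqrt{30} \approx 3.4\ \text{MPa} \ll 7.0\ \text{MPa},
\]
so flexural cracking of the joint region under service loads alone is to be expected, independently of any temperature effect.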
The 3-dimensional, linear frame analysis using SAP2000 was conducted to find the factored design forces in the beam and column and to check their adequacy against the original design; an additional analysis was performed to check the design against temperature variation. The diagrams of bending moment, shear force, normal force and torsional moment due to factored dead load and temperature are shown in Figs. 10 and 11. According to the results, the straining actions developed from the factored dead load are much higher than those developed from the temperature change, especially the shear force, torsion and bending moment values, which are more responsible for the crack pattern than the normal forces. The maximum design straining actions in the beam under consideration due to factored dead load are 1672 kN.m of negative bending moment, 590 kN of shear force and 90 kN.m of torsional moment. The design axial force in the column is around 2000 kN, with design biaxial moments of 1430 kN.m and 17.2 kN.m about the y and x axes, respectively. The straining actions from the temperature-change case are 38.5 kN and 68.6 kN of shear force in the beam and column, respectively; 20.8 kN and 29.8 kN of axial force in the beam and column, respectively; and moments of 278 kN.m and 247 kN.m in the beam and column, respectively. In order to compare the section capacity with the demand obtained from the SAP2000 analysis, a section analysis was conducted according to the ACI 318-17 guidelines. For the flexural design, the top reinforcement needed at the cracked section is 4061 mm2, whereas the provided reinforcement of 6 # 20 plus 4 # 16 amounts to only 2662 mm2. The designer provided compression steel of 12 # 25; however, its contribution to resisting the tensile stresses at the top of the section, where the crack appeared, is not equivalent to that of tensile reinforcement. Additional bottom reinforcement continues from the adjacent beam but, because of that beam's smaller depth, this added reinforcement lies close to the neutral axis of the beam in question, so its contribution to the flexural capacity is minimal. Please refer to Appendix A for the flexural design of the beam according to ACI 318-17. For the shear design of the cracked beam under the combined shear force and torsional moment, it was shown that vertical stirrups of 12 mm diameter spaced at 100 mm, as well as longitudinal reinforcement of 6 # 20, are needed. The provided stirrups close to the column are # 10 at 100 mm, and no added longitudinal reinforcement was provided on either side, which reveals the inadequacy of the shear and torsion reinforcement. Please refer to Appendix B for the beam design under shear and torsion according to ACI 318-17. As for the column capacity, Fig. 12 shows the adequacy of the column through its interaction diagram; the column has considerably higher capacity than the straining actions developed in it. From the aforementioned analysis and observations, it is clear that the column capacity is adequate compared with the calculated internal forces, whereas the top beam reinforcement at the crack location is inadequate for resisting the calculated bending moment and the shear and torsion reinforcement of the beam is inadequate compared with the corresponding calculated forces.
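For reference, the flexural comparison above follows the standard ACI 318 rectangular-stress-block check for a singly reinforced section. The expressions below are generic; the concrete strength, steel yield strength and effective depth used by the designer are not reported in the text, so no numerical result is re-derived here:
\[
a = \frac{A_s f_y}{0.85 f'_c\, b}, \qquad \phi M_n = \phi\, A_s f_y\!\left(d - \frac{a}{2}\right) \ \ge\ M_u,
\]
with b = 300 mm and M_u = 1672 kN.m in this case. The required steel area is the smallest A_s satisfying the inequality, and the shortfall of the provided 2662 mm2 relative to the required 4061 mm2 follows from this check for the designer's assumed f'_c, f_y and d.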
The crack presented in Fig. 3 is typical of flexural cracks. Accordingly, considering the shape of the crack, the inadequacy of the top beam reinforcement, and the appearance of the crack subsequent to erection of the cladding, it is clear that the crack was caused by the flexure induced by the cladding. Overloading of the beam during installation may have caused additional flexure, and the extent and direction of the crack propagation may have been exacerbated by the torsion produced by the cladding, whose center of gravity is offset from the centerline of the beam. Based on the analysis and these conclusions, it was suggested to retrofit the beam-column joint with high-strength composite laminate strips containing 68% carbon fiber and 32% binder. The reinforced concrete surface was first cleaned and dried to ensure that it was free from foreign material, contaminants, oil, grease and other debris, and the crack was injected with a low-viscosity epoxy resin. After the surface preparation, epoxy adhesive was applied at a suitable rate. The strips were pre-cut to the desired sizes and their surfaces roughened with sandpaper for good adhesion. The adhesive was placed onto the concrete surfaces and a roller was used to press the strips into the adhesive until adhesive was forced out on both sides of the strips; the surplus adhesive was removed on both sides before it hardened. The strips and adhesive were left for 24 h prior to overcoating. Fig. 13 shows the strips and adhesive during the strengthening of the cracked beam. In summary, a reinforced concrete beam in a 3-story office building was observed to have cracked during construction, after installation of the cladding. The beam is cracked critically at its top connection with an interior column; it has a clear span of 16.5 m and dimensions of 300 mm width and 1500 mm depth. The builder submitted a report through his consultant advising that the crack is non-structural and does not pose any risk to the building. The crack was investigated by visual inspection, detailed finite element analyses and section capacity calculations. Contrary to the builder's diagnosis, the analysis has shown that the crack is structural and was caused by the inadequacy of the beam section to resist flexure at its top connection with the column. It was recommended that the beam be retrofitted and its capacity increased in both flexure and shear, although the latter is less critical, and it was suggested to strengthen the beam-column joint with high-strength composite laminate strips. The following conclusions are obtained: (1) cladding weight should be carefully considered in the design, and its offset distance from the beam should be included when calculating its torsional effect; (2) a change in the cross section of a beam at its intersection with the column should be avoided unless really needed, because it affects the rigidity of the joint; if used, careful design detailing is required, especially for the flexural analysis and the development length of the flexural bars; and (3) the beam-column joint is usually subjected to high combined flexural, shear and torsional stresses, and any careless design may result in severe structural cracks.
This paper presents the analysis and repair of a cracked reinforced concrete beam in a 3-story office building in Riyadh, KSA, during its construction and near completion. In October 2015, a reinforced concrete beam in this building, with a cross section of 300 mm x 1500 mm and a clear span of 16.5 m, cracked at the connection with one of its supporting columns, and the crack propagated on both sides of the beam. To investigate the main reason for the cracking, a site visit was conducted to visually inspect the cracked beam and the connecting structural elements. A detailed analysis of the beam was then carried out using PLAXIS and SAP2000 to investigate the stresses developed in the beam, identify the most stressed regions and check the adequacy of the design. The analysis revealed the inadequacy of the flexural resistance of the beam as well as of its shear and torsion capacity; the main cause of the crack is underestimation of the cladding weight. Based on the results, a repair methodology was selected using CFRP sheets to increase the flexural capacity of the beam section, with enhancement of its torsional and shear carrying capacity to meet the design demand.
758
Progress and remaining challenges in high-throughput volume electron microscopy
Only 30 years after their development began in earnest, self-driving cars, controlled by sophisticated artificial intelligence algorithms, now cause serious accidents at a rate that roughly matches that seen for human drivers, at least according to a manufacturer's press release. Natural intelligence depends on the brain, with algorithms that are encoded to some, possibly overwhelming, degree in the connection pattern between neurons. Those connections are made by synapses but depend on neural wires for their link to the cell soma. When trying to reconstruct neural wiring from volume electron microscopy (VEM) data, a computer using the best current AI algorithms makes errors at about the rate of a moderately motivated human. However, most human tracing errors are due to inattention and are thus uncorrelated, which means that they can be corrected by redundant tracing. Different computer algorithms, on the other hand, tend to fail at the same places in the data set, where disambiguation typically requires context and high-level knowledge that currently only human experts can provide. For VEM data of sufficient quality, highly motivated and knowledgeable humans, such as late-term graduate students and postdocs, can consistently correct almost all of those errors. We can, therefore, assume that such data contain all the information needed to extract the neural circuit diagram, and we can expect that algorithms will eventually be able to reconstruct at a human expert's error rate. For the reconstruction of axons and dendrites, VEM-based connectomics has in the past relied almost exclusively on a hybrid approach: first, the computer creates a candidate segmentation with parameters set to ensure that at most a few of the generated segments span more than one real neurite, but each neurite is still broken up into many supervoxels. This situation is called an over-segmentation, and human proofreading can be performed by simply inspecting each segment for locations where it should be combined with one or more of its neighbors. A different approach to proofreading is to generate center-line tracings (skeletons), one for each neurite; in a separate step, each skeleton is then used to combine overlapping supervoxels into a volume model of the corresponding neurite. Both approaches yield similar results and consume about the same amount of human time, with a possible advantage for skeletonization as long as the supervoxels are small. This is not too surprising, since both approaches require that all locations in the VEM data set are viewed by one or more humans. Note that proofreading is nonetheless orders of magnitude faster than the rate at which segmentation proceeded in the case of C. elegans, where neither acquisition nor analysis used computers.
To move beyond the complete-viewing limit one has to stop proofreading decisions that the computer made on the basis of clear evidence and instead concentrate on decisions that could easily have gone the other way. Confidence information, which may have to be generated separately, is needed to steer human inspection to ambiguous locations. This has, in some cases, reduced the total proofreading effort needed to reach a given reconstruction reliability by orders of magnitude. In addition to an increased use of targeted proofreading, there has been a recent surge in the efficacy of machine segmentation itself. It is likely that this has to do with a renewed focus on the key step in segmentation: the detection of cell boundaries. Boundary locations can sometimes be identified simply by testing whether the voxel intensity is above or below a certain threshold. When that fails, one can, in addition, consider the staining pattern surrounding the voxel in question, a task well suited for machine learning based on convolutional neural networks. Finally, one can ask whether making the currently considered voxel a boundary voxel makes the local boundary shape more plausible or less so. That is what any human annotator's visual system does after having seen some amount of data from neural tissue. Expert annotators, in addition, draw on neurobiological knowledge to navigate difficult regions of neuropil. Similarly, algorithms that agglomerate supervoxels make decisions on whether the resulting shape is more or less plausible on a medium length-scale. The required shape priors can either be designed using explicit knowledge about neurite shapes or trained using the shapes of actual neurites. Rather than considering one decision after another, one can consider the combined effect of multiple merge decisions to select a globally optimal set of mergers. Now, back to single-voxel classification. Flood-filling networks, which are currently leading in segmentation performance, use an iterative voxel-classification process: first, using only the image intensities, an initial estimate for each voxel's probability of being part of the current neurite is generated. A feedback path makes the previous iteration's 'in'-probability part of the classifier input, automatically incorporating implicit shape priors into the primary voxel classification process.
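To make the feedback idea concrete, the toy sketch below grows a segmentation from a seed by repeatedly feeding the current probability map back into a simple voxel classifier. It is purely illustrative: the hand-tuned logistic rule, the weights and the assumption that bright voxels lie inside the neurite are ours, not part of the published flood-filling network architecture, which uses a trained convolutional network in place of this rule.

# Illustrative flood-filling-style iteration (NumPy only), not the published implementation.
import numpy as np

def neighbour_mean(p):
    # Mean of the 6-connected neighbours of every voxel (periodic wrap-around for simplicity).
    acc = np.zeros_like(p)
    for axis in (0, 1, 2):
        for shift in (-1, 1):
            acc += np.roll(p, shift, axis=axis)
    return acc / 6.0

def flood_fill_sketch(image, seed, n_iter=20, w_img=4.0, w_prev=6.0, bias=-5.0):
    """image: 3-D array of intensities scaled to [0, 1] (high = inside, an assumption);
    seed: (z, y, x) voxel assumed to lie inside the object of interest."""
    prob = np.zeros_like(image, dtype=float)
    prob[seed] = 1.0
    for _ in range(n_iter):
        # Feedback: the previous iteration's 'in'-probabilities (via their local mean)
        # are part of the classifier input, alongside the raw image.
        logit = bias + w_img * image + w_prev * neighbour_mean(prob)
        prob = 1.0 / (1.0 + np.exp(-logit))
        prob[seed] = 1.0          # keep the seed clamped to 'in'
    return prob > 0.5             # final hard segmentation mask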
"A feedback path makes the previous iteration's ‘in’-probability part of the classifier input, automatically incorporating implicit shape priors into the primary voxel classification process.Before we can start analyzing data we have to generate them.High-throughput VEM remains the state-of-the-art for resolving nanometer-sized structures, the development of sub-diffraction light microscopy and of methods that expand the tissue before imaging notwithstanding.During the last decade, four VEM techniques were used for most connectomic data sets.Two of those techniques are based on serial block-face electron microscopy.SBEM using a diamond knife was originally introduced more than three decades ago , but became an effective tool about a decade and a half ago when it was shown that it can be used on nervous tissue .Since then SBEM data have been used to reconstruct neural circuits on a large scale.A variant of SBEM was introduced that employs a focused ion beam for the periodic removal of material from the block face .With FIB-SBEM the resolution is improved in all directions.Block-face increments as low as 2 nm have been demonstrated .The usable lateral resolution is almost on par with what can be achieved with TEM , because FIB-based material removal tolerates a much larger electron exposure than DiK-SBEM .The one big drawback of FIB-SBEM is that the field-of-view is limited to a few tens of microns along the axis of the ion beam .One of the most exciting recent advances in VEM was, therefore, the demonstration that epoxy-embedded heavy metal-stained brain-tissue samples can be cut — with only minimal loss of material at the interface — into slabs between 10 and 30 μm thick using a lubricated and heated diamond knife .By combining the hot-knife slab-cutting,technique with FIB-SBEM, a method,is created that should have no limit on sample size and at the same time provides data that are of sufficient resolution and SNR to allow virtually error-free automatic neural circuit extraction .It would take thousands of years to acquire FIB-SBEM data for a whole mouse brain with a single single-beam microscope, but if one takes advantage of the fact that the brain would be cut into hundreds or thousands of slabs, which can be imaged in parallel, the wall-clock time required can be reduced by simply using multiple machines, limited mainly by the available budget.Setting budget issues aside for the moment, we could ask: Has any of the four approaches to VEM discussed by Briggman and Bock in their 2012 paper emerged as the winner and is on the verge of making other approaches obsolete?,Posing the question in a different way: if one were to embark on a large-scale effort to reconstruct one whole mouse brain now, is it time to commit to one of these approaches?,Based only on its track records, serial-section TEM should be the method of choice.SSTEM was used for all currently available whole-brain datasets: C. 
Setting budget issues aside for the moment, we could ask: has any of the four approaches to VEM discussed by Briggman and Bock in their 2012 paper emerged as the winner, on the verge of making the other approaches obsolete? Posing the question in a different way: if one were to embark on a large-scale effort to reconstruct one whole mouse brain now, is it time to commit to one of these approaches? Based only on its track record, serial-section TEM (ssTEM) should be the method of choice. ssTEM was used for all currently available whole-brain datasets: C. elegans with 302 neurons, the adult fruit fly with approximately 250 000 neurons, and the larval fly nervous system with about 10 000 neurons. ssTEM data are, however, more difficult to analyze automatically, likely due to the highly anisotropic resolution and to slice-to-slice jitter; compare, for example, reconstructions of the fly visual system based on ssTEM and FIB-SBEM data, respectively. The FOV for TEM imaging needs to be contained within a single grid opening, across which the section has to be stably suspended and which cannot exceed the standard TEM grid's 3 mm outer diameter. This is too small for slices across a whole bird brain, which can be as large as 13 mm × 10 mm. But, as it did for FIB-SBEM, the hot knife may come to the rescue. One reason for the continued popularity of TEM is that wide-field imaging is inherently parallel, allowing net imaging rates of about 50 MHz using a TEM that is read out with a camera array. Does the automatic collection of ultrathin sections on tape (ATUM), currently increasing in use, have a future in connectomics? The resolution along the z-direction could be improved by acquiring multiple images at different tilt angles or different landing energies; the missing information along the z-axis would then be recovered by tomographic reconstruction or multi-energy deconvolution, respectively. The ATUM's imaging speed depends on the scan rate of the SEM that is used, which for a single-beam instrument is far behind that of TEMCA. However, preparing the incoming electron beam in a way that results in a hexagonal array of high-resolution foci allows acquisition from all these foci in parallel, as long as each focus is far enough from its closest neighbor to ensure separate secondary-electron detection. Such a multi-beam scanning electron microscope (MSEM) with up to 91 beams has been designed and is being sold by Carl Zeiss Microscopy. So far, the MSEM has been used successfully on sections collected on a solid substrate using ATUM. An attempt by one of us to combine MSEM and DiK-SBEM has run into unexpected difficulties, and a plausible idea for combining MSEM and FIB milling while maintaining a high overall image-acquisition throughput has not appeared, known difficulties being the slow speed of material removal and the need of the MSEM for a planar sample surface well beyond the border of the imaged area. The emergence of a mode of knife-less material removal that has a large FOV and leaves a smooth sample surface, gas cluster ion beam milling, may be a way out. We have taken the value of reconstructing circuit diagrams as self-evident, needing neither introduction nor proof. After more than a decade of neo-connectomics, are we still mostly building the tools that will ultimately get us the all-powerful whole-brain data set? Or are there already key insights of fundamental importance that could not have been gained without the use of connectomics? We think that, while there are some notable biological results, we still need to remind ourselves of Amara's law, which tells us that the impact of novel technologies is almost always overestimated in the short term but underestimated in the long term. We are still confident that eventually the reading of memories and the discovery of new algorithmic principles will become the daily routine of connectomics.
Recent advances in the effectiveness of the automatic extraction of neural circuits from volume electron microscopy data have made us more optimistic that the goal of reconstructing the nervous system of an entire adult mammal (or bird) brain can be achieved in the next decade. The progress on the data analysis side — based mostly on variants of convolutional neural networks — has been particularly impressive, but improvements in the quality and spatial extent of published VEM datasets are substantial. Methodologically, the combination of hot-knife sample partitioning and ion milling stands out as a conceptual advance while the multi-beam scanning electron microscope promises to remove the data-acquisition bottleneck.
759
Occupied with classification: Which occupational classification scheme better predicts health outcomes?
Health research has looked at a variety of domains of inequality. One of these, work, has been generally neglected, although the relationship between work and health has increasingly been highlighted, particularly in terms of its psychosocial conditions. The workplace psychosocial environment is generally thought to be a consequence of employment relations rather than solely of external social determinants of health. It is not only the work itself or the technologies around it, but the structure and order of the workplace that may create both physical and mental health effects. Psychosocial hazards in the workplace have often been considered separately from physical ones, when they may be related to one another, similar to the interconnected 'whole person' view of physical and mental health. However, results from research on employment conditions and health can be inconsistent. For example, nonstandard labour contracts were not associated with adverse health effects by Scott-Marshall and Tompa, but Benach and Muntaner report that those in insecure jobs have higher self-reported morbidity. Differences in results can be attributed to the diversity of the outcomes researched in terms of the forms of employment examined, the composition of the sample, which health measures are included, and the location or context of the work. Indeed, Hoven and Siegrist review mediation and moderation studies on adverse working conditions and health outcomes, noting that studies feature "a high degree of heterogeneity of core measurements." This is exacerbated because the differences between some of the measurements and contexts used in studies may be unclear, and new forms of work, such as flexible employment, can be difficult to classify, particularly given terminologies that may be unclear; temporary, non-permanent, precarious, non-standard, insecure, contingent, fixed-term, atypical, casual, and unregulated represent similar concepts in various studies. A variety of perspectives have sought to address inconsistency in this field: two of the most commonly used schemata of the work-health relationship are the job strain or job-demand-control model and the effort-reward imbalance model. Benach and Muntaner suggest, though, that these frameworks may not be able to incorporate "more distal social and organizational determinants of health." This paper will bring new perspectives to bear on issues with these conceptual models, in order to take into account structural and social inequalities, geographical context, and time. Looking towards the life-course approach and through the lens of exposure, a framework linking concepts in epidemiology, occupational health, and inequalities research has been developed – the worksome. Health research can sometimes confound occupation and class, often by using either as a proxy for the other. That is to say, occupational classifications are sometimes used as class and vice versa, but class is more complex than just occupation, and this is reflected in how class classifications are created. Occupation is a component of class, and occupation is simply the job or work someone does; class is, in essence, a hierarchical measure of socioeconomic positioning and, on a higher scale, a measure of social structure. Savage asserts that occupationally based measures of class are "actually a way of making for cultural judgements about the ranking and social importance of jobs." MacDonald et al., in a review of epidemiological studies, found that while many collected occupational measures, most work
used these data to represent socioeconomic class inappropriately. Class is rarely suitable for examining the work-health relationship, as it has historically been articulated in a variety of ways. Class contains an implied hierarchy, which already imposes a relationship that may be unsuitable and inappropriate; further, with class it is difficult to understand axial differences. Using occupation as a proxy for class, or vice versa, can mask the nuances between or within occupations with respect to working conditions and exposures. Occupation can indeed be articulated as part of a class definition, but it is not simply a component of it; it can be a social determinant of health in its own right. While work such as the Whitehall studies has created the basis for examining the relationship between work and health, it is important to remove implied hierarchies or grades from occupation to discover further information about these relationships, in addition to the evidence on social class gradients of health. There has been a vast array of research into employment status, or grade, and health, but with a changing world of work and employment relations, precarious or 'flexible' conditions are filtering through to jobs where they would have been inconceivable before. Socioeconomic characteristics like class may interact with flexible working conditions vis-à-vis health outcomes, but given the percolation of these conditions to other jobs, refined occupational categories, which change less over time, may be more appropriate. Daykin argues that changing patterns of employment, generally thought to be a consequence of the late 20th-century neoliberal shift, are reflected in new patterns of the production and distribution of risk and hazard, namely the transfer of the costs and risks of employment from the employer to the employee. Flexibility is found not just in the technical systems of work but also in more abstract elements thereof, such as tasks, status, and scheduling, and it has filtered through to work where such conditions may once have been thought unthinkable. In general, work has also been intensified, with pressure to do the same or more work in less time, or to expand tasks and expectations beyond what they were before. These conditions are not equally distributed amongst occupations, and perhaps not even amongst individuals, so it follows that inequalities in health should also be examined occupationally. Clougherty et al.
assert that "occupational classifications used in many epidemiological studies have proven too coarse to capture fine-scale status differences". Sometimes, for example, there is a lack of clarity: Hallerod and Gustafsson argue that occupations can be used to create 'economical classes,' but occupation only forms part of these classes. Moreover, Hallerod and Gustafsson use 'occupational position', 'employment position', 'economical classes,' and 'social classes' almost interchangeably, possibly based on that argument, which can lead to some confusion when interpreting results. The UK NS-SEC, for example, is generated with the UK version of the ISCO 2008, the SOC2010, but it contains other inputs relating to status. Corna and Sacker, for instance, convert from the SOC1990 to the NS-SEC to assign 'occupational class,' and refer to it as such, even though the SOC codes are themselves an occupational classification. The European social class measure for the European Social Survey is composed in a very similar fashion from multiple items, including occupation. Almeida et al. claim that "class structures significantly mark the value patterns found in the populations analysed." We assert, then, that using class can also limit the transferability of results due to variation in contexts. Occupations, while socially mediated, are not, like class, socially defined, and are more readily conceptually transferable between contexts. Different occupations are associated with varying conditions, risks, prospects, and outcomes, and these are not given across or even within occupations; there is thus considerable heterogeneity within and between occupations. Therefore, the relationship between working conditions and health should be analysed with respect to this heterogeneity, looking both between and within occupations. As such, the hierarchy implicit in class classifications may confound the examination of these already complex relationships. To that end, this paper will examine several classification systems empirically, namely the ISCO as an occupational system, the NACE as an economic/industrial system, and the NS-SEC as a socioeconomic class system commonly used in the literature, to provide a basis for using occupational classifications over socio-economic class ones by determining which classification has the best predictive accuracy with respect to a range of health measures from a specifically occupational dataset. Further, it will argue that finer-scale versions of these classification systems perform better in general, even when parsimony is considered. This will also advance the worksome framework by empirically demonstrating that occupational classifications are the most appropriate for work-health research, whereas class is often used to proxy occupation or vice versa, which is not always the right approach. There is therefore a need to bridge what people are exposed to and what people say or believe they are exposed to, including social exposures. This paper will introduce the worksome in order to provide a framework for justifying the use of occupational classifications over class, and the importance of occupation as a social determinant of health. The worksome will be underpinned empirically by an examination of occupational, social, and economic classification systems. The worksome is an expansion of the exposome. The exposome was developed by Wild in response to the sequencing of the human genome, and to incorporate the life-course approach to exposure into epidemiology. The exposome includes three
separate-but-overlapping domains, the internal, specific external, and general external whilst also capturing both nature and nurture.This sort of life-course approach is appropriate for work as it accounts for a large proportion of time in a life-course, and it can impact how lives are lived outside the workplace.Working consumes a large part of any life course, regardless of whether it is formal or informal.The general external elements, like work, of the framework are, in the general version of the exposome assumed rather than measured, as work with the exposome is predominantly top-down, focusing on physically measurable exposures.The exposome has been adapted for health inequality research, notably by Juarez et al. who created ‘the public health exposome,’ which focuses primarily on environmental health.Research creating various types of exposome, for instance see the exposomics project, the public health exposome, and the occupational exposome, focuses on the use or adaption of the exposome more with respect to biological analyses and issues which may arise thereof, without realising that other approaches using survey data may also be suitable under the paradigm.The worksome expands on the idea of exposure to include a social-physical gradient.It is necessary to consider work explicitly to draw out lower-level scale exposures, vectors, and effects.The worksome emphasises the importance of the scale of exposure and the interactions both within and between scales.Scale, used here in the sense of ‘level’, can include individuals, work groups, firms, industries, and so on, with other geographic and contextual factors existing at the same or different levels, such as the workplace, the city, or the regulatory regime at varying levels of government.This does not mean that scales are rigid.Delaney and Leitner, argue that scale is often constructed, and so the worksome takes scale as a fluid, interactive concept of levels, while keeping in mind that scale is often socially and politically mediated.The physical-social aspects of exposure are represented by the social gradient linking the physical to the geocontextual and the workplace, in order to encompass largely physical exposures, predominantly social exposures, and exposures which are inherently both physical and social and fall between the extremes, such as working time.Working time is both; as a basic concept, it is physical: the time spent exerting oneself at work, but it too is social, in the sense that it is also the time spent being exposed to a variety of working conditions.Social exposures have a certain intangibility to them, something which is emphasised in the social-physical gradient of the worksome, though it is an exposure type not emphasised by the exposome.A social-physical gradient of exposure allows for flexibility in analysis as it provides a framework within the worksome for disparate and similar-but-different measures of exposure to be compared.Moreover, individual-level exposures and workplace level exposures interact: individuals within a workplace are affected and effect upon workplace-level characteristics.Individuals, therefore, cannot be considered solely as discrete entities with respect to the work-health relationship.There are also factors above the individual and the workplace.Workplaces are also located within geographical contexts, be it in relation to other firms, related industries, as well as in social and regulatory contexts.Geocontextual influences are an undercurrent and require consideration in 
work-health research.Time is also considered in the worksome – exposures continue across the life course.Again, interactions within and between all of these domains must be emphasised – people exist at multiple scales simultaneously: ‘echoes’ of past actions or consequences are reflected in these interactions as well."A given individual's contribution can prevail and the residual impacts remain with people for a long time after the initial exposure, as well as influencing their and others' behaviours.By including the interactions between scales, individuals, times, and geographies in the worksome, we further our understanding of the complexities of this landscape."As work too consumes a large part of any given individual's life, the life course approach is key to understanding work as a social determinant of health.With respect especially to time, the life trajectory approach allows the worksome to also cover those who are unemployed or engaged in informal work.The former is incorporated as they move in and out of the workforce.The latter is encompassed as the worksome does not distinguish between formal and informal work, in the sense that they are both considered equally under the framework.Indeed, there are a number of papers examining life trajectories and career typologies with respect to occupational mobility, for example, and these approaches, often using sequence analysis or latent class analysis, can and should be emulated in work that examines the relationships between working conditions and health.Movement between occupation types, such as from manufacturing to the low-paid service sector, has been connected with poorer health using these approaches.Employing latent class models, Corna and Sacker modelled the lifetimes of older British adults, particularly around the labour market and family experiences, finding significant differences in the mental health domain.The worksome is useful over the exposome as it adds specificity and interaction between the domains, has a social-physical exposure gradient, and emphasises scale more strongly.Both qualitative and quantitative forms of research are key to forming a better picture of the work-health relationship.Within the quantitative approaches multilevel models can be used to approximate the proposed structures.For qualitative research, the effects people have on systems and scales and how they are affected by them could be elucidated through interviews, or participatory work where the participants guide the research journey.Using the language of biomedical epidemiology is key to this approach; the goal is to not only forward a more clear and comparable set of social research projects but also to develop clearer research findings for policymakers and other scientists.The worksome makes explicit the elements that the exposome treats as givens, allowing for the use of language familiar to policymakers while including effects that may not be considered explicitly in the biomedical approach.This framework can help fit disparate pieces of research together and contextualise them to form a wider collective of research.Flexibility is important, as for research involving people, a complete body of research is impossible as society is constantly changing, so gaps in research are to be expected, and can be filled.For the empirical portion of this paper, the objective is to distinguish work, or occupation, from class, and to set out which system of classification is most appropriate for use in quantitative analysis.This will advance the 
argument that occupation and class should be examined separately, as well as supporting the usefulness of the worksome in underpinning work-health research. The analysis uses the European Working Conditions Survey (EWCS) to see which classification system has the best predictive accuracy for a variety of health measures, including backache, self-rated health, and fatigue. The EWCS is a repeated cross-sectional quinquennial survey started in 1991 and administered by the European Foundation for the Improvement of Living and Working Conditions for the European Union. Waves were conducted in 1991, 1995/6, 2000/1, 2005, 2010, and 2015; this paper uses data from the 2010 and 2015 waves, due to the presence of occupational classification variables. All EU countries and European Economic Area countries were included, with a number of EU candidate members in some waves, so not all countries appear in all waves. The target sample in each country was between 500 and 1500 individuals, and the EWCS data were obtained from the UK Data Service. The EWCS classifies individuals both by the Statistical Classification of Economic Activities in the European Community (NACE) and by the International Standard Classification of Occupations (ISCO); the NACE is an industry classification, and the ISCO an occupational one. The National Statistics Socio-Economic Classification (NS-SEC) is a British system of socio-economic classification based on the UK occupational classification system (SOC), employment status, and firm size. The SOC2010 was derived from the ISCO 2008 2-digit version, and the employment status and firm size variables were derived from survey questions in the EWCS. A new dataset was created with the occupational classifications and the relevant health measure variables for 2010 and 2015; only these waves were included, as the expanded 2-digit ISCO and NACE classifications were available only in those waves. In the analysis the data for 2010 and 2015 are combined, since this study is not concerned with change and pooling provides a larger sample size to detect meaningful effects; the classifications and outcome variables were consistent over this relatively short period. The health measure variables were dichotomised in order to fit the logistic regression models; the original responses were 'Yes, positively', 'Yes, negatively', and 'No'. Manor et al. found in their analysis of self-rated health that dichotomised and ordered categorical models showed similar results, with only small differences in power and efficiency. The health measure variables are also self-reported, which may not be ideal. Miilunpalo et al. assessed subjective measures of health and found that, in relation to objective health measures, they are valid for use in population health research; they also argued that perceived health measures are stable, owing to a small rate of major changes in that status. Burstrom and Fredlund found a strong relationship between poor self-rated health and mortality, implying that self-rated health is a suitable predictor of mortality and therefore 'a useful outcome measure.' DeSalvo et al.
found that, compared with multi-item measures of self-reported health and comorbidity, a single-item measure is as good at prediction. It is therefore acceptable to use the health measures collected in the EWCS, to examine them in relation to working conditions or, in the case of this paper, to see which classification system better predicts them. Given the argued importance of time for understanding the worksome, specific health problems were defined to capture issues that had occurred within the last year, while the general measures were defined contemporaneously, at the time the questionnaire was answered. Sixty logistic regression models were run using MLWiN 3.01; logistic regression is used here as the measures are binary. Separate models were run for each health measure as the outcome variable, with each classification system in turn as the explanatory variable, using a Markov Chain Monte Carlo (MCMC) Bayesian framework. This provides a Deviance Information Criterion (DIC), a Bayesian counterpart of the Akaike information criterion. The DIC is a measure of predictive accuracy: the badness of fit between the observed and modelled measures, penalised for model complexity. The number of categories in any given system should not be a factor in determining which model has the best predictive accuracy, as the DIC operates by estimating model complexity and automatically penalises models that do not show an improvement in badness of fit over and above their complexity; that is, the DIC privileges parsimony. As such it is an ideal procedure for comparing models with different specifications involving different classifications. The DIC can be compared within the same health measure, but not between health measures; for example, the DIC for the NS-SEC for skin problems cannot be compared with the DIC for backache for the ISCO 1-digit system. In terms of the specifics of MCMC estimation we followed the good-practice recommendations of Draper: we use a likelihood approach to estimate an initial model, specify default priors to impose as little information as possible on the estimates, use a burn-in of 500 simulations to move away from these initial estimates, and use a monitoring chain of a further 5000 simulations to characterise the parameter estimates and calculate the DIC.
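For readers unfamiliar with the criterion, the sketch below shows how a DIC is assembled from MCMC output for a Bernoulli (logistic) model: DIC = Dbar + pD, where Dbar is the mean posterior deviance and pD = Dbar - D(theta_bar) is the effective number of parameters. The function names and the idea of passing in a matrix of posterior draws are our own illustration, not MLWiN's interface; the classification schemes would enter the design matrix X as dummy-coded columns.

# Illustrative DIC calculation for a logistic model from posterior draws (assumed interface).
import numpy as np

def bernoulli_deviance(beta, X, y):
    # D(beta) = -2 * log-likelihood of the logistic model with logit link
    eta = X @ beta
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))
    return -2.0 * loglik

def dic(beta_samples, X, y):
    """beta_samples: (n_samples, n_params) array of posterior draws,
    e.g. the 5000 monitored MCMC iterations after burn-in."""
    deviances = np.array([bernoulli_deviance(b, X, y) for b in beta_samples])
    d_bar = deviances.mean()                              # mean posterior deviance
    d_at_mean = bernoulli_deviance(beta_samples.mean(axis=0), X, y)
    p_d = d_bar - d_at_mean                               # effective number of parameters
    return d_bar + p_d                                    # DIC = Dbar + pD

Models with a lower DIC are preferred within a given health measure, which is exactly how the 60 models are compared below.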
Fig. 2 presents the results of all 60 models, by question (health measure) and classification scheme. Only the DIC for each measure/classification pair is reported. For each outcome, the individual classification models are sorted by DIC, so that the classification system with the best penalised fit is on the left, and the colour on the graph is consistent for each system. The y-axes of the graphs differ because of the varying measures, as discussed earlier, but the comparison of classification systems should be made within a measure rather than between measures; it is not the specific value of the DIC which is important, but which system has the lowest DIC within a measure. The ISCO 2-digit schema best predicts whether an individual's work may affect their health. Indeed, the ISCO 2008 2-digit classification has the highest predictive accuracy for all health measures across the data, not only for those questions which referred to the work-health relationship specifically. The 2-digit NACE classification outperformed the 1-digit ISCO 2008 for some outcomes, though for self-rated health, backache, lower muscular pain, upper muscular pain, and injury it was surpassed by the 1-digit ISCO; the 1-digit ISCO, therefore, did not always perform as consistently as the 2-digit version of the classification. The NS-SEC in this study borrows some predictive power from the ISCO 2-digit classification in this dataset, as it is partially derived from it, and this may be why the NS-SEC showed higher predictive accuracy than both the 1- and 2-digit NACE classifications for backache and lower muscular pain, as well as than the 1-digit NACE for upper muscular pain and injury. The NS-SEC also had somewhat higher predictive accuracy than the 1-digit ISCO and 1-digit NACE for fatigue. It seems, then, that the NS-SEC may be slightly better than the NACE at predicting measures relating to general or muscular health. Nonetheless, the ISCO 2-digit classification remains the most empirically appropriate for predicting health measures in the EWCS dataset, as it had the lowest DIC for all health measures. Theoretically, this indicates that work should be considered separately from class when examining health measures, and that the worksome is an appropriate model for enquiry into this relationship. There is a clear need to focus both theoretically and empirically on work and occupation in and of itself rather than as a component of class or a feature that can be proxied by class. While many socio-economic classification systems, like the NS-SEC, do use occupation as their base, they are not a ready substitute for occupational classifications themselves. Furthermore, class contains an implied hierarchy, something which may confound results, as it is a hierarchical system of social or cultural value partially based in occupation. Social classification systems are informed by their social contexts, as the cultural value of occupations changes through time; for example, around a quarter of occupations in the UK Registrar General's Class Classification changed between classes from 1951 to 1961. A system with an implied hierarchy may not be appropriate for occupational research, particularly with a changing world of work where flexible or precarious conditions have filtered even to 'standard' occupations. Further, some occupational classification schemes are too simple or coarse to examine fine-scale detail in terms of health measures, and it has been shown that the 2-digit level of the ISCO 2008 performs better. This means, then, that
occupation, and therefore, the worksome, is conceptually valid as a separate and distinct social determinant of health.Theoretically, the expansion of the exposome into the worksome provides a framework for both qualitative and quantitative work.Empirically, the analysis in this paper has shown that for examining the health of workers, occupational classifications such as the ISCO are generally the most appropriate.The more detailed 2-digit level provides better predictive accuracy, whereas the 1-digit levels may be more practical for certain analyses and data collection practises.However, some issues remain with the 1-digit ISCO when it comes to predictive accuracy for certain health measures, where it is outperformed by the NACE 2-digit classification.In some cases, the NS-SEC did not have the least predictive accuracy compared to the other systems, primarily the NACE.One reason for this could be that the SOC2010, used to derive the NS-SEC, in the case of this data, was derived itself from the ISCO 2008 2-digit version, and therefore could have borrowed some statistical power from the ISCO 2008 2-digit.Another could be that the NACE is a classification of industries or economic activities rather than occupations and may not be completely suited to this sort of analysis.The NACE, though, is formed so as not to distinguish by the ownership, legality, modes of operation, or formality of economic activities.This may be nonetheless helpful, as the EMCONET research agenda includes non-standard forms of work beyond precarious or flexible work, including informal work and slavery.The worksome too allows for non-standard forms of work.The ISCO, for example, does not necessarily have provisions for these, so in those cases, the NACE may be more appropriate depending on the nature of the work.The ISCO 2008 2-digit version nonetheless does allow for the vast majority of occupations to be classified as it does not discriminate by conditions, so therefore flexible and modern working conditions can be accounted for as long as they are acknowledged explicitly in the study.For clarity in research, especially when interested in either class or occupation, it is necessary to separate out class and occupation as determinants of health.This rationale supports the use of the worksome, a conceptual framework developed in this paper, for the examination of the work-health relationship.The 2-digit ISCO 2008 occupational classification is the most appropriate when examining the relationship between work and health, compared with the NACE and NS-SEC.Therefore, there is also empirical justification for the use of the worksome as a framework, and examining occupation as a separate domain of health inequalities and as a separate determinant of health.With both empirical and theoretical justification, the worksome therefore can provide a transferable framework for research into work and health.Through its flexibility, it can accommodate research from a variety of scales and contexts, allowing for the conceptual linking of disparate yet related studies."It is an expansion of a familiar concept, the exposome, and encompasses a life-course approach, as work is something which generally consumes a large part of any given individual's time.The exposome was explicitly chosen as a base, as its biomedical language and approach is well understood by policymakers.The worksome reorients the way in which the relationship between occupation and health is understood – as an interactive, multi-scalar framework of exposures set along 
a social-physical gradient.By integrating scales, times, individuals, and geographies and their interactions, the complexities of these relationships become clearer.Separating occupation from class, and justifying it empirically, is necessary to forward the worksome, as occupation is at its core.Again, as class is defined through sociocultural values, this makes it less suitable for examining the relationship between work and health, especially compared with more refined, less time-variant occupational classifications.This is not to say that there should be no research on class and health, but merely to allow for a more thorough and empirically appropriate interrogation of the complex relationship between work and health.
Health inequalities continue to grow despite continuous policy intervention. Work, one domain of health inequalities, is often included as a component of social class rather than as a determinant in its own right. Many social class classifications are derived from occupation types, but there are other components within them that mean they may not be useful as proxies for occupation. This paper develops the exposome, a life-course exposure model developed by Wild (2005), into the worksome, allowing for the explicit consideration of both physical and psychosocial exposures and effects derived from work and working conditions. The interactions between and within temporal and geographical scales are strongly emphasised, and the interwoven nature of both psychosocial and physical exposures is highlighted. Individuals within an occupational type can be both affected by and effect upon occupation level characteristics and health measures. By using the worksome, occupation types are separated from value-laden social classifications. This paper will empirically examine whether occupation better predicts health measures from the European Working Conditions Survey (EWCS). Logistic regression models using Bayesian MCMC estimation were run for each classification system, for each health measure. Health measures included, for example, whether the respondent felt their work affected their health, their self-rated health, pain in upper or lower limbs, and headaches. Using the Deviance Information Criterion (DIC), a measure of predictive accuracy penalised for model complexity, the models were assessed against one another. The DIC shows empirically which classification system is most suitable for use in modelling. The 2-digit International Standard Classification of Occupations showed the best predictive accuracy for all measures. Therefore, examining the relationship between health and work should be done with classifications specific to occupation or industry rather than socio-economic class classifications. This justifies the worksome, allowing for a conceptual framework to link many forms of work-health research.
760
Scale-up challenges and opportunities for carbon capture by oxy-fuel circulating fluidized beds
The Fifth Assessment Report of the United Nations Intergovernmental Panel on Climate Change notes that human impact on the climate system is certain, and that recent anthropogenic emissions of greenhouse gases are the highest in recorded history, leading to global warming, sea level rises and more frequent weather-related disasters. In particular, CO2 emissions have risen sharply and are the main contributor to global warming. Fig. 1 shows the rise in anthropogenic CO2 emissions since 1850, indicating a dramatic increase since ca. 1950, which was due to the rapid global economic growth after the Second World War. It is now widely accepted that action against unabated emissions of GHGs must occur in order to minimize the damaging effects of climate change. The main methods for reducing CO2 emissions are: carbon capture and storage (CCS); utilization of fuels with a low C/H ratio, such as natural gas; improving energy efficiency, thus utilizing less fuel; and substituting fossil fuels with renewable or nuclear energy sources. Carbon capture and storage is a bridging technology permitting a smoother socio-economic shift from fossil fuels to renewable energy sources. The main capture technologies in CCS are post-combustion, pre-combustion and oxy-fuel combustion. Post-combustion CCS relies on removal of CO2 from the flue gases after combustion; however, since the concentration of CO2 in the exhaust gases is relatively low, the cost of CO2 separation is high and imposes efficiency penalties of ∼8–12 percentage points. In pre-combustion CCS the fuel is gasified or reformed, and the CO2 is then removed from the produced gas before it is combusted or further processed for another use. A significant benefit of pre-combustion CCS technologies is that the CO2 concentration is typically far greater than 20 vol% and thus the separation of CO2 is more economical than for post-combustion systems, with efficiency penalties of ∼7–9 percentage points. Finally, oxy-fuel CCS is the combustion of fuel in a mixture of almost pure O2 and recirculated flue gases, resulting in a flue gas consisting mainly of CO2 and steam, which makes the separation of CO2 relatively simple. An air separation unit (ASU) produces O2 cryogenically, and the choice of O2 purity significantly affects CO2 purity, plant capital cost and operating power consumption. Fig.
2 shows a generalized schematic of the oxy-fuel CCS process, detailing the ASU, combustion boiler and RFG lines.Oxy-fuel circulating fluidized beds for CCS typically have an efficiency penalty of ∼10 percentage points and, therefore, a new focus of current research in the oxy-fuel area is on pressurized oxy-fuel CCS technologies which should lower the intrinsic efficiency penalty.The inlet O2 concentration is key for scale-up of the oxy-fuel process.Utilizing a higher inlet O2 concentration while keeping the furnace geometry constant leads to higher thermal output per furnace cross-sectional area, improving the economics as it increases the process efficiency.For new facilities, however, it is more likely that a higher inlet O2 concentration allows a smaller boiler size, with reduced capital costs .It has been shown that higher inlet O2 concentrations also lead to lower CO emissions , improved desulphurization efficiency , and a reduced power requirement for the flue gas recycle blower .Nonetheless, higher inlet O2 concentration also lead to constraints in heat extraction from the boiler , an increased risk of bed agglomeration within the furnace , and if the O2 concentration increases beyond ∼27 vol%, a significant increase in piping costs due to the requirement for somewhat specialized materials.An important consequence of reduced boiler size when utilizing elevated O2 concentrations is that the solids inventory is smaller, which can result in a greater variation in the CFB bed temperature since the bed mass acts to dampen temperature changes due to its thermal capacity .In this context, it is important to note that the inlet and exit O2 concentrations are among the major control parameters in CFB oxy-fired boilers .Lappalainen et al. concluded that FGR leads to higher perturbations in the furnace O2 content due to the lack of control of the O2 supply from O2 injection, FGR and fuel O2.One requirement identified in controlling the O2 content within the furnace is the online measurement of the time-dependent O2 concentration in the RFG, and consequently the required adjusted flow of inlet O2 in order to meet the desired air/fuel ratio .Given the importance of inlet O2 concentration, the present work focusses on this parameter as a key variable.Information on oxy-fuel pulverized combustion boilers can be found in Scheffknecht et al. , Toftegaard et al. , Chen et al. , Wall et al. and Yin and Yan .Unfortunately, the differences between oxy-fuel CFB and oxy-fuel PC make them of less relevance to CFB operation.There are far fewer reviews on oxy-fuel CFB than on oxy-fuel PC.One example is by Mathekga et al. 
, and another is by Singh and Kumar focusing on the current status and experimental results from small oxy-FBC beds.However, the present review looks at the challenges and opportunities for process improvement through scale-up by reviewing the existing literature in oxy-fuel CFB modelling, heat transfer phenomena, fluid-bed dynamics, and pollutant emissions.The work provides a comprehensive assessment of the oxy-fuel fluidized bed scale-up process, not addressed in previous reviews.Given that oxy-fuel CFB is a major carbon capture route for large coal power plants, such information is of vital importance for furnace designers and fluidized bed research and development.In particular, the authors evaluate: how the available models developed for lab-, pilot- and large-scale oxy/air-fired CFBs can be adapted for modelling large-scale oxy-fuel CFB units; the major challenges and opportunities in the design and scale-up of utility-scale oxy-fuel CFB boilers; and the important parameters for designing large-scale oxy-fuel CFB boilers.Both experimental work and modelling simulations are pivotal in the design and scale-up of oxy-fuel technology.To date, no commercial-scale oxy-fuel CFB boiler has been built despite the technology currently having a Technology Readiness Level of 7–8 .The failure to build large-scale oxy-fuel plant arises because of a lack of effective carbon pricing, and an absence of government motivation to tackle climate change, and the scale and inherent interdependence of investors required to get such a project up and running .Furthermore, where CCS plants have been deployed, they have typically utilized post-combustion amine scrubbing systems because of their previous demonstration at scale and the larger efficiency penalty incurred by conventional oxy-fuel technology.As a result, the scale-up of novel oxy-fuel technologies such as pressurized oxy-fuel boilers is receiving attention.Table 1 shows the major experimental oxy-fuel CFB units and indicates the extensive research carried out to date.The largest tested oxy-fuel CFB unit so far is CIUDEN, an industrial-scale 30 MWth unit situated in northwest Spain .Other large-scale units in operation include: a 4 MWth oxy-fuel CFB demonstration plant at Valmet, Finland , an oxy-fuel CFB calciner utilized as part of a 1.7 MWth post-combustion calcium looping CCS plant at La Pereda, Spain , and a 0.8 MWth testing plant built at CanmetENERGY, Canada .These units have all undergone major experimental campaigns focussing on the different aspects of oxy-fuel CFB operation including combustion, emissions formation and heat transfer.Examples of the scale-up process for air-fired CFBs can be seen elsewhere: Glicksman , Glicksman et al. , Zlokarnik , Knowlton et al. , and Leckner et al. 
.A major difference between industrial-scale and smaller-scale units is the degree of lateral gas mixing, localized air/fuel ratios, and heat distribution within the furnace, which is influenced by the furnace width.For lab- and pilot-scale units the furnace width is typically less than 0.2 m, while industrial-scale units can be 1 m or more and, thus, gas mixing in lab-scale units is effectively perfect .The application of observed trends and correlations, and measured parameters obtained from lab- and pilot-scale units is often unsuitable for the design and operation predictions of large-scale units due to the greater complexity of larger systems over smaller systems .It is necessary to be cautious when assessing the degree to which data obtained from the smaller-scale units can be utilized for larger-scale units .Various research groups have developed oxy-fuel models to properly understand and analyze their experimental data.Thus, the modelling scope and results are largely dependent on the type and scale of the experimental setups and their corresponding data.In general, lab-scale FB units are 1D and thus their corresponding extracted models are more suitable for combustion chemistry.On the other hand, pilot-scale units give valuable sets of data in relation to both axial and lateral profiles leading to models capable of comprehensive 3D analysis of oxy-fuel FB processes.There are several active groups developing comprehensive mathematical models for oxy-fuel CFBs using data from experimental units, including: Myöhänen et al. ; Seddighi et al. ; and Krzywanski et al. .Another modelling approach utilizes commercial computational fluid dynamics codes modified for oxy-fuel CFB boilers, such as work by: Zhou et al. ; Adamczyk ; and Amoo .Table 2 presents the major modelling tools developed specifically for oxy-fuel CFB boilers.The design of large-scale oxy-fuel fluidized bed boilers has been of critical importance for minimizing the experimental costs of the scale-up process.Examples of the work on the design of oxy-fuel FB boilers can be found elsewhere .The design of next-generation large-scale oxy-fuel boilers is split between two pathways of: constant-furnace-size scenario; and constant-thermal-power scenario.The constant-furnace-size scenario is the only option for retrofitting air-fired CFB boilers, thus providing a near-term CCS implementation.Retrofit boilers are most attractive to the power sector since they can reuse most of the plant equipment and reduce investment cost/risk, making retrofitting oxy-fuel the most competitive technology option for CCS .However, to economically retrofit an oxy-fuel system into an existing air-fired boiler, the original power plant must have sufficiently high efficiency).With the same furnace geometry, an oxy-fuel CFB boiler with the same O2 concentration as an air-fired boiler, i.e., 21 vol%, gives a lower furnace temperature due to the specific heat capacity of CO2 relative to N2.The same furnace temperature as an air-fired furnace can be achieved with an oxy-fuel furnace, but O2 concentration of around 27–30 vol% is required .By increasing the O2 concentration, the boiler thermal power output changes, therefore necessitating additional heat removal via external heat exchangers .An increased rate of circulating solids flux to improve heat removal ability also increases the efficiency of the boiler and reduces the unburnt carbon content of the fly ash, both of which are economically and environmentally favorable .The constant-thermal-power scenario 
involves downsizing the furnace, as it is possible to achieve the same thermal power output with a lower total volumetric flow rate and an increased O2 concentration. Modelling by Leckner and Gomez Barea found the potential to reduce the boiler size by ∼80% by increasing the oxygen concentration from 21 vol% to 80 vol% for a 300 MWth oxy-fuel CFB boiler, as shown in Fig. 6. In addition, a more homogeneous bed temperature profile and a lower heat flux to the boiler tubes, compared to the constant-size scenario, make the constant-thermal-power scenario a better pathway for oxy-fuel CFB CCS development for new facilities. In order to improve the competitiveness of oxy-fuel fluidized bed combustion, R&D has started on pressurized oxy-combustion. This has a number of potential benefits, including a further reduction in boiler size, increased net power plant efficiency, and the provision of alternative pathways for removal of impurities such as O2, CO, NOx and particularly SOx. The challenges faced by previous air-fired pressurized fluidized bed combustion demonstrations have been recognized by the research community, and thus configurations that do not include hot gas filters and gas turbines are being developed. There are at least five global research groups developing oxy-PFBC technology, including those in Canada, the UK, the USA, China, and Poland. The largest oxy-PFBC pilot plant built to date, in collaboration with GTI and Linde, is located at CanmetENERGY in Ottawa. The facility includes CO2 purification equipment for removal of NOx, SOx, and oxygen. Under oxy-fuel conditions, the specific heat of the furnace gas is higher than under air-fired conditions due to the higher concentrations of CO2 and H2O. A higher specific heat capacity of the mixed gas lowers the furnace temperature and alters the heat removal duties. However, it has been demonstrated that a raised inlet O2 concentration can considerably increase the furnace temperature. The main research questions in relation to heat transfer in oxy-fuel CFBs are: "What is the most appropriate mechanism for heat removal?" and "How is the heat/temperature distributed within the overall system?" Within the constant-furnace-size scenario, if the inlet O2 concentration rises above 60 vol%, the thermal power output can theoretically be tripled for an equivalent-size combustor; however, the furnace heat extraction area and, consequently, the maximum extractable heat from the furnace are limited. Thus, above a certain inlet O2 concentration, heat must come from the CFB return leg or through external heat exchangers in order to compensate for the increased thermal power output. Figs.
7 and 8 show the increase in the share of heat extraction required within the return leg and external heat exchangers due to an increase in O2 concentration.The extent to which heat can be usefully obtained from the solids within the external heat exchanger is unclear and depends on circulating solids flux and heat transfer rates .So far, the only way to utilize high O2 inlet concentrations is to increase the circulating solids flux, requiring dramatically higher heat extraction requirements in the external heat exchanger.For example, a 324 MWth furnace with 27 vol% O2 inlet concentration requires around 240 kg/s of circulating solids, but for a similarly sized unit operating at 70 vol% O2 inlet concentration, the capacity becomes 1079 MWth, requiring ∼3000 kg/s of circulating solids .Therefore, when maintaining the furnace size, the heat balance often necessitates a very high circulating solids flux.Higher O2 inlet concentrations can have dramatic impacts on the boiler’s fluid dynamics, combustion efficiency, heat transfer requirements and effectiveness, and can increase erosion due to the higher solids flux.Thus, for the retrofit scenario, inlet O2 concentrations are limited to below 40 vol% .Therefore, in order to utilize greater inlet O2 concentrations, a new CFB design is required which is capable of operating with a high circulating solids flux and with large heat extraction duty within the external heat exchanger.Effective design and scale-up of heat transfer equipment depend on heat transfer modelling which itself requires accurate empirical data and experimental data obtained under realistic conditions .A major challenge in the design and scale-up of oxy-fired systems is the unrepresentative results obtained from the use of heat transfer models which apply data derived from lab-scale units .In part, the inadequacies of lab-scale units are due to their aspect ratio being much larger than industrial- and large-scale furnaces, leading to heat transfer correlations which are unreliable at larger scales.Furthermore, the knowledge, experience and correlations gained from air-fired CFB conditions can be unreliable for oxy-fuel CFB due to significantly different solid fluxes.Thus, critical evaluation of available heat transfer models is paramount.Numerous fluidized bed heat transfer models with different structures and assumptions exist .The most important parameters in modelling heat transfer in a CFB furnace are: the solids suspension density; particle size; bed temperature; particle specific heat; and hydrodynamic conditions in the furnace, which are typically assumed to involve a core-annulus structure .Convection in the CFB context refers to: convection from gases to heat transfer surfaces; and conduction from solid particles to the heat extraction surfaces.These two mechanisms can be modelled separately or together in a single convective heat transfer coefficient .Radiation is the main heat transfer mode under oxy-fuel CFB conditions; however, some experimental results have shown that the radiative heat transfer coefficient decreases dramatically when suspension density exceeds 5 kg/m3 due to the increased absorption of the heat by the particles.Radiative heat flux in oxy-fuel combustion can be enhanced due to higher-temperature combustion; Bordbar and Hyppänen found up to a 40% increase in furnace radiative heat flux when changing from air-fired to oxy-fuel combustion.The spatial radiative heat flux variations observed in Fig. 
9 result from multiple cyclones, which allow the gas to bypass before injection to the furnace, in addition to the flow oscillations due to bubbles rupturing above the dense bed.With regard to heat radiation in CFB furnaces, two calculation methods that can be used are Monte Carlo analysis and the net radiation methods .The Monte Carlo approach is based on statistical features relating to the physical phenomena where the radiation is simulated by modelling stochastic paths of photon bundles leaving and reaching the combustor wall .Although it is a mature method, it requires extensive calculation resources for complex systems and geometries .This is especially so for the simulation of oxy-fired systems with high solids flux and hence greater complexity.The required heat extraction duty from a furnace is usually overestimated due to the high level of uncertainty linked to the expected heat losses of the overall system .In addition, the uneven solid flow rate within the furnace leads to errors in the measurements and modelling of the heat flux distribution .Control of temperature levels in oxy-fired CFB furnaces is a critical issue for their scale-up .The temperature profile is largely homogeneous in air-fired CFB furnaces, but under oxy-fuel conditions with increased inlet O2 concentration, can lead to large, localized gradients in the furnace temperature profile due to hotspots .The furnace temperature of oxy-fuel CFBs can be lower than those of air-fired CFBs for equivalent inlet O2 concentrations due to the higher heat capacity of the gas in the oxy-fuel atmosphere, caused by the recirculation of CO2 .The steam content of oxy-fired CFB furnaces will vary due to the use of wet or dry FGR and, in turn, higher furnace steam concentrations can reduce the furnace temperature .Fig. 10 shows the effect of the recirculation rate on the temperature distribution in an oxy-fired CFBC furnace .Increasing the fly ash recirculation rate from 0 to 8 t/h leads to a reduction of the bed temperature from 1233 K to 1153 K, but also causes the furnace exit temperature to rise from 1033 K to 1083 K.The primary reason for the temperature change is the transfer of more heat from the bed’s dense phase to its dilute phase.High-temperature oxidation under oxy-fuel conditions is particularly problematic for furnaces since it leads to fireside corrosion from the reactions between the reactor internals and the surrounding hot gaseous environment.This can eventually lead to the failure of boiler tubes, superheaters, reheaters, and water walls by typical metal loss mechanisms or by the generation of cracks .In addition, the corrosion in oxy-fuel furnaces can be intensified due to the FGR producing elevated concentrations of corrosive gases .Syed et al. and Hussain et al. 
both found thicker ash-oxide layers on the combustor internals under oxy-fuel conditions compared to air-fired conditions, up to an ≈40% increase. They also found that sulphur penetrated deeper into the oxide layer under oxy-fuel conditions, leading to higher levels of S-stabilized corrosion compounds such as alkali-iron tri-sulphates. The maximum heat flux to the boiler water tubes is of great significance for the furnace design and is used for selecting the materials for the construction of the water tubes. Heat flux prediction and modelling become more important in large-scale supercritical units, where the heat flux should be designed to avoid steam generation in the water tubes. The furnace maximum heat flux must be designed to ensure a reasonable safety margin before reaching the critical heat flux, to avoid vapor films, overheating and, potentially, the rupture of the water tube walls. The location of the maximum heat flux depends on the fuel type, thermal power, secondary gas injection arrangements and the heat extraction panel arrangement. For air-fired CFB boilers with an electrical power generation capacity in the range of 1–200 MWe, heat fluxes are typically found to be in the range of 120–150 kW/m2, which can increase by a third under oxy-fuel conditions. Fig. 11 provides a comparison of the reported maximum heat fluxes for CFB and PC units under both air- and oxy-fired conditions. The heat flux to the furnace water walls is much lower in a constant-thermal-power scenario than in a constant-furnace-size scenario because the furnace net heat extraction duty is higher in the latter scenario at high O2 concentration. Deposition of ash on heat transfer surfaces, which occurs through slagging and fouling, is a major cause of boiler tube and combustor wall damage. Ash deposition has been widely explored for air-fired CFB, but little such data exist for oxy-fuel CFB conditions. Ash deposition through slagging usually occurs in furnace locations with high temperatures, while fouling typically occurs at lower temperatures. The co-utilization of coal and biomass has been studied by various groups and can correspond to increased slagging and fouling on heat transfer surfaces. Oxy-fuel conditions can increase the rate of ash deposition compared to air-fired conditions, owing to the gas physical properties, which lead to changes in deposition behavior. In addition to an increased rate of ash deposition, the sulphur content of the ash deposits also increases under oxy-fuel conditions, producing a higher risk of corrosion of fireside surfaces when approaching the acid dew point. The deposition process varies largely among different ash particles depending on their composition and size. Once ash particles are deposited on the surface of a combustor component, they can begin to coalesce and fuse via sintering. Particle sintering typically occurs at temperatures lower than the melting temperature of the ash material, but higher than the Tammann temperature. Like slagging, increased fouling can lead to the following issues: reduction of boiler efficiency and availability; an increase in boiler temperature due to lower heat extraction capability; higher levels of NOx emissions due to the promotion of thermal NOx; lower thermal power output of the boiler; and an increase in ash deposition due to the formation of alkali sulphates on the surfaces, encouraging the collection and agglomeration of ash particles. The increased furnace temperature of an oxy-fired furnace can also lead to the enhanced
oxidation rate of boiler tubes which will increase their required replacement rate .The frequency of boiler tube replacement is influenced by erosion and abrasion caused by solid particles impacting the surfaces and by oxidation and corrosion caused by higher oxygen partial pressures and ash-influenced corrosion .A critical problem in CFB furnaces is the melting and sintering of solid bed particles, which increases at elevated temperatures and in the presence of alkaline earth compounds .Ash melting temperature varies for different fuels: for biomass it typically occurs above 1250 K , for lignite above 1350 K and for hard coal above 1450 K .Some key elements in ash contributing to corrosion in CFB boilers include Cl, Br, Zn and Pb .In the upper part of the furnace, ash deposition is mainly of K-, Na- and CaSO4-derived materials.However, in the lower parts of the furnace alkali chlorides and bromides are found in the form of K and Na compounds and CaSO4, while the heavy metals in ash found on water walls are typically in the form of Zn, Pb, and Cu sulphates and chlorides .With this information, it is possible to design and implement suitable materials for the type of deposition expected at each location in the furnace.For a more complete discussion of corrosion in oxy-combustion systems and for suggestions regarding materials of construction, see elsewhere .Iron oxide is an important contributor to slagging on heat transfer surfaces during coal combustion and its effects can be enhanced further by the formation of pyrrhotite and FeO–FeS, which derive from pyrite and have low melting points and, therefore, enhance the slagging process .In oxy-fuel conditions, pyrite undergoes faster oxidation due to the increased steam concentration, leading to the production of magnetite, which can impact ash deposition rates .The operation of a large-scale CFB furnace consists of multiphase flow with interactions significantly influenced by the contact between the gas and solid phases.Fig. 
13 shows a schematic of the main transport regions in a CFB loop.In a typical CFB furnace, the formation and break-up of particle clusters is the main characteristic of the flow at the meso scale; here we refer to the meso scale as the length scale between the largest particle size and the diameter of the furnace.The dense-bottom/dilute-top and dense-wall/dilute-core structures are the characteristic of the flow at the macro scale .Here we refer to the micro scale as that which is relevant to a length equal to or smaller than the particle diameter .The furnace dense bed contains the major proportion of the solids and it is the region where most of the char particles burn , while the volatiles and lighter char particles predominantly combust in the O2-rich bubble phase .The rise of the bubbles generates the general upward motion of the solids; the bubbles rupture at the top of the dense phase – the so-called ‘splash zone’ – producing a further driving force for the upward motion of the solid particles .The ‘transport zone’ is located above the splash zone and is characterized by a core-annulus structure and a dispersed outer phase which generally falls back into the bed resulting in back-mixing .Circulating solids play a key role in shaping the hydrodynamics of the furnace and temperature profile in the CFB loop.Circulating solids also transfer furnace heat to external heat exchangers, which are located in the loop seals of the return leg.Increasing the amount of circulating solids avoids excessive temperatures but can significantly increase operating costs.A major challenge for modelling, design and scale-up is that the quantity of circulating solids varies considerably among different boiler manufacturers and units .While circulating solids have been reported as high as 25 kg/m2 s , the typical value is less than 10 kg/m2 s .A higher circulating solids flux increases the heat extraction capability of the CFB and minimizes the amount of unburnt carbon in the fly ash .Nevertheless, practical methods for increasing the circulating solids flux when utilizing lower gas flow rates are challenging; one potential solution would be increasing the efficiency of the cyclones and decreasing their cut-off particle diameter, thereby circulating more solids back into the furnace.In CFB furnaces, particle segregation due to differences in particle size and density can have a major impact on flow behavior and reaction kinetics .Within the fluidized bed itself there exists a certain amount of axial particle segregation that is caused by the smaller particles fluidizing and entraining within the gas stream more easily than the larger particles .The concentration of solids and, hence, fuel in the dense zone of the fluidized bed can be around 1000 times that at the top of the bed .Jang et al. 
found that the average ash particle size under oxy-fuel conditions was smaller than in air-fired conditions due to a reduced ash particle growth mechanism caused by the different properties of the oxy-fuel gas.A major problem for fluid dynamics scale-up is that the majority of experimental work in this field has mimicked air-firing conditions such as in .However, simulation of the fluid dynamics and solid flows is different in oxy-fuel conditions compared to typical air-fired conditions due to the varying inlet O2 concentrations and, consequently, differences in volumetric flow rates and FGR rates .Another difference in the oxy-fuel CFB furnaces lies in their considerably higher solids fluxes when high inlet O2 concentrations are specified, which leads to a shift toward the fast fluidization regime.Mass transfer phenomena can be divided into two categories: lateral; and axial mixing.In the lower section of the transport zone, the effects of solid-solid mixing are dominant, while in the upper section gas-gas mixing is more significant .A major reason for studying mixing is that the reaction mechanisms and the combustion kinetics by themselves fail to predict the measured furnace gas profiles in both lab- and large-scale units .Simulation studies often add adjustment coefficients to the reaction rates in order to take into account the mixing effects of the gas and solids .Others reconcile the lack of exact kinetic data in large-scale systems by assuming that CFB combustion can be modelled as mixing-controlled combustion, thus lumping kinetics and mixing together .In general, the extent of lateral mixing is better in lab-scale units than in large-scale units; therefore, information obtained from lab-scale experiments is less robust for direct application to large-scale units .Lateral gas mixing in fluidized bed furnaces can be described by dispersion, which may be assumed to be the sum of the dispersion coefficients from large-scale structures, and localized small-scale particle motions .Overall, the poor lateral gas mixing commonly associated with CFBs can often lead to maldistribution of the reactant gases and incomplete combustion .The maldistribution of gas is greater under oxy-fuel conditions given the higher fuel feeding rates, but the higher solids flux may positively influence lateral gas mixing.Improving the lateral mixing is important to ensure homogeneous heat distribution throughout the combustor, which in turn will minimize any localized hot spots that could lead to the formation of ash melts .Prediction of the interaction between the solid and gas phases is critical in establishing effective axial mixing in fluidized bed boilers.The distribution of the phases within CFB furnaces will affect how the reactants mix.Table 3 presents some of the past approaches used for modelling axial gas mixing.Axial gas mixing in a CFB furnace is often divided into three distinct zones.From the bottom to the top of the combustor, they are: restricted mixing in the dense bed; improved mixing close to where the bubble eruptions occur; and finally restricted mixing in the transport zone .Axial gas mixing is of course dependent on the furnace geometry, the properties of the solids and gases, and the operating conditions.Mixing in the dense phase of the CFB bed can be limited because of poor mixing between the combustible gas containing emulsion and the oxygen containing bubbles, but provided that there is sufficient fluidization mixing generally proceeds rapidly .Secondary air injection has an 
important effect as this additional gas, injected at an angle into the furnace, will cause the eruption of bubbles into a relatively dense bed of material, improving axial gas mixing .Secondary injection can dominate fluid dynamics across the entire cross-section of a lab-scale CFB furnace, while affecting only a portion of the injection plane in larger units .Varol et al. , using a 30 kWth CFB unit, suggested that the secondary air should be injected into the furnace at an angle of 10–15 degrees to the horizontal in the upward direction, to produce optimal mixing between the injected air and the furnace flow.While such studies are valuable in optimizing the properties of secondary gas injection, concerns about the generality of their conclusions is such that further measurements in industrial-scale oxy-fuel CFBs with high inlet O2 concentrations are required.Dimensional analysis is one of the most useful tools in providing scaling laws between lab-scale and large-scale units.Geometric and dynamic similarity are both important in scaling studies, while geometric similarity is a prerequisite for dynamic similarity .Geometric similarity is valid when the dimensions of one unit relate to the second unit with just one constant factor.Thus, dynamic similarity has two conditions: first, geometric similarity is valid; and second, all independent dimensionless numbers are the same.Dynamic scaling can be divided into three major parts: fluid dynamics scaling; combustion scaling; and boiler design scaling .Fluid dynamics scaling has been studied in detail by Glicksman et al. .In order to achieve fluid dynamics similarity between two units of varying size, the dimensionless numbers indicated in and the geometry should be similar.The comparison of lab-scale units with industrial or even other lab-scale units is often difficult due to the lack of geometric similarity.Industrial CFB units used for electricity production typically have an aspect ratio lower than 10 , whereas lab-scale units have aspect ratios >30 .In such narrow units, lateral mixing tends to be better than in large-scale units .This lack of geometric similarity raises the question of the extent to which CFB transport and mixing phenomena are transferable from a narrow small-scale unit to a wider large-scale unit.In all, if the results of the experimental campaigns presented in Section 2 are divided into three categories of combustion chemistry, axial mixing and lateral mixing, the fluid dynamics scaling for the design of large-scale oxy-fuel CFB boilers can take the lateral mixing information only from pilot- and industrial-scale experimental campaigns.The combustion reactions are critical for an oxy-fired CFB, starting with char combustion.Char combustion is the most important energy-releasing reaction for the combustion of solid fuels and is the main source of energy, CO and CO2 in solid fuel boilers.Murphey and Shaddix found a dramatic increase in the particle temperature in pulverized fuel flames when increasing the O2 concentration .Heterogeneous reactions consist of three stages: adsorption of oxygen onto the char surface; surface reaction between char and O2; and desorption of CO2 or CO formed.Modelling the formation of the combustion reaction products requires the simulation of several parallel adsorption-desorption processes, each with individual activation energies .The most widely used approach for detailed reaction mechanism evaluation is the Langmuir–Hinshelwood derivation .The Arrhenius equation is also widely used for 
quantifying kinetic rate constants across a range of temperatures for various reactions .The most important heterogeneous reactions relevant to combustion efficiency and temperature profile in oxy-fuel CFBs are the char combustion and gasification reactions, as described below.CO is an effective marker to observe the progression of combustion in CFB furnaces.CO measurements can also show to what extent the combustion is complete or incomplete at different heights along the furnace .Riaza et al., using an entrained flow reactor, reported that char burnout slows down at low inlet O2 concentrations during oxy-fuel combustion owing to higher heat capacity and lower diffusivity of CO2 in comparison with N2 .Riaza et al. also noted that switching to oxy-fuel conditions led to an increased char particle ignition temperature .Yuzbasi and Selçuk reported that the char combustion process is retarded in oxy-fuel conditions as compared to air-fired conditions at the same oxygen levels, and is characterized by a lower rate of reaction and higher burnout temperature .Solid fuel conversion in oxy-fuel conditions produces a char with a higher carbon content due to the loss/conversion of more hydrogen and oxygen.Furthermore, these chars typically have a higher specific surface.Furthermore, the high concentration of CO2 prevents secondary char formation and tar polymerization reactions, but may lead to additional CO production .A major challenge with modelling char combustion via Langmuir–Hinshelwood type mechanisms or Arrhenius methods is that most of them are semi-empirical models valid only in the range of temperatures and oxygen partial pressures for which the correlations parameters are derived .Fitting factors, such as the tortuosity factors and the effectiveness factors are used to adapt intrinsic kinetic parameters derived at the lab scale for a larger-scale system, although this is still limited to the range over which the data are gathered.Many studies are available on gasification reactions in atmospheres with high concentrations of CO2 and steam .However, the importance of the gasification reactions is highly temperature- and fuel-dependent.For example, Jia and Tan reported that the gasification reaction between CO2 and carbon becomes relevant at temperatures above 1033 K for low-rank, high-reactivity coals and at temperatures above 1173 K for anthracite and petroleum coke.NO arises from three mechanisms: prompt-, fuel- and thermal-NOx, with fuel-NOx the primary source of NOx from FBC systems.Typically, prompt-NO occurs in gas flames, while thermal-NOx will only become significant at temperatures above 1273 K.A schematic of the fate of fuel nitrogen in combustion is shown in Fig. 14.Air-fired FBCs typically have lower total NOx emissions compared to conventional PC combustion, due to the lower combustion bed temperature.It should be noted that N2O emissions are not a problem for woody biomass, where the bulk of the fuel-N is released in the form of NH3 rather than HCN, or during char-N oxidation .The dependence of the NOx emission on O2 concentration is also discussed elsewhere .NOx pollutants can be generated from either char-bound or volatile-bound nitrogen compounds but the type and quantity of emissions depend significantly on the operating conditions and fuel characteristics .Hofbauer et al. concluded that the NOx emissions increased with increasing inlet O2 concentration when conducting the combustion reactions in a 150 kWth pilot-scale CFB unit, the results of which are shown in Fig. 
15.de Diego et al. reported that, regardless of initial concentration of NO and temperature, oxy-CFB FGR leads to more than 60% of the recycled NO being converted to N2, which is mainly due to the reburning process, while less than 5% is converted to N2O .In pilot plant runs with a 0.8 MWth CFBC, NOx emission were shown to increase modestly, around 18%, when changing bed temperature from 1123 K to 1193 K utilizing bituminous coal .These pilot plant results also demonstrated that the final emitted NOx was considerably lower in oxy-fuel conditions.Duan et al. concluded that the emission of NOx under oxy-fuel was much lower due to enhanced gas phase reduction from FGR.It is widely known that the presence of char and CO in the bed leads to reduction of the NOx into N2 and N2O; this was demonstrated recently by Duan et al. , using a 50 kWth oxy-fuel CFB with a 21% inlet O2 concentration.Further, they found that increasing the inlet O2 concentration from 21 vol% to 40 vol% led to a higher conversion of the fuel-N to NO. In a modelling study, Peng et al. suggested that the levels of NO and N2O emissions increase with the increase in excess oxygen.This has also been supported experimentally for air-fired systems but is less clear for oxy-fired systems .Air staging is a proven method for reducing NOx emission from CFB furnaces .Duan et al. concluded that O2 staging in oxy-fuel CFB was more effective in reducing NO emissions compared to air-fired conditions, while de las Obras-Loscertales et al. reported that wet FGR caused a sharp decrease in NO emissions and a slight increase in N2O emissions.During the combustion process, molecular chlorine, Cl2, and hydrogen chloride gas, HCl, may be produced; modelling has shown that Cl2 formation is favored at temperatures above 600 °C and in oxygen-rich environments .Chlorine-containing product formation is highly temperature-dependent and thus may vary when changing to oxy-fired CFB.Font et al. reported that due to the high organic affinity of chlorine in coal, the retention of chlorine at any stage of the process or gas clean-up was difficult under oxy-fuel conditions .Lupiáñez et al. found that oxy-firing increased the chlorine detected in fly ashes in comparison to the air-fired tests, whereas, no chlorine was detected in the solids taken from the bed bottom under both air- and oxy-fired conditions.Chlorine is a key contributor to the formation of aerosols and submicron particles and high-temperature corrosion in boilers firing biomass and waste, due to the formation of KCl and NaCl compounds.In boilers firing solid recovered fuels or sludge, bromine plays a key role in aerosol formation and high-temperature corrosion.The bromine in sludge comes from water treatment chemicals containing bromine .The formation and reduction behavior of Br mimics the Cl behavior and will form similar halogenated compounds with alkaline earth metals and heavy tars, if present .While fuels typically have only tens of ppm levels of Br, the slags and wall deposits are reported to contain up to 3 wt%, mainly in the form of KBr and NaBr .While Br is very important in emission evaluation and ash deposition, there is no information available on it in oxy-fuel CFB conditions.Suriyawong et al. investigated mercury speciation under O2/CO2 coal combustion in a tubular furnace with a coal feeding system, and experimental results indicated that the distribution of Hg was similar between air and O2/CO2 combustion.Font et al. 
quantitatively analyzed the fate of mercury in a 90 kWth bubbling fluidized bed under O2/CO2 combustion conditions, and found that elemental Hg was the major species in the exhaust gas, while the major mercury species retained in bag filters was Hg2+. Wang et al. evaluated mercury speciation in 50 kWth and 6 kWth fluidized beds, and observed a distinct difference in mercury speciation between air and O2/CO2 coal combustion. Gharebaghi et al. simulated the results of mercury oxidation and speciation under oxy-coal combustion using a combined homogeneous-heterogeneous model. Contreras et al. conducted a series of thermodynamic equilibrium calculations on the fate of mercury, in which they found that chlorine speciation was the key factor affecting the fate of mercury in O2/CO2 combustion. Cl2, SOx, and NOx also play important roles in Hg oxidation in both air- and oxy-fuel combustion. Chatel-Pelage et al. studied mercury emission in a pilot-scale pulverized coal-fired boiler in O2/CO2; SOx and NOx were both important factors in determining the mercury emission. Fernandez-Miranda et al. pointed out that SO2, NOx, and HCl can accelerate Hg0 oxidation. Wu et al. provided evidence that NO concentration can influence mercury oxidation. The level of mercury emissions also depends on the furnace temperature, with higher temperatures leading to the reaction of elemental mercury with chlorine, which may lead to capture of both elements in the fly ash. Font et al. used a 90 kWth oxy-fuel bubbling fluidized bed with coal and limestone and reported high levels of mercury emissions in the exhaust gases; of these total mercury emissions, 82% was elemental and the rest was Hg2+, which is in gaseous form and is of great environmental concern. The retention of chlorine and mercury in the bottom ash is desirable since it minimizes the risk of atmospheric emission in the case of less effective flue gas filters. Further research on the interaction of mercury, chlorine and Na-based sorbents is necessary for efficient in-furnace emission removal. Oxy-fuel CFB combustion is based on mature fluidized bed boiler technology and offers a good opportunity for reducing CO2 emissions from heat-generating facilities. The major conclusions from this study are as follows.
Scale-up: Lab- and pilot-scale units give valuable information on the chemical kinetics since they offer more or less the same residence time and environment as larger-scale oxy-fuel boilers. While there are numerous measurement campaigns in pilot- and industrial-scale oxy-fuel CFB units, only limited results taken from such experimental campaigns are reliable for the design and scale-up of large-scale oxy-fuel CFB units. For instance, data and models on reaction mechanisms are transferable from lab- and pilot-scale furnaces to large-scale units. However, utilization of heat transfer correlations, ash deposition data and furnace hydrodynamics requires further investigation. In particular, mixing is very much dependent on the unit size and, therefore, scale-up requires experimentation on the mixing issue in industrial- or large-scale boilers.
Boiler design: The two roadmaps of the constant-furnace-size scenario and the constant-thermal-power scenario are of technical and economic importance. Both scenarios rely on the economic potential of oxy-fuel CCS at increased O2 concentrations. The constant-furnace-size scenario is the more suitable option for retrofitting air-fired CFB boilers where, with the same furnace geometry, elevated O2 concentration causes enhanced boiler thermal power. In the constant-thermal-power scenario, the furnace size becomes smaller with increasing O2 concentration due to the reduced volumetric flow rate of RFG. Achieving a high inlet O2 concentration is important for reducing the cost of generated power, either by increasing the boiler thermal power or by reducing the furnace size. In addition to the considerable furnace size reduction, the more homogeneous temperature profile and the lower heat flux from the furnace water walls make the constant-thermal-power scenario the better roadmap to commercialization.
Heat transfer: Radiative heat extraction is normally the dominant heat extraction mechanism. Control of the furnace maximum heat flux and local temperatures under oxy-fuel conditions is necessary to avoid damage to the water walls at high inlet O2 concentrations. Ash deposition mechanisms on the heat extraction surfaces can differ from those seen under typical air-fired conditions due to temperature variations and different chemistry.
Fluid dynamics: The considerable increase in circulating solids at high inlet O2 concentration leads to changes in furnace hydrodynamics, including a high solids concentration, a high share of fine particles and aerosols in the furnace, and the need for much more efficient cyclones. From a hydrodynamics point of view, the retrofit scenario is applicable only at low to medium inlet O2 concentrations. For high inlet O2 concentrations, a new boiler design is necessary, and the semi-empirical data taken from typical air-fired CFB or from pilot-scale oxy-fuel CFB should be re-evaluated if used for the design of large-scale units with an inlet O2 concentration above 40%.
Combustion: Major reactions can vary from air-fired to oxy-fuel conditions. Char gasification reactions become important, while in-furnace profiles of gases such as CO, SOx and NOx vary considerably compared to typical air-fired CFB. However, indirect sulphation is likely to be the dominant sulphur capture route. NOx emissions should be lower under oxy-fired conditions due to the NOx reburn process and the higher concentrations of steam coming from the wet RFG.
No new data was generated in the course of this research.
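To illustrate the constant-thermal-power argument quantitatively, the sketch below is a rough back-of-envelope estimate rather than a design calculation. It assumes that the O2 molar flow is fixed by the fuel heat input, that the total inlet gas flow (O2 plus recirculated flue gas) scales as 1/X_O2, and that the superficial gas velocity is held constant so the furnace cross-section scales with the inlet volumetric flow; the oxygen demand, inlet temperature and velocity values are illustrative assumptions.

```python
# Back-of-envelope sketch (illustrative numbers, not a design tool): effect of
# inlet O2 concentration on furnace cross-section at constant thermal power.
R = 8.314                 # J/(mol K), ideal gas constant
P_TH_MW = 300.0           # boiler thermal power, MW (i.e. MJ/s of fuel input)
O2_DEMAND = 0.003         # kmol O2 per MJ of fuel input (assumed coal-like value)
U_SUPERFICIAL = 5.0       # m/s, assumed constant superficial gas velocity
T_IN, P_IN = 600.0, 101_325.0   # assumed inlet gas temperature (K) and pressure (Pa)

o2_mol_s = P_TH_MW * O2_DEMAND * 1000.0        # mol O2 per second, fixed by duty
for x_o2 in (0.21, 0.30, 0.40, 0.60, 0.80):
    total_mol_s = o2_mol_s / x_o2              # O2 + recirculated flue gas
    vol_flow = total_mol_s * R * T_IN / P_IN   # ideal-gas volumetric flow, m^3/s
    area = vol_flow / U_SUPERFICIAL            # required cross-section, m^2
    print(f"X_O2 = {x_o2:.2f}:  A ~ {area:5.1f} m^2  "
          f"({0.21 / x_o2:.2f} x the 21 vol% case)")
```

Under these assumptions, moving from 21 vol% to 80 vol% inlet O2 cuts the required cross-section to roughly a quarter, broadly in line with the ∼80% boiler-size reduction reported by Leckner and Gomez Barea for a 300 MWth unit.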
Oxy-fuel combustion is a promising technology for carbon capture and storage (CCS) from large point sources. In particular, fluidized bed (FB) boilers represent one of the power generation technologies capable of utilizing the oxy-fuel concept. This paper reviews the published material on the key aspects of oxy-fuel circulating FB, including the boiler heat balance, heat transfer mechanisms, furnace hydrodynamics, and the mechanical and chemical mechanisms of the process. In particular, it demonstrates the challenges of utilizing high inlet O2 concentrations in the oxy-fuel process in fluidized beds. This requires significantly more efficient gas-particle clean-up technology (especially for Cl with perhaps 19% retention and Hg with 2.15 μg/m3 in flue gases), high circulating solids flux and, hence, significant heat extraction outside the furnace (up to 60% of the boiler's total heat extraction). Scale-up of oxy-fuel CFB technology can partially compensate for the energy penalty from air separation by furnace downsizing when operating at high inlet O2 concentrations. Critically, while there are numerous measurement campaigns and corresponding models from the pilot and, to a lesser extent, industrial scale, the paper endeavors to answer the questions about what information taken from such experimental campaigns is reliable, useful for future design, and for scale-up.
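As a final, hedged illustration of why the oxy-fuel route yields an easily purified flue gas, the snippet below estimates the dry-basis CO2 concentration for air-firing versus idealized oxy-firing. The fuel composition (CH0.8O0.08 per mole of carbon), the 5% excess oxygen, and the neglect of air in-leakage, SOx and ASU impurities are simplifying assumptions.

```python
# Stoichiometric sketch under simplifying assumptions: dry-basis CO2 content of
# the net flue gas for air-firing vs. idealized oxy-firing of a generic coal.
H_PER_C, O_PER_C = 0.8, 0.08                     # assumed fuel CH0.8O0.08
O2_STOICH = 1.0 + H_PER_C / 4.0 - O_PER_C / 2.0  # mol O2 per mol fuel carbon
EXCESS = 1.05                                    # 5% excess oxygen

def dry_co2_fraction(n2_per_o2):
    """Dry CO2 fraction of the net flue gas for a given N2:O2 ratio in the oxidant.
    Recycled flue gas cancels out of the net stream, so only N2 dilutes the CO2."""
    o2_fed = EXCESS * O2_STOICH
    co2 = 1.0                                    # all fuel carbon goes to CO2
    o2_left = o2_fed - O2_STOICH                 # unreacted excess oxygen
    n2 = n2_per_o2 * o2_fed                      # diluent nitrogen (zero for oxy)
    dry_total = co2 + o2_left + n2               # H2O assumed condensed out
    return co2 / dry_total

print(f"Air-fired (79/21 N2/O2): {dry_co2_fraction(79 / 21):.1%} CO2, dry basis")
print(f"Oxy-fired (O2 + RFG):    {dry_co2_fraction(0.0):.1%} CO2, dry basis")
```

With these assumptions the oxy-fuel case gives a dry flue gas of roughly 90+ vol% CO2, versus under 20 vol% for air-firing, which is what makes downstream CO2 purification comparatively simple.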
761
Inhibition-Related Cortical Hypoconnectivity as a Candidate Vulnerability Marker for Obsessive-Compulsive Disorder
Patients with OCD were recruited from a National Health Service treatment center in the United Kingdom.Each patient entering into the study gave permission for the study team to contact a first-degree relative.Healthy control participants were recruited using media advertisements.Participants provided written informed consent after having the opportunity to read the information sheets and ask questions of the study team.The study was approved by the Cambridge Research Ethics Committee.All study participants participated in an extended clinical interview supplemented by the Mini International Neuropsychiatric Interview, the Montgomery–Åsberg Depression Rating Scale, and the National Adult Reading Test.The MINI version used identifies the following mental disorders: major depressive disorder, dysthymia, suicidality, manic episodes, panic disorder, agoraphobia, social phobia, posttraumatic stress disorder, alcohol dependence/abuse, substance dependence/abuse, psychotic disorders, anorexia nervosa, bulimia nervosa, generalized anxiety disorder, and antisocial personality disorder.The MADRS rates depressive symptoms, and the National Adult Reading Test estimates IQ.For patients with OCD, symptom severity was assessed via interview using the Yale-Brown Obsessive-Compulsive Scale.Inclusion criteria across all groups were being of adult age, being right-handed according to the Edinburgh Handedness Inventory, and being willing to provide written informed consent.Exclusion criteria across all groups were the inability to tolerate scanning procedures, contraindication to scanning, current depression, current mental health disorder on the MINI, history of neurologic disorders, history of psychosis, and history of bipolar disorder.In the OCD group, participants were required to meet DSM criteria for the disorder based on clinical interview and the MINI, to have primarily washing/checking symptoms, and to have a Yale-Brown Obsessive-Compulsive Scale total score >16.Our rationale for including patients with mainly washing/checking symptoms was that washing symptoms in particular are extremely common in OCD, and we wished to include the same symptom-related criteria as in our previous case-relative-control behavioral study.Patients with OCD with clinically significant hoarding were excluded because hoarding differs from mainstream OCD and is now listed separately from OCD in diagnostic nosological systems.In the OCD relatives group and control group, participants were required to be free from history of OCD, to be free from other mainstream mental disorders, and to not be receiving psychotropic medication.Participants completed pretraining on the SST prior to functional magnetic resonance imaging, with a view to minimizing between-group differences in behavioral measures that can confound interpretation of imaging connectivity data.Participants then completed the task during fMRI.We used a version of the task optimized for fMRI as described elsewhere.In brief, individuals viewed a series of left- and right-pointing arrows and were instructed to respond as quickly as possible by clicking the button with their right hand, depending on which direction the arrow was pointing.Intermittently, a down-pointing arrow would appear on the screen for a variable time interval after a go signal, and participants were instructed to stop their initiated response when it appeared.By modulating the go–stop gap as previously described, the task was designed for a 50% successful inhibition outcome and was performed by each 
participant for approximately 8 minutes.The stop signal reaction time was calculated using the simple/standard way for such designs, that is, by subtracting the mean go–stop interval from the mean reaction time.Scanner behavioral data recorded for each participant are presented in the Supplement, with analyses indicating that the task design functioned correctly .Imaging data were acquired at the Wolfson Brain Imaging Centre at the University of Cambridge.Participants were scanned with a 3T Siemens TIM Trio scanner.While the participants were undertaking the SST, blood oxygen level–dependent sensitive three-dimensional volume images were acquired every 2 seconds.The first 10 images were discarded to account for equilibrium effects of T1.Each image volume consisted of 32 slices of 4 mm thickness, with in-plane resolution of 3 × 3 mm and orientated parallel with the anterior commissure–posterior commissure line.A standard echo-planar imaging sequence was used with 78° flip angle, 30 ms echo time, and temporal resolution of 1.1 seconds in a continuous descending sequence.The field of view of images was 192 × 192 mm, a 64 × 64 matrix, 0.51 ms echo spacing, and 2232 Hz/pixel bandwidth.In addition, a 1-mm resolution magnetization prepared rapid acquisition gradient-echo structural scan was collected for each individual with a 256 × 240 × 192 matrix, 900 ms inversion time, 2.99 ms echo time, and 9° flip angle.Scan preprocessing was conducted using the standard procedure in SPM12.Data for each participant were motion corrected, registered to the structural magnetization prepared rapid acquisition gradient-echo, spatially warped onto the standard Montreal Neurological Institute template using DARTEL toolbox, upsampled to 2-mm cubed voxels, and spatially smoothed using a Gaussian filter.fMRI data were analyzed to determine blood oxygen level–dependent signal changes in response to participants performing the SST.General linear model analysis was applied at the individual participant level in SPM12.The data were high-pass filtered to remove low-frequency drifts in the MRI signal.Regressor functions for each condition were created by convolving timing functions indicating the onset of each of six event types, with a basis function representing the canonical hemodynamic response.The event types were successfully versus unsuccessfully inhibited left or right responses and the left or right responses in go trials.Six regressors were included representing rotations and translations for the x-, y-, and z-axes.Whole-brain maps depicting beta weights for the experimental predictor functions from the first-level models were collated for group-level analyses using a full-factorial 2 × 2 × 3 design, where outcome of the stop trials and the direction with which the response was made were the within-subject factors and group was the between-subject factor.The following four a priori voxelwise contrasts were estimated: 1) the positive effect of condition, which captures regions of the brain that are significantly active during stop trials; 2) successful minus failed stop trials; 3) the main effect of group; and 4) the group × condition interaction.To correct for multiple comparisons across the whole-brain mass, contrast images were thresholded at p < .05 voxelwise, and false discovery rate cluster correction was then applied at p < .05.Significant effects of group were further interpreted by fitting 5-mm-radius spheres at the peak coordinates of a given significant F test map and conducting post hoc permutation 
tests for each groupwise comparison.Regions of interest were generated by our in-house three-dimensional watershed transform algorithm.The method was used because it can accurately and efficiently decompose thresholded statistical activation maps into discrete clusters even when the clusters are contiguous.It was conducted at the group level based on the thresholded statistical maps to enable connectivity across the activated network to be examined.When generating the ROIs, the within-subject contrasts were also thresholded voxelwise at p < .01 to focus on the most active brain regions.The ROIs formed the basis of the connectivity analyses.In total, 20 patients with OCD, 18 of their nonsymptomatic first-degree relatives, and 20 control participants completed the study.The demographic and clinical features of the sample are presented in Table 1, where it can be seen that the groups were well matched on age, gender, and IQ.As expected, patients with OCD scored significantly higher on MADRS total scores than the other groups, but mean scores were well below the threshold for clinically significant depression, in keeping with the exclusion criteria used.Task-related behavioral measures did not differ significantly among the groups.The following numbers of patients were taking psychotropic medication: eight selective serotonin reuptake inhibition monotherapy and two selective serotonin reuptake inhibitor plus low-dose antipsychotic medication.One patient was also taking occasional lorazepam but had not taken this within 48 hours of study participation.Activation differences for the SST contrasts of interest, along with the extracted ROIs, are summarized in Figure 1.There was a main effect of group, yielding group differences mainly in the occipital lobes, specifically in the left and right occipital cortex, the temporal occipital fusiform cortex, and the cerebellum.Post hoc permutation tests indicated that the group effect was due to hyperactivation in patients with OCD versus both other groups maximal in the bilateral lateral occipital complex.Brain regions significantly activated during stop signal trials, across all participants, are shown in Figure 1B.It can be seen that the SST activated the distributed inhibitory control network, including the bilateral inferior frontal gyrus, insula, and anterior cingulate cortex.For the contrast of successful minus failed stops across all participants, relative hypoactivation was observed in regions associated with motor responses.This is consistent with failed stops activating relevant motor areas owing to action as compared with there being no motor response for successful stops.The interaction of group × successful minus failed inhibition did not yield significant regions.The 29 functional ROIs from the above activation maps were used for the subsequent connectivity analysis.For the SST contrast, there was no significant main effect of group on gPPI connectivity, there was a significant effect of connection, and there was no significant interaction.When applied to the success minus fail contrast, there was a significant main effect of group and connection and a significant interaction.These results indicated that the task conditions affected network connectivity in different ways across the three groups.To characterize the basis of the effects at the node level, the coefficients were contrasted pairwise for patients and their relatives versus control participants, thresholded at p < .01 two tailed.A widespread pattern of reduced connectivity was 
evident in patients with OCD and their relatives.Summing the number of supra-threshold connections for each node highlighted a high degree of abnormality affecting cerebellum area crus 1 connectivity bilaterally, middle occipital gyrus bilaterally, superior frontal gyrus and superior medial frontal cortex, left middle temporal, and left postcentral gyri.This study evaluated functional brain dysconnectivity during response inhibition as a candidate vulnerability marker for OCD.Consistent with our hypothesis, the key finding was that patients with OCD and their first-degree relatives had abnormally reduced functional connectivity during the SST between frontal and posterior brain regions, including the frontal cortex, occipital cortex, and cerebellum.These novel findings accord well with the notion that functional connectomics constitutes a candidate vulnerability marker for OCD, supporting neurobiological models of the disorder implicating loss of cortically mediated inhibitory control, not only constrained to the frontal lobes but also involving distant posterior brain regions.Conventional analysis confirmed that the fMRI SST activated neural circuitry, including the bilateral inferior frontal cortex and anterior cingulate cortex as well as more posterior parts of the brain playing a role in visual attention streams.This is in keeping with prior lines of research implicating such regions in cortically mediated motor inhibition processes.We generated a set of ROIs using an innovative watershed algorithm to examine connectivity differences between groups using a gPPI model.This identified widespread patterns of hypoconnectivity, common to patients with OCD and their relatives, versus control participants in frontal and posterior brain regions.Overall group differences in connectivity during the SST were specifically detected during the success–fail contrast, with connectivity being lower in patients and relatives versus control participants.In the absence of significant overall stop–go differences in connectivity among the groups, this suggests that patients with OCD and their relatives had higher connectivity for failed stops and/or lower connectivity for successful stops compared with control participants.Ultimately, determining what this means on a process level requires further investigation examining causal dynamics.However, the implicated neural regions are likely to operate via mutual bidirectional connections to facilitate response inhibition.It is interesting to note that certain frontal brain regions found to be abnormally connected here during response inhibition were previously found to exhibit reduced striatal-related connectivity in OCD in association with cognitive rigidity.The most commonly dysconnected nodes common to patients and their asymptomatic first-degree relatives included frontal cortical, occipital, and cerebellar regions.Conventional neurobiological models of OCD have focused on the frontal lobes, whereas the current findings implicate abnormal connections involving not only frontal brain regions but also these other brain regions.This is in keeping with several tiers of OCD research more broadly, including connectivity studies.For example, resting-state connectivity changes in OCD were maximal in the cerebellar crus 1 region, and machine learning algorithms designed to discriminate patients with OCD from control participants based on resting-state connectivity indicated important contributions from not only frontal regions but also occipital and cerebellar 
regions.To our knowledge, only one previous study has examined task-related functional dysconnectivity as a candidate vulnerability marker for OCD.This study found reduced functional connectivity between the right dorsolateral prefrontal cortex and the basal ganglia during executive planning.Resting-state connectivity changes have also been described in the literature, in patients with OCD and their relatives, involving distributed brain regions.Collectively, the emerging evidence thus suggests important dysconnectivity not only between cortical and subcortical regions but also between anatomically distant cortical regions in OCD, findings that are likely to be contingent on the nature of the cognitive probe used to explore such neural circuitry.In terms of group differences in SST-related brain activation, we found differences in posterior brain regions, maximal in the bilateral lateral occipital complex.This result was attributable to hyperactivation in patients versus both other groups, whereas activation in relatives did not differ from control participants in this region.There was no group × successful minus failed inhibition interaction, indicating that this abnormality was common to inhibition trials on the task whether or not inhibition was successful.The lateral occipital complex plays an important role in visual attentional processing, including representation and perception of objects and faces.One interpretation of the current finding is that hyperactivation of this visual processing region may be related to hypervigilance in OCD or an expectation of an environmental threat.Owing to the unpredicted nature of this result, replication is required before firm conclusions can be made.Nonetheless, this result suggests that tasks designed to probe visual attentional streams may be valuable in OCD research.Although this is the first study to address inhibitory control–related functional connectivity as a candidate vulnerability marker for OCD, several limitations should be considered.We recruited patients with primarily washing/checking OCD symptoms who did not have comorbidities.As such, it remains to be demonstrated whether the findings generalize to patients with other primary symptoms or to those who have comorbidities.Owing to the sample size, power may be limited.Our approach could be viewed as conservative because nodes of interest were generated using false discovery rate p < .05; hence, and in view of the sample size, some neural nodes implicated in OCD, but with a smaller effect size, may have been overlooked.Presupplementary motor activation abnormalities were previously found in patients with OCD and their relatives, but we could not replicate this finding in the relevant ROIs.Likely because participants were pretrained, they did not differ on stop signal behavioral measures; this is an advantage because it simplifies imaging interpretation, but the corollary is that our study did not measure neural changes related to impaired inhibition but rather measured neural changes related to inhibition per se.Owing to the nature of the gPPI analysis, it could not be established whether there was heightened connectivity during go trials or decreased connectivity during stop trials in the patients and relatives.Our connectivity difference was in the contrast of successful–failed stop trials.Control participants showed heightened connectivity when stopping was successful relative to unsuccessful.In OCD, this effect was reduced.This is an interesting pattern of connectivity 
difference.Patients with OCD may be engaging the network more during unsuccessful stop trials, in line with abnormal post-error processing.Or, it may be that they engage the network less during the successful stop trials.The fact that we see this difference but no cross- group difference for stop–go suggests that it is both.This aspect could be assessed in future studies by including rest blocks, allowing activity and connectivity during routine responding to be estimated separate from the resting baseline.While some patients with OCD were receiving psychotropic medications, functional dysconnectivity was also found in these patients’ relatives who were not receiving any psychotropic medications.Hence, while we cannot address effects of such pharmacotherapies on connectivity owing to the sample size, our key findings were not due to such effects.Prior work found treatment-related changes in activation during a Stroop task, which examines attentional inhibition processes, in patients with OCD.Future work should examine effects of treatment on functional connectivity during inhibition tasks in OCD.We did not observe robust differences between the OCD and first-degree relative groups in functional connectivity.Identification of differences between these two types of group using larger samples in future work may be valuable to identify mechanisms associated with chronicity/instantiation of OCD as opposed to vulnerability toward OCD.Lastly, the current study focused on cortical functional connectivity; however, given the prominent role of the basal ganglia in OCD models, future work should also look at cortico-subcortical connectivity on the SST, with there already being evidence of abnormalities in OCD using an executive planning task.In conclusion, we found that hypoconnectivity during response inhibition, involving frontal and posterior brain regions, may constitute a candidate vulnerability marker for OCD.Future studies could use such cognitive probe connectivity approaches to help delineate etiological factors involved in OCD and extend research into other obsessive-compulsive–related disorders.
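The stop signal reaction time measure and the group-wise permutation comparisons used in this study can be illustrated with a brief sketch. This is a minimal illustration, not the authors' analysis code: the column names, group sizes, and simulated values are hypothetical placeholders, and the permutation test is run on SSRT rather than on extracted ROI signal purely for illustration.

```python
import numpy as np
import pandas as pd

def ssrt(mean_go_rt, mean_go_stop_interval):
    """SSRT for a ~50%-inhibition staircase design:
    mean go-trial reaction time minus mean go-stop interval."""
    return mean_go_rt - mean_go_stop_interval

def permutation_test(x, y, n_perm=10000, seed=0):
    """Two-sample permutation test on the difference of means."""
    rng = np.random.default_rng(seed)
    observed = np.mean(x) - np.mean(y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:len(x)].mean() - pooled[len(x):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return observed, (count + 1) / (n_perm + 1)

# Hypothetical per-participant summary table (placeholder values).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": ["OCD"] * 20 + ["relative"] * 18 + ["control"] * 20,
    "mean_go_rt": rng.normal(450, 40, 58),
    "mean_go_stop_interval": rng.normal(200, 30, 58),
})
df["ssrt"] = ssrt(df["mean_go_rt"], df["mean_go_stop_interval"])

# Pairwise group comparison (here on SSRT as a stand-in for ROI signal).
obs, p = permutation_test(df.loc[df.group == "OCD", "ssrt"].to_numpy(),
                          df.loc[df.group == "control", "ssrt"].to_numpy())
print(f"OCD - control difference: {obs:.1f} ms, permutation p = {p:.3f}")
```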
Background: Obsessive-compulsive disorder (OCD) is a prevalent neuropsychiatric condition, with biological models implicating disruption of cortically mediated inhibitory control pathways, ordinarily serving to regulate our environmental responses and habits. The aim of this study was to evaluate inhibition-related cortical dysconnectivity as a novel candidate vulnerability marker of OCD. Methods: In total, 20 patients with OCD, 18 clinically asymptomatic first-degree relatives of patients with OCD, and 20 control participants took part in a neuroimaging study comprising a functional magnetic resonance imaging stop signal task. Brain activations during the contrasts of interest were cluster thresholded, and a three-dimensional watershed algorithm was used to decompose activation maps into discrete clusters. Functional connections between these key neural nodes were examined using a generalized psychophysiological interaction model. Results: The three groups did not differ in terms of age, education level, gender, IQ, or behavioral task parameters. Patients with OCD exhibited hyperactivation of the bilateral occipital cortex during the task versus the other groups. Compared with control participants, patients with OCD and their relatives exhibited significantly reduced connectivity between neural nodes, including frontal cortical, middle occipital cortical, and cerebellar regions, during the stop signal task. Conclusions: These findings indicate that hypoconnectivity between anterior and posterior cortical regions during inhibitory control represents a candidate vulnerability marker for OCD. Such vulnerability markers, if found to generalize, may be valuable to shed light on etiological processes contributing not only to OCD but also obsessive-compulsive–related disorders more widely.
762
Chronic helminth infection does not impair immune response to malaria transmission blocking vaccine Pfs230D1-EPA/Alhydrogel® in mice
To eradicate malaria, novel tools must integrate with existing approaches to reduce parasite transmission.One promising potential tool is a transmission-blocking vaccine, which aims to halt transmission by inducing antibodies targeting antigens expressed by the parasite in the mosquito host .Pfs230, a protein present on the surface of Plasmodium falciparum gametes, is a leading candidate for a TBV.Recently, a recombinant form of the first 6-cysteine rich domain of Pfs230 was produced with the quality characteristics and quantity suitable for human clinical trials using the Pichia pastoris expression system .In order to enhance immunogenicity, the ∼20 kDa recombinant Pfs230D1 protein was chemically conjugated to a carrier protein and formulated in an adjuvant.This vaccine candidate is currently in clinical trials in endemic areas .Malaria-affected areas are often co-endemic with helminth parasite infections.Helminth parasites belong to multiple taxonomic groups, but collectively they share the capacity to downregulate the parasite-directed host immune response .During chronic infection, helminths modulate immune responses to bystander pathogens , and to some vaccine antigens .The cytokine response to most helminth parasites is characteristically both Th2- and IL10-dominated; the IL-10 response appears to derive from both adaptive and natural T regulatory cells .These prototypical responses driven by helminths or helminth-derived molecules have been shown to alter the responses to some types of vaccines , though this is not a universal finding .To date, few studies have examined whether a malaria TBV can be modulated by infection with intestinal helminth parasites.It has been recently suggested that Heligmosomoides polygyrus bakeri infection impairs the immunogenicity of a Plasmodium falciparum DNA TBV, although this infection did not impair immunity to irradiated sporozoites .Hpb is a natural intestinal parasite of mice, capable of establishing long-term chronic infections in many strains of mice which is ideally suited for lengthy immunization studies.During the infection, Hpb induces a markedly polarized early Th2 response characterized by increased IL-4, IL-13 and IgE production .However, this persistent type 2 response shifts to long-lasting chronic infection, characterized by a strong regulatory response with expanded frequency of regulatory T cells and production of IL-10, peaking at day 28 post-infection .At this stage of infection, the ability of Hpb to down-modulate responses to unrelated bystander antigens, including vaccine candidates, has been extensively demonstrated .In this context, we used the mouse model of intestinal infection with Hpb to assess whether transmission-blocking immunity induced by Pfs230D1-EPA/Alhydrogel® would be impaired by helminth infection.Our findings demonstrate that chronic Hpb infection does not affect antibody responses or transmission-blocking activity induced by Pfs230D1-EPA/Alhydrogel® immunization.This supports the feasibility of TBV use in areas where intestinal helminths and malaria are co-endemic.All animals were infected, vaccinated and sampled according to protocols approved by the NIAID Animal Care and Use Committee.For each experiment, 10 BALB/c mice per group were infected with 200 H. 
polygyrus bakeri infective larvae by oral gavage 28 days before the first dose of the Pfs230D1-EPA/Alhydrogel® vaccine.The confirmation of Hpb infection and intensity follow-up were determined by fecal egg counts at days 25, 53 and 63 post-infection using standard protocols .HES antigens from adult worms were prepared as described by Johnston et al. with some minor modifications.Briefly, Hpb adult worms were isolated from the duodenum of BALB/c mice inoculated 14 days earlier with 200 infective 3rd stage larvae."The worms were soaked and washed six times in Hanks' Solution and then placed in RPMI 1640 culture media plus a standard antibiotic mixture of penicillin, streptomycin and gentamicin, distributed at approximately 400 adult worms per 2 ml in 24-well culture plates for 1–2 week.HES-containing culture media were collected at intervals of twice per week, and then were pooled out and concentrated over a 3000 MWCO filter.The protein concentration was determined by Bradford assay and the HES were used for the ELISA assays to measure helminth specific antibody response.Pfs230D1-EPA is a conjugate produced by chemically cross-linking Pfs230D1, a highly purified recombinant protein corresponding to Pfs230 expressed by gametocytes and on gametes, to rEPA, a highly purified recombinant protein corresponding to a mutant and detoxified Exoprotein A from Pseudomonas aeruginosa.The conjugate Pfs230D1-EPA is composed of 54.6% Pfs230D1 and 45.4% EPA and manufactured under current good manufacturing practice with methods developed at the Laboratory of Malaria Immunology and Vaccinology, National Institute of Allergy and Infectious Diseases, National Institutes of Health as described previously .Pfs230D1-EPA was formulated by adding 8.2 µL of the vaccine aseptically to each of two sterile Wheaton glass vials containing a mixture of 359.8 µL of Phosphate Buffered Saline and 32 µL of Alhydrogel® 2%.Mixing was done using a Rotamix Rotator at 16–24 rpm for 60 min, at room temperature.The final formulation was then stored at 2–8 °C for approximately 24 h prior to use for immunization of the mice.Mice were infected with H. 
polygyrus bakeri as described above at day −28. After 28 days of infection with Hpb, ten BALB/c mice were immunized intramuscularly in the leg with 1 µg of Pfs230D1-EPA/Alhydrogel® in 50 µL of PBS using a "hubless" syringe. The vaccine immunization day was considered day 0. The second dose was given 28 days after the first vaccine dose. On day 35, 7 days after the second dose, all mice were euthanized. Experiments were performed in two independent sets of infection and immunization comprising 5 mice per group. Retro-orbital blood was collected from mice at days 0, 15 and 25 of the study. Stool samples were collected at days −25, 25 and 35. On day 35, mice were euthanized, exsanguinated, and spleens were removed for isolation of B cells. Antibody responses to Pfs230D1 were measured using an enzyme-linked immunosorbent assay. Immulon® 4HBX plates were coated with 1 µg/well of recombinant Pfs230D1. Plates were incubated overnight at 4 °C and blocked with 320 µL of buffer containing 5% skim milk powder in Tris-buffered saline for 2 h at RT. Plates were washed with Tween-TBS. Samples were added to antigen-coated wells in triplicate and incubated for 2 h at RT. Plates were washed, then 100 µL of alkaline phosphatase-labelled goat anti-mouse IgG, IgG1, IgG2a or IgG3 were added and incubated for 2 h at RT. The plates were washed and a colorimetric substrate was added. Plates were read at absorbances of 450 nm and 550 nm on a multi-well reader. Antibody responses to Hpb were measured by ELISA. Immulon® 4HBX plates were coated with 1 µg/well of Hpb adult worm excretory/secretory antigen. Plates were incubated overnight at 4 °C and, after washing, blocked with 200 µL of PBS-BSA 5% for 1 h at RT. Plates were washed six times with Tween-TBS. 50 µL of mouse serum were added to antigen-coated wells in duplicate and incubated for 1 h at RT. Plates were washed and then incubated separately with biotinylated rat anti-mouse IgG1, IgG2, IgG3, IgE and IgA for 1 h at RT. Plates were washed and incubated with 50 µL of streptavidin conjugated with HRP for 30 min at RT. After washing, TMB substrate was added and incubated for ten minutes in the dark. 25 µL of 2 N H2SO4 was used to stop the reaction and plates were read at absorbances of 450 nm and 550 nm on a plate reader. Twenty-five microliters of each mouse serum were used to determine levels of IL-5, IL-6, IL-10, IL-13, TNF-α and IFN-γ with magnetic beads using a Luminex multiplex assay based on the manufacturer's recommendations. For isolation of Pfs230D1-specific B cells from splenocytes, we developed a biotin-streptavidin tetramer prepared with Pfs230D1 recombinant protein expressed in Pichia pastoris. Tetramer preparation was performed as previously described. Briefly, Pfs230D1 protein was biotinylated and bound to streptavidin previously conjugated with the fluorochrome PE. A decoy tetramer was generated using BSA and the fluorochromes PE and DL-594 to reduce nonspecific binding. Splenocytes were isolated and incubated with 1 µM decoy tetramer and PBS containing 10% FBS and Fc Block (to inhibit nonspecific binding by Fc receptor-expressing cells) for 5 min at room temperature in the dark. Then, Pfs230D1-PE tetramer at 1 µM was added to the tube and incubated at 4 °C protected from light for 20 min. Cells were washed with PBS containing 10% FBS and incubated with anti-PE magnetic beads for 20 min. Four mL of PBS were added, and the suspension was passed over magnetized LS columns for elution of Pfs230D1-PE-specific cells. After enrichment with PE and Decoy Tetramers, splenocytes
were stained with Zombie Violet live/dead, Alexa Fluor 700 anti-Gr-1, Alexa Fluor 700 anti-CD3, Alexa Fluor 700 anti-F4/80, APC-Cy7 anti-B220, PE-Cy7 CD19 and PercP/Cy5.5 anti GL7 conjugated antibodies from Biolegend.Pfs230D1-specific B cells were gated and non-Pfs230D1 B cells were excluded.Cells were analyzed using LSR II cytometer and analysed using FlowJo V.10.Transmission reducing activity, determined by the reduction of P. falciparum oocyst burden in the mosquito midgut, was evaluated using standard membrane feeding assay .Briefly, an in vitro 15-day culture of stage V P. falciparum gametocytes was diluted with washed O+ RBCs and an AB+ serum pool from malaria-naïve volunteers to achieve 0.07–0.9% concentration of Stage V gametocytes and a 50% hematocrit.For each sample, 200 μL of diluted culture was mixed with 60 μL of test sample and immediately fed to pre-starved 3–8-day-old Anopheles stephensi mosquitoes using a Parafilm® membrane in a mosquito feeder, kept warm with a jacket with 40 °C circulating water.After feeding, mosquitoes were kept at 26 °C and 80% humidity conditions to allow parasites to develop.On Day 8 after the feed, mosquito midguts were dissected and stained with 0.05–0.1% mercurochrome solution in water for at least 20 min.Infectivity was measured by counting oocysts in at least 20 mosquitoes per sample.Pre-vaccination serum pools from the same mice was used as negative control.Each sample was tested in two independent SMFAs, and the two TRA values were averaged to obtain a single subject level TRA for a given time point.Analyses were performed using data from two independent experiments.Statistical analyses for Pfs230D1 IgG measurement, specific B cells quantification by flow cytometry, and SMFA were performed by One-way ANOVA and corrected for multiple comparisons."Data from cytokine levels and Hpb IgG measurements were analysed using the Kruskal-Wallis test followed by Dunn's multiple comparisons test.Parasite quantification in the stool samples was analysed by the Mann-Whitney test.To investigate if chronic helminth infection could impair the immunogenicity and/or efficacy of a malaria TBV, we infected BALB/c mice with Hpb L3 by gavage and 28 days later immunized them intramuscularly with two doses of Pfs230D1-EPA/Alhydrogel®.We assessed the parasite burden and chronicity of the helminth infection by quantification of the Hpb eggs in the stool at day 25, day 53 and day 63 post-infection.We also assayed Hpb-specific antibody production at day 63 post-infection.At all 3 time points, all animals from group 2 and group 4 were infected with similar egg counts.Before the euthanasia at day 63dpi the parasite burden in both infected groups were the same.Similarly, levels of Hpb-specific antibodies did not differ between the 2 Hpb-infected groups of mice.The Hpb-specific antibody response for the 2 Hpb-infected groups was characterized by a highly significant increase in the IgG1 but not IgG2a levels at day 63 post-infection, when compared with group 1 and group 3.No significant differences for IgG3 were seen among the four groups and similar increases in IgA and IgE isotypes were observed in groups 2 and 4 compared to group 1 and 3.We next measured levels of Pfs230D1-specific IgG in all groups of mice.Pfs230D1 antibody levels were similar between infected and uninfected mice at day 15 and at day 25 post-vaccination.One week after the second vaccination, Pfs230D1 IgG titers were slightly higher in sera from Hpb-infected mice, but the difference was not statistically 
significant. Pfs230D1-specific IgG titers were not detected in sera from unvaccinated mice. Pfs230D1-specific IgG1, IgG2a and IgG3 levels were measured at day 35. IgG1 levels were similar between groups that were Hpb-infected and vaccinated and those only receiving the TBV. IgG2a and IgG3 levels did not differ between immunized mice and the naïve group, nor between Hpb-infected/vaccinated mice and those only vaccinated. Overall, these results indicate that chronic Hpb infection does not impact the IgG response to Pfs230D1 immunization. At day 63 post inoculation, Hpb infection induced a marked increase in IL-10 levels in the sera of group 2 compared to the uninfected animals or the uninfected but Pfs230-vaccinated animals. Interestingly, IL-10 levels did not increase in the animals of group 4, as compared with group 2. Serum levels of IL-6 increased in both Hpb-infected group 2 and group 4 when compared to group 1. Levels of TNF-α, IFN-γ, IL-5 and IL-13 did not differ significantly. To further characterize humoral responses and investigate whether Hpb infection affects vaccine-induced B cell generation, Pfs230D1-specific B cells were enumerated at day 35, seven days after the second immunization. The frequency of Pfs230D1-specific B cells was not statistically different between Hpb-infected and -uninfected groups that received the vaccine. Supplementary data associated with this article can be found, in the online version, at https://doi.org/10.1016/j.vaccine.2019.01.027. SMFA was performed to evaluate transmission-reducing activity, defined as a reduction in the number of oocysts per mosquito (illustrated in the sketch below). Sera from mice that received only Pfs230D1 conferred 36.1% TRA while sera from Hpb-infected and immunized mice conferred 32.8% TRA, not significantly different from each other but both significantly greater than sera from naïve mice. Sera from Hpb-infected unvaccinated mice reduced oocyst transmission by 22%, which was not significantly different from the naïve group. In the current study, we evaluated whether chronic intestinal helminth infection alters the activity of a promising malaria transmission blocking vaccine. Among the many helminth parasites that elicit immunomodulatory responses, one of the best-studied models is infection of mice with the intestinal nematode Heligmosomoides polygyrus bakeri. We infected 6-week-old BALB/c mice with the intestinal nematode H.
polygyrus bakeri, and immunized the mice starting 28 days later with Pfs230D1-EPA/Alhydrogel®, a vaccine aimed at reducing transmission of malaria, currently in clinical trials in malaria-endemic areas.Antibody responses to Hpb and the helminth burden in stool were used to assess the establishment of chronic infection.Hpb-infected mice demonstrated marked increase of helminth antigen-specific IgG1, IgA and IgE but not IgG2a antibody responses after 9 weeks of infection, concomitant with an increase of systemic IL-10 and IL-6 levels.This persistent parasite-specific type-2 immune response is normally associated with establishment of chronic infection in primary challenges with Hpb .Vaccine immunogenicity was characterized by production of Pfs230D1-specific activated B cells in the spleen and by antibody titers to Pfs230D1, while vaccine functional activity was assessed by SMFA.Numbers of Pfs230D1-specific B cells, antibody titers and TRA were all significantly higher in vaccinated compared to unvaccinated animals, demonstrating vaccine immunogenicity.Most relevant to this study, production of antigen-specific B cells, antibody titers against Pfs230D1, and TRA did not differ between helminth-infected and uninfected animals.This study used a submaximal vaccine dosage in BALB/c mice based on dose ranging studies conducted in other mouse strains, with the expectation that a submaximal response would be more sensitive to immunosuppression.Of note, this vaccine regimen yielded lower TRA levels in BALB/c mice versus levels we have seen for other mouse strains.These differences could be due to strain-specific differences in the immune response, or to the timing of blood sampling.Nevertheless, our data demonstrate that the antibody activity induced by Pfs230D1-EPA was not impaired by Hpb infection, with similar TRA achieved in Hpb-infected and uninfected mice.Our data collectively demonstrate that the immunogenicity and functional activity of Pfs230D1-EPA/Alhydrogel® vaccine were not impaired by Hpb infection in BALB/c mice.This lends support to the notion that this protein-based malaria transmission blocking vaccine could be implemented in malaria-endemic areas where helminth co-infection is common.CHC, JL and PHGG conceived the study.CHC, PHGG, EB, JL, JH, JFU, CA, TBN and PED designed the experiments.CHC, PHGG, JH, EB, NAHA, OM, AM, EK performed experiments.CHC, PHGG, JH, NAHA, OM analyzed data.CHC, PHGG, EB, JH, CA, NAHA, DN, AM, OM, JFU, JL, TBN and PED interpreted the data and wrote the manuscript.The authors declare no competing interests.
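As a rough illustration of the SMFA read-out described in the Methods, the sketch below computes transmission-reducing activity as the percent reduction of the mean oocyst count relative to the control feed and then averages two independent SMFA runs, as stated above. The oocyst counts are simulated placeholders, not data from this study, and the simple percent-reduction formula is the commonly used definition rather than the authors' exact statistical model.

```python
import numpy as np

def tra(test_oocysts, control_oocysts):
    """Transmission-reducing activity: percent reduction of the mean
    oocyst count per mosquito relative to the control feed."""
    return 100.0 * (1.0 - np.mean(test_oocysts) / np.mean(control_oocysts))

# Hypothetical oocyst counts from >= 20 dissected mosquitoes per feed.
rng = np.random.default_rng(0)
control_feed_1 = rng.poisson(20, 25)
vaccinated_feed_1 = rng.poisson(13, 25)
control_feed_2 = rng.poisson(18, 25)
vaccinated_feed_2 = rng.poisson(12, 25)

# Each sample is tested in two independent SMFAs; the two TRA values are
# averaged to obtain a single subject-level TRA.
tra_run1 = tra(vaccinated_feed_1, control_feed_1)
tra_run2 = tra(vaccinated_feed_2, control_feed_2)
print(f"TRA run 1: {tra_run1:.1f}%, run 2: {tra_run2:.1f}%, "
      f"mean: {np.mean([tra_run1, tra_run2]):.1f}%")
```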
Introduction: Malaria transmission blocking vaccines (TBV) are innovative approaches that aim to induce immunity in humans against Plasmodium during its mosquito stage, neutralizing the capacity of infected vectors to transmit malaria. Pfs230D1-EPA/Alhydrogel®, a promising protein-protein conjugate malaria TBV, is currently being tested in human clinical trials in areas where P. falciparum malaria is coendemic with helminth parasites. Helminths are complex metazoans that share the capacity to downregulate the host immune response towards themselves and also towards bystander antigens, including vaccines. However, it is not known whether the activity of a protein-based malaria TBV may be affected by a chronic helminth infection. Methods: Using an experimental murine model of chronic helminth infection (Heligmosomoides polygyrus bakeri - Hpb), we evaluated whether prior infection alters the activity of the Pfs230D1-EPA/Alhydrogel® TBV in mice. Results: After establishment of a chronic infection, characterized by a marked increase of parasite antigen-specific IgG1, IgA and IgE antibody responses, concomitant with an increase of systemic IL-10, IL-5 and IL-6 levels, the Hpb-infected mice were immunized with Pfs230D1-EPA/Alhydrogel® and the vaccine-specific immune response was compared with that in non-infected immunized mice. TBV immunizations induced an elevated vaccine-specific antibody response; however, Pfs230D1-specific IgG levels were similar between infected and uninfected mice at days 15, 25 and 35 post-vaccination. Absolute numbers of Pfs230D1-activated B cells generated in response to the vaccine were also similar among the vaccinated groups. Finally, vaccine activity, assessed by the reduction of oocyst numbers in P. falciparum-infected mosquitoes, was similar between Hpb-infected immunized mice and non-infected immunized mice. Conclusion: Pfs230D1-EPA/Alhydrogel® efficacy is not impaired by a chronic helminth infection in mice.
763
The effect of visual and auditory elements on patrons' liquor-ordering behavior: An empirical study
Beginning with Kotler's study, the effect of atmospheric elements has been widely addressed in academic research. Atmospheric elements such as music, lighting, color, and aroma are closely related to sensory marketing, which engages "the consumers' senses and affects their perception, judgment and behavior". Many retail companies and restaurants try to utilize these factors to achieve their goals by enriching the consumer experience. The atmospheric factors of eating situations in particular may be defined as "interior design for food," which uses elements such as food preparation and the dynamics of the eating experience "to design the right light, temperature and colors in the eating environment". In academia, several prior studies focus on the effect of a specific atmospheric element, mostly color or music (North et al., 1997; Roballey, 1985). Nevertheless, examining atmospheric factors from a holistic perspective is critical, as consumers perceive the service environment collectively. Thus, the congruency and harmony of the atmospheric elements are important to enhancing patrons' restaurant or store experience. However, only a few studies about the effect of congruency exist in the retail and hospitality fields, and most of them focus on the congruency of a single atmospheric element. To increase the external validity of the experiment, two or more elements should be considered. Therefore, the current work addresses not only auditory congruency but also visual congruency with a specific product. The definition of congruency in this study is adapted and modified from Demoulin's study: the extent to which consumers' subjective perceptions of auditory and visual congruency influence their perception of a specific product. The present study focuses on visual and auditory congruency with liquor consumption by conducting an experiment in a bar. Most prior studies related to liquor consumption were conducted from a healthcare and alcohol-abuse perspective; therefore, studies from the marketing perspective are needed. Moreover, a bar is known to use atmospherics intensively, and sensory aspects are particularly important for wine among the different types of liquor; thus, designing atmospherics for wine consumption in a bar is relatively critical. To investigate the effect of congruent atmospherics on patrons' behavior, the wine expenditure ratio and liquor choice were set as dependent variables. Data in the form of receipts were collected for four weeks from two branches of an operating bar. Logistic regression and linear regression were used to examine the effect of visual and auditory congruency (an illustrative analysis sketch is provided at the end of this article). Many atmospherics-related studies use the stimulus–organism–response model created by Mehrabian and Russell to explain how atmospherics cause consumers' evaluation and behavioral responses. In their model, atmospherics are a stimulus that causes consumers' evaluation and behavioral responses. Kotler divides atmospheric stimuli into four senses: visual, aural, olfactory, and tactile. We chose to use visual and auditory cues in our study, as they may be modified easily and at a low cost. Background music has been recognized as a critical factor influencing the mood of a store; it is easy to control and costs less than other marketing tools designed to create mood. Numerous previous studies on background music have focused on consumers' behavior by controlling the tempo and dynamics of the background music. Smith and Curnow found that consumers spend less time shopping when listening to fast background music than when the music is slow. Moreover, some
studies found that the tempo of background music affects the speed of consumers’ movements as well as the amount of time consumers spend in a restaurant, the amount of time they spend eating, and the amount of time they spend drinking.In addition, the interaction between the tempo and volume of the background music affects consumers’ perceptions of service quality and satisfaction with their experiences.Some prior studies examined the effect of auditory congruency in various situations."Jacob et al. carried out an experiment in a florist's shop to determine the increase in consumers’ expenditure amount with romantic music compared to pop music and no music.Some studies investigated congruency between type of music and wine.North et al. examined the nationality congruency effect between background music and selection of wine origin.French wine selection increased with French background music, while the rate of German wine selection increased with German background music.Areni and Kim also found an increase in consumers’ expenditure amount when classical music was played in a wine retail shop.To broaden the prior discussion, the current study examines the congruency between background music and type of liquor in a hospitality situation.In the current study, we expect that there will be an increase in expenditure and selection congruent with the type of liquor: wine.When music that is congruent with wine is played, the ratio of wine expenditure to the total expenditure of the table will increase.When music that is congruent with the wine is played, the selection of wine as opposed to other types of liquor will increase.Visual cues, such as the colors and theme of the environment, have been addressed in previous studies."According to Stroebele and De Castro's review paper, colors are one of the most powerful marketing tools available, as they create emotional responses and direct attention to specific items or areas.The color of both the food and the environment in which it is served, including the color of the furniture, tableware, and dishware, are known to elicit certain sensations.Moreover, choosing an interior theme based on a specific culture influences consumers’ food selection behavior.Bell et al. 
decorated a restaurant with Italian visual cues, and consumers selected more Italian dishes than English dishes.In the current study, the matched culture with type of liquor was explored first; then, the effect of visual congruence with type of liquor selection was addressed.When a visual that is congruent with wine is displayed, the ratio of wine expenditure to the total expenditure of the table will increase.When a visual that is congruent with wine is displayed, the selection of wine as opposed to other types of liquor will increase.A field experiment was conducted at two different branches of a bar franchise for a total of four weeks.This method was used to increase the external validity, which is known to be useful in fine-tuning managerial strategies and decisions.Two branches, which have similar locational conditions, were selected to control the branch effect: thus, one branch is an experimental branch, and the other one is a control branch.At this bar, liquor products, some snacks, and meals are sold, but wines are their main product.All the receipt data were collected from customers who visited the bars from Monday through Thursday while the bars were open.As the number of patrons increases on Fridays and weekends, each branch adds more space outside of the store where background music cannot be controlled; thus, we excluded the data from Fridays and weekends.All of the mood factors except for background music and paper placemats remained unchanged.The pre-test was conducted to investigate Korean consumers’ awareness of wine-related countries.We limited the age of respondents to those in their twenties and thirties.About 79% of respondents stated that France was the country they most associated with wine and related products.Chile, Italy, Spain, the US, Australia, and the Republic of South Africa followed France.Therefore, French-related auditory and visual cues were selected as the congruent cues to go with the wine product.For the congruent auditory cues, 88 French songs that are generally recognizable as typical French music in Korea and 94 Korean pop songs were selected for the congruent and incongruent conditions, respectively.To control the effect of tempo, a beats per minute analysis program was used to check the BPM for each song.We excluded some songs that were far from the desired range and adjusted the French and Korean pop songs according to the following equation:Regarding the congruent visual cues, pictures of the Eiffel Tower, the French flag, and cheese were used, as these images were the most frequent results when “France” was searched using a Korean search engine.Paper placemats printed with these French images were used in the congruent condition, and placemats with no image were used in the incongruent condition.In the experimental branch, only auditory congruency was implemented during the first week, and only visual congruency was implemented during the second week.During the third week, both auditory and visual congruency were implemented.Lastly, only incongruent cues were implemented during the fourth week.In the control branch, only incongruent cues were used throughout the experimental period to control the branch effect.The branch variable was used to control the effect of the cues.The interaction variable –the auditory and visual cues – was left out because it had no significant effect.To examine the effect of seasonality, an ANOVA and chi-square analysis were performed on the dependent variables among weeks or days of the week.As there were no 
significant differences, no effect of seasonality was posited. When the French visual cue was presented, the ratio of wine expenditure to the total expenditure of a table increased by 6.2%. The results of the t-test examining the price per bottle of wine showed that this phenomenon was due to consumers' tendency to purchase higher-priced wine when a visual cue was present. On the other hand, there was no significant difference in the price per bottle of other types of liquor. Auditory congruency had a significant effect on the selection of liquor type. When music that was congruent with the wine was offered, the probability of a consumer ordering a bottle of wine was 1.86 times higher than the probability that a consumer would order a different type of liquor. Several results can be inferred from the findings. First, when presented with a congruent visual cue, patrons' wine expenditure ratio increased. We found that this result was due to patrons' tendency to purchase higher-priced wine rather than consuming more bottles of wine. Second, when presented with a congruent auditory cue, the probability of ordering wine increased. Therefore, patrons' choice to purchase wine products from liquor lists is affected by congruent auditory cues, while the overall expenditure on the wine – particularly the price per bottle – increases when appropriate visual cues for wine are presented. This result supports prior scientific research on visual and auditory sensitivity. According to Ng and Chan's study, the response time to auditory stimuli is shorter than that to visual stimuli; thus, patrons have a tendency to respond to congruent auditory cues first, choosing which liquor to consume. After the auditory stimuli, congruent visual cues may affect patrons' decision about the expenditure size of their wine consumption. The results of this study have the potential to contribute to both academic research and the practical field. First, this study investigated the effect of circumstantial cues on purchasing decisions at a group level rather than an individual level by collecting the receipt data of each table. To study patrons' behavior in a bar, group-level decisions should be considered, as one of the reasons to visit bars is to engage in social interaction with other patrons. Moreover, group decisions are critical in investigating wine consumption, as bottles of wine are usually shared by the whole table. Wine is often sold by the bottle, rarely by the glass, in Korea. Second, this study aimed to increase the external validity by using a field experiment method and collecting receipt data. Previously, eating behaviors were mostly studied in controlled laboratory settings or in a natural environment with self-report surveys. The current study was conducted in a natural environment with observed data, receipts, which may increase the external validity of the experiment. For bar and restaurant managers, appropriately designing atmospherics is important for their sales. Many restaurants tend to prefer to sell specific products to achieve "economies of scale." The bar where we conducted the experiment focuses on selling wines, which means they prefer to sell wines to increase their profit margins. Utilizing background music and visual aids that are appropriate for the main product of the restaurant may help to increase consumers' expenditure and to influence patrons' choice. Several limitations of this study that may impede its generalizability are as follows. There are several unobserved demographic variables—the receipt data did not
include patrons’ demographics or the number of patrons at each table.Although we used the ratio of expenditure on wine as a dependent variable, future studies may include individual- or group-level variables.Observation methods may be useful in collecting these data.Other than the unobserved variables, uncontrolled mood elements may be another limitation.We only investigated two out of the four atmospheric stimuli.This study can serve as a starting point for research regarding the interaction of other atmospheric elements.Future liquor-related studies should explore the effect of the olfactory and tactile dimensions, as eating and drinking behavior are closely related to the five senses of human beings.
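The receipt-level analyses reported above (logistic regression for the wine-versus-other-liquor choice and linear regression for the wine expenditure ratio, with a branch control variable) could be set up along the following lines. This is an illustrative sketch rather than the authors' code: the column names and the simulated data are hypothetical, and statsmodels is assumed as the estimation library.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical receipt-level data: one row per table/receipt.
rng = np.random.default_rng(42)
n = 650
df = pd.DataFrame({
    "auditory_cue": rng.integers(0, 2, n),   # 1 = French music played
    "visual_cue": rng.integers(0, 2, n),     # 1 = French placemat displayed
    "branch": rng.integers(0, 2, n),         # experimental vs control branch
})
df["wine_ordered"] = (rng.random(n) < 0.4 + 0.1 * df["auditory_cue"]).astype(int)
df["wine_ratio"] = np.clip(0.3 + 0.06 * df["visual_cue"]
                           + rng.normal(0, 0.15, n), 0, 1)

# Liquor choice: logistic regression with cue and branch indicators.
choice_model = smf.logit(
    "wine_ordered ~ auditory_cue + visual_cue + branch", data=df).fit(disp=0)
print(np.exp(choice_model.params["auditory_cue"]))  # odds ratio for the music cue

# Wine expenditure ratio: ordinary least squares with the same predictors.
ratio_model = smf.ols(
    "wine_ratio ~ auditory_cue + visual_cue + branch", data=df).fit()
print(ratio_model.params["visual_cue"])  # estimated shift in the expenditure ratio
```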
This study aims to empirically investigate the effect of music and visual congruency with a type of liquor in a bar. Specifically, it examines the effect of atmospheric elements' congruency with wine on patrons' liquor-ordering behavior and their expenditures. As most Koreans associate wine with French culture, French auditory and visual cues were used to investigate the effect of congruency. A total of 650 receipts from two different branches of a bar were collected for four weeks. The study found that in the visual congruence condition, the ratio of wine expenditure to the total expenditure of a table and the ratio of liquor expenditure to the total expenditure of a table increased. When the auditory congruence condition was implemented, the probability of ordering wine increased. There was no interaction effect. Implications based on the findings are discussed in the final section of the study.
764
Physiological aspects of nitro drug resistance in Giardia lamblia
Giardia lamblia, a flagellated, amitochondrial, binucleated protozoan, is the most common causative agent of persistent diarrhea worldwide. Giardiasis is commonly treated with metronidazole, other 5-nitroimidazole compounds, or nitazoxanide, with albendazole as an alternative in the case of resistance to nitro drugs. Moreover, G. lamblia is susceptible to a variety of antibiotics because of its prokaryote-like transcription and translation machineries. According to a commonly accepted model, nitro compounds are activated by reduction yielding toxic intermediates, the electrons being provided by pyruvate oxidoreductase (POR). The reduced nitro compound then binds covalently to DNA and results in DNA breakage and cell death. Resistance formation to nitro compounds is, however, readily detected both in vitro and in vivo. Studies with metronidazole-resistant strains have revealed, however, that resistance is not always correlated with reduced POR activity; thus, mechanisms of action independent of POR activity may exist. In accordance with the prevailing model for the mode of action of nitro drugs, one would hypothesize that resistant trophozoites have decreased activities of nitroreductases, and that this decrease is due to lower expression levels of the corresponding genes. To verify this hypothesis, freshly obtained, resistant patient isolates would be optimal, but they are difficult to maintain in axenic culture. Therefore, most studies compare resistant "model" strains generated in vitro with isogenic wild-type strains. These studies have revealed genome rearrangements and profound transcriptional changes, evidenced by differential analyses using microarrays followed by quantitative RT-PCR on selected transcripts, and by strand-specific RNA sequencing. In both studies, expression profiles of genes coding for variant surface proteins and for genes involved in oxido-reductions – amongst others – are altered, the latter allegedly confirming this hypothesis. These studies on transcriptional changes do not reveal, however, the alterations that occur with respect to the cellular physiology of the resistant lines. Questions such as whether these lines have reduced reductase activities only with nitro drugs or also with other compounds as electron acceptors, and whether they have different pool sizes or ratios of electron- and energy-providing cofactors, need to be addressed. In this study, we document the physiological changes during resistance formation to nitro drugs in G. lamblia, comparing a nitro drug-resistant strain, namely the previously introduced strain C4, and its corresponding wild-type with respect to their ultrastructure, whole-cell activities such as oxygen consumption and resazurin reduction, functional assays, and the pool sizes and ratios of cofactors involved in reductive processes. If not otherwise stated, all biochemical reagents were from Sigma. Nitazoxanide (NTZ) was synthesized at the Department of Chemistry and Biochemistry, University of Bern, Switzerland. NBDHEX was synthesized at the Department of Sciences and Chemical Technologies, University of Rome and kindly provided by M.
Lalle.Albendazole, NTZ, metronidazole, and NBDHEX were kept as 100 mM stock solutions in DMSO at −20 °C.Protein contents of cell-free extracts were determined by the Bradford method using a commercial kit.For the normalization of whole-cell-assays, the trophozoites were lysed in PBS containing 0.05% Triton-X-100."Student's t-tests were performed using the software package R.Differences of the mean values with p < 0.01 were regarded as statistically significant.Trophozoites from G. lamblia WB clone C6 wild-type and of the NTZ/MET resistant clone C4 were grown under anaerobic conditions in 10 ml culture tubes containing modified TYI-S-33 medium as previously described.C4 was routinely cultured in the presence of 50 μM NTZ.Subcultures were performed by inoculating 20 μl or 100 μl of cells from a confluent culture detached by cooling to a new tube containing the appropriate medium.For all experiments comparing wild-type to C4 trophozoites, the medium from confluent cultures was removed one day before the harvest and replaced with fresh medium without NTZ.Trophozoites were detached by incubation on ice for 15 min followed by centrifugation.Pellets were washed twice with PBS and either stored at −20 °C or used directly.For all growth studies, G. lamblia WBC6 wild-type and the MET- and NTZ-resistant strain C4 were inoculated into culture tubes.To determine the respective growth curves, WT and C4 trophozoites were grown with 50 μM NTZ or with equal amounts of DMSO as a solvent control.At various time points, adhering cells were counted in a Neubauer chamber.To determine minimal inhibitory concentrations, WT and C4 trophozoites were inoculated in the presence of increasing amounts of the nitro compounds MET, NTZ or NBDHEX, and of ALB as a control.The tubes were incubated at 37 °C for 4 days.The MIC was determined by observing the wells under the microscope starting from higher to lower concentrations.The concentration at which the first living trophozoites were visible is given as the MIC.For scanning or transmission electron microscopy, trophozoites were harvested as described above and processed as described earlier, with the sole exception that UranyLess EM Stain was used instead of uranyl acetate.For quantification of expression of characterized proteins by real-time PCR after reverse transcription, trophozoites were grown and harvested as described above.RNA was extracted using the QIAgen RNeasy kit digestion according to the instructions by the manufacturer.RNA was eluted with RNase-free water and stored at −80 °C.First-strand cDNA was synthesized using the QIAgen OmniscriptRT kit.After quantitative RT-PCR, expression levels were given as relative values in arbitrary units relative to the amount of actin.Quantitative RT-PCR was performed as described using the primers listed in Table 1.Oxygen consumption and extracellular acidification rates were simultaneously determined using a Seahorse XFp device.For each assay, WT or C4 trophozoites were harvested as described and suspended in PBS, and the suspension was added to XFp cell culture miniplates containing 150 μl of a sterile NaCl 0.9% solution.Plates were centrifuged in order to ensure adhesion of the trophozoites.Then, the measurements were performed according to the instructions provided by the manufacturer.During the internal calibration of the XFp extracellular flux cartridge, the miniplates containing the trophozoites were incubated at 37 °C and then transferred into the device.OCR and ECAR rates were determined by averaging the rates 
obtained between 6 and 30 min after the start of the analysis and normalized to the protein contents of the cells.To determine initial resazurin reduction rates, WT or C4 trophozoites were suspended in PBS or PBS containing 0.2% glucose.0.1 ml of this suspension were added to 96-well-plates.The assay was started by adding 0.1 ml of resazurin in PBS and the reduction of resazurin was quantified at 37 °C by fluorimetry with excitation at 530 nm and emission at 590 nm using a 96-well-multimode plate reader.Extracts were prepared from frozen pellets suspended in assay buffer containing 0.5% Triton-X-100 and 1 mM phenyl-methyl-sulfonyl-fluoride.Nitroreductase activity was determined by measuring the formation of 7-amino-coumarin.The assay buffer contained 7-nitrocoumarin as a substrate and NADH or NADPH as electron donors.The reaction was started by addition of the electron donor.Pyruvate oxidoreductase assays were performed in potassium phosphate containing sodium pyruvate, coenzyme A, MgCl2, and thiaminpyrophosphate as described with the sole exception that thiazolyl blue tetrazolium chloride was used as final electron acceptor instead of benzyl viologen.Ornithine-carbamyl-transferase was assayed in the direction of citrulline formation and citrulline was quantified as described.This assay was slightly modified for the determination of citrulline by adding convenient amounts of cell-free extracts directly to the stop and colour development solution.NAD and NADP contents were determined using commercial kits according to the instructions provided by the manufacturer.FAD was determined using a commercial kit according to the instructions provided by the manufacturer.The ADP/ATP-ratio was determined using a commercial kit according to the instructions provided by the manufacturer.For all assays, trophozoites were harvested as described, counted and freshly processed using the respective extraction buffers provided in the kits.The extraction buffers of all kits contained detergents and ensured an instaneous and >95% lysis of the trophozoites.The assays were run in quadruplicates in 96-well-plates containing the equivalent of 104 cells per well.The mean values and standard errors of three independent assays normalized to the protein contents of the cells are shown.In order to illustrate the resistance of the G. 
lamblia strain C4 derived from the wild-type WBC6, we determined the minimal inhibitory concentrations of three nitro compounds, namely MET, NTZ and NBDHEX, on both strains. Whereas all three compounds inhibited the wild-type clone at MICs in the 10-μM range, none of them inhibited strain C4, which remained unaffected even at 100 μM, the highest concentration used in this test. Conversely, the MIC for the benzimidazole ALB – a non-nitro drug – was similar in both strains. In the absence of drugs, C4 trophozoites proliferated almost as rapidly as WT trophozoites, reaching confluence 4 d post inoculation. In the presence of 50 μM NTZ, i.e. in the medium used to maintain strain C4, the proliferation of resistant trophozoites was slower, and confluence was reached approximately one week post inoculation. Upon subsequent passages on drug-free medium, the resistance slowly declined but was nevertheless maintained, as already published. Both wild-type and strain C4 trophozoites were fixed and processed for SEM; inspection of the specimens revealed no morphological differences, neither on the ventral disc nor on the dorsal surface of the trophozoites, nor did the sizes of the trophozoites differ. The sizes, as measured from SEM micrographs (shown in Fig. 2), were 13.2 ± 0.5 μm for the long axis and 7.9 ± 0.2 μm for the short axis in WT trophozoites, and 12.9 ± 0.4 μm and 8.6 ± 0.5 μm, respectively, in C4 trophozoites. TEM likewise did not indicate dramatic differences between the two strains. All characteristic features of trophozoites, including the ventral disc and the axonemes located between the two nuclei, appeared structurally unaltered. However, in approximately 10% of strain C4 trophozoites, cytoplasmic vacuolization could be observed, often in combination with a less electron-dense cytoplasm; this feature was virtually absent in WT trophozoites. In order to see whether the results obtained on nitro reduction can be extended to metabolic processes involving other electron acceptors, we investigated oxygen consumption and resazurin reduction, both assays performed on intact cells. Oxygen consumption rates were significantly lower in C4 trophozoites, reaching ca.
50% of the wild-type levels.Conversely, extracellular acidification rates were similar in both strains.Similar observations could be made by offering resazurin as an electron acceptor.C4 trophozoites had lower resazurin reduction rates than WT trophozoites.For both strains, the rates were increased in the presence of glucose.In a next step, we investigated the mRNA levels of a panel of selected genes, including the gene coding for nitroreductase GlNR1.No differences in transcription levels could be detected between strain C4 and WT trophozoites, with the exception of GlNR1 mRNA, whose levels were significantly lower in C4 trophozoites than in WT trophozoites, thus confirming previous results.The mRNA levels of other genes involved in nitro reduction including both POR isoforms were the same in both strains.According to the current knowledge on the mode of action of nitro drugs, reduction of nitro groups to more toxic intermediates should be impaired in resistant strains as compared to wildtype strains.To verify this hypothesis, we measured 7-nitrocoumarin reductase activity in total cell extracts of WT and C4 trophozoites using either NADH or NADPH as electron donors.As controls, we determined pyruvate-oxidoreductase activity and – as a not-oxidoreductase control - ornithine carbamyl transferase.POR activity did not significantly differ in extracts from both lines.In contrast, nitroreductase activity was markedly reduced in extracts of C4 trophozoites, reaching only ca. 20% of the activity level in WT extracts, regardless which electron donor had been offered.Interestingly, the second enzyme activity that we included as a control, namely OCT, was significantly lower in C4 extracts compared to extracts of WT trophozoites.This observation prompted us to investigate the levels of citrulline, the product or educt of OCT.C4 trophozoites, contained less citrulline, namely 2.7 ± 0.3 nmol/mg protein, compared to 8.3 ± 1.0 nmol/mg protein in WT trophozoites.Since – except for GlNR1 – the expression levels for other known or alleged nitro-reducing enzymes were similar in both strains, it was of interest to determine whether the pool size of the prosthetic group responsible for electron transfer to nitro groups, to oxygen and to xenobiotics with a similar redox potential, FAD, was altered in the resistant strain.In-terestingly, the FAD level in C4 trophozoites amounted to only about 50% of the level found in WT trophozoites.In a next step, we investigated the pool sizes and ratios of the nicotinamide-dinucleotide co-factors involved in electron transfer, and of the ADP/ATP-ratio as a marker for the energy status.NAD was by far more abundant in trophozoites than NADP and exhibited a higher degree of variation between independent preparations.The levels of NAD and NADH did not significantly differ between WT and C4 trophozoites, the NADP and NADPH levels were, however, significantly reduced in C4 trophozoites.The ratios of NAD versus NADH in trophozoites of both strains balanced strongly in favour of NAD, and were slightly, but not significantly, higher in WT than in C4 trophozoites.In contrast, the NADPH/NADP ratios were, however, closer to one and significantly increased in C4 trophozoites.In WT trophozoites, the ADP/ATP-ratio was close to one and significantly increased in C4 trophozoites.The absolute ATP contents were 24.1 ± 3.1 nmol/mg protein in WT versus 21.8 ± 1.9 nmol/mg protein in C4 trophozoites.The absolute ADP contents were 25.0 ± 5.7 nmol/mg protein in WT vs 29.8 ± 4.2 nmol/mg protein in 
C4 trophozoites.The differences were not significantly different.In the present study, we have investigated physiological aspects of resistance formation in G. lamblia using the nitro drug-resistant strain C4 and its isogenic wild-type WBC6 as a “model system”.Trophozoites of the two strains did not differ markedly with respect to cell shape and ultrastructural characteristics, thus physiological parameters such as enzyme activities and metabolite content could be compared.C4 trophozoites exhibit similar mRNA expression levels of genes coding for enzymes invoved in nitro and/or O2-reduction, including GlNR2 and a homologous protein without N-terminal ferredoxin domain, other flavoproteins like flavodiiron protein and flavohemoglobin, thioredoxin reductase, NADH oxidase or the two POR isoforms.The only enzyme shown to exhibit significantly decreased mRNA levels in C4 trophozoites is GlNR1.This result is in good agreement with data obtained with three other MET-resistant strains.The significant decrease of nitroreductase activity in cell-free extracts of C4 trophozoites, thus has other origins, namely either post-transcriptional downregulation or lack of essential cofactors.Since the electron donors NADH are provided in excess in the functional assay, and since the most relevant nitro-reducing enzymes are flavoproteins, the incorporation of the prosthetic group FAD may be critical.As previously suggested, the reduction of FAD levels may thus constitute an important physiological mechanism to avoid the formation of toxic nitro intermediates and/or radicals.Since NADH oxidases also contain FAD, it is not surprising that the oxygen consumption of C4 trophozoites is reduced, as well.This differs from results of a former study where no differences in oxygen consumption between MET-sensitive or - resistant clinical isolates could be observed.It is unclear, however, to which extent these results can be extrapolated since the cells have been grown under different conditions.Furthermore, we have to consider the possibility that FMN and not FAD is the cofactor of some enzymes that may be involved in nitroreduction.The surprising observations concerning OCT activity and citrulline contents indicate, however, that besides redox processes also other metabolic mechanisms are affected in resistant strains.OCT plays a critical role in giardial energy metabolism and is up-regulated on mRNA and protein levels in ALB-resistant trophozoites.The fact that C4 trophozoites have a lower OCT activity may be an indication for a diminution of energy production and of intermediate metabolism.Lower citrulline pool size, higher ADP/ATP and NADPH/NADP+ ratios and lower growth rates indicate the same.Taken together, the metabolic parameters that we have investigated support the thesis that resistance formation to nitro drugs in C4 is due to a reduction of nitro drug activation rather than to a detoxification of nitro radicals.Expression studies have revealed a downregulation of the nitroreductase NR1, a potential activator of nitro compounds whereas enzymes involved in nitro radical detoxification such as the nitroreductase NR2, flavohemoglobin, and flavodiiron protein are not affected.Conversely, other resistant strains show expression patterns suggesting that both mechanisms are involved in resistance formation.Aerobic resistance, thus quenching of nitro radicals by O2 as observed in the microaerophilic Trichomonas sp. 
can be excluded for Giardia growing under strictly anaerobic conditions.To sum up, resistance formation exhibits striking similarities to metabolic adaptation processes to environmental distress, and, in this case, is less likely caused by mutations of single intracellular targets.This may be anchored in the evolutionary history of this protozoan parasite, which must face dietary shifts of its omnivorous hosts ranging from a carbohydrate-rich to a red meat-rich diet, resulting in an accumulation of nitrosamines and other reactive nitrogen species.These compounds may be generated at biologic heme centers mediating e.g. the nitration of phenol and tryptophan.When using nitro drugs, the treatment success would then be only guaranteed by an immediate increase from zero to a concentration above the MIC until complete parasite clearance.A step-wise increase of sublethal drug concentrations would result in adaptation as has been easily observed in the generation of the resistant lab strains by us and by other groups.The biochemical trigger of this adaptation is unknown.Since “resistance” formation has been shown to correlate with antigenic variation and since antigenic variation is due to epigenetic changes at the post-transcriptional level, we may assume that the metabolic changes observed in this study may be – at least in part - the result of epigenetic changes, as well.This would explain the reversibility of “resistance” upon subcultures in the absence of drugs.Thus, nitro drug resistance shares some, but not all features with the concept of “tolerance” as defined with respect to antibiotic treatment of bacteria.As a consequence, we suggest replacing the term “resistance” by “tolerance” to nitro drugs as a special case of physiological plasticity towards environmental distress and suggest reserving the term “resistance” to genotypical changes such as point mutations of targets or acquisition of drug degrading enzymes by lateral transfer.The authors declare that there is no conflict of interest.
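As a minimal illustration of the comparison scheme described in the methods of this study – whole-cell rates normalized to the protein content of the extracts, and WT versus C4 means compared by Student's t-test in R with p < 0.01 regarded as significant – the following sketch can be used. All numeric values are invented placeholders, not measured data.

```r
# Hypothetical oxygen consumption readings (arbitrary units) from three
# independent assays per strain, together with the protein content (mg)
# of the corresponding cell suspensions.
ocr <- data.frame(
  strain  = rep(c("WT", "C4"), each = 3),
  rate    = c(120, 131, 125, 63, 58, 66),
  protein = c(0.42, 0.45, 0.43, 0.41, 0.40, 0.44)
)

# Normalize each rate to the protein content of the respective preparation
ocr$rate_norm <- ocr$rate / ocr$protein

# Mean and standard error per strain, as reported in the figures
se <- function(x) sd(x) / sqrt(length(x))
aggregate(rate_norm ~ strain, data = ocr,
          FUN = function(x) c(mean = mean(x), se = se(x)))

# Student's t-test; differences with p < 0.01 are regarded as significant
res <- t.test(rate_norm ~ strain, data = ocr, var.equal = TRUE)
res$p.value < 0.01
```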
For over 50 years, metronidazole and other nitro compounds such as nitazoxanide have been used as a therapy of choice against giardiasis and more and more frequently, resistance formation has been observed. Model systems allowing studies on biochemical aspects of resistance formation to nitro drugs are, however, scarce since resistant strains are often unstable in culture. In order to fill this gap, we have generated a stable metronidazole- and nitazoxanide-resistant Giardia lamblia WBC6 clone, the strain C4. Previous studies on strain C4 and the corresponding wild-type strain WBC6 revealed marked differences in the transcriptomes of both strains. Here, we present a physiological comparison between trophozoites of both strains with respect to their ultrastructure, whole cell activities such as oxygen consumption and resazurin reduction assays, key enzyme activities, and several metabolic key parameters such as NAD(P)+/NAD(P)H and ADP/ATP ratios and FAD contents. We show that nitro compound-resistant C4 trophozoites exhibit lower nitroreductase activities, lower oxygen consumption and resazurin reduction rates, lower ornithine-carbamyl-transferase activity, reduced FAD and NADP(H) pool sizes and higher ADP/ATP ratios than wildtype trophozoites. The present results suggest that resistance formation against nitro compounds is correlated with metabolic adaptations resulting in a reduction of the activities of FAD-dependent oxidoreductases.
765
Collection of human reaction times and supporting health related data for analysis of cognitive and physical performance
The purpose of this article is to provide interested researchers with a well annotated and sufficiently large collection of human reaction times and health related data and metadata that could be suitable for further analysis of lifestyle and human cognitive and physical performance.The second aim is to present a procedure of efficient acquisition of human reaction times and supporting health related data in non-lab and lab conditions.Each provided dataset contains a complete or partial set of data obtained from the following measurements: hands and legs reaction times, color vision, spirometry, electrocardiography, blood pressure, blood glucose, body proportions and flexibility."It also provides a sufficient set of metadata to allow researchers to perform further analysis.Age and gender distributions of all participants are listed in Tables 1 and 2.Prior to measurements all participants were familiarized with the goal of the project, overall experimental procedure and related legal conditions.Then they were registered into a software application for rapid collection, storage, processing and visualization of heterogeneous health related data, signed the informed consent and filled in a short motivational questionnaire.Immediately after that they took part in individual measurements organized at nine physical sites.Each physical site was equipped with appropriate hardware and software tools related to the type of measurement and served at least by one human expert who also provided the participant with the information about the site measurement.The last physical site, the information desk, served both for the registration of the participants and provision of measurements results.It was served by three people.Although there was a recommended route between individual measurement sites, in fact the participants could circle them in any order.They were also not required to complete all the measurements and could have interrupted the measurement cycle at any time.Only in the best case they visited all the measurement sites and filled in all questions in the questionnaire.The complete data collection procedure took approximately 15 minutes.When a single measurement was completed, the obtained data were inserted via a user interface into a software application.When the participant finished his/her last measurement, he/she was provided with the results from all the visited measurement sites organized on one A4 page."After registering and signing the informed consent each participant proceeded to fill in a motivational questionnaire containing a set of 13 single choice questions to provide a basic overview of participant's current lifestyle and health condition.The following questions were asked:Q1: Do you exercise regularly?,Q2: If not.Would you like to exercise regularly?,Q3: If yes.Do you have friends with whom you could exercise?,Q4: Do you eat regularly?,Q5: Do you drink enough water during the day?,Q6: Do you eat healthily?,eg.poultry, fish, fruits, vegetables, water, etc.Q7: Do you use any dietary supplements?,e.g. 
vitamins, supplements for joints or bones, etc. Q8: Do you smoke? Q9: If yes, how many cigarettes do you smoke? (up to 10 cigarettes per day, up to 10 cigarettes per week, up to 20 cigarettes per day, up to 10 cigarettes per month, 20 or more cigarettes per day). Q10: How often do you drink alcoholic beverages? (multiple times per week, "I don't drink any alcohol"). Q11: Do you undergo regular medical examinations? Q12: Do you have a girlfriend/boyfriend or husband/wife? Q13: Do you indulge yourself with proper rest and relaxation? (e.g. massages, wellness, proper rest, …). The number of measurement sites differed between the participants visiting the Days of Science and Technology 2016 and the participants from Mensa Czech Republic visiting the neuroinformatics laboratory at the University of West Bohemia. The restriction of measurement sites for the participants from Mensa Czech Republic was primarily caused by the limited time they had during their visit to the laboratory and by their interest in other kinds of measurements related to brain functioning. The first measurement site was focused on the measurement of the participant's hands reaction times to outside visual stimuli. A custom cognitive research device was used, consisting of a wooden desk with four buttons and LED panels placed in a square formation, together with related hardware and embedded control software for generating the visual stimuli and recording the participant's responses. The task of the participant was to press the button near the LED panel that was turned on, with the right or left hand, as quickly as possible. Only one LED panel could be active at a time. The order of lighting up the LED panels was random and controlled by the embedded software. In total the participant completed 16 trials in which he/she pressed one of the four buttons placed on the wooden plate according to the LED panel turned on. The results given to the participant contained the following values: Average hands reaction time – calculated from the 16 trials in which the participant pressed one of the four buttons according to the LED panel turned on; Number of missed reactions – a missed reaction was counted when no button was pressed within the time limit while one of the LED panels was turned on; Number of incorrect reactions – an incorrect reaction was counted when a wrong button was pressed within the time limit while one of the LED panels was turned on. The next measurement site was focused on the measurement of the legs reaction time using an impact dance pad. This dance pad was divided into nine areas and connected to a laptop on which these areas were represented by corresponding patterns that were randomly highlighted. Only one area could be active at a time. The task of the participant was to stand in the central part of the dance pad, step aside onto the area whose pattern was highlighted on the laptop, and return quickly back to the central part of the dance pad. In total the participant completed 16 trials in which he/she stepped aside and returned back to the central position. The results given to the participant contained the following values: Average legs reaction time – calculated from the 16 trials in which the participant touched one of the eight areas of the impact dance pad with his/her leg; Standard deviation – calculated from all trials; Best reaction time – the shortest reaction time achieved on the impact dance pad; Worst reaction time – the longest reaction time achieved on the impact dance pad.
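A minimal sketch of how these per-participant summaries could be derived from the logged trials is given below. The trial values, the data layout, and the handling of missed trials are assumptions for illustration only; the real hardware recorded 16 trials per participant for the hands and for the legs.

```r
# Hands: one row per trial; NA reaction time = no button pressed within the
# time limit (missed reaction), correct = FALSE means a wrong button was pressed.
hands <- data.frame(
  rt_ms   = c(412, 385, NA, 455, 398, 371, 420, 389,
              402, NA, 377, 431, 395, 408, 384, 419),
  correct = c(TRUE, TRUE, NA, FALSE, TRUE, TRUE, TRUE, TRUE,
              TRUE, NA, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE)
)

avg_hands_rt <- mean(hands$rt_ms, na.rm = TRUE)              # average hands reaction time
n_missed     <- sum(is.na(hands$rt_ms))                      # missed reactions
n_incorrect  <- sum(hands$correct == FALSE, na.rm = TRUE)    # wrong button pressed in time

# Legs: 16 step-aside-and-return trials on the impact dance pad
legs_rt_ms <- c(612, 590, 655, 601, 577, 630, 644, 598,
                585, 623, 610, 637, 592, 605, 618, 599)

legs_summary <- c(
  average = mean(legs_rt_ms),
  sd      = sd(legs_rt_ms),
  best    = min(legs_rt_ms),   # shortest reaction time
  worst   = max(legs_rt_ms)    # longest reaction time
)
```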
The third measurement site was focused on the measurement of color vision. The participant was tested using a total of eight pseudochromatic pictures; his/her task was to recognize the number hidden in these pictures. The result given to the participant was the list of incorrectly recognized pictures. The fourth measurement site was focused on the measurement of lung capacity, forced expiratory volume, and expiratory flow of the participant. All measurements were performed using the SP10W spirometer. The results given to the participant contained the following values: Forced vital capacity – the amount of air the participant forcibly expelled from the lungs after taking the deepest breath possible; Forced expiratory volume in the 1st second – the amount of air the participant expelled during a forced breath, measured during the first second; Peak expiratory flow – the maximum speed of the participant's expiration. The fifth measurement site was focused on the electrocardiography measurements. It included heart rate together with measurement of the ST segment and the QRS interval. The QRS interval, representing ventricle depolarization, was measured from the start of the Q wave to the end of the S wave. The ST segment was measured from the end of the S wave (J point) to the start of the T wave. All measurements were performed using the ReadMyHeart Handheld ECG device. The results given to the participant contained the following values: Heart rate; ST Segment – the length of the ST segment, which represents the interval between ventricular depolarization and repolarization; QRS Interval – the duration of the QRS complex, which represents ventricle depolarization. The sixth measurement site was focused on the measurement of blood pressure in the traditional way as systolic and diastolic blood pressure. This measurement was complemented by the measurement of heart rate, here denoted as "puls". All measurements were performed using the Omron M6 Comfort IT device. The results given to the participant contained the following values: Systolic blood pressure; Diastolic blood pressure. The seventh measurement site was focused on the measurement of glucose concentration in blood. All measurements were performed using the FORA Diamond Mini blood glucose monitoring system. The result given to the participant contained the following value: Glucose – concentration of glucose in blood. The eighth measurement site was focused on the measurement of body proportions: the participant's height was measured manually, while weight, body mass index, and the concentrations of muscle mass, water and fat in the participant's body were measured and calculated by the Medisana BS 440 Connect device. The results given to the participant contained the following values: Height – the participant's height; Weight – the participant's weight; Body Mass Index; Muscle-Mass – concentration of muscle mass in the human body; Water – concentration of water in the human body; Fat – concentration of fat in the human body. The ninth measurement site was focused on the measurement of human body flexibility, which was measured using a 13 cm high portable podium. The participant standing on the podium was asked to touch his/her own feet. If the participant was not able to do so, the result was recorded as a negative number; on the other hand, when the participant managed to bend even further, the result was a positive number. The result given to the participant contained the following value: Flexibility – the difference between the position of the fingers and the feet during a deep forward bend. The following table summarizes the devices used during the measurements.
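Two of the derived values reported back to participants – the body mass index from the body-proportion site and the sign convention of the flexibility measurement – can be reproduced with a short sketch. The example inputs and the helper names are hypothetical.

```r
# Body mass index from weight (kg) and height (m)
bmi <- function(weight_kg, height_m) weight_kg / height_m^2
bmi(78, 1.82)   # e.g. ~23.5 kg/m^2

# Flexibility: difference between finger and foot position during a deep
# forward bend; negative when the feet are not reached, positive when the
# participant bends beyond them.
interpret_flexibility <- function(flex_cm) {
  if (flex_cm < 0) "feet not reached" else "bend reaches or goes beyond the feet"
}
interpret_flexibility(-4)
interpret_flexibility(6)
```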
The data collected during the Days of Science and Technology 2016 are available in Table 6, Table 7, Table 8 and Table 9. The data collected in the neuroinformatics laboratory from the members of Mensa Czech Republic are available in Table 11. The questionnaire data collected during the Days of Science and Technology 2016 are available in Table 10. The questionnaire data collected in the neuroinformatics laboratory are available in Table 12. Each record in the data collection corresponds to one participant. The first step of the preliminary statistical analysis was to distribute the obtained data into three categories. The first category, Basic data, contains the following information about each participant: the group the participant belongs to, gender, and age. The second category, Measured data, includes the subcategories related to the data obtained from the individual measurement sites: average hands reaction time, number of missed hands reactions, number of incorrect hands reactions, average legs reaction time, best legs reaction time, worst legs reaction time, pseudochromatic picture 1, pseudochromatic picture 2, pseudochromatic picture 3, pseudochromatic picture 4, pseudochromatic picture 5, pseudochromatic picture 6, pseudochromatic picture 7, pseudochromatic picture 8, heart rate, forced vital capacity, forced expiratory volume in the 1st second, and peak expiratory flow. The last category, Questionnaire data, includes the data obtained from the questionnaires completed by the participants. These data are divided into the following subcategories: Sport – the participant: 0 - does not do any sport and does not want to do any sport, 1 - does not do any sport and wants to do some sport, 2 - does some sport, but has no friends to do some sport with, 3 - does some sport and has friends to do some sport with. Food – the participant: 0 - eats irregularly and unhealthily, 1 - eats irregularly and healthily, 2 - eats regularly, but unhealthily, 3 - eats regularly and healthily. Drinking habits – the participant: 0 - drinks enough water, 1 - does not drink enough water. Supplements – the participant: 0 - does not use any supplements, 1 - uses supplements. Smoking – the participant: 0 - does not smoke, 1 - smokes up to 10 cigarettes per month, 2 - smokes up to 10 cigarettes per week, 3 - smokes up to 10 cigarettes per day, 4 - smokes up to 20 cigarettes per day, 5 - smokes 20 or more cigarettes per day. Alcoholic beverages – the participant: 0 - does not drink any alcoholic beverages, 1 - drinks alcoholic beverages occasionally, 2 - drinks alcoholic beverages once a week, 3 - drinks alcoholic beverages several times per week. Medical checks – the participant: 0 - undergoes medical checks periodically, 1 - undergoes medical checks irregularly. Partner – the participant: 0 - does not have a girlfriend/boyfriend or spouse, 1 - has a girlfriend/boyfriend or spouse. Rest and relaxation – the participant: 0 - does not rest and relax, 1 - does rest and relax. The dataset is partly inconsistent because the complete set of health related data was not obtained from every participant. Moreover, the members of Mensa Czech Republic did not participate in all measurements needed to obtain the whole set of health related data, and the motivational questionnaire was also not always filled in completely. Since unfilled or inaccurate data can influence the variability of the dataset, statistical methods that can cope with such expected errors were used. The significance level of 0.05 was used for all tests. All statistical methods were performed in MATLAB.
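One possible way to collapse the raw questionnaire answers into the numeric subcategories defined above is sketched below. The function and argument names are assumptions; the mapping simply restates the coding scheme given in the text for the Sport and Food subcategories.

```r
# Sport: built from Q1 (exercises regularly), Q2 (would like to) and
# Q3 (has friends to exercise with)
encode_sport <- function(q1_exercises, q2_wants_to, q3_has_friends) {
  if (!q1_exercises && !q2_wants_to) 0        # no sport, no interest
  else if (!q1_exercises)            1        # no sport, would like to
  else if (!q3_has_friends)          2        # does sport, no partners
  else                               3        # does sport with friends
}

# Food: built from Q4 (eats regularly) and Q6 (eats healthily)
encode_food <- function(q4_regular, q6_healthy) {
  if (!q4_regular && !q6_healthy) 0
  else if (!q4_regular)           1
  else if (!q6_healthy)           2
  else                            3            # eats regularly and healthily
}

encode_sport(TRUE, NA, FALSE)   # -> 2
encode_food(TRUE, TRUE)         # -> 3
```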
Box plot graphs visualizing the basic statistical characteristics of the data were created separately for the members of Mensa Czech Republic and for the visitors of the Days of Science and Technology 2016. Figs. 6 and 7 show the box plot graphs depicting the range of the participants' age, legs reaction times and BMI. Two approaches were then used to relate the questionnaire data to the measured data. The first method was based on the gradual selection of questionnaire data subcategories: on the basis of the significance coefficients and p-values it was decided which questionnaire data subcategory significantly affects the selected subcategory of the measured data. In the second method, the questionnaire data were regarded as one package, and a multivariate regression analysis with all subcategories of the questionnaire data processed simultaneously was implemented. Stepwise regression was chosen as the most suitable method for the dataset. The results of this regression are summarized in Tables 4 and 5.
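The stepwise selection described above can be sketched as follows. Note that the original analysis was performed in MATLAB, whereas this illustration uses R, and the synthetic data frame and its column names are assumptions standing in for the published tables.

```r
# Synthetic stand-in for the dataset: questionnaire subcategories (coded as
# above) and one measured outcome, here the average legs reaction time.
set.seed(1)
n <- 100
dataset <- data.frame(
  sport   = sample(0:3, n, replace = TRUE),
  food    = sample(0:3, n, replace = TRUE),
  smoking = sample(0:5, n, replace = TRUE),
  alcohol = sample(0:3, n, replace = TRUE),
  rest    = sample(0:1, n, replace = TRUE)
)
dataset$avg_legs_rt <- 600 - 10 * dataset$sport + 8 * dataset$smoking +
  rnorm(n, sd = 30)

null <- lm(avg_legs_rt ~ 1, data = dataset)
full <- lm(avg_legs_rt ~ sport + food + smoking + alcohol + rest,
           data = dataset)

# Stepwise selection of questionnaire subcategories
stepwise <- step(null, scope = list(lower = null, upper = full),
                 direction = "both", trace = FALSE)
summary(stepwise)   # retained predictors, coefficients and p-values
```

In this R sketch, step() adds or drops subcategories by AIC, whereas the MATLAB procedure used in the study works with significance coefficients and p-values at the 0.05 level; the retained terms and their p-values correspond to the quantities summarized in Tables 4 and 5.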
Smoking, excessive drinking, overeating and physical inactivity are well-established risk factors decreasing human physical performance. Moreover, epidemiological work has identified modifiable lifestyle factors, such as poor diet and physical and cognitive inactivity that are associated with the risk of reduced cognitive performance. Definition, collection and annotation of human reaction times and suitable health related data and metadata provides researchers with a necessary source for further analysis of human physical and cognitive performance. The collection of human reaction times and supporting health related data was obtained from two groups comprising together 349 people of all ages - the visitors of the Days of Science and Technology 2016 held on the Pilsen central square and members of the Mensa Czech Republic visiting the neuroinformatics lab at the University of West Bohemia. Each provided dataset contains a complete or partial set of data obtained from the following measurements: hands and legs reaction times, color vision, spirometry, electrocardiography, blood pressure, blood glucose, body proportions and flexibility. It also provides a sufficient set of metadata (age, gender and summary of the participant's current life style and health) to allow researchers to perform further analysis. This article has two main aims. The first aim is to provide a well annotated collection of human reaction times and health related data that is suitable for further analysis of lifestyle and human cognitive and physical performance. This data collection is complemented with a preliminarily statistical evaluation. The second aim is to present a procedure of efficient acquisition of human reaction times and supporting health related data in non-lab and lab conditions.
766
Immune Factors in Deep Vein Thrombosis Initiation
DVT and its major complication, pulmonary embolism, designated together as venous thromboembolism, are one of the leading causes of disability and death worldwide.VTE is the third most common cardiovascular pathology by its prevalence after myocardial infarction and stroke , with about 900 000 cases and 300 000 deaths in the US annually.Surprisingly, the prevalence and mortality of VTE has not substantially decreased over 30 years despite progress in diagnostic and prophylactic modalities .DVT develops in deep veins, usually, but not exclusively, in legs, causing pain, redness, swelling, and impaired gait.If the thrombus is unstable, it can become detached and travel to the lungs, where it occludes pulmonary circulation causing PE.In contrast to arterial thrombosis, whose mechanisms have been intensively investigated, DVT remains largely terra incognita, which inspired the American Surgeon General to issue a Call to Action to stimulate research of venous thrombosis .Blood clotting is based on a protein polymer called fibrin, produced by cleavage of its precursor, fibrinogen, by the protease thrombin .Thrombin is formed by activated Factor X-mediated processing of prothrombin.Activation of FX can occur via two mechanisms designated as extrinsic and intrinsic pathways.The former one is initiated by a protein designated as tissue factor, which may be exposed by the tissues or blood cells, predominantly monocytes.The intrinsic pathway starts from contact of FXII with a negatively charged surface.Both pathways trigger a cascade of enzymatic transformations converging on FX.Upon formation, fibrin is stabilized by a transglutaminase, FXIII.Because of the known roles of these factors in clot formation, the current paradigm of DVT prophylaxis focuses predominantly on the coagulation system, by targeting thrombin, active FXa or vitamin K-dependent clotting factors.However, due to the substantial overlap in the mechanisms of normal hemostasis and pathological thrombosis, the therapeutic window of anticoagulants may be narrow because of increased chances for bleeding complications.Using a mouse model, the Wolberg group has demonstrated an antithrombotic potential of targeting coagulation FXIII, thus preventing retention of red blood cells in the thrombus .Given that partial inhibition of FXIII does not impair hemostasis, this may be a promising anti-DVT approach, although its usefulness in humans needs validation.More broadly, there is a general demand for a fundamentally new approach that would allow for efficient DVT prevention without risk of bleeding.In this review, we discuss recent advances in understanding the mechanisms of DVT, demonstrating a pivotal role of the immune system in its pathogenesis, and show that recent experimental data call for a paradigm shift, namely to reconsider DVT as an immunity- and inflammation-related process rather than merely coagulation-dependent thrombosis.Of note, once a thrombus is formed, a process of its resolution begins.In mouse models, thrombus size reaches its maximum within the first 1–2 days, after which it gradually decreases during 2–3 weeks .The process of venous thrombus resolution depends on leukocytes, cytokines, metalloproteinases, as well as effector–memory T cells, sharing certain similarities with wound healing .When DVT has occurred, even its successful treatment does not preclude all the spectrum of DVT complications, such as recurrence, pulmonary hypertension, post-thrombotic syndrome, and others.Also, in case of a tardy diagnosis, 
life-threatening PE can develop.Consequently, the primary effort of translational research should focus on DVT prevention and in this review, we have therefore concentrated specifically on the mechanisms of venous thrombosis initiation.For readers interested in thrombus resolution we recommend the following reviews on the topic .The initiation of venous thrombus formation involves a complex cascade of events that can be divided into three consecutive though overlapping stages: blood flow stagnancy and hypoxia; activation of the endothelium; and blood cell recruitment leading to activation of blood coagulation and thrombus development.We outline these steps and the involvement of immune cells below.DVT develops in the valvular pockets of the veins .As blood pressure gradually decreases from the heart ventricles all the way to the veins, the pumping function of the heart may become insufficient to push blood through the veins.Thus, normal function of the auxiliary muscle pump becomes indispensable for proper blood flow especially in the legs given that humans have vertical spinal orientation.When limb muscles do not contract regularly and properly, blood flow velocity in certain veins decreases to complete stasis.This is associated with elevated risk for DVT.Thus, blood flow stagnation in the veins is one of the main factors driving idiopathic DVT .This is an important stipulation because, for example, DVT caused by cancer is based on elevated blood coagulation induced, in particular, by TF-bearing microparticles .Flow stagnation can result from sessile position of an individual, such as observed after surgery, in a bed-ridden position or even during long-haul flights.In pediatric patients, immobility for more than 72 h is also considered one of the main factors triggering DVT with each additional day of hospitalization increasing the risk by 3% .Venous stasis increases also with age and it was observed that contrast material stays in the veins of elderly patients for up to 1 h after venography .It should be noted however that blood flow stagnancy either may require additional factors or be unusually prolonged to cause DVT, or both, because, for example, normal night sleep does not result in DVT.Animal models recapitulating complete or partial blood flow restriction by applying ligature on the inferior vena cava have been developed and used to delineate mechanisms of venous thrombosis initiation and resolution.In addition to the experimental approach, the pivotal role of flow reduction in cell accrual in the valves has recently been demonstrated by a computer simulation approach .Blood inside the vein is the only source of oxygen for the venous wall.Thus, diminished supply with new portions of blood creates local hypoxia in the vein.Hypoxia is considered the major pathogenetic mechanism linking blood flow stagnancy with following processes in the vessel wall leading to thrombosis.Hamer and co-authors directly demonstrated in dogs and patients that blood oxygenation in venous valve pockets quickly falls once the flow becomes static and returns back to original values when pulsatile flow is applied .It has been recently shown that whole-body hypoxia potentiates venous thrombosis .Hypoxia results in activation of various cell types in the venous wall, such as mast cells and endothelium, leading to expression of adhesion receptors and release of Weibel–Palade body constituents .This cascade of events is a prerequisite for local recruitment of leukocytes and platelets; an inflammatory phenomenon 
that was first demonstrated to be induced by flow restriction more than 50 years ago .The recruited cells give start to thrombosis through various routes, some of which are discussed below.Endothelial activation and exocytosis of WPBs, central events in DVT initiation, are rapidly induced by stenosis of the IVC .In this process, WPBs fuse with the plasma membrane and release their constituents expressing some of them, such as von Willebrand factor and P-selectin, on the membrane, thus mediating cell recruitment.This is consistent with increased levels of soluble P-selectin in patients with DVT , although soluble P-selectin may originate also from activated platelets.Elevated levels of P-selectin both in the circulation and the venous wall are associated with upregulated DVT in the electrolytic mouse model .Inhibition of P-selectin suppresses DVT and stimulates spontaneous recanalization of the thrombus in an IVC balloon occlusion model in baboons, suggesting its pathophysiological role in venous thrombosis .Hypoxia results in the synthesis of reactive oxygen species, which activate the endothelium, inducing the expression of adhesion receptors and recruitment and extravasation of leukocytes .ROS may activate endothelial cells also indirectly, for example, via the complement system.Oxidative stress induces complement activation and its deposition on the vessel wall leading to progression of inflammation .The susceptibility to DVT strongly correlates with complement component C5a levels in a mouse model .Mice deficient for C3 or C5 have reduced experimental venous thrombosis, with the lack of C5 not being accompanied by any defects in platelet activation or normal hemostasis .High levels of the C3 are associated with high risk of DVT in humans .Mechanistically, high-molecular-weight multimers of VWF released from WPBs provide a scaffold for complement activation .Complement components bind to VWF strings and become activated through the alternative activation pathway .Thus, in addition to the direct activation of endothelium by hypoxia and ROS, the complement system may represent an additional link between flow restriction and endothelial activation.In addition to cell recruitment, activated endothelium enhances blood coagulation and exerts suppressed anticoagulant function, thus contributing to thrombus formation .Upregulation of the endothelial surface adhesion receptor ICAM-1 underlies augmented DVT under the endotoxemic conditions .Clinically, endothelial activation has been reported in patients with VTE and thrombosis of superficial veins .It has recently become clear that endothelial activation and cell recruitment, the critical events in DVT initiation likely induced by local hypoxia, require an intermediary: MCs.MCs are a part of the innate immune system that largely reside in tissues and are present in the vicinity of blood vessels .These are large cells containing granules enriched with proinflammatory mediators, such as tumor necrosis factor-α and histamine, and antithrombotic factors, such as tissue plasminogen activator and heparin.Thus, MCs might be expected to exert opposite effects on thrombosis: its reduction by releasing blood coagulation inhibitors or stimulation by supporting local inflammatory response.The involvement of MCs in DVT is implicitly supported by their accumulation at the site of venous thrombosis , but their net functional contribution has long remained obscure.Using the stenosis DVT model in mice, we have recently demonstrated that two strains of 
MC-deficient mice are completely protected against DVT .Adoptive transfer of in vitro-differentiated MCs into MC-deficient animals restored thrombosis, suggesting that it is the lack of MCs that prevents DVT.Thus, the net effect of MC degranulation in this model is the exacerbation of venous thrombosis, suggesting that the activity of MC-originated proinflammatory factors outweighs the activity of antithrombotic factors.Absence of MCs completely prevents thrombosis, suggesting that MCs are absolutely required for its initiation.In addition to protection from DVT, MC deficiency is accompanied by reduced cell recruitment at the venous wall after stenosis application .Consequently, the factor or factors released from MCs and responsible for DVT may be involved in endothelial activation.Histamine is one of the most likely candidates produced by MCs.Indeed, local application of histamine accelerated DVT in wild-type mice and induced DVT in MC-deficient mice .The mechanisms of the prothrombotic effect of histamine might involve its ability to induce release of VWF and P-selectin from WPBs and enhance expression of E-selectin and ICAM-1 , mediating blood cell recruitment to the vascular wall.Histamine also stimulates the expression of TF, the key initiator of blood coagulation, in different cell types .Mechanisms of MC activation by hypoxia remain obscure, but oxidative stress and ROS appear to play a role in this process.ROS can be both the cause and the consequence of MC activation because, on the one hand, inhibition of MC activation downregulates ROS production and, on the other hand, antioxidants prevent MC degranulation .Generation of ROS has a direct impact on DVT in a mouse model.Indeed, H2O2 has been shown to mediate increased susceptibility to DVT in aged mice, whereas mice overexpressing an antioxidant, glutathione peroxidase 1, are protected against the prothrombotic effect of H2O2 .Mastocytosis, which is associated with abnormally high numbers of MCs, is accompanied by bleeding symptoms in a small proportion of patients .The reason for this is unclear but it is possible that when the number of MCs exceeds a certain limit, massive release of antithrombotic factors starts to prevail over the effect of proinflammatory/prothrombotic stimuli.However, the importance of MCs in venous thrombosis is corroborated by a clear link between allergic diseases and VTE .Severity of asthma strongly correlates with the risk of not only DVT but also PE .MCs and histamine are implicated in airway and lung inflammation-related thrombosis induced by diesel exhaust particles and prevention of histamine release has an antithrombotic effect .Thus, targeting MCs by inhibiting their activation and degranulation, as well as further identification of and targeting MC granular constituents exacerbating thrombosis, may represent a fundamentally new approach to fight DVT, although the precise benefit and advantages of this approach in patients are still to be verified.Restriction of venous blood flow induces rapid leukocyte recruitment.After 1 h of IVC stenosis, leukocytes start to roll along and adhere to the venous endothelium, and after 5–6 h, leukocytes carpet the entire endothelial surface.Neutrophils account for more than 80% of adherent leukocytes and monocytes represent the remainder .Leukocyte recruitment is dependent on P-selectin exposure on the luminal side of the venous endothelium since the number of leukocytes recruited to the venous wall in mice lacking P-selectin on the endothelial surface is 
reduced by several orders of magnitude.Moreover, these mice are protected against DVT, indicating that leukocyte recruitment is crucial for DVT development in response to blood flow restriction.Although activated platelets also expose P-selectin on their surface, the role of platelet-derived P-selectin in leukocyte recruitment is less prominent .Leukocyte recruitment is also affected by various plasma components.For example, higher plasma levels of low-density lipoproteins likely enhance leukocyte accumulation since deficiency in proprotein convertase subtilisin/kexin type 9, an enzymatically inactive protein that binds the LDL receptor favoring its degradation, significantly reduces leukocyte adhesion and thrombus growth in the stenosis DVT model .Given that leukocyte recruitment to the venous wall is indispensable for DVT, below we discuss the contribution of the main leukocyte subsets recruited, namely neutrophils and monocytes, to the pathogenesis of the disease.The involvement of neutrophils in DVT was discovered several decades ago .Recent studies have demonstrated the critical role of neutrophils in the pathophysiology of DVT .Depletion of neutrophils inhibits venous thrombus formation, indicating that their role cannot be substituted by other leukocytes .This prothrombotic effect of neutrophils, however, is observed only in the stenosis DVT model, whereas in the stasis model, neutropenia does not affect thrombus size in mice and results in development of even larger thrombi in rats .Upon recruitment to the venous wall, neutrophils undergo activation and release their nuclear material, forming a web-like extracellular structures designated as neutrophil extracellular traps.These are composed of DNA, histones, secretory granule constituents, and other components implicated in antimicrobial defense .It has been shown that signals originating from the neutrophil P-selectin glycoprotein ligand-1, a counter-receptor for P-selectin, may trigger the process of NET formation .Under pathogen-free conditions of thrombosis, NETosis may also be induced by high-mobility group box 1 released by and exposed on the surface of platelets recruited to the venous wall .Although monocytes may also be able to form extracellular traps , experiments on neutropenic mice have shown that neutrophils are the major source of these traps in venous thrombi .NETs are abundantly present in venous thrombi , which is in line with increased plasma levels of NETs biomarkers in patients with DVT .Prevention of NETosis or destruction of NETs by infusion of deoxyribonuclease I protects mice from thrombus formation in the stenosis DVT model, indicating the crucial role of NETs in the onset of DVT.NETs support DVT also in the stasis model in some but not in other studies .The latter study demonstrates that Toll-like receptor-9-deficient mice have larger thrombi than control animals, despite elevated levels of NET markers, and that treatment with DNase I or genetic ablation if peptidyl arginine deiminase-4 does not reduce thrombus size in the stasis model.More prominent prothrombotic function of neutrophils and NETs in stenosed versus fully closed vessels suggests that residual blood flow is indispensable for the inflammatory mechanism to become operational in venous thrombosis commencement, whereas complete absence of flow likely induces DVT through a more coagulation-dependent mechanism .The mechanisms by which NETs may contribute to venous thrombosis have become an important area of research.It has been shown that various 
adhesion proteins, including VWF, fibrinogen, and fibronectin, may bind to DNA/histone strings so that NETs become a scaffold for adhering platelets and red blood cells independent of the fibrin network .Upon release into the extracellular space, histones trigger activation of endothelial cells , which is consistent with increased plasma levels of VWF in mice infused with purified histones .In vitro experiments have also demonstrated that NETs stimulate platelet adhesion and aggregation at a venous shear rate and induce thrombocytopenia in vivo, with both effects being abolished by histone inactivation .Another mechanism of the prothrombotic effect of NETs is potentiation of the coagulant cascade and reduction of anticoagulant activity.NETs can bind FXII and provide a scaffold for FXII activation .Activated FXII may amplify fibrin formation without activating FXI, presumably through direct interaction with fibrin .Additionally, neutrophil elastase and other proteases associated with NETs degrade anticoagulants, such as TF pathway inhibitor , while histones impair thrombomodulin-dependent protein C activation , promoting thrombin generation.A recent study has demonstrated in vitro that while NETs components, DNA and histones, potentiate thrombin generation and blood clotting, NETs, as a biological entity, are unable to do so .This implies that NETs might need a certain degree of degradation to acquire procoagulant activity.Thus, NETs could represent an important mechanistic link between neutrophil accrual and venous thrombogenesis.Monocytes and, to a lesser extent, neutrophils recruited to the venous wall serve as a principal source of TF; the major initiator of the extrinsic coagulation pathway and fibrin deposition.In the stenosis model of DVT, deletion of TF in myeloid leukocytes completely prevents thrombus formation without affecting leukocyte recruitment .In contrast, in the complete stasis model, DVT is driven primarily by the vessel-wall-derived but not leukocyte-derived TF .This difference might be attributed to different pathogenetic mechanisms operating in these similar but distinct models.The prothrombotic function of leukocytes is negatively regulated by signaling via TLR-9.It has been shown that lack of this pattern-recognition receptor is associated with larger venous thrombi and increased levels of NETosis, necrosis, and apoptosis markers in the stasis, but not stenosis, model of DVT in mice .It has also been shown that lack of TLR-9 leads to reduced monocyte recruitment to venous thrombi .Platelets are recruited to the venous wall shortly after blood flow restriction and play an important role in DVT as platelet depletion substantially reduces thrombosis .A role of platelets in DVT is supported by the observations that an antiplatelet drug aspirin reduces DVT in mice and VTE in patients undergoing orthopedic surgery ; a condition frequently associated with compromised venous blood flow.Importantly, efficacy of venous thrombosis prophylaxis by aspirin is non-inferior to that of rivaroxaban, an anticoagulant widely used in clinical practice , which confirms involvement of platelets in DVT pathogenesis.In contrast to arterial thrombosis, where platelets form large aggregates , in DVT, platelets are mainly recruited as single cells and adhere either directly to the activated endothelium or to adherent leukocytes forming small heterotypic aggregates .Platelet recruitment to the venous thrombus is mediated by binding of platelet receptor GPIbα to VWF exposed on the endothelial 
surface.Indeed, deficiency in either GPIbα extracellular domain or VWF prevents experimental DVT.Recently, it has been shown that platelet recruitment also depends on the platelet membrane molecule CLEC-2, a hemi-immunoreceptor tyrosine-based activation motif-bearing receptor capable of binding podoplanin.Podoplanin is a mucin-type transmembrane protein expressed in the murine IVC wall in tunica media and adventitia, and its expression is markedly upregulated in the course of thrombus formation .It has been proposed that hypoxia-induced activation of the endothelial cells, caused by restriction of the blood flow, renders endothelial cell–cell junctions looser, allowing for platelet penetration into subendothelial spaces where the interaction between CLEC-2 and podoplanin may take place .The analysis of signal transduction pathways in platelets following recruitment to the venous wall has shown a role for mechanistic target of rapamycin complex 1, a rapamycin-sensitive protein complex consisting of mTOR, Raptor, and mLST8 .Deficiency of mTORC1 considerably reduces DVT in the murine flow restriction model.Platelet recruitment to the developing venous thrombus is also associated with enhanced generation of ROS, promoting thrombus growth .The contribution of both mTORC1 and ROS to the pathogenesis of DVT increases with age , which is consistent with higher incidence of VTE in elderly patients .Platelet recruitment and DVT in the conditions of hypobaric hypoxia, such as encountered at high altitude, depend in a mouse model also on assembly of NOD-like receptor family, pyrin domain containing 3 inflammasome , a molecular platform triggering autoactivation of caspase-1, which cleaves the proinflammatory cytokines, interleukin-1β, and IL-18, into their active forms.Deficiency in NLRP3 is associated with reduced thrombus size in complete stasis-induced DVT in mice .This finding is in accordance with the study demonstrating increased serum IL-18 levels in experimental DVT in rats as well as with the clinical observation demonstrating increased levels of IL-1β and IL-18 in patients with DVT .Besides procoagulant activity, recruited platelets provide important proinflammatory stimuli being a source of various damage-associated molecular patterns .Following recruitment to the venous wall, platelets expose HMGB1 , a nucleosomal protein that serves as a DAMP when released into the extracellular space.Deficiency in platelet-derived HMGB1 markedly decreases thrombus size and thrombosis incidence in the DVT model .Operating through the receptor for advanced glycation end-products and other pattern recognition receptors, HMGB1 promotes NETosis of the recruited neutrophils and facilitates recruitment of monocytes ; an important source of TF triggering the extrinsic coagulation pathway.Additionally, HMGB1 promotes recruitment and activation of new platelets at early stages of venous thrombus formation.Enhanced NETosis and platelet accrual result in further HMGB1 accumulation in the developing thrombus forming a positive feedback propagating DVT .Myeloid-related protein-14, a member of the S100 family of calcium-modulated proteins, is another DAMP abundantly expressed in platelets and neutrophils .MRP-14 deficiency is associated with reduced DVT, which is partially rescued by adoptive transfer of wild-type platelets or neutrophils.Acting in a Mac-1-dependent manner, MRP-14 fosters NETosis, thereby promoting venous thrombus propagation.Collectively, the data characterize platelets as important regulators of 
sterile inflammation and stress the role of platelet–neutrophil crosstalk in venous thrombogenesis.Involvement of platelets in the pathogenesis of DVT is limited by several mechanisms.Platelet recruitment to the venous endothelium is downregulated by the interaction of apoA-I, the major apolipoprotein in high-density lipoprotein, with endothelial receptor scavenger receptor-BI.This interaction diminishes endothelial activation and WPB release in an endothelial NO synthase-dependent manner .SR-BI-mediated signaling protects from venous thrombosis in mice, which is consistent with increased risk of DVT in patients with low plasma HDL levels , although some reports contradict this .The ability of platelets to promote leukocyte recruitment to the venous thrombus is negatively regulated by amyloid precursor protein abundantly expressed in platelets.In the stasis DVT model, substitution of wild-type platelets by APP-deficient ones increased platelet–leukocyte interaction.Genetic deficiency in APP is associated with enhanced NETosis, greater incorporation of NETs into venous thrombi, and enhanced DVT .Thus, APP or its functional analogs may represent a new approach to DVT prevention targeting simultaneously local inflammation and NETs production.Mechanisms of DVT initiation represent a cascade of events virtually identical to the local inflammatory response recently designated as immunothrombosis .This opens a window of opportunities for identification of new antithrombotic targets because the immune system is not directly implicated in normal hemostasis and targeting it is unlikely to result in excessive bleeding; and multiple anti-inflammatory drugs are already on the market and, consequently, available for testing their efficacy to prevent DVT.Although the precise mechanisms of how the immune-system-related cells and molecules are implicated in DVT may differ, some of them converge on local inflammation in the venous wall.Thus, focusing on this common denominator, reduction of endothelial activation, release of WPBs and local cell recruitment could be a promising strategy ameliorating venous thrombosis.For example, NO is one of the most potent inhibitors of WPB liberation , and we can therefore speculate about potential usefulness of NO donors for DVT prevention.HDL activates eNOS through binding SR-BI and a component of HDL, apoA-I, was shown to efficiently reduce DVT in experimental conditions operating via the same route .A mutated form of apoA-I with higher lipid-binding propensity, called apoA-I Milano, downregulates arterial thrombosis caused by ferric chloride .Of note, synthetic apoA-I analogs have been developed and proven to recapitulate various effects of the natural protein .Hence, the apoA-I/HDL axis might be considered potentially useful to fight DVT.Histones, a part of NETs, also stimulate WPB release and their prothrombotic activity can therefore rely, at least in part, on this effect.Based on experimental evidence, endothelial activation can be limited also by targeting MC degranulation, especially given that drugs with such mechanism of action are already in clinical use for other purposes.Determination of targetable mechanisms of MC activation in the unique environment of the veins, identification of the MC-derived factors promoting DVT, and verification of the relevance and efficacy of this approach in patients represent a challenge for future research.Thus, the following directions currently seem to be most promising in the translational aspect to prevent DVT by 
manipulating the immune system: amelioration of local vessel response to hypoxia; inhibition of endothelial activation and WPB release; inhibition of immune cell recruitment; and targeting NETs.In conclusion, DVT develops as a form of immunothrombosis with a particularly important role of local inflammation at the stage of thrombosis initiation.Targeting inflammatory pathways is less likely to cause bleeding complications than inhibition of blood coagulation mechanisms.Thus, it may be considered as an alternative and safer approach for the prevention of DVT in at-risk populations, and further research in this area may provide important new therapeutic options.Several questions remain open.Is local hypoxia a leading factor exacerbating DVT?Are there other causes?What are the mechanisms of hypoxia-driven local inflammation?What mechanisms mediate mast cell activation and degranulation under flow stagnancy conditions?What mediators released by mast cells trigger DVT?What therapeutic interventions targeting the immune system and local inflammation will be most efficient against DVT?Based on a combination of targeting the immune response with conventional methods of tackling DVT, it is tempting to develop a personalized approach to DVT prevention in different predisposing conditions.
Deep vein thrombosis (DVT) is a major origin of morbidity and mortality. While DVT has long been considered as blood coagulation disorder, several recent lines of evidence demonstrate that immune cells and inflammatory processes are involved in DVT initiation. Here, we discuss these mechanisms, in particular, the role of immune cells in endothelial activation, and the immune cascades leading to expression of adhesion receptors on endothelial cells. We analyze the specific recruitment and functional roles of different immune cells, such as mast cells and leukocytes, in DVT. Importantly, we also speculate how immune modulation could be used for DVT prevention with a lower risk of bleeding complications than conventional therapeutic approaches.
767
Simplified setup for the vibration study of plates with simply-supported boundary conditions
The frame can be designed to accommodate plates of various sizes.Important features include: sliding slots to accommodate minor variations in plate dimensions, a v-groove to support the plate and limit edge displacements, but which allows edge rotation along its length and width, cutouts to accommodate elements which are surface mounted on the plate, if required, and legs that allow air flow below the plate and which can accommodate vibration generators or speakers, if required.A sample frame design is shown in Fig. 6.Simply supported boundary conditions require supports which are flexible enough to allow in-plane shortening during bending, but rigid enough to prevent lateral displacement during rotation.Flexible weather stripping sealant has been found to provide the required stiffness for this purpose and reduces vibration transmission to the supporting frame.Removable sealants, as shown in Fig. 1, are also easy to remove between tests and do not damage the plate being tested.Care must be taken when using plates made of materials that are sensitive to or can be damaged by solvents, as most removable sealants dry via solvent evaporation.Using a sliding frame, such as the one shown in Fig. 2, accommodates minor or major differences in plate dimensions during installation of the plate.The use of a vibration isolating pad between the supporting frame and the table has also been found to help reduce signal noise.Installation procedure:Loosen frame support bolts and slide frame to its maximum opening size.Apply sealant in the middle of each v-groove.Enough sealant is applied if sealant is visible on all edges of the plate, both above and below.Too much sealant is applied if sealant begins to extend onto the plate surface.Align plate with v-grooves in the middle of the opening.If attachments are included, align attachments with supporting frame cutouts.Slide two parallel frame supports until they make contact with the plate, then slide the opposing two frame supports until they also make contact with the plate.Apply light finger pressure on all sides to ensure proper plate contact with the frame.Too much pressure may induce unwanted compressive stresses in the plate or even cause bending in thinner plates.Tighten bolts to approximately 9 Nm torque, while ensuring the frame remains perfectly flat.Tightening should occur progressively by tightening each bolt in a diagonal pattern to ensure even pressure on the plate.Overtightening of the bolts tends to rotate and misalign the frame elements, causing the plate to sit poorly in the v-grooves.9 Nm has been found to hold the frame together snuggly during testing without loosening.Verify flatness of the frame after tightening.Tap the plate lightly with your finger to ensure that enough sealant has been applied and that no direct contact is made between the plate and the frame.The setup can now be used to test the modal properties of simply supported plates using a variety of excitation techniques, such as a speaker, shaker or impact hammer, in combination with a visualization and/or measurement method, such as Chladni patterns, accelerometers or a scanning laser vibrometer.Metal, polymer and composite plates have been tested successfully using this method.However, the recommended setup might need to be modified to account for other plate materials.Two simple testing methods are described below.Any number of experimental modal analysis techniques can be used with the setup described above.Two relatively low-cost techniques that have been used 
successfully are detailed below.Testing Method 1: Speaker and Chladni Pattern Method.A speaker is placed at 3 mm below the underside of the plate, measured from the top of its periphery, and is set to emit sound at about 40 W. Finely divided material, such as cake sprinkles, is sprinkled on the plate.Computer-generated sine waves are amplified and emitted below the plate, which vibrates and displaces the sprinkles, forming distinct patterns along vibration nodal lines at resonant frequencies of the plate.A receiver, such as a sound level meter, measures acoustic sound intensity while sine wave frequencies are swept.A natural frequency is reached when the receiver displays a local maximum value and when the sprinkles assume a Chladni pattern.Practically, this method has been found to be limited to about 1200 Hz for a 50 W nominal speaker.The frame shown in Fig. 6 is also limited to using a 50 W speaker, as this is the smallest speaker diameter that fits below this frame.In theory, higher frequency measurements could be achieved on a larger plate with a more powerful speaker.However, proper sound insulation would be required as frequencies above 1200 Hz are uncomfortable for the operator, even while wearing ear protection.The speaker and Chladni pattern method setup is described in Fig. 12 and Table 1.Testing Method 2: Impulse Hammer and Accelerometer Method.Accelerometers are glued to the surface of the plate using hot glue and the plate surface is impacted using an impulse hammer.Appropriate locations for placing accelerometers and impacting the plate when measuring the six lowest modes of vibration are shown in Fig. 16.These locations should be chosen to be simultaneously as close to as many antinodes under consideration as possible, while avoiding nodes completely.Both the impact force using a load cell and the response using accelerometers are measured.The signals are then processed by a signal conditioner and captured by a data acquisition device connected to a computer for further processing and modal extraction.When impacting the plate, ensure the hammer remains orthogonal to the plate and restrict the force to a range of 25 to 65 N for plates that fit within the frame shown in Fig. 6.This range has been found to provide optimal signal-to-noise ratios for extracting modal information.However, an appropriate impact force range will need to be determined for plates of other dimensions.Care must be taken not to impact the plate with such a large force that its resulting deformation would cause the plate to separate from the removable sealant.Signal processing should be considered carefully to ensure accurate results.Accelerometer and impulse hammer signals should be isolated.Logarithmic decrements for modal analysis can be used to convert impulses and accelerations into natural frequencies.The Least-Squares Complex Exponential algorithm was found to be best suited for this experiment.Finally, the window size must be optimized to obtain the cleanest results.The setup is described in Fig. 13 and Table 2.6061-T6 aluminum plates were used to validate the method.Seven plates were tested and two configurations were used: simple flat rectangular plates and a ribbed plate where the rectangular cross-section rib is centered along the plate’s length, as shown in Fig.
14.The ribbed plate was manufactured as one piece.Material properties of these aluminum plates are given in Table 3.Plate configurations and their dimensions used to validate the method are provided in Table 4.Tested plates range in mass between 0.10-0.24 kg.The supporting frame, as shown in Fig. 6, has a mass of approximately 3.2 kg.This provides a plate to frame mass ratio range of 0.03-0.08, helping to minimize effects of the frame on the plate’s natural frequencies and well below the value determined to be adequate by Robin et al. .To capture the theoretical natural frequencies of the simply supported ribbed plate, a finite element analysis model was created and simulated in COMSOL 5.1 using higher-order shear deformation theories.Results of the FEA model are given in Fig. 15.However, this model does not include the additional mass of the two accelerometers which would effectively lower the natural frequencies.Therefore, this model is referred to as the base model.A second augmented model was also created and simulated with accelerometer masses at their experimental location shown in Fig. 16.A comparison of the results for both the base and augmented FEA models to their experimental counterparts is provided in Table 6.In this case, Base = theoretical frequencies for each mode calculated using the FEA model and Augmented = theoretical frequencies for each mode calculated using the FEA model with two additional accelerometer masses located as in Fig. 16.Sp/Ch and Im/Ac are the same as in Table 5.For the ribbed plate, the average error when comparing experimental results to theoretical natural frequencies was 0.31% with a standard deviation of 0.39.In the case of the simple flat plates, the average error was 2.3% with a standard deviation of 1.72.In all cases, thicker plates provided better and more consistent results.This is likely due to the reduced impact that compressive stresses from mounting the plates in the supporting frame had on their material properties.Moreover, the largest errors occurred for the lowest frequency, likely due to the significant increase in resonance energy for this mode of vibration.Only a few outliers can be identified in the data and can likely be attributed to normal experimental and material variability.Overall, results confirm that the proposed setup provides reliable experimental measurements when compared to theory and should be used when multiple measurements must be taken relatively quickly or plates must be reused for other experiments or purposes.The use of simply supported boundary conditions is generally favored in the theoretical modeling of vibrating plates due to their mathematical simplicity, as well as the availability of exact analytical solutions.Conversely, free and clamped boundary conditions are favored during experimental studies due to the ease in which they can be accurately represented experimentally.Unfortunately, the literature available for the experimental measurement of simply supported rectangular plates is sparse, as these are rather more difficult to model under real experimental constraints.One of the earliest attempts at experimentally modeling simply supported boundary conditions was performed by Hearmon on wooden plates .In this study, a wooden frame was constructed using large v-grooves and long bolts which spanned the width of the plate in each direction.A similar method was developed by Amabili using sliding slots to accommodate various plate sizes and silicon to account for span shortening .A similar experimental 
setup is discussed by Guo et al. and used by Dumond and Baddour , the latter suggesting the use of removable sealant instead of silicon so that time between experimental trials is reduced and plate samples can easily be reused.Unfortunately, the edges of their plates had to be tapered to a point to properly sit in the v-grooves.The proposed method is based on the early work of Dumond and Baddour, providing the exact protocol for reproducing such experiments, but allowing plates to keep their original square edges and suggesting improvements, such as slots for surface mounting additional elements onto the plate.Alternatively, Barnard and Hambric propose that the supporting frame and the plate be made from one thick block of aluminum, where the plate is machined down in thickness and a groove is provided at its perimeter by only leaving a very thin webbing to support the plate .While this approach is interesting, it does require that the plate and support be made of the same material and does not allow for span shortening.Alternatively, Hoppmann and Greenspon suggest clamping the edges of a larger plate and cutting grooves in the plate itself by notching to a depth of 80% of the total plate thickness at a perimeter line within the supported region which defines the actual plate dimensions of interest .Obvious difficulties arise in using these plates for other purposes.Using a completely different approach, Ochs and Snowdon developed an experimental setup using a spring-steel skirt and support strip fixed to the plate using jewelers screws .The spring-steel skirt is slotted to allow air to move freely around the vibrating plate and to allow for height adjustments of the setup.Pan et al. and Yoon and Nelson discuss similar setups.Champoux et al. provide guidelines and calculations for selecting the stiffness of the skirting material which are based on the properties of the plate.Because of the difficulty inherent in using screws to fasten the edge of thin plates and the potential effects screw holes have on the results, Robin et al. suggest the use of a permanent adhesive instead .However, the use of permanent adhesive limits the ability to use the plate following the vibration experiment.Moreover, a new spring-steel skirt must be defined and created for every plate used.In all cases, experimental results compare well to theoretical values and have been shown to be satisfactory.Although, the ease of experimental implementation varies greatly between methods.The method described herein provides a low-cost, simple, accurate and non-destructive way of experimentally measuring the modal properties of thin, simply supported plates and can be used for quick validations of models and designs without modification for multiple trials and varying plate properties.
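As a complement to the comparison between measured and FEA-predicted frequencies described above, the following minimal sketch computes the natural frequencies of a thin, simply supported rectangular plate from classical Kirchhoff plate theory, f_mn = (π/2)[(m/a)² + (n/b)²]·sqrt(D/(ρh)) with D = Eh³/(12(1 − ν²)), and reports the percent error against a set of experimental values. It is an illustration only, not the higher-order shear-deformation FEA model used in the study: the plate dimensions and the "measured" frequencies are assumed for the example, and the 6061-T6 properties are typical handbook values rather than those of Table 3.

```python
# Minimal sketch (assumed inputs, not the study's FEA model): classical
# Kirchhoff thin-plate natural frequencies for a simply supported rectangular
# plate, compared with hypothetical measured values.
import math

E, nu, rho = 68.9e9, 0.33, 2700.0   # typical 6061-T6: modulus (Pa), Poisson ratio, density (kg/m^3)
a, b, h = 0.40, 0.30, 0.003         # assumed plate length, width, thickness (m)

D = E * h**3 / (12.0 * (1.0 - nu**2))   # flexural rigidity (N*m)

def f_mn(m, n):
    """Natural frequency (Hz) of mode (m, n) for a simply supported thin plate."""
    return (math.pi / 2.0) * ((m / a)**2 + (n / b)**2) * math.sqrt(D / (rho * h))

# Six lowest modes, sorted by frequency
modes = sorted(((f_mn(m, n), m, n) for m in range(1, 4) for n in range(1, 4)))[:6]

# Hypothetical experimental values for the same modes (Hz), for illustration only
measured = [fm * (1.0 + err) for (fm, _, _), err in
            zip(modes, (0.02, -0.01, 0.015, 0.005, -0.02, 0.01))]

for (f_th, m, n), f_exp in zip(modes, measured):
    err = 100.0 * (f_exp - f_th) / f_th
    print(f"mode ({m},{n}): theory {f_th:7.1f} Hz, 'measured' {f_exp:7.1f} Hz, error {err:+.2f}%")
```

For the thin flat plates considered here, errors of a few percent between such closed-form predictions and measurements are consistent with the v-groove and sealant providing an adequate approximation of the ideal simply supported boundary.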
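Similarly, the core of the signal processing behind Testing Method 2 can be illustrated with a synthetic single-mode response: the natural frequency is read from the FFT peak of the decay signal, and the damping ratio is estimated from the logarithmic decrement of successive peaks. The signal below is simulated with assumed values (120 Hz, 1% damping), not measured data, and this simple single-mode sketch stands in for the multi-mode Least-Squares Complex Exponential extraction used in the study.

```python
# Minimal sketch of the frequency and damping estimation idea, applied to a
# synthetic single-mode free decay (assumed 120 Hz, 1% damping), not to
# measured accelerometer data.
import numpy as np

fn, zeta = 120.0, 0.01          # assumed natural frequency (Hz) and damping ratio
fs, duration = 5000.0, 2.0      # sampling rate (Hz) and record length (s)
t = np.arange(0.0, duration, 1.0 / fs)
wn = 2.0 * np.pi * fn
wd = wn * np.sqrt(1.0 - zeta**2)
x = np.exp(-zeta * wn * t) * np.cos(wd * t)   # free-decay response

# Natural frequency from the FFT peak (skipping the DC bin)
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
f_peak = freqs[np.argmax(spectrum[1:]) + 1]

# Damping ratio from the logarithmic decrement of two successive positive peaks
peaks = [i for i in range(1, len(x) - 1)
         if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 0]
delta = np.log(x[peaks[0]] / x[peaks[1]])
zeta_est = delta / np.sqrt(4.0 * np.pi**2 + delta**2)

print(f"frequency from FFT peak: {f_peak:.1f} Hz (true {fn} Hz)")
print(f"damping from log decrement: {zeta_est:.4f} (true {zeta})")
```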
The experimental study of vibrating plates having simply-supported boundary conditions can be difficult to achieve due to the complexity of preventing translation, but allowing rotation along all boundaries simultaneously. Only a few methods have been proposed, but all are either time-consuming to set up and involve customization of the test rig for each plate or do not allow the plate to be reused for other purposes. The method described in this paper offers a low-cost, simple, accurate and non-destructive way of experimentally measuring the modal properties of thin, simply supported plates and can be used for quick validations of models and designs without modification for multiple trials and varying plate properties. The key attributes of this method include: . An adjustable sliding support frame which can be made of a distinct material from the plate and which can accommodate variations in plate geometry and properties without modification. Removable flexible sealant applied in a v-groove on the supporting frame which can be easily used to fix and support the plate according to the simply-supported boundary conditions. A low-profile design, which can be used to accommodate most experimental testing methods for determining modal properties of vibrating plates.
768
UV light absorption parameters of the pathobiologically implicated bilirubin oxidation products, MVM, BOX A, and BOX B
The bilirubin oxidation products, MVM (4-methyl-3-vinylmaleimide), along with BOX A ([4-methyl-5-oxo-3-vinyl-(1,5-dihydropyrrol-2-ylidene)acetamide]) and BOX B (3-methyl-5-oxo-4-vinyl-(1,5-dihydropyrrol-2-ylidene)acetamide), have been implicated in the deleterious effects associated with subarachnoid hemorrhage.The detection method utilized to determine the presence of these compounds is UV absorption associated with reversed phase-HPLC .However, the UV absorption profile and/or λmax of MVM has not been reported for the solvent utilized in their detection; reports are limited to CH3OH .Also, reports of these absorption characteristics are limited for BOX A to H2O and CH3CN, and for BOX B to H2O .Further, extinction coefficients for MVM and BOX A are limited to CH3OH and CH3CN, respectively , and are lacking for BOX B. Thus, it is anticipated that the present data will assist in the detection and quantitative determination of BOXes levels in biologic samples from SAH, as well as in other pathobiologies associated with elevated bilirubin.UV absorption spectra of BOX A, BOX B and MVM were determined in CHCl3, CH2Cl2, CH3CN, 15% CH3CN plus 10 mM CF3CO2H, H2O, and 0.9% NaCl.At longer wavelengths, BOX A λmax's were little affected by the solvent, ranging from 295–297 nm. With BOX B, less polar solvent yielded λmax's of lower wavelength, with values ranging from 308–313 nm. With MVM, less polar solvent yielded λmax's of higher wavelength, with values ranging from 318–327 nm. These λmax values corresponded to previously reported λmax's at longer wavelengths, as limited to the following solvents: BOX A of 300 nm in H2O and 295 nm in CH3CN , BOX B of 310 nm in H2O , and MVM of 317 and 319 nm in CH3OH . Calculated ε's for BOX A, BOX B, and MVM at their respective λmax's in CHCl3, CH2Cl2, CH3CN, 15% CH3CN plus 10 mM CF3CO2H, H2O, and 0.9% NaCl ranged from 10,600–13,000, 19,000–24,200, and 2,100–2,820 L/mol-cm, respectively.The ε determined using the actual amount of Z-BOX A, at λmax 295 in CH3CN, was 17,000 L/mol-cm .Thus, the present BOX A ε likely represents a low estimate.The estimated MVM ε is similar to that reported for MVM at λmax 317 and 319 nm in CH3OH of 2,300 and 2,290 L/mol-cm .Bilirubin solubilization was performed at room temperature in an aluminum foil wrapped vessel due to the reported light sensitivity of BOX A, BOX B, and MVM .One or more 50 mg portions of bilirubin were incubated in 25 ml 0.2 M NaOH with occasional vortexing over 24–72 h .The dark red bilirubin solution was then buffered by addition of 7.5 ml of 0.5 M Tris base before neutralization with 0.4 ml of 12.3 M HCl to pH 7.0.Overtitration of the dark red solution to lower pH resulted in a green solution.The neutralized buffered bilirubin solution was immediately used for oxidation with H2O2.With prolonged storage, bilirubin precipitated from this supersaturated solution.As performed under dim ambient light and in an unlit fume hood, the neutral buffered solution was oxidized for 24 h with 8% H2O2.For MVM synthesis, 0.5 M FeCl3 was added to the bilirubin solution prior to H2O2 and the oxidation allowed to proceed for 10 min.Each aqueous reaction mixture was extracted twice with 6 ml CHCl3 or CH2Cl2 and the combined organic phase extracted once with 1 ml water, evaporated to ~2 ml at <50 °C and atmospheric pressure, transferred to microfuge tubes, and evaporated to near dryness.Additional ~2 ml aliquots of extract were repeatedly added, each followed by evaporation to near dryness.The final addition of washed extract was evaporated to dryness and reconstituted in 1 ml 1% CH3CN for purification by reversed phase-HPLC.RP-HPLC was used for both
purification and analysis of the bilirubin oxidation products.As performed under dim light, organic solvent extracts of BOX A and BOX B, as well as MVM, reconstituted in 1% CH3CN were diluted as necessary into the RP-HPLC starting buffer of 2% CH3CN:98% H2O containing 10 mM CF3CO2H.Injections of 1.0–1.5 ml were made onto a Vydac 218TP C-18 5 µm column with guard column equilibrated with 2% CH3CN containing 0.01 M CF3CO2H.The guard column was necessitated by the detection of a small amount of residual H2O2 in the CHCl3 and CH2Cl2 extracts of the bilirubin-H2O2 reaction mixtures.An attempt to remove the H2O2 with CH3CH2OH addition to the reaction mixtures actually caused a 10-fold increase in the amount of substrate detected as H2O2.H2O2 was not detected in RP-HPLC-purified oxidation products.The column was eluted with a continuous gradient of 0.5% CH3CN/min over 32 min, followed by steeper gradients and higher CH3CN concentration for washing the system between runs.Eluates were monitored from 210–350 nm using a diode array spectrophotometer and flow cell and were collected in aluminum foil wrapped test tubes.RP-HPLC of the combined products of bilirubin-H2O2 reaction mixtures with and without Fe3+ yielded three peaks with retention times at 26.0, 28.7, and 31.2 min, respectively.These retention times corresponded to eluting CH3CN concentrations of 12.8, 14.4, and 15.6%, respectively.UV absorption at other retention times was not detected at 297, 310, and 327 nm, i.e., at the longer wavelength λmax's of the compounds with retention times of 26.0, 28.7, and 31.2 min, respectively, as well as at 223 nm, indicative of a purified preparation.This relative order of retention time of MVM, BOX A, and BOX B differs from that previously reported by another laboratory, which was BOX A, BOX B, and then MVM .While this difference in relative order of retention time may be due to differences in column properties, it should also be considered that the present inclusion of CF3CO2H in the solvent resulted in ion pairing with BOX A and BOX B.From the bilirubin-H2O2 oxidation in the absence of Fe3+, the ratio of MVM:BOX A:BOX B formed at their respective λmax's was 0.10 ± 0.03:1.0:0.95 ± 0.05, respectively.Several minor peaks were also observed.Incubation at times shorter or longer than 24 h did not result in additional MVM formation.Yields after purification of BOX A and BOX B were ~1% each, based on starting material and measured by UV spectroscopy.From the bilirubin-H2O2 oxidation in the presence of Fe3+, the ratio of MVM:BOX A:BOX B formed at their respective λmax's was 1.0:0.05 ± 0.01:0.04 ± 0.01, respectively.Several minor peaks were also observed.Incubation for 1, 5, 30, 45, and 60 min did not increase BOX A and BOX B formation, while MVM formation was reduced.The reaction yielded ~5% MVM, based on starting material and measured by UV spectroscopy.Increased MVM formation with Fe3+ inclusion in the bilirubin-H2O2 reaction mixture is consistent with the dependency of MVM formation following H2O2 oxidation of ferriprotoporphyrin IX on the chelated iron as well as the oxidation of bilirubin by CrO3 .Present yields are generally consistent with earlier reports of <5% and 4% formation of BOX A, BOX B and MVM .While one of these reports also demonstrated significant MVM synthesis, the increased MVM formation was possibly due to a somewhat greater H2O2 concentration in the reaction mixture with bilirubin.On the other hand, highly variable amounts of MVM were formed by oxidation of bilirubin with ~10% H2O2 .Hydrogen
peroxide oxidation of biliverdin instead of bilirubin did not increase the yield of MVM.After purification, BOX A, BOX B, and MVM samples shielded with aluminum foil from light were stable for at least 6 mo at −20 °C and for 24 h at room temperature in 14.6% CH3CN, as determined by RP-HPLC; i.e., no loss of compound or detection of additional absorption peaks through the UV absorption spectrum.Removal of the aluminum foil and exposure of BOX A, BOX B, and MVM to ambient light for 24 h decreased recovery by 10%, 15%, and 5%, respectively, and resulted in the appearance of peaks at 18.3 min and 20.8 min with a ratio of 1.13:1, and with λmax's of 288 and 296 nm, respectively.UV spectra were recorded on a SpectraMax M5.For compound identification and ε determinations, analytic samples of BOX A, BOX B, and MVM were loaded onto a C18 separation cartridge, washed with 1 ml D2O and eluted with 1.5 ml, 80% CD3CN.Samples were then evaporated to dryness under N2 and reconstituted in 1 ml CD3CN.BOX A, BOX B, and MVM chemical shifts and coupling constants were determined on a DMX-500.Extinction coefficients at the respective λmax's for BOX A, BOX B, and MVM were determined by titration in CH3OH and integration of signals relative to CH3OH under conditions of long recycle delay, and determination of UV absorption.1H-NMR spectroscopy yielded chemical shifts and coupling constants for BOX A, BOX B, and MVM consistent with previous reports.Samples for MS were prepared by evaporation of compounds in aqueous CH3CN to dryness in an N2 stream at 40 °C, followed by reconstitution in 10% CH3CN/90% H2O containing 0.2% HCO2H.Lyophilization was avoided due to apparent loss of compounds.Samples obtained from RP-HPLC were infused into a Thermo Scientific LTQ-FT™ hybrid MS consisting of a linear ion trap and a Fourier transform ion cyclotron resonance MS. The standard electrospray ionization source was operated in a profile mode for both positive and negative ions as indicated.The only possible elemental compositions at 2 ppm mass error, and even at 5 ppm, allowing 0–10 nitrogens, 0–15 oxygens, 0–30 carbons, and 0–60 hydrogens, are those of BOX A and BOX B, and of MVM.With MVM as the protonated molecular ion, the observed mass was m/z 138.05498 with a mass error of 180 ppb.For MVM, MS also suggested the apparent presence of the plastic antioxidant/stabilizer 1,10-bis, resulting from carrying out the FeCl3-bilirubin-H2O2 oxidation in a polypropylene vessel.
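Since detection and quantitation of the BOXes and MVM ultimately rest on UV absorbance, a short worked example of the Beer-Lambert relation (A = εcl, hence c = A/(εl)) may be useful. The sketch below is an illustration only: the absorbance readings and the 1 cm path length are assumed, and the ε values are simply representative points within the solvent-dependent ranges reported above.

```python
# Minimal worked example of Beer-Lambert quantitation, A = epsilon * c * l,
# rearranged to c = A / (epsilon * l).  Absorbance readings and the 1 cm path
# length are hypothetical; epsilon values are representative of the reported
# solvent-dependent ranges, not measurements for a specific solvent.

path_length_cm = 1.0  # assumed path length

compounds = {
    # name: (lambda_max in nm, epsilon in L/mol-cm, assumed absorbance)
    "BOX A": (296, 12_000, 0.240),
    "BOX B": (310, 21_000, 0.420),
    "MVM":   (322, 2_500, 0.050),
}

for name, (lam, eps, absorbance) in compounds.items():
    conc = absorbance / (eps * path_length_cm)  # concentration in mol/L
    print(f"{name}: A({lam} nm) = {absorbance:.3f} -> c = {conc * 1e6:.1f} umol/L")
```

Note that, because the present BOX A ε likely represents a low estimate, a concentration computed this way for BOX A would be correspondingly overestimated.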
The formation of the bilirubin oxidation products (BOXes), BOX A ([4-methyl-5-oxo-3-vinyl-(1,5-dihydropyrrol-2-ylidene)acetamide]) and BOX B (3-methyl-5-oxo-4-vinyl-(1,5-dihydropyrrol-2-ylidene)acetamide), as well as MVM (4-methyl-3-vinylmaleimide) were synthesized by oxidation of bilirubin with H2O2 without and with FeCl3, respectively. Compound identity was confirmed with NMR and mass spectrometry (MS; less than 1 ppm, tandem MS up to MS4). UV absorption profiles, including λmax, and extinction coefficient (ε; estimated using NMR) for BOX A, BOX B, and MVM in H2O, 15% CH3CN plus 10 mM CF3CO2H, CH3CN, CHCl3, CH2Cl2, and 0.9% NaCl were determined. At longer wavelengths, λmax's for 1) BOX A were little affected by the solvent, ranging from 295–297 nm; 2) BOX B, less polar solvent yielded λmax's of lower wavelength, with values ranging from 308–313 nm, and 3) MVM, less polar solvent yielded λmax's of higher wavelength, with values ranging from 318–327 nm. Estimated ε's for BOX A and BOX B were approximately 5- to 10-fold greater than for MVM.
769
A numerical analysis of a composition-adjustable Kalina cycle power plant for power generation from low-temperature geothermal sources
Reducing fossil fuel consumption and greenhouse gas emissions is particularly important for us to ensure a sustainable future.Power generation from a variety of renewable heat sources such as geothermal and solar thermal energy could make an important contribution to the decarbonisation of our economy .In particular, low-temperature geothermal energy is being used increasingly for power and heat generation .Power cycles utilising low temperature heat sources have been intensively studied and well documented in the past several decades , amongst which organic Rankine cycles and Kalina cycles are considered to be two most important technologies .In 1984, Kalina proposed a power cycle using a binary mixture as working fluid to generate power from heat source with a relatively low temperature, denoted as Kalina cycle later on .The Kalina cycle is essentially a further development of Rankine cycle.One key difference between them is that a Kalina cycle uses a mixture rather than a pure working fluid, so that isobaric evaporation and condensation processes occur under changing temperatures and the mixture composition varies throughout the cycle.Compared with a Rankine cycle, the efficiency of a Kalina cycle can be increased due to a close temperature match with heat transfer fluids in the evaporator and condenser.For instance, a Kalina cycle system using an ammonia-water mixture as the working fluid to generate power from the waste heat of a gas turbine achieved a thermal efficiency of 32.8% .A Kalina power plant normally uses components similar to those for constructing a conventional steam power plant.Some investigations showed that a Kalina cycle can achieve a better thermal efficiency than ORC systems .The Kalina cycle attracted considerable attention in the past decades.Fallah used an advanced exergy method to analyse a Kalina cycle for utilising a low-temperature geothermal source .Cao et al. investigated a biomass-fuelled Kalina cycle system with a regenerative heat exchanger, and found the net power output and system efficiency increases as the temperature within the separator increases .The performance of a KCS-11 system for solar energy application has also been studied.It was reported that the ammonia mass fraction was an important system operation parameter and should be optimised to reduce the system’s irreversibility .Recently, Yu et al. studied a combined system consisting of a Kalina power cycle and an ammonia absorption cooling cycle, of which the cooling to power ratio can be adjusted over a large range.Their theoretical results showed that the overall thermal efficiency could be increased by 6.6% by combining the two cycles in this way .Wang et al. studied a flash-binary geothermal power generation system using a Kalina cycle to recover the heat rejection of a flash cycle .The optimised results showed that the ammonia mass fraction, the pressure, and the temperature at the inlet of the turbine have significant effect on system’s performance.Hettiarachchi et al. studied the performance of the KCS-11 Kalina cycle system for utilising low-temperature geothermal heat sources and found an optimum ammonia concentration exists for a given turbine inlet pressure .Aiming at low-temperature heat sources, Kalina et al. proposed a power cycle which was later named KCS-34 , based on which a low-temperature geothermal power plant was built in Husavik, Iceland in 2000 .Nasruddin et al. 
simulated a KCS-34 Kalina cycle using Cycle Tempo 5.0 software and compared it with the operation data of the Husavik power plant, showing a good agreement .Later, Arslan studied the performance of a KCS-34 Kalina cycle system using an artificial neural network and life cycle cost analysis, and found that the most profitable condition was obtained when the ammonia mass fraction was in the range between 80% and 90% .In practice, the expansion ratio of the turbine for KCS-34 cycle is relatively high and a multi-stage turbine is required.However, Lengert changed the location of the recuperator in a KCS-34 Kalina cycle and proposed a new power cycle, i.e., the so-called KSG-1 patented by Siemens.It can achieve high cycle efficiency and only requires a single-stage turbine .Later on, Mergner and Weimer compared the thermodynamic performances between a KSG-1 and KCS-34 for geothermal power generation.The results showed that the KSG-1 achieved a slightly higher efficiency than the KCS-34 .The architectures of KCS-11, KCS-34, and KSG-1 are compared and shown in Fig. 1.In the past decade, different approaches have been proposed to further improve the efficiency of Kalina cycle power plants.Ibrahim and Kovach studied a method for controlling the temperature in the separator to adjust the ammonia mass fraction at the inlet of the turbine, and found that this method can improve the cycle’s thermal efficiency .Nguyen et al. developed a Kalina split-cycle that had a varying ammonia concentration during the preheating and evaporation processes .He et al. studied two modified KCS-11 systems, which used a two-phase expander to replace a throttle valve .Hua et al. investigated the transient performance of a Kalina cycle for high-temperature waste heat recovery, which can regulate the concentration of the working fluid mixture.This method controls the on/off state of two valves to maximise power generation when the temperature of the waste heat source fluctuates.Controlling the concentration of the working solution adjusts the turbine inlet pressure.It was reported that the cycle’s thermal efficiency was 12.8% greater than that of a conventional method .Recently, Mlcak and Mirolli proposed a method to adjust the ammonia concentration according to the cooling source temperature to improve cycle efficiency .A low-pressure separator is used at the upstream of the condenser to separate the two-phase mixture into an ammonia-rich vapour flow and an ammonia-lean liquid flow.Then, a drain pump is used to control the mass flow of the lean liquid mixture so that the ammonia mass fraction of the basic solution entering the condenser can be regulated.A density sensor is installed at the maximum pressure position of the basic solution to monitor the ammonia mass fraction in real time, which is sent to the controller as a feedback signal.The thermodynamic principle of the proposed composition-adjustable Kalina cycle is considered to be technically feasible, but no details have been provided with regard to the implementation of such a power cycle and the potential improvement of the cycle’s thermal efficiency under real climate conditions.Moreover, no further research has been reported on this subject according to our literature review.Apparently, there is a need for more insights of the proposed composition-adjustable Kalina cycle to further assess its technical and economic viabilities.For this reason, this paper carried out a comprehensive numerical analysis of a composition-adjustable KSG-1 Kalina cycle power plant.The 
main objective is to answer several important questions as follows: How can it be implemented?, How much can it improve the annual average thermal efficiency under various climate conditions?, What are the key factors affecting its performance?,Moreover, although most Kalina cycle power plants in operation usually use water-cooled condensers, large quantity of water may be not available or too costly, especially for an inland area .It is then necessary to use air-cooled condensers, which are more sensitive to changing ambient temperatures.In order to maximise the effect of the ambient temperature change on the cycle’s performance, an air-cooled condenser is employed for the system investigated in this research.A theoretical model is firstly established, based on which a numerical code is developed.The effect of ammonia concentration on the system’s performance is analysed.A case study based on Beijing’s climate data has been carried out to demonstrate the performance improvement of the composition-adjustable Kalina cycle system.Finally, a brief performance comparison is performed for various types of climate conditions.The results show that the composition-adjustable Kalina cycle system can remain in the high-efficiency regions when the ambient temperature varies.Therefore, the system’s performance can be improved.Based on the concept proposed in a recent patent , a composition-adjustable KSG-1 Kalina cycle for low-temperature geothermal power generation is modelled in this study, of which the system architecture is shown in Fig. 2.An ammonia-water mixture is used as the working fluid.As ammonia and water have very different boiling temperatures, the gliding temperature of ammonia-water mixture is large and it can be used to decrease the irreversible losses during heat transfer processes in the condenser and evaporator.Fig. 3 shows the bubble and dew lines of ammonia-water mixture at a pressure of 2 MPa.When the ammonia mass fraction is 0.8, the bubble and dew temperatures are around 60.3 and 147.3 °C, respectively; the corresponding glide temperature is as large as 86.9 °C.As shown in Fig. 
2, the basic solution at the saturated liquid enters Tank 1.A density sensor is installed at its outlet to measure the density of the ammonia-water mixture so that its composition can be deduced and used as a feedback signal for composition control.The low-pressure basic solution at state 3 is pressurised by Pump 1 and it turns into subcooled liquid at state 4.The basic solution is heated to state 5 by the recuperator.In the evaporator, the basic solution is further heated to a two-phase state 6 by a geothermal brine.For the convenience of comparison with the data in literature , the temperature of the brine water is set as 120 °C in this research.As the temperature is not high enough for the brine water to fully evaporate the basic solution, Separator 1 has to be used to separate the two-phase solution into an ammonia-rich saturated vapour mixture of state 7 and an ammonia-lean saturated liquid mixture of state 8.The high-enthalpy vapour mixture expands in the turbine and turns into a low-pressure mixture at state 9.Meanwhile, the high-pressure liquid at state 8 is throttled through the expansion valve to state 10 to reduce its pressure.Subsequently, these two low-pressure flows are mixed to a two-phase state 11 in Mixer 1.It then flows into the recuperator, where the temperature of the low-pressure ammonia-water mixture decreases further after transferring its heat to the high-pressure side.In order to improve the condensation process of ammonia-water mixture in the condenser, Separator 2 is employed to separate the two-phase flow into saturated vapour and saturated liquid.The liquid stream from Tank 2 is then pressurised by Pump 2 and sprayed into Mixer 2, further condensing the ammonia-rich vapour stream.The mixture is cooled and condensed in the condenser, turning into the saturated liquid.The corresponding T-s and h-x diagrams are given in Fig. 4, where the numbers are the corresponding states as shown in Fig. 2.The black dashed lines represent the bubble lines of states 8 and 14, respectively; while the yellow dashed lines are the dew lines of states 7 and 13, respectively.It can be seen that the temperature glides during the processes 5-6, 11-12, and 17-1.For the composition-adjustable Kalina cycle system as shown in Fig. 
2, the control unit detects the pressure of the work solution in Separator 1 as a feedback signal to regulate the mass flow rate of Pump 1.The temperature of the basic solution at the inlet of Pump 1 is detected to control the air mass flow rate.The density of the basic solution at the inlet of Pump 1 is adjusted by varying the mass flow rate of Pump 2.In this composition-adjustable Kalina cycle system, the density sensor is installed at the inlet of Pump 1 rather than at the outlet of Pump 1 as proposed in Mlcak and Mirolli’s patent .Hereby, the operating pressure of the density sensor can be reduced.Furthermore, in Mlcak and Mirolli’s patent, the system thermal efficiency is used as the performance indicator to optimise the ammonia mass fraction.However, in this paper, both the thermal efficiency and the exergy efficiency are used as the performance indicators.To evaluate the thermodynamic performance of the composition-adjustable Kalina cycle system, a mathematical model is established based on the mass and energy balance equations.Therefore, the exergy destruction rate for each component of the Kalina cycle system can be determined.In this study, some assumptions are made as follows: all the working processes are steady; the pressures at all the states during the operation are constant; the thermodynamic properties at states 1, 2, and 3 are the same; the states 14 and 15 are the same.In addition, the turbine is assumed to operate with a constant isentropic efficiency across the range of mass flow rates presented to it, and this assumption is believed to be feasible .That allows us to apply the model to analyse the cycle performance when the mass flow rates of the turbine varies as the ambient temperature changes from one season to another.The present research assumed that the brine from a geothermal production borehole has a fixed flow rate and temperature, and thus the design target is to maximise the power production.A program was developed using Matlab.In this model, the thermodynamic properties of the ammonia-water mixture need to be determined.These values are computed by Refprop 9.1 based on the Helmholtz free energy method.The uncertainties of the equation of state are 0.2% in density, 2% in heat capacity, and 0.2% in vapour pressure .The performance of a conventional KSG-1 Kalina cycle was computed at first.The main input parameters are listed in Table 1.The mass flow rate of the brine is set to 141.8 kg/s, the same as that used in the patent .As there are two-phase states of the ammonia-water mixture in the heat exchangers, where the temperature of the mixture glides with the heat transfer quantity, a pinch analysis method is used to determine the pinch point position and the overall heat transfer.In order to verify the model, the computed results are compared with some published data as listed in Table 2.The absolute errors of the heat transfer for all the heat exchangers are less than 1.6%, verifying the computing program developed in this research.The verified program is then used to analyse the performance of the tested composition-adjustable Kalina cycles.The air temperature data of Beijing in 2015 shown in Fig. 5 are used as the ambient conditions for this analysis.The cycle is optimised according to the average air temperature of each month.The flow chart for the optimisation algorithm is shown in Fig. 
6.Firstly, the temperature and mass flow rate of the brine are specified.The temperature and pressure of state 6 are then set.Next, the pressures at all other states are determined based on the pressure drops.Then, the minimum temperature of the ammonia-water mixture at state 1 is calculated according to the ambient temperature and the pinch point temperature difference.The minimum mass fraction of ammonia in the basic solution corresponding to P1 and T1 is computed.Based on the mathematical model, the working process for each component of the Kalina cycle system is computed.Both the thermal and exergy efficiencies are then determined.An iterative algorithm is used to compute the heat transfer within the recuperator, the evaporator, and the condenser according to a predefined pinch point temperature difference.In addition, according to the working pressures of Separators 1 and 2, the corresponding bubble and dew lines are determined.Consequently, the ammonia mass fractions at the outlets of the separators can be obtained based on the Lever rule of zeotropic mixtures .In this research, in order to study the effect of composition adjustment on the system’s performance, a conventional KSG-1 Kalina cycle represented by Cycle B was also simulated.The results are compared with the composition-adjustable Kalina cycle denoted as Cycle A.In Cycle A, the composition of its ammonia-water mixture can be adjusted according to the ambient temperature.As a benchmark, Cycle B, a conventional Kalina cycle, has a fixed composition of the working fluid mixture, and thus a fixed condensing temperature.To allow Cycle B to operate over a year when the ambient temperature fluctuates from the minimum in winter to the maximum in summer, it has to be designed based on the maximum ambient temperature in a year.For this reason, the maximum temperature over a year was selected to model Cycle B.As a result, it will have a constant thermal efficiency throughout a year.Based on the developed numerical model, an optimisation procedure as shown in Fig. 6 was used to analyse the effect of adjusting ammonia mass fraction in the basic solution on the performance of Cycle A.The ambient temperature in October at 13.76 °C was used as a sample case.Figs. 7–9 show the system’s performance as a function of the ammonia mass fraction in the basic solution xb.The mass flow rate of the basic solution is given in Fig. 7.It decreases from 57.69 kg/s to 31.67 kg/s as xb increases from 0.502 to 0.792.This can be attributed to that the evaporated mass flow of the basic solution increases with the increase of xb, but the heat transfer of the brine in the evaporator cannot increase proportionally.The temperatures at the inlet and outlet of the recuperator are shown in Fig. 7.The temperature at the inlet of Pump 1 decreases with the increase of xb due to a constant pressure at the inlet of Pump 1.The temperature difference at the inlet and outlet of the recuperator is small.The temperature at the outlet of the high temperature side of the recuperator has a similar tendency because the temperature at the inlet of the high temperature side decreases.The temperatures of the brine at the inlet and outlet of the evaporator are shown in Fig. 
7.For the convenience of comparing the results with some published data, the inlet temperature of the brine is fixed at 120 °C.The temperature of the brine at the outlet of the evaporator decreases gradually because the basic solution becomes easier to evaporate as xb increases, and thus the heat transfer in the evaporator also increases.Fig. 7 shows the temperatures in the turbine and Mixer 1.The temperature at the inlet of the turbine remains 107.3 °C, the same as the outlet of the evaporator.Because the temperature, the pressure, and the ammonia mass fraction of the work solution at the inlet of the turbine remain constant when xb increases, the temperature at the outlet of the turbine remains constant accordingly.The temperature of the ammonia-lean solution at the outlet of the expansion valve also remains constant.However, after these two streams are mixed in Mixer 1, the temperature at the outlet of Mixer 1 decreases gradually as xb increases.The mass flow rates at the outlet of Separator 1 are given in Fig. 8.As xb increases, the mass flow rate of the ammonia-rich work solution increases, but the mass flow rate of the ammonia-lean solution decreases.The mass flow rates at the outlet of Separator 2 are shown in Fig. 8, and they have similar tendencies to those shown in Fig. 8.As shown in Fig. 8, both the temperatures of the basic solution at the inlet and outlet of the condenser decrease as xb increases.It can be seen in Fig. 8, the mass flow rate of air increases significantly as xb increases.Accordingly, the air temperature at the outlet of the condenser drops gradually.The power consumption of each component is shown in Fig. 9.When the ammonia fraction xb increases, the power consumption of both Pumps 1 and 2 decreases gradually because the enthalpy of the solution decreases with the increase of xb if the outlet pressures of the pumps are kept constant.The power consumption of the fans increases rapidly because the mass flow rate of the air rises significantly as xb increases.The heat transfer rates within the evaporator and the condenser are shown in Fig. 9.Both of them increase evidently as xb increases.When xb is 0.502, the heat transfer rate of the evaporator and the condenser is 19.15 MW and 18.35 MW, respectively.However, when xb further increases to 0.792, they increase to 29.12 MW and 26.19 MW, respectively.The power output of the turbine and the net power output of the cycle are presented in Fig. 9.The total power consumption of the pumps and the condenser fans is also shown in this figure.The power output of the turbine rises as xb increases due to the increase of the mass flow rate of the work solution.The total power consumption also increases as xb increases, and it increases significantly when xb is above 0.79.As a result, the net power output firstly increases and then decreases.The maximum net power output occurs when xb is around 0.782.The cycle’s thermal and exergy efficiencies are shown in Fig. 
9.Both of them firstly increase and then decrease when xb increases from 0.502 to 0.792.The maximum thermal efficiency and exergy efficiency occur when xb is around 0.762 and 0.772, respectively.The ammonia mass fractions corresponding to the maximum points of the thermal and the exergy efficiencies are very close.The efficiencies of Cycles A and B defined in Section 4 were then computed according to the monthly average temperature throughout a year in Beijing.Cycle A is designed to match the ambient air temperature during a year.In spring or autumn, the ambient temperature is moderate, and it is represented as Ta1 in Fig. 10.The temperature of the liquid ammonia-water mixture at the outlet of the condenser,is denoted as Point A in Fig. 10.The temperature difference between state 1 and the ambient ΔT is constrained by the pinch point temperature difference.When the season shifts to winter, the ambient temperature decreases from Ta1 to Ta2.State 1 moves from Point A to Point B.During this shifting process, only the ammonia mass fraction is adjusted, while the condensation pressure of the ammonia-water mixture is kept as constant.If the season shifts to summer, state 1 will move from Point A to Point C according to the increase of ambient temperature from Ta1 to Ta3.Under these conditions, the condensation pressure of Cycle B is the same as Cycle A.The temperature at state 1 of Cycle B must be determined based on the maximum month-average temperature over a year.The thermal efficiency of Cycle A as a function of both xb and the ambient temperature is shown in Fig. 11.It can be seen that the ambient temperature has a strong effect on the cycle thermal efficiency.For a given value of xb, the thermal efficiency increases as the ambient temperature decreases.Fig. 11 gives the corresponding results of the thermal efficiency against xb for each month.The solid line represents the thermal efficiency for each month, while the dashed line represents the optimal operation line, i.e., maximum thermal efficiency.It can be seen that for each month the thermal efficiency first increases then decreases as xb increases.This is because the power output of the turbine increases as xb increases, but the power consumption of the cooling fans of the condenser increases too, especially when xb is high.The optimised xb based on the thermal efficiency is in the range of 0.603–0.95, and the corresponding thermal efficiency is in the range of 6.12–9.24%.Fig. 11 shows the thermal efficiency of Cycles A and B as a function of ambient temperature.The results of Cycle A corresponding to the optimal operation line of the thermal efficiency are shown in Fig. 11.The thermal efficiency of Cycle B is constant at 6.12%.However, the thermal efficiency of Cycle A increases from 6.12% to 9.24% because, as the ambient temperature decreases, the power output of the turbine increases significantly by matching the condensation temperature of ammonia-water mixture with the ambient air temperature.Fig. 11 shows the calculated exergy efficiency of Cycle A as a function of both xb and ambient temperature.The variation of the exergy efficiency has a similar tendency to that of the thermal efficiency as shown in Fig. 11.The heat source of the 2 MW Kalina power plant in Husavik is a low-temperature geothermal brine at 120 °C, and its xb of the Kalina cycle is 0.82.This case is denoted as a red1 line in Fig. 
11 and, and it is close to the optimal results at the ambient temperature of 2.94 °C according to the present simulation.It should be noted that, in this research, an air-cooled condenser is used instead, and its pinch point temperature difference is greater than that of a water-cooled condenser.Therefore, the corresponding ambient temperature is about 2 °C less than that of the Husavik’s geothermal power plant.The exergy efficiencies of Cycles A and B are given in Fig. 11.The exergy efficiency of Cycle A increases from 30.7 to 36.5% as the ambient temperature decreases from 26.6 to −1.3 °C.In contrast, the exergy efficiency of Cycle B decreases from 30.7 to 16.5% because the net power output of Cycle B is constant while the exergy of the brine increases as the ambient temperature decreases.In this section, we assume the system operates along the optimal operation line of the cycle’s thermal efficiency).In this case, the mass flow rate of the geothermal brine is fixed as 141.8 kg/s.The net power outputs corresponding to the maximum thermal efficiency are given in Fig. 12.The net power output increases significantly with the decrease of the ambient temperature.The average ambient temperature of Beijing in July reaches a maximum of 26.6 °C, and the corresponding net power output is 1.366 MW.In contrast, the lowest temperature is −1.3 °C in January, and the net power output is 3.145 MW, which is 2.3 times of that in July.This demonstrates the benefit of matching the cycle with ambient conditions by adjusting the composition of the mixture.The corresponding thermal efficiency and exergy efficiencies are shown in Fig. 12.As the ambient temperature decreases from the maximum to the minimum, the thermal efficiency increases from 6.12% to 9.24%.Accordingly, the exergy efficiency increases from 30.7 to 36.5%.Fig. 12 shows the optimised xb as a function of the ambient temperature, which increases as the ambient temperature decreases.As shown in Fig. 12, the corresponding density of the ammonia-water mixture decreases from 780 kg/m3 to 640 kg/m3 as the ambient temperature drops.Density sensors having an accuracy of ±0.1 kg/m3 are widely available in the market, which is sufficient for the real-time control of the ammonia mass fraction as required by the system modelled in this paper.Fig. 13 shows the power consumption of the pumps and the condenser fans.The power consumption of Pump 1 is of the same order as the fans, while the power consumption of Pump 2 is much less than them.As the ambient temperature decreases, the power of Pump 1 decreases from 134 kW to 114 kW, while the power consumption of the fans firstly increases and then decreases slightly.The power output of the turbine and the net power output of the cycle are shown in Fig. 13.Since the power output of the turbine is far greater than the total power consumed, the tendency of the net power output is similar to that of the power output of the turbine.The mass flow rates of the basic solution and the cooling air are shown in Fig. 13.As the ambient temperature drops, the mass flow rate of the basic solution decreases but the mass flow of the air increases.The mass ratios in the two separators are also given in Fig. 13.The mass ratio of Separator 2 is lower than that of Separator 1 because the operation temperature of Separator 1 is higher than that of Separator 2.Fig. 
13 shows the heat transfer between the geothermal brine and the ammonia-water mixture in the evaporator.As the ambient temperature decreases from 26.6 °C to −1.3 °C, the heat transfer increases from 22.34 MW to 34.03 MW.The temperature of the brine at the outlet of the evaporator is also given in this figure, and it decreases as the ambient temperature decreases.The heat transfer within the evaporator increases evidently due to the temperature drop at the inlet of the low-temperature side.As shown in Fig. 13, the heat transfer of the condenser increases as the ambient temperature decreases.It can be seen that the air temperature at the outlet of the condenser decreases as the ambient temperature decreases.The annual average thermal efficiency of Cycle A is 7.86%, and it is about 28.39% higher than that of Cycle B at 6.12%.On the other hand, the annual average heat transfer of the evaporator of Cycle A is also 26.23% higher than that of Cycle B.As a result, the annual average net power output of Cycles A and B is 2.267 and 1.366 MW, respectively.The former is 65.99% higher than the latter.The computed results of the Kalina cycle based on the annual average air temperature of Beijing are listed in Table 4.The thermal efficiency is 8.10% and the exergy efficiency is 24.27%.The corresponding exergy destruction rate for each component is shown in Fig. 14.The condenser causes the largest exergy destruction rate at 1726 kW, followed by the evaporator and the turbine.If the irreversibility loss of the condenser can be reduced further, the system performance can be further improved.The potential of performance improvement for different climate conditions are also evaluated.In addition to Beijing, four other locations are considered, including Lima; Husavik, Rome, and Turpan.Their monthly average ambient temperatures in 2015 are shown in Fig. 5 .The maximum and minimum monthly averaged temperatures, the annual mean temperature, and the annual temperature variation are also listed in Table 5.Based on Eqs.–, the calculated results of the five selected locations are shown in Table 5.The annual average improvement of the thermal efficiency is nearly proportional to the annual temperature variation.The larger the annual temperature variation, the higher the annual average improvement.Furthermore, the annual mean temperature also affects the performance improvement.A lower annual mean temperature leads to a higher thermal efficiency of Cycle B. 
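To make the annual comparison concrete, the sketch below recomputes the quoted Beijing figures from the standard definition of thermal efficiency, η_th = W_net/Q_evap, together with simple relative-improvement arithmetic. It is an illustrative cross-check using only the values reported above, not the Matlab model developed for this study, and small differences from the quoted percentages (28.39%, 65.99%) reflect rounding of the underlying values.

```python
# Minimal illustrative check (not the study's Matlab model): thermal efficiency
# from its standard definition, eta_th = W_net / Q_evap, and the relative
# improvement of the composition-adjustable Cycle A over the fixed Cycle B.
# All input numbers are values quoted in the text for the Beijing case.

def thermal_efficiency(w_net_mw, q_evap_mw):
    """Thermal efficiency = net power output / heat added in the evaporator."""
    return w_net_mw / q_evap_mw

# Cycle A in the hottest (July, 26.6 C) and coldest (January, -1.3 C) months
print(f"eta_th at 26.6 C: {thermal_efficiency(1.366, 22.34):.4f}")  # ~0.061, cf. 6.12%
print(f"eta_th at -1.3 C: {thermal_efficiency(3.145, 34.03):.4f}")  # ~0.092, cf. 9.24%

# Annual averages: composition-adjustable Cycle A vs fixed-composition Cycle B
eta_A, eta_B = 0.0786, 0.0612   # annual average thermal efficiencies
w_A, w_B = 2.267, 1.366         # annual average net power outputs (MW)
print(f"thermal efficiency improvement: {100.0 * (eta_A - eta_B) / eta_B:.1f}%")  # ~28.4%
print(f"net power improvement: {100.0 * (w_A - w_B) / w_B:.1f}%")                 # ~66.0%
```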
For example, the improvement in thermal efficiency for Husavik is slightly less than that for Lima. In a composition-adjustable Kalina cycle, there exists an optimal ammonia mass fraction of the basic solution for a given ambient temperature, leading to a maximum thermal efficiency. Below the optimal value, the mass flow rate of the working solution decreases as the ammonia mass fraction decreases, leading to lower power output from the turbine. Above the optimal value, the power consumption of the condenser fans increases significantly as the ammonia mass fraction increases, which also reduces the net power output. The composition-adjustable Kalina cycle can change the ammonia mass fraction of the basic solution in response to the ambient temperature so that the condensation temperature of the ammonia-water mixture can be regulated to match the changing ambient conditions. When the ambient temperature rises, the system reduces the ammonia mass fraction, so the condensation temperature of the mixture increases; when the ambient temperature drops, the system increases the ammonia concentration, reducing the condensation temperature. During operation, the condensation pressure is maintained constant. According to the analysis above, a composition-adjustable Kalina cycle can achieve a higher annual-average thermal efficiency than a conventional Kalina cycle operating on a fixed composition. For a typical continental climate, the calculated annual-average thermal efficiency can be improved significantly, and the heat addition within the evaporator can also be increased accordingly. As a result, the annual net power production can be increased significantly. However, such an improvement of the thermal efficiency strongly depends on the heat source temperature and the annual temperature variation. For a given heat source temperature, the larger the annual temperature variation, the higher the improvement of thermal efficiency. For a given annual temperature variation, the higher the heat source temperature, the less the improvement of thermal efficiency. This can be attributed to the fact that the thermal efficiency of the conventional Kalina cycle increases as the heat source temperature increases; for high temperature heat sources, the cycle is less sensitive to the variation of condensing temperature. The present research assumed that the brine from a geothermal production borehole has a fixed flow rate and temperature, and thus the design target is to maximise the power production. Based on this assumption, the system power output and component sizes of a conventional Kalina cycle are designed according to the highest ambient temperature in summer. In contrast, the component sizes of a composition-adjustable Kalina cycle should be specified according to the lowest ambient temperature in winter, and a composition adjustment control system needs to be added. The mass flow rate and ammonia concentration of the basic solution of the composition-adjustable Kalina cycle are then regulated according to the changing ambient temperature. In this case, the overall power output varies as the ambient temperature changes. For the convenience of cost comparison, the power output can be fixed at the same value for both designs. In this case, the capital cost of a composition-adjustable Kalina cycle power plant will be slightly higher than that of a conventional Kalina cycle, mainly due to the introduction of a composition adjusting system that consists of a density sensor, a control unit, and a tank. There will be a break-even point where the additional capital
and operation costs can be compensated by the gain in annual-average thermal efficiency, which, however, strongly depends on operating conditions such as the scale of the power plant, the annual temperature variation, the heat source temperature, etc. Qualitatively speaking, a combination of large annual temperature variation, low heat source temperature, and large rated power output could lead to an economically viable case. This paper presents a comprehensive numerical analysis of a composition-adjustable Kalina cycle system. An advanced numerical model taking into account the heat transfer processes within the evaporator and condenser has been developed to demonstrate and analyse the working mechanism of this cycle in detail, and it has been verified against published data. An air-cooled condenser has been used in this research to maximise the effect of the ambient temperature on the cycle's performance. The effect of both air temperature and flow rate on the cycle's thermal efficiency has been analysed in detail. The obtained results are compared with a conventional Kalina cycle with fixed composition and condensing temperature, showing a significant improvement in annual-average thermal efficiency. However, such an improvement of the thermal efficiency strongly depends on the heat source temperature and the annual temperature variation. For a given heat source temperature, the larger the annual temperature variation, the higher the improvement of thermal efficiency. For a given annual temperature variation, the higher the heat source temperature, the less the improvement of thermal efficiency. Extra components and a control system are required to implement such a composition-adjustable Kalina cycle, and they introduce extra costs. The additional capital and operation costs can be compensated by the improvement in annual-average thermal efficiency. In general, a combination of large annual temperature variation, low heat source temperature, and large rated power output is preferred for a composition-adjustable Kalina cycle. In order to quantitatively identify the break-even point, a combined thermodynamic–economic model is required, and it will be studied in detail in the future.
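As a rough numerical cross-check of the efficiency figures quoted above, a minimal sketch is given below. It simply assumes that the cycle thermal efficiency is the net power output divided by the evaporator heat transfer; the function and variable names are illustrative and are not taken from the authors' model.

```python
# Minimal sketch, assuming thermal efficiency = net power output / evaporator heat input.
# The numbers are the July and January figures for Cycle A quoted in the text above.

def thermal_efficiency(w_net_mw: float, q_evap_mw: float) -> float:
    """Cycle thermal efficiency from net power output and evaporator heat transfer (both in MW)."""
    return w_net_mw / q_evap_mw

cases = [
    ("July, 26.6 degC ambient", 1.366, 22.34),     # expected ~6.12 %
    ("January, -1.3 degC ambient", 3.145, 34.03),  # expected ~9.24 %
]
for label, w_net, q_evap in cases:
    print(f"{label}: eta_th = {100 * thermal_efficiency(w_net, q_evap):.2f} %")
```

Under this assumed definition, the sketch approximately reproduces the 6.12% and 9.24% thermal efficiencies reported for the hottest and coldest months.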
The Kalina cycle is believed to be one of the most promising technologies for power generation from low temperature heat sources such as geothermal energy. So far, most Kalina cycle power plants are designed with a working fluid mixture having a fixed composition, and thus normally operate at a fixed condensing temperature. However, the ambient temperature (i.e., heat sink) varies over a large range as the season changes over a year, particularly in continental climates. Recently, a new concept, i.e., composition-adjustable Kalina cycle, was proposed to develop power plants that can match their condensing temperature with the changing ambient conditions, aiming at improving the cycle's overall thermal efficiency. However, no detailed analysis of its implementation and the potential benefits under various climate conditions has been reported. For this reason, this paper carried out a comprehensive numerical research on its implementation and performance analysis under several different climate conditions. A mathematical model is firstly established to simulate the working principle of a composition-adjustable Kalina cycle, based on which a numerical program is then developed to analyse the cycle's performance under various climate conditions. The developed numerical model is verified with some published data. The dynamic composition adjustment in response to the changing ambient temperature is simulated to evaluate its effect on the plant's performance over a year. The results show that a composition-adjustable Kalina cycle could achieve higher annual-average thermal efficiency than a conventional one with a fixed mixture composition. However, such an improvement of thermal efficiency strongly depends on the heat source temperature, climate conditions, etc. The composition-adjusting system introduces extra capital and operation costs. The economic viability of a composition-adjustable Kalina cycle power plant depends on the balance between these extra costs and the increase of thermal efficiency.
770
The envelope of passive motion allowed by the capsular ligaments of the hip
Anatomical limits to the range of motion of the hip joint are important to prevent impingements, which can lead to serious clinical problems.For young adults, femoroacetabular impingement in the native hip causes pain and trauma to the acetabular labrum or articular cartilage and can, in the long-term, lead to osteoarthritis.For total hip arthroplasty patients, impingements cause subluxations and subsequent edge loading and high wear or dislocation.Consequently, there is much benefit to be gained from understanding how the natural hip limits ROM to prevent impingement.The majority of hip ROM research considers how impingement is influenced by bony hip morphology only, and the effects that surgery can have on this.Many of these studies investigate joint morphology or implant shape/position in isolation and find that there is a non-symmetrical range of hip rotation.In extension, the hip has a large range of internal rotation but is at risk of impingement in external rotation.Conversely, in deep flexion the hip has a large range of external rotation but is at risk of impingement in internal rotation.However, clinical measurements of hip rotation suggest ROM is more symmetrical than these models indicate and indeed recent research has described how the soft tissues also limit hip ROM.Including these tissues in ROM models has demonstrated that variations in hip geometry which affect ROM within the soft-tissue passive restraint envelope are more important than variations outside it.Of the hip soft-tissues, the influence of the hip capsular ligaments on ROM restraint is particularly important to consider because any intra-articular hip surgery necessarily involves an incision to these ligaments to gain access to the hip, whether open or arthroscopic joint preserving surgery, or THA.In vitro data indicate the capsular ligaments limit the available ROM in the native hip, and that when they pull taut in deep flexion they may protect the hip against posterior edge loading.In vitro data also demonstrate that synovial fluid flows from the central intra-articular compartment of the hip to the peripheral compartment and it has been suggested that tightening of the capsular ligaments may circulate synovial fluid back to the central compartment again.There are therefore several possible biomechanical functions of the capsular ligaments and several groups advocate their repair after joint preserving surgery.However it remains a technically demanding task and is not routinely performed.This is a concern as failure to restore these biomechanical functions may increase the risk of osteoarthritis progression.Most hip ligament research focusses on a neutral ab/adduction swing path so it remains unclear when the ligaments engage as ab/adduction varies.There is therefore a lack of baseline experimental data describing the positions throughout ROM where the capsular ligaments pull taut to restrain rotation of the native hip, and what rotational stiffness of restraint they provide when they do.These data would be useful for both assessing the importance of the capsular repair for a patient as well as performing the repair to restore natural biomechanics.The aims of this study were to quantify the ligamentous passive restraint envelope for the hip when it is functionally loaded throughout the whole ROM, and to quantify the amount of rotational stiffness provided by the capsular ligaments once taut.This would provide the surgeon with an objective target to restore ligament biomechanics following early intervention 
surgery.The null hypothesis is that the passive rotation restraint envelope does not vary throughout hip ROM.Following approval from the local Research Ethics Committee, 10 fresh-frozen cadaveric pelvises with full length femurs were defrosted and skeletonised, carefully preserving the hip joint capsule.Guide holes were drilled into the left posterior superior iliac spines and femoral shaft before bisecting the pelvis and transecting the femoral mid-shaft.The guide holes based on the contra-lateral pelvis and femoral epicondyles were then used to mount the hip into a six-degrees-of-freedom testing rig according to the International Society of Biomechanics coordinate system.Neutral flexion, rotation and ab/adduction equated to a standing upright position.The rig comprised of a femoral-fixture that was attached to a dual-axis servo-hydraulic materials-testing-machine equipped with a two-degree-of-freedom load cell and a pelvic-fixture that could constrain, release or load the other four-degrees-of-freedom.Pure moments could be applied in all three physiological directions: internal/external rotation torque through the rotating axis of the servo-hydraulic machine and flexion/extension or ab/adduction torques with a pulley and hanging-weights couple.This meant that any hip could freely rotate about its natural centre, unconstrained, without affecting the magnitude of applied torque.Fixed angular positions could be applied using position control on the servo-hydraulic machine or with screw clamps on the pulleys.Femoral proximo-distal loads were applied by operating the vertical axis of the servo-hydraulic machine in load control whilst an x–z bearing table and a hanging weight applied joint reaction force components in the transverse plane; translations in the secondary translational degrees-of-freedom x–y–z were free to occur in response to the applied load and ligament tension.For each specimen, all tests were performed at room temperature on the same day without removing the specimen from the testing rig.The specimens were kept moist using regular water spray.With the femur in the neutral position, a fixed compressive load in the coronal plane of 110 N angled 20° medially/proximally relative to the mechanical axis of the femur was applied.This loading vector was held constant relative to the femur whilst the pelvis was flexed/extended and ab/adducted to apply ROM.As load direction was relative to the femur this meant that, for example, if the hip was flexed to 90° then the load would be applied in the transverse plane.This loading direction was chosen based on the mean direction of the hip contact force relative to the femur during functional tasks reported in HIP98: 18±5° medially/proximally and 0±6° anteriorly/proximally for an average patient walking fast/slow, up/down stairs, standing up, sitting down, and knee bend.For each specimen, the ROM with the joint capsule intact was established by applying 5 N m extension/flexion torques with the hip joint in neutral rotation and ab/adduction to define a value of extension and deep flexion for the hip.Then, with the joint still in neutral rotation, 5 N m ab/adduction torques were applied to measure values of high abduction and high adduction at six different flexion angles.Finally, 5 N m torques were applied in internal/external rotation at 30 different hip positions; all possible combinations of ABD, AB20°, A0° AD20° and ADD at all six flexion/extension angles.At each hip position, these rotation movements were applied by the servo-hydraulic 
machine using a sinusoidal waveform with a 10 s period whilst continuously recording the angle of rotation and the passive rotation resistance. Each movement was performed twice and data were analysed from the second iteration. To assess specimen morphology, the following measurements were made after testing: femoral head diameter, offset, anteversion, neck-shaft angle and head/neck ratio, as well as acetabular centre edge angle and depth ratio. The α and β angles, and the anterior neck offset ratio were also measured. Specimens with α>55° or centre-edge angle <25° were considered abnormal and were excluded from the data analyses. Internal/external torque–rotation curves for each specimen in each hip position were plotted using MatLab. The angular positions where the hip joint motion transitioned from slack to stiff were identified by finding the first points where the torque–rotation gradient exceeded 0.03 N m/° for both internal and external rotation (a computational sketch of this transition-point analysis is given at the end of this section). This value of 0.03 N m/° was determined from pilot data by visually inspecting plots of the torque–rotation data alongside the calculated gradient values. The slack/stiff transition points were then used to calculate three parameters for further analysis: the range of un-resisted rotation, the location of the mid-slack point and the change in rotation from the transition point to 5 N m of passive rotation restraint. In cases where there was continuous passive restraint with no slack region, the mid-slack angle was defined at 0 N m passive resistance torque. Finally, the gradient values were additionally used to quantify the aggregate torsional stiffness provided by the capsular ligaments at the point of 5 N m passive resistance. The values recorded at AD20° and AB20° could not be included in the repeated measures analyses because not all hips could reach these positions in extension or deep flexion. Data were analysed in SPSS with two- or three-way repeated measures analyses of variance. The independent variables were the angles of hip flexion and hip ab/adduction for the two-way analyses, with an additional factor of direction of rotation for the three-way analyses. Four dependent variables were analysed: the range of un-resisted rotation, the angle of mid-slack, the angular change from the transition point to 5 N m passive restraint and, finally, the torsional stiffness of the hip at the point of 5 N m restraint. Post-hoc paired t-tests with Bonferroni correction were applied when differences across tests were found. The significance level was set at p<0.05. The number of post-hoc comparisons at a given level of flexion was different from that at a given level of ab/adduction; therefore, adjusted p-values, multiplied by the appropriate Bonferroni correction factor in SPSS, have been reported rather than reducing the significance level. One male hip had a visibly aspherical head and was excluded from the data analysis. External rotation data for one female specimen were lost because the capsule ruptured from the bone when the 5 N m torque was applied in external rotation, meaning that subsequent hip rotation results are presented for only eight specimens. Morphological measurements of these specimens are presented in Table 1. Under 5 N m torque the mean hip joint flexion was 112±10° and extension was −12±7°. The range of hip joint ab/adduction varied with the angle of hip flexion; it was largest in 60–90° of flexion and smallest in hip extension. The range of un-resisted rotation varied with both the angle of hip flexion and ab/adduction and the effect of
flexion on the slack region was found to be dependant on the level of ab/adduction and vice-versa.The post-hoc analysis showed that the slack region in neutral ab/adduction was greater than that in high ab/adduction.The largest difference was at F60° where the mean slack region was 41±13° larger in neutral ab/adduction than when the hip was highly adducted.Similarly, the hip had a greater slack region in mid-flexion compared to extension and deep flexion.The largest difference was at F60° where the mean slack region was 44±15° larger at F60° than EXT.The position of the mid-slack point also varied with the angle of hip flexion and abduction with an interaction effect between flexion and ab/adduction.Post-hoc analyses showed that, for both neutral and high adduction, the mid-slack point was found with the hip internally rotated in extension, externally rotated in deep hip flexion.However, when the hip was highly abducted, no difference was detected between the position of the mid-slack point in deep flexion and extension.Instead, the mid-slack point was found with the hip externally rotated in mid flexion, resulting in a parabolic-like shift in the location of the mid-slack point.Neither the angular change from the transition point to 5 N m passive restraint nor the torsional stiffness at 5 N m restraint was affected by a three-way interaction between flexion, ab/adduction and rotation direction.However, both dependant variables did vary with hip position with a two-way interaction detected between flexion and ab/adduction across both directions of rotation.Post-hoc analysis detected differences in similar positions to those found for the slack region.Generally, when the slack region increased, torsional stiffness increased and slack-to-taut decreased.The most important finding of this study was that the passive restraint envelope for hip rotation varied with the angle of flexion/extension and ab/adduction.In a position of mid-flexion and mid-ab/adduction there were large slack regions where the capsular ligaments provided no rotational restraint, which indicate a large in-vivo ROM that allows the hip to move freely under the action of hip muscles during many daily activities.Conversely, towards the extremes of hip ROM there was a minimal/non-existent slack region, thus limiting the available range of rotation in positions where the hip is vulnerable to impingement and/or subluxation.The results also showed that internal/external rotation restraint is not symmetrical; the mid-slack point displayed a shift from an internally rotated position in extension to an externally rotated position in hip flexion.Our results do not distinguish between capsular rotational restraint and that from labral impingement, but provide an aggregate rotational restraint from the peri-articular tissues.However, within the 5 N m restraint boundaries examined, our previous research found that the mean labral contribution to rotational restraint only exceeded 20% in 6/36 hip positions and was less than the capsular contribution to rotational restraint in all hip positions.These labral impingements were observed most frequently when the hip was in high abduction, which may be the cause of the parabolic shift of the mid-slack point in high abduction, and also the few hip positions in low flexion and high abduction where slack-to-taut and torsional stiffness seemingly both increase.Another limitation was the high mean age of the cadaveric specimens; they are better matched to patients undergoing THA than those receiving 
early intervention surgery.Our study also did not consider the effects of osteoarthritis on capsular stiffness, or how a smaller head size for a THA may affect the ability of the capsule to wrap around the head and tauten.By only including normal hips in the study it was also not possible to address whether hips suffering from FAI have normal capsular anatomy/function.However, studies have suggested similarities between hip capsule dimensions in pathological hips and normal hips.In Fig. 7, the passive restraint envelope measured in this study is overlaid on ROM data taken from 18 studies with a total of more than 2400 subjects, which include clinical goniometer readings, in-vitro experiments including skin and muscles and computational impingement models.Our data are in good agreement with other cadaver based studies, but the passive restraint envelope is typically less than clinical measurements.This is to be expected as clinical ROM measurements usually measure the relative movement between the thigh and trunk, thus including contributions from the lumbar spine, sacro-iliac joint as well as the anatomic hip joint.However, the ROM measured in the current study was always less than that measured when only bone-on-bone impingement was considered for normal hips indicating the capsular ligaments engage to prevent impingement.The impingement-free range of rotation measured in bone–bone impingement studies is biased towards internal rotation in extension, and external rotation in flexion.Our data indicate capsular rotation restraint guides the available range of rotation towards these impingement-free positions as the mid slack point shifts 30° from an internally rotated position in extension to a more externally rotated position in deep flexion.Several authors have reported the total resistance to hip joint distraction/dislocation, the stiffness of individual ligaments, their contribution to hip rotation restraint, or their influence on hip ROM.However to our knowledge there are no studies measuring the slack region, or the angular change required to tauten the ligaments or torsional stiffness provided by an intact capsule once the ligaments are taut.This study quantifies these variables and the findings correlate well with the understanding of the anatomy of the capsular ligaments.The four capsular ligaments available for limiting hip rotation are the same ligaments which can generate resistive moments against deep flexion/extension or high ab/adduction.This explains the reduced hip rotation slack region observed in the more extreme hip positions as the ligaments are recruited to limit both large movements of the lower limb and hip rotation.It also explains the reduced rotational stiffness in these hip positions as the ligament fibres do not align to purely resist hip rotation but also the other movements.Conversely in mid-flexion and mid-ab/adduction, there is a large slack region available as the ligaments are not resisting movements in any direction.When the hip is excessively rotated in these mid-ROM positions such that the ligaments start to tauten, the ligaments develop high levels of torsional stiffness in small angular changes as the fibres are orientated more perpendicularly to the axis of hip rotation, directly opposing the movement.In conclusion, to our knowledge, this is the first study to quantify the hip positions where the capsular ligaments restrain hip rotation and those where the joint is slack, how much rotation is required to tighten the ligaments, and how much rotational 
stiffness is provided by them once taut.These results provide a benchmark for the normal joint that can be used as a target for capsular repair in joint preserving surgery, and enable the restoration of capsular biomechanical function after surgery.This study was funded by the Wellcome Trust and EPSRC and the Institution of Mechanical Engineers.The dual-axis Instron materials-testing-machine was provided by an equipment grant from Arthritis Research UK.
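To complement the torque–rotation analysis described in the Methods, a minimal sketch of the transition-point calculation is given below. It is written in Python rather than the MatLab used in the study; the sign convention (internal rotation positive, external negative), the assumption of a single monotonic sweep from the external to the internal limit, and all function and variable names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

# Illustrative sketch of the slack/stiff transition analysis described in the Methods.
# Assumptions: internal rotation is positive, external rotation is negative, and the
# torque-rotation data come from a single monotonic sweep sampled densely enough that
# np.gradient approximates the local torque-rotation gradient.

GRADIENT_THRESHOLD = 0.03  # N m per degree, the threshold reported in the study


def slack_parameters(rotation_deg, torque_nm):
    """Return (internal transition, external transition, slack range, mid-slack angle), all in degrees."""
    rotation_deg = np.asarray(rotation_deg, dtype=float)
    torque_nm = np.asarray(torque_nm, dtype=float)

    gradient = np.abs(np.gradient(torque_nm, rotation_deg))  # N m per degree
    stiff = gradient > GRADIENT_THRESHOLD                    # points where restraint is engaged

    internal_stiff = rotation_deg[stiff & (rotation_deg > 0)]
    external_stiff = rotation_deg[stiff & (rotation_deg < 0)]

    # First stiff point moving away from neutral in each direction of rotation
    t_internal = internal_stiff.min() if internal_stiff.size else np.nan
    t_external = external_stiff.max() if external_stiff.size else np.nan

    slack_range = t_internal - t_external        # range of un-resisted rotation
    mid_slack = 0.5 * (t_internal + t_external)  # centre of the slack region
    return t_internal, t_external, slack_range, mid_slack
```

The slack-to-taut angular change and the torsional stiffness at 5 N m would follow in the same way, by locating the rotation angle at which the recorded torque first reaches 5 N m and reading the gradient at that point.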
Laboratory data indicate the hip capsular ligaments prevent excessive range of motion, may protect the joint against adverse edge loading and contribute to synovial fluid replenishment at the cartilage surfaces of the joint. However, their repair after joint preserving or arthroplasty surgery is not routine. In order to restore their biomechanical function after hip surgery, the positions of the hip at which the ligaments engage, together with their tensions when they engage, are required. Nine cadaveric left hips without pathology were skeletonised except for the hip joint capsule and mounted in a six-degrees-of-freedom testing rig. A 5 N m torque was applied to all rotational degrees-of-freedom separately to quantify the passive restraint envelope throughout the available range of motion with the hip functionally loaded. The capsular ligaments allowed the hip to internally/externally rotate with a large range of un-resisted rotation (up to 50±10°) in mid-flexion and mid-ab/adduction, but this was reduced towards the limits of flexion/extension and ab/adduction such that there was a near-zero slack region in some positions (p<0.014). The slack region was not symmetrical; the mid-slack point was found with internal rotation in extension and external rotation in flexion (p<0.001). The torsional stiffness of the capsular ligamentous restraint averaged 0.8±0.3 N m/° and was greater in positions where there were large slack regions. These data provide a target for restoration of normal capsular ligament tensions after joint preserving hip surgery. Ligament repair is technically demanding, particularly for arthroscopic procedures, but failing to restore their function may increase the risk of osteoarthritic degeneration.
771
Identity, Structure and Compositional Analysis of Aluminum Phosphate Adsorbed Pediatric Quadrivalent and Pentavalent Vaccines
Traditionally, complex biological products such as vaccines presented unique challenges to the implementation of even rudimentary characterization packages; thus, the product was defined almost exclusively by its manufacturing process, i.e., if the process remains unchanged, the product should be the same. Advances in technology have allowed the application of more comprehensive characterization packages for products such as adsorbed combination vaccines, which contain several antigens in a single formulation to protect against more than one disease. The application of extensive characterization packages can now extend beyond simply characterizing the purified proteins, to include product intermediates, adsorbed protein drug substances and adjuvanted vaccine formulations. As discussed previously, characterization of vaccine attributes at both the drug substance and drug product stages has progressively higher criticality with respect to product supply, safety and immunogenicity. For vaccines, this encompasses not only protein antigens, but also adjuvants, and adjuvanted and multivalent product formulations. Factors that can affect safety and efficacy critical quality attributes and critical material attributes may include, but are not limited to, protein adsorption and conformation, and the size distribution and morphology of adsorbed drug substances. Presented here, to assess these attributes, are several analytical tools with the capability of characterizing multivalent vaccines and their components, as well as lot-to-lot consistency. The principle applied here is that the quality of subsequent batches is the consequence of the strict application of a quality system and of consistent production of batches, which can be demonstrated using state-of-the-art and non-animal methods. Many reports in the literature demonstrate that protein adsorption to an adjuvant can alter its conformation, and either stabilize, destabilize, or show no effect on conformation. This highlights the importance of analytical tools capable of monitoring these possible changes in protein antigens throughout the manufacturing process. Multivalent vaccines offer better protection against certain diseases such as pertussis, and manufacturing of combination products with the same immunogenicity and safety profile as each of their individual component vaccines is a considerable challenge. Development of combination vaccines requires a careful assessment and selection of adjuvant and process steps, including formulation of the intermediates and final product. Furthermore, there may be physicochemical or immunological interference between any or all of the components. The goal of this study is to set an empirical baseline to map the structure-function relation of the antigens from the vaccine products. For that purpose, the samples analyzed here were the commercialized vaccines proven to be immunogenic in the clinic. Hence, this study was designed to understand the differences between the pre-adsorbed and adsorbed antigens used to formulate the vaccine products. This biophysical toolset has not previously been applied to the vaccine components produced at manufacturing scale. This will serve as a basis to understand any future changes in the manufacturing process, facility, or site. To provide a more comprehensive analysis of current manufacturing processes, samples of intermediate pre-adsorbed protein antigens, adsorbed drug substances, and drug products were examined using a panel of methods. These included dynamic light scattering, laser diffraction,
scanning electron microscopy, Fourier transform infrared spectroscopy, and intrinsic fluorescence spectroscopy. These non-routine characterization tests were applied for the purpose of gaining product knowledge. Particle size can be an indication of both process consistency and product stability, and can be a quality attribute used in the characterization of vaccines and vaccine components. DLS was utilized to characterize the size of pre-adsorbed protein antigens, while LD was applied to particle sizing of the adjuvant and adjuvanted drug substances. As antigen protein conformation may affect the presentation of epitopes, the effect of adjuvantation on protein higher order structure was analyzed. FTIR was utilized to measure secondary structure content, and IF to examine tertiary structure conformation. With the objective of a comprehensive characterization of multivalent vaccines and their components, a novel SEM approach to the visualization of adjuvant size and morphology was developed. Use of a low vacuum SEM imaging mode allows characterization of non-conductive biomaterials. This allowed an investigation of the effect of adsorbed proteins on adjuvant morphology and the packing density of the suspension, which in turn could be used to gain product knowledge and characterize the adjuvantation step of the manufacturing process. Finally, although various multivalent vaccines may contain similar antigen profiles, minor variations in their composition or formulation may be detected by a sufficiently sensitive and selective method. FTIR could be used to measure a signature spectrum, not only for individual adsorbed monovalent drug substances, but also for several multivalent vaccine drug products. Thus, FTIR can be used to identify very similar drug products. All samples examined in this study were manufactured in-house, including adjuvant, pre-adsorbed and adsorbed protein samples, and final vaccine products. All protein antigens were purified from the respective pathogen. The proteins in solution, either purified or chemically modified, are hereafter referred to as pre-adsorbed antigens, which denotes a specific manufacturing step. Upon formulation with aluminum phosphate adjuvant, the proteins are referred to as adsorbed antigens or drug substances. Quadracel™ contains Diphtheria Toxoid, Tetanus Toxoid, the acellular pertussis proteins Pertussis Toxoid, Filamentous Haemagglutinin, Pertactin, and Fimbriae types 2 and 3, and inactivated poliomyelitis vaccine types 1, 2 and 3 as active ingredients. The pI values of these antigens measured by Capillary Electrofocusing Imaging are summarized in Table S1. In addition to the above ingredients, Pentacel and Pediacel also contain purified polyribose ribitol phosphate capsular polysaccharide of Haemophilus influenzae type b covalently bound to TT. The previously listed toxoids are forms of the respective toxins chemically modified with formaldehyde. Additionally, these vaccines include AlPO4, 2-phenoxyethanol, and polysorbate 80. All pre-adsorbed antigens were in phosphate buffer, except pre-adsorbed DT, which was in 0.9% saline. The molar ratio of each protein antigen is listed in Table S2. All DLS measurements of the particle size distribution of pre-adsorbed antigens were performed using a Nanotrac 150 instrument. All samples were measured at room temperature at 20-fold dilution using MilliQ water; hence, the viscosity of water was used for the data analysis. The total volume for all measurements was 600 μL. Nanorange mode was enabled for appropriate analysis of particle sizes below 20 nm. The data
acquisition and analysis were done by Microtrac Flex software.The particle size was reported as hydrodynamic diameter in nm, with 1 decimal point.Coefficient of variation for the qualified generic DLS method was ranging from 5 to 10% for DT, TT, FHA, FIM, and 15% and above for PRN and PT.All measurements of particle size distribution of adjuvant, adsorbed antigens and multivalent vaccine products were performed using a Mastersizer 3000 instrument, operating in a dynamic range of 0.01 to 3500.00 μm.Particle size distributions in solutions and suspensions were quantitatively determined by measuring the angular variation in intensity of light scattered from a laser beam passing through a dispersed particulate sample.The reportable value is Derived Diameter, which is the particle size for a specific percentile of the cumulative size distribution.Particles were measured at room temperature using the built-in “non-spherical” option within the software, and the average Dv10, Dv50 and Dv90 values of 5 measurements were reported in μm with 1 decimal point.The coefficient of variation for the qualified LD assay was in the range of 5–7% for the adsorbed antigens.Data re-plotting was performed using SigmaPlot.FTIR spectroscopy was performed using a Vertex 70 FTIR Spectrometer, equipped with a cryogenically-cooled MCT detector and a BioATRII sampling accessory.A sample volume of 20 μL was loaded onto the sample cell and the spectra were collected at a resolution of 0.4 cm-1 at 25 °C with a wavenumber accuracy of 0.01 cm−1 at 2000 cm−1.The samples were allowed to stabilize for 1 min on the ATR crystal.Background and sample measurements were conducted with each reported measurement representing an average of 200 scans.Data acquisition and analysis were performed using the OPUS 6.5 software.OPUS automatically subtracts the background signal from the sample to produce the spectrum for the analyte.All measurements were carried out at 25 °C using a Haake DC30/K20 temperature controller.After acquiring the FTIR spectra, the baseline was corrected by removing the scattering signal using the OPUS software.Quant2 software was used to estimate secondary structure with an error of 5.5% for alpha-helix content and 4.4% for beta-sheet content.The second derivative spectrum was generated using the Savitzky-Golay algorithm, which allowed simultaneous smoothing of the spectrum.Arithmetic manipulations and re-plotting were performed using SigmaPlot.SEM was used to examine the morphology and size of adjuvanted protein antigens.All measurements were performed using FEI Quanta 3D SEM in the Imaging Facility at York University.Low Vacuum SEM mode was used to image the dried adjuvanted samples and was accomplished by centrifugation of the sample at 6000 rpm, followed by removal of the supernatant.Sodium chloride, a residual of adjuvant production process, may interfere with SEM characterization of the microstructure, and therefore, samples were rinsed prior to analysis.Pellets were then rinsed 3 times with LC-grade water.The rinsed pellets were then immobilized by smearing a 10 μL aliquot of the adjuvant suspension on a glass microscope slide.The samples were imaged using a low vacuum secondary electron detector.IF spectroscopy was performed using Varian Cary Eclipse spectrophotometer.Intrinsic fluorescence, a dye-free method to evaluate changes in aromatic amino acid residues within proteins, was used to probe changes in the local environment as a result of adsorption onto the surface of AlPO4 adjuvant.All protein samples 
were excited at 285 nm and emission spectra were recorded in 300 to 400 nm region using multi-cell holder accessory of Cary eclipse.Measuring parameters such as slit width were optimized for each sample to obtain maximum fluorescence intensity.Size distribution profiles of pre-adsorbed and adsorbed proteins were measured using DLS and LD respectively.DLS was used for pre-adsorbed samples while LD was used for adjuvants and adsorbed vaccines.The size distribution profiles as determined by DLS for each of the pre-adsorbed antigens are shown in Fig. 1.These antigens will ultimately be formulated into a multivalent vaccine with protection against Pertussis, Diphtheria and Tetanus.On an average, particle sizes ranged from 10 to 200 nm, and show the diversity of size from antigen to antigen.The polydispersity index for each antigen is listed in Table S3.Each showed a unique distribution profile.AlPO4 was used as an adjuvant in the adsorbed form of the antigen proteins, and its particle size was in the range 9–13 μm.The size distribution profiles depicted in Fig. 1b were representative of the three lots of each product analyzed.With the exception of adsorbed FHA, each monovalent adsorbed antigen profile showed two major peaks and a broad size distribution ranging from approximately 1-100 μm.By contrast, adsorbed FHA showed one major peak and a narrower size distribution.Final drug product that includes all six adsorbed protein antigens was similar in size distribution to most of the monovalent adsorbed antigens.In addition to particle size, the previously unexplored characteristic of particle morphology was visualized for the first time for an AlPO4 adjuvant suspension, as well as for each of the adsorbed monovalent antigens.Panel b in Fig. S1 depicts a low vacuum SEM image of AlPO4 adjuvant, most prominently highlighting the formation of irregularly shaped agglomerates comprised of smaller particles.In Fig. 2, six AlPO4 adsorbed monovalent antigens are compared using low vacuum SEM, in which finer features of the surface can be visualized .The SEM images of DT, TT, and FHA were not sharp compared to the other three antigens although they were recorded under the same conditions.This can be assigned to lower conductivity surfaces.The changes in conductivity observed for DT, TT, and FHA samples indicated that the adsorbed layer onto the surface of AlPO4 was likely more dense.The SEM images shown here were representative of the whole sample, see Fig. S1c for additional SEM images.As shown in Fig. 
2, adsorbed protein samples and AlPO4 adjuvant were similar in morphology when imaged in low vacuum mode, and no changes to adjuvant were observed after antigen adsorption.As previously discussed, proteins adsorbed on the surface of adjuvant particles may undergo conformational changes.Higher order structural changes following AlPO4 adsorption were characterized by different spectroscopic methods: FTIR, and IF.FTIR spectroscopy was used to probe the conformational changes associated with adsorption by monitoring shifts in secondary structure from pre-adsorbed to adsorbed in purified monovalent antigens.Table S4 indicates the changes in alpha helix and beta sheet content upon adsorption.An increase in both alpha-helix and beta-sheet content were observed for DT and TT upon adsorption to AlPO4.However, for PRN, FIM and FHA the changes detected were within the experimental error and hence deemed insignificant.While pre-adsorbed and adsorbed PT antigen did not show any spectral change.FTIR spectra for all six adsorbed monovalent antigens are presented, and supplemented with the second derivative spectra to highlight regions of variability.All protein antigens characterized, except PT, also showed a broad peak around 1078 cm−1 for AlPO4 adjuvant in the adsorbed form.Some small changes were also observed in protein backbone and sidechain around 1400 and 1453 cm−1 as a result of adjuvantation.All drug substances, as well as the final multivalent product samples showed unique spectral features.As shown in the upper left panel in Fig. 3, similar spectral features are observed in FHA, FIM and PRN, whereas by contrast, DT and TT were similar to some extent.In cases where unambiguous distinction is difficult by comparing spectra, calculated 2nd derivative spectra can elaborate additional spectral information, as shown in the lower panel of Fig. 
3. In this analysis, the differences emerge within the amide I and II regions for each of the tested drug substances in the pre-adsorbed versus adsorbed forms. This region highlights the changes in β-sheet, turns and α-helices at approximately 1624, 1676 and 1654 cm−1, respectively. Secondary structural content may be influenced by adsorption to AlPO4 as a result of changes to the local environment of the antigens, which can also be detected by shifts in the FTIR peak positions. The low frequency region consists mainly of contributions from the adjuvant and phosphate buffer. The combination or multivalent vaccine products Quadracel™, Pentacel®, and Pediacel® all contain AlPO4 as an adjuvant and have many antigens in common. As a result, the spectral features of these combination products are quite similar, yet small but detectable differences were observed. For instance, the peak representative of the P-O stretch had higher absorbance in Quadracel™ when compared to Pentacel® or Pediacel®, the latter showing a shift in this peak. Another spectral difference was observed at 1420 cm−1 in Pentacel® and Quadracel™, where both showed a broad shallow peak in contrast to the sharper peak detected at 1414 cm−1 in Pediacel®. Intrinsic fluorescence spectroscopy was used to probe the effect of adsorption on the tertiary structure of the proteins. IF emission spectra of DT and TT revealed that the adsorbed form of each protein shows a hypsochromic shift in tryptophan fluorescence emission compared with the pre-adsorbed antigens. FHA and PRN showed no significant shift, whereas FIM and PT did not show a fluorescence emission signal in either form. Vaccines are complex formulations containing multiple components such as protein antigens, adjuvants, excipients and stabilizers, and hence they form complex interactions within the matrix. Therefore, it is imperative to perform identity, compositional and structural analysis as a means of quality control as well as to gain product knowledge. This study focuses on these aspects of the vaccine components and products through a set of biophysical methods. As per ICH Q6B, it is important to understand and characterize physico-chemical properties of protein antigens such as higher order structure, purity, identity, biological activity, and post-translational modifications. The results from particle size distribution suggest that the AlPO4 adjuvant primarily affects the overall size of the adsorbed protein antigens. It also appears that the majority of the particle size distribution profiles of the adsorbed protein antigens show some variability relative to the particle size of AlPO4 as a result of adjuvantation. Particle size is reportedly important in the uptake of particles by antigen presenting cells, and a size of 10 μm is considered optimal. This is in agreement with the particle sizes of adsorbed protein antigens found in this study. The adjuvant suspension consists of small submicron particles that form continuous porous surfaces with a dense surface texture, which may impact antigen adsorption and therefore introduce some variability in the particle size distribution of the adsorbed protein antigens. SEM images demonstrate that the AlPO4 suspension and adsorbed proteins consist of small submicron particles that form a continuous porous surface. The approximate overall size of these particles is ~4–5 μm as measured by SEM and ~8–14 μm as shown by LD. These differences were due to experimental conditions, such as the hydration level of the adjuvant suspension and the presence of a vacuum for SEM measurements. As shown internally by
Electrochemiluminescence and ELISA assays, DT, TT, and FHA show high % adsorption to AlPO4, whereas the % adsorption of PRN, PT, and FIM is low. Adjuvant particles with DT, TT, and FHA appear larger in size and possibly denser due to interactions between the proteins and the adjuvant surface. The adjuvant appears to be coated with the protein, making the surface less conductive and resulting in less sharp images. FTIR spectroscopy was used to probe secondary structural changes as a result of adjuvantation due to its ability to measure adjuvanted samples using an ATR crystal. In FTIR spectra, individual peaks represent vibrational modes of the molecules under study, and alterations in the local environment of these molecules are detected by shifts in the peaks or the appearance or disappearance of certain peaks. This information was used while acquiring and analyzing spectra of these vaccine components. Drug substances that primarily consist of single antigens can be characterized using FTIR before and after adjuvant formulation. All of the vaccine antigens tested are purified proteins and thus share some fundamental FTIR spectral features. Moreover, the degree of adsorption to AlPO4 adjuvant may differ among antigens due to concentration, pI, or other factors, and this may complicate the analysis of spectral features in adsorbed samples. Apart from PT, all other protein antigens showed spectral changes from pre-adsorbed to adsorbed formulations. This suggests that PT likely does not adsorb to the AlPO4 surface, which is also in agreement with the SEM data. Apart from PT and PRN, all other protein antigens in the pre-adsorbed form have an amide II signal, which disappears upon adjuvantation; this indicates structural changes involving the amide II region. The toxoids DT, TT, and FHA were all adsorbed to the surface of AlPO4; however, only DT and TT showed an increase in secondary structure content, consistent with the findings reported in the literature for the effect of adjuvant on protein structure. This is most likely due to a difference in the overall structure of the antigens, such as the globular structure of DT and TT versus the elongated fibrillar structure of FHA. The secondary structure elements of DT, TT, FHA, PRN, PT, and FIM detected by FTIR were consistent with the structures of PRN, Diphtheria Toxin, Tetanus Toxin, and Pertussis Toxin, and with the models of FHA and FIM reported in the literature. In addition, the adsorption of DT and TT to AlPO4 induced additional rearrangement due to surface interaction, whereas for FHA it appears that adsorption does not facilitate additional structural rearrangement. PT, consisting of globular domains that are chemically modified, did not exhibit changes similar to DT and TT; therefore, detoxification alone does not explain the differences in protein secondary structure upon adsorption. The results obtained by FTIR for DT, TT, and FHA are in agreement with the changes observed in the intrinsic fluorescence emission spectra for these proteins. As shown in Fig.
5, the band broadening observed for the adsorbed antigens likely occurred due to altered solvent interactions with each fluorophore. The overall hypsochromic shift in adsorbed DT and TT indicates that tryptophan residues are more buried and have less solvent access, which could indicate that these proteins are more folded than their pre-adsorbed forms or that these residues are shielded by the adjuvant surface. For PT, PRN, and FIM, the presence of AlPO4 did not induce any significant changes, as shown by FTIR and intrinsic fluorescence. A forced degradation study showed that a decrease in the antigenicity of adsorbed TT, measured by chemiluminescence, was consistent with a decrease in the thermal transition temperature measured by DSC for pre-adsorbed TT and by nanoDSF for adsorbed TT. As such, for the adsorbed TT stored at 45 °C the antigenicity decreased from 4586 μg/mL at time zero to 2600 μg/mL at 17 weeks, whereas at 55 °C the antigenicity decreased to 0.037 μg/mL in just 1 week. As demonstrated in a recent study, the FTIR spectra of both adsorbed monovalent antigens and multivalent vaccine products showed rich information that can be recorded as a spectral fingerprint for each tested sample, allowing FTIR spectroscopy to be used as a lean technique to verify bulk drug substance identity prior to formulation and as an in-process test to verify vaccine product identity prior to filling. Although multivalent vaccines can appear to be very similar in formulation, the addition of the Haemophilus influenzae conjugate component and excipients in the formulation results in a unique signature profile for each product tested thus far. To summarize, FTIR can be used as a lean technique to verify the identity of the bulk drug substance prior to formulation and also to gain knowledge about changes to protein antigens as a result of adsorption. The findings presented here will be used for future comparability studies to assess the effects of process optimization and changes in manufacturing facilities and sites. In this study, a toolset of biophysical techniques was applied to the analysis of pre-adsorbed and adsorbed vaccine antigens, drug substances, and drug products so as to set an empirical baseline to map the structure-function relation of the antigens from the commercial vaccine products. As shown by SEM, the AlPO4 adjuvant suspension consists of small submicron particles that form a continuous porous surface. As shown by FTIR, the secondary structure alpha-helix and beta-sheet content of DT and TT increased after adsorption to AlPO4 adjuvant, whereas no significant changes were noted for the other protein antigens besides structural changes within the amide region. Similarly, SEM showed strong interactions between AlPO4 adjuvant and DT, TT, and FHA. Finally, FTIR spectroscopy can be used as a direct method capable of identifying the final drug product without desorption, using a unique spectrum generated by the combination of protein antigens and excipients. The authors declare no conflict of interest. Kristen Kalbfleisch, Sasmit Deshmukh, Wayne Williams, Ibrahim Durowoju, Jessica Duprez, Carmen Mei, Bruce Carpick, and Marina Kirkitadze are employees of Sanofi Pasteur, and Sylvie Morin and Moriam Ore are employees of York University and have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, stock ownership or options, or royalties. No writing assistance was utilized
in the production of this manuscript.
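To make the second-derivative FTIR analysis described in the Methods above more concrete, a minimal sketch is given below. The window length and polynomial order are illustrative assumptions rather than the settings used with the OPUS software, and the function name is our own; it simply shows how a smoothed second-derivative spectrum can be obtained with the Savitzky-Golay algorithm.

```python
import numpy as np
from scipy.signal import savgol_filter

# Illustrative sketch of Savitzky-Golay smoothing and second-order differentiation of an
# FTIR spectrum. Assumptions: the spectrum is sampled on an evenly spaced wavenumber axis,
# and the window length (in points) and polynomial order are chosen for illustration only.

def second_derivative_spectrum(wavenumber_cm1, absorbance, window_points=11, polyorder=3):
    """Smoothed second derivative of an absorbance spectrum, used to resolve overlapping amide sub-bands."""
    step = float(np.mean(np.abs(np.diff(wavenumber_cm1))))  # spectral spacing in cm-1
    return savgol_filter(absorbance, window_length=window_points, polyorder=polyorder,
                         deriv=2, delta=step)
```

In a second-derivative spectrum, minima correspond to the positions of the underlying absorbance bands, which is why this representation helps highlight the amide I and II differences discussed above.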
Purpose: The goal of this study is to set an empirical baseline to map the structure-function relation of the antigens from the commercialized vaccine products. Methods: To study the structural changes of protein antigens after adsorption several analytical tools including DLS, FTIR, Fluorescence, LD, and SEM have been used. Results: All antigens have shown wide range of hydrodynamic diameter from 7 nm to 182 nm. Upon adjuvantation, the size distribution has become narrow, ranging from 10 to 12 μm, and has been driven by the derived diameter of aluminum phosphate (AlPO4) adjuvant. Further to examine size and morphology of adsorbed antigens, SEM has been used. The SEM results have demonstrated that the AlPO4 adjuvant suspension and adsorbed proteins consist of submicron particles that form a continuous porous surface. Diphtheria Toxoid (DT), Tetanus Toxoid (TT), and chemically-modified Filamentous Haemagglutinin (FHA) have shown surface adsorption to AlPO4. Secondary structure alpha-helix and beta-sheet content of DT and TT has increased after adsorption to AlPO4 adjuvant as shown by FTIR, whereas no significant changes were noted for other protein antigens. The results from Intrinsic Fluorescence have shown a structural rearrangement in DT and TT, consistent with the FTIR results. Multivalent vaccine product identity has been determined by FTIR as unique fingerprint spectrum. Conclusion: The globular proteins such as DT and TT have shown changes in secondary structure upon adsorption to AlPO4, whereas fibrillar protein FHA has not been affected by adsorption. FTIR can be used as a lean technique to confirm product identity at different manufacturing sites.
772
Hippocampal morphology and cognitive functions in community-dwelling older people: the Lothian Birth Cohort 1936
The role of the hippocampus in cognitive processes, particularly in a variety of memory functions, is well studied.Via its dense connections with other important cerebral loci, its processes also support cognitive abilities more generally.Evidence that hippocampal volume is related to memory performance is most prevalent among populations which show age-related or pathological hippocampal atrophy."Lower hippocampal volumes among patients with Alzheimer's disease mild cognitive impairment, and depression are associated with poorer verbal and non-verbal/spatial memory scores.Similarly, among nonpathological samples of older adults, differences in hippocampal volume are related to poorer memory performance, mainly quantified using verbal recall tasks.Reduction in hippocampal volume has also been linked with poorer cognitive performance in a variety of cognitive domains in addition to memory, such as fluid intelligence and processing speed.However, a study in a group of 518 older adults from a population-based cohort reported that the rate of decline in hippocampal volume over 10 years was related specifically to verbal memory but not to general indicators of cognitive performance or measures of executive function.Aside from potential confounders of sample size, age, gender, and vascular risk factors, other possible reasons for the somewhat inconsistent evidence of the association between total hippocampal volume and cognitive performance might be that different hippocampal regions are differently sensitive to age, and/or to different cognitive tests and exhibit distinct shrinkage/enlargement effects that may compensate overall volumetric variations in this structure.One approach to test this theory has been to measure the volumes of specific hippocampal subfields, but there is no consensus on a single segmentation protocol.In addition, 1.0-mm isotropic voxels obtained at 1.5 T, commonly used by many MR protocols, produce images too coarse to reliably delineate hippocampal subfields.Acquisition protocols at higher magnetic fields of ∼0.4 × 0.4 mm or less in-plane resolution of the hippocampal region have been used by studies specifically aiming at the study of this structure.But even with optimal acquisition methods, anatomical delineation of hippocampal subfields is challenging.As subfield morphology is subject to individual differences, using atlas-bases measures for identification of fine-grained details is inconsistent with routine clinical image acquisition protocols.An alternative method has been to examine hippocampal shape morphology—which does not consider subfield boundaries established a priori.Analyses assessing the hippocampus in this way have reported age-related inward deformations in the hippocampal head and subiculum, regardless of age-related hippocampal volume reduction, and sub-regional associations with other cognitive domains, in addition to memory, across the whole lifespan.A consistent finding from across studies that relate hippocampal morphology with cognitive measures is the association of cognitive performance with deformations in the cornu ammonis at the hippocampal head."For example, on 383 data sets extracted from the Alzheimer's Disease Neuroimaging Initiative database, the anterior hippocampus and the basolateral segment of the amygdala showed a deformation inward in AD and MCI patients with respect to cognitively normal individuals, consistent with associated memory deficits on this population.In 137 individuals of 18–86 years of age, a lengthening of the 
antero-posterior axis of the left hippocampus was prominently associated with working memory performance across the adult lifespan. A study on 103 MCI subjects revealed an atrophy pattern associated with rapid cognitive deterioration in mini-mental state examination (MMSE) scores and verbal memory, which showed initial degeneration in the anterior part of CA1. Another study also showed a significant decrease in the volumes of the CA1 and subiculum subfields in AD compared with cognitively normal individuals. Yet, in spite of the importance of the hippocampus in healthy and pathological aging, a comprehensive analysis of multidomain cognitive associations with hippocampal deformations among a large group of cognitively normal older adults is currently lacking. Here, we extend our previous pilot analysis, conducted on a small subsample of an age-homogeneous cohort of cognitively normal older individuals, to examine associations between hippocampal morphology and a wider range of cognitive functions, both at the level of cognitive domains and with respect to individual subtests, in a sample that is 13 times larger. While examining the possibility of added value in using hippocampal shape analysis in conjunction with volumetry, the aim of the study is to explore hippocampal shape associations with a wide range of cognitive functions. Such associations may indicate loci particularly sensitive to the cognitive functions we evaluate and may also be coincident with loci reported in other studies to be vulnerable to the neuropathologies of aging. By exploring these associations in a larger sample, we aim to answer the following questions: is the inward deformation of the hippocampal head reported by other studies associated with reduced general cognitive functioning in a cognitively normal aging population and/or related to childhood intelligence? And, in nondemented older individuals, is regional hippocampal morphology associated with other cognitive functions, or only with memory as reported elsewhere? In line with the studies referenced above, we hypothesize that, in this cohort of septuagenarian individuals, hippocampal morphology, and specifically lateral deformations on the surface of the hippocampal head, will be associated with specific memory ability and also with broader cognitive domains. Given prior evidence in the hippocampus and associations between earlier-life intelligence and other MRI phenotypes in this cohort, we further hypothesize that precursors of these deformations could be found in childhood. The Lothian Birth Cohort 1936 (LBC1936) provided the sample for the present analysis. The LBC1936 is a large study of older community-dwelling adults, mostly living in the Edinburgh and Lothians area of Scotland, all of whom were born in 1936 and most of whom participated in the Scottish Mental Survey of 1947 at age 11 years. At ∼70 years of age, study participants underwent an initial wave of cognitive and physical testing, from 2004–2007. Approximately 3 years later, 866 participants underwent a second wave of cognitive tests at mean age 72.8 years, which also involved an optional brain MRI scan. All data in the current study are taken from this second wave. The brain scan was undertaken by 700 subjects, yielding 681 participants with useable MRI data. Of these, 654 participants who also had complete cognitive data were the subject of the present analysis. The Multi-Centre Research Ethics Committee for Scotland, Scotland A Research Ethics Committee and Lothian Research Ethics Committee approved the use of the human subjects in this study; all
participants provided written informed consent and these have been kept on file. Participants who attended the second wave of the LBC1936 study also underwent a number of cognitive tests. These included 6 subtests from the Wechsler Adult Intelligence Scale: symbol search, digit symbol, matrix reasoning, letter-number sequencing, digit span backward, and block design, alongside 6 subtests from the Wechsler Memory Scale III UK: logical memory immediate and delayed recall, spatial span forward and backward, and verbal paired associates. They also provided measures of simple and 4-choice reaction time and inspection time. These were used to examine associations with the hippocampus both for memory subtests and for cognitive domains. Cognitive ability at age 11 was assessed using the Moray House Test IQ score from the Scottish Mental Survey of 1947, which is considered a good measure of general intelligence. Continuous measures of body mass index, average systolic and diastolic blood pressure, and glycosylated hemoglobin were obtained. Also, at wave 2, participants provided information on vascular and health factors during a medical interview. They were asked whether they had received a diagnosis of hypertension, high cholesterol, or diabetes, about their history of cardiovascular disease, previous strokes, and their smoking status. Presence of each self-reported factor was coded as 1, except smoking status. An aggregate score of contemporaneous vascular risk was derived from these factors and the presence/absence of old infarcts identified on the MRI scan. MRI scans were acquired using a GE Signa Horizon 1.5-T HDxt clinical scanner operating in research mode, using a self-shielding gradient set with a maximum gradient of 33 mT/m and an 8-channel phased-array head coil. The imaging protocol is fully described elsewhere. For this particular study, we used data obtained from processing coronal T1-weighted volume scans acquired with a 3D inversion recovery prepared fast gradient echo sequence. Hippocampal shape models were generated from binary masks obtained semiautomatically from the T1-weighted volumes. First approximations of left and right hippocampal segmentations were obtained from an automated pipeline that uses tools from the FMRIB Software Library version 4.1 and an age-relevant template, followed by visual inspection and manual correction when required using Analyze 10.0 software, and saved as binary masks as per previous publications. Semi-automated measurements of intracranial volume (ICV) were used for normalization. The hippocampal shape modeling and analysis of the local deformations are done in 4 steps: construction of the sample-relevant deformable template model (DTM) of the target structure; template deformation and construction of the individualized shape models; alignment of the surfaces; and computation of the local deformations. A full explanation can be found at http://cgv.kaist.ac.kr/brain/, and the toolbox that implements each step can be accessed from http://www.nitrc.org/projects/dtmframework/. In principle, hippocampal binary masks were input to a non-rigid shape modeling framework that uses a progressive model deformation technique built on a Laplacian surface representation with a multi-level neighborhood and a flexible weighting scheme. Briefly, the surface of a 3D model that encodes the generic shape characteristics of all hippocampi from the sample as a triangular mesh is non-rigidly deformed from large to small scales to allow recovery of the individual shape characteristics, while minimizing the
distortion of the general model's point distribution. This surface deformation is achieved through an iterative process that, at each iteration, decreases a rigidity weight α and the level of neighborhood in a step-wise way, together with the magnitude of the displacement of each vertex. At early iterations, the generic 3D model deforms more broadly to reproduce the large-scale shape features of the hippocampus by propagating the external force, which guides each vertex of the general model to the closest image boundary, across the surface. In the iterative process, when the general model is no longer deformed because of the balance between the external and internal forces, the rigidity and the level of neighborhood are gradually diminished so that the model deforms over smaller regions to reproduce local shape details. To preserve the surface quality and diminish the effect that rough boundaries and noise in the binary masks could pose to the shape analyses, a rotation- and scale-invariant transformation that constrains the vertex transformations only to rotation, isotropic scale, and translation is applied afterward. This helps regularize the individual vertex transformations toward those of the neighboring vertices, using them as reference. The sample's right and left hippocampal DTMs are constructed by applying marching cubes, mesh smoothing and mesh resampling methods to hippocampal "atlases" obtained from averaging the coregistered binary masks from all participants' hippocampi. Our left and right hippocampal DTMs are triangular meshes of 4002 vertices each. The quality of the modeling process was evaluated using 3 metrics: the volumetric similarity index, calculated as the sum of true positives and negatives divided by the sum of true and false positives and negatives; and the mean and maximum distances between the points of the individualized surface models and the corresponding boundaries of the binary masks. The first metric is calculated after converting the individualized surface models back to binary images. True positives are the voxels of this 'mesh-to-binary' converted image that are coincident with those of the binary mask used as input in the modeling. In turn, true negatives are those which were not part of either of the binary images. The third metric is known in the technical literature as the fiducial localization error. When these metrics suggested that the modeling error exceeded half the voxel size, the modeling process was re-run with different values of the rigidity parameter, number of iterations, neighborhood rings, and offsets until a good fit was achieved. After the 4002-vertex surface mesh model was fit to each hippocampal binary mask, all meshes were coregistered and scaled using the individuals' ICV, and an average mesh was generated. This "template" mesh was then aligned back to each individual mesh to calculate the deformation of each point of each hippocampus with respect to the corresponding point in the sample-specific "template". This last step generated 2 text files with the values of the deformation vectors for each point of each data set. Cognitive test scores were examined both at the subtest and domain level. For the subtests, we examined spatial memory, verbal memory, and scores on digit span backward and letter-number sequencing. At the domain level, we used principal component analysis (PCA) to create 3 latent variables representing the cognitive domains of memory, information processing speed,
and the hierarchically superordinate domain of general fluid intelligence. This data reduction approach is common for deriving a latent, underlying construct which is free from item-level measurement error and test-specific variance (a schematic code sketch of this derivation is given after the article text). The cognitive tests, loadings, and proportion of variance explained by the first unrotated component in each domain are shown in Supplementary Table 1. Further details on the cognitive tests are reported in 2 open-access protocol papers. Associations between cognitive variables and hippocampal morphology were evaluated with multiple regression using MATLAB R2015a. Initially, we explored how much cognitive function in older age can be explained by local deformations. This model used the deformation vector at each point of the hippocampal triangular meshes as the predictor and each cognitive subtest variable as the response. We then investigated these associations at the level of the cognitive domains g, g-memory, and g-speed. Next, we explored how much local hippocampal surface deformations in older age depended on childhood intelligence and used the latter as predictor. Age in days at the time of scanning, gender, and vascular risk score were used as covariates in all models. We also ran supplementary analyses for hippocampal volume. We calculated correlations between hippocampal volume and the cognitive and vascular risk variables, and ran linear regressions using the same age, gender, and vascular risk measures as for the morphological analysis. Given the well-known vascular substrate of neurodegeneration and cognitive impairment, and the links between vascular risk factors and cognitive decline, we explored whether vascular risk factors were directly associated with local hippocampal shape deformations, and whether there were any mediating effects in the associations between hippocampal deformations and cognitive function. The beta coefficients and p-values for each of the 4002 points were mapped on the reference surface to display the deformation patterns in relation to each cognitive variable. Standardized βs are reported throughout, and p-values were corrected for multiple comparisons using the false discovery rate (FDR), as recommended by Glickman et al. (a schematic code sketch of this per-vertex analysis is also given after the article text). Finally, we ran a sensitivity analysis to account for the presence of participants who may be exhibiting pathological aging. Though all participants were free from dementia diagnosis at initial recruitment, we identified those who had either reported a dementia diagnosis or had an MMSE score <24 at either wave 2 or wave 3 of the study. A dichotomous covariate reflecting whether either criterion was fulfilled was included in sensitivity models, and the loci and magnitudes of associations between cognitive scores and hippocampal morphology were compared with previous model outputs. Characteristics of study participants are shown in Table 1. Participants' mean total hippocampal volume was 6429.10 mm³, and associations between hippocampal volumes and study variables are shown in Table S2. Participants attending MRI did not significantly differ from those who only attended cognitive testing across any memory subtests or at the level of any cognitive domains. Median Dice coefficient values were 0.96 for both hippocampi. Median hippocampal surface-binary mask mean differences were 0.22 mm for left hippocampi and 0.29 mm for the right, indicating that the surface models accurately reproduced the hippocampal shape details. The median fiducial localization error for the left hippocampus was 4.20 mm, and for the right hippocampus it
was slightly higher, at 6.91 mm. Further investigation revealed that the latter metric, which measures the maximum distance between the surface model and the binary mask, was high due to rough boundaries on the binary masks arising from voxelization and the presence of small T1-weighted hypointense cavities. Although their nature is unknown, these cavities are normal features of aging: some of them may represent a diffuse vascular process with adverse local effects and/or be proxies for larger volumes of infarcts or mild or severe diffuse damage. Regional differences in hippocampal morphology with respect to measures of specific memory subtests are shown in Fig. 2. The standard errors of all cognitive models are shown in Supplementary Fig. 1. At uncorrected significance levels, better performance across the 4 measures was associated with both inward and outward hippocampal deformations with respect to the template. Outward deformations at the bilateral right medio-ventral tail and bilateral inward deformations at the dorsal tail were consistently associated with superior performance across tests, though with differing magnitudes. Only right hippocampal associations involving extreme deformation patterns in relation to spatial span performance survived FDR correction; these were in the subiculum and CA1 at the head, and in the ventral tail section of CA1. Regional differences in hippocampal morphology with respect to general cognitive factors are shown in Fig. 4. Memory domain scores broadly replicated the inward and outward deformation patterns, with respect to the mean surface of the sample, outlined above for the memory subtests. Bilateral deformations on CA1 at the hippocampal head and dorsal tail, at the junction between the hippocampal head and tail, and in the subiculum were associated with processing speed. A modest and nonsignificant association with general cognitive abilities was observed at the dorsal head of the left hippocampus. After applying FDR correction, only associations involving regions with extreme deformation patterns associated with processing speed survived: in the subiculum, in the ventral tail section of CA1, and at the anterior-to-dorsal region of the head for the left hippocampus; and in the subiculum and in the ventral tail section of CA1 for the right hippocampus. Childhood intelligence, represented by age 11 IQ, did not predict hippocampal shape deformations in older age. Fig.
6 shows that the model fitted the data, but no associations survived FDR correction. Body mass index and self-reported vascular risk factors exhibited nominal uncorrected associations with inward deformations at the lateral head of each hippocampus. However, these associations did not survive FDR correction. Therefore, there was no basis from which to conduct formal mediation analyses to inquire whether vascular risk factors mediated any associations between hippocampal shape and cognitive functions. Of note, an additional evaluation of the associations between cognitive variables and hippocampal morphology excluding the vascular risk factor score as a covariate did not show differences in the graphical representation of the results presented above. Accounting for dementia diagnosis among participants did not significantly alter the loci or magnitude of the reported effects. For example, maximal cluster peaks for speed in the left hippocampus changed from β = −0.231 to −0.218 and from 0.248 to 0.251, and in the right hippocampus from β = −0.242 to −0.234 and from 0.227 to 0.221. For spatial span, they changed from β = 0.201 to 0.189, and from −0.272 to −0.247. All values still remained significant following FDR correction. Supplementary analyses for hippocampal volume are shown in Table S2 and Table S3. When modeled with cognitive tests, covarying for age, sex, and vascular measures, raw volumes were associated with verbal memory, digit span backward, and letter-number sequencing. Total hippocampal volume was also significantly associated with the cognitive domains g and memory. However, while these results survived FDR correction for multiple comparisons, adjusting the hippocampal volumes for brain size attenuated all associations to nonsignificance. Here, we report that associations between hippocampal characteristics and cognitive abilities show hippocampal-wide volumetric effects alongside complex and regionally specific morphological deformations. We found associations between regional shape deformations in the right hippocampus and spatial memory, and between processing speed and a more distributed set of bilateral regions. Notably, these 2 cognitive measures did not show any associations with hippocampal volume, indicating that volumetric and morphological analyses provide complementary information on a brain formation which is intimately involved in multiple cognitive functions. In particular, our results highlight the importance of the CA1 subfield in cognitive performance among this group of healthy older adults, in agreement with other studies. While a previous study on a group of 104 healthy young adults reported a complex pattern of inward and outward hippocampal deformations, with respect to the mean hippocampal shape of the sample, being associated with measures of spatial intelligence and spatial memory but not with processing speed, our contrasting findings in this cohort may be due to the increased proportion of shape variance due to differential age effects, which may subsequently account for more variance in cognitive performance. Processing speed is well known to be highly sensitive to aging, but current research indicates a central role for white matter in processing speed in older age. Nevertheless, hippocampal volume has been reported to contribute uniquely to processing speed beyond white matter hyperintensities, suggesting that hippocampal deformations may provide unique information about cognitive variability in older populations. The main pyramidal layers of the hippocampus are found predominantly in CA1, along with CA3 and
the subiculum. Given that these layers receive axonal projections from the perforant path, inward hippocampal deformations found in clinical populations have previously been taken as a probable consequence of disease-mediated reductions in nerve fibers in Alzheimer's disease and schizophrenia which disrupt cognitive function. Hippocampal deterioration is present in nonpathological aging, making it reasonable to apply these inferences about hippocampal deformations and basic neurobiology to the current findings relating to inward deformations. However, this would lead us to infer that outward deformations may reflect resilience, whereas we found outward deformations to be associated with poorer processing speed at the bilateral subiculum. One speculative interpretation may be that this reflects a relative preservation of areas that exert inhibitory signaling in processing speed-related functions, though direct data linking hippocampal morphology and neurobiology should be a priority for future research. Despite the fact that childhood intelligence did not predict hippocampal shape deformations in older age, the nominal uncorrected associations between these deformations and age 11 IQ were observed in the same regions that were also associated with fluid intelligence in older age. A smaller study on individuals from 18 to 86 years of age also showed a similar result, but measured subfield volumes rather than morphology. Another study of similar sample size, which evaluated the correlation between educational attainment in youth and hippocampal shape deformations, reported significant associations in the same locations as our study. This may indicate that there is an inner tendency of certain hippocampal regions to be deformed inward or outward with respect to a medial shape depending on people's intelligence and independent of age, although a direct association does not seem to exist. The association between spatial memory ability and the hippocampus receives broad support from previous studies. In particular, spatial memory has previously been related to the volume of the right hippocampal tail. However, it is important to observe that the Wechsler Spatial Span task administered here does not provide an index of pure allocentric spatial ability, which is well studied with respect to hippocampal functioning. Rather, spatial span is a complex task that may employ multiple or different frames of reference, and the results here should be interpreted in that context. The finding that measures of short-term memory, working memory, and verbal memory were not associated with hippocampal shape after correction for multiple comparisons may be considered unsurprising. However, prior work indicates that the hippocampus may not be relevant for some processes, such as memory binding or verbal processing, though there is functional and volumetric evidence for the involvement of the hippocampus in immediate and delayed verbal memory. It should be noted that, across all memory tests, there were associations in consistent directions with the subiculum and in clusters at the head and tail of the CA1 region. However, these associations did not survive correction for multiple comparisons, and although FDR is considered a relatively liberal correction approach, it should be noted that it cannot account for the spatial relatedness of clustered peaks, which are relatively uncommon. Moreover, the inability to reliably detect effects of hippocampal shape on some memory tests could also be due to the relatively good health of the study participants;
this precludes a clear generalization of our findings to other populations, such as those with clinical neurodegenerative or neuropsychiatric conditions. One question in a cohort of this age is to what extent the results reflect normal age-related variations in hippocampal shape, as opposed to reflecting a proportion of subjects who may be in the earliest stages of dementia or other age-related neurodegeneration. The exclusion criteria utilized may not capture participants in either the early or presymptomatic stages of disease. Despite the unavailability of biomarkers of Alzheimer's pathology in this cohort, the current literature suggests that 20% or more of asymptomatic individuals in this age group may have evidence of Alzheimer's pathology. Studies on hippocampal morphology in Alzheimer's disease patients and individuals with MCI, albeit using different shape modeling methods, show associations between different cognitive tests and hippocampal shape deformations in the same locations and directions as our study. Though our data had no extreme outliers, it is currently impossible to ascertain the number of presymptomatic individuals in the current cohort, and the degree to which any presymptomatic disease has exerted leverage on our results. Such information will only become available with continued follow-up and future data linkage with national health records. We therefore caution that our findings apply generally to currently non-demented, community-dwelling older adults, rather than exclusively to nonpathological aging. This study has other limitations. First, we did not include measures of other brain regions. Thus, it is possible that hippocampal shape and processing speed, for example, are both related to other brain measures such as white matter microstructure, frontal lobe regions, or general brain atrophy, but that processing speed is not directly constrained by hippocampal shape per se. Future studies could focus on longitudinal data which examine change-change correlations in light of other brain MRI indices. Also, cross-sectional studies could examine hippocampal morphology in relation to other brain regions' morphology and/or microstructure to inform on possible associations and/or patterns in different populations. It should also be noted that the effect sizes for associations between morphology and cognitive abilities were generally modest. Although we were well powered to detect these effects, it is possible that such effects may not be reliably detected in less well-powered settings. However, our findings in this healthy, self-selecting cohort are likely to be underestimates of population-level effect sizes, and we also note that morphological analysis estimates were of a greater magnitude than those for hippocampal volume. Furthermore, though shape analysis is a powerful tool to investigate small changes in the outer surface of the hippocampus and its subregions, inferences on inner hippocampal subfields such as the dentate gyrus cannot be made. Finally, further information on the relative contributions of hippocampal morphology and volume to cognitive abilities would benefit from direct comparisons with subfield volumes. However, as outlined in the introduction, their accurate delineation requires greater resolution and a higher field strength than were available here, and there remains no consensus on a single segmentation protocol. Among the study's strengths are the narrow age range, the control for other important confounds such as vascular risk, and the large sample size. The hippocampal masks on which
the morphological analysis was based were each visually inspected and manually edited to ensure high quality. The hippocampal modeling method employed here was validated specifically on older individuals experiencing nonpathological aging, as well as on MCI and AD patients. We also used a cohort-specific template to minimize the potential for registration errors, and ensured that the hippocampal shape modeling could accurately reproduce the shape details and correct for the rough boundaries of the binary masks. This enabled us to demonstrate a complex pattern of hippocampal deformations across a wide range of well-characterized cognitive abilities in older age. To the best of our knowledge, this is the first study on a large older and cognitively normal population exploring the associations between hippocampal morphology and cognitive functions. Nevertheless, the deformation patterns found are similar to those presented by other studies that explored hippocampal morphology in cognitively different groups of individuals with ages ranging from middle to late adulthood. Asymmetry in the patterns obtained for the left and right hippocampi was also a corroborative result. This asymmetry has previously been reported not only for the hippocampus but also for the whole temporal region in MCI and AD patients. Overall, this study indicates that a consistent pattern of both inward and outward hippocampal deformations in certain regions is associated with specific cognitive functions in older age, and suggests that complex shape-based hippocampal analyses may provide valuable information beyond gross volumetry. The authors have no conflicts of interest to disclose.
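The domain-score derivation described in the methods (scores on the first unrotated principal component over a set of standardized subtests) can be illustrated with a short sketch. This is a minimal illustration rather than the authors' code: the data-frame layout, the example subtest names and the use of scikit-learn are assumptions; the actual loadings and variance explained are those reported in Supplementary Table 1.

```python
# Minimal sketch: derive a latent domain score (e.g. g-memory) as the first
# unrotated principal component of standardized subtest scores.
# Assumes a pandas DataFrame `df` with one column per subtest (hypothetical names).
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

memory_subtests = ["logical_memory_immediate", "logical_memory_delayed",
                   "spatial_span", "verbal_paired_associates",
                   "digit_span_backward", "letter_number_sequencing"]

def first_component_score(df: pd.DataFrame, columns: list) -> pd.Series:
    """Return participant scores on the first unrotated principal component."""
    z = StandardScaler().fit_transform(df[columns])   # standardize each subtest
    pca = PCA(n_components=1).fit(z)
    print("variance explained by first component:", pca.explained_variance_ratio_[0])
    return pd.Series(pca.transform(z)[:, 0], index=df.index, name="g_memory")

# g_memory = first_component_score(df, memory_subtests)
```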
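The per-vertex analysis (the deformation at each of the 4002 mesh vertices related to a cognitive score, with age, gender and vascular risk as covariates, and Benjamini-Hochberg FDR correction across vertices) can be sketched as follows. This is a schematic reimplementation in Python rather than the authors' MATLAB code; the array names, shapes and synthetic data are assumptions, and the standardized β is approximated by z-scoring the deformation and the cognitive score.

```python
# Schematic per-vertex association analysis: for each mesh vertex, regress a
# cognitive score on the signed deformation at that vertex plus covariates,
# then FDR-correct the 4002 p-values of the deformation term.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests
from scipy.stats import zscore

n_subjects, n_vertices = 654, 4002
rng = np.random.default_rng(0)
deform = rng.normal(size=(n_subjects, n_vertices))   # per-vertex deformation values
cognition = rng.normal(size=n_subjects)              # e.g. spatial span score
covars = rng.normal(size=(n_subjects, 3))            # age, gender, vascular risk

betas = np.empty(n_vertices)
pvals = np.empty(n_vertices)
y = zscore(cognition)
for v in range(n_vertices):
    X = sm.add_constant(np.column_stack([zscore(deform[:, v]), covars]))
    fit = sm.OLS(y, X).fit()
    betas[v], pvals[v] = fit.params[1], fit.pvalues[1]   # deformation term

# Benjamini-Hochberg false discovery rate across all vertices
reject, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("vertices surviving FDR correction:", int(reject.sum()))
```

The surviving β values would then be mapped back onto the template surface to visualize the deformation pattern, as done for the figures described above.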
Structural measures of the hippocampus have been linked to a variety of memory processes and also to broader cognitive abilities. Gross volumetry has been widely used, yet the hippocampus has a complex formation, comprising distinct subfields which may be differentially sensitive to the deleterious effects of age, and to different aspects of cognitive performance. However, a comprehensive analysis of multidomain cognitive associations with hippocampal deformations among a large group of cognitively normal older adults is currently lacking. In 654 participants of the Lothian Birth Cohort 1936 (mean age = 72.5, SD = 0.71 years), we examined associations between the morphology of the hippocampus and a variety of memory tests (spatial span, letter-number sequencing, verbal recall, and digit backwards), as well as broader cognitive domains (latent measures of speed, fluid intelligence, and memory). Following correction for age, sex, and vascular risk factors, analysis of memory subtests revealed that only right hippocampal associations in relation to spatial memory survived type 1 error correction in subiculum and in CA1 at the head (β = 0.201, p = 5.843 × 10⁻⁴, outward), and in the ventral tail section of CA1 (β = −0.272, p = 1.347 × 10⁻⁵, inward). With respect to latent measures of cognitive domains, only deformations associated with processing speed survived type 1 error correction in bilateral subiculum (|β| ≤ 0.247, p < 1.369 × 10⁻⁴, outward), bilaterally in the ventral tail section of CA1 (|β| ≤ 0.242, p < 3.451 × 10⁻⁶, inward), and a cluster at the left anterior-to-dorsal region of the head (β = 0.199, p = 5.220 × 10⁻⁶, outward). Overall, our results indicate that a complex pattern of both inward and outward hippocampal deformations is associated with better processing speed and spatial memory in older age, suggesting that complex shape-based hippocampal analyses may provide valuable information beyond gross volumetry.
773
A uni-extension study on the ultimate material strength and extreme extensibility of atherosclerotic tissue in human carotid plaques
Despite significant advances in the diagnosis and management of stroke, it remains the third leading cause of death globally. Carotid atherosclerotic disease is responsible for 25–30% of all cerebrovascular ischemic events in western nations, with carotid luminal stenosis being the only validated diagnostic criterion for patient risk stratification. However, patients with mild to moderate carotid stenoses still account for the majority of clinical events. Novel non-invasive screening methods are therefore urgently required to improve risk stratification of carotid plaques, in an attempt to avoid acute ischemic events. High-resolution, multi-sequence magnetic resonance (MR) imaging has shown great potential in identifying high-risk plaque morphological and compositional features, such as a large lipid-rich necrotic core, intraplaque hemorrhage (IPH) and fibrous cap (FC) defects, with high accuracy and reproducibility. These MR-depicted plaque features have demonstrated their ability to differentiate patient clinical presentation and to predict subsequent ischemic cerebrovascular events. However, IPH and FC rupture are both common in symptomatic lesions, yet clinical recurrence rates are only 10–15% within the first year. It is therefore clear that plaque morphological and compositional features alone, or in combination, cannot serve as a robust marker for prospective cerebrovascular risk, and additional analyses or biomarkers are required. Studies have suggested that underlying pathological processes, such as inflammation and hypoxia, have an influence on plaque destabilization. Additionally, biomechanical factors are likely to play a role, as carotid atherosclerotic plaques are continually subject to mechanical loading due to blood pressure and flow. Plaque structural failure could occur when such loading exceeds its material strength. Therefore, biomechanical analyses may provide complementary information to luminal stenosis and plaque structure in determining vulnerability. Calculating mechanical stress within the FC has been shown to differentiate patient clinical presentation in both the carotid and coronary arteries and could provide incremental information to predict subsequent ischemic cerebrovascular events. These findings suggest that plaque morphological features and mechanical conditions should be considered in an integrative way if plaque vulnerability assessment is to be improved. To ensure that any mechanics-based vulnerability assessment is accurate, apart from the predicted mechanical loading within the plaque structure, the ultimate material strength and extreme extensibility of the different atherosclerotic plaque components, including FC, media, lipid and IPH or thrombus (IPH/T), are needed. Available experimental data on this aspect are limited, and hence we sought to quantify the ultimate material strength and extensibility of FC, media, lipid core and IPH/T in human carotid plaques from uni-extension tests in the circumferential direction. The local ethics committee approved the study protocol and all patients gave written informed consent. The patient demographics are shown in Table 1. Details of the tissue preparation and testing and the equipment used have been described previously. In brief, endarterectomy carotid plaque samples from 21 symptomatic patients were collected during surgery and banked in liquid nitrogen for <4 months prior to testing. Cryoprotectant solution, added to a final concentration of 10% dimethylsulfoxide, was utilized to prevent tissue damage from ice crystal formation and thawing. Prior to
testing, samples were thawed in a 37 °C tissue bath and cut into rings of 1–2 mm thickness perpendicular to the blood flow direction, from proximal to distal, using a scalpel. Approximately 10 rings were obtained from each plaque. Each ring was further dissected to separate the different atherosclerotic tissue components under a stereo microscope using fine ophthalmic clamps and scissors. The tissue strips were prepared carefully to minimize the variation of width and thickness along the length. The identification and separation of each tissue component in atherosclerotic plaques is known to be challenging. Prior to material testing, sample rings adjacent to the ones used for material testing were chosen for training, assessing the accuracy of operators in identifying and separating plaque component types through visual and histological means. As shown in Fig. 1A, the histological examination confirmed the fibrous nature of the inner layer marked by gray arrows in the ring next to it. Histological preparation of samples was performed following a standard protocol, with tissue strips being formalin-fixed and paraffin-embedded. Intraplaque hemorrhage appeared red/reddish within the wall, as shown in Fig. 1B and confirmed by the histology shown in Fig. 1B1 and B2. Thrombus typically appeared as a section of red/reddish substance obstructing the lumen. Lipid core was yellowish in color. As multiple measurements were obtained from each plaque, a linear mixed-effects model was used to assess the differences between parameters for the different tissue types (a schematic version of this model is sketched after the main text). All statistical analyses were performed in R 2.10.1, with statistical significance assumed when the p value was <0.05. All results are presented as medians for non-normally distributed data. In total, data from 32 FC strips from 12 samples, 35 media strips from 15 samples, 26 lipid core strips from 9 samples and 12 IPH/T strips from 7 samples qualified for analysis of ultimate strength and extreme extensibility. Data from tissue strips which slid or broke at a location close to the clamp were excluded. It is essential to minimize the variation of width and thickness along the tissue strip. The thickness, width and length (in mm) of the tissue strips included for analysis were: FC – 1.02±0.21, 1.71±0.48, 12.78±3.02; media – 0.97±0.34, 1.65±0.45, 15.44±4.72; lipid core – 1.17±0.33, 1.68±0.61, 8.92±2.17; and IPH/T – 1.24±0.40, 1.69±0.41, 9.07±3.51. The comparisons of ultimate material strength and extensibility of FC, media, lipid core, and IPH/T are presented in Fig. 4, with the exact values listed in Table 2. As shown in Fig. 4A, the extreme extensibilities of the different atherosclerotic tissues were comparable, but their ultimate strengths differed. The ultimate material strengths of FC and media were comparable, as were those of lipid core and IPH/T. However, the ultimate material strengths of both FC and media were significantly higher than those of either lipid core or IPH/T. When the individual ultimate material strength values of FC were plotted vertically, there appeared to be a visual separation into two groups, with a median ultimate material strength of 259.3 kPa for the stronger group and 69.7 kPa for the weaker group. 12/15 strips from the lower strength group were from the proximal plaque region, whereas 6/17 strips from the higher strength group were from this region. No such clear separation was found in media, lipid core or IPH/T. The peak stress and stretch recorded from the strips which slid away indicated that those strips had an ultimate strength and extensibility above the recorded level. As shown in Fig.
6, no significant differences were found between FC and media in either extreme extensibility or ultimate strength. As no tissue strips of lipid core or IPH/T slid, such an analysis was not performed for these. Our results show that FC and media have a similar ultimate material strength, as do lipid core and IPH/T. The ultimate material strengths of FC and media are higher than those of either lipid core or IPH/T. Moreover, all atherosclerotic components have similar extreme extensibilities. Finally, we observed that FC in the proximal region of the plaque is weaker than FC located distally. It is important to understand the underlying mechanism of the clear separation in the ultimate strength of FC shown in Fig. 5. In the group with low strength, 80.0% of strips were from the proximal region, whereas only 35.3% of strips in the group with high strength were from the proximal region. This observation may be due to inherent differences in pathological features between the proximal and distal plaque regions. It has been observed that the proximal region contains more macrophages and the distal region contains more smooth muscle cells. An increased density of macrophages has been shown to reduce the material strength of the FC in aortic atherosclerotic lesions, through the release of matrix metalloproteinases. A weaker FC in the proximal plaque region is also supported by the observation that the majority of angiography-defined carotid plaque ruptures/ulcerations are located in this region. In this study, the most stenotic site, defined as the location with the minimum ratio of luminal area to wall area, was used as a reference location, enabling the plaque to be divided into proximal and distal portions. The most stenotic site was determined by a visual estimation of luminal area and wall area. The lack of a fully quantitative approach may have introduced small errors in determining the exact lumen and wall area. In addition, the wall area mentioned here could not represent the total outer wall area of the lesion, as only parts of the thickened intima or media were removed during the carotid endarterectomy. To the authors' best knowledge, only four studies have previously reported the ultimate strength and extreme extensibility of carotid atherosclerotic plaques. In three of these studies, whole plaque specimens were classified according to their appearance on imaging before being subjected to testing, while the other study separated the plaque into two layers. The recorded failure features were therefore grouped and analyzed accordingly. In general, the stress level at failure varied widely within and between samples, ranging from tens of kPa to MPa, and extreme extensibility varied from about 1.2 to 1.8. Results from these studies may be challenging to interpret, though, as tissue components were not separated and tested. For example, in a mixture of lipid and fibrous tissues, the presence of soft lipid likely acts to destabilize the structure, reducing its ultimate material strength. The strength of our data is that we attempted to individually characterize the material properties of each atherosclerotic tissue. Thus, researchers are able to use these data as the basis for computational simulations to predict rupture risk. However, the fact that material characteristics are location- and lesion-dependent should not be overlooked. Despite the heterogeneity between our methodology and previous studies, data on FC and media were remarkably similar between studies. Studies have also reported the material strength and extreme extensibility of atherosclerotic lesions from other circulations, such
as the coronary, aortic and iliac arteries, which have been summarized in a previous comprehensive review. In this study, although some lipid core and IPH/T strips were very fragile and unstable, some were strong and flexible, as those shown in Figs. 2, 3 and 8 in the reference, permitting uni-axial testing. In vivo imaging-based mechanical analysis predicting stress/stretch has demonstrated its incremental value in predicting subsequent ischemic cerebrovascular events. However, we should be cautious in combining the information on ultimate strength and extensibility obtained from direct material testing with computational modelling to assess FC rupture risk. Firstly, the effect of residual stresses in atherosclerotic plaques is unknown. Although residual stress is rarely considered in the majority of in vivo imaging-based computational modelling studies, neglecting this force may lead to an overestimation of FC stress and stretch concentrations. Secondly, the ultimate strength has a close association with local inflammation, which may be quantifiable through ultrasmall superparamagnetic iron oxide-enhanced MR imaging or positron emission tomography with 2-deoxy-2-fluoro-D-glucose integrated with computed tomography imaging. Lower strength thresholds should be adopted for mechanics-based vulnerability assessment if heavy inflammation is present within plaques. Thirdly, the ultimate material strength of FC obtained in this study was from symptomatic patients. Some of these tissue strips, in particular those within the lower strength group shown in Fig. 5, might be from a location anatomically close to a previous rupture site. Therefore, the median value of 70 kPa from this group should not be regarded as a universal threshold, while the median value of 260 kPa from the stronger group may be more appropriate to serve as a reference value for plaque regions with lower biological activity. Our study also revealed evidence of tissue micro-damage during the extension process that occurred much earlier than the final breakage, evidenced as steps along the load–displacement curves in Figs. 2 and 3. Micro-damage occurred at a very low loading level, as seen from the curves. These events may be caused by high stress concentrations due to uneven stretching or clamping, although inherent tissue imperfections, e.g., the presence of neovessels or voids due to cellular death, could also account for such early damage. This implies that under physiological conditions, tissues with an abnormal configuration, such as large luminal curvatures induced by FC erosion and rupture, or an abnormal structure, such as micro-calcium inclusions, could incur damage despite a relatively low stress/stretch level, resulting in repeated cycles of damage/healing leading to plaque progression. In this study, the extreme extensibility was measured using the stretch ratio, which is dependent on the marker location and the ratio of length to width/thickness (the computation of the stretch ratio and Cauchy stress is sketched after the main text). Ideally, the ratio of length to width/thickness should be over 10 and the markers need to be located in the central region. However, due to the small sample dimensions, it was difficult to meet these ideal criteria. In this study, the length-to-width ratios for FC and media were about 7, and those for lipid core and IPH/T were smaller, at around 5. The length-to-thickness ratios for FC and media were >10 and those for lipid core and IPH/T were ~7. Although these ratios are not 'perfect', they should be acceptable for material testing. As shown in Fig.
8B and C, the difference between the stress–stretch curves obtained by tracing the distance between different pairs of the 4 markers placed along the strip was small. This implies that the deformation in the region about 1–2 mm away from the clamp should be reasonably uniform when the ratio of length to width or thickness is about 5–7. Despite our robust methodology, there are limitations to this study: due to the small sample dimensions, only the strength and extensibility in the circumferential direction were quantified; there is likely to be a close relationship between strength/extensibility and tissue microstructure, e.g., the presence of micro-calcium and fiber orientation, but following testing the tissue strips became contaminated and, accordingly, such information was no longer available; calcium was not tested in this study; and some tissue strips may contain more than one tissue type due to the highly heterogeneous nature of atherosclerotic plaques. As shown in Fig. 9, the yellowish block would be judged to be lipid core; however, it was a mixture of lipid and fibrous tissues when reviewed under light microscopy. Although the authors attempted to dissect tissue strips by careful inspection using a stereo microscope, potential tissue misclassification still remained; the tissue type was discriminated according to the appearance of the slice used for testing and the corresponding histology of an adjacent slice. Misclassification could potentially occur if the tissue type changed dramatically over a short longitudinal distance. Moreover, in this study, strips of lipid core were taken from big yellowish tissue blocks, most of which should be true lipid cores; however, as shown in Fig. 9, this statement might not hold for some of them. Finally, cryoprotectant solution was used to prevent tissue damage from ice crystal formation; this storage approach may slightly alter the mechanical behavior of atherosclerotic tissue due to solution permeation into the tissue and lipid extraction. The ultimate material strength of the atherosclerotic components differs, with FC and media being comparable, as were lipid and IPH/T. All tissue subtypes exhibited similar extensibility. The authors do not have any conflict of interest to be declared.
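The ultimate-strength and extensibility measures used here (Cauchy stress and stretch ratio at peak loading) can be obtained from the recorded force and marker distances together with the unloaded strip dimensions. The sketch below is illustrative only and is not the authors' processing code; it assumes tissue incompressibility, so that the deformed cross-section equals the unloaded area divided by the stretch ratio, a common soft-tissue assumption that is not stated explicitly in the text, and the example numbers are made up.

```python
# Illustrative computation of stretch ratio and Cauchy (true) stress for a
# uniaxially extended tissue strip, assuming incompressibility.
import numpy as np

def stretch_ratio(marker_distance, initial_distance):
    """lambda = current marker distance / initial marker distance."""
    return np.asarray(marker_distance) / initial_distance

def cauchy_stress_kpa(force_n, width_mm, thickness_mm, stretch):
    """Cauchy stress in kPa: sigma = F * lambda / A0, with A0 = width * thickness.

    The incompressibility assumption gives deformed area A = A0 / lambda,
    so sigma = F / A = F * lambda / A0 (N/mm^2 = MPa, converted to kPa).
    """
    area0_mm2 = width_mm * thickness_mm
    return np.asarray(force_n) * stretch / area0_mm2 * 1e3

# Hypothetical example in the reported range for a fibrous cap strip:
lam = stretch_ratio([10.0, 11.0, 11.8], initial_distance=10.0)
sigma = cauchy_stress_kpa([0.05, 0.18, 0.28], width_mm=1.7, thickness_mm=1.0, stretch=lam)
print(f"ultimate strength ~ {sigma.max():.0f} kPa at stretch {lam[sigma.argmax()]:.2f}")
```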
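Because several strips come from the same plaque, the between-tissue comparison was made with a linear mixed-effects model, with plaque as the grouping factor. A schematic equivalent in Python is given below (the original analysis was run in R 2.10.1); the data-frame layout and column names are assumptions.

```python
# Schematic mixed-effects comparison of ultimate strength across tissue types,
# with a random intercept per plaque to account for multiple strips per plaque.
# Column names and data layout are assumptions; the paper's analysis used R.
import pandas as pd
import statsmodels.formula.api as smf

def compare_tissue_strength(df: pd.DataFrame):
    """df columns: 'strength_kpa', 'tissue' (FC/media/lipid/IPH_T), 'plaque_id'."""
    model = smf.mixedlm("strength_kpa ~ C(tissue)", data=df, groups=df["plaque_id"])
    result = model.fit()
    return result.summary()

# print(compare_tissue_strength(df))
```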
Atherosclerotic plaque rupture occurs when mechanical loading exceeds its material strength. Mechanical analysis has been shown to be complementary to morphology and composition for assessing vulnerability. However, strength and stretch thresholds for mechanics-based assessment are currently lacking. This study aims to quantify the ultimate material strength and extreme extensibility of atherosclerotic components from human carotid plaques. Tissue strips of fibrous cap, media, lipid core and intraplaque hemorrhage/thrombus were obtained from 21 carotid endarterectomy samples of symptomatic patients. Uni-extension tests with tissue strips were performed until they broke or slid. The Cauchy stress and stretch ratio at the peak loading of strips broken about 2 mm away from the clamp were used to characterize their ultimate strength and extensibility. Results obtained indicated that the ultimate strengths of fibrous cap and media were 158.3 [72.1, 259.3] kPa (median [interquartile range]) and 247.6 [169.0, 419.9] kPa, respectively; those of lipid and intraplaque hemorrhage/thrombus were 68.8 [48.5, 86.6] kPa and 83.0 [52.1, 124.9] kPa, respectively. The extensibility of each tissue type was: fibrous cap - 1.18 [1.10, 1.27]; media - 1.21 [1.17, 1.32]; lipid - 1.25 [1.11, 1.30] and intraplaque hemorrhage/thrombus - 1.20 [1.17, 1.44]. Overall, the strengths of fibrous cap and media were comparable, and so were those of lipid and intraplaque hemorrhage/thrombus. Both fibrous cap and media were significantly stronger than either lipid or intraplaque hemorrhage/thrombus. All atherosclerotic components had similar extensibility. Moreover, fibrous cap strength in the proximal region (closer to the heart) was lower than that of the distal region. These results are helpful in understanding the material behavior of atherosclerotic plaques.
774
Cholecystocolonic fistula: A rare case report of Mirizzi syndrome
Mirizzi syndrome, eponymously documented in 1948, is now widely known amongst the surgical community to denote the circumstance by which a large gallstone in the gallbladder neck or cystic duct leads to a narrowing of the common hepatic duct. Mirizzi syndrome is among the rarer complications of longstanding gallstone disease, alongside cholecystocholedochal fistula and gallstone ileus. Mirizzi syndrome concurrent with cholecystoenteric fistula is an even rarer occurrence and may or may not include an associated gallstone ileus. Of those patients found to have both Mirizzi syndrome and a cholecystoenteric fistula, reports have included cholecystoduodenal fistula as well as cholecystogastric fistula. However, there are few reports of cholecystocolonic fistula. In the following case, the patient was discovered to have advanced Csendes type V Mirizzi syndrome with a cholecystocolonic fistula, representing a rare phenomenon with a rarer presentation. This phenomenon has been explained in detail in only one prior case at the time of this writing. This case was handled at an academic institution and has been presented in accordance with the SCARE criteria. A 70-year-old man with a history of hypertension, hyperlipidemia, and chronic biliary colic presented to an outside medical center with a five-day history of right upper quadrant (RUQ) abdominal pain, nausea and emesis that worsened with fatty meals. The patient denied fevers or diarrhea, was afebrile, and vital signs were within normal limits. On exam, the patient demonstrated RUQ and epigastric tenderness with a negative Murphy sign. Initial laboratory tests were notable for an alkaline phosphatase of 183 and were otherwise unremarkable. Ultrasound imaging demonstrated a thickened gallbladder wall up to 5 mm with an apparent shadowing suggesting a gallstone, as well as some pericholecystic fluid. Due to extensive ring-down artifact on ultrasound, computed tomography imaging was obtained and elicited concern for cholecystocolonic fistula, with pneumobilia, inflammation and enhancement in the area of the gallbladder fossa and common bile duct, as well as thickening extending into the adjacent colon at the hepatic flexure. The patient underwent two weeks of antibiotic therapy. Preoperative hepatobiliary scintigraphy was compatible with a cholecystocolonic fistula to the hepatic flexure of the colon. However, preoperative colonoscopy was unable to locate the fistula tract. On an elective basis, the patient was taken to the operating room. After laparoscopic exploration, the decision was made to convert to an open procedure due to signs of severe inflammation. An open cholecystocolonic fistula takedown with partial colectomy and primary anastomosis was performed. A cholecystectomy was attempted though aborted due to concern for malignancy, and biopsies were taken. The patient was transferred to our institution for further care. At the time of transfer, the patient was hemodynamically stable and tolerating a diet without complaint. There was a Kocher incision healing well and a single Jackson-Pratt drain with mild bilious drainage. Laboratory tests were notable for a slightly elevated bilirubin of 1.3 and an albumin of 2.9. Biopsies taken at the outside institution were found to be negative. The patient underwent a diagnostic endoscopic retrograde cholangiopancreatography (ERCP) and was found to have multiple stones in the hepatic ducts as well as a large stone eroding through the wall of the upper third of the common hepatic duct, concerning for Csendes type IV Mirizzi syndrome; a stent was placed in the common bile duct (CBD). Following complete preoperative planning, the patient underwent
an exploratory laparotomy and bile duct exploration with a choledochotomy and removal of a large biliary stone. The stent was visible in the lumen of the bile duct, and the remainder of the gallbladder neck was used to primarily repair the defect using interrupted sutures. The patient tolerated the procedure well; the nasogastric tube and Foley catheter were removed on postoperative day (POD) 1. His diet was advanced and he was tolerating solid food as of POD 2. On POD 3 the output from the Jackson-Pratt drain was minimal and it was removed. He was discharged home in good condition on POD 4 and later seen in clinic with a well-healed scar and no other complaints. Through the decades since its discovery, Mirizzi syndrome has presented with various levels of severity and effect, necessitating multiple classification schemes. Ultimately, those most widely used in clinical practice include the McSherry and the Csendes classification systems. The McSherry classification schema dichotomized the syndrome, denoting type I as external compression of the bile duct by a large stone, and type II as a cholecystobiliary fistula caused by such a stone or stones. The Csendes classification provides additional utility, as it correlates with principles of management. This formulation delineated the severity of damage to the biliary tree, where Csendes type II indicates a cholecystobiliary fistula with less than one-third of the circumference of the biliary tree eroded. Further, types III and IV indicate erosion of up to two-thirds, and erosion resulting in complete destruction of the duct, respectively. Csendes type V is indicative of a cholecystoenteric fistula in addition to any of the previously stated types. This is further divided into patients without a gallstone ileus, known as type Va, and those with an associated gallstone ileus, known as type Vb (an illustrative encoding of this scheme is given after the main text). While ultrasound is widely used and is recommended as the best screening method, no significant difference has been demonstrated in the sensitivity of detection between groups with cholecystobiliary fistula and those without. Magnetic resonance cholangiopancreatography may be more advantageous, as it can further delineate pericholecystic inflammation as well as help to differentiate Mirizzi syndrome from gallbladder malignancy. ERCP remains the gold standard of diagnosis, though the full details of its presentation in individual patients may not be recognizable until operative intervention. Notable variants of Csendes type V Mirizzi syndrome have been described, including Mirizzi combined with Bouveret syndrome. Though cases have been reported, the occurrence of Mirizzi syndrome with a concurrent cholecystoenteric fistula remains a rare presentation of this disease. The occurrence of cholecystoenteric fistula in patients with Mirizzi syndrome has been shown to correlate with the severity of Mirizzi syndrome as described by progression from Csendes type I to IV. Further, the occurrence of Mirizzi syndrome with a concurrent cholecystocolonic fistula is rarer yet. Because the surgical approach varies based on severity, it is important to stage the procedure appropriately. Surgical management of Mirizzi syndrome varies by Csendes type. Cholecystectomy and partial cholecystectomy have been shown to be effective for Csendes type I. The approach to types II and III involves partial cholecystectomy without removal of tissue at the margin of the fistula. Management of severe, Csendes type IV disease may warrant a Roux-en-Y hepaticojejunostomy. Open operation is preferred with severe inflammation within Calot's triangle in the setting of distorted anatomy and is the
current standard of management for Mirizzi syndrome. Despite the staged, multicenter care this patient received, it is the position of the authors that a case like this is best managed in a single operation to minimize the risk of seeding potential malignancy. Concern for gallbladder malignancy would best be managed in this scenario by frozen section, as this has been demonstrated to be an effective method for identifying lesions that would necessitate conversion to radical surgery. Our patient had biopsies negative for malignancy. Due to the large defect of the common bile duct, reconstruction of the duct was performed using remnant walls of the gallbladder neck with interrupted stitches, leaving the stent in place. To our knowledge, this is one of two cases within the literature describing a patient with Mirizzi syndrome concurrent with cholecystocolonic fistula. This case not only highlights the rare possibilities of anatomical presentation in advanced gallstone disease but also delineates our surgical management. Despite the multicenter management of the patient described, a single surgical endeavor would be better suited for a patient with such advanced gallbladder disease with concern for malignancy. We aim to present this case as another example of a rare presentation of Mirizzi syndrome that, due to its rarity, heretofore lacks well-documented standards of management. No source of funding to be declared. No institutional review board approval is required for publication of a case report at our institution. Written informed consent was obtained from the patient for publication of this case report and its accompanying images. Esparza Monzavi, CA – study concept/design, data interpretation, writing the paper. Peters, XD – data interpretation, writing the paper. Spaggiari, M – study concept/design, data interpretation. Not commissioned, externally peer-reviewed.
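The Csendes scheme summarized above maps onto a small decision helper. The function below is a purely illustrative encoding of the classification as described in this report, not a clinical tool; the input flags and the erosion-fraction thresholds are simplifications chosen for illustration.

```python
# Illustrative encoding of the Csendes classification of Mirizzi syndrome,
# following the description above. Simplified for illustration; not for clinical use.
def csendes_type(eroded_fraction: float, cholecystoenteric_fistula: bool,
                 gallstone_ileus: bool = False) -> str:
    """eroded_fraction: fraction of the bile duct circumference destroyed
    (0 = external compression only, 1 = complete destruction)."""
    if eroded_fraction == 0:
        biliary = "I"          # external compression by an impacted stone
    elif eroded_fraction < 1 / 3:
        biliary = "II"         # cholecystobiliary fistula, < 1/3 of circumference
    elif eroded_fraction < 2 / 3:
        biliary = "III"        # erosion of up to two-thirds of circumference
    else:
        biliary = "IV"         # complete destruction of the duct wall
    if cholecystoenteric_fistula:
        subtype = "Vb" if gallstone_ileus else "Va"
        return f"{subtype} (biliary component: type {biliary})"
    return biliary

# The present case: extensive duct erosion plus a cholecystocolonic fistula,
# without gallstone ileus.
print(csendes_type(1.0, cholecystoenteric_fistula=True))  # -> Va (biliary component: type IV)
```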
Introduction: Mirizzi syndrome is a rare complication of gallstone disease that more rarely is associated with the formation of cholecystoenteric fistula. Presentation of case: The patient presented with a five-day history of abdominal pain in the right upper quadrant (RUQ), nausea, and emesis. Further ultrasound (US) imaging demonstrated a large gallstone with associated thickened gallbladder with pericholecystic fluid. Computed tomography (CT) imaging, preoperative Hepatobiliary Scintigraphy and Endoscopic Retrograde Cholangiopancreatography (ERCP) displayed findings consistent with a Csendes type IV Mirizzi syndrome associated with cholecystocolonic fistula. Description of surgical approach, management and outcome is presented. Discussion: Surgical management of Mirizzi syndrome varies by classification of its severity. Open operation is preferred in cases with severe inflammation and concern for malignancy. The patient underwent a cholecystocolonic fistula takedown. A cholecystectomy was attempted though aborted due to concerns of malignancy. Biopsies returned negative for malignancy and the patient demonstrated findings on ERCP consistent with Mirizzi syndrome. Stenting of the common bile duct (CBD) was performed with ERCP and later the patient underwent an open biliary exploration with subsequent choledochotomy, biliary stone removal, and primary closure with interrupted sutures using remnant gallbladder wall flaps. Conclusion: To our knowledge, Mirizzi syndrome with concurrent cholecystocolonic fistula is exceedingly rare with a paucity of reports within the literature. Our report discusses principles of management of Mirizzi syndrome as well as best practices of surgical management for Mirizzi syndrome with concurrent cholecystocolonic fistula.
775
SpECTRUM: Smart ECosystem for sTRoke patient's Upper limbs Monitoring
Worldwide, stroke affects 15 million people each year according to the World Health Organization and is the second leading cause of death and the third leading cause of disability across the world.Survivors often encounter motor or cognitive disorders requiring rehabilitation.Some impairments are usual such as spasticity, muscle weakness or visual problems; but others are uncommon, such as hemibody tremors and can manifest themselves months or years after the stroke.Rehabilitation is a long process that involves medical staff and costly infrastructures for long periods.Indeed, patients have to go to the hospital, often daily, at the early stages of post-stroke recovery, to perform rehabilitation exercises.The patient׳s recovery is assessed by a therapist either from visual observations during rehabilitation sessions or more formally during checkup sessions using standard protocols and specific tools in order to collect quantifiable and objective information on the patient׳s motor functions.In addition, demographic trends indicate that world population is aging and the number of stroke patients over 75 years old will increase from 55% in 2005 to 75% in 2050.Proportionally, fewer and fewer therapists will be available; and treating all the patients with the same efficiency will be impossible.Collecting objective information on the patient׳s motor functions may enhance the patient׳s recovery thanks to a personalized treatment.In fact, several research groups investigated the use of technological devices to monitor stroke patients during the rehabilitation sessions.These devices focus on motor functions monitoring by using serious games and wearables.Serious games provide playful environments involving motor functions, but force the patient to be in front of a screen.Serious games provide new data on the patients׳ physical state, but do not constitute a relevant strategy to assess the patient׳s recovery evolution with quantifiable and objective information, as the required tasks do not settle in the reality by manipulating physical objects.On the other hand, wearables are often used for everyday life monitoring.However, wearing sensors is constraining for stroke patients who already have difficulties getting dressed for example.Wearables are more adapted for monitoring during rehabilitation sessions where a therapist can help the patient to place the sensors.New platforms based on self-contained objects embedding sensors and displays allow for conceiving new approaches for stroke monitoring and rehabilitation by collecting objective data during rehabilitation sessions.These objects communicate with each other, are connected to the Internet and favor connectivity and interoperability.This paper presents an ecosystem of smart objects, called SpECTRUM, inspired by the current rehabilitation tools observed during rehabilitation sessions and the Action Research Arm Test protocol.SpECTRUM is composed of three objects: a jack to monitor fingers grasping and the input of each finger during grasping, a cube to monitor the global hand motor functions during manipulation exercises and a wrist band to monitor the arm motor functions during the ring tree exercise consisting in moving rings along horizontal shafts.The jack, the cube and the wrist band provide relevant and measurable information on the motor functions of the hand and the arm of the patient.These objects are also able to monitor the appearance and evolution of the patient׳s tremors during exercises.Moreover, we developed an Android 
application that ensures data management including real-time data visualization and access to data history.With this monitoring platform, the therapists will have better knowledge of the patient׳s motor functions level and could assess degradation or an improvement of the patient׳s state thanks to the data history.It is then possible to either adapt the rehabilitation exercises according to the patient׳s weaknesses or propose a readmission to the hospital if the patient׳s independence is decreasing.The paper is organized as follows: existing platforms for hand and arm motor monitoring and rehabilitation are presented in Section 2.Then, the results of an observational study that we conducted in order to identify the current tools that are used by health care professionals during rehabilitation session are presented in Section 3.Section 4 introduces the concept of the SpECTRUM ecosystem that follows the guidelines brought out from the observational study and the Action Research Arm Test protocol.Section 5 is devoted to the hardware development presentation of the smart objects.Afterwards, a preliminary study was carried out with health care professionals in order to collect feedback on possible improvements on the objects׳ functionalities.Then, we present the design choices based on the results of the previous study of a mobile application to record and to perform data visualization collected by the objects in Section 7.A second preliminary study has been carried out with health care professionals to collect feedback on possible improvements on the visualization interfaces.Section 8 presents the results of tests performed with three stroke patients in order to assess the technical reliability of the SpECTRUM ecosystem.Finally, we conducted a preliminary study involving nine stroke patients who are asked to assess the usability and the acceptability of the SpECTRUM ecosystem.Many researches already investigated the upper limbs motor assessment by monitoring rehabilitation exercises.First, this section presents the scales and tools for motor assessment based on visual estimations and then focuses on the new technological approaches for monitoring rehabilitation exercises.Many scales have been developed for assessing upper limb motor recovery of stroke patients.Back in the 1960 s, the Swedish occupational therapist Signe Brunnstrom developed an approach, which is now qualified as the “Brunnstrom Approach”, based on a series of longitudinal observations that allows assessment of the motor recovery of the patient.The Brunnstrom Recovery Stages are divided into two stages: Arm assessment including seven stages evaluating basic and complex arm controls and hand assessment including six stages evaluating the recovery of grasping, lateral prehension or palmar prehension.This approach shows strong positive correlations between recovery at admission and discharge.Based on the BRS approach, the Fugle-Meyer Assessment has been created and is the first stroke-specific assessment tool following the natural recovery process of a post-stroke hemiparetic patient.The BRS and FMA tools are useful, but follow subjective observations made by the therapists.Other tools exist for assessing the motor recovery of stroke patients, such as the Wolf Motor Function Test, which focuses on constraint-induced movement therapy, or the Motricity Index, which focuses on the UE and LE recovery assessment.Finally, the Action Research Arm Test assesses the UE motor functions with four categories: grasp, grip, pinch and gross 
movements.ARAT includes 19 items scored between 0 and 3 and grip and gross movements are more used than grasp and pinch.Moreover, ARAT is easily reproducible and requires less time to administer than the BRS or FMA.Although many scales are used to assess the arm and hand functions visually, no quantifiable information is collected offering the possibility to better estimate the patient׳s motor functions and perform a more individualized treatment.With the recent development of internet of things, different approaches emerged to perform rehabilitation and monitoring of stroke patients׳.This section first presents various rehabilitation platforms and then focuses on activity monitoring platforms.Rehabilitation platforms aim to improve the motor functions of patients with different exercises.In most cases, these exercises are presented in the form of fun, original and interactive games in order to maintain patient engagement during the rehabilitation phase despite a intensive and repetitive training.These exercises are called “serious games” and are designed for a main purpose other than pure entertainment.These platforms are therefore presented in various technological forms: virtual reality, augmented reality or interactive screens.First of all, Virtual Reality allows patients to be immersed in a three-dimensional virtual environment via a screen or VR headset.Patients can interact with this virtual environment via sensors to perform various exercises and rehabilitation.Burke et al. used virtual reality by screen to simulate a vibraphone on a computer.The patient can then interact with the vibraphone and play music using two Wii remote controls.This game seems interesting for the rehabilitation of the wrist and arm with different melodies to accentuate the movements of the affected arm.Others preferred the use of a data glove, i.e. a glove with sensors to track hand movements, to provide rehabilitation exercises.Jack et al. and Boian et al. developed a hand rehabilitation application using a data glove and a force feedback glove to interact with a virtual world.Four rehabilitation exercises are available, each designed to exercise a specific parameter of hand movement: range, speed, fractionation or force.VR can be combined with biofeedback and mirror-neurons to enhance the patient׳s engagement and to develop new skills.Then, Augmented reality consists of superposing elements to real-time reality.Burke et al. proposed two augmented reality games that use a webcam to capture reality.To this reality comes to be superimposed a game interface that evolves according to patients׳ actions.The first game is to knock out a rabbit coming out of its hole while the second game consists for the patient to follow two arrows, one pointing to the left and the other to the right, and touch the left arrow with the left hand and the right arrow with the right hand.These games allow to work on motor coordination of the upper limbs as well as the amplitude of the movements.Vogiatzaki et al. 
also developed AR games that use a Kinect as well as video projectors and real physical objects in order to avoid disconnection between the patient and the reality.The first game consists in placing a physical cube on a target projected on the table while the second exercise consists of throwing with a real paper ball at a virtual target projected on a wall.Theses exercises involve the arm and hand motor functions and allow to work on motor coordination, prehension and dexterity.Finally, interactive screens differ from touch screens in their ability to react to surrounding objects.Patients can then interact with these screens using everyday objects.Jacobs et al. developed an exercise that involves the physical manipulation of everyday objects.These objects can be chosen according to the needs and motor abilities of the patient.The purpose of the exercise is to move any object placed on a screen by avoiding bar-shaped obstacles that move from left to right on the screen.Obstacles can be avoided by moving the object on the screen or by lifting it to pass over an obstacle.This exercise helps to work on the motor coordination of the hand and arm.Although these platforms offer interesting solutions for stroke recovery monitoring, the patients are disconnected from reality by being immersed in the virtual world or being in front of a screen.Unlike rehabilitation platforms, monitoring platforms are intended to monitor the motor activity of patients with no direct objective of improvement.These platforms are essentially based on portable devices, often called wearables, which incorporate sensors and are worn on the human body to track movement and position information of limbs upper or lower.Patients׳ motor activity can be monitored with the help of wearables integrating electronic or textile sensors.On the one hand, the use of portable devices integrating electronic sensors enables the design of small, low-cost platforms to improve monitoring of patient activities.Many tools based on accelerometers have been developed for monitoring stroke patients׳ arm activity during rehabilitation.Patel et al. placed bi-axial accelerometers on the arms, forearm and hand as well as uni-axial accelerometers on the thumb, index and chest.They then assessed the quality of performance of different tasks such as reaching an object near and far, moving your forearm from the knees to the table and compared it to the score that could be obtained with the Functional Ability Scale during the assessment by a health professional.The results show that acceleration data collected during the performance of different motor tasks can be used to characterize patient movement and movement quality.Furthermore, some research also studied the monitoring of upper limb movements in space using a kinematic model and multi-sensor fusion.Using an inertial unit, Zhou et al. developed a kinematic model of the arm and forearm and compared the results obtained with a motion capture system.The results show that this kinematic model provides sufficient performance to estimate upper limb movement for post-stroke rehabilitation using a weighted least squares filtering algorithm.Moreover, Yan et al. developed a wearable wireless system based on different sensors to monitor the patient׳s health conditions and detect anomalies.Finally, Rigas et al. 
proposed a wearable solution based on accelerometers to detect tremors of Parkinson׳s Disease.As tremors can appear after a stroke, it should be interesting to use the same method to detect the post-stroke patients tremors.Indeed, some researchers showed that assessing the evolution of the patient׳s tremors by monitoring the tremors frequency and magnitude may lead to the detection of a relapse of the motor functions.In addition, post-stroke patients often have tremors with frequency under 5 Hz and perpendicular to the direction of the movement.On the other hand, new textile fibers enriched with metal particles and conducting electricity allowed to design textile sensors, called Smart Textiles.These sensors are commonly based on the piezo-resistivity principle and offer an interesting alternative to electronic sensors.They are flexible, inexpensive and can detect stretching or pressure by measuring the variation in electrical resistance.For example, Taccini et al. developed a garment incorporating textile sensors knitted with wool with piezoresistive properties to capture patient movements.These sensors were placed on the elbow, shoulder and buttocks to measure bending angles.Other research used piezoresistive materials to design pressure sensors.Xu, Huang, Amini, He, and Sarrafzadeh developed a pressure sensor that can detect when a person is sitting.The structure of this sensor consists of the superposition of rows of horizontal and vertical conductive wool yarns separated by a piezoresistive material.However, these wearables used for post-stroke monitoring are not based on objects used during rehabilitation sessions and require the therapists to change its work habits.The state-of-the-art demonstrates that the current assessment of patient׳s recovery is performed with visual observation made by the therapist during rehabilitation sessions or more formally during checkup sessions with standard protocols.Among them, ARAT seems the best compromise between administration time and reproducibility and contains objects that can easily embed sensors.New approaches such as serious games or wearables provide quantifiable information on the motor functions of stroke patients, but can have a heavy impact on the cognitive load or modify the therapist׳s practices.Embedding sensors inside objects used during rehabilitation sessions or used in a standard protocol seems to serve as an interesting alternative to provide new qualitative information on the patients׳ motor functions without modifying therapists׳ practices.Moreover, less adaptation time or appropriation of the system is, thus, required and the patients can entirely focus on the rehabilitation exercises.Our approach consists of an ecosystem of self-contained objects designed to monitor hand and arm motor functions of stroke patients.This approach aims to complete the visual observations made by the therapists in order to individualize the patients׳ treatment by monitoring the patient׳s motor functions during rehabilitation sessions.This study aims to observe occupational therapists during rehabilitation sessions in order to identify the objects and tasks that will be used to design a smart ecosystem of instrumented objects.The ecosystem aims to provide objective and quantifiable information on the parameters subjectively assessed by the therapists for now.These objects will be based on the current rehabilitation tools as well as the ARAT protocol identified previously in order to avoid disturbing patients׳ and therapists׳ habits or 
practices.We followed health care professionals working in functional and medical rehabilitation centers during rehabilitation session.We observed the tasks performed by the patients as well as the tools used during the exercises.Finally, at the end of the day, therapists showed us the other tools for post-stroke rehabilitation and monitoring that were not used during the day by the patients.The observations reveal two types of rehabilitation objects used by occupational therapists.The first set of objects is used for rehabilitation exercises, while the other set is used for the assessment of physical recovery.Indeed, rehabilitation and assessment tools are different in order to avoid patients learning by heart the tasks required for assessment during rehabilitation sessions.Rehabilitation tools are varied in size, weight and shape.Some tools are standardized and manufactured, while others are made from scratch by therapists or based on common objects; and the tools differ from one rehabilitation center to another.The standardized tools include a rings tree for arm and hand rehabilitation, two types of hollow cones for arm rehabilitation and a solitaire game for finger prehension rehabilitation.The rings tree consists of horizontal metal shafts at different heights on which the patient threads plastic rings.If the therapist sets up three levels with 10 rings, the patient grasps the rings one by one and move them from level one to level two, then from level two to level three, then from level three to level two and finally comes back down to level one.This exercise involves arm and hand functions and requires coordination and muscle strength.The hollow cones measure 18 cm in height and have two different diameters.The biggest one measures 6.5 cm in diameter at the base and 4 cm in diameter at the top, while the smallest one measures 4.5 cm in diameter at the base and 1.5 cm in diameters at the top.The patient piles cones requiring arm strength and coordination.Finally, finger prehension exercises are made with a solitaire game where the patient has to grasp a little cylinder and put them into holes.The tools made from scratch by the therapists are used for arm and hand rehabilitation and can be very different according to the rehabilitation center.A custom manipulation exercise discovered in the rehabilitation center of Le Havre is composed of a platter with holes of different sizes and shapes placed on a table in front of the patient.The shapes include circles, squares, triangles and rectangles.The patient has to grasp a wood object and slot it into the corresponding shape.This exercise involves motor and cognitive functions required for daily independence.On the other hand, some exercises are based on common objects.For example, the rehabilitation centers in Evry and Le Havre used tennis balls or screwdrivers to perform rehabilitation exercises.The tennis balls are used for throwing exercises: The patient throws the ball on the wall and solicits completely the arm and hand.The screwdrivers are used to perform finger and wrist exercises in order to working on pronosupitation of the wrist required to screw.Finally, the rehabilitation center at Lille used a jack for finger prehension exercises.The exercise consists in grasping the jack on the recesses in order to work on fingers placement precision and releasing of the jack Fig. 
4.In order to assess the patient׳s arm and hand abilities, four check-ups are used by occupational therapists: a strength check-up, a grasping check-up, a sensitivity check-up and a transfer check-up Fig. 5.– The strength check-up aims to assess the strength of the patient while grasping an object.Strength pears are used to assess the bi-digital pliers, the tri-digital pliers and the whole hand strength.When the patient faces spasticity problem and pears are too soft for finger strength measurement, a Collin or Jamar dynamometer is used, as it is very hard to compress.However, if the patient׳s strength is too weak to be measured by the pears, a plastic tumbler serves as a tool for strength assessment.Indeed, therapists can easily evaluate the pressure exerted on the tumbler with the noise and the crushing.– The grasping check-up aims to evaluate the approach, the holding and the releasing of an object.This check-up is based on visual estimations and subjective information collected by the therapist during the test such as the way the patient grasps, holds or manipulates the object.– The sensitivity check-up is devoted to assessing the finger sensitivity of the patient.The textures are placed in a closed box preventing the patient from visually recognizing the texture.– The transfer check-up is based on the “Box and Blocks” test requiring to move little wood cubes from one box to another.The patient has to move the cubes one by one.If the patient grabs two cubes, only one is considered as moved.If a cube falls, it is not considered in the final score.The final score corresponds to the number of cubes moved from one box to another during a minute, substracting the fallen cubes or the cubes moved with another one.After a stroke, finger extension is the motor function most likely to be impaired, while grasping and releasing an object is essential to functional movement of the hand.Monitoring the dexterity of stroke patients with quantifiable information, such as the finger placement on an object during pinching, the pressure applied by each jaw of the digital digital pliers on the object while grasping or the global pressure of the pliers is essential to assess the ability of the patient in terms of movements dexterity of the fingers and hand motor functions.It can also help to detect spasticity of the fingers if the pressure applied by each jaw of the pliers is significantly different.The observational study and the items scored in the ARAT protocol also show that monitoring the movements of the hand and the arm of stroke patients brings useful information on the recovery of motor functions.Indeed, involuntary abnormal movements can appear after a stroke such a chorea i.e. 
an sudden, brief and non-repetitive arrhythmic involuntary movement.As is shown in the state-of-the-art tools and the results of the observational study, patients׳ monitoring is only based on subjective information collected by therapists during the use of rehabilitation tools.Only checkups allows the collection of accurate measures on the patient׳s motor functions with assessment tools.Collecting intermediate data by embedding sensors into objects designed for assessing the evolution of the patient׳s motor recovery over rehabilitation sessions seems a good compromise between totally visual assessment and extremely accurate measurements.Moreover, it can also allow for collection of information currently not assessed at all during rehabilitation sessions or checkups such as tremors.The SpECTRUM ecosystem is inspired by the results of the observational study performed in three rehabilitation centers as well as the ARAT protocol.Three objects provide reliable and quantifiable information on the patient׳s hand and arm motor functions during rehabilitation sessions which, until then, were only assessed by visual estimations and subjective measures.First, we proposed a jack to monitor finger dexterity.Indeed, the rehabilitation center in Lille currently uses a jack for precision grasping and dexterity exercises.Moreover, grasping a jack with the fingers is similar to the ball bearing exercise of the ARAT pinch section.The jack is able to monitor the placement of the fingers and the pressure applied by each jaw of the bi-digital or tri-digital pliers of the patient during grasping.Indeed, the use of compensatory strategies such as the help of the second hand to support the object or the use of other fingers during grasping is sometimes observed.Knowing the pressure applied by each jaw of the pliers thus makes it possible to check if the input task is performed correctly.In addition, the jack is able to monitor its orientation and the tremors of the patient during manipulation.Second, we proposed a cube to monitor the evolution of the hand prehension.The cube collects information about the pressure applied by the patient during the grasping, its orientation and the tremors of the patient during manipulation.Although the cube presents similar functionalities as the jack, the redundancy of information provides additional data during grasping, while the configuration of the hand is different and the pressure or the hand movements can be very different.In addition, the cube allows for following the evolution of the global pressure of the bi-digital or tri-digital pliers on the object during the manipulation - unlike the jack which is intended for a more precise follow-up of the dexterity.Third, we decided to use a smart wrist band to monitor the arm activity of stroke patients during the rings tree exercise revealed in the observational study.Indeed, collecting data from the rings by adding sensors on them is not conceivable because of their size and shape.The smart wrist band is able to monitor the movements of the patient׳s arm as well as the patient׳s tremors during the exercise.Finally, we designed a visualization interface in order to present the patient׳s motor data to the therapists in an easy and fast, understandable way.The visualization tool allows one to easily collect, record and visualize data in real-time on a tablet and visualize previous records.The therapists can thus have access to a review of the patient׳s state at the end of the sessions and assess the evolution of the motor 
recovery by comparing previous records.

The jack is a rounded parallelepiped with cylindrical recesses where the patient mainly has to place his or her fingers during grasping. The jack is based on the Raspberry Pi Zero Wireless platform, which provides the CPU and Wi-Fi communication. In order to retrieve information on the fingers' positions and the pressure applied on the jack during grasping, we added "Force-Sensing Linear Potentiometers" (FSLPs) from Pololu, which measure the magnitude of the force applied on the sensor as well as the force location. The FSLPs are located on the middle of the jack's sides and follow its shape. The movements of the jack, as well as tremors, are monitored with an Inertial Measurement Unit (IMU) from InvenSense. This IMU embeds a tri-axis accelerometer, gyroscope and compass, and its price/performance ratio is very good. The jack is powered by a 3700 mAh battery and embeds a micro-USB connector for charging. The data is transmitted in real time to an Android application via Wi-Fi (a minimal sketch of such an acquisition-and-streaming loop is given after the main text), and a power switch allows users to turn the device on and off.

The cube's dimensions are 5 cm × 5 cm × 5 cm. Although one of the custom tools from the observational study includes cubes of different sizes, we designed the SpECTRUM cube so that it is easily graspable by the patients. Moreover, a cube with 5 cm sides is part of the ARAT protocol, and this size perfectly fits the size of the pressure sensors. The cube is based on the RFDuino platform, a micro-controller integrating Bluetooth Low Energy (BLE) for communication. "Force-Sensitive Resistors" (FSRs) have been placed on each side of the cube in order to monitor the physical pressure applied on the cube during grasping. As for the jack, the cube is equipped with an IMU from InvenSense in order to monitor the cube's movements and tremors. The cube is powered by a 340 mAh battery and has a micro-USB connector for charging. The data is transmitted in real time to an Android application via BLE, since Wi-Fi is not available on the RFDuino platform. The BLE transfer rate is lower than Wi-Fi, but it ensures data reliability and consumes less energy, since the battery capacity is 10 times smaller. A power switch allows users to turn the device on and off.

We decided to use an off-the-shelf smart watch from Motorola, the Moto 360. Indeed, all the sensors are already integrated into the wrist band and data collection can easily be performed using the Android API. The Moto 360 embeds a tri-axis accelerometer used to monitor the arm movements of the patient during the rings tree exercise.

The aim of this study is to explore the functionalities and the design of the objects with health care professionals and to collect feedback on possible improvements. Semi-structured interviews were conducted and recorded for further analysis. The interviews took place in a quiet office separated from the patients. Participants were informed about the nature and the aims of the study. The interview started with a general presentation of the SpECTRUM ecosystem, followed by a detailed presentation of each object including its functionalities, the associated sensors and the type of collected data. The participants were able to manipulate the objects, and we then collected feedback and recommendations on possible improvements to the design of the objects. The questions asked during the interviews are detailed in Appendix A. The average duration of each interview was approximately 17 min. A thematic content analysis of the interviews was performed. The results are presented in two categories: the functionalities and
the usability of the objects.Monitoring the pressure applied on the object during grasping has been judged useful by five participants for the jack and by all the participants for the cube.Indeed, grasping objects with a normal grasping strength is essential for everyday life.An OT and the PT mentioned that monitoring the pressure over time could allow therapists to assess its variations according to different ways of grasping - which is impossible with a dynamometer which displays only one value at a time.In addition, three OT mentioned that monitoring the pressure could enhance the detection of spasticity and could also allow detection of a voluntary motor command disruption, according to one OT.However, one OT mentioned the information collected about the pressure should be paired with strength tests with pears and manometers, as this information cannot be used as a checkup with measures.It can only be used as a monitoring tool.All the participants judged useful to monitor the position of the fingers on the jack.Indeed, the fingers׳ positions could be an indication of the motor abilities of the patient.Five participants proposed to create jacks with different sizes in order to assess the position of the fingers, as well as the pressure on the jack according to the spacing of the fingers.Three participants assured that monitoring irregular movements with each object would be interesting.However, three participants mentioned that irregular movements monitoring is useless because it can be easily assessed by visual observations and only a few patients faced this kind of disorder.These results being ambiguous, we decided to remove this functionality for the moment as many new data is already available to enhance the therapists׳ diagnosis and the irregular movement detection is not significant.Monitoring the orientation has not been judged useful by five participants for the jack and by three participants for the cube.Indeed, as they are symmetrical, their orientation does not bring information about the nature of the grasping.However, three OT suggested that knowing the orientation for a non-cube shape jack could be interesting to assess the nature of the grasping by coupling the orientation with the pressure and/or position information.Finally, only one OT intern mentioned that monitoring the orientation of the jack or the cube would be useful.Indeed, even if the therapists can assess the orientation visually, a precise quantification of the orientation may bring more relevant information on the pronosupitation of the patient.We followed the majority and decided to remove the orientation monitoring.All the participants mentioned that monitoring the tremors with each object is very useful in order to follow the evolution of the tremors over the rehabilitation sessions and justify the benefits of the rehabilitation.They all agreed that tremor frequency and magnitude would bring complementary information on the patient׳s physical state.All the participants agreed that judging the recognition of different movements with the watch would be useful.Moreover, two OT and the PT interns proposed to use the watch during exercises dedicated to shoulder rehabilitation such a throwing or recognizing tasks such as walking or drawing shapes in the 3D space.Finally, the watch could be useful at home according to all the participants to monitor the patient׳s activities and follow the patient׳s evolution.Three participants mentioned that the pressure and position sensors are not placed optimally on the 
jack.Indeed, as the jack is quite thick, the sensors should have been placed at the top of the jack׳s lateral face in order to help the patients to grasp the jack on the sensors.However, all the participants mentioned that the placement of the pressure sensors on the cube is optimal as they cover its entire surface.Moreover, all the participants mentioned that the watch would be interesting to collect data outside the rehabilitation center.According to the six health care professionals involved in this study, only the jack requires a design improvement.The pressure and position sensors must be moved to the top of the lateral face in order to help the patient grasping the jack on the sensors.Feedback from the health care professionals also indicates that some information currently collected by the objects is not relevant for the assessment of patient recovery.Monitoring the orientation of the jack and the cube is not necessary since therapists can visually evaluate this information and do not need a precise assessment.After we validated the functionalities of the objects with health care professionals, it is necessary to display the collected data in an easy and efficient way in order to make the monitoring and the diagnosis easier during rehabilitation sessions.The data has to be rapidly understandable in a nonambiguous way for the therapists.We decided to develop a visualization interface for each functionality.The orientation monitoring and the detection of irregular movements not being judged useful by health care professionals, they were not included in the visualization interfaces.The position of the fingers on the jack is represented by displaying the distance of the fingers from each cylindrical recesses.We used a bar graph representation as the placement of the fingers is not supposed to evolve during handling.The use of a bar graph is therefore more suited to the representation of a fixed value over a given period.We proposed two representations of the pressure applied on an object.First, we proposed a bar graph for the jack in order to display the value of each pressure sensor.The purpose of the jack being to allow a quick comparison of the pressure of each jaws of the bi-digital or tri-digital pliers, it is not necessary to visualize the history of the pressures over the time with a line graph.The interpretation of pressure data using a line graph would be blurred by the overload of the visual channel.Unlike the jack which is intended for a detailed monitoring of the dexterity of the fingers, the cube is mainly interested in the monitoring of the grasping of the hand over time.A line graph representation allows thus a better assessment of the evolution of the pressure over time.As the jack already focuses on comparing the pressure of each jaw of the pliers, we decided to only display the average pressure applied by the patient on two opposite sides of the cube.This design choice allows to maintain a necessary degree of understanding in order to evaluate the grip of the hand and avoids overloading the interface with unnecessary information.The movements of the patient׳s arm are displayed using the accelerometer data.We used a line graph to allow occupational therapists a better assessment of the patient׳s movements over time but also to easily detect irregular or abnormal movements.Indeed, a sudden movement, for example, will result in a peak on the curves displayed.It will then be easy for the occupational therapist to compare the amplitude of these peaks to determine if they 
are significant.The frequencies and the magnitudes of the patient׳s tremors are displayed to the therapists after the manipulation of an object.The magnitudes of the tremors being immutable values for each recording, we used a bar graph representation.The translational tremors are displayed on the left and the rotational tremors on the right.Each bar represents the magnitude of the tremor on the corresponding axis.By clicking on this bar, the corresponding shaking frequency appears at the bottom of the screen.This study aims to collect feedback on possible improvements for the visualization interfaces.All the health care professionals who participated in the preliminary study on the functionalities of objects took part in this second preliminary study.Semi-structured interviews were conducted with the participants in the same environment as the previous study.The interview started with a reminder of the SpECTRUM ecosystem objectives and functionalities and continued with a detailed presentation of the visualization interfaces.The participants were able to manipulate the objects and the visualization interfaces in order to discover the entire ecosystem.Then, we collected feedback, recommendations and improvements on the interfaces.The questions asked during the interviews are detailed in Appendix B.The average duration of each interview was approximately 14 min.A thematic content analysis has been performed based on the videos recorded during the interviews.Results are presented for each functionality.All participants mentioned that the use of a bar graph to display the distance of the fingers from the recesses is relevant.However, although three OT suggested that using a line graph for visualizing a record as a way to assess the evolution of the fingers׳ positions over time could bring complementary information on the grasping, two OT interns did not judge this representation useful.The two representations of the pressure applied on an object were judged relevant by the majority of the participants.All of them agreed on the use of a bar graph for the jack and five OT agreed to use a line graph for the cube.However, the therapists would like to be able to choose the line graph or the bar graph representation as each one has its own advantages.Indeed, the bar graph representation allows an easier and faster understanding of the data.On the other hand, the line graph representation is better for assessing the evolution of the pressure over the time during the exercise and detect variations.Moreover, offering games involving pressure have been suggested by all participants.The jack׳s interface could propose a functionality where the therapist can set an horizontal line as a pressure goal where the patient has to approach this value or stay in a close range of this value.Then, the cube׳s interface could propose to the therapists to display pressure curves that patients have to reproduce in order to work on the grasping strength control and maintaining the patient commitment in rehabilitation.All participants judged the line graph representation relevant for the movements of the watch.Indeed, movements are space displacements over the time and a line graph allows to assess variations and irregular movements.Finally, the bar graph representation used to display the tremor has been judged relevant by all participants.They mentioned that using a line graph would be irrelevant as translational and rotational tremors frequencies and magnitudes are computed at the end of the record on the whole 
collected data.

Different improvements have been proposed by the health care professionals. First of all, the participants proposed to let the occupational therapists using the interfaces choose the type of representation they want in order to visualize the pressure applied on the jack and the cube. In addition, the occupational therapists asked for the possibility of proposing the games mentioned previously to the patient in order to work on the motor control of the fingers.

Before planning a large study involving a large number of patients, it is necessary to ensure the reliability of the data collected by the objects during their use. The aim of this study is to collect data during the objects' usage in order to detect possible dysfunctions. The study has been approved by a National Ethical Committee named "Comité de Protection des Personnes". Three patients were involved in this study, including one female and two males. The female participant was 73 years old and had had an ischemic stroke one year earlier. She was right-handed and had a left hemiparesis. The two right-handed male participants were 59 and 75 years old and had had an ischemic stroke 17 months earlier and a hemorrhagic stroke 15 months earlier, respectively. The 59-year-old patient had dysarthria and ataxia, while the 75-year-old patient had a light motor command disorder.

The experiment took place in a rehabilitation center in Le Havre, France. The patient was welcomed in a quiet room separated from the other patients. We started by presenting the ecosystem, including the features of each object as well as the associated sensors. We also presented the visualization interface to the patient. Then, the patient was informed about the nature and the aims of the study. The patient was required to sign a consent form and provide personal information, such as the date of the stroke or the type of stroke. Afterwards, we explained the experiment protocol. The protocol is divided into tasks to perform with each object. In the following sections, the thumb, the index, the middle finger, the ring finger and the little finger are respectively noted I, II, III, IV and V. The sides of the jack are noted G, P, R and Pi, corresponding to the colored dot on each side. The pairs of opposite sides of the cube are noted R, Y and B, corresponding to the colored dot on each side. The task required with the jack consists of grasping it with two fingers on two opposite sides and grasping it with three fingers, one per side. The task required with the cube consists of grasping it with two fingers on two opposite sides and with three fingers on three sides, and finally grasping the cube with the whole hand. Then, the task required with the watch is to perform the rings tree exercise by moving ten rings along horizontal shafts. The detailed protocol is presented in Appendix C.

Each patient performed the experiment with two of the three objects of the ecosystem in order to minimize the duration of the experiment. The patient who had had a hemorrhagic stroke performed the tasks with the jack and the cube. The patient with dysarthria and ataxia performed the tasks with the cube and the watch, while the third patient, with hemiparesis, performed the tasks with the jack and the watch. The experiment was video and audio recorded in order to compare the real task performed by the patient with the data collected from the sensors. An occupational therapist who had already been involved in the preliminary study was present during the experiment in order to help the patient if necessary.

The results of the pre-tests show that the cube is 100% reliable for a bi-digital grasping and 70% reliable for a tri-digital grasping with one finger per side (a simple grasp-classification rule of this kind is sketched after the main text). The watch is also reliable for the movements of the patient's arm. Tremor detection on the cube and the watch suggests that the patients did not experience tremor during the exercises, which was confirmed by the occupational therapists present during the data collection. The main problem highlighted during these pre-tests is a malfunction of the jack pressure sensors in certain configurations. The jack accuracy only reached 70% during a bi-digital grasping and fell to 56% during a tri-digital grasping with one finger per side. This can be explained by the nature of the pressure sensors, which are resistive. Their shape had been modified in order to conform to the domed shape of the jack, resulting in a minimum detection threshold that is sometimes too high for certain patients presenting a motor deficit of the fingers. However, this constraint is not an obstacle to future experiments, because this problem was noticed mainly during three-finger grasping. Indeed, grasping an object in that way is unnatural, and the occupational therapists mentioned that it would be more interesting for future experimentation to modify the three-finger grasping into a tri-digital grasping with the thumb on one sensor and the index and middle fingers on the opposite sensor. In addition, the occupational therapists suggested using the jack only with patients who have sufficient grip strength, or with spasticity leading to an unusually high grasping force, in order to avoid an immediate modification of the prototype while maintaining reliable data collection. In the next version of the jack prototype, the pressure and position sensors will be placed on the top of the lateral face to facilitate detection in any configuration.

This study aims to assess the usability and the acceptability of the three objects of the SpECTRUM ecosystem by stroke patients during rehabilitation exercises at the rehabilitation center. The study has been approved by a National Ethical Committee named "Comité de Protection des Personnes". The experiment took place in the same rehabilitation center in Le Havre, France where the pre-tests took place. The patient was welcomed in the same quiet room separated from the other patients. Contrary to the pre-tests, each patient performed the experiment with all three objects of the ecosystem. The experiment started with a presentation of the ecosystem, including the features of each object, the associated sensors and the visualization interface. Then, the patient was informed about the nature and the aims of the study. The patient was required to sign a consent form and provide personal information such as the date of the stroke or the type of stroke. Afterwards, we presented the experiment protocol, which is divided into tasks to perform with each object. The task required with the jack consists of grasping it with two fingers on two opposite sides and grasping it with three fingers on two opposite sides. The task required with the cube consists of moving it along a path of targets on an A3 paper sheet by grasping it with two and three fingers, as for the jack, and grasping it with the whole hand. Indeed, previous research showed that positioning and manipulating an object require good coordination of the upper limbs and appear to be suitable tasks for stroke patients. Moreover, these tasks are generally based on an action-perception loop exploiting different sensory channels. Then, the task required with the watch is the same as in the pre-tests.
The required information, as well as the experiment protocol, are presented in Appendix D. At the end of the experiment, we conducted a semi-structured interview based on a predefined interview guide in order to explore the usability and the acceptability of the SpECTRUM ecosystem by stroke patients. The experiment was video and audio recorded in order to perform a further thematic content analysis. The average duration of the interviews was approximately 11 min. A thematic content analysis of the interviews was performed. Based on the topics explored during the interviews, we present the results according to the following categories: the usability of the SpECTRUM ecosystem, and the acceptability of the ecosystem and its applications.

All the patients mentioned that the size and weight of the jack and the cube are appropriate. Four patients, including three ischemic and one hemorrhagic stroke patients with light motor disorders, ataxia, dysmetria or dysarthria, found the jack very easy to use, while three patients, including two ischemic and one hemorrhagic stroke patients, stated that the cube is very easy to use. In addition, all the patients agreed that the watch is very easy to use for the rings tree exercise. Indeed, three patients, including two ischemic and one hemorrhagic stroke patients with hemiparesis, dysarthria and facial paralysis, mentioned that the watch works like a classic watch. In addition, three patients, including two ischemic and one hemorrhagic stroke patients with light motor impairments, mentioned that they had forgotten that they were wearing the watch on their wrist. Moreover, all the participants mentioned that the cube does not slide through their fingers, and one of them, who faced dysarthria and an upper limb motor impairment, found the grip good. On the other hand, two patients, including an ischemic and a hemorrhagic stroke patient, mentioned that the jack does not slide through their hands. All the participants agreed that the texture of the jack and the cube is well adapted for a rehabilitation object. Then, six patients did not find any problem with the placement of the pressure and position sensors on the jack, while the youngest patient, with hemiparesis and facial paralysis, suggested that the sensors should be at the top of the jack's sides in order to facilitate grasping. However, one ischemic stroke patient mentioned that the exercise with the jack requires a lot of concentration, as this exercise involves precise finger movements. Furthermore, the oldest patient, who had had a hemorrhagic stroke, found the jack too heavy at the end of the exercise due to muscle fatigue. It should be noted that this result cannot be considered significant, as muscle fatigue can be increased by the patient's age. On the other hand, only one patient, with hemiparesis of the dominant hand, mentioned that the edges of the cube are too sharp, and one patient who had had an ischemic stroke and faced an upper limb motor impairment had a slightly numb hand at the end of the exercise, which is due to this upper limb motor disorder.

Seven patients, including four ischemic and three hemorrhagic stroke patients, are willing to use the SpECTRUM ecosystem during the rehabilitation sessions. Two of them mentioned that this ecosystem seems useful to assess their evolution over the rehabilitation sessions. The patient who faced facial paralysis and a moderate finger motor disorder found the jack and the cube very useful to work on movement precision. One ischemic stroke patient with dysarthria described the objects as playful and fun. Finally,
among the seven patients, three of them agreed to use these devices if it helps the occupational therapists for the monitoring.Five patients, including three ischemic and two hemorrhagic stroke patients mentioned having no preference among these objects.A patient with dysarthria and motor impairment on the right upper limb mentioned that all the objects are useful, especially the watch which is the easiest device to use.The patient who had an hemorrhagic stroke with facial paralysis and moderate finger motor disorder, which is the youngest one,preferred the jack in order to perform finger precision exercises.Two ischemic stroke patients mentioned that the watch is their preferred device as it can be used as a common watch and wear during the day to enhance the monitoring.One patient who had an ischemic stroke preferred the cube for its design.Most of the patients did not find any other application for the devices of the SpECTRUM ecosystem.Only one patient who had an ischemic stroke proposed an exercise based on its rehabilitation sessions.The exercise is based on the rings tree exercise and consists in moving the rings one by one from the Level 1 to the Level 2 by passing them behind the back and moving the rings from the Level 2 to the Level 3 by passing them behind the head.This exercise solicits the shoulders, the neck and the back and involves proprioception.No participant mentioned concerns about the transmission of collected data as the data will be transferred from the devices to a tablet or a computer through local wireless network and secure connection.They all mentioned that the collected data is not critically personal such as medical information.None of the patients involved in this experiment figured out potential problems with the SpECTRUM ecosystem either in terms of design or in terms of functionalities.As a conclusion to this preliminary study, most of the patients mentioned that this ecosystem is easy to use and can be useful for the therapists in order to assess the evolution of the patient׳s recovery state.The first feedback on the ecosystem shows a very good acceptance by the patients.The SpECTRUM ecosystem could be used during rehabilitation sessions in hospitals in order to collect more data about usability and acceptability on a long period.The paper presents the conception and development of the SpECTRUM ecosystem, including a smart jack, a smart cube and a smart watch intended for monitoring the arm and hand motor activity of stroke patients during rehabilitation sessions.The ecosystem design is based on an observational study performed in 3 rehabilitation centers with the aim of instrumenting objects currently used during rehabilitation sessions.The SpECTRUM ecosystem also provides a visualization interface for the data collected to help occupational therapists evaluate the evolution of motor functions of patients during rehabilitation sessions.The arm and hand motor activities are assessed by the therapist based on quantitative information.Moreover, appearance and evolution of tremors can be assessed by the therapist based on the information provided by each instrumented object of the ecosystem.The rehabilitation program can be adapted by the therapists according to the patient׳s recovery state.After the implementation of the first prototype of the SpECTRUM objects, a preliminary study has been carried out with health care professionals in order to collect feedback on possible improvements on the objects׳ design and functionalities.It evidences that the objects׳ 
design was validated by the health care professionals and that some features need to be removed. Only the sensors' placement on the jack needs to be reviewed in order to facilitate grasping for the patient. Based on these results, a mobile application for recording and visualizing the collected data was developed. A second preliminary study was carried out in order to collect feedback on possible improvements to the visualization interfaces. The results show that the visualization interfaces are relevant but need several improvements. Functionalities such as pressure goals or pressure curve reproduction have to be added in the next update. Afterwards, we performed pre-tests with stroke patients in order to assess the reliability of the collected data. The results show that the cube and the watch data are reliable. The data collected by the jack are not always reliable when patients have weak finger strength. This result implies that the jack should be used by patients with strong finger strength or spasticity. Finally, a preliminary study on the usability and acceptability of the ecosystem was carried out with stroke patients. The results show that the majority of the patients agree to use these devices during rehabilitation sessions at the hospital, especially if this helps the therapists with their monitoring. The design of the objects does not raise fundamental issues for the patients, except for the sensor placement on the jack, which needs to be moved to the top of the lateral face of the jack. Future work will address several issues. New technologies will be investigated to provide more reliable information about grasping pressure and finger positions on the jack, as mentioned in the preliminary study. A new prototype of the jack will be implemented and technically tested with stroke patients. In order to provide relevant data from the watch to the therapists, a machine learning approach will be investigated to quantify the number of rings moved and to detect to which level a ring has been moved. Furthermore, a longer study involving patients and therapists over a year is planned to assess the benefit of the SpECTRUM ecosystem during the rehabilitation process. Finally, information about the way the patients approach the object cannot be retrieved at the moment. In order to overcome this limitation, a smart textile sweater integrating textile bending sensors made of conductive threads has been developed for monitoring the elbow flexion of the patient. This technique could be applied to monitor the shoulder and chest configuration during grasping in order to provide complementary information on the patient's physical state.
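As a purely illustrative sketch of the machine-learning direction mentioned above for the watch data, a simple pipeline could extract features from windowed accelerometer signals and train a standard classifier to predict the level to which a ring was moved. Everything in the Python sketch below (the window length, the features, the labels and the simulated data) is a hypothetical assumption made for illustration and is not the approach adopted by the SpECTRUM authors.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(acc_xyz):
    """Mean, standard deviation and range per axis for one movement window."""
    return np.concatenate([acc_xyz.mean(axis=0),
                           acc_xyz.std(axis=0),
                           acc_xyz.max(axis=0) - acc_xyz.min(axis=0)])

# Hypothetical dataset: 120 labelled 2-second windows; labels 1-3 stand for the
# level of the rings tree to which the ring was moved (simulated data only).
rng = np.random.default_rng(0)
windows = [rng.normal(loc=level, scale=1.0, size=(100, 3))
           for level in (1, 2, 3) for _ in range(40)]
labels = np.repeat([1, 2, 3], 40)
X = np.vstack([window_features(w) for w in windows])

classifier = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(classifier, X, labels, cv=5).mean())

In practice, labelled recordings from therapy sessions would replace the simulated windows, and the cross-validated accuracy would indicate whether the chosen feature set is informative enough for the therapists' needs.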
This paper presents a new ecosystem of smart objects designed to monitor the motor functions of stroke patients during rehabilitation sessions at the hospital. The ecosystem has been designed starting from an observational study as well as the Action Research Arm Test. It includes a jack and a cube for monitoring hand grasping and a smart watch for monitoring arm dynamics. The objects embed various sensors able to monitor the pressure of the fingers, the position of the fingers, their orientation, their movements and the tremors of the patient during the manipulation tasks. The developed objects can connect, via Bluetooth or Wi-Fi technology, to an Android mobile application in order to send the collected data during the execution of the manipulation task. The performance achieved during the sessions is displayed on the tablet. Using the collected data, the therapists can assess the upper arm motor abilities of the patient by accessing information that is usually evaluated only by visual estimation or not reported at all, and adapt the rehabilitation program if necessary. The objects, as well as the visualization interfaces, have been evaluated with health care professionals in terms of design and functionalities. The results from this evaluation show that the objects' design is suited to providing useful information on the patient's motor activities, while the visualization interfaces are useful but require new functionalities. Finally, a preliminary study has been carried out with stroke patients in order to assess the usability and acceptability of such an ecosystem during rehabilitation sessions. This study indicated that the patients are willing to use the ecosystem during the sessions thanks to its ease of use.
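The summary above also mentions that the instrumented objects capture patient tremors during manipulation. One plausible, hedged way to turn raw accelerometer samples into a single tremor indicator is to measure the signal power in a typical tremor frequency band; the 4-8 Hz band, the 50 Hz sampling rate and the function name below are assumptions made for this sketch and are not taken from the SpECTRUM implementation.

import numpy as np
from scipy.signal import butter, filtfilt

def tremor_power(acc_xyz, fs=50.0, band=(4.0, 8.0)):
    """Return the RMS power of the acceleration magnitude in the tremor band."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)      # combine the x, y, z axes
    magnitude = magnitude - magnitude.mean()         # remove gravity/offset
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, magnitude)             # keep only the 4-8 Hz content
    return float(np.sqrt(np.mean(filtered ** 2)))    # RMS value as a tremor index

# Example with simulated data: 10 s of sensor noise plus a 6 Hz oscillation.
t = np.arange(0, 10, 1 / 50.0)
acc = 0.05 * np.random.randn(len(t), 3)
acc[:, 0] += 0.2 * np.sin(2 * np.pi * 6 * t)
print(f"tremor index: {tremor_power(acc):.3f}")

Tracking such an index across sessions would be one simple way for a therapist to follow the evolution of tremors over time.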
776
Richness and ethnobotany of the family Euphorbiaceae in a tropical semiarid landscape of Northeastern Brazil
The Brazilian semiarid region is home to a significant amount of biodiversity that is associated with cultural diversity; however, this region is still poorly studied.The significant cultural diversity of this region results from a confluence of different cultures and populations that have different uses for the available plants.Therefore, records of the cultural diversity from ethnobotanical studies are important tools for the development of realistic and functional models for the use and management of natural resources, which can assist public policy planning and decision-making.In recent years, ethnobotanical studies performed in the Brazilian semiarid region have indicated that the families Fabaceae, Lamiaceae, Asteraceae, and Euphorbiaceae are the most representative in terms of use.In these studies, Euphorbiaceae was of particular interest because it includes several useful species spanning different use categories, particularly the genera Croton L., Euphorbia L., and Jatropha L.The medicinal use category is the most representative for the species of this family, although other cited uses are as timber and food as well as for mystical purposes, among others.Stauble reviewed the botanical knowledge related to Euphorbiaceae in rural communities in Western Africa and described 81 species that can be useful for 87 symptoms; of these species, 46% may have purgative effects, and 28% may have antidiarrheal effects.The author stated that certain genera, such as Euphorbia, Phyllanthus, and Jatropha, have several medicinal species and emphasized the ethnopharmacological importance of the family.The use of Euphorbiaceae was also evaluated by studies in India, where the medicinal use of 23 species belonging to 12 genera was described.In these studies, certain species were considered capable of treating symptoms of incurable diseases, such as AIDS and cancer.Euphorbiaceae is one of the most complex and diversified families of the order Malpighiales and shows a highly diverse morphology, including generally lactescent plants.The species of this family are often cited as pioneers and frequently occupy rocky outcrops, ruderal environments, disturbed areas, and forest and road edges.In the Araripe National Forest, 11 species of Euphorbiaceae have been recorded; however, studies have not been performed that specifically evaluated the set of species of this family in the region.Therefore, the aim of this study was to identify the Euphorbiaceae species available to a rural community surrounding the Araripe National Forest and record the knowledge of the uses of these species.From this basis, we aimed to identify the most important species and most cited use categories by one of the human populations, showing the distribution of this family in the region.The Araripe National Forest is a conservation unit for the sustainable use of forest resources and scientific research; it has approximately 38 thousand hectares, is located in the south of the state of Ceará within the Chapada do Araripe Environmental Protection Area, and covers part of the municipalities of Crato, Barbalha, and Jardim.This conservation unit is the first National Forest in Brazil, and it was established to preserve the forest resources to maintain the springs that feed the valleys.The Chapada do Araripe Environmental Protection Area exhibits deep and well-drained soils in addition to a good aquifer and protective plant cover, which guarantees the maintenance of a wet and fertile region in its surroundings, mainly in the portion facing 
Ceará."The vegetation of the Araripe National Forest, according to the unit's management plan, consists of physiognomies of the Cerrado biome, such as Cerrado stricto sensu and “Cerradão”, and areas of mountain rainforest and “Carrasco”, in addition to a low representation of secondary forests and areas without forest cover.In the definition of Coutinho, the Cerrado shows two extreme physiognomies, “Cerradão” and “campo limpo”, and all the remaining physiognomies of this biome are considered ecotones between these extremes.In the Araripe National Forest, Lima et al. characterized the Cerrado area as a transition between the rainforest and the Cerrado that consists of sparse woody vegetation of medium size with widely branched elements and soil covered by grasses.The “Cerradão” is differentiated from the Cerrado by a forest physiognomy with small- and medium-sized tortuous trees, a dense shrub understory, and soil that is uncovered or covered by a thin layer of grasses.The rainforest of the Araripe National Forest is characterized by medium-sized woody vegetation, and some elements reach heights between 11 and 15 meters with straight shafts, tall branches, and understories composed of natural regeneration of the overstory.In surveys performed for the development of the management plan, the rainforest showed great similarity to the “Cerradão” in terms of tree species.The “Carrasco” was defined by Andrade-Lima as a xerophytic vegetation type of small-sized subtree and tree physiognomy.Lima et al. characterized the “Carrasco” of the Araripe National Forest as xeromorphic shrub vegetation with a severely leached sandy soil and deciduous species that reach a maximum height of 5 m.The Carrasco consists of Cerrado, “Cerradão,” forest, and Caatinga species and is considered by Fernandes and Fernandes and Bezerra to have originated from the destruction of the “Cerradão,” assuming an aspect of dense shrubbery forest.The Araripe National Forest has species typical of the Cerrado plant physiognomies and includes the only protected Cerrado area in the state of Ceará; therefore, it is considered by the Ministry of Environment to be of priority importance for conservation and scientific research.The study site is of great ecological importance because the Chapada do Araripe is among the 27 sites classified as of extreme biological importance, and it is a priority for the conservation of biodiversity in the Caatinga.Anthropic areas span 84% of the territory of Ceará, and the climate and soil conditions of this region favor desertification.In this context, the Araripe National Forest plays an important role in the preservation of fauna, flora, and water and provides a balance for the regional climate by protecting and supporting the existing forests.Additionally, this conservation unit provides several resources, such as food, energy, and medicinal plants, to the rural populations settled in the area.This study was conducted in the rural community of Horizonte, which is adjacent to the Araripe National Forest and is located in the municipality of Jardim, Ceará State, in the northeast region of Brazil.According to the census performed by local health agents, approximately 1120 people live in the community, and there is an outflow of people searching for jobs in other states because of the lack of opportunities in the region.Located approximately 15 km from the urban center of Jardim, Cacimbas has one health center for simple, weekly care, and urgent care is available in the city center.The community also has 
a daycare center and a primary school, whereas secondary education is available in the urban center. The rate of illiteracy is 15% and mostly corresponds to elderly residents. Because of the lack of employment opportunities, extractivism contributes to the income generated by this community. The most extracted products are "pequi", "janaguba", "faveira", and "barbatimão", and several species are used as firewood and in honey harvesting. Most of the inhabitants practice subsistence agriculture and primarily cultivate beans and cassava, selling the excess product. In addition to extractivism, the main source of income for this population is government aid. A preliminary survey of the species of Euphorbiaceae that occur in the study site was performed by consulting the specialized literature, the Dárdano de Andrade-Lima Herbarium of the Cariri Regional University, the Professor Vasconcelos Sobrinho Herbarium of the Department of Biology of the Federal Rural University of Pernambuco (UFRPE), and the website specieslink. Subsequently, the collection was performed between August 2011 and July 2012. To determine the availability of Euphorbiaceae species, several walks were conducted through the different physiognomies of the Araripe National Forest, including inner areas, trails, edges, and areas close to the studied community. A field guide with images and the popular names of plants that occur in the region was created based on the literature to assist in locating the species, with the aid of a mateiro (a local woodsman), who participated in the walks whenever possible. All the observed Euphorbiaceae individuals, whether collected or not, were georeferenced for the creation of a sampling map. The collected material was preserved and identified with the aid of experts, and the samples were incorporated into the collection of the Professor Vasconcelos Sobrinho Herbarium of the Department of Biology of UFRPE. All the scientific names were checked against the database of the Missouri Botanical Garden. Data on the geographic distribution of the species were obtained from the literature and from the databases of the Missouri Botanical Garden, the New York Botanical Garden, and the List of Species of the Flora of Brazil. To obtain the ethnobotanical data, probabilistic sampling was performed to select a significant community sample, which consisted of 153 families out of 242. This selection was performed by drawing lots from the record of inhabitants provided by the health agents. The interviews were performed with the head of the household or the individual responsible for the household at the moment of the visit, provided they were older than 18 years. In total, 153 interviews were conducted, and each interview corresponded to one family unit. All the participants in this study were considered generalists with general knowledge of the use of plants, and the individuals who agreed to participate in the study signed an informed consent form. Semi-structured interviews and visual aids were used to gather information on the local knowledge. The interviews were conducted according to the recognition of species by the participants, and their knowledge of the common names, local uses, source locations, and importance of the useful plants was recorded. The checklist interview consisted of a folder containing A4 images of all the Euphorbiaceae species found during sampling. Each species was shown to the informants as a colored photograph of the species in its natural environment and as a scanned image of a dried specimen. Each set of images of the same species was numbered to
annotate the information provided by the informant in the data sheet. The use of the scanned image of the dried specimen in the checklist is a new proposal for the collection of data with visual aids. The Spearman correlation was used, with the software Bioestat 5.0, to test whether the number of times a species was recognized correlated with the number of times this species was cited as useful. During the interviews, whenever a plant was indicated to be useful, the participants were asked whether they had already used the plant. The relationship between knowledge and effective use was analyzed by a proportion referred to herein as the "use proportion", which was calculated according to the following formula: use proportion = (number of participants who used the plant) / (number of times the plant was cited as useful). The cited uses were categorized as medicinal, magical–religious, ornamental, food, soap, fuel, cosmetic, and other. The importance value (IV), which measures the proportion of informants who cited a species as the most important, was obtained by the formula IV = ns/n, where ns is the number of informants that considered species "s" the most important and n is the total number of informants. The floristic survey of Euphorbiaceae found 23 species belonging to 11 genera. The most representative genus was Croton, followed by Euphorbia, Manihot, Astraea, Cnidoscolus, Jatropha, Microstachys, Chamaesyce, Maprounea, Phyllanthus, and Ricinus. These species form the group of Euphorbiaceae available to the community of Cacimbas. Regarding the location of the species within the Araripe National Forest, Euphorbiaceae were found predominantly at the edges and trails of fragments of Cerrado, "Cerradão" and rainforest, and distributed in the interior of the "Carrasco". In the areas outside of the Araripe National Forest, the plants were collected in areas more accessible to the community, often in cultivation fields. Certain recorded species are common in anthropic areas, and they include Ricinus communis, Manihot esculenta, and Euphorbia pulcherrima, which are cultivated plants with known uses and a wide distribution in the Neotropics. In these areas, the herbs Chamaesyce sp. and Euphorbia hirta were found, which were more common in the rainy season, and the shrubs Jatropha gossypiifolia and Jatropha mollissima were also found. Two species of the genus Cnidoscolus were found in ruderal areas close to the "Carrasco": Cnidoscolus urens and Cnidoscolus ulei. The herbs Euphorbia heterophylla and Euphorbia hyssopifolia were collected in ruderal environments close to the rainforest, usually in humid environments. The shrub Croton heliotropiifolius was collected in all environments within the Araripe National Forest as well as in ruderal and anthropic environments in open areas with a high incidence of light. Six species occurred simultaneously in areas of Cerrado ss, "Cerradão," and rainforest: Astraea klotzschii, Astraea lobata, Croton jacobinensis, Manihot caerulescens, Maprounea guianensis, and Microstachys corniculata. Three species, Croton adamantinus, C. tricolor, and C.
echioides, were only found in “Carrasco” areas and were associated with trails but not with signs of anthropic disturbance.The results of the floristic survey were used to investigate the local knowledge on the uses of this group of species available to the population of Cacimbas.The checklist interview was conducted with 26 plants because we collected three specimens that could be identified only at the family level, and all of them had at least one associated use.There was a positive and highly significant correlation between the number of times a plant was recognized and the number of times it was cited as useful.The plants that were most frequently recognized were those with the highest number of different uses: Croton jacobinensis, C. heliotropiifolius, C. urens, C. ulei, E. pulcherrima, J. gossypiifolia, M. caerulescens, M. esculenta, M. guianensis, Phyllanthus tenellus, R. communis, and “ornamental 2”.Of the species with the highest number of use citations, only M. guianensis was obtained by the participants from areas within the Araripe National Forest, whereas the remaining species were obtained from the community area itself whenever necessary.The species that received most of the use citations were not only located close to the participants but were also cultivated or ruderal.The most recognized plant was the castor oil plant, and all the informants who recognized this species cited it as useful.This plant also received the highest number of use citations and was considered the most versatile of the studied species.The most frequently indicated uses for the castor oil plant were as a laxative and in the treatment of toothaches through the use of the seed oil.The two local varieties of M. esculenta were recognized by most informants, and this species had the second most citations as being useful in the study.All the informants that recognized the species in the checklist interview stated that it is useful and attributed a total of 17 different food uses to both varieties together.The main use indicated for this species was the production of flour and consumption of the cooked root.C. heliotropiifolius occupied the fourth place in use citations and was recognized and cited as useful by 84% of the informants.This species received 33 different use indications and was the second most versatile species in this study; it is primarily used as a medicinal plant and is indicated for wound healing and blood purification.J. gossypiifolia was recognized and cited as useful by approximately 85% of the informants of Cacimbas.This species received 25 use indications, which were mainly in the magical–religious use category because it is used to avoid the evil eye.The informants also indicated medicinal uses for this species, particularly for the treatment of stroke and headache; however, there was little consensus among informants regarding these uses.E. pulcherrima was recognized by 36% of the informants, although most did not know its common name but indicated that the species was ornamental.Similar results were obtained for “ornamental 2,” which was recognized by 38% of the informants.M. guianensis was recognized by 34% of the informants, and 98% of them indicated that the plant was useful, mainly as firewood.During the interviews, several informants reported using M. 
guianensis, although they preferred other species, such as Byrsonima sericea and Byrsonima sp., as firewood. The informants stated that they collect already dried wood in the Araripe National Forest, so there is no need to cut individual plants for firewood. Two species of the genus Cnidoscolus were named "cansanção": C. urens and C. ulei. C. urens had a high recognition rate of 81%, although only 15% indicated that the plant was useful. C. ulei was recognized by approximately 37% of the informants, and 25% indicated that it was useful. Most of the informants that recognized these species stated that the plants "are good for nothing but stinging," referring to the stinging effect of the leaves. Approximately half of the available species were infrequently recognized and used by the informants, and the survey results were often inconsistent with regard to the common name and use indications; this was the case for E. heterophylla, E. hirta, E. hyssopifolia, and Chamaesyce sp., which are ruderal species that inhabit areas of the community but were not often recognized. According to the informants of this study, these species are often found in wetter soils during the rainy season. M. caerulescens was recognized by 67 of the informants, and 36% reported no utility, whereas the other respondents attributed different uses to this species with no consensus. One of the cited applications was the production of rubber balls for leisure using the latex of the plant; however, the informants reported that this is an old habit that is not often practiced today. In terms of use, the species most frequently cited as useful were not always the most frequently used species according to the families interviewed. M. esculenta was the only species of the study that showed 100% use; that is, all the informants that considered this species useful reported using the species. Although the castor oil plant was the most recognized species in this study, it came fourth in terms of use, with 77% of the informants who considered this species useful also reporting using the species. The eight most cited use categories were medicinal, magical–religious, ornamental, food, soap, fuel, cosmetic, and other. The category "other" included answers such as "it is useful, but I don't know what for," meaning that it was a species with an indefinite use but still useful. Of these categories, the most cited was medicinal, with 42% of the use citations. The emphasis on the medicinal category is related to the percentage of use citations and the diversity of plants in this category. Nine species were cited as medicinal, as shown in Table 3. The food category had the second most use citations, with 37% of the number of citations; however, these citations referred only to M. esculenta, of which two varieties are cultivated in the community. These two varieties received 17 food use indications, of which the most recurring were "making flour" and "eating cooked." One variety, "mandioca," was indicated for "making flour," which was the most consistent answer in the study, with 111 citations. The other variety, "macaxeira," was indicated for "eating cooked," with 89 citations, which was also a consistent result in terms of consensus. This use category, although expressive in terms of the number of citations and applications, presented low diversity with only one species, thus revealing the great importance of M.
esculenta for the community.For the community of Cacimbas, the following species were cited at least once as the most important for the family of the informant: M. esculenta, R. communis, C. heliotropiifolius, J. gossypiifolia, P. tenellus, E. pulcherrima, and C. jacobinensis.The species M. esculenta had the highest importance value, followed by R. communis.Previous surveys of phanerogamic flora performed in the Araripe National Forest reported seven and six species of Euphorbiaceae.This study found three species in common with the survey of Ribeiro-Silva et al.: A. lobata, M. caerulescens, and M. guianensis.In addition, an additional three species were also found in the survey of Costa et al.: C. jacobinensis, C. heliotropiifolius, and M. guianensis.There were 11 previously recorded species of Euphorbiaceae in this region, and this study added 22 new occurrences, resulting in a record of 21 species in the Araripe National Forest and 12 species in the anthropic areas surrounding the forest.The effort to specifically locate specimens of this family and its collection in anthropic areas may have contributed to the high number of species found in this study compared with other studies conducted in the region.The shrub C. heliotropiifolius has been recorded in the Amazon forest, Cerrado, and Atlantic Forest and may form large populations in the Caatinga.This distribution may explain the large area of collection of this species in our work.The species C. adamantinus, C. tricolor, and C. echioides have a distribution that is more closely related to the semiarid climate and most likely occurred in these areas because of the influence of the surrounding Caatinga vegetation, as they are frequently recorded in this vegetation type.Santos et al. used a checklist interview in an ethnobotanical study in the Caatinga of Pernambuco State and also found a significant relationship between the number of times a species was recognized and the number of times it was cited as useful, suggesting that species that are more frequently recognized are more frequently handled and used.Anthropic areas usually provide several useful species, including many species of Euphorbiaceae.Several ethnobotanical studies have shown that potentially useful plants, especially those with medicinal uses, have origins in anthropic areas, and they are predominantly herbs.In this study, the medicinal category was ranked second in species richness.In Indonesia, Voeks and Nyawa observed that most medicinal species were found in anthropic areas, especially secondary forests.Similarly, Caniago and Siebert studied the use of plants by healers in Indonesia and observed that medicinal species were predominantly found in disturbed sites compared with primary forests.The production of castor oil seeds in the studied region has been promoted by the government for a long period of time according to the informants, and this history may explain the dissemination of seeds in ruderal areas of the community and knowledge on the uses of the seed oil.Few farmers currently cultivate the castor oil plant because it provides little return; however, the knowledge remains, and the plant is still used.Castor oil is manually extracted from the seeds in the households of certain community inhabitants.Generally, the elderly produce the oil for their own consumption and to sell it to the community; however, the commercialization of this product does not contribute to the income of the families of the community.Oliveira et al. 
also observed that the castor oil plant is used for medicinal purposes in the semiarid region of the state of Piauí for the treatment of worms, boils, and snake bites.The results for M. esculenta are not new culturally or scientifically because cassava is one of the main sources of calories in the diet of several countries, including Brazil, where it is cultivated in all regions and plays an important role in industry and human and animal diets.Albuquerque and Andrade also reported the use of M. esculenta as a food source in the semiarid region of the state of Pernambuco, where it is important to the subsistence economy of the studied community.Kreutz et al. investigated the traditional practices for healing and preventing diseases in a community in Cuiabá – MT and found that bellyache bush, among other plants, was indicated for “benzeção”, a practice performed mainly by the elderly, who use branches of specific plants for this purpose; this practice was also reported in this study.In a comparative ethnobotanical survey in communities on the coast of Pernambuco, Silva and Andrade also found medicinal and magical uses for J. gossypiifolia.Communities of distinct localities believe that this species possesses supernatural powers; however, few studies have addressed this interpretation of its magical and religious uses and determined how this knowledge is transferred.The exotic and ornamental natures of E. pulcherrima and “ornamental 2” may justify the lack of knowledge of their common names and indicate a lower cultural importance of these species in the community.The lack of water in the community during the dry season hinders the cultivation of gardens with ornamental plants, and the water that is available is destined for domestic use and for cassava and other food crops.The high recognition rate of the species C. urens and C. ulei is most likely because the inhabitants actively avoid accidental exposure to these plants while walking in ruderal areas or on the way to the forest.Approximately 15% of the informants stated that M. 
caerulescens serves as food for the forest animals, and this information subtly reveals the close relationship between the population of Cacimbas and the Araripe National Forest and demonstrates that the ecological importance of the forest is recognized. Incursions into the forest in search of resources increase the knowledge of natural processes within the population. Therefore, it is important that communities actively participate in the development of preservation strategies for these environments, especially in the case of a national forest in the semiarid northeast region of Brazil. Albuquerque found no correlation between known and effectively used medicinal plants in a community in the semiarid region of Pernambuco and did not attribute this characteristic to the loss of knowledge; rather, effective use was attributed to "mass knowledge" and "stock knowledge." According to the author, mass knowledge refers to a group of plants known as useful in a culture; certain plants might be used and then become part of the practical knowledge of the community, i.e., the stock knowledge. Although the castor oil plant was the most recognized and had diverse uses among the sampled species, it did not have the same importance value as cassava for the inhabitants of the community, most likely because the community has a greater need for cassava as a food source than for the castor oil plant as a medicinal treatment. The other plants were chosen by relatively few informants; however, this result does not necessarily mean that they are unimportant for the community. Rather, their importance may simply be lower than the importance of cassava. The species that are not shown on the graph were not mentioned by any informant as being the most important during the interviews. Despite its high use, M. guianensis was not among the most important species for the community of Cacimbas, most likely because its main application was as "firewood," and there are several other plant options for firewood in the forest. The use of M. guianensis as firewood is opportunistic rather than intentional, meaning that the informants go to the forest to collect dry wood as firewood, but not necessarily to collect M.
guianensis. This study found a representative richness of Euphorbiaceae that contributed to increasing the knowledge of the biodiversity of a protected area of important biological richness in the semiarid northeast region of Brazil. Of the available Euphorbiaceae species, 26% are widely used by the studied population, which indicates the importance of the species of this family. However, because of the local environmental heterogeneity, more studies are necessary. The dynamics of Euphorbiaceae use by the local population does not appear to threaten the conservation of the local biodiversity, because most of the species are obtained from anthropogenic areas. Based on the applications of the plants, this study observed patterns in the use of certain species in semiarid regions, whereas other species exhibited patterns that differed from those of other rural communities in the same region. The data set obtained in this study contributes to the knowledge of the species of the family Euphorbiaceae and provides information on their uses and local importance that might be used in different research areas. This ethnobotanical study focused on one botanical family emphasized in the literature and showed the possibility of uniting distinct research objectives by strengthening the relationship between botanical knowledge and traditional knowledge and enhancing the relationship between man and biodiversity.
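As a worked illustration of the quantitative indices used in this study (the Spearman test between recognition and use citations, the use proportion, and the importance value IV = ns/n), the Python sketch below reproduces the calculations with pandas and scipy instead of Bioestat 5.0. The species labels and counts are invented placeholders, not data from the survey; only the total of 153 interviews is taken from the study.

import pandas as pd
from scipy.stats import spearmanr

n_informants = 153  # number of interviews reported in the study
data = pd.DataFrame({
    "species":        ["sp1", "sp2", "sp3", "sp4", "sp5"],  # placeholder labels
    "recognized":     [150, 130, 95, 60, 40],   # times recognized in the checklist
    "cited_useful":   [148, 110, 85, 45, 20],   # times cited as useful
    "reported_use":   [120, 95, 60, 30, 10],    # informants who actually used it
    "most_important": [70, 30, 12, 2, 0],       # informants ranking it most important
})

rho, p_value = spearmanr(data["recognized"], data["cited_useful"])
data["use_proportion"] = data["reported_use"] / data["cited_useful"]
data["importance_value"] = data["most_important"] / n_informants

print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
print(data[["species", "use_proportion", "importance_value"]])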
Euphorbiaceae stands out among angiosperms in its species richness and in the number of reported uses from ethnobotanical surveys in Brazil and other tropical countries. In Brazilian semiarid regions, species of Euphorbiaceae are among the most frequently used by rural communities, especially for medicinal purposes. The present study investigated the presence of species of Euphorbiaceae and their use by a rural population from the Araripe National Forest region, a protected area located in the Chapada do Araripe (NE Brazil). This area is considered to be of primary importance for conservation, and it is lacking in scientific research. The survey of the richness of Euphorbiaceae occurred through opportunistic plant collections and phytosociological studies between August 2011 and June 2012. We performed 153 interviews with informants who were selected based on general non-probabilistic household sampling and administered semi-structured interviews using a checklist interview that considered all the species of the family Euphorbiaceae registered in collections. We found 23 species of Euphorbiaceae, with the genus Croton (5 species) being highlighted. This study adds new occurrences of Euphorbiaceae to the region compared with the results found in previous surveys; 50% of the collected Euphorbiaceae species were considered useful, with Manihot esculenta (cassava) considered of the highest importance, with a higher utilization rate in the community ponds. The study also indicated the use of castor bean (Ricinus communis), Croton heliotropiifolius, and Jatropha gossypiifolia. The category of use that was most cited was medicinal, and most of the useful species were obtained by informants in anthropogenic areas. The richness of Euphorbiaceae in the region was representative; however, further studies should be conducted in the study area. The dynamics of Euphorbiaceae use in the studied rural population did not appear to pose a threat to native species within the conservation area.
777
Understanding the decision-making process in disaster risk monitoring and early-warning: A case study within a control room in Brazil
Communities from different countries all over the world have been affected by the growing occurrence of disasters, which in 2015 incurred financial losses close to US$100 billion worldwide and caused 23,000 fatalities. Such a disaster is a potentially damaging physical event, phenomenon or human activity that may cause loss of life or injury, property damage, social and economic disruption or environmental degradation. A disaster is thus triggered by hazards, which can be natural or induced by human processes. Geographically, different locations are more or less exposed to these different types of hazards. Hazard and exposure are well-known concepts that are easy to understand. By contrast, vulnerability is a complex concept, and disciplines have several ways of defining, measuring and assessing it. The concept involves the characteristics of people and groups that expose them to harm and limit their ability to anticipate, cope with and recover from harm. Disaster risk is determined probabilistically as a function of hazard, exposure, vulnerability and capacity. It is the potential loss of life, injury, or destroyed or damaged assets which could occur to a system, a society or a community in a specific period. In this manner, early warning systems (EWS) have been established to protect people by enabling action in advance to reduce risks and impacts. Together with a technological infrastructure for data collection and analysis, such as decision support systems and decision analysis models, EWS also denote a social process that occurs at different spatial scales and involves decision-making. This social process involves chains of several types of data, information and knowledge, as well as experts, stakeholders, practitioners, policymakers and citizens. Monitoring the available information and making the decision to issue warnings about potential disaster risks are often carried out in a control room, which is staffed by operators who analyze environmental variables, identify potential hazards and vulnerabilities, and communicate warnings to an emergency response team. Such control rooms have been established not only for disaster risk management but also for other areas of interest, such as nuclear power plants, mineral processing plants, and oil refineries. Since the social aspects of a control room affect the way decisions are taken, they have been examined in existing research works. However, although these works have resulted in a better understanding of the procedural, cultural, and social aspects of control rooms in different scenarios, the challenge now is to recognize how those factors can influence decision-making in control rooms for disaster risk monitoring. This is particularly important because the activities carried out by operators are often affected not only by the cognitive skills of each operator but also by the communication and collaboration between them. As stated by Reed, decision-making preferences in organizations are often inconsistent, unstable, and externally driven; the linkages between decisions and actions are loosely coupled and interactive rather than linear. On the basis of this challenge, this study investigates the following research question: what are the factors that influence the decision-making process in a control room for disaster risk monitoring and early warning? The first stage in answering this question was achieved in our previous work, when a preliminary version of the decision-making process was modeled by means of a standard modeling notation.
This version was later extended and refined in another work, which also sought to link the tasks of the decision-makers with existing data sources. This paper goes beyond the modeling and development of diagrams that describe the decision-making process by interpreting the factors that could influence it. The interpretation is supported by a conceptual framework that was based on a case study conducted within the control room of Cemaden. Hence, this work not only consolidates and extends our previous works but also provides the following new contributions. Conceptual framework: a framework is proposed for conceptualizing the relationship between the factors that influence the decision-making process; these factors can be described as "dimensions" and "pillars" of the decision-making process in the control room. Case study: lessons were learned from the case study within the control room of Cemaden and formed the basis of the conceptual framework. This paper is structured as follows. Section 2 first outlines the conceptual basis of this work. Following this, Section 3 describes the research method employed for conducting this work, as well as the study settings, i.e., Cemaden, its control room, and the existing monitoring systems. On the basis of this, Section 4 describes the findings of the study, which are discussed in Section 5. Finally, Section 6 presents some conclusions and makes suggestions for future work and research lines. Disaster management is an important means of achieving resilience and, as a consequence, avoiding or at least reducing the impacts caused by natural disasters. It follows a continuous process, which consists of activities that are executed before, during and after a disaster. These activities in turn are separated into four main phases. The monitoring of different variables, as well as the decision to issue warnings, is defined in the preparedness phase, which aims to reduce the potential damage caused by a disaster. Early warning systems indeed play a critical role in supporting these tasks and, because of this, enhancing EWS is one of the seven targets of the Sendai Framework for Disaster Risk Reduction to minimize disaster risks and save lives. EWS are defined as a "set of capacities needed to generate and disseminate timely and meaningful warning information to enable individuals, communities and organizations threatened by a hazard to prepare and to act in sufficient time to reduce the possibility of harm or loss". To do so, they consist of four interrelated elements. The first element is risk knowledge, which requires a systematic collection and analysis of data and should include a dynamic assessment of hazards and of physical, social, economic, and environmental vulnerabilities. The second element is monitoring and warning, which should have a good scientific basis for predicting and forecasting hazards and a warning system that operates 24/7. The third element focuses on the communication and dissemination of warnings that contain clear messages and useful information to enable proper responses. Last but not least, the response capability element is essential to ensure the effectiveness of EWS, i.e. people should understand their risks and know how to react.
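The introduction above defines disaster risk probabilistically as a function of hazard, exposure, vulnerability and capacity. Purely as a didactic illustration, the Python sketch below combines normalized scores for these four components in the commonly taught multiplicative form; this heuristic and its function name are assumptions made for the example and are not the model used by the cited frameworks or by Cemaden.

def risk_index(hazard, exposure, vulnerability, capacity):
    """All inputs are expected in (0, 1]; a higher capacity lowers the index."""
    if not all(0 < value <= 1 for value in (hazard, exposure, vulnerability, capacity)):
        raise ValueError("scores must lie in the interval (0, 1]")
    return hazard * exposure * vulnerability / capacity

# A municipality with high hazard and exposure but reasonable coping capacity.
print(round(risk_index(hazard=0.8, exposure=0.7, vulnerability=0.6, capacity=0.5), 2))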
Effective EWS require proper monitoring of the variables of interest in order to ensure that accurate warnings of potential events are issued in time. Control rooms are particularly important for supporting these tasks. This is because they are staffed by operators who are responsible for analyzing environmental variables, identifying potential hazards and vulnerabilities, and communicating warnings to a response team. Consequently, they are a core element at different levels of the warning chain, from national to local organizations. Like any other organization, control rooms can also be characterized by "a series of interlocking routines, habituated action patterns that bring the same people together around the same activities in the same time and places". These rooms, however, can be located within an organization or constitute an organization in themselves. Their activities are thus often affected by the complexity and variety of the data collection tools, as well as by a variety of external and internal factors that affect the control room operators. Examples of external factors are a broken rainfall gauge or restrictive national laws, whereas an internal factor may be the organizational policies for communication among operators. Control rooms can be found in nuclear power plants, mineral processing plants, oil refineries, and emergency warning systems. Although these existing works investigate the physical design of a room and its technological tools, only a few of them focus on analyzing organizational theory and issues. Yang et al. studied the effects of computer-based procedures on the performance of operators in nuclear power plants, including factors such as mental workload and situational awareness. Li et al. examined human factors in the complex and dynamic environment of mineral processing plants. Furthermore, Weick investigated an air control system incident with the aim of identifying its failure issues. The results of this work showed that interruptions of routines, when combined, may turn small errors into a major overall event. Decisions are intrinsic to the daily activities within control rooms; for example, a traffic engineer requires data about the condition of the roads when deciding on the appropriate traffic flow. Both tangible and intangible factors affect the success or failure of a decision, but a decision-maker still requires suitable data when making decisions. Otherwise, he/she might simply have to depend on his/her own experience, which might result in a wrong decision and raise questions about his/her reliability and efficiency. In the case of disasters, before control room operators can make informed decisions and issue accurate and useful warnings, they usually have to analyze information from different types of variables in a short period of time. For this reason, attempts have been made in both the literature and common practice to develop decision support systems that are able to support decision-making. For example, Picozzi et al. devised an early warning tool for earthquakes which provides alert messages within about 5–10 s for seismic hazard zones, while Alfieri et al.
analyzed a European operational warning tool for water-related disasters. Another line of work focuses on developing decision analytical models that provide a better understanding of how to use the required variables and thus improve decision-making. Within this group, Kou and Wu proposed a multi-criteria-based decision model that could be employed for analyzing existing medical resources and providing their optimal allocation. Furthermore, Comes et al. presented an approach that supports decision-makers under fundamental uncertainty by suggesting potential development scenarios. Although these works provided relevant contributions to improving decision-making in disaster management, none of them investigated these decisions as an organizational process within control rooms. Here, it is worthwhile to mention the work of Weick, which examined an air control system and indicated the interruption of important routines, together with a broken communication chain among operators, as two factors that may turn small errors into major disasters. That study is closely related to this work; however, we focus on modeling the decision-making process and then examining the influencing factors. In other words, a process can be modeled as "a set of connected activities using information and communication technologies, which lead to a closed outcome providing a measurable benefit for a customer". On the basis of a modeled decision-making process, operators and coordinators are able to analyze and define proper decision models, which may fit the goal of their activities. Thus, decision analytical models should be recognized as beyond the scope of this work. In Brazil, preventive countermeasures have been taken to mitigate loss and damage, as well as to improve the coping strategies employed by communities against floods, droughts, and landslides. One of these countermeasures was to set up Cemaden, which is a branch of the Brazilian Ministry of Science, Technology, Innovation, and Communications. Since the establishment of Cemaden, the number of monitored municipalities has grown from 56 in 2011 to almost 1000 in 2016, which represents 17% of all Brazilian municipalities. In parallel, the number of warnings issued by the control room of Cemaden has also grown during the last few years, i.e., 1353, 1762, and 1983, with a total of almost 6500 warnings issued since 2011, when Cemaden was founded. This growing number of monitored municipalities, combined with the number of issued warnings, illustrates the complexity of the ongoing problem of disaster risk monitoring and issuing early warnings in Brazil. To deal with this scenario, Cemaden has been building a monitoring system that consists of more than 4750 rainfall gauges, about 550 humidity and rainfall sensors, nine weather radars, and almost 300 hydrological stations. These sensors provide data on precipitation, allow the movement of weather systems to be calculated, and support the forecasting of weather conditions. In addition, the center also works in collaboration with several institutions, such as the National Water Agency, the Brazilian Geological Survey, and the National Institute of Meteorology. These provide further data about weather conditions, risk maps, and environmental variables, which supplement the existing data of the center. All these different types of data are monitored and used within a control room for making decisions about whether or not to issue warnings of potential hazards when adverse weather conditions are forecast. The control room contains a video wall, which displays
the data that is drawn on to support the decision-making of a monitoring team.The monitoring teams work 24 h a day, throughout the entire year, in a continuous monitoring cycle that is divided into six-hour shifts, starting at midnight.They comprise a team of seven to eight members that include at least one specialist in each of the following areas of expertise: hydrology, meteorology, geology, and disaster management specialist.In addition to the video wall, each member has a separate working station where they analyze particular information on their own, e.g., a geologist may want to analyze data provided by geological agencies, while a hydrologist is more interested in data from water resources agencies.While working, they can use a decision support system, which integrates data from the monitoring systems and displays integrated data on a geospatial dashboard.These data are also analyzed by the teams to determine what warning level should be adopted; on the basis of this warning level, relief agencies on the ground can decide what kind of action should be taken.Since previous knowledge and experiences of an area are also essential when deciding whether or not to issue a warning of a potential disaster, this makes decision-making more empirical, although it is also highly subjective.Furthermore, the task of issuing a warning and deciding on its level implies a high degree of responsibility and puts pressure on the operators, which makes decision-making more complex.A case study was carried out as a part of the research methodology, mainly because it is a means of investigating a contemporary phenomenon in its context when the boundary-line between them may be unclear .Since the aim of this study is to analyze the decision-making process of a control room for disaster risk monitoring and early warning, the control room operators of Cemaden represent the subject of the study and their daily business processes are the units of analysis.A set of analytical variables was employed to assist in the collection and analysis of significant information about the units of analysis .These included Activity, Sequence Flow, and Actor and were derived from our previous work .During the phase of data collection, semi-structured interviews and direct observations were employed to gather qualitative data from control room operators.Purposive sampling was adopted as a technique for selecting participants for the qualitative study, i.e., those operators who were working in the control room on the visiting day were selected as the sample for the study.This method was chosen mainly because control room operators have a very strict work schedule and are unable to spend much time away from their regular activities; in view of this, the best alternative was to approach them informally in their free time, and not during their work shifts.The aim was to recruit as many participants as possible and thus include a comprehensive and appropriate number of individual cases for the study.Collected data were then used for preparing a diagram that describe the decision-making and reveal influential factors.This diagram was modeled with the aid of Business Process Model and Notation , which is a standard model that is used in research for the task of modeling business processes in different application domains.After this diagram has been modeled, it was further evaluated with control room operators.Purposive sampling was also conducted during the free time of the participants.It is also worth mentioning that no a priori 
fixed sample size was set in any phase of the case study. Data were collected during the period January 19th–22nd, 2016 and on February 1st, 2016 at the Cemaden headquarters in São José dos Campos, Brazil. During these periods, 88 warnings were issued from the control room to the National Center for Disaster and Risk Management of the National Civil Defense. Direct observation sessions were conducted following a study protocol and with a limited degree of interaction between the researcher and the subjects. This meant that the observer was only regarded as a researcher and did not interact with the subjects or interfere with the subjects' activities. The aim of these sessions was to gather data about the day-to-day activities and interactions of the subjects without interfering with their work. Individual, face-to-face interviews were also conducted with the aim of obtaining data about the business activities of the participants. Open-ended questions were asked, and these guided the course of the interviews. There were 10 semi-structured interviews with members of the control room, comprising two geologists, two hydrologists, two meteorologists, and four disaster analysts, and these took place at the workplace of the participants. This represented 30% of all the members working in the control room, all of whom have had at least one year's experience there. Since the interviewees were working within strict time constraints, the interviews took no more than 35 min, and all of them were audio-recorded. The data collection was carried out by a Ph.D. student with a background in business process modeling and information systems. His work was supervised by a researcher with a background in the sociology of disasters and early warning systems and a professor with a background in information systems and disaster management. This interdisciplinary teamwork was important since it provided a solid basis for conducting all the phases of the study. The audio-recordings from the interviews were used to transcribe each of them verbatim, i.e., the transcription included every word of the audio-recording and so represented just the way it was said. The analysis and classification were conducted in two distinct phases. In the first stage, the analytical variables used during the data collection were again employed as a basis for defining a coding technique for the classification and analysis of the data. Coding is "a method that enables you to organize and group similarly coded data into categories or 'families' because they share some common characteristics". The coding scheme was then employed to classify the content of each transcription. This analysis relied on the NVivo data analysis software. The second phase was based on the coded data and consisted of modeling the decision-making process by means of BPMN. This modeling centered on the business process that covers the analysis of all the coded data assigned to the "Activity" and "Sequence Flow" categories of the coding scheme. The Signavio Modeling Platform was used to support this task. Focus group sessions were held with the aim of obtaining practical feedback on the model diagram, as well as assessing recommendations for improvements and/or discovering new ideas. Focus groups can be regarded as a social research method that allows a group of people to provide data about a specific topic by means of informal group interaction. A protocol was created to guide the work during the sessions, which consisted of unstructured and open-ended questions. Two focus
Focus group sessions were held with the aim of obtaining practical feedback on the model diagram, as well as assessing recommendations for improvements and/or discovering new ideas. Focus groups can be regarded as a social research method that allows a group of people to provide data about a specific topic by means of informal group interaction. A protocol was created to guide the work during the sessions, which consisted of unstructured and open-ended questions. Two focus group sessions were held on August 23rd, 2016 at the control room of Cemaden with teams that were working in shifts. Six people attended the first session - one meteorologist, one disaster analyst, two hydrologists, and two geologists - while the second session consisted of seven people - two geologists, one hydrologist, two disaster analysts, and two meteorologists. The focus group sessions were conducted by the Ph.D. Student under the supervision of the Researcher and Professor. The participants of the focus group sessions were the only people present in the room. This study fully complied with the ethical and legal principles governing scientific research with human beings, drawn up by the School of Arts, Sciences and Humanities of the University of São Paulo, and took into account the requirements laid down by the Brazilian National Board of Health. All the participants signed the Informed Consent Form. The interviews and focus group sessions were conducted in Portuguese because it was the native language of the participants. They were also audio-recorded by means of a smartphone. The subjects were not paid for their participation in the sessions. Moreover, the participants were not given immediate feedback after the interviews and focus group sessions, although the ideas and results obtained in this study will eventually be shared with the Cemaden community during a workshop at the center. When examining the feedback of the control room operators on the decision-making process, the focus group data were divided into two key areas: the "pillars" of the decision-making process and the "dimensions" of the control room for disaster risk monitoring and early warning. The results of the focus group sessions provide qualitative evidence that decision-making in the control room is linked to four areas: 1) tasks; 2) the information required for informed decision-making; 3) decision rules that make sense of the available data; and 4) accurate data sources. Interestingly, during the phase of data collection at Cemaden, control room operators were reticent about the decision-making process. This was mainly because they did not know what this process would look like or why it could be important for their work. However, during the phase of data evaluation, we could see that operators have tacit knowledge about their daily activities, which, in their own opinion, turned out to be useful, as one of the meteorologists stated: "It helps the operator. For example, I followed all the predetermined tasks; if something unexpected happened it was because it was not covered by the process." The participants also thought that decision-making could be speeded up once they knew what their activities were and what data and information they had to look for. This is consistent with a previous analysis of the work in the control room, which found there were disruptions in information flow and a work overload among the control room operators when there was a lack of appropriate tools and action protocols. However, Militello et al.
only analyzed the information flow between different control rooms, whereas this paper is concerned with analyzing the decision-making process. Furthermore, control room operators believe that training can enable them to make better decisions or even improve them. It is worth mentioning that the operators had not had any training and/or drills since Cemaden's creation in July 2011. Indirectly, they recognized that their decisions involve uncertainties and that they want to reduce this vulnerability. As a geologist stated: "Decision-makers will be trained to know how the existing processes and decisions should be carried out, and thus be prepared for making better decisions or even improving them." A clear understanding of the decision-making process also provides a basis for understanding the interactions and relationships among the operators. This can not only help them to analyze any inconsistencies and avoid misunderstandings but also manage conflicts within the teams during the daily activities. An example of the devalued expertise of the disaster analysts is their obligation to perform administrative tasks while the other experts carry out the scientific analysis of hazard monitoring using meteorological, hydrological and geological knowledge. The disaster experts expressed their vulnerable position in the organization when trying to highlight information about the people exposed to floods and landslides and/or the physical vulnerability of buildings in these risk-prone areas: "My job is not restricted to administrative tasks. I am also responsible for providing data to the other members about the vulnerable community." Furthermore, the results from the focus group sessions also provided evidence that decision-making is closely related to an understanding of what information is available and how it can be combined, through a decision rule, to detect a potential disaster. For example, a hydrologist requires data about the volume of rainfall and the water level of riverbeds in order to predict the risk of flooding. This required information is affected by the quality of the shared data and the location of the available data sources, such as hydrological stations and rainfall gauges. The uncertainties caused by the huge volume of available data, or by the condition of the data collection tools, should also be taken into account when making decisions. As one meteorologist pointed out: "The forecasting of rainfall depends on having available tools and effective meteorological stations; it also requires data that are updated and reliable because the rainfall gauges might not be properly calibrated. Unfortunately, some municipalities do not have any available tools, which means one is monitoring 'in the dark'." Indeed, forging a relationship between tasks, required information, decision rules, and data sources is a crucial issue. With regard to this, a geologist made the following comment: "You know how things should work and are thus suitably prepared to make a decision or even improve the decision-making process." A meteorologist echoed the geologist's comment on the importance of understanding the basic principles of decision-making, which he supplemented by pointing out that "this could help in the management of the team members; for example, when you have to hire a new member." The results of the study showed that the tasks carried out in the control room can be divided into four key areas: 1) the phase of the warning; 2) the type of hazard; 3) the location of the warned areas; and 4) the expertise of the operators.
With regard to the phase of issuing warnings, according to the participants, the status of a warning falls into one of the following sequential phases: Analysis; Opened; Kept; Ceased; and Under Review. The participants emphasized the fact that the required tasks and their sequence flow may change during these phases and thus there could be evidence of further activities. For example, in the "Under Review" phase, the warning is analyzed with regard to its quality, while in the "Kept" phase, the disaster management specialist could investigate the occurrence of disaster damages and losses reported by the media in an affected area. A different set of tasks, information, and data sources is required to assist in the decision-making. In the same manner, the decision-making process is also affected by the different types of hazards, which might share several common features but could also have idiosyncrasies. For example, the volume of rainfall in a specific city/town and/or region can be used to assist in the forecasting of both floods and landslides; however, the water level in a riverbed, which is essential for flood forecasting, is hardly useful for forecasting landslides. The location of the monitored data also affects the decision-making, especially because of the territorial size of Brazil, where many different kinds of weather systems can be found. Moreover, each area has its own specific environmental features, e.g., the geological setting of the Mountainous Region of Rio de Janeiro is more susceptible to landslides than that of the central region of São Paulo. At the same time, urban settings also play a critical role, since inhabited locations are more hazardous than rural areas. Furthermore, the characteristics and state of buildings are also essential in the decision-making, as pointed out by a meteorologist: "A warning about a landslide in the Mountainous Region of Rio Grande do Sul takes a completely different form from one in the Mountainous Region of Rio de Janeiro because the buildings are stronger than those in the shanty towns." The differences between the environmental, urban, and residential settings mean that the decision rules and required information change from one location to another; e.g., a decision rule may set the rainfall threshold for the Metropolitan Region of São Paulo at 60 mm in 24 h, while the threshold for the landslide-prone areas between the cities of Jaboatão dos Guararapes and Recife could be 40 mm.
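As a minimal illustration of such a location-dependent decision rule, the sketch below encodes the two rainfall thresholds quoted above; the function, the intermediate band and the suggested actions are hypothetical simplifications for illustration, not Cemaden's operational rules.

```python
# Thresholds quoted in the text; everything else is an assumed illustration.
RAIN_THRESHOLD_MM_24H = {
    "Metropolitan Region of São Paulo": 60,
    "Jaboatão dos Guararapes-Recife": 40,
}

WARNING_PHASES = ["Analysis", "Opened", "Kept", "Ceased", "Under Review"]

def landslide_warning(location: str, rain_mm_24h: float) -> str:
    """Return a suggested decision for the accumulated 24 h rainfall at a location."""
    threshold = RAIN_THRESHOLD_MM_24H.get(location)
    if threshold is None:
        return "no rule defined - expert judgement required"
    if rain_mm_24h >= threshold:
        return "open warning (move from 'Analysis' to 'Opened')"
    if rain_mm_24h >= 0.5 * threshold:       # assumed intermediate band
        return "keep under analysis"
    return "no warning"

print(landslide_warning("Metropolitan Region of São Paulo", 65))   # open warning
print(landslide_warning("Jaboatão dos Guararapes-Recife", 25))     # keep under analysis
```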
Furthermore, the results of the focus group sessions showed that the decision-making process is also affected by the expertise of each member of the monitoring team, as well as by how these members should interact within their teams. This was made evident when a geologist explained the role of the disaster analysts, although the geologist did not identify the role of the disaster analyst in the risk analysis cycle: "For example, it could be raining in a region. The geologist predicts the risk of several landslides; however, none of them will occur in an urban area. Here, the disaster analyst can help me as well." The role of meteorologists in the decision-making process also demonstrated the differing levels of expertise among the members of the control room. The interactions among the diverse experts of the team differ: further information is often requested from the meteorologists, whose competences were sometimes not well defined. In most of the interviews, the concept of disasters was attached to the idea that disasters are caused by rain. As a geologist stated: "I have often asked the meteorologist: 'Is it going to rain? Is it a high-risk, potentially critical situation? Is it likely to cause a critical disaster event?'." A kind of decision-making that is centered only on the analysis of the meteorologists increases the uncertainties of the process. In contrast, when a decision is made on the basis of risk modeling and event forecasting, it is possible to standardize the team's decision-making, as well as to overcome uncertainties, by allowing the specialists to share their responsibilities and putting less pressure on them. Apart from this, a decision-making process that relies only on the analysis of the meteorologists overloads these specialists, and hence makes their decision-making more vulnerable and prone to human error. Moreover, during emergency situations, sensors and meteorological reports are subject to failures and real-time decision-making can no longer be based on dynamic data. On the basis of the two key areas presented in the previous sections, a conceptual framework is proposed here as a way of describing the factors that influence decision-making in control rooms for disaster risk monitoring and early warning. This framework consists of two essential groups of elements, as displayed in Fig. 3: 1) the "pillars" of decision-making, illustrated as a triangle; and 2) the "dimensions" of decision-making, represented by the ellipses. The performance of monitoring teams is closely related to the "pillars" of decision-making: the operators of the control room feel more confident when they are following a defined process. In turn, this process may be centered on a decision rule, which links a) the tasks of the decision-making process, b) their required information, and c) useful data sources. When this process is understood, decisions can be speeded up and every member can understand their role in the process. This kind of understanding also makes the operators more confident about making decisions. There are two reasons for this: 1) they are following a predetermined protocol and 2) they can be trained to be more specialized in the activities that they have to carry out. On the other hand, the results of this study also suggested that the "pillars" of the decision-making process should be adapted and driven not only by the traditional elements of the risk framework for broad structural policies, but also by a supplementary element named Temporality. Table 3 details each element of the framework. Together, these elements constitute what we have named the "dimensions" of decision-making, i.e., the type of hazard, the warning phase of the disaster, the location of the hazard-prone areas, and the area of expertise of the operators.
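One way to read the framework is as a typed structure that a decision support system could carry alongside each warning; the sketch below is a hypothetical encoding of the "pillars" and "dimensions" described above, not an implementation of the system used at Cemaden, and the example values are illustrative only.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical encoding of the proposed framework: the four "pillars" of a
# decision-making process and the "dimensions" that contextualise it.

@dataclass
class Pillars:
    tasks: List[str]
    required_information: List[str]
    data_sources: List[str]
    decision_rule: str

@dataclass
class Dimensions:
    hazard_type: str        # e.g. flood, landslide
    warning_phase: str      # Analysis, Opened, Kept, Ceased, Under Review
    location: str           # hazard-prone area being monitored
    expertise: str          # operator profile making the call

@dataclass
class DecisionContext:
    pillars: Pillars
    dimensions: Dimensions

flood_watch = DecisionContext(
    Pillars(tasks=["check rainfall", "check river level"],
            required_information=["24 h rainfall", "riverbed water level"],
            data_sources=["rain gauges", "hydrological stations"],
            decision_rule="open warning if both exceed local thresholds"),
    Dimensions("flood", "Analysis", "Metropolitan Region of São Paulo", "hydrologist"),
)
print(flood_watch.dimensions.hazard_type, "->", flood_watch.pillars.decision_rule)
```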
Understanding the links between these two groups of elements is particularly valuable as it highlights areas of improvement in the overall decision-making process. This is because an action protocol should provide a guideline of essential activities but not constrain the monitoring team, which is liable to happen because of the inherent dynamics of disaster management. Control rooms are indeed an essential feature of early warning systems and, thus, of disaster management, largely because they trigger countermeasures and responsive actions if there is a hazardous situation, e.g., the imminence of a disaster. Therefore, this work provides lessons learned from a case study within Cemaden. Firstly, the establishment of an action protocol could provide guidelines for monitoring teams, by reducing the need to depend on their own experience, assessing the workload of the operators, improving the reliability of the decision-makers, and making it possible to track the information required for decisions. The findings of this study provided empirical evidence that a professional habitus, in Bourdieu's sense, arises when no process is established. In other words, this behavior indicates the tendency of people to maneuver their body in certain ways that they are used to, e.g., posture, as well as more abstract mental habits, modes of perception, classification, appreciation, and feeling. In turn, more complex multidisciplinary discussions have emerged. Secondly, given the importance of decision-making, a proper method for designing action protocols should be tailored to the influencing "factors" and "dimensions". These factors corroborate the results of the analysis conducted by Altamura et al. on the social and legal significance of the concept of "uncertainty" in an early warning, insofar as they reinforce the need for an analysis of the decision-making process of control rooms. In line with past works that investigated the failure chain of control rooms, this work underlines the importance of a decision-making process for control rooms as a means of reducing possible errors. This also supplements other research focused on investigating the uncertainties that face decision-makers in disaster management. As participants mentioned during the focus group sessions, operators feel more confident when they recognize the full scope of their activities. Thirdly, available systems and decision models should be aligned with the decision-making process; otherwise, they might become useless, by making the tasks complex and delaying the decisions. As a result, both cosmology episodes and habitus may come back into the spotlight, and thus wrong decisions could be taken. This finding supplements previous research on the development of decision analytical models for disaster management and control rooms insofar as it approaches the problem from another perspective, i.e., the decision-making process itself as a sequence of tasks, required information, actors, and data sources. This result is consistent with other studies in the literature that analyze key aspects of decision-making for disaster management. Our study adds to these previous works by offering a conceptual framework, which refines the factors that help to define the way decision-making is carried out in control rooms for disaster risk monitoring and early warning. This framework is also particularly valuable for a better understanding of how computer-based procedures can be implemented in the main control room. Fourthly, when traditional data sources are damaged, nonexistent, or poorly calibrated, crowdsourcing and volunteered information may be adopted as a supplementary source. The results obtained in this study also suggest that the condition of the data collection tools affects decision-making in control rooms, as mentioned by participants during the interviews and focus group sessions. Since control room operators make decisions far away from the vulnerable region, their judgments rely on data provided by the existing monitoring equipment. Therefore, when data collection equipment is not working properly, operators could decide "in the dark" without knowing the "real" situation in the area; this may occasionally lead to devastating consequences due to a wrong decision. This conclusion is in line with previous works that investigated
decision-making and human factors in different control rooms. To overcome this challenge, ordinary citizens can provide reliable and accurate volunteered geographic information from vulnerable areas, which thus supplements traditional data collection tools and enhances decision-making in control rooms. The findings of this work show, however, that like any other information, such data should reflect the decision-makers' requirements; otherwise, it may be useless or even misused. In summary, these lessons learned from the case study, together with the conceptual framework, provide contributions that are useful not only for operators but also for coordinators in guiding the establishment of a control room in different application areas. Although we believe these results are of great significance for research and practice, there are limitations to our study which should be recognized. The conceptual framework could be useful in several other application domains for understanding their decision-making processes; for example, the control rooms of nuclear power plants, air traffic control, and oil refineries. However, while these examples demonstrate the potential of the framework, its rigorous evaluation through systematic application in several other scenarios is still required to establish its generality, which is beyond the scope of this work. The aim of this paper has been to understand the factors that affect decision-making in the control room for disaster risk monitoring and early warning. Semi-structured interviews and participatory observations were conducted as part of a qualitative analysis project carried out in the control room of Cemaden. The results obtained in this analysis showed that members of the control room tend to draw on their previous experiences and knowledge in their decision-making when there is a lack of a clear strategy. As a result, control room operators become more concerned and worried about the way they are making decisions. At the same time, this increases uncertainty in decision-making, since operators do not know what activities they are supposed to carry out, which kind of information can be regarded as "additional information", and what data sources should be analyzed. On the basis of these concerns, our framework describes the essential features of a decision-making process, i.e., a) the tasks involved, b) the required information, c) the decision rules that are designed to make sense of the information, and d) the required data sources. The results of the study showed that these are the essential components of the decision-making processes that assist in carrying out the activities of control rooms for disaster risk monitoring and early warning. Furthermore, the framework provides a set of "dimensions" that characterize the decisions made in the control room, such as the expertise of the members, the warning phase, the type of disaster, and the geographic location. The results provided evidence that there is a strong relationship between the essential features of decision-making and these dimensions. In other words, the dimensions influence the way that the tasks are carried out by monitoring teams, e.g., control room operators will not analyze the water level of a riverbed when making decisions related to landslides. This also affects the required information, decision rules, and analyzed data sources. For example, a hydrologist will not analyze weather forecasts, since this is assigned to a meteorologist. Future lines of research should also be noted. Given the nature of
the findings of this study, there is still a need to conduct further case studies in different organizational settings, which could support the generalization of the contributions achieved in this study. In addition, more participatory observations should be conducted in the control room to extend the acquired knowledge base, especially during a disaster situation such as the one that occurred in the Mountainous Region of Rio de Janeiro in 2011, before Cemaden was created. The conceptual framework should also be applied and evaluated in other application domains; for example, control rooms that aim at monitoring and issuing warnings of tornadoes or earthquakes, as well as the control rooms of nuclear power plants, which have requirements distinct from those of a control room for disaster management. These further evaluations have the potential to provide new understanding or to improve the body of knowledge on the factors that influence decision-making in control rooms. Moreover, although a sequential process is useful for supporting decision-making, in some cases it may become pointless due to the uncertainty of the resources or of the existing information. For example, an action protocol may determine that a control room operator must issue an alert using data from rainfall gauges; however, in a particular case, he/she may find that no such equipment is installed at the location. In this context, there is an emerging trend to adopt reference task models to assist in disaster management, i.e., definitions of universal elements that can be employed by organization developers to solve a specific task at a given time. As a result, this could meet the need for a decision-making process that is more flexible and more resilient during disaster situations.
The tasks of disaster risk monitoring and early warning are an important means of improving the efficiency of disaster response and preparedness. However, although current works in this area have sought to provide a more accurate and better technological infrastructure of systems to support these tasks, they have failed to examine key features that may affect decision-making. In light of this, this paper aims to provide an understanding of the decision-making process in control rooms for disaster risk monitoring and early warning. This understanding is underpinned by a conceptual framework, developed in this work, which describes the factors that influence decision-making. To do so, data were collected through a series of semi-structured interviews and participatory observations and later evaluated with members of the control room of the Brazilian Center for Monitoring and Early Warning of Natural Disasters (Cemaden). The study findings provided a solid basis for designing the conceptual framework of the essential factors required by the decision-makers. These factors are separated into two groups: 1) the “dimensions” of decision-making (i.e., the type of hazard, the phase of the disaster risk, the location, and the area of expertise of the operators) and 2) the “pillars” of decision-making (i.e., the tasks, their required information, useful data sources, and the decision rule). Finally, the contributions achieved in this study may help operators to understand and propose proactive measures that could improve decision-making, overcome uncertainties, standardize the team's decision-making, and put less pressure on operators.
778
Occurrence and seasonality of internal parasite infection in elephants, Loxodonta africana, in the Okavango Delta, Botswana
Parasites can reduce body condition, reproductive success, and survival in their hosts.Although parasite infections have been associated with mortality in the African elephant, Loxodonta africana, research on the parasite fauna of this species is limited.More is known about parasites of Asian elephants, whose large captive population and significance to livelihoods underpin more detailed study.Apart from well recognised generalist taxa, most elephant-associated parasites so far described appear to be specific to either Asian or African elephants, suggesting that they have evolved to become host specific in the 7.6 million years since the African and Asian elephants diverged.Due to the relatively limited amount of work that has been carried out on these parasites in African elephants, very little is known about their identity, occurrence, importance, life cycles and transmission dynamics.Among African elephants, nematodes are frequently found, with hookworms in particular reported to cause pathological lesions and haemorrhages in the bile ducts and liver, as well as the intestines.The elephant-specific intestinal fluke Protofasciola robusta, likely to be an ancestral species within the Fasciolidae, has been associated with intestinal tissue damage, haemorrhage and death in free-ranging African elephants.Coccidian infections, while apparently common, have not been widely associated with adverse clinical consequences.This study sought to determine, by means of a coprological survey, the occurrence of and levels of infection with gastrointestinal parasites among African elephants in the Okavango Delta ecosystem, and to test for associations with potential drivers of transmission, including age, sex, group size and composition, season and year."Additionally, serial sampling of a small group of domesticated elephants at the study site was utilised to investigate the seasonality of transmission in this unusual and important part of the elephant's range. 
"The study was conducted in the Ngamiland Wildlife Management Area 26 concession in the Okavango Delta, Botswana.This is a private game concession of around 180,000 hectares, used for tourism, comprising riverine forest and grasslands, which flood seasonally along with the rest of the Delta.Rainfall is concentrated between November and February, and flood levels rise from March to June, and then recede to September.The study area has an estimated wild elephant population of 1350, as well as a small herd of domesticated African elephants used for transporting tourists on elephant-back safaris.Both populations have been subject to detailed behavioural studies in recent years, facilitating access to known groups and individuals for faecal sampling.Fresh faecal samples were collected from individual free-living elephants during daylight hours, between November 2008 and April 2012.Elephants were observed until they had defecated and had moved off to a safe distance.A sample was then taken, comprising separate aliquots from the surface and the interior of the dung bolus, to control for eggs having a heterogeneous distribution in the faeces.Only samples able to be collected within one hour of being dropped were taken.This was to avoid rapid parasite egg hatching or bolus drying, as well as disturbance and dispersion by insects such as dung beetles.Seven domesticated African elephants were kept at the study site.This group, known as the Abu herd, were used for elephant-back safaris, and enclosed at night but allowed to forage in the bush during daylight hours, as well as being walked regularly to water and on safari routes.This group therefore provided an opportunity to track temporal patterns in parasite load through longitudinal sampling, and hence reflect seasonal fluctuations in infection pressure to which wild elephants might also be exposed.Samples were collected from all seven members of the Abu herd over a shorter period, also during daylight hours.Samples were placed in plastic bags and stored in a cool box for transfer to the laboratory and processing on the day of collection.The following information was collected for each sample: the date and time of collection, the age and sex of the elephant sampled, and the size and composition of its social group.Group composition was categorised as follows: Group 1 comprised either all females or females with males below the age of 15 years, while Group 2 comprised all males aged 15 years or more.These two categories represent the differential group living dynamics of wild African elephants.Female elephants remain in their matriarchal groups for life, while males are pushed out of these herds at the onset of puberty, and may remain solitary or form groups of their own.Elephants were assigned an age based on a number of visually observed variables, including body size, and tusk size and damage.The elephant population sampled has been the subject of long-term, on-going behavioural studies, and many of the observed elephants could be matched to a previously compiled identification database by observing ear markings, tusks and tail hair, and precise age consequently confirmed.This also minimised the risk of repeat-sampling of individuals.For each sample collected between 12th November 2008 and 20th January 2012, three grams of faeces were weighed out, stored in a 15 ml storage pot and filled to the top with 10% formalin.These samples, hereafter referred to as formalin-preserved samples were stored at ambient temperature and analysed between one and 15 
months after collection.For samples collected between 21st January and 11th April 2012, hereafter referred to as unpreserved samples, three grams were also measured out, but were stored in a domestic refrigerator at around 4 °C, and analysed within 24 hours of collection.Nematode egg and coccidian oocyst density in faecal samples was estimated using a modified McMaster method, with salt-sugar flotation solution and a detection limit of 30 eggs per gram.Briefly, 42 ml of water were added to each three gram sample, mixed thoroughly and then sieved.Two centrifuge tubes were filled with an aliquot of the sieved solution, and centrifuged for two minutes at 1500 rpm.The supernatant was then discarded and flotation solution added to the remaining sediment.The tubes were then inverted several times and a pipette was used to extract some of the mixed solution and place it in the chambers of a Fecpak slide.This slide was used in preference to the standard McMaster slide because of the increased sensitivity, with one egg counted equating to 30 epg, compared with 50 epg using the standard modified McMaster method.The slides were left for two minutes to allow the eggs time to float to the surface before being examined under 10x objective of a light transmission microscope.The prevalence of coccidian oocysts was recorded, and the number of nematode ova in each chamber was counted to estimate egg density.Since some parasite ova, notably fluke eggs, could be too dense to float in salt-sugar solution, a sedimentation method was used to assess fluke prevalence.Faecal suspension was prepared as described in the flotation procedure above and topped up with water to 200 ml, mixed and poured into an inverse conical beaker.The beaker was left for three minutes to give fluke eggs time to sink.A pipette was then used to remove approximately 2 ml suspension from the very bottom of the beaker, and transfer it to the lid of a petri dish.After adding a drop of methylene blue stain, a graduated petri dish was then placed bottom-down on top of the lid to create an even layer of sediment, and the whole examined under 40x total magnification under a dissecting microscope.The number of fluke eggs seen was recorded.Early analysis of samples revealed that nematode eggs were frequently present in sediment fractions of FP-samples, while flotation tests on the same samples were negative for nematode eggs.Thereafter, nematode eggs were examined using both flotation and sedimentation methods.Sediment was examined in a petri-dish, as above, and the presence of fluke and nematode eggs recorded separately.Individual faecal samples were categorised by age, sex, month, season, group size and group composition.Associations between these factors and the prevalence of coccidia and fluke, and of nematodes in FP-samples only, were investigated by binary logistic regression analysis, separately for each parasite type, in order to take account of potentially confounding interactions.All factors were included initially, and the least significant removed in turn until only significant predictors remained.The level of significance was set at p = 0.05.Logistic regression was not appropriate for nematode eggs in UP-samples, since observed prevalence was 100%.Instead, the effects of the same factors on nematode egg density were investigated by multiple linear regression analysis, following log10 transformation to stabilise the variance.Because of apparently inconsistent flotation of nematode eggs preserved in formalin, nematode egg density was 
analysed only for UP-samples, and prevalence of the three parasite categories analysed for UP- and FP-samples separately.Nematode egg counts from samples analysed only using flotation, before the limitations of this method were known, were discarded from the analysis of nematode egg prevalence."Trends in egg density in the captive Abu herd over the study period were assessed using two-tailed Pearson's correlation against time.One individual elephant was not found to be infected with any parasite at any time and was discarded from this analysis.Analyses were conducted using SPSS software.A breakdown of samples collected by factor is given in Table 1.The median age of sampled elephants was 17 years, and the median group size 5.A total of 61 UP-samples were analysed and 397 FP-samples, which were stored in formalin for between 1 and 15 months before analysis.A total of 197 samples were analysed before problems with nematode egg flotation were appreciated, and were excluded from analysis of nematode egg prevalence, leaving an effective sample size of 261 for this analysis.Sampling was skewed towards males early in the study to align with behavioural studies, and more females sampled later to achieve greater balance.In addition to samples from wild elephants, 79 faecal samples were collected from the seven individuals of the Abu herd.Coccidian oocysts were recorded in 69% of UP-samples and 48% of FP-samples.For both sample types, prevalence varied seasonally, and was significantly higher in January and/or February than in the reference month, March, which recorded intermediate prevalence.In FP-samples, coccidian oocysts were additionally more likely to be found in faecal samples taken in 2010, and those that had been stored in formalin for less time.Each additional month spent in formalin reduced the chance of finding coccidian oocysts using flotation by 13%.Although oocyst prevalence in males and females was very similar, when interaction with other factors was taken into account, oocysts were more likely to be found in samples from males than in those from females.Nematode eggs were present in 73% of FP-samples, and were more likely to be found in elephants from larger groups, and in samples that had been stored in formalin for less time.For every additional elephant in a group, nematode eggs were 6.5% more likely to be found, and for every additional month spent preserved in formalin, they were 19% less likely to be found.Nematode eggs were present in all of the UP-samples analysed, rendering analysis of prevalence superfluous, but egg density varied.Samples from Group 1 had significantly higher nematode egg density than those from Group 2.Nematode eggs were of typical strongyle-type morphology, and of mean length and width.Fluke eggs were present in 26% of UP-samples.Prevalence was significantly higher in females than in males.Fluke eggs were present in 23% of FP-samples, and were more likely to be found in samples collected in 2010 and 2011, and in those from older individuals, and less likely to be found after longer storage in formalin.Overall prevalence in years 2008–12 was, respectively, 12, 11, 27, 31 and 26%.Each additional month of storage in formalin decreased the chance of finding fluke eggs in an individual sample by 17%.Fluke eggs were operculate and measured 80–110 µm in length and 50–60 µm in width, and were quite different in appearance to the classic Fasciola-type eggs seen in other large mammals in the study area.Samples were collected from the seven members of the Abu herd 
between January and April 2012.The group comprised six females aged 3 months to 37 years, and one male of 5 years.Over this period, a significant increase in nematode egg density was observed in two individuals, with counts starting at 30 in both individuals and increasing steadily to 210 and 570 over the three month period.At the start of the study, the six members of the Abu herd were all infected with coccidia.However, from the 17th of March onwards, coccidian oocysts were no longer found in any of the samples collected.No fluke eggs were detected in the captive elephants at any time.This is, to our knowledge, the most extensive coprological parasite survey of wild elephants in Botswana to date.Specific identification of the parasite ova found was not possible, and would require corroborative post mortem recovery of adult parasites from elephants, which is rarely possible, given the high level of protection accorded to these animals and the rapid disintegration of carcasses.Advances in molecular methods provide opportunities for more specific studies in the future.Nevertheless, coprological surveys are useful to characterise broad patterns of infection at higher taxonomic levels.In the present study, wild elephants in the Okavango Delta were found to be commonly infected with nematodes, coccidia and trematodes.The morphology of the fluke eggs found was very similar to those of Protofasciola robusta, an intestinal fluke associated with emaciation and mortality in elephants in Kenya.The seasonally wet conditions in the Okavango Delta probably provide suitable conditions for the life cycles of water-dependent fluke species.Lack of fluke infection in the domesticated elephants sampled, which have a more restricted range, suggests that infective stages might be distributed patchily in the environment.Coccidia and nematode ova were found in domesticated elephants as well as at high prevalence in wild elephants, demonstrating that conditions in the Delta are conducive to high levels of parasite transmission.Considering fluke, nematode and coccidia ova together, year, month, sex, age, and group composition and size were all significantly associated with level of parasite infection.Wild elephant samples collected in 2010 more commonly contained coccidian oocysts and fluke eggs than in other years, and fluke eggs were also more common than average in 2010 and 2011.Flood levels in the Delta were unusually high in 2010, and this could have favoured parasite transmission through, variously and non-exclusively, more humid soil supporting parasite development and survival, better conditions for snail intermediate hosts of fluke, and higher host population density as a result of lower available land area.A more persistent effect might be expected for fluke than for intestinal coccidia, since flukes are generally longer-lived parasites.Given the well-established links between climate and the transmission of many parasite taxa, it was surprising that no strong associations were found between season and the prevalence or density of parasite stages in elephant faeces.However, transmission in one season can result in elevated parasite burdens in the next, given the time needed for parasite maturation, and prolonged parasite survival and propagule production.This would blur season–prevalence relationships.The prevalence of coccidia, but not of nematodes or fluke, was significantly associated with month.Oocysts were most likely to be observed in faecal samples in January and February, towards the end of the 
rainy season, after which prevalence declined.In the small number of serial sampled domesticated elephants, coccidian oocysts similarly disappeared from the faeces between February and March.These results suggest that the prevalence of coccidiosis is seasonal, and drops between the rainy season and the flood season.Other studies have shown high prevalence of coccidiosis in farmed livestock in the tropical rainy season.Oocysts continued to be recorded in the present study at lower prevalence through the rest of the year.Parasite transmission could be enhanced by increased host density during the flood season, as elephants become concentrated on elevated land; however, there was no clearly increased prevalence at this time.In other systems in the region, e.g. antelopes in Zambia, and in elephants in Nigeria, helminth prevalence typically peaks in the rainy season.The lack of a strong seasonal signal in helminth prevalence in the current study could be due to, among other factors, limited effects of climatic variation on transmission in this system, or parasite longevity damping fluctuations in transmission.In two of the longitudinally sampled domesticated elephants, nematode egg count increased substantially after the end of the rainy season, which would be consistent with infection from larvae that developed during the rains.However, further work is needed to characterise and explain the seasonal epidemiology of nematode infections in this system.Male elephants were more likely to shed coccidian oocysts and less likely to shed fluke eggs than females, while nematode egg prevalence was unaffected by sex.Many mammal studies have found a male bias in parasitism, usually due to sexual dimorphism in behaviour or morphology, or by the effect of sex-specific hormones on the immune system.If the latter effect is present in elephants, then bulls in musth, when plasma testosterone levels rise significantly, might be expected to have increased parasite levels.However, too few musth bulls were encountered during the study to assess the effect of this heightened male hormonal state on parasite burden.A previous study in Namibia found that musth had no significant effect on parasite burden in bull elephants, suggesting that testosterone may not have a significant immunosuppressive effect in this species.Measurement of hormones in faeces may enable more detailed investigation of hormone–infection relationships in the future.Non-hormone related sexual dimorphism such as group structure, range and diet in male and female elephants may also contribute to the observed pattern of male biased coccidia infections.Age was not associated with the prevalence of coccidia or nematode ova, nor nematode egg density.However, fluke prevalence increased with increasing elephant age.Flukes are typically long-lived within the final host, and this pattern is consistent with gradual accumulation of flukes through life, and limited host immunity.Unlike many livestock species, the elephants in this study do not appear to be acquiring immunity to parasites with age, or at least if such immunity occurs, it is not sufficiently strong to cause a detectable decrease in infection levels in previously exposed individuals.Similarly, a study on wild elephants in Namibia found that within family groups, nematode burden increased with age, and this was attributed to older elephants eating more, and therefore being exposed to a greater number of parasites.Elephant group size varied greatly in the stored sample study, ranging from two to 85 
individuals, and the chance of nematode infection increased with increasing group size.A positive correlation between group size and parasite load in mammals was detected across species by meta-analysis.The rate that the environment is contaminated by parasite eggs is positively correlated with the number of parasitised individuals in the population.As larger herds have an increased probability of including infected individuals, it would be expected that larger group sizes lead to a high environment contamination rate, which in turn, leads to higher parasite levels.The host-density effect on parasite transmission may be exacerbated by the high water levels in the Delta, which force elephant group members to cluster together on dry ‘islands,’ thus increasing host density even further; although, there was no observed increase in infection levels during the flood season in the present study.Members of family groups had higher average nematode egg density than those in groups of mature males.Thurber et al. also found that members of the matriarchal group had a higher nematode burden than solitary bull elephants.This might similarly be explained by higher contamination rates of frequented range areas by larger social groups.However, such processes might be expected to act across parasite taxa, and no relationship was found between group size and composition and the prevalence of coccidian or fluke ova.Faecal egg count methodology was limited in this study due to the sinking in flotation solution of nematode eggs from elephant faecal samples after storage in formalin.This was unexpected but was overcome by changing the nematode detection method to include sedimentation as well as flotation, although this meant that only parasite prevalence, rather than density, could be estimated with confidence in stored samples.It was also found that increased time in formalin led to decreased detection of coccidia and fluke ova.It is possible that high ambient temperature adversely affects the integrity of parasite ova in faecal samples stored in formalin.This consideration should be borne in mind in other coprological studies of parasites in wildlife, in which prolonged storage of material is commonly used to overcome logistical barriers to immediate analysis.Elephants in this study were found to commonly shed parasite ova in their faeces, including those of coccidia, nematodes and fluke.A wide range of factors was associated with parasite presence and density, including sex, age, group composition, group size, month and year.A significant effect of month on parasite prevalence was also found in sympatric domesticated elephants.In the case of coccidia, it appears that transmission is favoured in rainy and flood seasons.The high prevalence of fluke eggs is notable and could be due to the warm and wet conditions in the Okavango Delta.Further research is needed to establish whether internal parasites have any effect on individual fitness or population dynamics in this population, the extent to which transmission occurs between different sympatric host species, and to more fully understand the effects of climate and host biology in the epidemiology of parasite infections.The authors declare that there are no conflicts of interest in this revised article.
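To make the faecal egg-count arithmetic and the group comparisons described in the methods concrete, the sketch below is an illustrative example only; the 30 epg-per-egg multiplier comes from the text, while the data frame and counts are hypothetical, and the actual study additionally used binary logistic regression and log10-transformed linear models in SPSS.

```python
import pandas as pd

EPG_PER_EGG = 30  # Fecpak slide: one egg counted = 30 eggs per gram of faeces

def eggs_per_gram(eggs_counted: int) -> int:
    """Convert the number of eggs counted on the slide to eggs per gram."""
    return eggs_counted * EPG_PER_EGG

print(eggs_per_gram(7))  # 210 epg

# Hypothetical samples: group composition and raw slide counts
samples = pd.DataFrame({
    "group": ["family", "family", "male", "male", "family", "male"],
    "eggs_counted": [7, 19, 3, 0, 38, 11],
})
samples["epg"] = samples["eggs_counted"].apply(eggs_per_gram)
samples["infected"] = samples["epg"] > 0

print(samples.groupby("group")[["epg"]].mean())           # mean egg density by group
print(samples.groupby("group")["infected"].mean() * 100)  # prevalence (%) by group
```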
It is known from studies in a wide range of wild and domestic animals, including elephants, that parasites can affect growth, reproduction and health. A total of 458 faecal samples from wild elephants were analysed using a combination of flotation and sedimentation methods. Coccidian oocysts (prevalence 51%), and nematode (77%) and trematode (24%) eggs were found. Species were not identified, though trematode egg morphology was consistent with that of the intestinal fluke Protofasciola robusta. The following factors were found to have a significant effect on parasite infection: month, year, sex, age, and group size and composition. There was some evidence of peak transmission of coccidia and nematodes during the rainy season, confirmed for coccidia in a parallel study of seven sympatric domesticated elephants over a three month period. Nematode eggs were more common in larger groups and nematode egg counts were significantly higher in elephants living in maternal groups (mean 1116 eggs per gram, standard deviation, sd 685) than in all-male groups (529, sd 468). Fluke egg prevalence increased with increasing elephant age. Preservation of samples in formalin progressively decreased the probability of detecting all types of parasite over a storage time of 1-15 months. Possible reasons for associations between other factors and infection levels are discussed.
779
Dual-doped graphene/perovskite bifunctional catalysts and the oxygen reduction reaction
Research effort continues to focus on the oxygen reduction reaction (ORR) due to its importance in energy device applications and, in particular, the search for more abundant and inexpensive replacements for Pt-group catalysts is attracting increasing attention. Amongst several candidates, perovskites have been demonstrated to be catalytically active. However, the low conductivity typical of perovskites limits their application as single catalysts for the ORR. Here, we investigate combining dual-doped graphenes with a perovskite for the first time. These materials exhibit improved electrocatalytic activity due to the synergic effects of the dual-doped graphene/perovskite combination, and optimisation of the composition yields performance, in terms of both current density and the number of electrons transferred in the ORR, that approaches that of Pt/C. The dual-doped graphene catalysts were prepared via thermal annealing of a mixture of graphene oxide (GO) and the precursors of the different doping agents. These were: boric acid, melamine, orthophosphoric acid and dibenzyl disulfide. 100 mg of GO was mixed with 500 mg of melamine and 100 mg of the corresponding second precursor in 30 mL of ultrapure water. The ink was sonicated for 1 h, then stirred for 15 h and centrifuged at 20000 rpm for 10 min. The supernatant was discarded and the resulting ink placed in an alumina crucible and pyrolysed in a quartz tubular furnace at 900 °C for 2 h, at a heating rate of 5 °C min−1, under a 50 mL min−1 N2 atmosphere. Finally, the sample was cooled under nitrogen before being weighed. Rotating ring-disk voltammetry was performed using a Metrohm AutoLAB PGSTAT128N potentiostat connected to a rotator in a Faraday cage. The reference electrode was an Ag/AgCl electrode and the counter electrode was a Pt mesh. The RRDE consisted of a GC disk and a Pt ring with an area of 0.1866 cm2, with a collection efficiency of 37%. Prior to each experiment the RRDE was thoroughly polished with consecutive alumina slurries of 1, 0.3 and 0.05 μm and then sonicated to remove any impurities. The catalyst inks were prepared by dispersing different amounts of the as-prepared dual-doped graphene and La0.8Sr0.2MnO3 (LSM) or manganese oxide, to give a total of 5 mg of solids in 0.2 mL of isopropyl alcohol, 0.78 mL of ultrapure water and 0.02 mL of 10 wt% Nafion. This mixture was sonicated for 1 h and then a 15 μL aliquot was pipetted onto the GC disk to give a catalyst loading of 0.3 mg cm−2. The droplet was left to dry at room temperature for 60 min at 400 rpm, as described in the literature, in order to obtain a uniform layer.
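As a back-of-the-envelope check of the drop-casting step, the sketch below reproduces the quoted loading from the ink recipe; the disk area is not stated explicitly in the text and is an assumed value chosen for illustration.

```python
# Drop-cast loading check. Ink composition and aliquot volume are from the
# text; the GC disk area is an assumption, not a reported value.

m_solids_mg = 5.0                    # catalyst + LSM dispersed in the ink
v_ink_ml = 0.2 + 0.78 + 0.02         # IPA + water + Nafion solution = 1.0 mL
c_ink = m_solids_mg / v_ink_ml       # 5 mg mL-1

v_drop_ml = 0.015                    # 15 uL aliquot pipetted onto the disk
m_on_disk_mg = c_ink * v_drop_ml     # 0.075 mg deposited

a_disk_cm2 = 0.25                    # assumed GC disk area
loading = m_on_disk_mg / a_disk_cm2
print(f"loading = {loading:.2f} mg cm-2")   # ~0.30 mg cm-2, as quoted
```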
The RRDE was then immersed in the O2-saturated 0.1 M KOH solution and cycled between +0.4 and −1.0 V at 100 mV s−1 until a stable response was observed. Linear sweep voltammograms were then recorded at 10 mV s−1 between +0.4 and −1.0 V at rotation speeds from 400 to 2400 rpm. The Pt ring voltage was fixed at +0.5 V to ensure complete HO2− decomposition. The AC impedance spectra were also measured via a Metrohm Autolab FRA32M analyser between 500 kHz and 0.1 Hz at 0 V vs. Ag/AgCl, with a signal amplitude of 10 mV. All measurements were carried out at 293 ± 1 K. XRD measurements were obtained using a PANalytical Empyrean Pro X-ray powder diffractometer with a non-monochromated Cu X-ray source. Raman spectra were recorded using a Renishaw inVia Raman microscope system. X-ray photoelectron spectroscopy (XPS) spectra were obtained at the National EPSRC XPS Users' Service at Newcastle University using a Thermo Scientific K-Alpha XPS instrument with a monochromatic Al Kα X-ray source. The addition of LSM to pure graphene increases the observed value of n from 2.7 to 3.6, although it does not affect the current density. In the case of the sulfur- and nitrogen-doped graphene, SN-Gr/LSM shows a significant improvement in both the measured current and the value of n compared to SN-Gr. The source of this improvement has been postulated to be either conductivity effects or the acceleration of Eq. by the perovskite. XRD and Raman spectroscopy were used to investigate the doped graphenes: both pure graphene and SN-Gr showed diffraction peaks at 26.5° corresponding to a basal inter-layer spacing of 0.34 nm, typical of graphene materials. The defects created during thermal annealing are believed to influence the conductivity of graphene materials, and modify the relative intensity of the ID and IG peaks in the Raman spectra observed at 1340 and 1580 cm−1, respectively. The measured values of the ID/IG ratio are 0.25 for graphene and 1.13 for SN-Gr, indicating fewer defects and thus higher conductivity for the graphene sample. AC impedance measurements of the high-frequency resistances supported this. The conclusion is that the positive effect observed for SN-Gr/LSM is not related to an increase in the conductivity of the material. Next, the RRDE data were analysed using the method of Hsueh and Chin to determine the ratio of rate constants k1/k2 (see above). Fig. 1d indicates that the addition of LSM promotes the direct 4e pathway over the stepwise 2e pathway in both cases. In order to see if the results obtained for the doped-graphene/perovskite catalyst are mainly due to the presence of Mn in the perovskite, an experiment comparing the catalytic activities of MnO2, SN-Gr and SN-Gr/MnO2 was carried out and the results are shown in Fig. 1e and f. Unlike the results reflected in Fig. 1a, the addition of MnO2 to the dual-doped graphene does not improve the current density or the observed overpotential with respect to the doped graphene alone. This suggests that the catalytic activity of the perovskite/doped-graphene hybrid catalyst does not come from the Mn activity only. Next, the influence of the composition of the SN-Gr/LSM system on the catalytic performance towards the ORR was investigated by conducting analogous experiments on a series of SN-Gr/LSM composites. A gradual decrease in overpotential is observed as the SN-Gr content rises, reaching a minimum at 80% SN-Gr content. This echoes the trend in onset and half-wave potentials shown in Fig. 2b, with the most positive onset being +70.5 mV and the least negative half-wave potential being −213 mV at 80% SN-Gr content. Tafel plots confirm the same trend, with the SN-Gr/LSM 0.8:0.2 composite showing the lowest Tafel slope of −104 mV dec−1.
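The values of n and the peroxide yield discussed below are typically obtained from the disk and ring currents together with the collection efficiency; the sketch here uses the standard RRDE relations with the 37% collection efficiency quoted above, while the example currents are purely illustrative and are not values taken from the figures.

```python
# Standard RRDE relations for the electron-transfer number n and the HO2- yield.
N = 0.37  # collection efficiency of the Pt ring

def electron_number(i_disk, i_ring):
    """n = 4*Id / (Id + Ir/N), with currents taken as magnitudes."""
    return 4 * i_disk / (i_disk + i_ring / N)

def peroxide_yield(i_disk, i_ring):
    """% HO2- = 200*(Ir/N) / (Id + Ir/N)."""
    return 200 * (i_ring / N) / (i_disk + i_ring / N)

i_d, i_r = 1.00, 0.020  # mA, illustrative magnitudes only
print(f"n = {electron_number(i_d, i_r):.2f}")           # ~3.79
print(f"HO2- yield = {peroxide_yield(i_d, i_r):.1f}%")   # ~10.3%
```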
The values of n calculated from the ring currents are provided in Fig. 2d and show that LSM reaches the maximum value of n of 3.8, whereas the lowest value of 3.6 corresponds to the pure dual-doped graphene. For the mixed SN-Gr/LSM composites, n does not vary systematically between these two values, with n equal to 3.8 for the 20 and 80% dual-doped graphene compositions and 3.7 for 40 and 60%. This indicates that the ORR takes place on the pure perovskite by a 4e mechanism, although this selectivity towards the direct reduction of O2 to OH− is not reflected in improved current densities, probably due to the previously mentioned poor conductivity of perovskites. The ORR performance of the LSM perovskite is improved by the addition of dual-doped graphene, with the best results obtained at higher SN-Gr contents. The principal conclusion is that the electrochemical performance mainly comes from the intrinsic catalytic activity of the doped graphene, with the perovskite playing the role of a further reducing agent for the peroxide produced by the graphene catalyst. It has been proposed that the carbon facilitates the reduction of O2 into HO2− in a 2e− pathway and the perovskite assists the reduction of HO2− into OH− to give an overall mechanism. In this case, it can be observed that the SN dual-doped graphene shows a value of n = 3.6 when it is not combined with the perovskite, which increases to around 3.8 on the addition of the perovskite. This explanation is therefore possible if the rate of the perovskite-facilitated peroxide reduction is fast compared to the formation of peroxide. In order to elucidate whether the conclusions obtained for SN-Gr/LSM could be extended to other dual-doped graphenes, graphenes doped with boron‑nitrogen and phosphorus‑nitrogen were tested under the same conditions. Their compositions were determined via XPS and are shown in the inset of Fig. 3a. The voltammetry in Fig.
3b illustrates that all the doped graphenes increased their limiting current densities when 20% perovskite was added. The PN-Gr catalyst exhibited the highest activity, which is further enhanced by the addition of perovskite such that its current density approaches that of commercial Pt/C. In all cases the addition of LSM caused an increase in the value of n, most notably for SN-Gr, where n rises from 3.6 to 3.8. The production rate of peroxide intermediates drops from 21.1% for the dual-doped SN-Gr to 9.6% for SN-Gr/LSM. This is an unusually low value for a Pt-free catalyst. It has been proposed that the carbon could affect the electronic structure of the B-site transition metal of the perovskite. Some authors have demonstrated that the addition of carbon to perovskites can modify the oxidation state of the B-site transition metal, for example Co, which is linked to enhanced catalytic activity, although this was not observed for Fe. The possible interaction between the heteroatoms of the doped graphene and the electronic structure of the B-site transition metal of the perovskite, and its possible relation to the enhanced catalytic performance, will be investigated in a separate study to further elucidate these effects on the catalytic mechanism of these promising hybrid materials. The combination of a perovskite with dual-doped graphenes shows a synergistic effect towards the ORR, with an optimal composition of 20% perovskite yielding a value of n of 3.8 for SN-Gr/LSM, whereas PN-Gr/LSM develops the highest catalytic activity, approaching that of the commercial Pt/C catalyst at the same catalyst loading. The addition of LSM further favours the 4e mechanism over the stepwise 2e + 2e pathway, and the increase in conductivity that the graphene derivative provides to LSM is found not to be significant for the catalytic behaviour.
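The electron-transfer numbers and peroxide yields quoted above follow from the standard RRDE relations between disk and ring currents. The sketch below shows that arithmetic; the collection efficiency of 0.37 is an assumed placeholder, and the real value must come from the calibration of the electrode actually used.

```python
import numpy as np

def rrde_metrics(i_disk, i_ring, N=0.37):
    """Electron-transfer number n and peroxide yield (%) from RRDE data.

    i_disk, i_ring : disk and ring currents (same units)
    N              : ring collection efficiency (0.37 is an assumption,
                     not the calibrated value for the electrode in the study)
    """
    i_d = np.abs(np.asarray(i_disk, dtype=float))
    i_r = np.abs(np.asarray(i_ring, dtype=float))
    n = 4.0 * i_d / (i_d + i_r / N)                    # effective electron number
    peroxide = 200.0 * (i_r / N) / (i_d + i_r / N)     # % HO2- produced
    return n, peroxide

# Example: a ring current of ~5% of the disk current with N = 0.37
# gives n close to 3.5 and a peroxide yield of roughly 24%.
```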
We report the first investigation of dual-doped graphene/perovskite mixtures as catalysts for oxygen reduction. Pairwise combinations of boron, nitrogen, phosphorus and sulfur precursors were co-reduced with graphene oxide and mixed with La0.8Sr0.2MnO3 (LSM) to produce SN-Gr/LSM, PN-Gr/LSM and BN-Gr/LSM catalysts. In addition, the dual-doped graphenes, graphene, LSM, and commercial Pt/C were used as controls. The addition of LSM to the dual-doped graphenes significantly improved their catalytic performance, with optimised composition ratios enabling PN-Gr/LSM to achieve 85% of the current density of commercial Pt/C at − 0.6 V (vs. Ag/AgCl) at the same loading. The effective number of electrons increased to ca. 3.8, and kinetic analysis confirms the direct 4 electron pathway is favoured over the stepwise (2e + 2e) route: the rate of peroxide production was also found to be lowered by the addition of LSM to less than 10%.
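The Tafel slopes quoted earlier for the SN-Gr/LSM composites (for example, −104 mV dec−1 at the 0.8:0.2 composition) come from a linear fit of potential against the logarithm of the kinetic current. A minimal sketch of that fit is given below; the potential window limits are assumptions the analyst must choose from the polarisation curve, not fixed values from the study.

```python
import numpy as np

def tafel_slope(potential_V, current_A, e_min, e_max):
    """Fit E vs log10|i| over a chosen kinetic region [e_min, e_max] (V)
    and return the Tafel slope in mV per decade."""
    E = np.asarray(potential_V, dtype=float)
    i = np.abs(np.asarray(current_A, dtype=float))
    mask = (E >= e_min) & (E <= e_max) & (i > 0)
    slope, _ = np.polyfit(np.log10(i[mask]), E[mask], 1)
    return slope * 1000.0   # V per decade -> mV per decade
```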
780
Nano-sized prismatic vacancy dislocation loops and vacancy clusters in tungsten
During high energy irradiation, lattice defects are produced in the form of interstitial- and vacancy- type point defects and clusters.In tungsten, recent simulations and experiments have shown that nanoscale loops, visible in transmission electron microscope, can be generated within the heat spike of a displacement cascade.The majority of these point defects mutually annihilates in the cascade cool down phase.While the surviving interstitials tend to form exclusively small prismatic interstitial dislocation loops, the surviving vacancies have more possibilities.They can create prismatic dislocation loops or vacancy clusters.Traditionally it is assumed, that vacancies cluster together and form 3D voids in order to minimize their energy, while interstitials tend to cluster into planar objects, which collapse into energetically favorable prismatic dislocation loops.Hereafter, we focus on tungsten, one of the prime candidate materials for future fusion reactor designs.In tungsten irradiated at low doses and moderate temperatures, TEM studies reveal the presence of prismatic dislocation loops with Burgers vectors 1/2⟨111⟩ and ⟨100⟩, the former of which dominates .TEM can in principle distinguish between interstitial and vacancy type loops using inside-outside contrast if they are larger than about 4 nm, which corresponds to 220 point defects.For smaller loops it is difficult to distinguish vacancy from interstitial nature, unless a dedicated TEM method based on diffuse scattering is applied .Using the inside-outside contrast method some studies indicate vacancy type dislocation loops while other indicate interstitial type dislocation loops and some studies both .Very recently, first-principles investigation in combination with Monte-Carlo simulations showed that nano-size voids play important role for understanding the origin of anomalous precipitation of rhenium in neutron-irradiated tungsten at high temperature .When irradiated at 500 °C the voids in tungsten are mostly invisible in TEM as their size is below the TEM resolution of about 1 nm, but a post-irradiation anneal at 800 °C for 1 h reveals voids with diameters of 1.5 nm, which corresponds approximately to 111 vacancies .The TEM visibility limit of dislocation loops is also about 1 nm diameter, which corresponds to roughly 15 vacancies or interstitials.Small voids at the limit of TEM visibility have also been reported recently by El-Atwani et al. 
formed in room temperature irradiation. These are expected to agglomerate into large voids at higher temperature when vacancy motion becomes thermally activated. Such a transformation can be observed in positron annihilation spectroscopy results above 473 K. Molecular dynamics simulations of primary cascades in tungsten show the direct formation of small vacancy clusters in a diffuse central vacancy-rich region and also the creation of ⟨100⟩ vacancy loops using Ackland–Thetford derived potentials. The usual way to create a prismatic dislocation loop in simulations is to arrange the defects on a selected plane into a chosen shape. Relaxing such a defect built from interstitials leads to a prismatic interstitial dislocation loop, while the same defect created using vacancies can collapse into a prismatic vacancy loop or can remain stable as a planar vacancy platelet. Such an uncollapsed 2D planar cluster of vacancies is sometimes called an open vacancy loop, even though it is strictly speaking not a dislocation loop. In terms of mobility, at smaller sizes the prismatic dislocation loops behave more like a cluster of point defects, while at larger sizes they behave more like perfect prismatic dislocation loops. Recent collision cascade simulations in tungsten reveal 1/2⟨111⟩ and ⟨100⟩ interstitial loops as well as ⟨100⟩ vacancy loops. The objective of this paper is to compare three vacancy-type defects of the same size, namely: the prismatic vacancy dislocation loop, the planar vacancy platelet on the same habit plane as the corresponding loop, and the 3D void. For comparison the prismatic interstitial dislocation loop is also included. Because the dislocation loop creates a long-range deformation field, the dimensions of the simulation block should be at least 8 times the loop diameter to minimize the influence of the periodic images. For the largest clusters and loops with 397 and 401 defects the simulation block has about 5.6 million atoms, corresponding to a box side of 40 nm. The same procedure is applied to interstitials. After inserting or removing the defects, the simulation block is relaxed using the conjugate gradient method in LAMMPS and the formation energy is calculated. The interstitial cluster collapses easily to the corresponding prismatic dislocation loop, but the planar vacancy cluster usually does not collapse. To create a vacancy dislocation loop we compress the sample uniaxially in the direction of the Burgers vector by 5–20%, then relax the sample, remove the strain and relax again. This simple procedure usually leads to a vacancy dislocation loop. If the amount of compression is too low, the vacancy cluster does not collapse. If the compression is too high, it produces a completely disrupted sample. In general the ⟨100⟩ loops need larger compression, as the gap between atoms across the vacancy platelet is larger. Another possibility to create the vacancy loop is to move atoms closer together after creating the vacancies. Instead of one big gap between the atoms in the direction perpendicular to the defect plane, we then create three smaller gaps. Such samples usually collapse into dislocation loops without additional compression. The presence of a dislocation loop is examined by the DXA algorithm in the OVITO software. Note that small ⟨100⟩ vacancy loops containing up to 37 vacancies are not detected by DXA and manual investigation is needed. All the other types of loops are correctly detected by DXA. The 3D voids are simply created by selecting a sphere in the perfect sample, in which the atoms are
discarded. For simplicity, faceting is not taken into account. In our atomistic simulations, we use three different embedded-atom method potentials: the potential of Ackland and Thetford (AT), the EAM-4 potential developed by Marinica et al. that we designate here as M4, and the recent potential of Mason, Nguyen-Manh and Becquart (MNB), which is an improvement of the AT potential. Fig. 1 shows the dependence of the formation energy Ef of the prismatic dislocation loops on the number of defects N. The M4 potential incorrectly predicts, with respect to elastic theory, that interstitial ⟨100⟩ loops smaller than about 300 point defects have lower formation energies than the corresponding 1/2⟨111⟩ loops. Larger interstitial loops and all vacancy loops behave as expected, see Fig. 1b. The potentials AT and MNB predict the correct order of formation energies of 1/2⟨111⟩ and ⟨100⟩ loops of both types, see Fig. 1a and b. The main improvement of the MNB potential over previous EAM potentials is a better description of vacancy clusters and an improved free surface energy. Previous potentials predict free surface energies approximately 30% lower than the DFT and experimental values. The formation energies divided by the number of defects, Ef/N, of the vacancy clusters and dislocation loops are reported in Figs. 2–4 for the potentials AT, M4 and MNB, respectively. The formation energy of the void as a function of size shows a larger scatter, which is probably caused by the random faceting introduced by intersecting the sphere with a BCC lattice. Indeed, in reality the void surfaces will be faceted preferentially in ⟨110⟩ directions, where the surface energy has a minimum. As a result we expect the faceted void to have a lower formation energy, but to create the facets a prolonged anneal at temperatures above 800 K would be necessary. However, our spherical void shape allows us to calculate a more precise average free surface energy γa, which for the MNB potential is very close to the experimental value, see Table 1. For small sizes the platelet formation energy is lower than the energy of the loop. Above a certain critical size Nplatelet the loop is energetically more favorable than the platelet. With an increasing number of defects N the platelet formation energy per defect Ef/N decreases only slightly and tends asymptotically to a constant for large sizes. The distance of the atoms across the platelet is 4.47 and 5.24 Å for the ⟨111⟩ and ⟨100⟩ platelet, respectively, which is in some cases lower than the cut-off range rcut of the potentials (4.50, 5.50 and 4.40 Å for AT, M4 and MNB, respectively). Despite this, Eq. can be used to calculate precise surface energies for the corresponding surfaces, see Table 1. The uncollapsed planar vacancy clusters can thus be approximated by a flat cylinder of free surfaces with constant height. If we approximate the average free surface energy by the sphere value derived from Eq.
we can estimate the cylinder height from the fitted parameter a2. The estimated heights are on the order of a0 for the ⟨111⟩ and ⟨100⟩ platelets, respectively. The critical sizes Nplatelet are in general higher for ⟨100⟩ loops when compared to 1/2⟨111⟩ loops. When we compare the different potentials, the critical sizes Nplatelet are lowest for the MNB potential due to its higher free surface energies. Only this potential predicts the nanometer-sized vacancy loop to be more stable than the platelet. The M4 potential does not allow stable small vacancy loops; such a loop bulges out upon relaxation and ends as a platelet. This is observed for the sizes 7 and 19 in 1/2⟨111⟩ and for the sizes 9, 21 and 37 in ⟨100⟩. A similar approach for 1/2⟨111⟩ vacancy loops in tungsten using the DND potential gives Nplatelet = 157, which is significantly higher than for the three potentials investigated here. We conclude that the M4 and AT potentials show a thermodynamic driving force for the collapse of TEM-visible vacancy dislocation loops into vacancy platelets. There is a difference, though: the transformation path between the platelet and the loop is easy because it is diffusionless, involving just a slight movement of a couple of atoms in the middle of the disc, while the transformation between a 3D void and a vacancy dislocation loop involves diffusion with the movement of many atoms. The latter would require high temperatures to allow for the required diffusion. We observed a gradual transformation of a 19-vacancy 1/2⟨111⟩ loop into a void at 700 K during 100 ns. We have investigated the formation energies of three different vacancy defects and compared them to the interstitial dislocation loop by employing atomistic simulations and three EAM potentials. The most suitable potential for vacancy-type defects appears to be the MNB potential, which predicts correct energies of free surfaces. The formation energies of the defect clusters are successfully fitted by simple formulas using just one or two fitting parameters. Our specific conclusions are the following: The platelet is stable up to a critical size of 14 and 46 vacancies, which corresponds to a diameter of 1.0 and 1.7 nm for the 1/2⟨111⟩ and ⟨100⟩ loop, respectively, as predicted by the MNB potential. For larger sizes we expect it to collapse fairly easily into a prismatic vacancy dislocation loop. The voids have the lowest formation energies up to a critical size of 6 × 10⁴ and 6 × 10⁵ vacancies, which corresponds to a loop diameter of 65 and 200 nm and a void diameter of 12 and 27 nm for the 1/2⟨111⟩ and ⟨100⟩ loop, respectively, as extrapolated using the MNB potential. Note that our calculations are molecular statics at 0 K; we expect that in reality both voids and vacancy clusters are formed in cascades, in agreement with experiments and MD simulations. The transformation from the void to the vacancy loop and vice versa is, however, not straightforward and involves diffusion at high temperatures for the needed movement of many atoms. The other investigated potentials underestimate the free surface energies by approximately 30% and as a result the platelet and the void are favored when compared to the vacancy dislocation loop. This leads to higher critical sizes and makes the small vacancy loop less stable.
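The void energies discussed above are well captured by a surface-dominated scaling Ef(N) ∝ N^(2/3), and the fitted prefactor converts directly into an average surface energy for a spherical, unfaceted void. The sketch below illustrates that conversion under stated assumptions: the lattice parameter is a literature value, the formation energies are supplied in eV, and the fit is not the authors' actual fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

EV = 1.602176634e-19          # J per eV
A0 = 3.165e-10                # assumed BCC W lattice parameter (m)
OMEGA = A0**3 / 2.0           # atomic volume of BCC tungsten (m^3)

def void_model(N, a):
    """Surface-dominated scaling: E_f(N) = a * N^(2/3)."""
    return a * N ** (2.0 / 3.0)

def average_surface_energy(n_vacancies, e_formation_eV):
    """Fit E_f(N) = a*N^(2/3) to void formation energies (eV) and convert
    the prefactor into an average surface energy (J/m^2), assuming a
    spherical, unfaceted void of volume N*OMEGA."""
    a_fit, _ = curve_fit(void_model,
                         np.asarray(n_vacancies, dtype=float),
                         np.asarray(e_formation_eV, dtype=float))
    # Sphere of N vacancies: area = (4*pi)^(1/3) * (3*N*OMEGA)^(2/3)
    area_prefactor = (4.0 * np.pi) ** (1.0 / 3.0) * (3.0 * OMEGA) ** (2.0 / 3.0)
    gamma = a_fit[0] * EV / area_prefactor
    return a_fit[0], gamma
```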
The vacancies produced in high energy collision cascades of irradiated tungsten can form vacancy clusters or prismatic vacancy dislocation loops. Moreover, vacancy loops can easily transform into planar vacancy clusters. We investigated the formation energies of these three types of vacancy defects as a function of the number of vacancies using three embedded-atom method tungsten potentials. The most favorable defect type and the vacancy loop stability were determined. For very small sizes the planar vacancy cluster is more favorable than a vacancy loop, which is unstable. The void is the most stable vacancy defect up to quite large sizes, beyond which the vacancy dislocation loop is more favorable. We conclude that the vacancy dislocation loops are nevertheless metastable at low temperatures, as the transformation to voids would require high temperatures, in contrast to previous works, which found planar vacancy clusters to have lower energy than vacancy dislocation loops.
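The correspondence quoted in the text between void diameter and vacancy count (for example, a 1.5 nm void holding roughly 111 vacancies) can be checked by counting BCC lattice sites inside a sphere. A self-contained sketch, assuming a lattice parameter of 0.3165 nm and a sphere centred on a lattice site:

```python
import numpy as np

def bcc_sites_in_sphere(radius_nm, a0_nm=0.3165):
    """Number of BCC lattice sites inside a sphere of the given radius,
    i.e. the vacancy count of a simple unfaceted void of that size."""
    n = int(np.ceil(radius_nm / a0_nm)) + 1
    idx = np.arange(-n, n + 1)
    X, Y, Z = np.meshgrid(idx, idx, idx, indexing="ij")
    corners = np.stack([X, Y, Z], axis=-1).reshape(-1, 3).astype(float) * a0_nm
    centres = corners + 0.5 * a0_nm          # body-centre sublattice
    sites = np.vstack([corners, centres])
    return int((np.linalg.norm(sites, axis=1) <= radius_nm).sum())

# bcc_sites_in_sphere(0.75) returns a value close to the ~111 vacancies
# quoted above for a 1.5 nm diameter void.
```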
781
Legionella spp. isolation and quantification from greywater
There are several ISO methods for Legionella isolation from water. However, none of them is suitable for Legionella isolation from GW samples because GW is contaminated with a very high bacterial load. Thus, almost no data can be found regarding Legionella presence in GW. Here we describe a modification of the ISO 11731:1998 protocol for Legionella isolation from GW. Our modified protocol allows the isolation of Legionella on GVPC selective Legionella medium without the massive bacterial contamination that develops on the medium when ISO 11731:1998 is applied. Filter a 100 ml GW sample to remove coarse matter, using a 100 μm pore size cell strainer placed in one 50 ml tube. The 100 ml pre-filtered GW sample is filtered again through a 0.2 μm cellulose nitrate filter using a vacuum filtration system attached to a 2511 Dry Vacuum Pump. After filtration, the filter is placed into 10 ml phosphate buffered saline (PBS) and vortexed for 10 min. Each sample is then subjected to a combined acid–thermal treatment as follows: 1 ml of the sample is centrifuged at 6000 × g for 10 min. For the acid treatment, 0.5 ml of the supernatant is replaced with 0.5 ml of acid buffer. The sample is then vortexed and immediately subjected to thermal treatment for 30 min at 50 °C. Following the ISO 11731:1998 recommendations, two 0.5 ml sub-samples are plated on GVPC Legionella selective medium immediately after the thermal treatment. The plates are incubated at 37 °C. Presumptive Legionella colonies are counted after 7 and 15 days of incubation. The presumptive Legionella colonies are then identified using a Legionella latex test. This test allows separate identification of Legionella pneumophila serogroup 1 and other serogroups, and the detection of seven other Legionella species. We used the method described above to successfully isolate and quantify Legionella during a one-year GW monitoring campaign. The results of this study have already been published. Briefly, a total of 16 greywater samples were analyzed. Legionella was isolated from 81% of the samples, with a mean of 1.2 × 10⁵ cfu/l. Details about the efficiency and the limit of detection (LOD) of this method can also be found in the mentioned publication. This method is highly aggressive, so the recovery rates of Legionella were very low and the LOD established from this average recovery rate was 4.0 × 10³ cfu/l. Nevertheless, the results were consistent. It should be noted that this modified method is the only way to isolate Legionella from GW, as the current ISO protocols do not allow the isolation of this bacterium. This method is highly aggressive towards the sampled bacteria, including Legionella. For that reason, the LOD of the method is high and the efficiency of Legionella isolation is low. We recommend using this method only with problematic samples in which Legionella cannot be isolated using the methods described in the ISO protocols 11731:1998 and 11731-2:2004 due to massive contamination with other bacterial species.
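Converting plate counts from this protocol into cfu per litre of greywater only requires tracking how much of the original sample each plate represents. A minimal sketch, assuming one plausible reading of the volume bookkeeping above (100 ml of greywater concentrated into 10 ml of PBS, a two-fold dilution during the acid step, and 0.5 ml plated per plate); the exact factors should be checked against the protocol actually run, and recovery losses are not corrected for here.

```python
def cfu_per_litre(colony_counts,
                  sample_ml=100.0,       # greywater passed through the filter
                  resuspension_ml=10.0,  # PBS volume used to recover the filter
                  acid_dilution=2.0,     # 0.5 ml supernatant + 0.5 ml acid buffer
                  plated_ml=0.5):
    """Mean cfu/l of the original greywater from per-plate colony counts.

    Each plated aliquot represents
    plated_ml * (sample_ml / resuspension_ml) / acid_dilution
    millilitres of the original sample (2.5 ml with the defaults above).
    Recovery efficiency is not accounted for.
    """
    equivalent_ml = plated_ml * (sample_ml / resuspension_ml) / acid_dilution
    mean_count = sum(colony_counts) / len(colony_counts)
    return 1000.0 * mean_count / equivalent_ml

# Example: duplicate plates with 12 and 8 colonies give
# cfu_per_litre([12, 8]) == 4000.0 cfu/l.
```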
Legionella, an opportunistic human pathogen whose natural environment is water, is transmitted to humans through inhalation of contaminated aerosols. Legionella has been isolated from a high diversity of water types. Due to its importance as a pathogen, two ISO protocols have been developed for its monitoring. However, these two protocols are not suitable for analyzing Legionella in greywater (GW). GW is domestic wastewater excluding the inputs from toilets and kitchens. It can serve as an alternative water source, mainly for toilet flushing and garden irrigation; both uses produce aerosols that pose a risk of Legionella infection. Hence, before reuse, GW has to be treated and its quality needs to be monitored. The difficulty of Legionella isolation from GW lies in the very high load of contaminant bacteria. Here we describe a modification of the ISO protocol 11731:1998 that enables the isolation and quantification of Legionella from GW samples. The following modifications were made: to enable isolation of Legionella from greywater, a pre-filtration step that removes coarse matter is recommended; Legionella can then be isolated after a combined acid–thermal treatment that eliminates the high load of contaminant bacteria in the sample.
782
A long photoperiod relaxes energy management in Arabidopsis leaf six
Plants as light-dependent, autotrophic organisms have adapted to the regular light–dark cycles resulting from the rotation of the earth.The length of the light period, or photoperiod, depends on the latitude and time of the year.Plants must adjust to changes in day-length to optimize growth in varying photoperiod lengths.Although this requires tight control of physiological and molecular processes, the underlying regulatory mechanisms are still poorly understood.It is now well established that the circadian clock synchronizes metabolism with the changing photoperiods .Photoperiod length affects net daily photosynthesis and starch metabolism and adjusts seasonal growth .However, the molecular integration of photoperiod, clock and metabolic control during leaf development remains a challenging problem.Arabidopsis is a facultative long-day plant whose flowering is controlled by the photoperiod pathway in concert with molecular, hormonal and environmental signals .Interactions between the circadian clock and photoperiod length during vegetative growth affect leaf number and size, as well as their morphological and cellular properties .Plants in which the vegetative to floral growth transition is accelerated by increasing day-length or repression of regulatory genes have fewer leaves, increased single leaf areas, and a higher epidermal cell number in individual leaves compared to late flowering plants .While these adaptations to photoperiod are well documented at the phenotypic level, little is known about how concerted regulation of photoperiod-dependent gene expression and protein levels is achieved during diurnal cycles and at different stages of leaf development.We therefore asked how phenotypic changes are related to molecular profiles in a single leaf of Arabidopsis plants growing in a long-day or short-day condition.These two photoperiods cause consistent phenotypic changes in the number and morphology of successive leaves on the rosette .Because size and shape of successive leaves vary during Arabidopsis development we decided to focus the analysis on leaf number 6, which is the first adult leaf of the Arabidopsis rosette in short-day conditions.Leaf 6 was used previously to generate molecular data for Arabidopsis grown in SD .To gain insights into the molecular pattern underlying the phenotypic changes between photoperiods, we therefore analyzed transcript and protein levels of leaf number 6 grown in LD at four developmental stages, both at the end of the day and end of the night.We then compared the data with the corresponding previously established molecular data for leaf 6 of Arabidopsis grown in SD either under optimal watering or a 40% water deficit .Integration and comparative analyses of the quantitative proteomics and transcriptomics data revealed that fewer genes have significant diurnal transcript level fluctuations in LD than SD.Transcripts and proteins with significantly different levels in SD and LD validate the hypothesis that a short photoperiod requires a tight energy management, which is relaxed in a long photoperiod.Arabidopsis thaliana accession Col-4 plants were grown in a growth chamber equipped with the PHENOPSIS automaton as described previously with the exception that day length in the growth chamber was fixed at 16 h.In brief, seeds were sown in pots filled with a mixture of a loamy soil and organic compost at a soil water content of 0.3 g water/g dry soil and just before sowing 10 ml of a modified one-tenth-strength Hoagland solution were added to the pot 
surface.After 2 days in the dark, day length in the growth chamber was adjusted to 16 h at ∼220 μmol/m2/s incident light intensity at the canopy.Plants were grown at an air temperature of 21.1 °C during the light period and 20.5 °C during the dark period with constant 70% humidity.During the germination phase water was sprayed on the soil to maintain sufficient humidity at the surface.Beginning at plant germination, each post was weighed twice a day to calculate the soil water content, which was adjusted to 0.4 g water/g dry soil by the addition of appropriate volumes of nutrient solution.The experiment was repeated independently three times and each leaf 6 sample was prepared by bulking material from numerous plants.The frozen plant material was sent to the MPI in Golm, where it was ground and aliquotted using a cryogenic grinder.Growth-related traits of leaf 6 at single leaf and cellular scales were measured as described .Five rosettes were harvested and dissected every 2–3 days during each experiment.Leaf 6 area was measured after imaging with a binocular magnifying glass for leaves smaller than 2 mm2 or with a scanner for larger ones.A negative film of the adaxial epidermis of the same leaf 6 as the one measured in surface was obtained after evaporation of a varnish spread on its surface.These imprints were analyzed using a microscope supported by the image-analysis software Optimas.Mean epidermal cell density was estimated by counting the number of epidermal cells in two zones of each leaf.Total epidermal cell number in the leaf was estimated from epidermal cell density and leaf area.Mean epidermal cell area was measured from 25 epidermal cells in two zones of each leaf.For rosette growth measurements, at each date of harvest all leaves with an area larger than 2 mm2 from five rosettes were imaged with a scanner.The number of leaves was counted and total rosette area was calculated as the sum of each individual leaf area measured on the scan with the Image J software.Starch, glucose, fructose and sucrose content were determined by enzymatic assays in ethanol extracts of 20 mg frozen plant material as described in Cross et al. .Chemicals were purchased as in Gibon et al. 
.Assays were performed in 96 well microplates using a Janus pipetting robot.Absorbances were determined using a Synergy microplate reader.For all the assays, two technical replicates were determined per biological replicate.Gene expression in leaves of the four developmental stages and at the two diurnal time points in the long day optimal water experiment and in a reference mixed rosette sample was profiled as described previously using AGRONOMICS1 microarrays and analyzed using a TAIR10 CDF file .All log2-transformed sample/reference ratios without p-value filtering were used in the analyses.Microarray raw and processed data are available via ArrayExpress.Proteins in the same samples were quantified using the 8-plex iTRAQ isobaric tagging reagent as described in detail previously according to the labelling scheme in Supporting Table S5.The resulting spectra were searched against the TAIR10 protein database with concatenated decoy database and supplemented with common contaminants with Mascot.The peptide spectrum assignments were filtered for peptide unambiguity in the pep2pro database .Accepting only unambiguous peptides with an ion score greater than 24 and an expect value smaller than 0.05 resulted in 70 979 assigned spectra at a spectrum false discovery rate of 0.07%.Quantitative information for all reporter ions was available in 50 947 of these spectra leading to the quantification of 1788 proteins based on 6178 distinct peptides.The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD000908 and DOI 10.6019/PXD000908.The data are also available in the pep2pro database at www.pep2pro.ethz.ch.All proteome and transcriptome abundance measures for the LD experiment were integrated within the existing AGRON-OMICS database .A searchable web-interface containing these integrated data sets is available at https://www.agronomics.ethz.ch/,The statistical analytical methods were performed as described previously subjecting the log2-transformed sample/reference ratios to an analysis of variance treating stage and day-time as main effects followed by correction with Benjamini-Hochberg .Transcripts and proteins with a pGlobal < 0.05 and a maximum fold-change > log2 were considered to change significantly.For a significant difference between EOD and EON we additionally required pND < 0.05.The comparison of the protein and transcript levels between the LD and the two short day experiments reported previously was performed with a paired t-test comparing the values for the 8 time-points between two experiments corrected with Benjamini-Hochberg taking into account all non-plastid encoded transcripts without p-value filtering.All statistical analyses were performed using R .Assignment of protein and transcript functional categories was based on the TAIR GO categories from aspect biological process as described previously .When plotted against time from leaf initiation to full expansion, leaf 6 area increased more rapidly and reached its final size earlier and was 50% larger in LD than in SD.The dynamics of cell production and expansion in the upper epidermis of leaf 6 indicates that both cell number and cell size increased more rapidly and reached their final values earlier in LD than in SD.Thus, photoperiod has a pronounced effect on the timing of leaf development because cell division, cell expansion and leaf expansion were faster in LD than SD and ceased earlier.Similar to the faster growth of leaf 6 
the whole rosette leaf area and leaf number initially increased faster in LD than SD.However, later in development and despite the increased individual leaf size at the fully expanded stage, the whole rosette area was smaller in LD than in SD.This was the result of a smaller final number of rosette leaves that were produced.Because leaf 6 growth was accelerated in the long photoperiod and stages 2–4 of leaf development were reached earlier than in the short photoperiod, biological samples of leaf 6 were harvested at four development stages corresponding to transitions associated with well-defined cellular processes .The stage 1 leaf has maximum relative area and thickness expansion rates, stage 2 and 3 leaves have maximum and decreasing absolute area and thickness expansion rates, respectively, and in the stage 4 leaf expansion ends .Sampling at defined stages allows a robust leaf scale comparison of photoperiod effects on leaf development despite different growth rates in different experiments.We found that stage 1 corresponds to the phase of rapid cell division around day 7 or 8 after leaf initiation in both photoperiods.Most of cell division had ceased at stage 2, which was around day 11 after leaf initiation in LD and day 14 in SD.Stage 3 is the phase of decreasing cell expansion rate around 14 days after leaf initiation in LD and day 21 in SD.At stage 4 cell and leaf expansion were nearly complete, corresponding to around day 21 after leaf 6 initiation in LD and day 30 in SD.Because photoperiod length affected both the progression of individual leaf stages and whole plant development, the four leaf 6 developmental stages did not have the same status with regard to whole rosette development in LD and SD plants.Leaf 6 expansion in SD was complete before the final number of rosette leaves was reached, whereas in LD more than 50% of leaf 6 expansion occurred after bolting.The floral transition at the shoot apex occurs several days before bolting, typically at 10–12 days after germination in LD .Leaf 6 was initiated at 10 days after sowing, and therefore almost all its growth occurred after the floral transition at the shoot apex.At stage 1 in LD, leaf 6 area represented approximately 5% of the whole rosette area.This proportion increased to 12–15 % during stages 2 and 3 and at stage 4 declined to around 10%.In contrast, the proportion of leaf 6 area compared to whole rosette area at stage 4 was less than 5% in SD, confirming that leaf 6 reaches its smaller final size in SD before whole rosette expansion was complete.To quantitate protein and transcript levels during the growth of a single Arabidopsis leaf we harvested leaf 6 from plants grown in LD at the end of the day and end of the night at the four successive stages of development defined above.Proteome and transcriptome profiling data, as well as the amounts of starch and soluble sugars were obtained from pooled samples of leaf 6 of three independent biological experiments.We then assessed how the molecular profiles in single leaves at precise stages of development from plants grown in LD differ from leaf 6 grown in SD by comparing them to the SD optimal watering and 40% water deficit experiments reported previously .Starch is the main carbon reserve for energy requirements during the night in Arabidopsis and represented about 80–93% of the carbohydrates measured at EOD in LD and SD.In LD-grown plants, the amount of starch at EOD was similar at all four development stages.Although starch also decreased during the night in LD plants, 
considerably larger amounts of starch remained at EON, especially at stages 2, 3 and 4.In SD a different pattern was found.The highest amount of starch at EOD was found for stage 1, with lower levels in stages 2, 3 and, especially, stage 4.Further, in SD, most of the starch that accumulated at EOD was consumed during the night at all developmental stages.In LD, the levels of glucose, sucrose and fructose were similar at EOD and EON for all developmental stages, with the exception of stages 1 and 2 for sucrose, where the levels were higher at EOD than EON.Glucose levels in LD were similar at all developmental stages, but fructose and sucrose were highest for stage 1.In contrast, major differences were found in SD.First, glucose, fructose and sucrose levels in SD were consistently higher at EOD than EON, as previously reported for full rosettes .Second, the highest levels of glucose, fructose and to some extent sucrose were determined for stage 4 at EOD.Third, sucrose levels for all developmental stages and harvest times were consistently lower in SD than LD, as previously reported for full rosettes .Together, the data reveal that in Arabidopsis photoperiod length has a major influence on the metabolic status of the leaf during both development and the diurnal cycle.To account for the observed phenotypic and metabolic differences between SD and LD we analyzed quantitative protein and transcript data in detail.We first performed a Principal Component Analysis to estimate the main factors that determine changes in transcript and protein levels in LD.The main contribution to the variance in the transcript data in the first two principal components is the difference between stage 1 and the later stages 2–4, which accounted for over 60% of the total variance.The EOD and EON samples are separated only in the third principal component, which accounted for about 8% of the total variance.This is in contrast to a PCA of the transcripts in SD conditions, where the time of harvest was the main contribution to the variation in the data in the first and second principal components .Assessing the difference in transcript levels between EON and EON revealed that in LD only 21.2% of all transcripts showed significant diurnal transcript level fluctuations, in contrast to 50.3% in the SOW and 43.1% in the SWD conditions.Thus, in addition to metabolite changes, the LD photoperiod also has a considerable impact on diurnal mRNA expression patterns.For the protein data, the difference between the developmental stages contributes most to the variation in the data, as observed previously in SD .Transcripts that changed similarly between EOD and EON both in LD and SD included those encoding the central clock proteins LATE ELONGATED HYPOCOTYL 1, CIRCADIAN CLOCK ASSOCIATED 1 and TIMING OF CAB EXPRESSION 1.However, as expected from the results of the PCA analysis, many more transcripts showed a significant change between EOD and EON in SD than in LD.We defined transcripts to change only in SD when they had significantly different levels between EOD and EON in SOW and SWD, but not in LD, and transcripts to change only in LD when they had significantly different levels between EOD and EON in LD, but not in SOW or SWD.To further examine the differences in the diurnal fluctuations between SD and LD we used EON as reference point corresponding to Zeitgeber Time–1 in both experiments.We then assessed which transcripts were significantly higher or lower at the respective EOD compared to the reference point only in SD, or only 
in LD.For all transcripts with differential diurnal fluctuations between SD and LD we examined whether they scored rhythmic by COSOPT in the free-running study conducted by Edwards et al. .For those that were rhythmic we plotted the Zeitgeber Time peaks determined in Edwards et al. .The ZT peaks of the rhythmic transcripts that are lower at EOD only in SD and higher at EOD only in LD peak in the second half of the subjective night around ZT 43-44.Transcripts that are higher at EOD only in SD peak around ZT 33–37 corresponding to the subjective dusk, while those that are lower at EOD only in LD peak in the subjective afternoon around ZT 31–32.While the time of harvest at the respective EOD in SD and LD photoperiods can affect the relative abundance difference between EOD and EON for transcripts peaking during the night, this is not the case for transcripts with ZT peaks in the afternoon or early night.The different pattern of these transcripts therefore suggests a shift in their diurnal expression.The functional categorisation against GO Biological Process of the transcripts higher at EON only in LD gave as the top category response to chitin.The list of 23 transcripts that account for this over-representation contains 14 transcription factors according to the AGRIS website , and four of them are scored rhythmic with ZT peaks in the late afternoon.Together, this suggests that the expression patterns of specific transcripts, especially for transcripts linked to biotic stress response, are changed in response to light and the expected length of the night.The differences in the diurnal transcript accumulation between SD and LD prompted us to further examine the transcripts that are differentially expressed between LD and SD.We considered those transcripts to change in a photoperiod-specific manner that were significantly different in the LD experiment compared to both the SOW and SWD experiments.A total of 3469 transcripts fulfilled these criteria with 1954 being higher in LD and 1515 higher in SD.As plants grow faster in LD than SOW and SWD conditions, it can be expected that some of the differences between the two photoperiods will be due to their different growth behaviours.Comparing the two SD experiments we had already found that the transcript levels of proteins assigned to GO category defence response to fungus and those supporting fast growth, such as proteins involved in ribosome biogenesis and translation, are reduced in leaf 6 by water deficit .To distinguish between effects caused by different growth rates and those specific for long day conditions, we defined sets of growth-specific transcripts based on the gradual increase in growth rate from SWD to SOW and the LD experiment.We hypothesised that transcripts, which accumulate to different levels between SD and LD and also show a significant difference in accumulation between the SWD and SOW conditions, are likely to be related to growth.Applying these criteria we found 134 transcripts that are most highly expressed in LD and 38 transcripts that are highest in SWD conditions.Transcripts that are highest in LD and therefore might be associated with faster growth are over-represented in various response pathways, with response to chitin, defence response to fungus and response to mechanical stimulus as the top three categories.The GO processes that are over-represented in the transcripts highest in the SWD plants are nitrile and proline biosynthetic process, as well as photosynthesis, consistent with a tight energy management in a 
short photoperiod and reduced water condition.Transcripts that were significantly higher in SD or LD were categorised using MapMan and TAIR10 mapping.Over- and under-representation was assessed separately for the transcripts higher in SD and LD using a Fisher’s exact test and by comparing the number of measured transcripts with the number that would be expected by chance.Fig. 6 shows the MapMan bins with p-value < 0.01 and the AGIs of the genes in these categories are listed in Supporting Table S4.Among the genes for transcripts that have different levels between SD and LD we found fewer than expected that encode proteins for translation.This is in agreement with the finding that ribosome abundance does not change between SD and LD grown plants .However, genes involved in RNA processing are over-represented in SD, while genes for small nucleolar RNAs are over-represented in LD because 14 of 45 snoRNAs represented on the tiling array are significantly more highly expressed in LD.snoRNAs associate with proteins to form functional small nucleolar ribonucleoprotein complexes, which are involved in the processing of precursor rRNAs in the nucleolus requiring exo- and endonucleolytic cleavages as well as modifications.These modifications are thought to influence ribosome function .The differential expression of snoRNAs in SD and LD conditions might reflect a specific but currently unknown mechanism of adjusting translation to the prevalent photoperiod conditions.Transcripts that are higher in LD are overrepresented in bin secondary metabolism.flavonoids.Flavonoids are plant secondary metabolites with broad physiological functions .Of the genes in this category, five encode enzymes in the KEGG pathway flavonoid biosynthesis, namely TRANSPARENT TESTA 4, TT5, F3H/TT6, TT7 and FLAVONOL SYNTHASE.These enzymes are required for the biosynthesis of the three major flavonols quercetin, kaempferol and myricetin, although the enzyme catalysing the last step of myricetin production has not yet been identified in Arabidopsis.The transcript levels for these enzymes are all increased in LD as compared to SD but generally decrease during leaf 6 development.TT5 and TT6/F3H proteins were detected in LD.TT5 protein levels decrease significantly during development in LD but the protein was detected in all three experimental conditions.Transcript levels of flavonoid pathway genes were reported to be up-regulated in leaves of sweet potato grown in LD that have high concentrations of kaempferol .Kaempferol functions as an antioxidant in chloroplasts .Higher transcript levels for the enzymes in the flavonol biosynthesis pathway in LD therefore correlate well with the over-representation of the bin redox in LD.The transcript levels for enzymes in flavonoid biosynthesis pathways involved in response to excess UV light or high light stress, such as anthocyanin biosynthesis, are not higher in LD as compared to SD.This confirms that under our experimental conditions the LD photoperiod is not triggering a stress response that would require enhanced photoprotection.Plant hormones coordinate developmental processes and growth through converging pathways .We therefore expected that several of the genes whose transcripts accumulate to different levels between SD and LD encode proteins involved in hormone metabolism and signalling.The bin hormone metabolism.ethylene is over-represented in LD and the list of genes annotated to this bin that have increased transcript levels in LD includes 10 genes encoding different 
ETHYLENE-RESPONSIVE ELEMENT BINDING FACTOR proteins.ERFs function in defence response and regulate chitin signalling .Two of these ERFs, DREB AND EAR MOTIF PROTEIN 1 and ERF6, belong to the transcription factors that have higher transcript levels at EON only in LD and are assigned to response to chitin.Ethylene biosynthesis is restricted by the photoreceptor phytochrome B .PHYB transcript levels are decreased in LD as compared to SD, which correlates with increased ethylene biosynthesis in LD.In addition to PHYB, other genes encoding phytochromes such as PHYA and genes encoding phytochrome kinase substrates and phototropic responsive family proteins are more highly expressed in SD, resulting in the over-representation of bin signalling.light.Photoperiod can be integrated with growth and time to flowering through regulation of the brassinosteroid hormone pathway .It was therefore unexpected that bin hormone metabolism.brassinosteroid was over-represented in SD, as plants in SD grow more slowly and flower later.However, the mRNAs with higher levels in SD assigned to this bin also include the mRNA for cytochrome P450 CYP734A1.CYP734A1 converts active brassinosteroids into their inactive forms and therefore acts as a negative regulator of brassinosteroid signalling.Thus, the over-representation of the bin hormone metabolism.brassinosteroid does not imply increased brassinosteroid signalling.In fact, the only brassinosteroid signalling-related mRNA with higher levels in LD encodes BES1/BZR1-LIKE PROTEIN 3, which is a transcription factor that is homologous to BES1/BZR1, a positive regulator of brassinosteroid signalling .Transcripts that are significantly higher in SD than LD encode twelve members of the monosaccharide transporter gene family and the SUCROSE-PROTON SYMPORTER 9.Accordingly, the bin sugar.transport is overrepresented in SD.The members of the MST gene family are classified into seven distinct sub-families and have roles in both long-distance sugar partitioning and sub-cellular sugar distribution .POLYOL/MONOSACCHARIDE TRANSPORTER 2 and SUGAR TRANSPORTER 1 are located in the plasma membrane and were suggested to import monosaccharides into guard cells during the night and function in osmoregulation during the day .The MST gene family members involved in sub-cellular sugar distribution include the plastid-localised PLASTIDIC GLC TRANSLOCATOR, which contributes to the export of the main starch degradation products maltose and glucose from chloroplasts , and six proteins encoded by the AtERD6-like gene sub-family that are located in the vacuole membrane.AtERD6 homologs are thought to export sugars from the vacuole during conditions when re-allocation of carbohydrates is important, including senescence, wounding, pathogen attack, C/N starvation and diurnal changes in transient storage of sugars in the vacuole .The increased transcript expression of genes for various sugar transporters in SD is consistent with the different amount and diurnal turnover of sugar levels in SD as compared to LD and indicates that long-distance and sub-cellular sugar partitioning is increased in shorter illumination periods.The bin PS.lightreaction is significantly different between SD and LD and overrepresented in SD.Most of the transcripts assigned to this bin that are increased in SD encode photosystem I or II proteins.Some of their genes seem to be linked to reduced growth, nevertheless, the SD compared to the LD photoperiod apparently increases photosystem abundance.This likely increases the rate of 
photosynthesis to use the light of the shorter illumination period most efficiently.We next examined the proteins that are differentially expressed in the LD and SOW plants.A total of 24 proteins fulfilled the strict cut-off criteria that were also applied to the transcript data.Of the 13 proteins that were higher in LD, 5 were also increased in LD compared to SWD, and of the 11 that were lower in LD, 4 were also significantly decreased in LD compared to SWD.These proteins therefore show a significant difference between LD and both SD conditions.The list of proteins that are more abundant in LD than in SD includes PATHOGENESIS-RELATED PROTEIN 5, PR2 and ribosomal L28e family protein.This is consistent with our previous findings that most of the proteins that accumulated to higher levels in the faster growing SOW leaves than in the SWD leaves mainly comprised proteins involved in translation and that transcripts with higher levels in the SOW leaves are over-represented for GO categories ribosome biogenesis, translation and defence response to fungus .Furthermore, MapMan bin stress.biotic was over-represented for transcripts that have higher levels in LD.The list of proteins that accumulate to significantly higher levels in LD also includes PHOSPHOMANNOMUTASE, which is involved in the synthesis of GDP-mannose and is therefore required for ascorbic acid biosynthesis and N-glycosylation.Interestingly, the pmm mutant has a temperature-sensitive phenotype that was attributed to a deficiency in protein glycosylation .The different abundance levels of PMM of in LD and SD might therefore suggest differential post-translational modifications in LD and SD.GERANYLGERANYL PYROPHOSPHATE SYNTHASE 1, which is required for the biosynthesis of geranylgeranyl diphosphate , also accumulates to higher levels in leaf 6 grown in LD as compared to SOW conditions.In Arabidopsis, the chloroplast-localized GGPPS11 is the GGPPS isoform with the highest transcript level in rosette leaves and mainly responsible for the biosynthesis of GGPP-derived isoprenoid metabolites including chlorophyll and carotenoids .The higher protein level of GGPPS11 in LD than in SD therefore suggests the increased production of these metabolites in LD.The proteins that are significantly more abundant in SD than in LD are PLASTOCYANIN 1 and the three cold response proteins COR15A, COR15B and COR6.6.Although plastocyanins have been implicated in photosynthetic electron transport, their concentration is not limiting for electron flow in optimal growth conditions with 11 h light .The increased PETE1 protein level in SD might therefore indicate a specific role for this protein in short photoperiods.The COR proteins are also significantly more abundant in leaf 6 grown in SWD as compared to SOW conditions and have been implicated in the adaptation response to the continuous 40% water deficit condition .However, the LD data suggest that the accumulation of the three COR proteins may also be related to growth.We did not classify transcripts for these proteins as photoperiod-specific because they are significantly different between SWD and LD but not between SOW and LD.A crosstalk between cold response and flowering time regulation has been proposed previously, with SOC1 functioning as a negative regulator of CBFs that bind to the COR promoters .Here, the situation is different, because SOC1 and CBF1 transcript levels are higher in LD as compared to SD and the COR transcripts show a different behaviour.Therefore, the levels of the COR proteins seem 
to be regulated differently and related to the growth rate of the leaves.LD photoperiods that are characteristic of spring and early summer induce flowering in LD plants.The core photoperiodic flowering pathway comprises GIGANTEA, FLOWERING LOCUS T and CONSTANS .Circadian clock regulation of CO transcript level and protein stability is key to monitoring changes in photoperiod length, and the biphasic regulation of CO ensures that flowering is induced in LD .The mRNA levels for the CO target FT were higher in LD compared to SD and increased during development.Downstream of FT, the MADS-box transcription factors AGAMOUS-LIKE 20/SUPPRESSOR OF CONSTANS1, AGL24, FRUITFULL and SHORT VEGETATIVE PHASE function as floral integrator genes during the transition of the shoot apical meristem to the floral meristem .Notably, AGL24 and FUL transcript levels were significantly higher in LD also in leaf 6.SOC1 transcript levels were only higher in LD at early leaf 6 developmental stages, and SVP transcript levels were not significantly different between LD and SD.In contrast, the mRNA levels for FLOWERING LOCUS C, which is a key repressor of flowering , were significantly lower in LD as compared to SD.FLC and SVP form heterodimers during vegetative growth to repress transcription of FT in leaves and SOC1 in the SAM .The reduced levels of FLC transcripts in LD together with the increased levels of FT transcripts are therefore consistent with an early release of flowering repression in LD.SOC1 belongs to the group of genes that have a diurnal expression peak in the afternoon, with SOC1 transcript levels being higher at EOD in SD, but higher at EON in LD.Interestingly, this pattern was also found for transcript levels of the potential natural antisense RNA gene AT1G69572, whose genomic region overlaps with that of CDF5.According to data reported by Bläsing et al. 
, SOC1 transcript levels were highest in the afternoon in a 12 h/12 h photoperiod. When compared to free-running conditions of continuous white light, SOC1 transcript levels were highest at ZT8 during the first day but no subsequent circadian oscillation was detectable. SOC1 therefore belongs to the group of genes whose transcript levels are not regulated by the circadian clock but directly by photoperiod. The glycine-rich RNA-binding protein AtGRP7 has an important role in flowering. Expression of AtGRP7 is directly controlled by CCA1 and LHY, and its transcript levels oscillate with a peak in the evening. AtGRP7 regulates the amplitude of the circadian oscillation of its mRNA through alternative splicing. Arabidopsis plants that constitutively over-express AtGRP7 produce a short-lived mRNA splice form, which dampens AtGRP7 transcript oscillations and influences the accumulation of other transcripts including AtGRP8. As a result, AtGRP7 promotes flowering, with a more pronounced effect in SD than in LD. In LD we indeed observed a dampening of both AtGRP7 and AtGRP8 diurnal transcript level changes at all leaf 6 development stages, but the transcript levels of AtGRP7 did not change significantly during development. In contrast, AtGRP7 protein levels were significantly higher in the LD experiment as compared to SOW, did not display diurnal level changes, and decreased during development both in SD and LD. The higher AtGRP7 protein levels in LD as compared to SD provide an explanation for earlier observations that the effect of AtGRP7 overexpression on time to flowering is stronger in SD than in LD. In addition to photoperiod, which may act at multiple points in the circadian clock, the rhythmic, diurnal endogenous sugar signals can entrain circadian rhythms in Arabidopsis. Furthermore, in an 18 h photoperiod considerable amounts of starch remain at EON while the rate of photosynthesis is decreased compared to 4-, 6-, 8-, and 12-h photoperiods. Consequently, in long photoperiods growth is no longer limited by the availability of carbon and the carbon conversion efficiency decreases. By systematically investigating the molecular changes in a single leaf that are involved in the adaptation to different photoperiods in highly controlled conditions we demonstrated that fewer transcripts display significant changes between EOD and EON in LD than in SD. We previously discussed that different mRNA levels at specific times during the diurnal cycle might be required for the time-dependent regulation of the cellular energy status in the prevailing environmental conditions. If diurnal transcript level fluctuations are indeed required for efficient resource allocation, this might explain why plants grown in long days do not depend on a strict diurnal regulation of transcription to tightly economise their energy budget. We also established that transcripts regulated by photoperiod belong to specific functional categories that are important for adaptation to the prevailing photoperiod condition. In contrast, the identified proteins that differ significantly between photoperiods are mainly related to the different growth rates of leaf 6. Together, the changes in the complex molecular pattern underlying leaf growth in different photoperiods are tightly linked to the available energy.
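The significance filtering described in the methods (a main-effects analysis of the log2 sample/reference ratios over stage and day-time, followed by Benjamini-Hochberg correction) can be sketched as follows. The analyses in the study were run in R; this Python version with hypothetical column names is only meant to illustrate the structure of the test, not to reproduce the exact pipeline.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

def stage_daytime_anova(df):
    """Per-transcript main-effects model of log2 ratios.

    df columns (hypothetical names): 'transcript', 'stage' (1-4),
    'daytime' ('EOD'/'EON'), 'ratio' (log2 sample/reference).
    Returns the BH-adjusted global p-value for each transcript.
    """
    ids, pvals = [], []
    for transcript, sub in df.groupby("transcript"):
        fit = smf.ols("ratio ~ C(stage) + C(daytime)", data=sub).fit()
        ids.append(transcript)
        pvals.append(fit.f_pvalue)          # global F-test over both main effects
    adjusted = multipletests(pvals, method="fdr_bh")[1]
    return pd.DataFrame({"transcript": ids, "p_global_bh": adjusted})

# Transcripts with p_global_bh < 0.05 (and a sufficient fold-change)
# would then be carried forward, as in the filtering described above.
```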
Plants adapt to the prevailing photoperiod by adjusting growth and flowering to the availability of energy. To understand the molecular changes involved in adaptation to a long-day condition we comprehensively profiled leaf six at the end of the day and the end of the night at four developmental stages in Arabidopsis thaliana plants grown in a 16 h photoperiod, and compared the profiles to those from leaf 6 of plants grown in an 8 h photoperiod. When Arabidopsis is grown in a long-day photoperiod individual leaf growth is accelerated but whole plant leaf area is decreased because the total number of rosette leaves is restricted by the rapid transition to flowering. Carbohydrate measurements in long- and short-day photoperiods revealed that a long photoperiod decreases the extent of diurnal turnover of carbon reserves at all leaf stages. At the transcript level we found that the long-day condition shows significantly fewer diurnal transcript level changes than the short-day condition, and that some transcripts shift their diurnal expression pattern. Functional categorisation of the transcripts with significantly different levels in short- and long-day conditions revealed photoperiod-dependent differences in RNA processing and in light and hormone signalling, increased abundance of transcripts for biotic stress response and flavonoid metabolism in long photoperiods, and for photosynthesis and sugar transport in short photoperiods. Furthermore, we found transcript level changes consistent with an early release of flowering repression in the long-day condition. Differences in protein levels between long and short photoperiods mainly reflect an adjustment to the faster growth in long photoperiods. In summary, the observed differences in the molecular profiles of leaf six grown in long- and short-day photoperiods reveal regulatory changes that allow plants to adjust their metabolism to the available light. The data also suggest that energy management is fundamentally different in the two photoperiods as a consequence of photoperiod-dependent energy constraints.
783
Modality selective roles of pro-nociceptive spinal 5-HT2A and 5-HT3 receptors in normal and neuropathic states
Brainstem nuclei and higher brain centres can exert powerful modulation of nociceptive processing at the spinal level.This bi-directional control serves to amplify or suppress sensory transmission depending on context, expectation and emotional state.This is elegantly demonstrated during placebo analgesia, which is in part dependent on descending opioidergic pathways.In addition, a recently identified bulbospinal projection is implicated in acute stress-induced hypoalgesia and chronic stress-induced hypersensitivity.Descending modulation is largely orchestrated via the periaqueductal grey, locus coeruleus and rostral ventromedial medulla, although cortical regions such as the cingulate can exert direct facilitatory influences on spinal excitability, or indirectly via cortical-sub-cortical networks engaging descending brainstem pathways.It is clear parallel inhibitory and excitatory pathways originating from the RVM exist.Neurones within the RVM display distinct firing patterns in response to noxious somatic stimulation; quiescent ON-cells begin firing and are considered to mediate descending facilitation, whereas tonically active OFF-cells abruptly cease firing and are considered to exert inhibitory influences, and a proportion of these sub-populations appear to be serotonergic.Numerous lines of evidence indicate facilitatory influences predominate.Selective optogenetic activation of medullary serotonergic neurones decreases nociceptive response thresholds.Lidocaine block of the RVM or depletion of spinal 5-HT decreases spinal neuronal excitability consistent with tonic facilitatory activity in normal states.The ablation of NK1+ projection neurones in the dorsal horn with a saporin-substance P conjugate also suppresses deep dorsal horn neuronal excitability, revealing the parabrachial-RVM pathway as the efferent arm of a spino-bulbo-spinal circuit acting as a positive feedback loop facilitating spinal neuronal responses during noxious stimulation.Neuropathy and chronic pain states can be associated with increased descending facilitation; this time-dependent change in enhanced excitatory drive, and the failure to recruit inhibitory pathways, promotes the transition from acute to chronic pain states and sustains persistent long-term pain.However, the precise roles of spinal 5-HTRs in different states have been difficult to characterise due to their complex dual pro- and anti-nociceptive functions, and the selectivity of available antagonists.However, we and others have reported key roles of the 5-HT3R in descending facilitations in a number of pain models.In this study, we examine whether spinal 5-HT2A and 5-HT3 receptors have intensity-dependent and modality-selective roles in modulating ascending sensory output, and how these functions are altered in a neuropathic state.We blocked spinal 5-HT2ARs with ketanserin and 5-HT3Rs with ondansetron, and determined the effects on sensory neuronal coding in the ventral posterolateral thalamus.The STT-VP-S1/2 pathway is a key sensory-discriminative relay, and wide dynamic range neurones in the rat VPL exhibit intensity-dependent coding across sensory modalities."Spinal WDR neurones code sensory inputs in a similar manner to human psychophysics, and can provide insight into sensory processing in normal and sensitised states in rodent models.Furthermore, these neuronal characterisations permit study of drug effects on stimulus intensities and modalities not amenable to behavioural testing in animals.Sham or spinal nerve ligated male Sprague-Dawley rats 
were used for electrophysiological experiments.Animals were group housed on a conventional 12 h: 12 h light-dark cycle; food and water were available ad libitum.Temperature and humidity of holding rooms were closely regulated.All procedures described here were approved by the UK Home Office, adhered to the Animals (Scientific Procedures) Act 1986, and were designed in accordance with ethics guidelines outlined by the International Association for the Study of Pain.SNL surgery was performed as previously described.Rats were maintained under 2% v/v isoflurane anaesthesia delivered in a 3:2 ratio of nitrous oxide and oxygen.Under aseptic conditions, a paraspinal incision was made and the tail muscle excised.Part of the L5 transverse process was removed to expose the left L5 and L6 spinal nerves, which were then isolated with a glass nerve hook and ligated with a non-absorbable 6-0 braided silk thread proximal to the formation of the sciatic nerve.The surrounding skin and muscle were closed with absorbable 3-0 sutures.Sham surgery was performed in an identical manner omitting the nerve hook/ligation step.All rats groomed normally and gained weight in the following days post-surgery.Establishment of the model was confirmed by determining mechanical withdrawal thresholds as previously described.Electrophysiology was performed as previously described.Rats were initially anaesthetised with 3.5% v/v isoflurane delivered in a 3:2 ratio of nitrous oxide and oxygen.Once rats were areflexic, a tracheotomy was performed and they were subsequently maintained on 1.5% v/v isoflurane for the remainder of the experiment.Rats were secured in a stereotaxic frame, and after the skull was exposed, co-ordinates for the right ventral posterolateral thalamus were calculated in relation to bregma.A small craniotomy was performed with a high-speed surgical micro-drill.The muscle overlying the lumbar vertebrae was removed, a partial laminectomy was performed to expose the L4-L6 lumbar region, and the overlying dura was removed.Once haemostasis was achieved, the surrounding muscle was coated in petroleum jelly to form a hydrophobic barrier to contain the drug.Extracellular recordings were made from VPL thalamic neurones with receptive fields on the glabrous skin of the toes of the left hind paw using 127 μm diameter 2 MΩ parylene-coated tungsten electrodes.Searching involved light tapping of the receptive field.Neurones in the VPL were classified as WDR on the basis of obtaining neuronal responses to dynamic brushing, noxious punctate mechanical and noxious heat stimulation of the receptive field.The receptive field was then stimulated using a wider range of natural stimuli applied over a period of 10 s per stimulus.The heat stimulus was applied with a constant water jet onto the centre of the receptive field.Acetone and ethyl chloride were applied as evaporative innocuous cooling and noxious cooling stimuli, respectively.Evoked responses to room temperature water were minimal, or frequently completely absent, and were subtracted from acetone and ethyl chloride evoked responses to control for any concomitant mechanical stimulation during application.Stimuli were applied starting with the lowest intensity stimulus with approximately 30–40 s between stimuli in the following order: brush, von Frey, cold, heat.Baseline recordings were made with 25 μl vehicle applied topically to the dorsal aspect of the spinal cord after aspiration of any cerebrospinal fluid.After obtaining three baseline responses, the vehicle was removed and 10 μg and 50 μg ondansetron
hydrochloride, or 50 μg and 100 μg ketanserin tartrate were cumulatively applied to the spinal cord in a volume of 25 μl, and neuronal responses were characterised 20 and 40 min post-dosing; time point of peak change from baseline is plotted.The second dose was applied approximately 50–60 min after aspiration of the first dose; excess drug was washed from the cord with 25 μl vehicle.Drug doses were guided by previous studies.Data were captured and analysed by a CED1401 interface coupled to a computer with Spike2 v4 software with rate functions.The signal was amplified, bandpass filtered and digitised at rate of 20 kHz.Spike sorting was performed post hoc with Spike2 using fast Fourier transform followed by 3-dimensional principal component analysis of waveform features for multi-unit discrimination.Neurones were recorded from one site per rat; one to three neurones were characterised at each site.Stimulus evoked neuronal responses were determined by subtracting total spontaneous neuronal activity in the 10-s period immediately preceding stimulation.Spontaneous firing of individual neurones is expressed as the mean of these 10-s periods.A total of 16 sham and 15 SNL rats were used in this study.All electrophysiological studies were non-recovery; after the last post-drug time-point, rats were terminally anesthetised with isoflurane.Statistical analyses were performed using SPSS v25.Heat and mechanical coding of neurones were compared with a 2-way repeated measures ANOVA, followed by a Bonferroni post hoc test for paired comparisons.Cold, brush and spontaneous firing were compared with a 1-way repeated measures ANOVA, followed by a Bonferroni post hoc test for paired comparisons."Where appropriate, sphericity was tested using Mauchly's test; the Greenhouse-Geisser correction was applied if violated.Group sizes were determined by a priori calculations.All data represent mean ± 95% confidence interval.*P < 0.05, **P < 0.01, ***P < 0.001.Once baseline responses had been obtained, the 5-HT3R antagonist ondansetron was cumulatively applied to the spinal cord.At both doses, ondansetron inhibited neuronal responses to punctate mechanical stimulation selectively at the most noxious intensity of stimulation.The inhibitory effects of ondansetron were modality selective; no decrease in heat, innocuous and noxious evaporative cooling, or dynamic brush evoked responses were observed.In addition, spinal 5-HT3R block did not affect overall spontaneous firing rates with only 3/12 units weakly inhibited.The inhibitory profile of spinal ondansetron was altered in neuropathic rats.In contrast to sham-operated rats, in SNL rats 10 μg and 50 μg ondansetron inhibited neuronal responses to lower intensity punctate mechanical stimuli in addition to the higher intensities that may exceed nociceptive withdrawal thresholds.Furthermore, noxious heat evoked neuronal responses were now inhibited at both doses tested.However, there were no inhibitory effects on cooling or brush evoked responses.There was no overall effect on spontaneous activity with only 4/11 units weakly inhibited.After sham rats were dosed spinally with the 5-HT2AR antagonist ketanserin, we found weak evidence for tonic facilitatory activity in 4/12 units tested in the VPL.However, ketanserin had no overall effect on neuronal responses to punctate mechanical stimulation, heat, cooling or brush evoked responses.In addition, spinal ketanserin did not alter spontaneous activity in the VPL.The inhibitory profile of spinal ketanserin was also altered in 
neuropathic rats.As observed in sham rats, there was no overall effect on punctate mechanical and heat evoked responses.However, both 50 μg and 100 μg ketanserin inhibited neuronal responses to innocuous and noxious evaporative cooling.No inhibitory effect was observed on brush evoked responses, nor on spontaneous firing rates.In this study, we describe the intensity and modality selective tonic pro-nociceptive function of spinal 5-HT2A and 5-HT3 receptors in the absence of nerve injury, and enhanced facilitatory roles in neuropathic conditions.Furthermore, these receptors selectively facilitated stimulus-evoked thalamic neuronal responses but not spontaneous firing in neuropathic rats.Previous studies often examined descending modulation of pain utilising spinal endpoints, either behavioural or electrophysiological.The former assay is limited to examining withdrawal threshold responses and may lack the sensitivity to decipher descending influences on spinal excitability given that modulatory medullary neurones respond selectively to intense noxious stimulation in normal conditions.Although the latter approach circumvents this shortcoming and affords the ability to examine neuronal responses to supra-threshold stimuli, the projection pathways of these neurones are rarely confirmed and effects on spontaneous neuronal firing are infrequently studied.To our knowledge, this study for the first time examines the impact of activity within bulbospinal pathways on integrative sensory processing within the VPL in normal and neuropathic states.In normal states ON- and OFF-cells in the RVM typically exhibit ‘all-or-nothing’ responses independently of sensory modality but discriminating between innocuous and noxious inputs.In neuropathic states, ON-cells gain sensitivity to lower intensity stimuli and display exaggerated responses to noxious stimulation.Correspondingly, these effects are mirrored at the spinal level following lidocaine block of the RVM.Spinal 5-HTRs mediate complex pro- and anti-nociceptive effects; in general, 5-HT2A/3/4Rs are considered to be facilitatory whereas 5-HT1/2C/7Rs are inhibitory.At the cellular level, 5-HT2A and 5-HT3 receptors exert excitatory effects, the former via downstream mechanisms mediated by activation of phospholipase C, whereas the latter is ionotropic and can directly affect membrane excitability.Anatomically and functionally, both receptors are implicated in pro- and anti-nociceptive functions.A large number of GABAergic and enkephalinergic inhibitory interneurons express 5-HT3Rs, though 5-HT2AR localisation with GABAergic interneurones is much more limited.Both receptors may enhance inhibitory modulation in the superficial dorsal horn.However, numerous studies overwhelmingly support a net facilitatory role in acute nociceptive, inflammatory and neuropathic states.Pre-synaptically, the 5-HT3R is mainly expressed in myelinated neurones and low numbers of TRPV1-positive neurones, and functionally we observe facilitatory influences on mechanical but not heat evoked neuronal responses in sham rats.Although 5-HT3Rs are also present post-synaptically in the dorsal horn, the modality selective effects are consistent with a preferential engagement of pre-synaptic receptors by descending serotonergic brainstem projections.Neither block of spinal 5-HT2A nor 5-HT3 receptors inhibited electrically evoked wind-up of dorsal horn neurones, consistent with a pre-synaptic locus of action.The 5-HT3R-mediated sensitisation of TRPV1 in injured and uninjured primary afferent 
terminals likely leads to sensitisation to punctate mechanical and heat stimuli in neuropathic states.Interactions between 5-HT3R activity and calcium channel function may also enhance excitatory transmission.Neither block of 5-HT2A nor 5-HT3 receptors altered dynamic brush-evoked neuronal responses, indicating that descending facilitation is unlikely to mediate brush hypersensitivity.We found weak evidence for tonic facilitation of noxious punctate mechanical and noxious heat evoked responses in sham rats via 5-HT2ARs, broadly in line with the effects observed on spinal neuronal excitability.In SNL rats, enhanced facilitation of noxious punctate mechanical and heat responses did not appear to be mediated through 5-HT2ARs, as effect sizes were similar to the sham group.However, spinal 5-HT2AR block now revealed facilitation of neuronal responses to innocuous and noxious evaporative cooling.Cold allodynia is a frequent occurrence in neuropathic conditions.Numerous peripheral and spinal mechanisms have been proposed to underlie this sensory disturbance, but altered monoaminergic control should not be overlooked; noxious cold transmission is modulated by the PAG-RVM pathway in the absence of injury, excitability of RVM ON-cells to somatic cold stimulation is enhanced after nerve injury, and intra-RVM lidocaine reverses behavioural hypersensitivity to cooling in neuropathic rats.The 5-HT2AR can promote excitatory transmitter release in part by inhibiting 4-aminopyridine sensitive potassium currents.In peripheral sensory neurones, transient 4-aminopyridine sensitive currents are proposed to set thresholds for cold sensitivity, and inhibition of this ‘excitatory brake’ could enhance cold transmission.Immunoreactivity for 5-HT2ARs is largely detected in peptidergic and non-peptidergic medium-to-small sized dorsal root ganglion neurones.It does not appear that 5-HT2A and 5-HT3 receptors are upregulated after nerve injury, but enhanced spinal gain may still arise from a combination of plasticity in descending pathways and spinal circuits.Table 1 illustrates the balance between tonic descending inhibitory and excitatory tone in sham rats, and how this affects thalamic sensory coding across modalities and stimulus intensity.In neuropathic rats, augmented serotonergic facilitation and a concurrent loss of descending noradrenergic inhibition contribute to substantial sensory gain.Aberrant spontaneous firing of VPL WDR neurones in SNL rats is dependent on ongoing peripheral and spinal activity.Neither block of spinal 5-HT2A nor 5-HT3 receptors inhibited spontaneous firing in sham and SNL rats.Consistent with our observations, following trigeminal nerve injury, depletion of brainstem 5-HT had no effect on the spontaneous activity of trigeminal nucleus caudalis neurones.However, intrathecal ondansetron produces conditioned place preference selectively in neuropathic rats, demonstrating that enhanced serotonergic activity via spinal 5-HT3Rs can lead to an ongoing aversive state, and supports a partial separation of sensory and affective processing.Polymorphisms of the 5-HT2AR are associated with fibromyalgia, a condition considered to be dependent on altered central modulation, and 5-HT3Rs are highly expressed in human dorsal root ganglion neurones, supporting involvement of these receptors in pain modulation in humans.Few clinical reports exist examining the effect of 5-HT2A antagonists in chronic pain patients.The 5-HT2AR blocker sarpogrelate is not thought to be blood-brain-barrier permeable but through a
peripheral mechanism may provide relief in diabetic neuropathy, and after lumbar disc herniation.Likewise, the use of ondansetron clinically for analgesic purposes has been limited due to the poor penetration of the blood-brain-barrier.In peripheral neuropathy patients, intravenous ondansetron had mixed effects on ongoing pain, but no effect on brush allodynia, an observation that resembles the neuronal measure in this study.Similarly, in chronic back pain patients, tropisetron, a highly selective 5-HT3R antagonist, had no overall effect on the intensity of ongoing pain and minimal effects on secondary measures of sensitisation.Although not considered a neuropathic condition, fibromyalgia is idiopathic and is characterised by widespread sensitisation and musculoskeletal pain, but also disturbances in descending modulation.Tropisetron was effective in a subgroup of these patients, reducing the number of painful pressure points and associated pain intensity scores, an effect that again bears a marked resemblance to the neuronal measures described here and previously.Of course, a caveat of these studies is the systemic route of dosing and the inability to rule out involvement of peripheral mechanisms or other centrally mediated processes.However, the concordance between the psychophysical measures in fibromyalgia patients with systemic treatment, and the thalamic neuronal measures following spinal dosing, implies that similar underlying processes are targeted.The modality- and intensity-dependent facilitatory role of the 5-HT3R supports the notion that 5-HT3R antagonists are more effective for alleviating static/punctate mechanical hyperalgesia, and could merit further clinical investigation in patients stratified according to these sensory disturbances.Several authors have advocated a move to mechanism-based treatment selection, and argue that sensory profiles of patients represent surrogate measures for underlying mechanisms.From a clinical presentation, determining enhanced descending facilitation as a mechanism present in a patient poses some difficulties.Conditioned pain modulation, an assay through which a heterotopic conditioning stimulus inhibits the perceived intensity of a test stimulus, provides a readout of the integrity of endogenous pain modulation.CPM is frequently diminished in chronic pain patients, but this net loss of inhibition might result from decreased noradrenergic inhibitory tone, increased facilitatory drive, or a combination of both.Baron and colleagues describe three sensory phenotypes in neuropathic patients, though it is unclear whether inefficient CPM correlates with any of these.The sensory profile of the SNL model shares features with the ‘mechanical’ and ‘thermal’ phenotypes, and diffuse noxious inhibitory controls are absent in this model.Speculatively, based on the modality and intensity dependent roles, enhanced descending facilitation terminating on 5-HT2A and 5-HT3 receptors may be associated with sub-groups within the ‘mechanical’ and ‘thermal’ sensory phenotypes; our current observations could help shape translational pharmacological studies.RP and AHD, conception and design of study; RP, performed experiments; RP, analysed data; RP and AHD, interpreted results of experiments; RP, prepared figures; RP, drafted manuscript; RP and AHD, edited and revised manuscript.Both authors approved the final manuscript.This study was funded by the Wellcome Trust Pain Consortium.
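The evoked-response quantification and statistics described in the methods above were performed in Spike2 and SPSS; the snippet below is only a hedged, illustrative sketch of an equivalent workflow, in which the spike counting (stimulus-window spikes minus spontaneous spikes in the preceding 10-s window) and a two-way repeated measures ANOVA are reproduced, while the file name, column names and use of statsmodels are assumptions, and Mauchly's test and the Greenhouse-Geisser correction applied in the original analysis are not computed here.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM


def evoked_count(spike_times, stim_onset, window=10.0):
    """Spikes during the 10-s stimulus window minus spikes in the preceding 10-s window."""
    spike_times = np.asarray(spike_times)
    during = np.sum((spike_times >= stim_onset) & (spike_times < stim_onset + window))
    before = np.sum((spike_times >= stim_onset - window) & (spike_times < stim_onset))
    return int(during - before)


# Hypothetical long-format table: one row per neurone x drug condition x von Frey force,
# with the evoked spike count computed as above (AnovaRM expects balanced data, i.e. one
# observation per neurone for every drug/force combination).
df = pd.read_csv("vpl_evoked_responses.csv")  # columns: neurone, drug, force_g, evoked

# Two-way repeated measures ANOVA with neurone as the repeated-measures subject;
# Bonferroni-corrected paired comparisons would follow, as in the original analysis.
res = AnovaRM(df, depvar="evoked", subject="neurone", within=["drug", "force_g"]).fit()
print(res.anova_table)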
Descending brainstem control of spinal nociceptive processing permits a dynamic and adaptive modulation of ascending sensory information. Chronic pain states are frequently associated with enhanced descending excitatory drive mediated predominantly through serotonergic neurones in the rostral ventromedial medulla. In this study, we examine the roles of spinal 5-HT2A and 5-HT3 receptors in modulating ascending sensory output in normal and neuropathic states. In vivo electrophysiology was performed in anaesthetised spinal nerve ligated (SNL) and sham-operated rats to record from wide dynamic range neurones in the ventral posterolateral thalamus. In sham rats, block of spinal 5-HT3Rs with ondansetron revealed tonic facilitation of noxious punctate mechanical stimulation, whereas blocking 5-HT2ARs with ketanserin had minimal effect on neuronal responses to evoked stimuli. The inhibitory profiles of both drugs were altered in SNL rats; ondansetron additionally inhibited neuronal responses to lower intensity punctate mechanical stimuli and noxious heat evoked responses, whereas ketanserin inhibited innocuous and noxious evaporative cooling evoked responses. Neither drug had any effect on dynamic brush evoked responses nor on spontaneous firing rates in both sham and SNL rats. These data identify novel modality and intensity selective facilitatory roles of spinal 5-HT2A and 5-HT3 receptors on sensory neuronal processing within the spinothalamic-somatosensory cortical pathway.
784
Building capacity for water, sanitation, and hygiene programming: Training evaluation theory applied to CLTS management training in Kenya
Globally 2.4 billion people lack access to improved sanitation, and 946 million lack access to any sanitation facility and practice open defecation.Poor sanitation and hygiene together cause an estimated 577,000 deaths annually, and half of child stunting can be explained by OD.Sanitation can lead to improved social status and dignity, gender-equity benefits, and increased school attendance for girls.Human resource development has been recognized as critical to global water, sanitation, and hygiene progress since 1982.A 2014 global assessment found only one-third of countries had human resource strategies for WaSH, despite a lack of capacity constraining the sector.The capacity gap in WaSH includes a lack of soft skills among program managers such as partnership and supervision, which are increasingly important given a shift in WaSH interventions towards participatory behavior-change approaches that necessitate these skills.This gap is partly due to training that is not matched to needs, and to ill-equipped training institutions.Responsibility for WaSH programs is frequently decentralized to local government without sufficient staff and financial resources.Training in soft skills has the potential to benefit public health programs beyond WaSH, by improving program planning, and strengthening health systems.With population growth and the United Nations’ adoption of the Sustainable Development Goals, which expand national WaSH targets to include universal access and increased quality of WaSH services, the gap between human resource capacity and targets will grow.In response, SDG Target 6a is to “… expand international cooperation and capacity-building support to developing countries in water and sanitation related activities and programmes …”.There are few training evaluations in WaSH, and those that do exist tend to lack rigor.A review of 104 WaSH organizations found that over 60% do not monitor or report on their training programs, and only 15% monitor beyond simple process indicators such as number of trainees.The review also found widespread duplication of efforts, and negligible long-term trainee tracking.There are many training evaluation frameworks and tools outside the WaSH sector; however, they require adaptation and expansion for use in the complex WaSH sector, as they tend to focus on ideal trainees and isolated environments, and assess clearly defined and easily measured learning and behavior outcomes such as performance in flight simulators and building Lego models.Evaluations of training and capacity building in WaSH tend to focus on real-world settings, but few draw on the extensive evidence, tools, and theory outside WaSH.Despite a large and growing capacity shortfall, training in WaSH is insufficient and poorly evaluated.There is opportunity to increase beneficial impact by applying well-developed theory to increase learning, behavior change, and program outcomes arising from investments in training.We reviewed and adapted training evaluation theory, developed a conceptual framework for use in evaluating training in WaSH, and applied it to a community-led total sanitation management training program for government officials in Kenya.CLTS is an adaptive, participatory approach, and managing CLTS requires a diverse set of skills and collaboration between sectors.This provided an opportunity to explore the value and relevance of our conceptual framework.Our study provides new tools for use in the WaSH sector, as well as new evidence on building capacity within local government 
for managing WaSH programs.This study involved the development of a conceptual framework for evaluating training in WaSH, a CLTS management training program delivered by Plan International Kenya to government officials in Kenya, and an evaluation of the training program in Kenya using the conceptual framework.CLTS emerged in the year 2000 as a participatory approach to address OD, and is now a well-established approach that has been implemented in over 50 countries.CLTS was introduced to Kenya in 2007.At the inception of this project in 2011, CLTS in Kenya was focused on policies, strategies, and institutional arrangements nationally, and on village-level implementation locally.The government adopted CLTS into national sanitation policy and published CLTS guidelines (Kenya Ministry of Public Health and Sanitation).The Ministry of Public Health and Sanitation was tasked with managing CLTS programs.The government initiated an Open Defecation Free Rural Kenya 2013 campaign, and contracted a non-governmental organization, the Kenya Water and Health Organization, to independently verify ODF communities.In 2013, a new constitution was enacted, in which government decision-making was devolved to 47 newly designated counties comprising 290 sub-counties.County and sub-county responsibilities are still developing and some confusion persists.The MOPHS was merged with the Ministry of Health, while other ministries are being combined or phased out.Multilaterals and NGOs such as UNICEF, Plan, and World Vision support CLTS in Kenya financially and through training and guidance.Public health officers and volunteer community health workers facilitate CLTS activities.However, training local government officials to manage CLTS programs, including coordinating a diverse range of organizations, has largely been overlooked.A lack of local government capacity to manage programs can lead to a lack of support and guidance to communities.Plan identified local government as critical to improving CLTS programs, due to its roles in advocating to county government for policies and funding, and coordinating implementation by field officers and NGOs.Plan developed a CLTS management training program, and invited officials with a direct or indirect role in sanitation to participate.Plan trained officials from two sub-counties in Kilifi County, and two in Homa Bay County.The training program comprised an initial five-day training, and “training-over-time” activities over the following seven months, an approach that is less common than one-time training.Officials were trained by county.The initial training covered CLTS implementation with a field demonstration, and management skills in partnership, supervision, resource mobilization, and monitoring.Training was participatory, inter-ministerial, and included group work.Training-over-time activities incorporated training and application, and included CLTS field training, resource mobilization, work planning, monitoring, advocacy, training division-level staff, and sensitizing county officials.A majority of trainees attended the majority of activities.Advocacy training and sensitizing county officials were not completed in Kilifi County due to government training for polio vaccination and terrorism response taking priority.A timeline of activities with the number of trainees at each is in Supplement 1.A description of the training program is available online.We used a qualitative study design to evaluate the CLTS management training program, and developed a conceptual framework to guide
the evaluation.The conceptual framework is presented in section 3.Data collection tools were designed to identify the target outcomes of the training program, influences, and the links between them, following our conceptual framework.Research Guide Africa, a Kenyan social research agency, was hired to recruit study participants and administer surveys and interviews.During interviewer training, the tools were tested and refined for clarity and cultural appropriateness.Four local interviewers were trained for two days, and two were hired based on performance during training.The researchers had no interactions with trainees.All government officials who participated in the initial training were eligible for inclusion in the study.RGA obtained informed consent in person before training began.IRB approval was obtained from the University of North Carolina and from the Kenya National Council for Science and Technology.There were three interactions with trainees: 1) a pre-training questionnaire to understand trainees' background and expectations; 2) round-one interviews two weeks after initial training to assess learning, attitudes, motivations, ability, and the quality of the training design; and 3) round-two interviews seven months later after training-over-time activities to assess trainees' performance, and organizational and external factors that influenced their performance.Individual in-depth interviews were conducted in trainees’ workplaces for 20–75 min.Interviews were in English with explanation in Swahili when necessary.Interviews were recorded and transcribed verbatim by the interviewer, with Swahili translated into English, then checked in full by the second interviewer.During round one, interviewers did not always have time for the last few questions, so the round-two interview guide included more instructions, prompts, and timing guidance.Interview transcripts were coded using Atlas.ti by the second author, and a sample was reviewed by the first author.Transcripts were coded first by interview section, then by inductive and deductive themes.The second author did not contribute to the conceptual framework until after coding to reduce the potential for bias during inductive coding.Themes were systematically categorized into outcomes and influences from the conceptual framework through discussion between the first and second authors until consensus was achieved, then the framework was used to interpret themes and understand links between outcomes and influences.Selected interview quotes appear in-text with additional quotes in Supplement 4.Knowledge gained was assessed by looking for training concepts recalled during interviews.Skills are not easily measured by interviewing, so potential skills were assessed from round-one interviews by asking trainees to describe how and why they planned to change their work practices.Changes made to work practices were assessed from round-two interviews administered seven months later.To avoid positive bias, trainees were prompted for detailed examples of planned or actual changes to work practices, and why they made these changes.For the purposes of analysis, general descriptions and examples—when trainees could not provide details on the how and why—were considered a lack of evidence.Factors that influenced trainees and training outcomes were assessed in two ways: by asking trainees about them directly, and by coding influencing factors when they arose naturally.Findings are not intended to comprehensively describe training outcomes and
influences; rather, the most important outcomes and critical influences are revealed by this analysis.We reviewed the literature for training evaluation frameworks, tools, and concepts, and combined and adapted them to develop a conceptual framework as a tool for identifying outcome, trainee, and context indicators, and to relate training to outcomes.We also assessed the relationship between influences and outcomes in order to make recommendations for adapting future training programs to local context.We first searched for any published articles that evaluated training or capacity building in WaSH, and that used any framework, model, or guideline for the evaluation.We were looking specifically for concepts applicable to training evaluation, so included capacity building and development in our search as training is often a component of these.We found five articles that evaluated training in WaSH.Two of the studies did not use a framework or follow an evaluation guideline or protocol.Two others mentioned frameworks, though they did not thoroughly describe their use.The fifth study proposed an approach to evaluating capacity development partnerships, but was not well-suited for training evaluation.None of these five studies described analysis, and only one included their survey guides and made the link between data and results explicit.We did not use any of these five articles to inform our conceptual framework.We turned to training evaluation literature from outside WaSH to develop our conceptual framework.We searched for any published articles with an explicit focus on training evaluation or transfer of training.In order to cover many different evaluation approaches we only reviewed articles that present new frameworks or concepts and reviews.We began with the most cited and earliest published articles, then continued to review articles until we reached saturation.We reviewed a total of 30 articles.The full literature review and complete definitions from our conceptual framework are presented in Supplement 2.WaSH training programs are implemented to improve management and implementation of programs that construct infrastructure, deliver WaSH services, or target behavior change.Our conceptual framework includes three categories of “target outcomes,” which relate to the objectives of the organization leading the training program, and six categories of “influences,” which are factors that affect outcomes.We use the term target to convey that training programs should be evaluated against the target outcomes of the organization leading the training, and that other outcomes of training may occur.This also requires that the organization leading the training sets objectives in advance.The broad outcome and influence categories and their interactions are presented in Fig. 
1.Definitions of each category, including sub-categories or constructs, are presented in Table 1.Additional explanation of the conceptual framework and commentary on measurement of each outcome and influence are presented in Supplement 2.We include three target outcomes of training: learning, individual performance, and improved programming.Improved programming leads to impacts, which are not included in the framework, because they may occur long after training ends, and the causal link is confounded by many factors that cannot be measured or accounted for.We include six influences in the framework: attitude and motivation, ability, knowledge sharing, training design, organizational factors, and external factors.The first three are “trainee influences”, and the last three are “context influences”.All 42 eligible trainees enrolled.One trainee each from Homa Bay and Kilifi Counties did not participate in the second interview.Table 2 presents trainee characteristics.Learning outcomes and influences were assessed from round-one interviews, so pertain only to initial training.Learning from training-over-time activities is revealed through changes in individual performance.Target learning outcomes of the training program were understanding the CLTS process, critical thinking around CLTS, and development of four management skills: partnership, supervision, resource mobilization, and monitoring.Trainees were taught the CLTS steps during initial training, and participated in triggering a community.In round-one interviews, nearly all trainees recalled the triggering that they had seen in the field, and half gave detailed examples of triggering activities.Only a quarter of the trainees recalled details on pre-triggering and follow-up.Recall of triggering details may have been higher because trainees had seen these activities in practice.No trainees gave detailed descriptions of CLTS verification or celebration.While trainees’ descriptions of CLTS triggering indicate understanding of the theory of CLTS and the activity they observed in the field, comprehensive recall of the entire CLTS process was found to be low.During initial training, favorable and challenging conditions for implementing CLTS were presented to trainees.From these, trainees frequently recalled environmental and geographic conditions; however, government structure and responsibility were infrequently mentioned in interviews.Some trainees described conditions that were not covered in training, indicating trainees were thinking critically about what they had learned.For example, they described how human and financial resources, culture, and socioeconomic status could affect CLTS success, which they had not been explicitly taught.Trainees also observed conditions favorable for CLTS during the field visit: “there is need to work closely with the Provincial Administration, the opinion leaders, and also walking there earlier and walking around physically in the area, to see if that problem exists, and of course familiarizing yourself with the area.,Through recognition of these conditions, trainees demonstrated their understanding of the importance of context for success of CLTS programs.Complete trainee-identified success and challenge factors are in Supplement 5.Potential skills learned were seen when trainees planned to apply skills to their work—which is a proxy for skills learned.While some trainee responses suggested learning of skills, the majority of trainees could not articulate how they planned to apply new skills to their work 
practices.Supervision and partnership plans included forming an inter-ministerial committee to coordinate supervision of field staff and creating a forum for CLTS coordination.Resource mobilization plans included approaching a new funding source, and increasing follow-up and face-time with funders after submitting a proposal.Monitoring plans included moving to consistent indicators across ministries, and adding new indicators such as water committee feedback.Two trainees said they would just continue to ask Plan for support, indicating they had not gained new resource mobilization skills.What motivated trainees in their work was complex and varied in round-one interviews, ranging from helping people and seeing changes such as health improvements in communities, to seeing broad environmental, societal, or economic change.Some trainees mentioned being motivated by interacting with others.Delayed funding or lack of resources was commonly cited as discouraging, as was feeling that their work was hectic, stressful, or overwhelming.Some trainees revealed discomfort talking about sanitation and feces.A few trainees were embarrassed when describing CLTS: “triggering process is when the community has now realized that this thing is bad for them … they draw their community and show the houses and where they live and put where they do the thing ….”Twenty-one trainees either indicated in the pre-training questionnaire that they had prior training in CLTS, or were likely to have prior training given their position and level.Prior CLTS training was more common among trainees from the MoH than from other ministries.Trainees who already possessed some CLTS knowledge and skills had less potential to learn from training.This was most evident for trainees from the MoH, many of whom perceived the highest value of the training program to be bringing staff from other ministries into a CLTS training, whereas those outside the MoH saw value in learning the CLTS content itself.Most trainees spoke positively about initial training.Trainees liked the participatory structure and inter-ministerial group work, which allowed knowledge sharing across different sectors.One trainee explained that the sitting arrangement allowed different levels of staff to work together: “we were like in the same wave length and we could interact freely and share freely.”The frequent group work allowed trainees to practice their partnership skills.Many trainees remembered a video about CLTS from Bangladesh, describing how it helped them think about CLTS principles such as community engagement and use of local resources.Trainees frequently cited the field activity as an opportunity to gain practical knowledge.Seeing training concepts applied in the Bangladesh video and in the field in Kenya helped trainees to think critically about how CLTS could work in their counties.Higher recall of the CLTS steps that were demonstrated in the field suggests the importance of practical experience for increasing target learning outcomes.Negative feedback on initial training primarily concerned duration.Some trainees thought the initial training should have been longer, whereas a few thought it was too long.Some trainees expressed a wish for higher daily cash allowances.The target individual performance outcomes of the training program were application of the four management skills to work activities, and increased ownership of sanitation programs.The most commonly reported changes to work practices concerned partnership – a skill trainees practiced during
inter-ministerial group work in initial training.When asked about partnerships in round-one interviews, trainees tended to list NGOs, while inter-ministerial partnerships featured prominently in round-two interviews.Inter-ministerial partnerships can lead to improved coordination of CLTS programs.Trainees also described improved communication and planning with partners, new methods of forming partnerships, and increased collaboration on programs, such as tree planting campaigns and Global Handwashing Day.One trainee noted: “… bonded us so much that nowadays when you are calling colleagues for an activity, these are people that you are already working with and you are comfortable with them.Recently, a new NGO was coming in to collect baseline data, I simply cross over to here at water office immediately knows what kind of information I need.”After initial training, trainees discussed plans for an inter-ministerial committee to supervise field staff and appraise each other's work.In round-two interviews, trainees reported they had improved communication and engagement with their supervisees in decision making.One trainee reported learning new conflict resolution strategies from training.Several trainees recalled training in resource mobilization and commented on how helpful it was.Trainees experienced with CLTS made connections between resource mobilization and partnership, speaking about joint-budget planning with county government and other ministries.Those without CLTS experience gave examples unrelated to CLTS, such as writing their first proposal, and using new mechanisms to request funding.No one mentioned mobilizing resources from Plan in round-two interviews, despite having planned to.This shift indicates that trainees were thinking about resource mobilization beyond their established mechanisms.In round-two interviews, three trainees reported monitoring with increased frequency.Increased ownership of sanitation programs manifested as trainees identifying ways they had, or planned to, apply CLTS knowledge in their work.Some suggested they would spread CLTS and sanitation messages while visiting communities.Others added sanitation activities to their work, for example by including public toilets in a funded irrigation plan.One trainee suggested they would monitor OD while visiting communities for other projects.Trainees also showed other signs of taking ownership of CLTS, like drafting a sanitation policy to secure long-term institutional support for CLTS.A quote from one trainee demonstrated their ownership of CLTS: “… always worries me if a project ends, what will then drive the community, and there now you will need the service of the health promotion officer.Everybody will leave.Every other department will say ‘that Plan thing came to an end and we are waiting for it if it comes back!’But now as a health promotion officer it is my burden to see that this continues, enablement of people's health issues continues, so what has been started, it has to move on! … if I pack up and say that ‘Plan mentorship went!EGPAF went!UNICEF went!’Then I will be killing the community!”The increased coordination and collaboration between ministries described above was another indication of increased ownership of CLTS.The attitudes and motivations that influence learning also influence individual performance.Following our conceptual framework, we looked for ways in which trainees were able to make connections between training and their work, which can lead to improved individual
performance.Trainees saw connections in four areas: links to their ministry's focus area, creating healthy populations, supporting Kenya's development, and applicability of the CLTS approach to their work.Several trainees made ministry-specific connections.One trainee working in agriculture noted that CLTS can improve hygiene behavior, allowing food crops to be sold more widely.A high-level administrator saw a connection between reduced OD and improved security for women and girls.Three trainees in the Ministry of Water linked reduced OD to improved water quality.Some trainees noted their work depends on having healthy populations, which can depend on sanitation.One trainee demonstrated the ability to link the training to their work, describing how they used their new understanding of the sanitation-health link to motivate their colleagues: “… we work with targets in government and we sign performance contracts, so even if you assign them these duties, the officers will say that it’s not within their performance contracts … But when they are taken through this process, they realize that this problem is affecting health issues in communities.They then realize that as an officer, when people are often sick in an area, they won't be able to mobilize them for any activities, and that he too won't achieve his targets, so that connection needs to be established.”A few noted that CLTS is good for Kenya's development, and that sanitation is recognized as a right in their constitution.Others noted they could use triggering and participatory techniques from CLTS for other behavior change programs.For those in the MoH already working on CLTS, links between training and their work were clearer.Organizational factors can be categorized into people-related factors and work system factors.Having an inflexible supervisor was found to be an important people-related organizational constraint.Some trainees were motivated to pursue CLTS, but found it difficult, because it was not part of their core functions.One commented that: “… at times when the trainings are organized they tend to clash … my supervisor is sometimes not willing to let go because he wonders that this is not my core function and not in my job description.”Two training activities were directed at this constraint: Plan sensitized trainees’ supervisors at the county level to the importance of sanitation and CLTS, and also trained trainees in advocacy.The sensitization modified the organizational constraint by encouraging flexibility by the supervisors so that trainees could apply their learning, while advocacy training empowered trainees to argue for increased flexibility directly.These two activities only occurred in Homa Bay County.Rigid organizational guidelines were a work system factor that constrained changes in monitoring.One trainee commented that monitoring community-level outcomes had not changed because they always followed existing guidelines.Insufficient financial resources were a frequently referenced work system factor constraining the application of new knowledge and skills.Plan's training program included two activities directed at this constraint: lobbying county government to commit 0.5% of the following year's budget to sanitation, and resource mobilization training so that trainees could raise funds themselves.However, several trainees cited their lack of authority and inability to change fixed budgets as preventing them from applying their new resource mobilization skills, indicating additional organizational factors
constraining individual performance.The 2013 devolution in Kenya was an important work system organizational factor, as an enabler and constraint.For one trainee, decentralized decision-making allowed better communication with supervisors who had relocated from the capital city to regional centers.For others, devolution meant lacking a supervisor for several months.Uncertainty regarding renewal of field staff contracts also discouraged trainees from applying new supervision skills.Assessment of improved programming is often difficult and inconclusive, as outcomes can occur long after training and are influenced by many factors.Increased scale and duration of CLTS programs may only occur after our seven-month evaluation timeframe, and cannot always be linked to training when many external factors are present.We did not attempt to evaluate programming outcomes, but instead looked for preliminary indications of improved programming, and asked trainees to reflect on programming.A few trainees thought the government could independently scale-up CLTS in their county, though the majority thought that support from Plan or other NGO partners would be necessary.Trainees outside the MoH suggested integrating CLTS into school and agricultural programs, and youth and women's groups as mechanisms for scale-up.Both interview rounds included questions on knowledge sharing—trainees taking the initiative to transfer learned knowledge and skills to their colleagues and supervisees.Many trainees reported sharing knowledge, and one elaborated on its importance: “If you don't share knowledge, it is like it is not there.So when you share knowledge, you ease the work … you cannot carry everything on your shoulders.You need to leave some of the work to others, so you delegate such that work continues even without you.”Another trainee described how knowledge sharing can lead to training outcomes being more resilient to staffing changes: “I would like when I leave any other person who is coming in finds a system that is working, not an individual's job!”Knowledge sharing can improve programming by facilitating institutionalization and sustainability of training outcomes.There were several work system organizational factors that influenced improved programming.Trainees from several organizations cited insufficient staffing, competing responsibilities and uncertainty resulting from changing personnel and ministry restructuring during devolution as programming challenges.For example, Ministry of Education officials were unable to receive county funding for sanitation, as their ministry was not yet decentralized.Insufficient financing was described as constraining scale-up and duration of CLTS programs.Plan lobbied county government directly and trained trainees in resource mobilization to address financial constraints.People-related organizational factors also influenced improved programming.Lack of trust prevented the establishment of a collective bank account for CLTS, when trainees were unable to agree on who would control the account.One trainee described tension with trainees from the MoH: “… people have been seeing sanitation as a Ministry of Health kind of issue, so if they don't incorporate these other people who are not at the Ministry of Health and they want to go by themselves, I don't see them succeeding … all of us are targeting the community as a client and when it's all-inclusive that is the strength.”Another described trainees from different ministries disagreeing about incorporating
environmental impact assessment into CLTS.Trainees also noted that NGOs rely on government staff and expect them to drop other responsibilities to implement NGO programs.While trainees were able to improve their individual performance, these organizational constraints may reduce the impact of training on improved sanitation programming.External factors enable or constrain trainees and their organizations.Several trainees recognized the national policy environment as broadly enabling CLTS programs, noting that CLTS is included in the sanitation policy, and that policy can empower communities to act on their own.A few trainees described policy as constraining CLTS programs, particularly the conflict between CLTS being “community-led” and the government having a national ODF target and top-down policies.The security situation in Kilifi was an external factor that directly affected the training program, as all government officials in Kilifi were required to attend meetings on terrorism response, which delayed some training activities, and resulted in advocacy training being dropped in Kilifi.We developed a conceptual framework for evaluating training in WaSH and used it to evaluate a CLTS management training program for government officials in Kenya.The framework includes three categories of outcomes, which we evaluated against the training objectives.The framework also sets out six categories of influences on outcomes.The target learning outcomes of the training program in Kenya we evaluated were an understanding of the CLTS process, critical thinking about CLTS, and development of management skills.After the initial training, few trainees understood the entire CLTS process, although most demonstrated critical thinking about implementing CLTS in their counties.Round-one interviews also indicated that the initial training resulted in limited learning of new skills.However, trainees later demonstrated they had gained new skills when they applied them to their work.Target individual performance outcomes were application of management skills to work activities, and increased ownership of sanitation programs.There were frequent examples of trainees improving their work by applying new partnership skills.Improved coordination between ministries and supervision of field staff were particularly apparent.Application of other skills was less common: a few trainees had used new resource mobilization and monitoring skills.Trainees, including those with no prior CLTS experience, also demonstrated increased ownership of sanitation programs in a variety of ways, such as incorporating sanitation into existing work activities.Target improved programming outcomes were increased scale, duration, and quality of CLTS programs.No interview questions directly focused on these programming outcomes, as a longer-term evaluation with comparison groups would be needed to assess them.Nevertheless, increased ownership of sanitation among trainees, and increased coordination and collaboration between ministries, were indications that improved programming was likely to occur.By using our conceptual framework to guide our evaluation, we identified characteristics of trainees and the context in which they work that both constrained and enhanced the training outcomes.We discuss influences and recommendations for training together, as improving outcomes involves identifying influences in advance, then adapting training to reflect these influences.We found that a variety of aspects of the training design enhanced training
outcomes.The trainers improved learning outcomes by conveying training objectives, focusing on practical knowledge and skills, and actively involved trainees, all of which are adult learning principles that should be incorporated into training of public health professionals.We found that incorporation of learning-by-doing activities such as hands-on field training can positively influence trainees’ motivation, knowledge recall, and critical thinking.Videos and examples from unfamiliar settings can foster creative thinking.Group discussions and brainstorming can help trainees identify or create ways to apply their learning to their work.Participatory training emphasizing group work can improve relationships between trainees and reduce tensions between ministries.Training-over-time activities enhanced learning of new skills that were not fully developed during the initial five-day training session.Training-over-time also enhanced application of skills, consistent with non-WaSH studies which recommend unstructured, on-the-job learning opportunities as a “post-training intervention”.Training-over-time is an underutilized approach that should be used to improve learning and performance outcomes."A number of organizational factors negatively influenced trainees' individual performance, and constrained the link between individual performance and improved programming. "A lack of flexibility on the part of trainees' supervisors, insufficient human and financial resources allocated to sanitation, slow-to-evolve relationships between organizations, and uncertain roles following devolution all initially prevented trainees from introducing new activities into their work.Organizational constraints can be addressed by empowering trainees or by having the organization delivering training modifying constraints directly.For example, we found that resource mobilization and advocacy training empowered trainees to advocate for increased flexibility and financial support.Plan also directly lobbied trainees’ supervisors for increased flexibility and budget commitments.However, direct modification of constraints does not necessarily result in increased learning or sustainable outcomes, and should be used in combination with training activities to empower trainees to address these constraints themselves.The link from individual performance to improved programming was supported by trainees sharing knowledge with colleagues, which could be encouraged as a way to reinforce learning and cost-effectively spread learning beyond trainees.While enhancing outcomes by targeting training to favorable individuals and contexts may seem appealing, and indeed may be effective for some types of training, we recommend against this strategy for management training of government officials.Managers should be trained as teams for multi-sectoral WaSH programs, and unfavorable contexts often align with the greatest need.For example, targeting only officials without prior CLTS training would have excluded MoH officials, whose presence provided trainees an opportunity to practice cross-sectoral partnership skills.Additionally, some studies have found that “overlearning” is beneficial.The conceptual framework can be used to support the design of training programs and their evaluation.The three outcome categories can be used to set and organize training goals, which should be done before training begins so that they can be communicated to trainees at the outset, and so that they can be evaluated.The six influences can be assessed during a 
situational or needs assessment prior to training, so that the training program can be adapted to reflect these influences.Modifiable organizational constraints can be addressed in parallel to or as part of training.To our knowledge, this is the first framework developed specifically for evaluating training in WaSH.Our intent was to develop conceptual framework that can be used across training in the WaSH sector.The outcome and influence categories can apply universally, although the specific factors relevant within each category will vary between training programs.The tools and analysis methods used here should be adapted, modified and replaced as others feel is appropriate for different applications.For example, we focused on learning and individual performance outcomes, so only interviewed the trainees and used a seven-month evaluation period.Those wishing to focus on improved programming outcomes and external factors should consider interviewing trainees’ peers as well.While this framework is a tool to support evaluation of training, it is not a substitute for an appropriate study design, quality data collection, and analytical rigor.This evaluation had a seven-month timeframe, so long-term outcomes, such as increased scale and duration of CLTS programs, were not seen.We did not interview anyone beyond trainees.Interviews did not include questions to elucidate all influences on training outcomes.Impacts on beneficiaries’ health and wellbeing are influenced by a wide range of factors that are not all measurable and cannot be linked to training, and thus were not included in this study.There is a substantive human resources capacity gap in WaSH, which will only widen with population growth and heightened service quality benchmarks and coverage targets introduced with the SDGs.In response, the need for training in WaSH will also increase.The few published training evaluations in WaSH tend to lack rigor, and do not draw on the extensive evidence that exists outside of WaSH.We reviewed training evaluation literature, developed a conceptual framework, and used it to evaluate a CLTS management training program in Kenya.Ultimately, the training did not achieve its target outcomes among the majority of trainees.However, innovation is often the result of a few champions or opinion leaders, so it still seems promising that there was a dramatic shift toward integration of participatory techniques and democratic management styles among several trainees, and an increased awareness of sanitation issues among a majority.Training programs for government officials should include soft skills applicable across public health sectors such as advocacy, partnership, and supervision, to increase the value of training and justify time spent away from other responsibilities.A growing need for capacity building in WaSH combined with limited prior evaluation presents both a risk of misdirecting investments in training, and an opportunity to influence training for improved outcomes.We suggest that our conceptual framework can support design of effective training programs and more rigorous training evaluations in WaSH.This study was reviewed and approved by the UNC Office of Human Research Ethics and by Kenya National Council for Science and Technology.Informed consent was received from all participants.
Training and capacity building are long established critical components of global water, sanitation, and hygiene (WaSH) policies, strategies, and programs. Expanding capacity building support for WaSH in developing countries is one of the targets of the Sustainable Development Goals. There are many training evaluation methods and tools available. However, training evaluations in WaSH have been infrequent, have often not utilized these methods and tools, and have lacked rigor. We developed a conceptual framework for evaluating training in WaSH by reviewing and adapting concepts from literature. Our framework includes three target outcomes: learning, individual performance, and improved programming; and two sets of influences: trainee and context factors. We applied the framework to evaluate a seven-month community-led total sanitation (CLTS) management training program delivered to 42 government officials in Kenya from September 2013 to May 2014. Trainees were given a pre-training questionnaire and were interviewed at two weeks and seven months after initial training. We qualitatively analyzed the data using our conceptual framework. The training program resulted in trainees learning the CLTS process and new skills, and improving their individual performance through application of advocacy, partnership, and supervision soft skills. The link from trainees' performance to improved programming was constrained by resource limitations and pre-existing rigidity of trainees’ organizations. Training-over-time enhanced outcomes and enabled trainees to overcome constraints in their work. Training in soft skills is relevant to managing public health programs beyond WaSH. We make recommendations on how training programs can be targeted and adapted to improve outcomes. Our conceptual framework can be used as a tool both for planning and evaluating training programs in WaSH.
785
Optimized estimator for real-time dynamic displacement measurement using accelerometers
In recent decades, many consumer products have seen significant miniaturization, although production machine tools have not seen an equivalent size reduction.A small size machine requires high machine accuracy, high stiffness, and high dynamic performance.The existing solutions to these requirements are antagonistic with small-size constraints.Numerous research efforts to develop small machines have been undertaken over the last two decades , however, most of these machines are still at the research stage.The µ4 is a small size CNC machine with 6 axes, which was developed by Cranfield University and Loxham Precision .This machine concept aims at having a high accuracy motion system aligned within a small size constraint.Machine tool frames have two key functions; 1.Transferring forces and 2.Position reference.There are three main concepts meeting the two required functions , which are shown in Fig. 1.In the traditional concept one frame is used for both functions.An additional Balance Mass for compensating servo forces concept .Separating the two functions by having an unstressed metrology frame .Concepts and can be combined to achieve superior performance .In a servo system, a force F is applied to achieve the required displacement of the carriage relative to the frame X.A flexible frame will exhibit resonances that are excited by the reaction of the servo-forces.A flexible frame is a significant dynamic effect influencing machine positioning device , especially in the case of small size machine .Fig. 2 shows a 2D model of linear motion system influenced by this dynamic effect.Realizing concepts other than the Traditional can improve the machine performance; however these concepts are not aligned with a small size requirement.On the other hand, a flexible frame limits the dynamic performance of the small size machine.Thus, a new positioning concept is required.A novel positioning concept, the virtual metrology frame, has been developed .By measuring machine frame vibrational displacement Xf and carriage position relative to the frame X, and fusing both signals, an unperturbed position signal Xmf is obtained.Thus, the flexible frame resonances in the plant were attenuated resulting in an improved servo bandwidth of up to 40% .The improved machine performance is as if the machine has a physical metrology frame.This novel concept does not require the physical components of a conventional metrology frame; however, realizing this concept requires a technique for real-time measurement of the frame displacement due to vibration.There are three significant constraints and requirements for measuring the frame vibrational displacement.First, a fixed reference point for measurement is not practical, since having a second machine frame is hard to realize due to the small size constraint.Second, noise characteristics should be comparable to the position sensor noise, e.g., linear encoder.Third, the measurement delays due to signal processing should be smaller than the servo controller update rate.There are various technologies for precision displacement sensors such as capacitive, eddy current, and inductive sensors ; however, implementing these sensors requires a fixed reference point.Strain sensors do not require fixed reference point, and are used for position control due to their simplicity and low cost ; however, their main drawback is that they require deformation of the measured component.Vibrational displacement is not necessarily a deformation at the point of measurement, and the deformation 
can be due to a remote compliance. Hence, the location for mounting the strain sensor is determined by the measured mode shape and its compliance, and not by the point of interest. Furthermore, there are only partially compensating techniques for the temperature dependence of strain measurements and their long term stability. Thus, strain sensors are not suitable for this purpose. Currently, there are a limited number of real-time implementations of displacement measurements based on integration in a control system. This is due to the requirement for small phase delay; filtering techniques for reducing phase delay can cause gain errors. High accuracy is feasible only for short duration measurements of narrow bandwidth motion by implementing bandpass filtering techniques. Bandpass filtering reduces the sensor noise outside the required bandwidth, but also causes phase delays. The standard deviation σ of acceleration based displacement measurements increases as εt^α, where t is the integration time, ε represents the accelerometer error, and α is in the range of 1–2. Hence, long term integration of acceleration signals has been largely unsuccessful. It has been shown to be achievable under specific conditions, e.g., integration in the continuous domain and a narrow bandwidth. The ever increasing standard deviation of acceleration based displacement poses a challenge for implementing it as a displacement sensor in a machine, whose typical operation time is long. In this paper, an optimization technique was used to solve the apparently antagonistic requirements for long term, real time, and high accuracy acceleration based displacement measurements. By constraining the measurements to only dynamic displacements, which occur at the flexible frame resonances, a Pareto optimal solution was found. In Section 2, we present the problem formulation by describing the experimental setup and the optimization problem. In Section 3, we present the displacement estimation noise analysis of the system under test. In Section 4, the estimator design, using a heave filter, is presented. In Section 5, we present the optimizer design and the optimization constraints and goals. In Section 6, the results of the optimization process are presented for zero placement and pole-zero placement filters. In Section 7, the optimal estimator performance is validated by comparing the displacement with laser interferometer measurements. We conclude the paper in Section 8. In this section we describe the experimental setup, a simplified motion module with a flexible frame and measurement equipment, and the optimization problem which was solved in this research. A simplified linear motion module, which represents one of the machine motion modules, consists of air bearings, a frame, a linear motor, a linear encoder, and a carriage; the motion module frame was fixed to a vibration isolation table. The driving force and the sensor are not applied at the center of gravity, but on the “master side”. Thus, the carriage movement relies on the high stiffness of the guiding system, which suppresses motion in undesired directions. The plant Frequency Response Function (FRF) was measured from the input force F to the position measurement X. The input force was a swept sine signal, with frequency 5–500 Hz, which was generated as a current command by the linear motion controller; this enabled analysis of the effects of the system's mechanical resonances. The plant FRF shows characteristics of the Antiresonance–Resonance type, which corresponds to the flexible frame and the guiding system flexibility
.Thus, Finite Element Analysis and Experimental Modal Analysis techniques have been used , which showed a flexible frame phenomena.Fig. 5 shows a flexible frame mode shape measured using EMA.The frame flexible mode is measured by the encoder due to the relative movement between the frame and the carriage, which appears as resonance in the plant FRF.This is because the encoder scale is mounted to the machine frame, while its read-head is mounted to the carriage.Low noise Integrated Electronics PiezoElectric accelerometers are the appropriate sensors for small vibration signals measurements due to their: low noise; wide dynamic, frequency, and temperature range; high sensitivity; and small size .Triaxial ceramic shear accelerometers were used for the EMA and measuring the frame displacement.The accelerometer sensitivity is 25 mV/g, the measurement range is ± 200 g peak, and the frequency range is 1–5000 Hz.The simplified motion module was fixed to a vibration isolation table to suppress any ground vibrations that may introduce extra noise in the measurements.A signal conditioner is required to power the IEPE accelerometer with a constant current, and to decouple the acceleration signal.A low noise analog gain switching signal conditioner was used.Digital Signal Processing was performed using a real-time target machine.It contains 16 I/O channels and 16 bit Analog to Digital Converter.The conversion time for each ADC is 5 µs.The target machine is optimized for MathWorks® SIMULINK® and xPC Target™.The frame displacement xf was estimated by measuring frame vibration af using low noise accelerometers; the signal was acquired by the ADC and passed through the estimator.It is composed of a High Pass Filter to reduce low frequency noise and a numerical double integrator.Laser interferometer was used to validate the estimated displacement."The laser light is split into two paths by a beam splitter, one that is reflected by a ``dynamic'' retroreflector and another reflected by a ``stationary'' retroreflector.The dynamic retroreflector was mounted to the machine frame, and an accelerometer mounted to the retroreflector.The stationery retroreflector was fixed using an optics holder.Note that this validation setup can only be realized on the simplified motion system, and not on the full machine.The displacement of the dynamic retroreflector is measured by counting the number of interference events.The interferometer can measure dynamic displacement at a sampling rate of 5000 Hz with a resolution of 1 nm.This section presents the noise sources in the acceleration measurement, and the effect of acceleration noise on the displacement estimation.There are three uncorrelated sources that contribute to the displacement measurement noise: accelerometer, signal conditioner and ADC.The Power Spectral Density of each source is usually specified by the manufacturer.The accelerometer has the lowest noise contribution; however, to improve the Signal to Noise Ratio a x100 gain is used.Thus, the accelerometer noise is the most significant noise source.Based on the noise analysis, the displacement estimator must reduce the low frequency noise significantly as there are at least ten orders of magnitude difference between the required and expected noise level.By plotting the CPS as a function of fs one can assess the minimum required cut-off frequency for an HPF.By finding the intersection of the CPS line with the required noise level, 30 nm, the minimum cut-off frequency was found to be 17 Hz.This section presents an 
estimator design based on a combination of a high pass filter and a double integrator, and phase correction techniques. Although integration is the most direct method to obtain displacement from acceleration, due to the 0g-offset and low frequency noise it is not appropriate to integrate the acceleration signal directly. The integration process leads to an output that has a Root Mean Square value that increases with integration time. This can be a problem even in the absence of any motion of the accelerometer, due to the 0g-offset. Displacement estimation based on digital integration was shown to have lower noise compared to analog integration. Furthermore, at high sampling rates the digital integration showed higher accuracy. Numerical integrators can be used in the time domain and in the frequency domain; however, using frequency domain techniques for real time applications is difficult, as they suffer from severe discretization errors if the discrete Fourier transform is performed on a relatively short time interval. Hence, the digital estimator is designed in the time domain. An HPF is used to remove constant or low frequency offsets and to reduce the low frequency noise. Without it, the double integrated signal would diverge due to the double integration behavior. Tuning the HPF to an optimized cut-off frequency, ωc, requires a trade-off between good tracking of the actual displacement, removal of sensor noise and offsets, and low phase errors. Larger gain and phase lead errors are associated with a high cut-off frequency, whereas a high noise gain is associated with a low cut-off frequency. This section describes the optimizer design, its constraints and goals, and the three error functions. The optimizer design is independent of the estimator design. During the last decade, it has been shown that evolutionary algorithms are useful in solving multi-objective optimization problems. There are various techniques which can be used; however, the Genetic Algorithm (GA) approach was chosen. It has been shown that GA is suitable for solving complex mechatronics problems, especially for signal processing, and for multi-objective problems. As described in Section 2, there are three displacement estimation error functions: the noise error function Jσ, the magnitude error function JMag, and the phase error function JPhase. In Section 3 it was shown that low frequency acceleration noise is the main contributor to Jσ; however, usually there is no noise specification at these frequencies. Hence, for this calculation, “0g-motion” acceleration noise measurements were used. A typical 0g-motion acceleration signal was measured at a sampling rate of 32 kHz for t = 20 s. The measured CPS and the expected CPS are in good agreement, as can be seen in Fig. 11. For simplicity, the frequencies fi that were used to calculate the error functions are the frame resonances obtained from the plant FRF; however, one can use a different frequency vector. This section presents the optimization results and the corresponding Pareto front graphs for the two estimator designs, ZPF and PZP. The Pareto front graph of the ZPF estimator with Butterworth contour is shown in Fig.
13.The main conflicting objectives are the phase error and noise level.No optimal solution which meets the optimization goals was found.Allowing an underdamped estimator, where 0 < ς ≤ 1/√2, improves the phase response at the expense of having a resonant peak.Hence, a higher cut-off frequency is possible which reduces the sensor noise and phase error; however, no optimal solution which meets all three requirements was found.Thus, a ZPF estimator design with either Butterworth contour or underdamp properties is not an appropriate solution for the problem.The Pareto front graph of the PZP estimator with Butterworth contour is shown in Fig. 14.Again in this case there is no solution which meets the requirements, although it shows better results compared to Fig. 13.Comparing the Butterworth contour and underdamped HPZP transfer functions emphasizes that a lower damping ratio allows higher cut-off frequency at the expense of a resonant peak.Above the cut-off frequency the ZPF design has a positive gain error, while in PZP designs it changes its sign.Moreover, a significant difference in the low frequency noise reduction between the ZPF and PZP designs can be observed.The PZP designs have a better noise reduction.This section shows the experimental results validating an optimal PZP estimator design.There are three main experiments for validating the estimator performance: robustness of the design; long term measurements at 0g-motion; and a comparison of displacement signals due to structural vibrations between laser interferometer sensor and the displacement based acceleration.Using the optimized estimator, 0g-motion measurement was made with four tri-axial accelerometers as shown in Fig. 6.The setup is detailed in Section 2.1.The signals were acquired at a sampling rate of 54 kHz for t = 600 s.The achieved displacement RMS is 27.6 ± 2.3 nm.Furthermore, the low variance between all of the accelerometers assures that the estimator design is robust, and not accelerometer dependent.Fig. 17a shows the estimated displacement of 0g-motion measurement, i.e. the RMS noise, of one typical accelerometer.The results are in agreement with the requirements.Fig. 17b shows the changes in displacement noise RMS over measurement time.As required from the estimator, 0g-offset and low frequency noise are attenuated which allows long term double integration without diverging.The validation was made by comparing the displacement measured by the laser interferometer and the acceleration based displacement measurement of the machine frame vibrations.The frame was excited using an oscillating position command generated by the linear motion controller, Xset = Ai·sin, at various frequencies ωi and amplitudes Ai.Note that Ai is the commanded carriage movement amplitude; hence the frame exhibits different displacement due to the servo reaction forces.The frame displacement amplitudes measured by the laser interferometer and acceleration based displacement were extracted using a Fast Fourier Transform.The discrepancy between the measurements meets the specified requirements.Fig. 
18 shows an example of the discrepancy in the measured frame displacement at 100 Hz. This research shows that accelerometers can be used to measure real-time displacement in the nanometer range without constraints on the integration time. Common displacement sensors require a reference point, which does not always exist. Thus, the novelty of this technique is the ability to measure the dynamic displacement of a structure without having a physical reference point, but instead using a “virtual” reference point. In doing so, it was assumed that the frame is initially in an unstressed state and at rest. The feasibility of this technique depends on the lowest frequency that is required to be measured, since low frequency noise is the most significant cause of displacement error. Although the displacement noise and measurement bandwidth met the requirements, by using an accelerometer with higher performance the displacement noise could be reduced significantly and the measurement bandwidth could be extended towards 0 Hz. Furthermore, the acceleration based dynamic displacement measurement technique offers an unlimited full-scale-range sensor in the nanometer range. The optimized estimator showed less than 10% variation in the displacement noise with different accelerometers, which demonstrates its robustness. The developed technique is essential to realize the virtual metrology frame concept. Thus, it was implemented in a machine with a flexible frame, improving its dynamic performance.
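To make the estimator structure concrete, the following is a minimal Python sketch of an acceleration-to-displacement estimator of the kind described above: a second-order analog high-pass filter discretized with the bilinear transform, combined with numerical double integration in the time domain. It is not the authors' ZPF or PZP design; the cut-off frequency, damping ratio, sampling rate, signal amplitudes and noise terms are placeholder assumptions, and the high-pass filter is applied before each integration stage simply to keep the example bounded. The residual error printed at the end is dominated by the phase lead of the high-pass stages, which is precisely the error that the optimization described above trades off against noise.

```python
# Illustrative sketch (not the authors' exact ZPF/PZP design): a 2nd-order
# high-pass filter plus numerical double integration to recover dynamic
# displacement from an acceleration signal. All parameter values are
# placeholder assumptions.
import numpy as np
from scipy.signal import bilinear, lfilter
from scipy.integrate import cumulative_trapezoid

fs = 10_000.0                      # sampling rate [Hz] (assumed)
t = np.arange(0, 5.0, 1.0 / fs)    # 5 s record

# Synthetic frame vibration: 1 um displacement at 200 Hz (above the cut-off)
f_vib, amp = 200.0, 1e-6
x_true = amp * np.sin(2 * np.pi * f_vib * t)
a_true = -(2 * np.pi * f_vib) ** 2 * x_true

# Accelerometer imperfections: constant 0g-offset plus low-frequency drift
a_meas = a_true + 2e-3 + 1e-3 * np.sin(2 * np.pi * 0.05 * t)

# 2nd-order analog HPF  H(s) = s^2 / (s^2 + 2*zeta*wc*s + wc^2),
# discretized with the bilinear transform
fc, zeta = 10.0, 0.5               # cut-off [Hz] and damping ratio (assumed)
wc = 2 * np.pi * fc
bz, az = bilinear([1.0, 0.0, 0.0], [1.0, 2 * zeta * wc, wc ** 2], fs)

def hp(sig):
    return lfilter(bz, az, sig)

# High-pass filtering before each integration stage keeps the estimate from
# diverging due to the offset and low-frequency noise.
v_est = cumulative_trapezoid(hp(a_meas), dx=1.0 / fs, initial=0.0)
x_est = cumulative_trapezoid(hp(v_est), dx=1.0 / fs, initial=0.0)

# Compare against the known displacement after the filter transients decay;
# the residual error mainly reflects the phase lead of the HPF stages.
steady = t > 1.0
err_rms = np.sqrt(np.mean((x_est[steady] - x_true[steady]) ** 2))
print(f"RMS estimation error: {err_rms * 1e9:.1f} nm")
```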
This paper presents a method for optimizing the performance of a real-time, long term, and accurate accelerometer based displacement measurement technique, with no physical reference point. The technique was applied in a system for measuring machine frame displacement. The optimizer has three objectives with the aim to minimize phase delay, gain error and sensor noise. A multi-objective genetic algorithm was used to find Pareto optimal estimator parameters. The estimator is a combination of a high pass filter and a double integrator. In order to reduce the gain and phase errors two approaches have been used: zero placement and pole-zero placement. These approaches were analysed based on noise measurement at 0g-motion and compared. Only the pole-zero placement approach met the requirements for phase delay, gain error, and sensor noise. Two validation experiments were carried out with a Pareto optimal estimator. First, long term measurements at 0g-motion with the experimental setup were carried out, which showed displacement error of 27.6 ± 2.3 nm. Second, comparisons between the estimated and laser interferometer displacement measurements of the vibrating frame were conducted. The results showed a discrepancy lower than 2 dB at the required bandwidth.
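The multi-objective search itself can be illustrated with a hedged stand-in. Since the excerpt does not give the exact definitions of Jσ, JMag and JPhase or the genetic algorithm settings, the sketch below uses a random search over candidate (cut-off frequency, damping ratio) pairs of a second-order high-pass filter, scores each candidate with simplified proxies for the phase error, gain error and low-frequency noise contribution at assumed frame resonance frequencies, and extracts the non-dominated (Pareto-optimal) set. All frequency values, search ranges and the noise proxy are assumptions for illustration only.

```python
# Hedged stand-in for the paper's multi-objective optimization: a random
# search (instead of a GA) over candidate (fc, zeta) pairs of a 2nd-order
# analog HPF, scored with simplified phase, gain and noise objectives and
# reduced to the non-dominated (Pareto) set.
import numpy as np
from scipy.signal import freqs

rng = np.random.default_rng(0)
f_res = np.array([80.0, 150.0, 320.0])      # assumed frame resonances [Hz]
w_res = 2 * np.pi * f_res
f_noise = np.linspace(0.1, 5.0, 50)         # band where sensor noise dominates
w_noise = 2 * np.pi * f_noise

def objectives(fc, zeta):
    wc = 2 * np.pi * fc
    b, a = [1.0, 0.0, 0.0], [1.0, 2 * zeta * wc, wc ** 2]
    h_res = freqs(b, a, w_res)[1]
    h_noise = freqs(b, a, w_noise)[1]
    j_phase = np.max(np.abs(np.angle(h_res)))              # rad at resonances
    j_mag = np.max(np.abs(20 * np.log10(np.abs(h_res))))   # gain error in dB
    # Proxy for displacement noise: flat acceleration noise passed through the
    # HPF and double integration (1/w^2 weighting).
    j_noise = np.sqrt(np.mean((np.abs(h_noise) / w_noise ** 2) ** 2))
    return np.array([j_phase, j_mag, j_noise])

cands = np.column_stack([rng.uniform(5.0, 40.0, 500),      # fc   [Hz]
                         rng.uniform(0.2, 1.0, 500)])      # zeta [-]
scores = np.array([objectives(fc, z) for fc, z in cands])

def pareto_mask(s):
    # A candidate is Pareto-optimal if no other candidate is at least as good
    # in every objective and strictly better in at least one.
    keep = np.ones(len(s), dtype=bool)
    for i in range(len(s)):
        dominated = np.all(s <= s[i], axis=1) & np.any(s < s[i], axis=1)
        keep[i] = not dominated.any()
    return keep

front = cands[pareto_mask(scores)]
print(f"{len(front)} Pareto-optimal (fc, zeta) candidates out of {len(cands)}")
```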
786
Long-term performance and life cycle assessment of energy piles in three different climatic conditions
Global energy requirements are expected to expand by 30% by 2040 as a result of global economy growth with an annual rate of 3.4%, a projected population increase of 1.6 billion, and inevitably increasing urbanisation ."Space heating and cooling is the world's largest energy sector, for instance, it accounts for 50% of the final energy consumption of Europe .It was also responsible for 28% of global energy-related CO2 emissions in 2017 .Fossil-fuel-based and conventional electric equipment still dominate the global building market, which accounts for more than 80% of the heating equipment .Moreover, owing to global warming, economic growth, and urbanisation, the use of energy for space cooling has more than tripled between 1990 and 2016 .In this context, the development and diffusion of reliable, economically viable, and environment-friendly technologies for meeting a significant portion of the energy requirements of the building sector is an important challenge.The energy pile concept is a technology that enables the use of renewable energy sources for efficient space heating and cooling.In this system, the piles that are already required for structural support are equipped with geothermal loops for performing heat exchange operations to exploit the near-surface geothermal energy.The idea behind the energy geostructures comes from the fact that the temperature of the ground remains the same throughout the year after a certain depth.Therefore, with the integration of the geothermal loops and the heat carrier fluid circulating within them, the heat is extracted from the ground to heat the buildings during winter.Similarly, during summer, the extra heat is injected into the ground to cool them.In this system, ground source heat pumps are often required which work intermittently in order to adapt the temperature of the circulating fluid to meet the energy demands from the building side.Given the great potential of energy piles for reducing the dependency on fossil fuels, various in situ tests were performed on this subject .Moreover, several models or tools with varying complexity were developed for the analysis and design of energy piles .Although the previous research has answered the majority of the fundamental questions on the mechanisms governing the thermo-mechanical behaviour of energy piles, in these studies the temperature changes have been imposed to the test piles instead of being natural consequences of an actual operation.In a few studies, the long-term behaviour of energy piles employed in real operations have been monitored .Nevertheless, no experimental data has yet been published in order to perform a systematic comparison of the long-term performance of energy piles under different climatic conditions, i.e., energy piles being subjected to various heating and cooling demands.In addition, although the role of geotechnical engineering in sustainable development is being increasingly recognized , there still exist uncertainties related to the actual environmental impact of these so-called green geostructures on their life cycle, which is influenced by the material production, transportation, execution, use, and end of life and greatly depends on the demand/supply relationship between the upper structure and energy piles.Considering the above-mentioned challenges, a 3D finite element model for a group of energy piles was developed which is capable of taking into consideration the real operating philosophy of a GSHP, i.e., intermittent operation.Moreover, the actual space heating 
and cooling demands of a reference office building from three cities in Europe were employed in the model to represent three diverse climatic conditions.In this paper, the numerical model is first described in detail.The heating and cooling demands versus supply data, temperature of the heat carrier fluid as well as of the piles and the soil are then reported for the purpose of comparison.Next, a life cycle assessment model is implemented to estimate the environmental impacts of the energy piles employed in the different cities.Finally, a comparison with a conventional heating and cooling system is presented in terms of human health, ecosystem quality, climate change, and resource depletion.One of the main goals of this study is the assessment of the long-term performance of energy piles in different climatic conditions.To obtain a thorough comparison in this respect, the space heating and cooling demands for a reference building type at different climatic conditions should be employed as the input in numerical simulations.The ENTRANZE Project presents the necessary data for this purpose, where the heating and cooling energy demands for four different reference building types are systematically determined using the whole building energy simulation program, EnergyPlus.Among the reference buildings employed in the ENTRANZE Project, the reference office building was selected for the analysis presented in this paper.The reference building is a medium-size, five-story building with 3-m high floors.The net heated area of the building is 2400 m2.Each floor of the building is of the same size of 30 m × 16 m, in length and width, respectively.Within the ENTRANZE Project, 10 key cities, in other words ten climatic conditions, in Europe are reported to be selected for building energy simulations while considering the winter severity index, summer severity index, and climatic cooling potential as indicators.From among the 10 cities, three were selected for the present study: Seville, Rome, and Berlin.These cities, in particular, were selected to represent three different climatic conditions and diverse space heating and cooling demands.Seville, with a high SSI and low WSI, represents the case of a heat pump operation more on the cooling side.In contrast, Berlin represents a case with high heating and significantly low cooling demands.Furthermore, the heating–cooling demands in Rome lie between those of the two former cities with almost balanced space heating and cooling requirements.A 3D time-dependent finite element model was built using the COMSOL Multiphysics Software to investigate the long-term performance of energy piles, thus allowing the intermittent operation of the heat pump.In other words, in the presented model, the heat pump operates until the daily heating/cooling demand of the building is met, following which the operation is automatically terminated until the next day.The intermittent operation of the heat pump allows the temperatures of the pile and soil to recover to some extent during the stoppage times, which is also the case for the actual geothermal operations of energy piles.The heating–cooling demands presented in the previous section were employed in the model for this purpose.Although the employed mathematical formulation has proved to be adequate in modelling heat transfer in pipes and porous media regarding energy piles , the enhancement of the model with GSHP remains to be corroborated with the experimental data becoming available.With reference to the foundation of the 
reference building, 32 piles that were 0.5 m in diameter and 20 m in length were employed.The piles had a 4.75-m and 5.25-m centre-to-centre spacing in the x- and y-directions, corresponding to pile spacing ratios of 9.5 and 10.5, respectively.Each pile was equipped with a single U-loop pipe, with a central distance of 0.3 m between the entering and exiting pipes.Regarding the discretization of the model, mesh independence analyses were performed and element quality was controlled systematically in order to avoid erroneous interpretation of the model results.The model comprises of extremely fine and extra fine meshes of 902,104 elements in total to characterize the soil and pile domains.Tetrahedral, triangular, linear and vertex elements were employed to describe the finite element model.Regarding the pipes, Pipe Flow Module of COMSOL Multiphysics Software was employed, which idealizes the 3D flow within pipes to edge elements.The mesh for the pipes were defined by 8075 edge elements.The Neumann boundary condition with no heat flux is assigned to the ground surface since in most energy pile applications, thermal insulation is ensured between the slab and the upper environment.On the other hand, prescribed temperature boundary condition is specified for the vertical sides and the bottom boundary.The size of the soil domain is taken large enough at distances where the heat exchange operations have no effect, in order to avoid any boundary effects.The average annual ground temperatures for three cities, which is determined by relating the air temperature to the ground temperature , are assigned to the vertical sides and bottom boundaries, as well as to all the materials used in the model as the initial condition.In the presented model, the climatic conditions of the three cities were used to define the average ground temperature and also the heating and cooling demands from the building side.However, the soil conditions and material properties were the same for the three cities for the purpose of systematically comparing the long-term response of energy piles to three different energy demands.The soil domain in the model is assumed to be isotropic, fully saturated, medium dense sand.The pile domain is assumed to be reinforced concrete.The material properties for both the soil and pile domains are presented in Table 1.Table 2presents the properties assigned to the pipes for which 1-inch, cross-linked polyethylene pipe type was assumed.Regarding the water circulating within the pipes, temperature-dependent properties were assigned for the density, thermal conductivity and specific heat capacity determined by Ref. .A 30-kW water-to-water heat pump is used in the analysis conducted for the three reference cities.To simulate the intermittent operation of the heat pump, indicator states are included in the model, which monitor whether the daily heating/cooling demand is fulfilled for each time step.Once the demand is fulfilled, the heat pump operation stops until the next day, but the water still continues to flow within the pipes, thus allowing thermal recovery.A schematic of the GSHP system is presented in Fig. 
3 to demonstrate the three main sections: primary circuit, GSHP and secondary circuit, as well as the interaction between each section.In addition to the GSHP system, an auxiliary system is also taken into consideration, if the GSHP system is not competent in delivering the entire heating/cooling demand.The amount of energy supplied by the auxiliary system is particularly important in this study as it is essential to employ it in the LCA analysis.The inlet and outlet temperatures of the water circulating within the piles are not constant but vary depending on the heating/cooling demand of the secondary circuit, coefficient of performance of the heat pump, as well as the heat transfer within the pipes and porous media.As expected, the efficiency of the heat pump increases as the temperature difference between the source and delivery temperature decreases and as the efficiency factor increases.For the present study, a designated efficiency factor of 0.5 is used for the heat pumps in the three reference cities.An algorithm has been employed in the model, which determines the COP depending on the source temperature and); this allows the COP to vary with time for the reference cities.In the presented model, the primary circuit of the system is completely modelled in the performed numerical analyses while the contributions provided by the GSHP, secondary circuit, and auxiliary system are idealised as described above.The idealised modelling of the GSHP and secondary circuit allows to achieve the fraction of the heating/cooling demand to be supplied by the energy piles, while the portion of the energy that cannot be supplied by the GSHP system is assumed to be covered by the auxiliary system.The environmental performance of energy piles should be compared with that of a conventional heating and cooling system to demonstrate its benefits from an environmental point of view.In this study, the LCA methodology is adopted to evaluate the potential environmental impacts while taking into consideration the material extraction, transportation, execution, use, and disposal.The analyses are performed by implementing the LCA model in the software SimaPro 8.0.3 .The international standards ISO 14040, 2006 and ISO14044 describe the LCA methodology and the related analysis phases such as goal and scope definition, life cycle inventory, life cycle impact assessment, and interpretation, which are followed in this study.The goal and scope definition step includes the definition of the functional unit, the reference flow, and the boundaries of the system.In this work, the functional unit has been defined as follows: “To fulfil the heating and cooling demands of an office building for one year”.The considered design time spans of the building and the electric heating/cooling system were assumed to be 50 years and 20 years, respectively, and were accorded to the functional unit.The reference flow and system boundary are illustrated in Fig. 
5.Two scenarios were selected in the present work to satisfy the annual heating and cooling demand for the three reference cities.In the first case, the function of the deep foundation was only to transfer the mechanical loads to the subsoil while a gas boiler and air conditioner were selected to meet the heating and cooling demands.In contrast, in the second case, the coupling between a group of energy piles and a GSHP was considered.The flows between the investigated system and the environment, in terms of input and output products, resources, wastes, and emissions, were identified during the LCI.The input data of the LCI are reported in Tables 3 and 4 for the conventional system and energy piles, respectively.The amount of materials was obtained following the geotechnical design of the group of piles .The transportation distance was hypothesised as 50 km while the drilling time was obtained in consultation with a specialised company.With respect to the use phase, the amount of energy required in terms of natural gas or electricity was obtained while considering the available heating and cooling demands and the results of the finite element analysis simulation reported in Subsection 3.1.The latter allowed for the estimation of the geothermal energy that can be exploited by the thermal activation of the piles for the considered scenarios.According to the different heating and cooling demands and in situ temperature of the ground, the GSHP system behaves differently for each considered city, and therefore, the corresponding impact on the environment differs.With respect to the electricity supply, national flows were selected with respect to the three reference cities.The disposal scenario takes into consideration the recycling of all the involved materials, while assuming the recycling rates of the construction and demolition waste for the corresponding countries .The last column of Tables 3 and 4 reports the selected environmental flow from the LCI database ecoinvent .During the LCIA phase, the inventoried flows contributing to a given environmental impact category were achieved, the results of which are presented in terms of two main types of indicators selected at two different levels of the impact pathway: midpoint and end points.The midpoint indicators usually indicate a change in the environment caused by a human intervention, while endpoint or damage indicators assess damages to three areas of protection, i.e., human health, ecosystem quality, and resources.The Impact 2002+ method was selected for performing the LCIA for all the considered cases.The end point results are presented in terms of climate change, human health, resource depletion, and ecosystem quality.The results of the LCA are presented for the three reference cities mentioned above in order to investigate the influence of the heating or cooling demands on the environmental performance of the investigated systems.Due to the difference in heating and cooling demands among the reference cities, the outputs of the FEM differ for each case which consequently affects the environmental analysis of the examined cases.Both the 3D finite element model and the LCA analysis were employed to reveal the long-term energy performance and environmental impacts of an energy pile project in three different cities.Fig. 
6 shows the influence of the intermittent operation of the GSHP on the water temperature circulating within the pipes during the transition from heating-to-cooling and cooling-to-heating operational modes.The example plot is given for Seville.The upper part of the figures shows the temperature of the water while the lower one shows the operation and stoppage periods within the same timeframe.The figure shows the decrease in the temperature of the circulating fluid with the operation of the heat pump during the heating mode, and the recovery of the temperature during the stoppage times, which is the contrary for the case of cooling.A temperature difference of 2–4 °C between the operation and stoppage times can be observed in the figure.The same simulation principle was applied for Rome and Berlin wherein the model was employed for 1 year without interruption.In this section, the results of the analyses for the three cities under consideration are presented in terms of energy demand and supply, temperature change along the energy piles and the surrounding soil, and LCIA in terms of climate change, human health, resources, and ecosystem quality.The comparison of the monthly heating and cooling demands from the building side and the available output from the primary circuit are presented in Fig. 7, along with the seasonal fluctuation of the heat carrier fluid.The demand/supply balance was checked iteratively at 30-min intervals for each day of the simulation, which resulted in a slightly higher supply as compared to the demand during some months if the demand was provided at a shorter time than 30 min.The comparison of the demand and supply shows that the majority of the heating and cooling demand in Seville was met by the GSHP, while an auxiliary cooling system was employed only for the month of July to cover the remaining 13% of the cooling demand.Similar results were obtained for Rome; although the heating demand was higher than that of Seville, the GSHP was capable of realising the required supply, while an auxiliary cooling system was required for the peak cooling periods.In the case of Berlin, the heating and cooling demands, which are characterised by dominant heating and limited cooling, were quite diverse as compared to the former two.The requirement of an auxiliary heating system for the four months of November, December, January, and February, i.e., during winter, can be observed in Fig. 
7 c, while the limited cooling demand was met entirely by the GSHP.The initial temperatures of the involved elements, which were considered to be constant, were determined by relating the air temperatures to the ground temperatures, which corresponds to 19.2 °C, 15.2 °C, and 9.8 °C for Seville, Rome, and Berlin, respectively.The initial conditions also involved the temperature of the fluid exiting the energy piles as being originally equal to the average ground temperature.Following the start of the geothermal operations, Tout, prim showed an annual fluctuation, which was associated with the corresponding heating and cooling demands from the building side.The temperature decrease during the heating operation which was followed by a recovery and a temperature increase period are in agreement with the case studies presented by Brandl .As a result of their unique roles, the energy piles are exposed to daily and seasonal temperature variations during their lifetime.Temperatures in the pile and in the surrounding soil fluctuate during the day in between operation and stoppage times resulting in short-term temperature changes.Furthermore, there is a seasonal increase in temperatures after episodes of heat injection during summer followed by seasonal temperature reductions during heat extraction in winter.These temperature changes may cause axial displacements, additional axial stresses, and changes in the shaft resistance, with a daily and seasonal cyclic nature, along their lengths.Moreover, geothermal operations characterised by excessive heat extraction may cause temperatures along the energy piles to decrease below zero, eventually resulting in the formation of ice lenses in the adjacent soil.To prevent the freezing and thawing of the soil during successive heating and cooling operations, which are associated with heave and settlement, a minimum temperature of 2 °C on the shaft of the energy piles is recommended .Finally, a change in the in situ temperature of the ground in the long-term due to geothermal operation of the energy piles should be prevented as it may have a significant impact on the efficiency of the GSHP system.Therefore, the appropriate prediction and monitoring of the temperature fluctuations along the energy piles and surrounding soil is of paramount importance.To investigate this phenomenon, the temperature evolution along the centre pile and that in the surrounding soil during the geothermal operation in the three cities are presented in Fig. 
8.The maximum temperature decrease and increase due to heat extraction and injection, respectively, and the residual temperature change along the energy pile after 1 year of geothermal operation are specified in the figure.The temperature variations with respect to the in situ temperature of the piles are within the typical range for operating energy piles .The comparison of the temperature variations along the energy pile and surrounding soil reveals that the soil closer to the energy pile exhibits a rapid response to the geothermal operation with higher temperature variations, which lags behind and decreases in magnitude at a greater distance from the piles.Moreover, the soil at a 2.4-m distance, which is equidistant from the two rows of energy piles, experiences temperature variations as well, although very limited, which evidences the thermal interactions between the neighbouring piles.Considering the ground temperature variation during the geothermal operation of the energy piles, three types of thermal responses are observed in Fig. 8: long-term temperature increase in the case of Seville due to cooling-dominant geothermal operation, thermal balance in the case of Rome, and long-term temperature decrease in the case of Berlin due to heating-dominant geothermal operation.Influences on the in situ temperature of the ground in the long-term should be avoided during geothermal operations, which significantly depends on the balance of the heating and cooling demands from the building side, as well as the ground water flow corresponding to the natural thermal recharge of the soil, which was not taken into consideration in the present study.In the case of low permeability, the balance between heat injection and extraction should be ensured for the long-term thermal equilibrium of the ground temperature.In contrast, in the case of soils characterised by high permeability, with a ground water flow greater than 0.5 m/day, the ground temperature equilibrium may be ensured by the groundwater flow, after unbalanced heating-cooling operations .Therefore, a special thermal design for the specific GSHP operations is of paramount importance for taking into consideration the space heating/cooling needs and hydrogeological ground conditions, in order to ensure that despite seasonal fluctuations, the in situ ground temperature remains the same in the long-term.The results reported in Section 3.1 in combination with the available data from the ENTRANZE Project were used for the LCI of the two systems for the three reference scenarios.The first remarkable outcome of the LCIA is the confirmation that the heating and cooling are the main contributors to the climate change impact with respect to the other life cycle stages.Considering a conventional pile foundation and a conventional heating and cooling system, Fig. 
9 shows that the use phase contributes up to 98% of the total climate change impact, while the residual 2% is represented by the other steps of the geostructure LC. These results clearly identify the LC phase wherein it is potentially possible to reduce the impact on the environment. The energy piles, by exploiting geothermal energy, can in principle aid in reducing the impacts of the use phase. Nevertheless, there are a number of key points that need to be considered before arriving at this conclusion: additional materials and different energy sources are required to be introduced in the LC of the system when adopting the energy pile concept. Moreover, the environmental performance is highly dependent on the heating and cooling demands, which consequently vary during the different periods of the year. Finally, the way the energy is produced to satisfy heating and cooling needs differs depending on the country under consideration. Therefore, introducing the same energy source in an LC model can result in different environmental impacts as the country under consideration changes. The outcomes of the LCIA are used in this study to examine these features. Fig. 10 shows the results of the LCIA in terms of the four “endpoint” indicators employed by the Impact 2002+ method, while the “midpoint” indicators are reported in the appendix, in Figs. A.1, A.2, and A.3. It is directly noticeable that the environmental performance of both investigated systems strongly depends on the country and on the heating and cooling demands. For example, considering only the conventional system, the indicators presented different scores for the three reference cities. Generally, the environmental analysis rewarded the energy piles, which showed a considerable reduction of impacts. With reference to the cities of Seville and Rome, the score of all the indicators was significantly in favour of the energy piles, which showed a reduction of 65% and 55% in terms of the equivalent CO2 emissions, 61% and 64% in terms of human health, 64% and 60% in terms of resource depletion, and 57% and 42% in terms of ecosystem quality, respectively. A lower reduction of the impacts was found in the case of Berlin: 13% in terms of climate change, 52% for human health, and 14% for resource depletion. The ecosystem quality was the only score in favour of the conventional system. Based on the midpoint indicators reported in the appendix, the ecosystem-related midpoint indicators in line with the endpoint results for ecosystem quality are ionising radiation, land occupation, ozone layer depletion, terrestrial ecotoxicity, and mineral extraction. These results are explained by the high amount of auxiliary energy required to meet the heating demand in Berlin, which could not be satisfied by exploiting only the geothermal source, and by the consequent high electricity demand for the use of the energy pile system. Moreover, the analysis of the midpoints reported in the appendix clearly illustrates why the energy piles have a negative impact on ozone layer depletion in each city. This is due to the presence of refrigerant in the infrastructure of the heat pump. Fig.
11 reports a detailed overview of the environmental indicators for all the considered scenarios.The results are reported for each month of the reference year.As was expected, the peak values of the indicators can be identified during winter for the case of Berlin and during summer for Seville and Rome.It is interesting to note that, on comparing the conventional system with the energy piles during the reference year, a reduction in the impacts during the heating periods was always noted, but during the cooling periods, the energy piles contributed significantly to reducing the score of the indicators.This means that the energy piles not only contributed to providing a clean energy source for the heating of the building but they were especially efficient from an environmental point of view during the cooling periods.This is a significant advantage for the energy piles over other technologies exploiting renewable energy, which mainly satisfy only the heating demand.This paper presents the long-term performance of a group of energy piles in terms of meeting the heating and cooling demands of a reference office building in three different climatic conditions.For this purpose, a 3D finite element model was developed, which is capable of taking into consideration the intermittent operation of a GSHP, as well as the heating and cooling demands from the building side with a monthly varying nature.The results obtained on using the finite element model, in terms of meeting the heating and cooling demands from the building side, were employed to perform an LCA analysis.With this study, finally a quantitative comparison of the environmental impact between a conventional heating and cooling system and energy piles has been presented in terms of climate change, resource consumption, human health, ecosystem quality.The comparison among three reference climate scenarios has provided clear evidences regarding the adoption of the energy pile system in the selected areas.With the use of energy piles, the LCA demonstrated a reduction in terms of equivalent CO2 emissions, human health, resources depletion, and ecosystem quality for Seville and Rome, respectively, while the reduction was lower in the case of Berlin.The comparison of the conventional and GSHP systems showed that the energy piles yielded the highest reduction in indicators during the cooling periods, which is considered to be partially related to the higher coefficient of performance of the GSHP during summer months.According to this study, the energy pile technology, providing a clean energy source for both heating and cooling of the buildings, is more efficient from an environmental point of view compared to conventional systems.Moreover, their environmental performance has revealed to be especially satisfactory during the cooling periods, which is a significant advantage with respect to other renewable energy technologies satisfying solely the heating demand.
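The heat pump bookkeeping described above (intermittent daily operation, a source-temperature-dependent COP with an efficiency factor of 0.5, and an auxiliary system covering any shortfall) can be sketched as follows. This is a simplified, heating-only illustration and not the finite element model itself: the Carnot-based COP relation is the standard textbook approximation consistent with the description above, while the delivery temperature, daily capacity limit and demand figure are placeholder assumptions.

```python
# Hedged sketch of the intermittent GSHP bookkeeping: the heat pump runs only
# until the daily heating demand is met (or its capacity is exhausted), a
# Carnot-based COP with the stated efficiency factor of 0.5 converts delivered
# heat into electricity use and ground heat extraction, and any shortfall is
# assigned to the auxiliary system. Temperatures and demand are placeholders.
from dataclasses import dataclass

EFFICIENCY_FACTOR = 0.5          # efficiency factor stated for the reference cities
CAPACITY_KW = 30.0               # heat pump size used in the analysis

def cop_heating(t_source_c: float, t_delivery_c: float) -> float:
    """Carnot-based heating COP scaled by the efficiency factor (assumed form)."""
    t_src, t_del = t_source_c + 273.15, t_delivery_c + 273.15
    return EFFICIENCY_FACTOR * t_del / (t_del - t_src)

@dataclass
class DayResult:
    delivered_kwh: float         # heat supplied by the GSHP
    electricity_kwh: float       # compressor electricity
    ground_kwh: float            # heat extracted from the ground via the piles
    auxiliary_kwh: float         # demand left to the auxiliary system

def simulate_day(demand_kwh: float, t_source_c: float,
                 t_delivery_c: float = 35.0) -> DayResult:
    max_supply = CAPACITY_KW * 24.0              # upper bound for one day
    delivered = min(demand_kwh, max_supply)      # stop once the demand is met
    cop = cop_heating(t_source_c, t_delivery_c)
    electricity = delivered / cop
    ground = delivered - electricity             # energy balance at the piles
    return DayResult(delivered, electricity, ground, demand_kwh - delivered)

# Example: a cold winter day with a 900 kWh heating demand and a 10 C fluid
# temperature returning from the energy piles (both illustrative numbers).
print(simulate_day(900.0, t_source_c=10.0))
```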
The main purpose behind the use of energy piles is to enable the exploitation of geothermal energy for meeting the heating/cooling demands of buildings in an efficient and environment-friendly manner. However, the long-term performance of energy piles in different climatic conditions, along with their actual environmental impacts, has not been fully assessed. In this paper, the results of a finite element model taking into consideration the heating and cooling demands of a reference building, and the intermittent operation of a ground source heat pump, are revealed to examine the long-term performance of energy piles. Furthermore, a life cycle assessment model is implemented to compare the environmental performance of energy piles and a group of conventional piles. The environmental enhancement provided by the adoption of a ground source heat pump system is quantified with respect to a conventional heating and cooling system. The obtained results show that (i) the energy pile system can meet the majority of the heating/cooling demands, except during the peak demands, (ii) the geothermal operation results in temperature fluctuations within the energy piles and the soil, (iii) the use of energy piles results in a significant reduction in environmental impacts in the majority of the examined cases.
787
Performance of the phonatory deviation diagram in the evaluation of rough and breathy synthesized voices
Traditionally, vocal assessment includes the investigation and integration of perceptual-auditory, laryngeal, aerodynamic, acoustic, and self-assessment data.1,2,Specifically, perceptual-auditory evaluation and acoustic analysis are the main tools used by the speech therapist/audiologist to characterize the vocal quality deviation observed in voice disorders.3,Studies in the area of voice disorder evaluation and diagnosis aim to investigate three essential clinical issues3: the ability of the measure to determine the presence/absence of a voice disorder; the evidence that the test used can determine the origin of a voice disorder; and the ability of a measure to determine the extent of a voice disorder.The perceptual-auditory voice assessment includes from the definition of the present deviation intensity to the emission and predominant vocal quality, in case of deviated emissions.The descriptors “roughness”, “breathiness” and “tension” are universally used4,5 to characterize dysphonic voices, showing a correlation in the physiological and acoustic planes.6–8,However, the roughness and breathiness parameters are considered more robust, whereas tension is a less reliable quality with great inter-rater variability, which justifies its omission in some perceptual-auditory evaluation protocols.9,10,The acoustic analysis corresponds to the sound signal recording, which is the complex product of the non-linear interaction of the biomechanical and aerodynamic properties of the vocal production system.8,It provides an indirect estimate of the vibratory patterns of the vocal folds, the vocal tract, and its different adjustments, contributing to the task of vocal quality analysis and classification.11–14,Jitter and shimmer are among the main acoustic measures based on linear models of vocal production and used in the clinical context.15,These are measures that analyze the fundamental frequency disturbance index, that is, the control of vocal fold vibrations, and the amplitude disturbance index, which is related to glottic resistance.16,17,In addition to disturbance measures, noise measurements such as Glottal to Noise Excitation and Harmonic-Noise Ratio are also widely used in the clinical context,8,18,19 as they demonstrate whether the vocal signal originates from vocal fold vibrations or the presented air current, as well as of the regular signal of the vocal folds in relation to the irregular signal of the vocal folds and the vocal tract, correlating the harmonic noise versus the wave noise component.17,19,20,In general, a deviant emission tends to combine different components of noise and disturbance, so that studies using combined measures may better represent the auditorily perceived vocal quality deviation.8,16,20–23,In this context, the Phonatory Deviation Diagram or hoarseness diagram24–26 offers the possibility of the combined analysis of disturbance measurements and noise, making it an important tool for the evaluation and monitoring of voice disorders.17,27–30,One of the great challenges of vocal assessment is the integrated analysis of data, which includes the acoustic and perceptual-auditory information.31,One of the possible solutions suggested for a better understanding of the associations between the acoustic and perceptual phenomena related to the vocal signal is the development of researches with voices generated by synthesizers.32,Synthesized voices have highly controlled and known acoustic properties and production conditions, which contributes to the understanding of the 
mechanisms underlying the auditorily perceived vocal quality deviation.Synthesizers simulate vocal production deviations such as roughness, breathiness, and tension, from the manipulation of disturbance parameters, noise, and tension/symmetry differences between the vocal folds, respectively.33,Therefore, considering that the identification of the presence and degree of roughness and breathiness are part of the clinical vocal evaluation routine, that PDD is an important tool in the evaluation and monitoring of voice disorders, and that the use of synthesized signals allows greater control of the stimulus and can elucidate conditions underlying the perceived deviation, the aim of this research is to analyze the performance of PDD in the discrimination of the presence and degree of roughness and breathiness in synthesized voices.For this purpose, two hypotheses were raised: there are differences in the PDD parameters regarding the identification of voices with and without roughness and breathiness; there are differences in the PDD parameters regarding the identification of signals with different degrees of roughness and breathiness.This is a documented descriptive, and cross-sectional study carried out at the Voice Laboratory of the Speech Therapy and Audiology Department of a university.It was evaluated and approved by the Research Ethics Committee of the institution, under Opinion n. 508200/2013.This study used a set of synthesized voices developed by the VoiceSim synthesizer.33,The synthesizer consists of a computer system containing a vocal fold model and a representation of the vocal tract in the format of concatenated tubes, through which an acoustic wave propagates.32,Vocal deviations of roughness and breathiness were produced from the manipulation of acoustic parameters of fundamental frequency disturbance, additive noise and tension asymmetry between the vocal folds.33,Roughness was generated by manipulating the duration of the cycle of glottic excitation and jitter, with the introduction of a stochastic disturbance in the vocal fold tissue tension, using the formula: ΔK = αεK; where /α/ is a scale parameter, /ɛ/ is a random variable, and /K/ is a coefficient of vocal fold stiffness.Breathiness was generated with the insertion of additive noise, according to the formula: Δμ = bεμ where /μ/ is the glottal airflow rate, /b/ is a scale parameter, and /ɛ/ is a random variable, similar to jitter.The tension asymmetry parameters between the vocal folds, subglottic pressure and vocal fold separation were also controlled during the production of these synthesized signals.For more details on the synthesizer, please refer to the available literature.33,The speech material of the synthesized stimuli was the vowel /ɛ/ sustained for 3 s.This vowel was chosen because it is commonly used in vocal and laryngeal evaluation procedures in Brazil,34 also considering that it is an oral, medium, open, and unrounded vowel, considered the most medium vowel of Brazilian Portuguese,34 which allows a more neutral and intermediate position of the vocal tract.Therefore, 871 synthesized vocal signals were used, of which 426 were female and 446 were male signals, with different combinations of the previously mentioned acoustic parameters.The acoustic analysis was performed using the VoxMetria software, version 4.5 h, by CTS Informática, in the vocal quality module.The PDD was used for this evaluation, in order to analyze the distribution of vocal signals according to area, quadrant, shape, and density.Regarding 
the area, the software itself indicates whether the vocal signal is inside or outside the normal range.As for the quadrants, the PDD was divided into four equal quadrants17: lower left, lower right, upper right and upper left.Regarding the distribution of the points in relation to density, the points concerning the distribution of the vocal signals were classified as concentrated, when the points were distributed inside a space corresponding to one square, or amplified, when the points were distributed throughout the space corresponding to more than one square of the PDD.The shape classification was performed using a simple 10-cm ruler on the printed sheet of each PDD generated by the software, corresponding to the image of each analyzed vocal signal, with no previous knowledge of the vocal deviation intensity and the predominant voice type.The points concerning the distribution of vocal signals were categorized as vertical, when the distance between the points along the abscissa was lower than along the ordinate; horizontal, when the distance between the points along the abscissa was higher along the ordinate; and circular when the distance between the points along the ordinate and the abscissa was approximately the same.17,The perceptual-auditory evaluation session took place in a quiet environment and was performed by a speech therapist/audiologist who was also a voice specialist with more than 10 years of experience in this task.The evaluator was instructed that voices should be considered normal when they were socially acceptable, naturally produced, without any irregularity, noise, or effort observable during the emission.The evaluator was also instructed that roughness would correspond to the presence of vibratory irregularity and breathiness would be associated with audible air escape during the emission.The evaluator was trained with anchor stimuli, containing normal emissions, and deviated ones at different degrees, as well as predominantly rough and breathy voices.Moreover, the evaluator was instructed about the cutoff values that would be used in this study,10 to categorize voices regarding the absence and presence of roughness and breathiness.For the assessment, the evaluator used a Visual Analogue Scale, with a metric scale of 0–100 mm, evaluating the intensity of vocal deviation and the roughness degree and breathiness degree.The evaluation closest to 0 represents less vocal deviation, and the closer to 100, the greater the deviations.For the assessment, each emission of the sustained vowel was presented three times through a speaker, at a comfortable intensity self-reported by the evaluator."At the end of the perceptual assessment session, 10% of the samples were randomly repeated for the evaluator's reliability analysis, using Cohen's Kappa Coefficient.The Kappa value was 0.88, indicating excellent reliability of the evaluator.35,In the current literature,10,36 distinct cutoff values are found for GD,36 RD10 and BD,10 used to categorize both the presence/absence of vocal deviation, and to classify the degree of the present deviation.Therefore, considering that the aim of this study is to investigate the performance of the PDD in the discrimination of the presence and degree of roughness and breathiness in synthesized voices, it was decided to use the cutoff values established for the classification of roughness and breathiness parameters.10,For RD, the following cutoff points are considered10: absence of roughness or Grade 0, mild roughness or Grade 1, moderate roughness 
or Grade 2 and intense roughness or Grade 3.In relation to BD, the following cutoff points were recommended: no breathiness or Grade 0, mild breathiness or Grade 1, moderate breathiness or Grade 2 and intense breathiness or Grade 3.Thus, a correspondence was made between the VAS used for RD and BD and the numerical scale,10 as described below:Grade 0: RD and BD ≤ 8.4 mm;,Grade 1: 8.5 mm ≤ RD ≤ 28.4 mm and 8.5 ≤ BD ≤ 33.4 mm;,Grade 2: 28.5 mm ≤ RD ≤ 59.4 mm and 33.5 mm ≤ BD ≤ 52.4 mm;,Grade 3: RD ≥ 59.5 mm and BD ≥ 52.5 mm.The 8.4 mm cutoff was also used to categorize the voices regarding the presence or absence of roughness and breathiness.10,Voices with values >8.4 mm in RD and BD were considered as having the presence of roughness and breathiness in vocal emissions, respectively.We chose not to analyze the tension parameter, since other studies have already shown that such characteristic is not specifically identified in the PDD,17,29 in addition to the lack of consensus regarding the inclusion of this parameter in the perceptual-auditory evaluation protocols.1,10,The GD evaluation36 was not used for signal categorization, but only for the sample characterization in the present study.Therefore, based on the results of the perceptual-auditory analysis of the RD and BD, the following classification was observed:As for the presence of roughness: 128 signals without roughness and 743 with roughness.As for the presence of breathiness: 365 signals without breathiness and 506 with breathiness.It is worth mentioning that a categorical analysis of the vocal quality predominant in the emission was not performed, but a same vocal signal could show roughness and breathiness components, since the criterion for the allocation of signals regarding the presence/absence of these components was the result of the independent evaluation of each of them through the VAS and of the cutoffs established for these parameters."The statistical analysis was descriptive for all the assessed variables and Fisher's exact test and Chi-square test were used to compare the analysis of variables related to perceptual-auditory and acoustic measures.The Kruskal–Wallis test was used to compare the acoustic measurements according to the degree of roughness and breathiness.The level of significance was set at 5% for all analyses.The software used was the Statistical Package for Social Sciences.Initially, the distribution frequency of the synthesized voices with and without roughness was compared according to the area, density, quadrant, and shape of the PDD.A difference was observed between the signals with and without roughness as a function of the PDD area and quadrant.The vocal signals with roughness were found to be proportionally outside the area of normal PDD and in the lower right quadrant.There was no statistically significant difference regarding the distribution of the signals with and without roughness as a function of the density and shape of the PDD points.Subsequently, the distribution of signals with and without breathiness was compared as a function of the PDD parameters.There was a difference in the proportion of these signals regarding the PDD area, density, and quadrant.The breathy voices were predominantly outside the normal range and in the lower right quadrant.When comparing the distribution frequency of the voices with different degrees of roughness according to the PDD parameters, a difference in the distribution of the signals was observed in relation to all PDD parameters.Voices with a higher degree of 
roughness were proportionally outside the area of normality, in the lower right quadrant and showed concentrated density in relation to voices with lower degrees of roughness.As for the shape, although a difference was found between the proportions of the groups, there was no distribution pattern of the signals with different degrees of roughness in a specific shape, since the signals predominantly showed the horizontal shape in all grades.Regarding the degree of breathiness, there was a difference in the distribution of the signals as a function of the PDD area, density, and quadrant parameters.Voices with higher degrees of breathiness were proportionally more often outside the area of normality, showed more concentrated density and were in the lower right quadrant, in relation to the signals with lower degrees of breathiness.This study analyzed the performance of the PDD in the discrimination of the presence and degree of roughness and breathiness in synthesized voices.This section was organized with the purpose of clarifying the conclusions of the study according to the raised hypotheses.Didactically, it was decided to analyze the components of roughness and breathiness in subsections.This study showed that the PDD area and quadrant were able to discriminate between normal signals and signals with roughness.Voices with roughness were predominantly located outside the area of normality and in the lower right quadrant."Previous studies, carried out with adults’17 and children's voices,29 corroborate the findings obtained in the present study.Both the lower right quadrant and the PDD area were important to discriminate voices with presence and absence of roughness, showing these two parameters are robust and reliable to evaluate roughness in dysphonic and non-dysphonic voices.The PDD evaluates signal irregularity in its horizontal position, being associated to the concept of roughness.24,26,The greater the irregularity of the vocal signal, the greater its displacement from left to right in the chart.This fact justifies the location of rough voices outside the area of normality and in the lower right quadrant, both in the present study and in previous ones.17,29,Additionally, it is emphasized that roughness is one of the universal parameters of the perceptual-auditory evaluation of vocal quality, representing an important characteristic in the identification of the presence of vocal or laryngeal alterations.37,Roughness is commonly related to the presence of structural and/or functional alterations in the larynx, such as is seen in cases of edema, vascular dysgenesis, nodular lesions, polyps, or any other component that generates a mass increase in the membranous portion of the vocal folds38 and, consequently, irregularities in the vocal fold vibratory pattern.In the acoustic plane, roughness is associated to the jitter and shimmer parameters.19,As for the distribution of voices with different degrees of roughness in the PDD, it was verified that vocal signals with a greater roughness component were proportionally outside the area of normality and in the lower right quadrant.Regarding density, signals with moderate and intense deviation predominantly showed concentrated density.It is noteworthy that 35.93% of the synthesized voices without roughness were outside the area of normality, whereas 12.10% of the voices with mild-to-moderate degree of roughness were inside the area of normality, that is, the PDD showed a greater confounding factor in the identification of voices without roughness, 
with a slight deviation in relation to the signals with a higher degree of roughness.In traditional models, with the use of algorithms that extract isolated jitter and shimmer measurements, an inverse behavior is observed, as the use of these isolated measures is less reliable in the evaluation of more deviant voices.15,17,20,24,26,39–41,Regarding density, few studies17,28,29 specifically included this parameter for PDD analysis and none of them investigated the distribution of voices with different degrees of roughness as a function of PDD density.Only one of these studies17 showed a difference in the distribution of signals with and without vocal deviation regarding density, with the deviated signals characterized as having amplified density.In other studies where PDD was used,20,24,26,40–42 the density parameter can be inferred from the distance between the points only on the abscissa axis, being associated with signals with amplified or concentrated density, respectively.All these studies were longitudinal ones and produced a tendency for less dispersion of the points on the post-intervention abscissa axis, although there is great individual variability in this parameter throughout the treatment,26 with significant differences being observed only between pre- and post-treatment conditions.This study showed greater variability in the distribution of the signals without a roughness component or with a mild-to-moderate degree of roughness between the concentrated and amplified densities.This fact confirms the good performance of the PDD in analyzing signals with a wide range of deviation and its reliability in the assessment of the most deviant signals."Additionally, it can be inferred that the PDD density parameter seems to be more robust to qualitatively analyze the patient's evolution regarding the roughness component in vocal emission.Regarding the shape, although a statistical significance was verified, a distribution pattern of the signals with different degrees of roughness as a function of this PDD parameter was not observed.In all grades, the voices were predominantly horizontal, with differences being observed only between the proportions of the groups.This finding corroborates the literature, as there is a tendency for the signals to show a predominance of the dispersion of the points in the horizontal dimension, regardless of the presence and degree of vocal deviation.20,24,26,40–42,Even in the original proposal for the classification of vocal signals as a function of the PDD shape, no significant difference was observed between healthy and deviant signals, as well as between different degrees of deviation and between rough, breathy, and tense voices.17,Therefore, the shape of the points distributed in the PDD does not seem to be a robust parameter for signal differentiation.When comparing the distribution of vocal signals with and without breathiness as a function of the PDD parameters, it was observed that area and quadrant were able to discriminate normal vocal signals from breathy ones.Breathy vocal signals were outside the normal range and were predominantly located in the lower right quadrant.Breathiness is among the universally accepted parameters for the perceptual-auditory evaluation of vocal quality and for the characterization of a dysphonic voice.4,8,37,Thus, the fact that the PDD correctly identifies the breathy signals outside the area of normality reinforces its usefulness in the clinical context of vocal assessment.However, it was observed that the PDD area and 
quadrant parameters showed identical behavior, in both rough and breathy voices.The vocal signals with roughness and breathiness were found outside the area of normality and in the lower right quadrant.Therefore, one can discuss the interrelationships of these two parameters in physiological and perceptual terms.The presence of breathiness is physiologically associated with a higher degree of separation between the vocal processes, lower convexity of the free edge of the vocal folds and the shorter time of the closed phase of the glottic cycles43 In turn, vocal folds that are further away from the midline tend to vibrate with greater irregularity and less amplitude of the mucosal wave,44 which, consequently, generates the roughness component in the emission.37,Therefore, considering that the signals with roughness and breathiness showed, in general, moderate deviation, with GD of 62.19 ± 14.80 and 65.28 ± 14.75 points in the VAS,36 respectively, one understands the similar distribution of signals with roughness and breathiness in the PDD area and quadrant.Although the synthesizer used to generate the signals in this study allows the creation of voices with isolated components of roughness and breathiness, this separation was not used in the present study.We suggest further investigations with separation of the exclusively rough and breathy signals to assess the performance of the PDD in this classification.In other studies,17,29 the breathy voices were located outside the area of normality, but were distributed between the lower right and upper right quadrants.Some methodological issues need to be highlighted to evidence the similar distribution of the rough and breathy voices in the lower right quadrant in this study.The two aforementioned studies17,29 used as a criterion to classify the voices as rough, breathy, or tense, a forced choice task, in which the evaluator, if he/she considered the emission deviant, should determine the predominant vocal quality.This type of evaluation task allows only one possibility of choice for each emission and not necessarily a classification regarding the presence/absence of each deviated parameter in the emission.In turn, the present study evaluated the degree of roughness and breathiness present in the emission through a VAS.Based on the cut-off values, the presence/absence of such components was established, with the possibility that the same signal would concomitantly show the presence of one or more of them, which is close to the usual conditions of deviant vocal production.Another finding of this research is the high percentage of voices without breathiness classified outside the normal range of the PDD.In a qualitative data analysis, it can be observed that the GD of deviation of these signals is 53.35 ± 16.49.Therefore, although these signals did not show auditory-perceived breathiness, they were probably evaluated as deviated in the VAS due to the presence of roughness in the emission.When comparing the results regarding the proportion of voices with presence/absence of roughness and presence/absence of breathiness identified inside and outside the PDD normality area, it is observed that there is a greater identification of voices without roughness within the area of normality and a greater identification of voices without the breathiness component outside the normality area.Qualitatively, a difference of more than 20 points was found regarding the VAS GD between voices without roughness and without breathiness, with higher GD values in the 
latter group.This difference in itself would justify the results regarding the higher proportion of signals without the breathiness component identified outside the normal range.These findings reinforce that, even in conditions where the perceptual-auditory evaluation criteria used to classify the signals were not intended to maximize the differences between them, but to evaluate them over a continuum, the PDD was also efficient for vocal evaluation, mainly in relation to the most deviant signals.It is suggested that other studies be carried out using the same methodology and criteria of perceptual-auditory evaluation used in this study, adding to them the criterion that the signals selected for investigation have only one of the components deviated from the cutoff values of the VAS.Regarding the degree of breathiness, there was a difference in the distribution of the signals as a function of the PDD area, density, and quadrants.It was observed that the higher the degree of breathiness, the greater the proportion of signals located outside the area of normality, in the lower right quadrant and with concentrated density.Therefore, it is verified that the greater the breathiness component in the vocal signal, the greater the capacity of the PDD to correctly identify the presence of the deviation.As previously mentioned, such finding regarding the classification of signals with higher degree of deviation constitutes one of the greatest advantages of the PDD, as it fills an existing gap15 regarding the use and reliability of traditional measures of disturbance and noise in the evaluation of voices with moderate and intense deviations.Once again, a similar distribution of the voices with different degrees of roughness and breathiness was observed as a function of the area, quadrant, and density of the PDD.The only difference between the voices with different degrees of roughness and breathiness is the distribution of the signals with Grade 2, in which there was a higher level of correct identification of the group of voices without roughness within the PDD normality area.This fact has already been discussed in this section.The vertical axis of the PDD evaluates the presence of additive noise in the vocal signal, compatible with the presence of the breathiness component.26,Therefore, it was expected that the higher the breathiness component in the emission, the greater the proportion of signals toward the upper left quadrant.In the study17 with voices of dysphonic adults, it was observed that breathy voices, although they were predominantly distributed in the upper left quadrant; 19.3% were also situated in the lower right quadrant.With the pediatric population,29 breathy voices were distributed in the lower right, lower left, upper right and upper left quadrants.In studies26,41 with patients presenting with unilateral vocal fold paralysis26 and individuals with bilateral vocal fold paralysis,26,41 it was found that only the second group, whose patients showed intense breathiness, had their voices located in the upper right quadrant.In turn, individuals with unilateral paralysis had their voices distributed between the lower left and lower right quadrants.26,In general, in high lesions of the vagus nerve, the vocal folds are more distant from the midline and the vocal emission does not originate from the glottic vibration mechanism, but comes primarily from the turbulent transglottic airflow and its propagation in the vocal tract,45,46 which would justify the presence of these signals in the 
upper right quadrant.26,In the present study, only nine signals were classified as having severe breathiness deviation, and of these, only one was in the upper right quadrant.In this way, two points can be highlighted: first, the sample size, since a different result could have been observed in this distribution with a larger sample of breathy voices with intense deviations; second, as already emphasized in the discussion, there is an overlap of the type of vocal deviation in the assessed signals, since the presence of only one type of deviation in each emission was not used as eligibility criterion.The PDD area and quadrant can discriminate the presence and absence of roughness, as well as the presence and absence of breathiness in synthesized voices.Signals with higher degree of roughness and breathiness are proportionally outside the area of normality, in the lower right quadrant and with concentrated density.The authors declare no conflicts of interest.
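To make the categorisation rules described in the Methods above easier to reuse, the short Python sketch below maps VAS scores for roughness (RD) and breathiness (BD) to Grades 0-3 using the cutoff values adopted in this study, and assigns a PDD quadrant from the coordinates of a point. The quadrant midpoints are placeholders, since the exact axis limits of the software chart are not reproduced here; the grade boundaries follow the cutoffs quoted above.

def roughness_grade(rd_mm):
    """Grade 0-3 from the roughness VAS score in mm (cutoffs as adopted in this study)."""
    if rd_mm <= 8.4:
        return 0
    if rd_mm <= 28.4:
        return 1
    if rd_mm <= 59.4:
        return 2
    return 3

def breathiness_grade(bd_mm):
    """Grade 0-3 from the breathiness VAS score in mm (cutoffs as adopted in this study)."""
    if bd_mm <= 8.4:
        return 0
    if bd_mm <= 33.4:
        return 1
    if bd_mm <= 52.4:
        return 2
    return 3

def pdd_quadrant(x, y, x_mid=0.5, y_mid=0.5):
    """Quadrant of a PDD point (x = irregularity axis, y = noise axis); midpoints are assumed."""
    horizontal = "right" if x > x_mid else "left"
    vertical = "upper" if y > y_mid else "lower"
    return f"{vertical} {horizontal}"

print(roughness_grade(30.0))    # -> 2 (moderate roughness)
print(breathiness_grade(7.0))   # -> 0 (no breathiness)
print(pdd_quadrant(0.8, 0.2))   # -> "lower right"
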
Introduction: Voice disorders alter the sound signal in several ways, combining several types of vocal emission disturbances and noise. The phonatory deviation diagram is a two-dimensional chart that allows the evaluation of the vocal signal based on the combination of periodicity (jitter, shimmer, and correlation coefficient) and noise (Glottal to Noise Excitation) measurements. The use of synthesized signals, where one has a greater control and knowledge of the production conditions, may allow a better understanding of the physiological and acoustic mechanisms underlying the vocal emission and its main perceptual-auditory correlates regarding the intensity of the deviation and types of vocal quality. Objective: To analyze the performance of the phonatory deviation diagram in the discrimination of the presence and degree of roughness and breathiness in synthesized voices. Methods: 871 synthesized vocal signals were used corresponding to the vowel /ɛ/. The perceptual-auditory analysis of the degree of roughness and breathiness of the synthesized signals was performed using visual analogue scale. Subsequently, the signals were categorized regarding the presence/absence of these parameters based on the visual analogue scale cutoff values. Acoustic analysis was performed by assessing the distribution of vocal signals according to the phonatory deviation diagram area, quadrant, shape, and density. The equality of proportions and the chi-square tests were performed to compare the variables. Results: Rough and breathy vocal signals were located predominantly outside the normal range and in the lower right quadrant of the phonatory deviation diagram. Voices with higher degrees of roughness and breathiness were located outside the area of normality in the lower right quadrant and had concentrated density. Conclusion: The normality area and the phonatory deviation diagram quadrant can discriminate healthy voices from rough and breathy ones. Voices with higher degrees of roughness and breathiness are proportionally located outside the area of normality, in the lower right quadrant and with concentrated density.
788
Mapping fibre failure in situ in carbon fibre reinforced polymers by fast synchrotron X-ray computed tomography
Reliable engineering predictions of carbon fibre reinforced polymer based on physically representative mechanisms are a key to improve CFRP structure design and material development .Such an understanding may help to alleviate the time-consuming and expensive test programmes employed in the contemporary development of primary load-bearing structures.Of the characteristic failure mechanisms of CFRPs, fibre failure is widely identified as critical in unidirectional materials/plies when loaded in tension along the fibre direction.Different approaches have been exploited to predict such fibre-dominated behaviour , including analytical models , statistical models , and finite element models .Such models generally incorporate some allowance for the distribution of individual fibre strengths, the local stress transfer to the fibres from the matrix adjacent to fibre breaks, and, in some cases, the concept of a critical cluster size that triggers final failure .Despite the efforts made, contemporary models have only been partially successful in predicting experimental observations for fibre failure, which in themselves are relatively limited within the literature .Scott et al. have previously applied in situ CT to visualise the progress of fibre failure of CFRP under incremental loading, clearly showing that fibre breaks occur increasingly at higher loads and that their accumulation follows a power law curve to a good approximation.Clusters of multiple adjacent breaks were associated with high stresses, with the maximum applied stress being ∼94% of the ultimate tensile stress of the material in question .These results were directly compared with models in Refs. , highlighting difficulties in predicting cluster formation at high loads .Furthermore, gaps in understanding fibre fracture accumulation processes have been identified, such as the formation of coplanar and dispersed clusters, and the initiation at new locations .Whilst it is evident that CT is a powerful technique for following damage accumulation using both in situ and ex situ experiments for qualitative and quantitative assessments ; previous in situ studies of fibre failure under tension have been characterised by low temporal resolution with the specimen held at constant load during the acquisition of all the projections comprising a scan .Such maintained loads may introduce changes in the material behaviour, particularly in the formation of clusters sustained at high loads and they make difficult to capture the accumulation of fractures within clusters.In addition, given the variability of commercial CFRP coupons, important practical limitations arise in capturing damage events just prior to final failure.Fast synchrotron radiation CT during continuous slow strain rate loading has the potential to overcome these limitations .In this study, fast computed tomography was used for the first time to track the accumulation of fibre breaks to within 1 s of final failure, under simple ramp loading representative of standard engineering tensile tests.The combination of continuous monotonic tensile loading, and fast acquisition has allowed to : avoid potential hold-at-load artefacts, capture of the sequence and location of successive fibre fracture sites at much finer load steps than reported previously, and observe the state of fracture immediately prior to final failure.A commercial aerospace-grade thermoplastic particle toughened carbon/epoxy, Hexcel HexPly ), with a nominal fibre volume fraction of 60% and a s layup, was studied.Double-edge 
notched specimens, with a central cross-section between the notches of 1 mm, length of 66 mm and a width of 4 mm were shaped by waterjet cutting from a panel with 1 mm of thickness.Further details of the geometry and coupon dimensions are reported in Ref. .Two coupons are studied in this investigation to assess the reproducibility of the observed behaviours.It should be noted, that despite the small specimen size, there are more than 7000 fibres in the 0° plies between the two notch tips, with a visualised gauge length of ∼2 mm over which the fibres breaks are observed.Uninterrupted rising tensile loading was applied in situ upon the CT beamline while the coupons were continuously scanned.Tensile tests were performed using a compact electro-mechanical rig developed by INSA-Lyon , specifically designed to be stable at high rotational speeds.The strain rate applied was ∼3 × 10−4/s.The UTS registered was 1400 MPa for coupon A and 1280 MPa for coupon B.These values are higher than that reported in previous studies, which was based on an average of 10 specimens.This difference might be mainly related to the smaller cross-section at the notch used in previous studies, reported as 0.7 mm width between notches in Ref. .Final failure was seen to occur at the notch in a catastrophic manner for both coupons considered.Experiments were performed at the Swiss Light Source on the TOMCAT-X02DA Beamline, Paul Scherrer Institut, Villigen, Switzerland.The beam energy used was 20 keV and the distance between the specimen and the detector was set to provide a degree of phase contrast to facilitate the visualisation of small crack-like features , such as fibre breaks .The exposure time was set to 2 ms and 500 projections were collected for each tomograph, resulting in 1 tomograph per second.The voxel size was 1.1 μm corresponding to a field of view of ∼2.2 × 2.2 mm, sufficient to image the notch region shown in Fig. 
1.Coupons were initially scanned in the unloaded condition to confirm that no significant damage was introduced during the manufacture.All cutting damage was confined to the near-surface, and as such it was readily excluded from subsequent internal damage evolution under load.The specimens were then mounted in the rig and loaded in tension.The random access memory storage capability of the camera limited the data acquisition to roughly twelve consecutive scans of the entire region of interest.In continuous mode the image data was overwritten when the storage capability of the camera was reached.The above settings were also used to obtain information at low and intermediate loads, the difference being that when the scan was stopped only the last volume of the sequence was considered and saved.This approach avoids having interruptions during the tensile test to perform intermediate scans in the ‘start and stop’ mode.At high loads continuous acquisition was used until final failure of the specimen, giving 10 continuous scans immediately prior to failure for coupon A, and 9 for coupon B.When coupon failure occurred the acquisition was manually stopped and the data downloaded from the camera.The percentage of the sample-specific UTS associated with these final continuous scans was in range of 99.4%–99.9%.Two types of reconstruction have been considered, based on: conventional X-ray attenuation and phase retrieval, via the Paganin method .In house GRIDREC/FFT code was employed in both cases .Previous work has established that a voxel size of ∼1 μm allows individual fibre breaks to be detected in carbon fibre composites .Fast tomography is characterised by a short exposure time and fewer projections per scan, with consequent compromises in signal to noise ratio and effective spatial resolution compared to that which is achievable using conventional settings .Fig. 2 illustrates the same cross-section parallel to the load direction using conventional absorption), and Paganin phase reconstruction), without applying any image post processing.Paganin reconstruction enhances the contrast between damage and material, but compromises sharpness cf. Fig. 2.The Paganin reconstruction was seen to be less suited to distinguish small features such as single fibre breaks, but it facilitated the segmentation of more open cracks, such as matrix failure.As such, the analysis of fibre breaks here was conducted using absorption-based reconstructions, while the Paganin results were used for 3D rendering of the different damage modes before final failure, as shown in Fig. 1.Reconstructed volumes were filtered by a median filter, ensuring a degree of edge preservation.An example of the improvements obtained by applying the median filter is provided in Fig. 2 for the same cross-section shown in Fig. 2–.For each coupon the volume of interest for successive scans was first registered using ImageJ to facilitate the correlation of failure sites between subsequent load increments.VGStudio Max 2.1 was used to allow the fibre breaks to be detected and manually counted.The numbers of fibre breaks reported are based on three independent counts for each volume.Damage modes in the notch region have been described previously for tensile loads up to 94% of the UTS .Here the higher scan rate allows imaging up to 99.9% of the UTS, as shown in Fig. 
1.The number of transverse ply cracks does not increase over the loading range investigated, indicating that this damage mode saturates at intermediate load levels.The opening of the transverse ply cracks that are not directly connected with the delamination did not change considerably with loading, as shown in Fig. 3–, exhibiting crack opening displacements in the range 5–10 μm.By contrast, the opening of those transverse ply cracks linked to delamination increased significantly, from a few microns at intermediate loads to ∼200 μm before failure, as shown in Fig. 3– and in green in the 3D rendering of Fig. 1.Shallow angle fibres bridging between the two flanks of the transverse ply cracks were observed even at high loads.The propagation of the matrix cracks in the 0 plies was not tracked here, being seen to propagate along the loading axis, out of the field of view imaged.Their average crack opening at the notch root was found to increase progressively with load, from 5 to 10 μm at intermediate loads, to 20–25 μm immediately before failure.Delamination was already present at the lowest load observed in this study.The delamination area increased progressively with load, creating an effectively reduced cross-section of material in the region between the notches, as documented previously .Fibre breaks were detected from intermediate loads onwards, and found to be located in this reduced cross-section isolated by the delamination, 0°ply splits and transverse ply cracks.Fig. 4 shows the projections of failed fibres through the 0° plies thickness with respect to the location of transverse ply cracks and delamination for both coupons studied.No significant correlation was observed between transverse ply cracks and the location of fibre breaks in the 0° plies.This is consistent with the observations from previous studies for a carbon/epoxy specimen subjected to tensile loading and under fatigue .By contrast, other work conducted on unidirectional non-crimp glass fibre reinforced composites subjected to fatigue has demonstrated a strong correlation between the location of fibre breaks and the presence of other damage modes in the neighbouring plies .It is important to note that the T700/M21 system has particle toughened interlayers between the plies which are likely to reduce the stress concentrations due to damage in the off-axis plies and between them.A summary of fibre breaks is provided in Table 1 for both coupons.Details of the accumulation of fibre fractures in the final continuous scans were omitted from Table 1 to facilitate the comparison, taking into account only the first and last continuous scan.It is evident that the two samples failed at different loads with different numbers of fibre breaks in the notch region: coupon B failed at a somewhat lower load, being characterised by a lower number of fibre breaks at failure.Fig. 
5 shows the plot of the total number of breaks as a function of the applied nominal stress in the reduced load-bearing section:Coupon A appears to accumulate fibre failures somewhat faster at intermediate stresses.The two datasets follow a power law fit over the range of overlap.Approximately 7% of the total number of fibre breaks occurs during the final loading step, see Table 1.This is reasonably consistent with the rate of fibre fracture leading up to these loads suggesting that there is not a dramatic change in final fracture rate from the overall power law fit.The fact that the percentage of fibre breaks in coupon B increases so significantly at high loads confirms the importance of using continuous fast scanning procedures to capture damage accumulation immediately prior to final failure.In fact, these damage events would be missed using static holds.It is noteworthy that in both cases fewer than 8% of the 0°fibres in the imaged volume have fractured at 99.9% of the failure load.The type and the number of fibre breaks detected in the two coupons as a function of percentage of UTS are shown in Fig. 6.They are divided into singlets, clusters of two adjacent breaks, 3 adjacent breaks, and up to 10 adjacent breaks, which was the largest number detected.In this study a cluster was defined as an agglomeration of fibre breaks directly adjacent to one other.This is consistent with the fact that the observed multiple fibre breaks are exclusively co-planar rather than dispersed, in agreement with previous studies on the same material .The majority of breaks occur as singlets at all loading levels, whereas the number of clusters increases with load, as shown in Fig. 6 and.The total number of singlets immediately prior to final failure is very similar for both coupons.However, the number of intermediate multiplets is much fewer in coupon B, and no large multiplets were seen, whereas 9 large multiplets were seen in coupon A at 99.9% UTS.Fig. 6 shows that three 5-plet clusters and one 7-plet are even present at 85% of the UTS in coupon A; while coupon B exhibits only one 2-plet at 90% of UTS.Nevertheless, the percentage of singlets with respect to the total number of breaks is somewhat different for the two coupons, as shown in Fig. 6 and.Coupon A exhibited a significant number of clusters as a function of UTS.As noted previously, the fast acquisition allowed to record the damage state just prior to coupon failure, which was monitored from 9 to 10 s until 1s prior to coupon failure.The number of additional fibre breaks detected during continuous scanning was 41 for coupon A and 31 for coupon B, with a similar number of additional singlets for both coupons as shown in Fig. 6 and.The largest new cluster detected in these continuous scans was a 5-plet for coupon A and a 3-plet for coupon B. Newly initiated breaks always occur in new locations rather than in locations adjacent to where fibres have already failed.Fig. 7 shows the 5-plet detected during continuous scans in coupon A at 9, 8 and 1 s prior to final failure for the same cross-section parallel to the loading direction.While no fibre breaks are present 9 s before failure, a 5-plet “pops in” within the next second and it does not accumulate any more fractures in the remnant life prior to the final failure of the coupon.The analysis of the fragmentation of individual fibres along their length was conducted using three simultaneous orthogonal views.Examples of these are illustrated in Fig. 8 and Fig. 
9, where the yellow arrows annotate the sequence of fibre failure for incremental loads corresponding to the same cross-section.Focusing on the fibre that exhibits the first break at 55% UTS in Fig. 8, the increase in load to 85% UTS causes a second break at a distance of 140 μm from the first, shown in Fig. 8.A further increase in the load results in a third fracture at a distance of 95 μm from the second break, Fig. 8.For the fibre in Fig. 9 the third axial break occurs between the first two fractures, spaced 460 μm apart at a distance of 135 μm and 325 μm from each respectively.Unsurprisingly, the number of fragments detected, where a fragment is defined between two axial breaks, increases with the applied load for both coupons.The distance between adjacent breaks within single fibres has been used in previous studies to provide an estimate of the ineffective length .The cumulative distribution of the fragment lengths, across all fibres containing more than one break within the field of view in the two coupons, is presented in Fig. 10.A total number of 106 fibre fragments were identified within 78 individual fibres to achieve this distribution.The minimum length detected, of 30 μm, is shown in the cross-section reported in Fig. 2.It is noticeable that the cumulative distribution in Fig. 10 reveals a sharp increase in slope up to a fragment length of 70 μm, and it is approximately linear at higher fragment lengths.Below 70 μm there are relatively few fragments.This suggests that 70 μm may be a reasonable estimate for the ‘ineffective length’, which is consistent with the value reported by Scott et al. based on five locations of double breaks associated with the highest load achieved in their work.The maximum number of axial breaks observed on an individual fibre was somewhat different for the two coupons investigated: 8 for coupon A and 3 for coupon B. Fig. 11 shows two cross-sections parallel to the loading direction for the fibre that exhibited eight axial breaks.The use of two adjacent cross-sections was dictated from the fibre misalignment.Seven breaks were present at 85% UTS, Fig. 11, while the additional break was observed before final failure of the coupon, dividing the first fragment on the left hand side of Fig. 11 into two parts with different lengths.The neighbouring fibre also shows multiple axial failures, the majority of these are also visible in Fig. 11.These two multi-fragmented neighbouring fibre have co-planar clusters in common: the break annotated as ‘2’ in Fig. 11 is a shared 5-plet, and the break annotated as ‘7’ in Fig. 
11 is a shared 2-plet.This case is not unique in coupon A with a similar response observed between other adjacent fibre pairs.However, coupon B did not exhibit any cases of adjacent fibres with multiple axial breaks.The presence of multiple breaks on a single fibre may of course be related to the presence of a statistically exceptional fibre containing defects introduced by the manufacturing processes, or local stress concentrations, perhaps due to variations in the interface and fibre/matrix adhesion.The use of in situ fast computed tomography in continuous acquisition mode has enabled significant new observations of fibre failure.This method allows us to look back at the state immediately prior to failure for the first time.The number of fibre breaks at 99.9% of failure is less than ∼8% of the total number of fibres in the loaded region of the specimen.This would be consistent with only involving the low-strength tale of a statistical fibre strength distribution.The mechanism of fibre failure accumulation for this material system was clear; breaks do not successively accumulate at locations with pre-existing failed fibres, but rather new breaks always occurred in new locations.This is in agreement with a previous study on the same material .In the first instance, this may be considered consistent with dynamic stress concentrations having a strong influence on cluster formation , and also with debonding processes or other reduction in the stress concentration on neighbouring fibres that may occur after initial fracture.It could be possibly caused by a dynamic release of strain energy such as local shock-like process upon fracture of a fibre that in some cases generates fibres fractures in the locality as well.The results obtained in this study have shown a somewhat different behaviour between the two coupons considered even though they were made using the same material system and subjected to the same load.A direct comparison highlighted that:Coupon A showed a higher failure load,Coupon A was able to accommodate more fibre breaks for a given percentage of UTS and a given absolute load.At the same time it showed a higher number of multiple axial fragmentations on individual fibres.Coupon A exhibited a higher percentage of clusters, which on average form at lower loads than for coupon B.The largest cluster size was of a 10-plet for coupon A while in coupon B the maximum was 4.In both cases the number of singlets counted before final failure was similar.It is perhaps surprising that coupon A, which was seen to accommodate a larger number of fibre breaks, promoted more clusters and individual fibre fragmentation, giving a higher failure load.A possible explanation is that the location of some of the clusters might be pre-determined by defects induced during processing that affect adjacent fibres.This conjecture is supported by the observations of the occasional individual fibres, which experienced more than two breaks along their length within the field of view, and the occurrence of shared clusters.This study suggests that clusters forming at intermediate loads do not simply correlate with premature failure.In contrast, based on the results found, the presence of clusters at intermediate loads might correlate with a higher ultimate strength.In particular the presence of clusters at intermediate loads could promote fibre pull-outs explaining why the two specimens behave differently.In this respect the concept of ‘virtual co-planar clusters’ has been recently discussed by Bullegas et al. 
.They used laser engraved micro-cut patterns to promote hierarchical pull-outs ahead of a crack tip with a consequent increase in the translaminar fracture toughness.The same concept was previously proposed by Mirkhalaf et al. to induce crack deflection to overcome the brittleness of glass.However, further experiments are required to confirm whether the presence of clusters at intermediate loads promotes specific mechanisms, such as pull-outs.Various researchers have discussed the presence of a critical cluster size as being needed to trigger catastrophic failure .The largest cluster size detected in this study is 10-plet, and was found not to grow with increasing load.Therefore, the definition of critical cluster size or more generally the critical nature of damage needed to trigger final failure does not appear to be a useful concept in our case although it maybe that critical clusters occur even closer to final fracture than could be captured here.In fact, current synchrotron radiation CT instruments do not allow observations with even higher temporal resolution and relatively large fields of view such as the one considered here.In addition, the concept of ‘cluster’ does not have a consistent definition in literature so that the comparison of results from different studies needs to be conducted carefully.Previous work has used the minimum axial distance measured between multiple breaks along the same fibre, i.e. the estimated ineffective length, to define the radius of the cluster region .Based on this definition, all the breaks within a distance of less than the estimated ineffective length are considered to belong to the cluster regardless of whether they are co-planar, dispersed or adjacent to each other.The biggest cluster size detected by Scott et al. was of a 14-plet and it occurred at 94% UTS.However, due to the fact that this was the last load level applied before failure, there is no evidence as to its growth.Thionnet et al. 
developed a model to investigate a variety of loading conditions ranging from high speed monotonic load to sustained load, which highlighted differences in the accumulation of fibre fractures and cluster formation.In particular, they found that the effect of time is fundamental to the failure mode predicted by the model, showing random fibre failure and coalescence of clusters for high speed monotonic loading, while sustained loading exhibited early cluster formation .Therefore, the 14-plet observed in a previous study at 94% UTS might be affected by the hold-at-load used to allow time for the CT scan, which might have a non-negligible influence at high loads.Previous models were unsuccessful in the prediction of larger clusters at high loads ; however, this study has indicated that very high loads do not necessarily result in large cluster formation.The results found in this study have demonstrated a different behaviour between the two coupons considered, particularly in terms of cluster formation.Whatever the mechanism, these observations confirm the need to examine many specimens to assess stochastic behaviours where failure may be determined by extreme value statistics.While this study has involved just two specimens, it represents ∼14,000 fibres under load, each with a visualised gauge length of ∼2 mm; nevertheless, events that appear to be potentially relevant to failure are distinctly sparse.For example, one individual 2 mm fibre length fractured 8 times and the neighbouring fibre 6 times, whilst none of the remaining 14,000 fibres failed more than 4 times.The current work represents the first use of fast computed tomography to monitor the accumulation of fibre breaks in notched coupons immediately prior to fracture.Time lapse observations have shown that new breaks are overwhelmingly associated with new locations rather than occurring preferentially adjacent to existing failed fibres.Their accumulation was demonstrated to follow a power law curve, with a substantial number of breaks occurring in a limited load range immediately before failure, 98–99.9% UTS.The coupon that exhibited the higher number of breaks was also characterised by early cluster formation and multiple axial fragmentation, whilst it failed at a higher load.The different behaviour detected for the two coupons, particularly in terms of cluster formation, highlights the need to consider potential microstructural and material variations at large length scales.The work presented in this paper should be of significance to the modelling community and lead to further improvement in the ability to predict the tensile strength and related performance of continuous carbon-fibre composite materials.
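As a minimal sketch of the power-law description of break accumulation referred to above, the Python lines below fit N = a·(%UTS)^b to a set of cumulative break counts by linear regression in log-log space and estimate the share of breaks expected in the final 98-99.9% UTS interval. The stress levels and break counts are illustrative placeholders, not the measured values for coupon A or B; numpy is assumed to be available.

import numpy as np

stress_pct_uts = np.array([55, 70, 80, 85, 90, 95, 99, 99.9])       # assumed load levels (% UTS)
cumulative_breaks = np.array([4, 12, 30, 55, 110, 230, 420, 520])    # assumed cumulative counts

# Linear least squares in log-log space: log N = log a + b * log(%UTS)
b, log_a = np.polyfit(np.log(stress_pct_uts), np.log(cumulative_breaks), 1)
a = np.exp(log_a)
print(f"power-law fit: N ≈ {a:.3g} * (%UTS)^{b:.2f}")

# Share of the fitted break count accumulated in the final 98-99.9% UTS interval
final_share = 1.0 - (98.0 / 99.9) ** b
print(f"fraction of breaks in the final 98-99.9% UTS interval: {final_share:.1%}")
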
Fast, in situ synchrotron X-ray computed tomography (CT) has been used to capture damage evolution, particularly fibre failures, before final fracture (within 99.9% of the ultimate tensile stress) in cross-ply carbon fibre/epoxy coupons under continuous monotonic tensile loading for the first time. It is noteworthy that fewer than 8% of the fibres in the 0° plies have fractured at 99.9% of the failure load. The majority of fibre breaks appear as isolated events, although some instances of multiple adjacent breaks (clusters) do occur at intermediate and high stress levels. Contrary to conventional wisdom, a cluster of failed fibres always occurred in a burst as a singular failure event: clusters were never seen to accumulate additional broken fibres as load increased suggesting low-level stress concentration local to fibre breaks. Several instances of multiple fractures along individual fibres were observed, providing an estimation of the critical stress transfer length between the fibre and matrix. The factors affecting fibre failure appear to be complex, with distinct sample-to-sample variability being identified for the length-scale tested. This highlights the need for improved understanding of the mechanisms that contribute to final failure, particularly criteria controlling the arrest or otherwise of clustered fracture events.
789
Road mortality locations of small and medium-sized mammals along a partly-fenced highway in Quebec, Canada, 2012–2015
The data were collected along Highway 175 between Quebec City and Saguenay as part of a project for the Ministry of Transport, Sustainable Mobility and Transportation Electrification of Quebec.The mortality data were collected by Bélanger-Smith and Plante and the full results are published in a report to the transport ministry in 2017 .The data have also been used in Refs. .Highway 175 is located between Quebec City and Saguenay and borders Jacques-Cartier National Park and the Montmorency Forest, and runs through the Laurentides Wildlife Reserve.The highway is located in the boreal forest biome dominated by balsam fir and black spruce.In 2014, the average annual daily traffic flow on HW 175 was estimated 6000 vehicles .Actual counts showed that the AADT was 5900 vehicles per day during the years 2011–2015.In the summer months, this average was almost 30% higher, and about 20% lower in the winter months.The proportion of trucks was 15%.In the summer months, traffic volume was twice as high on Fridays and Sundays as the annual average.In 2015, the AADT was 6200 veh./day, with 7900 veh./day in the summer months and 4900 veh./day in the winter months.During the years 2005–2015, annual traffic volumes increased by 2% per year .There were no projections for the future available.The highway was widened from 2 to 4 lanes in 2006–2011, and during construction wildlife passages were installed with fences.Of these passages, 18 underpasses designated for small and medium-sized mammals are within the road mortality surveyed area.On both sides of each entrance of these passages, exclusion fences for medium-sized species were placed.Each fence is about 100 m long on each side of the passage entrances and 90 cm high with a 6 cm × 6 cm mesh size.The data provided consist of the roadkill locations from 4 summers of road mortality surveys of small and medium-sized mammals from Highway 175 in Quebec, Canada.Additionally, GPS coordinates of the locations of the 18 wildlife passages and the location of the fence ends of the 200 m fences are provided.The meaning of the column headings is as follows:The wildlife passages are named by the nearest kilometer marker.The column ‘Position’ provides the kilometer location along the road.Some underpasses have an open median, which is reflected in a number of 4 entrances in the column ‘Number of Entrances’ instead of 2 entrances because of the opening in the median between the northbound and southbound lanes.‘Entrance names’ identify the entrances of the wildlife underpasses.‘Longitude’ and ‘Latitude’ are the longitude and latitude of each entrance.‘Entrance name’ relates to the northbound and southbound lanes of Highway 175.The column ‘Large or small/med.Passage’ distinguishes large structures and wildlife underpasses for small and medium-sized mammals.‘Fence names’ refer to the four sections of fence associated with each wildlife underpass.‘Year’ refers to the year of study.‘Session’ refers to the session of surveys.‘Moment of the day’ distinguishes evening surveys from morning surveys.‘Start point’ indicates the start point of the survey along the road, which alternated among four locations.‘Time Start’ and ‘Time End’ document the start and end times of each survey.‘Km’ indicates a rough estimate of the location where the carcass was found based on the distance to the nearest kilometer marker along the road.‘Longitude’ and ‘Latitude’ are the longitude and latitude of each carcass detected.‘Species’ is the species of the carcass.‘Direction of travel’ distinguishes the 
northbound lanes from the southbound lanes.Mortality surveys were conducted on a stretch of Highway 175 in Quebec during the summer months, May to October, for 4 years, 2012–2015.No surveys were performed during the winter months due to snowplows’ removal of roadkill.Mortality surveys were conducted on a 136 km loop between km 75.5 and 143.5 of Highway 175.The starting point of the surveys alternated between four locations to avoid potential bias that could result from using one starting point.Surveys were conducted at an average speed of 70 km/h, with one driver and one observer, and it took approximately 3 h to complete a survey.For each carcass found the GPS coordinates, location and the species were documented and the carcass was then removed from the road.At all times in the field, wearing a reflective/safety vest, adequate shoes and a yellow protective helmet was mandatory and while searching for roadkill or stopping to document roadkill an amber flash security light bar on the roof of the vehicle and the vehicle׳s hazard warning lights were on.The surveys were conducted in sessions completed over 2-week intervals as shown in Table 2.The first 3 days consisted of evening surveys, then no survey on Thursday and the next 6 days consisted of morning surveys and then no surveys on the last 4 days.The evening surveys started three hours before sunset and the morning surveys started 30 min after sunrise.Over the 4 years, a total of 34 complete sessions were performed resulting in 306 road mortality surveys.Of these, 102 surveys were performed in the evenings, and 204 in the mornings.
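As a usage illustration of the tables described above, the sketch below (not part of the dataset release) loads the carcass and passage-entrance coordinates and computes, for each roadkill record, the distance to the nearest wildlife-passage entrance. The file names are hypothetical; the column names ('Latitude', 'Longitude', 'Species') follow the column descriptions given in the text.

# Usage sketch: distance from each roadkill record to the nearest passage entrance.
import numpy as np
import pandas as pd

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between points given in decimal degrees."""
    r = 6371000.0
    phi1, phi2 = np.radians(lat1), np.radians(lat2)
    dphi = np.radians(lat2 - lat1)
    dlam = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(phi1) * np.cos(phi2) * np.sin(dlam / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

carcasses = pd.read_csv("roadkill_2012_2015.csv")   # hypothetical file name
passages = pd.read_csv("wildlife_passages.csv")     # hypothetical file name

# Pairwise carcass-to-entrance distances (broadcast to an N x M matrix), then the minimum.
d = haversine_m(carcasses["Latitude"].to_numpy()[:, None],
                carcasses["Longitude"].to_numpy()[:, None],
                passages["Latitude"].to_numpy()[None, :],
                passages["Longitude"].to_numpy()[None, :])
carcasses["dist_to_nearest_passage_m"] = d.min(axis=1)

# Summary by species, e.g. to see whether some species are found closer to passages.
print(carcasses.groupby("Species")["dist_to_nearest_passage_m"].describe())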
The data presented here consist of the locations of 839 roadkill points from four years (2012–2015) of roadkill surveys for small and medium-sized mammals (under 30 kg) from a four-lane highway in Quebec (Highway 175) during the months of May to October. Seventeen species or species groups were identified, all local to the area, and none of which were identified as species at risk, threatened, or endangered. The GPS coordinates of each roadkill event are given, along with the date, time of day (morning or evening), location (northbound or southbound lanes) and species (where possible). Within the surveyed road, 18 wildlife passages with 100 m fencing on each side of the passage entrances were built for small and medium-sized mammals. The GPS coordinates of the 18 passages and the end of each corresponding fence are also provided.
790
Dataset on seed details of wheat genotypes, solution treatments to measure seedling emergence force and the relation between seedling force and strain
This paper presents the dataset that has been used to investigate the seed quality and germination rate of 16 wheat genotypes. Seed width and length were measured for these genotypes using slide calipers on 50 replicates of each genotype. The numbers inside the brackets in Table 1 represent the standard error between the genotypes. To create solutions of four different SAR values and four different ionic strength (I) values, different concentrations of NaCl and CaCl2 were used, as described in Table 2. A device was developed to measure the seedling emergence force of these 16 genotypes, as shown in Fig. 1. This device measured the deflection of a steel beam placed on top of a foam block when the emerging seedling pushed the beam. The deflection of the beam was measured using a strain gauge connected to the beam. The measured strain was converted to force using the calibration graph, and the seedling emergence force of each genotype was calculated. Seed samples of wheat genotypes were collected from a harvest site at the Queensland Government research farm in Kingsthorpe, Queensland, Australia. After harvest, seeds were stored at cold temperature and, one week before the seed germination test and the width and length measurements, were warmed to 22 °C. For each variety, the thousand-grain weight was estimated from the weight of 200 seeds from five replicate samples. Using digital slide calipers, the length and width of the seeds were measured for 50 replicate seeds of each genotype. The seedling emergence force of these 16 genotypes was measured in 16 treatment solutions, consisting of four ionic strength values and four SAR values. A mechanical device was developed to record the force exerted by the seed coleoptile. The device consisted of a stainless steel beam of 0.4 mm thickness and 20 mm width suspended above a seed. A strain gauge was attached to the beam to measure its displacement over time. The wheat seed was placed in a piece of foam that was held underneath the stainless steel beam. A 100 mL container of 1 mM CaCl2 was placed underneath the foam, and cotton strings coming out from the foam were placed in the solution. The seeds germinated inside the foam and grew for a total of 14 d. When the seedling emerged, it pushed the beam upwards. The attached strain gauge measured the movement of the beam, from which the emergence force of the seedling was calculated. The strain gauge was connected to a data logger. Measured strain was converted to emergence force using a calibration with strain recorded at loads ranging from 0 to 9 N at 1.0 N intervals.
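The strain-to-force conversion described above can be sketched as a simple linear calibration; the load levels (0–9 N in 1.0 N steps) come from the text, while the strain readings and the example measurement below are invented for illustration.

# Sketch: fit a linear calibration to strain readings taken at known loads, then
# invert it to turn a strain measured at seedling emergence into a force.
import numpy as np

calib_force_N = np.arange(0.0, 10.0, 1.0)               # applied loads, 0-9 N
calib_strain = np.array([0.0, 41.0, 83.0, 120.0, 162.0,
                         200.0, 243.0, 281.0, 322.0, 360.0])  # hypothetical gauge readings

slope, intercept = np.polyfit(calib_force_N, calib_strain, deg=1)  # strain = slope*F + intercept

def strain_to_force(strain):
    """Convert a measured strain reading into an emergence force (N)."""
    return (strain - intercept) / slope

measured_strain = 150.0                                  # example reading at emergence
print(f"estimated emergence force: {strain_to_force(measured_strain):.2f} N")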
The seed details (weight, vigor) and germination rate of 16 wheat (Triticum aestivum) genotypes under non-limiting conditions were measured. The dataset presents the seed germination rate and seed vigor of 16 wheat genotypes. It also presents the concentrations of the cations used to create solution treatments of various sodium adsorption ratio (SAR) and ionic strength (I). Finally, the dataset presents a figure of the experimental design used to measure the seedling emergence force of wheat genotypes. The image of the setup and the relation between strain and force are presented here to allow the strain of the beam to be converted into seedling emergence force. This dataset has been used in the research work titled ‘Greater emergence force and hypocotyl cross sectional area may improve wheat seedling emergence in sodic conditions’ (Anzooman et al., 2018) [1].
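For readers reproducing the solution treatments mentioned above, the sketch below computes SAR and ionic strength from assumed NaCl and CaCl2 concentrations; the example concentrations are hypothetical (not the values of Table 2), and the standard definitions of SAR (charge-equivalent concentrations, units of (mmol/L)^0.5) and ionic strength are used.

# Sketch: SAR and ionic strength of a NaCl + CaCl2 treatment solution.
def sar_and_ionic_strength(nacl_mM, cacl2_mM):
    """Return (SAR in (mmol/L)^0.5, ionic strength in mol/L)."""
    na = nacl_mM                       # mmol/L Na+
    ca = cacl2_mM                      # mmol/L Ca2+
    cl = nacl_mM + 2 * cacl2_mM        # mmol/L Cl-
    na_meq = na * 1                    # meq/L, monovalent
    ca_meq = ca * 2                    # meq/L, divalent
    sar = na_meq / ((ca_meq / 2) ** 0.5) if ca > 0 else float("inf")   # no Mg2+ in these solutions
    ionic_strength_M = 0.5 * (na * 1**2 + ca * 2**2 + cl * 1**2) / 1000.0
    return sar, ionic_strength_M

# Hypothetical example: 20 mM NaCl + 2 mM CaCl2 -> SAR ~14.1, I = 0.026 M.
print(sar_and_ionic_strength(nacl_mM=20.0, cacl2_mM=2.0))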
791
Toxoplasma gondii, Sarcocystis sp. and Neospora caninum-like parasites in seals from northern and eastern Canada: potential risk to consumers
Seal meat and organs are important country foods of Inuit in Arctic and subarctic Canada and Greenland.In addition to subsistence harvests, some seal species are also harvested commercially in a government regulated sustainable harvest in eastern Canada, and seal meat is available at retail in this region.It is also offered in restaurants in metropolitan areas such as Toronto, Montreal and Quebec City, where the meat is often served as rare seal steak or as seal tartare.Seals are processed using specific guidelines for quality and food safety.Seven pinniped species are hunted in Canada, including: walrus, bearded seals, ringed seals, harbour seals, hooded seals, harp seals, and grey seals.Most of these species are hunted for subsistence purposes by Inuit and others, but the latter three species are commercially hunted, with meat and by-products being sold to restaurants and exported.These pinnipeds have different distributions, abundance and, for some species, seasonal migrations which affect their availability to hunters.They also have different diets which affect their exposure to parasites and thus pose differential zoonotic risks to human consumers.Ringed seals have a northern circumpolar distribution, are widespread in Arctic and subarctic Canada, and an estimated 100,000 are harvested annually for subsistence.Harp seals are separated into three populations based on specific pupping sites: Northwest Atlantic, Greenland Sea near Jan Mayen and White Sea/Barents Sea.The abundant Northwest Atlantic harp seal population is commercially harvested in Atlantic Canada from 307,000 to 40,000 annually.Hooded seals are separated into two breeding herds based on specific pupping sites: Northwest Atlantic and Greenland Sea.The Northwest Atlantic population has subsistence and commercial harvests in Atlantic Canada ranging from 5905 to 0, and less than 400 annually since 1999.Since 1964, commercial harvest of hooded seals is not permitted in the Gulf of St. Lawrence, and the majority of the harvest occurs in Greenland.Grey seals, distributed on both sides of the North Atlantic, are found in the Gulf of St. Lawrence and coastal Nova Scotia and Newfoundland.There is a small commercial hunt in the Magdalen Islands and eastern shore of Nova Scotia of less than 1700 animals in 2016.Little is known about the risk to humans from the consumption of seal meat containing zoonotic pathogens and parasites that they may carry.However, various studies have identified pathogens and parasites in marine mammals of concern to human health.In this study, we tested ringed seals, harp seals, hooded seals, and grey seals for the presence of the protozoan parasites Toxoplasma gondii, Sarcocystis spp., and Neospora spp.Toxoplasma gondii is the most prevalent parasite infecting humans and other warm-blooded animals worldwide.Approximately one to two billion people are infected with this protozoan parasite.The definitive hosts of T. gondii are felids, which shed oocysts in their feces.Humans may become infected by accidental ingestion of oocysts in contaminated soil, water, or food.Another transmission route is the consumption of raw or undercooked meats or organs from intermediate hosts infected with T. gondii tissue cysts.In most healthy adult humans, the infection is asymptomatic.When a woman is exposed to T. 
gondii for the first time during pregnancy, the parasite may be vertically transmitted to the fetus, and may result in death or severe illness Immunocompromised patients may develop toxoplasmic encephalitis.Some Inuit communities show a high level of exposure to T. gondii, with almost 60% seroprevalence in Nunavik, Quebec.This high seroprevalence was associated with handling or consuming country foods.Most animal species that are harvested in the Canadian North as country foods, including various terrestrial and marine mammals, birds, and fish, have tested positive for T. gondii.Traditionally, some country foods are eaten raw, which increases the chance of contracting toxoplasmosis.Sarcocystis spp. typically have prey-predator life cycles involving herbivores and carnivores as intermediate and definitive hosts, respectively.These parasites primarily infect skeletal muscle, heart muscle, and lymph nodes of the intermediate host.Humans can serve as definitive hosts for S. hominis and S. suihominis, which are acquired from eating undercooked beef and pork, respectively.Humans can also serve as intermediate hosts for other Sarcocystis spp., likely acquired by ingesting sporocysts from contaminated food or water, or in the environment.Infection in humans causes the disease sarcocystosis, which is generally asymptomatic.In Southeast Asia, muscular sarcocystosis in humans was found to be 21%.To our knowledge, no Sarcocystis infections in humans have been documented in Canada except for travel-related cases to Southeast Asia.Neospora caninum is closely related to T. gondii, and earlier reports confused the two species."Dogs and other canids are the definitive host of N. caninum and oocysts are shed in the canid's feces.Neospora caninum may cause severe neuromuscular disease in dogs, resulting in paraparesis of their hind limbs.In cattle, which serve as intermediate hosts, N. caninum may cause encephalitis and abortions and can be transmitted vertically.Antibodies to N. caninum were reported in 7% of human serum samples in the USA, and in 6% of healthy adults.Neospora caninum seroprevalence is significantly higher in HIV-infected patients and in patients with neurological disorders.However, this parasite has not been detected in human tissues, thus its zoonotic potential has not been clearly demonstrated.The objective of this study was to determine the prevalence of Toxoplasma, Sarcocystis, and Neospora infections in four species of seals that are harvested for food in northern and eastern Canada.Results from this study will aid in evaluating the risk of transmission of these parasites to humans through the consumption of seal meat or organ tissues.Ringed seals, P. hispida, 12 young-of-the-year or juvenile females, 1 Age = 2, 1 Age = 4) and 7 YOY or juvenile males, were collected by Inuit hunters and sampled in 1993 and 1994 at Salluit, Nunavik, Quebec.In addition, tissues from four ringed seals were collected from Inukjuak, Nunavik, Quebec but no information was available on the age or sex of these animals.Harp seals, P. groenlandicus, 19 adult females, one YOY male and one YOY female, and hooded seals, C. cristata, adult females only, were shot under scientific permit issued by Fisheries and Oceans Canada and sampled in 2005 from breeding ice floes located west of the Magdalen Islands in the Gulf of St. Lawrence, Québec.Grey seals, H. 
grypus, 14 adult females and 15 adult males, were shot under scientific permit and sampled in 2012 from breeding colonies on Saddle Island and Pictou Island, Nova Scotia.Canine teeth were extracted from lower jaws for age determination of ringed and grey seals only.Thin cross-sections of teeth were made and the number of dentinal annuli were counted with one growth layer group = one year of age.Hooded and grey seals were aged based on total length and sexual maturity.Seals were classified as YOY, juvenile or adult as described in Measures et al.The sex was determined in 77 of 81 seals; 54 were female and 23 were male.A total of 124 tissue samples were collected from 81 seals and included diaphragm, brain, heart muscle, lung and skeletal muscle.Tissue samples were stored at −20 °C.Tissue samples were thawed, and 1 g subsample of each was divided into two 500 mg aliquots which were used for DNA extraction.The cell lysis protocol was adapted from Opsteegh et al.To each aliquot, 625 μl of cell lysis buffer containing 100 mM Tris-HCl pH 8.0, 50 mM EDTA pH 8.0, 100 mM NaCl, 1% SDS, 2% 2-mercaptoethanol, and 5 mg/ml proteinase K, and 100 μl of 0.1 mm glass beads and 0.7 mm zirconia beads were added.Samples were homogenized 6500 rpm for 3 × 20 s using the Precellys 24 homogenizer before incubating overnight at 45 °C.Aliquots were pooled into 15 ml conical tubes and 1.25 ml cell lysis buffer was added and incubated for 2 h at 45 °C.To each homogenized tissue sample, 625 μl of 5 M NaCl and 510 μl of cetyl trimethylammonium bromide/NaCl were added.Samples were incubated at 65 °C for 15 min.An equal volume of phenol/chloroform/isoamyl-alcohol was added to the sample, followed by a 1.5 h incubation at room temperature while mixing on a Revolver™ Rotator.The solution was then centrifuged at 3000×g for 15 min at 12 °C.The supernatant was dispensed into a new 15 ml conical tube.An equal volume of chloroform/isoamyl-alcohol was added, and the samples were placed on a revolver for 1 h at RT.Samples were then centrifuged as indicated above.The supernatant was collected and 2 vol of cold 100% ethanol were added to precipitate the DNA.Samples were stored overnight at 4 °C for complete precipitation.The precipitated DNA was pelleted at 3000×g for 20 min at 4 °C.An equal volume of 70% ethanol was added to wash the DNA pellet before centrifugation at 1000×g for 10 min at 4 °C.This step was repeated twice.The pellet was then transferred to a 1.5 ml LoBind tube and air dried until translucent.The dry pellet was resuspended in 150 μl of EB Elution Buffer at 50 °C for 4 h.The extracted DNA was stored at −20 °C.The gene regions, primers, and their respective nucleotide sequences that were used in this study are listed in Table 1.All tissues available for testing in this study were tested with B1 and 18S primers.Sarcocystis-specific primers were used to confirm Sarcocystis sp.All PCR reactions were performed with a total reaction volume of 25 μl containing 1 × concentration of a 5 × Green GoTaq Reaction Buffer, 2 mM of MgCl2, 200 μM of dNTPs, 0.625 U GoTaq Polymerase, 300 nM of each primer, 1 μl of template DNA, and UltraPure water.The DNA concentration, quantified using Nanodrop, was normalized to 500 ng per reaction.PCR reaction was performed using the Mastercycler Nexus X2 thermocycler for all samples.Cycling conditions for all samples were: 95 °C for 2 min, 35 cycles of 94 °C for 30 s, 50–68 °C for 30 s, and 72 °C for 60 s, following by a final extension at 72 °C for 10 min, and final hold temperature of 10 °C.The 
annealing temperatures varied between primers and are listed in Table 1.Negative controls were added to each PCR run.Positive controls consisted of DNA extracted from T. gondii oocysts kindly donated by Dr. J. P. Dubey, USDA.While positive controls were not available for Sarcocystis sp. or Neospora caninum, T. gondii positive control was used as a negative control for these parasites."Positive samples as determined by gel electrophoresis were purified using either the QIAquick PCR purification kit or the QIAquick Gel Extraction kit following manufacturer's instructions.The purified PCR products were prepared for, and subjected to, bi-directional, cycle sequencing using BigDye Terminator v3.1 Cycle Sequencing Kit as recommended by the manufacturer.Amplified sequence products were purified using Wizard MagneSil green sequencing reaction clean-up system, and capillary electrophoresis was performed on a 3500 Genetic Analyser.Sequences were assembled, edited and aligned using SeqScape v3 software.Resulting consensus sequences were aligned with representative GenBank 18S sequence data from T. gondii, Sarcocystis spp. and Neospora spp., and trimmed to identical lengths of 441bp using BioEdit.Sequences are available through GenBank accession numbers MH514961-MH514967 for the Sarcocystis-positive samples, and GenBank accession numbers MH595863-MH595890 for the Toxoplasma- or Neospora-positive samples.The evolutionary history was inferred using the Maximum Likelihood method based on the Kimura 2-parameter model.The tree with the highest log likelihood was used.Initial tree for the heuristic search were obtained automatically by applying Neighbor-Join and BioNJ algorithms to a matrix of pairwise distances estimated using the Maximum Composite Likelihood approach, and then selecting the topology with superior log likelihood value.Trees were drawn to scale, with branch lengths measured in the number of substitutions per site.All positions with less than 95% site coverage were eliminated.Evolutionary analyses were conducted in MEGA6.Depending upon the seal species and collection method, different tissues were available for testing, including muscle, brain, heart, lung, and diaphragm.PCR was performed on all tissues using 18S and B1 primers as described in Table 1.18S Sarcocystidae primers were used to detect all parasites of the Sarcocystidae family.All Sarcocystis sp. positive samples were confirmed using a second, Sarcocystis-specific, nested 18S primer.Toxoplasma gondii was detected using 18S and B1 primers, however, not all tissues were found to be positive for both primers.Toxoplasma gondii, Sarcocystis sp. and N. caninum-like DNA was detected in 40%, 9% and 7% of seals, respectively.Analyses revealed T. gondii DNA in 26% of ringed seals, 63% of hooded seals, 57% of harp seals, and 31% of grey seals.Sarcocystis sp.DNA was found in 9% of ringed seals, 13% of hooded seals, 14% of harp seals, and 4% of grey seals.Neospora caninum-like DNA was only found in ringed seals from Salluit.All Sarcocystis sp.-infected animals of known sex were female; however, 97% of the seals harvested from the Magdalen Islands were female and the sex of the seals from Inukjuak was unknown.All animals from the Salluit cohort were <5 years of age; of the 6 N. caninum-like positive animals, 4 were female and 2 were male, and 5 of the 6 were YOY and one was Age = 1.For T. 
gondii-infected animals, the sex was known for 31 of 32 animals, of which 19 were female and 12 were male, which included one 4-day-old male harp seal pup.The prevalence of T. gondii, Sarcocystis sp., and N. caninum-like parasites in the seal populations differed by harvest location.Toxoplasma gondii DNA was detected in all seal species from all harvest locations.Sarcocystis sp.DNA was detected in all seal species but only from three of five harvest locations, namely ringed seals from Inukjuak, grey seals from Pictou Island, and harp and hooded seals from the Magdalen Islands.As Sarcocystis sp.DNA was predominantly found in skeletal muscle, and this tissue was not available from all seal species and harvest locations, some Sarcocystis infections may have been undetected.Neospora caninum-like DNA was detected in ringed seals from Salluit but not in ringed seals from Inukjuak, nor in any other seal species.Tissues positive for T. gondii included diaphragm, brain, heart muscle, lung and skeletal muscle, in similar prevalences.Sarcocystis sp. appeared to have a preference for skeletal muscle compared to diaphragm, brain, heart muscle and lung.Neospora caninum-like DNA was detected only in lung tissues from ringed seals harvested in Salluit as it was the only tissue available from this particular seal cohort.A phylogenetic tree was made for the 18S gene region of T. gondii and the N. caninum-like parasites.Five of the six N. caninum-like positive tissue samples were 100% identical by sequencing, whereas the sixth shared 99.8% identity.For T. gondii, some animals harbored single nucleotide polymorphisms of the parasite.There was insufficient information to determine whether SNPs were due to infection with more than one T. gondii strain, or because 18S is a multi-copy gene that may contain SNPs in one or more of its copies.Five of the seven Sarcocystis sp.-positive tissues had a single nucleotide polymorphism that was distinct from known Sarcocystis spp. reference sequences archived in GenBank.Ringed seal Para0249M and harp seal Para0281D had four SNPs, of which only two were identical to the SNPs of the other five seal tissues.Moreover, ringed seal Para0249M had two nucleotide variants in two of the SNPs.While Sarcocystis spp. are haploid in their intermediate hosts, 18S is a multi-copy gene and allelic variations in Sarcocystis spp. have been described previously.Seal meat and organ tissues, including muscle, blubber, heart, liver, intestine, bones, cartilage, etc. have been consumed in Canada for thousands of years by indigenous people.They are generally eaten raw, rare, or undercooked depending on cultural habits, increasing the risk of parasites being transmitted to the consumer.Data from the present study suggests that Canadian seal meat and organ tissues may be a source of infection of T. 
gondii.As there is considerably less known about the infectivity and pathogenicity of Sarcocystis and Neospora in humans, the presence of these parasites in seals represents a lesser known risk to consumers.In this study, we report the presence of DNA of Sarcocystidae parasites in all seal species tested.While DNA analysis may be less sensitive compared to serology, PCR and subsequent sequencing eliminates false-positive serological results due to cross-reaction between different species of the Sarcocystidae family.Furthermore, serological tests should be carefully interpreted, depending on the antibodies used for testing, as some antibodies can be transmitted from mother to fetus.Toxoplasma gondii was reported in numerous wild otariid and phocid pinnipeds worldwide, including harbour seals, ringed seals, bearded seals, grey seals, hooded seals, and spotted seals, but not ribbon seals, in Canada and Alaska, USA.Measures et al. reported seroprevalence of T. gondii in harbour, grey, and hooded seals, but not in harp seals, on the east coast of Canada.Oksanen and coworkers did not detect T. gondii in harp, ringed, and hooded seals from the Northeastern Atlantic using serology.Thus, we report harp seals as a new host for T. gondii.Furthermore, we report evidence of vertical transmission in one 4-day-old male harp seal pup.Vertical transmission may also have occurred in five T. gondii infected YOY ringed seals in our study but, as these animals were harvested by Inuit in September and eastern Arctic ringed seals are born mid-March to mid-April with two months of lactation, it is possible that they may have acquired infections via their diet which is initially pelagic crustaceans and later fish such as Arctic cod.Muscle tissue of Arctic char and Atlantic salmon have recently been identified as T. gondii DNA-positive.Measures et al. reported one 10-day-old harbour seal pup and one 14-day-old grey seal pup seropositive to T. gondii, but they attributed seropositivity to maternal antibodies.Vertical transmission of T. gondii and S. neurona was documented in an aborted sea otter pup.Furthermore, co-infections with T. gondii and S. neurona have been reported.Transmission of such protozoans to marine mammals, including cetaceans, in the marine environment is not fully understood and vertical transmission may be one way to infect conspecifics or offspring.Extralimital reports of ringed, grey, harp and hooded seals in southern waters such as southern Nova Scotia and New England, or even as far south as the Caribbean in the case of juvenile hooded seals, often involve seals that are sick and stranded.These seals may be taken into rehabilitation facilities, where they may be at greater risk of exposure to protozoans such as N. caninum, Sarcocystis spp. and T. gondii and where infected wild and domestic canids and felids contaminate coastal environments.For example, canids such as coyotes are known to venture onto ice in coastal environments to scavenge and predate seals.Canids, including coyotes, foxes and wolves are infected with Sarcocystis spp. and N. caninum but the relationship of these coccidians in canids with those in seals is unknown.Sarcocystis spp. is reported in otariids and phocids .Sarcocystis neurona DNA was detected in beach-cast otariids, phocids, sea otters and cetaceans in Oregon and Washington, USA and British Columbia, Canada.Sarcocystis spp. 
has not been described in seals from Arctic Canada or eastern Canada.Thus, we report ringed seals, hooded seals, harp seals, and grey seals as new host reports in Canada.Neospora caninum or N. caninum-like or “Coccidia C″ were reported in otariids and phocids, and spotted seals.Because the genetic differences between N. caninum and Hammondia heydorni and their relationship to T. gondii have not been fully resolved, we consider N. caninum indistinguishable from H. heydorni.Furthermore, we consider H. hammondii indistinguishable from T. gondii in this study.In Alaska, seroprevalence of N. caninum was reported in harbour seals and ringed seals but not in bearded seals, spotted seals, or ribbon seals.Our results provide new records of N. caninum-like parasites in Canadian ringed seals.It is not clear whether this suggests acute or systemic neosporosis as no histopathology was conducted on any of our samples.Furthermore, the 18S gene is conserved in some Neospora spp. and further sequencing will be needed to accurately identify the N. caninum-like parasites as N. caninum.As noted above for T. gondii, N. caninum-like parasites were found in YOY ringed seals but we could not confirm transplacental infection for either parasite due to the date of collection.Stranded marine mammals are often sick and do not represent the health of wild populations, thus prevalence of parasites and associated disease in carcasses or sick stranded animals, as frequently reported in the literature, may over-estimate the role of these parasites in wild populations.As S. neurona and other Sarcocystis spp. infections are associated with myositis, severe meningoencephalitis, and hepatitis in stranded marine mammals, the prevalence of Sarcocystis spp. in the wild population may be lower.In our study, apparently healthy seals were shot by Inuit hunters or under scientific permit and prevalence of protozoan infections may not be comparable to stranded animals.While some researchers described a novel S. neurona genotype in seals, we were unable to identify the closest relative to the genotype that was found in our study because many Sarcocystis spp. and Neospora spp. are identical in the 18S gene region that was analysed.Our data also show that it is imperative to confirm PCR-positive results with sequencing.Because Toxoplasma, Sarcocystis and Neospora are very closely related to one another, it is possible to amplify more than one parasite species of the Sarcocystidae family with the same primers.Alternatively, more specific primers may be designed to eliminate false-positive PCR results, as was done in the present study using Toxoplasma B1 and 18S Sarcocystis-specific primers.The results of this study demonstrate that DNA from parasites of the Sarcocystidae family, particularly T. gondii and Sarcocystis sp., is prevalent in tissues of northern and eastern Canadian seals.Although based only on the detection of parasite DNA, these findings nevertheless suggest that consumption of raw or undercooked seal meat or organ tissues can pose a risk of infection to consumers.For consumer safety, seal meat and other organ tissues should be thoroughly cooked or frozen.For example, freezing at −10 °C or lower for at least three days was shown to be sufficient for killing T. gondii and Sarcocystis spp.The same protocols should be followed when feeding seal meat to dogs to prevent transmission and propagation of N. caninum or N. 
caninum-like parasites.The limitations of this study are primarily due to sample size.With four different species of seals, varying in age, diet, behaviour and distribution, collected from five different harvest areas, an analysis of observed differences in prevalence of infection for each of three protozoan parasites is not possible.Moreover, there may be different modes of transmission or different exposure rates to the parasites because some species of seals undertake seasonal migrations to more southern waters.The types and numbers of tissues were also limited for some seals.Consequently, it is difficult to fully assess the risk to consumers except to state that these three parasites are present in Canadian pinnipeds and that further data are required to evaluate zoonotic risk.
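For readers who wish to repeat the kind of 18S comparison described in the methods above, a simplified sketch follows. The published analysis built a maximum-likelihood tree with the Kimura 2-parameter model in MEGA6; here, purely as an illustration, Biopython is used to compute a distance matrix and a neighbour-joining tree from an already aligned 441 bp FASTA file (the file name and the simple identity distance are assumptions, not the published workflow).

# Sketch: distance-based tree from an aligned 441 bp 18S FASTA file (illustrative only).
from Bio import AlignIO
from Bio.Phylo import draw_ascii
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("seal_18S_441bp_aligned.fasta", "fasta")  # hypothetical file name

calculator = DistanceCalculator("identity")    # simple identity distance, not K2P
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)      # neighbour-joining tree

draw_ascii(nj_tree)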
Zoonotic parasites of seals that are harvested for food may pose a health risk when seal meat or organ tissues of infected animals are eaten raw or undercooked. In this study, 124 tissue samples from 81 seals, comprising four species, were collected from northern and eastern Canada. Tissues from 23 ringed seals (Pusa hispida), 8 hooded seals (Cystophora cristata), 21 harp seals (Pagophilus groenlandicus), and 29 grey seals (Halichoerus grypus) were tested for parasites of the Sarcocystidae family including Toxoplasma gondii, Sarcocystis spp., and Neospora spp. using nested PCR followed by Sanger sequencing. Toxoplasma gondii DNA was present in 26% of ringed seals, 63% of hooded seals, 57% of harp seals, and 31% of grey seals. Sarcocystis sp. DNA was found in 9% of ringed seals, 13% of hooded seals, 14% of harp seals, and 4% of grey seals, while N. caninum-like DNA was present in 26% of ringed seals. While it is unclear how pinnipeds may become infected with these protozoans, horizontal transmission is most likely. However, one harp seal pup (4 days old) was PCR-positive for T. gondii, suggesting vertical transmission may also occur. Phylogenetic analysis of the 18S gene region indicates that Sarcocystis sp. in these seals belongs to a unique genotype. Furthermore, this study represents a new host report for T. gondii in harp seals, a new host and geographic report for N. caninum-like parasites in ringed seals, and four new hosts and geographic reports for Sarcocystis sp. These results demonstrate that parasites of the Sarcocystidae family are prevalent in northern and eastern Canadian seals. While the zoonotic potential of Sarcocystis sp. and the N. caninum-like parasite are unclear, consumption of raw or undercooked seal meat or organ tissues pose a risk of T. gondii infection to consumers.
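As a small worked example of the prevalence figures quoted above, the sketch below computes exact percentages and 95% Wilson confidence intervals. The positive counts are back-calculated here from the reported percentages and sample sizes (6/23 ringed, 5/8 hooded, 12/21 harp, 9/29 grey) and should be treated as illustrative rather than published tabulated counts.

# Sketch: prevalence of T. gondii DNA by seal species with 95% Wilson confidence intervals.
from statsmodels.stats.proportion import proportion_confint

t_gondii_positive = {"ringed": (6, 23), "hooded": (5, 8),
                     "harp": (12, 21), "grey": (9, 29)}

for species, (positives, n) in t_gondii_positive.items():
    low, high = proportion_confint(positives, n, alpha=0.05, method="wilson")
    print(f"{species:7s} {positives:2d}/{n:<3d} = {positives/n:5.1%} "
          f"(95% CI {low:.1%}-{high:.1%})")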
792
Chat-based instant messaging support integrated with brief interventions for smoking cessation: a community-based, pragmatic, cluster-randomised controlled trial
Advances in mobile technologies have provided a new avenue for mobile phone-based interventions for smoking cessation.Randomised trials have found mobile text messaging through short message service to be effective for smoking cessation,1,2 primarily by increasing perceived psychosocial support.3,Whether more interactive and adaptive mHealth platforms, including smartphone apps and social networking tools, could further improve smoking cessation outcomes remains inconclusive.4–6,Personalised, chat-based support provided in real time by counsellors is an emerging area in mental health care,7 but no study has yet assessed its effect on smoking abstinence.Mobile instant messaging apps are popular and inexpensive alternatives to SMS for interactive messaging.Our population-based survey8 found that adults exposed to health information from instant messaging smoked less and were more physically active than those who were not exposed, suggesting that instant messaging might be a viable way of promoting preventive behaviours.Our pilot trial9 found counsellor-moderated WhatsApp social groups to be effective in preventing relapse among individuals who had recently quit.Our formative qualitative study10 of community smokers showed that mobile instant messaging is a feasible and acceptable platform for chat-based smoking cessation support.Available models of treatment for tobacco dependence are mainly reactive and rely on a health-care practitioner to initiate treatment,11 but novel approaches to engage less-motivated or hard-to-reach smokers have been increasingly studied.12–14,In Hong Kong, only 31% of daily smokers have ever tried to quit and most current smokers never sought help from a smoking cessation service.15,Existing brief intervention models, such as the five-step 5As, mainly target smokers in clinical settings.11,We modified the 5As and developed a proactive recruitment and intervention model, AWARD, delivered by lay counsellors for promoting quitting and uptake of smoking cessation services in smokers in community settings.16–18,Hong Kong has extensive smartphone penetration.15,We developed a chat-based smoking cessation support programme delivered through instant messaging, which was designed to improve abstinence by providing theory-based behavioural support and increasing the use of smoking cessation services.10,In this trial, we assessed the effect of chat-based instant messaging support, which was integrated with brief interventions from the AWARD model, on abstinence among proactively-recruited smokers from community sites in Hong Kong.Evidence before this study,We searched PubMed and Google Scholar for randomised trials of mobile health interventions for smoking cessation published in any language, from database inception to May 31, 2017, using the search terms “mobile phone”, “smartphone”, “mobile health”, “mHealth”, “smoking”, and “tobacco”.We identified a relevant Cochrane review of mobile phone-based interventions in general, three meta-analyses focusing on text-messaging support, a systematic review on smartphone apps, and a systematic review on social media.Few of the reviewed studies included biochemically confirmed abstinence as an outcome.A meta-analysis of six trials reported a moderate effect of mobile phone-based interventions on biochemically validated abstinence at 6 months, but with substantial heterogeneity.We did an in-depth review of these trials, and we found only two trials reporting a beneficial effect of text messaging on validated abstinence, which was 
assessed at the end of treatment.Whether the intervention effect could last after the end of treatment has remained uncertain.Nearly all trials included in the meta-analyses involved participants who were recruited by passive means in the community and were willing to quit within 30 days of randomisation.The findings might not be extrapolated to proactively recruited smokers and those without an interest in quitting at baseline.Evidence on smartphone apps and social media for smoking cessation was inconclusive because most trials were pilot in nature.We found no trial examining the effectiveness of mobile instant messaging for smoking cessation, which was reconfirmed by an updated electronic database search on Feb 28, 2019.Added value of this study,Our trial showed that chat-based instant messaging support integrated with brief interventions was effective in increasing abstinence among smokers in the community.The proactive intervention model was able to reach a large proportion of smokers with low motivation to quit, in whom the intervention effect seemed to be stronger than in those with higher motivation to quit at baseline.Effective engagement in the chat-based intervention was low but strongly predicted biochemically validated abstinence with or without the use of external smoking cessation services.Implications of all the available evidence,To our knowledge, we have provided the first robust evidence to support the use of chat-based instant messaging support as a new method for treatment of tobacco dependence.Future trials in different settings are warranted to ascertain the effectiveness of chat-based instant messaging support on quitting.Further improvements should optimise chat-based interventions and explore strategies to increase engagement.Our findings might be useful for providers of treatment of tobacco dependence and policy makers for improving the reach of smoking cessation support to community smokers.Our pragmatic, cluster-randomised controlled trial found that a proactive intervention model, integrating chat-based instant messaging support with offers of referral to a smoking cessation service and brief advice, was more effective in increasing abstinence and use of smoking cessation services in community smokers than was brief advice alone.We observed significant effects on validated abstinence at the end of the chat-based intervention and 3 months post-treatment.Although a direct comparison is difficult because of differences in study settings, smoker characteristics, methods of intervention delivery, and intervention durations, the observed effect on validated abstinence was similar to those of previous mobile phone-based interventions for smoking cessation.4,To our knowledge, this was the first trial of chat-based support for smoking cessation delivered through an understudied mHealth method––mobile instant messaging apps.The chat-based intervention, developed on the basis of the complex trial design framework, was integrated into a multicomponent, proactive treatment model for community smokers.The complex design of the two-group pragmatic trial restricted the ability of the study to fully assess the contribution of the intervention to cessation outcomes.Factorial trials in which participants are randomly assigned to receive either control treatment or chat-based support, brief advice, or both are needed to assess the additive and interactive effect of the individual components.Nevertheless, we found that participants who engaged only in the chat-based intervention 
had similar results on validated abstinence compared with those who only used a smoking cessation service.The greater point estimate observed in participants who used both interventions was also suggestive of an additive effect.The associations remained significant after adjusting for other important predictors of cessation outcomes, including previous quit history, motivation to quit, and nicotine dependence.28,This suggested that chat-based support might be a crucial component of the combined intervention model.Consistent with the law of attrition, which notes that a substantial proportion of participants do not engage with the intervention in any digital health trial,29 the prevalence of effective engagement with the chat-based intervention was low in our trial.We found that participants less motivated to quit were less likely to engage with the chat-based intervention.This supports similar findings in the USA and the UK, wherein smokers who were not motivated to quit had less desire to use mHealth for quitting than those who were motivated to quit.30,The low proportion of participants ready to quit in 7 or 30 days at baseline might thus explain the low engagement in our trial.The effective engagement became 30% when it was limited to participants who were ready to quit in 7 or 30 days at baseline, which was similar to the full adherent rate reported in a trial of a smartphone cessation app done in similarly motivated smokers.31,The unavailability of interactive support outside office hours might have led to the low engagement, because most participants reported “too busy” as the reason for not using the intervention.The smaller effect on validated abstinence observed at 3 months after the end of the chat-based intervention also suggests that extending the duration and service hours of the intervention might improve engagement and abstinence.Some participants who were not interested in receiving the chat-based support might have used the blocking function of instant messaging apps.Our content analyses of the chat dialogue, to be reported elsewhere, shall provide some data on this issue.The use of youth counsellors to engage smokers at smoking hotspots in the community and deliver brief interventions in this trial had several advantages.The proactive recruitment strategy allowed us to recruit a more representative sample of community smokers than if more passive approaches were used.This strategy also presents a novel, foot-in-the-door approach to extend tobacco dependence treatment to hard-to-reach smokers, as indicated by our enrolment of a large proportion of smokers without any plan to quit.Despite a lower usage rate of the chat-based intervention in these participants compared with that of participants who planned to quit, we noted a stronger intervention effect on abstinence in participants not ready to quit than in those ready to quit in 30 days.The chat-based intervention focused on identifying a value to increase commitment to quit by using ACT, which might be particularly effective in participants who did not have a motivator to quit and not as effective in those who already had a reason to quit.10,Our results also corroborate previous qualitative findings in the USA that smokers not interested in quitting might be receptive to mHealth support, which was regarded as a novel way to change their smoking behaviour.32,The mean cost of recruiting a participant and delivering brief advice at baseline was low, suggesting a high scalability of the proactive, lay counsellor-delivered 
treatment model in places where health-care resources are scarce.The higher mean cost observed in the intervention group was mostly due to the personnel and equipment needed to deliver the chat-based intervention.The mean cost for each additional validated quitter at 6 months was higher than that of a trial of automatic text-messaging support done in UK treatment seekers.33,However, the cost of chat-based support will likely decrease because current cessation counsellors can be trained to use chat-based support.As artificial intelligence and related techniques continue to advance, chatbots could also be developed to provide automated personalised support to smokers and to lower the cost of interventions.7,Our study has some limitations.First, our trial design precluded estimation of the independent effects of chat-based support and baseline interventions on cessation outcomes, although our engagement analyses were indicative of the individual and additive benefits of both interventions.Explanatory trials with a factorial design are warranted to better estimate these independent effects.Nevertheless, we have provided real-world evidence of the effectiveness of the intervention model, which was designed to be readily implementable in community settings.Second, despite a good retention rate of 77%, given the high risk for attrition in community-based proactive treatment trials, non-response bias remains a possibility.Our sensitivity analyses with use of multiple imputations and by complete case yielded similar results to those of the main analyses.Third, about half of self-reported quitters did not validate their abstinence and the lower, though not significant, participation rate in the intervention than that of control groups might have skewed the observed effects towards the null.The challenge of verifying abstinence in digital health smoking cessation trials is well documented, and a 2017 study also showed high discrepancy between self-reported and biochemically validated abstinence.25,It is likely that self-reported quitters who refused to provide a sample for biochemical validation did not quit.Fourth, residual and unmeasured confounding on the observed associations of intervention engagement with abstinence cannot be excluded, although we have adjusted for important predictors of smoking cessation, including previous quit history, motivation to quit, and nicotine dependence.28,Fifth, the study provided insufficient data on the mechanisms underlying the intervention effect on cessation outcomes.Our prespecified content analyses, based on the taxonomy of behavioural change techniques, might provide some insight on these mechanisms.Sixth, the trial was community-based and used a proactive approach to recruit participants.Whether the findings are generalisable to smokers in clinical settings and those who self-selected to go for treatment needs to be tested, but ample research has shown the effectiveness of mobile phone-based interventions on quitting in treatment seekers.1,2,4,Finally, although our sample was largely representative of daily cigarette smokers in the general population, the participants tended to be younger, probably because of the lower uptake of smartphone technologies in older smokers.34,The generalisability of our findings might also be reduced by the greater proportion of previous quit attempts in our sample than in smokers in the general population.Further research is encouraged to ascertain the usefulness of mobile instant messaging for smoking cessation and other 
preventive behaviours.Mobile instant messaging apps are the most widely used smartphone apps and thus, are a more conducive mHealth platform for cessation support than other smartphone apps, because many community smokers unmotivated to quit are unlikely to install a smoking cessation app.30,Some instant messaging apps have now developed into broad platforms with additional functions other than messaging.For instance, WeChat includes a mobile payment platform and a mini programme or app-in-app system for add-on functions.These features present new opportunities to integrate other behavioural change strategies with the chat-based intervention, such as monetary incentive for rewarding action taken to achieve abstinence and gamified support.Our intervention model might also be adapted and tested for treatment of other behaviours.Extending tobacco dependence treatment to unmotivated smokers and increasing use of smoking cessation services have enormous public health implications.Our pragmatic trial suggests that a proactive intervention model integrating chat-based instant messaging support with brief interventions can increase quit rates, especially in smokers not ready to quit.We also provided initial evidence that chat-based support might increase abstinence as a stand-alone therapy and in combination with adjuvant treatment provided by external smoking cessation services.The study protocol and de-identified individual participant data generated during this study are available from the investigators on reasonable request.Requests should be directed to the corresponding author by email.We did a two-arm, parallel, pragmatic, cluster-randomised, controlled trial nested within a Quit to Win smoke-free community campaign, organised by the Hong Kong Council of Smoking and Health.16–20,Details of the rationale and study protocol were reported elsewhere.21,Ethical approval was granted by the University of Hong Kong and the Hospital Authority Hong Kong West Cluster Institutional Review Board.Participant recruitment took place in 68 community sites, such as shopping malls and housing estates and nearby areas, throughout all 18 districts in Hong Kong.Trained smoking cessation ambassadors, consisting mainly of university students, proactively approached smokers in the community sites, screened their eligibility, and invited them to participate in the trial.The ambassadors also collected written consent from participants at this stage.All smoking cessation ambassadors attended a half-day training workshop, which included an overview of the research study and training in the delivery of baseline interventions, and completed a test of their knowledge, attitude, and practice before participant recruitment.A member of the research team oversaw the recruitment at each community site and provided support to the ambassadors as needed.Participants were Hong Kong residents aged 18 years or older who had smoked at least one cigarette daily in the preceding 3 months, verified by an exhaled carbon monoxide concentration of 4 parts per million or higher; could communicate in Chinese; owned a smartphone with an instant messaging application installed; and intended to quit or reduce smoking, indicated by joining the QTW campaign.Smokers who had a communication barrier or were participating in other smoking cessation programmes or services were excluded from participating in the trial.We randomly assigned the 68 community sites to the intervention or control group, with random permuted blocks of two, four, and six to 
yield a similar number of clusters in both study groups.The characteristic of the clusters was balanced in both study groups.Because smokers tended to gather at smoking hotspots where ashtrays were available, participants recruited within the same community site received the same intervention to avoid potential risk of treatment contamination and the practical difficulty of doing individual randomisation on site.The random allocation sequence was computer-generated by a non-investigator who had no other involvement in the study.Masking of participants and the research team was not possible because of the nature of the intervention, but the participants were not informed of the treatment provided in the other group."Outcome assessors and statistical analysts were masked to the participants' allocation to the trial group.Participants in the intervention and control groups received brief face-to-face smoking cessation advice by the smoking cessation ambassadors at baseline.The ambassadors first initiated conversations with smokers by asking about their smoking behaviours and then invited the smokers to test for exhaled carbon monoxide concentrations.The test results were shown to the smokers to warn about the risks of continued smoking.The ambassadors then advised the smokers to quit or reduce smoking as soon as possible by joining the QTW contest.All eligible smokers willing to participate signed a written consent form, completed a baseline questionnaire, and received a 12-page self-help booklet.Participants in the control group received only brief smoking cessation advice at baseline.Following the AWARD model, participants allocated to the intervention group additionally received information about the smoking cessation services in Hong Kong from an information card and were offered referral to a smoking cessation service.The contact details of participants who agreed to be referred were then sent to the service providers of their choice for further treatment for tobacco dependence.Details of the treatments offered by these service providers are available in the appendix.Participants in the intervention group also received chat-based cessation support delivered through an instant messaging app for 3 months from baseline.Details of the design and content of the chat-based intervention have been described elsewhere.10,21,Briefly, a smoking cessation counsellor interacted with a participant in real time and provided personalised, theory-based cessation support.The acceptance and commitment therapy is a counselling model focusing on increasing psychological capacity to accept unpleasant experiences while committing to value-guided behavioural change.22,Guided by ACT, the counsellor helped participants to identify values that could strengthen their commitment to quit or reduce smoking and helped them to overcome urges to smoke by using metaphors and mindfulness exercises.The counsellor also delivered behavioural change techniques that promote adjuvant activities to aid smoking cessation.23,Specifically, the counsellors encouraged the use of a smoking cessation service and offered referral for participants who had refused referral at baseline.On the basis of the need and progress indicated by the participants during the chat conversation, other behavioural change techniques were also used to support quitting.Although participants could send a message anytime, the counsellor could only respond during office hours on working days, because of resource constraints.To initiate and facilitate 
interactions between participants and counsellors in WhatsApp, 19 push messages were sent to participants on a tapering schedule.These messages covered generic information about the benefits of quitting, strategies to manage urges to smoke, smoking cessation services, and reminders to participate in the telephone follow-up at 1, 2, and 3 months.A reminder to participate in the 6-month telephone follow-up was also sent at 26 weeks.Participants in the control group also received a reminder to participate in the telephone follow-up at each timepoint by SMS.Regular messages alone were not found to be effective in increasing abstinence in our previous QTW trial.19,The counsellors who delivered the chat-based intervention were research staff with at least 1 year of experience in smoking cessation research, supervised by an MSc-level psychotherapist trained in ACT and by a research nurse.The counsellors met at least once weekly to discuss the caseloads.The instant messaging dialogues were recorded and checked to ensure intervention fidelity.Apart from the baseline questionnaire completed during recruitment, all participants received telephone follow-up calls at 1, 2, 3, and 6 months after baseline.The follow-up assessments included current smoking status, quitting behaviours, use of smoking cessation services, and other outcomes.The primary outcome was smoking abstinence in the preceding 7 days at 6 months after treatment initiation,13 verified by exhaled carbon monoxide concentrations lower than 4 ppm and cotinine concentrations lower than 10 ng/mL.The main secondary outcome was validated smoking abstinence in the preceding 7 days at 3 months after baseline, for assessing the intervention effect at the end of the chat-based support in the intervention group.Participants earned a small cash incentive of HK$500 for passing each validation test at 3 and 6 months, which was found to have no effect on smoking abstinence in our previous QTW trial.20,Other secondary outcomes included self-reported point-prevalence abstinence in the preceding 7 days, use of smoking cessation services,smoking reduction by at least half of the baseline daily number of cigarettes, and attempts to quit at 3 and 6 months.In the earlier version of the protocol, the primary outcome was self-reported smoking abstinence in the preceding 7 days at 6 months after treatment initiation, as used in our previous QTW trials16–20 and as recommended for population-based studies that assess intervention with minimal face-to-face contact.24,In September, 2018, we changed the primary outcome to biochemically validated abstinence at 6 months because increasingly more studies of digital health interventions for smoking cessation found high rates of misreporting of abstinence status.25,The change occurred before the data on the 6-month follow-up were processed and analysed and had no effect on the trial implementation.For process evaluation of the chat-based intervention, participants reported whether they had ever interacted with a smoking cessation counsellor through instant messaging during the 3-month intervention period, which was verified by instant messaging log files.In this trial, we defined effective engagement with the chat-based intervention as having interacted with a smoking cessation counsellor, because participants who did not respond to the prompts from the counsellor would not receive any personalised support.26,At the 3-month follow-up, we also asked participants the reasons for not using the chat-based intervention.We 
estimated the sample size on the basis of findings from our previous QTW trial, which showed, for the intention-to-treat population, a validated smoking abstinence prevalence of 5·1% at 6 months in the control group,17 and an anticipated intervention effect—relative risk 1·83—derived from a meta-analysis of mHealth smoking cessation interventions.4,To detect a significant intervention effect with two-sided α of 0·05, power of 0·80, and an allocation ratio of 1:1, 586 participants in each group were required.The design effect due to cluster randomisation was considered negligible because our previous QTW trial showed that the intracluster correlation coefficient for validated abstinence at 6 months was smaller than 0·001.16,Therefore, the target sample size was 1172 participants.Primary analyses were by intention to treat, and participants with missing outcome measures were considered to have no change in smoking behaviour from baseline.27,We used generalised estimating equation models with a logit link to examine the intervention effect on outcomes, adjusting for clustering of participants within community sites with an exchangeable correlation structure.The ICCs of all abstinence outcomes were calculated by analyses of variance.We did three prespecified sensitivity analyses for the abstinence outcomes.21,First, we repeated the primary analyses with adjustment of imbalanced baseline covariates between study groups.Second, we used multiple imputations by chained equations to impute missing abstinence outcomes, using study group, age, sex, education level, cigarettes smoked per day at baseline, time to first cigarette of the day, previous quit history, and readiness to quit."We used Rubin's rule to pool the estimates from 50 imputed datasets.Third, we did complete-case analyses by excluding participants with missing outcomes.We examined the intervention effect in subgroups of age, sex, nicotine dependence, readiness to quit in the next 30 days, and previous quit history at baseline as prespecified in the published protocol,21 and in subgroups of education level as post-hoc analysis.We tested multiplicative interactions using the corresponding interaction terms.We did a planned analysis of whether baseline factors were associated with engagement in the chat-based intervention.Post-hoc analyses were done to examine the differences in primary outcome by intervention engagement, defined by use of a smoking cessation service, effective engagement with the chat-based intervention, or both, adjusting for age, sex, nicotine dependence, previous quit history, and readiness to quit at baseline.28,The operating cost of interventions, including the personnel for participant recruitment and intervention delivery and equipment, were calculated in both study groups.We used Stata/MP for all statistical analyses."A prespecified content analysis of the instant messaging dialogue between the participants and counsellors, with coding using the taxonomy of behavioural change techniques,23 and a qualitative assessment of participants' perception of the chat-based intervention will be presented elsewhere.The trial is registered with ClinicalTrials.gov, number NCT03182790.The funder of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report.MPW and TTL had full access to all the data in the study.MPW had final responsibility for the decision to submit for publication.Between June 18 and Sept 30, 2017, 1347 potential participants were screened for 
eligibility in 68 community sites; 1287 were found eligible, and 1185 provided informed consent, were included in the trial, and were randomly assigned to either the intervention or control group. We noted some differences in the characteristics of potential participants between smokers who were excluded for not using an instant messaging app and daily cigarette smokers in the general population. By intention to treat, the primary analyses included all participants randomly assigned to the intervention group or control group. The overall follow-up rates were 75% at 1 month, 70% at 2 months, 69% at 3 months, and 77% at 6 months, without significant differences between the two study groups at all four follow-ups. Attrition at 6 months was associated with younger age, female sex, not reporting education level, having no preceding quit attempts, lower readiness to quit, and lower perceived importance of quitting, but was not associated with level of nicotine dependence at baseline. The mean age of the participants was 41·5 years and 918 of 1185 were men. About half of participants had a low level of nicotine dependence, no preceding attempt to quit or reduce smoking, and no plan to quit at baseline. Baseline characteristics were similar between study groups except that the intervention group had a lower mean age, more participants with preceding quit attempts, and higher scores in perception of quitting than the control group. The mean time needed to recruit a participant and deliver the baseline interventions was similar between the intervention and control groups. Of 179 participants who self-reported abstinence at the 6-month follow-up, 83 participated in the face-to-face biological validation and 78 of them passed the validation. Participation rates were not significantly different between the intervention group and the control group. For the primary outcome at 6 months, more participants in the intervention group were validated quitters than in the control group. The 3-month validated abstinence was likewise greater in the intervention group than in the control group, as was self-reported 7-day point-prevalence abstinence assessed at all follow-up timepoints. These results did not change after adjusting for imbalanced baseline covariates. Analyses done with use of multiple imputation and by complete case also yielded similar point estimates of the intervention effect. The ICC for validated abstinence at 6 months was 0·0062, which was slightly higher than our estimate. The proportions of participants who continued to smoke but reduced their smoking frequency and of participants who made a quit attempt were slightly higher in the intervention group than in the control group at all follow-up timepoints; however, these estimates were not significant. The intervention group had significantly higher rates of smoking cessation service use than the control group at all follow-up timepoints. The intervention effect was significantly stronger in participants with moderate to high nicotine dependence, in those with a previous quit attempt over 1 year before baseline, and in those who were not ready to quit within 30 days at baseline. After correcting for multiple comparisons, only the results for readiness to quit remained significant. In the intervention group, after excluding ten participants whose responses were inconsistent with the instant messaging log, 99 of 591 participants reported having interacted with a counsellor through instant messaging. Among non-users
who provided a reason for not using the chat-based intervention, “too busy” was reported in 211 of 252 participants and was the most commonly reported reason for non-usage, followed by “don't know how to send a message” in seven participants and “not interested” in five participants.Older age, readiness to quit within 7 or 30 days, and higher perceived importance of quitting were associated with engagement in the chat-based intervention, after adjusting for other baseline characteristics.Engagement in either or both smoking cessation services and chat-based intervention significantly predicted higher prevalence of validated abstinence at 6 months.The results were similar for intervention engagement and validated abstinence, both assessed at 3 months.The total intervention cost was US$12 930 in the intervention group and $4919 in the control group.The corresponding cost for each participant was $21·9 in the intervention group and $8·3 in the control group.The cost per additional validated quitter in the intervention group was $445.
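As a brief illustration of the sample-size calculation described in the Methods above (control-group abstinence of 5·1%, anticipated relative risk of 1·83, two-sided α of 0·05 and power of 0·80), the Python sketch below applies the standard formula for comparing two independent proportions and reproduces the 586-per-group figure. This is an illustrative check rather than the authors' own computation, and, like the text, it treats the cluster design effect as negligible.

```python
from math import ceil, sqrt
from statistics import NormalDist

# Design inputs taken from the text above
p0 = 0.051               # validated 6-month abstinence in the control group
rr = 1.83                # anticipated relative risk for the intervention
p1 = p0 * rr             # expected abstinence in the intervention group
alpha, power = 0.05, 0.80

z = NormalDist()
z_a = z.inv_cdf(1 - alpha / 2)   # ~1.96
z_b = z.inv_cdf(power)           # ~0.84

# Standard two-proportion sample-size formula (1:1 allocation)
p_bar = (p0 + p1) / 2
n_per_group = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
                + z_b * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
               / (p1 - p0) ** 2)

print(ceil(n_per_group))          # 586 per group, i.e. 1172 in total
```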
Background: Mobile instant messaging apps offer a modern way to deliver personalised smoking cessation support through real-time, interactive messaging (chat). In this trial, we aimed to assess the effect of chat-based instant messaging support integrated with brief interventions on smoking cessation in a cohort of smokers proactively recruited from the community. Methods: In this two-arm, pragmatic, cluster-randomised controlled trial, we recruited participants aged 18 years or older who smoked at least one cigarette per day from 68 community sites in Hong Kong, China. Community sites were computer randomised (1:1) to the intervention group, in which participants received chat-based instant messaging support for 3 months, offers of referral to external smoking cessation services, and brief advice, or to the control group, in which participants received brief advice alone. The chat-based intervention included personalised behavioural support and promoted use of smoking cessation services. Masking of participants and the research team was not possible, but outcome assessors were masked to group assignment. The primary outcome was smoking abstinence validated by exhaled carbon monoxide concentrations lower than 4 parts per million and salivary cotinine concentrations lower than 10 ng/mL at 6 months after treatment initiation (3 months after the end of treatment). The primary analysis was by intention to treat and accounted for potential clustering effect by use of generalised estimating equation models. This trial is registered with ClinicalTrials.gov, number NCT03182790. Findings: Between June 18 and Sept 30, 2017, 1185 participants were randomly assigned to either the intervention (n=591) or control (n=594) groups. At the 6-month follow-up (77% of participants retained), the proportion of validated abstinence was significantly higher in the intervention group than in the control group (48 [8%] of 591 in intervention vs 30 [5%] of 594 in control group, unadjusted odds ratio 1.68, 95% CI 1.03–2.74; p=0.040). Engagement in the chat-based support in the intervention group was low (17%), but strongly predicted abstinence with or without use of external smoking cessation services. Interpretation: Chat-based instant messaging support integrated with brief cessation interventions increased smoking abstinence and could complement existing smoking cessation services. Funding: Hong Kong Council on Smoking and Health.
793
Multi criteria decision analysis for offshore wind energy potential in Egypt
Most of onshore wind farms are located in the best resource areas.The exploitation of further land-based onshore areas in many countries is currently impeded due to visual impacts, threats to birdlife, public acceptance, noise and land use conflicts .All these conflicts are likely to hinder future development of onshore wind farm deployment .Hence, most major developments worldwide have now shifted towards offshore wind where the resource is high, and less likely to be affected by the drawbacks of land-based wind farms mentioned earlier.The advantage of offshore wind is that despite being 150% more costly to install than onshore wind, the quality of the resources is greater as is the availability of large areas to build offshore wind farms.Furthermore, scaling up is likely to result in cost reductions propelling offshore wind on a trajectory that will be on par with onshore wind in the future .However, to our knowledge, there is no general approach to accurately assess wind resources.This work provides a new methodology to address this lack of knowledge which has global applicability for offshore wind energy exploitation.It is based on Analytical Hierarchy Process defined as an organised process to generate weighted factors to divide the decision making procedure into a few simple steps and pairwise comparison methods linked to site spatial assessment in a Geographical Information System.GIS based multi-criteria decision analysis is the most effective method for spatial siting of wind farms .GIS-MCDA is a technique used to inform decisions for spatial problems that have many criteria and data layers and is widely used in spatial planning and siting of onshore wind farms.GIS is used to put different geographical data in separate layers to display, analyse and manipulate, to produce a new data layers and/or provide appropriate land allocation decisions .The MCDA method is used to assess suitability by comparing the developed criteria of the alternatives .The alternatives are usually a number of cells that divide the study area into an equally dimensioned grid.The most popular and practical method to deploy MCDA is the Analytical Hierarchy Process.AHP was defined as an organised process to generate weighted factors to divide the decision making procedure into a few simple steps .Each criterion could be a factor or a constraint.A factor is a criterion that increases or decreases the suitability of the alternatives, while a constraint approves or neglects the alternative as a possible solution.AHP has two main steps: “Pairwise” comparison and Weighted Linear Combination.Pairwise comparison method is used to weight the different factors that are used to compare the alternatives while WLC is the final stage in AHP evaluating the alternatives .In order to establish a pathway for exploiting offshore wind resources, a systematic analysis for such exploitation is needed and this is at the core of this work.Our aim is to address the paucity of generalised modelling to support the exploitation of offshore wind energy.In order to do this, this work will utilise a countrywide case study where the developed methodology will be used to investigate the wind energy potential, specify appropriate locations of high resources with no imposed restrictions and generate suitability maps for offshore wind energy exploitation.The development methodology will appropriately identify the essential criteria that govern the spatial siting of offshore wind farms.The work is structured as follows: in the first section we 
develop the literature review further, followed by the detailed methodology considerations, a description of Egyptian offshore wind, the analysis and criteria, and then detailed results, discussion and conclusions. GIS-based MCDA is widely used in the spatial planning of onshore wind farms and the siting of turbines. Below are highlights of some of the literature for these methods, and Table 1 provides a brief comparison between the relevant approaches discussed in the literature. In a study of onshore wind farm spatial planning for Kozani, Greece, sufficient factors and constraints were used to produce a high-resolution suitability map with grid cell . The study focussed on three different scenarios: in scenario 1 all factors are of equal weight, in scenario 2 environmental and social factors have the highest weights, and in scenario 3 technical and economic factors have greater weights than the other factors. It was found that more than 12% of the study area has a suitability score greater than 0.5, i.e. suitable for wind farms, and all previously installed wind farms were located in areas with a high suitability score, which emphasised and validated their results. Their suitability map was found to be reliable by stakeholders and was used to inform the siting of new wind farms in Greece. However, in our opinion, scenario 1 is unrealistic, because factors normally have different relative importance. A further study elucidated the methodology for choosing a site for new onshore wind farms in the UK, taking the Lancashire region as a case study . In order to identify the problems and the different criteria involved, the authors undertook a public and industrial sector survey soliciting community/stakeholders' views on wind farms. From the survey results, they constructed their own scoring matrix from 0 to 10 to standardise the different criteria and then used two scenarios to calculate suitability. In scenario 1 the same weight was assigned to all criteria, while in scenario 2 the criteria were divided into 4 grades. Pairwise comparisons were used to weight the grades. The study then aggregated the criteria, producing two different maps. The results showed that roads and population areas had the dominant influence on the final decision and the available area for wind farms represented only 8.32% of the total study area. The study is advanced and accurate despite the fact that it was performed in 2001. However, the fact that they scale the "distance to roads factor" from 1 to 0 without applying the same scaling to the other factors will have implications for the accuracy of the final results. Another research group addressed suitable sites for wind farms in Western Turkey. They found the satisfaction degree for 250 m by 250 m grid cells, using environmental and social criteria such as noise, bird habitats, preserved areas, airports, and population areas, as well as a wind potential criterion. Cells that had a satisfaction degree >0.5 in both the environmental objectives and wind potential were designated as priority sites for wind farms. The study produced a powerful tool to choose new locations for wind farms. A complete spatial planning study was performed to site new wind farms in Northern Jutland in Denmark. This included most criteria to arrive at a suitability score for grid cells of 50 by 50 m.
Although the research is well presented, the method used to weight the factors was not indicated and the weight values were absent. Hence, it is difficult to ascertain from the final suitability map the methodology used to reach the conclusion. Rodman and Meentemeyer developed an approach to find suitable locations for wind farms in Northern California in the USA. They considered a 30 by 30 m grid and used a simpler method to evaluate alternatives which included only 3 factors - wind speed, environmental aspects, and human impact. The factors were scored on a scale from unsuitable = 0 to high suitability = 4, based on their own experience in judging the factors. They then weighted the three factors by ranking them from 1 to 3, to arrive at final suitability. The suitability map was created by summing the product of each scored factor and its weight, and dividing the sum by the summation of the weights. Although the method was simple, their results could have been greatly improved in terms of accuracy and applicability had they used the AHP method and considered other important factors such as land slope, grid connection, and land use. In the UK, for the offshore wind competition "Rounds", a Marine Resource System (MaRS), based on a GIS decision-making tool, was used to identify all available offshore wind resources . After successfully completing Rounds 1 and 2 of the competition, the tool was used to locate 25 GW in nine new zones for Round 3. The MaRS methodology had 3 iterations, taking advantage of the datasets from Rounds 1 and 2: the first excluded any unsuitable areas for wind farms and then weighted the factors based on expertise from previous rounds, the second was the same as the first iteration but included stakeholder input, and the third aligned the Round 3 zones with the territorial sea limits of the UK continental shelf. The Crown Estate responsible for these projects did not publish details of the methodology used, stating only the criteria and the scenarios used in the spatial siting process. Capital cost was taken into account in the above studies of offshore wind . The work indicated that the use of MCDM in offshore wind is rare as it is primarily used in onshore wind studies. Nevertheless, two different maps were created assuming all factors were of the same weight. A Decision Support System, which is an MCDM programme based on GIS tools, was used . Constraints were only used in the study of offshore wind farms in Petalioi Gulf in Greece . The work excluded all unsuitable areas in the Gulf using the classical simple Boolean Mask, and then estimated the total capacity as being around 250 MW using the available wind speed data. It was apparent that the old Boolean Mask technique was used due to the limited extent of the authors' study area, which makes the application of many criteria to locate one OWF difficult. Another study was conducted to measure offshore wind power around the Gulf of Thailand, using only four factors with no constraints . The authors used their own judgment to weight the factors, and then used ArcGIS to select the suitable location for their study area. The work is detailed with appropriate charts. However, using only factors without considering constraints is likely to affect the accuracy of results. Further research to produce a suitability map for offshore wind areas around the UK was also undertaken but was biased towards cost modelling . The analysis was mainly based on data obtained from the UK Crown Estate using specific Crown Estate restrictions, weights and scores. The difference between this study and other offshore wind siting studies is that the authors used the overall Levelised Cost of Energy equation to aggregate factors. They produced two maps, one for restrictions and another for factors. Offshore and onshore wind spatial planning can be based on similar techniques, particularly when considering the wind speed factor. However, these techniques differ in terms of the definitions of factors and constraints. For example, the main factors in onshore wind considerations are distance to roads and the proximity of farms to built-up areas, whereas for offshore wind the factors are water depth and wind speed, where the wind speed cube is proportional to power production. In most of the studies reviewed here, as well as others not included, the approach taken for wind farm spatial planning can be summarised as follows: (1) identify wind farm spatial characteristics and related criteria using AHP or similar techniques; (2) standardise the different factors using fuzzy membership or some own-derived judgment; (3) weight the relative importance of the various factors using pairwise comparison or similar methods; and (4) aggregate the different layers of factors and constraints using GIS tools and the WLC aggregation method. As mentioned earlier, AHP is a technique used to organise and create weighted criteria to solve complex problems. The first step of AHP is to define the problem and the branch of science it relates to, and then specify the different criteria involved. All the criteria should be specific, measurable, and accepted by the stakeholder/researcher or previously used successfully in the solution of similar problems. The next step in the analysis is to find the different relative importances of the factors. The final step is to evaluate all the potential solutions to the problem, and arrive at a solution by selecting the one with the highest score. In Fig. 1, we illustrate the whole AHP process used in this study. Two efficient ways to solve a multi-criteria problem were suggested by Ref. . The first is to study the problem and its characteristics, then arrive at specific conclusions through the different observations undertaken by the study. The second is to compare a specific problem with similar ones that have been solved previously. In this work, we have selected the second approach to define the criteria required, and each criterion could be a factor or a constraint. Due to its dominant applicability in spatial decision-making problems, the pairwise comparison method was chosen to find the relative importance. Furthermore, factors have different "measuring" and "objecting" units, so there is a need to unify all factors to the same scale. In order to standardise the processes, a continuous scale suggested by Ref. is used, from 0 to 1, with 0 for the least suitable measure and 1 for the most suitable, which is named "Non-Boolean Standardisation". The scale should be used with different fuzzy functions because not all the factors act linearly. In our AHP analysis, we used the Pairwise Comparison method to weight the different factors developed in Refs. . The intensity of importance definitions and scales used to indicate pairwise comparisons between factors are the same as those used in Ref. .
That is, starting with a score of 1 for a pair in which both factors are of equal importance and ending with 9 for a pair in which the first factor is of extreme importance compared to the other. The intensity of importance can be chosen using personal judgment, experience, or knowledge. The process is accomplished by building the pairwise comparison matrix, which is square, with the number of rows (and columns) equal to the number of factors. If the factor on the left side of the matrix has higher importance than the factor along the top, the relevant matrix cell takes the value assigned on the intensity-of-importance scale; in the opposite case, the cell takes the inverse of that scale value. A new normalised matrix can be created by taking the sum of every column and then dividing each matrix cell by its total column value. Finally, the weight of each factor is equal to the average of its row in the new matrix, and the sum of all factor weights equals one. The Consistency Ratio (CR) was suggested by Ref. to validate the pairwise comparison assumptions: any matrix with CR greater than 0.1 should be rectified. CR is given by CR = CI/RI, where RI is the Random Consistency Index, whose value depends on the number of factors n, with RI values adopted from Ref. . CI is the Consistency Index, given by CI = (λmax − n)/(n − 1), where λmax is the principal eigenvalue, estimated as the sum over the factors of the product of each factor weight and the summation of its column in the pairwise matrix. WLC is used in parallel with Boolean overlay, in which Boolean relationships are applied to achieve a specific decision with "0 or 1" as the result value. In this work, a combination of the approaches discussed above was used. These provided the basis to develop the models in two software packages: Microsoft® EXCEL and ArcGIS.
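To make the weighting and consistency check above concrete, the short Python sketch below builds a small pairwise comparison matrix, derives the weights by column normalisation and row averaging, and then computes λmax, CI, RI and CR exactly as described. The 3 × 3 matrix shown is a hypothetical example for illustration only; the matrix actually used in this study is the 5 × 5 one reported in Tables 5 and 6.

```python
import numpy as np

# Hypothetical reciprocal pairwise comparison matrix for three factors,
# filled in using the 1-9 intensity-of-importance scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])
n = A.shape[0]

# Normalise each column and average across rows to obtain the weights.
col_sums = A.sum(axis=0)
weights = (A / col_sums).mean(axis=1)          # weights sum to 1

# Principal eigenvalue estimate, Consistency Index and Consistency Ratio.
lambda_max = float(col_sums @ weights)
CI = (lambda_max - n) / (n - 1)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}[n]    # standard random indices
CR = CI / RI

print("weights:", np.round(weights, 3))
print(f"lambda_max = {lambda_max:.3f}, CI = {CI:.3f}, CR = {CR:.3f}")
# CR < 0.1, so this example matrix would be accepted as consistent.
```

Running the same steps on the study's own five-factor matrix should reproduce the reported λmax = 5.16, CI = 0.039 and CR = 0.04, with RI = 1.12 for n = 5.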
After reviewing the information in the literature contained in Refs. , the identified and appropriate constraints for our methodology are as follows: Shipping Routes, Ports, Military Zones, Natural Parks, Cables and Pipe Lines, Fishing Areas, and Oil and Gas Extraction Areas. The factors are: Bathymetry, Soil Properties, Wind Intensity, Distance to Shore, and Distance to Grid. These are listed in Table 2, where more detailed definitions and limitations of the different factors and constraints used in this methodology are given. The analytical methods are Pairwise Comparison and WLC. In order to apply the above approach to offshore wind farm siting, we have identified Egypt as an appropriate case study. This is due to its unique location between two inner seas, its huge need for renewable energy, and wind data availability. Egypt has approximately 3000 km of coastal zones situated on the Mediterranean Sea and the Red Sea. Approximately 1150 km of the coast is located on the Mediterranean Sea, whilst 1200 km is bordered by the Red Sea, with 650 km of coast located on the Gulfs of Suez and Aqaba . According to the 2014 census, the population of Egypt was estimated to be around 90 million, 97% of which live permanently in 5.3% of the land mass area of Egypt . Egypt's electricity consumption is increasing by around 6% annually . Within a five-year period, consumption increased by 33.7%, which was delivered by a 27% increase in capacity for a population increase of only 12% during this period. Until 1990, Egypt had the ability to produce all its electricity needs from its own fossil fuel and hydropower plants. However, in recent years, due to a combination of population increase and industrial growth, the gap between production and consumption has widened greatly . The Egyptian New and Renewable Energy Authority (NREA) estimates that energy consumption will double by 2022 due to population increase and development . The Egyptian government is currently subsidising the energy supply system to make electricity affordable to the mostly poor population . Such subsidies create an additional burden on the over-stretched Egyptian economy. The budget deficit in 2014/2015 was 10% of GDP, accompanied by a "high unemployment rate, a high poverty rate, and a low standard of living" . The projected increase in energy consumption will undoubtedly lead to more pollution such as CO2 emissions, as the increased capacity will be derived from greater consumption of fossil fuels. Wind and solar resources are the main and most plentiful types of renewable energy in Egypt. In 2006, in order to persuade both the public and private sectors to invest in renewable energy, NREA conducted a study which emphasised that Egypt is a suitable place for wind, solar, and biomass energy projects . The study urged the Egyptian government to start building wind farms, and the private sector to develop smaller projects to generate solar and biomass energy . Egypt aims to produce 20% of its electricity needs from renewable energy by 2020, with approximately 12% derived from wind energy. Onshore wind currently supplies only 1.8% of Egyptian electrical power and there are no offshore wind farms. Without more and urgent investment in wind energy, there is an increased possibility that the 2020 target will be pushed back to 2027 . The first action taken by the Egyptian government towards generating electricity from wind was the creation of the Egyptian Wind Atlas . This was followed by installation of the Za'afarana onshore wind farm, which has 700 turbines and a total capacity of 545 MW. The monthly average wind speed at
this farm is in the range 5–9 m/s.The government is now planning to develop three more wind farms at different sites in Egypt .As indicated earlier, all wind energy in Egypt is currently generated onshore, and the emphasis now is to scale up capacity from the current 1 GW–7.5 GW by 2027 , by going offshore where the wind resource is much higher.Due to its geographical location, Egypt has one of the largest offshore wind potentials in the world .The Red Sea region has the best wind resource, with a mean power density, at 50 m height, in the range 300–800 W/m2, at mean wind speed of 6–10 m/s."Egypt's offshore wind potential in the Mediterranean Sea is estimate to be around 13 GW.For a relatively small land footprint, this resource is large when compared with, for example, the estimated total offshore wind resource for much larger countries such as the USA with 54 GW potential .As previously indicated, Egypt is currently experiencing serious electricity shortages due to ever-increasing consumption and the lack of available generation capacity to cope with demand .In many instances, power blackouts occur many times a day .In order to cope with the demand and provide sustainable energy, the Egyptian government embarked on a programme to produce electrical power from onshore wind.To date, however, only a small number of wind farms are in production with a total capacity of around 1 GW, which is not sufficient to support the ever-increasing demand .In order to alleviate power shortages, offshore wind can play a major part in this respect as the resource is vast and its exploitation will increase capacity thereby alleviating shortages.Investment in offshore wind will also benefit economic development of the country and will reduce pressure on land areas where wind speeds are high but are of greater commercial importance for recreation and tourism ."To date and to the authors' knowledge, there have been no detailed studies conducted to explore offshore wind energy potential in Egypt.The only available literature consists of few studies on onshore wind.For example, the economic and the environmental impact of wind farms was assessed by Ref. using a Cost-Benefits Ratio."It concluded that Za'afarana along the Red Sea coast was the most suitable site in Egypt for onshore wind farms.A “road map for renewable energy research and development in Egypt” was produced by Ref. 
, which emphasised that wind energy is the most suitable renewable energy source for Egypt particularly for technology positioning and market attractiveness.The first survey to assess the wind energy potential in Egypt used 20-year old data from 15 different locations to estimate the wind energy density at 25 m height and the mean wind power density .It estimated the magnitude of the wind energy density to be in the range 31–500 kWh/m2/year and the power density in the range of 30–467 W/m2.The study concluded that the Red Sea and the Mediterranean Sea, plus some interior locations were the most suitable locations for onshore wind farms.Many studies presented a set of analyses that covered the land areas adjacent to the Red Sea and the Mediterranean Sea coasts , as well as some interior locations around Cairo and Upper Egypt .Small size wind farms are a suitable solution for the isolated communities in the Red Sea coast and 1 MW capacity farms are appropriate for the northern Red Sea coast area which could be linked to the Egyptian Unified Power Network .The “Wind Atlas of Egypt”, which took nearly 8 years to complete, was the only institutional effort .As can be seen from the above, there is a gap in addressing the wind renewable energy resource in Egypt, and especially offshore wind.Our additional aim is to address this gap through systematic analysis based on well-understood approaches developed for other global sites.In order to establish a pathway for exploiting the offshore wind resource, systematic analysis for such exploitation is needed and this is one of the reasons Egypt was used as the case study for the methodology.Additionally, the work will also address the paucity of knowledge on offshore wind energy in Egypt.In order to test our proposed methodology outlined above, the analysis for the case study will investigate the wind energy potential, specify appropriate locations of high resources with no imposed restrictions and generate suitability maps for offshore wind energy exploitation.It will also identify the needed criteria that govern offshore wind farms spatial siting.In accordance with Egyptian conditions, all criteria that affect the cost will be considered, in addition to two added environmental restrictions.All criteria in Table 2 are included except for fishing areas constraints.According to Egyptian law 124 the allowed depth for large fishing vessels is more than 70 m .In addition, fishing using simple techniques noted unlikely to interfere with offshore-submerged cables .Hence fishing activities around Egypt will have no effect on OWF locations which will operate at maximum depths of 60 m.A map layer in ArcGIS was created for each the criterion using the available and relevant spatial data.Wind power data was derived from the “Wind Atlas for Egypt” .A shape file of the land cover of Egypt was created and was used as a base map.To represent the wind power map as a layer in the ArcGIS, the Georeferencing Tool using geographical control points was used to produce a map image.The power density areas contours were then entered as a shape features.Finally, the shape feature was converted to a raster file with cell size = km, the Geographic Coordinate System used was “GCS_WGS_1984”,.The bathymetry data for both the Red and Mediterranean Seas was adopted from the British Oceanographic Data Centre .Fig. 
4 shows the topography of Egypt raster map in meters.Later on, we will apply a Boolean mask to eliminate levels above −5 m.In Egypt, tunnels exist only in Cairo, and beneath the Suez Canal, according to the National Authority for Tunnels .Therefore there is no need to account for tunnel data in the sea.Undersea cables locations were extracted from the Submarine Cable Map web .Fig. 5 shows the raster map for these cables and additionally depicts other parameter determined by the analysis.In Egypt, Law number 20 , permits the establishment of offshore structures in areas preserved for future excavation or mining but for safety reasons, a restricted buffer zone of 1 km was created around present and future offshore oil and gas wells.Data for these areas and restrictions adopted from Ref. are shown in Fig. 5.Under Law number 102 , 30% of the Egyptian footprint, encompassing 30 regions, were declared as Nature Reserves.The Sea Marine Nature Reserves represent nine such areas with seven located in the Red Sea, and the others located in the Mediterranean Sea.The locations and dimensions of the reserves were established from the official web site of the Egyptian Environmental Affairs Agency .The data processing was conducted in the same way as the wind power density layer and the resultant raster map is shown in Fig. 5.The shipping routes around Egypt were determined from the ship density maps of marine traffic for the period 2013-14 .Ports and approach channels areas were identified from the Marine Traffic and Maritime Transport Authority of Egypt .The GIS representation of these areas is shown Fig. 5.The Egyptian power network from which it is possible to identify appropriate grid connections to high wind resource sites .The Euclidean Distance Tool in ArcGIS was used to estimate the distance from each electricity grid line to the sites and the raster layer results are in Fig. 6.To ascertain and assess the proximity of military exercise areas to the high wind resource regions, data used was gathered from the official website of Ministry of Defence and Military Production and these areas were excluded from our analysis as shown in the layers in Fig. 5.The coastline of Egypt was drawn to calculate the distance from the sea to the shoreline, applying the Euclidean Distance Tool, and the results are shown in Fig. 7.In terms of ground conditions, most of the seabed adjacent to the Egyptian coast has a medium to coarse sandy soil .Hence, all cells in the modelling within the sea will have a score of 1.In our analysis, the factor spatial layer was ranked using the scale from Ref. , assuming that wind power density has the same importance as the total cost of the project and as such was given the same weight.The other factors comprise the major items for calculating the total cost.Data obtained from Ref. 
was used to compare factors and to identify the different elements of the costs of an OWF. It should be noted that the cost values were estimated as an average of the costs of offshore wind projects in the UK spanning the period 2010 to 2015 . Table 4 gives the cost of the various components of a wind farm, including turbine foundation and cabling costs, and their percentage of the total cost . Table 5 shows the "intensity of importance" scores assigned to identify the relative impact between pairs of factors. For example, the wind power density factor was assigned a score of 3 against the depth factor, which accordingly takes the reciprocal value of 1/3 in line with the pairwise comparison rules, as the latter represents about one third of the total project cost. The comparison matrix of the calculated factors' weights is given in Table 6. This was determined by dividing each cell in Table 5 by the sum of its column. The values of CI, RI, λmax and CR were found to be 0.039, 1.12, 5.16 and 0.04 respectively; the CR value is much less than 0.10, indicating that the assumptions and the calculations are valid. A fuzzy function describes the relationship between an increase in a factor's magnitude and the corresponding appreciation or reduction in overall cost. Such an assessment also depends on experience and knowledge about the factors. The data from Ref. indicate that the relationships for the major factors are linear. The Fuzzy Membership tool in ArcGIS was applied to produce a new standardised layer for each factor. Table 7 shows the membership type and limitations for each factor, adapted from Table 2. Some factors needed an additional Boolean mask to enforce their limits; these were distance to shore and water depth beyond their maximum allowed values. A Boolean mask was created to exclude the restricted cells by assigning them a value of 0 or 1, adopted from the constraints shown in Table 2. Finally, all these constraints were gathered in one layer using the Raster Calculator tool, as shown in Fig. 8. All criteria were then aggregated to create the suitability map of OWF in Egypt. The WLC equation was used to conduct the aggregation: the standardised layers were first multiplied by their weights and then summed together using the Weighted Sum tool in ArcGIS. Finally, the Weighted Sum layer was multiplied by the Boolean Mask layer using the Raster Calculator tool. The final Suitability Map layer is shown in Fig. 9.
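The aggregation step described above — linear fuzzy standardisation of each factor, multiplication by the pairwise-comparison weights, summation, and masking by the Boolean constraint layer — can be sketched in a few lines of Python with NumPy. The raster arrays, factor limits and weights below are placeholders for illustration only; in the study itself this step was carried out with the Fuzzy Membership, Weighted Sum and Raster Calculator tools in ArcGIS.

```python
import numpy as np

def fuzzy_linear(layer, best, worst):
    """Linearly rescale a raster layer to [0, 1].

    'best' maps to 1 and 'worst' maps to 0, so the same function handles
    increasing factors (e.g. wind power density) and decreasing factors
    (e.g. water depth, distance to grid) by swapping the arguments.
    """
    scaled = (layer - worst) / (best - worst)
    return np.clip(scaled, 0.0, 1.0)

# --- Hypothetical 3 x 3 rasters standing in for the real GIS layers ---
wind_power = np.array([[250., 450., 700.],
                       [300., 600., 800.],
                       [150., 500., 750.]])      # W/m2
water_depth = np.array([[10., 25., 55.],
                        [15., 40., 70.],
                        [5.,  30., 60.]])        # m
dist_grid = np.array([[5., 20., 60.],
                      [10., 35., 80.],
                      [8., 25., 50.]])           # km

# Boolean constraint mask: 1 = allowed, 0 = excluded (e.g. shipping route)
mask = np.array([[1, 1, 1],
                 [1, 1, 0],
                 [1, 1, 1]])

# Placeholder weights (they must sum to 1); the study derived its weights
# from the pairwise comparison matrix in Tables 5 and 6.
weights = {"wind": 0.55, "depth": 0.30, "grid": 0.15}

standardised = {
    "wind":  fuzzy_linear(wind_power, best=800., worst=150.),
    "depth": fuzzy_linear(water_depth, best=5.,  worst=60.),   # shallower is better
    "grid":  fuzzy_linear(dist_grid,  best=5.,  worst=80.),    # closer is better
}

# Weighted Linear Combination followed by the Boolean constraint mask
suitability = sum(weights[k] * standardised[k] for k in weights) * mask
print(np.round(suitability, 2))
```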
In our analysis, an area with a value of 1 was found inland in the Boolean mask layer, circled in red. The reason for this conflict is that the area concerned has an elevation below −5 m relative to mean sea level and corresponds to the Qattara Depression, located in the north west of Egypt. This is "the largest and deepest of the undrained natural depressions in the Sahara Desert", with its lowest point at 134 m below mean sea level . Identifying such areas gives further confidence in the robustness of the analysis, and these points were excluded from the suitability map. The total number of high suitability cells for OWF is approximately 3200, representing about 2050 km2, while the moderate suitability area is approximately 21650 cells, representing about 13860 km2. These numbers are promising when compared with, for example, the 122 km2 of the world's largest OWF, the London Array, which has a capacity of 630 MW . The areas that are unsuitable for OWF amount to 16403 km2. In our work, the cell dimensions are 800 m by 800 m, which represents an area of 0.64 km2. Zooming into specific areas to obtain a finer grain of suitable locations in the suitability map given in Fig. 10, we arrive at the most suitable locations for OWF in Egypt, which are shown circled. Locations 1 and 2 are in Egyptian territorial waters, while location 3 is situated between Egypt and Saudi Arabia. Locations 1, 2 and 3 contain 1092, 2137 and 969 km2 of highly suitable area for OWF, respectively. In order to estimate the potential wind energy capacity of these areas, we use the method described in Refs. , which estimates the effective footprint per turbine using the expression: Array Spacing = D² × downwind spacing factor × crosswind spacing factor, where D is the rotor diameter. However, we adopted the E.ON data for turbine spacing of 5–8 times the rotor diameter and used an average wind speed of 10 m/s . Hence, for a 5 MW turbine with a 126 m rotor diameter, one square kilometre of the chosen areas would yield ∼7.9 MW of installed capacity. Following these considerations, Table 9 gives our estimated power for the three locations shown in Fig. 10. The total wind power capacity of all these sites is around 33 GW. From the final suitability map, it is clear that most of the high suitability cells are concentrated in areas that have wind power density > 600 W/m2, which reflects the strong influence of the wind power criterion on the cells' ranks. This is reasonable because wind power has a relative importance of more than 50%. The second factor is water depth, which has a 24% share of the total weight, and this explains the long, wide area of moderate suitability that can be seen adjacent to the northern coast of Egypt. Despite an average mean power density of less than 200 W/m2 in these areas, their seabed slope is mild, approximately less than 1:800, for more than 50 km from the shore . A new methodology to model and identify suitable areas for offshore wind sites has been introduced, which addresses a gap in knowledge in the offshore wind energy field. The methodology can be easily utilised in other regions by applying the four steps summarised in Section 3 and the process depicted in Fig. 1.
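A quick back-of-the-envelope check of the capacity figures quoted above can be scripted as follows. The 8D downwind by 5D crosswind spacing is an assumption chosen from the quoted 5–8 rotor-diameter range so that the footprint formula reproduces the ~7.9 MW per square kilometre used in the study; the area figures are those reported for the three locations.

```python
# Rough reproduction of the capacity estimate described above.
rated_power_mw = 5.0          # 5 MW reference turbine
rotor_d_m = 126.0             # rotor diameter in metres
downwind_factor = 8           # assumed, from the 5-8D spacing range
crosswind_factor = 5          # assumed, from the 5-8D spacing range

# Footprint per turbine: D^2 x downwind factor x crosswind factor
footprint_km2 = (rotor_d_m ** 2) * downwind_factor * crosswind_factor / 1e6
density_mw_per_km2 = rated_power_mw / footprint_km2
print(f"Installed-capacity density: {density_mw_per_km2:.1f} MW/km2")  # ~7.9

# High-suitability areas reported for the three locations (km2)
areas_km2 = {"Location 1": 1092, "Location 2": 2137, "Location 3": 969}
total_gw = sum(a * density_mw_per_km2 for a in areas_km2.values()) / 1000
print(f"Total potential capacity: {total_gw:.0f} GW")  # ~33 GW
```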
There are some assumptions, requirements, and limitations related to the proposed methodology. The model is intended for national or regional scales, which require broad knowledge and extensive data. The model was built on the assumption that cost-related criteria carry higher weights than those assigned to the environmental aspects of the site to be exploited. The approach presented was successful in providing a suitability map for offshore wind energy in Egypt. The applied model is capable of dealing with the conflicting criteria that govern the spatial planning of offshore wind farms. The spatial analysis was undertaken at a medium resolution, which is constrained by the cell size of the available bathymetry data. Five factors and seven constraints were applied using MCDM and GIS models for Egypt as the case study area. The analysis was conducted at large scale, covering the whole of Egypt and its surrounding waters, and hence has implications for renewable energy policies in Egypt and, to some extent, Saudi Arabia. The study transcends the different conditions present in two seas – the Red Sea and the Mediterranean Sea – and hence is of wider applicability in these regions. To our knowledge, no detailed studies, either onshore or offshore, have considered such a footprint or provided a spatial planning examination of appropriate sites. The final results indicate that Egypt could potentially benefit from around 33 GW, achieved by considering installations only at the high suitability offshore wind sites available. This significant amount of green renewable energy could provide a solution to the electricity shortage in Egypt; furthermore, the offshore wind solution has no effect on the important tourist resort lands around the chosen sites. This outcome confirms the huge offshore wind energy potential in Egypt. In addition, as the fuel for wind electrical power production is free, exploitation of offshore wind could positively contribute to the country's gross domestic product and budget balance, reducing dependence on imported fuels whilst providing a cleaner and more sustainable approach to electricity production in Egypt. Nevertheless, a coherent policy coupled with capacity building would be needed to allow such exploitation to occur. This case study provides the evidence needed to establish an appropriate programme to exploit the offshore wind energy resource in Egypt and will contribute to the country's energy mix so that it can cope with ever-increasing energy demand. In essence, the work presented here not only plugs a knowledge gap but also provides a realistic evaluation of the Egyptian offshore wind potential, which can form the basis of the blueprint needed for developing the appropriate policies for its exploitation. The vast commercial experience that already exists in offshore wind is more than likely to take an interest in such a resource and can be marshalled to support its exploitation in Egypt. Lastly, the scope and methodology of this study addressed a knowledge gap in the development of renewable energy systems, particularly that of offshore wind. The methodology used here provides a robust offshore spatial siting analysis that can be applied in different locations around the world.
Offshore wind energy is highlighted as one of the most important resource to exploit due to greater wind intensity and minimal visual impacts compared with onshore wind. Currently there is a lack of accurate assessment of offshore wind energy potential at global sites. A new methodology is proposed addressing this gap which has global applicability for offshore wind energy exploitation. It is based on Analytical Hierarchy Process and pairwise comparison methods linked to site spatial assessment in a geographical information system. The method is applied to Egypt, which currently plan to scale renewable energy capacity from 1 GW to 7.5 GW by 2020, likely to be through offshore wind. We introduce the applicability of the spatial analysis, based on multi-criteria decision analysis providing accurate estimates of the offshore wind from suitable locations in Egypt. Three high wind suitable areas around the Red Sea were identified with minimum restrictions that can produce around 33 GW of wind power. Suitability maps are also included in the paper providing a blueprint for the development of wind farms in these sites. The developed methodology is generalised and is applicable globally to produce offshore wind suitability map for appropriate offshore wind locations.
794
Reconfiguration analysis of a 3-DOF parallel mechanism using Euler parameter quaternions and algebraic geometry method
Parallel mechanisms with multiple operation modes are a novel class of reconfigurable PMs which need fewer actuators and less time for changeover than the existing reconfigurable PMs.Several classes of PMs with multiple operation modes have been proposed in the past decade.A PM with multiple operation modes was first applied in a constant-velocity coupling to connect intersecting axes in one mode or parallel axes in another mode .Since DYMO – a PM with multiple operation modes – was proposed in , several new PMs with multiple operation modes have been proposed .How to switch a PM with multiple operation modes from one configuration to another requires reconfiguration analysis in order to fully understand all the operation modes that the PM has and the transition configurations between different operation modes.This requires solving polynomial equations with sets of positive dimensional solutions.Recent advances in algebraic geometry and numerical algebraic geometry as well as computer algebra systems provide effective tools to the reconfiguration analysis.This paper aims to fully investigate the operation modes of a 3-RER PM and the transition configurations to switch from one operation mode to another.Here, R and E denote revolute and planar joints respectively.In Section 2, Euler parameter quaternions will be classified based on the number of constant zero components.The kinematic interpretation of different cases of Euler parameter quaternions will be discussed.In Section 3, the description of a 3-RER PM with orthogonal platforms will be presented.In Section 4, by representing the position and orientation of the moving platform using the Cartesian coordinates of a point on the moving platform and a Euler parameter quaternion respectively, a set of kinematic equations of the 3-RER PM will be derived and then solved using the algebraic geometry method to obtain all the operation modes of the PM.The transition configurations among different operation modes will be obtained in Section 5.Finally, conclusions will be drawn.Euler parameter quaternions, which are more computationally efficient than the transformation matrices, have been used in kinematics, computer visualization and animation, and aircraft navigation .In this section, we will first recall the definition and operation of the Euler parameter quaternions and then discuss the classification and kinematic interpretation of the Euler parameter quaternions.Using Eq., we can identify the kinematic meaning of nine cases of Euler parameter quaternions directly.For example, the Euler parameter quaternion of case is q = i, which represents a half-turn rotation about the X-axis.The DOF of the above motion is 0 and the angular velocity is 0.The Euler parameter quaternion of case is q = e0 + e1i, which represents a rotation by 2atan about the X-axis.The DOF of the above motion is 1 and the axis of the angular velocity is the X-axis.The Euler parameter quaternion of case is q = e1i + e2j + e3k.Kinematically, it refers to a half-turn rotation about the axis u = T.Since the angle of rotation about the axis u = T is a constant, the component of the angular velocity along this axis is zero.The above motion can therefore be called a 2-DOF zero-torsion-rate rotation.It is noted that such a rotation is called a zero-torsion rotation in .However, the torsion angle under the above motion is a constant and may not be zero depending on the mathematical representation of rotation.For the remaining six cases of Euler parameter quaternions, we cannot obtain 
explicitly the axis of angular velocity for a 1-DOF rotation and the axis along which the component of the angular velocity is zero for a 2-DOF rotation by using Eq. directly.By factoring each of these six cases of Euler parameter quaternions as the product of two cases of Euler parameter quaternions, one can identify their kinematic interpretation which reflects the motion characteristics using Eqs. and.Equation shows that q = e2j + e3k represents kinematically a half-turn rotation about the Y-axis followed by a rotation by 2atan about the X-axis.The axis of the angular velocity associated with q is the X-axis.Eq. shows that q = e0 + e2j + e3k represents kinematically a half-turn rotation about the X-axis followed by a half-turn rotation about the axis u = T.The component of the angular velocity associated with q along the axis u = T is zero.The kinematic interpretation of all the 15 cases of Euler parameter quaternions is given in Table 1.As it will be shown in Section 4, each case of Euler parameter quaternions is associated with one operation mode of the 3-RER PM studied in this paper.A 3-DOF 3-RER PM with orthogonal platforms is composed of a moving platform connected to the base by three RER legs.The axes of the three R joints on the moving platform are orthogonal and have a common point.Each leg is a serial kinematic chain composed of R, E and R joints in sequence in such a way that the axes of these two R joints are always coplanar.An E joint can be any planar kinematic chain such as RRR and RPR kinematic chains.Here P denotes a prismatic joint.Fig. 3 shows a configuration of the RER leg in which the structure characteristics of the leg can be easily observed: the axes of R joints 1 and 5 are collinear, the axes of the remaining three R joints within the E joint are perpendicular to the axes of R joints 1 and 5 and).The special RER leg also satisfies the following conditions: Two pairs of R joints, joints 1 and 2 as well as joints 4 and 5, have perpendicular and intersecting joint axes, and links 2 and 3 have equal link lengths).In addition, the axis of joint 2 of each special RER leg in the 3-RER PM passes through the intersection of the joint axes of three R joints on the base.As it will be shown later, the introduction of the special RER leg) facilitates the reconfiguration of the 3-RER PM from one operation mode to another manually and does not affect the operation modes of the moving platform.Links 1 and 4 in each leg are curved in order to avoid link interference during reconfiguration.Let O–XYZ and Op−XpYpZp denote the coordinate frames fixed on the base and the moving platform respectively.The X-, Y- and Z-axes are, respectively, along the axes of the three R joints on the base.The Xp-, Yp- and Zp-axes are, respectively, along the axes of the three R joints on the moving platform.The X- and Xp-axes, Y- and Yp-axes and Z- and Zp-axes are connected by legs 1, 2 and 3 respectively.Eqs. 
and are the equations for the reconfiguration analysis of the 3-RER PM.In the type synthesis of PMs with multiple operation modes , a PM with multiple operation modes is obtained in a transition configuration that the PM can transit between two operation modes.Therefore, there must be at least two sets of positive dimension solutions, each corresponding to one operation mode, to the set of constraint equations of the PM.In order to reconfigure the PM, one needs to fully understand all the operation modes and the transition configurations of the PM.The methods based on algebraic geometry or numerical algebraic geometry as well as computer algebra systems provide effective tools to find all the sets of positive dimension solutions to a set of constraint equations and to solve the above reconfiguration analysis problem.The operation mode analysis of the 3-RER PM can be carried out by the prime decomposition of the ideal associated with the set of constraint equations) for the 3-RER PM.All the operation modes satisfy Eq., which will be omitted in the representation of operation modes and transition configurations of the 3-RER PM for brevity reasons.Then, we can obtain 15 sets of positive-dimension solutions, each corresponding to one operation mode, that satisfy both Eqs. and.These 15 sets of positive-dimension solutions are listed in the third column in Table 2.With the aid of the classification and kinematic interpretation of Euler parameter quaternions, we can reveal the motion characteristics in each operation mode of the 3-RER PM with orthogonal platforms.It is apparent that the first equation in Eq. is in fact Case 11 of the Euler parameter quaternions.This means that the moving platform will undergo a half-turn rotation about the axis u = T.The last three equations in Eq. is in fact Op × u = 0, which means that the moving platform can translate along the direction u.The motion in this mode is called a 3-DOF zero-torsion-rate motion considering that the component of angular velocity of the moving platform along u is zero.Figure 4 shows the prototype of a 3-RER PM fabricated at Heriot-Watt University in the No. 11 operation mode.Please note that the extra holes on the base and moving platform are used for assembling PM models other than the 3-RER PM with orthogonal platforms.The above example shows that the classification and kinematic interpretation of Euler parameter quaternions provide an efficient way for revealing the motion characteristics of different operation modes of the 3-RER PM.Unlike the method presented in , there is no need to compute the rotation matrix and the associated eigenspace for each operation mode.A description of motion of the moving platform in each of the 15 modes has been obtained and given in Table 2.Figs. 5–8 show the configurations, each in one of the 15 operation modes, of the 3-RER PM.Most of the motion described in Table 2 in each operation mode can be verified readily using the results in the literature.In operation modes Nos. 1–4 and), the axes of joints 1 and 5 in each leg are parallel to each other, and axes of joints 2–4 in each leg are parallel.The PM in these operation modes satisfies the conditions for translational PMs .Therefore, the moving platform undergoes 3-DOF spatial translation.In operation modes Nos. 
5–10 and), the axes of joints 1 and 5 in one leg are parallel to the axes of joints 2, 3 and 4 in the other two legs.Since the PM in these operation modes satisfies the conditions for planar PMs , the moving platform undergoes 3-DOF planar motion.In operation mode No. 15, the axes of joints 1, 2, 4 and 5 in all the three legs have a common point for the PM with the special legs, and the axes of joints 1 and 5 in all the three legs have a common point while the axes of joints 2, 3 and 4 in each leg are parallel for the PM with three general legs.The moving platform can thus undergo 3-DOF spherical motion .It is noted that in the 3-RER PM with the special legs, the axes of joints 2 and 4 in each leg coincide in operation mode No. 15.Links 2 and 3 in each leg can rotate freely about the axes of joints 2 and 4 within the same leg.The total DOF of the PM in operation mode No. 15 is 6 although the moving platform has 3 DOFs.Transition configuration analysis is an important issue in the design and control of PMs with multiple operation modes.To find the transition configurations among nm operation modes is to solve the set of constraint equations composed of the nm sets of constraint equations associated with these operation modes.Due to space limitation, the transition configurations for cases nm = 2 and nm = 8 will be presented in this section.The above equation represents all the configurations obtained through a 2-DOF zero-torsion-rate rotation about the axis u = T.All the transition configurations between two operation modes have been obtained.Due to space limitation, only those involving operation modes Nos. 1, 5, 11 and 15 are given since these operation modes are of different types of motion.It is noted from Table 6 that the 3-RER PM can switch between the 3-DOF spherical rotation mode and any of the remaining 14 operation modes.By aligning the axes of joints 2 and 4 in each leg of the 3-RER PM, one can easily switch the PM from any of the 14 operation modes, operation modes 1–14, to operational mode No. 15 manually.It is not straightforward to switch the PM from an operation mode to operation mode No. 15 manually if a general RER leg) is used since it is not apparent when the axes of joints 1 and 5 in all the legs intersect at one point.That is why we use the special legs in the prototype of the 3-RER PM.Starting from the transition configurations between two operation modes that we have obtained in Subsection 5.1, we can further find the transition configurations in which the 3-RER PM can switch among more than two operation modes.Equation indicates that T is the reference configuration as shown in Fig. 9.Similarly, three more transition configurations through which the 3-RER PM can switch among eight different operation modes are identified as listed in Table 7 and shown in Fig. 
9.It is noted that in the above four transition configurations of a 3-RER PM, the axes of joints 1 and 5 in each leg coincide.Each leg can rotate about the axes of joints 1 and 5.The DOF of the moving platform varies from 0 to 1 with the configuration of the legs.Therefore, the PM has 3 to 4 DOFs in total in these configurations.For a 3-RER PM with special legs in these transition configurations, the axes of joints 2 and 4 in each leg also coincide, and links 2 and 3 of a leg can rotate about the axes of joints 2 and 4.Therefore, the 3-RER PM with special legs has 6 to 7 DOFs in total in these transition configurations.The Euler parameter quaternions have been classified into 15 cases based on the number of constant zero components and the kinematic interpretation of different cases of Euler parameter quaternions has been presented.The results have been used to the reconfiguration analysis of a 3-DOF 3-RER parallel mechanism with orthogonal platforms.It has been found that the parallel mechanism has 15 3-DOF operation modes, including one spherical mode, four translational modes, six planar modes and four zero-torsion-rate motion modes.The transition configurations among different operation modes have been obtained.Four transition configurations have also been identified in which the parallel mechanism can switch among 8 operation modes.To the best knowledge of the author of this paper, this is the first report on the finding that a parallel mechanism can switch among 8 operation modes in a transition configuration.Although the 3-RER parallel mechanism with orthogonal platforms is not of practical industrial use due to the limited workspace in each operation mode, it is valuable for teaching robot kinematics since this PM covers four different types of 3-DOF motion and can also be used to demonstrate concepts of constraint singularity and kinematic singularity in different operation modes.A prototype has been fabricated and used in teaching robot kinematics at Heriot-Watt University.The classification and kinematic interpretation of Euler parameter quaternions presented in this paper may help further promote the application of Euler parameter quaternions in the analysis and design of parallel mechanisms, especially parallel mechanisms with multiple operation modes.
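To make the quaternion classification recalled in the conclusions concrete, the short sketch below enumerates the 15 zero-component patterns and recovers the axis and angle encoded by a unit Euler parameter quaternion. It is an illustration written for this text rather than the authors' code; the function names and the numerical example are assumptions, and only the standard Euler parameter relations e0 = cos(θ/2) and (e1, e2, e3) = sin(θ/2)u are used.

```python
# Minimal sketch (not the authors' code): classify Euler parameter quaternions
# q = e0 + e1*i + e2*j + e3*k by which components are identically zero, and
# recover the axis/angle of the rotation a given unit quaternion represents.
import math
from itertools import product

def rotation_from_quaternion(e0, e1, e2, e3, tol=1e-12):
    """Axis and angle of the rotation encoded by a unit Euler parameter quaternion."""
    norm_v = math.sqrt(e1 * e1 + e2 * e2 + e3 * e3)
    angle = 2.0 * math.atan2(norm_v, e0)              # rotation angle in [0, 2*pi]
    axis = None if norm_v < tol else (e1 / norm_v, e2 / norm_v, e3 / norm_v)
    return angle, axis

def rotational_dof(zero_pattern):
    """zero_pattern: four booleans, True where e_k is identically zero in a case.
    Because e0^2 + e1^2 + e2^2 + e3^2 = 1, the rotational DOF of the case is
    (number of free components) - 1."""
    return (4 - sum(zero_pattern)) - 1

# The 15 non-trivial zero patterns (the all-zero pattern is excluded by the unit norm).
cases = [p for p in product([False, True], repeat=4) if sum(p) < 4]
assert len(cases) == 15

# Example: only e0 and e1 free -> 1-DOF rotation about the X-axis by 2*atan2(e1, e0).
print(rotational_dof((False, False, True, True)))      # 1
angle, axis = rotation_from_quaternion(math.cos(0.3), math.sin(0.3), 0.0, 0.0)
print(round(angle, 6), axis)                           # 0.6 (1.0, 0.0, 0.0)
```

For the pattern in which only e0 and e1 are free, the recovered axis is the X-axis and the angle is 2·atan2(e1, e0), consistent with the interpretation given for that case in Table 1.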
This paper deals with the reconfiguration analysis of a 3-DOF (degrees-of-freedom) parallel mechanism (PM) with multiple operation modes - a disassembly-free reconfigurable PM - using the Euler parameter quaternions and algebraic geometry approach. At first, Euler parameter quaternions are classified into 15 cases based on the number of constant zero components and the kinematic interpretation of different cases of Euler parameter quaternions is presented. A set of constraint equations of a 3-RER PM with orthogonal platforms is derived with the orientation of the moving platform represented using a Euler parameter quaternion and then solved using the algebraic geometry method. It is found that this 3-RER PM has 15 3-DOF operation modes, including four translational modes, six planar modes, four zero-torsion-rate motion modes and one spherical mode. The transition configurations, which are singular configurations, among different operation modes are also presented. Especially, the transition configurations in which the PM can switch among eight operation modes are revealed for the first time.
795
Computational processing of neural recordings from calcium imaging data
Calcium imaging is a technique for recording neural activity with calcium-dependent fluorescent sensors.It produces movies of neural activity at typical rates of 1–100 Hz .The optical nature of the technique makes it quite versatile: it can be used to record activity from thousands of neurons , to record from sub-cellular structures such as spines and boutons , and to track the same cells chronically over long periods of time .Imaging, as opposed to electrophysiology, enables precise spatial localization of cells in tissue, which is useful for analyzing the relationship between activity and cell location, for guiding patch pipettes to a cell, and for assigning cell type information unambiguously to the recorded cells .Precise spatial localization also enables post hoc characterizations in situ by immunohistology , cell-attached/whole-cell recordings , or electron-microscopy .In addition to these strengths, calcium imaging has its weaknesses, that can be partially addressed computationally.Its main disadvantage is that the recorded fluorescence is only an indirect measure of the neural spiking.The fluorescence of a cell imperfectly reflects the average activation of the calcium sensor, which in turn imperfectly reflects the average calcium concentration over the past several hundred milliseconds, which in turn imperfectly reflects the number of spikes fired by the cell over a few tens of milliseconds .Other disadvantages of the method include its sensitivity to brain motion and the contamination of the somatic signals with out-of-focus fluorescence, both of which can be partially corrected computationally.We review here the major components of a standard data processing pipeline for calcium imaging: motion registration, ROI extraction, spike deconvolution and quality control.For an in-depth review of the technical and biological aspects of calcium imaging, see .Brain motion during calcium imaging is unavoidable, even in anesthetized animals where the heartbeat induces motion."In awake animals, brain motion can be much larger due to the animal's motor behaviors.Motion in the Z-axis is usually ignored, because it leads to smaller changes in the signal compared to X/Y motion, owing to the shape of the two-photon point spread function: ∼8x more elongated in Z than in X/Y .Therefore, most methods only address 2D motion in the X/Y plane, but see the end of this section for a discussion of Z movements and 3D motion correction.The X/Y motion registration can either be rigid, usually a 2D translation , or non-rigid, which allows different parts of the image to translate by different amounts .The choice of rigid vs non-rigid 2D registration is often informed by the acquisition speed of a single frame.To understand why, we must first observe that lines in a frame are acquired sequentially .Thus, motion that happens during the acquisition of a single frame affects later lines in that frame, and therefore the motion appears as a non-rigid stretch or shear along the Y axis, even if the underlying physical motion is approximately rigid .In contrast, when a single frame is acquired quickly, the influence of motion across the frame will be relatively uniform, and a rigid correction method can give good results.Rigid motion is often estimated by computing the cross-correlation between an imaging frame and a pre-determined target image , but alternative methods exist such as algorithms based on particle tracking .In registration pipelines that use cross-correlation, the pre-determined target can be defined as 
the average over a random subset of frames.However, unless the frames are initially well-aligned, this averaging can blur the target image, which impairs registration because the blur has the spatial scale of the features used for registration.Alternatively, one can bootstrap the computation of the target frame by refining it iteratively from a subset of frames and considering only those frames that are well-aligned at each iteration .To increase the accuracy of rigid registration, it is also useful to pre-whiten the Fourier spectrum of the images, so that the low frequencies do not contribute disproportionately to the estimation.The resulting method is known as “phase correlation”; it is more accurate than cross-correlation and can also be used to compute sub-pixel motion estimates .Over longer periods of time, the imaged tissue may drift relative to the objective due to thermal fluctuations in various parts of the microscope, physiological changes in the brain, or postural adjustments of the animal .Large Z-drift may also be observed on fast timescales, if an animal is engaging in motor behaviors, such as running or licking.Online Z-correction during imaging may be used to mitigate these concerns , but such corrections are not widely used.To help users diagnose 3D movement in their own data, we illustrate a few typical situations.We recommend that future algorithms should diagnose Z-drift explicitly and try to correct it.Identifying cellular compartments from a movie is a non-trivial problem, which is compounded by the relatively unclear definition of what constitutes a “good” ROI.By one definition, a somatic ROI is good if it looks like a clearly defined “donut” in the mean image .Donut shapes are expected because genetically-encoded sensors are typically expressed in the cytoplasm of a cell and not in the nucleus .However, this criterion is clearly insufficient, because sparsely firing cells are not distinctly visible in the mean image.However, they are visible in other types of 2D maps, such as the map of average correlation of each pixel with its nearby pixels .The opposite is also true: many cells that are clearly visible in the average image do not appear in the correlation map, suggesting that their fluorescence only reflects the baseline calcium in these cells.These cells are either extremely sparsely firing or have unhealthy calcium dynamics, and they should be excluded from further analyses or treated carefully.Even if such cells do have sparse, low amplitude fluorescence transients, these signals are dominated by the nearby out-of-focus fluorescence which may obscure any somatic activity.For these reasons, most recent pipelines use activity-based criteria to segment cells, typically detecting ROIs from the activity correlations between pixels, using methods such as independent components analysis , matrix factorization , dictionary learning and others .With appropriate pre-processing, for example by dimensionality reduction or stochastic smoothing , all of these methods can in principle find the ROIs with significant activity in the field of view.However, these methods differ in their signal extraction procedure, specifically the way in which they address overlapping neural signals.“Demixing” procedures such as constrained non-negative matrix factorization aim to extract the activity of cells, given their approximate, overlapping spatial masks .This is a model-based procedure because it requires a generative model for the signals, such as non-negativity, sparseness, or even a 
full generative model of the calcium dynamics.An alternative approach is to simply ignore overlapping pixels for the purposes of signal extraction, in which case the cell traces can be extracted as simple pixel averages without making assumptions about the generative model of these signals .Demixing approaches are particularly problematic when there is mismatch between the data and the model.This is often the case in practice because our understanding of the data is incomplete.For example, out-of-focus fluorescence contributes substantially to the recorded signals due to tissue-induced optical, distortions and scattering .The neuropil reflects the averaged activity of a very large number of axons and other neurites .Although the presence of neuropil contamination is clear in recorded data, it is unclear to what degree it influences the signals at the soma.Some analyses suggest the contamination coefficient is relatively close to 1, similar to its value just outside the soma .In addition, the timecourse of the neuropil changes over spatial distances of a few hundred microns .Therefore, the neuropil contamination cannot be assumed to be constant over space, or one-dimensional, which some pipelines do by default .The motion of the tissue in the Z direction, which changes the shapes of the ROIs, is another source of model mismatch which we discuss in the next section.Algorithms that detect cells in calcium imaging movies by different activity-based strategies generally perform relatively similarly.However, once the cells are found, the signal extraction procedures vary considerably between these methods, with some of them potentially biased by their model-based assumptions.We recommend signal extraction approaches that assume the least about the underlying model; these approaches, in their simplicity, are easier to interpret and diagnose for potential crucial biases that will affect all subsequent analyses.Fluorescence is not a direct measure of spiking, in particular due to the slow kinetics of the calcium sensors currently available.The sensors activate quickly after an action potential but deactivate slowly.Thus, the activity traces resulting from this method appear to have been smoothed in time with an approximately exponential kernel.The traces can be “deconvolved” through an inversion process to approximately recover the spike times.Deconvolution is clearly a computational problem, and historically one of the first problems in calcium imaging to be addressed .We refer to “spike deconvolution” as any algorithm that transforms the raw calcium trace of a single neuron into a trace of estimated spike trains of the same temporal length .Typical results are illustrated in for a simple algorithm , which we will refer to as non-negative deconvolution.Because the spike trains are imperfectly reconstructed, the best estimates may be non-integer quantities, although the true spike trains are always discrete sequences of action potentials.An incorrect interpretation of deconvolved spike trains is that they represent the “firing rates” or “probabilities” of neural spiking; this is not true.Instead, they represent a time-localized and noisy estimate of the calcium influx into the cell.The calcium influx events may vary in size from spike to spike.Therefore, a deconvolution algorithm which estimates event amplitudes cannot perfectly reconstruct a spike train even when the exact spike times are known from simultaneous electrophysiology .This variability in spike-evoked amplitudes in turn places an upper 
bound on blind spike detection accuracy.The upper bound is saturated when performance is evaluated in bins of 320 ms or larger.In smaller bins, the deconvolution does not saturate the upper performance bound, which appears to be due to variability in the timing of the deconvolved spike times on the order of ≈100 ms.This timing variability may be either inherent in the data, or resulting from an imperfect deconvolution model.If the latter was true, then it would be possible to improve performance with more complex models.However, in a recent spike detection challenge, complex methods provided few advantages over simpler methods, even when evaluated in short bins of 40 ms .The top six methods were submitted by different teams, using a variety of approaches, yet they performed statistically indistinguishably from each other on “within-sample” data.On “out-of-sample” data, some of the complex methods performed worse than simple non-negative deconvolution, and some of them even introduced unwanted biases .Given the negligible improvements offered by complex methods and the potential biases they introduce, we advocate for using the simplest method available: non-negative deconvolution, with an exponential kernel of decay timescale appropriate to the sensor used, and implemented with the fast OASIS solver .Our analysis also suggests that identification of spike trains with better than 100 ms precision may not be achievable with the GCaMP6 sensors on single trials.We also note that spike deconvolution is not necessary and should be avoided when the raw fluorescence trace provides sufficient information.However, when timing information is important, the deconvolution significantly increases data quality, as measured by metrics of signal-to-noise ratio .Last but not least, quality control is an essential component of a data processing pipeline.It allows the user to scrutinize the results of an algorithm and correct any mistakes or biases.This process benefits from a well designed graphical interface, in combination with various metrics that can be computed to evaluate the success of registration, cell detection and spike deconvolution.Development in this area is lacking, suggesting an opportunity for the community to develop a common graphical interface and common metrics for visualizing and evaluating the results of multiple algorithms.Such a framework has emerged for spike sorting , which may serve as a template.In calcium imaging, commonly used metrics are simple anatomical measures and functional measures."However, other metrics may be more informative, such as the “isolation distance” of a cell's signal from the surrounding signals, similar to metrics used in spike sorting .This metric may be used as a control to ensure that the ROIs with special functional properties are not poorly isolated units that largely mirror the neuropil activity.We mention here several specialized applications of computational methods to specific types of imaging.For instance, one-photon miniscopes extend the calcium imaging method to freely moving animals .The one-photon acquisition collects much more out-of-focus fluorescence compared to two-photon imaging, necessitating specialized computational approaches .Recordings of voltage or glutamate sensors also typically employ one-photon imaging, but these sensors have relatively low sensitivity , making the spike detection highly noise limited.Advanced computational methods may in principle detect spikes more robustly in such low SNR recordings .Other computational 
problems arise for calcium imaging of non-somatic compartments, such as boutons and spines, axons and dendrites .These types of ROIs can usually be detected with standard activity-based methods, but benefit from more careful post-processing, for example in determining which neurites belong together as part of the same neuron, or in determining the contribution of the backpropagating action potential to the calcium levels in spines .Online segmentation during an experiment can also be useful, and some algorithms are specialized to process incoming data sequentially .Specialized methods to track the same cells across days have also been developed , as well as specialized techniques for subtracting the contribution of out-of-focus fluorescence .Finally, specialized demixing algorithms were developed for specific microscopes such as the stereoscopic “vTwins” , or the tomography-based “SLAPMi” .Here we reviewed current progress in calcium imaging data analysis, focusing on four processing steps: motion registration, ROI detection, spike deconvolution and quality control.These steps are highly modular; nonetheless, several software packages exist that provide full analysis pipelines .Of these four steps, quality control has received the least attention, even though it is of primary interest to experimental users.Motion registration has also been relatively ignored, particularly in the Z dimension where drift can change the functional signals over time, or even as a function of instantaneous motor behaviors .ROI detection and spike deconvolution have received the most attention, with several advanced methods producing high quality outputs.We propose that the systematic biases of these methods should now be scrutinized more than their absolute performance in benchmarks, because these biases can result in incorrect scientific interpretations.We have not focused here on computational efficiency, but we note that most recent pipelines have been designed to be relatively fast and efficient on typical fields of view containing 100–200 cells.Some pipelines are even fast for processing data from ∼10,000 simultaneously recorded cells on a consumer-level workstation .Instead, our main focus in this review has been on bias and interpretability.Having used and developed these algorithms in our own lab for our own data, we find ourselves often unwilling to use a complex method for potential small gains in performance, particularly if the method may introduce additional confounds, known or unknown.Simple yet effective and unbiased tools are badly needed, as evidenced by the slow adoption of existing tools, despite their intensive development.For example, only 11 out of 143 citations to the most popular software package actually used the pipeline, with another 12 using only the spike deconvolution step.On the other hand, 48 of these citations are from other computational methods papers, suggesting that the computational field is vibrant, and would benefit from guidance and input from experimental collaborators.
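As a concrete illustration of the registration step reviewed above, the sketch below estimates a whole-pixel rigid shift by phase correlation: the cross-power spectrum is normalised to unit magnitude so that low spatial frequencies do not dominate, and the shift is read off the correlation peak. It is a minimal NumPy example written for this summary, not code from any of the cited pipelines; real pipelines add windowing, spectral smoothing and sub-pixel refinement of the peak.

```python
# Minimal sketch of rigid motion estimation by phase correlation (NumPy only).
import numpy as np

def phase_correlation_shift(frame, target, eps=1e-6):
    """Whole-pixel (dy, dx) displacement of `frame` relative to `target`;
    shifting the frame by the opposite amount re-aligns it to the target."""
    cross_power = np.fft.fft2(frame) * np.conj(np.fft.fft2(target))
    cross_power /= np.abs(cross_power) + eps          # keep phase only ("pre-whitening")
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks beyond the midpoint correspond to negative shifts (circular wrap-around).
    return tuple(int(p - s) if p > s // 2 else int(p) for p, s in zip(peak, corr.shape))

# Toy usage: a noise image and a copy circularly shifted by (3, -5) pixels.
rng = np.random.default_rng(0)
target = rng.standard_normal((128, 128))
frame = np.roll(target, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(frame, target))         # (3, -5)
```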
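The deconvolution step can be illustrated just as minimally. The sketch below is not the OASIS solver recommended above; it simply poses non-negative deconvolution under an exponential kernel with per-frame decay g = exp(-1/(rate*tau)) as a non-negative least-squares problem, which is adequate for a short toy trace. The frame rate, decay time and simulated events are illustrative assumptions, and the output should be read as a noisy, time-localised estimate of calcium influx rather than a firing rate.

```python
# Minimal sketch of non-negative deconvolution under an exponential kernel.
import numpy as np
from scipy.optimize import nnls

def deconvolve_exponential(trace, tau_s, rate_hz):
    """Non-negative deconvolution of a single-cell trace under an exponential kernel."""
    trace = np.asarray(trace, dtype=float)
    t = len(trace)
    g = np.exp(-1.0 / (rate_hz * tau_s))              # per-frame decay of the sensor
    kernel = g ** np.arange(t)
    # Lower-triangular Toeplitz matrix whose k-th column is the kernel delayed by k frames.
    K = np.tril(np.column_stack([np.roll(kernel, k) for k in range(t)]))
    s, _ = nnls(K, trace)
    return s   # time-localised, noisy estimate of calcium influx (not a firing rate)

# Toy usage: two events of different amplitude, 30 Hz imaging, 1 s sensor decay time.
rate_hz, tau_s, t = 30.0, 1.0, 200
spikes = np.zeros(t)
spikes[[40, 120]] = [1.0, 2.0]
kernel = np.exp(-1.0 / (rate_hz * tau_s)) ** np.arange(t)
fluo = np.convolve(spikes, kernel)[:t] + 0.05 * np.random.default_rng(1).standard_normal(t)
est = deconvolve_exponential(fluo, tau_s, rate_hz)
print(np.argsort(est)[-2:])   # the two largest deconvolved events, expected at frames 40 and 120
```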
Electrophysiology has long been the workhorse of neuroscience, allowing scientists to record with millisecond precision the action potentials generated by neurons in vivo. Recently, calcium imaging of fluorescent indicators has emerged as a powerful alternative. This technique has its own strengths and weaknesses and unique data processing problems and interpretation confounds. Here we review the computational methods that convert raw calcium movies to estimates of single neuron spike times with minimal human supervision. By computationally addressing the weaknesses of calcium imaging, these methods hold the promise of significantly improving data quality. We also introduce a new metric to evaluate the output of these processing pipelines, which is based on the cluster isolation distance routinely used in electrophysiology.
796
Season of birth is associated with birth weight, pubertal timing, adult body size and educational attainment: A UK Biobank study
Several studies have reported associations between month, or season, of birth and risks of later life health outcomes.The most compelling associations to date appear to be those with immune-related disease , such as type 1 diabetes and multiple sclerosis .Other associations have been reported with diverse health outcomes, including cardiovascular disease , type 2 diabetes , psychiatric disorders and all-cause mortality .The most comprehensive assessment to date performed a “phenome-wide” scan in the health records of over 1.7 million US individuals, identifying 55 robust disease associations .Another large analysis of the Kadoorie Biobank study reported robust associations between season of birth and adult adiposity.That analysis of ∼500,000 participants from 10 geographically diverse areas of China highlighted increased adult BMI and waist circumference in individuals born in March–July, and shorter leg lengths for those born in February–August.Season of birth associations therefore provide direct support for the ‘fetal origins of adult disease hypothesis’ that intra-uterine exposures may have long-term impacts on later health .Various mechanisms have been suggested to underlie month of birth associations, including seasonal differences in maternal exposure to meteorological factors, air pollution, food supply, diet and physical activity .Marked seasonal changes have been reported in maternal circulating 25-hydroxyvitamin D levelsD) , which reflect sunshine exposure and directly influence fetal vitamin D exposure.Hence, newborn circulating 25-hydroxyvitamin D3 levels also vary markedly by season of birth, with almost two-fold higher levels in summer compared to winter births reported in a Danish population study .Under the hypothesis that season of birth associations are primarily driven by changes to circulating 25D, we prioritised a previously untested trait for month of birth effects – puberty timing.Age at menarche is a well-recalled measure of pubertal timing in girls and has been linked to vitamin D status in prospective and genetic studies.Furthermore, we extended these analyses to assess the role of birth weight, height and BMI as potential confounders/mediators of this association.In up to 452,399 white UK Biobank participants born in the UK and Ireland, we identify robust associations between season of birth and early life growth and development.The UK Biobank study design has been previously reported .Briefly, all people aged 40–69 years who were registered with the National Health Service and living up to ∼25 miles from one of the 22 study assessment centres were invited to participate in 2006–10.Overall, about 9.2 million invitations were mailed in order to recruit 503,325 participants.Extensive self-reported baseline data were collected by questionnaire, in addition to anthropometric assessments.For the current analysis, individuals of non-white ancestry or born outside of the United Kingdom and Republic of Ireland were excluded from analysis to reduce heterogeneity in maternal exposure.All participants provided informed written consent, the study was approved by the National Research Ethics Service Committee North West – Haydock, and all study procedures were performed in accordance with the World Medical Association Declaration of Helsinki ethical principles for medical research.Our primary exposure of interest was season of birth, which was based on month of birth recorded in all study participants by questionnaire.We categorised the month of birth into seasons, defined 
as Spring, Summer, Autumn and Winter."The primary outcomes of interest were the participants' birth weight, their height/BMI at recruitment, and among women, their age at menarche.Birth weight was recalled by questionnaire and reported in kilograms.Birth weight was treated both as a continuous quantitative trait and a case-control outcome, with weights below 1 Kg and above 6 Kg excluded from analysis.Low birth weight cases were defined as < 2.5 Kg, controls were all birth weights >= 2.5 Kg.Age at menarche in women was self-reported in whole years, and women with a reported age < 8 or > 19 were excluded as outliers.Early menarche was defined as 8–11 years inclusive.Body mass index and height in centimetres were measured at the assessment centre and treated as continuous outcomes, excluding individuals >4 SDs from the mean.A short stature case-control variable was additionally defined as the bottom 5% of individuals vs all others.We estimated maternal sunshine exposure using recorded data from the Met Office.For each individual, we calculated the cumulative hours of sunshine recorded for each month averaged across the UK in the 9 months preceding their birth month and the 3 months after.These were then grouped into four groups – three trimesters and the first 3 months after their birth month.Secondary analyses were performed across past or current diseases self-reported in response to the question “Has a doctor ever told you that you have had any of the following conditions?,”.To ensure good discrimination between medical conditions, the data were collected using a computer-assisted personal interview, administered by trained interviewers.To provide sufficient statistical power we considered only those diseases/outcomes with least 500 cases in either sex.In total, we considered 185 diseases or health outcomes which we previously defined and tested .This led to a conservative multiple testing corrected P-value of 6.8 × 10−5 for this untargeted analysis.Three educational attainment variables were created in response to the touch-screen questionnaire completed by participants.Individuals who responded “prefer not to answer” were set to missing.Individuals who held a college or university degree, the age at completion of full-time education, and thirdly individuals reporting no listed qualifications.Birth month and birth season variables were coded ‘1′ for the month/season of interest and ‘0′ for all others.Linear regression models were performed to test the association between birth season and each of our four primary outcomes.Prior to analysis, birth weight and estimated sunshine exposure were inverse-normally transformed to have a mean = 0 and SD = 1.Our significance threshold was set at P-value) = 0.003 to declare a birth season effect.To ascertain the shape of any resulting associations, we additionally repeated the analyses using individual birth months as the exposure.All models were adjusted for age, sex and socio economic position defined by 11 principal components explaining > 99% of the trait variance .Variables included in this PC construction included alcohol intake, educational attainment, participant and maternal smoking, household income, and Townsend index of material deprivation based on geographical location of residence.Analyses of educational attainment were adjusted only for age and sex.Low birth weight, in addition to other self-reported disease cases or adverse health outcomes were analysed in a logistic regression framework with the same covariates.255,769 individuals 
had a self-reported birth weight > 1 kg, 9.8% of whom reported low birth weight.Age at menarche between the ages of 8 and 19 years inclusive was self-reported in 238,014 women.Height and BMI measurements were available in 451,435 and 452,399 individuals, respectively after exclusions and covariate adjustments.Season of birth was associated with birth weight; each of the four seasons showed significant differences to the other 3 seasons.Effect estimates ranged from +0.05 SDs for autumn births to −0.05 SDs for winter births, with significant heterogeneity between sexes.Associations with month of birth varied continuously throughout the year, with a peak in September and a trough in February.Associations with the dichotomised trait, low birth weight, showed similar patterns.Individuals born in February were more likely to have low birth weight than those born in September, an effect which was significantly different between sexes.Season of birth was associated with reported age at menarche in women; each of the four seasons showed significant differences to the other 3 seasons.Effect estimates ranged from +0.11 years for summer births to −0.09 years for autumn births.Associations with month of birth varied continuously throughout the year, with a peak in July: +0.11 years, P = 1.4 × 10−21) and trough in September.At the monthly extremes, individuals born in September were ∼20% more likely to enter puberty early than those born in July, P = 7.3 × 10−15).These associations appeared independent of birth weight.Season of birth was associated with adult height, but not adult BMI.Effect estimates on adult height ranged from +0.12 cm for summer births to −0.13 cm for winter births.Peak month of birth differences were seen between June vs. December: +0.31 cm taller height and lower risk of short stature.Among women, adjustment for age at menarche and birth weight attenuated the association between winter births and shorter adult height, but did not attenuate the association between summer births and taller adult height, and augmented the association between autumn births and shorter adult height."To test the putative effects of antenatal sunshine exposure, we estimated each participant's sunshine exposure during each trimester of pregnancy using meteorological data on monthly total hours of sunshine in the UK, available from the UK Met Office.As expected, estimated sunshine exposure during the first trimester was strongly correlated with summer and winter births, second trimester with spring and autumn births and third trimester with summer and winter births.Assessment of the three traits with significant seasonal effects demonstrated estimated sunshine exposure associations concordant with the observed season of birth associations.For each trait, estimated sunshine exposure during the second trimester appeared most significant, with additional third trimester effects for birth weight and height, and first trimester associations for menarche.No association was observed with estimated sunshine exposure during the first 3 months after birth.To assess the potential impacts of the season of birth associations on later health and other outcomes, we systematically tested associations between season of birth and a broad range of 185 disease outcomes.After correction for multiple testing, no disease association was seen with any season of birth.We next assessed whether any of the covariates included in these models was associated with season of birth.As expected, age and sex were not associated with season 
of birth, however several principal components of socio-economic position were.These associations were driven by a primary effect of season of birth on educational attainment.Individuals born in autumn were more likely to continue in education post age 16 years.The pattern of association between month of birth and educational attainment differed strikingly to those with birth weight, age at menarche and adult height, with an abrupt contrast between individuals born in September vs. August.Furthermore, the difference between autumn vs. summer births was significantly larger in men than in women.In this large study of ∼500,000 UK individuals, we describe the most comprehensive assessment to date of the impact of birth season on childhood growth and physical development.In support of several other studies , we identify highly significant seasonal changes in birth weight.This was represented by higher birth weights for those born in autumn, alongside lower birth weights for those born in winter.Concordant effects were seen on the risk of low birth weight, and estimates were significantly larger in women than in men.Extensive evidence from randomised controlled trials support maternal 25 vitamin D as the causal mechanism.However, given the variability of results reported in other studies assessing seasonality and birth weight , it is likely that additional mechanisms specific to certain environments may also play a role, yet the physiological processes behind the resulting impact on birth weight remain unclear.Vitamin D is important for bone development and may act as a rate-limiting factor for growth.Seasonality in childhood growth has long been described.Humans, and also animals, show fastest growth in spring and summer and slowest growth in autumn and winter .Our findings extend this by demonstrating robustly for the first time an association between season of birth and puberty timing in girls.Although this association was independent of birth weight, the similar pattern of month of birth associations suggests a common mechanism.Although the possible mechanisms are more speculative, circulating levels of 25D in children have been prospectively linked to puberty timing .Furthermore, recent genetic studies have indicated potential aetiological roles for the vitamin D receptor and related nuclear hormone receptors in pubertal timing .Our observed season of birth effects on puberty timing partly explained our downstream association between season of birth with adult height, but did not attenuate the summer or autumn effects on height.The lack season of birth associations observed here for adult BMI appear discordant to those recently reported by the Kadoorie Biobank study , however in that paper the authors noted substantial variability by geographic region in China.In contrast, we saw no association between month of birth and BMI in a relatively smaller geographic area.In a small study higher newborn 25D3 levels were reportedly associated with higher risk of adult overweight , however our null finding for BMI is supported by the reported null association between genetically-predicted 25D levels and adult BMI .However, these findings collectively suggest that multiple mechanisms may mediate observed month of birth associations, some of which might be specific to geography and environment.This is further illustrated by the association between month of birth and educational attainment.This strong association, centred on the striking gap between August vs September births, is well documented and is 
explained by school entry policy.In the UK, school entry occurs annually in September; eligible children are those who reach school age by end of August.Hence, children born in September are almost one year older than their classmates born in August.This leads to variation in physical and academic performance within each school year.We demonstrate that this variation extends to the duration of full-time education and the likelihood of achieving qualifications.Collectively our findings support the existing season of birth literature, refining and extending the impact this has on childhood growth and development."This provides direct support for the ‘fetal origins of adult disease hypothesis' that intra-uterine exposures impact health outcomes many years later .Analyses of estimated sunshine exposure during maternal pregnancy indicated that the 2nd trimester was likely the key time for these exposures.Furthermore, the lack of association with estimated sunshine exposure during the first 3 postnatal months indicates that the effect is ‘programmed’ in utero.It remains unclear how these effects are programmed and what physiological mechanisms make them act years after the exposure.Through systematic assessment of almost 200 health/disease traits, we were able to eliminate any large effects of birth seasonality on many common diseases in the UK population.Due to the conservative multiple-test correction thresholds, it still remains possible however that season of birth may have modest effects on other previously unsuspected health outcomes, as seen in other populations .Month of birth is highly likely to be randomised to confounding factors, and resulting associations are not subject to reverse causality.These associations therefore represent causal, rather than correlative, relationships with effect sizes similar to genetic determinants identified for these traits .While genetic factors are unlikely to contribute to the current associations with birth month, future identification of possible genetic interactions with birth month may help to inform the mechanisms involved.The other strengths of our study include a large sample size of individuals without biased ascertainment for birth, alongside broad clinical phenotyping.Limitations of the study include no direct measurement of maternal/fetal vitamin D status to fully establish a causal mechanism.Self-reported variables may be inaccurate or subject to recall bias and no quantitative puberty measure was available in men.Previous studies have however noted accurate recall of birth weight and menarche age in later life, including assessment within UK Biobank .Information on parents’ socioeconomic status was not available.Similarly, no information was available on maternal residential location, individually-measured sunshine exposure, or vitamin D supplementation during pregnancy, however even the youngest participant in the UK Biobank was conceived during a time when gestational vitamin D supplementation was not recommended in the UK.We anticipate that all of these issues would impact the false-negative rate of our study, rather than the validity of our current findings.In summary, we provide robust evidence linking season of birth to childhood growth and development, in addition to confirming the known associations of timing of birth and educational attainment.While the associations between season of birth, or estimated antenatal sunshine exposure, with birth weight are consistent with experimental effects of in utero vitamin D exposure on fetal 
growth, differing patterns of seasonality and independent associations suggest that other mechanisms may link season of birth to adult height and also puberty timing in women.Future work should aim to better understand the mechanisms linking in utero exposures to outcomes years later in life.Felix Day, John Perry: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper.Ken Ong: Conceived and designed the experiments; Analyzed and interpreted the data; Wrote the paper.Nita Forouhi: Conceived and designed the experiments; Analyzed and interpreted the data.This work was supported by the Medical Research Council.The authors declare no conflict of interest.No additional information is available for this paper.This research has been conducted using the UK Biobank Resource.Data associated with this study is available via application to UK biobank at http://www.ukbiobank.ac.uk.
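To make the modelling approach explicit, the sketch below applies a rank-based inverse-normal transform to a simulated outcome and regresses it on a season-of-birth indicator with covariate adjustment. It uses simulated data and is not the authors' analysis code; the covariates shown are a simplified stand-in for the age, sex and socio-economic principal components described above, and the simulated effect sizes are arbitrary.

```python
# Minimal sketch (simulated data) of the season-of-birth regression approach.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm, rankdata

def inverse_normal_transform(x):
    """Rank-based inverse-normal transform to mean 0, SD 1 (one common variant)."""
    ranks = rankdata(x)
    return norm.ppf((ranks - 0.5) / len(x))

rng = np.random.default_rng(42)
n = 10_000
summer_birth = rng.integers(0, 2, n)              # 1 = born June-August, 0 = otherwise
age = rng.uniform(40, 70, n)
sex = rng.integers(0, 2, n)
ses_pc1 = rng.standard_normal(n)                  # stand-in for one socio-economic PC
# Simulated birth weight (kg) with a small positive summer effect, for illustration only.
birth_weight = 3.4 + 0.03 * summer_birth - 0.002 * age + 0.1 * sex + rng.normal(0, 0.5, n)

y = inverse_normal_transform(birth_weight)        # outcome in SD units
X = sm.add_constant(np.column_stack([summer_birth, age, sex, ses_pc1]))
fit = sm.OLS(y, X).fit()
print(fit.params[1], fit.pvalues[1])              # effect (in SDs) and P-value for summer birth
```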
Season of birth, a marker of in utero vitamin D exposure, has been associated with a wide range of health outcomes. Using a dataset of ∼450,000 participants from the UK Biobank study, we aimed to assess the impact of this seasonality on birth weight, age at menarche, adult height and body mass index (BMI). Birth weight, age at menarche and height, but not BMI, were highly significantly associated with season of birth. Individuals born in summer (June-July-August) had higher mean birth weight (P = 8 × 10−10), later pubertal development (P = 1.1 × 10−45) and taller adult height (P = 6.5 × 10−9) compared to those born in all other seasons. Concordantly, those born in winter (December-January-February) showed directionally opposite differences in these outcomes. A secondary comparison of the extreme differences between months revealed higher odds ratios [95% confidence intervals (CI)] for low birth weight in February vs. September (1.23 [1.15–1.32], P = 4.4 × 10−10), for early puberty in September vs. July (1.22 [1.16–1.28], P = 7.3 × 10−15) and for short stature in December vs. June (1.09 [1.03–1.17], P = 0.006). The above associations were also seen with total hours of sunshine during the second trimester, but not during the first three months after birth. Additional associations were observed with educational attainment; individuals born in autumn vs. summer were more likely to continue in education post age 16 years (P = 1.1 × 10−91) or attain a degree-level qualification (P = 4 × 10−7). However, unlike other outcomes, an abrupt difference was seen between those born in August vs. September, which flank the start of the school year. Our findings provide support for the ‘fetal programming’ hypothesis, refining and extending the impact that season of birth has on childhood growth and development. Whilst other mechanisms may contribute to these associations, these findings are consistent with a possible role of in utero vitamin D exposure.
797
Automated estimation of disease recurrence in head and neck cancer using routine healthcare data
Measuring clinical outcomes in cancer patients remains challenging.Studies often report results from single centres, from large national studies, or from clinical trials.Single centre studies often provide detailed data, but in small numbers of treated patients, while national studies have more patients but lack details of treatment and outcomes.Clinical trials may provide high-quality data, but in small groups of highly selected patients, and are expensive to perform.Therefore the challenge remains to report high-quality, detailed clinical outcomes in large numbers of patients, without relying on specialised data collection.Head and neck cancers represent a large, heterogeneous group of cancers with ∼460,000 cases worldwide .Treatment often involves an intensive combination of surgery, radiotherapy and chemotherapy and patients can suffer significant long-term side effects.Despite this, tumour recurrence rates remain high and survival rates are relatively poor.Similar treatments may be given at recurrence, depending on the pattern of recurrence and the patient's ability to tolerate further treatment, but are toxic and expensive.Therefore, assessment of treatment efficacy, through measuring recurrence rates and recurrence-free survival, is important for patients, clinicians and health services.The UK has had a dedicated national audit database for H&N cancer since 2004, which requires manual data entry.As a consequence, by 2012 only 11.5% of patients had data on recurrence.Over this time period, the scope and scale of routinely-collected electronic healthcare data has expanded, often driven by the requirement for data for payment systems.In the UK, there are a variety of data sources, including data on hospital admissions and procedures, radiotherapy, chemotherapy and deaths.Learning how to use these data to answer clinically-driven questions, in a timely, accurate and relevant fashion represents a significant challenge .A computerised method of accessing and interpreting such data can potentially yield useful clinical patterns and outcomes that are currently measured by national audits at considerable expense.We have previously described a pilot study presenting a method of estimating recurrence and survival in H&N cancer patients based on manual analysis of routine data in a small group of 20 patients .This work was based on capturing clinical intuitions about patterns of care and treatment in the form of simple rules about intervals between different treatment types.Since the routinely available data do not contain information on treatment intent, potentially curative treatments that were close to each other in time were assumed to be part of a planned pattern of sequential, curative treatments, whereas if there was a significant gap between initial curative treatment and subsequent treatment, that subsequent treatment was assumed to be for recurrent disease.However, the assumptions underlying this approach are unlikely to be uniformly correct, and have never been validated in a large patient group.In this study we:Extend the range of clinical intuitions that we capture in computational form,Introduce a novel computer-based automated framework, written in Python, to merge and interpret the routine data and automatically identify diagnosis and recurrence events/dates;Use that as a basis for performing simple in silico experiments to validate and optimise our approach in a larger patient sample.Patients from a single cancer centre were included if their first diagnosis of H&N cancer
was made between 2009–2012, they had radiotherapy given with radical intent, we were able to identify their date of diagnosis, they attended at least one follow-up visit at UCLH post-diagnosis, and they had an eligible histological subtype.The identification and exclusion of patients is shown in Fig. 1.We manually extracted data on patient characteristics, staging and treatment from hospital records.The date of diagnosis was the date that the first diagnostic specimen was obtained.The date of recurrence was the earliest date that recurrence was confirmed, on clinical, radiological or histological grounds.The date of last follow-up was the latest date of known contact with the hospital.The overall survival interval was the time between diagnosis and death or last follow-up.The progression-free survival interval was the time between diagnosis and evidence of recurrent or progressive disease.Patients who were lost to follow-up, or were alive at the conclusion of the study were censored at the time of last known follow-up.This provided a “Gold standard” dataset of manually curated and analysed data.We obtained local data sources for each patient from the Hospital Information Department, UCLH.These were:Personal demographic service: notification of death, and date of death if dead,Hospital episode statistics: records of start and end dates of admissions and inpatient procedures and diagnoses,Chemotherapy data: records of administration of chemotherapy and diagnoses and dates of treatments,Radiotherapy data: records of delivery of radiotherapy and diagnoses and dates of treatments.All data sources use the International Classification of Diseases v 10 for diagnostic information.HES uses the Office of Population and Census Classification of Interventions and Procedures v 4.ICD-10 is widely used internationally, and both have mappings to other terminologies such as SNOMED-CT.Our algorithm integrated the extracted HES, SACT, RTDS and PDS data to form a single list of event data for each patient, comprising diagnostic and interventional codes.The events were arranged in chronological order and filtered to ensure only treatments relevant to head and neck cancer were analysed.This information was used to inform the development of our algorithm.None of the routine data sources directly report either date of diagnosis or date of disease recurrence, and so we defined proxy time points for relevant clinical events.Automated strategies were used to identify these proxy time points and thus estimate survival.Date of diagnosis was taken to be the date of the first recorded ICD-10 code in HES that corresponded to a diagnosis of head and neck malignancy.Date of recurrence.We used the start date of secondary treatment for HNSCC as a proxy for recurrence.We initially defined a treatment as being for a recurrence if it was given more than 90 days after the end of the previous treatment if two treatments occurred within this time interval they were considered to form a planned primary treatment strategy, otherwise the later treatment was assumed to be treatment for recurrent disease.All routine data were extracted on a per-patient basis.Blank entries were removed.Radiotherapy and chemotherapy data were summarised and ordered chronologically, and used as the basis for estimating OS and PFS.For the purposes of this study, we compared the output of software with the manually curated gold standard dataset to allow us to assess the accuracy of the automated approach.We further optimised our approach through use of a 
time-based threshold to distinguish between a series of primary treatments, which may follow on from one another, and treatments given in response to disease recurrence.Our manual pilot study used a static time interval of 90 days between consecutive treatments to distinguish delayed primary treatment from treatment for recurrence.In this work, we explored systematic optimisation of this time interval via in silico experimentation.Since both diagnosis and recurrence are often preceded by coded diagnostic procedures, we investigated whether backdating diagnosis and recurrence would improve estimated OS and PFS.We backdated the date of diagnosis or recurrence to the earliest of:The date of the earliest diagnostic procedure within a set time interval of a treatment for H&N cancer; this time interval was initially set at 42 days, and then varied to find the optimal interval.The first date of metastatic disease.The start of radiotherapy for an H&N cancer.Since a longer backdating time interval risks inclusion of irrelevant diagnostic procedures, we attempted to find the shortest time interval consistent with the optimal agreement with the gold standard data.For each experiment, overall survival and progression-free survival intervals were calculated using the proxy diagnosis and recurrence time points as illustrated in Fig. 2.We assessed the impact of our experiments based on the agreement between the automated results and the gold standard data in three areas:Recurrence events.Overall survival interval; we found the survival intervals in the gold standard and the routine data set, and took the ratio.In line with other work a ratio of 0.8–1.2 was taken to represent reasonable agreement.Progression-free survival interval."Since we could not assume that correlation was normally distributed we used Kendall's tau to assess correlation using the statistical software, R .The routine data was supplied as a set of text files.We wrote our own software to integrate data from the different data sources, to automatically identify diagnosis and recurrence time points, to calculate survival intervals, and to determine the agreement between the automatic analysis and the gold standard dataset.These results were then exported as a text file for subsequent analysis in the open-source statistics software, R. Fig. 
3 shows the flow of data through the automated software.The software requires five sets of input data:A list of anonymised patient identifiers, used to allow us to link data from different sources,Routine data sources,Look-up tables to translate ICD10 and OPCS codes to text,A list of procedures that are not considered signs of recurrence,A list of diagnostic procedures.122 patients met the inclusion criteria.The median age was 61 and 82% of the patients presented with locally advanced disease.The tumour sites and staging data are presented in Table 1.40 of the patients developed recurrent or progressive disease.Overall survival was 88% at 1 year and 77% at 2 years.Progression-free survival was 75% and 66% at 1 and 2 years.Our initial, automated, unoptimised method provided: an estimated OS of 87% and 77% and PFS of 87% and 78%: 21 of the 40 patients with recurrent disease were correctly identified and 19 were missed; of the 82 patients who did not develop a recurrence, 77 were correctly identified and 5 were incorrectly identified.We investigated the impact of backdating the date of diagnosis and recurrence to the date of biopsy, radiotherapy treatment or date of presentation with metastatic disease.The results of this are summarised in Table 2.Backdating to the date of biopsy improved the number of patients in whom the dates of diagnosis and recurrence were in agreement with gold-standard data, as did backdating to the start of radiotherapy.Backdating to a diagnosis of metastatic disease had no impact, and we therefore disregarded it in further analysis.Combining backdating to biopsy or start of radiotherapy resulted in a modest improvement in performance above our initial technique, reducing the number of patients where there was significant error in OS and PFS interval by 57% and 8% respectively.However, none of the backdating strategies improved the identification of disease recurrence.Since backdating to biopsy and start of radiotherapy by up to 6 weeks improved performance modestly, we systematically varied the interval from 14 to 70 days, in 14 day increments.On the basis of this, a time interval of 56 days was the shortest interval with the best agreement with the gold standard data for both diagnosis and recurrence.Altering the time intervals used for backdating improved the agreement of dates further, but did not improve the number of patients in whom we detected recurrence.The results of optimal definition of the interval between one treatment and another to distinguish between planned adjuvant treatment and treatment for recurrent disease are displayed in Table 3.We aimed to minimise the incorrect classification of adjuvant treatment and to maximise correct identification of treatment for recurrent disease.A time interval of 120 days resulted in the fewest recurrence event disagreements and falsely identified recurrence events.This results in an optimal specificity of 97.6% and sensitivity of 52.5% for detecting recurrence events.Using a 120-day interval to differentiate between planned adjuvant treatment and treatment of recurrent disease also gave the best agreement between the gold standard and automated PFS intervals.Since this did not increase the number of recurrences missed and minimised the number incorrectly ascribed, this appears to be the optimal time-period in this group of patients.The best agreement between the automated analysis and manual analysis of the gold standard data came from:Backdating to a diagnostic event to no more than 56 days prior to the first 
evidence of a H&N cancer diagnosis code.Backdating to radiotherapy with no time limit.Assuming that treatment was for recurrent, rather than initial, disease if it occurred more than 120 days following the primary diagnosis.Table 4 displays the results of the final optimised automated strategy.We further stratified the results to see the influence on tumour type on successful identification of recurrence; results displayed in Table 5.We successfully predicted 78% of recurrences in laryngeal cancer, 54% in oropharynx- and 67% in oral cavity cancers.This is probably as laryngeal cancer tends to present at an earlier stage and therefore recurrences would be amenable to further radical treatment.This suggests that even though most of the patients in our sample had advanced disease, our method would be relatively robust if more early stage cancers were included.There was good correlation for OS and PFS between the two datasets as displayed in Fig. 7."For OS, Kendall's tau was 0.97, and for PFS, tau = 0.82.98% and 82% of patients showed good agreement between the automated technique and Gold standard dataset of OS and PFS respectively.61 diagnosis dates out of 122 were correct to the nearest week of the true date.The automated technique correctly assigned recurrence in 101 out of 122 of the patients.Of the 40 patients who developed a recurrence, 21 were identified by the automated technique.Of these, four dates were perfectly identified, seven were within 1 week and nine were within 1 month of the Gold standard dataset.The automated technique failed to detect 19 patients who developed a recurrence, and incorrectly inferred that two patients had developed a recurrence when they had not.We have automated and optimised a novel computer-based approach for estimating disease-recurrence in head and neck cancer through a series of experiments.An earlier pilot study suggested that our approach may have some validity.This study has confirmed its feasibility in a larger group of patients using an automated approach, and has documented the specific impact of a range of automatic backdating strategies on the performance of our approach.The final automated approach successfully identified recurrence status in 83% of patients, and 98% and 82% of patients showed good agreement between the automated technique and gold standard dataset of OS and PFS respectively.Our optimisation procedure showed small yet measurable improvements in estimating OS and PFS intervals over the unoptimised technique.The diagnosis date was generally later than that for the gold standard by a median of 2 days, and the estimated OS interval was more than 20% different from the gold standard OS interval for only three patients.Backdating to biopsy was found to improve diagnosis date agreement; without backdating to biopsy the routine diagnosis date preceded that for the Gold standard by a median of 5 days.However, backdating did not improve the proportion of patients who developed recurrent disease but were not detected by our approach.Of the 19 patients with recurrent disease who were missed, 16 of these failed cases were due to the patient not receiving treatment for their recurrence.One other was because the patient suffered from early recurrence 41 days after primary treatment and so a secondary treatment was assumed, following our automated technique, to be planned primary treatment.Another missed recurrence had an incorrect ICD10 code, and the final missed recurrence was because the surgical treatment of recurrence was miscoded as a 
reconstructive procedure. Two recurrences were falsely detected in the routine data: one due to a wrong diagnosis code being assigned to a radiotherapy treatment, and one due to a patient receiving treatment coded as chemotherapy one year after primary radiotherapy, which was therefore taken as an indication of recurrence by our automated approach. These results are summarised in Table 5. There are several limitations to this work. Firstly, we have only looked at patients from one cancer centre in the UK, and we have concentrated our work on patients at higher risk of recurrence, rather than using a random sample. It will be important to assess the performance of our system in an unselected population, particularly in those with a lower risk of recurrence. However, we chose to use data which are available at a national level in the UK, and are based on widely used standards, so that they could be reasonably easily replicated. In particular, we have not used clinically important data which are not currently reliably collected in routine care, such as cancer stage. In addition, we accept that our results may not generalise to other populations. Our technique has moderate performance. It is not clear what level of performance is necessary for the technique to be clinically applicable, and previous studies have not examined performance in detail. We suspect that even broad estimates may be useful, but there are also clear areas for development which are the focus of on-going work. Aside from the specific problems in this dataset, there are two main problems in using routine health-care data for clinical purposes. The first is that such data, often collected by staff who are not involved in direct clinical care, may not accurately reflect the clinical situation. In our dataset, this was the cause of three of the errors. In theory, these could be reduced by training and by increasing clinical involvement. The second source of error is the mismatch which occurs in using administrative data for clinical purposes. These data are collected to inform payment and billing operations, rather than clinical care, and using them for clinical purposes often requires some degree of inference. In systematic terms, the development of progressive/recurrent disease is not directly captured in the routine data, and thus we infer recurrence from the diagnostic and treatment activity that happens as a result. In this dataset, there was a specific instance where a patient received a high-cost drug regimen in the intensive care unit, which was coded as "chemotherapy". The degree to which these inferences can be generalised and extended remains unclear, and is to some extent probably disease and health-care system specific. Previous work in this area is extremely limited. Other authors have shown that routinely collected data can be used to estimate mortality rates, related to either different oncological treatments or following orthopaedic surgical procedures. There has also been work showing that cancer registry data can be used to measure recurrence rates in breast cancer. However, there is very little work on using routinely available procedure-level data to infer disease recurrence. One study examined the use of such data to estimate measures of metastatic disease in breast, prostate and lung cancer, and used a combination of ICD-9 codes for diseases and treatment codes for chemotherapy. The other study used patients enrolled in a clinical trial, and looked at their routine health claims data to estimate recurrence and death. They found a
good rate of agreement, with the routine healthcare data being able to measure the 5-year disease-free survival rate with a substantial degree of accuracy.In both cases, the authors used a combination of clinical intuition and logically-described criteria to use the routine data to infer recurrence.This approach is similar in principle to our work, although we included a wider range of treatment modalities than either of their studies did, and is the only one to include patients with head and neck cancers.Our work has focused on head and neck cancer, where local recurrence is relatively common and rarely left untreated.However, we believe that our approach is applicable to other tumour sites, and possibly non-malignant diseases.Although there are some disease-specific aspects to the work, the principle of retrospectively assigning a date of diagnosis to a primary tumour based on the date of presentation of metastatic disease would seem to be applicable across multiple tumour sites.Similarly, the use of repeated treatments to measure recurrent disease is one that is potentially applicable to a wide variety of conditions where recurrence is a possibility.However, it is restricted to those patients who are able or willing to receive subsequent treatments, and there are a proportion of patients who are either unwilling or unable to undergo further treatment, and thus will not be detected by our approach.Currently, our approach offers reasonable performance in a group of patients who are well enough to undergo initial curative treatment.There remain further possibilities for optimisation, through a more individualised assessment of relapse, and cross-referencing data on overall survival and local recurrence.These, and other developments, remain the subject of further work.We have developed a computer-based automated technique to extract data on relevant clinical events from routine healthcare datasets.Diagnosis, recurrence and death dates, and date of last follow-up were identified and overall survival and progression free survival intervals were calculated from this data.The automated algorithm was optimised to maximise correct identification of each timepoint, and minimise false identification of recurrence events.We tested our algorithm on 122 patients who had received radical treatment for head and neck cancer; the recurrence status for 82% patients was correctly identified, and in 98% of patients there was acceptable agreement between the routine and gold standard dataset for overall survival intervals.21 recurrence events were correctly identified out of a total of 40.The 19 who were missed were mainly due to the patient not receiving treatment for their recurrence; secondary treatment was used as an indicator for recurrence in the algorithm.We have demonstrated that our algorithm can be used to automate interpretation of routine datasets to extract survival information for this sample of patients, and the coding schemes and approach that the technique uses opens it up to use at a national level.There is potential to develop this algorithm to sensitise the recurrence dating strategy to contraindications of the patient, and also to perform retrospective analysis by predicting likelihood of recurrence through knowing the final status of the patient.No authors have been paid for this work, and all declare there are no conflicts of interest.
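As a concrete illustration of the recurrence rule described above — a secondary treatment beginning more than 120 days after the end of the preceding treatment is taken as a proxy for recurrence, and the date is backdated to a qualifying diagnostic biopsy within 56 days — the following minimal R sketch shows one way such a rule could be written. The data layout and column names are hypothetical, backdating to the start of radiotherapy (which the authors applied without a time limit) is omitted for brevity, and the sketch is illustrative rather than the authors' actual software.

# events: data frame for one patient with columns event_date (class Date) and
# event_type ("treatment" or "biopsy"); names are hypothetical.
infer_recurrence_date <- function(events, gap_days = 120, backdate_days = 56) {
  events <- events[order(events$event_date), ]
  trt <- events[events$event_type == "treatment", ]
  if (nrow(trt) < 2) return(as.Date(NA))            # no secondary treatment recorded
  gaps <- as.numeric(diff(trt$event_date))          # days between consecutive treatments
  i <- which(gaps > gap_days)[1]                    # first gap exceeding the 120-day threshold
  if (is.na(i)) return(as.Date(NA))                 # all treatments form one primary strategy
  recur_date <- trt$event_date[i + 1]               # start of treatment taken as the recurrence proxy
  biopsies <- events$event_date[events$event_type == "biopsy" &
                                events$event_date >= recur_date - backdate_days &
                                events$event_date <= recur_date]
  if (length(biopsies) > 0) recur_date <- min(biopsies)   # backdate to earliest qualifying biopsy
  recur_date
}

Applying such a function per patient, and measuring from the proxy diagnosis date to the inferred recurrence date, gives the automated PFS interval that is then compared against the gold standard.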
Background: Overall survival (OS) and progression free survival (PFS) are key outcome measures for head and neck cancer as they reflect treatment efficacy, and have implications for patients and health services. The UK has recently developed a series of national cancer audits which aim to estimate survival and recurrence by relying on institutions manually submitting interval data on patient status, a labour-intensive method. However, nationally, data are routinely collected on hospital admissions, surgery, radiotherapy and chemotherapy. We have developed a technique to automate the interpretation of these routine datasets, allowing us to derive patterns of treatment in head and neck cancer patients from routinely acquired data. Methods: We identified 122 patients with head and neck cancer and extracted treatment histories from hospital notes to provide a gold standard dataset. We obtained routinely collected local data on inpatient admission and procedures, chemotherapy and radiotherapy for these patients and analysed them with a computer algorithm which identified relevant time points and then calculated OS and PFS. We validated these by comparison with the gold standard dataset. The algorithm was then optimised to maximise correct identification of each timepoint, and minimise false identification of recurrence events. Results: Of the 122 patients, 82% had locally advanced disease. OS was 88% at 1 year and 77% at 2 years and PFS was 75% and 66% at 1 and 2 years. 40 patients developed recurrent disease. Our automated method provided an estimated OS of 87% and 77% and PFS of 87% and 78% at 1 and 2 years; 98% and 82% of patients showed good agreement between the automated technique and Gold standard dataset of OS and PFS respectively (ratio of Gold standard to routine intervals of between 0.8 and 1.2). The automated technique correctly assigned recurrence in 101 out of 122 (83%) of the patients: 21 of the 40 patients with recurrent disease were correctly identified, 19 were too unwell to receive further treatment and were missed. Of the 82 patients who did not develop a recurrence, 77 were correctly identified and 2 were incorrectly identified as having recurrent disease when they did not. Conclusions: We have demonstrated that our algorithm can be used to automate the interpretation of routine datasets to extract survival information for this sample of patients. It currently underestimates recurrence rates due to many patients not being well-enough to be treated for recurrent disease. With some further optimisation, this technique could be extended to a national level, providing a new approach to measuring outcomes on a larger scale than is currently possible. This could have implications for healthcare provision and policy for a range of different disease types.
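A complementary, minimal R sketch of the two agreement measures used to validate the automated intervals in this study — the proportion of patients whose ratio of gold standard to automated interval falls within 0.8–1.2, and Kendall's tau — could look as follows; the vector names are hypothetical placeholders rather than the study's own code.

# gold and automated: numeric vectors of survival intervals (days) for the same patients.
interval_agreement <- function(gold, automated, lower = 0.8, upper = 1.2) {
  ratio <- gold / automated
  prop_within <- mean(ratio >= lower & ratio <= upper, na.rm = TRUE)   # share with acceptable agreement
  tau <- cor(gold, automated, method = "kendall", use = "complete.obs")
  list(prop_within_limits = prop_within, kendall_tau = tau)
}
# Example: interval_agreement(gold_os_days, automated_os_days)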
798
Long-term wind turbine noise exposure and the risk of incident atrial fibrillation in the Danish Nurse cohort
There is presently a global focus on the expansion of renewable energy and of zero-carbon shares in energy systems, and wind energy is a suitable means of achieving this. In 2016 wind power avoided over 637 million tons of CO2 emissions globally, which has positive environmental and health implications; however, wind turbines contribute to environmental noise, and the potential local-level risk to human health is the subject of much debate. Denmark, one of the world leaders in total wind capacity, has set a goal of generating 50% of the country's electricity from wind energy by 2021, implying a continuing increase in the number and size of wind turbines, as well as in the proportion of the Danish population living in close proximity to wind turbines. Studies on health effects related to traffic noise exposures indicate complications other than annoyance, including risk of atrial fibrillation. However, noise emissions from wind turbines are different and not associated with particulate or gaseous oxidative stressors, highlighting the need for research on the potential cardiovascular health effects of WTN. Wind turbines are typically located in rural areas, in which background noise levels and sensitivity thresholds to noise may be lower. Epidemiological studies assessing the health impacts of wind turbines are sparse, and no studies have previously investigated whether exposure to WTN is associated with increased risk of AF. AF is the most common type of arrhythmia in Denmark, affecting around 5% of the population, with higher incidence rates amongst men and persons over 50 years of age (Danish Heart Foundation, 2018), and is associated with increased morbidity and mortality risks. Although prevalence is rising, the etiology behind this is largely unknown, but environmental or lifestyle factors are suspected. An association between WTN and AF may be biologically plausible. Exposure to WTN acts as a stressor that activates the hypothalamic-pituitary-adrenal axis and stress cascade, has been shown to induce systemic inflammation, and the cortisol released in the stress reaction cascade may increase blood glycogen within atrial myocytes, all of which are suggested risk factors for AF. Rapidly increasing investments in wind turbines worldwide, the intense debate regarding potential health effects of WTN and the lack of epidemiological studies merit research. In this study, we examine the association between long-term exposure to WTN and risk of AF in a large, nationwide cohort of Danish female nurses. The Danish Nurse Cohort was inspired by the American Nurses' Health Study to investigate the health effects of hormone replacement therapy in a European population. In 1993, the cohort was initiated by sending a questionnaire to 23,170 female members of the Danish Nurse Organization who were at least 44 years old at the time. The Danish Nurse Organization includes 95% of all nurses in Denmark. In total, 19,898 nurses replied, and the cohort was reinvestigated in 1999, when, first, 10,534 new nurses were invited (of which 8344 responded) and, second, 2231 non-responders from 1993 were re-invited (of which 489 responded). The questionnaire included questions on socio-economic and working conditions, parents' occupation, weight and height, lifestyle, self-reported health, family history of cardiovascular disease, use of oral contraceptives and HRT. In the present study, we used the earliest baseline information from 1993 or 1999 for 28,731 of the included nurses. Since establishment of the Central Population Register in
1968, all citizens of Denmark have been given a unique personal identification number, which allows accurate linkage between registers. The cohort members were linked to the Central Population Register to obtain the nurses' vital status information at 31st December 2013. Using the unique personal identification number of the cohort members, all residential histories were traced in the Central Population Register between 1982 and 2013. Each residential address contained a unique identification code composed of a municipality, road and house number code. The dates the persons had moved to and from each address were noted. The addresses were then linked to a database of all official addresses and their geographical coordinates in Denmark. The endpoint was incidence of AF (ICD-10: I48; ICD-8: 427.93 and 427.94), defined as first-ever hospital contact for AF, identified in the Danish National Patient Registry, which has collected nationwide data on all non-psychiatric hospital admissions since 1977; since 1995, patients discharged from emergency departments and outpatient clinics have also been registered. The Danish National Board of Health maintains the registers and assures the quality of the data. Participants with a discharge diagnosis or self-report of AF before enrolment into the Nurses Cohort were excluded. 8768 on-shore WTs in operation at any time in Denmark from 1982 to 2013 were identified, using the administrative Master Data Register of WTs maintained by the Danish Energy Agency. It is mandatory for all WT owners to report to the register, which contains geographical coordinates, date of grid connection, cancellation date for decommissioned turbines, and output for each Danish power-producing WT. Each of the turbines was classified into one of 99 noise spectra classes detailing the noise spectrum from 10 Hz to 10,000 Hz in thirds of octaves for wind speeds from 4 to 25 m/s, based on individual WT data including height, model, type and operational settings. These noise classes were formed from existing measurements of sound power for Danish WTs. At each WT location, the hourly wind speed and direction at hub height was estimated using mesoscale model simulations. Temperature and relative humidity at 2 m height, as well as the atmospheric stability, were also estimated from these simulations. Each of the nurses' homes was identified and geocoded.
The noise contribution at each nurse's home from WTs was calculated according to the Nord2000 method. Sound power levels from WTs were calculated for each address for the periods each cohort member had lived at the specific address. Each home address and each WT were geocoded, and the model takes into consideration meteorological data for each WT every hour throughout the years 1982–2013. There are today >5200 WTs in Denmark, and since the 1980s they have gradually decreased in number and increased in size. WTs range in size from 40–63 m in height with a 42 m wingspan to 90–143 m in height with a 107 m wingspan. The applied noise exposure modelling has been described in detail elsewhere. In brief, WTN exposure was estimated for all the addresses the nurses had lived in using the Nord2000 noise propagation model, which has been validated for WTs and previously detailed, showing good agreement between measured and calculated noise levels. The Nord2000 method performs well in calculating noise levels below 30 dB, but in practice such low noise levels can rarely be measured due to background noise from vegetation. For each home, the noise contribution from all WTs within a 6000 meter radius was calculated hour by hour. Outdoor A-weighted sound pressure levels at the most exposed façade of all buildings were calculated, and exposure was aggregated as yearly averages of the day (Ld), evening (Le) and night (Ln) levels, the weighted day-evening-night level (Lden), and the 24-h level (L24). Geographical coordinates were obtained for 99.9% of all the addresses. In this study, we consider nurses who had lived within a 6000 m radius of at least one WT at some point in time in the period from 1.1.1982 to 31.12.2013 as exposed, and all others as unexposed to WTN. As previously described in detail, we used the newly updated, high-resolution Danish air pollution dispersion modelling system to estimate exposure to outdoor air pollution at the residence. The necessary input data for carrying out the exposure modelling have been established for the first time in Denmark. Road traffic noise at residential addresses of the nurses was estimated using the Nord2000 model. The input variables for the traffic noise model include the geocodes of the location, the height of apartments above street level, road lines with information on yearly average daily traffic, traffic composition and speed, road type, building polygons for all surrounding buildings, and meteorology. Noise from road traffic was calculated at individual residential addresses for the period 1982–2013, as the equivalent continuous levels (Ld, Le, Ln and Lden) at the most exposed façade of the dwelling, as yearly averages. We applied the Cox proportional hazards regression model to test the incidence of AF as a function of WTN exposure, with age as the underlying time scale in all models, ensuring comparison of individuals of the same age. Start of follow-up was at the age on the date of recruitment, so nurses were considered at risk from recruitment, and end of follow-up was age at the date of first AF event, date of death, emigration or 31st December 2013, whichever came first. Nurses with an AF event before enrollment were excluded from the analyses. The effect of WTN was evaluated in several steps: Model 1) a crude model, adjusted only for calendar year at recruitment into the cohort; Model 2) a main, fully adjusted model, additionally adjusted for smoking status, smoking pack-years, alcohol consumption, physical activity, the consumption of fruit, avoidance of fatty meat consumption, use of oral contraceptives, use of
HRT, employment status and marital status. The main analysis was performed on the cohort with complete information on all the covariates included in Model 2. We examined several WTN exposure time windows using the 1-, 5- and 11-year rolling means during follow-up prior to AF diagnosis. In each rolling mean window, we considered Ld, Le, Ln, Lden, and L24 average exposure separately. We estimated HRs for the categorical versions of WTN exposures using, first, a cut-off at 20 dB, based on the rationale that in Denmark low-frequency sound in the 10–160 Hz range is limited to an A-weighted level of 20 dB. Second, we used cut-offs based on quartiles of the noise exposure range for each noise proxy. Third, we evaluated WTN modelled as a continuous nonlinear and linear variable. To avoid enforcing linearity between being exposed to WTN and not being exposed, two variables were used in these models: a binary variable distinguishing unexposed from exposed, and a continuous variable with the actual exposure level for those exposed and the median exposure level for unexposed subjects. WTN exposures were modelled as time-varying variables in all models (an illustrative code sketch of this time-varying Cox specification is given after the main text below). We carried out a number of sensitivity analyses to assess the effect of several covariates on the association between WTN and AF in four additional separate models: Model 3), as for Model 2, further adjusted for body mass index (BMI); Model 4), as for Model 2, further adjusted for self-reported hypertension at baseline; the effect of diabetes and socio-economic status was assessed in Model 5), as for Model 2, further adjusted for self-reported diabetes at baseline, and Model 6), as for Model 2, further adjusted for average gross income at the municipality at baseline, which we used as a proxy for socio-economic status. Continuous variables (year, smoking pack-years, alcohol consumption, BMI, and average gross income at the municipality) were modelled with restricted cubic splines. Potential effect modification of the association between WTN and AF incidence amongst exposed nurses, by age, night shift work, obesity, road traffic noise/NOx traffic-related air pollution and urbanicity index, was examined by introducing interaction terms into the main linear model with the continuous version of exposure. Traffic-related air pollution was considered relevant because it is strongly correlated with road traffic noise, which has been associated with increased risk of AF incidence, and because traffic-related air pollution causes oxidative stress, which is independently associated with risk of AF. Noise estimates and traffic air pollution were available for every year of follow-up, and all other potential confounding and effect-mediating variables were available at baseline. The cohort consists of elderly nurses; thus the effect of non-AF death as a competing risk was also investigated as a function of WTN, to assess whether time to AF in our main models was precluded by death. All effects are reported as cause-specific hazard ratios and 95% confidence intervals. All analyses and graphical presentations were performed using the R statistical software 3.2.0. Spearman correlations between metrics of WTN and traffic noise and air pollution were estimated; these were not correlated. Research was conducted in accordance with the principles of the Declaration of Helsinki, the Danish Nurses Cohort study was approved by the Scientific Ethics Committee for Copenhagen and Frederiksberg, and written informed consent was obtained from all participants prior to enrollment. The present register-based
study was approved by the Danish Data Protection Agency. By Danish law, ethical approval and informed consent are not required for entirely register-based studies. Of the total 28,731 recruited nurses in the Danish Nurses Cohort, we excluded 4 who died or emigrated before the start of follow-up and 105 who were registered with a discharge diagnosis of an AF event in the Danish National Patient Registry before baseline. We additionally excluded 4471 nurses with missing information on covariates and 14 nurses due to missing address information or inability to geocode the address, leaving 24,137 nurses for the final analyses. Mean follow-up was 17 years, giving a total of 409,309 person-years of observation, during which 1430 nurses developed AF, with an incidence rate of 3.5 new cases per 1000 person-years. The nurses who developed AF were an average of 5 years older, had higher BMI, smoked slightly more, consumed less alcohol, were less physically active, ate more fatty meat and fruit, had higher rates of hypertension and HRT usage but lower rates of ever using oral contraceptives, tended to be retired, lived in areas with slightly lower incomes, and were exposed to higher levels of NOx traffic-related air pollution and slightly higher levels of annual weighted road traffic noise at baseline than nurses who did not develop AF within the follow-up period. Nurses from the Danish Nurse Cohort resided all around Denmark with wide geographical variation, with 14.8% residing in urban areas, 42.3% in provincial towns and 40.2% in rural areas at the cohort baseline, which corresponds closely to the distribution of the Danish population. The estimated residential noise levels from WTs at baseline and distance to WTs varied greatly, as did the proportion of women exposed throughout follow-up, with around 9% exposed in 1993, almost 15% in 2002 and 13% in 2013. Mean WTN levels amongst exposed nurses were 26.1 dB in 1993, 26.3 dB in 2002, and 26.4 dB in 2013. At the cohort baseline in 1993 or 1999, mean baseline residential noise levels amongst exposed nurses were slightly higher in those who developed AF (27.3 dB) than in those who did not (26.2 dB). Compared to the 21,618 nurses unexposed to WTN at the cohort baseline, the 2519 exposed nurses were slightly younger, had higher BMI, smoked less, were less physically active, had slightly higher rates of diabetes and oral contraceptive use but lower HRT use, tended to still be working, lived in rural rather than urban areas, had slightly lower incomes, and were exposed to half the levels of NOx traffic-related air pollution and lower annual levels of weighted road traffic noise, but were similar with regard to hypertension, avoidance of fatty meat consumption, fruit consumption, and diabetes rates. Amongst the nurses unexposed at cohort baseline, 1307 developed AF during 367,229 person-years, with an incidence rate of 3.6 per 1000 person-years, while 123 of the nurses exposed at cohort baseline developed AF within 42,080 person-years, with an incidence rate of 2.9 per 1000 person-years. The relationship between WTN exposure and AF was characterized by a non-linear, non-monotonic pattern, without strong evidence of an exposure-response relationship. When assessing the effects of the 11-year rolling mean night-time exposure to WTN, we found a statistically significant 30% increased risk of AF comparing exposure ≥20 dB with <20 dB, and similar effects for evening and day exposure, as well as for the overall un-weighted 24-h mean. The overall association between the weighted 24-h WTN exposure (Lden) and AF
was not significant when comparing women exposed to levels of noise ≥20 dB to those exposed to levels <20 dB, with adjusted hazard ratios and 95% confidence intervals of 1.01, 1.05 and 1.10, for the 11-year, 5-year and 1-year rolling mean preceding diagnosis, respectively.The effect of the included a-priori selected confounders in Model 2 compared to the crude model was very minor and HRs were slightly amplified in the fully adjusted model.There was no evidence of attenuation by BMI, self-reported hypertension, diabetes or socio-economic status in the sensitivity analyses with no marked deviation from the main model 2.There was evidence of effect modification by age with a higher risk of AF amongst nurses above 60 years of age, but none by obesity, road traffic noise, road traffic pollution or urbanicity index.The number of competing events within the cohort during follow-up was high, compared to the outcome of interest, but when assessing competing risk in model 2, we observed no association between WTN exposure and non-AF death in our data.Thus, there is no evidence that death is a competing event potentially masking the association of interest in this study.The results from this first nationwide, prospective cohort study of Danish female nurses assessing the relationship between WTN exposure and AF offer suggestive evidence of an association.We used a large nationwide prospective cohort with a representative distribution of present and historical addresses around entire Denmark and benefited from objective assessment of AF incidence based on high quality Danish registries with near 100% coverage, as well as detailed information on AF risk factors.This assessment implies minimal possibility of recall and information bias and no selection bias.We furthermore benefited from the state-of-the-art high-resolution validated exposure models for WT and road traffic noise as well as air pollution, which were based on geocodes and also accounted for all address changes, meteorological conditions as well as the size and the type of WTs.Another large Danish cohort study has previously investigated the association between long-term noise exposure from road traffic and AF.That study was based on over 50,000 men and women, participants of the Danish Diet, Cancer and Health cohort, recruited from general population, from the two largest cities in Denmark, and found a 6% increased risk associated with each 10 dB increase in road traffic noise and a dose-response effect.But after adjusting for air pollution the association was attenuated and without statistical significance.In other words, the association between traffic noise and AF seems to have been largely explained by the association with air pollution, which is a closely related exposure to road traffic noise, as the two share a major source, thus making it difficult to separate effects of noise from those of air pollution.Unlike traffic noise exposure, WTN is not correlated with particulate or gaseous oxidative stressors that are considered risk factors and relevant mediators for cardiovascular endpoints.The mechanisms of the association between environmental noise and cardiovascular outcomes, including AF are not fully elucidated but according to early work by Babisch et al. 
the noise stress reaction is thought to have two pathways: the first being direct effects of noise on the central auditory and nervous system, and the second being an indirect non-auditory pathway mediated by annoyance. In our present study, we did not have any measures of annoyance effects amongst the nurses. However, annoyance is the most plausible explanation for the associations we observe. This has recently been reported in a cross-sectional study assessing the association between day- and night-time noise and neighborhood annoyance and AF within the German Gutenberg Health Study, on 14,639 participants. Although they did not have data on annoyance by WTN, a review of three cross-sectional studies found that annoyance was consistently directly associated with WTN, but that no other measure of health or wellbeing was consistently related to sound pressure levels. In another previous study, annoyance was reported to be strongly correlated with a negative attitude toward the visual impact of WTs on the landscape, and the authors further demonstrated that people who benefit economically from WTs have a significantly decreased risk of annoyance despite exposure to similar sound levels, as replicated in more recent studies. This supports the concept that annoyance due to visual rather than auditory aspects is the underlying cause, via stress- and sleep-disturbance-mediated pathways. We found a significant effect of exposure to WTN above 20 dB compared with <20 dB at night. We acknowledge that noise levels below 20 dB are probably not able to cause sleep disturbance, but if real this implies that night-time exposure to WTN is important in the causal pathway. This is plausible, as the second mechanism involved in the effects of noise exposure relates to indirect noise-induced sleep disturbances, which are proposed pathways implicated in the development of metabolic changes. Transportation noise has been consistently associated with both self-reported and measured sleep disturbances, and we expect similar disturbances could occur with WTN. Sleep has a regulatory influence on the immune system, and disturbed sleep has been associated with systemic impairment of the immune system, including changes in circulating white blood cells and increases in pro-inflammatory molecules that are on the causal pathway to AF. Notably, our results, suggesting a strong association between night-time WTN and AF, are supported by a recent study of noise annoyance and AF, in which the authors report that annoyance due to noise at night was most strongly associated with AF prevalence. However, although we find the strongest effects with exposure to WTN at night, we note that estimated exposures to L24, Ld, Le and Ln are all highly correlated, limiting our ability to distinguish which exposure window is most relevant. Because we also found associations with evening and day exposure, this implies that exposure to noise at any time is likely relevant for the development of AF. In the present study, we found that the WTN levels were relatively low. Only around 13% of nurses were exposed, defined as living within a 6000-m radius of one or more WTs, as a large proportion of the included nurses had never lived in proximity to a WT, and only 3% of all nurses were exposed to levels over 29.9 dB throughout follow-up. According to the World Health Organization, it is not plausible that noise levels at and below 30 dB would cause sleep disturbances, and only modest health
effects would be expected at and below 40 dB. In the most recent environmental guidelines for the European Union, the WHO conditionally recommends that WT Lden levels should be reduced to below 45 dB, much in line with the limits set by the Danish Environmental Protection Agency of 44 dB and 42 dB for dwellings in open country. This may imply that our findings were due to chance and that the noise levels in our study may not have induced the intermediates previously reported to be on the causal pathway from noise exposure to AF; direct auditory effects leading to AF at these levels are not expected. These levels of WTN are also substantially lower than road traffic noise levels within the same cohort, which were over 50 dB on average, noting that a 20 dB difference between these two noise sources is perceived as around four times the loudness, due to the logarithmic scale of sound. We found no evidence of confounding of the relationship between WTN and AF in the present study, implying a robust estimate. Although valid, the information on confounding and effect-mediating variables was collected at cohort baseline, and we acknowledge that these may have changed throughout the 20-year average follow-up time. The main limitation of our present study is exposure misclassification in the modelled WTN levels, since these are only proxies of personal exposure. Although our estimation of WTN exposure is based on complete residential histories, we cannot account for exposures via temporary migration to other destinations, at work in other regions in Denmark or whilst overseas, in areas with either higher or lower noise exposures. Also, we had no access to individual information related to bedroom orientation relative to the closest WT, or to noise exposure moderators such as façade insulation measures or window types. Finally, the A-weighted nature of our estimates is not informative about any peaking characteristics of the WTN throughout follow-up. So although the average A-weighted WTN levels we report are in fact in accordance with the noise limits for WTs as specified by statutory order of the Danish Environmental Protection Agency, there may have been peaks we did not address. Another major weakness of our study is the small number of AF cases exposed to high levels of WTN, limiting the power to detect effects in this range of noise exposure. We did not have information on individual-level exposure or exposure modifiers, such as bedroom location within the residence, use of earplugs, or window or other insulating materials, and cannot rule out instances of exposure misclassification. Furthermore, we had no available information on personal sensitivity to noise, levels of annoyance or sleep quality, which have all been reported to be on the causal pathway between noise exposure and health effects. However, such self-reports may have introduced bias, as they tend to include highly motivated persons with possible negative attitudes toward WTs, which have been repeatedly reported to play an important role as the underlying cause of reported health and sleep problems. In our study, it was not feasible to consider all noise sources, including noise from neighbors, bedroom snoring, aircraft, railways, industrial noise, and ventilation, nor did we have estimates of indoor WTN exposure. Selander et al.
found increased effect estimates of noise in relation to AF when excluding all those exposed to other noise sources as well as hearing-impaired persons. Hence, we may have underestimated the effects of WTN in our present study. Another weakness is that we lacked data on personal and household income, which are important determinants of socio-economic status. Finally, we only consider women, and are thus unable to account for possible differences in effect according to gender, although the effect of gender remains unclear, with some studies showing stronger effects amongst men and others reporting no difference by gender. We found suggestive evidence of an association between long-term exposure to WTN and AF amongst women above 44 years of age. This should be interpreted with caution, as levels of WTN exposure were low and direct auditory effects leading to AF are not expected.
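As referenced in the statistical methods, the following minimal R sketch, using the survival package, illustrates a Cox model with age as the underlying time scale and WTN exposure entered as a time-varying covariate in counting-process (start, stop) format. The data set and variable names are hypothetical placeholders rather than the authors' code, and only a subset of the Model 2 covariates is shown.

library(survival)
# nurse_intervals: one row per nurse per follow-up interval, split so that the rolling-mean
# night-time WTN category (wtn_night_ge20 = 1 if >=20 dB, else 0) is constant within each row;
# age_start/age_stop delimit the interval on the age scale and af = 1 if AF is diagnosed at age_stop.
fit <- coxph(Surv(age_start, age_stop, af) ~ wtn_night_ge20 + calendar_year +
               smoking_status + pack_years + alcohol + physical_activity,
             data = nurse_intervals)
summary(fit)  # cause-specific hazard ratios (exp(coef)) with 95% confidence intervals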
Background: The potential health effects related to wind turbine noise (WTN) have received increased focus during the past decades, but evidence is sparse. We examined the association between long-term exposure to wind turbine noise and incidence of atrial fibrillation (AF). Methods: First ever hospital admission of AF amongst 28,731 female nurses in the Danish Nurse Cohort were identified in the Danish National Patient register until ultimo 2013. WTN levels at residential addresses between 1982 and 2013 were estimated using the Nord2000 noise propagation model, as the annual means of Lden, Lday, Levening and Lnight at the most exposed façade. Time-varying Cox proportional hazard regression models were used to examine the association between the 11-, 5- and 1-year rolling means of WTN levels and AF incidence. Results: 1430 nurses developed AF by end of follow-up in 2013. Mean (standard deviation) baseline residential noise levels amongst exposed nurses were 26.3 (6.7) dB and slightly higher in those who developed AF (27.3 (7.31) dB), than those who didn't (26.2 (6.6)). We observed a 30% statistically significant increased risk (95% CI: 1.05–1.61) of AF amongst nurses exposed to long-term (11-year running mean) WTN levels ≥20 dB(A) at night compared to nurses exposed to levels <20 dB(A). Similar effects were observed with day (HR 1.25; 95% CI: 1.01–1.54), and evening (HR 1.25; 95% CI: 1.01–1.54) noise levels. Conclusions: We found suggestive evidence of an association between long-term exposure to WTN and AF amongst female nurses. However, interpretation should be cautious as exposure levels were low.
799
Early-life exposure to persistent organic pollutants (OCPs, PBDEs, PCBs, PFASs) and attention-deficit/hyperactivity disorder: A multi-pollutant analysis of a Norwegian birth cohort
Attention-deficit/hyperactivity disorder is a neurodevelopmental disorder characterized by persistent inattention, hyperactivity and impulsivity.The prevalence of ADHD is 3.4–7% in children and slightly lower in adults, although there is heterogeneity in prevalence estimates across studies and countries, partly attributable to differing diagnostic criteria and practices.Although ADHD is highly heritable, early-life environmental risk factors explain an estimated 10–40% of the variance in ADHD.There is a growing body of evidence that early-life exposure to certain environmental contaminants impairs neuropsychological development.Hypothesized mechanisms include thyroid hormone insufficiency and disruption in early-life, inhibition of acetylcholinesterase, dopaminergic dysfunction, disruption of calcium signaling and GABA signaling pathways, and gene-environment interactions.In humans, the evidence is most robust for several persistent pesticides and heavy metals, although the evidence base is scarce for most chemicals.A small number of studies have reported positive associations between persistent organic pollutants and ADHD or ADHD-related behaviors, while other studies have reported null or inconsistent results.Many of the previous studies either investigated individual chemicals, did not account for potential confounding from co-exposure to other chemicals or interactions between them, nor investigated both prenatal and postnatal exposure, or used a cross-sectional design, had a younger age of ADHD diagnosis, or used less sensitive tools to ascertain ADHD.We investigated associations between POPs from four chemical classes and risk of ADHD diagnosis by around 13 years of age in a Norwegian birth cohort."We evaluated concentrations of chemicals measured in breast milk, which represent early breast milk exposure, and for the lipid-bound compounds are also a proxy of the child's in utero exposures.We also modeled child chemical body burdens in the first 2 years of life because this is a critical window for neurodevelopment, and intake through breastfeeding contributes to substantial exposures in this period.We used data from a prospective birth cohort, HUMIS.The HUMIS cohort was established with the aim of measuring exposure to POPs in breast milk and investigating possible health effects.Briefly, new mothers were recruited in 2003–2009 by public health nurses during routine postnatal care home visits around 2 weeks postpartum in seven counties across Norway.A subset of mothers were recruited in 2002–2005 by a pediatrician at the maternity ward in Østfold hospital in Southern Norway, two term births for every preterm birth.All mothers followed the same protocol and completed the same questionnaires, regardless of recruitment procedure.They were asked to collect 25 mL of breast milk each morning on 8 consecutive days before the child reached 2 months of age.Minor deviations in this sampling protocol, such as collection by breast pump, were accepted.Supplemental Table S1 shows a comparison of key characteristics from the general population of mothers giving birth in Norway, entire HUMIS cohort, and the subset analyzed in this study.Aside from preterm birth, other characteristics of this study population such as maternal age, primiparity, smoking at the start of pregnancy, birth weight and sex of the infant are all representative of the general population.The current study is based on the subset of 1199 mother–singleton child pairs for whom breast milk samples have been analyzed for at least 
one of the chemical classes. The birth cohort study was approved by the Norwegian Regional Committee for Medical and Health Research Ethics and the Norwegian Data Inspectorate. Written informed consent was obtained from all participating mothers prior to enrolment. ADHD cases were identified by linkage to the nationwide Norwegian Patient Registry, which covers specialist-confirmed diagnoses at hospitals and outpatient clinics. Cases were defined as International Classification of Diseases-10 codes F90.0, F90.1, F90.8 or F90.9, which corresponds to a narrower definition of ADHD than in the Diagnostic and Statistical Manual of Mental Disorders of the American Psychiatric Association, as symptoms of both inattentiveness and hyperactivity/impulsivity are required. There were no cases of F98.8, which is sometimes included in ADHD ascertainment. The registry began collecting individual data in 2008, when the children were a median of 3.7 years of age, and data up until December 2016 were available at the time of linkage, when the children were a median of 12.7 years of age. We assessed environmental chemicals commonly detected in breast milk, as these legacy pollutants are persistent and ubiquitous across the globe. Chemicals included polybrominated diphenyl ethers and perfluoroalkyl acids of the class of poly- and perfluoroalkyl substances, used respectively as brominated flame retardants and surfactants, which were not yet restricted at the time of the study; and industrial chemicals (PCBs) and commercial organochlorine pesticides, which, although banned or restricted, are present in the environment. Chemicals were measured in individual mothers' pooled breast milk samples, which were collected at a median of 33 days postpartum. Due to financial constraints, samples have only been analyzed for a subset of women. Chemicals were measured in broadly two subsets: the first set was oversampled for preterm birth, small for gestational age, and rapid growth, and a second set was oversampled for neurodevelopmental disorders, including ADHD, autism spectrum disorder, and cognitive delay. We restricted our study to the 27 chemicals that were measured in at least 800 samples and detected in at least 50% of the samples: 14 PCBs, 5 OCPs, 2 PFASs, and 6 PBDEs. Four laboratories performed the chemical analyses, as previously described: the Department of Environmental Exposure and Epidemiology, Norwegian Institute of Public Health; the Department of Environmental Sciences, Norwegian University of Life Sciences; the Institute for Environmental Studies, Faculty of Earth and Life Sciences, VU University; and the Research Centre for Toxic Compounds in the Environment, Masaryk University. Our primary exposure was the concentrations of chemicals in breast milk, which represent early breast milk exposure and, additionally for the lipid-bound compounds, in utero exposure. Since the critical window of neurodevelopment may extend up to 2 years of age, and postnatal exposures can be substantial and variable, we estimated postnatal exposures for a secondary analysis. We modeled postnatal child blood levels for the 25 lipophilic chemicals using a two-compartment pharmacokinetic model. The model's equations use a number of input parameters, including the measured chemical concentrations in breast milk; the child's and mother's weights at birth and at 3, 6, 12, and 24 months; the extrapolated fat mass of the child and mother; the maternal-reported duration of exclusive and partial breastfeeding; and the estimated quantity of breast milk consumed, all of
which change over time. Compared with using a single measured concentration, or measured concentration × breastfeeding duration, the pharmacokinetic model yields the lowest measurement error. Given the ethics of the alternative, modeling postnatal blood concentrations is the most suitable approach to capture the exposure profile of the infant during the first 2 years of life. We also modeled concentrations at birth to compare these estimates with the measured concentrations. Full details of the model and its validation are available elsewhere. For the two protein-bound PFASs, we could not use the same model, and the secondary exposure was the product of breastfeeding duration and PFAS concentration. Information on potential confounders was obtained from the Medical Birth Registry of Norway and from the questionnaires administered to the mothers at 6, 12, and 24 months postpartum. For the primary analysis, we assessed associations between measured breast milk concentrations of chemicals and ADHD. We defined two adjustment models based on a directed acyclic graph. The minimum sufficient adjustment set (M1) included child age at linkage, maternal age, and maternal education. We also tested a model (M2) that was further adjusted for covariates for which the evidence for an association with both the exposure and the outcome is weaker: parity, smoking during pregnancy, pre-pregnancy body mass index, marital status, and maternal fatty fish consumption, and for variables related to oversampling: small for gestational age (SGA) and preterm birth. Estimates from M1 reflect the total effect estimates of chemical exposures on ADHD, but we cannot exclude selection bias due to the oversampling design. In the DAG, the study sample selection variable is not separated from the outcome or exposure, and there is potential for selection bias in the odds ratio estimate for M1. In M2, when we adjust for "SGA, preterm", we have adjusted for selection bias. However, since "SGA, preterm" is also a potential mediator of the effect of chemical exposures on ADHD, we may also have removed some of this effect in M2. In addition, although not commonly discussed, adjusting for preterm as a mediator may lead to collider bias. Values below the sample- and chemical-specific limit of detection (LOD) were singly imputed using maximum likelihood estimation, following a log-normal distribution and conditional on maternal age, parity, pre-pregnancy BMI, and child birth year. Twenty-four exposures had <2% of values below the LOD, and the remaining exposures had 7%, 10%, and 28% of values below the LOD. We used multiple imputation by chained equations to impute missing exposure data and missing covariate data up to the full sample size of 1199. We imputed 100 datasets. As a sensitivity analysis, we also ran complete case analyses. Due to the high number of correlated exposures from similar sources and the potential for multicollinearity, we used a two-step approach to estimate the associations between the chemicals and ADHD. We first used elastic net logistic regression, a variable selection method that adjusts for confounding due to correlated co-exposures and reduces the proportion of false positive results. We repeated elastic net modeling in each of the 100 multiply imputed datasets, and considered exposures that were selected in more than half of the models as noteworthy. We also evaluated the p-values conditional on the selection for post-selection statistical inference, which reflect the strength of the associations. In the second step, we then refit the selected subset of chemicals in an
For comparison, we also present the single-pollutant models, adjusted for confounders and corrected for multiple comparisons with the false discovery rate controlled at <5%. We tested models using natural log-transformed exposure variables to reduce the influence of extreme values and improve model fit. Regression coefficients are presented per interquartile range (IQR) increase in ln-exposure concentrations to render coefficients more comparable given the right-skewed distributions and highly variable exposure contrasts. As a sensitivity analysis, we also tested models with untransformed exposure variables. We assessed potential effect measure modification by child sex, maternal education, maternal smoking, and parity. For the most robustly selected exposures from the primary analysis, we ran a number of additional sensitivity analyses: 1) using the maximum adjusted set of confounders, not including study sample selection variables; 2) excluding over-sampled SGA and preterm births; 3) excluding children recruited from Østfold county; and 4) restricting to children aged 10 years and above. We then conducted a secondary analysis using postnatal estimates of the lipophilic chemical body burdens of the child at birth and at 3, 6, 12, 18, and 24 months of age to investigate whether we could detect critical windows of exposure. These models were additionally adjusted for duration of any breastfeeding, a potential confounder in the postnatal exposure regression since longer breastfeeding may influence both the transfer of persistent organic pollutants (POPs) and the risk of ADHD. For the non-lipophilic PFASs, we assessed effect modification by total breastfeeding duration, as an indirect measure of postnatal exposure. Finally, we assessed whether associations for the selected exposures a) exhibited multiplicative interactions, by introducing product terms between them; b) were non-linear, using generalized additive models fitted with a smoothing spline; and c) were affected in magnitude or shape by the exclusion of extreme exposure values or influential observations. We used Stata to model the postnatal estimates and R for all other statistical analyses.
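A minimal sketch of the single-pollutant analysis with IQR-scaled ln-exposures and false discovery rate correction might look as follows in R; the data frame, variable names, and the M1-style adjustment set shown are hypothetical placeholders rather than the study code.

```r
# Sketch only: single-pollutant logistic models, coefficients rescaled to an IQR
# increase in ln-exposure, then Benjamini-Hochberg false discovery rate control.
results <- lapply(chem_vars, function(v) {
  d <- analysis_data
  d$ln_x <- log(d[[v]])
  fit <- glm(adhd ~ ln_x + child_age + maternal_age + maternal_education,
             family = binomial, data = d)
  iqr  <- IQR(d$ln_x, na.rm = TRUE)
  beta <- coef(fit)["ln_x"]
  se   <- sqrt(vcov(fit)["ln_x", "ln_x"])
  data.frame(chemical   = v,
             or_per_iqr = exp(beta * iqr),                  # OR per IQR of ln-exposure
             ci_low     = exp((beta - 1.96 * se) * iqr),
             ci_high    = exp((beta + 1.96 * se) * iqr),
             p          = summary(fit)$coefficients["ln_x", "Pr(>|z|)"])
})
results <- do.call(rbind, results)
results$p_fdr <- p.adjust(results$p, method = "BH")          # FDR controlled at <5%
```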
Characteristics of the mother–child pairs are displayed in Table 1. Mothers were a median of 30 years of age at delivery, 33% were overweight or obese prior to pregnancy, and 86% continued breastfeeding for >6 months postpartum. A total of 55 children had been diagnosed with ADHD. The distributions of chemicals in breast milk are shown in Fig. 2. The highest breast milk concentrations within each of the four chemical classes were observed for BDE-47, PFOS, PCB-153, and dichlorodiphenyldichloroethylene. There was clear clustering by chemical class, with moderate to high correlations within classes and moderate correlations between PCBs and OCPs; 34% of pairwise Pearson correlations were rp > 0.5 and 11% were rp > 0.8. Modeled child body burdens peaked at the end of breastfeeding, with median levels around 3 times higher at month 12 than at birth. Correlations between measured breast milk concentrations and the estimated postnatal child serum concentrations for the 25 lipophilic chemicals were high but diverged with time: rp > 0.87 for months 0 to 6, dropping to rp < 0.80 for 14 exposures by 24 months. Associations between covariates and ADHD, and between covariates and selected chemicals, are shown in Table S4. Boys and children whose mothers smoked during pregnancy had an increased risk of ADHD. Maternal breast milk concentrations of chemicals generally increased with maternal age, earlier calendar year of sampling, and maternal fish intake, and were lower in multiparous women. Ten chemical exposures were predictive of ADHD in the majority of elastic net logistic regression models across the multiple imputation datasets in the primary M1 minimal sufficient adjustment analysis: BDE-47 and BDE-154, PFOA, PFOS, PCB-114, and β-HCH with an increased risk, and BDE-153, PCB-153, HCB, and p,p′-DDT with a decreased risk. The most robust results were for PFOS, HCB, β-HCH, and p,p′-DDT. In a multi-pollutant M1 logistic regression model, the effect estimates were OR = 1.77 per IQR increase in PFOS concentrations, OR = 0.47 for HCB, OR = 1.79 for β-HCH, and OR = 0.59 for p,p′-DDT. For comparison, these effect estimates are also presented per 1-ln-unit increase in the footnotes of Table S5. Using elastic net modeling, a somewhat smaller set of exposures was selected in the further adjusted M2 model, with BDE-154, PFOA, and PCB-114 no longer selected; however, the mutually adjusted M2 effect estimates were generally similar in magnitude and precision to the M1 estimates. For comparison, we also evaluated the more conventional single-pollutant associations, unadjusted for co-exposures; details are shown in Table S6. In single-pollutant models with untransformed exposures, PFOS, PFOA, and HCB were significant. For the most robustly selected exposures, using the maximum adjusted set of confounders excluding study sample selection variables, excluding those over-sampled for SGA and preterm birth, excluding those who lived in Østfold county, or restricting to those aged >10 years did not alter the interpretation of our results. Effect estimates based on multiple imputation were similar to those from the complete-case analysis. Analyses based on modeled lipophilic exposures at birth and at five postnatal periods in the first 2 years of life yielded effect estimates similar in magnitude and precision to those for measured HCB, β-HCH, and p,p′-DDT, except for small, non-significant increased estimates at 24 months. For non-lipophilic PFOS, we did not observe effect modification by duration of any breastfeeding, a proxy for postnatal exposure. For the subset of four robustly selected exposures, there were no two-way interactions between exposures. There were also no clear outliers, and removal of the three most influential observations had a negligible effect on the effect estimates.
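The spline-based check for non-linearity referred to in the methods can be sketched as follows (illustrative R only; the outcome, exposure, and covariate names are hypothetical placeholders, and this is not the exact specification used in the study).

```r
library(mgcv)  # generalized additive models

# Probe non-linearity of one ln-transformed exposure (here a hypothetical ln_hcb
# column) with a smoothing spline, adjusting for the minimal confounder set.
gam_fit <- gam(adhd ~ s(ln_hcb) + child_age + maternal_age + maternal_education,
               family = binomial, data = analysis_data)
summary(gam_fit)             # effective degrees of freedom > 1 suggest non-linearity
plot(gam_fit, shade = TRUE)  # visualise the exposure-response curve on the logit scale
```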
The associations for PFOS, β-HCH, and p,p′-DDT were clearly linear, whereas the association for HCB exhibited an inverted U-shaped curve, with the risk switching from increasing to decreasing at around 8.0 ng/g lipid. There was evidence of effect modification by maternal education and child sex for PFOS, but not by maternal smoking or parity. Higher odds of ADHD diagnosis with increasing PFOS levels were observed only for higher-educated mothers, and a stronger association was observed for girls. No effect modification was observed for HCB, β-HCH, or p,p′-DDT for any of the tested factors. In an exploratory a posteriori analysis, we found that fish intake was significantly predictive of PFOS, β-HCH, HCB, and p,p′-DDT levels, with a 4–7% increase in median concentrations of the four chemicals per IQR increase in fish intake. Fatty fish was a more important predictor of β-HCH, HCB, and p,p′-DDT levels, whereas lean fish was a more important predictor of PFOS levels. In this prospective Norwegian cohort study, early-life exposure to four environmental chemicals was associated with the odds of ADHD diagnosis in childhood. PFOS and β-HCH were associated with an increased risk, p,p′-DDT with a decreased risk, and HCB showed a non-linear exposure–response relationship. These results were robust to adjustment for other persistent chemical exposures and relevant confounders. The other 23 chemical exposures were less consistently associated with ADHD or showed associations close to null. PFOS, HCB, β-HCH, and p,p′-DDT are persistent organic pollutants, and body burdens of these compounds will remain substantial in the coming years because of their long half-lives and environmental persistence. Fish intake was significantly predictive of PFOS, β-HCH, HCB, and p,p′-DDT levels in this study population, in agreement with previous studies demonstrating that fish intake is the most important dietary source of exposure for these four POPs. We observed a positive association between breast milk concentrations of PFOS and ADHD that was sex-specific: stronger, and only statistically significant, in girls. This finding should be interpreted cautiously given the small number of female cases in our study. However, other studies have also reported adverse associations between prenatal or child serum concentrations of PFOS or PFOA and behavioral or ADHD symptoms in girls, and not in boys. An effect of PFOS in females only is plausible, since estrogen plays a critical role in brain programming and PFOS acts as an estrogen receptor agonist in a number of experimental models. However, a large, prospective nested case-control study in the Danish National Birth Cohort, with median maternal serum PFOS concentrations of 27.4 ng/mL, did not find these sex differences, or any other consistent associations between prenatal PFAS exposures and registry-based ADHD. Other prospective studies of prenatal PFAS exposure and child neurodevelopment have largely been inconsistent or null, as were analyses of breast milk exposure in this cohort at a younger age. Cross-sectional childhood exposure has been associated with increased odds of ADHD in NHANES, although in the highly contaminated C8 communities in Ohio the association was apparent only for perfluorohexane sulfonate, not PFOS or PFOA. We identified reduced odds of ADHD diagnosis in association with early-life p,p′-DDT exposure, in contrast to our a priori hypothesis that environmental chemical exposures are detrimental to neuropsychological development. Most previous studies assessed p,p′-DDE, the primary metabolite of p,p′-DDT, in
relation to ADHD or related conditions, and the association for p,p′-DDE was null in this study. Prenatal p,p′-DDE was associated with total difficulties, including conduct problems and hyperactivity, at 5–9 years in the INUENDO cohort. Other studies have generally reported null or near-null associations, including the largest study to date on p,p′-DDE and ADHD, a European pooled analysis of 4437 mother–child pairs in which the average of the cohorts' median estimated child blood levels at birth was 156.7 ng/g lipid. HCB had a non-linear association with ADHD: we observed an increasing risk in the low-level exposure range, which switched to a decreasing risk at concentrations above 8 ng/g lipid in a non-parametric GAM analysis. Experimental evidence indicates that HCB is neurotoxic; for example, HCB affected neuronal differentiation and inhibited neurite development in GABAergic neurons in a mouse model. Previously, prenatal HCB exposure has been associated with an increased risk of poor social competence and higher teacher-reported ADHD-related scores at 4 years in Spain, and with deficits in child cognition at 4 years in Greece. However, the recently published European pooled analysis of HCB found no association with risk of ADHD. We hypothesize that the decrease in risk observed at higher concentrations could be related to live-birth bias. If higher HCB exposures cause increased pregnancy loss, as previously observed in a contaminated population and as indicated by its association with SGA, this competing risk, coupled with unmeasured confounding, could bias HCB–ADHD associations towards the null or induce an apparent protective association through collider stratification bias. This could also apply to the inverse association observed between p,p′-DDT and ADHD, as p,p′-DDE exposure has also been associated with pregnancy loss. However, this is speculative, and the potential for bias is debatable, especially since other studies report no associations between pregnancy loss and these compounds. We also observed a novel and robust association between breast milk concentrations of β-HCH and increased odds of ADHD. The epidemiologic evidence for effects of β-HCH, a compound structurally similar to HCB, on neuropsychological development is limited. In a cross-sectional analysis of a U.S.
cohort, urinary 2,4,6‑trichlorophenol, a metabolite of certain OCPs including HCB and HCHs, was associated with parent-reported ADHD. Our findings suggest that this compound should be investigated further for neurotoxicity. We did not find clear associations for PCBs in relation to ADHD diagnosis. Most previous studies have reported null or near-null associations for PCBs, although the sample sizes have often been small. One study near a PCB-contaminated harbor in New Bedford, Massachusetts, USA, reported associations between PCB-153 and teacher-reported ADHD behaviors and tests reflecting inattention and impulsivity at around 8 years of age, with stronger associations observed for prenatal than for postnatal exposure estimates. However, the European pooled analysis of PCB-153 found no association with risk of ADHD. The differences between the New Bedford and European studies could be due to study sample size or a different underlying mix of PCB congeners. We also did not find clear evidence that PBDEs were associated with ADHD, although there was suggestive evidence that BDE-47 was associated with an increased risk of ADHD and BDE-153 with a decreased risk. Several smaller studies reported associations between prenatal and postnatal PBDE exposures and inattention and hyperactivity in young school-aged children. In the CHAMACOS cohort in California, prenatal ΣPBDE exposure was associated with various measures of attention and executive functioning at ages 9, 10.5, and 12 years. A potential explanation for the discrepancy with our results might be that the exposure–outcome association exhibits a threshold and that U.S. populations generally have higher PBDE exposure because of stringent furniture and product flammability regulations; for example, the median prenatal BDE-47 concentration was 15.0 ng/g in maternal serum in the CHAMACOS study population, compared with a median concentration of 1.03 ng/g lipid in breast milk in the present Norwegian study population, approximately equivalent to 0.7 ng/g in maternal serum. Furthermore, sub- or pre-clinical continuous measures of ADHD-related behaviors or traits may have greater statistical power to detect associations than case-control analyses. Our study had several strengths, including the prospective design, a relatively large sample size, information on a large number of potential confounders, and objective registry linkage for detection of ADHD cases. We simultaneously assessed the largest number of chemical exposures in breast milk to date in relation to ADHD diagnosis, reducing the potential for confounding by correlated co-exposures and for detection of false-positive associations for correlated yet non-causally associated exposures. We did not detect substantial correlations between chemical classes, and the co-adjustment model showed little confounding by other chemical classes for this outcome; this finding could aid the interpretation of other studies that focused on only one chemical class. We were able to assess early breast milk exposure, which for the lipophilic compounds is a good proxy for in utero exposure, as corroborated by our modeled child blood concentrations at birth. Furthermore, for the lipophilic compounds, we modeled postnatal exposure to 2 years of age, finding no additional effects from exposure during this period. Postnatal exposures are important because of the high rate of neurogenesis and synaptic pruning in the first 2 years of life, and, due to breastfeeding, the infant's concentrations can reach 6 times higher than
the mother's. The model assumes that all chemicals have the same half-life and ignores potential differences in partitioning kinetics between serum and breast milk; nevertheless, it substantially improved postnatal exposure assessment in a validation study. For three chemicals, correlations between modeled and measured levels in the child at 6 and 16 months of age were high, and performance is expected to be better in this study population, which has more detailed breastfeeding and weight-change data than the Slovak population used for the validation study. However, for PFASs, our early breast milk concentrations do not represent in utero exposure, as the composition and concentrations in breast milk differ substantially from those in maternal serum. PFASs measured in maternal serum would thus be required to fully disentangle possible prenatal and/or postnatal exposure effects of these chemicals. Our study had several limitations. As with any epidemiological study, we cannot exclude residual confounding bias or bias amplification due to unmeasured or mismeasured covariates, or misspecification of the causal structure. Adjusting for the study sample selection variables, which are also mediators, in M2 could have led to collider bias, although there was little evidence of this when comparing the results from the two adjustment models. We did not include several chemical classes of concern, including heavy metals, pyrethroids, and organophosphate pesticides; however, since the exposure sources and pharmacokinetics differ, the correlations between measured and unmeasured chemical classes are expected to be low, with minimal confounding bias. There is evidence that some micronutrients play a role in the etiology of ADHD, and these may also share dietary exposure sources with the chemicals in lipid-rich foods such as dairy and fish. This could theoretically introduce a negative bias, producing an apparent reduced risk of ADHD associated with POPs. However, adjustment for maternal fatty fish intake did not materially influence our results, suggesting limited residual confounding from this source. Statistical power differed between chemicals because the number of samples measured differed for some chemicals, in part due to the oversampling design. We also cannot rule out differential measurement error in the exposure data; however, we found limited evidence of batch effects across the chemical analysis laboratories. Furthermore, differing analytical precision for each chemical could contribute to non-differential measurement error; nevertheless, precision was comparable across the chemical classes. Imputing missing exposures had a negligible effect on the estimates, as the complete-case results were not materially different from those obtained using multiply imputed exposures. There may have been incomplete ascertainment of ADHD cases in the study population. If a child was diagnosed before 2008 and had no further contact with a specialist, the diagnosis would not be registered in the Norwegian Patient Registry, although this is expected to be rare. It is possible that some children presenting symptomatology suggestive of ADHD had not yet received a diagnosis at the time of linkage. The mean age at first diagnosis registered in the Norwegian Patient Registry was 8.38 years in a general population sample of children born 2000–2008. A total of 119 children were under the age of 10 years at the time of linkage in the present study population, and restricting analyses to children 10 years of age and older did not materially affect our results. The cumulative incidence of
ADHD reached 4.3% for those 14 years of age in the Norwegian Mother and Child Cohort Study, compared with 2.1% in the entire HUMIS cohort and 4.6% in the present study population. We were not able to evaluate different subtypes of ADHD, such as the hyperactive versus inattentive dimensions. Finally, we examined a large number of exposure–outcome associations, and these results should be interpreted with caution given the potential for chance findings. In a detailed assessment of 27 environmental chemicals in breast milk, early-life exposure to certain persistent organic pollutants was associated with the risk of ADHD. Specifically, we report a novel association with β-HCH that requires replication, and additional evidence of an effect of PFOS in females only. The apparent protective associations for p,p′-DDT and higher HCB levels may be due to live-birth bias, unmeasured residual confounding, or chance. Further studies, including studies designed to detect potential sex-specific effects and pooled analyses of cohorts to obtain sufficiently large sample sizes, are warranted to explore the potential neurotoxicity of a broader array of the chemical space. The authors declare that they have no actual or potential competing financial interests.
Background: Numerous ubiquitous environmental chemicals are established or suspected neurotoxicants, and infants are exposed to a mixture of these during the critical period of brain maturation. However, evidence for associations with the risk of attention-deficit/hyperactivity disorder (ADHD) is sparse. We investigated early-life chemical exposures in relation to ADHD. Methods: We used a birth cohort of 2606 Norwegian mother–child pairs enrolled 2002–2009 (HUMIS), and studied a subset of 1199 pairs oversampled for child neurodevelopmental outcomes. Concentrations of 27 persistent organic pollutants (14 polychlorinated biphenyls, 5 organochlorine pesticides, 6 brominated flame retardants, and 2 perfluoroalkyl substances) were measured in breast milk, reflecting the child's early-life exposures. We estimated postnatal exposures in the first 2 years of life using a pharmacokinetic model. Fifty-five children had a clinical diagnosis of ADHD (hyperkinetic disorder) by 2016, at a median age of 13 years. We used elastic net penalized logistic regression models to identify associations while adjusting for co-exposure confounding, and subsequently used multivariable logistic regression models to obtain effect estimates for the selected exposures. Results: Breast milk concentrations of perfluorooctane sulfonate (PFOS) and β‑hexachlorocyclohexane (β-HCH) were associated with increased odds of ADHD: odds ratio (OR) = 1.77, 95% confidence interval (CI): 1.16, 2.72 and OR = 1.75, 95% CI: 1.22, 2.53, per interquartile range increase in ln-transformed concentrations, respectively. Stronger associations were observed among girls than boys for PFOS (pinteraction = 0.025). p,p′‑Dichlorodiphenyltrichloroethane (p,p′-DDT) levels were associated with lower odds of ADHD (OR = 0.64, 95% CI: 0.42, 0.97). Hexachlorobenzene (HCB) had a non-linear association with ADHD, with increasing risk in the low-level exposure range that switched to a decreasing risk at concentrations above 8 ng/g lipid. Postnatal exposures showed similar results, whereas effect estimates for other chemicals were weaker and imprecise. Conclusions: In a multi-pollutant analysis of four classes of chemicals, early-life exposure to β-HCH and PFOS was associated with increased risk of ADHD, with suggestion of sex-specific effects for PFOS. The unexpected inverse associations between p,p′-DDT and higher HCB levels and ADHD could be due to live birth bias; alternatively, results may be due to chance findings.