Maternal pushing during the second stage of labour is an important and indispensable contributor to the involuntary expulsive force developed by uterine contractions. Currently, there is no consensus on an ideal strategy to facilitate these expulsive efforts, and there are contradictory results about the influence on mother and fetus. To evaluate the benefits and possible disadvantages of different techniques of maternal pushing/breathing during the expulsive stage of labour on maternal and fetal outcomes. We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (28 January 2015) and reference lists of retrieved studies. Randomised and quasi-randomised trials assessing the effects of pushing/bearing down techniques (type and/or timing) performed during the second stage of labour on maternal and neonatal outcomes. Cluster-RCTs were eligible for inclusion but none were identified. Studies using a cross-over design and those published in abstract form only were not eligible for inclusion. We considered the following comparisons. Timing of pushing: pushing that begins as soon as full dilatation has been determined versus pushing that begins after the urge to push is felt. Type of pushing: pushing techniques that involve the Valsalva manoeuvre versus all other pushing techniques. Two review authors independently assessed trials for inclusion and risk of bias. Two review authors independently extracted data. Data were checked for accuracy. We included 20 studies in total: seven studies (815 women) comparing spontaneous pushing versus directed pushing, with or without epidural analgesia, and 13 studies (2879 women) comparing delayed pushing versus immediate pushing with epidural analgesia. The results come from studies with a high or unclear risk of bias, especially selection bias and selective reporting bias.
Comparison 1: type of pushing: spontaneous pushing versus directed pushing. Overall, for this comparison there was no difference in the duration of the second stage (mean difference (MD) 11.60 minutes; 95% confidence interval (CI) -4.37 to 27.57, five studies, 598 women, random-effects, I² = 82%; Tau² = 220.06). There was no clear difference in perineal laceration (risk ratio (RR) 0.87; 95% CI 0.45 to 1.66, one study, 320 women) or episiotomy (average RR 1.05; 95% CI 0.60 to 1.85, two studies, 420 women, random-effects, I² = 81%; Tau² = 0.14). Primary neonatal outcomes showed no difference between spontaneous and directed pushing: a five-minute Apgar score of less than seven (RR 0.35; 95% CI 0.01 to 8.43, one study, 320 infants) and the number of admissions to neonatal intensive care (RR 1.08; 95% CI 0.30 to 3.79, two studies, n = 393) did not differ between groups, and no data were available on hypoxic ischaemic encephalopathy. The duration of pushing (a secondary maternal outcome) was five minutes less in the spontaneous group (MD -5.20 minutes; 95% CI -7.78 to -2.62, one study, 100 women). Comparison 2: timing of pushing: delayed pushing versus immediate pushing (all women with epidural). For the primary maternal outcomes, delayed pushing was associated with an increase of 54 minutes in the duration of the second stage of labour (MD 54.29 minutes; 95% CI 38.14 to 70.43; 10 studies, 2797 women, random-effects; I² = 91%; Tau² = 543.38), and there was no difference in perineal laceration (RR 0.94; 95% CI 0.78 to 1.14, seven studies, 2775 women) or episiotomy (RR 0.95; 95% CI 0.87 to 1.04, five studies, 2320 women).
Delayed pushing was also associated with a 20-minute decrease in the duration of pushing (MD -20.10; 95% CI -36.19 to -4.02, 10 studies, 2680 women, random-effects, I² = 96%; Tau² = 604.37) and an increase in spontaneous vaginal delivery (RR 1.07; 95% CI 1.03 to 1.11, 12 studies, 3114 women). For the primary neonatal outcomes, there was no difference between groups in admission to neonatal intensive care (RR 0.98; 95% CI 0.67 to 1.41, three studies, n = 2197) or a five-minute Apgar score of less than seven (RR 0.15; 95% CI 0.01 to 3.00, three studies, n = 413). There were no data on hypoxic ischaemic encephalopathy. Delayed pushing was associated with a greater incidence of low umbilical cord blood pH (RR 2.24; 95% CI 1.37 to 3.68) and increased the cost of intrapartum care by CDN$ 68.22 (MD 68.22, 95% CI 55.37 to 81.07, one study, 1862 women). This review is based on a total of 20 included studies of mixed methodological quality. The evidence on timing of pushing with epidural is consistent: delayed pushing shortens the actual time spent pushing and increases spontaneous vaginal delivery, at the expense of an overall longer duration of the second stage of labour and double the risk of a low umbilical cord pH (based on one study only). Nevertheless, there was no difference between delayed and immediate pushing in caesarean and instrumental deliveries, perineal laceration and episiotomy, or the other neonatal outcomes (admission to neonatal intensive care, five-minute Apgar score less than seven, and delivery room resuscitation).
Furthermore, the adverse effects on the maternal pelvic floor are still unclear. Therefore, there is insufficient evidence to justify routine use of any specific timing of pushing, since the maternal and neonatal benefits and adverse effects of delayed and immediate pushing are not well established. For the type of pushing, with or without epidural, there is no conclusive evidence to support or refute any specific style as part of routine clinical practice. Women should be encouraged to bear down based on their preferences and comfort. In the absence of strong evidence supporting a specific method or timing of pushing, patient preference and clinical situations should guide decisions. Further well-designed randomised controlled trials are required to add evidence-based information to the current knowledge. These trials should address clinically important maternal and neonatal outcomes and will provide more complete data to be incorporated into a future update of this review.
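The heterogeneity statistics quoted in the pooled estimates above (I² alongside Tau²) describe how much of the between-study variation exceeds chance. A minimal sketch of how I² is derived from Cochran's Q, where the Q value used below is a hypothetical input chosen only so the output matches the reported I² = 82% for the five-study duration-of-second-stage comparison:

```python
# Higgins' I-squared: the percentage of total variation across studies
# attributable to heterogeneity rather than chance.
# I^2 = max(0, (Q - df) / Q) * 100, with df = number of studies - 1.

def i_squared(q: float, n_studies: int) -> float:
    """Return I^2 (%) given Cochran's Q and the number of pooled studies."""
    df = n_studies - 1
    if q <= df:
        return 0.0  # no variation beyond what chance alone would produce
    return (q - df) / q * 100.0

# Hypothetical Q = 22.2 for 5 studies reproduces the review's I^2 = 82%.
print(round(i_squared(22.2, 5)))  # → 82
```

An I² of 82% or 96%, as reported for the duration outcomes, is conventionally read as considerable heterogeneity, which is why those comparisons were pooled with random-effects models.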
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Pressure ulcers, also known as pressure injuries and bed sores, are localised areas of injury to the skin or underlying tissues, or both. Dressings made from a variety of materials, including foam, are used to treat pressure ulcers. An evidence-based overview of dressings for pressure ulcers is needed to enable informed decision-making on dressing use. This review is part of a suite of Cochrane Reviews investigating the use of dressings in the treatment of pressure ulcers. Each review will focus on a particular dressing type. To assess the clinical and cost effectiveness of foam wound dressings for healing pressure ulcers in people with an existing pressure ulcer in any care setting. In February 2017 we searched: the Cochrane Wounds Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE (including In-Process & Other Non-Indexed Citations); Ovid Embase; EBSCO CINAHL Plus and the NHS Economic Evaluation Database (NHS EED). We also searched clinical trials registries for ongoing and unpublished studies, and scanned reference lists of relevant included studies as well as reviews, meta-analyses and health technology reports to identify additional studies. There were no restrictions with respect to language, date of publication or study setting. Published or unpublished randomised controlled trials (RCTs) and cluster-RCTs, that compared the clinical and cost effectiveness of foam wound dressings for healing pressure ulcers (Category/Stage II or above). Two review authors independently performed study selection, risk of bias and data extraction. A third reviewer resolved discrepancies between the review authors. We included nine trials with a total of 483 participants, all of whom were adults (59 years or older) with an existing pressure ulcer Category/Stage II or above.
All trials had two arms, which compared foam dressings with other dressings for treating pressure ulcers. The certainty of evidence ranged from low to very low due to various combinations of selection, performance, attrition, detection and reporting bias, and imprecision due to small sample sizes and wide confidence intervals. We had very little confidence in the estimate of effect of the included studies. Where a foam dressing was compared with another foam dressing, we established that the true effect was likely to be substantially different from the study's estimated effect. We present data for four comparisons. One trial compared a silicone foam dressing with another (hydropolymer) foam dressing (38 participants), with an eight-week (short-term) follow-up. It was uncertain whether alternate types of foam dressing affected the incidence of healed pressure ulcers (RR 0.89, 95% CI 0.45 to 1.75) or adverse events (RR 0.37, 95% CI 0.04 to 3.25), as the certainty of evidence was very low, downgraded for serious limitations in study design and very serious imprecision. Four trials with a median sample size of 20 participants (230 participants in total) compared foam dressings with hydrocolloid dressings for eight weeks or less (short-term). It was uncertain whether foam dressings affected the probability of healing in comparison to hydrocolloid dressings over a short follow-up period in three trials (RR 0.85, 95% CI 0.54 to 1.34), very low-certainty evidence, downgraded for very serious study limitations and serious imprecision. It was uncertain if there was a difference in risk of adverse events between groups (RR 0.88, 95% CI 0.37 to 2.11), very low-certainty evidence, downgraded for serious study limitations and very serious imprecision.
Reduction in ulcer size, patient satisfaction/acceptability, pain and cost effectiveness data were also reported, but we assessed the evidence as being of very low certainty. One trial (34 participants) compared foam and hydrogel dressings over an eight-week (short-term) follow-up. It was uncertain if the foam dressing affected the probability of healing (RR 1.00, 95% CI 0.78 to 1.28), time to complete healing (MD 5.67 days, 95% CI -4.03 to 15.37), adverse events (RR 0.33, 95% CI 0.01 to 7.65) or reduction in ulcer size (MD 0.30 cm² per day, 95% CI -0.15 to 0.75), as the certainty of the evidence was very low, downgraded for serious study limitations and very serious imprecision. The remaining three trials (181 participants) compared foam with basic wound contact dressings. Follow-up times ranged from short-term (8 weeks or less) to medium-term (8 to 24 weeks). It was uncertain whether foam dressings affected the probability of healing compared with basic wound contact dressings in the short term (RR 1.33, 95% CI 0.62 to 2.88) or medium term (RR 1.17, 95% CI 0.79 to 1.72), or affected time to complete healing in the medium term (MD -35.80 days, 95% CI -56.77 to -14.83), or adverse events in the medium term (RR 0.58, 95% CI 0.33 to 1.05). This was due to the very low-certainty evidence, downgraded for serious to very serious study limitations and imprecision. Reduction in ulcer size, patient satisfaction/acceptability, pain and cost effectiveness data were also reported but, again, we assessed the evidence as being of very low certainty. None of the included trials reported quality of life or pressure ulcer recurrence. It is uncertain whether foam dressings are more clinically effective, more acceptable to users, or more cost effective compared to alternative dressings in treating pressure ulcers.
It was difficult to make accurate comparisons between foam dressings and other dressings due to the lack of data on reduction of wound size, complete wound healing, treatment costs, or insufficient time-frames. Quality of life and patient (or carer) acceptability/satisfaction associated with foam dressings were not systematically measured in any of the included studies. We assessed the certainty of the evidence in the included trials as low to very low. Clinicians need to carefully consider the lack of robust evidence in relation to the clinical and cost-effectiveness of foam dressings for treating pressure ulcers when making treatment decisions, particularly when considering the wound management properties that may be offered by each dressing type and the care context.
Paracetamol (acetaminophen) is the most widely used non-prescription analgesic in the world. Paracetamol is commonly taken in overdose either deliberately or unintentionally. In high-income countries, paracetamol toxicity is a common cause of acute liver injury. There are various interventions to treat paracetamol poisoning, depending on the clinical status of the person. These interventions include inhibiting the absorption of paracetamol from the gastrointestinal tract (decontamination), removal of paracetamol from the vascular system, and antidotes to prevent the formation of, or to detoxify, metabolites. To assess the benefits and harms of interventions for paracetamol overdosage irrespective of the cause of the overdose. We searched The Cochrane Hepato-Biliary Group Controlled Trials Register (January 2017), CENTRAL (2016, Issue 11), MEDLINE (1946 to January 2017), Embase (1974 to January 2017), and Science Citation Index Expanded (1900 to January 2017). We also searched the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov database (US National Institute of Health) for any ongoing or completed trials (January 2017). We examined the reference lists of relevant papers identified by the search and other published reviews. Randomised clinical trials assessing benefits and harms of interventions in people who have ingested a paracetamol overdose. The interventions could have been gastric lavage, ipecacuanha, or activated charcoal, or various extracorporeal treatments, or antidotes. The interventions could have been compared with placebo, no intervention, or to each other in differing regimens. Two review authors independently extracted data from the included trials. We used fixed-effect and random-effects Peto odds ratios (OR) with 95% confidence intervals (CI) for analysis of the review outcomes. We used the Cochrane 'Risk of bias' tool to assess the risks of bias (i.e. 
systematic errors leading to overestimation of benefits and underestimation of harms). We used Trial Sequential Analysis to control risks of random errors (i.e. play of chance) and GRADE to assess the quality of the evidence, and constructed 'Summary of findings' tables using GRADE software. We identified 11 randomised clinical trials (of which one acetylcysteine trial was abandoned due to low numbers recruited), assessing several different interventions in 700 participants. The interventions studied included decontamination, extracorporeal measures, and antidotes to detoxify paracetamol's toxic metabolite; these included methionine, cysteamine, dimercaprol, and acetylcysteine. There were no randomised clinical trials of agents that inhibit cytochrome P-450 to decrease the activation of the toxic metabolite N-acetyl-p-benzoquinone imine. Of the 11 trials, only two had two common outcomes, and hence we could only meta-analyse two comparisons. Each of the remaining comparisons included outcome data from one trial only, and hence their results are presented as described in the trials. All trial analyses lacked the power to assess efficacy. Furthermore, all the trials were at high risk of bias. Accordingly, the quality of evidence was low or very low for all comparisons. Interventions that prevent absorption, such as gastric lavage, ipecacuanha, or activated charcoal, were compared with placebo or no intervention, and with each other, in one four-armed randomised clinical trial involving 60 participants with an uncertain randomisation procedure, and hence of very low quality. The trial presented results on lowering plasma paracetamol levels. Activated charcoal seemed to reduce the absorption of paracetamol, but the clinical benefits were unclear. Activated charcoal seemed to have the best risk:benefit ratio among gastric lavage, ipecacuanha, or supportive treatment if given within four hours of ingestion.
There seemed to be no difference between gastric lavage and ipecacuanha, but gastric lavage and ipecacuanha seemed more effective than no treatment (very low quality of evidence). Extracorporeal interventions included charcoal haemoperfusion compared with conventional treatment (supportive care including gastric lavage, intravenous fluids, and fresh frozen plasma) in one trial with 16 participants. The mean cumulative amount of paracetamol removed was 1.4 g. One participant from the haemoperfusion group, who had ingested 135 g of paracetamol, died. There were no deaths in the conventional treatment group. Accordingly, we found no benefit of charcoal haemoperfusion (very low quality of evidence). Acetylcysteine appeared superior to placebo and had fewer adverse effects when compared with dimercaprol or cysteamine. The superiority of acetylcysteine to methionine was unproven. One small trial (low quality evidence) found that acetylcysteine may reduce mortality in people with fulminant hepatic failure (Peto OR 0.29, 95% CI 0.09 to 0.94). The most recent randomised clinical trials studied different acetylcysteine regimens, with the primary outcome being adverse events. It was unclear which acetylcysteine treatment protocol offered the best efficacy, as most trials were underpowered to look at this outcome. One trial showed that a modified 12-hour acetylcysteine regimen with a two-hour acetylcysteine 100 mg/kg bodyweight loading dose was associated with significantly fewer adverse reactions compared with the traditional three-bag 20.25-hour regimen (low quality of evidence). All Trial Sequential Analyses showed a lack of sufficient power. Children were not included in the majority of trials; hence, the evidence pertains only to adults. These results highlight the paucity of randomised clinical trials comparing different interventions for paracetamol overdose and their routes of administration, and the low or very low quality of the evidence that is available.
Evidence from a single trial found that activated charcoal seemed the best choice to reduce absorption of paracetamol. Acetylcysteine should be given to people at risk of toxicity, including people presenting with liver failure. Further randomised clinical trials with low risk of bias and adequate numbers of participants are required to determine which regimen results in the fewest adverse effects with the best efficacy. Current management of paracetamol poisoning worldwide involves the administration of intravenous or oral acetylcysteine, which is based mainly on observational studies. Results from these observational studies indicate that treatment with acetylcysteine seems to result in a decrease in morbidity and mortality. However, further evidence from randomised clinical trials comparing different treatments is needed.
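The review pooled dichotomous outcomes with Peto odds ratios (e.g. the Peto OR 0.29 for mortality with acetylcysteine in fulminant hepatic failure). A minimal sketch of the one-step Peto method for a single 2x2 table, using purely illustrative counts rather than data from any included trial:

```python
import math

def peto_odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Peto (one-step) odds ratio for a 2x2 table:
    a = events in treatment, b = non-events in treatment,
    c = events in control,  d = non-events in control."""
    n1, n2 = a + b, c + d                 # group sizes
    n = n1 + n2                           # total participants
    m1, m2 = a + c, b + d                 # event / non-event margins
    expected = n1 * m1 / n                # expected treatment events under H0
    variance = n1 * n2 * m1 * m2 / (n ** 2 * (n - 1))  # hypergeometric variance
    return math.exp((a - expected) / variance)

# Illustrative (hypothetical) counts: 3/10 deaths on treatment vs 8/10 on control.
print(round(peto_odds_ratio(3, 7, 8, 2), 2))  # → 0.15
```

The Peto method approximates the odds ratio well for balanced groups and modest effects, and it tolerates zero cells, which is one reason it is favoured for rare-event outcomes like those in this review.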
Different bone-modifying agents like bisphosphonates and receptor activator of nuclear factor-kappa B ligand (RANKL)-inhibitors are used as supportive treatment in men with prostate cancer and bone metastases to prevent skeletal-related events (SREs). SREs such as pathologic fractures, spinal cord compression, surgery and radiotherapy to the bone, and hypercalcemia lead to morbidity, a poor performance status, and impaired quality of life. The efficacy and acceptability of bone-targeted therapy are therefore of high relevance. Until now, recommendations in guidelines on which bone-modifying agents should be used have been rare and inconsistent. To assess the effects of bisphosphonates and RANKL-inhibitors as supportive treatment for prostate cancer patients with bone metastases, and to generate a clinically meaningful treatment ranking according to their safety and efficacy using network meta-analysis. We identified studies by electronically searching the bibliographic databases Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, and Embase until 23 March 2020. We searched the Cochrane Library and various trial registries and screened abstracts of conference proceedings and reference lists of identified trials. We included randomized controlled trials comparing different bisphosphonates and RANKL-inhibitors with each other or against no further treatment or placebo for men with prostate cancer and bone metastases. We included men with castration-resistant and castration-sensitive prostate cancer and conducted subgroup analyses according to these criteria. Two review authors independently extracted data and assessed the quality of trials. We defined the proportion of participants with pain response and the adverse events renal impairment and osteonecrosis of the jaw (ONJ) as the primary outcomes.
Secondary outcomes were SREs in total and each separately (see above), mortality, quality of life, and further adverse events such as grade 3 to 4 adverse events, hypocalcemia, fatigue, diarrhea, and nausea. We conducted network meta-analysis and generated treatment rankings for all outcomes except quality of life, due to insufficient reporting on this outcome. We compiled ranking plots to compare single outcomes of efficacy against outcomes of acceptability of the bone-modifying agents. We assessed the certainty of the evidence for the main outcomes using the GRADE approach. Twenty-five trials fulfilled our inclusion criteria. Twenty-one trials could be considered in the quantitative analysis, in which six bisphosphonates (zoledronic acid, risedronate, pamidronate, alendronate, etidronate, or clodronate) were compared with each other, the RANKL-inhibitor denosumab, or no treatment/placebo. By conducting network meta-analysis we were able to compare all of these reported agents directly and/or indirectly within the network for each outcome. In the abstract, only the comparisons of zoledronic acid and denosumab against the main comparator (no treatment/placebo) are described, for outcomes that were predefined as most relevant and that also appear in the 'Summary of findings' table. Other results, as well as results of subgroup analyses regarding the castration status of participants, are displayed in the Results section of the full text. Treatment with zoledronic acid probably neither reduces nor increases the proportion of participants with pain response when compared to no treatment/placebo (risk ratio (RR) 1.46, 95% confidence interval (CI) 0.93 to 2.32; per 1000 participants 121 more (19 less to 349 more); moderate-certainty evidence; network based on 4 trials including 1013 participants). For this outcome none of the trials reported results for the comparison with denosumab.
The adverse event renal impairment probably occurs more often when treated with zoledronic acid compared to no treatment/placebo (RR 1.63, 95% CI 1.08 to 2.45; per 1000 participants 78 more (10 more to 180 more); moderate-certainty evidence; network based on 6 trials including 1769 participants). Results for denosumab could not be included for this outcome, since zero events cannot be considered in the network meta-analysis, therefore it does not appear in the ranking. Treatment with denosumab results in increased occurrence of the adverse event ONJ (RR 3.45, 95% CI 1.06 to 11.24; per 1000 participants 30 more (1 more to 125 more); high-certainty evidence; 4 trials, 3006 participants) compared to no treatment/placebo. When comparing zoledronic acid to no treatment/placebo, the confidence intervals include the possibility of benefit or harm, therefore treatment with zoledronic acid probably neither reduces nor increases ONJ (RR 1.88, 95% CI 0.73 to 4.87; per 1000 participants 11 more (3 less to 47 more); moderate-certainty evidence; network based on 4 trials including 3006 participants). Compared to no treatment/placebo, treatment with zoledronic acid (RR 0.84, 95% CI 0.72 to 0.97) and denosumab (RR 0.72, 95% CI 0.54 to 0.96) may result in a reduction of the total number of SREs (per 1000 participants 75 fewer (131 fewer to 14 fewer) and 131 fewer (215 fewer to 19 fewer); both low-certainty evidence; 12 trials, 5240 participants). Treatment with zoledronic acid and denosumab likely neither reduces nor increases mortality when compared to no treatment/placebo (zoledronic acid RR 0.90, 95% CI 0.80 to 1.01; per 1000 participants 48 fewer (97 fewer to 5 more); denosumab RR 0.93, 95% CI 0.77 to 1.11; per 1000 participants 34 fewer (111 fewer to 54 more); both moderate-certainty evidence; 13 trials, 5494 participants). Due to insufficient reporting, no network meta-analysis was possible for the outcome quality of life.
One study with 1904 participants comparing zoledronic acid and denosumab showed that more zoledronic acid-treated participants than denosumab-treated participants experienced a greater than or equal to five-point decrease in Functional Assessment of Cancer Therapy-General total scores over a range of 18 months (average relative difference = 6.8%, range -9.4% to 14.6%) or worsening of cancer-related quality of life. When considering bone-modifying agents as supportive treatment, one has to balance efficacy against acceptability. Results suggest that zoledronic acid likely increases both the proportion of participants with pain response and the proportion of participants experiencing adverse events. However, more trials with head-to-head comparisons including all potential agents are needed to draw the whole picture and confirm the results of this analysis.
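The "per 1000 participants" figures in the abstract are GRADE-style absolute effects, derived from each risk ratio and an assumed control-group (no treatment/placebo) risk. A minimal sketch of that arithmetic, where the ~12.4% control-group risk for renal impairment is our own back-calculated assumption, chosen only because it reproduces the reported figures:

```python
def per_1000(rr: float, control_risk: float) -> int:
    """Absolute effect per 1000 participants implied by a risk ratio
    and an assumed control-group risk (both treated as point values)."""
    return round(control_risk * (rr - 1.0) * 1000)

# Renal impairment with zoledronic acid: RR 1.63 (95% CI 1.08 to 2.45).
# An assumed control-group risk of ~12.4% (our back-calculation, not a
# figure stated in the abstract) reproduces the reported
# "78 more (10 more to 180 more) per 1000 participants".
assumed_control_risk = 0.124
print(per_1000(1.63, assumed_control_risk))  # → 78  (point estimate)
print(per_1000(1.08, assumed_control_risk))  # → 10  (lower CI bound)
print(per_1000(2.45, assumed_control_risk))  # → 180 (upper CI bound)
```

Applying the same transformation to the CI bounds, as above, is how a single risk ratio with its interval becomes a "more/fewer per 1000" range in a 'Summary of findings' table.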
Molecular Biology, a branch of science established to examine the flow of information from "letters" encrypted into DNA structure to functional proteins, was initially defined by a concept of DNA-to-RNA-to-Protein information movement, a notion termed the Central Dogma of Molecular Biology. RNA-dependent mRNA amplification, a novel mode of eukaryotic protein-encoding RNA-to-RNA-to-Protein genomic information transfer, constitutes the extension of the Central Dogma in the context of mammalian cells. It was shown to occur in cellular circumstances requiring exceptionally high levels of production of specific polypeptides, e.g. globin chains during erythroid differentiation or defined secreted proteins in the context of extracellular matrix deposition. Its potency is reflected in the observed cellular levels of the resulting amplified mRNA product: At the peak of the erythroid differentiation, for example, the amount of globin mRNA produced in the amplification pathway is about 1500-fold higher than the amount of its conventionally generated counterpart in the same cells. The cellular enzymatic machinery at the core of this process, RNA-dependent RNA polymerase activity (RdRp), albeit in a non-conventional form, was shown to be constitutively and ubiquitously present, and RNA-dependent RNA synthesis (RdRs) appeared to regularly occur, in mammalian cells. Under most circumstances, the mammalian RdRp activity produces only short antisense RNA transcripts. Generation of complete antisense RNA transcripts and amplification of mRNA molecules require the activation of inducible components of the mammalian RdRp complex. The mechanism of such activation is not clear. The present article suggests that it is triggered by a variety of cellular stresses and occurs in the context of stress responses in general and within the framework of the integrated stress response (ISR) in particular. 
In this process, various cellular stresses activate, in a stress type-specific manner, defined members of the mammalian translation initiation factor 2α, eIF2α, kinase family: PKR, GCN2, PERK and HRI. Any of these kinases, in an activated form, phosphorylates eIF2α. This results in suppression of global cellular protein synthesis but also in activation of expression of select group of transcription factors including ATF4, ATF5 and CHOP. These transcription factors either function as inducible components of the RdRp complex or enable their expression. The assembly of the competent RdRp complex activates mammalian RNA-dependent mRNA amplification, which appears to be a two-tier process. Tier One is a "chimeric" pathway, named so because it results in an amplified chimeric mRNA molecule containing a fragment of the antisense RNA strand at its 5' terminus. Tier Two further amplifies one of the two RNA end products of the chimeric pathway and constitutes the physiologically occurring intracellular polymerase chain reaction, iPCR. Depending on the structure of the initial mRNA amplification progenitor, the chimeric pathway, Tier One, may result in multiple outcomes including chimeric mRNA that produces either a polypeptide identical to the original, conventional mRNA progenitor-encoded protein or only its C-terminal fragment, CTF. The chimeric RNA end product of Tier One may also produce a polypeptide that is non-contiguously encoded in the genome, activate translation from an open reading frame, which is "silent" in a conventionally transcribed mRNA, or initiate an abortive translation. In sharp contrast, regardless of the outcome of Tier One, the mRNA end product of Tier Two of mammalian mRNA amplification, the iPCR pathway, always produces a polypeptide identical to a conventional mRNA progenitor-encoded protein. This discordance is referred to as the Two-Tier Paradox and discussed in detail in the present article. 
On the other hand, both Tiers are similar in that they result in heavily modified mRNA molecules resistant to reverse transcription, undetectable by reverse transcription-based methods of sequencing and therefore constituting a proverbial "Dark Matter" mRNA, despite being highly ubiquitous. It appears that in addition to their other functions, the modifications of the amplified mRNA render it compatible, unlike the bulk of cellular mRNA, with phosphorylated eIF2α in translation, implying that in addition to being extraordinarily abundant due to the method of its generation, amplified mRNA is also preferentially translated under the ISR conditions, thus augmenting the efficiency of the amplification process. The vital importance of powerful mechanisms of amplification of protein-encoding genomic information in normal physiology is self-evident. Their malfunctions or misuse appear to be associated with two types of abnormalities, the deficiency of a protein normally produced by these mechanisms and the mRNA amplification-mediated overproduction of a protein normally not generated by such a process. Certain classes of beta-thalassemia exemplify the first type, whereas the second type is represented by overproduction of beta-amyloid in Alzheimer's disease. Moreover, the proposed mechanism of Alzheimer's disease allows a crucial and verifiable prediction, namely that the disease-causing intraneuronally retained variant of beta-amyloid differs from that produced conventionally by βAPP proteolysis in that it contains the additional methionine or acetylated methionine at its N-terminus. Because of its extraordinary evidential value as a natural reporter of the mRNA amplification pathway, this feature, if proven, would, arguably, constitute the proverbial Holy Grail not only for Alzheimer's disease but also for the mammalian RNA-dependent mRNA amplification field in general. 
Both examples are discussed in detail in the present article, which summarizes and systematizes our current understanding of the field and describes two categories of reporter constructs, one for the chimeric Tier of mRNA amplification, another for the iPCR pathway; both reporter types are essential for elucidating underlying molecular mechanisms. It also suggests, in light of the recently demonstrated feasibility of RNA-based vaccines, that the targeted intracellular amplification of exogenously introduced amplification-eligible antigen-encoding mRNAs via the induced or naturally occurring RNA-dependent mRNA amplification pathway could be of substantial benefit in triggering a fast and potent immune response and instrumental in the development of future vaccines. Similar approaches can also be effective in achieving efficient and sustained expression of exogenous mRNA in mRNA therapeutics.
Anal squamous cell carcinoma is rare, in general, but considerably higher in HIV-infected men who have sex with men. There is no consensus on the screening of at-risk populations. This study aimed to determine the incidence rates of anal squamous cell carcinoma and the efficacy of a screening program. This is a cohort study (SeVIHanal/NCT03713229). This study was conducted at an HIV outpatient clinic in Seville, Spain. From 2004 to 2017, all patients with at least 1 follow-up visit were analyzed (follow-up group), including a subgroup of men who have sex with men who participated in a specialized program for screening and treating anal neoplasia (SCAN group) from 2011 onward. The primary outcome measure was the incidence rate of anal squamous cell carcinoma. Of the 3878 people living with HIV included in the follow-up group, 897 were transferred to the SCAN group; 1584 (41%) were men who have sex with men. Total follow-up was 29,228 person-years with an overall incidence rate for anal squamous cell carcinoma of 68.4/100,000 person-years (95% CI, 46.7-97.4). The change in the incidence rate/100,000 person-years (95% CI) over time was 20.7 (3.40-80.5) for 2004 to 2006, 37.3 (13.4-87.3) for 2007 to 2010, and 97.8 (63.8-144.9) for 2011 to 2017 (p &lt; 0.001). The strongest impact on the incidence of anal squamous cell carcinoma was made by the lack of immune restoration (adjusted incidence rate ratio (95% CI): 6.59 (4.24-10); p &lt; 0.001), the Centers for Disease Control and Prevention category C (adjusted incidence rate ratio (95% CI): 7.49 (5.69-9.85); p &lt; 0.001), and non-men who have sex with men (adjusted incidence rate ratio (95% CI): 0.07 (0.05-0.10); p &lt; 0.001) in a Poisson analysis.
From 2010 to 2017, incidence rates (95% CI) of anal squamous cell carcinoma within the SCAN group and the men who have sex with men of the follow-up group were 95.7 (39.6-202) and 201 (101-386)/100,000 person-years (adjusted incidence rate ratio (95% CI): 0.30 (0.23-0.39); p&lt;0.001). The incidence rate ratio (95% CI) including non-men who have sex with men in the follow-up group was 0.87 (0.69-1.11); p = 0.269. Adherence to the visits could not be quantified. Incidence rates of anal squamous cell carcinoma in people living with HIV increased significantly from 2004 to 2017, especially in men who have sex with men who were not being screened. Participation in the SCAN program significantly reduced the incidence of anal squamous cell carcinoma in men who have sex with men, in whom focus should be placed, especially on those presenting with Centers for Disease Control and Prevention category C and advanced immune suppression. See Video Abstract at http://links.lww.com/DCR/B734.
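The crude incidence rates reported above follow directly from case counts and accumulated person-time. As a rough illustrative sketch (not the study's analysis code — the case count of 20 is back-calculated from the reported 68.4/100,000 over 29,228 person-years, and the function names are our own):

```python
def incidence_rate(cases: int, person_years: float, per: float = 100_000) -> float:
    """Crude incidence rate, rescaled to `per` person-years of follow-up."""
    return cases / person_years * per

def incidence_rate_ratio(cases_a, py_a, cases_b, py_b) -> float:
    """Ratio of two crude incidence rates (unadjusted)."""
    return incidence_rate(cases_a, py_a) / incidence_rate(cases_b, py_b)

# 20 cases over 29,228 person-years reproduces the reported overall rate:
print(round(incidence_rate(20, 29_228), 1))  # 68.4
```

The adjusted ratios in the abstract additionally control for covariates via Poisson regression; the crude ratio above is only the starting point for that analysis.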
Alzheimer's disease is the most common cause of dementia in older people accounting for some 60% of cases with late-onset cognitive deterioration. It is now thought that several neurotransmitter dysfunctions are involved from an early stage in the pathogenesis of Alzheimer's disease-associated cognitive decline. The efficacy of selegiline for symptoms of Alzheimer's disease remains controversial and is reflected by its low rate of prescription and the lack of approval by several regulatory authorities in Europe and elsewhere. Reasons for this uncertainty involve the modest overall effects observed in some trials, the lack of benefit observed in several trials, the use of cross-over designs which harbour methodological problems in a disease like dementia and the difficulty in interpreting results from trials when a variety of measurement scales are used to assess outcomes. The objective of this review is to assess whether or not selegiline improves the well-being of patients with Alzheimer's disease. The Cochrane Dementia and Cognitive Impairment Group Register of Clinical Trials was searched using the terms 'selegiline', 'l-deprenyl', 'eldepryl' and 'monoamine oxidase inhibitor-B'. MEDLINE, PsycLIT and EMBASE electronic databases were searched with the above terms in addition to using the group strategy (see group details) to limit the searches to randomised controlled trials. All unconfounded, double-blind, randomised controlled trials in which treatment with selegiline was administered for more than a day and compared to placebo in patients with dementia. An individual patient data meta-analysis of selegiline (Wilcock 2002) provides much of the data that are available for this review. Seven studies provided individual patient data and this was pooled with summary statistics from the published papers of the other nine studies.
Where possible, intention-to-treat data were used but usually the meta-analyses were restricted to completers' data (data on people who completed the study). There are 17 included trials. There were very few significant treatment effects and these were all in favour of selegiline; cognition at 4-6 weeks and 8-17 weeks, and activities of daily living at 4-6 weeks. There is little evidence of adverse effects caused by selegiline, and few withdrew from trials, apart from the Sano trial. The analyses were conducted on data available. There was no attempt to correct for missing patients because there were so few and withdrawal was probably unconnected with treatment. All trials examined the cognitive effects of selegiline, and in addition 12 trials examined the behavioural and mood effects. The meta-analysis revealed benefits on memory function, shown by improvement in the memory tests from several cognitive tests (the Randt Memory Index from Agnoli 1990 and Agnoli 1992, the BSRT from Sunderland 1992, prose recall from Filip 1991, ADAS-cog from Lawlor 1997, the Wechsler Memory Scale from Loeb 1990 and Mangoni 1991, the Rey-AVL from Piccinin 1990, and the MMSE from Sano 1995, Tariot 1998, Filip 1991, Freedman 1996, Burke 1993 and Riekkinen 1993). The combined memory tests, and overall the combined cognitive tests, analysed using standardised mean differences, showed an improvement due to selegiline compared with placebo at 4-6 weeks (SMD 0.39, 95% CI 0.07 to 0.72, P = 0.02, random effects model) and 8-17 weeks (SMD 0.44, 95% CI 0.04 to 0.84, P = 0.03, random effects model). The meta-analyses of emotional state show no treatment effects. Several studies assessed activities of daily living using several different scales, the GBS-motor function from Agnoli 1990, the NOSIE-daily living from Filip 1991, the BDS-daily living from Loeb 1990 and Mangoni 1991, the DS from Sano 1995 and PIADL from Tariot 1998.
The combined tests, analysed using the standardised mean difference, showed an improvement due to selegiline at 4-6 weeks (SMD -0.27, 95% CI -0.41 to -0.13, P &lt; 0.001). The global rating scales, the BDS used by Burke 1993 and Tariot 1998, and the GBS used by Agnoli 1990 and Agnoli 1992, and the GDS used by Freedman 1996 and the CGI by Filip 1991, analysed using standardised mean differences showed no effect of selegiline. A variety of adverse effects were recorded, but very few patients left a trial as a direct result. Four studies reported no side effects. Mangoni 1991 reported poor tolerability for 3 patients out of 68 on treatment and 1 out of 51 on placebo, resulting in dropouts. Small numbers found equally in both groups reported anxiety, agitation, dizziness, nausea and dyspepsia. Piccinin 1990 reported that selegiline was well tolerated with few adverse reactions (dizziness and orthostatic hypotension) and no resulting dropouts. Burke 1993 and Loeb 1990 both reported that selegiline was very well tolerated with no serious side effects. Sano 1995 reported 49 categories of adverse events but found no differences between the 4 arms of the factorial trial. Freedman 1996 reported unequal numbers of dropouts in the trial with 7 subjects withdrawing from the selegiline group and only 1 subject from the placebo group. The meta-analyses of the numbers suffering adverse effects, and of the numbers of withdrawals before the end of the trial show no difference between control and selegiline. Despite its initial promise, ie the potential neuroprotective properties, and its role in the treatment of Parkinson's disease sufferers, selegiline for Alzheimer's disease has proved disappointing. Although there is no evidence of a significant adverse event profile, there is also no evidence of a clinically meaningful benefit for Alzheimer's disease sufferers.
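The standardised mean differences pooled above put different rating scales on a common footing by dividing the between-group mean difference by a pooled standard deviation. A minimal sketch of that calculation (Cohen's d form), using purely hypothetical group summaries rather than data from any included trial:

```python
from math import sqrt

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference (Cohen's d) from group summary
    statistics, using the pooled standard deviation of both groups."""
    pooled_sd = sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical memory-test summaries (treatment vs placebo, illustrative only):
print(round(smd(24.0, 5.0, 30, 22.0, 5.0, 30), 2))  # 0.4
```

Because an SMD is unitless, results from the Randt Memory Index, BSRT, MMSE and so on can be combined in one meta-analysis, as done in the review.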
This is true irrespective of the outcome measure evaluated, ie cognition, emotional state, activities of daily living, and global assessment, whether in the short, or longer term (up to 69 weeks), where this has been assessed. There would seem to be no justification, therefore, to use it in the treatment of people with Alzheimer's disease, nor for any further studies of its efficacy in Alzheimer's disease.
To evaluate three technologies for the management of advanced colorectal cancer: (1) first-line irinotecan combination [with 5-fluorouracil (5-FU)] or second-line monotherapy; (2) first- or second-line oxaliplatin combination (again, with 5-FU); and (3) raltitrexed, where 5-FU is inappropriate. To examine the role of irinotecan and oxaliplatin in reducing the extent of incurable disease before curative surgery (downstaging). Ten electronic bibliographic databases covering the period up to August 2004. Searches identified existing studies of the effectiveness and economics of the technologies; any studies that evaluated any of the indications outlined above were included. Data were extracted and generic components of methodological quality were assessed. Survival outcomes were meta-analysed. Seventeen trials were found, of varying methodological quality. Compared with 5-FU, first-line irinotecan improved overall survival (OS) by 2-4 months (p=0.0007), progression-free survival (PFS) by 2-3 months (p&lt;0.00001) and response rates (p&lt;0.001). It offered a different toxicity profile and no quality of life (QoL) advantage. However, second-line irinotecan compared with 5-FU improved OS by 2 months (p=0.035) and PFS by 1 month (p=0.03), and provided a better partial response rate, but with more toxicities and no QoL advantage. Compared with second-line best supportive care, irinotecan improved OS by 2 months (p=0.0001), had a different toxicity profile and maintained baseline QoL longer, but with no overall difference. The addition of oxaliplatin to second-line 5-FU is associated with a borderline significant improvement in overall survival (p&lt;0.07); a significantly higher response rate (p&lt;0.0001); and more serious toxicities. There is no evidence for a significant difference in QoL. Schedules with treatment breaks may not reduce clinical effectiveness but reduce toxicity.
The addition of oxaliplatin to second-line 5-FU likewise showed no statistically significant improvement in OS (p&lt;0.07), but better PFS (by 2.1 months, p=0.0001), an 8.9% higher response rate (p&lt;0.0001), more toxicities and no QoL advantage. There was no significant difference in OS or PFS between first-line irinotecan and oxaliplatin combinations except when 5-FU was delivered by bolus injection, when oxaliplatin provided better OS (p=0.032) and response rates (p=0.032), but not PFS (p=0.169). The regimens had different toxicity profiles and neither conferred a QoL advantage. When compared to 5-FU, raltitrexed is associated with no significant difference in overall or progression-free survival; no significant difference in response rates; more vomiting and nausea, but less diarrhoea and mucositis; and no significant difference in, or worse, QoL. Raltitrexed treatment was cut short in two out of four included trials due to excess toxic deaths. 5-FU followed by irinotecan was inferior to any other sequence. First-line irinotecan/5-FU combination improved OS and PFS, although further unplanned therapy exaggerated the OS effect size. Staged combination therapy (combination oxaliplatin followed by combination irinotecan or vice versa) provided the best OS and PFS, although there was no head-to-head comparison against other treatment plans. In the only trial to use three active chemotherapies in any staged combination, median OS was over 20 months. In another study, the longest median OS from a treatment plan using two active agents was 16.2 months. Where irinotecan or oxaliplatin were used with 5-FU to downstage people with unresectable liver metastases, studies consistently showed response rates of around 50%. Resection rates ranged from 9 to 35% with irinotecan and from 7 to 51% with oxaliplatin. In the one study that compared the regimens, oxaliplatin enabled more resections (p=0.02). Five-year OS rates of 5-26% and disease-free survival rates of 3-11% were reported in studies using oxaliplatin.
Alone or in combination, 5-FU was more effective and less toxic when delivered by continuous infusion. Existing economic models were weak because of the use of unplanned second-line therapies in their trial data: the survival benefits in patients on such trials cannot be uniquely attributed to the allocated therapy. Consequently, the economic analyses are either limited to the use of PFS (at best, a surrogate outcome) or are subject to confounding. Weaknesses in cost components, the absence of direct in-trial utility estimates and the limited use of sensitivity analysis were identified. Improvements to the methodologies used in existing economic studies are presented. Using data from two trials that planned treatment sequences, an independent economic evaluation of six plans compared with first-line 5-FU followed on progression by second-line irinotecan monotherapy (NHS standard treatment) is presented. 5-FU followed on progression by irinotecan combination cost 13,174 pounds per life-year gained (LYG) and 10,338 pounds per quality-adjusted life-year (QALY) gained. Irinotecan combination followed on progression by additional second-line therapies was estimated to cost 12,418 pounds per LYG and 13,630 pounds per QALY gained. 5-FU followed on progression by oxaliplatin combination was estimated to cost 23,786 pounds per LYG and 31,556 pounds per QALY gained. Oxaliplatin combination followed on progression by additional second-line therapies was estimated to cost 43,531 pounds per LYG and 67,662 pounds per QALY gained. Evaluations presented in this paragraph should be interpreted with caution owing to missing information on the costs of salvage therapies in the trial from which data were drawn. Irinotecan combination followed on progression by oxaliplatin combination cost 12,761 pounds per LYG and 16,663 pounds per QALY gained. Oxaliplatin combination followed on progression by irinotecan combination cost 16,776 pounds per LYG and 21,845 pounds per QALY gained.
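The cost-per-LYG and cost-per-QALY figures above are incremental cost-effectiveness ratios: the extra cost of one treatment plan over its comparator divided by the extra benefit gained. A minimal sketch with hypothetical inputs (none of the numbers below are taken from the trials above):

```python
def icer(cost_new: float, cost_std: float, effect_new: float, effect_std: float) -> float:
    """Incremental cost-effectiveness ratio: additional cost per
    additional unit of effect (life-year or QALY) versus the comparator."""
    return (cost_new - cost_std) / (effect_new - effect_std)

# Hypothetical plan costing 18,000 pounds vs 8,000 pounds, gaining
# 2.00 vs 1.25 life-years (illustrative values only):
print(round(icer(18_000, 8_000, 2.00, 1.25)))  # 13333
```

The same ratio computed with QALYs instead of life-years gives the cost-per-QALY figures; a funding body then compares the result against its willingness-to-pay threshold.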
The evaluation suggests that these two sequences have a cost-effectiveness profile that is favourable in comparison to other therapies currently funded by the NHS. However, the differences in OS observed between the two trials from which data were taken may be a result of heterogeneous patient populations, unbalanced protocol-driven intensity biases or other differences between underlying health service delivery systems. Treatment with three active therapies appears most clinically effective and cost-effective. NHS routine data could be used to validate downstaging findings and a meta-analysis using individual patient-level data is suggested to validate the optimal treatment sequence.
Genistein is a naturally occurring isoflavone that interacts with estrogen receptors and multiple other molecular targets. Human exposure to genistein is predominantly through consumption of soy products, including soy-based infant formula and dietary supplements. A series of short-term studies with genistein was conducted with two goals: 1) to obtain data necessary to establish dose levels for subsequent multigeneration reproductive and chronic toxicity studies and 2) to evaluate the effects of genistein on endpoints outside the reproductive tract. The data generated from these studies have been reported previously in the peer-reviewed literature or in technical reports (Appendix C). In addition, selected data from these studies were analyzed and discussed in the National Toxicology Program's Report of the Endocrine Disruptors Low-Dose Peer Review (NTP, 2001). The present report focuses on the reproductive and general toxicology endpoints evaluated. Data obtained in separate evaluations of behavioral, neuroanatomical, neurochemical, and immunological endpoints, as well as the assessment of serum genistein levels, are also discussed to put in better perspective the selection of doses for the multigenerational and chronic studies. Genistein was administered in an irradiated soy- and alfalfa-free diet (Purina 5K96) at exposure concentrations of 0, 5, 25, 100, 250, 625, or 1,250 ppm to 10 vaginal plug-positive, female Sprague-Dawley rats starting on gestation day 7 and continuing throughout pregnancy. These dietary exposure concentrations resulted in ingested doses of approximately 0.3, 1.7, 6.4, 16, 38, and 72 mg genistein/kg body weight to dams in the 5, 25, 100, 250, 625, and 1,250 ppm groups, respectively. Dietary exposure of the dams continued through lactation, during which time ingested doses were approximately 0.6, 3.5, 14, 37, 84, and 167 mg/kg per day. 
Pups from five litters, culled to eight per litter with an equal sex distribution on postnatal day (PND) 2, were maintained on the same dosed feed as their mothers after weaning until sacrifice at PND 50. Ingested doses were approximately 0.6, 3, 11, 29, 69, and 166 mg/kg per day for male pups and 0.6, 3, 12, 31, 73, and 166 mg/kg per day for female pups. Body weight and feed consumption of the treated dams prior to parturition showed decreasing trends with increasing dose, and both parameters were significantly less than those of the controls in the 1,250 ppm group. A significant exposure concentration-related effect on litter birth weight was observed, but no exposed group differed significantly from the control group in pairwise comparisons. Pups in the 1,250 ppm group had significantly decreased body weights relative to controls at the time of sacrifice (males, 9% decrease; females, 12% decrease). The most pronounced organ weight effects in the pups were decreased ventral prostate weight (absolute weight, 28% decrease; relative weight, 20% decrease) in males at 1,250 ppm and a trend toward higher pituitary gland to body weight ratios in both sexes. Histopathologic examination of female pups revealed ductal/alveolar hyperplasia of the mammary glands at exposure concentrations greater than 250 ppm. Ductal/alveolar hyperplasia and hypertrophy also occurred in males, with significant effects seen at exposure concentrations of 25 ppm or greater for hypertrophy and 250 ppm or greater for hyperplasia. Abnormal cellular maturation (mucocyte metaplasia) in the vagina was observed at 625 and 1,250 ppm, and abnormal ovarian antral follicles were observed at 1,250 ppm. In males, aberrant or delayed spermatogenesis in the seminiferous tubules relative to controls was observed at 1,250 ppm. 
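The mapping from dietary exposure concentrations (ppm, i.e. mg genistein per kg feed) to the ingested doses quoted above depends on daily feed intake and body weight. A minimal sketch, assuming an illustrative intake and body weight (the report gives the resulting doses, not these intermediate values, so the inputs below are our assumptions):

```python
def dose_mg_per_kg(ppm_in_feed: float, feed_intake_g: float, body_weight_g: float) -> float:
    """Ingested dose (mg compound per kg body weight per day) from a
    dietary concentration in ppm (mg compound per kg feed)."""
    feed_kg = feed_intake_g / 1000.0
    bw_kg = body_weight_g / 1000.0
    return ppm_in_feed * feed_kg / bw_kg

# With an assumed intake of 17.3 g feed/day for a 300 g dam, the
# 1,250 ppm diet gives roughly the ~72 mg/kg per day cited for gestation:
print(round(dose_mg_per_kg(1250, 17.3, 300)))  # 72
```

This also explains why the lactation doses are higher at the same ppm: feed intake per unit body weight rises sharply during lactation.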
Histologic evaluation indicated a deficit of sperm in the epididymis at 625 and 1,250 ppm relative to controls, although testicular spermatid head counts and epididymal spermatozoa counts did not show significant differences from controls at these exposure concentrations. Control females showed a high incidence of renal tubule mineralization, and the severity of this lesion was significantly increased at exposure concentrations of 250 ppm or greater. Males showed no renal tubule mineralization below 250 ppm, but incidence and severity increased with increasing exposure concentration at 250 ppm and greater. The primary goal of the current study was to provide information for the selection of exposure concentrations to be used in subsequent multigenerational and chronic studies. These long-term studies were designed to address multiple aspects of the endocrine disruptor hypothesis, that is, the hypothesis that exposures of human and wildlife populations to endocrine-active compounds contribute to adverse reproductive tract effects and cancers of hormone-sensitive organs. In particular, the long-term consequences of low dose exposures that may produce subtle initial effects, the magnification of those effects across generations, and the reversibility of those effects were to be investigated. The goal was to select a high exposure concentration that would not induce overt toxicity in the dams or pups but would induce observable effects in the reproductive organs of the pups without severely impairing fertility in the F1 generation. The 1,250 ppm exposure concentration was clearly ruled out for further testing based on the effects on body weights, histopathologic observations in males and females, and a reduction in the proportion of mated dams producing litters. 
While the effects observed at 625 ppm would not be predicted to significantly impair reproduction, the observation of significant effects at 250 ppm (hyperplasia in the mammary gland of both sexes), together with the suggestion of subtle effects at this exposure concentration and below in the parallel immunotoxicity and neuroanatomical surveys, indicated that a high exposure concentration between 250 and 625 ppm was appropriate for the purposes of the multigenerational reproductive toxicology study and the chronic study of genistein. A high exposure concentration for the multigenerational and chronic studies was thus set at 500 ppm. A low exposure concentration of 5 ppm, where no significant effects were observed in the reproductive dose range-finding, and an intermediate exposure concentration of 100 ppm were also selected. Synonyms: 4',5,7-Trihydroxyisoflavone.
Platelet transfusions are used to prevent and treat bleeding in patients who are thrombocytopenic. Despite improvements in donor screening and laboratory testing, a small risk of viral, bacterial or protozoal contamination of platelets remains. There is also an ongoing risk from newly emerging blood transfusion-transmitted infections (TTIs) for which laboratory tests may not be available at the time of initial outbreak. One solution to further reduce the risk of TTIs from platelet transfusion is photochemical pathogen reduction, a process by which pathogens are either inactivated or significantly depleted in number, thereby reducing the chance of transmission. This process might offer additional benefits, including platelet shelf-life extension, and negate the requirement for gamma-irradiation of platelets. Although current pathogen-reduction technologies have been shown to significantly reduce pathogen load in platelet concentrates, a number of published clinical studies have raised concerns about the effectiveness of pathogen-reduced platelets for post-transfusion platelet recovery and the prevention of bleeding when compared with standard platelets. To assess the effectiveness of pathogen-reduced platelets for the prevention of bleeding in patients requiring platelet transfusions. We searched the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library 2013, Issue 1), MEDLINE (1950 to 18 February 2013), EMBASE (1980 to 18 February 2013), CINAHL (1982 to 18 February 2013) and the Transfusion Evidence Library (1980 to 18 February 2013). We also searched several international and ongoing trial databases and citation-tracked relevant reference lists. We requested information on possible unpublished trials from known investigators in the field. We included randomised controlled trials (RCTs) comparing the transfusion of pathogen-reduced platelets with standard platelets.
We did not identify any RCTs which compared the transfusion of one type of pathogen-reduced platelets with another. One author screened all references, excluding duplicates and those clearly irrelevant. Two authors then screened the remaining references, confirmed eligibility, extracted data and analysed trial quality independently. We requested and obtained a significant amount of missing data from trial authors. We performed meta-analyses where appropriate using the fixed-effect model for risk ratios (RR) or mean differences (MD), with 95% confidence intervals (95% CI), and used the I² statistic to explore heterogeneity, employing the random-effects model when I² was greater than 30%. We included 10 trials comparing pathogen-reduced platelets with standard platelets. Nine trials assessed Intercept® pathogen-reduced platelets and one trial Mirasol® pathogen-reduced platelets. Two were randomised cross-over trials and the remaining eight were parallel-group RCTs. In total, 1422 participants were available for analysis across the 10 trials, of which 675 participants received Intercept® and 56 Mirasol® platelet transfusions. Four trials assessed the response to a single study platelet transfusion (all Intercept®) and six to multiple study transfusions (Intercept® (N = 5), Mirasol® (N = 1)) compared with standard platelets. We found the trials to be generally at low risk of bias but heterogeneous regarding the nature of the interventions (platelet preparation), protocols for platelet transfusion, definitions of outcomes, methods of outcome assessment and duration of follow-up. Our primary outcomes were mortality, 'any bleeding', 'clinically significant bleeding' and 'severe bleeding', and were grouped by duration of follow-up: short (up to 48 hours), medium (48 hours to seven days) or long (more than seven days).
Meta-analysis of data from five trials of multiple platelet transfusions reporting 'any bleeding' over a long follow-up period found an increase in bleeding in those receiving pathogen-reduced platelets compared with standard platelets using the fixed-effect model (RR 1.09, 95% CI 1.02 to 1.15, I² = 59%); however, this meta-analysis showed no difference between treatment arms when using the random-effects model (RR 1.14, 95% CI 0.93 to 1.38). There was no evidence of a difference between treatment arms in the number of patients with 'clinically significant bleeding' (reported by four out of the same five trials) or 'severe bleeding' (reported by all five trials) (respectively, RR 1.06, 95% CI 0.93 to 1.21, I² = 2%; RR 1.27, 95% CI 0.76 to 2.12, I² = 51%). We also found no evidence of a difference between treatment arms for all-cause mortality, acute transfusion reactions, adverse events, serious adverse events and red cell transfusion requirements in the trials which reported on these outcomes. No bacterial transfusion-transmitted infections occurred in the six trials that reported this outcome. Although the definition of platelet refractoriness differed between trials, the risk of this event was 2.74 times higher following pathogen-reduced platelet transfusion (RR 2.74, 95% CI 1.84 to 4.07, I² = 0%). Participants required 7% more platelet transfusions following pathogen-reduced platelet transfusion when compared with standard platelet transfusion (MD 0.07, 95% CI 0.03 to 0.11, I² = 21%), although the interval between platelet transfusions was only shown to be significantly shorter following multiple Intercept® pathogen-reduced platelet transfusion when compared with standard platelet transfusion (MD -0.51, 95% CI -0.66 to -0.37, I² = 0%). In trials of multiple pathogen-reduced platelets, our analyses showed the one- and 24-hour count and corrected count increments to be significantly inferior to standard platelets.
However, one-hour increments were similar in trials of single platelet transfusions, although the 24-hour count and corrected count increments were again significantly lower. We found no evidence of a difference in mortality, 'clinically significant' or 'severe bleeding', transfusion reactions or adverse events between pathogen-reduced and standard platelets. For a range of laboratory outcomes the results indicated evidence of some benefits for standard platelets over pathogen-reduced platelets. These conclusions are based on data from 1422 patients included in 10 trials. Results from ongoing or new trials are required to determine if there are clinically important differences in bleeding risk between pathogen-reduced platelet transfusions and standard platelet transfusions. Given the variability in trial design, bleeding assessment and quality of outcome reporting, it is recommended that future trials apply standardised approaches to outcome assessment and follow-up, including safety reporting.
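The fixed-effect versus random-effects discrepancy for 'any bleeding' above hinges on the I² heterogeneity statistic and the between-study variance τ². A minimal Python sketch of inverse-variance pooling with the DerSimonian-Laird estimator; the log risk ratios and variances below are invented purely to illustrate the arithmetic and are not the review's data:

```python
import math

# Hypothetical (log risk ratio, variance) pairs for five trials --
# illustrative values only, not the review's actual data.
studies = [(0.10, 0.004), (0.05, 0.006), (0.15, 0.003),
           (0.02, 0.010), (0.20, 0.005)]

def pool(studies):
    # Fixed-effect (inverse-variance) pooled estimate
    w = [1.0 / v for _, v in studies]
    y_fixed = sum(wi * y for wi, (y, _) in zip(w, studies)) / sum(w)
    # Cochran's Q and the I-squared heterogeneity statistic
    q = sum(wi * (y - y_fixed) ** 2 for wi, (y, _) in zip(w, studies))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    # DerSimonian-Laird between-study variance tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects pooling adds tau^2 to every within-study variance
    w_re = [1.0 / (v + tau2) for _, v in studies]
    y_re = sum(wi * y for wi, (y, _) in zip(w_re, studies)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    ci = (math.exp(y_re - 1.96 * se_re), math.exp(y_re + 1.96 * se_re))
    return math.exp(y_re), ci, i2

rr, ci, i2 = pool(studies)
```

With larger τ² the random-effects interval widens, which is how a result significant under the fixed-effect model can lose significance under the random-effects model, as happened for 'any bleeding' in this review.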
Depression is common in primary care and is associated with marked personal, social and economic morbidity, thus creating significant demands on service providers. The antidepressant fluoxetine has been studied in many randomised controlled trials (RCTs) in comparison with other conventional and unconventional antidepressants. However, these studies have produced conflicting findings. Other systematic reviews have considered selective serotonin reuptake inhibitors (SSRIs) as a group, which limits the applicability of the findings for fluoxetine alone. Therefore, this review intends to provide specific and clinically useful information regarding the effects of fluoxetine for depression compared with tricyclics (TCAs), SSRIs, serotonin-noradrenaline reuptake inhibitors (SNRIs), monoamine oxidase inhibitors (MAOIs) and newer agents, and other conventional and unconventional agents. To assess the effects of fluoxetine in comparison with all other antidepressive agents for depression in adult individuals with unipolar major depressive disorder. We searched the Cochrane Collaboration Depression, Anxiety and Neurosis Review Group Controlled Trials Register (CCDANCTR) to 11 May 2012. This register includes relevant RCTs from the Cochrane Central Register of Controlled Trials (CENTRAL) (all years), MEDLINE (1950 to date), EMBASE (1974 to date) and PsycINFO (1967 to date). No language restriction was applied. Reference lists of relevant papers and previous systematic reviews were handsearched. The pharmaceutical company marketing fluoxetine and experts in this field were contacted for supplemental data. All RCTs comparing fluoxetine with any other AD (including non-conventional agents such as hypericum) for patients with unipolar major depressive disorder (regardless of the diagnostic criteria used) were included. For trials that had a cross-over design, only results from the first randomisation period were considered. 
Data were independently extracted by two review authors using a standard form. Responders to treatment were calculated on an intention-to-treat basis: dropouts were always included in this analysis. When data on dropouts were carried forward and included in the efficacy evaluation, they were analysed according to the primary studies; when dropouts were excluded from any assessment in the primary studies, they were considered as treatment failures. Scores from continuous outcomes were analysed by including patients with a final assessment or with the last observation carried forward. Tolerability data were analysed by calculating the proportion of patients who failed to complete the study due to any cause and due to side effects or inefficacy. For dichotomous data, odds ratios (ORs) were calculated with 95% confidence intervals (CI) using the random-effects model. Continuous data were analysed using standardised mean differences (SMD) with 95% CI. A total of 171 studies were included in the analysis (24,868 participants). The included studies were undertaken between 1984 and 2012. Studies had homogeneous characteristics in terms of design, intervention and outcome measures. The assessment of quality with the risk of bias tool revealed that the great majority of them failed to report methodological details, such as the method of random sequence generation, the allocation concealment and blinding. Moreover, most of the included studies were sponsored by drug companies, so the potential for overestimation of treatment effect due to sponsorship bias should be considered in interpreting the results. Fluoxetine was as effective as the TCAs when considered as a group, both on a dichotomous outcome (reduction of at least 50% on the Hamilton Depression Scale) (OR 0.97, 95% CI 0.77 to 1.22, 24 RCTs, 2124 participants) and a continuous outcome (mean scores at the end of the trial or change score on depression measures) (SMD 0.03, 95% CI -0.07 to 0.14, 50 RCTs, 3393 participants). 
On a dichotomous outcome, fluoxetine was less effective than dothiepin or dosulepin (OR 2.13, 95% CI 1.08 to 4.20; number needed to treat (NNT) = 6, 95% CI 3 to 50, 2 RCTs, 144 participants), sertraline (OR 1.37, 95% CI 1.08 to 1.74; NNT = 13, 95% CI 7 to 58, 6 RCTs, 1188 participants), mirtazapine (OR 1.46, 95% CI 1.04 to 2.04; NNT = 12, 95% CI 6 to 134, 4 RCTs, 600 participants) and venlafaxine (OR 1.29, 95% CI 1.10 to 1.51; NNT = 11, 95% CI 8 to 16, 12 RCTs, 3387 participants). On a continuous outcome, fluoxetine was more effective than ABT-200 (SMD -1.85, 95% CI -2.25 to -1.45, 1 RCT, 141 participants) and milnacipran (SMD -0.36, 95% CI -0.63 to -0.08, 2 RCTs, 213 participants); conversely, it was less effective than venlafaxine (SMD 0.10, 95% CI 0 to 0.19, 13 RCTs, 3097 participants). Fluoxetine was better tolerated than TCAs considered as a group (total dropout OR 0.79, 95% CI 0.65 to 0.96; NNT = 20, 95% CI 13 to 48, 49 RCTs, 4194 participants) and was better tolerated in comparison with individual ADs, in particular amitriptyline (total dropout OR 0.62, 95% CI 0.46 to 0.85; NNT = 13, 95% CI 8 to 39, 18 RCTs, 1089 participants), and among the newer ADs ABT-200 (total dropout OR 0.18, 95% CI 0.08 to 0.39; NNT = 3, 95% CI 2 to 5, 1 RCT, 144 participants), pramipexole (total dropout OR 0.12, 95% CI 0.03 to 0.42, NNT = 3, 95% CI 2 to 5, 1 RCT, 105 participants), and reboxetine (total dropout OR 0.60, 95% CI 0.44 to 0.82, NNT = 9, 95% CI 6 to 24, 4 RCTs, 764 participants). The present study detected differences in terms of efficacy and tolerability between fluoxetine and certain ADs, but the clinical meaning of these differences is uncertain. Moreover, the assessment of quality with the risk of bias tool showed that the great majority of included studies failed to report details on methodological procedures. As a consequence, no definitive implications can be drawn from the studies' results. 
The better efficacy profile of sertraline and venlafaxine (and possibly other ADs) over fluoxetine may be clinically meaningful, as already suggested by other systematic reviews. In addition to efficacy data, treatment decisions should also be based on considerations of drug toxicity, patient acceptability and cost.
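The NNT figures quoted alongside the odds ratios above follow from an odds ratio and a control-group event rate via the standard odds-to-risk conversion. A hedged Python sketch; the 20% control-group dropout rate is an assumed illustrative value, not a figure taken from the review:

```python
def nnt_from_or(odds_ratio, control_event_rate):
    """NNT derived from an odds ratio and a control-group event rate
    using the standard odds-to-risk conversion."""
    odds_c = control_event_rate / (1.0 - control_event_rate)
    odds_e = odds_ratio * odds_c
    eer = odds_e / (1.0 + odds_e)          # experimental event rate
    arr = abs(control_event_rate - eer)    # absolute risk difference
    return 1.0 / arr

# Illustrative only: the review's TCA-comparison dropout OR of 0.79,
# combined with an assumed (hypothetical) 20% control dropout rate.
nnt = nnt_from_or(0.79, 0.20)
```

Because the NNT depends on the assumed control event rate, the same OR yields different NNTs in populations with different baseline risks, which is one reason reviews report the CI around the NNT.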
Tinea infections are fungal infections of the skin caused by dermatophytes. It is estimated that 10% to 20% of the world population is affected by fungal skin infections. Sites of infection vary according to geographical location, the organism involved, and environmental and cultural differences. Both tinea corporis, also referred to as 'ringworm', and tinea cruris, or 'jock itch', are conditions frequently seen by primary care doctors and dermatologists. The diagnosis can be made on clinical appearance and can be confirmed by microscopy or culture. A wide range of topical antifungal drugs are used to treat these superficial dermatomycoses, but it is unclear which are the most effective. To assess the effects of topical antifungal treatments in tinea cruris and tinea corporis. We searched the following databases up to 13th August 2013: the Cochrane Skin Group Specialised Register, CENTRAL in The Cochrane Library (2013, Issue 7), MEDLINE (from 1946), EMBASE (from 1974), and LILACS (from 1982). We also searched five trials registers, and checked the reference lists of included and excluded studies for further references to relevant randomised controlled trials. We handsearched the journal Mycoses from 1957 to 1990. Randomised controlled trials in people with proven dermatophyte infection of the body (tinea corporis) or groin (tinea cruris). Two review authors independently carried out study selection, data extraction, assessment of risk of bias, and analyses. Of the 364 records identified, 129 studies with 18,086 participants met the inclusion criteria. Half of the studies were judged at high risk of bias with the remainder judged at unclear risk. A wide range of different comparisons (92 in total) were evaluated across the 129 studies, with azoles accounting for the majority of the interventions. Treatment duration varied from one week to two months, but in most studies this was two to four weeks. The length of follow-up varied from one week to six months. 
Sixty-three studies contained no usable or retrievable data, mainly due to the lack of separate data for different tinea infections. Mycological and clinical cure were assessed in the majority of studies, along with adverse effects. Less than half of the studies assessed disease relapse, and hardly any of them assessed duration until clinical cure, or participant-judged cure. The quality of the body of evidence was rated as low to very low for the different outcomes. Data for several outcomes for two individual treatments were pooled. Across five studies, significantly higher clinical cure rates were seen in participants treated with terbinafine compared to placebo (risk ratio (RR) 4.51, 95% confidence interval (CI) 3.10 to 6.56, number needed to treat (NNT) 3, 95% CI 2 to 4). The quality of evidence for this outcome was rated as low. Data for mycological cure for terbinafine could not be pooled due to substantial heterogeneity. Mycological cure rates favoured naftifine 1% compared to placebo across three studies (RR 2.38, 95% CI 1.80 to 3.14, NNT 3, 95% CI 2 to 4) with the quality of evidence rated as low. In one study, naftifine 1% was more effective than placebo in achieving clinical cure (RR 2.42, 95% CI 1.41 to 4.16, NNT 3, 95% CI 2 to 5) with the quality of evidence rated as low. Across two studies, mycological cure rates favoured clotrimazole 1% compared to placebo (RR 2.87, 95% CI 2.28 to 3.62, NNT 2, 95% CI 2 to 3). Data for several outcomes were pooled for three comparisons between different classes of treatment. There was no difference in mycological cure between azoles and benzylamines (RR 1.01, 95% CI 0.94 to 1.07). The quality of the evidence was rated as low for this comparison. Substantial heterogeneity precluded the pooling of data for mycological and clinical cure when comparing azoles and allylamines. 
Azoles were slightly less effective in achieving clinical cure compared to azole and steroid combination creams immediately at the end of treatment (RR 0.67, 95% CI 0.53 to 0.84, NNT 6, 95% CI 5 to 13), but there was no difference in mycological cure rate (RR 0.99, 95% CI 0.93 to 1.05). The quality of evidence for these two outcomes was rated as low for mycological cure and very low for clinical cure. All of the treatments that were examined appeared to be effective, but most comparisons were evaluated in single studies. There was no evidence for a difference in cure rates between tinea cruris and tinea corporis. Adverse effects were minimal - mainly irritation and burning; results were generally imprecise between active interventions and placebo, and between different classes of treatment. The pooled data suggest that the individual treatments terbinafine and naftifine are effective. Adverse effects were generally mild and reported infrequently. A substantial number of the studies were more than 20 years old and of unclear or high risk of bias; there is, however, some evidence that other topical antifungal treatments also provide similar clinical and mycological cure rates, particularly azoles, although most were evaluated in single studies. There is insufficient evidence to determine if Whitfield's ointment, a widely used agent, is effective. Although combinations of topical steroids and antifungals are not currently recommended in any clinical guidelines, relevant studies included in this review reported higher clinical cure rates with similar mycological cure rates at the end of treatment, but the quality of evidence for these outcomes was rated very low due to imprecision, indirectness and risk of bias. 
There was insufficient evidence to confidently assess relapse rates in the individual or combination treatments. Although there was little difference between different classes of treatment in achieving cure, some interventions may be more appealing as they require fewer applications and a shorter duration of treatment. Further high quality, adequately powered trials focusing on patient-centred outcomes, such as patient satisfaction with treatment, should be considered.
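Risk ratios and NNTs like those reported above for terbinafine and naftifine derive from 2x2 event counts. A small Python sketch with hypothetical counts (not the review's data) showing the standard log-scale confidence interval and the NNT as the reciprocal of the risk difference:

```python
import math

def risk_ratio(events_t, n_t, events_c, n_c):
    """Risk ratio with a 95% CI on the log scale, plus the NNT as the
    reciprocal of the risk difference. Counts are hypothetical."""
    p_t, p_c = events_t / n_t, events_c / n_c
    rr = p_t / p_c
    # Standard error of log(RR) from the 2x2 counts
    se = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
    ci = (math.exp(math.log(rr) - 1.96 * se),
          math.exp(math.log(rr) + 1.96 * se))
    nnt = 1.0 / abs(p_t - p_c)
    return rr, ci, nnt

# E.g. 60/100 cured on a hypothetical active treatment vs 15/100 on placebo.
rr, ci, nnt = risk_ratio(60, 100, 15, 100)
```

The NNT is rounded up in practice; the review reports it with its own CI because it inherits the imprecision of the underlying risk difference.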
Accurate and rapid tests for tuberculosis (TB) drug resistance are critical for improving patient care and decreasing the transmission of drug-resistant TB. Genotype(®)MTBDRsl (MTBDRsl) is the only commercially-available molecular test for detecting resistance in TB to the fluoroquinolones (FQs; ofloxacin, moxifloxacin and levofloxacin) and the second-line injectable drugs (SLIDs; amikacin, kanamycin and capreomycin), which are used to treat patients with multidrug-resistant (MDR-)TB. To obtain summary estimates of the diagnostic accuracy of MTBDRsl for FQ resistance, SLID resistance and extensively drug-resistant TB (XDR-TB; defined as MDR-TB plus resistance to a FQ and a SLID) when performed (1) indirectly (i.e. on culture isolates confirmed as TB positive) and (2) directly (i.e. on smear-positive sputum specimens). To compare summary estimates of the diagnostic accuracy of MTBDRsl for FQ resistance, SLID resistance and XDR-TB by type of testing (indirect versus direct testing). The populations of interest were adults with drug-susceptible TB or drug-resistant TB. The settings of interest were intermediate and central laboratories. We searched the following databases without any language restriction up to 30 January 2014: Cochrane Infectious Diseases Group Specialized Register; MEDLINE; EMBASE; ISI Web of Knowledge; MEDION; LILACS; BIOSIS; SCOPUS; the metaRegister of Controlled Trials; the search portal of the World Health Organization International Clinical Trials Registry Platform; and ProQuest Dissertations & Theses A&I. We included all studies that determined MTBDRsl accuracy against a defined reference standard (culture-based drug susceptibility testing (DST), genetic testing or both). We included cross-sectional and diagnostic case-control studies. We excluded unpublished data and conference proceedings. 
For each study, two review authors independently extracted data using a standardized form and assessed study quality using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. We performed meta-analyses to estimate the pooled sensitivity and specificity of MTBDRsl for FQ resistance, SLID resistance, and XDR-TB. We explored the influence of different reference standards. We performed the majority of analyses using a bivariate random-effects model against culture-based DST as the reference standard. We included 21 unique studies: 14 studies reported the accuracy of MTBDRsl when done directly, five studies when done indirectly and two studies that did both. Of the 21 studies, 15 studies (71%) were cross-sectional and 11 studies (58%) were located in low-income or middle-income countries. All studies but two were written in English. Nine (43%) of the 21 included studies had a high risk of bias for patient selection. At least half of the studies had low risk of bias for the other QUADAS-2 domains.As a test for FQ resistance measured against culture-based DST, the pooled sensitivity of MTBDRsl when performed indirectly was 83.1% (95% confidence interval (CI) 78.7% to 86.7%) and the pooled specificity was 97.7% (95% CI 94.3% to 99.1%), respectively (16 studies, 1766 participants; 610 confirmed cases of FQ-resistant TB; moderate quality evidence). When performed directly, the pooled sensitivity was 85.1% (95% CI 71.9% to 92.7%) and the pooled specificity was 98.2% (95% CI 96.8% to 99.0%), respectively (seven studies, 1033 participants; 230 confirmed cases of FQ-resistant TB; moderate quality evidence). 
For indirect testing for FQ resistance, four (0.2%) of 1766 MTBDRsl results were indeterminate, whereas for direct testing 20 (1.9%) of 1033 were MTBDRsl indeterminate (P < 0.001). As a test for SLID resistance measured against culture-based DST, the pooled sensitivity of MTBDRsl when performed indirectly was 76.9% (95% CI 61.1% to 87.6%) and the pooled specificity was 99.5% (95% CI 97.1% to 99.9%), respectively (14 studies, 1637 participants; 414 confirmed cases of SLID-resistant TB; moderate quality evidence). For amikacin resistance, the pooled sensitivity and specificity were 87.9% (95% CI 82.1% to 92.0%) and 99.5% (95% CI 97.5% to 99.9%), respectively. For kanamycin resistance, the pooled sensitivity and specificity were 66.9% (95% CI 44.1% to 83.8%) and 98.6% (95% CI 96.1% to 99.5%), respectively. For capreomycin resistance, the pooled sensitivity and specificity were 79.5% (95% CI 58.3% to 91.4%) and 95.8% (95% CI 93.4% to 97.3%), respectively. When performed directly, the pooled sensitivity for SLID resistance was 94.4% (95% CI 25.2% to 99.9%) and the pooled specificity was 98.2% (95% CI 88.9% to 99.7%), respectively (six studies, 947 participants; 207 confirmed cases of SLID-resistant TB, 740 SLID-susceptible cases of TB; very low quality evidence). For indirect testing for SLID resistance, three (0.4%) of 774 MTBDRsl results were indeterminate, whereas for direct testing 53 (6.1%) of 873 were MTBDRsl indeterminate (P < 0.001). As a test for XDR-TB measured against culture-based DST, the pooled sensitivity of MTBDRsl when performed indirectly was 70.9% (95% CI 42.9% to 88.8%) and the pooled specificity was 98.8% (95% CI 96.1% to 99.6%), respectively (eight studies, 880 participants; 173 confirmed cases of XDR-TB; low quality evidence). In adults with TB, a positive MTBDRsl result for FQ resistance, SLID resistance, or XDR-TB can be treated with confidence. 
However, MTBDRsl does not detect approximately one in five cases of FQ-resistant TB, and does not detect approximately one in four cases of SLID-resistant TB. Of the three SLIDs, MTBDRsl has the poorest sensitivity for kanamycin resistance. MTBDRsl will miss between one in four and one in three cases of XDR-TB. The diagnostic accuracy of MTBDRsl is similar when done using either culture isolates or smear-positive sputum. As the location of the resistance causing mutations can vary on a strain-by-strain basis, further research is required on test accuracy in different settings and, if genetic sequencing is used as a reference standard, it should examine all resistance-determining regions. Given the confidence one can have in a positive result, and the ability of the test to provide results within a matter of days, MTBDRsl may be used as an initial test for second-line drug resistance. However, when the test reports a negative result, clinicians may still wish to carry out conventional testing.
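The conclusion that a positive MTBDRsl result can be treated with confidence while a negative result may need conventional confirmation follows from the pooled sensitivity and specificity once a prevalence of resistance is assumed. A Python sketch applying Bayes' rule to the review's pooled indirect-testing estimates for FQ resistance; the 20% prevalence is an illustrative assumption, not a figure from the review:

```python
def predictive_values(sens, spec, prevalence):
    """Positive and negative predictive values from sensitivity,
    specificity and an assumed prevalence (Bayes' rule)."""
    tp = sens * prevalence                  # true positives
    fp = (1.0 - spec) * (1.0 - prevalence)  # false positives
    fn = (1.0 - sens) * prevalence          # false negatives
    tn = spec * (1.0 - prevalence)          # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Pooled indirect-testing estimates for FQ resistance from the review
# (sensitivity 83.1%, specificity 97.7%), at an assumed 20% prevalence.
ppv, npv = predictive_values(0.831, 0.977, 0.20)
```

High specificity keeps the PPV high even at modest prevalence, while the imperfect sensitivity means a negative result still leaves a non-trivial residual probability of resistance.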
Axial spondyloarthritis (axSpA) comprises ankylosing spondylitis (radiographic axSpA) and non-radiographic (nr-)axSpA and is associated with psoriasis, uveitis and inflammatory bowel disease. Non-steroidal anti-inflammatory drugs (NSAIDs) are recommended as first-line drug treatment. To determine the benefits and harms of NSAIDs in axSpA. We searched CENTRAL, MEDLINE and EMBASE to 18 June 2014. Randomised controlled trials (RCTs) or quasi-RCTs of NSAIDs versus placebo or any comparator in adults with axSpA, and observational cohort studies studying the long-term effect (≥ six months) of NSAIDs on radiographic progression or adverse events (AEs). The main comparisons were traditional or COX-2 NSAIDs versus placebo. The major outcomes were pain, Bath Ankylosing Spondylitis Disease Activity Index (BASDAI), Bath Ankylosing Spondylitis Functional Index (BASFI), Bath Ankylosing Spondylitis Metrology Index (BASMI), radiographic progression, number of withdrawals due to AEs and number of serious AEs. Two review authors independently selected trials for inclusion, assessed the risk of bias, extracted data and assessed the quality of evidence for major outcomes using GRADE. We included 39 studies (35 RCTs, two quasi-RCTs and two cohort studies); and 29 RCTs and two quasi-RCTs (n = 4356) in quantitative analyses for the comparisons: traditional NSAIDs versus placebo, cyclo-oxygenase-2 (COX-2) versus placebo, COX-2 versus traditional NSAIDs, NSAIDs versus NSAIDs, naproxen versus other NSAIDs, and low versus high dose. Most trials were at unclear risk of selection bias (n = 29), although blinding of participants and personnel was adequate in 24 trials. Twenty-five trials had low risk of attrition bias and 29 trials had low risk of reporting bias. Risk of bias in both cohort studies was high for study participation, and low or unclear for all other criteria. No trials in the meta-analyses assessed patients with nr-axSpA. Traditional NSAIDs were more beneficial than placebo at six weeks. 
High quality evidence (four trials, n = 850) indicates better pain relief with NSAIDs (pain in control group ranged from 57 to 64 on a 100 mm visual analogue scale (VAS) and was 16.5 points lower in the NSAID group (95% confidence interval (CI) -20.8 to -12.2), lower scores indicate less pain, NNT 4 (3 to 6)); moderate quality evidence (one trial, n = 190) indicates improved disease activity with NSAIDs (BASDAI in control group was 54.7 on a 100-point scale and was 17.5 points lower in the NSAID group (95% CI -23.1 to -11.8), lower scores indicate less disease activity, NNT 3 (2 to 4)); and high quality evidence (two trials, n = 356) indicates improved function with NSAIDs (BASFI in control group was 50.0 on a 100-point scale and was 9.1 points lower in the NSAID group (95% CI -13.0 to -5.1), lower scores indicate better functioning, NNT 5 (3 to 8)). High (five trials, n = 1165) and moderate (three trials, n = 671) quality evidence (downgraded due to potential imprecision) indicates that withdrawals due to AEs and number of serious AEs did not differ significantly between placebo (52/1000 and 2/1000) and NSAID (39/1000 and 3/1000) groups after 12 weeks (risk ratio (RR) 0.75, 95% CI 0.46 to 1.21; and RR 1.69, 95% CI 0.36 to 7.97, respectively). BASMI and radiographic progression were not reported. COX-2 NSAIDs were also more efficacious than placebo at six weeks. 
High quality evidence (two trials, n = 349) indicates better pain relief with COX-2 (pain in control group was 64 points and was 21.7 points lower in the COX-2 group (95% CI -35.9 to -7.4), NNT 3 (2 to 24)); moderate quality evidence (one trial, n = 193) indicates improved disease activity with COX-2 (BASDAI in control groups was 54.7 points and was 22 points lower in the COX-2 group (95% CI -27.4 to -16.6), NNT 2 (1 to 3)); and high quality evidence (two trials, n = 349) showed improved function with COX-2 (BASFI in control group was 50.0 points and was 13.4 points lower in the COX-2 group (95% CI -17.4 to -9.5), NNT 3 (2 to 4)). Low and moderate quality evidence (three trials, n = 669) (downgraded due to potential imprecision and heterogeneity) indicates that withdrawals due to AEs and number of serious AEs did not differ significantly between placebo (11/1000 and 2/1000) and COX-2 (24/1000 and 2/1000) groups after 12 weeks (RR 2.14, 95% CI 0.36 to 12.56; and RR 0.92, 95% CI 0.14 to 6.21, respectively). BASMI and radiographic progression were not reported. There were no significant differences in benefits (pain on VAS: MD -2.62, 95% CI -10.99 to 5.75; three trials, n = 669) or harms (withdrawals due to AEs: RR 1.04, 95% CI 0.60 to 1.82; four trials, n = 995) between NSAID classes. While indomethacin use resulted in significantly more AEs (RR 1.25, 95% CI 1.06 to 1.48; 11 studies, n = 1135), and neurological AEs (RR 2.34, 95% CI 1.32 to 4.14; nine trials, n = 963) than other NSAIDs, these findings were not robust to sensitivity analyses. We found no important differences in harms between naproxen and other NSAIDs (three trials, n = 646), although other NSAIDs appeared more effective for relieving pain (MD 6.80, 95% CI 3.72 to 9.88; two trials, n = 232). We found no clear dose-response effect on benefits or harms (five studies, n = 1136). 
Single studies suggest NSAIDs may be effective in retarding radiographic progression, especially in certain subgroups of patients, e.g. patients with high CRP, and that this may be best achieved by continuous rather than on-demand use of NSAIDs. High to moderate quality evidence indicates that both traditional and COX-2 NSAIDs are efficacious for treating axSpA, and moderate to low quality evidence indicates harms may not differ from placebo in the short term. Various NSAIDs are equally effective. Continuous NSAID use may reduce radiographic spinal progression, but this requires confirmation.
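The absolute rates per 1000 quoted above for withdrawals and serious AEs are simply the control-group rates scaled by the risk ratio. A one-line Python check using the figures from the traditional-NSAID comparison (52/1000 in the placebo group, RR 0.75 for withdrawals due to AEs):

```python
def per_1000(risk_ratio, control_rate_per_1000):
    """Scale a control-group rate per 1000 by a risk ratio to obtain
    the corresponding rate in the treated group."""
    return risk_ratio * control_rate_per_1000

# Withdrawals due to AEs, traditional NSAIDs vs placebo:
# 52/1000 with placebo, RR 0.75 -> 39/1000 with NSAIDs.
nsaid_rate = per_1000(0.75, 52)
```

Presenting both the relative and the absolute scale, as the review does, guards against a large-sounding risk ratio masking a very small absolute difference.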
Cycling is an attractive form of transport. It is beneficial to the individual as a form of physical activity that may fit more readily into an individual's daily routine, such as cycling to work and to the shops, than other physical activities such as visiting a gym. Cycling is also beneficial to the wider community and the environment as a result of fewer motorised journeys. Cyclists are seen as vulnerable road users who are frequently in close proximity to larger and faster motorised vehicles. Cycling infrastructure aims to make cycling both more convenient and safer for cyclists. This review is needed to guide transport planning. To: 1. evaluate the effects of different types of cycling infrastructure on reducing cycling injuries in cyclists, by type of infrastructure; 2. evaluate the effects of cycling infrastructure on reducing the severity of cycling injuries in cyclists; 3. evaluate the effects of cycling infrastructure on reducing cycling injuries in cyclists with respect to age, sex and social group. We ran the most recent search on 2nd March 2015. We searched the Cochrane Injuries Group Specialised Register, CENTRAL (The Cochrane Library), MEDLINE (OvidSP), Embase Classic + Embase (OvidSP), PubMed and 10 other databases. We searched websites, handsearched conference proceedings, screened reference lists of included studies and previously published reviews, and contacted relevant organisations. We included randomised controlled trials, cluster randomised controlled trials, controlled before-after studies, and interrupted time series studies which evaluated the effect of cycling infrastructure (such as cycle lanes, tracks or paths, speed management, roundabout design) on cyclist injury or collision rates. Studies had to include a comparator, that is, either no infrastructure or a different type of infrastructure. We excluded studies that assessed collisions that occurred as a result of competitive cycling. 
Two review authors examined the titles and abstracts of papers obtained from searches to determine eligibility. Two review authors extracted data from the included trials and assessed the risk of bias. We carried out a meta-analysis using the random-effects model where at least three studies reported the same intervention and outcome. Where there were sufficient studies, as a secondary analysis we accounted for changes in cyclist exposure in the calculation of the rate ratios. We rated the quality of the evidence as 'high', 'moderate', 'low' or 'very low' according to the GRADE approach for the installation of cycle routes and networks. We identified 21 studies for inclusion in the review: 20 controlled before-after (CBA) studies and one interrupted time series (ITS) study. These evaluated a range of infrastructure including cycle lanes, advanced stop lines, use of colour, cycle tracks, cycle paths, management of the road network, speed management, cycle routes and networks, roundabout design and packages of measures. No studies reported medically-attended or self-reported injuries. There was no evidence that cycle lanes reduce the rate of cycle collisions (rate ratio 1.21, 95% CI 0.70 to 2.08). Taking into account cycle flow, there was no difference in collisions for cyclists using cycle routes and networks compared with cyclists not using cycle routes and networks (RR 0.40, 95% CI 0.15 to 1.05). There was statistically significant heterogeneity between the studies (I² = 75%, Chi² = 8.00, df = 2, P = 0.02) for the analysis adjusted for cycle flow. We judged the quality of the evidence regarding cycle routes and networks as very low and we are very uncertain about the estimate. These analyses are based on findings from CBA studies. From data presented narratively, the use of 20 mph speed restrictions in urban areas may be effective at reducing cyclist collisions. 
Redesigning specific parts of cycle routes that may be particularly busy or complex in terms of traffic movement may be beneficial to cyclists in terms of reducing the risk of collision. Generally, the conversion of intersections to roundabouts may increase the number of cycle collisions. In particular, the conversion of intersections to roundabouts with cycle lanes marked as part of the circulating carriageway increased cycle collisions. However, the conversion of intersections with and without signals to roundabouts with cycle paths may reduce the odds of collision. Both continuing a cycle lane across the mouth of a side road with a give way line onto the main road, and cycle tracks, may increase the risk of injury collisions in cyclists. However, these conclusions are uncertain, being based on a narrative review of findings from included studies. There is a lack of evidence that cycle paths or advanced stop lines either reduce or increase injury collisions in cyclists. There is also insufficient evidence to draw any robust conclusions concerning the effect of cycling infrastructure on cycling collisions in terms of severity of injury, sex, age, and level of social deprivation of the casualty. In terms of quality of the evidence, there was little matching of intervention and control sites. In many studies, the comparability of the control area to the intervention site was unclear and few studies provided information on other cycling infrastructures that may be in place in the control and intervention areas. The majority of studies analysed data routinely collected by organisations external to the study team, thus reducing the risk of bias in terms of systematic differences in assessing outcomes between the control and intervention groups. Some authors did not take regression-to-mean effects into account when examining changes in collisions. 
Longer data collection periods pre- and post-installation would allow for regression-to-mean effects and also seasonal and time trends in traffic volume to be observed. Few studies adjusted cycle collision rates for exposure. Generally, there is a lack of high quality evidence to be able to draw firm conclusions as to the effect of cycling infrastructure on cycling collisions. There is a lack of rigorous evaluation of cycling infrastructure.
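Two of the quantities reported above can be illustrated with a short sketch: the collision rate ratio adjusted for cyclist exposure, and the I² heterogeneity statistic implied by the reported Chi² = 8.00 on 2 degrees of freedom. This is an illustrative calculation only, not the review's analysis code, and the counts in the `rate_ratio` example are invented:

```python
def rate_ratio(collisions_after, exposure_after, collisions_before, exposure_before):
    """Collision rate ratio adjusted for exposure: each rate is
    collisions divided by cyclist exposure (e.g. cycle flow)."""
    rate_after = collisions_after / exposure_after
    rate_before = collisions_before / exposure_before
    return rate_after / rate_before

def i_squared(chi2, df):
    """Higgins' I-squared: percentage of variability across studies
    attributable to heterogeneity rather than chance."""
    return max(0.0, (chi2 - df) / chi2) * 100

# Invented counts: 30 collisions over 12,000 cyclist-trips after installation
# versus 25 collisions over 8,000 before.
print(round(rate_ratio(30, 12_000, 25, 8_000), 3))  # 0.8
# The heterogeneity reported for the exposure-adjusted cycle-route analysis:
print(i_squared(8.00, 2))                           # 75.0 (%)
```

The second value matches the 75% reported above, since I² = (Chi² − df) / Chi².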
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
To investigate the effects of zinc deficiency on the relevant immune function in rats with LPS-induced sepsis. Sixty rats were divided into a low zinc group (LZ), a normal zinc pair-fed group (NP), and a normal zinc control group (NC) according to the random number table, with 20 rats in each group. The rats in group LZ were fed a low zinc diet; the rats in group NP were fed a normal zinc diet with the same intake as that of group LZ, achieved by manual control; and the rats in group NC were fed a normal zinc diet freely. After being fed for 7 d, the rats all fasted and were further divided into the following subgroups according to the random number table: LZ-LPS, LZ-normal saline (NS), NP-LPS, NP-NS, NC-LPS, and NC-NS, with 10 rats in each subgroup. Rats in the LPS subgroups were intraperitoneally injected with 1 mg/mL LPS solution at a dose of 5 mg/kg; rats in the corresponding NS subgroups were intraperitoneally injected with an equivalent volume of NS. The rats were sacrificed at post injection hour 6 to collect blood, spleen, and thymus. The serum level of zinc was detected by inductively coupled plasma mass spectrometry, and the serum alkaline phosphatase (ALP) activity was detected by automatic blood biochemical analyzer. The body weight and the weights of spleen and thymus of rats were measured, and the indices of spleen and thymus were calculated. Six routine blood indices were examined by automatic blood cell analyzer. The serum levels of interferon gamma (IFN-γ), TNF-α, IL-4, and IL-10 were determined with ELISA, and the ratio of IFN-γ to IL-4 was calculated. Data were processed with one-way analysis of variance and the SNK test. (1) Serum levels of zinc and ALP activity in the LPS subgroups were significantly lower than those in the corresponding NS subgroups (with P values below 0.05). The two former indices in subgroups NP-NS and NC-NS were significantly higher than those in subgroup LZ-NS (with P values below 0.05).
The two former indices in subgroups NP-LPS and NC-LPS were significantly higher than those in subgroup LZ-LPS (with P values below 0.05). (2) Body weight, spleen and thymus weight, and indices of spleen and thymus in the LPS subgroups were similar to those in the corresponding NS subgroups (with P values above 0.05). The 4 former indices, except for body weight, in subgroups NP-NS and NC-NS were significantly higher than those in subgroup LZ-NS (with P values below 0.05). The 4 former indices, except for body weight, in subgroups NP-LPS and NC-LPS were significantly higher than those in subgroup LZ-LPS (with P values below 0.05). (3) Levels of leucocyte count in subgroups LZ-LPS and NP-LPS were significantly higher than those in the corresponding NS subgroups (with P values below 0.05). The level of leucocyte count in subgroup NC-NS was significantly higher than that in subgroup LZ-NS (P < 0.05). The level of leucocyte count in subgroup NC-LPS was significantly lower than that in subgroup LZ-LPS (P < 0.05). Levels of neutrophilic granulocyte count (NGC) and NG in the LPS subgroups were significantly higher than those in the corresponding NS subgroups (with P values below 0.05). The two former indices in subgroup NC-LPS were significantly lower than those in subgroup LZ-LPS (with P values below 0.05). The level of NG in subgroup NC-NS was significantly lower than that in subgroup LZ-NS (P < 0.05). Levels of lymphocyte count and lymphocyte proportion in subgroups LZ-NS, LZ-LPS, NP-NS, NP-LPS, NC-NS, and NC-LPS were respectively (1.8 ± 0.4) × 10⁹/L, (1.0 ± 0.3) × 10⁹/L, (2.6 ± 0.7) × 10⁹/L, (1.4 ± 0.4) × 10⁹/L, (3.3 ± 0.6) × 10⁹/L, (1.5 ± 0.5) × 10⁹/L, and 0.39 ± 0.10, 0.11 ± 0.03, 0.47 ± 0.12, 0.14 ± 0.04, 0.50 ± 0.09, 0.24 ± 0.07. The two former indices in the LPS subgroups were significantly lower than those in the corresponding NS subgroups (with P values below 0.05).
The two former indices in subgroup NC-NS were significantly higher than those in subgroup LZ-NS (with P values below 0.05). The two former indices in subgroups NP-LPS and NC-LPS were significantly higher than those in subgroup LZ-LPS (with P values below 0.05). The level of lymphocyte count in subgroup NP-NS was significantly higher than that in subgroup LZ-NS (P < 0.05). Levels of platelet count (PC) in subgroups NP-LPS and NC-LPS were significantly lower than those in the corresponding NS subgroups (with P values below 0.05). Levels of PC in subgroups NP-NS and NC-NS were significantly higher than those in subgroup LZ-NS (with P values below 0.05). The level of PC in subgroup NC-LPS was significantly higher than that in subgroup LZ-LPS (P < 0.05). (4) Serum levels of TNF-α, IL-4, and IL-10 in each subgroup showed no significant differences (with P values above 0.05). Serum levels of IFN-γ and ratios of IFN-γ to IL-4 in subgroups LZ-NS, LZ-LPS, NP-NS, NP-LPS, NC-NS, and NC-LPS were respectively (75 ± 21), (233 ± 40), (80 ± 14), (345 ± 74), (66 ± 7), (821 ± 189) pg/mL, and 3.1 ± 1.0, 6.6 ± 1.7, 3.9 ± 1.7, 20.2 ± 8.3, 3.4 ± 1.5, 45.7 ± 7.6. The two former indices in the LPS subgroups were significantly higher than those in the corresponding NS subgroups (with P values below 0.05). The two former indices in subgroups NP-NS and NC-NS were similar to those in subgroup LZ-NS (with P values above 0.05). The two former indices in subgroups NP-LPS and NC-LPS were significantly higher than those in subgroup LZ-LPS (with P values below 0.05). Zinc deficiency can induce atrophy of the spleen and thymus and a reduction of peripheral blood lymphocytes. In sepsis, zinc deficiency can further decrease the production of IFN-γ, shifting the Th1/Th2 cytokine balance towards Th2 and worsening the immune imbalance.
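The group comparisons above rest on a one-way analysis of variance across subgroups. A minimal sketch of the underlying F statistic is below; the observations are invented, and the organ-index formula (organ weight relative to body weight) is an assumption about how the spleen and thymus indices were calculated:

```python
def organ_index(organ_weight_mg, body_weight_g):
    """Organ index, assumed here to be organ weight relative to body weight."""
    return organ_weight_mg / body_weight_g

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over several groups of observations."""
    k = len(groups)                               # number of groups
    n = sum(len(g) for g in groups)               # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented measurements for three subgroups:
f = one_way_anova_f([[1, 2, 3], [2, 3, 4], [7, 8, 9]])
print(round(f, 1))  # 31.0
```

A large F, referred to the F distribution with (k − 1, n − k) degrees of freedom, is what yields the "P values below 0.05" statements; the SNK post hoc test then localises which subgroup pairs differ.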
Risperidone is the first new generation antipsychotic drug made available in a long-acting injection formulation. To examine the effects of depot risperidone for treatment of schizophrenia or related psychoses in comparison with placebo, no treatment or other antipsychotic medication. To critically appraise and summarise current evidence on the resource use, cost and cost-effectiveness of risperidone (depot) for schizophrenia. We searched the Cochrane Schizophrenia Group's Register (December 2002, 2012, and October 28, 2015). We also checked the references of all included studies, and contacted industry and authors of included studies. Randomised clinical trials comparing depot risperidone with other treatments for people with schizophrenia and/or schizophrenia-like psychoses. Two review authors independently selected trials, assessed trial quality and extracted data. For dichotomous data, we calculated the risk ratio (RR), with 95% confidence interval (CI). For continuous data, we calculated mean differences (MD). We assessed risk of bias for included studies and created 'Summary of findings' tables using GRADE. Twelve studies, with a total of 5723 participants, were randomised to the following comparison treatments.

Risperidone depot versus placebo
Outcomes of relapse and improvement in mental state were neither measured nor reported. In terms of other primary outcomes, more people receiving placebo left the study early by 12 weeks (1 RCT, n=400, RR 0.74, 95% CI 0.63 to 0.88, very low quality evidence) and experienced severe adverse events in the short term (1 RCT, n=400, RR 0.59, 95% CI 0.38 to 0.93, very low quality evidence). There was, however, no difference in levels of weight gain between groups (1 RCT, n=400, RR 2.11, 95% CI 0.48 to 9.18, very low quality evidence).
Risperidone depot versus general oral antipsychotics
The outcome of improvement in mental state was not presented due to high levels of attrition, nor were levels of severe adverse events explicitly reported. Most primary outcomes of interest showed no difference between treatment groups. However, more people receiving depot risperidone experienced nervous system disorders (long-term: 1 RCT, n=369, RR 1.34, 95% CI 1.13 to 1.58, very low quality evidence).

Risperidone depot versus oral risperidone
Data for relapse and severe adverse events were not reported. All outcomes of interest were rated as moderate quality evidence. Main results showed no differences between treatment groups, with equivocal data for change in mental state, numbers leaving the study early, any extrapyramidal symptoms, weight increase and prolactin-related adverse events.

Risperidone depot versus oral quetiapine
Relapse rates and improvement in mental state were not reported. Fewer people receiving risperidone depot left the study early (long-term: 1 RCT, n=666, RR 0.84, 95% CI 0.74 to 0.95, moderate quality evidence). Experience of serious adverse events was similar between groups (low quality evidence), but more people receiving depot risperidone experienced extrapyramidal symptoms (EPS) (1 RCT, n=666, RR 1.83, 95% CI 1.07 to 3.15, low quality evidence), had greater weight gain (1 RCT, n=666, RR 1.25, 95% CI 0.25 to 2.25, low quality evidence) and had more prolactin-related adverse events (1 RCT, n=666, RR 3.07, 95% CI 1.13 to 8.36, very low quality evidence).

Risperidone depot versus oral aripiprazole
Relapse rates, mental state using PANSS, leaving the study early, serious adverse events and weight increase were similar between groups. However, more people receiving depot risperidone experienced prolactin-related adverse events compared to those receiving oral aripiprazole (2 RCTs, n=729, RR 9.91, 95% CI 2.78 to 35.29, very low quality evidence).
Risperidone depot versus oral olanzapine
Relapse rates were not reported in any of the included studies for this comparison. Improvement in mental state using PANSS and instances of severe adverse events were similar between groups. More people receiving depot risperidone left the study early than those receiving oral olanzapine (1 RCT, n=618, RR 1.32, 95% CI 1.10 to 1.58, low quality evidence), with those receiving risperidone depot also experiencing more extrapyramidal symptoms (1 RCT, n=547, RR 1.67, 95% CI 1.19 to 2.36, low quality evidence). However, more people receiving oral olanzapine experienced weight increase (1 RCT, n=547, RR 0.56, 95% CI 0.42 to 0.75, low quality evidence).

Risperidone depot versus atypical depot antipsychotics (specifically paliperidone palmitate)
Relapse rates were not reported, and rates of response using PANSS, weight increase, prolactin-related adverse events and glucose-related adverse events were similar between groups. Fewer people left the study early due to lack of efficacy from the risperidone depot group (long term: 1 RCT, n=749, RR 0.60, 95% CI 0.45 to 0.81, low quality evidence), but more people receiving depot risperidone required use of EPS medication (2 RCTs, n=1666, RR 1.46, 95% CI 1.18 to 1.8, moderate quality evidence).

Risperidone depot versus typical depot antipsychotics
Outcomes of relapse, severe adverse events or movement disorders were not reported. Outcomes relating to improvement in mental state demonstrated no difference between groups (low quality evidence). However, more people receiving depot risperidone compared to other typical depots left the study early (long-term: 1 RCT, n=62, RR 3.05, 95% CI 1.12 to 8.31, low quality evidence).

Depot risperidone may be more acceptable than placebo injection but it is hard to know if it is any more effective in controlling the symptoms of schizophrenia. The active drug, especially at higher doses, may be associated with more movement disorders than placebo.
People already stabilised on oral risperidone may continue to maintain benefit if treated with depot risperidone, and avoid the need to take tablets, at least in the short term. In people who are happy to take oral medication, depot risperidone is approximately equal to oral risperidone. It is possible, however, that the depot formulation can bring a second-generation antipsychotic to people who do not reliably adhere to treatment. People with schizophrenia who have difficulty adhering to treatment, however, are unlikely to volunteer for a clinical trial. Such people may gain benefit from depot risperidone with no increased risk of extrapyramidal side effects.
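The abstract states that risk ratios with 95% confidence intervals were calculated for dichotomous outcomes. A minimal sketch of that calculation, using the standard log-transform method, is below; the event counts are hypothetical, not taken from the included trials:

```python
import math

def risk_ratio_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Risk ratio for treatment vs control, with a 95% CI
    computed on the log scale (standard large-sample method)."""
    rr = (events_t / n_t) / (events_c / n_c)
    se_log_rr = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical counts: 10/100 events on treatment vs 20/100 on control.
rr, lo, hi = risk_ratio_ci(10, 100, 20, 100)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # 0.5 0.25 1.01
```

A confidence interval that spans 1 (as in this hypothetical example) corresponds to the "no difference between groups" phrasing used throughout the abstract.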
The objective is to identify and assess the effectiveness of tools and methods of teaching communication skills to health professional students in undergraduate and postgraduate programs, to facilitate communication in hospitals, nursing homes and mental health institutions. For this review, effective communication will be defined as that which enhances patient satisfaction, safety, symptom resolution or psychological status, reduces the impact/burden of disease, and/or improves communication skills within undergraduate or postgraduate students. The review question is: What is the best available evidence on strategies to effectively teach communication skills to undergraduate and postgraduate medical, nursing and allied health students (nutrition and dietetics, occupational therapy, physiotherapy, speech pathology etc.)? Communication is a two-way interaction where information, meanings and feelings are shared both verbally and non-verbally. Effective communication is when the message being conveyed is understood as intended. Effective communication between the health professional and patient is increasingly being recognised as a core clinical skill. Research has identified the far-reaching benefits of effective communication skills, including enhanced patient satisfaction, patient safety, symptom resolution and improvements in functional and psychological status. Poor communication can result in the omission or misinterpretation of information, resulting in declining health of the patient.
Despite the importance of effective communication in ensuring positive outcomes for both the patient and health professional, there is concern that contemporary teaching and learning approaches do not always facilitate the development of a requisite level of communication skills, both verbal and written. A further difficulty for the current generation of communication skills teachers is that many have not had the experience of being taught communication skills themselves. Studies have shown that communication skills can be taught, although proven learning strategies should be the basis of any communication teaching. It is recommended that communication skills teachers themselves be trained in communication skills, and that assessment of communication skills be an important component of health professionals' accreditation. Not only should the communication skills of the teacher be evaluated, but the teaching modules within the program should also be evaluated on a regular basis. In all cases of communication teaching, strong faculty support is required for any communication skills programme to be successful. Early introduction of communication skills programmes, continued throughout all the years of the curriculum, has been shown to be effective in improving confidence, reducing the number of errors made and establishing a more permanent understanding of communication. Throughout the undergraduate degree, increased integration between communication and clinical teaching is important in learning to use the two skill sets together, so as to closely reflect what happens in clinical practice. Research suggests that communication training is most effective when longitudinal in nature and coinciding with ongoing professional practice education. Many studies have shown that communication skills programmes with a strong experiential and/or practical component are more effective than programmes that are solely theory or discussion based.
Simulations and role-play are effective instructional methods for developing communication skills, including opening and closing consultations, conducting the consultation in a logical manner, improving body language, using language at the level of understanding of the patient and using clear verbal and written communication. One particular strategy that has been shown to be effective is the use of videotaped consultations with standardised patients. Although measuring the effectiveness of communication skills training is difficult, there are a few common strategies used in the current literature. It has been suggested that the competence of students' verbal communication skills is best assessed during observations of simulated consultations with standardised patients, followed by constructive feedback. The quality of the constructive feedback is crucial: it needs to be specific, non-judgemental and descriptive. A number of studies have used objective structured clinical exams (OSCEs), where a marking scheme is used to evaluate different components of communication whilst ensuring a more standardised assessment for all students. Given the concern with communication skills of contemporary health professionals, and the variability in current communication education programmes, it is important that an educational model be developed to foster the development of effective communication. This model should be multi-faceted, that is, address knowledge, skill and attitude domains and cover both verbal and non-verbal forms of communication. A preliminary search of the JBI Library of Systematic Reviews, Cochrane Library of Systematic Reviews, Medline, CINAHL, DARE and PROSPERO has been performed and one existing systematic review was identified. That review investigated communication teaching in nurse education in the United Kingdom (UK).
The review discusses a number of points, including: 1) who teaches communication skills; 2) the methods used; 3) time spent on communication skills training; 4) the goals or content of the teaching; and 5) assessment, evaluation and overall effectiveness of communication teaching. From the 17 studies included in that review, it was found that team teaching provides greater depth and more perspectives, and is therefore likely to be more effective. Experiential methods, standardised patients and group work are commonly used as methods of teaching, with course content including empathy, self-awareness, interviewing skills and critical thinking. The time spent in teaching communication skills is often not reported, and information on the methods of assessment of communication skills is also limited, although standardised patients and OSCEs are most commonly used. The review concluded that there was a lack of research in this area and that the strength of conclusions from these studies was lessened by flaws in methodological design. Therefore, the question still remains as to what aspects of teaching communication are effective. Given the poor methodological design of the studies included in the above review, the time since publication of the last review (2002), and the lack of recent research specific to this topic, this review is somewhat exploratory and hopes to further explain effective methods of communication teaching and evaluation.
People experiencing acute psychotic illnesses, especially those associated with agitated or violent behaviour, may require urgent pharmacological tranquillisation or sedation. Droperidol, a butyrophenone antipsychotic, has been used for this purpose in several countries. To estimate the effects of droperidol, including its cost-effectiveness, when compared to placebo, other 'standard' or 'non-standard' treatments, or other forms of management of psychotic illness, in controlling acutely disturbed behaviour and reducing psychotic symptoms in people with schizophrenia-like illnesses. We updated previous searches by searching the Cochrane Schizophrenia Group Register (18 December 2015). We searched references of all identified studies for further trial citations and contacted authors of trials. We supplemented these electronic searches by handsearching reference lists and contacting both the pharmaceutical industry and relevant authors. We included all randomised controlled trials (RCTs) with useable data that compared droperidol to any other treatment for people acutely ill with suspected acute psychotic illnesses, including schizophrenia, schizoaffective disorder, mixed affective disorders, the manic phase of bipolar disorder or a brief psychotic episode. For included studies, we assessed quality, risk of bias and extracted data. We excluded data when more than 50% of participants were lost to follow-up. For binary outcomes, we calculated standard estimates of risk ratio (RR) and the corresponding 95% confidence intervals (CI). We created a 'Summary of findings' table using GRADE. We identified four relevant trials from the update search (previous version of this review included only two trials). When droperidol was compared with placebo, for the outcome of tranquillisation or asleep by 30 minutes we found evidence of a clear difference (1 RCT, N = 227, RR 1.18, 95% CI 1.05 to 1.31, high-quality evidence). 
There was a clear demonstration of reduced risk of needing additional medication after 60 minutes for the droperidol group (1 RCT, N = 227, RR 0.55, 95% CI 0.36 to 0.85, high-quality evidence). There was no evidence that droperidol caused more cardiovascular arrhythmia (1 RCT, N = 227, RR 0.34, 95% CI 0.01 to 8.31, moderate-quality evidence) or respiratory airway obstruction (1 RCT, N = 227, RR 0.62, 95% CI 0.15 to 2.52, low-quality evidence) than placebo. For 'being ready for discharge', there was no clear difference between groups (1 RCT, N = 227, RR 1.16, 95% CI 0.90 to 1.48, high-quality evidence). There were no data for mental state and costs. Similarly, when droperidol was compared to haloperidol, for the outcome of tranquillisation or asleep by 30 minutes we found no evidence of a clear difference (1 RCT, N = 228, RR 1.01, 95% CI 0.93 to 1.09, high-quality evidence). There was a clear demonstration of reduced risk of needing additional medication after 60 minutes for participants in the droperidol group (2 RCTs, N = 255, RR 0.37, 95% CI 0.16 to 0.90, high-quality evidence). There was no evidence that droperidol caused more cardiovascular hypotension (1 RCT, N = 228, RR 2.80, 95% CI 0.30 to 26.49, moderate-quality evidence) or cardiovascular hypotension/desaturation (1 RCT, N = 228, RR 2.80, 95% CI 0.12 to 67.98, low-quality evidence) than haloperidol. There was no suggestion that use of droperidol was unsafe. For mental state, there was no evidence of a clear difference between the efficacy of droperidol compared to haloperidol (Scale for Quantification of Psychotic Symptom Severity, 1 RCT, N = 40, mean difference (MD) 0.11, 95% CI -0.07 to 0.29, low-quality evidence). There were no data for service use and costs. When droperidol was compared with midazolam, for the outcome of tranquillisation or asleep by 30 minutes we found droperidol to be less acutely tranquillising than midazolam (1 RCT, N = 153, RR 0.96, 95% CI 0.72 to 1.28, high-quality evidence).
As regards the need for additional medication by 60 minutes after initial adequate sedation, we found no clear effect (1 RCT, N = 153, RR 0.54, 95% CI 0.24 to 1.20, moderate-quality evidence). In terms of adverse effects, we found no statistically significant differences between the two drugs for either airway obstruction (1 RCT, N = 153, RR 0.13, 95% CI 0.01 to 2.55, low-quality evidence) or respiratory hypoxia (1 RCT, N = 153, RR 0.70, 95% CI 0.16 to 3.03, moderate-quality evidence), but use of midazolam did result in three people (out of around 70) needing some sort of airway management, with no such events in the droperidol group. There were no data for mental state, service use and costs. Furthermore, when droperidol was compared to olanzapine, for the outcome of tranquillisation or asleep by any time point, we found no clear differences between the older drug (droperidol) and olanzapine (e.g. at 30 minutes: 1 RCT, N = 221, RR 1.02, 95% CI 0.94 to 1.11, high-quality evidence). There was a suggestion that participants allocated droperidol needed less additional medication after 60 minutes than people given olanzapine (1 RCT, N = 221, RR 0.56, 95% CI 0.36 to 0.87, high-quality evidence). There was no evidence that droperidol caused more cardiovascular arrhythmia (1 RCT, N = 221, RR 0.32, 95% CI 0.01 to 7.88, moderate-quality evidence) or respiratory airway obstruction (1 RCT, N = 221, RR 0.97, 95% CI 0.20 to 4.72, low-quality evidence) than olanzapine. For 'being ready for discharge', there was no difference between groups (1 RCT, N = 221, RR 1.06, 95% CI 0.83 to 1.34, high-quality evidence). There were no data for mental state and costs. Previously, the use of droperidol was justified on the basis of experience rather than evidence from well-conducted and reported randomised trials. However, this update found high-quality evidence with minimal risk of bias to support the use of droperidol for acute psychosis.
Also, we found no evidence to suggest that droperidol should not be a treatment option for people acutely ill and disturbed because of serious mental illnesses.
This is an updated version of the original Cochrane Review published in 2010, Issue 9, and last updated in 2014, Issue 4. Non-invasive brain stimulation techniques aim to induce an electrical stimulation of the brain in an attempt to reduce chronic pain by directly altering brain activity. They include repetitive transcranial magnetic stimulation (rTMS), cranial electrotherapy stimulation (CES), transcranial direct current stimulation (tDCS), transcranial random noise stimulation (tRNS) and reduced impedance non-invasive cortical electrostimulation (RINCE). To evaluate the efficacy of non-invasive cortical stimulation techniques in the treatment of chronic pain. For this update we searched CENTRAL, MEDLINE, Embase, CINAHL, PsycINFO, LILACS and clinical trials registers from July 2013 to October 2017. Randomised and quasi-randomised studies of rTMS, CES, tDCS, RINCE and tRNS if they employed a sham stimulation control group, recruited patients over the age of 18 years with pain of three months' duration or more, and measured pain as an outcome. Outcomes of interest were pain intensity measured using visual analogue scales or numerical rating scales, disability, quality of life and adverse events. Two review authors independently extracted and verified data. Where possible we entered data into meta-analyses, excluding studies judged as high risk of bias. We used the GRADE system to assess the quality of evidence for core comparisons, and created three 'Summary of findings' tables. We included an additional 38 trials (involving 1225 randomised participants) in this update, making a total of 94 trials in the review (involving 2983 randomised participants). This update included a total of 42 rTMS studies, 11 CES, 36 tDCS, two RINCE and two tRNS. One study evaluated both rTMS and tDCS. We judged only four studies as low risk of bias across all key criteria. 
Using the GRADE criteria we judged the quality of evidence for each outcome, and for all comparisons, as low or very low; in large part this was due to issues of blinding and of precision.

rTMS
Meta-analysis of rTMS studies versus sham for pain intensity at short-term follow-up (0 to < 1 week postintervention) (27 studies, involving 655 participants) demonstrated a small effect with heterogeneity (standardised mean difference (SMD) -0.22, 95% confidence interval (CI) -0.29 to -0.16, low-quality evidence). This equates to a 7% (95% CI 5% to 9%) reduction in pain, or a 0.40 (95% CI 0.53 to 0.32) point reduction on a 0 to 10 pain intensity scale, which does not meet the minimum clinically important difference threshold of 15% or greater. Pre-specified subgroup analyses found no evidence of an effect of low-frequency stimulation (low-quality evidence) or of rTMS applied to the prefrontal cortex (very low-quality evidence) compared to sham for reducing pain intensity at short-term follow-up. High-frequency stimulation of the motor cortex in single-dose studies was associated with a small reduction in pain intensity at short-term follow-up (low-quality evidence, pooled n = 249, SMD -0.38, 95% CI -0.49 to -0.27). This equates to a 12% (95% CI 9% to 16%) reduction in pain, or a 0.77 (95% CI 0.55 to 0.99) point change on a 0 to 10 pain intensity scale, which does not achieve the minimum clinically important difference threshold of 15% or greater. The results from multiple-dose studies were heterogeneous and there was no evidence of an effect in this subgroup (very low-quality evidence). We did not find evidence that rTMS improved disability.
Meta-analysis of studies of rTMS versus sham for quality of life (measured using the Fibromyalgia Impact Questionnaire (FIQ)) at short-term follow-up demonstrated a positive effect (MD -10.80, 95% CI -15.04 to -6.55, low-quality evidence).

CES
For CES (five studies, 270 participants) we found no evidence of a difference between active stimulation and sham (SMD -0.24, 95% CI -0.48 to 0.01, low-quality evidence) for pain intensity. We found no evidence relating to the effectiveness of CES on disability. One study (36 participants) of CES versus sham for quality of life (measured using the FIQ) at short-term follow-up demonstrated a positive effect (MD -25.05, 95% CI -37.82 to -12.28, very low-quality evidence).

tDCS
Analysis of tDCS studies (27 studies, 747 participants) showed heterogeneity and a difference between active and sham stimulation (SMD -0.43, 95% CI -0.63 to -0.22, very low-quality evidence) for pain intensity. This equates to a reduction of 0.82 (95% CI 0.42 to 1.2) points, or a percentage change of 17% (95% CI 9% to 25%) of the control group outcome. This point estimate meets our threshold for a minimum clinically important difference, though the lower confidence interval is substantially below that threshold. We found evidence of small study bias in the tDCS analyses. We did not find evidence that tDCS improved disability. Meta-analysis of studies of tDCS versus sham for quality of life (measured using different scales across studies) at short-term follow-up demonstrated a positive effect (SMD 0.66, 95% CI 0.21 to 1.11, low-quality evidence).

Adverse events
All forms of non-invasive brain stimulation and sham stimulation appear to be frequently associated with minor or transient side effects, and there were two reported incidences of seizure, both related to the active rTMS intervention in the included studies. However, many studies did not adequately report adverse events.
There is very low-quality evidence that single doses of high-frequency rTMS of the motor cortex and tDCS may have short-term effects on chronic pain and quality of life but multiple sources of bias exist that may have influenced the observed effects. We did not find evidence that low-frequency rTMS, rTMS applied to the dorsolateral prefrontal cortex and CES are effective for reducing pain intensity in chronic pain. The broad conclusions of this review have not changed substantially for this update. There remains a need for substantially larger, rigorously designed studies, particularly of longer courses of stimulation. Future evidence may substantially impact upon the presented results.
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
This is an updated version of the original Cochrane Review published in 2010, Issue 9, and last updated in 2014, Issue 4. Non-invasive brain stimulation techniques aim to induce an electrical stimulation of the brain in an attempt to reduce chronic pain by directly altering brain activity. They include repetitive transcranial magnetic stimulation (rTMS), cranial electrotherapy stimulation (CES), transcranial direct current stimulation (tDCS), transcranial random noise stimulation (tRNS) and reduced impedance non-invasive cortical electrostimulation (RINCE). To evaluate the efficacy of non-invasive cortical stimulation techniques in the treatment of chronic pain. For this update we searched CENTRAL, MEDLINE, Embase, CINAHL, PsycINFO, LILACS and clinical trials registers from July 2013 to October 2017. Randomised and quasi-randomised studies of rTMS, CES, tDCS, RINCE and tRNS if they employed a sham stimulation control group, recruited patients over the age of 18 years with pain of three months' duration or more, and measured pain as an outcome. Outcomes of interest were pain intensity measured using visual analogue scales or numerical rating scales, disability, quality of life and adverse events. Two review authors independently extracted and verified data. Where possible we entered data into meta-analyses, excluding studies judged as high risk of bias. We used the GRADE system to assess the quality of evidence for core comparisons, and created three 'Summary of findings' tables. We included an additional 38 trials (involving 1225 randomised participants) in this update, making a total of 94 trials in the review (involving 2983 randomised participants). This update included a total of 42 rTMS studies, 11 CES, 36 tDCS, two RINCE and two tRNS. One study evaluated both rTMS and tDCS. We judged only four studies as low risk of bias across all key criteria. 
Using the GRADE criteria we judged the quality of evidence for each outcome, and for all comparisons as low or very low; in large part this was due to issues of blinding and of precision.

rTMS
Meta-analysis of rTMS studies versus sham for pain intensity at short-term follow-up (0 to < 1 week postintervention; 27 studies, involving 655 participants) demonstrated a small effect with heterogeneity (standardised mean difference (SMD) -0.22, 95% confidence interval (CI) -0.29 to -0.16, low-quality evidence). This equates to a 7% (95% CI 5% to 9%) reduction in pain, or a 0.40 (95% CI 0.53 to 0.32) point reduction on a 0 to 10 pain intensity scale, which does not meet the minimum clinically important difference threshold of 15% or greater. Pre-specified subgroup analyses did not find a difference between low-frequency stimulation (low-quality evidence) and rTMS applied to the prefrontal cortex compared to sham for reducing pain intensity at short-term follow-up (very low-quality evidence). High-frequency stimulation of the motor cortex in single-dose studies was associated with a small reduction in pain intensity at short-term follow-up (low-quality evidence, pooled n = 249, SMD -0.38, 95% CI -0.49 to -0.27). This equates to a 12% (95% CI 9% to 16%) reduction in pain, or a 0.77 (95% CI 0.55 to 0.99) point change on a 0 to 10 pain intensity scale, which does not achieve the minimum clinically important difference threshold of 15% or greater. The results from multiple-dose studies were heterogeneous and there was no evidence of an effect in this subgroup (very low-quality evidence). We did not find evidence that rTMS improved disability. 
Meta-analysis of studies of rTMS versus sham for quality of life (measured using the Fibromyalgia Impact Questionnaire (FIQ)) at short-term follow-up demonstrated a positive effect (MD -10.80, 95% CI -15.04 to -6.55, low-quality evidence).

CES
For CES (five studies, 270 participants) we found no evidence of a difference between active stimulation and sham (SMD -0.24, 95% CI -0.48 to 0.01, low-quality evidence) for pain intensity. We found no evidence relating to the effectiveness of CES on disability. One study (36 participants) of CES versus sham for quality of life (measured using the FIQ) at short-term follow-up demonstrated a positive effect (MD -25.05, 95% CI -37.82 to -12.28, very low-quality evidence).

tDCS
Analysis of tDCS studies (27 studies, 747 participants) showed heterogeneity and a difference between active and sham stimulation (SMD -0.43, 95% CI -0.63 to -0.22, very low-quality evidence) for pain intensity. This equates to a reduction of 0.82 (95% CI 0.42 to 1.2) points, or a percentage change of 17% (95% CI 9% to 25%) of the control group outcome. This point estimate meets our threshold for a minimum clinically important difference, though the lower confidence interval is substantially below that threshold. We found evidence of small study bias in the tDCS analyses. We did not find evidence that tDCS improved disability. Meta-analysis of studies of tDCS versus sham for quality of life (measured using different scales across studies) at short-term follow-up demonstrated a positive effect (SMD 0.66, 95% CI 0.21 to 1.11, low-quality evidence).

Adverse events
All forms of non-invasive brain stimulation and sham stimulation appear to be frequently associated with minor or transient side effects, and there were two reported incidences of seizure, both related to the active rTMS intervention in the included studies. However, many studies did not adequately report adverse events. 
There is very low-quality evidence that single doses of high-frequency rTMS of the motor cortex and tDCS may have short-term effects on chronic pain and quality of life but multiple sources of bias exist that may have influenced the observed effects. We did not find evidence that low-frequency rTMS, rTMS applied to the dorsolateral prefrontal cortex and CES are effective for reducing pain intensity in chronic pain. The broad conclusions of this review have not changed substantially for this update. There remains a need for substantially larger, rigorously designed studies, particularly of longer courses of stimulation. Future evidence may substantially impact upon the presented results.
This is an updated version of the Cochrane Review previously published in 2016. This review is one in a series of Cochrane Reviews investigating pair-wise monotherapy comparisons. Epilepsy is a common neurological condition in which abnormal electrical discharges from the brain cause recurrent unprovoked seizures. It is believed that with effective drug treatment, up to 70% of individuals with active epilepsy have the potential to become seizure-free and go into long-term remission shortly after starting therapy with a single antiepileptic drug in monotherapy. Worldwide, carbamazepine and phenobarbitone are commonly used broad-spectrum antiepileptic drugs, suitable for most epileptic seizure types. Carbamazepine is a current first-line treatment for focal onset seizures, and is used in the USA and Europe. Phenobarbitone is no longer considered a first-line treatment because of concerns over associated adverse events, particularly documented behavioural adverse events in children treated with the drug. However, phenobarbitone is still commonly used in low- and middle-income countries because of its low cost. No consistent differences in efficacy have been found between carbamazepine and phenobarbitone in individual trials; however, the confidence intervals generated by these trials are wide, and therefore, synthesising the data of the individual trials may show differences in efficacy. To review the time to treatment failure, remission and first seizure with carbamazepine compared with phenobarbitone when used as monotherapy in people with focal onset seizures (simple or complex focal and secondarily generalised), or generalised onset tonic-clonic seizures (with or without other generalised seizure types). 
For the latest update, we searched the following databases on 24 May 2018: the Cochrane Register of Studies (CRS Web), which includes Cochrane Epilepsy's Specialized Register and CENTRAL; MEDLINE; the US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov); and the World Health Organization International Clinical Trials Registry Platform (ICTRP). We handsearched relevant journals and contacted pharmaceutical companies, original trial investigators, and experts in the field. Randomised controlled trials comparing monotherapy with either carbamazepine or phenobarbitone in children or adults with focal onset seizures or generalised onset tonic-clonic seizures. This was an individual participant data (IPD) review. Our primary outcome was time to treatment failure. Our secondary outcomes were time to first seizure post-randomisation, time to six-month remission, time to 12-month remission, and incidence of adverse events. We used Cox proportional hazards regression models to obtain trial-specific estimates of hazard ratios (HRs), with 95% confidence intervals (CIs), using the generic inverse variance method to obtain the overall pooled HR and 95% CI. We included 13 trials in this review and IPD were available for 836 individuals out of 1455 eligible individuals from six trials (57% of the potential data). 
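The generic inverse variance method named above combines trial-specific log hazard ratios weighted by the reciprocal of their variances. A minimal Python sketch of the fixed-effect version follows; the per-trial values are made up for illustration, not data from this review:

```python
import math

def pool_hazard_ratios(log_hrs, variances, z=1.96):
    """Fixed-effect generic inverse variance pooling of log hazard ratios.

    Each trial contributes weight 1/variance; the pooled log HR is the
    weighted mean, and its standard error is sqrt(1 / sum of weights).
    """
    weights = [1.0 / v for v in variances]
    pooled_log = sum(w * lh for w, lh in zip(weights, log_hrs)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    pooled_hr = math.exp(pooled_log)
    ci = (math.exp(pooled_log - z * se), math.exp(pooled_log + z * se))
    return pooled_hr, ci

# Hypothetical per-trial log HRs and variances (NOT taken from the review)
hr, (lo, hi) = pool_hazard_ratios([-0.4, -0.3], [0.04, 0.09])
```

A random-effects version would add an estimated between-trial variance component to each trial's variance before weighting.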
For remission outcomes, an HR of less than 1 indicates an advantage for phenobarbitone; for first seizure and treatment failure outcomes, an HR of less than 1 indicates an advantage for carbamazepine. Results for the primary outcome of the review were: time to treatment failure for any reason related to treatment (pooled HR adjusted for seizure type for 676 participants: 0.66, 95% CI 0.50 to 0.86, moderate-quality evidence), time to treatment failure due to adverse events (pooled HR adjusted for seizure type for 619 participants: 0.69, 95% CI 0.49 to 0.97, low-quality evidence), and time to treatment failure due to lack of efficacy (pooled HR adjusted for seizure type for 487 participants: 0.54, 95% CI 0.38 to 0.78, moderate-quality evidence), showing a statistically significant advantage for carbamazepine compared to phenobarbitone. For our secondary outcomes, we did not find any statistically significant differences between carbamazepine and phenobarbitone: time to first seizure post-randomisation (pooled HR adjusted for seizure type for 822 participants: 1.13, 95% CI 0.93 to 1.38, moderate-quality evidence), time to 12-month remission (pooled HR adjusted for seizure type for 683 participants: 1.09, 95% CI 0.84 to 1.40, low-quality evidence), and time to six-month remission (pooled HR adjusted for seizure type for 683 participants: 1.01, 95% CI 0.81 to 1.24, low-quality evidence). Results of these secondary outcomes suggest that there may be an association between treatment effect in terms of efficacy and seizure type; that is, participants with focal onset seizures experience seizure recurrence later, and hence remission of seizures earlier, on phenobarbitone than carbamazepine, and vice versa for individuals with generalised seizures. 
It is likely that the analyses of these outcomes were confounded by several methodological issues and misclassification of seizure type, which could have introduced heterogeneity and bias into the results of this review. Limited information was available regarding adverse events in the trials and we could not compare the rates of adverse events between carbamazepine and phenobarbitone. Some adverse events reported on both drugs were abdominal pain, nausea and vomiting, drowsiness, motor and cognitive disturbances, dysmorphic side effects (such as rash), and behavioural side effects in three paediatric trials. Moderate-quality evidence from this review suggests that carbamazepine is likely to be a more effective drug than phenobarbitone in terms of treatment retention (treatment failures due to lack of efficacy or adverse events or both). Moderate- to low-quality evidence from this review also suggests an association between treatment efficacy and seizure type in terms of seizure recurrence and seizure remission, with an advantage for phenobarbitone for focal onset seizures and an advantage for carbamazepine for generalised onset seizures. However, some of the trials contributing to the analyses had methodological inadequacies and inconsistencies that may have impacted upon the results of this review. Therefore, we do not suggest that the results of this review alone should form the basis of a treatment choice for a patient with newly onset seizures. We recommend that future trials should be designed to the highest quality possible, with consideration of masking, choice of population, classification of seizure type, duration of follow-up, choice of outcomes and analysis, and presentation of results.
Percutaneous vertebroplasty remains widely used to treat osteoporotic vertebral fractures although our 2015 Cochrane review did not support its role in routine practice. To update the available evidence of the benefits and harms of vertebroplasty for treatment of osteoporotic vertebral fractures. We updated the search of CENTRAL, MEDLINE and Embase and trial registries to 15 November 2017. We included randomised and quasi-randomised controlled trials (RCTs) of adults with painful osteoporotic vertebral fractures, comparing vertebroplasty with placebo (sham), usual care, or another intervention. As it is least prone to bias, vertebroplasty compared with placebo was the primary comparison. Major outcomes were mean overall pain, disability, disease-specific and overall health-related quality of life, patient-reported treatment success, new symptomatic vertebral fractures and number of other serious adverse events. We used standard methodologic procedures expected by Cochrane. Twenty-one trials were included: five compared vertebroplasty with placebo (541 randomised participants), eight with usual care (1136 randomised participants), seven with kyphoplasty (968 randomised participants) and one compared vertebroplasty with facet joint glucocorticoid injection (217 randomised participants). Trial size varied from 46 to 404 participants, most participants were female, mean age ranged between 62.6 and 81 years, and mean symptom duration varied from a week to more than six months.Four placebo-controlled trials were at low risk of bias and one was possibly susceptible to performance and detection bias. Other trials were at risk of bias for several criteria, most notably due to lack of participant and personnel blinding.Compared with placebo, high- to moderate-quality evidence from five trials indicates that vertebroplasty provides no clinically important benefits with respect to pain, disability, disease-specific or overall quality of life or treatment success at one month. 
Evidence for quality of life and treatment success was downgraded due to possible imprecision. Evidence was not downgraded for potential publication bias as only one placebo-controlled trial remains unreported. Mean pain (on a scale of zero to 10, higher scores indicate more pain) was five points with placebo and 0.7 points better (0.3 better to 1.2 better) with vertebroplasty, an absolute pain reduction of 7% (3% better to 12% better; minimal clinically important difference is 15%) and relative reduction of 10% (4% better to 17% better) (five trials, 535 participants). Mean disability measured by the Roland-Morris Disability Questionnaire (scale range zero to 23, higher scores indicate worse disability) was 14.2 points in the placebo group and 1.5 points better (0.4 better to 2.6 better) in the vertebroplasty group, absolute improvement 7% (2% to 11% better), relative improvement 9% better (2% to 15% better) (four trials, 472 participants). Disease-specific quality of life measured by the Quality of Life Questionnaire of the European Foundation for Osteoporosis (QUALEFFO) (scale zero to 100, higher scores indicating worse quality of life) was 62 points in the placebo group and 2.3 points better (1.4 points worse to 6.7 points better), an absolute improvement of 2% (1% worse to 6% better); relative improvement 4% better (2% worse to 10% better) (three trials, 351 participants). Overall quality of life (European Quality of Life (EQ5D), zero = death to 1 = perfect health, higher scores indicate greater quality of life) was 0.38 points in the placebo group and 0.05 points better (0.01 better to 0.09 better) in the vertebroplasty group, absolute improvement: 5% (1% to 9% better), relative improvement: 18% (4% to 32% better) (three trials, 285 participants). 
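The absolute improvements quoted above correspond to the mean difference expressed as a percentage of each instrument's scale range. A small check of that arithmetic follows; relative improvements additionally depend on (possibly adjusted) control-group means, so only the absolute figures are reproduced:

```python
def absolute_improvement(mean_difference, scale_range):
    """Mean difference as a percentage of the instrument's scale range."""
    return 100.0 * mean_difference / scale_range

pain = absolute_improvement(0.7, 10)       # pain, 0 to 10 scale
rmdq = absolute_improvement(1.5, 23)       # Roland-Morris, 0 to 23 scale
qualeffo = absolute_improvement(2.3, 100)  # QUALEFFO, 0 to 100 scale
eq5d = absolute_improvement(0.05, 1)       # EQ5D, 0 to 1 scale
```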
In one trial (78 participants), 9/40 (or 225 per 1000) people perceived that treatment was successful in the placebo group compared with 12/38 (or 315 per 1000; 95% CI 150 to 664) in the vertebroplasty group, RR 1.40 (95% CI 0.67 to 2.95), absolute difference: 9% more reported success (11% fewer to 29% more); relative change: 40% more reported success (33% fewer to 195% more). Low-quality evidence (downgraded due to imprecision and potential for bias from the usual-care controlled trials) indicates uncertainty around the risk estimates of harms with vertebroplasty. The incidence of new symptomatic vertebral fractures (from six trials) was 48/418 (95 per 1000; range 34 to 264) in the vertebroplasty group compared with 31/422 (73 per 1000) in the control group; RR 1.29 (95% CI 0.46 to 3.62). The incidence of other serious adverse events (five trials) was 16/408 (34 per 1000; range 18 to 62) in the vertebroplasty group compared with 23/413 (56 per 1000) in the control group; RR 0.61 (95% CI 0.33 to 1.10). Notably, serious adverse events reported with vertebroplasty included osteomyelitis, cord compression, thecal sac injury and respiratory failure. Our subgroup analyses indicate that the effects did not differ according to duration of pain (acute versus subacute). Including data from the eight trials that compared vertebroplasty with usual care in a sensitivity analysis altered the primary results, with all combined analyses displaying considerable heterogeneity. We found high- to moderate-quality evidence that vertebroplasty has no important benefit in terms of pain, disability, quality of life or treatment success in the treatment of acute or subacute osteoporotic vertebral fractures in routine practice when compared with a sham procedure. 
Results were consistent across the studies irrespective of the average duration of pain.Sensitivity analyses confirmed that open trials comparing vertebroplasty with usual care are likely to have overestimated any benefit of vertebroplasty. Correcting for these biases would likely drive any benefits observed with vertebroplasty towards the null, in keeping with findings from the placebo-controlled trials.Numerous serious adverse events have been observed following vertebroplasty. However due to the small number of events, we cannot be certain about whether or not vertebroplasty results in a clinically important increased risk of new symptomatic vertebral fractures and/or other serious adverse events. Patients should be informed about both the high- to moderate-quality evidence that shows no important benefit of vertebroplasty and its potential for harm.
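The treatment-success figures in the single trial above follow directly from the raw counts (12/38 reported success with vertebroplasty versus 9/40 with placebo); a small Python check of that arithmetic:

```python
def risk_ratio(events_a, n_a, events_b, n_b):
    """Risk ratio, absolute risk difference, and per-1000 risks
    computed from raw 2x2 counts (group a vs group b)."""
    risk_a = events_a / n_a  # intervention group risk
    risk_b = events_b / n_b  # control group risk
    return risk_a / risk_b, risk_a - risk_b, (1000 * risk_a, 1000 * risk_b)

# Counts as reported: vertebroplasty 12/38, placebo 9/40
rr, abs_diff, (per1000_v, per1000_p) = risk_ratio(12, 38, 9, 40)
```

The relative change quoted in the text is simply the risk ratio minus one, expressed as a percentage (here about 40% more reported success).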
Traumatic hyphema is the entry of blood into the anterior chamber (the space between the cornea and iris) subsequent to a blow or a projectile striking the eye. Hyphema uncommonly causes permanent loss of vision. Associated trauma (e.g. corneal staining, traumatic cataract, angle recession glaucoma, optic atrophy) may seriously affect vision, and such complications can lead to permanent impairment of vision. People with sickle cell trait/disease may be particularly susceptible to elevated intraocular pressure. If rebleeding occurs, the rates and severity of complications increase. To assess the effectiveness of various medical interventions in the management of traumatic hyphema. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2018, Issue 6); MEDLINE Ovid; Embase.com; PubMed (1948 to June 2018); the ISRCTN registry; ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP). The date of the search was 28 June 2018. Two review authors independently assessed the titles and abstracts of all reports identified by the electronic and manual searches. In this review, we included randomized and quasi-randomized trials that compared various medical (non-surgical) interventions versus other medical interventions or control groups for the treatment of traumatic hyphema following closed-globe trauma. We applied no restrictions regarding age, gender, severity of the closed-globe trauma, or level of visual acuity at the time of enrollment. Two review authors independently extracted the data for the primary outcomes, visual acuity and time to resolution of primary hemorrhage, and secondary outcomes including: secondary hemorrhage and time to rebleed; risk of corneal blood staining, glaucoma or elevated intraocular pressure, optic atrophy, or peripheral anterior synechiae; adverse events; and duration of hospitalization. 
We entered and analyzed data using Review Manager 5. We performed meta-analyses using a fixed-effect model and reported dichotomous outcomes as risk ratios (RR) and continuous outcomes as mean differences (MD). We included 20 randomized and seven quasi-randomized studies with a total of 2643 participants. Interventions included antifibrinolytic agents (systemic and topical aminocaproic acid, tranexamic acid, and aminomethylbenzoic acid), corticosteroids (systemic and topical), cycloplegics, miotics, aspirin, conjugated estrogens, traditional Chinese medicine, monocular versus bilateral patching, elevation of the head, and bed rest.We found no evidence of an effect on visual acuity for any intervention, whether measured within two weeks (short term) or for longer periods. In a meta-analysis of two trials, we found no evidence of an effect of aminocaproic acid on long-term visual acuity (RR 1.03, 95% confidence interval (CI) 0.82 to 1.29) or final visual acuity measured up to three years after the hyphema (RR 1.05, 95% CI 0.93 to 1.18). Eight trials evaluated the effects of various interventions on short-term visual acuity; none of these interventions was measured in more than one trial. No intervention showed a statistically significant effect (RRs ranged from 0.75 to 1.10). Similarly, visual acuity measured for longer periods in four trials evaluating different interventions was also not statistically significant (RRs ranged from 0.82 to 1.02). The evidence supporting these findings was of low or very low certainty.Systemic aminocaproic acid reduced the rate of recurrent hemorrhage (RR 0.28, 95% CI 0.13 to 0.60) as assessed in six trials with 330 participants. A sensitivity analysis omitting two studies not using an intention-to-treat analysis reduced the strength of the evidence (RR 0.43, 95% CI 0.17 to 1.08). We obtained similar results for topical aminocaproic acid (RR 0.48, 95% CI 0.20 to 1.10) in two studies with 121 participants. 
We assessed the certainty of these findings as low and very low, respectively. Systemic tranexamic acid had a significant effect in reducing the rate of secondary hemorrhage (RR 0.31, 95% CI 0.17 to 0.55) in five trials with 578 participants, as did aminomethylbenzoic acid as reported in one study (RR 0.10, 95% CI 0.02 to 0.41). The evidence to support an associated reduction in the risk of complications from secondary hemorrhage (i.e. corneal blood staining, peripheral anterior synechiae, elevated intraocular pressure, and development of optic atrophy) by antifibrinolytics was limited by the small number of these events. Use of aminocaproic acid was associated with increased nausea, vomiting, and other adverse events compared with placebo. We found no evidence of an effect on the number of adverse events with the use of systemic versus topical aminocaproic acid or with standard versus lower drug dose. The number of days for the primary hyphema to resolve appeared to be longer with the use of systemic aminocaproic acid compared with no use, but this outcome was not altered by any other intervention. The available evidence on usage of systemic or topical corticosteroids, cycloplegics, or aspirin in traumatic hyphema was limited due to the small numbers of participants and events in the trials. We found no evidence of an effect of monocular versus binocular patching or ambulation versus complete bed rest on the risk of secondary hemorrhage or time to rebleed. We found no evidence of an effect on visual acuity by any of the interventions evaluated in this review. Although evidence was limited, it appears that people with traumatic hyphema who receive aminocaproic acid or tranexamic acid are less likely to experience secondary hemorrhaging. 
However, hyphema took longer to clear in people treated with systemic aminocaproic acid. There is no good evidence to support the use of antifibrinolytic agents in the management of traumatic hyphema other than possibly to reduce the rate of secondary hemorrhage. Similarly, there is no evidence to support the use of corticosteroids, cycloplegics, or non-drug interventions (such as binocular patching, bed rest, or head elevation) in the management of traumatic hyphema. As these multiple interventions are rarely used in isolation, further research to assess the additive effect of these interventions might be of value.
Entertainment-education (E-E) media can improve behavioral intent toward health-related practices. In the era of COVID-19, millions of people can be reached by E-E media without requiring any physical contact. We have designed a short, wordless, animated video about COVID-19 hygiene practices, such as social distancing and frequent hand washing, that can be rapidly distributed through social media channels to a global audience. The E-E video's effectiveness, however, remains unclear. The study aims to achieve the following objectives: (1) quantify people's interest in watching a short, animated video about COVID-19 hygiene (abbreviated to CoVideo); (2) establish the CoVideo's effectiveness in increasing behavioural intent toward COVID-19 hygiene; and (3) establish the CoVideo's effectiveness in improving COVID-19 hygiene knowledge. The present study is a multi-site, parallel group, randomized controlled trial (RCT) comparing the effectiveness of the CoVideo against an attention placebo control (APC) video or no video. The trial has an intervention arm (CoVideo), a placebo arm (APC), and a control arm (no video). Nested in each trial arm is a list experiment and questionnaire survey, with the following ordering. Arm 1: the CoVideo, list experiment, and questionnaire survey. Arm 2: the APC video, list experiment, questionnaire survey, and CoVideo. Arm 3: the list experiment, questionnaire survey, and CoVideo. For each list experiment, participants will be randomized to a control or treatment group. The control group will receive a list of five items and the treatment group will receive the same five items plus one item about COVID-19 hygiene. We will use the list experiment to reduce response bias associated with socially desirable answers to COVID-19 questions. The questionnaire survey will include items about the participant's age, sex, country of residence, highest education, and knowledge of COVID-19 spread. 
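The list experiment described above yields a prevalence estimate without any individual disclosing the sensitive item: the treatment list carries one extra COVID-19 hygiene item, so the difference in mean item counts between groups estimates how many respondents endorse it. A minimal sketch with made-up responses (not study data):

```python
def list_experiment_estimate(treatment_counts, control_counts):
    """Difference-in-means estimator for a list experiment.

    The treatment list holds one extra (sensitive) item, so the difference
    in mean item counts estimates the prevalence of that item without any
    respondent revealing their answer individually.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment_counts) - mean(control_counts)

# Made-up responses: number of endorsed items per respondent
prevalence = list_experiment_estimate([3, 4, 2, 5, 3], [2, 3, 2, 4, 3])
```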
After completing the list experiment and questionnaire survey, participants in Arms 2 and 3 will receive the CoVideo to ensure post-trial access to treatment. This will be an online study setting. We will use Prolific Academic (ProA: https://www.prolific.co) to recruit participants and host our study on the Gorilla™ platform (www.gorilla.sc). To be eligible, participants must be between the age of 18 and 59 years (male, female, or other) and have current residence in the United States, the United Kingdom, Germany, Spain, Mexico, or France. Participants will be excluded from the study if they cannot speak English, German, French, or Spanish (since the instructions and survey questions will be available in these 4 languages only). The intervention is an E-E video about COVID-19 hygiene (CoVideo). Developed by our co-author (MA) for Stanford Medicine, the CoVideo is animated with sound effects, and has no words, speech, or text. The CoVideo shows how the novel coronavirus is spread (airborne, physical contact) and summarizes the public's response to the COVID-19 outbreak. Key components of the CoVideo are the promotion of five hygiene practices: i) social distancing and avoiding group gatherings, ii) frequently washing hands with soap and water or sanitizer, iii) cleaning surfaces at home (e.g., kitchen counters), iv) not sharing eating utensils, and v) avoidance of stockpiling essential goods (such as toilet paper and face masks). The CoVideo, which was designed for universal reach and optimized for release on social media channels, can be viewed at https://www.youtube.com/watch?v=rAj38E7vrS8. The comparators are an APC video (Arm 2) or no video (Arm 3). The APC video is similar in style to the CoVideo; it is also animated with a duration of 2.30 minutes, has sound effects but no words, speech, or text. The video message is about how small choices become actions, which become habits, which become a way of life. 
It is available at https://www.youtube.com/watch?v=_HEnohs6yYw. Each list experiment will have a control list as the comparator. The control list is needed to measure the prevalence of behavioral intent toward COVID-19 hygiene. This study will measure primary and secondary outcomes related to COVID-19 hygiene. By hygiene, we mean the adoption of behaviors or practices that reduce the chances of being infected or spreading COVID-19. As our primary outcome, we will measure changes in behavioral intent toward five hygiene practices: social distancing, washing hands, cleaning household surfaces, not sharing eating utensils, and not stockpiling essential goods. As a secondary outcome, we will measure knowledge about behaviors that can prevent the spread of COVID-19. Using a web-based randomization algorithm, Gorilla will randomly allocate participants to the intervention (CoVideo), placebo (APC), or control (no video) arm (sequence generation) at a 1:1:1 ratio. Within each trial arm, Gorilla will randomly allocate participants at a 1:1 ratio to the control or treatment group. Items in the lists will be randomly ordered to avoid order effects. The presentation order of the list experiments will also be randomized. Because ProA handles the interaction between the study investigators and participants, the participants will be completely anonymous to the study investigators. The outcome measures will be self-reported and submitted anonymously. All persons in the study team will be blinded to the group allocation. The Gorilla algorithm will randomize 6,700 participants to each trial arm, giving a total sample size of 20,100. The protocol version number is 1.0 and the date is 18 May 2020. Recruitment is expected to end by 22 June 2020. Thus far, the study investigators have recruited 2,500 participants on ProA. Of these participants, 800 have completed the study on the Gorilla platform. 
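The fixed 1:1:1 allocation described above (6,700 participants per arm, 20,100 in total) can be illustrated by shuffling a block of arm labels. This is a simplified stand-in for, not a description of, Gorilla's actual randomization algorithm:

```python
import random

def allocate(n_per_arm, arms=("CoVideo", "APC", "control"), seed=0):
    """Randomly permute a fixed block of arm labels so that each arm
    receives exactly n_per_arm participants (1:1:1 allocation)."""
    labels = [arm for arm in arms for _ in range(n_per_arm)]
    random.Random(seed).shuffle(labels)
    return labels

assignments = allocate(6700)
```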
The study and its outcomes were registered at the German Clinical Trials Register (www.drks.de) on May 12, 2020, protocol number #DRKS00021582. The study was registered before any data were collected. The full protocol is attached as an additional file, accessible from the Trials website (Additional file 1). In the interest of expediting dissemination of this material, the familiar formatting has been eliminated; this Letter serves as a summary of the key elements of the full protocol.
Branch retinal vein occlusion (BRVO) is one of the most commonly occurring retinal vascular abnormalities. The most common cause of visual loss in people with BRVO is macular oedema (MO). Grid or focal laser photocoagulation has been shown to reduce the risk of visual loss. Limitations to this treatment exist, however, and newer modalities may have equal or improved efficacy. Antiangiogenic therapy with anti-vascular endothelial growth factor (anti-VEGF) has recently been used successfully to treat MO resulting from a variety of causes. To investigate the efficacy and gather evidence from randomised controlled trials (RCTs) on the potential harms of anti-vascular endothelial growth factor (VEGF) agents for the treatment of macular oedema (MO) secondary to branch retinal vein occlusion (BRVO). We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2019, Issue 6); MEDLINE Ovid; Embase Ovid; the ISRCTN registry; ClinicalTrials.gov; and the WHO ICTRP. The date of the last search was 12 June 2019. We included randomised controlled trials (RCTs) investigating BRVO. Eligible trials had to have at least six months' follow-up where anti-VEGF treatment was compared with another treatment, no treatment, or placebo. We excluded trials where combination treatments (anti-VEGF plus other treatments) were used; and trials that investigated the dose and duration of treatment without a comparison group (other treatment/no treatment/sham). Two review authors independently extracted the data using standard methodological procedures expected by Cochrane. The primary outcome was the proportion of participants with an improvement from baseline in best-corrected visual acuity of greater than or equal to 15 letters (3 lines) on the Early Treatment in Diabetic Retinopathy Study (ETDRS) Chart at six months and 12 months of follow-up. 
The secondary outcomes were the proportion of participants who lost greater than or equal to 15 ETDRS letters (3 lines) and the mean visual acuity (VA) change at six and 12 months, as well as the change in central retinal thickness (CRT) on optical coherence tomography from baseline at six and 12 months. We also collected data on adverse events and quality of life (QoL). We found eight RCTs of 1631 participants that met the inclusion criteria after independent and duplicate review of the search results. These studies took place in Europe, North America, Eastern Mediterranean region and East Asia. Included participants were adults aged 18 or over with VA of 20/40 or worse. Studies varied by duration of disease but permitted previously treated eyes as long as there was sufficient treatment-free interval. All anti-VEGF agents (bevacizumab, ranibizumab and aflibercept) and steroids (triamcinolone and dexamethasone) were included. Overall, we judged the studies to be at moderate or unclear risk of bias. Four of the eight studies did not mask participants or outcome assessors, or both. One trial compared anti-VEGF to sham. At six months, eyes receiving anti-VEGF were significantly more likely to have a gain of 15 or more ETDRS letters (risk ratio (RR) 1.72, 95% confidence interval (CI) 1.19 to 2.49; 283 participants; moderate-certainty evidence). Mean VA was better in the anti-VEGF group at six months compared with control (mean difference (MD) 7.50 letters, 95% CI 5.29 to 9.71; 282 participants; moderate-certainty evidence). Anti-VEGF also proved more effective at reducing CRT at six months (MD -57.50 microns, 95% CI -108.63 to -6.37; 281 participants; lower CRT is better; moderate-certainty evidence). There was only very low-certainty evidence on adverse effects. There were no reports of endophthalmitis. 
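Effect estimates like the risk ratios reported above are computed from 2x2 event counts; a minimal sketch of the standard log-normal approximation follows. The counts used here are hypothetical, chosen only to illustrate the calculation, and do not reproduce any trial in the review.

```python
import math

def risk_ratio_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Risk ratio with a 95% CI via the log-normal approximation.

    events_t/n_t: events and total in the treatment group;
    events_c/n_c: events and total in the control group.
    """
    rr = (events_t / n_t) / (events_c / n_c)
    se_log = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lower = math.exp(math.log(rr) - z * se_log)
    upper = math.exp(math.log(rr) + z * se_log)
    return rr, lower, upper

# Hypothetical counts: 60/142 responders with anti-VEGF vs 35/141 with sham
rr, lower, upper = risk_ratio_ci(60, 142, 35, 141)
```

A risk ratio whose CI excludes 1 (as with RR 1.72, 95% CI 1.19 to 2.49 above) indicates a statistically significant difference at the 5% level.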
Mean change in QoL (measured using the National Eye Institute Visual Functioning Questionnaire VFQ-25) was better in people treated with anti-VEGF compared with people treated with sham (MD 7.6 higher score, 95% CI 4.3 to 10.9; 281 participants; moderate-certainty evidence). Three RCTs compared anti-VEGF with macular laser (total participants = 473). The proportion of eyes gaining 15 or more letters was greater in the anti-VEGF group at six months (RR 2.09, 95% CI 1.44 to 3.05; 2 studies, 201 participants; moderate-certainty evidence). Mean VA in the anti-VEGF groups was better than the laser groups at six months (MD 9.63 letters, 95% CI 7.23 to 12.03; 3 studies, 473 participants; moderate-certainty evidence). There was a greater reduction in CRT in the anti-VEGF group compared with the laser group at six months (MD -147.47 microns, 95% CI -200.19 to -94.75; 2 studies, 201 participants; moderate-certainty evidence). There was only very low-certainty evidence on adverse events. There were no reports of endophthalmitis. QoL outcomes were not reported. Four studies compared anti-VEGF with intravitreal steroid (875 participants). The proportion of eyes gaining 15 or more ETDRS letters was greater in the anti-VEGF group at six months (RR 1.67, 95% CI 1.33 to 2.10; 2 studies, 330 participants; high-certainty evidence) and 12 months (RR 1.76, 95% CI 1.36 to 2.28; 1 study, 307 participants; high-certainty evidence). Mean VA was better in the anti-VEGF group at six months (MD 8.22 letters, 95% CI 5.69 to 10.76; 2 studies, 330 participants; high-certainty evidence) and 12 months (MD 9.15 letters, 95% CI 6.32 to 11.97; 2 studies, 343 participants; high-certainty evidence). Mean CRT also showed a greater reduction in the anti-VEGF arm at 12 months compared with intravitreal steroid (MD -26.92 microns, 95% CI -65.88 to 12.04; 2 studies, 343 participants; moderate-certainty evidence). 
People receiving anti-VEGF showed a greater improvement in QoL at 12 months compared to those receiving steroid (MD 3.10, 95% CI 0.22 to 5.98; 1 study, 307 participants; moderate-certainty evidence). Moderate-certainty evidence suggested increased risk of cataract and raised IOP with steroids. There was only very low-certainty evidence on APTC events. No cases of endophthalmitis were observed. The available RCT evidence suggests that treatment of MO secondary to BRVO with anti-VEGF improves visual and anatomical outcomes at six and 12 months.
Intrauterine insemination (IUI), combined with ovarian stimulation (OS), has been demonstrated to be an effective treatment for infertile couples. Several agents for ovarian stimulation combined with IUI have been proposed, but it is still not clear which agents for stimulation are the most effective. This is an update of the review, first published in 2007. To assess the effects of agents for ovarian stimulation for intrauterine insemination in infertile ovulatory women. We searched the Cochrane Gynaecology and Fertility Group trials register, CENTRAL, MEDLINE, Embase, PsycINFO, CINAHL and two trial registers from their inception to November 2020. We performed reference checking and contacted study authors and experts in the field to identify additional studies. We included truly randomised controlled trials (RCTs) that compared different agents for ovarian stimulation combined with IUI for infertile ovulatory women in couples with unexplained infertility, mild male factor infertility, or minimal to mild endometriosis. We used standard methodological procedures recommended by Cochrane. In this updated review, we have included a total of 82 studies, involving 12,614 women. Due to the multitude of comparisons between different agents for ovarian stimulation, we highlight the seven most often reported here. Gonadotropins versus anti-oestrogens (13 studies) For live birth, the results of five studies were pooled and showed a probable improvement in the cumulative live birth rate for gonadotropins compared to anti-oestrogens (odds ratio (OR) 1.37, 95% confidence interval (CI) 1.05 to 1.79; I² = 30%; 5 studies, 1924 participants; moderate-certainty evidence). This suggests that if the chance of live birth following anti-oestrogens is assumed to be 22.8%, the chance following gonadotropins would be between 23.7% and 34.6%.
The pooled effect of seven studies revealed that we are uncertain whether gonadotropins lead to a higher multiple pregnancy rate compared with anti-oestrogens (OR 1.58, 95% CI 0.60 to 4.17; I² = 58%; 7 studies, 2139 participants; low-certainty evidence). Aromatase inhibitors versus anti-oestrogens (8 studies) One study reported live birth rates for this comparison. We are uncertain whether aromatase inhibitors improve live birth rate compared with anti-oestrogens (OR 0.75, 95% CI 0.51 to 1.11; 1 study, 599 participants; low-certainty evidence). This suggests that if the chance of live birth following anti-oestrogens is 23.4%, the chance following aromatase inhibitors would be between 13.5% and 25.3%. The results of pooling four studies revealed that we are uncertain whether aromatase inhibitors compared with anti-oestrogens lead to a higher multiple pregnancy rate (OR 1.28, 95% CI 0.61 to 2.68; I² = 0%; 4 studies, 1000 participants; low-certainty evidence). Gonadotropins with GnRH (gonadotropin-releasing hormone) agonist versus gonadotropins alone (4 studies) No data were available for live birth. The pooled effect of two studies revealed that we are uncertain whether gonadotropins with GnRH agonist lead to a higher multiple pregnancy rate compared to gonadotropins alone (OR 2.53, 95% CI 0.82 to 7.86; I² = 0%; 2 studies, 264 participants; very low-certainty evidence). Gonadotropins with GnRH antagonist versus gonadotropins alone (14 studies) Three studies reported live birth rate per couple, and we are uncertain whether gonadotropins with GnRH antagonist improve live birth rate compared to gonadotropins (OR 1.50, 95% CI 0.52 to 4.39; I² = 81%; 3 studies, 419 participants; very low-certainty evidence). This suggests that if the chance of a live birth following gonadotropins alone is 25.7%, the chance following gonadotropins combined with GnRH antagonist would be between 15.2% and 60.3%.
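The I² values quoted throughout these comparisons summarise between-study heterogeneity in the meta-analyses. As a reminder of what the statistic means, here is Higgins' formula computed from Cochran's Q; the Q values below are invented purely for illustration.

```python
def i_squared(q, df):
    """Higgins' I-squared: the percentage of total variation across studies
    attributable to heterogeneity rather than chance, floored at zero.
    q is Cochran's Q statistic; df is the number of studies minus one."""
    return max(0.0, (q - df) / q) * 100.0

# Invented Q values for a pool of 7 studies (df = 6)
low_het = i_squared(4.0, 6)    # Q below df: I-squared is floored at 0%
high_het = i_squared(24.0, 6)  # substantial heterogeneity: 75%
```

An I² of 81%, as for the GnRH antagonist live birth comparison above, signals considerable heterogeneity and is one reason such pooled estimates are reported with a random-effects model and low certainty.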
We are also uncertain whether gonadotropins combined with GnRH antagonist lead to a higher multiple pregnancy rate compared with gonadotropins alone (OR 1.30, 95% CI 0.74 to 2.28; I² = 0%; 10 studies, 2095 participants; moderate-certainty evidence). Gonadotropins with anti-oestrogens versus gonadotropins alone (2 studies) Neither of the studies reported data for live birth rate. We are uncertain whether gonadotropins combined with anti-oestrogens lead to a higher multiple pregnancy rate compared with gonadotropins alone, based on one study (OR 3.03, 95% CI 0.12 to 75.1; 1 study, 230 participants; low-certainty evidence). Aromatase inhibitors versus gonadotropins (6 studies) Two studies revealed that aromatase inhibitors may decrease live birth rate compared with gonadotropins (OR 0.49, 95% CI 0.34 to 0.71; I² = 0%; 2 studies, 651 participants; low-certainty evidence). This suggests that if the chance of a live birth following gonadotropins alone is 31.9%, the chance of live birth following aromatase inhibitors would be between 13.7% and 25%. We are uncertain whether aromatase inhibitors compared with gonadotropins lead to a higher multiple pregnancy rate (OR 0.69, 95% CI 0.06 to 8.17; I² = 77%; 3 studies, 731 participants; very low-certainty evidence). Aromatase inhibitors with gonadotropins versus anti-oestrogens with gonadotropins (8 studies) We are uncertain whether aromatase inhibitors combined with gonadotropins improve live birth rate compared with anti-oestrogens plus gonadotropins (OR 0.99, 95% CI 0.38 to 2.54; I² = 69%; 3 studies, 708 participants; very low-certainty evidence). This suggests that if the chance of a live birth following anti-oestrogens plus gonadotropins is 13.8%, the chance following aromatase inhibitors plus gonadotropins would be between 5.7% and 28.9%.
We are uncertain of the effect of aromatase inhibitors combined with gonadotropins compared to anti-oestrogens combined with gonadotropins on multiple pregnancy rate (OR 1.31, 95% CI 0.39 to 4.37; I² = 0%; 5 studies, 901 participants; low-certainty evidence). Based on the available results, gonadotropins probably improve cumulative live birth rate compared with anti-oestrogens (moderate-certainty evidence). Gonadotropins may also improve cumulative live birth rate when compared with aromatase inhibitors (low-certainty evidence). From the available data, there is no convincing evidence that aromatase inhibitors lead to higher live birth rates compared to anti-oestrogens. None of the agents compared lead to significantly higher multiple pregnancy rates. Based on low-certainty evidence, there does not seem to be a role for different combined therapies, nor for adding GnRH agonists or GnRH antagonists in IUI programs.
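The absolute-chance statements in this section follow from converting an odds ratio back to a risk at an assumed baseline. A short sketch of the conversion, which reproduces the review's own illustration for the gonadotropins versus anti-oestrogens comparison (baseline 22.8%, OR 95% CI 1.05 to 1.79, giving roughly 23.7% to 34.6%):

```python
def risk_from_or(baseline_risk, odds_ratio):
    """Convert an odds ratio to an absolute risk at a given baseline risk:
    multiply the baseline odds by the OR, then convert odds back to risk."""
    odds = baseline_risk / (1 - baseline_risk) * odds_ratio
    return odds / (1 + odds)

# Baseline chance of live birth following anti-oestrogens: 22.8%
lower = risk_from_or(0.228, 1.05)  # lower CI bound -> about 0.237
upper = risk_from_or(0.228, 1.79)  # upper CI bound -> about 0.346
```

Note that applying the conversion to the CI bounds, as the review does, describes the uncertainty in the relative effect at a fixed assumed baseline; it is not a CI for the absolute risk itself.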
Stevens-Johnson syndrome (SJS), toxic epidermal necrolysis (TEN), and SJS/TEN overlap syndrome are rare, severe cutaneous adverse reactions usually triggered by medications. In addition to tertiary-level supportive care, various systemic therapies have been used including glucocorticoids, intravenous immunoglobulins (IVIGs), cyclosporin, N-acetylcysteine, thalidomide, infliximab, etanercept, and plasmapheresis. There is an unmet need to understand the efficacy of these interventions. To assess the effects of systemic therapies (medicines delivered orally, intramuscularly, or intravenously) for the treatment of SJS, TEN, and SJS/TEN overlap syndrome. We searched the following databases up to March 2021: the Cochrane Skin Specialised Register, CENTRAL, MEDLINE, and Embase. We also searched five clinical trial registers, the reference lists of all included studies and of key review articles, and a number of drug manufacturer websites. We searched for errata or retractions of included studies. We included only randomised controlled trials (RCTs) and prospective observational comparative studies of participants of any age with a clinical diagnosis of SJS, TEN, or SJS/TEN overlap syndrome. We included all systemic therapies studied to date and permitted comparisons between each therapy, as well as between therapy and placebo. We used standard methodological procedures as specified by Cochrane. Our primary outcomes were SJS/TEN-specific mortality and adverse effects leading to discontinuation of SJS/TEN therapy. Secondary outcomes included time to complete re-epithelialisation, intensive care unit length of stay, total hospital length of stay, illness sequelae, and other adverse effects attributed to systemic therapy. We rated the certainty of the evidence for each outcome using GRADE. We included nine studies with a total of 308 participants (131 males and 155 females) from seven countries. We included two studies in the quantitative meta-analysis. 
We included three RCTs and six prospective, controlled observational studies. Sample sizes ranged from 10 to 91. Most studies did not report study duration or time to follow-up. Two studies reported a mean SCORe of Toxic Epidermal Necrolysis (SCORTEN) of 3 and 1.9. Seven studies did not report SCORTEN, although four of these studies reported average or ranges of body surface area (BSA) (means ranging from 44% to 51%). Two studies were set in burns units, two in dermatology wards, one in an intensive care unit, one in a paediatric ward, and three in unspecified inpatient units. Seven studies reported a mean age, which ranged from 29 to 56 years. Two studies included paediatric participants (23 children). We assessed the results from one of three RCTs as low risk of bias in all domains, one as high, and one as some concerns. We judged the results from all six prospective observational comparative studies to be at a high risk of bias. We downgraded the certainty of the evidence because of serious risk of bias concerns and for imprecision due to small numbers of participants. The interventions assessed included systemic corticosteroids, tumour necrosis factor-alpha (TNF-alpha) inhibitors, cyclosporin, thalidomide, N-acetylcysteine, IVIG, and supportive care. No data were available for the main comparisons of interest as specified in the review protocol: etanercept versus cyclosporin, etanercept versus IVIG, IVIG versus supportive care, IVIG versus cyclosporin, and cyclosporin versus corticosteroids. Corticosteroids versus no corticosteroids It is uncertain if there is any difference between corticosteroids (methylprednisolone 4 mg/kg/day for two more days after fever had subsided and no new lesions had developed) and no corticosteroids on disease-specific mortality (risk ratio (RR) 2.55, 95% confidence interval (CI) 0.72 to 9.03; 2 studies; 56 participants; very low-certainty evidence).
Time to complete re-epithelialisation, length of hospital stay, and adverse effects leading to discontinuation of therapy were not reported. IVIG versus no IVIG It is uncertain if there is any difference between IVIG (0.2 to 0.5 g/kg cumulative dose over three days) and no IVIG in risk of disease-specific mortality (RR 0.33, 95% CI 0.04 to 2.91); time to complete re-epithelialisation (mean difference (MD) -2.93 days, 95% CI -4.4 to -1.46); or length of hospital stay (MD -2.00 days, 95% CI -5.81 to 1.81). All results in this comparison were based on one study with 36 participants, and very low-certainty evidence. Adverse effects leading to discontinuation of therapy were not reported. Etanercept (TNF-alpha inhibitor) versus corticosteroids Etanercept (25 mg (50 mg if weight > 65 kg) twice weekly "until skin lesions healed") may reduce disease-specific mortality compared to corticosteroids (intravenous prednisolone 1 to 1.5 mg/kg/day "until skin lesions healed") (RR 0.51, 95% CI 0.16 to 1.63; 1 study; 91 participants; low-certainty evidence); however, the CIs were consistent with possible benefit and possible harm. Serious adverse events, such as sepsis and respiratory failure, were reported in 5 of 48 participants with etanercept and 9 of 43 participants with corticosteroids, but it was not clear if they led to discontinuation of therapy. Time to complete re-epithelialisation and length of hospital stay were not reported. Cyclosporin versus IVIG It is uncertain if there is any difference between cyclosporin (3 mg/kg/day or intravenous 1 mg/kg/day until complete re-epithelialisation, then tapered off (10 mg/day reduction every 48 hours)) and IVIG (continuous infusion 0.75 g/kg/day for 4 days (total dose 3 g/kg) in participants with normal renal function) in risk of disease-specific mortality (RR 0.13, 95% CI 0.02 to 0.98, 1 study; 22 participants; very low-certainty evidence).
Time to complete re-epithelialisation, length of hospital stay, and adverse effects leading to discontinuation of therapy were not reported. No studies measured intensive care unit length of stay. When compared to corticosteroids, etanercept may result in mortality reduction. For the following comparisons, the certainty of the evidence for disease-specific mortality is very low: corticosteroids versus no corticosteroids,  IVIG versus no IVIG and cyclosporin versus IVIG. There is a need for more multicentric studies, focused on the most important clinical comparisons, to provide reliable answers about the best treatments for SJS/TEN.
Selective dissemination of information to individuals provides a new and promising method for keeping abreast of current scientific information. Since SDI services are directed to the information needs of each individual, they are a significant step beyond group-oriented services and products, which require considerable expenditure of effort by each user as he sorts useful information from trash. However, SDI systems do require a high degree of precision in matching scientists against documents. They must operate more efficiently and economically than many current systems which occasionally provide a useful item of information to users. To meet these stringent requirements for quality, precision, efficiency, and economy, more research must be devoted to comparing and improving indexing methods, which are the basic component of all information storage and retrieval systems. It is incredible that so much money has been spent on the development and operation of scientific information systems before basic data on the comparative performance of various indexing methods have been gathered, analyzed, and confirmed by multiple investigators. The design of an effective information system would seem to require this type of basic knowledge, just as basic properties of alternative materials must be known before an engineer can design a building, bridge, or factory. Yet, except for the few studies mentioned in the previous section, research on indexing methods has been greatly neglected. Bourne's comment about studies of indexing languages is still an appropriate description of the situation: "In almost all the experimental reports, the investigator worked with an indexing language different than that of other experimenters. Consequently, no one has ever had his test results verified, or expanded, or made more precise by another experimenter" (47).
Most existing information systems are based on keyword indexing, with concepts broken into isolated terms during input operations and recombined to synthesize the original concept during search and retrieval. Such systems tend to involve imprecise indexing, with a high level of "noise" in retrieved documents, difficult search strategy involving extensive post-coordination, and lengthy, complex computer manipulations. This situation reflects the fact that many producers of indexed data originally focused the design of their systems on the production of a published product with entries printed under short, concise index headings. Production of magnetic tapes as a by-product of the publication process, and their use for retrospective searching or for SDI services, was a much later development, almost an afterthought. Yet use of these tapes is growing so rapidly that it may be time to redesign the tape-producing systems, with ease of tape use for SDI services and retrospective searching as the primary consideration, and with publication of abstract and index bulletins or title listings relegated to secondary importance (49). The use of keywords to index documents creates a high degree of disorganization in information search and retrieval operations: Information is scattered under the many different terms that can be used to index different aspects of a concept. If the large-scale, comprehensive abstracting and indexing services were based on enumerative classifications with assignment of documents to logical hierarchical categories at the time of initial indexing, then many of the specialized information centers (50) and the 1300 abstracting and indexing services (3) would be unnecessary, and much of the reindexing and reprocessing of documents, the repackaging and reworking of abstracts and index data, and the resulting overlap and duplication characteristic of current information processing could be terminated. 
Partly because of the disorganization resulting from keyword indexing, the cost of a 5-year retrospective search of information on just one data base on magnetic tapes is a major investment (16). The effort and cost required to find a few items of useful information scattered among 1,285,000 abstracts indexed on 116 full reels of magnetic tape (11 million characters per reel) which will be needed for the 5-year Eighth Collective Index to Chemical Abstracts (1967-1971) (51) staggers the imagination. In contrast, when HICLASS systems based on enumerative hierarchical classifications are used, concepts that might be useful for later retrieval are identified and related items of information are grouped together during the indexing process. These enumerative classifications, with single-hit matching, make it possible to index and retrieve ideas as intact units and to perform simple sequential searches of the very small segment of a file that deals with a given topic (31). The experiments at both the Science Information Exchange and the National Cancer Institute, as described in this article, demonstrate that automated HICLASS systems are feasible and can operate at a very satisfactory level of performance. Although considerable effort may be required for the development and constant updating of detailed enumerative classifications, HICLASS categories may facilitate organization of data at the time of input, improve the precision of matching documents with users, and greatly simplify search logic and computer manipulations. If so, then output savings and performance would more than justify input costs, and the development and use of enumerative classifications would be a better solution to information problems than the current keyword-and-coordination approach. 
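The contrast the author draws between keyword post-coordination and single-hit hierarchical lookup can be made concrete with a toy index. The class code, terms, and document numbers below are invented purely for illustration.

```python
# Keyword index: one posting set per isolated term. Retrieval must
# post-coordinate by intersecting the sets, and each term's postings
# scatter the concept across the whole file.
keyword_index = {
    "retina": {1, 2, 5},
    "vein": {2, 3},
    "occlusion": {2, 4, 5},
}

# HICLASS-style enumerative hierarchy: the intact concept was assigned
# to one category at indexing time, so retrieval is a single lookup
# into a very small segment of the file.
hiclass_index = {
    "MED.OPHTH.RETINA.VASCULAR.OCCLUSION": [2],
}

post_coordinated = (keyword_index["retina"]
                    & keyword_index["vein"]
                    & keyword_index["occlusion"])
single_hit = hiclass_index["MED.OPHTH.RETINA.VASCULAR.OCCLUSION"]
```

Both strategies retrieve document 2 here, but the keyword route requires fetching and intersecting three posting sets, and in practice suffers from the false coordinations ("noise") the essay describes; the hierarchical route trades that retrieval cost for the up-front effort of building and maintaining the enumerative classification.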
It is time to think beyond the ease of the single input step in information systems and to take a hard look at ways of easing retrieval problems for the multitude of information systems that process the indexed data (52). Indexing effort is expended only once, whereas search and retrieval effort is required by every user of a system. If information were better analyzed and organized during input operations, if more basic research were devoted to the effect of indexing methods on the performance of information systems, and if more emphasis were placed on the quality and usefulness of retrieved information, then the magnitude of problems related to the storage and retrieval of scientific information might be considerably reduced.
Tooth decay is one of the more common diseases of childhood. Slightly more than 40% of US children are already affected by the time they reach kindergarten. Primary care physicians can play an important role in prevention and control of this disease because of their ready access to this population. Unlike dentists, they see a large percentage of children during their infant and toddler years. However, few studies have been conducted on oral screenings and referrals by primary care physicians or the effectiveness of their oral health preventive activities. The purpose of this study was to determine the accuracy of pediatric primary care providers' screening and referral for Early Childhood Caries. We sought to compare independent, blinded oral screening results and referral recommendations made by primary care providers with those of a pediatric dentist, considered for purposes of the study to be the reference gold standard. The study was conducted at a private pediatric group practice in North Carolina. The practice was selected because it serves a large volume of Medicaid patients and includes a large number of pediatric primary care providers (11 pediatricians and 1 nurse practitioner). Study participants included Medicaid-eligible children younger than 36 months of age with erupted teeth. The pediatric primary care providers in this practice received 2 hours of training in infant oral health. The training consisted of a review of the study methods and clinical slides illustrating dental caries in various stages of progression. Specific instructions were given to the providers on how to recognize a cavitated carious lesion and how to determine when a dental referral is needed. Providers were instructed to refer any child with 1 or more cavitated carious lesions, soft tissue pathology, or evidence of trauma to the teeth or mouth.
Before commencing the study, calibration and a comparative analysis were performed to establish reliability and validity of the examinations performed by the pediatric dentist. Both a pediatric dentist and a pediatric primary care provider conducted a dental screening on each child and recorded carious teeth and whether a dental referral was needed. Sensitivity and specificity were calculated to compare the pediatric primary care providers' screenings to the gold standard (pediatric dentist) in 3 categories: caries at the tooth level, caries at the patient level (1 or more affected teeth), and need for referral. The final study sample consisted of 258 preschool-aged children (122 males and 136 females) with a mean age of 21.2 months (standard deviation [SD]: 9.13). One hundred eighty-four (71.3%) of the participants were white, 58 (22.5%) were black, and 16 (6.2%) were Hispanic. Tooth-Level Analysis: The pediatric dentist reported an average of 0.30 (SD: 0.005) cavitated teeth per child, whereas the pediatric primary care providers reported a mean of 0.25 (SD: 0.004). This difference was not statistically significant (t test). The pediatric dentist identified 80 (2.4%) teeth with cavitated carious lesions, whereas the pediatric primary care providers identified 64 (1.9%), 25 of which were false-positives. Their screening results include 41 false-negative teeth. Thus, the primary care providers tended to under-count the number of teeth with carious lesions. They achieved a sensitivity of 0.49 (95% confidence interval [CI]: 0.47-0.51) and a specificity of 0.99 (95% CI: 0.99-1.0) when their screening results for individual teeth were compared with the gold standard. Patient-Level Analysis: At the patient level, the pediatric dentist identified 25 (9.7%) children with 1 or more teeth affected by cavitated lesions. The pediatric primary care providers collectively identified 30 (11.6%) children who had cavitated lesions. 
They achieved a sensitivity of 0.76 (95% CI: 0.71-0.81) and a specificity of 0.95 (95% CI: 0.93-0.98) in identifying those children with cavitated carious lesions. There were 6 false-negatives and 11 false-positives when the pediatric primary care providers' findings were compared with the gold standard. At the patient-level, the positive predictive value of the dental screening was 0.63 and the negative predictive value was 0.97. Dental Referral: The pediatric dentist referred a total of 27 (10.5%) children to a dentist. Two of these children were referred for trauma and the other 25 were referred for cavities. The pediatric primary care providers referred a total of 23 (8.9%) children to a dentist. Two referrals were made because the provider was concerned about stains on the teeth, whereas the remaining 21 were referred for cavities. The pediatric primary care providers achieved a sensitivity of 0.63 (95% CI: 0.57-0.69) and a specificity of 0.98 (95% CI: 0.96-0.99) when their recommendations for referral were compared with the gold standard. The number of children receiving a referral from a pediatric primary care provider for cavities (N = 21) was less than the number of children they identified as having cavities (N = 30). The providers as a whole tended to under-refer, and only 70% of children with evidence of dental disease received a referral. After 2 hours of training in infant oral health, the pediatric primary care providers in this study achieved an adequate level of accuracy in identifying children with cavitated carious lesions. Additional training and research would be needed to optimize pediatric primary care providers' identification of carious teeth if that were the goal of screening. However, the purpose of screening by nondental personnel generally is to accurately identify those in need of referral, which does not require a tooth-by-tooth identification of cavities. 
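The patient-level accuracy measures reported above follow directly from the 2x2 counts the study implies: 25 children with cavitated lesions per the dentist, of whom the providers identified 19 (6 false-negatives), plus 11 false-positives, leaving 222 true negatives among the 258 children. A minimal sketch of the calculation:

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening-test accuracy measures from a 2x2 table of
    true/false positives and negatives against a gold standard."""
    return {
        "sensitivity": tp / (tp + fn),  # proportion of diseased detected
        "specificity": tn / (tn + fp),  # proportion of healthy cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Patient-level counts implied by the reported figures
m = screening_metrics(tp=19, fp=11, fn=6, tn=222)
# Rounds to the study's values: sensitivity 0.76, specificity 0.95,
# PPV 0.63, NPV 0.97
```

The high NPV reflects the low prevalence (about 10%) as much as provider skill: even a mediocre test looks reassuring on negatives when disease is rare, which is why sensitivity is the more demanding target for a screening program.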
Additional research is also needed to determine how to improve dental referrals by pediatric primary care providers. Results of our study suggest that dental screenings can easily be incorporated into a busy pediatrics practice and that pediatric primary care providers can significantly contribute to the overall oral health of young children by the identification of those children who need to be seen by a dentist.
1,3-Butadiene is used as an intermediate in the production of elastomers, polymers, and other chemicals. Of the 1,3-butadiene used in 1978, 44% was used to manufacture styrene-butadiene rubber (a substitute for natural rubber, produced by copolymerization of 1,3-butadiene with styrene), and 19% was used to produce polybutadiene elastomer (a substance that increases resistance of tire products to wear, heat degradation, and blowouts). Chloroprene monomer, derived from 1,3-butadiene, is used exclusively to manufacture neoprene elastomers for non-tire and latex applications. Commercial nitrile rubber, used largely in rubber hoses, seals, and gaskets for automobiles, is a copolymer of 1,3-butadiene and acrylonitrile. Acrylonitrile-butadiene-styrene resins, usually containing 20%-30% 1,3-butadiene by weight, are used to make parts for automobiles and appliances. Other polymer uses include specialty polybutadiene polymers, thermoplastic elastomers, nitrile barrier resins, and K resins(R). 1,3-Butadiene is also used as an intermediate in the production of a variety of industrial chemicals, including two fungicides, captan and captafol. It is approved by the U.S. Food and Drug Administration for use in the production of adhesives used in articles for packaging, transporting, or holding food; in components of paper and paperboard that are in contact with dry food; and as a modifier in the production of semirigid and rigid vinyl chloride plastic food-contact articles. No information was located on the levels of monomer or on its elution rate from any of the commercially available polymers. It is not known whether unreacted 1,3-butadiene migrates from packaging materials. Male and female B6C3F1 mice were exposed to air containing 1,3-butadiene (greater than 99% pure) at concentrations of 0-8,000 ppm in 15-day and 14-week inhalation studies. 
In the 15-day studies, survival was unaffected by dose, and no pathologic effects were observed; slight decreases in mean body weight occurred at the high concentrations. In the 14-week studies, mean body weight gain decreased with dose, and survival in the 5,000-ppm and 8,000-ppm groups of males was markedly reduced; no other compound-related effects were reported. Inhalation carcinogenesis studies of 1,3-butadiene were conducted by exposing groups of 50 male and female B6C3F1 mice 6 hours per day, 5 days per week, to air containing the test chemical at concentrations of 0 (chamber controls), 625, or 1,250 ppm. These studies were planned for 103-week exposures but were terminated at week 60 for male mice and week 61 for female mice because of rapidly declining survival, primarily due to neoplasia. Body weights were not affected by 1,3-butadiene. Significantly increased incidences of neoplasms at multiple sites were observed in mice exposed to 1,3-butadiene. Hemangiosarcomas of the heart occurred at increased incidences in exposed males and females (male: control, 0/50; low dose, 16/49; high dose, 7/49; female: 0/50; 11/48; 18/49). Hemangiosarcomas were also observed in the peritoneal cavity (one high dose male), subcutaneous tissue (two low dose females), and liver (one high dose female). Malignant lymphomas, diagnosed as early as week 20, were observed at increased incidences in exposed male and female mice (male: 0/50; 23/50; 29/50; female: 1/50; 10/49; 10/49). Alveolar/bronchiolar adenomas and alveolar/bronchiolar carcinomas (both separately and combined) occurred at increased incidences in exposed male and female mice (combined incidences -- male: 2/50; 14/49; 15/49; female: 3/49; 12/48; 23/49). Epithelial hyperplasia of the forestomach occurred at increased incidences in dosed mice (male: 0/49; 5/40; 7/44; female: 0/49; 5/42; 9/49). 
Papillomas of the forestomach occurred in low dose male and in low dose and high dose female mice (male: 0/49; 5/40; 0/44; female: 0/49; 4/42; 10/49). Squamous cell carcinomas of the forestomach were observed in dosed mice (male: 0/49, 2/40, 1/44; female: 0/49, 1/42, 1/49). Acinar cell carcinomas of the mammary gland were observed at an increased incidence in high dose female mice (0/50; 2/49; 6/49); adenosquamous carcinomas were found in four low dose females. The incidences of granulosa cell tumors of the ovary were increased in dosed females (0/49; 6/45; 12/48). A granulosa cell carcinoma was observed in another high dose female. Gliomas were observed in two 68- to 69-week-old low dose male mice and one high dose male mouse; brain tumors are uncommon even in 2-year-old mice. Liver necrosis occurred at increased incidences in dosed male and low dose female mice (male: 1/50, 8/49, 8/49; female: 6/50, 15/47, 6/49). Hepatocellular adenomas or carcinomas (combined) were observed at an increased incidence in high dose female mice (0/50, 2/47, 5/49). No neoplastic lesions of the nasal cavity were observed at any dose level. The following nonneoplastic lesions of the nasal cavity occurred in mice exposed at 1,250 ppm: chronic inflammation (male, 35/50; female, 2/49); fibrosis (male, 33/50; female, 2/49); cartilaginous metaplasia (male, 16/50; female, 1/49); osseous metaplasia (male, 11/50; female, 2/49); and atrophy of the sensory epithelium (male, 32/50). No nonneoplastic lesions of the nasal cavity were found in the controls. The incidence of testicular atrophy (0/50, 19/49, 11/48) or ovarian atrophy (2/49, 40/45, 40/48) was increased in exposed male or female mice. An audit of the experimental data from these studies on 1,3-butadiene was conducted by the National Toxicology Program. No data discrepancies were found that influenced the final interpretation of these experiments. 
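The report calls these incidence increases significant without naming the statistical test; one conventional choice for sparse 2x2 tumour-incidence tables (an assumption here, not necessarily the NTP analysis) is Fisher's exact test, sketched below for the female cardiac hemangiosarcomas quoted above (high dose 18/49 versus control 0/50):

```python
# One-sided Fisher's exact test (hypergeometric tail) comparing cardiac
# hemangiosarcoma incidence in female mice: high dose 18/49 vs control 0/50.
# The choice of test is illustrative; the report does not state its method.
from math import comb

def fisher_one_sided(a, b, c, d):
    """P(X >= a) under the null, for the 2x2 table [[a, b], [c, d]]."""
    n_row1, n_col1, n = a + b, a + c, a + b + c + d
    return sum(
        comb(n_col1, k) * comb(n - n_col1, n_row1 - k) / comb(n, n_row1)
        for k in range(a, min(n_row1, n_col1) + 1)
    )

p = fisher_one_sided(18, 49 - 18, 0, 50)   # affected / unaffected per group
print(f"one-sided p = {p:.1e}")  # minute, consistent with "significantly increased"
```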
Under the conditions of these studies, there was clear evidence of carcinogenicity for 1,3-butadiene in male and female B6C3F1 mice, as shown by increased incidences and early induction of hemangiosarcomas of the heart, malignant lymphomas, alveolar/bronchiolar adenomas and carcinomas, and papillomas of the forestomach in males and females; and of acinar cell carcinomas of the mammary gland, granulosa cell tumors of the ovary, and hepatocellular adenomas or carcinomas (combined) in females. 1,3-Butadiene was also associated with nonneoplastic lesions of the respiratory epithelium, liver necrosis, and testicular or ovarian atrophy. Synonyms: butadiene; biethylene; bivinyl; divinyl; erythrene; vinylethylene; pyrrolylene
Depression is the fourth most important disease in the estimation of the burden of disease (Murray 1996) and is a common problem, with prevalence rates estimated to be as high as 8% in young people. Depression in young people is associated with poor academic performance, social dysfunction, substance abuse, suicide attempts, and completed suicide (NHMRC 1997). This has precipitated the development of programmes aimed at preventing the onset of depression. This review evaluates evidence for the effectiveness of these prevention programmes. To determine whether psychological and/or educational interventions (both universal and targeted) are effective in reducing the risk of depressive disorder by reducing depressive symptoms immediately after intervention or by preventing the onset of depressive disorder in children and adolescents over the next one to three years. The Cochrane Depression, Anxiety and Neurosis Group trials register (August 2002), MEDLINE (1966 to December Week 3 2002), EMBASE (1980 to January Week 2 2003), PsychInfo (1886 to January Week 2 2003) and ERIC (1985 to December 2002) were searched. In addition, conference abstracts, the reference lists of included studies, and other reviews were searched and experts in the field were contacted. Each identified study was assessed for possible inclusion by two independent reviewers based on the methods sections. The determinants for inclusion were that the trial include a psychological and/or educational prevention programme for young people aged 5 to 19 years who did not meet DSM or ICD criteria for depression and/or did not fall into the clinical range on standardised, validated, and reliable rating scales of depression. The methodological quality of the included trials was assessed by two independent reviewers according to a list of pre-determined criteria, which were based on quality ratings devised by Moncrieff and colleagues (Moncrieff 2001). Outcome data were extracted and entered into RevMan 4.2. 
Means and standard deviations for continuous outcomes and numbers of events for dichotomous outcomes were extracted where available. For trials where the required data were not reported or could not be calculated, further details were requested from first authors. If no further details were provided, the trial was included in the review and described, but not included in the meta-analysis. Results were presented for each type of intervention: targeted or universal interventions; educational or psychological interventions; and, if data were provided, by gender. Where possible, data were combined in meta-analyses to give a treatment effect across all trials. Sensitivity analyses were conducted on studies rated as "adequate" or "high" quality, that is with a score over 22, based on the scale by Moncrieff et al (Moncrieff 2001). The presence of publication bias was assessed using funnel plots. Studies were divided into those that compared intervention with an active comparison or placebo (i.e. a control condition that resembles the intervention being investigated but which lacks the elements thought to be active in preventing depression) and those that used a "wait-list" or no-intervention comparison group. Only two studies fell into the former category and neither showed effectiveness, although one study was inadequately powered to show a difference and in the other the "placebo" contained active therapeutic elements, reducing the ability to demonstrate a difference from intervention. Psychological interventions were effective compared with non-intervention immediately after the programmes were delivered, with a significant reduction in scores on depression rating scales for targeted (standardised mean difference (SMD) -0.26, 95% confidence interval (CI) -0.40 to -0.13) but not universal interventions (SMD -0.21, 95% CI -0.48 to 0.06), with a significant effect maintained on pooling data (SMD -0.26, 95% CI -0.36 to -0.15). 
While small effect sizes were reported, these were associated with a significant reduction in depressive episodes. The overall risk difference after intervention translates to a "number needed to treat" (NNT) of 10. The most effective study is the targeted programme by Clarke (Clarke 2001), where the initial effect size of -0.46 is associated with an initial risk difference of -0.22 and an NNT of 5. There was no evidence of effectiveness for educational interventions. Reports of effectiveness for boys and girls were contradictory. The quality of many studies was poor, and only two studies made allocation concealment explicit. Sensitivity analysis of only high-quality studies did not alter the results significantly. The only analysis in which there was significant statistical heterogeneity was the sub-group analysis by gender, where there was variability in the response to different programmes for both girls and boys. For the most part, funnel plots indicate that findings are robust for short-term effects, with no publication bias evident. There are too few studies to comment on whether there is publication bias for studies reporting long-term (12-36 month) follow-up. Although there is insufficient evidence to warrant the introduction of depression prevention programmes currently, results to date indicate that further study would be worthwhile. There is a need to compare interventions with a placebo or some sort of active comparison so that study participants do not know whether they are in the intervention group or not, to investigate the impact of booster sessions to see if effectiveness immediately after intervention can be prolonged, ideally for a year or longer, and to consider practical implementation of prevention programmes when choosing target populations. Until now most studies have focussed on psychological interventions. The potential effectiveness of educational interventions has not been fully investigated. 
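The NNT figures above follow directly from the absolute risk differences via NNT = 1/|risk difference|, conventionally rounded up. A minimal check (note that the overall risk difference of -0.10 is inferred from the reported NNT of 10 rather than stated directly in the text):

```python
# Number needed to treat from an absolute risk difference, as used above
# (Clarke 2001: risk difference -0.22 -> NNT 5). The overall value of -0.10
# is inferred from the reported NNT of 10, not stated in the review.
from math import ceil

def nnt(risk_difference):
    """Conventional NNT: reciprocal of the absolute risk difference, rounded up."""
    return ceil(1 / abs(risk_difference))

print(nnt(-0.22))  # 5
print(nnt(-0.10))  # 10
```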
Given the gender differences in prevalence, and the change in these that occurs in adolescence with a disproportionate increase in prevalence rates for girls, it is likely that girls and boys will respond differently to interventions. Although differences have been reported in studies in this review the findings are contradictory and a more definitive delineation of gender specific responses to interventions would be helpful.
Over the years, organic pollution of the environment, especially by persistent organic pollutants (POPs), has aroused concern worldwide. Particularly in developing countries, large volumes of concentrated organic wastewater, often treated ineffectively, are discharged into aquatic environments from the chemical, textile, paper-making, and other industries, seriously threatening surface and drinking water. Conventional wastewater treatment techniques are often inadequate because of the high cost of multilevel processing. Adsorption, as an efficient method, is often applied to the treatment of wastewater. The aim of this work is to develop an eco-friendly and cost-effective wastewater-sorbing material from weak acidic pink red B (APRB) and calcium carbonate (CaCO(3)) by reusing highly concentrated dye wastewater. On the basis of the chemical coprecipitation of APRB with growing CaCO(3) particles, an inclusion material was prepared. The composition of the material was determined by atomic absorption spectrometry, thermogravimetric analysis, and transmission electron microscopy (TEM)-energy dispersive X-ray analysis, and its morphology was characterized by X-ray diffraction, scanning electron microscopy, TEM, and particle-size analysis. Two cationic dyes, ethyl violet (EV) and methylene blue (MB), and four POPs, phenanthrene (Phe), fluorene (Flu), biphenyl (Bip), and bisphenol A (Bpa), were used to investigate the adsorption selectivity, capacity, and mechanism of the new material, where spectrophotometry, fluorophotometry, and high-performance liquid chromatography were used for determination. An APRB-producing wastewater was reused for preparing the cost-effective wastewater-sorbing material instead of the APRB reagent and then treating cationic dye wastewaters. The removal rates of colority and chemical oxygen demand (COD) were evaluated. 
The CO(3)(2-)-APRB-Ca(2+) addition sequence is most favorable for the occlusion of APRB into the growing CaCO(3) particles, and the occlusion of APRB corresponded to the Langmuir isothermal adsorption model with a binding constant (K) of 5.24 x 10(4) M(-1) and a Gibbs free energy change (Delta G) of -26.9 kJ/mol. The molar ratio of Ca(2+) to CO(3)(2-) and APRB was calculated to be 1:0.94:0.0102, i.e., approximately 92 CaCO(3) molecules occluded only one APRB. Approximately 78% of the inclusion aggregates are between 3 and 20 mm, and the particles are globe-like with sizes of 50-100 nm. The element mapping of Ca, S, and C indicated that APRB was distributed throughout the CaCO(3), i.e., the APRB layer may be pressed between CaCO(3) layers on both sides. The molar ratio of Ca to S was calculated to be 44, i.e., 88 CaCO(3) molecules carried one APRB, according to the above data. During the growth of the CaCO(3) particles, APRB may be attracted into the temporary electric double layer in micelle form by the strong charge interaction between the sulfonic groups of APRB and Ca(2+) and the hydrophobic stacking of long alkyl chains. Four dyes were tested for adsorption: reactive brilliant red X-3B and weak acid green GS as anionic dyes, and EV and MB as cationic dyes. The removals of EV and MB are extremely obvious, and the saturation adsorption of EV and MB just neutralized all the negative charges in the inclusion particles. The selectivity demonstrated the ion-pair attraction, i.e., the cationic adsorption capacity depends on the negative charge number of the inclusion material. By fitting the Langmuir isotherm model, the monolayer adsorptions of EV and MB were confirmed. Their K values were calculated to be 2.4 x 10(6) and 7.3 x 10(5) M(-1), and Delta G was calculated to be -36.4 and -33.4 kJ/mol, respectively. The adsorption of the four POPs on the material obeyed the lipid-water partition law, and their partition coefficients (K(pw)) were calculated to be 9,342 L/kg for Phe, 7,301 L/kg for Flu, 1,226 L/kg for Bip, and 870 L/kg for Bpa. 
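The quoted Gibbs energies follow from the binding constants via Delta G = -RT ln K. A quick check, assuming room temperature (298 K; the working temperature is not stated in the text):

```python
# Gibbs free energy change from a Langmuir binding constant, Delta G = -RT ln K.
# T = 298 K is an assumption; the text does not state the working temperature.
from math import log

R, T = 8.314, 298.0  # gas constant J/(mol*K), temperature K

def delta_g_kj(k_binding):
    return -R * T * log(k_binding) / 1000.0  # kJ/mol

print(f"{delta_g_kj(5.24e4):.1f}")  # ~ -26.9 kJ/mol (APRB occlusion)
print(f"{delta_g_kj(2.4e6):.1f}")   # ~ -36.4 kJ/mol (ethyl violet)
print(f"{delta_g_kj(7.3e5):.1f}")   # ~ -33.4 kJ/mol (methylene blue)
```

The negative values confirm that all three adsorption processes are spontaneous, as a Langmuir-type binding with K >> 1 requires.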
The K(pw) values are directly proportional to the lipid-water partition coefficients (K(ow)), with a slope of 0.314. Besides this, a cost-effective CaCO(3)/APRB inclusion material was prepared with an APRB-producing wastewater instead of the APRB reagent, and it was used in the treatment of two practical cationic dye wastewaters (samples A and B). The colority and COD of sample B are 18 and 13 times as high as those of sample A. The decolorization of sample A is over 96%, and the removal of COD is between 70% and 80%, when more than 0.3% adsorbent was added. However, those of sample B are over 98% and 88% in the presence of over 1% adsorbent. The adsorbent dose for sample B, which was only two to three times as high as that for sample A, brought similar removal rates of colority and COD. The inclusion material is more efficient for the treatment of a highly concentrated dye wastewater because it may adsorb the most cationic dye up to saturation. A cost-effective onion-like inclusion material was synthesized with a composition ratio of CaCO(3) to APRB of 90 +/- 2, and it carried a lot of negative charges and lipophilic groups. It has a high adsorption capacity and rapid saturation for cationic dyes and POPs. The adsorption of cationic dyes corresponded to the Langmuir isothermal model and that of POPs to the lipid-water partition law. The adsorbent is suitable for the treatment of concentrated cationic dye and POPs wastewater in neutral media. The suggested dose of the calcium carbonate-APRB adsorbent is as follows: only 3-5 kg per ton of wastewater (<1,000 colority or <2 mg/L POPs) and 20-30 kg per ton of highly concentrated wastewater (>20,000 colority or >50 mg/L POPs). The skeleton reactants are low-cost, easily available, and harmless to the ecological environment; additionally, APRB-producing wastewater can be reused in place of the APRB reagent. The dye-contaminated sludge can potentially be reused as a color additive in the building material, rubber, and plastics industries. 
However, the APRB and dye contaminants would be released from the sludge if exposed to acidic media (pH <4) for a long time. This work has developed a simple, eco-friendly, and practical method for the production of a cost-effective wastewater-sorbing material.
Peripheral nerve blocks can be performed using ultrasound guidance. It is not yet clear whether this method of nerve location has benefits over other existing methods. This review was originally published in 2009 and was updated in 2014. The objective of this review was to assess whether the use of ultrasound to guide peripheral nerve blockade has any advantages over other methods of peripheral nerve location. Specifically, we have asked whether the use of ultrasound guidance: 1. improves success rates and effectiveness of regional anaesthetic blocks, by increasing the number of blocks that are assessed as adequate; 2. reduces the complications, such as cardiorespiratory arrest, pneumothorax or vascular puncture, associated with the performance of regional anaesthetic blocks. In the 2014 update we searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2014, Issue 8); MEDLINE (July 2008 to August 2014); EMBASE (July 2008 to August 2014); ISI Web of Science (2008 to April 2013); CINAHL (July 2014); and LILACS (July 2008 to August 2014). We completed forward and backward citation and clinical trials register searches. The original search was to July 2008. We reran the search in May 2015. We have added 11 potential new studies of interest to the list of 'Studies awaiting classification' and will incorporate them into the formal review findings during future review updates. We included randomized controlled trials (RCTs) comparing ultrasound-guided peripheral nerve block of the upper and lower limbs, alone or combined, with at least one other method of nerve location. In the 2014 update, we excluded studies that had given general anaesthetic, spinal, epidural or other nerve blocks to all participants, as well as those measuring the minimum effective dose of anaesthetic drug. This resulted in the exclusion of five studies from the original review. Two authors independently assessed trial quality and extracted data. 
We used standard Cochrane methodological procedures, including an assessment of risk of bias and degree of practitioner experience for all studies. We included 32 RCTs with 2844 adult participants. Twenty-six assessed upper-limb and six assessed lower-limb blocks. Seventeen compared ultrasound with peripheral nerve stimulation (PNS), and nine compared ultrasound combined with nerve stimulation (US + PNS) against PNS alone. Two studies compared ultrasound with an anatomical landmark technique, one with a transarterial approach, and three were three-arm designs that included US, US + PNS and PNS. There were variations in the quality of evidence, with a lack of detail in many of the studies to judge whether randomization, allocation concealment and blinding of outcome assessors were sufficient. It was not possible to blind practitioners and there was therefore a high risk of performance bias across all studies, leading us to downgrade the evidence for study limitations using GRADE. There was insufficient detail on the experience and expertise of practitioners and whether experience was equivalent between intervention and control. We performed meta-analysis for our main outcomes. We found that ultrasound guidance produces superior peripheral nerve block success rates, with more blocks being assessed as sufficient for surgery following sensory or motor testing (Mantel-Haenszel (M-H) odds ratio (OR), fixed-effect 2.94 (95% confidence interval (CI) 2.14 to 4.04); 1346 participants), and fewer blocks requiring supplementation or conversion to general anaesthetic (M-H OR, fixed-effect 0.28 (95% CI 0.20 to 0.39); 1807 participants) compared with the use of PNS, anatomical landmark techniques or a transarterial approach. We were not concerned by risks of indirectness, imprecision or inconsistency for these outcomes and used GRADE to assess these outcomes as being of moderate quality. 
Results were similarly advantageous for studies comparing US + PNS with PNS alone for the above outcomes (M-H OR, fixed-effect 3.33 (95% CI 2.13 to 5.20); 719 participants, and M-H OR, fixed-effect 0.34 (95% CI 0.21 to 0.56); 712 participants respectively). There were lower incidences of paraesthesia in both the ultrasound comparison groups (M-H OR, fixed-effect 0.42 (95% CI 0.23 to 0.76); 471 participants, and M-H OR, fixed-effect 0.97 (95% CI 0.30 to 3.12); 178 participants respectively) and lower incidences of vascular puncture in both groups (M-H OR, fixed-effect 0.19 (95% CI 0.07 to 0.57); 387 participants, and M-H OR, fixed-effect 0.22 (95% CI 0.05 to 0.90); 143 participants). There were fewer studies for these outcomes and we therefore downgraded both for imprecision, and paraesthesia additionally for potential publication bias. This gave an overall GRADE assessment of very low and low for these two outcomes respectively. Our analysis showed that it took less time to perform nerve blocks in the ultrasound group (mean difference (MD), IV, fixed-effect -1.06 (95% CI -1.41 to -0.72); 690 participants) but more time to perform the block when ultrasound was combined with a PNS technique (MD, IV, fixed-effect 0.76 (95% CI 0.55 to 0.98); 587 participants). With high levels of unexplained statistical heterogeneity, we graded this outcome as very low quality. We did not combine data for other outcomes as study results had been reported using differing scales or with a combination of mean and median data, but our interpretation of individual study data favoured ultrasound for a reduction in other minor complications and a reduction in onset time of block and number of attempts to perform block. There is evidence that peripheral nerve blocks performed by ultrasound guidance alone, or in combination with PNS, are superior in terms of improved sensory and motor block, reduced need for supplementation and fewer minor complications reported. 
Using ultrasound alone shortens performance time when compared with nerve stimulation, but when used in combination with PNS it increases performance time. We were unable to determine whether these findings reflect the use of ultrasound in experienced hands, and it was beyond the scope of this review to consider the learning curve associated with peripheral nerve blocks by ultrasound technique compared with other methods.
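The pooled Mantel-Haenszel figures above combine study-level odds ratios; for a single study, the OR and its 95% CI come from the 2x2 table of block success by technique via the Woolf log method. The counts below are purely hypothetical, chosen only to illustrate the calculation (the review does not report per-study tables here):

```python
# Odds ratio with a 95% CI (Woolf log method) from one study's 2x2 table of
# block success by technique. The counts are HYPOTHETICAL, purely to
# illustrate the arithmetic behind pooled figures such as OR 2.94.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """[[a, b], [c, d]] = [[US success, US fail], [PNS success, PNS fail]]."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

or_, lo, hi = odds_ratio_ci(90, 10, 75, 25)    # hypothetical counts
print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A meta-analysis then pools such study-level tables with Mantel-Haenszel weights rather than averaging the ORs directly.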
Intensive Case Management (ICM) is a community-based package of care aiming to provide long-term care for severely mentally ill people who do not require immediate admission. Intensive Case Management evolved from two original community models of care, Assertive Community Treatment (ACT) and Case Management (CM); ICM emphasises the importance of a small caseload (fewer than 20) and high-intensity input. To assess the effects of ICM as a means of caring for severely mentally ill people in the community in comparison with non-ICM (caseload greater than 20) and with standard community care. We did not distinguish between models of ICM. In addition, to assess whether the effect of ICM on hospitalisation (mean number of days per month in hospital) is influenced by the intervention's fidelity to the ACT model and by the rate of hospital use in the setting where the trial was conducted (baseline level of hospital use). We searched the Cochrane Schizophrenia Group's Trials Register (last update search 10 April 2015). All relevant randomised clinical trials focusing on people with severe mental illness, aged 18 to 65 years and treated in the community care setting, where ICM is compared to non-ICM or standard care. At least two review authors independently selected trials, assessed quality, and extracted data. For binary outcomes, we calculated the risk ratio (RR) and its 95% confidence interval (CI), on an intention-to-treat basis. For continuous data, we estimated the mean difference (MD) between groups and its 95% CI. We employed a random-effects model for analyses. We performed a random-effects meta-regression analysis to examine the association of the intervention's fidelity to the ACT model and the rate of hospital use in the setting where the trial was conducted with the treatment effect. We assessed overall quality for clinically important outcomes using the GRADE approach and investigated possible risk of bias within included trials. 
The 2016 update included two more studies (n = 196) and more publications with additional data for four already included studies. The updated review therefore includes 7524 participants from 40 randomised controlled trials (RCTs). We found data relevant to two comparisons: ICM versus standard care, and ICM versus non-ICM. The majority of studies had a high risk of selective reporting. No studies provided data for relapse or important improvement in mental state. 1. ICM versus standard care: When ICM was compared with standard care for the outcome service use, ICM slightly reduced the number of days in hospital per month (n = 3595, 24 RCTs, MD -0.86, 95% CI -1.37 to -0.34, low-quality evidence). Similarly, for the outcome global state, ICM reduced the number of people leaving the trial early (n = 1798, 13 RCTs, RR 0.68, 95% CI 0.58 to 0.79, low-quality evidence). For the outcome adverse events, the evidence showed that ICM may make little or no difference in reducing death by suicide (n = 1456, 9 RCTs, RR 0.68, 95% CI 0.31 to 1.51, low-quality evidence). In addition, for the outcome social functioning, there was uncertainty about the effect of ICM on unemployment due to very low-quality evidence (n = 1129, 4 RCTs, RR 0.70, 95% CI 0.49 to 1.0, very low-quality evidence). 2. ICM versus non-ICM: When ICM was compared with non-ICM for the outcome service use, there was moderate-quality evidence that ICM probably makes little or no difference in the average number of days in hospital per month (n = 2220, 21 RCTs, MD -0.08, 95% CI -0.37 to 0.21, moderate-quality evidence) or in the average number of admissions (n = 678, 1 RCT, MD -0.18, 95% CI -0.41 to 0.05, moderate-quality evidence) compared to non-ICM. 
Similarly, the results showed that ICM may reduce the number of participants leaving the intervention early (n = 1970, 7 RCTs, RR 0.70, 95% CI 0.52 to 0.95, low-quality evidence) and that ICM may make little or no difference in reducing death by suicide (n = 1152, 3 RCTs, RR 0.88, 95% CI 0.27 to 2.84, low-quality evidence). Finally, for the outcome social functioning, there was uncertainty about the effect of ICM on unemployment as compared to non-ICM (n = 73, 1 RCT, RR 1.46, 95% CI 0.45 to 4.74, very low-quality evidence). 3. Fidelity to ACT: Within the meta-regression we found that (i) the more ICM adheres to the ACT model, the better it is at decreasing time in hospital ('organisation fidelity' variable coefficient -0.36, 95% CI -0.66 to -0.07); and (ii) the higher the baseline hospital use in the population, the better ICM is at decreasing time in hospital ('baseline hospital use' variable coefficient -0.20, 95% CI -0.32 to -0.10). Combining both these variables within the model, 'organisation fidelity' is no longer significant, but the 'baseline hospital use' result still significantly influences time in hospital (regression coefficient -0.18, 95% CI -0.29 to -0.07, P = 0.0027). Based on very low- to moderate-quality evidence, ICM is effective in ameliorating many outcomes relevant to people with severe mental illness. Compared to standard care, ICM may reduce hospitalisation and increase retention in care. It also globally improved social functioning, although ICM's effect on mental state and quality of life remains unclear. Intensive Case Management is at least valuable to people with severe mental illnesses in the subgroup of those with a high level of hospitalisation (about four days per month in the past two years). 
Intensive Case Management models with high fidelity to the original team organisation of the ACT model were more effective at reducing time in hospital. However, it is unclear what overall gain ICM provides on top of a less formal non-ICM approach. We do not think that more trials comparing current ICM with standard care or non-ICM are justified; however, we currently know of no review comparing non-ICM with standard care, and such a review should be undertaken.
The projected rise in the incidence of type 2 diabetes mellitus (T2DM) could develop into a substantial health problem worldwide. Whether diet, physical activity or both can prevent or delay T2DM and its associated complications in at-risk people is unknown. To assess the effects of diet, physical activity or both on the prevention or delay of T2DM and its associated complications in people at increased risk of developing T2DM. This is an update of the Cochrane Review published in 2008. We searched CENTRAL, MEDLINE, Embase, ClinicalTrials.gov, the ICTRP Search Portal and reference lists of systematic reviews, articles and health technology assessment reports. The date of the last search of all databases was January 2017. We continuously used a MEDLINE email alert service to identify newly published studies using the same search strategy as described for MEDLINE up to September 2017. We included randomised controlled trials (RCTs) with a duration of two years or more. We used standard Cochrane methodology for data collection and analysis. We assessed the overall quality of the evidence using GRADE. We included 12 RCTs randomising 5238 people. One trial contributed 41% of all participants. The duration of the interventions varied from two to six years. We judged none of the included trials at low risk of bias for all 'Risk of bias' domains. Eleven trials compared diet plus physical activity with standard or no treatment. Nine RCTs included participants with impaired glucose tolerance (IGT), one RCT included participants with IGT, impaired fasting blood glucose (IFG) or both, and one RCT included people with fasting glucose levels between 5.3 and 6.9 mmol/L. A total of 12 deaths occurred in 2049 participants in the diet plus physical activity groups compared with 10 in 2050 participants in the comparator groups (RR 1.12, 95% CI 0.50 to 2.50; 95% prediction interval 0.44 to 2.88; 4099 participants, 10 trials; very low-quality evidence).
The definition of T2DM incidence varied among the included trials. Altogether 315 of 2122 diet plus physical activity participants (14.8%) developed T2DM compared with 614 of 2389 comparator participants (25.7%) (RR 0.57, 95% CI 0.50 to 0.64; 95% prediction interval 0.50 to 0.65; 4511 participants, 11 trials; moderate-quality evidence). Two trials reported serious adverse events. In one trial no adverse events occurred. In the other trial one of 51 diet plus physical activity participants compared with none of 51 comparator participants experienced a serious adverse event (low-quality evidence). Cardiovascular mortality was rarely reported (four of 1626 diet plus physical activity participants and four of 1637 comparator participants; the RR ranged between 0.94 and 3.16; 3263 participants, 7 trials; very low-quality evidence). Only one trial reported that no non-fatal myocardial infarction or non-fatal stroke had occurred (low-quality evidence). Two trials reported that none of the participants had experienced hypoglycaemia. One trial investigated health-related quality of life in 2144 participants and noted that a minimal important difference between intervention groups was not reached (very low-quality evidence). Three trials evaluated costs of the interventions in 2755 participants. The largest of these trials reported an analysis of costs from the health system perspective and the society perspective, reflecting USD 31,500 and USD 51,600 per quality-adjusted life year (QALY) with diet plus physical activity, respectively (low-quality evidence). There were no data on blindness or end-stage renal disease. One trial compared a diet-only intervention with a physical-activity intervention or standard treatment. The participants had IGT. Three of 130 participants in the diet group compared with none of the 141 participants in the physical activity group died (very low-quality evidence).
None of the participants died because of cardiovascular disease (very low-quality evidence). Altogether 57 of 130 diet participants (43.8%) compared with 58 of 141 physical activity participants (41.1%) developed T2DM (very low-quality evidence). No adverse events were recorded (very low-quality evidence). There were no data on non-fatal myocardial infarction, non-fatal stroke, blindness, end-stage renal disease, health-related quality of life or socioeconomic effects. Two trials compared physical activity with standard treatment in 397 participants. One trial included participants with IGT, the other trial included participants with IGT, IFG or both. One trial reported that none of the 141 physical activity participants compared with three of 133 control participants died. The other trial reported that three of 84 physical activity participants and one of 39 control participants died (very low-quality evidence). In one trial T2DM developed in 58 of 141 physical activity participants (41.1%) compared with 90 of 133 control participants (67.7%). In the other trial 10 of 84 physical activity participants (11.9%) compared with seven of 39 control participants (18%) developed T2DM (very low-quality evidence). Serious adverse events were rarely reported (one trial noted no events, one trial described events in three of 66 physical activity participants compared with one of 39 control participants - very low-quality evidence). Only one trial reported on cardiovascular mortality (none of 274 participants died - very low-quality evidence). Non-fatal myocardial infarction or stroke were rarely observed in the one trial randomising 123 participants (very low-quality evidence). One trial reported that none of the participants in the trial experienced hypoglycaemia. One trial investigating health-related quality of life in 123 participants showed no substantial differences between intervention groups (very low-quality evidence).
There were no data on blindness or socioeconomic effects. There is no firm evidence that diet alone or physical activity alone compared to standard treatment influences the risk of T2DM and especially its associated complications in people at increased risk of developing T2DM. However, diet plus physical activity reduces or delays the incidence of T2DM in people with IGT. Data are lacking for the effect of diet plus physical activity for people with intermediate hyperglycaemia defined by other glycaemic variables. Most RCTs did not investigate patient-important outcomes.
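The incidence comparison above can be checked with simple arithmetic. Below is a minimal sketch assuming the pooled event counts can simply be summed across trials; the review's RR of 0.57 comes from a weighted meta-analysis, so the crude ratio computed this way differs slightly:

```python
def crude_risk_ratio(events_a, total_a, events_b, total_b):
    """Crude risk ratio from pooled event counts.

    This ignores trial-level weighting, so it only approximates a
    meta-analytic pooled RR."""
    risk_a = events_a / total_a
    risk_b = events_b / total_b
    return risk_a, risk_b, risk_a / risk_b

# Pooled counts reported for T2DM incidence: 315/2122 (diet plus
# physical activity) versus 614/2389 (comparator).
risk_int, risk_ctrl, rr = crude_risk_ratio(315, 2122, 614, 2389)
print(f"{risk_int:.1%} vs {risk_ctrl:.1%}, crude RR = {rr:.2f}")
# → 14.8% vs 25.7%, crude RR = 0.58
```

The crude ratio (0.58) is close to, but not identical with, the meta-analytic RR of 0.57, because meta-analysis weights each trial's estimate individually rather than pooling raw counts.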
Beyond term, the risks of stillbirth or neonatal death increase. It is unclear whether a policy of labour induction can reduce these risks. This Cochrane review is an update of a review that was originally published in 2006 and subsequently updated in 2012. OBJECTIVES: To assess the effects of a policy of labour induction at or beyond term, compared with a policy of awaiting spontaneous labour (or until an indication for induction of labour is identified), on pregnancy outcomes for infant and mother. We searched Cochrane Pregnancy and Childbirth's Trials Register, ClinicalTrials.gov and the WHO International Clinical Trials Registry Platform (ICTRP) (9 October 2017), and reference lists of retrieved studies. Randomised controlled trials (RCTs) conducted in pregnant women at or beyond term, comparing a policy of labour induction with a policy of awaiting spontaneous onset of labour (expectant management). We also included trials published in abstract form only. Cluster-RCTs, quasi-RCTs and trials using a cross-over design were not eligible for inclusion in this review. We included pregnant women at or beyond term. Since a risk factor at this stage of pregnancy would normally require an intervention, only trials including women at low risk for complications were eligible. We accepted the trialists' definition of 'low risk'. Trials of induction of labour in women with prelabour rupture of membranes at or beyond term were not considered in this review but are considered in a separate Cochrane review. Two reviewers independently assessed trials for inclusion, assessed risk of bias and extracted data. Data were checked for accuracy. We assessed the quality of evidence using the GRADE approach. In this updated review, we included 30 RCTs (reporting on 12,479 women). The trials took place in Norway, China, Thailand, the USA, Austria, Turkey, Canada, the UK, India, Tunisia, Finland, Spain, Sweden and the Netherlands.
They were generally at a moderate risk of bias. Compared with a policy of expectant management, a policy of labour induction was associated with fewer (all-cause) perinatal deaths (risk ratio (RR) 0.33, 95% confidence interval (CI) 0.14 to 0.78; 20 trials, 9960 infants; moderate-quality evidence). There were two perinatal deaths in the labour induction policy group compared with 16 perinatal deaths in the expectant management group. The number needed to treat for an additional beneficial outcome (NNTB) with induction of labour in order to prevent one perinatal death was 426 (95% CI 338 to 1337). There were fewer stillbirths in the induction group (RR 0.33, 95% CI 0.11 to 0.96; 20 trials, 9960 infants; moderate-quality evidence); there was one stillbirth in the induction policy arm and 10 in the expectant management group. For women in the policy of induction arms of trials, there were fewer caesarean sections compared with expectant management (RR 0.92, 95% CI 0.85 to 0.99; 27 trials, 11,738 women; moderate-quality evidence), and a corresponding marginal increase in operative vaginal births with induction (RR 1.07, 95% CI 0.99 to 1.16; 18 trials, 9281 women; moderate-quality evidence).
There was no evidence of a difference between groups for perineal trauma (RR 1.09, 95% CI 0.65 to 1.83; 4 trials; 3028 women; low-quality evidence), postpartum haemorrhage (RR 1.09, 95% CI 0.92 to 1.30; 5 trials; 3315 women; low-quality evidence), or length of maternal hospital stay (average mean difference (MD) -0.34 days, 95% CI -1.00 to 0.33; 5 trials; 1146 women; Tau² = 0.49; I² = 95%; very low-quality evidence). Rates of neonatal intensive care unit (NICU) admission were lower (RR 0.88, 95% CI 0.77 to 1.01; 13 trials, 8531 infants; moderate-quality evidence) and fewer babies had Apgar scores less than seven at five minutes in the induction groups compared with expectant management (RR 0.70, 95% CI 0.50 to 0.98; 16 trials, 9047 infants; moderate-quality evidence). There was no evidence of a difference for neonatal trauma (RR 1.18, 95% CI 0.68 to 2.05; 3 trials, 4255 infants; low-quality evidence) for induction compared with expectant management. Neonatal encephalopathy, neurodevelopment at childhood follow-up, breastfeeding at discharge and postnatal depression were not reported by any trials. In subgroup analyses, no clear differences between timing of induction (< 41 weeks versus ≥ 41 weeks' gestation) or by state of cervix were seen for perinatal death, stillbirth, NICU admission, caesarean section, or perineal trauma. However, operative vaginal birth was more common in the inductions at < 41 weeks' gestation subgroup compared with inductions at later gestational ages. The majority of trials (about 75% of participants) adopted a policy of induction at ≥ 41 weeks (> 287 days) gestation for the intervention arm. A policy of labour induction at or beyond term compared with expectant management is associated with fewer perinatal deaths and fewer caesarean sections, but more operative vaginal births. NICU admissions were lower and fewer babies had low Apgar scores with induction.
No important differences were seen for most of the other maternal and infant outcomes. Most of the important outcomes assessed using GRADE had a rating of moderate- or low-quality evidence, with downgrading decisions generally due to study limitations such as lack of blinding (a condition inherent in comparisons between a policy of acting and of waiting) or imprecise effect estimates. One outcome (length of maternal stay) was downgraded further to very low-quality evidence due to inconsistency. Although the absolute risk of perinatal death is small, it may be helpful to offer women appropriate counselling to help them choose between scheduled induction for a post-term pregnancy and monitoring without induction (or with later induction). The optimal timing of offering induction of labour to women at or beyond term warrants further investigation, as does further exploration of risk profiles of women and their values and preferences. Individual participant meta-analysis is likely to help elucidate the role of factors, such as parity, in influencing outcomes of induction compared with expectant management.
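An NNTB is the reciprocal of the absolute risk reduction. A rough back-of-envelope sketch from the raw death counts gives the same order of magnitude as the NNTB quoted above, though not the exact figure of 426, because the review derives its NNTB from the pooled meta-analytic estimate rather than crude counts, and the even split of the 9960 infants between arms assumed here is an approximation:

```python
import math

def nntb_from_counts(events_tx, n_tx, events_ctrl, n_ctrl):
    """Number needed to treat for one additional beneficial outcome,
    from crude event counts: NNTB = 1 / absolute risk reduction."""
    arr = events_ctrl / n_ctrl - events_tx / n_tx
    return 1 / arr

# Crude sketch: 2 perinatal deaths with induction vs 16 with expectant
# management, assuming ~4980 infants per arm (an assumption for
# illustration; the review reports NNTB 426 from its pooled estimate).
nntb = nntb_from_counts(2, 4980, 16, 4980)
print(math.ceil(nntb))  # → 356
```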
Augmentation of soft-tissue repairs with an autologous fibrin clot has been used clinically for nearly four decades; however, fibrin clots tend to produce an abundance of scar tissue, which is known to inhibit soft-tissue regeneration. Mesenchymal stem cells (MSCs) embedded in fibrin clots before repair could reduce scar tissue deposition and facilitate soft-tissue regeneration. To our knowledge, no published studies have directly evaluated the viability or bioactivity of MSCs in fresh human fibrin clots over time. The purpose of this study was to evaluate the viability and bioactivity of human MSCs inside human fibrin clots over time in nutritive and non-nutritive culture media. We hypothesized that human MSCs would (1) be captured inside fibrin clots and retain their proliferative capacity, (2) remain viable for at least 7 days in the fibrin clots, (3) maintain their proliferative capacity for at least 7 days in the fibrin clots without evidence of active apoptosis, and (4) display similar viability and proliferative capacity when cultured in a non-nutritive medium over the same time periods. Twelve patients (mean age 33.7 years; range 4-72 years) who underwent elective knee surgery were approached between February 2016 and October 2017; all patients agreed to participate and were enrolled. MSCs isolated from human skeletal muscle and banked after prior studies were used for this analysis. On the day of surgery and after expansion of the MSC population, 3-mL aliquots of phosphate-buffered saline containing approximately 600,000 MSCs labeled with anti-green fluorescent protein (GFP) antibodies were transported to the operating room, mixed in 30 mL of venous blood from each enrolled patient, and stirred at 95 rpm for 10 minutes to create MSC-embedded fibrin clots. The fibrin clots were transported to the laboratory with their residual blood for analysis. Eleven samples were analyzed after exclusion of one sample because of a processing error.
MSC capture was qualitatively demonstrated by enzymatically digesting half of each clot specimen, thus releasing GFP-positive MSCs into culture. The released MSCs were allowed to culture for 7 days. Manual counting of GFP-positive MSCs was performed at 2, 3, 4, and 7 days using an inverted microscope at 100× magnification to document the change in the number of GFP-positive MSCs over time. The intact remaining half of each clot specimen was immediately placed in proliferation media and allowed to culture for 7 days. On Days 1, 2, 3, 4, and 7, a small portion of the clot was excised, flash-frozen, cryosectioned (8-μm thickness), and immunostained with antibodies specific to GFP, Ki67 (indicative of active proliferation), and cleaved caspase-3 ([CC3]; indicative of active apoptosis). Using an inverted microscope, we obtained MSC cell counts manually at time zero and after 1, 2, 3, 4, and 7 days of culture. Intact fresh clot specimens were immediately divided in half; one half was placed in nutritive (proliferation media) and the other was placed in non-nutritive (saline) media for 1, 2, 3, 4, and 7 days. At each timepoint, specimens were processed in an identical manner as described above, and a portion of each clot specimen was excised, immediately flash-frozen with liquid nitrogen, cryosectioned (8-μm thickness), and visualized at 200× magnification using an inverted microscope. The numbers of stain-positive MSCs per field of view, per culture condition, per timepoint, and per antibody stain type were counted manually for a quantitative analysis. Raw data were statistically compared using t-tests, and time-based correlations were assessed using Pearson's correlation coefficients. Two-tailed p values of less than 0.05 (assuming unequal variance) were considered statistically significant.
Green fluorescence, indicative of viable GFP-positive MSCs, was absent in all residual blood samples after 48 hours of culturing; GFP-positive MSCs were visualized after enzymatic digestion of clot matrices. The number of GFP-positive MSCs per field of view increased between the 2-day and 7-day timepoints (mean 5.4 ± 1.5; 95% confidence interval, 4.7-6.1 versus mean 17.0 ± 13.6; 95% CI, 10.4-23.5, respectively; p = 0.029). Viable GFP-positive MSCs were present in each clot cryosection at each timepoint up to 7 days of culturing (mean 6.2 ± 4.3; 95% CI, 5.8-6.6). There were no differences in MSC counts between any of the timepoints. There was no visible evidence of GFP+/CC3+ double-positive MSCs. Combining all timepoints, there were 0.34 ± 0.70 (95% CI, 0.25-0.43) GFP+/Ki67+ double-positive MSCs per field of view. The mitotic indices at time zero and Day 7 were 7.5% ± 13.4% (95% CI, 3.0%-12.0%) and 7.2% ± 14.3% (95% CI, 3.3%-12.1%), respectively (p = 0.923). There was no visible evidence of GFP+/CC3+ double-positive MSCs (active apoptosis) at any timepoint. For active proliferation in saline-cultured fibrin clots, we found averages of 0.1 ± 0.3 (95% CI, 0.0-0.2) and 0.4 ± 0.9 (95% CI, 0.0-0.8) GFP+/Ki67+ double-positive MSCs at time zero and Day 7, respectively (p = 0.499). The mitotic indices in saline culture at time zero and Day 7 were 2.9% ± 8.4% (95% CI, 0.0%-5.8%) and 9.1% ± 20.7% (95% CI, 1.2%-17.0%; p = 0.144). There was no visible evidence of GFP+/CC3+ double-positive MSCs (active apoptosis) at any timepoint in either culturing condition. These preliminary in vitro results show that human MSCs mixed in unclotted fresh human venous blood were nearly completely captured in fibrin clots and that seeded MSCs were capable of maintaining their viability, proliferation capacity, and osteogenic differentiation capacity in the fibrin clot for up to 7 days, independent of external sources of nutrition.
Fresh human fibrin clots have been used clinically for more than 30 years to improve soft-tissue healing, albeit with scar tissue. Our results demonstrate that allogeneic human MSCs, which reduce soft-tissue scarring, can be captured and remain active inside human fibrin clots, even in the absence of a nutritive culture medium.
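The mitotic indices reported above are, as described, the percentage of counted GFP-positive cells that are also Ki67-positive. A minimal sketch of that calculation (the cell counts below are hypothetical, chosen only to land on the 7.5% scale reported):

```python
def mitotic_index(ki67_positive, total_cells):
    """Mitotic index: percentage of counted cells in active
    proliferation (Ki67-positive). Caller must ensure total_cells > 0."""
    return 100.0 * ki67_positive / total_cells

# Hypothetical counts for illustration: 3 GFP+/Ki67+ cells among
# 40 GFP+ cells in a field of view.
print(mitotic_index(3, 40))  # → 7.5
```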
Plasmodium vivax (P vivax) is a focus of malaria elimination. It is important because P vivax and Plasmodium falciparum infection are co-endemic in some areas. There are asymptomatic carriers of P vivax, and the treatment for P vivax and Plasmodium ovale malaria differs from that used in other types of malaria. Rapid diagnostic tests (RDTs) will help distinguish P vivax from other malaria species to help treatment and elimination. There are RDTs available that detect P vivax parasitaemia through the detection of P vivax-specific lactate dehydrogenase (LDH) antigens. To assess the diagnostic accuracy of RDTs for detecting P vivax malaria infection in people living in malaria-endemic areas who present to ambulatory healthcare facilities with symptoms suggestive of malaria; and to identify which types and brands of commercial tests best detect P vivax malaria. We undertook a comprehensive search of the following databases up to 30 July 2019: Cochrane Infectious Diseases Group Specialized Register; Central Register of Controlled Trials (CENTRAL), published in the Cochrane Library; MEDLINE (PubMed); Embase (OVID); Science Citation Index Expanded (SCI-EXPANDED) and Conference Proceedings Citation Index-Science (CPCI-S), both in the Web of Science. Studies comparing RDTs with a reference standard (microscopy or polymerase chain reaction (PCR)) in blood samples from patients attending ambulatory health facilities with symptoms suggestive of malaria in P vivax-endemic areas. For each included study, two review authors independently extracted data using a pre-piloted data extraction form. The methodological quality of the studies was assessed using a tailored Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. We grouped studies according to commercial brand of the RDT and performed meta-analysis when appropriate.
The results given by the index tests were based on antibody affinity (the strength of the bond between an antibody and an antigen) and avidity (the strength of the overall bond between a multivalent antibody and multiple antigens). All analyses were stratified by the type of reference standard. The bivariate model was used to estimate the pooled sensitivity and specificity with 95% confidence intervals (CIs); this model was simplified when studies were few. We assessed the certainty of the evidence using the GRADE approach. We included 10 studies that assessed the accuracy of six different RDT brands (CareStart Malaria Pf/Pv Combo test, Falcivax Device Rapid test, Immuno-Rapid Malaria Pf/Pv test, SD Bioline Malaria Ag Pf/Pv test, OnSite Pf/Pv test and Test Malaria Pf/Pv rapid test) for detecting P vivax malaria. One study directly compared the accuracy of two RDT brands. Of the 10 studies, six used microscopy, one used PCR, two used both microscopy and PCR separately and one used microscopy corrected by PCR as the reference standard. Four of the studies were conducted in Ethiopia, two in India, and one each in Bangladesh, Brazil, Colombia and Sudan. The studies often did not report how patients were selected. In the patient selection domain, we judged the risk of bias as unclear for nine studies. We judged all studies to be of unclear applicability concern. In the index test domain, we judged most studies to be at low risk of bias, but we judged nine studies to be of unclear applicability concern. There was poor reporting on lot testing, how the RDTs were stored, and background parasitaemia density (a key variable determining the diagnostic accuracy of RDTs).
Only half of the included studies were judged to be at low risk of bias in the reference standard domain. Studies often did not report whether the results of the reference standard could classify the target condition or whether investigators knew the results of the RDT when interpreting the results of the reference standard. All 10 studies were judged to be at low risk of bias in the flow and timing domain. Only two brands were evaluated by more than one study. Four studies evaluated the CareStart Malaria Pf/Pv Combo test against microscopy and two studies evaluated the Falcivax Device Rapid test against microscopy. The pooled sensitivity and specificity were 99% (95% CI 94% to 100%; 251 patients, moderate-certainty evidence) and 99% (95% CI 99% to 100%; 2147 patients, moderate-certainty evidence) for the CareStart Malaria Pf/Pv Combo test. For a prevalence of 20%, about 206 people per 1000 will have a positive CareStart Malaria Pf/Pv Combo test result and the remaining 794 people will have a negative result. Of the 206 people with positive results, eight will be incorrect (false positives), and of the 794 people with a negative result, two will be incorrect (false negatives). For the Falcivax Device Rapid test, the pooled sensitivity was 77% (95% CI 53% to 91%; 89 patients, low-certainty evidence) and the pooled specificity was 99% (95% CI 98% to 100%; 621 patients, moderate-certainty evidence). For a prevalence of 20%, about 162 people per 1000 will have a positive Falcivax Device Rapid test result and the remaining 838 people will have a negative result. Of the 162 people with positive results, eight will be incorrect (false positives), and of the 838 people with a negative result, 46 will be incorrect (false negatives). The CareStart Malaria Pf/Pv Combo test was found to be highly sensitive and specific in comparison to microscopy for detecting P vivax in ambulatory healthcare in endemic settings, with moderate-certainty evidence.
The number of studies included in this review was limited to 10, and we were able to estimate the accuracy of only two of the six included RDT brands: the CareStart Malaria Pf/Pv Combo test and the Falcivax Device Rapid test. Thus, the differences in sensitivity and specificity between all the RDT brands could not be assessed. More high-quality studies in endemic field settings are needed to assess and compare the accuracy of RDTs designed to detect P vivax.
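The illustrative counts quoted above (206 positives with 8 false positives, and so on) follow directly from prevalence, sensitivity and specificity applied to a notional cohort of 1000 people. A minimal sketch of that arithmetic:

```python
def expected_test_results(n, prevalence, sensitivity, specificity):
    """Expected test-result counts in a cohort of n people, given
    disease prevalence and the test's sensitivity and specificity."""
    diseased = n * prevalence
    healthy = n - diseased
    tp = diseased * sensitivity          # true positives
    fn = diseased - tp                   # false negatives
    fp = healthy * (1 - specificity)     # false positives
    tn = healthy - fp                    # true negatives
    return {"positives": round(tp + fp), "false_positives": round(fp),
            "negatives": round(tn + fn), "false_negatives": round(fn)}

# CareStart Malaria Pf/Pv Combo: pooled sensitivity 99%, specificity 99%
print(expected_test_results(1000, 0.20, 0.99, 0.99))
# → {'positives': 206, 'false_positives': 8, 'negatives': 794, 'false_negatives': 2}

# Falcivax Device Rapid test: pooled sensitivity 77%, specificity 99%
print(expected_test_results(1000, 0.20, 0.77, 0.99))
# → {'positives': 162, 'false_positives': 8, 'negatives': 838, 'false_negatives': 46}
```

Both outputs reproduce the figures stated in the review for a prevalence of 20%.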
Many people with dementia are cared for at home by unpaid informal caregivers, usually family members. Caregivers may experience a range of physical, emotional, financial and social harms, which are often described collectively as caregiver burden. The degree of burden experienced is associated with characteristics of the caregiver, such as gender, and characteristics of the person with dementia, such as dementia stage, and the presence of behavioural problems or neuropsychiatric disturbances. It is a strong predictor of admission to residential care for people with dementia. Psychoeducational interventions might prevent or reduce caregiver burden. Overall, they are intended to improve caregivers' knowledge about the disease and its care; to increase caregivers' sense of competence and their ability to cope with difficult situations; to relieve feelings of isolation and allow caregivers to attend to their own emotional and physical needs. These interventions are heterogeneous, varying in their theoretical framework, components, and delivery formats. Interventions that are delivered remotely, using printed materials, telephone or video technologies, may be particularly suitable for caregivers who have difficulty accessing face-to-face services because of their own health problems, poor access to transport, or absence of substitute care. During the COVID-19 pandemic, containment measures in many countries required people to be isolated in their homes, including people with dementia and their family carers. In such circumstances, there is no alternative to remote delivery of interventions. To assess the efficacy and acceptability of remotely delivered interventions aiming to reduce burden and improve mood and quality of life of informal caregivers of people with dementia. 
We searched the Specialised Register of the Cochrane Dementia and Cognitive Improvement Group, MEDLINE, Embase and four other databases, as well as two international trials registries, on 10 April 2020. We also examined the bibliographies of relevant review papers and published trials. We included only randomised controlled trials that assessed the remote delivery of structured interventions for informal caregivers who were providing care for people with dementia living at home. Caregivers had to be unpaid adults (relatives or members of the person's community). The interventions could be delivered using printed materials, the telephone, the Internet or a mixture of these, but could not involve any face-to-face contact with professionals. We categorised intervention components as information, training or support. Information interventions included two key elements: (i) they provided standardised information, and (ii) the caregiver played a passive role. Support interventions promoted interaction with other people (professionals or peers). Training interventions trained caregivers in practical skills to manage care. We excluded interventions that were primarily individual psychotherapy. Our primary outcomes were caregiver burden, mood, health-related quality of life and dropout for any reason. Secondary outcomes were caregiver knowledge and skills, use of health and social care resources, admission of the person with dementia to institutional care, and quality of life of the person with dementia. Study selection, data extraction and assessment of the risk of bias in included studies were done independently by two review authors. We used the Template for Intervention Description and Replication (TIDieR) to describe the interventions. We conducted meta-analyses using a random-effects model to derive estimates of effect size. We used GRADE methods to describe our degree of certainty about effect estimates. We included 26 studies in this review (2367 participants). 
We compared (1) interventions involving training, support or both, with or without information (experimental interventions) with usual treatment, waiting list or attention control (12 studies, 944 participants); and (2) the same experimental interventions with provision of information alone (14 studies, 1423 participants). We downgraded evidence for study limitations and, for some outcomes, for inconsistency between studies. There was a frequent risk of bias from self-rating of subjective outcomes by participants who were not blind to the intervention. Randomisation methods were not always well-reported and there was potential for attrition bias in some studies. Therefore, all evidence was of moderate or low certainty. In the comparison of experimental interventions with usual treatment, waiting list or attention control, we found that the experimental interventions probably have little or no effect on caregiver burden (nine studies, 597 participants; standardised mean difference (SMD) -0.06, 95% confidence interval (CI) -0.35 to 0.23); depressive symptoms (eight studies, 638 participants; SMD -0.05, 95% CI -0.22 to 0.12); or health-related quality of life (two studies, 311 participants; SMD 0.10, 95% CI -0.13 to 0.32). The experimental interventions probably result in little or no difference in dropout for any reason (eight studies, 661 participants; risk ratio (RR) 1.15, 95% CI 0.87 to 1.53). 
In the comparison of experimental interventions with a control condition of information alone, we found that experimental interventions may result in a slight reduction in caregiver burden (nine studies, 650 participants; SMD -0.24, 95% CI -0.51 to 0.04); probably result in a slight improvement in depressive symptoms (11 studies, 1100 participants; SMD -0.25, 95% CI -0.43 to -0.06); may result in little or no difference in caregiver health-related quality of life (two studies, 257 participants; SMD -0.03, 95% CI -0.28 to 0.21); and probably result in an increase in dropouts for any reason (12 studies, 1266 participants; RR 1.51, 95% CI 1.04 to 2.20). Remotely delivered interventions including support, training or both, with or without information, may slightly reduce caregiver burden and improve caregiver depressive symptoms when compared with provision of information alone, but not when compared with usual treatment, waiting list or attention control. They seem to make little or no difference to health-related quality of life. Caregivers receiving training or support were more likely than those receiving information alone to drop out of the studies, which might limit applicability. The efficacy of these interventions may depend on the nature and availability of usual services in the study settings.
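The standardised mean differences (SMDs) reported above express the difference in group means in pooled standard deviation units, which lets trials using different burden or mood scales be combined. A minimal sketch of the underlying statistic (Cohen's d; the group values below are hypothetical, for illustration only, and Cochrane reviews typically apply a further small-sample correction, Hedges' g):

```python
import math

def standardised_mean_difference(mean_tx, sd_tx, n_tx, mean_ctrl, sd_ctrl, n_ctrl):
    """Cohen's d: difference in group means divided by the pooled SD."""
    pooled_sd = math.sqrt(((n_tx - 1) * sd_tx**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_tx + n_ctrl - 2))
    return (mean_tx - mean_ctrl) / pooled_sd

# Hypothetical burden scores: intervention mean 22 (SD 8, n = 50)
# versus control mean 24 (SD 8, n = 50).
print(round(standardised_mean_difference(22, 8, 50, 24, 8, 50), 2))  # → -0.25
```

A negative SMD here favours the intervention, matching the sign convention of the burden and depression results above.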
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
INTRODUCTION: Allergic reactions to substances of Ascaris lumbricoides have long been studied by various workers: Coventry (1929), Campbell (1936), Sakei (1949), Miyakawa (1950), Ikeda (1952), Matsumoto and Imawari (1952), Morishita and Kobayashi (1953, 1954), Komiyayama (1954) and Yamamoto (1956). Campbell (1936) and some other workers reported that the polysaccharides from ascaris produced stronger intradermal reactions than the protein fraction, though Yamamoto (1954) and others found the reverse. On the other hand, Hosotani (1954) reported that the crude antigen, or a mixed antigen of the polysaccharide and protein fractions of the ascaris, produced stronger skin reactions than any single fraction. As shown in the above reports, the intensity of the allergic reaction to substances from ascaris still remains under dispute. The reason might lie in differences in the methods of preparation, technique and evaluation. The aim of the present study is to elucidate the intensity of the allergic reactivity of the protein fraction, the polysaccharide fraction, the mixture of the two fractions, and the crude antigen of Ascaris lumbricoides. MATERIALS AND METHODS: A. Intradermal Test 1. Human Experiment: The intradermal test was performed on several groups of people. A: Ascaris lumbricoides egg-positive cases among adults. B: Ascaris lumbricoides egg-negative cases who had doubtful symptoms. C: Ascaris lumbricoides egg-negative cases who had a past history of ascaris infection. D: Ascaris lumbricoides egg-negative cases aged 3-8 months old. 2. Animal Experiment: Six dogs of the same breed were raised in laboratory cages for 6 months. During this period, special attention was paid to keeping them in a parasite-free condition. The body weight was 10 kg on average. B. Antigens: The adult worms of Ascaris lumbricoides, which were obtained during laparotomy, were first washed with sterilized saline solution.
Each ascaris was placed in 50 ml of saline solution and kept in a 30°C incubator for 24 hours. Among them, the active ones were selected and subjected to sudden freezing at -70°C for 20 hours; the whole body was then powdered in a dried condition and kept in an ampoule at -4°C. a) Crude antigen: The ether extract of powdered Ascaris lumbricoides was ground by adding veronal buffer solution (1:100) and kept in an icebox for 48 hours. The suspension was diluted with veronal buffer solution in the ratio of 1:10,000. b) Protein antigen: This antigen was prepared by Chaffee's modified method and the ammonium sulfate extraction method. c) Polysaccharide antigen: Chaffee's modified method and the ethanol extraction method were applied. d) Mixed antigen: Equal amounts of the protein and polysaccharide antigen preparations were mixed. C. Intradermal test 1. The intradermal test: 0.02 ml of antigen was injected on the anterior surface of the forearm in humans and on the back in animals, with a tuberculin syringe. The criteria of the skin reaction were determined as follows; wheal: -, 0-4 mm; +/-, 5-7 mm; +, 8-9 mm; +, 21-32 mm; ++, 33-44 mm; +++, 45-56 mm; ++++, over 57 mm in diameter. D. Stool examination: All stool examinations were done by the formalin-ether concentration (M.G.L.) method. E.P.G. (eggs per gram) was also determined by Stoll's egg-counting method. RESULTS: The intradermal reaction after the injection of each antigen was observed at 15, 30 and 60 minutes and at 3 and 24 hours. In 58 ascariasis cases, the wheal reached its peak at 30 minutes: 93.0% with the crude antigen, 15.5% with the mixed antigen, 10.3% with the protein antigen, but all were negative with the polysaccharide antigen. The erythema reaction paralleled, in general, the wheal: 75.8% at 15 minutes, 72.5% at 30 minutes and 48.3% at 60 minutes with the crude antigen.
Only 3.4% showed erythema at 15, 30 and 60 minutes in the case of the mixed antigen, and 1.7% were positive at 30 minutes in the case of the protein antigen, but none was observed with the polysaccharide antigen. The wheal and erythema correlated with each other; both showed a 65.5% positive boundary in the case of the crude antigen. Generally, the crude antigen produced the highest and strongest reactions, followed by the mixed, protein and polysaccharide fractions in decreasing order. In the adult group who were egg-negative at the time of injection, 81.5% were positive in the skin reaction with the crude antigen, as were 88.6% of the group who complained of doubtful symptoms but were egg-negative. In the group who had a past history of ascaris infection, 66.6% were positive with the same crude antigen, while the egg-negative infant group showed no reaction. The wheal size was not always paralleled by the worm burden. The cross-reaction between the antigens from Ascaris lumbricoides and Toxocara canis was examined by the intradermal test; there was no cross-reaction between the two antigens. Dogs infected with Toxocara canis showed a positive reaction to the crude antigen of the same species, but not to that of the human species. Experimentally, a positive skin reaction appeared only with the crude antigen at four weeks after infection with Toxocara canis. CONCLUSION: Intradermal studies with the fractions of Ascaris lumbricoides and Toxocara canis were performed on humans and dogs, and the following results were observed. 1) Wheal and erythema appeared in cases of ascaris infection or with a past history of infection, but not in previously ascaris-free subjects. 2) The size of the wheal reached its peak 30 minutes after the injection. 3) The crude antigen had specificity and showed no cross-reaction. 4) The crude antigen caused the strongest and largest reactions of all the substances tested: protein, polysaccharide and the mixed antigen. No cutaneous reaction was observed with the polysaccharide fraction.
5) The size of the wheal did not parallel the worm burden. 6) The skin reaction appeared four weeks after infection.
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
To compare the performance and cost-effectiveness of the key absorbent product designs to provide a more solid basis for guiding selection and purchase. Also to carry out the first stage in the development of a quality of life (QoL) instrument for measuring the impact of absorbent product use on users' lives. Three clinical trials focused on the three biggest market sectors. Each trial had a similar crossover design in which each participant tested all products within their group in random order. SETTING, PARTICIPANTS AND INTERVENTIONS: In Trial 1, 85 women with light urinary incontinence living in the community tested three products from each of the four design categories available (total of 12 test products): disposable inserts (pads); menstrual pads; washable pants with integral pad; and washable inserts. In Trial 2a, 85 moderate/heavily incontinent adults (urinary or urinary/faecal) living in the community (49 men and 36 women) tested three (or two) products from each of the five design categories available (total of 14 test products): disposable inserts (with mesh pants); disposable diapers (nappies); disposable pull-ups (similar to toddlers' trainer pants); disposable T-shaped diapers (nappies with waist-band); and washable diapers. All products were provided in a daytime and a (mostly more absorbent) night-time variant. In these first two trials, the test products were selected on the basis of data from pilot studies. In Trial 2b, 100 moderate/heavily incontinent adults (urinary or urinary/faecal) living in 10 nursing homes (27 men and 73 women) evaluated one product from each of the four disposable design categories from Trial 2a. Products were selected on the basis of product performance in Trial 2a and, again, day time and night-time variants were provided. The first phase of developing a QoL tool for measuring the impact of using different pad designs was carried out by interviewing participants from Trials 1 and 2a. Product performance (e.g. 
comfort, discreetness) was characterised using a weekly validated questionnaire. A daily pad change and leakage diary was used to record severity of leakage, numbers of laundry items and pads. Skin health changes were recorded weekly. At a final interview preferences were ranked, acceptability of each design recorded, and overall opinion marked on a visual analogue scale (VAS) of 0-100 points. This VAS score was used to estimate cost-effectiveness. In addition, a timed pad changing exercise was conducted with 10 women from Trial 2b to determine any differences between product designs. Disposable inserts are currently the mainstay of management for lightly incontinent women (Trial 1) and they were better for leakage and other variables (but not discreetness) and better overall than the other three designs. However, some women preferred menstrual pads (6/85) or washable pants (13/85), both of which are cheaper to use. Washable inserts were worse both overall and for leakage than the other three designs (72/85 found them unacceptable). For disposable inserts and disposable diapers, findings from the community (Trial 2a) and nursing home trials (Trial 2b) were broadly similar. Leakage performance of disposable inserts was worse than that of the other designs for day and night. Pull-ups were preferred over inserts for the daytime. The new T-shaped diaper was not better overall than the traditional disposable one. However, there were important differences in performance and preference findings for men and women from both trials. Pull-ups (the most expensive) were better overall than the other designs for women during the day and for community-dwelling women during the night. Although disposable diapers were better for leakage than disposable inserts (the cheapest), women did not prefer them (except in nursing homes at night), but for men the diapers were better both overall and for leakage and were the most cost-effective design. 
No firm conclusions could be drawn about the performance of designs for faecal incontinence. Nursing home carers found pull-ups and inserts easier to apply (in the standing position) and quicker (in the pad change experiment) than the diaper designs; the ability to stand was associated with preference for pull-ups or inserts. The T-shaped diaper was not easier or quicker to change than the diaper. The washable products (Trial 2a) gave diverse results: they were better for leakage at night, but were worse overall for daytime than the other designs. Three-quarters of the women (27/36) found them unacceptable, but nearly two-thirds of men (31/49) found them highly acceptable at night. Findings from the two community trials (Trials 1 and 2a) showed that there were many practical problems in dealing with washable products but, together with the less effective and less expensive products, such as menstrual pads, they were more acceptable at home (and, in the case of washables, at night). This suggests that cost-effective management may involve combining products by using more effective (for a given user) but more expensive designs (e.g. pull-ups) when out and less effective but less expensive designs when at home. The interviews examining the impact of pad use on QoL provided themes and domains that can be further developed into a tool for further evaluation of absorbent products. This study showed that there were significant and substantial differences between the designs of absorbent products and for moderate/heavy incontinence some designs are better for men/women than others. There was considerable individual variability in preferences and cost-effective management may best be achieved by allowing users to choose combinations of designs for different circumstances within a budget. 
Further research is needed into the feasibility of providing choice and combinations of designs to users, as well as into the development of more effective washables and of specifically male disposable products. QoL measurement tools are needed for users of absorbent products, as are clinical trials of designs for community-dwelling carer-dependent men and women with moderate/heavy incontinence.
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
In four experiments, three with young incubator turkeys and one with young incubator chickens, in which the feces of old turkeys from an infectious flock, kept at room temperature up to 5, 8, and 10 days, were fed, no infection resulted. In an experiment in which two of four young incubator turkeys used in one of the above experiments were fed embryonated eggs of Heterakis papillosa and feces of turkeys from an infectious flock both contracted blackhead. Two controls remained well. Later they were fed embryonated eggs of Heterakis papillosa and both contracted blackhead. In another experiment three incubator turkeys received embryonated eggs plus turkey feces from an infectious flock. All contracted blackhead. Three received embryonated eggs alone; all contracted blackhead. Three received turkey feces only; none contracted blackhead. Three controls received nothing; one showed blackhead lesions at the autopsy. In a final experiment three turkeys were fed cultures of feces from the ceca of diseased turkeys, three were fed cultures of feces of old turkeys from an infected flock, and three controls were fed nothing. None contracted blackhead. The cultures of feces were prepared precisely as were the earlier ones containing Heterakis eggs but without the latter. From these experiments it becomes evident that blackhead may be produced in healthy incubator-raised turkeys, reared in the open in an environment where blackhead occurs, but out of direct contact with old turkeys and other poultry, by feeding cultures of embryonated eggs of Heterakis papillosa, prepared by cutting up the worms in isotonic salt solution and incubating the suspension at room temperature. These very definite and clear-cut results outweigh any objections which may be raised against the use of turkeys which had been in earlier experiments and which came through such experiments without any signs of disease, or which came from control flocks in which spontaneous cases had occurred. 
The short time elapsing between feeding embryonated eggs and the first signs of disease made these experiments unusually impressive. It should be stated, furthermore, that from a precise individual record of all turkeys it was possible to select birds from control flocks in which the infection had either not appeared or was very low. All but two turkeys in flocks serving as sources of this material were killed at the close of the year. None at any time had shown symptoms of disease, and no scars or other abnormalities of ceca and liver were found. Furthermore, all other control birds and those in field experiments, with the exception of two reserved for breeding, were likewise killed. As a result of these autopsies, it was determined that of all birds in which symptoms of disease had not been recorded during life, none showed abnormalities or scars at autopsy. The protozoan factor in blackhead was probably disseminated when the first spontaneous cases occurred in the stock, unless it was present and made invasive by incubation in the cultures fed. This latter theory seems at present not acceptable because of the wholly negative outcome of Experiment 8. The production of acute blackhead by feeding embryonated eggs to turkeys in whose ceca adults of Heterakis papillosa are already present seems incomprehensible at first thought. A tentative explanation to be offered is that the worms when invading the ceca in large numbers break down the resistance of the bird which is able to protect itself against a few. This may account for the very irregular occurrence of cases in contact with older recovered birds on infected grounds. The rôle of Heterakis as a preliminary agent may also account for the continuing high mortality in turkeys in which the disease has been operating for so many generations to eliminate the most susceptible. 
It now seems highly probable that the turkey has become relatively resistant to the invasion of the protozoan parasite acting alone and that such invasion may require other agencies. Whether Heterakis papillosa is the only, or at any rate, the chief accessory agent or whether there are others, living or inert, which when ingested by the turkey assist in preparing the way for the destructive invasion of the walls of the ceca and the liver by Amoeba meleagridis is a question now open to solution by experimentation. The relation of common poultry to outbreaks of blackhead may be accounted for, at least in part, by the fact that they are hosts of Heterakis papillosa. How frequently they also carry Amoeba meleagridis remains to be determined. Since earlier communications have contained certain practical suggestions on the rearing of turkeys and the prevention of blackhead, it is not out of place here to point out that the additional information presented in this article simply emphasizes the suggestions already made. Turkeys should be raised in the incubator and brooder and kept away from older turkeys and poultry. The shelters should be moved from time to time to prevent a too concentrated infection of the soil with Heterakis ova. Inasmuch as the factors producing blackhead may be deposited by certain still undetermined birds on the wing, disease may be looked for at any time during the warm season. It is not, however, very readily transmitted, and in the experiments described elsewhere the mortality from spontaneous blackhead was low. The flock should be looked over as frequently as possible, and whenever a turkey begins to droop, it should be isolated and killed if the drooping continues over several days. If such turkeys are allowed to recover, they should not be returned to the young flock but kept with older, presumably infected birds. Such birds are entirely satisfactory as a source of eggs, since there is no evidence that the latter transmit the infection.
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Up to 80% of hospitalised patients receive intravenous therapy at some point during their admission. About 20% to 70% of patients receiving intravenous therapy develop phlebitis. Infusion phlebitis has become one of the most common complications in patients with intravenous therapy. However, the effects of routine treatments such as external application of 75% alcohol or 50% to 75% magnesium sulphate (MgSO4) are unsatisfactory. Therefore, there is an urgent need to develop new methods to prevent and alleviate infusion phlebitis. To systematically assess the effects of external application of Aloe vera for the prevention and treatment of infusion phlebitis associated with the presence of an intravenous access device. The Cochrane Peripheral Vascular Diseases Group Trials Search Co-ordinator (TSC) searched the Specialised Register (last searched February 2014) and CENTRAL (2014, Issue 1). In addition the TSC searched MEDLINE to week 5 January 2014, EMBASE to Week 6 2014 and AMED to February 2014. The authors searched the following Chinese databases until 28 February 2014: Chinese BioMedical Database; Traditional Chinese Medical Database System; China National Knowledge Infrastructure; Chinese VIP information; Chinese Medical Current Contents; Chinese Academic Conference Papers Database and Chinese Dissertation Database; and China Medical Academic Conference. Bibliographies of retrieved and relevant publications were searched. There were no restrictions on the basis of date or language of publication. Randomised controlled trials (RCTs) and quasi-randomised controlled trials (qRCTs) were included if they involved participants receiving topical Aloe vera or Aloe vera-derived products at the site of punctured skin, with or without routine treatment at the same site. Two review authors independently extracted the data on the study characteristics, description of methodology and outcomes of the eligible trials, and assessed study quality. 
Data were analysed using RevMan 5.1. For dichotomous outcomes, effects were estimated using the risk ratio (RR) with its 95% confidence interval (CI). For continuous outcomes, mean differences (MD) with 95% CIs were used. A total of 43 trials (35 RCTs and eight qRCTs) with 7465 participants were identified. Twenty-two trials with 5546 participants evaluated Aloe vera for the prevention of phlebitis, and a further 21 trials with 1919 participants evaluated it for the treatment of phlebitis. The included studies compared external application of Aloe vera, alone or plus non-Aloe vera interventions, with no treatment or with the same non-Aloe vera interventions. The duration of the intervention lasted from one day to 15 days. Most of the included studies were of low methodological quality, with concerns about selection bias, attrition bias, reporting bias and publication bias. The effects of external application of fresh Aloe vera on the total incidence of phlebitis varied across the studies and we did not combine the data. Aloe vera reduced the occurrence of third degree phlebitis (RR 0.06, 95% CI 0.03 to 0.11, P < 0.00001) and second degree phlebitis (RR 0.18, 95% CI 0.10 to 0.31, P < 0.00001) compared with no treatment.
Compared with external application of 75% alcohol or 33% MgSO4 alone, Aloe vera reduced the total incidence of phlebitis (RR 0.02, 95% CI 0.00 to 0.28, P = 0.004 and RR 0.43, 95% CI 0.24 to 0.78, P = 0.005 respectively), but there was no clear evidence of an effect when compared with 50% or 75% MgSO4 (total incidence of phlebitis: RR 0.41, 95% CI 0.16 to 1.07, P = 0.07 and RR 1.10, 95% CI 0.54 to 2.25, P = 0.79 respectively; third degree phlebitis: RR 0.28, 95% CI 0.07 to 1.02, P = 0.051 and RR 1.19, 95% CI 0.08 to 18.73, P = 0.9 respectively; second degree phlebitis: RR 0.68, 95% CI 0.21 to 2.23, P = 0.53 compared to 75% MgSO4), except for a reduction in second degree phlebitis when Aloe vera was compared with 50% MgSO4 (RR 0.26, 95% CI 0.14 to 0.50, P < 0.0001). For the treatment of phlebitis, Aloe vera was more effective than 33% or 50% MgSO4 in terms of both any improvement (RR 1.16, 95% CI 1.09 to 1.24, P < 0.0001 and RR 1.22, 95% CI 1.16 to 1.28, P < 0.0001 respectively) and marked improvement of phlebitis (RR 1.97, 95% CI 1.44 to 2.70, P < 0.001 and RR 1.56, 95% CI 1.29 to 1.87, P = 0.0002 respectively). Compared with 50% MgSO4, Aloe vera also improved recovery rates from phlebitis (RR 1.42, 95% CI 1.24 to 1.61, P < 0.0001). Compared with routine treatments such as external application of hirudoid, sulphonic acid mucopolysaccharide and dexamethasone used alone, the addition of Aloe vera improved recovery from phlebitis (RR 1.75, 95% CI 1.24 to 2.46, P = 0.001) and had a positive effect on overall improvement (marked improvement: RR 1.26, 95% CI 1.09 to 1.47, P = 0.0003; any improvement: RR 1.23, 95% CI 1.13 to 1.35, P < 0.0001). Aloe vera, either alone or in combination with routine treatment, was more effective than routine treatment alone for improving the symptoms of phlebitis, including shortening the time to elimination of red swelling, the time to pain relief at the infusion vein, and the time to resolution of phlebitis.
Other secondary outcomes including health-related quality of life and adverse effects were not reported in the included studies. There is no strong evidence for preventing or treating infusion phlebitis with external application of Aloe vera. The current available evidence is limited by the poor methodological quality and risk of selective outcome reporting of the included studies, and by variation in the size of effect across the studies. The positive effects observed with external application of Aloe vera in preventing or treating infusion phlebitis compared with no intervention or external application of 33% or 50% MgSO4 should therefore be viewed with caution.
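As a rough illustration of how the risk ratios and confidence intervals quoted throughout this review are derived, the sketch below computes an RR and its 95% CI from two-arm event counts using the standard log-normal approximation. The counts are invented for illustration and do not come from any included trial:

```python
import math

def risk_ratio_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Risk ratio and 95% CI from two-arm event counts (log-normal approximation)."""
    rr = (events_t / n_t) / (events_c / n_c)
    # Standard error of ln(RR)
    se = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Illustrative (made-up) counts: 6/100 phlebitis with treatment vs 33/100 without.
rr, lo, hi = risk_ratio_ci(6, 100, 33, 100)
print(round(rr, 2), round(lo, 2), round(hi, 2))
```

A CI lying entirely below 1, as here, is what underlies statements such as "Aloe vera reduced the occurrence of third degree phlebitis".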
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Individuals with osteoarthritis (OA) of the knee can be treated with a knee brace or a foot/ankle orthosis. The main purpose of these aids is to reduce pain, improve physical function and, possibly, slow disease progression. This is the second update of the original review published in Issue 1, 2005, and first updated in 2007. To assess the benefits and harms of braces and foot/ankle orthoses in the treatment of patients with OA of the knee. We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE and EMBASE (current contents, HealthSTAR) up to March 2014. We screened reference lists of identified trials and clinical trial registers for ongoing studies. Randomised and controlled clinical trials investigating all types of braces and foot/ankle orthoses for OA of the knee compared with an active control or no treatment. Two review authors independently selected trials and extracted data. We assessed risk of bias using the 'Risk of bias' tool of The Cochrane Collaboration. We analysed the quality of the results by performing an overall grading of evidence by outcome using the GRADE (Grades of Recommendation, Assessment, Development and Evaluation) approach. As a result of heterogeneity of studies, pooling of outcome data was possible for only three insole studies. We included 13 studies (n = 1356): four studies in the first version, three studies in the first update and six additional studies (n = 529 participants) in the second update. We included studies that reported results when study participants with early to severe knee OA (Kellgren & Lawrence grade I-IV) were treated with a knee brace (valgus knee brace, neutral brace or neoprene sleeve) or an orthosis (laterally or medially wedged insole, neutral insole, variable or constant stiffness shoe) or were given no treatment.
The main comparisons included (1) brace versus no treatment; (2) foot/ankle orthosis versus no treatment or other treatment; and (3) brace versus foot/ankle orthosis. Seven studies had low risk, two studies had high risk and four studies had unclear risk of selection bias. Five studies had low risk, three studies had high risk and five studies had unclear risk of detection bias. Ten studies had high risk and three studies had low risk of performance bias. Nine studies had low risk and four studies had high risk of reporting bias. Four studies compared brace versus no treatment, but only one provided useful data for meta-analysis at 12-month follow-up. One study (n = 117, low-quality evidence) showed lack of evidence of an effect on visual analogue scale (VAS) pain scores (absolute percent change 0%, mean difference (MD) 0.0, 95% confidence interval (CI) -0.84 to 0.84), function scores (absolute percent change 1%, MD 1.0, 95% CI -2.98 to 4.98) and health-related quality of life scores (absolute percent change 4%, MD -0.04, 95% CI -0.12 to 0.04) after 12 months. Many participants stopped their initial treatment because of lack of effect (24 of 60 participants in the brace group and 14 of 57 participants in the no treatment group; absolute percent change 15%, risk ratio (RR) 1.63, 95% CI 0.94 to 2.82). The other studies reported some improvement in pain, function and health-related quality of life (P value ≤ 0.001). Stiffness and treatment failure (need for surgery) were not reported in the included studies. For the comparison of laterally wedged insole versus no insole, one study (n = 40, low-quality evidence) showed a lower VAS pain score in the laterally wedged insole group (absolute percent change 16%, MD -1.60, 95% CI -2.31 to -0.89) after nine months.
Function, stiffness, health-related quality of life, treatment failure and adverse events were not reported in the included study. For the comparison of laterally wedged versus neutral insole after pooling of three studies (n = 358, moderate-quality evidence), little evidence was found of an effect on numerical rating scale (NRS) pain scores (absolute percent change 1.0%, MD 0.1, 95% CI -0.45 to 0.65), Western Ontario-McMaster Osteoarthritis Scale (WOMAC) stiffness scores (absolute percent change 0.1%, MD 0.07, 95% CI -4.96 to 5.1) and WOMAC function scores (absolute percent change 0.9%, MD 0.94, 95% CI -2.98 to 4.87) after 12 months. Evidence of an effect on health-related quality of life scores (absolute percent change 1.0%, MD 0.01, 95% CI -0.05 to 0.03) was lacking in one study (n = 179, moderate-quality evidence). Treatment failure and adverse events were not studied for this comparison in the included studies. Data for the comparison of laterally wedged insole versus valgus knee brace could not be pooled. After six months' follow-up, no statistically significant difference was noted in VAS pain scores (absolute percent change -2.0%, MD -0.2, 95% CI -1.15 to 0.75) and WOMAC function scores (absolute percent change 0.1%, MD 0.1, 95% CI -7.26 to 0.75) in one study (n = 91, low-quality evidence); however both groups showed improvement. Stiffness, health-related quality of life, treatment failure and adverse events were not reported in the included studies for this comparison. Evidence was inconclusive for the benefits of bracing for pain, stiffness, function and quality of life in the treatment of patients with medial compartment knee OA. On the basis of one laterally wedged insole versus no treatment study, we conclude that evidence of an effect on pain in patients with varus knee OA is lacking.
Moderate-quality evidence shows lack of an effect on improvement in pain, stiffness and function between patients treated with a laterally wedged insole and those treated with a neutral insole. Low-quality evidence shows lack of an effect on improvement in pain, stiffness and function between patients treated with a valgus knee brace and those treated with a laterally wedged insole. The optimal choice for an orthosis remains unclear, and long-term implications are lacking.
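Where this review pools studies (e.g. the three laterally wedged versus neutral insole studies), the underlying calculation is inverse-variance weighting of per-study effect estimates. Below is a minimal fixed-effect sketch with invented mean differences and standard errors; random-effects pooling, which RevMan also offers, adds a between-study variance term to each weight:

```python
import math

def pool_fixed_effect(estimates):
    """Inverse-variance fixed-effect pooling of (MD, standard error) pairs."""
    weights = [1 / se**2 for _, se in estimates]
    pooled = sum(w * md for (md, _), w in zip(estimates, weights)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Illustrative (made-up) per-study mean differences and standard errors.
studies = [(0.3, 0.4), (-0.1, 0.5), (0.1, 0.3)]
md, lo, hi = pool_fixed_effect(studies)
print(round(md, 2), round(lo, 2), round(hi, 2))
```

Note how the more precise third study (smallest standard error) pulls the pooled estimate hardest; a pooled CI straddling zero, as here, matches the "little evidence of an effect" wording above.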
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
This is an update of the original Cochrane review, last published in 2009 (Huertas-Ceballos 2009). Recurrent abdominal pain (RAP), including children with irritable bowel syndrome, is a common problem affecting between 4% and 25% of school-aged children. For the majority of such children, no organic cause for their pain can be found on physical examination or investigation. Many dietary interventions have been suggested to improve the symptoms of RAP. These may involve either excluding ingredients from the diet or adding supplements such as fibre or probiotics. To examine the effectiveness of dietary interventions in improving pain in children of school age with RAP. We searched CENTRAL, Ovid MEDLINE, Embase, eight other databases, and two trials registers, together with reference checking, citation searching and contact with study authors, in June 2016. Randomised controlled trials (RCTs) comparing dietary interventions with placebo or no treatment in children aged five to 18 years with RAP or an abdominal pain-related, functional gastrointestinal disorder, as defined by the Rome III criteria (Rasquin 2006). We used standard methodological procedures expected by Cochrane. We grouped dietary interventions together by category for analysis. We contacted study authors to ask for missing information and clarification, when needed. We assessed the quality of the evidence for each outcome using the GRADE approach. We included 19 RCTs, reported in 27 papers with a total of 1453 participants. Fifteen of these studies were not included in the previous review. All 19 RCTs had follow-up ranging from one to five months. Participants were aged between four and 18 years from eight different countries and were recruited largely from paediatric gastroenterology clinics. The mean age at recruitment ranged from 6.3 years to 13.1 years. Girls outnumbered boys in most trials.
Fourteen trials recruited children with a diagnosis under the broad umbrella of RAP or functional gastrointestinal disorders; five trials specifically recruited only children with irritable bowel syndrome. The studies fell into four categories: trials of probiotic-based interventions (13 studies), trials of fibre-based interventions (four studies), trials of low FODMAP (fermentable oligosaccharides, disaccharides, monosaccharides and polyols) diets (one study), and trials of fructose-restricted diets (one study). We found that children treated with probiotics reported a greater reduction in pain frequency at zero to three months postintervention than those given placebo (standardised mean difference (SMD) -0.55, 95% confidence interval (CI) -0.98 to -0.12; 6 trials; 523 children). There was also a decrease in pain intensity in the intervention group at the same time point (SMD -0.50, 95% CI -0.85 to -0.15; 7 studies; 575 children). However, we judged the evidence for these outcomes to be of low quality using GRADE due to an unclear risk of bias from incomplete outcome data and significant heterogeneity. We found that children treated with probiotics were more likely to experience improvement in pain at zero to three months postintervention than those given placebo (odds ratio (OR) 1.63, 95% CI 1.07 to 2.47; 7 studies; 722 children). The estimated number needed to treat for an additional beneficial outcome (NNTB) was eight, meaning that eight children would need to receive probiotics for one to experience improvement in pain in this timescale. We judged the evidence for this outcome to be of moderate quality due to significant heterogeneity. Children with a symptom profile defined as irritable bowel syndrome treated with probiotics were more likely to experience improvement in pain at zero to three months postintervention than those given placebo (OR 3.01, 95% CI 1.77 to 5.13; 4 studies; 344 children). 
Children treated with probiotics were more likely to experience improvement in pain at three to six months postintervention compared to those receiving placebo (OR 1.94, 95% CI 1.10 to 3.43; 2 studies; 224 children). We judged the evidence for these two outcomes to be of moderate quality due to small numbers of participants included in the studies. We found that children treated with fibre-based interventions were not more likely to experience an improvement in pain at zero to three months postintervention than children given placebo (OR 1.83, 95% CI 0.92 to 3.65; 2 studies; 136 children). There was also no reduction in pain intensity compared to placebo at the same time point (SMD -1.24, 95% CI -3.41 to 0.94; 2 studies; 135 children). We judged the evidence for these outcomes to be of low quality due to an unclear risk of bias, imprecision, and significant heterogeneity. We found only one study of low FODMAP diets and only one trial of fructose-restricted diets, meaning no pooled analyses were possible. We were unable to perform any meta-analyses for the secondary outcomes of school performance, social or psychological functioning, or quality of daily life, as not enough studies included these outcomes or used comparable measures to assess them. With the exception of one study, all studies reported monitoring children for adverse events; no major adverse events were reported. Overall, we found moderate- to low-quality evidence suggesting that probiotics may be effective in improving pain in children with RAP. Clinicians may therefore consider probiotic interventions as part of a holistic management strategy. However, further trials are needed to examine longer-term outcomes and to improve confidence in estimating the size of the effect, as well as to determine the optimal strain and dosage. 
Future research should also explore the effectiveness of probiotics in children with different symptom profiles, such as those with irritable bowel syndrome. We found only a small number of trials of fibre-based interventions, with overall low-quality evidence for the outcomes. There was therefore no convincing evidence that fibre-based interventions improve pain in children with RAP. Further high-quality RCTs of fibre supplements involving larger numbers of participants are required. Future trials of low FODMAP diets and other dietary interventions are also required to facilitate evidence-based recommendations.
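The NNTB of eight reported above is derived from the pooled odds ratio for improvement in pain (OR 1.63). A minimal sketch of that conversion in Python; the 45% control-group improvement rate is an illustrative assumption, since the abstract does not report the baseline rate the review authors used:

```python
def nntb_from_or(odds_ratio, control_rate):
    """Number needed to treat for benefit, given an odds ratio and an
    assumed control-group event (improvement) rate."""
    control_odds = control_rate / (1 - control_rate)
    treated_odds = control_odds * odds_ratio
    treated_rate = treated_odds / (1 + treated_odds)
    return 1 / (treated_rate - control_rate)

# OR 1.63 from the probiotics meta-analysis; the 0.45 control rate is assumed.
print(round(nntb_from_or(1.63, 0.45)))  # prints 8
```

The result is sensitive to the assumed control rate: for this odds ratio, control improvement rates between roughly 35% and 55% all give an NNTB of eight to nine.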
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
This is the second update of a Cochrane Review originally published in 2009. Millions of workers worldwide are exposed to noise levels that increase their risk of hearing disorders. There is uncertainty about the effectiveness of hearing loss prevention interventions. To assess the effectiveness of non-pharmaceutical interventions for preventing occupational noise exposure or occupational hearing loss compared to no intervention or alternative interventions. We searched CENTRAL; PubMed; Embase; CINAHL; Web of Science; BIOSIS Previews; Cambridge Scientific Abstracts; and OSH UPDATE to 3 October 2016. We included randomised controlled trials (RCT), controlled before-after studies (CBA) and interrupted time-series (ITS) of non-clinical interventions under field conditions among workers to prevent or reduce noise exposure and hearing loss. We also collected uncontrolled case studies of engineering controls about the effect on noise exposure. Two authors independently assessed study eligibility and risk of bias and extracted data. We categorised interventions as engineering controls, administrative controls, personal hearing protection devices, and hearing surveillance. We included 29 studies. One study evaluated legislation to reduce noise exposure in a 12-year time-series analysis but there were no controlled studies on engineering controls for noise exposure. Eleven studies with 3725 participants evaluated effects of personal hearing protection devices and 17 studies with 84,028 participants evaluated effects of hearing loss prevention programmes (HLPPs). Effects on noise exposure. Engineering interventions following legislation: One ITS study found that new legislation in the mining industry reduced the median personal noise exposure dose in underground coal mining by 27.7 percentage points (95% confidence interval (CI) -36.1 to -19.3 percentage points) immediately after the implementation of stricter legislation. 
This roughly translates to a 4.5 dB(A) decrease in noise level. The intervention was associated with a favourable but statistically non-significant downward trend in time of the noise dose of -2.1 percentage points per year (95% CI -4.9 to 0.7, 4 year follow-up, very low-quality evidence). Engineering intervention case studies: We found 12 studies that described 107 uncontrolled case studies of immediate reductions in noise levels of machinery ranging from 11.1 to 19.7 dB(A) as a result of purchasing new equipment, segregating noise sources or installing panels or curtains around sources. However, the studies lacked long-term follow-up and dose measurements of workers, and we did not use these studies for our conclusions. Hearing protection devices: In general, hearing protection devices reduced noise exposure on average by about 20 dB(A) in one RCT and three CBAs (57 participants, low-quality evidence). Two RCTs showed that, with instructions for insertion, the attenuation of noise by earplugs was 8.59 dB better (95% CI 6.92 dB to 10.25 dB) compared to no instruction (2 RCTs, 140 participants, moderate-quality evidence). Administrative controls (information and noise exposure feedback): On-site training sessions did not have an effect on personal noise-exposure levels compared to information only in one cluster-RCT after four months' follow-up (mean difference (MD) 0.14 dB; 95% CI -2.66 to 2.38). Another arm of the same study found that personal noise exposure information had no effect on noise levels (MD 0.30 dB(A), 95% CI -2.31 to 2.91) compared to no such information (176 participants, low-quality evidence). Effects on hearing loss. Hearing protection devices: In two studies the authors compared the effect of different devices on temporary threshold shifts at short-term follow-up but reported insufficient data for analysis. 
In two CBA studies the authors found no difference in hearing loss from noise exposure above 89 dB(A) between muffs and earplugs at long-term follow-up (OR 0.8, 95% CI 0.63 to 1.03; very low-quality evidence). Authors of another CBA study found that wearing hearing protection more often resulted in less hearing loss at very long-term follow-up (very low-quality evidence). Combination of interventions (hearing loss prevention programmes): One cluster-RCT found no difference in hearing loss at three- or 16-year follow-up between an intensive HLPP for agricultural students and audiometry only. One CBA study found no reduction of the rate of hearing loss (MD -0.82 dB per year, 95% CI -1.86 to 0.22) for a HLPP that provided regular personal noise exposure information compared to a programme without this information. There was very low-quality evidence in four very long-term studies that better use of hearing protection devices as part of a HLPP decreased the risk of hearing loss compared to less well used hearing protection in HLPPs (OR 0.40, 95% CI 0.23 to 0.69). Other aspects of the HLPP such as training and education of workers or engineering controls did not show a similar effect. In three long-term CBA studies, workers in a HLPP had a statistically non-significant 1.8 dB (95% CI -0.6 to 4.2) greater hearing loss at 4 kHz than non-exposed workers, and the confidence interval includes the 4.2 dB which is the level of hearing loss resulting from 5 years of exposure to 85 dB(A). In addition, of three other CBA studies that could not be included in the meta-analysis, two showed an increased risk of hearing loss in spite of the protection of a HLPP compared to non-exposed workers and one CBA did not. There is very low-quality evidence that implementation of stricter legislation can reduce noise levels in workplaces. Controlled studies of other engineering control interventions in the field have not been conducted. 
There is moderate-quality evidence that training of proper insertion of earplugs significantly reduces noise exposure at short-term follow-up but long-term follow-up is still needed. There is very low-quality evidence that the better use of hearing protection devices as part of HLPPs reduces the risk of hearing loss, whereas for other programme components of HLPPs we did not find such an effect. The absence of conclusive evidence should not be interpreted as evidence of lack of effectiveness. Rather, it means that further research is very likely to have an important impact.
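The record above converts a 27.7-percentage-point drop in median noise dose into roughly a 4.5 dB(A) level reduction. Noise dose is exponential in level: dose doubles for every increase of one "exchange rate" (5 dB in US mining regulation). A sketch of the conversion, assuming a hypothetical baseline median dose of about 60%, which the abstract does not state:

```python
import math

def level_change_db(dose_before, dose_after, exchange_rate_db=5.0):
    """dB(A) change implied by a change in percent noise dose.
    Dose doubles (halves) for every +/- exchange_rate_db decibels."""
    return exchange_rate_db * math.log2(dose_after / dose_before)

# Hypothetical 60% baseline median dose; 27.7-point drop from the study.
print(round(level_change_db(60.0, 60.0 - 27.7), 1))  # prints -4.5
```

With a 3 dB exchange rate the same dose change would imply a smaller level reduction, so the assumed exchange rate matters as much as the assumed baseline.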
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Unconditional cash transfers (UCTs; provided without obligation) for reducing poverty and vulnerabilities (e.g. orphanhood, old age or HIV infection) are a type of social protection intervention that addresses a key social determinant of health (income) in low- and middle-income countries (LMICs). The relative effectiveness of UCTs compared with conditional cash transfers (CCTs; provided so long as the recipient engages in prescribed behaviours such as using a health service or attending school) is unknown. To assess the effects of UCTs for improving health services use and health outcomes in vulnerable children and adults in LMICs. Secondary objectives are to assess the effects of UCTs on social determinants of health and healthcare expenditure and to compare the effects of UCTs versus CCTs. We searched 17 electronic academic databases, including the Cochrane Public Health Group Specialised Register, the Cochrane Database of Systematic Reviews (the Cochrane Library 2017, Issue 5), MEDLINE and Embase, in May 2017. We also searched six electronic grey literature databases and websites of key organisations, handsearched key journals and included records, and sought expert advice. We included both parallel group and cluster-randomised controlled trials (RCTs), quasi-RCTs, cohort and controlled before-and-after (CBAs) studies, and interrupted time series studies of UCT interventions in children (0 to 17 years) and adults (18 years or older) in LMICs. Comparison groups received either no UCT or a smaller UCT. Our primary outcomes were any health services use or health outcome. Two reviewers independently screened potentially relevant records for inclusion criteria, extracted data and assessed the risk of bias. We tried to obtain missing data from study authors if feasible. For cluster-RCTs, we generally calculated risk ratios for dichotomous outcomes from crude frequency measures in approximately correct analyses. 
Meta-analyses applied the inverse variance or Mantel-Haenszel method with random effects. We assessed the quality of evidence using the GRADE approach. We included 21 studies (16 cluster-RCTs, 4 CBAs and 1 cohort study) involving 1,092,877 participants (36,068 children and 1,056,809 adults) and 31,865 households in Africa, the Americas and South-East Asia in our meta-analyses and narrative synthesis. The 17 types of UCTs we identified, including one basic universal income intervention, were pilot or established government programmes or research experiments. The cash value was equivalent to 1.3% to 53.9% of the annualised gross domestic product per capita. All studies compared a UCT with no UCT, and three studies also compared a UCT with a CCT. Most studies carried an overall high risk of bias (i.e. often selection and/or performance bias). Most studies were funded by national governments and/or international organisations. Throughout the review, we use the words 'probably' to indicate moderate-quality evidence, 'may/maybe' for low-quality evidence, and 'uncertain' for very low-quality evidence. UCTs may not have impacted the likelihood of having used any health service in the previous 1 to 12 months, when participants were followed up between 12 and 24 months into the intervention (risk ratio (RR) 1.04, 95% confidence interval (CI) 1.00 to 1.09, P = 0.07, 5 cluster-RCTs, N = 4972, I² = 2%, low-quality evidence). At one to two years, UCTs probably led to a clinically meaningful, very large reduction in the likelihood of having had any illness in the previous two weeks to three months (odds ratio (OR) 0.73, 95% CI 0.57 to 0.93, 5 cluster-RCTs, N = 8446, I² = 57%, moderate-quality evidence). Evidence from five cluster-RCTs on food security was too inconsistent to be combined in a meta-analysis, but it suggested that at 13 to 24 months' follow-up, UCTs could increase the likelihood of having been food secure over the previous month (low-quality evidence). 
UCTs may have increased participants' level of dietary diversity over the previous week, when assessed with the Household Dietary Diversity Score and followed up 24 months into the intervention (mean difference (MD) 0.59 food categories, 95% CI 0.18 to 1.01, 4 cluster-RCTs, N = 9347, I² = 79%, low-quality evidence). Despite several studies providing relevant evidence, the effects of UCTs on the likelihood of being moderately stunted and on the level of depression remain uncertain. No evidence was available on the effect of a UCT on the likelihood of having died. UCTs probably led to a clinically meaningful, moderate increase in the likelihood of currently attending school, when assessed at 12 to 24 months into the intervention (RR 1.06, 95% CI 1.03 to 1.09, 6 cluster-RCTs, N = 4800, I² = 0%, moderate-quality evidence). The evidence was uncertain for whether UCTs impacted livestock ownership, extreme poverty, participation in child labour, adult employment or parenting quality. Evidence from six cluster-RCTs on healthcare expenditure was too inconsistent to be combined in a meta-analysis, but it suggested that UCTs may have increased the amount of money spent on health care at 7 to 24 months into the intervention (low-quality evidence). The effects of UCTs on health equity (or unfair and remedial health inequalities) were very uncertain. We did not identify any harms from UCTs. Three cluster-RCTs compared UCTs versus CCTs with regard to the likelihood of having used any health services, the likelihood of having had any illness or the level of dietary diversity, but evidence was limited to one study per outcome and was very uncertain for all three. This body of evidence suggests that unconditional cash transfers (UCTs) may not impact a summary measure of health service use in children and adults in LMICs. However, UCTs probably or may improve some health outcomes (i.e. 
the likelihood of having had any illness, the likelihood of having been food secure, and the level of dietary diversity), one social determinant of health (i.e. the likelihood of attending school), and healthcare expenditure. The evidence on the relative effectiveness of UCTs and CCTs remains very uncertain.
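The methods above pool effects with inverse-variance weighting under a random-effects model. A minimal DerSimonian-Laird sketch of that pooling step, using made-up log risk ratios and standard errors purely for illustration (not the review's data):

```python
import math

def dersimonian_laird(effects, ses):
    """Random-effects inverse-variance pooling with the DerSimonian-Laird
    estimator of between-study variance (tau^2)."""
    w = [1 / se**2 for se in ses]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # truncated at zero
    w_re = [1 / (se**2 + tau2) for se in ses]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return pooled, se_pooled, tau2

# Illustrative log-RRs and standard errors from three invented studies.
pooled, se, tau2 = dersimonian_laird([0.02, 0.06, 0.10], [0.02, 0.03, 0.04])
print(round(math.exp(pooled), 2))  # pooled RR back on the ratio scale
```

When the heterogeneity statistic Q does not exceed its degrees of freedom, tau² is truncated to zero and the estimate collapses to the fixed-effect one.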
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Diabetic retinopathy (DR) is a chronic progressive disease of the retinal microvasculature associated with prolonged hyperglycaemia. Proliferative DR (PDR) is a sight-threatening complication of DR and is characterised by the development of abnormal new vessels in the retina, optic nerve head or anterior segment of the eye. Argon laser photocoagulation has been the gold standard for the treatment of PDR for many years, using regimens evaluated by the Early Treatment of Diabetic Retinopathy Study (ETDRS). Over the years, there have been modifications of the technique and introduction of new laser technologies. To assess the effects of different types of laser, other than argon laser, and different laser protocols, other than those established by the ETDRS, for the treatment of PDR. We compared different wavelengths; power and pulse duration; pattern, number and location of burns versus standard argon laser undertaken as specified by the ETDRS. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2017, Issue 5); Ovid MEDLINE; Ovid Embase; LILACS; the ISRCTN registry; ClinicalTrials.gov and the ICTRP. The date of the search was 8 June 2017. We included randomised controlled trials (RCTs) of pan-retinal photocoagulation (PRP) using standard argon laser for treatment of PDR compared with any other laser modality. We excluded studies of lasers that are not in common use, such as the xenon arc, ruby or Krypton laser. We followed Cochrane guidelines and graded the certainty of evidence using the GRADE approach. We identified 11 studies from Europe (6), the USA (2), the Middle East (1) and Asia (2). Five studies compared different types of laser to argon: Nd:YAG (2 studies) or diode (3 studies). Other studies compared modifications to the standard argon laser PRP technique. The studies were poorly reported and we judged all to be at high risk of bias in at least one domain. 
The sample size varied from 20 to 270 eyes but the majority included 50 participants or fewer. Nd:YAG versus argon laser (2 studies): very low-certainty evidence on vision loss, vision gain, progression and regression of PDR, pain during laser treatment and adverse effects. Diode versus argon laser (3 studies): very low-certainty evidence on vision loss, vision gain, progression and regression of PDR and adverse effects; moderate-certainty evidence that diode laser was more painful (risk ratio (RR) for troublesome pain during laser treatment 3.12, 95% CI 2.16 to 4.51; eyes = 202; studies = 3; I² = 0%). 0.5 second versus 0.1 second exposure (1 study): low-certainty evidence of lower chance of vision loss with 0.5 second compared with 0.1 second exposure but estimates were imprecise and compatible with no difference or an increased chance of vision loss (RR 0.42, 95% CI 0.08 to 2.04, 44 eyes, 1 RCT); low-certainty evidence that people treated with 0.5 second exposure were more likely to gain vision (RR 2.22, 95% CI 0.68 to 7.28, 44 eyes, 1 RCT) but again the estimates were imprecise. People given 0.5 second exposure were more likely to have regression of PDR compared with 0.1 second laser PRP, again with an imprecise estimate (RR 1.17, 95% CI 0.92 to 1.48, 32 eyes, 1 RCT). There was very low-certainty evidence on progression of PDR and adverse effects. 'Light intensity' PRP versus classic PRP (1 study): vision loss or gain was not reported but the mean difference in logMAR acuity at 1 year was -0.09 logMAR (95% CI -0.22 to 0.04, 65 eyes, 1 RCT); and low-certainty evidence that fewer patients had pain during light PRP compared with classic PRP with an imprecise estimate compatible with increased or decreased pain (RR 0.23, 95% CI 0.03 to 1.93, 65 eyes, 1 RCT). 'Mild scatter' (laser pattern limited to 400 to 600 laser burns in one sitting) PRP versus standard 'full' scatter PRP (1 study): very low-certainty evidence on vision and visual field loss. 
No information on adverse effects. 'Central' (a more central PRP in addition to mid-peripheral PRP) versus 'peripheral' standard PRP (1 study): low-certainty evidence that people treated with central PRP were more likely to lose 15 or more letters of BCVA compared with peripheral laser PRP (RR 3.00, 95% CI 0.67 to 13.46, 50 eyes, 1 RCT); and less likely to gain 15 or more letters (RR 0.25, 95% CI 0.03 to 2.08) with imprecise estimates compatible with increased or decreased risk. 'Centre sparing' PRP (argon laser distribution limited to 3 disc diameters from the upper temporal and lower margin of the fovea) versus standard 'full scatter' PRP (1 study): low-certainty evidence that people treated with 'centre sparing' PRP were less likely to lose 15 or more ETDRS letters of BCVA compared with 'full scatter' PRP (RR 0.67, 95% CI 0.30 to 1.50, 53 eyes). Low-certainty evidence of similar risk of regression of PDR between groups (RR 0.96, 95% CI 0.73 to 1.27, 53 eyes). Adverse events were not reported. 'Extended targeted' PRP (to include the equator and any capillary non-perfusion areas between the vascular arcades) versus standard PRP (1 study): low-certainty evidence that people in the extended group had similar or slightly reduced chance of loss of 15 or more letters of BCVA compared with the standard PRP group (RR 0.94, 95% CI 0.70 to 1.28, 270 eyes). Low-certainty evidence that people in the extended group had a similar or slightly increased chance of regression of PDR compared with the standard PRP group (RR 1.11, 95% CI 0.95 to 1.31, 270 eyes). Very low-certainty information on adverse effects. Modern laser techniques and modalities have been developed to treat PDR. However, there is limited evidence available with respect to the efficacy and safety of alternative laser systems or strategies compared with the standard argon laser as described in ETDRS.
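Most effect estimates in this record are risk ratios with confidence intervals computed on the log scale from small 2x2 tables. A sketch of that calculation with hypothetical counts; the trials' actual event counts are not given in the abstract, so 2/22 versus 5/22 is invented simply to be of similar magnitude to the 44-eye vision-loss comparison above, not a reconstruction of the trial's data:

```python
import math

def risk_ratio_ci(a, n1, c, n2, z=1.96):
    """Risk ratio with a 95% Wald CI on the log scale from a 2x2 table:
    a/n1 events in one arm, c/n2 events in the comparator arm."""
    rr = (a / n1) / (c / n2)
    se_log = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical: 2/22 eyes lost vision with 0.5 s exposure vs 5/22 with 0.1 s.
rr, lo, hi = risk_ratio_ci(2, 22, 5, 22)
print(round(rr, 2), round(lo, 2), round(hi, 2))
```

With zero events in a cell the log-scale standard error is undefined; a common remedy is adding 0.5 to each cell before computing.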
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Since decompression sickness (DCS) in humans was first described, mankind has embarked on an odyssey to prevent it. The demonstration that decompression releases bubbles, which mainly contain inert gas (nitrogen, helium), into the circulation and that the slower the decompression rate the lesser the incidence of DCS, resulted in 1908 in the publication of the first, reasonably safe diving tables. Besides the development of proper diving tables, the selection of divers is also of importance. A relationship between body composition and DCS was observed in dogs as long ago as the nineteenth century, an observation supported early in the twentieth century: "Really fat men should never be allowed to work in compressed air, and plump men should be excluded from high pressure caissons…or in diving to more than about 10 fathoms, and at this depth the time of their exposure should be curtailed. If deep diving is to be undertaken…. skinny men should be selected." Alas, nothing is that simple! From my own experience it was not always the fat diver who ended up in the treatment chamber with DCS. Therefore, other factors must be at play; gender, age, physical fitness, and the existence of a persistent foramen ovale (PFO) have all been studied as possible factors for the development of vascular gas bubbles and, therefore, for DCS. However, none of these factors, alone or in combination, explain why there are intra-individual or intra-cohort differences in bubble grades (BG). In other words, why does a dive I did today lead to a high BG but the same dive next week lead to a low one? Or, why is there such a difference in BG amongst divers of more or less the same age, gender, body composition and physical fitness? In a letter in this issue, a novel hypothesis is postulated that may fill in these gaps: active hydrophobic spots (AHS). These AHS can be found at the luminal side of capillary, venous and arterial walls and have an oligolamellar lining. 
In an in vitro experiment, nanobubbles developed on AHS after a 'dive' to 1,000 kPa (90 msw). It appears that AHS consist of dipalmitoylphosphatidylcholine (DPPC), which is the main component of surfactant. It is proposed that DPPC may leak from the alveoli into the alveolar capillary and be transported to veins and arteries where it precipitates and forms AHS. Based on these ideas, it is hypothesized that AHS generate nanobubbles that can grow into microbubbles. When these microbubbles detach from the AHS they might also take along pieces of the AHS membrane, making the AHS smaller or even disappear. This phenomenon could explain some of the earlier findings regarding the formation of microbubbles in divers. The fact that the presence of microbubbles differs between younger and older divers, after repetitive dives, and between experienced divers and novice divers can be explained by this model, and AHS may be the missing link we are looking for in our quest to understand and treat DCS. However, some reservations must be made. Firstly, these observations are derived from in vitro and animal experiments and whether or not they reflect a similar process in man remains unclear. Secondly, it appears that female divers have lower bubble grades after similar dives compared to male divers, suggesting lower decompression stress. If AHS is the main generator for microbubbles, there should be a difference in the presence of AHS between men and women. We do not know from these animal experiments whether there is a gender difference, neither does a literature search in PubMed provide us with an answer. Thirdly, as said before, DPPC is the main component of surfactant. All alveolar surfactant phospholipids, such as DPPC, are secreted to the alveolar space via exocytosis of the lamellar bodies (LB) from alveolar type II (ATII) cells. To form a functional air-blood barrier, alveolar type I and ATII cells are connected to each other by tight junctions. 
These tight junctions constitute the seal of the intercellular cleft and in that way form a true barrier between the alveolus and the capillary. Only small molecules like oxygen, carbon dioxide, etc. can penetrate through this barrier by themselves due to passive diffusion. All other (macro)molecules, including DPPC, need intermediate processes such as ion transport proteins, channels, metabolic pumps, etc. to gain access to the pulmonary capillary lumen. To my knowledge, no such mechanisms for DPPC or LB are known. A theoretical explanation might be the fact that the production of DPPC and the exocytosis of DPPC-containing LBs into the alveolar space can be stimulated by stretch. Stretch of the alveoli can switch on Ca2+ entry by either mechanosensitive channels, store-operated channels or second messenger-operated channels, which induces LB exocytosis. Furthermore, an ATP-release mechanism might also be responsible for the pulmonary alveolar mechanotransduction of LB. During diving, transpulmonary pressure changes occur which might induce additional alveolar stretch and thus, theoretically, an extra release of LB. However, whether or not such exocytosis of LB is vascularly orientated remains unclear. Besides which, the leakage of DPPC from the alveolus to the pulmonary capillary might also be as simple as a malfunction of the tight junction due to epithelial membrane damage as a result of diving. Finally, it is also possible that DPPC is produced in other non-ATII cells in our body of which we are currently unaware. To conclude, this is an interesting hypothesis regarding the origin of microbubbles. Whether or not DPPC and LB are the main reason for individual sensitivity to DCS remains unclear. Further research will hopefully identify if DPPC and LB are indeed the missing link or just another branch on the big tree of the genesis of decompression sickness.
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Hospitalised patients are at increased risk of developing deep vein thrombosis (DVT) in the lower limb and pelvic veins, on a background of prolonged immobilisation associated with their medical or surgical illness. Patients with DVT are at increased risk of developing a pulmonary embolism (PE). The use of graduated compression stockings (GCS) in hospitalised patients has been proposed to decrease the risk of DVT. This is an update of a Cochrane Review first published in 2000, and last updated in 2014. To evaluate the effectiveness and safety of graduated compression stockings in preventing deep vein thrombosis in various groups of hospitalised patients. For this review the Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL), and trials registries on 21 March 2017; and the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE Ovid, Embase Ovid, CINAHL Ebsco, AMED Ovid, and trials registries on 12 June 2018. Randomised controlled trials (RCTs) involving GCS alone, or GCS used on a background of any other DVT prophylactic method. We combined results from both of these groups of trials. Two review authors (AS, MD) assessed potentially eligible trials for inclusion. One review author (AS) extracted the data, which a second review author (MD) cross-checked and authenticated. Two review authors (AS, MD) assessed the methodological quality of trials with the Cochrane 'Risk of bias' tool. Any disagreements were resolved by discussion with the senior review author (TL). For dichotomous outcomes, we calculated the Peto odds ratio and corresponding 95% confidence interval. We pooled data using a fixed-effect model. We used the GRADE system to evaluate the overall quality of the evidence supporting the outcomes assessed in this review. We included 20 RCTs involving a total of 1681 individual participants and 1172 individual legs (2853 analytic units). 
Of these 20 trials, 10 included patients undergoing general surgery; six included patients undergoing orthopaedic surgery; three individual trials included patients undergoing neurosurgery, cardiac surgery, and gynaecological surgery, respectively; and only one trial included medical patients. Graduated compression stockings were applied on the day before surgery or on the day of surgery and were worn up until discharge or until the participants were fully mobile. In the majority of the included studies DVT was identified by the radioactive ¹²⁵I uptake test. Duration of follow-up ranged from seven to 14 days. The included studies were at an overall low risk of bias. We were able to pool the data from 20 studies reporting the incidence of DVT. In the GCS group, 134 of 1445 units developed DVT (9%) in comparison to the control group (without GCS), in which 290 of 1408 units developed DVT (21%). The Peto odds ratio (OR) was 0.35 (95% confidence interval (CI) 0.28 to 0.43; 20 studies; 2853 units; high-quality evidence), showing an overall effect favouring treatment with GCS (P < 0.001). Based on results from eight included studies, the incidence of proximal DVT was 7 of 517 (1%) units in the GCS group and 28 of 518 (5%) units in the control group. The Peto OR was 0.26 (95% CI 0.13 to 0.53; 8 studies; 1035 units; moderate-quality evidence) with an overall effect favouring treatment with GCS (P < 0.001). Combining results from five studies, all based on surgical patients, the incidence of PE was 5 of 283 (2%) participants in the GCS group and 14 of 286 (5%) in the control group. The Peto OR was 0.38 (95% CI 0.15 to 0.96; 5 studies; 569 participants; low-quality evidence) with an overall effect favouring treatment with GCS (P = 0.04).
We downgraded the quality of the evidence for proximal DVT and PE due to low event rate (imprecision) and lack of routine screening for PE (inconsistency). We carried out subgroup analysis by speciality (surgical or medical patients). Combining results from 19 trials focusing on surgical patients, 134 of 1365 (9.8%) units developed DVT in the GCS group compared to 282 of 1328 (21.2%) units in the control group. The Peto OR was 0.35 (95% CI 0.28 to 0.44; high-quality evidence), with an overall effect favouring treatment with GCS (P < 0.001). Based on results from seven included studies, the incidence of proximal DVT was 7 of 437 units (1.6%) in the GCS group and 28 of 438 (6.4%) in the control group. The Peto OR was 0.26 (95% CI 0.13 to 0.53; 875 units; moderate-quality evidence) with an overall effect favouring treatment with GCS (P < 0.001). We downgraded the evidence for proximal DVT due to low event rate (imprecision). Based on the results from one trial focusing on medical patients admitted following acute myocardial infarction, 0 of 80 (0%) legs developed DVT in the GCS group and 8 of 80 (10%) legs developed DVT in the control group. The Peto OR was 0.12 (95% CI 0.03 to 0.51; low-quality evidence) with an overall effect favouring treatment with GCS (P = 0.004). None of the medical patients in either group developed a proximal DVT, and the incidence of PE was not reported. Limited data were available to accurately assess the incidence of adverse effects and complications with the use of GCS as these were not routinely quantitatively reported in the included studies. There is high-quality evidence that GCS are effective in reducing the risk of DVT in hospitalised patients who have undergone general and orthopaedic surgery, with or without other methods of background thromboprophylaxis, where clinically appropriate. There is moderate-quality evidence that GCS probably reduce the risk of proximal DVT, and low-quality evidence that GCS may reduce the risk of PE.
However, there remains a paucity of evidence to assess the effectiveness of GCS in diminishing the risk of DVT in medical patients.
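The review's pooled Peto odds ratio (0.35 for DVT) comes from stratified, per-study pooling. As an illustration only, the O−E/V form of the Peto method can be sketched over the combined totals reported above; this single-table collapse is an assumption for demonstration and is not the review's actual meta-analytic computation, so it only approximates the published value.

```python
import math

def peto_odds_ratio(events_trt, total_trt, events_ctl, total_ctl):
    """Approximate Peto odds ratio and 95% CI for a single 2x2 table.

    OR = exp((O - E) / V), where O is the observed number of events in
    the treatment arm, E its expectation under the null hypothesis, and
    V the hypergeometric variance of O given the table margins.
    """
    n = total_trt + total_ctl
    events = events_trt + events_ctl
    expected = total_trt * events / n
    variance = (total_trt * total_ctl * events * (n - events)) / (n * n * (n - 1))
    log_or = (events_trt - expected) / variance
    se = 1 / math.sqrt(variance)
    odds_ratio = math.exp(log_or)
    ci = (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))
    return odds_ratio, ci

# Totals pooled across the 20 trials: 134/1445 DVT with GCS vs 290/1408 without.
or_, (lo, hi) = peto_odds_ratio(134, 1445, 290, 1408)
```

Collapsing all trials into one table gives an OR around 0.41, somewhat above the stratified estimate of 0.35, which is exactly why meta-analyses pool O−E and V per study rather than summing raw counts.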
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Objective: To assess the cognitive level of first aid knowledge regarding small area burn among child caregivers in Shanghai and to improve the level of first aid for small area burn in children. Methods: From November 2017 to March 2018, 7 municipal districts in Shanghai were selected according to the random number table, from which 2 750 students of 4 nurseries, 5 kindergartens, 6 primary schools, and 2 junior middle schools were selected by adopting the convenience sampling method. Each student was limited to one caregiver as the research object. A cross-sectional survey was conducted on the cognitive level of first aid knowledge regarding small area burn among the caregivers with a self-designed questionnaire distributed through WeChat and Tencent QQ. The age, burn experience, and scarring after burns in children, the prevalence rate of burn in children of different age groups, the educational background of caregivers and their social relationship with their children, and the measures first taken by caregivers after a small area burn occurred in their children were recorded. The choices of applying folk prescription drugs to the wounds of their children made by caregivers overall and by those with different educational backgrounds were recorded. The choices of applying daily necessities to the wounds of their children made by caregivers were recorded. The caregivers' knowledge of standard first aid measures for small area burn, and the knowledge of all standard first aid measures for small area burn among caregivers with different educational backgrounds, were recorded. The caregivers' choices of hospitals for first-time treatment, and the choices of going to a Grade Ⅲ Level A hospital with burn specialty for treatment made by caregivers with different knowledge levels about first aid measures for small area burn and by caregivers whose children did or did not have burn experience, were recorded.
The caregivers' choices of different types of medical institutions with burn specialty or specialized in burn treatment, and the choices of going to the burn department of a comprehensive Grade Ⅲ Level A hospital made by caregivers with different knowledge levels about first aid measures for small area burn, were recorded. Data were processed with the Pearson chi-square test and partitions of chi-square test. Results: The effective recovery rate of the questionnaire was 99.0% (2 723/2 750). The ages of children were mainly 6-11 years [64.7% (1 762/2 723)]. The prevalence of burn in children was 19.4% (527/2 723). There was no statistically significant difference in the overall comparison of burn prevalence of children among the age groups (χ²=1.424, P>0.05). The percentage of scar formation after burn in children was 27.3% (144/527). The education backgrounds of caregivers were mainly undergraduate [40.2% (1 094/2 723)], and their social relationships with children were mainly children's mothers [74.6% (2 030/2 723)]. Assuming that their children suffered minor burns, the measure first taken by 74.0% (2 016/2 723) of the caregivers was to immediately apply cool running water and remove clothing from the wound. Totally 19.2% (523/2 723) of the caregivers chose to apply folk prescription drugs to their burned children by themselves, and the percentage of caregivers with an education background of junior middle school doing so was significantly higher than that of caregivers with an education background of junior college, undergraduate, or graduate (χ²=18.502, 20.642, 13.319, P<0.05). Totally 49.2% (1 340/2 723) of caregivers chose to daub various daily necessities on their burned children by themselves.
Totally 39.2% (1 068/2 723) of caregivers knew all standard first aid measures for small area burn; the percentage of caregivers with an undergraduate education background knowing all standard first aid measures was significantly higher than that of caregivers with an education background of senior high school or secondary specialized school (χ²=11.234, P<0.05). Assuming that their children suffered minor burns, 39.0% (1 063/2 723) of the caregivers chose to go to the nearest hospital for first-time treatment; the percentage of caregivers who knew all standard first aid measures choosing to go to a Grade Ⅲ Level A hospital with burn specialty for first-time treatment was similar to that of caregivers who did not know or did not fully know them (χ²=3.528, P>0.05), and the percentage of caregivers whose children had burn experience choosing to go to a Grade Ⅲ Level A hospital with burn specialty for first-time treatment was similar to that of caregivers whose children did not (χ²=3.521, P>0.05). Among all medical institutions with burn specialty or specialized in burn treatment, 28.0% (762/2 723) of the caregivers chose to go to a comprehensive Grade Ⅲ Level A hospital for treatment, and the percentage of caregivers who knew all standard first aid measures choosing to do so was significantly higher than that of caregivers who did not know or did not fully know them (χ²=4.890, P<0.05). Conclusions: The caregivers of children in Shanghai are mainly the children's mothers with an undergraduate education background, and caregivers' cognitive levels of first aid knowledge regarding small area burn are low.
Only a few caregivers know all standard first aid measures for small area burn, and some caregivers still hold the mistaken idea of applying folk prescription drugs or daily necessities to their children's wounds by themselves. Publicity and education on basic burn first aid knowledge should be strengthened through various channels, such as burn simulation exercises and the internet, and caregivers should be guided to take their children to hospitals with burn specialty after a burn occurs, so as to obtain more professional medical treatment.
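The group comparisons above rest on Pearson's chi-square test for 2×2 tables. The abstract reports only the χ² values, not the underlying cell counts, so the counts below are hypothetical; this is a minimal sketch of the statistic itself.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]] (rows = groups, columns = yes/no)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 30/100 vs 10/100 caregivers answering "yes".
chi2 = chi_square_2x2(30, 70, 10, 90)
# At 1 degree of freedom, chi2 > 3.84 corresponds to P < 0.05.
```

This is why, for example, χ²=4.890 in the abstract is reported as P<0.05 while χ²=3.528 is not: the 5% critical value at 1 df is 3.84.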
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
The purpose of this American Gastroenterological Association (AGA) Institute Clinical Practice Update is to review the available evidence and expert recommendations regarding the clinical care of patients with pancreatic necrosis and to offer concise best practice advice for the optimal management of patients with this highly morbid condition. This expert review was commissioned and approved by the AGA Institute Clinical Practice Updates Committee and the AGA Governing Board to provide timely guidance on a topic of high clinical importance to the AGA membership, and underwent internal peer review by the Clinical Practice Updates Committee and external peer review through standard procedures of Gastroenterology. This review is framed around the 15 best practice advice points agreed upon by the authors, which reflect landmark and recent published articles in this field. This expert review also reflects the experiences of the authors, who are advanced endoscopists or hepatopancreatobiliary surgeons with extensive experience in managing and teaching others to care for patients with pancreatic necrosis. BEST PRACTICE ADVICE 1: Pancreatic necrosis is associated with substantial morbidity and mortality and optimal management requires a multidisciplinary approach, including gastroenterologists, surgeons, interventional radiologists, and specialists in critical care medicine, infectious disease, and nutrition. In situations where clinical expertise may be limited, consideration should be given to transferring patients with significant pancreatic necrosis to an appropriate tertiary-care center. BEST PRACTICE ADVICE 2: Antimicrobial therapy is best indicated for culture-proven infection in pancreatic necrosis or when infection is strongly suspected (ie, gas in the collection, bacteremia, sepsis, or clinical deterioration). Routine use of prophylactic antibiotics to prevent infection of sterile necrosis is not recommended. 
BEST PRACTICE ADVICE 3: When infected necrosis is suspected, broad-spectrum intravenous antibiotics with ability to penetrate pancreatic necrosis should be favored (eg, carbapenems, quinolones, and metronidazole). Routine use of antifungal agents is not recommended. Computed tomography-guided fine-needle aspiration for Gram stain and cultures is unnecessary in the majority of cases. BEST PRACTICE ADVICE 4: In patients with pancreatic necrosis, enteral feeding should be initiated early to decrease the risk of infected necrosis. A trial of oral nutrition is recommended immediately in patients in whom there is absence of nausea and vomiting and no signs of severe ileus or gastrointestinal luminal obstruction. When oral nutrition is not feasible, enteral nutrition by either nasogastric/duodenal or nasojejunal tube should be initiated as soon as possible. Total parenteral nutrition should be considered only in cases where oral or enteral feeds are not feasible or tolerated. BEST PRACTICE ADVICE 5: Drainage and/or debridement of pancreatic necrosis is indicated in patients with infected necrosis. Drainage and/or debridement may be required in patients with sterile pancreatic necrosis and persistent unwellness marked by abdominal pain, nausea, vomiting, and nutritional failure or with associated complications, including gastrointestinal luminal obstruction; biliary obstruction; recurrent acute pancreatitis; fistulas; or persistent systemic inflammatory response syndrome. BEST PRACTICE ADVICE 6: Pancreatic debridement should be avoided in the early, acute period (first 2 weeks), as it has been associated with increased morbidity and mortality. Debridement should be optimally delayed for 4 weeks and performed earlier only when there is an organized collection and a strong indication. 
BEST PRACTICE ADVICE 7: Percutaneous drainage and transmural endoscopic drainage are both appropriate first-line, nonsurgical approaches in managing patients with walled-off pancreatic necrosis (WON). Endoscopic therapy through transmural drainage of WON may be preferred, as it avoids the risk of forming a pancreatocutaneous fistula. BEST PRACTICE ADVICE 8: Percutaneous drainage of pancreatic necrosis should be considered in patients with infected or symptomatic necrotic collections in the early, acute period (<2 weeks), and in those with WON who are too ill to undergo endoscopic or surgical intervention. Percutaneous drainage should be strongly considered as an adjunct to endoscopic drainage for WON with deep extension into the paracolic gutters and pelvis or for salvage therapy after endoscopic or surgical debridement with residual necrosis burden. BEST PRACTICE ADVICE 9: Self-expanding metal stents in the form of lumen-apposing metal stents appear to be superior to plastic stents for endoscopic transmural drainage of necrosis. BEST PRACTICE ADVICE 10: The use of direct endoscopic necrosectomy should be reserved for those patients with limited necrosis who do not adequately respond to endoscopic transmural drainage using large-bore, self-expanding metal stents/lumen-apposing metal stents alone or plastic stents combined with irrigation. Direct endoscopic necrosectomy is a therapeutic option in patients with large amounts of infected necrosis, but should be performed at referral centers with the necessary endoscopic expertise and interventional radiology and surgical backup. BEST PRACTICE ADVICE 11: Minimally invasive operative approaches to the debridement of acute necrotizing pancreatitis are preferred to open surgical necrosectomy when possible, given lower morbidity.
BEST PRACTICE ADVICE 12: Multiple minimally invasive surgical techniques are feasible and effective, including videoscopic-assisted retroperitoneal debridement, laparoscopic transgastric debridement, and open transgastric debridement. Selection of approach is best determined by pattern of disease, physiology of the patient, experience and expertise of the multidisciplinary team, and available resources. BEST PRACTICE ADVICE 13: Open operative debridement maintains a role in the modern management of acute necrotizing pancreatitis in cases not amenable to less invasive endoscopic and/or surgical procedures. BEST PRACTICE ADVICE 14: For patients with disconnected left pancreatic remnant after acute necrotizing mid-body necrosis, definitive surgical management with distal pancreatectomy should be undertaken in patients with reasonable operative candidacy. Insufficient evidence exists to support the management of the disconnected left pancreatic remnant with long-term transenteric endoscopic stenting. BEST PRACTICE ADVICE 15: A step-up approach consisting of percutaneous drainage or endoscopic transmural drainage using either plastic stents and irrigation or self-expanding metal stents/lumen-apposing metal stents alone, followed by direct endoscopic necrosectomy, and then surgical debridement is reasonable, although approaches may vary based on the available clinical expertise.
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
To identify the clinical correlations between plasma growth differentiation factor-15 (GDF-15), skeletal muscle function, and acute muscle wasting in ICU patients with mechanical ventilation. In addition, to investigate its diagnostic value for ICU-acquired weakness (ICU-AW) and its predictive value for 90-day survival in mechanically ventilated patients. 95 patients with acute respiratory failure, who required mechanical ventilation therapy, were randomly selected among hospitalized patients from June 2017 to January 2019. The plasma GDF-15 level was detected by ELISA, the rectus femoris cross-sectional area (RFcsa) was measured by ultrasound, and the patient's muscle strength was assessed using the British Medical Research Council (MRC) muscle strength score on day 1, day 4, and day 7. Patients were divided into an ICU-AW group and a non-ICU-AW group according to their MRC-score on the 7th day. The differences in plasma GDF-15 level, MRC-score, and RFcsa between the two groups were compared on the 1st, 4th, and 7th day after being admitted to the ICU. Then, the correlations between plasma GDF-15 level, RFcsa loss, and MRC-score on day 7 were investigated. The receiver operating characteristic curve (ROC) was used to analyze the plasma GDF-15 level, RFcsa loss, and % decrease in RFcsa on the 7th day for the diagnosis of ICU-AW in mechanically ventilated patients. Moreover, the predictive value of GDF-15 for the 90-day survival status of patients was assessed using patient survival curves. Based on whether the 7th day MRC-score was <48, 50 cases were included in the ICU-AW group and 45 cases in the non-ICU-AW group. The length of mechanical ventilation, ICU length of stay, and hospital length of stay were significantly longer in the ICU-AW group than in the non-ICU-AW group (all P < 0.05), while the other baseline indicators were not statistically significant between the two groups.
As the treatment time increased, the plasma GDF-15 level increased significantly, and the ICU-AW group demonstrated a significant decreasing trend in the MRC-score and RFcsa, while no significant changes were found in the non-ICU-AW group. On day 7, the plasma GDF-15 level in the ICU-AW group was significantly higher than that in the non-ICU-AW group, while the RFcsa and the MRC-score were significantly lower (GDF-15 (pg/ml): 2542.44 ± 629.38 vs. 1542.86 ± 502.86; RFcsa (cm²): 2.04 ± 0.64 vs. 2.34 ± 0.61; MRC-score: 41.22 ± 3.42 vs. 51.42 ± 2.72; all P < 0.05). The day-7 plasma GDF-15 level was significantly negatively correlated with the MRC-score (r = -0.60), while it was significantly positively correlated with the RFcsa loss (P < 0.05). The plasma GDF-15 concentration was significantly associated with skeletal muscle function and muscle wasting on day 7 in ICU patients with mechanical ventilation.
Therefore, it can be concluded that the plasma GDF-15 level on the 7th day has a high diagnostic yield for ICU-acquired muscle weakness, and it can predict the 90-day survival status of ICU mechanically ventilated patients.
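The diagnostic yield claimed above comes from ROC analysis of the day-7 GDF-15 level. A minimal rank-based sketch of the area under the ROC curve (the Mann-Whitney formulation) is shown below; the GDF-15 values are hypothetical, not study data.

```python
def roc_auc(pos_scores, neg_scores):
    """AUC as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative case (ties count 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical day-7 plasma GDF-15 levels (pg/ml): ICU-AW vs non-ICU-AW.
auc = roc_auc([2600, 2400, 3100, 2200], [1500, 1700, 1400, 2300])
```

An AUC of 0.5 means the marker is no better than chance at separating ICU-AW from non-ICU-AW patients, while values approaching 1.0 indicate high diagnostic discrimination.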
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Vitamin and mineral deficiencies, particularly those of iron, vitamin A, and zinc, affect more than two billion people worldwide. Young children are highly vulnerable because of rapid growth and inadequate dietary practices. Multiple micronutrient powders (MNPs) are single-dose packets containing multiple vitamins and minerals in powder form, which are mixed into any semi-solid food for children six months of age or older. The use of MNPs for home or point-of-use fortification of complementary foods has been proposed as an intervention for improving micronutrient intake in children under two years of age. In 2014, MNP interventions were implemented in 43 countries and reached over three million children. This review updates a previous Cochrane Review, which has become out-of-date. To assess the effects and safety of home (point-of-use) fortification of foods with MNPs on nutrition, health, and developmental outcomes in children under two years of age. For the purposes of this review, home fortification with MNP refers to the addition of powders containing vitamins and minerals to semi-solid foods immediately before consumption. This can be done at home or at any other place that meals are consumed (e.g. schools, refugee camps). For this reason, MNPs are also referred to as point-of-use fortification. We searched the following databases up to July 2019: CENTRAL, MEDLINE, Embase, and eight other databases. We also searched four trials registers, contacted relevant organisations and authors of included studies to identify any ongoing or unpublished studies, and searched the reference lists of included studies. We included randomised controlled trials (RCTs) and quasi-RCTs with individual randomisation or cluster-randomisation. Participants were infants and young children aged 6 to 23 months at the time of intervention, with no identified specific health problems. 
The intervention consisted of consumption of food fortified at the point of use with MNP formulated with at least iron, zinc, and vitamin A, compared with placebo, no intervention, or use of iron-containing supplements, which is standard practice. Two review authors independently assessed the eligibility of studies against the inclusion criteria, extracted data from included studies, and assessed the risk of bias of included studies. We reported categorical outcomes as risk ratios (RRs) or odds ratios (ORs), with 95% confidence intervals (CIs), and continuous outcomes as mean differences (MDs) and 95% CIs. We used the GRADE approach to assess the certainty of evidence. We included 29 studies (33,147 children) conducted in low- and middle-income countries in Asia, Africa, Latin America, and the Caribbean, where anaemia is a public health problem. Twenty-six studies with 27,051 children contributed data. The interventions lasted between 2 and 44 months, and the powder formulations contained between 5 and 22 nutrients. Among the 26 studies contributing data, 24 studies (26,486 children) compared the use of MNP versus no intervention or placebo; the two remaining studies compared the use of MNP versus an iron-only supplement (iron drops) given daily. The main outcomes of interest were related to anaemia and iron status. We assessed most of the included studies at low risk of selection and attrition bias. We considered some studies to be at high risk of performance and detection bias due to lack of blinding. Most studies were funded by government programmes or foundations; only two were funded by industry. Home fortification with MNP, compared with no intervention or placebo, reduced the risk of anaemia in infants and young children by 18% (RR 0.82, 95% CI 0.76 to 0.90; 16 studies; 9927 children; moderate-certainty evidence) and iron deficiency by 53% (RR 0.47, 95% CI 0.39 to 0.56; 7 studies; 1634 children; high-certainty evidence). 
Children receiving MNP had higher haemoglobin concentrations (MD 2.74 g/L, 95% CI 1.95 to 3.53; 20 studies; 10,509 children; low-certainty evidence) and higher iron status (MD 12.93 μg/L, 95% CI 7.41 to 18.45; 7 studies; 2612 children; moderate-certainty evidence) at follow-up compared with children receiving the control intervention. We did not find an effect on weight-for-age (MD 0.02, 95% CI -0.03 to 0.07; 10 studies; 9287 children; moderate-certainty evidence). Few studies reported morbidity outcomes (three to five studies each outcome) and definitions varied, but MNP did not increase diarrhoea, upper respiratory infection, malaria, or all-cause morbidity. In comparison with daily iron supplementation, the use of MNP produced similar results for anaemia (RR 0.89, 95% CI 0.58 to 1.39; 1 study; 145 children; low-certainty evidence) and haemoglobin concentrations (MD -2.81 g/L, 95% CI -10.84 to 5.22; 2 studies; 278 children; very low-certainty evidence) but less diarrhoea (RR 0.52, 95% CI 0.38 to 0.72; 1 study; 262 children; low-certainty of evidence). However, given the limited quantity of data, these results should be interpreted cautiously. Reporting of death was infrequent, although no trials reported deaths attributable to the intervention. Information on side effects and morbidity, including malaria and diarrhoea, was scarce. It appears that use of MNP is efficacious among infants and young children aged 6 to 23 months who are living in settings with different prevalences of anaemia and malaria endemicity, regardless of intervention duration. MNP intake adherence was variable and in some cases comparable to that achieved in infants and young children receiving standard iron supplements as drops or syrups. Home fortification of foods with MNP is an effective intervention for reducing anaemia and iron deficiency in children younger than two years of age. 
Providing MNP is better than providing no intervention or placebo and may be comparable to using daily iron supplementation. The benefits of this intervention as a child survival strategy or for developmental outcomes are unclear. Further investigation of morbidity outcomes, including malaria and diarrhoea, is needed.
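The effect sizes above are risk ratios with 95% confidence intervals computed on the log scale. A minimal sketch of that computation follows; the counts are hypothetical (chosen so the RR matches the review's 0.82, i.e. the 18% reduction in anaemia risk), not trial data.

```python
import math

def risk_ratio(events_trt, n_trt, events_ctl, n_ctl):
    """Risk ratio with a 95% CI computed on the log scale."""
    rr = (events_trt / n_trt) / (events_ctl / n_ctl)
    # Standard error of log(RR) for two independent proportions.
    se = math.sqrt(1 / events_trt - 1 / n_trt + 1 / events_ctl - 1 / n_ctl)
    return rr, (rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se))

# Hypothetical counts: 82 anaemic of 500 with MNP vs 100 of 500 controls.
rr, (lo, hi) = risk_ratio(82, 500, 100, 500)
```

Note that with these small hypothetical samples the interval crosses 1.0; the review's much narrower CI (0.76 to 0.90) reflects pooling nearly 10,000 children across 16 studies.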
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
The greatest prevalence of asthma is in preschool children; however, the clinical utility of asthma therapy for this age group is limited by a narrow therapeutic index, long-term tolerability, and frequency and/or difficulty of administration. Inhaled corticosteroids and inhaled cromolyn are the most commonly prescribed controller therapies for young children with persistent asthma, although very young patients may have difficulty using inhalers, and dose delivery can be variable. Moreover, reduced compliance with inhaled therapy relative to orally administered therapy has been reported. One potential advantage of montelukast is the ease of administering a once-daily chewable tablet; additionally, no tachyphylaxis or change in the safety profile has been evidenced after up to 140 and 80 weeks of montelukast therapy in adults and pediatric patients aged 6 to 14 years, respectively. To our knowledge, this represents the first large, multicenter study to address the effects of a leukotriene receptor antagonist in children younger than 5 years of age with persistent asthma, as well as one of the few asthma studies that incorporated end points validated for use in preschool children. Our primary objective was to determine the safety profile of montelukast, an oral leukotriene receptor antagonist, in preschool children with persistent asthma. Secondarily, the effect of montelukast on exploratory measures of asthma control was also studied. DESIGN AND STATISTICAL ANALYSIS: We conducted a double-blind, multicenter, multinational study at 93 centers worldwide, including 56 in the United States and 21 in countries in Africa, Australia, Europe, North America, and South America. In this study, we randomly assigned 689 patients (aged 2-5 years) to 12 weeks of treatment with placebo (228 patients) or 4 mg of montelukast as a chewable tablet (461 patients) after a 2-week placebo baseline period.
Patients had a history of physician-diagnosed asthma requiring use of beta-agonist and a predefined level of daytime asthma symptoms. Caregivers answered questions twice daily on a validated, asthma-specific diary card and, at specified times during the study, completed a validated asthma-specific quality-of-life questionnaire. Physicians and caregivers completed a global evaluation of asthma control at the end of the study. Efficacy end points included: daytime and overnight asthma symptoms, daily use of beta-agonist, days without asthma, frequency of asthma attacks, number of patients discontinued because of asthma, need for rescue medication, physician and caregiver global evaluations of change, asthma-specific caregiver quality of life, and peripheral blood eosinophil counts. Although exploratory, the efficacy end points were predefined and their analyses were written in a data analysis plan before study unblinding. At screening and at study completion, a complete physical examination was performed. Routine laboratory tests were drawn at screening and weeks 6 and 12, and submitted to a central laboratory for analysis. Adverse effects were collected from caregivers at each clinic visit. An intention-to-treat approach, including all patients with a baseline measurement and at least 1 postrandomization measurement, was performed for all efficacy end points. An analysis-of-variance model with terms for treatment, study center and stratum (inhaled/nebulized corticosteroid use, cromolyn use, or none) was used to estimate treatment group means and between-group differences and to construct 95% confidence intervals. Treatment-by-age, -sex, -race, -radioallergosorbent test, -stratum, and -study center interactions were evaluated by including each term separately. 
Fisher's exact test was used for between-group comparisons of the frequency of asthma attacks, discontinuations from the study because of worsening asthma, need for rescue medication, and the frequencies of adverse effects. Because of an imbalance in baseline values for eosinophil counts for the 2 treatment groups, an analysis of covariance was performed on the eosinophil change from baseline with the patient's baseline as covariate. Of the 689 patients enrolled, approximately 60% were boys and 60% were white. Patients were relatively evenly divided by age: 21%, 24%, 30%, and 23% were aged 2, 3, 4, and 5 years, respectively. For 77% of the patients, asthma symptoms first developed during the first 3 years of life. During the placebo baseline period, patients had asthma symptoms on 6.1 days/week and used beta-agonist on 6.0 days/week. Over 12 weeks of treatment in patients aged 2 to 5 years, montelukast administered as a 4-mg chewable tablet produced significant improvements compared with placebo in multiple parameters of asthma control, including: daytime asthma symptoms (cough, wheeze, trouble breathing, and activity limitation); overnight asthma symptoms (cough); the percentage of days with asthma symptoms; the percentage of days without asthma; the need for beta-agonist or oral corticosteroids; physician global evaluations; and peripheral blood eosinophils. The clinical benefit of montelukast was evident within 1 day of starting therapy. Improvements in asthma control were consistent across age, sex, race, and study center, and whether or not patients had a positive radioallergosorbent test. Montelukast demonstrated a consistent effect regardless of concomitant use of inhaled/nebulized corticosteroid or cromolyn therapy. Caregiver global evaluations, the percentage of patients experiencing asthma attacks, and improvements in quality-of-life scores favored montelukast, but were not significantly different from placebo.
There were no clinically meaningful differences between treatment groups in overall frequency of adverse effects or of individual adverse effects, with the exception of asthma, which occurred significantly more frequently in the placebo group. There were no significant differences between treatment groups in the frequency of laboratory adverse effects or in the frequency of elevated serum transaminase levels. Approximately 90% of the patients completed the study. Oral montelukast (4-mg chewable tablet) administered once daily is effective therapy for asthma in children aged 2 to 5 years and is generally well tolerated without clinically important adverse effects. Similarly, in adults and children aged 6 to 14 years, montelukast improves multiple parameters of asthma control. Thus, this study confirms and extends the benefit of montelukast to younger children with persistent asthma.
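The study above analyzed frequency outcomes (asthma attacks, discontinuations, need for rescue medication, adverse effects) with Fisher's exact test. As an illustration of how that test works on a 2x2 table (this is not the study's actual analysis code, and the example table is hypothetical), a minimal pure-Python sketch:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]].

    With all margins fixed, each possible table follows a hypergeometric
    distribution; the two-sided p-value sums the probabilities of every
    table no more probable than the observed one.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    denom = comb(n, col1)

    def p_table(x):
        # probability of observing x events in the first row, margins fixed
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # the small relative tolerance guards against floating-point ties
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))
```

For the hypothetical table [[8, 2], [1, 5]] this returns a two-sided p-value of about 0.035, matching standard statistical software.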
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Furfural is used as a precursor for the manufacture of furan, furfuryl alcohol, tetrahydrofuran, and their derivatives and as an industrial solvent. Furfural is also present in numerous processed food and beverage products. NTP Toxicology and Carcinogenesis studies were conducted by administering furfural (99% pure) in corn oil by gavage to groups of F344/N rats and B6C3F1 mice of each sex for 16 days, 13 weeks, or 2 years. Genetic toxicology studies were conducted in Salmonella typhimurium, mouse lymphoma cells, Chinese hamster ovary (CHO) cells, Drosophila melanogaster, and mouse bone marrow cells. Sixteen-Day Studies: Rats received doses ranging from 15 to 240 mg/kg, and mice received doses from 25 to 400 mg/kg. Eight of 10 rats that received 240 mg/kg died within 3 days. Final mean body weights of chemically exposed animals were similar to those of vehicle controls; no compound-related histologic lesions were observed in any dosed groups. Thirteen-Week Studies: Rats received doses ranging from 11 to 180 mg/kg, and mice received doses from 75 to 1,200 mg/kg. Most rats that received 180 mg/kg died; mean body weights of chemically exposed rats were similar to those of vehicle controls. Mean relative and absolute liver and kidney weights were increased in male rats that received 90 mg/kg, and cytoplasmic vacuolization of hepatocytes was increased in chemically exposed male rats. Almost all mice that received doses of 600 or 1,200 mg/kg died within the first 3 weeks. Mean body weights of chemically exposed mice were similar to those of vehicle controls throughout the studies. Mean absolute liver weights and liver weight to body weight ratios were increased in females that received 300 mg/kg. Centrilobular coagulative necrosis and/or multifocal subchronic inflammation of the liver were present in chemically exposed mice but not in vehicle control mice. 
Based on these results, doses selected for the 2-year studies were 0, 30, and 60 mg/kg for rats and 0, 50, 100, and 175 mg/kg for mice. Body Weight and Survival in the Two-Year Studies: Mean body weights of chemically exposed and vehicle control animals were similar throughout the studies for rats and mice. Two-year survival of male rats, low dose female rats, and mice was unaffected by chemical exposure (male rats: vehicle control, 31/50; low dose, 28/50; high dose, 24/50; female rats: 28/50; 32/50; 18/50; male mice: vehicle control, 35/50; low dose, 28/50; mid dose, 24/50; high dose, 27/50; female mice: 33/50; 28/50; 29/50; 32/50). Survival of high dose female rats was reduced by deaths associated with gavage administration; the administration of furfural was considered to be a contributing factor in these gavage-related deaths. Nonneoplastic and Neoplastic Effects in the Two-Year Studies: Centrilobular necrosis of the liver occurred at increased incidences in chemically exposed male rats (vehicle control, 3/50; low dose, 9/50; high dose, 12/50). Two high dose male rats had bile duct dysplasia with fibrosis, and two had cholangiocarcinomas; neither lesion was seen in the other dose groups. The historical incidence of bile duct neoplasms in corn oil vehicle control male rats is 3/2,145 (0.1%). Multifocal pigmentation and chronic inflammation of the subserosa of the liver occurred in chemically exposed mice (pigmentation--male: 0/50; 0/50; 8/49; 18/50; female: 0/50; 0/50; 0/50; 11/50; chronic inflammation--male: 0/50; 0/50; 8/49; 18/50; female: 0/50; 0/50; 1/50; 8/50).
The incidences of hepatocellular adenomas and hepatocellular carcinomas in male mice and hepatocellular adenomas in female mice were significantly increased in the high dose group compared with those in the vehicle controls (male--adenomas: 9/50; 13/50; 11/49; 19/50; carcinomas: 7/50; 12/50; 6/49; 21/50; female--adenomas: 1/50; 3/50; 5/50; 8/50; adenomas or carcinomas, combined: 5/50; 3/50; 7/50; 12/50). Three renal cortical adenomas or carcinomas occurred in chemically exposed male mice (0/50; 1/50; 1/49; 1/50), and a renal cortical adenoma was present in one low dose female mouse; the historical incidence of renal cortical neoplasms in National Toxicology Program 2-year corn oil gavage studies in male B6C3F1 mice is 8/2,183. Forestomach hyperplasia occurred in chemically exposed female mice, and squamous cell papillomas were increased in high dose female mice (hyperplasia: 0/50; 5/50; 5/50; 3/50; papillomas: 1/50; 0/50; 1/50; 6/50). Genetic Toxicology: In gene mutation tests with four strains of Salmonella (TA98, TA100, TA1535, and TA1537), no mutagenic activity was observed in the presence or absence of exogenous metabolic activation (S9) in one laboratory, and an equivocal response was observed in TA100 in the absence of S9 in a second laboratory. Exposure to furfural induced trifluorothymidine resistance in mouse L5178Y lymphoma cells in the absence of S9 (no evaluation was made in the presence of S9), sister chromatid exchanges (SCEs) and chromosomal aberrations in CHO cells in the presence or absence of S9, and an increase in sex-linked recessive lethal mutations but no reciprocal translocations in germ cells of D. melanogaster; furfural did not induce SCEs or chromosomal aberrations in the bone marrow of B6C3F1 mice.
Conclusions: Under the conditions of these 2-year gavage studies, there was some evidence of carcinogenic activity of furfural for male F344/N rats based on the occurrence of uncommon cholangiocarcinomas in two animals and bile duct dysplasia with fibrosis in two other animals. There was no evidence of carcinogenic activity for female F344/N rats that received doses of 0, 30, or 60 mg/kg furfural. There was clear evidence of carcinogenic activity for male B6C3F1 mice, based on increased incidences of hepatocellular adenomas and hepatocellular carcinomas. There was some evidence of carcinogenic activity in female B6C3F1 mice, based on increased incidences of hepatocellular adenomas. Renal cortical adenomas or carcinomas in male mice and squamous cell papillomas of the forestomach in female mice may have been related to exposure to furfural. Synonyms: 2-furancarboxaldehyde; 2-furaldehyde; pyromucic aldehyde Common Name: Artificial oil of ants
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Penicillin VK, a widely used antibiotic for treatment of gram-positive coccal infections, was nominated for study by the National Cancer Institute because rodent carcinogenicity studies for this drug had not been performed. The chemical (94% or 98% pure, USP grade) was administered orally (by gavage in corn oil) because oral administration is the primary route used to treat infections in humans. Fourteen-day, 13-week, and 2-year studies were conducted in F344/N rats and B6C3F1 mice. Additional studies were performed to evaluate the potential for genetic damage in bacteria and mammalian cells. Fourteen-Day and Thirteen-Week Studies: In the 14-day studies, penicillin VK was administered at doses of 150-2,400 mg/kg. No compound-related deaths or dose-related histopathologic lesions were seen in rats or mice. Final mean body weights of dosed male rats were 5%-17% lower than that of controls; weights of dosed and control female rats were comparable. Final mean body weights of dosed mice were 5%-9% lower than those of controls. Diarrhea was observed in all dosed groups of rats and mice. In the 13-week studies, male and female rats received doses of 180-3,000 mg/kg and male and female mice received doses of 250-3,000 mg/kg. No compound-related deaths were seen in rats or mice. Final mean body weights of rats that received 3,000 mg/kg were 11% lower than those of the vehicle controls for males and 6% lower for females. For mice, mean body weights were comparable. Diarrhea occurred in male rats at doses of 750 mg/kg and above and in female rats at doses of 1,500 and 3,000 mg/kg. Mucous cell metaplasia of the glandular stomach was observed in male and female rats receiving 1,500 and 3,000 mg/kg. Lesions of the glandular stomach (inflammation, mucous cell metaplasia, and eosinophilic cytoplasmic change) and the forestomach (papillary hyperplasia and hyperkeratosis) were seen in all groups of dosed mice. The severity of lesions at 1,000 mg/kg or below was considered minimal. 
Based on these results, doses selected for rats and mice in the 2-year studies were 0, 500, or 1,000 mg/kg. Body Weight and Survival in the Two-Year Studies: Mean body weights of dosed and vehicle control male and female rats and male mice were comparable. Mean body weights of dosed female mice were 4%-16% lower than those of the vehicle controls from week 28 to the end of the study. Diarrhea was observed for dosed male and female rats and for dosed male mice. Survival of low and high dose male rats and high dose female rats was reduced (male rats: vehicle control, 34/50; low dose, 19/50; high dose, 16/50; female rats: 29/50; 26/50; 16/50). Survival of male and female mice was comparable to that of the vehicle controls (male mice: 24/50; 36/50; 26/50; female mice: 36/50; 32/50; 32/50). Nonneoplastic and Neoplastic Effects in the Two-Year Studies: Nonneoplastic lesions occurred at low incidences in the nasal mucosa, lung, and forestomach of dosed male rats and in the nasal mucosa and lung of dosed female rats. Congestion and aspiration pneumonia occurring in dosed rats dying before week 104 were the principal causes of death in these animals. Nonneoplastic lesions of the gastric fundal gland (eosinophilic cytoplasmic change and dilatation) and glandular stomach (cyst, chronic focal inflammation, hyperplasia, fibrosis, and squamous metaplasia) were seen in dosed male and female mice, and lesions of the gallbladder (eosinophilic cytoplasmic change) were seen in male mice. Slight increases in the incidences of adenomas of the pituitary gland in high dose male rats and of fibroadenomas or adenomas (combined) of the mammary gland in low dose female rats were observed. These were not considered to be compound-related lesions. The incidence of hepatocellular adenomas was decreased in high dose male mice (14/50; 15/49; 4/49). No compound-related neoplasms were seen in female mice.
Genetic Toxicology: Penicillin VK was not mutagenic in Salmonella typhimurium strains TA98, TA100, TA1535, or TA1537 with or without exogenous metabolic activation. The chemical was mutagenic only with activation in the mouse lymphoma L5178Y/TK± forward mutation assay. Incubation of Chinese hamster ovary cells with penicillin VK resulted in increased frequencies of sister chromatid exchanges and chromosomal aberrations in the absence of metabolic activation under the conditions of delayed harvest to compensate for chemical-induced cell cycle delay; no effects from penicillin VK exposure were observed in these cells in the presence of S9. Audit: The data, documents, and pathology materials from the 2-year studies of penicillin VK were audited. The audit findings show that the conduct of the studies is documented and support the data and results given in this Technical Report. Conclusions: Under the conditions of these 2-year gavage studies, there was no evidence of carcinogenic activity of penicillin VK for F344/N rats or for B6C3F1 mice administered 500 or 1,000 mg/kg penicillin VK in corn oil by gavage, 5 days per week for 2 years. Nonneoplastic lesions were seen in the glandular stomach of dosed mice. Decreased survival of low and high dose male rats and of high dose female rats reduced the sensitivity of the studies for determining the presence or absence of a carcinogenic response in this species.
Synonyms: 4-thia-1-azabicyclo(3.2.0)heptane-2-carboxylic acid, 3,3-dimethyl-7-oxo-6-(2-phenoxy-acetamide)-, monopotassium salt; penicillin V potassium; penicillin V potassium salt; D-a-phenoxymethylpenicillinate K salt; phenoxymethylpenicillin potassium; PVK Trade Names: Antibiocin; Apsin VK; Aracil; Arcasin; Aspin VK; Beromycin; Beromycin 400; Betapen VK; Calciopen K; Cliacil; Compocillin VK; Distakaps V-K; Distaquaine V-K; Dowpen V-K; DQV-K; Fenoxypen; Icipen; Isocillin; Ispenoral; Ledercillin VK; Megacillin oral; Oracil-VK; Orapen; Ospeneff; Pedipen; Penagen; Pencompren; Pen-Vee K; Pen-V-K powder; Penvikal; Pfizerpen VK; Qidpen VK; Robicillin VK; Rocillin-VK; Roscopenin; SK-Penicillin VK; Stabilin VK Syrup 125; Stabilin VK Syrup 62.5; Sumapen VK; Suspen; Uticillin VK; V-Cil-K; V-Cillin K; Veetids; Vepen
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Cervical dystonia is the most common form of focal dystonia. It is characterized by involuntary posturing of the head and frequently is associated with neck pain. Disability and social withdrawal are common. Most cases of cervical dystonia are idiopathic and generally it is a life-long disorder. In recent years, botulinum toxin type A (BtA) has become the first line therapy. However, some patients become resistant to it. This problem led to the study of another botulinum toxin (Bt) serotype, Bt type B (BtB), to address the issues of clinical efficacy, effect size, and safety of BtB in the treatment of cervical dystonia. To determine whether botulinum toxin type B (BtB) is an effective and safe treatment for cervical dystonia. We identified studies for inclusion in the review using the Cochrane Movement Disorders Group trials register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, and EMBASE; by handsearching the Movement Disorders Journal and abstracts of international congresses on movement disorders and botulinum toxin; by communication with other researchers in the field; by searching reference lists of papers found using the above search strategies; and by contacting authors and drug manufacturers. We considered studies eligible for inclusion in the review if they evaluated the efficacy of BtB for the treatment of cervical dystonia in randomized, placebo-controlled trials. We used a paper pro forma to collect data from the included studies, with double extraction by two independent reviewers. Both reviewers assessed each trial for internal validity and settled differences by discussion. The outcome measures used included adverse events, improvement in symptomatic rating scales, subjective evaluation by patients and clinicians, changes in pain scores, and changes in quality of life assessments. Studies were short term (16 weeks) employing a single BtB injection session. All were multicentre and conducted in the US.
All patients included had previously received BtA. The trials differed with respect to whether or not the patients were still responding to BtA, but other entry criteria were similar. All studies used a dose of 10,000 Units of BtB in one group and the technique of administration was the same. Meta-analysis of three trials enrolling 308 participants showed statistically and clinically significant improvements in the Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS) total score at week four, with a Peto odds ratio (OR) for the number of patients who had at least a 20% improvement of 4.69 (95% CI 2.06 to 10.69) and a weighted mean difference of -5.92 (95% CI -9.61 to -2.23). Subjective rating scales (Patient Global Assessment of Change, Investigator Global Assessment of Change, and Patient Analog Pain Assessment) also improved. Adverse events clearly associated with the mechanism of action of BtB included dysphagia and dry mouth, and the number of patients with any adverse event was higher in the BtB treatment groups. Subgroup analyses showed a clear dose-response relationship for subjective and objective benefit, for frequency and severity of adverse events, and a greater benefit for BtA resistant patients than BtA responders in the primary outcome. The duration of effect was about 16 weeks. We found three eligible studies enrolling 308 participants. Patient groups were appropriately selected and well matched.
From the methodological point of view these trials were probably not subjected to important selection, performance or attrition bias, and all studies used an intention-to-treat analysis. The dose varied significantly between studies, although all used 10,000 Units of BtB in one group and the technique of administration was the same. The primary outcome in all trials was change in TWSTRS total score at week four, and other efficacy outcomes were similar between studies. The number of dropouts was small and balanced in all trials. Reasons for withdrawals were given. One randomized double-blind placebo-controlled study was excluded because data could not be extracted for the outcomes. Meta-analysis showed statistically and clinically significant improvements, with a Peto odds ratio (OR) for at least a 20% improvement in TWSTRS total score at week four of 4.69 (95% CI 2.06 to 10.69) and a weighted mean difference of -5.92 (95% CI -9.61 to -2.23). Subjective rating scales (Patient Global Assessment of Change, Investigator Global Assessment of Change, and Patient Analog Pain Assessment) also improved. The weighted mean difference for changes in these subjective scales varied between -13% and -21%. However, for many of the outcomes, we could not combine data from all studies. Only adverse events clearly associated with the mechanism of action of BtB were more frequent in the treatment group. These included dysphagia and dry mouth. The number of patients with any adverse event was higher with BtB. Subgroup analyses showed a clear dose-response relationship for subjective and objective benefit and for frequency and severity of adverse events. Subgroup analyses showed a greater benefit for the BtA resistant patients than BtA responders in the primary outcome. The duration of effect was about 16 weeks. These trials did not measure quality of life, nor did they establish the long term duration of effect or immunogenicity. A single injection of BtB was effective and safe for treating cervical dystonia.
Long-term uncontrolled studies suggested that further injection cycles continue to work for most patients. Future research should explore technical factors such as the optimum treatment intervals and use of image or electromyographic guidance for administration. Other issues include service delivery, quality of life, long-term efficacy and safety, and the relative indications for BtA, BtB and other treatments such as deep brain stimulation.
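The pooled effect in the review above is a Peto odds ratio, which for each 2x2 table uses the one-step estimator exp((O - E)/V), where O is the observed number of treatment-arm events, E its expectation under the null with margins fixed, and V the hypergeometric variance. As a minimal sketch of that calculation for a single table (illustrative numbers only; this is not the review's meta-analysis code):

```python
from math import exp, sqrt

def peto_odds_ratio(events_t, n_t, events_c, n_c):
    """Peto one-step odds ratio with a 95% CI for a single 2x2 table.

    events_t / n_t: events and sample size in the treatment arm;
    events_c / n_c: the same for the control arm.
    """
    n = n_t + n_c
    events = events_t + events_c
    expected = n_t * events / n  # E: expected treatment-arm events under the null
    v = n_t * n_c * events * (n - events) / (n**2 * (n - 1))  # hypergeometric variance
    log_or = (events_t - expected) / v
    half_width = 1.96 / sqrt(v)
    return exp(log_or), exp(log_or - half_width), exp(log_or + half_width)
```

With equal event rates in both arms (say 10/100 vs 10/100) the point estimate is exactly 1 and the interval straddles 1; pooling across trials sums the (O - E) and V terms over all tables before exponentiating.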
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Although everyone working in routine mental health services recognizes the scientific and ethical importance of ensuring that the treatments being provided are of the highest quality, there is a clear lack of consensus regarding what outcome domains to include, what assessment measure to use and, moreover, who to question when assessing. Since the fifties, social functioning has been considered an important dimension to take into account for treatment planning and outcome measuring. But for many years, symptom scales have been considered sufficient outcome measures, with improvement in social functioning expected on the basis of symptom alleviation. As symptoms and social adjustment sometimes appear relatively independent, no accurate conclusion concerning a patient's social functioning can be drawn on the basis of his clinical symptoms. More attention has since been directed toward the development of instruments specifically intended to measure the extent and nature of the social functioning impairments observed in most psychiatric syndromes. Many of these instruments are designed to be completed by caregivers, or remain time consuming and difficult to use routinely. Presently, in clinical practice, there is a need to rely on simple and brief instruments considering patients' perspective on their social adjustment as a function of time. The aim of this study is to present a new instrument, the QFS, initially developed to assess social functioning in patients involved in group psychotherapy programs conducted in a specialist mental health setting, as well as its psychometric characteristics. It was designed to be completed in less than 10 minutes, and the questions are phrased in a simple and redundant way in order to limit problems inherent to illiteracy or language comprehension.
The QFS is a 16-item self-report instrument that assesses both the frequency of (8 items) and the satisfaction with (8 items) various social behaviours adopted during the 2-week period preceding the assessment. It yields three separate indexes of social functioning, defined a priori and labelled "frequency", "satisfaction" and "global". The higher the scores, the better the social functioning. The QFS was administered to 457 subjects, aged between 18 and 65, including 176 outpatients (99 with anxious or depressive disorders, 25 with personality disorders and 52 with psychotic disorders) and 281 healthy control subjects. No significant difference was found between patients and controls according to age or gender distribution. The acceptance rate was high (>95%). Moreover, the QFS was generally acceptable to the clinicians who used it. Internal consistency calculated for each index ranged from 0.65 to 0.83 (Cronbach alpha). Test-retest reliability, calculated within a 15-day time interval on a sample of 49 healthy controls, ranged from 0.69 to 0.71 (intraclass correlation coefficient). Discriminant validity was calculated on healthy controls and patients divided into sub-groups according to their diagnosis. It proved to be excellent, with significantly higher scores in control subjects than in psychiatric patients and significant differences across diagnostic categories (Kruskal-Wallis ANOVA with post-hoc tests, all p<0.05). The convergent validity of the QFS with other measures of social functioning was calculated using the Social Adaptation Self-Evaluation Scale (SASS) and the Social Adjustment Scale Self-Report (SAS-SR). With the SASS, the convergent validity was higher among patients (Spearman rS 0.71 to 0.92, p<0.01) than controls (rS from 0.49 to 0.66, p<0.001). In healthy controls, correlation with the SAS-SR was moderate but statistically significant (rS from -0.21 to -0.44, p<0.05).
When comparing QFS scores with self-rated symptom severity, lower levels of social functioning were significantly associated with more severe symptoms according to the Brief Symptom Inventory (BSI: rS from -0.38 to -0.65, p<0.001). The QFS indexes demonstrated sensitivity to change (Wilcoxon: all p<0.05) on a sample of 27 out-patients suffering from anxious-depressive disorders questioned before and after 4 months of cognitive behavioural group therapy running on a weekly basis during 16 sessions of 2 hours each. The factorial validity of the QFS was measured through 3 separate factor analyses conducted using the data of the 457 subjects. The first analysis considered only Frequency items; 7 out of 8 items had loadings above 0.5 on Factor 1, accounting for 30.7% (unrotated) of the variance. The second analysis considered only Satisfaction items; all items had loadings above 0.6 on Factor 1, explaining 43.4% (unrotated) of the variance. And finally, in the third factor analysis, all QFS items were included; 15 out of 16 items had loadings above 0.4 on Factor 1, accounting for 30% (unrotated) of the variance. Concerning the factorial validity of the instrument, these results suggest that all QFS items belong to the same underlying dimension. Finally, provisional norms for the QFS are provided for healthy controls, in order to characterise individual patients or patient subgroups. In conclusion, the need for assessment in clinical routine, in order to estimate different aspects of patients' conditions as well as the quality of the treatment provided, has contributed to the development of a large variety of instruments measuring several domains. Concerning the level of social functioning, many instruments fail to meet the chief criterion of feasibility, often remaining too complex or time consuming. Moreover, only a few of them are available in French.
The QFS presented here is a brief, simple and easy-to-administer self-rating scale that displays satisfactory psychometric properties. It seems to be a valuable instrument for the monitoring of social functioning in psychiatric patients which, from a therapeutic point of view, may have a clear impact, as it sets up an expectation of change and allows patients and therapists both to reality-test their beliefs about whether progress is present and to identify whether therapy is working on this specific outcome domain. To date, however, the administration of the QFS to other populations and treatment modalities requires further investigation.
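The internal consistency values reported for the QFS indexes (Cronbach alpha of 0.65 to 0.83) follow from a simple variance decomposition: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch of the computation on made-up item scores (not the authors' analysis code):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k items.

    items: list of k equal-length sequences, one per item,
    each holding one score per respondent. Uses sample variance (ddof=1).
    """
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)
    item_variances = sum(var(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - item_variances / var(totals))
```

Two perfectly correlated items give alpha = 1; items that vary independently of one another push alpha toward 0.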
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
In summing up the contents of the preceding pages it may be stated that the action of digitalis has been divided into two stages according to the changes evinced by the ventricles under its influence; of these the first is characterized by marked inhibitory action together with modification of the cardiac muscle, while in the second the inhibitory action is less marked and the muscular action becomes the more prominent feature. The inhibitory action is due to direct stimulation by this series of the pneumogastric centrally in the medulla oblongata and peripherally in the heart. The extent to which the inhibitory mechanism is stimulated varies in different animals and with different members of the digitalis series. The muscular action of small quantities betrays itself in a tendency to increase the extent of the contraction, while in some cases the degree of relaxation reached in diastole is also lessened by it. In larger quantities the series increases the irritability of the cardiac muscle very considerably, and the spontaneous rhythm of the ventricles therefore becomes developed. Through the interaction of these two factors in the first stage the rhythm of the whole heart is slowed, the contraction of the ventricle is more complete, and the diastolic relaxation is generally increased, although it may be unchanged or lessened. The systolic pressure is increased and the fall from maximum to minimum pressure is slower than normal owing to the increased completeness and longer duration of systole (Rolleston). The auricles generally contract with less force and may relax more completely than normally. Sometimes, however, their contractions also are more complete than before the injection of the drug. This latter condition generally precedes the diminution of the force of the auricular contraction. This variation of the effects of digitalis in the auricle explains the changes in intra-auricular pressure noted by Kaufmann. 
The contraction volume of the ventricles is always much increased, and the output per unit of time is generally augmented, and this together with the contraction of the peripheral arterioles causes an increase in the tension in the systemic circulation, an acceleration of the circulation, and possibly a temporary increase in the pressure in the great veins and in the auricle and ventricle in diastole (Kaufmann). The pressure in the pulmonary artery is practically unaffected by some members of the series, while by others it is considerably increased. This difference in the reaction of the pulmonary circulation is due to the varying extent to which these drugs act on the peripheral arteries and not to any difference in their action on the two sides of the heart. If the inhibitory action be very strongly marked the slowing of the heart may be extreme, the ventricles assuming their own spontaneous rhythm and all connection with the auricles being lost. While the contraction volume of the ventricles is still greater than normal, their output per unit of time may become less than normal, the aortic tension therefore fall and the rapidity of the circulation be lessened. The ventricles maintain their association throughout, and probably the rhythm of the two auricles also remains equal. The ventricular rhythm, however, becomes irregular owing to the variation in the duration of the diastolic pause. The auricles may cease altogether in diastole, or may continue to beat with a slower or faster rhythm than the ventricles. During the second stage the rhythm of the heart becomes accelerated owing to the increased irritability of the heart muscle. The ventricle tends to assume a rapid spontaneous rhythm, while the auricular rhythm is also quicker than in the first stage. When these two rhythms interfere by the passage of impulses across the auriculo-ventricular boundary in either direction, irregularity of the heart is produced, generally bearing a distinctly periodic character. 
The ventricles continue to maintain their common rhythm, while the auricles and ventricles may contract at quite different rates. The two ventricles, however, do not necessarily contract with equal force, and the contractions of one may present periodic variations in strength, while those of the other may be almost perfectly uniform. The contractions of the auricles vary in the same way as regards each other and the ventricles. The inhibitory nerves are no longer able to slow the ventricular rhythm, but may affect the completeness of systole and diastole in the ordinary way. The auricular contractions can still be lessened in force and possibly be abolished by their stimulation, and the impulses passing between the auricle and ventricle may therefore be blocked and regularity of the heart produced by powerful inhibition. The irregularity of the contractions is therefore due indirectly to the increased irritability of the cardiac muscle and the acceleration must be attributed to the same cause. An extreme phase of this stage produced by the interference of the rhythms is a temporary standstill of one of the chambers, generally the auricle. The irregularity leads to a lessened efficiency of the work of the heart. The output varies extremely in successive observations and the contraction volume of every individual beat may differ. The various chambers often show a tendency to dilate during this stage. The blood pressure in the systemic arteries at first remains high, in fact may be higher than in the first stage owing to the increased rapidity of the heart rhythm, but afterwards falls continuously as the periodic variations become shorter in duration. The auricles generally cease contracting before the ventricles, but not invariably. There is no fixed order in the cessation of the ventricles or auricles. Each division comes to a standstill in a position somewhat nearer diastole than systole and then passes into delirium and dilates to the fullest extent.
BACKGROUND, SCOPE, AND AIMS: Antibacterial fluoroquinolones (FQs) are third-generation antibiotics that are commonly used as therapeutic treatments of respiratory and urinary tract infections. They are used far less in intensively farmed animal production systems, though their use may be permitted in the veterinary treatments of flocks or in medicated feeds. When used, only a fraction of ingested parent FQ actually reaches the in vivo target site of infection, while the remainder is excreted as the parent FQ and its metabolized products. In many species' metabolism, enrofloxacin (EF) is converted into ciprofloxacin (CF) while both FQs are classified as parent FQs in human treatments. It is therefore likely that both FQs and their metabolic products will contribute to a common pool of metabolites in biological wastes. Wastes from intensive farming practices are either directly applied to agricultural land without treatment or may be temporarily stored prior to disposal. However, human waste is treated in sewage treatment plants (STPs) where it is converted into biosolids. In the storage or treatment process of STPs, FQs and their in vivo metabolites are further converted into other environmental metabolites (FQEMs) by ex vivo physicochemical processes that act and interact to produce complex mixtures of FQEMs, some of which have antibacterial-like activities. Biosolids are then often applied to agricultural land as a fertilizer amendment where FQs and FQEMs can be further converted into additional FQEMs by soil processes. It is therefore likely that FQ-contaminated biowaste-treated soils will contain complex mixtures of FQEMs, some of which may have antibacterial-like activities that may be expressed on bacteria endemic to the receiving agricultural soil environment. 
Concern has arisen in the scientific and in the general community that repeated use of FQ-contaminated biowaste as fertilizer amendments of nutrient-impoverished agricultural land may create a selective environment in which FQ-resistant bacteria might grow. The likelihood of this happening will depend, to some extent, on whether bioactive FQEMs are first synthesized from the parent FQs by the action and interaction of in vivo and ex vivo processes producing bioactive FQEMs in biowastes and biosolids. The postulated creation of a selective environment will also depend, in part, on whether such bioactive FQEMs are biologically available to bacteria, which may, in turn, be influenced by soil type, amendment regime, and the persistence of the bioactive FQEMs. Additionally, soil bacteria and soil processes may be affected in different ways or extents by bioactive FQEMs that could possibly act additively or synergistically at ecological targets in these non-target bacteria. This is an important consideration, since, while parent FQs have well-defined ecological targets (DNA gyrase and topoisomerase IV) and modes of bactericidal action, the FQEMs and their possible modes of action on the many different species of soil bacteria is less well studied. It is therefore understandable that there is a lack of conclusive evidence directly attributing biosolid usage to any increase in FQ-resistant bacteria detected in biowaste-amended agricultural soil. However, a lack of evidence may simply imply that a causal relationship between biosolid usage programs and any detection of low levels of FQ-resistant bacteria in soils has yet to be established, rather than an assumption of no relationship whatsoever. Based on results presented in this paper, the precautionary principle should be applied in the usage of FQ-contaminated biosolids as fertilizer amendments of agricultural land. 
The aim of this research was to test whether any bioactive FQEMs of EF could be synthesized by aerobic fermentation processes using Mycobacterium gilvum (American Type Culture Collection) and a mixed culture of microorganisms derived from an agricultural soil. High-performance thin-layer chromatography (HPTLC) and bioautography were tested as screening techniques in the detection and analysis of bioactive FQEMs. FQEMs derived from M. gilvum and mixed (soil) culture aerobic ferments were fractionated using preparative HPTLC. A standard strain of Escherichia coli was then used as the reporter organism in a bioautography assay in the detection of bioactive FQEMs on a mid-section of the HPTLC plate. Plate sections were reassembled, and a photograph was taken under low-intensity ultraviolet (UV) light to reveal regions that contained analytes that had UV chromophores and antibacterial-like activities. Many fractionated FQEMs displayed antibacterial-like activity while bound to silica gel HPTLC plates. These results also provide evidence that sufficient quantities of biologically active FQEMs were biologically available from a silica gel surface to prevent the adherent growth of E. coli. Six to seven FQEMs derived from EF using aerobic fermentation processes had antibacterial-like activities, while two FQEMs were also detectable using UV light. Furthermore, similar banding patterns of antibacterial-like activity were observed in both the monoculture (M. gilvum) and mixed culture bioautography assays, indicating that similar processes operated in both aerobic fermentations, either producing similar biologically active FQEMs or biologically active FQEMs that had similar physicochemical properties in both ferments. The simplest explanation for these findings is that the tested agricultural soil also contained mycobacteria that metabolized EF in a similar way to the purchased standard monoculture M. gilvum. 
Additionally, the marked contrast between the bioautography results and the UV results indicated that the presence of UV chromophores is not a prerequisite for the detection of antibacterial-like activity. A reliance on spectrophotometric techniques in the detection of bioactive FQEMs in the environment may underestimate component antibacterial-like activity and, possibly, total antibacterial-like activity expressed by EF and its FQEMs. The described bioautography method provides a screening technique with which antibacterial-like activities derived from EF and possibly other FQs can be detected directly on silica gel HPTLC plates. It is recommended that both bioassay and instrumental analytical techniques be used in any measurement of hazard and risk relating to antibacterial-like activities in the environment that are derived from fluoroquinolone antibiotics and their environmental metabolites.
Charcot-Marie-Tooth disease (CMT) is the most common inherited disorder of the peripheral nervous system. The frequency of different CMT genotypes has been estimated in clinic populations, but prevalence data from the general population is lacking. Point mutations in the mitofusin 2 (MFN2) gene have been identified exclusively in Charcot-Marie-Tooth disease type 2 (CMT2), and in a single family with intermediate CMT. MFN2 point mutations are probably the most common cause of CMT2. The CMT phenotype caused by mutation in the myelin protein zero (MPZ) gene varies considerably, from early onset and severe forms to late onset and milder forms. The mechanism is not well understood. The myelin protein zero (P(0)) mediates adhesion in the spiral wraps of the Schwann cell's myelin sheath. X-linked Charcot-Marie-Tooth disease (CMTX) is caused by mutations in the connexin32 (cx32) gene, which encodes a polypeptide that is arranged in hexameric array and forms gap junctions. The aims were to estimate the prevalence of CMT; to estimate the frequency of Peripheral Myelin Protein 22 (PMP22) duplication and of point mutations, insertions and deletions in the Cx32, Early growth response 2 (EGR2), MFN2, MPZ, PMP22 and Small integral membrane protein of lysosome/late endosome (SIMPLE) genes; to describe novel mutations in Cx32, MFN2 and MPZ; and to describe de novo mutations in MFN2. Our population based genetic epidemiological survey included persons with CMT residing in eastern Akershus County, Norway. The participants were interviewed and examined by one geneticist/neurologist, and classified clinically, neurophysiologically and genetically. Two-hundred and thirty-two consecutive unselected and unrelated CMT families with available DNA from all regions in Norway were included in the MFN2 study. We screened for point mutations in the MFN2 gene. We describe four novel mutations, two in the connexin32 gene and two in the MPZ gene. 
A total of 245 affected from 116 CMT families from the general population of eastern Akershus county were included in the genetic epidemiological survey. In the general population 1 per 1214 persons (95% CI 1062-1366) has CMT. Charcot-Marie-Tooth disease type 1 (CMT1), CMT2 and intermediate CMT were found in 48.2%, 49.4% and 2.4% of the families, respectively. A mutation in the investigated genes was found in 27.2% of the CMT families and in 28.6% of the affected. The prevalence of the PMP22 duplication and of mutations in the Cx32, MPZ and MFN2 genes was 13.6%, 6.2%, 1.2% and 6.2% of the families, and 19.6%, 4.8%, 1.1% and 3.2% of the affected, respectively. None of the families had point mutations, insertions or deletions in the EGR2, PMP22 or SIMPLE genes. Four known and three novel mitofusin 2 (MFN2) point mutations in 8 unrelated Norwegian CMT families were identified. The novel point mutations were not found in 100 healthy controls. This corresponds to 3.4% (8/232) of CMT families having point mutations in MFN2. The phenotypes were compatible with CMT1 in two families, CMT2 in four families, intermediate CMT in one family and distal hereditary motor neuronopathy (dHMN) in one family. A point mutation in the MFN2 gene was found in 2.3% of CMT1, 5.5% of CMT2, 12.5% of intermediate CMT and 6.7% of dHMN families. Two novel missense mutations in the MPZ gene were identified. Family 1 had a c.368G>A (Gly123Asp) transition while families 2 and 3 had a c.103G>A (Asp35Asn) transition. The affected in family 1 had early onset and severe symptoms compatible with Dejerine-Sottas syndrome (DSS), while the affected in families 2 and 3 had late onset, milder symptoms and axonal neuropathy compatible with CMT2. Two novel connexin32 mutations that cause early onset X-linked CMT were identified. 
Family 1 had a deletion, c.225delG (R75fsX83), which causes a frameshift and a premature stop codon at position 247, while family 2 had a c.536G>A (Cys179Tyr) transition which causes a change of the highly conserved cysteine residue, i.e. disruption of at least one of three disulfide bridges. The mean age at onset was in the first decade and the nerve conduction velocities were in the intermediate range. Charcot-Marie-Tooth disease is the most common inherited neuropathy. At present 47 hereditary neuropathy genes are known, and an examination of all known genes would probably only identify mutations in approximately 50% of those with CMT. Thus, it is likely that at least 30-50 CMT genes are yet to be identified. The identified known and novel point mutations in the MFN2 gene expand the clinical spectrum from CMT2 and intermediate CMT to also include possibly CMT1 and the dHMN phenotypes. Thus, genetic analyses of the MFN2 gene should not be restricted to persons with CMT2. The phenotypic variation caused by different missense mutations in the MPZ gene is likely caused by different conformational changes of the MPZ protein which affect the functional tetramers. Severe changes of the MPZ protein cause dysfunctional tetramers and predominantly uncompacted myelin, i.e. the severe phenotypes congenital hypomyelinating neuropathy and DSS, while milder changes cause the phenotypes CMT1 and CMT2. The two novel mutations in the connexin32 gene are more severe than the majority of previously described mutations, possibly due to the severe structural change of the gap junction they encode. Charcot-Marie-Tooth disease is the most common inherited disorder of the peripheral nervous system with an estimated prevalence of 1 in 1214. CMT1 and CMT2 are equally frequent in the general population. The prevalence of PMP22 duplication and of mutations in Cx32, MPZ and MFN2 is 19.6%, 4.8%, 1.1% and 3.2%, respectively. 
The ratio of probable de novo mutations in CMT families was estimated to be 22.7%. Genotype-phenotype correlations for seven novel mutations in the genes Cx32 (2), MFN2 (3) and MPZ (2) are described. Two novel phenotypes were ascribed to the MFN2 gene; however, further studies are needed to confirm that MFN2 mutations can cause CMT1 and dHMN.
Oral mucositis is a side effect of chemotherapy, head and neck radiotherapy, and targeted therapy, affecting over 75% of high risk patients. Ulceration can lead to severe pain and difficulty eating and drinking, which may necessitate opioid analgesics, hospitalisation and nasogastric or intravenous nutrition. These complications may lead to interruptions or alterations to cancer therapy, which may reduce survival. There is also a risk of death from sepsis if pathogens enter the ulcers of immunocompromised patients. Ulcerative oral mucositis can be costly to healthcare systems, yet there are few preventive interventions proven to be beneficial. Oral cryotherapy is a low-cost, simple intervention which is unlikely to cause side-effects. It has shown promise in clinical trials and warrants an up-to-date Cochrane review to assess and summarise the international evidence. To assess the effects of oral cryotherapy for preventing oral mucositis in patients with cancer who are receiving treatment. We searched the following databases: the Cochrane Oral Health Group Trials Register (to 17 June 2015), the Cochrane Central Register of Controlled Trials (CENTRAL) (Cochrane Library 2015, Issue 5), MEDLINE via Ovid (1946 to 17 June 2015), EMBASE via Ovid (1980 to 17 June 2015), CANCERLIT via PubMed (1950 to 17 June 2015) and CINAHL via EBSCO (1937 to 17 June 2015). We searched the US National Institutes of Health Trials Registry, and the WHO Clinical Trials Registry Platform for ongoing trials. No restrictions were placed on the language or date of publication when searching databases. We included parallel-design randomised controlled trials (RCTs) assessing the effects of oral cryotherapy in patients with cancer receiving treatment. We used outcomes from a published core outcome set registered on the COMET website. Two review authors independently screened the results of electronic searches, extracted data and assessed risk of bias. 
We contacted study authors for information where feasible. For dichotomous outcomes, we reported risk ratios (RR) and 95% confidence intervals (CI). For continuous outcomes, we reported mean differences (MD) and 95% CIs. We pooled similar studies in random-effects meta-analyses. We reported adverse effects in a narrative format. We included 14 RCTs analysing 1280 participants. The vast majority of participants did not receive radiotherapy to the head and neck, so this review primarily assesses prevention of chemotherapy-induced oral mucositis. All studies were at high risk of bias. The following results are for the main comparison: oral cryotherapy versus control (standard care or no treatment). Adults receiving fluorouracil-based (5FU) chemotherapy for solid cancers: oral cryotherapy probably reduces oral mucositis of any severity (RR 0.61, 95% CI 0.52 to 0.72, 5 studies, 444 analysed, moderate quality evidence). In a population where 728 per 1000 would develop oral mucositis, oral cryotherapy would reduce this to 444 (95% CI 379 to 524). The number needed to treat to benefit one additional person (NNTB), i.e. to prevent them from developing oral mucositis, is 4 people (95% CI 3 to 5). The results were similar for moderate to severe oral mucositis (RR 0.52, 95% CI 0.41 to 0.65, 5 studies, 444 analysed, moderate quality evidence), NNTB 4 (95% CI 4 to 6). Severe oral mucositis is probably reduced (RR 0.40, 95% CI 0.27 to 0.61, 5 studies, 444 analysed, moderate quality evidence). Where 300 per 1000 would develop severe oral mucositis, oral cryotherapy would reduce this to 120 (95% CI 81 to 183), NNTB 6 (95% CI 5 to 9). Adults receiving high-dose melphalan-based chemotherapy before haematopoietic stem cell transplantation (HSCT): oral cryotherapy may reduce oral mucositis of any severity (RR 0.59, 95% CI 0.35 to 1.01, 5 studies, 270 analysed, low quality evidence). 
Where 824 per 1000 would develop oral mucositis, oral cryotherapy would reduce this to 486 (95% CI 289 to 833, a range spanning from a reduction to an increase). The NNTB is 3, although the uncertainty surrounding the effect estimate means that the 95% CI ranges from an NNTB of 2 to an NNTH of 111 (number needed to treat in order to harm one additional person, i.e. for one additional person to develop oral mucositis). The results were similar for moderate to severe oral mucositis (RR 0.43, 95% CI 0.17 to 1.09, 5 studies, 270 analysed, low quality evidence), NNTB 3 (95% CI 2 NNTB to 17 NNTH). Severe oral mucositis is probably reduced (RR 0.38, 95% CI 0.20 to 0.72, 5 studies, 270 analysed, moderate quality evidence). Where 427 per 1000 would develop severe oral mucositis, oral cryotherapy would reduce this to 162 (95% CI 85 to 308), NNTB 4 (95% CI 3 to 9). Oral cryotherapy was shown to be safe, with very low rates of minor adverse effects, such as headaches, chills, numbness/taste disturbance, and tooth pain. This appears to contribute to the high rates of compliance seen in the included studies. There was limited or no evidence on the secondary outcomes of this review, or on patients undergoing other chemotherapies, radiotherapy, targeted therapy, or on comparisons of oral cryotherapy with other interventions or different oral cryotherapy regimens. Therefore no further robust conclusions can be made. There was also no evidence on the effects of oral cryotherapy in children undergoing cancer treatment. We are confident that oral cryotherapy leads to large reductions in oral mucositis of all severities in adults receiving 5FU for solid cancers. We are less confident in the ability of oral cryotherapy to reduce oral mucositis in adults receiving high-dose melphalan before HSCT. Evidence suggests that it does reduce oral mucositis in these adults, but we are less certain about the size of the reduction, which could be large or small. 
However, we are confident that there is an appreciable reduction in severe oral mucositis in these adults. This Cochrane review includes some very recent and currently unpublished data, and strengthens international guideline statements for adults receiving the above cancer treatments.
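The per-1000 figures and NNTB values quoted above follow from simple risk-difference arithmetic: apply the risk ratio to the assumed baseline risk, then invert the absolute risk reduction. A minimal Python sketch (function names are mine, not from the review), checked against the 5FU figures above:

```python
# Hedged sketch: how the review's illustrative figures are derived from
# an assumed baseline risk and a pooled risk ratio (RR).

def treated_risk(baseline_per_1000, rr):
    """Apply a risk ratio to an assumed baseline risk (events per 1000)."""
    return baseline_per_1000 * rr

def nntb(baseline_per_1000, rr):
    """Number needed to treat to benefit = 1 / absolute risk reduction."""
    arr = (baseline_per_1000 - treated_risk(baseline_per_1000, rr)) / 1000
    return 1 / arr

# Figures quoted for 5FU chemotherapy: baseline 728 per 1000, RR 0.61
# (CI endpoints 0.52 and 0.72 give the quoted range of 379 to 524).
print(round(treated_risk(728, 0.61)))  # 444 per 1000
print(round(nntb(728, 0.61)))          # 4 people
```

The same arithmetic with the melphalan figures (baseline 824 per 1000, RR 0.59) reproduces the 486 per 1000 estimate quoted in that comparison.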
Embryo transfer (ET) was traditionally performed two days after oocyte retrieval; however, developments in culture media have allowed embryos to be maintained in culture for longer periods. Delaying transfer from Day two to Day three would allow for further development of the embryo and might have a positive effect on pregnancy outcomes. To determine if there are any differences in live birth and pregnancy rates when embryo transfer is performed on day three after oocyte retrieval, compared with day two, in infertile couples undergoing treatment with in vitro fertilisation (IVF), including intracytoplasmic sperm injection (ICSI). We searched the Cochrane Gynaecology and Fertility Group Specialised Register of Controlled Trials, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (Ovid), Embase (Ovid), PsycINFO (Ovid) from the inception of the databases to 26th April 2016. We also searched ClinicalTrials.gov and the WHO portal for ongoing trials plus citation lists of relevant publications, review articles and included studies, as well as abstracts of appropriate scientific meetings. Randomised controlled trials that compared Day 3 versus Day 2 embryo transfer after oocyte retrieval during an IVF or ICSI treatment cycle in infertile couples. Two review authors independently assessed trial quality and extracted data. We contacted study authors for additional information. The primary outcome measures were live birth rate and ongoing pregnancy rate. We included 15 studies. Fourteen studies reported data per woman (2894 women) and one study reported data per cycle (969 cycles). The quality of the evidence using the GRADE approach ranged from moderate quality to very low quality. The main reasons for downgrading evidence were poor methodological reporting, selective reporting, inconsistency and imprecision. 
Live birth per woman - Overall, there was no evidence of a difference in live birth rate between Day three and Day two embryo transfer (risk ratio (RR) 1.05, 95% confidence interval (CI) 0.89 to 1.23; three studies, n = 1200 women; I² = 63%; very low quality evidence). The data suggest that if 32% of women who underwent a Day two embryo transfer had a live birth, then between 28% and 39% of women undergoing a Day three embryo transfer would have a live birth. Ongoing pregnancy per woman - There was no evidence of a difference between Day three and Day two embryo transfer for ongoing pregnancy (RR 0.98, 95% CI 0.85 to 1.12; six studies, n = 1740 women; I² = 52%; very low quality evidence). The data suggest that if 33% of women undergoing a Day two embryo transfer had an ongoing pregnancy then between 28% and 37% of women undergoing a Day three embryo transfer would have an ongoing pregnancy. Clinical pregnancy per woman - There was no evidence of a difference between Day three and Day two embryo transfer for the chance of a clinical pregnancy (RR 1.08, 95% CI 0.98 to 1.19; 12 studies, n = 2461, I² = 51%; very low quality evidence). The data suggest that if 39% of women undergoing Day two embryo transfer had a clinical pregnancy, then between 38% and 46% of women undergoing a Day three embryo transfer would have a clinical pregnancy. Multiple pregnancy per woman - There was no evidence of a difference between Day three and Day two embryo transfer for the risk of a multiple pregnancy (RR 1.12, 95% CI 0.86 to 1.44; eight studies, n = 1837; I² = 0%; moderate quality evidence). The data suggest that if 11% of women undergoing Day two embryo transfer had a multiple pregnancy, then between 9% and 15% of women undergoing a Day three embryo transfer would have a multiple pregnancy. 
Miscarriage rate per woman - There was no evidence of a difference between Day three and Day two embryo transfer for the risk of miscarriage (RR 1.16, 95% CI 0.84 to 1.60; nine studies, n = 2153 women, I² = 26%; moderate quality evidence). The data suggest that if 6% of women undergoing Day two embryo transfer had a miscarriage, then between 5% and 10% of women undergoing a Day three embryo transfer would have a miscarriage. Ectopic pregnancy rate per woman - There was no evidence of a difference between Day three and Day two embryo transfer for the risk of ectopic pregnancy (RR 0.99, 95% CI 0.29 to 3.40; six studies, n = 1531 women, I² = 0%; low quality evidence). The data suggest that if 0.7% of women undergoing Day two embryo transfer have an ectopic pregnancy, then between 0.2% and 2% of women undergoing Day three embryo transfer would have an ectopic pregnancy. Subgroup analysis for pregnancy outcomes did not identify any differential effect between IVF and ICSI. None of the included studies prespecified complication rate (e.g. OHSS), fetal abnormality or women's evaluation of the procedure as outcomes in their studies. Twelve of 15 studies contributed data that could be included in meta-analyses. The quality of the evidence ranged from moderate to very low. Only three of the 15 studies reported data for live birth, although the data for ongoing pregnancy and clinical pregnancy are consistent with the live birth data, suggesting no difference between Day three and Day two embryo transfer for these outcomes. There was no evidence of a difference identified between Day three and Day two embryo transfer for multiple pregnancy, miscarriage or ectopic pregnancy per woman randomised. No data were reported for complication rate, fetal abnormality or woman's evaluation of the procedure. The current evidence has not identified any evidence of differences in pregnancy outcomes between Day two and Day three embryo transfers. 
Any further studies comparing these timings of embryo transfer are unlikely to alter the findings and we suggest that this review no longer be updated.
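The "if X% on Day two, then Y% to Z% on Day three" statements above are obtained by multiplying the assumed Day-two risk by the risk ratio and its confidence limits. A short Python sketch (the function name is mine, not from the review), checked against the live birth figures:

```python
# Hedged sketch: converting a pooled RR and its 95% CI into the
# "assumed versus corresponding risk" ranges quoted in the review.

def corresponding_risk(assumed_pct, rr, ci_low, ci_high):
    """Given an assumed Day-2 risk (%) and an RR with its 95% CI,
    return the Day-3 point estimate and CI bounds (%)."""
    return (assumed_pct * rr, assumed_pct * ci_low, assumed_pct * ci_high)

# Live birth: assumed 32% on Day 2, RR 1.05 (95% CI 0.89 to 1.23)
point, low, high = corresponding_risk(32, 1.05, 0.89, 1.23)
print(round(low), "to", round(high))  # 28 to 39 (% live birth on Day 3)
```

The same call with 33% and RR 0.98 (0.85 to 1.12) reproduces the 28% to 37% ongoing pregnancy range.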
Combined modality treatment consisting of chemotherapy followed by localised radiotherapy is the standard treatment for patients with early stage Hodgkin lymphoma (HL). However, due to long-term adverse effects such as secondary malignancies, the role of radiotherapy has been questioned recently and some clinical study groups advocate chemotherapy only for this indication. To assess the effects of chemotherapy alone compared to chemotherapy plus radiotherapy in adults with early stage HL. For the original version of this review, we searched MEDLINE, Embase and CENTRAL as well as conference proceedings (American Society of Hematology, American Society of Clinical Oncology and International Symposium of Hodgkin Lymphoma) from January 1980 to November 2010 for randomised controlled trials (RCTs) comparing chemotherapy alone versus chemotherapy regimens plus radiotherapy. For the updated review we searched MEDLINE, CENTRAL and conference proceedings to December 2016. We included RCTs comparing chemotherapy alone with chemotherapy plus radiotherapy in patients with early stage HL. We excluded trials with more than 20% of patients in advanced stage. As the value of radiotherapy in addition to chemotherapy is still not clear, we also compared against more cycles of chemotherapy in the control arm. In this updated review, we also included a second comparison evaluating trials with varying numbers of cycles of chemotherapy between intervention and control arms, assuming the same chemotherapy regimen in both arms. We excluded trials evaluating children only; therefore only trials involving adults are included in this updated review. Two review authors independently extracted data and assessed the quality of trials. We contacted study authors to obtain missing information. As effect measures we used hazard ratios (HR) for overall survival (OS) and progression-free survival (PFS) and risk ratios (RR) for response rates. 
Since not all trials reported PFS according to our definitions, we evaluated all similar outcomes (e.g. event-free survival) as PFS/tumour control. Our search led to 5518 potentially relevant references. From these, we included seven RCTs in the analyses involving 2564 patients. In contrast to the first version of this review, which included five trials, we excluded trials randomising children. As a result, we excluded one trial from the former analyses and we identified three new trials. Five trials with 1388 patients compared chemotherapy alone and chemotherapy plus radiotherapy, with the same number of chemotherapy cycles in both arms. The addition of radiotherapy to chemotherapy probably makes little or no difference to OS (HR 0.48; 95% confidence interval (CI) 0.22 to 1.06; P = 0.07, moderate-quality evidence); however, two included trials had a potential high risk of other bias due to a high number of patients not receiving planned radiotherapy. After excluding these trials in a sensitivity analysis, the results showed that the combination of chemotherapy and radiotherapy improved OS compared to chemotherapy alone (HR 0.31; 95% CI 0.19 to 0.52; P < 0.00001, moderate-quality evidence). In contrast to chemotherapy alone, the use of chemotherapy and radiotherapy improved PFS (HR 0.42; 95% CI 0.25 to 0.72; P = 0.001; moderate-quality evidence). Regarding infection-related mortality (RR 0.33; 95% CI 0.01 to 8.06; P = 0.5; low-quality evidence), second cancer-related mortality (RR 0.53; 95% CI 0.07 to 4.29; P = 0.55; low-quality evidence) and cardiac disease-related mortality (RR 2.94; 95% CI 0.31 to 27.55; P = 0.35; low-quality evidence), there is no evidence for a difference between the use of chemotherapy alone and chemotherapy plus radiotherapy. 
For complete response rate (CRR) (RR 1.08; 95% CI 0.93 to 1.25; P = 0.33; low-quality evidence), there is also no evidence for a difference between treatment groups. Two trials with 1176 patients compared chemotherapy alone with chemotherapy plus radiotherapy, with different numbers of chemotherapy cycles in the two arms. OS was reported in one trial only; chemotherapy alone (with more chemotherapy cycles) may improve OS compared to chemotherapy plus radiotherapy (HR 2.12; 95% CI 1.03 to 4.37; P = 0.04; low-quality evidence). This trial was also at potential high risk of other bias due to a high number of patients not receiving the planned therapy. There is no evidence for a difference between chemotherapy alone and chemotherapy plus radiotherapy regarding PFS (HR 0.42; 95% CI 0.14 to 1.24; P = 0.12; low-quality evidence). After excluding the trial with patients not receiving the planned therapy in a sensitivity analysis, the results showed that the combination of chemotherapy and radiotherapy improved PFS compared to chemotherapy alone (HR 0.24; 95% CI 0.07 to 0.88; P = 0.03, based on one trial). For infection-related mortality (RR 6.90; 95% CI 0.36 to 132.34; P = 0.2; low-quality evidence), second cancer-related mortality (RR 2.22; 95% CI 0.7 to 7.03; P = 0.18; low-quality evidence) and cardiac disease-related mortality (RR 0.99; 95% CI 0.14 to 6.90; P = 0.99; low-quality evidence), there is no evidence for a difference between chemotherapy alone and chemotherapy plus radiotherapy. CRR was not reported. This systematic review compared the effects of chemotherapy alone and chemotherapy plus radiotherapy in adults with early stage HL. For the comparison with the same number of chemotherapy cycles in both arms, we found moderate-quality evidence that PFS is superior in patients receiving chemotherapy plus radiotherapy than in those receiving chemotherapy alone. 
The addition of radiotherapy to chemotherapy probably makes little or no difference to OS. The sensitivity analysis excluding the trials at potential high risk of other bias showed that chemotherapy plus radiotherapy improves OS compared to chemotherapy alone. For the comparison with different numbers of chemotherapy cycles between the arms, no conclusions regarding OS and PFS are possible because of the low quality of evidence of the results.
Kidney cancer (Renal Cell Carcinoma (RCC)) is one of the most deadly malignancies due to frequent late diagnosis and poor treatment options. Histologically, RCC embraces a wide variety of subtypes, with the clear cell variant (ccRCC) being the most common, accounting for 75-90% of all RCCs. At present, the surveillance protocols for follow-up of RCC patients after radical nephrectomy are based on the American Joint Committee on Cancer (AJCC) pathological tumor-node-metastasis (TNM) classification system. Other comprehensive staging modalities have emerged and have been implemented in an attempt to improve prognostication by combining other pathological and clinical variables, including Fuhrman nuclear grade and Leibovich score. However, even early stage tumors remain at risk of metastatic progression after surgical resection, and 20-40% of patients undergoing nephrectomy for clinically localized RCC will develop a recurrence. Identifying this high-risk group of RCC patients remains a challenge. Hence, novel molecular prognostic biomarkers are needed to better predict clinical outcomes. An intensive search within this field has been ongoing in the past few years, and the three main predictive and prognostic markers validated in RCC are Von Hippel-Lindau (VHL), vascular endothelial growth factor (VEGF) and carbonic anhydrase IX (CAIX). Nonetheless, their use is still debated and none of them has yet been implemented in clinical routine. RCC is resistant to conventional oncological therapies, such as chemotherapy and radiation. The availability of novel targeted therapies directed against tumorigenic and angiogenic pathways has increased over the last years, and the outcome of patients with advanced RCC has significantly improved as a consequence. Unfortunately, all patients eventually become resistant. Thus, the development of novel targeted therapies is of great importance. 
The aim of this thesis was therefore to contribute to the search for novel prognostic molecular markers in RCC and to identify novel targeted therapies by in-vitro studies. This was specifically conducted by investigating: 1) the impact of symptom presentation of RCC on prognosis; 2) the expression of calcium-activated potassium channels in RCC, the correlation of KCa3.1 with prognosis in ccRCC, and the ability of TRAM-34, RA-2 and Paxilline to inhibit the proliferation of ccRCC cell lines in-vitro; 3) the gene expression and prognostic value of 19 selected genes in ccRCC; and 4) the expression of the protein kinase CK2 subunits in subtypes of RCC, the prognostic impact of high protein expression of the CK2α subunit in ccRCC, and the ability of CX-4945 and E9 to inhibit ccRCC growth in-vitro. Our molecular study cohort consisted of 155 patients with different subtypes of RCC and the benign renal neoplasm, oncocytoma, diagnosed in the Region of Southern Denmark in 2001-2013. Frozen tissue from tumor and normal renal cortex parenchyma, together with paraffin-embedded tissue, was available for every patient. We performed gene expression analysis by qRT-PCR, immunohistochemical staining of tissue microarrays, protein kinase activity analysis and functional studies. Study I was a descriptive observational study focusing on the prognostic impact of symptom presentation in RCC, including 204 patients with renal neoplasms diagnosed in 2011-2012. Incidentally discovered RCC without symptomatic presentation had overall a better prognosis, presenting with smaller tumors, a lower T-stage, a lower Fuhrman grade and a lower Leibovich score. In addition, the non-symptomatic patient group experienced metastatic disease less frequently. In study II we focused on the expression of two calcium-activated potassium channels in ccRCC and oncocytoma. Both KCa3.1 and KCa1.1 were more highly expressed in ccRCC than in oncocytoma. 
High expression of KCa3.1 was moreover correlated with poor progression-free survival in ccRCC. Functional studies provided new insights, since we could detect currents compatible with KCa3.1 and KCa1.1 in the cell membrane of primary and commercial ccRCC cell lines. Nonetheless, we were not able to show any significant inhibition of cell growth by TRAM-34, RA-2 and Paxilline, the selective inhibitors of KCa3.1 and KCa1.1. In study III our aim was to investigate the prognostic role of 19 genes selected on the basis of an earlier study by the group. We used a Taqman® Low Density Array to perform quantitative real-time PCR analysis. By selecting an optimal cut-point and correcting for overestimation of the p-value, we could identify three genes with an impact on the prognosis of ccRCC in both univariate and multivariate analysis. High expression of the genes SPP1 and CSNK2A1 (encoding Osteopontin and CK2α, respectively) correlated with poor prognosis, while high expression of DEFB1 (encoding β-Defensin) correlated with better prognosis. Study IV focused on validating the results obtained in study III by investigating the protein expression of CK2α (protein kinase CK2, alpha subunit) in the different subtypes of RCC and oncocytoma. Furthermore, we investigated whether protein expression of CK2α in ccRCC correlated with prognosis. Here we could show that positive nuclear staining was a marker of poor prognosis in high-stage ccRCC. Moreover, enzyme activity analysis revealed a higher activity of the protein kinase in tumor tissue of ccRCC than in normal renal cortex. Novel insights were provided by a proliferation study in which we investigated the selective CK2α inhibitors CX-4945 and E9. CX-4945 was able to inhibit ccRCC cell growth by nearly 50%. Altogether, the studies presented in this thesis add information to the ongoing research on the identification of novel prognostic markers in ccRCC. 
We have discovered four new molecular markers which can reliably predict prognosis at the time of diagnosis. Additionally, we identified CK2α as a novel therapeutic target in ccRCC. The studies suggest further research to validate the findings in larger cohorts and thereby gain more insight into the involved pathways. Future research initiatives based on the results presented in this thesis could clarify the potential role of CX-4945 as a novel targeted treatment for ccRCC patients.
Fibromyalgia is a clinically defined chronic condition of unknown etiology characterized by chronic widespread pain that often co-exists with sleep disturbances, cognitive dysfunction and fatigue. People with fibromyalgia often report high disability levels and poor quality of life. Drug therapy, for example, with serotonin and noradrenaline reuptake inhibitors (SNRIs), focuses on reducing key symptoms and improving quality of life. This review updates and extends the 2013 version of this systematic review. To assess the efficacy, tolerability and safety of serotonin and noradrenaline reuptake inhibitors (SNRIs) compared with placebo or other active drug(s) in the treatment of fibromyalgia in adults. For this update we searched CENTRAL, MEDLINE, Embase, the US National Institutes of Health and the World Health Organization (WHO) International Clinical Trials Registry Platform for published and ongoing trials and examined the reference lists of reviewed articles, to 8 August 2017. We selected randomized, controlled trials of any formulation of SNRIs against placebo or any other active treatment of fibromyalgia in adults. Three review authors independently extracted data, examined study quality, and assessed risk of bias. For efficacy, we calculated the number needed to treat for an additional beneficial outcome (NNTB) for pain relief of 50% or greater and of 30% or greater, patient's global impression to be much or very much improved, dropout rates due to lack of efficacy, and the standardized mean differences (SMD) for fatigue, sleep problems, health-related quality of life, mean pain intensity, depression, anxiety, disability, sexual function, cognitive disturbances and tenderness. For tolerability we calculated number needed to treat for an additional harmful outcome (NNTH) for withdrawals due to adverse events and for nausea, insomnia and somnolence as specific adverse events. For safety we calculated NNTH for serious adverse events. 
We undertook meta-analysis using a random-effects model. We assessed the evidence using GRADE and created a 'Summary of findings' table. We added eight new studies with 1979 participants for a total of 18 included studies with 7903 participants. Seven studies investigated duloxetine and nine studies investigated milnacipran against placebo. One study compared desvenlafaxine with placebo and pregabalin. One study compared duloxetine with L-carnitine. The majority of studies were at unclear or high risk of bias in three to five domains. The quality of evidence of all comparisons of desvenlafaxine, duloxetine and milnacipran versus placebo in studies with a parallel design was low due to concerns about publication bias and indirectness, and very low for serious adverse events due to concerns about publication bias, imprecision and indirectness. The quality of evidence of all comparisons of duloxetine and desvenlafaxine with other active drugs was very low due to concerns about publication bias, imprecision and indirectness. Duloxetine and milnacipran had no clinically relevant benefit over placebo for pain relief of 50% or greater: 1274 of 4104 (31%) on duloxetine and milnacipran reported pain relief of 50% or greater compared to 591 of 2814 (21%) participants on placebo (risk difference (RD) 0.09, 95% confidence interval (CI) 0.07 to 0.11; NNTB 11, 95% CI 9 to 14). Duloxetine and milnacipran had a clinically relevant benefit over placebo in patient's global impression to be much or very much improved: 888 of 1710 (52%) on duloxetine and milnacipran (RD 0.19, 95% CI 0.12 to 0.26; NNTB 5, 95% CI 4 to 8) reported being much or very much improved compared to 354 of 1208 (29%) of participants on placebo. Duloxetine and milnacipran had a clinically relevant benefit compared to placebo for pain relief of 30% or greater (RD 0.10; 95% CI 0.08 to 0.12; NNTB 10, 95% CI 8 to 12). 
Duloxetine and milnacipran had no clinically relevant benefit for fatigue (SMD -0.13, 95% CI -0.18 to -0.08; NNTB 18, 95% CI 12 to 29) compared to placebo. There were no differences between either duloxetine or milnacipran and placebo in reducing sleep problems (SMD -0.07; 95% CI -0.15 to 0.01). Duloxetine and milnacipran had no clinically relevant benefit compared to placebo in improving health-related quality of life (SMD -0.20, 95% CI -0.25 to -0.15; NNTB 11, 95% CI 8 to 14). There were 794 of 4166 (19%) participants on SNRIs who dropped out due to adverse events compared to 292 of 2863 (10%) of participants on placebo (RD 0.07, 95% CI 0.04 to 0.10; NNTH 14, 95% CI 10 to 25). There was no difference in serious adverse events between either duloxetine, milnacipran or desvenlafaxine and placebo (RD -0.00, 95% CI -0.01 to 0.00). There was no difference between desvenlafaxine and placebo in efficacy, tolerability and safety in one small trial. There was no difference between duloxetine or desvenlafaxine and active comparators (L-carnitine, pregabalin) in efficacy, tolerability and safety in two trials. The update did not change the major findings of the previous review. Based on low- to very low-quality evidence, the SNRIs duloxetine and milnacipran provided no clinically relevant benefit over placebo in the frequency of pain relief of 50% or greater, but there was a clinically relevant benefit for patient's global impression to be much or very much improved and for the frequency of pain relief of 30% or greater. The SNRIs duloxetine and milnacipran provided no clinically relevant benefit over placebo in improving health-related quality of life or in reducing fatigue. Duloxetine and milnacipran did not significantly differ from placebo in reducing sleep problems. The dropout rates due to adverse events were higher for duloxetine and milnacipran than for placebo. 
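As an aside on the arithmetic behind these summaries: the NNTB and NNTH figures are simply the reciprocal of the absolute risk difference, expressed as a whole number of patients. The sketch below is illustrative only, not the review's analysis code; the review derives NNTs from the pooled risk difference at full precision, so hand calculations from two-decimal RDs can be off by one.

```python
def number_needed_to_treat(risk_difference):
    """NNT = 1 / |risk difference|, rounded to a whole number of patients.

    The same formula gives NNTB (benefit) and NNTH (harm); only the
    direction of the risk difference differs.
    """
    return round(1 / abs(risk_difference))

# RD 0.10 for pain relief of 30% or greater -> NNTB 10, as reported above.
print(number_needed_to_treat(0.10))   # -> 10
# RD 0.07 for dropouts due to adverse events -> NNTH 14, as reported above.
print(number_needed_to_treat(0.07))   # -> 14
```

In words: if treating 10 patients produces one additional responder on average, the risk difference is 0.10.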
On average, the potential benefits of duloxetine and milnacipran in fibromyalgia were outweighed by their potential harms. However, a minority of people with fibromyalgia might experience substantial symptom relief without clinically relevant adverse events with duloxetine or milnacipran.We did not find placebo-controlled studies with other SNRIs than desvenlafaxine, duloxetine and milnacipran.
Pelvic floor muscle training (PFMT) is the most commonly used physical therapy treatment for women with stress urinary incontinence (SUI). It is sometimes also recommended for mixed urinary incontinence (MUI) and, less commonly, urgency urinary incontinence (UUI). This is an update of a Cochrane Review first published in 2001 and last updated in 2014. To assess the effects of PFMT for women with urinary incontinence (UI) in comparison to no treatment, placebo or sham treatments, or other inactive control treatments; and summarise the findings of relevant economic evaluations. We searched the Cochrane Incontinence Specialised Register (searched 12 February 2018), which contains trials identified from CENTRAL, MEDLINE, MEDLINE In-Process, MEDLINE Epub Ahead of Print, ClinicalTrials.gov, WHO ICTRP, handsearching of journals and conference proceedings, and the reference lists of relevant articles. Randomised or quasi-randomised controlled trials in women with SUI, UUI or MUI (based on symptoms, signs or urodynamics). One arm of the trial included PFMT. Another arm was a no treatment, placebo, sham or other inactive control treatment arm. At least two review authors independently assessed trials for eligibility and risk of bias. We extracted and cross-checked data. A third review author resolved disagreements. We processed data as described in the Cochrane Handbook for Systematic Reviews of Interventions. We subgrouped trials by diagnosis of UI. We undertook formal meta-analysis when appropriate. The review included 31 trials (10 of which were new for this update) involving 1817 women from 14 countries. Overall, trials were of small-to-moderate size, with follow-ups generally less than 12 months and many were at moderate risk of bias. There was considerable variation in the intervention's content and duration, study populations and outcome measures. 
There was only one study of women with MUI and only one study with UUI alone, with no data on cure, cure or improvement, or number of episodes of UI for these subgroups. Symptomatic cure of UI at the end of treatment: compared with no treatment or inactive control treatments, women with SUI who were in the PFMT groups were eight times more likely to report cure (56% versus 6%; risk ratio (RR) 8.38, 95% confidence interval (CI) 3.68 to 19.07; 4 trials, 165 women; high-quality evidence). For women with any type of UI, PFMT groups were five times more likely to report cure (35% versus 6%; RR 5.34, 95% CI 2.78 to 10.26; 3 trials, 290 women; moderate-quality evidence). Symptomatic cure or improvement of UI at the end of treatment: compared with no treatment or inactive control treatments, women with SUI who were in the PFMT groups were six times more likely to report cure or improvement (74% versus 11%; RR 6.33, 95% CI 3.88 to 10.33; 3 trials, 242 women; moderate-quality evidence). For women with any type of UI, PFMT groups were two times more likely to report cure or improvement than women in the control groups (67% versus 29%; RR 2.39, 95% CI 1.64 to 3.47; 2 trials, 166 women; moderate-quality evidence). UI-specific symptoms and quality of life (QoL) at the end of treatment: compared with no treatment or inactive control treatments, women with SUI who were in the PFMT group were more likely to report significant improvement in UI symptoms (7 trials, 376 women; moderate-quality evidence), and to report significant improvement in UI QoL (6 trials, 348 women; low-quality evidence). For any type of UI, women in the PFMT group were more likely to report significant improvement in UI symptoms (1 trial, 121 women; moderate-quality evidence) and to report significant improvement in UI QoL (4 trials, 258 women; moderate-quality evidence). 
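For readers who want to check effect estimates of this kind, a risk ratio and its confidence interval come from a 2×2 table via the standard log method. The function below is a hedged illustration with made-up counts, not the review's actual analysis (Cochrane reviews pool trial-level data with standard software such as RevMan):

```python
import math

def risk_ratio_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Risk ratio with a 95% CI via the standard log method."""
    rr = (events_t / n_t) / (events_c / n_c)
    # Standard error of ln(RR) for two independent binomial samples.
    se = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Hypothetical counts: 50/100 cured with treatment vs 25/100 with control.
rr, lo, hi = risk_ratio_ci(50, 100, 25, 100)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # -> RR 2.00 (95% CI 1.35 to 2.96)
```

Because the CI is built on the log scale and back-transformed, it is asymmetric around the point estimate, as in the intervals reported above.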
Finally, for women with mixed UI treated with PFMT, there was one small trial (12 women) reporting better QoL. Leakage episodes in 24 hours at the end of treatment: PFMT reduced leakage episodes by one in women with SUI (mean difference (MD) 1.23 lower, 95% CI 1.78 lower to 0.68 lower; 7 trials, 432 women; moderate-quality evidence) and in women with all types of UI (MD 1.00 lower, 95% CI 1.37 lower to 0.64 lower; 4 trials, 349 women; moderate-quality evidence). Leakage on short clinic-based pad tests at the end of treatment: women with SUI in the PFMT groups lost significantly less urine in short (up to one hour) pad tests. The comparison showed considerable heterogeneity but the findings still favoured PFMT when using a random-effects model (MD 9.71 g lower, 95% CI 18.92 lower to 0.50 lower; 4 trials, 185 women; moderate-quality evidence). For women with all types of UI, PFMT groups also reported less urine loss on short pad tests than controls (MD 3.72 g lower, 95% CI 5.46 lower to 1.98 lower; 2 trials, 146 women; moderate-quality evidence). Women in the PFMT group were also more satisfied with treatment and their sexual outcomes were better. Adverse events were rare and, in the two trials that did report any, they were minor. The findings of the review were largely supported by the 'Summary of findings' tables, but most of the evidence was downgraded to moderate on methodological grounds. The exception was 'participant-perceived cure' in women with SUI, which was rated as high quality. Based on the data available, we can be confident that PFMT can cure or improve symptoms of SUI and all other types of UI. It may reduce the number of leakage episodes, the quantity of leakage on the short pad tests in the clinic and symptoms on UI-specific symptom questionnaires. The authors of the one economic evaluation identified for the Brief Economic Commentary reported that the cost-effectiveness of PFMT looks promising. 
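The 'random-effects model' used for the heterogeneous pad-test comparison pools study effects while allowing the true effect to vary between studies. A minimal sketch of the classic DerSimonian-Laird estimator follows; the study effects and variances are invented for illustration, and the review's own analyses were of course run with standard Cochrane software, not this helper:

```python
import math

def dersimonian_laird(effects, variances, z=1.96):
    """Random-effects pooling of per-study effects (e.g. mean differences)."""
    w = [1 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance tau^2.
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # Re-weight with tau^2 added to each study's variance.
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, pooled - z * se, pooled + z * se

# Hypothetical per-study mean differences (g of urine) and their variances.
md, lo, hi = dersimonian_laird([-12.0, -4.0, -10.0], [9.0, 4.0, 16.0])
print(f"MD {md:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

The key design point: when between-study heterogeneity (tau²) is large, every study's weight shrinks towards equality and the confidence interval widens, which is why the heterogeneous pad-test result above only just excludes no effect.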
The findings of the review suggest that PFMT could be included in first-line conservative management programmes for women with UI. The long-term effectiveness and cost-effectiveness of PFMT needs to be further researched.
Familial Mediterranean fever, a hereditary auto-inflammatory disease, mainly affects ethnic groups living in the Mediterranean region. Early studies reported colchicine as a potential drug for preventing attacks of familial Mediterranean fever. For those people who are colchicine-resistant or intolerant, drugs such as rilonacept, anakinra, canakinumab, etanercept, infliximab, thalidomide and interferon-alpha might be beneficial. This is an updated version of the review. To evaluate the efficacy and safety of interventions for reducing inflammation in people with familial Mediterranean fever. We used detailed search strategies to search the following databases: CENTRAL; MEDLINE; Embase; Chinese Biomedical Literature Database (CBM); China National Knowledge Infrastructure Database (CNKI); Wan Fang; and VIP. In addition, we also searched the clinical trials registries including ClinicalTrials.gov, the International Standard Randomized Controlled Trial Number Register, the WHO International Clinical Trials Registry Platform and the Chinese Clinical Trial Registry, as well as references listed in relevant reports. Date of last search: 21 August 2018. Randomised controlled trials (RCTs) of people diagnosed with familial Mediterranean fever, comparing active interventions (including colchicine, anakinra, rilonacept, canakinumab, etanercept, infliximab, thalidomide, interferon-alpha, ImmunoGuard™ (a herbal dietary supplement) and non-steroidal anti-inflammatory drugs) with placebo or no treatment, or comparing active drugs to each other. The authors independently selected studies, extracted data and assessed risk of bias. We pooled data to present the risk ratio or mean difference with their 95% confidence intervals. We assessed overall evidence quality according to the GRADE approach. We included nine RCTs with a total of 249 participants (aged three to 53 years); five were of cross-over and four of parallel design. 
Six studies used oral colchicine, one used oral ImmunoGuard™ and the remaining two used rilonacept or anakinra as a subcutaneous injection. The duration of each study arm ranged from one to eight months. The three studies of ImmunoGuard™, rilonacept and anakinra were generally well-designed, except for an unclear risk of detection bias in one of these. However, some inadequacy existed in the four older studies on colchicine, which had an unclear risk of selection bias, detection bias and reporting bias, and also a high risk of attrition bias and other potential bias. Neither of the two studies comparing a single to a divided dose of colchicine was adequately blinded; furthermore, one study had an unclear risk of selection bias and reporting bias, a high risk of attrition bias and other potential bias. We aimed to report on the number of participants experiencing an attack, the timing of attacks, the prevention of amyloid A amyloidosis, any adverse drug reactions and the response of a number of biochemical markers from the acute phase of an attack, but data were not available for all outcomes across all comparisons. One study (15 participants) reported a significant reduction in the number of people experiencing attacks at three months with 0.6 mg colchicine three times daily (14% versus 100%), risk ratio 0.21 (95% confidence interval 0.05 to 0.95) (low-quality evidence). A further study (22 participants) of 0.5 mg colchicine twice daily showed no significant reduction in the number of participants experiencing attacks at two months (low-quality evidence). A study of rilonacept in individuals who were colchicine-resistant or intolerant (14 participants) also showed no reduction at three months (moderate-quality evidence). 
Likewise, a study of anakinra given to colchicine-resistant people (25 participants) showed no reduction in the number of participants experiencing an attack at four months (moderate-quality evidence). Three studies reported no significant differences in duration of attacks: one comparing colchicine to placebo (15 participants) (very low-quality evidence); one comparing single-dose colchicine to divided-dose colchicine (90 participants) (moderate-quality evidence); and one comparing rilonacept to placebo (14 participants) (low-quality evidence). Three studies reported no significant differences in the number of days between attacks: two comparing colchicine to placebo (24 participants in total) (very low-quality evidence); and one comparing rilonacept to placebo (14 participants) (low-quality evidence). No study reported on the prevention of amyloid A amyloidosis. One study of colchicine reported loose stools and frequent bowel movements (very low-quality evidence) and a second reported diarrhoea (very low-quality evidence). The rilonacept study reported no significant differences in gastrointestinal symptoms, hypertension, headache, respiratory tract infections, injection site reactions and herpes, compared to placebo (low-quality evidence). The ImmunoGuard™ study observed no side effects (moderate-quality evidence). The anakinra study reported no significant differences between intervention and placebo, including injection site reaction, headache, presyncope, dyspnea and itching (moderate-quality evidence). When comparing single and divided doses of colchicine, one study reported no difference in adverse events (including anorexia, nausea, diarrhoea, abdominal pain, vomiting and elevated liver enzymes) between groups (moderate-quality evidence) and the second study reported no adverse effects were detected. The rilonacept study reported no significant reduction in acute phase response indicators after three months (low-quality evidence). 
In the ImmunoGuard™ study, these indicators were not reduced after one month of treatment (moderate-quality evidence). The anakinra study reported that C-reactive protein was significantly reduced after four months (moderate-quality evidence). One of the single dose versus divided dose colchicine studies reported no significant reduction in acute phase response indicators after eight months (low-quality evidence), while the second study reported no significant reduction in serum amyloid A concentration after six months (moderate-quality evidence). There were only a limited number of RCTs assessing interventions for people with familial Mediterranean fever. Based on the evidence, three times daily colchicine appears to reduce the number of people experiencing attacks, single and divided doses of colchicine might not differ for children with familial Mediterranean fever, and anakinra might reduce C-reactive protein in colchicine-resistant participants; however, only a few RCTs contributed data for analysis. Further RCTs examining active interventions, not only colchicine, are necessary before a comprehensive conclusion regarding the efficacy and safety of interventions for reducing inflammation in familial Mediterranean fever can be drawn.
Objective: To explore the clinical efficacy of lobulated transplantation of free anterolateral thigh perforator flaps in repairing electric burn wounds of the limbs. Methods: From August 2014 to April 2019, 19 patients with electric burns of the limbs were hospitalized in our unit, including 18 males and 1 female, aged 20-58 years. There were 37 wounds deep to the bone. The area of the wounds ranged from 3.0 cm×2.0 cm to 40.0 cm×8.0 cm. A multiple-perforator-based anterolateral thigh flap was designed and resected. The flap was then lobulated, taking the respective perforators of the lateral circumflex femoral artery as the axial vessels, before being transplanted to the debrided wounds of the limbs. The blood vessel trunk or the perforator vessels of the flap lobes were anastomosed with the respective vessels in the recipient sites. When repairing multiple wounds in one surgical procedure, the wounds were repaired with the respective lobes of the flap; when repairing a single irregular wound in one procedure, the lobes were spliced or staggered to cover the wound and fit its shape. For limbs with a distal blood supply disorder, the blood supply branch of the flap was used to reconstruct the blood supply; if necessary, an appropriate length of vein was harvested for transplantation. The improvement of the reconstructed blood supply was observed. The number of surgeries, the number of anterolateral thigh perforator flaps, the number and size of flap lobes, the number of anastomosed vessels in each surgery, the treatment of the donor sites, the length of each surgery, the postoperative complications and the survival of the flap lobes were recorded. Upper extremity function was evaluated with the Carroll Upper Extremity Function Test Scale, and the patients' satisfaction with the therapeutic effect of each surgery was assessed with a 5-point Likert scale during follow-up. 
Surgeries were divided into a single wound group (repairing one wound at a time) and a multiple wounds group (repairing two or more wounds at a time). The number of anastomosed vessels in each surgery, the treatment of the donor sites, the length of each surgery, and the postoperative survival of the flap lobes were compared between the two groups. Surgeries were also divided into an early group (surgery within post-burn day 7) and a late group (surgery on post-burn day 7 or beyond). The postoperative complications and survival of flap lobes, the upper limb function evaluation score, and the patients' satisfaction with the therapeutic effect of each surgery at the last follow-up were compared between the two groups. Data were processed with the independent sample t test, Mann-Whitney U test, or Fisher's exact probability test. Results: The blood supply of 5 patients with a distal hand or finger blood supply disorder recovered or improved significantly after vascular transplantation. A total of 46 lobes ((2.2±0.4) lobes per flap) were obtained from 21 anterolateral thigh perforator flaps in 19 patients over 21 surgeries. The area of the flap lobes ranged from 4.0 cm×3.0 cm to 24.0 cm×13.0 cm. In each surgery, 2.0 (1.5, 3.0) arteries and 3.0 (2.0, 3.0) veins were anastomosed. Six donor sites were repaired with thin split-thickness scalp grafts, and 15 donor sites were closed directly. The duration of each surgery was (8.9±1.7) h. After surgery, bleeding and hematoma occurred in 2 flap lobes and local infection occurred in 5 flap lobes, which improved after management. Vascular crisis occurred in 4 flap lobes, and exploratory surgeries were performed, after which 2 lobes survived, while the other 2 lobes necrotized and were repaired by other methods. The remaining flap lobes survived well. Over postoperative follow-up of 3 to 60 months, the flap-covered areas of the limbs recovered well. 
At the last follow-up, the function evaluation score of 20 affected upper limbs was 85 (63, 90) points, and the score of patients' satisfaction degree with the therapeutic effect of each surgery was (4.4±0.7) points. A total of 30 flap lobes were obtained in 14 surgeries and repaired 30 wounds respectively in multiple wounds group, and 16 flap lobes were obtained in 7 surgeries and were spliced to repair 7 large irregular wounds in single wound group. There were no statistically significant differences in the number of anastomosed artery or vein in each surgery, and the duration of each surgery between multiple wounds group and single wound group (<iZ</i=0.240, 0.081, <it</i=0.180, <iP</i&gt;0.05), and the condition of skin grafting in the donor sites and the postoperative survival of the flap lobes in multiple wounds group were similar to those in single wound group (<iP</i&gt;0.05). A total of 22 flap lobes were obtained in 10 surgeries and repaired 18 wounds in early group, and 24 flap lobes were obtained in 11 surgeries and repaired 19 wounds in late group. The incidence of postoperative hematoma, infection, vascular crisis, and survival of flap lobes in early group were similar to those in late group (<iP</i&gt;0.05). There were no statistically significant differences in the patients' satisfaction degree with the therapeutic effect of each surgery at the last follow-up between early group and late group (<it</i=0.701, <iP</i&gt;0.05). At the last follow-up, the function evaluation score of 9 upper limbs in early group was 90 (85, 97) points, significantly higher than 80 (40, 85) points of 11 upper limbs in late group (<iZ</i=2.431, <iP</i&lt;0.05). <bConclusions:</b Free lobulated anterolateral thigh perforator flap is suitable for simultaneous repair of multiple electric burn wounds of limbs, as well as the repair of a single large irregular wound. It has the clinical advantages of less damage to the donor site and good repair quality. 
The early flap transplantation is beneficial to improve the function of limbs with electric burns.
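The Methods above state that data were processed with the independent-sample <i>t</i> test, Mann-Whitney <i>U</i> test, or Fisher's exact probability test. As a minimal illustration of the last of these, the following pure-Python sketch computes a two-sided Fisher's exact p-value for a 2×2 table; the counts are hypothetical, not taken from the study.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    total = comb(n, row1)  # number of equally likely arrangements

    def p_table(x):
        # P(upper-left cell = x) under fixed margins (hypergeometric)
        return comb(col1, x) * comb(n - col1, row1 - x) / total

    p_obs = p_table(a)
    lo = max(0, row1 - (n - col1))  # smallest feasible upper-left cell
    hi = min(col1, row1)            # largest feasible upper-left cell
    # two-sided p: all tables at least as extreme as the observed one
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# Hypothetical counts (illustrative only): 8/10 vs 1/6 lobes surviving
p = fisher_exact_2x2(8, 2, 1, 5)
```

In practice `scipy.stats.fisher_exact` implements the same test (and also returns the odds ratio), and is the usual choice for real analyses.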
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
<b>Objective:</b> To observe the effects of pyrroloquinoline quinone (PQQ) on the mitochondrial function and cell survival of rat bone marrow mesenchymal stem cells (BMSCs) under oxidative stress, and to explore its mechanism. <b>Methods:</b> Rat BMSCs were cultured in vitro in Dulbecco's modified Eagle medium/F12 medium containing fetal bovine serum in the volume fraction of 10% (hereinafter referred to as normal medium). Rat BMSCs of the third to fifth passages in logarithmic growth phase were selected for the following experiments. (1) The cells were divided into normal control group, normal control+ PQQ group, hydrogen peroxide (H(2)O(2)) alone group, and H(2)O(2)+ PQQ group. The cells in normal control group were cultured in normal medium for 24 hours; the cells in normal control+ PQQ group were cultured in normal medium containing 100 μmol/L PQQ for 24 hours; the cells in H(2)O(2) alone group were cultured in normal medium containing 200 μmol/L H(2)O(2) for 24 hours; the cells in H(2)O(2)+ PQQ group were pre-incubated with normal medium containing 100 μmol/L PQQ for 2 hours, after which H(2)O(2) was added to a concentration of 200 μmol/L and the cells were cultured for 24 hours. The cell morphology of each group was observed under the inverted phase contrast microscope, and the cell survival rate was detected by the cell counting kit 8 method. (2) Five batches of cells were collected, and the cells of each batch were divided into normal control group, H(2)O(2) alone group, and H(2)O(2)+ PQQ group. The cells in each group received the same treatment as that in the corresponding group of experiment (1). After 24 hours of culture, one batch of cells was collected for apoptosis detection by flow cytometry, and the apoptosis rate was calculated.
One batch of cells was subjected to mitochondrial membrane potential assay and JC-1 fluorescent staining observation using the JC-1 mitochondrial membrane potential detection kit and the inverted phase contrast fluorescence microscope, respectively. One batch of cells was collected for mitochondrial morphology observation under the transmission electron microscope. One batch of cells was subjected to catalase (CAT) and superoxide dismutase (SOD) activity assays using the CAT activity assay kit and the SOD activity assay kit, respectively. One batch of cells was subjected to Western blotting for determination of the protein levels of Epac1, adenosine monophosphate-activated protein kinase (AMPK), phosphorylated AMPK, cysteinyl aspartate-specific proteinase 3 (caspase-3), and cleaved caspase-3, from which the phosphorylation level of AMPK and the cleaved caspase-3/caspase-3 ratio were calculated. Six replicates were measured in each group for each index except for morphological observation. Data were statistically analyzed with one-way analysis of variance and the independent-sample equal-variance <i>t</i> test. <b>Results:</b> (1) After 24 hours of culture, compared with those in normal control group (the cell survival rate was set to 100.0%), there was an increase in cell vacuoles and a decrease in cell number in H(2)O(2) alone group, and the cell survival rate was significantly reduced to (74.3±2.9)% (<i>t</i>=6.39, <i>P</i>&lt;0.01). Compared with those in H(2)O(2) alone group, the cell morphology of H(2)O(2)+ PQQ group was significantly improved, and the cell survival rate was significantly increased to (116.9±4.2)% (<i>t</i>=6.92, <i>P</i>&lt;0.01); the cell survival rate in normal control+ PQQ group was (101.2±1.1)%, close to that of normal control group (<i>t</i>=1.06, <i>P</i>&gt;0.05). (2) After 24 hours of culture, compared with (13.6±1.0)% in normal control group, the apoptosis rate of cells in H(2)O(2) alone group was significantly increased to (37.1±2.0)% (<i>t</i>=10.57, <i>P</i>&lt;0.01).
Compared with that in H(2)O(2) alone group, the apoptosis rate of cells in H(2)O(2)+ PQQ group was significantly reduced to (17.0±0.7)% (<i>t</i>=9.49, <i>P</i>&lt;0.01). (3) After 24 hours of culture, compared with those in normal control group, the mitochondrial membrane potential of cells in H(2)O(2) alone group was depolarized, the JC-1 fluorescent dye existed mainly in the cytoplasm in the form of monomers emitting green fluorescence, and a significant decrease in mitochondrial membrane potential was shown (<i>t</i>=4.18, <i>P</i>&lt;0.01). Compared with those in H(2)O(2) alone group, the mitochondrial membrane potential of cells in H(2)O(2)+ PQQ group was restored to the normal level (<i>t</i>=4.43, <i>P</i>&lt;0.01), and the JC-1 fluorescent dye accumulated in the mitochondria following the polarized mitochondrial membrane potential and emitted red fluorescence. (4) After 24 hours of culture, compared with that in normal control group, the mitochondrial structure of cells in H(2)O(2) alone group was disordered, with disappeared mitochondrial cristae and decreased mitochondrial matrix density. Compared with that in H(2)O(2) alone group, the mitochondrial structure of cells in H(2)O(2)+ PQQ group was regular and intact, with clearly visible mitochondrial cristae and increased mitochondrial matrix density. (5) After 24 hours of culture, compared with those in normal control group, the CAT activity of cells in H(2)O(2) alone group was significantly increased (<i>t</i>=4.54, <i>P</i>&lt;0.05), and the SOD activity was significantly decreased (<i>t</i>=3.93, <i>P</i>&lt;0.05). Compared with those in H(2)O(2) alone group, the CAT activity of cells in H(2)O(2)+ PQQ group was markedly increased (<i>t</i>=8.65, <i>P</i>&lt;0.01), while there was no significant change in the SOD activity (<i>t</i>=0.72, <i>P</i>&gt;0.05).
(6) After 24 hours of culture, compared with those in normal control group, the protein expression of Epac1 in cells of H(2)O(2) alone group was significantly decreased (<i>t</i>=4.67, <i>P</i>&lt;0.01), while the AMPK phosphorylation level and the cleaved caspase-3/caspase-3 ratio were significantly increased (<i>t</i>=7.88, 3.62, <i>P</i>&lt;0.01). Compared with those in H(2)O(2) alone group, the protein expression of Epac1 and the AMPK phosphorylation level of cells in H(2)O(2)+ PQQ group were both significantly increased (<i>t</i>=4.34, 16.37, <i>P</i>&lt;0.01), while the cleaved caspase-3/caspase-3 ratio was significantly reduced (<i>t</i>=3.17, <i>P</i>&lt;0.05). <b>Conclusions:</b> Pretreatment with PQQ can improve mitochondrial function, reduce the apoptosis rate, and enhance the survival rate of rat BMSCs under oxidative stress, which may be related to the up-regulation of Epac1 protein expression, activation of the AMPK signaling pathway, and down-regulation of the cleaved caspase-3 protein level.
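The comparisons above are reported as independent-sample <i>t</i> tests on six replicates per group, with values given as mean ± standard deviation. The following pure-Python sketch shows how a t statistic can be computed from such group summaries (Welch's form); the numbers are hypothetical and the computation is a generic illustration, not the study's own analysis.

```python
from math import sqrt

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and approximate degrees of freedom,
    computed from group summaries (mean, SD, sample size)."""
    se1, se2 = s1 ** 2 / n1, s2 ** 2 / n2  # squared standard errors
    t = (m1 - m2) / sqrt(se1 + se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (se1 + se2) ** 2 / (se1 ** 2 / (n1 - 1) + se2 ** 2 / (n2 - 1))
    return t, df

# Hypothetical summaries: 10±2 vs 7±3 with n=6 replicates per group
t, df = welch_t(10.0, 2.0, 6, 7.0, 3.0, 6)
```

With the statistic and degrees of freedom in hand, the P value follows from the t distribution (e.g. `scipy.stats.t.sf`). Note the abstract's own tests assume equal variances, in which case a pooled-variance form of the statistic is used instead.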
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Compression hosiery or stockings are often the first line of treatment for varicose veins in people without either healed or active venous ulceration. Evidence is required to determine whether the use of compression stockings can effectively manage and treat varicose veins in the early stages. This is the second update of a review first published in 2011. To assess the effectiveness of compression stockings as the sole and initial treatment of varicose veins in people without healed or active venous ulceration. For this update, the Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase, CINAHL, and AMED databases and the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov trials registers to 12 May 2020. We also checked the references of studies identified from the literature searches. We included randomised controlled trials (RCTs) involving people diagnosed with primary trunk varicose veins without healed or active venous ulceration (Clinical, Etiology, Anatomy, Pathophysiology (CEAP) classification C2 to C4). Included trials assessed compression stockings versus no treatment or placebo stockings, or compression stockings plus drug intervention versus drug intervention alone. We also included trials comparing different lengths and pressures of stockings. We excluded trials involving other types of treatment for varicose veins (either as a comparator to stockings or as an initial non-randomised treatment), including sclerotherapy and surgery. We followed standard Cochrane methodology. Two review authors independently assessed trials for inclusion, extracted data, assessed risk of bias and assessed the certainty of the evidence using GRADE. Outcomes of interest were change in symptoms; physiological measures; complications; compliance; comfort, tolerance and acceptability of wearing stockings; and quality of life.
We included 13 studies with 1021 participants with varicose veins without healed or active venous ulceration. One study included pregnant women while other studies included participants who had sought medical intervention for their varicose veins by being on surgical waiting lists, or attending vascular surgery or dermatology clinics or outpatient departments. The stockings used in the studies exerted different levels of pressure, ranging from 10 mmHg to 50 mmHg. Five studies assessed compression stockings versus no compression stockings or placebo stockings. Three of these studies used knee-length stockings, one used full-length stockings and one used full tights. Eight studies compared different types or pressures of knee-length stockings. The risk of bias of many included trials was unclear, mainly because of inadequate reporting. We were unable to pool studies as they did not report the same outcomes or used different ways to assess them. Many studies were small and there were differences in the populations studied. The certainty of the evidence was therefore low to very low. Compression stockings compared with no treatment or placebo stockings All four studies that reported change in symptoms found a subjective improvement by the end of the study. However, change in symptoms was not always analysed by comparing the randomised arms of the studies and was therefore subject to bias. Two studies assessed physiological measures using either ankle circumference or duplex sonography to measure oedema. Ankle circumference showed no clear difference between baseline and follow-up while oedema was reduced in the stocking group compared with the placebo stocking group. Three studies reported complications or side effects with itching and irritation the main side effects reported. None of the trials reported severe side effects. Reports of compliance varied between studies. 
One study reported a high dropout rate with low levels of compliance due to discomfort, application and appearance; two studies reported generally good levels of compliance in the stocking group compared to placebo/no treatment. Two studies reported comfort, tolerance and acceptability with outcomes affected by the study population. Compression tights were increasingly rejected by pregnant women as their pregnancy progressed, while in one study of non-pregnant women, the stockings group showed no more hindrance of normal activities and daytime discomfort when compared with placebo stockings. One study reported quality of life showing no clear differences between the stocking and placebo stocking groups. Compression stockings compared with different compression stockings All five studies that reported change in symptoms found a subjective improvement in symptoms by the end of the study. Change in symptoms was not always analysed comparing the randomised arms of the trials and was therefore subject to bias. Five studies reported a variety of physiological measures such as foot volumetry, volume reduction and change in diameter. Generally, there were no clear differences between study arms. Four studies reported complications or side effects, including sweating, itching, skin dryness, and constriction and tightness. None of the trials reported severe side effects. Two studies reported compliance showing no difference in compliance rates between stockings groups, although one study reported high initial levels of dropout due to discomfort, appearance, non-effectiveness and irritation. Four studies reported comfort, tolerance and acceptability. Two studies reported similar levels of tolerance and discomfort between groups. Discomfort was the main reason for indicating a preference for one type of stocking over another. None of the studies assessed quality of life. 
No conclusions regarding the optimum length or pressure of compression stockings could be made as there were no conclusive results from the included studies. There is insufficient high-certainty evidence to determine whether or not compression stockings are effective as the sole and initial treatment of varicose veins in people without healed or active venous ulceration, or whether any type of stocking is superior to any other type. Future research should consist of large RCTs of participants with trunk varices either wearing or not wearing compression stockings to assess the efficacy of this intervention. If compression stockings are found to be beneficial, further studies assessing which length and pressure is the most efficacious could then take place.
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Childhood vaccination is one of the most effective ways to prevent serious illnesses and deaths in children. However, worldwide, many children do not receive all recommended vaccinations, for several potential reasons. Vaccines might be unavailable, or parents may experience difficulties in accessing vaccination services; for instance, because of poor quality health services, distance from a health facility, or lack of money. Some parents may not accept available vaccines and vaccination services. Our understanding of what influences parents' views and practices around childhood vaccination, and why some parents may not accept vaccines for their children, is still limited. This synthesis links to Cochrane Reviews of the effectiveness of interventions to improve coverage or uptake of childhood vaccination. - Explore parents' and informal caregivers' views and practices regarding routine childhood vaccination, and the factors influencing acceptance, hesitancy, or nonacceptance of routine childhood vaccination. - Develop a conceptual understanding of what and how different factors reduce parental acceptance of routine childhood vaccination. - Explore how the findings of this review can enhance our understanding of the related Cochrane Reviews of intervention effectiveness. We searched MEDLINE, Embase, CINAHL, and three other databases for eligible studies from 1974 to June 2020. We included studies that: utilised qualitative methods for data collection and analysis; focused on parents' or caregivers' views, practices, acceptance, hesitancy, or refusal of routine vaccination for children aged up to six years; and were from any setting globally where childhood vaccination is provided. We used a pre-specified sampling frame to sample from eligible studies, aiming to capture studies that were conceptually rich, relevant to the review's phenomenon of interest, from diverse geographical settings, and from a range of income-level settings. 
We extracted contextual and methodological data from each sampled study. We used a meta-ethnographic approach to analyse and synthesise the evidence. We assessed methodological limitations using a list of criteria used in previous Cochrane Reviews and originally based on the Critical Appraisal Skills Programme quality assessment tool for qualitative studies. We used the GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative research) approach to assess our confidence in each finding. We integrated the findings of this review with those from relevant Cochrane Reviews of intervention effectiveness. We did this by mapping whether the underlying theories or components of trial interventions included in those reviews related to or targeted the overarching factors influencing parental views and practices regarding routine childhood vaccination identified by this review. We included 145 studies in the review and sampled 27 of these for our analysis. Six studies were conducted in Africa, seven in the Americas, four in South-East Asia, nine in Europe, and one in the Western Pacific. Studies included urban and rural settings, and high-, middle-, and low-income settings. Many complex factors were found to influence parents' vaccination views and practices, which we divided into four themes. Firstly, parents' vaccination ideas and practices may be influenced by their broader ideas and practices surrounding health and illness generally, and specifically with regards to their children, and their perceptions of the role of vaccination within this context. Secondly, many parents' vaccination ideas and practices were influenced by the vaccination ideas and practices of the people they mix with socially. At the same time, shared vaccination ideas and practices helped some parents establish social relationships, which in turn strengthened their views and practices around vaccination. 
Thirdly, parents' vaccination ideas and practices may be influenced by wider political issues and concerns, and particularly their trust (or distrust) in those associated with vaccination programmes. Finally, parents' vaccination ideas and practices may be influenced by their access to and experiences of vaccination services and their frontline healthcare workers. We developed two concepts for understanding possible pathways to reduced acceptance of childhood vaccination. The first concept, 'neoliberal logic', suggests that many parents, particularly from high-income countries, understood health and healthcare decisions as matters of individual risk, choice, and responsibility. Some parents experienced this understanding as in conflict with vaccination programmes, which emphasise generalised risk and population health. This perceived conflict led some parents to be less accepting of vaccination for their children. The second concept, 'social exclusion', suggests that some parents, particularly from low- and middle-income countries, were less accepting of childhood vaccination due to their experiences of social exclusion. Social exclusion may damage trustful relationships between government and the public, generate feelings of isolation and resentment, and give rise to demotivation in the face of public services that are poor quality and difficult to access. These factors in turn led some parents who were socially excluded to distrust vaccination, to refuse vaccination as a form of resistance or a way to bring about change, or to avoid vaccination due to the time, costs, and distress it creates. Many of the overarching factors our review identified as influencing parents' vaccination views and practices were underrepresented in the interventions tested in the four related Cochrane Reviews of intervention effectiveness. 
Our review has revealed that parents' views and practices regarding childhood vaccination are complex and dynamic social processes that reflect multiple webs of influence, meaning, and logic. We have provided a theorised understanding of the social processes contributing to vaccination acceptance (or not), thereby complementing but also extending more individualistic models of vaccination acceptance. Successful development of interventions to promote acceptance and uptake of childhood vaccination will require an understanding of, and then tailoring to, the specific factors influencing vaccination views and practices of the group(s) in the target setting. The themes and concepts developed through our review could serve as a basis for gaining this understanding, and subsequent development of interventions that are potentially more aligned with the norms, expectations, and concerns of target users.
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Trial monitoring is an important component of good clinical practice to ensure the safety and rights of study participants, confidentiality of personal information, and quality of data. However, the effectiveness of various existing monitoring approaches is unclear. Information to guide the choice of monitoring methods in clinical intervention studies may help trialists, support units, and monitors to effectively adjust their approaches to current knowledge and evidence. To evaluate the advantages and disadvantages of different monitoring strategies (including risk-based strategies and others) for clinical intervention studies examined in prospective comparative studies of monitoring interventions. We systematically searched CENTRAL, PubMed, and Embase via Ovid for relevant published literature up to March 2021. We searched the online 'Studies within A Trial' (SWAT) repository, grey literature, and trial registries for ongoing or unpublished studies. We included randomized or non-randomized prospective, empirical evaluation studies of different monitoring strategies in one or more clinical intervention studies. We applied no restrictions for language or date of publication. We extracted data on the evaluated monitoring methods, countries involved, study population, study setting, randomization method, and numbers and proportions in each intervention group. Our primary outcome was critical and major monitoring findings in prospective intervention studies. Monitoring findings were classified according to different error domains (e.g. major eligibility violations) and the primary outcome measure was a composite of these domains. Secondary outcomes were individual error domains, participant recruitment and follow-up, and resource use. If we identified more than one study for a comparison and outcome definitions were similar across identified studies, we quantitatively summarized effects in a meta-analysis using a random-effects model. 
Otherwise, we qualitatively summarized the results of eligible studies stratified by different comparisons of monitoring strategies. We used the GRADE approach to assess the certainty of the evidence for different groups of comparisons. We identified eight eligible studies, which we grouped into five comparisons. 1. Risk-based versus extensive on-site monitoring: based on two large studies, we found moderate certainty of evidence for the combined primary outcome of major or critical findings that risk-based monitoring is not inferior to extensive on-site monitoring. Although the risk ratio was close to 'no difference' (1.03 with a 95% confidence interval [CI] of 0.81 to 1.33; values below 1.0 favor the risk-based strategy), the high imprecision in one study and the small number of eligible studies resulted in a wide CI of the summary estimate. Low certainty of evidence suggested that monitoring strategies with extensive on-site monitoring were associated with considerably higher resource use and costs (up to a factor of 3.4). Data on recruitment or retention of trial participants were not available. 2. Central monitoring with triggered on-site visits versus regular on-site visits: combining the results of two eligible studies yielded low certainty of evidence with a risk ratio of 1.83 (95% CI 0.51 to 6.55) in favor of the triggered monitoring intervention. Data on recruitment, retention, and resource use were not available. 3. Central statistical monitoring and local monitoring performed by site staff with annual on-site visits versus central statistical monitoring and local monitoring only: based on one study, there was moderate certainty of evidence that a small number of major and critical findings were missed with the central monitoring approach without on-site visits: 3.8% of participants in the group without on-site visits and 6.4% in the group with on-site visits had a major or critical monitoring finding (odds ratio 1.7, 95% CI 1.1 to 2.7; P = 0.03).
The absolute number of monitoring findings was very low, probably because defined major and critical findings were very study specific and central monitoring was present in both intervention groups. Very low certainty of evidence did not suggest a relevant effect on participant retention, and very low certainty evidence indicated an extra cost for on-site visits of USD 2,035,392. There were no data on recruitment. 4. Traditional 100% source data verification (SDV) versus targeted or remote SDV: the two studies assessing targeted and remote SDV reported findings only related to source documents. Compared to the final database obtained using the full SDV monitoring process, only a small proportion of remaining errors on overall data were identified using the targeted SDV process in the MONITORING study (absolute difference 1.47%, 95% CI 1.41% to 1.53%). Targeted SDV was effective in the verification of source documents, but increased the workload on data management. The other included study was a pilot study, which compared traditional on-site SDV versus remote SDV and found little difference in monitoring findings and the ability to locate data values despite marked differences in remote access in two clinical trial networks. There were no data on recruitment or retention. 5. Systematic on-site initiation visit versus on-site initiation visit upon request: very low certainty of evidence suggested no difference in retention and recruitment between the two approaches. There were no data on critical and major findings or on resource use. The evidence base is limited in terms of quantity and quality. Ideally, for each of the five identified comparisons, more prospective, comparative monitoring studies nested in clinical trials and measuring effects on all outcomes specified in this review are necessary to draw more reliable conclusions. However, the results suggesting risk-based, targeted, and mainly central monitoring as an efficient strategy are promising. 
The development of reliable triggers for on-site visits is ongoing; different triggers might be used in different settings. More evidence on risk indicators that identify sites with problems or the prognostic value of triggers is needed to further optimize central monitoring strategies. In particular, approaches with an initial assessment of trial-specific risks that need to be closely monitored centrally during trial conduct with triggered on-site visits should be evaluated in future research.
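Several of the effect estimates above are risk ratios or odds ratios with 95% confidence intervals (e.g. RR 1.03, 95% CI 0.81 to 1.33; OR 1.7, 95% CI 1.1 to 2.7). A minimal sketch of the standard Wald-type computation from a single 2×2 table follows; the counts are hypothetical and the helper functions are illustrative, not code from the review.

```python
from math import log, exp, sqrt

Z95 = 1.959964  # standard normal quantile for a 95% confidence interval

def risk_ratio_ci(a, b, c, d):
    """Risk ratio and 95% Wald CI for event rates a/(a+b) vs c/(c+d)."""
    rr = (a / (a + b)) / (c / (c + d))
    se = sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))  # SE of log(RR)
    return rr, exp(log(rr) - Z95 * se), exp(log(rr) + Z95 * se)

def odds_ratio_ci(a, b, c, d):
    """Odds ratio and 95% Wald CI for the 2x2 table [[a, b], [c, d]]."""
    orr = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    return orr, exp(log(orr) - Z95 * se), exp(log(orr) + Z95 * se)

# Hypothetical counts: 10/100 monitoring findings vs 5/100 findings
rr, lo, hi = risk_ratio_ci(10, 90, 5, 95)
```

A random-effects meta-analysis, as used in this review, pools such log risk ratios across studies with inverse-variance weights that also include a between-study variance component.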
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Endometriosis (EMS) is an estrogen-dependent disease, which easily recurs after operation. Gonadotropin-releasing hormone agonist (GnRH-a), an estrogen-inhibiting drug, can effectively inhibit the secretion of gonadotropin by pituitary gland, so as to significantly decrease the ovarian hormone level and facilitate the atrophy of ectopic endometrium, playing a positive role in preventing postoperative recurrence. The application of GnRH-a can lead to the secondary low estrogen symptoms, namely the perimenopausal symptoms, and is a main reason for patients to give up further treatment. The add-back therapy based on sex hormones can well address the perimenopausal symptoms, but long-term use of hormones may cause the recurrence of EMS, as well as liver function damage, venous embolism, breast cancer and other risks, which has long been a heated topic in the industry. Therefore, it is necessary to find effective and safe anti-additive drugs soon. Studies at home and abroad show that, as a plant extract, isopropanolic extract of cimicifuga racemosa (ICR) can well relieve the perimenopausal symptoms caused by natural menopause. Some studies have preliminarily confirmed that black cohosh preparations can antagonize perimenopausal symptoms of EMS patients treated with GnRH-a after operation. To establish a rat model of perimenopausal symptoms induced by GnRH-a injection, for the purposes of laying a foundation for further research and preliminarily exploring the effect of black cohosh preparations on reproductive endocrine of the rat model. The rat model of perimenopausal symptoms was established by GnRH-a injection, and normal saline (NS injection) was used as the control. The rats were randomly divided into four groups according to different modeling methods and drug intervention schemes. 
GnRH-a injection + normal saline intervention group (GnRH-a + NS), normal saline injection control + normal saline intervention group (NS + NS), GnRH-a injection + estradiol intervention group (GnRH-a + E2), and GnRH-a injection + black cohosh preparations intervention group (GnRH-a + ICR). After modelling was assessed to be successful with the vaginal smear method, the corresponding drugs were given for intervention for 28d. In the process of rat modeling and drug intervention, the skin temperature and anus temperature of the rat tails were measured every other day, the body weights of the rats were measured every other day, and the dosage was adjusted according to the body weight. After the intervention was over, the serum sex hormone level, the uterine weight, the uterine index, and the endometrial histomorphology changes, as well as the ovarian weight, the ovarian index, and the morphological changes of ovarian tissues of each group were measured. (1) The vaginal cell smears of the control group (NS + NS) showed estrous cycle changes, while other model rats had no estrous cycle of vaginal cells. (2) The body weight gains of the GnRH-a + NS, GnRH-a + E2 and GnRH-a + ICR groups were significantly higher than that of the NS + NS control group. The intervention with E2 and ICR could delay the weight gain trend of rats induced by GnRH-A. (3) After GnRH-a injection, the temperature of the tail and anus of rats showed an overall upward trend, and the intervention with E2 and ICR could effectively improve such temperature change. (4) The E2, FSH, and LH levels in the GnRH-a + NS, GnRH-a + E2, and GnRH-a + ICR groups were significantly lower than those in the NS + NS group (P &lt; 0.01). The E2 level was significantly higher and the LH level was significantly lower in the GnRH-a + E2 group than those in the GnRH-a + NS and GnRH-a + ICR groups (P &lt; 0.05). 
Compared with the GnRH-a + NS and GnRH-a + ICR groups, the FSH level in the GnRH-a + E2 group showed a slight downward trend, but the difference was not statistically significant (P > 0.05). There was no significant difference in sex hormone levels between the GnRH-a + NS and GnRH-a + ICR groups (P > 0.05). (5) Compared with the NS + NS group, the uterine weight and uterine index of the GnRH-a + NS, GnRH-a + E2 and GnRH-a + ICR groups decreased significantly (P < 0.01). Between groups, the uterine weight and uterine index in the GnRH-a + NS and GnRH-a + ICR groups were significantly lower than in the GnRH-a + E2 group (P < 0.01), whereas there was no statistically significant difference between the GnRH-a + NS and GnRH-a + ICR groups (P > 0.05). (6) Compared with the NS + NS group, the ovarian weight and ovarian index of the GnRH-a + NS, GnRH-a + E2 and GnRH-a + ICR groups decreased significantly (P < 0.01); there was no statistically significant difference among the GnRH-a + E2, GnRH-a + NS and GnRH-a + ICR groups (P > 0.05). (7) Compared with the NS + NS group, the number of primordial follicles increased significantly, while the numbers of growing and mature follicles decreased significantly, in the GnRH-a + NS, GnRH-a + E2, and GnRH-a + ICR groups (P < 0.01); there was no statistically significant difference in the total number of follicles among the four groups (P > 0.05). GnRH-a injection achieved the desired effect: the animal model showed a significant decrease in E2, FSH, and LH levels and a rise in body surface temperature resembling perimenopausal hot flashes. Intervention with E2 or ICR effectively relieved these "perimenopausal symptoms", and ICR had no obvious effect on serum sex hormone levels in rats.
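The uterine and ovarian indices reported above are conventionally an organ weight normalized to body weight. As a minimal sketch of that arithmetic (the exact normalization convention used in the study is not stated, and the sample values below are hypothetical, not the study's data):

```python
def organ_index(organ_weight_mg: float, body_weight_g: float) -> float:
    """Organ index: organ weight (mg) per 100 g of body weight.

    A common convention in rodent studies; assumed here, since the
    abstract does not spell out its normalization.
    """
    return organ_weight_mg / body_weight_g * 100.0

# Hypothetical rat: 350 mg uterus, 250 g body weight.
print(round(organ_index(350.0, 250.0), 1))  # 140.0
```

Because the index divides out body weight, it lets uterine or ovarian atrophy be compared across groups whose body weights diverged during the intervention.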
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
The Committee on Operating Room Safety of the Japan Society of Anesthesiologists (JSA) annually sends confidential questionnaires on perioperative mortality and morbidity (cardiac arrest, severe hypotension, severe hypoxia) to JSA Certified Training Hospitals. This report examines perioperative mortality and morbidity in 2000 with special reference to anesthetic methods. Five hundred and twenty hospitals reported perioperative mortality and morbidity by anesthetic method, and the total number of reported cases was 910,007. The percentage of cases reported under each anesthetic method was as follows: inhalation anesthesia 45.47%, total intravenous anesthesia (TIVA) 6.15%, inhalation anesthesia + epidural or spinal or conduction block 24.48%, TIVA + epidural or spinal or conduction block 6.33%, spinal with continuous epidural block (CSEA) 3.67%, epidural anesthesia 1.92%, spinal anesthesia 10%, conduction block 0.47% and others 1.49%. The incidence of cardiac arrest per 10,000 cases due to all etiologies (anesthetic management, preoperative complications, intraoperative complications, surgery, others) is estimated to be 6.55 cases on average; 5.36 cases in inhalation anesthesia, 30.72 cases in total intravenous anesthesia (TIVA), 4.62 cases in inhalation anesthesia + epidural or spinal or conduction block, 2.6 cases in TIVA + epidural or spinal or conduction block, 1.2 cases in spinal with continuous epidural block (CSEA), 0.57 cases in epidural anesthesia, 1.65 cases in spinal anesthesia, 2.36 cases in conduction block and 46.38 cases in other methods.
However, the incidence of cardiac arrest per 10,000 cases totally attributable to anesthetic management is estimated to be 0.54 cases on average; 0.34 cases in inhalation anesthesia, 1.07 cases in TIVA, 0.58 cases in inhalation anesthesia + epidural or spinal or conduction block, 0.17 cases in TIVA + epidural or spinal or conduction block, 0.9 cases in CSEA, 0.57 cases in epidural anesthesia, 0.99 cases in spinal anesthesia, zero cases in conduction block and 1.47 cases in other methods. The incidence of severe hypotension per 10,000 cases due to all etiologies is estimated to be 11.14 cases on average; 11.31 cases in inhalation anesthesia, 36.61 cases in TIVA, 9.29 cases in inhalation anesthesia + epidural or spinal or conduction block, 6.59 cases in TIVA + epidural or spinal or conduction block, 3.59 cases in CSEA, 6.3 cases in epidural anesthesia, 4.39 cases in spinal anesthesia, 2.36 cases in conduction block and 23.56 cases in other methods. On the other hand, the incidence of severe hypotension per 10,000 cases totally attributable to anesthetic management is estimated to be 1.25 cases on average; 0.97 cases in inhalation anesthesia, 0.89 cases in TIVA, 1.39 cases in inhalation anesthesia + epidural or spinal or conduction block, 1.39 cases in TIVA + epidural or spinal or conduction block, 2.09 cases in CSEA, 3.44 cases in epidural anesthesia, 1.87 cases in spinal anesthesia, zero cases in conduction block and zero cases in other methods. The incidence of severe hypoxia per 10,000 cases due to all etiologies is estimated to be 4.8 cases on average; 6.35 cases in inhalation anesthesia, 9.64 cases in TIVA, 3.82 cases in inhalation anesthesia + epidural or spinal or conduction block, 2.26 cases in TIVA + epidural or spinal or conduction block, 0.3 cases in CSEA, 1.15 cases in epidural anesthesia, 1.21 cases in spinal anesthesia, zero cases in conduction block and 5.89 cases in other methods.
On the other hand, the incidence of severe hypoxia per 10,000 cases totally attributable to anesthetic management is estimated to be 1.98 cases on average; 3.09 cases in inhalation anesthesia, 2.32 cases in TIVA, 1.3 cases in inhalation anesthesia + epidural or spinal or conduction block, 0.87 cases in TIVA + epidural or spinal or conduction block, zero cases in CSEA, zero cases in epidural anesthesia, 0.55 cases in spinal anesthesia, zero cases in conduction block and zero cases in other methods. The mortality rate of cardiac arrest within 7 postoperative days per 10,000 cases due to all etiologies is estimated to be 3.55 (54.2%) cases on average; 3.12 (58.1%) cases in inhalation anesthesia, 19.29 (62.8%) cases in TIVA, 1.17 (25.2%) cases in inhalation anesthesia + epidural or spinal or conduction block, 0.52 (20%) cases in TIVA + epidural or spinal or conduction block, zero cases in CSEA, zero cases in epidural anesthesia, 0.33 (20%) cases in spinal anesthesia, zero cases in conduction block and 39.76 (85.7%) cases in other methods. On the other hand, the mortality rate of cardiac arrest per 10,000 cases totally attributable to anesthesia is estimated to be 0.07 (12.2%) cases on average, 0.07 (21.4%) cases in inhalation anesthesia, 0.18 (16.8%) cases in TIVA, zero cases in inhalation anesthesia + epidural or spinal or conduction block, zero cases in TIVA + epidural or spinal or conduction block, zero cases in CSEA, zero cases in epidural anesthesia, 0.11 (11.1%) cases in spinal anesthesia, zero cases in conduction block and 0.74 (50%) cases in other methods.
Five major combinations of the listed critical incidents, causes and anesthetic methods were as follows: 18.93 cases in TIVA, preoperative complications and severe hypotension; 18.75 cases in TIVA, preoperative complications and cardiac arrest; 11.07 cases in TIVA, surgery and severe hypotension; 6.79 cases in TIVA, surgery and cardiac arrest; 5.24 cases in inhalation anesthesia, preoperative complications and severe hypotension. In summary: 1. There was no significant difference in perioperative mortality and morbidity due to anesthetic management among anesthetic methods. 2. The percentage of each anesthetic method in 2000 did not differ significantly from that in 1999, despite the increased number of cases reported. 3. The incidence of severe hypotension due to all etiologies with TIVA in 2000 decreased significantly compared with that in 1999 (P < 0.05). This may be attributed to decreased incidences of preoperative complication (shock) and of massive bleeding due to surgery.
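The per-10,000 figures throughout this report are simple incidence rates. As a hedged sketch of the arithmetic (the event count below is hypothetical, chosen only to reproduce the reported 6.55 average against the survey's 910,007 total cases):

```python
def incidence_per_10000(events: int, total_cases: int) -> float:
    """Incidence rate expressed per 10,000 anesthetics."""
    return events / total_cases * 10_000

total = 910_007          # total reported cases in 2000 (from the survey)
cardiac_arrests = 596    # hypothetical count consistent with ~6.55/10,000

print(round(incidence_per_10000(cardiac_arrests, total), 2))  # 6.55
```

Expressing counts per 10,000 rather than as raw percentages keeps rare events (fractions of a percent) on a readable scale and makes the method-by-method comparisons above directly commensurable.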
Omalizumab [anti-IgE monoclonal antibody E25, E25, humanised anti-IgE MAb, IGE 025, monoclonal antibody E25, olizumab, rhuMAb-E25, Xolair] is a chimeric monoclonal antibody. It binds specifically to the Cepsilon3 domain of immunoglobulin E (IgE). Cepsilon3 is the site of high-affinity IgE receptor binding. IgE plays a major role in allergic disease by causing the release of histamine and other inflammatory mediators from mast cells. Omalizumab binds to and neutralises circulating IgE by preventing IgE from binding to its high-affinity mast-cell receptor. In addition, omalizumab does not bind to or induce histamine release from basophils, nor does it bind to or recognise IgG. The immune complexes formed between IgE and omalizumab in vivo are relatively small (molecular weight <1 million) and are therefore unlikely to cause organ damage. COLLABORATION BETWEEN GENENTECH, NOVARTIS AND TANOX: omalizumab is very similar to the Tanox product CGP 51901. Genentech (Roche), Novartis and Tanox (formerly Tanox Biosystems) were developing both antibodies in phase II studies, with an agreement to collaborate on phase III development of the most promising one. The Genentech product, omalizumab, was selected for further development. Tanox has marketing rights to the drug in some Asian markets. Novartis and Genentech have marketing rights in the USA. Roche has an option to participate in the commercialisation of omalizumab and other anti-IgE products of the collaboration in Japan and Europe. Roche may exercise this option if specific events relating to commercialisation of the product occur; Roche has waived this option for omalizumab in Japan. If Genentech withdraws from the collaboration, Roche has an option to assume its place. Either Novartis or Genentech may withdraw from the collaboration on short notice, in which case rights to omalizumab revert to Tanox and the remaining collaborator unless Roche exercises its option in the event of withdrawal by Genentech.
PATENTS: Protein Design Labs holds fundamental antibody humanisation patents. Protein Design Labs stated in its Annual Report for 2000 that Genentech may elect to take a patent licence for Xolair under a 1998 patent rights agreement. Phase III clinical trials of omalizumab for the treatment of allergic rhinitis and allergic asthma were in progress with Genentech (Roche) in the USA, Canada, Europe and Japan, and are now completed. In New Zealand, the antibody was investigated in clinical trials for the treatment of allergic asthma at the Wellington School of Medicine. In the phase III trials, omalizumab was administered as a subcutaneous injection. It may also be administered intravenously. In additional phase I and II studies, the safety and efficacy of aerosol administration for allergic asthma was tested. Initial results of these studies indicated that aerosol administration is less effective than intravenous or subcutaneous administration. TEMPORARY SUSPENSION OF TRIALS: In September 2000, the US FDA requested that Genentech and Novartis suspend new trials of omalizumab. Existing long-term trials, however, could continue. The hold on new trials was due to concerns about the preclinical toxicity of omalizumab and the follow-up antibody E26. Thrombocytopenia was reported in studies in monkeys for omalizumab at 5-27 times the maximum clinical dose and for E26 at 3-15 times the maximum dose. In response to FDA requests, Novartis and Genentech carried out additional preclinical trials so that a specific explanation of the toxicity could be obtained; Novartis suspected a species specificity for the adverse events, as no thrombocytopenic events occurred in the completed phase III clinical trials. The supplementary data were submitted to the FDA and the hold on clinical trials was lifted in November 2000. 
REGULATORY FILINGS: In June 2000, Genentech, Novartis and Tanox submitted a Biologics Licence Application (BLA) to the US FDA for approval of omalizumab for the treatment of allergic asthma and allergic rhinitis. Novartis filed for marketing approval of omalizumab in the European Union, Switzerland, Australia and New Zealand. INDICATION NARROWED TO ADULT ALLERGIC ASTHMA: In July 2001, the FDA requested additional data, both preclinical and clinical, for Xolair, as well as more detailed information concerning the effect of prolonged action of the drug. Genentech is to satisfy the FDA's request with data from the ALTO platelet monitoring safety study and with ongoing open-label studies. Genentech, Novartis and Tanox believe that substantial information can be provided from continuing trials, but additional trials on specific subgroups may be necessary. The new data will be submitted to both the US FDA and the EMEA in the European Union. The application for approval of Xolair that was submitted to the EMEA was withdrawn when it became clear that there would be a delay in approval in the USA. Tanox had originally anticipated that Xolair would be launched in mid-2001 in the USA and Europe. In November 2001, Genentech and Novartis stated that an amended BLA would be submitted to the FDA in the fourth quarter of 2002. The amended approval application will focus on the use of Xolair in adults only with allergic asthma. The original application was for treatment of both adults and children, and included allergic rhinitis. Genentech has stated that it will first pursue the narrower indication before filing supplemental BLAs. Approval of the drug in the USA may now be delayed until as late as the end of 2003. In Europe, Novartis is planning to develop Xolair for use only in asthmatic patients who are classed as being 'at risk', i.e. those who have been hospitalised or have visited an emergency department.
Clinical studies are to be carried out, with submission for regulatory approval planned for 2003. APPROVAL IN AUSTRALIA: In June 2002, Xolair was approved by the Therapeutic Goods Administration in Australia for the treatment of adults and adolescents with moderate allergic asthma. This is the first marketing approval for Xolair.
Several health care organizations recommend that physicians provide preventive dentistry services, including dental screening and referral. This study is the first to investigate characteristics of medical providers that influence their referral to a dentist of children who are at risk for dental disease. A cross-sectional survey was undertaken of primary care clinicians in 69 pediatric practices and 49 family medicine practices who were enrolled in a study to evaluate a pediatric preventive dentistry program targeted toward Medicaid-eligible children in North Carolina. A 100-item, self-administered questionnaire with 23 items on some aspect of dental referral elicited providers' knowledge and opinions toward oral health, their provision of dental services, and their confidence in providing these services. We hypothesized that providers' dental knowledge, opinions about the importance of oral health, and confidence in providing oral health services would be associated with their propensity to refer children who are younger than 3 years and are suspected of having risk factors for future dental disease or a few teeth in the beginning stages of decay. We also hypothesized that providers' perceived referral difficulty would affect their referral activities. Patient characteristics (tooth decay status, insurance status, immigrant status, English speaking), practice characteristics (setting, number of providers, patient volume, busyness), practice environment (perceived and actual availability of dentists), and other provider characteristics (gender, type, practice experience, board certification, training in oral health during or after professional education, hours worked, teaching of residents, preventive behaviors) were assessed and used as control variables. Preliminary bivariate analysis (analysis of variance, chi2) identified characteristics associated with referral activity. 
Multivariable analysis using backward stepwise logistic regression tested the posed hypotheses, with provider, practice, and patient characteristics included as potential control variables. Nearly 78% of the 169 primary care clinicians who participated in the survey reported that they were likely to refer children who had signs of early decay or high risk for future disease. Approximately half (54%) call a dental office sometimes or more frequently to make an appointment for a child whom they refer, but the most common method is to give the caregiver the name of a dentist without additional assistance (96%). Bivariate analysis revealed that providers who had high confidence in their ability to perform screenings and reported low overall referral difficulty were more likely to refer children. Bivariate analyses also found that providers who were not in group practices, were board certified, graduated 20 years ago or more, saw 80 or more patients per week, had >60% of their total patients in the infant and toddler age group, and saw >3.5 patients per hour were significantly less likely to refer at-risk children for dental care. No patient characteristics were associated with referral. The regression model revealed that an increase in the odds of referral was significantly associated with confidence in screening abilities (odds ratio [OR]: 5.0; 95% confidence interval [CI]: 1.7-15.1), low referral difficulty (OR: 6.0; 95% CI: 1.0-34.5), and group practice (OR: 4.2; 95% CI: 1.4-12.1). Having a patient population of >60% infants or toddlers was significantly associated with a decrease in the odds of referral (OR: 0.2; 95% CI: 0.1-0.7). Oral health knowledge and opinions did not help to explain referral practices. Tooth decay remains a substantial problem in young children and is made worse by existing barriers that prevent them from obtaining dental care.
Because most children are exposed to medical care but not dental care at an early age, primary care medical providers have the opportunity to play an important role in helping children and their families gain access to dental care. This study has identified several factors that need consideration in the further exploration and development of primary care physicians' role in providing for the oral health of their young patients. First, instructional efforts to increase providers' dental knowledge or opinions of the importance of oral diseases are unlikely to be effective in increasing dental referral unless they include methods to increase confidence in providers' ability to identify and appropriately refer children with disease. Medical education in oral health may need to be designed to include components that address self-efficacy in providing risk assessment, early detection, and referral services. Traditional, didactic instruction does not fulfill these requirements, but because the effectiveness of instructional methods for teaching medical providers oral health care, particularly confidence-building aspects, is untested, controlled evaluations are necessary. A second conclusion from this study is that the referral environment is more important than provider knowledge, experience, opinions, or patient characteristics in determining whether medical practitioners refer at-risk children for dental care. Most providers in this study held positive opinions about providing dental services in their practices, had relatively high levels of knowledge, screened for dental disease, assessed risk factors in their patients, and referred; they can be instrumental in helping young children get dental care. Yet most providers face difficulties in making dental referrals, and changes in the availability of dental care will be necessary to decrease these barriers before referral can be most effective.
The longer-term approach of increasing the number of dental graduates can be complemented in the shorter term by other approaches to increase dentists' participation in Medicaid, such as increases in reimbursement rates; training general dentists to treat young children; and community organization activities to link families, physicians, dentists, and public programs such as Early Head Start. Finally, pediatric primary health care providers can provide oral health promotion and disease prevention activities, thereby eliminating or delaying dental disease and the need for treatment at a very young age. However, effective and appropriate involvement of pediatric primary care clinicians can be expected only after they receive the appropriate training and encouragement and problems with the dental referral environment are addressed.
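The odds ratios and confidence intervals reported for this study come from multivariable logistic regression. For a single 2×2 exposure-by-outcome table, the same quantities can be sketched with the standard Wald formula; the counts below are hypothetical, chosen only to yield an OR near the confidence-in-screening estimate (OR 5.0), not the survey's actual data:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed & referred,   b = exposed & not referred,
    c = unexposed & referred, d = unexposed & not referred."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 40/20 referral split among high-confidence
# providers vs 10/25 among low-confidence providers.
or_, lo, hi = odds_ratio_ci(40, 20, 10, 25)
print(or_)  # 5.0 exactly for these counts
```

Note how wide the interval becomes with small cell counts; the study's own CI of 1.0-34.5 for referral difficulty shows the same effect.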
For a long period in the history of psychological research, emotion and cognition have been studied independently, as if one were irrelevant to the other. The renewed interest of researchers in the study of the relations between cognition and emotion has led to the development of a range of laboratory methods for inducing temporary mood states. This paper aims to review the main mood induction procedures allowing the induction of a negative mood as well as a positive mood, developed since the pioneering study of Schachter and Singer [Psychol Rev 69 (1962) 379-399], and to account for the usefulness of and problems related to the use of such techniques. The first part of this paper deals with the detailed presentation of some of the most popular mood induction procedures according to their type: simple (use of only one mood induction technique) or combined (association of two or more techniques at once). The earliest of the modern techniques is the Velten Mood Induction Procedure [Behav Res Ther 6 (1968) 473-482], which involves reading aloud sixty self-referent statements progressing from relatively neutral mood to negative mood or dysphoria. Some researchers have varied the procedure slightly by changing the number of the statements [Behav Res Ther 21 (1983) 233-239, Br J Clin Psychol 21 (1982) 111-117, J Pers Soc Psychol 35 (1977) 625-636]. Various other mood induction procedures have been developed, including music induction [Cogn Emotion 11 (1997) 403-432, Br J Med Psychol 55 (1982) 127-138], film clip induction [J Pers Soc Psychol 20 (1971) 37-43, Cogn Emotion 7 (1993) 171-193, Rottenberg J, Ray RR, Gross JJ. Emotion elicitation using films. In: Coan JA, Allen JJB, editors. The handbook of emotion elicitation and assessment. New York: Oxford University Press, 2007], autobiographical recall [J Clin Psychol 36 (1980) 215-226, Jallais C. Effets des humeurs positives et négatives sur les structures de connaissances de type script. Thèse de doctorat non publiée.
Université de Nantes, Nantes] or combined inductions [Gilet AL. Etude des effets des humeurs positives et négatives sur l'organisation des connaissances en mémoire sémantique. Thèse de doctorat non publiée, Université de Nantes, Nantes, J Ment Imagery 19 (1995) 133-150]. In music or film clip inductions, subjects are asked to listen or view some mood-suggestive pieces of material determined by the experimenter according to standardized music or film sets [J Ment Imagery 19 (1995) 133-150, Cogn Emotion 7 (1993) 171-193] and selected to elicit target moods. According to many authors, these two mood induction procedures seem to be among the most effective manners to induce moods [Br J Psychol 85 (1994) 55-78, Eur J Soc Psychol 26 (1996) 557-580] in an individual or in a group setting [Jallais C. Effets des humeurs positives et négatives sur les structures de connaissances de type script. Thèse de doctorat non publiée. Université de Nantes, Nantes]. As it is believed that multiple inductions contribute additively to a mood [Am Psychol 36 (1981) 129-148], researchers proposed to combine two or more techniques at the same time. Thus, the Velten Mood Induction Procedure has been successively associated with the hypnosis mood induction procedure [J Pers Soc Psychol 42 (1982) 927-934], the music mood induction procedure [Behav Res Ther 21 (1983) 233-239, J Exp Soc Psychol 26 (1990) 465-480] or the imagination mood induction procedure [Br J Clin Psychol 21 (1982) 111-117]. Successful combinations of inductions usually use a first induction that occupies foreground attention and a second one that contributes to congruent background atmosphere. One of the most successful combined mood induction procedures has been developed by Mayer, Allen and Beauregard [J Ment Imagery 19 (1995) 133-150]. This technique associates guided imagery with music and is supposed to increase effectiveness of the induction. 
In the second part of this paper the aim is to present the usefulness of mood induction procedures in the study of cognitive processes in depression [Clin Psychol Rev 25 (2005) 487-510], borderline personality disorder [J Behav Ther Exp Psychiatry 36 (2005) 226-239] or associated with brain imaging [Am J Psychiatry 161 (2004) 2245-2256]. Then the problems inherent in the use of experimental mood induction procedures are reconsidered. Doubts have indeed arisen about the effectiveness and validity of the mood induction procedures usually used in research. Some authors questioned whether a sufficient intensity of mood is produced, or whether the effects observed are due mainly to demand effects [Br J Psychol 85 (1994) 55-78, Clin Psychol Rev 10 (1990) 669-697, Eur J Soc Psychol 26 (1996) 557-580]. In fact, the various mood induction procedures are not equal with regard to the demand effects observed. The question of demand characteristics with respect to mood induction procedures is still under debate, even if demand effects are supposed to be most likely to occur with self-statement techniques (especially with the Velten mood induction procedure) or when subjects are explicitly instructed to try to enter a specific mood state [Eur J Soc Psychol 26 (1996) 557-580]. Another question relates to the effectiveness of these various induction procedures and the duration of induced moods. Generally, the various techniques used produce true changes of mood in most if not all of the subjects. However, certain procedures seem more effective at inducing particular moods [Br J Psychol 85 (1994) 55-78, Clin Psychol Rev 10 (1990) 669-697, Eur J Soc Psychol 26 (1996) 557-580]. The duration of induced moods depends both on the procedure used and on the mood induced.
Nevertheless, mood induction remains fundamental in the study of the effects of mood on the cognitive activities, insofar as it makes it possible to study the effects of negative as well as positive moods.
1. Tetany occurs spontaneously in many forms and may also be produced by the destruction of the parathyroid glands. Recent researches tend to demonstrate an intimate relation between the various forms of tetany and relative or absolute insufficiency of the parathyroid gland. 2. The parathyroid glands are independent organs with definite specific function. Whether or not this function is intimately related to that of other organs of internal secretion is not as yet proven. 3. The number and distribution of the parathyroid glands varies. Failure to produce tetany experimentally is probably due to the fact that some parathyroid tissue remains after an apparently complete extirpation. When extirpation is complete tetany appears, even in herbivora. Only a very small amount of parathyroid tissue is required to prevent this. 4. The effect of the extirpation of the parathyroid glands may be annulled by the reintroduction of an extract of these glands even from an animal of widely different character. The active principle is associated with a nucleo-proteid in the extract and may be separated with this nucleo-proteid from the remaining inert albuminous substances. Its effect in counteracting tetany appears some hours after injection and lasts several days. 5. The parathyroid glands contain no considerable amount of iodine. The parathyroid extract is not an iodine containing compound. 6. In tetany there is apparently some disturbance of the composition of the circulating fluids ordinarily prevented by the secretion of the parathyroid, which disarranges the balance of the mineral constituents of the tissues. Possibly this consists in the appearance of an injurious substance of an acid nature, for such tetany may be relieved by extensive bleeding with replacement of the blood by salt solution. No actual poisonous material has, however, been demonstrated by the transference of the blood of a tetanic animal to the veins of a normal one. 7.
Numerous researches have shown the important relation of the calcium salts to the excitability of the central nervous system. Their withdrawal leaves the nerve cells in a state of hyperexcitability which can be made to disappear by supplying them with a solution of a calcium salt. 8. Tetany may be regarded as an expression of hyperexcitability of the nerve cells from some such cause. 9. The injection of a solution of a salt of calcium into the circulation of an animal in tetany promptly checks all the symptoms and restores the animal to an apparently normal condition. 10. Injections of magnesium salts probably have a similar effect but these effects are masked by the toxic action of the salt. 11. The injection of sodium or potassium salts has no such beneficial effect but rather tends to intensify the symptoms. This is true also of the alkaline salts of sodium which were studied especially in respect to their basic properties. 12. The effect of calcium is of value in human therapeutics in combating the symptoms of spontaneous forms of tetany and in relieving the symptoms in cases of operative tetany and thus tiding over the period of acute parathyroid insufficiency until remnants of parathyroid tissue can recover their function or new parathyroid tissue can be transplanted. It is in this way an important and convenient ally of the method of injecting parathyroid extract. 13. Studies of the metabolism in parathyroidectomized animals show: 1. A marked reduction in the calcium content of the tissues especially of the blood and brain, during tetany. 2. An increased output of calcium in the urine and faeces on the development of tetany. 3. An increased output of nitrogen in the urine. 4. An increased output of ammonia in the urine with 4a. an increased ammonia ratio in the urine. 5. An increased amount of ammonia in the blood. Much of this affords evidence of the existence of some type of acid intoxication. 
Its effects are, however, not neutralized by the introduction of alkaline sodium salts and may perhaps be regarded as especially important in producing a drainage of calcium salts from the tissues which can be remedied by the reintroduction of calcium salts. 14. Emphasis must be laid upon the remarkable difference which exists between the alterations in metabolism following thyroidectomy and those following parathyroidectomy. In myxoedema there is lowered metabolism, decreased respiratory changes and lowered nitrogen output with depression of body temperature. In tetany there is increased metabolism, probably increased respiratory changes, certainly increase in nitrogen output and elevation of the temperature. 15. It is important, therefore, that in any experiments upon metabolism in relation to the thyroid and parathyroid gland, these glands should be clearly distinguished as structures exercising very different and in large part contrary effects upon metabolism. 16. In general the role of the calcium salts in connection with tetany may be conceived of as follows: These salts have a moderating influence upon the nerve cells. The parathyroid secretion in some way controls the calcium exchange in the body. It may possibly be that in the absence of the parathyroid secretion, substances arise which can combine with calcium, abstract it from the tissues and cause its excretion, and that the parathyroid secretion prevents the appearance of such bodies. The mechanism of the parathyroid action is not determined, but the result, the impoverishment of the tissues with respect to calcium and the consequent development of hyperexcitability of the nerve cells and tetany, is proven. Only the restoration of calcium to the tissues can prevent this. 17. This explanation is readily applicable to spontaneous forms of tetany in which there is a drain of calcium for physiological purposes, or in which some other condition causes a drain of calcium.
In such cases the parathyroid glands may be relatively insufficient.
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
BACKGROUND: Of great interest in cancer prevention is how nutrient components affect gene pathways associated with the physiological events of puberty. Nutrient-gene interactions may cause changes in breast or prostate cells and, therefore, may result in cancer risk later in life. Analysis of gene pathways can lead to insights about nutrient-gene interactions and the development of more effective prevention approaches to reduce cancer risk. To date, researchers have relied heavily upon experimental assays (such as microarray analysis, etc.) to identify genes and their associated pathways that are affected by nutrients and diets. However, the vast number of genes and combinations of gene pathways, coupled with the expense of the experimental analyses, has delayed the progress of gene-pathway research. The development of an analytical approach based on available test data could greatly benefit the evaluation of gene pathways, and thus advance the study of nutrient-gene interactions in cancer prevention. In the present study, we have proposed a chain reaction model to simulate gene pathways, in which the gene expression changes through the pathway are represented by the species undergoing a set of chemical reactions. We have also developed a numerical tool to solve for the species changes due to the chain reactions over time. Through this approach we can examine the impact of nutrient-containing diets on the gene pathway; moreover, transformation of genes over time with a nutrient treatment can be observed numerically, which is very difficult to achieve experimentally. We apply this approach to microarray analysis data from an experiment which involved the effects of three polyphenols (nutrient treatments), epigallo-catechin-3-O-gallate (EGCG), genistein, and resveratrol, in a study of nutrient-gene interaction in the estrogen synthesis pathway during puberty. RESULTS: In this preliminary study, the estrogen synthesis pathway was simulated by a chain reaction model.
By applying it to microarray data, the chain reaction model computed a set of reaction rates to examine the effects of three polyphenols (EGCG, genistein, and resveratrol) on gene expression in this pathway during puberty. We first performed statistical analysis to test the time factor on the estrogen synthesis pathway. Global tests were used to evaluate an overall gene expression change during puberty for each experimental group. Then, a chain reaction model was employed to simulate the estrogen synthesis pathway. Specifically, the model computed the reaction rates in a set of ordinary differential equations to describe interactions between genes in the pathway (a reaction rate K from A to B means that gene A induces gene B at a rate of K per unit; we give details in the "method" section). Since disparate changes of gene expression may cause numerical error problems in solving these differential equations, we used an implicit scheme to address this issue. We first applied the chain reaction model to obtain the reaction rates for the control group. A sensitivity study was conducted to evaluate how well the model fits the control group data at Day 50. Results showed a small bias and mean square error. These observations indicated the model is robust to low random noises and has a good fit for the control group. Then the chain reaction model derived from the control group data was used to predict gene expression at Day 50 for the three polyphenol groups. If these nutrients affect the estrogen synthesis pathway during puberty, we expect a discrepancy between observed and expected expressions. Results indicated some genes had large differences in the EGCG (e.g., Hsd3b and Sts) and the resveratrol (e.g., Hsd3b and Hrmt12) groups. CONCLUSIONS: In the present study, we have presented (I) experimental studies of the effect of nutrient diets on the gene expression changes in a selected estrogen synthesis pathway.
This experiment is valuable because it allows us to examine how the nutrient-containing diets regulate gene expression in the estrogen synthesis pathway during puberty; (II) global tests to assess an overall association of this particular pathway with the time factor by utilizing generalized linear models to analyze microarray data; and (III) a chain reaction model to simulate the pathway. This is a novel application because we are able to translate the gene pathway into chemical reactions in which each reaction channel describes a gene-gene relationship in the pathway. In the chain reaction model, the implicit scheme is employed to efficiently solve the differential equations. Data analysis results show the proposed model is capable of predicting gene expression changes and demonstrating the effect of nutrient-containing diets on gene expression changes in the pathway. One of the objectives of this study is to explore and develop a numerical approach for simulating the gene expression change so that it can be applied and calibrated when data for more time slices are available, and thus can be used to interpolate the expression change at a desired time point without conducting expensive experiments for a large number of time points. Hence, we are not claiming this is either essential or the most efficient way of simulating this problem, but rather a mathematical/numerical approach that can model the expression change of a large set of genes in a complex pathway. In addition, we understand the limitation of this experiment and realize that it is still far from being a complete model for predicting nutrient-gene interactions. The reason is that in the present model, the reaction rates were estimated based on available data at two time points; hence, the gene expression change depends upon the reaction rates and is a linear function of the gene expressions.
More data sets containing gene expression at various time slices are needed in order to improve the present model so that a non-linear variation of gene expression changes at different time can be predicted.
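The chain reaction model described above can be sketched numerically. In this hypothetical illustration (the function name, the toy rate matrix, and the time step are invented for the example, not taken from the study), gene expressions x evolve as dx/dt = K x, and each update uses the implicit (backward Euler) scheme the authors mention, which stays stable even when reaction rates differ widely.

```python
import numpy as np

# Hedged sketch of the chain reaction model: K[j, i] is the rate at which
# gene i induces gene j. Backward Euler advances the state with
# x_new = (I - dt*K)^-1 @ x_old, avoiding the stiffness problems that
# disparate reaction rates cause for explicit schemes.
def simulate_pathway(x0, K, dt, steps):
    n = len(x0)
    step_op = np.linalg.inv(np.eye(n) - dt * K)  # implicit Euler operator
    x = np.array(x0, dtype=float)
    history = [x.copy()]
    for _ in range(steps):
        x = step_op @ x
        history.append(x.copy())
    return np.array(history)

# Toy 3-gene chain: gene 0 induces gene 1, which in turn induces gene 2.
K = np.array([[0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.3, 0.0]])
trajectory = simulate_pathway([1.0, 0.0, 0.0], K, dt=0.1, steps=100)
```

Because the model is linear in the gene expressions (as the authors note), the whole trajectory is determined by repeated application of one matrix, which is what makes interpolating expression changes between measured time points cheap.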
Cardiac magnetic resonance imaging (CMR) is increasingly used to assess patients for myocardial viability prior to revascularisation. This is important to ensure that only those likely to benefit are subjected to the risk of revascularisation. To assess current evidence on the accuracy and cost-effectiveness of CMR to test patients prior to revascularisation in ischaemic cardiomyopathy; to develop an economic model to assess cost-effectiveness for different imaging strategies; and to identify areas for further primary research. Initial searches were conducted in March 2011 in the following databases with dates: MEDLINE including MEDLINE In-Process & Other Non-Indexed Citations via Ovid (1946 to March 2011); Bioscience Information Service (BIOSIS) Previews via Web of Science (1969 to March 2011); EMBASE via Ovid (1974 to March 2011); Cochrane Database of Systematic Reviews via The Cochrane Library (1996 to March 2011); Cochrane Central Register of Controlled Trials via The Cochrane Library (1998 to March 2011); Database of Abstracts of Reviews of Effects via The Cochrane Library (1994 to March 2011); NHS Economic Evaluation Database via The Cochrane Library (1968 to March 2011); Health Technology Assessment Database via The Cochrane Library (1989 to March 2011); and the Science Citation Index via Web of Science (1900 to March 2011).
Additional searches were conducted from October to November 2011 in the following databases with dates: MEDLINE including MEDLINE In-Process & Other Non-Indexed Citations via Ovid (1946 to November 2011); BIOSIS Previews via Web of Science (1969 to October 2011); EMBASE via Ovid (1974 to November 2011); Cochrane Database of Systematic Reviews via The Cochrane Library (1996 to November 2011); Cochrane Central Register of Controlled Trials via The Cochrane Library (1998 to November 2011); Database of Abstracts of Reviews of Effects via The Cochrane Library (1994 to November 2011); NHS Economic Evaluation Database via The Cochrane Library (1968 to November 2011); Health Technology Assessment Database via The Cochrane Library (1989 to November 2011); and the Science Citation Index via Web of Science (1900 to October 2011). Electronic databases were searched March-November 2011. The systematic review selected studies that assessed the clinical effectiveness and cost-effectiveness of CMR to establish the role of CMR in viability assessment compared with other imaging techniques: stress echocardiography, single-photon emission computed tomography (SPECT) and positron emission tomography (PET). Studies had to have an appropriate reference standard and contain accuracy data or sufficient details so that accuracy data could be calculated. Data were extracted by two reviewers and discrepancies resolved by discussion. Quality of studies was assessed using the QUADAS II tool (University of Bristol, Bristol, UK). A rigorous diagnostic accuracy systematic review assessed clinical and cost-effectiveness of CMR in viability assessment. A health economic model estimated costs and quality-adjusted life-years (QALYs) accrued by diagnostic pathways for identifying patients with viable myocardium in ischaemic cardiomyopathy with a view to revascularisation. The pathways involved CMR, stress echocardiography, SPECT, PET alone or in combination.
Strategies of no testing and revascularisation were included to determine the most cost-effective strategy. Twenty-four studies met the inclusion criteria. All were prospective. Participant numbers ranged from 8 to 52. The mean left ventricular ejection fraction in studies reporting this outcome ranged from 24% to 62%. CMR approaches included stress CMR and late gadolinium-enhanced cardiovascular magnetic resonance imaging (CE CMR). Recovery following revascularisation was the reference standard. Twelve studies assessed diagnostic accuracy of stress CMR and 14 studies assessed CE CMR. A bivariate regression model was used to calculate the sensitivity and specificity of CMR. Summary sensitivity and specificity for stress CMR were 82.2% [95% confidence interval (CI) 73.2% to 88.7%] and 87.1% (95% CI 80.4% to 91.7%), and for CE CMR were 95.5% (95% CI 94.1% to 96.7%) and 53% (95% CI 40.4% to 65.2%), respectively. The sensitivity and specificity of PET, SPECT and stress echocardiography were calculated using data from 10 studies and systematic reviews. The sensitivity of PET was 94.7% (95% CI 90.3% to 97.2%), of SPECT was 85.1% (95% CI 78.1% to 90.2%) and of stress echocardiography was 77.6% (95% CI 70.7% to 83.3%). The specificity of PET was 68.8% (95% CI 50% to 82.9%), of SPECT was 62.1% (95% CI 52.7% to 70.7%) and of stress echocardiography was 69.6% (95% CI 62.4% to 75.9%). All currently used diagnostic strategies were cost-effective compared with no testing at current National Institute for Health and Care Excellence thresholds. If the annual mortality rates for non-viable patients were assumed to be higher for revascularised patients, then testing with CE CMR was most cost-effective at a threshold of £20,000/QALY. The proportion of model runs in which each strategy was most cost-effective, at a threshold of £20,000/QALY, was 40% for CE CMR, 42% for PET and 16.5% for revascularising everyone. The expected value of perfect information at £20,000/QALY was £620 per patient.
If all patients (viable or not) gained benefit from revascularisation, then it was most cost-effective to revascularise all patients. Definitions and techniques assessing viability were highly variable, making data extraction and comparisons difficult. Lack of evidence meant assumptions were made in the model leading to uncertainty; differing scenarios were generated around key assumptions. All the diagnostic pathways are a cost-effective use of NHS resources. Given the uncertainty in the mortality rates, the cost-effectiveness analysis was performed using a set of scenarios. The cost-effectiveness analyses suggest that CE CMR and revascularising everyone were the optimal strategies. Future research should look at implementation costs for this type of imaging service, provide guidance on consistent reporting of diagnostic testing data for viability assessment, and focus on the impact of revascularisation or best medical therapy in this group of high-risk patients. Funded by the National Institute for Health Research Health Technology Assessment programme.
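As a rough illustration of the accuracy figures reported above, per-study sensitivity and specificity with 95% CIs can be computed from a 2x2 table of test results against the reference standard. The counts and function name below are invented for the example (chosen only to echo the pooled CE CMR summary of roughly 95.5% and 53%), and this simple per-study calculation does not reproduce the bivariate regression model the review used for pooling.

```python
import math

# Hedged per-study sketch: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
# with Wald 95% CIs computed on the logit scale and back-transformed so the
# interval stays inside (0, 1).
def sensitivity_specificity(tp, fn, tn, fp):
    def proportion_with_ci(successes, failures):
        p = successes / (successes + failures)
        logit = math.log(successes / failures)
        se = math.sqrt(1 / successes + 1 / failures)
        lower = 1 / (1 + math.exp(-(logit - 1.96 * se)))
        upper = 1 / (1 + math.exp(-(logit + 1.96 * se)))
        return p, (lower, upper)

    return proportion_with_ci(tp, fn), proportion_with_ci(tn, fp)

# Illustrative counts only, not extracted from any included study.
(sens, sens_ci), (spec, spec_ci) = sensitivity_specificity(86, 4, 53, 47)
```

The trade-off visible in the pooled results (very high sensitivity but modest specificity for CE CMR) falls directly out of such tables: a test that calls almost every viable segment viable also labels many non-viable segments viable.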
Several agents are used to clear secretions from the airways of people with cystic fibrosis. Inhaled dry powder mannitol is now available in Australia and some countries in Europe. The exact mechanism of action of mannitol is unknown, but it increases mucociliary clearance. Phase III trials of inhaled dry powder mannitol for the treatment of cystic fibrosis have been completed. The dry powder formulation of mannitol may be more convenient and easier to use compared with established agents which require delivery via a nebuliser. To assess whether inhaled dry powder mannitol is well tolerated, whether it improves the quality of life and respiratory function in people with cystic fibrosis and which adverse events are associated with the treatment. We searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register which comprises references identified from comprehensive electronic databases, handsearching relevant journals and abstracts from conferences. Date of last search: 16 April 2015. All randomised controlled studies comparing mannitol with placebo, active inhaled comparators (for example, hypertonic saline or dornase alfa) or with no treatment. Authors independently assessed studies for inclusion, carried out data extraction and assessed the risk of bias in included studies. The searches identified nine separate studies (45 publications), of which four studies (36 publications) were included with a total of 667 participants, one study (only available as an abstract) is awaiting assessment and two studies are ongoing. Duration of treatment in the included studies ranged from two weeks to six months with open-label treatment for an additional six months in two of the studies.
Three studies compared mannitol with control (a very low dose of mannitol or non-respirable mannitol); two of these were parallel studies with a similar design and data could be pooled, where data for a particular outcome and time point were available; also, one short-term cross-over study supplied additional results. The fourth study compared mannitol to dornase alfa alone and to mannitol plus dornase alfa. There was generally a low risk of bias in relation to randomisation and blinding; evidence from the parallel studies was judged to be of low to moderate quality and from the cross-over studies was judged to be of low to very low quality. While the published papers did not provide all the data required for our analysis, additional unpublished data were provided by the drug's manufacturer and the author of one of the studies. There was an initial test to see if participants tolerated mannitol, with only those who could tolerate the drug being randomised to the studies; therefore the study results are not applicable to the cystic fibrosis population as a whole. For the comparison of mannitol and control, we found no consistent differences in health-related quality of life in any of the domains, except for burden of treatment, which was less for mannitol up to four months in the two pooled studies of a similar design; this difference was not maintained at six months. Up to and including six months, lung function in terms of forced expiratory volume at one second (millilitres) and per cent predicted were significantly improved in all three studies comparing mannitol to control. Beneficial results were observed in these studies in adults and in both concomitant dornase alfa users and non-users. A significant reduction was shown in the incidence of pulmonary exacerbations in favour of mannitol at six months; however, the estimate of this effect was imprecise so it is unclear whether the effect is clinically meaningful.
Cough, haemoptysis, bronchospasm, pharyngolaryngeal pain and post-tussive vomiting were the most commonly reported side effects on both treatments. Mannitol was not associated with any increase in isolation of bacteria over a six-month period. In the 12-week cross-over study (28 participants), no significant differences were found in the recorded domains of health-related quality of life or measures of lung function between mannitol versus dornase alfa alone and versus mannitol plus dornase alfa. There seemed to be a higher rate of pulmonary exacerbations in the mannitol plus dornase alfa arm compared with dornase alfa alone; although not statistically significant, this was the most common reason for stopping treatment in this arm. Cough was the most common side effect in the mannitol alone arm but there was no occurrence of cough in the dornase alfa alone arm, and the most commonly reported reason for withdrawal from the mannitol plus dornase alfa arm was pulmonary exacerbations. Mannitol (with or without dornase alfa) was not associated with any increase in isolation of bacteria over the 12-week period. There is evidence to show that treatment with mannitol over a six-month period is associated with an improvement in some measures of lung function in people with cystic fibrosis compared to control. There is no evidence that quality of life is improved for participants taking mannitol compared to control; a decrease in burden of treatment was observed up to four months on mannitol compared to control but this difference was not maintained to six months. Randomised information regarding the burden of adding mannitol to an existing treatment is limited.
There is no randomised evidence of improvement in lung function or quality of life comparing mannitol to dornase alfa alone and to mannitol plus dornase alfa. Mannitol as a single or concomitant treatment to dornase alfa may be of benefit to people with cystic fibrosis, but further research is required in order to establish who may benefit most and whether this benefit is sustained in the longer term. The clinical implications from this review suggest that mannitol could be considered as a treatment in cystic fibrosis; however, studies comparing its efficacy against other (established) mucolytic therapies need to be undertaken before it can be considered for mainstream practice.
Femoro-popliteal bypass is implemented to save limbs that might otherwise require amputation, in patients with ischaemic rest pain or tissue loss; and to improve walking distance in patients with severe life-limiting claudication. Contemporary practice involves grafts using autologous vein, polytetrafluoroethylene (PTFE) or Dacron as a bypass conduit. This is the second update of a Cochrane review first published in 1999 and last updated in 2010. To assess the effects of bypass graft type in the treatment of stenosis or occlusion of the femoro-popliteal arterial segment, for above- and below-knee femoro-popliteal bypass grafts. For this update, the Cochrane Vascular Information Specialist searched the Vascular Specialised Register (13 March 2017) and CENTRAL (2017, Issue 2). Trial registries were also searched. We included randomised trials comparing at least two different types of femoro-popliteal grafts for arterial reconstruction in patients with femoro-popliteal ischaemia. Randomised controlled trials comparing bypass grafting to angioplasty or to other interventions were not included. Both review authors (GKA and CPT) independently screened studies, extracted data, assessed trials for risk of bias and graded the quality of the evidence using GRADE criteria. We included nineteen randomised controlled trials, with a total of 3123 patients (2547 above-knee, 576 below-knee bypass surgery). In total, nine graft types were compared (autologous vein, polytetrafluoroethylene (PTFE) with and without vein cuff, human umbilical vein (HUV), polyurethane (PUR), Dacron, heparin-bonded Dacron (HBD), FUSION BIOLINE, and Dacron with external support).
Studies differed in which graft types they compared and follow-up ranged from six months to 10 years. Above-knee bypass: For above-knee bypass, there was moderate-quality evidence that autologous vein grafts improve primary patency compared to prosthetic grafts by 60 months (Peto odds ratio (OR) 0.47, 95% confidence interval (CI) 0.28 to 0.80; 3 studies, 269 limbs; P = 0.005). We found low-quality evidence to suggest that this benefit translated to improved secondary patency by 60 months (Peto OR 0.41, 95% CI 0.22 to 0.74; 2 studies, 176 limbs; P = 0.003). We found no clear difference between Dacron and PTFE graft types for primary patency by 60 months (Peto OR 1.67, 95% CI 0.96 to 2.90; 2 studies, 247 limbs; low-quality evidence). We found low-quality evidence that Dacron grafts improved secondary patency over PTFE by 24 months (Peto OR 1.54, 95% CI 1.04 to 2.28; 2 studies, 528 limbs; P = 0.03), an effect which continued to 60 months in the single trial reporting this timepoint (Peto OR 2.43, 95% CI 1.31 to 4.53; 167 limbs; P = 0.005). Externally supported prosthetic grafts had inferior primary patency at 24 months when compared to unsupported prosthetic grafts (Peto OR 2.08, 95% CI 1.29 to 3.35; 2 studies, 270 limbs; P = 0.003). Secondary patency was similarly affected in the single trial reporting this outcome (Peto OR 2.25, 95% CI 1.24 to 4.07; 236 limbs; P = 0.008). No data were available for 60 months follow-up. HUV showed benefits in primary patency over PTFE at 24 months (Peto OR 4.80, 95% CI 1.76 to 13.06; 82 limbs; P = 0.002). This benefit was still seen at 60 months (Peto OR 3.75, 95% CI 1.46 to 9.62; 69 limbs; P = 0.006), but this was only compared in one trial.
Results were similar for secondary patency at 24 months (Peto OR 4.01, 95% CI 1.44 to 11.17; 93 limbs) and at 60 months (Peto OR 3.87, 95% CI 1.65 to 9.05; 93 limbs). We found HBD to be superior to PTFE for primary patency at 60 months for above-knee bypass, but these results were based on a single trial (Peto OR 0.38, 95% CI 0.20 to 0.72; 146 limbs; very low-quality evidence). There was no difference in primary patency between HBD and HUV for above-knee bypass in the one small study which reported this outcome. We found only one small trial studying PUR and it showed very poor primary and secondary patency rates which were inferior to Dacron at all time points. Below-knee bypass: For bypass below the knee, we found no graft type to be superior to any other in terms of primary patency, though one trial showed improved secondary patency of HUV over PTFE at all time points to 24 months (Peto OR 3.40, 95% CI 1.45 to 7.97; 88 limbs; P = 0.005). One study compared PTFE alone to PTFE with vein cuff; very low-quality evidence indicates no effect on either primary or secondary patency at 24 months (Peto OR 1.08, 95% CI 0.58 to 2.01; 182 limbs; 2 studies; P = 0.80 and Peto OR 1.22, 95% CI 0.67 to 2.23; 181 limbs; 2 studies; P = 0.51, respectively). Limited data were available for limb survival, and those studies reporting on this outcome showed no clear difference between graft types for this outcome. Antiplatelet and anticoagulant protocols varied extensively between trials, and in some cases within trials. The overall quality of the evidence ranged from very low to moderate. Issues which affected the quality of the evidence included differences in the design of the trials, and differences in the types of grafts they compared. These differences meant we were often only able to combine and analyse small numbers of participants and this resulted in uncertainty over the true effects of the graft type used.
There was moderate-quality evidence of improved long-term (60 months) primary patency for autologous vein grafts when compared to prosthetic materials for above-knee bypasses. In the long term (two to five years) there was low-quality evidence that Dacron confers a small secondary patency benefit over PTFE for above-knee bypass. Only very low-quality data exist on below-knee bypasses, so we are uncertain which graft type is best. Further randomised data are needed to ascertain whether this information translates into an improvement in limb survival.
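The effect measure used throughout the results above is the Peto odds ratio, which approximates ln(OR) by (O - E) / V for a 2x2 table, where O is the observed event count in one arm, E its expectation under no effect, and V the hypergeometric variance. A minimal sketch of that calculation follows; the counts and function name are invented for illustration, not data extracted from the included trials.

```python
import math

# Hedged sketch of the Peto (one-step) odds ratio for a single 2x2 table.
def peto_odds_ratio(events_a, n_a, events_b, n_b):
    total_n = n_a + n_b
    total_events = events_a + events_b
    expected = n_a * total_events / total_n            # E under no effect
    variance = (n_a * n_b * total_events * (total_n - total_events)
                / (total_n ** 2 * (total_n - 1)))      # hypergeometric V
    log_or = (events_a - expected) / variance          # ln(OR) ~ (O - E) / V
    half_width = 1.96 / math.sqrt(variance)            # 95% CI on log scale
    return math.exp(log_or), (math.exp(log_or - half_width),
                              math.exp(log_or + half_width))

# Illustrative only: fewer graft failures in arm A than expected -> OR < 1.
or_value, (ci_low, ci_high) = peto_odds_ratio(20, 134, 35, 135)
```

An OR below 1 with a CI excluding 1, as for autologous vein versus prosthetic grafts above, indicates fewer patency failures than expected in that arm.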
Anorexia nervosa (AN) is characterised by a failure to maintain a normal body weight due to a paucity of nutrition, an intense fear of gaining weight or behaviour that prevents the individual from gaining weight, or both. The long-term prognosis is often poor, with severe developmental, medical and psychosocial complications, high rates of relapse and mortality. 'Family therapy approaches' indicate a range of approaches, derived from different theories, that involve the family in treatment. We have included therapies developed on the basis of dominant family systems theories, approaches that are based on or broadly similar to the family-based therapy derived from the Maudsley model, approaches that incorporate a focus on cognitive restructuring, as well as approaches that involve the family without articulation of a theoretical approach. This is an update of a Cochrane Review first published in 2010. To evaluate the efficacy of family therapy approaches compared with standard treatment and other treatments for AN. We searched the Cochrane Common Mental Disorders Controlled Trials Register (CCMDCTR) and PsycINFO (OVID) (all years to April 2016). We ran additional searches directly on Cochrane Central Register for Controlled Trials (CENTRAL), MEDLINE, Ovid Embase, and PsycINFO (to 2008 and 2016 to 2018). We searched the World Health Organization (WHO) trials portal (ICTRP) and ClinicalTrials.gov, together with four theses databases (all years to 2018). We checked the reference lists of all included studies and relevant systematic reviews. We have included in the analyses only studies from searches conducted to April 2016. Randomised controlled trials (RCTs) of family therapy approaches compared to any other intervention or other types of family therapy approaches were eligible for inclusion. We included participants of any age or gender with a primary clinical diagnosis of anorexia nervosa. Four review authors selected the studies, assessed quality and extracted data.
We used a random-effects meta-analysis. We used the risk ratio (with a 95% confidence interval) to summarise dichotomous outcomes and both the standardised mean difference and the mean difference to summarise continuous measures. We included 25 trials in this version of the review (13 from the original 2010 review and 12 newly-included studies). Sixteen trials were of adolescents, eight trials of adults (seven of these in young adults aged up to 26 years) and one trial included three age groups: one adolescent, one young adult and one adult. Most investigated family-based therapy or variants. Reporting of trial conduct was generally inadequate, so that in a large number of studies we rated the risk of bias as unclear for many of the domains. Selective reporting bias was particularly problematic, with 68% of studies rated at high risk of bias in this area, followed by incomplete outcome data, with 44% of studies rated at high risk of bias in this area. For the main outcome measure of remission there was some low-quality evidence (from only two studies, 81 participants) suggesting that family therapy approaches might offer some advantage over treatment as usual on rates of remission, post intervention (risk ratio (RR) 3.50, 95% confidence interval (CI) 1.49 to 8.23; I² = 0%). However, at follow-up, low-quality evidence from only one study suggested this effect was not maintained. There was very low-quality evidence from only one trial, which means it is difficult to determine whether family therapy approaches offer any advantage over educational interventions for remission (RR 9.00, 95% CI 0.53 to 153.79; 1 study, N = 30).
Similarly, there was very low-quality evidence from only five trials for remission post-intervention, again meaning that it is difficult to determine whether there is any advantage of family therapy approaches over psychological interventions (RR 1.22, 95% CI 0.89 to 1.67; participants = 252; studies = 5; I² = 37%) and at long-term follow-up (RR 1.08, 95% CI 0.91 to 1.28; participants = 200; studies = 4 with 1 of these contributing 3 pairwise comparisons for different age groups; I² = 0%). There was no indication that the age group had any impact on the overall treatment effect; however, it should be noted that there were very few trials undertaken in adults, with the age range of adult studies included in this analysis from 20 to 27. There was some evidence of a small effect favouring family-based therapy compared with other psychological interventions in terms of weight gain post-intervention (standardised mean difference (SMD) 0.32, 95% CI 0.01 to 0.63; participants = 210; studies = 4 with 1 of these contributing 3 pairwise comparisons for different age groups; I² = 11%). Overall, there was insufficient evidence to determine whether there were any differences between groups across all comparisons for most of the secondary outcomes (weight, eating disorder psychopathology, dropouts, relapse, or family functioning measures), either at post-intervention or at follow-up. There is a limited amount of low-quality evidence to suggest that family therapy approaches may be effective compared to treatment as usual in the short term. This finding is based on two trials that included only a small number of participants, and both had issues about potential bias. There is insufficient evidence to determine whether there is an advantage of family therapy approaches in people of any age compared to educational interventions (one study, very low quality) or other psychological therapies (five studies, very low quality).
Most studies contributing to this finding were undertaken in adolescents and youth. There are clear potential impacts on how family therapy approaches might be delivered to different age groups and further work is required to understand what the resulting effects on treatment efficacy might be. There is insufficient evidence to determine whether one type of family therapy approach is more effective than another. The field would benefit from further large, well-conducted trials.
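The risk ratios quoted in this review (for example, RR 3.50, 95% CI 1.49 to 8.23) are typically computed with a Wald interval on the log scale from a 2x2 table. A minimal sketch, using hypothetical remission counts since the abstract does not report the underlying 2x2 data:

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A vs group B with a Wald 95% CI on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) for a 2x2 table
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 14/40 remissions with family therapy vs 4/41 with TAU
rr, lo, hi = risk_ratio_ci(14, 40, 4, 41)
print(f"RR {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

A confidence interval whose lower bound exceeds 1, as in the remission result above, is what supports a claim of advantage over treatment as usual.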
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Plasmodium vivax liver stages (hypnozoites) may cause relapses, prolonging morbidity, and impeding malaria control and elimination. The World Health Organization (WHO) recommends three schedules for primaquine: 0.25 mg/kg/day (standard), or 0.5 mg/kg/day (high standard) for 14 days, or 0.75 mg/kg once weekly for eight weeks, all of which can be difficult to complete. Since primaquine can cause haemolysis in individuals with glucose-6-phosphate dehydrogenase (G6PD) deficiency, clinicians may be reluctant to prescribe primaquine without G6PD testing, and recommendations when G6PD status is unknown must be based on an assessment of the risks and benefits of prescribing primaquine. Alternative safe and efficacious regimens are needed. To assess the efficacy and safety of alternative primaquine regimens for radical cure of P vivax malaria compared to the standard or high-standard 14-day courses. We searched the Cochrane Infectious Diseases Group Specialized Register; the Cochrane Central Register of Controlled Trials (CENTRAL); MEDLINE (PubMed); Embase (Ovid); LILACS (BIREME); WHO International Clinical Trials Registry Platform and ClinicalTrials.gov up to 2 September 2019, and checked the reference lists of all identified studies. Randomized controlled trials (RCTs) of adults and children with P vivax malaria using either chloroquine or artemisinin-based combination therapy plus primaquine at a total adult dose of at least 210 mg, compared with the WHO-recommended regimens of 0.25 or 0.5 mg/kg/day for 14 days. Two review authors independently assessed trial eligibility and quality, and extracted data. We calculated risk ratios (RRs) with 95% confidence intervals (CIs) for dichotomous data. We grouped efficacy data according to length of follow-up, partner drug, and trial location. We analysed safety data where included. 
0.5 mg/kg/day for seven days versus standard 0.25 mg/kg/day for 14 days There may be little or no difference in P vivax recurrences at six to seven months when using the same total dose (210 mg adult dose) over seven days compared to 14 days (RR 0.96, 95% CI 0.66 to 1.39; 4 RCTs, 1211 participants; low-certainty evidence). No serious adverse events were reported. We do not know if there is any difference in the number of adverse events resulting in discontinuation of primaquine (RR 1.04, 95% CI 0.15 to 7.38; 5 RCTs, 1427 participants) or in the frequency of anaemia (RR 3.00, 95% CI 0.12 to 72.91; 1 RCT, 240 participants) between the shorter and longer regimens (very low-certainty evidence). Three trials excluded people with G6PD deficiency; two did not provide this information. Pregnant and lactating women were either excluded or no details were provided. High-standard 0.5 mg/kg/day for 14 days versus standard 0.25 mg/kg/day for 14 days There may be little or no difference in P vivax recurrences at six months with 0.5 mg/kg/day primaquine for 14 days compared to 0.25 mg/kg/day for 14 days (RR 0.84, 95% CI 0.49 to 1.43; 2 RCTs, 677 participants; low-certainty evidence). No serious adverse events were reported. We do not know whether there is a difference in adverse events resulting in discontinuation of treatment with the high-standard dosage (RR 4.19, 95% CI 0.90 to 19.60; 1 RCT, 778 participants; very low-certainty evidence). People with G6PD deficiency and pregnant or lactating women were excluded. 0.75 mg/kg/week for eight weeks versus high-standard 0.5 mg/kg/day for 14 days We do not know whether weekly primaquine increases or decreases recurrences of P vivax compared to high-standard 0.5 mg/kg/day for 14 days, at 11 months' follow-up (RR 3.18, 95% CI 0.37 to 27.60; 1 RCT, 122 participants; very low-certainty evidence). No serious adverse events and no episodes of anaemia were reported.
G6PD-deficient patients were not randomized but included in the weekly primaquine group (only one patient detected). 1 mg/kg/day for seven days versus high-standard 0.5 mg/kg/day for 14 days There is probably little or no difference in P vivax recurrences at 12 months between 1.0 mg/kg/day primaquine for seven days and the high-standard 0.5 mg/kg/day for 14 days (RR 1.03, 95% CI 0.82 to 1.30; 2 RCTs, 2526 participants; moderate-certainty evidence). There may be a moderate to large increase in serious adverse events with 1.0 mg/kg/day primaquine for seven days compared with the high-standard 0.5 mg/kg/day for 14 days, during 42 days' follow-up (RR 12.03, 95% CI 1.57 to 92.30; 1 RCT, 1872 participants; low-certainty evidence). We do not know if there is a difference between 1.0 mg/kg/day primaquine for seven days and high-standard 0.5 mg/kg/day for 14 days in adverse events that resulted in discontinuation of treatment (RR 2.50, 95% CI 0.49 to 12.87; 1 RCT, 2526 participants; very low-certainty evidence), nor if there is a difference in the frequency of anaemia by 42 days (RR 0.93, 95% CI 0.62 to 1.41; 2 RCTs, 2440 participants; very low-certainty evidence). People with G6PD deficiency were excluded. Other regimens Two RCTs evaluated other rarely-used doses of primaquine, one of which had very high loss to follow-up. Adverse events were not reported. People with G6PD deficiency and pregnant or lactating women were excluded. Trials available to date do not detect a difference in recurrence between the following regimens: 1) 0.5 mg/kg/day for seven days versus standard 0.25 mg/kg/day for 14 days; 2) high-standard 0.5 mg/kg/day for 14 days versus standard 0.25 mg/kg/day for 14 days; 3) 0.75 mg/kg/week for eight weeks versus high-standard 0.5 mg/kg/day for 14 days; 4) 1 mg/kg/day for seven days versus high-standard 0.5 mg/kg/day for 14 days.
There were no differences detected in adverse events for Comparisons 1, 2 or 3, but there may be more serious adverse events with the high seven-day course in Comparison 4. The shorter regimen of 0.5 mg/kg/day for seven days versus standard 0.25 mg/kg/day for 14 days may suit G6PD-normal patients. Further research will help increase the certainty of the findings and applicability in different settings.
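Alongside pooled risk ratios, reviews like this one report Cochran's Q and the I² statistic to quantify between-trial heterogeneity. A minimal fixed-effect-weight sketch on made-up study-level log-risk-ratios (the trial-level data are not reproduced in the abstract):

```python
def i_squared(effects, variances):
    """Cochran's Q and the I^2 heterogeneity statistic from inverse-variance weights."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2 is the share of Q in excess of its degrees of freedom, floored at 0
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical log-risk-ratios and their variances from four trials
q, i2 = i_squared([-0.3, 0.5, -0.6, 0.4], [0.04, 0.05, 0.06, 0.03])
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```

I² near 0% suggests the trials estimate a common effect; values above roughly 75% are conventionally read as considerable heterogeneity.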
Pulmonary metastases are a poor prognostic factor in patients with osteosarcoma; however, the clinical significance of subcentimeter lung nodules and whether they represent a tumor is not fully known. Because the clinician is faced with decisions regarding biopsy, resection, or observation of lung nodules and the potential impact they have on decisions about resection of the primary tumor, this remains an area of uncertainty in patient treatment. Surgical management of the primary tumor is tailored to prognosis, and it is unclear how aggressively patients with indeterminate pulmonary nodules (IPNs), defined as nodules smaller than 1 cm at presentation, should be treated. There is a clear need to better understand the clinical importance of these nodules. (1) What percentage of patients with high-grade osteosarcoma and spindle cell sarcoma of bone have IPNs at diagnosis? (2) Are IPNs at diagnosis associated with worse metastasis-free and overall survival? (3) Are there any clinical or radiologic factors associated with worse overall survival in patients with IPN? Between 2008 and 2016, 484 patients with a first presentation of osteosarcoma or spindle cell sarcoma of bone were retrospectively identified from an institutional database. Patients with the following were excluded: treatment at another institution (6%, 27 of 484), death related to complications of neoadjuvant chemotherapy (1%, 3 of 484), Grade 1 or 2 on final pathology (4%, 21 of 484) and lack of staging chest CT available for review (0.4%, 2 of 484). All patients with abnormalities on their staging chest CT underwent imaging re-review by a senior radiology consultant and were divided into three groups for comparison: no metastases (70%, 302 of 431), IPN (16%, 68 of 431), and metastases (14%, 61 of 431) at the time of diagnosis. A random subset of CT scans was reviewed by a senior radiology registrar and there was very good agreement between the two reviewers (κ = 0.88). 
Demographic and oncologic variables as well as treatment details and clinical course were gleaned from a longitudinally maintained institutional database. The three groups did not differ with regard to age, gender, subtype, presence of pathological fracture, tumor site, or chemotherapy-induced necrosis. They differed according to local control strategy and tumor size, with a larger proportion of patients in the metastases group presenting with larger tumor size and undergoing nonoperative treatment. There was no differential loss to follow-up among the three groups. Two percent (6 of 302) of patients with no metastases, no patients with IPN, and 2% (1 of 61) of patients with metastases were lost to follow-up at 1 year postdiagnosis but were not known to have died. Individual treatment decisions were determined as part of a multidisciplinary conference, but in general, patients without obvious metastases received (neo)adjuvant chemotherapy and surgical resection for local control. Patients in the no metastases and IPN groups did not differ in local control strategy. For patients in the IPN group, staging CT images were inspected for IPN characteristics including number, distribution, size, location, presence of mineralization, and shape. Subsequent chest CT images were examined by the same radiologist to reevaluate known nodules for interval change in size and to identify the presence of new nodules. A random subset of chest CT scans was re-reviewed by a senior radiology resident (κ = 0.62). The association of demographic and oncologic variables with metastasis-free and overall survival was first explored using the Kaplan-Meier method (log-rank test) in univariable analyses. All variables that were statistically significant (p < 0.05) in univariable analyses were entered into Cox regression multivariable analyses.
Following re-review of staging chest CTs, IPNs were found in 16% (68 of 431) of patients, while an additional 14% (61 of 431) of patients had lung metastases (parenchymal nodules 10 mm or larger). After controlling for potential confounding variables like local control strategy, tumor size, and chemotherapy-induced necrosis, we found that the presence of an IPN was associated with worse overall survival and a higher incidence of metastases (hazard ratio 1.9 [95% CI 1.3 to 2.8]; p = 0.001 and HR 3.6 [95% CI 2.5 to 5.2]; p < 0.001, respectively). Two-year overall survival for patients with no metastases, IPN, or metastases was 83% [95% CI 78 to 87], 65% [95% CI 52 to 75] and 45% [95% CI 32 to 57], respectively (p = 0.001). In 74% (50 of 68) of patients with IPNs, the nodules proved to be true metastatic lesions at a median of 5.3 months. Eighty-six percent (43 of 50) of these patients had disease progression by 2 years after diagnosis. In multivariable analysis, local control strategy and tumor subtype correlated with overall survival for patients with IPNs. Patients who were treated nonoperatively and who had a secondary sarcoma had worse outcomes (HR 3.6 [95% CI 1.5 to 8.3]; p = 0.003 and HR 3.4 [95% CI 1.1 to 10.0]; p = 0.03). The presence of nodule mineralization was associated with improved overall survival in the univariable analysis (87% [95% CI 39 to 98] versus 57% [95% CI 43 to 69]; p = 0.008); however, because we could not control for other factors in a multivariable analysis, the relationship between mineralization and survival could not be determined. We were unable to detect an association between any other nodule radiologic features and survival. The findings show that the presence of IPNs at diagnosis is associated with poorer survival of affected patients compared with those with normal staging chest CTs.
IPNs noted at presentation in patients with high-grade osteosarcoma and spindle cell sarcoma of bone should be discussed with the patient and be considered when making treatment decisions. Further work is required to elucidate how the nodules should be managed. Level III, prognostic study.
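The two-year overall survival figures above (83%, 65% and 45%) are the kind of estimate produced by the Kaplan-Meier method named in the analysis. A product-limit sketch on hypothetical follow-up data, since patient-level times are not published here:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimates; event = 1 for death/progression, 0 for censored.

    Ties at the same time are handled by processing events before censorings.
    """
    data = sorted(zip(times, events), key=lambda te: (te[0], -te[1]))
    at_risk = len(data)
    surv = 1.0
    curve = []
    for t, event in data:
        if event:
            # Each event at time t multiplies survival by (n_at_risk - 1) / n_at_risk
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1
    return curve

# Hypothetical follow-up times in months; 1 = event, 0 = censored
curve = kaplan_meier([5, 8, 12, 12, 20, 24], [1, 0, 1, 1, 0, 1])
```

Censored patients (event = 0) leave the risk set without stepping the curve down, which is how unequal follow-up among the three groups is accommodated.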
Psychosis is an illness characterised by the presence of hallucinations and delusions that can cause distress or a marked change in an individual's behaviour (e.g. social withdrawal, flat or blunted affect). A first episode of psychosis (FEP) is the first time someone experiences these symptoms; it can occur at any age, but the condition is most common in late adolescence and early adulthood. This review is concerned with FEP and the early stages of a psychosis, referred to throughout this review as 'recent-onset psychosis.' Specialised early intervention (SEI) teams are community mental health teams that specifically treat people who are experiencing, or have experienced, a recent-onset psychosis. The purpose of SEI teams is to intensively treat people with psychosis early in the course of the illness, with the goal of increasing the likelihood of recovery and reducing the need for longer-term mental health treatment. SEI teams provide a range of treatments including medication, psychotherapy, psychoeducation, and occupational, educational and employment support, augmented by assertive contact with the service user and small caseloads. Treatment is time limited, usually offered for two to three years, after which service users are either discharged to primary care or transferred to a standard adult community mental health team. A previous Cochrane Review of SEI found preliminary evidence that SEI may be superior to standard community mental health care (described as 'treatment as usual' (TAU) in this review), but these recommendations were based on data from only one trial. This review updates the evidence for the use of SEI services. To compare specialised early intervention (SEI) teams to treatment as usual (TAU) for people with recent-onset psychosis. On 3 October 2018 and 22 October 2019, we searched Cochrane Schizophrenia's study-based register of trials, including registries of clinical trials.
We selected all randomised controlled trials (RCTs) comparing SEI with TAU for people with recent-onset psychosis. We entered trials meeting these criteria and reporting usable data as included studies. We independently inspected citations, selected studies, extracted data and appraised study quality. For binary outcomes we calculated the risk ratios (RRs) and their 95% confidence intervals (CIs). For continuous outcomes we calculated the mean difference (MD) with 95% CIs, or if assessment measures differed for the same construct, we calculated the standardised mean difference (SMD) with 95% CIs. We assessed risk of bias for included studies and created a 'Summary of findings' table using the GRADE approach. We included three RCTs and one cluster-RCT with a total of 1145 participants. The mean age in the trials was between 23.1 years (RAISE) and 26.6 years (OPUS). The included participants were 405 females (35.4%) and 740 males (64.6%). All trials took place in community mental healthcare settings. Two trials reported on recovery from psychosis at the end of treatment, with evidence that SEI team care may result in more participants in recovery than TAU at the end of treatment (73% versus 52%; RR 1.41, 95% CI 1.01 to 1.97; 2 studies, 194 participants; low-certainty evidence). Three trials provided data on disengagement from services at the end of treatment, with fewer participants probably being disengaged from mental health services in SEI (8%) in comparison to TAU (15%) (RR 0.50, 95% CI 0.31 to 0.79; 3 studies, 630 participants; moderate-certainty evidence). There was low-certainty evidence that SEI may result in fewer admissions to psychiatric hospital than TAU at the end of treatment (52% versus 57%; RR 0.91, 95% CI 0.82 to 1.00; 4 studies, 1145 participants) and low-certainty evidence that SEI may result in fewer psychiatric hospital days (MD -27.00 days, 95% CI -53.68 to -0.32; 1 study, 547 participants).
Two trials reported on general psychotic symptoms at the end of treatment, with no evidence of a difference between SEI and TAU, although this evidence is very uncertain (SMD -0.41, 95% CI -4.58 to 3.75; 2 studies, 304 participants; very low-certainty evidence). A different pattern was observed in assessment of general functioning, with an end-of-trial difference that may favour SEI (SMD 0.37, 95% CI 0.07 to 0.66; 2 studies, 467 participants; low-certainty evidence). It was uncertain whether the use of SEI resulted in fewer deaths due to all-cause mortality at end of treatment (RR 0.21, 95% CI 0.04 to 1.20; 3 studies, 741 participants; low-certainty evidence). There was low risk of bias for random sequence generation and allocation concealment in three of the four included trials; the remaining trial had unclear risk of bias. Due to the nature of the intervention, we considered all trials at high risk of bias for blinding of participants and personnel. Two trials had low risk of bias and two trials had high risk of bias for blinding of outcome assessments. Three trials had low risk of bias for incomplete outcome data, while one trial had high risk of bias. Two trials had low risk of bias, one trial had high risk of bias, and one had unclear risk of bias for selective reporting. There is evidence that SEI may provide benefits to service users during treatment compared to TAU. These benefits probably include fewer disengagements from mental health services (moderate-certainty evidence), and may include small reductions in psychiatric hospitalisation (low-certainty evidence), and a small increase in global functioning (low-certainty evidence) and increased service satisfaction (moderate-certainty evidence). The evidence regarding the effect of SEI over TAU after treatment has ended is uncertain. Further evidence investigating the longer-term outcomes of SEI is needed.
Furthermore, all the eligible trials included in this review were conducted in high-income countries, and it is unclear whether these findings would translate to low- and middle-income countries, where both the intervention and the comparison conditions may be different.
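When trials measure the same construct on different scales, reviews like this pool a standardised mean difference rather than a raw MD; the usual form is Cohen's d with a pooled standard deviation. A sketch with hypothetical group summaries (the means, SDs and group sizes below are invented, not taken from the trials):

```python
import math

def smd(mean1, sd1, n1, mean2, sd2, n2):
    """Standardised mean difference (Cohen's d) using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical functioning scores: SEI mean 62 (SD 12, n = 230) vs TAU mean 58 (SD 13, n = 237)
d = smd(62, 12, 230, 58, 13, 237)
```

An SMD in the 0.3 to 0.4 range, as for general functioning above, is conventionally read as a small effect.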
The goal of bundled payments-lump monetary sums designed to cover the full set of services needed to provide care for a condition or medical event-is to provide a reimbursement structure that incentivizes improved value for patients. There is concern that such a payment mechanism may lead to patient screening, with orthopaedic care denied or provided based on the number and severity of comorbid conditions that are associated with complications after surgery. Currently, however, there is no clear consensus about whether such an association exists. In this systematic review, we asked: (1) Is the implementation of a bundled payment model associated with a change in the sociodemographic characteristics of patients undergoing an orthopaedic procedure? (2) Is the implementation of a bundled payment model associated with a change in the comorbidities and/or case-complexity characteristics of patients undergoing an orthopaedic procedure? (3) Is the implementation of a bundled payment model associated with a change in the recent use of healthcare resources characteristics of patients undergoing an orthopaedic procedure? This systematic review was registered in PROSPERO before data collection (CRD42020189416). Our systematic review included scientific manuscripts published in MEDLINE, Embase, Web of Science, Econlit, Policyfile, and Google Scholar through March 2020. Of the 30 studies undergoing full-text review, 20 were excluded because they did not evaluate the outcome of interest (patient selection) (n = 8); were editorial, commentary, or review articles (n = 5); did not evaluate the appropriate intervention (introduction of a bundled payment program) (n = 4); or assessed the wrong patient population (not orthopaedic surgery patients) (n = 3). This led to 10 studies included in this systematic review.
For each study, patient factors analyzed in the included studies were grouped into the following three categories: sociodemographics, comorbidities and/or case complexity, or recent use of healthcare resources characteristics. Next, each patient factor falling into one of these three categories was examined to evaluate for changes from before to after implementation of a bundled payment initiative. In most cases, studies utilized a difference-in-difference (DID) statistical technique to assess for changes. Determination of whether the bundled payment initiative required mandatory participation or not was also noted. Scientific quality using the Adapted Newcastle-Ottawa Scale had a median (range) score of 8 (7 to 8; highest possible score: 9), and the quality of the total body of evidence for each patient characteristic group was found to be low using the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) tool. We could not assess the likelihood of publication bias using funnel plots because of the variation in patient factors analyzed in each study, and the heterogeneity of the data precluded a meta-analysis. Of the nine included studies that reported on the sociodemographic characteristics of patients selected for care, seven showed no change with the implementation of bundled payments, and two demonstrated a difference. Most notably, the studies identified a decrease in the percentage of patients undergoing an orthopaedic operative intervention who were dual-eligible (range DID estimate -0.4% [95% CI -0.75% to -0.1%]; p < 0.05 to DID estimate -1.0% [95% CI -1.7% to -0.2%]; p = 0.01), which means they qualified for both Medicare and Medicaid insurance coverage. Of the 10 included studies that reported on comorbidities and case-complexity characteristics, six reported no change in such characteristics with the implementation of bundled payments, and four studies noted differences.
Most notably, one study showed a decrease in the number of treated patients with disabilities (DID estimate -0.6% [95% CI -0.97% to -0.18%]; p < 0.05) compared with before bundled payment implementation, while another demonstrated a lower number of Elixhauser comorbidities for those treated as part of a bundled payment program (before: score of 0-1 in 63.6%, 2-3 in 27.9%, > 3 in 8.5% versus after: score of 0-1 in 50.1%, 2-3 in 38.7%, > 3 in 11.2%; p = 0.033). Of the three included studies that reported on the recent use of healthcare resources of patients, one study found no difference in the use of healthcare resources with the implementation of bundled payments, and two studies did find differences. Both studies found a decrease in patients undergoing operative management who recently received care at a skilled nursing facility (range DID estimate -0.50% [95% CI -1.0% to 0.0%]; p = 0.04 to DID estimate -0.53% [95% CI -0.96% to -0.10%]; p = 0.01), while one of the studies also found a decrease in patients undergoing operative management who recently received care at an acute care hospital (DID estimate -0.8% [95% CI -1.6% to -0.1%]; p = 0.03) or as part of home healthcare (DID estimate -1.3% [95% CI -2.0% to -0.6%]; p < 0.001). In six of 10 studies in which differences in patient characteristics were detected among those undergoing operative orthopaedic intervention once a bundled payment program was initiated, the effect was found to be minimal (approximately 1% or less). However, our findings still suggest some level of adverse patient selection, potentially worsening health inequities when considered on a large scale. It is also possible that our findings reflect better care, whereby the financial incentives lead to fewer patients with a high risk of complications undergoing surgical intervention and vice versa for patients with a low risk of complications postoperatively.
However, this is a fine line, and it may also be that patients with a high risk of complications postoperatively are not being offered surgery enough, while patients at low risk of complications postoperatively are being offered surgery too frequently. Evaluation of the longer-term effect of these preliminary bundled payment programs on patient selection is warranted to determine whether adverse patient selection changes over time as health systems and orthopaedic surgeons become accustomed to such reimbursement models.
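Most of the estimates above come from the difference-in-difference (DID) technique noted in the methods: the pre-to-post change among bundled-payment participants minus the same change in a comparison group, which nets out background trends. The arithmetic, with hypothetical dual-eligible percentages:

```python
def did_estimate(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """DID: change in the treated group minus change in the control group."""
    return (post_treat - pre_treat) - (post_ctrl - pre_ctrl)

# Hypothetical share of dual-eligible patients (%) before/after bundled payments
effect = did_estimate(6.0, 5.2, 5.8, 5.4)  # -0.8 - (-0.4) = -0.4 percentage points
```

A negative estimate, as in the studies above, indicates the patient group shrank more (or grew less) under bundled payments than the comparison trend alone would predict.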
Partial deafness is a condition characterised by normal hearing thresholds at low frequencies and increased hearing thresholds (near deafness) at high frequencies. Typical hearing aids are of very limited use in this condition, as they do not improve speech understanding. Patients with partial deafness can now be treated with cochlear implants, which were previously avoided because of the risk that the electrode introduced into the cochlea would damage residual hearing. The purpose of our study was an objective and subjective assessment of voice quality in partial deafness patients before and after cochlear implantation. The subjects in this study were 25 post-lingual, bilaterally partially deaf patients, 13 females and 12 males. The reference group comprised 55 normal hearing individuals (28 females and 27 males). The acoustic analysis was performed with multidimensional voice analysis, MDVP (Multi-Dimensional Voice Program), and the subjective assessment was done with the GRBAS scale. Initial analysis of voice changes in partial deafness patients was performed versus normal hearing individuals; then voice parameters were measured and perceptual voice assessment was done before and 9 months after cochlear implantation. Measurements of acoustic voice parameters in partially deaf patients showed changes in most of the frequency, amplitude, noise and subharmonic components versus the normal hearing control group.
The most significant, statistically important changes were observed in fundamental frequency variation (vF0), absolute jitter (Jita), jitter percent (Jitt), amplitude perturbation quotient (APQ), smoothed amplitude perturbation quotient (sAPQ), relative average perturbation (RAP), peak amplitude variation (vAm), relative amplitude modulation (Shim), percent shimmer (%Shim), pitch perturbation quotient (PPQ), smoothed pitch perturbation quotient (sPPQ), degree of subharmonics (DSH), degree of voiceless (DUV), number of subharmonic segments (NSH), noise-to-harmonic ratio (NHR) and voice turbulence index (VTI). All patients in the study group underwent cochlear implantation. After 9 months, objective and subjective assessment of patients' voices was performed again. Statistically important changes were identified in average fundamental frequency variability (vF0), relative amplitude modulation index (ShdB), noise-to-harmonic ratio (NHR), number of subharmonic segments (NSH), degree of subharmonics (DSH) and degree of voiceless (DUV). Comparing the changes in objective voice parameters after cochlear implantation with the subjective, perceptual voice quality assessments shows that improvement in subjective voice quality after cochlear implantation parallels improvement in certain objective acoustic voice parameters, and some correlations exist. We found that G correlates with vF0 and Shim, R correlates with DSH and NSH, B correlates with NSH and NHR, and A correlates with DUV. We did not find a correlation between S and any of the objective parameters in our study group.
Our study showed that the acoustic and perceptual features of voice in partially deaf adults differ from those of normally hearing people, and that cochlear implantation in partial deafness patients is an effective tool to improve hearing and leads to improvement of the acoustic structure of the voice.
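Several of the MDVP measures listed above are simple perturbation statistics; jitter percent (Jitt), for example, relates the average cycle-to-cycle variation in pitch period to the mean period. A sketch on hypothetical glottal-period data (MDVP's exact windowing and smoothing may differ):

```python
def jitter_percent(periods):
    """Jitt (%): mean absolute difference between consecutive pitch periods,
    relative to the mean period, as in MDVP-style acoustic analysis."""
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    mean_abs_diff = sum(diffs) / len(diffs)
    mean_period = sum(periods) / len(periods)
    return 100 * mean_abs_diff / mean_period

# Hypothetical glottal periods in milliseconds from a sustained vowel
j = jitter_percent([8.0, 8.1, 7.9, 8.05, 7.95])
```

Higher values indicate less stable phonation, which is why these perturbation measures were elevated in the partially deaf group relative to the normal hearing controls.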
Although most patients who undergo anterior cruciate ligament (ACL) reconstruction achieve long-term functional stability and symptom relief, graft rupture rates range from 2% to 10%<sup>1,2</sup>. A small subset of these patients require a 2-stage revision ACL reconstruction because of tunnel osteolysis or tunnel malposition that will interfere with the planned revision tunnel placement<sup>3</sup>. In the present article, we describe the hybrid use of arthroscopically delivered injectable allograft matrix in the femur and pre-shaped bone dowels in the tibia for the treatment of lower-extremity bone deficiencies. After induction of anesthesia, approximately 60 cc of bone marrow aspirate is harvested from the anterior iliac crest with use of sterile techniques and is processed to obtain bone marrow aspirate concentrate. Routine diagnostic knee arthroscopy is performed via the standard anterolateral and anteromedial portals. Any additional intra-articular pathology is addressed, followed by excision of the remnant graft material, removal of existing femoral hardware as needed, and exposure of the existing bone tunnels. The femoral tunnel is debrided arthroscopically, removing all soft-tissue remnants. The existing tibial tunnel is exposed via the previous anteromedial tibial incision when possible. Again, any existing tibial hardware is removed. The tibial tunnel is then prepared with use of a combination of sequential reaming and dilation. A shaver and curets are utilized to debride the sclerotic walls of the tunnel and remove the remnant graft material. A cannulated allograft bone dowel is then impacted into place over a guidewire, ensuring that the graft is not proud within the joint space. An injectable bone allograft matrix composite is prepared by manually mixing 5 mL of StimuBlast demineralized bone matrix (Arthrex) and 5 mL of FlexiGraft cortical fibers (Arthrex), along with the previously obtained bone marrow aspirate concentrate.
Under dry arthroscopy, this bone graft is delivered into the femoral tunnel via a cannula with use of the anteromedial portal. Finally, a Freer elevator is used to contour the graft at the aperture of the tunnel. Graft osteointegration is mandatory prior to proceeding with the second stage of the procedure. Typically, a minimum 3-month follow-up is necessary to confirm adequate graft incorporation on computed tomography. As an alternative to the 2-stage procedure, previous studies have suggested the use of a single-stage revision utilizing cylindrical allografts or multiple "stacked screws."<sup>4-6</sup> In addition, a number of bone allograft and autograft options have been described. Autologous bone graft can be harvested from the ipsilateral iliac crest or proximal aspect of the tibia with use of a variety of techniques<sup>7-10</sup>. Allograft bone options include cancellous bone chips and commercially available bone matrices or dowels<sup>11-14</sup>. Finally, another viable option is calcium phosphate bone graft substitutes<sup>15</sup>. There is a paucity of high-quality studies comparing available bone graft materials for revision ACL reconstruction; thus, no consensus exists regarding the optimal choice<sup>16</sup>. A 2-stage approach is typically indicated for cases that demonstrate tunnel enlargement (>12 mm) that would compromise graft fixation or non-anatomic tunnel placement that will interfere with placement of the revision tibial tunnel<sup>3</sup>. The aim of the first stage is to re-establish adequate bone stock to optimize future tunnel placement and healing of the ACL graft during the second stage. We believe that this 2-stage approach is a reliable and safe method of treating enlarged, irregularly shaped bone tunnel defects while minimizing the risk of complications. Furthermore, the use of allograft material avoids the donor-site morbidity and volume limitations associated with the use of autograft bone.
In the case of the femoral tunnel, the injectable bone graft composite has the advantage of being easily delivered arthroscopically while completely filling irregularly shaped tunnels. The use of bone marrow aspirate concentrate may improve the rate of graft healing, and it also acts as a hydrating substance, reducing viscosity and facilitating the flow of the bone graft material through the cannula<sup>16,17</sup>. For the tibia, especially in cases of lengthy tibial bone deficiencies, allograft bone dowels are commercially available off-the-shelf in a variety of different lengths and diameters to allow for adequate fill of bone defects. It is well known that outcomes following revision ACL reconstruction are inferior to those following primary ACL reconstruction, with a number of variables, beyond those associated with the surgical technique, influencing clinical outcomes<sup>18</sup>. Few studies have reported on the results of 2-stage revision ACL reconstruction with use of allograft bone; however, a high rate of allograft bone integration and improved bone quality at the time of revision ACL reconstruction have been reported<sup>13</sup>. Moreover, Mitchell et al. reported no differences in either subjective outcomes or failure rates between the 1-stage and 2-stage revision ACL reconstruction groups<sup>11</sup>.
Utilize computed tomography for preoperative assessment and measurement of the extent of osteolysis. If possible, obtain the operative report for the index ACL procedure in order to identify any preexisting hardware and to obtain any instrumentation that may be needed to facilitate hardware removal. Multiple bone dowel sizes are available off the shelf. A 70° arthroscope can aid in visualization of the entire tibial and femoral tunnel. Although the bone graft matrix can be injected while the joint is filled with irrigation fluid, we find it easier to administer the graft under dry arthroscopic conditions. Place the scope inside the tibial tunnel to confirm appropriate removal of soft tissue and hardware; circumferential native cancellous bone should be visualized. It is acceptable to retain previous hardware if it does not interfere with the new tunnel placement. Utilize prior incisions to access the tibial tunnel. Do not underestimate the amount of bone graft needed for each tunnel. Avoid excessive force during impaction of the dowels. ACLR = anterior cruciate ligament reconstruction; BMAC = bone marrow aspirate concentrate; MRI = magnetic resonance imaging; CT = computed tomography; BTB = bone-patellar tendon-bone; DVT = deep vein thrombosis; ROM = range of motion.
Physical restraints, such as bedrails, belts in chairs or beds, and fixed tables, are commonly used for older people in general hospital settings. Reasons given for using physical restraints are to prevent falls and fall-related injuries, to control challenging behaviour (such as agitation or wandering), and to ensure the delivery of medical treatments. Clear evidence of their effectiveness is lacking, and potential harms are recognised, including injuries associated with the use of physical restraints and a negative impact on people's well-being. There are widespread recommendations that their use should be reduced or eliminated. To assess the best evidence for the effects and safety of interventions aimed at preventing and reducing the use of physical restraint of older people in general hospital settings. To describe the content, components and processes of these interventions. We searched the Cochrane Dementia and Cognitive Improvement Group's register, MEDLINE (Ovid SP), Embase (Ovid SP), PsycINFO (Ovid SP), CINAHL (EBSCOhost), Web of Science Core Collection (Clarivate), LILACS (BIREME), ClinicalTrials.gov and the World Health Organization's meta-register, the International Clinical Trials Registry Portal, on 20 April 2022. We included randomised controlled trials and controlled clinical trials that investigated the effects of interventions that aimed to prevent or reduce the use of physical restraints in general hospital settings. Eligible settings were acute care and rehabilitation wards. We excluded emergency departments, intensive care and psychiatric units, as well as the use of restrictive measures for penal reasons (e.g. prisoners in general medical wards). We included studies with a mean age of study participants of at least 65 years. Control groups received usual care or active control interventions that were ineligible for inclusion as experimental interventions.
Two review authors independently selected the articles for inclusion, extracted data, and assessed the risk of bias of all included studies. Data were unsuitable for meta-analysis, and we reported results narratively. We used GRADE methods to describe our certainty in the results. We included four studies: two randomised controlled trials (one individually-randomised, parallel-group trial and one clustered, stepped-wedge trial) and two controlled clinical trials (both with a clustered design). One study was conducted in general medical wards in Canada and three studies were conducted in rehabilitation hospitals in Hong Kong. A total of 1709 participants were included in three studies; in the fourth study the number of participants was not reported. The mean age ranged from 67 years to 84 years. The duration of follow-up covered the period of patients' hospitalisation in one study (21 days average length of stay) and ranged from 4 to 11 months in the other studies. The definition of physical restraints differed slightly, and one study did not include bedrails. Three studies investigated organisational interventions aimed at implementing a least-restraint policy to reduce physical restraints. The theoretical approach of the interventions and the content of the educational components was comparable across studies. The fourth study investigated the use of pressure sensors for participants with an increased falls risk, which gave an alarm if the participant left the bed or chair. Control groups in all studies received usual care. Three studies were at high risk of selection bias and risk of detection bias was unclear in all studies. Because of very low-certainty evidence, we are uncertain about the effect of organisational interventions aimed at implementing a least-restraint policy on our primary efficacy outcome: the use of physical restraints in general hospital settings. 
One study found an increase in the number of participants with at least one physical restraint in the intervention and control groups, one study found a small reduction in both groups, and in the third study (the stepped-wedge study), the number of participants with at least one physical restraint decreased in all clusters after implementation of the intervention but no detailed information was reported. For the use of bed or chair pressure sensor alarms for people with an increased fall risk, we found moderate-certainty evidence of little to no effect of the intervention on the number of participants with at least one physical restraint compared with usual care. None of the studies systematically assessed adverse events related to the use of physical restraints, e.g. direct injuries, or reported such events. We are uncertain about the effect of organisational interventions aimed at implementing a least-restraint policy on the number of participants with at least one fall (very low-certainty evidence), and there was no evidence that organisational interventions or the use of bed or chair pressure sensor alarms for people with an increased fall risk reduce the number of falls (low-certainty evidence from one study each). None of the studies reported fall-related injuries. We found low-certainty evidence that organisational interventions may result in little to no difference in functioning (including mobility), and moderate-certainty evidence that the use of bed or chair pressure sensor alarms has little to no effect on mobility. We are uncertain about the effect of organisational interventions on the use of psychotropic medication; one study found no difference in the prescription of psychotropic medication. We are uncertain about the effect of organisational interventions on nurses' attitudes and knowledge about the use of physical restraints (very low-certainty evidence).
We are uncertain whether organisational interventions aimed at implementing a least-restraint policy can reduce physical restraints in general hospital settings. The use of pressure sensor alarms in beds or chairs for people with an increased fall risk probably has little to no effect on the use of physical restraints. Because of the small number of studies and the study limitations, the results should be interpreted with caution. Further research on effective strategies to implement a least-restraint policy and to overcome barriers to physical restraint reduction in general hospital settings is needed.
Lead (Pb) is a naturally occurring element that poses environmental hazards when present at elevated concentrations. It is released into the environment through industrial uses and the combustion of fossil fuels; hence, Pb is ubiquitous throughout global ecosystems. The existence of potentially harmful concentrations of Pb in the environment must be given full attention. Emissions from vehicles are a major source of environmental contamination by Pb. Thus, it becomes imperative that concentrations of Pb and other hazardous materials in the environment, not only in the Philippines but elsewhere in the world, be adequately examined so that the development of regulations and standards to minimize risk associated with these materials in urban areas can continue. The objectives of this study were: (1) to determine the levels of Pb in soil from selected urbanized cities in the central region of the Philippines; (2) to identify areas with soil Pb concentration values that exceed estimated natural concentrations and allowable limits; and (3) to determine the possible sources that contribute to elevated soil Pb concentrations (if any) in the study area. This study was limited to the determination of Pb levels in soils of selected urbanized cities in the central region of the Philippines, namely: Site 1--Tarlac City in Tarlac; Site 2--Cabanatuan City in Nueva Ecija; Site 3--Malolos City in Bulacan; Site 4--San Fernando City in Pampanga; Site 5--Balanga City in Bataan; and Site 6--Olongapo City in Zambales. Soil samples were collected from areas along major thoroughfares regularly traversed by tricycles, passenger jeepneys, cars, vans, trucks, buses, and other motor vehicles. Soil samples were collected from five sampling sites in each of the study areas. Samples from the selected sampling sites were obtained approximately 2 to 3 meters from the road. Analysis of the soil samples for Pb content was conducted using an atomic absorption spectrophotometer.
This study was conducted from 2003 to 2004. Since this study assumed that vehicular emission is the major source of Pb contamination in urban soil, other information deemed to have a bearing on the study was obtained, such as the relative quantity of each gasoline type sold in each city within a given period and the volume of traffic at each sampling site. A survey questionnaire for gasoline station managers was prepared to determine the relative quantity of each fuel type (diesel, regular gasoline, premium gasoline, and unleaded gasoline) sold within a given period in each study area. Analysis of soil samples for Pb content showed the presence of Pb in all the soil samples collected from the 30 sampling sites in the six cities, at varying concentrations ranging from 1.5 to 251 mg/kg. Elevated levels of Pb in soil (i.e. greater than 25 mg/kg) were detected in five of the six cities investigated. Site 4 recorded the highest Pb concentration (73.9 +/- 94.4 mg/kg), followed by Site 6 (56.3 +/- 17.1 mg/kg), Site 3 (52.0 +/- 33.1 mg/kg), Site 5 (39.3 +/- 19.0 mg/kg), and Site 2 (38.4 +/- 33.2 mg/kg). Soil Pb concentration in Site 1 (16.8 +/- 12.2 mg/kg) was found to be within the estimated natural concentration range of 5 to 25 mg/kg; Site 1 registered the lowest Pb concentration. Nonetheless, the average Pb concentrations in the soil samples from the six cities studied were all found to be below the maximum tolerable limit according to World Health Organization (WHO) standards. The high Pb concentration in Site 4 may be attributed mainly to vehicular emission. Although Site 4 ranked only 3rd in total volume of vehicles, it has the greatest number of Type B and Type C vehicles combined. Included in these categories are diesel trucks, buses, and jeepneys, which are considered the largest contributors of TSP (total suspended particles) and PM10 (particulate matter less than 10 microns) emissions.
Only one of the thirty sampling sites (San Juan in Site 4) recorded a Pb concentration beyond the WHO permissible limit of 100 mg/kg; its Pb concentration was >250 mg/kg. On average, elevated Pb concentrations were evident in the soil samples from San Fernando, Olongapo, Malolos, Balanga, and Cabanatuan; the average soil Pb concentrations in these cities exceeded the maximum estimated natural soil Pb concentration of 25 mg/kg. The average soil Pb concentration in Site 1 (16.8 mg/kg) was well within the estimated natural concentration range of 5 to 25 mg/kg. Data gathered from the study areas showed that elevated levels of Pb in soil were due primarily to vehicular emissions and partly to igneous activity. The findings of this study present a preliminary survey of the extent of Pb contamination of soils in urban cities in the central region of the Philippines. With this kind of information on hand, the government should develop a comprehensive environmental management strategy to address vehicular air pollution in urban areas, which stands as one of the most pressing environmental problems in the country. Basic to this is the continuous monitoring of Pb levels and other pollutants in air, soil, and water. Further studies should be conducted to monitor soil Pb levels in the six cities studied, particularly in areas with elevated Pb concentrations. The potential for harm from Pb exposure cannot be overstated. Of particular concern are children, who are more predisposed to Pb toxicity than adults. Phytoremediation of Pb-contaminated sites is strongly recommended to reduce Pb concentration in soil. Several studies have confirmed that plants are capable of absorbing extra Pb from soil and that some plants, grass species in particular, can naturally absorb far more Pb than others.
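The thresholds applied in the study can be expressed as a short classification routine. The sketch below uses the per-city mean soil Pb values and the limits reported above (natural range 5 to 25 mg/kg, WHO limit 100 mg/kg); the dictionary keys and function name are ours, introduced only for illustration.

```python
# Thresholds and mean soil Pb values (mg/kg) as reported in the study.
NATURAL_MAX = 25.0   # upper end of the estimated natural range (5-25 mg/kg)
WHO_LIMIT = 100.0    # WHO permissible limit for soil Pb

site_means = {
    "Tarlac City (Site 1)": 16.8,
    "Cabanatuan City (Site 2)": 38.4,
    "Malolos City (Site 3)": 52.0,
    "San Fernando City (Site 4)": 73.9,
    "Balanga City (Site 5)": 39.3,
    "Olongapo City (Site 6)": 56.3,
}

def classify(pb_mg_per_kg):
    """Bucket a soil Pb concentration against the study's thresholds."""
    if pb_mg_per_kg > WHO_LIMIT:
        return "above WHO limit"
    if pb_mg_per_kg > NATURAL_MAX:
        return "elevated"
    return "within natural range"

elevated_sites = sorted(name for name, pb in site_means.items()
                        if classify(pb) == "elevated")
```

Run against the reported means, the routine reproduces the study's summary: five of the six cities fall in the elevated band, Tarlac City alone stays within the natural range, and only the San Juan sample (>250 mg/kg) exceeds the WHO limit.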
Adult urodele amphibians, if deprived of the tail, are able to fully regenerate it. This occurs through a typical epimorphic phenomenon that takes place in several phases. On the reconstruction of the caudal nervous component in particular, the literature reports a great deal of research, but almost exclusively after a single amputation of the tail. Aware of these data, we planned to investigate whether the medullary regenerative power persists, decreases or disappears after repeated amputations of the regenerated tail. With this objective in view, we performed on adult Triturus carnifex a series of such operations spaced out in time. In previous experiments the tail had been amputated first seven and then nine times; in the current experiment the same specimens were subjected to further removals of the tail. This study developed over three cycles spanning more than ten years. Overall, the picture arising from the integration of the present results with those of former observations agrees, in the area where medullary regeneration processes begin, with the bibliographic information concerning the pre-blastematic and blastematic phases. In the subsequent morphogenetic and differentiative phases, however, with each recurrence of the re-establishment of the spinal cord these events proceed more slowly (a gap that narrows as the time interval from the operation increases) than in spinal cords regenerated after only one tail amputation.
Furthermore, although the regenerated spinal cords show some anomalies compared with the normal spinal cord (regarding medullary length and diameter, and the distribution of the spinal nerves and ganglia), and are, as is well known, incapable of re-forming the Mauthner fibres while being supplied with Rohon-Beard sensory neurons, they, together with their nerves and ganglia, reacquire the same complex structural organization as the normal spinal cord (where, as is known, the larval Rohon-Beard neurons are absent, because they play the same role as the spinal ganglia in adult life and disappear when these ganglia first appear). Therefore, at least within the numerical bounds of our tail amputations, the medullary regenerative potential does not seem to decrease. At the start of our investigations, aware that questions about the morphogenesis of the regenerated spinal cord remained unresolved, two antithetic hypotheses had been proposed. We doubted that the extent of mitotic activity alone could account for the quick reacquisition of a regenerated spinal cord superimposable on a normal one. On reflection, we tended towards the hypothesis that this regeneration is due to a trans-differentiative process triggered in the tissues of the tail stump by the impulse following amputation. In order to obtain a complete picture of the proliferative possibilities mainly, if not exclusively, responsible for these phenomena, which could support this propensity of ours, we also designed the current experiments around a parallel twofold approach. Therefore, as in past studies, we analysed the proliferative activities in progress through karyokineses, and in addition we attempted to unmask possible latent proliferative activity represented by elements in the S phase of their cell cycle.
To this end, an appropriate proliferation marker was chosen: Proliferating Cell Nuclear Antigen (PCNA). Mitoses and signals of prospective proliferative activity revealed by this immunocytochemical marker are localizable in the ependyma and the periventricular grey matter. In the normal spinal cord an irrelevant karyokinetic activity coexists with a considerably higher expression of PCNA. Against this physiological proliferative picture, both ongoing and potential, in the regenerating and regenerated spinal cords the number of mitoses and of cells showing DNA synthesis was found to be, if not negligible, modest, and on the whole inadequate to sustain the regeneration events in progress and later ones possible after further amputations of the tail. Based on the evidence at present available, one could hypothesize that the impulse following amputation of the normal tail operates primarily on the natural, incomparable initial reserve of cycling cells in the S phase, detected immunoreactively, which would be the depository of silent medullary proliferative potential; these cells, leaving their standby condition, would mobilize and, passing through the M phase, set out towards differentiation. These undifferentiated cells would therefore be mainly responsible for the first medullary regenerative event. Such a scenario would support those Authors who suggested that these elements play a decisive role in the regenerative processes, although those Authors limited their observations to a single amputation of the tail. After this event, once the considerable initial stock of undifferentiated cells has irreparably dropped, one could then suppose that the shock following each new amputation promotes, in the stump of the amputated tail, trans-differentiative processes which would become of primary importance for the subsequent new medullary regenerations.
This interpretation therefore implies that the shock has a different primary target depending on whether it follows the first or a subsequent amputation of the tail. In the dispute regarding the genesis of the regenerated spinal cord in adult urodele amphibians, such a view, taking current data into consideration, would make it possible, to a certain extent, to reconcile the two contrasting hypotheses previously advanced and to put an end to the doubts we expressed in the past, when, in supporting the hypothesis of trans-differentiative activity, we hesitated to claim it was solely responsible for these events.
Self-management education programmes are complex interventions specifically targeted at patient education and behaviour modification. They are designed to encourage people with chronic disease to take an active self-management role to supplement medical care and improve outcomes. To assess the effectiveness of self-management education programmes for people with osteoarthritis. The Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, PsycINFO, SCOPUS and the World Health Organization (WHO) International Clinical Trials Registry Platform were searched, without language restriction, on 17 January 2013. We checked references of reviews and included trials to identify additional studies. Randomised controlled trials of self-management education programmes in people with osteoarthritis were included. Studies in which participants were passive recipients of care and studies comparing one type of programme versus another were excluded. In addition to standard methods, we extracted components of the self-management interventions using the eight domains of the Health Education Impact Questionnaire (heiQ), and contextual and participant characteristics using PROGRESS-Plus and the Health Literacy Questionnaire (HLQ). Outcomes included self-management of osteoarthritis, participants' positive and active engagement in life, pain, global symptom score, self-reported function, quality of life and withdrawals (including dropouts and those lost to follow-up). We assessed the quality of the body of evidence for these outcomes using the GRADE approach. We included twenty-nine studies (6,753 participants) that compared self-management education programmes with attention control (five studies), usual care (17 studies), information alone (four studies) or another intervention (seven studies).
Although heterogeneous, most interventions included elements of skill and technique acquisition (94%), health-directed activity (85%) and self-monitoring and insight (79%); social integration and support were addressed in only 12%. Most studies did not provide enough information to assess all PROGRESS-Plus items. Eight studies included predominantly Caucasian, educated female participants, and only four provided any information on participants' health literacy. All studies were at high risk of performance and detection bias for self-reported outcomes; 20 studies were at high risk of selection bias, 16 were at high risk of attrition bias, two were at high risk of reporting bias and 12 were at risk of other biases. We deemed attention control the most appropriate and thus the main comparator. Compared with attention control, self-management programmes may not result in significant benefits at 12 months. Low-quality evidence from one study (344 people) indicates that self-management skills were similar in active and control groups: 5.8 points on a 10-point self-efficacy scale in the control group, with a mean difference (MD) between groups of 0.4 points (95% confidence interval (CI) -0.39 to 1.19). Low-quality evidence from four studies (575 people) indicates that self-management programmes may lead to a small but clinically unimportant reduction in pain: the standardised mean difference (SMD) between groups was -0.26 (95% CI -0.44 to -0.09); pain was 6 points on a 0 to 10 visual analogue scale (VAS) in the control group, and treatment resulted in a mean reduction of 0.8 points (95% CI -1.4 to -0.3) on a 10-point scale, with a number needed to treat for an additional beneficial outcome (NNTB) of 8 (95% CI 5 to 23). Low-quality evidence from one study (251 people) indicates that the mean global osteoarthritis score was 4.2 on a 0 to 10-point symptom scale (lower better) in the control group, and treatment reduced symptoms by a mean of 0.14 points (95% CI -0.54 to 0.26).
This result does not exclude the possibility of a clinically important benefit in some people (a 0.5-point reduction is included in the 95% CI). Low-quality evidence from three studies (574 people) showed no significant difference in function between groups (SMD -0.19, 95% CI -0.5 to 0.11); mean function was 1.29 points on a 0 to 3-point scale in the control group, and treatment resulted in a mean improvement of 0.04 points with self-management (95% CI -0.10 to 0.02). Low-quality evidence from one study (165 people) showed no between-group difference in quality of life (MD -0.01, 95% CI -0.03 to 0.01) from a control group mean of 0.57 units on a 0 to 1 well-being scale. Moderate-quality evidence from five studies (937 people) shows similar withdrawal rates between self-management (13%) and control (12%) groups: RR 1.11 (95% CI 0.78 to 1.57). Positive and active engagement in life was not measured. Compared with usual care, moderate-quality evidence from 11 studies (up to 1,706 participants) indicates that self-management programmes probably provide small benefits for up to 21 months in terms of self-management skills, pain, osteoarthritis symptoms and function, although these are of doubtful clinical importance, and no improvement in positive and active engagement in life or quality of life. Withdrawal rates were similar. Low to moderate quality evidence indicates no important differences in self-management, pain, symptoms, function, quality of life or withdrawal rates between self-management programmes and information alone or other interventions (exercise, physiotherapy, social support or acupuncture).
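The risk ratios quoted in these comparisons can be reproduced, for a single 2x2 table, with a few lines of arithmetic. The sketch below computes a risk ratio with a Wald 95% confidence interval on the log scale; the function name is ours, and the illustrative counts are hypothetical, chosen only to approximate the reported 13% versus 12% withdrawal rates: the review's pooled estimate combines five studies with study-level weighting that a single table cannot reproduce.

```python
import math

def risk_ratio_ci(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    """Risk ratio with a Wald 95% CI computed on the log scale."""
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    # Standard error of ln(RR) for a single 2x2 table.
    se_log_rr = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical counts approximating 13% vs 12% withdrawals in ~937 people.
rr, lo, hi = risk_ratio_ci(61, 469, 56, 468)
```

With these made-up counts the point estimate lands near the reported RR of 1.11, and the interval straddles 1, matching the review's conclusion of similar withdrawal rates.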
Low to moderate quality evidence indicates that self-management education programmes result in no or small benefits in people with osteoarthritis but are unlikely to cause harm. Compared with attention control, these programmes probably do not improve self-management skills, pain, osteoarthritis symptoms, function or quality of life, and have unknown effects on positive and active engagement in life. Compared with usual care, they may slightly improve self-management skills, pain, function and symptoms, although these benefits are of doubtful clinical importance. Further studies investigating the effects of self-management education programmes, as delivered in the trials in this review, are unlikely to change our conclusions substantially, as confounding from biases across studies would likely have favoured self-management. However, trials assessing other models of self-management education programme delivery may be warranted. These should adequately describe the intervention they deliver and consider the expanded PROGRESS-Plus framework and health literacy, to explore issues of health equity for recipients.
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Diabetic retinopathy is a common complication of diabetes and a leading cause of visual impairment and blindness. Research has established the importance of blood glucose control to prevent development and progression of the ocular complications of diabetes. Simultaneous blood pressure control has been advocated for the same purpose, but findings reported from individual studies have supported varying conclusions regarding the ocular benefit of interventions on blood pressure. The primary aim of this review was to summarize the existing evidence regarding the effect of interventions to control or reduce blood pressure levels among diabetics on incidence and progression of diabetic retinopathy, preservation of visual acuity, adverse events, quality of life, and costs. A secondary aim was to compare classes of anti-hypertensive medications with respect to the same outcomes. We searched a number of electronic databases including CENTRAL as well as ongoing trial registries. We last searched the electronic databases on 25 April 2014. We also reviewed reference lists of review articles and trial reports selected for inclusion. In addition, we contacted investigators of trials with potentially pertinent data. We included in this review randomized controlled trials (RCTs) in which either type 1 or type 2 diabetic participants, with or without hypertension, were assigned randomly to intense versus less intense blood pressure control, to blood pressure control versus usual care or no intervention on blood pressure, or to different classes of anti-hypertensive agents versus placebo. Pairs of review authors independently reviewed titles and abstracts from electronic and manual searches and the full text of any document that appeared to be relevant. We assessed included trials independently for risk of bias with respect to outcomes reported in this review. 
We extracted data regarding trial characteristics, incidence and progression of retinopathy, visual acuity, quality of life, and cost-effectiveness at annual intervals after study entry whenever provided in published reports and other documents available from included trials. We included 15 RCTs, conducted primarily in North America and Europe, that had enrolled 4157 type 1 and 9512 type 2 diabetic participants, ranging from 16 to 2130 participants in individual trials. In 10 of the 15 RCTs, one group of participants was assigned to one or more anti-hypertensive agents and the control group received placebo. In three trials, intense blood pressure control was compared to less intense blood pressure control. In the remaining two trials, blood pressure control was compared with usual care. Five of the 15 trials enrolled type 1 diabetics, and 10 trials enrolled type 2 diabetics. Six trials were sponsored entirely by pharmaceutical companies, seven trials received partial support from pharmaceutical companies, and two studies received support from government-sponsored grants and institutional support. Study designs, populations, interventions, and lengths of follow-up (range one to nine years) varied among the included trials. Overall, the quality of the evidence for individual outcomes was low to moderate. For the primary outcomes, incidence and progression of retinopathy, the quality of evidence was downgraded due to inconsistency and imprecision of estimates from individual studies and differing characteristics of participants. For primary outcomes among type 1 diabetics, one of the five trials reported incidence of retinopathy and one trial reported progression of retinopathy after 4 to 5 years of treatment and follow-up; four of the five trials reported a combined outcome of incidence and progression over the same time interval.
Among type 2 diabetics, 5 of the 10 trials reported incidence of diabetic retinopathy and 3 trials reported progression of retinopathy; one of the 10 trials reported a combined outcome of incidence and progression during a 4- to 5-year follow-up period. One trial in which type 2 diabetics participated reported no primary (or secondary) outcome targeted for this review. The evidence from these trials supported a benefit of more intensive blood pressure control intervention with respect to 4- to 5-year incidence of diabetic retinopathy (estimated risk ratio (RR) 0.80; 95% confidence interval (CI) 0.71 to 0.92) and the combined outcome of incidence and progression (estimated RR 0.78; 95% CI 0.63 to 0.97). The available evidence provided less support for a benefit with respect to 4- to 5-year progression of diabetic retinopathy (the point estimate was closer to 1 than the point estimates for incidence and for combined incidence and progression, and the CI overlapped 1; estimated RR 0.88; 95% CI 0.73 to 1.05). The available evidence regarding progression to proliferative diabetic retinopathy or clinically significant macular edema or moderate to severe loss of best-corrected visual acuity did not support a benefit of intervention on blood pressure: estimated RRs and 95% CIs 0.95 (0.83 to 1.09) and 1.06 (0.85 to 1.33), respectively, after 4 to 5 years of follow-up. Findings within subgroups of trial participants (type 1 and type 2 diabetics; participants with normal blood pressure levels at baseline and those with elevated levels) were similar to overall findings. The adverse event reported most often (7 of 15 trials) was death, yielding an estimated RR of 0.86 (95% CI 0.64 to 1.14). Hypotension was reported from three trials; the estimated RR was 2.08 (95% CI 1.68 to 2.57). Other adverse ocular events were reported from single trials. Hypertension is a well-known risk factor for several chronic conditions in which lowering blood pressure has proven to be beneficial.
The available evidence supports a beneficial effect of intervention to reduce blood pressure with respect to preventing diabetic retinopathy for up to 4 to 5 years. However, the lack of evidence to support such intervention to slow progression of diabetic retinopathy or to prevent other outcomes considered in this review, along with the relatively modest support for the beneficial effect on incidence, weakens the conclusion regarding an overall benefit of intervening on blood pressure solely to prevent diabetic retinopathy.
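The dichotomous outcomes in the abstract above (incidence, progression, death) are summarised as risk ratios with 95% confidence intervals. A sketch of the standard log-RR normal-approximation calculation, shown with hypothetical 2x2 counts rather than data from any included trial:

```python
import math

def risk_ratio(events_tx, n_tx, events_ctl, n_ctl):
    """Risk ratio with a 95% CI via the log(RR) normal approximation."""
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    # Standard error of log(RR) from the 2x2 cell counts
    se_log_rr = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)
    lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
    upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lower, upper

# Hypothetical counts: 40/100 events with intervention vs 50/100 with control
rr, lo, hi = risk_ratio(40, 100, 50, 100)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # -> 0.8 0.59 1.09
```

Note how an RR of 0.80 can still carry a CI that crosses 1 at this sample size; the review's tighter interval (0.71 to 0.92) reflects pooling across much larger trials.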
Cannabis has a long history of medicinal use. Cannabis-based medications (cannabinoids) are based on its active element, delta-9-tetrahydrocannabinol (THC), and have been approved for medical purposes. Cannabinoids may be a useful therapeutic option for people with chemotherapy-induced nausea and vomiting that respond poorly to commonly used anti-emetic agents (anti-sickness drugs). However, unpleasant adverse effects may limit their widespread use. To evaluate the effectiveness and tolerability of cannabis-based medications for chemotherapy-induced nausea and vomiting in adults with cancer. We identified studies by searching the following electronic databases: Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, PsycINFO and LILACS from inception to January 2015. We also searched reference lists of reviews and included studies. We did not restrict the search by language of publication. We included randomised controlled trials (RCTs) that compared a cannabis-based medication with either placebo or with a conventional anti-emetic in adults receiving chemotherapy. At least two review authors independently conducted eligibility and risk of bias assessment, and extracted data. We grouped studies based on control groups for meta-analyses conducted using random effects. We expressed efficacy and tolerability outcomes as risk ratio (RR) with 95% confidence intervals (CI). We included 23 RCTs. Most were of cross-over design, on adults undergoing a variety of chemotherapeutic regimens ranging from moderate to high emetic potential for a variety of cancers. The majority of the studies were at risk of bias due to either lack of allocation concealment or attrition. Trials were conducted between 1975 and 1991. No trials involved comparison with newer anti-emetic drugs such as ondansetron. 
Comparison with placebo: People had more chance of reporting complete absence of vomiting (3 trials; 168 participants; RR 5.7; 95% CI 2.6 to 12.6; low quality evidence) and complete absence of nausea and vomiting (3 trials; 288 participants; RR 2.9; 95% CI 1.8 to 4.7; moderate quality evidence) when they received cannabinoids compared with placebo. The percentage of variability in effect estimates that was due to heterogeneity rather than chance was not important (I(2) = 0% in both analyses). People had more chance of withdrawing due to an adverse event (2 trials; 276 participants; RR 6.9; 95% CI 1.96 to 24; I(2) = 0%; very low quality evidence) and less chance of withdrawing due to lack of efficacy when they received cannabinoids, compared with placebo (1 trial; 228 participants; RR 0.05; 95% CI 0.0 to 0.89; low quality evidence). In addition, people had more chance of 'feeling high' when they received cannabinoids compared with placebo (3 trials; 137 participants; RR 31; 95% CI 6.4 to 152; I(2) = 0%). People reported a preference for cannabinoids rather than placebo (2 trials; 256 participants; RR 4.8; 95% CI 1.7 to 13; low quality evidence). Comparison with other anti-emetics: There was no evidence of a difference between cannabinoids and prochlorperazine in the proportion of participants reporting no nausea (5 trials; 258 participants; RR 1.5; 95% CI 0.67 to 3.2; I(2) = 63%; low quality evidence), no vomiting (4 trials; 209 participants; RR 1.11; 95% CI 0.86 to 1.44; I(2) = 0%; moderate quality evidence), or complete absence of nausea and vomiting (4 trials; 414 participants; RR 2.0; 95% CI 0.74 to 5.4; I(2) = 60%; low quality evidence).
Sensitivity analysis in which the two parallel-group trials were pooled after removal of the five cross-over trials showed no difference (RR 1.1; 95% CI 0.70 to 1.7) with no heterogeneity (I(2) = 0%). People had more chance of withdrawing due to an adverse event (5 trials; 664 participants; RR 3.9; 95% CI 1.3 to 12; I(2) = 17%; low quality evidence), due to lack of efficacy (1 trial; 42 participants; RR 3.5; 95% CI 1.4 to 8.9; very low quality evidence) and for any reason (1 trial; 42 participants; RR 3.5; 95% CI 1.4 to 8.9; low quality evidence) when they received cannabinoids compared with prochlorperazine. People had more chance of reporting dizziness (7 trials; 675 participants; RR 2.4; 95% CI 1.8 to 3.1; I(2) = 12%), dysphoria (3 trials; 192 participants; RR 7.2; 95% CI 1.3 to 39; I(2) = 0%), euphoria (2 trials; 280 participants; RR 18; 95% CI 2.4 to 133; I(2) = 0%), 'feeling high' (4 trials; 389 participants; RR 6.2; 95% CI 3.5 to 11; I(2) = 0%) and sedation (8 trials; 947 participants; RR 1.4; 95% CI 1.2 to 1.8; I(2) = 31%), with significantly more participants reporting these adverse events with cannabinoids than with prochlorperazine. People reported a preference for cannabinoids rather than prochlorperazine (7 trials; 695 participants; RR 3.3; 95% CI 2.2 to 4.8; I(2) = 51%; low quality evidence). In comparisons with metoclopramide, domperidone and chlorpromazine, there was weaker evidence, based on fewer trials and participants, for a higher incidence of dizziness with cannabinoids. Two trials with 141 participants compared an anti-emetic drug alone with a cannabinoid added to the anti-emetic drug. There was no evidence of differences between groups; however, the majority of the analyses were based on one small trial with few events. Quality of the evidence: The trials were generally at low to moderate risk of bias in terms of how they were designed and do not reflect current chemotherapy and anti-emetic treatment regimens.
Furthermore, the quality of evidence arising from meta-analyses was graded as low for the majority of the outcomes analysed, indicating that we are not very confident in our ability to say how well the medications worked. Further research is likely to have an important impact on the results. Cannabis-based medications may be useful for treating refractory chemotherapy-induced nausea and vomiting. However, methodological limitations of the trials limit our conclusions and further research reflecting current chemotherapy regimens and newer anti-emetic drugs is likely to modify these conclusions.
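The pooled estimates above were obtained with random-effects meta-analysis and are flagged with I(2), the percentage of variability due to heterogeneity rather than chance. A compact sketch of DerSimonian-Laird pooling and the I(2) statistic, using invented per-trial effects and variances rather than the review's data:

```python
def random_effects_pool(effects, variances):
    """DerSimonian-Laird pooling: returns (pooled effect, tau^2, I^2 as %)."""
    weights = [1 / v for v in variances]           # inverse-variance weights
    fixed = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    # Cochran's Q measures between-trial dispersion around the fixed estimate
    q = sum(w * (y - fixed) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    c = sum(weights) - sum(w * w for w in weights) / sum(weights)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0   # between-trial variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # Re-weight each trial by 1/(within-trial variance + tau^2)
    re_weights = [1 / (v + tau2) for v in variances]
    pooled = sum(w * y for w, y in zip(re_weights, effects)) / sum(re_weights)
    return pooled, tau2, i2

# Invented log-RRs and variances from three hypothetical trials
pooled, tau2, i2 = random_effects_pool([0.2, 0.5, 0.8], [0.04, 0.05, 0.04])
print(round(pooled, 2), round(i2, 1))  # -> 0.5 55.6
```

An I(2) in this range would conventionally be read as moderate-to-substantial heterogeneity, which is why several of the review's pooled results are reported alongside their I(2) values.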
There is growing research and policy interest in the potential for using the natural environment to enhance human health and well-being. This resource may be underused as a health promotion tool to address the increasing burden of common health problems such as chronic diseases and mental health concerns. Outdoor environmental enhancement and conservation activities (EECA) (for instance, unpaid litter picking, tree planting or path maintenance) offer opportunities for physical activity alongside greater connectedness with local environments, enhanced social connections within communities and improved self-esteem through activities that improve the locality, which may, in turn, further improve well-being. To assess the health and well-being impacts on adults following participation in environmental enhancement and conservation activities. We contacted or searched the websites of more than 250 EECA organisations to identify grey literature. Resource limitations meant the majority of the websites were from the UK, USA, Canada and Australia. We searched the following databases (initially in October 2012, updated October 2014, except CAB Direct, OpenGrey, SPORTDiscus, and TRIP Database), using a search strategy developed with our project advisory groups (predominantly leaders of EECA-type activities and methodological experts): ASSIA; BIOSIS; British Education Index; British Nursing Index; CAB Abstracts; Campbell Collaboration; Cochrane Public Health Specialized Register; DOPHER; EMBASE; ERIC; Global Health; GreenFILE; HMIC; MEDLINE-in-Process; MEDLINE; OpenGrey; PsycINFO; Social Policy and Practice; SPORTDiscus; TRoPHI; Social Services Abstracts; Sociological Abstracts; The Cochrane Library; TRIP database; and Web of Science. Citation and related-article chasing was used. Searches were limited to studies in English published after 1990. Two review authors independently screened studies. Included studies examined the impact of EECA on adult health and well-being.
Eligible interventions needed to meet each of the following criteria: intended to improve the outdoor natural or built environment at either a local or wider level; took place in urban or rural locations in any country; involved active participation; and were NOT experienced through paid employment. We included quantitative and qualitative research. Includable quantitative study designs were: randomised controlled trials (RCTs), cluster RCTs, quasi-RCTs, cluster quasi-RCTs, controlled before-and-after studies, interrupted time series, cohort studies (prospective or retrospective), case-control studies and uncontrolled before-and-after studies (uBA). We included qualitative research if it used recognised qualitative methods of data collection and analysis. One reviewer extracted data, and another reviewer checked the data. Two review authors independently appraised study quality using the Effective Public Health Practice Project tool (for quantitative studies) or the Wallace criteria (for qualitative studies). Heterogeneity of outcome measures and poor reporting of intervention specifics prevented meta-analysis, so we synthesised the results narratively. We synthesised qualitative research findings using thematic analysis. Database searches identified 21,420 records, with 21,304 excluded at title/abstract screening. Grey literature searches identified 211 records. We screened 327 full-text articles, from which we included 21 studies (reported in 28 publications): two case studies (which were not included in the synthesis due to inadequate robustness), one case-control study, one retrospective cohort study, five uBA, three mixed-method (uBA plus qualitative), and nine qualitative studies.
The 19 studies included in the synthesis detailed the impacts on a total of 3,603 participants: 647 from quantitative intervention studies and 2,630 from a retrospective cohort study; and 326 from qualitative studies (one not reporting sample size). Included studies shared the key elements of EECA defined above, but the range of activities varied considerably. Quantitative evaluation methods were heterogeneous. The designs or reporting of quantitative studies, or both, were rated as 'weak' quality with high risk of bias due to one or more of the following: inadequate study design, intervention detail, participant selection, outcome reporting and blinding. Participants' characteristics were poorly reported; eight studies did not report gender or age and none reported socio-economic status. Three quantitative studies reported that participants were referred through health or social services, and five quantitative studies reported referral due to mental ill health; however, participants' engagement routes were often not clear. Whilst the majority of quantitative studies (n = 8) reported no effect on one or more outcomes, positive effects were reported in six quantitative studies relating to short-term physiological, mental/emotional health, and quality-of-life outcomes. Negative effects were reported in two quantitative studies; one study reported higher levels of anxiety amongst participants, another reported increased mental health stress. The design or reporting, or both, of the qualitative studies was rated as good in three studies and poor in nine, mainly due to missing detail about participants, methods and interventions. Included qualitative evidence provided rich data about the experience of participation.
Thematic analysis identified eight themes, supported by at least one good-quality study, regarding participants' positive experiences and related to personal/social identity, physical activity, developing knowledge, spirituality, benefits of place, personal achievement, psychological benefits and social contact. There was one report of negative experiences. There is little quantitative evidence of positive or negative health and well-being benefits from participating in EECA. However, the qualitative research showed high levels of perceived benefit among participants. Quantitative evidence resulted from study designs with high risk of bias, and qualitative evidence lacked reporting detail. The majority of included studies were programme evaluations, conducted internally or funded by the provider. The conceptual framework illustrates the range of interlinked mechanisms through which people believe they potentially achieve health and well-being benefits, such as opportunities for social contact. It also considers potential moderators and mediators of effect. One main finding of the review is the inherent difficulty associated with generating robust evidence of effectiveness for complex interventions. We developed the conceptual framework to illustrate how people believed they benefited. Investigating such mechanisms in a subsequent theory-led review might be one way of examining evidence of effect for these activities. The conceptual framework needs further refinement through linked reviews and more reliable evidence. Future research should use more robust study designs and report key intervention and participant detail.
People with chronic obstructive pulmonary disease (COPD) often experience difficulty with performing upper limb exercise due to dyspnoea and arm fatigue. Consequently, upper limb exercise training is typically incorporated in pulmonary rehabilitation programmes to improve upper limb exercise capacity; however, the effects of this training on dyspnoea and health-related quality of life (HRQoL) remain unclear. To determine the effects of upper limb training (endurance or resistance training, or both) on symptoms of dyspnoea and HRQoL in people with COPD. We searched the Cochrane Airways Group Specialised Register of trials, ClinicalTrials.gov and the World Health Organization trials portal from inception to 28 September 2016 as well as checking all reference lists of primary studies and review articles. We included randomised controlled trials (RCTs) in which upper limb exercise training of at least four weeks' duration was performed. Three comparisons were structured as: a) upper limb training only versus no training or sham intervention; b) combined upper limb training and lower limb training versus lower limb training alone; and c) upper limb training versus another type of upper limb training. Two review authors independently selected trials for inclusion, extracted outcome data and assessed risk of bias. We contacted study authors to provide missing data. We determined the treatment effect from each study as the post-treatment scores. We were able to analyse data for all three planned comparisons. For the upper limb training only versus no training or sham intervention structure, the upper limb training was further classified as 'endurance training' or 'resistance training' to determine the impact of training modality. Fifteen studies on 425 participants were included in the review, one of which was in abstract form only. Twelve studies were included in the meta-analysis across one or more of the three comparisons. 
The sample size of the included studies was small (12 to 43 participants) and overall study quality was moderate to low given the imprecision and risk of bias issues (i.e. missing information on sequence generation and allocation concealment as well as no blinding of outcome assessment and incomplete data). When upper limb training was compared to either no training or sham training, there was a small significant improvement in symptoms of dyspnoea with a mean difference (MD) of 0.37 points (95% confidence interval (CI) 0.02 to 0.72 points; data from four studies on 129 people). However, there was no significant improvement in dyspnoea when the studies of endurance training only (MD 0.41 points, 95% CI -0.13 to 0.95 points; data from two studies on 55 people) or resistance training only (MD 0.34 points, 95% CI -0.11 to 0.80 points; data from two studies on 74 people) were analysed. When upper limb training combined with lower limb training was compared to lower limb training alone, no significant difference in dyspnoea was shown (MD 0.36 points, 95% CI -0.04 to 0.76 points; data from three studies on 86 people). There were no studies which examined the effects on dyspnoea of upper limb training compared to another upper limb training intervention. There was no significant improvement in HRQoL when upper limb training was compared to either no training or sham training with a standardised mean difference (SMD) of 0.05 (95% CI -0.31 to 0.40; four studies on 126 people) or when upper limb training combined with lower limb training was compared to lower limb training alone (SMD 0.01, 95% CI -0.40 to 0.43; three studies on 95 people).
Only one study, in which endurance upper limb training was compared to resistance upper limb training, reported on HRQoL and showed no between-group differences (St George's Respiratory Questionnaire MD 2.0 points, 95% CI -9 to 12; one study on 20 people). Positive findings were shown for the effects of upper limb training on the secondary outcome of unsupported endurance upper limb exercise capacity. When upper limb training was compared to either no training or sham training, there was a large significant improvement in unsupported endurance upper limb capacity (SMD 0.66, 95% CI 0.19 to 1.13; six studies on 142 people), which remained significant when the studies of endurance training only were examined (SMD 0.99, 95% CI 0.32 to 1.66; four studies on 85 people) but not when the studies of resistance training only were examined (SMD 0.23, 95% CI -0.31 to 0.76; three studies on 57 people, P = 0.08 for test of subgroup differences). When upper limb training combined with lower limb training was compared to lower limb training alone, there was also a large significant improvement in unsupported endurance upper limb capacity (SMD 0.90, 95% CI 0.12 to 1.68; three studies on 87 people). A single study compared endurance upper limb training to resistance upper limb training, with a significant improvement in the number of lifts performed in one minute favouring endurance upper limb training (MD 6.0 lifts, 95% CI 0.29 to 11.71 lifts; one study on 17 people). Available data were insufficient to examine the impact of disease severity on any outcome. Evidence from this review indicates that some form of upper limb exercise training, when compared to no upper limb training or a sham intervention, improves dyspnoea but not HRQoL in people with COPD.
The limited number of studies comparing different upper limb training interventions precludes conclusions being made about the optimal upper limb training programme for people with COPD, although endurance upper limb training using unsupported upper limb exercises does have a large effect on unsupported endurance upper limb capacity. Future RCTs require larger participant numbers to compare the differences between endurance upper limb training, resistance upper limb training, and combining endurance and resistance upper limb training on patient-relevant outcomes such as dyspnoea, HRQoL and arm activity levels.
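The dyspnoea and endurance results above are reported as mean differences (MD) with 95% confidence intervals. A sketch of that interval under a normal approximation, using hypothetical group summaries (not the review's data):

```python
import math

def mean_difference_ci(mean_tx, sd_tx, n_tx, mean_ctl, sd_ctl, n_ctl):
    """Mean difference between two groups with a 95% CI (normal approximation)."""
    md = mean_tx - mean_ctl
    # Standard error of the difference between two independent means
    se = math.sqrt(sd_tx**2 / n_tx + sd_ctl**2 / n_ctl)
    return md, md - 1.96 * se, md + 1.96 * se

# Hypothetical dyspnoea scores: 4.2 vs 3.8 points, SD 1.0, 40 people per arm
md, lo, hi = mean_difference_ci(4.2, 1.0, 40, 3.8, 1.0, 40)
print(round(md, 2), round(lo, 2), round(hi, 2))  # -> 0.4 -0.04 0.84
```

A CI that spans zero, as in this toy example, corresponds to the "no significant difference" findings reported for several of the training comparisons above.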