Abstracts of presentations at scientific meetings are usually available only in conference proceedings. If subsequent full publication of results reported in these abstracts is based on the magnitude or direction of the results, publication bias may result. Publication bias creates problems for those conducting systematic reviews or relying on the published literature for evidence about health and social care. To systematically review reports of studies that have examined the proportion of meeting abstracts and other summaries that are subsequently published in full, the time between meeting presentation and full publication, and factors associated with full publication. We searched MEDLINE, Embase, the Cochrane Library, Science Citation Index, reference lists, and author files. The most recent search was done in February 2016 for this substantial update to our earlier Cochrane Methodology Review (published in 2007). We included reports of methodology research that examined the proportion of biomedical results initially presented as abstracts or in summary form that were subsequently published. Searches for full publications had to be at least two years after meeting presentation. Two review authors extracted data and assessed risk of bias. We calculated the proportion of abstracts published in full using a random-effects model. Dichotomous variables were analyzed using risk ratio (RR), with multivariable models taking into account various characteristics of the reports. We assessed time to publication using Kaplan-Meier survival analyses. Combining data from 425 reports (307,028 abstracts) resulted in an overall full publication proportion of 37.3% (95% confidence interval (CI), 35.3% to 39.3%) with varying lengths of follow-up. This is significantly lower than that found in our 2007 review (44.5%; 95% CI, 43.9% to 45.1%). 
Using survival analysis to estimate the proportion of abstracts that would be published in full by 10 years produced proportions of 46.4% for all studies, 68.7% for randomized or controlled trials, and 44.9% for other studies. Three hundred and fifty-three reports were at high risk of bias on one or more items, but only 32 reports were considered at high risk of bias overall. Forty-five reports (15,783 abstracts) with 'positive' results (defined as any 'significant' result) showed an association with full publication (RR = 1.31; 95% CI 1.23 to 1.40), as did 'positive' results defined as a result favoring the experimental treatment (RR = 1.17; 95% CI 1.07 to 1.28) in 34 reports (8794 abstracts). Results emanating from randomized or controlled trials showed the same pattern for both definitions (RR = 1.21; 95% CI 1.10 to 1.32 (15 reports and 2616 abstracts) and RR = 1.17; 95% CI, 1.04 to 1.32 (13 reports and 2307 abstracts), respectively). Other factors associated with full publication include oral presentation (RR = 1.46; 95% CI 1.40 to 1.52; studied in 143 reports with 115,910 abstracts); acceptance for meeting presentation (RR = 1.65; 95% CI 1.48 to 1.85; 22 reports with 22,319 abstracts); randomized trial design (RR = 1.51; 95% CI 1.36 to 1.67; 47 reports with 28,928 abstracts); and basic research (RR = 0.78; 95% CI 0.74 to 0.82; 92 reports with 97,372 abstracts). Abstracts originating at an academic setting were associated with full publication (RR = 1.60; 95% CI 1.34 to 1.92; 34 reports with 16,913 abstracts), as were those considered to be of higher quality (RR = 1.46; 95% CI 1.23 to 1.73; 12 reports with 3364 abstracts), or having high impact (RR = 1.60; 95% CI 1.41 to 1.82; 11 reports with 6982 abstracts). 
Sensitivity analyses excluding reports that were abstracts themselves or classified as having a high risk of bias did not change these findings in any important way. In considering the reports of the methodology research that we included in this review, we found that reports published in English or from a native English-speaking country found significantly higher proportions of studies published in full, but that there was no association with year of report publication. The findings correspond to a proportion of abstracts published in full of 31.9% for all reports, 40.5% for reports in English, 42.9% for reports from native English-speaking countries, and 52.2% for both these covariates combined. More than half of the results presented in abstracts, and almost a third of randomized trial results initially presented as abstracts, fail to be published in full, and this problem does not appear to be decreasing over time. Publication bias is present in that 'positive' results were more frequently published than 'not positive' results. Reports of methodology research written in English showed that a higher proportion of abstracts had been published in full, as did those from native English-speaking countries, suggesting that studies from non-native English-speaking countries may be underrepresented in the scientific literature. After the considerable work involved in adding in the more than 300 additional studies found by the February 2016 searches, we chose not to update the search again because additional searches are unlikely to change these overall conclusions in any important way.
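The pooled effect measures quoted above (e.g. RR = 1.31; 95% CI 1.23 to 1.40) follow the standard log-normal approximation for risk ratios. A minimal Python sketch of that computation, using illustrative counts rather than data from the review:

```python
import math

def risk_ratio(events_a, total_a, events_b, total_b):
    """Risk ratio of group A versus group B with a 95% CI
    (log-normal approximation, standard for dichotomous outcomes)."""
    rr = (events_a / total_a) / (events_b / total_b)
    se = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Illustrative counts only (not data from the review):
rr, lo, hi = risk_ratio(120, 200, 90, 200)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # RR = 1.33 (95% CI 1.10 to 1.61)
```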
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Cardiovascular disease (CVD) remains an important cause of mortality and morbidity, and high levels of blood cholesterol are thought to be a major modifiable risk factor for CVD. The use of statins is the preferred treatment strategy for the prevention of CVD, but some people at high risk for CVD are intolerant to statin therapy or unable to achieve their treatment goals with the maximal recommended doses of statin. Ezetimibe is a selective cholesterol absorption inhibitor, but whether it has a positive effect on CVD events remains uncertain. Results from clinical studies are inconsistent, and a thorough evaluation of its efficacy and safety for the prevention of CVD and mortality is necessary. To assess the efficacy and safety of ezetimibe for the prevention of CVD and all-cause mortality. We searched CENTRAL, MEDLINE, Embase and Web of Science on 27 June 2018, and two clinical trial registry platforms on 11 July 2018. We checked reference lists from primary studies and review articles for additional studies. No language restrictions were applied. We included randomised controlled trials (RCTs) that compared ezetimibe versus placebo or ezetimibe plus other lipid-modifying drugs versus other lipid-modifying drugs alone in adults, with or without CVD, and which had a follow-up of at least 12 months. Two review authors independently selected studies for inclusion, extracted data, assessed risk of bias and contacted trialists to obtain missing data. We performed statistical analyses according to the Cochrane Handbook for Systematic Reviews of Interventions and used the GRADE approach to assess the quality of evidence. We included 26 RCTs randomising 23,499 participants. All included studies assessed effects of ezetimibe plus other lipid-modifying drugs compared with other lipid-modifying drugs alone or plus placebo. 
Our findings were driven by the largest study (IMPROVE-IT), which had weights ranging from 41.5% to 98.4% in the different meta-analyses. Ezetimibe with statins probably reduces the risk of major adverse cardiovascular events compared with statins alone (risk ratio (RR) 0.94, 95% confidence interval (CI) 0.90 to 0.98; a decrease from 284/1000 to 267/1000, 95% CI 256 to 278; 21,727 participants; 10 studies; moderate-quality evidence). Trials reporting all-cause mortality used ezetimibe with statin or fenofibrate and found little or no effect on this outcome (RR 0.98, 95% CI 0.91 to 1.05; 21,222 participants; 8 studies; high-quality evidence). Adding ezetimibe to statins probably reduces the risk of non-fatal myocardial infarction (MI) (RR 0.88, 95% CI 0.81 to 0.95; a decrease from 105/1000 to 92/1000, 95% CI 85 to 100; 21,145 participants; 6 studies; moderate-quality evidence) and non-fatal stroke (RR 0.83, 95% CI 0.71 to 0.97; a decrease from 32/1000 to 27/1000, 95% CI 23 to 31; 21,205 participants; 6 studies; moderate-quality evidence). Trials reporting cardiovascular mortality added ezetimibe to statin or fenofibrate; this probably has little or no effect on the outcome (RR 1.00, 95% CI 0.89 to 1.12; 19,457 participants; 6 studies; moderate-quality evidence). The need for coronary revascularisation might be reduced by adding ezetimibe to statin (RR 0.94, 95% CI 0.89 to 0.99; a decrease from 196/1000 to 184/1000, 95% CI 175 to 194; 21,323 participants; 7 studies); however, no difference in coronary revascularisation rate was observed when a sensitivity analysis was limited to studies with a low risk of bias. In terms of safety, adding ezetimibe to statins may make little or no difference in the risk of hepatopathy (RR 1.14, 95% CI 0.96 to 1.35; 20,687 participants; 4 studies; low-quality evidence). 
It is uncertain whether ezetimibe increases or decreases the risk of myopathy (RR 1.31, 95% CI 0.72 to 2.38; 20,581 participants; 3 studies; very low-quality evidence) and rhabdomyolysis, given the wide CIs and low event rate. Little or no difference in the risk of cancer, gallbladder-related disease and discontinuation due to adverse events was observed between treatment groups. For serum lipids, adding ezetimibe to statin or fenofibrate might further reduce the low-density lipoprotein cholesterol (LDL-C), total cholesterol and triglyceride levels and likely increase the high-density lipoprotein cholesterol levels; however, substantial heterogeneity was detected in most analyses. None of the included studies reported on health-related quality of life. Moderate- to high-quality evidence suggests that ezetimibe has modest beneficial effects on the risk of CVD endpoints, primarily driven by a reduction in non-fatal MI and non-fatal stroke, but it has little or no effect on clinical fatal endpoints. The cardiovascular benefit of ezetimibe might involve the reduction of LDL-C, total cholesterol and triglycerides. There is insufficient evidence to determine whether ezetimibe increases the risk of adverse events due to the low and very low quality of the evidence. The evidence for beneficial effects was mainly obtained from individuals with established atherosclerotic cardiovascular disease (ASCVD, predominantly with acute coronary syndrome) administered ezetimibe plus statins. However, there is limited evidence regarding the role of ezetimibe in primary prevention and the effects of ezetimibe monotherapy in the prevention of CVD, and these topics thus require further investigation.
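The absolute effects quoted alongside each risk ratio above are obtained by applying the RR to the assumed baseline risk per 1000. A minimal sketch that reproduces three of the review's own figures:

```python
def absolute_effect(baseline_per_1000, rr):
    """Treated-group risk per 1000, from a baseline risk and a risk ratio."""
    return round(baseline_per_1000 * rr)

# The review's own figures check out:
print(absolute_effect(284, 0.94))  # 267 (major adverse cardiovascular events)
print(absolute_effect(105, 0.88))  # 92 (non-fatal MI)
print(absolute_effect(32, 0.83))   # 27 (non-fatal stroke)
```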
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Does the addition of a low-quality embryo in fresh Day 3 double embryo transfer (DET) affect the ongoing pregnancy rate (OPR) and multiple gestation rate in patients with only one or no high-quality embryos available? In patients with only one or no high-quality embryo available, the addition of a low-quality embryo in fresh Day 3 DET does not improve the OPR but increases multiple gestation rates in fresh DET. Pregnancy rates after DET are considered to be higher compared to single embryo transfer (SET) when analyzed per first embryo transfer only. However, these conclusions are based on RCTs that mostly included patients with two or more high-quality embryos, and can therefore not be applied to patients with only one or no high-quality embryo available. This is particularly relevant since it has been suggested that low-quality embryos could impair the implantation of simultaneously transferred embryos by paracrine signaling. Hence, we investigated in patients with only one or no high-quality embryo available whether the addition of a low-quality embryo in DET affects the OPR, multiple gestation rate and miscarriage rate. This was a retrospective cohort study of 5050 patients receiving 7252 fresh embryo transfers on Day 3 after fertilization in IVF/ICSI cycles from 2012 to 2015 in two academic hospitals. We included all women that received fresh SET or DET with any combination of high-quality embryos (7, 8 or 9 blastomeres, with ≤20% fragmentation) or low-quality embryos (all other embryos). Outcomes were OPR (primary outcome, defined as a positive fetal heartbeat by transvaginal ultrasound at least 10 weeks after oocyte retrieval), miscarriage rate and multiple gestation rate. We used a generalized estimating equations model adjusting for maternal age, number of oocytes retrieved, center of treatment and the interaction between maternal age and number of oocytes retrieved. 
Other baseline characteristics, including infertility diagnosis, fertilization method and the number of consecutive fresh embryo transfers per patient, did not contribute significantly to the GEE model and were therefore excluded, and not adjusted for. Compared to SET with one high-quality embryo, DET with two high-quality embryos resulted in a higher OPR (adjusted odds ratio (OR) 1.38, 95% CI 1.14-1.67), while DET with one high- and one low-quality embryo resulted in a lower OPR (adjusted OR 0.65, 95% CI 0.49-0.90). However, SET in patients with only one high-quality embryo available resulted in a lower OPR compared to SET in patients with two or more high-quality embryos available (adjusted OR 0.52, 95% CI 0.39-0.70). After adjusting for this confounding factor, we found that both DET with two high-quality embryos (adjusted OR 0.99, 95% CI 0.74-1.31) and DET with one high- and one low-quality embryo (adjusted OR 0.78, 95% CI 0.47-1.27) resulted in an OPR that was not significantly different from SET with one high-quality embryo. If only low-quality embryos were available, DET did not increase the OPR as compared to SET with one low-quality embryo (adjusted OR 0.84, 95% CI 0.55-1.28). Multiple gestation rates were higher in all DET groups compared to SET (DET with ≥1 high-quality embryo(s) compared to SET with one high-quality embryo; DET with two low-quality embryos compared to SET with one low-quality embryo; all comparisons P < 0.001). Miscarriage rates were not different in all DET groups compared to SET (DET with ≥1 high-quality embryo(s) compared to SET with one high-quality embryo; DET with two low-quality embryos compared to SET with one low-quality embryo; all comparisons P > 0.05). Limitations to this study include the retrospective design and possible bias between study groups related to embryo transfer policies between 2012 and 2015. Consequently, we may have underestimated pregnancy chances in all DET groups. 
Furthermore, the OPR was calculated as a percentage of the number of fresh embryo transfers in each study group, and not the total number of started IVF/ICSI cycles. Therefore, the reported pregnancy outcomes may not truly reflect the pregnancy chances of couples at the start of treatment. A possible confounding effect of maternal age in our study is acknowledged, but we could not compare clinical outcomes in different age groups separately owing to small sample sizes. Analysis of pregnancy outcomes in lower prognosis patients (higher maternal age, fewer oocytes retrieved) separately is an avenue for future research. The decision to perform DET rather than SET in order to increase the OPR per fresh embryo transfer seems not to be justified for those patients with only one or no high-quality embryo(s) available. However, owing to the limitations of this study, prospective RCTs are needed that specifically investigate pregnancy outcomes in patients with only one or no high-quality embryo(s) available in SET and DET. This study was funded by a grant from the joint Amsterdam Reproduction & Development Institute of the Academic Medical Center and VU University Medical Center (www.amsterdam-reproduction-and-development.org). The authors have no conflicts of interest to declare.
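The study reports adjusted odds ratios from a GEE model. As background only, here is a minimal sketch of the unadjusted odds ratio and its Wald 95% CI from a 2×2 table; the counts below are illustrative, not the study's data:

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 table with a Wald 95% CI.

    a, b: events and non-events in group 1; c, d: the same in group 2.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Illustrative counts only: 40/100 pregnancies in one group vs 25/100 in another.
print(odds_ratio(40, 60, 25, 75))  # OR = 2.0 with its CI
```

An adjusted OR from a GEE model additionally accounts for covariates (here maternal age, oocyte number, center) and within-patient correlation, which this unadjusted formula does not.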
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Diabetes mellitus (DM) is a metabolic disorder that causes abnormal blood glucose (BG) regulation that might result in short- and long-term health complications and even death if not properly managed. Currently, there is no cure for diabetes. However, self-management of the disease, especially keeping BG in the recommended range, is central to the treatment. This includes actively tracking BG levels and managing physical activity, diet, and insulin intake. The recent advancements in diabetes technologies and self-management applications have made it easier for patients to have more access to relevant data. In this regard, the development of an artificial pancreas (a closed-loop system), personalized decision systems, and BG event alarms are becoming more apparent than ever. Techniques such as predicting BG (modeling of a personalized profile) and modeling BG dynamics are central to the development of these diabetes management technologies. The increased availability of sufficient patient historical data has paved the way for the introduction of machine learning and its application for intelligent and improved systems for diabetes management. The capability of machine learning to solve complex tasks in dynamic environments has contributed to its success in diabetes research. Recently, machine learning and data mining have become popular, with their expanding application in diabetes research and within BG prediction services in particular. Despite the increasing and expanding popularity of machine learning applications in BG prediction services, updated reviews that map and summarize the current trends in modeling options and strategies are lacking within the context of BG prediction (modeling of personalized profile) in type 1 diabetes. The objective of this review is to develop a compact guide regarding modeling options and strategies of machine learning and hybrid systems focusing on the prediction of BG dynamics in type 1 diabetes. 
The review covers machine learning approaches pertinent to the controller of an artificial pancreas (closed-loop systems), modeling of personalized profiles, personalized decision support systems, and BG alarm event applications. Generally, the review will identify, assess, analyze, and discuss the current trends of machine learning applications within these contexts. A rigorous literature review was conducted between August 2017 and February 2018 through various online databases, including Google Scholar, PubMed, ScienceDirect, and others. Additionally, peer-reviewed journals and articles were considered. Relevant studies were first identified by reviewing the title, keywords, and abstracts as preliminary filters with our selection criteria, and then we reviewed the full texts of the articles that were found relevant. Information from the selected literature was extracted based on predefined categories, which were based on previous research and further elaborated through brainstorming among the authors. The initial search was done by analyzing the title, abstract, and keywords. A total of 624 papers were retrieved from DBLP Computer Science (25), Diabetes Technology and Therapeutics (31), Google Scholar (193), IEEE (267), Journal of Diabetes Science and Technology (31), PubMed/Medline (27), and ScienceDirect (50). After removing duplicates from the list, 417 records remained. Then, we independently assessed and screened the articles based on the inclusion and exclusion criteria, which eliminated another 204 papers, leaving 213 relevant papers. After a full-text assessment, 55 articles were left, which were critically analyzed. The inter-rater agreement was measured using a Cohen Kappa test, and disagreements were resolved through discussion. Due to the complexity of BG dynamics, it remains difficult to achieve a universal model that produces an accurate prediction in every circumstance (i.e., hypo/eu/hyperglycemia events). 
Recently, machine learning techniques have received wider attention and increased popularity in diabetes research in general and BG prediction in particular, coupled with the ever-growing availability of self-collected health data. The state-of-the-art demonstrates that various machine learning techniques have been tested to predict BG, such as recurrent neural networks, feed-forward neural networks, support vector machines, self-organizing maps, the Gaussian process, genetic algorithms and programs, deep neural networks, and others, using various groups of input parameters and training algorithms. The main limitation of the current approaches is the lack of a well-defined approach to estimate carbohydrate intake, which is mainly done manually by individual users and is prone to errors that can severely affect the predictive performance. Moreover, a universal approach has not been established to estimate and quantify the approximate effect of physical activities, stress, and infections on the BG level. No researchers have assessed model predictive performance during stress and infection incidences in a free-living condition, which should be considered in future studies. Furthermore, little has been done regarding model portability that can capture inter- and intra-patient variability. The effect of time lags between continuous glucose monitor (CGM) readings and the actual BG levels is also not well covered. However, in general, we foresee that these developments might foster the advancement of next-generation BG prediction algorithms, which will make a great contribution in the effort to develop the long-awaited, so-called artificial pancreas (a closed-loop system).
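Inter-rater agreement during the screening step described earlier in this abstract was measured with a Cohen kappa test. A minimal sketch of how kappa is computed for two raters' include/exclude decisions, on hypothetical labels:

```python
def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(x == y for x, y in zip(labels_a, labels_b)) / n
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n)
        for c in set(labels_a) | set(labels_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical include/exclude screening decisions by two reviewers:
print(cohen_kappa(["in", "in", "out", "out"], ["in", "out", "out", "out"]))  # 0.5
```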
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Irritable bowel syndrome (IBS) is a prevalent condition that currently lacks highly effective therapies for its management. Biofeedback has been proposed as a therapy that may help individuals learn to exert conscious control over sympatho-vagal balance as an indirect method of symptom management. Our primary objective was to assess the efficacy and safety of biofeedback-based interventions for IBS in adults and children. We searched the Cochrane Inflammatory Bowel Disease (IBD) Group Specialized Trials Register, Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), and the Allied and Complementary Medicine Database (AMED) from inception to 24 July 2019. We also searched reference lists from published trials, trial registries, device manufacturers, conference proceedings, theses, and dissertations. We judged randomized controlled trials to be eligible for inclusion if they met the Association for Applied Psychophysiology and Biofeedback definition of biofeedback, and if they compared a biofeedback intervention to an active, sham, or no-treatment control for the management of IBS. Two authors independently screened trials for inclusion, extracted data, and assessed risk of bias. Primary outcomes were IBS global or clinical improvement scores and overall quality of life measures. Secondary outcome measures were adverse events, assessments of stool frequency and consistency, changes in abdominal pain, depression, and anxiety. For dichotomous outcomes, we calculated the risk ratio (RR) and 95% confidence interval (CI). For continuous outcomes, we calculated the mean difference (MD) and 95% CI. We used GRADE criteria to assess the overall certainty of the evidence. We identified eight randomized trials with a total of 300 adult participants for our analysis. We did not identify any trials in children. Four trials assessed thermal biofeedback. 
One trial assessed rectosigmoidal biofeedback. Two trials assessed heart rate variability biofeedback. Two trials assessed electrocutaneous biofeedback. Comparators were: no treatment (symptom monitoring group; three studies), attention control (pseudomeditation; two studies), relaxation control (one study), counseling (two studies), hypnotherapy (one study), standard therapy (one study), and sham biofeedback (one study). We judged all trials to have a high or unclear risk of bias. Global/clinical improvement: The clinical benefit of biofeedback plus standard therapy compared to standard therapy alone was uncertain (RR 4.20, 95% CI 1.40 to 12.58; 1 study, 20 participants; very low-certainty evidence). The same study also compared biofeedback plus standard therapy to sham biofeedback plus standard therapy. The clinical benefit in the biofeedback group was uncertain (RR 2.33, 95% CI 1.13 to 4.80; 1 study, 20 participants; very low-certainty evidence). The clinical benefit of heart rate biofeedback compared to hypnotherapy was uncertain when measured with the IBS severity scoring system (IBS-SSS) (MD -58.80, 95% CI -109.11 to -8.49; 1 study, 61 participants; low-certainty evidence). Compared to counseling, the effect of heart rate biofeedback was unclear when measured with a composite symptom reduction score (MD 7.03, 95% CI -51.07 to 65.13; 1 study, 29 participants; low-certainty evidence) and when evaluated for clinical response (50% improvement) (RR 1.09, 95% CI 0.48 to 2.45; 1 study, 29 participants; low-certainty evidence). The clinical benefit of thermal biofeedback used in a multi-component psychological intervention (MCPI) compared to no treatment was uncertain when measured with a composite clinical symptom reduction score (MD 30.34, 95% CI 8.47 to 52.21; 3 studies, 101 participants; very low-certainty evidence), and when evaluated as clinical response (50% improvement) (RR 2.12, 95% CI 1.24 to 3.62; 3 studies, 101 participants; very low-certainty evidence). 
Compared to attention control, the effects of thermal biofeedback within an MCPI were unclear when measured with a composite clinical symptom reduction score (MD 4.02, 95% CI -21.41 to 29.45; 2 studies, 80 participants; very low-certainty evidence) and when evaluated as clinical response (50% improvement) (RR 1.10, 95% CI 0.72 to 1.69; 2 studies, 80 participants; very low-certainty evidence). Quality of life: A single trial used overall quality of life as an outcome measure, and reported that both the biofeedback and cognitive therapy groups improved after treatment. The trial did not note any between-group differences, and did not report any outcome data. Adverse events: Only one of the eight trials explicitly reported adverse events. This study reported no adverse events in either the biofeedback or cognitive therapy groups (RD 0.00, 95% CI -0.12 to 0.12; 29 participants; low-certainty evidence). There is currently not enough evidence to assess whether biofeedback interventions are effective for controlling symptoms of IBS. Given the positive results reported in small trials to date, biofeedback deserves further study in people with IBS. Future research should include active control groups that use high provider-participant interaction, in an attempt to balance non-specific effects of interventions between groups, and report both commonly used outcome measures (e.g. IBS-SSS) and historical outcome measures (e.g. the composite primary symptom reduction (CPSR) score) to allow for meta-analysis with previous studies. Future studies should be explicit in their reporting of adverse events.
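The adverse-event result above is reported as a risk difference (RD), an absolute-scale measure. A minimal Wald-type sketch with illustrative counts; note that with zero events in both arms, as in the trial above, this naive formula degenerates to a zero-width interval and a continuity correction or exact method is needed instead:

```python
import math

def risk_difference(events_a, total_a, events_b, total_b):
    """Risk difference (group A minus group B) with a Wald 95% CI."""
    pa, pb = events_a / total_a, events_b / total_b
    rd = pa - pb
    se = math.sqrt(pa * (1 - pa) / total_a + pb * (1 - pb) / total_b)
    return rd, rd - 1.96 * se, rd + 1.96 * se

# Illustrative counts only (not the trial's data): 10/100 vs 15/100 events.
print(risk_difference(10, 100, 15, 100))
```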
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Renal vasculitis presents as rapidly progressive glomerulonephritis and comprises a group of conditions characterised by acute kidney injury (AKI), haematuria and proteinuria. Treatment of these conditions involves the use of steroid and non-steroid agents in combination with plasma exchange. Although immunosuppression overall has been very successful in treatment of these conditions, many questions remain unanswered in terms of dose and duration of therapy, the use of plasma exchange and the role of new therapies. This 2019 publication is an update of a review first published in 2008 and updated in 2015. To evaluate the benefits and harms of any intervention used for the treatment of renal vasculitis in adults. We searched the Cochrane Kidney and Transplant Register of Studies up to 21 November 2019 through contact with the Information Specialist using search terms relevant to this review. Studies in the Register are identified through searches of CENTRAL, MEDLINE, and EMBASE, conference proceedings, the International Clinical Trials Register (ICTRP) Search Portal and ClinicalTrials.gov. Randomised controlled trials investigating any intervention for the treatment of renal vasculitis in adults. Two authors independently assessed study quality and extracted data. Statistical analyses were performed using a random effects model and results expressed as risk ratio (RR) with 95% confidence intervals (CI) for dichotomous outcomes or mean difference (MD) for continuous outcomes. Forty studies (3764 patients) were included. Studies conducted earlier tended to have a higher risk of bias due to poor (or poorly reported) study design, broad inclusion criteria, less well developed disease definitions and low patient numbers. Later studies tended to improve in all areas of quality, aided by the development of large international study groups. 
Induction therapy: Plasma exchange as adjunctive therapy may reduce the need for dialysis at three (2 studies: RR 0.43, 95% CI 0.23 to 0.78; I² = 0%) and 12 months (6 studies: RR 0.45, 95% CI 0.29 to 0.72; I² = 0%) (low certainty evidence). Plasma exchange may make little or no difference to death, serum creatinine (SCr), sustained remission or to serious or the total number of adverse events. Plasma exchange may increase the number of serious infections (5 studies: RR 1.26, 95% CI 1.03 to 1.54; I² = 0%; low certainty evidence). Remission rates for pulse versus continuous cyclophosphamide (CPA) were equivalent, but pulse treatment may increase the risk of relapse (4 studies: RR 1.79, 95% CI 1.11 to 2.87; I² = 0%) (low certainty evidence) compared with continuous cyclophosphamide. Pulse CPA may make little or no difference to death at final follow-up, or SCr at any time point. More patients required dialysis in the pulse CPA group. Leukopenia was less common with pulse treatment; however, nausea was more common. Rituximab compared to CPA probably makes little or no difference to death, remission, relapse, serious infections, or severe adverse events. Kidney function and dialysis were not reported. A single study reported no difference in the number of deaths, need for dialysis, or adverse events between mycophenolate mofetil (MMF) and CPA. Remission was reported to improve with MMF; however, more patients relapsed. A lower dose of steroids was probably as effective as high dose and may be safer, causing fewer infections; kidney function and relapse were not reported. There was little or no difference in death or remission between six and 12 pulses of CPA. There is low certainty evidence that there were fewer relapses with 12 pulses (2 studies: RR 1.57, 95% CI 0.96 to 2.56; I² = 0%), but more infections (2 studies: RR 0.79, 95% CI 0.36 to 1.72; I² = 45%). 
One study reported fewer severe adverse events in patients receiving six compared to 12 pulses of CPA. Kidney function and dialysis were not reported. There is limited evidence from single studies about the effectiveness of intravenous immunoglobulin, avacopan, methotrexate, immunoadsorption, lymphocytapheresis, or etanercept. Maintenance therapy: Azathioprine (AZA) was as effective as CPA as a maintenance agent, with fewer episodes of leucopenia. MMF resulted in a higher relapse rate when tested against azathioprine for remission maintenance. Rituximab is an effective remission induction and maintenance agent. Oral co-trimoxazole did not reduce relapses in granulomatosis with polyangiitis. There were fewer relapses but more serious adverse events with leflunomide compared to methotrexate. There is limited evidence from single studies about the effectiveness of methotrexate versus CPA or AZA, cyclosporin versus CPA, extended versus standard AZA, and belimumab. Plasma exchange was effective in patients with severe AKI secondary to vasculitis. Pulse cyclophosphamide may result in an increased risk of relapse when compared to continuous oral use, but a reduced total dose. Whilst CPA is standard induction treatment, rituximab and MMF were also effective. AZA, methotrexate and leflunomide were effective as maintenance therapy. Further studies are required to more clearly delineate the appropriate place of newer agents within an evidence-based therapeutic strategy.
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Portal hypertension commonly accompanies advanced liver disease and often gives rise to life-threatening complications, including bleeding (haemorrhage) from oesophageal and gastrointestinal varices. Variceal bleeding commonly occurs in children with chronic liver disease or portal vein obstruction. Therefore, prevention is important. Primary prophylaxis of variceal bleeding in adults is the established standard of care because of the results of numerous randomised clinical trials demonstrating the efficacy of non-selective beta-blockers or endoscopic variceal ligation in decreasing the incidence of variceal bleeding. In children, band ligation, beta-blockers, and sclerotherapy have been proposed as alternatives for primary prophylaxis of oesophageal variceal bleeding. However, it is unknown whether those treatments are of benefit or harm when used for primary prophylaxis in children. To assess the benefits and harms of sclerotherapy compared with sham or no intervention for primary prophylaxis of oesophageal variceal bleeding in children with chronic liver disease or portal vein thrombosis. We searched The Cochrane Hepato-Biliary Group Controlled Trials Register, CENTRAL, PubMed, Embase Elsevier, and two other registers in February 2019. We scrutinised the reference lists of the retrieved publications, and performed a manual search of the main paediatric gastroenterology and hepatology conference (NASPGHAN and ESPGHAN) abstracts from January 2008 to December 2018. We searched four registries for ongoing clinical trials. There were no language or document type restrictions. We included randomised clinical trials irrespective of blinding, language, or publication status assessing sclerotherapy versus sham or no intervention for primary prophylaxis of oesophageal variceal bleeding in children with chronic liver disease or portal vein thrombosis. We used standard Cochrane methodology to perform this systematic review. 
We used the intention-to-treat principle to analyse outcome data, and GRADE to assess the certainty of evidence per outcome. We found only one randomised clinical trial that fulfilled our inclusion criteria. The trial was at high risk of bias. The trial included 108 Brazilian children with a median age of 4.3 years (range 11 months to 13 years). Fifty-six children were randomised to prophylactic sclerotherapy (ethanolamine oleate 2%) and 52 children to no intervention (control). Children were followed up for a median of 4.5 years. Eight children (six from the sclerotherapy group versus two from the control group) dropped out before the end of the trial. The follow-up ranged from 18 months to eight years. Mortality was 16% (9/56 children) in the sclerotherapy group versus 15% (8/52 children) in the control group (risk ratio (RR) 1.04, 95% confidence interval (CI) 0.44 to 2.50; very low-certainty evidence). Upper gastrointestinal bleeding occurred in 21% (12/56) of the children in the sclerotherapy group versus 46% (24/52) in the control group (RR 0.46, 95% CI 0.26 to 0.83; very low-certainty evidence). There were more children with congestive hypertensive gastropathy in the sclerotherapy group than in the control group (14% (8/56) versus 6% (3/52); RR 2.48, 95% CI 0.69 to 8.84; very low-certainty evidence). The incidence of gastric varices was similar between the sclerotherapy group and the control group (11% (6/56) versus 10% (5/52); RR 1.11, 95% CI 0.36 to 3.43; very low-certainty evidence). The incidence of bleeding from gastric varices was higher in the sclerotherapy group than in the control group (4% (3/56) versus 0% (0/52); RR 6.51, 95% CI 0.34 to 123.06; very low-certainty evidence). The study did not assess health-related quality of life. Oesophageal variceal bleeding occurred in 5% (3/56) of the children in the sclerotherapy group versus 40% (21/52) of the children in the control group (RR 0.13, 95% CI 0.04 to 0.42; very low-certainty evidence). 
The most prevalent complications (defined as non-serious) were pain and fever after the procedure, which promptly resolved with analgesics. However, numerical data on the frequency of these adverse events and their occurrences in the two groups were lacking. No funding information was provided. We found no ongoing trials. The evidence, obtained from one randomised clinical trial at high risk of bias, is very uncertain about whether sclerotherapy influences mortality and whether it may decrease first upper gastrointestinal or oesophageal variceal bleeding in children. The evidence is very uncertain about whether sclerotherapy influences congestive hypertensive gastropathy, the incidence of gastric varices, and the incidence of bleeding from gastric varices. Health-related quality of life was not measured. There were no serious events caused by sclerotherapy, and analysis of non-serious adverse events could not be performed due to lack of numerical data. The GRADE assessment of each outcome showed very low-certainty evidence. The results of the trial need to be interpreted with caution. Larger randomised clinical trials, following the SPIRIT and CONSORT statements, assessing the benefits and harms of sclerotherapy compared with sham or no intervention for primary prophylaxis of oesophageal variceal bleeding in children with chronic liver disease or portal vein thrombosis are needed. The trials should include important clinical outcomes such as death, failure to control bleeding, and adverse events.
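As a worked illustration of the risk ratios quoted above, the mortality result (9/56 with sclerotherapy versus 8/52 controls) can be reproduced with the standard log-RR 95% confidence interval. This is a hedged sketch of the textbook formula, not the review authors' code.

```python
import math

def risk_ratio(events_1, total_1, events_2, total_2):
    """Risk ratio with a 95% CI from the standard error of log(RR)."""
    rr = (events_1 / total_1) / (events_2 / total_2)
    # SE of log(RR) for two independent proportions
    se = math.sqrt(1/events_1 - 1/total_1 + 1/events_2 - 1/total_2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Mortality: 9/56 (sclerotherapy) vs 8/52 (control)
rr, lo, hi = risk_ratio(9, 56, 8, 52)
# rounds to RR 1.04, 95% CI 0.44 to 2.50, matching the reported result
```

The wide interval spanning 1 is what drives the "very low-certainty" judgement for this outcome: the data are compatible with both benefit and harm.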
The incidence of diabetic gastroparesis (DGP) is mainly attributed to abnormalities of interstitial cells of Cajal (ICCs). Autophagy degrades damaged proteins and organelles to maintain intracellular homeostasis, and it can directly influence the structure and number of cells. In this study, we aimed to clarify the relationship between DGP and autophagy of ICCs. Sixty Sprague-Dawley (SD) rats were randomly divided into a normal control group (NC, 10) and a modeling group (50). Rats in the modeling group were injected with 2% streptozotocin (STZ) and fed a high-glucose, high-fat diet for 8 weeks in order to establish the DGP rat model. After modeling, 30 successfully modeled rats were randomly selected and separated into a diabetic gastroparesis group (DGP, 10), DGP rats with electroacupuncture group (EA, 10), and DGP rats with metoclopramide group (MP, 10). When the intervention was completed, blood glucose was measured with a ONE TOUCH glucometer and gastrointestinal propulsive rate was detected by measuring optical density. Autophagosomes were observed under a transmission electron microscope (TEM). The expression of LC3 protein and P62 protein was measured by Western blot. After ICCs were transfected with the GFP-RFP-LC3 plasmid, autophagy flux was observed by laser scanning confocal microscope. 
(1) After intervention, compared with the NC group, the blood glucose of rats in the DGP, EA, and MP groups was remarkably increased (<i>P</i> &lt; 0.01); compared with the DGP group, the blood glucose of the EA and MP groups was decreased greatly (<i>P</i> &lt; 0.01). 
(2) Compared with the gastrointestinal propulsive rate of rats in the NC group, both gastric emptying rate and intestinal propulsive rate in the EA and MP groups were significantly reduced (<i>P</i> &lt; 0.01); compared with the NC group, gastric emptying rate and intestinal propulsive rate in the EA group were obviously decreased (<i>P</i> &lt; 0.05, <i>P</i> &lt; 0.01); compared with the DGP group, the EA and MP groups were increased significantly (<i>P</i> &lt; 0.01). (3) Compared with the NC group, the intensity of RFP and GFP in the DGP group was obviously increased (<i>P</i> &lt; 0.05, <i>P</i> &lt; 0.01); in other words, autophagy was suppressed in the DGP group; compared with the DGP group, the intensity of RFP and GFP in the EA group was decreased significantly (<i>P</i> &lt; 0.05, <i>P</i> &lt; 0.01). (4) No autophagosomes were seen in the NC group, whereas autophagosomes were present in the DGP group and were also found in the EA and MP groups. (5) For LC3 II/LC3 I, the ratio was enhanced in the DGP and EA groups compared with the NC group (<i>P</i> &lt; 0.01, <i>P</i> &lt; 0.05); compared with the DGP group, LC3 II/LC3 I was dramatically decreased in the MP and EA groups (<i>P</i> &lt; 0.01). (6) As the substrate of degradation, the expression of P62 in the other three groups was significantly increased (<i>P</i> &lt; 0.01) compared with the NC group; compared with the DGP group, the amount of P62 in the EA and MP groups was reduced greatly (<i>P</i> &lt; 0.01). Impaired autophagy flux in ICCs, attributable to dysfunctional fusion of autophagosomes with lysosomes, is a pathological basis of diabetic gastroparesis, and electroacupuncture (EA) can ease the suppression of autophagy to improve gastric motility.
The setting in which induction of labour takes place (home or inpatient) is likely to have implications for safety, women's experiences and costs. Home induction may be started at home with the subsequent active phase of labour happening either at home or in a healthcare facility (hospital, birth centre, midwifery-led unit). More commonly, home induction starts in a healthcare facility, then the woman goes home to await the start of labour. Inpatient induction takes place in a healthcare facility where the woman stays while awaiting the start of labour. To assess the effects on neonatal and maternal outcomes of third trimester home induction of labour compared with inpatient induction using the same method of induction. For this update, we searched the Cochrane Pregnancy and Childbirth Group's Trials Register, ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform (ICTRP) (31 January 2020), and reference lists of retrieved studies. Published and unpublished randomised controlled trials (RCTs) in which home and inpatient settings for induction have been compared. We included conference abstracts but excluded quasi-randomised trials and cross-over studies. Two review authors independently assessed study reports for inclusion. Two review authors carried out data extraction and assessment of risk of bias independently. GRADE assessments were checked by a third review author. We included seven RCTs, six of which provided data on 1610 women and their babies. Studies were undertaken between 1998 and 2015, and all were in high- or upper-middle-income countries. Most women were induced for post-dates pregnancy. Three studies reported government funding, one reported no funding and three did not report on their funding source. Most GRADE assessments gave very low-certainty evidence, downgrading mostly for high risk of bias and serious imprecision. 1. 
Home compared to inpatient induction with vaginal prostaglandin E (PGE) (two RCTs, 1028 women and babies; 1022 providing data). Although women's satisfaction may be slightly better in home settings, the evidence is very uncertain (mean difference (MD) 0.16, 95% confidence interval (CI) -0.02 to 0.34, 1 study, 399 women; very low-certainty evidence). There may be little or no difference between home and inpatient induction for other primary outcomes, with all evidence being very low certainty: - spontaneous vaginal birth (average risk ratio (RR) [aRR] 0.91, 95% CI 0.69 to 1.21, 2 studies, 1022 women, random-effects method); - uterine hyperstimulation (RR 1.19, 95% CI 0.40 to 3.50, 1 study, 821 women); - caesarean birth (RR 1.01, 95% CI 0.81 to 1.28, 2 studies, 1022 women); - neonatal infection (RR 1.29, 95% CI 0.59 to 2.82, 1 study, 821 babies); - admission to neonatal intensive care unit (NICU) (RR 1.20, 95% CI 0.50 to 2.90, 2 studies, 1022 babies). Studies did not report serious neonatal morbidity or mortality. 2. Home compared to inpatient induction with controlled release PGE (one RCT, 299 women and babies providing data). There was no information on whether the questionnaire on women's satisfaction with care used a validated instrument, but the findings presented showed no overall difference in scores. We found little or no difference between the groups for other primary outcomes, all also being very low-certainty evidence: - spontaneous vaginal birth (RR 0.94, 95% CI 0.77 to 1.14, 1 study, 299 women); - uterine hyperstimulation (RR 1.01, 95% CI 0.51 to 1.98, 1 study, 299 women); - caesarean births (RR 0.95, 95% CI 0.64 to 1.42, 1 study, 299 women); - admission to NICU (RR 1.38, 95% CI 0.57 to 3.34, 1 study, 299 babies). The study did not report on neonatal infection nor serious neonatal morbidity or mortality. 3. Home compared to inpatient induction with balloon or Foley catheter (four RCTs; three studies, 289 women and babies providing data). 
It was again unclear whether questionnaires reporting women's experiences/satisfaction with care were validated instruments, with one study (48 women, 69% response rate) finding women were similarly satisfied. Home inductions may reduce the number of caesarean births, but the data are also compatible with a slight increase and are of very low certainty (RR 0.64, 95% CI 0.41 to 1.01, 2 studies, 159 women). There was little or no difference between the groups for other primary outcomes, with all being very low-certainty evidence: - spontaneous vaginal birth (RR 1.04, 95% CI 0.54 to 1.98, 1 study, 48 women); - uterine hyperstimulation (RR 0.45, 95% CI 0.03 to 6.79, 1 study, 48 women); - admission to NICU (RR 0.37, 95% CI 0.07 to 1.86, 2 studies, 159 babies). There were no serious neonatal infections nor serious neonatal morbidity or mortality in the one study (involving 48 babies) assessing these outcomes. Data on the effectiveness, safety and women's experiences of home versus inpatient induction of labour are limited and of very low certainty. Given that serious adverse events are likely to be extremely rare, the safety data are more likely to come from very large observational cohort studies rather than relatively small RCTs.
The respiratory illness caused by SARS-CoV-2 infection continues to present diagnostic challenges. Our 2020 edition of this review showed thoracic (chest) imaging to be sensitive and moderately specific in the diagnosis of coronavirus disease 2019 (COVID-19). In this update, we included new relevant studies and removed studies with case-control designs and those not intended to be diagnostic test accuracy studies. To evaluate the diagnostic accuracy of thoracic imaging (computed tomography (CT), X-ray and ultrasound) in people with suspected COVID-19. We searched the COVID-19 Living Evidence Database from the University of Bern, the Cochrane COVID-19 Study Register, The Stephen B. Thacker CDC Library, and repositories of COVID-19 publications through to 30 September 2020. We did not apply any language restrictions. We included studies of all designs, except for case-control, that recruited participants of any age group suspected to have COVID-19 and that reported estimates of test accuracy or provided data from which we could compute estimates. The review authors independently and in duplicate screened articles, extracted data and assessed risk of bias and applicability concerns using the QUADAS-2 domain-list. We presented the results of estimated sensitivity and specificity using paired forest plots, and we summarised pooled estimates in tables. We used a bivariate meta-analysis model where appropriate. We presented the uncertainty of accuracy estimates using 95% confidence intervals (CIs). We included 51 studies with 19,775 participants suspected of having COVID-19, of whom 10,155 (51%) had a final diagnosis of COVID-19. Forty-seven studies evaluated one imaging modality each, and four studies evaluated two imaging modalities each. 
All studies used RT-PCR as the reference standard for the diagnosis of COVID-19, with 47 studies using only RT-PCR and four studies using a combination of RT-PCR and other criteria (such as clinical signs, imaging tests, positive contacts, and follow-up phone calls) as the reference standard. Studies were conducted in Europe (33), Asia (13), North America (3) and South America (2); including only adults (26), all ages (21), children only (1), adults over 70 years (1), and unclear (2); in inpatients (2), outpatients (32), and setting unclear (17). Risk of bias was high or unclear in 32 (63%) studies with respect to participant selection, 40 (78%) studies with respect to reference standard, 30 (59%) studies with respect to index test, and 24 (47%) studies with respect to participant flow. For chest CT (41 studies, 16,133 participants, 8110 (50%) cases), the sensitivity ranged from 56.3% to 100%, and specificity ranged from 25.4% to 97.4%. The pooled sensitivity of chest CT was 87.9% (95% CI 84.6 to 90.6) and the pooled specificity was 80.0% (95% CI 74.9 to 84.3). There was no statistical evidence indicating that reference standard conduct and definition for index test positivity were sources of heterogeneity for CT studies. Nine chest CT studies (2807 participants, 1139 (41%) cases) used the COVID-19 Reporting and Data System (CO-RADS) scoring system, which has five thresholds to define index test positivity. At a CO-RADS threshold of 5 (7 studies), the sensitivity ranged from 41.5% to 77.9% and the pooled sensitivity was 67.0% (95% CI 56.4 to 76.2); the specificity ranged from 83.5% to 96.2%; and the pooled specificity was 91.3% (95% CI 87.6 to 94.0). At a CO-RADS threshold of 4 (7 studies), the sensitivity ranged from 56.3% to 92.9% and the pooled sensitivity was 83.5% (95% CI 74.4 to 89.7); the specificity ranged from 77.2% to 90.4% and the pooled specificity was 83.6% (95% CI 80.5 to 86.4). 
For chest X-ray (9 studies, 3694 participants, 2111 (57%) cases) the sensitivity ranged from 51.9% to 94.4% and specificity ranged from 40.4% to 88.9%. The pooled sensitivity of chest X-ray was 80.6% (95% CI 69.1 to 88.6) and the pooled specificity was 71.5% (95% CI 59.8 to 80.8). For ultrasound of the lungs (5 studies, 446 participants, 211 (47%) cases) the sensitivity ranged from 68.2% to 96.8% and specificity ranged from 21.3% to 78.9%. The pooled sensitivity of ultrasound was 86.4% (95% CI 72.7 to 93.9) and the pooled specificity was 54.6% (95% CI 35.3 to 72.6). Based on an indirect comparison using all included studies, chest CT had a higher specificity than ultrasound. For indirect comparisons of chest CT and chest X-ray, or chest X-ray and ultrasound, the data did not show differences in specificity or sensitivity. Our findings indicate that chest CT is sensitive and moderately specific for the diagnosis of COVID-19. Chest X-ray is moderately sensitive and moderately specific for the diagnosis of COVID-19. Ultrasound is sensitive but not specific for the diagnosis of COVID-19. Thus, chest CT and ultrasound may have more utility for excluding COVID-19 than for differentiating SARS-CoV-2 infection from other causes of respiratory illness. Future diagnostic accuracy studies should pre-define positive imaging findings, include direct comparisons of the various modalities of interest in the same participant population, and implement improved reporting practices.
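The accuracy measures pooled throughout this review reduce to simple proportions from a 2×2 table of index-test results against the reference standard. The sketch below uses invented counts purely for illustration; no included study is being reproduced.

```python
# Hedged illustration of sensitivity and specificity from a 2x2
# confusion matrix (index test vs RT-PCR reference standard).
# The counts below are hypothetical, not data from any included study.

def sens_spec(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # proportion of true cases detected
    specificity = tn / (tn + fp)   # proportion of non-cases correctly negative
    return sensitivity, specificity

sens, spec = sens_spec(tp=88, fp=20, fn=12, tn=80)
# sens = 0.88, spec = 0.80 -- in the range reported for chest CT above
```

A sensitive but unspecific test (like lung ultrasound here) is more useful for ruling COVID-19 out than for confirming it, which is the review's conclusion.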
<b>Objective:</b> To explore the epidemiological characteristics and treatment outcomes of patients with inhalation injuries combined with total burn area less than 30% total body surface area (TBSA). <b>Methods:</b> A retrospective observational study was performed on the medical records of 266 patients with inhalation injuries combined with total burn area less than 30% TBSA who were admitted to the First Affiliated Hospital of Naval Medical University from January 2008 to December 2016 and met the inclusion criteria. The following data were collected: gender, age, injury site, injurious factors of inhalation injury, degree of inhalation injury, combined total burn area, tracheotomy, time of tracheotomy, mechanical ventilation, intensive care unit (ICU) stay, microbial culture results of bronchoalveolar lavage fluid, length of hospital stay, length of ICU stay, mechanical ventilation days, and respiratory tract infections. Single factor and multivariate linear regression analyses were used to screen the risk factors impacting the length of hospital stay, length of ICU stay, and mechanical ventilation days; single factor and multivariate logistic regression analyses were used to screen the risk factors impacting respiratory tract infections. <b>Results:</b> The 266 patients included 190 males and 76 females, most aged 21 to 64 years (217 patients). The major injury site was a confined space. The major factor causing inhalation injury was hot air. Mild and moderate inhalation injuries were more common. The combined total burn area was 9.00% (3.25%, 18.00%) TBSA. Of the 111 patients who had a tracheotomy, most received the procedure before admission to the First Affiliated Hospital of Naval Medical University. The length of hospital stay was 27 (10, 55) days. 
The length of ICU stay of the 160 patients hospitalized in the ICU was 15.5 (6.0, 40.0) days. The mechanical ventilation days of the 109 patients who received mechanical ventilation were 6.0 (1.3, 11.5) days. A total of 119 patients were diagnosed with respiratory tract infections, with 548 strains of 35 types of pathogens isolated, mainly Gram-negative bacteria. Single factor linear regression analysis showed that age, injurious factors of inhalation injury, combined total burn area, degree of inhalation injury (moderate and severe), tracheotomy, mechanical ventilation, and respiratory tract infections were factors impacting the length of hospital stay (<i>β</i> = -0.198, -0.224, 0.021, 0.127, 0.164, -0.298, 0.357, 0.447; 95% confidence interval (CI) = -0.397--0.001, -0.395--0.053, 0.015-0.028, 0.009-0.263, 0.008-0.319, -0.419--0.176, 0.242-0.471, 0.340-0.555; <i>P</i> &lt; 0.1). Multivariate linear regression analysis showed that mechanical ventilation and respiratory tract infections were independent risk factors impacting the length of hospital stay (<i>β</i> = 0.146, 0.383; 95% CI = 0.022-0.271, 0.261-0.506; <i>P</i> &lt; 0.05 or <i>P</i> &lt; 0.01). Single factor linear regression analysis showed that injurious factors of inhalation injury, combined total burn area, degree of inhalation injury (moderate and severe), tracheotomy (no tracheotomy and prophylactic tracheotomy), mechanical ventilation, and respiratory tract infections were factors impacting the length of ICU stay (<i>β</i> = 0.225, 0.008, 0.237, 0.203, -0.408, -0.334, 0.309, 0.523; 95% CI = 0.053-0.502, 0.006-0.010, -0.018-0.457, -0.022-0.428, -0.575--0.241, -0.687--0.018, 0.132-0.486, 0.369-0.678; <i>P</i> &lt; 0.1). Multivariate linear regression analysis showed that respiratory tract infection was the independent risk factor impacting the length of ICU stay (<i>β</i> = 0.440, 95% CI = 0.278-0.601, <i>P</i> &lt; 0.01). 
Single factor linear regression analysis showed that injury site, injurious factors of inhalation injury (smoke and chemical gas), combined total burn area, degree of inhalation injury (moderate and severe), tracheotomy (no tracheotomy and prophylactic tracheotomy), and respiratory tract infections were factors impacting mechanical ventilation days (<i>β</i> = -0.300, 0.545, 0.163, 0.005, 0.487, 0.799, -0.791, -0.736, 0.300; 95% CI = -0.565--0.034, 0.145-0.946, 0.051-1.188, 0.001-0.009, 0.127-0.847, 0.436-1.162, -1.075--0.508, -1.243--0.229, 0.005-0.605; <i>P</i> &lt; 0.1). Multivariate linear regression analysis showed that smoke inhalation, severe inhalation injury, and respiratory tract infections were independent risk factors impacting mechanical ventilation days (<i>β</i> = 0.210, 0.495, 0.263; 95% CI = 0.138-0.560, 0.143-0.848, 0.007-0.519; <i>P</i> &lt; 0.05 or <i>P</i> &lt; 0.01). Single factor logistic regression analysis showed that age, injury site, combined total burn area (10%-19% TBSA and 20%-29% TBSA), degree of inhalation injury (moderate and severe), tracheotomy (prophylactic tracheotomy and no tracheotomy), and mechanical ventilation were factors impacting respiratory tract infections (odds ratio = 1.079, 0.815, 1.400, 1.331, 1.803, 1.958, 0.990, 0.320, 3.094; 95% CI = 0.840-1.362, 0.641-1.044, 1.122-1.526, 1.028-1.661, 1.344-2.405, 1.460-2.612, 0.744-1.320, 0.241-0.424, 2.331-4.090; <i>P</i> &lt; 0.1). Multivariate logistic regression analysis showed that mechanical ventilation was the independent risk factor impacting respiratory tract infections (odds ratio = 4.300, 95% CI = 2.152-8.624, <i>P</i> &lt; 0.01). <b>Conclusions:</b> Patients with inhalation injuries combined with total burn area less than 30% TBSA are mainly young and middle-aged males. 
Smoke inhalation, degree of inhalation injury, mechanical ventilation, and respiratory tract infections are factors that affect the outcomes of patients with inhalation injuries combined with total burn area less than 30% TBSA. Additionally, prophylactic tracheotomy shows potential value in reducing respiratory tract infections in patients with moderate or severe inhalation injuries.
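The odds ratios reported from the logistic regression analyses above have a simple 2×2 analogue. Below is a hedged sketch using Woolf's method for the confidence interval, with hypothetical counts rather than the study's data.

```python
import math

# Sketch of an odds ratio with a 95% CI (Woolf's method) from a 2x2
# table. Counts are invented for illustration, not the study's data.

def odds_ratio(a, b, c, d):
    """a, b = exposed with/without outcome; c, d = unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio(40, 20, 30, 60)   # hypothetical counts
```

A multivariable logistic model, as used in the study, adjusts such ratios for the other covariates; the 2×2 version shown here corresponds to the single-factor analyses.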
In Chile, the planted area of European hazelnut (<i>Corylus avellana</i> L.) reaches around 30,000 hectares, mainly concentrated in the central and southern areas of the country, where climate and soil provide a natural environment well suited to growing this species. Only a few diseases affect this nut tree in Chile. During the spring seasons of 2018 and 2020, European hazelnut plants (6 to 20% incidence) exhibited wood necrosis and vascular discoloration of branches, with reduced growth, cankers and wilted branches, in orchards located in San Clemente and Curicó, Maule Region, and Bulnes and El Carmen, Ñuble Region, Chile (36°45'-36°54' S; 71°03'-72°26' W). Symptomatic tissues were surface disinfected using a ~1% commercial sodium hypochlorite solution. Disinfected tissues were cut longitudinally, placed onto potato dextrose agar (PDA, Difco) plates, and incubated at 25 °C in the dark for 48 hours. Fungal hyphal tips were taken and placed on PDA medium. A fungal species was consistently isolated from these lignified tissues. The mycelium was initially translucent, turning white in appearance, while the mature mycelium was aerial, varying in color from pale to dark gray (Munsell color code: colony edge mycelium 6Y-6 4 / 5G and colony center mycelium B6-PB 7 / 5PB). The production of pycnidia and conidia was induced using pine needles on water agar medium incubated in the dark for 10 days. Hyaline unicellular conidia 25 ± 1.1 µm (range 23.9 to 26.1 µm) long and 11 ± 0.5 µm (range 10.5 to 11.5 µm) wide (n = 50) were obtained from black pycnidia. Based on the cultural and morphological characteristics observed, the pathogen was identified as a possible species of the family Botryosphaeriaceae (20 isolates). Molecular techniques were used to identify the species, and three isolates (F154, F199, and F167) were analyzed by multilocus sequence typing to confirm the identity of the pathogen. 
The ITS (internal transcribed spacer) region and the tef-1 (translation elongation factor 1-alpha) and β-tub (β-tubulin) genes were amplified using endpoint PCR with primers ITS1/ITS4 (White et al., 1990), EF1-728F/EF1-986R (Carbone & Kohn, 1999), and Bt2a/Bt2b (Glass & Donaldson, 1995), respectively. The amplicons were sequenced using the same primers and deposited in GenBank; the accession numbers for each isolate were OM993582, OM993583, and ON003481 for ITS; ON054936, ON054938, and ON054937 for tef-1; and ON054939, ON054941, and ON054940 for β-tub, respectively. A phylogenetic tree was constructed using the maximum likelihood method with the Tamura-Nei model, based on a concatenated dataset of the ITS region and the tef-1 and β-tubulin genes, using Mega-X; the three Chilean isolates (F154, F199, and F167) formed a single clade with the reference isolates of Diplodia mutila (Fr.) Mont. BLAST analyses indicated 100% identity to D. mutila for ITS (accession NR_144906), tef-1 (accession MK573559), and β-tubulin (accession MG952719). The pathogenicity of the three isolates was validated through Koch's postulates. For this purpose, a trial was established on 6-year-old European hazelnut plants cv. Tonda di Giffoni. Ten healthy branches were individually inoculated using actively growing mycelial discs from each isolate, while a disc of PDA without fungus was used as a control. Holes of 5-mm diameter were inoculated, making sure the mycelium was in contact with the wood. Finally, the wounds were sealed with plastic film to prevent external contamination and improve humidity conditions. After 120 days, each branch was sectioned longitudinally to verify the presence of wood necrosis, which extended 3.0 to 16.2 mm around the point of inoculation. No necrosis was observed in the control.
To confirm pathogenicity, infected tissues were cut into small pieces with sterile knives and scalpels, and surface disinfected with a 1% sodium hypochlorite solution for 1 min. The disinfected tissues were placed on PDA medium and incubated at 25°C in the dark until fungal growth was observed. Hyphal tips were taken from the mycelia developed from the pieces of wood and placed on PDA medium in order to obtain pure isolates. The pathogenicity of the D. mutila isolates F154 and F199 was observed in 100% of the inoculated branches, while isolate F167 showed symptoms in 85% of the branches. The reisolated strains showed mycelial growth and microscopic fungal structures similar to those observed in the isolates used for inoculation. This is the first report of D. mutila affecting European hazelnut in Chile. This fungus has recently been reported affecting hazelnut in Oregon, USA (Wiman et al., 2019), causing symptoms similar to those observed in our study. In addition, D. mutila has been reported infecting walnut in Chile (Diaz et al., 2018) and native forest trees, specifically Araucaria araucana, in Chile (Besoain et al., 2017). The presence of D. mutila in commercial hazelnut orchards in Chile highlights the need for epidemiological studies in order to understand the characteristics and impact of this pathogen and, on that basis, develop adequate phytosanitary programs for its control.
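The conidial dimensions above are reported as mean ± SD with the observed range over n = 50 measurements. A minimal sketch of that summary calculation (the five measurements below are illustrative values spanning the reported range, not the authors' raw data):

```python
import statistics

def conidia_summary(lengths_um):
    """Summarize conidial dimensions as mean ± SD and observed range,
    the format used in the report (e.g. 25 ± 1.1 µm, range 23.9-26.1 µm)."""
    mean = round(statistics.mean(lengths_um), 1)
    sd = round(statistics.stdev(lengths_um), 1)  # sample SD (n - 1)
    return mean, sd, min(lengths_um), max(lengths_um)

# Illustrative measurements only -- the study measured n = 50 conidia.
mean, sd, lo, hi = conidia_summary([23.9, 24.5, 25.0, 25.5, 26.1])
```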
The eastern region of Kazakhstan is an important territorial district producing seed potatoes. Since 2022, the region has been divided into the Abay region (administrative center in Semey city) and the East Kazakhstan region (administrative center in Ust-Kamenogorsk city). One of the largest elite seed farms producing potatoes is East Kazakhstan Agricultural Station LLP (Ust-Kamenogorsk). Potato virus Y (PVY) was reported as a problem for the production of Solanum tuberosum L. in Kazakhstan more than 25 years ago; however, over the past 5 years the spread of recombinant strains of PVY has become a more serious problem (Loebenstein, Manadilova 2003). The purpose of this study was to determine the prevalence of PVY strains in the fields of the eastern region of Kazakhstan, including the fields of an elite seed farm and commercial potato fields of the Glubokovsky district, for the period from 2020 to 2021. Previously, the presence of recombinant strains (PVY^N, PVY^NTN, PVY^N-Wi, PVY^O) was shown in the west of Kazakhstan (Khassanov et al. 2020), but there are no data on the variety of PVY strains in the east of the country. Considering that these regions are more than 2000 km apart, the study of the prevalence of PVY in the fields of Kazakhstan needs to be supplemented with new data. PVY strains show a range of symptoms in different potato cultivars. The most damaging of these symptoms is tuber necrosis (Karasev and Gray 2013), associated with PVY^NTN and some other recombinant strains. The pathogenesis of infection with the PVY^N strain and its recombinants is associated with the development of severe necrotic lesions of the tuber material, because of which yield losses can reach 60% (Chikh-Ali et al., 2020). In July 2021, leaf samples from 240 plants of two local regional potato varieties (Tavria and Izolda) were randomly selected to study the PVY strains circulating in seed potatoes.
120 samples of the Tavria variety (seed material of the Elita class) and 120 samples of the Izolda variety (seed material of the Elita class), grown on seed potato fields of the East Kazakhstan Agricultural Experimental Station (GPS: N50.03324°, E82.53346°), were tested for the presence of potato viruses (PVA, PVS, PVM, PLRV, PVY) using commercial ELISA test systems (Russian Potato Research Center, Russia). In most PVY-positive samples (43 leaf samples), PVY was detected together with a number of other potato viruses, mainly PVM and PVS. In order to type the PVY recombinant strains, samples containing PVY monoinfection were selected. According to the results of ELISA, 21 samples positive for PVY monoinfection were detected: 14 plants of the Tavria variety and 7 plants of the Izolda variety. Serotype analysis using anti-rabbit polyclonal antibodies (Bioreba AG, Switzerland) specific for the PVY^O, PVY^C, and PVY^N serotypes identified the PVY^O serotype in 14 samples of the Tavria variety and in 7 samples of the Izolda variety, and the PVY^N serotype in 2 samples of the Tavria variety and in 2 samples of the Izolda variety. All 21 PVY-positive samples were strain-typed by reverse transcription polymerase chain reaction (RT-PCR) using the strain-specific primers described by Chikh-Ali et al. (2010). The PCR products showed bands characteristic of the recombinant N:O strain (853, 633, and 441 bp) in 17 samples, and bands characteristic of the NTNa strain (1307, 633, and 441 bp) in 4 samples. After the analysis of the leaf samples, PVY-infected plants were removed from the soil and the tuber material was visually analyzed for signs of necrosis. According to the results of visual diagnostics, symptoms of tuber necrosis were found in 80% of the cases of infection with the recombinant PVY^NTNa strain.
In terms of severity, the symptoms of tuber necrosis were identical for both strains and caused damage to 35-50% of the tubers of each plant of the Tavria and Izolda varieties, which indicates the absence of resistance to these recombinant strains. At present, many potato varieties have strain-specific resistance to PVY^O (Funke et al. 2017). However, the N:O and NTNa recombinant strains are the most difficult to develop resistance against (Green et al. 2017). This is the first report of the Tavria and Izolda potato varieties as susceptible hosts to the recombinant strains PVY^N:O and PVY^NTNa. Over the past five years, the recombinant strains PVY^N:O and PVY^NTNa have been introduced into two regions of Kazakhstan. In this regard, research on and development of effective strategies to reduce the spread of the recombinant strains PVY^N:O and PVY^NTNa in Kazakhstan is particularly relevant. The authors declared no conflicts of interest. Funding: the research was carried out within the framework of the scientific project "Development and implementation of innovative technology aimed at imparting antiviral resistance to crop varieties", funded by the Ministry of Education and Science of the Republic of Kazakhstan, Individual registration number (IRN): AP08052163.
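The strain calls above rest on matching multiplex RT-PCR band patterns (Chikh-Ali et al. 2010 primers) to known strains: 853/633/441 bp for N:O and 1307/633/441 bp for NTNa. A simplified sketch of that lookup (real gel scoring also tolerates small size deviations and covers additional strains):

```python
# Band patterns (bp) reported for the strain-specific multiplex RT-PCR,
# as given in the text above.
PATTERNS = {
    frozenset({853, 633, 441}): "PVY N:O",
    frozenset({1307, 633, 441}): "PVY NTNa",
}

def call_strain(observed_bands_bp):
    """Return the strain whose reported band pattern matches the observed
    bands, or None if the pattern matches neither."""
    return PATTERNS.get(frozenset(observed_bands_bp))

# e.g. a sample showing 853, 633, and 441 bp products
strain = call_strain([853, 633, 441])
```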
In school dental screening, a dental health professional visually inspects children's oral cavities in a school setting and provides information for parents on their child's current oral health status and treatment needs. Screening at school aims to identify potential problems before symptomatic disease presentation, hence prompting preventive and therapeutic oral health care for the children. This review evaluates the effectiveness of school dental screening for improving oral health status. It is the second update of a review originally published in December 2017 and first updated in August 2019. To assess the effectiveness of school dental screening programmes on overall oral health status and use of dental services. An information specialist searched four bibliographic databases up to 15 October 2021 and used additional search methods to identify published, unpublished and ongoing studies. We included randomised controlled trials (RCTs; cluster- or individually randomised) that evaluated school dental screening compared with no intervention, or that compared two different types of screening. We used standard methodological procedures expected by Cochrane. The previous version of this review included seven RCTs, and our updated search identified one additional trial. Therefore, this update included eight trials (six cluster-RCTs) with 21,290 children aged 4 to 15 years. Four trials were conducted in the UK, two in India, one in the USA and one in Saudi Arabia. We rated two trials at low risk of bias, three at high risk of bias and three at unclear risk of bias.  No trials had long-term follow-up to ascertain the lasting effects of school dental screening. The trials assessed outcomes at 3 to 11 months of follow-up. No trials reported the proportion of children with treated or untreated oral diseases other than caries. Neither did they report on cost-effectiveness or adverse events. Four trials evaluated traditional screening versus no screening. 
We performed a meta-analysis for the outcome 'dental attendance' and found an inconclusive result with high heterogeneity. The heterogeneity was partly due to study design (three cluster-RCTs and one individually randomised trial). Due to this inconsistency, and unclear risk of bias, we downgraded the evidence to very low certainty, and we are unable to draw conclusions about this comparison. Two cluster-RCTs (both four-arm trials) evaluated criteria-based screening versus no screening, suggesting a possible small benefit (pooled risk ratio (RR) 1.07, 95% confidence interval (CI) 0.99 to 1.16; low-certainty evidence). There was no evidence of a difference when comparing criteria-based screening to traditional screening (RR 1.01, 95% CI 0.94 to 1.08; very low-certainty evidence). One trial compared a specific (personalised) referral letter to a non-specific letter. Results favoured the specific referral letter for increasing attendance at general dentist services (RR 1.39, 95% CI 1.09 to 1.77; very low-certainty evidence) and attendance at specialist orthodontist services (RR 1.90, 95% CI 1.18 to 3.06; very low-certainty evidence). One trial compared screening supplemented with motivation to screening alone. Dental attendance was more likely after screening supplemented with motivation (RR 3.08, 95% CI 2.57 to 3.71; very low-certainty evidence). One trial compared referral to a specific dental treatment facility with advice to attend a dentist. There was no evidence of a difference in dental attendance between these two referrals (RR 0.91, 95% CI 0.34 to 2.47; very low-certainty evidence). Only one trial reported the proportion of children with treated dental caries. This trial evaluated a post-screening referral letter based on the common-sense model of self-regulation (a theoretical framework that explains how people understand and respond to threats to their health), with or without a dental information guide, compared to a standard referral letter. 
The findings were inconclusive. Due to high risk of bias, indirectness and imprecision, we assessed the evidence as very low certainty. The evidence is insufficient to draw conclusions about whether there is a role for school dental screening in improving dental attendance.  We are uncertain whether traditional screening is better than no screening (very low-certainty evidence). Criteria-based screening may improve dental attendance when compared to no screening (low-certainty evidence). However, when compared to traditional screening, there is no evidence of a difference in dental attendance (very low-certainty evidence). For children requiring treatment, personalised or specific referral letters may improve dental attendance when compared to non-specific referral letters (very low-certainty evidence). Screening supplemented with motivation (oral health education and offer of free treatment) may improve dental attendance in comparison to screening alone (very low-certainty evidence). We are uncertain whether a referral letter based on the 'common-sense model of self-regulation' is better than a standard referral letter (very low-certainty evidence) or whether specific referral to a dental treatment facility is better than a generic advice letter to visit the dentist (very low-certainty evidence). The trials included in this review evaluated effects of school dental screening in the short term. None of them evaluated its effectiveness for improving oral health or addressed possible adverse effects or costs.
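The comparisons above are reported as risk ratios with 95% confidence intervals. A minimal sketch of how such a ratio and its Wald-type interval are computed from 2×2 counts (the counts below are hypothetical, not trial data; the review itself used Cochrane's pooled meta-analytic methods):

```python
import math

def risk_ratio(events_a, total_a, events_b, total_b):
    """Risk ratio of group A vs group B with a 95% CI computed on the
    log scale (standard Wald interval)."""
    rr = (events_a / total_a) / (events_b / total_b)
    se = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts: 150/500 attended the dentist after screening
# versus 100/500 without screening.
rr, lo, hi = risk_ratio(150, 500, 100, 500)
```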
The Committee on Operating Room Safety of the Japanese Society of Anesthesiologists (JSA) annually sends confidential questionnaires on perioperative mortality and morbidity to Certificated Training Hospitals of the JSA. This report covers perioperative mortality and morbidity in 1999, with special reference to anesthetic methods. Four hundred and sixty-seven hospitals reported the number of cases by anesthetic method, for a total of 727,723 cases. The incidences of cardiac arrest per 10,000 cases due to all etiologies are estimated to be 6.77 cases on average: 5.33 cases in inhalation anesthesia, 34.26 cases in total intravenous anesthesia (TIVA), 5.26 cases in inhalation anesthesia plus epidural, spinal, or conduction block, 5.29 cases in TIVA plus epidural, spinal, or conduction block, 0.73 cases in spinal with continuous epidural block (CSEA), 2.85 cases in epidural anesthesia, 1.63 cases in spinal anesthesia, 2.53 cases in conduction block, and 46.51 cases in other methods. However, the incidences of cardiac arrest per 10,000 cases totally attributable to anesthesia are estimated to be 0.78 cases on average: 0.51 cases in inhalation anesthesia, 1.35 cases in TIVA, 0.97 cases in inhalation anesthesia plus epidural, spinal, or conduction block, 1.51 cases in TIVA plus epidural, spinal, or conduction block, 0.73 cases in CSEA, 1.71 cases in epidural anesthesia, 0.54 cases in spinal anesthesia, 2.52 cases in conduction block, and 1.08 cases in other methods. The incidences of severe hypotension per 10,000 cases due to all etiologies are estimated to be 16.64 cases on average: 13.61 cases in inhalation anesthesia, 100.36 cases in TIVA, 13.32 cases in inhalation anesthesia plus epidural, spinal, or conduction block, 9.07 cases in TIVA plus epidural, spinal, or conduction block, 3.65 cases in CSEA, 6.26 cases in epidural anesthesia, 7.31 cases in spinal anesthesia, 2.52 cases in conduction block, and 28.12 cases in other methods.
On the other hand, the incidences of severe hypotension per 10,000 cases totally attributable to anesthesia are estimated to be 2.40 cases on average: 1.65 cases in inhalation anesthesia, 0.81 cases in TIVA, 3.92 cases in inhalation anesthesia plus epidural, spinal, or conduction block, 2.77 cases in TIVA plus epidural, spinal, or conduction block, 2.56 cases in CSEA, 3.42 cases in epidural anesthesia, 2.71 cases in spinal anesthesia, zero cases in conduction block, and zero cases in other methods. The incidences of severe hypoxia per 10,000 cases due to all etiologies are estimated to be 5.32 cases on average: 6.7 cases in inhalation anesthesia, 9.17 cases in TIVA, 5.16 cases in inhalation anesthesia plus epidural, spinal, or conduction block, 4.53 cases in TIVA plus epidural, spinal, or conduction block, 2.56 cases in CSEA, zero cases in epidural anesthesia, 1.08 cases in spinal anesthesia, zero cases in conduction block, and 1.08 cases in other methods. On the other hand, the incidences of severe hypoxia per 10,000 cases totally attributable to anesthesia are estimated to be 2.39 cases on average: 3.22 cases in inhalation anesthesia, 2.43 cases in TIVA, 2.26 cases in inhalation anesthesia plus epidural, spinal, or conduction block, 2.77 cases in TIVA plus epidural, spinal, or conduction block, zero cases in CSEA, zero cases in epidural anesthesia, 0.54 cases in spinal anesthesia, zero cases in conduction block, and 1.08 cases in other methods. The mortality rates from cardiac arrest per 10,000 cases due to all etiologies are estimated to be 3.56 cases on average: 2.82 cases in inhalation anesthesia, 24.55 cases in TIVA, 1.4 cases in inhalation anesthesia plus epidural, spinal, or conduction block, 1.51 cases in TIVA plus epidural, spinal, or conduction block, zero cases in CSEA, 0.57 cases in epidural anesthesia, 0.27 cases in spinal anesthesia, zero cases in conduction block, and 42.18 cases in other methods.
On the other hand, the mortality rates from cardiac arrest per 10,000 cases totally attributable to anesthesia are estimated to be 0.08 cases on average: 0.09 cases in inhalation anesthesia, 0.27 cases in TIVA, 0.05 cases in inhalation anesthesia plus epidural, spinal, or conduction block, zero cases in TIVA plus epidural, spinal, or conduction block, zero cases in CSEA, 0.57 cases in epidural anesthesia, and zero cases in spinal anesthesia, conduction block, and other methods. The outcomes of cardiac arrest totally attributable to anesthesia were full recovery without any sequelae in 70.2%, death within 7 days in 10.5%, vegetative state in 1.8%, and unknown in 17.5%, while the outcomes of critical events, including severe hypotension and severe hypoxia, totally attributable to anesthesia were full recovery without any sequelae in 94.9%, death within 7 days in 0.4%, vegetative state in 0.2%, and unknown in 4.5%. These results indicate that there were no differences in mortality and morbidity totally attributable to anesthesia among anesthetic methods in 1999 at Certificated Training Hospitals of the Japanese Society of Anesthesiologists.
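All of the survey figures above are rates per 10,000 anesthetics. A quick sketch of the underlying arithmetic (the event count below is back-calculated from the quoted overall average of 6.77 per 10,000 and is illustrative, not a figure reported by the survey):

```python
def incidence_per_10000(events, cases):
    """Express an event count as an incidence per 10,000 cases,
    the unit used throughout the JSA survey figures."""
    return events / cases * 10_000

# Illustrative check: ~493 cardiac arrests among the 727,723 reported cases
# corresponds to the overall rate of about 6.77 per 10,000 quoted above.
rate = incidence_per_10000(493, 727_723)
```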
Antidepressant consumption in France seems to be higher than in comparable countries, as does the overall consumption of healthcare and medications. In Western countries, the use of antidepressants has increased steadily in recent years, mainly due to the use of serotonergic antidepressants. In France, the one-week prevalence of antidepressant use in the overall population increased from 1.7% in 1992 to 3% in 1995. This survey addressed the overall population through a representative sample, focusing on subjects who indicated, at the time they were contacted, that they were taking an antidepressant. The study aimed to determine the circumstances of prescription: prescriber profile, reason for prescription, type of medication prescribed, match between the prescription and the product indications stated in the marketing authorization, prescription duration, and reason for discontinuing treatment. Methodology - The first stage consisted of mailing a letter to a panel of 44 000 subjects aged 15 years or more and representative of the French population. The aim was to achieve a cross-sectional description of the population taking antidepressants. The response rate was 82% (36 036 subjects). The subjects who stated that they were taking an antidepressant were re-contacted by telephone by an interviewer trained in the use of the Composite International Diagnostic Interview - lifetime (CIDI), exploring depressive and anxiety disorders with a view to potential diagnosis as per DSM criteria. Longitudinal follow-up over 8 months from the initial screening was evaluated using a monthly questionnaire on the time course of antidepressant consumption. Results - Out of 20 000 households, comprising 44 000 people aged over 15 years, 1 333 people were taking an antidepressant or had taken one in the previous 4 weeks. The sex ratio of the antidepressant consumers was 3 women to 1 man, amplifying the known sex ratio with respect to depressive disorders.
The mean age of the subjects taking an antidepressant at time t was 51 years. Lifestyle and socioprofessional category did not seem to influence antidepressant consumption. Somatic comorbidity was present in 60% of antidepressant consumers. Among the consumers of antidepressants at time t, 45% were taking a selective serotonin reuptake inhibitor (SSRI). The two most widely prescribed products in that class were fluoxetine (30% of the subjects taking an antidepressant at time t) and paroxetine (10%). The other SSRIs accounted for the remaining 5%. Thirty-nine percent of the consumers were taking a tricyclic antidepressant: clomipramine in 16% of cases, amitriptyline in 14%, and other tricyclic antidepressants in 9%. Lastly, 20% of the consumers were taking an antidepressant that was neither an SSRI nor a tricyclic antidepressant. Only 4% of the patients were concomitantly taking 2 antidepressants; single-agent therapy is in line with the recommendations of the various expert groups. In the survey, 9 antidepressant prescriptions out of 10 were written by an open-care practitioner, and 1 out of 10 by a hospital physician. For 60% of the subjects, the antidepressant treatment was prescribed by a general practitioner. General practitioners prescribe fewer tricyclic antidepressants and more SSRIs than specialists. The main reason for prescription reported by the patient was depression (57% of cases), followed by a state of anxiety or stress (15% of cases). In 10% of cases, the consumer stated that the reason for treatment was not psychological. Sixty-two percent of subjects presented with, or had presented with, a mood disorder as per the M-CIDI (major depression, mood disorder, or a combination of the two) and 14% an isolated anxiety disorder. Twenty-five percent of the subjects on antidepressants did not fulfill all the M-CIDI criteria for any diagnosis.
Among the people receiving antidepressants, 54% had a CIDI diagnosis in strict compliance with the marketing authorization indications for the product considered. One quarter (25%) presented with a diagnosis of a characterized psychiatric disease outside the marketing authorization indications for the product taken. This finding reflects misuse, or use on the basis of published data not incorporated in the marketing authorization. The dosages were in line with those stated in the marketing authorization for the disease considered in almost 99% of cases for the subjects on paroxetine and fluoxetine, but in only 22% of cases for the subjects on tricyclic antidepressants. Tricyclic antidepressants therefore appear to be frequently prescribed at inappropriate doses that are likely to be ineffective: half of the subjects on clomipramine were taking a dose less than or equal to one third of the minimum recommended dose. Conclusion - This survey shows that the point prevalence of antidepressant use in the overall population in France is about 3.5%. Women consume more antidepressants than men. SSRIs are the most widely prescribed antidepressants. The survey findings point out the discrepancies between official indications, such as those issued by the regulatory authorities, and physicians' prescribing practices.
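The dose-adequacy finding for clomipramine rests on a simple threshold: a dose at or below one third of the minimum recommended dose is treated as likely ineffective. A sketch of that check (the milligram values below are placeholders for illustration, not doses taken from the survey):

```python
def dose_adequate(prescribed_mg, min_recommended_mg):
    """True if the prescribed dose exceeds one third of the minimum
    recommended dose -- the ineffectiveness threshold the survey used
    to describe the clomipramine prescriptions."""
    return prescribed_mg > min_recommended_mg / 3

# Placeholder doses (mg/day) checked against a placeholder minimum of 75 mg.
flags = [dose_adequate(d, 75) for d in (25, 50, 100)]
```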
Neural and stem cell transplantation is emerging as a potential treatment for neurodegenerative diseases from Parkinson's to Huntington's disease. Stereotactic placement of dopaminergic neurons in the caudate-putamen (striatum) is being attempted in centers of excellence and has proved to be beneficial. Basic research using cell transplantation indicates that structural development mechanisms seen in immature (i.e., fetal) brains can also function in the adult brain. The adult brain consumes 15% of the resting cardiac output for its metabolic needs. While most human tissues can sustain an anaerobic assault for a few minutes up to 30 minutes, a sudden total lack of oxygen supply to the brain cells of an adult will render the person unconscious within five to ten seconds. Our team has been working on the problem of human fetal tissue response to antigenic assault for the last two decades. In the present series, 12 patients with prolonged histories of Parkinsonism, who were not responding to anti-Parkinsonian drugs and could not afford costly stereotactic surgery, deep brain stimulation, or other modalities of recent Parkinson's disease treatment, were enrolled in the study. After obtaining proper informed consent from the patients or their guardians and approval from the multidisciplinary ethical committee, the patients, varying in age from 45 to 75 years and suffering for many years with Parkinsonism, were enrolled in the heterotopic brain tissue transplant programme. We followed standard antiseptic, aseptic, and premedication protocols after selecting a proposed transplantation site in the axillary fold of the skin, under local infiltration anaesthesia. In an adjacent operating room, a fetus was collected from a consenting patient undergoing hysterotomy and ligation (before 20 weeks), under general anaesthesia.
Within a minute of hysterotomy, the fetal brain tissue was dissected and, under the guidance of the operative microscope, 1 g of fetal cortical brain tissue was dissected and weighed on an electronic balance. The tissue was collected from around 1 cm of the frontal opercula of the developing human fetal brain, grafted into the already dissected and prepared subcutaneous site in the axilla, and the skin was closed. Hematological parameters (hemoglobin, Hgb; total count, Tc; differential count, Dc; erythrocyte sedimentation rate, ESR) were estimated sequentially for up to one month. A small portion of the transplanted tissue was retrieved after one to two months, and a serial histological study was done along with a clinical assessment of the disease condition as per the specifications of the Unified Parkinson's Disease Rating Scale. The results were matched with the pre-transplant ratings of the individual cases. Presenting dyskinesia was also rated (0-4) on the basis of objective criteria such as walking, putting on a coat, and lifting a cup to drink. Initially, 30 patients suffering from advanced Parkinson's disease (PD) were approached after the necessary clearance was obtained from the institutional multidisciplinary ethical committee; however, we have been able to arrange transplantation in only 12 cases so far. These patients were evaluated in the pre-transplant period and one month post-transplant using the Unified Parkinson's Disease Rating Scale (0-108); the minimum score was 40 in the motor portion of the unified scale in the pre-transplant state. Evaluation of the patients after one month revealed mild improvement over the pre-transplant scoring (up to 33.3%) in 41.6% of the cases, and moderate improvement (up to 66.6%) in another 41.6% of the cases.
While 16.8% of the cases did not show any improvement from the basal (pre-transplant) score, a definite sense of well-being and a rise in weight (2-4 pounds) were noted in each case, and there was also a reduction of the L-Dopa dosage in 75% of the cases. There was also a 58.3% improvement in the bradykinesia scoring from the pre-transplant level. What is intriguing is the survival, growth, and proliferation of the grafted fetal brain tissue in the HLA- and sex-randomized adult axilla without any immunosuppressive support. Not a single histological study of the fetal brain tissues after removal from the axilla showed any signs of graft-versus-host or inflammatory reaction (Figures 1-9); rather, there were features of growth of the transplanted cortical brain tissue along with its different components, such as neurogenesis, gliogenesis, early neovascularisation, and angiogenesis. There was also no systemic leucocytosis or lymphocytosis. Histological evidence at the transplanted tissue site suggests that fetal cortical brain tissue can sustain life in sex-randomized, HLA-randomized adult hosts without the support of immunosuppressive drugs and without the tacit support of the blood-CSF and blood-brain barriers and the other specific requirements of adult brain cells in the skull. Whether the clinical improvement in PD is transient or long lasting is presently under investigation, along with basic questions such as whether it is due to the transplanted fetal dopaminergic or non-dopaminergic neurons, or to growth factors and cytokine-mediated, hitherto unknown reactions.
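The improvement categories above are percent reductions from the pre-transplant UPDRS motor score: 'mild' up to 33.3% and 'moderate' up to 66.6%. A sketch of that scoring with hypothetical scores (only the minimum pre-transplant motor score of 40 comes from the report):

```python
def improvement_pct(pre_score, post_score):
    """Percent reduction from the pre-transplant UPDRS motor score,
    categorized as in the report: 'mild' up to 33.3%, 'moderate' up to
    66.6% (reductions beyond that are not distinguished here)."""
    change = (pre_score - post_score) / pre_score * 100
    if change <= 0:
        return change, "none"
    return change, "mild" if change <= 33.3 else "moderate"

# Hypothetical case: motor score falling from 40 (the series minimum) to 25.
change, category = improvement_pct(40, 25)
```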
H(+) is maintained constant in the internal environment at a given body temperature, independent of the external environment, according to Bernard's principle of the "milieu intérieur". CO2 relates to ventilation and H(+) to the kidney; hence the title of the chapter. To accomplish this regulation, sensors for H(+) in the internal environment are needed. The sensor-receptor performs CO2/H(+) sensing and is coupled to integrative mechanisms that maintain the body's chemical environment at equilibrium. This chapter dwells on this theme of constancy of H(+) in the blood and the other internal environments. [H(+)] is regulated jointly by the respiratory and renal systems. The respiratory response to [H(+)] originates from the activities of two groups of chemoreceptors in two separate body fluid compartments: (A) the carotid and aortic bodies, which sense arterial P(O2) and H(+); and (B) the medullary H(+) receptors on the ventrolateral medulla of the central nervous system (CNS). The arterial chemoreceptors function to maintain arterial P(O2) and H(+) constant, and the medullary H(+) receptors to maintain the H(+) of the brain fluid constant. Any acute change of H(+) in these compartments is taken care of almost instantly by pulmonary ventilation, and slowly by the kidney. This general theme is considered in Section 1. The general principles involving cellular CO2 reactions mediated by carbonic anhydrase (CA) and the transport of CO2 and H(+) are described in Section 2. Since the rest of the chapter depends on these key mechanisms, they are given in detail, including the role of the Jacobs-Stewart cycle and its interaction with carbonic anhydrase. This section also deals briefly with the mechanisms of membrane depolarization of the chemoreceptor cells, because this is one mechanism on which the responses depend. The metabolic impact of endogenous CO2 appears in the section with a historical twist, in the context of acclimatization to high altitude (Section 3).
Because low P(O2) at high altitude stimulates the peripheral chemoreceptors (PC), increasing ventilation, the endogenous CO2 is blown off, making the internal milieu alkaline. With acclimatization, however, ventilation increases further. This alkalinity is compensated in the course of time by the kidney and the acidity tends to be restored, but the acidification is not great enough to increase ventilation further. The question is: what drives ventilation during acclimatization when the central pH is alkaline? The peripheral chemoreceptor comes to the rescue: its sensitivity to P(O2) is increased, which continues to drive ventilation during acclimatization at high altitude even when the pH is alkaline. This link of CO2 through the O2 chemoreceptor is described in Section 4, which leads to hypoxia-inducible factor (HIF-1). HIF-1 is stabilized during hypoxia, including in the carotid body (CB) and brain cells, the seat of CO2 chemoreception. These cells are always hypoxic, even at sea level. But how CO2 can affect HIF-1 in the brain is considered in this section. CO2 sensing in the central chemoreceptors (CC) is given in Section 5. CO2/H(+) is sensed by various structures in the central nervous system, but its respiratory and cardiovascular responses are restricted to only some areas. How the membranes are depolarized by CO2, and how it works through Na(+)/Ca(2+) exchange, are discussed in this section. It is obvious, however, that CO2 is not maintained constant: it decreases with altitude as alveolar P(O2) decreases and ventilation increases. Rather, it is the [H(+)] that the organism strives to maintain, at the expense of CO2. But then again, [H(+)] where? Perhaps it is in the intracellular environment. Gap junctions in the carotid body and in the brain are ubiquitous. What functions they perform is considered in Section 6. CO2 changes take place in the lung alveoli, where inspired air mixes with the CO2 from the returning venous blood. 
It is at the interface between the inspired and expired air in the lungs that the CO2 change is most dramatic. As a result, various investigators have looked for CO2 receptors in the lung, but none have been found in mammals. Instead, CO2/H(+) receptors were found in birds and amphibians. However, they are inhibited by increasing CO2/H(+) instead of being stimulated. Yet the afferent impulses transmitted to the brain produce stimulation in the efferents. This reversal of afferent-efferent inputs is a curious situation in nature, and it is considered in Section 7. The NO and CO effects on CO2 sensing are interesting and are briefly mentioned in Section 8. A model for CO2/H(+) sensing by cells, neurons and bare nerve endings is also considered. These NO effects, the models for CO2/H(+) sensing, and O2-sensitive cells in the CNS are considered in the perspectives. Finally, in conclusion, the general theme of constancy of the internal environment for CO2/H(+) is reiterated, and for that, CO2/H(+) sensor-receptor systems are essential. Since CO2/H(+) sensing as such has not been reviewed before, the recent findings, in addition to the basic CO2/H(+) reactions in the cells, have been briefly summarized.
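The joint respiratory (CO2) and renal (HCO3-) control of [H+] described in this chapter is conventionally expressed through the Henderson-Hasselbalch relationship. The sketch below is not from the chapter itself; it assumes the standard constants (pKa = 6.1, CO2 solubility 0.03 mmol/L per mmHg) to illustrate how hyperventilation at altitude raises pH and how renal bicarbonate adjustment restores it.

```python
import math

def blood_ph(pco2_mmhg, hco3_mmol_l, pka=6.1, alpha=0.03):
    """Henderson-Hasselbalch: pH = pKa + log10([HCO3-] / (alpha * PCO2))."""
    return pka + math.log10(hco3_mmol_l / (alpha * pco2_mmhg))

# Normal arterial values: PCO2 ~40 mmHg, [HCO3-] ~24 mmol/L -> pH ~7.40
print(round(blood_ph(40, 24), 2))
# Hypoxia-driven hyperventilation blows off CO2 -> respiratory alkalosis
print(round(blood_ph(25, 24), 2))
# Renal compensation lowers [HCO3-], restoring pH toward normal
print(round(blood_ph(25, 15), 2))
```

The PCO2 and HCO3- values used here are illustrative textbook figures, not data from the chapter.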
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Because of concern that feedings may increase the risk of necrotizing enterocolitis, some high-risk infants have received prolonged periods of parenteral nutrition without enteral feedings. Providing trophic feedings (small volume feedings given at the same rate for at least 5 days) during this period of parenteral nutrition was developed as a strategy to enhance feeding tolerance and decrease time to reach full feedings. Whether trophic feedings result in better outcomes than initially withholding feedings or providing progressively increasing feedings can be established only in proper clinical trials. 1. For high-risk neonates receiving parenteral feedings, to assess the effect of trophic feedings compared to no enteral nutrient intake on measures of feeding tolerance and neonatal outcome. 2. For high-risk neonates receiving parenteral feedings, to assess the effect of trophic feedings compared to a specific initial feeding regimen involving a greater enteral nutrient intake on measures of feeding tolerance and neonatal outcome. Searches were performed of MEDLINE (1966 - June 2004), CINAHL (1982 - June 2004), the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library, Issue 3, 2004), abstracts and conference proceedings, references from relevant publications in the English language, and studies identified by personal communication. Only randomized or quasi-randomized clinical trials were considered. Trials were included if they enrolled high-risk infants randomly assigned to receive trophic feedings (defined as dilute or full strength feedings providing ≤25 kcal/kg/d for ≥5 d) compared to either 1) no enteral nutrient intake (no feedings or water only) or 2) a specific feeding regimen involving a greater enteral intake of formula or human milk than with trophic feedings. The two reviewers reached consensus on the inclusion of trials. 
Data regarding clinical outcomes were extracted and evaluated by the two reviewers independently of each other. Authors were contacted as needed and feasible to clarify or provide missing data. The specific data that were needed were requested in writing. 1. Trophic feedings vs. no feedings (10 trials): Among infants given trophic feedings, there was an overall reduction in days to full feeding (weighted mean difference [WMD] = -2.6 [95% confidence limits = -4.1, -1.0]), total days that feedings were held (WMD = -3.1 [-4.6, -1.6]), and total hospital stay (WMD = -11.4 [-17.2, -5.7]) compared to infants given no enteral nutrient intake. Tests for heterogeneity were significant in analyses of days to full enteral feedings, days to regain birth weight, days of phototherapy, and hospital stay. There was no significant difference in necrotizing enterocolitis, although the findings do not exclude an important effect (relative risk = 1.16 [0.75, 1.79]; risk difference = 0.02 [-0.03, 0.06]). 2. Trophic feedings vs. advancing feedings (one trial): Infants given trophic feedings required more days to reach full enteral feeding (13.4 [8.2, 18.6]) and tended to have a longer hospital stay (11.0 [-1.4, 23.4]) than did infants given advancing feedings. With only eight total cases of necrotizing enterocolitis, trophic feedings were associated with a marginally significant reduction in necrotizing enterocolitis (relative risk = 0.14 [0.02, 1.07]; risk difference = -0.09 [-0.16, -0.01]). In both comparisons, the group with the greater enteral intake (trophic feedings in the first comparison and advancing feedings in the second comparison) required significantly less time to reach full feedings and had a significant or near significant reduction in hospital stay. In both comparisons, the group with the greater intake also had a higher incidence of necrotizing enterocolitis, although the difference was not statistically significant. 
The concern is greatest for the advancing feeding regimen. Even when trophic feedings were compared to no feedings, the relative risk for necrotizing enterocolitis was 1.16 (0.75 - 1.79), a finding consistent with a 16% increase in necrotizing enterocolitis and a number needed to harm of 50. A true increase of this magnitude might outweigh any short- or long-term benefits of trophic feedings. Moreover, the 95% confidence interval does not exclude the possibility that trophic feedings increase necrotizing enterocolitis by as much as 79%, with a number needed to harm of 17. Whether no feedings, trophic feedings, or advancing feedings should initially be used is difficult to discern for a variety of reasons: the inherent difficulty of assessing enteral feedings in high-risk infants, the limited sample size and methodologic limitations of most studies to date, unexplained heterogeneity with respect to a number of outcomes, the potential for bias to affect the findings in unblinded studies, and the large number of infants who must be studied to assess the effect on necrotizing enterocolitis. One or more large, well designed, multi-center trials are needed to compare these approaches to early feeding with respect to important clinical outcomes. A conclusive evaluation would assess effects not only on the survival rate without necrotizing enterocolitis prior to discharge from the neonatal unit but also on the survival rate without severe gastrointestinal or neurodevelopmental disability at ≥18 months of age.
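The number-needed-to-harm figures quoted above follow directly from the reported risk differences: the NNH is the reciprocal of the absolute risk difference. A minimal sketch, using the point estimate (0.02) and the upper 95% confidence limit (0.06) from the review:

```python
def number_needed_to_harm(risk_difference):
    """NNH is the reciprocal of the absolute risk difference."""
    return 1.0 / risk_difference

# Point estimate of the risk difference for necrotizing enterocolitis
# (trophic feedings vs no feedings)
print(round(number_needed_to_harm(0.02)))  # 50
# Upper 95% confidence limit of the risk difference
print(round(number_needed_to_harm(0.06)))  # 17
```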
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
The purpose of this work was to determine whether visual impairment caused by toxoplasmic chorioretinitis is associated with impaired performance of specific tasks on standardized tests of cognitive function. If so, we sought to determine whether there are patterns in these difficulties that provide a logical basis for developing measures of cognitive function independent of visual impairment and compensatory intervention strategies to facilitate learning for such children. Sixty-four children with congenital toxoplasmosis, with intelligence quotient scores ≥50 and visual acuity sufficient to cooperate with all of the intelligence quotient subscales, had assessments of their vision, appearance of their retinas, and cognitive testing performed between 3.5 and 5 years of age. These evaluations took place between 1981 and 1998 as part of a longitudinal study to determine the outcome of congenital toxoplasmosis. Children were evaluated at 3.5 or 5 (37 children) or at both 3.5 and 5 (27 children) years of age. Cognitive function was measured using the Wechsler Preschool and Primary Scale of Intelligence-Revised. Wechsler Preschool and Primary Scale of Intelligence-Revised scale scores were compared for children grouped as those who had normal visual acuity in their best eye (group 1) and those who had impaired vision in their best eye (acuity <20/40) because of macular disease (group 2). Demographic characteristics were compared for children in the 2 groups. Test scores were compared between groups using all of the 3.5-year-old visits, all of the 5-year-old visits, and each child's "last" visit (ie, using the 5-year-old test results when a child was tested at both 3.5 and 5 years of age or only at 5 years, otherwise using the 3.5-year-old test results). The results were similar and, therefore, only the results from the last analysis are reported here. 
There were 48 children with normal visual acuity in their best eye (group 1) and 16 children with impaired vision because of macular involvement in their best eye (group 2). Ethnicity and socioeconomic scores were similar. There was a significantly greater proportion of males in group 2 compared with group 1 (81% vs 46%). There was no significant diminution in Wechsler Preschool and Primary Scale of Intelligence-Revised test scores between 3.5 and 5 years of age for the 27 children tested at both of these ages. Verbal intelligence quotient, performance intelligence quotient, and full-scale intelligence quotient scores, and all of the scaled scores except arithmetic and block design, were significantly lower for children in group 2 compared with group 1. The majority of the differences remained statistically significant or borderline significant after adjusting for gender; however, the difference in overall verbal scores did not remain statistically significant. Mean ± SD verbal (98 ± 20) and performance (95 ± 17) intelligence quotients were not significantly different for children in group 1. However, verbal (88 ± 13) and performance (78 ± 17) intelligence quotients were significantly different for children in group 2. For children in group 2, the lowest scale scores were in object assembly, geometric design, mazes, and picture completion, all timed tests that involve visual discrimination of linear forms with small intersecting lines. In the 2 scaled scores that did not differ between groups 1 and 2, arithmetic and block design, timing and vision, but not linear forms, were components of the tasks. Children with monocular and binocular normal visual acuity did not differ in verbal, performance, or full-scale intelligence quotients or any of the subscale tests. Difficulty with sight or concomitant neurologic involvement also seemed to impact the ability to acquire information, comprehension skills, and vocabulary, as well as performance in similarities testing. 
After controlling for gender, however, these differences were diminished, and there were no longer differences in overall verbal scores. As noted above, results were generally similar when all of the tests for 3.5-year-olds or 5-year-olds were analyzed separately. At the 3.5-year visit there were fewer significant differences between the 2 groups for the verbal components than at the 5-year visit. In children with congenital toxoplasmosis and bilateral macular disease (group 2) because of toxoplasmic chorioretinitis, scaled scores were lowest on timed tests that require discrimination of fine intersecting lines. Although the severity of ocular and neurologic involvement is often congruent in children with congenital toxoplasmosis, ophthalmologic involvement seems to account for certain specific limitations on tests of cognitive function. Children with such visual impairment compensate with higher verbal skills, but their verbal scores are still less than those of children with normal vision, and in some cases significantly so, indicating that vision impairment might affect other aspects of cognitive testing. Patterns of difficulties noted in the subscales indicate that certain compensatory intervention strategies to facilitate learning and performance may be particularly helpful for children with these impairments. These patterns also provide a basis for the development of measures of cognitive function independent of visual impairment.
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Worldwide, the prevalence of obesity has reached epidemic proportions. In Denmark, one third of all pregnant women are overweight and 12% are obese. Perhaps even more concerning, a dramatic rise in the prevalence of childhood overweight and obesity has also been evident over recent decades. The obesity epidemic is not simply a consequence of poor diet or sedentary lifestyles. Obesity is a multifactorial condition in which environmental, biological and genetic factors all play essential roles. The Developmental Origins of Health and Disease (DOHaD) hypothesis has highlighted the link between prenatal, perinatal and early postnatal exposure to certain environmental factors and the subsequent development of obesity and non-communicable diseases. Maternal obesity and excessive gestational weight gain, resulting in over-nutrition of the fetus, are major contributors to obesity and metabolic disturbances in the offspring. Pregnancy offers the opportunity to modify the intrauterine environment, and maternal lifestyle changes during gestation may confer health benefits to the child. The overall aim of this PhD thesis was to study the effects of maternal obesity on offspring body size and metabolic outcomes, with special emphasis on the effects of lifestyle intervention during pregnancy. The thesis is based on a literature review, a description of our own studies and three original papers/manuscripts (I, II and III). In paper I, we used data from the Danish Medical Birth Registry. The aim of this paper was to examine the impact of maternal pregestational Body Mass Index (BMI) and smoking on neonatal abdominal circumference (AC) and weight at birth, and to define reference curves for birth AC and weight in offspring of healthy, non-smoking, normal weight women. Data on 366,886 singletons were extracted and analyzed using multivariate linear regressions. We found that birth AC and weight increased with increasing pregestational BMI and decreased with smoking. 
Reference curves were created for offspring of healthy, non-smoking mothers with normal pregestational BMI. Papers II and III are based on an offspring follow-up of a randomized controlled trial (RCT) with 360 obese pregnant women. The intervention during pregnancy consisted of two major components: dietary advice and physical activity. The intervention resulted in a small, but significant, difference in gestational weight gain compared to the control group. A total of 301 women completed the trial and were eligible for the follow-up. We managed to include 157 mother and child dyads in the follow-up, which was conducted at Odense University Hospital and Aarhus University Hospital, Skejby, between February 2010 and November 2012. At that time, the children were aged 2.5-3 years. In addition to the children from the RCT, a group of 97 children born to lean mothers was included as an external reference group. The follow-up consisted of a clinical examination with anthropometric measures, DEXA scans and fasting blood samples for evaluation of metabolic outcomes. In paper II, the effect of the maternal intervention on offspring body composition and anthropometric outcomes was studied. The primary outcome was BMI Z-score and the secondary outcomes were: body composition values by DEXA (fat mass, lean mass and fat percentage), BMI, percentage of overweight or obese children and skin fold thicknesses. We found no significant differences in offspring outcomes between the randomized groups of the preceding RCT. Nor were any differences detected between offspring of the RCT and the external reference group born to lean mothers. Paper III focused on the metabolic outcomes in the offspring. We additionally studied the predictive values of birth weight (BW) and birth abdominal circumference (BAC) on metabolic risk factors. We found that both BAC and BW were significantly associated with several risk factors in early childhood. 
All metabolic measurements in RCT offspring were similar, and no differences were detected between the RCT offspring and the external reference group of offspring of lean mothers. Lifestyle intervention in obese pregnant women has the potential to modify the intrauterine environment and confer long-term benefits to the child. In this follow-up study, lifestyle intervention in pregnancy did not result in changes in offspring body composition or metabolic risk factors at 2.8 years. This might be due to the limited difference in gestational weight gain between the randomized groups among follow-up attendees. When comparing offspring of obese women with offspring of normal weight mothers, all outcomes were similar. We speculate that obese mothers entering a lifestyle intervention RCT, regardless of the intervention, have a high motivation to focus on a healthy lifestyle during pregnancy, which makes it difficult to determine the effects of the randomized lifestyle intervention compared to an unselected control group of obese women. Our studies (papers I and III) on birth abdominal circumference show that abdominal size at birth is a good predictor of a later adverse metabolic profile. Abdominal circumference at birth may reflect visceral adiposity, and this measurement and birth weight are strongly associated with later adverse metabolic outcomes. Future studies should be performed in other populations to confirm this.
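The primary outcome of paper II, the BMI Z-score, is typically computed against age- and sex-specific reference data using Cole's LMS method. A minimal sketch of that calculation; the (L, M, S) values below are illustrative assumptions, not the actual reference values used in the thesis:

```python
import math

def lms_zscore(x, L, M, S):
    """Cole's LMS method: Z = ((x/M)**L - 1) / (L*S); for L -> 0, Z = ln(x/M) / S."""
    if abs(L) < 1e-9:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

# Hypothetical reference triplet (L, M, S) for BMI at age 3 -- illustrative only
print(round(lms_zscore(17.5, L=-1.5, M=16.0, S=0.08), 2))  # positive Z: above the median
```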
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
To compare the pharmacokinetics of, and food effect on, diclofenac potassium delivered as an oral solution vs an immediate-release tablet. Diclofenac potassium for oral solution is the only nonsteroidal anti-inflammatory drug approved as monotherapy for the acute treatment of migraine attacks with or without aura in adults 18 years of age or older. It is formulated with potassium bicarbonate as a buffering agent to raise the pH and consequently increase the aqueous solubility of diclofenac in the acidic environment of the stomach following oral administration. The dosage is 50 mg of powdered diclofenac potassium dissolved in 1 to 2 ounces (30 to 60 mL) of water prior to administration, with dosing time in relation to food intake not specified - this was the case for the pivotal efficacy and safety trials in subjects with acute migraine attacks in which the primary endpoints were achieved. For acute treatment of migraine attacks, rapid onset of pain relief is desirable and is likely related to a rapid appearance of an effective concentration of the drug in the systemic circulation. The rate at which an orally administered drug reaches the blood is affected by both its formulation and the presence of food in the stomach. The present study was designed to investigate the pharmacokinetics of 2 formulations of diclofenac potassium, an immediate-release tablet and an oral solution, and to ascertain the effect of food. This was an open-label, randomized, single-center, crossover trial in healthy volunteers. Subjects were randomized using a computer-generated list in a 1:1:1:1 ratio. They received a single 50-mg dose of diclofenac potassium in 4 sequences (ABCD, BADC, CDBA, and DCAB) during each of the 4 treatment periods. The 4 treatments were: A, oral solution fasting; B, tablet fasting; C, oral solution fed; and D, tablet fed. There was a ≥7-day washout period between doses. 
Blood samples for pharmacokinetic analysis were taken for up to 12 hours post-dose and analyzed for diclofenac concentrations. Pharmacokinetic parameters, including peak concentration (Cmax), time to Cmax (tmax), area under the concentration-time curve (AUC) from time 0 to the last measurable concentration (AUCt), and AUC extrapolated to infinity (AUC∞), were obtained using non-compartmental analysis. Comparative assessments for Cmax and AUC were performed between the solution and tablet under fed and fasting conditions and between fed and fasting states for both formulations. Bioequivalent exposure was defined as the geometric mean ratio and its 90% confidence interval falling within 80.0-125.0% for Cmax and AUC. Adverse events (AEs) were monitored throughout the trial. Sixty-one percent of the 36 randomized subjects were male, 91.7% were Caucasian, and the mean (standard deviation [SD]) age was 31.9 (7.6) years. Thirty-three (91.7%) subjects completed all 4 treatments. When taken under fed conditions, the oral solution resulted in an approximately 80% faster median tmax (0.17 vs 1.25 hours, P = .00015) and a 21% lower Cmax (mean ± SD, ng/mL: 506 ± 305 vs 835 ± 449, P = .00061) compared with the tablet. AUC values were similar between the 2 formulations. When taken under fasting conditions, the oral solution exhibited a 50% faster median tmax (0.25 vs 0.50 hours, P = .00035) to achieve a 77% higher Cmax (mean ± SD, ng/mL: 1620 ± 538 vs 1160 ± 452, P = .00032) compared with the tablet. AUCt and AUC∞ were similar between the 2 formulations. When taken under fed conditions, the oral solution resulted in a similar median tmax (0.17 vs 0.25 hours, P = .185) and a 64% lower Cmax (mean ± SD, ng/mL: 506 ± 305 vs 1620 ± 538, P < .00001) compared with fasting conditions. 
In comparison, the tablet under fed conditions resulted in a statistically significantly delayed median tmax (1.25 vs 0.50 hours, P = .00143) and an approximately 30% lower Cmax (mean ± SD, ng/mL: 835 ± 449 vs 1160 ± 452, P = .00377). AUC values were similar between fed and fasting conditions for both formulations. Twelve subjects (33%) experienced ≥1 treatment-emergent AE during the study. All AEs were mild and resolved without treatment; none resulted in study discontinuation. More treatment-emergent AEs were reported in subjects receiving the tablet compared with the solution formulation (20.0% vs 11.8% under fasting and 17.1% vs 8.6% under fed conditions). Diclofenac potassium oral solution and tablet formulations produced statistically significantly different Cmax and tmax but similar AUC under fed and fasting conditions. Fed conditions produced a significantly lower Cmax for both formulations and profoundly delayed tmax for the tablet, but had no effect on tmax for the solution formulation. These data provide insights into the earlier and greater exposure to diclofenac arising from the solution formulation compared with the tablet, which may account for the superior onset and sustained pain reduction of the solution relative to the tablet formulation observed in the double-blind efficacy/safety study in migraine patients conducted in Europe.
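The non-compartmental parameters reported above (Cmax, tmax, AUC by the trapezoidal rule) and the 80.0-125.0% bioequivalence criterion can be sketched as follows. The concentration-time profile below is hypothetical, chosen only to resemble the solution-fasting arm; it is not study data:

```python
def nca(times, concs):
    """Minimal non-compartmental analysis: Cmax, tmax, and AUC(0-t) by the linear trapezoidal rule."""
    cmax = max(concs)
    tmax = times[concs.index(cmax)]
    auc = sum((t2 - t1) * (c1 + c2) / 2.0
              for t1, t2, c1, c2 in zip(times, times[1:], concs, concs[1:]))
    return cmax, tmax, auc

def bioequivalent(gmr_lo, gmr_hi):
    """Criterion used in the study: 90% CI of the geometric mean ratio within 80.0-125.0%."""
    return 0.80 <= gmr_lo and gmr_hi <= 1.25

# Hypothetical concentration-time profile (h, ng/mL) -- illustrative only
times = [0, 0.17, 0.5, 1, 2, 4, 8, 12]
concs = [0, 1500, 900, 600, 300, 120, 30, 5]
cmax, tmax, auc = nca(times, concs)
print(cmax, tmax, round(auc, 1))
print(bioequivalent(0.85, 1.20))  # CI entirely inside 80.0-125.0%
```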
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
About 200 individual sarcocysts were excised from 12 samples of cattle beef from five countries (Argentina, Brazil, Germany, New Zealand, Uruguay) and tentatively identified to species or cyst type on the basis of their size and shape and cyst wall morphology. Genomic DNA was extracted from 147 of these sarcocysts and used initially for PCR amplification and sequencing of the partial mitochondrial cytochrome c oxidase subunit I gene (cox1) in order to identify the sarcocysts to species and/or sequence type. In addition, seven Sarcocystis sinensis-like sarcocysts collected from the oesophagus of water buffaloes in Egypt were examined at cox1 for comparative purposes. Based on the results from the cox1 marker, selected sarcocyst isolates from both hosts were further characterised at one to three regions of the nuclear ribosomal (r) DNA unit, i.e. the complete 18S rRNA gene, the complete internal transcribed spacer 1 (ITS1) region and the partial 28S rRNA gene. This was done in order to compare the results with previous molecular identifications based on 18S rRNA gene sequences and to evaluate the utility of these regions for species delimitations and phylogenetic inferences. On the basis of sarcocyst morphology and molecular data, primarily the cox1 sequences, four Sarcocystis spp. were identified in the samples of cattle beef. Twenty-two microscopic sarcocysts (1 × 0.1 mm) with hair-like protrusions were assigned to Sarcocystis cruzi, 56 macroscopic sarcocysts (3-8 × 0.5 mm) with finger-like protrusions were assigned to Sarcocystis hirsuta and 45 and 24 microscopic sarcocysts (1-3 × 0.1-0.2 mm) with finger-like protrusions were assigned to Sarcocystis bovifelis and Sarcocystis bovini n. sp., respectively. Sarcocysts of S. cruzi were identified in samples of beef from Argentina and Uruguay; sarcocysts of S. hirsuta in samples from Argentina, Brazil, Germany and New Zealand; sarcocysts of S. bovifelis in samples from Argentina and Germany; and sarcocysts of S. 
bovini in samples from Argentina and New Zealand. The microscopic sarcocysts from water buffaloes were confirmed to belong to S. sinensis. The cox1 sequences of S. bovifelis and S. bovini shared an identity of 93-94% with each other, and these sequences shared an identity of 89-90% with cox1 of S. sinensis. In contrast, the intraspecific sequence identity was 98.4-100% (n = 45), 99.3-100% (n = 24) and 99.5-100% (n = 7) for sequences of S. bovifelis, S. bovini and S. sinensis, respectively. In each of the latter three species, an aberrant type of cox1 sequence was also identified, which was only 91-92% identical with the predominant cox1 type of the same species and about 98% identical with the aberrant types of the two other species. These aberrant cox1 sequences are believed to represent non-functional nuclear copies of the mitochondrial genes (numts or pseudogenes). They might be used as additional markers to separate the three species from each other. Sequencing of a considerable number of clones of S. bovifelis, S. bovini and S. sinensis from each of the three regions of the rDNA unit revealed intraspecific sequence variation in all loci in all species, and particularly in the ITS1 locus (78-100% identity). As regards the 18S rRNA gene, it was possible to separate the three species from each other on the basis of a few consistent nucleotide differences in the less variable 3' end half of the gene. A comparison of the new sequences with GenBank sequences obtained from S. sinensis-like sarcocysts in cattle in other studies indicated that previous sequences derived from cattle in Germany and Austria belonged to S. bovifelis, whereas those derived from cattle in China belonged to S. bovini. On the basis of the new 28S rRNA sequences, it was possible to separate S. sinensis from S. bovifelis and S. bovini, whereas the latter two species could not be separated from each other. Based on ITS1 sequences, the three species were indistinguishable. 
Phylogenetic analysis using maximum parsimony placed the cox1 sequences of S. bovifelis, S. bovini and S. sinensis into three monophyletic clusters with fairly high support, with S. bovifelis and S. bovini forming a sister group to S. sinensis. In contrast, phylogenies based on each of the three regions of the rDNA unit did not separate the sequences of the three species completely from each other. Characterisation of cox1 of 56 isolates of S. hirsuta from four countries revealed only 13 haplotypes and an intraspecific sequence identity of 99.3-100%. In the three regions of the rDNA unit, there was more extensive sequence variation, particularly in the ITS1 region. The 22 cox1 sequences of S. cruzi displayed a moderate intraspecific variation (98.6-100%), whereas there was no variation in the 18S rRNA gene among 10 sequenced isolates. Sequencing of 16 clones of the partial 28S rRNA gene of S. cruzi yielded two markedly different sequence types, with an overall sequence identity of 95-100%.
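The percent-identity figures that underpin the species delimitations above are pairwise comparisons between aligned sequences. A minimal sketch of that calculation over an ungapped alignment; the short sequences used are made up for illustration, not Sarcocystis data:

```python
def percent_identity(seq1, seq2):
    """Percent identity between two sequences aligned to equal length (no gap handling)."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq1, seq2))
    return 100.0 * matches / len(seq1)

# Toy example: 7 matches out of 8 positions
print(percent_identity("ACGTACGT", "ACGTACCT"))  # 87.5
```

Real analyses of cox1 or rDNA loci would first build a multiple alignment (gaps included) before counting matches; this sketch shows only the identity arithmetic.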
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Rupture of the anterior cruciate ligament (ACL) is a common injury, mainly affecting young, physically active individuals. The injury is characterised by joint instability, leading to decreased activity, which can lead to poor knee-related quality of life. It is also associated with an increased risk of secondary osteoarthritis of the knee. It is unclear whether stabilising the knee surgically via ACL reconstruction produces a better overall outcome than non-surgical (conservative) treatment. To assess the effects of surgical versus conservative interventions for treating ACL injuries. We searched the Cochrane Bone, Joint and Muscle Trauma Group Specialised Register (18 January 2016), the Cochrane Central Register of Controlled Trials (2016, Issue 1), MEDLINE (1946 to January Week 1 2016), MEDLINE In-Process & Other Non-Indexed Citations (18 January 2016), EMBASE (1974 to 15 January 2016), trial registers (February 2016) and reference lists. We included randomised controlled trials that compared the use of surgical and conservative interventions in participants with an ACL rupture. We included any trial that evaluated surgery for ACL reconstruction using any method of reconstruction, type of reconstruction technique, graft fixation or type of graft. Three review authors independently screened all titles and abstracts for potentially eligible studies, for which we then obtained full-text reports. Two authors then independently confirmed eligibility, extracted data and assessed the risk of bias using the Cochrane 'Risk of bias' tool. We used the GRADE approach to assess the overall quality of the evidence. We identified one study in which 141 young, active adults with acute ACL injury were randomised to either ACL reconstruction followed by structured rehabilitation (results reported for 62 participants) or conservative treatment comprising structured rehabilitation alone (results reported for 59 participants). 
Built into the study design was a formal option for subsequent (delayed) ACL reconstruction in the conservative treatment group, if the participant requested surgery and met pre-specified criteria. This study was deemed at low risk of selection and reporting biases, at high risk of performance and detection biases because of the lack of blinding, and at unclear risk of attrition bias because of an imbalance in the post-randomisation exclusions. According to GRADE methodology, the overall quality of the evidence was low across different outcomes. This study identified no difference in subjective knee score (measured using the average score on four of the five sub-scales of the KOOS score, range from 0 (extreme symptoms) to 100 (no symptoms)) between ACL reconstruction and conservative treatment at two years (difference in KOOS-4 change from baseline scores: MD -0.20, 95% confidence interval (CI) -6.78 to 6.38; N = 121 participants; low-quality evidence), or at five years (difference in KOOS-4 final scores: MD -2.0, 95% CI -8.27 to 4.27; N = 120 participants; low-quality evidence). The total number of participants incurring one or more complications in each group was not reported; serious events reported in the surgery group were predominantly surgery-related, while those in the conservative treatment group were predominantly knee instability. There were also incomplete data for total participants with treatment failure, including subsequent surgery. In the surgical group at two years, there was low-quality evidence of far fewer ACL-related treatment failures, when defined as either graft rupture or subsequent ACL reconstruction. This result is dominated by the uptake of ACL reconstruction for knee instability in the conservative treatment group by 39% (23/59) of participants at two years and by 51% (30/59) at five years.
There was low-quality evidence of little difference between the two groups in participants who had undergone meniscal surgery at any time up to five years. There was low-quality evidence of no clinically important between-group differences in SF-36 physical component scores at two years. There was low-quality evidence of a higher return to the same or greater level of sport activity at two years in the ACL reconstruction group, but the wide 95% CI also included the potential for a higher return in the conservative treatment group. Based on an illustrative return to sport activities of 382 per 1000 conservatively treated patients, this amounts to an extra 84 returns per 1000 ACL-reconstruction patients (95% CI 84 fewer to 348 more). There was very low-quality evidence of a higher incidence of radiographically-detected osteoarthritis in the surgery group (19/58 (35%) versus 10/55 (18%)). For adults with acute ACL injuries, we found low-quality evidence that there was no difference between surgical management (ACL reconstruction followed by structured rehabilitation) and conservative treatment (structured rehabilitation only) in patient-reported outcomes of knee function at two and five years after injury. However, these findings need to be viewed in the context that many participants with an ACL rupture remained symptomatic following rehabilitation and later opted for ACL reconstruction surgery. Further research, including the two identified ongoing trials, will help to address the limitations in the current evidence, which is from one small trial in a young, active, adult population.
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Exacerbations of chronic obstructive pulmonary disease (COPD) are a major driver of decline in health status and impose high costs on healthcare systems. Action plans offer a form of self-management that can be delivered in the outpatient setting to help individuals recognise and initiate early treatment for exacerbations, thereby reducing their impact. To compare effects of an action plan for COPD exacerbations provided with a single short patient education component and without a comprehensive self-management programme versus usual care. Primary outcomes were healthcare utilisation, mortality and medication use. Secondary outcomes were health-related quality of life, psychological morbidity, lung function and cost-effectiveness. We searched the Cochrane Airways Group Specialised Register along with CENTRAL, MEDLINE, Embase and clinical trials registers. Searches are current to November 2015. We handsearched bibliographic lists and contacted study authors to identify additional studies. We included randomised controlled trials (RCT) and quasi-RCTs comparing use of an action plan versus usual care for patients with a clinical diagnosis of COPD. We permitted inclusion of a single short education component that would allow individualisation of action plans according to management needs and symptoms of people with COPD, as well as ongoing support directed at use of the action plan. We used standard methodological procedures expected by Cochrane. For meta-analyses, we subgrouped studies by whether phone call follow-up directed at facilitating use of the action plan was provided. This updated review includes two additional studies (and 976 additional participants), for a total of seven parallel-group RCTs and 1550 participants, 66% of whom were male. Participants' mean age was 68 years and was similar among studies.
Airflow obstruction was moderately severe in three studies and severe in four studies; mean post-bronchodilator forced expiratory volume in one second (FEV1) was 54% predicted, and 27% of participants were current smokers. Four studies prepared individualised action plans, one study an oral plan, and two studies standard written action plans. All studies provided short educational input on COPD, and two studies supplied ongoing support for action plan use. Follow-up was 12 months in four studies and six months in three studies. When compared with usual care, an action plan with phone call follow-up significantly reduced the combined rate of hospitalisations and emergency department (ED) visits for COPD over 12 months in one study with 743 participants (rate ratio (RR) 0.59, 95% confidence interval (CI) 0.44 to 0.79; high-quality evidence), but the rate of hospitalisations alone in this study failed to achieve statistical significance (RR 0.69, 95% CI 0.47 to 1.01; moderate-quality evidence). Over 12 months, action plans significantly decreased the likelihood of hospital admission (odds ratio (OR) 0.69, 95% CI 0.49 to 0.97; n = 897; two RCTs; moderate-quality evidence; number needed to treat for an additional beneficial outcome (NNTB) 19 (11 to 201)) and the likelihood of an ED visit (OR 0.55, 95% CI 0.38 to 0.78; n = 897; two RCTs; moderate-quality evidence; NNTB over 12 months 12 (9 to 26)) compared with usual care. Results showed no significant difference in all-cause mortality during 12 months (OR 0.88, 95% CI 0.59 to 1.31; n = 1134; four RCTs; moderate-quality evidence due to wide confidence interval). Over 12 months, use of oral corticosteroids was increased with action plans compared with usual care (mean difference (MD) 0.74 courses, 95% CI 0.12 to 1.35; n = 200; two RCTs; moderate-quality evidence), and the cumulative prednisolone dose was significantly higher (MD 779.0 mg, 95% CI 533.2 to 1024.8; n = 743; one RCT; high-quality evidence).
Use of antibiotics was greater in the intervention group than in the usual care group (subgrouped by phone call follow-up) over 12 months (MD 2.3 courses, 95% CI 1.8 to 2.7; n = 943; three RCTs; moderate-quality evidence). Subgroup analysis by ongoing support for action plan use was limited; review authors noted no subgroup differences in the likelihood of hospital admission or ED visits or all-cause mortality over 12 months. Antibiotic use over 12 months showed a significant difference between subgroups in studies without and with ongoing support. Overall quality of life score on St George's Respiratory Questionnaire (SGRQ) showed a small improvement with action plans compared with usual care over 12 months (MD -2.8, 95% CI -4.8 to -0.8; n = 1009; three RCTs; moderate-quality evidence). Low-quality evidence showed no benefit for psychological morbidity as measured with the Hospital Anxiety and Depression Scale (HADS). Use of COPD exacerbation action plans with a single short educational component along with ongoing support directed at use of the action plan, but without a comprehensive self-management programme, reduces in-hospital healthcare utilisation and increases treatment of COPD exacerbations with corticosteroids and antibiotics. Use of COPD action plans in this context is unlikely to increase or decrease mortality. Whether additional benefit is derived from periodic ongoing support directed at use of an action plan cannot be determined from the results of this review.
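The NNTB figures quoted for admissions and ED visits follow from an odds ratio combined with a control-group event rate. A minimal sketch of that conversion in Python (the 20% control event rate is an assumed illustration, not a figure taken from the review):

```python
def nntb_from_or(odds_ratio: float, control_event_rate: float) -> float:
    """Number needed to treat for benefit, derived from an odds ratio and a
    control-group event rate (event = the harmful outcome, e.g. admission)."""
    cer = control_event_rate
    odds_control = cer / (1 - cer)            # control-group odds of the event
    odds_treated = odds_ratio * odds_control  # treated-group odds
    eer = odds_treated / (1 + odds_treated)   # treated-group event rate
    arr = cer - eer                           # absolute risk reduction
    return 1 / arr

# Hypothetical 20% control event rate with the review's OR of 0.69
nntb = nntb_from_or(0.69, 0.20)
print(round(nntb))  # rounds to 19
```

With this assumed baseline risk the conversion lands near the reported NNTB of 19; a different baseline risk would shift the result, which is why NNTB values are always tied to a stated control event rate.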
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Despite substantial improvements in myocardial preservation strategies, coronary artery bypass grafting (CABG) is still associated with severe complications. It has been reported that remote ischaemic preconditioning (RIPC) reduces reperfusion injury in people undergoing cardiac surgery and improves clinical outcome. However, there is a lack of synthesised information and a need to review the current evidence from randomised controlled trials (RCTs). To assess the benefits and harms of remote ischaemic preconditioning in people undergoing coronary artery bypass grafting, with or without valve surgery. In May 2016 we searched CENTRAL, MEDLINE, Embase and Web of Science. We also conducted a search of ClinicalTrials.gov and the International Clinical Trials Registry Platform (ICTRP). We also checked reference lists of included studies. We did not apply any language restrictions. We included RCTs in which people scheduled for CABG (with or without valve surgery) were randomly assigned to receive RIPC or sham intervention before surgery. Two review authors independently assessed trials for inclusion, extracted data and checked them for accuracy. We calculated mean differences (MDs), standardised mean differences (SMDs) and risk ratios (RR) using a random-effects model. We assessed quality of the trial evidence for all primary outcomes using the GRADE methodology. We completed a 'Risk of bias' assessment for all studies and performed sensitivity analysis by excluding studies judged at high or unclear risk of bias for sequence generation, allocation concealment and incomplete outcome data. We contacted authors for missing data. 
Our primary endpoints were 1) composite endpoint (including all-cause mortality, non-fatal myocardial infarction or any new stroke, or both) assessed at 30 days after surgery, 2) cardiac troponin T (cTnT, ng/L) at 48 hours and 72 hours, and as area under the curve (AUC) 72 hours (µg/L) after surgery, and 3) cardiac troponin I (cTnI, ng/L) at 48 hours, 72 hours, and as area under the curve (AUC) 72 hours (µg/L) after surgery. We included 29 studies involving 5392 participants (mean age = 64 years, age range 23 to 86 years, 82% male). However, few studies contributed data to meta-analyses due to inconsistency in outcome definition and reporting. In general, risk of bias varied from low to high across included studies, and insufficient detail was provided to inform judgement in several cases. The quality of the evidence of key outcomes ranged from moderate to low due to the presence of moderate or high statistical heterogeneity, imprecision of results, or limitations in the design of individual studies. Compared with no RIPC, we found that RIPC has no treatment effect on the rate of the composite endpoint with RR 0.99 (95% confidence interval (CI) 0.78 to 1.25); 2 studies; 2463 participants; moderate-quality evidence. Participants randomised to RIPC showed an equivalent or better effect regarding the amount of cTnT release measured at 72 hours after surgery with SMD -0.32 (95% CI -0.65 to 0.00); 3 studies; 1120 participants; moderate-quality evidence; and expressed as AUC 72 hours with SMD -0.49 (95% CI -0.96 to -0.02); 3 studies; 830 participants; moderate-quality evidence. We found the same result in favour of RIPC for the cTnI release measured at 48 hours with SMD -0.21 (95% CI -0.40 to -0.02); 5 studies; 745 participants; moderate-quality evidence; and measured at 72 hours after surgery with SMD -0.37 (95% CI -0.59 to -0.15); 2 studies; 459 participants; moderate-quality evidence.
All other primary outcomes showed no differences between groups (cTnT release measured at 48 hours with SMD -0.14, 95% CI -0.33 to 0.06; 4 studies; 1792 participants; low-quality evidence; and cTnI release measured as AUC 72 hours with SMD -0.17, 95% CI -0.48 to 0.14; 2 studies; 159 participants; moderate-quality evidence). We also found no differences between groups for all-cause mortality after 30 days, non-fatal myocardial infarction after 30 days, any new stroke after 30 days, acute renal failure after 30 days, length of stay on the intensive care unit (days), any complications and adverse effects related to ischaemic preconditioning. We did not assess many patient-centred/salutogenic-focused outcomes. There is moderate-quality evidence that RIPC has no treatment effect on clinical outcomes, measured as the composite endpoint (including all-cause mortality, non-fatal myocardial infarction or any new stroke, or both) assessed at 30 days after surgery. We found moderate-quality evidence that RIPC reduces the cTnT release measured at 72 hours after surgery and expressed as AUC (72 hours). There is moderate-quality evidence that RIPC reduces the amount of cTnI release measured at 48 hours, and measured 72 hours after surgery. Adequately-designed studies, especially focusing on influencing factors, e.g. with regard to anaesthetic management, are encouraged and should systematically analyse the commonly used medications of people with cardiovascular diseases.
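Pooled SMDs such as those above are typically produced by a random-effects meta-analysis. A self-contained sketch of the DerSimonian-Laird method, using hypothetical study-level estimates and variances rather than the review's data:

```python
import math

def random_effects_pool(estimates, variances):
    """DerSimonian-Laird random-effects pooled estimate with a 95% CI.
    `estimates` are study-level effects (e.g. SMDs); `variances` their variances."""
    k = len(estimates)
    w = [1 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                   # between-study variance
    w_star = [1 / (v + tau2) for v in variances]         # random-effects weights
    sw_star = sum(w_star)
    pooled = sum(wi * e for wi, e in zip(w_star, estimates)) / sw_star
    se = math.sqrt(1 / sw_star)
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical SMDs and variances from three studies (for illustration only)
pooled, ci = random_effects_pool([-0.30, -0.45, -0.10], [0.02, 0.03, 0.025])
```

The pooled value always lies within the range of the individual study estimates; the between-study variance tau² widens the CI when heterogeneity (Q beyond its expectation) is present.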
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
High altitude illness (HAI) is a term used to describe a group of mainly cerebral and pulmonary syndromes that can occur during travel to elevations above 2500 metres (~ 8200 feet). Acute mountain sickness (AMS), high altitude cerebral oedema (HACE) and high altitude pulmonary oedema (HAPE) are reported as potential medical problems associated with high altitude ascent. In this second review, in a series of three about preventive strategies for HAI, we assessed the effectiveness of five of the less commonly used classes of pharmacological interventions. To assess the clinical effectiveness and adverse events of five of the less commonly used pharmacological interventions for preventing acute HAI in participants who are at risk of developing high altitude illness in any setting. We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, LILACS and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) in May 2017. We adapted the MEDLINE strategy for searching the other databases. We used a combination of thesaurus-based and free-text search terms. We scanned the reference lists and citations of included trials and any relevant systematic reviews that we identified for further references to additional trials. We included randomized controlled trials conducted in any setting where one of five classes of drugs was employed to prevent acute HAI: selective 5-hydroxytryptamine(1) receptor agonists; N-methyl-D-aspartate (NMDA) antagonist; endothelin-1 antagonist; anticonvulsant drugs; and spironolactone. We included trials involving participants who are at risk of developing high altitude illness (AMS or HACE, or HAPE, or both). We included participants with and without a history of high altitude illness. We applied no age or gender restrictions. We included trials where the relevant medication was administered before the beginning of ascent.
We excluded trials using these drugs during ascent or after ascent. We used the standard methodological procedures employed by Cochrane. We included eight studies (334 participants, 9 references) in this review. Twelve studies are ongoing and will be considered in future versions of this review as appropriate. We have been unable to obtain full-text versions of a further 12 studies and have designated them as 'awaiting classification'. Four studies were at a low risk of bias for randomization; two at a low risk of bias for allocation concealment. Four studies were at a low risk of bias for blinding of participants and personnel. We considered three studies at a low risk of bias for blinding of outcome assessors. We considered most studies at a high risk of selective reporting bias. We report results for the following four main comparisons. Sumatriptan versus placebo (1 parallel study; 102 participants): Data on sumatriptan showed a reduction of the risk of AMS when compared with a placebo (risk ratio (RR) = 0.43, 95% CI 0.21 to 0.84; 1 study, 102 participants; low quality of evidence). The one included study did not report events of HAPE, HACE or adverse events related to administration of sumatriptan. Magnesium citrate versus placebo (1 parallel study; 70 participants): The estimated RR for AMS, comparing magnesium citrate tablets versus placebo, was 1.09 (95% CI 0.55 to 2.13; 1 study; 70 participants; low quality of evidence). In addition, the estimated RR for loose stools was 3.25 (95% CI 1.17 to 8.99; 1 study; 70 participants; low quality of evidence). The one included study did not report events of HAPE or HACE. Spironolactone versus placebo (2 parallel studies; 205 participants): Pooled estimation of RR for AMS was not performed due to considerable heterogeneity between the included studies (I² = 72%). RR from individual studies was 0.40 (95% CI 0.12 to 1.31) and 1.44 (95% CI 0.79 to 2.01; very low quality of evidence). No events of HAPE or HACE were reported.
Adverse events were not evaluated. Acetazolamide versus spironolactone (1 parallel study; 232 participants): Data on acetazolamide compared with spironolactone showed a reduction of the risk of AMS with the administration of acetazolamide (RR = 0.36, 95% CI 0.18 to 0.70; 232 participants; low quality of evidence). No events of HAPE or HACE were reported. Adverse events were not evaluated. This Cochrane Review is the second in a series of three providing relevant information to clinicians and other interested parties on how to prevent high altitude illness. The assessment of five of the less commonly used classes of drugs suggests that there is a scarcity of evidence related to these interventions. Clinical benefits and harms related to potential interventions such as sumatriptan are still unclear. Overall, the evidence is limited due to the low number of studies identified (for most of the comparisons only one study was identified); limitations in the quality of the evidence (moderate to low); and the number of studies pending classification (24 studies awaiting classification or ongoing). We lack the large and methodologically sound studies required to establish or refute the efficacy and safety of most of the pharmacological agents evaluated in this review.
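Risk ratios such as the RR = 0.43 for sumatriptan come from a 2 × 2 table of events by group, with the confidence interval computed on the log scale. A sketch with hypothetical counts (chosen only to illustrate the arithmetic; they are not the trial's raw data):

```python
import math

def risk_ratio_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Risk ratio with a Wald 95% CI computed on the log scale."""
    rr = (events_t / n_t) / (events_c / n_c)
    # Standard error of log(RR) from the four cell counts
    se_log = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical counts: 10/51 AMS events with drug vs 23/51 with placebo
rr, lo, hi = risk_ratio_ci(10, 51, 23, 51)
```

Because the CI is built on the log scale, it is asymmetric around the RR after exponentiating, which matches the shape of intervals reported in the abstracts above.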
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Pelvic organ prolapse (POP) affects as many as 50% of parous women, with 14% to 19% of women undergoing a surgical correction. Although surgery for the treatment of POP is common, limited supportive data can be found in the literature regarding the preoperative and postoperative interventions related to these procedures. The main goal of perioperative interventions is to reduce the rate of adverse events while improving women's outcomes following surgical intervention for prolapse. A broad spectrum of perioperative interventions are available, and although the benefits of interventions such as prophylactic antibiotics before abdominal surgery are well established, others are unique to women undergoing POP surgeries and as such need to be investigated separately. The aim of this review is to compare the safety and effectiveness of a range of perioperative interventions versus other interventions or no intervention (control group) at the time of surgery for pelvic organ prolapse. We searched the Cochrane Incontinence Group Specialised Register, which contains trials identified from the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, MEDLINE In Process, ClinicalTrials.gov, WHO ICTRP, handsearching of journals and conference proceedings (searched 30 November 2017), and reference lists of relevant articles. We also contacted researchers in the field. We included randomised controlled trials (RCTs) of women undergoing surgical treatment for symptomatic pelvic organ prolapse that compared a perioperative intervention related to pelvic organ prolapse surgery versus no treatment or another intervention. We used standard methodological procedures recommended by Cochrane. Our primary outcomes were objective failure at any site and subjective postoperative prolapse symptoms. We also measured adverse effects, focusing on intraoperative blood loss and blood transfusion, intraoperative ureteral injury, and postoperative urinary tract infection. 
We included 15 RCTs that compared eight different interventions versus no treatment for 1992 women in five countries. Most interventions were assessed by only one RCT with evidence quality ranging from very low to moderate. The main limitation was imprecision, associated with small sample sizes and low event rates. Pelvic floor muscle training (PFMT) compared with no treatment (three RCTs), a perioperative intervention: The simplest of the PFMT programmes required women to attend six perioperative consultations in the three months surrounding prolapse surgery. Trial results provided no clear evidence of a difference between groups in objective failure at any site at 12 to 24 months (odds ratio (OR) 0.93, 95% confidence interval (CI) 0.56 to 1.54; two RCTs, 327 women; moderate-quality evidence). With respect to awareness of prolapse, findings were inconsistent. One RCT found no evidence of a difference between groups at 24 months (OR 1.07, 95% CI 0.61 to 1.87; one RCT, 305 women; low-quality evidence), and a second small RCT reported symptom reduction from the Pelvic Organ Prolapse Symptom Questionnaire completed by the intervention group at 12 months (mean difference (MD) -3.90, 95% CI -6.11 to -1.69; one RCT, 27 women; low-quality evidence).
Researchers found no clear differences between groups at 24-month follow-up in rates of repeat surgery (or pessary) for prolapse (OR 1.92, 95% CI 0.74 to 5.02; one RCT, 316 women; low-quality evidence). Other interventions: Single RCTs evaluated preoperative guided imagery (N = 44); injection of vasoconstrictor agent at commencement of vaginal prolapse surgery (N = 76); ureteral stent placement during uterosacral ligament suspension (N = 91); vaginal pack (N = 116); prophylactic antibiotics for women requiring postoperative urinary catheterisation (N = 159); and postoperative vaginal dilators (N = 60). Two RCTs evaluated bowel preparation (N = 298), and four RCTs assessed the method and timing of postoperative catheterisation (N = 514) - all in different comparisons. None of these studies reported our primary review outcomes. One study reported intraoperative blood loss and suggested that vaginal injection of vasoconstrictors at commencement of surgery may reduce blood loss by a mean of about 30 mL. Another study reported intraoperative ureteral injury and found no clear evidence that ureteral stent placement reduces ureteral injury. Three RCTs reported postoperative urinary tract infection and found no conclusive evidence that rates of urinary tract infection were influenced by use of a vaginal pack, prophylactic antibiotics, or vaginal dilators. Other studies did not report these outcomes. There was a paucity of data about perioperative interventions in pelvic organ prolapse surgery. A structured programme of pelvic floor muscle training before and after prolapse surgery did not consistently demonstrate any benefit for the intervention; however, this finding is based on the results of two small studies.
With regard to other interventions (preoperative bowel preparation and injection of vasoconstrictor agent, ureteral stent placement during uterosacral ligament suspension, postoperative vaginal pack insertion, use of vaginal dilators, prophylactic antibiotics for postoperative catheter care), we found no evidence regarding rates of recurrent prolapse and no clear evidence that these interventions were associated with clinically meaningful reductions in adverse effects, such as intraoperative or postoperative blood transfusion, intraoperative ureteral injury, or postoperative urinary tract infection.
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Many two-dimensional (2-D) radiographic views are used to help diagnose cam femoroacetabular impingement (FAI), but there is little consensus as to which view or combination of views is most effective at visualizing the magnitude and extent of the cam lesion (ie, severity). Previous studies have used a single image from a sequence of CT or MR images to serve as a reference standard with which to evaluate the ability of 2-D radiographic views and associated measurements to describe the severity of the cam lesion. However, single images from CT or MRI data may fail to capture the apex of the cam lesion. Thus, it may be more appropriate to use measurements of three-dimensional (3-D) surface reconstructions from CT or MRI data to serve as an anatomic reference standard when evaluating radiographic views and associated measurements used in the diagnosis of cam FAI. The purpose of this study was to use digitally reconstructed radiographs and 3-D statistical shape modeling to (1) determine the correlation between 2-D radiographic measurements of cam FAI and 3-D metrics of proximal femoral shape; and (2) identify the combination of radiographic measurements from plain film projections that were most effective at predicting the 3-D shape of the proximal femur. This study leveraged previously acquired CT images of the femur from a convenience sample of 37 patients (34 males; mean age, 27 years, range, 16-47 years; mean body mass index [BMI], 24.6 kg/m², range, 19.0-30.2 kg/m²) diagnosed with cam FAI imaged between February 2005 and January 2016. Patients were diagnosed with cam FAI based on a culmination of clinical examinations, history of hip pain, and imaging findings. The control group consisted of 59 morphologically normal control participants (36 males; mean age, 29 years, range, 15-55 years; mean BMI, 24.4 kg/m², range, 16.3-38.6 kg/m²) imaged between April 2008 and September 2014. Of these controls, 30 were cadaveric femurs and 29 were living participants.
All controls were screened for evidence of femoral deformities using radiographs. In addition, living control participants had no history of hip pain or previous surgery to the hip or lower limbs. CT images were acquired for each participant and the surface of the proximal femur was segmented and reconstructed. Surfaces were input to our statistical shape modeling pipeline, which objectively calculated 3-D shape scores that described the overall shape of the entire proximal femur and of the region of the femur where the cam lesion is typically located. Digital reconstructions for eight plain film views (AP, Meyer lateral, 45° Dunn, modified 45° Dunn, frog-leg lateral, Espié frog-leg, 90° Dunn, and cross-table lateral) were generated from CT data. For each view, measurements of the α angle and head-neck offset were obtained by two researchers (intraobserver correlation coefficients of 0.80-0.94 for the α angle and 0.42-0.80 for the head-neck offset measurements). The relationships between radiographic measurements from each view and the 3-D shape scores (for the entire proximal femur and for the region specific to the cam lesion) were assessed with linear correlation. Additionally, partial least squares regression was used to determine which combination of views and measurements was the most effective at predicting 3-D shape scores. Three-dimensional shape scores were most strongly correlated with α angle on the cross-table view when considering the entire proximal femur (r = -0.568; p < 0.001) and on the Meyer lateral view when considering the region of the cam lesion (r = -0.669; p < 0.001). Partial least squares regression demonstrated that measurements from the Meyer lateral and 90° Dunn radiographs produced the optimized regression model for predicting shape scores for the proximal femur (R² = 0.405, root mean squared error of prediction [RMSEP] = 1.549) and the region of the cam lesion (R² = 0.525, RMSEP = 1.150).
Interestingly, views with larger differences in the α angle and head-neck offset between control and cam FAI groups did not have the strongest correlations with 3-D shape. Considered together, radiographic measurements from the Meyer lateral and 90° Dunn views provided the most effective predictions of 3-D shape of the proximal femur and the region of the cam lesion as determined using shape modeling metrics. Our results suggest that clinicians should consider using the Meyer lateral and 90° Dunn views to evaluate patients in whom cam FAI is suspected. However, the α angle and head-neck offset measurements from these and other plain film views could describe no more than half of the overall variation in the shape of the proximal femur and cam lesion. Thus, caution should be exercised when evaluating femoral head anatomy using the α angle and head-neck offset measurements from plain film radiographs. Given these findings, we believe there is merit in pursuing research that aims to develop the framework necessary to integrate statistical shape modeling into clinical evaluation, because this could aid in the diagnosis of cam FAI.
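The linear correlations and RMSEP values reported above can be reproduced mechanically from paired measurements; a minimal sketch with hypothetical α angles and shape scores (the numbers below are invented for illustration, not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rmsep(actual, predicted):
    """Root mean squared error of prediction."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

# Hypothetical alpha angles (degrees) and 3-D shape scores: larger cam
# deformity (higher alpha) paired with lower shape score, as in the study
alpha = [55, 62, 70, 48, 80, 66]
shape = [1.2, 0.8, 0.1, 1.5, -0.6, 0.4]
r = pearson_r(alpha, shape)  # negative, mirroring the reported correlations
```

An R² of about 0.5, as reported for the cam-lesion model, means roughly half the variance in shape scores is explained, which is exactly the caution the authors raise about relying on plain-film measurements alone.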
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Pasteurized autograft is regarded as a biologic reconstructive option for managing bone defects after tumor resection; however, reports on long-term outcomes from large patient series are scarce. Contrary to previous favorable reports, we have observed many patients with failures, in particular as the duration of followup increased. Because pasteurized autografts are used in many countries as a reconstruction option, we wished to formally evaluate patients who underwent this approach at one specialty center. (1) What is the graft survival and what proportion of patients achieved union when pasteurized autografts were used for bone defects after tumor resection? (2) What are the complications and causes of graft removal? (3) What factors are related to the likelihood of union and graft survival? (4) What is the survival and cause of failure by type of pasteurized autograft reconstruction? Over a 26-year period from 1988 to 2013, we performed 1358 tumor resections in our center. Of these, 353 were reconstructed with pasteurized autograft. Other reconstructions included endoprostheses (508 patients), instant arthrodesis using an intramedullary nail and bone cement (286 patients), allografts (97 patients), and resection only (114 patients). During the period in question, we generally used this approach when tumor showed an osteoblastic pattern and less than one-third cortical destruction in osteolytic tumor. We generally avoided this approach when the tumor showed an extensive osteolytic pattern. We excluded 75 (21% [75 of 353]) patients, 21 (6% [21 of 353]) for incomplete clinical data and 54 (15% [54 of 353]) with a followup < 2 years or those lost to followup, leaving 278 autografts eligible. The mean followup was 113 months (range, 25-295 months). Of these 278 patients, 242 patients had primary bone sarcomas, 22 patients had soft tissue tumor invading bone, seven patients had metastatic carcinoma, and seven patients had aggressive benign bone tumors.
From a chart review, we obtained the age, sex, location, tumor volume, histologic diagnosis, use of chemotherapy, graft length, fixation modality, type of pasteurized bone used, proportion of union, complications, and oncologic outcome of the patients. In total, 377 junctional sites were assessed for union with serial radiographs. We defined junctions showing union < 2 years as union and > 2 years as delayed union. We grouped our patients into type of pasteurized bone use: pasteurized autograft-prosthesis composites (PPCs) were performed in 149, intercalary grafts in 71, hemicortical grafts in 15, osteoarticular in 12, and fusion of a joint in 31 patients. The endpoint of interest included removal of the autograft with implant loosening, infection, fracture of the graft, or any reoperation resulting in removal. Survival of the graft was determined by Kaplan-Meier plot and intergroup differences were determined using log-rank test. Five-, 10-, and 20-year survival of 278 autografts was 73% ± 5.5%, 59% ± 6.7%, and 40% ± 13.6%, respectively. Of 278 autografts, 105 (38%) were removed with complications. Cause of removal included infection in 13% (33 patients), nonunion in 7% (18 patients), fracture of graft in 6% (16 patients), resorption of the graft in 5% (14 patients), and local recurrence in 4% (11 patients). Univariate survival analysis revealed that patient age ≤ 15 years (p = 0.027; hazard ratio [HR], 1.541), male sex (p = 0.004; HR, 1.810), and pelvic location (p = 0.05; HR, 2.518) were associated with graft removal. The 20-year survival rate of osteoarticular and hemicortical methods was 92% (95% confidence interval, -15.6% to +8.3%) and 80% ± 20%, respectively. For intercalary and fusion, it was 46% ± 15% and 28% ± 22%, respectively, although for PPC, it was 37% ± 22%. Log-rank survival analysis showed the osteoarticular and hemicortical groups had better graft survival compared with other types of reconstruction (p = 0.028; HR, 0.499). 
The most prevalent cause of graft removal in the three major types of reconstruction was as follows: (1) for the PPC type, it was infection (30% [17 of 56]); (2) for intercalary grafts, it was infection, nonunion, and local recurrence in even proportions of 29% each (together 86% [24 of 28]); and (3) for fusion, it was infection (35% [six of 17]). Two hundred ten (56%) of 377 junctional sites showed union within 2 years (average, 14 months), 51 (13%) junctions showed delayed union after 2 years (average, 40 months), and the remaining 116 (31%) junctions showed nonunion. Diaphyseal junction (p = 0.029) and male sex (p = 0.004) showed a higher proportion of nonunion by univariate analysis. Compared with the favorable short-term and small cohort reports, survival of pasteurized autograft in this long-term large cohort was disappointing. We believe that pasteurized autograft should be used with caution in children and adolescents, in the pelvic region, and in PPC form. When bone stock destruction is minimal, it is worth considering this approach for small intercalary or distal long bone reconstruction. We believe this procedure is best indicated after hemicortical resection of long bone. Level III, therapeutic study.
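The graft survival percentages above were obtained with Kaplan-Meier analysis. As an illustrative sketch only (hypothetical toy data, not the study's records or code), the product-limit survival estimate can be computed from (time, event) pairs like this:

```python
# Minimal Kaplan-Meier (product-limit) estimator sketch.
# times: follow-up in months; events: 1 = graft removed, 0 = censored.
def kaplan_meier(times, events):
    """Return a list of (time, survival probability) steps."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])  # sort by follow-up time
    at_risk = n
    surv = 1.0
    curve = []
    i = 0
    while i < n:
        t = times[order[i]]
        removals = censored = 0
        # Group ties occurring at the same time point.
        while i < n and times[order[i]] == t:
            if events[order[i]]:
                removals += 1
            else:
                censored += 1
            i += 1
        if removals:
            surv *= 1.0 - removals / at_risk  # product-limit update
            curve.append((t, surv))
        at_risk -= removals + censored
    return curve

# Hypothetical toy cohort: 5 grafts, removals at 12 and 30 months,
# the other three censored.
curve = kaplan_meier([12, 20, 30, 40, 50], [1, 0, 1, 0, 0])
```

In practice a library estimator (with confidence bands and log-rank testing) would be used; the sketch only shows where the stepwise survival proportions come from.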
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Objective To compare the effects of 50-Hz 0.6-mT low-frequency pulsed electromagnetic fields (PEMFs) and 50-Hz 1.8-mT sinusoidal alternating electromagnetic fields (SEMFs) in preventing bone loss in tail-suspended rats, with an attempt to improve the prevention and treatment of bone loss caused by weightlessness. Methods Tail-suspension rat models were used to simulate microgravity on the ground. Forty rats were randomly divided into four groups [control group, hindlimb-suspended (HLS) group, HLS+PEMFs group, and HLS+SEMFs group], with 10 rats in each group. In the PEMFs treatment group and SEMFs treatment group, the intervention was 90 min per day. Rats were sacrificed after four weeks. Bone mineral density (BMD) of femur and vertebra was measured by dual-energy X-ray absorptiometry and biomechanical strength by AG-IS biomechanical instrument. Serum osteocalcin (OC), tartrate-resistant acid phosphatase 5b (Tracp 5b), parathyroid hormone (PTH), and cyclic adenosine monophosphate (cAMP) were detected by ELISA. The microstructure of bone tissue was observed by Micro-CT and HE staining. Results The BMD of the femur (P=0.000) and vertebrae (P=0.001) in the HLS group was significantly lower than in the control group; the BMD of the femurs (P=0.001) and vertebrae (P=0.039) in the HLS+PEMFs group was significantly higher than in the HLS group; the BMD of the femurs in the HLS+SEMFs group was significantly higher than in the HLS group (P=0.003), but the BMD of the vertebrae showed no significant difference (P=0.130). There was no significant difference in the BMD of the femur (P=0.818) and vertebrae (P=0.614) between the HLS+PEMFs group and the HLS+SEMFs group. 
The maximum load (P=0.000, P=0.009) and elastic modulus (P=0.015, P=0.009) of the femurs and vertebrae in the HLS group were significantly lower than those in the control group; the maximum load of the femur (P=0.038) and vertebrae (P=0.087) in the HLS+PEMFs group was significantly higher than that in the HLS group, but the elastic modulus was not significantly different from that in the HLS group (P=0.324, P=0.091). The maximum load (P=0.190, P=0.222) and elastic modulus (P=0.512, P=0.437) of femurs and vertebrae in the HLS+SEMFs group were not significantly different from those in the HLS group. There were no significant differences in the maximum load and elastic modulus of femurs (P=0.585, P=0.948) and vertebrae (P=0.668, P=0.349) between the HLS+PEMFs group and the HLS+SEMFs group. The serum OC level in the HLS group was significantly lower than that in the control group (P=0.000), and the OC levels in the HLS+PEMFs group (P=0.000) and HLS+SEMFs group (P=0.006) were significantly higher than that in the HLS group. The serum Tracp 5b concentration in the HLS group was significantly higher than that in the control group (P=0.011). There was no significant difference between the HLS+PEMFs group (P=0.459) and the HLS+SEMFs group (P=0.469) compared with the control group. Serum Tracp 5b concentrations in the HLS+PEMFs group (P=0.056) and the HLS+SEMFs group (P=0.054) were not significantly different from those in the HLS group. The PTH (P=0.000) and cAMP concentrations (P=0.000) in the HLS group were significantly lower than those in the control group. The PTH (P=0.000, P=0.000) and cAMP concentrations (P=0.000, P=0.000) in the HLS+PEMFs group and the HLS+SEMFs group were significantly higher than in the HLS group. The femoral cancellous bone of the HLS group was very sparse and small compared with the control group. The density and volume of the cancellous bone were similar among the control group, HLS+PEMFs group, and HLS+SEMFs group. 
Compared with the control group, the HLS group had lower BMD (P=0.000), bone volume (BV)/tissue volume (TV) (P=0.000), number of trabecular bone (Tb.N) (P=0.000), and trabecular thickness (Tb.Th) (P=0.000) and higher trabecular bone dispersion (Tb.Sp) (P=0.000) and bone surface area (BS)/BV (P=0.000). Compared with the HLS group, the HLS+PEMFs group and the HLS+SEMFs group had significantly lower Tb.Sp (P=0.000, P=0.000) and BS/BV (P=0.000, P=0.000) and significantly increased BMD (P=0.000, P=0.000), BV/TV (P=0.001, P=0.004), Tb.Th (P=0.000, P=0.001), and Tb.N (P=0.000, P=0.001). The trabecular thickness significantly differed between the HLS+PEMFs group and the HLS+SEMFs group (P=0.024). The HLS group (P=0.000), HLS+PEMFs group (P=0.000), and HLS+SEMFs group (P=0.000) had significantly lower osteoblast density on the trabecular bone surface than the control group; however, it was significantly higher in the HLS+SEMFs group (P=0.000) and the HLS+PEMFs group (P=0.000) than in the HLS group. The HLS group had significantly lower density of osteoblasts in the endothelium than the control group (P=0.000); however, the density of osteoblasts was significantly higher in the HLS+PEMFs group (P=0.000) and HLS+SEMFs group (P=0.000) than in the HLS group and was significantly higher in the HLS+PEMFs group than in the HLS+SEMFs group (P=0.041). Compared with the control group, a large number of fatty cavities were produced in the bone marrow cavity in the HLS group, but the fat globules remarkably decreased in the treatment groups, showing no significant difference from the control group. The number of adipose cells per mm² of bone marrow in the HLS group was 4 times that of the control group (P=0.000); it was significantly smaller in the HLS+PEMFs group (P=0.000) and HLS+SEMFs group (P=0.000) than in the HLS group, whereas the difference between the HLS+PEMFs group and the HLS+SEMFs group was not statistically significant (P=0.086). 
Conclusions 50-Hz 0.6-mT PEMFs and 50-Hz 1.8-mT SEMFs can effectively increase bone mineral density and biomechanical values in tail-suspended rats, increase the concentration of bone formation markers in rat blood, activate the cAMP pathway by affecting PTH levels, and thus further increase the content of osteoblasts to prevent the deterioration of bone microstructure. In particular, PEMFs can prevent the reduction of bone mineral density and maximum load value by about 50% and increase the bone mass of tail-suspended rats by promoting bone formation.
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Perineural invasion is associated with adverse oncological outcomes in colorectal cancer. However, data regarding the prognostic and predictive impact in colon cancer are scarce. This study aims to clarify the role of perineural invasion in patients with nonmetastatic colon cancer. This study is a retrospective review of a prospectively maintained database. This study took place at a tertiary medical center. Patients with stage I to III colon cancer who underwent elective surgery at our tertiary center between 2004 and 2015 (n = 1145) were included. The primary long-term outcomes include disease-free survival, disease-specific survival, and overall survival. Differences were determined by multivariate Cox regression models adjusted for stage and potential confounders. Perineural invasion was identified in 215 patients (18.8%) and associated with emergency procedures, male sex, and advanced disease. Histopathological features including lymphatic and extramural vascular invasion, poor differentiation, and infiltrating tumor borders were correlated with perineural invasion. Compared with patients with perineural invasion-negative tumors, patients who had perineural invasion-positive tumors had worse disease-free, overall, and disease-specific survival (all p < 0.001). Moreover, patients with perineural invasion-positive node-negative disease had worse overall survival than patients with perineural invasion-negative node-positive disease (p < 0.001). After adjustment, perineural invasion remained significantly associated with worse disease-free survival (HR, 1.45; 95% CI, 1.03-2.03; p = 0.033), worse overall survival (HR, 1.75; 95% CI, 1.33-2.31; p < 0.001), and worse disease-specific survival (HR, 1.52; 95% CI, 1.00-2.30; p = 0.048). However, we did not find a significant predictive response with adjuvant chemotherapy in perineural invasion-positive node-negative tumors (HR, 2.10; 95% CI, 0.80-5.51; p = 0.122). 
The predictive value was only demonstrated in stage III disease with a significantly impaired overall survival in patients with perineural invasion-positive tumors who did not receive adjuvant therapy (HR, 0.23; 95% CI, 0.13-0.40; p < 0.001). This study was limited by its retrospective design. Our study confirms the prognostic value of perineural invasion in stage I to II and III colon cancer. However, patients with node-negative disease and perineural invasion did not significantly benefit from adjuvant therapy. More information regarding postoperative treatment in node-negative perineural invasion-positive colon cancer is required. See Video Abstract at http://links.lww.com/DCR/A988.
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Fatigue is a major problem in rheumatoid arthritis (RA). There is evidence for the clinical effectiveness of cognitive-behavioural therapy (CBT) delivered by clinical psychologists, but few rheumatology units have psychologists. To compare the clinical effectiveness and cost-effectiveness of a group CBT programme for RA fatigue [named RAFT, i.e. Reducing Arthritis Fatigue by clinical Teams using cognitive-behavioural (CB) approaches], delivered by the rheumatology team in addition to usual care (intervention), with usual care alone (control); and to evaluate tutors' experiences of the RAFT programme. A randomised controlled trial. Central trials unit computerised randomisation in four consecutive cohorts within each of the seven centres. A nested qualitative evaluation was undertaken. Seven hospital rheumatology units in England and Wales. Adults with RA and fatigue severity of ≥ 6 [out of 10, as measured by the Bristol Rheumatoid Arthritis Fatigue Numerical Rating Scale (BRAF-NRS)] who had no recent changes in major RA medication/glucocorticoids. RAFT - group CBT programme delivered by rheumatology tutor pairs (nurses/occupational therapists). Usual care - brief discussion of a RA fatigue self-management booklet with the research nurse. Primary - fatigue impact (as measured by the BRAF-NRS) at 26 weeks. Secondary - fatigue severity/coping (as measured by the BRAF-NRS); broader fatigue impact [as measured by the Bristol Rheumatoid Arthritis Fatigue Multidimensional Questionnaire (BRAF-MDQ)]; self-reported clinical status; quality of life; mood; self-efficacy; and satisfaction. All data were collected at weeks 0, 6, 26, 52, 78 and 104. In addition, fatigue data were collected at weeks 10 and 18. The intention-to-treat analysis conducted was blind to treatment allocation, and adjusted for baseline scores and centre. 
Cost-effectiveness was explored through the intervention and RA-related health and social care costs, allowing the calculation of quality-adjusted life-years (QALYs) with the EuroQol-5 Dimensions, five-level version (EQ-5D-5L). Tutor and focus group interviews were analysed using inductive thematic analysis. A total of 308 out of 333 patients completed 26 weeks (RAFT, n/N = 156/175; control, 152/158). At 26 weeks, the mean BRAF-NRS impact was reduced for the RAFT programme (-1.36 units; p < 0.001) and the control interventions (-0.88 units; p < 0.004). Regression analysis showed a difference between treatment arms in favour of the RAFT programme [adjusted mean difference -0.59 units, 95% confidence interval (CI) -1.11 to -0.06 units; p = 0.03, effect size 0.36], and this was sustained over 2 years (-0.49 units, 95% CI -0.83 to -0.14 units; p = 0.01). At 26 weeks, further fatigue differences favoured the RAFT programme (BRAF-MDQ fatigue impact: adjusted mean difference -3.42 units, 95% CI -6.44 to -0.39 units, p = 0.03; living with fatigue: adjusted mean difference -1.19 units, 95% CI -2.17 to -0.21 units, p = 0.02; and emotional fatigue: adjusted mean difference -0.91 units, 95% CI -1.58 to -0.23 units, p = 0.01), and these fatigue differences were sustained over 2 years. Self-efficacy favoured the RAFT programme at 26 weeks (Rheumatoid Arthritis Self-Efficacy Scale: adjusted mean difference 3.05 units, 95% CI 0.43 to 5.6 units; p = 0.02), as did BRAF-NRS coping over 2 years (adjusted mean difference 0.42 units, 95% CI 0.08 to 0.77 units; p = 0.02). Fatigue severity and other clinical outcomes were not different between trial arms and no harms were reported. 
Satisfaction with the RAFT programme was high, with 89% of patients scoring ≥ 8 out of 10, compared with 54% of patients in the control arm rating the booklet (p < 0.0001); and 96% of patients and 68% of patients recommending the RAFT programme and the booklet, respectively, to others (p < 0.001). There was no significant difference between arms for total societal costs including the RAFT programme training and delivery (mean difference £434, 95% CI -£389 to £1258), nor QALYs gained (mean difference 0.008, 95% CI -0.008 to 0.023). The probability of the RAFT programme being cost-effective was 28-35% at the National Institute for Health and Care Excellence's thresholds of £20,000-30,000 per QALY. Tutors felt that the RAFT programme's CB approaches challenged their usual problem-solving style, helped patients make life changes and improved tutors' wider clinical practice. Primary outcome data were missing for 25 patients; the EQ-5D-5L might not capture fatigue change; and 30% of the 2-year economic data were missing. The RAFT programme improves RA fatigue impact beyond usual care alone; this was sustained for 2 years with high patient satisfaction, enhanced team skills and no harms. The RAFT programme is < 50% likely to be cost-effective; however, NHS costs were similar between treatment arms. Given the paucity of RA fatigue interventions, rheumatology teams might investigate the pragmatic implementation of the RAFT programme, which is low cost. Current Controlled Trials ISRCTN52709998. This project was funded by the NIHR Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 23, No. 57. See the NIHR Journals Library website for further project information.
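The cost-effectiveness probability above rests on comparing the cost and QALY differences against a willingness-to-pay threshold. As an illustrative sketch (not the trial's actual economic model), the underlying incremental net monetary benefit arithmetic, using the reported point estimates (mean cost difference £434, QALY gain 0.008), looks like this:

```python
# Sketch: incremental net monetary benefit (INMB) behind a
# cost-effectiveness statement:
#   INMB = threshold * delta_QALY - delta_cost
# A negative INMB at a threshold is consistent with a < 50% probability
# of cost-effectiveness at that threshold.
def inmb(threshold_per_qaly, delta_qaly, delta_cost):
    """Incremental net monetary benefit in pounds."""
    return threshold_per_qaly * delta_qaly - delta_cost

# Reported point estimates: +£434 cost, +0.008 QALYs.
for threshold in (20_000, 30_000):
    print(f"threshold £{threshold}: INMB £{inmb(threshold, 0.008, 434):.0f}")
```

At both NICE thresholds the point-estimate INMB is negative, which matches the reported 28-35% probability of cost-effectiveness; the full analysis would propagate the uncertainty around both differences rather than use point estimates.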
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Indoor exposure to dry air during heating periods has been associated with dryness and irritation symptoms of the upper respiratory airways and the skin. The irritated or damaged mucous membrane poses an important entry port for pathogens causing respiratory infections. To determine the effectiveness of interventions that increase indoor air humidity in order to reduce or prevent dryness symptoms of the eyes, the skin and the upper respiratory tract (URT) or URT infections, at work and in educational settings. The last search for all databases was done in December 2020. We searched Ovid MEDLINE, Embase, CENTRAL (Cochrane Library), PsycINFO, Web of Science, Scopus and in the field of occupational safety and health: NIOSHTIC-2, HSELINE, CISDOC and the In-house database of the Division of Occupational and Environmental Medicine, University of Zurich. We also contacted experts, screened reference lists of included trials, relevant reviews and consulted the WHO International Clinical Trials Registry Platform (ICTRP). We included controlled studies with a parallel group or cross-over design, quasi-randomised studies, controlled before-and-after and interrupted time-series studies on the effects of indoor air humidification in reducing or preventing dryness symptoms and upper respiratory tract infections as primary outcomes at workplace and in the educational setting. As secondary outcomes we considered perceived air quality, other adverse events, sick leave, task performance, productivity and attendance and costs of the intervention. Two review authors independently screened titles, abstracts and full texts for eligibility, extracted data and assessed the risks of bias of included studies. We synthesised the evidence for the primary outcomes 'dry eye', 'dry nose', 'dry skin', for the secondary outcome 'absenteeism', as well as for 'perception of stuffiness' as the harm-related measure. We assessed the certainty of evidence using the GRADE system. 
We included 13 studies with at least 4551 participants, and extracted the data of 12 studies with at least 4447 participants. Seven studies targeted the occupational setting, with three studies comprising office workers and four hospital staff. Three of them were clustered cross-over studies with 846 participants (one cRCT), one parallel-group controlled trial (2395 participants) and three controlled before-and-after studies with 181 participants. Five studies, all CTs, with at least 1025 participants, addressing the educational setting, were reported between 1963 and 1975, and in 2018. In total, at least 3933 (88%) participants were included in the data analyses. Due to the lack of information, the results of the risk of bias assessment remained mainly unclear and the assessable risks of bias of included studies were considered as predominantly high. Primary outcomes in occupational setting: We found that indoor air humidification at the workplace may have little to no effect on dryness symptoms of the eye and nose (URT). The only cRCT showed a significant decrease in dry eye symptoms among working adults (odds ratio (OR) 0.54, 95% confidence interval (CI) 0.37 to 0.79) with a low certainty of the evidence. The only cluster non-randomised cross-over study showed a non-significant positive effect of humidification on nose dryness symptoms (OR 0.87, 95% CI 0.53 to 1.42) with a low certainty of evidence. We found that indoor air humidification at the workplace may have little and non-significant effect on skin dryness symptoms. The pooled results of two cluster non-RCTs showed a non-significant alleviation of skin dryness following indoor air humidification (OR 0.66, 95% CI 0.33 to 1.32) with a low certainty of evidence. Similarly, the pooled results of two before-after studies yielded no statistically significant result (OR 0.69, 95% CI 0.33 to 1.47) with a very low certainty of evidence. No studies reported on the outcome of upper respiratory tract infections. 
No studies conducted in educational settings investigated our primary outcomes. Secondary outcomes in occupational setting: Perceived stuffiness of the air was increased during the humidification in the two cross-over studies (OR 2.18, 95% CI 1.47 to 3.23); (OR 1.70, 95% CI 1.10 to 2.61) with low certainty of evidence. Secondary outcomes in educational setting: Based on different measures and settings of absenteeism, four of the six controlled studies found a reduction in absenteeism following indoor air humidification (OR 0.54, 95% CI 0.45 to 0.65; OR 0.38, 95% CI 0.15 to 0.96; proportion 4.63% versus 5.08%). Indoor air humidification at the workplace may have little to no effect on dryness symptoms of the eyes, the skin and the URT. Studies investigating illness-related absenteeism from work or school could only be summarised narratively, due to different outcome measures assessed. The evidence suggests that increasing humidification may reduce the absenteeism, but the evidence is very uncertain. Future RCTs involving larger sample sizes, assessing dryness symptoms more technically or rigorously defining absenteeism and controlling for potential confounders are therefore needed to determine whether increasing indoor air humidity can reduce or prevent dryness symptoms of the eyes, the skin, the URT or URT infections at work and in educational settings over time.
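The humidification findings above are expressed as odds ratios with 95% confidence intervals. A minimal sketch (toy 2×2 counts, not the review's data) of how an OR and a Wald-type CI on the log-odds scale are computed:

```python
import math

# Sketch: odds ratio with a 95% Wald confidence interval from a 2x2 table.
# a/b = symptomatic/non-symptomatic in the humidified group,
# c/d = symptomatic/non-symptomatic in the control group (toy counts).
def odds_ratio_ci(a, b, c, d, z=1.96):
    """Return (OR, lower 95% bound, upper 95% bound)."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) from the cell counts.
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Toy example: 20/80 symptomatic with humidification vs 30/70 without.
or_, lo, hi = odds_ratio_ci(20, 80, 30, 70)
```

A CI that crosses 1 (as in the toy example) corresponds to the "non-significant" effects described above; pooling across studies would additionally weight each study's log-OR, which the sketch omits.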
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
What are the data and trends on ART and IUI cycle numbers and their outcomes, and on fertility preservation (FP) interventions, reported in 2018 as compared to previous years? The 22nd ESHRE report shows a continued increase in reported numbers of ART treatment cycles and children born in Europe, a decrease in transfers with more than one embryo with a further reduction of twin delivery rates (DRs) as compared to 2017, higher DRs per transfer after fresh IVF or ICSI cycles (without considering freeze-all cycles) than after frozen embryo transfer (FET) with higher pregnancy rates (PRs) after FET, and a decrease in the number of reported IUI cycles while their PR and DR remained stable. ART aggregated data generated by national registries, clinics or professional societies have been gathered and analysed by the European IVF-monitoring Consortium (EIM) since 1997 and reported in 21 manuscripts published in Human Reproduction and Human Reproduction Open. Data on medically assisted reproduction (MAR) from European countries are collected by EIM for ESHRE on a yearly basis. The data on treatment cycles performed between 1 January and 31 December 2018 were provided by either national registries or registries based on initiatives of medical associations and scientific organizations or committed persons of 39 countries. Overall, 1422 clinics offering ART services in 39 countries reported a total of more than 1 million (1 007 598) treatment cycles for the first time, including 162 837 with IVF, 400 375 with ICSI, 309 475 with FET, 48 294 with preimplantation genetic testing, 80 641 with egg donation (ED), 532 with IVM of oocytes and 5444 cycles with frozen oocyte replacement (FOR). A total of 1271 institutions reported data on IUI cycles using either husband/partner's semen (IUI-H; n = 148 143) or donor semen (IUI-D; n = 50 609) in 31 countries and 25 countries, respectively. 
Sixteen countries reported 20 994 interventions in pre- and post-pubertal patients for FP including oocyte, ovarian tissue, semen and testicular tissue banking. In 21 countries (21 in 2017) in which all ART clinics reported to the registry, 410 190 treatment cycles were registered for a total population of ∼300 million inhabitants, allowing a best estimate of a mean of 1433 cycles performed per million inhabitants (range: 641-3549). Among the 39 reporting countries, for IVF, the clinical PR per aspiration slightly decreased while the PR per transfer remained similar compared to 2017 (25.5% and 34.1% in 2018 versus 26.8% and 34.3% in 2017). In ICSI, the corresponding rates showed similar evolutions in 2018 compared to 2017 (22.5% and 32.1% in 2018 versus 24.0% and 33.5% in 2017). When freeze-all cycles were not considered for the calculations, the clinical PRs per aspiration were 28.8% (29.4% in 2017) and 27.3% (27.3% in 2017) for IVF and ICSI, respectively. After FET with embryos originating from own eggs, the PR per thawing was 33.4% (versus 30.2% in 2017), and with embryos originating from donated eggs 41.8% (41.1% in 2017). After ED, the PR per fresh embryo transfer was 49.6% (49.2% in 2017) and per FOR 44.9% (43.3% in 2017). In IVF and ICSI together, the trend towards the transfer of fewer embryos continues with the transfer of 1, 2, 3 and ≥4 embryos in 50.7%, 45.1%, 3.9% and 0.3% of all treatments, respectively (corresponding to 46.0%, 49.2%, 4.5% and 0.3% in 2017). This resulted in a reduced proportion of twin DRs of 12.4% (14.2% in 2017) and a similar triplet DR of 0.2%. Treatments with FET in 2018 resulted in twin and triplet DRs of 9.4% and 0.1%, respectively (versus 11.2% and 0.2%, respectively, in 2017). After IUI, the DRs remained similar at 8.8% after IUI-H (8.7% in 2017) and at 12.6% after IUI-D (12.4% in 2017). 
Twin and triplet DRs after IUI-H were 8.4% and 0.3%, respectively (in 2017: 8.1% and 0.3%), and 6.4% and 0.2% after IUI-D (in 2017: 6.9% and 0.2%). Among 20 994 FP interventions in 16 countries (18 888 in 13 countries in 2017), cryopreservation of ejaculated sperm (n = 10 503, versus 11 112 in 2017) and of oocytes (n = 9123 versus 6588 in 2017) were the most frequently reported. The results should be interpreted with caution as data collection systems and completeness of reporting vary among European countries. Some countries were unable to deliver data about the number of initiated cycles and/or deliveries. The 22nd ESHRE data collection on ART, IUI and FP interventions shows a continuous increase of reported treatment numbers and MAR-derived livebirths in Europe. Although it is the largest data collection on MAR in Europe, further efforts towards optimization of both the collection and reporting, with the aim of improving surveillance and vigilance in the field of reproductive medicine, are awaited. The study has received no external funding and all costs are covered by ESHRE. There are no competing interests.
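The utilization figure quoted above (cycles per million inhabitants) is a simple scaling of cycle counts to population size. As a sketch of that arithmetic only: the report's 1433 is described as a mean with a country-level range, so it need not equal the crude pooled rate computed from the two totals below.

```python
# Sketch of the per-million utilization arithmetic: treatment cycles
# divided by population, scaled to one million inhabitants.
def cycles_per_million(cycles, population):
    """ART treatment cycles per one million inhabitants."""
    return cycles / population * 1_000_000

# Totals reported above: 410 190 cycles in countries with complete
# registration, for a combined population of ~300 million.
rate = cycles_per_million(410_190, 300_000_000)
```

The crude pooled rate from the totals (~1367 per million) sits below the reported mean of 1433, which is consistent with the latter being an average of per-country rates rather than a ratio of the totals.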
Septic ankle arthritis is a devastating clinical problem with a high potential for permanent disability and amputation. Successful treatment of septic ankle arthritis remains a challenge for the surgeon and patient. Ankle arthrodesis combined with radical debridement may be an effective option to eradicate infection and salvage the limb. Although numerous fusion methods have been proposed, there is controversy about the most effective technique. At a minimum follow-up of 6 years after ankle arthrodesis performed using an Ilizarov external fixator, we asked, (1) In what proportion of patients was bony fusion achieved? (2) What complications were observed, and what reoperations were performed in these patients? (3) How much did patient-reported outcomes improve from before surgery to the most recent follow-up in this group? Between April 2010 and March 2015, we treated 59 patients for septic ankle arthritis. Of those, we considered patients who were at least 18 years of age with irreversible destruction of the joint as potentially eligible. During that time period, all patients met the prespecified criteria and were treated with ankle arthrodesis using an Ilizarov external fixator. Two percent (one of 59) of patients were excluded because they died in the second year after surgery for reasons unrelated to the procedure, and another 7% (four of 59) of patients were excluded because they were lost to follow-up before the minimum study follow-up interval of 6 years. Finally, 92% (54 of 59) of patients were analyzed at a mean follow-up time of 9 ± 1 years. A total of 61% (33 of 54) were men, and they had a mean age of 48 ± 12 years. Forty-six percent (25 of 54) of patients were smokers, and 13% (seven of 54) of patients had Type 2 diabetes mellitus. All patients received radical debridement and primary arthrodesis with an Ilizarov external fixator, followed by antibiotic therapy. 
Postoperatively, patients were instructed to perform lower extremity functional exercises and external fixator care; weightbearing ambulation as tolerated was encouraged as early as possible. Fusion was assessed with a radiographic review that was performed by an individual who was not involved in the surgical care of these patients. We defined bony fusion as continuous trabeculae and complete cortical bridging in the fusion interface achieved before 9 months; delayed union was defined as fusion achieved by 9 to 12 months; and nonunion was defined as patients in whom fusion was not achieved by 12 months. Complications and reoperations were tallied through a record review that was performed by an individual who was not involved in the surgical care of these patients. We defined complications as any deviation from the expected postoperative course. We used the American Orthopedic Foot and Ankle Society (AOFAS) ankle-hindfoot score, the VAS active pain score, and the SF-12 questionnaire (including the physical component summary [PCS] score and mental component summary [MCS] score) to assess patient-reported outcomes. The minimum clinically important difference (MCID) for the AOFAS score was 30 points of 100, the MCID for the VAS active pain score was 2 points of 10, and the MCID of PCS and MCS scores was 7 points and 9 points, respectively. Primary bony fusion was achieved in 94% (51 of 54) of patients. Delayed union was found in 2% (one of 54) of patients. Nonunion was found in 6% (three of 54); one of these patients underwent autologous bone grafting during revision, and bony fusion was ultimately achieved. Final bony fusion was achieved in 96% (52 of 54) of patients. Recurrent infection was found in 2% (one of 54). The median (range) AOFAS score improved from 28 points (8 to 59) before surgery to 80 points (52 to 86) at the most recent follow-up (median difference 52; p < 0.001). 
The median (range) VAS active pain score decreased from 8 points (6 to 9) before surgery to 2 points (0 to 5) at the most recent follow-up (median difference -6; p < 0.001). For the Short Form 12-item score, the median (range) PCS score improved from 0 points (0 to 30) before surgery to 70 points (40 to 95) at the most recent follow-up (median difference 70; p < 0.001), and the median (range) MCS score improved from 46 points (21 to 75) before surgery to 75 points (50 to 92) at the most recent follow-up (median difference 29; p < 0.001). Ankle arthrodesis with Ilizarov external fixation might eradicate an infection and restore foot function in patients with septic ankle arthritis. However, patients should be fully informed of the complications related to the external fixator, such as pin-tract infections, recurrent infection, and nonunion. Standardized and professional pin care is important. Additionally, because Ilizarov external fixators can be inconvenient to the patients' daily lives, future studies should explore how psychologic support affects patients who undergo ankle arthrodesis with these devices. Level IV, therapeutic study.
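Each reported median change can be checked directly against the MCIDs stated above; a minimal sketch using the abstract's values (the dictionary layout and names are ours):

```python
# MCIDs stated in the abstract
MCID = {"AOFAS": 30, "VAS": 2, "PCS": 7, "MCS": 9}

# (preoperative, most-recent follow-up) median scores from the abstract
scores = {
    "AOFAS": (28, 80),  # ankle-hindfoot function, improvement of 52
    "VAS":   (8, 2),    # active pain, decrease of 6 is an improvement
    "PCS":   (0, 70),   # SF-12 physical component, improvement of 70
    "MCS":   (46, 75),  # SF-12 mental component, improvement of 29
}

def exceeds_mcid(name: str) -> bool:
    """True when the absolute median change meets or exceeds the MCID."""
    pre, post = scores[name]
    return abs(post - pre) >= MCID[name]

print(all(exceeds_mcid(k) for k in scores))  # True: every change is clinically important
```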
Chemotherapy and concurrent irradiation, intended to cure, are presently standard treatments for non metastatic, unresectable oesophageal cancer. The results of the combined therapy are superior to those of radiotherapy alone, attaining 25-35% 2-year survival rates. However these results mainly refer to stage I and II tumours as most of the available literature has focussed on these groups. The aim of our report is to present our experience with Stage III and IV patients. Sixty-four Stage III and IV oesophageal cancer patients were referred to our Departments from January 1, 1990 to December 31, 1996. Diagnosis was obtained through oesophagoscopy and biopsy, stage was assessed by physical examination, chest CT scan, bronchoscopy, barium X-ray examination, upper abdomen ultrasonography and bone nuclide scan. Thirty-four patients, with no signs of blood-borne metastases and in satisfactory medical conditions (i.e. age not exceeding 70 years, weight loss not exceeding 10% of body weight, normal serum values of BUN and creatinine, no other severe disease), were submitted to concurrent chemo-radiotherapy. The case features were as follows: histology of squamous cell carcinoma in 32 cases, of adenocarcinoma in 2; tumour in the upper third of the oesophagus in 11 (32.5%), in the middle third in 18 (53%), in the lower third in 5 (14.5%); male/female ratio 29/5, age 48-68 years (mean 56), Karnofsky performance status of 60% or higher. On referral, 18 out of 34 (53%) had a weight loss of more than 5% of body weight and 22 (64.5%) had dysphagia. Twenty-one had Stage III (61.75%) and 13 Stage IV (38.25%) cancer, with metastasis limited to the supraclavicular or coeliac nodes, which could be included in the radiation volume. In all cases chemotherapy consisted of 5-Fluorouracil (administered in a continuous i.v. 
infusion, from day 1 to 5, with a 750-1000 mg/m² daily dose) and Cisplatin (75-100 mg/m² on the first day, or 20 mg/m² for 5 consecutive daily doses, administered by i.v. bolus). Three to 5 cycles were administered, one every 21 days. Irradiation started with the first cycle of chemotherapy in 5 patients, and with the second or third cycle in 29. At least two cycles of chemotherapy were administered during the course of radiation. Radiotherapy was performed with 4 to 18 MeV linear accelerator X-rays, or telecobalt, through opposite anterior and posterior treatment portals or more complex field arrangements. The doses were in the range of 44-66 Gy, with fractionation of 5 weekly sessions of 180-200 cGy. After treatment, periodic follow-up controls were carried out in all cases. Thorough restaging was performed only in selected cases, thus a systematic evaluation of objective responses was not possible. Data on improvement of swallowing were always available, however, and the early therapeutic results were analysed accordingly. Toxicity was recorded according to the WHO parameters. Two-year survival after conclusion of the treatment was calculated according to Kaplan and Meier. Survival was analysed (log-rank test) according to stage, Performance Status, oesophagectomy and body weight loss. After treatment, subjective symptomatic relief occurred in 17 of the 22 patients presenting with dysphagia (77.5%). Acute toxicity (Grade III or IV WHO) consisted of hematologic adverse effects in 47% of patients, mucositis in 40%, and vomiting or diarrhoea not responding to drug treatment in 20.5%. Treatment delays of more than one week, due to toxicity, occurred in 23.5%. Moreover, we observed mild cardiotoxicity in 20.5% and mild nephrotoxicity in 6%. No symptomatic lung fibrosis was observed. No death could be related to toxicity. Overall 2-year survival was 13%, with a median value of 10 months. 
Survival analysis according to stage showed 2-year values of 24% in Stage III and 0% in Stage IV (p=0.09). No significant difference was related to Performance Status or weight loss. Six patients showed a remarkable improvement in symptoms and general condition after treatment and were restaged with oesophagoscopy, thoracic CT scan and bronchoscopy, which revealed resectable residual tumours; they were then operated on. Although histologic examination showed tumour in all the resected specimens, 2 patients survived more than two years (33.5% survival, median 14 months). Owing to the small number of operated patients, no attempt was made to assess the significance of this result in comparison with the other cases. Many Stage III and IV patients, selected for an aggressive chemo-radiation approach on the grounds of satisfactory medical condition, can obtain relief of dysphagia. Toxicity can be severe, but is rarely life-threatening. Some cases without extrathoracic spread of the tumour can achieve long-term survival (in our experience, a 24% 2-year survival in Stage III, which compares favourably with the results obtained by other authors). Whether surgery may improve the therapeutic results of chemo-radiotherapy in patients whose tumour has become resectable is an issue that cannot be satisfactorily addressed on the basis of our experience, nor are the results from the available literature exhaustive in this regard.
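The 2-year survival figures above were obtained with the Kaplan-Meier (product-limit) method; as a sketch of the computation, on made-up follow-up data rather than the study's own:

```python
def kaplan_meier(times, events):
    """Return [(t, S(t))] product-limit survival estimates.

    times: follow-up in months; events: True = death observed,
    False = censored (alive at last contact).
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, out, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = censored = 0
        while i < len(data) and data[i][0] == t:  # group ties at time t
            if data[i][1]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:  # survival drops only at event times
            surv *= (n_at_risk - deaths) / n_at_risk
            out.append((t, surv))
        n_at_risk -= deaths + censored
    return out

# Illustrative: deaths at 5, 10, 10 and 18 months; one patient censored at 24
curve = kaplan_meier([5, 10, 10, 18, 24], [True, True, True, True, False])
print(curve)  # [(5, 0.8), (10, 0.4), (18, 0.2)]
```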
Nonmetal oxidation catalysts have gained much attention in recent years. The reason for this surge in activity is twofold: on one hand, a number of such catalysts have become readily accessible; on the other hand, such catalysts are quite resistant toward self-oxidation and compatible with aerobic and aqueous reaction conditions. In this review, we have focused on five nonmetal catalytic systems which have attained prominence in the oxidation field in view of their efficacy and their potential for future development; stoichiometric cases have been mentioned to provide overview and scope. Such nonmetal oxidation catalysts include the alpha-halo carbonyl compounds 1, ketones 2, imines 3, iminium salts 4, and nitroxyl radicals 5. In combination with a suitable oxygen source (H2O2, KHSO5, NaOCl), these catalysts serve as precursors to the corresponding oxidants, namely, the perhydrates I, dioxiranes II, oxaziridines III, oxaziridinium ions IV, and finally oxoammonium ions V. A few of the salient features of these nonmetal catalytic systems shall be reiterated in this summary. The first class entails the alpha-halo ketones, which catalyze the oxidation of a variety of organic substrates [figure: see text] by hydrogen peroxide as the oxygen source. The perhydrates I, formed in situ by the addition of hydrogen peroxide to the alpha-halo ketones, are quite strong electrophilic oxidants and expectedly transfer an oxygen atom to diverse nucleophilic acceptors. Thus, alpha-halo ketones have been successfully employed for catalytic epoxidation, heteroatom (S, N) oxidation, and arene oxidation. Although high diastereoselectivities have been achieved with these nonmetal catalysts, no enantioselective epoxidation or sulfoxidation has so far been reported. Consequently, catalytic oxidations by perhydrates hold promise for further development, especially if ways can be found to transfer the oxygen atom enantioselectively. 
The second class, namely, the dioxiranes, has been extensively used during the last two decades as a convenient oxidant in organic synthesis. These powerful and versatile oxidizing agents are readily available from the appropriate ketones by their treatment [figure: see text] with potassium monoperoxysulfate. The oxidations may be performed either under stoichiometric or catalytic conditions; the latter mode of operation is featured in this review. In this case, a variety of structurally diverse ketones have been shown to catalyze the dioxirane-mediated epoxidation of alkenes by monoperoxysulfate as the oxygen source. By employing chiral ketones, highly enantioselective (up to 99% ee) epoxidations have been developed, of which the sugar-based ketones are so far the most effective. Reports on catalytic oxidations by dioxiranes other than epoxidations are scarce; nevertheless, fructose-derived ketones have been successfully employed as catalysts for the enantioselective CH oxidation in vic diols to afford the corresponding optically active alpha-hydroxy ketones. To date, no catalytic asymmetric sulfoxidations by dioxiranes appear to have been documented in the literature, an area of catalytic dioxirane chemistry that merits attention. A third class is the imines; their reaction with hydrogen peroxide or monoperoxysulfate affords oxaziridines. These relatively weak electrophilic oxidants only manage to oxidize electron-rich substrates such as enolates, silyl enol ethers, sulfides, selenides, and amines; however, the epoxidation of alkenes has been achieved with activated oxaziridines produced from perfluorinated imines. Most of the oxidations by in-situ-generated oxaziridines have been performed stoichiometrically, with the exception of sulfoxidations. When chiral imines are used as catalysts, optically active sulfoxides are obtained in good ee values, a catalytic asymmetric oxidation by oxaziridines that merits further exploration. 
The fourth class is made up of the iminium ions, which with monoperoxysulfate lead to the corresponding oxaziridinium ions, structurally similar to the above oxaziridine oxidants except that they possess a much more strongly electrophilic oxygen atom due to the positively charged ammonium functionality. Thus, oxaziridinium ions effectively execute, besides sulfoxidation and amine oxidation, the epoxidation of alkenes under catalytic conditions. As expected, chiral iminium salts catalyze asymmetric epoxidations; however, only moderate enantioselectivities have been obtained so far. Although asymmetric sulfoxidation has been achieved by using stoichiometric amounts of isolated optically active oxaziridinium salts, iminium-ion-catalyzed asymmetric sulfoxidations have not been reported to date, which offers attractive opportunities for further work. The fifth and final class of nonmetal catalysts concerns the stable nitroxyl-radical derivatives such as TEMPO, which react with the common oxidizing agents (sodium hypochlorite, monoperoxysulfate, peracids) to generate oxoammonium ions. The latter are strong oxidants that chemoselectively and efficiently perform the CH oxidation of alcohols to produce carbonyl compounds rather than engage in the transfer of their oxygen atom to the substrate. Consequently, oxoammonium ions behave quite distinctly compared with the previous four classes of oxidants in that their catalytic activity formally entails a dehydrogenation, one of the few effective nonmetal-based catalytic transformations of alcohols to carbonyl products. Since less than 1 mol% of nitroxyl radical is required to catalyze the alcohol oxidation by the inexpensive sodium hypochlorite as primary oxidant under mild reaction conditions, this catalytic process holds much promise for future practical applications.
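The five catalyst classes and the active oxidants they generate map one-to-one onto the oxygen sources named in the review; as a compact illustrative restatement (the data structure itself is ours, summarizing the text):

```python
# catalyst class -> (in-situ oxidant, typical primary oxygen sources)
CATALYST_CLASSES = {
    "alpha-halo carbonyl compounds": ("perhydrates", ["H2O2"]),
    "ketones": ("dioxiranes", ["KHSO5"]),
    "imines": ("oxaziridines", ["H2O2", "KHSO5"]),
    "iminium salts": ("oxaziridinium ions", ["KHSO5"]),
    "nitroxyl radicals": ("oxoammonium ions", ["NaOCl", "KHSO5", "peracids"]),
}

for catalyst, (oxidant, sources) in CATALYST_CLASSES.items():
    print(f"{catalyst} -> {oxidant} (from {', '.join(sources)})")
```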
Schizophrenic patients are known to feature alterations in their cognitive performance, principally in executive functions, attention and memory. In this last domain, studies have shown a relatively severe and global deficit, which can be assessed in chronic and first-episode patients. It seems that the memory dysfunction is independent of age and intellectual level, but does correlate with negative psychopathology and global functioning. In the study of memory dysfunction, attentional capacities, information processing and symptomatology have to be considered as determining factors. It has been shown that patients with schizophrenia perform poorly in selective attention tasks and that this deficit may interfere with learning. In the same way, the slowing of information processing contributes to superficial and incomplete learning. The impact of symptomatology also has to be considered, as negative and depressive symptoms are linked to mnesic performance. The majority of studies bearing on working memory and schizophrenia show an alteration of performance, but studies on long-term memory are more equivocal. Procedural memory seems to be preserved, while declarative memory is impaired. These results support the hypothesis that in schizophrenia, memory processes that are consciously controlled are impaired, contrary to implicit learning, which may be intact. Nevertheless, studies bearing on semantic memory and episodic memory show controversial results. Still, many authors argue that schizophrenic patients have difficulties in recalling learned material, especially when a delay or an interfering task is introduced in the test. Moreover, schizophrenic subjects do not use the semantic properties of the words as well as control subjects do when they have to learn a word list, for example. 
The main goal of the present study was to examine the auditory-verbal learning capacities of 31 schizophrenic patients (20 men and 11 women, 19-56 years old), compared to 27 healthy subjects (11 men and 16 women, 23-56 years old). All subjects received an evaluation including the Rey Auditory-Verbal Learning Test, used to study the progressive acquisition of 15 disyllabic words which are orally presented five times in succession to the subject. About forty-five minutes after the last of the five immediate recalls, the delayed recall is assessed and a percentage of retention is also calculated. Visual reasoning and attention capacities were studied with the Progressive Matrices and the d2 attention test, respectively. Global psychiatric symptomatology of the patient group was assessed with the Brief Psychiatric Rating Scale. Considering the existing literature on the verbal learning capacities of schizophrenic patients, it was expected that the patients would perform poorly and learn more slowly than controls. The initial learning of the material, which is a critical stage for schizophrenic patients, was studied with particular attention, as was the effect of the introduction of a delay upon the recall of the word list. A secondary objective of the study was to investigate the role of visual reasoning and attention in the auditory-verbal learning process. According to published studies, it was expected that schizophrenic patients would manifest some impairment in the domains of visual reasoning and attention. The question is whether this alters performance in the auditory-verbal learning test or not. Finally, the links between clinical characteristics of the patients, like age and illness duration, and their learning performances were explored. Statistical analysis included first a descriptive analysis of data to examine differences between the two groups. 
Second, ANCOVAs were used in order to control for the respective impact of educational level, attention capacities and verbal reasoning capacities upon learning performances. Third, Spearman's correlations were used to detect links between clinical characteristics of the patients and learning performances. The comparisons between patients and controls confirmed that schizophrenic patients scored lower on the attentional and visual reasoning tasks. They also had a lower educational level compared with the healthy subjects. In the auditory-verbal learning test, the patients showed altered performances in the five recalls, as well as in the delayed recall and for the retention percentage. In order to control for the impact of educational level and of attentional and visual reasoning capacities, these parameters were introduced in the statistical analyses. Educational level did not influence memory alterations in the schizophrenic group. However, attention and, to a lesser extent, visual reasoning had an impact on the comparison of memory scores: when controlling for attention, almost no significant group effect remained. Finally, the exploratory analyses of links between clinical characteristics and memory only revealed a significant negative correlation between illness duration and learning performances. Thus, the analysis of the data showed that schizophrenic subjects featured poor performances in the domains of attention, verbal reasoning and auditory-verbal memory. Further analyses taking into account group differences in attention suggest that the impairment featured by schizophrenic patients in the domain of verbal memory strongly relies on an attentional deficit. These results are discussed in light of the existing literature and methodological limitations. Clinical implications are also discussed.
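The retention percentage calculated for the Rey Auditory-Verbal Learning Test can be sketched as follows; the study does not state its exact formula, so a common convention (assumed here) divides the delayed recall by the fifth immediate recall:

```python
def retention_percentage(trial5_recall: int, delayed_recall: int) -> float:
    """Percent of the words recalled on trial 5 that are still recalled
    after the ~45-minute delay (assumed convention, not the study's own)."""
    return 100.0 * delayed_recall / trial5_recall

# e.g. 12 of 15 words on trial 5, 9 recalled after the delay
print(retention_percentage(12, 9))  # 75.0
```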
Cytoreductive therapy is effective in the management of metastatic neuroendocrine tumors to the liver, independent of their functioning status. In functioning tumors, clinical endocrinopathies are relieved in most patients and this response usually lasts for several months. Major morbidity and mortality are not greater than the average complication rate for resection for nonneuroendocrine metastatic tumors at major centers; therefore, surgical outcomes appear to justify operative intervention. Patients whose primary tumor can be controlled, whose metastases outside the liver are limited, and who have a reasonable performance status are candidates for resection. The authors' data support the previous statements. The current mortality rate of 1.2% and major morbidity rate of 15% clearly represent the success of the operative approach in such complex cases (54% of patients received a resection of at least one lobe) [9]. A symptomatic response in the 95% range with a median response of 45 months adds many months of symptom-free survival to the lives of most patients [9]. In the literature reviewed for this article, more than half of the patients also underwent a major hepatic resection and 40% of them had concurrent resection of the primary tumor. These data confirm that resection in selected patients is not more complicated or risky than resection for other metastatic tumors. Endocrinopathies have not increased anesthetic or operative risk in this population; however, these results are the product of managing these patients over time, becoming familiar with their clinical syndromes, and being active in the prevention of life-threatening endocrine complications (i.e., carcinoid crisis). The authors have learned over time that patients with valvular disease are not good candidates for surgery. These patients develop right-sided heart failure with an increase in the central venous pressure. 
This condition can result in massive hemorrhage during the liver resection because of the difficulty in controlling backbleeding from the hepatic veins [26]. Correction of valvular disease is warranted for safe liver resection. The authors' current policy is to rule out valvular disease in every patient with carcinoid tumors and repair the valves prior to hepatic resection when indicated [27]. This policy clearly has decreased the complication rate. Even though liver transplantation seems to be very attractive as a means of eradicating the disease, it has not been common in clinical practice because of the shortage of allografts, and the overall costs and complications of the procedure outweigh its benefits, especially when compared with partial hepatectomy. Current methods to detect the spread of disease that were not readily available in the past, such as MRI and indium-111 pentetreotide (Octreoscan), may expand the applications of transplantation and allow for better selection of candidates. The option of transplantation is still open for improvement and is dependent on organ availability and better staging of the disease. Metastases from neuroendocrine tumors are hypervascular, favoring the application of MRI as the single imaging method; MRI not only evaluates the location and characteristics of the lesions but also determines their relationship with major vessels and bile ducts. Spiral CT scan has been used extensively in the past with acceptable results. Indium-111 pentetreotide functions on the basis of somatostatin receptors present in these tumors, but its use has not been established definitely in the work-up of these patients. Perhaps the best use of indium-111 pentetreotide is in the evaluation of disease beyond the primary and liver locations, including bone metastases; its use therefore will likely affect the preoperative work-up of candidates for transplantation [28]. 
Once the patient has been deemed to have resectable disease by the preoperative work-up, several steps need to be completed prior to surgery to decrease the effect of specific endocrinopathies. For patients with symptoms related to carcinoid tumors, preoperative preparation with 150 to 500 micrograms of somatostatin decreases the chances of carcinoid crisis, which is manifested by hemodynamic instability [29]. The use of this medication intraoperatively should be kept in mind because a carcinoid crisis can occur despite anesthetic premedication. For islet cell tumors, treatment of underlying endocrinopathy has been initiated before referral for surgical treatment in most patients. Surgery is appropriate for patients with metastatic neuroendocrine tumors for the following two reasons: (1) many of them still have the primary tumor in place and resection should be undertaken to avoid acute complications and (2) the addition of adjunctive ablative therapies to surgical resection accomplishes the control of greater than or equal to 90% of the bulk of the tumor. If preoperative evaluation indicates that less than 90% of the tumor is treatable, surgical therapy is contraindicated. Last, even when complete resections are performed, the recurrence rate for these tumors is extremely high. In practical terms, patients with metastatic neuroendocrine tumors are seldom cured. The best hope physicians can offer these patients is an extended survival period with minimal endocrine symptoms and decreased requirements of somatostatin analogs.
The clinical practice guideline on otitis media with effusion (OME) provides evidence-based recommendations on diagnosing and managing OME in children. This is an update of the 1994 clinical practice guideline "Otitis Media With Effusion in Young Children," which was developed by the Agency for Healthcare Policy and Research (now the Agency for Healthcare Research and Quality). In contrast to the earlier guideline, which was limited to children aged 1 to 3 years with no craniofacial or neurologic abnormalities or sensory deficits, the updated guideline applies to children aged 2 months through 12 years with or without developmental disabilities or underlying conditions that predispose to OME and its sequelae. The American Academy of Pediatrics, American Academy of Family Physicians, and American Academy of Otolaryngology-Head and Neck Surgery selected a subcommittee composed of experts in the fields of primary care, otolaryngology, infectious diseases, epidemiology, hearing, speech and language, and advanced practice nursing to revise the OME guideline. The subcommittee made a strong recommendation that clinicians use pneumatic otoscopy as the primary diagnostic method and distinguish OME from acute otitis media (AOM). The subcommittee made recommendations that clinicians should (1) document the laterality, duration of effusion, and presence and severity of associated symptoms at each assessment of the child with OME; (2) distinguish the child with OME who is at risk for speech, language, or learning problems from other children with OME and more promptly evaluate hearing, speech, language, and need for intervention in children at risk; and (3) manage the child with OME who is not at risk with watchful waiting for 3 months from the date of effusion onset (if known), or from the date of diagnosis (if onset is unknown). 
The subcommittee also made recommendations that (4) hearing testing be conducted when OME persists for 3 months or longer, or at any time that language delay, learning problems, or a significant hearing loss is suspected in a child with OME; (5) children with persistent OME who are not at risk should be reexamined at 3- to 6-month intervals until the effusion is no longer present, significant hearing loss is identified, or structural abnormalities of the eardrum or middle ear are suspected; and (6) when a child becomes a surgical candidate, tympanostomy tube insertion is the preferred initial procedure. Adenoidectomy should not be performed unless a distinct indication exists (nasal obstruction, chronic adenoiditis); repeat surgery consists of adenoidectomy plus myringotomy, with or without tube insertion. Tonsillectomy alone or myringotomy alone should not be used to treat OME. The subcommittee made negative recommendations that (1) population-based screening programs for OME not be performed in healthy, asymptomatic children and (2) antihistamines and decongestants are ineffective for OME and should not be used for treatment; antimicrobials and corticosteroids do not have long-term efficacy and should not be used for routine management. The subcommittee gave as options that (1) tympanometry can be used to confirm the diagnosis of OME and (2) when children with OME are referred by the primary clinician for evaluation by an otolaryngologist, audiologist, or speech-language pathologist, the referring clinician should document the effusion duration and specific reason for referral (evaluation, surgery), and provide additional relevant information such as history of AOM and developmental status of the child. 
The subcommittee made no recommendations for (1) complementary and alternative medicine as a treatment for OME based on a lack of scientific evidence documenting efficacy and (2) allergy management as a treatment for OME based on insufficient evidence of therapeutic efficacy or a causal relationship between allergy and OME. Last, the panel compiled a list of research needs based on limitations of the evidence reviewed. The purpose of this guideline is to inform clinicians of evidence-based methods to identify, monitor, and manage OME in children aged 2 months through 12 years. The guideline may not apply to children older than 12 years because OME is uncommon and the natural history is likely to differ from that in younger children, who experience rapid developmental change. The target population includes children with or without developmental disabilities or underlying conditions that predispose to OME and its sequelae. The guideline is intended for use by providers of health care to children, including primary care and specialist physicians, nurses and nurse practitioners, physician assistants, audiologists, speech-language pathologists, and child development specialists. The guideline is applicable to any setting in which children with OME would be identified, monitored, or managed. This guideline is not intended as a sole source of guidance in evaluating children with OME. Rather, it is designed to assist primary care and other clinicians by providing an evidence-based framework for decision-making strategies. It is not intended to replace clinical judgment or establish a protocol for all children with this condition, and may not provide the only appropriate approach to diagnosing and managing this problem.
To evaluate the effectiveness and cost-effectiveness of two complementary interventions, using familial breast cancer as a model condition. The primary care intervention consisted of providing computerised referral guidelines and related education to GPs. The nurse counsellor intervention evaluated genetic nurses as substitutes for specialist geneticists in the initial assessment and management of referred patients. The computerised referral guidelines study was a pragmatic, cluster randomised controlled trial (RCT) with general practices randomised to intervention or control groups. The nurse counsellor intervention was tested in two concurrent RCTs conducted in separate UK health service locations, using predetermined definitions of equivalence. The computerised referral guidelines trial took place in general practices in Scotland from November 2000 to June 2001. The nurse counsellor intervention took place in a regional genetics clinic in Scotland, and in two health authorities in Wales served by a single genetics service during 2001. The computerised referral guidelines study involved GPs and referred patients. Both nurse counsellor intervention trials included women referred for the first time, aged 18 years or over and whose main concern was family history of breast cancer. The software system was developed with GPs, presenting cancer genetic referral guidelines in a checklist approach. Intervention GPs were invited to postgraduate update education sessions, and both intervention and control practices received paper-based guidelines. The intervention period was November 2000 to June 2001. For the nurse counsellor trial, trial 1 ran outpatient sessions with the same appointment length as the standard service offered by geneticists, but the nurse counsellor saw new patients at the first appointment and referred back to the GP or on to a clinical geneticist according to locally developed protocol, under the supervision of a consultant geneticist. 
The control intervention was the current service, which comprised an initial and a follow-up appointment with a clinical geneticist. In trial 2, a nurse counsellor ran outpatient sessions with the same appointment length as the new consultant-based cancer genetics service and new patients were seen at the first appointment and referred as in trial 1. The control intervention was a new service, and comprised collection of family history by telephone followed by a consultation with a clinical assistant or a specialist registrar, supervised by a consultant. The intervention was implemented between 1998 and 2001. In the software system trial, the primary outcome was GPs' confidence in their management of patients with concerns about family history of breast cancer. For the nurse counsellor trial, the primary outcome was patient anxiety, measured using standard scales. In the software system trial, 57 practices (230 GPs) were randomised to the intervention group and 29 (116 GPs) to the control group. No statistically significant differences were detected in GPs' confidence or any other outcomes. Fewer than half of the intervention GPs were aware of the software, and only 22 reported using it in practice. The estimated total cost was GBP3.12 per CD-ROM distributed (2001 prices). For the two arms of the nurse counsellor trial, 289 patients (193 intervention, 96 control) and 297 patients (197 intervention and 100 control) consented, were randomised, returned a baseline questionnaire and attended the clinic for trials 1 and 2 respectively. The analysis in both cases suggested equivalence in all anxiety scores, and no statistically significant differences were detected in other outcomes in either trial. A cost-minimisation analysis suggested that the cost per counselling episode was GBP10.23 lower in the intervention arm than in the control arm in trial 1, and GBP10.89 higher in the intervention arm than in the control arm in trial 2 (2001 prices).
Taking the trials together, the costs were sensitive to the grades of doctors and the time spent in consultant supervision of the nurse counsellor, but they were only slightly affected by the grade of nurse counsellor, the selected discount rate and the lifespan of equipment. Computer-based systems in the primary care intervention cannot be recommended for widespread use without further evaluation and testing in real practice settings. Genetic nurse counsellors may be a cost-effective alternative to assessment by doctors. This trial does not provide definitive evidence that the general policy of employing genetics nurse counsellors is sound, as it was based on only three individuals. Future evaluations of computer-based decision support systems for primary care must first address their efficacy under ideal conditions, identify barriers to the use of such systems in practice, and provide evidence of the impact of a policy of using such systems in routine practice. The nurse counsellor trial should be replicated in other settings to provide reassurance of the generalisability of the intervention, and other models of nurse-based assessment, such as in outreach clinics, should be developed and evaluated. The design of future evaluations of professional substitution should also address issues such as the effect of different levels of training and experience of nurse counsellors, and learning effects.
To evaluate the effects of lidocaine on changes of neuropathological outcome as well as the learning and memory abilities induced by transient global cerebral ischemia in mice of different apolipoprotein E genotypes. Transient global ischemia was induced by bilateral common carotid arteries occlusion (BCCAO) for a period of 17 minutes. Healthy male C57BL/6J wild-type mice (C57 mice) and apolipoprotein E knockout mice (apoE mice) were randomly divided into six groups: C57 control group (n = 15, undergoing sham operation, neither BCCAO was performed nor pharmacologic intervention was given), C57 ischemia group (n = 21, BCCAO for 17 minutes was performed and normal saline was given intraperitoneally), C57 lidocaine group (n = 22, BCCAO for 17 minutes was performed and lidocaine was given intraperitoneally), apoE control group (n = 15, undergoing the same procedure as that of the C57 control group), apoE ischemia group (n = 19, undergoing the same procedure as that of the C57 ischemia group), apoE lidocaine group (n = 16, undergoing the same procedure as that of the C57 lidocaine group). The mice were allowed to recover for 7 days. Twenty-eight mice were randomly selected from these 6 groups for neuropathological studies on the 7th postoperative day. The percentage of ischemic neurons in the CA1 region of hippocampus was calculated. Morris water maze tasks were performed for the remaining mice from the 8th postoperative day. Mice were tested four times daily for 5 consecutive days. The latency period was recorded and the percentage of effective search strategies was calculated.
(1) The percentage of ischemic neurons in the CA1 region of hippocampus was 0.3% +/- 0.1% in the C57 control group, 19.3% +/- 4.5% in the C57 ischemia group, 36.9% +/- 2.5% in the C57 lidocaine group, 0.6% +/- 0.3% in the apoE control group, 65.5% +/- 2.2% in the apoE ischemia group, and 39.4% +/- 6.5% in the apoE lidocaine group; it was significantly higher in the ischemia and lidocaine groups than in the corresponding control groups (all P < 0.01), significantly higher in the C57 lidocaine and apoE ischemia groups than in the C57 ischemia group (both P < 0.01), but significantly lower in the apoE lidocaine group than in the apoE ischemia group (P < 0.01). (2) The latency period decreased significantly along with the test days in all groups except the apoE ischemia group (P < 0.05 or 0.01); it was significantly longer in the ischemia and lidocaine groups than in the corresponding control groups (P < 0.05 or 0.01), significantly longer in the C57 lidocaine group than in the C57 ischemia group on the 3rd day of testing [73.1 (22.1-120.1) s vs. 40.2 (28.4-91.1) s], and in the apoE ischemia group than in the C57 ischemia group on the 3rd and 4th days of testing [88.2 (41.0-120.1) s vs. 40.2 (28.4-91.1) s, and 78.2 (32.9-120.1) s vs. 46.3 (11.6-81.9) s] (P < 0.05 or 0.01); however, it was significantly shorter in the apoE lidocaine group than in the C57 lidocaine group on the 3rd day of testing [39.0 (15.5-103.5) s vs. 73.1 (22.1-120.1) s], and in the apoE lidocaine group than in the apoE ischemia group from the 3rd to the 5th days of testing [39.0 (15.5-103.5) s vs. 88.2 (41.0-120.1) s, 24.9 (11.8-68.0) s vs. 78.2 (32.9-120.1) s, and 29.1 (6.6-57.2) s vs. 66.3 (14.2-97.0) s, respectively] (P < 0.05 or 0.01).
(3) The percentage of effective search strategies increased significantly along with the test days in all groups except the C57 lidocaine and apoE ischemia groups (P < 0.05 or 0.01); it was significantly lower in the ischemia and lidocaine groups than in the corresponding control groups (P < 0.05 or 0.01), significantly lower in the C57 lidocaine group than in the C57 ischemia group on the 4th and 5th days of testing [25 (0-50)% vs. 50 (25-75)% and 37.5 (0-75)% vs. 50 (50-100)%], and in the apoE ischemia group than in the C57 ischemia group from the 3rd to the 5th days of testing [25 (0-25)% vs. 50 (25-75)%, 25 (0-25)% vs. 50 (25-75)%, and 25 (0-50)% vs. 50 (50-100)%, respectively] (P < 0.05 or 0.01); however, it was significantly higher in the apoE lidocaine group than in the C57 lidocaine group [50 (0-75)% vs. 25 (0-50)% and 50 (25-100)% vs. 37.5 (0-75)%], and in the apoE lidocaine group than in the apoE ischemia group [50 (0-75)% vs. 25 (0-25)% and 50 (25-100)% vs. 25 (0-50)%] on the 4th and 5th days of testing (all P < 0.05). Transient global cerebral ischemia causes significant brain damage, which is more severe in the apoE mice than in the C57 mice. Lidocaine significantly worsens the ischemic brain damage in the C57 mice but significantly alleviates it in the apoE mice.
TREATMENT OF ARTERIAL HYPERTENSION - Blood pressure (BP) should be regularly measured in all patients with CKD (Strength of Recommendation C). - BP control and proteinuria reduction delay progression of CKD (Strength of Recommendation A) and reduce cardiovascular risk (Strength of Recommendation C). Thus, control of both factors should be the treatment objective. - The BP target in patients with CKD should be < 130/80 mmHg, and 125/75 mmHg if proteinuria is > 1 g/24 hours (Strength of Recommendation A). - Lifestyle changes should be made: low-sodium diet (less than 100 mEq/day of sodium or 2.4 g/day of salt); weight reduction if patient is overweight (body mass index 20-25 kg/m2); regular aerobic physical exercise and moderate alcohol intake for BP control and prevention of cardiovascular risk (Strength of Recommendation A). - The choice of the antihypertensive drug in patients with CKD depends on the etiology of CKD, cardiovascular risk, or presence of clinical or subclinical cardiovascular disease (Strength of Recommendation A). - Two or more antihypertensive drugs are usually required to control blood pressure in patients with CKD (Strength of Recommendation B), and will frequently include a diuretic, which in stages 4-5 should be a loop diuretic (Strength of Recommendation B). - Renin-angiotensin-aldosterone system (RAAS) inhibitors are first choice drugs in patients with diabetic nephropathy, patients with non-diabetic nephropathy with a protein/creatinine ratio higher than 200 mg/g, and patients with heart failure (Strength of Recommendation A). The combination of ACEIs and ARBs is indicated for reducing proteinuria that remains high despite treatment with a RAAS inhibitor, provided potassium levels do not exceed 5.5 mEq/L (Strength of Recommendation B). - When RAAS blockers are started or their dose is changed in patients with advanced CKD, kidney function and serum potassium levels should be monitored at least after 1-2 weeks.
DIAGNOSIS AND TREATMENT OF DYSLIPIDEMIA - A complete evaluation of the lipid profile including total cholesterol, LDL-C, HDL-C, and triglycerides should be performed in any patient with CKD at baseline and at least annually (Strength of Recommendation B). - In patients with stage 4-5 CKD and LDL-C ≥ 100 mg/dL, treatment to decrease levels to < 100 mg/dL should be considered because of their high CV risk. This reduction is recommended in secondary prevention and in primary prevention in diabetic patients. Lipid-lowering treatment is recommended in all other patients, although no evidence showing its benefits is available yet (Strength of Recommendation C). - In patients with stage 4-5 CKD and triglyceride levels ≥ 500 mg/dL which are not corrected by treating the underlying causes, treatment with triglyceride-lowering drugs may be considered to reduce the risk of pancreatitis. However, treatment with fibrates should be used with caution, and these drugs should not be combined with statins due to the risk of rhabdomyolysis (Strength of Recommendation C). There is little experience on the efficacy and safety of omega-3 fatty acids for the treatment of hypertriglyceridemia in patients with grade 4-5 CRF, but they may be considered a possibly safer alternative to fibrates (Strength of Recommendation C). SMOKING - Smoking is a cardiovascular risk factor and a risk factor for progression of kidney disease in patients with CRF (Strength of Recommendation B). - Use of active measures to achieve smoking cessation is recommended in patients with CRF (Strength of Recommendation C). HOMOCYSTEINE - Hyperhomocysteinemia has been postulated as a cardiovascular risk factor in the general population and in kidney patients, but the available evidence is not consistent.
- There is no evidence that vitamin therapy decreases cardiovascular risk in patients with CRF, and recommendation of routine vitamin measurement and start of vitamin therapy to reduce cardiovascular risk in these patients is therefore questionable (Strength of Recommendation B). LEFT VENTRICULAR HYPERTROPHY - Left ventricular hypertrophy (LVH) is a cardiovascular risk factor in patients with CRF (Strength of Recommendation B). - It is advisable to perform an echocardiogram at baseline and every 12-24 months and to consider treatments allowing for LVH regression (Strength of Recommendation C). The approach to LVH should be early and multifactorial because its reversibility is limited once established (Strength of Recommendation C). - RAAS blockade with ACEIs or ARBs partially reverts LVH in patients with CRF (Strength of Recommendation B). ANTI-PLATELET AGGREGATION - Because of the high cardiovascular risk in patients with CKD, anti-platelet aggregant therapy, especially low-dose aspirin, would be indicated in patients with type 2 diabetes as primary prevention, and in all patients with CKD as secondary prevention. There is however no evidence of the benefits of anti-platelet aggregant therapy in primary prevention in patients with CKD, particularly in stages 4-5; indication for treatment in this situation should therefore be individualised because of its greater risk of bleeding. - Adequate blood pressure control should first be achieved to minimise the risk of haemorrhagic stroke (Strength of Recommendation C).
The frequency of periprosthetic fractures related to total knee arthroplasty is increasing, with an average prevalence of 1.3% and with women being affected more often (4 out of 5 patients). Fractures of the distal femur are common, while tibial fractures are rare. Crucial for treatment is to distinguish fractures of the metaphysis above the femoral component, which remains firmly fixed, from those involving the knee joint replacement and component loosening. Supracondylar periprosthetic fractures are almost always managed surgically, using methods of osteosynthesis with an angle condylar or DCS plate, or a short retrograde-inserted supracondylar intramedullary nail. The recent use of implants such as LCPs with angle-stable screws has offered good prospects. This retrospective study presents our first experience with an LCP for treatment of supracondylar periprosthetic fractures of the knee joint. Between 2005 and 2008, a total of 13 supracondylar periprosthetic knee fractures were treated by the LCP technique. The patient group included 10 women and three men; the average age was 67.4 (range, 56-81) years. The fractures were classified using the system proposed by Su et al. and the AO classification system. According to the Su classification, 12 types I and II fractures and one type III fracture were indicated for osteosynthesis. Based on the AO classification, there were four type 33 A1 fractures, five 33 A2 fractures, three 33 A3 fractures and one 33 C2 fracture initially incorrectly classified as type 33 A3. The average time between total knee arthroplasty and injury was 6.8 years. In all patients fractures occurred after primary implantation of a cemented condylar total knee replacement without a femoral stem. The fractures were treated by a less invasive technique of LCP implantation within an average of 2.5 days of injury. The patients were followed up until radiographic fracture union, and complications were recorded.
The 13 patients were treated by LCP osteosynthesis through a less invasive approach. One patient had primary spongioplasty, two had spongioplasty after an interval of 7 weeks. One patient died of a disease unrelated to trauma and surgery at 3 months after osteosynthesis. In one patient, osteosynthesis failed with fragment dislocation shortly after the operation. The case analysis showed that the initial indication was marginal and the comminuted zone was too low above the implant, with the fracture line extending to the component. Subsequently, conversion to revision total knee arthroplasty involving a stem was carried out. In nine patients, bone union was achieved in an average of 18 weeks, with radiographic evidence of fracture union. No complications such as wound infection, delayed wound healing or thromboembolic disease were recorded. No failure of bone union or development of pseudoarthrosis occurred. There are only few reports in the literature on the treatment of supracondylar periprosthetic knee fractures and the evaluation of its results, and the groups evaluated are small. In a meta-analysis of cases from the 1981 to 2006 period, Herrera et al. found only 29 assessable studies with a total of 415 cases, i.e., an average of 14 cases per study. The usual method of treatment was DCS plate osteosynthesis. Complications associated with conventional osteosynthesis techniques, as reported by various authors, may reach up to 30% (pseudoarthrosis development, 9%; osteosynthesis failure, 4%; necessity of revision surgery, 13%; fracture malunion, 47%). Good results have been achieved with a retrograde-inserted intramedullary nail. The use of an LCP has been reported in the literature only occasionally. The classification system described by Rorabeck et al. is most widely used, but the system proposed by Su et al.
seems more convenient to us, because fractures are placed in three groups according to the localisation of a fracture line and its distance from the femoral component, as follows: type 1 fracture, the fracture line is proximal to the femoral component; type 2 fracture, the fracture line starts at the level of the proximal edge of the femoral component and runs proximally; type 3 fracture, the fracture line extends below the upper end of the femoral component. Type 1 fracture is indicated for a retrograde-inserted intramedullary nail, type 2 fracture for LCP osteosynthesis, and type 3 fracture for revision total knee arthroplasty. The use of LCPs in the treatment of supracondylar fractures of total knee arthroplasty, with a success rate of 86%, is described by Ricci et al. Other authors also report better outcomes with the use of LISS or LCP methods than with conventional osteosynthesis techniques. Osteosynthesis with an angle-stable LCP is an efficient method suitable also for the treatment of periprosthetic fractures of the distal femur above total knee arthroplasty. It offers all advantages of angle-stable implants. It is more effective for osteoporotic bone than a DCS implant or a condylar plate, because it provides better fixation stability for the distal fragment. However, further studies are needed to compare its efficiency with that of an IM nail.
Parturitions of 338 humid-zone ewes and 361 subhumid-zone ewes gave rise to 690 and 797 live-born lambs, respectively. Mean age and weight at first parity were 431.3 days and 15.6 kg for the humid-zone ewes, while those for the subhumid-zone ewes were 429.4 days and 17.0 kg. Early age at first parity is related to physiological age and growth of body weight, which is closely related to early sexual maturity. The monthly distribution of parturitions is discussed, with considerably higher peaks in October, April and May in the humid zone, while in the subhumid zone high frequencies were recorded in July, August, February and March. Higher parities significantly shortened the successive parturition intervals within and between the climatic zones, with overall intervals of 234.2 and 208.7 days among the humid- and subhumid-zone ewes, respectively, at the fifth parity. The shortest parturition intervals (242.6 days) were recorded during the dryspell-minor wet season (August-October) in the humid zone and the major wet season (May-July; 223.9 days) in the subhumid zone. Higher annual reproduction rates of 1.95 (1st year) and 1.79 (2nd year) lambs were recorded among the free-range subhumid-zone ewes, while subnormal rates of 1.72 (1st year) and 1.68 (2nd year) lambs were recorded among the semi-tethered humid-zone ewes. These differences are related to natural service, parturition intervals, litter size and the availability of fodder plants as influenced by management (semi-tethering as against free range). ZUSAMMENFASSUNG (German summary, translated): Studies on traditional sheep production in the humid and sub-humid Asante Region of Ghana. II. Reproductive performance and fertility rates. Compared with 690 live births among 338 ewes in the humid zone (H), 797 births among 361 ewes were recorded in the semi-humid zone (SH).
At an age at first lambing of 431.3 (H) and 429.4 (SH) days and body weights of 15.6 kg (H) and 17.0 kg (SH), ewes in the semi-humid zone are heavier at this stage. The higher body weight largely determines early sexual maturity. The monthly distribution of lambings showed clear peaks in October, April and May (H) and in July, August, February and March (SH). A higher number of lambings led to a significant shortening of the successive lambing intervals within and between the climatic zones. The interval to the fifth lambing fell to 234.2 days in the humid zone and 208.7 days in the subhumid zone. The shortest lambing intervals were observed during the minor 'dryspell' rainy season (August-October) in the humid zone (242.6 days) and during the main rainy season (May-July) in the sub-humid zone (223.9 days). While higher reproduction rates of 1.95 lambs (1st year) and 1.79 lambs (2nd year) were found among the free-ranging ewes in the sub-humid zone, lower reproduction rates of 1.72 lambs (1st year) and 1.68 lambs (2nd year) were recorded under tethering in the humid zone. The husbandry systems (tethering versus free-range grazing), litter size and the available feed base, with their influence on lambing intervals, were analysed as the underlying reasons. Overall, the ewes of the humid zone, at 112.2%, differed from the ewes of the sub-humid zone, at 120.4%. RÉSUMÉ (French summary, translated): Analysis of traditional sheep production in the humid and sub-humid zones of the Asante Region, Ghana. II. Reproductive performance and fertility rates. Compared with the 690 live births among 338 ewes in the humid zone (H), 797 births among 361 ewes were recorded in the semi-humid zone (SH).
At an age at first lambing of between 431.3 (H) and 429.4 (SH) days and live weights of 15.6 (H) and 17.0 kg (SH), ewes in the semi-humid zone are heavier at this stage. The higher weight largely determines early sexual maturity. The monthly distribution of lambings shows higher frequencies in October, April and May (H), and in July, August, February and March (SH). A higher number of lambings led to a significant reduction of the successive lambing intervals both within and between the climatic zones. The period between the fourth and fifth lambing fell to 234.2 days in the humid zone and 208.7 days in the sub-humid zone. The shortest interval observed between two lambings was 242.6 days during the short rainy season, known as the 'dry spell' (August to October), in the humid zone, and 223.9 days during the main rainy season (May to July) in the sub-humid zone. Higher reproduction rates (1.95 lambs in the first year and 1.79 in the second) were recorded among the free-grazing ewes of the sub-humid zone, and lower rates (1.72 lambs in the first year and 1.68 in the second) among the tethered ewes of the humid zone. The husbandry systems (tethered versus free grazing), litter size and the availability of fodder were analysed as important factors influencing the intervals between lambings.
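The stated link between parturition interval, litter size and annual reproduction rate can be made explicit with a standard back-of-envelope relation. The formula and the litter-size value below are illustrative assumptions, not figures reported by the study:

```python
def annual_reproduction_rate(litter_size: float, interval_days: float) -> float:
    """Lambs per ewe per year = litter size x parturitions per year."""
    return litter_size * 365.0 / interval_days

# With an assumed litter size of 1.12 lambs per parturition and the
# 208.7-day interval reported for subhumid-zone ewes at fifth parity,
# the relation gives a rate close to the reported 1.95 lambs per year.
print(round(annual_reproduction_rate(1.12, 208.7), 2))  # → 1.96
```

This sketch only illustrates why shorter parturition intervals and larger litters jointly raise the annual reproduction rate; the study itself measured the rates directly.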
Since 1969, CDC has conducted abortion surveillance to document the number and characteristics of women obtaining legal induced abortions in the United States. Reporting period covered: 1999-2008. Each year, CDC requests abortion data from the central health agencies of 52 reporting areas (the 50 states, the District of Columbia, and New York City). This information is provided voluntarily. For 2008, data were received from 49 reporting areas. For the purpose of trend analysis, data were evaluated from the 45 areas that reported data every year during 1999-2008. Abortion rates (number of abortions per 1,000 women) and ratios (number of abortions per 1,000 live births) were calculated using census and natality data, respectively. A total of 825,564 abortions were reported to CDC for 2008. Of these, 808,528 abortions (97.9% of the total) were from the 45 reporting areas that provided data every year during 1999-2008. Among these same 45 reporting areas, the abortion rate for 2008 was 16.0 abortions per 1,000 women aged 15-44 years, and the abortion ratio was 234 abortions per 1,000 live births. Compared with 2007, the total number and rate of reported abortions for these 45 reporting areas essentially were unchanged, although the abortion ratio was 1% higher. Reported abortion numbers, rates, and ratios remained 3%, 4%, and 10% lower, respectively, in 2008 than they had been in 1999. Women aged 20-29 years accounted for 57.1% of all abortions reported in 2008 and for the majority of abortions during the entire period of analysis (1999-2008). In 2008, women aged 20-29 years also had the highest abortion rates (29.6 abortions per 1,000 women aged 20-24 years and 21.6 abortions per 1,000 women aged 25-29 years).
Adolescents aged 15-19 years accounted for 16.2% of all abortions in 2008 and had an abortion rate of 14.3 abortions per 1,000 adolescents aged 15-19 years; women aged ≥35 years accounted for a smaller percentage (11.9%) of abortions and had lower abortion rates (7.8 abortions per 1,000 women aged 35-39 years and 2.7 abortions per 1,000 women aged ≥40 years). Throughout the period of analysis, abortion rates decreased among adolescents aged ≤19 years, whereas they increased among women aged ≥35 years. Among women aged 20-24 years, abortion rates decreased during 1999-2003 and then leveled off during 2004-2008. In contrast to the percentage distribution of abortions and abortion rates by age, abortion ratios in 2008 and throughout the entire period of analysis were highest among adolescents aged ≤19 years and lowest among women aged 30-39 years. Abortion ratios decreased during 1999-2008 for women in all age groups except for those aged <15 years; however, the steady decrease was interrupted from 2007 to 2008 when abortion ratios increased among women in all age groups except for those aged ≥40 years. In 2008, most (62.8%) abortions were performed at ≤8 weeks' gestation, and 91.4% were performed at ≤13 weeks' gestation. Few abortions (7.3%) were performed at 14-20 weeks' gestation, and even fewer (1.3%) were performed at ≥21 weeks' gestation. During 1999-2008, the percentage of abortions performed at ≤13 weeks' gestation remained stable, whereas abortions performed at ≥16 weeks' gestation decreased 13%-17%. Moreover, among the abortions performed at ≤13 weeks' gestation, the distribution shifted toward earlier gestational ages, with the percentage of abortions performed at ≤6 weeks' gestation increasing 53%. In 2008, 75.9% of abortions were performed by curettage at ≤13 weeks' gestation, and 14.6% were performed by early medical abortion (a nonsurgical abortion at ≤8 weeks' gestation); 8.5% of abortions were performed by curettage at >13 weeks' gestation.
Among the 62.8% of abortions that were performed at ≤8 weeks' gestation and thus were eligible for early medical abortion, 22.5% were completed by this method. The use of medical abortion increased 17% from 2007 to 2008. Deaths of women associated with complications from abortions for 2008 are being investigated under CDC's Pregnancy Mortality Surveillance System. In 2007, the most recent year for which data were available, six women were reported to have died as a result of complications from known legal induced abortions. No reported deaths were associated with known illegal induced abortions. Among the 45 areas that reported data every year during 1999-2008, the total number and rate of reported abortions essentially did not change from 2007 to 2008. This finding is consistent with the recent leveling off from the steady decreases that had been observed in the past. In contrast, the abortion ratio increased from 2007 to 2008 after having decreased steadily. In 2007, as in previous years, reported deaths related to abortion were rare. This report provides the data for examining trends in the number and characteristics of women obtaining abortions. This information is needed to better understand why efforts to reduce unintended pregnancy have stalled and can be used by policymakers and program planners to guide and evaluate efforts to prevent unintended pregnancy.
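The abortion rate and ratio used throughout the report are simple per-1,000 computations; a minimal sketch in Python is below. The denominators are hypothetical round numbers chosen for illustration only, not the report's actual census or natality counts.

```python
def abortion_rate(abortions: int, women_15_44: int) -> float:
    """Abortions per 1,000 women aged 15-44 years (rate)."""
    return 1000 * abortions / women_15_44

def abortion_ratio(abortions: int, live_births: int) -> float:
    """Abortions per 1,000 live births (ratio)."""
    return 1000 * abortions / live_births

# Hypothetical denominators, back-solved to reproduce the reported figures:
rate = abortion_rate(808_528, 50_533_000)   # 16.0 per 1,000 women
ratio = abortion_ratio(808_528, 3_455_000)  # ~234 per 1,000 live births
```

Note that the two measures use different denominators (a population of women versus a count of live births), which is why they can move in different directions from year to year.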
In 2006, pharmacies began offering select generic prescription drugs at discount prices (e.g., $4 for a 30-day supply) through nonmembership and membership programs. As part of the contract in membership generic drug discount programs, the member agrees to forgo submission of the claim to the insurance company. Claims not submitted for insurance adjudication may result in incomplete pharmacy benefit manager (PBM) and health plan data, which could negatively influence adherence reporting and clinical programs. To address potentially missing claims data, the Centers for Medicare & Medicaid Services (CMS) encourages Medicare Part D sponsors to incentivize network pharmacies to submit claims directly to the plan for drugs dispensed outside of a member's Part D benefit, unless a member refuses. The extent of PBM and health plan claims capture loss due to generic drug discount programs is unknown. To identify changes in levothyroxine utilizers' prescription claims capture rate following the advent of generic drug discount membership and nonmembership programs. This retrospective concurrent cohort study used claims data from 3.5 million commercially insured members enrolled in health plans located in the central and southern United States with Prime Therapeutics pharmacy benefit coverage. Members were required to be 18 years or older and younger than 60 years as of January 1, 2006, and continuously enrolled from January 1, 2006, through December 31, 2010. Members utilizing generic levothyroxine for at least 120 days during January 1, 2006, through June 30, 2006 (baseline period) from the same pharmacy group with supply on July 1, 2006, were placed into 1 of 3 pharmacy groups: (1) nonmembership (Walmart, Sam's Club, Target, Kroger, City Market, and King Soopers pharmacies), (2) membership (Walgreens, CVS, Albertsons, and Savon pharmacies), or (3) the reference group of all other pharmacies. The index date was defined as July 1, 2006. 
The levothyroxine claim providing supply on July 1, 2006, was the index claim. Members with a Kmart pharmacy index claim were excluded, since the Kmart membership drug discount program began prior to July 1, 2006. Levothyroxine claims capture nonpersistency, defined as the occurrence of a claim supply end date prior to a 180-day gap, was the primary outcome variable and was assessed from July 1, 2006, through June 30, 2010 (follow-up period). The odds of levothyroxine claims capture nonpersistency by pharmacy group were assessed using a logistic regression analysis adjusted for the following covariates: age, gender, median income in the ZIP code of residence (binomial for ≤ $50,000 vs. greater than $50,000), switch to a brand levothyroxine product during the follow-up period, index levothyroxine claim supply of 90 days or more, and index levothyroxine claim member cost share per 30-day supply in tertiles (≤ $5.00, $5.01-$7.99, ≥ $8.00). Of 2,632,855 eligible members aged 18 years or older, 13,427 met all study eligibility criteria. The baseline pharmacy groups were membership with 3,595 (26.8%), nonmembership with 1,919 (14.3%), and all other pharmacies with 7,913 (58.9%) members. The rates of levothyroxine claims capture persistency throughout the 4-year follow-up period were 85.4% for nonmembership (P = 0.593 vs. all other pharmacies), 77.7% for the membership group (P < 0.001 vs. all other pharmacies), and 85.9% for all other pharmacies. The Kaplan-Meier comparison of claims capture persistency found nearly identical claims capture loss for the nonmembership compared with all other pharmacies group, and when compared in a multivariate logistic regression model, there was no difference in the odds of levothyroxine claims capture over 4 years of follow-up (OR = 1.01, 95% CI = 0.88-1.16, P = 0.900). 
The membership generic drug discount programs (Walgreens, CVS, Albertsons, and Savon pharmacies) had a statistically significant 61% higher odds (OR = 1.61, 95% CI = 1.45-1.79, P < 0.001) of levothyroxine claims capture nonpersistency. The onset of the difference between the membership group and the all other pharmacies group was temporally associated with the launch of the membership programs. In comparison to an index levothyroxine member cost of ≤ $5.00 per 30-day supply, higher cost shares were associated with higher levothyroxine claims capture nonpersistency ($5.01-$7.99: OR = 1.34, 95% CI = 1.19-1.52; ≥ $8.00: OR = 1.60, 95% CI = 1.40-1.82). Among levothyroxine utilizers in 2006 (prior to the advent of drug discount programs), those with claims from a pharmacy that subsequently implemented a nonmembership generic drug discount program did not appear to have a different rate of levothyroxine claims capture than members from the reference group when followed through June 2010. Utilizers with claims from a pharmacy that subsequently implemented a membership program had a significantly lower levothyroxine claims capture rate. Increasing index levothyroxine member cost was associated with higher levothyroxine claims capture loss. Because the analysis could not directly measure claims capture loss associated with members who switched to a new pharmacy group without presenting their insurance information (e.g., membership discount programs), further research is needed to confirm these findings.
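The odds ratios above come from a multivariable logistic regression, but the underlying quantity is easy to illustrate: an unadjusted odds ratio and its Wald confidence interval can be computed from a 2x2 table. The sketch below uses hypothetical counts, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table:

                 nonpersistent   persistent
      exposed         a              b
      control         c              d

    The 95% CI is a Wald interval computed on the log-odds scale,
    where SE(log OR) = sqrt(1/a + 1/b + 1/c + 1/d).
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 10/100 exposed vs. 5/100 controls nonpersistent.
print(odds_ratio_ci(10, 90, 5, 95))
```

An adjusted OR from a regression model will generally differ from this crude estimate, since it holds the listed covariates (age, gender, income, cost share, and so on) constant.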
Gestational diabetes mellitus (GDM) affects a significant number of women each year and is associated with a wide range of adverse outcomes for women and their babies. Dietary counselling is the main strategy in managing GDM, but it remains unclear which dietary therapy is best. To assess the effects of different types of dietary advice for women with GDM on pregnancy outcomes. We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (17 May 2012) and the WOMBAT Perinatal Trials Registry (17 April 2012). Randomised controlled trials (RCTs) and cluster-RCTs assessing the effects of different types of dietary advice for women with GDM on pregnancy outcomes. We intended to compare two or more forms of the same type of dietary advice against each other, i.e. standard dietary advice compared with individualised dietary advice, individual dietary education sessions compared with group dietary education sessions. We intended to compare different intensities of dietary intervention with each other, i.e. single dietary counselling session compared with multiple dietary counselling sessions. Two review authors independently assessed study eligibility, extracted data and assessed risk of bias of included studies. Data were checked for accuracy. We included nine trials; 429 women (436 babies) provided outcome data. All nine included trials had small sample sizes with variation in levels of risk of bias. 
A total of 11 different types of dietary advice were assessed under six different comparisons, including: low-moderate glycaemic index (GI) food versus high-moderate GI food, low-GI diet versus high-fibre moderate-GI diet, energy-restricted diet versus no energy restriction diet, low-carbohydrate diet (≤ 45% daily total energy intake from carbohydrate) versus high-carbohydrate diet (≥ 50% daily total energy intake from carbohydrate), high-monounsaturated fat diet (at least 20% total energy from monounsaturated fat) versus high-carbohydrate diet (at least 50% total energy from carbohydrate), and standard-fibre diet (American Diabetes Association (ADA) diet) (20 grams fibre/day) versus fibre-enriched diet (80 grams fibre/day). In the low-moderate GI food versus moderate-high GI food comparison, no significant differences were seen for macrosomia or large-for-gestational age (LGA) (two trials, 89 babies) (risk ratio (RR) 0.45, 95% confidence interval (CI) 0.10 to 2.08 and RR 0.95, 95% CI 0.27 to 3.36, respectively), or caesarean section (RR 0.66, 95% CI 0.29 to 1.47, one trial, 63 women). In the low-GI diet versus high-fibre moderate-GI diet comparison, no significant differences were seen for macrosomia or LGA (one trial, 92 babies) (RR 0.32, 95% CI 0.03 to 2.96 and RR 2.87, 95% CI 0.61 to 13.50, respectively), or caesarean section (RR 1.80, 95% CI 0.66 to 4.94, one trial, 88 women). In the energy-restricted versus unrestricted diet comparison, no significant differences were seen for macrosomia (RR 1.56, 95% CI 0.61 to 3.94, one trial, 122 babies); LGA (RR 1.17, 95% CI 0.65 to 2.12, one trial, 123 babies); or caesarean section (RR 1.18, 95% CI 0.74 to 1.89, one trial, 121 women). In the low- versus high-carbohydrate diet comparison, none of the 30 babies in a single trial were macrosomic, and no significant differences in caesarean section rates were seen (RR 1.40, 95% CI 0.57 to 3.43, one trial, 30 women). In the high-monounsaturated fat versus high-carbohydrate diet 
comparison, neither macrosomia nor LGA showed significant differences (one trial, 27 babies) (RR 0.65, 95% CI 0.91 to 2.18 and RR 0.54, 95% CI 0.21 to 1.37, respectively). Women having a high-monounsaturated fat diet had a significantly higher body mass index (BMI) at birth (mean difference (MD) 3.90 kg/m², 95% CI 2.41 to 5.39, one trial, 27 women) and at six to nine months postpartum (MD 4.10 kg/m², 95% CI 2.34 to 5.86, one trial, 27 women) when compared with those having a high-carbohydrate diet. However, these findings were based on a single, small RCT with baseline imbalance in maternal BMI. Perinatal mortality was reported in only one trial, which recorded no fetal deaths in either the energy-restricted or unrestricted diet group. A single trial comparing the ADA diet (20 grams fibre/day) with a fibre-enriched diet (80 grams fibre/day) did not report any of our prespecified primary outcomes. Very limited data were reported on the prespecified outcomes for each of the six comparisons. Only one trial reported on early postnatal outcomes. No trial reported long-term health outcomes for women and their babies. No data were reported on health service cost or women's quality of life. Data for most comparisons were only available from single studies and they are too small for reliable conclusions about which types of dietary advice are the most suitable for women with GDM. Based on the current available evidence, we did not find any significant benefits of the diets investigated. Further larger trials with sufficient power to assess the effects of different diets for women with GDM on maternal and infant health outcomes are needed. Outcomes such as longer-term health outcomes for women and their babies, women's quality of life and health service cost should be included.
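The risk ratios quoted throughout this abstract follow the standard two-group formula, with confidence intervals built on the log scale. A minimal sketch, using hypothetical event counts rather than the trials' actual data:

```python
import math

def risk_ratio_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Risk ratio (treatment vs. control) with a 95% CI on the log scale."""
    rr = (events_t / n_t) / (events_c / n_c)
    # Standard error of log(RR) for two independent binomial samples:
    # sqrt(1/a - 1/n1 + 1/c - 1/n2), where a and c are event counts.
    se = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 3/45 events in treatment vs. 6/44 in control.
print(risk_ratio_ci(3, 45, 6, 44))
```

With small trials like those reviewed here, the interval is wide and typically crosses 1, which is exactly the pattern of non-significant results the review reports.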
Minimally invasive techniques to treat great saphenous varicose veins include ultrasound-guided foam sclerotherapy (UGFS), radiofrequency ablation (RFA) and endovenous laser therapy (EVLT). Compared with flush saphenofemoral ligation with stripping, also referred to as open surgery or high ligation and stripping (HL/S), proposed benefits include fewer complications, quicker return to work, improved quality of life (QoL) scores, reduced need for general anaesthesia and equivalent recurrence rates. This is an update of a review first published in 2011. To determine whether endovenous ablation (radiofrequency and laser) and foam sclerotherapy have any advantages or disadvantages in comparison with open surgical saphenofemoral ligation and stripping of great saphenous vein varices. For this update the Cochrane Peripheral Vascular Diseases Group Trials Search Co-ordinator searched the Specialised Register (last searched January 2014) and CENTRAL (2013, Issue 12). Clinical trials databases were also searched for details of ongoing or unpublished studies. All randomised controlled trials (RCTs) of UGFS, EVLT, RFA and HL/S were considered for inclusion. Primary outcomes were recurrent varicosities, recanalisation, neovascularisation, technical procedure failure, patient QoL scores and associated complications. CN and RB independently reviewed, assessed and selected trials which met the inclusion criteria. CN and RB extracted data and used the Cochrane Collaboration's tool for assessing risk of bias. CN and RB contacted trial authors to clarify details as needed. For this update, eight additional studies were included making a total of 13 included studies with a combined total of 3081 randomised patients. Three studies compared UGFS with surgery, eight compared EVLT with surgery and five compared RFA with surgery (two studies had two or more comparisons with surgery). 
Study quality, evaluated through the six domains of risk of bias, was generally moderate for all included studies; however, no study blinded participants, researchers and clinicians, or outcome assessors. Also, nearly all included studies had other sources of bias. The overall quality of the evidence was moderate due to the variations in the reporting of results, which limited meaningful meta-analyses for the majority of proposed outcome measures. For the comparison UGFS versus surgery, the findings may have indicated no difference in the rate of recurrences in the surgical group when measured by clinicians, and no difference between the groups for symptomatic recurrence (odds ratio (OR) 1.74, 95% confidence interval (CI) 0.97 to 3.12; P = 0.06 and OR 1.28, 95% CI 0.66 to 2.49, respectively). Recanalisation and neovascularisation were only evaluated in a single study. Recanalisation at < 4 months had an OR of 0.66 (95% CI 0.20 to 2.12), recanalisation at > 4 months an OR of 5.05 (95% CI 1.67 to 15.28), and neovascularisation an OR of 0.05 (95% CI 0.00 to 0.94). There was no difference in the rate of technical failure between the two groups (OR 0.44, 95% CI 0.12 to 1.57). For EVLT versus surgery, there were no differences between the treatment groups for either clinician-noted or symptomatic recurrence (OR 0.72, 95% CI 0.43 to 1.22; P = 0.22 and OR 0.87, 95% CI 0.47 to 1.62; P = 0.67, respectively). Both early and late recanalisation were no different between the two treatment groups (OR 1.05, 95% CI 0.09 to 12.77; P = 0.97 and OR 4.14, 95% CI 0.76 to 22.65; P = 0.10). Neovascularisation and technical failure were both statistically reduced in the laser treatment group (OR 0.05, 95% CI 0.01 to 0.22; P < 0.0001 and OR 0.29, 95% CI 0.14 to 0.60; P = 0.0009, respectively). Long-term (five-year) outcomes were evaluated in only one study, so no association could be derived, but EVLT and surgery appeared to maintain similar findings. 
Comparing RFA versus surgery, there were no differences in clinician-noted recurrence (OR 0.82, 95% CI 0.49 to 1.39; P = 0.47); symptomatic recurrence was only evaluated in a single study. There were also no differences between the treatment groups for recanalisation (early or late) (OR 0.68, 95% CI 0.01 to 81.18; P = 0.87 and OR 1.09, 95% CI 0.39 to 3.04; P = 0.87, respectively), neovascularisation (OR 0.31, 95% CI 0.06 to 1.65; P = 0.17) or technical failure (OR 0.82, 95% CI 0.07 to 10.10; P = 0.88). QoL scores, operative complications and pain were not amenable to meta-analysis; however, quality of life generally increased similarly in all treatment groups and complications were generally low, especially major complications. Pain reporting varied greatly between the studies, but in general pain was similar between the treatment groups. Currently available clinical trial evidence suggests that UGFS, EVLT and RFA are at least as effective as surgery in the treatment of great saphenous varicose veins. Due to large incompatibilities between trials and different time point measurements for outcomes, the evidence lacks robustness. Further randomised trials are needed, which should aim to report and analyse results in a congruent manner to facilitate future meta-analysis.
Selenium is a trace mineral essential to health and has an important role in immunity, defence against tissue damage and thyroid function. Improving selenium status could help protect against overwhelming tissue damage and infection in critically ill adults. This Cochrane review was originally published in 2004 and updated in 2007 and again in 2015. The primary objective was to examine the effect of nutrition supplemented with selenium or ebselen on mortality in critically ill patients. The secondary objective was to examine the relationship between selenium or ebselen supplementation and number of infections, duration of mechanical ventilation, length of intensive care unit stay and length of hospital stay. In this update, we searched the current issue of the Cochrane Central Register of Controlled Trials, the Cochrane Library (2014, Issue 5); MEDLINE (Ovid SP, to May 20, 2014), EMBASE (Ovid SP, to May 20, 2014), CAB, BIOSIS and CINAHL. We handsearched the reference lists of the newest reviews and cross-checked with our search in MEDLINE. We contacted the main authors of included studies to request any missed, unreported or ongoing studies. The latest search was performed up to 21 May 2014. The search is now from inception until 21 May 2014. We included randomized controlled trials (RCTs) irrespective of publication status, date of publication, blinding status, outcomes published or language. We contacted the trial investigators and authors in order to retrieve relevant and missing data. Two review authors independently extracted data and we resolved any disagreements by discussion. Our primary outcome measure was all-cause mortality. We performed several subgroup and sensitivity analyses to assess the effects of selenium in critically ill patients. We presented pooled estimates of the effects of intervention as risk ratios (RRs) with 95% confidence intervals (CIs). 
We assessed the risk of bias through assessment of trial methodological components and the risk of random error through trial sequential analysis. We included six new RCTs in this review update. In total we included 16 RCTs (2084 participants) in this review. Most trials were at high risk of bias. The availability of outcome data was limited and trials involving selenium supplementation were, with the exception of one trial, small regarding sample size. Thus the results must be interpreted with caution. Thirteen trials of intravenous sodium selenite showed a statistically significant reduction in overall mortality (RR 0.82, 95% CI 0.72 to 0.93, 1391 participants, very low quality of evidence). However, the overall point estimate on mortality is primarily influenced by trials at high risk of bias. Meta-analysis of three trials of ebselen had a RR of 0.83 (95% CI 0.52 to 1.34, 693 participants, very low quality of evidence). Nine trials of intravenous sodium selenite were analysed for 28-day mortality with no statistically significant difference (RR 0.84, 95% CI 0.69 to 1.02, 1180 participants, very low quality of evidence), while three trials were analysed for 90-day mortality with similar findings (RR 0.96, 95% CI 0.78 to 1.18, 614 participants, very low quality of evidence). Two trials of ebselen were analysed for 90-day mortality and were not found to yield any benefit (RR 0.72, 95% CI 0.42 to 1.22, 588 participants, very low quality of evidence). For mortality among intensive care patients, selenium supplementation failed to indicate any statistically significant advantage (RR 0.88, 95% CI 0.77 to 1.01, nine trials, 1168 participants, very low quality of evidence). Six trials of intravenous sodium selenite found no statistically significant difference for participants developing infection (RR 0.96, 95% CI 0.75 to 1.23, 934 patients, very low quality of evidence). 
Similarly, three trials of ebselen provided data for participants developing infections (pyrexia, respiratory infections or meningitis) with no obvious benefit (RR 0.60, 95% CI 0.36 to 1.02, 685 participants, very low quality of evidence). Our analyses showed no effect of selenium or ebselen on adverse events (selenium: RR 1.03, 95% CI 0.85 to 1.24; six trials, 925 participants; ebselen: RR 1.16, 95% CI 0.40 to 3.36; two trials, 588 participants, very low quality of evidence). No clear evidence emerged in favour of selenium supplementation for outcomes such as number of days on a ventilator (mean difference (MD) -0.86, 95% CI -4.39 to 2.67, four trials, 191 participants, very low quality of evidence), length of intensive care unit stay (MD 0.54, 95% CI -2.27 to 3.34, seven trials, 934 participants, very low quality of evidence) or length of hospital stay (MD -3.33, 95% CI -5.22 to -1.44, five trials, 693 participants, very low quality of evidence). The quality of trial methodology was low. Due to high risk of bias in the included trials, results must be interpreted with caution. Despite publication of a number of trials, the current evidence to recommend supplementation of critically ill patients with selenium or ebselen remains disputed. Trials are required which overcome the methodological inadequacies of the reviewed studies, particularly in relation to sample size, design and outcomes.
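The pooled RRs in this review combine per-study estimates into one summary. The sketch below shows fixed-effect inverse-variance pooling on the log-RR scale, a simplification of the meta-analytic machinery (the review's actual model, e.g. random-effects weighting, may differ), with each study's standard error recovered from its reported CI. The study tuples are hypothetical.

```python
import math

def pool_log_rr(studies, z=1.96):
    """Fixed-effect inverse-variance pooling of risk ratios.

    `studies` is a list of (rr, ci_lower, ci_upper) tuples; each study's
    SE is recovered from the width of its 95% CI on the log scale, and
    studies are weighted by 1/SE^2.
    """
    weights, log_rrs = [], []
    for rr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * z)
        weights.append(1 / se ** 2)
        log_rrs.append(math.log(rr))
    pooled = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - z * se_pooled),
            math.exp(pooled + z * se_pooled))

# Two hypothetical identical studies: pooled RR stays 0.8, CI narrows.
print(pool_log_rr([(0.8, 0.6, 1.0), (0.8, 0.6, 1.0)]))
```

Pooling identical studies leaves the point estimate unchanged but tightens the interval, which is the basic reason meta-analysis can detect effects that individual small trials cannot.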
Generic substitution means that one medicinal product is replaced by another product containing the same active substance. Generic substitution has existed in Denmark since 1991, and pharmacies are obliged to substitute a generic version of a medication, unless the general practitioner (GP) has explicitly stated that it should not be done, or the patient insists on having the more expensive drug. Generic prescribing, that is prescribing the substance name, is not allowed in Denmark. Some specialists and patients cast doubt on the real interchangeability of generics, although international studies have shown that most patients have positive attitudes towards generic substitution. The severity of disease is known to be associated with patients being more concerned about generic substitution. The generic substitution scheme implies changing from one drug to another that may vary in brand-name, form, size, colour and taste. Speculations have been raised as to whether these medication changes between generic brands or from brand-name drugs to generics or vice versa may cause patient concerns. Qualitative studies have shown problems in recognising the substituted medicine and lack of confidence in the identical effect of the substitutable medicines. Several studies have focused on one specific drug group such as antihypertensive drugs. However, the influence of generic switching may affect concerns about medicine differently, depending on drug categories. Research on generic substitution often focuses on incident drug users, whose prescription is substituted at their first redemption. Most of these studies did not identify significant associations between generic substitution and non-adherence, but one study assessing the association between generic substitution and persistence showed reduced persistence. 
So far, studies of the effect of generic drug substitution on drug continuation have not focused on patients' overall experience of generic switches within one specific drug. To analyse associations between generic substitution and patient characteristics as well as patients' views on generic medicines, confidence in the healthcare system, beliefs about medicine, and experience with earlier generic substitution. To investigate the possible association between a specific generic switch and patients' concerns about their medicine. To examine how generic switch influences persistence with long-term treatment with special focus on importance of patients' concerns and views on generic medicine. The design was a combined cross-sectional questionnaire and register study and additionally a cohort study. The study was conducted among 6,000 medicine users, who had redeemed generically substitutable drugs with general reimbursement in September 2008 (2,000 users of antidepressants, 2,000 users of antiepileptics and 2,000 users of other substitutable drugs), who were aged 20 years or older and living in the Region of Southern Denmark. The medicine users were identified through Odense PharmacoEpidemiologic Database (OPED). The purpose of the questionnaire survey was to elucidate patients' experience with medicine, combined with information from OPED on a single well-defined generic switch of the index drug. The questionnaire was adapted to the individual subject with reference to their specific drug (index drug) in every question and index date printed on the questionnaire. The questionnaire comprises scales from the validated Beliefs about Medicine Questionnaire (BMQ) and ad hoc constructed scales. By means of OPED data it was possible to conduct a cohort study comprising information on all purchased medicine during the 12 months following the index date. The cohort comprised users of antidepressants and users of antiepileptics. 
A total of 2,476 patients (44.1%) were included in the analyses. Experience with earlier generic switches within the index ATC code was associated with experience of a generic switch on the index day (OR 5.93; 95% CI 4.70-7.49). However, experience with earlier generic switches was drug-specific, e.g. having had more than five earlier switches within other ATC codes reduced the odds of experiencing a generic switch on the index day. Having negative views on generic medicines also reduced the odds of experiencing a generic switch on the index day. The second study showed no statistically significant association between experiencing a generic switch on the index day and having more or less concerns about the index medicine (-0.02, 95% CI -0.10 to 0.05). Patients experiencing their first-time switch of a specific drug were at higher risk of non-persistence (hazard ratio 2.98, 95% CI 1.81-4.89, versus those who had never switched), and 35.7% became non-persistent during the first year of follow-up. Generic switching did not influence persistence considerably in those having previous experience with generic switching of the specific drug. The overall results from the thesis showed that experience with earlier generic switches of a specific drug was associated with making a future generic switch and did not cause additional concerns about the index medicine. The effect of previous experience with generic substitution has been shown to be drug-specific. The third study showed that patients who were first-time switchers of a specific drug were at higher risk of becoming non-persistent compared to never-switchers and those having experienced previous generic switching.
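Persistence in the cohort study is a time-to-event outcome, with switching to non-persistence as the event and end of follow-up as censoring. A Kaplan-Meier estimator of the kind used for such analyses can be sketched as follows; the follow-up times below are illustrative, not the thesis data.

```python
from collections import Counter

def kaplan_meier(times):
    """Kaplan-Meier survival estimate.

    `times` is a list of (t, event) pairs, where event=1 marks the
    outcome (e.g. non-persistence) and event=0 marks censoring.
    Returns [(t, S(t))] at each event time: at every event time the
    survival probability is multiplied by (1 - deaths / at_risk).
    """
    at_risk = len(times)
    events = Counter(t for t, e in times if e == 1)
    censored = Counter(t for t, e in times if e == 0)
    surv, curve = 1.0, []
    for t in sorted(set(t for t, _ in times)):
        d = events.get(t, 0)
        if d:
            surv *= 1 - d / at_risk
            curve.append((t, surv))
        at_risk -= d + censored.get(t, 0)
    return curve

# Five hypothetical subjects: events at t=1, 3, 4; censoring at t=2, 5.
print(kaplan_meier([(1, 1), (2, 0), (3, 1), (4, 1), (5, 0)]))
```

A hazard ratio such as the 2.98 reported above would then come from a Cox model comparing the hazard of the event between first-time switchers and never-switchers, which this sketch does not implement.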
Urinary tract infection (UTI) in the United States is the most common bacterial infection, and urine cultures often make up the largest portion of workload for a hospital-based microbiology laboratory. Appropriately managing the factors affecting the preanalytic phase of urine culture contributes significantly to the generation of meaningful culture results that ultimately affect patient diagnosis and management. Urine culture contamination can be reduced with proper techniques for urine collection, preservation, storage, and transport, the major factors affecting the preanalytic phase of urine culture. The purposes of this review were to identify and evaluate preanalytic practices associated with urine specimens and to assess their impact on the accuracy of urine culture microbiology. Specific practices included collection methods for men, women, and children; preservation of urine samples in boric acid solutions; and the effect of refrigeration on stored urine. Practice efficacy and effectiveness were measured by two parameters: reduction of urine culture contamination and increased accuracy of patient diagnosis. The CDC Laboratory Medicine Best Practices (LMBP) initiative's systematic review method for assessment of quality improvement (QI) practices was employed. Results were then translated into evidence-based practice guidelines. A search of three electronic bibliographic databases (PubMed, SCOPUS, and CINAHL), as well as hand searching of bibliographies from relevant information sources, for English-language articles published between 1965 and 2014 was conducted. 
The search contained the following medical subject headings and key text words: urinary tract infections, UTI, urine/analysis, urine/microbiology, urinalysis, specimen handling, preservation, biological, preservation, boric acid, boric acid/borate, refrigeration, storage, time factors, transportation, transport time, time delay, time factor, timing, urine specimen collection, catheters, indwelling, urinary reservoirs, continent, urinary catheterization, intermittent urethral catheterization, clean voided, midstream, Foley, suprapubic, bacteriological techniques, and microbiological techniques. Both boric acid and refrigeration adequately preserved urine specimens prior to their processing for up to 24 h. Urine held at room temperature for more than 4 h showed overgrowth of both clinically significant and contaminating microorganisms. The overall strength of this body of evidence, however, was rated as low. For urine specimens collected from women, there was no difference in rates of contamination for midstream urine specimens collected with or without cleansing. The overall strength of this evidence was rated as high. The levels of diagnostic accuracy of midstream urine collection with or without cleansing were similar, although the overall strength of this evidence was rated as low. For urine specimens collected from men, there was a reduction in contamination in favor of midstream clean-catch over first-void specimen collection. The strength of this evidence was rated as high. Only one study compared midstream collection with cleansing to midstream collection without cleansing. Results showed no difference in contamination between the two methods of collection. However, imprecision was due largely to the small event size. The diagnostic accuracy of midstream urine collection from men compared to straight catheterization or suprapubic aspiration was high. However, the overall strength of this body of evidence was rated as low. 
For urine specimens collected from children and infants, the evidence comparing contamination rates for midstream urine collection with cleansing, midstream collection without cleansing, sterile urine bag collection, and diaper collection pointed to larger reductions in the odds of contamination in favor of midstream collection with cleansing over the other methods of collection. This body of evidence was rated as high. The accuracy of diagnosis of urinary tract infection from midstream clean-catch urine specimens, sterile urine bag specimens, or diaper specimens compared to straight catheterization or suprapubic aspiration was varied. No recommendation for or against is made for delayed processing of urine stored at room temperature, refrigerated, or preserved in boric acid. This does not preclude the use of refrigeration or chemical preservatives in clinical practice. It does indicate, however, that more systematic studies evaluating the utility of these measures are needed. If noninvasive collection is being considered for women, midstream collection with cleansing is recommended, but no recommendation for or against is made for midstream collection without cleansing. If noninvasive collection is being considered for men, midstream collection with cleansing is recommended and collection of first-void urine is not recommended. No recommendation for or against is made for collection of midstream urine without cleansing. If noninvasive collection is being considered for children, midstream collection with cleansing is recommended and collection in sterile urine bags, from diapers, or midstream without cleansing is not recommended. Whether midstream collection with cleansing can be routinely used in place of catheterization or suprapubic aspiration is unclear. 
The data suggest that midstream collection with cleansing is accurate for the diagnosis of urinary tract infections in infants and children and has higher average accuracy than sterile urine bag collection (data for diaper collection were lacking); however, the overall strength of evidence was low, as multivariate modeling could not be performed, and thus no recommendation for or against can be made.
Total hip arthroplasty (THA) relieves pain and improves physical function in patients with hip osteoarthritis, but requires a year or more for full postoperative recovery. Proponents of intermuscular surgical approaches believe that the direct-anterior approach may restore physical function more quickly than transgluteal approaches, perhaps because of diminished muscle trauma. To evaluate this, we compared patient-reported physical function and other outcome metrics during the first year after surgery between groups of patients who underwent primary THA either through the direct-anterior approach or posterior approach. We asked: (1) Is a primary THA using a direct-anterior approach associated with better patient-reported physical function at early postoperative times (1 and 3 months) compared with a THA performed through the posterior approach? (2) Is the direct-anterior approach THA associated with shorter operative times and higher rates of noninstitutional discharge than a posterior approach THA? Between October 2008 and February 2010, an arthroplasty fellowship-trained surgeon performed 135 THAs. All 135 were performed using the posterior approach. During that period, we used this approach when patients had any moderate to severe degenerative joint disease of the hip attributable to any type of arthritis refractory to nonoperative treatment measures. Of the patients who were treated with this approach, 21 (17%; 23 hips) were lost to followup, whereas 109 (83%; 112 hips) were available for followup at 1 year. Between February and September 2011, the same surgeon performed 86 THAs. All 86 were performed using the direct-anterior approach. During that period, we used this approach when patients with all types of moderate to severe degenerative joint disease had nonoperative treatment measures fail. Of the patients who were treated with this approach, 35 (41%; 35 hips) were lost to followup, whereas 51 (59%; 51 hips) were available for followup at 1 year. 
THAs during the surgeon's direct-anterior approach learning period (February 2010 through January 2011) were excluded because both approaches were being used selectively depending on patient characteristics. Clinical outcomes included operative blood loss; allogeneic transfusion; adverse events; patient-reported Veterans RAND-12 Physical (PCS) and Mental Component Summary (MCS) scores, and University of California Los Angeles (UCLA) activity scores at 1 month, 3 months, and 1 year after surgery. Resource utilization outcomes included operative time, length of stay, and discharge disposition (home versus institution). Outcomes were compared using logistic and linear regression techniques. After controlling for relevant confounding variables including age, sex, and BMI, the direct-anterior approach was associated with worse adjusted MCS changes 1 and 3 months after surgery (1-month score change, -9; 95% CI, -13 to -5; standard error, 2), compared with the posterior approach (3-month score change, -9; 95% CI, -14 to -3; standard error, 3) (both p < 0.001), while the direct-anterior approach was associated with greater PCS improvement at 3 months compared with the posterior approach (score change, 6; 95% CI, 2-10; standard error, 2; p = 0.008). There were no differences in adjusted PCS at either 1 month or 12 months, and no clinically important differences in UCLA scores. Although the PCS score differences are greater than the minimum clinically important difference of 5 points for this endpoint, the clinical importance of such a small effect is questionable. At 1 year after THA, there were no intergroup differences in self-reported physical function, although both groups had significant loss-to-followup at that time. Operative time (skin incision to skin closure) between the two groups did not differ (81 versus 79 minutes; p = 0.411). 
Mean surgical blood loss (403 versus 293 mL; p < 0.001; adjusted, 119 more mL; 95% CI, 79-160; p < 0.001) and in-hospital transfusion rates (direct-anterior approach, 20% [17/86] versus posterior approach, 10% [14/135], p = 0.050; adjusted odds ratio, 3.6; 95% CI, 1.3-10.1; p = 0.016) were higher in the direct-anterior approach group. With the numbers available, there was no difference in the frequency of adverse events between groups when comparing intraoperative complications, perioperative Technical Expert Panel complications, and other non-Technical Expert Panel complications within 1 year of surgery, although this study was not adequately powered to detect differences in rare adverse events. With suitable experience, the direct-anterior approach can be performed with expected results similar to those of the posterior approach. There may be transient and small benefits to the direct-anterior approach, including improved physical function at 3 months after surgery. However, the greater operative blood loss and greater likelihood of blood transfusions, even when the surgeon is experienced, may be a disadvantage. Given some of the kinds of bias present that we found, including loss to followup, the conclusions we present should be considered preliminary, but it appears that any benefits that accrue to the patients who had the direct-anterior approach would be transient and modest. Prospective randomized studies on the topic are needed to address the differences between surgical approaches more definitively. Level III, therapeutic study.
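For readers checking the transfusion comparison, the crude (unadjusted) odds ratio can be recomputed from the raw counts reported above (17/86 versus 14/135). Note that the study's adjusted odds ratio of 3.6 came from a multivariable logistic regression, which this minimal sketch does not reproduce:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table:
    a/b = events/non-events in group 1, c/d = events/non-events in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Transfusions: 17 of 86 (direct-anterior) vs 14 of 135 (posterior)
or_, lo, hi = odds_ratio_ci(17, 86 - 17, 14, 135 - 14)
print(f"crude OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

The crude OR is noticeably smaller than the adjusted 3.6, which illustrates why the authors controlled for confounders such as age, sex, and BMI.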
Controversy exists over whether long-chain polyunsaturated fatty acids (LCPUFA) are essential nutrients for preterm infants because they may not be able to synthesise sufficient amounts of LCPUFA to meet the needs of the developing brain and retina. To assess whether supplementation of formula milk with LCPUFA is safe and of benefit to preterm infants. The main areas of interest were the effects of supplementation on the visual function, development and growth of preterm infants. Trials were identified by searching the Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 2) in the Cochrane Library (searched 28 February 2016), MEDLINE Ovid (1966 to 28 February 2016), Embase Ovid (1980 to 28 February 2016), CINAHL EBSCO (Cumulative Index to Nursing and Allied Health Literature; 1980 to 28 February 2016), MEDLINE In Process & Other Non-indexed Citations (1966 to 28 February 2016) and by checking reference lists of articles and conference proceedings. We also searched ClinicalTrials.gov (13 April 2016). No language restrictions were applied. All randomised trials evaluating the effect of LCPUFA-supplemented formula in enterally-fed preterm infants (compared with standard formula) on visual development, neurodevelopment and physical growth. Trials reporting only biochemical outcomes were not included. All authors assessed eligibility and trial quality, two authors extracted data separately. Study authors were contacted for additional information. Seventeen trials involving 2260 preterm infants were included in the review. The risk of bias varied across the included trials with 10 studies having low risk of bias in a majority of the domains. The median gestational age (GA) in the included trials was 30 weeks and median birth weight (BW) was 1300 g. The median concentration of docosahexaenoic acid (DHA) was 0.33% (range: 0.15% to 1%) and arachidonic acid (AA) 0.37% (range: 0.02% to 0.84%). 
Visual acuity: Visual acuity over the first year was measured by Teller or Lea acuity cards in eight studies, by visual evoked potential (VEP) in six studies and by electroretinogram (ERG) in two studies. Most studies found no significant differences in visual acuity between supplemented and control infants. The form of data presentation and the varying assessment methods precluded the use of meta-analysis. A GRADE analysis for this outcome indicated that the overall quality of evidence was low. Neurodevelopment: Three out of seven studies reported some benefit of LCPUFA on neurodevelopment at different postnatal ages. Meta-analysis of four studies evaluating Bayley Scales of Infant Development at 12 months (N = 364) showed no significant effect of supplementation (Mental Development Index (MDI): MD 0.96, 95% CI -1.42 to 3.34; P = 0.43; I² = 71% - Psychomotor Development Index (PDI): MD 0.23, 95% CI -2.77 to 3.22; P = 0.88; I² = 81%). Furthermore, three studies at 18 months (N = 494) also revealed no significant effect of LCPUFA on neurodevelopment (MDI: MD 2.40, 95% CI -0.33 to 5.12; P = 0.08; I² = 0% - PDI: MD 0.74, 95% CI -1.90 to 3.37; P = 0.58; I² = 54%). A GRADE analysis for these outcomes indicated that the overall quality of evidence was low. Physical growth: Four out of 15 studies reported benefits of LCPUFA on growth of supplemented infants at different postmenstrual ages (PMAs), whereas two trials suggested that LCPUFA-supplemented infants grow less well. One trial reported mild reductions in length and weight z scores at 18 months. Meta-analysis of five studies (N = 297) showed increased weight and length at two months post-term in supplemented infants (Weight: MD 0.21, 95% CI 0.08 to 0.33; P = 0.0010; I² = 69% - Length: MD 0.47, 95% CI 0.00 to 0.94; P = 0.05; I² = 0%). 
Meta-analysis of four studies at a corrected age of 12 months (N = 271) showed no significant effect of supplementation on growth outcomes (Weight: MD -0.10, 95% CI -0.31 to 0.12; P = 0.34; I² = 65% - Length: MD 0.25; 95% CI -0.33 to 0.84; P = 0.40; I² = 71% - Head circumference: MD -0.15, 95% CI -0.53 to 0.23; P = 0.45; I² = 0%). No significant effect of LCPUFA on weight, length or head circumference was observed on meta-analysis of two studies (n = 396 infants) at 18 months (Weight: MD -0.14, 95% CI -0.39 to 0.10; P = 0.26; I² = 66% - Length: MD -0.28, 95% CI -0.91 to 0.35; P = 0.38; I² = 90% - Head circumference: MD -0.18, 95% CI -0.53 to 0.18; P = 0.32; I² = 0%). A GRADE analysis for this outcome indicated that the overall quality of evidence was low. Infants enrolled in the trials were relatively mature and healthy preterm infants. Assessment schedule and methodology, dose and source of supplementation and fatty acid composition of the control formula varied between trials. On pooling of results, no clear long-term benefits or harms were demonstrated for preterm infants receiving LCPUFA-supplemented formula.
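The pooled mean differences (MD) and I² values reported above come from inverse-variance meta-analysis. A minimal sketch of one common approach, fixed-effect pooling with Cochran's Q and the I² heterogeneity statistic, using hypothetical study-level inputs rather than the trial data:

```python
import math

def pool_fixed(estimates, ses):
    """Inverse-variance fixed-effect pooling of mean differences,
    plus Cochran's Q and the I^2 heterogeneity statistic."""
    w = [1 / se**2 for se in ses]                       # inverse-variance weights
    md = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    se_md = math.sqrt(1 / sum(w))
    q = sum(wi * (yi - md) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # % variance beyond chance
    return md, (md - 1.96 * se_md, md + 1.96 * se_md), i2

# Hypothetical per-study MDs and standard errors (illustrative only)
md, ci, i2 = pool_fixed([0.5, 0.1, 0.0], [0.1, 0.1, 0.1])
print(f"pooled MD = {md:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, I2 = {i2:.0f}%")
```

A high I², as in several of the analyses above, signals that the study results disagree more than sampling error alone would predict, which weakens confidence in the pooled estimate.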
Dysarthria is an acquired speech disorder following neurological injury that reduces intelligibility of speech due to weak, imprecise, slow and/or unco-ordinated muscle control. The impact of dysarthria goes beyond communication and affects psychosocial functioning. This is an update of a review previously published in 2005. The scope has been broadened to include additional interventions, and the title amended accordingly. To assess the effects of interventions to improve dysarthric speech following stroke and other non-progressive adult-acquired brain injury such as trauma, infection, tumour and surgery. We searched the Cochrane Stroke Group Trials Register (May 2016), CENTRAL (Cochrane Library 2016, Issue 4), MEDLINE, Embase, and CINAHL on 6 May 2016. We also searched Linguistics and Language Behavioral Abstracts (LLBA) (1976 to November 2016) and PsycINFO (1800 to September 2016). To identify further published, unpublished and ongoing trials, we searched major trials registers: WHO ICTRP, the ISRCTN registry, and ClinicalTrials.gov. We also handsearched the reference lists of relevant articles and contacted academic institutions and other researchers regarding other published, unpublished or ongoing trials. We did not impose any language restrictions. We selected randomised controlled trials (RCTs) comparing dysarthria interventions with 1) no intervention, 2) another intervention for dysarthria (this intervention may differ in methodology, timing of delivery, duration, frequency or theory), or 3) an attention control. Three review authors selected trials for inclusion, extracted data, and assessed risk of bias. We attempted to contact study authors for clarification and missing data as required. We calculated standardised mean difference (SMD) and 95% confidence interval (CI), using a random-effects model, and performed sensitivity analyses to assess the influence of methodological quality. 
We planned to conduct subgroup analyses for underlying clinical conditions. We included five small trials that randomised a total of 234 participants. Two studies were assessed as low risk of bias; none of the included studies were adequately powered. Two studies used an attention control and three studies compared to an alternative intervention, which in all cases was an intervention versus a usual care intervention. The searches we carried out did not find any trials comparing an intervention with no intervention. The searches did not find any trials of an intervention that compared variations in timing, dose, or intensity of treatment using the same intervention. Four studies included only people with stroke; one included mostly people with stroke, but also those with brain injury. Three studies delivered interventions in the first few months after stroke; two recruited people with chronic dysarthria. Three studies evaluated behavioural interventions, one investigated acupuncture and another transcranial magnetic stimulation. One study included people with dysarthria within a broader trial of people with impaired communication. Our primary analysis of a persisting (three to nine months post-intervention) effect at the activity level of measurement found no evidence in favour of dysarthria intervention compared with any control (SMD 0.18, 95% CI -0.18 to 0.55; 3 trials, 116 participants, GRADE: low quality, I² = 0%). Findings from sensitivity analysis of studies at low risk of bias were similar, with a slightly wider confidence interval and low heterogeneity (SMD 0.21, 95% CI -0.30 to 0.73, I² = 32%; 2 trials, 92 participants, GRADE: low quality). Subgroup analysis results for stroke were similar to the primary analysis because few non-stroke participants had been recruited to trials (SMD 0.16, 95% CI -0.23 to 0.54, I² = 0%; 3 trials, 106 participants, GRADE: low quality). Similar results emerged from most of the secondary analyses. 
There was no evidence of a persisting effect at the impairment (SMD 0.07, 95% CI -0.91 to 1.06, I² = 70%; 2 trials, 56 participants, GRADE: very low quality) or participation level (SMD -0.11, 95% CI -0.56 to 0.33, I² = 0%; 2 trials, 79 participants, GRADE: low quality), although there was substantial heterogeneity in the former. Analyses of immediate post-intervention outcomes provided no evidence of any short-term benefit on activity (SMD 0.29, 95% CI -0.07 to 0.66, I² = 0%; 3 trials, 117 participants, GRADE: very low quality); or participation (SMD -0.24, 95% CI -0.94 to 0.45; 1 study, 32 participants) levels of measurement. There was a statistically significant effect favouring intervention at the immediate, impairment level of measurement (SMD 0.47, 95% CI 0.02 to 0.92, P = 0.04, I² = 0%; 4 trials, 99 participants, GRADE: very low quality) but only one of these four trials had a low risk of bias. We found no definitive, adequately powered RCTs of interventions for people with dysarthria. We found limited evidence to suggest there may be an immediate beneficial effect on impairment level measures; more, higher quality research is needed to confirm this finding. Although we evaluated five studies, the benefits and risks of interventions remain unknown and the emerging evidence justifies the need for adequately powered clinical trials into this condition. People with dysarthria after stroke or brain injury should continue to receive rehabilitation according to clinical guidelines.
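The standardised mean differences (SMD) above were pooled under a random-effects model. A minimal sketch of DerSimonian-Laird pooling, one standard random-effects method, using hypothetical trial-level SMDs rather than the dysarthria trial data:

```python
import math

def pool_dl(estimates, ses):
    """DerSimonian-Laird random-effects pooling of standardised mean
    differences (a sketch of one common method, not the review's code)."""
    w = [1 / se**2 for se in ses]
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_star = [1 / (se**2 + tau2) for se in ses]   # re-weight with tau^2 added
    smd = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se_smd = math.sqrt(1 / sum(w_star))
    return smd, (smd - 1.96 * se_smd, smd + 1.96 * se_smd)

# Hypothetical per-trial SMDs and standard errors (illustrative only)
smd, ci = pool_dl([0.6, -0.1, 0.2], [0.2, 0.2, 0.2])
print(f"pooled SMD = {smd:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```

Because the between-study variance tau² is added to every study's weight, heterogeneous trials produce a wider confidence interval than fixed-effect pooling of the same data would.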
A dacryocystorhinostomy (DCR) procedure aims to restore drainage of tears by bypassing a blockage in the nasolacrimal duct, through the creation of a bony ostium that allows communication between the lacrimal sac and the nasal cavity. It can be performed using endonasal or external approaches. The comparative success rates of these two approaches have not yet been established and this review aims to evaluate the relevant up-to-date research. The primary aim of this review is to compare the success rates of endonasal DCR with that of external DCR. The secondary aim is to compare the complication rates between the two procedures. We searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (2016, Issue 8), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to 22 August 2016), Embase (January 1980 to 22 August 2016), Latin American and Caribbean Health Sciences Literature Database (LILACS) (January 1982 to 22 August 2016), Web of Science Conference Proceedings Citation Index- Science (CPCI-S) (January 1990 to 22 August 2016), the ISRCTN registry (www.isrctn.com/editAdvancedSearch), ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 22 August 2016. We requested or examined relevant conference proceedings for appropriate trials. We included all randomised controlled trials (RCTs) comparing endonasal and external DCRs. Two review authors independently assessed studies for eligibility and extracted data on reported outcomes. We attempted to contact investigators to clarify the methodological quality of the studies. We graded the certainty of the evidence using GRADE. We included two trials in this review. 
One trial from Finland compared laser-assisted endonasal DCR with external DCR, and one trial from India compared mechanical endonasal DCR (using punch forceps) with external DCR. The trials were poorly reported and it was difficult to judge the extent to which bias had been avoided. Anatomic success was defined as the demonstration of a patent lacrimal passage on syringing, or endoscopic visualisation of fluorescein dye at the nasal opening of the anastomoses after a period of at least six months following surgery. Subjective success was defined as the resolution of symptoms of watering following surgery after a period of at least six months. Both included trials used anatomic patency demonstrated by irrigation as a measure of anatomic success. Different effects were seen in these two trials (I² = 76%). People receiving laser-assisted endonasal DCR were less likely to have a successful operation compared with external DCR (63% versus 91%; risk ratio (RR) 0.69, 95% confidence interval (CI) 0.52 to 0.92; 64 participants). There was little or no difference in success comparing mechanical endonasal DCR and external DCR (90% in both groups; RR 1.00, CI 0.81 to 1.23; 40 participants). We judged this evidence on success to be very low-certainty, downgrading for risk of bias, imprecision and inconsistency. The trial from Finland also assessed subjective improvement in symptoms following surgery. Resolution of symptoms of watering in outdoor conditions was reported by 84% of the participants in the external DCR group and 59% of those in the laser-assisted endonasal DCR group (RR 0.70, CI 0.51 to 0.97; 64 participants, low-certainty evidence). There were no cases of intraoperative bleeding in any participant in the trial that compared laser-assisted endonasal DCR to external DCR. 
This was in contrast to the trial comparing mechanical endonasal DCR to external DCR, in which 45% of participants in both groups experienced intraoperative bleeding (RR 1.00, 95% CI 0.50 to 1.98; 40 participants). We judged this evidence on intraoperative bleeding to be very low-certainty, downgrading for risk of bias, imprecision and inconsistency. There were only two cases of postoperative bleeding, both in the external DCR group (RR 0.33, 95% CI 0.04 to 3.10; participants = 104; studies = 2). There were only two cases of wound infection/gaping, again both in the external DCR group (RR 0.20, CI 0.01 to 3.92; participants = 40; studies = 1). We judged this evidence on complications to be very low-certainty, downgrading one level for risk of bias and two levels for imprecision due to the very low number of cases. There is uncertainty as to the relative effects of endonasal and external DCR. Differences in effect seen in the two trials included in this review may be due to variations in the endonasal technique, but may also be due to other differences between the trials. Future larger RCTs are required to further assess the success and complication rates of endonasal and external DCR. Different techniques of endonasal DCR should also be assessed, as the choice of endonasal technique can influence the outcome. Strict outcome criteria should be adopted to assess functional and anatomical outcomes with a minimal follow-up of six months.
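The risk ratios above can be reproduced approximately from the reported proportions. A minimal sketch, assuming 32 participants per arm in the Finnish trial (an assumption consistent with the reported 63% versus 91% success among 64 participants, not stated counts):

```python
import math

def risk_ratio_ci(e1, n1, e2, n2, z=1.96):
    """Risk ratio with a log-scale 95% CI from events/total in two arms."""
    rr = (e1 / n1) / (e2 / n2)
    se = math.sqrt(1 / e1 - 1 / n1 + 1 / e2 - 1 / n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Reconstructed counts: 20/32 successes (laser-assisted endonasal DCR)
# vs 29/32 (external DCR); these counts are an assumption, not reported data
rr, lo, hi = risk_ratio_ci(20, 32, 29, 32)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Under this reconstruction the computed interval matches the review's reported RR 0.69 (95% CI 0.52 to 0.92).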
In people with acute pancreatitis, it is unclear what the role should be for medical treatment as an addition to supportive care such as fluid and electrolyte balance and organ support in people with organ failure. To assess the effects of different pharmacological interventions in people with acute pancreatitis. We searched the Cochrane Central Register of Controlled Trials (CENTRAL, 2016, Issue 9), MEDLINE, Embase, Science Citation Index Expanded, and trial registers to October 2016 to identify randomised controlled trials (RCTs). We also searched the references of included trials to identify further trials. We considered only RCTs performed in people with acute pancreatitis, irrespective of aetiology, severity, presence of infection, language, blinding, or publication status for inclusion in the review. Two review authors independently identified trials and extracted data. We did not perform a network meta-analysis as planned because of the lack of information on potential effect modifiers and differences of type of participants included in the different comparisons, when information was available. We calculated the odds ratio (OR) with 95% confidence intervals (CIs) for the binary outcomes and rate ratios with 95% CIs for count outcomes using a fixed-effect model and random-effects model. We included 84 RCTs with 8234 participants in this review. Six trials (N = 658) did not report any of the outcomes of interest for this review. The remaining 78 trials excluded 210 participants after randomisation. Thus, a total of 7366 participants in 78 trials contributed to one or more outcomes for this review. 
The treatments assessed in these 78 trials included antibiotics, antioxidants, aprotinin, atropine, calcitonin, cimetidine, EDTA (ethylenediaminetetraacetic acid), gabexate, glucagon, iniprol, lexipafant, NSAIDs (non-steroidal anti-inflammatory drugs), octreotide, oxyphenonium, probiotics, activated protein C, somatostatin, somatostatin plus omeprazole, somatostatin plus ulinastatin, thymosin, ulinastatin, and inactive control. Apart from the comparison of antibiotics versus control, which included a large proportion of participants with necrotising pancreatitis, the remaining comparisons had only a small proportion of patients with this condition. Most trials included either only participants with severe acute pancreatitis or a mixture of participants with mild acute pancreatitis and severe acute pancreatitis (75 trials). Overall, the risk of bias was unclear or high in all but one of the trials. Seven trials were not funded or were funded by agencies without a vested interest in the results. Pharmaceutical companies partially or fully funded 21 trials. The source of funding was not available for the remaining trials. Since we considered short-term mortality as the most important outcome, we presented only these results in detail in the abstract. Sixty-seven studies including 6638 participants reported short-term mortality. There was no evidence of any differences in short-term mortality in any of the comparisons (very low-quality evidence). 
With regards to other primary outcomes, serious adverse events (number) were lower than control in participants taking lexipafant (rate ratio 0.67, 95% CI 0.46 to 0.96; N = 290; 1 study; very low-quality evidence), octreotide (rate ratio 0.74, 95% CI 0.60 to 0.89; N = 770; 5 studies; very low-quality evidence), somatostatin plus omeprazole (rate ratio 0.36, 95% CI 0.19 to 0.70; N = 140; 1 study; low-quality evidence), and somatostatin plus ulinastatin (rate ratio 0.30, 95% CI 0.15 to 0.60; N = 122; 1 study; low-quality evidence). The proportion of people with organ failure was lower in octreotide than control (OR 0.51, 95% CI 0.27 to 0.97; N = 430; 3 studies; very low-quality evidence). The proportion of people with sepsis was lower in lexipafant than control (OR 0.26, 95% CI 0.08 to 0.83; N = 290; 1 study; very low-quality evidence). There was no evidence of differences in any of the remaining comparisons in these outcomes or for any of the remaining primary outcomes (the proportion of participants experiencing at least one serious adverse event and the occurrence of infected pancreatic necrosis). None of the trials reported health-related quality of life. Very low-quality evidence suggests that none of the pharmacological treatments studied decrease short-term mortality in people with acute pancreatitis. However, the confidence intervals were wide and consistent with an increase or decrease in short-term mortality due to the interventions. We did not find consistent clinical benefits with any intervention. Because of the limitations in the prognostic scoring systems and because damage to organs may occur in acute pancreatitis before they are clinically manifest, future trials should consider including pancreatitis of all severity but power the study to measure the differences in the subgroup of people with severe acute pancreatitis. It may be difficult to power the studies based on mortality. 
Future trials in participants with acute pancreatitis should consider other outcomes such as complications or health-related quality of life as primary outcomes. Such trials should include health-related quality of life, costs, and return to work as outcomes and should follow patients for at least three months (preferably for at least one year).
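The review above compared count outcomes, such as the number of serious adverse events, using rate ratios rather than risk ratios, since one participant can experience several events. A minimal Poisson rate-ratio sketch with hypothetical counts (illustrative only, not trial data):

```python
import math

def rate_ratio_ci(events1, denom1, events2, denom2, z=1.96):
    """Poisson rate ratio with a log-scale 95% CI for count outcomes,
    where each arm contributes total events over a denominator
    (participants or person-time)."""
    rr = (events1 / denom1) / (events2 / denom2)
    se = math.sqrt(1 / events1 + 1 / events2)  # SE of log rate ratio
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical: 30 events among 145 participants vs 45 events among 145
rr, lo, hi = rate_ratio_ci(30, 145, 45, 145)
print(f"rate ratio = {rr:.3f} (95% CI {lo:.2f} to {hi:.2f})")
```

With equal denominators the rate ratio reduces to the ratio of event counts; the CI width is driven by the total number of events, which is why comparisons with few events remain imprecise.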
This clinical practice guideline examines the potential benefits of opportunistic salpingectomy for preventing the development of high-grade serous cancer (HGSC) of the ovary, fallopian tube, and peritoneum, in light of current evidence suggesting that this type of cancer arises in the fallopian tube. Intended users: Gynaecologists, obstetricians, family physicians, registered nurses, nurse practitioners, residents, and health care providers. Target population: Adult women (18 years and older). OPTIONS: Women considering hysterectomy who wish to retain their ovaries generally also retain their fallopian tubes. In addition, women undergoing permanent sterilization surgery usually undergo tubal ligation by a variety of methods rather than complete surgical removal of the tubes. OUTCOMES: The sections "Evidence supporting the hypothesis that HGSCs arise in the fallopian tube" and "Recent articles on the impact and safety of opportunistic salpingectomy" are based on relevant English-language studies identified in PubMed, Medline, and the Cochrane Database of Systematic Reviews using the following terms, alone or in combination: high grade serous cancers ovary, fallopian tube, peritoneum, opportunistic salpingectomy, epithelial ovarian cancers, origin, tubal carcinoma in situ, BRCA mutation, prophylactic salpingectomy, inflammation, clear cell and endometrioid. The initial search was conducted in March 2015, and a final search was performed in March 2016. In order of priority, relevant evidence was drawn from meta-analyses, literature reviews, guidelines, randomized clinical trials, prospective cohort studies, observational studies, non-systematic reviews, case series, and reports. 
Au total, 458 études ont été repérées, et 56 ont été retenues pour la présente directive. Pour la section « Autres facteurs influant sur le risque de développer un cancer de ″l'ovaire″ », une recherche générale a été effectuée dans Medline à partir des termes ovarian neoplasm et prevention. Ont été inclus dans cette recherche des articles rédigés entre décembre 2005 et mars 2016. Les méta-analyses ont été privilégiées lorsque possible. Des recherches supplémentaires ont également été menées pour chaque sous-descripteurs (p. ex., ovarian neoplasm et tubal ligation). D'autres articles pertinents ont été ciblés au moyen d'une vérification des références des revues de la littérature retenues. Les termes ovarian neoplasm et prevention ont permis de repérer 10 méta-analyses; les termes ovarian neoplasm et tubal ligation, 4 méta-analyses. MéTHODES DE VALIDATION: Le contenu et les recommandations ont été rédigés et acceptés par les auteurs principaux. La direction et le conseil de la Société de gynéco-oncologie du Canada ont examiné le contenu et soumis des commentaires, puis le Conseil d'administration de la SOGC a approuvé la version finale avant publication. La qualité des données probantes a été évaluée à partir des critères de l'approche GRADE (Grading of Recommendations Assessment, Development and Evaluation) (tableau 1). L'interprétation des recommandations solides et conditionnelles est décrite dans le tableau 2. Le résumé des conclusions peut être fourni sur demande. 
AVANTAGES, INCONVéNIENTS ET COûTS: L'ajout d'une salpingectomie opportuniste à une hystérectomie ou à une procédure de stérilisation permanente prévue n'a pas entraîné une augmentation des taux de réadmission à l'hôpital (RC : 0,91; IC à 95 % : 0,75-1, 10 et RC : 0,8; IC à 95 % : 0,56-1,21, respectivement) ou de transfusion sanguine (RC : 0,86; IC à 95 % : 0,67-1,10 et RC : 0,75; IC à 95 % : 0,32-1,73, respectivement), mais il a entraîné une hausse de la durée des opérations (de 16 minutes et de 10 minutes, respectivement) selon une étude rétrospective portant sur 43 931 femmes. Le risque de subir des interventions répétées pour une pathologie tubaire chez les femmes ayant conservé leurs trompes de Fallope après une hystérectomie était au moins deux fois plus élevé (RC : 2,13; IC à 95 % : 1,88-2,42, selon une étude fondée sur une population de 170 000 femmes). Selon des experts, si les gynécologues généralistes envisageaient systématiquement de retirer les trompes de Fallope lors d'une hystérectomie ou d'une procédure de stérilisation et d'aiguiller toutes les patientes aux prises avec un CSGE vers une consultation en oncologie génétique et un dépistage génétique, le taux de CSGE pourrait diminuer de 40 % au cours des 20 prochaines années. MISE à JOUR DE DIRECTIVES CLINIQUES: Une revue des données probantes sera menée cinq ans après la publication de la présente directive clinique afin de déterminer si une mise à jour complète ou partielle s'impose. Cependant, si de nouvelles données probantes importantes sont publiées avant la fin du cycle de cinq ans, le processus pourrait être accéléré afin que certaines recommandations soient mises à jour rapidement. La présente directive clinique a été élaborée à l'aide de ressources financées par la Société de gynéco-oncologie du Canada et la SOGC. DéCLARATIONS SOMMAIRES: RECOMMANDATIONS.
Objective: To investigate the morphological and pathological changes of the larynx after severe laryngeal burn in dogs and their relationship with laryngostenosis. Methods: Eighteen healthy, male beagle dogs were assigned into control group, immediately after injury group, and 2, 4, 6, and 8 weeks after injury groups according to the random number table, with 3 dogs in each group. Dogs of injury group inhaled saturated steam through mouth for 5 seconds to reproduce severe laryngeal burn. Tracheotomy and intubation were performed immediately after injury, and 400 000 U/d penicillin was intravenously infused for 1 week. The feeding, activity, and vocalization of dogs in each group after injury were observed until they were sacrificed. Immediately after injury and 2, 4, 6, and 8 weeks after injury, the laryngeal morphology of the dogs in corresponding time point groups was observed by endoscope. After the observation, the dogs in each injury group were sacrificed, and the laryngeal tissue was taken. The epiglottis, glottis, and cricoid cartilage were collected to make full-thickness tissue slices, respectively, and their pathological changes were observed with hematoxylin and eosin staining. The dogs of control group were not specially treated, and their life activities and laryngeal morphological and pathological changes were observed. Results: (1) The dogs of control group had normal feeding, activities, and vocalization. All the dogs in injury group survived until they were sacrificed, and their feeding, activities, and vocalization were obviously reduced after injury compared with those of control group. The dogs of 2, 4, 6 and 8 weeks after injury groups ate and moved normally 2 weeks after injury but vocalized abnormally in frequency and volume compared with those of control group, which lasted until they were sacrificed. (2) The dog's laryngeal mucosa in control group was complete and pink, without obvious exudation.
The laryngeal mucosa of the dog in immediately after injury group was pale and edematous, with obvious exudation, local ulceration, necrosis, and exfoliation, and dilated microvessels on the surface. The laryngeal mucosa of the dogs in 2 weeks after injury group was pale, edematous, and oozed less than that of immediately after injury group, and the glottis was blocked by an obviously extruding mass. The paleness and edema of laryngeal mucosa were significantly reduced in the dogs of 4 weeks after injury group compared with those of 2 weeks after injury group, without dilated microvessel, and the glottic extruding mass was obviously smaller than that of 2 weeks after injury group. The sizes of glottic mass were similar between the dogs of 6 and 8 weeks after injury groups, which were obviously smaller than that in 4 weeks after injury group. (3) In the dogs of control group, the epithelial cells of epiglottis, glottis, and cricoid cartilage were normal in morphology, the proper glands were visible in the intrinsic layer, and the muscle fibers and the chondrocytes were normal in morphology. In the dogs of immediately after injury group, large sheets of epiglottis epidermis exfoliated, the epithelial cells were swollen and necrotic, the intrinsic glands were atrophic and necrotic, and the chondrocytes were degenerated and necrotic. The epidermis of the glottis partially exfoliated, the epithelial cells were swollen and necrotic, the intrinsic glands were atrophic and necrotic, the muscle fibers were partially atrophic and fractured, and the vacuolar chondrocytes were visible. The cricoid cartilage epidermis was ablated, the epithelial cells were swollen, the intrinsic layer and submucosal layer were slightly edematous, and the morphological structure of glands, chondrocytes, and muscle fibers were normal. 
In the dogs of 2 weeks after injury group, the epiglottis epidermis was completely restored, a small amount of glands in the intrinsic layer were repaired, and obsolete necrotic chondrocytes and new chondrocytes could be seen. A large number of fibroblasts, new capillaries, and inflammatory cell infiltration were observed in the epidermis of glottis, and intrinsic layer glands were repaired. The cricoid cartilage epidermis was repaired intactly, and there was no edema in the intrinsic layer. In the dogs of 4 weeks after injury group, the epiglottis intrinsic layer glands were further repaired compared with those of 2 weeks after injury group, and new chondrocytes were seen in the submucosa of the glottis. The condition of cricoid cartilage was consistent with that of control group. The dogs' epiglottis, glottis, and cricoid cartilage were similar between the 6 and 8 weeks after injury groups, and no significant change was observed compared with those of 4 weeks after injury group. Conclusions: The morphological changes of larynx after severe laryngeal burn in dogs include mucosa detachment and necrosis, and mass blocking glottis. Pathological changes include epidermis shedding and necrosis, gland atrophy and necrosis, vascular congestion and embolism, chondrocyte degeneration, necrosis and proliferation, and even local granulation tissue formation and cartilaginous metaplasia. These results may be the cause of laryngostenosis after laryngeal burn.
What is the diagnostic potential of next generation sequencing (NGS) based on a 'mouse azoospermia' gene panel in human non-obstructive azoospermia (NOA)? The diagnostic performance of sequencing a gene panel based on genes associated with mouse azoospermia was relatively successful in idiopathic NOA patients and allowed the discovery of two novel genes involved in NOA due to meiotic arrest. NOA is a largely heterogeneous clinical entity, which includes different histological pictures. In a large proportion of NOA, the aetiology remains unknown (idiopathic NOA), and as yet unknown genetic factors are likely to be involved. The mouse is the most broadly used mammalian model for studying human disease because of its usefulness for genetic manipulation and its genetic and physiological similarities to man. Mouse azoospermia models are available in the Mouse Genome Informatics database (MGI: http://www.informatics.jax.org/). The first step was the design of a 'mouse azoospermia' gene panel through consultation of MGI. The second step was NGS analysis of 175 genes in a group of highly selected NOA patients (n = 33). The third step was characterization of the discovered gene defects in human testis tissue, through meiotic studies using surplus testicular biopsy material from the carriers of the RNF212 and STAG3 pathogenic variants. The final step was RNF212 and STAG3 expression analysis in a collection of testis biopsies. From a total of 1300 infertile patients, 33 idiopathic NOA patients were analysed in this study, including 31 unrelated men and 2 brothers from a consanguineous family. The testis histology of the 31 unrelated NOA patients was as follows: 20 Sertoli cell-only syndrome (SCOS), 11 spermatogenic arrest (6 spermatogonial arrest and 5 spermatocytic arrest). The two brothers were affected by spermatocytic arrest. DNA extracted from blood was used for NGS on the Illumina NextSeq500 platform.
Generated sequence data was filtered for rare and potentially pathogenic variants. Functional studies in surplus testicular tissue from the carriers included the investigation of meiotic entry, XY body formation and metaphases by performing fluorescent immunohistochemical staining and immunocytochemistry. mRNA expression analysis through RT-qPCR of RNF212 and STAG3 was carried out in a collection of testis biopsies with different histology. Our approach was relatively successful, leading to the genetic diagnosis of one sporadic NOA patient and two NOA brothers. This relatively high diagnostic performance is likely to be related to the stringent patient selection criteria i.e. all known causes of azoospermia were excluded and to the relatively high number of patients with rare testis histology (spermatocytic arrest). All three mutation carriers presented meiotic arrest, leading to the genetic diagnosis of three out of seven cases with this specific testicular phenotype. For the first time, we report biallelic variants in STAG3, in one sporadic patient, and a homozygous RNF212 variant, in the two brothers, as the genetic cause of NOA. Meiotic studies allowed the detection of the functional consequences of the mutations and provided information on the role of STAG3 and RNF212 in human male meiosis. All genes, with the exception of 5 out of 175, included in the panel cause azoospermia in mice only in the homozygous or hemizygous state. Consequently, apart from the five known dominant genes, heterozygous variants (except compound heterozygosity) in the remaining genes were not taken into consideration as causes of NOA. We identified the genetic cause in approximately half of the patients with spermatocytic arrest. The low number of analysed patients can be considered as a limitation, but it is a very rare testis phenotype. Due to the low frequency of this specific phenotype among infertile men, our finding may be considered of low clinical impact. 
However, at an individual level, it does have relevance for prognostic purposes prior to testicular sperm extraction. Our study represents an additional step towards elucidating the genetic bases of early spermatogenic failure, since we discovered two new genes involved in human male meiotic arrest. We propose the inclusion of RNF212 and STAG3 in a future male infertility diagnostic gene panel. Based on the associated testis phenotype, the identification of pathogenic mutations in these genes also confers a negative predictive value for testicular sperm retrieval. Our meiotic studies provide novel insights into the role of these proteins in human male meiosis. Mutations in STAG3 were first described as a cause of female infertility and ovarian cancer, and Rnf212 knock out in mice leads to male and female infertility. Hence, our results stimulate further research on shared genetic factors causing infertility in both sexes and indicate that genetic counselling should involve not only male but also female relatives of NOA patients. This work was funded by the Spanish Ministry of Health Instituto Carlos III-FIS (grant number: FIS/FEDER-PI14/01250; PI17/01822) awarded to CK and AR-E, and by the European Commission, Reproductive Biology Early Research Training (REPROTRAIN, EU-FP7-PEOPLE-2011-ITN289880), awarded to CK, WB, and AE-M. The authors have no conflict of interest.
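The filtering rationale described above (rare, potentially pathogenic variants, with heterozygous calls discounted except for the five known dominant genes) can be sketched in a few lines. The record fields, frequency threshold, and the dominant-gene placeholder below are illustrative assumptions, not the study's actual pipeline:

```python
# Hypothetical variant records; field names and thresholds are
# illustrative, not the study's actual pipeline output.
variants = [
    {"gene": "STAG3", "af": 0.0001, "zygosity": "hom", "impact": "high"},
    {"gene": "RNF212", "af": 0.00005, "zygosity": "hom", "impact": "high"},
    {"gene": "GENE_X", "af": 0.02, "zygosity": "het", "impact": "high"},
    {"gene": "GENE_Y", "af": 0.0002, "zygosity": "het", "impact": "moderate"},
]

# Placeholder set: the study does not name its five dominant genes here.
DOMINANT_GENES = {"DOMINANT_GENE_1"}

def keep(v, max_af=0.001):
    """Rare + potentially pathogenic; heterozygous variants count
    only for the known dominant genes (per the study's rationale)."""
    if v["af"] > max_af or v["impact"] not in {"high", "moderate"}:
        return False
    return v["zygosity"] == "hom" or v["gene"] in DOMINANT_GENES

hits = [v["gene"] for v in variants if keep(v)]
print(hits)  # ['STAG3', 'RNF212']
```

In this toy example, only the homozygous rare variants survive, mirroring how the biallelic STAG3 and homozygous RNF212 findings would pass such a filter.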
The hypothesis of the study is that treatment with hydroxychloroquine sulphate in hospitalised patients with coronavirus disease 2019 (Covid-19) is safe and will accelerate the virological clearance rate for patients with moderately severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) when compared to standard care. Furthermore, we hypothesize that early treatment with hydroxychloroquine sulphate is associated with more rapid resolve of clinical symptoms as assessed by the National Early Warning Score 2 (NEWS2), decreased admission rate to intensive care units and mortality, and improvement in protein biomarker profiles (C-reactive protein, markers of renal and hepatic injury, and established cardiac biomarkers like cardiac troponin and B-type natriuretic peptide). The study is a two-arm, open label, pragmatic randomised controlled group sequential adaptive trial designed to assess the effect on viral loads and clinical outcome of hydroxychloroquine sulphate therapy in addition to standard care compared to standard care alone in patients with established Covid-19. By utilizing resources already paid for by the hospitals (physicians and nurses in daily clinical practice), this pragmatic trial can include a larger number of patients over a short period of time and at a lower cost than studies utilizing traditional randomized controlled trial designs with an external study organization. The pragmatic approach will enable swift initiation of randomisation and allocation to treatment. Patients will be recruited from all inpatients at Akershus University Hospital, Lørenskog, Norway. Electronic real-time surveillance of laboratory reports from the Department of Microbiology will be examined regularly for SARS-CoV-2 positive subjects. 
All of the following conditions must apply to the prospective patient at screening prior to inclusion: (1) Hospitalisation; (2) Adults 18 years or older; (3) Moderately severe Covid-19 disease (NEWS2 of 6 or less); (4) SARS-CoV-2 positive nasopharyngeal swab; (5) Expected time of hospitalisation > 48 hours; and (6) Signed informed consent must be obtained and documented according to Good Clinical Practice guidelines of the International Conference on Harmonization, and national/local regulations. Patients will be excluded from participation in the study if they meet any of the following criteria: (1) Requiring intensive care unit admission at screening; (2) History of psoriasis; (3) Known adverse reaction to hydroxychloroquine sulphate; (4) Pregnancy; or (5) Prolonged corrected QT interval (>450 ms). Clinical data, including standard hospital biochemistry, medical therapy, vital signs, NEWS2, and microbiology results (including blood culture results and reverse transcriptase polymerase chain reaction [RT-PCR] for other upper airway viruses), will be automatically extracted from the hospital electronic records and merged with the study specific database. Included patients will be randomised in a 1:1 ratio to (1) standard care with the addition of 400 mg hydroxychloroquine sulphate (Plaquenil™) twice daily for seven days or (2) standard care alone. The primary endpoint of the study is the rate of decline in SARS-CoV-2 viral load in oropharyngeal samples as assessed by RT-PCR in samples collected at baseline, 48 and 96 hours after randomization and administration of drug for the intervention arm.
Secondary endpoints include change in NEWS2 at 96 hours after randomisation, admission to intensive care unit, mortality (in-hospital, and at 30 and 90 days), duration of hospital admission, clinical status on a 7-point ordinal scale 14 days after randomization ([1] Death [2] Hospitalised, on invasive mechanical ventilation or extracorporeal membrane oxygenation [3] Hospitalised, on non-invasive ventilation or high flow oxygen devices [4] Hospitalised, requiring supplemental oxygen [5] Hospitalised, not requiring supplemental oxygen [6] Not hospitalised, but unable to resume normal activities [7] Not hospitalised, with resumption of normal activities), and improvement in protein biomarker profiles (C-reactive protein, markers of renal and hepatic injury, and established cardiac biomarkers like cardiac troponin and B-type natriuretic peptide) at 96 hours after randomization. Eligible patients will be allocated in a 1:1 ratio, using a computer randomisation procedure. The allocation sequence has been prepared by an independent statistician. Open label randomised controlled pragmatic trial without blinding, no active or placebo control. The virologist assessing viral load in the oropharyngeal samples and the statistician responsible for analysis of the data will be blinded to the treatment allocation for the statistical analyses. This is a group sequential adaptive trial where analyses are planned after 51, 101, 151 and 202 completed patients, with a maximum sample size of 202 patients (101 patients allocated to intervention and standard care and 101 patients allocated to standard care alone). Protocol version 1.3 (March 26, 2020). Recruitment of first patient on March 26, 2020, and 51 patients were included as of April 28, 2020. Study recruitment is anticipated to be completed by July 2020. ClinicalTrials.gov number, NCT04316377. Trial registered March 20, 2020.
The full protocol is attached as an additional file, accessible from the Trials website (Additional file 1). In the interest in expediting dissemination of this material, the familiar formatting has been eliminated; this Letter serves as a summary of the key elements of the full protocol.
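The protocol states only that patients are allocated 1:1 by a computer procedure prepared by an independent statistician. One common way to implement such an allocation is permuted-block randomisation; the block size and seed below are purely illustrative assumptions, not details from the protocol:

```python
import random

def block_randomise(n_patients, seed=2020):
    """1:1 allocation using permuted blocks of size 2 (a sketch;
    the trial's actual sequence was prepared by an independent
    statistician and its method is not specified here)."""
    rng = random.Random(seed)  # fixed seed so the sequence is reproducible
    allocations = []
    while len(allocations) < n_patients:
        block = ["intervention", "standard"]
        rng.shuffle(block)  # random order within each block
        allocations.extend(block)
    return allocations[:n_patients]

alloc = block_randomise(202)  # maximum sample size from the protocol
print(alloc.count("intervention"), alloc.count("standard"))  # 101 101
```

Blocks of size 2 guarantee the 101/101 split at the maximum sample size; larger blocks would make the sequence harder to predict at the cost of small transient imbalances.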
High-flow nasal cannulae (HFNC) deliver high flows of blended humidified air and oxygen via wide-bore nasal cannulae and may be useful in providing respiratory support for adults experiencing acute respiratory failure, or at risk of acute respiratory failure, in the intensive care unit (ICU). This is an update of an earlier version of the review. To assess the effectiveness of HFNC compared to standard oxygen therapy, or non-invasive ventilation (NIV) or non-invasive positive pressure ventilation (NIPPV), for respiratory support in adults in the ICU. We searched CENTRAL, MEDLINE, Embase, CINAHL, Web of Science, and the Cochrane COVID-19 Register (17 April 2020), clinical trial registers (6 April 2020) and conducted forward and backward citation searches. We included randomized controlled studies (RCTs) with a parallel-group or cross-over design comparing HFNC use versus other types of non-invasive respiratory support (standard oxygen therapy via nasal cannulae or mask; or NIV or NIPPV which included continuous positive airway pressure and bilevel positive airway pressure) in adults admitted to the ICU. We used standard methodological procedures as expected by Cochrane. We included 31 studies (22 parallel-group and nine cross-over designs) with 5136 participants; this update included 20 new studies. Twenty-one studies compared HFNC with standard oxygen therapy, and 13 compared HFNC with NIV or NIPPV; three studies included both comparisons. We found 51 ongoing studies (estimated 12,807 participants), and 19 studies awaiting classification for which we could not ascertain study eligibility information. In 18 studies, treatment was initiated after extubation. In the remaining studies, participants were not previously mechanically ventilated. 
HFNC versus standard oxygen therapy: HFNC may lead to less treatment failure as indicated by escalation to alternative types of oxygen therapy (risk ratio (RR) 0.62, 95% confidence interval (CI) 0.45 to 0.86; 15 studies, 3044 participants; low-certainty evidence). HFNC probably makes little or no difference in mortality when compared with standard oxygen therapy (RR 0.96, 95% CI 0.82 to 1.11; 11 studies, 2673 participants; moderate-certainty evidence). HFNC probably results in little or no difference to cases of pneumonia (RR 0.72, 95% CI 0.48 to 1.09; 4 studies, 1057 participants; moderate-certainty evidence), and we were uncertain of its effect on nasal mucosa or skin trauma (RR 3.66, 95% CI 0.43 to 31.48; 2 studies, 617 participants; very low-certainty evidence). We found low-certainty evidence that HFNC may make little or no difference to the length of ICU stay according to the type of respiratory support used (MD 0.12 days, 95% CI -0.03 to 0.27; 7 studies, 1014 participants). We are uncertain whether HFNC made any difference to the ratio of partial pressure of arterial oxygen to the fraction of inspired oxygen (PaO2/FiO2) within 24 hours of treatment (MD 10.34 mmHg, 95% CI -17.31 to 38; 5 studies, 600 participants; very low-certainty evidence). We are uncertain whether HFNC made any difference to short-term comfort (MD 0.31, 95% CI -0.60 to 1.22; 4 studies, 662 participants, very low-certainty evidence), or to long-term comfort (MD 0.59, 95% CI -2.29 to 3.47; 2 studies, 445 participants, very low-certainty evidence). HFNC versus NIV or NIPPV: We found no evidence of a difference between groups in treatment failure when HFNC were used post-extubation or without prior use of mechanical ventilation (RR 0.98, 95% CI 0.78 to 1.22; 5 studies, 1758 participants; low-certainty evidence), or in-hospital mortality (RR 0.92, 95% CI 0.64 to 1.31; 5 studies, 1758 participants; low-certainty evidence).
We are very uncertain about the effect of using HFNC on incidence of pneumonia (RR 0.51, 95% CI 0.17 to 1.52; 3 studies, 1750 participants; very low-certainty evidence), and HFNC may result in little or no difference to barotrauma (RR 1.15, 95% CI 0.42 to 3.14; 1 study, 830 participants; low-certainty evidence). HFNC may make little or no difference to the length of ICU stay (MD -0.72 days, 95% CI -2.85 to 1.42; 2 studies, 246 participants; low-certainty evidence). The ratio of PaO2/FiO2 may be lower up to 24 hours with HFNC use (MD -58.10 mmHg, 95% CI -71.68 to -44.51; 3 studies, 1086 participants; low-certainty evidence). We are uncertain whether HFNC improved short-term comfort when measured using comfort scores (MD 1.33, 95% CI 0.74 to 1.92; 2 studies, 258 participants) and responses to questionnaires (RR 1.30, 95% CI 1.10 to 1.53; 1 study, 168 participants); evidence for short-term comfort was very low certainty. No studies reported on nasal mucosa or skin trauma. HFNC may lead to less treatment failure when compared to standard oxygen therapy, but probably makes little or no difference to treatment failure when compared to NIV or NIPPV. For most other review outcomes, we found no evidence of a difference in effect. However, the evidence was often of low or very low certainty. We found a large number of ongoing studies; including these in future updates could increase the certainty or may alter the direction of these effects.
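Risk ratios like those reported throughout this review are computed from 2x2 event counts. A minimal sketch using the standard Katz log-based 95% CI follows; the counts are made up for illustration and do not correspond to any study above:

```python
import math

def risk_ratio(events_a, total_a, events_b, total_b):
    """Risk ratio with Katz log-based 95% CI for a 2x2 table."""
    p_a = events_a / total_a
    p_b = events_b / total_b
    rr = p_a / p_b
    # Standard error of log(RR) under the Katz approximation
    se = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts for illustration only
rr, lo, hi = risk_ratio(120, 400, 90, 400)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A pooled estimate across studies, as in this review, would additionally weight each study's log(RR) (e.g. inverse-variance weighting under a random-effects model) rather than combine raw counts.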
Levetiracetam (Keppra®, UCB Pharma Ltd, Slough, UK) and zonisamide (Zonegran®, Eisai Co. Ltd, Tokyo, Japan) are licensed as monotherapy for focal epilepsy, and levetiracetam is increasingly used as a first-line treatment for generalised epilepsy, particularly for women of childbearing age. However, there is uncertainty as to whether or not they should be recommended as first-line treatments owing to a lack of evidence of clinical effectiveness and cost-effectiveness. To compare the clinical effectiveness and cost-effectiveness of lamotrigine (Lamictal®, GlaxoSmithKline plc, Brentford, UK) (standard treatment) with levetiracetam and zonisamide (new treatments) for focal epilepsy, and to compare valproate (Epilim®, Sanofi SA, Paris, France) (standard treatment) with levetiracetam (new treatment) for generalised and unclassified epilepsy. Two pragmatic randomised unblinded non-inferiority trials run in parallel. Outpatient services in NHS hospitals throughout the UK. Those aged ≥ 5 years with two or more spontaneous seizures that require anti-seizure medication. Participants with focal epilepsy were randomised to receive lamotrigine, levetiracetam or zonisamide. Participants with generalised or unclassifiable epilepsy were randomised to receive valproate or levetiracetam. The randomisation method was minimisation using a web-based program. The primary outcome was time to 12-month remission from seizures. For this outcome, and all other time-to-event outcomes, we report hazard ratios for the standard treatment compared with the new treatment. For the focal epilepsy trial, the non-inferiority limit (lamotrigine vs. new treatments) was 1.329. For the generalised and unclassified epilepsy trial, the non-inferiority limit (valproate vs. new treatments) was 1.314. Secondary outcomes included time to treatment failure, time to first seizure, time to 24-month remission, adverse reactions, quality of life and cost-effectiveness.
Focal epilepsy. A total of 990 participants were recruited, of whom 330 were randomised to receive lamotrigine, 332 were randomised to receive levetiracetam and 328 were randomised to receive zonisamide. Levetiracetam did not meet the criteria for non-inferiority (hazard ratio 1.329) in the primary intention-to-treat analysis of time to 12-month remission (hazard ratio vs. lamotrigine 1.18, 97.5% confidence interval 0.95 to 1.47), but zonisamide did meet the criteria (hazard ratio vs. lamotrigine 1.03, 97.5% confidence interval 0.83 to 1.28). In the per-protocol analysis, lamotrigine was superior to both levetiracetam (hazard ratio 1.32, 95% confidence interval 1.05 to 1.66) and zonisamide (hazard ratio 1.37, 95% confidence interval 1.08 to 1.73). For time to treatment failure, lamotrigine was superior to levetiracetam (hazard ratio 0.60, 95% confidence interval 0.46 to 0.77) and zonisamide (hazard ratio 0.46, 95% confidence interval 0.36 to 0.60). Adverse reactions were reported by 33% of participants starting lamotrigine, 44% starting levetiracetam and 45% starting zonisamide. In the economic analysis, both levetiracetam and zonisamide were more costly and less effective than lamotrigine and were therefore dominated. Generalised and unclassifiable epilepsy. Of 520 patients recruited, 260 were randomised to receive valproate and 260 were randomised to receive levetiracetam. A total of 397 patients had generalised epilepsy and 123 had unclassified epilepsy. Levetiracetam did not meet the criteria for non-inferiority in the primary intention-to-treat analysis of time to 12-month remission (hazard ratio 1.19, 95% confidence interval 0.96 to 1.47; non-inferiority margin 1.314). In the per-protocol analysis of time to 12-month remission, valproate was superior to levetiracetam (hazard ratio 1.68, 95% confidence interval 1.30 to 2.15).
Valproate was superior to levetiracetam for time to treatment failure (hazard ratio 0.65, 95% confidence interval 0.50 to 0.83). Adverse reactions were reported by 37.4% of participants receiving valproate and 41.5% of those receiving levetiracetam. Levetiracetam was both more costly (incremental cost of £104, 95% central range -£587 to £1234) and less effective (incremental quality-adjusted life-year of -0.035, 95% central range -0.137 to 0.032) than valproate, and was therefore dominated. At a cost-effectiveness threshold of £20,000 per quality-adjusted life-year, levetiracetam was associated with a probability of 0.17 of being cost-effective. The SANAD II trial was unblinded, which could have biased results by influencing decisions about dosing, treatment failure and the attribution of adverse reactions. SANAD II data could now be included in an individual participant meta-analysis of similar trials, and future similar trials are required to assess the clinical effectiveness and cost-effectiveness of other new treatments, including lacosamide and perampanel. Focal epilepsy - The SANAD II findings do not support the use of levetiracetam or zonisamide as first-line treatments in focal epilepsy. Generalised and unclassifiable epilepsy - The SANAD II findings do not support the use of levetiracetam as a first-line treatment for newly diagnosed generalised epilepsy. For women of childbearing potential, these results inform discussions about the benefit (lower teratogenicity) and harm (worse seizure outcomes and higher treatment failure rate) of levetiracetam compared with valproate. Current Controlled Trials ISRCTN30294119 and EudraCT 2012-001884-64. This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 25, No. 75. See the NIHR Journals Library website for further project information.
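The non-inferiority criterion used in both trials reduces to a single comparison: the new treatment is non-inferior when the upper confidence bound of the hazard ratio (standard vs. new) stays below the pre-specified margin. A sketch using the focal epilepsy trial's own reported values:

```python
def non_inferior(hr_upper_ci: float, margin: float) -> bool:
    """New treatment is non-inferior if the upper CI bound of the
    hazard ratio (standard vs. new) lies below the margin."""
    return hr_upper_ci < margin

# Focal epilepsy trial, intention-to-treat analysis, margin 1.329
print(non_inferior(1.47, 1.329))  # levetiracetam (CI 0.95 to 1.47): False
print(non_inferior(1.28, 1.329))  # zonisamide (CI 0.83 to 1.28): True
```

This matches the reported conclusions: levetiracetam's interval crosses the margin and fails the criterion, while zonisamide's interval stays entirely below it.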
Joint attention (JA) is an early manifestation of social cognition, commonly described as interactions in which an infant looks or gestures to an adult female to share attention about an object, within a positive emotional atmosphere. We label this description the JA phenotype. We argue that characterizing JA in this way reflects unexamined assumptions which are, in part, due to past developmental researchers' primary focus on western, middle-class infants and families. We describe a range of cultural variations in caregiving practices, socialization goals, and parenting ethnotheories as an essential initial step in viewing joint attention within inclusive and contextualized perspectives. We begin the process of conducting a decolonized study of JA by considering the core construct of joint attention (i.e., triadic connectedness) and adopting culturally inclusive definitions (labeled joint engagement [JE]). Our JE definitions allow for attention and engagement to be expressed in visual and tactile modalities (e.g., for infants experiencing distal or proximal caregiving), with various social partners (e.g., peers, older siblings, mothers), with a range of shared topics (e.g., representing diverse socialization goals, and socio-ecologies with and without toys), and with a range of emotional tone (e.g., for infants living in cultures valuing calmness and low arousal, and those valuing exuberance). Our definition of JE includes initiations from either partner (to include priorities for adult-led or child-led interactions). Our next foundational step is making an ecological commitment to naturalistic observations (Dahl, 2017, Child Dev Perspect, 11(2), 79-84): We measure JE while infants interact within their own physical and social ecologies. This commitment allows us to describe JE as it occurs in everyday contexts, without constraints imposed by researchers. Next, we sample multiple groups of infants drawn from diverse socio-ecological settings. 
Moreover, we include diverse samples of chimpanzee infants to compare with diverse samples of human infants, to investigate the extent to which JE is unique to humans, and to document diversity both within and between species. We sampled human infants living in three diverse settings. U.K. infants (n = 8) were from western, middle-class families living near universities in the south of England. Nso infants (n = 12) were from communities of subsistence farmers in Cameroon, Africa. Aka infants (n = 10) were from foraging communities in the tropical rain forests of Central African Republic, Africa. We coded behavioral details of JE from videotaped observations (taken between 2004 and 2010). JE occurred in the majority of coded intervals (Mdn = 68%), supporting a conclusion that JE is normative for human infants. The JA phenotype, in contrast, was infrequent, and significantly more common in the U.K. (Mdn = 10%) than the other groups (Mdn < 3%). We found significant within-species diversity in JE phenotypes (i.e., configurations of predominant forms of JE characteristics). We conclude that triadic connectedness is very common in human infants, but there is significant contextualization of behavioral forms of JE. We also studied chimpanzee infants living in diverse socio-ecologies. The PRI/Zoo chimpanzee infants (n = 7) were from captive, stable groups of mixed ages and sexes, and included 4 infants from the Chester Zoo, U.K. and 3 from the Primate Research Institute, Kyoto University, Japan. The Gombe chimpanzee infants (n = 12) were living in a dynamically changing, wild community in the Gombe National Park, Tanzania, Africa. Additionally, we include two Home chimpanzee infants who were reared from birth by a female scientist, in the combined U.S., middle-class contexts of home and university cognition laboratory. JE was coded from videotaped observations (taken between 1993 and 2006). 
JE occurred during the majority of coded intervals (Mdn = 64%), consistent with the position that JE is normative for chimpanzee infants. The JA phenotype, in contrast, was rare, but more commonly observed in the two Home chimpanzee infants (in 8% and 2% of intervals) than in other chimpanzee groups (Mdns = 0%). We found within-species diversity in the configurations comprising the JE phenotypes. We conclude that triadic connectedness is very common in chimpanzee infants, but behavioral forms of joint engagement are contextualized. We compared JE across species, and found no species-uniqueness in behavioral forms, JE characteristics, or JE phenotypes. Both human and chimpanzee infants develop contextualized social cognition. Within-species diversity is embraced when triadic connectedness is described with culturally inclusive definitions. In contrast, restricting definitions to the JA phenotype privileges a behavioral form most valued in western, middle-class socio-ecologies, irrespective of whether the interactions involve human or chimpanzee infants. Our study presents a model for how to decolonize an important topic in developmental psychology. Decolonization is accomplished by defining the phenomenon inclusively, embracing diversity in sampling, challenging claims of human-uniqueness, and having an ecological commitment to observe infant social cognition as it occurs within everyday socio-ecological contexts. It is essential that evolutionary and developmental theories of social cognition are re-built on more inclusive and decolonized empirical foundations.
PURPOSE OF THE STUDY Unplanned revision spinal surgeries constitute a complication in the treatment algorithm for the patient, surgeon and the entire treatment team. Any complication leading to an unplanned revision surgery is therefore undesirable. The percentage of complications referred to in publications on this topic focusing on unplanned revision surgeries only varies from 0.7% to 29.8%, with obvious diversity of causes and significant risk factors. The purpose of the submitted paper is to carry out a prospective evaluation of the most serious complications requiring unplanned revision spinal surgeries in the course of 13 years at a single department performing a broad range of spinal surgeries, namely 1300 procedures annually on average. MATERIAL AND METHODS In the period 2006 - 2018, a total of 16872 patients underwent a surgery at our department. During this period, in 556 patients an unplanned revision spinal surgery was performed. In agreement with literature, the patients were categorised by cause for revision: 1/ impaired wound suprafascial (superficial) healing - superficial infection, 2/ impaired wound subfascial (deep) healing - deep infection, 3/ surgical wound hematoma, 4/ deterioration or occurrence of new neurological symptoms, 5/ cerebrospinal fluid leak (liquorrhoea) and 6/ others. The patients operated on for inflammatory diseases of the spine with subsequent infectious complications, primarily treated at another department, and the patients with open spinal injury were excluded from the study. According to these criteria, a cohort of 521 patients was followed up, namely 236 (45.3%) women and 285 (54.7%) men, aged 1 year to 86 years, with the mean age of 55.0 years (median 60 years). Demographic effects, tobacco smoking and comorbidities were followed up in the cohort, together with the effects of surgery, diagnosis, surgical approach and physician. 
All parameters were statistically evaluated at a p-value below 0.05, including comparison with the control group. RESULTS Of the total number of 16872 operated patients, a group of 521 (3.09%) patients undergoing a revision surgery for complications was analysed in detail. Impaired wound healing - infection (SSI) was found in 199 (1.18%) patients, of whom superficial infection in 124 cases (0.73%) and deep infection in 75 cases (0.44%). Hematoma in a surgical site was detected in 149 (0.88%) patients. In 63 (0.37%) cases, deterioration of the existing neurological finding or occurrence of a new neurological finding was observed, in 68 (0.40%) cases cerebrospinal fluid leak was reported and in 40 (0.24%) cases other complications were identified. As concerns the surgical assistant, the percentage of complications with a board-certified physician was 2.77% (1.14 - 3.29%), whereas with a medical resident it increased to 3.60% (0.00 - 9.38%) (p<0.05). The prevalence of smokers in the group with complications (N=521) was 34.7%. The control group (N=3650) included 30.1% of smokers (p<0.05). The mean age of patients in the group with complications (N=521) was higher, i.e. 55.0 years, with the median age of 60.0 years, than in the primary cohort (N=16872) with the mean age of 49.8 years and the median age of 52.0 years (p<0.05). The mean BMI in the group with complications (N=521) was 27.3, the median BMI was 26.9. In the control group (N=16872), the mean BMI was 27.11, the median BMI was 26.8. In this case significance was not confirmed (p>0.05). The complications prevailed strongly in the posterior surgical approach, namely in 483 patients (92.7%). As concerns the surgically treated segment, the lumbar spine dominated with 320 (61.4%) cases. Corticosteroid therapy was used twice as often in women, namely in 13.1% vs. 6.3%. 
The group of patients with complications (N=521) showed a much higher average length of hospital stay of 12.8 days compared to the average of 4.6 days (N=16872). DISCUSSION In our cohort, the complication rate was 3.09%, of which infections constituted 1.18%, which is in agreement with similarly focused papers. As regards the patient-related factors, in our study the results reported in the literature were confirmed with respect to age, smoking and comorbidities. Moreover, the posterior surgical procedure, lumbar spine surgery and presence of a medical resident are essential factors (p<0.05). No major age difference was observed between women and men (p>0.05). Obesity is one of the key risk factors, especially in infectious complications. In our cohort, a higher BMI did not increase the risk of complications in general (p>0.05). CONCLUSIONS In correlation with current literature, our cohort confirmed a significantly higher risk of complications leading to revision spinal surgery associated with age, smoking, a posterior surgical procedure in the thoracic or lumbar spine, and the presence of a medical resident as surgical assistant. The average length of hospital stay was demonstrably longer in complicated patients, almost tripling compared to the whole cohort. Contrary to the literature, the effect of obesity on the occurrence of complications was not confirmed. Key words: spinal surgery, complications, infection, reoperation, risk factor, hematoma, cerebrospinal fluid leak, screw malposition, smoking, obesity.
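The smoking comparison reported above (34.7% of 521 complicated patients vs. 30.1% of 3650 controls, p < 0.05) can be reproduced approximately with a standard two-proportion z-test. The abstract does not state which test the authors used, so the following is an illustrative sketch only, with counts reconstructed from the reported proportions:

```python
import math

def two_prop_z(p1: float, n1: int, p2: float, n2: int):
    """Two-sided two-proportion z-test from reported proportions and group sizes."""
    x1, x2 = round(p1 * n1), round(p2 * n2)          # reconstructed event counts
    p_pool = (x1 + x2) / (n1 + n2)                   # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value via the standard normal CDF built from math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Smokers: complications group vs. control group (values from the abstract)
z, p = two_prop_z(0.347, 521, 0.301, 3650)  # z ~ 2.13, p ~ 0.03 (< 0.05)
```

The result is consistent with the significance level the authors report, though their exact test statistic may differ.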
To examine the relationship of parental and family pain history to the pain experience of children with chronic rheumatic disease. The aims of the study were as follows: 1) to describe the pain history of parents and families of children with rheumatic disease, 2) to examine relationships between parental and family pain history and the pain report and physician-rated health status of children with chronic rheumatic disease, and 3) to determine whether child coping mediates the relationship between family pain history and the child's pain and physician-rated health status. Parents of 100 children were recruited from a pediatric rheumatology clinic during routine visits. Parents completed questionnaires assessing parental pain history and family characteristics. Children in the study completed a series of questionnaires to assess pain and pain coping strategies, including the Coping Strategies Questionnaire and parts of the Pediatric Pain Questionnaire. A pediatric rheumatologist provided a global assessment of disease severity on a 100-mm visual analog scale as an index of child health status. A high number of parents of children seen in a pediatric rheumatology clinic described a personal pain history. More than 90% of parents reported having at least 1 chronic pain condition, with an equal proportion reporting an episode of pain in the past month. The most commonly reported pain conditions were lower back pain, shoulder/neck pain, and migraine headache pain. On average, this group of parents reported a history of 3.5 chronic pain conditions (standard deviation: 2.3) and reported having sought treatment for 1.7 (standard deviation: 2.3) of these conditions. Additionally, 93% of all parents reported extended family members experiencing at least 1 chronic pain condition. 
Correlational analyses indicated that parents reporting higher levels of current pain and higher mean levels of pain during the past month were more likely to have children reporting higher levels of current pain (r = 0.23 and r = 0.27). In addition, parents who sought more treatment for their own pain were more likely to have children reporting higher levels of pain (r = 0.22) and presenting with poorer health status (r = 0.22). Similarly, parents reporting higher levels of pain-related interference with activity were more likely to have children reporting higher levels of current pain (r = 0.23). Correlational analyses also indicated that children whose extended families reported a history of multiple pain conditions were more likely to report higher levels of current pain (r = 0.24) and more pain locations (r = 0.23). Finally, a series of mediational statistical models confirmed that child use of the pain coping strategy, catastrophizing, partially accounted for the relationship between several parent and family pain history variables and the child's own current pain ratings and physician global assessment. Specifically, child catastrophizing mediated the relationships between the total number of treated pain conditions and children's current pain ratings and physician global assessment. In addition, child catastrophizing was shown to mediate the relationship between parental mean level of pain in the past month and children's current pain rating and the relationship between total number of family pain conditions and children's current pain rating. Taken together, our results suggest that parental and familial pain experiences predict children's use of catastrophizing to cope with pain, which in turn predicts physician global assessment and children's current pain. 
The results from the present study indicate that many of the parents of children seen in a pediatric rheumatology clinic have a personal pain history and highlight the potential impact of parental pain history on children's pain experiences. Specifically, parents who were more likely to seek treatment for their own pain or more likely to report interference with recreational activities because of pain had children with higher pain ratings and poorer health status as measured by the physician global assessment. Additionally, a series of mediational models showed that child catastrophizing serves as a specific mechanism through which parental and familial pain history variables influence child ratings of current pain and physician ratings of health status. Future studies are needed to determine exactly how children living in families with painful conditions become more reliant on catastrophizing to cope with their pain. In addition, more research is needed to identify other potential mediators, such as positive ways parents may influence children's pain coping. There are several important clinical implications of our findings. First, our results suggest that by gathering information from parents about their own pain histories, health care providers may be able to identify children at risk for developing maladaptive pain coping strategies and higher levels of disease-related pain and disability. Second, our results indicate that intervention programs should focus specifically on reducing children's use of catastrophizing to cope with their pain. Perhaps most importantly, our results highlight the need to include parents in interventions aimed at reducing children's pain and improving children's abilities to cope with pain.
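The correlations reported above are modest (r around 0.22 to 0.27) but, with roughly 100 parent-child pairs, they clear the conventional significance threshold. A hedged sketch of the usual t-test for a correlation coefficient (the abstract does not describe its exact statistical procedure, so this is illustrative only):

```python
import math

def corr_t_stat(r: float, n: int) -> float:
    """t statistic for testing H0: rho = 0, given a sample correlation r and n pairs."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# With n = 100 pairs, even the smallest reported correlation exceeds the
# two-sided 5% cutoff (critical t ~ 1.98 for df = 98).
t = corr_t_stat(0.23, 100)  # ~ 2.34
```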
Probenecid is a white crystalline solid commonly used as a uricosuric agent in the treatment of gout. Because of its inhibitory effects on renal tubule transport processes, probenecid is also used as a therapeutic adjunct to enhance blood levels of penicillin and its action. Toxicology and carcinogenicity studies were conducted by administering probenecid (>99% pure) in corn oil by gavage to groups of F344/N rats and B6C3F1 mice of each sex once daily, 5 days per week in 14-day, 13-week, and 2-year studies. Genetic toxicology studies were conducted in Salmonella typhimurium and Chinese hamster ovary cells. 14-Day Studies: Doses used in the 14-day studies for both rats and mice were 0, 200, 400, 800, 1,600, or 3,200 mg/kg. Of the animals receiving 3,200 mg/kg, all rats, all female mice, and two of five male mice died during the studies. No deaths occurred among the other dose groups. There was a significant reduction in body weight gain in male and female rats receiving 1,600 mg/kg and in female rats receiving 800 mg/kg. No gross lesions were attributed to probenecid administration in rats or mice of either sex. 13-Week Studies: Doses used in the 13-week studies were 0, 50, 100, 200, 400, or 800 mg/kg for rats and 0, 100, 200, 400, 800, or 1,600 mg/kg for mice. No rats died during the 13-week studies. In mice, 5 of 10 males and 3 of 10 females receiving 1,600 mg/kg and 1 of 10 males receiving 800 mg/kg died during the study. Significant reductions in body weight gain occurred in male rats administered 800 mg/kg, male mice administered 1,600 mg/kg, and female mice administered 800 or 1,600 mg/kg. All dose groups of male rats and all groups of female rats receiving 100 mg/kg or more showed significant increases in absolute and/or relative liver weights compared to control groups. This change was also seen in mice receiving 200 mg/kg and greater, except female mice in the 400 mg/kg group. No compound-related lesions occurred in rats or mice of either sex. 
Based on compound-related deaths and suppression of body weight gains observed at higher doses in the 13-week studies, doses of 0, 100, and 400 mg/kg were used for the 2-year studies in rats and mice. These doses were administered once daily, 5 days a week for up to 103 weeks to groups of 50 males or 50 females of each species. Body Weight and Survival in the 2-Year Studies: The mean body weight of high-dose female rats was 10% to 20% lower than that of controls throughout the studies. Mean body weights for all other dosed rats and for all dosed mice were similar to those of controls throughout the 2-year studies. Survival of high-dose male rats and high-dose and low-dose male mice was significantly lower than that of controls. Survival rates after 2 years were: male rats--control, 37/50; 100 mg/kg, 34/50; 400 mg/kg, 22/50; female rats--24/50; 35/50; 19/50; male mice--38/50; 23/50; 24/50; female mice--32/49; 32/49; 32/50. Neoplasms and Nonneoplastic Lesions in the 2-Year Studies: No chemical-related histopathologic toxic effects or increased incidence of tumors attributable to probenecid were observed in male or female rats receiving probenecid by corn oil gavage for up to 2 years. Mammary gland fibroadenomas and combined thyroid C-cell adenomas or carcinomas exhibited significant negative trends in female rats. These decreased tumor rates were associated with lower body weights. The incidence of adrenal medullary pheochromocytomas was significantly decreased in high-dose male rats. No compound-related increase in nonneoplastic lesions was observed in rats of either sex. No compound-related neoplastic effects were observed in male mice. In high-dose female mice, there were significant increases in the incidences of hepatocellular adenomas (3/48; 2/49; 14/49), but there was no corresponding increase in carcinomas (2/48; 2/49; 3/49). 
Treatment-related increased incidences of ovarian abscesses in female mice were causally related to Klebsiella species infection rather than directly related to chemical administration. Genetic Toxicology: Probenecid was not mutagenic in Salmonella typhimurium strains TA100, TA1535, TA1537, or TA98 with or without metabolic activation. In cytogenetic tests with Chinese hamster ovary cells, probenecid induced sister chromatid exchanges in the absence, but not in the presence, of S9 activation. No induction of chromosomal aberrations was observed with or without S9. Conclusions: Under the conditions of these 2-year gavage studies, there was no evidence of carcinogenic activity of probenecid for male or female F344/N rats receiving 100 or 400 mg/kg in corn oil. There was no evidence of carcinogenic activity of probenecid for male B6C3F1 mice given 100 or 400 mg/kg probenecid in corn oil. There was some evidence of carcinogenic activity of probenecid for female B6C3F1 mice based on an increased incidence of hepatocellular adenomas. Synonyms: 4-[(Dipropylamino)sulfonyl]benzoic acid; p-(dipropylsulfamoyl)benzoic acid; p-(dipropylsulfamyl)benzoic acid Trade Names: Benacen; Benemid; Benemide; Benn; Probalan; Probecid; Proben; Probenid; Robenecid; Uricocid
Neonatal septicaemia is characterized by high mortality, so that intravenous antibiotics must be administered on clinical suspicion. Initial antibiotic therapy, before the results of microbiological evaluation, is based on empirical data regarding the sensitivity of prevalent bacterial strains. In the past years, the aetiological causes of neonatal sepsis have changed, with increased bacterial resistance to the usual combination of initial antibiotics. We compared changes of serum C-reactive protein (CRP) concentrations in neonates with proven neonatal sepsis in response to initial antibiotic therapy (inappropriate or appropriate). Our hypothesis was that changes of CRP levels during the first 48 hours of treatment, and before microbiological results, could be useful in evaluating the effectiveness of empiric antibiotics. Our prospective study included all neonates with suspected sepsis referred to our Intensive Care Unit from January 1992 to December 1996. Neonates received ampicillin and gentamicin or amikacin (during the first week of life), while infants older than 7 days of life were given a combination of cloxacillin and aminoglycosides. In patients with late neonatal sepsis who also had meningitis, cloxacillin was substituted with ampicillin. Microbiological identification was performed with routine bacteriological methods. Susceptibility of isolated bacterial strains to antibiotics was tested by the Kirby-Bauer disc-diffusion method. Serum concentration of CRP was measured by immunoturbidimetry (Turbox CRP, Orion Diagnostica) and a CRP concentration higher than 20 mg/l was regarded as elevated. Blood samples for CRP measurements were taken before the treatment (CRP0), and during the first (CRP1) and second (CRP2) day of empiric therapy. The interval between sampling was from 12 to 24 hours. A total of 1520 neonates were evaluated during this study period and 47 patients fulfilled criteria for final analysis. 
In 14 of 47 patients initial antibiotic treatment was inappropriate. The most frequent resistant strains were Kl. pneumoniae (6), followed by St. aureus (4), E. coli (2) and Pseudomonas (2). During initial evaluation six patients had concomitant meningitis, while two had concomitant septic arthritis and two necrotizing enterocolitis. Seven (50%) of 14 patients with inadequate initial treatment died. In 33 cases of adequately treated septicaemia the course was uncomplicated and no lethal outcome was observed. In the first group of 14 patients who received inappropriate treatment, serum CRP concentrations (mg/L; mean +/- SD) were: CRP0 = 107.5 +/- 65.6; CRP1 = 155.3 +/- 75.7; CRP2 = 209.1 +/- 67.0, while in the repeated samples of the 33 patients in the second group who received adequate treatment the following results were recorded: CRP0 = 124.0 +/- 78.1; CRP1 = 133.8 +/- 63.5; CRP2 = 94.6 +/- 46.4. The increase in serum CRP concentration in the first group during the first 48 hours of inadequate initial therapy was significantly higher (p = 0.015, two-way ANOVA) than in the second group with appropriate treatment. During the first 24 hours of treatment an increase in serum concentration of CRP was registered in 12 (85.7%) of 14 measurements in patients with inadequate therapy and in 19 (56.7%) of 33 measurements in patients with adequate therapy. In the first group during the second day of treatment, an increase in serum CRP concentration was recorded in 11 (78.6%) of 14 cases, while in 3 (14.3%) cases CRP concentration decreased. In 31 (91.2%) of 34 measurements in patients with adequate treatment CRP concentration decreased during the second day of treatment and in only 3 (8.8%) cases CRP concentration increased. With an increase in serum concentration of CRP of more than 10 mg/L on the second day of antibiotic treatment, the probability of inadequate antibiotic therapy (positive predictive value) was estimated to be 77.0%. 
Any recorded decrease of serum CRP concentration may confirm an appropriate choice of antibiotics during the second day of treatment, with a probability of 93.3% (negative predictive value). Measurement of serum CRP concentration is useful for the diagnosis of severe neonatal sepsis. In our study all isolated bacterial strains were comparable in their ability to activate the systemic inflammatory response and CRP production. It is known that serial CRP measurements during neonatal sepsis are useful in making the decision to cease antibiotic treatment. The highest serum CRP concentrations were detected during the first day of illness but, in some cases even with appropriate treatment, CRP peak levels due to sustained pro-inflammatory action of interleukin-6 production could be detected even 24 hours after treatment was started. Our study showed that in patients with inadequate initial antibiotic therapy of neonatal sepsis, serum CRP concentrations increase further during the second day of treatment. By contrast, the use of appropriate antibiotic therapy in the same time period was followed by a significant decrease of serum CRP levels in our patients. Therefore, an increase of CRP level during initial treatment, especially during the second day of treatment of neonatal sepsis, should be taken as an indication for replacement of the initial antibiotics, even before the sensitivity of the microbiological causes to the given antibiotics is known.
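The predictive values quoted in this abstract can be checked against its day-2 counts. Note that the reported 77.0% and 93.3% apply the authors' >10 mg/L rise threshold, so the raw-count estimates below (11 rises in 14 inadequately treated neonates vs. 3 rises in 34 adequately treated ones) come out slightly different; this is an illustrative sketch with a hypothetical function name, not the authors' computation:

```python
def ppv_npv(tp: int, fp: int, fn: int, tn: int):
    """Predictive values from a 2x2 table.
    Here 'positive test' = CRP rise on day 2; 'condition' = inadequate therapy."""
    ppv = tp / (tp + fp)  # P(inadequate therapy | CRP rose)
    npv = tn / (tn + fn)  # P(adequate therapy | CRP fell)
    return ppv, npv

# Day-2 counts from the abstract: 11 rises / 3 falls among inadequately
# treated neonates; 3 rises / 31 falls among adequately treated ones.
ppv, npv = ppv_npv(tp=11, fp=3, fn=3, tn=31)  # ~0.79 and ~0.91
```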
Tuberculosis (TB) patients must be hospitalized while the sputum smear is positive, because TB spreads through the air. The cooperation of the patient is important in order to complete the treatment of TB. However, a small number of patients are noncooperative with the treatment and may sometimes refuse it. At this symposium, we discussed whether we could restrict the human rights of noncooperative TB patients. Although the patients' human rights must be protected, we also have to protect the human rights of people who may be exposed to TB infection. The balance of both people's rights must be fully considered in the TB control policy. It is epoch-making that the TB society took up the theme of the restriction of TB patients' human rights. Five speakers presented their papers, each from a different position. There were presentations about the scientific evidence for isolation, actual cases, the situation in the United States, and the legal view on the restriction of TB patients' human rights. The present situation and the legal problems in Japan became clear at this symposium. We need further discussion about the restriction of TB patients' human rights for the revision of the Tuberculosis Protection Act and have to obtain a national consensus on it. 1. The evidence for isolation: Emiko TOYODA (International Medical Center of Japan) To determine appropriate periods of respiratory isolation, the available biological, clinical, and epidemiological issues and data were studied. Although an absolute lack of infectiousness requires consecutive negative cultures, this takes too long and is impractical. There seems to be no established evidence for noncontagiousness after 2 to 3 weeks of effective treatment. In practice, conversion to 3 consecutive negative smear results may be used as a surrogate for noninfectiousness, even though a small risk of transmission may still be present. Chemical isolation has become more important, and administration with DOT should be indicated to maintain compliance. 2. 
Discontinued hospitalization in tuberculosis patients: Yoshiko KAWABE (National Hospital Organization Tokyo National Hospital) We investigated the background of tuberculosis patients who entered our hospital in the 11 years from 1993 to 2003 and discontinued hospitalization. Out of 4,126 cases, 76 (1.8%) discontinued hospitalization. We classified them into three groups. The first is the self-discharged group, who left the hospital without permission. The second is the obligatorily discharged group, who were discharged because of some trouble. The third is the transferred group, who were transferred to another hospital, including mental hospitals that have a ward for tuberculosis. The major reasons were drinking during hospitalization, violence, and wandering because of dementia, and the major backgrounds were repeatedly noncompliant patients, homeless people, and senile dementia. We concluded that we need some legal intervention for the few cases who cannot continue hospitalization. 3. Tuberculosis control policy and human rights in the public health center: Keiko FUJIWARA (Infection Diseases Control Division, Public Health Bureau, City of Yokohama) It is required for the success of the tuberculosis control policy to consider human rights. Patients' human rights should be respected, and surrounding people's human rights should also be respected. We sometimes see a tuberculosis patient who cannot continue tuberculosis treatment. Society as a whole has to share the recognition of tuberculosis as a social illness. The completion of tuberculosis treatment is not only to the benefit of the individual, but is also very important as social defense. When we revise the tuberculosis control policy, we should think about both protecting society from tuberculosis and protecting tuberculosis patients' human rights, and obtain a national consensus. 4. The mandatory TB control policy in the US: Hidenori MASUYAMA (Shibuya Dispensary, Japan Anti-TB Association) The mandatory TB control policy in the US was discussed. 
If a mandatory health policy is to be applied, the following three human-rights criteria must be satisfied: 1. The health of others will be adversely affected without a mandatory program. 2. The mandatory program is the least restrictive alternative. 3. The mandatory program is implemented equitably, without purposeful bias. For example, mandatory DOT could not satisfy these criteria. Before applying a mandatory TB control policy in Japan, TB patients' autonomy and social cooperation in TB therapy need to be considered. 5. Tuberculosis and the guarantee of human rights: Shigeru TAKAHASHI (Graduate School of Law, Hitotsubashi University) In modern administrative law, the relationship between governments and the people is regarded not as a two-sided relationship between governments and the people who are subject to government intervention, but as a triangular relationship among governments, the people who are subject to government interventions, and the people who benefit from those interventions. When we design a new Tuberculosis Protection Act, we must first consider the human rights of tuberculosis patients from the viewpoint of due process of law. We must also consider the human rights of the people who are threatened with the risk of tuberculosis infection.
Tipranavir [PNU 140690, tipranivir, Aptivus] is a second-generation HIV dihydropyrone (a sulphonamide derivative) nonpeptidic protease inhibitor (NPPI) discovered by Pharmacia & Upjohn (now Pfizer) in the US. The compound is in development with Boehringer Ingelheim. Tipranavir has potent in vitro activity against a variety of HIV-1 laboratory strains and clinical isolates, including those resistant to ritonavir, as well as HIV-2. Tipranavir has been shown to act synergistically with other antiretroviral agents. The limited bioavailability of the hard gel (and first available) formulation of tipranavir led to the development of a soft capsule formulation that has better oral bioavailability. Pharmacia Corporation (now Pfizer) considers that the resistance profile of tipranavir may be sufficiently unique for it to be effective against protease inhibitor resistant virus. On 16 April 2003, Pharmacia Corporation was acquired by, and merged into, Pfizer. In February 2000, Boehringer Ingelheim acquired exclusive worldwide rights to tipranavir. Tipranavir was launched in the US in mid-2005. In June 2005, the US FDA granted accelerated approval to tipranavir capsules for use in combination treatment, based on 24-week data from ongoing clinical trials. The approved dose is Aptivus 500 mg taken with ritonavir 200 mg, twice daily. Aptivus 250 mg soft gel capsules are expected to be available in the second half of 2005. A submission was made to the FDA in October 2004 seeking accelerated approval. In May 2005, the Antiviral Drugs Advisory Committee of the FDA recommended the approval of tipranavir. The positive recommendation is based on data from the RESIST-1 and RESIST-2 studies. Also in October 2004, Boehringer Ingelheim submitted a Marketing Authorisation Application (MAA) to the European Medicines Agency (EMEA) for tipranavir for the treatment of HIV-1 infection in combination with other antiretroviral agents in patients who are protease inhibitor experienced.
In July 2005, the Committee for Medicinal Products for Human Use (CHMP) issued a positive opinion for tipranavir in the European Union. If approved, the drug will also be marketed in Europe under the name Aptivus. Marketing authorisation under exceptional circumstances (accelerated approval) is expected before the end of 2005. A phase III clinical programme (RESIST: Randomised Evaluation of Strategic Intervention in Multi-drug ReSistant Patients with Tipranavir) was launched by Boehringer Ingelheim in February 2003. The RESIST programme consists of two phase III pivotal trials (RESIST 1 and RESIST 2) and two companion trials (study 1182.51 and RESIST 3) available at some sites for even more advanced patients. The trials are designed to further study the efficacy and safety of tipranavir (500 mg) boosted with low-dose (200 mg) ritonavir, taken twice daily, versus a low-dose ritonavir-boosted comparator protease inhibitor chosen by the patient's physician on the basis of treatment history and baseline resistance testing. Each patient will also receive an individualised background regimen. Study participants will all be highly treatment-experienced HIV-positive adults. The RESIST 1 study enrolled 620 patients in the US, Canada and Australia, and RESIST 2 enrolled more than 863 patients in Europe and South America. These trials are now fully recruited. The clinical endpoint for RESIST 1 is at 24 weeks and for RESIST 2, the endpoints are at 16 and 24 weeks. Interim data from RESIST 1 (1182.12) were presented at the 44th Interscience Conference on Antimicrobial Agents and Chemotherapy in Washington, DC, USA, in October 2004. Results from this study show that tipranavir is a viable treatment option for patients who have failed other protease inhibitors.
In June 2004, Boehringer Ingelheim announced the expansion of enrolment criteria in the international Compassionate Use Programme to allow broader access to tipranavir for HIV patients in need of new treatment options. All countries participating in the tipranavir phase III programme are eligible to take part in the Compassionate Use programme, which is enrolling patients over the age of 18 years who are triple-antiretroviral class-experienced with at least two PI-based regimens. In November 2004, Boehringer Ingelheim opened the tipranavir Expanded Access Program (EAP) in the US, following a review of the protocol by the FDA. The programme will provide access to tipranavir for HIV-infected patients (≥18 years old) who are not enrolled in the ongoing tipranavir clinical studies, who are triple-antiretroviral class-experienced with at least two previous PI-based regimens, have documented PI-resistance, and need tipranavir to construct a viable treatment regimen. Eligibility is not dependent upon viral load or CD4+ cell count. Tipranavir is also being evaluated in phase II studies for use in paediatric and treatment-naive patient populations. Phase II trials completed in the US have established the clinical activity of tipranavir in both antiretroviral-naive and -experienced patients with HIV infection. The studies have also shown that tipranavir can be combined with ritonavir for maximal clinical benefit. In its 2003 Annual Report, Boehringer Ingelheim stated that the process- and paediatric-formulation development of tipranavir had been completed.
Sweden has prohibited the deposition of organic waste since January 2005. Since 1 million tons of sludge is produced every year in Sweden and incineration capacity does not meet demand, other methods of sludge management have to be introduced to a larger degree. One common method in the USA and parts of Europe is the use of wetlands to treat wastewater and sewage sludge. The capacity of reed beds to affect the toxicity of a complex mixture of nitroaromatics in sludge, however, is not fully elucidated. In this study, an industrial sludge containing explosives and pharmaceutical residues was therefore treated in artificial reed beds and the change in toxicity was studied. Nitroaromatic compounds, which are the main ingredients of many pharmaceuticals and explosives, are well known to cause cytotoxicity and genotoxicity. Recent studies have also shown that embryos of zebrafish (Danio rerio) are sensitive to nitroaromatic compounds. Therefore, we tested the sludge passing through constructed wetlands in order to detect any changes in levels of embryotoxicity, genotoxicity and dioxin-like activity (AhR agonists). We also compared unplanted and planted systems in order to examine the impact of the root system on the fate of the toxicants. An industrial sludge containing a complex mixture of nitroaromatics was added daily to small-scale constructed wetlands (vertical flow), both unplanted and planted with Phragmites australis. Sludge with an average dry weight of 1.25% was added at an average hydraulic loading rate of 1.2 L/day. Outgoing water was collected daily and stored at -20 degrees C. The artificial wetland sediment was Soxhlet extracted, followed by clean-up with multi-layer silica, or extracted by ultrasonic treatment, yielding one organic extract and one water extract of the same sample.
Genotoxicity of the extracts was measured according to the ISO protocol for the umu-C genotoxicity assay (ISO/TC 147/SC 5/WG9 N8), using Salmonella typhimurium TA1535/pSK1002 as the test organism. Embryotoxicity and teratogenicity were studied using the fish egg assay with zebrafish (Danio rerio), and dioxin-like activity was measured using the DR-CALUX assay. Chemical analyses of nitroaromatic compounds were performed using Solid Phase Micro Extraction (SPME) and GC-MS. Organic extracts of the bed material showed toxic potential in all three toxicity tests after two years of sludge loading. There was a difference between the planted and the unplanted beds: the toxicity of organic extracts was overall higher in the bed material from the planted beds. The higher toxicity of the planted beds could have been caused by the higher levels of total carbon in the planted beds, which binds organic toxicants, and by enrichment caused by lower volumes of outgoing water from the planted beds. Developmental disorders were observed in zebrafish exposed in direct contact with bed material from unplanted beds, but not in fish exposed to bed material from planted beds. Hatching rates were slightly lower in zebrafish exposed to outgoing water from unplanted beds than in embryos exposed to outgoing water from planted beds. Genotoxicity in the outgoing water was below the detection limit for both planted and unplanted beds. Most of the toxicants added via the sludge were unaccounted for in the outgoing water, suggesting that the beds had toxicant removal potential, although the mechanisms behind this remain unknown. During the experimental period, the beds received a sludge volume (dry weight) of around three times their own volume. In spite of this, the toxicity in the bed material was lower than in the sludge. Thus, the beds were probably able to actually decrease the toxicity of the added, sludge-associated toxicants.
When testing the acetone extracts of the bed material, the planted bed showed a higher toxicity than the unplanted beds in all three toxicity tests. The toxicity of water extracts from the unplanted beds, detected by the fish egg assay, was higher than that of the water extracts from the planted beds. No genotoxicity was detected in outgoing water from either planted or unplanted beds. Together, this indicates that the planted reed beds retained semi-lipophilic, acetone-soluble toxic compounds from the sludge better than the unplanted beds, which tended to leak more of the water-soluble toxic compounds into the outgoing water. The compounds identified by SPME/GC in the outgoing water were not at sufficient concentrations to have caused induction in the genotoxicity test. This study has pointed out the benefits of using constructed wetlands receiving an industrial sludge containing a complex mixture of nitroaromatics to reduce toxicity in the outgoing water. The water from planted, constructed wetlands could therefore be directed to a recipient without further cleaning. The bed material should be investigated over a longer period of time in order to evaluate potential accumulation and leakage prior to proper usage or storage. The plants should be investigated in order to examine uptake and possible release when the plant biomass is degraded.
1. Electrophysiological and behavioural observations have shown that changes in sleep-waking activity occur in astronauts during space flight. Ground-based experiments have previously shown that the immediate early gene (IEG) c-fos, a marker of neuronal activation, can be used as a molecular correlate of sleep and waking. However, while Fos expression peaks within 2-4 hours after the stimulus and returns to baseline within 6-8 hours, the products of other IEGs, such as the FRA proteins, which are also synthesized soon after induction, persist in the cell nuclei for longer periods of time, ranging from 1-2 days to weeks. 2. Both Fos and FRA expression were evaluated in several adult albino rats sacrificed at different time points of the space flight, either at FD2 and FD14 (at launch and about two weeks after launch, respectively) or at R + 1 and R + 13 (at reentry and about two weeks after landing, respectively). The changes in Fos and FRA expression were then compared with those obtained in ground controls. These experiments demonstrate activation of several brain areas which varies during the different phases of the space flight. Due to their different times of persistence, Fos and FRA immunohistochemistry can provide only correlative observations. In particular, FRA expression has been quite helpful in identifying the occurrence of short-lasting events such as those related either to stress or to REM sleep, whose episodes in the rat last only a few minutes and could hardly be detected by using Fos expression alone. 3. Evidence was presented indicating that at FD2 and FD14, Fos-labeled cells were observed in several brain areas in which Fos had previously been identified as being induced by spontaneous or forced waking in ground-based experiments. In contrast to these findings, FLT rats sacrificed at R + 1 showed low levels of Fos immunostaining in the cerebral cortex (neocortex) and several forebrain structures such as the hypothalamus and thalamus.
Some Fos staining was also present in limbic cortical areas, the septum, and the hippocampus. The main forebrain area of FLT rats sacrificed at R + 1 showing an increased expression of Fos was the central nucleus of the amygdala (CeA) (cf. 127), as well as the noradrenergic locus coeruleus (LC) nucleus (cf. 122). At R + 13, Fos immunostaining was variable among FLT rats. However, none of these rats showed a significant number of Fos-positive cells in the CeA. 4. Most of the rats studied for Fos expression were also tested for FRA expression. In particular, scattered FRA expression occurred at FD14 in different areas of the neocortex and in limbic forebrain regions (such as the cingulate, retrosplenial and entorhinal cortex). It also included the hippocampus, the lateral septum, and the caudate/putamen, as well as some hypothalamic regions. At reentry (R + 1) it was previously shown that a prominent increase in FRA expression occurred in the LC of FLT rats (cf. 122). This finding was associated with an increase in FRA expression which affected not only the nucleus paragigantocellularis lateralis of the medulla, which sends excitatory glutamatergic afferents to the LC (cf. 31 for ref.), but also structures which are known to produce corticotropin-releasing factor (CRF), a neuropeptide which activates the noradrenergic LC neurons during stress. 5. These findings, which result from acceleration stress, were followed by REMS episodes, which probably occurred after a long period of sleep deprivation following exposure to microgravity. It was previously shown that an increase in Fos and FRA expression occurred at reentry in some pontine and medullary reticular structures (cf. 128), which are likely to be involved in both the descending (postural atonia) and the ascending manifestations of PS.
These findings can be integrated with the results of the present experiments showing that at reentry high levels of FRA expression occurred in the hippocampus and the limbic system, i.e. in structures which are involved in the generalized pattern of EEG desynchronization and the theta activity typical of REMS (cf. 83, 84). A prominent increase in FRA expression at reentry also affected some components of the amygdaloid complex, particularly the CeA, as well as some related structures, such as the lateral parabrachial nucleus (cf. 122) and the nucleus of the tractus solitarius (cf. 127). These structures are known to contribute to the PGO waves, which drive the oculomotor system either directly or through the medial vestibular nuclei (128, cf. also 126). Unfortunately, due to our brainstem transections, we were unable to evaluate the changes in gene expression which could affect the dorsolateral pontine structures during the occurrence of REMS episodes. Further experiments are thus required to investigate the role that these pontine structures exert in determining adaptive changes following exposure to microgravity after launch, as well as readaptation to the terrestrial environment after landing.
To assess the acceptability and feasibility of functional tests as a gateway to angiography for management of coronary artery disease (CAD), the ability of diagnostic strategies to identify patients who should undergo revascularisation, patient outcomes in each diagnostic strategy, and the most cost-effective diagnostic strategy for patients with suspected or known CAD. A rapid systematic review of economic evaluations of alternative diagnostic strategies for CAD was carried out. A pragmatic and generalisable randomised controlled trial was undertaken to assess the use of the functional cardiac tests: angiography (controls); single photon emission computed tomography (SPECT); magnetic resonance imaging (MRI); stress echocardiography. The setting was Papworth Hospital NHS Foundation Trust, a tertiary cardiothoracic referral centre. Participants had suspected or known CAD and an exercise test result that required non-urgent angiography. Patients were randomised to one of the four initial diagnostic tests. The outcomes at 18 months post-randomisation were exercise time (modified Bruce protocol) and cost-effectiveness compared with angiography (diagnosis, treatment and follow-up costs). The aim was to demonstrate equivalence in exercise time between those randomised to functional tests and those randomised to angiography [defined as the confidence interval (CI) for the mean difference from angiography lying within 1 minute]. The 898 patients were randomised to angiography (n = 222), SPECT (n = 224), MRI (n = 226) or stress echo (n = 226). Initial diagnostic tests were completed successfully with unequivocal results for 98% of angiography, 94% of SPECT (p = 0.05), 78% of MRI (p < 0.001) and 90% of stress echocardiography patients (p < 0.001). Some 22% of SPECT patients, 20% of MRI patients and 25% of stress echo patients were not subsequently referred for an angiogram.
Positive functional tests were confirmed by positive angiography in 83% of SPECT patients, 89% of MRI patients and 84% of stress echo patients. Negative functional tests were followed by positive angiograms in 31% of SPECT patients, 52% of MRI patients and 48% of stress echo patients tested. The proportions that had coronary artery bypass graft surgery were 10% (angiography), 11% (MRI) and 13% (SPECT and stress echo), and percutaneous coronary intervention 25% (angiography), 18% (SPECT) and 23% (MRI and stress echo). At 18 months, comparing SPECT and stress echo with angiography, a clinically significant difference in total exercise time can be ruled out. The MRI group had a significantly shorter mean total exercise time of 35 seconds, and the upper limit of the CI was 1.14 minutes less than in the angiography group, so a difference of at least 1 minute cannot be ruled out. At 6 months post-treatment, SPECT and angiography had equivalent mean exercise time. Compared with angiography, the MRI and stress echo groups had significantly shorter mean total exercise times of 37 and 38 seconds, respectively, and the upper limit of both CIs was 1.16 minutes, so a difference of at least 1 minute cannot be ruled out. The differences were mainly attributable to revascularised patients. There were significantly more non-fatal adverse events in the stress echo group, mostly admissions for chest pain, but no significant difference in the number of patients reporting events. Mean (95% CI) total additional costs over 18 months, compared with angiography, were £415 (-£310 to £1084) for SPECT, £426 (-£247 to £1088) for MRI and £821 (£10 to £1715) for stress echocardiography, with very little difference in quality-adjusted life-years (QALYs) amongst the groups (less than 0.04 QALYs over 18 months).
Cost-effectiveness was mainly influenced by test costs, clinicians' willingness to trust negative functional tests, and a small number of patients who had a particularly difficult clinical course. Between 20 and 25% of patients can avoid invasive testing by using functional testing as a gateway to angiography, without substantial effects on outcomes. The SPECT strategy was as useful as angiography in identifying patients who should undergo revascularisation, and the additional cost was not significant; in fact, it would be reduced further by restricting the rest test to patients who have a positive stress test. MRI had the largest number of test failures and, in this study, had the least practical use in screening patients with suspected CAD, although it had similar outcomes to stress echo and is still an evolving technology. Stress echo patients had a 10% test failure rate, significantly shorter total exercise time and time to angina at 6 months post-treatment, and a greater number of adverse events, leading to significantly higher costs. Given the level of skill required for stress echo, it may be best to reserve this test for those who have a contraindication to SPECT and are unable or unwilling to have MRI. Further research, using blinded reassessment of functional test results and angiograms, is required to formally assess diagnostic accuracy. Longer-term cost-effectiveness analysis and further studies of MRI and new-generation computed tomography are also required.
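The trial's equivalence rule, declaring a functional-test arm equivalent to angiography only when the confidence interval for the mean difference in exercise time lies entirely within 1 minute, can be sketched in code. The following is a minimal illustration with synthetic data; the function names are hypothetical and the normal-approximation CI is a simplification, not the trial's actual statistical analysis.

```python
import math

def mean_diff_ci(xs, ys, z=1.96):
    """95% normal-approximation CI for the difference in means (xs minus ys)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    # Unbiased sample variances for each group.
    vx = sum((v - mx) ** 2 for v in xs) / (nx - 1)
    vy = sum((v - my) ** 2 for v in ys) / (ny - 1)
    se = math.sqrt(vx / nx + vy / ny)  # standard error of the difference
    d = mx - my
    return d - z * se, d + z * se

def equivalent(ci, margin=1.0):
    """Equivalence holds only when the whole CI lies within +/- margin (minutes)."""
    lo, hi = ci
    return -margin < lo and hi < margin
```

With synthetic exercise times (in minutes) for a functional-test group and an angiography group, equivalence is declared only when both CI limits fall inside (-1, +1); a CI whose lower limit reaches -1.14 would fail, which is why a difference of at least 1 minute could not be ruled out for the MRI group above.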
The conclusions which we derive from our observations are as follows: 1. The excretion of injected egg-albumen as such is in no case complete. The quantity retained varies from 23 to 100%. 2. The amount retained varies: a) directly with the slowness of absorption, which is determined by the manner of administration; b) directly with the time during which the proteid remains in the body, and therefore inversely with the rapidity of excretion; c) inversely with the quantity injected, though this has much less effect than (a) or (b); d) with individual peculiarities, but these are not very conspicuous. 3. The excreted proteid coagulates at the same temperatures as the injected albumen. 4. Injection of egg-albumen does not cause the appearance of globulins in the urine. 5. The proportion of proteid coagulating at lower temperatures is less in the urine than in the injected solution. When a solution has been heated to 73 degrees before injection, the urine also does not coagulate below this temperature. 6. Egg-albumen injected into the hen is excreted as with mammals. 7. The albuminuria lasts in typical cases from 1½ to 3 days, according to the manner of administration. The excretion begins very shortly (7 minutes) after injection. 37 per cent of the total proteid injected may be excreted in an hour. About three-fourths of the total excretion takes place within the first 17 hours; the excretion is almost completed in the next 15 hours, only traces being excreted thereafter. With hypodermic injection the amount is more nearly equal on 2 or 3 successive days, since the absorption may extend over 2 days. 8. Alkali-albumin, as well as muscle-proteids (from foreign species), are completely retained. An unconverted mixture of egg-albumen and sodium carbonate behaves like egg-albumen. 9. A small amount of proteid (less than 5%) is excreted unchanged in the faeces. 10. A variable proportion is excreted as non-coagulable proteid.
The quantity of this is proportional to that of the coagulable proteid of the urine. 11. The rest undergoes complete metabolism to urea. 12. The total nitrogen excretion is increased beyond the amount of nitrogen introduced as albumen. 13. Starvation appears to cause an increase in the ratio of the urea to the total nitrogen of the urine. 14. The effects of intravenous injection of egg-albumen on circulation and respiration do not differ from those of an equivalent injection of the solvent. Albumen causes, however, a specific diuresis, beginning 50 minutes after the intravenous injection, and reaching its maximum in about 2 hours. It causes neither glycosuria nor haemoglobinuria. 15. The injection of egg-albumen, alkaline egg-syntonin, or muscle extracts, causes in rabbits a rise of temperature of 1 to 2 degrees C. This begins in about an hour, usually reaches its maximum in from 6 to 8 hours, and then falls rapidly. It may in rare cases persist for several days. It is indifferent qualitatively whether the injection is made by the jugular or the ear-vein, hypodermically, or into the peritoneum. Even extremely small quantities injected into the ear-vein cause this rise. The fever does not cause histological alterations in any organ examined. The injection of normal salt solution may cause a rise, but this is much smaller. 16. The injection of egg-albumen causes but very slight histological changes. The kidneys are usually congested, especially in the cortex. The cells may be slightly cloudy. A slight degree of nephritis may occur, but this is not of such degree as to effect permanent lesions. The injection of muscle extracts may give rise to a more pronounced parenchymatous nephritis. 17. Urethane is fatal to rabbits in doses of 0.75 to 1.0 grm. per kilo. The symptoms consist mainly in a very marked fall of temperature, and in medullary paralysis. 0.5 grm. per kilo. lowers the temperature 2.3 degrees C. Doses as small as 0.6 grm. 
per kilo cause very marked histological changes, consisting mainly in extensive granular and vacuolar degeneration of the hepatic epithelium, which are so acute as to be fully developed when death occurs in 1½ hours after injection. Doses of 0.35 grm. per kilo. do not produce this change. Chloretone did not cause the degeneration, but is followed by congestion of the abdominal viscera. 18. Native egg-albumen, injected into the femoral vein of a dog, was followed in one case by a fatal ending with convulsions and coma, after several intervening cases of good health. Further experiments demonstrated that there is no toxicity inherent in fresh egg-albumen, nor can it be developed by breeding the eggs in the shell. The cause of the above fatal issue must therefore be sought in some extraneous toxic agent which contaminated the solution. Muscle-extracts were also devoid of toxicity. Alkali-albumin produces no changes beyond those which may be attributed to the free alkali contained therein.
The pigment contained in the extracts obtained from B. phosphorescens by freezing and thawing, and in the alkaline extracts of B. phosphorescens and yeast, resembles the "cytochrome c" of Hill and Keilin (6) and the "porphyratin B" of Schumm (7) in giving absorption bands at mµ 552-550 and 522-520, but shows in addition a band at about 575, as in the "hemochromogen A" obtained by Keilin (3) by prolonged treatment of yeast with strong alkali. Like cytochrome c, the pigment of yeast extracts appears to be distinct from the ordinary hemochromogen of blood, because of the difference in position of the bands of the native materials and of the corresponding pyridine hemochromogens. On treatment with acetic acid, however, the yeast extract yields alpha-hematin, as identified spectroscopically. It is evident then that one portion of its iron-porphyrin nucleus is identical with alpha-hematin (iron-protoporphyrin), which must be present not as such, but in chemical combination. The alkaline extracts of C. diphtheriae, compared with those of B. phosphorescens and yeast, show a constant difference in the position of the two bands in the green, which lie nearer the red end of the spectrum, at mµ 556 and 528. This extract likewise on treatment with acetic acid yields alpha-hematin, which in the form of its alkaline hemochromogen may be responsible for the bands in the alkaline extract at mµ 556 and 528. Great interest has attached in our investigation to the substance responsible for the absorption band in the alkaline extracts at about 575. Extraction with acetic acid-ether of these alkaline solutions, as well as of the whole bacteria, yields a material which shows absorption bands at mµ 575-574 and 539-535, and appears to be identical with a complex porphyrin which has been found in culture filtrates of C. diphtheriae. This complex porphyrin has been described in a previous paper (1).
It is labile and breaks down readily to yield coproporphyrin and the copper compound of coproporphyrin, and is apparently the source of the coproporphyrin which is often found free in the culture filtrates. In the work reported earlier we had been unable to obtain this complex porphyrin, or porphyrin compound, directly from the bacteria. In the present work we have been successful in obtaining it from the three species investigated. The behavior of the complex porphyrin extracted from the whole bacteria is the same as that of the porphyrin found in filtrates. It is insoluble in 25 per cent HCl, and on disintegration gives coproporphyrin and the copper compound of coproporphyrin. Information is quite lacking as to the particular form of combination in which this complex porphyrin occurs within the cell. The complex porphyrin is certainly not present there in the form in which it appears in the extracts. If diphtheria bacilli showing strong absorption bands of reduced cytochrome are treated with glacial acetic acid while under examination with the microspectroscope, the bands of cytochrome are seen to fade and are replaced by those of the complex porphyrin at mµ 575 and 539. The origin of the copper which is found, combined with coproporphyrin, as a product of the disintegration of the porphyrin compound, has been a matter of uncertainty. In the case of filtrates of C. diphtheriae it has seemed possible that the copper was never a constituent of the bacteria, and that combination with copper occurs only after the porphyrin has been liberated from the bacterial cell. With washed bacteria, however, the presence of copper in extracts indicates that this element has been taken up from the culture medium and incorporated within the cell. Whether or not the copper is there combined with porphyrin cannot be decided by the present evidence. Copper occurs naturally, however, in combination with porphyrin in turacin (14), a pigment of the wing feathers of certain birds.
In the present case such combination seems the more probable, so that the complex porphyrin may represent a form in which copper is contained within the cell. Objection may be raised to the use of the term complex porphyrin or porphyrin compound for the substance referred to here and in the previous paper (1). The name hemochromogen might be applied with equal justification. Until the chemical nature of the substance is better known, however, it seems best not to use any but a simple descriptive name. Reference should not be omitted here to the bacteriological significance of this compound, which arises from the correlation which we have previously observed between its amount and the content of toxin in filtrates of C. diphtheriae. In respect to this porphyrin compound the pathogen C. diphtheriae seems to differ from the nonpathogenic forms in the readiness with which the material is liberated from the bacteria in cultures, rather than in the nature of the material.
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
Tuberculosis in the elderly remains a health burden in Japan. Most of the elderly aged more than 70 years in Japan became infected with Mycobacterium tuberculosis in their youth, and the elderly represent a population at especially high risk for developing tuberculosis owing to comorbidity and age-related immunosuppression. The characteristics of tuberculosis in the elderly differ from those in young patients. To reduce active tuberculosis in the elderly, treatment of latent tuberculosis infection in compromised hosts could be strengthened; however, its impact might be limited. Elderly tuberculosis patients have not only clinical problems but also socioeconomic problems. Major problems of elderly tuberculosis patients are concurrent diseases, bedridden states, the necessity of nursing care, undernourishment, poor adherence, and poor performance status. With this symposium, we focused on the issue of tuberculosis in the elderly in Japan. The speakers were invited from various areas, including the tuberculosis surveillance center, a public health center and a national hospital organization medical center. (1) Current trend of elderly TB: Masako OHMORI (Tuberculosis Surveillance Center, Research Institute of Tuberculosis, JATA) Although the tuberculosis (TB) incidence rate in Japan reached 19.4 per 100,000 in 2008, the rates among the elderly (65+ yrs) were high, e.g., 29.5 for those aged 65-74 years, 64.2 for those aged 75-84 years and 97.3 for those aged 85 years and over. The proportion of those aged 65 years and over increased from 36.8% in 1987 to 56.7% in 2008. Regarding the delay of case detection among elderly TB patients, the patient's delay tended to be shorter but the doctor's delay was longer. Although most TB patients, including elderly TB patients, were detected upon visiting a medical institution with some symptoms, in the case of elderly TB more patients were detected as outpatients or inpatients for a disease other than TB. 
Among TB patients aged 65 years and over, 26.4% died within one year. (2) The issues of elderly tuberculosis--An outbreak of pulmonary tuberculosis at a nursing home for the elderly: Michiaki OKUMURA (Public Health Division, Public Health and Welfare Bureau, City of Osaka) I experienced a mass outbreak of pulmonary tuberculosis with 8 patients (including the source of infection) and 6 latent tuberculosis infections. For 5 of the 8 patients (including the source), I undertook restriction fragment length polymorphism (RFLP) analysis of M. tuberculosis isolated from the sputum. Five patients showed an identical RFLP pattern. These results showed that the infection had arisen from one source. The disease of 4 patients (aged 74-103) seemed to be caused by exogenous reinfection. The elderly tend to have some complications and to be malnourished. These factors may be risk factors of tuberculosis reinfection for the elderly. (3) The community DOTS in the elderly: Yoko HASHIMOTO (Wakayama Prefecture Gobo Health Center) In Wakayama prefecture, we have established a standard assessment list of adherence for tuberculosis patients. To identify predictors of default in the elderly, we investigated assessment lists of tuberculosis patients registered in Gobo Health Center from 2004 to 2007. Factors associated with default were concurrent diseases, side effects, disability and no family support. We have developed a liaison critical pathway for tuberculosis in Gobo Health Center and Tanabe Health Center since 2007. Introducing the path, we could strengthen community medical cooperation and build a network to support adherence. Health center staff should expand the community DOTS in the elderly while establishing effective community collaboration. 
(4) The clinical issue of tuberculosis in the elderly: Takeshi KAWASAKI (Department of Respirology, Graduate School of Medicine, Chiba University, Department of Thoracic Disease, National Hospital Organization Chiba-East National Hospital) To identify the clinical issues of TB in the elderly, 139 cases were studied, of which 63 were elderly. Among the elderly TB patients there were many deaths and transfers out, so the clinical outcomes were poor, and some cases took a long time before transfer. It is important to inform doctors and people who care for the elderly that the elderly are at high risk of tuberculosis, to consider treatment of latent tuberculosis infection in high-risk groups, and to ensure that tuberculosis experts, local doctors, health care centers and geriatric facilities maintain close relations. (5) Problems and measures of tuberculosis in the elderly group: Masahiro ABE (National Hospital Organization Ehime National Hospital) The percentage of the aged is high among all tuberculosis patients, especially in rural areas compared to cities. I reported problems concerning tuberculosis treatment and ward management for elderly patients. During hospitalization, the management of underlying diseases and new complications besides tuberculosis treatment is critical. Dysphagia in particular makes it difficult to take anti-TB drugs and worsens the nutritional state. Rehabilitation of swallowing functions is effective in improving these conditions. To make discharge support more helpful, the support system, including a regional cooperation path, is expected to advance more widely and deeply.
Aquatic ecosystems are chronically exposed to natural radioactivity or to artificial radionuclides released by human activities (e.g., nuclear medicine and biology, nuclear industry, military applications). Should the nuclear industry expand in the future, radioactive environmental releases, under normal operating conditions or accidental ones, are expected to increase, which raises public concerns about possible consequences for the environment and human health. Radionuclide exposures may drive macromolecule alterations, and among macromolecules DNA is the major target of ionizing radiation. DNA damage, if not correctly repaired, may induce mutations, teratogenesis, and reproductive effects. As such, damage at the molecular level may have consequences at the population level. In this review, we present an overview of the literature dealing with the effects of radionuclides on DNA, development, and reproduction of aquatic organisms. The review focuses on the main radionuclides that are released by nuclear power plants under normal operating conditions, γ emitters and tritium. Additionally, we fitted nonlinear curves to the dose-response data provided in the reviewed publications and manuscripts, and thus obtained endpoints commonly associated with ecotoxicological studies, such as the EDR10. These were then used as a common metric for comparing the values and data published in the literature. The effects of tritium on aquatic organisms were reviewed for dose rates that ranged from 29 nGy/day to 29 Gy/day. Although beta emission from tritium decay presents a rather special risk of damage to DNA, genotoxicity induced by tritium has been scarcely studied. Most of the effects studied have related to reproduction and development. Species sensitivity and the form of tritium present are important factors that drive the ecotoxicity of tritium. 
We have concluded from this review that invertebrates are more sensitive to the effects of tritium than are vertebrates. Because several calculated EDR10 values are ten times lower than background levels of γ irradiation, the results of some studies either markedly call into question the adequacy of the benchmark value of 0.24 mGy/day for aquatic ecosystems that was recommended by Garnier-Laplace et al. (2006), or the dose rate estimates made in the original research, from which our EDR10 values were derived, were underestimated or inadequate. For γ irradiation, the effects of several different dose rates on aquatic organisms were reviewed, and these ranged from 1 mGy/day to 18 Gy/day. DNA damage from exposure to γ irradiation was studied more often than for tritium, but the major part of the literature addressed effects on reproduction and development. These data sets support the benchmark value of 0.24 mGy/day, which is recommended to protect aquatic ecosystems. RBEs, which describe the relative effectiveness of different radiation types in producing the same biological effect, were calculated using the available datasets. These RBE values ranged from 0.06 to 14.9, depending on the biological effect studied, and they had a mean of 3.1 ± 3.7 (standard deviation). This value is similar to the RBE factors of 2-3 recommended by international organizations responsible for providing guidance on radiation safety. Many knowledge gaps remain relative to the biological effects produced by exposure to tritium and γ emitters. Among these are: Dose calculations: this review highlights several EDR10 values that are below the normal range of background radiation. One explanation for this result is that dose rates were underestimated because of uncertainties linked to the heterogeneous distribution of tritium in cells. Therefore, the reliability of the concept of average dose to organisms must be addressed. 
Mechanisms of DNA DSB repair: very few studies address the most deleterious form of DNA damage, DNA double-strand breaks (DSBs). Future studies should focus on identifying impaired DNA DSB repair pathways and kinetics, in combination with developmental and reproductive effects. The transmission of genetic damage to offspring: this is of primary concern in the human health arena; however, there has been little work undertaken to assess the potential risk from germ cell mutagens in aquatic organisms, although this is one of the means of extrapolating effects from subcellular levels to populations. Reproductive behavior linked to alterations of endocrine function: despite the importance of reproduction for population dynamics, many key endpoints were scarcely addressed within this topic. Hence, there is, to our knowledge, only one study of courtship behavior in fish exposed to γ rays, while no studies of radionuclide effects on fish endocrine function exist. Recent technical advances in the field of endocrine disrupters can be used to assess the direct or indirect effects of radionuclides on endocrine function. Identifying whether resistance to radiation effects in the field results from adaptation or acclimation mechanisms: organisms may develop resistance to the toxic effects of high concentrations of radionuclides. Adaptation occurs at the population level by genetic selection for more resistant organisms. To date, very few field studies exist in which adaptation has been addressed, despite the fact that it represents an unknown influence on observed biological responses.
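The EDR10 endpoint used as the common metric in this review, and the RBE values formed from it, can be sketched numerically. The sketch below assumes a log-logistic dose-response model with hypothetical fitted parameters; the review fitted nonlinear curves to published dose-response data, but the model form and every number here are illustrative only, not values from the review.

```python
import math

# Illustrative log-logistic dose-response model: relative response declines
# from 1 (control) toward 0 as the dose rate increases. The parameters
# ed50 and slope stand in for values obtained by nonlinear curve fitting;
# every number here is hypothetical, not taken from the review.
def log_logistic(dose_rate, ed50, slope):
    return 1.0 / (1.0 + (dose_rate / ed50) ** slope)

def edr(effect_fraction, ed50, slope):
    """Dose rate producing a given fractional effect (0.10 gives the EDR10).

    Inverts the model: 1 - effect = 1 / (1 + (d / ed50) ** slope)
    =>  d = ed50 * (1 / (1 - effect) - 1) ** (1 / slope)
    """
    return ed50 * (1.0 / (1.0 - effect_fraction) - 1.0) ** (1.0 / slope)

# Hypothetical fitted parameters (dose rates in mGy/day).
edr10_gamma = edr(0.10, ed50=31.6, slope=0.74)
edr10_tritium = edr(0.10, ed50=10.0, slope=0.74)

# RBE: ratio of the reference (gamma) dose rate to the tritium dose rate
# that produces the same biological effect.
rbe = edr10_gamma / edr10_tritium
print(round(edr10_gamma, 2), round(rbe, 2))
```

With equal slopes the RBE reduces to the ratio of the ED50 parameters, which is why a single RBE can summarize the relative effectiveness of two radiation types across effect levels under this model.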
Operations on structures in the chest (usually the lungs) involve cutting between the ribs (thoracotomy). Severe post-thoracotomy pain can result from pleural (lung lining) and muscular damage, costovertebral joint (ribcage) disruption and intercostal nerve (nerves that run along the ribs) damage during surgery. Poor pain relief after surgery can impede recovery and increase the risks of developing complications such as lung collapse, chest infections and blood clots due to ineffective breathing and clearing of secretions. Effective management of acute pain following thoracotomy may prevent these complications and reduce the likelihood of developing chronic pain. A multi-modal approach to analgesia is widely employed by thoracic anaesthetists, using a combination of regional anaesthetic blockade and systemic analgesia, with both non-opioid and opioid medications and local anaesthetic blockade. There is some evidence that blocking the nerves as they emerge from the spinal column (paravertebral block, PVB) may be associated with a lower risk of major complications in thoracic surgery, but the majority of thoracic anaesthetists still prefer to use a thoracic epidural blockade (TEB) as analgesia for their patients undergoing thoracotomy. In order to bring about a change in practice, anaesthetists need a review that evaluates the risk of all major complications associated with thoracic epidural and paravertebral block in thoracotomy. To compare the two regional techniques of TEB and PVB in adults undergoing elective thoracotomy with respect to: 1. analgesic efficacy; 2. the incidence of major complications (including mortality); 3. the incidence of minor complications; 4. length of hospital stay; 5. cost effectiveness. 
We searched for studies in the Cochrane Central Register of Controlled Trials (CENTRAL 2013, Issue 9); MEDLINE via Ovid (1966 to 16 October 2013); EMBASE via Ovid (1980 to 16 October 2013); CINAHL via EBSCO host (1982 to 16 October 2013); and reference lists of retrieved studies. We handsearched the Journal of Cardiothoracic Surgery and Journal of Cardiothoracic and Vascular Anesthesia (16 October 2013). We reran the search on 31st January 2015. We found one additional study which is awaiting classification and will be addressed when we update the review. We included all randomized controlled trials (RCTs) comparing PVB with TEB in thoracotomy, including upper gastrointestinal surgery. We used standard methodological procedures expected by Cochrane. Two review authors (JY and SG) independently assessed the studies for inclusion and then extracted data as eligible for inclusion in qualitative and quantitative synthesis (meta-analysis). We included 14 studies with a total of 698 participants undergoing thoracotomy. There are two studies awaiting classification. The studies demonstrated high heterogeneity in insertion and use of both regional techniques, reflecting real-world differences in the anaesthesia techniques. Overall, the included studies have a moderate to high potential for bias, lacking details of randomization, group allocation concealment or arrangements to blind participants or outcome assessors. There was low to very low-quality evidence that showed no significant difference in 30-day mortality (2 studies, 125 participants; risk ratio (RR) 1.28, 95% confidence interval (CI) 0.39 to 4.23, P value = 0.68) and major complications (cardiovascular: 2 studies, 114 participants; hypotension RR 0.30, 95% CI 0.01 to 6.62, P value = 0.45; arrhythmias RR 0.36, 95% CI 0.04 to 3.29, P value = 0.36; myocardial infarction RR 3.19, 95% CI 0.13 to 76.42, P value = 0.47); respiratory: 5 studies, 280 participants; RR 0.62, 95% CI 0.26 to 1.52, P value = 0.30). 
There was moderate-quality evidence that showed comparable analgesic efficacy across all time points both at rest and after coughing or physiotherapy (14 studies, 698 participants). There was moderate-quality evidence that showed PVB had a better minor complication profile than TEB, including hypotension (8 studies, 445 participants; RR 0.16, 95% CI 0.07 to 0.38, P value < 0.0001), nausea and vomiting (6 studies, 345 participants; RR 0.48, 95% CI 0.30 to 0.75, P value = 0.001), pruritus (5 studies, 249 participants; RR 0.29, 95% CI 0.14 to 0.59, P value = 0.0005) and urinary retention (5 studies, 258 participants; RR 0.22, 95% CI 0.11 to 0.46, P value < 0.0001). There were insufficient data on chronic pain (six or 12 months). There was no difference found in length of hospital stay (3 studies, 124 participants). We found no studies that reported costs. Paravertebral blockade reduced the risks of developing minor complications compared to thoracic epidural blockade. Paravertebral blockade was as effective as thoracic epidural blockade in controlling acute pain. There was a lack of evidence for other outcomes. There was no difference in 30-day mortality, major complications, or length of hospital stay. There were insufficient data on chronic pain and costs. Results from this review should be interpreted with caution due to the heterogeneity of the included studies and the lack of reliable evidence. Future studies in this area need well-conducted, adequately-powered RCTs that focus not only on acute pain but also on major complications, chronic pain, length of stay and costs.
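The risk ratios with 95% confidence intervals quoted throughout these results can be reproduced from 2×2 event counts with the standard log-risk-ratio formula. A minimal sketch; the event counts below are hypothetical, not taken from the included trials:

```python
import math

# Sketch of the standard (log-scale) risk ratio calculation used in
# meta-analyses such as this review. The counts below are hypothetical,
# purely for illustration; they are not taken from the included trials.
def risk_ratio(events_a, total_a, events_b, total_b):
    """Risk ratio of group A versus group B with a 95% CI (delta method)."""
    rr = (events_a / total_a) / (events_b / total_b)
    # Standard error of log(RR)
    se_log_rr = math.sqrt(
        1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b
    )
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lo, hi

# Hypothetical counts: hypotension events in a PVB group versus a TEB group.
rr, lo, hi = risk_ratio(4, 220, 25, 225)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

An RR below 1 whose CI excludes 1, as in this illustrative case, indicates fewer events in the first group; the CI is computed on the log scale because log(RR) is approximately normally distributed.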
Meta-analyses based on individual participant data (IPD-MAs) allow more powerful and uniformly consistent analyses as well as better characterisation of subgroups and outcomes, compared to those which are based on aggregate data (AD-MAs) extracted from published trial reports. However, IPD-MAs are a larger undertaking requiring greater resources than AD-MAs. Researchers have compared results from IPD-MA against results obtained from AD-MA and reported conflicting findings. We present a methodology review to summarise this empirical evidence. To review systematically empirical comparisons of meta-analyses of randomised trials based on IPD with those based on AD extracted from published reports, to evaluate the level of agreement between IPD-MA and AD-MA and whether agreement is affected by differences in type of effect measure, trials and participants included within the IPD-MA and AD-MA, and whether analyses were undertaken to explore the main effect of treatment or a treatment effect modifier. An electronic search of the Cochrane Library (includes Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effectiveness, CENTRAL, Cochrane Methodology Register, HTA database, NHS Economic Evaluations Database), MEDLINE, and Embase was undertaken up to 7 January 2016. Potentially relevant articles that were known to any of the review authors and reference lists of retrieved articles were also checked. Studies reporting an empirical comparison of the results of meta-analyses of randomised trials using IPD with those using AD. Studies were included if sufficient numerical data, comparing IPD-MA and AD-MA, were available in their reports. Two review authors screened the title and abstract of identified studies with full-text publications retrieved for those identified as eligible or potentially eligible. A 'quality' assessment was done and data were extracted independently by two review authors with disagreements resolved by involving a third author. 
Data were summarised descriptively for comparisons where an estimate of effect measure and corresponding precision have been provided both for IPD-MA and for AD-MA in the study report. Comparisons have been classified according to whether identical effect measures, identical trials and patients had been used in the IPD-MA and the AD-MA, and whether the analyses were undertaken to explore the main effect of treatment, or to explore a potential treatment effect modifier. Effect measures were transformed to a standardised scale (z scores) and scatter plots generated to allow visual comparisons. For each comparison, we compared the statistical significance (at the 5% two-sided level) of an IPD-MA compared to the corresponding AD-MA and calculated the number of discrepancies. We examined discrepancies by type of analysis (main effect or modifier) and according to whether identical trials, patients and effect measures had been used by the IPD-MA and AD-MA. We calculated the average of differences between IPD-MA and AD-MA (z scores, ratio effect estimates and standard errors (of ratio effects)) and 95% limits of agreement. From the 9330 reports found by our searches, 39 studies were eligible for this review with effect estimate and measure of precision extracted for 190 comparisons of IPD-MA and AD-MA. We classified the quality of studies as 'no important flaws' (29 (74%) studies) or 'possibly important flaws' (10 (26%) studies). A median of 4 (interquartile range (IQR): 2 to 6) comparisons were made per study, with 6 (IQR 4 to 11) trials and 1225 (542 to 2641) participants in IPD-MAs and 7 (4 to 11) and 1225 (705 to 2541) for the AD-MAs. One hundred and forty-four (76%) comparisons were made on the main treatment effect meta-analysis and 46 (24%) made using results from analyses to explore treatment effect modifiers. There is agreement in statistical significance between the IPD-MA and AD-MA for 152 (80%) comparisons, 23 of which disagreed in direction of effect. 
There is disagreement in statistical significance for 38 (20%) comparisons with an excess proportion of IPD-MA detecting a statistically significant result that was not confirmed with AD-MA (28 (15%)), compared with 10 (5%) comparisons with a statistically significant AD-MA that was not confirmed by IPD-MA. This pattern of disagreement is consistent for the 144 main effect analyses but not for the 46 comparisons of treatment effect modifier analyses. Conclusions from some IPD-MA and AD-MA differed even when based on identical trials, participants (but not necessarily identical follow-up) and treatment effect measures. The average difference between IPD-MA and AD-MA in z scores, ratio effect estimates and standard errors is small but limits of agreement are wide and include important differences in both directions. Discrepancies between IPD-MA and AD-MA do not appear to increase as the differences between trials and participants increase. IPD offers the potential to explore additional, more thorough, and potentially more appropriate analyses compared to those possible with AD. But in many cases, similar results and conclusions can be drawn from IPD-MA and AD-MA. Therefore, before embarking on a resource-intensive IPD-MA, an AD-MA should initially be explored and researchers should carefully consider the potential added benefits of IPD.
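The agreement analysis described in this review, average of paired differences on a standardised (z score) scale with 95% limits of agreement, follows the Bland-Altman logic: take the paired differences and report mean ± 1.96 × SD. A minimal sketch with hypothetical paired z scores (not values extracted from the 190 comparisons):

```python
import math

# Sketch of the agreement analysis: paired differences between IPD-MA and
# AD-MA effect estimates on a standardised (z score) scale, summarised as
# the mean difference with 95% limits of agreement (mean +/- 1.96 * SD).
# The z scores below are hypothetical.
def limits_of_agreement(ipd, ad):
    diffs = [a - b for a, b in zip(ipd, ad)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean, mean - 1.96 * sd, mean + 1.96 * sd

ipd_z = [2.1, -0.5, 1.8, 3.0, 0.2, -1.1]   # hypothetical IPD-MA z scores
ad_z = [1.9, -0.2, 1.2, 3.3, 0.5, -0.8]    # matched AD-MA z scores
mean, lower, upper = limits_of_agreement(ipd_z, ad_z)
print(f"mean difference {mean:.2f}, limits of agreement ({lower:.2f}, {upper:.2f})")
```

A small mean difference with wide limits of agreement, as the review found, means the two approaches agree on average but can diverge importantly in either direction for individual comparisons.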
Infectious mononucleosis (IM) is a clinical syndrome, usually caused by the Epstein Barr virus (EBV), characterised by lymphadenopathy, fever and sore throat. Most cases of symptomatic IM occur in older teenagers or young adults. Usually IM is a benign self-limiting illness and requires only symptomatic treatment. However, occasionally the disease course can be complicated or prolonged and lead to decreased productivity in terms of school or work. Antiviral medications have been used to treat IM, but the use of antivirals for IM is controversial. They may be effective by preventing viral replication which helps to keep the virus inactive. However, there are no guidelines for antivirals in IM. To assess the effects of antiviral therapy for infectious mononucleosis (IM). We searched the Cochrane Central Register of Controlled Trials (CENTRAL, Issue 3, March 2016), which contains the Cochrane Acute Respiratory Infections (ARI) Group's Specialised Register, MEDLINE (1946 to 15 April 2016), Embase (1974 to 15 April 2016), CINAHL (1981 to 15 April 2016), LILACS (1982 to 15 April 2016) and Web of Science (1955 to 15 April 2016). We searched the World Health Organization (WHO) International Clinical Trials Registry Platform and ClinicalTrials.gov for completed and ongoing trials. We included randomised controlled trials (RCTs) comparing antivirals versus placebo or no treatment in IM. We included trials of immunocompetent participants of any age or sex with clinical and laboratory-confirmed diagnosis of IM, who had symptoms for up to 14 days. Our primary outcomes were time to clinical recovery and adverse events and side effects of medication. Secondary outcomes included duration of abnormal clinical examination, complications, viral shedding, health-related quality of life, days missing from school or work and economic outcomes. 
Two review authors independently assessed studies for inclusion, assessed the included studies' risk of bias and extracted data using a customised data extraction sheet. We used the GRADE criteria to rate the quality of the evidence. We pooled heterogeneous data where possible, and presented the results narratively where we could not statistically combine data. We included seven RCTs with a total of 333 participants in our review. Three trials studied hospitalised patients, two trials were conducted in an outpatient setting, while the trial setting was unclear in two studies. Participants' ages ranged from two years to young adults. The type of antiviral, administration route, and treatment duration varied between the trials. The antivirals in the included studies were acyclovir, valomaciclovir and valacyclovir. Follow-up varied from 20 days to six months. The diagnosis of IM was based on clinical symptoms and laboratory parameters. The risk of bias for all included studies was either unclear or high. The quality of evidence was graded as very low for all outcomes and so the results should be interpreted with caution. There were statistically significant improvements in the treatment group for two of the 12 outcomes. These improvements may be of limited clinical significance. There was a mean reduction in 'time to clinical recovery as assessed by physician' of five days in the treatment group but with wide confidence intervals (CIs) (95% CI -8.04 to -1.08; two studies, 87 participants). Prospective studies indicate that clinical signs and symptoms may take one month or more to resolve and that fatigue may be persistent in approximately 10% of patients at six-month follow-up, so this may not be a clinically meaningful result. Trial results for the outcome 'adverse events and side effects of medication' were reported narratively in only five studies. 
In some reports authors were unsure whether an adverse event was related to medication or complication of disease. These results could not be pooled due to the potential for double counting results but overall, the majority of trials reporting this outcome did not find any significant difference between treatment and control groups. There was a mean reduction in 'duration of lymphadenopathy' of nine days (95% CI -11.75 to -6.14, two studies, 61 participants) in favour of the treatment group. In terms of viral shedding, the overall effect from six studies was that viral shedding was suppressed while on antiviral treatment, but this effect was not sustained when treatment stopped. For all other outcomes there was no statistically significant difference between antiviral treatment and control groups. The effectiveness of antiviral agents (acyclovir, valomaciclovir and valacyclovir) in acute IM is uncertain. The quality of the evidence is very low. The majority of included studies were at unclear or high risk of bias and so questions remain about the effectiveness of this intervention. Although two of the 12 outcomes have results that favour treatment over control, the quality of the evidence of these results is very low and may not be clinically meaningful. Alongside the lack of evidence of effectiveness, decision makers need to consider the potential adverse events and possible associated costs, and antiviral resistance. Further research in this area is warranted.
Decision aids are interventions that support patients by making their decisions explicit, providing information about options and associated benefits/harms, and helping clarify congruence between decisions and personal values. To assess the effects of decision aids in people facing treatment or screening decisions. Updated search (2012 to April 2015) in CENTRAL; MEDLINE; Embase; PsycINFO; and grey literature; includes CINAHL to September 2008. We included published randomized controlled trials comparing decision aids to usual care and/or alternative interventions. For this update, we excluded studies comparing detailed versus simple decision aids. Two reviewers independently screened citations for inclusion, extracted data, and assessed risk of bias. Primary outcomes, based on the International Patient Decision Aid Standards (IPDAS), were attributes related to the choice made and the decision-making process. Secondary outcomes were behavioural, health, and health system effects. We pooled results using mean differences (MDs) and risk ratios (RRs), applying a random-effects model. We conducted a subgroup analysis of studies that used the patient decision aid to prepare for the consultation and of those that used it in the consultation. We used GRADE to assess the strength of the evidence. We included 105 studies involving 31,043 participants. This update added 18 studies and removed 28 previously included studies comparing detailed versus simple decision aids. During the 'Risk of bias' assessment, we rated two items (selective reporting and blinding of participants/personnel) as mostly unclear due to inadequate reporting. 
Twelve of 105 studies were at high risk of bias. With regard to the attributes of the choice made, decision aids increased participants' knowledge (MD 13.27/100; 95% confidence interval (CI) 11.32 to 15.23; 52 studies; N = 13,316; high-quality evidence), accuracy of risk perceptions (RR 2.10; 95% CI 1.66 to 2.66; 17 studies; N = 5096; moderate-quality evidence), and congruency between informed values and care choices (RR 2.06; 95% CI 1.46 to 2.91; 10 studies; N = 4626; low-quality evidence) compared to usual care. Regarding attributes related to the decision-making process and compared to usual care, decision aids decreased decisional conflict related to feeling uninformed (MD -9.28/100; 95% CI -12.20 to -6.36; 27 studies; N = 5707; high-quality evidence), indecision about personal values (MD -8.81/100; 95% CI -11.99 to -5.63; 23 studies; N = 5068; high-quality evidence), and the proportion of people who were passive in decision making (RR 0.68; 95% CI 0.55 to 0.83; 16 studies; N = 3180; moderate-quality evidence). Decision aids reduced the proportion of undecided participants and appeared to have a positive effect on patient-clinician communication. Moreover, those exposed to a decision aid were either equally or more satisfied with their decision, the decision-making process, and/or the preparation for decision making compared to usual care. Decision aids also reduced the number of people choosing major elective invasive surgery in favour of more conservative options (RR 0.86; 95% CI 0.75 to 1.00; 18 studies; N = 3844), but this reduction reached statistical significance only after removing the study on prophylactic mastectomy for breast cancer gene carriers (RR 0.84; 95% CI 0.73 to 0.97; 17 studies; N = 3108). 
Compared to usual care, decision aids reduced the number of people choosing prostate-specific antigen screening (RR 0.88; 95% CI 0.80 to 0.98; 10 studies; N = 3996) and increased those choosing to start new medications for diabetes (RR 1.65; 95% CI 1.06 to 2.56; 4 studies; N = 447). For other testing and screening choices, mostly there were no differences between decision aids and usual care. The median effect of decision aids on length of consultation was 2.6 minutes longer (24 versus 21; 7.5% increase). The costs of the decision aid group were lower in two studies and similar to usual care in four studies. People receiving decision aids do not appear to differ from those receiving usual care in terms of anxiety, general health outcomes, and condition-specific health outcomes. Studies did not report adverse events associated with the use of decision aids. In subgroup analysis, we compared results for decision aids used in preparation for the consultation versus during the consultation, finding similar improvements in pooled analysis for knowledge and accurate risk perception. For other outcomes, we could not conduct formal subgroup analyses because there were too few studies in each subgroup. Compared to usual care across a wide variety of decision contexts, people exposed to decision aids feel more knowledgeable, better informed, and clearer about their values, and they probably have a more active role in decision making and more accurate risk perceptions. There is growing evidence that decision aids may improve values-congruent choices. There are no adverse effects on health outcomes or satisfaction. New for this update is evidence indicating improved knowledge and accurate risk perceptions when decision aids are used either within or in preparation for the consultation. Further research is needed on the effects on adherence with the chosen option, cost-effectiveness, and use with lower literacy populations.
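The pooled RRs and MDs above come from a random-effects model. As a hedged sketch of how such pooling works (DerSimonian-Laird estimation on the log-RR scale, using hypothetical per-study event counts, not the review's actual data):

```python
import math

def log_rr(a, n1, c, n2):
    """Log risk ratio and its variance from events/total in two arms."""
    rr = (a / n1) / (c / n2)
    var = 1/a - 1/n1 + 1/c - 1/n2   # standard large-sample variance of log RR
    return math.log(rr), var

def dersimonian_laird(effects):
    """Random-effects pooling of (log effect, variance) pairs."""
    y = [e for e, _ in effects]
    v = [s for _, s in effects]
    w = [1/vi for vi in v]                                  # fixed-effect weights
    ybar = sum(wi*yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi*(yi - ybar)**2 for wi, yi in zip(w, y))      # Cochran's Q
    df = len(y) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                           # between-study variance
    wstar = [1/(vi + tau2) for vi in v]                     # random-effects weights
    pooled = sum(wi*yi for wi, yi in zip(wstar, y)) / sum(wstar)
    se = math.sqrt(1 / sum(wstar))
    return math.exp(pooled), math.exp(pooled - 1.96*se), math.exp(pooled + 1.96*se)

# hypothetical studies: (events, total) in decision-aid arm vs usual-care arm
studies = [(30, 100, 20, 100), (45, 150, 35, 150), (12, 60, 10, 60)]
rr, lo, hi = dersimonian_laird([log_rr(*s) for s in studies])
```

When between-study heterogeneity (Q) is no larger than its degrees of freedom, tau² is zero and the result coincides with a fixed-effect pool.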
Exposure to light plays a crucial role in biological processes, influencing mood and alertness. Daytime workers may be exposed to insufficient or inappropriate light during daytime, leading to mood disturbances and decreases in levels of alertness. To assess the effectiveness and safety of lighting interventions to improve alertness and mood in daytime workers. We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, seven other databases; ClinicalTrials.gov and the World Health Organization trials portal up to January 2018. We included randomised controlled trials (RCTs), and non-randomised controlled before-after trials (CBAs) that employed a cross-over or parallel-group design, focusing on any type of lighting interventions applied for daytime workers. Two review authors independently screened references in two stages, extracted outcome data and assessed risk of bias. We used standardised mean differences (SMDs) and 95% confidence intervals (CI) to pool data from different questionnaires and scales assessing the same outcome across different studies. We combined clinically homogeneous studies in a meta-analysis. We used the GRADE system to rate quality of evidence. The search yielded 2844 references. After screening titles and abstracts, we considered 34 full text articles for inclusion. We scrutinised reports against the eligibility criteria, resulting in the inclusion of five studies (three RCTs and two CBAs) with 282 participants altogether. 
These studies evaluated four types of comparisons: cool-white light, technically known as high correlated colour temperature (CCT) light versus standard illumination; different proportions of indirect and direct light; individually applied blue-enriched light versus no treatment; and individually applied morning bright light versus afternoon bright light for subsyndromal seasonal affective disorder. We found no studies comparing one level of illuminance versus another. We found two CBA studies (163 participants) comparing high CCT light with standard illumination. By pooling their results via meta-analysis we found that high CCT light may improve alertness (SMD -0.69, 95% CI -1.28 to -0.10; Columbia Jet Lag Scale and the Karolinska Sleepiness Scale) when compared to standard illumination. In one of the two CBA studies with 94 participants there was no difference in positive mood (mean difference (MD) 2.08, 95% CI -0.1 to 4.26) or negative mood (MD -0.45, 95% CI -1.84 to 0.94) assessed using the Positive and Negative Affect Schedule (PANAS) scale. High CCT light may have fewer adverse events than standard lighting (one CBA; 94 participants). Both studies were sponsored by the industry. We graded the quality of evidence as very low. We found no studies comparing light of a particular illuminance and light spectrum or CCT versus another combination of illuminance and light spectrum or CCT. We found no studies comparing daylight versus artificial light. We found one RCT (64 participants) comparing the effects of different proportions of direct and indirect light: 100% direct lighting, 70% direct lighting plus 30% indirect lighting, 30% direct lighting plus 70% indirect lighting and 100% indirect lighting. There was no substantial difference in mood, as assessed by the Beck Depression Inventory, or in adverse events, such as ocular, reading or concentration problems, in the short or medium term.
We graded the quality of evidence as low. We found two RCTs comparing individually administered light versus no treatment. According to one RCT with 25 participants, blue-enriched light individually applied for 30 minutes a day may enhance alertness (MD -3.30, 95% CI -6.28 to -0.32; Epworth Sleepiness Scale) and may improve mood (MD -4.8, 95% CI -9.46 to -0.14; Beck Depression Inventory). We graded the quality of evidence as very low. One RCT with 30 participants compared individually applied morning bright light versus afternoon bright light for subsyndromal seasonal affective disorder. There was no substantial difference in alertness levels (MD 7.00, 95% CI -10.18 to 24.18), seasonal affective disorder symptoms (RR 1.60, 95% CI 0.81 to 3.20; number of participants presenting with a decrease of at least 50% in SIGH-SAD scores) or frequency of adverse events (RR 0.53, 95% CI 0.26 to 1.07). Among all participants, 57% had a reduction of at least 50% in their SIGH-SAD score. We graded the quality of evidence as low. Publication bias could not be assessed for any of these comparisons. There is very low-quality evidence based on two CBA studies that high CCT light may improve alertness, but not mood, in daytime workers. There is very low-quality evidence based on one CBA study that high CCT light may also cause less irritability, eye discomfort and headache than standard illumination. There is low-quality evidence based on one RCT that different proportions of direct and indirect light in the workplace do not affect alertness or mood. There is very low-quality evidence based on one RCT that individually applied blue-enriched light improves both alertness and mood. There is low-quality evidence based on one RCT that individually administered bright light during the afternoon is as effective as morning exposure for improving alertness and mood in subsyndromal seasonal affective disorder.
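The SMDs reported above allow outcomes measured on different questionnaires (e.g. the Columbia Jet Lag Scale and the Karolinska Sleepiness Scale) to be pooled on a common scale. A minimal sketch, using hypothetical group summaries rather than the included studies' data, of computing a standardised mean difference with Hedges' small-sample correction:

```python
import math

def smd(m1, sd1, n1, m2, sd2, n2):
    """Standardised mean difference (Hedges' g) from two group summaries."""
    # pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1)*sd1**2 + (n2 - 1)*sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                       # Cohen's d
    j = 1 - 3 / (4*(n1 + n2 - 2) - 1)        # Hedges' small-sample correction
    g = j * d
    # approximate variance of g, used for meta-analytic weighting
    var = (n1 + n2)/(n1*n2) + g**2 / (2*(n1 + n2))
    return g, var

# hypothetical sleepiness scores (lower = more alert), two different scales
g1, v1 = smd(4.2, 1.5, 40, 5.1, 1.6, 42)     # study 1: 0-10 scale
g2, v2 = smd(10.3, 3.0, 35, 12.1, 3.2, 36)   # study 2: 0-24 scale
```

Because both effects land on the same standardised scale, they can then be combined by inverse-variance weighting as in any meta-analysis.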
Objective: To analyze the relationship between serum lactic acid value and risk of death in patients with extensive burn during shock stage and the related influencing factors. Methods: Clinical data of 127 patients (111 males and 16 females) with extensive burn admitted to Institute of Burn Research of the First Affiliated Hospital of Army Medical University from January 2009 to December 2013 and Department of Plastic Surgery and Burns of the Affiliated Hospital of Southwest Medical University from January 2016 to December 2018, who met the admission criteria, were retrospectively analyzed. The patients aged 21 to 62 years, with total burn area more than 50% total body surface area. All patients were treated with antishock therapy after admission. (1) According to the treatment outcome, the patients were divided into survival group (n=98) and death group (n=29). The gender, age, total burn area, partial-thickness burn area, full-thickness burn area, abbreviated burn severity index (ABSI), admission time after injury, number of patients with inhalation injury, number of patients with acute renal failure, and serum lactic acid values on admission and at post admission hour (PAH) 12, 24, 36, and 48 were recorded. (2) According to the optimal positive cut-off value of serum lactic acid 48 hours after admission, the patients were divided into high lactic acid group and normal lactic acid group.
Age, gender, total burn area, indexes at PAH 48 including urea nitrogen, creatinine, alanine aminotransferase (ALT), aspartate aminotransferase (AST), total serum bilirubin, alkaline phosphatase (ALP), albumin, white blood cell count, platelet count, lymphocyte count, prothrombin time (PT), hematocrit value, oxygenation index, respiratory index (RI), the alveolar-arterial oxygen partial pressure difference, mean arterial pressure (MAP) at PAH 48, the average urine volume within 48 hours after admission, the total volume of intravenous fluid infusion within 48 hours after admission, the volume of fluid infusion per kilogram of body mass within the first 24 hours after admission, the volume of fluid infusion per one percent of body surface area per kilogram of body mass within the first 24 hours after admission, the volume of urine per kilogram of body mass per hour within the first 24 hours after admission, and the percentage of hospital death were recorded. Data were processed with t test, chi-square test, and Fisher's exact probability test. Cox regression analysis was used to screen independent risk factors affecting the prognosis of patients. Receiver operating characteristic curve (ROC) of serum lactic acid value at PAH 48 of 127 patients was drawn to predict patients' death and determine the optimal positive cut-off value. Multivariate logistic regression analysis was used to screen independent risk factors causing increase of serum lactic acid. Results: (1) There were significantly statistical differences in total burn area, full-thickness burn area, and ABSI of patients between survival group and death group (t=6.257, 4.476, 5.727, P<0.01), while other indexes between the two groups were close.
(2) The serum values of lactic acid of patients in death group on admission and at PAH 12, 24, 36, and 48 were (4.00±0.28), (4.50±0.26), (4.02±0.31), (3.48±0.22), (3.40±0.19) mmol/L, respectively, which were significantly higher than those in survival group [(3.30±0.21), (3.20±0.19), (2.33±0.17), (1.85±0.18), (1.50±0.09) mmol/L, t=14.552, 29.603, 38.133, 40.648, 74.973, P<0.05 or P<0.01]. (3) Cox regression analysis showed that the serum value of lactic acid at PAH 48 was the independent risk factor affecting the prognosis of patients, with risk ratio of 1.853 and 95% confidence interval of 1.342-2.559, P<0.01. (4) The total area under ROC of serum value of lactic acid at PAH 48 to predict death of 127 patients was 0.811, with 95% confidence interval of 0.699-0.924, P<0.01. The optimal positive cut-off value of serum value of lactic acid was 1.75 mmol/L, with sensitivity of 75.0% and specificity of 79.5% for predicting death. (5) There were significantly statistical differences in total burn area, ALT, AST, ALP, PT, total serum bilirubin, total volume of intravenous fluid infusion within 48 hours after admission, volume of fluid infusion per kilogram of body mass within the first 24 hours after admission, and percentage of hospital deaths of patients between high lactic acid group (n=34) and normal lactic acid group (n=93), t=3.592, 6.797, 10.367, 2.089, 2.880, 4.517, 2.984, 4.044, χ²=58.498, P<0.05 or P<0.01, while other indexes were close between the two groups. (6) Multivariate logistic regression analysis showed that AST and total serum bilirubin were independent risk factors for increase of serum lactic acid, with odds ratios of 1.021 and 1.064 and 95% confidence intervals of 1.001-1.040 and 1.001-1.132, P<0.05. Conclusions: Serum value of lactic acid at PAH 48 can independently predict the death of patients with extensive burns.
Liver injury is an important risk factor causing hyperlacticemia during burn shock stage. Widespread increase of vascular permeability and large amount of fluid resuscitation are the core factors leading to aggravation of abdominal organ injury.
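An optimal positive cut-off on an ROC curve, such as the 1.75 mmol/L reported here (sensitivity 75.0%, specificity 79.5%), is conventionally chosen by maximising Youden's index (sensitivity + specificity - 1). A sketch with hypothetical 48-hour lactate values and outcomes, not the study's patient data:

```python
def youden_cutoff(values, died):
    """Pick the cutoff maximising Youden's J = sensitivity + specificity - 1."""
    best = (None, -1.0, 0.0, 0.0)
    for cut in sorted(set(values)):
        tp = sum(1 for v, d in zip(values, died) if d and v >= cut)
        fn = sum(1 for v, d in zip(values, died) if d and v < cut)
        tn = sum(1 for v, d in zip(values, died) if not d and v < cut)
        fp = sum(1 for v, d in zip(values, died) if not d and v >= cut)
        sens = tp / (tp + fn)    # proportion of deaths above the cutoff
        spec = tn / (tn + fp)    # proportion of survivors below the cutoff
        j = sens + spec - 1
        if j > best[1]:
            best = (cut, j, sens, spec)
    return best

# hypothetical lactate values (mmol/L) and death outcomes (1 = died)
lactate = [0.9, 1.2, 1.4, 1.6, 1.7, 1.8, 2.0, 2.5, 3.1, 3.6]
died    = [0,   0,   0,   0,   0,   1,   1,   1,   1,   0]
cut, j, sens, spec = youden_cutoff(lactate, died)
```

Each candidate cutoff corresponds to one point on the ROC curve; the area under that curve (0.811 in the study) summarises discrimination across all cutoffs at once.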
Objective: To evaluate the efficacy and safety of fecal microbiota transplantation (FMT) for intestinal disorders. Methods: A retrospectively descriptive cohort study was carried out. Clinical data of 2,010 patients who underwent FMT and received follow-up for more than 3 months from May 2014 to November 2018 were collected, including 1,206 cases from Tongji University Shanghai Tenth People's Hospital and 804 cases from Nanjing Eastern Military General Hospital. Of the 2,010 patients, 797 were male and 1,213 were female, with a mean age of (49.4±16.5) years old. Inclusion criteria were those with indications for FMT and voluntary treatment of FMT. Pregnant or lactating women, patients with end-stage disease, cases who were participating or participated in other clinical trials within 3 months, and patients with previous bowel history of pathogen infection, oral antibiotics or proton pump inhibitors (PPI) for the recent 2 weeks, and those at immunosuppressive state were excluded. Informed consent was obtained from the enrolled patients and their families. There were 1,356 cases of constipation, 175 cases of inflammatory bowel disease, 148 cases of chronic diarrhea, 127 cases of radiation enteritis, 119 cases of irritable bowel syndrome, and 85 cases of autism (complicating with intestinal disorders). FMT donor requirements: (1) 18 to 30 years old non-relatives, non-pregnant healthy adults with healthy lifestyle and good eating habits as volunteers to participate in fecal donation; (2) no administration of antibiotics within 3 months; (3) no chronic diseases such as constipation, irritable bowel syndrome, inflammatory bowel disease, etc., no autoimmune disease, not in immunosuppressive state, no history of malignant disease; (4) negative pathogen examination of infectious diseases (hepatitis B virus, hepatitis C virus, syphilis, HIV, etc.); (5) negative fecal examination (C. difficile, dysentery bacillus, Shigella, Campylobacter, parasites, etc.).
The donor requirements after enrollment: (1) physical examination was reviewed once every two months, and the result still met the above requirements; (2) 16S rRNA sequencing was performed for every fecal donation in order to ensure that the composition and diversity of the fecal flora was stable and reliable. The preparation of the stool suspension referred to the Amsterdam criteria and the preparation process was less than 1 hour. The preparation of the FMT capsule was processed by pre-freezing the stool suspension after the preparation of the above suspension, and the frozen sample was transferred into a freeze dryer for freezing. The dried and lyophilized powder was encapsulated in capsules, and the capsule shell was made of acid-resistant hypromellose capsule (No.0) and pediatric-specific capsule (No.3), sealed and packaged in a -20℃ refrigerator. Three ways of accepting FMT treatment pathways included 6-day transplantation after the placement of the nasointestinal tube, 6-day oral FMT capsule transplantation and one-time transplantation through colonoscopy. Intestinal preparation (nasointestinal tube feeding of polyethylene glycol until watery stool) was carried out before transplantation. Other treatments were stopped during treatment and follow-up, and any medication was not recommended when necessary. Results: Of the 2,010 patients, 1,497 cases received nasointestinal tube transplantation (nasointestinal tube group), 452 cases oral capsule transplantation (oral capsule group) and 61 cases colonoscopy (colonoscopy group).
At 3 time points of 3, 12, and 36 months after FMT, the clinical cure rates and the clinical improvement rates were 41.3% (560/1,356), 35.2% (320/909), 31.4% (69/220), and 29.0% (393/1,356), 27.8% (253/909), 29.1% (64/220), respectively in constipation patients; 33.1% (58/175), 29.9% (35/117), 24.5% (12/49), and 31.4% (55/175), 27.4% (32/117), 57.1% (28/49), respectively in inflammatory bowel disease patients; 87.8% (130/148), 81.8% (81/99), 78.3% (36/46), and 8.1% (12/148), 7.1% (7/99), 4.3% (2/46), respectively in chronic diarrhea patients; 61.4% (78/127), 56.5% (48/85), 47.6% (20/42), and 21.2% (27/127), 15.3% (13/85), 14.3% (6/42), respectively in radiation enteritis patients; 53.8% (64/119), 45.0% (36/80), 6/15, and 21.0% (25/119), 26.2% (21/80), 4/15, respectively in irritable bowel syndrome patients; 23.5% (20/85), 22.8% (13/57), 20.0% (5/25), and 55.3% (47/85), 49.1% (28/57), 40.0% (10/25), respectively in autism patients. Meanwhile the clinical cure rates and the clinical improvement rates at 3, 12, and 36 months were 47.7% (714/1,497), 42.8% (425/994), 39.1% (128/327), and 29.1% (436/1,497), 27.0% (268/994), 28.1% (92/327), respectively in the nasointestinal tube group; 38.7% (175/452), 30.2% (91/301), 33.3% (16/48), and 24.3% (110/452), 26.2% (79/301), 25.0% (12/48), respectively in the oral capsule group; 34.4% (21/61), 32.7% (17/52), 18.2% (4/22), and 21.3% (13/61), 13.5% (7/52), 45.5% (10/22), respectively in colonoscopy group. No serious adverse events occurred during treatment and follow-up period. The adverse event of nasointestinal tube group presented higher ratio of discomfort in respiratory tract accounting for 13.1% (196/1,497); the oral capsule group had a higher proportion of nausea and vomiting when swallowing capsules accounting for 7.1% (32/452); the colonoscopy group was mainly diarrhea, accounting for 37.7% (23/61).
The above symptoms disappeared after the nasointestinal tube was removed, or after treatment ended, or within 1 to 3 days after hospitalization. Conclusion: FMT is a safe and effective method for the treatment of intestinal dysfunction.
Objective: To explore the effects of autologous platelet-rich plasma (PRP) in the repair of soft tissue defects of rabbits with free flap. Methods: Thirty 6-month-old New Zealand white rabbits, male and female unlimited, were used to harvest blood from the heart. PRP was prepared by Aghaloo method, then free flap model with size of 5 cm×3 cm was reproduced on each ear of the rabbit. According to the random number table, one ear of each rabbit was recruited to PRP group, and the other ear was recruited to normal saline group. The base of flap on rabbit ear in PRP group was evenly spread with 1.0 mL autologous PRP, and equivalent volume of normal saline was applied to that in normal saline group. Then, the flap was replanted in situ. On post surgery day (PSD) 2, 3, 5, 7, and 14, 6 rabbits in each group were taken. The survival of flap was observed and recorded. The morphology of the basal tissue of flap was observed by hematoxylin-eosin staining. The expressions of CD31 and α smooth muscle actin (α-SMA) in the basal tissue of flap were detected by immunofluorescence method. Another 6-month-old male New Zealand white rabbit without making flap under the same experimental conditions was used for harvesting whole blood and preparing PRP. Then blood platelet count in whole blood and PRP was determined, and the content of vascular endothelial growth factor (VEGF) and transforming growth factor β (TGF-β) was detected by double-antibody sandwich enzyme-linked immunosorbent assay. Data were processed with analysis of variance of factorial design, paired sample t test, and Bonferroni correction. Results: (1) On PSD 2, the flaps of wounds of rabbits in PRP group were reddish and adhered well to the basal tissue; the flaps of wounds of rabbits in normal saline group were dark red and poorly attached to the basal tissue.
On PSD 3, the flaps of wounds of rabbits in PRP group were ruddy and closely adhered to the basal tissue; the flaps of wounds of rabbits in normal saline group were scattered in the plaque-like dark red and generally attached to the base. On PSD 5, the flaps of wounds of rabbits in PRP group were reddish and closely adhered to the basal tissue, and the flaps were alive; while flaps of wounds of rabbits in normal saline group were rosy and closely adhered to the basal tissue. On PSD 7, the surface of flaps of wounds of rabbits in PRP group was covered with a medium amount of rabbit hair. The color of flap was similar to that of the surrounding skin. The flaps of wounds of rabbits in normal saline group were generally attached to the base, and the surface was only covered with a small amount of fluff. On PSD 14, the incisions were healed well in PRP group, while small wounds in normal saline group were not healed. (2) On PSD 2, inflammatory cell infiltration was observed in flaps of wounds of rabbits in both groups. On PSD 3, the flaps of wounds of rabbits in PRP group showed neovascularization, with less interstitial hemorrhage; while there were less neovascularization in the flaps of wounds of rabbits in normal saline group. On PSD 5, a medium number of inflammatory cell infiltration and a small amount of new microvessels were observed in flaps of wounds of rabbits in normal saline group. Many fibroblasts, a small amount of inflammatory cells, and scattered new microvessels were observed in flaps of wounds of rabbits in PRP group. On PSD 7, the number of new microvessels in normal saline group was significantly lower than that in PRP group. On PSD 14, the new microvessels in the flaps of wounds of rabbits in PRP group gradually matured, and a large number of fibroblasts distributed around them. Some of the newly formed microvessels in the flaps of wounds of rabbits in normal saline group were mature, and the healing was slower than that of PRP group. 
(3) On PSD 2, 3, 5, 7, and 14, the expressions of CD31 and α-SMA in the basal tissue of flaps of wounds of rabbits in PRP group were significantly higher than those in normal saline group (t=10.133, 5.444, 9.450, 6.986, 8.394, 14.896, 10.328, 9.295, 13.902, 10.814, P<0.01). (4) The platelet count in activated PRP of rabbits was (2,863±962)×10⁹/L, which was significantly higher than (393±49)×10⁹/L in whole blood (t=7.690, P<0.05). (5) The content of VEGF and TGF-β in activated PRP of rabbits was (564.3±3.2) and (1,143±251) pg/mL, which was significantly higher than (99.7±0.4) and (274±95) pg/mL in whole blood, respectively (t=287.390, 9.648, P<0.05 or P<0.01). Conclusions: PRP of rabbits contains high concentrations of VEGF and TGF-β. Therefore, PRP can effectively promote microvascular regeneration in free flap tissue and accelerate the survival of free flap.
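The paired-sample t test used in this design exploits the matched structure (PRP-treated ear versus saline ear of the same rabbit), testing whether the mean within-rabbit difference is zero. A minimal sketch with hypothetical expression scores (the data below are illustrative, not the study's measurements):

```python
import math

def paired_t(x, y):
    """Paired-sample t statistic and degrees of freedom for matched pairs."""
    diffs = [a - b for a, b in zip(x, y)]   # within-pair differences
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean)**2 for d in diffs) / (n - 1)   # sample variance of diffs
    t = mean / math.sqrt(var / n)           # t on n-1 degrees of freedom
    return t, n - 1

# hypothetical CD31 expression scores: PRP ear vs saline ear, 6 rabbits
prp    = [8.1, 7.4, 9.0, 8.6, 7.9, 8.3]
saline = [5.2, 5.9, 6.1, 5.5, 6.0, 5.7]
t, df = paired_t(prp, saline)
```

Pairing within rabbits removes between-animal variability from the error term, which is why the reported t values can be large even with 6 animals per time point.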
Anaemia is a condition where the number of red blood cells (and consequently their oxygen-carrying capacity) is insufficient to meet the body's physiological needs. Fortification of wheat flour is deemed a useful strategy to reduce anaemia in populations. To determine the benefits and harms of wheat flour fortification with iron alone or with other vitamins and minerals on anaemia, iron status and health-related outcomes in populations over two years of age. We searched CENTRAL, MEDLINE, Embase, CINAHL, 21 other databases and two trials registers up to 21 July 2020, together with contacting key organisations to identify additional studies. We included cluster- or individually-randomised controlled trials (RCTs) carried out among the general population from any country, aged two years and above. The interventions were fortification of wheat flour with iron alone or in combination with other micronutrients. We included trials comparing any type of food item prepared from flour fortified with iron of any variety of wheat. Two review authors independently screened the search results and assessed the eligibility of studies for inclusion, extracted data from included studies and assessed risks of bias. We followed Cochrane methods in this review. Our search identified 3538 records, after removing duplicates. We included 10 trials, involving 3319 participants, carried out in Bangladesh, Brazil, India, Kuwait, Philippines, South Africa and Sri Lanka. We identified two ongoing studies and one study is awaiting classification. The duration of interventions varied from 3 to 24 months. One study was carried out among adult women and one trial among both children and nonpregnant women. Most of the included trials were assessed as low or unclear risk of bias for key elements of selection, performance or reporting bias.
Three trials used 41 mg to 60 mg iron/kg flour, three trials used less than 40 mg iron/kg and three trials used more than 60 mg iron/kg flour. One trial used various iron levels based on type of iron used: 80 mg/kg for electrolytic and reduced iron and 40 mg/kg for ferrous fumarate. All included studies contributed data for the meta-analyses. Iron-fortified wheat flour with or without other micronutrients added versus wheat flour (no added iron) with the same other micronutrients added: Iron-fortified wheat flour with or without other micronutrients added versus wheat flour (no added iron) with the same other micronutrients added may reduce by 27% the risk of anaemia in populations (risk ratio (RR) 0.73, 95% confidence interval (CI) 0.55 to 0.97; 5 studies, 2315 participants; low-certainty evidence). It is uncertain whether iron-fortified wheat flour with or without other micronutrients reduces iron deficiency (RR 0.46, 95% CI 0.20 to 1.04; 3 studies, 748 participants; very low-certainty evidence) or increases haemoglobin concentrations (in g/L) (mean difference (MD) 2.75, 95% CI 0.71 to 4.80; 8 studies, 2831 participants; very low-certainty evidence). No trials reported data on adverse effects in children (including constipation, nausea, vomiting, heartburn or diarrhoea), except for risk of infection or inflammation at the individual level. The intervention probably makes little or no difference to the risk of infection or inflammation at individual level as measured by C-reactive protein (CRP) (MD 0.04, 95% CI -0.02 to 0.11; 2 studies, 558 participants; moderate-certainty evidence). Iron-fortified wheat flour with other micronutrients added versus unfortified wheat flour (nil micronutrients added): It is unclear whether wheat flour fortified with iron, in combination with other micronutrients decreases anaemia (RR 0.77, 95% CI 0.41 to 1.46; 2 studies, 317 participants; very low-certainty evidence).
The intervention probably reduces the risk of iron deficiency (RR 0.73, 95% CI 0.54 to 0.99; 3 studies, 382 participants; moderate-certainty evidence) and it is unclear whether it increases average haemoglobin concentrations (MD 2.53, 95% CI -0.39 to 5.45; 4 studies, 532 participants; very low-certainty evidence). No trials reported data on adverse effects in children. Nine out of 10 trials reported sources of funding, with most having multiple sources. Funding source does not appear to have distorted the results in any of the assessed trials. Fortification of wheat flour with iron (in comparison to unfortified flour, or where both groups received the same other micronutrients) may reduce anaemia in the general population above two years of age, but its effects on other outcomes are uncertain. Iron-fortified wheat flour in combination with other micronutrients, in comparison with unfortified flour, probably reduces iron deficiency, but its effects on other outcomes are uncertain. None of the included trials reported data on adverse side effects except for risk of infection or inflammation at the individual level. The effects of this intervention on other health outcomes are unclear. Future studies at low risk of bias should aim to measure all important outcomes, and to further investigate which variants of fortification, including the role of other micronutrients as well as types of iron fortification, are more effective, and for whom.
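Study-level risk ratios such as RR 0.73 for anaemia are computed from event counts in each arm, with the confidence interval constructed on the log scale. A hedged sketch using hypothetical counts chosen to give a similar effect size (not counts from the included trials):

```python
import math

def risk_ratio(a, n1, c, n2):
    """Risk ratio with 95% CI from events/total in intervention vs control arms."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)   # standard error on the log scale
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# hypothetical anaemia counts: 80/500 in the fortified arm, 110/500 unfortified
rr, lo, hi = risk_ratio(80, 500, 110, 500)
```

When the whole interval lies below 1, as here, the intervention is associated with reduced risk; an interval crossing 1 (like RR 0.46, 95% CI 0.20 to 1.04 for iron deficiency) leaves the direction uncertain.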
Given the following content, create a long question whose answer is long and can be found within the content. Then, provide the long answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}.
How accurately do women report a diagnosis of endometriosis on self-administered questionnaires? Based on the analysis of four international cohorts, women self-report endometriosis fairly accurately with a >70% confirmation for clinical and surgical records. The study of complex diseases requires large, diverse population-based samples, and endometriosis is no exception. Due to the difficulty of obtaining medical records for a condition that may have been diagnosed years earlier and for which there is no standardized documentation, reliance on self-report is necessary. Only a few studies have assessed the validity of self-reported endometriosis compared with medical records, with the observed confirmation ranging from 32% to 89%. We compared questionnaire-reported endometriosis with medical record notation among participants from the Black Women's Health Study (BWHS; 1995-2013), Etude Epidémiologique auprès de femmes de la Mutuelle Générale de l'Education Nationale (E3N; 1990-2006), Growing Up Today Study (GUTS; 2005-2016), and Nurses' Health Study II (NHSII; 1989-1993 first wave, 1995-2007 second wave). Participants who had reported endometriosis on self-administered questionnaires gave permission to procure and review their clinical, surgical, and pathology medical records, yielding records for 827 women: 225 (BWHS), 168 (E3N), 85 (GUTS), 132 (NHSII first wave), and 217 (NHSII second wave). We abstracted diagnosis confirmation as well as American Fertility Society (AFS) or revised American Society of Reproductive Medicine (rASRM) stage and visualized macro-presentation (e.g. superficial peritoneal, deep endometriosis, endometrioma). For each cohort, we calculated clinical-, surgical-, and pathologic-confirmation proportions.
Confirmation was high: 84% overall when combining clinical, surgical, and pathology records (ranging from 72% for BWHS to 95% for GUTS), suggesting that women accurately report if they are told by a physician that they have endometriosis. Among women with self-reported laparoscopic confirmation of their endometriosis diagnosis, confirmation in medical records was extremely high (97% overall, ranging from 95% for NHSII second wave to 100% for NHSII first wave). Importantly, only 42% of medical records included pathology reports, among which histologic confirmation ranged from 76% (GUTS) to 100% (NHSII first wave). Documentation of visualized endometriosis presentation was often absent, and details recorded were inconsistent. AFS or rASRM stage was documented in 44% of NHSII first wave, 13% of NHSII second wave, and 24% of GUTS surgical records. The presence/absence of deep endometriosis was rarely noted in the medical records. Medical record abstraction was conducted separately by cohort-specific investigators, potentially introducing misclassification due to variation in abstraction protocols and interpretation. Additionally, information on the presence/absence of AFS/rASRM stage, deep endometriosis, and histologic findings was not available for all four cohort studies. Variation in access to care and differences in disease phenotypes and risk factor distributions among patients with endometriosis necessitate the use of large, diverse population samples to subdivide patients for risk factor analysis, treatment response, and discovery of long-term outcomes. Women self-report endometriosis with reasonable accuracy (>70%) and with exceptional accuracy when restricted to those who report that their endometriosis had been confirmed by laparoscopic surgery (>94%).
Thus, relying on self-reported endometriosis in order to use larger sample sizes of patients with endometriosis appears to be valid, particularly when self-report of laparoscopic confirmation is used as the case definition. However, the paucity of data on histologic findings, AFS/rASRM stage, and endometriosis phenotypic characteristics suggests that a universal requirement for harmonized clinical and surgical data documentation is needed if we hope to obtain the relevant details for subgrouping patients with endometriosis. This project was supported by Eunice Kennedy Shriver National Institute of Child Health and Development grants HD48544, HD52473, HD57210, and HD94842, National Cancer Institute grants CA50385, R01CA058420, UM1CA164974, and U01CA176726, and National Heart, Lung, and Blood Institute grant U01HL154386. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. AS, SM, and KT were additionally supported by the J. Willard and Alice S. Marriott Foundation. MK was supported by a Marie Curie International Outgoing Fellowship within the 7th European Community Framework Programme (#PIOF-GA-2011-302078) and is grateful to the Philippe Foundation and the Bettencourt-Schueller Foundation for their financial support. Funders had no role in the study design, conduct of the study or data analysis, writing of the report, or decision to submit the article for publication. LA Wise has served as a fibroid consultant for AbbVie, Inc for the last three years and has received in-kind donations (e.g. home pregnancy tests) from Swiss Precision Diagnostics, Sandstone Diagnostics, Kindara.com, and FertilityFriend.com for the PRESTO cohort. SA Missmer serves as an advisory board member for AbbVie and a single working group service for Roche; neither are related to this study. No other authors have a conflict of interest to report. 
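For context on the precision of the headline figure, a Wilson score interval can be sketched from the reported values (84% confirmation among the 827 procured records). This is an illustrative calculation, not one reported by the study: the exact confirmed count is not stated in the abstract, so the reported proportion is used as-is.

```python
import math

def wilson_ci(p, n, z=1.96):
    """Wilson score 95% interval for a binomial proportion.

    p -- observed proportion, n -- sample size, z -- normal quantile.
    """
    denom = 1 + z**2 / n
    center = p + z**2 / (2 * n)
    spread = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - spread) / denom, (center + spread) / denom

# 84% overall confirmation among 827 procured records (reported values)
lo, hi = wilson_ci(0.84, 827)
```

With these inputs the interval is roughly 81% to 86%, consistent with the paper's ">70%" characterization of self-report accuracy.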
Mesenchymal stem cells derived from adipose tissue have been successfully used to promote sphincter-saving anal fistula healing. The aim of this study was to evaluate the efficacy and safety of the use of autologous centrifuged adipose tissue in the healing process of cryptoglandular complex anal fistulas. This is a randomized controlled trial. This study was conducted at a single center. Patients with complex perianal fistulas not associated with Crohn's disease were included. Rectovaginal fistulas were not included. Patients were randomly allocated to receive treatment with centrifuged adipose tissue injection (experimental group) or without injection (control group), in combination with fistula surgery. The primary outcome was defined as the proportion of patients with complete fistula closure at 4 weeks (short-term outcome) and 6 months after surgery (long-term outcome). Healing was defined as the external opening being closed with no perianal discharge on clinical assessment. The secondary outcome was safety, which was evaluated by the analysis of adverse events up to 3 months after surgery. Pelvic MRI was performed at 3 months to assure safety and the accuracy of the clinical determination of healing. Postoperative pain, return to work/daily activities, persistent closure at 6 months, fecal incontinence, and patient satisfaction were evaluated. Fifty-eight patients who received centrifuged adipose tissue injection and 58 patients who did not receive centrifuged adipose tissue injection were included in the safety and efficacy analysis. After 4 weeks, the healing rate was 63.8% in the experimental group compared with 15.5% in the control group (p < 0.001). No major adverse events were recorded. Postoperative anal pain was significantly lower in the injection group. Time taken to return to work/daily activities was significantly shorter in the experimental group (3 days) than in the control group (17 days).
At 6 months, persistent closure was similar in the 2 groups (86.2% vs 81%). Fecal Incontinence Score at 6 months after surgery was identical to the preoperative score. Patient satisfaction was high in both groups. The absence of blinding and the lack of correlation between stem cell content and the clinical outcome were limitations of the study. Autologous centrifuged adipose tissue injection may represent a safe, efficacious, and inexpensive option for the treatment of complex fistula-in-ano. See Video Abstract at http://links.lww.com/DCR/B607. URL: https://www.clinicaltrials.gov. Identifier: NCT04326907.
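The short-term effect above can be re-expressed as a risk ratio. A minimal sketch follows, using event counts back-calculated from the reported percentages (63.8% of 58 ≈ 37 patients; 15.5% of 58 ≈ 9 patients) and the standard log-normal (Katz) confidence interval; this analysis is illustrative and not one reported by the trial.

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio with a log-normal 95% CI (Katz method)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Healing at 4 weeks: 37/58 (injection) vs 9/58 (control),
# counts back-calculated from the reported 63.8% and 15.5%.
rr, lo, hi = risk_ratio(37, 58, 9, 58)
```

The point estimate of about 4.1 (CI excluding 1) matches the reported p < 0.001 in direction and strength.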
Pregnant women are at increased risk for morbidity owing to infection with the COVID-19 virus.¹ Vaccination presents an important strategy to mitigate illness in this population. However, there is a paucity of data on vaccination safety and pregnancy outcomes because pregnant women were excluded from the initial phase III clinical trials. Our objective was to describe the maternal, neonatal, and obstetrical outcomes of women who received a messenger RNA (mRNA) COVID-19 vaccination while pregnant during the first 4 months of vaccine availability. This was an institutional review board-approved descriptive study of pregnant women at New York University Langone Health who received at least 1 dose of an mRNA COVID-19 vaccination approved by the US Food and Drug Administration (FDA) (Pfizer-BioNTech or Moderna) from the time of the FDA Emergency Use Authorization to April 22, 2021. Eligible women were identified via search of the electronic medical record (EMR) system. Vaccine administration was ascertained via immunization records from the New York State Department of Health. Women were excluded if they were vaccinated before conception or during the postpartum period. Charts were reviewed for maternal demographics and pregnancy outcomes. Descriptive analyses were performed using the R software version 4.0.2 (The R Foundation, Boston, MA). We identified 424 pregnant women who received an mRNA vaccination. Of those, 348 (82.1%) received both doses and 76 (17.9%) received only 1 dose. The maternal characteristics and vaccination information are shown in Table 1. Of the included women, 4.9% had a history of a confirmed COVID-19 diagnosis before vaccination. After vaccination, no patient in our cohort was diagnosed with COVID-19. In terms of the pregnancy outcomes, 9 women had spontaneous abortions, 3 terminated their pregnancies, and 327 have ongoing pregnancies. Of the women included, 85 delivered liveborn infants. There were no stillbirths in our population.
Of the 9 spontaneous abortions, 8 occurred during the first trimester at a range of 6 to 13 weeks' gestation. There was 1 second trimester loss. The rate of spontaneous abortion among women vaccinated in the first trimester was 6.5%. The 327 women with ongoing pregnancies have been followed for a median of 4.6 weeks (range, 0-17 weeks) following their most recent dose. A total of 113 (34.6%) women initiated vaccination during the first trimester, 178 (54.4%) initiated vaccination during the second trimester, and 36 (11.0%) during the third trimester. Following the vaccination, 2 fetuses (0.6%) developed intrauterine growth restriction, whereas 5 (1.5%) were diagnosed with anomalies. Outcomes for the 85 women who delivered are shown in Table 2. Of the women who delivered, 18.8% were diagnosed with a hypertensive disorder of pregnancy. The rate of preterm birth was 5.9%. One preterm delivery was medically indicated, whereas the remaining 3 were spontaneous. A total of 15.3% of neonates required admission to the neonatal intensive care unit (NICU). Of the NICU admissions, 61.5% were because of hypoglycemia or an evaluation for sepsis. Other reasons for admission included prematurity, hypothermia, and transient tachypnea of the newborn. Of all the neonates, 12.2% were small for gestational age (SGA) per the World Health Organization standards. This series describes our experience with women who received an mRNA COVID-19 vaccine during pregnancy. In line with other published findings,² we observed no concerning trends. There were no stillbirths. Our 6.5% rate of spontaneous abortion is within the expected rate of 10%,³ and our preterm birth rate of 5.9% is below the national average of 9.5%.⁴ Our rate of pregnancy-related hypertensive disorders is higher than our baseline institutional rate of 9.5%; however, this may be because of the underlying characteristics of our study population or skewing of our small sample size.
Our 12.2% rate of SGA neonates is near the expected value based on the definition that 10% of neonates will be SGA at birth. The NICU admission rate is on par with our institutional rate of 12%. To date, most women in this series have had uncomplicated pregnancies and have delivered at term. Strengths of this study include using the EMR system to identify subjects and gather data. We did not rely on self-enrollment and self-report, thereby reducing selection and recall bias. By performing manual chart reviews, we obtained detailed and reliable information about individual patients. One limitation of this study is the lack of a matched control group consisting of unvaccinated pregnant women; therefore, direct conclusions could not be drawn about the relative risks of complications. In addition, our cohort is small and may not be generalizable. Finally, many women included are healthcare workers who had early access to vaccinations. As more pregnant women become eligible for the COVID-19 vaccinations, there is an urgent need to report on the maternal, neonatal, and obstetrical outcomes of COVID-19 vaccinations during pregnancy. The results of this study can be used to counsel and reassure pregnant patients facing this decision.
To investigate the effects of sodium butyrate (NaB) on long-term anxiety-like behavior and inflammatory activation of microglia in the hippocampus of sepsis-associated encephalopathy (SAE) mice. (1) Animal experiment: fifty C57BL/6 mice aged 6-8 weeks were randomly divided into a Sham group (the cecum was exposed by laparotomy without perforation or ligation), an SAE model group induced by cecal ligation and puncture (CLP; the cecum was exposed by laparotomy, ligated, and then perforated; the open field test indicated decreased independent exploration and anxiety-like behavior, confirming successful replication of the SAE model), and a NaB pretreatment group (NaB was administered at a dose of 500 mg·kg⁻¹·d⁻¹ for 3 days before modeling, and at the same dose once daily for 3 days after modeling). The open field test was used to detect anxiety-like behavior of mice at 7 days. The protein expression and content changes of interleukin-1β (IL-1β) and tumor necrosis factor-α (TNF-α) in the hippocampus of mice at 1 day and 3 days after operation were detected by Western blotting and enzyme-linked immunosorbent assay (ELISA). Immunofluorescence staining was used to observe co-localization of the microglial marker protein ionized calcium-binding adaptor molecule-1 (Iba-1) with TNF-α. (2) Cell experiment: the mouse microglial cell line BV-2 was divided into a blank control group, a lipopolysaccharide (LPS) group (cells treated with 1 mg/L LPS), and a NaB treatment group (cells treated with 1 mg/L LPS + 5 mmol/L NaB). The protein expressions of IL-1β, TNF-α, Toll-like receptor 4 (TLR4), phosphorylated nuclear factor-κB p65 (p-NF-κB p65), nuclear factor-κB p65 (NF-κB p65) and NF-κB inhibitor protein-α (IκB-α) were detected by Western blotting. The expressions of Iba-1 and TNF-α in each group were observed by immunofluorescence.
(1) Animal experiment: compared with the Sham group, the distance and duration of movement in the central area and the total distance moved were decreased in the SAE model group at 7 days after modeling [distance of movement in the central area (mm): 13.45±3.97 vs. 161.44±27.00, duration of movement in the central area (s): 1.82±0.58 vs. 13.45±2.17, total distance moved (mm): 835.01±669.67 vs. 2 254.51±213.45, all P < 0.05]. In the hippocampal tissue of SAE mice, a large number of nerve-cell nuclei were pyknotic and deeply stained, and the arrangement of nerve cells was disordered. The cell bodies of microglia in the mouse hippocampus were significantly enlarged, and the number of Iba-1⁺/TNF-α⁺ positive cells increased significantly. The contents and protein expression of the proinflammatory factors TNF-α and IL-1β in hippocampal homogenate supernatant 3 days after operation in the SAE model group were significantly higher than those in the Sham group [TNF-α (ng/L): 119.17±18.40 vs. 90.18±21.17, IL-1β (ng/L): 407.89±70.64 vs. 313.69±34.63; TNF-α/GAPDH: 1.42±0.50 vs. 0.80±0.08, IL-1β/GAPDH: 1.27±0.22 vs. 0.85±0.25, all P < 0.05]. After intragastric administration of NaB, the distance and duration of movement in the central area were significantly higher than those in the SAE model group [distance of movement in the central area (mm): 47.39±15.63 vs. 13.45±3.97, duration of movement in the central area (s): 6.12±1.87 vs. 1.82±0.58, all P < 0.05]. There was no significant change in the total distance moved (mm: 1 550.59±1 004.10 vs. 835.01±669.67, P > 0.05). Pyknosis and deep staining of nerve nuclei were significantly less than in the SAE model group, and the number of Iba-1⁺/TNF-α⁺ positive cells decreased significantly.
The contents and protein expression levels of the proinflammatory factors TNF-α and IL-1β in hippocampal homogenate supernatant 3 days after operation were significantly lower than those in the SAE model group [TNF-α (ng/L): 64.95±9.10 vs. 119.17±18.40, IL-1β (ng/L): 311.94±69.92 vs. 407.89±70.64; TNF-α/GAPDH: 1.02±0.36 vs. 1.42±0.50, IL-1β/GAPDH: 0.86±0.20 vs. 1.27±0.22, all P < 0.05]. (2) Cell experiment: after LPS intervention, the fluorescence intensity of TNF-α in BV-2 cells was significantly enhanced, and the protein expression levels of TNF-α, IL-1β, TLR4 and p-NF-κB p65 increased (TNF-α/GAPDH: 0.39±0.06 vs. 0.20±0.02, IL-1β/GAPDH: 0.27±0.03 vs. 0.19±0.01, TLR4/GAPDH: 0.55±0.12 vs. 0.33±0.09, p-NF-κB p65/NF-κB p65: 0.55±0.05 vs. 0.29±0.04, all P < 0.05), while the expression level of IκB-α was lower than that in the control group (IκB-α/GAPDH: 0.54±0.06 vs. 0.81±0.03, P < 0.05). After NaB treatment, the fluorescence intensity of TNF-α in BV-2 cells decreased, and the protein expression levels of TNF-α, IL-1β, TLR4 and p-NF-κB p65 were significantly lower than those of the LPS group (TNF-α/GAPDH: 0.26±0.02 vs. 0.39±0.06, IL-1β/GAPDH: 0.11±0.04 vs. 0.27±0.03, TLR4/GAPDH: 0.28±0.14 vs. 0.55±0.12, p-NF-κB p65/NF-κB p65: 0.29±0.01 vs. 0.55±0.05, all P < 0.05), while the protein expression level of IκB-α was significantly higher than in the LPS group (IκB-α/GAPDH: 0.75±0.01 vs. 0.54±0.06, P < 0.05). NaB antagonizes LPS-induced TLR4 activation, thereby inhibiting NF-κB p65 phosphorylation and nuclear transcription as well as IκB-α degradation; this reduces microglial activation and secretion of inflammatory factors, ultimately improving hippocampal inflammation and long-term anxiety-like behavior in septic mice.
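The cytokine effects above can be summarized as percent reductions relative to the model group. A small sketch using the reported group means follows; the helper function name is illustrative, not from the study.

```python
def percent_reduction(treated_mean, model_mean):
    """Percent decrease of a marker's mean level in the NaB group
    relative to the SAE model group."""
    return (model_mean - treated_mean) / model_mean * 100

# Reported hippocampal homogenate means at 3 days post-op (ng/L)
tnf_reduction = percent_reduction(64.95, 119.17)    # TNF-α: 119.17 -> 64.95
il1b_reduction = percent_reduction(311.94, 407.89)  # IL-1β: 407.89 -> 311.94
```

On these means, NaB pretreatment corresponds to roughly a 45% reduction in TNF-α and a 24% reduction in IL-1β content.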
The lateral hypothalamus (LHA) is still a poorly understood brain region. Based on published Dlx and Gad gene expression patterns in the embryonic and adult hypothalamus respectively, three large areas are identified in the LHA. A central tuberal LHA region is already well described as it contains neurons producing the peptides melanin-concentrating hormone or hypocretin. This region is rich in GABAergic neurons and is specified by Dlx gene expression in the rodent embryo. Rostrally and caudally bordering the tuberal LHA, two Dlx-GAD-GABA poor regions are then easily delineated. The three regions show different organizational schema. The tuberal region is reticularly organized, connected with the cerebral cortex and the spinal cord, and its embryonic development occurs along the tractus postopticus. The region anterior to it is associated with the stria medullaris in both embryonic and adult subjects. The posterior LHA region is made of differentiated nuclei and includes the subthalamic nucleus. Therefore, the LHA is divided into three distinct parts: in addition to the well-known tuberal LHA, caudal and anterior LHA regions exist that have specific anatomical and functional characteristics. The hypothalamus is made up of several dozens of nuclei or areas that are more or less well differentiated and whose boundaries and arrangements are drawn differently according to authors and atlases (Allen Institute, 2004; Paxinos and Franklin, 2019; Paxinos and Watson, 2013; Swanson, 2004). The dominant hypothesis for more than 50 years is that these structures are distributed within three antero-posterior areas (anterior, tuberal, posterior) and more or less three longitudinal zones (lateral, medial and periventricular) (Fig. 1). In addition to these regions, several adjacent territories are often associated with the hypothalamus.
The preoptic area is functionally related to the hypothalamus, but it is better seen as a telencephalic structure based on developmental data (Croizier et al., 2015; Puelles and Rubenstein, 2015). Lately, the zona incerta and the subthalamic nucleus (STN) have also been associated to the hypothalamus on the basis of their connections and development for the STN (Altman and Bayer, 1986; Barbier and Risold, 2021; Swaab et al., 2021). However, the zona incerta is still included in the 'pre-thalamus' or "ventral thalamus" in the embryo (Puelles and Rubenstein, 2015). Thus, the boundaries of the hypothalamus remain blurred around what we can call a 'core' made of the anterior to posterior regions (Brooks, 1988). In addition, unlike other large brain regions that are characterized early on by a molecular signature, i.e. by the embryonic expression of specific molecular markers, data illustrating the distribution of dozens of transcription factors involved in brain patterning and cell lineage specification confirmed the extremely heterogeneous and mosaic nature of the anterior and posterior regions of the hypothalamus (Alvarez-Bolado, 2019; Puelles et al., 2013; Puelles and Rubenstein, 2015). The rich nuclear organization of the medial and periventricular zones of the hypothalamus is consistent with the mosaic expression of developmental genes. The LHA, however, is often perceived as much more homogeneous in its cytoarchitectural organization. At the same time, there is little information regarding the expression of developmental genes in the anterior and posterior territories of the LHA. Most studies focus on the tuberal LHA which expresses many of these genes. Admittedly, even in the adult hypothalamus, the internal boundaries of the LHA are difficult to identify and the same is true in the embryo. 
Developmental data alone are insufficient to achieve a better understanding of the LHA anatomical organization, and for this region, as for the medial and periventricular zones, a coherence must be established between development and adult anatomical organization. Among the most useful neurochemical markers to identify large regions of the forebrain, those involved in the identification of GABAergic and glutamatergic neurons have proven to be particularly efficient. Indeed, GABAergic neurons are not ubiquitously distributed. Large regions of the forebrain are rich in such cells, including the basal telencephalon, but others contain few or no GABAergic cells and are rich in glutamatergic neurons instead (for example the dorsal thalamus, which is free of GABAergic neurons in rodents). The same applies for the hypothalamus: several structures of the hypothalamus are free of GABAergic neurons, as, for example, the mammillary nuclei (Hahn et al., 2019). Recently, we also identified a GABA-poor posterior LHA territory that includes the STN and is localized caudal to the GABA-rich tuberal LHA (Barbier et al., 2020; Barbier and Risold, 2021; Chometton et al., 2016b). Therefore, the LHA seems partitioned into GABA-rich/GABA-poor regions. However, to define or confirm distinct neuroanatomical entities, these regions must have a specific embryological origin, and show specific hodological patterns and functions. Hence, the purpose of this short review is to identify divisions of the LHA based on developmental and neurochemical criteria. Such an analysis seems to us relevant in order to allow later functional studies on regions whose boundaries will be based on objective criteria.
Infective endocarditis is a severe infection arising in the lining of the chambers of the heart. It can be caused by fungi, but most often is caused by bacteria. Many dental procedures cause bacteraemia, which could lead to bacterial endocarditis in a small proportion of people. The incidence of bacterial endocarditis is low, but it has a high mortality rate.  Guidelines in many countries have recommended that antibiotics be administered to people at high risk of endocarditis prior to invasive dental procedures. However, guidance by the National Institute for Health and Care Excellence (NICE) in England and Wales states that antibiotic prophylaxis against infective endocarditis is not recommended routinely for people undergoing dental procedures. This is an update of a review that we first conducted in 2004 and last updated in 2013. Primary objective To determine whether prophylactic antibiotic administration, compared to no antibiotic administration or placebo, before invasive dental procedures in people at risk or at high risk of bacterial endocarditis, influences mortality, serious illness or the incidence of endocarditis. Secondary objectives To determine whether the effect of dental antibiotic prophylaxis differs in people with different cardiac conditions predisposing them to increased risk of endocarditis, and in people undergoing different high risk dental procedures. 
Harms: Had we found no evidence from randomised controlled trials or cohort studies on whether prophylactic antibiotics affected mortality or serious illness, and had we found evidence from these or case-control studies suggesting that prophylaxis with antibiotics reduced the incidence of endocarditis, then we would also have assessed whether the harms of prophylaxis with single antibiotic doses, such as with penicillin (amoxicillin 2 g or 3 g) before invasive dental procedures, compared with no antibiotic or placebo, equalled the benefits in prevention of endocarditis in people at high risk of this disease. An information specialist searched four bibliographic databases up to 10 May 2021 and used additional search methods to identify published, unpublished and ongoing studies. Due to the low incidence of bacterial endocarditis, we anticipated that few if any trials would be located. For this reason, we included cohort and case-control studies with suitably matched control or comparison groups. The intervention was antibiotic prophylaxis, compared to no antibiotic prophylaxis or placebo, before a dental procedure in people with an increased risk of bacterial endocarditis. Cohort studies would need to follow at-risk individuals and assess outcomes following any invasive dental procedures, grouping participants according to whether or not they had received prophylaxis. Case-control studies would need to match people who had developed endocarditis after undergoing an invasive dental procedure (and who were known to be at increased risk before undergoing the procedure) with those at similar risk who had not developed endocarditis.
Our outcomes of interest were mortality or serious adverse events requiring hospital admission; development of endocarditis following any dental procedure in a defined time period; development of endocarditis due to other non-dental causes; any recorded adverse effects of the antibiotics; and the cost of antibiotic provision compared to that of caring for patients who developed endocarditis. Two review authors independently screened search records, selected studies for inclusion, assessed the risk of bias in the included study and extracted data from the included study. As an author team, we judged the certainty of the evidence identified for the main comparison and key outcomes using GRADE criteria. We presented the main results in a summary of findings table. Our new search did not find any new studies for inclusion since the last version of the review in 2013. No randomised controlled trials (RCTs), controlled clinical trials (CCTs) or cohort studies were included in the previous versions of the review, but one case-control study met the inclusion criteria. The trial authors collected information on 48 people who had contracted bacterial endocarditis over a specific two-year period and had undergone a medical or dental procedure with an indication for prophylaxis within the past 180 days. These people were matched to a similar group of people who had not contracted bacterial endocarditis. All study participants had undergone an invasive medical or dental procedure. The two groups were compared to establish whether those who had received preventive antibiotics (penicillin) were less likely to have developed endocarditis. The authors found no significant effect of penicillin prophylaxis on the incidence of endocarditis. No data on other outcomes were reported. The level of certainty we have about the evidence is very low. 
There remains no clear evidence about whether antibiotic prophylaxis is effective or ineffective against bacterial endocarditis in at-risk people who are about to undergo an invasive dental procedure. We cannot determine whether the potential harms and costs of antibiotic administration outweigh any beneficial effect. Ethically, practitioners should discuss the potential benefits and harms of antibiotic prophylaxis with their patients before a decision is made about administration.
There are an estimated 1·3-4·0 million cases of cholera and 20 000-140 000 cholera-related deaths worldwide each year. The rice-based cholera toxin B subunit (CTB) vaccine, MucoRice-CTB, is an oral candidate vaccine that does not require a cold chain, has shown efficacy in animal models, and could be of benefit in places where there is a paucity of medical infrastructure. We aimed to assess the safety, tolerability, and immunogenicity of MucoRice-CTB in humans. We did a double-blind, randomised, placebo-controlled, dose-escalation, phase 1 study at one centre in Tokyo, Japan. Eligible participants were healthy adult men with measurable serum and faecal antibodies against CTB at screening. Participants were excluded if they had allergy to rice; history of cholera or travellers' diarrhoea; poorly controlled constipation; abnormal results on hepatic, renal, or haematological screening tests; use of any over-the-counter drugs within 7 days before first administration; inability to use a medically acceptable means of contraception; or other reasons based on the medical judgment of the investigator. Three dose cohorts of participants were randomly assigned by block to receive oral MucoRice-CTB (1 g, 3 g, or 6 g) or placebo (1 g, 3 g, or 6 g), once every 2 weeks for 8 weeks (for a total of 4 doses). The dose cohorts were run sequentially, and each cohort was completed before the next, higher-dose cohort began. All medical staff, participants, and most trial staff were masked to treatment allocation. The primary outcomes were safety and tolerability, measured by 12-lead electrocardiogram; vital signs; haematology, biochemistry, and urinalysis; rice protein-specific serum IgE antibody concentration; and monitoring of adverse events. Participants were assessed at baseline and at 1, 2, 4, 6, 8, and 16 weeks after the first administration of vaccine or placebo.
The safety analysis set included all participants enrolled in the trial who received at least one dose of the study drug or placebo and were compliant with good clinical practice. The full analysis population included all participants enrolled in the trial who received at least one dose of the study drug and for whom any data were obtained after the start of study drug administration. Metagenomic analysis of study participants was performed using bacterial DNA from faecal samples before vaccination. This trial is registered with UMIN.ac.jp, UMIN000018001. Between June 23, 2015, and May 31, 2016, 226 participants were recruited and assessed for eligibility. 166 participants were excluded based on health condition or schedule. We then randomly selected 60 male volunteers aged 20-40 years, who were enrolled and assigned to MucoRice-CTB (10 participants assigned to 1 g, 10 to 3 g, and 10 to 6 g) or placebo (10 participants assigned to 1 g, 10 to 3 g, and 10 to 6 g). All participants received at least one dose of study drug or placebo and were included in the safety analyses. Two participants given MucoRice-CTB 3 g and one participant given MucoRice-CTB 6 g were lost to follow-up and excluded from the efficacy analysis. Serum CTB-specific IgG and IgA antibody concentrations in participants who received 6 g MucoRice-CTB increased significantly in both a time-dependent and dose-dependent manner compared with those in the placebo groups (p for interaction=0·002 for IgG, p=0·004 for IgA). Metagenomic analysis of participants' faeces before vaccination revealed that, compared with non-responders, responders had a gut microbiota of higher diversity, with the presence of Escherichia coli and Shigella spp. 28 (93%) of 30 participants who received MucoRice-CTB at any dose had at least one adverse event during the study period, compared with 30 (100%) of 30 participants given placebo.
Grade 3 or higher adverse events were reported in four participants in the MucoRice-CTB group (5 events) and four participants in the placebo group (10 events). The most common serious adverse event was haemoglobin decreased (2 events in 2 participants in the pooled MucoRice-CTB group, 2 events in 2 participants in the placebo group; all grade 3). Participants given MucoRice-CTB showed increased CTB-specific serum IgG and IgA antibody concentrations without inducing serious adverse events, indicating that MucoRice-CTB could be a safe and potent vaccine to prevent diarrhoeal disease. MucoRice-CTB induced neutralising antibodies against diarrhoeal toxins in a gut microbiota-dependent manner. A similar phase 1 trial will be done with participants of other ethnicities to substantiate our findings. Translational Research Acceleration Network Program of Japan Agency for Medical Research and Development; Ministry of Education, Culture, Sports, Science and Technology, Japan; Science and Technology Research Partnership for Sustainable Development; Grant-in-Aid for Scientific Research (S) (18H05280) (to H K) from the Japan Society for the Promotion of Science (JSPS); Grant-in-Aid for Young Scientists (B) (16K16144) (to Y K) from JSPS; Grant-in-Aid for Young Scientists (18K18148) (to Y K) from JSPS; Grant from International Joint Usage/Research Center (K3002), the Institute of Medical Science, University of Tokyo.
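The 1:1 allocation within each dose cohort described above (10 vaccine, 10 placebo, assigned by block) can be sketched with a simple permuted-block scheme. This is an illustrative reconstruction only; the function name, block size, and seed are assumptions, not the trial's actual randomisation procedure.

```python
import random

def block_randomise(n_per_arm, block_size=4, seed=0):
    """Assign participants 1:1 to 'vaccine' or 'placebo' in shuffled blocks.

    Hypothetical sketch of permuted-block randomisation: each block
    contains equal numbers of both arms, so allocation stays balanced
    throughout enrolment.
    """
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < 2 * n_per_arm:
        block = ["vaccine"] * (block_size // 2) + ["placebo"] * (block_size // 2)
        rng.shuffle(block)  # order within each block is random
        assignments.extend(block)
    return assignments[: 2 * n_per_arm]

# one dose cohort of 20 participants (10 per arm)
cohort = block_randomise(10)
```

Because every block of four holds two participants per arm, the cohort ends with exactly 10 in each group regardless of when enrolment stops at a block boundary.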
There is now a wide choice of medical imaging to show both focal and diffuse pathologies in various organs. Conventional radiology with plain films, fluoroscopy and contrast medium have many advantages, being readily available with low-cost apparatus and a familiarity that almost leads to contempt. The use of plain films in chest disease and in trauma does not need emphasizing, yet there are still too many occasions when the answer obtainable from a plain radiograph has not been available. The film may have been mislaid, or the examination was not requested, or the radiograph had been misinterpreted. The converse is also quite common. Examinations are performed that add nothing to patient management, such as skull films when CT will in any case be requested or views of the internal auditory meatus and heel pad thickness in acromegaly, to quote some examples. Other issues are more complicated. Should the patient who clinically has gall-bladder disease have more than a plain film that shows gall-stones? If the answer is yes, then why request a plain film if sonography will in any case be required to 'exclude' other pathologies especially of the liver or pancreas? But then should cholecystography, CT or scintigraphy be added for confirmation? Quite clearly there will be individual circumstances to indicate further imaging after sonography but in the vast majority of patients little or no extra information will be added. Statistics on accuracy and specificity will, in the case of gall-bladder pathology, vary widely if adenomyomatosis is considered by some to be a cause of symptoms or if sonographic examinations 'after fatty meals' are performed. The arguments for or against routine contrast urography rather than sonography are similar but the possibility of contrast reactions and the need to limit ionizing radiation must be borne in mind.
These diagnostic strategies are also being influenced by their cost and availability; purely pragmatic considerations are not infrequently the overriding factor. Non-invasive methods will be preferred, particularly sonography, as it is far more acceptable, not being claustrophobic, and is totally free of any known untoward effects. There is another quite different but unrelated aspect. The imaging methods, apart from limited exceptions, cannot characterize tissues as benign or malignant, granulomatous or neoplastic; cytology or histology usually provides the answer. Sonography is most commonly used to locate the needle tip correctly for percutaneous sampling of tissues. Frequently sonography with fine needle aspiration cytology or biopsy is the least expensive, safest and most direct route to a definitive diagnosis. Abscesses can be similarly diagnosed but with needles or catheters through which the pus can be drained. The versatility and mobility of sonography has spawned other uses, particularly for the very ill and immobile, for the intensive therapy units and for the operating theatre, as well as in endosonography. The appointment of more skilled sonographers to the National Health Service could make a substantial contribution to cost-effective management of hospital services. Just when contrast agents and angiography have become safe and are performed rapidly, they are being supplanted by scanning methods. They are now mainly used for interventional procedures or for pre-operative 'road maps' and may be required even less in the future as MRI angiography and Doppler techniques progress. MRI will almost certainly extend its role beyond the central nervous system (CNS) should the equipment become more freely available, especially to orthopaedics. Until then plain films, sonography or CT will have to suffice. Even in the CNS there are conditions where CT is more diagnostic, as in showing calcifications in cerebral cysticercosis.
Then, too, in most cases CT produces results comparable to MRI apart from areas close to bone, structures at the base of the brain, in the posterior fossa and in the spinal cord. Scintigraphy for pulmonary infarcts and bone metastases and in renal disease in children plays a prominent role and its scope has increased with new equipment and radionuclides. Radio-immunoscintigraphy in particular is likely to expand greatly not only in tumour diagnosis but also in metabolic and infective conditions. Whether the therapeutic implications will be realized is more problematic. The value of MRS and NM for metabolic studies in clinical practice is equally problematical, although the data from cerebral activity are extremely interesting. While scanning has replaced many radiographic examinations, endoscopy has had a similar effect on barium meals and to a lesser extent on barium enemas. The combined visual/sonographic endoscope is likely to accelerate this process. There is no doubt that over the last 2 decades medical imaging has changed the diagnostic process, but its influence on the outcome of disease other than infections is less certain and probably indefinable. Data concerning the comparative efficacy in terms of patient outcome for each of the imaging techniques would be of considerable interest and a great help in determining diagnostic strategies.
The automated hematology analyzer with CBC and differential results has replaced the traditional manual or individual assay methods for hematologic parameters and the eyecount leukocyte differential as the initial screening and detection system for hematologic abnormalities in modern hospitals and clinics. The traditional review of all automated hematology instrument results by preparation, staining, and microscopic examination of a blood film has disappeared in most institutions. The reason is the more accurate detection of specimens with distributional or morphologic abnormalities by the instruments than by the traditional eyecount method. The opportunity for a clinician to request a microscopic examination of a blood film, whether or not it is flagged, must be preserved, because the clinician's knowledge of the patient's history, physical findings, and current or prior therapy may indicate review to discover an abnormality that may not have been apparent from the instrument results alone. There has also been a dramatic reduction of the numbers of medical technologists and technicians in medical laboratories. Automation of the CBC and differential counts has reduced the number of technologists needed for performance of these tests. But other factors have had a negative effect, such as the necessity to reduce costs. Consolidation of hematology and chemistry laboratories in core laboratories may produce savings in labor costs, but may also create problems of creating and maintaining areas of expertise, such as hematologic morphology, because of the cross-training required and the necessity of personnel to do all things. This article suggests and documents a number of measures that can be instituted by the laboratory and by clinicians to reduce the number of eyecount differentials and blood film reviews that need to be performed.
The first effort is to convince clinicians that valid data exist that confirm that a policy of allowing the laboratory to initiate blood film review based on findings of the CBC and automated differential is a more sensitive and accurate method of detecting patients with blood film abnormalities than routine blood film review of all specimens by technologists. Clinicians need to recognize that daily differential results or differentials at intervals of less than a week are not medically necessary in most patients. The laboratory, however, must provide opportunities for the clinician to request differentials at any time for specific medical reasons. The laboratory must establish the validity of screening criteria for detection of distribution and morphologic abnormalities of leukocytes by clinical correlation studies or adopt criteria established by laboratories with the same instrumentation and which have conducted clinical evaluations. A final observation on the eyecount differential is that it was the only way to identify cell types and their relative proportion for nearly 100 years. Cells were identified by their shape, intracellular structures, and staining characteristics. Many studies were able eventually to correlate some aspect of each cell type's function with their morphologic appearance. It has also been learned that the bone marrow is the source of production of most circulating cells and a great deal of the controls of cell production and release into the peripheral blood have been learned. But leukocytes have many functions, almost none of which are performed in the peripheral blood. The peripheral blood is mainly a conduit from the bone marrow to the tissues where the leukocytes perform their function in the case of the neutrophils and monocytes. 
It is mainly a recirculation and redistribution system for lymphocytes that usually receive their instructions from antigen processing cells in the tissues and allow these modified cells to home to sites where their functions occur. Cellular morphology and staining characteristics tell little about the maturation stage and functional capabilities of leukocytes. One cannot tell the difference between a band and a segmented neutrophil or whether a lymphocyte is a T or B cell on the conventional eyecount differential. One cannot tell the mature granulocyte of a patient with chronic myeloid leukemia from a normal mature neutrophil. Increasingly, techniques are being developed to identify better the maturation stages of cells and association with specific functional capabilities by flow cytometric techniques. The neoplastic nature of some normal-appearing leukocytes can be identified by techniques, such as fluorescent in situ hybridization. With the rapid advances in many approaches to understand the nature and functional capability of leukocytes, the eyecount differential with the traditional Romanowsky stain may be past the apogee of its ascent and beginning its trip into history along with the hemocytometer counting chamber and the Sahli pipet. The development and implementation of new laboratory cornerstone techniques for diagnosis of hematologic disease are eagerly awaited. On the other hand, the red cells and platelets exist to function in the peripheral blood. More emphasis is needed in the development of automated methods of determining the nature and functional capabilities of these true blood cells as part of the CBC.
Earlier studies suggested that, while after spinal cord lesions and transplants at birth the transplants serve both as a bridge and as a relay to restore supraspinal input caudal to the injury (Bregman, 1994), after injury in the adult the spinal cord transplants serve as a relay, but not as a bridge. We show here that, after complete spinal cord transection in adult rats with delayed spinal cord transplants and exogenous neurotrophic factors, the transplants can also serve as a bridge to restore supraspinal input (Fig. 9). We demonstrate here that when the delivery of transplants and neurotrophins is delayed until 2 weeks after spinal cord transection, the amount of axonal growth and the amount of recovery of function are dramatically increased. Under these conditions, both supraspinal and propriospinal projections to the host spinal cord caudal to the transection are reestablished. The growth of supraspinal axons across the transplant and back into the host spinal cord caudal to the lesion was dependent upon the presence of exogenous neurotrophic support. Without the neurotrophins, only propriospinal axons were able to re-establish connections across the transplant. Studies using peripheral nerve or Schwann cell grafts have shown that some anatomical connectivity can be restored across the injury site, particularly under the influence of neurotrophins (Xu et al., 1995a,b; Cheng et al., 1996; Ye and Houle, 1997). Without neurotrophin treatment, brainstem axons do not enter the graft (Xu et al., 1995a,b; Cheng et al., 1996; Ye and Houle, 1997). Similarly, cells genetically modified to secrete neurotrophins and transplanted into the spinal cord influence the axonal growth of specific populations of spinally projecting neurons (Tuszynski et al., 1996, 1997; Grill et al., 1997; Blesch and Tuszynski, 1997). Taken together, these studies support a role for neurotrophic factors in the repair of the mature CNS.
The regrowth of supraspinal and propriospinal input across the transection site was associated with consistent improvements in hindlimb locomotor function. Animals performed alternating and reciprocal hindlimb stepping with plantar foot contact to the treadmill or stair during ascension. Furthermore, they acquired hindlimb weight support and demonstrated appropriate postural control for balance and equilibrium of all four limbs. After spinal cord injury in the adult, the circuitry underlying rhythmic alternating stepping movements is still present within the spinal cord caudal to the lesion, but is now devoid of supraspinal control. We show here that restoring even relatively small amounts of input allows supraspinal neurons to access the spinal cord circuitry. Removing the re-established supraspinal input after recovery (by retransection rostral to the transplant) abolished the recovery and abolished the serotonergic fibers within the transplant and spinal cord caudal to the transplant. This suggests that at least some of the recovery observed is due to re-establishing supraspinal input across the transplant, rather than a diffuse influence of the transplant on motor recovery. It is unlikely, however, that the greater recovery of function in animals that received delayed transplant and neurotrophins is due solely to the restoration of supraspinal input. Recent work by Ribotta et al. (2000) suggests that segmental plasticity within the spinal cord contributes to weight support and bilateral foot placement after spinal cord transection. This recovery of function occurs after transplants of fetal raphe cells into the adult spinal cord transected at T11. Recovery of function appears to require innervation of the L1-L2 segments with serotonergic fibers, and importantly, animals require external stimulation (tail pinch) to elicit the behavior. 
In the current study, animals with transection only did not develop stepping overground or on the treadmill without tail pinch, although the transplant and neurotrophin-treated groups did so without external stimuli. Therefore both reorganization of the segmental circuitry and partial restoration of supraspinal input presumably interact to yield the improvements in motor function observed. It is unlikely that the recovery of skilled forelimb movement observed can be mediated solely by reorganization of segmental spinal cord circuitry. We suggest that the restoration of supraspinal input contributes to the recovery observed. It is likely that after CNS injury, reorganization occurs both within the spinal cord and at supraspinal levels, and together contribute to the recovery of automatic and skilled forelimb function and of locomotion. In summary, the therapeutic intervention of tissue transplantation and exogenous neurotrophin support leads to improvements in supraspinal and propriospinal input across the transplant into the host caudal cord and a concomitant improvement in locomotor function. Paradoxically, delaying these interventions for several weeks after a spinal cord transection leads to dramatic improvements in recovery of function and a concomitant restoration of supraspinal input into the host caudal spinal cord. These findings suggest that opportunity for intervention after spinal cord injury may be far greater than originally envisioned, and that CNS neurons with long-standing injuries may be able to re-initiate growth leading to improvement in motor function.
Chronic fatigue syndrome (CFS) is defined as a constellation of prolonged fatigue and several somatic symptoms, in the absence of organic or severe psychiatric disease. However, this is an operational definition and a conclusive biomedical explanation remains elusive. Similarities between the signs and symptoms of CFS and adrenal insufficiency prompted research into hypothalamo-pituitary-adrenal (HPA) axis derangement in the pathogenesis of CFS. Early studies showed mild glucocorticoid deficiency, probably of central origin, that was compensated by enhanced adrenal sensitivity to ACTH. Further studies showed a reduced ACTH response to vasopressin infusion. The response to CRH was either blunted or unchanged. The cortisol response to insulin-induced hypoglycaemia was the same as in control subjects, while the ACTH response was reported to be the same or enhanced. However, results of direct stimulation of the adrenal cortex using ACTH were conflicting. Cortisol and DHEA responses were found to be the same or reduced compared with control subjects. Scott et al found that the maximal cortisol increment from baseline was significantly lower in CFS subjects. The same group also found small adrenal glands in some CFS subjects. These varied and inconsistent results could be explained by heterogeneous study populations, due to the multifactorial causes of the disease, and by methodological differences. The aim of our study was to assess the cortisol response to low-dose (1 microgram) ACTH using previously validated methodology. We compared the cortisol response in CFS subjects with the response in control subjects and in subjects with a suppressed HPA axis due to prolonged corticosteroid use. Cortisol responses were analysed in three subject groups: control (C), secondary adrenal insufficiency (AI), and CFS. The C group consisted of 39 subjects, the AI group of 22, and the CFS group of nine subjects. Subject data are presented in table 1. The low-dose ACTH test was started at 0800 h with the i.v.
injection of 1 microgram ACTH (Galenika, Belgrade, Serbia). Blood samples for cortisol determination were taken from the i.v. cannula at 0, 15, 30, and 60 min. Data are presented as mean +/- standard error (SE). Statistical analysis was done using ANOVA with the Games-Howell post-hoc test to determine group differences. ACTH dose per kg or per square meter of body surface was not different between the groups. Baseline cortisol was not different between the groups. However, cortisol concentrations after 15 and 30 minutes were significantly higher in the C group than in the AI group. Cortisol concentration in the CFS group was not significantly different from that in any other group (Graph 1). The cortisol increment from the basal value at 15 and 30 minutes was significantly higher in the C group than in the other two groups. However, there was no significant difference in cortisol increment between the AI and CFS groups at any time of the test. In contrast, the maximal cortisol increment was not different between the CFS group and the other two groups, although it was significantly higher in the C group than in the AI group. The maximal cortisol response to ACTH stimulation and the area under the cortisol response curve were significantly larger in the C group than in the AI group, but there was no difference between the CFS group and the other two groups. Several previous studies assessed the cortisol response to ACTH stimulation. Hudson and Cleare analysed the cortisol response to 1 microgram ACTH in CFS and control subjects. They compared the maximum cortisol attained during the test, the maximum cortisol increment, and the area under the cortisol response curve. There was no difference between the groups in any of the analysed parameters. However, the authors commented that responses were generally low. In contrast, Scott et al found that the cortisol increment at 30 min was significantly lower in the CFS group than in the control group.
Taking our data into account, it seems that the differences found in previous studies are caused by methodological differences. We have shown that the cortisol increment at 15 and 30 min is significantly lower in the CFS group than in the C group. Nevertheless, the maximum cortisol attained during the test, the maximum cortisol increment, and the area under the cortisol response curve were not different between the C and CFS groups. This is in agreement with our previous finding that the cortisol increment at 15 minutes has the best diagnostic value of all parameters obtained during the low-dose ACTH test. However, there was no difference between the CFS and AI groups in any of the parameters, although the AI group had significantly lower cortisol concentrations at 15 and 30 minutes, maximal cortisol response, area under the cortisol curve, maximal cortisol increment, and maximal cortisol change velocity than the C group. Consequently, reduced adrenal responsiveness to ACTH exists in CFS. In conclusion, we find that, regarding the adrenal response to ACTH stimulation, CFS subjects represent a heterogeneous group. In some subjects the cortisol response is preserved, while in others it is similar to that found in secondary adrenal insufficiency.
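The summary measures compared throughout this study (increment from baseline at each time point, maximal increment, and area under the cortisol response curve) can all be derived from the four sampling points of the low-dose ACTH test. The sketch below uses the trapezoidal rule for the AUC and hypothetical cortisol values, not data from the study.

```python
def cortisol_metrics(times_min, cortisol):
    """Summary measures for a low-dose ACTH test sampled at fixed times.

    Returns the increment from baseline at each time point, the maximal
    increment, and the trapezoidal area under the response curve.
    """
    baseline = cortisol[0]
    increments = [c - baseline for c in cortisol]
    max_increment = max(increments)
    # trapezoidal rule over unevenly spaced sampling times
    auc = sum(
        (times_min[i + 1] - times_min[i]) * (cortisol[i] + cortisol[i + 1]) / 2
        for i in range(len(times_min) - 1)
    )
    return increments, max_increment, auc

# hypothetical cortisol concentrations (nmol/L) at 0, 15, 30, and 60 min
inc, max_inc, auc = cortisol_metrics([0, 15, 30, 60], [300, 450, 500, 420])
```

With these illustrative values the increments are 0, 150, 200, and 120 nmol/L, so the 15-minute increment (150) and the maximal increment (200) differ, which is why the two measures can lead to different conclusions between groups.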
DEVELOPMENT OF MAC LUNG DISEASE: An increase in the nodular bronchiectatic type of MAC lung disease has become a problem among respiratory physicians today. The reason is still unknown, but it seems to be globally recognized that this type of MAC disease develops particularly in middle-aged women. Some papers noted the existence of this type of MAC lung disease as early as the 1970s in Japan. Yamamoto described 17 cases of middle-lobe-type lung disease out of 154 non-photochromogen cases, 76.5% of them female, in 1970. Shimoide also identified 39 such cases out of 240 cases of MAC lung disease, 84.6% of them female, in 1980. Prince reported 21 cases of MAC lung disease in middle-aged and elderly women without a preceding disease, including 4 fatal cases, in 1989. After this report, international consensus on this peculiar type of MAC lung disease seems to have spread. In 1989, we compared 72 cases of nodular bronchiectatic MAC lung disease with 56 cases of diffuse panbronchiolitis (DPB), the most typical chronic airway disease at that time in Japan. The average age of disease onset was 37.0 +/- 16.3 years in the DPB group and 54.5 +/- 16.3 years in the MAC group. The percentage of female patients was 32% in the DPB group and 87.5% in the MAC group. It was highly probable that the two groups belonged to different parent populations, and we could see that patients with the nodular bronchiectatic type of MAC lung disease form a distinct group. We observed serial films of 21 cases of nodular bronchiectatic MAC lung disease and divided the progression of the disease into 7 sequential steps, as shown in Fig. 1. Small nodules progress to cavities over a mean of about 10 years. However, why is MAC, an opportunistic pathogen with weak virulence, able to form a lesion in unimpaired lung parenchyma? Is the site really normal? Why does it start from the lingula? Why is MAC seen so often in women?
While this is an extremely pathognomonic clinical picture, and an extremely interesting problem, most aspects remain unidentified. STUDY OF MAC LUNG DISEASE TREATMENT: It was already known in those days that Mycobacterium kansasii lung disease could be cured with chemotherapy analogous to anti-tuberculosis chemotherapy. However, the results of chemotherapy for MAC lung disease were extremely poor. In 1987, we tried to express a physician's experience quantitatively as follows. The results of 8-week sputum culture on Ogawa egg medium were converted semi-quantitatively to CFU numbers based on the "Japanese standard guideline of Mycobacterium tuberculosis inspection". We expressed the ratio of the culture yield over the 6 consecutive months after treatment to the pre-treatment culture yield as the response rate in 110 pulmonary MAC cases. Through this study, we clarified the following. The results of chemotherapy do not correlate with susceptibility testing for Mycobacterium tuberculosis. A multidrug regimen is more useful. A small extent of lesions is more responsive. Combination chemotherapy with an aminoglycoside is more effective. These conclusions were almost the same as those of the ATS guideline of 1990. New drugs, such as the new macrolides and new quinolones, appeared for pulmonary MAC treatment through feedback from the treatment of systemic MAC complicating AIDS from the latter half of the 90s. We measured the proportion of strains sensitive at 2 mcg/ml of OFLX, CPFX, and LVFX among 990 clinical isolates; effectiveness could be expected against M. kansasii or M. fortuitum, but these new quinolones are not sufficiently effective against MAC. We also examined MICs of various antimycobacterial agents in 50 MAC clinical isolates, and a certain effectiveness of SPFX, GFLX, CPFX, and CAM against MAC could be expected. The availability of clarithromycin (CAM) has been established through many randomized clinical trials for disseminated MAC complicating AIDS, but for pulmonary MAC, complete cure is still difficult even with CAM-containing regimens.
We performed surgical treatment for relatively young patients with localized lesions; the selection criteria we now apply are shown in the Table. The localization of the lesions becomes a problem at surgical resection. In our study of 55 surgically treated cases, 8 (67%) of 12 cases with destroyed airway structures in the unresected lung relapsed, whereas only 1 (10%) of 10 cases without airway destruction in the unresected lung relapsed. Therefore, even if small disseminated foci without airway destruction remain in pulmonary lobes other than the target of resection, control appears possible with postoperative chemotherapy. LONG SURVIVAL: As an overall outcome, we calculated survival curves for 201 pulmonary MAC patients who visited Tokyo National Hospital from 1953 onward. The median survival was 7332 days. The prognosis of the nodular bronchiectatic type was better than that of the post-tuberculosis type. The extent of disease on chest X-ray examination at the first visit may be the factor most affecting the survival rate.
The foregoing experiments show that in cats a definite lobar pneumonia may be caused by Bacillus mucosus capsulatus. Judging both from the clinical course and from the pathological findings, this form of pulmonary infection differs from the usual pneumococcus types of pneumonia and closely resembles the so called Friedländer's bacillus or Bacillus pneumoniae in man. In all instances in which a lobar pneumonia was found after the injection of the bacillus, a similar organism was recovered from the lung, and in no case was this associated with other organisms. The course of the disease in cats is very short, the animals developing early symptoms of profound toxemia. In 87 per cent of the animals showing a lobar pneumonia positive blood cultures were obtained. The pathological findings, judging from the early stages of the disease, are subject to considerable variation. In some instances the process may suggest a pseudolobar or confluent lobular distribution. In these cases the lung has a mottled, marble-like appearance. In the majority of cases, however, the process gave a more homogeneous appearance, suggesting a diffuse and uniform distribution. Foci of hemorrhage were not uncommon in both. Such areas cause the mottled appearance sometimes found. In all instances the consolidated lung presents a greater infiltration of tissue than is usually seen in other types of experimental pneumonia. Although the exudate as seen on the cut surface may be abundant and especially viscid in character, this is not present in most cases. The cut surface of the consolidated lung does not present a granular appearance. The histological findings are also subject to considerable variation. In most instances the infundibular and alveolar spaces are completely filled with an exudate made up chiefly of polymorphonuclear cells. Associated with these are the capsulated bacilli, large vacuolated mononuclear phagocytic cells, and red blood cells, and occasionally small amounts of fibrin.
The organisms may vary greatly in numbers. Some sections show spaces almost completely filled with bacilli. The contrast between spaces containing an exudate consisting chiefly of polymorphonuclear forms and an adjoining one filled with organisms is often striking (Fig. 4). The bacilli found are both intra- and extracellular. The large vacuolated cells are numerous in this type of pneumonia. They apparently are the first cells to become phagocytic. Often they are seen to contain as many as 10 to 15 capsulated bacilli, while polymorphonuclear cells in the same exudate contain no organisms. The histogenesis of these cells seems to be somewhat clearer from the study of these early stages of pneumonia. In many instances one sees swollen, partially desquamated epithelial cells along the alveolar wall. These closely resemble the large vacuolated forms. Various types of these vacuolated mononuclear cells were observed. These may well represent stages of development from the desquamating epithelial cell to the large vacuolated form. Although similar cells may arise elsewhere, we have been led to regard them in our studies as epithelial in origin (Fig. 5). The number of red blood cells and the amount of fibrin present in the exudate vary greatly. Small foci consisting of alveolar spaces filled with erythrocytes are not uncommon. The fibrin is very much less abundant than in most types of pneumonia. From the above experiments it is seen that a lobar pneumonia in cats can be produced at least by two methods, either by intrabronchial insufflation of the organism or by direct injection into the veins, provided that in the latter case an irritant is introduced into the lungs. In each case there is little doubt but that a local injury of the lung parenchyma was produced. Without this injury (that is, by intravenous injection of the organism alone), no pulmonary lesion was obtained. 
Further studies with both these methods must be undertaken to ascertain more exactly the sequence of the pathological process. It seems probable that they are identical in each case. The results obtained from the second method employed to produce a lobar pneumonia offer suggestive evidence in support of a hematogenous causation of this disease in at least certain instances. It is not proposed to discuss the aerogenous versus hematogenous theories at this time. Kidd(5) has recently reviewed the subject and states that the aerogenous theory for the causation of pneumonia is most widely held. This view has gained credence especially since the work of Meltzer and Lamar. In spite of this, Kidd emphasizes the fact that based on our knowledge of pulmonary infections in man and upon theoretical grounds and upon certain experimental facts, the hematogenous theory seems more plausible. No definite conclusions can be drawn from the last series of experiments. From this limited study it seems probable that lobar pneumonic processes are produced less easily after intravenous injection of various cocci and insufflation of irritating substances than by similar treatment with Friedländer's bacillus.