Trabectedin [Ecteinascidin 743, Yondelis, ET 743, NSC 684766] is a tetrahydroisoquinoline alkaloid derived from the Caribbean marine tunicate Ecteinascidia turbinata. The drug is being developed by PharmaMar (Zeltia) in partnership with Johnson & Johnson Pharmaceutical Research & Development LLC. It was isolated and developed at the University of Illinois and licensed to PharmaMar; the company has completed the hemisynthesis of the agent. Trabectedin interacts with the minor groove of DNA and alkylates guanine at the N2 position, which bends towards the major groove. In this manner, the drug is thought to affect various transcription factors involved in cell proliferation, particularly via the transcription-coupled nucleotide excision repair system. Trabectedin blocks the cell cycle at the G2 phase, while cells at the G1 phase are most sensitive to the drug. It also inhibits overexpression of the multidrug resistance-1 gene (MDR-1), which codes for the P-glycoprotein that is a major factor in cells developing resistance to cancer drugs. The agent is also thought to interfere with the nucleotide excision repair pathways of cancer cells, suggesting that it could be effective in the treatment of many cancer types including melanoma and sarcoma, as well as lung, breast, ovarian, endometrial and prostate cancers; clinical evaluations are underway in these indications. PharmaMar and Ortho Biotech Products (Johnson & Johnson) entered into an agreement in August 2001 for the joint development and commercialisation of trabectedin. PharmaMar retains commercialisation rights in Europe, including Eastern Europe. Ortho Biotech will market the product in the US, Japan and the rest of the world; Tibotec Therapeutics (a division of Ortho Biotech) will commercialise it in the US. 
PharmaMar will receive an initial payment from Ortho Biotech plus future milestone and royalty payments linked to development targets and sales; the upfront payment would be approximately 20 million US dollars, with royalties contributing 10-20% of total sales of the drug. Although details of the licensing transaction for trabectedin were undisclosed, analysts estimate the figure to be around 100 million US dollars. Previously, PharmaMar signed an agreement granting Bristol-Myers Squibb the option to evaluate and develop as many as 12 of PharmaMar's marine-derived anticancer compounds on an exclusive worldwide basis. However, it appears that Bristol-Myers Squibb chose not to exercise the option. Trabectedin is undergoing clinical trials in soft tissue sarcoma (Sarcoma in the Phase table), ovarian, breast, endometrial, prostate and non-small-cell lung cancers. PharmaMar indicated in January 2004 that it intends to launch trabectedin in one of these indications in 2006. PharmaMar raised funds from a round of financing in June 2005 that will be used to fund further clinical trials of its anticancer products, including trabectedin. The US FDA granted trabectedin orphan drug status for ovarian cancer in April 2005. Trabectedin also received orphan drug status from the European Commission for the treatment of ovarian cancer in October 2003. This followed a positive opinion by the Committee for Orphan Medicinal Products (COMP) of the EMEA. Trabectedin has undergone a phase II study for the second- or third-line treatment of ovarian cancer in Europe (England and Belgium), the US and Canada. The trial was initiated in October 2002 and evaluated a weekly schedule of trabectedin (0.58 mg/m²) via IV infusion for 3 weeks followed by a week of rest. Final results from this study have been presented. 
A separate phase II trial evaluating the antitumour activity of trabectedin as a second-line therapy in advanced ovarian cancer was conducted by researchers at the Southern Europe New Drugs Organization (SENDO) in Milan, Italy. PharmaMar and Johnson & Johnson are conducting a pivotal trial (STS-201) to compare a weekly and a daily dosing regimen of trabectedin in patients with advanced or metastatic soft tissue sarcoma who are unresponsive to standard chemotherapy with doxorubicin and ifosfamide. The randomised, multicentre, open-label trial completed enrolment of 270 patients during the second quarter of 2005. Positive data from the STS-201 trial have been announced: an independent data monitoring committee found that interim data support a positive trend in time to disease progression favouring patients receiving the daily dosing regimen. Consequently, all patients have been offered the option of switching to the daily regimen. Final results from the STS-201 trial will form the basis of an MAA re-submission with European regulatory authorities. PharmaMar has held a pre-submission meeting with the EMEA and has presented a formal letter of intent to file for approval of trabectedin for soft tissue sarcoma. PharmaMar first filed for EU registration of trabectedin for the treatment of advanced soft tissue sarcoma in November 2001, and the filing was accepted for review by the EMEA and Swiss Health Authorities. However, the CPMP confirmed its recommendation not to grant trabectedin marketing authorisation in November 2003, following PharmaMar's appeal against the CPMP's negative opinion first announced in July 2003; the opinion was adopted by a majority vote rather than by consensus. Trabectedin was granted orphan drug status in Europe for recurrent soft tissue sarcoma in 2001. It was also granted orphan drug status by the FDA for the same indication in October 2004. 
Phase I studies are being conducted to evaluate trabectedin in combination with doxorubicin and liposomal doxorubicin for the treatment of soft tissue sarcoma. PharmaMar is also conducting a phase I study of sequential paclitaxel followed by trabectedin in patients with soft tissue sarcoma. At additional dose levels, patients with other tumour types will be enrolled to assess the antitumour activity of the combination. The US NCI has approved and is partially funding a phase I clinical programme to determine the feasibility of using trabectedin to treat children with soft tissue sarcoma and bone sarcoma who are resistant to conventional therapies. PharmaMar has reported that trabectedin can be safely administered to children at doses up to 1100 µg/m² given as a 3-hour infusion, and that this dose will be used in further paediatric studies. Trabectedin has completed phase II studies for small round cell sarcoma and rhabdomyosarcoma, which are aggressive tumours occurring predominantly in children. A phase II study evaluating two dosing schedules of trabectedin has been conducted in patients with leiomyosarcomas or liposarcomas refractory to standard doxorubicin + ifosfamide chemotherapy. The study was conducted in Australia, Canada, Russia and the US. | Given the following content, create a question whose answer can be found within the content. Then, provide the answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}. |
Tuberculosis control programs place an almost exclusive emphasis on adults with sputum smear-positive tuberculosis, because they are most infectious. However, children contribute a significant proportion of the global tuberculosis caseload and experience considerable tuberculosis-related morbidity and mortality, but few children in endemic areas have access to antituberculosis treatment. The diagnostic difficulty experienced in endemic areas with limited resources has been identified as a major factor contributing to poor treatment access. In general, there is a sense of scepticism regarding the potential value of symptom-based diagnostic approaches, because current clinical diagnostic approaches are often poorly validated. The natural history of childhood tuberculosis demonstrates that symptoms may offer good diagnostic value if they are well defined and if appropriate risk stratification is applied. This study aimed to determine the value of well-defined symptoms to diagnose childhood pulmonary tuberculosis in a tuberculosis-endemic area. A prospective, community-based study was conducted in Cape Town, South Africa. Specific well-defined symptoms were documented in all children < 13 years of age reporting a persistent, nonremitting cough of > 2 weeks' duration; study participants were thoroughly evaluated for tuberculosis. In addition, all of the children who received antituberculosis treatment during the study period were reviewed by the investigator, irrespective of study inclusion. This concurrent disease surveillance provided a comprehensive overview of all of the childhood tuberculosis cases, allowing accurate assessment of the possible disadvantages associated with this symptom-based diagnostic approach. In the absence of an acceptable gold standard test, optimal case definition is an important consideration. 
Children were categorized as "bacteriologically confirmed tuberculosis," "radiologically certain tuberculosis," "probable tuberculosis," or "not tuberculosis." Bacteriologically confirmed tuberculosis was defined as the presence of acid-fast bacilli on sputum microscopy and/or Mycobacterium tuberculosis cultured from a respiratory specimen. Radiologically certain tuberculosis was defined as agreement between both independent experts that the chest radiograph indicated certain tuberculosis in the absence of bacteriologic confirmation. Probable tuberculosis was defined as the presence of suggestive radiologic signs and good clinical response to antituberculosis treatment in the absence of bacteriologic confirmation or radiologic certainty. Good clinical response was defined as complete symptom resolution and weight gain of ≥ 10% of body weight at diagnosis, within 3 months of starting antituberculosis treatment. Not tuberculosis was defined as spontaneous symptom resolution or no response to antituberculosis therapy in the absence of bacteriologic confirmation or radiologic signs suggestive of tuberculosis. Pulmonary tuberculosis was defined as a symptomatic child with: (1) bacteriologically confirmed tuberculosis, (2) radiologically certain tuberculosis, or (3) probable tuberculosis (as defined), excluding isolated pleural effusion. In total, 1024 children were referred for evaluation. Resolving symptoms were reported in 596 children (58.2%); 428 (41.8%) children with persistent, nonremitting symptoms at evaluation were investigated for tuberculosis. Pulmonary tuberculosis was diagnosed in 197 children; 96 were categorized as bacteriologically confirmed tuberculosis, 75 as radiologically certain tuberculosis, and 26 as probable tuberculosis. 
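The hierarchical case definitions above can be encoded as a short sketch. The function and argument names below are hypothetical, introduced only for illustration; the logic follows the definitions as stated in the abstract (decreasing diagnostic certainty, with "good clinical response" requiring complete symptom resolution plus ≥ 10% weight gain within 3 months).

```python
def good_clinical_response(symptoms_resolved, weight_at_diagnosis, weight_at_3_months):
    """Complete symptom resolution plus weight gain of >= 10% of the body
    weight at diagnosis, within 3 months of starting treatment."""
    gain = (weight_at_3_months - weight_at_diagnosis) / weight_at_diagnosis
    return symptoms_resolved and gain >= 0.10

def classify(bacteriologic_confirmation, radiologic_certainty,
             suggestive_radiology, good_response):
    """Apply the study's case definitions in order of decreasing certainty."""
    if bacteriologic_confirmation:   # AFB on smear and/or M. tuberculosis cultured
        return "bacteriologically confirmed tuberculosis"
    if radiologic_certainty:         # both independent experts agree on the radiograph
        return "radiologically certain tuberculosis"
    if suggestive_radiology and good_response:
        return "probable tuberculosis"
    return "not tuberculosis"
```

The ordering matters: a culture-positive child is counted as bacteriologically confirmed even if the radiograph is also diagnostic, matching the mutually exclusive counts reported (96, 75, and 26 children).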
Combining a persistent nonremitting cough of > 2 weeks' duration, documented failure to thrive (in the preceding 3 months), and fatigue provided reasonable diagnostic accuracy in HIV-uninfected children (sensitivity: 62.6%; specificity: 89.8%; positive predictive value: 83.6%); the performance was better in the low-risk group (≥ 3 years; sensitivity: 82.3%; specificity: 90.2%; positive predictive value: 82.3%) than in the high-risk group (< 3 years; sensitivity: 51.8%; specificity: 92.5%; positive predictive value: 90.1%). In children with an uncertain diagnosis at presentation, clinical follow-up was a valuable diagnostic tool that further improved diagnostic accuracy, particularly in the low-risk group. Symptom-based approaches offered little diagnostic value in HIV-infected children. Three (15%) of the 20 HIV-infected children diagnosed with pulmonary tuberculosis failed to report symptoms of sufficient duration to warrant study inclusion, whereas 25% reported persistent, nonremitting symptoms in the absence of tuberculosis. In addition, the tuberculin skin test was positive in < 20% of HIV-infected children diagnosed with pulmonary tuberculosis. The combined presence of 3 well-defined symptoms at presentation (persistent, nonremitting cough of > 2 weeks' duration; objective weight loss [documented failure to thrive] during the preceding 3 months; and reported fatigue) provided good diagnostic accuracy in HIV-uninfected children ≥ 3 years of age, with clinical follow-up providing additional value. The approach performed less well in children < 3 years. However, the presence of a persistent, nonremitting cough together with documented failure to thrive still provided a fairly accurate diagnosis (sensitivity: 68.3%; specificity: 80.1%; positive predictive value: 82.1%), illustrating the importance of regular weight monitoring in young children. 
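The accuracy figures above follow the standard 2×2 contingency-table definitions. A minimal sketch (the counts below are hypothetical, chosen only to exercise the formulas; the abstract reports only the derived percentages):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and positive predictive value from a 2x2 table:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    sensitivity = tp / (tp + fn)   # proportion of diseased children detected
    specificity = tn / (tn + fp)   # proportion of non-diseased correctly excluded
    ppv = tp / (tp + fp)           # probability of disease given a positive result
    return sensitivity, specificity, ppv
```

Note that, unlike sensitivity and specificity, the positive predictive value depends on disease prevalence in the tested population, which is one reason the authors stress risk stratification.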
Clinical follow-up also offered additional diagnostic value, but caution is required, because very young children have an increased risk of rapid disease progression. The approach performed poorly in HIV-infected children. Recent household contact with an adult index case seemed to provide more diagnostic value than a positive tuberculin skin test, but novel T-cell-based assays may offer the only real improvement in sensitivity to diagnose M. tuberculosis infection in HIV-infected children. The variable diagnostic value offered by this symptom-based diagnostic approach illustrates the importance of risk stratification, as demonstrated by the fact that 11 (91.7%) of 12 children with severe disease manifestations who failed to meet the entry criteria were < 3 years of age or HIV infected. Particular emphasis should be placed on the provision of preventive chemotherapy after documented exposure and/or infection in these high-risk children. Study limitations include the small number of HIV-infected children, but on the positive side, the large number of HIV-uninfected children permitted adequate evaluation in this important group. It is often forgotten that HIV-uninfected children constitute the majority of child tuberculosis cases, even in settings where HIV is endemic. This study demonstrates the importance of ascertaining a child's HIV status before symptom-based diagnosis is attempted. Because children were recruited at both the clinic and hospital level, some selection bias may have been introduced; however, the only significant difference between the 2 groups was the proportion of HIV-infected children. Pulmonary tuberculosis was diagnosed with different levels of certainty, but no significant differences were recorded between these groups. Pulmonary tuberculosis can be diagnosed with a reasonable degree of accuracy in HIV-uninfected children using a simple symptom-based approach. 
This offers the exciting prospect of improving treatment access for children, particularly in resource-limited settings where current access to antituberculosis treatment is poor.
Diving medicine is a peculiar specialty. There are physicians and scientists from a wide variety of disciplines with an interest in diving who all practice 'diving medicine': the study of the complex whole-body physiological changes and interactions upon immersion and emersion. To understand these, the science of physics and molecular gas and fluid movements comes into play. The ultimate goal of practicing diving medicine is to preserve the diver's health, both during and after the dive. Good medicine starts with prevention. For most divers, underwater excursions are not a professional necessity but a hobby; avoidance of risk is generally a much better option than risk mitigation or cure. However, prevention of diving illnesses seems to be even more difficult than treating those illnesses. The papers contained in this issue of DHM are a nice mix of various aspects of PFO that divers are interested in, all of them written by specialist doctors who are avid divers themselves. However, diving medicine should also take advantage of research from the "non-diving" medicine community, and PFO is a prime example. Cardiology and neurology have studied PFO for as long as, or even longer than, divers have been the subjects of PFO research, and with much greater numbers and resources. Unexplained stroke has been associated with PFO, as has severe migraine with aura. As the association seems to be strong, investigating the effect of PFO closure was a logical step. Devices have been developed and perfected, now allowing a relatively low-risk procedure to 'solve the PFO problem'. However, as with many things in science, the results have not been as spectacular as hoped for: patients still get recurrences of stroke and still have migraine attacks. The risk-benefit ratio of PFO closure for these non-diving diseases is still debated. For diving, we now face a similar problem. 
Let there be no doubt that PFO is a pathway through which venous gas emboli (VGE) can arterialize, given sufficiently favourable circumstances (such as a large quantity of VGE, the size of the PFO, straining or provocation manoeuvres inducing increased right atrial pressure, delayed tissue desaturation so that seeding arterial gas emboli (AGE) grow instead of shrink, and there may be other, as yet unknown factors). There is no doubt that closing a PFO, either surgically or using a catheter-delivered device, can reduce the number of VGE becoming AGE. There is also no doubt that the procedure itself carries some health risks which, at a 1% or higher risk of serious complications, are an order of magnitude greater than the risk of decompression illness (DCI) in recreational diving. Scientists seek the 'truth', but the truth about how much of a risk PFO represents for divers is not likely to be discovered nor universally accepted. First of all, the exact prevalence of PFO in divers is not known. As has been pointed out in the recent literature, a contrast echocardiography (be it transthoracic or transoesophageal) or Doppler examination is only reliable if performed according to a strict protocol, taking into account the very many pitfalls yielding false-negative results. The optimal procedure for injection of contrast medium was described several years ago, but has not received enough attention. Indeed, it is our and others' experience that many divers presenting with PFO-related DCI symptoms initially are declared "PFO-negative" by eminent, experienced cardiologists! Failing a prospective study, the risks of diving with a right-to-left vascular shunt can only be expressed as an 'odds ratio', which is a less accurate measure than 'relative risk'. The DAN Europe Carotid Doppler Study, started in 2001, is nearing completion and will provide more insight into the actual risks of DCI for recreational divers. 
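The contrast drawn here between an odds ratio and a relative risk can be made concrete with a small sketch. The counts below are hypothetical cohort-style numbers (not data from any diving study), used only to show that the two measures diverge, with the odds ratio overstating the relative risk:

```python
def risk_measures(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Relative risk and odds ratio from a 2x2 table (illustrative only)."""
    risk_exp = exposed_cases / exposed_total
    risk_unexp = unexposed_cases / unexposed_total
    rr = risk_exp / risk_unexp                                  # relative risk

    odds_exp = exposed_cases / (exposed_total - exposed_cases)
    odds_unexp = unexposed_cases / (unexposed_total - unexposed_cases)
    odds_ratio = odds_exp / odds_unexp
    return rr, odds_ratio
```

A relative risk needs the denominators (population at risk), which case-control-style data on divers with DCI cannot supply; that is why, absent a prospective study, only an odds ratio is available.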
The degree of DCI risk reduction from closing a PFO is thus not only dependent on successful closure but also (mostly?) on how the diver manages his/her dive and decompression in order to reduce the incidence of VGE. It has been convincingly shown that conservative dive profiles reduce DCI incidence even in divers with large PFOs, just as PFO closure does not protect completely from DCI if the dive profiles are aggressive. Prospective studies should not only focus on the reduction of DCI incidence after closure, but should take into account the costs and side effects of the procedure, as has been done in the cardiology and neurology studies. Imagine lung transplants becoming a routine operation, costly but with a high success rate; imagine also a long-term smoker suffering from a mild form of obstructive lung disease and exercise-limiting dyspnoea. Which of two options would you recommend: having a lung transplant and continuing to smoke as before, or quitting smoking and observing a progressive improvement of pulmonary and cardiac pathology? As opposed to patients with thrombotic disease and migraine, divers can choose to reduce DCI risk. In fact, all it takes is acceptance that some types of diving carry too high a health risk - whether because of a PFO or another 'natural' factor. It would be unethical to promote PFO closure in divers solely on the basis of its efficacy of shunt reduction. Unfortunately, at least one device manufacturer has already done so in the past, citing various publications to specifically target recreational divers. Some technical diving organizations have even recommended preventive PFO closure before undertaking high-risk dive training. As scientists, we must not allow ourselves to be drawn into intuitive diver fears and beliefs. Nor should we let ourselves be blinded by the ease and seemingly low risk of the procedure. 
With proper and objective information provided by their diving medicine specialist, divers could make an informed decision, rather than focus on the simplistic idea that they need 'to get it fixed' in order to continue diving. A significant relationship between PFO and cerebral damage, in the absence of high-risk diving or DCI, has yet to be confirmed. Studying PFO-related DCI provides us with unique opportunities to learn more about the effect of gas bubbles in various tissues, including the central vascular bed and neurological tissue. It may also serve to educate divers that safe diving is something that needs to be learned, not something that can be implanted.
Influenza-associated deaths in healthy children that were reported during the 2003-2004 influenza season heightened public awareness of the seriousness of influenza in children. In 1996-1998, a pivotal phase III trial was conducted in children who were 15 to 71 months of age. Live attenuated influenza vaccine, trivalent (LAIV-T), was shown to be safe and efficacious. In a subsequent randomized, double-blind, placebo-controlled LAIV-T trial in children who were 1 to 17 years of age, a statistically significant increase in asthma encounters was observed for children who were younger than 59 months. LAIV-T was not licensed for children younger than 5 years because of the concern for asthma. We report on the largest safety study to date of the recently licensed LAIV-T in children 18 months to 4 years, 5 to 9 years, and 10 to 18 years of age in a 4-year (1998-2002) community-based trial that was conducted at Scott & White Memorial Hospital and Clinic (Temple, TX). An open-label, nonrandomized, community-based trial of LAIV-T was conducted before its licensure. Medical records of all children were surveyed for serious adverse events (SAEs) 6 weeks after vaccination. Health care utilization was evaluated by determining the relative risk (RR) of medically attended acute respiratory illness (MAARI) and asthma rates at 0 to 14 and 15 to 42 days after vaccination compared with the rates before vaccination. Medical charts of all visits coded as asthma were reviewed for appropriate classification of events: acute asthma or other. We evaluated the risk for MAARI (health care utilization for acute respiratory illness) 0 to 14 and 15 to 42 days after LAIV-T by a method similar to the postlicensure safety analyses conducted on measles, mumps, and rubella and on diphtheria, tetanus, and whole-cell pertussis vaccines. All children, regardless of age, were administered a single intranasal dose of LAIV-T in each vaccine year. 
In the 4 years of the study, we administered 18780 doses of LAIV-T to 11096 children. A total of 4529, 7036, and 7215 doses of LAIV-T were administered to children who were 18 months to 4 years, 5 to 9 years, and 10 to 18 years of age, respectively. In vaccination years 1, 2, 3, and 4, we identified 10, 15, 11, and 6 SAEs, respectively. None of the SAEs was attributed to LAIV-T. In vaccination years 1, 2, 3, and 4, we identified 3, 2, 1, and 0 pregnancies, respectively, among adolescents. All delivered healthy infants. The RR for MAARI from 0 to 14 and 15 to 42 days after LAIV-T was assessed in vaccinees during the 4 vaccine years. Compared with the prevaccination period, there was no significant increase in risk in health care utilization attributed to MAARI from 0 to 14 and 15 to 42 days after vaccination in children who were 18 months to 4 years, 5 to 9 years, and 10 to 18 years of age in the 4 vaccine years. In children who were 18 months to 4 years of age, there was no significant increase in the risk in health care utilization for MAARI, MAARI subcategories (otitis media/sinusitis, upper respiratory tract illness, and lower respiratory tract illness), and asthma during the 0 to 14 days after vaccination compared with the prevaccination period. No significant increase in the risk in health care utilization for MAARI, MAARI subcategories, and asthma was detected when the risk period was extended to 15 to 42 days after vaccination, except for asthma events in vaccine year 1. A RR of 2.85 (95% confidence interval [CI]: 1.01-8.03) for asthma events was detected in children who were 18 months to 4 years of age but was not significantly increased for the other 3 vaccine years (vaccine year 2, RR: 1.42 [95% CI: 0.59-3.42]; vaccine year 3, RR: 0.47 [95% CI: 0.12-1.83]; vaccine year 4, RR: 0.20 [95% CI: 0.03-1.54]). 
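The RR point estimates and 95% confidence intervals quoted above are conventionally computed on the log scale. A minimal sketch of that calculation (the counts below are hypothetical; the abstract reports only the derived RRs and CIs, not the underlying event counts):

```python
import math

def rr_with_ci(a, n1, b, n2, z=1.96):
    """Relative risk (a/n1)/(b/n2) with a log-normal Wald 95% CI.
    a, b = events in the post- and pre-vaccination periods;
    n1, n2 = corresponding denominators (hypothetical counts)."""
    rr = (a / n1) / (b / n2)
    # Standard error of ln(RR) for cumulative-incidence data
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lower = math.exp(math.log(rr) - z * se_log)
    upper = math.exp(math.log(rr) + z * se_log)
    return rr, lower, upper
```

An RR whose 95% CI excludes 1.0 (such as the 2.85 with CI 1.01-8.03 above) is nominally significant; with 190 unadjusted comparisons, however, some such intervals are expected by chance alone, which is the authors' interpretation.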
No significant increase in the risk in health care utilization for MAARI or asthma was observed in children who were 18 months to 18 years of age and received 1, 2, 3, or 4 annual sequential doses of LAIV-T. Children who were 18 months to 4 years of age and received 1, 2, 3, or 4 annual doses of LAIV-T did not experience a significant increase in the RR for MAARI 0 to 14 days after vaccination; this was also true for children who were 5 to 9 and 10 to 18 years of age. We observed no increased risk for asthma events 0 to 14 days after vaccination in children who were 18 months to 4 years, 5 to 9 years, and 10 to 18 years of age. In vaccine year 1, children who were 18 months to 4 years of age did have a significantly higher RR (2.85; 95% CI: 1.01-8.03) for asthma events 15 to 42 days after vaccination. In vaccine year 2, the formulation of LAIV-T was identical to the vaccine formulation used in vaccine year 1; however, in children who were 18 months to 4 years of age, no statistically significant increased risk was detected for asthma events 15 to 42 days after vaccination. Similarly, in vaccine years 3 and 4, children who were 18 months to 4 years of age did not have a statistically significant increased risk for asthma events 15 to 42 days after vaccination. Also, LAIV-T did not increase the risk for asthma in children who received 1, 2, 3, or 4 annual doses of LAIV-T. Although the possibility of a true increased risk for asthma was observed in 1 of 4 years in children who were 18 months to 4 years of age at 15 to 42 days after vaccination, it is more likely that the association is a chance effect because of the 190 comparisons made without adjustment for multiple comparisons. We conclude that LAIV-T is safe in children who are 18 months to 4 years, 5 to 9 years, and 10 to 18 years of age. The hypothesis that LAIV-T is associated with an increase in asthma events in children who are younger than 5 years is not supported by our data. 
Reassessment of the lower age limit for use of LAIV-T in children is indicated.
To observe the effects of astragalus polysaccharide (AP) on the intestinal mucosal morphology, level of secretory IgA (s-IgA) in intestinal mucus, and distribution of T lymphocyte subsets in Peyer's patch in rats with severe scald injury. One hundred and thirty SD rats were divided into sham injury group (SI, sham injured, n = 10), scald group (S, n = 30), low dosage group (LD, n = 30), moderate dosage group (MD, n = 30), and high dosage group (HD, n = 30) according to the random number table. Rats in the latter 4 groups were inflicted with 30% TBSA full-thickness scald on the back. From post injury hour 2, rats in groups LD, MD, and HD were intraperitoneally injected with 0.5 mL AP solution with the dosage of 100, 200, and 300 mg/kg each day respectively, and rats in group S were injected with 0.5 mL normal saline instead. Ten rats from group SI immediately after injury and 10 rats from each of the latter 4 groups on post injury day (PID) 3, 7, 14 were sacrificed, and their intestines were harvested. The morphology of ileal mucosa was examined after HE staining; the level of s-IgA in ileal mucus was determined with double-antibody sandwich ELISA method; the proportions of CD3⁺, CD4⁺, CD8⁺ T lymphocytes in Peyer's patches of intestine were determined with flow cytometer, and the proportion of CD4⁺ to CD8⁺ was calculated. Data were processed with one-way analysis of variance, analysis of variance of factorial design, and SNK test. (1) Villi in normal form and intact villus epithelial cells were observed in rats of group SI immediately after injury, while edema of villi and necrosis and desquamation of an enormous amount of villi were observed in groups with scalded rats on PID 3, with significant infiltration of inflammatory cells. On PID 7, no obvious improvement in intestinal mucosal lesion was observed in groups with scalded rats. 
On PID 14, the pathology in intestinal mucosa of rats remained nearly the same in group S, and it was alleviated obviously in groups LD and MD, and the morphology of intestinal mucosa of rats in group HD was recovered to that of group SI. (2) On PID 3, 7, and 14, the level of s-IgA in intestinal mucus significantly decreased in groups S, LD, MD, and HD [(43 ± 5), (45 ± 5), (46 ± 5) µg/mL; (47 ± 5), (48 ± 5), (49 ± 6) µg/mL; (50 ± 6), (51 ± 5), (52 ± 5) µg/mL; (53 ± 6), (54 ± 5), (55 ± 5) µg/mL] as compared with that of rats in group SI immediately after injury [(69 ± 4) µg/mL, with P values below 0.05]. The level of s-IgA in intestinal mucus of rats in group MD was significantly higher than that in group S at each time point (with P values below 0.05), and that of group HD was significantly higher than that in groups S and LD at each time point (with P values below 0.05). (3) Compared with those of rats in group SI immediately after injury, the proportions of CD3⁺ T lymphocytes and CD4⁺ T lymphocytes significantly decreased in groups with scalded rats at each time point (with P values below 0.05), except for those in group HD on PID 14. The proportion of CD4⁺ T lymphocytes of rats in group LD was significantly higher than that in group S on PID 3 (P < 0.05). The proportions of CD3⁺ T lymphocytes and CD4⁺ T lymphocytes were significantly higher in groups MD and HD than in groups S and LD (except for the proportion of CD4⁺ T lymphocytes in group MD on PID 3 and 14) at each time point (with P values below 0.05). The proportion of CD3⁺ T lymphocytes on PID 7 and 14 and that of CD4⁺ T lymphocytes on PID 3 were significantly higher in group HD than in group MD (with P values below 0.05). Compared with that of rats in group SI immediately after injury, the proportion of CD8⁺ T lymphocytes significantly increased in the other 4 groups at each time point (with P values below 0.05). 
The proportion of CD8⁺ T lymphocytes was significantly lower in rats of group LD on PID 7 and 14 and groups MD and HD at each time point than in group S (with P values below 0.05). The proportion of CD8⁺ T lymphocytes was significantly lower in rats of group MD on PID 7 and 14 and group HD at each time point than in group LD (with P values below 0.05). The proportion of CD8⁺ T lymphocytes was significantly lower in rats of group HD on PID 7 and 14 than in group MD (with P values below 0.05). On PID 3, 7, and 14, the proportion of CD4⁺ to CD8⁺ was significantly lower in groups S, LD, MD, and HD (0.65 ± 0.11, 0.68 ± 0.13, 0.73 ± 0.22; 0.76 ± 0.15, 0.78 ± 0.14, 0.90 ± 0.10; 0.85 ± 0.21, 0.89 ± 0.18, 1.08 ± 0.19; 0.99 ± 0.20, 1.05 ± 0.21, 1.25 ± 0.23) as compared with that of rats in group SI immediately after injury (1.74 ± 0.20, with P values below 0.05). The proportion of CD4⁺ to CD8⁺ was significantly higher in rats of group HD than in group MD on PID 7 (P < 0.05), and the proportion was significantly higher in these two groups than in group S at each time point (with P values below 0.05). The proportion of CD4⁺ to CD8⁺ was significantly higher in rats of group MD on PID 14 and group HD at each time point than in group LD (with P values below 0.05). Compared within each group, the proportions of CD3⁺, CD4⁺, CD8⁺ T lymphocytes and the proportion of CD4⁺ to CD8⁺ of rats in groups LD, MD, and HD showed a trend of gradual elevation along with passage of time. AP can improve the injury to intestinal mucosa and modulate the balance of T lymphocyte subsets in Peyer's patch in a time- and dose-dependent manner, and it can promote s-IgA secretion of intestinal mucosa in a dose-dependent manner.
The primary purpose of FFF X-rays is to provide much higher dose rates for treatments. For example, FFF X-rays from the Varian TrueBEAM can deliver 1400 MU/minute for 6 MV X-rays and 2400 MU/minute for 10 MV X-rays. Higher dose rates have definite clinical benefits in organ motion management. For example, larger dose fractions can be delivered in a single breath-hold or gated portion of a breathing cycle. In SRS or SBRT treatments, large MUs are often required, and FFF X-ray beams can deliver these large MUs in much shorter "beam-on" time. With shortened treatment time, these FFF X-rays improve patient comfort and dose delivery accuracy. FFF X-ray beams may become one of the necessary equipment configurations for SBRT and/or SRS treatments in the future. This presentation will address some unique issues dealing with FFF X-rays: (1) FIELD SIZES: The flattening filter free (FFF) X-ray beam has been in clinical use for quite some time. Until recently, however, these FFF beams were used only in limited, small field sizes, for example, in Tomotherapy and CyberKnife machines. The Varian TrueBEAM allows the FFF X-ray beam to have up to 40 × 40 cm field sizes for both 6 and 10 MV X-rays (15 MV FFF X-rays are not yet released for clinical use). For large treatment fields, the dose uniformity within an irradiated treatment field needs to be "modulated" by MLC movements (IMRT) to cut down the higher beam intensity near the central portion of the FFF X-ray beam. Thus, larger MUs are required compared with a conventional (flattened) X-ray beam. In other words, MLC movements (IMRT) are now being used to "flatten" the FFF X-rays to provide dose uniformity within those large PTVs, and the high dose rates from the FFF X-rays are then offset by the larger MU requirements. Therefore, FFF X-rays bring clinical advantages over conventional X-rays when used with small field sizes, such as in SBRT and/or SRS applications.
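The beam-on-time advantage quoted above is simple arithmetic on the stated dose rates. A minimal sketch (the 2000 MU fraction and the 600 MU/minute flattened-beam rate are illustrative assumptions, not values from the text; real deliveries add MLC, gantry, and imaging overhead):

```python
def beam_on_minutes(total_mu, dose_rate_mu_per_min):
    """Ideal beam-on time in minutes: total MU divided by dose rate.

    Ignores mechanical overhead (MLC motion, gantry rotation, imaging),
    so actual treatment times are longer.
    """
    return total_mu / dose_rate_mu_per_min

# A hypothetical 2000 MU SBRT fraction at three dose rates (MU/min);
# 600 stands in for a typical flattened beam, 1400 and 2400 are the
# TrueBEAM FFF rates quoted in the text.
for rate in (600, 1400, 2400):
    print(rate, round(beam_on_minutes(2000, rate), 2))
```

At 2400 MU/minute the same 2000 MU takes well under a minute of beam-on time, versus more than three minutes at 600 MU/minute, which is the shortened-treatment-time argument in concrete numbers.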
(2) DOSIMETRY MEASUREMENT EQUIPMENT: Because of the 2- to 4-fold increase in dose rate (MU/minute), the radiation measurement equipment and techniques need to be carefully evaluated, such as ionization chamber characteristics, electrometers, and scanning equipment. The first concern is the ion-recombination characteristics of the ionization chamber (P-ion). This will determine the accuracy of the measured percentage depth doses and penumbra of these FFF X-ray fields, and it will also affect the absolute dose measurements (Gy/MU) using the TG-51 formulations. The measured PDDs and profiles should be corrected for the P-ion effect. However, it is not a simple task for physicists to perform the P-ion corrections for PDDs and profiles using the methods presently available in commercial 3-D scanning equipment and algorithms. It may become necessary for physicists to adapt and grow accustomed to the use of "standard beam data" provided by manufacturers in the future. In addition, because FFF X-rays are focused on SBRT and/or SRS applications, beam data acquisition, scanning techniques, and beam modeling are vitally important. There are many publications addressing the "output factors" from small fields, but none pay enough attention to the penumbra characteristics of these small X-ray beams. Because of the proximity to critical organs, the penumbra characteristics of small fields are often more clinically important than output factors. FFF X-rays play an important role in SBRT and SRS applications; therefore, careful penumbra measurements should be addressed. Again, it may become necessary for physicists to adopt the use of "standard beam data" provided by manufacturers.
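For pulsed linac beams, the TG-51 formulations estimate P-ion with the two-voltage technique. A small sketch of that correction (the bias voltages and electrometer readings below are illustrative assumptions, not measured data):

```python
def p_ion_pulsed(v_high, v_low, m_high, m_low):
    """Two-voltage ion-recombination correction for pulsed beams (TG-51):

        P_ion = (1 - V_H/V_L) / (M_H/M_L - V_H/V_L)

    v_high, v_low -- chamber bias voltages (v_high is the normal operating
                     voltage, typically about twice v_low)
    m_high, m_low -- raw chamber readings collected at each voltage
    """
    v_ratio = v_high / v_low
    return (1.0 - v_ratio) / (m_high / m_low - v_ratio)

# Illustrative readings at 300 V and 150 V: a slightly smaller signal at
# the lower voltage implies a recombination correction just above 1.
print(round(p_ion_pulsed(300.0, 150.0, 1.000, 0.996), 4))
```

Because FFF beams deliver a much higher dose per pulse, P-ion tends to be larger than for flattened beams, which is why the chamber's recombination behavior is singled out here.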
(3) RADIOBIOLOGICAL QUESTIONS: Though there is a lack of controlled clinical studies with FFF X-ray beams, there are several scientific articles addressing the radiobiological concerns of high dose rate deliveries, especially when they are used to deliver large doses per fraction, such as 10 Gy/fraction. This type of dose per fraction is often used in SBRT or SRS treatments. The radiobiological concern is not the cell-kill effect within the target volume; it is the damage to normal tissue surrounding the target. There are concerns about the late toxicities of these high dose rate and high dose per fraction deliveries using FFF X-ray beams. (4) SKIN (ENTRANCE) DOSE DISCUSSIONS: In conventional X-ray beams, the low energy components of the X-ray beam are removed by the in-line X-ray flattening filter. But in flattening filter free X-ray beams, these low energy components exit the X-ray collimators. This is clearly documented by the difference in the percentage depth doses for these FFF X-ray beams: the FFF X-ray beams have a lower "effective energy" compared with conventional X-rays. Therefore, it is important to study the skin (entrance) dose from these FFF X-rays. In the literature, reported skin (entrance) doses from different linear accelerator manufacturers vary widely. Skin doses from the Varian TrueBEAM have been studied and found to be marginally higher than those of conventional X-rays; however, this marginal increase is not clinically significant. (5) SUMMARY / CONCLUSIONS / DISCUSSIONS: The FFF X-rays improve treatment delivery through their very high dose rates (1400 and 2400 MU/minute) and shortened treatment time. FFF X-ray beams are most applicable, and the high dose rates most advantageous, when the treatment field sizes are small. The dosimetry of FFF X-rays is made more complex by the P-ion determination and the necessary corrections to X-ray beam percentage depth doses and profiles.
There are radiobiological concerns about late toxicity of normal tissue irradiated by FFF X-rays when large dose per fraction treatments are used. There is a wide range of skin doses from these FFF X-rays reported in the literature. 1. Flattening Filter Free X-rays have been in clinical use for many years, but mostly for small field sizes. 2. Flattening Filter Free X-rays offer significantly shorter treatment times if used for small field applications, such as SBRT or SRS. 3. Because of the high dose rate (MU/minute), the dosimetric properties of these FFF X-rays need to be carefully studied. Ion recombination of ionization chambers is a concern. Beam data acquisition, beam modeling, and absolute dosimetry need to be done with great care, especially in small field applications. 4. Late toxicities of normal tissue may be a concern and need to be studied by organized clinical protocols.
Environmental risk-management decisions in the U.S. involving potential exposures to methylmercury currently use a reference dose (RfD) developed by the U.S. Environmental Protection Agency (USEPA). This RfD is based on retrospective studies of an acute poisoning incident in Iraq in which grain contaminated with a methylmercury fungicide was inadvertently used in the baking of bread. The exposures, which were relatively high but lasted only a few months, were associated with neurological effects in both adults (primarily paresthesia) and infants (late walking, late talking, etc.). It is generally believed that the developing fetus represents a particularly sensitive subpopulation for the neurological effects of methylmercury. The USEPA derived an RfD of 0.1 microg/kg/day based on benchmark dose (BMD) modeling of the combined neurological endpoints reported for children exposed in utero. This RfD included an uncertainty factor of 10 to consider human pharmacokinetic variability and database limitations (lack of data on multigeneration effects or possible long-term sequelae of perinatal exposure). Alcoa signed an Administrative Order of Consent for the conduct of a remedial investigation/feasibility study (RI/FS) at their Point Comfort Operations and the adjacent Lavaca Bay in Texas to address the effects of historical discharges of mercury-containing wastewater. In cooperation with the Texas Natural Resource Conservation Commission and USEPA Region VI, Alcoa conducted a baseline risk assessment to assess potential risk to human health and the environment. As a part of this assessment, Alcoa pursued the development of a site-specific RfD for methylmercury to specifically address the potential human health effects associated with the ingestion of contaminated finfish and shellfish from Lavaca Bay.
Application of the published USEPA RfD to this site is problematic; while the study underlying the RfD represented acute exposure to relatively high concentrations of methylmercury, the exposures of concern for the Point Comfort site are from the chronic consumption of relatively low concentrations of methylmercury in fish. Since the publication of the USEPA RfD, several analyses of chronic exposure to methylmercury in fish-eating populations have been reported. The purpose of the analysis reported here was to evaluate the possibility of deriving an RfD for methylmercury, specifically for the case of fish ingestion, on the basis of these new studies. In order to better support the risk-management decisions associated with developing a remediation approach for the site in question, the analysis was designed to provide information on the distribution of acceptable ingestion rates across a population, which could reasonably be expected to be consistent with the results of the epidemiological studies of other fish-eating populations. Based on a review of the available literature on the effects of methylmercury, a study conducted with a population in the Seychelles Islands was selected as the critical study for this analysis. The exposures to methylmercury in this population result from chronic, multigenerational ingestion of contaminated fish. This prospective study was carefully conducted and analyzed, included a large cohort of mother-infant pairs, and was relatively free of confounding factors. The results of this study are essentially negative, and a no-observed-adverse-effect level (NOAEL) derived from the estimated exposures has recently been used by the Agency for Toxic Substances and Disease Registry (ATSDR) as the basis for a chronic oral minimal risk level (MRL) for methylmercury. In spite of the fact that no statistically significant effects were observed in this study, the data as reported are suitable for dose-response analysis using the BMD method. 
Evaluation of the BMD method used in this analysis, as well as in the current USEPA RfD, has demonstrated that the resulting 95% lower bound on the 10% benchmark dose (BMDL) represents a conservative estimate of the traditional NOAEL, and that it is superior to the use of "average" or "grouped" exposure estimates when dose-response information is available, as is the case for the Seychelles study. A more recent study in the Faroe Islands, which did report statistically significant associations between methylmercury exposure and neurological effects, could not be used for dose-response modeling due to inadequate reporting of the data and confounding from co-exposure to polychlorinated biphenyls (PCBs). BMD modeling over the wide range of neurological endpoints reported in the Seychelles study yielded a lowest BMDL for methylmercury in maternal hair of 21 ppm. This BMDL was then converted to an expected distribution of daily ingestion rates across a population using Monte Carlo analysis with a physiologically based pharmacokinetic (PBPK) model to evaluate the impact of interindividual variability. The resulting distribution of ingestion rates at the BMDL had a geometric mean of 1.60 microg/kg/day with a geometric standard deviation of 1.33; the 1st, 5th, and 10th percentiles of the distribution were 0.86, 1.04, and 1.15 microg/kg/day. In place of the use of an uncertainty factor of 3 for pharmacokinetic variability, as is done in the current RfD, one of these lower percentiles of the daily ingestion rate distribution provides a scientifically based, conservative basis for taking into consideration the impact of pharmacokinetic variability across the population. On the other hand, it was felt that an uncertainty factor of 3 for database limitations should be used in the current analysis. 
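If the reported ingestion-rate distribution is treated as lognormal with the stated geometric mean and geometric standard deviation (an assumption; the paper's Monte Carlo/PBPK output need not be exactly lognormal), its analytic percentiles land within about 0.05 microg/kg/day of the reported 1st, 5th, and 10th percentiles (0.86, 1.04, and 1.15):

```python
import math

GM = 1.60    # reported geometric mean, microg/kg/day
GSD = 1.33   # reported geometric standard deviation

# Standard-normal quantiles for the 1st, 5th, and 10th percentiles
Z = {1: -2.3263, 5: -1.6449, 10: -1.2816}

def lognormal_percentile(p):
    """p-th percentile of a lognormal given GM and GSD: GM * GSD**z_p."""
    return GM * math.exp(Z[p] * math.log(GSD))

for p in (1, 5, 10):
    print(p, round(lognormal_percentile(p), 2))
```

Dividing the reported 10th-percentile value (1.15 microg/kg/day) by the database uncertainty factor of 3 gives roughly 0.38, consistent with the site-specific RfD of 0.4 microg/kg/day recommended later in the analysis.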
Although there can be high confidence in the benchmark-estimated NOAEL of 21 ppm in the Seychelles study, some results in the New Zealand and Faroe Islands studies could be construed to suggest the possibility of effects at maternal hair concentrations below 10 ppm. In addition, while concerns regarding the possibility of chronic sequelae are not supported by the available data, neither can they be absolutely ruled out. The use of an uncertainty factor of 3 is equivalent to using a NOAEL of 7 ppm in maternal hair, which provides additional protection against the possibility that effects could occur at lower concentrations in some populations. Based on the analysis described above, the distribution of acceptable daily ingestion rates (RfDs) recommended to serve as the basis for site-specific risk-management decisions at Alcoa's Point Comfort Operations ranges from approximately 0.3 to 1.1 microg/kg/day, with a population median (50th percentile) of 0.5 microg/kg/day. By analogy with USEPA guidelines for the use of percentiles in applications of distributions in exposure assessments, the 10th percentile provides a reasonably conservative measure. On this basis, a site-specific RfD of 0.4 microg/kg/day is recommended.
Quincke's researches (1904) have demonstrated that when a 20 per cent gelatin gel is allowed to swell in water it gives rise to positive double refraction, as if the gel were under tensile stresses. If, on the other hand, the gel shrinks on being placed in alcohol it becomes negatively double refractive, as if it were compressed. But the double refraction as found by Quincke lasts only during the process of swelling or shrinking, and disappears as soon as the gel reaches a state of equilibrium. This phenomenon was investigated here and it was found that the reason for the disappearance of the double refraction is due to the fact that at equilibrium the percentage change in the size of a gel is equal in all three dimensions and the strain is therefore uniform. Double refraction persists as long as there is a difference in the elastic strain in the three dimensions of the strained material. It was found that when gels are cast on glass slides or in glass frames, so as to prevent swelling in certain directions, the double refraction produced by swelling at 6 degrees C. persists permanently in the gel as long as it is swollen, and is proportional to the percentage change in the linear dimensions of the gel. Gels made up of various concentrations of isoelectric gelatin of less than 10 per cent when placed in dilute buffer of the same pH as that of the isoelectric point of the gelatin shrink and give rise to negative double refraction, while gels of concentrations of more than 10 per cent swell and give rise to positive double refraction. The double refraction produced in either case when divided by the percentage change in the dimensions of the gel and by its changed concentration gives a constant value both for swelling and shrinking. This constant which stands for the double refraction produced in a gel of unit concentration per unit strain is termed here the optical modulus of elasticity since it is proportional to the internal elastic stress in the swollen gelatin. 
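The "optical modulus of elasticity" defined above is simply the measured double refraction divided by the strain (fractional change in linear dimensions) and the gel concentration. A sketch of that definition with purely hypothetical numbers (no values from this paper are used):

```python
def optical_modulus(delta_n, strain, concentration):
    """Double refraction per unit strain per unit gelatin concentration.

    delta_n       -- measured double refraction (dimensionless)
    strain        -- fractional change in the linear dimensions of the gel
    concentration -- gelatin concentration (mass fraction)
    All inputs below are hypothetical, for illustration only.
    """
    return delta_n / (strain * concentration)

# The text's claim is that swelling and shrinking of the same gel yield
# the same constant.  Two hypothetical measurements consistent with it:
swelling = optical_modulus(2.0e-5, 0.10, 0.10)   # positive double refraction
shrinking = optical_modulus(1.0e-5, 0.05, 0.10)  # proportionally smaller strain
print(abs(swelling - shrinking) < 1e-9)
```

Because the modulus is proportional to the internal elastic stress, equal values for swelling and shrinking are what the paper means by the modulus being "the same both for compression and for tension."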
It was found that the optical modulus of elasticity is the same both for gels cast on slides and in frames, although the mode of swelling is different in the two forms of gels. Gels removed from their glass supports after apparent swelling equilibrium, when placed in dilute buffer, begin to swell gradually in all three dimensions and the double refraction decreases slowly, though it persists for a long time. But the double refraction per unit change in dimension and per unit concentration still remains the same as before, thus proving that the internal elastic stress as indicated by the double refraction is brought about by the resistance of the gel itself to deformation. A study was also made on the effect of salts, acid and base on the double refraction of a 10 per cent gel during swelling. The experiments show that below M/8 salts affect very slightly the optical modulus of elasticity of the gel. At higher concentrations of salts the elasticity of the gel is reduced by some salts and increased by others, while such salts as sodium acetate and sodium and ammonium sulfates do not change the elasticity of the gels at all during swelling. The investigated salts may thus be arranged in this respect in the following approximate series: CaCl(2), NaI, NaSCN, NaBr, AlCl(3), NaCl, Na acetate, Na(2)SO(4), (NH(4))(2)SO(4), Al(2)SO(4) and MgSO(4). The first five in the series decrease the elasticity while the last two in the series increase the elasticity of the gels during swelling. Acids and bases in higher concentrations exert a powerful influence on the reduction of the elasticity of the gel but in the range of pH between 2.0 and 10.0 the elasticity remains unaffected. The general conclusions to be drawn from these studies are as follows: 1. Swelling or shrinking produces elastic stresses in gels of gelatin, tensile in the first case and compressive in the second case, both being proportional to the percentage change in the dimensions of the gel. 2. 
Unsupported gels when immersed in aqueous solutions swell or shrink in such a manner that at equilibrium the percentage change in size is equal in all three dimensions, and the stresses become equalized throughout the gel. 3. Gels cast on glass slides or in frames when immersed in aqueous solutions swell or shrink mostly in one direction, and give rise to unidirectional stresses that can be determined accurately by measuring the double refraction produced. 4. The modulus of elasticity of swelling gelatin gels, as calculated from the double refraction measurements, is the same both for compression and for tension and is proportional to the concentration of gelatin in the gel. 5. The modulus of elasticity of gels during swelling is affected only slightly or not at all by salts at concentrations of less than M/8 and is independent of the pH in the range approximately between 2.0 and 10.0. 6. Higher concentrations of salts affect the modulus of elasticity of gelatin gels and the salts in their effectiveness may be arranged in a series similar to the known Hoffmeister series. 7. Acid and alkali have a strong reducing influence on the elastic modulus of swelling gels. 8. The swelling produced in isoelectric gelatin by salts is due primarily to a change brought about by the salts in the osmotic forces in the gel, but in high concentrations of some salts the swelling is increased by the influence of the salt on the elasticity of the gel. This agrees completely with the theory of swelling of isoelectric gelatin as developed by Northrop and the writer in former publications. 9. The studies of Loeb and the writer on the effect of salts on swelling of gelatin in acid and alkali have been in the range of concentrations of salts where the modulus of elasticity of the gelatin is practically constant, and the specific effect of the various salts has been negligible as compared with the effect of the valency of the ions. 
In concentrations of salts below M/4 or M/8 the Hoffmeister series plays no rôle.
Case Study Sarah is a 58-year-old breast cancer survivor, social worker, and health-care administrator at a long-term care facility. She lives with her husband and enjoys gardening and reading. She has two grown children and three grandchildren who live approximately 180 miles away. SECOND CANCER DIAGNOSIS One morning while showering, Sarah detected a painless quarter-sized lump on her inner thigh. While she thought it was unusual, she felt it would probably go away. One month later, she felt the lump again; she thought that it had grown, so she scheduled a visit with her primary care physician. A CT scan revealed a 6.2-cm soft-tissue mass in the left groin. She was referred to an oncologic surgeon and underwent an excision of the groin mass. Pathology revealed a grade 3 malignant melanoma. She was later tested and found to have BRAF-negative status. Following her recovery from surgery, Sarah was further evaluated with an MRI scan of the brain, which was negative, and a PET scan, which revealed two nodules in the left lung. As Sarah had attended a cancer support group during her breast cancer treatment in the past, she decided to go back to the group when she learned of her melanoma diagnosis. While the treatment options for her lung lesions included interleukin-2, ipilimumab (Yervoy), temozolomide, dacarbazine, a clinical trial, or radiosurgery, Sarah's oncologist felt that ipilimumab or radiosurgery would be the best course of action. She shared with her support group that she was ambivalent about this decision, as she had experienced profound fatigue and nausea with chemotherapy during her past treatment for breast cancer. She eventually opted to undergo stereotactic radiosurgery. DISEASE RECURRENCE After the radiosurgery, Sarah was followed every 2 months. She complained of shortness of breath about 2 weeks prior to each follow-up visit. Each time her chest x-ray was normal, and she eventually believed that her breathlessness was anxiety-related. 
Unfortunately, Sarah's 1-year follow-up exam revealed a 2 cm × 3 cm mass in her left lung, for which she had a surgical wedge resection. Her complaints of shortness of breath increased following the surgery and occurred most often with anxiety, heat, and gardening activities, especially when she needed to bend over. Sarah also complained of a burning "pins and needles" sensation at the surgical chest wall site that was bothersome and would wake her up at night. Sarah met with the nurse practitioner in the symptom management clinic to discuss her concerns. Upon physical examination, observable signs of breathlessness were lacking, and oxygen saturation remained stable at 94%, but Sarah rated her breathlessness as 7 on the 0 to 10 Borg scale. The nurse practitioner prescribed duloxetine to help manage the surgical site neuropathic pain and to assist with anxiety, which in turn could possibly improve Sarah's breathlessness. Several nonpharmacologic modalities for breathlessness were also recommended: using a fan directed toward her face, working in the garden in the early morning when the weather is cooler, gardening in containers that are at eye level to avoid the need to bend down, and performing relaxation exercises with pursed lip breathing to relieve anxiety-provoked breathlessness. One month later, Sarah reported relief of her anxiety; she stated that the fan directed toward her face helped most when she started to feel "air hungry." She rated her breathlessness at 4/10 on the Borg scale. SECOND RECURRENCE: MULTIPLE PULMONARY NODULES Sarah's chest x-rays remained clear for 6 months, but she developed a chronic cough shortly before the 9-month exam. An x-ray revealed several bilateral lung lesions and growth in the area of the previously resected lung nodule. Systemic therapy was recommended, and she underwent two cycles of ipilimumab. Sarah's cough and breathlessness worsened, she developed colitis, and she decided to stop therapy after the third cycle. 
In addition, her coughing spells triggered bronchospasms that resulted in severe anxiety, panic attacks, and air hunger. She rated her breathlessness at 10/10 on the Borg scale during these episodes. She found communication difficult due to the cough and began to isolate herself. She continued to attend the support group weekly but had difficulty participating in conversation due to her cough. Sarah was seen in the symptom management clinic every 2 weeks or more often as needed. No acute distress was present at the beginning of each visit, but when Sarah began to talk about her symptoms and fear of dying, her shortness of breath and anxiety increased. The symptom management nurse practitioner treated the suspected underlying cause of the breathlessness and prescribed oral lorazepam (0.5 to 1 mg every 6 hours) for anxiety and codeine cough syrup for the cough. Opioids were initiated for chest wall pain and to control the breathlessness. Controlled-release oxycodone was started at 10 mg every 12 hours with a breakthrough pain (BTP) dose of 5 mg every 2 hours as needed for breathlessness or pain. Sarah noted improvement in her symptoms and reported a Borg scale rating of 5/10. Oxygen therapy was attempted, but subjective improvement in Sarah's breathlessness was lacking. END OF LIFE Sarah's disease progressed to the liver, and she began experiencing more notable signs of breathlessness: nasal flaring, tachycardia, and restlessness. Opioid doses were titrated over the course of 3 months to oxycodone (40 mg every 12 hours) with a BTP dose of 10 to 15 mg every 2 hours as needed, but her breathlessness caused significant distress, which she rated 8/10. The oxycodone was rotated to IV morphine continuous infusion with patient-controlled analgesia (PCA) that was delivered through her implantable port. This combination allowed Sarah to depress the PCA as needed and achieve immediate control of her dyspneic episodes. Oral lorazepam was also continued as needed. 
Sarah's daughter moved home to take care of her mother, and hospice became involved for end-of-life care. As Sarah became less responsive, nurses maintained doses of morphine for control of pain and breathlessness and used a respiratory distress observation scale to assess for breathlessness since Sarah could no longer self-report. A bolus PCA dose of morphine was administered by Sarah's daughter if her mother appeared to be in distress. Sarah died peacefully in her home without signs of distress.
The association between adiposity parameters and cognition is complex. The purpose of this study was to assess the relationship between adiposity parameters and cognition in middle-aged and elderly people in China. Data were obtained from a cross-sectional study. Cognitive function was evaluated in 5 domains, and adiposity parameters were measured. The association between adiposity parameters and cognition was analyzed using multiple linear and binary logistic regression analyses. After controlling for confounders, men with overweight and obesity had better scores in TICS-10 ([1] total, overweight vs normal: P = .006, β = 0.04; obesity vs normal: P = .005, β = 0.04. [2] stratification by age, with age ≥ 59 years, overweight vs normal: P = .006, β = 0.05; obesity vs normal: P = .014, β = 0.05. [3] stratification by educational levels, with less than elementary education, overweight vs normal: P = .011, β = 0.05; obesity vs normal: P = .005, β = 0.05), immediate word recall ([1] total, overweight vs normal: P = .015, β = 0.04. [2] stratification by age, with age 45-58 years, overweight vs normal: P = .036, β = 0.05. [3] stratification by educational levels, with less than elementary education, overweight vs normal: P = .044, β = 0.04; above high school, overweight vs normal: P = .041, β = 0.09), self-rated memory ([1] stratification by age, with age ≥ 59 years, overweight vs normal: P = .022, β = 0.05. [2] stratification by educational levels, with less than elementary education, overweight vs normal: P = .023, β = 0.04), and drawing a picture ([1] total, overweight vs normal: OR = 1.269, 95% CI = 1.05-1.53.
[2] stratification by educational levels, with less than elementary education, overweight vs normal: OR = 1.312, 95% CI = 1.06-1.63); obesity vs normal: OR = 1.601, 95% CI = 1.11-2.31 than the normal weight; women with overweight and obesity had better measure scores in the TICS-10 ([1] total, overweight vs normal: P < .0001, β = 0.06; obesity vs normal: P < .0001, β = 0.05. [2] stratification by age, with age 45-58 years, obesity vs normal: P = .007, β = 0.05; with age ≥ 59 years: overweight vs normal: P < .0001, β = 0.07, obesity vs normal: P = .002, β = 0.06. [3] stratification by educational levels, with illiterate, overweight vs normal: P = .001, β = 0.08; obesity vs normal: P = .004, β = 0.06; with less than elementary education, overweight vs normal: P < .0001, β = 0.07; obesity vs normal: P = .010, β = 0.05), immediate word recall ([1] total, overweight vs normal: P = .011, β = 0.04; obesity vs normal: P = .002, β = 0.04. [2] stratification by age, with age 45-58 years, obesity vs normal: P = .021, β = 0.05; with age ≥ 59 years: overweight vs normal: P = .003, β = 0.06. [3] stratification by educational levels, with illiterate, obesity vs normal: P = .028, β = 0.05; with less than elementary education, obesity vs normal: P = .016, β = 0.05), delay word recall ([1] total, overweight vs normal: P = .015, β = 0.03; obesity vs normal: P = .031, β = 0.03. [2] stratification by age, with age ≥ 59 years: overweight vs normal: P = .004, β = 0.06. [3] stratification by educational levels, with less than elementary education, obesity vs normal: P = .043, β = 0.04), self-rated memory ([1] total, obesity vs normal: P = .026, β = 0.03. [2] stratification by age, with age ≥ 59 years, overweight vs normal: P = .044, β = 0.04; obesity vs normal: P = .018, β = 0.05), and drawing a picture ([1] total, overweight vs normal: OR = 1.226, 95% CI = 1.06-1.42. 
[2] stratification by age, with age 45-58 years: overweight vs normal: OR = 1.246, 95% CI = 1.02-1.53) than the normal weight. Regarding the association between WC and cognitive function, the obesity demonstrated better mental capacity ([1] total, men: P < .0001, β = 0.06; women: P < .0001, β = 0.05. [2] stratification by age, men with age 45-58 years: P < .0001, β = 0.08; men with ≥ 59 years: P = .006, β = 0.05. women with age 45-58 years: P = .001, β = 0.06; women with ≥ 59 years: P = .012, β = 0.04. [3] stratification by educational levels, men with illiterate: P = .045, β = 0.09; men with less than elementary education: P < .0001, β = 0.08; women with illiterate: P < .0001, β = 0.09), ability to recall immediately ([1] total, men: P = .030, β = 0.03; women: P = .001, β = 0.05. [2] stratification by age, women with age 45-58 years: P = .028, β = 0.04; women with ≥ 59 years: P = .007, β = 0.05. [3] stratification by educational levels, men with less than elementary education: P = .007, β = 0.05; women with illiterate: P = .027, β = 0.05; women with less than elementary education: P = .002, β = 0.06), delay word recall ([1] total, women: P = .044, β = 0.03. [2] stratification by educational levels, men with less than elementary education: P = .023, β = 0.04), self-rated memory (stratification by educational levels, women with less than elementary education: P = .030, β = 0.04), and draw a picture ([1] total, men: OR = 1.399, 95% CI = 1.17-1.67; women: OR = 1.273, 95% CI = 1.12-1.45. [2] stratification by age, men with age 45-58 years: OR = 1.527, 95% CI = 1.15-2.03; men with age ≥ 59 years: OR = 1.284, 95% CI = 1.02-1.61; women with age 45-58 years: OR = 1.320, 95% CI = 1.10-1.58; women with age ≥ 59 years: OR = 1.223, 95% CI = 1.01-1.49. 
[3] stratification by educational levels, men with less than elementary education: OR = 1.528, 95% CI = 1.25-1.87; women with illiterate: OR = 1.404, 95% CI = 1.14-1.73) than the participants with normal weight after the multivariate adjustment. Our study demonstrated a significant relationship between adiposity parameters and cognition that supports the "jolly fat" hypothesis. | Given the following content, create a question whose answer can be found within the content. Then, provide the answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}. 
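The odds ratios and confidence intervals reported above are typically obtained by exponentiating logistic-regression coefficients and their Wald intervals. A minimal sketch; the coefficient and standard error below are illustrative values chosen to land near the overweight-vs-normal estimate, not numbers taken from the study:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient and its
    Wald interval into an odds ratio with a 95% CI."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative coefficient and SE (not taken from the study)
or_, lo, hi = odds_ratio_ci(0.272, 0.110)
print(f"OR = {or_:.3f}, 95% CI = {lo:.2f}-{hi:.2f}")  # OR = 1.313, 95% CI = 1.06-1.63
```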
Fewer studies have been published on the association between daily mortality and ambient air pollution in Asia than in the United States and Europe. This study was undertaken in Wuhan, China, to investigate the acute effects of air pollution on mortality with an emphasis on particulate matter (PM). There were three primary aims: (1) to examine the associations of daily mortality due to all natural causes and daily cause-specific mortality (cardiovascular [CVD], stroke, cardiac [CARD], respiratory [RD], cardiopulmonary [CP], and non-cardiopulmonary [non-CP] causes) with daily mean concentrations (microg/m3) of PM with an aerodynamic diameter < or = 10 microm (PM10), sulfur dioxide (SO2), nitrogen dioxide (NO2), or ozone (O3); (2) to investigate the effect modification of extremely high temperature on the association between air pollution and daily mortality due to all natural causes and daily cause-specific mortality; and (3) to assess the uncertainty of effect estimates caused by the change in International Classification of Disease (ICD) coding of mortality data from Revision 9 (ICD-9) to Revision 10 (ICD-10) codes. Wuhan is called an "oven city" in China because of its extremely hot summers (the average daily temperature in July is 37.2 degrees C and the maximum daily temperature often exceeds 40 degrees C). Approximately 4.5 million residents live in the core city area of 201 km2, where air pollution levels are higher and ranges are wider than the levels in most cities studied in the published literature. We obtained daily mean levels of PM10, SO2, and NO2 concentrations from five fixed-site air monitoring stations operated by the Wuhan Environmental Monitoring Center (WEMC). O3 data were obtained from two stations, and 8-hour averages, from 10:00 to 18:00, were used. Daily mortality data were obtained from the Wuhan Centres for Disease Prevention and Control (WCDC) during the study period of July 1, 2000, to June 30, 2004. 
To achieve the first aim, we used a regression of the logarithm of daily counts of mortality due to all natural causes and cause-specific mortality on the daily mean concentrations of the four pollutants while controlling for weather, temporal factors, and other important covariates with generalized additive models (GAMs). We derived pollutant effect estimations for 0-day, 1-day, 2-day, 3-day, and 4-day lagged exposure levels, and the averages of 0-day and 1-day lags (lag 0-1 day) and of 0-day, 1-day, 2-day, and 3-day lags (lag 0-3 days) before the event of death. In addition, we used individual-level data (e.g., age and sex) to classify subgroups in stratified analyses. Furthermore, we explored the nonlinear shapes ("thresholds") of the exposure-response relations. To achieve the second aim, we tested the hypothesis that extremely high temperature modifies the associations between air pollution and daily mortality. We developed three corresponding weather indicators: "extremely hot," "extremely cold," and "normal temperatures." The estimates were obtained from the models for the main effects and for the pollutant-temperature interaction for each pollutant and each cause of mortality. To achieve the third aim, we conducted an additional analysis. We examined the concordance rates and kappa statistics between the ICD-9-coded mortality data and the ICD-10-coded mortality data for the year 2002. We also compared the magnitudes of the estimated effects resulting from the use of the two types of ICD-coded mortality data. In general, the largest pollutant effects were observed at lag 0-1 day. Therefore, for this report, we focused on the results obtained from the lag 0-1 models. 
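The lagged exposures described above (lag 0-1 day, lag 0-3 days) are simple moving averages of the daily pollutant series ending on the day of death. A minimal sketch, assuming a plain Python list of hypothetical daily means:

```python
def lagged_means(series, lags):
    """Average a daily pollutant series over the given lags
    (0 = day of death, 1 = previous day, ...), for each day."""
    out = []
    for t in range(len(series)):
        if t < max(lags):
            out.append(None)  # not enough history for this day
        else:
            out.append(sum(series[t - l] for l in lags) / len(lags))
    return out

pm10 = [100.0, 120.0, 140.0, 90.0]   # hypothetical daily means, microg/m3
lag01 = lagged_means(pm10, [0, 1])   # the lag 0-1 day exposure
print(lag01)  # [None, 110.0, 130.0, 115.0]
```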
We observed consistent associations between PM10 and mortality: every 10-microg/m3 increase in PM10 daily concentration at lag 0-1 day produced a statistically significant association with an increase in mortality due to all natural causes (0.43%; 95% confidence interval [CI], 0.24 to 0.62), CVD (0.57%; 95% CI, 0.31 to 0.84), stroke (0.57%; 95% CI, 0.25 to 0.88), CARD (0.49%; 95% CI, 0.04 to 0.94), RD (0.87%; 95% CI, 0.34 to 1.41), CP (0.52%; 95% CI, 0.27 to 0.77), and non-CP (0.30%; 95% CI, 0.05 to 0.54). In general, these effects were stronger in females than in males and were also stronger among the elderly (> or = 65 years) than among the young. The results of sensitivity testing over the range of exposures from 24.8 to 477.8 microg/m3 also suggest the appropriateness of assuming a linear relation between daily mortality and PM10. Among the gaseous pollutants, we also observed statistically significant associations of mortality with NO2 and SO2, and the estimated effects of these two pollutants were stronger than the PM10 effects. The patterns of the NO2 and SO2 associations were similar to those of PM10 in terms of sex, age, and linearity. O3 was not associated with mortality. In the analysis of the effect modification of extremely high temperature on the association between air pollution and daily mortality, only the interaction of PM10 with temperature was statistically significant. Specifically, the interaction terms were statistically significant for mortality due to all natural (P = 0.014), CVD (P = 0.007), and CP (P = 0.014) causes. Across the three temperature groups, the strongest PM10 effects occurred mainly on days with extremely high temperatures for mortality due to all natural (2.20%; 95% CI, 0.74 to 3.68), CVD (3.28%; 95% CI, 1.24 to 5.37), and CP (3.02%; 95% CI, 1.03 to 5.04) causes. The weakest effects occurred on days with normal temperatures, with the effects on extremely cold days falling in between. 
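In log-linear models such as the GAMs used here, a coefficient beta converts to a percent excess risk per Delta-unit pollutant increase as 100 * (exp(beta * Delta) - 1). A sketch with beta back-calculated from the reported 0.43% per 10 microg/m3 (purely illustrative, not the authors' fitted coefficient):

```python
import math

def excess_risk_percent(beta, delta=10.0):
    """Percent increase in daily mortality per `delta` microg/m3
    rise in a pollutant, from a log-linear model coefficient."""
    return 100.0 * (math.exp(beta * delta) - 1.0)

# beta back-calculated so a 10-microg/m3 rise gives ~0.43%,
# matching the all-natural-cause PM10 estimate reported above
beta = math.log(1.0043) / 10.0
print(round(excess_risk_percent(beta), 2))  # 0.43
```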
To assess the uncertainty of the effect estimates caused by the change from ICD-9-coded mortality data to ICD-10-coded mortality data, we compared the two sets of data and found high concordance rates (> 99.3%) and kappa statistics close to 1.0 (> 0.98). All effect estimates showed very little change. All statistically significant levels of the estimated effects remained unchanged. In conclusion, the findings for the aims from the current study are consistent with those in most previous studies of air pollution and mortality. The small differences between mortality effects for deaths coded using ICD-9 and ICD-10 show that the change in coding had a minimal impact on our study. Few published papers have reported synergistic effects of extremely high temperatures and air pollution on mortality, and further studies are needed. Establishing causal links between heat, PM10, and mortality will require further toxicologic and cohort studies. 
The rate of alcohol-related mortality in people experiencing homelessness and alcohol use disorder is high and necessitates accessible and effective treatment for alcohol use disorder. However, typical abstinence-based treatments do not optimally engage this population. Recent studies have shown that harm-reduction treatment, which does not require abstinence, but instead aims to incrementally reduce alcohol-related harm and improve health-related quality of life, is acceptable to and effective for this population. The aim of this study was to test the efficacy of combined pharmacological and behavioural harm-reduction treatment for alcohol use disorder (HaRT-A) in people experiencing homelessness and alcohol use disorder. This randomised clinical trial was done at three community-based service sites (low-barrier shelters and housing programmes) in Seattle (WA, USA). Eligible participants were adults (aged 21-65 years) who met the DSM-IV-TR criteria for alcohol use disorder and who experienced homelessness in the past year. Participants were randomly assigned (1:1:1:1) by permuted block randomisation, stratified by site, to receive either HaRT-A plus intramuscular injections of 380 mg extended-release naltrexone (XR-NTX; HaRT-A plus XR-NTX group); HaRT-A plus placebo injection (HaRT-A plus placebo group); HaRT-A alone (HaRT-A alone group); or community-based supportive services as usual (services-as-usual control group). Patients assigned to receive HaRT-A attended sessions at baseline (week 0) and in weeks 1, 4, 8, and 12. XR-NTX and placebo injections were administered in weeks 0, 4, and 8. During the study, participants, interventionists, and investigators were masked to group assignment in the two injection arms. All participants were invited to follow-up assessments at weeks 4, 8, 12, 24, and 36. 
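Permuted-block randomisation with 1:1:1:1 allocation, as used within each site stratum above, can be sketched as follows (the block size and the seeded generator are illustrative assumptions; the trial's actual block sizes are not stated):

```python
import random

def permuted_block_assignments(n, groups, block_size=None, seed=0):
    """1:1:...:1 permuted-block randomisation within one stratum (site):
    each block contains every group equally often, in random order."""
    rng = random.Random(seed)              # seeded for reproducibility
    block_size = block_size or len(groups)
    assert block_size % len(groups) == 0
    per_group = block_size // len(groups)
    out = []
    while len(out) < n:
        block = [g for g in groups for _ in range(per_group)]
        rng.shuffle(block)                 # permute within the block
        out.extend(block)
    return out[:n]

# Hypothetical arm labels matching the four study groups
arms = ["HaRT-A+XR-NTX", "HaRT-A+placebo", "HaRT-A alone", "services as usual"]
alloc = permuted_block_assignments(12, arms, block_size=4)
# every consecutive block of 4 assignments contains each arm exactly once
```

Blocking keeps the arms balanced throughout accrual at each site, which matters when enrolment is slow or a site stops early.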
The primary outcomes were self-reported alcohol use quantity (ie, alcohol quantity consumed on peak drinking occasion, as measured with the Alcohol Quantity Use Assessment questionnaire) and frequency (measured with the Addiction Severity Index), alcohol-related harm (measured with the Short Inventory of Problems-2R questionnaire), and physical and mental health-related quality of life (measured with the Short Form-12 survey). Using piecewise growth modelling and an intention-to-treat model, we compared the effects of the three active treatment groups with the services-as-usual control group, and the HaRT-A plus XR-NTX group with the HaRT-A plus placebo group, over the 12-week treatment course and during the 24 weeks following treatment withdrawal. Safety analyses were done on an intention-to-treat basis. This trial is registered with ClinicalTrials.gov, NCT01932801. Between Oct 14, 2013, and Nov 30, 2017, 417 individuals experiencing homelessness and alcohol use disorder were screened, of whom 308 were eligible and randomly assigned to the HaRT-A plus XR-NTX group (n=74), the HaRT-A plus placebo group (n=78), the HaRT-A alone group (n=79), or the services-as-usual control group (n=77). Compared with the services-as-usual control group, the HaRT-A plus XR-NTX group showed significant improvements from baseline to 12 weeks post-treatment across four of the five primary outcomes: peak alcohol quantity (linear B -0·48 [95% CI -0·79 to -0·18] p=0·010; full model Cohen's d=-0·68), alcohol frequency (linear B -4·42 [-8·09 to -0·76], p=0·047; full model Cohen's d=-0·16), alcohol-related harm (linear B -2·22 [-3·39 to -1·06], p=0·002; full model Cohen's d=-0·56), and physical health-related quality of life (linear B 0·66 [0·23 to 1·10], p=0·012; full model Cohen's d=0·43). 
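Cohen's d values like those above standardise a mean difference by a pooled standard deviation; the trial reports model-based ("full model") values, but the classical two-sample form is a useful reference point (the group summaries below are invented for illustration, not trial data):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardised mean difference using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Invented group summaries (e.g. peak drinks, treatment vs control)
d = cohens_d(mean1=4.0, sd1=2.0, n1=74, mean2=5.2, sd2=2.1, n2=77)
print(round(d, 2))  # -0.58
```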
Compared with the services-as-usual control group, the HaRT-A plus placebo group showed significant improvements in three of the five primary outcomes: peak alcohol quantity (linear B -0·41 [95% CI -0·67 to -0·15] p=0·010; full model Cohen's d=-0·23), alcohol frequency (linear B -5·95 [-9·72 to -2·19], p=0·009; full model Cohen's d=-0·13), and physical health-related quality of life (linear B 0·53 [0·09 to 0·98], p=0·050; full model Cohen's d=0·35). Compared with the services-as-usual control group, the HaRT-A alone group showed significant improvements in two of the five primary outcomes: alcohol-related harm (linear B -1·58 [95% CI -2·73 to -0·42] p=0·025; full model Cohen's d=-0·40) and physical health-related quality of life (linear B 0·63 [0·18 to 1·07], p=0·020; full model Cohen's d=0·41). After treatment discontinuation at 12 weeks, the active treatment groups plateaued, whereas the services-as-usual group showed improvements. Thus, during the post-treatment period (weeks 12 to 36), the services-as-usual control group showed greater reductions in alcohol-related harm compared with both the HaRT-A plus XR-NTX group (linear B 0·96 [0·24 to 1·67], p=0·028; full model Cohen's d=0·24) and the HaRT-A alone group (linear B 1·02 [0·35 to 1·70], p=0·013; full model Cohen's d=0·26). During the post-treatment period, the services-as-usual control group significantly improved on mental health-related quality of life compared with the HaRT-A alone group (linear B -0·46 [-0·79 to -0·12], p=0·024; full model Cohen's d=-0·28), and on physical health-related quality of life compared with the HaRT-A plus XR-NTX group (linear B -0·42 [-0·67 to -0·17], p=0·006; full model Cohen's d=-0·27), the HaRT-A plus placebo group (linear B -0·42 [-0·69 to -0·15], p=0·009; full model Cohen's d=-0·27), and the HaRT-A alone group (linear B -0·47 [-0·72 to -0·22], p=0·002; full model Cohen's d=-0·31). 
For all other primary outcomes, there were no significant linear differences between the services-as-usual and active treatment groups. When comparing the HaRT-A plus placebo group with the HaRT-A plus XR-NTX group, there were no significant differences for any of the primary outcomes. Missing data analysis indicated that participants were more likely to drop out in the services-as-usual control group than in the active treatment groups; however, primary outcome findings were found to be robust to attrition. Participants in the HaRT-A plus XR-NTX, HaRT-A plus placebo, and HaRT-A alone groups were not more likely to experience adverse events than those in the services-as-usual control group. Compared with existing services, combined pharmacological and behavioural harm-reduction treatment resulted in decreased alcohol use and alcohol-related harm and improved physical health-related quality of life during the 12-week treatment period for people experiencing homelessness and alcohol use disorder. Although not as consistent, there were also positive findings for behavioural harm-reduction treatment alone. Considering the non-significant differences between participants receiving HaRT-A plus placebo and HaRT-A plus XR-NTX, the combined pharmacological and behavioural treatment effect cannot be attributed to XR-NTX alone. Future studies are needed to further investigate the relative contributions of the pharmacological and behavioural components of harm-reduction treatment for alcohol use disorder, and to ascertain whether a maintenance treatment approach could extend these positive outcome trajectories. National Institute on Alcohol Abuse and Alcoholism. 
Malaria has existed in Armenia since antiquity. In the 1920s and 1930s, thousands of people suffered from this disease in the country. Enormous efforts were required to prevent further spread of the disease. A network was set up, which consisted of a research institute and stations. A total of 200,000 cases of malaria were still notified in 1934. Rapid development of the health infrastructure and better socioeconomic conditions improved the malaria situation and reduced the number of cases in 1946. Malaria was completely eradicated in Armenia in 1963, and the malaria-free situation was retained until 1994. During that period, comprehensive activities were undertaken in the country to prevent and control malaria. Since 1990, following the collapse of the Soviet Union, the situation became critical in many newly independent states. Economic crisis, human migration, worsening levels of health services, and the lack of necessary medicines, equipment, and insecticides significantly affected the malaria epidemiological situation in the country. Malaria cases started to penetrate into Armenia from neighboring countries. In 1994, one hundred and ninety-six military men contracted malaria in Karabakh, an area unfavorable in terms of malaria, as well as on the border with Iran and along the Araks river. The first cases recorded in Armenia were imported; these subsequently gave rise to indigenous cases, since all the prerequisites for malaria mosquito breeding and development existed in 17 regions and 3 towns of the country. In 1995, there were 502 imported cases, and in 1996 the situation changed: out of 347 registered cases, 149 were indigenous. The Ministry of Health undertook a range of preventive measures. In 1997 versus 1996, the total number of malaria cases increased 2.3-fold: 841 registered cases, of which 567 were indigenous (a 3.8-fold increase). The overwhelming majority of cases were recorded in the Ararat and Armavir marzes. 
In 1998, there were a total of 1156 cases, of which 542 were locally contracted. The situation became stable thanks to the joint efforts of WHO, IFRX, the Armenian Red Cross Society, UNICEF, the Ministry of Health of Armenia and its Government. Under Minister's Decree No. 292 of May 17, 1999, a malaria project implementation office was established in the Masis Sanitary and Epidemiological Surveillance Center of Hygienic and Antiepidemic Surveillance to improve progress of the malaria control programme in Armenia. WHO allocated some 7,700 USD for 5 months of maintenance and work of the office. Thus, a comparison of the malaria cases registered in 1999 and 1998 indicates a 1.9-fold decrease (616/77). The setting up of the malaria programme field office under the Minister's decree was instrumental in planning and implementing activities in situ. In 1999, four cases of tropical malaria were recorded in Armenia. The patients were Armenian pilots who contracted malaria during duty travel: 1 in Sudan and 3 in Congo. The list of pilots making flights to endemic countries was submitted to the Republican Center to implement preventive measures in the future. In Armenia, malaria surveillance has been improved to ensure timely detection of all suspected cases and to carry out malaria control activities. In this regard, a seminar was held for 21 entomologists and 12 parasitologists. The UNICEF and WHO Armenian offices provided substantial support in organizing the seminars. To facilitate the seminars, the manual "Malaria parasitology and entomology" was published and distributed among the participants. On April 19, 1999, the session of the Ministry's Executive Board (Collegium) gave recommendations to reinforce malaria control activities in the country. Decrees No. 256 of May 31, 1999, No. 47 of May 29, 1999, and No. 
245 of April 30, 1999, "On malaria and preventive and control activities" were issued by the Ministry of Health, the Ministry of Defense, and the Ministry of Internal Affairs and National Security to serve as a guideline for planning and implementing activities. The Ministry of Agriculture undertook to clean the collective irrigation (drainage) system covering 102 and 77 km in the Ararat and Armavir marzes; the Ministry of Health provided a list of endemic foci where cleaning was a priority. Taking into account the importance of people's participation in ensuring effective prevention and control, emphasis was laid on health education activities: publication of leaflets, as well as articles in local newspapers, radio broadcasts and TV shows. Throughout the season, the early detection of malaria cases, timely hospitalization (within 1-3 days) for at least 5 days and subsequent treatment under the direct supervision of a physician were successfully carried out thanks to house-to-house visits. Entomological studies conducted in the malaria foci show an increase in the presence and density of the malaria vector in buildings. As far as treatment is concerned, the overall surface of stagnant waters comprised 2642 ha in 1999 (2733 ha in 1998), including 1285 ha of anophelogenic stagnant waters (2276 ha in 1998). The biggest stagnant water surfaces were in the Ararat and Armavir marzes--2209 ha, where the majority of malaria cases were recorded. A total of 1,283,111 and 559,213 sq. m of buildings were treated in 1999 and 1998, respectively; of these, 1,259,637 sq. m were in 5 endemic regions. Stagnant water surfaces were treated with bacticulicides on 250.7 and 743.8 ha (almost 3 times more) in 1998 and 1999, respectively. In 1999, 740 ha of surface were biologically treated using Gambusia, compared to 900 ha treated in 1998. 
There are no highly qualified diagnostic specialists in many regions of the country, which necessitates holding further seminars, involving relevant specialists, in all malaria regions. There is a tendency toward geographical spread of malaria: cases occur in new regions and dwellings. A country-wide action plan was drafted for 2000, mainly focusing on staff training. With WHO assistance, a seminar was held for 324 specialists from endemic regions. During the first quarter of 2000, 13 cases of tertian malaria were recorded, compared with 59 cases during the same period of the previous year. All these patients contracted malaria in the previous season and demonstrated long incubation periods. Thus, the malaria control plan recommended by WHO, and the rational and targeted use of its assistance, has resulted in a 2-fold decrease in the incidence of malaria. 
When the kidney fails, blood-borne metabolites of protein breakdown and excess water cannot be excreted. The principle of haemodialysis is that such substances can be removed when blood is passed over a semipermeable membrane. Natural membrane materials can be used, including cellulose and modified cellulose; more recently, various synthetic membranes have been developed. Synthetic membranes are regarded as being more "biocompatible" in that they incite less of an immune response than cellulose-based membranes. To assess the effects of different haemodialysis membrane material in patients with end-stage renal disease (ESRD). We searched Medline (1966 to December 2000), Embase (1981 to November 2000), PreMedline (29 November 2000), HealthStar (1975 to December 2000), Cinahl (1982 to October 2000), The Cochrane Controlled Trials Register (Issue 1, 1996), Biosis (1989 to June 1995), Sigle (1980 to June 1996), Crib (10th edition, 1995), UK National Research Register (September 1996), and reference lists of relevant articles. We contacted biomedical companies and investigators, and we hand searched Kidney International (1980 to 1997). Date of the most recent searches: November 2000. All randomised or quasi-randomised clinical trials comparing different haemodialysis membrane material in patients with ESRD. Two reviewers independently assessed the methodological quality of studies. Data were abstracted from included studies onto a standard form by one reviewer and checked by another. Twenty-seven studies met our inclusion criteria and, where possible, data from these were summated by meta-analyses (Peto's odds ratio (OR) and weighted mean difference (WMD) with 95% confidence intervals (CI)). Twenty-two outcome measures were sought in 10 broad areas. For two (number of episodes of significant infection per year and quality of life) no data were available. For the comparison of cellulose with synthetic membranes, data for 12/20 outcome measures were available in only a single trial. 
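Weighted mean differences in a review like this are pooled across studies by inverse-variance weighting; a minimal fixed-effect sketch (the per-study values below are invented for illustration, not data from the included trials):

```python
import math

def pooled_wmd(studies):
    """Fixed-effect inverse-variance pooling of mean differences.
    `studies` is a list of (mean_difference, standard_error) pairs."""
    weights = [1.0 / se ** 2 for _, se in studies]
    wmd = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return wmd, wmd - 1.96 * se, wmd + 1.96 * se

# Invented per-study beta2 microglobulin differences (and SEs)
wmd, lo, hi = pooled_wmd([(-15.0, 2.0), (-13.0, 3.0)])
print(f"WMD {wmd:.1f}; 95% CI {lo:.1f} to {hi:.1f}")  # WMD -14.4; 95% CI -17.6 to -11.1
```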
For modified cellulose and synthetic membranes, data for three outcome measures were available in one trial only, and for 12 of the outcomes no data were found. Crossover studies were analysed separately, and studies that randomised by patient yet analysed by dialysis session were adjusted for clustering. Pre-dialysis beta2 microglobulin concentrations were significantly lower at the end of the studies in patients treated with synthetic membranes (WMD -14.5; 95% CI -17.4 to -11.6). One crossover study showed a lowering of beta2 microglobulin when low flux synthetic membranes were used. When analysed for a change in beta2 microglobulin across a trial, a fall was only noted when high flux membranes were used. In one very small study the incidence of amyloid was lower in patients who were dialysed for six years with high flux synthetic membranes (OR 0.05; 95% CI 0.01 to 0.18). In the single study which measured triglyceride values there was a significant difference in favour of the synthetic (high flux) membrane (WMD -0.66; 95% CI -1.18 to -0.14). Serum albumin was higher in patients treated with synthetic membranes (both low and high flux), although this just bordered statistical significance (WMD -0.09; 95% CI -0.18 to 0.00). Dialysis adequacy measured by Kt/V was marginally higher when cellulose membranes were used (WMD 0.10; 95% CI 0.04 to 0.16). There was no significant difference between these membranes for any of the other clinical outcome measures, but confidence intervals were generally wide. No differences were found between modified cellulose and synthetic membranes, although many fewer trials were carried out for this comparison. For clinical practice: This systematic literature review has generated no evidence of benefit when synthetic membranes were used compared with cellulose/modified cellulose membranes, in terms of either reduced mortality or reduction in dialysis-related adverse symptoms. 
Despite the relatively large number of RCTs undertaken in this area, none of the included studies reported any measures of quality of life. End-of-study beta2 microglobulin values, and possibly the development of amyloid disease, were lower in patients treated with synthetic membranes compared with cellulose membranes. Plasma triglyceride values were also lower with synthetic membranes in the single study that measured this outcome. Differences in these outcomes may have reflected the high flux of the synthetic membrane. Serum albumin was higher when synthetic membranes of both high and low flux were used. Kt/V and urea reduction ratio were higher when cellulose or modified cellulose membranes were used in the few studies that measured these outcomes. We are hesitant to recommend the universal use of synthetic membranes for haemodialysis in patients with ESRD because of: the small number of trials (particularly for modified cellulose membranes, most with low patient numbers), the heterogeneity of many of the trials compared, the variations in membrane flux, the differences in exclusion criteria (particularly relating to comorbidity), and the relative lack of patient-centred outcomes studied. Such evidence as we have favours synthetic membranes, but even if we assume extra benefit it may come at considerable cost, particularly if high flux synthetic membranes were to be used. For further research: A further systematic review of RCTs comparing high and low flux haemodialysis membranes, subgrouped according to membrane composition (cellulose, modified cellulose, synthetic) and reporting clinical outcomes of major importance to patients, needs to be undertaken. Further pragmatic RCTs are required to compare the different dialysis membranes available. We recommend that they:
- Take into account other properties, including flux, as well as the material from which the membrane is made, and test modified cellulose membranes as well as standard ones. 
- Record an agreed minimum dataset on primary outcomes of major importance to patients.
- Explicitly record whether symptoms are patient- or staff-reported, recognising that patient reporting will generally be more appropriate for evaluating effectiveness, while staff-reported data may be necessary for calculating the cost of treating complications.
- Be multi-centre (and possibly multinational) to have sufficient patients to complete the study, allowing for a considerable number of withdrawals and dropouts.
- Have sufficient length of follow-up to draw conclusions for important clinical outcome measures, and continue to follow patients who have renal transplants.
- Include older patients and those with comorbid illnesses, and take age and comorbidity into account when assessing outcomes (possibly by stratification at trial entry).
- Carry out, in parallel, an economic evaluation of the different policies being compared in the trial. 
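Two of the adequacy measures discussed above have simple closed forms: the urea reduction ratio, URR = (pre - post)/pre, and single-pool Kt/V, which is commonly estimated from pre/post urea with the second-generation Daugirdas formula. A sketch (the session values are illustrative; the formula is the standard Daugirdas estimate, not something specific to this review):

```python
import math

def urr(pre, post):
    """Urea reduction ratio: fractional fall in urea across a session."""
    return (pre - post) / pre

def kt_v_daugirdas(pre, post, hours, uf_litres, weight_kg):
    """Second-generation Daugirdas estimate of single-pool Kt/V."""
    r = post / pre
    return -math.log(r - 0.008 * hours) + (4 - 3.5 * r) * uf_litres / weight_kg

# Illustrative session: urea falls from 30 to 10 mmol/L over 4 h,
# with 2 L ultrafiltration and 70 kg post-dialysis weight
print(round(urr(30, 10), 2))                       # 0.67
print(round(kt_v_daugirdas(30, 10, 4, 2, 70), 2))  # 1.28
```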
A process of infecting the chaffinch nestlings Fringilla coelebs with three analgoid feather mites, Analges passerinus L., 1758, Monojoubertia microphylla (Robin, 1877), and Pteronyssoides striatus (Robin, 1877), commonly occurring on this bird species, was investigated. Fifteen nests containing a total of 65 nestlings, from 2 to 6 individuals in a brood, were examined from the day of hatching until the 11th day. Observations were carried out in the neighbourhood of the bird banding station "Rybachy" (Russia, Kaliningrad Province) in June 1982. The number of mites on live nestlings taken temporarily from their nest was counted under a binocular lens at magnifications of x12.5 and x25. The nestlings receive the mites from the chaffinch female during the night, when the female sits together with the young birds and warms them. Under conditions of this prolonged direct contact the mites migrate from the female onto the nestlings. As was shown in our study of the seasonal dynamics of mites on the chaffinch (Mironov, 2000), the chaffinch female gives its mites only to the young generation and loses about three quarters of its mite micropopulation during the nesting period (June), while in the chaffinch males the number of mites continues to increase throughout the summer. Infection with the three feather mite species happens in the second half of the nestlings' stay in the nest. The starting time of this process, its intensity, and the sex and age structure of mite micropopulations on the nestlings just before they leave the nest differ among the mite species examined. These peculiarities are determined by the biology of the examined species, first of all by their morphological characteristics and specialisation to different microhabitats, i.e. certain structural zones of the plumage. Pteronyssoides striatus (Pteronyssidae) is a rather typical mite specialised to feathers with vanes. 
In adult birds with completely developed plumage this species occupies the ventral surface of the big upper coverts of the primary flight feathers. This species appears on the chaffinch nestlings in significant numbers on the 7th day. The mites occupy the basal parts of the primary flight feathers, represented at that moment by the rods only. They sit on the practically open and smooth surface of this microhabitat, which is uncommon for them, because the vanes of the big upper coverts are not yet open and are also represented by thin rods. During the period of the last 5 days (from the 7th to the 11th day) the mean number of mites per nestling increases from 2.3 +/- 0.5 to 17.1 +/- 1.8 mites. Just before the day when the nestlings leave the nest, tritonymphs absolutely predominate (82.4%) in the micropopulation of P. striatus. Analges passerinus (Analgidae) is specialised to live in the friable layer formed by the numerous non-interlocked thread-like barbules of the down feathers and the basal parts of the body covert feathers. The mites have special hooks on their legs used for firm attachment to the barbules and for fast movement in the friable layer of feathers. On the chaffinch nestlings these mites usually appear on the 8th day, when the rod-like body covert feathers begin to open at the apices and form short brushes; however, some individuals occur on the skin of nestlings even on the 6th day. The mean number of mites per nestling reaches 16.5 +/- 1.4 individuals on the 11th day. The micropopulation of A. passerinus on the nestlings is represented mainly by females (45.5%), tritonymphs (23.6%) and males (11.5%). Monojoubertia microphylla (Proctophyllodidae) is a typical dweller of feathers with large vanes. Mites of this species commonly occupy the ventral surface of the primary and secondary flight feathers and also the respective big upper covert feathers of the wings. M. 
microphylla appears on the nestlings in significant numbers (7.1 +/- 1.2 mites) on the 9th day, only once the primary flight feathers already bear short vanes about 10 mm long. Over the next three days the number of mites increases very rapidly, reaching 60.3 +/- 5.7 mites per nestling on the 11th day. In the micropopulation of this species, tritonymphs account for 38.3%, and males and females for 25.3% each. Migration proceeds most intensively in this species of the three. Analytic fitting of logistic curves shows that the increase in mite numbers during infection with all three species is most adequately described by sigmoid curves with clearly recognizable saturation levels that can theoretically be reached. Indeed, the number of mite individuals able to migrate onto the nestlings is limited by their number on the respective chaffinch female. By contrast, the growth of plumage indices, for instance the length of the flight feathers, is almost linear over the observation period. The onset of mite migration is determined by the development of the corresponding microhabitats in the nestlings' plumage, or at least by the development of those structural elements of the plumage where mites can attach temporarily, until the nestlings develop their plumage completely and begin to fly. In all three mite species examined, infection was carried out by the older stages, namely imagos and/or tritonymphs. This can be explained by two considerations. On the one hand, the older stages are the most mobile and resistant, and best able to survive on new host individuals. On the other hand, the older stages are ready to reproduce, or will be after one more moult.
The older stages can quickly create large, self-supporting micropopulations on the birds, so this strategy ensures the successful subsequent existence of the parasite species. When mites (A. passerinus, M. microphylla) migrate into microhabitats structurally corresponding to their normal microhabitats on adult birds, their micropopulations include a significant or dominant proportion of females and males. When the normal microhabitat has not yet formed, feather mites migrate into neighbouring structural elements of the plumage, where they can survive while awaiting the development of the normal microhabitat to which they are well adapted. This is why, in the case of P. striatus, the micropopulations on chaffinch nestlings consist mainly of tritonymphs.
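The sigmoid, saturation-limited growth in mite numbers described above can be sketched with a standard logistic fit. This is a hedged illustration on synthetic counts; the parameter values (K = 65 mites, r = 1.2/day, inflection at day 10) and the data are assumptions for demonstration, not the study's raw observations:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    # Logistic growth: saturation level K, growth rate r, inflection day t0.
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic mite counts generated from a known curve -- illustrative only.
days = np.linspace(5.0, 15.0, 11)
mites = logistic(days, 65.0, 1.2, 10.0)

# The fit recovers the saturation level, which in the biological setting
# is bounded by the number of mites available on the single female.
(K, r, t0), _ = curve_fit(logistic, days, mites, p0=[50.0, 1.0, 9.0])
print(f"K = {K:.1f} mites/nestling, r = {r:.2f}/day, inflection = day {t0:.1f}")
```

On the study's real counts the estimated K would quantify the theoretical saturation level that the authors contrast with the almost linear growth of plumage indices.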
Toxicology and carcinogenesis studies of erythromycin stearate (USP grade, greater than 96% pure) were conducted by administering the antibiotic in feed to groups of F344/N rats and B6C3F1 mice of each sex for 14 days, 13 weeks, or 2 years. Erythromycin stearate was studied because of its widespread use in humans as a broad-spectrum macrolide antibiotic and because of the lack of adequate long-term studies for carcinogenicity. Fourteen-Day and Thirteen-Week Studies: In the 14-day studies, none of the rats (at dietary concentrations up to 50,000 ppm) and 2/5 female mice that received 50,000 ppm died before the end of the studies. Final mean body weights of male rats that received 12,500, 25,000, or 50,000 ppm were 10%, 30%, or 36% lower, respectively, than that of controls; final mean body weights of female rats were 10%, 12%, or 32% lower. None of the dosed mouse groups gained weight. The final mean body weight of male mice that received 50,000 ppm was 10% lower than that of controls. In the 13-week studies, none of the rats or mice (at dietary concentrations up to 20,000 ppm) died before the end of the studies. Final mean body weights of the 20,000-ppm groups of rats were more than 12% lower than that of the controls for males and 7% lower for females. Final mean body weights of mice that received 10,000 or 20,000 ppm were 15% or 19% lower than that of controls for males and 5% or 14% lower for females. Multinucleated syncytial hepatocytes were observed in 10/10 male rats that received 20,000 ppm but in 0/10 male rats that received 10,000 ppm. No compound-related gross or microscopic pathologic effects were observed in mice. Based on these results, 2-year studies of erythromycin stearate were conducted by feeding diets containing 0, 5,000, or 10,000 ppm erythromycin stearate to groups of 50 rats of each sex for 103 weeks. Diets containing 0, 2,500, or 5,000 ppm were fed to groups of 50 mice of each sex for 103 weeks. 
Body Weight and Survival in the Two-Year Studies: Mean body weights of high dose male rats were comparable to those of controls throughout the studies. Mean body weights of high dose female rats were 5%-10% lower than those of controls. Mean body weights of dosed and control mice were comparable. The average daily feed consumption was similar for dosed and control male and female rats. For mice, estimated daily feed consumption by low and high dose males was similar to that of the controls and by low and high dose females was 92% that of the controls. The average amount of erythromycin stearate consumed per day was approximately 180 or 370 mg/kg for male rats and 210 or 435 mg/kg for female rats; for mice, the average amounts were 270 or 545 mg/kg for males and 250 or 500 mg/kg for females. No significant differences in survival were observed between any groups of rats or mice of either sex (final survival-- male rats: control, 28/50; low dose, 23/50; high dose, 27/50; female rats: 29/50; 30/50; 38/50; male mice: 34/50; 33/50; 40/50; female mice: 38/50; 34/50; 40/50). Nonneoplastic and Neoplastic Effects in the Two-Year Studies: Granulomas of the liver were observed at increased incidences in high dose rats (male: 1/50; 2/50; 10/50; female: 18/50; 27/50; 43/50). Granulomatous inflammation or granulomas of the spleen were observed in dosed female rats (0/48; 1/49; 3/50). Reticulum cell hyperplasia in the bone marrow occurred at increased incidences in high dose female rats (10/50; 14/50; 25/50). Squamous cell papillomas of the oral mucosa were observed in 1/50 control, 2/50 low dose, and 3/50 high dose female rats. These tumors were considered to be marginal and not related to exposure. Hyperplasia of the oral mucosa was not observed. Pheochromocytomas of the adrenal gland in female rats occurred with a positive trend (1/50; 4/49; 5/50). 
The incidences in the dosed groups are similar to the average historical incidence (9%) of this tumor in untreated control female F344/N rats at the study laboratory. This marginal tumor increase is not considered to be biologically important. No increases in incidences of neoplasms were observed at any site in dosed male rats. Inflammation in the glandular stomach was observed at increased incidences in dosed male mice (1/49; 4/50; 6/50). Lymphoid hyperplasia in the urinary bladder was observed at increased incidences in dosed female mice (1/50; 9/47; 7/48). No increases in incidences of neoplasms were observed at any site in dosed male or female mice. Genetic Toxicology: Erythromycin stearate was not mutagenic in Salmonella typhimurium strains TA98, TA100, TA1535, or TA1537 when tested both with or without exogenous metabolic activation. Erythromycin stearate demonstrated equivocal mutagenicity in the mouse L5178Y lymphoma cell assay in the absence of exogenous metabolic activation (S9); erythromycin stearate was not mutagenic in the presence of S9. Treatment of cultured Chinese hamster ovary cells with erythromycin stearate did not produce an increase in the frequency of sister chromatid exchanges or chromosomal aberrations in either the presence or absence of metabolic activation. Audit: The data, documents, and pathology materials from the 2-year studies of erythromycin stearate have been audited. The audit findings show that the conduct of the studies is documented adequately and support the data and results given in this Technical Report. Conclusions: Under the conditions of these 2-year studies, there was no evidence of carcinogenic activity of erythromycin stearate for male or female F344/N rats administered erythromycin stearate in the diet at 5,000 or 10,000 ppm.
There was no evidence of carcinogenic activity of erythromycin stearate for male or female B6C3F1 mice administered erythromycin stearate in the diet at 2,500 or 5,000 ppm. Dose-related increases in the incidences of granulomas of the liver were observed in male and female rats. The absence of any biologically important chemical-associated effects in mice suggests that higher doses could have been given to male and female mice. Synonyms: erythrocin stearate; erythromycin octadecanoate Trade Names: Abbotcine; Bristamycin; Dowmycin E; Eratrex; Erypar; Ethril; Gallimycin; HSDB 4178; OE 7; Pantomicina; Pfizer-E; SK-Erythromycin; Wyamycin S
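The "positive trend" reported for adrenal pheochromocytomas in female rats (1/50; 4/49; 5/50) is the kind of result a Cochran-Armitage trend test quantifies. The sketch below is an independent illustration of that test applied to the reported incidences; the NTP's actual statistical procedure may have differed:

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage(cases, totals, scores):
    # Cochran-Armitage test for a linear trend in proportions across
    # ordered dose groups; returns the Z statistic and one-sided p-value.
    cases, totals, scores = map(np.asarray, (cases, totals, scores))
    N, X = totals.sum(), cases.sum()
    p = X / N
    T = (cases * scores).sum() - X * (totals * scores).sum() / N
    var = p * (1 - p) * ((totals * scores**2).sum()
                         - (totals * scores).sum()**2 / N)
    z = T / np.sqrt(var)
    return z, norm.sf(z)

# Pheochromocytoma incidences: control, low dose, high dose female rats.
z, p_one_sided = cochran_armitage(cases=[1, 4, 5],
                                  totals=[50, 49, 50],
                                  scores=[0, 1, 2])
print(f"Z = {z:.2f}, one-sided p = {p_one_sided:.3f}")
```

With Z around 1.6 the one-sided p-value sits just above 0.05, which is consistent with the report's characterization of the increase as marginal and not biologically important.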
Docosahexaenoic acid (DHA) and arachidonic acid (ARA) are long-chain polyunsaturated fatty acids found in breast milk and recently added to infant formulas. Their importance in infant nutrition was recognized by the rapid accretion of these fatty acids in the brain during the first postnatal year, reports of enhanced intellectual development in breastfed children, and recognition of the physiologic importance of DHA in visual and neural systems from studies in animal models. These considerations led to clinical trials to evaluate whether infant formulas that are supplemented with DHA or both DHA and ARA would enhance visual and cognitive development or whether conversion of linoleic acid and alpha-linolenic acid, the essential fatty acid precursors of ARA and DHA, respectively, at the levels found in infant formulas is sufficient to support adequately visual and cognitive development. Visual and cognitive development were not different with supplementation in some studies, whereas other studies reported benefits of adding DHA or both DHA and ARA to formula. One of the first trials with term infants that were fed formula supplemented with DHA or both DHA and ARA evaluated growth, visual acuity (Visual Evoked Potential; Acuity Card Procedure), mental and motor development (Bayley Scales of Infant Development), and early language development (MacArthur Communicative Developmental Inventories). Growth, visual acuity, and mental and motor development were not different among the 3 formula groups or between the breastfed and formula-fed infants in the first year of life. At 14 months of age, infants who were fed the formula with DHA but no ARA had lower vocabulary production and comprehension scores than infants who were fed the unsupplemented control formula or who were breastfed, respectively. 
The present follow-up study evaluated IQ, receptive and expressive vocabulary, visual-motor function, and visual acuity of children from the original trial when they reached 39 months of age. Infants were randomized within 1 week after birth and fed a control formula (n = 65), one containing DHA (n = 65), or one containing both ARA and DHA (n = 66) to 1 year of age. A comparison group (n = 80) was exclusively breastfed for at least 3 months after which the infants continued to be exclusively breastfed or were supplemented with and/or weaned to infant formula. At 39 months, standard tests of IQ (Stanford Binet IQ), receptive vocabulary (Peabody Picture Vocabulary Test-Revised), expressive vocabulary (mean length of utterance), visual-motor function (Beery Visual-Motor Index), and visual acuity (Acuity Card Procedure) were administered. Growth, red blood cell fatty acid levels, and morbidity also were evaluated. Results were analyzed using analysis of variance or linear regression models. The regression model for IQ, receptive and expressive language, and the visual-motor index controlled for site, birth weight, sex, maternal education, maternal age, and the child's age at testing. The regression model for visual acuity controlled for site only. A variable selection model also identified which of 22 potentially prognostic variables among different categories (feeding groups, the child and family demographics, indicators of illness since birth, and environment) were most influential for IQ and expressive vocabulary. A total of 157 (80%) of the 197 infants studied at 12 months participated in this follow-up study. Characteristics of the families were representative of US families with children up to 5 years of age, and there were no differences in the demographic or family characteristics among the randomized formula groups. As expected, the formula and breastfed groups differed in ethnicity, marital status, parental education, and the prevalence of smoking. 
Sex, ethnicity, gestational age at birth, and birth weight for those who participated at 39 months did not differ from those who did not. The 12-month Bayley mental and motor scores and 14-month vocabulary scores of the children who participated also were not different from those who did not. At 39 months, IQ, receptive and expressive language, visual-motor function, and visual acuity were not different among the 3 randomized formula groups or between the breastfed and formula groups. The adjusted means for the control, ARA+DHA, DHA, and breastfed groups were as follows: IQ scores, 104, 101, 100, 106; Peabody Picture Vocabulary Test, 99.2, 97.2, 95.1, 97.4; mean length of utterance, 3.64, 3.75, 3.93, 4.08; the visual-motor index, 2.26, 2.24, 2.05, 2.40; and visual acuity (cycles/degree), 30.4, 27.9, 27.5, 28.6, respectively. IQ was positively associated with female sex and maternal education and negatively associated with the number of siblings and exposure to cigarette smoking in utero and/or postnatally. Expressive language also was positively associated with maternal education and negatively associated with the average hours in child care per week and hospitalizations since birth but only when the breastfed group was included in the analysis. The associations between maternal education and child IQ scores are consistent with previous reports as are the associations between prenatal exposure to cigarette smoke and IQ and early language development. Approximately one third of the variance for IQ was explained by sex, maternal education, the number of siblings, and exposure to cigarette smoke. Growth achievement, red blood cell fatty acid levels, and morbidity did not differ among groups.
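A covariate-adjusted comparison of this kind can be sketched with ordinary least squares. The data below are synthetic and the effect sizes are assumptions chosen to mirror the qualitative findings (covariate effects present, no feeding-group effect); this is not the trial's model or data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 4, n)     # 0=control, 1=DHA, 2=DHA+ARA, 3=breastfed
mat_edu = rng.normal(14, 2, n)    # years of maternal education (assumed)
sex = rng.integers(0, 2, n)       # 1 = female

# Simulate IQ with covariate effects but no true feeding-group effect,
# mirroring the null finding at 39 months (coefficients are illustrative).
iq = 70 + 2.0 * mat_edu + 3.0 * sex + rng.normal(0, 10, n)

# Design matrix: intercept, group dummies (control as reference), covariates.
X = np.column_stack([
    np.ones(n),
    (group == 1).astype(float),
    (group == 2).astype(float),
    (group == 3).astype(float),
    mat_edu,
    sex,
])
beta, *_ = np.linalg.lstsq(X, iq, rcond=None)
print("adjusted group effects vs control:", np.round(beta[1:4], 2))
print("maternal-education coefficient:", round(beta[4], 2))
```

The group-dummy coefficients are the adjusted group differences analogous to the adjusted means reported above; here they hover near zero because no group effect was simulated.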
We reported previously that infants who were fed an unsupplemented formula or one with DHA or with both DHA and ARA through 12 months or were breastfed showed no differences in mental and motor development, but those who were fed DHA without ARA had lower vocabulary scores on a standardized, parent-report instrument at 14 months of age when compared with infants who were fed the unsupplemented formula or who were breastfed. When the infants were reassessed at 39 months using age-appropriate tests of receptive and expressive language as well as IQ, visual-motor function and visual acuity, no differences among the formula groups or between the formula and breastfed groups were found. The 14-month observation thus may have been a transient effect of DHA (without ARA) supplementation on early vocabulary development or may have occurred by chance. The absence of differences in growth achievement adds to the evidence that DHA with or without ARA supports normal growth in full-term infants. In conclusion, adding both DHA and ARA when supplementing infant formulas with long-chain polyunsaturated fatty acids supports visual and cognitive development through 39 months.
This review was conducted to assess the effectiveness of negative pressure wound therapy. TARGET POPULATION AND CONDITION Many wounds are difficult to heal, despite medical and nursing care. They may result from complications of an underlying disease, like diabetes; or from surgery, constant pressure, trauma, or burns. Chronic wounds are more often found in elderly people and in those with immunologic or chronic diseases. Chronic wounds may lead to impaired quality of life and functioning, to amputation, or even to death. The prevalence of chronic ulcers is difficult to ascertain. It varies by condition and complications due to the condition that caused the ulcer. There are, however, some data on condition-specific prevalence rates; for example, of patients with diabetes, 15% are thought to have foot ulcers at some time during their lives. The approximate community care cost of treating leg ulcers in Canada, without reference to cause, has been estimated at upward of $100 million per year. Surgically created wounds can also become chronic, especially if they become infected. For example, the reported incidence of sternal wound infections after median sternotomy is 1% to 5%. Abdominal surgery also creates large open wounds. Because it is sometimes necessary to leave these wounds open and allow them to heal on their own (secondary intention), some may become infected and be difficult to heal. Yet, little is known about the wound healing process, and this makes treating wounds challenging. Many types of interventions are used to treat wounds. Current best practice for the treatment of ulcers and other chronic wounds includes debridement (the removal of dead or contaminated tissue), which can be surgical, mechanical, or chemical; bacterial balance; and moisture balance. Treating the cause, ensuring good nutrition, and preventing primary infection also help wounds to heal. 
Saline or wet-to-moist dressings are reported as traditional or conventional therapy in the literature, although they typically are not the first line of treatment in Ontario. Modern moist interactive dressings are foams, calcium alginates, hydrogels, hydrocolloids, and films. Topical antibacterial agents-antiseptics, topical antibiotics, and newer antimicrobial dressings-are used to treat infection. Negative pressure wound therapy is not a new concept in wound therapy. It is also called subatmospheric pressure therapy, vacuum sealing, vacuum pack therapy, and sealing aspirative therapy. The aim of the procedure is to use negative pressure to create suction, which drains the wound of exudate (i.e., fluid, cells, and cellular waste that has escaped from blood vessels and seeped into tissue) and influences the shape and growth of the surface tissues in a way that helps healing. During the procedure, a piece of foam is placed over the wound, and a drain tube is placed over the foam. A large piece of transparent tape is placed over the whole area, including the healthy tissue, to secure the foam and drain the wound. The tube is connected to a vacuum source, and fluid is drawn from the wound through the foam into a disposable canister. Thus, the entire wound area is subjected to negative pressure. The device can be programmed to provide varying degrees of pressure either continuously or intermittently. It has an alarm to alert the provider or patient if the pressure seal breaks or the canister is full. Negative pressure wound therapy may be used for patients with chronic and acute wounds; subacute wounds (dehisced incisions); chronic, diabetic wounds or pressure ulcers; meshed grafts (before and after); or flaps. It should not be used for patients with fistulae to organs/body cavities, necrotic tissue that has not been debrided, untreated osteomyelitis, wound malignancy, wounds that require hemostasis, or for patients who are taking anticoagulants. 
The inclusion criteria were as follows: randomized controlled trial (RCT) with a sample size of 20 or more; human study; published in English. Seven international health technology assessments on NPWT were identified. Included in this list of health technology assessments is the original health technology review on NPWT by the Medical Advisory Secretariat from 2004. The Medical Advisory Secretariat found that the health technology assessments consistently reported that NPWT may be useful for healing various types of wounds, but that its effectiveness could not be empirically quantified because the studies were poorly done, the patient populations and outcome measures could not be compared, and the sample sizes were small. Six RCTs were identified that compared NPWT to standard care. Five of the 6 studies were of low or very low quality according to Grading of Recommendations Assessment, Development and Evaluation (GRADE) criteria. The low and very low quality RCTs were flawed owing to small sample sizes, inconsistent reporting of results, and patients lost to follow-up. The highest quality study, which forms the basis of this health technology policy assessment, found the following. There was not a statistically significant difference (≥ 20%) between NPWT and standard care in the rate of complete wound closure in patients who had complete wound closure but did not undergo surgical wound closure (P = .15). The authors of this study did not report the length of time to complete wound closure between NPWT and standard care in patients who had complete wound closure but did not undergo surgical wound closure. There was no statistically significant difference (≥ 20%) in the rate of secondary amputations between the patients that received NPWT and those that had standard care (P = .06). There may be an increased risk of wound infection in patients that receive NPWT compared with those that receive standard care.
Based on the evidence to date, the clinical effectiveness of NPWT to heal wounds is unclear. Furthermore, saline dressings are not standard practice in Ontario, thereby rendering the literature base irrelevant in an Ontario context. Nonetheless, despite the lack of methodologically sound studies, NPWT has diffused across Ontario. Discussions with Ontario clinical experts have highlighted some deficiencies in the current approach to wound management, especially in the community. Because NPWT is readily available, easy to administer, and may save costs, compared with multiple daily conventional dressing changes, it may be used inappropriately. The discussion group highlighted the need to put in place a coordinated, multidisciplinary strategy for wound care in Ontario to ensure the best, continuous care of patients.
Are extracellular vesicles (EVs) in the murine oviduct (oviductosomes, OVS) conserved in humans and do they play a role in the fertility of Pmca4-/- females? OVS and their fertility-modulating proteins are conserved in humans, arise via the apocrine pathway, and mediate a compensatory upregulation of PMCA1 (plasma membrane Ca2+-ATPase 1) in Pmca4-/- female mice during proestrus/estrus, to account for their fertility. Recently murine OVS were identified and shown during proestrus/estrus to express elevated levels of PMCA4 which they can deliver to sperm. PMCA4 is the major Ca2+ efflux pump in murine sperm and Pmca4 deletion leads to loss of sperm motility and male infertility as there is no compensatory upregulation of the remaining Ca2+ pump, PMCA1. Of the four family members of PMCAs (PMCA1-4), PMCA1 and PMCA4 are ubiquitous, and to date there have been no reports of one isoform being upregulated to compensate for another in any organ/tissue. Since Pmca4-/- females are fertile, despite the abundant expression of PMCA4 in wild-type (WT) OVS, we propose that OVS serve a role of packaging and delivering to sperm elevated levels of PMCA1 in Pmca4-/- during proestrus/estrus to compensate for PMCA4's absence. Fallopian tubes from pre-menopausal women undergoing hysterectomy were used to study EVs in the luminal fluid. Oviducts from sexually mature WT mice were sectioned after perfusion fixation to detect EVs in situ. Oviducts were recovered from WT and Pmca4-/- after hormonally induced estrus and sectioned for PMCA1 immunofluorescence (IF) (detected with confocal microscopy) and hematoxylin and eosin staining. Reproductive tissues, luminal fluids and EVs were recovered after induced estrus and after natural cycling for western blot analysis of PMCA1 and qRT-PCR of Pmca1 to compare expression levels in WT and Pmca4-/-. OVS, uterosomes, and epididymal luminal fluid were included in the comparisons. 
WT and Pmca4-/- OVS were analyzed for the presence of known PMCA4 partners in sperm and their ability to interact with PMCA1, via co-immunoprecipitation. In vitro uptake of PMCA1 from OVS was analyzed in capacitated and uncapacitated sperm via quantitative western blot analysis, IF localization and flow cytometry. Caudal sperm were also assayed for uptake of tyrosine-phosphorylated proteins which were shown to be present in OVS. Finally, PMCA1 and PMCA4 in OVS and that delivered to sperm were assayed for enzymatic activity. Human fallopian tubes were flushed to recover luminal fluid which was processed for OVS via ultracentrifugation. Human OVS were negatively stained for transmission electron microscopy (TEM) and subjected to immunogold labeling, to detect PMCA4. Western analysis was used to detect HSC70 (an EV biomarker), PMCA1 and endothelial nitric oxide synthase (eNOS) which is a fertility-modulating protein delivered to human sperm by prostasomes. Oviducts of sexually mature female mice were sectioned after perfusion fixation for TEM tomography to obtain 3D information and to distinguish cross-sections of EVs from those of microvilli and cilia. Murine tissues, luminal fluids and EVs were assayed for PMCA1 (IF and western blot) or qRT-PCR. PMCA1 levels from western blots were quantified, using band densities and compared in WT and Pmca4-/- after induced estrus and in proestrus/estrus and metestrus/diestrus in cycling females. In vitro uptake of PMCA1 and tyrosine-phosphorylated proteins was quantified with flow cytometry and/or quantitative western blot. Ca2+-ATPase activity in OVS and sperm before and after PMCA1 and PMCA4 uptake was assayed, via the enzymatic hydrolysis rate of ATP. TEM revealed that human oviducts contain EVs (exosomal and microvesicular). These EVs contain PMCA4 (immunolabeling), eNOS and PMCA1 (western blot) in their cargo. 
TEM tomography showed the murine oviduct with EV-containing blebs which typify the apocrine pathway for EV biogenesis. Western blots revealed that during proestrus/estrus PMCA1 was significantly elevated in the oviductal luminal fluid (OLF) (P = 0.02) and in OVS (P = 0.03) of Pmca4-/-, compared to WT. Further, while PMCA1 levels did not fluctuate in OLF during the cycle in WT, they were significantly (P = 0.02) higher in proestrus/estrus than at metestrus/diestrus in Pmca4-/-. The elevated levels of PMCA1 in proestrus/estrus, which mimic PMCA4 in WT, are OLF/OVS-specific, and are not seen in oviductal tissues, uterosomes or epididymal luminal fluid of Pmca4-/-. However, qRT-PCR revealed significantly elevated levels of Pmca1 transcript in Pmca4-/- oviductal tissues, compared to WT. PMCA1 could be transferred from OVS to sperm and the levels were significantly higher for capacitated vs uncapacitated sperm, as assessed by flow cytometry (P = 0.001) after 3 h co-incubation, quantitative western blot (P < 0.05) and the frequency of immuno-labeled sperm (P < 0.001) after 30 min co-incubation. Tyrosine phosphorylated proteins were discovered in murine OVS and could be delivered to sperm after their co-incubation with OVS, as detected by western, immunofluorescence localization, and flow cytometry. PMCA1 and PMCA4 in OVS were shown to be enzymatically active and this activity increased in sperm after OVS interaction. Although oviductal tissues of WT and Pmca4-/- showed no significant difference in PMCA1 levels, Pmca4-/- levels of OVS/OLF during proestrus/estrus were significantly higher than in WT. We have attributed this enrichment or upregulation of PMCA1 in Pmca4-/- partly to selective packaging in OVS to compensate for the lack of PMCA4.
However, in the absence of a difference between WT and Pmca4-/- in PMCA1 levels in oviductal tissues as a whole, we cannot rule out significantly higher PMCA1 expression in the oviductal epithelium that gives rise to the OVS, as significantly higher Pmca1 transcripts were detected in Pmca4-/-. Since OVS and their fertility-modulating cargo components are conserved in humans, the role of murine OVS in regulating the expression of proteins required for capacitation and fertility is likely also conserved. Secondly, OVS may explain some of the differences between in vivo and in vitro fertilization in mouse mutants, as seen in mice lacking the gene for FER, the enzyme required for sperm protein tyrosine phosphorylation. Our observation that murine OVS carry tyrosine-phosphorylated proteins and can deliver them to sperm provides an explanation for the in vivo fertility of Fer mutants, not seen in vitro. Finally, our findings have implications for infertility treatment and exosome therapeutics. The work was supported by National Institute of Health (RO3HD073523 and 5P20RR015588) grants to P.A.M.-D. There are no conflicts of interest.
To explore the application of machine learning algorithms in the prediction and evaluation of cesarean section, predicting the amount of blood transfusion during cesarean section, and to analyze the risk factors of hypothermia during anesthesia recovery. (1) Through the hospital's electronic medical record system, a total of 600 parturients who underwent cesarean section in our hospital from June 2019 to December 2020 were included. Maternal age, admission time, diagnosis, and other case data were recorded. The routine method of anesthesia for cesarean section was intraspinal anesthesia; general anesthesia was used only at the patient's strong request, when intraspinal anesthesia was contraindicated, or when it failed. According to the standard for intraoperative bleeding, the patients were divided into two groups: an obvious bleeding group (MH group, N = 154) and a nonobvious hemorrhage group (NMH group, N = 446). The preoperative, intraoperative, and postoperative indexes of parturients in the two groups were analyzed and compared. The risk factors for intraoperative bleeding were then screened by logistic regression analysis, with the occurrence of obvious bleeding as the dependent variable and the factors from the univariate analysis as independent variables. To further predict intraoperative blood transfusion, standard cases of repeat cesarean section and variables with possible clinical significance were included in the prediction model. Logistic regression, XGB, and ANN machine learning algorithms were used to construct the prediction model of intraoperative blood transfusion. The area under the ROC curve (AUROC), accuracy, recall rate, and F1 value were calculated and compared. (2) According to whether hypothermia occurred in the anesthesia recovery room, the patients were divided into two groups: a hypothermia group (N = 244) and a nonhypothermia group (N = 356). The incidence of hypothermia was calculated, and the relevant clinical data were collected.
On the basis of the literature, factors probably related to hypothermia were collected and analyzed by univariate statistical analysis, and the statistically significant factors were entered into multifactor logistic regression analysis to screen the independent risk factors of hypothermia in patients recovering from anesthesia. (1) First, we compared the basic data of the blood transfusion group and the nontransfusion group. The gestational age of the transfusion group was lower than that of the nontransfusion group, and the numbers of previous cesarean sections and pregnancies in the transfusion group were higher than those of the nontransfusion group. Second, we compared the incidence of complications between the blood transfusion group and the nontransfusion group. The incidence of pregnancy complications was not significantly different between the two groups (<i>P</i> > 0.05). The incidence of premature rupture of membranes in the nontransfusion group was higher than that in the transfusion group (<i>P</i> < 0.05). There was no significant difference between the groups in nuchal cord, amniotic fluid index, or preoperative fetal heart rate, but the thickness of the uterine anterior wall and the levels of Hb, PT, FIB, and TT in the blood transfusion group were lower than those in the nontransfusion group, while the number of cases of placenta previa and the levels of PLT and APTT in the blood transfusion group were higher than those in the nontransfusion group. The XGB prediction model finally yielded the 8 most important features, in order of importance from high to low: preoperative Hb, operation time, anterior wall thickness of the lower segment of the uterus, uterine atony, preoperative fetal heart rate, placenta previa, ASA grade, and uterine contractile drugs. The higher the score, the greater the impact on the model.
There was a linear correlation between the 8 features (including their correlation with the target, blood transfusion). The indexes strongly correlated with blood transfusion included placenta previa, ASA grade, operation time, uterine atony, and preoperative Hb. Placenta previa, ASA grade, operation time, and uterine atony were positively correlated with blood transfusion, while preoperative Hb was negatively correlated with blood transfusion. In order to further compare the prediction ability of the three machine learning methods, all the samples were randomly divided into two parts: the first 75% as the training set and the last 25% as the test set. The three models were then retrained on the training set, during which the models did not come into contact with any samples in the test set. After training, the trained models were used to predict the test set, the real blood transfusion status was compared with the predicted values, and the <i>F</i>1, accuracy, recall rate, and AUROC indicators were examined. On both training samples and test samples, the AUROC of XGB was higher than that of logistic regression, and the <i>F</i>1, accuracy, and recall rate of XGB were also slightly higher than those of logistic regression and ANN. Therefore, the performance of the XGB algorithm is slightly better than that of logistic regression and ANN. (2) According to the univariate analysis of hypothermia during the recovery period of anesthesia, there were significant differences in ASA grade, mode of anesthesia, infusion volume, blood transfusion, and operation duration between the normal body temperature group and the hypothermia group (<i>P</i> < 0.05). Logistic regression analysis showed that ASA grade, anesthesia mode, infusion volume, blood transfusion, and operation duration were all risk factors for hypothermia during anesthesia recovery.
In this study, three machine learning algorithms were used to analyze a large sample of clinical data and predict the results. Five important predictive variables of blood transfusion during repeat cesarean section were identified: preoperative Hb, expected operation time, uterine atony, placenta previa, and ASA grade. Comparing the three algorithms, the prediction of XGB may be more accurate than that of logistic regression and ANN. The model can provide accurate individual predictions for patients, has good predictive performance, and has a good prospect of clinical application. Second, analysis of the risk factors of hypothermia during the recovery period after cesarean section found that ASA grade, mode of anesthesia, infusion volume, blood transfusion, and operation time are all risk factors for hypothermia during this period. Accordingly, observation of such patients should be strengthened during cesarean section.
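The model comparison above rests on AUROC, accuracy, recall, and F1 computed on a held-out 25% test set. The study does not give its implementation; as a minimal, dependency-free sketch of how those four metrics are obtained from true labels and predicted risk scores (the threshold of 0.5 and the example values are illustrative assumptions, not the study's data):

```python
def evaluate(y_true, y_score, threshold=0.5):
    """Compute AUROC, accuracy, recall, and F1 for binary labels and scores."""
    # AUROC via the Mann-Whitney statistic: the probability that a random
    # positive case is scored above a random negative case (ties count 0.5).
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    auroc = wins / (len(pos) * len(neg))

    # Hard predictions at the chosen threshold.
    y_pred = [1 if s >= threshold else 0 for s in y_score]
    tp = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(y_true, y_pred) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p == 0)
    accuracy = sum(1 for y, p in zip(y_true, y_pred) if y == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"auroc": auroc, "accuracy": accuracy, "recall": recall, "f1": f1}
```

In practice the same numbers would come from a library such as scikit-learn; the point here is only what each reported indicator measures on the 25% of samples the models never saw during training.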
Coeliac disease is an autoimmune disorder triggered by ingesting gluten. It affects approximately 1% of the UK population, but only one in three people with the condition is thought to have a diagnosis. Untreated coeliac disease may lead to malnutrition, anaemia, osteoporosis and lymphoma. The objectives were to define at-risk groups and determine the cost-effectiveness of active case-finding strategies in primary care. (1) Systematic review of the accuracy of potential diagnostic indicators for coeliac disease. (2) Routine data analysis to develop prediction models for identification of people who may benefit from testing for coeliac disease. (3) Systematic review of the accuracy of diagnostic tests for coeliac disease. (4) Systematic review of the accuracy of genetic tests for coeliac disease (literature search conducted in April 2021). (5) Online survey to identify diagnostic thresholds for testing, starting treatment and referral for biopsy. (6) Economic modelling to identify the cost-effectiveness of different active case-finding strategies, informed by the findings from previous objectives. For the first systematic review, the following databases were searched from 1997 to April 2021: MEDLINE<sup>®</sup> (National Library of Medicine, Bethesda, MD, USA), Embase<sup>®</sup> (Elsevier, Amsterdam, the Netherlands), Cochrane Library, Web of Science™ (Clarivate™, Philadelphia, PA, USA), the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) and the National Institutes of Health Clinical Trials database. For the second systematic review, the following databases were searched from January 1990 to August 2020: MEDLINE, Embase, Cochrane Library, Web of Science, Kleijnen Systematic Reviews (KSR) Evidence, WHO ICTRP and the National Institutes of Health Clinical Trials database.
For prediction model development, Clinical Practice Research Datalink GOLD, Clinical Practice Research Datalink Aurum and a subcohort of the Avon Longitudinal Study of Parents and Children were used; for estimates for the economic models, Clinical Practice Research Datalink Aurum was used. For review 1, cohort and case-control studies reporting on a diagnostic indicator in a population with and a population without coeliac disease were eligible. For review 2, diagnostic cohort studies including patients presenting with coeliac disease symptoms who were tested with serological tests for coeliac disease and underwent a duodenal biopsy as reference standard were eligible. In both reviews, risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. Bivariate random-effects meta-analyses were fitted, in which binomial likelihoods for the numbers of true positives and true negatives were assumed. People with dermatitis herpetiformis, a family history of coeliac disease, migraine, anaemia, type 1 diabetes, osteoporosis or chronic liver disease are 1.5-2 times more likely than the general population to have coeliac disease; individual gastrointestinal symptoms were not useful for identifying coeliac disease. For children, women and men, prediction models included 24, 24 and 21 indicators of coeliac disease, respectively. The models showed good discrimination between patients with and patients without coeliac disease, but performed less well when externally validated. Serological tests were found to have good diagnostic accuracy for coeliac disease. Immunoglobulin A tissue transglutaminase had the highest sensitivity and endomysial antibody the highest specificity. There was little improvement when tests were used in combination. Survey respondents (<i>n</i> = 472) wanted to be 66% certain of the diagnosis from a blood test before starting a gluten-free diet if symptomatic, and 90% certain if asymptomatic.
Cost-effectiveness analyses found that, among adults, and using serological testing alone, immunoglobulin A tissue transglutaminase was most cost-effective at a 1% pre-test probability (equivalent to population screening). Strategies using immunoglobulin A endomysial antibody plus human leucocyte antigen or human leucocyte antigen plus immunoglobulin A tissue transglutaminase with any pre-test probability had similar cost-effectiveness results, which were also similar to the cost-effectiveness results of immunoglobulin A tissue transglutaminase at a 1% pre-test probability. The most practical alternative for implementation within the NHS is likely to be a combination of human leucocyte antigen and immunoglobulin A tissue transglutaminase testing among those with a pre-test probability above 1.5%. Among children, the most cost-effective strategy was a 10% pre-test probability with human leucocyte antigen plus immunoglobulin A tissue transglutaminase, but there was uncertainty around the most cost-effective pre-test probability. There was substantial uncertainty in economic model results, which means that there would be great value in conducting further research. The interpretation of meta-analyses was limited by the substantial heterogeneity between the included studies, and most included studies were judged to be at high risk of bias. The main limitations of the prediction models were that we were restricted to diagnostic indicators that were recorded by general practitioners and that, because coeliac disease is underdiagnosed, it is also under-reported in health-care data. The cost-effectiveness model is a simplification of coeliac disease and modelled an average cohort rather than individuals. Evidence was weak on the probability of routine coeliac disease diagnosis, the accuracy of serological and genetic tests and the utility of a gluten-free diet. 
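The pre-test probabilities above combine with a test's sensitivity and specificity through Bayes' rule to give the probability of coeliac disease after a positive result. A minimal sketch of that calculation (the sensitivity and specificity values in the example are assumptions chosen for illustration, not figures estimated by the review):

```python
def post_test_probability(pre_test, sensitivity, specificity):
    """Probability of disease after a positive test result (Bayes' rule).

    pre_test: prior (pre-test) probability of disease, in [0, 1].
    sensitivity, specificity: test accuracy, in [0, 1].
    """
    true_pos = pre_test * sensitivity                # P(test+ and disease)
    false_pos = (1 - pre_test) * (1 - specificity)   # P(test+ and no disease)
    return true_pos / (true_pos + false_pos)

# Illustrative assumed values for an IgA tTG-like test: sensitivity 0.93,
# specificity 0.95. At a 1.5% pre-test probability (at least one predictor),
# a positive result alone falls well short of the 90% certainty that survey
# respondents wanted before treating asymptomatic people.
certainty = post_test_probability(0.015, 0.93, 0.95)
```

This arithmetic is why the economic model's recommended strategies pair a higher pre-test probability, or a second test such as HLA typing, with serology rather than relying on one positive blood test at screening-level prevalence.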
Population screening with immunoglobulin A tissue transglutaminase (1% pre-test probability) and of immunoglobulin A endomysial antibody followed by human leucocyte antigen testing or human leucocyte antigen testing followed by immunoglobulin A tissue transglutaminase with any pre-test probability appear to have similar cost-effectiveness results. As decisions to implement population screening cannot be made based on our economic analysis alone, and given the practical challenges of identifying patients with higher pre-test probabilities, we recommend that human leucocyte antigen combined with immunoglobulin A tissue transglutaminase testing should be considered for adults with at least a 1.5% pre-test probability of coeliac disease, equivalent to having at least one predictor. A more targeted strategy of 10% pre-test probability is recommended for children (e.g. children with anaemia). Future work should consider whether or not population-based screening for coeliac disease could meet the UK National Screening Committee criteria and whether or not it necessitates a long-term randomised controlled trial of screening strategies. Large prospective cohort studies in which all participants receive accurate tests for coeliac disease are needed. This study is registered as PROSPERO CRD42019115506 and CRD42020170766. This project was funded by the National Institute for Health and Care Research (NIHR) Health Technology Assessment programme and will be published in full in <i>Health Technology Assessment</i>; Vol. 26, No. 44. See the NIHR Journals Library website for further project information.
There remain large discrepancies between pediatricians' practice patterns and the American Academy of Pediatrics (AAP) guidelines for the assessment and treatment of children with attention-deficit/hyperactivity disorder (ADHD). Several studies raise additional concerns about access to ADHD treatment for girls, blacks, and poorer individuals. Barriers may occur at multiple levels, including identification and referral by school personnel, parents' help-seeking behavior, diagnosis by the medical provider, treatment decisions, and acceptance of treatment. Such findings confirm the importance of establishing appropriate mechanisms to ensure that children of both genders and all socioeconomic, racial, and ethnic groups receive appropriate assessment and treatment. Publication of the AAP ADHD toolkit provides resources to assist with implementing the ADHD guidelines in clinical practice. These resources address a number of the barriers to office implementation, including unfamiliarity with Diagnostic and Statistical Manual of Mental Disorders criteria, difficulty identifying comorbidities, and inadequate knowledge of effective coding practices. Also crucial to the success of improved processes within clinical practice is community collaboration in care, particularly collaboration with the educational system. Such collaboration addresses other barriers to good care, such as pressures from parents and schools to prescribe stimulants, cultural biases that may prevent schools from assessing children for ADHD or may prevent families from seeking health care, and inconsistencies in recognition and referral among schools in the same system. Collaboration may also create efficiencies in collection of data and school-physician communications, thereby decreasing physicians' non-face-to-face (and thus nonreimbursable) elements of care. 
This article describes a process used in Guilford County, North Carolina, to develop a consensus among health care providers, educators, and child advocates regarding the assessment and treatment of children with symptoms of ADHD. The outcome, ie, a community protocol followed by school personnel and community physicians for >10 years, ensures communication and collaboration between educators and physicians in the assessment and treatment of children with symptoms of ADHD. This protocol has the potential to increase practice efficiency, improve practice standards for children with ADHD, and enhance identification of children in schools. Perhaps most importantly, the community process through which the protocol was developed and implemented has an educational component that increases the knowledge of school personnel about ADHD and its treatment, increasing the likelihood that referrals will be appropriate and increasing the likelihood that children will benefit from coordination of interventions among school personnel, physicians, and parents. 
The protocol reflects a consensus of school personnel and community health care providers regarding the following: (1) ideal ADHD assessment and management principles; (2) a common entry point (a team) at schools for children needing assessment because of inattention and classroom behavior problems, whether the problems present first to a medical provider, the behavioral health system, or the school; (3) a protocol followed by the school system, recognizing the schools' resource limitations but meeting the needs of community health care providers for classroom observations, psychoeducational testing, parent and teacher behavior rating scales, and functional assessment; (4) a packet of information about each child who is determined to need medical assessment; (5) a contact person or team at each physician's office to receive the packet from the school and direct it to the appropriate clinician; (6) an assessment process that investigates comorbidities and applies appropriate diagnostic criteria; (7) evidence-based interventions; (8) processes for follow-up monitoring of children after establishment of a treatment plan; (9) roles for central participants (school personnel, physicians, school nurses, and mental health professionals) in assessment, management, and follow-up monitoring of children with attention problems; (10) forms for collecting and exchanging information at every step; (11) processes and key contacts for flow of communication at every step; and (12) a plan for educating school and health care professionals about the new processes. A replication of the community process, initiated in Forsyth County, North Carolina, in 2001, offers insights into the role of the AAP ADHD guidelines in facilitating development of a community consensus protocol. This replication also draws attention to identification and referral barriers at the school level. 
The following recommendations, drawn from the 2 community processes, describe a role for physicians in the collaborative community care of children with symptoms of ADHD. (1) Achieve consensus with the school system regarding the role of school personnel in collecting data for children with learning and behavior problems; components to consider include (a) vision and hearing screening, (b) school/academic histories, (c) classroom observation by a counselor, (d) parent and teacher behavior rating scales (eg, Vanderbilt, Conners, or Achenbach scales), (e) consideration of speech/language evaluation, (f) screening intelligence testing, (g) screening achievement testing, (h) full intelligence and achievement testing if discrepancies are apparent in abbreviated tests, and (i) trials of classroom interventions. (2) Use pediatric office visits to identify children with academic or behavior problems and symptoms of inattention (history or questionnaire). (3) Refer identified children to the contact person at each child's school, requesting information in accordance with community consensus. (4) Designate a contact person to receive school materials for the practice. (5) Review the packet from the school and incorporate school data into the clinical assessment. (6) Reinforce with the parents and the school the need for multimodal intervention, including academic and study strategies for the classroom and home, in-depth psychologic testing of children whose discrepancies between cognitive level and achievement suggest learning or language disabilities and the need for an individualized educational plan (special education), consideration of the "other health impaired" designation as an alternate route to an individualized educational plan or 504 plan (classroom accommodations), behavior-modification techniques for targeted behavior problems, and medication trials, as indicated.
(7) Refer the patient to a mental health professional if the assessment suggests coexisting conditions. (8) Use communication forms to share diagnostic and medication information, recommended interventions, and follow-up plans with the school and the family. (9) Receive requested teacher and parent follow-up reports and make adjustments in therapy as indicated by the child's functioning in targeted areas. (10) Maintain communication with the school and the parents, especially at times of transition (eg, beginning and end of the school year, change of schools, times of family stress, times of change in management, adolescence, and entry into college or the workforce).
To investigate the effect of the application of pulse contour cardiac output (PiCCO) monitoring technology on delayed resuscitation of patients with extensive burns in a mass casualty event. The clinical data of 41 patients injured in the Kunshan dust explosion and hospitalized in the First Affiliated Hospital of Soochow University, the 100th Hospital of the People's Liberation Army, and Suzhou Municipal Hospital were retrospectively analyzed. The patients were divided into a traditional monitoring group (T, n=22) and a PiCCO monitoring group (P, n=19) according to the monitoring technique used during delayed resuscitation. The input volumes of electrolyte, colloids, and water of patients in the two groups within 2 hours after admission, the first, second, and third 8 hours post injury (HPI), and the first 24 HPI were recorded. The fluid infusion coefficients of patients in the two groups within 2 hours after admission, the first, second, and third 8 HPI, and the first, second, third, and fourth 24 HPI were calculated. The urine volume, mean arterial pressure (MAP), and central venous pressure (CVP) of patients in the two groups at post injury hour (PIH) 8, 16, 24, 48, 72, and 96 were recorded. The blood lactate, base excess, hematocrit (HCT), and platelet count of patients in the two groups at PIH 24, 48, 72, and 96 were recorded. Complications and deaths of patients in the two groups were recorded. Data were processed with analysis of variance for repeated measurement, Chi-square test, t test, and Wilcoxon test. The deviations of the fluid infusion coefficients of the first and second 24 HPI from 2 mL·kg(-1)·%TBSA(-1), and the deviations of the coefficients of the second, third, and fourth 24 HPI from 1 mL·kg(-1)·%TBSA(-1), were calculated, and these deviations were analyzed by Pearson correlation analysis.
(1) The input volumes of electrolyte of patients in group P were significantly more than those in group T within the first 8 and 24 HPI (with Z values respectively -3.506 and -2.654, P<0.05 or P<0.01), and the input volumes of electrolyte of patients in the two groups were similar within the other time periods (with Z values from -1.871 to -0.680, P values above 0.05). The input volumes of colloid of patients in group P were significantly less than those in group T within the second, third 8 HPI, and the first 24 HPI (with Z values from -4.720 to -2.643, P<0.05 or P<0.01), and the input volumes of colloid of patients in the two groups were similar within the other time periods (with Z values respectively -2.376 and -2.303, P values above 0.05). The input volumes of water of patients in the two groups were similar within each time period (with Z values from -1.959 to -0.241, P values above 0.05). (2) The fluid infusion coefficients of patients in group T within 2 hours after admission, the first, second, and third 8 HPI, and the first, second, third, and fourth 24 HPI were respectively (0.59±0.18), (0.70±0.23), (0.94±0.24), (0.74±0.14), (2.38±0.44), (1.70±0.56), (1.35±0.67), and (0.92±0.46) mL·kg(-1)·%TBSA(-1,) and the values in group P were respectively (0.59±0.29), (0.82±0.37), (0.86±0.38), (0.59±0.24), (2.27±0.85), (2.13±0.68), (1.59±3.78), and (1.46±0.56) mL·kg(-1)·%TBSA(-1). The fluid infusion coefficients of patients in the two groups were similar within 2 hours after admission, the first, second 8 HPI, and the first, third 24 HPI (with t values from -1.262 to 0.871, P values above 0.05). The fluid infusion coefficient of patients in group P was significantly lower than that in group T within the third 8 HPI (t=2.456, P<0.05), and the fluid infusion coefficient of patients in group P were significantly higher than that in group T within the second and fourth 24 HPI (with t values respectively -2.234 and -3.370, P<0.05 or P<0.01). 
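The fluid infusion coefficients reported above are expressed in mL·kg(-1)·%TBSA(-1), i.e. infused volume normalized by body weight and burned body-surface area. A minimal sketch of that normalization, inferred from the units (the example numbers are illustrative, not patient data from the study):

```python
def fluid_infusion_coefficient(volume_ml, weight_kg, tbsa_percent):
    """Infused fluid volume per kg body weight per %TBSA burned."""
    if weight_kg <= 0 or tbsa_percent <= 0:
        raise ValueError("weight and %TBSA must be positive")
    return volume_ml / (weight_kg * tbsa_percent)

# Illustrative: 8400 mL infused over the first 24 HPI in a 70 kg patient
# with 60% TBSA burns gives a coefficient of 2.0 mL/kg/%TBSA, matching the
# reference value of 2 against which first-24-HPI deviations were computed.
coef = fluid_infusion_coefficient(8400, 70, 60)
```

This is also why the study compares the first-24-HPI coefficients against 2 and the later periods against 1: those are the conventional per-kg, per-%TBSA resuscitation targets the deviations are measured from.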
There was an obvious negative correlation between the deviation of the first 24 HPI fluid infusion coefficient from 2 and the fluid infusion coefficient of the second 24 HPI (r=-0.438, P<0.01). There was no obvious correlation between the deviation of the second 24 HPI fluid infusion coefficient from 1 and the fluid infusion coefficient of the third 24 HPI (r=0.091, P>0.05). There was an obvious positive correlation between the deviation of the second 24 HPI fluid infusion coefficient from 1 and the fluid infusion coefficient of the fourth 24 HPI (r=0.695, P<0.01). (3) The urine volumes and MAP of patients in the two groups were similar at each time point (with Z values from -1.884 to 0, P values above 0.05). The CVP of patients in group P was significantly higher than that in group T at PIH 16, 24, 48, and 72 (with Z values from -4.341 to -2.213, P<0.05 or P<0.01), and the CVP of patients in the two groups was similar at the other time points (with Z values respectively -0.132 and -1.208, P values above 0.05). The blood lactate of patients in group P was significantly higher than that in group T at PIH 72 (Z=-2.958, P<0.01), and the blood lactate of patients in the two groups was similar at the other time points (with Z values from -1.742 to -0.433, P values above 0.05). The base excess of patients in group P was significantly lower than that in group T at PIH 24, 48, 72, and 96 (with Z values from -4.970 to -4.734, P values below 0.01). The HCT of patients in the two groups was similar at PIH 24, 48, 72, and 96 (with Z values from -2.239 to -0.196, P values above 0.05). There were significant differences in the platelet counts of patients in the two groups at PIH 24, 72, and 96 (with Z values from -4.578 to -2.512, P<0.05 or P<0.01). (4) There were 15 cases in group T accompanied by complications, and 7 cases died, while 13 cases in group P were accompanied by complications, and 9 cases died.
The occurrence of complications and death of patients in the two groups was similar (with χ(2) values respectively <0.001 and 1.306, P values above 0.05). On the basis of traditional burn shock monitoring indexes, PiCCO monitoring did not show a clear advantage for guiding fluid resuscitation in patients with severe burns, and further clinical research is needed.
The modern clinical research on prostatitis started with the work of Stamey and coworkers, who developed the basic principles we are still using. They established the segmented culture technique for localizing infections in males to the urethra, the bladder, or the prostate and for differentiating the main categories of prostatitis. These categories, with slight modifications, are still used in the NIH classification: acute bacterial prostatitis, chronic bacterial prostatitis, chronic pelvic pain syndrome (CPPS) and asymptomatic prostatitis. Prostatic inflammation is considered an important factor influencing both prostatic growth and the progression of symptoms of benign prostatic hyperplasia and prostatitis. Chronic inflammation/neuroinflammation is a result of a deregulated acute phase response of the innate immune system affecting surrounding neural tissue at molecular, structural and functional levels. Clinical observations suggest that chronic inflammation correlates with chronic prostatitis/chronic pelvic pain syndrome (CP/CPPS) and benign prostatic hyperplasia (BPH), and a history of clinical chronic prostatitis significantly increases the odds for prostate cancer. The NIH/NIDDK classification, based on the use of the microbiological 4-glass localization test or the simplified 2-glass test, is currently accepted worldwide. The UPOINT system identifies groups of patients with homogeneous clinical presentations and is used to recognize phenotypes to be submitted to specific treatments. The UPOINTS algorithm extends the original UPOINT by adding to the urinary (U), psycho-social (P), organ-specific (O), infection (I), neurological (N), and muscle tension and tenderness (T) domains a further domain related to sexuality (S). In fact, sexual dysfunction (erectile, ejaculatory, loss of libido) has been described in 46-92% of cases, with a high impact on the quality of life of patients with CP/CPPS.
Prostatic ultrasound represents the most popular imaging test in the work-up of both acute and chronic prostatitis, although no specific hypo- or hyperechoic pattern has been clearly associated with chronic bacterial prostatitis and CPPS. Use of digital-processing software to calculate the extension of the prostatic calcification area at ultrasound demonstrated a higher percentage of prostatic calcification in patients with chronic bacterial prostatitis. Multiparametric magnetic resonance imaging (mpMRI) is the current state-of-the-art imaging modality in the assessment of patients with prostate cancer, although a variety of benign conditions, including inflammation, may mimic prostate cancer and act as confounding factors in the discrimination between neoplastic and non-neoplastic lesions. Bacteria can infect the prostate gland by ascending the urethra, reflux of urine into the prostatic ducts, direct inoculation through inserted biopsy needles, or hematogenous seeding. Enterobacteriaceae are the predominant pathogens in acute and chronic bacterial prostatitis, but an increasing role of enterococci has been reported. Many strains of these uropathogens exhibit the ability to form biofilm and multidrug resistance. Sexually transmitted infection (STI) agents, in particular Chlamydia trachomatis and Mycoplasma genitalium, have also been considered as causative pathogens of chronic bacterial prostatitis. By contrast, the effective role of other "genital mycoplasmas" in genital diseases is still a much debated issue. STI agents should be investigated by molecular methods in both the patient and the sexual partner. "Next generation" investigations, such as cytokine analysis and cytological typing of immune cells, could help stratify the immune response. Epigenetic dysregulation of inflammatory factors should be investigated according to systemic and compartment-specific signals.
The search for biomarkers should also include evaluation of hormonal pathways, such as measurement of estrogen levels in semen. Antimicrobials are the first-line agents for the treatment of bacterial prostatitis. The success of antimicrobial treatment depends on the antibacterial activity and the pharmacokinetic characteristics of the drug, which must reach high concentrations in prostate secretion and prostate tissue. Acute bacterial prostatitis can be a serious infection with a potential risk for urosepsis. For initial treatment of severely ill patients, intravenous administration of high doses of bactericidal antimicrobials, such as broad-spectrum penicillins, third-generation cephalosporins or fluoroquinolones, is recommended in combination with an aminoglycoside. Use of piperacillin-tazobactam and meropenem is justified in the presence of multiresistant gram-negative pathogens. The antibiotic treatment of chronic prostatitis is currently based on the use of fluoroquinolones, which, given for 2 to 4 weeks, cured about 70% of men with chronic bacterial prostatitis. For the treatment of chlamydial prostatitis, macrolides were shown to be more effective than fluoroquinolones, whereas no differences were observed in microbiological and clinical efficacy between macrolides and tetracyclines for the treatment of infections caused by intracellular pathogens. Aminoglycosides and fosfomycin could be considered as a therapeutic alternative for the treatment of quinolone-resistant prostatitis. Use of alpha-blockers in CP/CPPS patients with urinary symptoms, and of analgesics with or without non-steroidal anti-inflammatory drugs (NSAIDs) in the presence of pain, demonstrated a reduction of symptoms and an improvement of quality of life, although long-term use of NSAIDs is limited by their side-effect profile. However, a multimodal therapeutic regimen combining alpha-blockers, antibiotics and anti-inflammatories showed better control of prostatitis symptoms than single-drug treatment.
Novel therapeutic substances for the treatment of pain, such as the cannabinoid anandamide, would be highly interesting to test. An alternative for the treatment of chronic prostatitis/chronic pelvic pain syndrome is phytotherapy, as primary therapy or in association with other drugs. Quercetin, pollen extract, extract of Serenoa repens and other mixtures of herbal extracts showed a positive effect on symptoms and quality of life without side effects. The association of CP/CPPS with alterations of intestinal function has been described. Diet affects inflammation by regulating the composition of the intestinal flora and by direct action on intestinal cells (sterile inflammation). Intestinal bacteria (the microbiota) interact with food, influencing the metabolic, immune and inflammatory response of the organism. The intestinal microbiota has a protective function against pathogenic bacteria, a metabolic function through synthesis of vitamins, decomposition of bile acids and production of trophic factors (butyrate), and modulation of the intestinal immune system. Alteration of the microbiota, called "dysbiosis", causes invasive intestinal pathologies (leaky gut syndrome and food intolerances, irritable bowel syndrome or chronic inflammatory bowel diseases) and correlates with numerous systemic diseases, including acute and chronic prostatitis. Administration of live probiotic bacteria can be used to regulate the balance of the intestinal flora. Sessions of hydrocolontherapy can represent an integration to this therapeutic approach. Finally, microbiological examination of sexual partners can offer supplementary information for treatment.
To compare the behaviors of black, white, and Hispanic children who were 18 months to 18 years of age and exposed to intimate partner violence with an age- and ethnically similar sample of children who were not exposed to violence and to compare both exposed and nonexposed children to normative samples. As part of a study on treatments for abused women in primary care public health clinics and Women, Infants and Children clinics in a large urban area, 258 abused mothers completed the Child Behavior Checklist (CBCL) on 1 of their randomly selected children between the ages of 18 months and 18 years. An ethnically similar sample of 72 nonabused mothers also completed the CBCL. The CBCL is a standardized instrument that provides a parental report of the extent of a child's behavioral problems and social competencies. The CBCL consists of a form for children 18 months to 5 years and a version for ages 6 to 18 years. The CBCL is orally administered to a parent, who rates the presence and frequency of certain behaviors on a 3-point scale (0 = not true, 1 = somewhat or sometimes true, and 2 = very true or often true). The time period is the last 6 months for the child 6 to 18 years of age and 2 months for the child 18 months to 5 years of age. Examples of behaviors for the child age 6 to 18 years include "gets in many fights," "truancy, skips school." Examples of behaviors for the child 18 months to 5 years of age include "cruel to animals," "physically attacks people," and "doesn't want to sleep alone." Both forms of the CBCL consist of 2 broadband factors of behavioral problems: internalizing and externalizing with mean scale scores for national normative samples as well as clinically referred and nonreferred samples of children. Internalizing behaviors include anxiety/depression, withdrawal, and somatic complaints. Externalizing behaviors include attention problems, aggressive behavior, and rule-breaking actions. 
Behavior scales yield a score of total behavioral problems. Scores are summed and then converted to normalized T scores. T scores ≥ 60 are within the borderline/clinical referral range; higher scores represent more deviant behavior. Multivariate analyses of variance (MANOVAs) were used to determine whether children from abused mothers differed significantly in their internalizing behaviors, externalizing behaviors, and total behavior problems from children of nonabused mothers. One-sample t tests were used to compare children from abused and nonabused mothers to the matched clinically referred and nonreferred normative sample. Four pair-wise comparisons were considered: 1) children from abused women to the referred norm, 2) children from abused women to the nonreferred norm, 3) children from nonabused women to the referred norm, and 4) children from nonabused women to the nonreferred norm. The internal, external, and total behavior problem T scores were dichotomized into a referral status: nonreferred = T score < 60, referred = T score ≥ 60. Frequencies and percentages were used to describe the distribution of referral status among the children from the abused and nonabused women, and chi-square tests of independence were used to determine whether the groups were significantly different. No significant differences in demographic characteristics between children from the abused women and nonabused women were observed. The sample consisted of a large number of Hispanic children (68.9%) and slightly more girls (53.6%), and nearly half (45.2%) had annual household incomes <10,000 dollars. Means, standard deviations, and results from the MANOVAs performed on internal, external, and total behavior problem scores between children from abused and nonabused women revealed no significant differences (F[3,139] = 1.21) for children ages 18 months through 5 years. Results from the MANOVA performed for ages 6 through 18 years revealed a significant group difference (F[3,183] = 3.13). 
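As a rough sketch of the analysis described above, the rule that dichotomizes CBCL T scores into referral status (referred if T ≥ 60) and a 2x2 chi-square test of independence can be written as follows. The counts and scores below are invented for illustration; they are not the study data.

```python
# Hypothetical illustration of the T-score dichotomization and chi-square
# test described in the abstract; values are made up, not study data.

def referral_status(t_score):
    """Nonreferred = T score < 60; referred = T score >= 60 (borderline/clinical range)."""
    return "referred" if t_score >= 60 else "nonreferred"

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]], no continuity correction."""
    n = a + b + c + d
    margins = ((a + b), (c + d), (a + c), (b + d))
    if any(m == 0 for m in margins):
        raise ValueError("all marginal totals must be positive")
    return n * (a * d - b * c) ** 2 / (margins[0] * margins[1] * margins[2] * margins[3])

status = referral_status(61)           # "referred"
stat = chi_square_2x2(40, 60, 10, 62)  # ~13.84 for these illustrative counts
```

The statistic would then be compared against a chi-square distribution with 1 degree of freedom to obtain a p value.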
Univariate tests revealed significant group differences for internalizing behavior (F[1,185] = 6.81), externalizing behavior (F[1,185] = 7.84), and total behavior problems (F[1,185] = 9.45). Overall, children of abused mothers had significantly higher internalizing (58.5 +/- 12.1), externalizing (55.5 +/- 12.4), and total behavior problems (57.6 +/- 12.3) scores than the internalizing (52.9 +/- 13.7), externalizing (49.7 +/- 10.6), and total behavior problems (51.0 +/- 13.0) scores exhibited by children of nonabused mothers. Most comparisons of children from the abused women to the referred and nonreferred norms were significant. The mean internal, external, and total behavior problem scores from children of abused women were significantly higher than the nonreferred norms and significantly lower than the referred norms. In contrast, all comparisons for children from nonabused women were not significantly different from the nonreferred norms. Children, ages 6 to 18 years, of abused mothers exhibit significantly more internalizing, externalizing, and total behavior problems than children of the same age and sex of nonabused mothers. In addition, the mean internalizing behavior scores for boys 6 to 11 years of age, as well as girls and boys 12 to 18 years of age, of abused mothers were not significantly different from the clinical referral norms. Internalizing behaviors of anxiety, withdrawal, and depression are consistent with suicidal risk. The association of a child's exposure to intimate partner violence and subsequent attempted and/or completed suicide demands research. Our data demonstrate that children of abused mothers have significantly more behavioral problems than the nonclinically referred norm children but also, for most children, display significantly fewer problems than the clinically referred children. 
These children of abused mothers are clearly suspended above normal and below deviant, with children ages 6 to 18 being at the greatest risk. If abused mothers can be identified and treated, then perhaps behavior problems of their children can be arrested and behavioral scores improved. The American Academy of Pediatrics Committee on Child Abuse and Neglect recommends routine screening of all women for abuse at the time of the well-child visit and implementation of a protocol that includes a safety plan for the entire family. Clinicians can use this research information to assess for intimate partner violence during child health visits and inform abused mothers of the potential effects on their children's behavior. Early detection and treatment for intimate partner violence against women has the potential to interrupt and prevent behavioral problems for their children.
Recurrent abdominal pain (RAP) is a common problem in children and adolescents. Evaluation and treatment of children with RAP continue to challenge physicians because of the lack of a psychometrically sound measure for RAP. A major obstacle to progress in research on RAP has been the lack of a biological marker for RAP and the lack of a reliable and valid clinical measure for RAP. The objectives of this study were (1) to develop and test a multidimensional measure for RAP (MM-RAP) in children to serve as a primary outcome measure for clinical trials, (2) to evaluate the reliability of the measure and compare its responses across different populations, and (3) to examine the reliabilities of the measure scales in relation to the demographic variables of the studied population. We conducted 3 cross-sectional studies. Two studies were clinic-based studies that enrolled children with RAP from 1 pediatric gastroenterology clinic and 6 primary care clinics. The third study was a community-based study in which children from 1 elementary and 2 middle schools were screened for frequent episodes of abdominal pain. The 3 studies were conducted in Houston, Texas. Inclusion criteria for the clinic-based studies were (1) age of 4 to 18 years; (2) abdominal pain that had persisted for 3 or more months; (3) abdominal pain that was moderate to severe and interfered with some or all regular activities; (4) abdominal pain that may or may not be accompanied by upper-gastrointestinal symptoms; and (5) children were accompanied by a parent or guardian who was capable of giving informed consent, and children over the age of 10 years were capable of giving informed assent. The community-based study used standardized questionnaires that were offered to 1080 children/parents from the 3 participating schools; 700 completed and returned the questionnaires (65% response rate). The questionnaire was designed to elicit data concerning the history of abdominal pain or discomfort. 
A total of 160 children met Apley's criteria and were classified as having RAP. Inclusion criteria were identical to those for the clinic-based studies. Participating children in the 3 studies received a standardized questionnaire that asked about socioeconomic variables, abdominal pain (intensity; frequency; duration; nature of abdominal pain, if present, and possible relationships with school activities; and other upper gastrointestinal symptoms). We used 4 scales for the MM-RAP: pain intensity scale (3 items), nonpain symptoms scale (12 items), disability scale (3 items), and satisfaction scale (2 items). Age 7 was used as a cutoff point for the analysis, as 7-year-olds have been shown to exhibit more sophisticated knowledge of illness than younger children. A total of 295 children who were aged 4 to 18 years participated in the study: 155 children from the pediatric gastroenterology clinics, 82 from the primary care clinics, and 58 from the schools. The interitem consistencies (Cronbach's coefficient alpha) for the pain intensity, nonpain symptoms, disability, and satisfaction items were 0.75, 0.81, 0.80, and 0.78, respectively, demonstrating good reliability of the measure. The internal consistencies of the 4 scales did not significantly differ between younger (≤7 years) and older (>7 years) children. There was also no significant variation in the coefficient alpha of each of the 4 scales in relation to gender or the level of the parent's education. Reliability was identical for the pain-intensity items (0.74) among children who sought medical attention from primary care or pediatric gastroenterology clinics. The intercorrelations of factor scores among the 4 scales showed a strong relationship among the factors, but not so high that the scales would be expected to be measuring the same items. The results of the factor analysis identified 5 components instead of the 4 components representing the 4 scales. 
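The interitem-consistency statistic reported above can be sketched directly from its definition: Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The item responses below are invented for illustration and are not the study data.

```python
# Minimal sketch of Cronbach's coefficient alpha; responses are hypothetical.

def cronbach_alpha(items):
    """items: list of per-item response lists, all of equal length (one entry per child)."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]  # total score per child
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three pain-intensity items scored by five respondents (hypothetical values)
alpha = cronbach_alpha([
    [2, 3, 4, 4, 5],
    [2, 4, 4, 5, 5],
    [1, 3, 3, 4, 5],
])  # ~0.97 for these invented responses
```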
The 12 items of the nonpain symptoms scale were classified into 2 components; 1 component included heartburn, burping, passing gas, bloating, problem with ingestion of milk, bad breath, and sour taste (nonpain symptoms I), and the other included nausea/vomiting, diarrhea, and constipation (nonpain symptoms II). The program ordered the 5 components on the basis of the percentage of the total variance explained by each component, and consequently by the strength of each component, in the following order: nonpain symptoms I, pain intensity, pain disability, satisfaction, and nonpain symptoms II. Of the 20 items that composed the MM-RAP, 17 met the inclusion criterion of having a correlation of ≥0.40 on the primary factor analyses. The 3 items that assessed pain intensity met the inclusion criteria, as did the 2 items that assessed satisfaction. Two of the 3 items that assessed disability met the inclusion criteria; however, the missed school item did not. The sleep problem and the loss of appetite items in the nonpain items also did not meet the inclusion criteria in either component of the nonpain symptoms scale. However, the loss of appetite item met the inclusion criteria in the disability scale with a correlation of 0.6. The 2 items that did not meet the inclusion criteria (missed school days and sour taste) will be eliminated in the revised measure for RAP. The MM-RAP demonstrated good reliability evidence in population samples. Children who have RAP and are seen at pediatric gastroenterology or primary care pediatric clinics have similar responses, showing that the measure performed well across several populations. Age did not affect the reliability of responses. The MM-RAP included 4 dimensions, each with several items that may identify disease-specific dimensions. In addition, dividing the nonpain symptoms scale into 2 components instead of 1 component could assist in creating a disease-specific measure. 
The present study focused exclusively on developing the multidimensional measure for RAP in children that could assist physicians in evaluating the efficacy of RAP treatment independent of psychological evaluations. In addition, the measure was designed for use in clinical trials that evaluate the efficacy of RAP treatment and to allow comparison between intervention studies. In conclusion, we were able to identify 4 dimensions of RAP in children (pain intensity, nonpain symptoms, pain disability, and satisfaction with health). We demonstrated that these dimensions can be measured in a reliable manner that is applicable to children who experience RAP in various settings.
L-Ascorbic Acid, Calcium Ascorbate, Magnesium Ascorbate, Magnesium Ascorbyl Phosphate, Sodium Ascorbate, and Sodium Ascorbyl Phosphate function in cosmetic formulations primarily as antioxidants. Ascorbic Acid is commonly called Vitamin C. Ascorbic Acid is used as an antioxidant and pH adjuster in a large variety of cosmetic formulations, over 3/4 of which were hair dyes and colors at concentrations between 0.3% and 0.6%. For other uses, the reported concentrations were either very low (<0.01%) or in the 5% to 10% range. Calcium Ascorbate and Magnesium Ascorbate are described as antioxidants and skin conditioning agents--miscellaneous for use in cosmetics, but are not currently used. Sodium Ascorbyl Phosphate functions as an antioxidant in cosmetic products and is used at concentrations ranging from 0.01% to 3%. Magnesium Ascorbyl Phosphate functions as an antioxidant in cosmetics and was reported being used at concentrations from 0.001% to 3%. Sodium Ascorbate also functions as an antioxidant in cosmetics at concentrations from 0.0003% to 0.3%. Related ingredients (Ascorbyl Palmitate, Ascorbyl Dipalmitate, Ascorbyl Stearate, Erythorbic Acid, and Sodium Erythorbate) have been previously reviewed by the Cosmetic Ingredient Review (CIR) Expert Panel and found "to be safe for use as cosmetic ingredients in the present practices of good use." Ascorbic Acid is a generally recognized as safe (GRAS) substance for use as a chemical preservative in foods and as a nutrient and/or dietary supplement. Calcium Ascorbate and Sodium Ascorbate are listed as GRAS substances for use as chemical preservatives. L-Ascorbic Acid is readily and reversibly oxidized to L-dehydroascorbic acid and both forms exist in equilibrium in the body. Permeation rates of Ascorbic Acid through whole and stripped mouse skin were 3.43 +/- 0.74 microg/cm(2)/h and 33.2 +/- 5.2 microg/cm(2)/h. Acute oral and parenteral studies in mice, rats, rabbits, guinea pigs, dogs, and cats demonstrated little toxicity. 
Ascorbic Acid and Sodium Ascorbate acted as a nitrosation inhibitor in several food and cosmetic product studies. No compound-related clinical signs or gross or microscopic pathological effects were observed in either mice, rats, or guinea pigs in short-term studies. Male guinea pigs fed a control basal diet and given up to 250 mg Ascorbic Acid orally for 20 weeks had similar hemoglobin, blood glucose, serum iron, liver iron, and liver glycogen levels compared to control values. Male and female F344/N rats and B6C3F(1) mice were fed diets containing up to 100,000 ppm Ascorbic Acid for 13 weeks with little toxicity. Chronic Ascorbic Acid feeding studies showed toxic effects at dosages above 25 mg/kg body weight (bw) in rats and guinea pigs. Groups of male and female rats given daily doses up to 2000 mg/kg bw Ascorbic Acid for 2 years had no macro- or microscopically detectable toxic lesions. Mice given Ascorbic Acid subcutaneous and intravenous daily doses (500 to 1000 mg/kg bw) for 7 days had no changes in appetite, weight gain, and general behavior; and histological examination of various organs showed no changes. Ascorbic Acid was a photoprotectant when applied to mice and pig skin before exposure to ultraviolet (UV) radiation. The inhibition of UV-induced suppression of contact hypersensitivity was also noted. Magnesium Ascorbyl Phosphate administration immediately after exposure in hairless mice significantly delayed skin tumor formation and hyperplasia induced by chronic exposure to UV radiation. Pregnant mice and rats were given daily oral doses of Ascorbic Acid up to 1000 mg/kg bw with no indications of adult-toxic, teratogenic, or fetotoxic effects. Ascorbic Acid and Sodium Ascorbate were not genotoxic in several bacterial and mammalian test systems, consistent with the antioxidant properties of these chemicals. In the presence of certain enzyme systems or metal ions, evidence of genotoxicity was seen. 
The National Toxicology Program (NTP) conducted a 2-year oral carcinogenesis bioassay of Ascorbic Acid (25,000 and 50,000 ppm) in F344/N rats and B6C3F(1) mice. Ascorbic Acid was not carcinogenic in either sex of both rats and mice. Inhibition of carcinogenesis and tumor growth related to Ascorbic Acid's antioxidant properties has been reported. Sodium Ascorbate has been shown to promote the development of urinary carcinomas in two-stage carcinogenesis studies. Dermal application of Ascorbic Acid to patients with radiation dermatitis and burn victims had no adverse effects. Ascorbic Acid was a photoprotectant in clinical human UV studies at doses well above the minimal erythema dose (MED). An opaque cream containing 5% Ascorbic Acid did not induce dermal sensitization in 103 human subjects. A product containing 10% Ascorbic Acid was nonirritant in a 4-day minicumulative patch assay on human skin and a facial treatment containing 10% Ascorbic Acid was not a contact sensitizer in a maximization assay on 26 humans. Because of the structural and functional similarities of these ingredients, the Panel believes that the data on one ingredient can be extrapolated to all of them. The Expert Panel attributed the finding that Ascorbic Acid was genotoxic in these few assay systems due to the presence of other chemicals, e.g., metals, or certain enzyme systems, which effectively convert Ascorbic Acid's antioxidant action to that of a pro-oxidant. When Ascorbic Acid acts as an antioxidant, the Panel concluded that Ascorbic Acid is not genotoxic. Supporting this view were the carcinogenicity studies conducted by the NTP, which demonstrated no evidence of carcinogenicity. Ascorbic Acid was found to effectively inhibit nitrosamine yield in several test systems. The Panel did review studies in which Sodium Ascorbate acted as a tumor promoter in animals. These results were considered to be related to the concentration of sodium ions and the pH of urine in the test animals. 
Similar effects were seen with sodium bicarbonate. Because of the concern that certain metal ions may combine with these ingredients to produce pro-oxidant activity, the Panel cautioned formulators to be certain that these ingredients are acting as antioxidants in cosmetic formulations. The Panel believed that the clinical experience in which Ascorbic Acid was used on damaged skin with no adverse effects and the repeat-insult patch test (RIPT) using 5% Ascorbic Acid with negative results supports the finding that this group of ingredients does not present a risk of skin sensitization. These data coupled with an absence of reports in the clinical literature of Ascorbic Acid sensitization strongly support the safety of these ingredients.
Most people who stop smoking gain weight. There are some interventions that have been designed to reduce weight gain when stopping smoking. Some smoking cessation interventions may also limit weight gain, although their effect on weight has not been reviewed. To systematically review the effect of: (1) interventions targeting post-cessation weight gain on weight change and smoking cessation; (2) interventions designed to aid smoking cessation that may also plausibly affect weight on post-cessation weight change. Part 1 - We searched the Cochrane Tobacco Addiction Group's Specialized Register and CENTRAL in September 2011. Part 2 - In addition we searched the included studies in the following "parent" Cochrane reviews: nicotine replacement therapy (NRT), antidepressants, nicotine receptor partial agonists, cannabinoid type 1 receptor antagonists and exercise interventions for smoking cessation, published in Issue 9, 2011 of the Cochrane Library. Part 1 - We included trials of interventions that were targeted at post-cessation weight gain and had measured weight at any follow-up point and/or smoking cessation six or more months after quit day. Part 2 - We included trials that had been included in the selected parent Cochrane reviews if they had reported weight gain at any time point. We extracted data on baseline characteristics of the study population, intervention, outcome and study quality. Change in weight was expressed as the difference in weight change from baseline to follow-up between trial arms and was reported in abstinent smokers only. Abstinence from smoking was expressed as a risk ratio (RR). We used the most rigorous definition of abstinence available in each trial. Where appropriate, we performed meta-analysis using the inverse variance method for weight and the Mantel-Haenszel method for smoking, using a fixed-effect model. 
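The fixed-effect inverse-variance pooling named above for the weight outcomes can be sketched as follows: each study's mean difference (MD) is weighted by the inverse of its variance, which can be recovered from the reported 95% CI under a normal approximation. The two studies below are hypothetical, not data from the review.

```python
# Hedged sketch of fixed-effect inverse-variance meta-analysis of mean
# differences; study values are invented for illustration.
import math

def se_from_ci(lower, upper):
    """Standard error recovered from a 95% confidence interval (normal approximation)."""
    return (upper - lower) / (2 * 1.96)

def fixed_effect_md(studies):
    """studies: list of (md, ci_lower, ci_upper) tuples.

    Returns (pooled MD, 95% CI lower, 95% CI upper).
    """
    weights = [1 / se_from_ci(lo, hi) ** 2 for _, lo, hi in studies]
    pooled = sum(w * md for w, (md, _, _) in zip(weights, studies)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Two hypothetical studies reporting MD in kg with 95% CIs
pooled, lo, hi = fixed_effect_md([(-1.0, -1.6, -0.4), (-0.4, -1.2, 0.4)])
```

The narrower first CI carries more weight, so the pooled estimate sits closer to -1.0 kg than to -0.4 kg.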
Part 1: Some pharmacological interventions tested for limiting post-cessation weight gain (PCWG) resulted in a significant reduction in WG at the end of treatment (dexfenfluramine (mean difference (MD) -2.50 kg, 95% confidence interval (CI) -2.98 to -2.02, 1 study), phenylpropanolamine (MD -0.50 kg, 95% CI -0.80 to -0.20, N=3), naltrexone (MD -0.78 kg, 95% CI -1.52 to -0.05, N=2)). There was no evidence that treatment reduced weight at 6 or 12 months (m). No pharmacological intervention significantly affected smoking cessation rates. Weight management education only was associated with no reduction in PCWG at end of treatment (6 or 12m). However, these interventions significantly reduced abstinence at 12m (risk ratio (RR) 0.66, 95% CI 0.48 to 0.90, N=2). Personalised weight management support reduced PCWG at 12m (MD -2.58 kg, 95% CI -5.11 to -0.05, N=2) and was not associated with a significant reduction of abstinence at 12m (RR 0.74, 95% CI 0.39 to 1.43, N=2). A very low calorie diet (VLCD) significantly reduced PCWG at end of treatment (MD -3.70 kg, 95% CI -4.82 to -2.58, N=1), but not significantly so at 12m (MD -1.30 kg, 95% CI -3.49 to 0.89, N=1). The VLCD increased chances of abstinence at 12m (RR 1.73, 95% CI 1.10 to 2.73, N=1). There was no evidence that cognitive behavioural therapy to allay concern about weight gain (CBT) reduced PCWG, but there was some evidence of increased PCWG at 6m (MD 0.74, 95% CI 0.24 to 1.24). It was associated with improved abstinence at 6m (RR 1.83, 95% CI 1.07 to 3.13, N=2) but not at 12m (RR 1.25, 95% CI 0.83 to 1.86, N=2). 
However, there was significant statistical heterogeneity. Part 2: We found no evidence that exercise interventions significantly reduced PCWG at end of treatment (MD -0.25 kg, 95% CI -0.78 to 0.29, N=4); however, a significant reduction was found at 12m (MD -2.07 kg, 95% CI -3.78 to -0.36, N=3). Both bupropion and fluoxetine limited PCWG at the end of treatment (bupropion MD -1.12 kg, 95% CI -1.47 to -0.77, N=7; fluoxetine MD -0.99 kg, 95% CI -1.36 to -0.61, N=2). There was no evidence that the effect persisted at 6m (bupropion MD -0.58 kg, 95% CI -2.16 to 1.00, N=4; fluoxetine MD -0.01 kg, 95% CI -1.11 to 1.10, N=2) or 12m (bupropion MD -0.38 kg, 95% CI -2.00 to 1.24, N=4). There were no data on WG at 12m for fluoxetine. Overall, treatment with NRT attenuated PCWG at the end of treatment (MD -0.69 kg, 95% CI -0.88 to -0.51, N=19), with no strong evidence that the effect differed for the different forms of NRT. There was evidence of significant statistical heterogeneity caused by one study which reported a 4.3 kg reduction in PCWG due to NRT. With this study removed, the difference in weight change at end of treatment was -0.45 kg (95% CI -0.66 to -0.27, N=18). There was no evidence of an effect on PCWG at 12m (MD -0.42 kg, 95% CI -0.92 to 0.08, N=15). We found evidence that varenicline significantly reduced PCWG at end of treatment (MD -0.41 kg, 95% CI -0.63 to -0.19, N=11), but this effect was not maintained at 6 or 12m. Three studies compared the effect of bupropion to varenicline. Participants taking bupropion gained significantly less weight at the end of treatment (MD -0.51 kg, 95% CI -0.93 to -0.09, N=3). Direct comparison showed no significant difference in PCWG between varenicline and NRT. Although some pharmacotherapies tested to limit PCWG show evidence of short-term success, other problems with them and the lack of data on long-term efficacy limit their use. Weight management education only is not effective and may reduce abstinence. 
Personalised weight management support may be effective and not reduce abstinence, but there are too few data to be sure. One study showed a VLCD increased abstinence but did not prevent WG in the longer term. CBT to accept WG did not limit PCWG and may not promote abstinence in the long term. Exercise interventions significantly reduced weight in the long term, but not the short term. More studies are needed to clarify whether this is an effect of treatment or a chance finding. Bupropion, fluoxetine, NRT and varenicline reduce PCWG while using the medication. Although this effect was not maintained one year after stopping smoking, the evidence is insufficient to exclude a modest long-term effect. The data are not sufficient to make strong clinical recommendations for effective programmes to prevent weight gain after cessation.
Reducing the transmission of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a global priority. Contact tracing identifies people who were recently in contact with an infected individual, in order to isolate them and reduce further transmission. Digital technology could be implemented to augment and accelerate manual contact tracing. Digital tools for contact tracing may be grouped into three areas: 1) outbreak response; 2) proximity tracing; and 3) symptom tracking. We conducted a rapid review on the effectiveness of digital solutions to contact tracing during infectious disease outbreaks. To assess the benefits, harms, and acceptability of personal digital contact tracing solutions for identifying contacts of an identified positive case of an infectious disease. An information specialist searched the literature from 1 January 2000 to 5 May 2020 in CENTRAL, MEDLINE, and Embase. Additionally, we screened the Cochrane COVID-19 Study Register. We included randomised controlled trials (RCTs), cluster-RCTs, quasi-RCTs, cohort studies, cross-sectional studies and modelling studies, in general populations. We preferentially included studies of contact tracing during infectious disease outbreaks (including COVID-19, Ebola, tuberculosis, severe acute respiratory syndrome virus, and Middle East respiratory syndrome) as direct evidence, but considered comparative studies of contact tracing outside an outbreak as indirect evidence. The digital solutions varied but typically included software (or firmware) for users to install on their devices or to be uploaded to devices provided by governments or third parties. Control measures included traditional or manual contact tracing, self-reported diaries and surveys, interviews, other standard methods for determining close contacts, and other technologies compared to digital solutions (e.g. electronic medical records). 
Two review authors independently screened records and all potentially relevant full-text publications. One review author extracted data for 50% of the included studies, another extracted data for the remaining 50%; the second review author checked all the extracted data. One review author assessed the quality of included studies and a second checked the assessments. Our outcomes were identification of secondary cases and close contacts, time to complete contact tracing, acceptability and accessibility issues, privacy and safety concerns, and any other ethical issue identified. Whereas modelling studies predict estimates of the effects of different contact tracing solutions on outcomes of interest, cohort studies provide empirically measured estimates of those effects. We used GRADE-CERQual to describe certainty of evidence from qualitative data and GRADE for modelling and cohort studies. We identified six cohort studies reporting quantitative data and six modelling studies reporting simulations of digital solutions for contact tracing. Two cohort studies also provided qualitative data. Three cohort studies looked at contact tracing during an outbreak, whilst three emulated an outbreak in non-outbreak settings (schools). Of the six modelling studies, four evaluated digital solutions for contact tracing in simulated COVID-19 scenarios, while two simulated close contacts in non-specific outbreak settings. Modelling studies: Two modelling studies provided low-certainty evidence of a reduction in secondary cases using digital contact tracing (measured as the average number of secondary cases per index case, the effective reproduction number R_eff). One study estimated an 18% reduction in R_eff with digital contact tracing compared to self-isolation alone, and a 35% reduction with manual contact tracing. 
Another found a reduction in R_eff for digital contact tracing compared to self-isolation alone (26% reduction) and a reduction in R_eff for manual contact tracing compared to self-isolation alone (53% reduction). However, the certainty of evidence was reduced by unclear specifications of their models, and by assumptions about the effectiveness of manual contact tracing (assumed 95% to 100% of contacts traced) and the proportion of the population who would have the app (53%). Cohort studies: Two cohort studies provided very low-certainty evidence of a benefit of digital over manual contact tracing. During an Ebola outbreak, contact tracers using an app found twice as many close contacts per case on average than those using paper forms. Similarly, after a pertussis outbreak in a US hospital, researchers found that radio-frequency identification identified 45 close contacts but searches of electronic medical records found 13. The certainty of evidence was reduced by concerns about imprecision, and serious risk of bias due to the inability of contact tracing study designs to identify the true number of close contacts. One cohort study provided very low-certainty evidence that an app could reduce the time to complete a set of close contacts. The certainty of evidence for this outcome was affected by imprecision and serious risk of bias. Contact tracing teams reported that digital data entry and management systems were faster to use than paper systems and possibly less prone to data loss. Two studies from lower- or middle-income countries reported that contact tracing teams found digital systems simpler to use and generally preferred them over paper systems; they saved personnel time, reportedly improved accuracy with large data sets, and were easier to transport compared with paper forms. However, personnel faced increased costs and internet access problems with digital compared to paper systems.
Devices in the cohort studies appeared to preserve the privacy of exposed or diagnosed users from their contacts. However, there were risks of privacy breaches from snoopers if linkage attacks occurred, particularly for wearable devices. The effectiveness of digital solutions is largely unproven as there are very few published data in real-world outbreak settings. Modelling studies provide low-certainty evidence of a reduction in secondary cases if digital contact tracing is used together with other public health measures such as self-isolation. Cohort studies provide very low-certainty evidence that digital contact tracing may produce more reliable counts of contacts and reduce time to complete contact tracing. Digital solutions may have equity implications for at-risk populations with poor internet access and poor access to digital technology. Stronger primary research on the effectiveness of contact tracing technologies is needed, including research into use of digital solutions in conjunction with manual systems, as digital solutions are unlikely to be used alone in real-world settings. Future studies should consider access to and acceptability of digital solutions, and the resultant impact on equity. Studies should also make acceptability and uptake a primary research question, as privacy concerns can prevent uptake and effectiveness of these technologies.
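The modelling results above are reported as relative reductions; as a purely hypothetical illustration (the baseline R_eff value below is an assumption for the example, not a figure from the review), the percentage reductions can be converted into absolute reproduction numbers:

```python
def reff_after_reduction(baseline_reff: float, percent_reduction: float) -> float:
    """Apply a reported percentage reduction to an assumed baseline R_eff."""
    return baseline_reff * (1 - percent_reduction / 100)

# Illustrative baseline R_eff under self-isolation alone (assumed value).
baseline = 2.0
digital_tracing = reff_after_reduction(baseline, 18)  # 18% reduction -> 1.64
manual_tracing = reff_after_reduction(baseline, 35)   # 35% reduction -> 1.30
```

With any assumed baseline, the same arithmetic shows why the modelled manual tracing (35% or 53% reductions) outperforms the modelled digital tracing (18% or 26%) in these studies.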
Pancreatitis is an infrequent complication among patients with cystic fibrosis (CF). It has mainly been reported for patients with pancreatic sufficiency (PS). Previous studies involved only a small number of patients because they contained data from single centers. The aim of this study was to evaluate the incidence of pancreatitis in a large heterogeneous CF population, to determine the relationship with pancreatic function, and to assess whether pancreatitis is associated with specific CFTR mutations. Physicians caring for patients with CF were approached through the CF Thematic Network or through the European Cystic Fibrosis Foundation newsletter. They were asked to provide data on their current patient cohort through a standardized questionnaire and to report how many patients they had ever diagnosed as having pancreatitis. A detailed questionnaire was then sent, to be filled out for all of their patients for whom pancreatitis had ever occurred. We defined pancreatitis as an episode of acute abdominal pain associated with serum amylase levels elevated above the ranges established by each participating center's laboratory. General clinical data included age, genotype, age at diagnosis of CF, sweat chloride concentrations, pancreatic status, biometric findings, and respiratory status. CFTR mutations were also reported according to the functional classification of classes I to V. Patients were categorized as having PS, pancreatic insufficiency (PI), or PI after an initial period of PS. PI was defined as a 72-hour stool fat loss of >7 g/day, fat absorption of <93%, or fecal elastase levels of <200 µg/g feces. Clinical data on pancreatitis included age at the first episode, amylase and lipase levels, possible triggers, and occurrence of relapses or complications. A total of 10071 patients with CF, from 29 different countries, who were undergoing follow-up monitoring in 2002 were surveyed.
Among this group, pancreatitis had ever been diagnosed for 125 patients (1.24%; 95% confidence interval [CI]: 1.02-1.46%). There was variability in the reported rates of pancreatitis for different countries. Twenty-six centers in 15 different countries sent detailed clinical data on their patients with pancreatitis and on their whole CF clinic. This involved 3306 patients with CF and 61 cases of pancreatitis, leading to a prevalence of 1.84% (95% CI: 1.39-2.30%). The mean age of the patients who had ever had pancreatitis was 24.4 years (SD: 10.8 years). The first episode of pancreatitis occurred at a mean age of 19.9 years (SD: 9.6 years). The median serum amylase level at the time of pancreatitis was 746 IU/L (interquartile range: 319-1630 IU/L), and the median lipase level was 577 IU/L (interquartile range: 229-1650 IU/L). The majority of patients had PS (34 of 61 patients, 56%; 95% CI: 43-68%). Pancreatitis occurred for 15 patients with PI (25%; 95% CI: 14-35%). Eight patients developed PI after initial PS. The occurrence of pancreatitis among patients with PS was 34 cases per 331 patients, i.e., 10.27% (95% CI: 7.00-13.55%); the occurrence of pancreatitis among patients with PI was 15 cases per 2971 patients, i.e., 0.5% (95% CI: 0.25-0.76%). The mean age (in 2002) of the CF cohort with pancreatitis did not differ between the PS and PI subgroups. The forced expiratory volume in 1 second was significantly lower among the patients with PI than among the patients with PS, i.e., 65% (SEM: 7%) vs 79% (SEM: 4%). The mean age at the occurrence of pancreatitis and the amylase and lipase levels during pancreatitis were not different for patients with pancreatitis and PI versus PS. In the group with PS, 31 of 34 patients carried at least 1 class IV or V CFTR mutation. In the groups with PI and PI after PS, 5 of 15 patients and 3 of 8 patients, respectively, carried 2 class I, II, or III CFTR mutations. Relapses and/or evolution to chronic pancreatitis occurred for 42 patients.
Pancreatitis preceded the diagnosis of CF in 18 of 61 cases. These patients were significantly older than the rest of the cohort, i.e., age of 28.4 years (SEM: 3.4 years) vs 22.7 years (SEM: 1.3 years). Their median age at the diagnosis of CF was also significantly greater, i.e., 21.5 years (interquartile range: 11.9-31 years) vs 7.6 years (interquartile range: 0.4-17.0 years). However, the ages at the occurrence of pancreatitis were similar, i.e., 21.0 years (SEM: 3.0 years) vs 19.5 years (SEM: 1.2 years). This study of 10071 patients with CF from 29 different countries revealed an estimated overall occurrence of pancreatitis among patients with CF of 1.24% (95% CI: 1.02-1.46%). The incidence of pancreatitis was much higher among patients with PS. However, pancreatitis was also reported for 15 patients with PI from 11 centers in 9 different countries. A correct diagnosis of pancreatitis for the reported patients with PI was supported by amylase and lipase levels increased above 500 IU/L, similar to those for patients with PS and pancreatitis. A correct diagnosis of PI for these patients with pancreatitis was supported by the adequacy of the methods used. We chose the cutoff values used to distinguish between patients with PI and control subjects without gastrointestinal disease. For one half of the patients, the diagnosis of PI was established on the basis of low levels of stool elastase (mean: 97 µg/g stool). With a cutoff value of 200 µg/g stool, this noninvasive test has high sensitivity (>95%) and high specificity (>90%) to differentiate patients with PI from control subjects with normal pancreatic function. For the other one half of the patients with PI in the cohort, the pancreatic status was determined on the basis of the 3-day fecal fat balance, with the widely used cutoff value of >7 g of fat loss per day. The most likely reason for pancreatitis occurring among patients with PI is that some residual pancreatic tissue is present among these patients.
Pancreatitis is a rare complication among patients with CF. It occurred for 1.24% (95% CI: 1.02-1.46%) of a large CF cohort. Pancreatitis occurs mainly during adolescence and young adulthood. It is much more common among patients with CF and PS (10.3%), but it can occur among patients with PI (0.5%). Pancreatitis can be the first manifestation of CF. Pancreatitis was reported for patients carrying a wide range of mutations.
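The reported prevalence and its interval can be reproduced with a normal-approximation (Wald) confidence interval; a minimal sketch in Python (the choice of the Wald method is an assumption inferred from the reported figures, while the counts 125/10071 are from the study):

```python
import math

def prevalence_ci(cases: int, n: int, z: float = 1.96) -> tuple:
    """Point prevalence with a normal-approximation (Wald) 95% CI."""
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of a binomial proportion
    return p, p - z * se, p + z * se

# 125 of 10071 surveyed CF patients had ever been diagnosed with pancreatitis.
p, low, high = prevalence_ci(125, 10071)
# p ~ 1.24%, CI ~ 1.02%-1.46%, matching the reported overall occurrence
```

The same function applied to the PS subgroup (34 of 331) reproduces the reported 10.27% (95% CI: 7.00-13.55%).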
An excessive amount of adipose tissue in children and adolescents, and simple obesity in particular, constitutes a growing health problem throughout the world. The adverse health effects of obesity in children justify the need to look for efficient treatments, among them dietary treatment. THE AIM OF THE STUDY was to examine the effectiveness of dietary treatment in children with simple obesity on the basis of a thorough analysis of their state of nutrition, method of nutrition and eating habits, and the impact of other environmental factors. Four research hypotheses were formulated: 1. simple obesity of children is influenced by selected environmental factors such as parents' level of education, familial inclination to obesity, and health habits, 2. a programme of dietary treatment chosen and accepted by the child and/or its mother/parents, in the form of a low-energy diet with elements of low glycemic index, results in loss of body mass in children, 3. the implemented dietary treatment translates into the modification of basic anthropometric features--body mass, body height, thickness of skin and adipose folds on the arm, below the shoulder blade (scapula), and on the abdomen, as well as arm circumference--and anthropometric parameters of the examined children--body mass index (BMI), waist-hip ratio, body fat content, 4. the implemented dietary treatment has an impact on the modification of certain biochemical indicators--the lipid profile of children with increased indicators of lipid metabolism. The research concerned 236 children living in the Mazowsze region with diagnosed simple obesity (relative body mass index, rel BMI ≥ 20%), directed to the Gastroenterological and Endocrinological Unit of the Institute of Mother and Child, after the children and/or their mothers/parents had accepted participating in a ten-week-long research programme.
The state of nutrition was evaluated on the basis of the examination of 8 basic features and 5 anthropometric parameters and basic biochemical indicators of the metabolism of carbohydrates and fats, before and after the beginning of dietary treatment. The assessment of the method of nutrition, including eating habits, proportions of food products in the food ration, and nutritional value of daily food rations, was conducted on the basis of 10-14-day records of the child's diet before the implementation of dietary treatment and on the basis of randomly selected 3-day records of the child's diet from the diet book after the dietary treatment was introduced. The environmental data were collected on the basis of a questionnaire constructed for this study. The main risk factors for simple obesity in the examined children (n=236) aged 3-15 yrs were familial and environmental conditions. A significant correlation was found between the children's obesity, expressed by a normalized body mass index (BMI z-score) unrelated to age and sex, and the mother's level of education and father's obesity (χ² test, p<0.05). A positive correlation was demonstrated between the normalized relative body mass index (BMI z-score) and the children's anthropometric parameters--thickness of skin and adipose folds on the arm, below the shoulder blade (scapula), on the abdomen, and their sum, arm circumference, waist-hip ratio, and body fat content--and the children's parents' body mass indices (father's BMI, mother's BMI). In boys with simple obesity a tendency to central obesity was observed from early childhood. In the examined group of children no distortions of carbohydrate metabolism were observed (correct fasting levels of glucose), while in children with obesity irregularities of fat metabolism were noted.
The implemented dietary treatment (a low-energy diet with elements of low glycemic index) had a significant impact on the improvement of lipid metabolism in all children in whom irregularities of fat metabolism were noted. Modification of the diet of children aged 3-6 by implementing dietary recommendations, including an increased frequency of meals and the choice of products with low glycemic index, did not have a significant impact on the decrease of the body mass index in 95% of examined children. A considerable number of children aged 3-6 (n=12) continued to eat only three meals a day, and their model of nutrition, including the selection of products, was not significantly modified. The low-energy diet with elements of low glycemic index introduced in children of school age (7-15 years) with simple obesity positively influenced the decrease of the analyzed features and parameters (p<0.0001). During dietary treatment a statistically significant decrease of the children's body mass was observed, as well as a decrease of the thickness of skin and adipose folds on the arm, below the shoulder blade (scapula), and on the abdomen, and a decrease of arm circumference and body fat content. The change in the energy content of the daily food ration, in the amount of consumed carbohydrates and products from the group of sugar and sweets, cereal foodstuffs, and fat and products from the group of other fats was positively correlated with body mass loss, expressed as the difference between z-score BMI before and after the dietary treatment. The modification of eating habits--increased frequency of meals and reduction or elimination of eating between meals--during the nutrition intervention was not significantly linked to the change of normalized body mass index in the examined children. Only the frequency of eating sweets was related to the change of z-score BMI (p<0.05). The implemented dietary treatment in obese children aged 7-15 yrs significantly influenced body mass loss.
In children (n=38/236) with lipid metabolism abnormalities, the low-energy diet with elements of low glycemic index had a favorable impact on the lipid profile. The increased levels of total cholesterol, LDL cholesterol, and triglycerides returned to normal. 1. Simple obesity in children aged 3-15 yrs is connected with familial and environmental factors, including incorrect eating habits. 2. Dietary treatment consisting in lowering the energetic value of the diet through reduced fat consumption and quantitative and qualitative changes in carbohydrate consumption decreased the children's obesity, and was more effective in the older age group (7-15 yrs). Dietary treatment normalizes the lipid profile in children. 3. Significant body mass loss has been observed in children in whose diet the amount of proteins and their share in the total energy value only slightly differs from the level before the dietary treatment. The amount of proteins in the children's diet was within the range of physiological recommendations.
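The anthropometric indices used throughout this study reduce to simple formulas; a hypothetical sketch (the reference mean, SD, and median values below are placeholders for illustration, not the reference data used in the study):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight divided by height squared."""
    return weight_kg / height_m ** 2

def bmi_z_score(bmi_value: float, ref_mean: float, ref_sd: float) -> float:
    """Age- and sex-normalized BMI in SD units of a reference population."""
    return (bmi_value - ref_mean) / ref_sd

def relative_bmi_excess(bmi_value: float, ref_median: float) -> float:
    """Relative BMI expressed as percentage excess over the reference median."""
    return (bmi_value / ref_median - 1) * 100

# Placeholder reference values for a given age/sex group:
child_bmi = bmi(45.0, 1.40)                    # ~22.96 kg/m^2
z = bmi_z_score(child_bmi, 17.0, 2.5)          # ~2.4 SD above the reference mean
excess = relative_bmi_excess(child_bmi, 16.8)  # ~36.7% over the reference median
```

On this definition, the study's inclusion criterion (rel BMI exceeding 20%) corresponds to `relative_bmi_excess` returning a value of 20 or more.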
Allogeneic hematopoietic cell transplantation (HCT) represents a potentially curative treatment modality in a range of hematologic malignancies. High-dose myeloablative radio-chemotherapy has conventionally been used as part of the preparative regimen before HCT for two reasons: it has a profound immunosuppressive effect on the host, limiting the ability to reject the graft and it has substantial anti-tumor efficacy. Graft rejection is an example of alloreactivity as alloreactivity denotes the immunologic reactions that occur when tissues are transplanted between two individuals within the same species. If the immune system of the host is suppressed to a degree where rejection does not occur, the possibility arises that immunocompetent donor cells can attack the recipient tissues. This phenomenon is termed Graft-versus-Host disease (GVHD) if healthy tissues of the host are attacked and the Graft-versus-Tumor (GVT) effect if the malignant cells are the targets of the reaction. Clinical studies have shown that patients who develop GVHD have a lower risk of relapse of the malignant disease and that donor lymphocyte infusion can induce durable remissions in patients with relapsed disease following the transplant. These observations indicate that a GVT effect can be present following allogeneic HCT and that this effect, like GVHD, is an alloreactive response. The toxicity of HCT with myeloablative conditioning is considerable and this limits the use of this procedure to patients below 50-60 years of age. A large proportion of the patients with hematologic malignancies are older than 60 years at diagnosis and they are therefore not eligible for this treatment. During the last decade, conditioning regimens that are nonmyeloablative or have reduced intensity have been developed. 
The purpose of this development has been to extend the use of allogeneic HCT to older patients and to patients who, due to the malignant disease or to comorbidities, are unable to tolerate myeloablative conditioning. In allogeneic HCT with nonmyeloablative conditioning the curative potential relies entirely on the ability of the donor cells to elicit a GVT effect. Allogeneic HCT with nonmyeloablative conditioning was introduced at the Department of Hematology at Rigshospitalet in March 2000. The results of this treatment modality have been promising, and we and others have shown that durable remissions can be obtained in patients who are heavily pretreated. One of the goals of allogeneic HCT with nonmyeloablative conditioning was to perform both the actual transplant procedure and the clinical follow-up in the outpatient setting. In the first 30 patients transplanted at Rigshospitalet, we observed that the transplant itself and the first weeks post transplant could be performed as an outpatient procedure in a number of patients. However, all the patients were admitted, and the median duration of hospitalization was 44 days during the first year post transplant. Complications such as infections and GVHD were common causes of hospitalization, and studies from other centers have shown that infections, GVHD, and relapse of the malignancy are the major obstacles to a good result of allogeneic HCT with nonmyeloablative conditioning. One way to improve the results of this treatment would therefore be to reduce the incidence of GVHD without compromising the GVT effect. In HCT with nonmyeloablative conditioning the relatively well-defined antineoplastic effect of high-dose myeloablative radio-chemotherapy is substituted with the alloreactive effect of the donor cells. Because the level of alloreactivity varies widely between different donor-recipient pairs, the ability to monitor the level of alloreactivity following the transplant would be desirable.
To this end we have investigated the ability of different immunologic and molecular methods to quantify the level of ongoing alloreactivity following the transplant. By simultaneous determination of the fraction of T cells of donor origin (donor T-cell chimerism) and the total number of T cells in the peripheral blood, we observed that patients with a high number of donor CD8+ T cells on day +14 had a high risk of acute GVHD. Other studies have shown that the level of donor T-cell chimerism early after transplant predicts the development of acute GVHD. One way to exploit this knowledge could be to individualize the pharmacologic immunosuppression given post transplant. This immunosuppression is given primarily to prevent the development of GVHD but may also inhibit the GVT effect. In patients with a low risk of GVHD early tapering of the immunosuppression could be done, while the period of immunosuppression could be extended in patients with a high risk of GVHD. In this way the GVT effect could theoretically be optimized in each patient and the results of the treatment improved. In another study we used limiting dilution analysis to monitor the frequencies of interleukin (IL)-2-producing helper T cells responding to recipient or donor antigens following the transplant. The conclusion from this study was that both the technical performance and the data analysis were too complex for this method to be used as a routine clinical tool. However, the study showed that immune responses following HCT are subject to tight regulation and suggested that this regulation could be due to regulatory cell populations. Such regulatory cell populations have been used successfully in animal models to treat acute GVHD. The secretion of cytokines is an important aspect of immune responses. We analyzed cytokine gene expression in mononuclear cells obtained from patients and donors before and after HCT.
Patients with acute GVHD had lower levels of IL-10 mRNA on day +14 than patients who did not develop acute GVHD. Patients who experienced progression or relapse of the malignant disease were characterized by higher levels of IL-10 mRNA before the transplant than patients who remained in remission. The conclusion of this study was that IL-10 might be an inhibitor of alloreactivity following allogeneic HCT with nonmyeloablative conditioning. Allogeneic HCT with nonmyeloablative conditioning represents a major step forward in the treatment of patients with hematologic malignancies. However, many issues such as whom to transplant and when the transplant should be performed remain to be clarified. Large prospective studies, involving collaboration between centers, are needed to define the role of HCT with nonmyeloablative conditioning along with other treatment modalities. In addition, it is important to continue to elucidate the immunologic mechanisms that are responsible for GVHD and the GVT effect.
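The monitoring approach described, combining donor T-cell chimerism with the absolute T-cell count, amounts to a simple product; a minimal sketch (the function name and the example numbers are illustrative assumptions, not values from the studies):

```python
def donor_t_cells_per_ul(total_t_cells_per_ul: float,
                         donor_chimerism_pct: float) -> float:
    """Absolute number of donor-derived T cells per microlitre of blood,
    from the total T-cell count and the percentage of donor chimerism."""
    return total_t_cells_per_ul * donor_chimerism_pct / 100

# Illustrative day +14 values: 200 CD8+ T cells/uL with 90% donor chimerism
donor_cd8 = donor_t_cells_per_ul(200, 90)  # 180 donor CD8+ T cells/uL
```

Combining the two measurements this way is what allows an absolute donor CD8+ count, rather than the chimerism percentage alone, to be used as the early risk marker.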
In this review, we address the identification of residual chemical hazards in shellfish collected from the marine environment or in marketed shellfish. Assembled data on the concentrations of detected contaminants were compared with the appropriate regulatory and food safety standards. Moreover, data on human exposure and body burden levels were evaluated in the context of potential health risks. Shellfish farming is a common industry along European coasts. The primary types of shellfish consumed in France are oysters, mussels, king scallops, winkles, whelks, cockles, clams, and other scallops. Shellfish filter large volumes of water to extract their food and are excellent bioaccumulators. Metals and other pollutants that exist in the marine environment partition into particular organs, according to their individual chemical characteristics. In shellfish, accumulation often occurs in the digestive gland, which plays a role in assimilation, excretion, and detoxification of contaminants. The concentrations of chemical contaminants in bivalve mollusks are known to fluctuate with the seasons. European regulations limit the amount and type of contaminants that can appear in foodstuffs. Current European standards regulate the levels of microbiological agents, phycotoxins, and some chemical contaminants in food. Since 2006, these regulations have been compiled into the "Hygiene Package." Bivalve mollusks must comply with maximum levels of certain contaminants as follows: lead (1.5 mg kg-1), cadmium (1 mg kg-1), mercury (0.5 mg kg-1), dioxins (4 pg g-1 and dioxins + DL-PCBs 8 pg g-1), and benzo[a]pyrene (10 µg kg-1). In this review, we identify the levels of major contaminants that exist in shellfish (collected from the marine environment and/or in marketed shellfish).
The following contaminants are among those that are profiled: Cd, Pb, Hg, As, Ni, Cr, V, Mn, Cu, Zn, Co, Se, Mg, Mo, radionuclides, benzo[a]pyrene, PCBs, dioxins and furans, PAHs, TBT, HCB, dieldrin, DDT, lindane, triazines, PBDE, and chlorinated paraffins. In France, the results of contaminant monitoring have indicated that Cd, but not lead (< 0.26 mg kg-1) or mercury (< 0.003 mg kg-1), has had some non-compliances. Detections for PCBs and dioxins in shellfish were far below the regulatory thresholds in oysters (< 0.6 pg g-1), mussels (< 0.6 pg g-1), and king scallops (< 0.4 pg g-1). The benzo[a]pyrene concentration in marketed mussels and farmed shellfish does not exceed the regulatory threshold. Some monitoring data are available on shellfish flesh contamination for unregulated organic contaminants. Of about 100 existing organostannic compounds, residues of the mono-, di-, and tributyltin (MBT, DBT, and TBT) and mono-, di-, and triphenyltin (MPT, DPT, and TPT) compounds are the most frequently detected in fishery products. Octyltins are not found in fishery products. Some bivalve mollusks show arsenic levels up to 15.8 mg kg-1. It seems that the levels of arsenic in the environment derive less from bioaccumulation than from whether the arsenic is in an organic or an inorganic form. In regard to the other metals, levels of zinc and magnesium are higher in oysters than in mussels. To protect shellfish from chemical contamination, programs have been established to monitor water masses along coastal areas. The French monitoring network (ROCCH) focuses on environmental matrices that accumulate contaminants. These include both biota and sediment. Example contaminants were studied in a French coastal lagoon (Arcachon Bay) and in an estuary (Bay of Seine), and these were used to illustrate the usefulness of the monitoring programs.
Twenty-one pesticidal and biocidal active substances were detected in the waters of Arcachon Bay during the summers from 1999 to 2003, at concentrations ranging from a few nanograms per liter to several hundred nanograms per liter. Most of the detected substances were herbicides, including some that are now banned. Organotin compounds have been detected in similarly semi-enclosed waters elsewhere (bays, estuaries, and harbors). However, the mean concentrations of cadmium, mercury, lead, and benzo[a]pyrene, in transplanted mussels, were below the regulatory limits. In 2007, the mean daily consumption of shellfish in the general French population was estimated to be 4.5 g in adults; however, a wide variation occurs by region and season (INCA 2 study). Tabulated as a proportion of the diet, shellfish consumption represents only 0.16% of overall solid food intake. However, the INCA 2 survey was not well suited to estimating shellfish consumption because of the small number of shellfish consumers sampled. In contrast, the mean consumption rate of bivalve mollusks among adult high consumers of fish and seafood products, i.e., adults who eat fish or seafood at least twice a week, was estimated to be 153 g week-1 (8 kg yr-1). The highest mean consumption is for king scallops (39 g week-1), followed by oysters (34 g week-1) and mussels (22 g week-1).
Thus, for high seafood consumers, the contribution of shellfish to inorganic contaminant levels is 1-10% of the TWI or PTWI for Cd, MeHg, and Sn (up to 19% for Sn), and the arsenic body burden is higher for 22% of the individuals studied. The human health risks associated with consuming chemical contaminants in shellfish are difficult to assess for several reasons: effects may only surface after long-term exposure (chronic risk), exposures may be discontinuous, and contamination may derive from multiple sources (food, air, occupational exposure, etc.). Therefore, it is not possible to attribute a high body burden specifically to shellfish consumption even if seafood is a major dietary contributor of any contaminant, e.g., arsenic and mercury. The data assembled in this review provide the arguments for maintaining the chemical contaminant monitoring programs for shellfish. Moreover, the results presented herein suggest that monitoring programs should be extended to other chemicals that are suspected of presenting a risk to consumers, as illustrated by the high concentration reported for arsenic (in urine) of high consumers of seafood products from the CALIPSO study. In addition, the research conducted in shellfish-farming areas of Arcachon Bay highlights the need to monitor TBT and PAH contamination levels to ensure that these chemical pollutants do not migrate from the harbor to oyster farms. Finally, we have concluded that shellfish contamination from seawater offers a rather low risk to the general French population, because shellfish do not constitute a major contributor to dietary exposure of chemical contaminants. Notwithstanding, consumer vigilance is necessary among regular shellfish consumers, and especially for those residing in fishing communities, for pregnant and breast-feeding women, and for very young children.
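The contribution of shellfish to a tolerable weekly intake (TWI) follows from the consumption rate, the contaminant concentration, and the consumer's body weight; a hedged sketch in Python (the cadmium concentration, body weight, and TWI value below are illustrative assumptions, not figures from the review):

```python
def pct_of_twi(conc_mg_per_kg: float, weekly_intake_g: float,
               body_weight_kg: float, twi_ug_per_kg_bw: float) -> float:
    """Weekly contaminant dose from shellfish as a percentage of the TWI."""
    dose_ug = conc_mg_per_kg * weekly_intake_g  # 1 mg/kg equals 1 ug/g
    dose_per_kg_bw = dose_ug / body_weight_kg   # ug per kg body weight per week
    return dose_per_kg_bw / twi_ug_per_kg_bw * 100

# Illustrative: Cd at an assumed 0.1 mg/kg in shellfish flesh, the reported
# 153 g/week high-consumer intake, a 70 kg adult, and an assumed TWI of
# 2.5 ug/kg bw/week -> roughly 8.7% of the TWI
share = pct_of_twi(0.1, 153, 70, 2.5)
```

Under these assumed inputs the result falls within the 1-10% range reported above for high seafood consumers; the same arithmetic shows how a concentration near the 1 mg kg-1 regulatory limit would push the contribution far higher.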
Dear Editor, Pitted keratolysis (PK), also known as keratosis plantaris sulcatum, is a non-inflammatory, bacterial, superficial cutaneous infection, characterized by many discrete superficial crateriform ''pits'' and erosions in the thickly keratinized skin of the weight-bearing regions of the soles of the feet (1). The disease often goes unnoticed by the patient, but when it is noticed it is because of the unbearable malodor and hyperhidrosis of the feet, which are socially unacceptable and cause great anxiety to many of the patients. PK occurs worldwide, with the incidence rates varying based on the environment and occupation. The prevalence of this condition does not differ significantly based on age, sex, or race. People who sweat profusely or wash excessively, who wear occlusive footwear, or are barefoot especially in hot and humid weather are extremely prone to this condition (2). Physicians commonly misdiagnose it as tinea pedis or plantar warts. Treatment is quite simple and straightforward, with an excellent expected outcome if treated properly. We report a case of a 32-year-old male patient with skin changes of approximately one-year duration diagnosed as plantar verrucae, who was referred to our Department for cryotherapy. The patient presented with asymptomatic, malodorous punched-out pits and erosions along with hyperkeratotic skin on the heel and metatarsal region of the plantar aspect of both feet. The arches, toes, and sides of the feet were spared (Figure 1). Except for these skin changes, the patient was healthy and denied any other medical issues. He was an athlete active in martial arts and had a history of sweating of feet and training barefoot on the tatami mat for extended periods of time. 
The diagnosis of PK was established based on the clinical findings (crateriform pitting and malodor), a negative KOH test for hyphae, and a history of prolonged sweating in addition to skin contact with tatami mats, which are often a source of infection if hygiene measures are not adequately implemented. Swabs could have helped identify the causative organisms but were not crucial for diagnosis and treatment. The patient was prescribed general measures to prevent excessive sweating (cotton socks, open footwear, and proper hygiene) and antiseptic potassium permanganate foot soaks, followed by clindamycin 1% and benzoyl peroxide 5% in a gel vehicle twice daily. At the one-month follow-up visit, the skin changes, hyperhidrosis, and malodor had entirely resolved (Figure 2). Pitted keratolysis is common among athletes (3,4). The manifestations of PK are due to a superficial cutaneous infection caused by several Gram-positive bacterial species, including Corynebacterium species, Kytococcus sedentarius, Dermatophilus congolensis, Actinomyces keratolytica, and Streptomyces, which proliferate and produce proteinases and sulfur-compound by-products under appropriately moist conditions (5-7). Proteinases digest the keratin and destroy the stratum corneum, producing the characteristic skin findings, while sulfur compounds (sulfides, thiols, and thioesters) are responsible for the malodor. Athletes and soldiers who wear occlusive footwear for prolonged periods, and even barefoot people who sweat extensively and spend time on wet surfaces, such as laborers, farmers, and marine workers, are more prone to this problem (3,4,8-11). Martial arts athletes are at greater risk of skin infections due to the constant physical contact that can lead to transmission of viral, bacterial, and fungal pathogens directly, but also indirectly through contact with the mat and the skin flora of another infected individual.
A national survey of the epidemiology of skin infections among US high school athletes conducted by Ashack et al. supported the prevalent theory that contact sports are associated with an increased risk of skin infections. In this study, wrestling had the highest skin infection rate, predominantly of bacterial origin (53.8%), followed by tinea (35.7%) and herpetic lesions (6.7%), which is consistent with other reports in the literature (12). Being barefoot on the tatami mat, in combination with excessive sweating and non-compliance with hygiene measures, makes martial arts athletes more susceptible to skin infections, including PK. The diagnosis is clinical, by means of visual examination and recognition of the characteristic odor. Dermoscopy can be useful, revealing abundant pits with well-marked walls that sometimes show the bacterial colonies (13). Cultures, if taken, show Gram-positive bacilli or coccobacilli. Because the diagnosis is easily made on clinical findings, biopsy of pitted keratolysis is rarely performed. Skin scraping is often performed to exclude tinea pedis, which is one of the main differential diagnoses, the others including verrucae, punctate palmoplantar keratoderma, keratolysis exfoliativa, circumscribed palmoplantar hypokeratosis, and basal cell nevus syndrome. If unrecognized and left untreated, the skin findings and smelly feet can last for many years. Unrecognized PK can also be mistreated with antifungals, or even with aggressive treatment modalities such as cryotherapy. Appropriate treatment includes keeping the feet dry with adequate treatment of hyperhidrosis, preventive measures, and topical antibiotic therapy. Topical forms of salicylic acid, sulfur, antibacterial soaps, neomycin, erythromycin, mupirocin, clindamycin and benzoyl peroxide, clotrimazole, imidazoles, and injectable botulinum toxin are all successful in the treatment and prevention of PK (14,15).
Topical antibiotics are the first line of medical treatment, among which fusidic acid, erythromycin 1% (solution or gel), mupirocin 2%, and clindamycin are the most recommended (14). As in our case, a fixed combination of two approved topical drugs, clindamycin 1%-benzoyl peroxide 5% gel, has already been demonstrated by Vlahovich et al. to be an excellent treatment option with high adherence and no side-effects (16). This combination shows a significantly greater effect owing to the bactericidal and keratolytic properties of benzoyl peroxide, and it also lowers the risk of resistance of the causative microorganisms to clindamycin. Skin infections are an important aspect of sports-related adverse events. Given the interdisciplinary nature of the problem, not only dermatologists but also family medicine doctors, sports medicine specialists, and occupational health doctors should be aware of the disease and should educate patients about its etiology, adequate prevention, and treatment. Athletes must enforce the disinfecting and sanitary cleaning of the tatami mats and other practice areas. Keeping up with these measures could significantly limit the spread of skin infections that can infect athletes indirectly, leading to significant morbidity, time lost from competition, and social anxiety as well.
SARS-CoV-2 infection shows high transmission among health professionals in Spain (12-15% infected). Currently there is no accepted chemoprophylaxis, but hydroxychloroquine (HDQ) is known to inhibit the coronavirus in vitro. Our hypothesis is that oral administration of hydroxychloroquine to healthcare professionals can reduce the incidence and prevalence of infection as well as its severity in this group. Design: Prospective, single center, double blind, randomised, controlled trial (RCT). Adult healthcare professionals (18-65 years) working in areas of high exposure and high risk of transmission of SARS-CoV-2 (COVID areas, Intensive Care Units [ICUs], Emergency, Anesthesia, and all those performing aerosol-generating procedures) will be included. Exclusion criteria include previous infection with SARS-CoV-2 (positive SARS-CoV-2 PCR or IgG serology), pregnancy or lactation, any contraindication to hydroxychloroquine, or evidence of unstable or clinically significant systemic disease. Patients will be randomized (1:1) to receive once-daily oral hydroxychloroquine 200 mg for two months (HC group) or placebo (P group) in addition to the protective measures appropriate to the level of exposure established by the hospital. A serological evaluation will be carried out every 15 days, with PCR in case of seroconversion, symptoms, or risk exposure. The primary outcome is the percentage of subjects presenting infection (seroconversion and/or positive PCR) with the SARS-CoV-2 virus during the observation period. Additionally, both the percentage of subjects in each group presenting pneumonia with severity criteria (CURB-65 ≥2) and that of subjects requiring admission to the ICU will be determined. While awaiting a vaccine, hygiene measures, social distancing, and personal protective equipment are the only primary prophylaxis measures against SARS-CoV-2, but they have not been sufficient to protect our healthcare professionals.
Some evidence of the in vitro efficacy of hydroxychloroquine against this virus is known, along with some clinical data that would support the study of this drug in the chemoprophylaxis of infection. However, there are still no data from controlled clinical trials in this regard. If our hypothesis is confirmed, hydroxychloroquine can help professionals fight this infection with more guarantees. This is a single-center study that will be carried out at the Marqués de Valdecilla University Hospital. 450 health professionals working at the Hospital Universitario Marqués de Valdecilla in areas of high exposure and high risk of transmission of SARS-CoV-2 (COVID hospital areas, Intensive Care Unit, Emergency, Anesthesia, and all those performing aerosol-generating procedures) will be included. Inclusion criteria: 1) Health professionals aged between 18 and 65 years (inclusive) at the time of the first screening visit; 2) They must provide signed written informed consent and agree to comply with the study protocol; 3) Active work in high exposure areas during the last two weeks and during the following weeks.
Exclusion criteria: 1) Previous infection with SARS-CoV-2 (positive coronavirus PCR, or positive serology with negative SARS-CoV-2 PCR and absence of symptoms); 2) Current treatment with hydroxychloroquine or chloroquine; 3) Hypersensitivity, allergy, or any contraindication to taking hydroxychloroquine per the technical sheet; 4) Previous or current treatment with tamoxifen or raloxifene; 5) Previous eye disease, especially maculopathy; 6) Known heart failure (Grade III to IV of the New York Heart Association classification) or prolonged QTc; 7) Any type of cancer (except basal cell) in the last 5 years; 8) Refusal to give informed consent; 9) Evidence of any other unstable or clinically significant untreated immune, endocrine, hematological, gastrointestinal, neurological, neoplastic, or psychiatric illness; 10) Positive antibodies to the human immunodeficiency virus; 11) Significant kidney or liver disease; 12) Pregnancy or lactation. Two groups will be analyzed with a 1:1 randomization ratio. 1) Intervention (n = 225): one 200 mg hydroxychloroquine sulfate coated tablet once daily for two months. 2) Comparator (control group) (n = 225): one hydroxychloroquine placebo tablet (identical to that of the drug) once daily for two months. MAIN OUTCOMES: The primary outcome of this study will be to evaluate: the number and percentage of healthcare personnel presenting symptomatic and asymptomatic infection (see "Diagnosis of SARS-CoV-2 infection" below) with the SARS-CoV-2 virus during the study observation period (8 weeks) in both treatment arms; the number and percentage of healthcare personnel in each group presenting pneumonia with severity criteria (CURB-65 ≥2); and the number and percentage of healthcare personnel requiring admission to the Intensive Care Unit (ICU) in both treatment arms. DIAGNOSIS OF SARS-CoV-2 INFECTION: Determination of IgA, IgM, and IgG antibodies against SARS-CoV-2 using the Anti-SARS-CoV-2 ELISA kit (EUROIMMUN Medizinische Labordiagnostika AG, Germany) every two weeks.
In cases of seroconversion, a SARS-CoV-2 PCR will be performed to rule out or confirm an active infection (one-step RT-PCR: RT performed with mastermix (Takara) and IDT probes, following the protocol published and validated by the CDC) and to evaluate COVID-19 in case of SARS-CoV-2 infection. RANDOMISATION: Participants will be allocated to the intervention and comparator groups according to a balanced randomization scheme (1:1). The assignment will be made through a computer-generated numeric sequence for all participants. BLINDING (MASKING): Both participants and the investigators responsible for recruiting and monitoring participants will be blind to the assigned arm. Taking into account the current high prevalence of infection in healthcare personnel in Spain (up to 15%), to detect a difference equal to or greater than 8% in the percentage estimates through a two-tailed 95% CI, with a statistical power of 80% and a dropout rate of 5%, a total of 450 participants will need to be included (250 in each arm). The protocol approved by the health authorities in Spain (Spanish Agency for Medicines and Health Products, "AEMPS") and the Ethics and Research Committee of Cantabria (CEIm Cantabria) corresponds to version 1.1 of April 2, 2020. Currently, recruitment has not yet started, with the start scheduled for the second week of May 2020. EudraCT number: 2020-001704-42 (registered on 29 March 2020). FULL PROTOCOL: The full protocol is attached as an additional file, accessible from the Trials website (Additional file 1). In the interest of expediting dissemination of this material, the familiar formatting has been eliminated; this Letter serves as a summary of the key elements of the full protocol. The study protocol has been reported in accordance with the Standard Protocol Items: Recommendations for Clinical Interventional Trials (SPIRIT) guidelines (Additional file 2).
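The sample-size statement above can be reproduced with a standard two-proportion power calculation. The sketch below is a minimal, stdlib-only Python version; the assumed baseline proportions (15% vs 7%, i.e. an 8% absolute difference from the quoted prevalence of up to 15%) are an illustration, not values stated in the protocol.

```python
from math import ceil, sqrt
from statistics import NormalDist

def two_proportion_n(p1, p2, alpha=0.05, power=0.80, dropout=0.0):
    """Per-arm sample size to detect p1 vs p2 (normal approximation),
    optionally inflated for an anticipated dropout fraction."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-tailed critical value
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return ceil(ceil(n) / (1 - dropout))

# Hypothetical inputs: 15% vs 7% infection, 95% two-tailed CI, 80% power
n_arm = two_proportion_n(0.15, 0.07, dropout=0.05)
```

With these hypothetical inputs the formula gives 239 participants per arm, or 252 after inflating for 5% dropout, in the region of the 250 per arm quoted above.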
Little is known about the independent long-term effect on growth of exposure to maternal human immunodeficiency virus (HIV) infection. Growth patterns in uninfected children who are born to infected mothers have not been described in detail previously beyond early childhood, and patterns over age for infected and uninfected children have not been based on appropriate general population standards. In vertically HIV-infected children, poor growth has been suggested to be an early marker of infection or progression of disease. However, whether growth faltering is an independent HIV-related symptom or caused indirectly by other HIV clinical symptoms requires clarification. This information is needed to inform the debate on a possible effect of antiretroviral combination therapy on the height of infected children and would provide evidence for the use of specific interventions to improve height. The objective of this study was to describe growth (height and weight) patterns in infected and uninfected children who are born to HIV-infected mothers with respect to standards from a general population and to assess age-related differences in height and weight by infection status, allowing for birth weight, gestational age, gender, HIV-related clinical status, and antiretroviral therapy (ART). Since 1987, children who were born to HIV-infected mothers in 11 centers in 8 European countries were enrolled at birth in the European Collaborative Study and followed prospectively according to a standard protocol. Height and weight were measured at every visit, scheduled at birth; 3 and 6 weeks; 3, 6, 9, 12, 15, 18, and 24 months; and every 6 months thereafter. Serial measurements of height and weight from birth to 10 years of age of 1403 uninfected and 184 infected children were assessed. We fitted linear mixed effects models allowing for variance changes over age and within-subject correlation using fractional polynomials and natural cubic splines. 
Growth patterns were compared with British 1990 growth standards and by infection status. Of the 1587 children enrolled, 810 were male and 777 were female; 1403 were not infected (681 boys, 722 girls), and 184 were infected (88 boys, 96 girls). Neither height nor weight was associated significantly with the main effects of HIV infection status at birth, but differences between infected and uninfected children increased with age. Uninfected children had normal growth patterns from early ages. Infected children were estimated to be significantly shorter and lighter than uninfected children with growth differences increasing with age. Differences in growth velocities between the infected and uninfected children increased after 2 years of age for height and after 4 years of age for weight and were more marked in the latter. Between 6 and 12 months, uninfected children grew an estimated 1.6% faster in height and 6.2% in weight than infected children; between ages 8 and 10 years, these figures were 16% and 44%, respectively. By 10 years, uninfected children were on average an estimated 7 kg heavier and 7.5 cm taller than infected children. Growth in uninfected children who were born before 1994, before the widespread use of ART prophylaxis to reduce vertical transmission, did not substantially differ from that of children who were born after 1994. To investigate whether the growth differences between infected and uninfected children were associated with HIV disease progression, we analyzed growth of infected children using the Centers for Disease Control and Prevention (CDC) clinical classification, in 3 groups: no symptoms, mild or moderate symptoms (A and B), and severe symptoms (C or death). Infected children with mild or serious symptoms lagged behind asymptomatic children in both height and weight, and these differences increased with age. 
Infected children who were born before availability of ART, before 1988, were more likely to reach a weight below the third centile for age than children who were born after 1994 when effective HIV treatment was widely available. Of the 184 infected children, 67 had been weighed and/or measured at least once while on combination (> or = 2 drugs) ART. Reflecting the longitudinal nature of the European Collaborative Study and the changing availability of HIV treatment, most of these measurements took place after 7 years of age, and therefore analyzing the possible effect of combination therapy on growth is difficult. The z scores for height and weight gain improved substantially in several children who received combination therapy regardless of their CDC clinical classification. To increase available information, we pooled all measurements according to CDC clinical classification and presence of combination therapy at the time of the observation. Weight and height significantly improved for severely ill children after combination therapy. Using data from this large prospective European study, we investigated in comparison with general British standards growth patterns in the first 10 years of life of HIV-infected and uninfected children who were born to HIV-infected mothers. The duration of follow-up of uninfected as well as infected children makes this a unique data set. We allowed for repeated measurements for each child and the increase of variability in height and weight with age. Growth faltering may be related to the social environment, and our finding that uninfected children have normal growth, which is unaffected by exposure to maternal HIV infection, is consistent with observations that in Europe the HIV-infected population is more like the general population and less socioeconomically disadvantaged than that in the United States. 
However, HIV-infected children grew considerably slower, and differences between infected and uninfected children increased with age. Growth patterns in asymptomatic infected children were similar to those with only mild or moderate symptoms. However, compared with these 2 groups combined, severely ill children had poorer growth at all ages. Although limited by the small number of children who received combination therapy, severely ill children may benefit from such therapy in terms of improvements in weight and, to a smaller extent, in height. Growth faltering, particularly stunting, may adversely affect a child's quality of life, especially once they reach adolescence, and this should be taken into account when making decisions about starting and changing ART. Additional research will help to elucidate the relationship between combination therapy and improved growth, in particular regarding different regimens and the best timing of initiation for optimizing growth of infected children.
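The height and weight z scores discussed above are computed against age- and sex-specific reference values; the British 1990 growth reference uses Cole's LMS method. A minimal sketch follows, where the L, M, S values shown are hypothetical placeholders for illustration, not actual UK90 reference entries:

```python
from math import log

def lms_z(x, L, M, S):
    """Cole's LMS z score: z = ((x/M)**L - 1) / (L*S), or ln(x/M)/S when L == 0.
    M is the reference median, S the coefficient of variation, L the Box-Cox power."""
    if L == 0:
        return log(x / M) / S
    return ((x / M) ** L - 1) / (L * S)

# Hypothetical reference entry for illustration: M = 138 cm, L = 1, S = 0.05
z_median = lms_z(138.0, 1.0, 138.0, 0.05)  # a child at the reference median scores z = 0
z_taller = lms_z(145.0, 1.0, 138.0, 0.05)  # about 1 SD above the median
```

With L = 1 the formula reduces to the familiar (x - M) / (M·S), i.e. a plain z score against the reference mean and SD.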
To provide an update to the "Surviving Sepsis Campaign Guidelines for Management of Severe Sepsis and Septic Shock," last published in 2008. A consensus committee of 68 international experts representing 30 international organizations was convened. Nominal groups were assembled at key international meetings (for those committee members attending the conference). A formal conflict of interest policy was developed at the onset of the process and enforced throughout. The entire guidelines process was conducted independent of any industry funding. A stand-alone meeting was held for all subgroup heads, co- and vice-chairs, and selected individuals. Teleconferences and electronic-based discussion among subgroups and among the entire committee served as an integral part of the development. The authors were advised to follow the principles of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system to guide assessment of quality of evidence from high (A) to very low (D) and to determine the strength of recommendations as strong (1) or weak (2). The potential drawbacks of making strong recommendations in the presence of low-quality evidence were emphasized. Some recommendations were ungraded (UG). Recommendations were classified into three groups: 1) those directly targeting severe sepsis; 2) those targeting general care of the critically ill patient and considered high priority in severe sepsis; and 3) pediatric considerations. 
Key recommendations and suggestions, listed by category, include: early quantitative resuscitation of the septic patient during the first 6 hrs after recognition (1C); blood cultures before antibiotic therapy (1C); imaging studies performed promptly to confirm a potential source of infection (UG); administration of broad-spectrum antimicrobial therapy within 1 hr of recognition of septic shock (1B) and severe sepsis without septic shock (1C) as the goal of therapy; reassessment of antimicrobial therapy daily for de-escalation, when appropriate (1B); infection source control with attention to the balance of risks and benefits of the chosen method within 12 hrs of diagnosis (1C); initial fluid resuscitation with crystalloid (1B) and consideration of the addition of albumin in patients who continue to require substantial amounts of crystalloid to maintain adequate mean arterial pressure (2C) and the avoidance of hetastarch formulations (1C); initial fluid challenge in patients with sepsis-induced tissue hypoperfusion and suspicion of hypovolemia to achieve a minimum of 30 mL/kg of crystalloids (more rapid administration and greater amounts of fluid may be needed in some patients) (1C); fluid challenge technique continued as long as hemodynamic improvement, as based on either dynamic or static variables (UG); norepinephrine as the first-choice vasopressor to maintain mean arterial pressure ≥ 65 mm Hg (1B); epinephrine when an additional agent is needed to maintain adequate blood pressure (2B); vasopressin (0.03 U/min) can be added to norepinephrine to either raise mean arterial pressure to target or to decrease the norepinephrine dose but should not be used as the initial vasopressor (UG); dopamine is not recommended except in highly selected circumstances (2C); dobutamine infusion administered or added to vasopressor in the presence of a) myocardial dysfunction as suggested by elevated cardiac filling pressures and low cardiac output, or b) ongoing signs of hypoperfusion
despite achieving adequate intravascular volume and adequate mean arterial pressure (1C); avoiding use of intravenous hydrocortisone in adult septic shock patients if adequate fluid resuscitation and vasopressor therapy are able to restore hemodynamic stability (2C); hemoglobin target of 7-9 g/dL in the absence of tissue hypoperfusion, ischemic coronary artery disease, or acute hemorrhage (1B); low tidal volume (1A) and limitation of inspiratory plateau pressure (1B) for acute respiratory distress syndrome (ARDS); application of at least a minimal amount of positive end-expiratory pressure (PEEP) in ARDS (1B); higher rather than lower level of PEEP for patients with sepsis-induced moderate or severe ARDS (2C); recruitment maneuvers in sepsis patients with severe refractory hypoxemia due to ARDS (2C); prone positioning in sepsis-induced ARDS patients with a PaO2/FIO2 ratio of ≤ 100 mm Hg in facilities that have experience with such practices (2C); head-of-bed elevation in mechanically ventilated patients unless contraindicated (1B); a conservative fluid strategy for patients with established ARDS who do not have evidence of tissue hypoperfusion (1C); protocols for weaning and sedation (1A); minimizing use of either intermittent bolus sedation or continuous infusion sedation targeting specific titration endpoints (1B); avoidance of neuromuscular blockers if possible in the septic patient without ARDS (1C); a short course of neuromuscular blocker (no longer than 48 hrs) for patients with early ARDS and a PaO2/FIO2 < 150 mm Hg (2C); a protocolized approach to blood glucose management commencing insulin dosing when two consecutive blood glucose levels are > 180 mg/dL, targeting an upper blood glucose ≤ 180 mg/dL (1A); equivalency of continuous veno-venous hemofiltration or intermittent hemodialysis (2B); prophylaxis for deep vein thrombosis (1B); use of stress ulcer prophylaxis to prevent upper gastrointestinal bleeding in patients with bleeding risk factors (1B); oral
or enteral (if necessary) feedings, as tolerated, rather than either complete fasting or provision of only intravenous glucose within the first 48 hrs after a diagnosis of severe sepsis/septic shock (2C); and addressing goals of care, including treatment plans and end-of-life planning (as appropriate) (1B), as early as feasible, but within 72 hrs of intensive care unit admission (2C). Recommendations specific to pediatric severe sepsis include: therapy with face mask oxygen, high flow nasal cannula oxygen, or nasopharyngeal continuous PEEP in the presence of respiratory distress and hypoxemia (2C); use of physical examination therapeutic endpoints such as capillary refill (2C); for septic shock associated with hypovolemia, the use of crystalloids or albumin to deliver a bolus of 20 mL/kg of crystalloids (or albumin equivalent) over 5 to 10 mins (2C); more common use of inotropes and vasodilators for low cardiac output septic shock associated with elevated systemic vascular resistance (2C); and use of hydrocortisone only in children with suspected or proven "absolute" adrenal insufficiency (2C). Strong agreement existed among a large cohort of international experts regarding many level 1 recommendations for the best care of patients with severe sepsis. Although a significant number of aspects of care have relatively weak support, evidence-based recommendations regarding the acute management of sepsis and septic shock are the foundation of improved outcomes for this important group of critically ill patients.
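The weight-based fluid-challenge volumes above (a minimum of 30 mL/kg of crystalloid in adults, 20 mL/kg boluses in children) reduce to simple arithmetic. The sketch below only illustrates the calculation with hypothetical patient weights; it is not clinical guidance.

```python
def crystalloid_bolus_ml(weight_kg, ml_per_kg):
    """Weight-based crystalloid volume: dose (mL/kg) scaled by body weight (kg)."""
    return weight_kg * ml_per_kg

# Hypothetical weights for illustration only
adult_min = crystalloid_bolus_ml(70, 30)    # 70 kg adult, 30 mL/kg minimum challenge
child_bolus = crystalloid_bolus_ml(15, 20)  # 15 kg child, one 20 mL/kg bolus
```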
Objective: To explore the preparation and preliminary characteristics of a modified nano-bioglass hydrogel. Methods: (1) A nano-bioglass suspension was prepared by adding 67 mL of nano-silica suspension into 400 mL of saturated calcium hydroxide solution, and its suspension stability was observed. (2) A hydrogel with final mass fractions of 10% gelatin and 1% sodium alginate was prepared and set as the control group. On the basis of the hydrogel in the control group, the nano-bioglass suspension prepared in experiment (1) was added to prepare a hydrogel with final mass fractions of 0.5% bioglass, 10% gelatin, and 1% sodium alginate, which was set as the experimental group. The gelling time at 4 and 25 ℃ and the dissolution time at 37 ℃ of the hydrogels in the 2 groups were recorded, and the gelation at 4 and 25 ℃ and dissolution at 37 ℃ of the hydrogels in the 2 groups were observed. The hydrogels in the 2 groups were collected and cross-linked with 25 g/L calcium chloride solution after a cold bath at 4 ℃, and the compression modulus was measured by a Young's modulus tester. In addition, the hydrogels in the 2 groups were collected and cross-linked as before, and freeze-dried hydrogels were made at -20 ℃. The relative volumes were measured and the porosity of the hydrogels in the 2 groups was calculated. The sample number in each experiment was 3. (3) Fibroblasts (Fbs) were isolated and cultured from 12 C57BL/6J mice of 24 hours old, their morphology was observed by inverted microscope, and the third passage of Fbs was used for the following experiments. Fbs were collected to make a single cell suspension with a cell concentration of 1×10^5/mL. The single cell suspension was divided into an experimental group and a control group according to a random number table (the same grouping method below), which were added with the hydrogels of the experimental group and control group prepared in experiment (2), respectively.
At culture hour 12, 24, and 48, cells of 3 wells in each group were collected to detect the survival rate by the cell counting kit 8 method. (4) The third passage Fbs were collected to prepare a single cell suspension with a cell concentration of (3.0-4.5)×10^7/mL, which was divided into an experimental group and a control group, with 1 tube in each group. The single cell suspensions in the 2 groups were stained with the green fluorescent probe DIO and then added with 9 mL of the hydrogel of the experimental group and control group prepared in experiment (2), respectively. The mixed solutions of Fbs and hydrogel in the 2 groups were cross-linked as before to make cell-loaded hydrogels. On culture day 3, the survival of cells in the hydrogel was observed by laser confocal microscopy. Cell-loaded hydrogel was prepared as before but without the green fluorescent probe DIO. On culture day 7, the adhesion and extension of cells in the hydrogel were observed by scanning electron microscopy. (5) Twelve 6-week-old female BALB/c-nu nude mice were collected and divided into an experimental group and a control group, with 6 mice in each group. A round full-thickness skin defect wound with a diameter of 1 cm was made on the back of each mouse. Immediately after injury, one cell-loaded hydrogel block of the experimental group or the control group prepared in experiment (4) was placed in the wound of each mouse in the experimental group and the control group, respectively. On post injury day (PID) 7 and 14, 3 nude mice in each group were sacrificed to collect the wound and wound margin tissue, which was stained with hematoxylin-eosin to observe wound healing. Data were statistically analyzed with the independent sample t test. Results: (1) The nano-bioglass particles could be uniformly dispersed in water and had good suspension stability. (2) The hydrogels of the 2 groups were molten at 37 ℃, and no precipitation of particles was observed.
The dissolving time of the hydrogel in the experimental group and the control group at 37 ℃ was 5 and 10 min, respectively. The gelation time of the hydrogel in the experimental group and the control group at 25 ℃ was 30 and 180 min, respectively, and the gelation time of the 2 groups at 4 ℃ was 5 and 10 min, respectively. The compression modulus of the hydrogel in the experimental group was (53±6) kPa, significantly higher than the (23±6) kPa of the control group (t=6.364, P<0.01). The porosity of the hydrogel in the experimental group was (86.1±2.1)%, similar to the (88.2±4.4)% of the control group (t=1.210, P>0.05). (3) The cells were long and fusiform with a high nuclear proportion, which accorded with the morphological characteristics of Fbs. At culture hour 12, 24, and 48, the survival rate of cells in the experimental group was (84±4)%, (89±4)%, and (130±10)%, similar to the (89±5)%, (90±4)%, and (130±11)% of the control group, respectively (t=1.534, 0.611, 0.148, P>0.05). (4) On culture day 3, the cells in the two groups had complete morphology in the hydrogel, no nuclear lysis or disappearance was observed, the cytoplasm remained intact, and the fluorescence intensity of the cells in the experimental group was significantly stronger than that in the control group. On culture day 7, the cells in the experimental group and the control group adhered and stretched in the hydrogel, and the number of cells adhering to the hydrogel in the experimental group was significantly greater than that in the control group. On PID 7, the wound areas of the nude mice in the control group and the experimental group were reduced, the reduction was more obvious in the experimental group, and a large number of inflammatory cells were seen in and around the wounds in the 2 groups.
On PID 14, the wound area of the nude mice in the control group was larger than that in the experimental group, and the number of inflammatory cells in and around the wound was significantly greater than that in the experimental group. <b>Conclusions:</b> The nano-bioglass hydrogel possesses good physical, chemical, and biological properties, cell-loading potential, and the ability to promote wound healing, indicating good potential for clinical application.
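The group comparisons in this abstract (e.g., compression modulus (53±6) vs (23±6) kPa, t=6.364) use an independent-sample t test computed from summary statistics. Below is a minimal sketch of the pooled two-sample t statistic, assuming n=3 per group (the methods collect 3 wells per group); the small difference from the reported t=6.364 would come from unrounded standard deviations in the original data.

```python
import math

def two_sample_t(mean1, sd1, n1, mean2, sd2, n2):
    """Pooled-variance independent-sample t statistic from summary statistics."""
    # Pooled variance across the two groups
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se

# Compression modulus: experimental (53 +/- 6) kPa vs control (23 +/- 6) kPa.
# n = 3 per group is an assumption based on the 3 wells per group in the methods;
# the paper's t = 6.364 presumably reflects unrounded standard deviations.
t = two_sample_t(53, 6, 3, 23, 6, 3)
print(round(t, 3))  # ~6.124 with the rounded SDs shown in the abstract
```

The statistic is compared against a t distribution with n1+n2-2 degrees of freedom (4 here), whose 0.01 critical value is about 4.6, consistent with the reported P<0.01.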
The American Academy of Pediatrics (AAP) has promoted pediatrician involvement in the care of children with special health care needs (CSHCN), including the prescription and/or supervision of therapies and durable medical equipment (DME) for children in both medical and educational settings, such as schools and early intervention programs. Through this survey, we attempted to quantify objectively how pediatricians direct and coordinate therapy and DME for CSHCN and how these efforts correlate with AAP recommendations. A survey was mailed to a random sample of 500 physicians listed in the AAP directory, resulting in a final sample of 217 responding physicians who indicated that they provide services to CSHCN. Results of the survey were reported as proportions, means with standard deviations, or medians with interquartile range. Comparisons of proportions among certain subgroups of interest were made using Fisher exact tests. The most recent AAP policy revision addressing the role of physicians in prescribing therapy services for children with motor disabilities appeared in Pediatrics 1996. It listed 6 key items that should be part of a therapy prescription: diagnosis, precautions, type, frequency, anticipated goals (educators may prefer the term "objectives"), and duration. The policy addressed and emphasized the need for what may be additional objectives, namely regular communication between all parties involved, ongoing supervision and reevaluation of the program and problem, and awareness of other community resources for possible referrals. Except for providing a diagnosis, the majority of surveyed pediatricians do not regularly comply with AAP policy recommendations on prescribing therapies and DME in medical and educational settings. Physicians who were trained before 1980 tend to follow AAP recommendations more closely than later graduates. Decreasing involvement of private outpatient pediatricians in coordinating and supervising CSHCN's care was noted. 
Furthermore, the majority is willing to defer decisions about treatment and goals to nonphysician health care providers (NPHCPs) and, in some cases, even equipment vendors. More than two thirds of the respondents indicated that they would sign a prescription for therapy that they had not initiated themselves if it had been initiated by a therapist. Likewise, most respondents said that they would sign a wheelchair prescription sent to them by a therapist. Few expressed confidence in determining the appropriateness of leg brace (orthosis) prescriptions and arm/hand brace prescriptions. The majority of survey participants said that they give an open-ended length of time (no limits under 1 year) on prescriptions for therapy services as part of school-based programs. However, patients' conditions and their therapeutic or equipment needs may change during the school year. Because open-ended prescriptions do not require periodic renewal, they do not provide opportunities for the periodic feedback that helps to ensure that the pediatrician is kept abreast of the patient's status and progress. The majority of respondents indicated that they would see a patient before signing either a therapy or DME prescription if they had not seen that patient in the past year. A little more than half of survey respondents said that, most of the time, they would participate initially in recommending which professional services or therapies should be performed as part of early intervention programs, but one third said that they participated less than half the time and approximately 14% said that they never participated. A majority would require being involved before authorizing therapy services as part of a school-based program, but a substantial minority would provide retroactive authorization for services that they did not initiate themselves. More than three quarters of respondents would prefer to let the therapist or educator set the goals.
Only 58% of pediatricians reported receiving a detailed progress report once or twice a year, and approximately one fifth received no reports on patients in school-based programs. A literature review suggested that there are different perceptions among physicians and educationally based service providers regarding the physician's role in initiating and supervising educationally based services and equipment, which may influence the extent of physician involvement. AAP and other professional organizations, such as the American Medical Association and the American Academy of Physical Medicine and Rehabilitation, as well as federal guidelines and third-party payers, emphasize the important role of physicians in initiating, determining the medical necessity of, and ordering services as well as in ongoing patient treatment. If therapists through their states' scope of practice guidelines have autonomy of practice or if the school self-funds educationally based services, then there may be no issues regarding physician authorization. However, if a physician's authorization is required for reimbursement, then the physician's professional, legal, and practice guidelines come into play. Physicians should be conscientious about fulfilling their responsibilities in serving as the medical home and supervising and monitoring medical services for their patients in both community and educational settings. Failure to properly fulfill the responsibilities inherent in signing a prescription may bring adverse consequences for the patient as well as subject the physician to legal liability if adverse events occur. Ideally, there should be seamless continuity and cooperation among the environments of medicine, home, community, and education rather than separate and perhaps conflicting domains. All health care professionals and other service providers involved should be acknowledged as collaborative team members.
Except for provision of the diagnosis, the majority of surveyed pediatricians do not comply with AAP policy recommendations on prescribing community/medical-based and educationally based services for CSHCN. Furthermore, the majority are willing to defer these decisions to other NPHCP. This raises issues regarding overall continuity of care versus care of the child in a variety of environments, the concept of the medical home, and legal risk as a result of failure to follow federal and state practice guidelines. Also, there seem to be different cultural perceptions among physicians and educationally based service providers regarding the physician's role in educationally based services. These cultural differences should be explored further to promote a greater collegial cooperation and understanding. Decreasing involvement of private outpatient pediatricians in coordinating and supervising CSHCN care and a trend toward greater deference to NPHCP since 1979 were noted. If the numerous policies and guidelines previously promoted by AAP have not had a significant impact on pediatrician practices in these fields, then other, more effective alternatives should be explored.
This publication contains the pediatric and neonatal sections of the 2005 International Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science With Treatment Recommendations (COSTR). The consensus process that produced this document was sponsored by the International Liaison Committee on Resuscitation (ILCOR). ILCOR was formed in 1993 and consists of representatives of resuscitation councils from all over the world. Its mission is to identify and review international science and knowledge relevant to cardiopulmonary resuscitation (CPR) and emergency cardiovascular care (ECC) and to generate consensus on treatment recommendations. ECC includes all responses necessary to treat life-threatening cardiovascular and respiratory events. The COSTR document presents international consensus statements on the science of resuscitation. ILCOR member organizations are each publishing resuscitation guidelines that are consistent with the science in this consensus document, but they also take into consideration geographic, economic, and system differences in practice and the regional availability of medical devices and drugs. The American Heart Association (AHA) pediatric and the American Academy of Pediatrics/AHA neonatal sections of the resuscitation guidelines are reprinted in this issue of Pediatrics (see pages e978-e988). The 2005 evidence evaluation process began shortly after publication of the 2000 International Guidelines for CPR and ECC. The process included topic identification, expert topic review, discussion and debate at 6 international meetings, further review, and debate within ILCOR member organizations and ultimate approval by the member organizations, an Editorial Board, and peer reviewers. The complete COSTR document was published simultaneously in Circulation (International Liaison Committee on Resuscitation. 
2005 International Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science With Treatment Recommendations. Circulation. 2005;112(suppl):73-90) and Resuscitation (International Liaison Committee on Resuscitation. 2005 International Consensus Conference on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science With Treatment Recommendations. Resuscitation. 2005;67:271-291). Readers are encouraged to review the 2005 COSTR document in its entirety. It can be accessed through the CPR and ECC link at the AHA Web site: www.americanheart.org. The complete publication represents the largest evaluation of resuscitation literature ever published and contains electronic links to more detailed information about the international collaborative process. To organize the evidence evaluation, ILCOR representatives established 6 task forces: basic life support, advanced life support, acute coronary syndromes, pediatric life support, neonatal life support, and an interdisciplinary task force to consider overlapping topics such as educational issues. The AHA established additional task forces on stroke and, in collaboration with the American Red Cross, a task force on first aid. Each task force identified topics requiring evaluation and appointed international experts to review them. A detailed worksheet template was created to help the experts document their literature review, evaluate studies, determine levels of evidence, develop treatment recommendations, and disclose conflicts of interest. Two evidence evaluation experts reviewed all worksheets and assisted the worksheet reviewers to ensure that the worksheets met a consistently high standard. A total of 281 experts completed 403 worksheets on 275 topics, reviewing more than 22000 published studies. 
In December 2004 the evidence review and summary portions of the evidence evaluation worksheets, with worksheet author conflict of interest statements, were posted on the Internet at www.C2005.org, where readers can continue to access them. Journal advertisements and e-mails invited public comment. Two hundred forty-nine worksheet authors (141 from the United States and 108 from 17 other countries) and additional invited experts and reviewers attended the 2005 International Consensus Conference for presentation, discussion, and debate of the evidence. All 380 participants at the conference received electronic copies of the worksheets. Internet access was available to all conference participants during the conference to facilitate real-time verification of the literature. Expert reviewers presented topics in plenary, concurrent, and poster conference sessions with strict adherence to a novel and rigorous conflict of interest process. Presenters and participants then debated the evidence, conclusions, and draft summary statements. Wording of science statements and treatment recommendations was refined after further review by ILCOR member organizations and the international editorial board. This format ensured that the final document represented a truly international consensus process. The COSTR manuscript was ultimately approved by all ILCOR member organizations and by an international editorial board. The AHA Science Advisory and Coordinating Committee and the editor of Circulation obtained peer reviews of this document before it was accepted for publication. 
The most important changes in recommendations for pediatric resuscitation since the last ILCOR review in 2000 include:
- Increased emphasis on performing high-quality CPR: "Push hard, push fast, minimize interruptions of chest compression; allow full chest recoil, and don't provide excessive ventilation."
- Recommended chest compression-ventilation ratio: 30:2 for lone rescuers with victims of all ages; 15:2 for health care providers performing 2-rescuer CPR for infants and children (except 3:1 for neonates).
- Either a 2- or 1-hand technique is acceptable for chest compressions in children.
- Use of 1 shock followed by immediate CPR is recommended for each defibrillation attempt, instead of 3 stacked shocks.
- Biphasic shocks with an automated external defibrillator (AED) are acceptable for children ≥1 year of age. Attenuated shocks using child cables or activation of a key or switch are recommended in children <8 years old.
- Routine use of high-dose intravenous (IV) epinephrine is no longer recommended.
- The intravascular (IV and intraosseous) route of drug administration is preferred to the endotracheal route.
- Cuffed endotracheal tubes can be used in infants and children provided correct tube size and cuff inflation pressure are used.
- Exhaled CO2 detection is recommended for confirmation of endotracheal tube placement.
- Consider induced hypothermia for 12 to 24 hours in patients who remain comatose following resuscitation.
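The compression-ventilation recommendations above amount to a small decision table. A sketch encoding it follows; the function and scenario names are illustrative, not guideline wording.

```python
def compression_ventilation_ratio(rescuers: int, age_group: str) -> str:
    """Chest compression-ventilation ratios per the 2005 pediatric summary.

    `rescuers` and `age_group` ("infant", "child", "neonate", ...) are an
    illustrative encoding of the scenarios named in the recommendations.
    """
    if rescuers == 1:
        return "30:2"   # lone rescuer, victims of all ages
    if age_group == "neonate":
        return "3:1"    # neonates during 2-rescuer health care provider CPR
    return "15:2"       # 2-rescuer health care provider CPR, infants and children

print(compression_ventilation_ratio(1, "child"))    # 30:2
print(compression_ventilation_ratio(2, "infant"))   # 15:2
print(compression_ventilation_ratio(2, "neonate"))  # 3:1
```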
Some of the most important changes in recommendations for neonatal resuscitation since the last ILCOR review in 2000 include less emphasis on using 100% oxygen when initiating resuscitation, de-emphasis of the need for routine intrapartum oropharyngeal and nasopharyngeal suctioning for infants born to mothers with meconium staining of amniotic fluid, proven value of occlusive wrapping of very low birth weight infants <28 weeks' gestation to reduce heat loss, preference for the IV versus the endotracheal route for epinephrine, and an increased emphasis on parental autonomy at the threshold of viability. The scientific evidence supporting these recommendations is summarized in the neonatal document (see pages e978-e988).
We conducted an extended follow-up and spatial analysis of the American Cancer Society (ACS) Cancer Prevention Study II (CPS-II) cohort in order to further examine associations between long-term exposure to particulate air pollution and mortality in large U.S. cities. The current study sought to clarify outstanding scientific issues that arose from our earlier HEI-sponsored Reanalysis of the original ACS study data (the Particle Epidemiology Reanalysis Project). Specifically, we examined (1) how ecologic covariates at the community and neighborhood levels might confound and modify the air pollution-mortality association; (2) how spatial autocorrelation and multiple levels of data (e.g., individual and neighborhood) can be taken into account within the random effects Cox model; (3) how using land-use regression to refine measurements of air pollution exposure to the within-city (or intra-urban) scale might affect the size and significance of health effects in the Los Angeles and New York City regions; and (4) what exposure time windows may be most critical to the air pollution-mortality association. The 18 years of follow-up (extended from 7 years in the original study [Pope et al. 1995]) included vital status data for the CPS-II cohort (approximately 1.2 million participants) with multiple cause-of-death codes through December 31, 2000 and more recent exposure data from air pollution monitoring sites for the metropolitan areas. In the Nationwide Analysis, the influence of ecologic covariate data (such as education attainment, housing characteristics, and level of income; data obtained from the 1980 U.S. Census; see Ecologic Covariates sidebar on page 14) on the air pollution-mortality association were examined at the Zip Code area (ZCA) scale, the metropolitan statistical area (MSA) scale, and by the difference between each ZCA value and the MSA value (DIFF). 
In contrast to previous analyses that did not directly include ecologic covariates at the ZCA scale, risk estimates increased when ecologic covariates were included at all scales. The ecologic covariates exerted their greatest effect on mortality from ischemic heart disease (IHD), which was also the health outcome most strongly related with exposure to PM2.5 (particles 2.5 microm or smaller in aerodynamic diameter), sulfate (SO4(2-)), and sulfur dioxide (SO2), and the only outcome significantly associated with exposure to nitrogen dioxide (NO2). When ecologic covariates were simultaneously included at both the MSA and DIFF levels, the hazard ratio (HR) for mortality from IHD associated with PM2.5 exposure (average concentration for 1999-2000) increased by 7.5% and that associated with SO4(2-) exposure (average concentration for 1990) increased by 12.8%. The two covariates found to exert the greatest confounding influence on the PM2.5-mortality association were the percentage of the population with a grade 12 education and the median household income. Also in the Nationwide Analysis, complex spatial patterns in the CPS-II data were explored with an extended random effects Cox model (see Glossary of Statistical Terms at end of report) that is capable of clustering up to two geographic levels of data. Using this model tended to increase the HR estimate for exposure to air pollution and also to inflate the uncertainty in the estimates. Including ecologic covariates decreased the variance of the results at both the MSA and ZCA scales; the largest decrease was in residual variation based on models in which the MSA and DIFF levels of data were included together, which suggests that partitioning the ecologic covariates into between-MSA and within-MSA values more completely captures the sources of variation in the relationship between air pollution, ecologic covariates, and mortality. Intra-Urban Analyses were conducted for the New York City and Los Angeles regions. 
The results of the Los Angeles spatial analysis, where we found high exposure contrasts within the Los Angeles region, showed that air pollution-mortality risks were nearly 3 times greater than those reported from earlier analyses. This suggests that chronic health effects associated with intra-urban gradients in exposure to PM2.5 may be even larger between ZCAs within an MSA than the associations between MSAs that have been previously reported. However, in the New York City spatial analysis, where we found very little exposure contrast between ZCAs within the New York region, mortality from all causes, cardiopulmonary disease (CPD), and lung cancer was not elevated. A positive association was seen for PM2.5 exposure and IHD, which provides evidence of a specific association with a cause of death that has high biologic plausibility. These results were robust when analyses controlled for (1) the 44 individual-level covariates (from the ACS enrollment questionnaire in 1982; see 44 Individual-Level Covariates sidebar on page 22) and (2) spatial clustering using the random effects Cox model. Effects were mildly lower when unemployment at the ZCA scale was included. To examine whether there is a critical exposure time window that is primarily responsible for the increased mortality associated with ambient air pollution, we constructed individual time-dependent exposure profiles for particulate and gaseous air pollutants (PM2.5 and SO2) for a subset of the ACS CPS-II participants for whom residence histories were available. The relevance of the three exposure time windows we considered was gauged using the magnitude of the relative risk (HR) of mortality as well as the Akaike information criterion (AIC), which measures the goodness of fit of the model to the data. For PM2.5, no one exposure time window stood out as demonstrating the greatest HR; nor was there any clear pattern of a trend in HR going from recent to more distant windows or vice versa.
Differences in AIC values among the three exposure time windows were also small. The HRs for mortality associated with exposure to SO2 were highest in the most recent time window (1 to 5 years), although none of these HRs were significantly elevated. Identifying critical exposure time windows remains a challenge that warrants further work with other relevant data sets. This study provides additional support toward developing cost-effective air quality management policies and strategies. The epidemiologic results reported here are consistent with those from other population-based studies, which collectively have strongly supported the hypothesis that long-term exposure to PM2.5 increases mortality in the general population. Future research using the extended Cox-Poisson random effects methods, advanced geostatistical modeling techniques, and newer exposure assessment techniques will provide additional insight.
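The time-window comparison above ranks candidate models by the Akaike information criterion, AIC = 2k - 2 ln L, where smaller values indicate better fit. A minimal sketch of that model-selection arithmetic follows; the log-likelihood values are hypothetical placeholders, since the report's actual fits are not reproduced in this summary.

```python
def aic(log_likelihood: float, n_params: int) -> float:
    """Akaike information criterion: 2k - 2*ln(L)."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits for three exposure time windows (illustrative values only,
# not from the CPS-II analysis).
windows = {
    "1-5 years":   aic(-10234.6, 8),
    "6-10 years":  aic(-10235.1, 8),
    "11-15 years": aic(-10234.9, 8),
}
best = min(windows, key=windows.get)
# Small AIC differences (as the study found) mean no window clearly dominates.
deltas = {w: windows[w] - windows[best] for w in windows}
print(best, deltas)
```

A common rule of thumb is that an AIC difference under about 2 gives essentially no support for preferring one model, which is the situation the report describes for PM2.5.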
<b>Objective:</b> To investigate the role and mechanism of Vγ4 T cells in the impaired wound healing of rapamycin-induced full-thickness skin defects in mice. <b>Methods:</b> The experimental research methods were applied. Eighty-six C57BL/6J male mice (hereinafter briefly referred to as wild-type mice) aged 8-12 weeks were selected for the following experiments. Vγ4 T cells were isolated from the axillary lymph nodes of five wild-type mice for the following experiments. Forty-two mice received intraperitoneal injections of rapamycin to establish the rapamycin-treated mouse model for the following experiments. Eighteen wild-type mice were divided into normal control group without any treatment, trauma only group, and trauma+CC chemokine ligand 20 (CCL20) inhibitor group according to the random number table (the same grouping method below), with 6 mice in each group. The full-thickness skin defect wound was made on the back of mice in the latter two groups (the same wound model below), and mice in trauma+CCL20 inhibitor group were continuously injected subcutaneously with CCL20 inhibitor at the wound edge for 3 days after injury. Another 6 rapamycin-treated mice were used to establish the wound model as rapamycin+trauma group. On post injury day (PID) 3, the epidermal cells of the skin tissue around the wound of each trauma mouse were extracted by enzyme digestion, and the percentage of Vγ4 T cells in the epidermal cells was detected by flow cytometry. In normal control group, the epidermal cells of the normal back skin tissue of mice were taken at the appropriate time point for detection as above. Five wild-type mice were used to establish wound models. On PID 3, the epidermal cells were extracted from the skin tissue around the wound.
The cell populations were divided into Vγ4 T cells, Vγ3 T cells, and γδ negative cells by fluorescence-activated cell sorter, which were set as Vγ4 T cell group, Vγ3 T cell group, and γδ negative cell group (with cells in each group being mixed with B16 mouse melanoma cells), respectively. B16 mouse melanoma cells were used as melanoma cell control group. The expression of interleukin-22 (IL-22) mRNA in cells of each group was detected by real-time fluorescence quantitative reverse transcription polymerase chain reaction (RT-PCR), with the number of samples being 6. Thirty rapamycin-treated mice were used to establish wound models, which were divided into Vγ4 T cell only group and Vγ4 T cell+IL-22 inhibitor group performed with corresponding injections and rapamycin control group injected with phosphate buffer solution (PBS) immediately after injury, with 10 mice in each group. Another 10 wild-type mice were taken to establish wound models and injected with PBS as wild-type control group. Mice in each group were injected continuously for 6 days. The percentage of wound area of mice in the four groups was calculated on PID 1, 2, 3, 4, 5, and 6 after injection on the same day. Six wild-type mice and 6 rapamycin-treated mice were taken respectively to establish wound models as wild-type group and rapamycin group. On PID 3, the mRNA and protein expressions of IL-22 and CCL20 in the peri-wound epidermis tissue of mice in the two groups were detected by real-time fluorescence quantitative RT-PCR and Western blotting, respectively. The Vγ4 T cells were divided into normal control group without any treatment and rapamycin-treated rapamycin group. After being cultured for 24 hours, the mRNA and protein expressions of IL-22 of cells in the two groups were detected by real-time fluorescence quantitative RT-PCR and Western blotting, respectively, with the number of samples being 6. 
Data were statistically analyzed with independent sample <i>t</i> test, analysis of variance for repeated measurement, one-way analysis of variance, Bonferroni method, Kruskal-Wallis <i>H</i> test, and Wilcoxon rank sum test. <b>Results:</b> The percentage of Vγ4 T cells in the epidermal cells of the skin tissue around the wound of mice in trauma only group on PID 3 was 0.66% (0.52%, 0.81%), which was significantly higher than 0.09% (0.04%, 0.14%) in the epidermal cells of the normal skin tissue of mice in normal control group (<i>Z</i>=4.31, <i>P</i><0.01). The percentages of Vγ4 T cells in the epidermal cells of the skin tissue around the wound of mice in rapamycin+trauma group and trauma+CCL20 inhibitor group on PID 3 were 0.25% (0.16%, 0.37%) and 0.24% (0.17%, 0.35%), respectively, which were significantly lower than that in trauma only group (with <i>Z</i> values of 2.27 and 2.25, respectively, <i>P</i><0.05). The mRNA expression level of IL-22 of cells in Vγ4 T cell group was significantly higher than that in Vγ3 T cell group, γδ negative cell group, and melanoma cell control group (with <i>Z</i> values of 2.96, 2.45, and 3.41, respectively, <i>P</i><0.05 or <i>P</i><0.01). Compared with that in wild-type control group, the percentage of wound area of mice in rapamycin control group increased significantly on PID 1-6 (<i>P</i><0.01), and the percentage of wound area of mice in Vγ4 T cell+IL-22 inhibitor group increased significantly on PID 1 and PID 3-6 (<i>P</i><0.05 or <i>P</i><0.01). Compared with that in rapamycin control group, the percentage of wound area of mice in Vγ4 T cell only group decreased significantly on PID 1-6 (<i>P</i><0.05 or <i>P</i><0.01). Compared with that in Vγ4 T cell only group, the percentage of wound area of mice in Vγ4 T cell+IL-22 inhibitor group increased significantly on PID 3-6 (<i>P</i><0.05 or <i>P</i><0.01).
On PID 3, compared with those in wild-type group, the expression levels of IL-22 protein and mRNA (with <i>t</i> values of -7.82 and -5.04, respectively, <i>P</i><0.01) and CCL20 protein and mRNA (with <i>t</i> values of -7.12 and -5.73, respectively, <i>P</i><0.01) were decreased significantly in the peri-wound epidermis tissue of mice in rapamycin group. After being cultured for 24 hours, the expression levels of IL-22 protein and mRNA in Vγ4 T cells in rapamycin group were significantly lower than those in normal control group (with <i>t</i> values of -7.75 and -6.04, respectively, <i>P</i><0.01). <b>Conclusions:</b> In mice with full-thickness skin defects, rapamycin may impair the CCL20 chemotactic system by inhibiting the expression of CCL20, leading to decreased recruitment of Vγ4 T cells to the epidermis, and at the same time inhibit the secretion of IL-22 by Vγ4 T cells, thereby slowing the wound healing rate.
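Several of the median-based comparisons in this abstract report Z values from the Wilcoxon rank sum test. A self-contained sketch of that statistic under the normal approximation follows; the sample values are hypothetical, since the per-mouse measurements are not given in the abstract.

```python
import math

def rank_sum_z(x, y):
    """Wilcoxon rank-sum Z via the normal approximation (no tie correction)."""
    pooled = sorted((v, i) for i, v in enumerate(x + y))
    ranks = {}
    j = 0
    while j < len(pooled):
        # Find the run of tied values starting at position j and average their ranks
        k = j
        while k + 1 < len(pooled) and pooled[k + 1][0] == pooled[j][0]:
            k += 1
        avg = (j + k) / 2 + 1
        for m in range(j, k + 1):
            ranks[pooled[m][1]] = avg
        j = k + 1
    n1, n2 = len(x), len(y)
    w = sum(ranks[i] for i in range(n1))        # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2                 # mean of W under the null
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return (w - mu) / sigma

# Hypothetical Vγ4 T cell percentages for two groups of 6 mice (illustrative only).
z = rank_sum_z([0.52, 0.60, 0.66, 0.70, 0.81, 0.75],
               [0.04, 0.09, 0.14, 0.07, 0.12, 0.10])
print(round(z, 2))  # ~2.88 for these illustrative samples
```

With two fully separated groups of 6, the normal-approximation Z maxes out near 2.88; larger reported values (such as Z=4.31) reflect the larger cell-level samples analyzed in the study.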
This gene transfer experiment is the first Parkinson's Disease (PD) protocol to be submitted to the Recombinant DNA Advisory Committee. The principal investigators have uniquely focused their careers on both pre-clinical work on gene transfer in the brain and clinical expertise in management and surgical treatment of patients with PD. They have extensively used rodent models of PD for proof-of-principle experiments on the utility of different vector systems. PD is an excellent target for gene therapy, because it is a complex acquired disease of unknown etiology (apart from some rare familial cases) yet it is characterized by a specific neuroanatomical pathology, the degeneration of dopamine neurons of the substantia nigra (SN) with loss of dopamine input to the striatum. This pathology results in focal changes in the function of several deep brain nuclei, which have been well-characterized in humans and animal models and which account for many of the motor symptoms of PD. Our original approaches, largely to validate in vivo gene transfer in the brain, were designed to facilitate dopamine transmission in the striatum using an AAV vector expressing dopamine-synthetic enzymes. Although these confirmed the safety and potential efficacy of AAV, complex patient responses to dopamine augmenting medication as well as poor results and complications of human transplant studies suggested that this would be a difficult and potentially dangerous clinical strategy using current approaches. Subsequently, we and others investigated the use of growth factors, including GDNF. These showed some encouraging effects on dopamine neuron survival and regeneration in both rodent and primate models; however, uncertain consequences of long-term growth factor expression and question regarding timing of therapy in the disease course must be resolved before any clinical study can be contemplated. 
We now propose to infuse into the subthalamic nucleus (STN) recombinant AAV vectors expressing the two isoforms of the enzyme glutamic acid decarboxylase (GAD-65 and GAD-67), which synthesizes the major inhibitory neurotransmitter in the brain, GABA. The STN is a very small nucleus (140 cubic mm, or 0.02% of the total brain volume, consisting of approximately 300,000 neurons) which is disinhibited in PD, leading to pathological excitation of its targets, the internal segment of the globus pallidus (GPi) and the substantia nigra pars reticulata (SNpr). Increased GPi/SNpr outflow is believed responsible for many of the cardinal symptoms of PD, i.e., tremor, rigidity, bradykinesia, and gait disturbance. A large amount of data based on lesioning, electrical stimulation, and local drug infusion studies with GABA-agonists in human PD patients have reinforced this circuit model of PD and the central role of the STN. Moreover, the closest conventional surgical intervention to our proposal, deep brain stimulation (DBS) of the STN, has shown remarkable efficacy in even late-stage PD, unlike the early failures associated with recombinant GDNF infusion or cell transplantation approaches in PD. We believe that our gene transfer strategy will not only palliate symptoms by inhibiting STN activity, as with DBS, but we also have evidence that the vector converts excitatory STN projections to inhibitory projections. This additional dampening of GPi/SNpr outflow may provide an additional advantage over DBS. Moreover, and perhaps of greatest interest, our preclinical data suggest that this strategy may also be neuroprotective, so this therapy may slow the degeneration of dopaminergic neurons. We will use both GAD isoforms since both are typically expressed in inhibitory neurons in the brain, and our data suggest that the combination of both isoforms is likely to be most beneficial.
Our preclinical data include three model systems. In the first, old, chronically lesioned parkinsonian rats, intraSTN GAD gene transfer results in improvement not only in drug-induced asymmetrical behavior (apomorphine-induced rotations) but also in spontaneous behaviors. In our second model, GAD gene transfer precedes the generation of a dopamine lesion. Here GAD gene transfer showed remarkable neuroprotection. Finally, we carried out a study where GAD-65 and GAD-67 were used separately in monkeys that were resistant to MPTP lesioning and hence showed minimal symptomatology. Nevertheless, GAD gene transfer showed no adverse effects, and small improvements in both Parkinson rating scales and activity measures were obtained. In the proposed clinical trial, all patients will have met criteria for and will have given consent for elective STN DBS surgery. All 20 patients will receive DBS electrodes but, in addition, will be randomized into two groups to receive either a solution containing rAAV-GAD or a solution consisting only of the vector vehicle, physiological saline. Patients, care providers, and physicians will be blinded as to which solution any one patient receives. All patients, regardless of group, will agree not to have the DBS activated until the completion and unblinding of the study. Patients will be assessed with a core clinical assessment program modeled on the CAPSIT and will also undergo a preoperative and several postoperative PET scans. At the conclusion of the study, any patient with sufficient symptomatic improvement will be offered DBS removal if they so desire. Any patients with no benefit will simply have their stimulators activated, which would normally be appropriate therapy for them and which requires no additional operations. If any unforeseen symptoms occur from STN production of GABA, this might be controlled by blocking STN GABA release with DBS, or STN lesioning could be performed using the DBS electrode. 
Again, this treatment would not subject the patient to additional invasive brain surgery. The trial described here reflects an evolution in our thinking about the best strategy to make a positive impact in Parkinson's disease by minimizing risk and maximizing potential benefit. To our knowledge, this proposal represents the first truly blinded, completely controlled gene or cell therapy study in the brain, which still provides the patient with the same surgical procedure they would normally receive and should not subject the patient to additional surgical procedures regardless of the success or failure of the study. This study first and foremost aims to maximally serve the safety interests of the individual patient while simultaneously serving the public interest in rigorously determining in a scientific fashion whether gene therapy can be effective to any degree in treating Parkinson's disease.
Rubitecan [Orathecin, 9-nitrocamptothecin, 9NC, RFS 2000] is a topoisomerase I inhibitor extracted from the bark and leaves of the Camptotheca acuminata tree, which is native to China. Rubitecan is an oral compound being developed for the treatment of pancreatic cancer and other solid tumours by SuperGen. One of the major benefits of rubitecan is that it can be administered in an outpatient setting, so patients can be treated in their homes. Rubitecan was isolated by the Stehlin Foundation in the US. SuperGen is currently awaiting regulatory approval in the US and the EU for rubitecan in the treatment of pancreatic cancer. At the BIO-2004 conference, SuperGen announced it is seeking a partner for rubitecan for territories outside the US. SuperGen acquired exclusive worldwide rights to rubitecan from the Stehlin Foundation in 1997 except in Mexico, Canada, Spain, Japan, the UK, France, Italy and Germany. SuperGen has also received approval from the US FDA to use its own manufactured rubitecan in clinical trials. SuperGen and the Stehlin Foundation have an 8-year research agreement that secures global rights to other camptothecins and additional anticancer compounds for the former. In December 1999, SuperGen and Abbott signed a worldwide sales and marketing agreement for rubitecan. Under the terms of the agreement, Abbott had exclusive distribution and promotion rights for rubitecan outside the US, and co-promotion rights with SuperGen within the US. In return, Abbott made an initial equity investment in SuperGen. SuperGen and Abbott Laboratories ended their collaboration agreement in February 2002 by mutual consent with SuperGen stating that the dissolution of the agreement was based on commercial motivation rather than anything to do with rubitecan's safety or efficacy. Abbott no longer has rights or obligations to purchase shares of SuperGen stock or an option to purchase up to 49% of the company. 
For its part, SuperGen will no longer receive milestone payments worth up to $US57 million. SuperGen has formed a clinical and business alliance with US Oncology (created by the merger between American Oncology Resources and Physician Reliance Network in the US), and will collaborate on clinical trials of rubitecan. SuperGen believes that this relationship will increase the patient population available for trials and enable it to market the drug directly to oncologists. SuperGen and Capital Research and Management Company have completed a $US16.6 million private placement transaction that will enable future funding for the rubitecan programme as well as other oncology programmes. In July 2004, SuperGen's European subsidiary, EuroGen Pharmaceuticals, submitted a Marketing Authorisation Application for rubitecan in the treatment of pancreatic cancer. The application will be reviewed under the EMEA Centralised Procedure. In June 2003, the EMEA granted SuperGen orphan drug status for rubitecan for the treatment of pancreatic cancer. The US FDA has also granted orphan drug status for rubitecan in the treatment of pancreatic cancer and fast-track status for rubitecan for the treatment of locally advanced or metastatic pancreatic cancer that is resistant or refractory to chemotherapy. SuperGen has conducted three phase III pivotal trials in patients with pancreatic cancer. A phase III randomised trial in chemotherapy-naive patients was conducted at 132 centres throughout the US. The trial enrolled approximately 994 patients who were randomised to receive rubitecan or gemcitabine. Enrollment was completed in October 2001. Another phase III trial has compared rubitecan with the most appropriate chemotherapy in chemotherapy-resistant patients. Enrollment of over 400 patients at 200 medical centres across the US was completed in June 2001. 
Results from the trial were presented at the 39th Annual Meeting of the American Society of Clinical Oncology (ASCO-2003) [Chicago, US; 31 May - 3 June 2003], after they had been compiled, analysed and submitted to the FDA. The results of the study showed that rubitecan could not help all chemotherapy-resistant patients, but could increase survival in those that do respond. The other phase III pivotal trial was conducted in patients with pancreatic cancer who had failed treatment with gemcitabine. This trial completed enrollment in October 2001, and had enrolled approximately 448 patients. SuperGen is conducting phase II trials of rubitecan in patients with solid tumours in the UK, Italy, France, Germany, the Netherlands and Denmark. Each trial will enroll 100-150 patients with various tumour types, including colorectal, lung, breast, gastric, prostate, cervical and head and neck cancers. Phase I/II trials are underway to investigate rubitecan as a radiosensitiser in patients with lung cancer, and phase II trials in patients with breast cancer are also being conducted. A phase II study in ovarian cancer patients is also being conducted. Results from an ongoing phase II study in cancer patients have shown that rubitecan was effective against chordomas, a rare type of bone cancer. Phase II studies are also underway in haematological malignancies including myelodysplastic syndrome (preleukaemia) and chronic myelomonocytic leukaemia. In February 2000, SuperGen announced that its IND submission for rubitecan had been approved by the Therapeutics Products Programme of Canada. The company stated that it intended to begin clinical trials in Canada in the near future. In February 2004, SuperGen announced an offering of shares of its common stock to finance the commercialisation of rubitecan capsules. 
In July 2003, SuperGen was granted a US patent covering combination therapies with chemotherapeutic anthracycline agents and structural modifications that may one day lead to next-generation rubitecan compounds. In December 2002, SuperGen was granted US patent No. 6,482,830, covering its polymorphic formulations of rubitecan. The patent also covers a class of polymorphs that are similar to the one at the centre of rubitecan. In addition, SuperGen was also issued US patent No. 6,485,514 in December 2002, covering the local delivery of rubitecan via stents and/or catheters to sites of proliferating cells. Stent- or catheter-delivered rubitecan may be beneficial in certain types of cardiac procedures, such as ablation or angioplasty, as well as for direct injection into a certain number of solid tumours. SuperGen is also developing an inhaled, liposomal formulation of rubitecan. It acquired the worldwide rights to this formulation from the Clayton Foundation in December 1999. Inhaled rubitecan is in clinical trials in the US for the treatment of lung cancer and pulmonary metastatic cancer.
In 2017, approximately 67,000 persons died of violence-related injuries in the United States. This report summarizes data from CDC's National Violent Death Reporting System (NVDRS) on violent deaths that occurred in 34 states, four California counties, the District of Columbia, and Puerto Rico in 2017. Results are reported by sex, age group, race/ethnicity, method of injury, type of location where the injury occurred, circumstances of injury, and other selected characteristics. NVDRS collects data regarding violent deaths obtained from death certificates, coroner and medical examiner reports, and law enforcement reports. This report includes data collected for violent deaths that occurred in 2017. Data were collected from 34 states (Alaska, Arizona, Colorado, Connecticut, Delaware, Georgia, Illinois, Indiana, Iowa, Kansas, Kentucky, Maine, Maryland, Massachusetts, Michigan, Minnesota, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Carolina, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Carolina, Utah, Vermont, Virginia, Washington, West Virginia, and Wisconsin), four California counties (Los Angeles, Sacramento, Shasta, and Siskiyou), the District of Columbia, and Puerto Rico. NVDRS collates information for each death and links deaths that are related (e.g., multiple homicides, homicide followed by suicide, or multiple suicides) into a single incident. For 2017, NVDRS collected information on 45,141 fatal incidents involving 46,389 deaths that occurred in 34 states, four California counties, and the District of Columbia; in addition, information was collected on 961 fatal incidents involving 1,027 deaths in Puerto Rico. Data for Puerto Rico were analyzed separately. 
Of the 46,389 deaths in the 34 states, four California counties, and District of Columbia, the majority (63.5%) were suicides, followed by homicides (24.9%), deaths of undetermined intent (9.7%), legal intervention deaths (1.4%) (i.e., deaths caused by law enforcement and other persons with legal authority to use deadly force acting in the line of duty, excluding legal executions), and unintentional firearm deaths (<1.0%). (The term "legal intervention" is a classification incorporated into the International Classification of Diseases, Tenth Revision, and does not denote the lawfulness or legality of the circumstances surrounding a death caused by law enforcement.) Demographic patterns and circumstances varied by manner of death. The suicide rate was higher among males than among females and was highest among adults aged 45-64 years and ≥85 years and non-Hispanic American Indians/Alaska Natives and non-Hispanic Whites. The most common method of injury for suicide was a firearm among males and poisoning among females. Suicide was most often preceded by a mental health, intimate partner, or physical health problem or a recent or impending crisis during the previous or upcoming 2 weeks. The homicide rate was highest among persons aged 20-24 years and was higher among males than females. Non-Hispanic Black males had the highest homicide rate of any racial/ethnic group. The most common method of injury for homicide was a firearm. When the relationship between a homicide victim and a suspect was known, the suspect was most frequently an acquaintance or friend for male victims and a current or former intimate partner for female victims. Homicide most often was precipitated by an argument or conflict, occurred in conjunction with another crime, or, for female victims, was related to intimate partner violence. Among intimate partner violence-related homicides, the largest proportion occurred among adults aged 35-54 years, and the most common method of injury was a firearm. 
When the relationship between an intimate partner violence-related homicide victim and a suspect was known, most female victims were killed by a current or former intimate partner, whereas approximately half of male victims were killed by a suspect who was not their intimate partner. Almost all legal intervention deaths were among males, and the legal intervention death rate was highest among men aged 25-29 years. Non-Hispanic American Indian/Alaska Native males had the highest legal intervention death rate, followed by non-Hispanic Black males. A firearm was used in the majority of legal intervention deaths. When a specific type of crime was known to have precipitated a legal intervention death, the type of crime was most frequently assault/homicide. The most frequent circumstances for legal intervention deaths were reported use of a weapon by the victim in the incident and a mental health or substance use problem (other than alcohol use). Unintentional firearm deaths more frequently occurred among males, non-Hispanic Whites, and persons aged 15-24 years. These deaths most often occurred while the shooter was playing with a firearm and most frequently were precipitated by a person unintentionally pulling the trigger or mistakenly thinking the firearm was unloaded. The rate of deaths of undetermined intent was highest among males, particularly among non-Hispanic Black and non-Hispanic American Indian/Alaska Native males, and persons aged 30-34 years. Poisoning was the most common method of injury in deaths of undetermined intent, and opioids were detected in nearly 80% of decedents tested for those substances. This report provides a detailed summary of data from NVDRS on violent deaths that occurred in 2017. The suicide rate was highest among non-Hispanic American Indian/Alaska Native and non-Hispanic White males, whereas the homicide rate was highest among non-Hispanic Black males. 
Intimate partner violence precipitated a large proportion of homicides for females. Mental health problems, intimate partner problems, interpersonal conflicts, and acute life stressors were primary circumstances for multiple types of violent death. NVDRS data are used to monitor the occurrence of violence-related fatal injuries and assist public health authorities in developing, implementing, and evaluating programs and policies to reduce and prevent violent deaths. For example, South Carolina VDRS and Colorado VDRS are using their data to support suicide prevention programs through systems change and the Zero Suicide framework. North Carolina VDRS and Kentucky VDRS data were used to examine intimate partner violence-related deaths beyond homicides to inform prevention efforts. Findings from these studies suggest that intimate partner violence might also contribute to other manners of violent death, such as suicide, and preventing intimate partner violence might reduce the overall number of violent deaths. In 2019, NVDRS expanded data collection to include all 50 states, the District of Columbia, and Puerto Rico, providing more comprehensive and actionable violent death information for public health efforts to reduce violent deaths.
1. Although the number of cadaver donor transplants did not increase substantially over the past 10 years, unrelated living donor grafts increased from 153 in 1991 to 1,661 through 2000. Use of spousal and other unrelated donor organs contributed to this increase. There was a modest increase in living-related donor transplants from 2,328 in 1991 to 3,451 in 2000. 2. Cadaver donor graft survival at one year improved from 84% in 1991 to 90% in 2000. In contrast, one-year graft survival of living donor transplants only improved from 93% in 1991 to 95% in 2000. 3. Throughout the 10-year period, approximately 13% of transplants were repeat transplants from cadaver donors and roughly 8% were regrafts from live donors. 4. Cadaver donor transplants into White recipients declined from 68% in 1991 to 60% in 2000. For living donors, the percentage of White patients remained constant at about 70%. 5. Graft survival in patients of all races was about equal at one year but diverged at 3 years, with Asians having the highest and Blacks having the lowest 3-year graft survival rates. 6. Average donor age increased from 31.7 in 1991 to 36 in 2000 for cadaveric donor transplants and 37.9 in 1991 to 40.4 in 2000 for living donor transplants. Cadaveric kidneys from donors older than 50 years of age yielded significantly lower 3-year graft survival. 7. Average recipient age for cadaveric donor transplants increased from 42.1 in 1991 to 46.8 in 2000. The average recipient age for living donor transplants also increased steadily from 33.7 in 1991 to 42.9 in 2000. There was relatively little effect on graft survival rates for advanced age recipients. 8. The percentage of sensitized recipients receiving cadaver donor grafts declined from 27% in 1991 to 21% in 2000. Similarly, sensitized recipients receiving living donor grafts decreased from 17% in 1991 to 13% in 2000. Graft survival in patients with more than 50% PRA was lower at 3 years for patients receiving cadaveric donor grafts. 
Highly sensitized patients receiving living donor grafts had graft survival rates similar to those who were not sensitized. 9. Cold ischemia times decreased from an average of 24.2 hours in 1991 to 18.9 hours in 2000. Improved graft survival rates over those 10 years were noted in all groups, and even cold ischemia times more than 36 hours yielded 3-year graft survivals comparable to those with lower cold ischemia times in 1998. 10. The need for dialysis has remained constant at about 23% over the last 10 years for patients receiving kidneys from cadaveric donors. The rate of dialysis for patients receiving kidneys from living donors was about 5% for each of the 10 years examined. First day anuria increased from 11% in 1991 to 16% in 2000 for cadaver donor transplants and 3% in 1999 to 5% in 2000 for living donor grafts. 11. Cadaveric donor patients requiring dialysis had a 3-year graft survival rate of 63% if there was no first day anuria and 56% if they had first day anuria. This is in contrast to 80% 3-year graft survival for those with immediate diuresis and no need for dialysis. The 3-year graft survival rate for those receiving living donor grafts and needing dialysis was 58% if they had first day diuresis and 41% if they were anuric on the first day. Conversely, those who had first day function and did not require dialysis had 89% 3-year graft survival. 12. Among the patients receiving cadaveric grafts with first day diuresis there was a marked reduction in those with rejection, from 21% in 1991 to 5% in 2000. Similarly, for this type of patient receiving living donor grafts, the reduction was 17% in 1991 to 5% in 2000. However, graft survival among these patients did not change significantly. The greatest improvement was noted in those with first day anuria and no rejection. 13. 
Patients who did not require dialysis and had rejection prior to discharge decreased markedly from 17% in 1991 to 3% in 2000 in those receiving cadaveric grafts and 15% in 1991 to 3.9% in 2000 for those receiving living donors. Graft survival of cadaveric transplants in those needing dialysis, with and without rejection, improved the most in the 10-year period. 14. Hospitalization days for cadaveric transplant recipients were reduced from 19 days in 1991 to 10 days in 2000 and 16 days in 1991 to 8 days in 2000 for recipients of living donor grafts. There was an increase in discharge serum creatinine values from 2.3 mg/dl in 1991 to 3.3 mg/dl in 2000 for cadaver donor grafts. 15. Double therapy was utilized for about 15% of cadaveric and living donors. There was a sharp increase in induction therapy, peaking at 51% in 1994 and decreasing to 5% by 2000 for cadaveric donor transplants. Induction did not improve graft survival for either cadaver or living donor transplant recipients. 16. Triple therapy improved graft survival of White and Black patients, but did not affect the half-lives in either race. 17. The lower graft survival from older donors was not affected by triple therapy for cadaver donor transplants. Triple therapy removed the donor age effect for recipients of living donor grafts. 18. Triple therapy practically eliminated the effect of sensitization for cadaveric donor grafts. Both double and triple therapy virtually eliminated the sensitization effect for living donors. 19. Triple therapy significantly improved the survival of kidneys with more than 36 hours cold ischemia time, so that graft survival at 3 years was 76% compared with 81% for kidneys stored 1-12 hours. 20. Triple therapy improved the 3-year graft survival of kidneys with first day anuria from 50% for double therapy to 69% for triple therapy in cadaver donor transplants. 
For living donor transplants, there was a similar improvement from 57% with double therapy to 72% with triple therapy. 21. Triple therapy improved the 3-year cadaveric graft survival rate of kidneys requiring dialysis from 51% with double therapy to 67% for triple therapy. There was a similar improvement for living donors needing dialysis from 37% to 61% at 3 years.
Immunoprophylaxis with influenza vaccine is the primary method for reducing the effect of influenza on children, and inactivated influenza vaccine has been shown to be safe and effective in children. The Advisory Committee on Immunization Practices recommends that children 6 to 23 months of age who are receiving trivalent inactivated influenza vaccine for the first time be given 2 doses; however, delivering 2 doses of trivalent inactivated influenza vaccine > or = 4 weeks apart each fall can be logistically challenging. We evaluated an alternate spring dosing schedule to assess whether a spring dose of trivalent inactivated influenza vaccine was capable of "priming" the immune response to a fall dose of trivalent inactivated influenza vaccine containing 2 different antigens. Healthy children born between November 1, 2002, and December 31, 2003, were recruited in the spring and randomly assigned to either the alternate spring schedule or standard fall schedule. The 2003-2004 licensed trivalent inactivated influenza vaccine was administered in the spring; the fall 2004-2005 vaccine had the same A/H1N1 antigen but contained drifted A/H3N2 antigen and B antigen with a major change in strain lineage. Reactogenicity was assessed by parental diaries and telephone surveillance. Blood was obtained after the second dose of trivalent inactivated influenza vaccine for all of the children and after the first dose of trivalent inactivated influenza vaccine in the fall group. The primary outcome of this study was to demonstrate noninferiority of the antibody response after a spring-fall dosing schedule compared with the standard fall dosing schedule. Noninferiority was based on the proportion of subjects in each group achieving a hemagglutination-inhibition antibody titer of > or = 1:32 after vaccination to 2 of the 3 antigens (A/H1N1, A/H3N2, and B) contained in the 2004-2005 vaccine. 
For each antigen, the antibody response was proposed to be noninferior if, within the upper bound of 95% confidence interval, there was < 15% difference between the proportion of children in the fall and spring groups with postvaccination titers > or = 1:32. A total of 468 children were randomly assigned to either the spring (n = 233) or fall (n = 235) trivalent inactivated influenza vaccine schedule. Excellent response rates to A/H1N1, as measured by antibody levels > or = 1:32, were noted in both the spring (86%) and fall groups (93%). The A/H1N1 response rate of the spring group was noninferior to that of the fall group. Noninferiority of the spring schedule was not met with respect to the other 2 influenza antigens: for A/H3N2 the response was 70% in the spring group versus 83% for the fall group, and the response to B was 39% in the spring group versus 88% for the fall group. After 2 doses of vaccine, the geometric mean antibody titers also were less robust in the spring group for both A/H3N2 and B antigens. For each of the 3 vaccine antigens, the respective geometric mean antibody titers for the spring group versus the fall group were: A/H1N1, 79.5 +/- 3.3 and 91.9 +/- 2.6; A/H3N2, 57.1 +/- 4.1 and 77.8 +/- 3.7; and B, 18.0 +/- 2.4 and 61.6 +/- 2.5. However, a significantly higher proportion of children in the spring group achieved potentially protective levels of antibody to all 3 antigens after their first fall dose of trivalent inactivated influenza vaccine than children in the fall group after receiving their first fall dose. For influenza A/H1N1, there was an antibody level > or = 1:32 in 86% of children in the spring group versus 55% of children in the fall group. Likewise, for influenza A/H3N2, 70% of children in the spring group and 47% of children in the fall group had antibody levels > 1:32; for influenza B, the proportions were 39% of children in the spring group and 16% of children in the fall group. 
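The noninferiority criterion described above — for each antigen, the upper bound of the 95% confidence interval for the difference in the proportions achieving a titer of > or = 1:32 must be below 15% — can be illustrated with a simple two-proportion Wald interval. This is only an illustrative sketch: the trial statisticians may have used a different interval method, and the denominators here are the randomization counts (235 fall, 233 spring) rather than per-protocol numbers.

```python
import math

def noninferior(p_ref, n_ref, p_new, n_new, margin=0.15, z=1.96):
    """Wald 95% CI for the difference (p_ref - p_new) in response
    proportions; declare noninferiority if the upper bound < margin."""
    diff = p_ref - p_new
    se = math.sqrt(p_ref * (1 - p_ref) / n_ref + p_new * (1 - p_new) / n_new)
    return diff + z * se < margin

# A/H1N1: fall 93% vs spring 86% -- upper bound ~0.125, within the 15% margin
print(noninferior(0.93, 235, 0.86, 233))   # True
# B: fall 88% vs spring 39% -- the point difference alone far exceeds 15%
print(noninferior(0.88, 235, 0.39, 233))   # False
```

Consistent with the reported results, the A/H1N1 response meets the margin while the B response clearly does not.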
Reactogenicity after trivalent inactivated influenza vaccine in both groups of children was minimal and did not differ by dose. Although the immune response to the identical A/H1N1 vaccine antigen was similar in both groups, priming with different A/H3N2 antigens and B antigens in the spring produced a lower immune response to both antigens than that shown in children who received 2 doses of the same vaccine in the fall. However, approximately 70% of children in the spring group had a protective response to the H3N2 antigen after 2 doses. Initiating influenza immunization in the spring was superior to 1 dose of trivalent inactivated influenza vaccine in the fall. The goal of delivering 2 doses of influenza vaccine a month apart to vaccine-naive children within the narrow flu vaccination season is a challenge not yet met; thus far, only about half of children aged 6 to 23 months are receiving influenza vaccine. By using the spring schedule, we were able to administer 2 doses of trivalent inactivated influenza vaccine to a higher proportion of children earlier in the influenza vaccination season. In years when there is an ample supply of trivalent inactivated influenza vaccine, and vaccine remains at the end of the season, priming influenza vaccine-naive infants with a spring dose will lead to the earlier protection of a higher proportion of infants in the fall. This strategy may be particularly advantageous when there is an early start to an influenza season, as occurred in the fall of 2003. A priming dose of influenza vaccine in the spring may also offer other advantages. Many vaccine-naive children may miss the second dose of fall trivalent inactivated influenza vaccine because of vaccine shortages or for other reasons, such as the potential implementation of new antigens at a late date. 
Even with seasonal changes in influenza vaccine antigens, by giving a springtime dose of trivalent inactivated influenza vaccine, such children would be more protected against influenza than would children who were only able to receive 1 dose in the fall. In summary, our data suggest that identical influenza antigens are not necessary for priming vaccine-naive children and that innovative uses of influenza vaccine, such as a springtime dose of vaccine, could assist in earlier and more complete immunization of young children.
Ulcerative colitis (UC) and Crohn's disease (CD), collectively referred to as inflammatory bowel disease (IBD), are chronic immune disorders affecting the gastrointestinal tract. The aetiology of IBD remains an enigma, but increasing evidence suggests that the development of IBD may be triggered by a disturbance in the balance between gut commensal bacteria and host response in the intestinal mucosa. It is now known that epithelial cells have the capacity to secrete and respond to a range of immunological mediators, and this suggests that these cells play a prominent role in the pathogenesis of IBD. Current knowledge about the intestinal epithelium has mainly been obtained using models based on animal cells, transformed human intestinal cell lines and isolated cells from resected colonic bowel segments. Species differences, malignant origin and confounders related to surgery, however, make these cell models less applicable for pathophysiological studies. Consequently, there was a clear need for models of representative intestinal epithelial cells that would allow functional and dynamic studies of the differentiated human colonic epithelium in vitro. The primary purpose of this thesis was to explore and validate the optimal conditions for establishing a model based on short-term cultures of human colonic epithelial cells obtained from endoscopical biopsies. The cell cultures were accordingly used to describe the interplay between proinflammatory cytokines and colonic epithelium, with focus on alterations in viability, butyrate metabolism and secretion of a chemokine and metalloproteinases (MMP). Finally, the model was used to characterize expression and activation of receptors like Toll-like receptor (TLR)9 and peroxisome proliferator-activated receptor (PPAR)γ, known to be important players in regulation of innate and adaptive immune responses in human colonic epithelium. 
The results showed that it is possible to establish short-term cultures of representative, viable human colonic epithelial cells from endoscopic mucosal biopsies of patients with IBD. Short-term isolation by EGTA/EDTA from colonic biopsies allowed establishment of small scale cultures of epithelial cells which were viable and metabolically active for up to 48 hours in vitro. The cell model preserved important cellular metabolic and immunological functions of the human colonic epithelium, including the ability to oxidise butyrate, detoxify phenolic compounds and secrete the chemokine interleukin (IL)-8 in vitro. Tumour necrosis factor (TNF)-α and interferon (IFN)-γ are pro-inflammatory cytokines, which are present in increased amounts in inflamed colonic mucosa. The precise mechanisms of cytokine-mediated mucosal injury are unknown, but one might be that TNF-α and IFN-γ directly impair epithelial cell function similar to effects seen on distinct target cells in other autoimmune diseases. Using the model, both cytokines were found to directly impair the viability of colonic epithelial cells and to induce secretion of IL-8 in vitro. Interestingly, the cells from inflamed IBD mucosa were less sensitive to cytokine-induced damage, which suggests that an intrinsic defense mechanism is triggered in these cells, perhaps as a result of exposure to toxic luminal factors or high local cytokine levels in vivo. TNF-α and IFN-γ may also be involved in regulation of intestinal inflammation through stimulation of MMP expression and proteolytic activity. We found that colonic epithelial cells express a range of MMPs and moreover that expression of distinct MMPs is increased in cells from inflamed IBD mucosa. Using a functional peptide cleavage assay, it was shown that epithelial cells secreted proteolytically active enzymes and that the functional MMP activity was increased in inflamed IBD mucosa.
This suggests that colonic epithelial cells, like myofibroblasts and immune cells, may contribute to local intestinal mucosal damage through secretion of active MMPs. Disturbances in the recognition and discrimination of potentially harmful pathogens from commensals in the intestinal mucosa have increasingly been implicated in the pathogenesis of IBD. Our results revealed that colonic epithelial cells express TLR9, a key pattern recognition receptor. Interestingly, the differentiated epithelial cells, which have been exposed to the luminal bacterial flora in vivo, were unresponsive to TLR9 ligand stimulation, contrasting findings in the epithelial cell line HT-29 that is cultured continuously in a bacteria-free environment. These findings suggest, theoretically, that colonic epithelium may regulate immune responses to microbial antigens including commensal bacterial DNA through modulation of the TLR9 pathway. Currently, the results are in line with the emerging view that the epithelium represents an important frontline cellular component of the innate immune system in the gut. PPARγ is a nuclear receptor involved in the regulation of lipid and carbohydrate metabolism. Recent studies in rodent colitis models suggest that PPARγ is also involved in modulation of inflammatory processes in the colon. Using the model, we characterised the expression and activity of PPARs in human colonic epithelium and, additionally, evaluated the functional significance of a possible imbalanced PPARγ regulation in relation to inflammation. Our experiments showed that colonic epithelial cells express PPARγ and furthermore that PPARγ signalling was impaired in inflamed UC epithelium. It was possible to restore PPARγ signalling in the cell cultures by stimulation with rosiglitazone (a synthetic PPARγ ligand) in vitro. Hence, these experiments prompted us to design a small controlled, clinical study exploring the possible stimulatory effects of rosiglitazone (a PPARγ ligand) in vivo.
Interestingly, it was found that topical application of rosiglitazone in patients with active distal UC reduced clinical activity and mucosal inflammation similar to the effects measured in patients treated with mesalazine enemas. Moreover, rectal application of rosiglitazone induced PPARγ signalling in the epithelium in vivo, supporting the view that activation of PPARγ may be a new potential therapeutic target in the treatment of UC. Overall, the in vitro model of representative human colonic epithelial cells has been shown to be a useful technique for detailed studies of metabolic and immunological functions that are important for homeostasis of the colonic epithelium. Currently, the findings support the view that intestinal epithelial cells actively participate in immunological processes in the colonic mucosa. Additionally, the model seems to be applicable for generating and evaluating new therapeutic approaches from laboratory bench to bedside, as illustrated by the PPARγ study. It is therefore probable that studies in models of representative colonic epithelial cells, as the one described here, could contribute important knowledge about the pathogenesis of human inflammatory colonic diseases also in the future.
Methacrylonitrile is an aliphatic nitrile used extensively in the preparation of homo- and copolymers, elastomers, and plastics and as a chemical intermediate in the preparation of acids, amides, esters, and other nitriles. This aliphatic nitrile is also used as a replacement for acrylonitrile in the manufacture of an acrylonitrile/butadiene/styrene-like polymer. Methacrylonitrile was nominated for toxicity and carcinogenicity testing by the National Cancer Institute due to its high production volume and extensive use, the lack of chronic or carcinogenicity data, and its structural resemblance to the known rat carcinogen acrylonitrile. The current 13-week studies were conducted as part of an overall effort by the NTP to assess the toxicity and carcinogenicity of methacrylonitrile. During the 13-week studies, groups of 20 male and 20 female F344/N rats were administered 0, 7.5, 15, 30, 60, or 120 mg methacrylonitrile/kg body weight in deionized, purified water by gavage. Groups of 20 male and 20 female B6C3F1 mice were administered 0, 0.75, 1.5, 3, 6, or 12 mg/kg methacrylonitrile. Ten male and ten female rats and mice from each group were evaluated on day 32. The results of these studies clearly revealed that male rats are more sensitive than females to methacrylonitrile treatment. In the rat study, 19 males and one female administered 120 mg/kg and two males administered 60 mg/kg died during the first week of the study. Males in the 60 mg/kg group at the 32-day interim evaluation and at 13 weeks and females in the 120 mg/kg group at 13 weeks had significantly lower final mean body weights and body weight gains than did the vehicle controls; the surviving male in the 120 mg/kg group also weighed less than the controls at the 32-day interim evaluation. Clinical findings of toxicity were dose dependent and included lethargy, lacrimation, tremors, convulsions, ataxia, and abnormal breathing. 
There was hematologic evidence indicating that administration of methacrylonitrile induced minimal, normocytic, normochromic anemia. At the 32-day interim evaluation, a minimal dose-related anemia was evidenced by decreases in hematocrit values, hemoglobin concentrations, and erythrocyte counts in male and female rats. The anemia ameliorated by week 13. Administration of methacrylonitrile resulted in dose-related increases in serum thiocyanate and blood cyanide concentrations of male and female rats. These changes were expected and would be consistent with the in vivo metabolism of methacrylonitrile to cyanide. Blood cyanide concentrations were generally higher in males than in females, which may explain the higher sensitivity of males to the lethal effect of methacrylonitrile. There was also biochemical evidence of increased hepatocellular leakage and/or altered function in dosed male rats, suggesting that the liver may be a target organ for toxic effects of methacrylonitrile. Minimal, but significant, decreases in absolute right kidney and thymus weights (32-day interim evaluation) and increases in liver and stomach weights (week 13) occurred in male rats that received 60 mg/kg compared to the vehicle controls. In female rats, stomach weights of the 60 and 120 mg/kg groups were significantly greater and thymus weights of the 120 mg/kg group were significantly less than those of the controls on day 32 and at week 13; liver weights were also significantly greater in females in the 120 mg/kg group than in the vehicle controls on day 32. Male and female rats administered 60 mg/kg and females administered 120 mg/kg had significantly greater incidences of metaplasia of the nasal olfactory epithelium on day 32 and at the end of the study than did the vehicle controls; incidences of olfactory epithelial necrosis were also significantly greater in females in the 60 and 120 mg/kg groups than in the vehicle controls on day 32. 
Incidence and/or severity increased with increasing dose in females; however, the mortality in male rats administered 120 mg/kg made it difficult to assess the dose-response relationship in males. The no-observed-adverse-effect level for the nasal cavity of rats was 30 mg/kg. Female rats administered 60 or 120 mg/kg methacrylonitrile had significantly longer estrous cycles than did the vehicle controls. Females in the 60 mg/kg group spent more time in diestrus than the vehicle controls. One male and one female mouse in the 12 mg/kg groups died early. Methacrylonitrile administration caused no significant differences in final mean body weights or body weight gains. Clinical findings included lethargy, tremors, ataxia, convulsions, and abnormal breathing. At the 32-day interim evaluation, stomach weights of males administered 3 mg/kg or greater were significantly greater and thymus weights of males in the 12 mg/kg group were significantly less than those of the vehicle controls. At week 13, however, the stomach weights of only males in the 12 mg/kg group were increased relative to the vehicle controls. No treatment-related histopathologic lesions occurred in mice. Methacrylonitrile did not induce mutations in any of several strains of Salmonella typhimurium, with or without S9 activation, and did not induce sex-linked recessive lethal mutations in germ cells of male Drosophila melanogaster fed methacrylonitrile during the larval stage. Results of in vivo bone marrow micronucleus tests with methacrylonitrile in male rats and mice were also negative. In summary, gavage administration of methacrylonitrile to rats and mice resulted in dose-dependent lethargy, tremors, lacrimation, convulsions, and abnormal breathing. However, these effects were more pronounced in rats than mice; these differences may be attributed to the higher doses of methacrylonitrile administered to rats. 
Body weight gain and survival data of rats demonstrated that males are more sensitive to methacrylonitrile dosing than females. There is an apparent correlation between blood cyanide concentrations and survival rates, with males having greater cyanide concentrations and lower survival rates than female rats administered methacrylonitrile. Microscopically, the only target of methacrylonitrile toxicity was the olfactory epithelium of the nasal cavity. Necrotic and metaplastic effects were induced in male and female rats that received 60 or 120 mg/kg per day. No similar lesions were observed in mice administered methacrylonitrile. The no-observed-adverse-effect level for olfactory epithelial lesions in male and female rats administered methacrylonitrile for 13 weeks was 30 mg/kg per day. No clear chemical-related effects were observed in male or female mice administered methacrylonitrile for 13 weeks by gavage at doses up to 12 mg/kg per day.
During the past decade, we have observed advances in tuberculosis research including novel vaccines, innate immunity (TLR), SNP analysis and the molecular mechanism of drug resistance. Worldwide genome projects have yielded whole genome sequences of hosts resistant to tuberculosis as well as the whole genome sequence of M. tuberculosis H37Rv. DNA technology has also provided a great impact on the development of novel vaccines against TB. In this symposium, we have invited leading researchers in the field of frontier Mycobacterium research in order to provide a general overview of the cutting edge of that research. The molecular mechanism of drug resistance of M. tuberculosis has been clarified. On the other hand, the molecular mechanism of host defence (insusceptibility of the host) against M. tuberculosis has not yet been elucidated. Dr. Taro Shirakawa (Kyoto University) reviewed the susceptibility genes of the host in TB infection and presented candidate genes associated with multi-drug resistant tuberculosis. Dr. Naoto Keicho (International Medical Center of Japan) tried to identify host genetic factors involved in susceptibility to pulmonary Mycobacterium avium complex (MAC) infection by a candidate gene approach and a genome-wide approach. In Japan, Dr. Masaji Okada (National Hospital Organization Kinki-Chuo Chest Medical Center) has been actively engaged in the development of new tuberculosis vaccines (HVJ-liposome/Hsp65 DNA + IL-12 DNA vaccine and recombinant 72f BCG vaccine). He showed the basic strategy for construction of new candidate vaccines and also showed significant efficacy in protection against tuberculosis infection using cynomolgus monkeys, which are very similar to human tuberculosis. Dr. Hatsumi Taniguchi (University of Occupational and Environmental Health) presented that M. tuberculosis mIHF and the neighbor genes drove M. smegmatis into a dormancy-like state in J774 macrophage cells.
This study might provide a weapon for elucidating the mechanism of dormancy of M. tuberculosis and the development of novel therapies. Dr. Chiyoji Abe (Nippon Becton Dickinson Co.) reviewed the molecular basis of the resistance to anti-tuberculosis drugs. Most cases of resistance are related to simple nucleotide substitutions rather than to acquisition of new elements. Dr. Kiyoshi Takeda (Kyushu University) presented an interesting finding. He analyzed whether Toll-like receptor (TLR)-mediated activation of innate immunity is involved in host defense against mycobacterial infection. MyD88/TRIF double-deficient mice showed high sensitivity to mycobacterial infection, indicating that innate immunity is involved in anti-mycobacterial defense. (1) SNP (single nucleotide polymorphism) analysis in association with Mycobacterium tuberculosis: Taro SHIRAKAWA (Department of Health Promotion & Human Behavior, Kyoto University Medical School, and RIKEN SRC Center) A candidate gene approach examined 18 SNPs in 11 genes in association with M. tuberculosis. Patients with multi-drug resistance against M. tuberculosis were also studied. SNPs in the NRAMP1 gene were associated with the disease and with drug resistance, although the underlying mechanisms remain unknown. (2) Search for genes susceptible to pulmonary Mycobacterium avium complex infection: Naoto KEICHO (Department of Respiratory Diseases, Research Institute, International Medical Center of Japan) Interaction between pathogens and host factors is important for development of infectious diseases. We are trying to identify host genetic factors involved in susceptibility to non-immunocompromised pulmonary Mycobacterium avium complex (MAC) infection by a candidate gene approach and a genome-wide approach. Elucidation of the functional significance of susceptibility gene polymorphisms will lead to a new strategy for control and prevention of the disease.
(3) T cell immunity against tuberculosis in the host and the establishment of novel vaccines: Masaji OKADA (Clinical Research Center, National Hospital Organization Kinki-Chuo Chest Medical Center) T cell (CTL, Th1) immunity, including granulysin, plays an important role in host defense against tuberculosis (TB) in humans. Patients with TB or multi-drug resistant TB showed suppression of all these immunities. HVJ-liposome/Hsp65 DNA + IL-12 DNA vaccination was 100-fold more efficient than BCG in the elimination of Mycobacterium tuberculosis (M. TB) in BALB/c mice. Cytotoxic T cell activity against M. TB was augmented. By using these new vaccines (Hsp65 DNA + IL-12 DNA, recombinant 72f BCG) and the cynomolgus monkey models, which are very similar to human tuberculosis, the prophylactic effect of the vaccines was observed. Thus, these novel vaccines should provide a useful tool for the prevention of human TB infection. (4) Mycobacterium tuberculosis mIHF and the neighbor genes go into a dormancy-like state of M. smegmatis J15CS in J774 cells: Hatsumi TANIGUCHI (Department of Microbiology, School of Medicine, University of Occupational and Environmental Health) Mycobacterium smegmatis J15CS transformants harboring the mIHF gene or the mIHF-gmk-Rv1390 genes showed no difference in in vitro growth and acid-fastness. However, transformants harboring mIHF-gmk-Rv1390 formed short-rod cell morphology and decreased acid-fastness in the mouse macrophage-like cell line J774 compared to those of the other transformants, and the nuclei of the infected J774 cells also changed. Nevertheless, the colony forming units were similar. These findings indicate that mIHF and the neighbor genes of M. tuberculosis might regulate the growth of mycobacteria in macrophages. (5) Molecular basis of the resistance to anti-tuberculosis drugs: Chiyoji ABE (Nippon Becton Dickinson Company, Ltd.) Considerable progress has been made toward understanding the molecular basis of the resistance to anti-tuberculosis drugs.
Most cases of resistance are usually related to simple nucleotide substitutions rather than to acquisition of new elements. Multi-drug resistant isolates of Mycobacterium tuberculosis arise as a consequence of sequential accumulation of mutations conferring resistance to single therapeutic agents. The basis of resistance cannot yet be explained in a substantial percentage of strains for anti-tuberculosis drugs other than rifampin and pyrazinamide. Further studies are required to fully understand the molecular mechanisms of resistance. (6) Toll-like receptors in anti-mycobacterial immune responses: Kiyoshi TAKEDA (Department of Molecular Genetics, Medical Institute of Bioregulation, Kyushu University) Toll-like receptors (TLRs) play an essential role in the recognition of specific patterns of microbial components. TLRs mediate activation of innate immunity and further development of antigen-specific adaptive immunity. In TLR signaling pathways, Toll/IL-1 receptor (TIR) domain-containing adaptors, such as MyD88, TIRAP, TRIF, and TRAM, have been shown to play pivotal roles. Thus, the molecular mechanisms for TLR-mediated activation of innate immunity have been largely understood. We analyzed whether TLR-mediated activation of innate immunity is involved in host defense against mycobacterial infection. MyD88/TRIF double deficient mice, in which TLR-dependent activation of innate immunity is abolished, showed high sensitivity to mycobacterial infection, indicating that innate immunity is critically involved in anti-mycobacterial responses.
The growing shift toward home care services assumes that "being home is good" and that this is the most desirable option. Although ethical issues in medical decision-making have been examined in numerous contexts, home care decisions for technology-dependent children and the moral dilemmas that this population confronts remain virtually unknown. This study explored the moral dimension of family experience through detailed accounts of life with a child who requires assisted ventilation at home. This study involved an examination of moral phenomena inherent in (1) the individual experiences of the ventilator-assisted child, siblings, and parents and (2) everyday family life as a whole. A qualitative method based on Richard Zaner's interpretive framework was selected for this study. The population of interest for this study was the families of children who are supported by a ventilator or a positive-pressure device at home. Twelve families (38 family members) were recruited through the Quebec Program for Home Ventilatory Assistance. Children in the study population fell into 4 diagnostic groups: (1) abnormal ventilatory control (eg, central hypoventilation syndrome), (2) neuromuscular disorders, (3) spina bifida, and (4) craniofacial or airway abnormalities resulting in upper airway obstruction. All 4 of these diagnostic groups were included in this study. Among the 12 children recruited, 4 received ventilation via tracheostomies, and 8 received ventilation with face masks. All of the latter received ventilation only at night, except for 1 child, who received ventilation 24 hours a day. Family moral experiences were investigated using semistructured interviews and fieldwork observations conducted in the families' homes. Data analysis identified 6 principal themes. The themes raised by families whose children received ventilation invasively via a tracheostomy were not systematically different from those raised by families of children with face masks, nor did these families appear more distressed.
The principal themes were (1) confronting parental responsibility: parental responsibility was described as stressful and sometimes overwhelming. Parents needed to devote extraordinary care and attention to their children's needs. They struggled with the significant emotional strain, physical and psychological dependence of the child, impact on family relationships, living with the daily threat of death, and feeling that there was "no free choice" in the matter: they could not have chosen to let their child die. (2) Seeking normality: all of the families devoted significant efforts toward normalizing their experiences. They created common routines so that their lives could resemble those of "normal" families. These efforts seemed motivated by a fundamental striving for a stable family and home life. This "striving for stability" was sometimes undermined by limitations in family finances, family cohesion, and unpredictability of the child's condition. (3) Conflicting social values: families were offended by the reactions that they faced in their everyday community. They believed that the child's life was devalued and frequently referred to as a life not worth maintaining. They felt like strangers in their own communities, sometimes needing to seclude themselves within their homes. (4) Living in isolation: families reported a deep sense of isolation. In light of the complex medical needs of these children, neither the extended families nor the medical system could support the families' respite needs. (5) What about the voice of the child? The children in this study (patients and siblings) were generally silent when asked to talk about their experience. Some children described their ventilators as good things. They helped them breathe and feel better. Some siblings expressed resentment toward the increased attention that their ventilated sibling was receiving. (6) Questioning the moral order: most families questioned the "moral order" of their lives.
They contemplated how "good things" and "bad things" are determined in their world. Parents described their life as a very unfair situation, yet there was nothing that they could do about it. Finally, an overarching phenomenon that best characterizes these families' experiences was identified: daily living with distress and enrichment. Virtually every aspect of the lives of these families was highly complicated and frequently overwhelming. An immediate interpretation of these findings is that families should be fully informed of the demands and hardships that would await them, encouraging parents perhaps to decide otherwise. This would be but a partial reading of the findings, because despite the enormous difficulties described by these families, they also reported deep enrichments and rewarding experiences that they could not imagine living without. Life with a child who requires assisted ventilation at home involves living every day with a complex tension between the distresses and enrichments that arise out of this experience. The conundrum inherent in this situation is that there are no simple means for reconciling this tension. This irreconcilability is particularly stressful for these families. Having their child permanently institutionalized or "disconnected" from ventilation (and life) would eliminate both the distresses and the enrichments. These options are outside the realm of what these families could live with, aside from the 1 family whose child is now permanently hospitalized, at a tremendous cost of guilt to the family. 
These findings make important contributions by (1) advancing our understanding of the moral experiences of this group of families; (2) speaking to the larger context of other technology-dependent children who require home care; (3) relating home care experiences to neonatal, critical care, and other hospital services, suggesting that these settings examine their approaches to this population that may impose preventable burdens on the lives of these children and their families; and (4) examining a moral problem with an empirical method. Such problems are typically investigated through conceptual analyses, without directly examining lived experience. These findings advance our thinking about how we ought to care for these children, through a better understanding of what it is like to care for them and the corresponding major distresses and rewarding enrichments. These findings call for an increased sensitization to the needs of this population among staff in critical care, acute, and community settings. Integrated community support services are required to help counter the significant distress endured by these families. Additional research is required to examine the experience of other families who have decided either not to bring home their child who requires ventilation or withdraw ventilation and let the child die.
To assess what is known about the effectiveness, safety, affordability, cost-effectiveness and organisational impact of endoscopic surveillance in preventing morbidity and mortality from adenocarcinoma in patients with Barrett's oesophagus. In addition, to identify important areas of uncertainty in current knowledge for these programmes and to identify areas for further research. Electronic databases up to March 2004. Experts in Barrett's oesophagus from the UK. A systematic review of the effectiveness of endoscopic surveillance of Barrett's oesophagus was carried out following methodological guidelines. Experts in Barrett's oesophagus from the UK were invited to contribute to a workshop held in London in May 2004 on surveillance of Barrett's oesophagus. Small group discussion, using a modified nominal group technique, identified key areas of uncertainty and ranked them for importance. A Markov model was developed to assess the cost-effectiveness of a surveillance programme for patients with Barrett's oesophagus compared with no surveillance and to quantify important areas of uncertainty. The model estimates incremental cost-utility and expected value of perfect information for an endoscopic surveillance programme compared with no surveillance. A cohort of 1000 55-year-old men with a diagnosis of Barrett's oesophagus was modelled for 20 years. The base case used costs in 2004 and took the perspective of the UK NHS. Estimates of expected value of information were included. No randomised controlled trials (RCTs) or well-designed non-randomised controlled studies were identified, although two comparative studies and numerous case series were found. Reaching clear conclusions from these studies was impossible owing to lack of RCT evidence. In addition, there was incomplete reporting of data particularly about cause of death, and changes in surveillance practice over time were mentioned but not explained in several studies.
Three cost-utility analyses of surveillance of Barrett's oesophagus were identified, of which one was a further development of a previous study by the same group. Both sets of authors used Markov modelling and confined their analysis to 50- or 55-year-old white men with gastro-oesophageal reflux disease (GORD) symptoms. The models were run either for 30 years or to age 75 years. As these models are American, there are almost certainly differences in practice from the UK and possible underlying differences in the epidemiology and natural history of the disease. The costs of the procedures involved are also likely to be very different. The expert workshop identified the following key areas of uncertainty that needed to be addressed: the contribution of risk factors for the progression of Barrett's oesophagus to the development of high-grade dysplasia (HGD) and adenocarcinoma of the oesophagus; possible techniques for use in the general population to identify patients with high risk of adenocarcinoma; effectiveness of treatments for Barrett's oesophagus in altering cancer incidence; how best to identify those at risk in order to target treatment; whether surveillance programmes should take place at all; and whether there are clinical subgroups at higher risk of adenocarcinoma. Our Markov model suggests that the base case scenario of endoscopic surveillance of Barrett's oesophagus at 3-yearly intervals, with low-grade dysplasia surveyed yearly and HGD 3-monthly, does more harm than good when compared with no surveillance. Surveillance produces fewer quality-adjusted life-years (QALYs) for higher cost than no surveillance, therefore it is dominated by no surveillance. The cost per cancer identified approaches £45,000 in the surveillance arm and there is no apparent survival advantage owing to high recurrence rates and increased mortality due to more oesophagectomies in this arm.
Non-surveillance continues to cost less and result in better quality of life whatever the surveillance intervals for Barrett's oesophagus and dysplastic states and whatever the costs (including none) attached to endoscopy and biopsy as the surveillance test. The probabilistic analyses assess the overall uncertainty in the model. According to this, it is very unlikely that surveillance will be cost-effective even at relatively high levels of willingness to pay. The simulation showed that, in the majority of model runs, non-surveillance continued to cost less and result in better quality of life than surveillance. At the population level (i.e. people with Barrett's oesophagus in England and Wales), a value of £6.5 million is placed on acquiring perfect information about surveillance for Barrett's oesophagus using expected value of perfect information (EVPI) analyses, if the surveillance is assumed to be relevant over 10 years. As with the one-way sensitivity analyses, the partial EVPI highlighted recurrence of adenocarcinoma of the oesophagus (ACO) after surgery and time taken for ACO to become symptomatic as particularly important parameters in the model. The systematic review concludes that there is insufficient evidence available to assess the clinical effectiveness of surveillance programmes of Barrett's oesophagus. There are numerous gaps in the evidence, of which the lack of RCT data is the major one. The expert workshop reflected these gaps in the range of topics raised as important in answering the question of the effectiveness of surveillance. Previous models of cost-effectiveness have most recently shown that surveillance programmes either do more harm than good compared with no surveillance or are unlikely to be cost-effective at usual levels of willingness to pay.
Our cost-utility model has shown that, across a range of values for the various parameters that have been chosen to reflect uncertainty in the inputs, it is likely that surveillance programmes do more harm than good, costing more and conferring lower quality of life than no surveillance. Probabilistic analysis shows that, in most cases, surveillance does more harm and costs more than no surveillance. It is unlikely, but still possible, that surveillance may prove to be cost-effective. The cost-effectiveness acceptability curve, however, shows that surveillance is unlikely to be cost-effective at either the 'usual' level of willingness to pay (£20,000-30,000 per QALY) or at much higher levels. The expected value of perfect information at the population level is £6.5 million. Future research should target both the overall effectiveness of surveillance and the individual elements that contribute to a surveillance programme, particularly the performance of the test and the effectiveness of treatment for both Barrett's oesophagus and ACO. In addition, of particular importance is the clarification of the natural history of Barrett's oesophagus. | Given the following content, create a question whose answer can be found within the content. Then, provide the answer to that question. Ensure the answer is derived directly from the content. Format the question and answer in the following JSON structure: {Question: '', Answer: ''}. 
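The Markov cohort modelling described above can be illustrated with a minimal simulation. Every number below (the health states, annual transition probabilities, costs, utilities, and discount rate) is an invented placeholder for illustration only, not a value from the study's model:

```python
import numpy as np

# States: 0 Barrett's (no dysplasia), 1 low-grade dysplasia,
# 2 high-grade dysplasia, 3 adenocarcinoma, 4 dead.
# Annual transition probabilities below are illustrative placeholders.
P = np.array([
    [0.95, 0.03, 0.01, 0.005, 0.005],
    [0.10, 0.82, 0.05, 0.01,  0.02],
    [0.00, 0.05, 0.80, 0.10,  0.05],
    [0.00, 0.00, 0.00, 0.60,  0.40],
    [0.00, 0.00, 0.00, 0.00,  1.00],
])

cost = np.array([200.0, 400.0, 1500.0, 15000.0, 0.0])  # annual cost per state (illustrative)
utility = np.array([0.90, 0.85, 0.75, 0.50, 0.0])      # QALY weight per state (illustrative)

def run_model(p, cycles=30, discount=0.035):
    """Run a cohort Markov model; return (discounted cost, discounted QALYs) per person."""
    dist = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # whole cohort starts in Barrett's
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + discount) ** t
        total_cost += d * (dist @ cost)
        total_qaly += d * (dist @ utility)
        dist = dist @ p  # advance the cohort one annual cycle
    return total_cost, total_qaly

c, q = run_model(P)
print(f"Discounted cost per person: {c:.0f}, QALYs: {q:.2f}")
```

Running the same loop with a second transition matrix and cost vector representing a surveillance strategy, then comparing the two (cost, QALY) pairs, is the dominance comparison reported above: a strategy that costs more and yields fewer QALYs is dominated.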
Vermeulen H, van Hattem JM, Storm-Versloot MN, Ubbink DT. Topical silver for treating infected wounds. Cochrane Database Syst Rev. 2007;(1):CD005486. What is the clinical evidence base for silver dressings in the management of contaminated and infected acute and chronic wounds? Investigations were identified by Cochrane Wounds Group Specialized Register (2006), CENTRAL (2006), MEDLINE (2002-2006), EMBASE (2002-2006), CINAHL (2002-2006), and digital dissertations (2006) searches. Product manufacturers were contacted to identify additional eligible studies. The search terms included wound infection, surgical wound infection, ulcer, wound healing, and silver. Each study fulfilled the following criteria: (1) The study was a randomized controlled trial of human participants that compared dressings containing silver with any dressings without silver, dressings with other antiseptics, or dressings with different dosages of silver. (2) The participants were aged 18 years and older with contaminated and infected open wounds of any cause. (3) The study had to evaluate the effectiveness of the dressings using an objective measure of healing. No language or publication status restrictions were imposed, and participants could be recruited in any care setting. Studies were excluded if the wounds were ostomies (surgically formed passages). Study quality assessment was conducted independently by 3 authors using the Dutch Institute for Health Care Improvement and Dutch Cochrane Centre protocols. Characteristics of the study, participants, interventions, and outcome measures were extracted by one author and verified by a second using a standard form. The principal outcome measure was healing (time to complete healing, rate of change in wound area and volume, number and proportion of wounds healed within trial period). Secondary measures were adverse events (eg, pain, maceration, erythema), dressing leakage, and wound odor. 
Because the comparisons in the included studies were unique, a meta-analysis was not conducted; instead, summary estimates of treatment effect were calculated for each outcome comparison. RevMan software (version 4.2; Cochrane Centre, Oxford, United Kingdom) was used for statistical analysis. Specific search criteria identified 31 studies for review, of which 3 met the inclusion and exclusion criteria. Lack of randomization and absence of wound infections excluded the majority of studies from the review. In the 3 studies selected, silver-containing dressings were compared with nonsilver dressings and dressings with other antimicrobials. One group used a silver-containing foam dressing and a nonsilver foam dressing; another group used a silver-containing alginate and a nonsilver alginate; and a third group used a silver-containing foam and various dressings (nonsilver foams, alginates, hydrocolloids, and gauze and other antimicrobial dressings). Sample sizes ranged between 99 and 619 participants. Most of the wounds in the included studies were pressure, diabetic, and venous leg ulcers. Wound infection was subjectively defined by 1 group as the presence of 2 or more signs and symptoms (eg, continuous pain, erythema, heat, or moderate to high levels of exudate) and by the other 2 groups as signs of critical colonization (eg, delayed healing, increased pain and exudate levels, discoloration, and odor). The primary measure in the included studies was healing outcome. The 3 groups used various assessments of healing, including relative and absolute reduction in wound area and number of wounds healed during the trial period. The trial period in each study was 4 weeks. In the 3 trials, the authors randomized the participants to the treatment groups. Examining healing, one group (129 participants) compared Contreet silver foam (Coloplast A/S, Humlebaek, Denmark) with Allevyn foam (Smith & Nephew, St-Laurent, Quebec, Canada). 
The authors reported no differences for rates of complete healing (risk difference [RD] = 0.00, 95% confidence interval [CI] = -0.09, 0.09) and median wound area reduction (weighted mean difference [WMD] = -0.30 cm², 95% CI = -2.92, 2.35). However, Contreet was favored over Allevyn (P = .034) for median relative reduction in wound area (WMD = -15.70 cm², 95% CI = -29.5, -1.90). One group (99 participants) compared Silvercel silver alginate (Johnson & Johnson Wound Management, Somerville, NJ) with Algosteril alginate (Johnson & Johnson Wound Management). The authors found no differences in rates of complete healing (RD = 0.00, 95% CI = -0.06, 0.05), mean absolute (WMD = 4.50 cm², 95% CI = -0.93, 9.93) and relative wound area reduction (WMD = -0.30 cm², 95% CI = -17.08, 16.48), or healing rate per day (week 1 to 4) (WMD = 0.16 cm², 95% CI = -0.03, 0.35). One group (619 participants) compared Contreet with various dressings (nonsilver foams, alginates, hydrocolloids, and gauze and other antimicrobial dressings). For median relative wound area reduction, the authors noted a superiority of Contreet over the various dressings (P = .0019). Examining secondary outcomes, 2 groups used subjective analysis to compare adverse reactions among the dressings. One group reported no difference between Contreet (in satellite ulcers, deterioration of periwound tissue) and Allevyn (in satellite ulcers, maceration, eczema) (RD = 0.02, 95% CI = -0.07, 0.12), and one group found no difference between Silvercel (in pain during dressing change, eczema, periwound erythema, maceration) and Algosteril (in pain during dressing change, eczema, erythema) (RD = -0.01, 95% CI = -0.12, 0.11). Two groups subjectively assessed leakage among silver and nonsilver dressings. 
The data from one group demonstrated superiority of Contreet over Allevyn (P = .002; RD = -0.30, 95% CI = -0.47, -0.13), and one group found Contreet better than various dressings (eg, nonsilver foams, alginates, hydrocolloids, and gauze, and other antimicrobial dressings) (P = .0005; RD = -0.11, 95% CI = -0.18, -0.05). Using a subjective 4-point scale, one group compared silver and nonsilver dressings and reported a difference favoring Contreet over Allevyn in terms of wound odor (P = .030; RD = -0.19, 95% CI = -0.36, -0.03). Overall, this review provides no clear evidence to support the use of silver-containing foam and alginate dressings in the management of infected chronic wounds for up to 4 weeks. However, the use of silver foam dressings resulted in a greater reduction in wound size and more effective control of leakage and odor than did use of nonsilver dressings. Randomized controlled trials using standardized outcome measures and longer follow-up periods are needed to determine the most appropriate dressing for contaminated and infected acute and chronic wounds. 
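The risk differences and confidence intervals quoted above are summary estimates computed from 2x2 trial counts; a minimal sketch using the standard Wald formula, with made-up counts rather than the trial data:

```python
import math

def risk_difference(events_a, n_a, events_b, n_b, z=1.96):
    """Risk difference (group A minus group B) with a Wald 95% CI."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rd = p_a - p_b
    # Standard error of a difference of two independent proportions
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return rd, rd - z * se, rd + z * se

# Illustrative counts only (not taken from the trials above):
rd, lo, hi = risk_difference(20, 65, 20, 64)
print(f"RD = {rd:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

An RD whose CI spans zero, as in most of the comparisons above, is consistent with no difference between the dressings.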
Malaria in humans is caused by intraerythrocytic protozoa of the genus Plasmodium. These parasites are transmitted by the bite of an infective female Anopheles species mosquito. The majority of malaria infections in the United States occur among persons who have traveled to regions with ongoing malaria transmission. However, malaria is occasionally acquired by persons who have not traveled out of the country through exposure to infected blood products, congenital transmission, laboratory exposure, or local mosquitoborne transmission. Malaria surveillance in the United States is conducted to provide information on its occurrence (e.g., temporal, geographic, and demographic), guide prevention and treatment recommendations for travelers and patients, and facilitate transmission control measures if locally acquired cases are identified. This report summarizes confirmed malaria cases in persons with onset of illness in 2015 and summarizes trends in previous years. Malaria cases diagnosed by blood film microscopy, polymerase chain reaction, or rapid diagnostic tests are reported to local and state health departments by health care providers or laboratory staff members. Case investigations are conducted by local and state health departments, and reports are transmitted to CDC through the National Malaria Surveillance System (NMSS), the National Notifiable Diseases Surveillance System (NNDSS), or direct CDC consultations. CDC reference laboratories provide diagnostic assistance and conduct antimalarial drug resistance marker testing on blood samples submitted by health care providers or local or state health departments. This report summarizes data from the integration of all NMSS and NNDSS cases, CDC reference laboratory reports, and CDC clinical consultations. CDC received reports of 1,517 confirmed malaria cases, including one congenital case, with an onset of symptoms in 2015 among persons who received their diagnoses in the United States. 
Although the number of malaria cases diagnosed in the United States has been increasing since the mid-1970s, the number of cases decreased by 208 from 2014 to 2015. Among the regions of acquisition (Africa, West Africa, Asia, Central America, the Caribbean, South America, Oceania, and the Middle East), the only region with significantly fewer imported cases in 2015 compared with 2014 was West Africa (781 versus 969). Plasmodium falciparum, P. vivax, P. ovale, and P. malariae were identified in 67.4%, 11.7%, 4.1%, and 3.1% of cases, respectively. Less than 1% of patients were infected by two species. The infecting species was unreported or undetermined in 12.9% of cases. CDC provided diagnostic assistance for 13.1% of patients with confirmed cases and tested 15.0% of P. falciparum specimens for antimalarial resistance markers. Of the U.S. resident patients who reported purpose of travel, 68.4% were visiting friends or relatives. A lower proportion of U.S. residents with malaria reported taking any chemoprophylaxis in 2015 (26.5%) compared with 2014 (32.5%), and adherence was poor in this group. Among the U.S. residents for whom information on chemoprophylaxis use and travel region were known, 95.3% of patients with malaria did not adhere to or did not take a CDC-recommended chemoprophylaxis regimen. Among women with malaria, 32 were pregnant, and none had adhered to chemoprophylaxis. A total of 23 malaria cases occurred among U.S. military personnel in 2015. Three of these cases were imported by the approximately 3,000 military personnel deployed to an Ebola-affected country; two of these were not P. falciparum species, and one species was unspecified. Among all reported cases in 2015, 17.1% were classified as severe illnesses and 11 persons died, compared with an average of 6.1 deaths per year during 2000-2014. In 2015, CDC received 153 P. 
falciparum-positive samples for surveillance of antimalarial resistance markers (although certain loci were untestable for some samples); genetic polymorphisms associated with resistance to pyrimethamine were identified in 132 (86.3%), to sulfadoxine in 112 (73.7%), to chloroquine in 48 (31.4%), to mefloquine in six (4.3%), and to artemisinin in one (<1%), and no sample had resistance to atovaquone. Completion of data elements on the malaria case report form decreased from 2014 to 2015 and remains low, with 24.2% of case report forms missing at least one key element (species, travel history, and resident status). The decrease in malaria cases from 2014 to 2015 is associated with a decrease in imported cases from West Africa. This finding might be related to altered or curtailed travel to Ebola-affected countries in this region. Despite progress in reducing malaria worldwide, the disease remains endemic in many regions, and the use of appropriate prevention measures by travelers is still inadequate. The best way to prevent malaria is to take chemoprophylaxis medication during travel to a country where malaria is endemic. As demonstrated by the U.S. military during the Ebola response, use of chemoprophylaxis and other protection measures is possible in stressful environments, and this can prevent malaria, especially P. falciparum, even in high transmission areas. Detailed recommendations for preventing malaria are available to the general public at the CDC website (https://www.cdc.gov/malaria/travelers/drugs.html). Malaria infections can be fatal if not diagnosed and treated promptly with antimalarial medications appropriate for the patient's age and medical history, the likely country of malaria acquisition, and previous use of antimalarial chemoprophylaxis. Health care providers should consult the CDC Guidelines for Treatment of Malaria in the United States and contact the CDC's Malaria Hotline for case management advice when needed. 
Malaria treatment recommendations are available online (https://www.cdc.gov/malaria/diagnosis_treatment) and from the Malaria Hotline (770-488-7788 or toll-free at 855-856-4713). Persons submitting malaria case reports (care providers, laboratories, and state and local public health officials) should provide complete information because incomplete reporting compromises case investigations and efforts to prevent infections and examine trends in malaria cases. Compliance with recommended malaria prevention strategies is low among U.S. travelers visiting friends and relatives. Evidence-based prevention strategies that effectively target travelers who are visiting friends and relatives need to be developed and implemented to reduce the numbers of imported malaria cases in the United States. Molecular surveillance of antimalarial drug resistance markers (https://www.cdc.gov/malaria/features/ars.html) has enabled CDC to track, guide treatment, and manage drug resistance in malaria parasites both domestically and internationally. More samples are needed to improve the completeness of antimalarial drug resistance marker analysis; therefore, CDC requests that blood specimens be submitted for all cases diagnosed in the United States. 
This is an update of the Cochrane review published in Issue 5, 2011. Worldwide, cervical cancer is the fourth commonest cancer affecting women. High-risk human papillomavirus (HPV) infection is causative in 99.7% of cases. Other risk factors include smoking, multiple sexual partners, the presence of other sexually transmitted diseases and immunosuppression. Primary prevention strategies for cervical cancer focus on reducing HPV infection via vaccination and data suggest that this has the potential to prevent nearly 90% of cases in those vaccinated prior to HPV exposure. However, not all countries can afford vaccination programmes and, worryingly, uptake in many countries has been extremely poor. Secondary prevention, through screening programmes, will remain critical to reducing cervical cancer, especially in unvaccinated women or those vaccinated later in adolescence. This includes screening for the detection of pre-cancerous cells, as well as high-risk HPV. In the UK, since the introduction of the Cervical Screening Programme in 1988, the associated mortality rate from cervical cancer has fallen. However, worldwide, there is great variation between countries in both coverage and uptake of screening. In some countries, national screening programmes are available whereas in others, screening is provided on an opportunistic basis. Additionally, there are differences within countries in uptake dependent on ethnic origin, age, education and socioeconomic status. Thus, understanding and incorporating these factors in screening programmes can increase the uptake of screening. This, together with vaccination, can lead to cervical cancer becoming a rare disease. To assess the effectiveness of interventions aimed at women, to increase the uptake, including informed uptake, of cervical screening. We searched the Cochrane Central Register of Controlled Trials (CENTRAL), Issue 6, 2020, and the MEDLINE, Embase and LILACS databases up to June 2020. 
We also searched registers of clinical trials, abstracts of scientific meetings, reference lists of included studies and contacted experts in the field. Randomised controlled trials (RCTs) of interventions to increase uptake/informed uptake of cervical screening. Two review authors independently extracted data and assessed risk of bias. Where possible, the data were synthesised in a meta-analysis using standard Cochrane methodology. Comprehensive literature searches identified 2597 records; of these, 70 met our inclusion criteria, of which 69 trials (257,899 participants) were entered into a meta-analysis. The studies assessed the effectiveness of invitational and educational interventions, lay health worker involvement, counselling and risk factor assessment. Clinical and statistical heterogeneity between trials limited statistical pooling of data. Overall, there was moderate-certainty evidence to suggest that invitations appear to be an effective method of increasing uptake compared to control (risk ratio (RR) 1.71, 95% confidence interval (CI) 1.49 to 1.96; 141,391 participants; 24 studies). Additional analyses, ranging from low to moderate-certainty evidence, suggested that invitations that were personalised, i.e. personal invitation, GP invitation letter or letter with a fixed appointment, appeared to be more successful. More specifically, there was very low-certainty evidence to support the use of GP invitation letters as compared to other authority sources' invitation letters within two RCTs, one RCT assessing 86 participants (RR 1.69, 95% CI 0.75 to 3.82) and another, showing a modest benefit, included over 4000 participants (RR 1.13, 95% CI 1.05 to 1.21). Low-certainty evidence favoured personalised invitations (telephone call, face-to-face or targeted letters) as compared to standard invitation letters (RR 1.32, 95% CI 1.11 to 1.21; 27,663 participants; 5 studies). 
There was moderate-certainty evidence to support a letter with a fixed appointment to attend, as compared to a letter with an open invitation to make an appointment (RR 1.61, 95% CI 1.48 to 1.75; 5742 participants; 5 studies). Low-certainty evidence supported the use of educational materials (RR 1.35, 95% CI 1.18 to 1.54; 63,415 participants; 13 studies) and lay health worker involvement (RR 2.30, 95% CI 1.44 to 3.65; 4330 participants; 11 studies). Other less widely reported interventions included counselling, risk factor assessment, access to a health promotion nurse, photo comic book, intensive recruitment and message framing. It was difficult to deduce any meaningful conclusions from these interventions due to sparse data and low-certainty evidence. However, having access to a health promotion nurse and attempts at intensive recruitment may have increased uptake. One trial reported an economic outcome and randomised 3124 participants within a national screening programme to either receive the standard screening invitation, which would incur a fee, or an invitation offering screening free of charge. No difference in the uptake at 90 days was found (574/1562 intervention versus 612/1562 control; RR 0.94, 95% CI 0.86 to 1.03). The use of HPV self-testing as an alternative to conventional screening may also be effective at increasing uptake and this will be covered in a subsequent review. Secondary outcomes, including cost data, were incompletely documented. The majority of cluster-RCTs did not account for clustering or adequately report the number of clusters in the trial in order to estimate the design effect, so we did not selectively adjust the trials. It is unlikely that reporting of these trials would impact the overall conclusions and robustness of the results. Of the meta-analyses that could be performed, there was considerable statistical heterogeneity, and this should be borne in mind when interpreting these findings. 
Given this and the low to moderate evidence, further research may change these findings. The risk of bias in the majority of trials was unclear, and a number of trials suffered from methodological problems and inadequate reporting. We downgraded the certainty of evidence because of an unclear or high risk of bias with regards to allocation concealment, blinding, incomplete outcome data and other biases. There is moderate-certainty evidence to support the use of invitation letters to increase the uptake of cervical screening. Low-certainty evidence showed lay health worker involvement amongst ethnic minority populations may increase screening coverage, and there was also support for educational interventions, but it is unclear what format is most effective. The majority of the studies were from developed countries and so the relevance of low- and middle-income countries (LMICs), is unclear. Overall, the low-certainty evidence that was identified makes it difficult to infer as to which interventions were best, with exception of invitational interventions, where there appeared to be more reliable evidence. 
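The pooled risk ratios reported above combine per-trial 2x2 counts; the review used standard Cochrane methodology, but a fixed-effect inverse-variance pooling of log risk ratios is a reasonable minimal sketch of the idea. The trial counts below are invented for illustration:

```python
import math

def pool_log_rr(trials, z=1.96):
    """Fixed-effect inverse-variance pooling of risk ratios.
    Each trial is (events_tx, n_tx, events_ctrl, n_ctrl)."""
    num = den = 0.0
    for a, n1, c, n2 in trials:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1/a - 1/n1 + 1/c - 1/n2  # approximate variance of log RR
        w = 1 / var                    # weight = inverse variance
        num += w * log_rr
        den += w
    pooled = num / den
    se = math.sqrt(1 / den)
    return (math.exp(pooled),
            math.exp(pooled - z * se),
            math.exp(pooled + z * se))

# Two invented trials: uptake events / arm sizes
rr, lo, hi = pool_log_rr([(120, 400, 80, 400), (60, 250, 45, 250)])
print(f"pooled RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

A random-effects model, which the heterogeneity noted above would argue for, adds a between-trial variance component to each weight; the fixed-effect version here is the simplest starting point.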
Eczema and food allergy are common health conditions that usually begin in early childhood and often occur in the same people. They can be associated with an impaired skin barrier in early infancy. It is unclear whether trying to prevent or reverse an impaired skin barrier soon after birth is effective for preventing eczema or food allergy. Primary objective To assess the effects of skin care interventions such as emollients for primary prevention of eczema and food allergy in infants. Secondary objective To identify features of study populations such as age, hereditary risk, and adherence to interventions that are associated with the greatest treatment benefit or harm for both eczema and food allergy. We performed an updated search of the Cochrane Skin Specialised Register, CENTRAL, MEDLINE, and Embase in September 2021. We searched two trials registers in July 2021. We checked the reference lists of included studies and relevant systematic reviews, and scanned conference proceedings to identify further references to relevant randomised controlled trials (RCTs). Selection criteria: We included RCTs of skin care interventions that could potentially enhance skin barrier function, reduce dryness, or reduce subclinical inflammation in healthy term (> 37 weeks) infants (≤ 12 months) without pre-existing eczema, food allergy, or other skin condition. Eligible comparisons were standard care in the locality or no treatment. Types of skin care interventions could include moisturisers/emollients; bathing products; advice regarding reducing soap exposure and bathing frequency; and use of water softeners. No minimum follow-up was required. This is a prospective individual participant data (IPD) meta-analysis. We used standard Cochrane methodological procedures, and primary analyses used the IPD dataset. 
Primary outcomes were cumulative incidence of eczema and cumulative incidence of immunoglobulin (Ig)E-mediated food allergy by one to three years, both measured at the closest available time point to two years. Secondary outcomes included adverse events during the intervention period; eczema severity (clinician-assessed); parent report of eczema severity; time to onset of eczema; parent report of immediate food allergy; and allergic sensitisation to food or inhalant allergen. We identified 33 RCTs comprising 25,827 participants. Of these, 17 studies randomising 5823 participants reported information on one or more outcomes specified in this review. We included 11 studies, randomising 5217 participants, in one or more meta-analyses (range 2 to 9 studies per individual meta-analysis), with 10 of these studies providing IPD; the remaining 6 studies were included in the narrative results only. Most studies were conducted at children's hospitals. Twenty-five studies, including all those contributing data to meta-analyses, randomised newborns up to age three weeks to receive a skin care intervention or standard infant skin care. Eight of the 11 studies contributing to meta-analyses recruited infants at high risk of developing eczema or food allergy, although the definition of high risk varied between studies. Durations of intervention and follow-up ranged from 24 hours to three years. All interventions were compared against no skin care intervention or local standard care. Of the 17 studies that reported information on our prespecified outcomes, 13 assessed emollients. We assessed most of the evidence in the review as low certainty and had some concerns about risk of bias. A rating of some concerns was most often due to lack of blinding of outcome assessors or significant missing data, which could have impacted outcome measurement but was judged unlikely to have done so. 
We assessed the evidence for the primary food allergy outcome as high risk of bias due to the inclusion of only one trial, where findings varied based on different assumptions about missing data. Skin care interventions during infancy probably do not change the risk of eczema by one to three years of age (risk ratio (RR) 1.03, 95% confidence interval (CI) 0.81 to 1.31; risk difference 5 more cases per 1000 infants, 95% CI 28 less to 47 more; moderate-certainty evidence; 3075 participants, 7 trials) or time to onset of eczema (hazard ratio 0.86, 95% CI 0.65 to 1.14; moderate-certainty evidence; 3349 participants, 9 trials). Skin care interventions during infancy may increase the risk of IgE-mediated food allergy by one to three years of age (RR 2.53, 95% CI 0.99 to 6.49; low-certainty evidence; 976 participants, 1 trial) but may not change risk of allergic sensitisation to a food allergen by age one to three years (RR 1.05, 95% CI 0.64 to 1.71; low-certainty evidence; 1794 participants, 3 trials). Skin care interventions during infancy may slightly increase risk of parent report of immediate reaction to a common food allergen at two years (RR 1.27, 95% CI 1.00 to 1.61; low-certainty evidence; 1171 participants, 1 trial); however, this was only seen for cow's milk, and may be unreliable due to over-reporting of milk allergy in infants. 
Skin care interventions during infancy probably increase risk of skin infection over the intervention period (RR 1.33, 95% CI 1.01 to 1.75; risk difference 17 more cases per 1000 infants, 95% CI one more to 38 more; moderate-certainty evidence; 2728 participants, 6 trials) and may increase the risk of infant slippage over the intervention period (RR 1.42, 95% CI 0.67 to 2.99; low-certainty evidence; 2538 participants, 4 trials) and stinging/allergic reactions to moisturisers (RR 2.24, 95% CI 0.67 to 7.43; low-certainty evidence; 343 participants, 4 trials), although CIs for slippages and stinging/allergic reactions were wide and include the possibility of no effect or reduced risk. Preplanned subgroup analyses showed that the effects of interventions were not influenced by age, duration of intervention, hereditary risk, filaggrin (FLG) mutation, chromosome 11 intergenic variant rs2212434, or classification of intervention type for risk of developing eczema. We could not evaluate these effects on risk of food allergy. Evidence was insufficient to show whether adherence to interventions influenced the relationship between skin care interventions and eczema or food allergy development. Based on low- to moderate-certainty evidence, skin care interventions such as emollients during the first year of life in healthy infants are probably not effective for preventing eczema; may increase risk of food allergy; and probably increase risk of skin infection. Further study is needed to understand whether different approaches to infant skin care might prevent eczema or food allergy. 
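The pairing of a relative effect (RR 1.33) with an absolute one (17 more cases per 1000 infants) follows from an assumed baseline risk: risk difference = baseline risk x (RR - 1). The 5.2% control-group risk below is an assumption chosen only to make the arithmetic visible, not a figure from the review:

```python
def rd_per_1000(rr, baseline_risk):
    """Absolute risk difference per 1000 implied by a risk ratio
    and an assumed baseline (control-group) risk."""
    return baseline_risk * (rr - 1.0) * 1000

# Assumed ~5.2% control-group skin-infection risk with RR 1.33
# yields roughly 17 more cases per 1000, matching the figure above.
print(round(rd_per_1000(1.33, 0.052)))
```

The same conversion applied to the CI bounds of the RR (1.01 and 1.75) gives the absolute CI, which is why a wide relative CI translates into a wide range of extra cases per 1000.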
Sudden infant death syndrome (SIDS) victims were regarded as normal as a matter of definition (Beckwith 1970) until 1992, when Kinney and colleagues argued for elimination of the clause, "unexpected by history." They argued that "not all SIDS victims were normal," and referred to their hypothesis that SIDS results from brain abnormalities, which they postulated "to originate in utero and lead to sudden death during a vulnerable postnatal period." Bergman (1970) argued that SIDS did not depend on any "single characteristic that ordains an infant for death," but on an interaction of risk factors with variable probabilities. Wedgwood (1972) agreed and grouped risk factors into the first "triple risk hypothesis" consisting of general vulnerability, age-specific risks, and precipitating factors. Raring (1975), based on a bell-shaped curve of age of death (log-transformed), concluded that SIDS was a random process with multifactorial causation. Rognum and Saugstad (1993) developed a "fatal triangle," with groupings similar to those of Wedgwood, but included mucosal immunity under a vulnerable developmental stage of the infant. Filiano and Kinney (1994) presented the best known triple risk hypothesis and emphasized prenatal injury of the brainstem. They added a qualifier, "in at least a subset of SIDS," but the National Institute of Child Health and Human Development SIDS Strategic Plan 2000, quoting Kinney's work, states unequivocally that "SIDS is a developmental disorder. Its origins are during fetal development." Except for the emphasis on prenatal origin, all 3 triple risk hypotheses are similar. Interest in the brainstem of SIDS victims began with Naeye's 1976 report of astrogliosis in 50% of all victims. He concluded that these changes were caused by hypoxia and were not the cause of SIDS. He noted an absence of astrogliosis in some older SIDS victims, compatible with a single, terminal episode of hypoxia without previous hypoxic episodes, prenatal or postnatal. 
Kinney and colleagues (1983) reported gliosis in 22% of their SIDS victims. Subsequently, they instituted studies of neurotransmitter systems in the brainstem, particularly the muscarinic (1995) and serotonergic systems (2001). The major issue is when the brainstem abnormalities (astrogliosis or neurotransmitter changes) occurred and whether either is specific to SIDS. There is no published method known to us of determining the time of origin of these markers except that the injury causing astrogliosis must have occurred at least 4 days before death (Del Bigio and Becker, 1994). Because the changes in neurotransmitter systems found in the arcuate nucleus in SIDS victims were also found in the chronic controls with known hypoxia, specificity of these markers for SIDS has not been established. It seems likely that the "acute control" group of Kinney et al (1995) died too quickly to develop gliosis or severe depletion of the neurotransmitter systems. We can conclude that the acute controls had no previous episodes of severe hypoxia, unlike SIDS or their "chronic controls." Although the average muscarinic cholinergic receptor level in the SIDS victims was significantly less than in the acute controls, the difference was only 27%, and only 21 of 41 SIDS victims had values below the mean of the acute controls. The study of the medullary serotonergic network by Kinney et al (2001) revealed greater reductions in the SIDS victims than in acute controls, but the questions of cause versus effect of the abnormalities, and whether they occurred prenatally or postnatally, remain unanswered. Hypoplasia of the arcuate nucleus was stated to occur in 5% of their SIDS cases by Kinney et al (2001), but this is a "primary developmental defect" according to Matturri et al (2002) with a larger series, many of whom were stillbirths. These cases should not be included under the rubric of SIDS, by definition. 
There are difficulties with Filiano and Kinney's (1994) explanation of the age at death distribution of SIDS. They postulate that the period between 1 and 6 months represents an unstable time for virtually all physiologic systems. However, this period demonstrates much less instability than does the neonatal period, when most deaths from congenital defects and severe maternal anemia occur. We present data for infants born to mothers who were likely to have suffered severe anemia as a consequence of placenta previa, abruptio placentae, and excessive bleeding during pregnancy; these infants presumably are at increased risk of hypoxia and brainstem injury. The total neonatal mortality rate in these 3 groups of infants is 4 times greater than the respective postneonatal mortality, and in the postneonatal period the non-SIDS mortality rate is between 14 and 22 times greater than the postneonatal SIDS rate in these 3 groups. A preponderance of deaths in the neonatal period is also found for congenital anomalies, a category that logically should include infants who experienced prenatal hypoxia or ischemia; this distribution of age of death is very different from that for SIDS, which mostly spares the first month and peaks between 2 and 3 months of age. Finally, evidence inconsistent with prenatal injury as a frequent cause of SIDS comes from prospective studies of ventilatory control in neonates who subsequently died of SIDS; no significant respiratory abnormalities in these infants have been found (Waggener et al 1990; Schectman et al 1991). We conclude that none of the triple risk hypotheses presented so far have significantly improved our understanding of the cause of SIDS. Bergman's and Raring's concepts of multifactorial causation with interaction of risk factors with variable probabilities is less restrictive and more in keeping with the large number of demonstrated risk factors and their varying prevalence. 
If prenatal hypoxic damage of the brainstem occurred, it seems likely that the infant so afflicted would be at risk for SIDS, but it is even more likely that their death would occur in the neonatal period, as we have demonstrated in infants who have known maternal risk factors that involve severe anemia. This is in contrast to the delay until the postneonatal period of most SIDS deaths. A categorical statement that the origin of SIDS is prenatal is unwarranted by the evidence. Brainstem abnormalities have not been shown to cause SIDS, but are more likely a nonspecific effect of hypoxia.
Mid-19th century European visitors to Old Calabar, an eastern province of Nigeria, could not avoid becoming aware of native belief in the power of the seeds of a local plant to determine whether individuals were innocent or guilty of some serious misdemeanour. The seeds were those of a previously unknown legume and soon referred to as the ordeal bean of Old Calabar. Their administration was known locally as 'chop nut'. Missionaries who arrived in Calabar in 1846 estimated that chop nut caused some 120 deaths annually and documented the course of poisoning. The latter information and samples of the beans rapidly found their way to Scotland, the home of the missionaries' parent church, explaining why the early toxicology of physostigmine, quantitatively the most important of three active alkaloids in the beans, has such strong Scottish, predominantly Edinburgh, associations. However, it was 1855 before the first of many medical scientists, Robert Christison, a toxicologist of repute, investigated the effects of the beans to the extent of eating part of one himself and documenting the moderate, if not severe, consequences. A further 6 years were to pass before Balfour's comprehensive botanical description of the bean plant appeared. It was he who named it Physostigma venenosum. It was not so long until the next event, one that sparked more intensive and international interest in the beans. In 1863 a young Edinburgh ophthalmologist, Argyll Robertson, published a paper announcing the arrival of the first agent that constricted the pupil of the eye. The drug was an extract of Calabar beans and Argyll Robertson openly admitted that he had been alerted to its unusual property by his physician friend, Thomas Fraser. A minor flood of contributions on the ophthalmic uses of bean extracts followed in the medical press in the next few months; those on their systemic toxicity were fewer. 
Fraser's MD thesis, submitted to the University of Edinburgh in 1862 and clearly pre-dating Argyll Robertson's involvement with the beans, became generally available a few weeks after the appearance of Argyll Robertson's paper and was the first to address in detail the features of systemic administration of extracts of the beans. A major problem facing all early researchers of the beans was that of deciding how best to extract their active principle, a task made all the more difficult because bioassays were the only means of determining if the toxin was being tracked. The stability of extracts was an inevitable issue and the active principle finally became known as physostigma or physostigmine, after the botanical name of the parent plant. The features of physostigmine toxicity were soon exhaustively documented, both in animals and humans. How they were mediated was another matter altogether. Fraser maintained that muscular paralysis, the cardinal feature, was the result of depression of the spinal cord and was generally, but far from unanimously, supported. Of those who had reservations, Harley was the most prominent. He concluded that paralysis was secondary to effects on the motor nerve endings and, in so doing, came nearest to present-day knowledge at a time when acetylcholine, cholinesterases and cholinesterase inhibitors were not even imagined. Differences of opinion on the mode of action of the beans were to be expected and it is hardly surprising that they were not resolved. 
No standard formulation of physostigmine was available, so the potency of those used would have varied from one investigator to another; the range of animals experimented upon was large, while the number used by any researcher was commonly in single figures; more readily available cold-blooded creatures seemed less sensitive to physostigmine toxicity than warm-blooded ones; and only Fraser determinedly pursued an answer. In general, the others made one foray into bean research then turned their attentions elsewhere. The same problems would beset other aspects of bean research. While Fraser did not get as close to the mode of action of physostigmine as Harley, he reigns supreme when it comes to antagonism between physostigmine and atropine. By this time, the 1870s had dawned and although the concept of antagonism between therapeutic agents was not new, it had little, if any, reliable scientific foundation. This was about to change; antagonism was becoming exciting and rational. Fraser's firm belief that physostigmine and atropine were mutually antagonistic at a physiological level was contrary to the conventional wisdom of his contemporaries. This alone would earn him a place in history but his contribution goes much, much further. Unlike any other at the time, he investigated it with scientific rigour, experimenting on only one species, ensuring as best he could the animals were the same weight, adjusting the doses of drugs he gave them for bodyweight, determining the minimum lethal dose of each drug before assessing their antagonistic effects, adopting a single, incontrovertible endpoint for efficacy and carrying out sufficient numbers of experiments to appear convincing in a later era where the statistical power of studies is all-important. To crown it all, he presented his results graphically. Fraser never claimed to have discovered the antagonism between physostigmine and atropine. Bartholow in 1873 did, based on work done in 1869. But his data hardly justify it. 
If anyone can reasonably claim this particular scientific crown it is an ophthalmologist, Niemetschek, working in Prague in 1864. His colleague in the same discipline, Kleinwächter, was faced with treating a young man with atropine intoxication. Knowing of the contrary actions of the two drugs on the pupil, Niemetschek suggested that Calabar bean extract might be useful. Kleinwächter had the courage to take the advice and his patient improved dramatically. Clearly, this evidence is nothing more than anecdotal, but the ophthalmologists were correct and, to the present day, physostigmine has had an intermittent role in the management of anticholinergic poisoning. The converse, giving atropine to treat poisoning with cholinesterase inhibitors, of which physostigmine was the first, has endured more consistently and remains standard practice today. It is salutary to realise that the doses and dosage frequency of atropine, together with the endpoints that define their adequacy, were formulated by Fraser and others a century and a half ago.
Subgroup analyses are common in randomised controlled trials (RCTs). There are many easily accessible guidelines on the selection and analysis of subgroups but the key messages do not seem to be universally accepted and inappropriate analyses continue to appear in the literature. This has potentially serious implications because erroneous identification of differential subgroup effects may lead to inappropriate provision or withholding of treatment. (1) To quantify the extent to which subgroup analyses may be misleading. (2) To compare the relative merits and weaknesses of the two most common approaches to subgroup analysis: separate (subgroup-specific) analyses of treatment effect and formal statistical tests of interaction. (3) To establish what factors affect the performance of the two approaches. (4) To provide estimates of the increase in sample size required to detect differential subgroup effects. (5) To provide recommendations on the analysis and interpretation of subgroup analyses. The performances of subgroup-specific and formal interaction tests were assessed by simulating data with no differential subgroup effects and determining the extent to which the two approaches (incorrectly) identified such an effect, and simulating data with a differential subgroup effect and determining the extent to which the two approaches were able to (correctly) identify it. Initially, data were simulated to represent the 'simplest case' of two equal-sized treatment groups and two equal-sized subgroups. Data were first simulated with no differential subgroup effect and then with a range of types and magnitudes of subgroup effect with the sample size determined by the nominal power (50-95%) for the overall treatment effect. 
Additional simulations were conducted to explore the individual impact of the sample size, the magnitude of the overall treatment effect, the size and number of treatment groups and subgroups and, in the case of continuous data, the variability of the data. The simulated data covered the types of outcomes most commonly used in RCTs, namely continuous (Gaussian) variables, binary outcomes and survival times. All analyses were carried out using appropriate regression models, and subgroup effects were identified on the basis of statistical significance at the 5% level. While there was some variation for smaller sample sizes, the results for the three types of outcome were very similar for simulations with a total sample size of greater than or equal to 200. With simulated simplest case data with no differential subgroup effects, the formal tests of interaction were significant in 5% of cases as expected, while subgroup-specific tests were less reliable and identified effects in 7-66% of cases depending on whether there was an overall treatment effect. The most common type of subgroup effect identified in this way was where the treatment effect was seen to be significant in one subgroup only. When a simulated differential subgroup effect was included, the results were dependent on the nominal power of the simulated data and the type and magnitude of the subgroup effect. However, the performance of the formal interaction test was generally superior to that of the subgroup-specific analyses, with more differential effects correctly identified. In addition, the subgroup-specific analyses often suggested the wrong type of differential effect. The ability of formal interaction tests to (correctly) identify subgroup effects improved as the size of the interaction increased relative to the overall treatment effect. When the size of the interaction was twice the overall effect or greater, the interaction tests had at least the same power as the overall treatment effect. 
However, power was considerably reduced for smaller interactions, which are much more likely in practice. The inflation factor required to increase the sample size to enable detection of the interaction with the same power as the overall effect varied with the size of the interaction. For an interaction of the same magnitude as the overall effect, the inflation factor was 4, and this increased dramatically to greater than or equal to 100 for more subtle interactions of < 20% of the overall effect. Formal interaction tests were generally robust to alterations in the number and size of the treatment and subgroups and, for continuous data, the variance in the treatment groups, with the only exception being a change in the variance in one of the subgroups. In contrast, the performance of the subgroup-specific tests was affected by almost all of these factors with only a change in the number of treatment groups having no impact at all. While it is generally recognised that subgroup analyses can produce spurious results, the extent of the problem is almost certainly under-estimated. This is particularly true when subgroup-specific analyses are used. In addition, the increase in sample size required to identify differential subgroup effects may be substantial and the commonly used 'rule of four' may not always be sufficient, especially when interactions are relatively subtle, as is often the case. CONCLUSIONS--RECOMMENDATIONS FOR SUBGROUP ANALYSES AND THEIR INTERPRETATION: (1) Subgroup analyses should, as far as possible, be restricted to those proposed before data collection. Any subgroups chosen after this time should be clearly identified. (2) Trials should ideally be powered with subgroup analyses in mind. However, for modest interactions, this may not be feasible. (3) Subgroup-specific analyses are particularly unreliable and are affected by many factors. 
Subgroup analyses should always be based on formal tests of interaction although even these should be interpreted with caution. (4) The results from any subgroup analyses should not be over-interpreted. Unless there is strong supporting evidence, they are best viewed as a hypothesis-generation exercise. In particular, one should be wary of evidence suggesting that treatment is effective in one subgroup only. (5) Any apparent lack of differential effect should be regarded with caution unless the study was specifically powered with interactions in mind. CONCLUSIONS--RECOMMENDATIONS FOR RESEARCH: (1) The implications of considering confidence intervals rather than p-values could be considered. (2) The same approach as in this study could be applied to contexts other than RCTs, such as observational studies and meta-analyses. (3) The scenarios used in this study could be examined more comprehensively using other statistical methods, incorporating clustering effects, considering other types of outcome variable and using other approaches, such as Bootstrapping or Bayesian methods.
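Two of the findings above can be sketched in a toy Monte Carlo: with no true differential effect, a formal interaction test holds its nominal 5% error rate while the subgroup-specific approach frequently "finds" an effect in one subgroup only; and the "rule of four" inflation factor follows from a simple variance argument (the interaction estimate, a difference of two subgroup effects, has four times the variance of the overall effect estimate). Everything here (cell sizes, the 0.5 SD effect, normal-approximation z-tests) is an illustrative assumption, not the report's actual simulation code:

```python
import random
from statistics import NormalDist, mean, stdev

def two_sided_p(z):
    """Two-sided p-value for a z statistic (normal approximation)."""
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

def simulate(n_per_cell=50, effect=0.5, n_sims=500, seed=1):
    """Simplest case: two equal arms, two equal subgroups, and the SAME true
    treatment effect in both subgroups, so any 'differential' finding is spurious."""
    random.seed(seed)
    spurious_subgroup = spurious_interaction = 0
    for _ in range(n_sims):
        cells = {(s, t): [t * effect + random.gauss(0, 1) for _ in range(n_per_cell)]
                 for s in "AB" for t in (0, 1)}
        # Subgroup-specific logic: treatment effect significant in exactly one subgroup.
        p = {}
        for s in "AB":
            se = (stdev(cells[s, 1]) ** 2 / n_per_cell +
                  stdev(cells[s, 0]) ** 2 / n_per_cell) ** 0.5
            p[s] = two_sided_p((mean(cells[s, 1]) - mean(cells[s, 0])) / se)
        if (p["A"] < 0.05) != (p["B"] < 0.05):
            spurious_subgroup += 1
        # Formal interaction test: difference of the two subgroup effect estimates.
        d = ((mean(cells["A", 1]) - mean(cells["A", 0])) -
             (mean(cells["B", 1]) - mean(cells["B", 0])))
        se_int = sum(stdev(v) ** 2 / n_per_cell for v in cells.values()) ** 0.5
        if two_sided_p(d / se_int) < 0.05:
            spurious_interaction += 1
    return spurious_subgroup / n_sims, spurious_interaction / n_sims

def inflation_factor(interaction_to_overall_ratio):
    """Sample-size multiplier needed to detect an interaction with the same
    power as the overall effect (two equal arms, two equal subgroups)."""
    return 4.0 / interaction_to_overall_ratio ** 2

sub_rate, int_rate = simulate()
print(sub_rate, int_rate)     # subgroup-specific rate far exceeds the ~5% interaction rate
print(inflation_factor(1.0))  # 4.0, the 'rule of four'
print(inflation_factor(0.2))  # ~100 for an interaction 20% of the overall effect
```

With a moderately powered overall effect, the "significant in one subgroup only" pattern appears in a large fraction of null simulations, consistent with the 7-66% range reported above, while the interaction test stays near its nominal level.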
Treatment of aggressive lymphoma in relapse is difficult. Patients who initially present with these diseases often know they have a malignancy considered curable in many cases, and diagnosis of relapse can be devastating. For this reason, it is useful to know the individual patient's risk of relapse prior to starting initial therapy, since it may be appropriate to treat patients with poor prognoses with intensive programs or investigational studies. In the private practice setting, most patients with these diseases receive CHOP or similar cyclophosphamide and doxorubicin-containing regimens at the time of initial diagnosis. However, there are certain disease-related features which determine whether these patients have a high or low risk of relapse, and investigators are now using combinations of these features to determine which patients may be safely treated with CHOP and which may benefit from more intensive chemotherapy management. For example, the International Prognostic Factor Index system, now in common usage, delineates four different groups of patients with differing complete remission, freedom from progression, and overall survival rates. The Tumor Score System, developed at MDACC, delineates only two groups with very different survival rates, and may be a better scoring system for patients with diffuse large cell lymphomas, primarily because of its inclusion of the serum beta(2)-microglobulin level prior to treatment, an important predictor of relapse. In addition to pretreatment features, certain treatment-related factors are also important in determining the risk of relapse, including the dose of chemotherapy administered and the rapidity of response. Results of a gallium scan with SPECT imaging may be an important method of confirming complete response, and should be incorporated into treatment programs, whether the treatment is standard CHOP or an investigational program. 
For the patient with relapse or progressive disease following induction with CHOP or a similar regimen, the type of response to initial therapy plays an important role in determining potential response to salvage therapy, including high-dose therapy followed by stem cell rescue. Patients for whom initial treatment fails to achieve any response have a very poor chance of responding to any currently used standard-dose program for relapse. Those with partial responses have a better chance of responding to relapse therapy, but a high risk of disease progression or early relapse, and those with a prior complete response to initial therapy have a good chance of responding to relapse therapy, especially those in whom the complete response lasted more than a year. For these reasons, stem cell transplant (SCT) protocols routinely require complete response with initial therapy as a requirement for entry, although "good partial remission" may be acceptable at certain centers. Other limitations for SCT protocols include age greater than 60 or 65 years, significant chronic obstructive pulmonary, renal, or cardiac disease, a poor performance status, and central nervous system or marrow involvement. For these reasons, there is a continued need for newer treatment programs which offer the potential for higher response rates and better survival rates, not only for those for whom SCT is not an option, but also for those who must have an adequate response to "standard dose therapy" prior to selecting SCT as a treatment option. Three broad groups of relapse therapy for aggressive lymphoma have been described, based upon the drugs contained within these regimens. These include platinum-based, mitoxantrone-based, and ifosfamide-based chemotherapy regimens. Results with these programs vary widely and are likely different because of tumor-related features prior to relapse therapy, including size of mass, beta(2)-microglobulin level, LDH level, and type of response to initial therapy. 
Other features, such as dose of therapy, specific drugs utilized, and number of prior treatments also play important roles in determining results with relapse therapy. In a study of DHAP followed by transplant or more DHAP, DHAP induced a response in 56% of patients, and at 5 years, significantly more of the responders to DHAP who were subsequently treated with high-dose therapy and bone marrow transplant were free of disease compared to those who continued to receive DHAP after response to this regimen. Therefore, high-dose therapy is clearly better for DHAP responders than is continued DHAP. However, results for the overall population are still not good when non-responders are included in the analysis, and DHAP, a first-generation platinum regimen, may not be the optimal regimen to use prior to high-dose therapy followed by peripheral stem cell rescue. At MDACC, we have extensively investigated various combinations containing ifosfamide and etoposide. The most recently reported regimen, MINE-ESHAP, contains mesna, ifosfamide, mitoxantrone, and etoposide, followed after adequate response with etoposide, methylprednisolone, high-dose cytarabine, and continuous-infusion cisplatin, a second-generation platinum regimen. This strategy resulted in a complete response in 47% of the patients treated, with a 44% complete response in patients with intermediate-grade lymphoma, 56% in those with low-grade lymphoma and 36% in those with transformed lymphoma. Results varied according to type of response achieved with initial therapy, and serum LDH and beta(2)-microglobulin levels prior to treatment with MINE-ESHAP. Using more intensive doses of ifosfamide and etoposide, we have described therapy for 36 patients with relapsed aggressive lymphomas, prior to pheresis and SCT. Results of this study are encouraging: 42% entered complete response with ifosfamide-etoposide and the overall survival was 52%, with a progression-free survival of 32%. 
Therapy with a similar regimen, combining ifosfamide, carboplatin, and etoposide in standard doses (ICE) has also been described. This regimen has been extensively studied in patients with relapsed aggressive lymphomas and Hodgkin's disease, followed by SCT. In patients with relapsed lymphomas, ICE has achieved a 66% complete response rate, with 89% undergoing transplant. Overall survival in these studies is affected by the quality of the response to ICE. The same program was used to treat 65 patients with Hodgkin's disease. The response rate to ICE was 88%, and the 5-year event-free survival for those transplanted was 68%. These factors predicted outcome: B symptoms, extranodal disease, and a complete response lasting less than 1 year. Finally, we have recently studied paclitaxel in combination with topotecan for relapsed and refractory aggressive lymphomas. These and newer combinations should be further developed to treat patients in relapse of aggressive lymphomas.
The contextual differences in the patterns of relations among various motivational, cognitive, and metacognitive components of self-regulated learning and performance in two key curriculum subject areas, language and mathematics, were examined in a sample of 263 Greek primary school children of fifth- and sixth-grade classrooms. Age and gender differences were also investigated. Students were asked to complete the Motivated Strategies for Learning Questionnaire (Pintrich & De Groot, 1990), which comprised five factors: (a) Self-efficacy, (b) Intrinsic Value, (c) Test Anxiety, (d) Cognitive Strategy Use, and (e) Self-regulation Strategies. They responded to the statements of the questionnaire on a 7-point Likert scale in terms of their behaviour in mathematics and language classes, respectively. Moreover, their teachers were asked to evaluate each of their students' academic achievement in Greek language and mathematics on a 1- to 20-point comparative scale in relation to the rest of the class. The results of the study indicated very few differences in the pattern of relations among self-regulated components within and across the two subject areas and at the same time revealed a context-specific character of self-regulated components at the level of mean differences. Further, the current study (a) confirmed the mediatory role of strategies in the motivation-performance relation, (b) stressed the differential role of cognitive and regulatory strategies in predicting performance in subject areas that differ in their structural characteristics of the content, and (c) pointed out the key motivational role of self-efficacy. In fact, self-efficacy proved the most significant predictor not only of performance but of cognitive and regulatory strategy use as well. Gender differences in motivation and strategy use were not reported, while motivation was found to vary mainly with age. 
The usefulness of these findings for promoting greater clarity among motivational and metacognitive frameworks and ideas for future research are discussed.
Breast cancer mortality is declining in most high-income countries. The role of mammography screening in these declines is much debated. Screening impacts cancer mortality through decreasing the number of advanced cancers with poor prognosis, while therapies and patient management impact cancer mortality through decreasing the fatality of cancers. The effectiveness of cancer screening is the ability of a screening method to curb the incidence of advanced cancers in populations. Methods for evaluating cancer screening effectiveness are based on the monitoring of age-adjusted incidence rates of advanced cancers that should decrease after the introduction of screening. Likewise, cancer-specific mortality rates should decline more rapidly in areas with screening than in areas without or with lower levels of screening but where patient management is similar. These two criteria have provided evidence that screening for colorectal and cervical cancer contributes to decreasing the mortality associated with these two cancers. In contrast, screening for neuroblastoma in children was discontinued in the early 2000s because these two criteria were not met. In addition, overdiagnosis - i.e. the detection of non-progressing occult neuroblastoma that would not have been life-threatening during the subject's lifetime - is a major undesirable consequence of screening. Accumulating epidemiological data show that in populations where mammography screening has been widespread for a long time, there has been no or only a modest decline in the incidence of advanced cancers, including that of de novo metastatic (stage IV) cancers at diagnosis. Moreover, breast cancer mortality reductions are similar in areas with early introduction and high penetration of screening and in areas with late introduction and low penetration of screening.
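Since the evaluation criteria above rest on monitoring age-adjusted incidence rates across time and areas, a minimal sketch of direct age standardization may help. The abstract does not spell out the computation; direct standardization is the conventional technique, and all numbers and function names below are invented for illustration:

```python
def age_adjusted_rate(counts, person_years, std_weights):
    """Directly age-standardized incidence rate per 100,000 person-years.

    counts / person_years give the age-specific rates; std_weights are the
    standard population's age-group proportions (must sum to 1).
    """
    assert abs(sum(std_weights) - 1.0) < 1e-9
    return 1e5 * sum(w * c / py
                     for c, py, w in zip(counts, person_years, std_weights))

# Invented example: three age bands with case counts, person-years at risk,
# and standard-population weights.
rate = age_adjusted_rate([10, 40, 90], [50_000, 40_000, 20_000],
                         [0.5, 0.3, 0.2])
```

Comparing such standardized rates of advanced cancers before and after the introduction of screening is what the text means by "monitoring of age-adjusted incidence rates"; using the same standard weights makes rates from different periods and areas comparable.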
Overdiagnosis is commonplace, representing 20% or more of all breast cancers among women invited to screening and 30-50% of screen-detected cancers. Overdiagnosis leads to overtreatment and inflicts considerable physical, psychological and economic harm on many women. Overdiagnosis has also exerted considerable disruptive effects on the interpretation of clinical outcomes expressed in percentages (instead of rates) or as overall survival (instead of mortality rates or stage-specific survival). Rates of radical mastectomies have not decreased following the introduction of screening and keep rising in some countries (e.g. the United States of America (USA)). Hence, the epidemiological picture of mammography screening closely resembles that of screening for neuroblastoma. Reappraisals of Swedish mammography trials demonstrate that the design and statistical analysis of these trials were different from those of all trials on screening for cancers other than breast cancer. We found compelling indications that these trials overestimated reductions in breast cancer mortality associated with screening, in part because of the statistical analyses themselves, in part because of improved therapies and underreporting of breast cancer as the underlying cause of death in screening groups. In this regard, Swedish trials should publish the stage-specific breast cancer mortality rates for the screening and control groups separately. Results of the Greater New York Health Insurance Plan trial are biased because of the underreporting of breast cancer cases and deaths that occurred in women who did not participate in screening. After 17 years of follow-up, the United Kingdom (UK) Age Trial showed no benefit from mammography screening starting at age 39-41. Until around 2005, most proponents of breast screening backed the monitoring of changes in advanced cancer incidence and comparative studies on breast cancer mortality for the evaluation of breast screening effectiveness. 
However, in an attempt to mitigate the contradictions between results of mammography trials and population data, breast-screening proponents have elected to change the criteria for the evaluation of cancer screening effectiveness, giving precedence to incidence-based mortality (IBM) and case-control studies. But practically all IBM studies on mammography screening have a strong ecological component in their design. The two IBM studies done in Norway that meet all methodological requirements do not document significant reductions in breast cancer mortality associated with mammography screening. Because of their propensity to exaggerate the health benefits of screening, case-control studies may demonstrate that mammography screening could reduce the risk of death from diseases other than breast cancer. Numerous statistical model approaches have been conducted for estimating the contributions of screening and of patient management to reductions in breast cancer mortality. Unverified assumptions are needed for running these models. For instance, many models assume that if screening had not occurred, the majority of screen-detected asymptomatic cancers would have progressed to symptomatic advanced cancers. This assumption is not grounded in evidence because a large proportion of screen-detected breast cancers represent overdiagnosis and hence non-progressing tumours. The accumulation of population data in well-screened populations diminishes the relevance of model approaches. The comparison of the performance of different screening modalities - e.g. mammography, digital mammography, ultrasonography, magnetic resonance imaging (MRI), three-dimensional tomosynthesis (TDT) - concentrates on detection rates, which is the ability of a technique to detect more cancers than other techniques. However, a greater detection rate tells little about the capacity to prevent interval and advanced cancers and could just reflect additional overdiagnosis. 
Studies based on the incidence of advanced cancers and on the evaluation of overdiagnosis should be conducted before marketing new breast-imaging technologies. Women at high risk of breast cancer (i.e. 30% lifetime risk and more), such as women with BRCA1/2 mutations, require close breast surveillance. MRI is the preferred imaging method until more radical risk-reduction options are eventually adopted. For women with an intermediate risk of breast cancer (i.e. 10-29% lifetime risk), including women with extremely dense breasts at mammography, there is no evidence that more frequent mammography screening or screening with other modalities actually reduces the risk of breast cancer death. A plethora of epidemiological data shows that, since 1985, progress in the management of breast cancer patients has led to marked reductions in stage-specific breast cancer mortality, even for patients with disseminated disease (i.e. stage IV cancer) at diagnosis. In contrast, the epidemiological data point to a marginal contribution of mammography screening to the decline in breast cancer mortality. Moreover, the more effective the treatments, the less favourable is the harm-benefit balance of screening mammography. New, effective methods for breast screening are needed, as well as research on risk-based screening strategies.
The estimation of parasitic pressure on host populations is frequently required in parasitological investigations. Empirical values of the prevalence of infection are used for this purpose; however, prevalence alone is an insufficient estimate of parasitic pressure on a host population. For example, the same prevalence of infection can be insignificant for a population with high reproductive potential and excessive for a population with low reproductive potential. Therefore, methods of estimating parasitic pressure that take the features of the host population into account are needed. Appropriate parameters should be independent of the researcher's point of view, have a clear biological sense, and be based on easily available characteristics. Methods for estimating parasitic pressure on the host at the organism level are based on various individual viability parameters: longevity, resistance to a difficult environment, etc. The natural extension of this approach to the population level is the analysis of viability parameters of groups, namely, the change in the extinction probability of the host population under the influence of parasites. Obviously, some critical value of the prevalence of infection should exist above which the host population dies out. Therefore, the highest prevalence of infection at which the probability that the host population size decreases over some period is less than the probability that it increases or remains constant can serve as an indicator of the permissible parasitic pressure on the host population. For its designation, the term "parasite capacity of the host population" is proposed. The real parasitic pressure on a host population should be estimated by comparison with its parasite capacity.
Parasite capacity of the host population is the highest possible prevalence of infection at which, as the number of generations T approaches infinity, there exists at least one initial population size ni(0) for which the probability of a size decrease over T generations is less than the probability of an increase. [formula: see text] Estimating the probabilities of host population size changes is necessary to determine the parasite capacity. The classical methods for estimating the extinction probability of a population are unsuitable here, as they require knowledge of the population growth rates and their variances for all possible population sizes. Thus, methods for estimating the extinction probability of a population based on available parameters (sex ratio, fecundity, mortality, prevalence of infection PI) must be developed. The change in population size can be considered a Markov process. The probabilities of all changes in population size over one generation are then described by a matrix of transition probabilities of the Markov process (pi) with dimensions Nmax x Nmax (where Nmax is the maximum population size). The probabilities of all possible size changes over T generations can be calculated as pi^T. By analysing the behaviour of the transition matrix at various prevalences of infection, it is possible to determine the parasite capacity of the host population. In constructing the matrix of transition probabilities, the features of the host population and the influence of parasites on its reproductive potential should be taken into account. A set of possible population sizes at the next generation corresponds to each initial population size. The transition probabilities for the possible population sizes at the next generation can be approximated by the binomial distribution.
The possible population sizes at the next generation nj(t + 1) can be calculated as the sums of the number of surviving parents N1 and surviving offspring N2, and their probabilities as P(N1) x P(N2). The probabilities of equal sums N1 + N2 are added together, as are those of sums with nj(t + 1) ≥ Nmax. The number of surviving parents N1 may range from 0 to (1 - PI) x ni(t). The survival probabilities can be estimated for each N1 as [formula: see text]. The number of surviving offspring N2 may range from 0 to N2max (the maximum number of offspring). N2max is [formula: see text], and the survival probability for each N2 is defined as [formula: see text], where [formula: see text], ni(t) is the initial population size (including males and infected host specimens), PI is the prevalence of infection, Q1 is the survival probability of parents, Pfemales is the frequency of females in the host population, K is the number of offspring per female, and Q2 is the survival probability of offspring. When constructing the matrix of transition probabilities of the Markov process (pi), the procedure outlined above should be repeated for all possible initial population sizes. The matrix of transition probabilities for T generations is defined as pi^T. This matrix (pi^T) embodies all possible transition probabilities from the initial population sizes to the final population sizes and contains a wealth of information in itself. From the practical point of view, however, plots of the probability of population size decrease are more suitable for analysis. They can be obtained by summing the probabilities within the rows of the matrix from 0 to ni - 1 (where ni is the population size corresponding to the row of the matrix). The proposed parameter has a number of advantages. Firstly, it is independent of the researcher's point of view. Secondly, it has a clear biological sense: it is the limit of prevalence that is safe for the host population.
Thirdly, only readily available parameters are used in the calculation of parasite capacity: population size, sex ratio, fecundity and mortality. Lastly, with modern computers the calculations are not labour-intensive. The drawbacks of this parameter are: (1) the assumption that prevalence of infection, mortality, fecundity and sex ratio are constant in time (situations are possible in which the variability of these parameters cannot be neglected); (2) the term "maximum population size" has no clear biological sense; (3) objective restrictions exist on applying this mathematical approach to populations whose size exceeds 1000 specimens (a huge number of computing operations, of order Nmax^3 * (T - 1), and work with very low probabilities). Further development of the proposed approach will allow a transition from the probabilities of size changes of individual populations to the probabilities of size changes of population systems under the influence of parasites. This approach can be used in epidemiology and in conservation biology.
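The Markov construction above can be sketched in code. This is an illustrative reading, not the authors' exact model: the elided formulas ([formula: see text]) are filled in with one plausible interpretation — only uninfected parents survive and reproduce, with N2max = N1max x Pfemales x K — and all function names are ours.

```python
from math import comb

def binom_pmf(n, p):
    """PMF of Binomial(n, p) as a list of length n + 1."""
    return [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]

def transition_matrix(n_max, PI, Q1, Q2, p_females, K):
    """Markov transition matrix over host population sizes 0..n_max.

    From each initial size n: surviving parents N1 ~ Binomial(N1max, Q1),
    where N1max = (1 - PI) * n uninfected parents, and surviving offspring
    N2 ~ Binomial(N2max, Q2), where N2max = N1max * p_females * K (our
    assumed reading of the elided formula).  Sums >= n_max are lumped
    into the top state n_max.
    """
    P = [[0.0] * (n_max + 1) for _ in range(n_max + 1)]
    P[0][0] = 1.0  # extinction is absorbing
    for n in range(1, n_max + 1):
        n1_max = int((1 - PI) * n)
        n2_max = int(n1_max * p_females * K)
        p1 = binom_pmf(n1_max, Q1)
        p2 = binom_pmf(n2_max, Q2)
        for a, pa in enumerate(p1):        # P(N1 + N2 = s) by convolution
            for b, pb in enumerate(p2):
                P[n][min(a + b, n_max)] += pa * pb
    return P

def mat_power(P, T):
    """P^T: probabilities of all size changes over T generations."""
    size = len(P)
    R = [[float(i == j) for j in range(size)] for i in range(size)]
    for _ in range(T):
        R = [[sum(R[i][k] * P[k][j] for k in range(size))
              for j in range(size)] for i in range(size)]
    return R

def decrease_less_likely(P, T):
    """True if some initial size n has P(decrease over T gens) < P(increase).

    Row n of P^T is summed over columns 0..n-1 (decrease) and n+1..n_max
    (increase); the lumped top state is skipped since it cannot increase.
    """
    M = mat_power(P, T)
    for n in range(1, len(P) - 1):
        if sum(M[n][:n]) < sum(M[n][n + 1:]):
            return True
    return False

def parasite_capacity(n_max, Q1, Q2, p_females, K, T=10, step=0.05):
    """Highest prevalence PI at which a size decrease is still the less
    likely outcome for at least one initial population size."""
    capacity, PI = 0.0, 0.0
    while PI <= 1.0:
        if decrease_less_likely(
                transition_matrix(n_max, PI, Q1, Q2, p_females, K), T):
            capacity = PI
        PI = round(PI + step, 10)
    return capacity
```

For example, parasite_capacity(8, 0.9, 0.9, 0.5, 6, T=5, step=0.5) scans PI over {0, 0.5, 1.0} and reports the largest prevalence at which the population still tends to grow; the drawback noted above (operation counts of order Nmax^3 * (T - 1)) is visible in the nested loops of mat_power.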
The aim of treatment for attention-deficit/hyperactivity disorder (ADHD) is to decrease symptoms, enhance functionality, and improve well-being for the child and his or her close contacts. However, the measurement of treatment response is often limited to measuring symptoms using behavior rating scales and checklists completed by teachers and parents. Because so much of the focus has been on symptom reduction, less is known about other possible health problems, which can be measured easily using health-related quality-of-life (HRQL) questionnaires, which are designed to gather information across a range of health domains. The aim of our study was to measure HRQL in a clinic-based sample of children who had a diagnosis of ADHD and consider the impact of 2 clinical factors, symptom severity and comorbidity, on HRQL. Our specific hypotheses were that parent-reported HRQL would be poorer in children with ADHD than in normative US and Australian pediatric samples, in children with increasing severity of ADHD symptoms, and in children who had diagnoses of comorbid psychiatric disorders. A cross-sectional survey was conducted in British Columbia, Canada. The sample included 165 respondents of the 259 eligible children (63.7% response rate) who were referred to the ADHD Clinic in British Columbia between November 2001 and October 2002. Children who are seen in this clinic come from all parts of the province and are diverse in terms of socioeconomic status and case mix. ADHD was diagnosed in 131 children, 68.7% of whom had a comorbid psychiatric disorder. Some children had >1 comorbidity: 23 had 2, 5 had 3, and 1 had 4. Fifty-one children had a comorbid learning disorder (LD), 45 had oppositional defiant disorder or conduct disorder (ODD/CD), and 27 had some other comorbid diagnosis. The mean age of children was 10 years (standard deviation: 2.8). Boys composed 80.9% (N = 106) of the sample.
We used the 50-item parent version of the Child Health Questionnaire to measure physical and psychosocial health. Physical domains include the following: physical functioning (PF), role/social limitations as a result of physical health (RP), bodily pain/discomfort (BP), and general health perception (GH). Psychosocial domains include the following: role/social limitations as a result of emotional-behavioral problems (REB), self-esteem (SE), mental health (MH), general behavior (BE), emotional impact on parent (PTE), and time impact on parents (PTT). A separate domain measures limitations in family activities (FA). There is also a single-item measure of family cohesion (FC). Individual scale scores and summary scores for physical (PhS) and psychosocial health (PsS) can be computed. Symptom severity data (parent and teacher) came from the Child/Adolescent Symptom Inventory 4. These checklists provide information on symptoms for the 3 ADHD subtypes (inattentive, hyperactive, and combined). Each child underwent a comprehensive psychiatric assessment by 1 of 4 child psychiatrists. Documentation included a full 5-axis Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition diagnosis on the basis of a comprehensive assessment. Clinical information for each child was extracted from hospital notes. Compared with both population samples, children with ADHD had comparable physical health but clinically important deficits in HRQL in all psychosocial domains, FA, FC, and PsS, with effect sizes as follows: FC = -0.66, SE = -0.90, MH = -0.97, PTT = -1.07, REB = -1.60, BE = -1.73, PTE = -1.87, FA = -1.95, and PsS = -1.98. Poorer HRQL for all domains of psychosocial health, FA, and PsS correlated significantly with more parent-reported inattentive, hyperactive, and combined symptoms of ADHD. 
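The deficits above are reported as effect sizes. The abstract does not state the formula used; a common convention for comparisons against normative data is Cohen's d computed with the normative standard deviation, sketched here with invented scores (not the study's raw data):

```python
def effect_size(sample_mean, norm_mean, norm_sd):
    """Cohen's d against normative data: (sample - norm) / normative SD.

    Negative values indicate poorer HRQL in the clinical sample.
    (Assumption: the normative SD is the denominator; the abstract does
    not say which SD was used.)
    """
    return (sample_mean - norm_mean) / norm_sd

# Hypothetical CHQ domain scores: clinical mean 60 vs normative mean 79, SD 12
d = effect_size(60.0, 79.0, 12.0)
```

On this convention, the reported values (e.g. PsS = -1.98) would mean the clinical sample scored roughly two normative standard deviations below the population mean, which is why the authors describe the differences as clinically important.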
Children with ≥ 2 comorbid disorders differed significantly from those with no comorbidity in most areas, including RP, GH, REB, BE, MH, SE, PTT, FA, and PsS, and from those with 1 comorbid disorder in 3 domains (BE, MH, and FA) and the PsS. The mean PsS scores for children in the ODD/CD group (mean difference: -12.9; effect size = -1.11) and children in the other comorbidity group (-9.0; effect size = -.77), but not children in the LD group, were significantly lower than for children with no comorbid disorder. Predictors of physical health in a multiple regression model included child's gender (beta = .177) and number of comorbid conditions (beta = -.197). These 2 variables explained very little variation in the PhS. Predictors of psychosocial health included the number of comorbid conditions (beta = -.374) and parent-rated combined ADHD symptoms (beta = -.362). These 2 variables explained 31% of the variation in the PsS. Our study shows that ADHD has a significant impact on multiple domains of HRQL in children and adolescents. In support of our hypotheses, compared with normative data, children with ADHD had more parent-reported problems in terms of emotional-behavioral role function, behavior, mental health, and self-esteem. In addition, the problems of children with ADHD had a significant impact on the parents' emotional health and parents' time to meet their own needs, and they interfered with family activities and family cohesion. The differences that we found represent clinically important differences in HRQL. Our study adds new information about the HRQL of children with ADHD in relation to symptom severity and comorbidity. Children with more symptoms of ADHD had worse psychosocial HRQL. Children with multiple comorbid disorders had poorer psychosocial HRQL across a range of domains compared with children with no comorbid disorder and with those with 1 comorbid disorder.
In addition, compared with children with no comorbidity, psychosocial HRQL was significantly lower in children with ODD/CD and children in the other comorbidity group but not in children with an LD. The demonstration of a differential impact of ADHD on health and well-being in relation to symptom severity and comorbidity has important implications for policies around eligibility for special educational and other supportive services. Because the impact of ADHD is not uniform, decisions about needed supports should incorporate a broader range of relevant indicators of outcome, including HRQL. Although many studies focus on measuring symptoms using rating scales and checklists, in our study, using a multidimensional questionnaire, we were able to show that many areas of health are affected in children with ADHD. We therefore argue that research studies of children with ADHD should include measurement of these broader domains of family impact and child health.
Having a mental illness has been, and remains even now, a strong barrier to effective medical care. Most mental illnesses, such as schizophrenia, bipolar disorder, and depression, are associated with undue medical morbidity and mortality. This represents a major health problem, with a lifespan 15 to 30 years shorter than that of the general population. Based on these facts, a workshop was convened by a panel of specialists (psychiatrists, endocrinologists, cardiologists, internists, and pharmacologists from several French hospitals) to review the information on comorbidity and mortality among patients with severe mental illness, the risks of metabolic disorders with antipsychotic treatment and, finally, cardiovascular disease. The French experts strongly agreed on the following points: patients with severe mental illness have a higher rate of preventable risk factors such as smoking, addiction, poor diet and lack of exercise; the recognition and management of morbidity are made more difficult by barriers related to the patients, the illness, the attitudes of medical practitioners, and the structure of healthcare delivery services; and improved detection and treatment of comorbid medical illness in people with severe mental illness will have significant benefits for their psychosocial functioning and overall quality of life. GUIDELINES FOR INITIATING ANTIPSYCHOTIC THERAPY: On this basis, the French experts propose guidelines for practising psychiatrists initiating and maintaining therapy with antipsychotic compounds. The aim of the guidelines is practical and concerns the detection of medical illness at the first episode of mental illness, management of comorbidity with other specialists and the family practitioner, and follow-up, with some key points. The guidelines are divided into two major parts.
The first part provides a review of the mortality and comorbidity of patients with severe mental illness: the increased morbidity and mortality are primarily due to premature cardiovascular disease (myocardial infarction, stroke...). The cardiovascular events are strongly linked to non-modifiable risk factors such as age, gender and personal and/or family history, but also to crucial modifiable risk factors such as overweight and obesity, dyslipidemia, diabetes, hypertension and smoking. Although these classical risk factors exist in the general population, epidemiological studies suggest that patients with severe mental illness have an increased prevalence of these risk factors. The causes of increased metabolic and cardiovascular risk in this population are strongly related to poverty and limited access to medical care, but also to the use of psychotropic medication. The first part also reviews the major published consensus guidelines for metabolic monitoring of patients treated with antipsychotic medication, which have recommended stringent monitoring of metabolic status and cardiovascular risk factors in psychiatric patients receiving antipsychotic drugs. There have been six attempts, all published between 2004 and 2005: Mount Sinai, Australia, ADA-APA, Belgium, United Kingdom, Canada. Each guideline had specific, somewhat discordant, recommendations about which patients and drugs should be monitored. However, there was agreement on the importance of baseline monitoring and follow-up for the first three to four months of treatment, with subsequent ongoing re-evaluation. There was agreement on the utility of the following tests and measures: weight and height, waist circumference, blood pressure, fasting plasma glucose, and fasting lipid profile.
In the second part, the French experts propose guidelines for practising psychiatrists initiating and maintaining therapy with antipsychotic drugs. The first goal is identification of risk factors for the development of metabolic and cardiovascular disorders. Non-modifiable risk factors include: increasing age; gender (increased rates of obesity, diabetes and metabolic syndrome are observed in female patients treated with antipsychotic drugs); personal and family history of obesity, diabetes or heart disease; and ethnicity, as increased rates of diabetes, metabolic syndrome and coronary heart disease are observed in patients of non-European ethnicity, especially among South Asian, Hispanic, and Native American people. Modifiable risk factors include: obesity, visceral obesity, smoking, physical inactivity, and poor dietary habits. The expert panel then focused on the components of the initial visit, such as: family and medical history; baseline weight and BMI, which should be measured for all patients (body mass index is calculated by dividing weight in kilograms by height in meters squared); visceral obesity, measured by waist circumference; blood pressure; fasting plasma glucose; and fasting lipid profiles. These are the basic measures and laboratory examinations to perform when initiating antipsychotic treatment. ECG: several of the antipsychotic medications, typical and atypical, have been shown to prolong the QTc interval on the ECG. Prolongation of the QTc interval is of potential concern since the patient may be at risk for wave burst arrhythmia, a potentially serious ventricular arrhythmia. A QTc interval greater than 500 ms places the patient at a significantly increased risk for serious arrhythmia. QTc prolongation has been reported with varying incidence and degrees of severity. The atypical antipsychotics can also cause other cardiovascular adverse effects, for example orthostatic hypotension.
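Two of the quantities above are directly computable. The sketch below implements the BMI formula quoted in the text (weight in kilograms divided by height in meters squared) and the QTc > 500 ms threshold cited for significantly increased arrhythmia risk; the function names and example values are ours:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def qtc_flag(qtc_ms):
    """Flag QTc intervals above the 500 ms threshold cited in the text as
    placing the patient at significantly increased risk of serious arrhythmia."""
    return qtc_ms > 500

# Hypothetical patient: 90 kg, 1.75 m, QTc 520 ms
patient_bmi = bmi(90.0, 1.75)
patient_qtc_alert = qtc_flag(520)
```

Such simple derived values are exactly what the guidelines ask clinicians to record at the initial visit and track at the recommended follow-up intervals.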
Risk factors for cardiovascular adverse effects with antipsychotics include: known cardiovascular disease; electrolyte disorders, such as hypokalaemia and hypomagnesaemia; genetic characteristics; increasing age; female gender; autonomic dysfunction; high doses of antipsychotics; the use of interacting drugs; and the psychiatric illness itself. In any patient with pre-existing cardiac disease, a pre-treatment ECG with routine follow-up is recommended. Patients on antipsychotic drugs should undergo regular testing of blood sugar and lipid profile, as well as body weight, waist circumference and blood pressure, with recommended time intervals between measures. Clinicians should track the effects of treatment on physical and biological parameters, and should facilitate access to appropriate medical care. In order to prevent or limit possible side effects, information must be given to the patient and his family on the cardiovascular and metabolic risks. The cost-effectiveness of implementing these recommendations is considerable: the costs of laboratory tests and additional equipment (such as scales, tape measures, and blood pressure devices) are modest. The issue of responsibility for monitoring for metabolic abnormalities is much debated. However, with the prescription of antipsychotic drugs comes the responsibility for monitoring potential drug-induced metabolic abnormalities. The onset of metabolic disorders will call for specific treatments. Coordinated action by psychiatrists, general practitioners, endocrinologists, cardiologists, nurses, dieticians, and the family is certainly a key determinant in ensuring the optimal care of these patients.
CONDITION AND TARGET POPULATION There are two main types of glaucoma, primary open angle (POAG) and angle closure glaucoma, of which POAG is the more common type. POAG is diagnosed by assessing degenerative changes in the optic disc and loss of visual field (VF). Risk factors for glaucoma include an increase in intraocular pressure (IOP), a family history of glaucoma, older age and being of African descent. The prevalence of POAG ranges from 1.1% to 3.0% in Western populations and from 4.2% to 8.8% in populations of African descent. Usually the IOP associated with POAG is elevated above the normal range (10-20 mmHg), but when IOP is not elevated it is often referred to as normal-tension glaucoma (NTG). In population-based studies, approximately one-third to half of the patients with glaucomatous VF loss have normal IOP on initial examination. People with elevated IOP (>21 mmHg), but with no evidence of optic disc or VF damage, have ocular hypertension. It has been estimated that 3 to 6 million people in the United States, including 4% to 7% of those older than 40 years, have elevated IOP without detectable glaucomatous damage on standard clinical tests. An Italian study found the overall prevalence of ocular hypertension, POAG, and NTG in 4,297 people over 40 years of age to be 2.1%, 1.4% and 0.6% respectively. DIURNAL CURVE: In normal individuals, IOP fluctuates 2 to 6 mmHg over a 24 hour period. IOP is influenced by body position, with higher readings found in the supine relative to the upright position. As most individuals sleep in the supine position and are upright during the day, IOP is higher on average in people, both with and without glaucoma, in the nocturnal period. IOP is generally higher in the morning compared to the afternoon.
Multiple IOP measurements over the course of a day can be used to generate a diurnal curve and may have clinical importance in terms of diagnosis and management of patients with IOP-related conditions, since a solitary reading in the office may not reveal the peak IOP and fluctuation that a patient experiences. Furthermore, because of diurnal and nocturnal variation in IOP, 24-hour monitoring may reveal higher peaks and wider fluctuations than those found during office hours and may better determine risk of glaucoma progression than single or office-hour diurnal curve measurements. There is discrepancy in the literature regarding which parameter of IOP measurement (e.g., mean IOP or fluctuation/range of IOP) is most important as an independent risk factor for progression or development of glaucoma. The potential for increased rates or likelihood of worsening glaucoma among those with larger IOP swings within defined time periods has received increasing attention in the literature. According to an expert consultant: the role of a diurnal tension curve is to assess IOP in relation to either a risk factor for the development or progression of glaucoma or achievement of a target pressure which may direct a therapeutic change. Candidates for a diurnal curve are usually limited to glaucoma suspects (based on optic disc changes or, less commonly, visual field changes) to assess the risk for development of glaucoma, or patients with progressive glaucoma despite normal single office IOP measurements. Clinically, diurnal tension curves are used to determine the peak IOP and range. Intraocular pressure fluctuation as a risk factor for progression of glaucoma has also been examined without the use of diurnal curves. In these cases, single IOP measurements were made every 3-6 months over several months/years. The standard deviation (SD) of the mean IOP was used as a surrogate for fluctuation since no diurnal tension curves were obtained.
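The two fluctuation parameters discussed — the range of a diurnal curve (peak minus trough) and the standard deviation used as a surrogate when only single measurements exist — can be sketched as follows. The readings are hypothetical, and statistics.stdev gives the sample SD:

```python
from statistics import mean, stdev

def diurnal_iop_summary(readings_mmHg):
    """Summarize a diurnal IOP curve: mean, peak, trough and fluctuation.

    Fluctuation is reported both as the range (peak minus trough) and as
    the standard deviation, the surrogate used when measurements are
    sparse single readings rather than a true diurnal curve.
    """
    return {
        "mean": mean(readings_mmHg),
        "peak": max(readings_mmHg),
        "trough": min(readings_mmHg),
        "range": max(readings_mmHg) - min(readings_mmHg),
        "sd": stdev(readings_mmHg),
    }

# Hypothetical curve: 5 office-hour GAT measurements over one day (mmHg)
s = diurnal_iop_summary([16, 19, 22, 18, 15])
```

In this invented example the solitary morning reading (16 mmHg) would miss a peak of 22 mmHg and a 7 mmHg diurnal range, which is the clinical argument for the curve made in the text.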
To determine whether the use of a diurnal tension curve (multiple IOP measurements over a minimum 8-hour duration) is more effective than not using a diurnal tension curve (single IOP measurements) to assess IOP fluctuation as a risk factor for the development or progression of glaucoma. To determine whether the use of a diurnal tension curve is beneficial for glaucoma suspects or patients with progressive glaucoma despite normal single office IOP measurements, and leads to a more effective disease management strategy. A literature search was performed on July 22, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from January 1, 2006 until July 14, 2010. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. Articles with unknown eligibility were reviewed with a second clinical epidemiologist, then a group of epidemiologists, until consensus was established. The quality of evidence was assessed as high, moderate, low or very low according to GRADE methodology. 
Inclusion criteria: open angle glaucoma (established or OHT high risk) in an adult population; IOP measurement by Goldmann applanation tonometry (GAT, the gold standard); number and timing of IOP measurements explicitly reported (e.g., 5 measurements a day for 5 visits to generate a diurnal curve, or 1 measurement a day [no diurnal curve] every 3 months for 2 years); IOP parameters including fluctuation (range [peak minus trough] or standard deviation) and mean; outcome measure of progression or development of glaucoma; study reports results for ≥ 20 eyes; most recent publication if there are multiple publications based on the same study. Exclusion criteria: angle closure glaucoma or pediatric glaucoma; case reports; IOP measured by a technique other than GAT (the gold standard); number and timing of IOP measurements not explicitly reported. Outcome measure: progression or development of glaucoma. There is very low quality evidence (retrospective studies, patients on different treatments) for the use of a diurnal tension curve or single measurements to assess short- or long-term IOP fluctuation or mean as a risk factor for the development or progression of glaucoma. There is very low quality evidence (expert opinion) whether the use of a diurnal tension curve is beneficial for glaucoma suspects or patients with progressive glaucoma, despite normal single office IOP measurements, and leads to a more effective disease management strategy.
To provide an update to the "Surviving Sepsis Campaign Guidelines for Management of Severe Sepsis and Septic Shock," last published in 2008. A consensus committee of 68 international experts representing 30 international organizations was convened. Nominal groups were assembled at key international meetings (for those committee members attending the conference). A formal conflict of interest policy was developed at the onset of the process and enforced throughout. The entire guidelines process was conducted independent of any industry funding. A stand-alone meeting was held for all subgroup heads, co- and vice-chairs, and selected individuals. Teleconferences and electronic-based discussion among subgroups and among the entire committee served as an integral part of the development. The authors were advised to follow the principles of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system to guide assessment of quality of evidence from high (A) to very low (D) and to determine the strength of recommendations as strong (1) or weak (2). The potential drawbacks of making strong recommendations in the presence of low-quality evidence were emphasized. Recommendations were classified into three groups: (1) those directly targeting severe sepsis; (2) those targeting general care of the critically ill patient and considered high priority in severe sepsis; and (3) pediatric considerations. 
Key recommendations and suggestions, listed by category, include: early quantitative resuscitation of the septic patient during the first 6 h after recognition (1C); blood cultures before antibiotic therapy (1C); imaging studies performed promptly to confirm a potential source of infection (UG); administration of broad-spectrum antimicrobial therapy within 1 h of the recognition of septic shock (1B) and severe sepsis without septic shock (1C) as the goal of therapy; reassessment of antimicrobial therapy daily for de-escalation, when appropriate (1B); infection source control with attention to the balance of risks and benefits of the chosen method within 12 h of diagnosis (1C); initial fluid resuscitation with crystalloid (1B) and consideration of the addition of albumin in patients who continue to require substantial amounts of crystalloid to maintain adequate mean arterial pressure (2C) and the avoidance of hetastarch formulations (1B); initial fluid challenge in patients with sepsis-induced tissue hypoperfusion and suspicion of hypovolemia to achieve a minimum of 30 mL/kg of crystalloids (more rapid administration and greater amounts of fluid may be needed in some patients) (1C); fluid challenge technique continued as long as hemodynamic improvement is based on either dynamic or static variables (UG); norepinephrine as the first-choice vasopressor to maintain mean arterial pressure ≥65 mmHg (1B); epinephrine when an additional agent is needed to maintain adequate blood pressure (2B); vasopressin (0.03 U/min) can be added to norepinephrine to either raise mean arterial pressure to target or to decrease norepinephrine dose but should not be used as the initial vasopressor (UG); dopamine is not recommended except in highly selected circumstances (2C); dobutamine infusion administered or added to vasopressor in the presence of (a) myocardial dysfunction as suggested by elevated cardiac filling pressures and low cardiac output, or (b) ongoing signs of hypoperfusion 
despite achieving adequate intravascular volume and adequate mean arterial pressure (1C); avoiding use of intravenous hydrocortisone in adult septic shock patients if adequate fluid resuscitation and vasopressor therapy are able to restore hemodynamic stability (2C); hemoglobin target of 7-9 g/dL in the absence of tissue hypoperfusion, ischemic coronary artery disease, or acute hemorrhage (1B); low tidal volume (1A) and limitation of inspiratory plateau pressure (1B) for acute respiratory distress syndrome (ARDS); application of at least a minimal amount of positive end-expiratory pressure (PEEP) in ARDS (1B); higher rather than lower level of PEEP for patients with sepsis-induced moderate or severe ARDS (2C); recruitment maneuvers in sepsis patients with severe refractory hypoxemia due to ARDS (2C); prone positioning in sepsis-induced ARDS patients with a PaO2/FiO2 ratio of ≤100 mm Hg in facilities that have experience with such practices (2C); head-of-bed elevation in mechanically ventilated patients unless contraindicated (1B); a conservative fluid strategy for patients with established ARDS who do not have evidence of tissue hypoperfusion (1C); protocols for weaning and sedation (1A); minimizing use of either intermittent bolus sedation or continuous infusion sedation targeting specific titration endpoints (1B); avoidance of neuromuscular blockers if possible in the septic patient without ARDS (1C); a short course of neuromuscular blocker (no longer than 48 h) for patients with early ARDS and a PaO2/FiO2 <150 mm Hg (2C); a protocolized approach to blood glucose management commencing insulin dosing when two consecutive blood glucose levels are >180 mg/dL, targeting an upper blood glucose ≤180 mg/dL (1A); equivalency of continuous veno-venous hemofiltration or intermittent hemodialysis (2B); prophylaxis for deep vein thrombosis (1B); use of stress ulcer prophylaxis to prevent upper gastrointestinal bleeding in patients with bleeding risk factors 
(1B); oral or enteral (if necessary) feedings, as tolerated, rather than either complete fasting or provision of only intravenous glucose within the first 48 h after a diagnosis of severe sepsis/septic shock (2C); and addressing goals of care, including treatment plans and end-of-life planning (as appropriate) (1B), as early as feasible, but within 72 h of intensive care unit admission (2C). Recommendations specific to pediatric severe sepsis include: therapy with face mask oxygen, high flow nasal cannula oxygen, or nasopharyngeal continuous positive airway pressure in the presence of respiratory distress and hypoxemia (2C); use of physical examination therapeutic endpoints such as capillary refill (2C); for septic shock associated with hypovolemia, the use of crystalloids or albumin to deliver a bolus of 20 mL/kg of crystalloids (or albumin equivalent) over 5-10 min (2C); more common use of inotropes and vasodilators for low cardiac output septic shock associated with elevated systemic vascular resistance (2C); and use of hydrocortisone only in children with suspected or proven "absolute" adrenal insufficiency (2C). Strong agreement existed among a large cohort of international experts regarding many level 1 recommendations for the best care of patients with severe sepsis. Although a significant number of aspects of care have relatively weak support, evidence-based recommendations regarding the acute management of sepsis and septic shock are the foundation of improved outcomes for this important group of critically ill patients.
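The weight-based volumes above (a minimum of 30 mL/kg of crystalloids for adults, a 20 mL/kg bolus in children) are straightforward arithmetic. A minimal sketch, for illustration only and not clinical guidance; the function name is ours:

```python
def fluid_volume_ml(weight_kg: float, ml_per_kg: float) -> float:
    """Weight-based fluid volume in mL (illustrative arithmetic only)."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return weight_kg * ml_per_kg

# Adult minimum initial fluid challenge per the 1C recommendation: 30 mL/kg
print(fluid_volume_ml(70.0, 30.0))  # 2100.0 mL for a 70 kg adult
# Pediatric bolus per the 2C recommendation: 20 mL/kg over 5-10 min
print(fluid_volume_ml(15.0, 20.0))  # 300.0 mL for a 15 kg child
```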
In this monograph, we ask whether various kinds of intellectual, physical, and social activities produce cognitive enrichment effects - that is, whether they improve cognitive performance at different points of the adult life span, with a particular emphasis on old age. We begin with a theoretical framework that emphasizes the potential of behavior to influence levels of cognitive functioning. According to this framework, the undeniable presence of age-related decline in cognition does not invalidate the view that behavior can enhance cognitive functioning. Instead, the course of normal aging shapes a zone of possible functioning, which reflects person-specific endowments and age-related constraints. Individuals influence whether they function in the higher or lower ranges of this zone by engaging in or refraining from beneficial intellectual, physical, and social activities. From this point of view, the potential for positive change, or plasticity, is maintained in adult cognition. It is an argument that is supported by newer research in neuroscience showing neural plasticity in various aspects of central nervous system functioning, neurochemistry, and architecture. This view of human potential contrasts with static conceptions of cognition in old age, according to which decline in abilities is fixed and individuals cannot slow its course. Furthermore, any understanding of cognition as it occurs in everyday life must make a distinction between basic cognitive mechanisms and skills (such as working-memory capacity) and the functional use of cognition to achieve goals in specific situations. In practice, knowledge and expertise are critical for effective functioning, and the available evidence suggests that older adults effectively employ specific knowledge and expertise and can gain new knowledge when it is required. 
We conclude that, on balance, the available evidence favors the hypothesis that maintaining an intellectually engaged and physically active lifestyle promotes successful cognitive aging. First, cognitive-training studies have demonstrated that older adults can improve cognitive functioning when provided with intensive training in strategies that promote thinking and remembering. The early training literature suggested little transfer of function from specifically trained skills to new cognitive tasks; learning was highly specific to the cognitive processes targeted by training. Recently, however, a new generation of studies suggests that providing structured experience in situations demanding executive coordination of skills - such as complex video games, task-switching paradigms, and divided attention tasks - trains strategic control over cognition that does show transfer to different task environments. These studies suggest that there is considerable reserve potential in older adults' cognition that can be enhanced through training. Second, a considerable number of studies indicate that maintaining a lifestyle that is intellectually stimulating predicts better maintenance of cognitive skills and is associated with a reduced risk of developing Alzheimer's disease in late life. Our review focuses on longitudinal evidence of a connection between an active lifestyle and enhanced cognition, because such evidence admits fewer rival explanations of observed effects (or lack of effects) than does cross-sectional evidence. The longitudinal evidence consistently shows that engaging in intellectually stimulating activities is associated with better cognitive functioning at later points in time. Other studies show that meaningful social engagement is also predictive of better maintenance of cognitive functioning in old age. 
These longitudinal findings are also open to important rival explanations, but overall, the available evidence suggests that activities can postpone decline, attenuate decline, or provide prosthetic benefit in the face of normative cognitive decline, while at the same time indicating that late-life cognitive changes can result in curtailment of activities. Given the complexity of the dynamic reciprocal relationships between stimulating activities and cognitive function in old age, additional research will be needed to address the extent to which observed effects validate a causal influence of an intellectually engaged lifestyle on cognition. Nevertheless, the hypothesis that an active lifestyle that requires cognitive effort has long-term benefits for older adults' cognition is at least consistent with the available data. Furthermore, new intervention research that involves multimodal interventions focusing on goal-directed action requiring cognition (such as reading to children) and social interaction will help to address whether an active lifestyle enhances cognitive function. Third, there is a parallel literature suggesting that physical activity, and aerobic exercise in particular, enhances older adults' cognitive function. Unlike the literature on an active lifestyle, there is already an impressive array of work with humans and animal populations showing that exercise interventions have substantial benefits for cognitive function, particularly for aspects of fluid intelligence and executive function. Recent neuroscience research on this topic indicates that exercise has substantial effects on brain morphology and function, representing a plausible brain substrate for the observed effects of aerobic exercise and other activities on cognition. Our review identifies a number of areas where additional research is needed to address critical questions. 
For example, there is considerable epidemiological evidence that stress and chronic psychological distress are negatively associated with changes in cognition. In contrast, less is known about how positive attributes, such as self-efficacy, a sense of control, and a sense of meaning in life, might contribute to preservation of cognitive function in old age. It is well known that certain personality characteristics such as conscientiousness predict adherence to an exercise regimen, but we do not know whether these attributes are also relevant to predicting maintenance of cognitive function or effective compensation for cognitive decline when it occurs. Likewise, more information is needed on the factors that encourage maintenance of an active lifestyle in old age in the face of elevated risk for physiological decline, mechanical wear and tear on the body, and incidence of diseases with disabling consequences, and whether efforts to maintain an active lifestyle are associated with successful aging, both in terms of cognitive function and psychological and emotional well-being. We also discuss briefly some interesting issues for society and public policy regarding cognitive-enrichment effects. For example, should efforts to enhance cognitive function be included as part of a general prevention model for enhancing health and vitality in old age? We also comment on the recent trend of business marketing interventions claimed to build brain power and prevent age-related cognitive decline, and the desirability of direct research evidence to back claims of effectiveness for specific products.
The approval of microbubbles with the inert gas sulfur hexafluoride (SF6) and a palmitic acid shell (SonoVue®, Bracco, Geneva, CH) for the diagnostic imaging of liver tumors in adults and children by the FDA in the United States represents a milestone for contrast-enhanced ultrasound (CEUS). This warrants a look back at the history of the development of CEUS. The first publications, based on echocardiographic observations of right ventricular contrast phenomena caused by tiny air bubbles following i.v. injection of indocyanine green, appeared around 1970 1 2 3. A longer period of sporadic publications but no real progress then followed since, in contrast to X-ray methods, ultrasound works quite well without a contrast agent. It is noteworthy that the foundations for further development were primarily laid in Europe. The development and approval (1991) of the contrast agent Echovist® (unsuitable for passing through the lung capillaries) by a German contrast agent manufacturer for echocardiography 4 5 resulted in the first extracardiac indications, e.g. for detecting retrovesical reflux and tubal patency, in the mid-1980s 6 7 8. The sensitivity of color Doppler was not able to compensate for the lack of an ultrasound contrast agent compared to CT with its obligatory contrast administration. Studies of SHU 508 - microbubbles of air moderately stabilized with galactose and palmitic acid - began in 1990 9 10 11 12 13 14 15 and the contrast agent was then introduced in 1995 in Germany as Levovist®. The most important publications by Blomley, Cosgrove, Leen, and Albrecht are named here on a representative basis 16 17 18 19 20. SHU 508, along with other US contrast agents, provided impressive proof of the superiority of CEUS for the diagnosis of liver metastases. However, practical application remained complicated and required skill and technical know-how because of a lack of suitable software on US units 21 22 23 24 25. 
The monograph regarding the use of contrast agent in the liver by Wermke and Gaßmann is impressive but unfortunately only available in German 26. In addition to being applied in the heart and the liver, CEUS was first used in transcranial applications 27 and in vessels 28, the kidneys 29, and the breast 30. Measurements of transit times were also of particular interest 31. It was difficult to convince ultrasound device manufacturers of the need to adapt US units to US contrast agents and not vice versa. The breakthrough came with low-MI phase contrast inversion and the introduction of SonoVue® in many European countries in 2001. This more stable US contrast agent is easy to use and is becoming indispensable in diagnostic imaging of the liver 32 33 34 35 36 37 38 39 40. Studies have shown its excellent tolerability 41 and diagnostic reliability comparable to that of MDCT and MRI in the liver 42 43. Today it would be unimaginable to diagnose liver tumors without CEUS 44. This also applies to very small lesions 45 46. EFSUMB published the first CEUS guidelines in 2004 47, which have since been reissued and divided into hepatic 48 and extrahepatic applications 49. The first recommendations regarding quantitative assessment have also been published 50. The increasing scientific interest in CEUS is evident based on the greater number of PubMed hits for Echovist® (ca. 130), Levovist® (ca. 500) and SonoVue® (ca. 1500), as well as on the fact that publications regarding CEUS comprise almost 20% of UiM/EJU articles in the last 10 years. The number of CEUS articles in UiM/EJU continues to be high 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75. In the clinical reality, CEUS has been able to become established alongside CT and MRI, according to the saying "better is the enemy of good" 76, as the method of choice after B-mode ultrasound in the evaluation of liver tumor malignancy in Germany, where the technically challenging method is promoted. 
In the case of unclear CT and MRI findings, CEUS performed by an experienced examiner/clinician often provides the solution, particularly in the case of small lesions, and is the last resort before US-guided biopsy 45 46. However, there is a lack of competent CEUS examiners, and Germany continues to be the world champion of X-ray examinations with no noticeable reverse trend. In almost every doctor's office and hospital, ultrasound costs are far from fully covered, resulting in an extremely high frequency of CT use, with CT being available to everyone regardless of insurance status. The USA is now in the starting position for CEUS. It will be exciting to see how the method will develop there. The FDA's decision to approve sulfur hexafluoride (Lumason® = SonoVue®) should be considered against the background of the radiation exposure caused by CT examinations and the fact that MRI using gadolinium-containing contrast agents is no longer considered noninvasive because of nephrogenic systemic fibrosis (NSF) and the accumulation of the agent in the cerebrum. An essential point of the campaign regarding the avoidance of diagnostic radiation exposure triggered in the USA by the publications of Brenner et al. 77 78 was that the agent was approved for use in the liver even for children 79 80 - still off label in Europe - without additional comprehensive studies, due to the available scientific results and the very low side-effect profile of Lumason® (= SonoVue®). It is admittedly unclear why other indications (except the heart, which has been approved since 2014) are excluded, even though the microbubbles, as a pure blood pool contrast agent, can be diagnostically used in the entire vascular system and the vascular bed of all organs. 
To our knowledge, there is no such restriction on the approval of X-ray contrast agents. Like echocardiography and emergency ultrasound, CEUS began in Europe but will probably only establish its final diagnostic value as a "reimport". This is a major opportunity to permanently define the role of ultrasound as a highly valuable, patient-centered imaging method in the German health care system. This may prompt some of our international readers to reflect upon the role of CEUS in their own countries.
Objective: To explore the expression characteristics and role of Krüppel-like factor 4 (KLF4) in macrophage inflammatory response and its effects on inflammatory response and organ injury in septic mice, so as to lay a theoretical foundation for targeted treatment of burns and trauma sepsis. Methods: The method of experimental research was used. Mouse RAW264.7 macrophages and primary peritoneal macrophages (PMs) isolated from 10 male C57BL/6J mice aged 6-8 weeks were used for the experiments. RAW264.7 macrophages and PMs were treated with endotoxin/lipopolysaccharide (LPS) for 0 (without treatment), 1, 2, 4, 6, 8, 12, and 24 h, respectively, to establish a macrophage inflammatory response model. The mRNA expression of interleukin 1β (IL-1β), IL-6, CC chemokine ligand 2 (CCL2) and tumor necrosis factor-α (TNF-α) were detected by real-time fluorescence quantitative reverse transcription polymerase chain reaction (RT-PCR), and the LPS treatment time was determined for some of the subsequent experiments. RAW264.7 macrophages were treated with LPS for 0 and 8 h, the localization and protein expression of KLF4 were detected by immunofluorescence method, transcriptome sequencing of the cells was performed using the high-throughput sequencing technology platform, and the differentially expressed genes (DEGs) between the two time points were screened with DESeq2 software. RAW264.7 macrophages and PMs were treated with LPS for 0, 1, 2, 4, 6, 8, 12, and 24 h, respectively, and the mRNA and protein expressions of KLF4 were detected by real-time fluorescence quantitative RT-PCR and Western blotting, respectively. RAW264.7 macrophages were divided into negative control (NC) group and KLF4-overexpression group according to the random number table, which were treated with LPS for 0 and 8 h respectively after transfection of the corresponding plasmid. 
The mRNA expressions of KLF4, IL-1β, IL-6, CCL2, and TNF-α were detected by real-time fluorescence quantitative RT-PCR, while the protein expression of KLF4 was detected by Western blotting. The number of samples in all aforementioned experiments was 3. Forty male C57BL/6J mice aged 6-8 weeks were divided into KLF4-overexpression group and NC group (with 20 mice in each group) according to the random number table, and the sepsis model of cecal ligation and perforation was established after the corresponding transfection injection was administered. Twelve mice were selected from each of the two groups according to the random number table, and the survival status within 72 hours after modeling was observed. Eight hours after modeling, the remaining 8 mice in each of the two groups were selected, and eyeball blood samples were collected to detect the levels of IL-1β and IL-6 in serum by enzyme-linked immunosorbent assay, and the levels of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) in serum by dry chemical method. Subsequently, the heart, lung, and liver tissue was collected, and the injury was observed after hematoxylin-eosin staining. Data were statistically analyzed with the independent sample t test, Cochran & Cox approximate t test, one-way analysis of variance, Dunnett test, Brown-Forsythe and Welch one-way analysis of variance, Dunnett T3 test, and log-rank (Mantel-Cox) test. 
Results: Compared with that of LPS treatment for 0 h, the mRNA expressions of IL-1β in RAW264.7 macrophages treated with LPS for 6 h and 8 h, the mRNA expressions of IL-6 in RAW264.7 macrophages treated with LPS for 4-12 h, the mRNA expressions of CCL2 in RAW264.7 macrophages treated with LPS for 8 h and 12 h, and the mRNA expressions of TNF-α in RAW264.7 macrophages treated with LPS for 4-8 h were significantly up-regulated (P<0.05 or P<0.01), while the mRNA expressions of IL-1β and CCL2 in PMs treated with LPS for 4-8 h, the mRNA expressions of IL-6 in PMs treated with LPS for 2-24 h, and the mRNA expressions of TNF-α in PMs treated with LPS for 2-12 h were significantly up-regulated (P<0.05 or P<0.01). Eight hours was selected as the LPS treatment time for some of the subsequent experiments. KLF4 was mainly localized in the nucleus of RAW264.7 macrophages. Compared with those of LPS treatment for 0 h, the protein expression of KLF4 in RAW264.7 macrophages treated with LPS for 8 h was obviously decreased, and there were 1,470 statistically significant DEGs in RAW264.7 macrophages treated with LPS for 8 h, including KLF4 with significantly down-regulated transcriptional expression (false discovery rate<0.05, log2(fold change)=-2.47). Compared with those of LPS treatment for 0 h, the mRNA expressions of KLF4 in RAW264.7 macrophages treated with LPS for 6-24 h, the protein expressions of KLF4 in RAW264.7 macrophages and PMs treated with LPS for 1-24 h, and the mRNA expressions of KLF4 in PMs treated with LPS for 4-24 h were significantly decreased (P<0.05 or P<0.01). 
Compared with those in NC group, the mRNA (with t' values of 17.03 and 8.61, respectively, P<0.05 or P<0.01) and protein expressions of KLF4 in RAW264.7 macrophages treated with LPS for 0 h and 8 h in KLF4-overexpression group were significantly increased, the mRNA expressions of IL-6 and CCL2 increased significantly in RAW264.7 macrophages treated with LPS for 0 h (with t values of 6.29 and 3.40, respectively, P<0.05 or P<0.01), while the mRNA expressions of IL-1β, IL-6, CCL2, and TNF-α decreased significantly in RAW264.7 macrophages treated with LPS for 8 h (with t values of 10.52, 9.60, 4.58, and 8.58, respectively, P<0.01). The survival proportion of mice within 72 h after modeling in KLF4-overexpression group was significantly higher than that in NC group (χ2=4.01, P<0.05). Eight hours after modeling, the serum levels of IL-1β, IL-6 and ALT, AST of mice in KLF4-overexpression group were (161±63), (476±161) pg/mL and (144±24), (264±93) U/L, respectively, which were significantly lower than (257±58), (654±129) pg/mL and (196±27), (407±84) U/L (with t values of 3.16, 2.44 and 4.04, 3.24, respectively, P<0.05 or P<0.01) in NC group. Eight hours after modeling, compared with those in NC group, the disorder of tissue structure of heart, lung, and liver, inflammatory exudation, and pathological changes of organ parenchyma cells in KLF4-overexpression group were obviously alleviated. Conclusions: The expression of KLF4 is significantly down-regulated in the LPS-induced macrophage inflammatory response, and KLF4 overexpression significantly inhibits that response. KLF4 significantly enhances the survival rate of septic mice and alleviates inflammatory response and sepsis-related organ injury.
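The reported group comparisons can be loosely checked from the summary statistics alone. The sketch below is our own; it assumes a pooled-variance two-sample t statistic (the abstract does not specify which t test produced each value) and approximately reproduces the reported t = 3.16 for serum IL-1β (257±58 vs. 161±63 pg/mL, n = 8 per group):

```python
import math

def pooled_t(m1: float, s1: float, n1: int, m2: float, s2: float, n2: int) -> float:
    """Two-sample pooled-variance t statistic from summary statistics."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Serum IL-1beta (pg/mL): NC group 257±58 vs. KLF4-overexpression group 161±63
t_il1b = pooled_t(257, 58, 8, 161, 63, 8)
print(round(t_il1b, 2))  # 3.17, close to the reported t = 3.16
```

The small discrepancy is expected from rounding of the published means and standard deviations.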
The high prevalence and chronic evolution of schizophrenia are responsible for a major social cost. The adverse consequences of such psychiatric disorders for relatives have been studied since the early 1950s, when psychiatric institutions began discharging patients into the community. According to Treudley (1946), "burden on the family" refers to the consequences for those in close contact with a severely disturbed psychiatric patient. Grad and Sainsbury (1963) and Hoenig and Hamilton (1966) developed the first burden scales for caregivers of severely mentally ill patients, and a number of authors further developed instruments trying to distinguish between "objective" and "subjective" burden. Objective burden concerns the patient's symptoms, behaviour and socio-demographic characteristics, but also the changes in household routine, family or social relations, work, leisure time, physical health.... Subjective burden is the mental health and subjective distress among family members. While the first authors referred to those problems which are deemed to be related to, or caused by, the patient, Platt et al. (1983) tried to distinguish between the occurrence of a problem, its alleged aetiology, and the perceived distress when developing the SBAS questionnaire. These authors also proposed separate evaluations of behavioural disturbance and social performance by relatives, and a report of extra-disease stressors in family life. The SBAS is currently the most complete, but also the most complex, instrument for evaluating burden in caregivers. Since 1967, Pasamanick and others have proposed questionnaires for burden evaluation in relatives of schizophrenic patients. Relatives may be included in specific psychoeducational programs, but few of these programs have been evaluated in terms of caregiver burden. The theoretical approach in schizophrenia was not different from the one adopted in the mentally ill population in general. 
Some instruments were validated first in a mentally ill group and then adapted for a schizophrenic population. This paper describes the available data about instruments measuring caregiver burden in relatives of schizophrenic patients. Measures are summarized according to purpose, content and psychometric properties. Sixteen instruments have been collected from the literature (1955-2001), and 2 instruments developed for relatives of the mentally ill have also been taken into account. A group of 5 instruments focuses on the measurement of behavioural disturbance in persons with schizophrenia as perceived by their family members. Eleven scales include behavioural disturbance in a more complete description of objective burden. Thirteen questionnaires also report the subjective distress in caregivers. One instrument has been developed in the French language. Few of these instruments have been developed from verbatim caregiver interviews and really describe the caregiver's point of view. Most of them rely on experts' points of view or on previously published studies. The content and domains explored by these instruments are variable. The psychometric properties are poorly documented for a number of them and no information is published about responsiveness. Some validated instruments are the Perceived Family Burden Scale (PFBS), the Involvement Evaluation Questionnaire (IEQ) and the Experience of Caregiving Inventory (ECI). In past studies, researchers more or less agreed about the dimensions that comprise the family burden. There was less agreement with regard to the definition of objective and subjective burden, and almost no agreement about the theoretical approach to the concept of burden. The evaluation of behavioural disturbance should now be excluded from the objective burden dimension. It is a specific domain, both objective and subjective, which can be described as a stressor in the stress-appraisal-coping model. A good approach to this domain can be found in the PFBS. 
It comprises 24 items and the principal components analysis produces 2 factors ("active" and "passive"), explaining 35% of the variance, with good consistency and acceptable test-retest reliability. The evaluation is both objective (presence or absence) and subjective (induced distress). The Behavior Disturbance Scale (BDS) may also be taken into account, although it is less validated. This scale derives from the SBAS, modified as a self-questionnaire, with both objective and subjective evaluations of all items. The concept of burden was recently modified in a new theoretical approach by Schene, when developing the IEQ. According to this author, the burden scale is supposed to exclude stressors (patient's behaviours), as well as outcome variables (distress or psychological impairment in the caregiver). The "caregiving consequences" section comprises 36 items, which focus on the subjective aspects of the caregiver's experience. Principal component factor analysis generates 4 factors which explain 45% of the variance: tension, supervision, worrying, urging. The overall caregiving score substantially explains the connection between patient, caregiver, relationship variables and the caregiver's distress. This scale is a valid and simple instrument for caregiving evaluation. The ECI also introduces a new approach to caregiving and rejects the notion of burden. The 66-item version is composed of 10 factors (8 "negative" and 2 "positive") with good internal consistency. The introduction of two positive factors (rewarding personal experiences, good aspects of the relationship with the patient) might be the basis of a useful outcome measure for interventions aimed at promoting caregiver well-being. Nevertheless, the authors fail to develop an overall score that includes these factors, and focus on the negative dimensions as predictors of morbidity and well-being. 
None of the variables included in the regression model explains a significant percentage of the variance of the ECI positive score. None of these instruments was employed for evaluating programs or treatments, even psychoeducational programs for caregivers. This may be partly due to the lack of data about sensitivity to change. No instrument is now available for evaluating therapeutics from the caregiver's point of view. Developing such an instrument is necessary, in view of the increasing role of families in care for schizophrenic patients. These data and the review of the literature lead us to propose the development of a self-administered questionnaire for evaluating subjective health-related quality of life in caregivers of schizophrenic patients. The instrument should be developed from the caregiver's point of view and be derived from qualitative interviews with relatives of patients suffering from schizophrenia. Its responsiveness should be documented before inclusion in clinical trials or evaluation of psychoeducational programs. We are now working with the National Union of Friends and Families of Patients to validate an instrument in the French language.
Objectives The objective of this review was to assess the effectiveness of interventions that assist caregivers to provide support for people living with dementia in the community. Inclusion criteria Types of participants Adult caregivers who provide support for people with dementia living in the community (non-institutional care). Types of interventions Interventions designed to support caregivers in their role such as skills training, education to assist in caring for a person living with dementia and support groups/programs. Interventions of formal approaches to care designed to support caregivers in their role, care planning, case management and specially designated members of the healthcare team - for example dementia nurse specialist or volunteers trained in caring for someone with dementia. Types of studies This review considered any meta-analyses, systematic reviews, randomised controlled trials, quasi-experimental studies, cohort studies, case control studies and observational studies without control groups that addressed the effectiveness of interventions that assist caregivers to provide support for people living with dementia in the community. Search strategy The search sought to identify published studies from 2000 to 2005 through the use of electronic databases. Only studies in English were considered for inclusion. The initial search was conducted of the databases, CINAHL, MEDLINE and PsychINFO using search strategies adapted from the Cochrane Dementia and Cognitive Improvement Group. A second more extensive search was then conducted using the appropriate Medical Subject Headings (MeSH) and keywords for other available databases. Finally, hand searching of reference lists of articles retrieved and of core dementia, geriatric and psychogeriatric journals was undertaken. 
Assessment of quality Methodological quality of each of the articles was assessed by two independent reviewers using an appraisal checklist developed by the Joanna Briggs Institute and based on the work of the Cochrane Collaboration and Centre for Reviews and Dissemination. Data collection and analysis Standardised mean differences or weighted mean differences and their 95% confidence intervals were calculated for each included study reported in the meta-analysis. Results from comparable groups of studies were pooled in statistical meta-analysis using Review Manager Software from the Cochrane Collaboration. Heterogeneity between combined studies was tested using a standard chi-square test. Where statistical pooling was not appropriate or possible, the findings are summarised in narrative form. Results A comprehensive search of relevant databases, hand searching and cross referencing found 685 articles that were assessed for relevance to the review. Eighty-five papers appeared to meet the inclusion criteria based on title and abstract, and the full paper was retrieved. Of the 85 full papers reviewed, 40 were accepted for inclusion, three were systematic reviews, three were meta-analyses, and the remaining 34 were randomised controlled trials. For the randomised controlled trials that were able to be included in a meta-analysis, standardised mean differences or weighted mean differences and their 95% confidence intervals were calculated for each. Results from comparable groups of studies were pooled in statistical meta-analysis using Review Manager Software and heterogeneity between combined studies was assessed by using the chi-square test. Where statistical pooling was not appropriate or possible, the findings are summarised in narrative form. The results are discussed in two main sections. 
Firstly, it was possible to assess the effectiveness of different types of caregiver interventions on the outcome categories of depression, health, subjective well-being, self-efficacy and burden. Secondly, results are reported by main outcome category. For each of these sections, meta-analysis was conducted where it was possible; otherwise, a narrative summary describes the findings. Effectiveness of intervention type Four categories of intervention were included in the review - psycho-educational, support, multi-component and other. Psycho-educational Thirteen studies used psycho-educational interventions, and all but one showed positive results across a range of outcomes. Eight studies were entered in a meta-analysis. No significant impact of psycho-educational interventions was found for the outcome categories of subjective well-being, self-efficacy or health. However, small but significant results were found for the categories of depression and burden. Support Seven studies discussed support-only interventions and two of these showed significant results. These two studies were suitable for meta-analysis and demonstrated a small but significant improvement on caregiver burden. Multi-component Twelve of the studies report multi-component interventions and 10 of these report significant outcomes across a broad range of outcome measures including self-efficacy, depression, subjective well-being and burden. Unfortunately, because of the heterogeneity of study designs and outcome measures, no meta-analysis was possible. Other interventions Other interventions included the use of exercise or nutrition, which resulted in improvements in psychological distress and health benefits. Case management and a computer-aided support intervention provided mixed results. One cognitive behavioural therapy study reported a reduction in anxiety and positive impacts on patient behaviour. 
Effectiveness of interventions using specific outcome categories In addition to analysis by type of intervention, it was possible to analyse results based on some outcome categories that were used across the studies. In particular, the impact of interventions on caregiver depression was available for meta-analysis from eight studies. This indicated that multi-component and psycho-educational interventions showed a small but significant positive effect on caregiver depression. Five studies using the outcome category of caregiver burden were entered into a meta-analysis and findings indicated that there were no significant effects of any of the interventions. No meta-analysis was possible for the outcome categories of health, self-efficacy or subjective well-being. Implications for practice From this review there is evidence to support the use of well-designed psycho-educational or multi-component interventions for caregivers of people with dementia who live in the community. Factors that appear to positively contribute to effective interventions are those which:
• Provide opportunities within the intervention for the person with dementia as well as the caregiver to be involved
• Encourage active participation in educational interventions for caregivers
• Offer individualised programs rather than group sessions
• Provide information on an ongoing basis, with specific information about services and coaching regarding their new role
• Target the care recipient, particularly by reduction in behaviours
Factors which do not appear to have benefit in interventions are those which:
• Simply refer caregivers to support groups
• Only provide self-help materials
• Only offer peer support.
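The meta-analyses described above pool comparable studies by the inverse variance method and test between-study heterogeneity with a chi-square statistic (Cochran's Q), as implemented in Review Manager. As a rough illustration of that pooling step only, here is a minimal fixed-effect sketch; the mean differences and standard errors below are invented, not taken from any included study.

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Fixed-effect inverse-variance pooling of per-study effect estimates.

    Returns the pooled estimate, its standard error, a 95% CI, and
    Cochran's Q heterogeneity statistic (referred to a chi-square
    distribution with k - 1 degrees of freedom).
    """
    weights = [1.0 / se ** 2 for se in std_errors]  # w_i = 1 / SE_i^2
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
    return pooled, pooled_se, ci, q

# Three hypothetical studies reporting mean differences (e.g. in kg) and SEs
est, se, ci, q = pool_fixed_effect([-1.2, -0.8, -1.5], [0.4, 0.5, 0.6])
```

Precise studies (small standard errors) dominate the pooled estimate, which is why the pooled CI is narrower than any single study's; a large Q relative to its degrees of freedom would signal the heterogeneity that, in the review above, ruled out pooling the multi-component studies.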
Most people who stop smoking gain weight. This can discourage some people from making a quit attempt and risks offsetting some, but not all, of the health advantages of quitting. Interventions to prevent weight gain could improve health outcomes, but there is a concern that they may undermine quitting. To systematically review the effects of: (1) interventions targeting post-cessation weight gain on weight change and smoking cessation (referred to as 'Part 1') and (2) interventions designed to aid smoking cessation that plausibly affect post-cessation weight gain (referred to as 'Part 2'). Part 1 - We searched the Cochrane Tobacco Addiction Group's Specialized Register and CENTRAL; latest search 16 October 2020. Part 2 - We searched included studies in the following 'parent' Cochrane reviews: nicotine replacement therapy (NRT), antidepressants, nicotine receptor partial agonists, e-cigarettes, and exercise interventions for smoking cessation published in Issue 10, 2020 of the Cochrane Library. We updated register searches for the review of nicotine receptor partial agonists. Part 1 - trials of interventions that targeted post-cessation weight gain and had measured weight at any follow-up point or smoking cessation, or both, six or more months after quit day. Part 2 - trials included in the selected parent Cochrane reviews reporting weight change at any time point. Screening and data extraction followed standard Cochrane methods. Change in weight was expressed as difference in weight change from baseline to follow-up between trial arms and was reported only in people abstinent from smoking. Abstinence from smoking was expressed as a risk ratio (RR). Where appropriate, we performed meta-analysis using the inverse variance method for weight, and Mantel-Haenszel method for smoking. Part 1: We include 37 completed studies; 21 are new to this update. We judged five studies to be at low risk of bias, 17 to be at unclear risk and the remainder at high risk. 
An intermittent very low calorie diet (VLCD) comprising full meal replacement provided free of charge and accompanied by intensive dietitian support significantly reduced weight gain at end of treatment compared with education on how to avoid weight gain (mean difference (MD) -3.70 kg, 95% confidence interval (CI) -4.82 to -2.58; 1 study, 121 participants), but there was no evidence of benefit at 12 months (MD -1.30 kg, 95% CI -3.49 to 0.89; 1 study, 62 participants). The VLCD increased the chances of abstinence at 12 months (RR 1.73, 95% CI 1.10 to 2.73; 1 study, 287 participants). However, a second study found that no-one completed the VLCD intervention or achieved abstinence. Interventions aimed at increasing acceptance of weight gain reported mixed effects at end of treatment, 6 months and 12 months with confidence intervals including both increases and decreases in weight gain compared with no advice or health education. Due to high heterogeneity, we did not combine the data. These interventions increased quit rates at 6 months (RR 1.42, 95% CI 1.03 to 1.96; 4 studies, 619 participants; I² = 21%), but there was no evidence at 12 months (RR 1.25, 95% CI 0.76 to 2.06; 2 studies, 496 participants; I² = 26%). Some pharmacological interventions tested for limiting post-cessation weight gain (PCWG) reduced weight gain at the end of treatment (dexfenfluramine, phenylpropanolamine, naltrexone). The effects of ephedrine and caffeine combined, lorcaserin, and chromium were too imprecise to give useful estimates of treatment effects. There was very low-certainty evidence that personalized weight management support reduced weight gain at end of treatment (MD -1.11 kg, 95% CI -1.93 to -0.29; 3 studies, 121 participants; I² = 0%), but no evidence in the longer-term 12 months (MD -0.44 kg, 95% CI -2.34 to 1.46; 4 studies, 530 participants; I² = 41%). 
There was low to very low-certainty evidence that detailed weight management education without personalized assessment, planning and feedback did not reduce weight gain and may have reduced smoking cessation rates (12 months: MD -0.21 kg, 95% CI -2.28 to 1.86; 2 studies, 61 participants; I² = 0%; RR for smoking cessation 0.66, 95% CI 0.48 to 0.90; 2 studies, 522 participants; I² = 0%). Part 2: We include 83 completed studies, 27 of which are new to this update. There was low certainty that exercise interventions led to minimal or no weight reduction compared with standard care at end of treatment (MD -0.25 kg, 95% CI -0.78 to 0.29; 4 studies, 404 participants; I² = 0%). However, weight was reduced at 12 months (MD -2.07 kg, 95% CI -3.78 to -0.36; 3 studies, 182 participants; I² = 0%). Both bupropion and fluoxetine limited weight gain at end of treatment (bupropion MD -1.01 kg, 95% CI -1.35 to -0.67; 10 studies, 1098 participants; I² = 3%); (fluoxetine MD -1.01 kg, 95% CI -1.49 to -0.53; 2 studies, 144 participants; I² = 38%; low- and very low-certainty evidence, respectively). There was no evidence of benefit at 12 months for bupropion, but estimates were imprecise (bupropion MD -0.26 kg, 95% CI -1.31 to 0.78; 7 studies, 471 participants; I² = 0%). No studies of fluoxetine provided data at 12 months. There was moderate-certainty evidence that NRT reduced weight at end of treatment (MD -0.52 kg, 95% CI -0.99 to -0.05; 21 studies, 2784 participants; I² = 81%) and moderate-certainty evidence that the effect may be similar at 12 months (MD -0.37 kg, 95% CI -0.86 to 0.11; 17 studies, 1463 participants; I² = 0%), although the estimates are too imprecise to assess long-term benefit. 
There was mixed evidence of the effect of varenicline on weight, with high-certainty evidence that weight change was very modestly lower at the end of treatment (MD -0.23 kg, 95% CI -0.53 to 0.06; 14 studies, 2566 participants; I² = 32%); a low-certainty estimate gave an imprecise estimate of higher weight at 12 months (MD 1.05 kg, 95% CI -0.58 to 2.69; 3 studies, 237 participants; I² = 0%). Overall, there is no intervention for which there is moderate certainty of a clinically useful effect on long-term weight gain. There is also no moderate- or high-certainty evidence that interventions designed to limit weight gain reduce the chances of people achieving abstinence from smoking.
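Abstinence effects in the review above are expressed as risk ratios (RR) with 95% confidence intervals. A minimal sketch of how an RR and its Wald-type confidence interval (computed on the log scale) arise from two trial arms; the event counts below are invented for illustration, not taken from any included study.

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b):
    """Risk ratio of arm A vs arm B with a 95% CI on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Wald standard error of log(RR)
    se_log = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lower = math.exp(math.log(rr) - 1.96 * se_log)
    upper = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lower, upper)

# Hypothetical arms: 30/100 quitters with intervention vs 20/100 with control
rr, (lo, hi) = risk_ratio(30, 100, 20, 100)
```

Because the CI is symmetric only on the log scale, a result like "RR 1.42, 95% CI 1.03 to 1.96" is read as significant precisely because the whole interval sits above 1; intervals that straddle 1 (as at 12 months above) show "no evidence" of effect.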
This study considered the role of magnetic resonance imaging (MRI) in the diagnosis of knee injuries in a district general hospital (DGH) setting. The principal objective was to identify whether the use of MRI had a major impact on the clinical management of patients presenting with chronic knee problems, in whom surgery was being considered, whether it reduced overall costs and whether it improved patient outcome. In addition, the research: (1) explored the 'diagnostic accuracy' of initial clinical investigation of the knee by an orthopaedic trainee, consultant knee specialist and consultant radiologist; (2) considered the variability and diagnostic accuracy of interpretations of knee MRI investigations between radiologists; (3) measured the strength of preference for the potential diagnostic/therapeutic impact of knee MRI (i.e. the avoidance of surgery). METHODS - RANDOMISED CONTROLLED TRIAL: The research was based on a single-centre randomised controlled trial conducted at Kent and Canterbury Hospital. Patients attending with knee problems in whom surgery was being considered were recruited from routine orthopaedic clinics. Most patients had been referred by their general practitioner. Patients were randomised to either investigation using an MRI scan (MRI trial arm) or investigation using arthroscopy (no-MRI trial arm). The study investigated the benefits of knee MRI at two levels: diagnostic/therapeutic impact (i.e. avoidance of surgery) and patient outcome (using the Short Form with 36 items and EQ-5D quality-of-life measurement instruments). Quality of life was assessed at baseline and at 6 and 12 months. Costs were assessed from the perspectives of the NHS and patients. All analyses were by intention to treat. METHODS - SUBSTUDIES (INVESTIGATION OF DIAGNOSTIC ACCURACY): For the investigation of diagnostic accuracy of initial clinical investigation, the sample comprised 114 patients recruited in a separate study conducted at St Thomas' Hospital. 
The sample was drawn from patients presenting at the Accident and Emergency Department with an acute knee injury. All study patients received an MRI scan, but initial diagnosis was made without access to the scan or the radiologist's report. After 12 months, all clinical notes and MRI scans of study patients were reviewed and a final 'reference standard' diagnosis for each patient was reached. Comparison was made between the diagnosis recorded by each clinician (i.e. orthopaedic trainee, knee specialist and consultant radiologist) and the reference diagnosis. METHODS - SUBSTUDIES (INVESTIGATION OF THE GENERALISABILITY OF RESULTS): For this substudy, the MRI images from 80 patients (recruited at St Thomas' Hospital) were interpreted independently by seven consultant radiologists at DGHs and the St Thomas' Hospital MRI radiologist. For each area of the knee, the level of agreement (measured using weighted kappa) between the responses of the eight radiologists and the reference standard diagnosis was assessed. METHODS - SUBSTUDIES (INVESTIGATION OF PREFERENCES): The investigation of potential patient preferences for the diagnostic/therapeutic impact of MRI was explored using a discrete choice conjoint measurement research design. Choices involved selecting between two alternative scenarios described using four attributes, and data were collected from 585 undergraduate sports science students and analysed using a random-effects probit model. RESULTS - RANDOMISED CONTROLLED TRIAL: The trial recruited 118 patients (59 randomly allocated to each arm). The two groups were similar in important respects at baseline. The central finding was of no statistically significant differences between groups in all measures of health outcome, although a trend in favour of the no-MRI group was observed. 
However, the use of MRI was found to be associated with a positive diagnostic/therapeutic impact: a significantly smaller proportion of patients in the MRI group underwent surgery (MRI = 0.41, no-MRI = 0.71; p = 0.001). There was a similar mean overall NHS cost for both groups. RESULTS - SUBSTUDIES (INVESTIGATION OF DIAGNOSTIC ACCURACY): The exploration of diagnostic accuracy found that, when compared to orthopaedic trainees (44% correct diagnoses) or to radiologists reporting an MRI scan (68% correct diagnoses), the accuracy rate was higher for knee specialists (72% correct diagnoses). RESULTS - SUBSTUDIES (INVESTIGATION OF THE GENERALISABILITY OF RESULTS): This generalisability study indicated that, in general terms, radiologists in DGHs provide accurate interpretations of knee MRI images that are similar to a radiologist at a specialist centre. The one area of the knee for which this did not hold was the lateral collateral ligament. RESULTS - SUBSTUDIES (INVESTIGATION OF PREFERENCES): The central finding for this substudy was that, on average and within the range specified, choices in this group of potential patients were not significantly influenced by variation in the chance of avoiding surgery. CONCLUSIONS - IMPLICATIONS FOR HEALTHCARE: The evidence presented in this report supports the conclusions that the use of MRI in patients presenting at DGHs with chronic knee problems in whom arthroscopy was being considered did not increase NHS costs overall, was not associated with significantly worse outcomes and avoided surgery in a significant proportion of patients. CONCLUSIONS - RECOMMENDATIONS FOR FURTHER RESEARCH (IN PRIORITY ORDER): (1) The trial data demonstrated that the use of MRI in patients with chronic knee problems reduced the need for surgery. 
However, the link between diagnostic processes and changes in health outcome is indirect and the finding of no-MRI-related effect on health outcome may, therefore, be a consequence of the limited power of the trial. Further research to confirm (or contradict) these findings would be valuable. (2) The investigation of diagnostic accuracy involved comparison with a reference diagnosis established by a panel of two clinical members of the research team. It would be interesting to explore the extent to which the results would differ using an external panel. (3) The result from the preference study, indicating that the potential diagnostic/therapeutic impact of knee MRI was not highly valued, is a surprising finding that would be important to explore in general public or patient populations. (4) The focus for the trial-based aspects of this research was the DGH and patients presenting with chronic knee problems who were being considered for surgery. Care should be taken in generalising from these results to other patient groups (e.g. acute knee injuries) or to other settings (e.g. specialist centres). Further clinical trials would be required in order to answer such questions.
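The generalisability substudy above measured agreement between each radiologist's MRI interpretation and the reference standard diagnosis using weighted kappa. A minimal sketch of Cohen's kappa with linear disagreement weights, which penalises a one-category disagreement less than a two-category one; the ordinal gradings below are invented, not the study's data.

```python
def weighted_kappa(ratings_a, ratings_b, categories):
    """Cohen's weighted kappa with linear disagreement weights.

    ratings_a, ratings_b: paired ratings from two raters over ordered
    categories; returns 1.0 for perfect agreement and 0.0 for
    chance-level agreement.
    """
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(ratings_a)
    # Observed contingency table of rater A (rows) vs rater B (columns)
    obs = [[0] * k for _ in range(k)]
    for a, b in zip(ratings_a, ratings_b):
        obs[idx[a]][idx[b]] += 1
    row = [sum(obs[i]) for i in range(k)]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    w = lambda i, j: abs(i - j) / (k - 1)  # linear disagreement weight
    observed = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w(i, j) * row[i] * col[j] / n
                   for i in range(k) for j in range(k))
    return 1.0 - observed / expected

# Hypothetical gradings (e.g. 0 = normal, 1 = partial tear, 2 = complete tear)
kappa = weighted_kappa([0, 1, 2, 0, 1, 2], [0, 1, 2, 0, 1, 1], [0, 1, 2])
```

A single one-step disagreement out of six paired ratings still leaves kappa high here (0.8), which is the sense in which DGH radiologists could show agreement "similar to a radiologist at a specialist centre" despite occasional discrepancies.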
To determine the causes and preventability of child deaths; to assess the accuracy of death certificate information; and to assess the number of child abuse deaths that are misdiagnosed as deaths attributable to natural or accidental causes. Analysis of deaths of children <18 years old that occurred between 1995-1999 using the data collected by the Arizona Child Fatality Review Program (ACFRP). From 1995-1999, local multidisciplinary child fatality review teams (CFRTs) have reviewed 95% of all deaths of children <18 years old in Arizona. Each team has access to the child's death certificate, autopsy report, hospital records, child protective services records, law enforcement reports, and any other relevant documents that provide insight into the cause and preventability of a child's death. After reviewing these documents, the team determines the cause of death, its preventability, and the accuracy of the death certificate. The ACFRP defines a child's death as preventable if an individual or the community could reasonably have done something that would have changed the circumstances that led to the child's death. The ACFRP determined that 29% (1416/4806) of these deaths could have been prevented, and preventability increased with the age of the child. Only 5% (81/1781) of neonatal deaths were considered preventable, whereas the deaths of 38% of all children older than 28 days were considered preventable. By 9 years of age, the majority of child deaths (56%) were considered preventable. Deaths attributable to medical conditions were far less likely to be considered preventable than deaths attributable to unintentional injuries. Although 62% of all deaths in Arizona during the 5-year period were attributable to medical conditions, only 8% (253/2983) of these deaths were considered preventable. In contrast, 91% (852/934) of the deaths attributable to unintentional injuries were considered preventable. 
Motor vehicle crashes accounted for 634 of the deaths resulting from injuries, and drowning accounted for 187 deaths. Motor vehicle crashes were the leading cause of death for all children in Arizona over 1 year of age. Only 18% of child passengers and 3% of adolescent drivers who died were known to be appropriately restrained. The typical drowning victim was a young child who drowned in the family's backyard pool. Indeed, 70% (131/187) of the drowning victims were <5 years old, and 62% (81/131) of these children died in a backyard pool. Supervision of the child and pool fencing could have prevented 90% of these deaths. Most deaths attributable to medical conditions occurred in the first year of life. Prematurity was the most common medical condition (1036 deaths) followed by congenital anomalies (662 deaths) and infectious diseases (470 deaths). Some of the reasons why CFRTs believed a medical death was preventable included inadequate emergency medical services, poor continuity of care, and delay in seeking care because of lack of health insurance. There were 4 deaths resulting from infections that were vaccine-preventable. There were 263 deaths attributable to sudden infant death syndrome. Only 38 of these infants were found lying on their back; 35 were found lying on their side. The death rate from sudden infant death syndrome decreased from 1.1 per 1000 infants <1 year of age in 1995 to 0.5 in 1999. There were 33 deaths that the CFRTs concluded were attributable to unsafe sleeping arrangements that resulted in unintentional suffocation. From 1995-1999, 317 Arizona children died from gunshot wounds. Most of these deaths were homicides (175) or suicides (109). All suicide deaths occurred in children >9 years old, and 77% of these children were >14 years old. The typical suicide victim was male (83%) and used a gun (70%) to kill himself. 
After review by the CFRTs, it was determined that 5 of the 67 child abuse deaths were misdiagnosed as attributable to natural or accidental causes on the death certificate. In 3 of these 5 cases, the child was in a persistent vegetative state and died many years after the episode of child abuse. Although inaction or inappropriate action by Child Protective Services (CPS) is often thought to be the cause of child abuse deaths, the ACFRP determined that in 79% of child abuse deaths, there had been no previous CPS involvement with the child's family. Although 61% of child abuse deaths were considered to be preventable, much of the responsibility for preventing these deaths rests with community members (eg, relatives, neighbors) who were aware of the abuse but failed to report the family to CPS. The CFRTs, who had received training in the proper completion of death certificates, reported that the cause of death was incorrect on 13% of all death certificates and in 16 cases, the CFRTs disagreed with the medical examiner on the manner of death (eg, natural, accidental, undetermined). Because CFRTs have access to additional information that may not have been available to the physician who completes a child's death certificate, CFRTs may be able to more accurately determine the cause and manner of death than the physician who completed the death certificate. Arizona's child death rate is above the national average (82.16/100 000), but the ACFRP determined that many of these deaths could have been prevented by using known prevention strategies (eg, child safety restraints, pool fencing). Most child mortality data are based on death certificate information that often is incorrect and cannot be used to assess preventability. Although most states have child fatality review programs that review suspected child abuse deaths, <3% of all preventable deaths in Arizona were attributable to child abuse. 
If all child deaths in the United States were reviewed from a prevention/needs assessment perspective, targeted and data-driven recommendations for prevention could be developed for each community, and potentially 38% of all child deaths that occur after the first month of life could be prevented. The ACFRP is an excellent example of a statewide system with a public health focus. To assist other states in developing similar programs, national support is needed. The establishment of a public health focused federally funded national program would provide us with the opportunity to standardize data collection among states and better utilize this data at a national level.
The authors have prospectively documented that men who undergo orthotopic bladder substitution more frequently experience bacteriuria than do normal men [19] or men with carcinoma of the prostate scheduled to have radical prostatectomy (see Table 1). Because the frequency of bacteriuria in men after prostatectomy was also lower than that after orthotopic bladder substitution (see Table 1), removal of the prostate and any of its presumed antibacterial properties probably does not account for this difference. Furthermore, the authors' data (see Table 5) and that of Woodside and associates [23] demonstrate that intestine incorporated into the genitourinary tract generates a local antibody response against urinary bacteria. Although others have suggested that the incorporation of bowel in the urinary tract may be associated with increased bacteriuria, this effect has never been documented prospectively. The mechanism of this increased frequency of bacteriuria is unknown. Because the anatomy of the male secretory genitourinary system may be altered after radical prostatectomy and orthotopic bladder substitution, the authors evaluated local antibody production before and after these operations. More than 20 years ago, Burdon [5] found that the initial portion of the VB1 sample in men had significantly higher levels of IgA compared with the VB2 specimen, whereas the levels of IgG were similar in the two portions. This latter finding was confirmed by Shortliffe and co-workers [22] when they examined prostatic secretion. Other investigators have found high levels of IgA in human prostatic tissue and fluid [24,25]. On the basis of these findings, it was believed that, in men, the prostate produces most of the urinary IgA, whereas the bladder or upper urinary tracts make most of the urinary IgG.
Although the authors' study confirms that most local urinary tract IgG is produced by the bladder or upper urinary tracts, this study documents that the prostate is not the only source of urethral IgA in men. Despite almost complete removal of prostate secretory epithelium by radical prostatectomy, as evidenced by a dramatic fall in postoperative VB1 and VB2 PSA compared with preoperative levels (Table 3), men who had this operation had only slightly decreased IgA levels after the operation (Table 4, Fig. 1). The source of this IgA must be urethral because the VB1 urinary stream contains more IgA than the VB2 urine even after radical prostatectomy. The authors have not determined whether the urinary IgA concentrations observed after radical prostatectomy are the true baseline values for a man without a prostate, or whether they actually reflect abnormal production of local IgA stimulated by radical prostatectomy. Because post-prostatectomy bacteriuria occurred frequently during urethral catheter drainage, the authors screened for postoperative IgA titers to mix 1 and mix 2 to determine whether specific production of antibody against gram-negative organisms might account for some of the postoperative IgA measured. Post-radical prostatectomy mix 1 and mix 2 titers were not elevated compared with preoperative measurements. Because urethral glandular tissue other than prostatic tissue is present in the male urethra, these glands also might be responsible for significant local antibody production. The high levels of urinary IgA and IgG after cystoprostatectomy with ileal orthotopic bladder substitution document that intestine incorporated into the urinary tract is still capable of producing local antibody. This observation corresponds with the findings of Mansson and associates [26] of elevated IgA and IgG in ileal reservoir urine compared with normal urinary tracts.
It has been estimated that 1 m of intestine may secrete up to 780 mg/d of IgA [27], indicating that normal intestinal production of antibody alone can account for the high IgA and IgG levels found in the patients who underwent bladder substitution. Interestingly, the ratio of IgA to IgG concentration in small intestine fluid is 2:1 [29], similar to the ratio of IgA to IgG in bladder substitution urine (Table 4). Because mix 1 and mix 2 IgA concentrations were elevated in VB1 and VB2 urine after ileal bladder substitution (see Table 5), some of this antibody was produced by the ileal bladder substitution in response to the inevitable bacteriuria that occurs during the prolonged postoperative catheter drainage. This finding is absent after radical prostatectomy alone. In addition, some of this increased antibody might be a result of the increased bacteriuria noted in the patients who underwent ileal bladder substitution after the initial postoperative period. The significance of the increased bacteriuria and elevated antibody levels after ileal bladder substitution is unclear. Because most of these episodes of bacteriuria were asymptomatic, whether they represent clinical infections that should be treated is not known. Bishop and associates [28] found that the bacterial flora of ileal conduits with asymptomatic bacteriuria had bacterial counts of 1000 or fewer colonies, and they noted that the healthy ileum in situ may contain more than 10,000 organisms per milliliter [29]. Because the normal urinary tract is usually sterile, it is possible that the bacteriuria found by the authors after ileal bladder substitution represents a form of colonization more commonly associated with the bowel than a clinical urinary tract infection, and has limited clinical importance. Trinchieri and associates [30] found that urine from patients with ileocystoplasty prevented attachment of E.
coli to human uroepithelial cells more effectively than urine from patients with recurrent urinary infections. This observation suggests that the relatively large quantities of IgA produced by the ileal bladder substitution may, in fact, prevent clinical infection by preventing tissue invasion by the bacteria. Only long-term follow-up of patients with ileocystoplasty or ileal bladder substitution will determine the clinical significance of the bacteriuria. The authors' study has documented an increased incidence of bacteriuria in men after ileal bladder substitution and no such increase after radical prostatectomy. Analysis of the data shows that male sources other than the prostate (probably urethral glands) must produce significant quantities of local urinary tract IgA. After ileal bladder substitution, the incorporated ileum may produce volumes of local antibody that may exceed the amounts ordinarily produced by the normal urinary tract. The clinical significance of the increased incidence of bacteriuria and elevated antibody levels in patients after ileal bladder substitution is unclear.
The virus-induced leakage of host-cell constituents represents a true increase in cellular permeability rather than an unpeeling of cell surface components, since an intracellular enzyme participates in the leakage. All of the T-system bacteriophages exhibit this leakage. The leakage does not occur with salt concentrations which permit only reversible virus-cell attachment but no penetration. These facts support the idea that the reaction underlying cell leakage is a part of the invasive mechanism. With increasing multiplicity of T2 infection of young, fresh Escherichia coli B cells, progressively larger molecules leak out of the cell. Acid-soluble P(32) appears in large amounts with single infection. Appreciable amounts of galactosidase enzyme and RNA do not leak until multiplicities of 5 to 30 are attained. Cellular DNA is not liberated unless sufficiently high multiplicities are used to cause the extensive cell destruction and clearing of the suspension characteristic of lysis-from-without. This progression is interpreted as an increase with T2 multiplicity in the maximum hole size produced in the cell membrane. Calculation shows that this increase in hole size must result from a spreading change in the character of the cell wall, rather than the coincidental juxtaposition of 2 or more viruses at adjacent attachment sites. T1 virus liberates less macromolecular constituents than T2 from E. coli B. 
The following experimental results constitute evidence that in the course of normal virus infection, a resealing reaction is rapidly instituted in the cell wall which reverses the effect of the original permeability increase, and renders the cell refractory to a second lytic reaction by a homologous virus: (a) Cell leakage induced by T2 virus in the course of normal infection markedly slows down or stops within a few minutes, even when only a small fraction of the material potentially available for leakage has been released, (b) Superinfection after 8 minutes at 37 degrees C., of a cell previously infected with a homologous virus causes little or no appearance of a second leakage of cell constituents. This experiment also leads to the conclusion that the sealing reaction, like that which causes the leakage, also involves a disturbance which spreads over all or most of the cell wall. (c) If a multiple virus infection is allowed to occur at 0 degrees C. and then the cells are placed in a 37 degrees C. bath after completion of attachment, a much greater cell leakage results than if the entire course had occurred at 37 degrees C., as would be expected if a resealing reaction comes into play at 37 degrees C. within a time less than that required by the completion of attachment. The virus particles attaching secondarily at 37 degrees C. are prevented from exercising their permeability-increasing effect by the sealing reaction of the virus which had penetrated first. Although a second homologous cell infection with T1 or T2 phages after a 37 degrees C. incubation fails to yield a second leakage, a second heterologous infection always causes exacerbation of new leakage, which, especially if T1 has preceded T2, may be much greater than the sum of those produced individually by each virus in separate cell suspensions. This phenomenon may be the action responsible for the "depressor" effect which occurs when 2 unrelated viruses attack the same cell. 
The properties of the sealing phenomenon are such as to make it appear a logical candidate for the mechanism underlying the exclusion of a superinfecting phage from participating in reproductive processes in a cell previously infected with a homologous virus, since the DNA of the second virus would be unable to penetrate the new barrier. Experiments to test this hypothesis revealed that the DNA from such superinfecting virus is completely extractable from cells by washing in dilute buffer, whereas about 40 to 50 per cent of the attached DNA of virus which has invaded virgin cells remains bound to the cells. Most of the viral DNA which appears in the original supernatant when P(32)-labelled T2 invades E. coli B in a multiplicity less than one, does not represent inert material but rather virus DNA which has been split, or split and hydrolyzed as a result of its interaction with the cells, as judged by the altered susceptibility to hydrolytic enzyme or to TCA precipitation. This suggests that 25 per cent or more of the virus DNA may be expendable, at least after the penetration stage of the infection cycle. Mg(++) which strongly depresses the amount of cell leakage attending T2 infection, does not prevent T2 penetration nor does it block the appearance of the exclusion reaction. Hence, if the initial leakage does mirror the lytic process by which a hole for the DNA injection is provided, the Mg(++) does not function by preventing this hole formation. Its effect would have to lie in prevention of the spreading lysis-potentiating reaction or in augmenting the sealing mechanism. A large number of independent lines of evidence indicate that the phenomenon of lysis-from-without exhibited by the T-even coliphages is the result of failure of the sealing mechanism to keep pace with the lytic reaction. This can result from an excess of infecting phages or inhibition of the cellular energy-liberating reaction required by the sealing mechanism. 
The complete parallelism between the development of refractoriness to lysis-from-without and development of refractoriness to the production of a new leakage from a homologous superinfection is especially convincing in this connection. It is proposed that the early phase of bacteriophage invasion involves the following steps: reversible electrostatic attachment; splitting of the viral DNA from its protein coat; initiation of a lytic reaction in the cell wall at the site of virus attachment; injection of the DNA through the hole so produced; a spreading disturbance over the cell surface which makes it momentarily more susceptible to the lytic reaction; sealing of the hole and a concomitant spread over the cell wall of a reaction making the cell refractory to initiation of a second lytic reaction. Na(+), K(+), and Mg(++) all behave differently in their effect on the leakage produced in the course of T2 invasion of E. coli.
Surgery for deep venous reflux (DVR) in the lower limb has displayed, for various reasons, a much more limited development than arterial surgery, including endovascular techniques. The importance and frequency of DVR in chronic venous disease, and particularly in chronic venous insufficiency (CVI), have been fully identified only in the last 20 years, thanks to the development of duplex scanning. Despite its effectiveness, deep reconstructive surgery remains controversial, which probably explains why this specific surgery is performed by few units worldwide. Furthermore, as deep reconstructive surgery is usually combined with superficial and perforator surgery, assessment of its specific benefit is difficult. In patients with severe CVI, venous valvular reflux involves the deep veins as an isolated abnormality in less than 10%, but is associated with superficial reflux and/or perforator incompetence in 46%. The most common etiology in DVR is post-thrombotic syndrome, accounting for an estimated 60-85% of patients with CVI. Primary reflux is the result of structural abnormalities in the vein wall and the valve itself. A very rare cause of reflux is the absence of valves secondary to agenesis. Surgical techniques for treating DVR can be classified into two groups: those that do and those that do not involve phlebotomy. The first group includes internal valvuloplasty, transposition, transplantation, neovalve and cryopreserved allograft. The second group involves wrapping, the Psathakis II procedure, external valvuloplasty (transmural and transcommissural, angioscopy-assisted or not), external valve construction and percutaneously placed devices. There are some clinical features that enable distinguishing superficial venous insufficiency from deep venous insufficiency, but they are not reliable enough, as the two are frequently combined. In addition, primary reflux is difficult to distinguish from secondary deep reflux. Duplex scanning provides both hemodynamic and anatomic information.
Photoplethysmography, like air plethysmography, can help identify the predominant pathological component when superficial and deep venous reflux are combined. It would seem logical to go beyond these investigations only in those patients in whom surgery for DVR may be considered. That means that the decision to continue investigations is dominated by the clinical context and the absence of contraindication (uncorrectable coagulation disorder, ineffective calf pump). When surgery is considered, complementary investigations must be carried out: ambulatory venous pressure measurement and venography, including ascending and descending phlebography. The goal of DVR surgery is to correct the reflux related to deep venous insufficiency at the subinguinal level. But it must be kept in mind that DVR is frequently combined with superficial and perforator reflux; consequently all these mechanisms have to be corrected in order to reduce the permanently increased venous pressure. As mentioned previously, surgical results for DVR are somewhat difficult to assess, as superficial venous surgery and/or perforator surgery have often been performed in combination with DVR surgery. Valvuloplasty is the most frequent procedure used for primary deep reflux. On the whole, valvuloplasty is credited with achieving a good result in 70% of cases in terms of clinical outcome, defined as freedom from ulcer recurrence and the reduction of pain, valve competence and hemodynamic improvement over a follow-up period of more than 5 years. In all series, a good correlation was observed between these three criteria. External transmural valvuloplasty does not seem to be as reliable as internal valvuloplasty in providing long-term valve competence or ulcer-free survival. In PTS, long-term results are available for transposition and transplantation.
In terms of clinical result and valve competence, a meta-analysis demonstrates that a good result is achieved in 50% of cases over a follow-up period of more than 5 years, with a poor correlation between clinical and hemodynamic outcome. Results with other techniques, including the Psathakis II technique, neovalve and cryopreserved valves, are less satisfactory. DVR surgery indications for reflux rely on clinical severity, hemodynamics and imaging: most of the authors recommend surgery in patients with severe disease graded C4 and C5-6. When superficial and perforator reflux are associated, they must be treated, for some authors as a first step, for others shortly before DVR surgery during the same hospital stay. Contraindications, as previously stipulated, have to be kept in mind. Hemodynamic and imaging criteria: only reflux graded 3-4 according to Kistner is usually treated with DVR surgery. It is generally recognized that, to be significantly abnormal, venous refill time must be less than 12 s, and the difference between pressure at rest and after standardized exercise in the standing position must be less than 40%. The decision to operate should be based on the clinical status of the patient, not the non-invasive data, since the patient's symptoms and signs may not correlate with the laboratory findings. Indications according to etiology: the indications for surgery can be simplified according to the clinical, hemodynamic and imaging criteria described above. In primary reflux, reconstructive surgery is recommended after failure of conservative treatment and in young and active patients reluctant to wear permanent compression. Valvuloplasty is the most suitable technique, with Kistner, Perrin and Sottiurai favoring internal valvuloplasty and Raju transcommissural external valvuloplasty.
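As an illustration only, the two hemodynamic thresholds described above can be combined into a simple screening check. The function name and parameters are hypothetical, not from the source, and the text itself stresses that the clinical status of the patient, not the laboratory data, should drive the decision to operate:

```python
def significant_reflux(refill_time_s: float, pressure_fall_pct: float) -> bool:
    """Screening sketch for significantly abnormal deep venous reflux:
    venous refill time under 12 s AND a fall in venous pressure with
    standardized exercise (standing position) of less than 40%.
    Illustrative only; not a substitute for clinical assessment."""
    return refill_time_s < 12 and pressure_fall_pct < 40

print(significant_reflux(8, 25))   # True: short refill time, poor pressure fall
print(significant_reflux(20, 55))  # False: normal hemodynamics
```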
In PTS, obstruction may be associated with reflux; most of the authors agree that when significant obstruction is localized above the inguinal ligament, the obstruction must be treated first. Secondary deep venous reflux, mainly in post-thrombotic syndrome, may be treated only after failure of conservative treatment, as the results achieved by subfascial endoscopic perforator surgery, associated or not with superficial venous surgery, are not convincing. It is recommended that this procedure be carried out in combination with deep reconstructive surgery. The techniques to be used, given that valvuloplasty is rarely feasible, are, in order of recommendation: transposition, transplantation, neovalve and cryopreserved allograft. Patients must be informed that in PTS surgery for reflux has a relatively high failure rate. As large randomized controlled trials comparing conservative treatment and surgery for DVR would be difficult to conduct, we must rely on the outcome of present series treated by DVR surgery. Analysis of those series provides recommendation grade C. Better results are obtained in the treatment of primary reflux compared with secondary reflux. Such surgery is not, however, often indicated, and the procedure must be performed in specialized and highly trained centers.
The first specimen of Ammocoetes branchialis that showed histologically any atrophic changes in the endostyle was taken on July 16. These changes proceeded relatively rapidly for about a month, after which the endostyle as such was no longer recognized. All specimens examined after August 15 showed in cross section the characteristic ductless follicles more or less completely formed. More gradual and minor changes in the way of further absorption of cell remnants and completion of the follicles continued at least until September 1. Two specimens taken from the creek on September 4, 1911, showed complete follicle formation with some stainable colloid (figures 14 and 15). There was still yellow granular pigment in the fibrous tissue between the follicles. In two specimens taken on October 14, 1909, the pigment was absent and the follicles were more closely set, larger, and contained homogeneous colloid. In the twenty-four specimens of ammocoetes studied, there were variations in the time of the onset of metamorphosis. There may also be variations in the rate of progress of the changes in different specimens. There is no evidence that removal of the animals from their native environment to the laboratory either increases or decreases the rate of metamorphosis. Schneider states that he was unable to get specimens kept in the laboratory to undergo metamorphosis. Gage, however, has repeatedly observed the metamorphosis under laboratory conditions, and the six of our specimens kept in the laboratory (some for forty days) remained in excellent condition, and the metamorphosis proceeded as well as in those living in the creek. I know of no observations bearing on the question as to whether the metamorphosis may be hastened or delayed as it can be in tadpoles and other amphibia. It is probable, however, that physical conditions greatly influence the transformation.
These observations as to the length of time from the inception to the completion of metamorphosis indicate that a month and probably longer is necessary for the lake and brook lampreys of Central New York. This is in agreement with the observations of Gage and of Muller on metamorphosis in general, but is at variance with the views of Bujor, who states that the process takes place within three to four days. The first endostylar changes are a gradual shrinkage in the whole organ with thickening of the capsule and septum and proliferation of the connective tissue in the periendostylar zone. The tongue anlage is developed in this thickening just dorsal to the endostyle and anterior to the gland orifice. The size of the chambers progressively decreases and with the thickening of the septum the halves of the endostyle are both absolutely and relatively more separated. All the five types of epithelia are affected, the first to show the change being type I, the four fan-shaped bundles of cuneiform cells of each half of the endostyle. These disappear totally quite early. The next type to show marked changes is type III, or the cells with yellow pigment granules. Here the change is progressive and these cell groups in different stages of atrophy may be traced through to the fully developed follicles. The epithelium of type V, or the endothelial-like lining of the parietal walls of the chambers, is piled up and extruded laterally as the chambers contract or shrink. These cells in different stages of atrophy may be followed until the metamorphosis is nearing completion. It is certain that the cells of types I, III, and V play no part in the formation of the ductless follicles. With types II and IV the question is not so easily settled as it is from one or the other or from both of these types that the permanent follicles arise. 
One can say definitely that type IV plays the major role, but whether the cells of type II, after fusion with the basal group of type IV, do not also share in the formation of the ventral follicle of the given chamber, I cannot decide, though from the evidence obtainable this seems probable. It is significant that the cells of type IV are continuous with, and indistinguishable from, the cells lining the orifice and are continued anteriorly in the deep pharyngeal groove and peripharyngeal grooves, as well as posteriorly from the orifice in the small pharyngeal groove. As to the fate of this extraglandular epithelium of type IV I have no data, save that with the closing of the orifice and the formation of the permanent branchial sac these grooves with their ciliated epithelium disappear and the whole sac comes to be lined with plain stratified epithelium. The fact that the cells of the pharyngeal grooves and the lining cells of the gland orifice are continuous with the cells of the endostyle from which the permanent thyroid follicles are formed is not without significance in relation to the development of the thyroid of the higher chordates. One or more very large follicles are formed from the lower portion of this orificial epithelium of type IV. Four ductless follicles are the maximum number that may be formed primarily in each half of the endostyle from the four areas of epithelium of type IV. In the specimens studied this maximum is frequently not attained. Posterior to the orifice, where four chambers exist, each corresponding to one half of an anterior chamber, only two follicles may be formed from each chamber, but in the coil these are proportionately increased in cross section. Most of the detailed studies here recorded have been made on the part of the endostyle posterior to the coil, where the simplest conditions exist. Here two follicles are ordinarily formed from each chamber.
In cross sections the follicles are at first only long tubules whose cavities are the remnants of the original endostyle chambers, but when the metamorphosis is completed each of these primary tubules is cut up into several elongated closed sacs corresponding to the true ductless follicles of all higher chordates. New follicles also arise by budding from these primary ones, and this process is probably of normal occurrence at the metamorphosis.
The question of the relationship of streptococci to the etiology of infectious arthritis and of rheumatic fever is of the utmost importance. If a streptococcus or group of streptococci could be shown to be associated with either disease, some form of specific treatment might be available. The possibility of primary streptococcic infection as the cause of rheumatic fever, and, to a less extent, of acute infectious arthritis would seem to be a reasonable conjecture because of the frequency of associated throat, sinus or other focal infection. To consider that these same streptococci remain in or about the affected joint and to such an extent that they are found in the blood stream in cases of chronic infectious arthritis of years' duration demands a rather unique conception. Recent investigative work has certainly tended to confirm the importance of streptococci in these diseases, but, if all the published reports are considered as a group, one can not help being impressed with the inconsistency and peculiarities of the findings. In blood cultures from cases of rheumatic fever Clawson (7) recovered Streptococcus viridans, Small (8) and Birkhaug (9) non-hemolytic (gamma type) streptococci, and Cecil et al. (3) Streptococcus viridans, rarely hemolytic and non-hemolytic streptococci. In blood cultures from cases of infectious arthritis Cecil et al. (2) recovered attenuated hemolytic streptococci and occasionally Streptococcus viridans, non-hemolytic streptococci and diphtheroids and Margolis and Dorsey (10) green-producing and indifferent streptococci and diphtheroids, whereas from synovial fluids and regional lymph nodes Forkner, Shands and Poston (4) and Poston (5) obtained Streptococcus viridans and Margolis and Dorsey (11) recovered green-producing and indifferent streptococci and diphtheroids from epiphyseal marrows, bones and synovial membranes.
On the other hand, Jordan (12) and Nye and Seegal (13) and the work reported in this paper have failed to confirm these findings. If a streptococcus is the infective agent, it is difficult to explain why the organisms recovered so consistently by certain groups of investigators are so different and why the findings of other groups are entirely negative. The chances of contamination, even with the most careful manipulation, are extremely favorable when using a cultural technique which demands subculturing a fluid medium every 3 to 5 days for a period of 4 to 6 weeks. The question arises as to what organisms should be considered as contaminants. Cultures found to contain Staphylococcus albus have never been judged significant. Margolis and Dorsey (11) have excluded Gram-positive bacilli from their series, but have included diphtheroids. Cecil et al. (2) reported the recovery of diphtheroids and Micrococcus zymogenes from blood cultures which they did not consider contaminated. Jordan (12) recovered short Gram-negative bacilli and Gram-positive bacilli and questioned their importance. The occurrence of such bacteria would appear to be not unlike that of the non-hemolytic (gamma type) streptococci reported by Small (8), Birkhaug (9) and, rarely, by Cecil et al. (2) and Margolis and Dorsey (10,11). In the work reported in this paper and in previously published work (13) with blood cultures from cases of rheumatic fever, staphylococci, Gram-positive bacilli and diplococci and diphtheroids have been isolated from a certain number of the cultures. Since these were found, as a rule, in only 1 of 2 or more cultures from the same blood and since such organisms occurred about as frequently in cultures from control cases, they have been considered merely contaminants. The findings in the 9 cases of arthritis with positive blood cultures reported by Margolis and Dorsey (10) are certainly far from convincing.
These authors found green-producing streptococci in 5 of the 9 cases, but duplicate cultures on the same blood, with one exception, were always negative and the two repeat cultures were negative. Furthermore, in the 3 cases yielding indifferent streptococci in one subculture all subsequent subcultures were negative. In the work reported in this paper a green-producing and a non-hemolytic streptococcus were isolated from blood cultures, but a Gram-positive diplococcus and a Gram-positive bacillus, respectively, were recovered from the same cultures. A consideration of the above would seem to point to the probability, even certainty, of streptococci occurring in some cultures as contaminants; and the work of Olitsky and Long (14) and Long, Olitsky and Stewart (15) has clearly demonstrated that the air contamination of cultures of ground material with non-hemolytic green-producing streptococci can occur just as easily as with diphtheroids and staphylococci. It is obvious, in such cases, that the types of organisms recovered are dependent on the flora of the air of the laboratory or of the throat of the laboratory worker, and this point may well explain the differences in the cultural findings under consideration. A number of years ago numerous articles (16-19) appeared relative to the bacteriologic flora of lymph nodes, particularly those from cases of Hodgkin's disease, and it is interesting to note that the organisms recovered were quite similar to those which have been recovered from the blood and tissues of cases of arthritis and rheumatic fever, with the exception that the diphtheroids were, at that time, much more in prominence. Regardless of elaborate serological studies and animal experiments, streptococci must be recovered consistently by several groups of laboratory workers before their etiologic rôle in chronic arthritis and rheumatic fever can be accepted. 
Duplicate cultures and repeat cultures should yield the same organism in a generous percentage of cases and cultures from cases of other diseases or from normal persons should be negative. Positive cultures from duplicate cultures opened only at the time when the first culture showed growth would make the findings more significant than if the whole series were subcultured every 3 to 5 days. The work of Cecil, Nicholls and Stainsby on the bacteriology of the blood and joints in chronic arthritis and rheumatic fever has apparently been carried out most carefully and thoroughly and their results are very consistent and convincing. In spite of attempts to follow their methods in the selection of patients and in cultural technique, the results on the relatively small series of cases which are reported here fail to confirm their findings.
The impact factor (IF) for 2015 was recently released and this could be the time to once again reflect on its use as a metric of a journal. Problems and concerns regarding the IF have been addressed extensively elsewhere 1 2. The principle of the IF for a given year is that it represents the average number of citations of articles published in the journal in the two previous years.

While authors frequently cite the IF as a determining factor for submission, the IF does not predict how many times individual articles will be cited. In a study from a peer-reviewed cardiovascular journal, nearly half of all published articles were poorly cited, i.e., less than five citations in five years 3. A similar percentage seems to apply to our journal. In nearly all journals we estimate that the majority of citations relate to a minority of the articles. Some articles are never cited. 13 % of the articles published in our journal from 2010 to 2013 have never been cited. Even authors of poorly cited articles benefit from the IF since many institutions use the combined impact factors of their published papers to measure research activity and this may be reflected in their research budgets.

The competition for the printed pages in the six annual issues of Ultraschall in der Medizin/European Journal of Ultrasound (UiM/EJU) has resulted in high rejection rates (between 80 % and 90 %). One negative review with recommendation of major revision may therefore result in rejection. Peer-review fraud, where the submitting author listed recommended reviewers with fake email addresses supplying fabricated peer reviews, has recently been described in the New England Journal of Medicine 4. Some of the editors of our journal believe they have experienced this as well. Fabricating reviews in order to get a high IF for an article is to be considered fraud and is inexcusable.

One aspect of using impact factors as a measure of the quality of a journal is that the IF only goes back two years. 
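The two-year definition above (and the longer citation window discussed next) can be sketched in a few lines. This is an illustration with hypothetical citation counts, not data from any real journal:

```python
# Sketch of the journal impact factor, using hypothetical citation data.
# IF(year) = citations received in `year` to articles published in the two
# previous years, divided by the number of articles published in those years.
# A `window` other than 2 models the longer citation window proposed for
# fields whose articles keep being cited for many years.
def impact_factor(citations, published, year, window=2):
    prior_years = range(year - window, year)
    cites = sum(citations[year].get(y, 0) for y in prior_years)
    items = sum(published[y] for y in prior_years)
    return cites / items

# Hypothetical counts: 60 and 55 articles published; 90 + 70 citations in 2015
citations = {2015: {2013: 90, 2014: 70}}
published = {2013: 60, 2014: 55}
print(round(impact_factor(citations, published, 2015), 2))  # 1.39
```

Note that the same average can hide a highly skewed distribution: a handful of heavily cited papers and many uncited ones yield the same IF as uniform citation rates.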
There may be differences between journals for different medical specialties since the citations in some areas seem to "burn out" within a few years while some articles continue to be cited even after several years. Therefore, a citation window that is longer than 2 years has been proposed 5.

For this editorial we took a look at the 60 articles published in UiM/EJU in 2010. Half of them were no longer being cited in 2015. However, 10 articles were cited more than 5 times in 2015, and 5 of these were cited more than 10 times 6 7 8 9 10. It therefore seems that many of our articles have a long scientific life and generate more citations than indicated by the IF. Moreover, some articles have the highest number of citations after three years, when they are no longer contributing to the impact factor. The most frequently cited articles from 2010 were multicenter studies, recommendations, and papers on hot topics like contrast-enhanced ultrasound (CEUS) and elastography, but it should be noted that there were also articles on the same topics that were poorly cited.

The same trending topics continued into 2013, now topped by European guidelines and recommendations 11 12 13. 9 of the 10 most cited articles we published in 2014 were on CEUS or elastography 14 15 16 17 18 19 20 21 22, but the most cited article from that year so far has been on peripheral nerves 23. Surprisingly, many good scientific papers on obstetrics/fetal US and musculoskeletal US have low citation rates 24 25 26. Our predictions for 2016, based on the topics of submitted articles in the last 12 months, are that CEUS and elastography will continue to be popular topics.

It is also worth mentioning that there can be a discrepancy between which titles are cited and which are accessed online. In addition to international guidelines, our CME articles are usually popular according to online access. CME articles are well established educational papers but they are rarely cited for the IF. 
Looking at the most read full-text recent articles on our journal's website shows that multicenter studies as well as recommendations backed by a national society or by the EFSUMB (European Federation of Societies for Ultrasound in Medicine and Biology) are still important 27 28 29 30 31 32 33. Upcoming important topics appear to be pediatric use of CEUS, simulation training and the introduction of ultrasound to medical students 34 35 36 37. Some of these are also backed by EFSUMB.

A recent paper on the IF of radiology journals found that subspecialty radiology journals had a higher IF than general radiology journals 38. This could prove a challenge to interdisciplinary journals like ours, but we take pride in continuing to cover all aspects of ultrasound in more than 15 fields.

The distribution between reviews, original articles and case reports in a journal is worth addressing. An important aspect of a journal is the publication of original scientific research articles. CME articles, pictorials and letters are important for other reasons but are cited at a lower rate. The value of case reports with regard to the IF is low since they are rarely cited 39, and we have observed that some journals have abandoned the publication of case reports, thus leaving them to spin-off journals. The rationale is that keeping case reports in a journal will only increase the denominator, thereby decreasing the IF 39. At our journal we have seen a decline in case report submissions but still want to publish them and even put one case on the front cover of every issue. Case reports still hold an educational value 40 and are important to our readers.

In conclusion, a healthy mix of original articles, CME articles, reviews and case reports combined with a few international guidelines and recommendations is important to UiM/EJU. 
Although we see popular topics like CEUS and elastography, it is not possible to predict which articles will be read or even cited based on the topic, with multicenter studies being the exception.
Across the developed world, we are witnessing an increasing emphasis on the need for more closely coordinated forms of health and social care provision. Integrated care pathways (ICPs) have emerged as a response to this aspiration and are believed by many to address the factors which contribute to service integration. ICPs map out a patient's journey, providing coordination of services for users. They aim to have: 'the right people, doing the right things, in the right order, at the right time, in the right place, with the right outcome'. The value for ICPs in supporting the delivery of care across organisational boundaries, providing greater consistency in practice, improving service continuity and increasing collaboration has been advocated by many. However, there is little evidence to support their use, and the need for systematic evaluations in order to measure their effectiveness has been widely identified. A recent Cochrane review assessed the effects of ICPs on functional outcome, process of care, quality of life and hospitalisation costs of inpatients with acute stroke, but did not specifically focus on service integration or its derivatives. To the best of our knowledge, no such systematic review of the literature exists.

INCLUSION CRITERIA:

Types of participants: The review focused on the care of adult patients who had suffered a stroke. It included the full spectrum of services - acute care, rehabilitation and long-term support - in hospital and community settings.

Types of intervention(s)/phenomena of interest: Integrated care pathways were the intervention of interest, defined for the purpose of this review as 'a multidisciplinary tool to improve the quality and efficiency of evidence based care and is used as a communication tool between professionals to manage and standardise the outcome orientated care'. 
Here 'multidisciplinary' is taken to refer to the involvement of two or more disciplines.

Types of outcomes: 'Service integration' was the outcome of interest, however this was defined and measured in the selected studies.

Types of studies: This review was concerned with how 'service integration' was defined in evaluations of ICPs, the type of evidence utilised in measuring the impact of the intervention, and the weight of evidence to support the effectiveness of care pathway technologies on 'service integration'. Studies that made an explicit link between ICPs and service integration were included in the review. Evidence generated from randomised controlled trials, quasi-experimental, qualitative and health economics research was sought. The search was limited to publications after 1980, coinciding with the emergence of ICPs in the healthcare context. Assessment for inclusion of foreign papers was based on the English-language abstract, where available. These were included only if an English translation was available. This review excluded studies that:

SEARCH STRATEGY: In order to avoid replication, the Joanna Briggs Institute for Evidence Based Nursing and Midwifery Database and the Cochrane Library were searched to establish that no systematic reviews existed and none were in progress. A three-stage search strategy was then used to identify both published and unpublished studies (see ). Our search strategy located 2123 papers, of which 39 were retrieved for further evaluation. We critically appraised seven papers, representing five studies. These were all evaluation studies and, as is typical in this field, comprised a range of study designs and data collection methods. Owing to the diversity of the study types included in the review, we developed a single appraisal checklist and data-extraction tool which could be applied to all research designs. 
The tool drew on the Joanna Briggs Institute (JBI) appraisal checklists for experimental studies and interpretive and critical research, and also incorporated specific information and issues which were relevant for our purposes (see ). This extends the thinking outlined in Lyne et al. in which, drawing on Campbell and Stanley's classic paper, the case is made for developing an appraisal tool which is applicable to all types of evaluation, irrespective of study design.

In assessing the quality of the papers, we were sympathetic to the methodological challenges of evaluating complex interventions such as ICPs. We were also cognisant of the very real constraints in which service evaluations are frequently undertaken in healthcare contexts. In accordance with the aims of this particular review, we have included studies which are methodologically weaker than is typical of many systematic reviews because, in our view, in the absence of stronger evidence, they yield useful information. Given the heterogeneity of the included studies, meta-analysis and/or qualitative synthesis was not possible. A narrative summary of the study findings is presented. Therefore, we do not know whether the costs of ICP development and implementation are justified by any of the reported benefits.

Implications for practice: There is some evidence that ICPs may support certain elements of service integration in the context of stroke care. This seems to be as a result of their ability to support the timely implementation of clinical interventions and the mobilisation of resources around the patient without incurring additional increases in length of stay. ICPs appear to be most successful in improving service coordination in the acute stroke context where patient care trajectories are predictable. Their value in the context of rehabilitation settings in which recovery pathways are more variable is less clear. 
There is some evidence that ICPs may be effective in bringing about behavioural changes in contexts where deficiencies in service provision have been identified. Their value in contexts where inter-professional working is well established is less clear. While earlier before and after studies show a reduction in length of stay in ICP-managed care, this may reflect wider healthcare trends, and the failure of later studies to demonstrate further reductions suggests that there may be limits as to how far this can continue to be reduced. There is some evidence to suggest that ICPs bring about improvements in documentation, but we do not know how far documented practice reflects actual practice. It is unclear how ICPs have their effects and the relative importance of the process of development and the artefact in use. As none of the studies reviewed included an economic evaluation, moreover, it remains unclear whether the benefits of ICPs justify the costs of their implementation.
Cervical length screening by transvaginal sonography (TVS) has been shown to be a good predictive test for spontaneous preterm birth (PTB) in symptomatic singleton pregnancy with threatened preterm labor (PTL). The aim of this review and meta-analysis of individual participant data was to evaluate the effect of knowledge of the TVS cervical length (CL) in preventing PTB in singleton pregnancies presenting with threatened PTL. We searched the Cochrane Pregnancy and Childbirth Group's Trials Register and the Cochrane Complementary Medicine Field's Trials Register (May 2016) and reference lists of retrieved studies. Selection criteria included randomized controlled trials of singleton gestations with threatened PTL randomized to management based mainly on CL screening (intervention group), or CL screening with no knowledge of results or no CL screening (control group). Participants included women with singleton gestations at 23 + 0 to 36 + 6 weeks with threatened PTL. We contacted corresponding authors of included trials to request access to the data and perform a meta-analysis of individual participant data. Data provided by the investigators were merged into a master database constructed specifically for the review. The primary outcome was PTB < 37 weeks. Summary measures were reported as relative risk (RR) or as mean difference (MD) with 95% CI. Three trials including a total of 287 singleton gestations with threatened PTL between 24 + 0 and 35 + 6 weeks were included in the meta-analysis, of which 145 were randomized to CL screening with knowledge of results and 142 to no knowledge of CL. Compared with the control group, women who were randomized to the known CL group had a significantly lower rate of PTB < 37 weeks (22.1% vs 34.5%; RR, 0.64 (95% CI, 0.44-0.94); three trials; 287 participants) and a later gestational age at delivery (MD, 0.64 (95% CI, 0.03-1.25) weeks; MD, 4.48 (95% CI, 1.18-8.98) days; three trials; 287 participants). 
All other outcomes for which there were available data were similar in the two groups. There is a significant association between knowledge of TVS CL and lower incidence of PTB and later gestational age at delivery in symptomatic singleton gestations with threatened PTL. Given that in the meta-analysis we found a significant 36% reduction in the primary outcome, but other outcomes were mostly statistically similar, further study needs to be undertaken to understand better whether the predictive characteristics of CL screening by TVS can be translated into better clinical management and therefore better outcomes and under what circumstances. Copyright © 2016 ISUOG. Published by John Wiley & Sons Ltd.
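The headline relative risk can be cross-checked from the reported arm sizes and event rates. The sketch below assumes inferred event counts of 32/145 and 49/142 (matching the reported 22.1% and 34.5%) and uses a simple single-stratum log-RR interval, not the actual individual-participant-data model:

```python
import math

# Naive relative-risk calculation; event counts are inferred from the
# reported percentages and are an assumption, not the trial-level data.
a, n1 = 32, 145   # PTB < 37 weeks in the known-CL group (~22.1%)
b, n2 = 49, 142   # PTB < 37 weeks in the control group (~34.5%)

rr = (a / n1) / (b / n2)
# Standard error of log(RR) for a single 2x2 table
se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # RR 0.64 (95% CI, 0.44-0.94)
```

That this crude calculation reproduces the published interval is expected here, since the meta-analysis pooled only three small trials; in general the pooled estimate can differ from a naive single-table computation.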
The pelidnotine scarabs (Scarabaeidae: Rutelinae: Rutelini) are a speciose, paraphyletic assemblage of beetles that includes spectacular metallic species ("jewel scarabs") as well as species that are ecologically important as herbivores, pollinators, and bioindicators. These beetles suffer from a complicated nomenclatural history, due primarily to 20th century taxonomic and nomenclatural errors. We review the taxonomic history of the pelidnotine scarabs, present a provisional key to genera with overviews of all genera, and synthesize a catalog of all taxa with synonyms, distributional data, type specimen information, and 107 images of exemplar species. As a result of our research, the pelidnotine leaf chafers (a paraphyletic group) include 27 (26 extant and 1 extinct) genera and 420 valid species and subspecies (419 extant and 1 extinct). Our research makes biodiversity research on this group tractable and accessible, thus setting the stage for future studies that address evolutionary and ecological trends. Based on our research, 1 new species is described, 1 new generic synonym and 12 new species synonyms are proposed, 11 new lectotypes and 1 new neotype are designated, many new or revised nomenclatural combinations, and many unavailable names are presented. The following taxonomic changes are made:

New generic synonym: The genus Heteropelidnota Ohaus, 1912 is a new junior synonym of Pelidnota MacLeay, 1819.

New species synonyms: Plusiotis adelaida pavonacea Casey, 1915 is a syn. n. of Chrysina adelaida (Hope, 1841); Odontognathus gounellei Ohaus, 1908 is a revised synonym of Pelidnota ebenina (Blanchard, 1842); Pelidnota francoisgenieri Moore & Jameson, 2013 is a syn. n. of Pelidnota punctata (Linnaeus, 1758); Pelidnota genieri Soula, 2009 is a syn. n. of Pelidnota punctata (Linnaeus, 1758); Pelidnota lutea (Olivier, 1758) is a revised synonym of Pelidnota punctata (Linnaeus, 1758); Pelidnota (Pelidnota) texensis Casey, 1915 is a revised synonym of Pelidnota punctata (Linnaeus, 1758); Pelidnota (Strigidia) zikani (Ohaus, 1922) is a revised synonym of Pelidnota tibialis tibialis Burmeister, 1844; Pelidnota ludovici Ohaus, 1905 is a syn. n. of Pelidnota burmeisteri tricolor Nonfried, 1894; Rutela fulvipennis Germar, 1824 is syn. n. of Pelidnota cuprea (Germar, 1824); Pelidnota pulchella blanda Burmeister, 1844 is a syn. n. of Pelidnota pulchella pulchella (Kirby, 1819); Pelidnota pulchella scapularis Burmeister, 1844 is a syn. n. of Pelidnota pulchella pulchella (Kirby, 1819); Pelidnota xanthogramma Perty, 1830 is a syn. n. of Pelidnota pulchella pulchella (Kirby, 1819).

New or revised statuses: Pelidnota fabricelavalettei Soula, 2009, revised status, is considered a species; Pelidnota rioensis Soula, 2009, stat. n., is considered a species; Pelidnota semiaurata semiaurata Burmeister, 1844, stat. rev., is considered a subspecies.

New or comb. rev. and revised status: Plusiotis guaymi Curoe, 2001 is formally transferred to the genus Chrysina (C. guaymi (Curoe, 2001), comb. n.); Plusiotis transvolcanica Morón & Nogueira, 2016 is transferred to the genus Chrysina (C. transvolcanica (Morón & Nogueira, 2016), comb. n.). Heteropelidnota kuhnti Ohaus, 1912 is transferred to the genus Pelidnota (P. kuhnti (Ohaus, 1912), comb. n.); Odontognathus riedeli Ohaus, 1905 is considered a subspecies of Pelidnota rubripennis Burmeister, 1844 (Pelidnota rubripennis riedeli (Ohaus, 1905), revised status and comb. rev.); Pelidnota (Strigidia) acutipennis (F. Bates, 1904) is transferred to the genus Sorocha (Sorocha acutipennis (F. Bates, 1904), comb. rev.); Pelidnota (Odontognathus) nadiae Martínez, 1978 is transferred to the genus Sorocha (Sorocha nadiae (Martínez, 1978), comb. rev.); Pelidnota (Ganonota) plicipennis Ohaus, 1934 is transferred to the genus Sorocha (Sorocha plicipennis (Ohaus, 1934), comb. rev.); Pelidnota similis Ohaus, 1908 is transferred to the genus Sorocha (Sorocha similis (Ohaus, 1908), comb. rev.); Pelidnota (Ganonota) yungana Ohaus, 1934 is transferred to Sorocha (Sorocha yungana (Ohaus, 1934), comb. rev.); Pelidnota malyi Soula, 2010: 58, revised status; Xenopelidnota anomala porioni Chalumeau, 1985, revised subspecies status.

To stabilize the classification of the group, a neotype is designated for the following species: Pelidnota thiliezi Soula, 2009. Lectotypes are designated for the following names (given in their original combinations): Pelidnota brevicollis Casey, 1915, Pelidnota brevis Casey, 1915, Pelidnota debiliceps Casey, 1915, Pelidnota hudsonica Casey, 1915, Pelidnota oblonga Casey, 1915, Pelidnota pallidipes Casey, 1915, Pelidnota ponderella Casey, 1915, Pelidnota strenua Casey, 1915, Pelidnota tarsalis Casey, 1915, Pelidnota texensis Casey, 1915, and Scarabaeus punctatus Linnaeus, 1758.

The following published infrasubspecific names are unavailable per ICZN Article 45.6.1: Pelidnota (Odontognathus) cuprea var. coerulea Ohaus, 1913; Pelidnota (Odontognathus) cuprea var. rufoviolacea Ohaus, 1913; Pelidnota (Odontognathus) cuprea var. nigrocoerulea Ohaus, 1913; Pelidnota pulchella var. fulvopunctata Ohaus, 1913; Pelidnota pulchella var. sellata Ohaus, 1913; Pelidnota pulchella var. reducta Ohaus, 1913; Pelidnota unicolor var. infuscata Ohaus, 1913.

The following published species name is unavailable per ICZN Article 11.5: Neopatatra synonyma Moore & Jameson, 2013. The following published species name is unavailable per application of ICZN Article 16.1: Parhoplognathus rubripennis Soula, 2008. The following published species name is unavailable per application of ICZN Article 16.4.1: Strigidia testaceovirens argentinica Soula, 2006, Pelidnota (Strigidia) testaceovirens argentinica (Soula, 2006), and Pelidnota testaceovirens argentinica (Soula, 2006).

The following published species names are unavailable per application of ICZN Article 16.4.2: Homonyx digennaroi Soula, 2010; Homonyx lecourti Soula, 2010; Homonyx mulliei Soula, 2010; Homonyx simoensi Soula, 2010; Homonyx wagneri Soula, 2010; Homonyx zovii Demez & Soula, 2011; Pelidnota arnaudi Soula, 2009; Pelidnota brusteli Soula, 2010; Pelidnota chalcothorax septentrionalis Soula, 2009; Pelidnota degallieri Soula, 2010; Pelidnota lavalettei Soula, 2008; Pelidnota lavalettei Soula, 2009; Pelidnota dieteri Soula, 2011; Strigidia gracilis decaensi Soula, 2008, Pelidnota (Strigidia) gracilis decaensi (Soula, 2008), and Pelidnota gracilis decaensi (Soula, 2008); Pelidnota halleri Demez & Soula, 2011; Pelidnota injantepalominoi Demez & Soula, 2011; Pelidnota kucerai Soula, 2009; Pelidnota malyi Soula, 2010: 36-37; Pelidnota mezai Soula, 2009; Pelidnota polita darienensis Soula, 2009; Pelidnota polita orozcoi Soula, 2009; Pelidnota polita pittieri Soula, 2009; Pelidnota punctulata decolombia Soula, 2009; Pelidnota punctulata venezolana Soula, 2009; Pelidnota raingeardi Soula, 2009; Pelidnota schneideri Soula, 2010; Pelidnota simoensi Soula, 2009; Pelidnota unicolor subandina Soula, 2009; Sorocha carloti Demez & Soula, 2011; Sorocha castroi Soula, 2008; Sorocha fravali Soula, 2011; Sorocha jeanmaurettei Demez & Soula, 2011; Sorocha yelamosi Soula, 2011; Xenopelidnota bolivari Soula, 2009; Xenopelidnota pittieri pittieri Soula, 2009.

Due to unavailability of the name Pseudogeniates cordobaensis Soula 2009, we describe the species as intentionally new (Pseudogeniates cordobaensis Moore, Jameson, Garner, Audibert, Smith, and Seidel, sp. n.).
This PhD thesis is based on three original articles. The studies were performed at the Department of Obstetrics and Gynaecology, Herlev University Hospital and at the Center for Clinical Epidemiology, Odense University Hospital.
Urinary incontinence (UI) is a frequent disorder among women, which for the individual can have physical, psychological and social consequences. The current standard of surgical treatment is the synthetic midurethral sling (MUS), which is a minimally invasive procedure.
As the synthetic MUSs (TVT,TVT-O,TOT) were introduced in the late 1990s, there are only a few studies at the long-term follow-up based on nationwide populations; only a few have reported on the risk of reoperation and there is sparse evidence on which treatment should be used subsequently to failure of synthetic MUSs.
Several surgical specialties have documented that department volume, surgeon volume and patient-related factors influence the quality of care. There is little knowledge regarding this in the surgical treatment for UI.
The aims of the thesis were therefore:
1. To describe the five-year incidence of reoperation after different surgical procedures for UI based on a nationwide population over a ten-year period (1998-2007) and to evaluate the influence of department volume (Study I).
2. To describe the choice of repeat surgery after failed synthetic MUSs and the departmental volume for the surgical treatment at reoperation over a ten-year period (1998-2007) based on a nationwide background population (Study II).
3. To evaluate efficacy of urethral injection therapy (UIT) based on patient reported outcome measures (PROMs) and hospital contacts within 30 days for women registered in the Danish Urogynaecological Database (DugaBase) over a five-year period (2007-2011) and the influence of department volume, surgeon volume and patient-related factors (Study III).
Study I: A total of 8671 women were recorded in the Danish National Patient Registry as having undergone surgical treatment for UI from 1998 through 2007.
The lowest rates of reoperation within five years were observed among women who had pubovaginal slings (6%), TVT (6%) and Burch colposuspension (6%), followed by TOT (9%) and miscellaneous operations (12%), while the highest observed risk was for UIT (44%). After adjustment for patient's age, department volume and calendar effect, TOT carried a 2-fold higher risk of reoperation (HR, 2.1; 95% CI, 1.5-2.9) compared with TVT.
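Reported ratio estimates such as the hazard ratio above can be sanity-checked against their confidence bounds: a Wald-type CI for a hazard ratio is symmetric on the log scale, so the point estimate should sit near the geometric mean of the bounds. A minimal sketch of that check (illustrative only, not part of Study I's actual analysis):

```python
import math

def ratio_ci_midpoint(lower, upper):
    """Point estimate implied by a ratio (HR/OR) confidence interval:
    the geometric mean of the bounds, since such CIs are symmetric
    on the log scale."""
    return math.exp((math.log(lower) + math.log(upper)) / 2)

# Reported: TOT vs TVT reoperation HR 2.1 (95% CI, 1.5-2.9)
implied = ratio_ci_midpoint(1.5, 2.9)
print(round(implied, 2))  # -> 2.09, consistent with the reported HR of 2.1
```

The agreement (2.09 vs 2.1) suggests the reported interval and point estimate are internally consistent to rounding.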
Study II: A total of 5820 women had synthetic MUSs at baseline from 1998 through 2007 and were registered in the Danish National Patient Registry and 354 (6%) of these women had a reoperation.
The first choice of treatment for reoperation was a synthetic MUS, and UIT was a frequent second choice. At reoperation, 289 (82%) of the women were treated at the department where they had undergone the primary synthetic MUS. Fewer treatment modalities were in use, and significantly more TOTs were implanted, at low volume departments compared with high volume departments.
Study III: A total of 731 women aged 18 years or older were registered with first-time UIT in the DugaBase from 2007 through 2011. Logistic regression was used to predict the odds of success pertaining to department volume, surgeon volume and patient-related factors on the International Consultation on Incontinence Questionnaire-Short Form (ICIQ-SF) (frequency of UI, amount of leakage and impact of UI on daily life) and the rate of 30-day hospital contacts.
We applied the definition of "cure" set out by the steering committee of the DugaBase, where a satisfactory result is leakage once a week or less often, or never, on the frequency score; similarly, "no leakage at all" was defined as answering "never" to leakage on the frequency score.
Among the 252 women who had answered both the pre- and postoperative questionnaires, 75 (29.8%) were cured and 23 (9.1%) achieved no leakage at all at three months follow-up. There was a statistically significant improvement on all three scores of the ICIQ-SF. The mean total ICIQ-SF score was 16.0 (SD 3.8) before injection and 10.6 (SD 6.2) at three months follow-up after injection (p < 0.001).
UIT was performed at 16 departments, of which four high volume departments performed 547 of 814 UITs (67.2%). The risk of hospital contacts was lower for women treated at a high volume department (adjusted OR 0.27; 95% CI, 0.09-0.76). Women treated by a high volume surgeon (> 75 UITs during the career as a surgeon) had a higher chance of cure on the frequency score than those treated by a low volume surgeon (≤ 25 UITs) (adjusted OR 4.51; 95% CI, 1.21-16.82) and a lower risk of 30-day hospital contacts (adjusted OR 0.35; 95% CI, 0.16-0.79). Women with severe UI had less likelihood of cure on all ICIQ-SF scores. Preoperative use of antimuscarinic drugs lowered the chance of cure on the frequency score (adjusted OR 0.14; 95% CI, 0.04-0.41) and the amount score (adjusted OR 0.33; 95% CI, 0.13-0.82).
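The adjusted odds ratios above come from exponentiating logistic-regression coefficients (OR = exp(β), Wald CI = exp(β ± 1.96·SE)). A hedged sketch of that conversion; the coefficient and standard error below are back-calculated for illustration and are not taken from the thesis data:

```python
import math

def adjusted_or(beta, se, z=1.96):
    """Convert a logistic-regression coefficient (log-odds scale) and its
    standard error into an odds ratio with a Wald 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical values chosen to reproduce the surgeon-volume result above:
or_, lo, hi = adjusted_or(beta=1.506, se=0.672)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# -> approximately OR 4.51 (95% CI 1.21-16.83), matching the reported
#    4.51 (1.21-16.82) to rounding
```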
Conclusions:
Study I: The study provided physicians with a representative evaluation of the rate of reoperations after different surgical procedures for UI. The observation that TOT was associated with a significantly higher risk of reoperation than TVT is novel in the literature and has important implications for both surgeons and patients when they consider surgical options for UI.
Study II: The majority of women had reoperation at the same department as the primary synthetic MUS. Fewer treatment modalities were in use at low volume departments compared with high volume departments. It seems appropriate in the absence of evidence for the best treatment after failed synthetic MUSs, that women are referred to highly specialized departments for diagnosing and treatment.
Study III: This national population-based cohort study described cure among women who had UIT in an everyday-life setting. Results appeared to be at the lower end of the spectrum reported in the literature. A learning curve for UIT indicated that the treatment should be restricted to fewer hands to improve surgical training and, consequently, outcomes for women undergoing UIT. The severity of UI was a strong predictor of a lower degree of cure. Similarly, preoperative use of antimuscarinic drugs decreased the likelihood of cure, indicating that women with severe MUI or UUI also have less chance of cure.
3,3'-Dimethoxybenzidine dihydrochloride is an off-white powder with a melting point of 274 degrees C. 3,3'-Dimethoxybenzidine is used principally as an intermediate in the production of commercial bisazobiphenyl dyes for coloring textiles, paper, plastic, rubber, and leather. In the synthesis of the bisazobiphenyl dyes, the amine groups of 3,3'-dimethoxybenzidine are chemically linked with other aromatic amines. A small quantity of 3,3'-dimethoxybenzidine is also used as an intermediate in the production of o-dianisidine diisocyanate, which is used in isocyanate-based adhesive systems and as a component of polyurethane elastomers. 3,3'-Dimethoxybenzidine dihydrochloride was evaluated in toxicity and carcinogenicity studies as part of the National Toxicology Program's Benzidine Dye Initiative. This Initiative was designed to evaluate representative benzidine congeners and benzidine congener-derived and benzidine-derived dyes. 3,3'-Dimethoxybenzidine dihydrochloride was nominated for study because of the potential for human exposure during production of bisazobiphenyl dyes and because benzidine, a structurally related chemical, is a known human carcinogen. NTP Toxicology and Carcinogenesis studies were conducted by administering 3,3'-dimethoxybenzidine dihydrochloride (greater than 97.5% pure) in drinking water to groups of F344/N rats of each sex for 14 days, 13 weeks, 9 months, or 21 months. The 21-month studies were intended to last 24 months but were terminated early because of rapidly declining survival due to neoplasia. Studies were performed only in rats because similar studies are being performed in mice at the National Center for Toxicological Research. Genetic toxicology studies were conducted with Salmonella typhimurium, Chinese hamster ovary (CHO) cells, and Drosophila melanogaster. Fourteen-Day Studies: All rats receiving drinking water concentrations up to 4,500 ppm lived to the end of the studies.
Rats that received water containing 4,500 ppm 3,3'-dimethoxybenzidine dihydrochloride lost weight. Water consumption decreased with increasing concentration of chemical and at 4,500 ppm was less than one-fourth that by the controls. Lymphoid depletion of the thymus in males and hypocellularity of the bone marrow in males and females were seen at the 4,500-ppm concentration, but not at the next lower concentration or in controls. Thirteen-Week Studies: All rats receiving concentrations up to 2,500 ppm lived to the end of the studies. Final mean body weights of rats given drinking water containing 1,250 or 2,500 ppm 3,3'-dimethoxybenzidine dihydrochloride were 5%-20% lower than those of controls. Water consumption at these concentrations was 40%-60% of that consumed by controls. Compound-related effects in rats given water containing 2,500 ppm 3,3'-dimethoxybenzidine dihydrochloride included a mild exacerbation of naturally occurring nephropathy and the presence of a yellow-brown pigment (lipofuscin) in the cytoplasm of thyroid follicular cells. Serum triiodothyronine (T3) and thyroxine (T4) concentrations in females receiving 330 ppm or more and T4 concentrations in males receiving 170 ppm or more were significantly lower than in controls. Thyrotropin (TSH) concentrations were comparable in controls and exposed rats. Based on the chemical-related nephropathy and reductions in water consumption and body weight gain observed in the 13-week studies, doses for the long-term studies in male and female rats were 0 or 330 ppm 3,3'-dimethoxybenzidine dihydrochloride in drinking water administered for 9 months and 0, 80, 170, or 330 ppm administered for 21 months. Nine-Month Studies: Ten rats of each sex in control and 330-ppm groups were evaluated after 9 months. Significant decreases in T3 and T4 concentrations were seen in exposed male and female rats.
Other lesions seen in exposed rats included foci of alteration in the liver, a carcinoma of the preputial gland in one male, a carcinoma of the clitoral gland in one female, and carcinoma of the Zymbal gland in two males. Body Weights and Survival in the Twenty-One-Month Studies: The average amount of 3,3'-dimethoxybenzidine dihydrochloride consumed per day was approximately 6, 12, or 21 mg/kg for low, mid, or high dose male rats and 7, 14, or 23 mg/kg for low, mid, or high dose female rats. Mean body weights of male and female rats began to decrease relative to those of controls after about 1 year of exposure at 170 or 330 ppm and were 6%-22% lower for males and 7%-17% lower for females. Survival of rats exposed to 3,3'-dimethoxybenzidine dihydrochloride was reduced because animals were dying with neoplasms or being killed in a moribund condition (survival at 21 months--male: control, 44/60, 73%; low dose, 8/45, 18%; mid dose, 0/75; high dose, 0/60; female: 45/60, 75%; 15/45, 33%; 6/75, 8%; 0/60). Because of these early compound-related deaths, the studies were terminated at 21 months. Nonneoplastic and Neoplastic Effects in the Twenty-One-Month Studies: Increased incidences of several nonneoplastic lesions were observed in exposed rats, including hematopoietic cell proliferation in the spleen and cystic and centrilobular degeneration and necrosis of the liver. Neoplasms attributed to 3,3'-dimethoxybenzidine dihydrochloride exposure were observed in rats at many tissue sites, including the skin, Zymbal gland, preputial and clitoral glands, oral cavity, small and large intestines, liver, brain, mesothelium, mammary gland, and uterus/cervix. The incidences of these neoplasms in male and female rats are given in the abstract summary table (see page 5 of the Technical Report). Genetic Toxicology: 3,3'-Dimethoxybenzidine was mutagenic in S. 
typhimurium strain TA100 with exogenous metabolic activation and in strain TA98 without activation; a weakly positive response was observed in strain TA1535 with metabolic activation. 3,3'-Dimethoxybenzidine induced sister chromatid exchanges and chromosomal aberrations in CHO cells with and without exogenous metabolic activation. 3,3'-Dimethoxybenzidine did not induce sex-linked recessive lethal mutations in adult male D. melanogaster exposed via feeding or injection. Conclusions: Under the conditions of these 21-month drinking water studies, there was clear evidence of carcinogenic activity of 3,3'-dimethoxybenzidine dihydrochloride for male F344/N rats, as indicated by benign and malignant neoplasms of the skin, Zymbal gland, preputial gland, oral cavity, intestine, liver, and mesothelium. Increased incidences of astrocytomas of the brain may have been related to chemical administration. There was clear evidence of carcinogenic activity of 3,3'-dimethoxybenzidine dihydrochloride for female F344/N rats, as indicated by benign and malignant neoplasms of the Zymbal gland, clitoral gland, and mammary gland. Increases in neoplasms of the skin, oral cavity, large intestine, liver, and uterus/cervix were also considered to be related to administration of 3,3'-dimethoxybenzidine dihydrochloride. Synonyms: o-dianisidine dihydrochloride; 3,3'-dimethoxy-(1,1'-biphenyl)-4,4'-diamine dihydrochloride; 3,3'-dimethoxy-4,4'-diaminobiphenyl dihydrochloride
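The average daily doses cited in the 21-month studies (e.g., roughly 21 mg/kg for high dose males at 330 ppm) follow from the standard drinking-water conversion, since 1 ppm in water is approximately 1 mg of chemical per litre. A minimal sketch; the water intake and body weight below are assumed illustrative values for an adult male rat, not figures taken from the report:

```python
def daily_dose_mg_per_kg(conc_ppm, water_ml_per_day, body_weight_g):
    """Estimate ingested dose from a drinking-water concentration.
    1 ppm in water is taken as 1 mg chemical per litre."""
    mg_per_day = conc_ppm * (water_ml_per_day / 1000.0)  # mg ingested daily
    return mg_per_day / (body_weight_g / 1000.0)         # normalize to kg body weight

# Illustrative assumption: a ~350 g rat drinking ~22 mL/day at 330 ppm
print(round(daily_dose_mg_per_kg(330, 22, 350), 1))
# -> 20.7 mg/kg/day, near the ~21 mg/kg reported for high dose males
```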
Hexachloroethane is used in organic synthesis as a retarding agent in fermentation, as a camphor substitute in nitrocellulose, in pyrotechnics and smoke devices, in explosives, and as a solvent. In previous long-term gavage studies with B6C3F1 mice and Osborne-Mendel rats (78 weeks of exposure followed by 12-34 weeks of observation), hexachloroethane caused increased incidences of hepatocellular carcinomas in mice. However, survival of low and high dose rats was reduced compared with that of vehicle controls, and the effects on rats were inconclusive. Therefore, additional toxicology and carcinogenesis studies were conducted in F344/N rats by administering hexachloroethane (approximately 99% pure) in corn oil by gavage to groups of males and females for 16 days, 13 weeks, or 2 years. Genetic toxicology studies were conducted in Salmonella typhimurium and in Chinese hamster ovary (CHO) cells. Urinalysis was performed in conjunction with the 13-week studies. Sixteen-Day Studies: In the 16-day studies (dose range, 187-3,000 mg/kg), all rats that received 1,500 or 3,000 mg/kg and 1/5 males and 2/5 females that received 750 mg/kg died before the end of the studies. Final mean body weights of rats that received 750 mg/kg were 25% lower than that of vehicle controls for males and 37% lower for females. Compound-related clinical signs seen at 750 mg/kg or more included dyspnea, ataxia, prostration, and excessive lacrimation. Other compound-related effects included hyaline droplet formation in the tubular epithelial cells in all dosed males and tubular cell regeneration and granular casts in the tubules at the corticomedullary junction in the kidney in males receiving 187 and 375 mg/kg. Thirteen-Week Studies: In the 13-week studies (dose range, 47-750 mg/kg), 5/10 male rats and 2/10 female rats that received 750 mg/kg died before the end of the studies. The final mean body weight of male rats that received 750 mg/kg was 19% lower than that of vehicle controls.
Compound-related clinical signs for both sexes included hyperactivity at doses of 94 mg/kg or higher and convulsions at doses of 375 or 750 mg/kg. The relative weights of liver, heart, and kidney were increased for exposed males and females. Kidney lesions were seen in all dosed male groups, and the severity increased with dose. Papillary necrosis and tubular cell necrosis and degeneration in the kidney and hemorrhagic necrosis in the urinary bladder were observed in the five male rats that received 750 mg/kg and died before the end of the studies; at all lower doses, hyaline droplets, tubular regeneration, and granular casts were present in the kidney. No chemical-related kidney lesions were observed in females. Foci of hepatocellular necrosis were observed in several male and female rats at doses of 188 mg/kg or higher. Dose selection for the 2-year studies was based primarily on the lesions of the kidney in males and of the liver in females. Studies were conducted by administering hexachloroethane in corn oil by gavage at 0, 10, or 20 mg/kg body weight, 5 days per week, to groups of 50 male rats. Groups of 50 female rats were administered 0, 80, or 160 mg/kg on the same schedule. Body Weight and Survival in the Two-Year Studies: Mean body weights of high dose rats were slightly (5%-9%) lower than those of vehicle controls toward the end of the studies. No significant differences in survival were observed between any groups of rats (male: vehicle control, 31/50; 10 mg/kg, 29/50; 20 mg/kg, 26/50; female: vehicle control, 32/50; 80 mg/kg, 27/50; 160 mg/kg, 32/50). Nonneoplastic and Neoplastic Effects in the Two-Year Studies: Incidences of kidney mineralization (vehicle control, 2/50; low dose, 15/50; high dose, 32/50) and hyperplasia of the pelvic transitional epithelium (0/50; 7/50; 7/50) were increased in dosed male rats. Renal tubule hyperplasia was observed at an increased incidence in high dose male rats (2/50; 4/50; 11/50). 
These lesions have been described as characteristic of the hyaline droplet nephropathy that is associated with an accumulation of liver-generated α2μ-globulin in the cytoplasm of tubular epithelial cells. The severity of nephropathy was increased in high dose male rats (moderate vs. mild), and the incidences and severity of nephropathy were increased in dosed females (22/50; 42/50; 45/50). The incidences of adenomas (1/50; 2/50; 4/50), carcinomas (0/50; 0/50; 3/50), and adenomas or carcinomas (combined) (1/50; 2/50; 7/50) of the renal tubule were also increased in the high dose male group. One of the carcinomas in the high dose group metastasized to the lung. No compound-related neoplasms were observed in females. The incidence of pheochromocytomas of the adrenal gland in low dose male rats was significantly greater than that in vehicle controls (15/50; 28/50; 21/49), and the incidences for both dosed groups were greater than the mean historical control incidence (28% ± 11%). Genetic Toxicology: Hexachloroethane was not mutagenic in S. typhimurium strains TA98, TA100, TA1535, or TA1537 when tested with and without exogenous metabolic activation. In CHO cells, hexachloroethane did not induce chromosomal aberrations with or without metabolic activation but did produce sister chromatid exchanges in the presence of exogenous metabolic activation. Audit: The data, documents, and pathology materials from the 2-year studies of hexachloroethane have been audited. The audit findings show that the conduct of the studies is documented adequately and support the data and results given in this Technical Report. Conclusions: Under the conditions of these 2-year gavage studies, there was clear evidence of carcinogenic activity of hexachloroethane for male F344/N rats, based on the increased incidences of renal neoplasms.
The marginally increased incidences of pheochromocytomas of the adrenal gland may have been related to hexachloroethane administration to male rats. There was no evidence of carcinogenic activity of hexachloroethane for female F344/N rats administered 80 or 160 mg/kg by gavage for 103 weeks. The severity of nephropathy and incidences of linear mineralization of the renal papillae and hyperplasia of the transitional epithelium of the renal pelvis were increased in dosed male rats. The incidences and severity of nephropathy were increased in dosed female rats. Synonyms: carbon hexachloride; ethane hexachloride; hexachlorethane; hexachloroethylene; 1,1,1,2,2,2-hexachloroethane; perchloroethane Trade Names: Avlothane; Distokal; Distopan; Distopin; Egitol; Falkitol; Fasciolin; Mottenhexe; Phenohep
Since the 1990s, there has been a growing recognition of the importance of the out-of-school context for children and adolescents. Fueled in part by family demographics that include substantial numbers of employed mothers and single mothers, in part by concerns about poor academic performance and problem behaviors, and in part by intensified efforts to find ways to promote positive youth development, researchers and practitioners have focused their attention on two particular out-of-school settings: after-school programs and structured activities. The research findings pertaining to full-time (i.e., 5 days a week) after-school programs are mixed, which may reflect the substantial heterogeneity of the programs in terms of children being served, the types of activities offered, and the training and background of the staff. The federal funding of the 21st Century CLCs and various state and local initiatives has increased the numbers of low-income and English-learning students participating in after-school programs. A substantial number of programs are becoming more school-like. The available research suggests that (under some conditions) attending after-school programs is linked to improved social and academic outcomes. Children are more likely to show academic and social benefits when staff-child relationships are positive and nonconflictual, when programs offer a variety of age-appropriate activities from which children can select those of interest, and when children attend on a regular basis. The research findings about voluntary structured activities are more straightforward. Participation in these activities has been consistently linked to positive academic and social developmental outcomes in numerous studies. What appears to be key is that the activities are voluntary, are characterized by sustained engagement and effort, and provide opportunities to build or develop skills. 
Although the available research has begun to inform our understanding of the out-of-school context, further research is sorely needed. First, there is a need for research to identify the social, cognitive, and linguistic processes by which participation in programs and structured activities influences child and youth developmental outcomes. For example, researchers need to consider the competitiveness of sport activities in relation to children's social and emotional functioning. Researchers also might examine after-school experiences as settings in which complex thought processes can develop. Heath (1999) has conducted initial work in the area of language development by obtaining language samples during voluntary structured activities and analyzing their content. In the initial samples, students engaged in few sustained conversations on a topic and they frequently changed topics. After 3-4 weeks at the program, however, Heath noted substantial changes in the students' conversations and language. The use of conditionals (should, would, could) increased. She also noted increases in strategies to obtain clarifications from others and increases in the use of shifted registers and genres. Heath's (1999) linguistic analyses in conjunction with research that considers social and motivational processes underscore the broader point that the out-of-school context is complex and multi-layered and likely to be of substantial importance in the lives of children and youth. Research is needed to identify other important developmental processes in programs and structured activities. A promising procedure for identifying these processes is experience sampling (Csikszentmihalyi & Larson, 1987; Larson, 1989). Experience sampling methodology allows researchers to collect systematic data about an individual's activities, thoughts, and affective states by obtaining reports from participants at multiple randomly sampled points in time. 
Participants are signaled to provide a report in a variety of ways, such as with beepers or alarm watches. This record of experiences is not usually captured by other data collection methods. For example, program observations provide data on observed activities, interactions, and program climate, but do not offer insights into students' feelings and experiences within the after-school environment. Questionnaire and survey data are retrospective, asking respondents to recall past experiences and feelings regarding their after-school activities. Experience sampling could be used to examine any number of processes in after-school programs and structured activities. A better understanding of the effects of program content also is needed. Whether after-school programs should focus exclusively on enrichment activities or exclusively on academic activities, or include both enrichment and academic components, is the subject of heated debate. Some after-school scholars (Halpern, 1999; Heath, 1999; Eccles, in press) have argued forcefully that a focus on academics undermines the unique strengths and role of programs, and that programs should emphasize extracurricular enrichment activities. Others (Noam, 2004) have supported the move by policy makers and educators to make programs more academic, with an emphasis on homework help, tutoring, and preparation for academic achievement tests. The effects of different approaches to after-school programming have not been evaluated systematically. Research that describes, compares, and then tests effects of different program content models is needed to determine which types of programs are successful in attracting and keeping students (a necessary condition for programs to effect change), and to determine whether different types of programs are differentially associated with improvements in student outcomes such as school attendance, academic achievement, social competencies, and behavioral adjustment. 
A related question is whether structured activities that are obligatory or required have the same effects as voluntary structured activities do. Researchers also should further examine the impact of different attendance patterns on child developmental outcomes. We do not have solid information about optimal intensity and duration of attendance in terms of outcomes. There are suggestions in the literature that long-term, frequent attendance at programs is associated with positive outcomes for low-income children. Research needs to examine whether these results hold for middle-income children and youth as well. Finally, experimental studies should be conducted in which children and adolescents are randomly assigned to after-school programs and structured activities. All of the research to date on structured activities, and most of the research on after-school programs, has been nonexperimental, so questions about selection bias remain. Experimental studies in which children and adolescents are randomly assigned to participation in programs and activities would be a valuable next step in understanding relations between participation and child and youth outcomes. Such research should not be conducted until we have more information about the components of high-quality programming in terms of program content and developmental processes, however.
Because children spend a significant proportion of their day in school, pediatric emergencies such as the exacerbation of medical conditions, behavioral crises, and accidental/intentional injuries are likely to occur. Recently, both the American Academy of Pediatrics and the American Heart Association have published guidelines stressing the need for school leaders to establish emergency-response plans to deal with life-threatening medical emergencies in children. The goals include developing an efficient and effective campus-wide communication system for each school with local emergency medical services (EMS); establishing and practicing a medical emergency-response plan (MERP) involving school nurses, physicians, athletic trainers, and the EMS system; identifying students at risk for life-threatening emergencies and ensuring the presence of individual emergency care plans; training staff and students in first aid and cardiopulmonary resuscitation (CPR); equipping the school for potential life-threatening emergencies; and implementing lay rescuer automated external defibrillator (AED) programs. The objective of this study was to use published guidelines by the American Academy of Pediatrics and the American Heart Association to examine the preparedness of schools to respond to pediatric emergencies, including those involving children with special care needs, and potential mass disasters. A 2-part questionnaire was mailed to 1000 randomly selected members of the National Association of School Nurses. 
The first part included 20 questions focusing on: (1) the clinical background of the school nurse (highest level of education, years practicing as a school health provider, CPR training); (2) demographic features of the school (student attendance, grades represented, inner-city or rural/suburban setting, private or public funding, presence of children with special needs); (3) self-reported frequency of medical and psychiatric emergencies (most common reported school emergencies encountered over the past school year, weekly number of visits to school nurses, annual number of "life-threatening" emergencies requiring activation of EMS); and (4) the preparedness of schools to manage life-threatening emergencies (presence of an MERP, presence of emergency care plans for asthmatics, diabetics, and children with special needs, presence of a school nurse during all school hours, CPR training of staff and students, availability of athletic trainers during all athletic events, presence of an MERP for potential mass disasters). The second part included 10 clinical scenarios measuring the availability of emergency equipment and the confidence level of the school nurse to manage potential life-threatening emergencies. Of the 675 questionnaires returned, 573 were eligible for analysis. A majority of responses were from registered nurses who have been practicing for >5 years in a rural or suburban setting. The most common reported school emergencies were extremity sprains and shortness of breath. Sixty-eight percent (391 of 573 [95% confidence interval (CI): 64-72%]) of school nurses have managed a life-threatening emergency requiring EMS activation during the past school year. Eighty-six percent (95% CI: 84-90%) of schools have an MERP, although 35% (95% CI: 31-39%) of schools do not practice the plan. Thirteen percent (95% CI: 10-16%) of schools do not identify authorized personnel to make emergency medical decisions. 
When stratified by mean student attendance, school setting, and funding classification, schools with and without an MERP did not differ significantly. Of the 205 schools that do not have a school nurse present on campus during all school hours, 17% (95% CI: 12-23%) do not have an MERP, 17% (95% CI: 12-23%) do not identify an authorized person to make medical decisions when faced with a life-threatening emergency, and 72% (95% CI: 65-78%) do not have an effective campus-wide communication system. CPR training is offered to 76% (95% CI: 70-81%) of the teachers, 68% (95% CI: 61-74%) of the administrative staff, and 28% (95% CI: 22-35%) of the students. School nurses reported the availability of a bronchodilator metered-dose inhaler (78% [95% CI: 74-81%]), AED (32% [95% CI: 28-36%]), and epinephrine autoinjector (76% [95% CI: 68-79%]) in their school. When stratified by inner-city and rural/suburban school setting, the availability of emergency equipment did not differ significantly except for the availability of an oxygen source, which was higher in rural/suburban schools (15% vs 5%). School-nurse responders self-reported more confidence in managing respiratory distress, airway obstruction, profuse bleeding/extremity fracture, anaphylaxis, and shock in a diabetic child and comparatively less confidence in managing cardiac arrest, overdose, seizure, heat illness, and head injury. When analyzing schools with at least 1 child with special care needs, 90% (95% CI: 86-93%) have an MERP, 64% (95% CI: 58-69%) have a nurse available during all school hours, and 32% (95% CI: 27-38%) have an efficient and effective campus-wide communication system linked with EMS. There are no identified authorized personnel to make medical decisions when the school nurse is not present on campus in 12% (95% CI: 9-16%) of the schools with children with special care needs.
When analyzing the confidence level of school nurses to respond to common potential life-threatening emergencies in children with special care needs, 67% (95% CI: 61-72%) of school nurses felt confident in managing seizures, 88% (95% CI: 84-91%) felt confident in managing respiratory distress, and 83% (95% CI: 78-87%) felt confident in managing airway obstruction. School nurses reported having the following emergency equipment available in the event of an emergency in a child with special care needs: glucose source (94% [95% CI: 91-96%]), bronchodilator (79% [95% CI: 74-83%]), suction (22% [95% CI: 18-27%]), bag-valve-mask device (16% [95% CI: 12-21%]), and oxygen (12% [95% CI: 9-16%]). An MERP designed specifically for potential mass disasters was present in 418 (74%) of 573 schools (95% CI: 70-77%). When stratified by mean student attendance, school setting, and funding classification, schools with and without an MERP for mass disasters did not differ significantly. Although schools are in compliance with many of the recommendations for emergency preparedness, specific areas for improvement include practicing the MERP several times per year, linking all areas of the school directly with EMS, identifying authorized personnel to make emergency medical decisions, and increasing the availability of AED in schools. Efforts should be made to increase the education of school nurses in the assessment and management of life-threatening emergencies for which they have less confidence, particularly cardiac arrest, overdose, seizures, heat illness, and head injury.
Objective Neck masses are common in adults, but often the underlying etiology is not easily identifiable. While infections cause most of the neck masses in children, most persistent neck masses in adults are neoplasms. Malignant neoplasms far exceed any other etiology of adult neck mass. Importantly, an asymptomatic neck mass may be the initial or only clinically apparent manifestation of head and neck cancer, such as squamous cell carcinoma (HNSCC), lymphoma, thyroid, or salivary gland cancer. Evidence suggests that a neck mass in the adult patient should be considered malignant until proven otherwise. Timely diagnosis of a neck mass due to metastatic HNSCC is paramount because delayed diagnosis directly affects tumor stage and worsens prognosis. Unfortunately, despite substantial advances in testing modalities over the last few decades, diagnostic delays are common. Currently, there is only 1 evidence-based clinical practice guideline to assist clinicians in evaluating an adult with a neck mass. Additionally, much of the available information is fragmented, disorganized, or focused on specific etiologies. Although there is literature related to the diagnostic accuracy of individual tests, there is little guidance about rational sequencing of tests in the course of clinical care. This guideline strives to bring a coherent, evidence-based, multidisciplinary perspective to the evaluation of the neck mass with the intention to facilitate prompt diagnosis and enhance patient outcomes. Purpose The primary purpose of this guideline is to promote the efficient, effective, and accurate diagnostic workup of neck masses to ensure that adults with potentially malignant disease receive prompt diagnosis and intervention to optimize outcomes.
Specific goals include reducing delays in diagnosis of HNSCC; promoting appropriate testing, including imaging, pathologic evaluation, and empiric medical therapies; reducing inappropriate testing; and promoting appropriate physical examination when cancer is suspected. The target patient for this guideline is anyone ≥18 years old with a neck mass. The target clinician for this guideline is anyone who may be the first clinician whom a patient with a neck mass encounters. This includes clinicians in primary care, dentistry, and emergency medicine, as well as pathologists and radiologists who have a role in diagnosing neck masses. This guideline does not apply to children. This guideline addresses the initial broad differential diagnosis of a neck mass in an adult. However, the intention is only to assist the clinician with a basic understanding of the broad array of possible entities. The intention is not to direct management of a neck mass known to originate from thyroid, salivary gland, mandibular, or dental pathology as management recommendations for these etiologies already exist. This guideline also does not address the subsequent management of specific pathologic entities, as treatment recommendations for benign and malignant neck masses can be found elsewhere. Instead, this guideline is restricted to addressing the appropriate work-up of an adult patient with a neck mass that may be malignant in order to expedite diagnosis and referral to a head and neck cancer specialist. The Guideline Development Group sought to craft a set of actionable statements relevant to diagnostic decisions made by a clinician in the workup of an adult patient with a neck mass. Furthermore, where possible, the Guideline Development Group incorporated evidence to promote high-quality and cost-effective care. 
Action Statements The development group made a strong recommendation that clinicians should order a neck computed tomography (or magnetic resonance imaging) with contrast for patients with a neck mass deemed at increased risk for malignancy. The development group made the following recommendations: (1) Clinicians should identify patients with a neck mass who are at increased risk for malignancy because the patient lacks a history of infectious etiology and the mass has been present for ≥2 weeks without significant fluctuation or the mass is of uncertain duration. (2) Clinicians should identify patients with a neck mass who are at increased risk for malignancy based on ≥1 of these physical examination characteristics: fixation to adjacent tissues, firm consistency, size >1.5 cm, or ulceration of overlying skin. (3) Clinicians should conduct an initial history and physical examination for patients with a neck mass to identify those with other suspicious findings that represent an increased risk for malignancy. (4) For patients with a neck mass who are not at increased risk for malignancy, clinicians or their designees should advise patients of criteria that would trigger the need for additional evaluation. Clinicians or their designees should also document a plan for follow-up to assess resolution or final diagnosis. (5) For patients with a neck mass who are deemed at increased risk for malignancy, clinicians or their designees should explain to the patient the significance of being at increased risk and explain any recommended diagnostic tests. (6) Clinicians should perform, or refer the patient to a clinician who can perform, a targeted physical examination (including visualizing the mucosa of the larynx, base of tongue, and pharynx) for patients with a neck mass deemed at increased risk for malignancy. 
(7) Clinicians should perform fine-needle aspiration (FNA) instead of open biopsy, or refer the patient to someone who can perform FNA, for patients with a neck mass deemed at increased risk for malignancy when the diagnosis of the neck mass remains uncertain. (8) For patients with a neck mass deemed at increased risk for malignancy, clinicians should continue evaluation of patients with a cystic neck mass, as determined by FNA or imaging studies, until a diagnosis is obtained and should not assume that the mass is benign. (9) Clinicians should obtain additional ancillary tests based on the patient's history and physical examination when a patient with a neck mass is deemed at increased risk for malignancy who does not have a diagnosis after FNA and imaging. (10) Clinicians should recommend evaluation of the upper aerodigestive tract under anesthesia, before open biopsy, for patients with a neck mass deemed at increased risk for malignancy and without a diagnosis or primary site identified with FNA, imaging, and/or ancillary tests. The development group recommended against clinicians routinely prescribing antibiotic therapy for patients with a neck mass unless there are signs and symptoms of bacterial infection.
Specific diagnostic tests to detect severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and resulting COVID-19 disease are not always available and take time to obtain results. Routine laboratory markers such as white blood cell count, measures of anticoagulation, C-reactive protein (CRP) and procalcitonin are used to assess the clinical status of a patient. These laboratory tests may be useful for the triage of people with potential COVID-19 to prioritize them for different levels of treatment, especially in situations where time and resources are limited. To assess the diagnostic accuracy of routine laboratory testing as a triage test to determine if a person has COVID-19. On 4 May 2020 we undertook electronic searches in the Cochrane COVID-19 Study Register and the COVID-19 Living Evidence Database from the University of Bern, which is updated daily with published articles from PubMed and Embase and with preprints from medRxiv and bioRxiv. In addition, we checked repositories of COVID-19 publications. We did not apply any language restrictions. We included both case-control designs and consecutive series of patients that assessed the diagnostic accuracy of routine laboratory testing as a triage test to determine if a person has COVID-19. The reference standard could be reverse transcriptase polymerase chain reaction (RT-PCR) alone; RT-PCR plus clinical expertise and/or imaging; repeated RT-PCR several days apart or from different samples; WHO and other case definitions; and any other reference standard used by the study authors. Two review authors independently extracted data from each included study. They also assessed the methodological quality of the studies, using QUADAS-2. We used the 'NLMIXED' procedure in SAS 9.4 for the hierarchical summary receiver operating characteristic (HSROC) meta-analyses of tests for which we included four or more studies.
To facilitate interpretation of results, for each meta-analysis we estimated summary sensitivity at the points on the SROC curve that corresponded to the median and interquartile range boundaries of specificities in the included studies. We included 21 studies in this review, including 14,126 COVID-19 patients and 56,585 non-COVID-19 patients in total. Studies evaluated a total of 67 different laboratory tests. Although we were interested in the diagnostic accuracy of routine tests for COVID-19, the included studies used detection of SARS-CoV-2 infection through RT-PCR as the reference standard. There was considerable heterogeneity between tests, threshold values and the settings in which they were applied. For some tests a positive result was defined as a decrease compared to normal values; for other tests a positive result was defined as an increase; and for some tests both an increase and a decrease may have indicated test positivity. None of the studies had either low risk of bias on all domains or low concerns for applicability for all domains. Only three of the tests evaluated had a summary sensitivity and specificity over 50%. These were: increase in interleukin-6, increase in C-reactive protein and lymphocyte count decrease. Blood count Eleven studies evaluated a decrease in white blood cell count, with a median specificity of 93% and a summary sensitivity of 25% (95% CI 8.0% to 27%; very low-certainty evidence). The 15 studies that evaluated an increase in white blood cell count had a lower median specificity and a lower corresponding sensitivity. Four studies evaluated a decrease in neutrophil count. Their median specificity was 93%, corresponding to a summary sensitivity of 10% (95% CI 1.0% to 56%; low-certainty evidence). The 11 studies that evaluated an increase in neutrophil count had a lower median specificity and a lower corresponding sensitivity.
The summary sensitivity of an increase in neutrophil percentage (4 studies) was 59% (95% CI 1.0% to 100%) at median specificity (38%; very low-certainty evidence). The summary sensitivity of an increase in monocyte count (4 studies) was 13% (95% CI 6.0% to 26%) at median specificity (73%; very low-certainty evidence). The summary sensitivity of a decrease in lymphocyte count (13 studies) was 64% (95% CI 28% to 89%) at median specificity (53%; low-certainty evidence). Four studies that evaluated a decrease in lymphocyte percentage showed a lower median specificity and lower corresponding sensitivity. The summary sensitivity of a decrease in platelets (4 studies) was 19% (95% CI 10% to 32%) at median specificity (88%; low-certainty evidence). Liver function tests The summary sensitivity of an increase in alanine aminotransferase (9 studies) was 12% (95% CI 3% to 34%) at median specificity (92%; low-certainty evidence). The summary sensitivity of an increase in aspartate aminotransferase (7 studies) was 29% (95% CI 17% to 45%) at median specificity (81%) (low-certainty evidence). The summary sensitivity of a decrease in albumin (4 studies) was 21% (95% CI 3% to 67%) at median specificity (66%; low-certainty evidence). The summary sensitivity of an increase in total bilirubin (4 studies) was 12% (95% CI 3.0% to 34%) at median specificity (92%; very low-certainty evidence). Markers of inflammation The summary sensitivity of an increase in CRP (14 studies) was 66% (95% CI 55% to 75%) at median specificity (44%; very low-certainty evidence). The summary sensitivity of an increase in procalcitonin (6 studies) was 3% (95% CI 1% to 19%) at median specificity (86%; very low-certainty evidence). The summary sensitivity of an increase in IL-6 (4 studies) was 73% (95% CI 36% to 93%) at median specificity (58%) (very low-certainty evidence).
Other biomarkers The summary sensitivity of an increase in creatine kinase (5 studies) was 11% (95% CI 6% to 19%) at median specificity (94%) (low-certainty evidence). The summary sensitivity of an increase in serum creatinine (4 studies) was 7% (95% CI 1% to 37%) at median specificity (91%; low-certainty evidence). The summary sensitivity of an increase in lactate dehydrogenase (4 studies) was 25% (95% CI 15% to 38%) at median specificity (72%; very low-certainty evidence). Although these tests give an indication about the general health status of patients and some tests may be specific indicators for inflammatory processes, none of the tests we investigated are useful for accurately ruling in or ruling out COVID-19 on their own. Studies were done in specific hospitalized populations, and future studies should consider non-hospital settings to evaluate how these tests would perform in people with milder symptoms.
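Each sensitivity/specificity pair reported above comes from cross-classifying a marker threshold against the RT-PCR reference standard. A minimal sketch of that 2x2 computation (the counts below are hypothetical, chosen only so the output matches the IL-6 point estimates of 73% sensitivity and 58% specificity):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 table of index test vs reference standard."""
    sensitivity = tp / (tp + fn)  # proportion of RT-PCR-positive patients flagged by the marker
    specificity = tn / (tn + fp)  # proportion of RT-PCR-negative patients with a negative marker
    return sensitivity, specificity

# Hypothetical counts for one marker threshold (not data from the review)
sens, spec = diagnostic_accuracy(tp=73, fp=42, fn=27, tn=58)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```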
Adults are at risk for illness, hospitalization, disability and, in some cases, death from vaccine-preventable diseases, particularly influenza and pneumococcal disease. CDC recommends vaccinations for adults on the basis of age, health conditions, prior vaccinations, and other considerations. Updated vaccination recommendations from CDC are published annually in the U.S. Adult Immunization Schedule. Despite longstanding recommendations for use of many vaccines, vaccination coverage among U.S. adults remains low. August 2017-June 2018 (for influenza vaccination) and January-December 2018 (for pneumococcal, herpes zoster, tetanus and diphtheria [Td]/tetanus toxoid, reduced diphtheria toxoid, and acellular pertussis [Tdap], hepatitis A, hepatitis B, and human papillomavirus [HPV] vaccination). The National Health Interview Survey (NHIS) is a continuous, cross-sectional national household survey of the noninstitutionalized U.S. civilian population. In-person interviews are conducted throughout the year in a probability sample of households, and NHIS data are compiled and released annually. NHIS's objective is to monitor the health of the U.S. population and provide estimates of health indicators, health care use and access, and health-related behaviors. Adult receipt of influenza, pneumococcal, herpes zoster, Td/Tdap, hepatitis A, hepatitis B, and at least 1 dose of HPV vaccines was assessed. Estimates were derived for a new composite adult vaccination quality measure and by selected demographic and access-to-care characteristics (e.g., age, race/ethnicity, indication for vaccination, travel history [travel to countries where hepatitis infections are endemic], health insurance status, contacts with physicians, nativity, and citizenship). Trends in adult vaccination were assessed during 2010-2018. Coverage for the adult age-appropriate composite measure was low in all age groups. 
Racial and ethnic differences in coverage persisted for all vaccinations, with lower coverage for most vaccinations among non-White compared with non-Hispanic White adults. Linear trend tests indicated coverage increased from 2010 to 2018 for most vaccines in this report. Few adults aged ≥19 years had received all age-appropriate vaccines, including influenza vaccination, regardless of whether inclusion of Tdap (13.5%) or inclusion of any tetanus toxoid-containing vaccine (20.2%) receipt was measured. Coverage among adults for influenza vaccination during the 2017-18 season (46.1%) was similar to the estimate for the 2016-17 season (45.4%), and coverage for pneumococcal (adults aged ≥65 years [69.0%]), herpes zoster (adults aged ≥50 years and aged ≥60 years [24.1% and 34.5%, respectively]), tetanus (adults aged ≥19 years [62.9%]), Tdap (adults aged ≥19 years [31.2%]), hepatitis A (adults aged ≥19 years [11.9%]), and HPV (females aged 19-26 years [52.8%]) vaccination in 2018 were similar to the estimates for 2017. Hepatitis B vaccination coverage among adults aged ≥19 years and health care personnel (HCP) aged ≥19 years increased 4.2 and 6.7 percentage points to 30.0% and 67.2%, respectively, from 2017. HPV vaccination coverage among males aged 19-26 years increased 5.2 percentage points to 26.3% from the 2017 estimate. Overall, HPV vaccination coverage among females aged 19-26 years did not increase, but coverage among Hispanic females aged 19-26 years increased 10.8 percentage points to 49.6% from the 2017 estimate. 
Coverage for the following vaccines was lower among adults without health insurance compared with those with health insurance: influenza vaccine (among adults aged ≥19 years, 19-49 years, and 50-64 years), pneumococcal vaccine (among adults aged 19-64 years at increased risk), Td vaccine (among all age groups), Tdap vaccine (among adults aged ≥19 years and 19-64 years), hepatitis A vaccine (among adults aged ≥19 years overall and among travelers aged ≥19 years), hepatitis B vaccine (among adults aged ≥19 years and 19-49 years and among travelers aged ≥19 years), herpes zoster vaccine (among adults aged ≥60 years), and HPV vaccine (among males and females aged 19-26 years). Adults who reported having a usual place for health care generally reported receipt of recommended vaccinations more often than those who did not have such a place, regardless of whether they had health insurance. Vaccination coverage was higher among adults reporting ≥1 physician contact during the preceding year compared with those who had not visited a physician during the preceding year, regardless of whether they had health insurance. Even among adults who had health insurance and ≥10 physician contacts during the preceding year, depending on the vaccine, 20.1%-87.5% reported not having received vaccinations that were recommended either for all persons or for those with specific indications. Overall, vaccination coverage among U.S.-born adults was significantly higher than that of foreign-born adults, including influenza vaccination (aged ≥19 years), pneumococcal vaccination (all ages), tetanus vaccination (all ages), Tdap vaccination (all ages), hepatitis B vaccination (aged ≥19 years and 19-49 years and travelers aged ≥19 years), herpes zoster vaccination (all ages), and HPV vaccination among females aged 19-26 years. Vaccination coverage also varied by citizenship status and years living in the United States. 
NHIS data indicate that many adults remain unprotected against vaccine-preventable diseases. Coverage for the adult age-appropriate composite measures was low in all age groups. Individual adult vaccination coverage remained low as well, but modest gains occurred in vaccination coverage for hepatitis B (among adults aged ≥19 years and HCP aged ≥19 years), and HPV (among males aged 19-26 years and Hispanic females aged 19-26 years). Coverage for other vaccines and groups with Advisory Committee on Immunization Practices vaccination indications did not improve from 2017. Although HPV vaccination coverage among males aged 19-26 years and Hispanic females aged 19-26 years increased, approximately 50% of females aged 19-26 years and 70% of males aged 19-26 years remained unvaccinated. Racial/ethnic vaccination differences persisted for routinely recommended adult vaccines. Having health insurance coverage, having a usual place for health care, and having ≥1 physician contacts during the preceding 12 months were associated with higher vaccination coverage; however, these factors alone were not associated with optimal adult vaccination coverage, and findings indicate missed opportunities to vaccinate remained. Substantial improvement in adult vaccination uptake is needed to reduce the burden of vaccine-preventable diseases. Following the Standards for Adult Immunization Practice (https://www.cdc.gov/vaccines/hcp/adults/for-practice/standards/index.html), all providers should routinely assess adults' vaccination status at every clinical encounter, strongly recommend appropriate vaccines, either offer needed vaccines or refer their patients to another provider who can administer the needed vaccines, and document vaccinations received by their patients in an immunization information system.
Asthma and chronic obstructive pulmonary disease (COPD) are common diseases of the airways and lungs that have a major impact on the health of the population. The mainstay of treatment is by inhalation of medication to the site of the disease process. This can be achieved by a number of different device types, which have wide variations in costs to the health service. A number of different inhalation devices are available. The pressurised metered-dose inhaler (pMDI) is the most commonly used and cheapest device, which may also be used in conjunction with a spacer device. Newer chlorofluorocarbon (CFC)-free inhaler devices using hydrofluoroalkanes (HFAs) have also been developed. The drug is dissolved or suspended in the propellant under pressure. When activated, a valve system releases a metered volume of drug and propellant. Other devices include breath-actuated pMDIs (BA-pMDI), such as Autohaler and Easi-Breathe. They incorporate a mechanism activated during inhalation that triggers the metered-dose inhaler. Dry powder inhalers (DPI), such as Turbohaler, Diskhaler, Accuhaler and Rotahaler, are activated by inspiration by the patient. The powdered drug is dispersed into particles by the inspiration. With nebulisers, oxygen, compressed air, or ultrasonic power is used to break up solutions or suspensions of medication into droplets for inhalation. The aerosol is administered by mask or by a mouthpiece. There has been no previous systematic review of the evidence of clinical effectiveness and cost-effectiveness of these different inhaler devices. To review systematically the clinical effectiveness and cost-effectiveness of inhaler devices in asthma and COPD. The different aspects of inhaler devices were separated into the most clinically relevant comparisons. Methods involved systematic searching of electronic databases and bibliographies for randomised controlled trials (RCTs) and systematic reviews.
Pharmaceutical companies and experts in the field were contacted for further information. Trials that met the inclusion criteria were appraised and data extraction was undertaken by one reviewer and checked by a second reviewer, with any discrepancies being resolved through agreement. RESULTS--IN VITRO CHARACTERISTICS VERSUS IN VIVO TESTING AND CLINICAL RESPONSE: There is evidence that when comparative testing is performed on inhaler devices using the same methods, there is some correlation between particle size measurements and clinical response. However, the measurements are dependent upon the methods used, and a single measure of a device in isolation is of limited value. Also, there is little data on comparing devices of different types. There is currently insufficient data to verify the ability of in vitro assessments to predict inhaler performance in vivo. RESULTS--EFFECTIVENESS OF METERED-DOSE INHALERS FOR THE DELIVERY OF CORTICOSTEROIDS IN ASTHMA: The review of three trials in children and 21 trials in adults demonstrated no evidence to suggest clinical benefits of any other inhaler device over a pMDI in corticosteroid delivery. RESULTS--EFFECTIVENESS OF METERED-DOSE INHALERS FOR THE DELIVERY OF BETA-AGONISTS IN STABLE ASTHMA: In children, 11 studies were reviewed, of which seven compared the Turbohaler with the pMDI. One study found a significant treatment difference in peak expiratory flow rate, although there were differences in the patients' baseline characteristics. In adults, a review of 70 studies found no demonstrable difference in the clinical bronchodilator effect of short-acting beta2-agonists delivered by the standard pMDI compared with that produced by any other DPI, HFA-pMDI or the Autohaler device. The finding that HFA-pMDIs may reduce treatment failure and oral steroid requirement in beta-agonist delivery needs further confirmatory research in adequately randomised clinical trials.
RESULTS--EFFECTIVENESS OF NEBULISERS VERSUS METERED-DOSE INHALERS FOR THE DELIVERY OF BRONCHODILATORS IN STABLE ASTHMA: In children, three included trials compared different devices with a nebuliser and demonstrated no evidence of clinical superiority of nebulisers over inhaler devices in bronchodilator delivery. A total of 23 studies in adults found equivalence for the main pulmonary outcomes and no evidence of difference in other outcomes. RESULTS--EFFECTIVENESS OF METERED-DOSE INHALERS FOR THE DELIVERY OF BETA-AGONISTS IN COPD: Only two studies were included in this review. No evidence of clinical difference was found in beta-agonist delivery. RESULTS--EFFECTIVENESS OF NEBULISERS VERSUS METERED-DOSE INHALERS FOR THE DELIVERY OF BRONCHODILATORS IN COPD: Evidence from 14 trials demonstrated equivalence for the main outcomes of pulmonary function. For other outcomes there was no evidence of treatment difference in bronchodilator delivery. RESULTS--PATIENTS' ABILITY TO USE METERED-DOSE INHALERS: Differences among studies and the heterogeneity of the results make it difficult to draw conclusions about inhaler technique differences between device types. The review of technique after teaching the correct technique suggests that there is no difference in patients' ability to use DPI or pMDIs. RESULTS--ECONOMIC ANALYSIS: The total number of NHS prescriptions for inhaler therapy for asthma in 1998 was over 31 million, with a net ingredient cost in excess of 392 million GB pounds. This economic assessment uses decision analysis to estimate the relative cost-effectiveness of inhaler devices for the delivery of bronchodilator and corticosteroid inhaled therapy. Overall, there were no differences in patient outcomes among the devices. On the assumption that the devices were clinically equivalent, pMDIs were the most cost-effective devices for asthma treatment.
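The economic logic described above, in which devices judged clinically equivalent are ranked by cost alone, reduces to a cost-minimisation analysis. A minimal sketch with hypothetical per-patient annual costs (placeholder values, not figures from the assessment):

```python
# Cost-minimisation: when outcomes are equivalent, the cheapest option dominates.
# These annual per-patient device costs are hypothetical placeholders.
device_costs = {
    "pMDI": 25.0,
    "pMDI + spacer": 32.0,
    "BA-pMDI": 48.0,
    "DPI": 55.0,
    "nebuliser": 120.0,
}

# Select the device with the lowest cost
cheapest = min(device_costs, key=device_costs.get)
print(cheapest)  # -> pMDI under these assumed prices
```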
This systematic review examined the evidence from clinical trials evaluating the clinical effectiveness of different inhaler devices in the delivery of inhaled corticosteroids and beta2-bronchodilators for patients with asthma and COPD. The evidence from the published clinical literature demonstrates no difference in clinical effectiveness between nebulisers and alternative inhaler devices compared to standard pMDI with or without a spacer device. The cost-effectiveness evidence therefore favours pMDIs (or the cheapest inhaler device) as first-line treatment in all patients with stable asthma unless other specific reasons are identified. Patients can use pMDIs as effectively as other inhaler devices as long as the correct inhalation technique is taught. CONCLUSIONS--RECOMMENDATIONS FOR RESEARCH: Further clinical trials are required to demonstrate any differences in the clinical effectiveness and cost-effectiveness of inhaler devices and nebulisers compared with pMDIs. These should be of sufficient statistical power and methodological rigour to demonstrate any clinical benefit. Trials should be undertaken in community settings to ensure the generalisability of results. Outcome measures should be more patient-centred and report adverse effects more completely. Reporting of data from trials should be improved.
To investigate the cost-effectiveness of using prognostic information to identify patients with breast cancer who should receive adjuvant therapy. Electronic databases from 1980 through to February 2002. A survey of clinical practice in UK cancer centres and units. Large retrospective dataset containing data on prognostic factors, treatments and outcomes for women with early breast cancer treated in Oxford. Between six and nine databases were searched by an information expert. Evidence-based methods were used to review and select those studies and the quality of each included paper was assessed using standard assessment tools reported in the literature or piloted and developed for this study. A survey of clinical practice in UK cancer centres and units was carried out to ensure that conclusions drawn from the report could be implemented. These data, along with the information gathered in the systematic reviews, informed the methodological approach adopted for the health economic modelling. An illustrative framework was developed for incorporating patient-level prediction within a health economic decision model. This framework was applied to a large retrospective dataset containing data on prognostic factors, treatments and outcomes for women with early breast cancer treated in Oxford. The data were used to estimate directly a parametric regression-based risk equation, from which a prognostic index was developed, and prognosis-specific estimates of the baseline breast cancer hazard could be observed. Published estimates of treatment effects, health service treatment costs and utilities were used to construct a decision analytic framework around this risk equation, thus enabling simulation of the effectiveness and cost-effectiveness of adjuvant therapy for all possible combinations of prognostic factors included in the model. The lack of good-quality systematic reviews and well-conducted studies of prognostic factors in breast cancer is a striking finding. 
There are no registers of studies of prognostic factors or of reviews of prognostic studies. Many of the reviews used weak methods, and the primary studies are similarly weak, with poor methodology and reporting of results. In addition, there is much variation in patient populations, assay methods, analysis of results, definitions used and reporting of results. Most studies appear to be retrospective and some use inappropriate methods likely to inflate outcomes, such as optimising cut points and failing to test the results in an independent population. Very few reviews used meta-analysis to conduct a pooled analysis and to provide an estimate of the average size of any association. Instead, most reviews relied on vote counting. Although many prognostic models for breast cancer have been published, remarkably few have been re-examined by independent groups in independent settings. The few validation studies have been carried out on ill-defined samples, sometimes of smaller size and short follow-up, and sometimes using different patient outcomes when validating a model. The evidence from the validation studies shows support for the prognostic value of the Nottingham Prognostic Index (NPI). No new prognostic factors have been shown to add substantially to those identified in the 1980s. Improvement of this index depends on finding factors that are as important as, but independent of, lymph node stage and pathological grade. The NPI remains a useful clinical tool, although additional factors may enhance its use. We accepted that hormone receptor status (ER) for hormonal therapy such as tamoxifen and prediction of response to trastuzumab by HER2 did not require systematic review, as the mechanism of action of these drugs requires intact receptors. There was no clear evidence that other factors were useful predictors of response and survival.
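The Nottingham Prognostic Index mentioned above is, in its commonly cited published form, a weighted sum of tumour size, histological grade, and lymph node stage. The sketch below uses that general-knowledge formula and the conventional good/moderate/poor cut-points; these figures come from the wider literature, not from this review, so treat them as illustrative:

```python
def nottingham_prognostic_index(size_cm: float, grade: int, node_stage: int) -> float:
    """NPI = 0.2 x tumour size (cm) + histological grade (1-3) + nodal stage (1-3)."""
    if grade not in (1, 2, 3) or node_stage not in (1, 2, 3):
        raise ValueError("grade and node_stage must be 1, 2, or 3")
    return 0.2 * size_cm + grade + node_stage


def npi_group(npi: float) -> str:
    # Conventional cut-points: <= 3.4 good, 3.41-5.4 moderate, > 5.4 poor.
    if npi <= 3.4:
        return "good"
    if npi <= 5.4:
        return "moderate"
    return "poor"
```

For example, a 2 cm, grade 2, node stage 1 tumour scores 0.2 × 2 + 2 + 1 = 3.4, falling in the good-prognosis group.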
The survey confirmed pathological nodal status, tumour grade, tumour size and ER status as the most clinically important factors for consideration when selecting women with early breast cancer for adjuvant systemic therapy in the UK. The protocols revealed that although UK cancer centres appear to be using the same prognostic and predictive factors when selecting women to receive adjuvant therapy, much variation in clinical practice exists. Some centres use protocols based upon the NPI whereas others do not use a single index score. Within NPI and non-NPI users, between-centre variability exists in guidelines for women for whom the benefits are uncertain. Consensus amongst units appears to be greatest when selecting women for adjuvant hormone therapy with the decision based primarily upon ER or progesterone receptor status rather than combinations of a number of factors. Guidelines as to who should receive adjuvant chemotherapy, however, were found to be much less uniform. Searches of the literature revealed only five published papers that had previously examined the cost-effectiveness of using prognostic information for clinical decision-making. These studies were of varying quality and highlight the fact that economic evaluation in this area appears still to be in its infancy. By combining methodologies used in determining prognosis with those used in health economic evaluation, it was possible to illustrate an approach for simulating the effectiveness (survival and quality-adjusted survival) and the cost-effectiveness associated with the decision to treat individual women or groups of women with different prognostic characteristics. The model showed that effectiveness and cost-effectiveness of adjuvant systemic therapy have the potential to vary substantially depending upon prognosis. For some women therapy may prove very effective and cost-effective, whereas for others it may actually prove detrimental (i.e. 
the reductions in health-related quality of life outweigh any survival benefit). Outputs from the framework constructed using the methods described here have the potential to be useful for clinicians, attempting to determine whether net benefits can be obtained from administering adjuvant therapy for any presenting woman; and also for policy makers, who must be able to determine the total costs and outcomes associated with different prognosis-based treatment protocols as compared with more conventional treat-all or treat-none policies. A risk table format enabling clinicians to look up a patient's prognostic factors to determine the likely benefits (survival and quality-adjusted survival) from administering therapy may be helpful. For policy makers, it was demonstrated that the model's output could be used to evaluate the cost-effectiveness of different treatment protocols based upon prognostic information. The framework should also be valuable in evaluating the likely impact and cost-effectiveness of new potential prognostic factors and adjuvant therapies.
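The decision framework described above weighs a prognosis-specific survival gain against treatment cost and quality-of-life decrements, and can yield a net harm when the quality-of-life loss outweighs the survival benefit. A minimal sketch of that trade-off follows; the survival, utility, and cost inputs are entirely hypothetical and are not taken from the Oxford dataset:

```python
def incremental_cost_per_qaly(
    survival_no_tx: float,  # expected life-years without adjuvant therapy (hypothetical)
    survival_tx: float,     # expected life-years with therapy (hypothetical)
    utility_no_tx: float,   # quality-of-life weight without therapy
    utility_tx: float,      # quality-of-life weight during/after therapy
    cost_tx: float,         # incremental cost of the therapy
) -> float:
    """Incremental cost-effectiveness ratio (cost per QALY gained).

    A non-positive QALY gain is the 'detrimental' case noted in the text:
    therapy costs more and yields no net quality-adjusted survival.
    """
    qaly_gain = survival_tx * utility_tx - survival_no_tx * utility_no_tx
    if qaly_gain <= 0:
        return float("inf")  # therapy dominated
    return cost_tx / qaly_gain
```

With hypothetical inputs of 10 vs. 11 life-years, utilities 0.80 vs. 0.78, and a treatment cost of 5,000, the gain is 11 × 0.78 − 10 × 0.80 = 0.58 QALYs, giving an ICER of roughly 8,600 per QALY; shrink the survival gain or the on-treatment utility and the same woman is harmed on balance.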
From Fig. 1 it may be seen that the effect of elevated temperature during the pyrexial period upon 1/K and therefore on the dissociation curve of oxyhemoglobin was, on the average, greater than would have been expected from experiments on normal blood in vitro, and greater than would be expected in view of the alkalosis occurring during fever. Temperature rise, and excess hydroxyl ion acting in vitro in the opposite directions, seemed to indicate a more stable state of affairs than was found. Apparently other factors have come into play, as, for example, alterations in the proportions and concentrations of the various electrolytes. In pneumonia, for instance, there is a retention of chloride during the febrile period with excessive loss of phosphates. The variations were not due to variations in the hemoglobin molecule itself since from the work of Adair, Barcroft, and Bock (18) hemoglobin must apparently be reckoned as having identical properties in normal individuals of the same species. If Barcroft's (19) hypothesis be right, namely that the C(H) within the corpuscle is higher than that of the plasma, the observed variations of 1/K may not be so surprising. In view of the fact that the hemoglobin inside the corpuscle is enclosed within a semipermeable membrane, the possibility arises of the setting up of membrane equilibria which will protect the respiratory pigment from excessive changes of reaction that may occur in the plasma, and thus the optimum conditions for the carriage of oxygen to the tissues may be maintained. Krogh and Leitch (10) in 1919 also drew attention to the protected situation of hemoglobin inside the corpuscle. In Case 6 it seems as if the alkalosis consequent on the febrile state had gained the upper hand and had extinguished the normal temperature reaction. This is rather confirmed by the fact that clinically the case showed one of the earlier signs of an alkalosis; namely, twitching of the facial muscles.
Case 10, who had been on salicylate, also showed an analogous effect, when 6 days after the first observation the temperature shift was practically nil. The relationship between pH, 1/K, and the febrile temperature still awaits investigation. The extent of the shift of the dissociation curve was not by any means uniform; in neither Fig. 1 nor Fig. 2 is the highest value of 1/K at the highest temperature recorded. Fig. 2 shows the effect of temperature rise upon 1/K after cessation of the pyrexia; the effect is not so marked. Some cases, however, showed a variation in excess of the normal as if there was not yet complete return to normal. Biologically these changes are of importance in that this shifting of the dissociation curve to the right in fever means that there is more oxygen available for the tissues than normally, more especially at higher pressures. The tension of unloading is raised. This, in addition to the accelerated circulation and the probable increased velocity of reaction, means that even in a localized area of inflammation, if there is increased temperature, the tissues are placed in a better position for resisting infection as a result of their better oxygenation. That there is increased metabolism during fever has been conclusively shown by Du Bois (20) and others using large bed calorimeters. Du Bois has shown that the increase in metabolism obeys van't Hoff's law, increasing 13 per cent for each 1 degrees C. rise. This shifting of the curve then falls into line with these observations as an adaptive response to the febrile condition, and the febrile temperature, if not too great, would seem to be a purposive attempt to aid the combating of infection. This shifting of the curve probably explains Uyeno's (21) observation on the effect of increased temperature on the circulation in the cat; namely, increased coefficient of utilization, and increased fall in the saturation of the mixed venous blood.
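Du Bois's figure of a 13 per cent rise in metabolism per 1 °C can be compounded exponentially in van't Hoff fashion. A short sketch of that arithmetic; the 13 per cent figure is from the text, while the example fever temperature is merely illustrative:

```python
def metabolic_rate_factor(temp_c: float, basal_temp_c: float = 37.0,
                          rise_per_degree: float = 0.13) -> float:
    """Relative metabolic rate under van't Hoff-style compounding:
    a 13% increase for each 1 degree C above the basal temperature."""
    return (1.0 + rise_per_degree) ** (temp_c - basal_temp_c)
```

A fever of 39.4 °C (103 °F) gives 1.13 ** 2.4 ≈ 1.34, i.e. roughly a third above basal metabolism.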
Turning now to Table II, we find that, if the CO(2) dissociation curve of Haldane (12) is accepted as normal, the bicarbonate reserve of five of the cases was above normal, of one normal, and of the rest below normal. Gastric secretion was not the cause of the varying curves, since the time of drawing the blood in all cases was during intestinal digestion. No observations having been made on the blood pH or the alveolar CO(2), we cannot be absolutely certain as to the actual reactions, more especially in the last cases; Case 6, however, was probably one showing a partially compensated CO(2) deficit in view of the absolute lowering of the total bicarbonate and evidence clinically of a tendency to alkalosis. Case 3, which had a lowered reserve, was probably similar. Koehler (22) gives a series of blood pH determinations in acute fevers in which ten out of twelve cases showed an uncompensated alkalosis when the temperature was 103 degrees F. (39.4 degrees C.) or over. Pemberton and Crouter (23) in a study on the response to the therapeutic application of external heat also observed a tendency for the reaction to shift to the alkaline side as shown by the alteration in the pH of the sweat. Hill and Flack (24) and Bazett and Haldane (25) observed that in thermal fever there was an excessive loss of CO(2) comparable to the effect of hyperpnea. These facts are of importance in view of the above results regarding the bicarbonate reserve. That there was a definite alkalosis in some of the cases is at least shown by the value of 1/K at 37.0 degrees C. during the pyrexial period in Cases 1, 2, and 7. The upper limits of 1/K at 37.0 degrees C. both during pyrexia and after were similar. Of other factors that might be considered, reference may be made to the work of Barbour and his associates (26-29), who showed that in hyperthermia and fever there is an alteration in the concentration of the blood. 
But the changes were hardly of such magnitude as to cause the variations above detailed.
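The interplay of bicarbonate reserve, CO2 loss, and plasma reaction discussed above is conventionally summarised by the Henderson-Hasselbalch relation. The sketch below uses the standard textbook constants (pK' = 6.1, CO2 solubility 0.03 mmol/L per mmHg), which are general-knowledge values rather than figures from this paper:

```python
import math


def plasma_ph(bicarbonate_mmol_l: float, pco2_mmhg: float) -> float:
    """Henderson-Hasselbalch: pH = pK' + log10([HCO3-] / (0.03 * pCO2))."""
    return 6.1 + math.log10(bicarbonate_mmol_l / (0.03 * pco2_mmhg))
```

Normal values (24 mmol/L bicarbonate, 40 mmHg pCO2) give pH ≈ 7.40; an excessive loss of CO2 with a preserved bicarbonate reserve, as in the hyperpnea-like thermal fever cases cited, shifts the reaction to the alkaline side.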
Chronic diseases (e.g., diabetes, cancer, heart disease, and stroke) are the leading causes of morbidity and mortality in the United States. Data on health risk behaviors that increase the risk for chronic diseases and use of preventive practices are essential for the development, implementation, and evaluation of health promotion programs, policies, and intervention strategies to decrease or prevent the leading causes of morbidity and mortality. Surveillance data from states and territories, selected metropolitan and micropolitan areas, and counties are vital components of these various prevention and intervention strategies. REPORTING PERIOD COVERED: January-December 2008. DESCRIPTION OF THE SYSTEM: The Behavioral Risk Factor Surveillance System (BRFSS) is an ongoing, state-based, random-digit-dialed telephone survey of noninstitutionalized adults residing in the United States. BRFSS collects data on health risk behaviors, preventive health services and practices, and access to health care related to the leading causes of death and disability in the United States. This report presents results for 2008 for all 50 states, the District of Columbia, Puerto Rico, Guam, the U.S. Virgin Islands, 177 metropolitan and micropolitan statistical areas (MMSAs), and 266 counties. In 2008, the estimated prevalence of high-risk behaviors, chronic diseases and conditions, screening practices, and use of preventive health-care services varied substantially by state and territory, MMSA, and county. The following is a summary of results listed by BRFSS question topic. Each set of proportions refers to the range of estimated prevalence for the disease, condition, or behavior as reported by the survey respondent. Adults reporting good or better health: 68% to 89% for states and territories and 69% to 93% for selected MMSAs and counties. Health care insurance coverage: 72% to 96% for states and territories, 61% to 97% for MMSAs, and 61% to 98% for counties.
Teeth extractions among persons aged ≥65 years: 10% to 38% for states and territories, 5% to 36% for MMSAs, and 4% to 34% for counties. Adults who had a checkup during the preceding 12 months: 56% to 81% for states and territories, 51% to 85% for MMSAs, and 51% to 89% for counties. Influenza vaccination among persons aged ≥65 years: 31% to 78% for states and territories, 52% to 82% for MMSAs, and 51% to 86% for counties. Pneumococcal vaccination among persons aged ≥65 years: 28% to 73% for states and territories, 46% to 82% for MMSAs, and 41% to 83% for counties. Adults aged ≥50 years who had a sigmoidoscopy/colonoscopy: 38% to 74% for states and territories, 45% to 78% for selected MMSAs, and 45% to 80% for counties. Adults aged ≥50 years who had a blood stool test during the preceding 2 years: 8% to 29% for states and territories, 7% to 51% for MMSAs, and 7% to 40% for counties. Women aged ≥18 years who had a Papanicolaou test during the preceding 3 years: 67% to 89% for states and territories, 66% to 93% for selected MMSAs, and 66% to 96% for counties. Women aged ≥40 years who had a mammogram during the preceding 2 years: 64% to 85% for states and territories, and 61% to 88% for MMSAs and counties. Men aged ≥40 years who had a Prostate-Specific Antigen (PSA) test during the preceding 2 years: 34% to 66% for states and territories, 39% to 70% for MMSAs, and 37% to 71% for counties. Current cigarette smoking among adults aged ≥18 years: 6% to 27% for states and territories, 5% to 31% for MMSAs, and 5% to 30% for counties. Adults who reported binge drinking during the preceding month: 8% to 23% for states and territories, 3% to 25% for selected MMSAs, and 3% to 26% for counties. Heavy drinking among adults during the preceding month: 3% to 8% for states and territories, <1% to 10% for MMSAs, and 1% to 11% for counties.
Adults who reported no leisure-time physical activity: 18% to 47% for states and territories, 12% to 40% for MMSAs, and 10% to 40% for selected counties. Adults who were overweight (BMI ≥25.0 and <30.0): 33% to 40% for states and territories, 31% to 46% for selected MMSAs, and 28% to 50% for counties. Adults aged ≥20 years who were obese (BMI ≥30.0): 20% to 34% for states and territories, 15% to 40% for MMSAs, and 13% to 40% for counties. Asthma among adults: 5% to 11% for states and territories, 4% to 13% for MMSAs, and 4% to 15% for counties. Diabetes among adults: 6% to 12% for states and territories, 3% to 17% for selected MMSAs, and 3% to 14% for counties. Adults aged ≥18 years who had limited activity because of physical, mental, or emotional problems: 10% to 30% for states and territories, 13% to 33% for MMSAs, and 12% to 31% for counties. Adults who required use of special equipment: 4% to 11% for states and territories, 3% to 12% for MMSAs, and 2% to 13% for counties. Angina and coronary heart disease among adults aged ≥45 years: 5% to 19% for states and territories, 6% to 22% for MMSAs, and 4% to 22% for counties. Adults aged ≥45 years with a history of stroke: 3% to 7% for states and territories, 2% to 11% for selected MMSAs, and 1% to 12% for counties. The findings in this report indicate substantial variation in health-risk behaviors, chronic diseases and conditions, and use of preventive health-care services among U.S. adults at the state and territory, MMSA, and county level. The findings underscore the continued need for surveillance of health-risk behaviors, chronic diseases and conditions, and the use of preventive health services. Healthy People 2010 objectives have been established to monitor health behaviors and the use of preventive health services. 
Local and state health departments and federal agencies use BRFSS data to identify populations at high risk for certain health behaviors, chronic diseases and conditions, and to evaluate the use of preventive services. In addition, BRFSS data are used to direct, implement, monitor, and evaluate public health programs and policies that can lead to a reduction in morbidity and mortality from adverse effects of health-risk behaviors and subsequent chronic conditions.
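Each range quoted above is simply the minimum-to-maximum estimated prevalence across the reporting areas (states and territories, MMSAs, or counties). A trivial sketch of that summary; the area names and values below are illustrative, not BRFSS data:

```python
def prevalence_range(estimates: dict[str, float]) -> tuple[float, float]:
    """Min-max prevalence (percent) across reporting areas."""
    values = estimates.values()
    return min(values), max(values)


# Hypothetical area-level smoking prevalences, for illustration only.
smoking = {"Area A": 6.2, "Area B": 18.9, "Area C": 27.1}
```

Here `prevalence_range(smoking)` yields (6.2, 27.1), which a BRFSS-style summary would report as "6% to 27%".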
To observe the efficacy of fenofibrate for hepatic steatosis in rats after severe burn. Twenty-seven male SD rats were divided into sham injury group, burn group, and burn+ fenofibrate group according to the random number table, with 9 rats in each group. Rats in sham injury group were sham injured on the back by immersing in 37 ℃ warm water for 15 s and then remained without other treatment. Rats in burn group and burn+ fenofibrate group were inflicted with 30% total body surface area full-thickness scald (hereinafter referred to as burn) on the back by immersing in 98 ℃ hot water for 15 s, and then they were intraperitoneally injected with lactated Ringer's solution at post injury hour (PIH) 1. From PIH 24 to post injury day (PID) 8, rats in burn+ fenofibrate group were treated with fenofibrate in the dose of 80 mg·kg(-1)·d(-1), while those in burn group were treated with equivalent volume of saline. (1) Three rats of each group were respectively selected on PID 4, 6, and 8 for the collection of inferior vena caval blood samples. Serum content of total cholesterol (TC), triglyceride (TG), free fatty acid (FFA), high density lipoprotein (HDL), and low density lipoprotein (LDL) was determined with fully automatic biochemical analyzer. Body mass of each rat was measured immediately after blood sampling, and then rats were sacrificed to collect liver tissue for weighing wet mass. The ratio of wet mass of liver tissue to body mass (liver index) was calculated. Meanwhile, gross observation of liver was performed. (2) One liver tissue sample was harvested from each rat at each time point to observe histopathologic changes with HE staining. One liver tissue slice of each rat at each time point was collected to evaluate degree of hepatic steatosis, and the number of rats in each group in each grade of hepatic steatosis was recorded. 
Measurement data were processed with analysis of variance of factorial design and SNK test, and enumeration data were processed with Kruskal-Wallis test and Nemenyi test. (1) The content of TC, TG, FFA, and HDL of rats in burn group on PID 4 was obviously different from that in sham injury group (with P values below 0.05). Compared with that in burn group, the content of TC, TG, and FFA of rats was significantly decreased (with P values below 0.05), while the content of HDL of rats was not obviously changed in burn+ fenofibrate group on PID 4 (P>0.05). There were no obvious differences in the content of LDL of rats among 3 groups on PID 4 (with P values above 0.05). The content of TC, TG, and HDL of rats in burn group on PID 6 was obviously different from that in sham injury group (with P values below 0.05). Compared with that in burn group, the content of TC and TG of rats was significantly decreased (with P values below 0.05), while the content of HDL of rats was significantly increased in burn+ fenofibrate group on PID 6 (P<0.05). There were no obvious differences in the content of FFA and LDL of rats among 3 groups on PID 6 (with P values above 0.05). The content of TC and HDL of rats in burn group on PID 8 was obviously different from that in sham injury group (with P values below 0.05). Compared with that in burn group, the content of TC of rats was significantly decreased (P<0.05), while the content of HDL of rats was not obviously changed in burn+ fenofibrate group on PID 8 (P>0.05). There were no obvious differences in content of TG, FFA, and LDL of rats among 3 groups on PID 8 (with P values above 0.05). (2) The texture of liver tissue of rats in burn+ fenofibrate group at each time point was tender and soft, without oil or fat on the section, which was close to the gross condition of liver of rats in sham injury group. 
Dark yellow plaques were scattered on the surface of the liver tissue of rats in burn group at each time point, with oil and fat on the section, which was especially obvious on PID 6. There was no obvious difference in liver index of rats among 3 groups on PID 4 (F=1.63, P>0.05). On PID 6 and 8, the liver indexes of rats in sham injury group, burn group, and burn+ fenofibrate group were 0.0416±0.0016, 0.0533±0.0054, and 0.0370±0.0069; 0.0423±0.0034, 0.0624±0.0005, and 0.0444±0.0042, respectively. The liver indexes of rats in burn group on PID 6 and 8 were significantly higher than those in the other two groups (with P values below 0.05). There were no obvious differences in the liver indexes of rats between burn+ fenofibrate group and sham injury group on PID 6 and 8 (with P values above 0.05). (3) The liver tissue structure of rats in sham injury group was normal at each time point. Hepatic steatosis of rats in burn group at each time point appeared microvesicular and disperse, and was especially obvious on PID 6. Mild hepatic steatosis was observed in rats of burn+ fenofibrate group on PID 4, and the structure of liver tissue gradually recovered to normal from PID 6 on. The degree of hepatic steatosis of rats in sham injury group was grade 0. One rat in grade I, 1 rat in grade II, and 7 rats in grade III were observed in hepatic steatosis of rats in burn group. Three rats in grade 0, 4 rats in grade I, and 2 rats in grade II were observed in hepatic steatosis of rats in burn+ fenofibrate group. The degree of hepatic steatosis of rats in burn group was more severe than that in the other two groups (with χ(2) values respectively 56.25 and 162.44, P values below 0.05). The degree of hepatic steatosis of rats in burn+ fenofibrate group was more severe than that in sham injury group (χ(2)=27.51, P<0.05). Fenofibrate can ameliorate the dyslipidemia of severely burned rats and can alleviate hepatic steatosis to a certain degree.
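The liver index reported above is defined in the methods as the ratio of wet liver mass to body mass. A one-line sketch of that calculation; the example masses are hypothetical, chosen only to land near the reported burn-group index of 0.0624:

```python
def liver_index(liver_wet_mass_g: float, body_mass_g: float) -> float:
    """Ratio of wet liver mass to body mass, as defined in the study methods."""
    if body_mass_g <= 0:
        raise ValueError("body mass must be positive")
    return liver_wet_mass_g / body_mass_g
```

A hypothetical 250 g rat with a 15.6 g liver would give 15.6 / 250 = 0.0624.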
Staphylococcus aureus (S. aureus) is the most common cause of surgical site infections, and the nose is the most common site for S. aureus colonization. Pre-surgical (in the days prior to surgery) nasal decolonization of S. aureus may reduce the bacterial load and prevent the organisms from being transferred to the surgical site, thus reducing the risk of surgical site infection. We conducted a health technology assessment of nasal decolonization of S. aureus (including methicillin-susceptible and methicillin-resistant strains) with or without topical antiseptic body wash to prevent surgical site infection in patients undergoing scheduled surgery, which included an evaluation of effectiveness, safety, cost-effectiveness, the budget impact of publicly funding nasal decolonization of S. aureus, and patient preferences and values. We performed a systematic literature search of the clinical evidence to retrieve systematic reviews and selected and reported results from one review that was recent, of high quality, and relevant to our research question. We complemented the chosen systematic review with a literature search to identify randomized controlled trials published since the systematic review was published in 2019. We used the Risk of Bias in Systematic Reviews (ROBIS) tool to assess the risk of bias of each included systematic review and the Cochrane risk-of-bias tool for randomized controlled trials to assess the risk of bias of each included primary study. We assessed the quality of the body of evidence according to the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) Working Group criteria. We performed a systematic economic literature search and conducted both cost-effectiveness and cost-utility analyses using a decision-tree model with a 1-year time horizon from the perspective of Ontario's Ministry of Health. We also analyzed the budget impact of publicly funding nasal decolonization of S. aureus in pre-surgical patients in Ontario. To contextualize the potential value of nasal decolonization, we spoke with people who had recently undergone surgery, some of whom had received nasal decolonization, and one family member of a person who had recently had surgery. We also engaged participants through an online survey. We included one systematic review and three randomized controlled trials in the clinical evidence review. In universal decolonization, compared with placebo or no intervention, nasal mupirocin alone may result in little to no difference in the incidence of overall and S. aureus-related surgical site infections in pre-surgical patients undergoing orthopaedic, cardiothoracic, general, oncologic, gynaecologic, neurologic, or abdominal digestive surgeries, regardless of S. aureus carrier status (GRADE: Moderate to Very low). Compared with placebo, nasal mupirocin alone may result in little to no difference in the incidence of overall and S. aureus-related surgical site infections in pre-surgical patients who are S. aureus carriers undergoing cardiothoracic, vascular, orthopaedic, gastrointestinal, general, oncologic, gynaecologic, or neurologic surgery (GRADE: Moderate to Very low). In targeted decolonization, compared with placebo, nasal mupirocin combined with chlorhexidine body wash lowers the incidence of S. aureus-related surgical site infection (risk ratio: 0.32 [95% confidence interval: 0.16-0.62]) in pre-surgical patients who are S. aureus carriers undergoing cardiothoracic, vascular, orthopaedic, gastrointestinal, or general surgery (GRADE: High). Compared with no intervention, nasal mupirocin combined with chlorhexidine body wash in pre-surgical patients who are not S. aureus carriers undergoing orthopaedic surgery may have little to no effect on overall surgical site infection, but the evidence is very uncertain (GRADE: Very low).
Most included studies did not separate methicillin-susceptible and methicillin-resistant strains of S. aureus. No significant antimicrobial resistance was identified in the evidence reviewed; however, the existing literature was not adequately powered and did not have sufficient follow-up time to evaluate antimicrobial resistance. Our economic evaluation found that universal nasal decolonization using mupirocin combined with chlorhexidine body wash is less costly and more effective than both targeted and no nasal decolonization. Compared with no nasal decolonization treatment, universal and targeted nasal decolonization using mupirocin combined with chlorhexidine body wash would prevent 32 and 22 S. aureus-related surgical site infections, respectively, per 10,000 patients. Universal nasal decolonization would lead to cost savings, whereas targeted nasal decolonization would increase the overall cost for the health care system since patients must first be screened for S. aureus carrier status before receiving nasal decolonization with mupirocin. The annual budget impact of publicly funding universal nasal decolonization in Ontario over the next 5 years ranges from a savings of $2.98 million in year 1 to a savings of $15.09 million in year 5. The annual budget impact of publicly funding targeted nasal decolonization ranges from an additional cost of $0.08 million in year 1 to an additional cost of $0.39 million in year 5. Our interview and survey respondents felt strongly about the value of preventing surgical site infections, and most favoured a universal approach. Based on the best evidence available, decolonization of S. aureus using nasal mupirocin combined with chlorhexidine body wash prior to cardiothoracic, vascular, orthopaedic, gastrointestinal, or general surgery lowers the incidence of surgical site infection caused by S. aureus in patients who are S. aureus carriers (including methicillin-susceptible and methicillin-resistant strains) (i.e., targeted decolonization). However, nasal mupirocin alone may result in little to no difference in overall surgical site infections and S. aureus-related surgical site infections in pre-surgical patients prior to orthopaedic, cardiothoracic, general, oncologic, gynaecologic, neurologic, or abdominal digestive surgeries, regardless of their S. aureus carrier status (i.e., universal decolonization). No significant antimicrobial resistance was identified in the evidence reviewed. Compared with no nasal decolonization treatment, universal nasal decolonization with mupirocin combined with chlorhexidine body wash may reduce S. aureus-related surgical site infections and lead to cost savings. Targeted nasal decolonization with mupirocin combined with chlorhexidine body wash may also reduce S. aureus-related surgical site infections but increase the overall cost of treatment for the health care system. We estimate that publicly funding universal nasal decolonization using mupirocin combined with chlorhexidine body wash would result in a total cost savings of $45.08 million over the next 5 years, whereas publicly funding targeted nasal decolonization using mupirocin combined with chlorhexidine body wash would incur an additional cost of $1.17 million over the next 5 years. People undergoing surgery value treatments aimed at preventing surgical site infections.
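Figures such as "22 to 32 infections prevented per 10,000 patients" follow from applying a risk ratio to a baseline infection risk. A minimal sketch of that arithmetic, using the reported risk ratio of 0.32 but a purely hypothetical baseline S. aureus surgical site infection risk (the report's actual baseline inputs are not given here):

```python
def infections_prevented_per_10k(baseline_risk: float, risk_ratio: float) -> float:
    """Absolute risk reduction scaled to 10,000 treated patients."""
    return 10_000 * baseline_risk * (1.0 - risk_ratio)


def number_needed_to_treat(baseline_risk: float, risk_ratio: float) -> float:
    """Patients to treat to prevent one infection (reciprocal of the ARR)."""
    return 1.0 / (baseline_risk * (1.0 - risk_ratio))
```

With the reported risk ratio of 0.32 and a hypothetical baseline risk of 0.5%, roughly 10,000 × 0.005 × 0.68 = 34 infections are prevented per 10,000 patients, in the same ballpark as the report's modelled figures.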
In otitis media with effusion (OME), hearing loss is a core sign/symptom and basis of concern, with absolute pure-tone threshold sensitivity (in dB HL) by air-conduction providing the default measure of hearing. However several fundamental problems limiting the value of HL measures in otitis media are insufficiently appreciated. To appraise the joint value and implications of multiple hearing measures towards more comprehensive hearing assessment in OM, we examine in two related articles the interrelations and common or diverging determinants of three measures, two of them objective: binaural HL, and ACET (the published quasi-continuous scaling of binaural tympanometry to HL). The third measure is partly subjective: parentally reported hearing difficulties (RHD-4); this is the precision-scored total of the 4 items selected for the OM8-30 general purpose questionnaire for parents in OM. The Eurotitis-2 study (Total N=2886) internationally standardises OM8-30 and its OMQ-14 short form. The clinical and parent-response variables acquired cover many issues in diagnosis, symptomatology and impact of OM. Data acquisition was built upon routine clinic practice, enabling us also to document some properties of that practice, such as patterns of missing HL data. To address possible confounding or loss of representativeness from this, we investigated the implications of substituting tympanometry-based ACET for missing HL to give an HL/ACET hybrid. ACET is the mapping of categorical tympanometry to continuous HL. We simulated degrees of artificial missingness of HL up to 35% on the 1430 complete-data cases, using random deletion, with 1000-version bootstrapping. Correlations of this HL/ACET hybrid with pure (100%) HL then documented the degree of correlation retained under dilution of HL by an admixture of ACET; we also documented distribution shapes. 
For RHD-4, we then probed the determining influences on severity of score as an auditory disability measure, both background ones (from centre, age, sex, socio-economic status, length of history, diagnosis and season) and the two underlying objective hearing measures (HL, ACET). We ran these multiple regressions (GLMs), for representativeness and generality, both on the 1430 complete-data cases (i.e. all 3 hearing variables present) and also on supplemented samples according to the data required only for particular analyses (N increased by +56% to +68%). A further method of sample supplementation (by up to +96%) used the HL/ACET hybrid. Sex made a negligible difference in any analysis. The particular collaborating centre, age, season and diagnosis collectively influenced the presence/absence of HL data very strongly (area under the ROC curve 0.944). Socio-economic status did not influence HL presence; surprisingly, nor did RHD, ACET or length of history, after control for centre, age, diagnosis and season. Of the inter-correlations between hearing measures, only the one between ACET and RHD was influenced (slightly reduced) by the inclusion of cases without HL data. In the simulated substitutions, the Pearson correlation of the hybrid HL/ACET with true HL remained above 0.90 when ACET was substituted for up to a 30% rate of artificially 'missing' HL. Centre differences were adequately summarised by simple absolute additive differences in mean local case severity. In the determinant models for RHD on the 1430 complete-data cases, HL and the set of background determinants collectively explained broadly similar proportions of RHD's variability, totalling 36.8% explained. On the larger maximum case samples, slightly less absolute variability was explicable than on complete-case data, but the relative magnitudes of contribution from individual determinants, both background and hearing measures, remained similar.
The expected mean differences in RHD between diagnoses (RAOM, OME, and combined) were found, but the patterns of background and objective measure influences determining RHD did not differ significantly between the diagnoses. (1) In the Eurotitis-2 database, descriptive differences in various background demographic and clinical measures between cases with and without HL data were of material magnitude only for length of history and reported hearing difficulties. Such descriptive differences are not necessarily bases of confounding, so using our framework of 6 background adjuster variables (particular collaborating centre, age, season, diagnosis, socio-economic status and length of history), we isolated the determinants of HL data presence. The first four listed strongly predicted HL data presence/absence, so they are sufficient to control analyses well for any bias or confounding by HL data presence. (2) Diagnoses of OME and combined (OME+RAOM) had a higher probability of HL data being present relative to RAOM, indicating that HL acquisition is chiefly seen as confirming and quantifying hearing loss in (suspect) OME, not as ruling it out (e.g. in suspected RAOM). Given this, also using RHD and/or ACET as pre-triage to efficiently target capacity and/or reduce the costs and opportunity costs of acquiring HL would be rational, but there was no evidence of such precise use of initial hearing-related information to decide on HL acquisition. (3) The full six background variables explained variance in reported hearing difficulties (RHD) comparable to that explained by ACET, but not quite as much as by HL. Achieving a high percentage explained (32-37% from good models) required both classes of determinant to be entered as predictors. The pattern of background determining influences for RHD was largely stable, with or without objective measures as additional predictors, and on maximum or complete-data cases.
Length of history strongly determines RHD for a given concurrent HL. (4) Accepting ACET as a substitute where HL was missing in OM cases gave a sample-size enhancement of 17% in Eurotitis-2, with negligible difference in the pattern of determinants. This hybrid measure can be recommended as a reasonable next-best when moderate percentages of HL data are missing. (5) The stable pattern of prediction of RHD suggests that our six background determinants provide a very promising low-cost yet comprehensive framework for determination. It hence offers pluripotent statistical adjustment against confounding, applicable to RAOM, OME and combined diagnoses in any analysis using this database. Claims that it thereby offers a sufficient framework for full European standardisation of all the scores from the OM8-30 questionnaire measures await parallel demonstrations for symptom areas other than RHD. As 25% of the variance in RHD severity can be explained by the six adjusters in our framework, none of the six variables should be omitted from acquisition and analytic use in future OM research.
Between October and November 2003, several infants with encephalopathy were hospitalized in pediatric intensive care units in Israel. Two died of cardiomyopathy. Analysis of the accumulated data showed that all had been fed the same brand of soy-based formula (Remedia Super Soya 1), specifically manufactured for the Israeli market. The source was identified on November 6, 2003, when a 5.5-month-old infant was admitted to Sourasky Medical Center with upbeat nystagmus, ophthalmoplegia, and vomiting. Wernicke's encephalopathy was suspected, and treatment with supplementary thiamine was started. His condition improved within hours. Detailed history revealed that the infant was being fed the same formula, raising suspicions that it was deficient in thiamine. The formula was tested by the Israeli public health authorities, and the thiamine level was found to be undetectable (<0.5 microg/g). The product was pulled from the shelves, and the public was alerted. Thiamine deficiency in infants is very rare in developed countries. The aim of this study was to report the epidemiology of the outbreak and to describe the diagnosis, clinical course, and outcome of 9 affected infants in our care. After the index case, an additional 8 infants were identified in our centers by medical history, physical examination, and laboratory testing. The group consisted of 6 male and 3 female infants aged 2 to 12 months. All were assessed with the erythrocyte transketolase activity assay, wherein the extent of thiamine deficiency is expressed in percentage stimulation compared with baseline (thiamine pyrophosphate effect [TPPE]). Normal values range from 0% to 15%; a value of 15% to 25% indicates thiamine deficiency, and >25% indicates severe deficiency. Blood lactate levels (normal: 0.5-2 mmol/L) were measured in 6 infants, cerebrospinal fluid lactate in 2 (normal: 0.5-2 mmol/L), and blood pyruvate in 4 (normal: 0.03-0.08 mmol/L). 
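The TPPE reference bands quoted above amount to a simple threshold classification, sketched below. The function name and labels are illustrative, not from the study, and the handling of a value of exactly 15% is a convention chosen here because the quoted ranges overlap at that boundary.

```python
def classify_tppe(tppe_percent):
    """Classify erythrocyte transketolase stimulation (TPPE, %) using the
    reference bands quoted in the text: 0-15% normal, 15-25% thiamine
    deficiency, >25% severe deficiency. Exactly 15% is treated as normal
    here, an assumed convention since the quoted bands overlap at 15%."""
    if tppe_percent < 0:
        raise ValueError("TPPE cannot be negative")
    if tppe_percent <= 15:
        return "normal"
    if tppe_percent <= 25:
        return "thiamine deficiency"
    return "severe thiamine deficiency"
```

Applied to values reported later in this abstract, a TPPE of 37.8% (the index case) falls in the severe band, and 17.6% (measured after thiamine treatment had started) remains in the deficient band.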
The diagnostic criteria for thiamine deficiency were abnormal transketolase activity and/or unexplained lactic acidosis. Treatment consisted of intramuscular thiamine 50 mg/day for 14 days combined with a switch to another infant formula. Early symptoms were nonspecific and included mainly vomiting (n = 8), lethargy (n = 7), irritability (n = 5), abdominal distension (n = 4), diarrhea (n = 4), respiratory symptoms (n = 4), developmental delay (n = 3), and failure to thrive (n = 2). Infection was found in all cases. Six infants were admitted with fever. One patient had clinical dysentery and group C Salmonella sepsis; the others had mild infection: acute gastroenteritis (n = 2); upper respiratory infection (n = 2); and bronchopneumonia, acute bronchitis, and viral infection (n = 1 each). Two infants were treated with antibiotics. Three infants had neurologic symptoms of ophthalmoplegia with bilateral abduction deficit with or without upbeat nystagmus. All 3 had blood lactic acidosis, and 2 had high cerebrospinal fluid lactate levels. Patient 1, our index case, was hospitalized for upbeat nystagmus and ophthalmoplegia, in addition to daily vomiting episodes since 4 months of age and weight loss of 0.5 kg. Findings on brain computed tomography were normal. Blood lactate levels were high, and TPPE was 37.8%. Brain magnetic resonance imaging (MRI) revealed no abnormalities. Patient 2, who presented at 5 months with lethargy, vomiting, grunting, and abdominal tenderness, was found to have intussusception on abdominal ultrasound and underwent 2 attempts at reduction with air enema several hours apart. However, the lethargy failed to resolve and ophthalmoplegia appeared the next day, leading to suspicions of Wernicke's encephalopathy. Laboratory tests showed severe thiamine deficiency (TPPE 31.2%). In patients 1 and 2, treatment led to complete resolution of symptoms. 
The third infant, a 5-month-old girl, was admitted on October 10, 2003, well before the outbreak was recognized, with vomiting, fever, and ophthalmoplegia. Her condition deteriorated to seizures, apnea, and coma. Brain MRI showed a bilateral symmetrical hyperintense signal in the basal ganglia, mamillary bodies, and periaqueductal gray matter. Suspecting a metabolic disease, vitamins were added to the intravenous solution, including thiamine 250 mg twice a day. Clinical improvement was noted 1 day later. TPPE assay performed after treatment with thiamine was started was still abnormal (17.6%). Her formula was substituted after 4 weeks, after the announcement about the thiamine deficiency. Although the MRI findings improved 5 weeks later, the infant had sequelae of ophthalmoplegia and motor abnormalities and is currently receiving physiotherapy. All 3 patients with neurologic manifestations were fed exclusively with the soy-based formula for 2 to 3.5 months, whereas the others had received solid food supplements. Longer administration of the formula (ie, chronic thiamine deficiency) was associated with failure to thrive. For example, one 12-month-old girl who received the defective formula for 8 months presented with refusal to eat, vomiting, failure to thrive (75th to <5th percentile), hypotonia, weakness, and motor delay. Extensive workup was negative for malabsorption and immunodeficiency. On admission, the patient had Salmonella gastroenteritis and sepsis and was treated with antibiotics. After thiamine deficiency was diagnosed, she received large doses of thiamine (50 mg/day) for 2 weeks. Like the other 5 infants without neurologic involvement, her clinical signs and symptoms disappeared completely within 2 to 3 weeks of treatment, and TPPE levels normalized within 1 to 7 days. There were no side effects. As part of its investigation, the Israel Ministry of Health screened 156 infants who were fed the soy-based formula for thiamine deficiency. 
However, by that time, most were already being fed alternative formulas and had begun oral thiamine treatment. Abnormal TPPE results (>15%) were noted in 8 infants, 3 male and 5 female, all >1 year old, who were receiving solid food supplements. Although their parents failed to notice any symptoms, irritability, lethargy, vomiting, anorexia, failure to thrive, and developmental delay were documented by the examining physicians. None had signs of neurologic involvement. Treatment consisted of oral thiamine supplements for 2 weeks. Clinician awareness of the possibility of thiamine deficiency even in well-nourished infants is important for early recognition and prevention of irreversible brain damage. Therapy with large doses of thiamine should be initiated at the earliest suspicion of vitamin depletion, even before laboratory evidence is available and before neurologic or cardiologic symptoms appear.