Protein (nutrient)
Proteins are essential nutrients for the human body. They are one of the building blocks of body tissue and can also serve as a fuel source. As a fuel, proteins provide as much energy density as carbohydrates: 4 kcal (17 kJ) per gram; in contrast, lipids provide 9 kcal (37 kJ) per gram. The most important aspect and defining characteristic of protein from a nutritional standpoint is its amino acid composition. Proteins are polymer chains made of amino acids linked together by peptide bonds. During human digestion, proteins are broken down in the stomach to smaller polypeptide chains via hydrochloric acid and protease actions. This is crucial for the absorption of the essential amino acids that cannot be biosynthesized by the body. There are nine essential amino acids which humans must obtain from their diet in order to prevent protein-energy malnutrition and resulting death. They are phenylalanine, valine, threonine, tryptophan, methionine, leucine, isoleucine, lysine, and histidine. There has been debate as to whether there are 8 or 9 essential amino acids. The consensus seems to lean towards 9 since histidine is not synthesized in adults. There are five amino acids which humans are able to synthesize in the body. These five are alanine, aspartic acid, asparagine, glutamic acid and serine. There are six conditionally essential amino acids whose synthesis can be limited under special pathophysiological conditions, such as prematurity in the infant or individuals in severe catabolic distress. These six are arginine, cysteine, glycine, glutamine, proline and tyrosine. Dietary sources of protein include grains, legumes, nuts, seeds, meats, dairy products, fish, eggs, edible insects, and seaweeds. Protein functions in human body Protein is a nutrient needed by the human body for growth and maintenance. Aside from water, proteins are the most abundant kind of molecules in the body. Protein can be found in all cells of the body and is the major structural component of all cells in the body, especially muscle. This also includes body organs, hair and skin. Proteins are also used in membranes, such as glycoproteins. When broken down into amino acids, they are used as precursors to nucleic acid, co-enzymes, hormones, immune response, cellular repair, and other molecules essential for life. Additionally, protein is needed to form blood cells. Sources Protein occurs in a wide range of food. On a worldwide basis, plant protein foods contribute over 60% of the per capita supply of protein. In North America, animal-derived foods contribute about 70% of protein sources. Insects are a source of protein in many parts of the world. In parts of Africa, up to 50% of dietary protein derives from insects. It is estimated that more than 2 billion people eat insects daily. Meat, dairy, eggs, soybeans, fish, whole grains, and cereals are sources of protein. Examples of food staples and cereal sources of protein, each with a concentration greater than 7%, are (in no particular order) buckwheat, oats, rye, millet, maize (corn), rice, wheat, sorghum, amaranth, and quinoa. Game meat is an affordable protein source in some countries. Plant sources of proteins include legumes, nuts, seeds, grains, and some vegetables and fruits. 
Plant foods with protein concentrations greater than 7% include (but are not limited to) soybeans, lentils, kidney beans, white beans, mung beans, chickpeas, cowpeas, lima beans, pigeon peas, lupines, wing beans, almonds, Brazil nuts, cashews, pecans, walnuts, cotton seeds, pumpkin seeds, hemp seeds, sesame seeds, and sunflower seeds. Photovoltaic-driven microbial protein production uses electricity from solar panels and carbon dioxide from the air to create fuel for microbes, which are grown in bioreactor vats and then processed into dry protein powders. The process makes highly efficient use of land, water and fertiliser. People eating a balanced diet do not need protein supplements. (A table in the source article compares food groups as protein sources, colour-coding the source with the highest and the lowest density of each amino acid.) Protein powders – such as casein, whey, egg, rice, soy and cricket flour – are processed and manufactured sources of protein. Testing in foods The classic assays for protein concentration in food are the Kjeldahl method and the Dumas method. These tests determine the total nitrogen in a sample. The only major component of most food which contains nitrogen is protein (fat, carbohydrate and dietary fiber do not contain nitrogen). If the amount of nitrogen is multiplied by a factor depending on the kinds of protein expected in the food, the total protein can be determined. This value is known as the "crude protein" content. The use of correct conversion factors is heavily debated, specifically with the introduction of more plant-derived protein products. However, on food labels the protein content is calculated as the nitrogen multiplied by 6.25, because the average nitrogen content of proteins is about 16%. The Kjeldahl test is typically used because it is the method the AOAC International has adopted and is therefore used by many food standards agencies around the world, though the Dumas method is also approved by some standards organizations. Accidental contamination and intentional adulteration of protein meals with non-protein nitrogen sources that inflate crude protein content measurements have been known to occur in the food industry for decades. To ensure food quality, purchasers of protein meals routinely conduct quality control tests designed to detect the most common non-protein nitrogen contaminants, such as urea and ammonium nitrate. In at least one segment of the food industry, the dairy industry, some countries (at least the U.S., Australia, France and Hungary) have adopted "true protein" measurement, as opposed to crude protein measurement, as the standard for payment and testing: "True protein is a measure of only the proteins in milk, whereas crude protein is a measure of all sources of nitrogen and includes nonprotein nitrogen, such as urea, which has no food value to humans. ... Current milk-testing equipment measures peptide bonds, a direct measure of true protein." Measuring peptide bonds in grains has also been put into practice in several countries including Canada, the UK, Australia, Russia and Argentina, where near-infrared reflectance (NIR) technology, a type of infrared spectroscopy, is used.
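To make the nitrogen-to-protein arithmetic concrete, the sketch below applies the food-label convention described above (crude protein = total nitrogen × 6.25, reflecting an average nitrogen content of about 16%). This is an illustrative Python snippet, not part of any testing standard; the alternative factor shown for a hypothetical food class is an assumption included only to show why the choice of conversion factor is debated.

    def crude_protein(nitrogen_grams, conversion_factor=6.25):
        """Estimate crude protein from total nitrogen (a Kjeldahl or Dumas result).

        The default factor 6.25 is the food-label convention, i.e. 1 / 0.16,
        since proteins contain roughly 16% nitrogen by mass.
        """
        return nitrogen_grams * conversion_factor

    # Example: a sample containing 2.0 g of total nitrogen
    print(crude_protein(2.0))        # 12.5 g "crude protein" using the label factor
    # A different (hypothetical) factor for a specific food class changes the result,
    # which is why the use of a single universal factor is debated:
    print(crude_protein(2.0, 5.7))   # 11.4 g with a lower, food-specific factor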
The Food and Agriculture Organization of the United Nations (FAO) recommends that only amino acid analysis be used to determine protein in, inter alia, foods used as the sole source of nourishment, such as infant formula, but also provides: "When data on amino acids analyses are not available, determination of protein based on total N content by Kjeldahl (AOAC, 2000) or similar method ... is considered acceptable." The testing method for protein in beef cattle feed has grown into a science over the post-war years. The standard text in the United States, Nutrient Requirements of Beef Cattle, has been through eight editions over at least seventy years. The 1996 sixth edition replaced the fifth edition's crude protein with the concept of "metabolizable protein", which was defined around the year 2000 as "the true protein absorbed by the intestine, supplied by microbial protein and undegraded intake protein". The limitations of the Kjeldahl method were at the heart of the Chinese protein export contamination in 2007 and the 2008 China milk scandal in which the industrial chemical melamine was added to the milk or glutens to increase the measured "protein". Protein quality The most important aspect and defining characteristic of protein from a nutritional standpoint is its amino acid composition. There are multiple systems which rate proteins by their usefulness to an organism based on their relative percentage of amino acids and, in some systems, the digestibility of the protein source. They include biological value, net protein utilization, and PDCAAS (Protein Digestibility Corrected Amino Acid Score), which was developed by the FDA as a modification of the protein efficiency ratio (PER) method. The PDCAAS rating was adopted by the US Food and Drug Administration (FDA) and the Food and Agriculture Organization of the United Nations/World Health Organization (FAO/WHO) in 1993 as "the preferred 'best'" method to determine protein quality. These organizations have suggested that other methods for evaluating the quality of protein are inferior. In 2013, the FAO proposed changing to the Digestible Indispensable Amino Acid Score. Digestion Most proteins are decomposed to single amino acids by digestion in the gastro-intestinal tract. Digestion typically begins in the stomach when pepsinogen is converted to pepsin by the action of hydrochloric acid, and is continued by trypsin and chymotrypsin in the small intestine. Before absorption in the small intestine, most proteins are already reduced to single amino acids or peptides of several amino acids. Most peptides longer than four amino acids are not absorbed. Absorption into the intestinal absorptive cells is not the end of digestion; there, most of the peptides are broken into single amino acids. Absorption of the amino acids and their derivatives into which dietary protein is degraded is done by the gastrointestinal tract. The absorption rates of individual amino acids are highly dependent on the protein source; for example, the digestibilities of many amino acids in humans differ between soy and milk proteins and between individual milk proteins such as beta-lactoglobulin and casein. For milk proteins, about 50% of the ingested protein is absorbed between the stomach and the jejunum and 90% is absorbed by the time the digested food reaches the ileum. Biological value (BV) is a measure of the proportion of absorbed protein from a food which becomes incorporated into the proteins of the organism's body.
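The PDCAAS rating is named above but its calculation is not spelled out in this article. As a rough illustration, the commonly described form of the score takes the ratio of the most limiting essential amino acid in the test protein to that amino acid in a reference requirement pattern, multiplies by the protein's true fecal digestibility, and truncates the result at 1.0. The amino acid values below are made-up numbers used only to show the arithmetic, not real measurements.

    def pdcaas(test_mg_per_g, reference_mg_per_g, digestibility):
        """Illustrative Protein Digestibility Corrected Amino Acid Score.

        test_mg_per_g / reference_mg_per_g: mg of each essential amino acid per g
        of protein in the test food and in the reference requirement pattern.
        digestibility: true fecal protein digestibility as a fraction (0-1).
        """
        # The amino acid score is set by the most limiting essential amino acid.
        limiting = min(test_mg_per_g[aa] / reference_mg_per_g[aa]
                       for aa in reference_mg_per_g)
        # Correct for digestibility and truncate at 1.0, as the scheme describes.
        return min(1.0, limiting * digestibility)

    # Hypothetical example values (not real measurements):
    reference = {"lysine": 58, "threonine": 34, "tryptophan": 11}
    test_food = {"lysine": 45, "threonine": 40, "tryptophan": 12}
    print(round(pdcaas(test_food, reference, digestibility=0.90), 2))  # ~0.70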
Newborn Newborns of mammals are exceptional in protein digestion and assimilation in that they can absorb intact proteins at the small intestine. This enables passive immunity, i.e., transfer of immunoglobulins from the mother to the newborn, via milk. Dietary requirements Considerable debate has taken place regarding issues surrounding protein intake requirements. The amount of protein required in a person's diet is determined in large part by overall energy intake, the body's need for nitrogen and essential amino acids, body weight and composition, rate of growth in the individual, physical activity level, the individual's energy and carbohydrate intake, and the presence of illness or injury. Physical activity and exertion as well as enhanced muscular mass increase the need for protein. Requirements are also greater during childhood for growth and development, during pregnancy, or when breastfeeding in order to nourish a baby or when the body needs to recover from malnutrition or trauma or after an operation. Dietary recommendations According to US & Canadian Dietary Reference Intake guidelines, women ages 19–70 need to consume 46 grams of protein per day while men ages 19–70 need to consume 56 grams of protein per day to minimize risk of deficiencies. These Recommended Dietary Allowances (RDAs) were calculated based on 0.8 grams protein per kilogram body weight and average body weights of 57 kg (126 pounds) and 70 kg (154 pounds), respectively. However, this recommendation is based on structural requirements but disregards use of protein for energy metabolism. This requirement is for a normal sedentary person. In the United States, average protein consumption is higher than the RDA. According to results of the National Health and Nutrition Examination Survey (NHANES 2013–2014), average protein consumption for women ages 20 and older was 69.8 grams and for men 98.3 grams/day. According to research from Harvard University, the National Academy of Medicine suggests that adults should consume at least 0.8 grams of protein per kilogram of body weight daily, which is roughly equivalent to a little more than 7 grams for every 20 pounds of body weight. This recommendation is widely accepted by health professionals as a guideline for maintaining muscle mass, supporting metabolic functions, and promoting overall health. Active people Several studies have concluded that active people and athletes may require elevated protein intake (compared to 0.8 g/kg) due to increase in muscle mass and sweat losses, as well as need for body repair and energy source. Suggested amounts vary from 1.2 to 1.4 g/kg for those doing endurance exercise to as much as 1.6-1.8 g/kg for strength exercise and up to 2.0 g/kg/day for older people, while a proposed maximum daily protein intake would be approximately 25% of energy requirements i.e. approximately 2 to 2.5 g/kg. However, many questions still remain to be resolved. In addition, some have suggested that athletes using restricted-calorie diets for weight loss should further increase their protein consumption, possibly to 1.8–2.0 g/kg, in order to avoid loss of lean muscle mass. Aerobic exercise protein needs Endurance athletes differ from strength-building athletes in that endurance athletes do not build as much muscle mass from training as strength-building athletes do. Research suggests that individuals performing endurance activity require more protein intake than sedentary individuals so that muscles broken down during endurance workouts can be repaired. 
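To make the recommendations above concrete, here is a small worked example combining the 0.8 g/kg RDA with the activity-adjusted ranges quoted for endurance and strength exercise. The grouping of the ranges into a dictionary is purely illustrative; only the numbers come from the text.

    # Daily protein intake guidelines quoted above, in grams per kg of body weight.
    PROTEIN_G_PER_KG = {
        "sedentary (RDA)": (0.8, 0.8),
        "endurance exercise": (1.2, 1.4),
        "strength exercise": (1.6, 1.8),
    }

    def daily_protein_grams(weight_kg, activity="sedentary (RDA)"):
        """Return the (low, high) daily protein target in grams for a body weight."""
        low, high = PROTEIN_G_PER_KG[activity]
        return low * weight_kg, high * weight_kg

    print(daily_protein_grams(57))                       # (45.6, 45.6) -> the ~46 g RDA
    print(daily_protein_grams(70))                       # (56.0, 56.0) -> the 56 g RDA
    print(daily_protein_grams(70, "strength exercise"))  # (112.0, 126.0) g/day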
Although the protein requirement for athletes still remains controversial (for instance see Lamont, Nutrition Research Reviews, pages 142–149, 2012), research does show that endurance athletes can benefit from increasing protein intake because the type of exercise they participate in alters the protein metabolism pathway. The overall protein requirement increases because of amino acid oxidation in endurance-trained athletes. Endurance athletes who exercise over a long period (2–5 hours per training session) use protein as a source of 5–10% of their total energy expended. Therefore, a slight increase in protein intake may be beneficial to endurance athletes by replacing the protein lost in energy expenditure and protein lost in repairing muscles. One review concluded that endurance athletes may increase daily protein intake to a maximum of 1.2–1.4 g per kg body weight. Anaerobic exercise protein needs Research also indicates that individuals performing strength training activity require more protein than sedentary individuals. Strength-training athletes may increase their daily protein intake to a maximum of 1.4–1.8 g per kg body weight to enhance muscle protein synthesis, or to make up for amino acids lost to oxidation during exercise. Many athletes maintain a high-protein diet as part of their training. In fact, some athletes who specialize in anaerobic sports (e.g., weightlifting) believe a very high level of protein intake is necessary, and so consume high-protein meals and also protein supplements. Special populations Protein allergies A food allergy is an abnormal immune response to proteins in food. The signs and symptoms may range from mild to severe. They may include itchiness, swelling of the tongue, vomiting, diarrhea, hives, trouble breathing, or low blood pressure. These symptoms typically occur within minutes to one hour after exposure. When the symptoms are severe, it is known as anaphylaxis. The following eight foods are responsible for about 90% of allergic reactions: cow's milk, eggs, wheat, shellfish, fish, peanuts, tree nuts, and soy. Chronic kidney disease While there is no conclusive evidence that a high protein diet can cause chronic kidney disease, there is a consensus that people with this disease should decrease consumption of protein. According to one 2009 review updated in 2018, people with chronic kidney disease who reduce protein consumption are less likely to progress to end-stage kidney disease. Moreover, people with this disease who use a low-protein diet (0.6–0.8 g/kg/day) may develop metabolic compensations that preserve kidney function, although in some people malnutrition may occur. Phenylketonuria Individuals with phenylketonuria (PKU) must keep their intake of phenylalanine, an essential amino acid, extremely low to prevent intellectual disability and other metabolic complications. Phenylalanine is a component of the artificial sweetener aspartame, so people with PKU need to avoid low-calorie beverages and foods with this ingredient. Excess consumption The U.S. and Canadian Dietary Reference Intake review for protein concluded that there was not sufficient evidence to establish a tolerable upper intake level, i.e., an upper limit for how much protein can be safely consumed. When amino acids are in excess of needs, the liver takes up the amino acids and deaminates them, a process that converts the nitrogen from the amino acids into ammonia, which is further processed in the liver into urea via the urea cycle.
Excretion of urea occurs via the kidneys. Other parts of the amino acid molecules can be converted into glucose and used for fuel. When food protein intake is periodically high or low, the body tries to keep protein levels at an equilibrium by using the "labile protein reserve" to compensate for daily variations in protein intake. However, unlike body fat as a reserve for future caloric needs, there is no protein storage for future needs. Excessive protein intake may increase calcium excretion in urine, which occurs to compensate for the pH imbalance from oxidation of sulfur amino acids. This may lead to a higher risk of kidney stone formation from calcium in the renal circulatory system. One meta-analysis reported no adverse effects of higher protein intakes on bone density. Another meta-analysis reported a small decrease in systolic and diastolic blood pressure with diets higher in protein, with no differences between animal and plant protein. In a meta-analysis, high-protein diets were shown to lead to an additional 1.21 kg of weight loss over a period of 3 months versus a baseline-protein diet. Benefits of decreased body mass index as well as HDL cholesterol were more strongly observed in studies with only a slight increase in protein intake rather than in studies where high protein intake was classified as 45% of total energy intake. Detrimental effects to cardiovascular activity were not observed in short-term diets of 6 months or less. There is little consensus on the potentially detrimental effects to healthy individuals of a long-term high protein diet, leading to caution advisories about using high protein intake as a means of weight loss. The 2015–2020 Dietary Guidelines for Americans (DGA) recommends that men and teenage boys increase their consumption of fruits, vegetables and other under-consumed foods, and that a means of accomplishing this would be to reduce overall intake of protein foods. The 2015–2020 DGA report does not set a recommended limit for the intake of red and processed meat. While the report acknowledges research showing that lower intake of red and processed meat is correlated with reduced risk of cardiovascular diseases in adults, it also notes the value of nutrients provided from these meats. The recommendation is not to limit intake of meats or protein, but rather to monitor and keep within daily limits the sodium (< 2300 mg), saturated fats (less than 10% of total calories per day), and added sugars (less than 10% of total calories per day) that may be increased as a result of consumption of certain meats and proteins. While the 2015 DGA report does advise a reduced level of consumption of red and processed meats, the 2015–2020 DGA key recommendations recommend that a variety of protein foods be consumed, including both vegetarian and non-vegetarian sources of protein. Protein deficiency Protein deficiency and malnutrition (PEM) can lead to a variety of ailments, including intellectual disability and kwashiorkor. Symptoms of kwashiorkor include apathy, diarrhea, inactivity, failure to grow, flaky skin, fatty liver, and edema of the belly and legs. This edema is explained by the action of lipoxygenase on arachidonic acid to form leukotrienes and the normal functioning of proteins in fluid balance and lipoprotein transport. PEM is fairly common worldwide in both children and adults and accounts for 6 million deaths annually. In the industrialized world, PEM is predominantly seen in hospitals, is associated with disease, or is often found in the elderly.
See also: Azotorrhea, Biological value, Bodybuilding supplement, Leaf protein concentrate, Low-protein diet, Ninja diet, Protein bar, Single-cell protein, List of proteins in the human body.
Hyperosmolar hyperglycemic state
Hyperosmolar hyperglycemic state (HHS), also known as hyperosmolar non-ketotic state (HONK), is a complication of diabetes mellitus in which high blood sugar results in high osmolarity without significant ketoacidosis. Symptoms include signs of dehydration, weakness, leg cramps, vision problems, and an altered level of consciousness. Onset is typically over days to weeks. Complications may include seizures, disseminated intravascular coagulopathy, mesenteric artery occlusion, or rhabdomyolysis. The main risk factor is a history of diabetes mellitus type 2. Occasionally it may occur in those without a prior history of diabetes or those with diabetes mellitus type 1. Triggers include infections, stroke, trauma, certain medications, and heart attacks. Diagnosis is based on blood tests finding a blood sugar greater than 30 mmol/L (600 mg/dL), osmolarity greater than 320 mOsm/kg, and a pH above 7.3. Initial treatment generally consists of intravenous fluids to manage dehydration, intravenous insulin in those with significant ketones, low molecular weight heparin to decrease the risk of blood clotting, and antibiotics among those in whom there are concerns of infection. The goal is a slow decline in blood sugar levels. Potassium replacement is often required as the metabolic problems are corrected. Efforts to prevent diabetic foot ulcers are also important. It typically takes a few days for the person to return to baseline. While the exact frequency of the condition is unknown, it is relatively common. Older people are most commonly affected. The risk of death among those affected is about 15%. It was first described in the 1880s. Signs and symptoms Symptoms of high blood sugar include increased thirst (polydipsia), increased volume of urination (polyuria), and increased hunger (polyphagia). Symptoms of HHS include: altered level of consciousness; neurologic signs including blurred vision, headaches, focal seizures, myoclonic jerking, and reversible paralysis; motor abnormalities including flaccidity, depressed reflexes, tremors, or fasciculations; hyperviscosity and increased risk of blood clot formation; dehydration; weight loss; nausea, vomiting, and abdominal pain; weakness; and low blood pressure with standing. Cause The main risk factor is a history of diabetes mellitus type 2. Occasionally it may occur in those without a prior history of diabetes or those with diabetes mellitus type 1. Triggers include infections, stroke, trauma, certain medications, and heart attacks. Other risk factors include: lack of sufficient insulin (but enough to prevent ketosis); poor kidney function; poor fluid intake (dehydration); older age (50–70 years); certain medical conditions (cerebral vascular injury, myocardial infarction, sepsis); and certain medications (glucocorticoids, beta-blockers, thiazide diuretics, calcium channel blockers, and phenytoin). Pathophysiology HHS is usually precipitated by an infection, myocardial infarction, stroke or another acute illness. A relative insulin deficiency leads to a serum glucose that is usually higher than 33 mmol/L (600 mg/dL), and a resulting serum osmolarity that is greater than 320 mOsm. This leads to excessive urination (more specifically an osmotic diuresis), which, in turn, leads to volume depletion and hemoconcentration that causes a further increase in blood glucose level. Ketosis is absent because the presence of some insulin inhibits hormone-sensitive lipase-mediated fat tissue breakdown.
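The figures above pair glucose values in mmol/L with mg/dL (for example, 33 mmol/L alongside 600 mg/dL). As an aside not stated in the article, the conversion commonly used for glucose divides mg/dL by about 18, since the molar mass of glucose is roughly 180 g/mol; the sketch below just makes that arithmetic explicit.

    GLUCOSE_MG_PER_MMOL = 18.016  # ~molar mass of glucose (180.16 g/mol) divided by 10

    def glucose_mgdl_to_mmol(mg_dl):
        """Convert a plasma glucose value from mg/dL to mmol/L."""
        return mg_dl / GLUCOSE_MG_PER_MMOL

    def glucose_mmol_to_mgdl(mmol_l):
        """Convert a plasma glucose value from mmol/L to mg/dL."""
        return mmol_l * GLUCOSE_MG_PER_MMOL

    print(round(glucose_mgdl_to_mmol(600), 1))  # ~33.3 mmol/L, the HHS glucose threshold
    print(round(glucose_mmol_to_mgdl(33.3)))    # ~600 mg/dL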
Diagnosis Criteria According to the American Diabetes Association, diagnostic features include: plasma glucose level >30 mmol/L (>600 mg/dL); serum osmolality >320 mOsm/kg; profound dehydration, up to an average of 9 L (and therefore substantial thirst (polydipsia)); serum pH >7.30; bicarbonate >15 mEq/L; small ketonuria (~+ on dipstick) and absent-to-low ketonemia (<3 mmol/L); some alteration in consciousness; BUN >30 mg/dL (increased); and creatinine >1.5 mg/dL (increased). Imaging Cranial imaging is not used for diagnosis of this condition. However, if an MRI is performed, it may show cortical restricted diffusion with unusual characteristics of reversible T2 hypointensity in the subcortical white matter. Differential diagnosis The major differential diagnosis is diabetic ketoacidosis (DKA). In contrast to DKA, serum glucose levels in HHS are extremely high, usually greater than 40–50 mmol/L (about 720–900 mg/dL). Metabolic acidosis is absent or mild. A temporary state of confusion (delirium) is also more common in HHS than DKA. HHS also tends to affect older people more. DKA may have fruity breath, and rapid and deep breathing. DKA often has a serum glucose level greater than 300 mg/dL (HHS is >600 mg/dL). DKA usually occurs in type 1 diabetics whereas HHS is more common in type 2 diabetics. DKA is characterized by a rapid onset, and HHS occurs gradually over a few days. DKA also is characterized by ketosis due to the breakdown of fat for energy. Both DKA and HHS may show symptoms of dehydration, increased thirst, increased urination, increased hunger, weight loss, nausea, vomiting, abdominal pain, blurred vision, headaches, weakness, and low blood pressure with standing. Management Phases and timelines The JBDS HHS care pathway comprises 3 main themes to consider when managing a patient with HHS: clinical assessment and monitoring; interventions; and assessments and prevention of harm. To streamline management, there are 5 phases of therapy from the time of recognition of the condition to resolution: 0–60 minutes, 1–6 hours, 6–12 hours, 12–24 hours, and 24–72 hours. Intravenous fluids Treatment of HHS begins with reestablishing tissue perfusion using intravenous fluids. People with HHS can be dehydrated by 8 to 12 liters. Attempts to correct this usually take place over 24 hours with initial rates of normal saline often in the range of 1 L/h for the first few hours or until the condition stabilizes. Electrolyte replacement Potassium replacement is often required as the metabolic problems are corrected. It is generally replaced at a rate of 10 mEq per hour as long as there is adequate urinary output. Insulin Insulin is given to reduce blood glucose concentration; however, as it also causes the movement of potassium into cells, serum potassium levels must be sufficiently high or dangerously low blood potassium levels may result. Once potassium levels have been verified to be greater than 3.3 mEq/L, then an insulin infusion of 0.1 units/kg/hr is started. The goal for resolution is a blood glucose of less than 200 mg/dL.
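To gather the numeric thresholds above in one place, the sketch below encodes the listed ADA diagnostic features and the potassium threshold for starting the insulin infusion. It is a simplified illustration of the figures quoted in this article, not clinical decision support; the function names and the idea of bundling the checks this way are assumptions of this example.

    def meets_hhs_criteria(glucose_mg_dl, osmolality_mosm_kg, ph, bicarbonate_meq_l):
        """Check a lab panel against the ADA diagnostic features quoted above."""
        return (glucose_mg_dl > 600           # plasma glucose >600 mg/dL
                and osmolality_mosm_kg > 320  # serum osmolality >320 mOsm/kg
                and ph > 7.30                 # little or no acidosis
                and bicarbonate_meq_l > 15)   # bicarbonate >15 mEq/L

    def may_start_insulin_infusion(potassium_meq_l):
        """Insulin infusion is started only once potassium exceeds 3.3 mEq/L."""
        return potassium_meq_l > 3.3

    def insulin_infusion_rate(weight_kg):
        """Initial infusion rate quoted in the article: 0.1 units/kg/hr."""
        return 0.1 * weight_kg

    # Hypothetical lab values for illustration only:
    print(meets_hhs_criteria(glucose_mg_dl=780, osmolality_mosm_kg=355,
                             ph=7.34, bicarbonate_meq_l=19))   # True
    print(may_start_insulin_infusion(3.1))                     # False: replace potassium first
    print(insulin_infusion_rate(80))                           # 8.0 units/hr for an 80 kg patient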
Medication
A medication (also called medicament, medicine, pharmaceutical drug, medicinal drug or simply drug) is a drug used to diagnose, cure, treat, or prevent disease. Drug therapy (pharmacotherapy) is an important part of the medical field and relies on the science of pharmacology for continual advancement and on pharmacy for appropriate management. Drugs are classified in many ways. One of the key divisions is by level of control, which distinguishes prescription drugs (those that a pharmacist dispenses only on the order of a physician, physician assistant, or qualified nurse) from over-the-counter drugs (those that consumers can obtain for themselves). Another key distinction is between traditional small molecule drugs, usually derived from chemical synthesis, and biopharmaceuticals, which include recombinant proteins, vaccines, blood products used therapeutically (such as IVIG), gene therapy, monoclonal antibodies and cell therapy (for instance, stem cell therapies). Other ways to classify medicines are by mode of action, route of administration, biological system affected, or therapeutic effects. An elaborate and widely used classification system is the Anatomical Therapeutic Chemical Classification System. The World Health Organization keeps a list of essential medicines. Drug discovery and drug development are complex and expensive endeavors undertaken by pharmaceutical companies, academic scientists, and governments. As a result of this complex path from discovery to commercialization, partnering has become a standard practice for advancing drug candidates through development pipelines. Governments generally regulate what drugs can be marketed, how drugs are marketed, and in some jurisdictions, drug pricing. Controversies have arisen over drug pricing and the disposal of used medications. Definition Medication is a medicine or a chemical compound used to treat or cure illness. According to Encyclopædia Britannica, medication is "a substance used in treating a disease or relieving pain". As defined by the National Cancer Institute, dosage forms of medication can include tablets, capsules, liquids, creams, and patches. Medications can be administered in different ways, such as by mouth, by infusion into a vein, or by drops put into the ear or eye. A medication that does not contain an active ingredient and is used in research studies is called a placebo. In Europe, the term is "medicinal product", and it is defined by EU law as: "Any substance or combination of substances presented as having properties for treating or preventing disease in human beings; or" "Any substance or combination of substances which may be used in or administered to human beings either with a view to restoring, correcting, or modifying physiological functions by exerting a pharmacological, immunological or metabolic action or to making a medical diagnosis." In the US, a "drug" is: A substance (other than food) intended to affect the structure or any function of the body. A substance intended for use as a component of a medicine but not a device or a component, part, or accessory of a device. A substance intended for use in the diagnosis, cure, mitigation, treatment, or prevention of disease. A substance recognized by an official pharmacopeia or formulary. Biological products are included within this definition and are generally covered by the same laws and regulations, but differences exist regarding their manufacturing processes (chemical process versus biological process).
Usage Drug use among elderly Americans has been studied; in a group of 2,377 people with an average age of 71 surveyed between 2005 and 2006, 84% took at least one prescription drug, 44% took at least one over-the-counter (OTC) drug, and 52% took at least one dietary supplement; in a group of 2,245 elderly Americans (average age of 71) surveyed over the period 2010–2011, those percentages were 88%, 38%, and 64%. Classification One of the key classifications is between traditional small molecule drugs, usually derived from chemical synthesis, and biological medical products, which include recombinant proteins, vaccines, blood products used therapeutically (such as IVIG), gene therapy, and cell therapy (for instance, stem cell therapies). Pharmaceuticals or drugs or medicines are also classified into various other groups besides their origin, on the basis of pharmacological properties such as mode of action and pharmacological activity, chemical properties, mode or route of administration, biological system affected, or therapeutic effects. An elaborate and widely used classification system is the Anatomical Therapeutic Chemical Classification System (ATC system). The World Health Organization keeps a list of essential medicines. A sampling of classes of medicine includes: antipyretics (reducing fever); analgesics (reducing pain); antimalarial drugs (treating malaria); antibiotics (inhibiting germ growth); antiseptics (preventing germ growth near burns, cuts, and wounds); mood stabilizers (lithium and valproate); hormone replacements (Premarin); oral contraceptives (Enovid, the "biphasic" pill, and the "triphasic" pill); stimulants (methylphenidate, amphetamine); tranquilizers (meprobamate, chlorpromazine, reserpine, chlordiazepoxide, diazepam, and alprazolam); and statins (lovastatin, pravastatin, and simvastatin). Pharmaceuticals may also be described as "specialty", independent of other classifications, which is an ill-defined class of drugs that might be difficult to administer, require special handling during administration, require patient monitoring during and immediately after administration, have particular regulatory requirements restricting their use, and are generally expensive relative to other drugs. Types of medicines For the digestive system Lower digestive tract: laxatives, antispasmodics, antidiarrhoeals, bile acid sequestrants, opioids. Upper digestive tract: antacids, reflux suppressants, antiflatulents, antidopaminergics, proton pump inhibitors (PPIs), H2-receptor antagonists, cytoprotectants, prostaglandin analogues. For the cardiovascular system Affecting blood pressure (antihypertensive drugs): ACE inhibitors, angiotensin receptor blockers, beta-blockers, α blockers, calcium channel blockers, thiazide diuretics, loop diuretics, aldosterone inhibitors. Coagulation: anticoagulants, heparin, antiplatelet drugs, fibrinolytics, anti-hemophilic factors, haemostatic drugs. General: β-receptor blockers ("beta blockers"), calcium channel blockers, diuretics, cardiac glycosides, antiarrhythmics, nitrate, antianginals, vasoconstrictors, vasodilators. HMG-CoA reductase inhibitors (statins) for lowering LDL cholesterol: hypolipidaemic agents.
For the central nervous system Drugs affecting the central nervous system include psychedelics, hypnotics, anaesthetics, antipsychotics, eugeroics, antidepressants (including tricyclic antidepressants, monoamine oxidase inhibitors, lithium salts, and selective serotonin reuptake inhibitors (SSRIs)), antiemetics, anticonvulsants/antiepileptics, anxiolytics, barbiturates, movement disorder (e.g., Parkinson's disease) drugs, nootropics, stimulants (including amphetamines), benzodiazepines, cyclopyrrolones, dopamine antagonists, antihistamines, cholinergics, anticholinergics, emetics, cannabinoids, and 5-HT (serotonin) antagonists. For pain The main classes of painkillers are NSAIDs, opioids, and local anesthetics. For consciousness (anesthetic drugs) Some anesthetics include benzodiazepines and barbiturates. For musculoskeletal disorders The main categories of drugs for musculoskeletal disorders are: NSAIDs (including COX-2 selective inhibitors), muscle relaxants, neuromuscular drugs, and anticholinesterases. For the eye Anti-allergy: mast cell inhibitors. Anti-fungal: imidazoles, polyenes. Anti-glaucoma: adrenergic agonists, beta-blockers, carbonic anhydrase inhibitors/hyperosmotics, cholinergics, miotics, parasympathomimetics, prostaglandin agonists/prostaglandin inhibitors, nitroglycerin. Anti-inflammatory: NSAIDs, corticosteroids. Antibacterial: antibiotics, topical antibiotics, sulfa drugs, aminoglycosides, fluoroquinolones. Antiviral drugs. Diagnostic: topical anesthetics, sympathomimetics, parasympatholytics, mydriatics, cycloplegics. General: adrenergic neurone blocker, astringent. For the ear, nose, and oropharynx Antibiotics, sympathomimetics, antihistamines, anticholinergics, NSAIDs, corticosteroids, antiseptics, local anesthetics, antifungals, and cerumenolytics. For the respiratory system Bronchodilators, antitussives, mucolytics, decongestants, inhaled and systemic corticosteroids, beta2-adrenergic agonists, anticholinergics, mast cell stabilizers, leukotriene antagonists. For endocrine problems Androgens, antiandrogens, estrogens, gonadotropin, corticosteroids, human growth hormone, insulin, antidiabetics (sulfonylureas, biguanides/metformin, thiazolidinediones, insulin), thyroid hormones, antithyroid drugs, calcitonin, diphosphonate, vasopressin analogues. For the reproductive system or urinary system Antifungal, alkalinizing agents, quinolones, antibiotics, cholinergics, anticholinergics, antispasmodics, 5-alpha reductase inhibitor, selective alpha-1 blockers, sildenafils, fertility medications. For contraception Hormonal contraception. Ormeloxifene. Spermicide. For obstetrics and gynecology NSAIDs, anticholinergics, haemostatic drugs, antifibrinolytics, Hormone Replacement Therapy (HRT), bone regulators, beta-receptor agonists, follicle stimulating hormone, luteinising hormone, LHRH, gamolenic acid, gonadotropin release inhibitor, progestogen, dopamine agonists, oestrogen, prostaglandins, gonadorelin, clomiphene, tamoxifen, diethylstilbestrol. For the skin Emollients, anti-pruritics, antifungals, antiseptics, scabicides, pediculicides, tar products, vitamin A derivatives, vitamin D analogues, keratolytics, abrasives, systemic antibiotics, topical antibiotics, hormones, desloughing agents, exudate absorbents, fibrinolytics, proteolytics, sunscreens, antiperspirants, corticosteroids, immune modulators. 
For infections and infestations Antibiotics, antifungals, antileprotics, antituberculous drugs, antimalarials, anthelmintics, amoebicides, antivirals, antiprotozoals, probiotics, prebiotics, antitoxins, and antivenoms. For the immune system Vaccines, immunoglobulins, immunosuppressants, interferons, and monoclonal antibodies. For allergic disorders Anti-allergics, antihistamines, NSAIDs, corticosteroids. For nutrition Tonics, electrolytes and mineral preparations (including iron preparations and magnesium preparations), parenteral nutrition, vitamins, anti-obesity drugs, anabolic drugs, haematopoietic drugs, food product drugs. For neoplastic disorders Cytotoxic drugs, therapeutic antibodies, sex hormones, aromatase inhibitors, somatostatin inhibitors, recombinant interleukins, G-CSF, erythropoietin. For diagnostics Contrast media. For euthanasia A euthanaticum is used for euthanasia and physician-assisted suicide. Euthanasia is not permitted by law in many countries, and consequently, medicines will not be licensed for this use in those countries. Administration A single drug may contain single or multiple active ingredients. The administration is the process by which a patient takes medicine. There are three major categories of drug administration: enteral (via the human gastrointestinal tract), injection into the body, and by other routes (dermal, nasal, ophthalmic, otologic, and urogenital). Oral administration, the most common form of enteral administration, can be performed using various dosage forms including tablets or capsules and liquid such as syrup or suspension. Other ways to take the medication include buccally (placed inside the cheek), sublingually (placed underneath the tongue), eye and ear drops (dropped into the eye or ear), and transdermally (applied to the skin). They can be administered in one dose, as a bolus. Administration frequencies are often abbreviated from Latin, such as every 8 hours reading Q8H from Quaque VIII Hora. The drug frequencies are often expressed as the number of times a drug is used per day (e.g., four times a day). It may include event-related information (e.g., 1 hour before meals, in the morning, at bedtime), or complimentary to an interval, although equivalent expressions may have different implications (e.g., every 8 hours versus 3 times a day). Drug discovery In the fields of medicine, biotechnology, and pharmacology, drug discovery is the process by which new drugs are discovered. Historically, drugs were discovered by identifying the active ingredient from traditional remedies or by serendipitous discovery. Later chemical libraries of synthetic small molecules, natural products, or extracts were screened in intact cells or whole organisms to identify substances that have a desirable therapeutic effect in a process known as classical pharmacology. Since sequencing of the human genome which allowed rapid cloning and synthesis of large quantities of purified proteins, it has become common practice to use high throughput screening of large compound libraries against isolated biological targets which are hypothesized to be disease-modifying in a process known as reverse pharmacology. Hits from these screens are then tested in cells and then in animals for efficacy. Even more recently, scientists have been able to understand the shape of biological molecules at the atomic level and to use that knowledge to design (see drug design) drug candidates. 
Modern drug discovery involves the identification of screening hits, medicinal chemistry, and optimization of those hits to increase the affinity, selectivity (to reduce the potential of side effects), efficacy/potency, metabolic stability (to increase the half-life), and oral bioavailability. Once a compound that fulfills all of these requirements has been identified, it will begin the process of drug development prior to clinical trials. One or more of these steps may, but not necessarily, involve computer-aided drug design. Despite advances in technology and understanding of biological systems, drug discovery is still a lengthy, "expensive, difficult, and inefficient process" with a low rate of new therapeutic discovery. In 2010, the research and development cost of each new molecular entity (NME) was approximately US$1.8 billion. Drug discovery is done by pharmaceutical companies, sometimes with research assistance from universities. The "final product" of drug discovery is a patent on the potential drug. The drug requires very expensive Phase I, II, and III clinical trials, and most of them fail. Small companies have a critical role, often then selling the rights to larger companies that have the resources to run the clinical trials. Drug discovery is different from drug development. Drug discovery is the process of identifying a new medicine, while drug development is the process of bringing a new drug molecule into clinical practice. In its broad definition, this encompasses all steps from the basic research process of finding a suitable molecular target to supporting the drug's commercial launch. Development Drug development is the process of bringing a new drug to the market once a lead compound has been identified through the process of drug discovery. It includes pre-clinical research (microorganisms/animals) and clinical trials (on humans) and may include the step of obtaining regulatory approval to market the drug. Drug development process Discovery: the process starts with discovery, identifying a new medicine. Development: chemicals extracted from natural products are formulated into dosage forms such as pills, capsules, or syrups for oral use, injections for direct infusion into the blood, or drops for the eyes or ears. Preclinical research: drugs undergo laboratory or animal testing to ensure that they can be used in humans. Clinical testing: the drug is studied in people to confirm that it is safe to use. FDA review: the drug application is submitted to the FDA before the drug is launched onto the market. FDA post-market review: the drug is reviewed and monitored by the FDA for safety once it is available to the public. Regulation The regulation of drugs varies by jurisdiction. In some countries, such as the United States, they are regulated at the national level by a single agency. In other jurisdictions, they are regulated at the state level, or at both state and national levels by various bodies, as is the case in Australia. Therapeutic goods regulation is designed mainly to protect the health and safety of the population. Regulation is aimed at ensuring the safety, quality, and efficacy of the therapeutic goods which are covered under the scope of the regulation. In most jurisdictions, therapeutic goods must be registered before they are allowed to be marketed. There is usually some degree of restriction on the availability of certain therapeutic goods depending on their risk to consumers.
Depending upon the jurisdiction, drugs may be divided into over-the-counter drugs (OTC), which may be available without special restrictions, and prescription drugs, which must be prescribed by a licensed medical practitioner in accordance with medical guidelines due to the risk of adverse effects and contraindications. The precise distinction between OTC and prescription depends on the legal jurisdiction. A third category, "behind-the-counter" drugs, is implemented in some jurisdictions. These do not require a prescription, but must be kept in the dispensary, not visible to the public, and be sold only by a pharmacist or pharmacy technician. Doctors may also prescribe prescription drugs for off-label use – purposes which the drugs were not originally approved for by the regulatory agency. The Classification of Pharmaco-Therapeutic Referrals helps guide the referral process between pharmacists and doctors. The International Narcotics Control Board of the United Nations imposes a world law of prohibition of certain drugs. It publishes a lengthy list of chemicals and plants whose trade and consumption (where applicable) are forbidden. OTC drugs are sold without restriction as they are considered safe enough that most people will not hurt themselves accidentally by taking them as instructed. Many countries, such as the United Kingdom, have a third category of "pharmacy medicines", which can be sold only in registered pharmacies by or under the supervision of a pharmacist. Medical errors include over-prescription and polypharmacy, mis-prescription, contraindication and lack of detail in dosage and administration instructions. In 2000, the definition of a prescription error was studied using a Delphi method conference; the conference was motivated by ambiguity in what a prescription error is and a need to use a uniform definition in studies. Drug pricing In many jurisdictions, drug prices are regulated. United Kingdom In the UK, the Pharmaceutical Price Regulation Scheme is intended to ensure that the National Health Service is able to purchase drugs at reasonable prices. The prices are negotiated between the Department of Health, acting with the authority of Northern Ireland and the UK Government, and the representatives of the pharmaceutical industry, the Association of the British Pharmaceutical Industry (ABPI). For 2017, the payment percentage set by the PPRS was 4.75%. Canada In Canada, the Patented Medicine Prices Review Board examines drug pricing and determines if a price is excessive or not. In these circumstances, drug manufacturers must submit a proposed price to the appropriate regulatory agency. Furthermore, "the International Therapeutic Class Comparison Test is responsible for comparing the National Average Transaction Price of the patented drug product under review"; the countries against which prices are compared are France, Germany, Italy, Sweden, Switzerland, the United Kingdom, and the United States. Brazil In Brazil, prices have been regulated since 1999 through legislation under the name of Medicamento Genérico (generic drugs). India In India, drug prices are regulated by the National Pharmaceutical Pricing Authority. United States In the United States, drug costs are partially unregulated, but instead are the result of negotiations between drug companies and insurance companies. High prices have been attributed to monopolies given to manufacturers by the government. New drug development costs continue to rise as well.
Despite the enormous advances in science and technology, the number of new blockbuster drugs approved by the government per billion dollars spent has halved every 9 years since 1950. Blockbuster drug A blockbuster drug is a drug that generates more than $1 billion in revenue for a pharmaceutical company in a single year. Cimetidine was the first drug ever to reach more than $1 billion a year in sales, thus making it the first blockbuster drug. History Prescription drug history Antibiotics first arrived on the medical scene in 1932 thanks to Gerhard Domagk and were dubbed the "wonder drugs". The introduction of the sulfa drugs caused the mortality rate from pneumonia in the U.S. to drop from 0.2% each year to 0.05% by 1939. Antibiotics are chemical substances of microbial origin that inhibit the growth or the metabolic activities of bacteria and other microorganisms. Penicillin, introduced a few years later, provided a broader spectrum of activity compared to sulfa drugs and reduced side effects. Streptomycin, found in 1942, proved to be the first drug effective against the cause of tuberculosis and also came to be the best known of a long series of important antibiotics. A second generation of antibiotics was introduced in the 1940s: aureomycin and chloramphenicol. Aureomycin was the best known of the second generation. Lithium was discovered in the 19th century for nervous disorders and its possible mood-stabilizing or prophylactic effect; it was cheap and easily produced. As lithium fell out of favor in France, valpromide came into play. This anticonvulsant was the origin of the drug that eventually created the mood stabilizer category. Valpromide had distinct psychotropic effects that were of benefit in both the treatment of acute manic states and in the maintenance treatment of manic-depressive illness. Psychotropics can either be sedative or stimulant; sedatives aim at damping down the extremes of behavior. Stimulants aim at restoring normality by increasing tone. Soon arose the notion of a tranquilizer, which was quite different from any sedative or stimulant. The term tranquilizer took over the notions of sedatives and became the dominant term in the West through the 1980s. In Japan, during this time, the term tranquilizer produced the notion of a psyche-stabilizer and the term mood stabilizer vanished. Premarin (conjugated estrogens, introduced in 1942) and Prempro (a combination estrogen-progestin pill, introduced in 1995) dominated hormone replacement therapy (HRT) during the 1990s. HRT is not a life-saving drug, nor does it cure any disease. HRT has been prescribed to improve one's quality of life. Doctors prescribe estrogen for their older female patients both to treat short-term menopausal symptoms and to prevent long-term diseases. In the 1960s and early 1970s, more and more physicians began to prescribe estrogen for their female patients. Between 1991 and 1999, Premarin was listed as the most popular prescription and best-selling drug in America. The first oral contraceptive, Enovid, was approved by the FDA in 1960. Oral contraceptives inhibit ovulation and so prevent conception. Enovid was known to be much more effective than alternatives including the condom and the diaphragm. As early as 1960, oral contraceptives were available in several different strengths from every manufacturer. In the 1980s and 1990s, an increasing number of options arose including, most recently, a new delivery system for the oral contraceptive via a transdermal patch.
In 1982, a new version of the Pill was introduced, known as the "biphasic" pill. By 1985, a new triphasic pill was approved. Physicians began to think of the Pill as an excellent means of birth control for young women. Stimulants such as Ritalin (methylphenidate) came to be pervasive tools for behavior management and modification in young children. Ritalin was first marketed in 1955 for narcolepsy; its potential users were middle-aged and the elderly. It was not until some time in the 1980s along with hyperactivity in children that Ritalin came onto the market. Medical use of methylphenidate is predominantly for symptoms of attention deficit/hyperactivity disorder (ADHD). Consumption of methylphenidate in the U.S. out-paced all other countries between 1991 and 1999. Significant growth in consumption was also evident in Canada, New Zealand, Australia, and Norway. Currently, 85% of the world's methylphenidate is consumed in America. The first minor tranquilizer was Meprobamate. Only fourteen months after it was made available, meprobamate had become the country's largest-selling prescription drug. By 1957, meprobamate had become the fastest-growing drug in history. The popularity of meprobamate paved the way for Librium and Valium, two minor tranquilizers that belonged to a new chemical class of drugs called the benzodiazepines. These were drugs that worked chiefly as anti-anxiety agents and muscle relaxants. The first benzodiazepine was Librium. Three months after it was approved, Librium had become the most prescribed tranquilizer in the nation. Three years later, Valium hit the shelves and was ten times more effective as a muscle relaxant and anti-convulsant. Valium was the most versatile of the minor tranquilizers. Later came the widespread adoption of major tranquilizers such as chlorpromazine and the drug reserpine. In 1970, sales began to decline for Valium and Librium, but sales of new and improved tranquilizers, such as Xanax, introduced in 1981 for the newly created diagnosis of panic disorder, soared. Mevacor (lovastatin) is the first and most influential statin in the American market. The 1991 launch of Pravachol (pravastatin), the second available in the United States, and the release of Zocor (simvastatin) made Mevacor no longer the only statin on the market. In 1998, Viagra was released as a treatment for erectile dysfunction. Ancient pharmacology Using plants and plant substances to treat all kinds of diseases and medical conditions is believed to date back to prehistoric medicine. The Kahun Gynaecological Papyrus, the oldest known medical text of any kind, dates to about 1800 BC and represents the first documented use of any kind of drug. It and other medical papyri describe Ancient Egyptian medical practices, such as using honey to treat infections and the legs of bee-eaters to treat neck pains. Ancient Babylonian medicine demonstrated the use of medication in the first half of the 2nd millennium BC. Medicinal creams and pills were employed as treatments. On the Indian subcontinent, the Atharvaveda, a sacred text of Hinduism whose core dates from the second millennium BC, although the hymns recorded in it are believed to be older, is the first Indic text dealing with medicine. It describes plant-based drugs to counter diseases. The earliest foundations of ayurveda were built on a synthesis of selected ancient herbal practices, together with a massive addition of theoretical conceptualizations, new nosologies and new therapies dating from about 400 BC onwards. 
The student of Āyurveda was expected to know ten arts that were indispensable in the preparation and application of his medicines: distillation, operative skills, cooking, horticulture, metallurgy, sugar manufacture, pharmacy, analysis and separation of minerals, compounding of metals, and preparation of alkalis. The Hippocratic Oath for physicians, attributed to fifth century BC Greece, refers to the existence of "deadly drugs", and ancient Greek physicians imported drugs from Egypt and elsewhere. The pharmacopoeia De Materia Medica, written between 50 and 70 CE by the Greek physician Pedanius Dioscorides, was widely read for more than 1,500 years. Medieval pharmacology Al-Kindi's ninth century AD book De Gradibus and Ibn Sina (Avicenna)'s The Canon of Medicine cover a range of drugs known to the practice of medicine in the medieval Islamic world. Medieval medicine of Western Europe saw advances in surgery compared to earlier periods, but few truly effective drugs existed, beyond opium (found in such extremely popular drugs as the "Great Rest" of the Antidotarium Nicolai at the time) and quinine. Folklore cures and potentially poisonous metal-based compounds were popular treatments. Theodoric Borgognoni (1205–1296), one of the most significant surgeons of the medieval period, was responsible for introducing and promoting important surgical advances, including basic antiseptic practice and the use of anaesthetics. Garcia de Orta described some herbal treatments that were used. Modern pharmacology For most of the 19th century, drugs were not highly effective, leading Oliver Wendell Holmes Sr. to famously comment in 1842 that "if all medicines in the world were thrown into the sea, it would be all the better for mankind and all the worse for the fishes". During the First World War, Alexis Carrel and Henry Dakin developed the Carrel-Dakin method of treating wounds with an irrigation of Dakin's solution, a germicide which helped prevent gangrene. In the inter-war period, the first anti-bacterial agents such as the sulpha antibiotics were developed. The Second World War saw the introduction of widespread and effective antimicrobial therapy with the development and mass production of penicillin antibiotics, made possible by the pressures of the war and the collaboration of British scientists with the American pharmaceutical industry. Medicines commonly used by the late 1920s included aspirin, codeine, and morphine for pain; digitalis, nitroglycerin, and quinine for heart disorders; and insulin for diabetes. Other drugs included antitoxins, a few biological vaccines, and a few synthetic drugs. In the 1930s, antibiotics emerged: first sulfa drugs, then penicillin and other antibiotics. Drugs increasingly became "the center of medical practice". In the 1950s, other drugs emerged including corticosteroids for inflammation, rauvolfia alkaloids as tranquilizers and antihypertensives, antihistamines for nasal allergies, xanthines for asthma, and typical antipsychotics for psychosis. As of 2007, thousands of approved drugs have been developed. Increasingly, biotechnology is used to discover biopharmaceuticals. Recently, multi-disciplinary approaches have yielded a wealth of new data on the development of novel antibiotics and antibacterials and on the use of biological agents for antibacterial therapy. In the 1950s, new psychiatric drugs, notably the antipsychotic chlorpromazine, were designed in laboratories and slowly came into preferred use.
Although often accepted as an advance in some ways, there was some opposition, due to serious adverse effects such as tardive dyskinesia. Patients often opposed psychiatry and refused or stopped taking the drugs when not subject to psychiatric control. Governments have been heavily involved in the regulation of drug development and drug sales. In the U.S., the Elixir Sulfanilamide disaster prompted the 1938 Federal Food, Drug, and Cosmetic Act, which expanded the authority of the Food and Drug Administration and required manufacturers to file new drugs with the FDA. The 1951 Durham-Humphrey Amendment required certain drugs to be sold by prescription. In 1962, a subsequent amendment required new drugs to be tested for efficacy and safety in clinical trials. Until the 1970s, drug prices were not a major concern for doctors and patients. As more drugs became prescribed for chronic illnesses, however, costs became burdensome, and by the 1970s nearly every U.S. state required or encouraged the substitution of generic drugs for higher-priced brand names. This also led to Medicare Part D, the U.S. drug benefit that took effect in 2006 and offers Medicare coverage for drugs. As of 2008, the United States is the leader in medical research, including pharmaceutical development. U.S. drug prices are among the highest in the world, and drug innovation is correspondingly high. In 2000, U.S.-based firms developed 29 of the 75 top-selling drugs; firms from the second-largest market, Japan, developed eight, and the United Kingdom contributed 10. France, which imposes price controls, developed three. Throughout the 1990s, outcomes were similar. Controversies Controversies concerning pharmaceutical drugs include patient access to drugs under development and not yet approved, pricing, and environmental issues. Access to unapproved drugs Governments worldwide have created provisions for granting access to drugs prior to approval for patients who have exhausted all alternative treatment options and do not match clinical trial entry criteria. Often grouped under the labels of compassionate use, expanded access, or named patient supply, these programs are governed by rules which vary by country, defining access criteria, data collection, promotion, and control of drug distribution. Within the United States, pre-approval demand is generally met through treatment IND (investigational new drug) applications, or single-patient INDs. These mechanisms, which fall under the label of expanded access programs, provide access to drugs for groups of patients or individuals residing in the US. Outside the US, Named Patient Programs provide controlled, pre-approval access to drugs in response to requests by physicians on behalf of specific, or "named", patients before those medicines are licensed in the patient's home country. Through these programs, patients are able to access drugs in late-stage clinical trials or approved in other countries for a genuine, unmet medical need, before those drugs have been licensed in the patient's home country. Patients who have not been able to get access to drugs in development have organized and advocated for greater access. In the United States, ACT UP formed in the 1980s, and eventually formed its Treatment Action Group in part to pressure the US government to put more resources into discovering treatments for AIDS and then to speed release of drugs that were under development. The Abigail Alliance was established in November 2001 by Frank Burroughs in memory of his daughter, Abigail. 
The Alliance seeks broader availability of investigational drugs on behalf of terminally ill patients. In 2013, BioMarin Pharmaceutical was at the center of a high-profile debate regarding expanded access of cancer patients to experimental drugs. Access to medicines and drug pricing Essential medicines, as defined by the World Health Organization (WHO), are "those drugs that satisfy the health care needs of the majority of the population; they should therefore be available at all times in adequate amounts and in appropriate dosage forms, at a price the community can afford." Recent studies have found that most of the medicines on the WHO essential medicines list, outside of the field of HIV drugs, are not patented in the developing world, and that lack of widespread access to these medicines arise from issues fundamental to economic development – lack of infrastructure and poverty. Médecins Sans Frontières also runs a Campaign for Access to Essential Medicines campaign, which includes advocacy for greater resources to be devoted to currently untreatable diseases that primarily occur in the developing world. The Access to Medicine Index tracks how well pharmaceutical companies make their products available in the developing world. World Trade Organization negotiations in the 1990s, including the TRIPS Agreement and the Doha Declaration, have centered on issues at the intersection of international trade in pharmaceuticals and intellectual property rights, with developed world nations seeking strong intellectual property rights to protect investments made to develop new drugs, and developing world nations seeking to promote their generic pharmaceuticals industries and their ability to make medicine available to their people via compulsory licenses. Some have raised ethical objections specifically with respect to pharmaceutical patents and the high prices for drugs that they enable their proprietors to charge, which poor people around the world, cannot afford. Critics also question the rationale that exclusive patent rights and the resulting high prices are required for pharmaceutical companies to recoup the large investments needed for research and development. One study concluded that marketing expenditures for new drugs often doubled the amount that was allocated for research and development. Other critics claim that patent settlements would be costly for consumers, the health care system, and state and federal governments because it would result in delaying access to lower cost generic medicines. Novartis fought a protracted battle with the government of India over the patenting of its drug, Gleevec, in India, which ended up in a Supreme Court in a case known as Novartis v. Union of India & Others. The Supreme Court ruled narrowly against Novartis, but opponents of patenting drugs claimed it as a major victory. Environmental issues Pharmaceutical medications are commonly described as "ubiquitous" in nearly every type of environmental medium (i.e. lakes, rivers, streams, estuaries, seawater, and soil) worldwide. Their chemical components are typically present at relatively low concentrations in the ng/L to μg/L ranges. The primary avenue for medications reaching the environment are through the effluent of wastewater treatment plants, both from industrial plants during production, and from municipal plants after consumption. Agricultural pollution is another significant source derived from the prevalence of antibiotic use in livestock. 
Scientists generally divide the environmental impacts of a chemical into three primary categories: persistence, bioaccumulation, and toxicity. Since medications are inherently bio-active, most are naturally degradable in the environment; however, they are classified as "pseudopersistent" because they are constantly being replenished from their sources. These Environmentally Persistent Pharmaceutical Pollutants (EPPPs) rarely reach toxic concentrations in the environment; however, they have been known to bioaccumulate in some species. Their effects have been observed to compound gradually across food webs, rather than becoming acute, leading to their classification by the US Geological Survey as "Ecological Disrupting Compounds." See also Adherence Deprescribing Drug nomenclature List of drugs List of pharmaceutical companies Orphan drug Overmedication Pharmaceutical code Pharmacy References External links Drug Reference Site Directory – OpenMD Drugs & Medications Directory – Curlie European Medicines Agency NHS Medicines A–Z U.S. Food & Drug Administration: Drugs WHO Model List of Essential Medicines Chemicals in medicine Pharmaceutical industry Products of chemical industry
Acute chest syndrome
The acute chest syndrome is a vaso-occlusive crisis of the pulmonary vasculature commonly seen in people with sickle cell anemia. This condition commonly manifests with a new opacification of the lung(s) on a chest x-ray. Signs and symptoms The crisis is a common complication in sickle-cell patients and can be associated with one or more symptoms including fever, cough, excruciating pain, sputum production, shortness of breath, or low oxygen levels. Cause Acute chest syndrome is often precipitated by a lung infection, and the resulting inflammation and loss of oxygen saturation leads to further sickling of red cells, thus exacerbating pulmonary and systemic hypoxemia, sickling, and vaso-occlusion. Diagnosis The diagnosis of acute chest syndrome is made difficult by its similarity in presentation with pneumonia. Both may present with a new opacification of the lung on chest x-ray. The presence of fevers, low oxygen levels in the blood, increased respiratory rate, chest pain, and cough is also common in acute chest syndrome. Diagnostic workup includes chest x-ray, complete blood count, reticulocyte count, ECG, and blood and sputum cultures. Patients may also require additional blood tests or imaging (e.g. a CT scan) to exclude a heart attack or other pulmonary pathology. Prevention Hydroxyurea is a medication that can help to prevent acute chest syndrome. It may cause a low white blood cell count, which can predispose the person to some types of infection. Treatment Treatment consists of broad-spectrum antibiotics to cover common infections such as Streptococcus pneumoniae and mycoplasma, pain control, and blood transfusion. Acute chest syndrome is an indication for exchange transfusion. Bronchodilators may be useful but have not been well studied. Prognosis It may result in death, and it is one of the most common causes of death for people with sickle cell anemia. References External links Red blood cell disorders Syndromes affecting blood
Decompensation
In medicine, decompensation is the functional deterioration of a structure or system that had been previously working with the help of compensation. Decompensation may occur due to fatigue, stress, illness, or old age. When a system is "compensated," it is able to function despite stressors or defects. Decompensation describes an inability to compensate for these deficiencies. It is a general term commonly used in medicine to describe a variety of situations. Medical term For example, cardiac decompensation may refer to the failure of the heart to maintain adequate blood circulation, after long-standing (previously compensated) vascular disease (see heart failure). Short-term treatment of cardiac decompensation can be achieved through administration of dobutamine, resulting in an increase in heart contractility via an inotropic effect. Kidney failure can also occur following a slow degradation of kidney function due to an underlying untreated illness; the symptoms of the latter can then become much more severe due to the lack of efficient compensation by the kidney. Psychology In psychology, the term refers to an individual's loss of healthy defense mechanisms in response to stress, resulting in personality disturbance or psychological imbalance. References External links Heffner, C.L. (2001). Psychology 101. Tucker-Ladd, C.E. (1996-2000). Psychological Self-Help. Psychoanalytic terminology Medical terminology Narcissism Borderline personality disorder
Systemic disease
A systemic disease is one that affects a number of organs and tissues, or affects the body as a whole. Examples Mastocytosis, including mast cell activation syndrome and eosinophilic esophagitis Chronic fatigue syndrome Systemic vasculitis e.g. SLE, PAN Sarcoidosis – a disease that mainly affects the lungs, brain, joints and eyes, found most often in young African-American women. Hypothyroidism – where the thyroid gland produces too little thyroid hormones. Diabetes mellitus – an imbalance in blood glucose (sugar) levels. Fibromyalgia Ehlers-Danlos syndromes - an inherited connective tissue disorder with multiple subcategories Adrenal insufficiency – where the adrenal glands don't produce enough steroid hormones Coeliac disease – an autoimmune disease triggered by gluten consumption, which may involve several organs and cause a variety of symptoms, or be completely asymptomatic. Ulcerative colitis – an inflammatory bowel disease Crohn's disease – an inflammatory bowel disease Hypertension (high blood pressure) Metabolic syndrome AIDS – a disease caused by a virus that cripples the body's immune defenses. Graves' disease – a thyroid disorder, most often in women, which can cause a goiter (swelling in the front part of the neck) and protruding eyes. Systemic lupus erythematosus – a connective tissue disorder involving mainly the skin, joints and kidneys. Rheumatoid arthritis – an inflammatory disease which mainly attacks the joints. But can also affect a person's skin, eyes, lungs and mouth. Atherosclerosis – a hardening of the arteries Sickle cell disease – an inherited blood disorder that can block circulation throughout the body, primarily affecting people of sub-Saharan origin. Myasthenia gravis Systemic sclerosis Sinusitis Sjogren's Syndrome - an autoimmune disease that primarily attacks the lacrimal and salivary glands, but also impacts other organs such as the lungs, kidneys, liver, and nervous system. Detection Getting a regular eye exam may play a role in identifying the signs of some systemic diseases. "The eye is composed of many different types of tissue. This unique feature makes the eye susceptible to a wide variety of diseases as well as provides insights into many body systems. Almost any part of the eye can give important clues to the diagnosis of systemic diseases. Signs of a systemic disease may be evident on the outer surface of the eye (eyelids, conjunctiva and cornea), middle of the eye and at the back of the eye (retina)." Since 500 B.C., some researchers have believed that the physical condition of the fingernails and toenails can indicate various systemic diseases. Careful examination of the fingernails and toenails may provide clues to underlying systemic diseases , since some diseases have been found to cause disruptions in the nail growth process. The nail plate is the hard keratin cover of the nail. The nail plate is generated by the nail matrix located just under the cuticle. As the nail grows, the area closest to becoming exposed to the outside world (distal) produces the deeper layers of the nail plate, while the part of the nail matrix deeper inside the finger (proximal) makes the superficial layers. Any disruption in this growth process can lead to an alteration in the shape and texture. For example, pitting looks like depressions in the hard part of the nail. Pitting is to be associated with psoriasis, affecting 10% - 50% of patients with that disorder. 
Pitting also may be caused by a variety of systemic diseases, including reactive arthritis and other connective tissue disorders, sarcoidosis, pemphigus, alopecia areata, and incontinentia pigmenti. Because pitting is caused by defective layering of the superficial nail plate by the proximal nail matrix, any localized dermatitis (e.g., atopic dermatitis or chemical dermatitis) that disrupts orderly growth in that area also can cause pitting. See also Disease Disseminated disease Fred Siguier List of systemic diseases with ocular manifestations Localized disease Marfan syndrome Systemic autoimmune diseases Systemic inflammation Oral manifestations of systemic disease References Diseases and disorders Medical terminology
Delirium tremens
Delirium tremens (DTs) is a rapid onset of confusion usually caused by withdrawal from alcohol. When it occurs, it is often three days into the withdrawal symptoms and lasts for two to three days. Physical effects may include shaking, shivering, irregular heart rate, and sweating. People may also hallucinate. Occasionally, a very high body temperature or seizures (colloquially known as "rum fits") may result in death. Delirium tremens typically occurs only in people with a high intake of alcohol for more than a month, followed by sharply reduced intake. A similar syndrome may occur with benzodiazepine and barbiturate withdrawal. In a person with delirium tremens, it is important to rule out other associated problems such as electrolyte abnormalities, pancreatitis, and alcoholic hepatitis. Prevention is by treating withdrawal symptoms using similarly acting compounds to taper off the use of the precipitating substance in a controlled fashion. If delirium tremens occurs, aggressive treatment improves outcomes. Treatment in a quiet intensive care unit with sufficient light is often recommended. Benzodiazepines are the medication of choice, with diazepam, lorazepam, chlordiazepoxide, and oxazepam all commonly used. They should be given until a person is lightly sleeping. Nonbenzodiazepines are often used as adjuncts to manage the sleep disturbance associated with the condition. The antipsychotic haloperidol may also be used in order to combat the overactivity and possible excitotoxicity caused by the withdrawal from a GABA-ergic substance. Thiamine (vitamin B1) is recommended to be given intramuscularly, because long-term high alcohol intake and the often attendant nutritional deficit damage the small intestine, leading to a thiamine deficiency, which sometimes cannot be rectified by supplement pills alone. Mortality without treatment is between 15% and 40%. Currently death occurs in about 1% to 4% of cases. About half of people with alcoholism will develop withdrawal symptoms upon reducing their use. Of these, 3% to 5% develop DTs or have seizures. The name delirium tremens was first used in 1813; however, the symptoms had been well described since the 1700s. The word "delirium" is Latin for "going off the furrow," a plowing metaphor for disordered thinking. It is also called the shaking frenzy and Saunders-Sutton syndrome. There are numerous nicknames for the condition, including "the DTs" and "seeing pink elephants". Signs and symptoms The main symptoms of delirium tremens are nightmares, agitation, global confusion, disorientation, visual and auditory hallucinations, tactile hallucinations, fever, high heart rate, high blood pressure, heavy sweating, and other signs of autonomic hyperactivity. These symptoms may appear suddenly but typically develop two to three days after the stopping of heavy drinking, being worst on the fourth or fifth day. These symptoms are characteristically worse at night. Finnish, for example, has its own name for this nightlike condition, alluding to its sweatiness, general unease, and hallucinations tending towards the unseemly and frightening. In general, DT is considered the most severe manifestation of withdrawal from alcohol or other GABAergic drugs, and can occur between the second and tenth days after the last drink. 
It often takes the patient by surprise: a brief period of uneventful sobriety of 1–2 days tends to precede it, it can fully manifest itself within a single hour, and, unlike most other alcohol withdrawal symptoms, it is generally not relieved by more alcohol. Other common symptoms include intense perceptual disturbance such as visions or feelings of insects, snakes, or rats. These may be hallucinations or illusions related to the environment, e.g., patterns on the wallpaper or in the peripheral vision that the patient falsely perceives as a resemblance to the morphology of an insect, and are also associated with tactile hallucinations such as sensations of something crawling on the subject—a phenomenon known as formication. Delirium tremens usually includes feelings of "impending doom". Anxiety and expecting imminent death are common DT symptoms. DT can sometimes be associated with severe, uncontrollable tremors of the extremities, and secondary symptoms such as anxiety, panic attacks, and paranoia. Confusion is often noticeable to onlookers, as those with DT will have trouble forming simple sentences or making basic logical calculations. DT should be distinguished from alcoholic hallucinosis, the latter of which occurs in approximately 20% of hospitalized alcoholics and does not carry a significant risk of mortality. In contrast, DT occurs in 5–10% of alcoholics and carries up to 15% mortality with treatment and up to 35% mortality without treatment. The most common conditions leading to death in patients with DTs are respiratory failure and cardiac arrhythmias. Causes Delirium tremens is mainly caused by a long period of drinking being stopped abruptly. Withdrawal leads to a biochemical regulation cascade. Delirium tremens is most common in people who are in alcohol withdrawal, especially in those who drink 10–11 standard drinks daily, whether as beer, wine, or distilled beverages. Delirium tremens commonly affects those with a history of habitual alcohol use or alcoholism that has existed for more than 10 years. Pathophysiology Delirium tremens is a component of alcohol withdrawal hypothesized to be the result of compensatory changes in response to chronic heavy alcohol use. Alcohol positively allosterically modulates the binding of GABA, enhancing its effect and resulting in inhibition of neurons projecting into the nucleus accumbens, as well as inhibiting NMDA receptors. This, combined with desensitization of alpha-2 adrenergic receptors, results in a homeostatic upregulation of these systems in chronic alcohol use. When alcohol use ceases, the unregulated mechanisms result in hyperexcitability of neurons as natural GABAergic systems are down-regulated and excitatory glutamatergic systems are upregulated. This, combined with increased noradrenergic activity, results in the symptoms of delirium tremens. Diagnosis Diagnosis is mainly based on symptoms. In a person with delirium tremens, it is important to rule out other associated problems, such as electrolyte abnormalities, pancreatitis, and alcoholic hepatitis. Treatment Delirium tremens due to alcohol withdrawal can be treated with benzodiazepines. High doses may be necessary to prevent death. Amounts given are based on the symptoms. Typically the person is kept sedated with benzodiazepines, such as diazepam, lorazepam, chlordiazepoxide, or oxazepam. In some cases, antipsychotics such as haloperidol may also be used. 
Older drugs such as paraldehyde and clomethiazole were formerly the traditional treatment but have now largely been superseded by the benzodiazepines. Acamprosate is occasionally used in addition to other treatments, and is then carried on into long-term use to reduce the risk of relapse. If status epilepticus occurs, it is treated in the usual way. It can also be helpful to provide a well-lit room, as people often have hallucinations. Alcoholic beverages can also be prescribed as a treatment for delirium tremens, but this practice is not universally supported. High doses of thiamine, often by the intravenous route, are also recommended. Delirium tremens in literature French writer Émile Zola's novel The Drinking Den (L'Assommoir) includes a character – Coupeau, the main character Gervaise's husband – who has delirium tremens by the end of the book. In English writer Mona Caird's feminist novel The Daughters of Danaus (1894), "[a]s for taking enfeeblement as a natural dispensation," the character Hadria "would as soon regard delirium tremens in that light." American writer Mark Twain describes an episode of delirium tremens in his book The Adventures of Huckleberry Finn (1884). In chapter 6, Huck states about his father, "After supper pap took the jug, and said he had enough whisky there for two drunks and one delirium tremens. That was always his word." Subsequently, Pap Finn runs around with hallucinations of snakes and chases Huck around their cabin with a knife in an attempt to kill him, thinking Huck is the "Angel of Death". One of the characters in Joseph Conrad's novel Lord Jim experiences "DTs of the worst kind" with symptoms that include seeing millions of pink frogs. English author M. R. James mentions delirium tremens in his 1904 ghost story 'Oh, Whistle, and I'll Come to You, My Lad'. Professor Parkins, staying at the Globe Inn in coastal Burnstow to "improve his game" of golf, and despite being "a convinced disbeliever in what is called the 'supernatural'", is heard "uttering cry upon cry at the utmost pitch of his voice" when face to face with an entity in his "double-bed room" during the story's climax, though he is later "somehow cleared of the ready suspicion of delirium tremens". American writer Jack Kerouac details his experiences with delirium tremens in his book Big Sur. English author George Eliot provides a case involving delirium tremens in her novel Middlemarch (1871–72). The alcoholic scoundrel John Raffles, both the abusive stepfather of Joshua Riggs and the blackmailing nemesis of financier Nicholas Bulstrode, dies at Peter Featherstone's Stone Court property; his "death was due to delirium tremens". Housekeeper Mrs. Abel provides Raffles' final night of care on Bulstrode's instruction, though the directions given to Abel run contrary to Tertius Lydgate's orders. Delirium tremens in film and TV In the 1945 film The Lost Weekend, Ray Milland won the Academy Award for Best Actor for his depiction of a character who experiences delirium tremens after being hospitalized, hallucinating that he saw a bat fly in and eat a mouse poking through a wall. The M*A*S*H TV series episode "Bottoms Up" (season 9, episode 15, aired on March 2, 1981) featured a side story about a nurse (Capt. Helen Whitfield) who was found to be drinking heavily off-duty. By the culmination of the episode, after a confrontation by Maj. Margaret Houlihan, the character swears off alcohol and presumably quits immediately. 
At mealtime, roughly 48 hours later, Whitfield becomes hysterical upon being served food in the Mess tent, claiming that things are crawling onto her from it. Margaret and Col. Sherman Potter subdue her. Potter, having recognized the symptoms of delirium tremens orders 5 ml of paraldehyde from a witnessing nurse. During the filming of the 1975 film Monty Python and the Holy Grail, Graham Chapman developed delirium tremens due to the lack of alcohol on the set. It was particularly bad during the filming of the bridge of death scene where Chapman was visibly shaking, sweating and could not cross the bridge. His fellow Pythons were astonished as Chapman was an accomplished mountaineer. In the 1995 film Leaving Las Vegas, Nicolas Cage plays a suicidal alcoholic who rids himself of all his possessions and travels to Las Vegas to drink himself to death. During his travels, he experiences delirium tremens on a couch after waking up from a binge and crawls in pain to the refrigerator for more vodka. Cage's performance as Ben Sanderson in the film won the Academy Award for Best Actor in 1996. Delirium tremens in music Irish singer-songwriter Christy Moore has a song on his 1985 album, Ordinary Man, called "Delirium Tremens" which is a satirical song, directed towards the leaders in Irish politics and culture. Some of the people mentioned in the song include Charles Haughey (former Fianna Fáil leader), Ruairi Quinn (at the time a Labour TD, later the party leader), Dick Spring (former Labour Party leader) and Roger Casement (who was captured bringing German guns to Ireland for the 1916 Easter Rising). English band Brotherly has a song called "DTs" on their album One Sweet Life. Russian composer Modest Mussorgsky (1839-1881) died of delirium tremens. Delirium tremens in popular culture Nicknames for delirium tremens include "the DTs", "the shakes", "the oopizootics", "barrel-fever", "the blue horrors", "the rat's", "bottleache", "bats", "the drunken horrors", "seeing pink elephants", "gallon distemper", "quart mania", "janky jerks", "heebie jeebies", "pink spiders", and "riding the ghost train", as well as "ork orks", "the zoots", "the 750 itch", and "pint paralysis". Another nickname is "the Brooklyn Boys", found in Eugene O'Neill's one-act play Hughie set in Times Square in the 1920s. Delirium tremens was also given an alternate medical definition since at least the 1840s, being known as mania a potu, which translates to 'mania from drink'. The Belgian beer "Delirium Tremens," introduced in 1988, is a direct reference and also uses a pink elephant as its logo to highlight one of the symptoms of delirium tremens. See also Alcohol dementia Alcohol detoxification Delusional parasitosis Excited delirium On the wagon References External links Why Does Alcohol Cause the Shakes? | Alcohol Withdrawal Syndrome Tremors | Dr Peter MCcann MCC, MBBS | Castle Craig Hospital Causes of death Health effects of alcohol Addiction psychiatry Intensive care medicine Neurological disorders Latin medical words and phrases Medical emergencies Alcohol abuse Wikipedia medicine articles ready to translate Wikipedia emergency medicine articles ready to translate
Germ theory of disease
The germ theory of disease is the currently accepted scientific theory for many diseases. It states that microorganisms known as pathogens or "germs" can cause disease. These small organisms, which are too small to be seen without magnification, invade humans, other animals, and other living hosts. Their growth and reproduction within their hosts can cause disease. "Germ" refers not just to a bacterium but to any type of microorganism, such as protists or fungi, or other pathogens that can cause disease, such as viruses, prions, or viroids. Diseases caused by pathogens are called infectious diseases. Even when a pathogen is the principal cause of a disease, environmental and hereditary factors often influence the severity of the disease, and whether a potential host individual becomes infected when exposed to the pathogen. Pathogens are disease-carrying agents that can pass from one individual to another, both in humans and animals. Infectious diseases are caused by biological agents such as pathogenic microorganisms (viruses, bacteria, and fungi) as well as parasites. Basic forms of germ theory were proposed by Girolamo Fracastoro in 1546, and expanded upon by Marcus von Plenciz in 1762. However, such views were held in disdain in Europe, where Galen's miasma theory remained dominant among scientists and doctors. By the early 19th century, smallpox vaccination, the first vaccine, was commonplace in Europe, though doctors were unaware of how it worked or how to extend the principle to other diseases. A transitional period began in the late 1850s with the work of Louis Pasteur. This work was later extended by Robert Koch in the 1880s. By the end of that decade, the miasma theory was struggling to compete with the germ theory of disease. Viruses were initially discovered in the 1890s. Eventually, a "golden era" of bacteriology ensued, during which the germ theory quickly led to the identification of the actual organisms that cause many diseases. Miasma theory The miasma theory was the predominant theory of disease transmission before the germ theory took hold towards the end of the 19th century; it is no longer accepted as a correct explanation for disease by the scientific community. It held that diseases such as cholera, chlamydia infection, or the Black Death were caused by a miasma (Ancient Greek for "pollution"), a noxious form of "bad air" emanating from rotting organic matter. Miasma was considered to be a poisonous vapor or mist filled with particles from decomposed matter (miasmata) that was identifiable by its foul smell. The theory posited that diseases were the product of environmental factors such as contaminated water, foul air, and poor hygienic conditions. Such infections, according to the theory, were not passed between individuals but would affect those within a locale that gave rise to such vapors. Development of germ theory Greece and Rome In Antiquity, the Greek historian Thucydides (c. 460 – c. 400 BC) was the first person to write, in his account of the plague of Athens, that diseases could spread from an infected person to others. One theory of the spread of contagious diseases that were not spread by direct contact was that they were spread by spore-like "seeds" (Latin: semina) that were present in and dispersible through the air. In his poem De rerum natura (On the Nature of Things), the Roman poet Lucretius (c. 99 – c. 55 BC) stated that the world contained various "seeds", some of which could sicken a person if they were inhaled or ingested. 
The Roman statesman Marcus Terentius Varro (116–27 BC) wrote, in his Rerum rusticarum libri III (Three Books on Agriculture, 36 BC): "Precautions must also be taken in the neighborhood of swamps... because there are bred certain minute creatures which cannot be seen by the eyes, which float in the air and enter the body through the mouth and nose and there cause serious diseases." The Greek physician Galen (AD 129 – c. 216) speculated in his On Initial Causes that some patients might have "seeds of fever". In his On the Different Types of Fever, Galen speculated that plagues were spread by "certain seeds of plague", which were present in the air. And in his Epidemics, Galen explained that patients might relapse during recovery from fever because some "seed of the disease" lurked in their bodies, which would cause a recurrence of the disease if the patients did not follow a physician's therapeutic regimen. The Middle Ages A hybrid form of miasma and contagion theory was proposed by the Persian physician Ibn Sina (known as Avicenna in Europe) in The Canon of Medicine (1025). He mentioned that people can transmit disease to others by breath, noted contagion with tuberculosis, and discussed the transmission of disease through water and dirt. During the early Middle Ages, Isidore of Seville (died 636) mentioned "plague-bearing seeds" (pestifera semina) in his On the Nature of Things. Later, in 1345, Tommaso del Garbo (died 1370) of Bologna, Italy, mentioned Galen's "seeds of plague" in his work Commentaria non-parum utilia in libros Galeni (Helpful commentaries on the books of Galen). The 16th-century Reformer Martin Luther appears to have had some idea of the contagion theory, commenting, "I have survived three plagues and visited several people who had two plague spots which I touched. But it did not hurt me, thank God. Afterwards when I returned home, I took up Margaret," (born 1534), "who was then a baby, and put my unwashed hands on her face, because I had forgotten; otherwise I should not have done it, which would have been tempting God." In 1546, Italian physician Girolamo Fracastoro published De Contagione et Contagiosis Morbis (On Contagion and Contagious Diseases), a set of three books covering the nature of contagious diseases, categorization of major pathogens, and theories on preventing and treating these conditions. Fracastoro blamed "seeds of disease" that propagate through direct contact with an infected host, indirect contact with fomites, or through particles in the air. The Early Modern Period In 1668, Italian physician Francesco Redi published experimental evidence rejecting spontaneous generation, the theory that living creatures arise from nonliving matter. He observed that maggots only arose from rotting meat that was uncovered. When meat was left in jars covered by gauze, the maggots would instead appear on the gauze's surface, later understood as the rotting meat's smell passing through the mesh to attract flies that laid eggs. Microorganisms are said to have been first directly observed in the 1670s by Anton van Leeuwenhoek, an early pioneer in microbiology, considered "the Father of Microbiology". Leeuwenhoek is said to have been the first to see and describe bacteria (in 1674), yeast cells, the teeming life in a drop of water (such as algae), and the circulation of blood corpuscles in capillaries. The word "bacteria" did not exist yet, so he called these microscopic living organisms "animalcules", meaning "little animals". 
Those "very little animalcules" he was able to isolate from different sources, such as rainwater, pond and well water, and the human mouth and intestine. Yet German Jesuit priest and scholar Athanasius Kircher (or "Kirchner", as it is often spelled) may have observed such microorganisms prior to this. One of his books written in 1646 contains a chapter in Latin, which reads in translation: "Concerning the wonderful structure of things in nature, investigated by microscope...who would believe that vinegar and milk abound with an innumerable multitude of worms." Kircher defined the invisible organisms found in decaying bodies, meat, milk, and secretions as "worms." His studies with the microscope led him to the belief, which he was possibly the first to hold, that disease and putrefaction, or decay were caused by the presence of invisible living bodies, writing that "a number of things might be discovered in the blood of fever patients." When Rome was struck by the bubonic plague in 1656, Kircher investigated the blood of plague victims under the microscope. He noted the presence of "little worms" or "animalcules" in the blood and concluded that the disease was caused by microorganisms. Kircher was the first to attribute infectious disease to a microscopic pathogen, inventing the germ theory of disease, which he outlined in his Scrutinium Physico-Medicum, published in Rome in 1658. Kircher's conclusion that disease was caused by microorganisms was correct, although it is likely that what he saw under the microscope were in fact red or white blood cells and not the plague agent itself. Kircher also proposed hygienic measures to prevent the spread of disease, such as isolation, quarantine, burning clothes worn by the infected, and wearing facemasks to prevent the inhalation of germs. It was Kircher who first proposed that living beings enter and exist in the blood. In the 18th century, more proposals were made, but struggled to catch on. In 1700, physician Nicolas Andry argued that microorganisms he called "worms" were responsible for smallpox and other diseases. In 1720, Richard Bradley theorised that the plague and "all pestilential distempers" were caused by "poisonous insects", living creatures viewable only with the help of microscopes. In 1762, the Austrian physician Marcus Antonius von Plenciz (1705–1786) published a book titled Opera medico-physica. It outlined a theory of contagion stating that specific animalcules in the soil and the air were responsible for causing specific diseases. Von Plenciz noted the distinction between diseases which are both epidemic and contagious (like measles and dysentery), and diseases which are contagious but not epidemic (like rabies and leprosy). The book cites Anton van Leeuwenhoek to show how ubiquitous such animalcules are and was unique for describing the presence of germs in ulcerating wounds. Ultimately, the theory espoused by von Plenciz was not accepted by the scientific community. 19th and 20th centuries Agostino Bassi, Italy During the early 19th century, driven by economic concerns over collapsing silk production, Italian entomologist Agostino Bassi researched a silkworm disease known as "muscardine" in French and "calcinaccio" or "mal del segno" in Italian, causing white fungal spots along the caterpillar. From 1835 to 1836, Bassi published his findings that fungal spores transmitted the disease between individuals. 
In recommending the rapid removal of diseased caterpillars and disinfection of their surfaces, Bassi outlined methods used in modern preventative healthcare. Italian naturalist Giuseppe Gabriel Balsamo-Crivelli named the causative fungal species after Bassi, currently classified as Beauveria bassiana. Louis-Daniel Beauperthuy, France In 1838 French specialist in tropical medicine Louis-Daniel Beauperthuy pioneered using microscopy in relation to diseases and independently developed a theory that all infectious diseases were due to parasitic infection with "animalcules" (microorganisms). With the help of his friend M. Adele de Rosseville, he presented his theory in a formal presentation before the French Academy of Sciences in Paris. By 1853, he was convinced that malaria and yellow fever were spread by mosquitos. He even identified the particular group of mosquitos that transmit yellow fever as the "domestic species" of "striped-legged mosquito", which can be recognised as Aedes aegypti, the actual vector. He published his theory in 1854 in the Gaceta Oficial de Cumana ("Official Gazette of Cumana"). His reports were assessed by an official commission, which discarded his mosquito theory. Ignaz Semmelweis, Austria Ignaz Semmelweis, a Hungarian obstetrician working at the Vienna General Hospital (Allgemeines Krankenhaus) in 1847, noticed the dramatically high maternal mortality from puerperal fever following births assisted by doctors and medical students. However, those attended by midwives were relatively safe. Investigating further, Semmelweis made the connection between puerperal fever and examinations of delivering women by doctors, and further realized that these physicians had usually come directly from autopsies. Asserting that puerperal fever was a contagious disease and that matter from autopsies was implicated in its spread, Semmelweis made doctors wash their hands with chlorinated lime water before examining pregnant women. He then documented a sudden reduction in the mortality rate from 18% to 2.2% over a period of a year. Despite this evidence, he and his theories were rejected by most of the contemporary medical establishment. Gideon Mantell, UK Gideon Mantell, the Sussex doctor more famous for discovering dinosaur fossils, spent time with his microscope, and speculated in his Thoughts on Animalcules (1850) that perhaps "many of the most serious maladies which afflict humanity, are produced by peculiar states of invisible animalcular life". John Snow, UK British physician John Snow is credited as a founder of modern epidemiology for studying the 1854 Broad Street cholera outbreak. Snow criticized the Italian anatomist Giovanni Maria Lancisi for his early 18th century writings that claimed swamp miasma spread malaria, rebutting that bad air from decomposing organisms was not present in all cases. In his 1849 pamphlet On the Mode of Communication of Cholera, Snow proposed that cholera spread through the fecal–oral route, replicating in human lower intestines. In the book's second edition, published in 1855, Snow theorized that cholera was caused by cells smaller than human epithelial cells, leading to Robert Koch's 1884 confirmation of the bacterial species Vibrio cholerae as the causative agent. In recognizing a biological origin, Snow recommended boiling and filtering water, setting the precedent for modern boil-water advisory directives. 
Through a statistical analysis tying cholera cases to specific water pumps associated with the Southwark and Vauxhall Waterworks Company, which supplied sewage-polluted water from the River Thames, Snow showed that areas supplied by this company experienced fourteen times as many deaths as residents using Lambeth Waterworks Company pumps that obtained water from the upriver, cleaner Seething Wells. While Snow received praise for convincing the Board of Guardians of St James's Parish to remove the handles of contaminated pumps, he noted that the outbreak's cases were already declining as scared residents fled the region. Louis Pasteur, France During the mid-19th century, French microbiologist Louis Pasteur showed that treating the female genital tract with boric acid killed the microorganisms causing postpartum infections while avoiding damage to mucous membranes. Building on Redi's work, Pasteur disproved spontaneous generation by constructing swan neck flasks containing nutrient broth. Since the flask contents were only fermented when in direct contact with the external environment's air by removing the curved tubing, Pasteur demonstrated that bacteria must travel between sites of infection to colonize environments. Similar to Bassi, Pasteur extended his research on germ theory by studying pébrine, a disease that causes brown spots on silkworms. While Swiss botanist Carl Nägeli discovered the fungal species Nosema bombycis in 1857, Pasteur applied the findings to recommend improved ventilation and screening of silkworm eggs, an early form of disease surveillance. Robert Koch, Germany In 1884, German bacteriologist Robert Koch published four criteria for establishing causality between specific microorganisms and diseases, now known as Koch's postulates: The microorganism must be found in abundance in all organisms with the disease, but should not be found in healthy organisms. The microorganism must be isolated from a diseased organism and grown in pure culture. The cultured microorganism should cause disease when introduced into a healthy organism. The microorganism must be re-isolated from the inoculated, diseased experimental host and identified as being identical to the original specific causative agent. During his lifetime, Koch recognized that the postulates were not universally applicable, such as asymptomatic carriers of cholera violating the first postulate. For this same reason, the third postulate specifies "should", rather than "must", because not all host organisms exposed to an infectious agent will acquire the infection, potentially due to differences in prior exposure to the pathogen.Limiting the second postulate, it was later discovered that viruses cannot be grown in pure cultures because they are obligate intracellular parasites, making it impossible to fulfill the second postulate. Similarly, pathogenic misfolded proteins, known as prions, only spread by transmitting their structure to other proteins, rather than self-replicating. While Koch's postulates retain historical importance for emphasizing that correlation does not imply causation, many pathogens are accepted as causative agents of specific diseases without fulfilling all of the criteria. In 1988, American microbiologist Stanley Falkow published a molecular version of Koch's postulates to establish correlation between microbial genes and virulence factors. 
Joseph Lister, UK After reading Pasteur's papers on bacterial fermentation, British surgeon Joseph Lister recognized that compound fractures, involving bones breaking through the skin, were more likely to become infected due to exposure to environmental microorganisms. He recognized that carbolic acid could be applied to the site of injury as an effective antiseptic. See also Alexander Fleming Cell theory Cooties Epidemiology Germ theory denialism History of emerging infectious diseases Robert Hooke Rudolf Virchow Zymotic disease References External links Stephen T. Abedon Germ Theory of Disease Supplemental Lecture (98/03/28 update), www.mansfield.ohio-state.edu William C. Campbell The Germ Theory Timeline, germtheorytimeline.info Science's war on infectious diseases, www.creatingtechnology.org Biology theories Infectious diseases Microbiology
Cancer
Cancer is a group of diseases involving abnormal cell growth with the potential to invade or spread to other parts of the body. These contrast with benign tumors, which do not spread. Possible signs and symptoms include a lump, abnormal bleeding, prolonged cough, unexplained weight loss, and a change in bowel movements. While these symptoms may indicate cancer, they can also have other causes. Over 100 types of cancers affect humans. Tobacco use is the cause of about 22% of cancer deaths. Another 10% are due to obesity, poor diet, lack of physical activity or excessive alcohol consumption. Other factors include certain infections, exposure to ionizing radiation, and environmental pollutants. Infection with specific viruses, bacteria and parasites is an environmental factor causing approximately 16–18% of cancers worldwide. These infectious agents include Helicobacter pylori, hepatitis B, hepatitis C, human papillomavirus infection, Epstein–Barr virus, Human T-lymphotropic virus 1, Kaposi's sarcoma-associated herpesvirus and Merkel cell polyomavirus. Human immunodeficiency virus (HIV) does not directly cause cancer, but it causes immune deficiency that can magnify the risk due to other infections, sometimes up to several thousand fold (in the case of Kaposi's sarcoma). Importantly, vaccination against hepatitis B and human papillomavirus has been shown to nearly eliminate the risk of cancers caused by these viruses in persons successfully vaccinated prior to infection. These environmental factors act, at least partly, by changing the genes of a cell. Typically, many genetic changes are required before cancer develops. Approximately 5–10% of cancers are due to inherited genetic defects. Cancer can be detected by certain signs and symptoms or screening tests. It is then typically further investigated by medical imaging and confirmed by biopsy. The risk of developing certain cancers can be reduced by not smoking, maintaining a healthy weight, limiting alcohol intake, eating plenty of vegetables, fruits, and whole grains, vaccination against certain infectious diseases, limiting consumption of processed meat and red meat, and limiting exposure to direct sunlight. Early detection through screening is useful for cervical and colorectal cancer. The benefits of screening for breast cancer are controversial. Cancer is often treated with some combination of radiation therapy, surgery, chemotherapy and targeted therapy. Pain and symptom management are an important part of care. Palliative care is particularly important in people with advanced disease. The chance of survival depends on the type of cancer and extent of disease at the start of treatment. In children under 15 at diagnosis, the five-year survival rate in the developed world is on average 80%. For cancer in the United States, the average five-year survival rate is 66% for all ages. In 2015, about 90.5 million people worldwide had cancer. In 2019, annual cancer cases grew to 23.6 million, and there were 10 million deaths worldwide, representing increases over the previous decade of 26% and 21%, respectively. The most common types of cancer in males are lung cancer, prostate cancer, colorectal cancer, and stomach cancer. In females, the most common types are breast cancer, colorectal cancer, lung cancer, and cervical cancer. If skin cancer other than melanoma were included in total new cancer cases each year, it would account for around 40% of cases. 
In children, acute lymphoblastic leukemia and brain tumors are most common, except in Africa, where non-Hodgkin lymphoma occurs more often. In 2012, about 165,000 children under 15 years of age were diagnosed with cancer. The risk of cancer increases significantly with age, and many cancers occur more commonly in developed countries. Rates are increasing as more people live to an old age and as lifestyle changes occur in the developing world. The global total economic costs of cancer were estimated at US$1.16 trillion per year. Etymology and definitions The word comes from the Ancient Greek karkinos, meaning 'crab' and 'tumor'. Greek physicians Hippocrates and Galen, among others, noted the similarity of crabs to some tumors with swollen veins. The word was introduced in English in the modern medical sense around 1600. Cancers comprise a large family of diseases that involve abnormal cell growth with the potential to invade or spread to other parts of the body. They form a subset of neoplasms. A neoplasm or tumor is a group of cells that have undergone unregulated growth and will often form a mass or lump, but may be distributed diffusely. All tumor cells show the six hallmarks of cancer. These characteristics are required to produce a malignant tumor. They include: cell growth and division absent the proper signals; continuous growth and division even given contrary signals; avoidance of programmed cell death; a limitless number of cell divisions; the promotion of blood vessel construction; and invasion of tissue and formation of metastases. The progression from normal cells to cells that can form a detectable mass to cancer involves multiple steps known as malignant progression. Signs and symptoms When cancer begins, it produces no symptoms. Signs and symptoms appear as the mass grows or ulcerates. The findings that result depend on the cancer's type and location. Few symptoms are specific. Many frequently occur in individuals who have other conditions. Cancer can be difficult to diagnose and can be considered a "great imitator". People may become anxious or depressed post-diagnosis. The risk of suicide in people with cancer is approximately double that of the general population. Local symptoms Local symptoms may occur due to the mass of the tumor or its ulceration. For example, mass effects from lung cancer can block the bronchus, resulting in cough or pneumonia; esophageal cancer can cause narrowing of the esophagus, making it difficult or painful to swallow; and colorectal cancer may lead to narrowing or blockages in the bowel, affecting bowel habits. Masses in breasts or testicles may produce observable lumps. Ulceration can cause bleeding that can lead to symptoms such as coughing up blood (lung cancer), anemia or rectal bleeding (colon cancer), blood in the urine (bladder cancer), or abnormal vaginal bleeding (endometrial or cervical cancer). Although localized pain may occur in advanced cancer, the initial tumor is usually painless. Some cancers can cause a buildup of fluid within the chest or abdomen. Systemic symptoms Systemic symptoms may occur due to the body's response to the cancer. These may include fatigue, unintentional weight loss, or skin changes. Some cancers can cause a systemic inflammatory state that leads to ongoing muscle loss and weakness, known as cachexia. Some cancers, such as Hodgkin's disease, leukemias, and liver or kidney cancers, can cause a persistent fever. Shortness of breath, called dyspnea, is a common symptom of cancer and its treatment. 
The causes of cancer-related dyspnea can include tumors in or around the lung, blocked airways, fluid in the lungs, pneumonia, or treatment reactions including an allergic response. Treatment for dyspnea in patients with advanced cancer can include fans, bilevel ventilation, acupressure/reflexology and multicomponent nonpharmacological interventions. Some systemic symptoms of cancer are caused by hormones or other molecules produced by the tumor, known as paraneoplastic syndromes. Common paraneoplastic syndromes include hypercalcemia, which can cause altered mental state, constipation and dehydration, or hyponatremia, which can also cause altered mental status, vomiting, headaches, or seizures. Metastasis Metastasis is the spread of cancer to other locations in the body. The dispersed tumors are called metastatic tumors, while the original is called the primary tumor. Almost all cancers can metastasize. Most cancer deaths are due to cancer that has metastasized. Metastasis is common in the late stages of cancer and it can occur via the blood or the lymphatic system or both. The typical steps in metastasis are: Local invasion Intravasation into the blood or lymph. Circulation through the body. Extravasation into the new tissue. Proliferation Angiogenesis Different types of cancers tend to metastasize to particular organs. Overall, the most common places for metastases to occur are the lungs, liver, brain, and the bones. While some cancers can be cured if detected early, metastatic cancer is more difficult to treat and control. Nevertheless, some recent treatments are demonstrating encouraging results. Causes The majority of cancers, some 90–95% of cases, are due to genetic mutations from environmental and lifestyle factors. The remaining 5–10% are due to inherited genetics. Environmental refers to any cause that is not inherited, such as lifestyle, economic, and behavioral factors and not merely pollution. Common environmental factors that contribute to cancer death include tobacco use (25–30%), diet and obesity (30–35%), infections (15–20%), radiation (both ionizing and non-ionizing, up to 10%), lack of physical activity, and pollution. Psychological stress does not appear to be a risk factor for the onset of cancer, though it may worsen outcomes in those who already have cancer. Environmental or lifestyle factors that caused cancer to develop in an individual can be identified by analyzing mutational signatures from genomic sequencing of tumor DNA. For example, this can reveal if lung cancer was caused by tobacco smoke, if skin cancer was caused by UV radiation, or if secondary cancers were caused by previous chemotherapy treatment. Cancer is generally not a transmissible disease. Exceptions include rare transmissions that occur with pregnancies and occasional organ donors. However, transmissible infectious diseases such as hepatitis B, Epstein-Barr virus, Human Papilloma Virus and HIV, can contribute to the development of cancer. Chemicals Exposure to particular substances have been linked to specific types of cancer. These substances are called carcinogens. Tobacco smoke, for example, causes 90% of lung cancer. Tobacco use can cause cancer throughout the body including in the mouth and throat, larynx, esophagus, stomach, bladder, kidney, cervix, colon/rectum, liver and pancreas. Tobacco smoke contains over fifty known carcinogens, including nitrosamines and polycyclic aromatic hydrocarbons. 
Tobacco is responsible for about one in five cancer deaths worldwide and about one in three in the developed world. Lung cancer death rates in the United States have mirrored smoking patterns, with increases in smoking followed by dramatic increases in lung cancer death rates and, more recently, decreases in smoking rates since the 1950s followed by decreases in lung cancer death rates in men since 1990.
In Western Europe, 10% of cancers in males and 3% of cancers in females are attributed to alcohol exposure, especially liver and digestive tract cancers. Work-related exposure to carcinogenic substances may cause between 2 and 20% of cancers, accounting for at least 200,000 deaths. Lung cancer and mesothelioma can result from inhaling tobacco smoke or asbestos fibers, and leukemia from exposure to benzene. Exposure to perfluorooctanoic acid (PFOA), which is predominantly used in the production of Teflon, is known to cause two kinds of cancer. Chemotherapy drugs such as platinum-based compounds are carcinogens that increase the risk of secondary cancers. Azathioprine, an immunosuppressive medication, is a carcinogen that can cause primary tumors to develop.

Diet and exercise
Diet, physical inactivity, and obesity are related to up to 30–35% of cancer deaths. In the United States, excess body weight is associated with the development of many types of cancer and is a factor in 14–20% of cancer deaths. A UK study including data on over 5 million people showed higher body mass index to be related to at least 10 types of cancer and responsible for around 12,000 cases each year in that country. Physical inactivity is believed to contribute to cancer risk, not only through its effect on body weight but also through negative effects on the immune system and endocrine system. More than half of the effect from diet is due to overnutrition (eating too much), rather than from eating too few vegetables or other healthful foods.
Some specific foods are linked to specific cancers. A high-salt diet is linked to gastric cancer. Aflatoxin B1, a frequent food contaminant, causes liver cancer. Betel nut chewing can cause oral cancer. National differences in dietary practices may partly explain differences in cancer incidence. For example, gastric cancer is more common in Japan due to its high-salt diet, while colon cancer is more common in the United States. Immigrant cancer profiles come to mirror those of their new country, often within one generation.

Infection
Worldwide, approximately 18% of cancer deaths are related to infectious diseases. This proportion ranges from a high of 25% in Africa to less than 10% in the developed world. Viruses are the usual infectious agents that cause cancer, but bacteria and parasites may also play a role. Oncoviruses (viruses that can cause human cancer) include human papillomavirus (cervical cancer), Epstein–Barr virus (B-cell lymphoproliferative disease and nasopharyngeal carcinoma), Kaposi's sarcoma herpesvirus (Kaposi's sarcoma and primary effusion lymphomas), hepatitis B and hepatitis C viruses (hepatocellular carcinoma), human T-cell leukemia virus-1 (T-cell leukemias), and Merkel cell polyomavirus (Merkel cell carcinoma). Bacterial infection may also increase the risk of cancer, as seen in Helicobacter pylori-induced gastric carcinoma. 
Colibactin, a genotoxin associated with Escherichia coli infection (colorectal cancer) Parasitic infections associated with cancer include: Schistosoma haematobium (squamous cell carcinoma of the bladder) The liver flukes, Opisthorchis viverrini and Clonorchis sinensis (cholangiocarcinoma). Radiation Radiation exposure such as ultraviolet radiation and radioactive material is a risk factor for cancer. Many non-melanoma skin cancers are due to ultraviolet radiation, mostly from sunlight. Sources of ionizing radiation include medical imaging and radon gas. Ionizing radiation is not a particularly strong mutagen. Residential exposure to radon gas, for example, has similar cancer risks as passive smoking. Radiation is a more potent source of cancer when combined with other cancer-causing agents, such as radon plus tobacco smoke. Radiation can cause cancer in most parts of the body, in all animals and at any age. Children are twice as likely to develop radiation-induced leukemia as adults; radiation exposure before birth has ten times the effect. Medical use of ionizing radiation is a small but growing source of radiation-induced cancers. Ionizing radiation may be used to treat other cancers, but this may, in some cases, induce a second form of cancer. It is also used in some kinds of medical imaging. Prolonged exposure to ultraviolet radiation from the sun can lead to melanoma and other skin malignancies. Clear evidence establishes ultraviolet radiation, especially the non-ionizing medium wave UVB, as the cause of most non-melanoma skin cancers, which are the most common forms of cancer in the world. Non-ionizing radio frequency radiation from mobile phones, electric power transmission and other similar sources has been described as a possible carcinogen by the World Health Organization's International Agency for Research on Cancer. Evidence, however, has not supported a concern. This includes that studies have not found a consistent link between mobile phone radiation and cancer risk. Heredity The vast majority of cancers are non-hereditary (sporadic). Hereditary cancers are primarily caused by an inherited genetic defect. Less than 0.3% of the population are carriers of a genetic mutation that has a large effect on cancer risk and these cause less than 3–10% of cancer. Some of these syndromes include: certain inherited mutations in the genes BRCA1 and BRCA2 with a more than 75% risk of breast cancer and ovarian cancer, and hereditary nonpolyposis colorectal cancer (HNPCC or Lynch syndrome), which is present in about 3% of people with colorectal cancer, among others. Statistically for cancers causing most mortality, the relative risk of developing colorectal cancer when a first-degree relative (parent, sibling or child) has been diagnosed with it is about 2. The corresponding relative risk is 1.5 for lung cancer, and 1.9 for prostate cancer. For breast cancer, the relative risk is 1.8 with a first-degree relative having developed it at 50 years of age or older, and 3.3 when the relative developed it when being younger than 50 years of age. Taller people have an increased risk of cancer because they have more cells than shorter people. Since height is genetically determined to a large extent, taller people have a heritable increase of cancer risk. Physical agents Some substances cause cancer primarily through their physical, rather than chemical, effects. 
A prominent example of this is prolonged exposure to asbestos, naturally occurring mineral fibers that are a major cause of mesothelioma (cancer of the serous membrane) usually the serous membrane surrounding the lungs. Other substances in this category, including both naturally occurring and synthetic asbestos-like fibers, such as wollastonite, attapulgite, glass wool and rock wool, are believed to have similar effects. Non-fibrous particulate materials that cause cancer include powdered metallic cobalt and nickel and crystalline silica (quartz, cristobalite and tridymite). Usually, physical carcinogens must get inside the body (such as through inhalation) and require years of exposure to produce cancer. Physical trauma resulting in cancer is relatively rare. Claims that breaking bones resulted in bone cancer, for example, have not been proven. Similarly, physical trauma is not accepted as a cause for cervical cancer, breast cancer or brain cancer. One accepted source is frequent, long-term application of hot objects to the body. It is possible that repeated burns on the same part of the body, such as those produced by kanger and kairo heaters (charcoal hand warmers), may produce skin cancer, especially if carcinogenic chemicals are also present. Frequent consumption of scalding hot tea may produce esophageal cancer. Generally, it is believed that cancer arises, or a pre-existing cancer is encouraged, during the process of healing, rather than directly by the trauma. However, repeated injuries to the same tissues might promote excessive cell proliferation, which could then increase the odds of a cancerous mutation. Chronic inflammation has been hypothesized to directly cause mutation. Inflammation can contribute to proliferation, survival, angiogenesis and migration of cancer cells by influencing the tumor microenvironment. Oncogenes build up an inflammatory pro-tumorigenic microenvironment. Hormones Hormones also play a role in the development of cancer by promoting cell proliferation. Insulin-like growth factors and their binding proteins play a key role in cancer cell proliferation, differentiation and apoptosis, suggesting possible involvement in carcinogenesis. Hormones are important agents in sex-related cancers, such as cancer of the breast, endometrium, prostate, ovary and testis and also of thyroid cancer and bone cancer. For example, the daughters of women who have breast cancer have significantly higher levels of estrogen and progesterone than the daughters of women without breast cancer. These higher hormone levels may explain their higher risk of breast cancer, even in the absence of a breast-cancer gene. Similarly, men of African ancestry have significantly higher levels of testosterone than men of European ancestry and have a correspondingly higher level of prostate cancer. Men of Asian ancestry, with the lowest levels of testosterone-activating androstanediol glucuronide, have the lowest levels of prostate cancer. Other factors are relevant: obese people have higher levels of some hormones associated with cancer and a higher rate of those cancers. Women who take hormone replacement therapy have a higher risk of developing cancers associated with those hormones. On the other hand, people who exercise far more than average have lower levels of these hormones and lower risk of cancer. Osteosarcoma may be promoted by growth hormones. 
Some treatments and prevention approaches leverage this cause by artificially reducing hormone levels and thus discouraging hormone-sensitive cancers. Autoimmune diseases There is an association between celiac disease and an increased risk of all cancers. People with untreated celiac disease have a higher risk, but this risk decreases with time after diagnosis and strict treatment. This may be due to the adoption of a gluten-free diet, which seems to have a protective role against development of malignancy in people with celiac disease. However, the delay in diagnosis and initiation of a gluten-free diet seems to increase the risk of malignancies. Rates of gastrointestinal cancers are increased in people with Crohn's disease and ulcerative colitis, due to chronic inflammation. Immunomodulators and biologic agents used to treat these diseases may promote developing extra-intestinal malignancies. Pathophysiology Genetics Cancer is fundamentally a disease of tissue growth regulation. For a normal cell to transform into a cancer cell, the genes that regulate cell growth and differentiation must be altered. The affected genes are divided into two broad categories. Oncogenes are genes that promote cell growth and reproduction. Tumor suppressor genes are genes that inhibit cell division and survival. Malignant transformation can occur through the formation of novel oncogenes, the inappropriate over-expression of normal oncogenes, or by the under-expression or disabling of tumor suppressor genes. Typically, changes in multiple genes are required to transform a normal cell into a cancer cell. Genetic changes can occur at different levels and by different mechanisms. The gain or loss of an entire chromosome can occur through errors in mitosis. More common are mutations, which are changes in the nucleotide sequence of genomic DNA. Large-scale mutations involve the deletion or gain of a portion of a chromosome. Genomic amplification occurs when a cell gains copies (often 20 or more) of a small chromosomal locus, usually containing one or more oncogenes and adjacent genetic material. Translocation occurs when two separate chromosomal regions become abnormally fused, often at a characteristic location. A well-known example of this is the Philadelphia chromosome, or translocation of chromosomes 9 and 22, which occurs in chronic myelogenous leukemia and results in production of the BCR-abl fusion protein, an oncogenic tyrosine kinase. Small-scale mutations include point mutations, deletions, and insertions, which may occur in the promoter region of a gene and affect its expression, or may occur in the gene's coding sequence and alter the function or stability of its protein product. Disruption of a single gene may also result from integration of genomic material from a DNA virus or retrovirus, leading to the expression of viral oncogenes in the affected cell and its descendants. Replication of the data contained within the DNA of living cells will probabilistically result in some errors (mutations). Complex error correction and prevention are built into the process and safeguard the cell against cancer. If a significant error occurs, the damaged cell can self-destruct through programmed cell death, termed apoptosis. If the error control processes fail, then the mutations will survive and be passed along to daughter cells. Some environments make errors more likely to arise and propagate. 
Such environments can include the presence of disruptive substances called carcinogens, repeated physical injury, heat, ionising radiation, or hypoxia. The errors that cause cancer are self-amplifying and compounding, for example: A mutation in the error-correcting machinery of a cell might cause that cell and its children to accumulate errors more rapidly. A further mutation in an oncogene might cause the cell to reproduce more rapidly and more frequently than its normal counterparts. A further mutation may cause loss of a tumor suppressor gene, disrupting the apoptosis signaling pathway and immortalizing the cell. A further mutation in the signaling machinery of the cell might send error-causing signals to nearby cells. The transformation of a normal cell into cancer is akin to a chain reaction caused by initial errors, which compound into more severe errors, each progressively allowing the cell to escape more controls that limit normal tissue growth. This rebellion-like scenario is an undesirable survival of the fittest, where the driving forces of evolution work against the body's design and enforcement of order. Once cancer has begun to develop, this ongoing process, termed clonal evolution, drives progression towards more invasive stages. Clonal evolution leads to intra-tumour heterogeneity (cancer cells with heterogeneous mutations) that complicates designing effective treatment strategies and requires an evolutionary approach to designing treatment. Characteristic abilities developed by cancers are divided into categories, specifically evasion of apoptosis, self-sufficiency in growth signals, insensitivity to anti-growth signals, sustained angiogenesis, limitless replicative potential, metastasis, reprogramming of energy metabolism and evasion of immune destruction. Epigenetics The classical view of cancer is a set of diseases driven by progressive genetic abnormalities that include mutations in tumor-suppressor genes and oncogenes, and in chromosomal abnormalities. A role for epigenetic alterations was identified in the early 21st century. Epigenetic alterations are functionally relevant modifications to the genome that do not change the nucleotide sequence. Examples of such modifications are changes in DNA methylation (hypermethylation and hypomethylation), histone modification and changes in chromosomal architecture (caused by inappropriate expression of proteins such as HMGA2 or HMGA1). Each of these alterations regulates gene expression without altering the underlying DNA sequence. These changes may remain through cell divisions, endure for multiple generations, and can be considered as equivalent to mutations. Epigenetic alterations occur frequently in cancers. As an example, one study listed protein coding genes that were frequently altered in their methylation in association with colon cancer. These included 147 hypermethylated and 27 hypomethylated genes. Of the hypermethylated genes, 10 were hypermethylated in 100% of colon cancers and many others were hypermethylated in more than 50% of colon cancers. While epigenetic alterations are found in cancers, the epigenetic alterations in DNA repair genes, causing reduced expression of DNA repair proteins, may be of particular importance. Such alterations may occur early in progression to cancer and are a possible cause of the genetic instability characteristic of cancers. Reduced expression of DNA repair genes disrupts DNA repair. This is shown in the figure at the 4th level from the top. 
(In the figure, red wording indicates the central role of DNA damage and defects in DNA repair in progression to cancer.) When DNA repair is deficient DNA damage remains in cells at a higher than usual level (5th level) and causes increased frequencies of mutation and/or epimutation (6th level). Mutation rates increase substantially in cells defective in DNA mismatch repair or in homologous recombinational repair (HRR). Chromosomal rearrangements and aneuploidy also increase in HRR defective cells. Higher levels of DNA damage cause increased mutation (right side of figure) and increased epimutation. During repair of DNA double strand breaks, or repair of other DNA damage, incompletely cleared repair sites can cause epigenetic gene silencing. Deficient expression of DNA repair proteins due to an inherited mutation can increase cancer risks. Individuals with an inherited impairment in any of 34 DNA repair genes (see article DNA repair-deficiency disorder) have increased cancer risk, with some defects ensuring a 100% lifetime chance of cancer (e.g. p53 mutations). Germ line DNA repair mutations are noted on the figure's left side. However, such germline mutations (which cause highly penetrant cancer syndromes) are the cause of only about 1 percent of cancers. In sporadic cancers, deficiencies in DNA repair are occasionally caused by a mutation in a DNA repair gene but are much more frequently caused by epigenetic alterations that reduce or silence expression of DNA repair genes. This is indicated in the figure at the 3rd level. Many studies of heavy metal-induced carcinogenesis show that such heavy metals cause a reduction in expression of DNA repair enzymes, some through epigenetic mechanisms. DNA repair inhibition is proposed to be a predominant mechanism in heavy metal-induced carcinogenicity. In addition, frequent epigenetic alterations of the DNA sequences code for small RNAs called microRNAs (or miRNAs). miRNAs do not code for proteins, but can "target" protein-coding genes and reduce their expression. Cancers usually arise from an assemblage of mutations and epimutations that confer a selective advantage leading to clonal expansion (see Field defects in progression to cancer). Mutations, however, may not be as frequent in cancers as epigenetic alterations. An average cancer of the breast or colon can have about 60 to 70 protein-altering mutations, of which about three or four may be "driver" mutations and the remaining ones may be "passenger" mutations. Metastasis Metastasis is the spread of cancer to other locations in the body. The dispersed tumors are called metastatic tumors, while the original is called the primary tumor. Almost all cancers can metastasize. Most cancer deaths are due to cancer that has metastasized. Metastasis is common in the late stages of cancer and it can occur via the blood or the lymphatic system or both. The typical steps in metastasis are local invasion, intravasation into the blood or lymph, circulation through the body, extravasation into the new tissue, proliferation and angiogenesis. Different types of cancers tend to metastasize to particular organs, but overall the most common places for metastases to occur are the lungs, liver, brain and the bones. Metabolism Normal cells typically generate only about 30% of energy from glycolysis, whereas most cancers rely on glycolysis for energy production (Warburg effect). 
But a minority of cancer types rely on oxidative phosphorylation as the primary energy source, including lymphoma, leukemia, and endometrial cancer. Even in these cases, however, the use of glycolysis as an energy source rarely exceeds 60%. A few cancers use glutamine as the major energy source, partly because it provides the nitrogen required for nucleotide (DNA, RNA) synthesis. Cancer stem cells often use oxidative phosphorylation or glutamine as a primary energy source.

Diagnosis
Most cancers are initially recognized either because of the appearance of signs or symptoms or through screening. Neither of these leads to a definitive diagnosis, which requires the examination of a tissue sample by a pathologist. People with suspected cancer are investigated with medical tests, commonly including blood tests, X-rays, (contrast) CT scans and endoscopy. The tissue diagnosis from the biopsy indicates the type of cell that is proliferating, its histological grade, genetic abnormalities and other features. Together, this information is useful to evaluate the prognosis and to choose the best treatment. Cytogenetics and immunohistochemistry are other types of tissue tests. These tests provide information about molecular changes (such as mutations, fusion genes and numerical chromosome changes) and may thus also indicate the prognosis and best treatment.
Cancer diagnosis can cause psychological distress, and psychosocial interventions, such as talking therapy, may help people with this. Some people choose to disclose the diagnosis widely; others prefer to keep the information private, especially shortly after the diagnosis, or to disclose it only partially or to selected people.

Classification
Cancers are classified by the type of cell that the tumor cells resemble, which is therefore presumed to be the origin of the tumor. These types include:
Carcinoma: Cancers derived from epithelial cells. This group includes many of the most common cancers and includes nearly all those in the breast, prostate, lung, pancreas and colon. Most of these are of the adenocarcinoma type, which means that the cancer has gland-like differentiation.
Sarcoma: Cancers arising from connective tissue (i.e. bone, cartilage, fat, nerve), each of which develops from cells originating in mesenchymal cells outside the bone marrow.
Lymphoma and leukemia: These two classes arise from hematopoietic (blood-forming) cells that leave the marrow and tend to mature in the lymph nodes and blood, respectively.
Germ cell tumor: Cancers derived from pluripotent cells, most often presenting in the testicle or the ovary (seminoma and dysgerminoma, respectively).
Blastoma: Cancers derived from immature "precursor" cells or embryonic tissue.
Cancers are usually named using -carcinoma, -sarcoma or -blastoma as a suffix, with the Latin or Greek word for the organ or tissue of origin as the root. For example, a cancer of the liver parenchyma arising from malignant epithelial cells is called a hepatocarcinoma, while a malignancy arising from primitive liver precursor cells is called a hepatoblastoma, and a cancer arising from fat cells is called a liposarcoma. For some common cancers, the English organ name is used. For example, the most common type of breast cancer is called ductal carcinoma of the breast. Here, the adjective ductal refers to the appearance of the cancer under the microscope, which suggests that it has originated in the milk ducts. Benign tumors (which are not cancers) are named using -oma as a suffix with the organ name as the root. 
For example, a benign tumor of smooth muscle cells is called a leiomyoma (the common name of this frequently occurring benign tumor in the uterus is fibroid). Confusingly, some types of cancer use the -noma suffix, examples including melanoma and seminoma. Some types of cancer are named for the size and shape of the cells under a microscope, such as giant cell carcinoma, spindle cell carcinoma and small-cell carcinoma. Prevention Cancer prevention is defined as active measures to decrease cancer risk. The vast majority of cancer cases are due to environmental risk factors. Many of these environmental factors are controllable lifestyle choices. Thus, cancer is generally preventable. Between 70% and 90% of common cancers are due to environmental factors and therefore potentially preventable. Greater than 30% of cancer deaths could be prevented by avoiding risk factors including: tobacco, excess weight/obesity, poor diet, physical inactivity, alcohol, sexually transmitted infections and air pollution. Further, poverty could be considered as an indirect risk factor in human cancers. Not all environmental causes are controllable, such as naturally occurring background radiation and cancers caused through hereditary genetic disorders and thus are not preventable via personal behavior. In 2019, ~44% of all cancer deaths – or ~4.5 M deaths or ~105 million lost disability-adjusted life years – were due to known clearly preventable risk factors, led by smoking, alcohol use and high BMI, according to a GBD systematic analysis. Dietary While many dietary recommendations have been proposed to reduce cancer risks, the evidence to support them is not definitive. The primary dietary factors that increase risk are obesity and alcohol consumption. Diets low in fruits and vegetables and high in red meat have been implicated but reviews and meta-analyses do not come to a consistent conclusion. A 2014 meta-analysis found no relationship between fruits and vegetables and cancer. Coffee is associated with a reduced risk of liver cancer. Studies have linked excessive consumption of red or processed meat to an increased risk of breast cancer, colon cancer and pancreatic cancer, a phenomenon that could be due to the presence of carcinogens in meats cooked at high temperatures. In 2015 the IARC reported that eating processed meat (e.g., bacon, ham, hot dogs, sausages) and, to a lesser degree, red meat was linked to some cancers. Dietary recommendations for cancer prevention typically include an emphasis on vegetables, fruit, whole grains and fish and an avoidance of processed and red meat (beef, pork, lamb), animal fats, pickled foods and refined carbohydrates. Medication Medications can be used to prevent cancer in a few circumstances. In the general population, NSAIDs reduce the risk of colorectal cancer; however, due to cardiovascular and gastrointestinal side effects, they cause overall harm when used for prevention. Aspirin has been found to reduce the risk of death from cancer by about 7%. COX-2 inhibitors may decrease the rate of polyp formation in people with familial adenomatous polyposis; however, it is associated with the same adverse effects as NSAIDs. Daily use of tamoxifen or raloxifene reduce the risk of breast cancer in high-risk women. The benefit versus harm for 5-alpha-reductase inhibitor such as finasteride is not clear. Vitamin supplementation does not appear to be effective at preventing cancer. 
While low blood levels of vitamin D are correlated with increased cancer risk, whether this relationship is causal and vitamin D supplementation is protective is not determined. One 2014 review found that supplements had no significant effect on cancer risk. Another 2014 review concluded that vitamin D3 may decrease the risk of death from cancer (one fewer death in 150 people treated over 5 years), but concerns with the quality of the data were noted. Beta-Carotene supplementation increases lung cancer rates in those who are high risk. Folic acid supplementation is not effective in preventing colon cancer and may increase colon polyps. Selenium supplementation has not been shown to reduce the risk of cancer. Vaccination Vaccines have been developed that prevent infection by some carcinogenic viruses. Human papillomavirus vaccine (Gardasil and Cervarix) decrease the risk of developing cervical cancer. The hepatitis B vaccine prevents infection with hepatitis B virus and thus decreases the risk of liver cancer. The administration of human papillomavirus and hepatitis B vaccinations is recommended where resources allow. Screening Unlike diagnostic efforts prompted by symptoms and medical signs, cancer screening involves efforts to detect cancer after it has formed, but before any noticeable symptoms appear. This may involve physical examination, blood or urine tests or medical imaging. Cancer screening is not available for many types of cancers. Even when tests are available, they may not be recommended for everyone. Universal screening or mass screening involves screening everyone. Selective screening identifies people who are at higher risk, such as people with a family history. Several factors are considered to determine whether the benefits of screening outweigh the risks and the costs of screening. These factors include: Possible harms from the screening test: for example, X-ray images involve exposure to potentially harmful ionizing radiation The likelihood of the test correctly identifying cancer The likelihood that cancer is present: Screening is not normally useful for rare cancers. Possible harms from follow-up procedures Whether suitable treatment is available Whether early detection improves treatment outcomes Whether cancer will ever need treatment Whether the test is acceptable to the people: If a screening test is too burdensome (for example, extremely painful), then people will refuse to participate. Cost Recommendations U.S. Preventive Services Task Force The U.S. Preventive Services Task Force (USPSTF) issues recommendations for various cancers: Strongly recommends cervical cancer screening in women who are sexually active and have a cervix at least until the age of 65. Recommend that Americans be screened for colorectal cancer via fecal occult blood testing, sigmoidoscopy, or colonoscopy starting at age 50 until age 75. Evidence is insufficient to recommend for or against screening for skin cancer, oral cancer, lung cancer, or prostate cancer in men under 75. Routine screening is not recommended for bladder cancer, testicular cancer, ovarian cancer, pancreatic cancer, or prostate cancer. Recommends mammography for breast cancer screening every two years from ages 50–74, but does not recommend either breast self-examination or clinical breast examination. A 2013 Cochrane review concluded that breast cancer screening by mammography had no effect in reducing mortality because of overdiagnosis and overtreatment. 
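The screening considerations above note that screening is rarely worthwhile for uncommon cancers and that even widely used programmes can lead to overdiagnosis. A minimal sketch of the underlying arithmetic, using assumed round numbers for sensitivity, specificity and prevalence rather than figures from any study cited here, shows how low prevalence drives down the chance that a positive result reflects a true cancer:

```python
# Illustrative only: why screening performs poorly for rare cancers.
# Sensitivity, specificity and prevalence are assumed round numbers,
# not figures from any screening programme discussed in this article.

def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Probability that a positive screening result reflects true disease (Bayes' rule)."""
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# A fairly accurate test (90% sensitive, 95% specific) applied to a rare cancer
# (1 case per 1,000 people screened) versus a more common one (20 per 1,000).
for prevalence in (0.001, 0.020):
    ppv = positive_predictive_value(0.90, 0.95, prevalence)
    print(f"prevalence {prevalence:.3f}: PPV = {ppv:.1%}")
```

With these assumed numbers, only about 2% of positive results are true cancers at the lower prevalence, compared with roughly 27% at the higher one, which is one reason the factors listed above weigh the likelihood that cancer is present alongside test accuracy and the harms of follow-up procedures.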
Japan Screens for gastric cancer using photofluorography due to the high incidence there. Genetic testing Genetic testing for individuals at high-risk of certain cancers is recommended by unofficial groups. Carriers of these mutations may then undergo enhanced surveillance, chemoprevention, or preventative surgery to reduce their subsequent risk. Management Many treatment options for cancer exist. The primary ones include surgery, chemotherapy, radiation therapy, hormonal therapy, targeted therapy and palliative care. Which treatments are used depends on the type, location and grade of the cancer as well as the patient's health and preferences. The treatment intent may or may not be curative. Chemotherapy Chemotherapy is the treatment of cancer with one or more cytotoxic anti-neoplastic drugs (chemotherapeutic agents) as part of a standardized regimen. The term encompasses a variety of drugs, which are divided into broad categories such as alkylating agents and antimetabolites. Traditional chemotherapeutic agents act by killing cells that divide rapidly, a critical property of most cancer cells. It was found that providing combined cytotoxic drugs is better than a single drug, a process called the combination therapy, which has an advantage in the statistics of survival and response to the tumor and in the progress of the disease. A Cochrane review concluded that combined therapy was more effective to treat metastasized breast cancer. However, generally it is not certain whether combination chemotherapy leads to better health outcomes, when both survival and toxicity are considered. Targeted therapy is a form of chemotherapy that targets specific molecular differences between cancer and normal cells. The first targeted therapies blocked the estrogen receptor molecule, inhibiting the growth of breast cancer. Another common example is the class of Bcr-Abl inhibitors, which are used to treat chronic myelogenous leukemia (CML). Currently, targeted therapies exist for many of the most common cancer types, including bladder cancer, breast cancer, colorectal cancer, kidney cancer, leukemia, liver cancer, lung cancer, lymphoma, pancreatic cancer, prostate cancer, skin cancer, and thyroid cancer as well as other cancer types. The efficacy of chemotherapy depends on the type of cancer and the stage. In combination with surgery, chemotherapy has proven useful in cancer types including breast cancer, colorectal cancer, pancreatic cancer, osteogenic sarcoma, testicular cancer, ovarian cancer and certain lung cancers. Chemotherapy is curative for some cancers, such as some leukemias, ineffective in some brain tumors, and needless in others, such as most non-melanoma skin cancers. The effectiveness of chemotherapy is often limited by its toxicity to other tissues in the body. Even when chemotherapy does not provide a permanent cure, it may be useful to reduce symptoms such as pain or to reduce the size of an inoperable tumor in the hope that surgery will become possible in the future. Radiation Radiation therapy involves the use of ionizing radiation in an attempt to either cure or improve symptoms. It works by damaging the DNA of cancerous tissue, causing mitotic catastrophe resulting in the death of the cancer cells. To spare normal tissues (such as skin or organs, which radiation must pass through to treat the tumor), shaped radiation beams are aimed from multiple exposure angles to intersect at the tumor, providing a much larger dose there than in the surrounding, healthy tissue. 
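The preceding paragraph explains that shaped beams aimed from multiple angles intersect at the tumor so that it receives a much larger dose than the surrounding tissue. A toy sketch on a small 2-D grid (an assumed, idealized geometry, not a clinical dose calculation) illustrates the geometric idea: tissue along any single beam path receives one beam's dose, while the tumor at the intersection accumulates the dose of every beam.

```python
# Toy illustration of dose summation from intersecting beams; assumes a uniform
# 11x11 grid, three idealized beams and no attenuation with depth.
import numpy as np

n = 11
dose = np.zeros((n, n))          # 2-D "patient" grid
centre = n // 2                  # assumed tumour position at the grid centre

dose[centre, :] += 1.0                    # beam 1: left to right
dose[:, centre] += 1.0                    # beam 2: top to bottom
dose[np.arange(n), np.arange(n)] += 1.0   # beam 3: diagonal

print("dose at tumour:", dose[centre, centre])                                # 3.0
print("peak dose outside tumour:", dose[dose < dose[centre, centre]].max())   # 1.0
```

Real treatment planning uses many more beam angles or continuous arcs and models attenuation and tissue differences, but the same overlap principle is what spares the healthy tissue that each individual beam passes through.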
As with chemotherapy, cancers vary in their response to radiation therapy. Radiation therapy is used in about half of cases. The radiation can be either from internal sources (brachytherapy) or external sources. The radiation is most commonly low energy X-rays for treating skin cancers, while higher energy X-rays are used for cancers within the body. Radiation is typically used in addition to surgery and or chemotherapy. For certain types of cancer, such as early head and neck cancer, it may be used alone. Radiation therapy after surgery for brain metastases has been shown to not improve overall survival in patients compared to surgery alone. For painful bone metastasis, radiation therapy has been found to be effective in about 70% of patients. Surgery Surgery is the primary method of treatment for most isolated, solid cancers and may play a role in palliation and prolongation of survival. It is typically an important part of definitive diagnosis and staging of tumors, as biopsies are usually required. In localized cancer, surgery typically attempts to remove the entire mass along with, in certain cases, the lymph nodes in the area. For some types of cancer this is sufficient to eliminate the cancer. Palliative care Palliative care is treatment that attempts to help the patient feel better and may be combined with an attempt to treat the cancer. Palliative care includes action to reduce physical, emotional, spiritual and psycho-social distress. Unlike treatment that is aimed at directly killing cancer cells, the primary goal of palliative care is to improve quality of life. People at all stages of cancer treatment typically receive some kind of palliative care. In some cases, medical specialty professional organizations recommend that patients and physicians respond to cancer only with palliative care. This applies to patients who: Display low performance status, implying limited ability to care for themselves Received no benefit from prior evidence-based treatments Are not eligible to participate in any appropriate clinical trial No strong evidence implies that treatment would be effective Palliative care may be confused with hospice and therefore only indicated when people approach end of life. Like hospice care, palliative care attempts to help the patient cope with their immediate needs and to increase comfort. Unlike hospice care, palliative care does not require people to stop treatment aimed at the cancer. Multiple national medical guidelines recommend early palliative care for patients whose cancer has produced distressing symptoms or who need help coping with their illness. In patients first diagnosed with metastatic disease, palliative care may be immediately indicated. Palliative care is indicated for patients with a prognosis of less than 12 months of life even given aggressive treatment. Immunotherapy A variety of therapies using immunotherapy, stimulating or helping the immune system to fight cancer, have come into use since 1997. Approaches include: Monoclonal antibody therapy Checkpoint therapy (therapy that targets the immune checkpoints or regulators of the immune system) Adoptive cell transfer Laser therapy Laser therapy uses high-intensity light to treat cancer by shrinking or destroying tumors or precancerous growths. Lasers are most commonly used to treat superficial cancers that are on the surface of the body or the lining of internal organs. 
It is used to treat basal cell skin cancer and the very early stages of others such as cervical, penile, vaginal, vulvar, and non-small cell lung cancer. It is often combined with other treatments, such as surgery, chemotherapy, or radiation therapy. Laser-induced interstitial thermotherapy (LITT), or interstitial laser photocoagulation, treats some cancers with hyperthermia, using heat to shrink tumors by damaging or killing cancer cells. Lasers are more precise than surgery and cause less damage, pain, bleeding, swelling, and scarring. A disadvantage is that surgeons must have specialized training, and laser therapy may be more expensive than other treatments.

Alternative medicine
Complementary and alternative cancer treatments are a diverse group of therapies, practices and products that are not part of conventional medicine. "Complementary medicine" refers to methods and substances used along with conventional medicine, while "alternative medicine" refers to compounds used instead of conventional medicine. Most complementary and alternative medicines for cancer have not been studied or tested using conventional techniques such as clinical trials. Some alternative treatments have been investigated and shown to be ineffective but still continue to be marketed and promoted. Cancer researcher Andrew J. Vickers stated, "The label 'unproven' is inappropriate for such therapies; it is time to assert that many alternative cancer therapies have been 'disproven'."

Prognosis
Survival rates vary by cancer type and by the stage at which it is diagnosed, ranging from majority survival to complete mortality five years after diagnosis. Once a cancer has metastasized, prognosis normally becomes much worse. About half of patients receiving treatment for invasive cancer (excluding carcinoma in situ and non-melanoma skin cancers) die from that cancer or its treatment. A majority of cancer deaths are due to metastases of the primary tumor. Survival is worse in the developing world, partly because the types of cancer that are most common there are harder to treat than those associated with developed countries.
Those who survive cancer develop a second primary cancer at about twice the rate of those never diagnosed. The increased risk is believed to be due to the random chance of developing any cancer, the likelihood of surviving the first cancer, the same risk factors that produced the first cancer, unwanted side effects of treating the first cancer (particularly radiation therapy), and better compliance with screening.
Predicting short- or long-term survival depends on many factors. The most important are the cancer type and the patient's age and overall health. Those who are frail with other health problems have lower survival rates than otherwise healthy people. Centenarians are unlikely to survive for five years even if treatment is successful. People who report a higher quality of life tend to survive longer. People with lower quality of life may be affected by depression and other complications and/or disease progression that impairs both the quality and quantity of life. Additionally, patients with worse prognoses may be depressed or report poorer quality of life because they perceive that their condition is likely to be fatal.
People with cancer have an increased risk of blood clots in their veins, which can be life-threatening. The use of blood thinners such as heparin decreases the risk of blood clots but has not been shown to increase survival in people with cancer. 
People who take blood thinners also have an increased risk of bleeding. Although extremely rare, some forms of cancer, even from an advanced stage, can heal spontaneously, a phenomenon known as spontaneous remission.

Epidemiology
An estimated 18.1 million new cases of cancer and 9.6 million cancer deaths occurred globally in 2018. About 20% of males and 17% of females will get cancer at some point in time, while 13% of males and 9% of females will die from it. In 2008, approximately 12.7 million cancers were diagnosed (excluding non-melanoma skin cancers and other non-invasive cancers) and in 2010 nearly 7.98 million people died. Cancers account for approximately 16% of deaths. The most common causes of cancer death are lung cancer (1.76 million deaths), colorectal cancer (860,000), stomach cancer (780,000), liver cancer (780,000), and breast cancer (620,000). This makes invasive cancer the leading cause of death in the developed world and the second leading cause in the developing world. Over half of cases occur in the developing world. Deaths from cancer were 5.8 million in 1990; deaths have been increasing primarily due to longer lifespans and lifestyle changes in the developing world.
The most significant risk factor for developing cancer is age. Although it is possible for cancer to strike at any age, most patients with invasive cancer are over 65. According to cancer researcher Robert A. Weinberg, "If we lived long enough, sooner or later we all would get cancer." Some of the association between aging and cancer is attributed to immunosenescence, errors accumulated in DNA over a lifetime, and age-related changes in the endocrine system. Aging's effect on cancer is complicated by factors such as DNA damage and inflammation promoting it and factors such as vascular aging and endocrine changes inhibiting it.
Some slow-growing cancers are particularly common but often are not fatal. Autopsy studies in Europe and Asia showed that up to 36% of people have undiagnosed and apparently harmless thyroid cancer at the time of their deaths and that 80% of men develop prostate cancer by age 80. As these cancers do not cause the patient's death, identifying them would have represented overdiagnosis rather than useful medical care.
The three most common childhood cancers are leukemia (34%), brain tumors (23%) and lymphomas (12%). In the United States, cancer affects about 1 in 285 children. Rates of childhood cancer increased by 0.6% per year between 1975 and 2002 in the United States and by 1.1% per year between 1978 and 1997 in Europe. Death from childhood cancer decreased by half between 1975 and 2010 in the United States.

History
Cancer has existed for all of human history. The earliest written record regarding cancer is found in the Egyptian Edwin Smith Papyrus and describes breast cancer. Hippocrates described several kinds of cancer, referring to them with the Greek word καρκίνος karkinos (crab or crayfish). This name comes from the appearance of the cut surface of a solid malignant tumor, with "the veins stretched on all sides as the animal the crab has its feet, whence it derives its name". Galen stated that "cancer of the breast is so called because of the fancied resemblance to a crab given by the lateral prolongations of the tumor and the adjacent distended veins". Celsus (c. 25 BC – 50 AD) translated karkinos into the Latin cancer, also meaning crab, and recommended surgery as treatment. Galen (2nd century AD) disagreed with the use of surgery and recommended purgatives instead. 
These recommendations largely stood for 1000 years. In the 15th, 16th and 17th centuries, it became acceptable for doctors to dissect bodies to discover the cause of death. The German professor Wilhelm Fabry believed that breast cancer was caused by a milk clot in a mammary duct. The Dutch professor Francois de la Boe Sylvius, a follower of Descartes, believed that all disease was the outcome of chemical processes and that acidic lymph fluid was the cause of cancer. His contemporary Nicolaes Tulp believed that cancer was a poison that slowly spreads and concluded that it was contagious. The physician John Hill described tobacco sniffing as the cause of nose cancer in 1761. This was followed by the report in 1775 by British surgeon Percivall Pott that chimney sweeps' carcinoma, a cancer of the scrotum, was a common disease among chimney sweeps. With the widespread use of the microscope in the 18th century, it was discovered that the 'cancer poison' spread from the primary tumor through the lymph nodes to other sites ("metastasis"). This view of the disease was first formulated by the English surgeon Campbell De Morgan between 1871 and 1874. Society and culture Although many diseases (such as heart failure) may have a worse prognosis than most cases of cancer, cancer is the subject of widespread fear and taboos. The euphemism of "a long illness" to describe cancers leading to death is still commonly used in obituaries, rather than naming the disease explicitly, reflecting an apparent stigma. Cancer is also euphemised as "the C-word"; Macmillan Cancer Support uses the term to try to lessen the fear around the disease. In Nigeria, one local name for cancer translates into English as "the disease that cannot be cured". This deep belief that cancer is necessarily a difficult and usually deadly disease is reflected in the systems chosen by society to compile cancer statistics: the most common form of cancer—non-melanoma skin cancers, accounting for about one-third of cancer cases worldwide, but very few deaths—are excluded from cancer statistics specifically because they are easily treated and almost always cured, often in a single, short, outpatient procedure. Western conceptions of patients' rights for people with cancer include a duty to fully disclose the medical situation to the person, and the right to engage in shared decision-making in a way that respects the person's own values. In other cultures, other rights and values are preferred. For example, most African cultures value whole families rather than individualism. In parts of Africa, a diagnosis is commonly made so late that cure is not possible, and treatment, if available at all, would quickly bankrupt the family. As a result of these factors, African healthcare providers tend to let family members decide whether, when and how to disclose the diagnosis, and they tend to do so slowly and circuitously, as the person shows interest and an ability to cope with the grim news. People from Asian and South American countries also tend to prefer a slower, less candid approach to disclosure than is idealized in the United States and Western Europe, and they believe that sometimes it would be preferable not to be told about a cancer diagnosis. In general, disclosure of the diagnosis is more common than it was in the 20th century, but full disclosure of the prognosis is not offered to many patients around the world. 
In the United States and some other cultures, cancer is regarded as a disease that must be "fought" to end the "civil insurrection"; a War on Cancer was declared in the US. Military metaphors are particularly common in descriptions of cancer's human effects, and they emphasize both the state of the patient's health and the need for the patient to take immediate, decisive action rather than to delay, to ignore or to rely entirely on others. The military metaphors also help rationalize radical, destructive treatments.
In the 1970s, a relatively popular alternative cancer treatment in the US was a specialized form of talk therapy, based on the idea that cancer was caused by a bad attitude. People with a "cancer personality" (depressed, repressed, self-loathing and afraid to express their emotions) were believed to have manifested cancer through subconscious desire. Some psychotherapists claimed that treatment to change the patient's outlook on life would cure the cancer. Among other effects, this belief allowed society to blame the victim for having caused the cancer (by "wanting" it) or having prevented its cure (by not becoming a sufficiently happy, fearless and loving person). It also increased patients' anxiety, as they incorrectly believed that natural emotions of sadness, anger or fear shorten their lives. The idea was ridiculed by Susan Sontag, who published Illness as Metaphor while recovering from treatment for breast cancer in 1978. Although the original idea is now generally regarded as nonsense, it partly persists in a reduced form as a widespread, but incorrect, belief that deliberately cultivating a habit of positive thinking will increase survival. This notion is particularly strong in breast cancer culture.
One idea about why people with cancer are blamed or stigmatized, called the just-world fallacy, is that blaming cancer on the patient's actions or attitudes allows the blamers to regain a sense of control. This is based upon the blamers' belief that the world is fundamentally just, and so any dangerous illness, like cancer, must be a type of punishment for bad choices, because in a just world bad things would not happen to good people.

Economic effect
The total health care expenditure on cancer in the US was estimated to be $80.2 billion in 2015. Even though cancer-related health care expenditure has increased in absolute terms during recent decades, the share of health expenditure devoted to cancer treatment remained close to 5% between the 1960s and 2004. A similar pattern has been observed in Europe, where about 6% of all health care expenditure is spent on cancer treatment. In addition to health care expenditure and financial toxicity, cancer causes indirect costs in the form of productivity losses due to sick days, permanent incapacity and disability, as well as premature death during working age. Cancer also generates costs for informal care. Indirect costs and informal care costs are typically estimated to exceed or equal the health care costs of cancer.

Workplace
In the United States, cancer is included as a protected condition by the Equal Employment Opportunity Commission (EEOC), mainly because of the potential for cancer to have discriminatory effects on workers. Discrimination in the workplace could occur if an employer holds a false belief that a person with cancer is not capable of doing a job properly and will need more sick leave than other employees. 
Employers may also make hiring or firing decisions based on misconceptions about cancer disabilities, if present. The EEOC provides interview guidelines for employers, as well as lists of possible solutions for assessing and accommodating employees with cancer. Effect on divorce A study found women were around six times more likely to be divorced soon after a diagnosis of cancer compared to men. Rate of separation for cancer-survivors showed correlations with race, age, income, and comorbidities in a study. A review found a somewhat decreased divorce rate for most cancer types, and noted study heterogeneity and methodological weaknesses for many studies on effects of cancer on divorce. Research Because cancer is a class of diseases, it is unlikely that there will ever be a single "cure for cancer" any more than there will be a single treatment for all infectious diseases. Angiogenesis inhibitors were once incorrectly thought to have potential as a "silver bullet" treatment applicable to many types of cancer. Angiogenesis inhibitors and other cancer therapeutics are used in combination to reduce cancer morbidity and mortality. Experimental cancer treatments are studied in clinical trials to compare the proposed treatment to the best existing treatment. Treatments that succeeded in one cancer type can be tested against other types. Diagnostic tests are under development to better target the right therapies to the right patients, based on their individual biology. Cancer research focuses on the following issues: Agents (e.g. viruses) and events (e.g. mutations) that cause or facilitate genetic changes in cells destined to become cancer. The precise nature of the genetic damage and the genes that are affected by it. The consequences of those genetic changes on the biology of the cell, both in generating the defining properties of a cancer cell and in facilitating additional genetic events that lead to further progression of the cancer. The improved understanding of molecular biology and cellular biology due to cancer research has led to new treatments for cancer since US President Richard Nixon declared the "War on Cancer" in 1971. Since then, the country has spent over $200 billion on cancer research, including resources from public and private sectors. The cancer death rate (adjusting for size and age of the population) declined by five percent between 1950 and 2005. Competition for financial resources appears to have suppressed the creativity, cooperation, risk-taking and original thinking required to make fundamental discoveries, unduly favoring low-risk research into small incremental advancements over riskier, more innovative research. Other consequences of competition appear to be many studies with dramatic claims whose results cannot be replicated and perverse incentives that encourage grantee institutions to grow without making sufficient investments in their own faculty and facilities. Virotherapy, which uses convert viruses, is being studied. In the wake of the COVID-19 pandemic, there has been a worry that cancer research and treatment are slowing down. On 2 December 2023, Nano Today published a groundbreaking discovery involving "NK cell-engaging nanodrones" for targeted cancer treatment. The development of "NK cell-engaging nanodrones" represents a significant leap forward in cancer treatment, showcasing how cutting-edge nanotechnology and immunotherapy can be combined to target and eliminate cancer cells with unprecedented precision. 
These nanodrones are designed to harness the power of natural killer (NK) cells, which play a crucial role in the body's immune response against tumors. By directing these NK cells specifically to the sites of tumors, the nanodrones can effectively concentrate the immune system's attack on the cancer cells, potentially leading to better outcomes for patients. The key innovation here lies in the use of protein cage nanoparticle-based systems. These systems are engineered to carry signals that attract NK cells directly to the tumor, overcoming one of the major challenges in cancer immunotherapy: ensuring that the immune cells find and attack only the cancer cells without harming healthy tissue. This targeted approach not only increases the efficacy of the treatment but also minimizes side effects, a common concern with broader-acting cancer therapies. Pregnancy Cancer affects approximately 1 in 1,000 pregnant women. The most common cancers found during pregnancy are the same as the most common cancers found in non-pregnant women during childbearing ages: breast cancer, cervical cancer, leukemia, lymphoma, melanoma, ovarian cancer and colorectal cancer. Diagnosing a new cancer in a pregnant woman is difficult, in part because any symptoms are commonly assumed to be a normal discomfort associated with pregnancy. As a result, cancer is typically discovered at a somewhat later stage than average. Some imaging procedures, such as MRIs (magnetic resonance imaging), CT scans, ultrasounds and mammograms with fetal shielding are considered safe during pregnancy; some others, such as PET scans, are not. Treatment is generally the same as for non-pregnant women. However, radiation and radioactive drugs are normally avoided during pregnancy, especially if the fetal dose might exceed 100 cGy. In some cases, some or all treatments are postponed until after birth if the cancer is diagnosed late in the pregnancy. Early deliveries are often used to advance the start of treatment. Surgery is generally safe, but pelvic surgeries during the first trimester may cause miscarriage. Some treatments, especially certain chemotherapy drugs given during the first trimester, increase the risk of birth defects and pregnancy loss (spontaneous abortions and stillbirths). Elective abortions are not required and, for the most common forms and stages of cancer, do not improve the mother's survival. In a few instances, such as advanced uterine cancer, the pregnancy cannot be continued and in others, the patient may end the pregnancy so that she can begin aggressive chemotherapy. Some treatments can interfere with the mother's ability to give birth vaginally or to breastfeed. Cervical cancer may require birth by Caesarean section. Radiation to the breast reduces the ability of that breast to produce milk and increases the risk of mastitis. Also, when chemotherapy is given after birth, many of the drugs appear in breast milk, which could harm the baby. Other animals Veterinary oncology, concentrating mainly on cats and dogs, is a growing specialty in wealthy countries and the major forms of human treatment such as surgery and radiotherapy may be offered. The most common types of cancer differ, but the cancer burden seems at least as high in pets as in humans. Animals, typically rodents, are often used in cancer research and studies of natural cancers in larger animals may benefit research into human cancer. Across wild animals, there is still limited data on cancer. 
Nonetheless, a study published in 2022, which explored cancer risk in 110,148 individual (non-domesticated) zoo mammals belonging to 191 species, demonstrated that cancer is a ubiquitous disease of mammals and can emerge anywhere along the mammalian phylogeny. This research also highlighted that cancer risk is not uniformly distributed across mammals. For instance, species in the order Carnivora are particularly prone to cancer (e.g. over 25% of clouded leopards, bat-eared foxes and red wolves die of cancer), while ungulates (especially even-toed ungulates) appear to face consistently low cancer risks. In non-humans, a few types of transmissible cancer have also been described, wherein the cancer spreads between animals by transmission of the tumor cells themselves. This phenomenon is seen in dogs with Sticker's sarcoma (also known as canine transmissible venereal tumor), and in Tasmanian devils with devil facial tumour disease (DFTD). See also Cancer screening Cancer treatment Causes of cancer Epidemiology of cancer Occupational cancer Oncology References Further reading External links IARC Publications (WHO) | Publications.iarc.fr "On telling cancer patients to have a positive attitude" at The Atlantic WHO fact sheet on cancer National Firefighter Registry (NFR) for Cancer, National Institute for Occupational Safety and Health (NIOSH), USA Stop carcinogens at work, EU OSHA. The site shares information to help prevent workers from being exposed to carcinogens in the workplace. Occupational Cancer, NIOSH. NIOSH Pocket guide to chemical hazards, Appendix A- NIOSH Potential Occupational Carcinogens. Aging-associated diseases Cancer Causes of amputation Latin words and phrases
Medical biology
Medical biology is a field of biology that has practical applications in medicine, health care, and laboratory diagnostics. It includes many biomedical disciplines and areas of specialty that typically contain the "bio-" prefix, such as molecular biology, biochemistry, biophysics, biotechnology, cell biology, embryology, nanobiotechnology, biological engineering, laboratory medical biology, cytogenetics, genetics, gene therapy, bioinformatics, biostatistics, systems biology, microbiology, virology, parasitology, physiology, pathology, toxicology, and many others that generally concern the life sciences as applied to medicine. Medical biology is the cornerstone of modern health care and laboratory diagnostics. It concerns a wide range of scientific and technological approaches: from in vitro diagnostics to in vitro fertilisation, from the molecular mechanisms of cystic fibrosis to the population dynamics of HIV, from the understanding of molecular interactions to the study of carcinogenesis, and from single-nucleotide polymorphisms (SNPs) to gene therapy. Building on molecular biology, medical biology integrates the development of molecular medicine with large-scale structural and functional studies of the human genome, transcriptome, proteome and metabolome, with the particular aim of devising new technologies for prediction, diagnosis and therapy. See also External links References Clinical medicine Biomedicine
Human skeleton
The human skeleton is the internal framework of the human body. It is composed of around 270 bones at birth – this total decreases to around 206 bones by adulthood after some bones have fused together. The bone mass in the skeleton makes up about 14% of the total body weight (ca. 10–11 kg for an average person) and reaches maximum mass between the ages of 25 and 30. The human skeleton can be divided into the axial skeleton and the appendicular skeleton. The axial skeleton is formed by the vertebral column, the rib cage, the skull and other associated bones. The appendicular skeleton, which is attached to the axial skeleton, is formed by the shoulder girdle, the pelvic girdle and the bones of the upper and lower limbs. The human skeleton performs six major functions: support, movement, protection, production of blood cells, storage of minerals, and endocrine regulation. The human skeleton is not as sexually dimorphic as that of many other primate species, but subtle differences between sexes in the morphology of the skull, dentition, long bones, and pelvis exist. In general, female skeletal elements tend to be smaller and less robust than corresponding male elements within a given population. The human female pelvis is also different from that of males in order to facilitate childbirth. Unlike most primates, human males do not have penile bones. Divisions Axial The axial skeleton (80 bones) is formed by the vertebral column (32–34 bones; the number of vertebrae differs from person to person because the lower two parts, the sacrum and the coccyx, may vary in length), a part of the rib cage (12 pairs of ribs and the sternum), and the skull (22 bones and 7 associated bones). The upright posture of humans is maintained by the axial skeleton, which transmits the weight from the head, the trunk, and the upper extremities down to the lower extremities at the hip joints. The bones of the spine are supported by many ligaments. The erector spinae muscles also support the spine and are useful for balance. Appendicular The appendicular skeleton (126 bones) is formed by the pectoral girdles, the upper limbs, the pelvic girdle or pelvis, and the lower limbs. Their functions are to make locomotion possible and to protect the major organs of digestion, excretion and reproduction. Functions The skeleton serves six major functions: support, movement, protection, production of blood cells, storage of minerals and endocrine regulation. Support The skeleton provides the framework which supports the body and maintains its shape. The pelvis, associated ligaments and muscles provide a floor for the pelvic structures. Without the rib cage, costal cartilages, and intercostal muscles, the lungs would collapse. Movement The joints between bones allow movement, some allowing a wider range of movement than others, e.g. the ball and socket joint allows a greater range of movement than the pivot joint at the neck. Movement is powered by skeletal muscles, which are attached to the skeleton at various sites on bones. Muscles, bones, and joints provide the principal mechanics for movement, all coordinated by the nervous system. It is believed that the reduction of human bone density in prehistoric times reduced the agility and dexterity of human movement. The shift from hunting to agriculture caused human bone density to decrease significantly. Protection The skeleton helps to protect many vital internal organs from being damaged. The skull protects the brain. The vertebrae protect the spinal cord. 
The rib cage, spine, and sternum protect the lungs, heart and major blood vessels. Blood cell production The skeleton is the site of haematopoiesis, the development of blood cells that takes place in the bone marrow. In children, haematopoiesis occurs primarily in the marrow of the long bones such as the femur and tibia. In adults, it occurs mainly in the pelvis, cranium, vertebrae, and sternum. Storage The bone matrix can store calcium and is involved in calcium metabolism, and bone marrow can store iron in ferritin and is involved in iron metabolism. However, bones are not entirely made of calcium, but a mixture of chondroitin sulfate and hydroxyapatite, the latter making up 70% of a bone. Hydroxyapatite is in turn composed of 39.8% of calcium, 41.4% of oxygen, 18.5% of phosphorus, and 0.2% of hydrogen by mass. Chondroitin sulfate is a sugar made up primarily of oxygen and carbon. Endocrine regulation Bone cells release a hormone called osteocalcin, which contributes to the regulation of blood sugar (glucose) and fat deposition. Osteocalcin increases both insulin secretion and sensitivity, in addition to boosting the number of insulin-producing cells and reducing stores of fat. Sex differences Anatomical differences between human males and females are highly pronounced in some soft tissue areas, but tend to be limited in the skeleton. The human skeleton is not as sexually dimorphic as that of many other primate species, but subtle differences between sexes in the morphology of the skull, dentition, long bones, and pelvis are exhibited across human populations. In general, female skeletal elements tend to be smaller and less robust than corresponding male elements within a given population. It is not known whether or to what extent those differences are genetic or environmental. Skull A variety of gross morphological traits of the human skull demonstrate sexual dimorphism, such as the median nuchal line, mastoid processes, supraorbital margin, supraorbital ridge, and the chin. Dentition Human inter-sex dental dimorphism centers on the canine teeth, but it is not nearly as pronounced as in the other great apes. Long bones Long bones are generally larger in males than in females within a given population. Muscle attachment sites on long bones are often more robust in males than in females, reflecting a difference in overall muscle mass and development between sexes. Sexual dimorphism in the long bones is commonly characterized by morphometric or gross morphological analyses. Pelvis The human pelvis exhibits greater sexual dimorphism than other bones, specifically in the size and shape of the pelvic cavity, ilia, greater sciatic notches, and the sub-pubic angle. The Phenice method is commonly used to determine the sex of an unidentified human skeleton by anthropologists with 96% to 100% accuracy in some populations. Women's pelvises are wider in the pelvic inlet and are wider throughout the pelvis to allow for child birth. The sacrum in the women's pelvis is curved inwards to allow the child to have a "funnel" to assist in the child's pathway from the uterus to the birth canal. Clinical significance There are many classified skeletal disorders. One of the most common is osteoporosis. Also common is scoliosis, a side-to-side curve in the back or spine, often creating a pronounced "C" or "S" shape when viewed on an x-ray of the spine. This condition is most apparent during adolescence, and is most common with females. Arthritis Arthritis is a disorder of the joints. 
It involves inflammation of one or more joints. A joint affected by arthritis may be painful to move, may move in unusual directions or may be completely immobile. Symptoms vary between types of arthritis. The most common form of arthritis, osteoarthritis, can affect both the larger and smaller joints of the human skeleton. The cartilage in the affected joints will degrade, soften and wear away. This decreases the mobility of the joints and decreases the space between bones where cartilage should be. Osteoporosis Osteoporosis is a disease of bone where there is reduced bone mineral density, increasing the likelihood of fractures. Osteoporosis is defined by the World Health Organization in women as a bone mineral density 2.5 standard deviations below peak bone mass, relative to the age and sex-matched average, as measured by dual energy X-ray absorptiometry, with the term "established osteoporosis" including the presence of a fragility fracture. Osteoporosis is most common in women after menopause, when it is called "postmenopausal osteoporosis", but may develop in men and premenopausal women in the presence of particular hormonal disorders and other chronic diseases or as a result of smoking and medications, specifically glucocorticoids. Osteoporosis usually has no symptoms until a fracture occurs. For this reason, DEXA scans are often done in people with one or more risk factors, who may have developed osteoporosis and be at risk of fracture. Osteoporosis treatment includes advice to stop smoking, decrease alcohol consumption, exercise regularly, and have a healthy diet. Calcium supplements may also be advised, as may vitamin D. When medication is used, it may include bisphosphonates or strontium ranelate, and osteoporosis may be one factor considered when commencing hormone replacement therapy. History India The Sushruta Samhita, composed between the 6th century BCE and 5th century CE, speaks of 360 bones. Books on Salya-Shastra (surgical science) know of only 300. The text then lists the total of 300 as follows: 120 in the extremities (e.g. hands, legs), 117 in the pelvic area, sides, back, abdomen and breast, and 63 in the neck and upwards. The text then explains how these subtotals were empirically verified. The discussion shows that the Indian tradition nurtured diversity of thought, with the Sushruta school reaching its own conclusions and differing from the Atreya-Caraka tradition. The difference in the count of bones between the two schools arises partly because the Charaka Samhita includes 32 tooth sockets in its count, and partly from their differing opinions on how and when to count a cartilage as bone (which both sometimes do, unlike modern anatomy). Hellenistic world The study of bones in ancient Greece started under Ptolemaic kings due to their link to Egypt. Herophilos, through his dissections of human corpses in Alexandria, is credited as the pioneer of the field. His works are lost but are often cited by notable persons in the field such as Galen and Rufus of Ephesus. Galen himself did little dissection though and relied on the work of others like Marinus of Alexandria, as well as his own observations of gladiator cadavers and animals. According to Katherine Park, in medieval Europe dissection continued to be practiced, contrary to the popular understanding that such practices were taboo and thus completely banned. The practice of holy autopsy, such as in the case of Clare of Montefalco, further supports this claim. 
Alexandria continued as a center of anatomy under Islamic rule, with Ibn Zuhr a notable figure. Chinese understandings are divergent, as the closest corresponding concept in the medicinal system seems to be the meridians, although given that Hua Tuo regularly performed surgery, there may be some distance between medical theory and actual understanding. Renaissance Leonardo da Vinci made studies of the skeleton, albeit unpublished in his time. Many artists, Antonio del Pollaiuolo being the first, performed dissections for better understanding of the body, although they concentrated mostly on the muscles. Vesalius, regarded as the founder of modern anatomy, authored the book De humani corporis fabrica, which contained many illustrations of the skeleton and other body parts, correcting some theories dating from Galen, such as the lower jaw being a single bone instead of two. Various other figures like Alessandro Achillini also contributed to the further understanding of the skeleton. 18th century As early as 1797, the death goddess or folk saint known as Santa Muerte has been represented as a skeleton. See also List of bones of the human skeleton Distraction osteogenesis References Bibliography Endocrine system Human anatomy
Contagious disease
A contagious disease is an infectious disease that can be spread rapidly in several ways, including direct contact, indirect contact, and droplet contact. These diseases are caused by organisms such as parasites, bacteria, fungi, and viruses. While many types of organisms live on the human body and are usually harmless, these organisms can sometimes cause disease. Some common infectious diseases are influenza, COVID-19, Ebola, hepatitis, HIV/AIDS, human papillomavirus infection, polio, and Zika virus. A disease is often known to be contagious before medical science discovers its causative agent. Koch's postulates, which were published at the end of the 19th century, were the standard for the next 100 years or more, especially with diseases caused by bacteria. The study of microbial pathogenesis attempts to account for diseases caused by viruses. Historical meaning Originally, the term referred to a contagion or disease transmissible only by direct physical contact. In the modern day, the term has sometimes been broadened to encompass any communicable or infectious disease. Often the word can only be understood in context, where it is used to emphasize very infectious, easily transmitted, or especially severe communicable diseases. In 1849, John Snow first proposed that cholera was a contagious disease. Effect on public health response Most epidemics are caused by contagious diseases, with occasional exceptions, such as yellow fever. The spread of non-contagious communicable diseases is changed either very little or not at all by medical isolation of ill persons or medical quarantine for exposed persons. Thus, a "contagious disease" is sometimes defined in practical terms, as a disease for which isolation or quarantine are useful public health responses. Some locations are better suited for research into contagious pathogens, due to the reduced risk of transmission afforded by a remote or isolated location. Negative room pressure is a technique in health care facilities based on aerobiological designs. See also Germ theory of disease Herd immunity Notifiable disease References Infectious diseases Microbiology Epidemiology Causality
Body of water
A body of water or waterbody is any significant accumulation of water on the surface of Earth or another planet. The term most often refers to oceans, seas, and lakes, but it includes smaller pools of water such as ponds, wetlands, or more rarely, puddles. A body of water does not have to be still or contained; rivers, streams, canals, and other geographical features where water moves from one place to another are also considered bodies of water. Most are naturally occurring geographical features, but some are artificial. Some types can be either. For example, most reservoirs are created by engineering dams, but some natural lakes are used as reservoirs. Similarly, most harbors are naturally occurring bays, but some harbors have been created through construction. Bodies of water that are navigable are known as waterways. Some bodies of water collect and move water, such as rivers and streams, and others primarily hold water, such as lakes and oceans. Bodies of water are affected by gravity, which creates tidal effects. Moreover, the impact of climate change on water is likely to intensify, as seen in rising sea levels, water acidification and flooding, putting increasing pressure on bodies of water. Types Bodies of water can be categorized into: Rain water Surface water Underground water There are some geographical features involving water that are not bodies of water, for example, waterfalls, geysers and rapids. Gallery See also Glossary of landforms References Sources Mitsch, W.J. and J.G. Gosselink. 2007. Wetlands, 4th ed., John Wiley & Sons, Inc., New York, 582 pp. Citations External links Types of Water Bodies (archived 12 November 2011)
Biophysics
Biophysics is an interdisciplinary science that applies approaches and methods traditionally used in physics to study biological phenomena. Biophysics covers all scales of biological organization, from molecular to organismic and populations. Biophysical research shares significant overlap with biochemistry, molecular biology, physical chemistry, physiology, nanotechnology, bioengineering, computational biology, biomechanics, developmental biology and systems biology. The term biophysics was originally introduced by Karl Pearson in 1892. The term biophysics is also regularly used in academia to indicate the study of the physical quantities (e.g. electric current, temperature, stress, entropy) in biological systems. Other biological sciences also perform research on the biophysical properties of living organisms including molecular biology, cell biology, chemical biology, and biochemistry. Overview Molecular biophysics typically addresses biological questions similar to those in biochemistry and molecular biology, seeking to find the physical underpinnings of biomolecular phenomena. Scientists in this field conduct research concerned with understanding the interactions between the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis, as well as how these interactions are regulated. A great variety of techniques are used to answer these questions. Fluorescent imaging techniques, as well as electron microscopy, x-ray crystallography, NMR spectroscopy, atomic force microscopy (AFM) and small-angle scattering (SAS) both with X-rays and neutrons (SAXS/SANS) are often used to visualize structures of biological significance. Protein dynamics can be observed by neutron spin echo spectroscopy. Conformational change in structure can be measured using techniques such as dual polarisation interferometry, circular dichroism, SAXS and SANS. Direct manipulation of molecules using optical tweezers or AFM, can also be used to monitor biological events where forces and distances are at the nanoscale. Molecular biophysicists often consider complex biological events as systems of interacting entities which can be understood e.g. through statistical mechanics, thermodynamics and chemical kinetics. By drawing knowledge and experimental techniques from a wide variety of disciplines, biophysicists are often able to directly observe, model or even manipulate the structures and interactions of individual molecules or complexes of molecules. In addition to traditional (i.e. molecular and cellular) biophysical topics like structural biology or enzyme kinetics, modern biophysics encompasses an extraordinarily broad range of research, from bioelectronics to quantum biology involving both experimental and theoretical tools. It is becoming increasingly common for biophysicists to apply the models and experimental techniques derived from physics, as well as mathematics and statistics, to larger systems such as tissues, organs, populations and ecosystems. Biophysical models are used extensively in the study of electrical conduction in single neurons, as well as neural circuit analysis in both tissue and whole brain. Medical physics, a branch of biophysics, is any application of physics to medicine or healthcare, ranging from radiology to microscopy and nanomedicine. For example, physicist Richard Feynman theorized about the future of nanomedicine. He wrote about the idea of a medical use for biological machines (see nanomachines). 
Feynman and Albert Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would be possible to (as Feynman put it) "swallow the doctor". The idea was discussed in Feynman's 1959 essay There's Plenty of Room at the Bottom. History The studies of Luigi Galvani (1737–1798) laid groundwork for the later field of biophysics. Some of the earlier studies in biophysics were conducted in the 1840s by a group known as the Berlin school of physiologists. Among its members were pioneers such as Hermann von Helmholtz, Ernst Heinrich Weber, Carl F. W. Ludwig, and Johannes Peter Müller. William T. Bovie (1882–1958) is credited as a leader of the field's further development in the mid-20th century; he played a leading role in developing electrosurgery. The popularity of the field rose when the book What Is Life? by Erwin Schrödinger was published. Since 1957, biophysicists have organized themselves into the Biophysical Society, which now has about 9,000 members around the world. Some authors such as Robert Rosen criticize biophysics on the ground that the biophysical method does not take into account the specificity of biological phenomena. Focus as a subfield While some colleges and universities have dedicated departments of biophysics, usually at the graduate level, many do not have university-level biophysics departments, instead having groups in related departments such as biochemistry, cell biology, chemistry, computer science, engineering, mathematics, medicine, molecular biology, neuroscience, pharmacology, physics, and physiology. Depending on the strengths of a department at a university, differing emphasis will be given to fields of biophysics. What follows is a list of examples of how each department applies its efforts toward the study of biophysics. This list is hardly all-inclusive. Nor does each subject of study belong exclusively to any particular department. Each academic institution makes its own rules and there is much overlap between departments. Biology and molecular biology – Gene regulation, single protein dynamics, bioenergetics, patch clamping, biomechanics, virophysics. Structural biology – Ångstrom-resolution structures of proteins, nucleic acids, lipids, carbohydrates, and complexes thereof. Biochemistry and chemistry – biomolecular structure, siRNA, nucleic acid structure, structure-activity relationships. Computer science – Neural networks, biomolecular and drug databases. Computational chemistry – molecular dynamics simulation, molecular docking, quantum chemistry Bioinformatics – sequence alignment, structural alignment, protein structure prediction Mathematics – graph/network theory, population modeling, dynamical systems, phylogenetics. Medicine – biophysical research that emphasizes medicine. Medical biophysics is a field closely related to physiology. It explains various aspects and systems of the body from a physical and mathematical perspective. Examples are fluid dynamics of blood flow, gas physics of respiration, radiation in diagnostics/treatment and much more. Biophysics is taught as a preclinical subject in many medical schools, mainly in Europe. Neuroscience – studying neural networks experimentally (brain slicing) as well as theoretically (computer models), membrane permittivity. Pharmacology and physiology – channelomics, electrophysiology, biomolecular interactions, cellular membranes, polyketides. 
Physics – negentropy, stochastic processes, and the development of new physical techniques and instrumentation as well as their application. Quantum biology – The field of quantum biology applies quantum mechanics to biological objects and problems. Examples include studies of decohered isomers that yield time-dependent base substitutions; these studies imply applications in quantum computing. Agronomy and agriculture Many biophysical techniques are unique to this field. Research efforts in biophysics are often initiated by scientists who were biologists, chemists or physicists by training. See also Biophysical Society Index of biophysics articles List of publications in biology – Biophysics List of publications in physics – Biophysics List of biophysicists Outline of biophysics Biophysical chemistry European Biophysical Societies' Association Mathematical and theoretical biology Medical biophysics Membrane biophysics Molecular biophysics Neurophysics Physiomics Virophysics Single-particle trajectory References Sources External links Biophysical Society Journal of Physiology: 2012 virtual issue Biophysics and Beyond bio-physics-wiki Link archive of learning resources for students: biophysika.de (60% English, 40% German) Applied and interdisciplinary physics
Urophagia
Urophagia is the consumption of urine. Urine was used in several ancient cultures for various health, healing, and cosmetic purposes; urine drinking is still practiced today. In extreme cases, people may drink urine if no other fluids are available, although numerous credible sources (including the US Army Field Manual) advise against using it. Urine may also be consumed as a sexual activity. Reasons for urophagia As an emergency survival technique Survival guides such as the US Army Field Manual, the SAS Survival Handbook, and others generally advise against drinking urine for survival. These guides state that drinking urine tends to worsen rather than relieve dehydration due to the salts in it, and that urine should not be consumed in a survival situation, even when no other fluid is available. In one incident, Aron Ralston drank urine when trapped for several days with his arm under a boulder. Survivalist television host Bear Grylls drank urine and encouraged others to do so on several episodes on his TV shows. Sexual pleasure See soupeur, urolagnia Folk medicine In various cultures, alternative medicine applications exist of urine from humans, or animals such as camels or cattle, for medicinal or cosmetic purposes, including drinking of one's own urine, but no evidence supports their use. Forced People may be forced to drink urine as a form of torture or humiliation, as in the case of a Dalit boy in Jaunpur, India, who in 2023 was accused by local youths of sexually harassing a girl. Health warnings The World Health Organization has found that the pathogens contained in urine rarely pose a health risk. However, it does caution that in areas where Schistosoma haematobium, a parasitic flatworm, is prevalent, it can be transmitted from person to person. References External links Urine Therapy explained by Boulder resident Brother Sage Shivambhu, non profit organization Alternative medicine Biologically based therapies Pica (disorder) Sexual acts Paraphilias Survival skills Urine
Burn
A burn is an injury to skin, or other tissues, caused by heat, cold, electricity, chemicals, friction, or ionizing radiation (such as sunburn, caused by ultraviolet radiation). Most burns are due to heat from hot liquids (called scalding), solids, or fire. Burns occur mainly in the home or the workplace. In the home, risks are associated with domestic kitchens, including stoves, flames, and hot liquids. In the workplace, risks are associated with fire and chemical and electric burns. Alcoholism and smoking are other risk factors. Burns can also occur as a result of self-harm or violence between people (assault). Burns that affect only the superficial skin layers are known as superficial or first-degree burns. They appear red without blisters, and pain typically lasts around three days. When the injury extends into some of the underlying skin layer, it is a partial-thickness or second-degree burn. Blisters are frequently present and they are often very painful. Healing can require up to eight weeks and scarring may occur. In a full-thickness or third-degree burn, the injury extends to all layers of the skin. Often there is no pain and the burnt area is stiff. Healing typically does not occur on its own. A fourth-degree burn additionally involves injury to deeper tissues, such as muscle, tendons, or bone. The burn is often black and frequently leads to loss of the burned part. Burns are generally preventable. Treatment depends on the severity of the burn. Superficial burns may be managed with little more than simple pain medication, while major burns may require prolonged treatment in specialized burn centers. Cooling with tap water may help pain and decrease damage; however, prolonged cooling may result in low body temperature. Partial-thickness burns may require cleaning with soap and water, followed by dressings. It is not clear how to manage blisters, but it is probably reasonable to leave them intact if small and drain them if large. Full-thickness burns usually require surgical treatments, such as skin grafting. Extensive burns often require large amounts of intravenous fluid, due to capillary fluid leakage and tissue swelling. The most common complications of burns involve infection. Tetanus toxoid should be given if not up to date. In 2015, fire and heat resulted in 67 million injuries. This resulted in about 2.9 million hospitalizations and 176,000 deaths. Among women in much of the world, burns are most commonly related to the use of open cooking fires or unsafe cook stoves. Among men, they are more likely a result of unsafe workplace conditions. Most deaths due to burns occur in the developing world, particularly in Southeast Asia. While large burns can be fatal, treatments developed since 1960 have improved outcomes, especially in children and young adults. In the United States, approximately 96% of those admitted to a burn center survive their injuries. The long-term outcome is related to the size of burn and the age of the person affected. History Cave paintings from more than 3,500 years ago document burns and their management. The earliest Egyptian records on treating burns describes dressings prepared with milk from mothers of baby boys, and the 1500 BCE Edwin Smith Papyrus describes treatments using honey and the salve of resin. Many other treatments have been used over the ages, including the use of tea leaves by the Chinese documented to 600 BCE, pig fat and vinegar by Hippocrates documented to 400 BCE, and wine and myrrh by Celsus documented to the 1st century CE. 
French barber-surgeon Ambroise Paré was the first to describe different degrees of burns in the 1500s. Guillaume Dupuytren expanded these degrees into six different severities in 1832. The first hospital to treat burns opened in 1843 in London, England, and the development of modern burn care began in the late 1800s and early 1900s. During World War I, Henry D. Dakin and Alexis Carrel developed standards for the cleaning and disinfecting of burns and wounds using sodium hypochlorite solutions, which significantly reduced mortality. In the 1940s, the importance of early excision and skin grafting was acknowledged, and around the same time, fluid resuscitation and formulas to guide it were developed. In the 1970s, researchers demonstrated the significance of the hypermetabolic state that follows large burns. The "Evans formula", described in 1952, was the first burn resuscitation formula based on body weight and surface area (BSA) damaged. The first 24 hours of treatment entails 1ml/kg/% BSA of crystalloids plus 1 ml/kg/% BSA colloids plus 2000ml glucose in water, and in the next 24 hours, crystalloids at 0.5 ml/kg/% BSA, colloids at 0.5 ml/kg/% BSA, and the same amount of glucose in water. Signs and symptoms The characteristics of a burn depend upon its depth. Superficial burns cause pain lasting two or three days, followed by peeling of the skin over the next few days. Individuals with more severe burns may indicate discomfort or complain of feeling pressure rather than pain. Full-thickness burns may be entirely insensitive to light touch or puncture. While superficial burns are typically red in color, severe burns may be pink, white or black. Burns around the mouth or singed hair inside the nose may indicate that burns to the airways have occurred, but these findings are not definitive. More worrisome signs include: shortness of breath, hoarseness, and stridor or wheezing. Itchiness is common during the healing process, occurring in up to 90% of adults and nearly all children. Numbness or tingling may persist for a prolonged period of time after an electrical injury. Burns may also produce emotional and psychological distress. Cause Burns are caused by a variety of external sources classified as thermal (heat-related), chemical, electrical, and radiation. In the United States, the most common causes of burns are: fire or flame (44%), scalds (33%), hot objects (9%), electricity (4%), and chemicals (3%). Most (69%) burn injuries occur at home or at work (9%), and most are accidental, with 2% due to assault by another, and 1–2% resulting from a suicide attempt. These sources can cause inhalation injury to the airway and/or lungs, occurring in about 6%. Burn injuries occur more commonly among the poor. Smoking and alcoholism are other risk factors. Fire-related burns are generally more common in colder climates. Specific risk factors in the developing world include cooking with open fires or on the floor as well as developmental disabilities in children and chronic diseases in adults. Thermal In the United States, fire and hot liquids are the most common causes of burns. Of house fires that result in death, smoking causes 25% and heating devices cause 22%. Almost half of injuries are due to efforts to fight a fire. Scalding is caused by hot liquids or gases and most commonly occurs from exposure to hot drinks, high temperature tap water in baths or showers, hot cooking oil, or steam. 
Scald injuries are most common in children under the age of five and, in the United States and Australia, this population makes up about two-thirds of all burns. Contact with hot objects is the cause of about 20–30% of burns in children. Generally, scalds are first- or second-degree burns, but third-degree burns may also result, especially with prolonged contact. Fireworks are a common cause of burns during holiday seasons in many countries. This is a particular risk for adolescent males. In the United States, most non-fatal burn injuries in children occur in white males under the age of 6. Thermal burns from grabbing/touching and spilling/splashing were the most common type of burn and mechanism, while the bodily areas most impacted were hands and fingers followed by head/neck. Chemical Chemical burns can be caused by over 25,000 substances, most of which are either a strong base (55%) or a strong acid (26%). Most chemical burn deaths are secondary to ingestion. Common agents include: sulfuric acid as found in toilet cleaners, sodium hypochlorite as found in bleach, and halogenated hydrocarbons as found in paint remover, among others. Hydrofluoric acid can cause particularly deep burns that may not become symptomatic until some time after exposure. Formic acid may cause the breakdown of significant numbers of red blood cells. Electrical Electrical burns or injuries are classified as high voltage (greater than or equal to 1000 volts), low voltage (less than 1000 volts), or as flash burns secondary to an electric arc. The most common causes of electrical burns in children are electrical cords (60%) followed by electrical outlets (14%). Lightning may also result in electrical burns. Risk factors for being struck include involvement in outdoor activities such as mountain climbing, golf and field sports, and working outside. Mortality from a lightning strike is about 10%. While electrical injuries primarily result in burns, they may also cause fractures or dislocations secondary to blunt force trauma or muscle contractions. In high voltage injuries, most damage may occur internally and thus the extent of the injury cannot be judged by examination of the skin alone. Contact with either low voltage or high voltage may produce cardiac arrhythmias or cardiac arrest. Radiation Radiation burns may be caused by protracted exposure to ultraviolet light (such as from the sun, tanning booths or arc welding) or from ionizing radiation (such as from radiation therapy, X-rays or radioactive fallout). Sun exposure is the most common cause of radiation burns and the most common cause of superficial burns overall. There is significant variation in how easily people sunburn based on their skin type. Skin effects from ionizing radiation depend on the amount of exposure to the area, with hair loss seen after 3 Gy, redness seen after 10 Gy, wet skin peeling after 20 Gy, and necrosis after 30 Gy. Redness, if it occurs, may not appear until some time after exposure. Radiation burns are treated the same as other burns. Microwave burns occur via thermal heating caused by the microwaves. While exposures as short as two seconds may cause injury, overall this is an uncommon occurrence. Non-accidental In those hospitalized from scalds or fire burns, 3–10% are from assault. Reasons include: child abuse, personal disputes, spousal abuse, elder abuse, and business disputes. An immersion injury or immersion scald may indicate child abuse. 
It is created when an extremity, or sometimes the buttocks, is held under the surface of hot water. It typically produces a sharp upper border and is often symmetrical, known as "sock burns", "glove burns", or "zebra stripes", where folds have prevented certain areas from burning. Deliberate cigarette burns are most often found on the face, or on the back of the hands and feet. Other high-risk signs of potential abuse include: circumferential burns, the absence of splash marks, a burn of uniform depth, and association with other signs of neglect or abuse. Bride burning, a form of domestic violence, occurs in some cultures, such as India, where women have been burned in revenge for what the husband or his family consider an inadequate dowry. In Pakistan, acid burns represent 13% of intentional burns, and are frequently related to domestic violence. Self-immolation (setting oneself on fire) is also used as a form of protest in various parts of the world. Pathophysiology At temperatures greater than approximately 44 °C (111 °F), proteins begin losing their three-dimensional shape and start breaking down. This results in cell and tissue damage. Many of the direct health effects of a burn are caused by failure of the skin to perform its normal functions, which include: protection from bacteria, skin sensation, body temperature regulation, and prevention of evaporation of the body's water. Disruption of these functions can lead to infection, loss of skin sensation, hypothermia, and hypovolemic shock via dehydration (i.e. loss of the body's water by evaporation). Disruption of cell membranes causes cells to lose potassium to the spaces outside the cell and to take up water and sodium. In large burns (over 30% of the total body surface area), there is a significant inflammatory response. This results in increased leakage of fluid from the capillaries, and subsequent tissue edema. This causes overall blood volume loss, with the remaining blood suffering significant plasma loss, making the blood more concentrated. Poor blood flow to organs like the kidneys and gastrointestinal tract may result in kidney failure and stomach ulcers. Increased levels of catecholamines and cortisol can cause a hypermetabolic state that can last for years. This is associated with increased cardiac output, metabolism, a fast heart rate, and poor immune function. Diagnosis Burns can be classified by depth, mechanism of injury, extent, and associated injuries. The most commonly used classification is based on the depth of injury. The depth of a burn is usually determined via examination, although a biopsy may also be used. It may be difficult to accurately determine the depth of a burn on a single examination and repeated examinations over a few days may be necessary. In those who have a headache or are dizzy and have a fire-related burn, carbon monoxide poisoning should be considered. Cyanide poisoning should also be considered. Size The size of a burn is measured as a percentage of total body surface area (TBSA) affected by partial thickness or full thickness burns. First-degree burns that are only red in color and are not blistering are not included in this estimation. Most burns (70%) involve less than 10% of the TBSA. There are a number of methods to determine the TBSA, including the Wallace rule of nines, Lund and Browder chart, and estimations based on a person's palm size. The rule of nines is easy to remember but only accurate in people over 16 years of age. 
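As a rough illustration of how the Wallace rule of nines is applied, the sketch below sums the commonly cited adult region percentages; it is only a sketch, the region names and function are hypothetical and not taken from this article, and the adult values do not apply to young children.

# Illustrative sketch of an adult TBSA estimate using the Wallace rule of nines.
# Region values are the commonly cited adult percentages (total 100%).
RULE_OF_NINES = {
    "head_and_neck": 9.0,
    "each_arm": 9.0,          # per arm
    "anterior_trunk": 18.0,
    "posterior_trunk": 18.0,
    "each_leg": 18.0,         # per leg
    "perineum": 1.0,
}

def estimate_tbsa(burned_fractions):
    """Sum the burned fraction (0.0-1.0) of each region times its rule-of-nines value."""
    return sum(RULE_OF_NINES[region] * frac for region, frac in burned_fractions.items())

# Example: one whole arm and half of the anterior trunk burned -> 18.0 (% TBSA).
print(estimate_tbsa({"each_arm": 1.0, "anterior_trunk": 0.5}))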
More accurate estimates can be made using Lund and Browder charts, which take into account the different proportions of body parts in adults and children. The size of a person's handprint (including the palm and fingers) is approximately 1% of their TBSA. Severity To determine the need for referral to a specialized burn unit, the American Burn Association devised a classification system. Under this system, burns can be classified as major, moderate, and minor. This is assessed based on a number of factors, including total body surface area affected, the involvement of specific anatomical zones, the age of the person, and associated injuries. Minor burns can typically be managed at home, moderate burns are often managed in a hospital, and major burns are managed by a burn center. Severe burn injury represents one of the most devastating forms of trauma. Despite improvements in burn care, patients can be left to suffer for as many as three years post-injury. Prevention Historically, about half of all burns were deemed preventable. Burn prevention programs have significantly decreased rates of serious burns. Preventive measures include: limiting hot water temperatures, smoke alarms, sprinkler systems, proper construction of buildings, and fire-resistant clothing. Experts recommend setting water heaters below about 49 °C (120 °F). Other measures to prevent scalds include using a thermometer to measure bath water temperatures, and splash guards on stoves. While the effect of the regulation of fireworks is unclear, there is tentative evidence of benefit with recommendations including the limitation of the sale of fireworks to children. Management Resuscitation begins with the assessment and stabilization of the person's airway, breathing and circulation. If inhalation injury is suspected, early intubation may be required. This is followed by care of the burn wound itself. People with extensive burns may be wrapped in clean sheets until they arrive at a hospital. As burn wounds are prone to infection, a tetanus booster shot should be given if an individual has not been immunized within the last five years. In the United States, 95% of burns that present to the emergency department are treated and discharged; 5% require hospital admission. With major burns, early feeding is important. Protein intake should also be increased, and trace elements and vitamins are often required. Hyperbaric oxygenation may be useful in addition to traditional treatments. Intravenous fluids In those with poor tissue perfusion, boluses of isotonic crystalloid solution should be given. In children with more than 10–20% TBSA burns, and adults with more than 15% TBSA burns, formal fluid resuscitation and monitoring should follow. This should be begun pre-hospital if possible in those with burns greater than 25% TBSA. The Parkland formula can help determine the volume of intravenous fluids required over the first 24 hours. The formula is based on the affected individual's TBSA and weight. Half of the fluid is administered over the first 8 hours, and the remainder over the following 16 hours. The time is calculated from when the burn occurred, and not from the time that fluid resuscitation began. Children require additional maintenance fluid that includes glucose. Additionally, those with inhalation injuries require more fluid. While inadequate fluid resuscitation may cause problems, over-resuscitation can also be detrimental. 
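As a sketch of how such a weight- and TBSA-based calculation works in practice, the snippet below applies the commonly cited Parkland coefficient of 4 mL of crystalloid per kg of body weight per %TBSA; that coefficient is an assumption here, since the text above only gives the timing split, and the function is illustrative rather than a clinical tool.

# Illustrative sketch of the Parkland calculation for the first 24 hours.
# Assumes the commonly cited 4 mL/kg/%TBSA crystalloid volume (an assumption, not stated above).
def parkland_24h(weight_kg, tbsa_percent, ml_per_kg_per_tbsa=4.0):
    total_ml = ml_per_kg_per_tbsa * weight_kg * tbsa_percent
    return {
        "total_ml_24h": total_ml,
        "first_8h_ml": total_ml / 2,   # timed from the burn, not from the start of resuscitation
        "next_16h_ml": total_ml / 2,
    }

# Example: a 70 kg adult with 30% TBSA burns -> 8400 mL total, 4200 mL in the first 8 hours.
print(parkland_24h(70, 30))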
The formulas are only a guide, with infusions ideally tailored to a urinary output of >30 mL/h in adults or >1mL/kg in children and mean arterial pressure greater than 60 mmHg. While lactated Ringer's solution is often used, there is no evidence that it is superior to normal saline. Crystalloid fluids appear just as good as colloid fluids, and as colloids are more expensive they are not recommended. Blood transfusions are rarely required. They are typically only recommended when the hemoglobin level falls below 60-80 g/L (6-8 g/dL) due to the associated risk of complications. Intravenous catheters may be placed through burned skin if needed or intraosseous infusions may be used. Wound care Early cooling (within 30 minutes of the burn) reduces burn depth and pain, but care must be taken as over-cooling can result in hypothermia. It should be performed with cool water and not ice water as the latter can cause further injury. Chemical burns may require extensive irrigation. Cleaning with soap and water, removal of dead tissue, and application of dressings are important aspects of wound care. If intact blisters are present, it is not clear what should be done with them. Some tentative evidence supports leaving them intact. Second-degree burns should be re-evaluated after two days. In the management of first and second-degree burns, little quality evidence exists to determine which dressing type to use. It is reasonable to manage first-degree burns without dressings. While topical antibiotics are often recommended, there is little evidence to support their use. Silver sulfadiazine (a type of antibiotic) is not recommended as it potentially prolongs healing time. There is insufficient evidence to support the use of dressings containing silver or negative-pressure wound therapy. Silver sulfadiazine does not appear to differ from silver containing foam dressings with respect to healing. Medications Burns can be very painful and a number of different options may be used for pain management. These include simple analgesics (such as ibuprofen and acetaminophen) and opioids such as morphine. Benzodiazepines may be used in addition to analgesics to help with anxiety. During the healing process, antihistamines, massage, or transcutaneous nerve stimulation may be used to aid with itching. Antihistamines, however, are only effective for this purpose in 20% of people. There is tentative evidence supporting the use of gabapentin and its use may be reasonable in those who do not improve with antihistamines. Intravenous lidocaine requires more study before it can be recommended for pain. Intravenous antibiotics are recommended before surgery for those with extensive burns (>60% TBSA). , guidelines do not recommend their general use due to concerns regarding antibiotic resistance and the increased risk of fungal infections. Tentative evidence, however, shows that they may improve survival rates in those with large and severe burns. Erythropoietin has not been found effective to prevent or treat anemia in burn cases. In burns caused by hydrofluoric acid, calcium gluconate is a specific antidote and may be used intravenously and/or topically. Recombinant human growth hormone (rhGH) in those with burns that involve more than 40% of their body appears to speed healing without affecting the risk of death. The use of steroids is of unclear evidence. Allogeneic cultured keratinocytes and dermal fibroblasts in murine collagen (Stratagraft) was approved for medical use in the United States in June 2021. 
Surgery Wounds requiring surgical closure with skin grafts or flaps (typically anything more than a small full thickness burn) should be dealt with as early as possible. Circumferential burns of the limbs or chest may need urgent surgical release of the skin, known as an escharotomy. This is done to treat or prevent problems with distal circulation, or ventilation. It is uncertain if it is useful for neck or digit burns. Fasciotomies may be required for electrical burns. Skin grafts can involve temporary skin substitutes, derived from animal (human donor or pig) skin or synthesized. They are used to cover the wound as a dressing, preventing infection and fluid loss, but will eventually need to be removed. Alternatively, human skin can be treated to be left on permanently without rejection. There is no evidence that the use of copper sulphate to visualise phosphorus particles for removal can help with wound healing due to phosphorus burns. Meanwhile, absorption of copper sulphate into the blood circulation can be harmful. Alternative medicine Honey has been used since ancient times to aid wound healing and may be beneficial in first- and second-degree burns. There is moderate evidence that honey helps heal partial thickness burns. The evidence for aloe vera is of poor quality. While it might be beneficial in reducing pain, and a review from 2007 found tentative evidence of improved healing times, a subsequent review from 2012 did not find improved healing over silver sulfadiazine. There were only three randomized controlled trials for the use of plants for burns, two for aloe vera and one for oatmeal. There is little evidence that vitamin E helps with keloids or scarring. Butter is not recommended. In low income countries, burns are treated up to one-third of the time with traditional medicine, which may include applications of eggs, mud, leaves or cow dung. Surgical management is limited in some cases due to insufficient financial resources and availability. There are a number of other methods that may be used in addition to medications to reduce procedural pain and anxiety including: virtual reality therapy, hypnosis, and behavioral approaches such as distraction techniques. Patient support Burn patients require support and care – both physiological and psychological. Respiratory failure, sepsis, and multi-organ system failure are common in hospitalized burn patients. To prevent hypothermia and maintain normal body temperature, burn patients with over 20% of burn injuries should be kept in an environment with the temperature at or above 30 degree Celsius. Metabolism in burn patients proceeds at a higher than normal speed due to the whole-body process and rapid fatty acid substrate cycles, which can be countered with an adequate supply of energy, nutrients, and antioxidants. Enteral feeding a day after resuscitation is required to reduce risk of infection, recovery time, non-infectious complications, hospital stay, long-term damage, and mortality. Controlling blood glucose levels can have an impact on liver function and survival. Risk of thromboembolism is high and acute respiratory distress syndrome (ARDS) that does not resolve with maximal ventilator use is also a common complication. Scars are long-term after-effects of a burn injury. 
Psychological support is required to cope with the aftermath of a fire accident, while to prevent scars and long-term damage to the skin and other body structures consulting with burn specialists, preventing infections, consuming nutritious foods, early and aggressive rehabilitation, and using compressive clothing are recommended. Prognosis The prognosis is worse in those with larger burns, those who are older, and females. The presence of a smoke inhalation injury, other significant injuries such as long bone fractures, and serious co-morbidities (e.g. heart disease, diabetes, psychiatric illness, and suicidal intent) also influence prognosis. On average, of those admitted to burn centers in the United States, 4% die, with the outcome for individuals dependent on the extent of the burn injury. For example, admittees with burn areas less than 10% TBSA had a mortality rate of less than 1%, while admittees with over 90% TBSA had a mortality rate of 85%. In Afghanistan, people with more than 60% TBSA burns rarely survive. The Baux score has historically been used to determine prognosis of major burns. However, with improved care, it is no longer very accurate. The score is determined by adding the size of the burn (% TBSA) to the age of the person and taking that to be more or less equal to the risk of death. Burns in 2013 resulted in 1.2 million years lived with disability and 12.3 million disability adjusted life years. Complications A number of complications may occur, with infections being the most common. In order of frequency, potential complications include: pneumonia, cellulitis, urinary tract infections and respiratory failure. Risk factors for infection include: burns of more than 30% TBSA, full-thickness burns, extremes of age (young or old), or burns involving the legs or perineum. Pneumonia occurs particularly commonly in those with inhalation injuries. Anemia secondary to full thickness burns of greater than 10% TBSA is common. Electrical burns may lead to compartment syndrome or rhabdomyolysis due to muscle breakdown. Blood clotting in the veins of the legs is estimated to occur in 6 to 25% of people. The hypermetabolic state that may persist for years after a major burn can result in a decrease in bone density and a loss of muscle mass. Keloids may form subsequent to a burn, particularly in those who are young and dark skinned. Following a burn, children may have significant psychological trauma and experience post-traumatic stress disorder. Scarring may also result in a disturbance in body image. To treat hypertrophic scars (raised, tense, stiff and itchy scars) and limit their effect on physical function and everyday activities, silicone sheeting and compression garments are recommended. In the developing world, significant burns may result in social isolation, extreme poverty and child abandonment. Epidemiology In 2015 fire and heat resulted in 67 million injuries. This resulted in about 2.9 million hospitalizations and 238,000 dying. This is down from 300,000 deaths in 1990. This makes it the fourth leading cause of injuries after motor vehicle collisions, falls, and violence. About 90% of burns occur in the developing world. This has been attributed partly to overcrowding and an unsafe cooking situation. Overall, nearly 60% of fatal burns occur in Southeast Asia with a rate of 11.6 per 100,000. The number of fatal burns has changed from 280,000 in 1990 to 176,000 in 2015. In the developed world, adult males have twice the mortality as females from burns. 
This is most probably due to their higher-risk occupations and greater risk-taking activities. In many countries in the developing world, however, females have twice the risk of males. This is often related to accidents in the kitchen or domestic violence. In children, deaths from burns occur at more than ten times the rate in the developing world as in the developed world. Overall, in children it is one of the top fifteen leading causes of death. From the 1980s to 2004, many countries saw a decrease both in the rates of fatal burns and in burns generally. Developed countries An estimated 500,000 burn injuries receive medical treatment yearly in the United States. They resulted in about 3,300 deaths in 2008. Most burns (70%) and deaths from burns occur in males. The highest incidence of fire burns occurs in those 18–35 years old, while the highest incidence of scalds occurs in children less than five years old and adults over 65. Electrical burns result in about 1,000 deaths per year. Lightning results in the death of about 60 people a year. In Europe, intentional burns occur most commonly in middle-aged men. Developing countries In India, about 700,000 to 800,000 people per year sustain significant burns, though very few are looked after in specialist burn units. The highest rates occur in women 16–35 years of age. Part of this high rate is related to unsafe kitchens and loose-fitting clothing typical in India. It is estimated that one-third of all burns in India are due to clothing catching fire from open flames. Intentional burns are also a common cause and occur at high rates in young women, secondary to domestic violence and self-harm. See also Blister Frostbite Scalding References General and cited references External links WHO fact sheet on burns Parkland Formula
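The Baux score mentioned under Prognosis above is simple enough to state directly. The following is a minimal sketch of the historical rule of thumb (score = age in years + burn size in % TBSA, read as an approximate percentage risk of death); as the text notes, improved care has made this estimate pessimistic, and the function names below are illustrative rather than part of any clinical standard.

def baux_score(age_years: float, burn_tbsa_percent: float) -> float:
    # Classic Baux score: age plus burn size as a percentage of
    # total body surface area (TBSA).
    return age_years + burn_tbsa_percent

def approximate_mortality_risk_percent(age_years: float, burn_tbsa_percent: float) -> float:
    # Read the Baux score as a rough percentage risk of death, capped at 100.
    # Illustrative only; modern burn care outperforms this rule of thumb.
    return min(baux_score(age_years, burn_tbsa_percent), 100.0)

# Example: a 60-year-old with burns over 30% of TBSA has a Baux score of 90,
# read historically as roughly a 90% risk of death.
print(approximate_mortality_risk_percent(60, 30))  # 90.0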
Hazard
A hazard is a potential source of harm. Substances, events, or circumstances can constitute hazards when their nature would potentially allow them to cause damage to health, life, property, or any other interest of value. The probability of that harm being realized in a specific incident, combined with the magnitude of potential harm, makes up its risk; in colloquial speech, the terms "hazard" and "risk" are often used synonymously. Hazards can be classified in several ways which are not mutually exclusive. They can be classified by causing actor (for example, natural or anthropogenic), by physical nature (e.g. biological or chemical) or by type of damage (e.g., health hazard or environmental hazard). Examples of natural disasters with highly harmful impacts on a society are floods, droughts, earthquakes, tropical cyclones, lightning strikes, volcanic activity and wildfires. Technological and anthropogenic hazards include, for example, structural collapses, transport accidents, accidental or intentional explosions, and release of toxic materials. The term climate hazard is used in the context of climate change. These are hazards that stem from climate-related events and can be associated with global warming, such as wildfires, floods, droughts, and sea level rise. Climate hazards can combine with other hazards and result in compound event losses (see also loss and damage). For example, the climate hazard of heat can combine with the hazard of poor air quality, or the climate hazard of flooding can combine with poor water quality. In physical terms, a common theme across many forms of hazard is the presence of energy that can cause damage, as with chemical, mechanical or thermal energy. This damage can affect different valuable interests, and the severity of the associated risk varies. Definition A hazard is defined as "the potential occurrence of a natural or human-induced physical event or trend that may cause loss of life, injury, or other health impacts, as well as damage and loss to property, infrastructure, livelihoods, service provision, ecosystems and environmental resources." A hazard only exists if there is a pathway to exposure. As an example, the center of the Earth consists of molten material at very high temperatures which would be a severe hazard if contact were made with the core. However, there is no feasible way of making contact with the core, so the center of the Earth currently poses no hazard. The frequency and severity of hazards are important aspects for risk management. Hazards may also be assessed in relation to the impact that they have. In defining hazard, Keith Smith argues that an event or condition is only a hazard if humans are present to be harmed by it. In this regard, human sensitivity to environmental hazards is a combination of both physical exposure (natural and/or technological events at a location, related to their statistical variability) and human vulnerability (the social and economic tolerance of the same location). Relationship with other terms Disaster An example of the distinction between a natural hazard and a disaster is that an earthquake is the hazard which caused the 1906 San Francisco earthquake disaster. A natural disaster is the highly harmful impact on a society or community following a natural hazard event. The term "disaster" itself is defined as follows: "Disasters are serious disruptions to the functioning of a community that exceed its capacity to cope using its own resources. 
Disasters can be caused by natural, man-made and technological hazards, as well as various factors that influence the exposure and vulnerability of a community." The US Federal Emergency Management Agency (FEMA) explains the relationship between natural disasters and natural hazards as follows: "Natural hazards and natural disasters are related but are not the same. A natural hazard is the threat of an event that will likely have a negative impact. A natural disaster is the negative impact following an actual occurrence of a natural hazard, in the event that it significantly harms a community." Disasters can take various forms, including hurricane, volcano, tsunami, earthquake, drought, famine, plague, disease, rail crash, car crash, tornado, deforestation, flooding, toxic release, and spills (oil, chemicals). A disaster hazard is an extreme geophysical event that is capable of causing a disaster. 'Extreme' in this case means a substantial variation in either the positive or the negative direction from the normal trend; flood disasters can result from exceptionally high precipitation and river discharge, and drought is caused by exceptionally low values. The fundamental determinants of hazard and the risk of such hazards occurring are timing, location, magnitude and frequency. For example, magnitudes of earthquakes are measured on the Richter scale from 1 to 10, whereby each increment of 1 indicates a tenfold increase in measured amplitude. The magnitude-frequency rule states that over a significant period of time many small events and a few large ones will occur. Hurricanes and typhoons, on the other hand, occur between 5 degrees and 25 degrees north and south of the equator, tending to be seasonal phenomena that are thus largely recurrent in time and predictable in location due to the specific climate variables necessary for their formation. Risk and vulnerability The terms hazard and risk are often used interchangeably. However, in terms of risk assessment, these are two very distinct terms. A hazard is an agent that can cause harm or damage to humans, property, or the environment. Risk is the probability that exposure to a hazard will lead to a negative consequence, or more simply, a hazard poses no risk if there is no exposure to that hazard. Risk is a combination of hazard, exposure and vulnerability (a schematic formulation is sketched at the end of this article). For example, in terms of water security, hazards include droughts, floods and declines in water quality, while poor infrastructure and poor governance lead to high exposure and vulnerability. Risk can be defined as the likelihood or probability of a given hazard of a given level causing a particular level of loss or damage. The elements of risk are populations, communities, the built environment, the natural environment, economic activities and services which are under threat of disaster in a given area. Another definition of risk is "the probable frequency and probable magnitude of future losses". This definition also focuses on the probability of future loss, whereby the degree of vulnerability to a hazard represents the level of risk to a particular population or environment. The threats posed by a hazard are: hazards to people – death, injury, disease and stress; hazards to goods – property damage and economic loss; and hazards to the environment – loss of flora and fauna, pollution and loss of amenity. Classifications Hazards can be classified in several ways. These categories are not mutually exclusive, which means that one hazard can fall into several categories. 
For example, water pollution with toxic chemicals is an anthropogenic hazard as well as an environmental hazard. One of the classification methods is by specifying the origin of the hazard. One key concept in identifying a hazard is the presence of stored energy that, when released, can cause damage. The stored energy can occur in many forms: chemical, mechanical, thermal, radioactive, electrical, etc. The United Nations Office for Disaster Risk Reduction (UNDRR) explains that "each hazard is characterized by its location, intensity or magnitude, frequency and probability". A distinction can also be made between rapid-onset natural hazards, technological hazards, and social hazards, which are described as being of sudden occurrence and relatively short duration, and the consequences of longer-term environmental degradation such as desertification and drought. Hazards may be grouped according to their characteristics. These factors are related to geophysical events, which are not process specific: Areal extent of damage zone Intensity of impact at a point Duration of impact at a point Rate of onset of the event Predictability of the event By causing actor Natural hazard Damage to valuable human interests can occur due to phenomena and processes of the natural environment. Natural disasters such as earthquakes, floods, volcanoes and tsunami have threatened people, society, the natural environment, and the built environment, particularly more vulnerable people, throughout history, and in some cases, on a day-to-day basis. According to the Red Cross, each year 130,000 people are killed, 90,000 are injured and 140 million are affected by unique events known as natural disasters. Potentially dangerous phenomena which are natural or predominantly natural (for example, exceptions are intentional floods) can be classified in these categories: Meteorological and hydrological hazards, e.g. lightning, storm, flood, sandstorm, fog, rogue wave, tsunami, snow, cold wave, heat wave Geological hazards Earthquake Volcanism, which can cause a wide range of events, such as lava flow and ash fall Surface or near-surface events, especially erosion and mass wasting (e.g. landslide) Extraterrestrial hazards, e.g. solar storm, impact event Natural hazards can be influenced by human actions in different ways and to varying degrees, e.g. land-use change, drainage and construction. Humans play a central role in the existence of natural hazards because "it is only when people and their possessions get in the way of natural processes that hazard exists". A natural hazard can be considered as a geophysical event when it occurs in extremes and a human factor is involved that may present a risk. There may be an acceptable variation of magnitude which can vary from the estimated normal or average range with upper and lower limits or thresholds. In these extremes, the natural occurrence may become an event that presents a risk to the environment or people. For example, above-average wind speeds resulting in a tropical depression or hurricane according to intensity measures on the Saffir–Simpson scale will provide an extreme natural event that may be considered a hazard. Seismic hazard Tsunamis can be caused by geophysical hazards, such as in the 2004 Indian Ocean earthquake and tsunami. Although generally a natural phenomenon, earthquakes can sometimes be induced by human interventions, such as injection wells, large underground nuclear explosions, excavation of mines, or reservoirs. 
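As a rough illustration of the UNDRR characterization quoted earlier in this section (each hazard being characterized by its location, intensity or magnitude, frequency and probability), a hazard inventory entry could be represented as a simple record. This is a minimal sketch; the class and field names are illustrative assumptions, not part of any UNDRR or other standard schema.

from dataclasses import dataclass

@dataclass
class HazardRecord:
    # Minimal sketch of a hazard inventory entry; names are illustrative.
    name: str                # e.g. "river flood"
    causing_actor: str       # "natural", "anthropogenic" or "socionatural"
    physical_nature: str     # e.g. "hydrological", "geological", "biological"
    location: str            # area the hazard applies to
    intensity: float         # magnitude on whatever scale suits the hazard
    annual_frequency: float  # expected events per year
    probability: float       # chance of at least one event over a stated period

flood = HazardRecord(
    name="river flood",
    causing_actor="natural",
    physical_nature="hydrological",
    location="lower river basin",
    intensity=2.5,           # e.g. metres above flood stage
    annual_frequency=0.1,    # roughly a one-in-ten-year event
    probability=0.65,        # chance of at least one such event in ten years
)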
Volcanic hazard Anthropogenic hazard Anthropogenic hazards, or human-induced hazards, are "induced entirely or predominantly by human activities and choices". These can be societal, technological or environmental hazards. Technological hazard Technological hazards are created by the possibility of failure associated with human technology (including emerging technologies), which can also impact the economy, health and national security. For example, technological hazards can arise from the following events: Transport accidents: traffic collisions, navigation accidents, rail accidents, aviation accidents Nuclear materials-related accidents: nuclear plant accidents, nuclear weapon accidents and other nuclear accidents Chemical accidents and accidental explosions Mining accidents Space accidents Technological failures (including those that affect information and communications technology, such as cybersecurity threats) can cause disruptions to the energy industry (e.g. power outages), telecommunications (e.g. Internet outage), healthcare, banking, transportation, food supply, water supply and other important services. Structural failures or construction accidents A mechanical hazard is any hazard involving a machine or industrial process. Motor vehicles, aircraft, and air bags pose mechanical hazards. Compressed gases or liquids can also be considered a mechanical hazard. Hazard identification of new machines and/or industrial processes occurs at various stages in the design of the new machine or process. These hazard identification studies focus mainly on deviations from the intended use or design and the harm that may occur as a result of these deviations. These studies are regulated by various agencies such as the Occupational Safety and Health Administration and the National Highway Traffic Safety Administration. Engineering hazards occur when human structures fail (e.g. building or structural collapse, bridge failures, dam failures) or the materials used in their construction prove to be hazardous. Societal hazard Societal hazards can arise from civil disorders, explosive remnants of war, violence, crowd accidents, financial crises, etc. However, the United Nations Office for Disaster Risk Reduction (UNDRR) Hazard Definition & Classification Review (Sendai Framework 2015–2030) specifically excludes armed conflict from the anthropogenic hazard category, as these hazards are already recognised under international humanitarian law. Waste disposal In managing waste, many hazardous materials enter the domestic and commercial waste stream. In part this is because modern technological living uses certain toxic or poisonous materials from the electronics and chemical industries which, while in use or being transported, are usually safely contained or encapsulated and packaged to avoid exposure. In the waste stream, the waste product's exterior or encapsulation breaks down or degrades, releasing hazardous materials and exposing people working in the waste disposal industry, those living around waste disposal or landfill sites, and the general environment surrounding such sites. Socionatural hazard There are different ways to group hazards by origin. The definition by UNDRR states: "Hazards may be natural, anthropogenic or socionatural in origin." The socionatural hazards are those that are "associated with a combination of natural and anthropogenic factors, including environmental degradation and climate change". 
Climate hazard The term climate hazard or climatic hazard is used in the context of climate change, for example in the IPCC Sixth Assessment Report. These are hazards that stem from climate-related events such as wildfires, floods, droughts, and sea level rise. Climate hazards in the context of water include increased temperatures, changes in rainfall patterns between the wet and dry seasons (increased rainfall variability) and sea level rise. Increasing temperature is listed here as a climate hazard because "warming temperatures may result in higher evapotranspiration, in turn leading to drier soils". Waterborne diseases are also connected to climate hazards. Climate hazards can combine with other hazards and result in compound event losses (see also loss and damage). For example, the climate hazard of heat can combine with the hazard of poor air quality, or the climate hazard of flooding can combine with poor water quality. Climate scientists have pointed out that climate hazards affect different groups of people differently, depending on their climate change vulnerability: there are "factors that make people and groups vulnerable (e.g., poverty, uneven power structures, disadvantage and discrimination due to, for example, social location and the intersectionality or the overlapping and compounding risks from ethnicity or racial discrimination, gender, age, or disability, etc.)". By physical nature Biological hazard Biological hazards, also known as biohazards, originate in biological processes of living organisms and pose threats to the health of humans, the security of property, or the environment. Biological hazards include pathogenic microorganisms, such as viruses and bacteria, epidemics, pandemics, parasites, pests, animal attacks, venomous animals, biological toxins and foodborne illnesses. For example, naturally occurring bacteria such as Escherichia coli and Salmonella are well-known pathogens, and a variety of measures have been taken to limit human exposure to these microorganisms through food safety, good personal hygiene, and education. The potential for new biological hazards also exists through the discovery of new microorganisms and the development of new genetically modified (GM) organisms. The use of new GM organisms is regulated by various governmental agencies. The US Environmental Protection Agency (EPA) controls GM plants that produce or resist pesticides (e.g. Bt corn and Roundup Ready crops). The US Food and Drug Administration (FDA) regulates GM plants that will be used as food or for medicinal purposes. Biological hazards can include medical waste or samples of a microorganism, virus or toxin (from a biological source) that can affect health. Many biological hazards are associated with food, including certain viruses, parasites, fungi, bacteria, and plant and seafood toxins. Pathogenic Campylobacter and Salmonella are common foodborne biological hazards. The hazards from these bacteria can be avoided through risk mitigation steps such as proper handling, storing, and cooking of food. Diseases can be enhanced by human factors such as poor sanitation or by processes such as urbanization. Chemical hazard A chemical can be considered a hazard if by its intrinsic properties it can cause harm or danger to humans, property, or the environment. Health hazards associated with chemicals are dependent on the dose or amount of the chemical. For example, iodine in the form of potassium iodate is used to produce iodised salt. 
When applied at a rate of 20 mg of potassium iodate per kilogram of table salt, the chemical is beneficial in preventing goitre, while iodine intakes of 1,200–9,500 mg in one dose have been known to cause death. Some chemicals have a cumulative biological effect, while others are metabolically eliminated over time. Other chemical hazards may depend on concentration or total quantity for their effects. Some harmful chemicals occur naturally in certain geological formations, such as arsenic. Other chemicals include products with commercial uses, such as agricultural and industrial chemicals, as well as products developed for home use. A variety of chemical hazards have been identified. However, every year companies produce more new chemicals to fill new needs or to take the place of older, less effective chemicals. Laws, such as the Federal Food, Drug, and Cosmetic Act and the Toxic Substances Control Act in the US, require protection of human health and the environment for any new chemical introduced. In the US, the EPA regulates new chemicals that may have environmental impacts (e.g., pesticides or chemicals released during a manufacturing process), while the FDA regulates new chemicals used in foods or as drugs. The potential hazards of these chemicals can be identified by performing a variety of tests before the authorization of usage. The number of tests required and the extent to which the chemicals are tested varies, depending on the desired usage of the chemical. Chemicals designed as new drugs must undergo more rigorous tests than those used as pesticides. Pesticides, which are normally used to control unwanted insects and plants, may cause a variety of negative effects on non-target organisms. DDT can build up, or bioaccumulate, in birds, resulting in thinner-than-normal eggshells, which can break in the nest. The organochlorine pesticide dieldrin has been linked to Parkinson's disease. Corrosive chemicals like sulfuric acid, which is found in car batteries and research laboratories, can cause severe skin burns. Many other chemicals used in industrial and laboratory settings can cause respiratory, digestive, or nervous system problems if they are inhaled, ingested, or absorbed through the skin. The negative effects of other chemicals, such as alcohol and nicotine, have been well documented. Organohalogens are a family of synthetic organic molecules which all contain atoms of one of the halogens. Such materials include PCBs, dioxins, DDT, Freon and many others. Although considered harmless when first produced, many of these compounds are now known to have profound physiological effects on many organisms, including humans. Many are also fat-soluble and become concentrated through the food chain. Radioactive or electromagnetic hazard Radioactive materials produce ionizing radiation, which may be very harmful to living organisms. Damage from even a short exposure to radioactivity may have long-term adverse health consequences. Thermal or fire hazard Fire hazard Threats to fire safety are commonly referred to as fire hazards. A fire hazard may include a situation that increases the likelihood of a fire or may impede escape in the event a fire occurs. Casualties resulting from fires, regardless of their source or initial cause, can be aggravated by inadequate emergency preparedness. 
Such hazards as a lack of accessible emergency exits, poorly marked escape routes, or improperly maintained fire extinguishers or sprinkler systems may result in many more deaths and injuries than might occur with such protections. Kinetic hazard Kinetic energy is involved in hazards associated with noise, falling, or vibration. By type of damage Health hazard Health hazards are those that would affect the health of exposed persons, usually with an acute or chronic illness as the consequence. Fatality would not normally be an immediate consequence. Health hazards may cause measurable changes in the body, which are generally indicated by the development of signs and symptoms in the exposed persons, or non-measurable, subjective symptoms. Ergonomic hazard Ergonomic hazards are physical conditions that may pose a risk of injury to the musculoskeletal system, such as the muscles or ligaments of the lower back, tendons or nerves of the hands/wrists, or bones surrounding the knees. Ergonomic hazards include things such as awkward or extreme postures, whole-body or hand/arm vibration, poorly designed tools, equipment, or workstations, repetitive motion, and poor lighting. Ergonomic hazards occur in both occupational and non-occupational settings such as workshops, building sites, offices, home, school, or public spaces and facilities. Occupational hazard Psychosocial hazard Psychological or psychosocial hazards are hazards that affect the psychological well-being of people, including their ability to participate in a work environment among other people. Psychosocial hazards are related to the way work is designed, organized, and managed, as well as the economic and social contexts of work, and are associated with psychiatric, psychological, and/or physical injury or illness. Linked to psychosocial risks are issues such as occupational stress and workplace violence, which are recognized internationally as major challenges to occupational health and safety. Environmental hazard Property Cultural property Cultural property can be damaged, lost or destroyed by different events or processes, including war, vandalism, theft, looting, transport accident, water leak, human error, natural disaster, fire, pests, pollution and progressive deterioration. By status Hazards are sometimes classified into three modes or statuses: Dormant—No people, property, or environment are currently affected by the hazard. For instance, a hillside may be unstable, with the potential for a landslide, but there is nothing below or on the hillside that could be affected. Armed—People, property, or environment are in potential harm's way. Active—A harmful incident involving the hazard has actually occurred. Often this is referred to not as an "active hazard" but as an accident, emergency, incident, or disaster. Analysis and management A range of methodologies are used to assess hazards and to manage them, including hazard analysis and critical control points (HACCP) and hazard and operability studies (HAZOP). See also Hazard symbol References External links
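Returning to the Risk and vulnerability section above, the statement that risk combines hazard, exposure and vulnerability is often written schematically in the disaster-risk literature as a product. This is a heuristic illustration rather than a definition taken from the sources cited in this article:

\mathrm{Risk} \approx \mathrm{Hazard} \times \mathrm{Exposure} \times \mathrm{Vulnerability}

Read this way, even a severe hazard implies little risk if either exposure or vulnerability is close to zero, which matches the example given under Definition of the Earth's molten core, where no pathway to exposure exists.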
Dying
Dying is the final stage of life which will eventually lead to death. Diagnosing dying is a complex process of clinical decision-making, and most practice checklists facilitating this diagnosis are based on cancer diagnoses. Signs of dying The National Cancer Institute in the United States advises that the presence of some of the following signs may indicate that death is approaching: Drowsiness, increased sleep, and/or unresponsiveness (caused by changes in the patient's metabolism). Confusion about time, place, and/or identity of loved ones; restlessness; visions of people and places that are not present; pulling at bed linens or clothing (caused in part by changes in the patient's metabolism). Decreased socialization and withdrawal (caused by decreased oxygen to the brain, decreased blood flow, and mental preparation for dying). Decreased need for food and fluids, and loss of appetite (caused by the body's need to conserve energy and its decreasing ability to use food and fluids properly). Loss of bladder or bowel control (caused by the relaxing of muscles in the pelvic area). Darkened urine or decreased amount of urine (caused by slowing of kidney function and/or decreased fluid intake). Skin becoming cool to the touch, particularly the hands and feet; skin may become bluish in color, especially on the underside of the body (caused by decreased circulation to the extremities). Rattling or gurgling sounds while breathing, which may be loud (death rattle); breathing that is irregular and shallow; decreased number of breaths per minute; breathing that alternates between rapid and slow (caused by congestion from decreased fluid consumption, a buildup of waste products in the body, and/or a decrease in circulation to the organs). Turning of the head toward a light source (caused by decreasing vision). Increased difficulty controlling pain (caused by progression of the disease). Involuntary movements (called myoclonus), increased heart rate, hypertension followed by hypotension, and loss of reflexes in the legs and arms are additional signs that the end of life is near. Cultural perspectives on dying How humans understand and approach the process of dying differs across cultures. In some cultures, death is the complete termination of life. In other cultures, death can include altered states of being, like sleep or illness. In some traditions, death marks the transition into a different kind of existence, or involves a cyclic pattern of death and rebirth. These cultural differences affect people's lifestyles, behaviors, and approach to death and dying. United States In the United States, a pervasive "death-defying" culture leads to resistance against the process of dying. Death and illness are often conceived as things to "fight against", with conversations about death and dying considered morbid or taboo. Most people die in a hospital or nursing facility, with only around 30% dying at home. As the United States is a culturally diverse nation, attitudes towards death and dying vary according to cultural and spiritual factors. China In Chinese culture, death is viewed as the end of life, with no afterlife, resulting in negative perceptions of dying. These attitudes towards death and dying originate from the three dominant religions in China: Taoism, Buddhism, and Confucianism. South Pacific In some cultures of the South Pacific, life is believed to leave a person's body when they are sick or asleep, making for multiple "deaths" in the span of one lifetime. 
Religious perspectives on dying Christianity In Christian belief, most people agree that believers will only experience death once; however, various traditions hold different beliefs about what happens during the intermediate state, the period between death and the universal resurrection. For many traditions, death is the separation of body and soul, so the soul continues to exist in a disembodied state. Other traditions believe that the soul and body are inseparable, meaning that the body's death renders the soul unconscious until the resurrection. Others believe that the spirit leaves the body to exist in heaven or hell. Islam In Islamic belief, the time of death is predetermined, with dying therefore perceived as the will of Allah. It is thus considered something to be accepted, and Muslims are regularly encouraged to reflect upon death and dying. The majority of Muslims prefer to die at home, surrounded by their loved ones, with large numbers expected at the bedside of those who are dying. Hinduism In Hinduism, people are believed to die and be reborn with a new identity. Buddhism In Chinese Buddhism, it is said that dying patients will experience phases between the state of torment and the state of exultation, and that the caretaker must help the dying patient remain in the state of exultation through Nianfo prayers. In some branches of Buddhism, the dead and living exist together, with the former having power and influence over the lives of the latter. Medicalization Resuscitation Resuscitation is the act of reviving someone and is performed when a person is unconscious or dying. Resuscitation is performed using a variety of techniques, of which the most common is cardiopulmonary resuscitation (CPR). CPR is a procedure consisting of cycles of chest compressions and ventilation support with the goal of maintaining blood flow and oxygen to the vital organs of the body. Defibrillation, or shock, is also provided following CPR in an attempt to jump-start the heart. Emergency Medical Services (EMS) are often the first to administer CPR to patients outside of the hospital. Although EMS providers are not able to pronounce death, they are asked to determine the presence of clear signs of death and gauge whether CPR should be attempted or not. CPR is not indicated if the provider is at risk of harm or injury while attempting CPR, if clear signs of death are present (rigor mortis, dependent lividity, decapitation, transection, decomposition, etc.), or if the patient is exempt from resuscitation. Exemption is typically the case when the patient has an advance directive, a Physician Orders for Life-Sustaining Treatment (POLST) form indicating that resuscitation is not desired, or a valid Do Not Attempt Resuscitation (DNAR) order. End-of-life care End-of-life care is oriented towards a natural stage in the process of living, unlike other conditions. The National Hospice and Palliative Care Organization (NHPCO) states that hospice care or end-of-life care begins when curative treatments are no longer possible, and a person is diagnosed with a terminal illness with less than six months to live. Hospice care involves palliative care aimed at providing comfort for patients and support for loved ones. This process integrates medical care, pain management, and social and emotional support provided by social workers and other members of the healthcare team, including family physicians, nurses, counselors, trained volunteers, and home health aides. 
Hospice care is associated with enhanced symptom relief, facilitates achievement of end-of-life wishes, and results in higher quality of end-of-life care compared with standard care involving extensive hospitalization. Psychological adjustment processes When a person realizes that their life is threatened by a fatal illness, they come to terms with it and with their approaching end. This confrontation has been described in diaries, autobiographies, medical reports, novels, and also in poetry. Since the middle of the 20th century, the "fight" against death has been researched in the social sciences on the basis of empirical data and field research. The theories and models developed are intended above all to help those who accompany terminally ill people. The theories of dying describe psychosocial aspects of dying as well as models for the dying process. Particularly highlighted psychosocial aspects are: Total Pain (C. Saunders), Acceptance (J. M. Hinton, Kübler-Ross), Awareness/Insecurity (B. Glaser, A. Strauss), Response to Challenges (E. S. Shneidman), Appropriateness (A. D. Weisman), Autonomy (H. Müller-Busch), Fear (R. Kastenbaum, G. D. Borasio) and Ambivalence (E. Engelke). Phase and stage models There have been many phase and stage models for the course of dying developed from a psychological and psychosocial perspective. A distinction is made between three and twelve phases that a dying person goes through. A more recently developed and revised phase model is the Illness Constellation Model, first published in 1991. The phases are associated with shock, dizziness, and uncertainty at the first symptoms and diagnosis; changing emotional states and thoughts, and efforts to maintain control over one's own life; withdrawal, grief over lost abilities, and suffering from the imminent loss of one's own existence; and finally psycho-physical decline. The best known is the Five Stages of Grief model developed by Elisabeth Kübler-Ross, a Swiss-American psychiatrist. In her work, Kübler-Ross compiled various preexisting findings of thanatology published by John Hinton, Cicely Saunders, Barney G. Glaser and Anselm L. Strauss, among others. In doing so, she brought more public attention to the subject than it had previously received, an interest that has continued to this day. Her work focused on the treatment of the dying, on grief and mourning, and on studies of death and near-death experiences. The five stages in this model are the following: Denial and Isolation, Anger, Bargaining, Depression, and Acceptance. According to Kübler-Ross, hope is almost always present in each of the five phases, suggesting that patients never completely give up and that hope must not be taken away from them. Loss of hope is soon followed by death, and the fear of death can only be overcome by everyone starting with themselves and accepting their own death, according to Kübler-Ross. Kübler-Ross's research has given psychiatrists new impetus for dealing with dying and grieving people. Her key message was that those providing support must first clarify their own fears and life problems ("unfinished business") as far as possible and accept their own death before they can turn to the dying in a helpful way. The five phases of dying were extracted by Kübler-Ross from interviews with terminally ill people describing psychological adjustment processes in the dying process. The five phases are widely referred to, although Kübler-Ross herself critically questioned the validity of her phase model several times. 
Some of her self-critiques include the following: the phases are not experienced in a fixed order one after the other, but can alternate or repeat; some phases may not be experienced at all; and a final acceptance of one's own dying may not take place in every case. In end-of-life care, space is given to psychological conflict, but coping with the phases can rarely be influenced from the outside. In international research on dying, there are a number of scientifically based objections to the phase model and to models that describe dying in terms of staged behaviors in general. Above all, the naïve use of the phase model is viewed critically, and even in specialist books, hope – a central aspect of the phase model for Kübler-Ross – is not mentioned. Influencing factors The scientifically based criticism of phase models has led researchers to forgo defining the dying process in stages, and instead to elaborate on factors that influence the course of dying. Based on research findings from several sciences, Robert J. Kastenbaum says, "Individuality and universality combine in dying." In Kastenbaum's model, individual and societal attitudes influence our dying and how we deal with knowledge about dying and death. Influencing factors are age, gender, interpersonal relationships, the type of illness, the environment in which treatment takes place, religion, and culture. Central to this model is the personal reality of the dying person, in which fear, refusal, and acceptance form the core of the dying person's confrontation with death. Ernst Engelke took up Kastenbaum's approach and developed it further with the thesis, "Just as each person's life is unique, so is their death unique. Nevertheless, there are similarities in the death of all people. According to this, all terminally ill people have in common that they are confronted with realizations, responsibilities, and constraints that are typical of dying." For example, a characteristic realization is that the illness is threatening their life. Typical constraints result from the disease, therapies, and side effects. In Engelke's model, the personal and unique aspects of death result from the interaction of many factors in coping with the realizations, responsibilities, and constraints. Important factors include the following: genetic make-up, personality, life experience, and physical, psychological, social, financial, religious, and spiritual resources; the type, degree, and duration of the disease, the consequences and side effects of treatment, the quality of medical treatment and care, and the material surroundings (e.g. furnishings of the apartment, clinic, or home); and the expectations, norms, and behavior of relatives, carers, doctors and the public. According to Engelke, the complexity of dying and the uniqueness of each dying person yield guidelines for communication with dying people. Awareness Along with medical professionals and relatives, sociologists and psychologists also engage with the question of whether it is ethical to inform terminally ill patients of a grave prognosis or an uncertain diagnosis. In 1965, the sociologists Barney G. Glaser and Anselm Strauss published the results of empirical studies in which they derived four different types of awareness among dying patients: Closed Awareness, Suspected Awareness, Mutual Pretense Awareness, and Open Awareness. In Closed Awareness, only relatives, caregivers, and medical professionals recognize the patient's condition; the patient themselves does not recognize their dying. 
In Suspected Awareness, the patient suspects what those around them know but is not told by relatives or medical professionals. In Mutual Pretense Awareness, all participants know that the person is dying, but they behave as if they did not know. In Open Awareness, all participants behave according to their knowledge. The Hospice Movement in the United Kingdom in particular has since advocated for open, truthful and trusting interaction. The situation does not become easier for all involved if difficult conversations are avoided; rather, it intensifies and possibly leads to a disturbed relationship of trust between people, which makes further treatment more difficult or impossible. See also Assisted dying (disambiguation) References
Meningitis
Meningitis is acute or chronic inflammation of the protective membranes covering the brain and spinal cord, collectively called the meninges. The most common symptoms are fever, intense headache, vomiting and neck stiffness and occasionally photophobia. Other symptoms include confusion or altered consciousness, nausea, and an inability to tolerate light or loud noises. Young children often exhibit only nonspecific symptoms, such as irritability, drowsiness, or poor feeding. A non-blanching rash (a rash that does not fade when a glass is rolled over it) may also be present. The inflammation may be caused by infection with viruses, bacteria, fungi or parasites. Non-infectious causes include malignancy (cancer), subarachnoid hemorrhage, chronic inflammatory disease (sarcoidosis) and certain drugs. Meningitis can be life-threatening because of the inflammation's proximity to the brain and spinal cord; therefore, the condition is classified as a medical emergency. A lumbar puncture, in which a needle is inserted into the spinal canal to collect a sample of cerebrospinal fluid (CSF), can diagnose or exclude meningitis. Some forms of meningitis are preventable by immunization with the meningococcal, mumps, pneumococcal, and Hib vaccines. Giving antibiotics to people with significant exposure to certain types of meningitis may also be useful. The first treatment in acute meningitis consists of promptly giving antibiotics and sometimes antiviral drugs. Corticosteroids can also be used to prevent complications from excessive inflammation. Meningitis can lead to serious long-term consequences such as deafness, epilepsy, hydrocephalus, or cognitive deficits, especially if not treated quickly. In 2019, meningitis was diagnosed in about 7.7 million people worldwide, of whom 236,000 died, down from 433,000 deaths in 1990. With appropriate treatment, the risk of death in bacterial meningitis is less than 15%. Outbreaks of bacterial meningitis occur between December and June each year in an area of sub-Saharan Africa known as the meningitis belt. Smaller outbreaks may also occur in other areas of the world. The word meningitis comes from the Greek , 'membrane', and the medical suffix -itis, 'inflammation'. Signs and symptoms Clinical features In adults, the most common symptom of meningitis is a severe headache, occurring in almost 90% of cases of bacterial meningitis, followed by neck stiffness (the inability to flex the neck forward passively due to increased neck muscle tone and stiffness). The classic triad of diagnostic signs consists of neck stiffness, sudden high fever, and altered mental status; however, all three features are present in only 44–46% of bacterial meningitis cases. If none of the three signs are present, acute meningitis is extremely unlikely. Other signs commonly associated with meningitis include photophobia (intolerance to bright light) and phonophobia (intolerance to loud noises). Small children often do not exhibit the aforementioned symptoms, and may only be irritable and look unwell. The fontanelle (the soft spot on the top of a baby's head) can bulge in infants aged up to 6 months. Other features that distinguish meningitis from less severe illnesses in young children are leg pain, cold extremities, and an abnormal skin color. Neck stiffness occurs in 70% of bacterial meningitis in adults. Other signs include the presence of positive Kernig's sign or Brudziński sign. Kernig's sign is assessed with the person lying supine, with the hip and knee flexed to 90 degrees. 
In a person with a positive Kernig's sign, pain limits passive extension of the knee. A positive Brudzinski's sign occurs when flexion of the neck causes involuntary flexion of the knee and hip. Although Kernig's sign and Brudzinski's sign are both commonly used to screen for meningitis, the sensitivity of these tests is limited. They do, however, have very good specificity for meningitis: the signs rarely occur in other diseases. Another test, known as the "jolt accentuation maneuver" helps determine whether meningitis is present in those reporting fever and headache. A person is asked to rapidly rotate the head horizontally; if this does not make the headache worse, meningitis is unlikely. Other problems can produce symptoms similar to those above, but from non-meningitic causes. This is called meningism or pseudomeningitis. Meningitis caused by the bacterium Neisseria meningitidis (known as "meningococcal meningitis") can be differentiated from meningitis with other causes by a rapidly spreading petechial rash, which may precede other symptoms. The rash consists of numerous small, irregular purple or red spots ("petechiae") on the trunk, lower extremities, mucous membranes, conjunctiva, and (occasionally) the palms of the hands or soles of the feet. The rash is typically non-blanching; the redness does not disappear when pressed with a finger or a glass tumbler. Although this rash is not necessarily present in meningococcal meningitis, it is relatively specific for the disease; it does, however, occasionally occur in meningitis due to other bacteria. Other clues on the cause of meningitis may be the skin signs of hand, foot and mouth disease and genital herpes, both of which are associated with various forms of viral meningitis. Early complications Additional problems may occur in the early stage of the illness. These may require specific treatment, and sometimes indicate severe illness or worse prognosis. The infection may trigger sepsis, a systemic inflammatory response syndrome of falling blood pressure, fast heart rate, high or abnormally low temperature, and rapid breathing. Very low blood pressure may occur at an early stage, especially but not exclusively in meningococcal meningitis; this may lead to insufficient blood supply to other organs. Disseminated intravascular coagulation, the excessive activation of blood clotting, may obstruct blood flow to organs and paradoxically increase the bleeding risk. Gangrene of limbs can occur in meningococcal disease. Severe meningococcal and pneumococcal infections may result in hemorrhaging of the adrenal glands, leading to Waterhouse-Friderichsen syndrome, which is often fatal. The brain tissue may swell, pressure inside the skull may increase and the swollen brain may herniate through the skull base. This may be noticed by a decreasing level of consciousness, loss of the pupillary light reflex, and abnormal posturing. The inflammation of the brain tissue may also obstruct the normal flow of CSF around the brain (hydrocephalus). Seizures may occur for various reasons; in children, seizures are common in the early stages of meningitis (in 30% of cases) and do not necessarily indicate an underlying cause. Seizures may result from increased pressure and from areas of inflammation in the brain tissue. Focal seizures (seizures that involve one limb or part of the body), persistent seizures, late-onset seizures and those that are difficult to control with medication indicate a poorer long-term outcome. 
Inflammation of the meninges may lead to abnormalities of the cranial nerves, a group of nerves arising from the brain stem that supply the head and neck area and which control, among other functions, eye movement, facial muscles, and hearing. Visual symptoms and hearing loss may persist after an episode of meningitis. Inflammation of the brain (encephalitis) or its blood vessels (cerebral vasculitis), as well as the formation of blood clots in the veins (cerebral venous thrombosis), may all lead to weakness, loss of sensation, or abnormal movement or function of the part of the body supplied by the affected area of the brain. Causes Meningitis is typically caused by an infection. Most infections are due to viruses, and others due to bacteria, fungi, and parasites. Mostly the parasites are parasitic worms, but can also rarely include parasitic amoebae. Meningitis may also result from various non-infectious causes. The term aseptic meningitis refers to cases of meningitis in which no bacterial infection can be demonstrated. This type of meningitis is usually caused by viruses, but it may be due to bacterial infection that has already been partially treated, when bacteria disappear from the meninges, or when pathogens infect a space adjacent to the meninges (such as sinusitis). Endocarditis (an infection of the heart valves which spreads small clusters of bacteria through the bloodstream) may cause aseptic meningitis. Aseptic meningitis may also result from infection with spirochetes, a group of bacteria that includes Treponema pallidum (the cause of syphilis) and Borrelia burgdorferi (known for causing Lyme disease), and may also result from cerebral malaria (malaria infecting the brain). Bacterial The types of bacteria that cause bacterial meningitis vary according to the infected individual's age group. In premature babies and newborns up to three months old, common causes are group B streptococci (subtypes III which normally inhabit the vagina and are mainly a cause during the first week of life) and bacteria that normally inhabit the digestive tract such as Escherichia coli (carrying the K1 antigen). Listeria monocytogenes (serotype IVb) can be contracted when consuming improperly prepared food such as dairy products, produce and deli meats, and may cause meningitis in the newborn. Older children are more commonly affected by Neisseria meningitidis (meningococcus) and Streptococcus pneumoniae (serotypes 6, 9, 14, 18 and 23) and those under five by Haemophilus influenzae type B (in countries that do not offer vaccination). In adults, Neisseria meningitidis and Streptococcus pneumoniae together cause 80% of bacterial meningitis cases. Risk of infection with Listeria monocytogenes is increased in people over 50 years old. The introduction of pneumococcal vaccine has lowered rates of pneumococcal meningitis in both children and adults. A head injury potentially allows nasal cavity bacteria to enter the meningeal space. Similarly, devices in the brain and meninges, such as cerebral shunts, extraventricular drains or Ommaya reservoirs, carry an increased risk of meningitis. In these cases, people are more likely to be infected with Staphylococci, Pseudomonas, and other Gram-negative bacteria. These pathogens are also associated with meningitis in people with an impaired immune system. An infection in the head and neck area, such as otitis media or mastoiditis, can lead to meningitis in a small proportion of people. 
Recipients of cochlear implants for hearing loss are more at risk for pneumococcal meningitis. In rare cases, Enterococcus spp. can be responsible for meningitis, both community and hospital-acquired, usually as a secondary result of trauma or surgery, or due to intestinal diseases (e.g., strongyloidiasis). Tuberculous meningitis, which is meningitis caused by Mycobacterium tuberculosis, is more common in people from countries in which tuberculosis is endemic, but is also encountered in people with immune problems, such as AIDS. Recurrent bacterial meningitis may be caused by persisting anatomical defects, either congenital or acquired, or by disorders of the immune system. Anatomical defects allow continuity between the external environment and the nervous system. The most common cause of recurrent meningitis is a skull fracture, particularly fractures that affect the base of the skull or extend towards the sinuses and petrous pyramids. Approximately 59% of recurrent meningitis cases are due to such anatomical abnormalities, 36% are due to immune deficiencies (such as complement deficiency, which predisposes especially to recurrent meningococcal meningitis), and 5% are due to ongoing infections in areas adjacent to the meninges. Viral Viruses that cause meningitis include enteroviruses, herpes simplex virus (generally type 2, which produces most genital sores; less commonly type 1), varicella zoster virus (known for causing chickenpox and shingles), mumps virus, HIV, LCMV, Arboviruses (acquired from a mosquito or other insect), and the influenza virus. Mollaret's meningitis is a chronic recurrent form of herpes meningitis; it is thought to be caused by herpes simplex virus type 2. Fungal There are a number of risk factors for fungal meningitis, including the use of immunosuppressants (such as after organ transplantation), HIV/AIDS, and the loss of immunity associated with aging. It is uncommon in those with a normal immune system but has occurred with medication contamination. Symptom onset is typically more gradual, with headaches and fever being present for at least a couple of weeks before diagnosis. The most common fungal meningitis is cryptococcal meningitis due to Cryptococcus neoformans. In Africa, cryptococcal meningitis is now the most common cause of meningitis in multiple studies, and it accounts for 20–25% of AIDS-related deaths in Africa. Other less common pathogenic fungi which can cause meningitis include: Coccidioides immitis, Histoplasma capsulatum, Blastomyces dermatitidis, and Candida species. Parasitic A parasitic worm is often assumed to be the cause of eosinophilic meningitis when there is a predominance of eosinophils (a type of white blood cell) found in the cerebrospinal fluid. The most common parasites implicated are Angiostrongylus cantonensis, Gnathostoma spinigerum, Schistosoma, as well as the conditions cysticercosis, toxocariasis, baylisascariasis, paragonimiasis, and a number of rarer infections and noninfective conditions. Rarely, free-living parasitic amoebae can cause naegleriasis, also called amebic meningitis, a type of meningoencephalitis where not only the meninges are affected but also the brain tissue. Non-infectious Meningitis may occur as the result of several non-infectious causes: spread of cancer to the meninges (malignant or neoplastic meningitis) and certain drugs (mainly non-steroidal anti-inflammatory drugs, antibiotics and intravenous immunoglobulins). 
It may also be caused by several inflammatory conditions, such as sarcoidosis (which is then called neurosarcoidosis), connective tissue disorders such as systemic lupus erythematosus, and certain forms of vasculitis (inflammatory conditions of the blood vessel wall), such as Behçet's disease. Epidermoid cysts and dermoid cysts may cause meningitis by releasing irritant matter into the subarachnoid space. Rarely, migraine may cause meningitis, but this diagnosis is usually only made when other causes have been eliminated. Mechanism The meninges comprise three membranes that, together with the cerebrospinal fluid, enclose and protect the brain and spinal cord (the central nervous system). The pia mater is a delicate impermeable membrane that firmly adheres to the surface of the brain, following all the minor contours. The arachnoid mater (so named because of its spider-web-like appearance) is a loosely fitting sac on top of the pia mater. The subarachnoid space separates the arachnoid and pia mater membranes and is filled with cerebrospinal fluid. The outermost membrane, the dura mater, is a thick durable membrane, which is attached to both the arachnoid membrane and the skull. In bacterial meningitis, bacteria reach the meninges by one of two main routes: through the bloodstream (hematogenous spread) or through direct contact between the meninges and either the nasal cavity or the skin. In most cases, meningitis follows invasion of the bloodstream by organisms that live on mucosal surfaces such as the nasal cavity. This is often in turn preceded by viral infections, which break down the normal barrier provided by the mucosal surfaces. Once bacteria have entered the bloodstream, they enter the subarachnoid space in places where the blood–brain barrier is vulnerable – such as the choroid plexus. Meningitis occurs in 25% of newborns with bloodstream infections due to group B streptococci; this phenomenon is much less common in adults. Direct contamination of the cerebrospinal fluid may arise from indwelling devices, skull fractures, or infections of the nasopharynx or the nasal sinuses that have formed a tract with the subarachnoid space (see above); occasionally, congenital defects of the dura mater can be identified. The large-scale inflammation that occurs in the subarachnoid space during meningitis is not a direct result of bacterial infection but can rather largely be attributed to the response of the immune system to the entry of bacteria into the central nervous system. When components of the bacterial cell membrane are identified by the immune cells of the brain (astrocytes and microglia), they respond by releasing large amounts of cytokines, hormone-like mediators that recruit other immune cells and stimulate other tissues to participate in an immune response. The blood–brain barrier becomes more permeable, leading to "vasogenic" cerebral edema (swelling of the brain due to fluid leakage from blood vessels). Large numbers of white blood cells enter the CSF, causing inflammation of the meninges and leading to "interstitial" edema (swelling due to fluid between the cells). In addition, the walls of the blood vessels themselves become inflamed (cerebral vasculitis), which leads to decreased blood flow and a third type of edema, "cytotoxic" edema. 
The three forms of cerebral edema all lead to increased intracranial pressure; together with the lowered blood pressure often encountered in sepsis, this means that it is harder for blood to enter the brain; consequently brain cells are deprived of oxygen and undergo apoptosis (programmed cell death). Administration of antibiotics may initially worsen the process outlined above, by increasing the amount of bacterial cell membrane products released through the destruction of bacteria. Particular treatments, such as the use of corticosteroids, are aimed at dampening the immune system's response to this phenomenon. Diagnosis Diagnosing meningitis as promptly as possible can improve outcomes. There are no specific signs or symptoms that can indicate meningitis, and a lumbar puncture (spinal tap) to examine the cerebrospinal fluid is recommended for diagnosis. Lumbar puncture is contraindicated if there is a mass in the brain (tumor or abscess) or the intracranial pressure (ICP) is elevated, as it may lead to brain herniation. If someone is at risk for either a mass or raised ICP (recent head injury, a known immune system problem, localizing neurological signs, or evidence on examination of a raised ICP), a CT or MRI scan is recommended prior to the lumbar puncture. This applies in 45% of all adult cases. There are no physical tests that can rule out or determine if a person has meningitis. The jolt accentuation test is not specific or sensitive enough to completely rule out meningitis. If someone is suspected of having meningitis, blood tests are performed for markers of inflammation (e.g. C-reactive protein, complete blood count), as well as blood cultures. If a CT or MRI is required before LP, or if LP proves difficult, professional guidelines suggest that antibiotics should be administered first to prevent delay in treatment, especially if this may be longer than 30 minutes. Often, CT or MRI scans are performed at a later stage to assess for complications of meningitis. In severe forms of meningitis, monitoring of blood electrolytes may be important; for example, hyponatremia is common in bacterial meningitis. The cause of hyponatremia, however, is controversial and may include dehydration, the inappropriate secretion of the antidiuretic hormone (SIADH), or overly aggressive intravenous fluid administration. Lumbar puncture A lumbar puncture is done by positioning the person, usually lying on the side, applying local anesthetic, and inserting a needle into the dural sac (a sac around the spinal cord) to collect cerebrospinal fluid (CSF). When this has been achieved, the "opening pressure" of the CSF is measured using a manometer. The pressure is normally between 6 and 18 cm water (cmH2O); in bacterial meningitis the pressure is usually elevated. In cryptococcal meningitis, intracranial pressure is markedly elevated. The initial appearance of the fluid may prove an indication of the nature of the infection: cloudy CSF indicates higher levels of protein, white and red blood cells and/or bacteria, and therefore may suggest bacterial meningitis. The CSF sample is examined for presence and types of white blood cells, red blood cells, protein content and glucose level. Gram staining of the sample may demonstrate bacteria in bacterial meningitis, but absence of bacteria does not exclude bacterial meningitis as they are only seen in 60% of cases; this figure is reduced by a further 20% if antibiotics were administered before the sample was taken. 
Gram staining is also less reliable in particular infections such as listeriosis. Microbiological culture of the sample is more sensitive (it identifies the organism in 70–85% of cases) but results can take up to 48 hours to become available. The type of white blood cell predominantly present (see table) indicates whether meningitis is bacterial (usually neutrophil-predominant) or viral (usually lymphocyte-predominant), although at the beginning of the disease this is not always a reliable indicator. Less commonly, eosinophils predominate, suggesting parasitic or fungal etiology, among others. The concentration of glucose in CSF is normally above 40% of that in blood. In bacterial meningitis it is typically lower; the CSF glucose level is therefore divided by the blood glucose (CSF glucose to serum glucose ratio). A ratio ≤0.4 is indicative of bacterial meningitis; in the newborn, glucose levels in CSF are normally higher, and a ratio below 0.6 (60%) is therefore considered abnormal. High levels of lactate in CSF indicate a higher likelihood of bacterial meningitis, as does a higher white blood cell count. If lactate levels are less than 35 mg/dl and the person has not previously received antibiotics then this may rule out bacterial meningitis. Various other specialized tests may be used to distinguish between different types of meningitis. A latex agglutination test may be positive in meningitis caused by Streptococcus pneumoniae, Neisseria meningitidis, Haemophilus influenzae, Escherichia coli and group B streptococci; its routine use is not encouraged as it rarely leads to changes in treatment, but it may be used if other tests are not diagnostic. Similarly, the limulus lysate test may be positive in meningitis caused by Gram-negative bacteria, but it is of limited use unless other tests have been unhelpful. Polymerase chain reaction (PCR) is a technique used to amplify small traces of bacterial DNA in order to detect the presence of bacterial or viral DNA in cerebrospinal fluid; it is a highly sensitive and specific test since only trace amounts of the infecting agent's DNA are required. It may identify bacteria in bacterial meningitis and may assist in distinguishing the various causes of viral meningitis (enterovirus, herpes simplex virus 2 and mumps in those not vaccinated for this). Serology (identification of antibodies to viruses) may be useful in viral meningitis. If tuberculous meningitis is suspected, the sample is processed for Ziehl–Neelsen stain, which has a low sensitivity, and tuberculosis culture, which takes a long time to process; PCR is being used increasingly. Diagnosis of cryptococcal meningitis can be made at low cost using an India ink stain of the CSF; however, testing for cryptococcal antigen in blood or CSF is more sensitive. A diagnostic and therapeutic difficulty is "partially treated meningitis", where there are meningitis symptoms after receiving antibiotics (such as for presumptive sinusitis). When this happens, CSF findings may resemble those of viral meningitis, but antibiotic treatment may need to be continued until there is definitive positive evidence of a viral cause (e.g. a positive enterovirus PCR).
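To make the CSF arithmetic above concrete, the following is a minimal Python sketch of the glucose-ratio and lactate rules just described. The cut-off values (ratio ≤ 0.4, or below 0.6 in the newborn; lactate below 35 mg/dl only reassuring if no antibiotics were given beforehand) are taken directly from the text; the function name and the shape of the returned summary are invented for illustration, and this is not a diagnostic tool.

def interpret_csf(csf_glucose, serum_glucose, csf_lactate_mg_dl,
                  newborn=False, prior_antibiotics=False):
    """Summarize the individual CSF clues described above (illustrative only)."""
    ratio = csf_glucose / serum_glucose
    cutoff = 0.6 if newborn else 0.4              # newborn CSF glucose is normally higher
    return {
        "glucose_ratio": round(ratio, 2),
        "low_glucose_ratio": ratio <= cutoff,     # suggestive of bacterial meningitis
        "high_lactate": csf_lactate_mg_dl >= 35,  # higher likelihood of a bacterial cause
        # Lactate below 35 mg/dl may rule out bacterial meningitis,
        # but only if no antibiotics were given before sampling.
        "lactate_may_rule_out_bacterial": csf_lactate_mg_dl < 35 and not prior_antibiotics,
    }

# Example: CSF glucose 30 mg/dl with serum glucose 100 mg/dl gives a ratio of 0.3.
print(interpret_csf(30, 100, csf_lactate_mg_dl=60))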
Postmortem Meningitis can be diagnosed after death has occurred. The findings from a post mortem are usually a widespread inflammation of the pia mater and arachnoid layers of the meninges. Neutrophil granulocytes tend to have migrated to the cerebrospinal fluid, and the base of the brain, along with the cranial nerves and the spinal cord, may be surrounded with pus – as may the meningeal vessels. Prevention For some causes of meningitis, protection can be provided in the long term through vaccination, or in the short term with antibiotics. Some behavioral measures may also be effective. Behavioral Bacterial and viral meningitis are contagious, but neither is as contagious as the common cold or flu. Both can be transmitted through droplets of respiratory secretions during close contact such as kissing, sneezing or coughing on someone, but bacterial meningitis cannot be spread by only breathing the air where a person with meningitis has been. Viral meningitis is typically caused by enteroviruses, and is most commonly spread through fecal contamination. The risk of infection can be decreased by changing the behavior that led to transmission. Vaccination Since the 1980s, many countries have included immunization against Haemophilus influenzae type B in their routine childhood vaccination schemes. This has practically eliminated this pathogen as a cause of meningitis in young children in those countries. In the countries in which the disease burden is highest, however, the vaccine is still too expensive. Similarly, immunization against mumps has led to a sharp fall in the number of cases of mumps meningitis, which prior to vaccination occurred in 15% of all cases of mumps. Meningococcus vaccines exist against groups A, B, C, W135 and Y. In countries where the vaccine for meningococcus group C was introduced, cases caused by this pathogen have decreased substantially. A quadrivalent vaccine now exists, combining vaccines against groups A, C, W135 and Y (but not B); immunization with this ACW135Y vaccine is now a visa requirement for taking part in the Hajj. Development of a vaccine against group B meningococci has proved much more difficult, as its surface proteins (which would normally be used to make a vaccine) only elicit a weak response from the immune system, or cross-react with normal human proteins. Still, some countries (New Zealand, Cuba, Norway and Chile) have developed vaccines against local strains of group B meningococci; some have shown good results and are used in local immunization schedules. Two new vaccines, both approved in 2014, are effective against a wider range of group B meningococci strains. In Africa, until recently, the approach for prevention and control of meningococcal epidemics was based on early detection of the disease and emergency reactive mass vaccination of the population at risk with bivalent A/C or trivalent A/C/W135 polysaccharide vaccines, though the introduction of MenAfriVac (meningococcus group A vaccine) has demonstrated effectiveness in young people and has been described as a model for product development partnerships in resource-limited settings. Routine vaccination against Streptococcus pneumoniae with the pneumococcal conjugate vaccine (PCV), which is active against seven common serotypes of this pathogen, significantly reduces the incidence of pneumococcal meningitis. The pneumococcal polysaccharide vaccine, which covers 23 strains, is only administered to certain groups (e.g. those who have had a splenectomy, the surgical removal of the spleen); it does not elicit a significant immune response in all recipients, e.g. small children.
Childhood vaccination with Bacillus Calmette-Guérin has been reported to significantly reduce the rate of tuberculous meningitis, but its waning effectiveness in adulthood has prompted a search for a better vaccine. Antibiotics Short-term antibiotic prophylaxis is another method of prevention, particularly of meningococcal meningitis. In cases of meningococcal meningitis, preventative treatment in close contacts with antibiotics (e.g. rifampicin, ciprofloxacin or ceftriaxone) can reduce their risk of contracting the condition, but does not protect against future infections. Resistance to rifampicin has been noted to increase after use, which has caused some to recommend considering other agents. While antibiotics are frequently used in an attempt to prevent meningitis in those with a basilar skull fracture there is not enough evidence to determine whether this is beneficial or harmful. This applies to those with or without a CSF leak. Management Meningitis is potentially life-threatening and has a high mortality rate if untreated; delay in treatment has been associated with a poorer outcome. Thus, treatment with wide-spectrum antibiotics should not be delayed while confirmatory tests are being conducted. If meningococcal disease is suspected in primary care, guidelines recommend that benzylpenicillin be administered before transfer to hospital. Intravenous fluids should be administered if hypotension (low blood pressure) or shock are present. It is not clear whether intravenous fluid should be given routinely or whether this should be restricted. Given that meningitis can cause a number of early severe complications, regular medical review is recommended to identify these complications early and to admit the person to an intensive care unit, if deemed necessary. Mechanical ventilation may be needed if the level of consciousness is very low, or if there is evidence of respiratory failure. If there are signs of raised intracranial pressure, measures to monitor the pressure may be taken; this would allow the optimization of the cerebral perfusion pressure and various treatments to decrease the intracranial pressure with medication (e.g. mannitol). Seizures are treated with anticonvulsants. Hydrocephalus (obstructed flow of CSF) may require insertion of a temporary or long-term drainage device, such as a cerebral shunt. The osmotic therapy, glycerol, has an unclear effect on mortality but may decrease hearing problems. Bacterial meningitis Antibiotics Empiric antibiotics (treatment without exact diagnosis) should be started immediately, even before the results of the lumbar puncture and CSF analysis are known. The choice of initial treatment depends largely on the kind of bacteria that cause meningitis in a particular place and population. For instance, in the United Kingdom, empirical treatment consists of a third-generation cefalosporin such as cefotaxime or ceftriaxone. In the US, where resistance to cefalosporins is increasingly found in streptococci, addition of vancomycin to the initial treatment is recommended. Chloramphenicol, either alone or in combination with ampicillin, however, appears to work equally well. Empirical therapy may be chosen on the basis of the person's age, whether the infection was preceded by a head injury, whether the person has undergone recent neurosurgery and whether or not a cerebral shunt is present. 
In young children and those over 50 years of age, as well as those who are immunocompromised, the addition of ampicillin is recommended to cover Listeria monocytogenes. Once the Gram stain results become available, and the broad type of bacterial cause is known, it may be possible to change the antibiotics to those likely to deal with the presumed group of pathogens. The results of the CSF culture generally take longer to become available (24–48 hours). Once they do, empiric therapy may be switched to specific antibiotic therapy targeted to the specific causative organism and its sensitivities to antibiotics. For an antibiotic to be effective in meningitis it must not only be active against the pathogenic bacterium but also reach the meninges in adequate quantities; some antibiotics have inadequate penetrance and therefore have little use in meningitis. Most of the antibiotics used in meningitis have not been tested directly on people with meningitis in clinical trials. Rather, the relevant knowledge has mostly derived from laboratory studies in rabbits. Tuberculous meningitis requires prolonged treatment with antibiotics. While tuberculosis of the lungs is typically treated for six months, those with tuberculous meningitis are typically treated for a year or longer. Fluid therapy Fluids given intravenously are an essential part of the treatment of bacterial meningitis. There is no difference in terms of mortality or acute severe neurological complications between children given a maintenance regimen and those given a restricted-fluid regimen, but the evidence favors the maintenance regimen in terms of the emergence of chronic severe neurological complications. Steroids Additional treatment with corticosteroids (usually dexamethasone) has shown some benefits, such as a reduction of hearing loss and better short-term neurological outcomes in adolescents and adults from high-income countries with low rates of HIV. Some research has found reduced rates of death while other research has not. They also appear to be beneficial in those with tuberculous meningitis, at least in those who are HIV negative. Professional guidelines therefore recommend that dexamethasone or a similar corticosteroid be commenced just before the first dose of antibiotics is given, and continued for four days. Given that most of the benefit of the treatment is confined to those with pneumococcal meningitis, some guidelines suggest that dexamethasone be discontinued if another cause for meningitis is identified. The likely mechanism is suppression of overactive inflammation. Additional treatment with corticosteroids has a different role in children than in adults. Though the benefit of corticosteroids has been demonstrated in adults as well as in children from high-income countries, their use in children from low-income countries is not supported by the evidence; the reason for this discrepancy is not clear. Even in high-income countries, the benefit of corticosteroids is only seen when they are given prior to the first dose of antibiotics, and is greatest in cases of H. influenzae meningitis, the incidence of which has decreased dramatically since the introduction of the Hib vaccine. Thus, corticosteroids are recommended in the treatment of pediatric meningitis if the cause is H. influenzae, and only if given prior to the first dose of antibiotics; other uses are controversial.
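As a rough illustration of how the factors described above shape empirical therapy (local cephalosporin resistance, age, immune status and the need for Listeria cover), here is a minimal Python sketch. It merely restates the examples given in the text and is not a prescribing guideline; the regimen strings are paraphrases, and the age cut-off used for "young children" is an assumption made only for this illustration.

def empirical_meningitis_regimen(age_years, immunocompromised=False,
                                 local_cephalosporin_resistance=False):
    """Restate the decision factors from the text; illustrative, not clinical guidance."""
    regimen = ["third-generation cephalosporin (e.g. cefotaxime or ceftriaxone)"]
    if local_cephalosporin_resistance:   # e.g. the US, where resistant streptococci are increasingly found
        regimen.append("vancomycin")
    # Young children, adults over 50 and the immunocompromised: add ampicillin
    # to cover Listeria monocytogenes. The cut-off of 3 years for "young children"
    # is an assumption for this sketch only.
    if age_years < 3 or age_years > 50 or immunocompromised:
        regimen.append("ampicillin (Listeria cover)")
    return regimen

print(empirical_meningitis_regimen(age_years=67, local_cephalosporin_resistance=True))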
Adjuvant therapies In addition to the primary therapy of antibiotics and corticosteroids, other adjuvant therapies are under development or are sometimes used to try and improve survival from bacterial meningitis and reduce the risk of neurological problems. Examples of adjuvant therapies that have been trialed include acetaminophen, immunoglobulin therapy, heparin, pentoxifyline, and a mononucleotide mixture with succinic acid. It is not clear if any of these therapies are helpful or worsen outcomes in people with acute bacterial meningitis. Viral meningitis Viral meningitis typically only requires supportive therapy; most viruses responsible for causing meningitis are not amenable to specific treatment. Viral meningitis tends to run a more benign course than bacterial meningitis. Herpes simplex virus and varicella zoster virus may respond to treatment with antiviral drugs such as aciclovir, but there are no clinical trials that have specifically addressed whether this treatment is effective. Mild cases of viral meningitis can be treated at home with conservative measures such as fluid, bedrest, and analgesics. Fungal meningitis Fungal meningitis, such as cryptococcal meningitis, is treated with long courses of high dose antifungals, such as amphotericin B and flucytosine. Raised intracranial pressure is common in fungal meningitis, and frequent (ideally daily) lumbar punctures to relieve the pressure are recommended, or alternatively a lumbar drain. Prognosis Untreated, bacterial meningitis is almost always fatal. According to the WHO, bacterial meningitis has an overall mortality rate of 16.7% (with treatment). Viral meningitis, in contrast, tends to resolve spontaneously and is rarely fatal. With treatment, mortality (risk of death) from bacterial meningitis depends on the age of the person and the underlying cause. Of newborns, 20–30% may die from an episode of bacterial meningitis. This risk is much lower in older children, whose mortality is about 2%, but rises again to about 19–37% in adults. Risk of death is predicted by various factors apart from age, such as the pathogen and the time it takes for the pathogen to be cleared from the cerebrospinal fluid, the severity of the generalized illness, a decreased level of consciousness or an abnormally low count of white blood cells in the CSF. Meningitis caused by H. influenzae and meningococci has a better prognosis than cases caused by group B streptococci, coliforms and S. pneumoniae. In adults, too, meningococcal meningitis has a lower mortality (3–7%) than pneumococcal disease. In children there are several potential disabilities which may result from damage to the nervous system, including sensorineural hearing loss, epilepsy, learning and behavioral difficulties, as well as decreased intelligence. These occur in about 15% of survivors. Some of the hearing loss may be reversible. In adults, 66% of all cases emerge without disability. The main problems are deafness (in 14%) and cognitive impairment (in 10%). Tuberculous meningitis in children continues to be associated with a significant risk of death even with treatment (19%), and a significant proportion of the surviving children have ongoing neurological problems. Just over a third of all cases survives with no problems. Epidemiology Although meningitis is a notifiable disease in many countries, the exact incidence rate is unknown. In 2013 meningitis resulted in 303,000 deaths – down from 464,000 deaths in 1990. 
In 2010 it was estimated that meningitis resulted in 420,000 deaths, excluding cryptococcal meningitis. Bacterial meningitis occurs in about 3 people per 100,000 annually in Western countries. Population-wide studies have shown that viral meningitis is more common, at 10.9 per 100,000, and occurs more often in the summer. In Brazil, the rate of bacterial meningitis is higher, at 45.8 per 100,000 annually. Sub-Saharan Africa has been plagued by large epidemics of meningococcal meningitis for over a century, leading to it being labeled the "meningitis belt". Epidemics typically occur in the dry season (December to June), and an epidemic wave can last two to three years, dying out during the intervening rainy seasons. Attack rates of 100–800 cases per 100,000 are encountered in this area, which is poorly served by medical care. These cases are predominantly caused by meningococci. The largest epidemic ever recorded in history swept across the entire region in 1996–1997, causing over 250,000 cases and 25,000 deaths. Meningococcal disease occurs in epidemics in areas where many people live together for the first time, such as army barracks during mobilization, university and college campuses and the annual Hajj pilgrimage. Although the pattern of epidemic cycles in Africa is not well understood, several factors have been associated with the development of epidemics in the meningitis belt. They include: medical conditions (immunological susceptibility of the population), demographic conditions (travel and large population displacements), socioeconomic conditions (overcrowding and poor living conditions), climatic conditions (drought and dust storms), and concurrent infections (acute respiratory infections). There are significant differences in the local distribution of causes for bacterial meningitis. For instance, while N. meningitidis groups B and C cause most disease episodes in Europe, group A is found in Asia and continues to predominate in Africa, where it causes most of the major epidemics in the meningitis belt, accounting for about 80% to 85% of documented meningococcal meningitis cases. History Some suggest that Hippocrates may have realized the existence of meningitis, and it seems that meningism was known to pre-Renaissance physicians such as Avicenna. The description of tuberculous meningitis, then called "dropsy in the brain", is often attributed to Edinburgh physician Sir Robert Whytt in a posthumous report that appeared in 1768, although the link with tuberculosis and its pathogen was not made until the next century. It appears that epidemic meningitis is a relatively recent phenomenon. The first recorded major outbreak occurred in Geneva in 1805. Several other epidemics in Europe and the United States were described shortly afterward, and the first report of an epidemic in Africa appeared in 1840. African epidemics became much more common in the 20th century, starting with a major epidemic sweeping Nigeria and Ghana in 1905–1908. The first report of bacterial infection underlying meningitis was by the Austrian bacteriologist Anton Weichselbaum, who in 1887 described the meningococcus. Mortality from meningitis was very high (over 90%) in early reports. In 1906, antiserum was produced in horses; this was developed further by the American scientist Simon Flexner and markedly decreased mortality from meningococcal disease. In 1944, penicillin was first reported to be effective in meningitis.
The introduction in the late 20th century of Haemophilus vaccines led to a marked fall in cases of meningitis associated with this pathogen, and in 2002, evidence emerged that treatment with steroids could improve the prognosis of bacterial meningitis. See also aseptic meningitis CSF/serum glucose ratio References External links Meningitis U.S. Centers for Disease Control and Prevention (CDC) Disorders causing seizures Medical emergencies Acute pain Medical triads
Drug
A drug is any chemical substance other than a nutrient or an essential dietary ingredient, which, when administered to a living organism, produces a biological effect. Consumption of drugs can be via inhalation, injection, smoking, ingestion, absorption via a patch on the skin, suppository, or dissolution under the tongue. In pharmacology, a drug is a chemical substance, typically of known structure, which, when administered to a living organism, produces a biological effect. A pharmaceutical drug, also called a medication or medicine, is a chemical substance used to treat, cure, prevent, or diagnose a disease or to promote well-being. Traditionally drugs were obtained through extraction from medicinal plants, but more recently also by organic synthesis. Pharmaceutical drugs may be used for a limited duration, or on a regular basis for chronic disorders. Classification Pharmaceutical drugs are often classified into drug classes: groups of related drugs that have similar chemical structures, the same mechanism of action (binding to the same biological target), a related mode of action, and that are used to treat the same disease. The Anatomical Therapeutic Chemical Classification System (ATC), the most widely used drug classification system, assigns drugs a unique ATC code, an alphanumeric code that assigns them to specific drug classes within the ATC system. Another major classification system is the Biopharmaceutics Classification System. This classifies drugs according to their solubility and permeability or absorption properties. Psychoactive drugs are substances that affect the function of the central nervous system, altering perception, mood or consciousness. These drugs are divided into different groups such as stimulants, depressants, antidepressants, anxiolytics, antipsychotics, and hallucinogens. These psychoactive drugs have been proven useful in treating a wide range of medical conditions, including mental disorders, around the world. The most widely used drugs in the world include caffeine, nicotine and alcohol, which are also considered recreational drugs, since they are used for pleasure rather than medicinal purposes. All drugs can have potential side effects. Abuse of several psychoactive drugs can cause addiction and/or physical dependence. Excessive use of stimulants can promote stimulant psychosis. Many recreational drugs are illicit; international treaties such as the Single Convention on Narcotic Drugs exist for the purpose of their prohibition. Etymology In English, the noun "drug" is thought to originate from Old French, possibly deriving from a Middle Dutch expression meaning "dry (barrels)", referring to medicinal plants preserved as dry matter in barrels. In the 1990s, however, the Spanish lexicographer Federico Corriente Córdoba documented the possible origin of the word in {ḥṭr}, an early romanized form of the Al-Andalus language from the northwestern part of the Iberian Peninsula. The term could approximately be transcribed as حطروكة or hatruka. The term "drug" has become a skunked term with a negative connotation, being used as a synonym for illegal substances like cocaine or heroin or for drugs used recreationally. In other contexts the terms "drug" and "medicine" are used interchangeably. Efficacy Drug action is highly specific, and a drug's effects may only be detected in certain individuals. For instance, the 10 highest-grossing drugs in the US may help only 4–25% of people. Often, the activity of a drug depends on the genotype of a patient.
For example, Erbitux (cetuximab) increases the survival rate of colorectal cancer patients if they carry a particular mutation in the EGFR gene. Some drugs are specifically approved for certain genotypes. Vemurafenib is one such case: it is used for melanoma patients who carry a mutation in the BRAF gene. The number of people who benefit from a drug determines whether drug trials are worth carrying out, given that phase III trials may cost between $100 million and $700 million per drug. This is the motivation behind personalized medicine, that is, to develop drugs that are adapted to individual patients. Medication A medication or medicine is a drug taken to cure or ameliorate any symptoms of an illness or medical condition. The use may also be as preventive medicine that has future benefits but does not treat any existing or pre-existing diseases or symptoms. Dispensing of medication is often regulated by governments into three categories: over-the-counter medications, which are available in pharmacies and supermarkets without special restrictions; behind-the-counter medicines, which are dispensed by a pharmacist without needing a doctor's prescription; and prescription-only medicines, which must be prescribed by a licensed medical professional, usually a physician. In the United Kingdom, behind-the-counter medicines are called pharmacy medicines, which can only be sold in registered pharmacies, by or under the supervision of a pharmacist. These medications are designated by the letter P on the label. The range of medicines available without a prescription varies from country to country. Medications are typically produced by pharmaceutical companies and are often patented to give the developer exclusive rights to produce them. Those that are not patented (or with expired patents) are called generic drugs since they can be produced by other companies without restrictions or licenses from the patent holder. Pharmaceutical drugs are usually categorised into drug classes. A group of drugs in a class will share a similar chemical structure, the same mechanism of action, a related mode of action, or target the same illness or related illnesses. The Anatomical Therapeutic Chemical Classification System (ATC), the most widely used drug classification system, assigns drugs a unique ATC code, an alphanumeric code that assigns them to specific drug classes within the ATC system. Another major classification system is the Biopharmaceutics Classification System. This groups drugs according to their solubility and permeability or absorption properties.
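To illustrate the alphanumeric structure of an ATC code mentioned above, here is a small Python sketch that splits a code into its five hierarchical levels. The level boundaries follow the standard ATC layout (one letter, two digits, one letter, one letter, two digits); the helper name and the metformin example (A10BA02) are included only for illustration.

def split_atc_code(code: str) -> dict:
    """Split a 7-character ATC code into its five hierarchical levels."""
    code = code.strip().upper()
    if len(code) != 7:
        raise ValueError("a full ATC code has seven characters, e.g. 'A10BA02'")
    return {
        "anatomical_main_group": code[0],      # level 1: one letter
        "therapeutic_subgroup": code[:3],      # level 2: plus two digits
        "pharmacological_subgroup": code[:4],  # level 3: plus one letter
        "chemical_subgroup": code[:5],         # level 4: plus one letter
        "chemical_substance": code,            # level 5: plus two digits
    }

# Example: A10BA02 is the ATC code for metformin.
print(split_atc_code("A10BA02"))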
Spiritual and religious use Some religions, particularly ethnic religions, are based completely on the use of certain drugs, known as entheogens, which are mostly hallucinogens: psychedelics, dissociatives, or deliriants. Some entheogens include kava, which can act as a stimulant, a sedative, a euphoriant and an anesthetic. The roots of the kava plant are used to produce a drink consumed throughout the cultures of the Pacific Ocean. Some shamans from different cultures use entheogens, defined as "generating the divine within," to achieve religious ecstasy. Amazonian shamans use ayahuasca (yagé), a hallucinogenic brew, for this purpose. Mazatec shamans have a long and continuous tradition of religious use of Salvia divinorum, a psychoactive plant. Its use is to facilitate visionary states of consciousness during spiritual healing sessions. Silene undulata is regarded by the Xhosa people as a sacred plant and used as an entheogen. Its roots are traditionally used to induce vivid (and according to the Xhosa, prophetic) lucid dreams during the initiation process of shamans, classifying it as a naturally occurring oneirogen similar to the more well-known dream herb Calea ternifolia. Peyote, a small spineless cactus, has been a major source of psychedelic mescaline and has probably been used by Native Americans for at least five thousand years. Most mescaline is now obtained from a few species of columnar cacti, in particular the San Pedro cactus, rather than from the vulnerable peyote. The entheogenic use of cannabis has also been widely practised for centuries. Rastafari use marijuana (ganja) as a sacrament in their religious ceremonies. Psychedelic mushrooms (psilocybin mushrooms), commonly called magic mushrooms or shrooms, have also long been used as entheogens. Smart drugs and designer drugs Nootropics, also commonly referred to as "smart drugs", are drugs that are claimed to improve human cognitive abilities. Nootropics are used to improve memory, concentration, thought, mood, and learning. An increasingly used nootropic among students, also known as a study drug, is methylphenidate, commonly branded as Ritalin and used for the treatment of attention deficit hyperactivity disorder (ADHD) and narcolepsy. At high doses methylphenidate can become highly addictive. Serious addiction can lead to psychosis, anxiety and heart problems, and the use of this drug is related to a rise in suicides and overdoses. Evidence for use outside of student settings is limited but suggests that it is commonplace. Intravenous use of methylphenidate can lead to emphysematous damage to the lungs, known as Ritalin lung. Other drugs, known as designer drugs, are also produced. An early example of what today would be labelled a 'designer drug' was LSD, which was synthesised from ergot. Other examples include analogs of performance-enhancing drugs such as designer steroids taken to improve physical capabilities; these are sometimes used (legally or not) for this purpose, often by professional athletes. Other designer drugs mimic the effects of psychoactive drugs. Since the late 1990s, many of these synthesised drugs have been identified. In Japan and the United Kingdom this has spurred the addition of many designer drugs into a newer class of controlled substances known as a temporary class drug. Synthetic cannabinoids have been produced for a longer period of time and are used in the designer drug synthetic cannabis. Recreational drug use Recreational drug use is the use of a drug (legal, controlled, or illegal) with the primary intention of altering the state of consciousness through alteration of the central nervous system in order to create positive emotions and feelings. The hallucinogen LSD is a psychoactive drug commonly used as a recreational drug. Ketamine is a drug used for anesthesia, and is also used as a recreational drug, both in powder and liquid form, for its hallucinogenic and dissociative effects. Some national laws prohibit the use of different recreational drugs; medicinal drugs that have the potential for recreational use are often heavily regulated. However, there are many recreational drugs that are legal in many jurisdictions and widely culturally accepted. Cannabis is the most commonly consumed controlled recreational drug in the world (as of 2012). Its use is illegal in many countries but legal in several others, usually with the proviso that it may only be used for personal use.
It can be used in the leaf form of marijuana (grass), or in the resin form of hashish. Marijuana is a milder form of cannabis than hashish. There may be an age restriction on the consumption and purchase of legal recreational drugs. Some recreational drugs that are legal and accepted in many places include alcohol, tobacco, betel nut, and caffeine products, and in some areas of the world the legal use of drugs such as khat is common. There are a number of legal intoxicants commonly called legal highs that are used recreationally. The most widely used of these is alcohol. Administration of drugs All drugs have a route of administration, and many can be administered by more than one. A bolus is the administration of a medication, drug, or other compound given to rapidly raise its concentration in the blood to an effective level, regardless of the route of administration. Control of drugs Numerous governmental offices in many countries deal with the control and supervision of drug manufacture and use, and the implementation of various drug laws. The Single Convention on Narcotic Drugs is an international treaty brought about in 1961 to prohibit the use of narcotics save for those used in medical research and treatment. In 1971, a second treaty, the Convention on Psychotropic Substances, was introduced to deal with newer recreational psychoactive and psychedelic drugs. The legal status of Salvia divinorum varies in many countries and even in states within the United States. Where it is legislated against, the degree of prohibition also varies. The Food and Drug Administration (FDA) in the United States is a federal agency responsible for protecting and promoting public health through the regulation and supervision of food safety, tobacco products, dietary supplements, prescription and over-the-counter medications, vaccines, biopharmaceuticals, blood transfusions, medical devices, electromagnetic radiation emitting devices, cosmetics, animal foods and veterinary drugs. In India, the Narcotics Control Bureau (NCB), an Indian federal law enforcement and intelligence agency under the Ministry of Home Affairs, is tasked with combating drug trafficking and the use of illegal substances under the provisions of the Narcotic Drugs and Psychotropic Substances Act. See also Club drug Controlled Substances Act Drug checking Drug development Inverse benefit law Lifestyle drug Medical cannabis Nonsteroidal anti-inflammatory drug Pharmacognosy Placebo Prodrug Specialty drugs (United States) United Nations Office on Drugs and Crime Lists of drugs List of drugs List of pharmaceutical companies List of psychoactive plants List of Schedule I drugs (US) References Further reading External links DrugBank, a database of 13,400 drugs and 5,100 protein drug targets "Drugs", BBC Radio 4 discussion with Richard Davenport-Hines, Sadie Plant and Mike Jay (In Our Time, May 23, 2002)
Safety data sheet
A safety data sheet (SDS), material safety data sheet (MSDS), or product safety data sheet (PSDS) is a document that lists information relating to occupational safety and health for the use of various substances and products. SDSs are a widely used type of fact sheet used to catalogue information on chemical species including chemical compounds and chemical mixtures. SDS information may include instructions for the safe use and potential hazards associated with a particular material or product, along with spill-handling procedures. The older MSDS formats could vary from source to source within a country depending on national requirements; however, the newer SDS format is internationally standardized. An SDS for a substance is not primarily intended for use by the general consumer, focusing instead on the hazards of working with the material in an occupational setting. There is also a duty to properly label substances on the basis of physico-chemical, health, or environmental risk. Labels often include hazard symbols such as the European Union standard symbols. The same product (e.g. paints sold under identical brand names by the same company) can have different formulations in different countries. The formulation and hazards of a product using a generic name may vary between manufacturers in the same country. Globally Harmonized System The Globally Harmonized System of Classification and Labelling of Chemicals contains a standard specification for safety data sheets. The SDS follows a 16 section format which is internationally agreed and for substances especially, the SDS should be followed with an Annex which contains the exposure scenarios of this particular substance. The 16 sections are: SECTION 1: Identification of the substance/mixture and of the company/undertaking 1.1. Product identifier 1.2. Relevant identified uses of the substance or mixture and uses advised against 1.3. Details of the supplier of the safety data sheet 1.4. Emergency telephone number SECTION 2: Hazards identification 2.1. Classification of the substance or mixture 2.2. Label elements 2.3. Other hazards SECTION 3: Composition/information on ingredients 3.1. Substances 3.2. Mixtures SECTION 4: First aid measures 4.1. Description of first aid measures 4.2. Most important symptoms and effects, both acute and delayed 4.3. Indication of any immediate medical attention and special treatment needed SECTION 5: Firefighting measures 5.1. Extinguishing media 5.2. Special hazards arising from the substance or mixture 5.3. Advice for firefighters SECTION 6: Accidental release measure 6.1. Personal precautions, protective equipment and emergency procedures 6.2. Environmental precautions 6.3. Methods and material for containment and cleaning up 6.4. Reference to other sections SECTION 7: Handling and storage 7.1. Precautions for safe handling 7.2. Conditions for safe storage, including any incompatibilities 7.3. Specific end use(s) SECTION 8: Exposure controls/personal protection 8.1. Control parameters 8.2. Exposure controls SECTION 9: Physical and chemical properties 9.1. Information on basic physical and chemical properties 9.2. Other information SECTION 10: Stability and reactivity 10.1. Reactivity 10.2. Chemical stability 10.3. Possibility of hazardous reactions 10.4. Conditions to avoid 10.5. Incompatible materials 10.6. Hazardous decomposition products SECTION 11: Toxicological information 11.1. Information on toxicological effects SECTION 12: Ecological information 12.1. Toxicity 12.2. 
Persistence and degradability 12.3. Bioaccumulative potential 12.4. Mobility in soil 12.5. Results of PBT and vPvB assessment 12.6. Other adverse effects SECTION 13: Disposal considerations 13.1. Waste treatment methods SECTION 14: Transport information 14.1. UN number 14.2. UN proper shipping name 14.3. Transport hazard class(es) 14.4. Packing group 14.5. Environmental hazards 14.6. Special precautions for user 14.7. Transport in bulk according to Annex II of MARPOL and the IBC Code SECTION 15: Regulatory information 15.1. Safety, health and environmental regulations/legislation specific for the substance or mixture 15.2. Chemical safety assessment SECTION 16: Other information, including the date of the latest revision of the SDS National and international requirements Canada In Canada, the program known as the Workplace Hazardous Materials Information System (WHMIS) establishes the requirements for SDSs in workplaces and is administered federally by Health Canada under the Hazardous Products Act, Part II, and the Controlled Products Regulations. European Union Safety data sheets have been made an integral part of the system of Regulation (EC) No 1907/2006 (REACH). The original requirements of REACH for SDSs have been further adapted to take into account the rules for safety data sheets of the Global Harmonised System (GHS) and the implementation of other elements of the GHS into EU legislation that were introduced by Regulation (EC) No 1272/2008 (CLP) via an update to Annex II of REACH. The SDS must be supplied in an official language of the Member State(s) where the substance or mixture is placed on the market, unless the Member State(s) concerned provide(s) otherwise (Article 31(5) of REACH). The European Chemicals Agency (ECHA) has published a guidance document on the compilation of safety data sheets. Germany In Germany, safety data sheets must be compiled in accordance with REACH Regulation No. 1907/2006. The requirements concerning national aspects are defined in the Technical Rule for Hazardous Substances (TRGS) 220 "National aspects when compiling safety data sheets". One example of a national measure mentioned in SDS section 15 is the water hazard class (WGK), which is based on the regulations governing systems for handling substances hazardous to waters (AwSV). The Netherlands Dutch safety data sheets are well known as veiligheidsinformatieblad or Chemiekaarten; the Chemiekaarten is a collection of safety data sheets for the most widely used chemicals. The Chemiekaarten boek is commercially available, but is also made available through educational institutes, such as the web site offered by the University of Groningen. South Africa This section contributes to a better understanding of the regulations governing SDSs within the South African framework. As regulations may change, it is the responsibility of the reader to verify the validity of the regulations mentioned in the text. As globalisation increased and countries engaged in cross-border trade, the quantity of hazardous material crossing international borders increased. Realising the detrimental effects of hazardous trade, the United Nations established a committee of experts specialising in the transportation of hazardous goods. The committee provides best practices governing the conveyance of hazardous materials and goods for land (road and rail), air and sea transportation. These best practices are constantly updated to remain current and relevant.
There are various other international bodies that provide greater detail and guidance for specific modes of transportation, such as the International Maritime Organisation (IMO) by means of the International Maritime Code, the International Civil Aviation Organisation (ICAO) via the Technical Instructions for the safe transport of dangerous goods by air, and the International Air Transport Association (IATA), which provides regulations for the transport of dangerous goods. These guidelines prescribed by the international authorities are applicable to the South African land, sea and air transportation of hazardous materials and goods. In addition to these international best-practice rules and regulations, South Africa also applies common law, that is, law based on custom and practice. Common law is a vital part of maintaining public order and forms the basis of case law. Case law, applying the principles of common law, consists of the courts' interpretations of and decisions on statutes. Acts of parliament are determinations and regulations by parliament which form the foundation of statutory law. Statutory laws are published in the government gazette or on the official website. Lastly, subordinate legislation consists of the bylaws issued by local authorities and authorised by parliament. Statutory law gives effect to the Occupational Health and Safety Act of 1993 and the National Road Traffic Act of 1996. The Occupational Health and Safety Act details the necessary provisions for the safe handling and storage of hazardous materials and goods, whilst the National Road Traffic Act details the necessary provisions for the transportation of hazardous goods. Relevant South African legislation includes the Hazardous Chemical Agents Regulations of 2021 under the Occupational Health and Safety Act of 1993, the Chemical Substance Act 15 of 1973, the National Road Traffic Act of 1996, and the Standards Act of 2008. There has been selective incorporation of aspects of the Globally Harmonised System (GHS) of Classification and Labelling of Chemicals into South African legislation. At each point of the chemical value chain, there is a responsibility to manage chemicals in a safe and responsible manner. An SDS is therefore required by law. An SDS is included in the requirements of the Occupational Health and Safety Act, 1993 (Act No. 85 of 1993), Regulation 1179 dated 25 August 1995. The categories of information supplied in the SDS are listed in SANS 11014:2010 (dangerous goods standards – classification and information). SANS 11014:2010 supersedes the first edition SANS 11014-1:1994 and is an identical implementation of ISO 11014:2009. United Kingdom In the U.K., the Chemicals (Hazard Information and Packaging for Supply) Regulations 2002 - known as CHIP Regulations - impose duties upon suppliers, and importers into the EU, of hazardous materials. NOTE: Safety data sheets (SDS) are no longer covered by the CHIP regulations. The laws that require an SDS to be provided have been transferred to the European REACH Regulations. The Control of Substances Hazardous to Health (COSHH) Regulations govern the use of hazardous substances in the workplace in the UK and specifically require an assessment of the use of a substance. Regulation 12 requires that an employer provides employees with information, instruction and training for people exposed to hazardous substances. This duty would be very nearly impossible without the data sheet as a starting point.
It is important for employers therefore to insist on receiving a data sheet from a supplier of a substance. The duty to supply information is not confined to informing only business users of products. SDSs for retail products sold by large DIY shops are usually obtainable on those companies' web sites. Web sites of manufacturers and large suppliers do not always include them even if the information is obtainable from retailers but written or telephone requests for paper copies will usually be responded to favourably. United Nations The United Nations (UN) defines certain details used in SDSs such as the UN numbers used to identify some hazardous materials in a standard form while in international transit. United States In the U.S., the Occupational Safety and Health Administration requires that SDSs be readily available to all employees for potentially harmful substances handled in the workplace under the Hazard Communication Standard. The SDS is also required to be made available to local fire departments and local and state emergency planning officials under Section 311 of the Emergency Planning and Community Right-to-Know Act. The American Chemical Society defines Chemical Abstracts Service Registry Numbers (CAS numbers) which provide a unique number for each chemical and are also used internationally in SDSs. Reviews of material safety data sheets by the U.S. Chemical Safety and Hazard Investigation Board have detected dangerous deficiencies. The board's Combustible Dust Hazard Study analyzed 140 data sheets of substances capable of producing combustible dusts. None of the SDSs contained all the information the board said was needed to work with the material safely, and 41 percent failed to even mention that the substance was combustible. As part of its study of an explosion and fire that destroyed the Barton Solvents facility in Valley Center, Kansas, in 2007, the safety board reviewed 62 material safety data sheets for commonly used nonconductive flammable liquids. As in the combustible dust study, the board found all the data sheets inadequate. In 2012, the US adopted the 16 section Safety Data Sheet to replace Material Safety Data Sheets. This became effective on 1 December 2013. These new Safety Data Sheets comply with the Globally Harmonized System of Classification and Labeling of Chemicals (GHS). By 1 June 2015, employers were required to have their workplace labeling and hazard communication programs updated as necessary – including all MSDSs replaced with SDS-formatted documents. SDS authoring Many companies offer the service of collecting, or writing and revising, data sheets to ensure they are up to date and available for their subscribers or users. Some jurisdictions impose an explicit duty of care that each SDS be regularly updated, usually every three to five years. However, when new information becomes available, the SDS must be revised without delay. If a full SDS is not feasible, then a reduced workplace label should be authored. See also Occupational exposure banding References Chemical safety Documents Environmental law Industrial hygiene Materials Occupational safety and health Regulation of chemicals in the European Union Safety engineering Toxicology
CHNOPS
CHNOPS and CHON are mnemonic acronyms for the most common elements in living organisms. "CHON" stands for carbon, hydrogen, oxygen, and nitrogen, which together make up more than 95 percent of the mass of biological systems. "CHNOPS" adds phosphorus and sulfur. Description Carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur are the six most important chemical elements whose covalent combinations make up most biological molecules on Earth. All of these elements are nonmetals. In animals in general, the four elements C, H, O, and N compose about 96% of the weight, and major minerals (macrominerals) and minor minerals (also called trace elements) compose the remainder. To be organic, a compound must contain carbon and hydrogen. Carbohydrates and lipids are also major sources of energy in the body. Sulfur is contained in the amino acids cysteine and methionine. Phosphorus is contained in phospholipids, a class of lipids that are a major component of all cell membranes, as they can form lipid bilayers, which keep ions, proteins, and other molecules where they are needed for cell function, and prevent them from diffusing into areas where they should not be. Phosphate groups are also an essential component of the backbone of nucleic acids (the general name for DNA and RNA) and are required to form ATP – the main molecule used to power the cell in all living creatures. Carbonaceous asteroids are rich in CHON elements. These asteroids are the most common type, and frequently collide with Earth as meteorites. Such collisions were especially common early in Earth's history, and these impactors may have been crucial in the formation of the planet's oceans. The simplest compounds to contain all of the CHON elements are the isomers fulminic acid (HCNO), isofulminic acid (HONC), cyanic acid (HOCN) and isocyanic acid (HNCO), each containing one atom of each element. See also Abundance of the chemical elements Biochemistry Bioinorganic chemistry Carbon-based life References External links "Impact of the Biosphere on the Earth", University of Texas at Dallas Astrobiology Biology and pharmacology of chemical elements Mnemonic acronyms Science mnemonics Science fiction themes Astrochemistry
Heterogeneous condition
A medical condition is termed heterogeneous, or a heterogeneous disease, if it has several etiologies (root causes); as opposed to homogeneous conditions, which have the same root cause for all patients in a given group. Examples of heterogeneous conditions are hepatitis and diabetes. Heterogeneity is not unusual, as medical conditions are usually defined pathologically (i.e. based on the state of the patient), as in "liver inflammation", or clinically (i.e. based on the apparent symptoms of the patient), as in "excessive urination", rather than etiologically (i.e. based on the underlying cause of the symptoms). Heterogeneous conditions are often divided into endotypes based on etiology. Where necessary to determine appropriate treatment, differential diagnosis procedures are employed. Endotype An endotype is a subtype of a condition, which is defined by a distinct functional or pathobiological mechanism. This is distinct from a phenotype, which is any observable characteristic or trait of a disease, such as morphology, development, biochemical or physiological properties, or behavior, without any implication of a mechanism. It is envisaged that patients with a specific endotype present themselves within phenotypic clusters of diseases. One example is asthma, which is considered to be a syndrome, consisting of a series of endotypes. This is related to the concept of a disease entity. Heterogeneity in medical conditions The term medical condition is a broad nosological term that includes all diseases, disorders, injuries and syndromes, and it is especially suitable in the last case, in which it is not possible to speak of a single disease associated with the clinical course of the patient. While the term medical condition generally includes mental illnesses, in some contexts the term is used specifically to denote any illness, injury, or disease except for mental illnesses. The Diagnostic and Statistical Manual of Mental Disorders (DSM), the widely used psychiatric manual that defines all mental disorders, uses the term general medical condition to refer to all diseases, illnesses, and injuries except for mental disorders. This usage is also commonly seen in the psychiatric literature. Some health insurance policies also define a medical condition as any illness, injury, or disease except for psychiatric illnesses. As it is more value-neutral than terms like disease, the term medical condition is sometimes preferred by people with health issues that they do not consider deleterious. It is also preferred when etiology is not unique, because the word disease is normally associated with the cause of the clinical problems. On the other hand, by emphasizing the medical nature of the condition, this term is sometimes rejected, such as by proponents of the autism rights movement. The term is also used in specialized areas of medicine. A genetic or allelic heterogeneous condition is one where the same disease or condition can be caused, or contributed to, by different genes or alleles. In clinical trials and statistics, the concepts of homogeneous and heterogeneous populations are important. The same applies to epidemiology. See also Endotype, each one of the etiological subclasses of a given heterogeneous condition. References Diseases and disorders
Medical terminology
Medical terminology is a language used to precisely describe the human body, including all its components, processes, conditions affecting it, and procedures performed upon it. Medical terminology is used in the field of medicine. Medical terminology has quite regular morphology: the same prefixes and suffixes are used to add meanings to different roots. The root of a term often refers to an organ, tissue, or condition. For example, in the disorder known as hypertension, the prefix "hyper-" means "high" or "over", and the root word "tension" refers to pressure, so the word "hypertension" refers to abnormally high blood pressure. The roots, prefixes and suffixes are often derived from Greek or Latin, and are often quite dissimilar from their English-language variants. This regular morphology means that once a reasonable number of morphemes are learnt, it becomes easy to understand very precise terms assembled from these morphemes. Much medical language is anatomical terminology, concerning itself with the names of various parts of the body. Discussion In forming or understanding a word root, one needs a basic comprehension of the terms and the source language. The study of the origin of words is called etymology. For example, if a word was to be formed to indicate a condition of the kidneys, there are two primary roots – one from Greek (νεφρός nephr(os)) and one from Latin (ren(es)). Renal failure would be a condition of the kidneys, and nephritis is also a condition, or inflammation, of the kidneys. The suffix -itis means inflammation, and the entire word conveys the meaning inflammation of the kidney. To continue using these terms, other combinations will be presented for the purpose of examples: The term supra-renal is a combination of the prefix supra- (meaning "above") and the word root for kidney, and the entire word means "situated above the kidneys". The word "nephrologist" combines the root word for kidney with the suffix -ologist, with the resultant meaning of "one who studies the kidneys". The formation of plurals should usually be done using the rules of forming the proper plural form in the source language. Greek and Latin each have differing rules to be applied when forming the plural form of the word root. Often such details can be found using a medical dictionary. Morphology Medical terminology often uses words created using prefixes and suffixes in Latin and Ancient Greek. In medicine, their meanings, and their etymology, are informed by the language of origin. Prefixes and suffixes, primarily in Greek but also in Latin, have a droppable -o-. Medical roots generally go together according to language: Greek prefixes go with Greek suffixes and Latin prefixes with Latin suffixes. Although it is technically considered acceptable to create hybrid words, it is strongly preferred not to mix different lingual roots. Examples of well-accepted medical words that do mix lingual roots are neonatology and quadriplegia. Prefixes do not normally require further modification to be added to a word root because the prefix normally ends in a vowel or vowel sound, although in some cases they may assimilate slightly and an in- may change to im- or syn- to sym-. Suffixes are attached to the end of a word root to add meaning such as condition, disease process, or procedure. In the process of creating medical terminology, certain rules of language apply. These rules are part of language mechanics called linguistics. 
A vowel sound is usually added to the end of the word root to smooth the pronunciation of the word when a suffix is applied. The result is the formation of a new term with a vowel attached (word root + vowel), called a combining form. In English, the most common vowel used in the formation of the combining form is the letter -o-, added to the word root. For example, an inflammation of the stomach and intestines is written using gastro- and enter- plus -itis: gastroenteritis. Suffixes are categorized as either (1) needing the combining form, or (2) not needing the combining form because they begin with a vowel. See also References Scientific terminology
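The combining-form rule just described is mechanical enough to sketch in a few lines of code. The snippet below is purely illustrative and not part of the article: it assumes a simplified rule in which roots are always linked by the combining vowel, while the suffix takes the vowel only if it begins with a consonant, and it ignores the many exceptions found in real terminology.

```python
# Illustrative sketch of the combining-form rule described above (simplified;
# real medical terminology has exceptions this toy function does not handle).
VOWELS = set("aeiou")

def build_term(roots, suffix, combining_vowel="o"):
    """Assemble a term from word roots and a suffix.

    Roots are linked with the combining vowel (gastr + o + enter); the suffix
    takes the combining vowel only when it begins with a consonant.
    """
    stem = combining_vowel.join(roots)          # ["gastr", "enter"] -> "gastroenter"
    if suffix and suffix[0].lower() in VOWELS:
        return stem + suffix                    # -itis starts with a vowel: no extra -o-
    return stem + combining_vowel + suffix      # -logy starts with a consonant: keep -o-

print(build_term(["gastr", "enter"], "itis"))   # gastroenteritis
print(build_term(["nephr"], "itis"))            # nephritis
print(build_term(["cardi"], "logy"))            # cardiology
```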
Dissection
Dissection (from Latin "to cut to pieces"; also called anatomization) is the dismembering of the body of a deceased animal or plant to study its anatomical structure. Autopsy is used in pathology and forensic medicine to determine the cause of death in humans. Less extensive dissection of plants and smaller animals preserved in a formaldehyde solution is typically carried out or demonstrated in biology and natural science classes in middle school and high school, while extensive dissections of cadavers of adults and children, both fresh and preserved, are carried out by medical students in medical schools as a part of the teaching in subjects such as anatomy, pathology and forensic medicine. Consequently, dissection is typically conducted in a morgue or in an anatomy lab. Dissection has been used for centuries to explore anatomy. Objections to the use of cadavers have led to the use of alternatives, including virtual dissection of computer models. In the field of surgery, the term "dissection" or "dissecting" refers more specifically to the practice of separating an anatomical structure (an organ, nerve or blood vessel) from its surrounding connective tissue in order to minimize unwanted damage during a surgical procedure. Overview Plant and animal bodies are dissected to analyze the structure and function of their components. Dissection is practised by students in courses of biology, botany, zoology, and veterinary science, and sometimes in arts studies. In medical schools, students dissect human cadavers to learn anatomy. Zoötomy is sometimes used to describe "dissection of an animal". Human dissection A key principle in the dissection of human cadavers (sometimes called androtomy) is the prevention of the transmission of disease to the dissector. Prevention of transmission includes the wearing of protective gear, ensuring the environment is clean, careful dissection technique, and pre-dissection testing of specimens for the presence of HIV and hepatitis viruses. Specimens are dissected in morgues or anatomy labs. When provided, they are evaluated for use as a "fresh" or "prepared" specimen. A "fresh" specimen may be dissected within a few days, retaining the characteristics of a living specimen, for the purposes of training. A "prepared" specimen may be preserved in solutions such as formalin and pre-dissected by an experienced anatomist, sometimes with the help of a diener. This preparation is sometimes called prosection. Most dissection involves the careful isolation and removal of individual organs, called the Virchow technique. An alternative, more cumbersome technique, called the Letulle technique, involves the removal of the organs as a single block. This technique allows a body to be sent to a funeral director without waiting for the sometimes time-consuming dissection of individual organs. The Rokitansky method involves an in situ dissection of the organ block, and the technique of Ghon involves dissection of three separate blocks of organs - the thorax and cervical areas, gastrointestinal and abdominal organs, and urogenital organs. Dissection of individual organs involves accessing the area in which the organ is situated, and systematically removing the anatomical connections of that organ to its surroundings. For example, when removing the heart, connections such as the superior vena cava and inferior vena cava are separated. If pathological connections exist, such as a fibrous pericardium, then these may be deliberately dissected along with the organ. 
Autopsy and necropsy Dissection is used to help to determine the cause of death in autopsy (called necropsy in other animals) and is an intrinsic part of forensic medicine. History Classical antiquity Human dissections were carried out by the Greek physicians Herophilus of Chalcedon and Erasistratus of Chios in the early part of the third century BC. Before then, animal dissection had been carried out systematically starting from the fifth century BC. During this period, the first exploration of full human anatomy was performed, rather than knowledge being acquired piecemeal through 'problem-solution' investigation. While there was a deep taboo in Greek culture concerning human dissection, there was at the time a strong push by the Ptolemaic government to build Alexandria into a hub of scientific study. For a time, Roman law forbade dissection and autopsy of the human body, so anatomists relied on the cadavers of animals or made observations of human anatomy from injuries of the living. Galen, for example, dissected the Barbary macaque and other primates, assuming their anatomy was basically the same as that of humans, and supplemented these observations with knowledge of human anatomy which he acquired while tending to wounded gladiators. Celsus wrote in On Medicine I Proem 23, "Herophilus and Erasistratus proceeded in by far the best way: they cut open living men - criminals they obtained out of prison from the kings and they observed, while their subjects still breathed, parts that nature had previously hidden, their position, color, shape, size, arrangement, hardness, softness, smoothness, points of contact, and finally the processes and recesses of each and whether any part is inserted into another or receives the part of another into itself." Galen was another such writer who was familiar with the studies of Herophilus and Erasistratus. India The ancient societies that were rooted in India left behind artwork on how to kill animals during a hunt. The images, which show how to kill most effectively depending on the game being hunted, convey an intimate knowledge of both external and internal anatomy as well as the relative importance of organs. The knowledge was mostly gained through hunters preparing the recently captured prey. Once the roaming lifestyle was no longer necessary, it was replaced in part by the civilization that formed in the Indus Valley. Unfortunately, little remains from this time to indicate whether or not dissection occurred, as the civilization was lost with the migration of the Aryan peoples. Early in the history of India (2nd to 3rd century), the Arthashastra described the four ways that death can occur and their symptoms: drowning, hanging, strangling, or asphyxiation. According to that source, an autopsy should be performed in any case of untimely demise. The practice of dissection flourished during the 7th and 8th centuries, when medical education was standardized. This created a need to better understand human anatomy, so as to have educated surgeons. Dissection was limited by the religious taboo on cutting the human body. This changed the approach taken to accomplish the goal. The process involved the loosening of the tissues in streams of water before the outer layers were sloughed off with soft implements to reach the musculature. To perfect the technique of slicing, the prospective students used gourds and squash. 
These techniques of dissection gave rise to an advanced understanding of anatomy and enabled practitioners to perform procedures used today, such as rhinoplasty. During medieval times the anatomical teachings from India spread throughout the known world; however, the practice of dissection was stunted by Islam. The practice of dissection at a university level was not seen again until 1827, when it was performed by the student Pandit Madhusudan Gupta. University teachers had to continually push against the social taboos of dissection until around 1850, when the universities decided that it was more cost-effective to train Indian doctors than to bring them in from Britain. Indian medical schools were, however, training female doctors well before those in England. The current state of dissection in India is deteriorating. The number of hours spent in dissection labs during medical school has decreased substantially over the last twenty years. The future of anatomy education will probably be an elegant mix of traditional methods and integrative computer learning. The use of dissection in the early stages of medical training has been shown to be more effective for the retention of the intended information than simulated alternatives. However, there is use for the computer-generated experience as review in the later stages. The combination of these methods is intended to strengthen the students' understanding of and confidence in anatomy, a subject that is infamously difficult to master. There is a growing need for anatomists, since most anatomy labs are taught by graduates hoping to complete degrees in anatomy, to continue the long tradition of anatomy education. Islamic world From the beginning of the Islamic faith in 610 A.D., Shari'ah law has applied to a greater or lesser extent within Muslim countries, supported by Islamic scholars such as Al-Ghazali. Islamic physicians such as Ibn Zuhr (Avenzoar) (1091–1161) in Al-Andalus, Saladin's physician Ibn Jumay during the 12th century, Abd el-Latif in Egypt, and Ibn al-Nafis in Syria and Egypt in the 13th century may have practiced dissection, but it remains ambiguous whether or not human dissection was practiced. Ibn al-Nafis, a physician and Muslim jurist, suggested that the "precepts of Islamic law have discouraged us from the practice of dissection, along with whatever compassion is in our temperament", indicating that while there was no law against it, it was nevertheless uncommon. Islam dictates that the body be buried as soon as possible, barring religious holidays, and that there be no other means of disposal such as cremation. Prior to the 10th century, dissection was not performed on human cadavers. The book Al-Tasrif, written by Al-Zahrawi in 1000 A.D., details surgical procedures that differed from the previous standards. The book was an educational text of medicine and surgery which included detailed illustrations. It was later translated and took the place of Avicenna's The Canon of Medicine as the primary teaching tool in Europe from the 12th century to the 17th century. There were some who were willing to dissect humans up to the 12th century, for the sake of learning, after which it was forbidden. This attitude remained constant until 1952, when the Islamic School of Jurisprudence in Egypt ruled that "necessity permits the forbidden". This decision allowed for the investigation of questionable deaths by autopsy. In 1982, a fatwa ruled that if it serves justice, autopsy is worth its disadvantages. 
Though Islam now approves of autopsy, the Islamic public still disapproves. Autopsy is prevalent in most Muslim countries for medical and judicial purposes. In Egypt it holds an important place within the judicial structure, and is taught at all the country's medical universities. In Saudi Arabia, whose law is completely dictated by Shari'ah, autopsy is viewed poorly by the population but can be compelled in criminal cases; human dissection is sometimes found at university level. Autopsy is performed for judicial purposes in Qatar and Tunisia. Human dissection is present in the modern-day Islamic world, but is rarely published on due to the religious and social stigma. Tibet Tibetan medicine developed a rather sophisticated knowledge of anatomy, acquired from long-standing experience with human dissection. Tibetans had adopted the practice of sky burial because of the country's hard ground, frozen for most of the year, and the lack of wood for cremation. A sky burial begins with a ritual dissection of the deceased, and is followed by the feeding of the parts to vultures on the hilltops. Over time, Tibetan anatomical knowledge found its way into Ayurveda and to a lesser extent into Chinese medicine. Christian Europe Throughout the history of Christian Europe, the dissection of human cadavers for medical education has experienced various cycles of legalization and proscription in different countries. Dissection was rare during the Middle Ages, but it was practised, with evidence from at least as early as the 13th century. The practice of autopsy in Medieval Western Europe is "very poorly known" as few surgical texts or conserved human dissections have survived. A modern Jesuit scholar has claimed that Christian theology contributed significantly to the revival of human dissection and autopsy by providing a new socio-religious and cultural context in which the human cadaver was no longer seen as sacrosanct. A non-existent "Ecclesia abhorret a sanguine" edict of the 1163 Council of Tours and an early 14th-century decree of Pope Boniface VIII have mistakenly been identified as prohibiting dissection and autopsy; misunderstanding of or extrapolation from these edicts may have contributed to a reluctance to perform such procedures. The Middle Ages witnessed the revival of an interest in medical studies, including human dissection and autopsy. Frederick II (1194–1250), the Holy Roman emperor, ruled that anyone studying to be a physician or a surgeon must attend a human dissection, which would be held no less often than every five years. Some European countries began legalizing the dissection of executed criminals for educational purposes in the late 13th and early 14th centuries. Mondino de Luzzi carried out the first recorded public dissection around 1315. At this time, autopsies were carried out by a team consisting of a Lector, who lectured, the Sector, who did the dissection, and the Ostensor, who pointed to features of interest. The Italian Galeazzo di Santa Sofia made the first public dissection north of the Alps in Vienna in 1404. Vesalius in the 16th century carried out numerous dissections in his extensive anatomical investigations. He was attacked frequently for his disagreement with Galen's opinions on human anatomy. Vesalius was the first to lecture and dissect the cadaver simultaneously. The Catholic Church is known to have ordered an autopsy on the conjoined twins Joana and Melchiora Ballestero in Hispaniola in 1533 to determine whether they shared a soul. 
They found that there were two distinct hearts, and hence two souls, based on the view of the ancient Greek philosopher Empedocles, who believed the soul resided in the heart. Human dissection was also practised by Renaissance artists. Though most chose to focus on the external surfaces of the body, some, like Michelangelo Buonarroti, Antonio del Pollaiuolo, Baccio Bandinelli, and Leonardo da Vinci, sought a deeper understanding. However, there were no provisions for artists to obtain cadavers, so they had to resort to unauthorised means, as indeed anatomists sometimes did, such as grave robbing, body snatching, and murder. Anatomization was sometimes ordered as a form of punishment, as happened, for example, in 1806 to James Halligan and Dominic Daley after their public hanging in Northampton, Massachusetts. In modern Europe, dissection is routinely practised in biological research and education, in medical schools, and to determine the cause of death in autopsy. It is generally considered a necessary part of learning and is thus accepted culturally. It sometimes attracts controversy, as when Odense Zoo decided to dissect lion cadavers in public before a "self-selected audience". Britain In Britain, dissection remained entirely prohibited from the end of the Roman conquest and through the Middle Ages to the 16th century, when a series of royal edicts gave specific groups of physicians and surgeons some limited rights to dissect cadavers. The permission was quite limited: by the mid-18th century, the Royal College of Physicians and the Company of Barber-Surgeons were the only two groups permitted to carry out dissections, and had an annual quota of ten cadavers between them. As a result of pressure from anatomists, especially in the rapidly growing medical schools, the Murder Act 1752 allowed the bodies of executed murderers to be dissected for anatomical research and education. By the 19th century this supply of cadavers proved insufficient, as the public medical schools were growing, and the private medical schools lacked legal access to cadavers. A thriving black market arose in cadavers and body parts, leading to the creation of the profession of body snatching, and the infamous Burke and Hare murders in 1828, when 16 people were murdered for their cadavers, to be sold to anatomists. The resulting public outcry led to the passage of the Anatomy Act 1832, which increased the legal supply of cadavers for dissection. By the 21st century, the availability of interactive computer programs and changing public sentiment led to renewed debate on the use of cadavers in medical education. The Peninsula College of Medicine and Dentistry in the UK, founded in 2000, became the first modern medical school to carry out its anatomy education without dissection. United States In the United States, dissection of frogs became common in college biology classes from the 1920s and was gradually introduced at earlier stages of education. By 1988, some 75 to 80 percent of American high school biology students were participating in a frog dissection, with a trend towards introduction in elementary schools. The frogs are most commonly from the genus Rana. Other popular animals for high-school dissection at the time of that survey were, among vertebrates, fetal pigs, perch, and cats; and among invertebrates, earthworms, grasshoppers, crayfish, and starfish. About six million animals are dissected each year in United States high schools (2016), not counting medical training and research. 
Most of these are purchased already dead from slaughterhouses and farms. Dissection in U.S. high schools became prominent in 1987, when a California student, Jenifer Graham, sued to require her school to let her complete an alternative project. The court ruled that mandatory dissections were permissible, but that Graham could ask to dissect a frog that had died of natural causes rather than one that was killed for the purposes of dissection; the practical impossibility of procuring a frog that had died of natural causes in effect let Graham opt out of the required dissection. The suit gave publicity to anti-dissection advocates. Graham appeared in a 1987 Apple Computer commercial for the virtual-dissection software Operation Frog. The state of California passed a Student's Rights Bill in 1988 requiring that objecting students be allowed to complete alternative projects. Opting out of dissection increased through the 1990s. In the United States, 17 states along with Washington, D.C. have enacted dissection-choice laws or policies that allow students in primary and secondary education to opt out of dissection. Other states including Arizona, Hawaii, Minnesota, Texas, and Utah have more general policies on opting out on moral, religious, or ethical grounds. To overcome these concerns, J. W. Mitchell High School in New Port Richey, Florida, in 2019 became the first US high school to use synthetic frogs for dissection in its science classes, instead of preserved real frogs. As for the dissection of cadavers in undergraduate and medical school, traditional dissection is supported by professors and students, with some opposition limiting the availability of dissection. Upper-level students who have experienced this method, along with their professors, agree that "Studying human anatomy with colorful charts is one thing. Using a scalpel and an actual, recently-living person is an entirely different matter." Acquisition of cadavers The way in which cadaveric specimens are obtained differs greatly according to country. In the UK, donation of a cadaver is wholly voluntary. Involuntary donation plays a role in about 20 percent of specimens in the US and almost all specimens donated in some countries such as South Africa and Zimbabwe. Countries that practice involuntary donation may make available the bodies of dead criminals or unclaimed or unidentified bodies for the purposes of dissection. Such practices may lead to a greater proportion of the poor, homeless and social outcasts being involuntarily donated. Cadavers donated in one jurisdiction may also be used for the purposes of dissection in another, whether across states in the US, or imported from other countries, such as with Libya. As an example of how a cadaver is donated voluntarily, a funeral home, in conjunction with a voluntary donation program, identifies the body of a person who was enrolled in the program. After the subject has been broached with relatives in a diplomatic fashion, the body is transported to a registered facility. The body is tested for the presence of HIV and hepatitis viruses. It is then evaluated for use as a "fresh" or "prepared" specimen. Disposal of specimens Cadaveric specimens for dissection are, in general, disposed of by cremation. The deceased may then be interred at a local cemetery. If the family wishes, the ashes of the deceased are then returned to the family. Many institutes have local policies to engage, support and celebrate the donors. This may include the setting up of local monuments at the cemetery. 
Use in education Human cadavers are often used in medicine to teach anatomy or surgical instruction. Cadavers are selected according to their anatomy and availability. They may be used as part of dissection courses involving a "fresh" specimen so as to be as realistic as possible—for example, when training surgeons. Cadavers may also be pre-dissected by trained instructors. This form of dissection involves the preparation and preservation of specimens for a longer time period and is generally used for the teaching of anatomy. Alternatives Some alternatives to dissection may present educational advantages over the use of animal cadavers, while eliminating perceived ethical issues. These alternatives include computer programs, lectures, three dimensional models, films, and other forms of technology. Concern for animal welfare is often at the root of objections to animal dissection. Studies show that some students reluctantly participate in animal dissection out of fear of real or perceived punishment or ostracism from their teachers and peers, and many do not speak up about their ethical objections. One alternative to the use of cadavers is computer technology. At Stanford Medical School, software combines X-ray, ultrasound and MRI imaging for display on a screen as large as a body on a table. In a variant of this, a "virtual anatomy" approach being developed at New York University, students wear three dimensional glasses and can use a pointing device to "[swoop] through the virtual body, its sections as brightly colored as living tissue." This method is claimed to be "as dynamic as Imax [cinema]". Advantages and disadvantages Proponents of animal-free teaching methodologies argue that alternatives to animal dissection can benefit educators by increasing teaching efficiency and lowering instruction costs while affording teachers an enhanced potential for the customization and repeat-ability of teaching exercises. Those in favor of dissection alternatives point to studies which have shown that computer-based teaching methods "saved academic and nonacademic staff time … were considered to be less expensive and an effective and enjoyable mode of student learning [and] … contributed to a significant reduction in animal use" because there is no set-up or clean-up time, no obligatory safety lessons, and no monitoring of misbehavior with animal cadavers, scissors, and scalpels. With software and other non-animal methods, there is also no expensive disposal of equipment or hazardous material removal. Some programs also allow educators to customize lessons and include built-in test and quiz modules that can track student performance. Furthermore, animals (whether dead or alive) can be used only once, while non-animal resources can be used for many years—an added benefit that could result in significant cost savings for teachers, school districts, and state educational systems. Several peer-reviewed comparative studies examining information retention and performance of students who dissected animals and those who used an alternative instruction method have concluded that the educational outcomes of students who are taught basic and advanced biomedical concepts and skills using non-animal methods are equivalent or superior to those of their peers who use animal-based laboratories such as animal dissection. 
Some reports state that students' confidence, satisfaction, and ability to retrieve and communicate information were much higher for those who participated in alternative activities compared to dissection. Three separate studies at universities across the United States found that students who modeled body systems out of clay were significantly better at identifying the constituent parts of human anatomy than their classmates who performed animal dissection. Another study found that students preferred using clay modeling over animal dissection and performed just as well as their cohorts who dissected animals. In 2008, the National Association of Biology Teachers (NABT) affirmed its support for classroom animal dissection, stating that they "Encourage the presence of live animals in the classroom with appropriate consideration to the age and maturity level of the students … NABT urges teachers to be aware that alternatives to dissection have their limitations. NABT supports the use of these materials as adjuncts to the educational process but not as exclusive replacements for the use of actual organisms." The National Science Teachers Association (NSTA) "supports including live animals as part of instruction in the K-12 science classroom because observing and working with animals firsthand can spark students' interest in science as well as a general respect for life while reinforcing key concepts" of biological sciences. NSTA also supports offering dissection alternatives to students who object to the practice. The NORINA database lists over 3,000 products which may be used as alternatives or supplements to animal use in education and training. These include alternatives to dissection in schools. InterNICHE has a similar database and a loans system. Additional images See also 1788 Doctors' riot in New York City Vivisection Forensics Andreas Vesalius, founder of modern anatomy Jean-Joseph Sue, 18th-century surgeon and anatomist Notes References Further reading C. Celsus, On Medicine, I, Proem 23, 1935, translated by W. G. Spencer (Loeb Classical Library, 1992). Claire Bubb. 2022. Dissection in Classical Antiquity: A Social and Medical History. Cambridge: Cambridge University Press. External links How to dissect a frog Dissection Alternatives Human Dissections Virtual Frog Dissection Alternatives To Animal Dissection in School Science Classes Research Project on Death and Dead Bodies, last conference: "Death and Dissection" July 2009, Berlin, Germany Evolutionary Biology Digital Dissection Collections Dissection photographs for study and teaching from the University at Buffalo The Free Dictionary Biological techniques and tools Forensic pathology Corpses
Medicine
Medicine is the science and practice of caring for patients, managing the diagnosis, prognosis, prevention, treatment, and palliation of their injury or disease, and promoting their health. Medicine encompasses a variety of health care practices evolved to maintain and restore health by the prevention and treatment of illness. Contemporary medicine applies biomedical sciences, biomedical research, genetics, and medical technology to diagnose, treat, and prevent injury and disease, typically through pharmaceuticals or surgery, but also through therapies as diverse as psychotherapy, external splints and traction, medical devices, biologics, and ionizing radiation, amongst others. Medicine has been practiced since prehistoric times, and for most of this time it was an art (an area of creativity and skill), frequently having connections to the religious and philosophical beliefs of local culture. For example, a medicine man would apply herbs and say prayers for healing, or an ancient philosopher and physician would apply bloodletting according to the theories of humorism. In recent centuries, since the advent of modern science, most medicine has become a combination of art and science (both basic and applied, under the umbrella of medical science). For example, while stitching technique for sutures is an art learned through practice, knowledge of what happens at the cellular and molecular level in the tissues being stitched arises through science. Prescientific forms of medicine, now known as traditional medicine or folk medicine, remain commonly used in the absence of scientific medicine and are thus called alternative medicine. Alternative treatments outside of scientific medicine with ethical, safety and efficacy concerns are termed quackery. Etymology Medicine is the science and practice of the diagnosis, prognosis, treatment, and prevention of disease. The word "medicine" is derived from Latin medicus, meaning "a physician". Clinical practice Medical availability and clinical practice vary across the world due to regional differences in culture and technology. Modern scientific medicine is highly developed in the Western world, while in developing countries such as parts of Africa or Asia, the population may rely more heavily on traditional medicine with limited evidence and efficacy and no required formal training for practitioners. In the developed world, evidence-based medicine is not universally used in clinical practice; for example, a 2007 survey of literature reviews found that about 49% of the interventions lacked sufficient evidence to support either benefit or harm. In modern clinical practice, physicians and physician assistants personally assess patients to diagnose, prognose, treat, and prevent disease using clinical judgment. The doctor-patient relationship typically begins with a review of the patient's medical history and medical record, followed by a medical interview and a physical examination. Basic diagnostic medical devices (e.g., stethoscope, tongue depressor) are typically used. After examining for signs and interviewing for symptoms, the doctor may order medical tests (e.g., blood tests), take a biopsy, or prescribe pharmaceutical drugs or other therapies. Differential diagnosis methods help to rule out conditions based on the information provided. During the encounter, properly informing the patient of all relevant facts is an important part of the relationship and the development of trust. 
The medical encounter is then documented in the medical record, which is a legal document in many jurisdictions. Follow-ups may be shorter but follow the same general procedure, and specialists follow a similar process. The diagnosis and treatment may take only a few minutes or a few weeks, depending on the complexity of the issue. The components of the medical interview and encounter are: Chief complaint (CC): the reason for the current medical visit. These are the symptoms. They are in the patient's own words and are recorded along with the duration of each one. Also called chief concern or presenting complaint. Current activity: occupation, hobbies, what the patient actually does. Family history (FH): listing of diseases in the family that may impact the patient. A family tree is sometimes used. History of present illness (HPI): the chronological order of events of symptoms and further clarification of each symptom. Distinguishable from history of previous illness, often called past medical history (PMH). Medical history comprises HPI and PMH. Medications (Rx): what drugs the patient takes including prescribed, over-the-counter, and home remedies, as well as alternative and herbal medicines or remedies. Allergies are also recorded. Past medical history (PMH/PMHx): concurrent medical problems, past hospitalizations and operations, injuries, past infectious diseases or vaccinations, history of known allergies. Review of systems (ROS) or systems inquiry: a set of additional questions to ask, which may be missed on HPI: a general enquiry (have you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc.), followed by questions on the body's main organ systems (heart, lungs, digestive tract, urinary tract, etc.). Social history (SH): birthplace, residences, marital history, social and economic status, habits (including diet, medications, tobacco, alcohol). The physical examination is the examination of the patient for medical signs of disease that are objective and observable, in contrast to symptoms that are volunteered by the patient and are not necessarily objectively observable. The healthcare provider uses sight, hearing, touch, and sometimes smell (e.g., in infection, uremia, diabetic ketoacidosis). Four actions are the basis of physical examination: inspection, palpation (feel), percussion (tap to determine resonance characteristics), and auscultation (listen), generally in that order, although auscultation occurs prior to percussion and palpation for abdominal assessments. The clinical examination involves the study of: Abdomen and rectum Cardiovascular (heart and blood vessels) General appearance of the patient and specific indicators of disease (nutritional status, presence of jaundice, pallor or clubbing) Genitalia (and pregnancy if the patient is or could be pregnant) Head, eye, ear, nose, and throat (HEENT) Musculoskeletal (including spine and extremities) Neurological (consciousness, awareness, brain, vision, cranial nerves, spinal cord and peripheral nerves) Psychiatric (orientation, mental state, mood, evidence of abnormal perception or thought). Respiratory (large airways and lungs) Skin Vital signs including height, weight, body temperature, blood pressure, pulse, respiration rate, and hemoglobin oxygen saturation The examination is likely to focus on areas of interest highlighted in the medical history and may not include everything listed above. 
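For readers who think in code, the documented components listed above (CC, HPI, PMH, FH, SH, ROS, medications, allergies) map naturally onto a simple record type. The sketch below is an illustration only, not a standard electronic health record schema; the field names and the sample values are assumptions chosen for demonstration.

```python
# Illustrative sketch only: one possible in-memory representation of the
# encounter components described above (CC, HPI, PMH, FH, SH, ROS, Rx).
# Field names and structure are assumptions, not a standard EHR schema.
from dataclasses import dataclass, field

@dataclass
class EncounterNote:
    chief_complaint: str                        # CC: reason for the visit, in the patient's words
    history_of_present_illness: str             # HPI: chronology and clarification of symptoms
    past_medical_history: list[str] = field(default_factory=list)   # PMH: prior problems, operations
    family_history: list[str] = field(default_factory=list)         # FH: relevant diseases in relatives
    social_history: str = ""                    # SH: occupation, habits, living situation
    medications: list[str] = field(default_factory=list)            # Rx: prescribed, OTC, herbal
    allergies: list[str] = field(default_factory=list)
    review_of_systems: dict[str, str] = field(default_factory=dict) # ROS: organ system -> findings

# Hypothetical example values, purely for demonstration.
note = EncounterNote(
    chief_complaint="Cough for three days",
    history_of_present_illness="Dry cough, worse at night, no fever reported.",
    medications=["over-the-counter antihistamine"],
    review_of_systems={"respiratory": "no shortness of breath"},
)
print(note.chief_complaint)
```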
The treatment plan may include ordering additional medical laboratory tests and medical imaging studies, starting therapy, referral to a specialist, or watchful observation. A follow-up may be advised. Depending upon the health insurance plan and the managed care system, various forms of "utilization review", such as prior authorization of tests, may place barriers on accessing expensive services. The medical decision-making (MDM) process includes the analysis and synthesis of all the above data to come up with a list of possible diagnoses (the differential diagnoses), along with an idea of what needs to be done to obtain a definitive diagnosis that would explain the patient's problem. On subsequent visits, the process may be repeated in an abbreviated manner to obtain any new history, symptoms, physical findings, lab or imaging results, or specialist consultations. Institutions Contemporary medicine is, in general, conducted within health care systems. Legal, credentialing, and financing frameworks are established by individual governments, augmented on occasion by international organizations, such as churches. The characteristics of any given health care system have a significant impact on the way medical care is provided. From ancient times, Christian emphasis on practical charity gave rise to the development of systematic nursing and hospitals, and the Catholic Church today remains the largest non-government provider of medical services in the world. Advanced industrial countries (with the exception of the United States) and many developing countries provide medical services through a system of universal health care that aims to guarantee care for all through a single-payer health care system or compulsory private or cooperative health insurance. This is intended to ensure that the entire population has access to medical care on the basis of need rather than ability to pay. Delivery may be via private medical practices, state-owned hospitals and clinics, or charities, most commonly a combination of all three. Most tribal societies provide no guarantee of healthcare for the population as a whole. In such societies, healthcare is available to those who can afford to pay for it, have self-insured it (either directly or as part of an employment contract), or may be covered by care financed directly by the government or tribe. Transparency of information is another factor defining a delivery system. Access to information on conditions, treatments, quality, and pricing greatly affects the choice of patients/consumers and, therefore, the incentives of medical professionals. While the US healthcare system has come under fire for its lack of openness, new legislation may encourage greater openness. There is a perceived tension between the need for transparency on the one hand and such issues as patient confidentiality and the possible exploitation of information for commercial gain on the other. The health professionals who provide care in medicine comprise multiple professions, such as medics, nurses, physiotherapists, and psychologists. These professions will have their own ethical standards, professional education, and bodies. The medical profession has been conceptualized from a sociological perspective. Delivery Provision of medical care is classified into primary, secondary, and tertiary care categories. 
Primary care medical services are provided by physicians, physician assistants, nurse practitioners, or other health professionals who have first contact with a patient seeking medical treatment or care. These occur in physician offices, clinics, nursing homes, schools, home visits, and other places close to patients. About 90% of medical visits can be treated by the primary care provider. These include treatment of acute and chronic illnesses, preventive care and health education for all ages and both sexes. Secondary care medical services are provided by medical specialists in their offices or clinics or at local community hospitals for a patient referred by a primary care provider who first diagnosed or treated the patient. Referrals are made for those patients who required the expertise or procedures performed by specialists. These include both ambulatory care and inpatient services, emergency departments, intensive care medicine, surgery services, physical therapy, labor and delivery, endoscopy units, diagnostic laboratory and medical imaging services, hospice centers, etc. Some primary care providers may also take care of hospitalized patients and deliver babies in a secondary care setting. Tertiary care medical services are provided by specialist hospitals or regional centers equipped with diagnostic and treatment facilities not generally available at local hospitals. These include trauma centers, burn treatment centers, advanced neonatology unit services, organ transplants, high-risk pregnancy, radiation oncology, etc. Modern medical care also depends on information – still delivered in many health care settings on paper records, but increasingly nowadays by electronic means. In low-income countries, modern healthcare is often too expensive for the average person. International healthcare policy researchers have advocated that "user fees" be removed in these areas to ensure access, although even after removal, significant costs and barriers remain. Separation of prescribing and dispensing is a practice in medicine and pharmacy in which the physician who provides a medical prescription is independent from the pharmacist who provides the prescription drug. In the Western world there are centuries of tradition for separating pharmacists from physicians. In Asian countries, it is traditional for physicians to also provide drugs. Branches Working together as an interdisciplinary team, many highly trained health professionals besides medical practitioners are involved in the delivery of modern health care. Examples include: nurses, emergency medical technicians and paramedics, laboratory scientists, pharmacists, podiatrists, physiotherapists, respiratory therapists, speech therapists, occupational therapists, radiographers, dietitians, and bioengineers, medical physicists, surgeons, surgeon's assistant, surgical technologist. The scope and sciences underpinning human medicine overlap many other fields. A patient admitted to the hospital is usually under the care of a specific team based on their main presenting problem, e.g., the cardiology team, who then may interact with other specialties, e.g., surgical, radiology, to help diagnose or treat the main problem or any subsequent complications/developments. Physicians have many specializations and subspecializations into certain branches of medicine, which are listed below. There are variations from country to country regarding which specialties certain subspecialties are in. 
The main branches of medicine are: Basic sciences of medicine; this is what every physician is educated in, and some return to in biomedical research. Interdisciplinary fields, where different medical specialties are mixed to function in certain situations. Medical specialties Basic sciences Anatomy is the study of the physical structure of organisms. In contrast to macroscopic or gross anatomy, cytology and histology are concerned with microscopic structures. Biochemistry is the study of the chemistry taking place in living organisms, especially the structure and function of their chemical components. Biomechanics is the study of the structure and function of biological systems by means of the methods of mechanics. Biophysics is an interdisciplinary science that uses the methods of physics and physical chemistry to study biological systems. Biostatistics is the application of statistics to biological fields in the broadest sense. A knowledge of biostatistics is essential in the planning, evaluation, and interpretation of medical research. It is also fundamental to epidemiology and evidence-based medicine. Cytology is the microscopic study of individual cells. Embryology is the study of the early development of organisms. Endocrinology is the study of hormones and their effects throughout the body of animals. Epidemiology is the study of the demographics of disease processes, and includes, but is not limited to, the study of epidemics. Genetics is the study of genes, and their role in biological inheritance. Gynecology is the study of the female reproductive system. Histology is the study of the structures of biological tissues by light microscopy, electron microscopy and immunohistochemistry. Immunology is the study of the immune system, which includes the innate and adaptive immune system in humans, for example. Lifestyle medicine is the study of chronic conditions, and how to prevent, treat and reverse them. Medical physics is the study of the applications of physics principles in medicine. Microbiology is the study of microorganisms, including protozoa, bacteria, fungi, and viruses. Molecular biology is the study of the molecular underpinnings of the processes of replication, transcription and translation of the genetic material. Neuroscience includes those disciplines of science that are related to the study of the nervous system. A main focus of neuroscience is the biology and physiology of the human brain and spinal cord. Some related clinical specialties include neurology, neurosurgery and psychiatry. Nutrition science (theoretical focus) and dietetics (practical focus) study the relationship of food and drink to health and disease, especially in determining an optimal diet. Medical nutrition therapy is done by dietitians and is prescribed for diabetes, cardiovascular diseases, weight and eating disorders, allergies, malnutrition, and neoplastic diseases. Pathology as a science is the study of disease: the causes, course, progression and resolution thereof. Pharmacology is the study of drugs and their actions. Photobiology is the study of the interactions between non-ionizing radiation and living organisms. Physiology is the study of the normal functioning of the body and the underlying regulatory mechanisms. Radiobiology is the study of the interactions between ionizing radiation and living organisms. Toxicology is the study of hazardous effects of drugs and poisons. Specialties In the broadest meaning of "medicine", there are many different specialties. 
In the UK, most specialities have their own body or college, which has its own entrance examination. These are collectively known as the Royal Colleges, although not all currently use the term "Royal". The development of a speciality is often driven by new technology (such as the development of effective anaesthetics) or ways of working (such as emergency departments); the new specialty leads to the formation of a unifying body of doctors and the prestige of administering their own examination. Within medical circles, specialities usually fit into one of two broad categories: "Medicine" and "Surgery". "Medicine" refers to the practice of non-operative medicine, and most of its subspecialties require preliminary training in Internal Medicine. In the UK, this was traditionally evidenced by passing the examination for the Membership of the Royal College of Physicians (MRCP) or the equivalent college in Scotland or Ireland. "Surgery" refers to the practice of operative medicine, and most subspecialties in this area require preliminary training in General Surgery, which in the UK leads to membership of the Royal College of Surgeons of England (MRCS). At present, some specialties of medicine do not fit easily into either of these categories, such as radiology, pathology, or anesthesia. Most of these have branched from one or other of the two camps above; for example anaesthesia developed first as a faculty of the Royal College of Surgeons (for which MRCS/FRCS would have been required) before becoming the Royal College of Anaesthetists and membership of the college is attained by sitting for the examination of the Fellowship of the Royal College of Anesthetists (FRCA). Surgical specialty Surgery is an ancient medical specialty that uses operative manual and instrumental techniques on a patient to investigate or treat a pathological condition such as disease or injury, to help improve bodily function or appearance or to repair unwanted ruptured areas (for example, a perforated ear drum). Surgeons must also manage pre-operative, post-operative, and potential surgical candidates on the hospital wards. In some centers, anesthesiology is part of the division of surgery (for historical and logistical reasons), although it is not a surgical discipline. Other medical specialties may employ surgical procedures, such as ophthalmology and dermatology, but are not considered surgical sub-specialties per se. Surgical training in the U.S. requires a minimum of five years of residency after medical school. Sub-specialties of surgery often require seven or more years. In addition, fellowships can last an additional one to three years. Because post-residency fellowships can be competitive, many trainees devote two additional years to research. Thus in some cases surgical training will not finish until more than a decade after medical school. Furthermore, surgical training can be very difficult and time-consuming. Surgical subspecialties include those a physician may specialize in after undergoing general surgery residency training as well as several surgical fields with separate residency training. 
Surgical subspecialties that one may pursue following general surgery residency training: Bariatric surgery Cardiovascular surgery – may also be pursued through a separate cardiovascular surgery residency track Colorectal surgery Endocrine surgery General surgery Hand surgery Hepatico-Pancreatico-Biliary Surgery Minimally invasive surgery Pediatric surgery Plastic surgery – may also be pursued through a separate plastic surgery residency track Surgical critical care Surgical oncology Transplant surgery Trauma surgery Vascular surgery – may also be pursued through a separate vascular surgery residency track Other surgical specialties within medicine with their own individual residency training: Dermatology Neurosurgery Ophthalmology Oral and maxillofacial surgery Orthopedic surgery Otorhinolaryngology Podiatric surgery – practitioners do not undergo medical school training, but rather separate training in podiatry school Urology Internal medicine specialty Internal medicine is the medical specialty dealing with the prevention, diagnosis, and treatment of adult diseases. According to some sources, an emphasis on internal structures is implied. In North America, specialists in internal medicine are commonly called "internists". Elsewhere, especially in Commonwealth nations, such specialists are often called physicians. These terms, internist or physician (in the narrow sense, common outside North America), generally exclude practitioners of gynecology and obstetrics, pathology, psychiatry, and especially surgery and its subspecialities. Because their patients are often seriously ill or require complex investigations, internists do much of their work in hospitals. Formerly, many internists were not subspecialized; such general physicians would see any complex nonsurgical problem; this style of practice has become much less common. In modern urban practice, most internists are subspecialists: that is, they generally limit their medical practice to problems of one organ system or to one particular area of medical knowledge. For example, gastroenterologists and nephrologists specialize respectively in diseases of the gut and the kidneys. In the Commonwealth of Nations and some other countries, specialist pediatricians and geriatricians are also described as specialist physicians (or internists) who have subspecialized by age of patient rather than by organ system. Elsewhere, especially in North America, general pediatrics is often a form of primary care. There are many subspecialities (or subdisciplines) of internal medicine: Angiology/Vascular Medicine Bariatrics Cardiology Critical care medicine Endocrinology Gastroenterology Geriatrics Hematology Hepatology Infectious disease Nephrology Neurology Oncology Pediatrics Pulmonology/Pneumology/Respirology/chest medicine Rheumatology Sports Medicine Training in internal medicine (as opposed to surgical training) varies considerably across the world: see the articles on medical education for more details. In North America, it requires at least three years of residency training after medical school, which can then be followed by a one- to three-year fellowship in the subspecialties listed above. In general, resident work hours in medicine are less than those in surgery, averaging about 60 hours per week in the US. This difference does not apply in the UK, where all doctors are now required by law to work less than 48 hours per week on average. 
Diagnostic specialties Clinical laboratory sciences are the clinical diagnostic services that apply laboratory techniques to diagnosis and management of patients. In the United States, these services are supervised by a pathologist. The personnel that work in these medical laboratory departments are technically trained staff who do not hold medical degrees, but who usually hold an undergraduate medical technology degree, who actually perform the tests, assays, and procedures needed for providing the specific services. Subspecialties include transfusion medicine, cellular pathology, clinical chemistry, hematology, clinical microbiology and clinical immunology. Clinical neurophysiology is concerned with testing the physiology or function of the central and peripheral aspects of the nervous system. These kinds of tests can be divided into recordings of: (1) spontaneous or continuously running electrical activity, or (2) stimulus evoked responses. Subspecialties include electroencephalography, electromyography, evoked potential, nerve conduction study and polysomnography. Sometimes these tests are performed by techs without a medical degree, but the interpretation of these tests is done by a medical professional. Diagnostic radiology is concerned with imaging of the body, e.g. by x-rays, x-ray computed tomography, ultrasonography, and nuclear magnetic resonance tomography. Interventional radiologists can access areas in the body under imaging for an intervention or diagnostic sampling. Nuclear medicine is concerned with studying human organ systems by administering radiolabelled substances (radiopharmaceuticals) to the body, which can then be imaged outside the body by a gamma camera or a PET scanner. Each radiopharmaceutical consists of two parts: a tracer that is specific for the function under study (e.g., neurotransmitter pathway, metabolic pathway, blood flow, or other), and a radionuclide (usually either a gamma-emitter or a positron emitter). There is a degree of overlap between nuclear medicine and radiology, as evidenced by the emergence of combined devices such as the PET/CT scanner. Pathology as a medical specialty is the branch of medicine that deals with the study of diseases and the morphologic, physiologic changes produced by them. As a diagnostic specialty, pathology can be considered the basis of modern scientific medical knowledge and plays a large role in evidence-based medicine. Many modern molecular tests such as flow cytometry, polymerase chain reaction (PCR), immunohistochemistry, cytogenetics, gene rearrangements studies and fluorescent in situ hybridization (FISH) fall within the territory of pathology. Other major specialties The following are some major medical specialties that do not directly fit into any of the above-mentioned groups: Anesthesiology (also known as anaesthetics): concerned with the perioperative management of the surgical patient. The anesthesiologist's role during surgery is to prevent derangement in the vital organs' (i.e. brain, heart, kidneys) functions and postoperative pain. Outside of the operating room, the anesthesiology physician also serves the same function in the labor and delivery ward, and some are specialized in critical medicine. Emergency medicine is concerned with the diagnosis and treatment of acute or life-threatening conditions, including trauma, surgical, medical, pediatric, and psychiatric emergencies. 
Family medicine, family practice, general practice or primary care is, in many countries, the first port-of-call for patients with non-emergency medical problems. Family physicians often provide services across a broad range of settings including office-based practices, emergency department coverage, inpatient care, and nursing home care.
Medical genetics is concerned with the diagnosis and management of hereditary disorders.
Neurology is concerned with diseases of the nervous system. In the UK, neurology is a subspecialty of general medicine.
Obstetrics and gynecology (often abbreviated as OB/GYN (American English) or Obs & Gynae (British English)) are concerned respectively with childbirth and the female reproductive and associated organs. Reproductive medicine and fertility medicine are generally practiced by gynecological specialists.
Pediatrics (AE) or paediatrics (BE) is devoted to the care of infants, children, and adolescents. Like internal medicine, there are many pediatric subspecialties for specific age ranges, organ systems, disease classes, and sites of care delivery.
Pharmaceutical medicine is the medical scientific discipline concerned with the discovery, development, evaluation, registration, monitoring and medical aspects of marketing of medicines for the benefit of patients and public health.
Physical medicine and rehabilitation (or physiatry) is concerned with functional improvement after injury, illness, or congenital disorders.
Podiatric medicine is the study, diagnosis, and medical and surgical treatment of disorders of the foot, ankle, lower limb, hip and lower back.
Preventive medicine is the branch of medicine concerned with preventing disease.
Community health or public health is an aspect of health services concerned with threats to the overall health of a community based on population health analysis.
Psychiatry is the branch of medicine concerned with the bio-psycho-social study of the etiology, diagnosis, treatment and prevention of cognitive, perceptual, emotional and behavioral disorders. Related fields include psychotherapy and clinical psychology.
Interdisciplinary fields
Some interdisciplinary sub-specialties of medicine include:
Addiction medicine deals with the treatment of addiction.
Aerospace medicine deals with medical problems related to flying and space travel.
Biomedical engineering is a field dealing with the application of engineering principles to medical practice.
Clinical pharmacology is concerned with how systems of therapeutics interact with patients.
Conservation medicine studies the relationship between human and non-human animal health, and environmental conditions. Also known as ecological medicine, environmental medicine, or medical geology.
Disaster medicine deals with medical aspects of emergency preparedness, disaster mitigation and management.
Diving medicine (or hyperbaric medicine) is the prevention and treatment of diving-related problems.
Evolutionary medicine is a perspective on medicine derived through applying evolutionary theory.
Forensic medicine deals with medical questions in a legal context, such as determining the time and cause of death, the type of weapon used to inflict trauma, and reconstructing facial features from the remains of the deceased (skull), thus aiding identification.
Gender-based medicine studies the biological and physiological differences between the human sexes and how these affect differences in disease.
Health informatics is a relatively recent field that deals with the application of computers and information technology to medicine.
Hospice and palliative medicine is a relatively modern branch of clinical medicine that deals with pain and symptom relief and emotional support in patients with terminal illnesses including cancer and heart failure.
Hospital medicine is the general medical care of hospitalized patients. Physicians whose primary professional focus is hospital medicine are called hospitalists in the United States and Canada. The term Most Responsible Physician (MRP) or attending physician is also used interchangeably to describe this role.
Laser medicine involves the use of lasers in the diagnostics or treatment of various conditions.
Many other health science fields, e.g. dietetics
Medical ethics deals with ethical and moral principles that apply values and judgments to the practice of medicine.
Medical humanities includes the humanities (literature, philosophy, ethics, history and religion), social science (anthropology, cultural studies, psychology, sociology), and the arts (literature, theater, film, and visual arts) and their application to medical education and practice.
Nosokinetics is the science/subject of measuring and modelling the process of care in health and social care systems.
Nosology is the classification of diseases for various purposes.
Occupational medicine is the provision of health advice to organizations and individuals to ensure that the highest standards of health and safety at work can be achieved and maintained.
Pain management (also called pain medicine, or algiatry) is the medical discipline concerned with the relief of pain.
Pharmacogenomics is a form of individualized medicine.
Podiatric medicine is the study, diagnosis, and medical treatment of disorders of the foot, ankle, lower limb, hip and lower back.
Sexual medicine is concerned with diagnosing, assessing and treating all disorders related to sexuality.
Sports medicine deals with the treatment, prevention and rehabilitation of sports/exercise injuries such as muscle spasms, muscle tears, injuries to ligaments (ligament tears or ruptures) and their repair in athletes, amateur and professional.
Therapeutics is the field, more commonly referenced in earlier periods of history, of the various remedies that can be used to treat disease and promote health.
Travel medicine or emporiatrics deals with health problems of international travelers or travelers across highly different environments.
Tropical medicine deals with the prevention and treatment of tropical diseases. It is studied separately in temperate climates, where those diseases are quite unfamiliar to medical practitioners and to local clinical needs.
Urgent care focuses on delivery of unscheduled, walk-in care outside of the hospital emergency department for injuries and illnesses that are not severe enough to require care in an emergency department. In some jurisdictions this function is combined with the emergency department.
Veterinary medicine; veterinarians apply techniques similar to those of physicians to the care of non-human animals.
Wilderness medicine entails the practice of medicine in the wild, where conventional medical facilities may not be available.
Education and legal controls
Medical education and training varies around the world. It typically involves entry-level education at a university medical school, followed by a period of supervised practice or internship, or residency.
This can be followed by postgraduate vocational training. A variety of teaching methods have been employed in medical education, which is still itself a focus of active research. In Canada and the United States of America, a Doctor of Medicine degree, often abbreviated M.D., or a Doctor of Osteopathic Medicine degree, often abbreviated as D.O. and unique to the United States, must be completed in and delivered from a recognized university. Since knowledge, techniques, and medical technology continue to evolve at a rapid rate, many regulatory authorities require continuing medical education. Medical practitioners upgrade their knowledge in various ways, including medical journals, seminars, conferences, and online programs. A database of objectives covering medical knowledge, as suggested by national societies across the United States, can be searched at http://data.medobjectives.marian.edu/. In most countries, it is a legal requirement for a medical doctor to be licensed or registered. In general, this entails a medical degree from a university and accreditation by a medical board or an equivalent national organization, which may ask the applicant to pass exams. This restricts the considerable legal authority of the medical profession to physicians who are trained and qualified by national standards. It is also intended as an assurance to patients and as a safeguard against charlatans who practice inadequate medicine for personal gain. While the laws generally require medical doctors to be trained in "evidence-based", Western, or Hippocratic medicine, they are not intended to discourage different paradigms of health. In the European Union, the profession of doctor of medicine is regulated. A profession is said to be regulated when access and exercise are subject to the possession of a specific professional qualification. The regulated professions database contains a list of regulated professions for doctor of medicine in the EU member states, EEA countries and Switzerland. This list is covered by Directive 2005/36/EC. Doctors who are negligent or intentionally harmful in their care of patients can face charges of medical malpractice and be subject to civil, criminal, or professional sanctions.
Medical ethics
Medical ethics is a system of moral principles that apply values and judgments to the practice of medicine. As a scholarly discipline, medical ethics encompasses its practical application in clinical settings as well as work on its history, philosophy, theology, and sociology. Six of the values that commonly apply to medical ethics discussions are:
autonomy – the patient has the right to refuse or choose their treatment.
beneficence – a practitioner should act in the best interest of the patient.
justice – concerns the distribution of scarce health resources, and the decision of who gets what treatment (fairness and equality).
non-maleficence – "first, do no harm".
respect for persons – the patient (and the person treating the patient) have the right to be treated with dignity.
truthfulness and honesty – the concept of informed consent has increased in importance since the historical events of the Doctors' Trial of the Nuremberg trials, Tuskegee syphilis experiment, and others.
Values such as these do not give answers as to how to handle a particular situation, but provide a useful framework for understanding conflicts. When moral values are in conflict, the result may be an ethical dilemma or crisis.
Sometimes, no good solution to a dilemma in medical ethics exists, and occasionally, the values of the medical community (i.e., the hospital and its staff) conflict with the values of the individual patient, family, or larger non-medical community. Conflicts can also arise between health care providers, or among family members. For example, some argue that the principles of autonomy and beneficence clash when patients refuse blood transfusions that clinicians consider life-saving; and truth-telling was not emphasized to a large extent before the HIV era.
History
Ancient world
Prehistoric medicine incorporated plants (herbalism), animal parts, and minerals. In many cases these materials were used ritually as magical substances by priests, shamans, or medicine men. Well-known spiritual systems include animism (the notion of inanimate objects having spirits), spiritualism (an appeal to gods or communion with ancestor spirits); shamanism (the vesting of an individual with mystic powers); and divination (magically obtaining the truth). The field of medical anthropology examines the ways in which culture and society are organized around or impacted by issues of health, health care and related issues. The earliest known medical texts in the world were found in the ancient Syrian city of Ebla and date back to 2500 BCE. Other early records on medicine have been discovered from ancient Egyptian medicine, Babylonian medicine, Ayurvedic medicine (in the Indian subcontinent), classical Chinese medicine (a predecessor of modern traditional Chinese medicine), and ancient Greek medicine and Roman medicine. In Egypt, Imhotep (3rd millennium BCE) is the first physician in history known by name. The oldest Egyptian medical text is the Kahun Gynaecological Papyrus from around 2000 BCE, which describes gynaecological diseases. The Edwin Smith Papyrus dating back to 1600 BCE is an early work on surgery, while the Ebers Papyrus dating back to 1500 BCE is akin to a textbook on medicine. In China, archaeological evidence of medicine dates back to the Bronze Age Shang dynasty, based on seeds for herbalism and tools presumed to have been used for surgery. The Huangdi Neijing, the progenitor of Chinese medicine, is a medical text written beginning in the 2nd century BCE and compiled in the 3rd century. In India, the surgeon Sushruta described numerous surgical operations, including the earliest forms of plastic surgery. The earliest records of dedicated hospitals come from Mihintale in Sri Lanka, where evidence of dedicated medicinal treatment facilities for patients is found. In Greece, the ancient Greek physician Hippocrates, the "father of modern medicine", laid the foundation for a rational approach to medicine. Hippocrates introduced the Hippocratic Oath for physicians, which is still relevant and in use today, and was the first to categorize illnesses as acute, chronic, endemic and epidemic, and to use terms such as "exacerbation, relapse, resolution, crisis, paroxysm, peak, and convalescence". The Greek physician Galen was also one of the greatest surgeons of the ancient world and performed many audacious operations, including brain and eye surgeries. After the fall of the Western Roman Empire and the onset of the Early Middle Ages, the Greek tradition of medicine went into decline in Western Europe, although it continued uninterrupted in the Eastern Roman (Byzantine) Empire. Most of our knowledge of ancient Hebrew medicine during the 1st millennium BC comes from the Torah, i.e.
the Five Books of Moses, which contain various health-related laws and rituals. The Hebrew contribution to the development of modern medicine started in the Byzantine era, with the physician Asaph the Jew.
Middle Ages
The concept of the hospital as an institution offering medical care and the possibility of a cure for patients, rather than merely a place to die, appeared in the Byzantine Empire, driven by the ideals of Christian charity. Although the concept of uroscopy was known to Galen, he did not see the importance of using it to localize the disease. It was under the Byzantines, with physicians such as Theophilus Protospatharius, that the potential of uroscopy to determine disease was realized, in a time when no microscope or stethoscope existed. That practice eventually spread to the rest of Europe. After 750 CE, the Muslim world had the works of Hippocrates, Galen and Sushruta translated into Arabic, and Islamic physicians engaged in some significant medical research. Notable Islamic medical pioneers include the Persian polymath Avicenna, who, along with Imhotep and Hippocrates, has also been called the "father of medicine". He wrote The Canon of Medicine, which became a standard medical text at many medieval European universities and is considered one of the most famous books in the history of medicine. Others include Abulcasis, Avenzoar, Ibn al-Nafis, and Averroes. The Persian physician Rhazes was one of the first to question the Greek theory of humorism, which nevertheless remained influential in both medieval Western and medieval Islamic medicine. Some volumes of Rhazes's work Al-Mansuri, namely "On Surgery" and "A General Book on Therapy", became part of the medical curriculum in European universities. Additionally, he has been described as a doctor's doctor, the father of pediatrics, and a pioneer of ophthalmology. For example, he was the first to recognize the reaction of the eye's pupil to light. The Persian Bimaristan hospitals were an early example of public hospitals. In Europe, Charlemagne decreed that a hospital should be attached to each cathedral and monastery, and the historian Geoffrey Blainey likened the activities of the Catholic Church in health care during the Middle Ages to an early version of a welfare state: "It conducted hospitals for the old and orphanages for the young; hospices for the sick of all ages; places for the lepers; and hostels or inns where pilgrims could buy a cheap bed and meal". It supplied food to the population during famine and distributed food to the poor. The church funded this welfare system by collecting taxes on a large scale and by possessing large farmlands and estates. The Benedictine order was noted for setting up hospitals and infirmaries in their monasteries, growing medical herbs and becoming the chief medical caregivers of their districts, as at the great Abbey of Cluny. The Church also established a network of cathedral schools and universities where medicine was studied. The Schola Medica Salernitana in Salerno, looking to the learning of Greek and Arab physicians, grew to be the finest medical school in medieval Europe. However, the fourteenth- and fifteenth-century Black Death devastated both the Middle East and Europe, and it has even been argued that Western Europe was generally more effective in recovering from the pandemic than the Middle East. In the early modern period, important early figures in medicine and anatomy emerged in Europe, including Gabriele Falloppio and William Harvey.
The major shift in medical thinking was the gradual rejection, especially during the Black Death in the 14th and 15th centuries, of what may be called the "traditional authority" approach to science and medicine. This was the notion that because some prominent person in the past said something must be so, then that was the way it was, and anything one observed to the contrary was an anomaly (which was paralleled by a similar shift in European society in general – see Copernicus's rejection of Ptolemy's theories on astronomy). Physicians like Vesalius improved upon or disproved some of the theories from the past. The main tomes used both by medical students and expert physicians were Materia Medica and Pharmacopoeia. Andreas Vesalius was the author of De humani corporis fabrica, an important book on human anatomy. Bacteria and microorganisms were first observed with a microscope by Antonie van Leeuwenhoek in 1676, initiating the scientific field of microbiology. Independently from Ibn al-Nafis, Michael Servetus rediscovered the pulmonary circulation, but this discovery did not reach the public because it was written down for the first time in the "Manuscript of Paris" in 1546, and later published in the theological work for which he paid with his life in 1553. Later this was described by Renaldus Columbus and Andrea Cesalpino. Herman Boerhaave is sometimes referred to as a "father of physiology" due to his exemplary teaching in Leiden and textbook 'Institutiones medicae' (1708). Pierre Fauchard has been called "the father of modern dentistry". Modern veterinary medicine was, for the first time, truly separated from human medicine in 1761, when the French veterinarian Claude Bourgelat founded the world's first veterinary school in Lyon, France. Before this, medical doctors treated both humans and other animals. Modern scientific biomedical research (where results are testable and reproducible) began to replace early Western traditions based on herbalism, the Greek "four humours" and other such pre-modern notions. The modern era really began with Edward Jenner's discovery of the smallpox vaccine at the end of the 18th century (inspired by the method of variolation originated in ancient China), Robert Koch's discoveries around 1880 of the transmission of disease by bacteria, and then the discovery of antibiotics around 1900. The post-18th century modernity period brought more groundbreaking researchers from Europe. From Germany and Austria, doctors Rudolf Virchow, Wilhelm Conrad Röntgen, Karl Landsteiner and Otto Loewi made notable contributions. In the United Kingdom, Alexander Fleming, Joseph Lister, Francis Crick and Florence Nightingale are considered important. The Spanish doctor Santiago Ramón y Cajal is considered the father of modern neuroscience. From New Zealand and Australia came Maurice Wilkins, Howard Florey, and Frank Macfarlane Burnet. Others who did significant work include William Williams Keen, William Coley, James D. Watson (United States); Salvador Luria (Italy); Alexandre Yersin (Switzerland); Kitasato Shibasaburō (Japan); Jean-Martin Charcot, Claude Bernard, Paul Broca (France); Adolfo Lutz (Brazil); Nikolai Korotkov (Russia); Sir William Osler (Canada); and Harvey Cushing (United States). As science and technology developed, medicine became more reliant upon medications. Throughout history, and in Europe right until the late 18th century, not only plant products were used as medicine, but also animal (including human) body parts and fluids.
Pharmacology developed in part from herbalism and some drugs are still derived from plants (atropine, ephedrine, warfarin, aspirin, digoxin, vinca alkaloids, taxol, hyoscine, etc.). Vaccines were discovered by Edward Jenner and Louis Pasteur. The first antibiotic was arsphenamine (Salvarsan), discovered by Paul Ehrlich in 1908 after he observed that bacteria took up toxic dyes that human cells did not. The first major class of antibiotics was the sulfa drugs, derived by German chemists originally from azo dyes. Pharmacology has become increasingly sophisticated; modern biotechnology allows drugs targeted towards specific physiological processes to be developed, sometimes designed for compatibility with the body to reduce side-effects. Genomics and knowledge of human genetics and human evolution are having an increasingly significant influence on medicine, as the causative genes of most monogenic genetic disorders have now been identified, and the development of techniques in molecular biology, evolution, and genetics is influencing medical technology, practice and decision-making. Evidence-based medicine is a contemporary movement to establish the most effective algorithms of practice (ways of doing things) through the use of systematic reviews and meta-analysis. The movement is facilitated by modern global information science, which allows as much of the available evidence as possible to be collected and analyzed according to standard protocols that are then disseminated to healthcare providers. The Cochrane Collaboration leads this movement. A 2001 review of 160 Cochrane systematic reviews revealed that, according to two readers, 21.3% of the reviews concluded that there was insufficient evidence, 20% concluded no effect, and 22.5% concluded a positive effect.
Quality, efficiency, and access
Evidence-based medicine, prevention of medical error (and other "iatrogenesis"), and avoidance of unnecessary health care are a priority in modern medical systems. These topics generate significant political and public policy attention, particularly in the United States, where healthcare is regarded as excessively costly but population health metrics lag those of similar nations. Globally, many developing countries lack access to care and access to medicines. Most wealthy developed countries provide health care to all citizens, with a few exceptions such as the United States, where lack of health insurance coverage may limit access.
Innate immune system
The innate immune system or nonspecific immune system is one of the two main immunity strategies (the other being the adaptive immune system) in vertebrates. The innate immune system is an alternate defense strategy and is the dominant immune system response found in plants, fungi, prokaryotes, and invertebrates (see Beyond vertebrates). The major functions of the innate immune system are to:
recruit immune cells to infection sites by producing chemical factors, including chemical mediators called cytokines
activate the complement cascade to identify bacteria, activate cells, and promote clearance of antibody complexes or dead cells
identify and remove foreign substances present in organs, tissues, blood and lymph, by specialized white blood cells
activate the adaptive immune system through antigen presentation
act as a physical and chemical barrier to infectious agents, via physical measures such as skin and mucus and chemical measures such as clotting factors and host defence peptides.
Anatomical barriers
Anatomical barriers include physical, chemical and biological barriers. The epithelial surfaces form a physical barrier that is impermeable to most infectious agents, acting as the first line of defense against invading organisms. Desquamation (shedding) of skin epithelium also helps remove bacteria and other infectious agents that have adhered to the epithelial surface. Lack of blood vessels, the inability of the epidermis to retain moisture, and the presence of sebaceous glands in the dermis produce an environment unsuitable for the survival of microbes. In the gastrointestinal and respiratory tract, movement due to peristalsis or cilia, respectively, helps remove infectious agents. Also, mucus traps infectious agents. Gut flora can prevent the colonization of pathogenic bacteria by secreting toxic substances or by competing with pathogenic bacteria for nutrients or cell surface attachment sites. The flushing action of tears and saliva helps prevent infection of the eyes and mouth.
Inflammation
Inflammation is one of the first responses of the immune system to infection or irritation. Inflammation is stimulated by chemical factors released by injured cells. It establishes a physical barrier against the spread of infection and promotes healing of any damaged tissue following pathogen clearance. The process of acute inflammation is initiated by cells already present in all tissues, mainly resident macrophages, dendritic cells, histiocytes, Kupffer cells, and mast cells. These cells have receptors, on the cell surface or within the cell, named pattern recognition receptors (PRRs), which recognize molecules that are broadly shared by pathogens but distinguishable from host molecules, collectively referred to as pathogen-associated molecular patterns (PAMPs). At the onset of an infection, burn, or other injury, these cells undergo activation (one of their PRRs recognizes a PAMP) and release inflammatory mediators, like cytokines and chemokines, which are responsible for the clinical signs of inflammation. PRR activation and its cellular consequences have been well-characterized as methods of inflammatory cell death, which include pyroptosis, necroptosis, and PANoptosis. These cell death pathways help clear infected or aberrant cells and release cellular contents and inflammatory mediators.
Chemical factors produced during inflammation (histamine, bradykinin, serotonin, leukotrienes, and prostaglandins) sensitize pain receptors, cause local vasodilation of the blood vessels, and attract phagocytes, especially neutrophils. Neutrophils then trigger other parts of the immune system by releasing factors that summon additional leukocytes and lymphocytes. Cytokines produced by macrophages and other cells of the innate immune system mediate the inflammatory response. These cytokines include TNF, HMGB1, and IL-1. The inflammatory response is characterized by the following symptoms: redness of the skin, due to locally increased blood circulation; heat, either increased local temperature, such as a warm feeling around a localized infection, or a systemic fever; swelling of affected tissues, such as the upper throat during the common cold or joints affected by rheumatoid arthritis; increased production of mucus, which can cause symptoms like a runny nose or a productive cough; pain, either local pain, such as painful joints or a sore throat, or affecting the whole body, such as body aches; and possible dysfunction of involved organs/tissues.
Complement system
The complement system is a biochemical cascade of the immune system that helps, or "complements", the ability of antibodies to clear pathogens or mark them for destruction by other cells. The cascade is composed of many plasma proteins, synthesized in the liver, primarily by hepatocytes. The proteins work together to:
trigger the recruitment of inflammatory cells
"tag" pathogens for destruction by other cells by opsonizing, or coating, the surface of the pathogen
form holes in the plasma membrane of the pathogen, resulting in cytolysis of the pathogen cell, causing its death
rid the body of neutralised antigen-antibody complexes.
The three different complement pathways are classical, alternative and lectin:
Classical: starts when antibody binds to bacteria
Alternative: starts "spontaneously"
Lectin: starts when lectins bind to mannose on bacteria
Elements of the complement cascade can be found in many non-mammalian species including plants, birds, fish, and some species of invertebrates.
White blood cells
White blood cells (WBCs) are also known as leukocytes. Most leukocytes differ from other cells of the body in that they are not tightly associated with a particular organ or tissue; thus, their function is similar to that of independent, single-cell organisms. Most leukocytes are able to move freely and interact with and capture cellular debris, foreign particles, and invading microorganisms (although macrophages, mast cells, and dendritic cells are less mobile). Unlike many other cells, most innate immune leukocytes cannot divide or reproduce on their own, but are the products of multipotent hematopoietic stem cells present in bone marrow. The innate leukocytes include natural killer cells, mast cells, eosinophils, and basophils; the phagocytic cells include macrophages, neutrophils, and dendritic cells. These cells function within the immune system by identifying and eliminating pathogens that might cause infection.
Mast cells
Mast cells are a type of innate immune cell that resides in connective tissue and in mucous membranes. They are intimately associated with wound healing and defense against pathogens, but are also often associated with allergy and anaphylaxis.
When activated, mast cells rapidly release characteristic granules, rich in histamine and heparin, along with various hormonal mediators and chemokines, or chemotactic cytokines into the environment. Histamine dilates blood vessels, causing the characteristic signs of inflammation, and recruits neutrophils and macrophages. Phagocytes The word 'phagocyte' literally means 'eating cell'. These are immune cells that engulf, or 'phagocytose', pathogens or particles. To engulf a particle or pathogen, a phagocyte extends portions of its plasma membrane, wrapping the membrane around the particle until it is enveloped (i.e., the particle is now inside the cell). Once inside the cell, the invading pathogen is contained inside a phagosome, which merges with a lysosome. The lysosome contains enzymes and acids that kill and digest the particle or organism. In general, phagocytes patrol the body searching for pathogens, but are also able to react to a group of highly specialized molecular signals produced by other cells, called cytokines. The phagocytic cells of the immune system include macrophages, neutrophils, and dendritic cells. Phagocytosis of the hosts' own cells is common as part of regular tissue development and maintenance. When host cells die, either by apoptosis or by cell injury due to an infection, phagocytic cells are responsible for their removal from the affected site. By helping to remove dead cells preceding growth and development of new healthy cells, phagocytosis is an important part of the healing process following tissue injury. Macrophages Macrophages, from the Greek, meaning "large eaters", are large phagocytic leukocytes, which are able to move beyond the vascular system by migrating through the walls of capillary vessels and entering the areas between cells in pursuit of invading pathogens. In tissues, organ-specific macrophages are differentiated from phagocytic cells present in the blood called monocytes. Macrophages are the most efficient phagocytes and can phagocytose substantial numbers of bacteria or other cells or microbes. The binding of bacterial molecules to receptors on the surface of a macrophage triggers it to engulf and destroy the bacteria through the generation of a "respiratory burst", causing the release of reactive oxygen species. Pathogens also stimulate the macrophage to produce chemokines, which summon other cells to the site of infection. Neutrophils Neutrophils, along with eosinophils and basophils, are known as granulocytes due to the presence of granules in their cytoplasm, or as polymorphonuclear cells (PMNs) due to their distinctive lobed nuclei. Neutrophil granules contain a variety of toxic substances that kill or inhibit growth of bacteria and fungi. Similar to macrophages, neutrophils attack pathogens by activating a respiratory burst. The main products of the neutrophil respiratory burst are strong oxidizing agents including hydrogen peroxide, free oxygen radicals and hypochlorite. Neutrophils are the most abundant type of phagocyte, normally representing 50–60% of the total circulating leukocytes, and are usually the first cells to arrive at the site of an infection. The bone marrow of a normal healthy adult produces more than 100 billion neutrophils per day, and more than 10 times that many per day during acute inflammation. 
Dendritic cells Dendritic cells (DCs) are phagocytic cells present in tissues that are in contact with the external environment, mainly the skin (where they are often called Langerhans cells), and the inner mucosal lining of the nose, lungs, stomach, and intestines. They are named for their resemblance to neuronal dendrites, but dendritic cells are not connected to the nervous system. Dendritic cells are very important in the process of antigen presentation, and serve as a link between the innate and adaptive immune systems. Basophils and eosinophils Basophils and eosinophils are cells related to the neutrophil. When activated by a pathogen encounter, histamine-releasing basophils are important in the defense against parasites and play a role in allergic reactions, such as asthma. Upon activation, eosinophils secrete a range of highly toxic proteins and free radicals that are highly effective in killing parasites, but may also damage tissue during an allergic reaction. Activation and release of toxins by eosinophils are, therefore, tightly regulated to prevent any inappropriate tissue destruction. Natural killer cells Natural killer cells (NK cells) do not directly attack invading microbes. Rather, NK cells destroy compromised host cells, such as tumor cells or virus-infected cells, recognizing such cells by a condition known as "missing self". This term describes cells with abnormally low levels of a cell-surface marker called MHC I (major histocompatibility complex) - a situation that can arise in viral infections of host cells. They were named "natural killer" because of the initial notion that they do not require activation in order to kill cells that are "missing self". The MHC makeup on the surface of damaged cells is altered and the NK cells become activated by recognizing this. Normal body cells are not recognized and attacked by NK cells because they express intact self MHC antigens. Those MHC antigens are recognized by killer cell immunoglobulin receptors (KIR) that slow the reaction of NK cells. The NK-92 cell line does not express KIR and is developed for tumor therapy. γδ T cells Like other 'unconventional' T cell subsets bearing invariant T cell receptors (TCRs), such as CD1d-restricted Natural Killer T cells, γδ T cells exhibit characteristics that place them at the border between innate and adaptive immunity. γδ T cells may be considered a component of adaptive immunity in that they rearrange TCR genes to produce junctional diversity and develop a memory phenotype. The various subsets may be considered part of the innate immune system where a restricted TCR or NK receptors may be used as a pattern recognition receptor. For example, according to this paradigm, large numbers of Vγ9/Vδ2 T cells respond within hours to common molecules produced by microbes, and highly restricted intraepithelial Vδ1 T cells will respond to stressed epithelial cells. Other vertebrate mechanisms The coagulation system overlaps with the immune system. Some products of the coagulation system can contribute to non-specific defenses via their ability to increase vascular permeability and act as chemotactic agents for phagocytic cells. In addition, some of the products of the coagulation system are directly antimicrobial. For example, beta-lysine, a protein produced by platelets during coagulation, can cause lysis of many Gram-positive bacteria by acting as a cationic detergent. Many acute-phase proteins of inflammation are involved in the coagulation system. 
Increased levels of lactoferrin and transferrin inhibit bacterial growth by binding iron, an essential bacterial nutrient.
Neural regulation
The innate immune response to infectious and sterile injury is modulated by neural circuits that control cytokine production. The inflammatory reflex is a prototypical neural circuit that controls cytokine production in the spleen. Action potentials transmitted via the vagus nerve to the spleen mediate the release of acetylcholine, the neurotransmitter that inhibits cytokine release by interacting with alpha7 nicotinic acetylcholine receptors (CHRNA7) expressed on cytokine-producing cells. The motor arc of the inflammatory reflex is termed the cholinergic anti-inflammatory pathway.
Pathogen specificity
The parts of the innate immune system display specificity for different pathogens.
Immune evasion
Innate immune system cells prevent free growth of microorganisms within the body, but many pathogens have evolved mechanisms to evade it. One strategy is intracellular replication, as practised by Mycobacterium tuberculosis, or wearing a protective capsule, which prevents lysis by complement and by phagocytes, as in Salmonella. Bacteroides species are normally mutualistic bacteria, making up a substantial portion of the mammalian gastrointestinal flora. Species such as B. fragilis are opportunistic pathogens, causing infections of the peritoneal cavity. They inhibit phagocytosis by affecting the receptors that phagocytes use to engulf bacteria. They may also mimic host cells so the immune system does not recognize them as foreign. Staphylococcus aureus inhibits the ability of the phagocyte to respond to chemokine signals. M. tuberculosis, Streptococcus pyogenes, and Bacillus anthracis utilize mechanisms that directly kill the phagocyte. Bacteria and fungi may form complex biofilms, protecting them from immune cells and proteins; biofilms are present in the chronic Pseudomonas aeruginosa and Burkholderia cenocepacia infections characteristic of cystic fibrosis.
Viruses
Type I interferons (IFN), secreted mainly by dendritic cells, play a central role in antiviral host defense and a cell's antiviral state. Viral components are recognized by different receptors: Toll-like receptors are located in the endosomal membrane and recognize double-stranded RNA (dsRNA); MDA5 and RIG-I receptors are located in the cytoplasm and recognize long dsRNA and phosphate-containing dsRNA, respectively. When the cytoplasmic receptors MDA5 and RIG-I recognize a virus, the conformation between the caspase-recruitment domain (CARD) and the CARD-containing adaptor MAVS changes. In parallel, when TLRs in the endocytic compartments recognize a virus, the activation of the adaptor protein TRIF is induced. Both pathways converge in the recruitment and activation of the IKKε/TBK-1 complex, inducing dimerization of the transcription factors IRF3 and IRF7, which are translocated to the nucleus, where they induce IFN production in the presence of particular transcription factors, such as activating transcription factor 2. IFN is secreted through secretory vesicles, where it can activate receptors on both the cell it was released from (autocrine) or nearby cells (paracrine). This induces hundreds of interferon-stimulated genes to be expressed. This leads to antiviral protein production, such as protein kinase R, which inhibits viral protein synthesis, or the 2′,5′-oligoadenylate synthetase family, which degrades viral RNA.
Some viruses evade this by producing molecules that interfere with IFN production. For example, the influenza A virus produces the NS1 protein, which can bind to host and viral RNA, interact with immune signaling proteins or block their activation by ubiquitination, thus inhibiting type I IFN production. Influenza A also blocks protein kinase R activation and establishment of the antiviral state. The dengue virus also inhibits type I IFN production by blocking IRF-3 phosphorylation using the NS2B3 protease complex.
Beyond vertebrates
Prokaryotes
Bacteria (and perhaps other prokaryotic organisms) utilize a unique defense mechanism, called the restriction modification system, to protect themselves from pathogens, such as bacteriophages. In this system, bacteria produce enzymes, called restriction endonucleases, that attack and destroy specific regions of the viral DNA of invading bacteriophages. Methylation of the host's own DNA marks it as "self" and prevents it from being attacked by endonucleases. Restriction endonucleases and the restriction modification system exist exclusively in prokaryotes.
Invertebrates
Invertebrates do not possess lymphocytes or an antibody-based humoral immune system, and it is likely that a multicomponent, adaptive immune system arose with the first vertebrates. Nevertheless, invertebrates possess mechanisms that appear to be precursors of these aspects of vertebrate immunity. Pattern recognition receptors (PRRs) are proteins used by nearly all organisms to identify molecules associated with microbial pathogens. TLRs are a major class of pattern recognition receptors that exist in all coelomates (animals with a body cavity), including humans. The complement system exists in most life forms. Some invertebrates, including various insects, crabs, and worms, utilize a modified form of the complement response known as the prophenoloxidase (proPO) system. Antimicrobial peptides are an evolutionarily conserved component of the innate immune response found among all classes of life and represent the main form of invertebrate systemic immunity. Several species of insect produce antimicrobial peptides known as defensins and cecropins.
Proteolytic cascades
In invertebrates, PRRs trigger proteolytic cascades that degrade proteins and control many of the mechanisms of the innate immune system of invertebrates – including hemolymph coagulation and melanization. Proteolytic cascades are important components of the invertebrate immune system because they are turned on more rapidly than other innate immune reactions, as they do not rely on gene changes. Proteolytic cascades function in both vertebrates and invertebrates, even though different proteins are used throughout the cascades.
Clotting mechanisms
In the hemolymph, which makes up the fluid in the circulatory system of arthropods, a gel-like fluid surrounds pathogen invaders, similar to the way blood does in other animals. Various proteins and mechanisms are involved in invertebrate clotting. In crustaceans, transglutaminase from blood cells and mobile plasma proteins make up the clotting system, where the transglutaminase polymerizes 210 kDa subunits of a plasma-clotting protein. On the other hand, in the horseshoe crab clotting system, components of proteolytic cascades are stored as inactive forms in granules of hemocytes, which are released when foreign molecules, like lipopolysaccharides, enter.
Plants
Members of every class of pathogen that infect humans also infect plants.
Although the exact pathogenic species vary with the infected species, bacteria, fungi, viruses, nematodes, and insects can all cause plant disease. As with animals, plants attacked by insects or other pathogens use a set of complex metabolic responses that lead to the formation of defensive chemical compounds that fight infection or make the plant less attractive to insects and other herbivores (see plant defense against herbivory). Like invertebrates, plants generate neither antibody nor T-cell responses, nor do they possess mobile cells that detect and attack pathogens. In addition, in case of infection, parts of some plants are treated as disposable and replaceable, in ways that few animals can. Walling off or discarding a part of a plant helps stop infection spread. Most plant immune responses involve systemic chemical signals sent throughout a plant. Plants use PRRs to recognize conserved microbial signatures. This recognition triggers an immune response. The first plant receptors of conserved microbial signatures were identified in rice (XA21, 1995) and in Arabidopsis (FLS2, 2000). Plants also carry immune receptors that recognize variable pathogen effectors. These include the NBS-LRR class of proteins. When a part of a plant becomes infected with a microbial or viral pathogen, in case of an incompatible interaction triggered by specific elicitors, the plant produces a localized hypersensitive response (HR), in which cells at the site of infection undergo rapid apoptosis to prevent spread to other parts of the plant. HR has some similarities to animal pyroptosis, such as a requirement of caspase-1-like proteolytic activity of VPEγ, a cysteine protease that regulates cell disassembly during cell death. "Resistance" (R) proteins, encoded by R genes, are widely present in plants and detect pathogens. These proteins contain domains similar to the NOD-like receptors and TLRs. Systemic acquired resistance (SAR) is a type of defensive response that renders the entire plant resistant to a broad spectrum of infectious agents. SAR involves the production of chemical messengers, such as salicylic acid or jasmonic acid. Some of these travel through the plant and signal other cells to produce defensive compounds to protect uninfected parts, e.g., leaves. Salicylic acid itself, although indispensable for expression of SAR, is not the translocated signal responsible for the systemic response. Recent evidence indicates a role for jasmonates in transmission of the signal to distal portions of the plant. RNA silencing mechanisms are important in the plant systemic response, as they can block virus replication. The jasmonic acid response is stimulated in leaves damaged by insects, and involves the production of methyl jasmonate.
See also
Antimicrobial peptides
Apoptosis
Innate lymphoid cell
NOD-like receptor
Endothelial cell tropism
Human physical appearance
Human physical appearance is the outward phenotype or look of human beings. There are functionally infinite variations in human phenotypes, though society reduces the variability to distinct categories. The physical appearance of humans, in particular those attributes which are regarded as important for physical attractiveness, is believed by anthropologists to significantly affect the development of personality and social relations. Many humans are acutely sensitive to their physical appearance. Some differences in human appearance are genetic, others are the result of age, lifestyle or disease, and many are the result of personal adornment. Some people have linked some differences with ethnicity, such as skeletal shape, prognathism or elongated stride. Different cultures place different degrees of emphasis on physical appearance and its importance to social status and other phenomena.
Aspects
Various aspects are considered relevant to the physical appearance of humans.
Physiological differences
Humans are distributed across the globe except for Antarctica and form a variable species. In adults, the average weight varies from around 40 kg (88 pounds) for the smallest and most lightly built tropical people to around 80 kg (176 pounds) for the heavier northern peoples. Size also varies between the sexes, with the sexual dimorphism in humans being more pronounced than that of chimpanzees, but less than the dimorphism found in gorillas. The colouration of skin, hair and eyes also varies considerably, with darker pigmentation dominating in tropical climates and lighter in polar regions.
Genetics, ethnic affiliation, geographical ancestry
Height, body weight, skin tone, body hair, sexual organs, hair color, hair texture, eye color, eye shape (see epicanthic fold and eyelid variations), nose shape (see nasal bridge), ear shape (see earlobes), body shape
Body and skin variations such as amputations, scars, burns and wounds
Long-term physiological changes
Aging
Hair loss
Short-term physiological changes
Blushing, crying, fainting, hiccup, yawning, laughing, stuttering, sexual arousal, sweating, shivering, skin color changes due to sunshine or frost
Clothing, personal effects, and intentional body modifications
Clothing, including headgear and footwear; some clothes alter or mold the shape of the body (e.g. corset, support pantyhose, bra). As for footwear, high heels make a person look taller.
Style and colour of haircut (see also mohawk, dreadlocks, braids, ponytail, wig, hairpin, facial hair, beard and moustache)
Cosmetics, stage makeup, body paintings, permanent makeup
Body modifications, such as body piercings, tattoos, scarification, subdermal implants
Plastic surgery
Decorative objects (jewelry) such as necklaces, bracelets, rings, earrings
Medical or body shape altering devices (e.g., tooth braces, bandages, casts, hearing aids, cervical collar, crutches, contact lenses of different colours, glasses, gold teeth). For example, the same person's appearance can be quite different, depending on whether they use any of the aforementioned modifications.
Exercises, for example, bodybuilding
Other functional objects, temporarily attached to the body:
Capes
Goggles
Hair ornaments
Hats and caps
Headdresses
Headphones/handsfree phone headset
Jewelry
Masks
Prosthetic limbs
Sunglasses
Watches
See also
Beauty
Biometrics
Body image
Deformity
Dress code
Eigenface
Face perception
Facial symmetry
Fashion
Female body shape
Hairstyle
Human variability
Human body
Hair coloring
Nudity
Sexual attraction
Sexual capital
Sexual selection
Somatotype
Vanity
Hypertensive emergency
A hypertensive emergency is very high blood pressure with potentially life-threatening symptoms and signs of acute damage to one or more organ systems (especially brain, eyes, heart, aorta, or kidneys). It differs from a hypertensive urgency by this additional evidence of impending irreversible hypertension-mediated organ damage (HMOD). Blood pressure is often above 200/120 mmHg; however, there are no universally accepted cutoff values.
Signs and symptoms
Symptoms may include headache, nausea, or vomiting. Chest pain may occur due to increased workload on the heart resulting in inadequate delivery of oxygen to meet the heart muscle's metabolic needs. The kidneys may be affected, resulting in blood or protein in the urine, and acute kidney failure. People can have decreased urine production, fluid retention, and confusion. Other signs and symptoms can include:
Chest pain
Abnormal heart rhythms
Headache
Nosebleeds that are difficult to stop
Dyspnea
Fainting or the sensation of the world spinning around one (vertigo)
Severe anxiety
Agitation
Altered mental status
Abnormal sensations
The most common presentations of hypertensive emergencies are cerebral infarction (24.5%), pulmonary edema (22.5%), hypertensive encephalopathy (16.3%), and congestive heart failure (12%). Less common presentations include intracranial bleeding, aortic dissection, and pre-eclampsia or eclampsia. Massive, rapid elevations in blood pressure can trigger any of these symptoms, and warrant further work-up by physicians. The physical exam includes measurement of blood pressure in both arms. Laboratory and other tests to be conducted include urine toxicology, blood glucose, a basic metabolic panel evaluating kidney function or a complete metabolic panel also evaluating liver function, an EKG, chest x-rays, and pregnancy screening. The eyes may show bleeding in the retina, an exudate, cotton-wool spots, scattered splinter hemorrhages, or swelling of the optic disc called papilledema.
Causes
Many factors and causes contribute to hypertensive crises. The most common cause is patients with diagnosed, chronic hypertension who have discontinued antihypertensive medications. Other common causes of hypertensive crises are autonomic hyperactivity such as pheochromocytoma; collagen-vascular diseases; drug use, particularly stimulants such as cocaine and amphetamines and their substituted analogues; monoamine oxidase inhibitors or food-drug interactions; spinal cord disorders; glomerulonephritis; head trauma; neoplasias; preeclampsia and eclampsia; hyperthyroidism; and renovascular hypertension. People withdrawing from medications such as clonidine or beta-blockers have been frequently found to develop hypertensive crises. It is important to note that these conditions also exist outside of hypertensive emergency, and patients diagnosed with them are at increased risk of hypertensive emergencies or end-organ failure.
Pathophysiology
The pathophysiology of hypertensive emergency is not well understood. Failure of normal autoregulation and an abrupt rise in systemic vascular resistance are typical initial components of the disease process.
Hypertensive emergency pathophysiology includes:
Abrupt increase in systemic vascular resistance, likely related to humoral vasoconstrictors
Endothelial injury and dysfunction
Fibrinoid necrosis of the arterioles
Deposition of platelets and fibrin
Breakdown of normal autoregulatory function
The resulting ischemia prompts further release of vasoactive substances including prostaglandins, free radicals, and thrombotic/mitotic growth factors, completing a vicious cycle of inflammatory changes. If the process is not stopped, homeostatic failure begins, leading to loss of cerebral and local autoregulation, organ system ischemia and dysfunction, and myocardial infarction. Single-organ involvement is found in approximately 83% of hypertensive emergency patients, two-organ involvement in about 14% of patients, and multi-organ failure (failure of at least 3 organ systems) in about 3% of patients. In the brain, hypertensive encephalopathy - characterized by hypertension, altered mental status, and swelling of the optic disc - is a manifestation of the dysfunction of cerebral autoregulation. Cerebral autoregulation is the ability of the blood vessels in the brain to maintain a constant blood flow. People with chronic hypertension can tolerate higher arterial pressure before their autoregulation system is disrupted. People with hypertension also have increased cerebrovascular resistance, which puts them at greater risk of developing cerebral ischemia if the blood flow decreases into a normotensive range. On the other hand, sudden or rapid rises in blood pressure may cause hyperperfusion and increased cerebral blood flow, causing increased intracranial pressure and cerebral edema, with increased risk of intracranial bleeding. In the heart, increased arterial stiffness, increased systolic blood pressure, and widened pulse pressures, all resulting from chronic hypertension, can cause significant damage. Coronary perfusion pressures are decreased by these factors, which also increase myocardial oxygen consumption, possibly leading to left ventricular hypertrophy. As the left ventricle becomes unable to compensate for an acute rise in systemic vascular resistance, left ventricular failure and pulmonary edema or myocardial ischemia may occur. In the kidneys, chronic hypertension has a great impact on the kidney vasculature, leading to pathologic changes in the small arteries of the kidney. Affected arteries develop endothelial dysfunction and impairment of normal vasodilation, which alter kidney autoregulation. When the kidneys' autoregulatory system is disrupted, the intraglomerular pressure starts to vary directly with the systemic arterial pressure, thus offering no protection to the kidney during blood pressure fluctuations. The renin-angiotensin-aldosterone system can be activated, leading to further vasoconstriction and damage. During a hypertensive crisis, this can lead to acute kidney ischemia, with hypoperfusion, involvement of other organs, and subsequent dysfunction. After an acute event, this endothelial dysfunction can persist for years.
Diagnosis
The term hypertensive emergency is primarily used as a specific term for a hypertensive crisis with a diastolic blood pressure greater than or equal to 120 mmHg or systolic blood pressure greater than or equal to 180 mmHg. Hypertensive emergency differs from hypertensive urgency in that, in the former, there is evidence of acute organ damage.
Both of these presentations were formerly known collectively as malignant hypertension, although that term has largely been replaced. In the pregnant patient, the threshold for hypertensive emergency (usually secondary to pre-eclampsia or eclampsia) is lower: a blood pressure exceeding 160 mmHg systolic or 110 mmHg diastolic. Treatment In a hypertensive emergency, treatment should first be to stabilize the patient's airway, breathing, and circulation per ACLS guidelines. Patients should have their blood pressure lowered gradually over a period of minutes to hours with an antihypertensive agent. Documented blood pressure goals include a reduction in the mean arterial pressure of no more than 25% within the first 8 hours of the emergency. If blood pressure is lowered too aggressively, patients are at increased risk of complications, including stroke, blindness, or kidney failure. Several classes of antihypertensive agents are recommended, with the choice depending on the cause of the hypertensive crisis, the severity of the elevation in blood pressure, and the patient's baseline blood pressure prior to the hypertensive emergency. Physicians will also attempt to identify the cause of the patient's hypertension, using studies such as a chest radiograph, serum laboratory tests of kidney function, and urinalysis, because the cause will alter the treatment approach toward a more patient-directed regimen. Hypertensive emergencies differ from hypertensive urgency in that they are treated parenterally, whereas in urgency oral antihypertensives are recommended to reduce the risk of hypotensive complications or ischemia. Parenteral agents are classified into beta-blockers, calcium channel blockers, systemic vasodilators, or other (fenoldopam, phentolamine, clonidine). Medications include labetalol, nicardipine, hydralazine, sodium nitroprusside, esmolol, nifedipine, minoxidil, isradipine, clonidine, and chlorpromazine. These medications work through a variety of mechanisms. Labetalol is a beta-blocker with mild alpha antagonism, decreasing the ability of catecholamine activity to increase systemic vascular resistance while also decreasing heart rate and myocardial oxygen demand. Nicardipine, nifedipine, and isradipine are calcium channel blockers that decrease systemic vascular resistance and thereby lower blood pressure. Hydralazine and sodium nitroprusside are systemic vasodilators that reduce afterload, but they can cause reflex tachycardia, making them second- or third-line choices. Sodium nitroprusside was previously the first-line choice due to its rapid onset, although it is now less commonly used because of side effects, drastic drops in blood pressure, and cyanide toxicity. Sodium nitroprusside is also contraindicated in patients with myocardial infarction, due to coronary steal. It is again important that the blood pressure is lowered slowly. The initial goal in hypertensive emergencies is to reduce the pressure by no more than 25% of the mean arterial pressure. Excessive reduction in blood pressure can precipitate coronary, cerebral, or kidney ischemia and, possibly, infarction. A hypertensive emergency is not defined solely by an absolute level of blood pressure, but also by the patient's baseline blood pressure before the hypertensive crisis occurred. Individuals with a history of chronic hypertension may not tolerate a "normal" blood pressure and can therefore present with symptoms of hypotension, including fatigue, light-headedness, nausea, vomiting, or syncope.
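As a worked example of the goal described above, mean arterial pressure (MAP) is commonly estimated as the diastolic pressure plus one third of the pulse pressure, and the initial target is a reduction of no more than about 25% from that estimate. The Python sketch below shows only the arithmetic; it is not treatment guidance, and the function names are invented for this example.

```python
def mean_arterial_pressure(systolic: float, diastolic: float) -> float:
    # Common bedside estimate: MAP is about DBP + (SBP - DBP) / 3
    return diastolic + (systolic - diastolic) / 3.0


def initial_map_target(systolic: float, diastolic: float,
                       max_fractional_drop: float = 0.25) -> float:
    # Initial goal described above: lower MAP by no more than about 25%
    return mean_arterial_pressure(systolic, diastolic) * (1.0 - max_fractional_drop)


# Example: presenting blood pressure of 220/130 mmHg
current_map = mean_arterial_pressure(220, 130)   # about 160 mmHg
target_map = initial_map_target(220, 130)        # about 120 mmHg (a 25% reduction)
print(f"MAP ~{current_map:.0f} mmHg, initial MAP floor ~{target_map:.0f} mmHg")
```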
Prognosis Severe hypertension is a serious and potentially life-threatening medical condition. It is estimated that people who do not receive appropriate treatment only live an average of about three years after the event. The morbidity and mortality of hypertensive emergencies depend on the extent of end-organ dysfunction at the time of presentation and the degree to which blood pressure is controlled afterward. With good blood pressure control and medication compliance, the 5-year survival rate of patients with hypertensive crises approaches 55%. The risk of developing a life-threatening disease affecting the heart or brain increases as the blood pressure rises. Commonly, ischemic heart disease and stroke are the causes of death in patients with severe hypertension. It is estimated that for every 20 mm Hg systolic or 10 mm Hg diastolic increase in blood pressure above 115/75 mm Hg, the mortality rate for both ischemic heart disease and stroke doubles. Consequences of hypertensive emergency result after prolonged elevations in blood pressure and associated end-organ dysfunction. Acute end-organ damage may occur, affecting the neurological, cardiovascular, kidney, or other organ systems. Examples of neurological damage include hypertensive encephalopathy, cerebral vascular accident/cerebral infarction, subarachnoid hemorrhage, and intracranial bleeding. Cardiovascular system damage can include myocardial ischemia/infarction, acute left ventricular dysfunction, acute pulmonary edema, and aortic dissection. Other end-organ damage can include acute kidney failure or insufficiency, retinopathy, eclampsia, and microangiopathic hemolytic anemia. Epidemiology In 2000, it was estimated that 1 billion people worldwide had hypertension, making it one of the most prevalent conditions in the world. Approximately 60 million Americans have chronic hypertension, with about 1% of these individuals having an episode of hypertensive urgency. In emergency departments and clinics around the U.S., the prevalence of hypertensive urgency is suspected to be between 3 and 5%. About 25% of hypertensive crises presenting to the emergency department have been found to be hypertensive emergencies rather than urgencies. Risk factors for hypertensive emergency include age, obesity, noncompliance with antihypertensive medications, female sex, Caucasian race, preexisting diabetes or coronary artery disease, mental illness, and sedentary lifestyle. Several studies have concluded that African Americans have a greater incidence of hypertension and a greater morbidity and mortality from hypertensive disease than non-Hispanic whites; however, hypertensive crises have a greater incidence in Caucasians. Although severe hypertension is more common in the elderly, it may occur in children (though very rarely), likely due to metabolic or hormonal dysfunction. In 2014, a systematic review identified women as having slightly higher risks of developing hypertensive crises than men. With the use of antihypertensives, the rate of hypertensive emergencies has declined from 7% to 1% of patients with hypertensive urgency. 16% of patients presenting with hypertensive emergency have no known history of hypertension. See also Hypertensive retinopathy Hypertensive encephalopathy Preeclampsia Eclampsia Aortic dissection Intracranial hemorrhage References External links Hypertension Medical emergencies
Hyperthermia
Hyperthermia, also known simply as overheating, is a condition in which an individual's body temperature is elevated beyond normal due to failed thermoregulation. The person's body produces or absorbs more heat than it dissipates. When extreme temperature elevation occurs, it becomes a medical emergency requiring immediate treatment to prevent disability or death. Almost half a million deaths are recorded every year from hyperthermia. The most common causes include heat stroke and adverse reactions to drugs. Heat stroke is an acute temperature elevation caused by exposure to excessive heat, or a combination of heat and humidity, that overwhelms the heat-regulating mechanisms of the body. The latter is a relatively rare side effect of many drugs, particularly those that affect the central nervous system. Malignant hyperthermia is a rare complication of some types of general anesthesia. Hyperthermia can also be caused by a traumatic brain injury. Hyperthermia differs from fever in that the body's temperature set point remains unchanged. The opposite is hypothermia, which occurs when the temperature drops below that required to maintain normal metabolism. The term is from Greek ὑπέρ, hyper, meaning "above", and θέρμος, thermos, meaning "heat". Classification In humans, hyperthermia is defined as a temperature greater than 37.5–38.3 °C (99.5–100.9 °F), depending on the reference used, that occurs without a change in the body's temperature set point. The normal human body temperature can be as high as 37.7 °C (99.9 °F) in the late afternoon. Hyperthermia requires an elevation from the temperature that would otherwise be expected. Such elevations range from mild to extreme; body temperatures above 40 °C (104 °F) can be life-threatening. Signs and symptoms An early stage of hyperthermia can be "heat exhaustion" (or "heat prostration" or "heat stress"), whose symptoms can include heavy sweating, rapid breathing and a fast, weak pulse. If the condition progresses to heat stroke, then hot, dry skin is typical as blood vessels dilate in an attempt to increase heat loss. An inability to cool the body through perspiration may cause dry skin. Hyperthermia from neurological disease may include little or no sweating, cardiovascular problems, and confusion or delirium. Other signs and symptoms vary. Accompanying dehydration can produce nausea, vomiting, headaches, and low blood pressure, and the latter can lead to fainting or dizziness, especially if the standing position is assumed quickly. In severe heat stroke, confusion and aggressive behavior may be observed. Heart rate and respiration rate will increase (tachycardia and tachypnea) as blood pressure drops and the heart attempts to maintain adequate circulation. The decrease in blood pressure can then cause blood vessels to contract reflexively, resulting in a pale or bluish skin color in advanced cases. Young children, in particular, may have seizures. Eventually, organ failure, unconsciousness and death will result. Causes Heat stroke occurs when thermoregulation is overwhelmed by a combination of excessive metabolic production of heat (exertion), excessive environmental heat, and insufficient or impaired heat loss, resulting in an abnormally high body temperature. In severe cases, temperatures can exceed 40 °C (104 °F). Heat stroke may be non-exertional (classic) or exertional. Exertional Significant physical exertion in hot conditions can generate heat beyond the ability to cool, because, in addition to the heat, humidity of the environment may reduce the efficiency of the body's normal cooling mechanisms.
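The temperature cutoffs in the classification above lend themselves to a simple illustration. The Python sketch below is a toy example rather than a diagnostic tool; the specific cutoffs used (38.3 °C as one commonly cited lower bound and 40 °C as the life-threatening range) and all names are assumptions made for this example.

```python
def classify_core_temperature(temp_c: float) -> str:
    """Toy classifier for illustration only; not a diagnostic tool.

    Cutoffs assumed for this example, following the classification above:
    38.3 C as one commonly used lower bound for hyperthermia and 40 C as the
    range at which it becomes life-threatening. Distinguishing hyperthermia
    from fever also requires history (the set point is unchanged), not the
    temperature reading alone.
    """
    if temp_c >= 40.0:
        return "severe hyperthermia (medical emergency)"
    if temp_c >= 38.3:
        return "hyperthermia"
    return "within or near the normal range"


for reading in (37.0, 38.6, 40.5):
    print(reading, "->", classify_core_temperature(reading))
```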
Human heat-loss mechanisms are limited primarily to sweating (which dissipates heat by evaporation, assuming sufficiently low humidity) and vasodilation of skin vessels (which dissipates heat by convection proportional to the temperature difference between the body and its surroundings, according to Newton's law of cooling). Other factors, such as insufficient water intake, consuming alcohol, or lack of air conditioning, can worsen the problem. The increase in body temperature that results from a breakdown in thermoregulation affects the body biochemically. Enzymes involved in metabolic pathways within the body such as cellular respiration fail to work effectively at higher temperatures, and further increases can lead them to denature, reducing their ability to catalyse essential chemical reactions. This loss of enzymatic control affects the functioning of major organs with high energy demands such as the heart and brain. Loss of fluid and electrolytes cause heat cramps – slow muscular contraction and severe muscular spasm lasting between one and three minutes. Almost all cases of heat cramps involve vigorous physical exertion. Body temperature may remain normal or a little higher than normal and cramps are concentrated in heavily used muscles. Situational Situational heat stroke occurs in the absence of exertion. It mostly affects the young and elderly. In the elderly in particular, it can be precipitated by medications that reduce vasodilation and sweating, such as anticholinergic drugs, antihistamines, and diuretics. In this situation, the body's tolerance for high environmental temperature may be insufficient, even at rest. Heat waves are often followed by a rise in the death rate, and these 'classical hyperthermia' deaths typically involve the elderly and infirm. This is partly because thermoregulation involves cardiovascular, respiratory and renal systems which may be inadequate for the additional stress because of the existing burden of aging and disease, further compromised by medications. During the July 1995 heatwave in Chicago, there were at least 700 heat-related deaths. The strongest risk factors were being confined to bed, and living alone, while the risk was reduced for those with working air conditioners and those with access to transportation. Even then, reported deaths may be underestimated as diagnosis can be mis-classified as stroke or heart attack. Drugs Some drugs cause excessive internal heat production. The rate of drug-induced hyperthermia is higher where use of these drugs is higher. Many psychotropic medications, such as selective serotonin reuptake inhibitors (SSRIs), monoamine oxidase inhibitors (MAOIs), and tricyclic antidepressants, can cause hyperthermia. Serotonin syndrome is a rare adverse reaction to overdose of these medications or the use of several simultaneously. Similarly, neuroleptic malignant syndrome is an uncommon reaction to neuroleptic agents. These syndromes are differentiated by other associated symptoms, such as tremor in serotonin syndrome and "lead-pipe" muscle rigidity in neuroleptic malignant syndrome. Recreational drugs such as amphetamines and cocaine, PCP, dextromethorphan, LSD, and MDMA may cause hyperthermia. Malignant hyperthermia is a rare reaction to common anesthetic agents (such as halothane) or the paralytic agent succinylcholine. Those who have this reaction, which is potentially fatal, have a genetic predisposition. 
The use of anticholinergics, more specifically muscarinic antagonists are thought to cause mild hyperthermic episodes due to its parasympatholytic effects. The sympathetic nervous system, also known as the "fight-or-flight response", dominates by raising catecholamine levels by the blocked action of the "rest and digest system". Drugs that decouple oxidative phosphorylation may also cause hyperthermia. From this group of drugs the most well-known is 2,4-dinitrophenol which was used as a weight loss drug until dangers from its use became apparent. Personal protective equipment Those working in industry, in the military, or as first responders may be required to wear personal protective equipment (PPE) against hazards such as chemical agents, gases, fire, small arms and improvised explosive devices (IEDs). PPE includes a range of hazmat suits, firefighting turnout gear, body armor and bomb suits, among others. Depending on design, the wearer may be encapsulated in a microclimate, due to an increase in thermal resistance and decrease in vapor permeability. As physical work is performed, the body's natural thermoregulation (i.e. sweating) becomes ineffective. This is compounded by increased work rates, high ambient temperature and humidity levels, and direct exposure to the sun. The net effect is that desired protection from some environmental threats inadvertently increases the threat of heat stress. The effect of PPE on hyperthermia has been noted in fighting the 2014 Ebola virus epidemic in Western Africa. Doctors and healthcare workers were only able to work for 40 minutes at a time in their protective suits, fearing heat stroke. Other Other rare causes of hyperthermia include thyrotoxicosis and an adrenal gland tumor, called pheochromocytoma, both of which can cause increased heat production. Damage to the central nervous system from brain hemorrhage, traumatic brain injury, status epilepticus, and other kinds of injury to the hypothalamus can also cause hyperthermia. Pathophysiology A fever occurs when the core temperature is set higher, through the action of the pre-optic region of the anterior hypothalamus. For example, in response to a bacterial or viral infection, certain white blood cells within the blood will release pyrogens which have a direct effect on the anterior hypothalamus, causing body temperature to rise, much like raising the temperature setting on a thermostat. In contrast, hyperthermia occurs when the body temperature rises without a change in the heat control centers. Some of the gastrointestinal symptoms of acute exertional heatstroke, such as vomiting, diarrhea, and gastrointestinal bleeding, may be caused by barrier dysfunction and subsequent endotoxemia. Ultraendurance athletes have been found to have significantly increased plasma endotoxin levels. Endotoxin stimulates many inflammatory cytokines, which in turn may cause multiorgan dysfunction. Experimentally, monkeys treated with oral antibiotics prior to induction of heat stroke do not become endotoxemic. There is scientific support for the concept of a temperature set point; that is, maintenance of an optimal temperature for the metabolic processes that life depends on. Nervous activity in the preoptic-anterior hypothalamus of the brain triggers heat losing (sweating, etc.) or heat generating (shivering and muscle contraction, etc.) activities through stimulation of the autonomic nervous system. 
The pre-optic anterior hypothalamus has been shown to contain warm-sensitive, cool-sensitive, and temperature-insensitive neurons that determine the body's temperature set point. As the temperature these neurons are exposed to rises above the set point, the rate of electrical discharge of the warm-sensitive neurons increases progressively; cold-sensitive neurons increase their rate of electrical discharge progressively as the temperature falls below it. Diagnosis Hyperthermia is generally diagnosed by the combination of unexpectedly high body temperature and a history that supports hyperthermia instead of a fever. Most commonly this means that the elevated temperature has occurred in a hot, humid environment (heat stroke) or in someone taking a drug for which hyperthermia is a known side effect (drug-induced hyperthermia). The presence of signs and symptoms related to hyperthermia syndromes, such as extrapyramidal symptoms characteristic of neuroleptic malignant syndrome, and the absence of signs and symptoms more commonly related to infection-related fevers, are also considered in making the diagnosis. If fever-reducing drugs lower the body temperature, even if the temperature does not return entirely to normal, then hyperthermia is excluded. Prevention When ambient temperature is excessive, humans and many other animals cool themselves below ambient by evaporative cooling of sweat (or other aqueous liquid; saliva in dogs, for example); this helps prevent potentially fatal hyperthermia. The effectiveness of evaporative cooling depends upon humidity. Wet-bulb temperature, which takes humidity into account, or more complex calculated quantities such as wet-bulb globe temperature (WBGT), which also takes solar radiation into account, give useful indications of the degree of heat stress and are used by several agencies as the basis for heat-stress prevention guidelines. (Wet-bulb temperature is essentially the lowest skin temperature attainable by evaporative cooling at a given ambient temperature and humidity.) A sustained wet-bulb temperature exceeding 35 °C (95 °F) is likely to be fatal even to fit and healthy people unclothed in the shade next to a fan; at this temperature, environmental heat gain instead of loss occurs. To date, wet-bulb temperatures have only very rarely exceeded this level anywhere, although significant global warming may change this. In cases of heat stress caused by physical exertion, hot environments, or protective equipment, prevention or mitigation by frequent rest breaks, careful hydration, and monitoring body temperature should be attempted. However, in situations demanding prolonged exposure to a hot environment or the wearing of protective equipment, a personal cooling system is required as a matter of health and safety. There are a variety of active or passive personal cooling systems; these can be categorized by their power sources and whether they are person- or vehicle-mounted. Because of the broad variety of operating conditions, these devices must meet specific requirements concerning their rate and duration of cooling, their power source, and their adherence to health and safety regulations. Among other criteria are the user's need for physical mobility and autonomy. For example, active-liquid systems operate by chilling water and circulating it through a garment; the skin surface area is thereby cooled through conduction. This type of system has proven successful in certain military, law enforcement, and industrial applications.
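The wet-bulb globe temperature mentioned above combines natural wet-bulb, globe, and dry-bulb readings with fixed weightings, commonly 0.7/0.2/0.1 outdoors in sunlight and 0.7/0.3 indoors or without a solar load. The Python sketch below illustrates that arithmetic under those assumed weightings; it is not an occupational heat-safety tool, and the function names are invented for this example.

```python
def wbgt_outdoor(natural_wet_bulb: float, globe: float, dry_bulb: float) -> float:
    # Standard outdoor (solar-exposed) weighting of wet-bulb globe temperature
    return 0.7 * natural_wet_bulb + 0.2 * globe + 0.1 * dry_bulb


def wbgt_indoor(natural_wet_bulb: float, globe: float) -> float:
    # Indoor / no-solar-load variant omits the dry-bulb term
    return 0.7 * natural_wet_bulb + 0.3 * globe


# Example: hot, humid, sunny conditions (all temperatures in degrees Celsius)
print(f"WBGT outdoors: {wbgt_outdoor(28.0, 45.0, 36.0):.1f} C")
print(f"WBGT indoors:  {wbgt_indoor(28.0, 35.0):.1f} C")
```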
Bomb-disposal technicians wearing special suits to protect against improvised explosive devices (IEDs) use a small, ice-based chiller unit that is strapped to one leg; a liquid-circulating garment, usually a vest, is worn over the torso to maintain a safe core body temperature. By contrast, soldiers traveling in combat vehicles can face extreme microclimate temperatures and require a multiple-user, vehicle-powered cooling system with rapid connection capabilities. Requirements for hazmat teams, the medical community, and workers in heavy industry vary further. Treatment The underlying cause must be removed. Mild hyperthermia caused by exertion on a hot day may be adequately treated through self-care measures, such as increased water consumption and resting in a cool place. Hyperthermia that results from drug exposure requires prompt cessation of that drug, and occasionally the use of other drugs as countermeasures. Antipyretics (e.g., acetaminophen, aspirin, other nonsteroidal anti-inflammatory drugs) have no role in the treatment of heatstroke: antipyretics act by interrupting the change in the hypothalamic set point caused by pyrogens, so they are not expected to work on a healthy hypothalamus that has been overloaded, as in the case of heatstroke. In this situation, antipyretics actually may be harmful in patients who develop hepatic, hematologic, and renal complications because they may aggravate bleeding tendencies. When body temperature is significantly elevated, mechanical cooling methods are used to remove heat and to restore the body's ability to regulate its own temperature. Passive cooling techniques, such as resting in a cool, shady area and removing clothing, can be applied immediately. Active cooling methods, such as sponging the head, neck, and trunk with cool water, remove heat from the body and thereby speed the body's return to normal temperatures. When methods such as immersion are impractical, misting the body with water and using a fan have also been shown to be effective. Sitting in a bathtub of tepid or cool water (immersion method) can remove a significant amount of heat in a relatively short period of time. It was once thought that immersion in very cold water is counterproductive, as it causes vasoconstriction in the skin and thereby prevents heat from escaping the body core. However, a British analysis of various studies stated: "this has never been proven experimentally. Indeed, a recent study using normal volunteers has shown that cooling rates were fastest when the coldest water was used." The analysis concluded that iced water immersion is the most-effective cooling technique for exertional heat stroke. No superior cooling method has been found for non-exertional heat stroke. Thus, aggressive ice-water immersion remains the gold standard for life-threatening heat stroke. When the body temperature reaches about 40 °C (104 °F), or if the affected person is unconscious or showing signs of confusion, hyperthermia is considered a medical emergency that requires treatment in a proper medical facility. Cardiopulmonary resuscitation (CPR) may be necessary if the person goes into cardiac arrest (cessation of the heartbeat). In a hospital, more aggressive cooling measures are available, including intravenous hydration, gastric lavage with iced saline, and even hemodialysis to cool the blood. Epidemiology Hyperthermia affects those who are unable to regulate their body heat, mainly due to environmental conditions.
The main risk factor for hyperthermia is an impaired ability to sweat. People who are dehydrated or who are older may not produce the sweat they need to regulate their body temperature. High heat conditions can put certain groups at risk for hyperthermia, including physically active individuals, soldiers, construction workers, landscapers, and factory workers. Some people who do not have access to cooler living conditions, such as people of lower socioeconomic status, may have a difficult time coping with the heat. People are at risk for hyperthermia during high heat and dry conditions, most commonly seen in the summer. Various cases of different types of hyperthermia have been reported. A research study published in March 2019 looked into multiple case reports of drug-induced hyperthermia. The study concluded that psychotropic drugs such as antipsychotics, antidepressants, and anxiolytics were associated with increased heat-related mortality compared with the other drugs researched (anticholinergics, diuretics, cardiovascular agents, etc.). A different study, published in June 2019, examined the association between hyperthermia in older adults and temperatures in the United States. Hospitalization records of elderly patients in the US between 1991 and 2006 were analyzed, and cases of hyperthermia were found to be highest in regions with arid climates. The study found a disproportionately high number of cases of hyperthermia in early seasonal heat waves, indicating that people were not yet practicing proper techniques to stay cool and prevent overheating in the early presence of warm, dry weather. In urban areas, people have an increased susceptibility to hyperthermia due to a phenomenon called the urban heat island effect. Since the 20th century in the United States, the north-central region (Ohio, Indiana, Illinois, Missouri, Iowa, and Nebraska) has had the highest morbidity resulting from hyperthermia; northeastern states had the next highest. The regions least affected by heat wave-related deaths from hyperthermia were the Southern and Pacific Coastal states. Northern cities in the United States are at greater risk of hyperthermia during heat waves because people tend to have a lower minimum mortality temperature at higher latitudes. In contrast, cities at lower latitudes within the continental US typically have higher thresholds for ambient temperatures. In India, hundreds die every year from summer heat waves, including more than 2,500 in the year 2015. Later that same summer, the 2015 Pakistani heat wave killed about 2,000 people. An extreme 2003 European heat wave caused tens of thousands of deaths. Causes of hyperthermia include dehydration, use of certain medications, use of cocaine and amphetamines, and excessive alcohol use. Body temperatures greater than 37.5–38.3 °C, depending on the reference used, may be diagnosed as hyperthermia. As body temperatures increase or excessive body temperatures persist, individuals are at a heightened risk of developing progressive conditions. More serious complications of hyperthermia include heat stroke, organ malfunction, organ failure, and death. There are two forms of heat stroke: classical heatstroke and exertional heatstroke. Classical heatstroke occurs from extreme environmental conditions, such as heat waves. Those who are most commonly affected by classical heatstroke are the very young, the elderly, or the chronically ill. Exertional heatstroke appears in individuals after vigorous physical activity.
Exertional heatstroke occurs most commonly in healthy people aged 15–50 years. Sweating is often present in exertional heatstroke. The associated mortality rate of heatstroke is 40 to 64%. Research Hyperthermia can also be deliberately induced using drugs or medical devices, and is being studied and applied in routine clinical practice as a treatment for some kinds of cancer. Research has shown that medically controlled hyperthermia can shrink tumours. This occurs when a high body temperature damages cancerous cells by destroying proteins and structures within each cell. Hyperthermia has also been investigated for its ability to make cancerous tumours more susceptible to radiation treatment, which has allowed it to be used to complement other forms of cancer therapy. Various techniques of hyperthermia in the treatment of cancer include local or regional hyperthermia, as well as whole-body techniques. See also Effects of climate change on human health Space blanket References External links Tips to Beat the Heat—American Red Cross Extreme Heat—CDC Emergency Preparedness and Response Workplace Safety and Health Topics: Heat Stress—CDC and NIOSH Excessive Heat Events Guidebook—US EPA Physiological Responses to Exercise in the Heat—US National Academies Causes of death Heat waves Medical emergencies Weather and health Physiology Thermoregulation
Coagulation
Coagulation, also known as clotting, is the process by which blood changes from a liquid to a gel, forming a blood clot. It results in hemostasis, the cessation of blood loss from a damaged vessel, followed by repair. The process of coagulation involves activation, adhesion and aggregation of platelets, as well as deposition and maturation of fibrin. Coagulation begins almost instantly after an injury to the endothelium that lines a blood vessel. Exposure of blood to the subendothelial space initiates two processes: changes in platelets, and the exposure of subendothelial tissue factor to coagulation factor VII, which ultimately leads to cross-linked fibrin formation. Platelets immediately form a plug at the site of injury; this is called primary hemostasis. Secondary hemostasis occurs simultaneously: additional coagulation factors beyond factor VII (described below) respond in a cascade to form fibrin strands, which strengthen the platelet plug. Coagulation is highly conserved throughout biology. In all mammals, coagulation involves both cellular components (platelets) and proteinaceous components (coagulation or clotting factors). The pathway in humans has been the most extensively researched and is the best understood. Disorders of coagulation can result in problems with hemorrhage, bruising, or thrombosis. List of coagulation factors There are 13 traditional clotting factors, conventionally numbered with Roman numerals and named in the sections below, along with other substances necessary for coagulation. Physiology Physiology of blood coagulation is based on hemostasis, the normal bodily process that stops bleeding. Coagulation is a part of an integrated series of haemostatic reactions, involving plasma, platelet, and vascular components. Hemostasis consists of four main stages: Vasoconstriction (vasospasm or vascular spasm): contraction of the smooth muscle in the tunica media layer of the blood vessel wall. Activation of platelets and platelet plug formation: Platelet activation: Platelet activators, such as platelet activating factor and thromboxane A2, activate platelets in the bloodstream, leading to attachment of platelets' membrane receptors (e.g. glycoprotein IIb/IIIa) to extracellular matrix proteins (e.g. von Willebrand factor) on cell membranes of damaged endothelial cells and exposed collagen at the site of injury. Platelet plug formation: The adhered platelets aggregate and form a temporary plug to stop bleeding. This process is often called "primary hemostasis". Coagulation cascade: a series of enzymatic reactions that lead to the formation of a stable blood clot. The endothelial cells release substances like tissue factor, which triggers the extrinsic pathway of the coagulation cascade. This is called "secondary hemostasis". Fibrin clot formation: Near the end of the extrinsic pathway, after thrombin completes conversion of fibrinogen into fibrin, factor XIIIa (plasma transglutaminase; activated form of fibrin-stabilizing factor) promotes fibrin cross-linking, and subsequent stabilization of fibrin, leading to the formation of a fibrin clot (final blood clot), which temporarily seals the wound to allow wound healing until its inner part is dissolved by fibrinolytic enzymes, while the clot's outer part is shed off. After the fibrin clot is formed, clot retraction occurs and then clot resolution starts; these two processes are together called "tertiary hemostasis".
Activated platelets contract their internal actin and myosin fibrils in their cytoskeleton, which leads to shrinkage of the clot volume. Plasminogen activators, such as tissue plasminogen activator (t-PA), activate plasminogen into plasmin, which promotes lysis of the fibrin clot; this restores the flow of blood in the damaged/obstructed blood vessels. Vasoconstriction When there is an injury to a blood vessel, the endothelial cells can release various vasoconstrictor substances, such as endothelin and thromboxane, to induce the constriction of the smooth muscles in the vessel wall. This helps reduce blood flow to the site of injury and limits bleeding. Platelet activation and platelet plug formation When the endothelium is damaged, the normally isolated underlying collagen is exposed to circulating platelets, which bind directly to collagen with collagen-specific glycoprotein Ia/IIa surface receptors. This adhesion is strengthened further by von Willebrand factor (vWF), which is released from the endothelium and from platelets; vWF forms additional links between the platelets' glycoprotein Ib/IX/V and A1 domain. This localization of platelets to the extracellular matrix promotes collagen interaction with platelet glycoprotein VI. Binding of collagen to glycoprotein VI triggers a signaling cascade that results in activation of platelet integrins. Activated integrins mediate tight binding of platelets to the extracellular matrix. This process adheres platelets to the site of injury. Activated platelets release the contents of stored granules into the blood plasma. The granules include ADP, serotonin, platelet-activating factor (PAF), vWF, platelet factor 4, and thromboxane A2 (TXA2), which, in turn, activate additional platelets. The granules' contents activate a Gq-linked protein receptor cascade, resulting in increased calcium concentration in the platelets' cytosol. The calcium activates protein kinase C, which, in turn, activates phospholipase A2 (PLA2). PLA2 then modifies the integrin membrane glycoprotein IIb/IIIa, increasing its affinity to bind fibrinogen. The activated platelets change shape from spherical to stellate, and the fibrinogen cross-links with glycoprotein IIb/IIIa aid in aggregation of adjacent platelets, forming a platelet plug and thereby completing primary hemostasis). Coagulation cascade The coagulation cascade of secondary hemostasis has two initial pathways which lead to fibrin formation. These are the contact activation pathway (also known as the intrinsic pathway), and the tissue factor pathway (also known as the extrinsic pathway), which both lead to the same fundamental reactions that produce fibrin. It was previously thought that the two pathways of coagulation cascade were of equal importance, but it is now known that the primary pathway for the initiation of blood coagulation is the tissue factor (extrinsic) pathway. The pathways are a series of reactions, in which a zymogen (inactive enzyme precursor) of a serine protease and its glycoprotein co-factor are activated to become active components that then catalyze the next reaction in the cascade, ultimately resulting in cross-linked fibrin. Coagulation factors are generally indicated by Roman numerals, with a lowercase a appended to indicate an active form. The coagulation factors are generally enzymes called serine proteases, which act by cleaving downstream proteins. The exceptions are tissue factor, FV, FVIII, FXIII. Tissue factor, FV and FVIII are glycoproteins, and Factor XIII is a transglutaminase. 
The coagulation factors circulate as inactive zymogens. The coagulation cascade is therefore classically divided into three pathways. The tissue factor and contact activation pathways both activate the "final common pathway" of factor X, thrombin and fibrin. Tissue factor pathway (extrinsic) The main role of the tissue factor (TF) pathway is to generate a "thrombin burst", a process by which thrombin, the most important constituent of the coagulation cascade in terms of its feedback activation roles, is released very rapidly. FVIIa circulates in a higher amount than any other activated coagulation factor. The process includes the following steps: Following damage to the blood vessel, FVII leaves the circulation and comes into contact with tissue factor expressed on tissue-factor-bearing cells (stromal fibroblasts and leukocytes), forming an activated complex (TF-FVIIa). TF-FVIIa activates FIX and FX. FVII is itself activated by thrombin, FXIa, FXII, and FXa. The activation of FX (to form FXa) by TF-FVIIa is almost immediately inhibited by tissue factor pathway inhibitor (TFPI). FXa and its co-factor FVa form the prothrombinase complex, which activates prothrombin to thrombin. Thrombin then activates other components of the coagulation cascade, including FV and FVIII (which forms a complex with FIX), and activates and releases FVIII from being bound to vWF. FVIIIa is the co-factor of FIXa, and together they form the "tenase" complex, which activates FX; and so the cycle continues. ("Tenase" is a contraction of "ten" and the suffix "-ase" used for enzymes.) Contact activation pathway (intrinsic) The contact activation pathway begins with formation of the primary complex on collagen by high-molecular-weight kininogen (HMWK), prekallikrein, and FXII (Hageman factor). Prekallikrein is converted to kallikrein and FXII becomes FXIIa. FXIIa converts FXI into FXIa. Factor XIa activates FIX, which with its co-factor FVIIIa form the tenase complex, which activates FX to FXa. The minor role that the contact activation pathway has in initiating blood clot formation can be illustrated by the fact that individuals with severe deficiencies of FXII, HMWK, and prekallikrein do not have a bleeding disorder. Instead, contact activation system seems to be more involved in inflammation, and innate immunity. Despite this, interference with the pathway may confer protection against thrombosis without a significant bleeding risk. Final common pathway The division of coagulation in two pathways is arbitrary, originating from laboratory tests in which clotting times were measured either after the clotting was initiated by glass, the intrinsic pathway; or clotting was initiated by thromboplastin (a mix of tissue factor and phospholipids), the extrinsic pathway. Further, the final common pathway scheme implies that prothrombin is converted to thrombin only when acted upon by the intrinsic or extrinsic pathways, which is an oversimplification. In fact, thrombin is generated by activated platelets at the initiation of the platelet plug, which in turn promotes more platelet activation. Thrombin functions not only to convert fibrinogen to fibrin, it also activates Factors VIII and V and their inhibitor protein C (in the presence of thrombomodulin). By activating Factor XIII, covalent bonds are formed that crosslink the fibrin polymers that form from activated monomers. This stabilizes the fibrin network. 
The coagulation cascade is maintained in a prothrombotic state by the continued activation of FVIII and FIX to form the tenase complex until it is down-regulated by the anticoagulant pathways. Cell-based scheme of coagulation A newer model of coagulation mechanism explains the intricate combination of cellular and biochemical events that occur during the coagulation process in vivo. Along with the procoagulant and anticoagulant plasma proteins, normal physiologic coagulation requires the presence of two cell types for formation of coagulation complexes: cells that express tissue factor (usually extravascular) and platelets. The coagulation process occurs in two phases. First is the initiation phase, which occurs in tissue-factor-expressing cells. This is followed by the propagation phase, which occurs on activated platelets. The initiation phase, mediated by the tissue factor exposure, proceeds via the classic extrinsic pathway and contributes to about 5% of thrombin production. The amplified production of thrombin occurs via the classic intrinsic pathway in the propagation phase; about 95% of thrombin generated will be during this second phase. Fibrinolysis Eventually, blood clots are reorganized and resorbed by a process termed fibrinolysis. The main enzyme responsible for this process is plasmin, which is regulated by plasmin activators and plasmin inhibitors. Role in immune system The coagulation system overlaps with the immune system. Coagulation can physically trap invading microbes in blood clots. Also, some products of the coagulation system can contribute to the innate immune system by their ability to increase vascular permeability and act as chemotactic agents for phagocytic cells. In addition, some of the products of the coagulation system are directly antimicrobial. For example, beta-lysine, an amino acid produced by platelets during coagulation, can cause lysis of many Gram-positive bacteria by acting as a cationic detergent. Many acute-phase proteins of inflammation are involved in the coagulation system. In addition, pathogenic bacteria may secrete agents that alter the coagulation system, e.g. coagulase and streptokinase. Immunohemostasis is the integration of immune activation into adaptive clot formation. Immunothrombosis is the pathological result of crosstalk between immunity, inflammation, and coagulation. Mediators of this process include damage-associated molecular patterns and pathogen-associated molecular patterns, which are recognized by toll-like receptors, triggering procoagulant and proinflammatory responses such as formation of neutrophil extracellular traps. Cofactors Various substances are required for the proper functioning of the coagulation cascade: Calcium and phospholipids Calcium and phospholipids (constituents of platelet membrane) are required for the tenase and prothrombinase complexes to function. Calcium mediates the binding of the complexes via the terminal gamma-carboxy residues on Factor Xa and Factor IXa to the phospholipid surfaces expressed by platelets, as well as procoagulant microparticles or microvesicles shed from them. Calcium is also required at other points in the coagulation cascade. Calcium ions play a major role in the regulation of coagulation cascade that is paramount in the maintenance of hemostasis. Other than platelet activation, calcium ions are responsible for complete activation of several coagulation factors, including coagulation Factor XIII. 
Vitamin K Vitamin K is an essential factor to the hepatic gamma-glutamyl carboxylase that adds a carboxyl group to glutamic acid residues on factors II, VII, IX and X, as well as Protein S, Protein C and Protein Z. In adding the gamma-carboxyl group to glutamate residues on the immature clotting factors, Vitamin K is itself oxidized. Another enzyme, Vitamin K epoxide reductase (VKORC), reduces vitamin K back to its active form. Vitamin K epoxide reductase is pharmacologically important as a target of anticoagulant drugs warfarin and related coumarins such as acenocoumarol, phenprocoumon, and dicumarol. These drugs create a deficiency of reduced vitamin K by blocking VKORC, thereby inhibiting maturation of clotting factors. Vitamin K deficiency from other causes (e.g., in malabsorption) or impaired vitamin K metabolism in disease (e.g., in liver failure) lead to the formation of PIVKAs (proteins formed in vitamin K absence), which are partially or totally non-gamma carboxylated, affecting the coagulation factors' ability to bind to phospholipid. Regulators Several mechanisms keep platelet activation and the coagulation cascade in check. Abnormalities can lead to an increased tendency toward thrombosis: Protein C and Protein S Protein C is a major physiological anticoagulant. It is a vitamin K-dependent serine protease enzyme that is activated by thrombin into activated protein C (APC). Protein C is activated in a sequence that starts with Protein C and thrombin binding to a cell surface protein thrombomodulin. Thrombomodulin binds these proteins in such a way that it activates Protein C. The activated form, along with protein S and a phospholipid as cofactors, degrades FVa and FVIIIa. Quantitative or qualitative deficiency of either (protein C or protein S) may lead to thrombophilia (a tendency to develop thrombosis). Impaired action of Protein C (activated Protein C resistance), for example by having the "Leiden" variant of Factor V or high levels of FVIII, also may lead to a thrombotic tendency. Antithrombin Antithrombin is a serine protease inhibitor (serpin) that degrades the serine proteases: thrombin, FIXa, FXa, FXIa, and FXIIa. It is constantly active, but its adhesion to these factors is increased by the presence of heparan sulfate (a glycosaminoglycan) or the administration of heparins (different heparinoids increase affinity to FXa, thrombin, or both). Quantitative or qualitative deficiency of antithrombin (inborn or acquired, e.g., in proteinuria) leads to thrombophilia. Tissue factor pathway inhibitor (TFPI) Tissue factor pathway inhibitor (TFPI) limits the action of tissue factor (TF). It also inhibits excessive TF-mediated activation of FVII and FX. Plasmin Plasmin is generated by proteolytic cleavage of plasminogen, a plasma protein synthesized in the liver. This cleavage is catalyzed by tissue plasminogen activator (t-PA), which is synthesized and secreted by endothelium. Plasmin proteolytically cleaves fibrin into fibrin degradation products that inhibit excessive fibrin formation. Prostacyclin Prostacyclin (PGI2) is released by endothelium and activates platelet Gs protein-linked receptors. This, in turn, activates adenylyl cyclase, which synthesizes cAMP. cAMP inhibits platelet activation by decreasing cytosolic levels of calcium and, by doing so, inhibits the release of granules that would lead to activation of additional platelets and the coagulation cascade. 
Medical assessment Numerous medical tests are used to assess the function of the coagulation system: Common: aPTT, PT (also used to determine INR), fibrinogen testing (often by the Clauss fibrinogen assay), platelet count, platelet function testing (often by PFA-100), thrombodynamics test. Other: TCT, bleeding time, mixing test (whether an abnormality corrects if the patient's plasma is mixed with normal plasma), coagulation factor assays, antiphospholipid antibodies, D-dimer, genetic tests (e.g. factor V Leiden, prothrombin mutation G20210A), dilute Russell's viper venom time (dRVVT), miscellaneous platelet function tests, thromboelastography (TEG or Sonoclot), euglobulin lysis time (ELT). The contact activation (intrinsic) pathway is initiated by activation of the contact activation system, and can be measured by the activated partial thromboplastin time (aPTT) test. The tissue factor (extrinsic) pathway is initiated by release of tissue factor (a specific cellular lipoprotein), and can be measured by the prothrombin time (PT) test. PT results are often reported as ratio (INR value) to monitor dosing of oral anticoagulants such as warfarin. The quantitative and qualitative screening of fibrinogen is measured by the thrombin clotting time (TCT). Measurement of the exact amount of fibrinogen present in the blood is generally done using the Clauss fibrinogen assay. Many analysers are capable of measuring a "derived fibrinogen" level from the graph of the Prothrombin time clot. If a coagulation factor is part of the contact activation or tissue factor pathway, a deficiency of that factor will affect only one of the tests: Thus hemophilia A, a deficiency of factor VIII, which is part of the contact activation pathway, results in an abnormally prolonged aPTT test but a normal PT test. Deficiencies of common pathway factors prothrombin, fibrinogen, FX, and FV will prolong both aPTT and PT. If an abnormal PT or aPTT is present, additional testing will occur to determine which (if any) factor is present as aberrant concentrations. Deficiencies of fibrinogen (quantitative or qualitative) will prolong PT, aPTT, thrombin time, and reptilase time. Role in disease Coagulation defects may cause hemorrhage or thrombosis, and occasionally both, depending on the nature of the defect. Platelet disorders Platelet disorders are either congenital or acquired. Examples of congenital platelet disorders are Glanzmann's thrombasthenia, Bernard–Soulier syndrome (abnormal glycoprotein Ib-IX-V complex), gray platelet syndrome (deficient alpha granules), and delta storage pool deficiency (deficient dense granules). Most are rare. They predispose to hemorrhage. Von Willebrand disease is due to deficiency or abnormal function of von Willebrand factor, and leads to a similar bleeding pattern; its milder forms are relatively common. Decreased platelet numbers (thrombocytopenia) is due to insufficient production (e.g., myelodysplastic syndrome or other bone marrow disorders), destruction by the immune system (immune thrombocytopenic purpura), or consumption (e.g., thrombotic thrombocytopenic purpura, hemolytic-uremic syndrome, paroxysmal nocturnal hemoglobinuria, disseminated intravascular coagulation, heparin-induced thrombocytopenia). An increase in platelet count is called thrombocytosis, which may lead to formation of thromboembolisms; however, thrombocytosis may be associated with increased risk of either thrombosis or hemorrhage in patients with myeloproliferative neoplasm. 
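The screening-test logic described above, in which a contact activation (intrinsic) pathway deficiency prolongs the aPTT, a tissue factor (extrinsic) pathway deficiency prolongs the PT, and a common pathway deficiency prolongs both, can be captured in a small lookup. The Python sketch below is a didactic simplification using the pathway membership given in the text; the sets and function names are invented for this example, and contact factors such as prekallikrein and high-molecular-weight kininogen are omitted.

```python
# Didactic simplification of the screening-test logic described above.
INTRINSIC = {"XII", "XI", "IX", "VIII"}   # contact activation pathway factors
EXTRINSIC = {"VII"}                       # tissue factor pathway factor
COMMON = {"X", "V", "II", "I"}            # prothrombin is II, fibrinogen is I


def prolonged_screening_tests(deficient_factor: str) -> list:
    """Return which screening tests are expected to be prolonged.

    aPTT reflects the contact activation (intrinsic) plus common pathways;
    PT reflects the tissue factor (extrinsic) plus common pathways.
    """
    tests = []
    if deficient_factor in INTRINSIC or deficient_factor in COMMON:
        tests.append("aPTT")
    if deficient_factor in EXTRINSIC or deficient_factor in COMMON:
        tests.append("PT")
    return tests


print(prolonged_screening_tests("VIII"))  # hemophilia A -> ['aPTT'] (PT normal)
print(prolonged_screening_tests("VII"))   # factor VII deficiency -> ['PT']
print(prolonged_screening_tests("X"))     # common pathway -> ['aPTT', 'PT']
```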
Coagulation factor disorders The best-known coagulation factor disorders are the hemophilias. The three main forms are hemophilia A (factor VIII deficiency), hemophilia B (factor IX deficiency or "Christmas disease") and hemophilia C (factor XI deficiency, mild bleeding tendency). Von Willebrand disease (which behaves more like a platelet disorder except in severe cases) is the most common hereditary bleeding disorder and may be inherited in an autosomal dominant or recessive manner. In this disease, there is a defect in von Willebrand factor (vWF), which mediates the binding of glycoprotein Ib (GPIb) to collagen. This binding helps mediate the activation of platelets and the formation of primary hemostasis. In acute or chronic liver failure, there is insufficient production of coagulation factors, possibly increasing the risk of bleeding during surgery. Thrombosis is the pathological development of blood clots. These clots may break free and become mobile, forming an embolus, or may grow to such a size that they occlude the vessel in which they developed. An embolism is said to occur when the thrombus (blood clot) becomes a mobile embolus and migrates to another part of the body, interfering with blood circulation and hence impairing organ function downstream of the occlusion. This causes ischemia and often leads to ischemic necrosis of tissue. Most cases of venous thrombosis are due to acquired states (older age, surgery, cancer, immobility). Unprovoked venous thrombosis may be related to inherited thrombophilias (e.g., factor V Leiden, antithrombin deficiency, and various other genetic deficiencies or variants), particularly in younger patients with a family history of thrombosis; however, thrombotic events are more likely when acquired risk factors are superimposed on the inherited state. Pharmacology Procoagulants Adsorbent chemicals, such as zeolites, and other hemostatic agents are used to seal severe injuries quickly (such as traumatic bleeding secondary to gunshot wounds). Thrombin and fibrin glue are used surgically to treat bleeding and to thrombose aneurysms. The hemostatic powder spray TC-325 is used to treat gastrointestinal bleeding. Desmopressin is used to improve platelet function by activating arginine vasopressin receptor 1A. Coagulation factor concentrates are used to treat hemophilia, to reverse the effects of anticoagulants, and to treat bleeding in people with impaired coagulation factor synthesis or increased consumption. Prothrombin complex concentrate, cryoprecipitate and fresh frozen plasma are commonly used coagulation factor products. Recombinant activated human factor VII is sometimes used in the treatment of major bleeding. Tranexamic acid and aminocaproic acid inhibit fibrinolysis and lead to a de facto reduced bleeding rate. Before its withdrawal, aprotinin was used in some forms of major surgery to decrease bleeding risk and the need for blood products. Anticoagulants Anticoagulants and anti-platelet agents (together "antithrombotics") are amongst the most commonly used medications. Anti-platelet agents include aspirin, dipyridamole, ticlopidine, clopidogrel, ticagrelor and prasugrel; the parenteral glycoprotein IIb/IIIa inhibitors are used during angioplasty. Of the anticoagulants, warfarin (and related coumarins) and heparin are the most commonly used.
Warfarin affects the vitamin K-dependent clotting factors (II, VII, IX, X) and protein C and protein S, whereas heparin and related compounds increase the action of antithrombin on thrombin and factor Xa. A newer class of drugs, the direct thrombin inhibitors, is under development; some members are already in clinical use (such as lepirudin, argatroban, bivalirudin and dabigatran). Also in clinical use are other small molecular compounds that interfere directly with the enzymatic action of particular coagulation factors (the directly acting oral anticoagulants: dabigatran, rivaroxaban, apixaban, and edoxaban). History Initial discoveries Theories on the coagulation of blood have existed since antiquity. Physiologist Johannes Müller (1801–1858) described fibrin, the substance of a thrombus. Its soluble precursor, fibrinogen, was thus named by Rudolf Virchow (1821–1902), and isolated chemically by Prosper Sylvain Denis (1799–1863). Alexander Schmidt suggested that the conversion from fibrinogen to fibrin is the result of an enzymatic process, and labeled the hypothetical enzyme "thrombin" and its precursor "prothrombin". Arthus discovered in 1890 that calcium was essential in coagulation. Platelets were identified in 1865, and their function was elucidated by Giulio Bizzozero in 1882. The theory that thrombin is generated by the presence of tissue factor was consolidated by Paul Morawitz in 1905. At this stage, it was known that thrombokinase/thromboplastin (factor III) is released by damaged tissues, reacting with prothrombin (II), which, together with calcium (IV), forms thrombin, which converts fibrinogen into fibrin (I). Coagulation factors The remainder of the biochemical factors in the process of coagulation were largely discovered in the 20th century. A first clue as to the actual complexity of the system of coagulation was the discovery of proaccelerin (initially and later called Factor V) by Paul Owren (1905–1990) in 1947. He also postulated its function to be the generation of accelerin (Factor VI), which later turned out to be the activated form of V (or Va); hence, VI is not now in active use. Factor VII (also known as serum prothrombin conversion accelerator or proconvertin, precipitated by barium sulfate) was discovered in a young female patient in 1949 and 1951 by different groups. Factor VIII turned out to be deficient in the clinically recognized but etiologically elusive hemophilia A; it was identified in the 1950s and is alternatively called antihemophilic globulin due to its capability to correct hemophilia A. Factor IX was discovered in 1952 in a young patient with hemophilia B named Stephen Christmas (1947–1993). His deficiency was described by Dr. Rosemary Biggs and Professor R.G. MacFarlane in Oxford, UK. The factor is, hence, called Christmas Factor. Christmas lived in Canada and campaigned for blood transfusion safety until succumbing to transfusion-related AIDS at age 46. An alternative name for the factor is plasma thromboplastin component, given by an independent group in California. Hageman factor, now known as factor XII, was identified in 1955 in an asymptomatic patient with a prolonged clotting time named John Hageman. Factor X, or Stuart-Prower factor, followed in 1956. This protein was identified in a Ms. Audrey Prower of London, who had a lifelong bleeding tendency. In 1957, an American group identified the same factor in a Mr. Rufus Stuart. Factors XI and XIII were identified in 1953 and 1961, respectively.
The view that the coagulation process is a "cascade" or "waterfall" was enunciated almost simultaneously in 1964 by MacFarlane in the UK and by Davie and Ratnoff in the US. Nomenclature The usage of Roman numerals rather than eponyms or systematic names was agreed upon during annual conferences (starting in 1955) of hemostasis experts. In 1962, consensus was achieved on the numbering of factors I–XII. This committee evolved into the present-day International Committee on Thrombosis and Hemostasis (ICTH). Assignment of numerals ceased in 1963 after the naming of Factor XIII. The names Fletcher Factor and Fitzgerald Factor were given to further coagulation-related proteins, namely prekallikrein and high-molecular-weight kininogen, respectively. Factor VI is unassigned, as accelerin was found to be activated Factor V. Other species All mammals have an extremely closely related blood coagulation process, using a combined cellular and serine protease process. It is possible for any mammalian coagulation factor to "cleave" its equivalent target in any other mammal. The only non-mammalian animal known to use serine proteases for blood coagulation is the horseshoe crab. Exemplifying the close links between coagulation and inflammation, the horseshoe crab has a primitive response to injury, carried out by cells known as amoebocytes (or hemocytes) which serve both hemostatic and immune functions. See also Agglutination (biology) Antihemorrhagic Post-vaccination embolic and thrombotic events
Apoplexy
Apoplexy refers to the rupture of an internal organ and the associated symptoms. Informally or metaphorically, the term apoplexy is associated with being furious, especially as "apoplectic". Historically, it described what is now known as a hemorrhagic stroke, involving a ruptured blood vessel in the brain; modern medicine typically specifies the anatomical location of the bleeding, such as cerebral, ovarian, or pituitary. Historical meaning From the late 14th to the late 19th century, apoplexy referred to any sudden death that began with abrupt loss of consciousness, especially when the victim died within seconds after losing consciousness. The word apoplexy was sometimes used to refer to the symptom of sudden loss of consciousness immediately preceding death. Strokes, ruptured aortic aneurysms, and even heart attacks were referred to as apoplexy in the past, because before the advent of modern medical science there was limited ability to differentiate abnormal conditions and diseased states. Although physiology as a medical field dates back at least to the time of Hippocrates, until the late 19th century physicians often had inadequate or inaccurate understandings of many of the human body's normal functions and abnormal presentations. Hence, identifying a specific cause of a symptom or of death often proved difficult or impossible. Hemorrhage To specify the site of bleeding, the term "apoplexy" is often accompanied by a descriptive adjective. For instance, bleeding within the pituitary gland is termed "pituitary apoplexy," and bleeding within the adrenal glands is referred to as "adrenal apoplexy." In pituitary apoplexy, for example, the term covers both the hemorrhage within the gland and the accompanying neurological problems such as confusion, headache, and impairment of consciousness. See also Transient ischemic attack
Nutrient
A nutrient is a substance used by an organism to survive, grow and reproduce. The requirement for dietary nutrient intake applies to animals, plants, fungi and protists. Nutrients can be incorporated into cells for metabolic purposes or excreted by cells to create non-cellular structures such as hair, scales, feathers, or exoskeletons. Some nutrients can be metabolically converted into smaller molecules in the process of releasing energy such as for carbohydrates, lipids, proteins and fermentation products (ethanol or vinegar) leading to end-products of water and carbon dioxide. All organisms require water. Essential nutrients for animals are the energy sources, some of the amino acids that are combined to create proteins, a subset of fatty acids, vitamins and certain minerals. Plants require more diverse minerals absorbed through roots, plus carbon dioxide and oxygen absorbed through leaves. Fungi live on dead or living organic matter and meet nutrient needs from their host. Different types of organisms have different essential nutrients. Ascorbic acid (vitamin C) is essential to humans and some animal species but most other animals and many plants are able to synthesize it. Nutrients may be organic or inorganic: organic compounds include most compounds containing carbon, while all other chemicals are inorganic. Inorganic nutrients include nutrients such as iron, selenium, and zinc, while organic nutrients include, protein, fats, sugars and vitamins. A classification used primarily to describe nutrient needs of animals divides nutrients into macronutrients and micronutrients. Consumed in relatively large amounts (grams or ounces), macronutrients (carbohydrates, fats, proteins, water) are primarily used to generate energy or to incorporate into tissues for growth and repair. Micronutrients are needed in smaller amounts (milligrams or micrograms); they have subtle biochemical and physiological roles in cellular processes, like vascular functions or nerve conduction. Inadequate amounts of essential nutrients or diseases that interfere with absorption, result in a deficiency state that compromises growth, survival and reproduction. Consumer advisories for dietary nutrient intakes such as the United States Dietary Reference Intake, are based on the amount required to prevent deficiency and provide macronutrient and micronutrient guides for both lower and upper limits of intake. In many countries, regulations require that food product labels display information about the amount of any macronutrients and micronutrients present in the food in significant quantities. Nutrients in larger quantities than the body needs may have harmful effects. Edible plants also contain thousands of compounds generally called phytochemicals which have unknown effects on disease or health including a diverse class with non-nutrient status called polyphenols which remain poorly understood as of 2024. Types Macronutrients Macronutrients are defined in several ways. The chemical elements humans consume in the largest quantities are carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulphur, summarized as CHNOPS. The chemical compounds that humans consume in the largest quantities and provide bulk energy are classified as carbohydrates, proteins, and fats. Water must be also consumed in large quantities but does not provide caloric value. 
Calcium, sodium, potassium, magnesium, and chloride ions, along with phosphorus and sulfur, are listed with macronutrients because they are required in large quantities compared to micronutrients, i.e., vitamins and other minerals, the latter often described as trace or ultratrace minerals. Macronutrients provide energy: Carbohydrates are compounds made up of types of sugar. Carbohydrates are classified according to their number of sugar units: monosaccharides (such as glucose and fructose), disaccharides (such as sucrose and lactose), oligosaccharides, and polysaccharides (such as starch, glycogen, and cellulose). Proteins are organic compounds that consist of amino acids joined by peptide bonds. Since the body cannot manufacture some of the amino acids (termed essential amino acids), the diet must supply them. Through digestion, proteins are broken down by proteases back into free amino acids. Fats consist of a glycerin molecule with three fatty acids attached. Fatty acid molecules contain a -COOH group attached to unbranched hydrocarbon chains connected by single bonds alone (saturated fatty acids) or by both double and single bonds (unsaturated fatty acids). Fats are needed for construction and maintenance of cell membranes, to maintain a stable body temperature, and to sustain the health of skin and hair. Because the body does not manufacture certain fatty acids (termed essential fatty acids), they must be obtained through one's diet. Ethanol is not an essential nutrient, but it does provide calories. The United States Department of Agriculture uses a figure of approximately 7 kcal (29 kJ) per gram of alcohol (about 5.5 kcal per ml) for calculating food energy. For distilled spirits, a standard serving in the U.S. is 1.5 US fluid ounces (about 44 ml), which at 40% ethanol (80 proof) would be 14 grams and 98 calories. Micronutrients Micronutrients are essential dietary elements required in varying quantities throughout life to serve metabolic and physiological functions. Dietary minerals, such as potassium, sodium, and iron, are elements native to Earth, and cannot be synthesized. They are required in the diet in microgram or milligram amounts. As plants obtain minerals from the soil, dietary minerals derive directly from plants consumed or indirectly from edible animal sources. Vitamins are organic compounds required in microgram or milligram amounts. The importance of each dietary vitamin was first established when it was determined that a disease would develop if that vitamin was absent from the diet. Essentiality Essential nutrients An essential nutrient is a nutrient required for normal physiological function that cannot be synthesized in the body – either at all or in sufficient quantities – and thus must be obtained from a dietary source. Apart from water, which is universally required for the maintenance of homeostasis in mammals, essential nutrients are indispensable for various cellular metabolic processes and for the maintenance and function of tissues and organs. The nutrients considered essential for humans comprise nine amino acids, two fatty acids, thirteen vitamins, fifteen minerals and choline. In addition, there are several molecules that are considered conditionally essential nutrients since they are indispensable in certain developmental and pathological states. Amino acids An essential amino acid is an amino acid that is required by an organism but cannot be synthesized de novo by it, and therefore must be supplied in its diet.
Out of the twenty standard protein-producing amino acids, nine cannot be endogenously synthesized by humans: phenylalanine, valine, threonine, tryptophan, methionine, leucine, isoleucine, lysine, and histidine. Fatty acids Essential fatty acids (EFAs) are fatty acids that humans and other animals must ingest because the body requires them for good health but cannot synthesize them. Only two fatty acids are known to be essential for humans: alpha-linolenic acid (an omega-3 fatty acid) and linoleic acid (an omega-6 fatty acid). Vitamins and vitamers Vitamins occur in a variety of related forms known as vitamers. The vitamers of a given vitamin perform the functions of that vitamin and prevent symptoms of deficiency of that vitamin. Vitamins are those essential organic molecules that are not classified as amino acids or fatty acids. They commonly function as enzymatic cofactors, metabolic regulators or antioxidants. Humans require thirteen vitamins in their diet, most of which are actually groups of related molecules (e.g. vitamin E includes tocopherols and tocotrienols): vitamins A, C, D, E, K, thiamine (B1), riboflavin (B2), niacin (B3), pantothenic acid (B5), pyridoxine (B6), biotin (B7), folate (B9), and cobalamin (B12). The requirement for vitamin D is conditional, as people who get sufficient exposure to ultraviolet light, either from the sun or an artificial source, synthesize vitamin D in the skin. Minerals Minerals are the exogenous chemical elements indispensable for life. Although the four elements: carbon, hydrogen, oxygen, and nitrogen (CHON) are essential for life, they are so plentiful in food and drink that these are not considered nutrients and there are no recommended intakes for these as minerals. The need for nitrogen is addressed by requirements set for protein, which is composed of nitrogen-containing amino acids. Sulfur is essential, but again does not have a recommended intake. Instead, recommended intakes are identified for the sulfur-containing amino acids methionine and cysteine. The essential nutrient trace elements for humans, listed in order of Recommended Dietary Allowance (expressed as a mass), are potassium, chloride, sodium, calcium, phosphorus, magnesium, iron, zinc, manganese, copper, iodine, chromium, molybdenum, and selenium. Additionally, cobalt is a component of Vitamin B12 which is essential. There are other minerals which are essential for some plants and animals, but may or may not be essential for humans, such as boron and silicon. Choline Choline is an essential nutrient. The cholines are a family of water-soluble quaternary ammonium compounds. Choline is the parent compound of the cholines class, consisting of ethanolamine having three methyl substituents attached to the amino function. Healthy humans fed artificially composed diets that are deficient in choline develop fatty liver, liver damage, and muscle damage. Choline was not initially classified as essential because the human body can produce choline in small amounts through phosphatidylcholine metabolism. Conditionally essential Conditionally essential nutrients are certain organic molecules that can normally be synthesized by an organism, but under certain conditions in insufficient quantities. In humans, such conditions include premature birth, limited nutrient intake, rapid growth, and certain disease states. Inositol, taurine, arginine, glutamine and nucleotides are classified as conditionally essential and are particularly important in neonatal diet and metabolism. 
Non-essential Non-essential nutrients are substances within foods that can have a significant impact on health. Dietary fiber is not absorbed in the human digestive tract. Soluble fiber is metabolized to butyrate and other short-chain fatty acids by bacteria residing in the large intestine. Soluble fiber is marketed as serving a prebiotic function with claims for promoting "healthy" intestinal bacteria. Non-nutrients Ethanol (C2H5OH) is not an essential nutrient, but it does supply approximately 7 kcal (29 kJ) of food energy per gram. For spirits (vodka, gin, rum, etc.) a standard serving in the United States is 1.5 US fluid ounces (about 44 ml), which at 40% ethanol (80 proof) would be 14 grams of ethanol and about 98 kcal; at 50% alcohol, 17.5 g and about 122 kcal (a worked example of this arithmetic appears at the end of this article). Wine and beer contain a similar amount of ethanol per standard serving (5 and 12 US fluid ounces, respectively), but these beverages also contribute to food energy intake from components other than ethanol; a serving of wine contains roughly 120 kcal and a serving of regular beer roughly 150 kcal. According to the U.S. Department of Agriculture, based on NHANES 2013–2014 surveys, women ages 20 and up consume on average 6.8 grams of alcohol per day and men consume on average 15.5 grams per day. Ignoring the non-alcohol contribution of those beverages, the average ethanol contributions to daily food energy intake are approximately 48 and 108 kcal, respectively. Alcoholic beverages are considered empty calorie foods because, while providing energy, they contribute no essential nutrients. By definition, phytochemicals include all nutritional and non-nutritional components of edible plants. Included as nutritional constituents are provitamin A carotenoids, whereas those without nutrient status are diverse polyphenols, flavonoids, resveratrol, and lignans that are present in numerous plant foods. Some phytochemical compounds are under preliminary research for their potential effects on human diseases and health. However, the qualification for nutrient status of compounds with poorly defined properties in vivo is that they must first be defined with a Dietary Reference Intake level to enable accurate food labeling, a condition not established for most phytochemicals that are claimed to provide antioxidant benefits. Deficiencies and toxicity See also: Vitamin, Mineral (nutrient), Protein (nutrient). An inadequate amount of a nutrient is a deficiency. Deficiencies can be due to several causes, including an inadequacy in nutrient intake, called a dietary deficiency, or any of several conditions that interfere with the utilization of a nutrient within an organism. Some of the conditions that can interfere with nutrient utilization include problems with nutrient absorption, substances that cause a greater-than-normal need for a nutrient, conditions that cause nutrient destruction, and conditions that cause greater nutrient excretion. Nutrient toxicity occurs when excess consumption of a nutrient does harm to an organism. In the United States and Canada, recommended dietary intake levels of essential nutrients are based on the minimum level that "will maintain a defined level of nutriture in an individual", a definition somewhat different from that used by the World Health Organization and Food and Agriculture Organization of a "basal requirement to indicate the level of intake needed to prevent pathologically relevant and clinically detectable signs of a dietary inadequacy". In setting human nutrient guidelines, government organizations do not necessarily agree on amounts needed to avoid deficiency or maximum amounts to avoid the risk of toxicity.
For example, for vitamin C, recommended intakes range from 40 mg/day in India to 155 mg/day for the European Union. The table below shows U.S. Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for vitamins and minerals, PRIs for the European Union (same concept as RDAs), followed by what three government organizations deem to be the safe upper intake. RDAs are set higher than EARs to cover people with higher-than-average needs. Adequate Intakes (AIs) are set when there is insufficient information to establish EARs and RDAs. Countries establish tolerable upper intake levels, also referred to as upper limits (ULs), based on amounts that cause adverse effects. Governments are slow to revise information of this nature. For the U.S. values, except calcium and vitamin D, all data date from 1997 to 2004. * The daily recommended amounts of niacin and magnesium are higher than the tolerable upper limit because, for both nutrients, the ULs identify the amounts which will not increase risk of adverse effects when the nutrients are consumed as a serving of a dietary supplement. Magnesium supplementation above the UL may cause diarrhea. Supplementation with niacin above the UL may cause flushing of the face and a sensation of body warmth. Each country or regional regulatory agency decides on a safety margin below when symptoms may occur, so the ULs may differ based on source. EAR U.S. Estimated Average Requirements. RDA U.S. Recommended Dietary Allowances; higher for adults than for children, and may be even higher for women who are pregnant or lactating. AI U.S. Adequate Intake; AIs established when there is not sufficient information to set EARs and RDAs. PRI Population Reference Intake is European Union equivalent of RDA; higher for adults than for children, and may be even higher for women who are pregnant or lactating. For Thiamin and Niacin, the PRIs are expressed as amounts per megajoule (239 kilocalories) of food energy consumed. Upper Limit Tolerable upper intake levels. ND ULs have not been determined. NE EARs, PRIs or AIs have not yet been established or will not be (EU does not consider chromium an essential nutrient). Plant Plant nutrients consist of more than a dozen minerals absorbed through roots, plus carbon dioxide and oxygen absorbed or released through leaves. All organisms obtain all their nutrients from the surrounding environment. Plants absorb carbon, hydrogen, and oxygen from air and soil as carbon dioxide and water. Other nutrients are absorbed from soil (exceptions include some parasitic or carnivorous plants). Counting these, there are 17 important nutrients for plants: these are macronutrients; nitrogen (N), phosphorus (P), potassium (K), calcium (Ca), sulfur (S), magnesium (Mg), carbon (C), oxygen(O) and hydrogen (H), and the micronutrients; iron (Fe), boron (B), chlorine (Cl), manganese (Mn), zinc (Zn), copper (Cu), molybdenum (Mo) and nickel (Ni). In addition to carbon, hydrogen, and oxygen, nitrogen, phosphorus, and sulfur are also needed in relatively large quantities. Together, the "Big Six" are the elemental macronutrients for all organisms. They are sourced from inorganic matter (for example, carbon dioxide, water, nitrates, phosphates, sulfates, and diatomic molecules of nitrogen and, especially, oxygen) and organic matter (carbohydrates, lipids, proteins). See also References External links USDA. 
Dietary Reference Intakes
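The alcohol-energy figures quoted in the macronutrient and non-nutrient sections above can be reproduced with simple arithmetic. The following Python sketch is illustrative only: the ethanol density of about 0.789 g/ml is an assumption not stated in the article, and the rounded value of about 7 kcal per gram of ethanol is the figure used in the text.

# Illustrative sketch: reproducing the ethanol energy figures quoted above.
KCAL_PER_GRAM_ETHANOL = 7.0        # rounded per-gram energy value used in the text
ETHANOL_DENSITY_G_PER_ML = 0.789   # assumed density of ethanol, not stated in the article

def ethanol_grams(serving_ml, abv):
    """Grams of ethanol in a serving of the given volume and alcohol-by-volume fraction."""
    return serving_ml * abv * ETHANOL_DENSITY_G_PER_ML

def ethanol_kcal(serving_ml, abv):
    """Approximate food energy (kcal) contributed by the ethanol in a serving."""
    return ethanol_grams(serving_ml, abv) * KCAL_PER_GRAM_ETHANOL

# A standard US serving of distilled spirits: 1.5 US fl oz (about 44 ml) at 40% ABV
print(f"{ethanol_grams(44, 0.40):.1f} g ethanol, {ethanol_kcal(44, 0.40):.0f} kcal")  # ~14 g, ~97 kcal

# Average daily ethanol intake from the NHANES 2013-2014 figures quoted above
for label, grams_per_day in (("women 20+", 6.8), ("men 20+", 15.5)):
    print(f"{label}: about {grams_per_day * KCAL_PER_GRAM_ETHANOL:.0f} kcal/day from ethanol")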
Fluid balance
Fluid balance is an aspect of the homeostasis of organisms in which the amount of water in the organism needs to be controlled, via osmoregulation and behavior, such that the concentrations of electrolytes (salts in solution) in the various body fluids are kept within healthy ranges. The core principle of fluid balance is that the amount of water lost from the body must equal the amount of water taken in; for example, in humans, the output (via respiration, perspiration, urination, defecation, and expectoration) must equal the input (via eating and drinking, or by parenteral intake). Euvolemia is the state of normal body fluid volume, including blood volume, interstitial fluid volume, and intracellular fluid volume; hypovolemia and hypervolemia are imbalances. Water is necessary for all life on Earth. Humans can survive for 4 to 6 weeks without food but only for a few days without water. Profuse sweating can increase the need for electrolyte replacement. Water-electrolyte imbalance produces headache and fatigue if mild; illness if moderate, and sometimes even death if severe. For example, water intoxication (which results in hyponatremia), the process of consuming too much water too quickly, can be fatal. Deficits to body water result in volume contraction and dehydration. Diarrhea is a threat to both body water volume and electrolyte levels, which is why diseases that cause diarrhea are great threats to fluid balance. Implications Water consumption Medical use Effects of illness When a person is ill, fluid may also be lost through vomiting, diarrhea, and hemorrhage. An individual is at an increased risk of dehydration in these instances, as the kidneys will find it more difficult to match fluid loss by reducing urine output (the kidneys must produce at least some urine in order to excrete metabolic waste.) Oral rehydration therapy Oral rehydration therapy (ORT), is type of fluid replacement used as a treatment for dehydration. In an acute hospital setting, fluid balance is monitored carefully. This provides information on the patient's state of hydration, kidney function and cardiovascular function. If fluid loss is greater than fluid gain (for example if the patient vomits and has diarrhea), the patient is said to be in negative fluid balance. In this case, fluid is often given intravenously to compensate for the loss. On the other hand, a positive fluid balance (where fluid gain is greater than fluid loss) might suggest a problem with either the kidney or cardiovascular system. If blood pressure is low (hypotension), the filtration rate in the kidneys will lessen, causing less fluid reabsorption and thus less urine output. An accurate measure of fluid balance is therefore an important diagnostic tool, and allows for prompt intervention to correct the imbalance. Routes of fluid loss and gain Fluid can leave the body in many ways. Fluid can enter the body as preformed water, ingested food and drink and to a lesser extent as metabolic water which is produced as a by-product of aerobic respiration (cellular respiration) and dehydration synthesis. Input A constant supply is needed to replenish the fluids lost through normal physiological activities, such as respiration, sweating and urination. Water generated from the biochemical metabolism of nutrients provides a significant proportion of the daily water requirements for some arthropods and desert animals, but provides only a small fraction of a human's necessary intake. 
In the normal resting state, input of water through ingested fluids is approximately 1200 ml/day, from ingested foods 1000 ml/day and from aerobic respiration 300 ml/day, totaling 2500 ml/day. Regulation of input Input of water is regulated mainly through ingested fluids, which, in turn, depends on thirst. An insufficiency of water results in an increased osmolarity in the extracellular fluid. This is sensed by osmoreceptors in the organum vasculosum of the lamina terminalis, which trigger thirst. Thirst can to some degree be voluntarily resisted, as during fluid restriction. The human kidneys will normally adjust to varying levels of water intake. The kidneys will require time to adjust to the new water intake level. This can cause someone who drinks a lot of water to become dehydrated more easily than someone who routinely drinks less. Output The majority of fluid output occurs via the urine, approximately 1500 ml/day (approx 1.59 qt/day) in the normal adult resting state. Some fluid is lost through perspiration (part of the body's temperature control mechanism) and as water vapor in exhaled air. These are termed "insensible fluid losses" as they cannot be easily measured. Some sources say insensible losses account for 500 to 650 ml/day (0.5 to 0.6 qt.) of water in adults, while other sources put the minimum value at 800 ml (0.8 qt.). In children, one calculation used for insensible fluid loss is 400 ml/m2 body surface area. In addition, an adult loses approximately 100 ml/day of fluid through feces. For females, an additional 50 ml/day is lost through vaginal secretions. These outputs are in balance with the input of ~2500 ml/day. Regulation of output The body's homeostatic control mechanisms, which maintain a constant internal environment, ensure that a balance between fluid gain and fluid loss is maintained. The anti-diuretic hormones vasopressin (ADH) and aldosterone play a major role in this. If the body is becoming fluid-deficient, there will be an increase in the secretion of these hormones, causing fluid to be retained by the kidneys and urine output to be reduced. Conversely, if fluid levels are excessive, secretion of these hormones is suppressed, resulting in less retention of fluid by the kidneys and a subsequent increase in the volume of urine produced. Antidiuretic hormone If the body is becoming fluid-deficient, this will be sensed by osmoreceptors in the vascular organ of lamina terminalis and subfornical organ. These areas project to the supraoptic nucleus and paraventricular nucleus, which contain neurons that secrete the antidiuretic hormone, vasopressin, from their nerve endings in the posterior pituitary. Thus, there will be an increase in the secretion of antidiuretic hormone, causing fluid to be retained by the kidneys and urine output to be reduced. Aldosterone A fluid-insufficiency causes a decreased perfusion of the juxtaglomerular apparatus in the kidneys. This activates the renin–angiotensin system. Among other actions, it causes renal tubules (i.e. the distal convoluted tubules and the cortical collecting ducts) to reabsorb more sodium and water from the urine. Potassium is secreted into the tubule in exchange for the sodium, which is reabsorbed. The activated renin–angiotensin system stimulates the zona glomerulosa of the adrenal cortex which in turn secretes the hormone aldosterone. This hormone stimulates the reabsorption of sodium ions from distal tubules and collecting ducts. 
Water in the tubular lumen cannot follow the sodium reabsorption osmotically, as this part of the kidney is impermeable to water; release of ADH (vasopressin) is required to increase expression of aquaporin channels in the cortical collecting duct, allowing reabsorption of water. See also Drinking water
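The approximate resting-state intake and output figures quoted above can be tallied to confirm that they balance at roughly 2500 ml/day. The following Python sketch is illustrative only; the single figure used here for insensible losses is an assumed value chosen so that the totals balance, consistent with the ranges quoted above, and the pediatric function simply restates the 400 ml/m2 estimate mentioned in the text.

# Illustrative tally of the approximate adult resting-state fluid balance (ml/day).
intake_ml_per_day = {
    "ingested fluids": 1200,
    "ingested food": 1000,
    "metabolic water (aerobic respiration)": 300,
}

output_ml_per_day = {
    "urine": 1500,
    "insensible losses (skin and lungs)": 850,  # assumed value within the quoted ranges so the totals balance
    "feces": 100,
    "vaginal secretions (females)": 50,
}

def pediatric_insensible_loss_ml(body_surface_area_m2):
    """Estimate of insensible fluid loss in children: 400 ml per square metre of body surface area."""
    return 400.0 * body_surface_area_m2

total_in = sum(intake_ml_per_day.values())
total_out = sum(output_ml_per_day.values())
print(f"intake:  {total_in} ml/day")   # 2500
print(f"output:  {total_out} ml/day")  # 2500
print(f"balance: {total_in - total_out} ml/day")
print(f"child with 1.0 m2 BSA: about {pediatric_insensible_loss_ml(1.0):.0f} ml/day insensible loss")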
Paraneoplastic syndrome
A paraneoplastic syndrome is a syndrome (a set of signs and symptoms) that is the consequence of a tumor in the body (usually a cancerous one). It is specifically due to the production of chemical signaling molecules (such as hormones or cytokines) by tumor cells or by an immune response against the tumor. Unlike a mass effect, it is not due to the local presence of cancer cells. Paraneoplastic syndromes are typical among middle-aged to older people, and they most commonly occur with cancers of the lung, breast, ovaries or lymphatic system (a lymphoma). Sometimes, the symptoms of paraneoplastic syndromes show before the diagnosis of a malignancy, which has been hypothesized to relate to the disease pathogenesis. In this paradigm, tumor cells express tissue-restricted antigens (e.g., neuronal proteins), triggering an anti-tumor immune response which may be partially or, rarely, completely effective in suppressing tumor growth and symptoms. Patients then come to clinical attention when this tumor immune response breaks immune tolerance and begins to attack the normal tissue expressing that (e.g., neuronal) protein. The abbreviation PNS is sometimes used for paraneoplastic syndrome, although it is used more often to refer to the peripheral nervous system. Signs and symptoms Symptomatic features of paraneoplastic syndromes fall into four main categories: endocrine, neurological, mucocutaneous, and hematological. The most common presentation is a fever (release of endogenous pyrogens often related to lymphokines or tissue pyrogens), but the overall clinical picture may closely mimic that of more common benign conditions. Endocrine The following diseases manifest by means of endocrine dysfunction: Cushing syndrome, syndrome of inappropriate antidiuretic hormone, hypercalcemia, hypoglycemia, carcinoid syndrome, and hyperaldosteronism. Neurological The following diseases manifest by means of neurological dysfunction: Lambert–Eaton myasthenic syndrome, paraneoplastic cerebellar degeneration, encephalomyelitis, limbic encephalitis, brainstem encephalitis, opsoclonus myoclonus ataxia syndrome, anti-NMDA receptor encephalitis, and polymyositis. Mucocutaneous The following diseases manifest by means of mucocutaneous dysfunction: acanthosis nigricans, dermatomyositis, Leser-Trélat sign, necrolytic migratory erythema, Sweet's syndrome, florid cutaneous papillomatosis, pyoderma gangrenosum, and acquired generalized hypertrichosis. Mucocutaneous dysfunctions of paraneoplastic syndromes can be seen in cases of itching (hypereosinophilia), immune system depression (latent varicella-zoster virus in sensory ganglia), pancreatic tumors (leading to adipose nodular necrosis of subcutaneous tissues), flushes (prostaglandin secretions), and even dermal melanosis (melanin that cannot be eliminated via urine and results in grey to blue-black skin tones). Hematological The following diseases manifest by means of hematological dysfunction: granulocytosis, polycythemia, Trousseau sign, nonbacterial thrombotic endocarditis, and anemia. Hematological dysfunction of paraneoplastic syndromes can be seen from an increase of erythropoietin (EPO), which may occur in response to hypoxia or ectopic EPO production/altered catabolism. Paraneoplastic erythrocytosis is common with tumors of the liver, kidney, adrenal glands, lung, thymus, and central nervous system (as well as gynecological tumors and myosarcomas).
Other The following diseases manifest by means of physiological dysfunction besides the categories above: membranous glomerulonephritis, tumor-induced osteomalacia, Stauffer syndrome, Neoplastic fever, and thymoma-associated multiorgan autoimmunity. Rheumatologic (hypertrophic osteoarthropathy), renal (secondary kidney amyloidosis and sedimentation of the immunocomplexes in nephrons), and gastrointestinal (production of molecules that affect the motility and secretory activity of the digestive tract) dysfunctions, for example, may relate to paraneoplastic syndromes. Mechanism The mechanism for a paraneoplastic syndrome varies from case to case. However, pathophysiological outcomes usually arise when a tumor does. Paraneoplastic syndrome often occurs alongside associated cancers as a result of an activated immune system. In this scenario, the body may produce antibodies to fight off the tumor by directly binding and destroying the tumor cell. Paraneoplastic disorders may arise in that antibodies would cross-react with normal tissues and destroy them. Diagnosis Diagnostic testing in a possible paraneoplastic syndrome depends on the symptoms and the suspected underlying cancer. Diagnosis may be difficult in patients in whom paraneoplastic antibodies cannot be detected. In the absence of these antibodies, other tests that may be helpful include MRI, PET, lumbar puncture and electrophysiology. Types A specifically devastating form of (neurological) paraneoplastic syndromes is a group of disorders classified as paraneoplastic neurological disorders (PNDs). These PNDs affect the central or peripheral nervous system; some are degenerative, though others (such as LEMS) may improve with treatment of the condition or the tumor. Symptoms of PNDs may include difficulty with walking and balance, dizziness, rapid uncontrolled eye movements, difficulty swallowing, loss of muscle tone, loss of fine motor coordination, slurred speech, memory loss, vision problems, sleep disturbances, dementia, seizures, and sensory loss in the limbs. The most common cancers associated with PNDs are breast, ovarian, and lung cancers, but many other cancers can produce paraneoplastic symptoms, as well. The root cause is extremely difficult to identify for paraneoplastic syndrome, as there are so many ways the disease can manifest (which may eventually lead to cancer). Ideas may relate to age-related diseases (unable to handle environmental or physical stress in combination with genetic pre-dispositions), accumulation of damaged biomolecules (damages signaling pathways in various regions of the body), increased oxygen free radicals in the body (alters metabolic processes in various regions of the body), etc. . However, prophylactic efforts include routine checks with physicians (particularly those that specialize in neurology and oncology) especially when a patient notices subtle changes in his or her own body. Treatment Treatment options include: Therapies to eliminate the underlying cancer, such as chemotherapy, radiation and surgery. Therapies to reduce or slow neurological degeneration. In this scenario, rapid diagnosis and treatment are critical for the patient to have the best chance of recovery. Since these disorders are relatively rare, few doctors have seen or treated paraneoplastic neurological disorders (PNDs). Therefore, PND patients should consult with a specialist with experience in diagnosing and treating paraneoplastic neurological disorders. 
The prognosis for those with paraneoplastic syndromes depends on the individual case, and thus may vary greatly. For example, paraneoplastic pemphigus often involves infection as a major cause of death. Paraneoplastic pemphigus is one of the three major pemphigus subtypes and involves IgG autoantibodies that are characteristically raised against desmoglein 1 and desmoglein 3 (which are cell-cell adhesion molecules found in desmosomes). Underlying cancer or irreversible system impairment, seen in acute heart failure or kidney failure, may result in death as well. Research directions Prostate cancer is the second most common urological malignancy to be associated with paraneoplastic syndromes after renal cell carcinoma. Paraneoplastic syndromes of this nature tend to occur in the setting of late stage and aggressive tumors with poor overall outcomes (endocrine manifestations, neurological entities, dermatological conditions, and other syndromes). A vast majority of prostate cancer cases (over 70%) document paraneoplastic syndrome as a major clinical manifestation of prostate cancer; in under 20%, the syndrome is an initial sign of disease progression to the castrate-resistant state. Urology researchers seek to identify serum markers that are associated with the syndrome in order to specify which types of therapy may work most effectively. Paraneoplastic neurological syndromes may be related to immune checkpoint inhibitors (ICIs), which are one of the underlying causes of inflammatory central nervous system (CNS) diseases. The central aim of such research is to pinpoint treatment strategies, specifically involving ICIs, for improving cancer-related outcomes in the clinical arena. Research suggests that patients who are treated with ICIs are more susceptible to CNS disease (since the mechanism of ICIs induces adverse effects on the CNS due to augmented immune responses and neurotoxicity). The purpose of this exploration was to shed light on immunotherapies and on distinguishing between neurotoxicity and brain metastasis in the early stages of treatment. In other research, scientists have found that paraneoplastic peripheral nerve disorders (autoantibodies linked to multifocal motor neuropathy) may produce important clinical manifestations. This is especially important for patients who experience inflammatory neuropathies, since solid tumors are often associated with peripheral nerve disorders. CV2 autoantibodies, which target dihydropyrimidinase-related protein 5 (DRP5, or CRMP5), are also associated with a variety of paraneoplastic neurological syndromes, including sensorimotor polyneuropathies. Patients undergoing immune therapies or tumor removal respond very well to antibodies that target CASPR2 (to treat nerve hyperexcitability and neuromyotonia).
Physical therapy
Physical therapy (PT), also known as physiotherapy, is a healthcare profession, as well as the care provided by physical therapists who promote, maintain, or restore health through patient education, physical intervention, disease prevention, and health promotion. Physical therapist is the term used for such professionals in the United States, and physiotherapist is the term used in many other countries. The career has many specialties including musculoskeletal, orthopedics, cardiopulmonary, neurology, endocrinology, sports medicine, geriatrics, pediatrics, women's health, wound care and electromyography. PTs practice in many settings, both public and private. In addition to clinical practice, other aspects of physical therapy practice include research, education, consultation, and health administration. Physical therapy is provided as a primary care treatment or alongside, or in conjunction with, other medical services. In some jurisdictions, such as the United Kingdom, physical therapists may have the authority to prescribe medication. Overview Physical therapy addresses the illnesses or injuries that limit a person's abilities to move and perform functional activities in their daily lives. PTs use an individual's history and physical examination to arrive at a diagnosis and establish a management plan and, when necessary, incorporate the results of laboratory and imaging studies like X-rays, CT-scan, or MRI findings. Physical therapists can use sonography to diagnose and manage common musculoskeletal, nerve, and pulmonary conditions. Electrodiagnostic testing (e.g., electromyograms and nerve conduction velocity testing) may also be used. PT management commonly includes prescription of or assistance with specific exercises, manual therapy, and manipulation, mechanical devices such as traction, education, electrophysical modalities which include heat, cold, electricity, sound waves, radiation, assistive devices, prostheses, orthoses, and other interventions. In addition, PTs work with individuals to prevent the loss of mobility before it occurs by developing fitness and wellness-oriented programs for healthier and more active lifestyles, providing services to individuals and populations to develop, maintain, and restore maximum movement and functional ability throughout the lifespan. This includes providing treatment in circumstances where movement and function are threatened by aging, injury, disease, or environmental factors. Functional movement is central to what it means to be healthy. Physical therapy is a professional career that has many specialties including musculoskeletal, orthopedics, cardiopulmonary, neurology, endocrinology, sports medicine, geriatrics, pediatrics, women's health, wound care and electromyography. Neurological rehabilitation is, in particular, a rapidly emerging field. PTs practice in many settings, such as privately-owned physical therapy clinics, outpatient clinics or offices, health and wellness clinics, rehabilitation hospital facilities, skilled nursing facilities, extended care facilities, private homes, education and research centers, schools, hospices, industrial and these workplaces or other occupational environments, fitness centers and sports training facilities. Physical therapists also practice in non-patient care roles such as health policy, health insurance, health care administration and as health care executives. 
Physical therapists are involved in the medical-legal field serving as experts, performing peer review and independent medical examinations. Education varies greatly by country. The span of education ranges from some countries having little formal education to others having doctoral degrees and post-doctoral residencies and fellowships. Regarding its relationship to other healthcare professions, physiotherapy is one of the allied health professions. World Physiotherapy has signed a "memorandum of understanding" with the four other members of the World Health Professions Alliance "to enhance their joint collaboration on protecting and investing in the health workforce to provide safe, quality and equitable care in all settings". History Physicians like Hippocrates and later Galen are believed to have been the first practitioners of physical therapy, advocating massage, manual therapy techniques and hydrotherapy to treat people as early as 460 BC. After the development of orthopedics in the eighteenth century, machines like the Gymnasticon were developed to treat gout and similar diseases by systematic exercise of the joints, similar to later developments in physical therapy. The earliest documented origins of actual physical therapy as a professional group date back to Per Henrik Ling, "Father of Swedish Gymnastics," who founded the Royal Central Institute of Gymnastics (RCIG) in 1813 for manipulation and exercise. Up until 2014, the Swedish word for a physical therapist was sjukgymnast, meaning someone involved in gymnastics for those who are ill, but the title was then changed to fysioterapeut (physiotherapist), the word used in the other Scandinavian countries. In 1887, PTs were given official registration by Sweden's National Board of Health and Welfare. Other countries soon followed. In 1894, four nurses in Great Britain formed the Chartered Society of Physiotherapy. The School of Physiotherapy at the University of Otago in New Zealand was founded in 1913, and in 1914 Reed College in Portland, Oregon, in the United States began graduating "reconstruction aides." Since the profession's inception, spinal manipulative therapy has been a component of the physical therapist practice. Modern physical therapy was established towards the end of the 19th century, driven by events with a global impact that called for rapid advances in physical therapy. Following this, American orthopedic surgeons began treating children with disabilities and employed women trained in physical education and remedial exercise. These treatments were further applied and promoted during the polio outbreak of 1916. During the First World War, women were recruited to work with and restore physical function to injured soldiers, and the field of physical therapy was institutionalized. In 1918 the term "Reconstruction Aide" was used to refer to individuals practicing physical therapy. The first school of physical therapy was established at Walter Reed Army Hospital in Washington, D.C., following the outbreak of World War I. Research catalyzed the physical therapy movement. The first physical therapy research was published in the United States in March 1921 in "The PT Review." In the same year, Mary McMillan organized the American Women's Physical Therapeutic Association (now called the American Physical Therapy Association, APTA). In 1924, the Georgia Warm Springs Foundation promoted the field by touting physical therapy as a treatment for polio. Treatment through the 1940s primarily consisted of exercise, massage, and traction.
Manipulative procedures to the spine and extremity joints began to be practiced, especially in the British Commonwealth countries, in the early 1950s. Around the time polio vaccines were developed, physical therapists became a normal occurrence in hospitals throughout North America and Europe. In the late 1950s, physical therapists started to move beyond hospital-based practice to outpatient orthopedic clinics, public schools, colleges/universities health-centres, geriatric settings (skilled nursing facilities), rehabilitation centers and medical centers. Specialization in physical therapy in the U.S. occurred in 1974, with the Orthopaedic Section of the APTA being formed for those physical therapists specializing in orthopedics. In the same year, the International Federation of Orthopaedic Manipulative Physical Therapists was formed, which has ever since played an important role in advancing manual therapy worldwide. An international organization for the profession is the World Confederation for Physical Therapy (WCPT). It was founded in 1951 and has operated under the brand name World Physiotherapy since 2020. Education Educational criteria for physical therapy providers vary from state to state, country to country, and among various levels of professional responsibility. Most U.S. states have physical therapy practice acts that recognize both physical therapists (PT) and physical therapist assistants (PTA) and some jurisdictions also recognize physical therapy technicians (PT Techs) or aides. Most countries have licensing bodies that require physical therapists to be member of before they can start practicing as independent professionals. Canada The Canadian Alliance of Physiotherapy Regulators (CAPR) offers eligible program graduates to apply for the national Physiotherapy Competency Examination (PCE). Passing the PCE is one of the requirements in most provinces and territories to work as a licensed physiotherapist in Canada. CAPR has members which are physiotherapy regulatory organizations recognized in their respective provinces and territories: Government of Yukon, Consumer Services College of Physical Therapists of British Columbia College of Physiotherapists of Alberta Saskatchewan College of Physical Therapists College of Physiotherapists of Manitoba College of Physiotherapists of Ontario Ordre professionnel de la physiothérapie du Québec College of Physiotherapists of New Brunswick/Collège des physiothérapeutes du Nouveau-Brunswick Nova Scotia College of Physiotherapists Prince Edward Island College of Physiotherapists Newfoundland & Labrador College of Physiotherapists Physiotherapy programs are offered at fifteen universities, often through the university's respective college of medicine. Each of Canada's physical therapy schools has transitioned from three-year Bachelor of Science in Physical Therapy (BScPT) programs that required two years of prerequisite university courses (five-year bachelor's degree) to two-year Master's of Physical Therapy (MPT) programs that require prerequisite bachelor's degrees. The last Canadian university to follow suit was the University of Manitoba, which transitioned to the MPT program in 2012, making the MPT credential the new entry to practice standard across Canada. Existing practitioners with BScPT credentials are not required to upgrade their qualifications. 
In the province of Quebec, prospective physiotherapists are required to have completed a college diploma in either health sciences, which lasts on average two years, or physical rehabilitation technology, which lasts at least three years, to apply to a physiotherapy program or program in university. Following admission, physical therapy students work on a bachelor of science with a major in physical therapy and rehabilitation. The B.Sc. usually requires three years to complete. Students must then enter graduate school to complete a master's degree in physical therapy, which normally requires one and a half to two years of study. Graduates who obtain their M.Sc. must successfully pass the membership examination to become members of the Ordre Professionnel de la physiothérapie du Québec (PPQ). Physiotherapists can pursue their education in such fields as rehabilitation sciences, sports medicine, kinesiology, and physiology. In the province of Quebec, physical rehabilitation therapists are health care professionals who are required to complete a four-year college diploma program in physical rehabilitation therapy and be members of the Ordre Professionnel de la physiothérapie du Québec (OPPQ) to practice legally in the country according to specialist De Van Gerard. Most physical rehabilitation therapists complete their college diploma at Collège Montmorency, Dawson College, or Cégep Marie-Victorin, all situated in and around the Montreal area. After completing their technical college diploma, graduates have the opportunity to pursue their studies at the university level to perhaps obtain a bachelor's degree in physiotherapy, kinesiology, exercise science, or occupational therapy. The Université de Montréal, the Université Laval and the Université de Sherbrooke are among the Québécois universities that admit physical rehabilitation therapists in their programs of study related to health sciences and rehabilitation to credit courses that were completed in college. To date, there are no bridging programs available to facilitate upgrading from the BScPT to the MPT credential. However, research Master's of Science (MSc) and Doctor of Philosophy (Ph.D.) programs are available at every university. Aside from academic research, practitioners can upgrade their skills and qualifications through continuing education courses and curriculums. Continuing education is a requirement of the provincial regulatory bodies. The Canadian Physiotherapy Association offers a curriculum of continuing education courses in orthopedics and manual therapy. The program consists of 5 levels (7 courses) of training with ongoing mentorship and evaluation at each level. The orthopedic curriculum and examinations take a minimum of 4 years to complete. However, upon completion of level 2, physiotherapists can apply to a unique 1-year course-based Master's program in advanced orthopedics and manipulation at the University of Western Ontario to complete their training. This program accepts only 16 physiotherapists annually since 2007. Successful completion of either of these education streams and their respective examinations allows physiotherapists the opportunity to apply to the Canadian Academy of Manipulative Physiotherapy (CAMPT) for fellowship. Fellows of the Canadian Academy of manipulative Physiotherapists (FCAMPT) are considered leaders in the field, having extensive post-graduate education in orthopedics and manual therapy. 
FCAMPT is an internationally recognized credential, as CAMPT is a member of the International Federation of Manipulative Physiotherapists (IFOMPT), a branch of World Physiotherapy (formerly World Confederation of Physical Therapy (WCPT)) and the World Health Organization (WHO). Scotland Physiotherapy degrees are offered at four universities: Edinburgh Napier University in Edinburgh, Robert Gordon University in Aberdeen, Glasgow Caledonian University in Glasgow, and Queen Margaret University in Edinburgh. Students can qualify as physiotherapists by completing a four-year Bachelor of Science degree or a two-year master's degree (if they already have an undergraduate degree in a related field). To use the title 'Physiotherapist', a student must register with the Health and Care Professions Council, a UK-wide regulatory body, on qualifying. Many physiotherapists are also members of the Chartered Society of Physiotherapy (CSP), which provides insurance and professional support. United States The primary physical therapy practitioner is the Physical Therapist (PT) who is trained and licensed to examine, evaluate, diagnose and treat impairment, functional limitations, and disabilities in patients or clients. Physical therapist education curricula in the United States culminate in a Doctor of Physical Therapy (DPT) degree, with some practicing PTs holding a Master of Physical Therapy degree, and some with a Bachelor's degree. The Master of Physical Therapy and Master of Science in Physical Therapy degrees are no longer offered, and the entry-level degree is the Doctor of Physical Therapy degree, which typically takes 3 years after completing a bachelor's degree. PTs who hold a Masters or bachelors in PT are encouraged to get their DPT because APTA's goal is for all PT's to be on a doctoral level. WCPT recommends physical therapist entry-level educational programs be based on university or university-level studies, of a minimum of four years, independently validated and accredited. Curricula in the United States are accredited by the Commission on Accreditation in Physical Therapy Education (CAPTE). According to CAPTE, there are 37,306 students currently enrolled in 294 accredited PT programs in the United States while 10,096 PTA students are currently enrolled in 396 PTA programs in the United States. The physical therapist professional curriculum includes content in the clinical sciences (e.g., content about the cardiovascular, pulmonary, endocrine, metabolic, gastrointestinal, genitourinary, integumentary, musculoskeletal, and neuromuscular systems and the medical and surgical conditions frequently seen by physical therapists). Current training is specifically aimed to enable physical therapists to appropriately recognize and refer non-musculoskeletal diagnoses that may present similarly to those caused by systems not appropriate for physical therapy intervention, which has resulted in direct access to physical therapists in many states. Post-doctoral residency and fellowship education prevalence is increasing steadily with 219 residency, and 42 fellowship programs accredited in 2016. Residencies are aimed to train physical therapists in a specialty such as acute care, cardiovascular & pulmonary, clinical electrophysiology, faculty, geriatrics, neurology, orthopaedics, pediatrics, sports, women's health, and wound care, whereas fellowships train specialists in a subspecialty (e.g. critical care, hand therapy, and division 1 sports), similar to the medical model. 
Residency programs offer eligibility to sit for the specialist certification in their respective area of practice. For example, completion of an orthopedic physical therapy residency, allows its graduates to apply and sit for the clinical specialist examination in orthopedics, achieving the OCS designation upon passing the examination. Board certification of physical therapy specialists is aimed to recognize individuals with advanced clinical knowledge and skill training in their respective area of practice, and exemplifies the trend toward greater education to optimally treat individuals with movement dysfunction. Physical therapist assistants may deliver treatment and physical interventions for patients and clients under a care plan established by and under the supervision of a physical therapist. Physical therapist assistants in the United States are currently trained under associate of applied sciences curricula specific to the profession, as outlined and accredited by CAPTE. As of December 2022, there were 396 accredited two-year (Associate degree) programs for physical therapist assistants In the United States of America. Curricula for the physical therapist assistant associate degree include: Anatomy & physiology Exercise physiology Human biology Physics Biomechanics Kinesiology Neuroscience Clinical pathology Behavioral sciences Communication Ethics Research Other coursework as required by individual programs Job duties and education requirements for Physical Therapy Technicians or Aides may vary depending on the employer, but education requirements range from a high school diploma or equivalent to completion of a 2-year degree program. O-Net reports that 64% of PT Aides/Techs have a high school diploma or equivalent, 21% have completed some college but do not hold a degree, and 10% hold an associate degree. Some jurisdictions allow physical therapists to employ technicians or aides or therapy assistants to perform designated routine tasks related to physical therapy under the direct supervision of a physical therapist. Some jurisdictions require physical therapy technicians or aides to be certified, and education and certification requirements vary among jurisdictions. Employment Physical therapy-related jobs in North America have shown rapid growth in recent years, but employment rates and average wages may vary significantly between different countries, states, provinces, or regions. A study from 2013 states that 56.4% of physical therapists were globally satisfied with their jobs. Salary, interest in work, and fulfillment in a job are important predictors of job satisfaction. In a Polish study, job burnout among the physical therapists was manifested by increased emotional exhaustion and decreased sense of personal achievement. Emotional exhaustion is significantly higher among physical therapists working with adults and employed in hospitals. Other factors that increased burnout include working in a hospital setting and having seniority from 15 to 19 years. United States According to the United States Department of Labor's Bureau of Labor Statistics, there were approximately 210,900 physical therapists employed in the United States in 2014, earning an average of $84,020 annually in 2015, or $40.40 per hour, with 34% growth in employment projected by 2024. 
The Bureau of Labor Statistics also reports that there were approximately 128,700 physical therapist assistants and aides employed in the United States in 2014, earning an average of $42,980 annually, or $20.66 per hour, with 40% growth in employment projected by 2024. To meet staffing needs, many healthcare and physical therapy facilities hire "travel physical therapists", who work temporary assignments of between 8 and 26 weeks for much higher wages, about $113,500 a year. Bureau of Labor Statistics data on PTAs and techs can be difficult to decipher, because the bureau tends to report data on these job fields collectively rather than separately. O*NET reports that in 2015, PTAs in the United States earned a median wage of $55,170 annually or $26.52 hourly, and that aides/techs earned a median wage of $25,120 annually or $12.08 hourly. The American Physical Therapy Association reports vacancy rates for physical therapists as 11.2% in outpatient private practice, 10% in acute care settings, and 12.1% in skilled nursing facilities. The APTA also reports turnover rates for physical therapists as 10.7% in outpatient private practice, 11.9% in acute care settings, and 27.6% in skilled nursing facilities. Definitions and licensing requirements in the United States vary among jurisdictions, as each state has enacted its own physical therapy practice act defining the profession within its jurisdiction, but the Federation of State Boards of Physical Therapy has also drafted a model definition to limit this variation. The Commission on Accreditation in Physical Therapy Education (CAPTE) is responsible for accrediting physical therapy education curricula throughout the United States of America. United Kingdom The title of Physiotherapist is a protected professional title in the United Kingdom. Anyone using this title must be registered with the Health & Care Professions Council (HCPC). Physiotherapists must complete the necessary qualifications, usually an undergraduate physiotherapy degree (at university or as an intern), a master's degree in rehabilitation, or a doctoral degree in physiotherapy. This is typically followed by supervised professional experience lasting two to three years. All professionals on the HCPC register must comply with continuing professional development (CPD) and can be audited for this evidence at intervals. Specialty areas The body of knowledge of physical therapy is large, and therefore physical therapists may specialize in a specific clinical area. While there are many different types of physical therapy, the American Board of Physical Therapy Specialties lists ten current specialist certifications. Most physical therapists practicing in a specialty will have undergone further training, such as an accredited residency program, although individuals are currently able to sit for their specialist examination after 2,000 hours of focused practice in their respective specialty population, in addition to requirements set by each respective specialty board. Cardiovascular and pulmonary In cardiovascular and pulmonary rehabilitation, respiratory practitioners and physical therapists offer therapy for a wide variety of cardiopulmonary disorders and for patients before and after cardiac or pulmonary surgery. An example of cardiac surgery is coronary bypass surgery. The primary goals of this specialty include increasing endurance and functional independence. Manual therapy is used in this field to assist in clearing lung secretions experienced with cystic fibrosis. 
Patients with pulmonary disorders, heart attacks, post coronary bypass surgery, chronic obstructive pulmonary disease, or pulmonary fibrosis can benefit from treatment by physical therapists specialized in cardiovascular and pulmonary care. Clinical electrophysiology This specialty area includes electrotherapy/physical agents, electrophysiological evaluation (EMG/NCV), and wound management. Geriatric Geriatric physical therapy covers a wide area of issues concerning people as they go through normal adult aging but is usually focused on the older adult. Many conditions affect people as they grow older, including but not limited to arthritis, osteoporosis, cancer, Alzheimer's disease, hip and joint replacement, balance disorders, and incontinence. Geriatric physical therapists specialize in providing therapy for such conditions in older adults. Physical rehabilitation can prevent deterioration in health and activities of daily living among care home residents. The current evidence suggests benefits to physical health from participating in different types of physical rehabilitation to improve daily living, strength, flexibility, balance, mood, memory, and exercise tolerance, and to reduce fear of falling, injuries, and deaths. It may be both safe and effective in improving physical and possibly mental state, while reducing disability with few adverse events. The current body of evidence, which is of moderate quality, suggests that physical rehabilitation may be effective for long-term care residents in reducing disability with few adverse events; however, there is insufficient evidence to conclude whether the beneficial effects are sustainable and cost-effective. Wound management Wound management physical therapy includes the treatment of conditions involving the skin and all its related organs. Common conditions managed include wounds and burns. Physical therapists may utilize surgical instruments, wound irrigations, dressings, and topical agents to remove damaged or contaminated tissue and promote tissue healing. Other commonly used interventions include exercise, edema control, splinting, and compression garments. Physical therapists in the integumentary specialty do work similar to what would be done by medical doctors or nurses in the emergency room or triage. Neurology Neurological physical therapy is a field focused on working with individuals who have a neurological disorder or disease. These can include stroke, chronic back pain, Alzheimer's disease, Charcot-Marie-Tooth disease (CMT), ALS, brain injury, cerebral palsy, multiple sclerosis, Parkinson's disease, facial palsy and spinal cord injury. Common impairments associated with neurologic conditions include impairments of vision, balance, ambulation, activities of daily living, movement, muscle strength and loss of functional independence. The techniques involved in neurological physical therapy are wide-ranging and often require specialized training. Neurological physiotherapy is also called neurophysiotherapy or neurological rehabilitation. It is recommended that neurophysiotherapists collaborate with psychologists when providing physical treatment of movement disorders. This is especially important because combining physical therapy and psychotherapy can improve the neurological status of patients. 
Orthopaedics Orthopedic physical therapists diagnose, manage, and treat disorders and injuries of the musculoskeletal system, including rehabilitation after orthopedic surgery, acute trauma such as sprains and strains, injuries of insidious onset such as tendinopathy and bursitis, and deformities such as scoliosis. This specialty of physical therapy is most often found in the outpatient clinical setting. Orthopedic therapists are trained in the treatment of post-operative orthopedic procedures, fractures, acute sports injuries, arthritis, sprains, strains, back and neck pain, spinal conditions, and amputations. Joint and spine mobilization/manipulation, dry needling (similar to acupuncture), therapeutic exercise, neuromuscular techniques, muscle reeducation, hot/cold packs, and electrotherapeutic and physical agents (e.g., cryotherapy, iontophoresis, electrical muscle stimulation) are modalities employed to expedite recovery in the orthopedic setting. Additionally, sonography is an emerging adjunct used for diagnosis and to guide treatments such as muscle retraining. Those with injury or disease affecting the muscles, bones, ligaments, or tendons will benefit from assessment by a physical therapist specialized in orthopedics. Pediatrics Pediatric physical therapy assists in the early detection of health problems and uses a variety of modalities to provide physical therapy for disorders in the pediatric population. These therapists are specialized in the diagnosis, treatment, and management of infants, children, and adolescents with a variety of congenital, developmental, neuromuscular, skeletal, or acquired disorders/diseases. Treatments focus mainly on improving gross and fine motor skills, balance and coordination, strength and endurance, as well as cognitive and sensory processing/integration. Sports Physical therapists are closely involved in the care and wellbeing of athletes, including recreational, semi-professional (paid), and professional (full-time employment) participants. This area of practice encompasses athletic injury management under five main categories: acute care – assessment and diagnosis of an initial injury; treatment – application of specialist advice and techniques to encourage healing; rehabilitation – progressive management for full return to sport; prevention – identification and address of deficiencies known to directly result in, or act as precursors to, injury, such as movement assessment; education – sharing of specialist knowledge with individual athletes, teams, or clubs to assist in prevention or management of injury. Physical therapists who work for professional sports teams often have a specialized sports certification issued through their national registering organization. Most physical therapists who practice in a sporting environment are also active in collaborative sports medicine programs (see also: athletic trainers). Women's health Women's health or pelvic floor physical therapy mostly addresses women's issues related to the female reproductive system, childbirth, and the post-partum period. These conditions include lymphedema, osteoporosis, pelvic pain, prenatal and post-partum issues, and urinary incontinence, as well as pelvic organ prolapse and other disorders associated with pelvic floor dysfunction. Manual physical therapy has been demonstrated in multiple studies to increase rates of conception in women with infertility. 
Oncology Physical therapy in the field of oncology and palliative care is a continuously evolving and developing specialty, in both malignant and non-malignant diseases. Physical therapy for both groups of patients is now recognized as an essential part of the clinical pathway, as early diagnoses and new treatments enable patients to live longer. It is generally accepted that patients should have access to an appropriate level of rehabilitation, so that they can function at a minimum level of dependency and optimize their quality of life, regardless of their life expectancy. Physical therapist–patient collaborative relationship People with brain injury, musculoskeletal conditions, cardiac conditions, or multiple pathologies benefit from a positive alliance between patient and therapist. Outcomes affected include the ability to perform activities of daily living, pain management, performance of specific physical function tasks, depression, global assessment of physical health, treatment adherence, and treatment satisfaction. Studies have explored four themes that may influence patient–therapist interactions: interpersonal and communication skills, practical skills, individualized patient-centered care, and organizational and environmental factors. Physical therapists need to be able to communicate effectively with their patients on a variety of levels. Patients have varying levels of health literacy, so physical therapists need to take that into account when discussing the patient's ailments as well as planned treatment. Research has shown that using communication tools tailored to the patient's health literacy leads to improved engagement with their practitioner and their clinical care. In addition, patients report that shared decision-making yields a positive relationship. Practical skills, such as the ability to educate patients about their conditions, and professional expertise are perceived as valuable factors in patient care. Patients value the ability of a clinician to provide clear and simple explanations about their problems. Furthermore, patients value physical therapists who possess excellent technical skills that effectively improve their condition. Environmental factors such as the location, equipment used, and parking are less important to the patient than the physical therapy clinical encounter itself. Based on the current understanding, the most important factors that contribute to the patient–therapist interaction are that the physical therapist: spends an adequate amount of time with the patient, possesses strong listening and communication skills, treats the patient with respect, provides clear explanations of the treatment, and allows the patient to be involved in the treatment decisions. Effectiveness Physical therapy has been found to be effective for improving outcomes, both in terms of pain and function, in multiple musculoskeletal conditions. Spinal manipulation by physical therapists is a safe option to improve outcomes for lower back pain. Several studies have suggested that physical therapy, particularly manual therapy techniques focused on the neck and the median nerve, combined with stretching exercises, may be equivalent or even preferable to surgery for carpal tunnel syndrome. While spine manipulation and therapeutic massage are effective interventions for neck pain, electroacupuncture, strain-counterstrain, relaxation massage, heat therapy, and ultrasound therapy are not as effective and thus are not recommended. 
Studies also show physical therapy is effective for patients with other conditions. Physiotherapy treatment may improve quality of life, promote cardiopulmonary fitness and inspiratory pressure, and reduce symptoms and medication use by people with asthma. Physical therapy is sometimes provided to patients in the ICU, as early mobilization can help reduce ICU and hospital length of stay and improve long-term functional ability. Early progressive mobilization for adult, intubated ICU patients on mechanical ventilation is safe and effective. Psychologically informed physical therapy (PIPT), in which a physical therapist treats patients while other members of a multidisciplinary care team help in preoperative planning for patient management of pain and quality of life, helps improve patient outcomes, especially before and after spine, hip, or knee surgery. Telehealth Telehealth (or telerehabilitation) is a developing form of physical therapy in response to the increasing demand for physical therapy treatment. Telehealth is online communication between the clinician and patient, either live or in pre-recorded sessions; it has received mixed reviews when compared to usual, in-person care. The benefits of telehealth include improved accessibility in remote areas, cost efficiency, and improved convenience for people who are bedridden, home-restricted, or physically disabled. Some concerns about telehealth include limited evidence that its effectiveness and patient compliance exceed those of in-person therapy, licensing and payment policy issues, and compromised privacy. Evidence is mixed as to the effectiveness of telehealth in patients with more serious conditions, such as stroke, multiple sclerosis, and lower back pain. The interstate compact, enacted in March 2018, allows patients to participate in telehealth appointments with medical practices located in different states. During the COVID-19 pandemic, the need for telehealth came to the fore as patients were less able to attend safely in person, particularly if they were elderly or had chronic diseases. Telehealth was considered a proactive step to prevent decline in individuals who could not attend classes. Physical decline in at-risk groups is difficult to address or undo later. Platform licensing or development is found to be the most substantial cost in telehealth. Telehealth does not remove the need for the physical therapist, who still needs to oversee the program. See also American Board of Physical Therapy Specialties American Physical Therapy Association Basic body-awareness methodology Chiropractic Doctor (title) Doctor of Physical Therapy Exercise physiology Exercise prescription List of exercise prescription software Neurophysiotherapy Occupational therapy Physical medicine and rehabilitation Postural Restoration Sports medicine Therapy World Physiotherapy External links Europe: Regulated professions database – Physiotherapist, European Commission
Myelopathy
Myelopathy describes any neurologic deficit related to the spinal cord. The most common form of myelopathy in humans, cervical spondylotic myelopathy (CSM), also called degenerative cervical myelopathy, results from narrowing of the spinal canal (spinal stenosis) ultimately causing compression of the spinal cord. When due to trauma, myelopathy is known as (acute) spinal cord injury. When inflammatory, it is known as myelitis. Disease that is vascular in nature is known as vascular myelopathy. In Asian populations, spinal cord compression often occurs due to a different, inflammatory process affecting the posterior longitudinal ligament. Presentation Clinical signs and symptoms depend on which spinal cord level (cervical, thoracic, or lumbar) is affected and the extent (anterior, posterior, or lateral) of the pathology, and may include: upper motor neuron signs (weakness, spasticity, clumsiness, altered tonus, hyperreflexia and pathological reflexes, including Hoffmann's sign and an inverted plantar reflex, i.e. a positive Babinski sign); lower motor neuron signs (weakness and clumsiness in the muscle group innervated at the level of spinal cord compromise, muscle atrophy, hyporeflexia, muscle hypotonicity or flaccidity, fasciculations); sensory deficits; and bowel/bladder symptoms and sexual dysfunction. Diagnosis Diagnosis of myelopathy Myelopathy is primarily diagnosed by clinical exam findings. Because the term myelopathy describes a clinical syndrome that can be caused by many pathologies, the differential diagnosis of myelopathy is extensive. In some cases the onset of myelopathy is rapid; in others, such as CSM, the course may be insidious, with symptoms developing slowly over a period of months. As a consequence, the diagnosis of CSM is often delayed. As the disease is thought to be progressive, this may impact negatively on outcome. Diagnosis of etiology Once the clinical diagnosis of myelopathy is established, the underlying cause must be investigated. Most commonly this involves medical imaging. The best way to visualize the spinal cord is magnetic resonance imaging (MRI). Apart from T1 and T2 MRI images, which are commonly used for routine diagnosis, researchers are more recently exploring quantitative MRI signals. Further imaging modalities used for evaluating myelopathy include plain X-rays for detecting arthritic changes of the bones, and computed tomography (CT), which is often used for pre-operative planning of surgical interventions for cervical spondylotic myelopathy. Angiography is used to examine blood vessels in suspected cases of vascular myelopathy. The presence and severity of myelopathy can also be evaluated by means of transcranial magnetic stimulation (TMS), a neurophysiological method that allows the measurement of the time required for a neural impulse to cross the pyramidal tracts, starting from the cerebral cortex and ending at the anterior horn cells of the cervical, thoracic or lumbar spinal cord. This measurement is called Central Conduction Time (CCT). TMS can aid physicians to: determine whether myelopathy exists; identify the level of the spinal cord where the myelopathy is located, which is especially useful in cases where more than two lesions may be responsible for the clinical symptoms and signs, such as in patients with two or more cervical disc hernias; and follow up the progression of myelopathy over time, for example before and after cervical spine surgery. TMS can also help in the differential diagnosis of different causes of pyramidal tract damage. 
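To make the central conduction time idea concrete, the sketch below computes a central motor conduction time as the difference between the total response latency after cortical TMS and the peripheral (spinal-segment-to-muscle) conduction time. The function name and the example latencies are hypothetical; real protocols estimate the peripheral component from nerve root stimulation or F-wave measurements.

```python
# Hypothetical sketch of a central motor conduction time (CMCT) calculation.
# CMCT = latency of the muscle response after cortical TMS
#        minus the peripheral (spinal-segment-to-muscle) conduction time.
# Example latencies are invented for illustration only.

def central_conduction_time(cortical_latency_ms: float, peripheral_latency_ms: float) -> float:
    """Return the central conduction time in milliseconds."""
    if peripheral_latency_ms > cortical_latency_ms:
        raise ValueError("peripheral latency cannot exceed the total cortical latency")
    return cortical_latency_ms - peripheral_latency_ms

# Hypothetical hand-muscle recordings before and after cervical decompression surgery:
pre_op = central_conduction_time(cortical_latency_ms=24.5, peripheral_latency_ms=13.0)   # 11.5 ms
post_op = central_conduction_time(cortical_latency_ms=20.0, peripheral_latency_ms=13.0)  # 7.0 ms
print(f"CMCT before surgery: {pre_op:.1f} ms, after surgery: {post_op:.1f} ms")
```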
Treatment The treatment and prognosis of myelopathy depend on the underlying cause: myelopathy caused by infection requires medical treatment with pathogen-specific antibiotics. Similarly, specific treatments exist for multiple sclerosis, which may also present with myelopathy. As outlined above, the most common form of myelopathy is secondary to degeneration of the cervical spine. Newer findings have addressed the long-standing controversy over surgery for cervical spondylotic myelopathy by demonstrating that patients benefit from surgery. See also Surfer's myelopathy
Environment (systems)
In science and engineering, a system is the part of the universe that is being studied, while the environment is the remainder of the universe that lies outside the boundaries of the system. It is also known as the surroundings or neighbourhood, and in thermodynamics, as the reservoir. Depending on the type of system, it may interact with the environment by exchanging mass, energy (including heat and work), linear momentum, angular momentum, electric charge, or other conserved properties. In some disciplines, such as information theory, information may also be exchanged. The environment is ignored in analysis of the system, except in regard to these interactions. See also Bioenergetic systems – energy system Earth system science Environment (biophysical) Environmental Management System Thermodynamic system External links Geography of transport systems people.hofstra.edu Environmental Management Systems epa.gov Earth's Environmental Systems eesc.Columbia.edu
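As an illustrative aside on what a system may exchange with its environment: the traditional thermodynamic classification follows directly from which quantities cross the system boundary. The sketch below is a teaching illustration under that standard convention; the function and its naming are assumptions, not established terminology from this entry.

```python
# Illustrative sketch: classify a thermodynamic system by what it exchanges with
# its environment (the surroundings). Function and names are illustrative only.

def classify_system(exchanges_matter: bool, exchanges_energy: bool) -> str:
    """Return the traditional classification: open, closed, or isolated."""
    if exchanges_matter:
        # Transferred matter always carries energy with it, so such a system is open.
        return "open"
    return "closed" if exchanges_energy else "isolated"

print(classify_system(exchanges_matter=False, exchanges_energy=False))  # isolated (e.g. an idealized thermos)
print(classify_system(exchanges_matter=False, exchanges_energy=True))   # closed (e.g. a sealed pot being heated)
print(classify_system(exchanges_matter=True,  exchanges_energy=True))   # open (e.g. an uncovered boiling pot)
```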
Stimulus (physiology)
In physiology, a stimulus is a change in a living thing's internal or external environment. This change can be detected by an organism or organ using sensitivity, and leads to a physiological reaction. Sensory receptors can receive stimuli from outside the body, as in touch receptors found in the skin or light receptors in the eye, as well as from inside the body, as in chemoreceptors and mechanoreceptors. When a stimulus is detected by a sensory receptor, it can elicit a reflex via stimulus transduction. An internal stimulus is often the first component of a homeostatic control system. External stimuli are capable of producing systemic responses throughout the body, as in the fight-or-flight response. In order for a stimulus to be detected with high probability, its level of strength must exceed the absolute threshold; if a signal does reach threshold, the information is transmitted to the central nervous system (CNS), where it is integrated and a decision on how to react is made. Although stimuli commonly cause the body to respond, it is the CNS that finally determines whether a signal causes a reaction or not. Types Internal Homeostatic imbalances Homeostatic imbalances are the main driving force for changes in the body. These stimuli are monitored closely by receptors and sensors in different parts of the body. These sensors are mechanoreceptors, chemoreceptors and thermoreceptors that, respectively, respond to pressure or stretching, chemical changes, or temperature changes. Examples of mechanoreceptors include baroreceptors which detect changes in blood pressure, Merkel's discs which can detect sustained touch and pressure, and hair cells which detect sound stimuli. Homeostatic imbalances that can serve as internal stimuli include nutrient and ion levels in the blood, oxygen levels, and water levels. Deviations from the homeostatic ideal may generate a homeostatic emotion, such as pain, thirst or fatigue, that motivates behavior that will restore the body to stasis (such as withdrawal, drinking or resting). Blood pressure Blood pressure, heart rate, and cardiac output are measured by stretch receptors found in the carotid arteries. Nerves embed themselves within these receptors and when they detect stretching, they are stimulated and fire action potentials to the central nervous system. These impulses inhibit the constriction of blood vessels and lower the heart rate. If these nerves do not detect stretching, the body perceives the resulting low blood pressure as a dangerous stimulus; the inhibitory signals are not sent, so CNS action is no longer suppressed, blood vessels constrict and the heart rate increases, causing an increase in blood pressure in the body. External Touch and pain Sensory feelings, especially pain, are stimuli that can elicit a large response and cause neurological changes in the body. Pain also causes a behavioral change in the body, which is proportional to the intensity of the pain. The feeling is recorded by sensory receptors on the skin and travels to the central nervous system, where it is integrated and a decision on how to respond is made; if it is decided that a response must be made, a signal is sent back down to a muscle, which behaves appropriately according to the stimulus. The postcentral gyrus is the location of the primary somatosensory area, the main sensory receptive area for the sense of touch. Pain receptors are known as nociceptors. Two main types of nociceptors exist: A-fiber nociceptors and C-fiber nociceptors. 
A-fiber receptors are myelinated and conduct currents rapidly. They are mainly used to conduct fast and sharp types of pain. Conversely, C-fiber receptors are unmyelinated and transmit slowly. These receptors conduct slow, burning, diffuse pain. The absolute threshold for touch is the minimum amount of sensation needed to elicit a response from touch receptors. This amount of sensation has a definable value and is often considered to be the force exerted by dropping the wing of a bee onto a person's cheek from a distance of one centimeter. This value will change based on the body part being touched. Vision Vision provides opportunity for the brain to perceive and respond to changes occurring around the body. Information, in the form of light, enters the retina, where it excites a special type of neuron called a photoreceptor cell. A local graded potential begins in the photoreceptor, where it excites the cell enough for the impulse to be passed along through a track of neurons to the central nervous system. As the signal travels from photoreceptors to larger neurons, action potentials must be created for the signal to have enough strength to reach the CNS. If the stimulus does not warrant a strong enough response, it is said to not reach absolute threshold, and the body does not react. However, if the stimulus is strong enough to create an action potential in neurons away from the photoreceptor, the body will integrate the information and react appropriately. Visual information is processed in the occipital lobe of the CNS, specifically in the primary visual cortex. The absolute threshold for vision is the minimum amount of sensation needed to elicit a response from photoreceptors in the eye. This amount of sensation has a definable value and is often considered to be the amount of light present from someone holding up a single candle 30 miles away, if one's eyes were adjusted to the dark. Smell Smell allows the body to recognize chemical molecules in the air through inhalation. Olfactory organs located on either side of the nasal septum consist of olfactory epithelium and lamina propria. The olfactory epithelium, which contains olfactory receptor cells, covers the inferior surface of the cribriform plate, the superior portion of the perpendicular plate, and the superior nasal concha. Only roughly two percent of inhaled airborne compounds are carried to the olfactory organs, as only a small sample of the inhaled air reaches them. Olfactory receptors extend past the epithelial surface, providing a base for many cilia that lie in the surrounding mucus. Odorant-binding proteins interact with these cilia, stimulating the receptors. Odorants are generally small organic molecules. Greater water and lipid solubility is directly related to stronger-smelling odorants. Odorant binding to G protein-coupled receptors activates adenylate cyclase, which converts ATP to cAMP. cAMP, in turn, promotes the opening of sodium channels, resulting in a localized potential. The absolute threshold for smell is the minimum amount of sensation needed to elicit a response from receptors in the nose. This amount of sensation has a definable value and is often considered to be a single drop of perfume in a six-room house. This value will change depending on what substance is being smelled. Taste Taste records the flavoring of food and other materials that pass across the tongue and through the mouth. Gustatory cells are located on the surface of the tongue and adjacent portions of the pharynx and larynx. 
Gustatory cells, which are specialized epithelial cells, form taste buds and are generally turned over every ten days. From each cell, microvilli, sometimes called taste hairs, protrude through the taste pore and into the oral cavity. Dissolved chemicals interact with these receptor cells; different tastes bind to specific receptors. Salt and sour receptors are chemically gated ion channels, which depolarize the cell. Sweet, bitter, and umami receptors are specialized G protein-coupled receptors that act through gustducins. Both divisions of receptor cells release neurotransmitters to afferent fibers, causing action potential firing. The absolute threshold for taste is the minimum amount of sensation needed to elicit a response from receptors in the mouth. This amount of sensation has a definable value and is often considered to be a single drop of quinine sulfate in 250 gallons of water. Sound Changes in pressure caused by sound reaching the external ear resonate in the tympanic membrane, which articulates with the auditory ossicles, or the bones of the middle ear. These tiny bones amplify the pressure fluctuations as they pass the disturbance into the cochlea, a spiral-shaped bony structure within the inner ear. Hair cells in the cochlear duct, specifically the organ of Corti, are deflected as waves of fluid and membrane motion travel through the chambers of the cochlea. Bipolar sensory neurons located in the center of the cochlea monitor the information from these receptor cells and pass it on to the brainstem via the cochlear branch of cranial nerve VIII. Sound information is processed in the temporal lobe of the CNS, specifically in the primary auditory cortex. The absolute threshold for sound is the minimum amount of sensation needed to elicit a response from receptors in the ears. This amount of sensation has a definable value and is often considered to be a watch ticking in an otherwise soundless environment 20 feet away. Equilibrium Semicircular ducts, which are connected directly to the cochlea, can interpret and convey to the brain information about equilibrium by a method similar to the one used for hearing. Hair cells in these parts of the ear protrude kinocilia and stereocilia into a gelatinous material that lines the ducts of this canal. In other parts of the vestibular apparatus, specifically the maculae, calcium carbonate crystals known as statoconia rest on the surface of this gelatinous material. When the head is tilted or the body undergoes linear acceleration, these crystals move, disturbing the cilia of the hair cells and consequently affecting the release of neurotransmitter to be taken up by the surrounding sensory nerves. In other areas, specifically the ampullae of the semicircular canals, a structure known as the cupula, analogous to the gelatinous material in the maculae, distorts hair cells in a similar fashion when the fluid medium that surrounds it causes the cupula itself to move. The ampullae communicate to the brain information about the head's rotation. Neurons of the adjacent vestibular ganglia monitor the hair cells in these ducts. These sensory fibers form the vestibular branch of cranial nerve VIII. Cellular response In general, cellular response to stimuli is defined as a change in state or activity of a cell in terms of movement, secretion, enzyme production, or gene expression. 
Receptors on cell surfaces are sensing components that monitor stimuli and respond to changes in the environment by relaying the signal to a control center for further processing and response. Stimuli are always converted into electrical signals via transduction. This electrical signal, or receptor potential, takes a specific pathway through the nervous system to initiate a systematic response. Each type of receptor is specialized to respond preferentially to only one kind of stimulus energy, called the adequate stimulus. Sensory receptors have a well-defined range of stimuli to which they respond, and each is tuned to the particular needs of the organism. Stimuli are relayed throughout the body by mechanotransduction or chemotransduction, depending on the nature of the stimulus. Mechanical In response to a mechanical stimulus, cellular sensors of force are proposed to be extracellular matrix molecules, cytoskeleton, transmembrane proteins, proteins at the membrane-phospholipid interface, elements of the nuclear matrix, chromatin, and the lipid bilayer. Response can be twofold: the extracellular matrix, for example, is a conductor of mechanical forces but its structure and composition is also influenced by the cellular responses to those same applied or endogenously generated forces. Mechanosensitive ion channels are found in many cell types and it has been shown that the permeability of these channels to cations is affected by stretch receptors and mechanical stimuli. This permeability of ion channels is the basis for the conversion of the mechanical stimulus into an electrical signal. Chemical Chemical stimuli, such as odorants, are received by cellular receptors that are often coupled to ion channels responsible for chemotransduction. Such is the case in olfactory cells. Depolarization in these cells result from opening of non-selective cation channels upon binding of the odorant to the specific receptor. G protein-coupled receptors in the plasma membrane of these cells can initiate second messenger pathways that cause cation channels to open. In response to stimuli, the sensory receptor initiates sensory transduction by creating graded potentials or action potentials in the same cell or in an adjacent one. Sensitivity to stimuli is obtained by chemical amplification through second messenger pathways in which enzymatic cascades produce large numbers of intermediate products, increasing the effect of one receptor molecule. Systematic response Nervous-system response Though receptors and stimuli are varied, most extrinsic stimuli first generate localized graded potentials in the neurons associated with the specific sensory organ or tissue. In the nervous system, internal and external stimuli can elicit two different categories of responses: an excitatory response, normally in the form of an action potential, and an inhibitory response. When a neuron is stimulated by an excitatory impulse, neuronal dendrites are bound by neurotransmitters which cause the cell to become permeable to a specific type of ion; the type of neurotransmitter determines to which ion the neurotransmitter will become permeable. In excitatory postsynaptic potentials, an excitatory response is generated. This is caused by an excitatory neurotransmitter, normally glutamate binding to a neuron's dendrites, causing an influx of sodium ions through channels located near the binding site. 
This change in membrane permeability in the dendrites is known as a local graded potential and causes the membrane voltage to change from a negative resting potential to a more positive voltage, a process known as depolarization. The opening of sodium channels allows nearby sodium channels to open, allowing the change in permeability to spread from the dendrites to the cell body. If a graded potential is strong enough, or if several graded potentials occur at a high enough frequency, the depolarization is able to spread across the cell body to the axon hillock. From the axon hillock, an action potential can be generated and propagated down the neuron's axon, causing sodium ion channels in the axon to open as the impulse travels. Once the signal begins to travel down the axon, the membrane potential has already passed threshold, which means that it cannot be stopped. This phenomenon is known as an all-or-nothing response. Groups of sodium channels opened by the change in membrane potential strengthen the signal as it travels away from the axon hillock, allowing it to move the length of the axon. As the depolarization reaches the end of the axon, or the axon terminal, the end of the neuron becomes permeable to calcium ions, which enter the cell via calcium ion channels. Calcium causes the release of neurotransmitters stored in synaptic vesicles, which enter the synapse between two neurons known as the presynaptic and postsynaptic neurons; if the signal from the presynaptic neuron is excitatory, it will cause the release of an excitatory neurotransmitter, causing a similar response in the postsynaptic neuron. These neurons may communicate with thousands of other receptors and target cells through extensive, complex dendritic networks. Communication between receptors in this fashion enables discrimination and the more explicit interpretation of external stimuli. Effectively, these localized graded potentials trigger action potentials that travel along nerve axons, encoding information in their frequency, and eventually arrive in specific cortices of the brain. In these highly specialized parts of the brain, these signals are coordinated with others to possibly trigger a new response. If a signal from the presynaptic neuron is inhibitory, inhibitory neurotransmitters, normally GABA, will be released into the synapse. This neurotransmitter causes an inhibitory postsynaptic potential in the postsynaptic neuron. This response will cause the postsynaptic neuron to become permeable to chloride ions, making the membrane potential of the cell negative; a negative membrane potential makes it more difficult for the cell to fire an action potential and prevents any signal from being passed on through the neuron. Depending on the type of stimulus, a neuron can be either excitatory or inhibitory. Muscular-system response Nerves in the peripheral nervous system spread out to various parts of the body, including muscle fibers. The spot at which a motor neuron attaches to a muscle fiber is known as the neuromuscular junction. When muscles receive information from internal or external stimuli, muscle fibers are stimulated by their respective motor neuron. Impulses are passed from the central nervous system down neurons until they reach the motor neuron, which releases the neurotransmitter acetylcholine (ACh) into the neuromuscular junction. 
ACh binds to nicotinic acetylcholine receptors on the surface of the muscle cell and opens ion channels, allowing sodium ions to flow into the cell and potassium ions to flow out; this ion movement causes a depolarization, which allows for the release of calcium ions within the cell. Calcium ions bind to proteins within the muscle cell to allow for muscle contraction, the ultimate consequence of the stimulus. Endocrine-system response Vasopressin The endocrine system is largely affected by many internal and external stimuli. One internal stimulus that causes hormone release is blood pressure. Hypotension, or low blood pressure, is a major driving force for the release of vasopressin, a hormone which causes the retention of water in the kidneys. This process also increases an individual's thirst. If an individual's blood pressure returns to normal, through fluid retention or by consuming fluids, vasopressin release slows and less fluid is retained by the kidneys. Hypovolemia, or low fluid levels in the body, can also act as a stimulus to cause this response. Epinephrine Epinephrine, also known as adrenaline, is also commonly used to respond to both internal and external changes. One common cause of the release of this hormone is the fight-or-flight response. When the body encounters an external stimulus that is potentially dangerous, epinephrine is released from the adrenal glands. Epinephrine causes physiological changes in the body, such as constriction of blood vessels, dilation of pupils, increased heart and respiratory rate, and the metabolism of glucose. All of these responses to a single stimulus aid in protecting the individual, whether the decision is made to stay and fight or to run away and avoid danger. Digestive-system response Cephalic phase The digestive system can respond to external stimuli, such as the sight or smell of food, and cause physiological changes before the food ever enters the body. This reflex is known as the cephalic phase of digestion. The sight and smell of food are strong enough stimuli to cause salivation, gastric and pancreatic enzyme secretion, and endocrine secretion in preparation for the incoming nutrients; by starting the digestive process before food reaches the stomach, the body is able to metabolize food into necessary nutrients more effectively and efficiently. Once food hits the mouth, taste and information from receptors in the mouth add to the digestive response. Chemoreceptors and mechanoreceptors, activated by chewing and swallowing, further increase the enzyme release in the stomach and intestine. Enteric nervous system The digestive system is also able to respond to internal stimuli. The enteric nervous system of the digestive tract alone contains millions of neurons. These neurons act as sensory receptors that can detect changes, such as food entering the small intestine, in the digestive tract. Depending on what these sensory receptors detect, certain enzymes and digestive juices from the pancreas and liver can be secreted to aid in metabolism and breakdown of food. Research methods and techniques Clamping techniques Intracellular measurements of electrical potential across the membrane can be obtained by microelectrode recording. Patch clamp techniques allow for the manipulation of the intracellular or extracellular ionic or lipid concentration while still recording potential. In this way, the effect of various conditions on threshold and propagation can be assessed. 
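The threshold-and-propagation behaviour that such recordings probe, and that the nervous-system response described above relies on (graded depolarizations that either leak away or cross threshold and trigger an all-or-nothing action potential), can be illustrated with a minimal leaky integrate-and-fire sketch. The model and its parameter values are textbook-style simplifications chosen for illustration, not a model used in any study cited here.

```python
# Minimal leaky integrate-and-fire sketch of graded potentials reaching threshold.
# Parameter values are arbitrary teaching numbers, not fitted to any real neuron.
V_REST, V_THRESHOLD, V_SPIKE = -70.0, -55.0, 30.0   # membrane potentials in mV
LEAK = 0.1                                          # fraction of depolarization lost per time step

def simulate(inputs_mv, v=V_REST):
    """Integrate depolarizing inputs; emit an all-or-nothing spike when threshold is crossed."""
    trace = []
    for inp in inputs_mv:
        v += inp                      # graded depolarization from synaptic input
        v -= LEAK * (v - V_REST)      # leak pulls the membrane back toward rest
        if v >= V_THRESHOLD:          # threshold crossed: all-or-nothing action potential
            trace.append(V_SPIKE)
            v = V_REST                # reset after the spike
        else:
            trace.append(v)
    return trace

weak = simulate([1.0] * 10)    # small inputs stay well below threshold: no spike
strong = simulate([6.0] * 10)  # summating inputs cross threshold: spikes appear
print("weak  :", [round(x, 1) for x in weak])
print("strong:", [round(x, 1) for x in strong])
```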
Noninvasive neuronal scanning Positron emission tomography (PET) and magnetic resonance imaging (MRI) permit the noninvasive visualization of activated regions of the brain while the test subject is exposed to different stimuli. Activity is monitored in relation to blood flow to a particular region of the brain. Other methods Hindlimb withdrawal time is another method. Sorin Barac et al., in a recent paper published in the Journal of Reconstructive Microsurgery, monitored the response of test rats to pain stimuli by inducing an acute, external heat stimulus and measuring hindlimb withdrawal times (HLWT). See also Reflex Sensory stimulation therapy Stimulation Stimulus (psychology)
Ergonomics
Ergonomics, also known as human factors or human factors engineering (HFE), is the application of psychological and physiological principles to the engineering and design of products, processes, and systems. Primary goals of human factors engineering are to reduce human error, increase productivity and system availability, and enhance safety, health and comfort with a specific focus on the interaction between the human and equipment. The field is a combination of numerous disciplines, such as psychology, sociology, engineering, biomechanics, industrial design, physiology, anthropometry, interaction design, visual design, user experience, and user interface design. Human factors research employs methods and approaches from these and other knowledge disciplines to study human behavior and generate data relevant to previously stated goals. In studying and sharing learning on the design of equipment, devices, and processes that fit the human body and its cognitive abilities, the two terms, "human factors" and "ergonomics", are essentially synonymous as to their referent and meaning in current literature. The International Ergonomics Association defines ergonomics or human factors as follows: Human factors engineering is relevant in the design of such things as safe furniture and easy-to-use interfaces to machines and equipment. Proper ergonomic design is necessary to prevent repetitive strain injuries and other musculoskeletal disorders, which can develop over time and can lead to long-term disability. Human factors and ergonomics are concerned with the "fit" between the user, equipment, and environment or "fitting a job to a person" or "fitting the task to the man". It accounts for the user's capabilities and limitations in seeking to ensure that tasks, functions, information, and the environment suit that user. To assess the fit between a person and the used technology, human factors specialists or ergonomists consider the job (activity) being done and the demands on the user; the equipment used (its size, shape, and how appropriate it is for the task), and the information used (how it is presented, accessed, and changed). Ergonomics draws on many disciplines in its study of humans and their environments, including anthropometry, biomechanics, mechanical engineering, industrial engineering, industrial design, information design, kinesiology, physiology, cognitive psychology, industrial and organizational psychology, and space psychology. Etymology The term ergonomics (from the Greek ἔργον, meaning "work", and νόμος, meaning "natural law") first entered the modern lexicon when Polish scientist Wojciech Jastrzębowski used the word in his 1857 article (The Outline of Ergonomics; i.e. Science of Work, Based on the Truths Taken from the Natural Science). The French scholar Jean-Gustave Courcelle-Seneuil, apparently without knowledge of Jastrzębowski's article, used the word with a slightly different meaning in 1858. The introduction of the term to the English lexicon is widely attributed to British psychologist Hywel Murrell, at the 1949 meeting at the UK's Admiralty, which led to the foundation of The Ergonomics Society. He used it to encompass the studies in which he had been engaged during and after World War II. The expression human factors is a predominantly North American term which has been adopted to emphasize the application of the same methods to non-work-related situations. 
A "human factor" is a physical or cognitive property of an individual or social behavior specific to humans that may influence the functioning of technological systems. The terms "human factors" and "ergonomics" are essentially synonymous. Domains of specialization According to the International Ergonomics Association, within the discipline of ergonomics there exist domains of specialization. These comprise three main fields of research: physical, cognitive, and organizational ergonomics. There are many specializations within these broad categories. Specializations in the field of physical ergonomics may include visual ergonomics. Specializations within the field of cognitive ergonomics may include usability, human–computer interaction, and user experience engineering. Some specializations may cut across these domains: Environmental ergonomics is concerned with human interaction with the environment as characterized by climate, temperature, pressure, vibration, and light. The emerging field of human factors in highway safety uses human factor principles to understand the actions and capabilities of road users – car and truck drivers, pedestrians, cyclists, etc. – and uses this knowledge to design roads and streets to reduce traffic collisions. Driver error is listed as a contributing factor in 44% of fatal collisions in the United States, so a topic of particular interest is how road users gather and process information about the road and its environment, and how to assist them to make the appropriate decision. New terms are being generated all the time. For instance, "user trial engineer" may refer to a human factors engineering professional who specializes in user trials. Although the names change, human factors professionals apply an understanding of human factors to the design of equipment, systems and working methods to improve comfort, health, safety, and productivity. Physical ergonomics Physical ergonomics is concerned with human anatomy, and some of the anthropometric, physiological, and biomechanical characteristics as they relate to physical activity. Physical ergonomic principles have been widely used in the design of both consumer and industrial products, to optimize performance and to prevent or treat work-related disorders by reducing the mechanisms behind mechanically induced acute and chronic musculoskeletal injuries and disorders. Risk factors such as localized mechanical pressure, force, and posture in a sedentary office environment lead to injuries attributed to an occupational environment. Physical ergonomics is important to those diagnosed with physiological ailments or disorders such as arthritis (both chronic and temporary) or carpal tunnel syndrome. Pressure that is insignificant or imperceptible to those unaffected by these disorders may be very painful, or render a device unusable, for those who are. Many ergonomically designed products are also used or recommended to treat or prevent such disorders, and to treat pressure-related chronic pain. One of the most prevalent types of work-related injuries is musculoskeletal disorder. Work-related musculoskeletal disorders (WRMDs) result in persistent pain, loss of functional capacity and work disability, but their initial diagnosis is difficult because they are mainly based on complaints of pain and other symptoms. Every year, 1.8 million U.S. workers experience WRMDs and nearly 600,000 of the injuries are serious enough to cause workers to miss work. 
Certain jobs or work conditions cause a higher rate of worker complaints of undue strain, localized fatigue, discomfort, or pain that does not go away after overnight rest. These types of jobs are often those involving activities such as repetitive and forceful exertions; frequent, heavy, or overhead lifts; awkward work positions; or use of vibrating equipment. The Occupational Safety and Health Administration (OSHA) has found substantial evidence that ergonomics programs can cut workers' compensation costs, increase productivity and decrease employee turnover. Mitigation solutions can include both short term and long-term solutions. Short and long-term solutions involve awareness training, positioning of the body, furniture and equipment and ergonomic exercises. Sit-stand stations and computer accessories that provide soft surfaces for resting the palm as well as split keyboards are recommended. Additionally, resources within the HR department can be allocated to provide assessments to employees to ensure the above criteria are met. Therefore, it is important to gather data to identify jobs or work conditions that are most problematic, using sources such as injury and illness logs, medical records, and job analyses. Innovative workstations that are being tested include sit-stand desks, height adjustable desk, treadmill desks, pedal devices and cycle ergometers. In multiple studies these new workstations resulted in decreased waist circumference and improved psychological well-being. However a significant number of additional studies have seen no marked improvement in health outcomes. With the emergence of collaborative robots and smart systems in manufacturing environments, the artificial agents can be used to improve physical ergonomics of human co-workers. For example, during human–robot collaboration the robot can use biomechanical models of the human co-worker in order to adjust the working configuration and account for various ergonomic metrics, such as human posture, joint torques, arm manipulability and muscle fatigue. The ergonomic suitability of the shared workspace with respect to these metrics can also be displayed to the human with workspace maps through visual interfaces. Cognitive ergonomics Cognitive ergonomics is concerned with mental processes, such as perception, emotion, memory, reasoning, and motor response, as they affect interactions among humans and other elements of a system. (Relevant topics include mental workload, decision-making, skilled performance, human reliability, work stress and training as these may relate to human–system and human–computer interaction design.) Epidemiological studies show a correlation between the time one spends sedentary and their cognitive function such as lowered mood and depression. Organizational ergonomics and safety culture Organizational ergonomics is concerned with the optimization of socio-technical systems, including their organizational structures, policies, and processes. Relevant topics include human communication successes or failures in adaptation to other system elements, crew resource management, work design, work systems, design of working times, teamwork, participatory ergonomics, community ergonomics, cooperative work, new work programs, virtual organizations, remote work, and quality management. Safety culture within an organization of engineers and technicians has been linked to engineering safety with cultural dimensions including power distance and ambiguity tolerance. 
Low power distance has been shown to be more conducive to a safety culture. Organizations with cultures of concealment or lack of empathy have been shown to have poor safety culture. History Ancient societies Some have stated that human ergonomics began with Australopithecus prometheus (also known as "little foot"), a primate who created handheld tools out of different types of stone, clearly distinguishing between tools based on their ability to perform designated tasks. The foundations of the science of ergonomics appear to have been laid within the context of the culture of Ancient Greece. A good deal of evidence indicates that Greek civilization in the 5th century BC used ergonomic principles in the design of their tools, jobs, and workplaces. One outstanding example of this can be found in the description Hippocrates gave of how a surgeon's workplace should be designed and how the tools he uses should be arranged. The archaeological record also shows that the early Egyptian dynasties made tools and household equipment that illustrated ergonomic principles. Industrial societies Bernardino Ramazzini was one of the first people to systematically study the illness that resulted from work earning himself the nickname "father of occupational medicine". In the late 1600s and early 1700s Ramazzini visited many worksites where he documented the movements of laborers and spoke to them about their ailments. He then published "De Morbis Artificum Diatriba" (Latin for Diseases of Workers) which detailed occupations, common illnesses, remedies. In the 19th century, Frederick Winslow Taylor pioneered the "scientific management" method, which proposed a way to find the optimum method of carrying out a given task. Taylor found that he could, for example, triple the amount of coal that workers were shoveling by incrementally reducing the size and weight of coal shovels until the fastest shoveling rate was reached. Frank and Lillian Gilbreth expanded Taylor's methods in the early 1900s to develop the "time and motion study". They aimed to improve efficiency by eliminating unnecessary steps and actions. By applying this approach, the Gilbreths reduced the number of motions in bricklaying from 18 to 4.5, allowing bricklayers to increase their productivity from 120 to 350 bricks per hour. However, this approach was rejected by Russian researchers who focused on the well-being of the worker. At the First Conference on Scientific Organization of Labour (1921) Vladimir Bekhterev and Vladimir Nikolayevich Myasishchev criticised Taylorism. Bekhterev argued that "The ultimate ideal of the labour problem is not in it [Taylorism], but is in such organisation of the labour process that would yield a maximum of efficiency coupled with a minimum of health hazards, absence of fatigue and a guarantee of the sound health and all round personal development of the working people." Myasishchev rejected Frederick Taylor's proposal to turn man into a machine. Dull monotonous work was a temporary necessity until a corresponding machine can be developed. He also went on to suggest a new discipline of "ergology" to study work as an integral part of the re-organisation of work. 
The concept was taken up by Myasishchev's mentor, Bekhterev, in his final report on the conference, merely changing the name to "ergonology". Aviation Prior to World War I, the focus of aviation psychology was on the aviator himself, but the war shifted the focus onto the aircraft, in particular, the design of controls and displays, and the effects of altitude and environmental factors on the pilot. The war saw the emergence of aeromedical research and the need for testing and measurement methods. Studies on driver behavior started gaining momentum during this period, as Henry Ford started providing millions of Americans with automobiles. Another major development during this period was the growth of aeromedical research. By the end of World War I, two aeronautical labs were established, one at Brooks Air Force Base, Texas and the other at Wright-Patterson Air Force Base outside of Dayton, Ohio. Many tests were conducted to determine which characteristics differentiated the successful pilots from the unsuccessful ones. During the early 1930s, Edwin Link developed the first flight simulator. The trend continued and more sophisticated simulators and test equipment were developed. Another significant development was in the civilian sector, where the effects of illumination on worker productivity were examined. This led to the identification of the Hawthorne Effect, which suggested that motivational factors could significantly influence human performance. World War II marked the development of new and complex machines and weaponry, and these made new demands on operators' cognition. It was no longer possible to adopt the Tayloristic principle of matching individuals to preexisting jobs. Now the design of equipment had to take into account human limitations and take advantage of human capabilities. The decision-making, attention, situational awareness and hand-eye coordination of the machine's operator became key in the success or failure of a task. Substantial research was conducted to establish the human capabilities and limitations that equipment design had to accommodate. Much of this research took up where the aeromedical research between the wars had left off. An example of this is the study done by Fitts and Jones (1947), who studied the most effective configuration of control knobs to be used in aircraft cockpits. Much of this research carried over to other equipment with the aim of making the controls and displays easier for the operators to use. The entry of the terms "human factors" and "ergonomics" into the modern lexicon dates from this period. It was observed that fully functional aircraft flown by the best-trained pilots still crashed. In 1943 Alphonse Chapanis, a lieutenant in the U.S. Army, showed that this so-called "pilot error" could be greatly reduced when more logical and differentiable controls replaced confusing designs in airplane cockpits. After the war, the Army Air Force published 19 volumes summarizing what had been established from research during the war. In the decades since World War II, human factors has continued to flourish and diversify. Work by Elias Porter and others within the RAND Corporation after WWII extended the conception of human factors. "As the thinking progressed, a new concept developed—that it was possible to view an organization such as an air-defense, man-machine system as a single organism and that it was possible to study the behavior of such an organism. It was the climate for a breakthrough."
In the first 20 years after World War II, most activities were carried out by the "founding fathers": Alphonse Chapanis, Paul Fitts, and Small. Cold War The beginning of the Cold War led to a major expansion of defense-supported research laboratories. Also, many labs established during WWII started expanding. Most of the research following the war was military-sponsored. Large sums of money were granted to universities to conduct research. The scope of the research also broadened from small pieces of equipment to entire workstations and systems. Concurrently, many opportunities started opening up in civilian industry. The focus shifted from research to participation through advice to engineers in the design of equipment. After 1965, the period saw a maturation of the discipline. The field has expanded with the development of the computer and computer applications. The Space Age created new human factors issues such as weightlessness and extreme g-forces. Tolerance of the harsh environment of space and its effects on the mind and body were widely studied. Information age The dawn of the Information Age has resulted in the related field of human–computer interaction (HCI). Likewise, the growing demand for and competition among consumer goods and electronics has resulted in more companies and industries including human factors in their product design. Using advanced technologies in human kinetics, body-mapping, movement patterns and heat zones, companies are able to manufacture purpose-specific garments, including full body suits, jerseys, shorts, shoes, and even underwear. Organizations Formed in 1946 in the UK, the oldest professional body for human factors specialists and ergonomists is The Chartered Institute of Ergonomics and Human Factors, formerly known as the Institute of Ergonomics and Human Factors and before that, The Ergonomics Society. The Human Factors and Ergonomics Society (HFES) was founded in 1957. The Society's mission is to promote the discovery and exchange of knowledge concerning the characteristics of human beings that are applicable to the design of systems and devices of all kinds. The Association of Canadian Ergonomists - l'Association canadienne d'ergonomie (ACE) was founded in 1968. It was originally named the Human Factors Association of Canada (HFAC), with ACE (in French) added in 1984, and the consistent, bilingual title adopted in 1999. According to its 2017 mission statement, ACE unites and advances the knowledge and skills of ergonomics and human factors practitioners to optimise human and organisational well-being. The International Ergonomics Association (IEA) is a federation of ergonomics and human factors societies from around the world. The mission of the IEA is to elaborate and advance ergonomics science and practice, and to improve the quality of life by expanding its scope of application and contribution to society. As of September 2008, the International Ergonomics Association had 46 federated societies and 2 affiliated societies. Human Factors Transforming Healthcare (HFTH) is an international network of HF practitioners who are embedded within hospitals and health systems. The goal of the network is to provide resources for human factors practitioners and healthcare organizations looking to successfully apply HF principles to improve patient care and provider performance. The network also serves as a collaborative platform for human factors practitioners, students, faculty, industry partners, and those curious about human factors in healthcare.
Related organizations The Institute of Occupational Medicine (IOM) was founded by the coal industry in 1969. From the outset the IOM employed an ergonomics staff to apply ergonomics principles to the design of mining machinery and environments. To this day, the IOM continues ergonomics activities, especially in the fields of musculoskeletal disorders, heat stress and the ergonomics of personal protective equipment (PPE). As for many in occupational ergonomics, the demands and requirements of an ageing UK workforce are a growing concern and interest for IOM ergonomists. The Society of Automotive Engineers (SAE) is a professional organization for mobility engineering professionals in the aerospace, automotive, and commercial vehicle industries. The Society is a standards development organization for the engineering of powered vehicles of all kinds, including cars, trucks, boats, aircraft, and others. The Society of Automotive Engineers has established a number of standards used in the automotive industry and elsewhere. It encourages the design of vehicles in accordance with established human factors principles. It is one of the most influential organizations with respect to ergonomics work in automotive design. This society regularly holds conferences which address topics spanning all aspects of human factors and ergonomics. Practitioners Human factors practitioners come from a variety of backgrounds, though predominantly they are psychologists (from the various subfields of industrial and organizational psychology, engineering psychology, cognitive psychology, perceptual psychology, applied psychology, and experimental psychology) and physiologists. Designers (industrial, interaction, and graphic), anthropologists, technical communication scholars and computer scientists also contribute. Typically, an ergonomist will have an undergraduate degree in psychology, engineering, design or health sciences, and usually a master's degree or doctoral degree in a related discipline. Though some practitioners enter the field of human factors from other disciplines, both M.S. and PhD degrees in Human Factors Engineering are available from several universities worldwide. Sedentary workplace Contemporary offices did not exist until the 1830s, with Wojciech Jastrzębowski's seminal book on ergonomics following in 1857 and the first published study of posture appearing in 1955. As the American workforce began to shift towards sedentary employment, the prevalence of work-related musculoskeletal disorders and cognitive issues began to rise. In 1900, 41% of the US workforce was employed in agriculture, but by 2000 that had dropped to 1.9%. This coincides with growth in desk-based employment (25% of all employment in 2000) and the surveillance of non-fatal workplace injuries by OSHA and the Bureau of Labor Statistics beginning in 1971. Sedentary behavior is characterized by an energy expenditure of roughly 0–1.5 metabolic equivalents (METs) and occurs in a sitting or reclining position. Adults older than 50 years report spending more time sedentary, and for adults older than 65 years this is often 80% of their waking time. Multiple studies show a dose-response relationship between sedentary time and all-cause mortality, with an increase of about 3% in mortality per additional sedentary hour each day. High quantities of sedentary time without breaks are correlated with higher risk of chronic disease, obesity, cardiovascular disease, type 2 diabetes and cancer. Currently, a large proportion of the overall workforce is employed in low-physical-activity occupations.
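As a rough, back-of-the-envelope illustration of the dose-response figure quoted above, the sketch below compounds a 3% per-hour increase multiplicatively; the compounding assumption and the example hour counts are mine for illustration, not taken from the cited studies.

```python
# Hypothetical illustration only: compounds the ~3% per additional sedentary hour
# figure multiplicatively; the studies report an association, not this exact model.
def relative_mortality_risk(extra_hours_per_day: float, per_hour_increase: float = 0.03) -> float:
    return (1.0 + per_hour_increase) ** extra_hours_per_day

for hours in (1, 3, 5):
    print(f"{hours} extra sedentary h/day -> relative risk ~{relative_mortality_risk(hours):.2f}")
# prints ~1.03, ~1.09, ~1.16
```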
Sedentary behavior, such as spending long periods of time in seated positions, poses a serious threat of injuries and additional health risks. Unfortunately, even though some workplaces make an effort to provide a well designed environment for sedentary employees, any employee who performs large amounts of sitting will likely experience discomfort. There are existing conditions that predispose both individuals and populations to sedentary lifestyles, including socioeconomic determinants, education levels, occupation, living environment, and age (as mentioned above). A study published in the Iranian Journal of Public Health examined socioeconomic factors and sedentary lifestyle effects for individuals in a working community. The study concluded that individuals who reported living in low-income environments were more inclined to sedentary behavior than those who reported being of high socioeconomic status. Individuals with less education are also considered a high-risk group for sedentary lifestyles; however, each community is different and has different resources available that may vary this risk. Larger worksites are often associated with increased occupational sitting. Those who work in environments classified as business and office jobs are typically more exposed to sitting and sedentary behavior while in the workplace. Additionally, workers in occupations that are full-time and have schedule flexibility are also included in that demographic, and are more likely to sit often throughout their workday. Policy implementation Obstacles to providing better ergonomic features for sedentary employees include the cost, time and effort required of both companies and employees. The evidence above helps establish the importance of ergonomics in a sedentary workplace, yet what is still missing is enforcement and policy implementation. As the modernized workplace becomes more and more technology-based, more jobs are becoming primarily seated, leading to a need to prevent chronic injuries and pain. This is becoming easier with the growing body of research showing that ergonomic tools save companies money by limiting the number of days missed from work and workers' compensation cases. The way to ensure that corporations prioritize these health outcomes for their employees is through policy and implementation. In the United States, there are no nationwide policies currently in place; however, a handful of big companies and states have adopted policies to ensure the safety of all workers. For example, the state of Nevada's risk management department has established a set of ground rules covering both agencies' responsibilities and employees' responsibilities. The agency responsibilities include evaluating workstations, using risk management resources when necessary and keeping OSHA records. Methods Until recently, methods used to evaluate human factors and ergonomics ranged from simple questionnaires to more complex and expensive usability labs. Some of the more common human factors methods are listed below: Ethnographic analysis: Using methods derived from ethnography, this process focuses on observing the uses of technology in a practical environment. It is a qualitative and observational method that focuses on "real-world" experience and pressures, and the usage of technology or environments in the workplace.
The process is best used early in the design process. Focus groups are another form of qualitative research in which one individual will facilitate discussion and elicit opinions about the technology or process under investigation. This can be on a one-to-one interview basis, or in a group session. Can be used to gain a large quantity of deep qualitative data, though due to the small sample size, can be subject to a higher degree of individual bias. Can be used at any point in the design process, as it is largely dependent on the exact questions to be pursued, and the structure of the group. Can be extremely costly. Iterative design: Also known as prototyping, the iterative design process seeks to involve users at several stages of design, to correct problems as they emerge. As prototypes emerge from the design process, these are subjected to other forms of analysis as outlined in this article, and the results are then taken and incorporated into the new design. Trends among users are analyzed, and products redesigned. This can become a costly process, and needs to be done as soon as possible in the design process before designs become too concrete. Meta-analysis: A supplementary technique used to examine a wide body of already existing data or literature to derive trends or form hypotheses to aid design decisions. As part of a literature survey, a meta-analysis can be performed to discern a collective trend from individual variables. Subjects-in-tandem: Two subjects are asked to work concurrently on a series of tasks while vocalizing their analytical observations. The technique is also known as "Co-Discovery" as participants tend to feed off of each other's comments to generate a richer set of observations than is often possible with the participants separately. This is observed by the researcher, and can be used to discover usability difficulties. This process is usually recorded. Surveys and questionnaires: A commonly used technique outside of human factors as well, surveys and questionnaires have an advantage in that they can be administered to a large group of people for relatively low cost, enabling the researcher to gain a large amount of data. The validity of the data obtained is, however, always in question, as the questions must be written and interpreted correctly, and are, by definition, subjective. Those who actually respond are in effect self-selecting as well, widening the gap between the sample and the population further. Task analysis: A process with roots in activity theory, task analysis is a way of systematically describing human interaction with a system or process to understand how to match the demands of the system or process to human capabilities. The complexity of this process is generally proportional to the complexity of the task being analyzed, and so can vary in cost and time involvement. It is a qualitative and observational process. Best used early in the design process. Human performance modeling: A method of quantifying human behavior, cognition, and processes; a tool used by human factors researchers and practitioners for both the analysis of human function and for the development of systems designed for optimal user experience and interaction. Think aloud protocol: Also known as "concurrent verbal protocol", this is the process of asking a user to execute a series of tasks or use technology, while continuously verbalizing their thoughts so that a researcher can gain insights as to the users' analytical process. 
Can be useful for finding design flaws that do not affect task performance, but may have a negative cognitive effect on the user. Also useful for utilizing experts to better understand procedural knowledge of the task in question. Less expensive than focus groups, but tends to be more specific and subjective. User analysis: This process is based around designing for the attributes of the intended user or operator, establishing the characteristics that define them, creating a persona for the user. Best done at the outset of the design process, a user analysis will attempt to predict the most common users, and the characteristics that they would be assumed to have in common. This can be problematic if the design concept does not match the actual user, or if the identified characteristics are too vague to make clear design decisions from. This process is, however, usually quite inexpensive, and commonly used. "Wizard of Oz": This is a comparatively uncommon technique but has seen some use in mobile devices. Based upon the Wizard of Oz experiment, this technique involves an operator who remotely controls the operation of a device to imitate the response of an actual computer program. It has the advantage of producing a highly changeable set of reactions, but can be quite costly and difficult to undertake. Methods analysis is the process of studying the tasks a worker completes using a step-by-step investigation. Each task is broken down into smaller steps until each motion the worker performs is described. Doing so enables the analyst to see exactly where repetitive or straining tasks occur. Time studies determine the time required for a worker to complete each task. Time studies are often used to analyze cyclical jobs. They are considered "event based" studies because time measurements are triggered by the occurrence of predetermined events. Work sampling is a method in which the job is sampled at random intervals to determine the proportion of total time spent on a particular task. It provides insight into how often workers are performing tasks which might cause strain on their bodies. Predetermined time systems are methods for analyzing the time spent by workers on a particular task. One of the most widely used predetermined time systems is Methods-Time Measurement (MTM). Other common work measurement systems include MODAPTS and MOST. Industry-specific applications based on predetermined time systems include Seweasy, MODAPTS and GSD. Cognitive walkthrough: This is a usability inspection method in which the evaluators can apply a user perspective to task scenarios to identify design problems. As applied to macroergonomics, evaluators are able to analyze the usability of work system designs to identify how well a work system is organized and how well the workflow is integrated. Kansei method: This is a method that transforms consumers' responses to new products into design specifications. As applied to macroergonomics, this method can translate employees' responses to changes to a work system into design specifications. High Integration of Technology, Organization, and People: This is a manual procedure done step-by-step to apply technological change to the workplace. It allows managers to be more aware of the human and organizational aspects of their technology plans, allowing them to efficiently integrate technology in these contexts. Top modeler: This model helps manufacturing companies identify the organizational changes needed when new technologies are being considered for their process.
Computer-integrated Manufacturing, Organization, and People System Design: This model allows for evaluating computer-integrated manufacturing, organization, and people system design based on knowledge of the system. Anthropotechnology: This method considers analysis and design modification of systems for the efficient transfer of technology from one culture to another. Systems analysis tool: This is a method to conduct systematic trade-off evaluations of work-system intervention alternatives. Macroergonomic analysis of structure: This method analyzes the structure of work systems according to their compatibility with unique sociotechnical aspects. Macroergonomic analysis and design: This method assesses work-system processes by using a ten-step process. Virtual manufacturing and response surface methodology: This method uses computerized tools and statistical analysis for workstation design. Weaknesses Problems related to measures of usability include the fact that measures of learning and retention of how to use an interface are rarely employed and some studies treat measures of how users interact with interfaces as synonymous with quality-in-use, despite an unclear relation. Although field methods can be extremely useful because they are conducted in the users' natural environment, they have some major limitations to consider. The limitations include: Usually take more time and resources than other methods Very high effort in planning, recruiting, and executing compared with other methods Much longer study periods and therefore requires much goodwill among the participants Studies are longitudinal in nature, therefore, attrition can become a problem. See also ISO 9241 Occupational Health Science (journal) Wojciech Jastrzębowski (1799–1882), a Polish pioneer of ergonomics References Further reading Books Thomas J. Armstrong (2008), Chapter 10: Allowances, Localized Fatigue, Musculoskeletal Disorders, and Biomechanics (not yet published) Berlin C. & Adams C. & 2017. Production Ergonomics: Designing Work Systems to Support Optimal Human Performance. London: Ubiquity Press. . Jan Dul and Bernard Weedmaster, Ergonomics for Beginners. A classic introduction on ergonomics—Original title: Vademecum Ergonomie (Dutch)—published and updated since the 1960s. Valerie J Gawron (2000), Human Performance Measures Handbook Lawrence Erlbaum Associates—A useful summary of human performance measures. Liu, Y (2007). IOE 333. Course pack. Industrial and Operations Engineering 333 (Introduction to Ergonomics), University of Michigan, Ann Arbor, MI. Winter 2007 Donald Norman, The Design of Everyday Things—An entertaining user-centered critique of nearly every gadget out there (at the time it was published) Peter Opsvik (2009), "Re-Thinking Sitting". Interesting insights on the history of the chair and how we sit from an ergonomic pioneer Computer Ergonomics & Work Related Upper Limb Disorder Prevention- Making The Business Case For Pro-active Ergonomics (Rooney et al., 2008) Stephen Pheasant, Bodyspace—A classic exploration of ergonomics Alvin R. Tilley & Henry Dreyfuss Associates (1993, 2002), The Measure of Man & Woman: Human Factors in Design A human factors design manual. Kim Vicente, The Human Factor Full of examples and statistics illustrating the gap between existing technology and the human mind, with suggestions to narrow it Wickens and Hollands (2000). Engineering Psychology and Human Performance. 
Discusses memory, attention, decision making, stress and human error, among other topics Wilson & Corlett, Evaluation of Human Work A practical ergonomics methodology. Warning: very technical and not a suitable 'intro' to ergonomics Zamprotta, Luigi, La qualité comme philosophie de la production. Interaction avec l'ergonomie et perspectives futures, thèse de Maîtrise ès Sciences Appliquées – Informatique, Institut d'Etudes Supérieures L'Avenir, Brussels, année universitaire 1992–93, TIU Press, Independence, Missouri (USA), 1994. Peer-reviewed Journals (Numbers between brackets are the ISI impact factor, followed by the date) Behaviour & Information Technology (0.915, 2008) Ergonomics (0.747, 2001–2003) Ergonomics in Design (-) Applied Ergonomics (1.713, 2015) Human Factors (1.37, 2015) International Journal of Industrial Ergonomics (0.395, 2001–2003) Human Factors and Ergonomics in Manufacturing (0.311, 2001–2003) Travail Humain (0.260, 2001–2003) Theoretical Issues in Ergonomics Science (-) International Journal of Human Factors and Ergonomics (-) International Journal of Occupational Safety and Ergonomics (-) External links Directory of Design Support Methods Engineering Data Compendium of Human Perception and Performance Index of Non-Government Standards on Human Engineering... Index of Government Standards on Human Engineering... NIOSH Topic Page on Ergonomics and Musculoskeletal Disorders Office Ergonomics Information from European Agency for Safety and Health at Work Human Factors Standards & Handbooks from the University of Maryland Department of Mechanical Engineering Human Factors and Ergonomics Resources Human Factors Engineering Collection, The University of Alabama in Huntsville Archives and Special Collections Industrial engineering Occupational safety and health Posture
Unconsciousness
Unconsciousness is a state in which a living individual exhibits a complete, or near-complete, inability to maintain an awareness of self and environment or to respond to any human or environmental stimulus. Unconsciousness may occur as the result of traumatic brain injury, brain hypoxia (inadequate oxygen, possibly due to a brain infarction or cardiac arrest), severe intoxication with drugs that depress the activity of the central nervous system (e.g., alcohol and other hypnotic or sedative drugs), severe fatigue, pain, anaesthesia, and other causes. Loss of consciousness should not be confused with the notion of the psychoanalytic unconscious, cognitive processes that take place outside awareness (e.g., implicit cognition), and with altered states of consciousness such as sleep, delirium, hypnosis, and other altered states in which the person responds to stimuli, including trance and psychedelic experiences. Causes This is not a complete list. Cardiovascular system Arrhythmia (irregular heart beat) Bleeding Cardiac arrest Cardiomegaly Heart failure (HF) (congestive heart failure (CHF)) Myocardial infarction (MI) (heart attack) Myocarditis Pericarditis Shock Nervous system Brain abscess Brain tumor Encephalitis Increased intracranial pressure Intracerebral hemorrhage (hemorrhagic stroke) Ischemic stroke Meningitis Seizure Subarachnoid hemorrhage Traumatic brain injury (TBI) (intracranial injury) Respiratory system Acute respiratory distress syndrome (ARDS) Choking Drowning Lung cancer (lung carcinoma) Pneumonia Pulmonary embolism (PE) Respiratory arrest Respiratory failure Other Drugs Electrocution Kidney failure Liver failure Poison or venom Sepsis Law and medicine In jurisprudence, unconsciousness may entitle the criminal defendant to the defense of automatism, i.e. a state without control of one's own actions, an excusing condition that allows a defendant to argue that they should not be held criminally liable for their actions or omissions. In most countries, courts must consider whether unconsciousness in a situation can be accepted as a defense; it can vary from case to case. Hence epileptic seizures, neurological dysfunctions and sleepwalking may be considered acceptable excusing conditions because the loss of control is not foreseeable, but falling asleep (especially while driving or during any other safety-critical activity) may not, because natural sleep rarely overcomes an ordinary person without warning. In many countries, it is presumed that someone who is less than fully conscious cannot give consent to anything. This can be relevant in cases of sexual assault, euthanasia, or patients giving informed consent with regard to starting or stopping a medical treatment. See also Coma Do not resuscitate Greyout Hypnosis Living will Shallow water blackout Sleep Somnophilia Syncope (fainting) Trance Traumatic brain injury Twilight sleep References Consciousness Symptoms and signs of mental disorders
Lactic acidosis
Lactic acidosis refers to the process leading to the production of lactate by anaerobic metabolism. It increases hydrogen ion concentration tending to the state of acidemia or low pH. The result can be detected with high levels of lactate and low levels of bicarbonate. This is usually considered the result of illness but also results from strenuous exercise. The effect on pH is moderated by the presence of respiratory compensation. Lactic acidosis is usually the result of tissue hypoxia which is not the same as arterial hypoxia. Adequate circulation of blood and perfusion of metabolizing tissue to meet demand is necessary to prevent tissue hypoxia. Lactic acidosis can also be the result of illnesses, medications, poisonings or inborn errors of metabolism that interfere directly with oxygen utilization by cells. The symptoms are generally attributable to the underlying cause, but may include nausea, vomiting, shortness of breath, and generalised weakness. The diagnosis is made on biochemical analysis of blood (often initially on arterial blood gas samples), and once confirmed, generally prompts an investigation to establish the underlying cause to treat the acidosis. In some situations, hemofiltration (purification of the blood) is temporarily required. In rare chronic forms of lactic acidosis caused by mitochondrial disease, a specific diet or dichloroacetate may be used. The prognosis of lactic acidosis depends largely on the underlying cause; in some situations (such as severe infections), it indicates an increased risk of death. Classification The Cohen–Woods classification categorizes causes of lactic acidosis as: Type A: Decreased tissue oxygenation (e.g., from decreased blood flow) Type B B1: Underlying diseases (sometimes causing type A) B2: Medication or intoxication B3: Inborn error of metabolism Signs and symptoms Lactic acidosis is commonly found in people who are unwell, such as those with severe heart and/or lung disease, a severe infection with sepsis, the systemic inflammatory response syndrome due to another cause, severe physical trauma, or severe depletion of body fluids. Symptoms in humans include all those of typical metabolic acidosis (nausea, vomiting, generalized muscle weakness, and laboured and deep breathing). Causes The several different causes of lactic acidosis include: Genetic conditions Biotinidase deficiency, multiple carboxylase deficiency, or nongenetic deficiencies of biotin Diabetes mellitus and deafness Fructose 1,6-bisphosphatase deficiency Glucose-6-phosphatase deficiency GRACILE syndrome Mitochondrial encephalomyopathy, lactic acidosis, and stroke-like episodes Pyruvate dehydrogenase deficiency Pyruvate carboxylase deficiency Leigh syndrome Drugs Linezolid Paracetamol/acetaminophen poisoning Metformin: this risk is low (less than 10 cases for 100,000 patient years), but the risk of metformin-induced lactic acidosis (MALA) increases in certain situations where both the plasma levels of metformin are increased and lactate clearance is impaired. The older related and now withdrawn drug phenformin carried a much higher risk of lactic acidosis. 
Isoniazid toxicity Propofol Epinephrine Propylene glycol (D-lactic acidosis) Nucleoside reverse-transcriptase inhibitors Abacavir/dolutegravir/lamivudine Emtricitabine/tenofovir Potassium cyanide (cyanide poisoning) Fialuridine Other Thiamine deficiency (especially during TPN) Impaired delivery of oxygen to cells in the tissues (e.g., from impaired blood flow (hypoperfusion)) Bleeding Polymyositis Ethanol toxicity Sepsis Shock Advanced liver disease Diabetic ketoacidosis Excessive exercise (overtraining) Regional hypoperfusion (e.g., bowel ischemia or marked cellulitis) Cancers such as Non-Hodgkin's and Burkitt lymphomas Pheochromocytoma Tumor lysis syndrome D-lactic acidosis due to intestinal bacterial flora production in short gut syndrome Pathophysiology Glucose metabolism begins with glycolysis, in which the molecule is broken down into pyruvate in ten enzymatic steps. A significant proportion of pyruvate is converted into lactate (the blood lactate-to-pyruvate ratio is normally 10:1). The human metabolism produces about 20 mmol/kg of lactic acid every 24 hours. This happens predominantly in tissues (especially muscle) that have high levels of the "A" isoform of the enzyme lactate dehydrogenase (LDHA), which predominantly converts pyruvate into lactate. The lactate is carried by the bloodstream to other tissues where it is converted back to pyruvate by the "B" isoform of LDH (LDHB). Firstly there is gluconeogenesis in the liver (as well as the kidney and some other tissues), where lactate is converted into pyruvate and then into glucose; this is known as the Cori cycle. In addition, pyruvate generated from lactate can be oxidized to acetyl-CoA, which can enter the citric acid cycle to enable ATP production by oxidative phosphorylation. Elevations in lactate are either a consequence of increased production or of decreased metabolism. With regards to metabolism, this predominantly takes place in the liver (70%), which explains that lactate levels may be elevated in the setting of liver disease. In "type A" lactic acidosis, the production of lactate is attributable to insufficient oxygen for aerobic metabolism. If there is no oxygen available for the parts of the glucose metabolism that require oxygen (citric acid cycle and oxidative phosphorylation), excess pyruvate will be converted in excess lactate. In "type B" lactic acidosis the lactate accumulates because there is a mismatch between glycolysis activity and the remainder of glucose metabolism. Examples are situations where the sympathetic nervous system is highly active (e.g. severe asthma). There is controversy as to whether elevated lactate in acute illness can be attributed to tissue hypoxia; there is limited empirical support for this theoretical notion. Diagnosis Acid-base disturbances such as lactic acidosis are typically first assessed using arterial blood gas tests. Testing of venous blood is also available as an alternative as they are effectively interchangeable. Normally resulting lactate concentrations are in the range indicated below: Lactic acidosis is classically defined as an elevated lactate together with pH < 7.35 and bicarbonate below 20 mmol/L, but this is not required as lactic acidosis may exist together with other acid-base abnormalities that may affect these two parameters. Treatment If elevated lactate is present in acute illness, supporting the oxygen supply and blood flow are key initial steps. 
Some vasopressors (drugs that augment the blood pressure) are less effective when lactate levels are high, and some agents that stimulate the beta-2 adrenergic receptor can elevate the lactate further. Direct removal of lactate from the body (e.g. with hemofiltration or dialysis) is difficult, with limited evidence for benefit; it may not be possible to keep up with the lactate production. Limited evidence supports the use of sodium bicarbonate solutions to improve the pH (which is associated with increased carbon dioxide generation and may reduce the calcium levels). Lactic acidosis caused by inherited mitochondrial disorders (type B3) may be treated with a ketogenic diet and possibly with dichloroacetate (DCA), although this may be complicated by peripheral neuropathy and has a weak evidence base. Prognosis Mild and transient elevations in lactate have limited impact on mortality, whereas sustained and severe lactate elevations are associated with a high mortality. The mortality of lactic acidosis in people taking metformin was previously reported to be 50%, but in more recent reports this was closer to 25%. Other animals Reptiles Reptiles, which rely primarily on anaerobic energy metabolism (glycolysis) for intense movements, can be particularly susceptible to lactic acidosis. In particular, during the capture of large crocodiles, the animals' use of their glycolytic muscles often alters the blood's pH to a point where they are unable to respond to stimuli or move. Cases are recorded in which particularly large crocodiles which put up extreme resistance to capture later died of the resulting pH imbalance. Certain turtle species have been found to be capable of tolerating high levels of lactic acid without experiencing the effects of lactic acidosis. Painted turtles hibernate buried in mud or underwater and do not resurface for the entire winter. As a result, they rely on lactic acid fermentation to provide the majority of their energy needs. Adaptations in particular in the turtle's blood composition and shell allow it to tolerate high levels of lactic acid accumulation. In the anoxic conditions where fermentation is dominant, calcium levels in the blood plasma increase. This calcium serves as a buffer, reacting with the excess lactate to form the precipitate calcium lactate. This precipitate is suggested to be reabsorbed by the shell and skeleton, thereby removing it from the bloodstream; studies examining turtles that have been subjected to prolonged anoxic conditions have up to 45% of their lactate stored within their skeletal structure. Ruminants In ruminant livestock, the cause of clinically serious lactic acidosis is different from the causes described above. In domesticated ruminants, lactic acidosis may occur as a consequence of ingesting large amounts of grain, especially when the rumen population is poorly adapted to deal with grain. Activity of various rumen organisms results in accumulation of various volatile fatty acids (normally, mostly acetic, propionic, and butyric acids), which are partially dissociated. Although some lactate is normally produced in the rumen, it is normally metabolized by such organisms as Megasphaera elsdenii and, to a lesser extent, Selenomonas ruminantium and some other organisms. With high grain consumption, the concentration of dissociated organic acids can become quite high, resulting in rumen pH dropping below 6. Within this lower pH range, Lactobacillus spp. (producing lactate and hydrogen ions) are favored, and M. elsdenii and S. 
ruminantium are inhibited, tending to result in a considerable rise of lactate and hydrogen ion concentrations in the rumen fluid. The pKa of lactic acid is low, about 3.9, versus, for example, 4.8 for acetic acid; this contributes to the considerable drop in rumen pH which can occur. Because of the high solute concentration of the rumen fluid under such conditions, considerable water is translocated from the blood to the rumen along the osmotic potential gradient, resulting in dehydration which cannot be relieved by drinking, and which can ultimately lead to hypovolemic shock. As more lactate accumulates and rumen pH drops, the ruminal concentration of undissociated lactic acid increases. Undissociated lactic acid can cross the rumen wall to the blood, where it dissociates, lowering blood pH. Both L and D isomers of lactic acid are produced in the rumen; these isomers are metabolized by different metabolic pathways, and activity of the principal enzyme involved in metabolism of the D isomer declines greatly with lower pH, tending to result in an increased ratio of D:L isomers as acidosis progresses. Measures for preventing lactic acidosis in ruminants include avoidance of excessive amounts of grain in the diet, and gradual introduction of grain over a period of several days, to develop a rumen population capable of safely dealing with a relatively high grain intake. Administration of lasalocid or monensin in feed can reduce risk of lactic acidosis in ruminants, inhibiting most of the lactate-producing bacterial species without inhibiting the major lactate fermenters. Also, using a higher feeding frequency to provide the daily grain ration can allow higher grain intake without reducing the pH of the rumen fluid. Treatment of lactic acidosis in ruminants may involve intravenous administration of dilute sodium bicarbonate, oral administration of magnesium hydroxide, and/or repeated removal of rumen fluids and replacement with water (followed by reinoculation with rumen organisms, if necessary). References External links Acid–base disturbances
Hepatology
Hepatology is the branch of medicine that incorporates the study of liver, gallbladder, biliary tree, and pancreas as well as management of their disorders. Although traditionally considered a sub-specialty of gastroenterology, rapid expansion has led in some countries to doctors specializing solely on this area, who are called hepatologists. Diseases and complications related to viral hepatitis and alcohol are the main reason for seeking specialist advice. More than two billion people have been infected with hepatitis B virus at some point in their life, and approximately 350 million have become persistent carriers. Up to 80% of liver cancers can be attributed to either hepatitis B or hepatitis C virus. In terms of mortality, the former is second only to smoking among known agents causing cancer. With more widespread implementation of vaccination and strict screening before blood transfusion, lower infection rates are expected in the future. In many countries, however, overall alcohol consumption is increasing, and consequently the number of people with cirrhosis and other related complications is commensurately increasing. Scope of specialty As for many medical specialties, patients are most likely to be referred by family physicians (i.e., GP) or by physicians from different disciplines. The reasons might be: Drug overdose. Paracetamol overdose is common. Gastrointestinal bleeding from portal hypertension related to liver damage Abnormal blood test suggesting liver disease Enzyme defects leading to bigger liver in children commonly named storage disease of liver Jaundice / Hepatitis virus positivity in blood, perhaps discovered on screening blood tests Ascites or swelling of abdomen from fluid accumulation, commonly due to liver disease but can be from other diseases like heart failure All patients with advanced liver disease e.g. cirrhosis should be under specialist care To undergo ERCP for diagnosing diseases of biliary tree or their management Fever with other features suggestive of infection involving mentioned organs. Some exotic tropical diseases like hydatid cyst, kala-azar or schistosomiasis may be suspected. Microbiologists would be involved as well Systemic diseases affecting liver and biliary tree e.g. haemochromatosis Follow-up of liver transplant Pancreatitis - commonly due to alcohol or gallstone Cancer of above organs. Usually multi-disciplinary approach is undertaken with involvement of oncologist and other experts. History Evidence from autopsies on Egyptian mummies suggests that liver damage from the parasitic infection bilharziasis was widespread in the ancient society. It is possible that the Greeks may have been aware of the liver's ability to exponentially duplicate as illustrated by the story of Prometheus. However, knowledge about liver disease in antiquity is questionable. Most of the important advances in the field have been made in the last 50 years. In 400 BC Hippocrates mentioned liver abscess in aphorisms. Roman anatomist Galen thought the liver was the principal organ of the body. He also identified its relationship with the gallbladder and spleen. Around 100 CE Aretaeus of Cappadocia wrote on jaundice In the medieval period Avicenna noted the importance of urine in diagnosing liver conditions. In 1770, French anatomist Antoine Portal noted bleeding due to oesophageal varices, In 1844, Gabriel Valentin showed pancreatic juices break down food in digestion. 
In 1846, Justus von Liebig discovered tyrosine in pancreatic juice. In 1862, Austin Flint described the production of "stercorin". In 1875, Victor Charles Hanot described cirrhotic jaundice and other diseases of the liver. In 1958, Moore developed a standard technique for canine orthotopic liver transplantation. The first human liver transplant was performed in 1963 by Dr. Thomas E. Starzl on a three-year-old male afflicted with biliary atresia after perfecting the technique on canine livers. Baruch S. Blumberg discovered hepatitis B virus in 1966 and developed the first vaccine against it in 1969. He was awarded the Nobel Prize in Physiology or Medicine in 1976. In 1989, investigators from the CDC (Daniel W. Bradley) and Chiron (Michael Houghton) identified the hepatitis C virus, which had previously been known as non-A, non-B hepatitis and could not be detected in the blood supply. Only in 1992 was a blood test created that could detect hepatitis C in donated blood. The word hepatology is from Ancient Greek ἧπαρ (hepar) or ἡπατο- (hepato-), meaning "liver", and -λογία (-logia), meaning "study". Disease classification 1. International Classification of Disease (ICD 2007) – WHO classification: Chapter XI: Diseases of the digestive system K70-K77 Diseases of liver K80-K87 Disorders of gallbladder, biliary tract and pancreas 2. MeSH (medical subject heading): G02.403.776.409.405, same as "Gastroenterology" C06.552 Liver Diseases C06.130 Biliary Tract Diseases C06.689 Pancreatic diseases 3. National Library of Medicine Catalogue WI 700-740 Liver and biliary tree diseases WI 800-830 Pancreas Also see Hepato-biliary diseases Important procedures Endoscopic retrograde cholangiopancreatography (ERCP) Transhepatic pancreato-cholangiography (TPC) Transjugular intrahepatic portosystemic shunt (TIPSS) Liver transplant and pancreas transplant References Gastroenterology
Interstitium
The interstitium is a contiguous fluid-filled space existing between a structural barrier, such as a cell membrane or the skin, and internal structures, such as organs, including muscles and the circulatory system. The fluid in this space is called interstitial fluid, comprises water and solutes, and drains into the lymph system. The interstitial compartment is composed of connective and supporting tissues within the body – called the extracellular matrix – that are situated outside the blood and lymphatic vessels and the parenchyma of organs. The role of the interstitium in solute concentration, protein transport and hydrostatic pressure impacts human pathology and physiological responses such as edema, inflammation and shock. Structure The non-fluid parts of the interstitium are predominantly collagen types I, III, and V, elastin, and glycosaminoglycans, such as hyaluronan and proteoglycans that are cross-linked to form a honeycomb-like reticulum. Collagen bundles of the extracellular matrix form scaffolding with a high tensile strength. Interstitial cells (e.g., fibroblasts, dendritic cells, adipocytes, interstitial cells of Cajal and inflammatory cells, such as macrophages and mast cells), serve a variety of structural and immune functions. Fibroblasts synthesize the production of structural molecules as well as enzymes that break down polymeric molecules. Such structural components exist both for the general interstitium of the body, and within individual organs, such as the myocardial interstitium of the heart, the renal interstitium of the kidney, and the pulmonary interstitium of the lung. The interstitium in the submucosae of visceral organs, the dermis, superficial fascia, and perivascular adventitia are fluid-filled spaces supported by a collagen bundle lattice. Blind end, highly permeable, lymphatic capillaries extend into the interstitium. The fluid spaces communicate with draining lymph nodes, although they do not have lining cells or structures of lymphatic channels. Interstitial fluid entering the lymphatic system becomes lymph, which is transported through lymphatic vessels until it empties into the microcirculation and the venous system. Functions The interstitial fluid is a reservoir and transportation system for nutrients and solutes distributing among organs, cells, and capillaries, for signaling molecules communicating between cells, and for antigens and cytokines participating in immune regulation. The structure of the gel reticulum plays a role in the distribution of solutes across the interstitium, as the microstructure of the extracellular matrix in some parts excludes larger molecules (exclusion volume). The density of the collagen matrix fluctuates with the fluid volume of the interstitium. Increasing fluid volume is associated with a decrease in matrix fiber density, and a lower exclusion volume. The total fluid volume of the interstitium during health is about 20% of body weight, but this space is dynamic and may change in volume and composition during immune responses and in conditions such as cancer, and specifically within the interstitium of tumors. The amount of interstitial fluid varies from about 50% of the tissue weight in skin to about 10% in skeletal muscle. Interstitial fluid pressure is variable, ranging from -1 to -4 mmHg in tissues like the skin, intestine and lungs to 21 to 24 mmHg in the liver, kidney and myocardium. Generally, increasing interstitial volume is associated with increased interstitial pressure and microvascular filtration. 
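The pressure figures quoted above are often related to microvascular filtration through the classic Starling relationship. The sketch below is a minimal illustration of that standard textbook relationship, which is not stated in this article; every numerical value in it is made up for the example rather than measured.

```python
# Minimal sketch of the Starling filtration relationship (standard physiology,
# not taken from this article). All parameter values are illustrative only.
def net_filtration(Kf, Pc, Pi, sigma, pi_c, pi_i):
    """Jv = Kf * [(Pc - Pi) - sigma * (pi_c - pi_i)], net flow out of the capillary."""
    return Kf * ((Pc - Pi) - sigma * (pi_c - pi_i))

# Skin-like interstitial pressure of about -2 mmHg (within the -1 to -4 mmHg range above)
baseline = net_filtration(Kf=1.0, Pc=25, Pi=-2, sigma=0.9, pi_c=28, pi_i=8)
# If interstitial pressure rises toward 0 mmHg as the compartment expands,
# the hydrostatic gradient narrows and net filtration falls:
expanded = net_filtration(Kf=1.0, Pc=25, Pi=0, sigma=0.9, pi_c=28, pi_i=8)
print(baseline, expanded)  # 9.0 then 7.0 (arbitrary units)
```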
The renal interstitium facilitates solute and water transport between blood and urine in the vascular and tubular elements of the kidneys, and water reabsorption through changes in solute concentrations and hydrostatic gradients. The myocardial interstitium participates in ionic exchanges associated with the spread of electrical events. The pulmonary interstitium allows for fluctuations in lung volume between inspiration and expiration. The composition and chemical properties of the interstitial fluid vary among organs and undergo changes in chemical composition during normal function, as well as during body growth, conditions of inflammation, and development of diseases, as in heart failure and chronic kidney disease. Disease In people with lung diseases, heart disease, cancer, kidney disease, immune disorders, and periodontal disease, the interstitial fluid and lymph system are sites where disease mechanisms may develop. Interstitial fluid flow is associated with the migration of cancer cells to metastatic sites. The enhanced permeability and retention effect refers to increased interstitial flow causing a neutral or reversed pressure differential between blood vessels and healthy tissue, limiting the distribution of intravenous drugs to tumors, which under other circumstances display a high-pressure gradient at their periphery. Changes in interstitial volume and pressure play critical roles in the onset of conditions like shock and inflammation. During hypovolemic shock, digestive enzymes and inflammatory agents diffuse to the interstitial space, then drain into the mesenteric lymphatic system and enter the circulation, contributing to systemic inflammation. Accumulating fluid in the interstitial space (interstitial edema) is caused by increased microvascular pressure and permeability, a positive feedback mechanism resulting in an associated increase in the rate of microvascular filtration into the interstitial space. Decreased lymphatic drainage due to blockage can compound these effects. Interstitial edema can prevent oxygen diffusion across tissue and, in the brain, kidney and intestines, can lead to the onset of compartment syndrome. See also Extracellular matrix Extracellular fluid References Anatomy Extracellular matrix Matrices (biology) Tissues (biology)
Biomolecule
A biomolecule or biological molecule is loosely defined as a molecule produced by a living organism and essential to one or more typically biological processes. Biomolecules include large macromolecules such as proteins, carbohydrates, lipids, and nucleic acids, as well as small molecules such as vitamins and hormones. A general name for this class of material is biological materials. Biomolecules are an important element of living organisms, those biomolecules are often endogenous, produced within the organism but organisms usually need exogenous biomolecules, for example certain nutrients, to survive. Biology and its subfields of biochemistry and molecular biology study biomolecules and their reactions. Most biomolecules are organic compounds, and just four elements—oxygen, carbon, hydrogen, and nitrogen—make up 96% of the human body's mass. But many other elements, such as the various biometals, are also present in small amounts. The uniformity of both specific types of molecules (the biomolecules) and of certain metabolic pathways are invariant features among the wide diversity of life forms; thus these biomolecules and metabolic pathways are referred to as "biochemical universals" or "theory of material unity of the living beings", a unifying concept in biology, along with cell theory and evolution theory. Types of biomolecules A diverse range of biomolecules exist, including: Small molecules: Lipids, fatty acids, glycolipids, sterols, monosaccharides Vitamins Hormones, neurotransmitters Metabolites Monomers, oligomers and polymers: Nucleosides and nucleotides Nucleosides are molecules formed by attaching a nucleobase to a ribose or deoxyribose ring. Examples of these include cytidine (C), uridine (U), adenosine (A), guanosine (G), and thymidine (T). Nucleosides can be phosphorylated by specific kinases in the cell, producing nucleotides. Both DNA and RNA are polymers, consisting of long, linear molecules assembled by polymerase enzymes from repeating structural units, or monomers, of mononucleotides. DNA uses the deoxynucleotides C, G, A, and T, while RNA uses the ribonucleotides (which have an extra hydroxyl(OH) group on the pentose ring) C, G, A, and U. Modified bases are fairly common (such as with methyl groups on the base ring), as found in ribosomal RNA or transfer RNAs or for discriminating the new from old strands of DNA after replication. Each nucleotide is made of an acyclic nitrogenous base, a pentose and one to three phosphate groups. They contain carbon, nitrogen, oxygen, hydrogen and phosphorus. They serve as sources of chemical energy (adenosine triphosphate and guanosine triphosphate), participate in cellular signaling (cyclic guanosine monophosphate and cyclic adenosine monophosphate), and are incorporated into important cofactors of enzymatic reactions (coenzyme A, flavin adenine dinucleotide, flavin mononucleotide, and nicotinamide adenine dinucleotide phosphate). DNA and RNA structure DNA structure is dominated by the well-known double helix formed by Watson-Crick base-pairing of C with G and A with T. This is known as B-form DNA, and is overwhelmingly the most favorable and common state of DNA; its highly specific and stable base-pairing is the basis of reliable genetic information storage. DNA can sometimes occur as single strands (often needing to be stabilized by single-strand binding proteins) or as A-form or Z-form helices, and occasionally in more complex 3D structures such as the crossover at Holliday junctions during DNA replication. 
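As a toy illustration of the Watson-Crick pairing rules just described (C with G, A with T), the short sketch below builds the complementary strand of a DNA sequence; the input sequence is invented for the example.

```python
# Toy illustration of Watson-Crick base pairing (C-G, A-T): build the reverse
# complement of one DNA strand. The input sequence is made up for the example.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the base-paired partner strand, read 5' to 3'."""
    return "".join(PAIRS[base] for base in reversed(strand.upper()))

print(reverse_complement("ATGCGTA"))  # -> TACGCAT
```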
RNA, in contrast, forms large and complex 3D tertiary structures reminiscent of proteins, as well as the loose single strands with locally folded regions that constitute messenger RNA molecules. Those RNA structures contain many stretches of A-form double helix, connected into definite 3D arrangements by single-stranded loops, bulges, and junctions. Examples are tRNA, ribosomes, ribozymes, and riboswitches. These complex structures are facilitated by the fact that the RNA backbone has less local flexibility than DNA but a large set of distinct conformations, apparently because of both positive and negative interactions of the extra OH on the ribose. Structured RNA molecules can bind other molecules highly specifically and can themselves be recognized specifically; in addition, they can perform enzymatic catalysis (when they are known as "ribozymes", as initially discovered by Tom Cech and colleagues). Saccharides Monosaccharides are the simplest form of carbohydrates, with only one simple sugar. They essentially contain an aldehyde or ketone group in their structure. The presence of an aldehyde group in a monosaccharide is indicated by the prefix aldo-. Similarly, a ketone group is denoted by the prefix keto-. Examples of monosaccharides are the hexoses glucose, fructose and galactose, as well as trioses, tetroses, heptoses, and the pentoses ribose and deoxyribose. Consumed fructose and glucose have different rates of gastric emptying, are differentially absorbed and have different metabolic fates, providing multiple opportunities for two different saccharides to differentially affect food intake. Most saccharides eventually provide fuel for cellular respiration. Disaccharides are formed when two monosaccharides, or two single simple sugars, form a bond with removal of water. They can be hydrolyzed to yield their saccharide building blocks by boiling with dilute acid or reacting them with appropriate enzymes. Examples of disaccharides include sucrose, maltose, and lactose. Polysaccharides are polymerized monosaccharides, or complex carbohydrates. They have multiple simple sugars. Examples are starch, cellulose, and glycogen. They are generally large and often have a complex branched connectivity. Because of their size, polysaccharides are not water-soluble, but their many hydroxy groups become hydrated individually when exposed to water, and some polysaccharides form thick colloidal dispersions when heated in water. Shorter polysaccharides, with 3 to 10 monomers, are called oligosaccharides. A fluorescent indicator-displacement molecular imprinting sensor was developed for discriminating saccharides. It successfully discriminated three brands of orange juice beverage. The resulting change in fluorescence intensity of the sensing films is directly related to the saccharide concentration. Lignin Lignin is a complex polyphenolic macromolecule composed mainly of beta-O-4 aryl linkages. After cellulose, lignin is the second most abundant biopolymer and is one of the primary structural components of most plants. It contains subunits derived from p-coumaryl alcohol, coniferyl alcohol, and sinapyl alcohol, and is unusual among biomolecules in that it is racemic. The lack of optical activity is due to the polymerization of lignin, which occurs via free radical coupling reactions in which there is no preference for either configuration at a chiral center. Lipid Lipids (oleaginous) are chiefly fatty acid esters, and are the basic building blocks of biological membranes.
Another biological role is energy storage (e.g., triglycerides). Most lipids consist of a polar or hydrophilic head (typically glycerol) and one to three nonpolar or hydrophobic fatty acid tails, and therefore they are amphiphilic. Fatty acids consist of unbranched chains of carbon atoms that are connected by single bonds alone (saturated fatty acids) or by both single and double bonds (unsaturated fatty acids). The chains are usually 14-24 carbons long, and the number of carbons is nearly always even. For lipids present in biological membranes, the hydrophilic head is from one of three classes: glycolipids, whose heads contain an oligosaccharide with 1-15 saccharide residues; phospholipids, whose heads contain a positively charged group that is linked to the tail by a negatively charged phosphate group; and sterols, whose heads contain a planar steroid ring, for example, cholesterol. Other lipids include prostaglandins and leukotrienes, which are both 20-carbon fatty acyl units synthesized from arachidonic acid; these are collectively known as eicosanoids. Amino acids Amino acids contain both amino and carboxylic acid functional groups. (In biochemistry, the term amino acid is used when referring to those amino acids in which the amino and carboxylate functionalities are attached to the same carbon, plus proline, which is strictly an imino acid because its amine group is secondary.) Modified amino acids are sometimes observed in proteins; this is usually the result of enzymatic modification after translation (protein synthesis). For example, phosphorylation of serine by kinases and dephosphorylation by phosphatases is an important control mechanism in the cell cycle. Only two amino acids other than the standard twenty are known to be incorporated into proteins during translation, in certain organisms: selenocysteine is incorporated into some proteins at a UGA codon, which is normally a stop codon, and pyrrolysine is incorporated into some proteins at a UAG codon, for instance in some methanogens, in enzymes that are used to produce methane. Besides those used in protein synthesis, other biologically important amino acids include carnitine (used in lipid transport within a cell), ornithine, GABA and taurine. Protein structure The particular series of amino acids that form a protein is known as that protein's primary structure. This sequence is determined by the genetic makeup of the individual. It specifies the order of side-chain groups along the linear polypeptide "backbone". Proteins have two types of well-classified, frequently occurring elements of local structure defined by a particular pattern of hydrogen bonds along the backbone: alpha helix and beta sheet. Their number and arrangement are called the secondary structure of the protein. Alpha helices are regular spirals stabilized by hydrogen bonds between the backbone CO group (carbonyl) of one amino acid residue and the backbone NH group (amide) of the i+4 residue. The spiral has about 3.6 amino acids per turn, and the amino acid side chains stick out from the cylinder of the helix. Beta pleated sheets are formed by backbone hydrogen bonds between individual beta strands, each of which is in an "extended", or fully stretched-out, conformation. The strands may lie parallel or antiparallel to each other, and the side-chain direction alternates above and below the sheet. Hemoglobin contains only helices, natural silk is formed of beta pleated sheets, and many enzymes have a pattern of alternating helices and beta-strands. 
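The helix geometry just described (hydrogen bonds from residue i to residue i+4, roughly 3.6 residues per turn) lends itself to a small worked example. The Python sketch below is illustrative only and assumes the textbook rise of about 1.5 angstroms per residue, a value not stated in the text above; the helper names and the 11-residue example are invented.

# Illustrative sketch of idealized alpha-helix geometry (assumed textbook values).
RESIDUES_PER_TURN = 3.6
RISE_PER_RESIDUE_ANGSTROM = 1.5  # assumption: typical rise of an ideal alpha helix

def helix_hbond_pairs(n_residues: int):
    # Backbone hydrogen bonds link the C=O of residue i to the N-H of residue i+4.
    return [(i, i + 4) for i in range(1, n_residues - 3)]

def helix_length_angstrom(n_residues: int) -> float:
    # Approximate end-to-end length of an ideal helix of n residues.
    return n_residues * RISE_PER_RESIDUE_ANGSTROM

if __name__ == "__main__":
    n = 11  # a hypothetical helix of 11 residues, about three turns (11 / 3.6)
    print(helix_hbond_pairs(n))       # [(1, 5), (2, 6), ..., (7, 11)]
    print(helix_length_angstrom(n))   # 16.5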
The secondary-structure elements are connected by "loop" or "coil" regions of non-repetitive conformation, which are sometimes quite mobile or disordered but usually adopt a well-defined, stable arrangement. The overall, compact, 3D structure of a protein is termed its tertiary structure or its "fold". It is formed as a result of various attractive forces such as hydrogen bonding, disulfide bridges, hydrophobic interactions, hydrophilic interactions, and van der Waals forces. When two or more polypeptide chains (either of identical or of different sequence) cluster to form a protein, the quaternary structure of the protein is formed. Quaternary structure is an attribute of homomeric (same-sequence chains) or heteromeric (different-sequence chains) proteins like hemoglobin, which consists of two "alpha" and two "beta" polypeptide chains. Apoenzymes An apoenzyme (or, generally, an apoprotein) is the protein without any small-molecule cofactors, substrates, or inhibitors bound. It is often important as an inactive storage, transport, or secretory form of a protein. This is required, for instance, to protect the secretory cell from the activity of that protein. Apoenzymes become active enzymes on addition of a cofactor. Cofactors can be either inorganic (e.g., metal ions and iron-sulfur clusters) or organic compounds (e.g., flavin and heme). Organic cofactors can be either prosthetic groups, which are tightly bound to an enzyme, or coenzymes, which are released from the enzyme's active site during the reaction. Isoenzymes Isoenzymes, or isozymes, are multiple forms of an enzyme, with slightly different protein sequences and closely similar but usually not identical functions. They are either products of different genes, or else different products of alternative splicing. They may either be produced in different organs or cell types to perform the same function, or several isoenzymes may be produced in the same cell type under differential regulation to suit the needs of changing development or environment. LDH (lactate dehydrogenase) has multiple isozymes, while fetal hemoglobin is an example of a developmentally regulated isoform of a non-enzymatic protein. The relative levels of isoenzymes in blood can be used to diagnose problems in the organ of secretion. See also Biomolecular engineering List of biomolecules Metabolism Multi-state modeling of biomolecules References External links Society for Biomolecular Sciences, a provider of a forum for education and information exchange among professionals within drug discovery and related disciplines.
Chemistry
Chemistry is the scientific study of the properties and behavior of matter. It is a physical science within the natural sciences that studies the chemical elements that make up matter and compounds made of atoms, molecules and ions: their composition, structure, properties, behavior and the changes they undergo during reactions with other substances. Chemistry also addresses the nature of chemical bonds in chemical compounds. In the scope of its subject, chemistry occupies an intermediate position between physics and biology. It is sometimes called the central science because it provides a foundation for understanding both basic and applied scientific disciplines at a fundamental level. For example, chemistry explains aspects of plant growth (botany), the formation of igneous rocks (geology), how atmospheric ozone is formed and how environmental pollutants are degraded (ecology), the properties of the soil on the Moon (cosmochemistry), how medications work (pharmacology), and how to collect DNA evidence at a crime scene (forensics). Chemistry has existed under various names since ancient times. It has evolved, and now chemistry encompasses various areas of specialisation, or subdisciplines, that continue to increase in number and interrelate to create further interdisciplinary fields of study. The applications of various fields of chemistry are used frequently for economic purposes in the chemical industry. Etymology The word chemistry comes from a modification during the Renaissance of the word alchemy, which referred to an earlier set of practices that encompassed elements of chemistry, metallurgy, philosophy, astrology, astronomy, mysticism, and medicine. Alchemy is often associated with the quest to turn lead or other base metals into gold, though alchemists were also interested in many of the questions of modern chemistry. The modern word alchemy in turn is derived from the Arabic word . This may have Egyptian origins since is derived from the Ancient Greek , which is in turn derived from the word , which is the ancient name of Egypt in the Egyptian language. Alternately, may derive from 'cast together'. Modern principles The current model of atomic structure is the quantum mechanical model. Traditional chemistry starts with the study of elementary particles, atoms, molecules, substances, metals, crystals and other aggregates of matter. Matter can be studied in solid, liquid, gas and plasma states, in isolation or in combination. The interactions, reactions and transformations that are studied in chemistry are usually the result of interactions between atoms, leading to rearrangements of the chemical bonds which hold atoms together. Such behaviors are studied in a chemistry laboratory. The chemistry laboratory stereotypically uses various forms of laboratory glassware. However glassware is not central to chemistry, and a great deal of experimental (as well as applied/industrial) chemistry is done without it. A chemical reaction is a transformation of some substances into one or more different substances. The basis of such a chemical transformation is the rearrangement of electrons in the chemical bonds between atoms. It can be symbolically depicted through a chemical equation, which usually involves atoms as subjects. The number of atoms on the left and the right in the equation for a chemical transformation is equal. (When the number of atoms on either side is unequal, the transformation is referred to as a nuclear reaction or radioactive decay.) 
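As a concrete illustration of the atom bookkeeping described above (equal numbers of each kind of atom on both sides of a chemical equation), here is a short, illustrative Python sketch. The tiny formula parser handles only simple formulas such as CH4 or H2O (no parentheses or hydrates), and the function names and the methane-combustion example are chosen purely for demonstration.

import re
from collections import Counter

def atom_counts(formula: str, coefficient: int = 1) -> Counter:
    # Count atoms in a simple formula such as "CO2"; element symbols are an
    # uppercase letter optionally followed by a lowercase letter and a count.
    counts = Counter()
    for symbol, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[symbol] += coefficient * (int(number) if number else 1)
    return counts

def is_balanced(reactants, products) -> bool:
    # reactants and products are lists of (coefficient, formula) pairs.
    left = sum((atom_counts(f, c) for c, f in reactants), Counter())
    right = sum((atom_counts(f, c) for c, f in products), Counter())
    return left == right

if __name__ == "__main__":
    # CH4 + 2 O2 -> CO2 + 2 H2O: combustion of methane, used only as an example.
    print(is_balanced([(1, "CH4"), (2, "O2")], [(1, "CO2"), (2, "H2O")]))  # True
    print(is_balanced([(1, "CH4"), (1, "O2")], [(1, "CO2"), (2, "H2O")]))  # False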
The type of chemical reactions a substance may undergo and the energy changes that may accompany it are constrained by certain basic rules, known as chemical laws. Energy and entropy considerations are invariably important in almost all chemical studies. Chemical substances are classified in terms of their structure, phase, as well as their chemical compositions. They can be analyzed using the tools of chemical analysis, e.g. spectroscopy and chromatography. Scientists engaged in chemical research are known as chemists. Most chemists specialize in one or more sub-disciplines. Several concepts are essential for the study of chemistry; some of them are: Matter In chemistry, matter is defined as anything that has rest mass and volume (it takes up space) and is made up of particles. The particles that make up matter have rest mass as well – not all particles have rest mass, such as the photon. Matter can be a pure chemical substance or a mixture of substances. Atom The atom is the basic unit of chemistry. It consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. The nucleus is made up of positively charged protons and uncharged neutrons (together called nucleons), while the electron cloud consists of negatively charged electrons which orbit the nucleus. In a neutral atom, the negatively charged electrons balance out the positive charge of the protons. The nucleus is dense; the mass of a nucleon is approximately 1,836 times that of an electron, yet the radius of an atom is about 10,000 times that of its nucleus. The atom is also the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state(s), coordination number, and preferred types of bonds to form (e.g., metallic, ionic, covalent). Element A chemical element is a pure substance which is composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol Z. The mass number is the sum of the number of protons and neutrons in a nucleus. Although all the nuclei of all atoms belonging to one element will have the same atomic number, they may not necessarily have the same mass number; atoms of an element which have different mass numbers are known as isotopes. For example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, but atoms of carbon may have mass numbers of 12 or 13. The standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. The periodic table is arranged in groups, or columns, and periods, or rows. The periodic table is useful in identifying periodic trends. Compound A compound is a pure chemical substance composed of more than one element. The properties of a compound bear little similarity to those of its elements. The standard nomenclature of compounds is set by the International Union of Pure and Applied Chemistry (IUPAC). Organic compounds are named according to the organic nomenclature system. The names for inorganic compounds are created according to the inorganic nomenclature system. When a compound has more than one component, then they are divided into two classes, the electropositive and the electronegative components. In addition the Chemical Abstracts Service has devised a method to index chemical substances. 
In this scheme each chemical substance is identifiable by a number known as its CAS registry number. Molecule A molecule is the smallest indivisible portion of a pure chemical substance that has its unique set of chemical properties, that is, its potential to undergo a certain set of chemical reactions with other substances. However, this definition only works well for substances that are composed of molecules, which is not true of many substances (see below). Molecules are typically a set of atoms bound together by covalent bonds, such that the structure is electrically neutral and all valence electrons are paired with other electrons either in bonds or in lone pairs. Thus, molecules exist as electrically neutral units, unlike ions. When this rule is broken, giving the "molecule" a charge, the result is sometimes named a molecular ion or a polyatomic ion. However, the discrete and separate nature of the molecular concept usually requires that molecular ions be present only in well-separated form, such as a directed beam in a vacuum in a mass spectrometer. Charged polyatomic collections residing in solids (for example, common sulfate or nitrate ions) are generally not considered "molecules" in chemistry. Some molecules contain one or more unpaired electrons, creating radicals. Most radicals are comparatively reactive, but some, such as nitric oxide (NO), can be stable. The "inert" or noble gas elements (helium, neon, argon, krypton, xenon and radon) are composed of lone atoms as their smallest discrete unit, but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. Identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals. However, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the Earth are chemical compounds without molecules. These other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules per se. Instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. Examples of such substances are mineral salts (such as table salt), solids like carbon and diamond, metals, and familiar silica and silicate minerals such as quartz and granite. One of the main characteristics of a molecule is its geometry, often called its structure. While the structure of diatomic, triatomic or tetra-atomic molecules may be trivial (linear, angular, pyramidal, etc.), the structure of polyatomic molecules, which are constituted of more than six atoms (of several elements), can be crucial for their chemical nature. Substance and mixture A chemical substance is a kind of matter with a definite composition and set of properties. A collection of substances is called a mixture. Examples of mixtures are air and alloys. Mole and amount of substance The mole is a unit of measurement that denotes an amount of substance (also called chemical amount). One mole is defined to contain exactly 6.02214076×10²³ particles (atoms, molecules, ions, or electrons), where the number of particles per mole is known as the Avogadro constant. Molar concentration is the amount of a particular substance per volume of solution, and is commonly reported in mol/dm3. 
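A brief worked example may make the mole and molar concentration concrete. The Python sketch below is illustrative only: the NaCl molar mass of about 58.44 g/mol and the sample quantities are supplied here for the example and do not come from the text above.

AVOGADRO = 6.02214076e23  # particles per mole (exact by definition)

def moles_from_mass(mass_g: float, molar_mass_g_per_mol: float) -> float:
    # Amount of substance: n = m / M.
    return mass_g / molar_mass_g_per_mol

def molar_concentration(moles: float, volume_dm3: float) -> float:
    # Molar concentration: c = n / V, reported in mol/dm3.
    return moles / volume_dm3

if __name__ == "__main__":
    # Example: 5.844 g of NaCl (molar mass roughly 58.44 g/mol) dissolved in 0.5 dm3 of water.
    n = moles_from_mass(5.844, 58.44)
    print(n)                            # about 0.1 mol
    print(n * AVOGADRO)                 # about 6.0e22 formula units
    print(molar_concentration(n, 0.5))  # about 0.2 mol/dm3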
Phase In addition to the specific chemical properties that distinguish different chemical classifications, chemicals can exist in several phases. For the most part, the chemical classifications are independent of these bulk phase classifications; however, some more exotic phases are incompatible with certain chemical properties. A phase is a set of states of a chemical system that have similar bulk structural properties, over a range of conditions, such as pressure or temperature. Physical properties, such as density and refractive index tend to fall within values characteristic of the phase. The phase of matter is defined by the phase transition, which is when energy put into or taken out of the system goes into rearranging the structure of the system, instead of changing the bulk conditions. Sometimes the distinction between phases can be continuous instead of having a discrete boundary' in this case the matter is considered to be in a supercritical state. When three states meet based on the conditions, it is known as a triple point and since this is invariant, it is a convenient way to define a set of conditions. The most familiar examples of phases are solids, liquids, and gases. Many substances exhibit multiple solid phases. For example, there are three phases of solid iron (alpha, gamma, and delta) that vary based on temperature and pressure. A principal difference between solid phases is the crystal structure, or arrangement, of the atoms. Another phase commonly encountered in the study of chemistry is the aqueous phase, which is the state of substances dissolved in aqueous solution (that is, in water). Less familiar phases include plasmas, Bose–Einstein condensates and fermionic condensates and the paramagnetic and ferromagnetic phases of magnetic materials. While most familiar phases deal with three-dimensional systems, it is also possible to define analogs in two-dimensional systems, which has received attention for its relevance to systems in biology. Bonding Atoms sticking together in molecules or crystals are said to be bonded with one another. A chemical bond may be visualized as the multipole balance between the positive charges in the nuclei and the negative charges oscillating about them. More than simple attraction and repulsion, the energies and distributions characterize the availability of an electron to bond to another atom. The chemical bond can be a covalent bond, an ionic bond, a hydrogen bond or just because of Van der Waals force. Each of these kinds of bonds is ascribed to some potential. These potentials create the interactions which hold atoms together in molecules or crystals. In many simple compounds, valence bond theory, the Valence Shell Electron Pair Repulsion model (VSEPR), and the concept of oxidation number can be used to explain molecular structure and composition. An ionic bond is formed when a metal loses one or more of its electrons, becoming a positively charged cation, and the electrons are then gained by the non-metal atom, becoming a negatively charged anion. The two oppositely charged ions attract one another, and the ionic bond is the electrostatic force of attraction between them. For example, sodium (Na), a metal, loses one electron to become an Na+ cation while chlorine (Cl), a non-metal, gains this electron to become Cl−. The ions are held together due to electrostatic attraction, and that compound sodium chloride (NaCl), or common table salt, is formed. 
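The electron transfer just described determines the whole-number ratio in which ions combine: the formula unit must be electrically neutral. Below is a small, illustrative Python sketch of that bookkeeping; the ionic charges passed in the examples (Na+, Mg2+, Al3+, Cl−, O2−) are common textbook values, and the function name is invented for this illustration.

from math import gcd

def neutral_formula(cation: str, cation_charge: int, anion: str, anion_charge: int) -> str:
    # Combine the ions in the smallest whole-number ratio giving zero net charge.
    q_cat, q_an = abs(cation_charge), abs(anion_charge)
    lcm = q_cat * q_an // gcd(q_cat, q_an)
    n_cat, n_an = lcm // q_cat, lcm // q_an
    part = lambda symbol, n: symbol if n == 1 else f"{symbol}{n}"
    return part(cation, n_cat) + part(anion, n_an)

if __name__ == "__main__":
    print(neutral_formula("Na", +1, "Cl", -1))  # NaCl, as in the example above
    print(neutral_formula("Mg", +2, "Cl", -1))  # MgCl2
    print(neutral_formula("Al", +3, "O", -2))   # Al2O3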
In a covalent bond, one or more pairs of valence electrons are shared by two atoms: the resulting electrically neutral group of bonded atoms is termed a molecule. Atoms will share valence electrons in such a way as to create a noble gas electron configuration (eight electrons in their outermost shell) for each atom. Atoms that tend to combine in such a way that they each have eight electrons in their valence shell are said to follow the octet rule. However, some elements like hydrogen and lithium need only two electrons in their outermost shell to attain this stable configuration; these atoms are said to follow the duet rule, and in this way they are reaching the electron configuration of the noble gas helium, which has two electrons in its outer shell. Similarly, theories from classical physics can be used to predict many ionic structures. With more complicated compounds, such as metal complexes, valence bond theory is less applicable and alternative approaches, such as molecular orbital theory, are generally used. Energy In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structures, it is invariably accompanied by an increase or decrease of energy of the substances involved. Some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light; thus the products of a reaction may have more or less energy than the reactants. A reaction is said to be exergonic if the final state is lower on the energy scale than the initial state; in the case of endergonic reactions the situation is the reverse. A reaction is said to be exothermic if the reaction releases heat to the surroundings; in the case of endothermic reactions, the reaction absorbs heat from the surroundings. Chemical reactions are generally not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann population factor exp(−E/kT), that is, the probability that a molecule has energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. A related concept, free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. A reaction is feasible only if the total change in the Gibbs free energy is negative, ΔG ≤ 0; if it is equal to zero the chemical reaction is said to be at equilibrium. There exist only limited possible states of energy for electrons, atoms and molecules. These are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. The atoms/molecules in a higher energy state are said to be excited. The molecules/atoms of a substance in an excited energy state are often much more reactive; that is, more amenable to chemical reactions. The phase of a substance is invariably determined by its energy and the energy of its surroundings. 
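As a brief numerical illustration of the activation-energy and free-energy ideas above, here is a small Python sketch (not part of the original article). It uses the Arrhenius form k = A exp(−Ea/RT) and the relation ΔG = ΔH − TΔS; the pre-exponential factor, activation energy, enthalpy and entropy values are invented purely for demonstration.

from math import exp

R = 8.314  # gas constant in J/(mol*K)

def arrhenius_rate_constant(pre_exponential: float, activation_energy_j_mol: float, temperature_k: float) -> float:
    # k = A * exp(-Ea / (R * T)); the exponential is the Boltzmann population factor.
    return pre_exponential * exp(-activation_energy_j_mol / (R * temperature_k))

def gibbs_free_energy_change(delta_h_j_mol: float, temperature_k: float, delta_s_j_mol_k: float) -> float:
    # dG = dH - T * dS; the reaction is feasible when dG is negative.
    return delta_h_j_mol - temperature_k * delta_s_j_mol_k

if __name__ == "__main__":
    # Hypothetical reaction with Ea = 50 kJ/mol and A = 1e13 per second.
    print(arrhenius_rate_constant(1e13, 50_000, 300))   # slower at 300 K
    print(arrhenius_rate_constant(1e13, 50_000, 350))   # markedly faster at 350 K
    # Hypothetical dH = -40 kJ/mol and dS = -50 J/(mol*K) at 298 K: dG is negative, so feasible.
    print(gibbs_free_energy_change(-40_000, 298, -50))  # -25100.0 J/mol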
When the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid as is the case with water (H2O); a liquid at room temperature because its molecules are bound by hydrogen bonds. Whereas hydrogen sulfide (H2S) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole–dipole interactions. The transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. However, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer. Thus, because vibrational and rotational energy levels are more closely spaced than electronic energy levels, heat is more easily transferred between substances relative to light or other forms of electronic energy. For example, ultraviolet electromagnetic radiation is not transferred with as much efficacy from one substance to another as thermal or electrical energy. The existence of characteristic energy levels for different chemical substances is useful for their identification by the analysis of spectral lines. Different kinds of spectra are often used in chemical spectroscopy, e.g. IR, microwave, NMR, ESR, etc. Spectroscopy is also used to identify the composition of remote objects – like stars and distant galaxies – by analyzing their radiation spectra. The term chemical energy is often used to indicate the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances. Reaction When a chemical substance is transformed as a result of its interaction with another substance or with energy, a chemical reaction is said to have occurred. A chemical reaction is therefore a concept related to the "reaction" of a substance when it comes in close contact with another, whether as a mixture or a solution; exposure to some form of energy, or both. It results in some energy exchange between the constituents of the reaction as well as with the system environment, which may be designed vessels—often laboratory glassware. Chemical reactions can result in the formation or dissociation of molecules, that is, molecules breaking apart to form two or more molecules or rearrangement of atoms within or across molecules. Chemical reactions usually involve the making or breaking of chemical bonds. Oxidation, reduction, dissociation, acid–base neutralization and molecular rearrangement are some examples of common chemical reactions. A chemical reaction can be symbolically depicted through a chemical equation. While in a non-nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons. The sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. A chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. Many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. Reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. 
Many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. Several empirical rules, like the Woodward–Hoffmann rules, often come in handy while proposing a mechanism for a chemical reaction. According to the IUPAC gold book, a chemical reaction is "a process that results in the interconversion of chemical species." Accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. An additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. Such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities (i.e. 'microscopic chemical events'). Ions and salts An ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. When an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. When an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. Cations and anions can form a crystalline lattice of neutral salts, such as the Na+ and Cl− ions forming sodium chloride, or NaCl. Examples of polyatomic ions that do not split up during acid–base reactions are hydroxide (OH−) and phosphate (PO₄³⁻). Plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. Acidity and basicity A substance can often be classified as an acid or a base. There are several different theories which explain acid–base behavior. The simplest is Arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. According to Brønsted–Lowry acid–base theory, acids are substances that donate a positive hydrogen ion to another substance in a chemical reaction; by extension, a base is the substance which receives that hydrogen ion. A third common theory is Lewis acid–base theory, which is based on the formation of new chemical bonds. Lewis theory explains that an acid is a substance which is capable of accepting a pair of electrons from another substance during the process of bond formation, while a base is a substance which can provide a pair of electrons to form a new bond. There are several other ways in which a substance may be classified as an acid or a base, as is evident in the history of this concept. Acid strength is commonly measured by two methods. One measurement, based on the Arrhenius definition of acidity, is pH, which is a measurement of the hydronium ion concentration in a solution, as expressed on a negative logarithmic scale. Thus, solutions that have a low pH have a high hydronium ion concentration and can be said to be more acidic. The other measurement, based on the Brønsted–Lowry definition, is the acid dissociation constant (Ka), which measures the relative ability of a substance to act as an acid under the Brønsted–Lowry definition of an acid. That is, substances with a higher Ka are more likely to donate hydrogen ions in chemical reactions than those with lower Ka values. Redox (reduction–oxidation) reactions include all chemical reactions in which atoms have their oxidation state changed by either gaining electrons (reduction) or losing electrons (oxidation). 
Substances that have the ability to oxidize other substances are said to be oxidative and are known as oxidizing agents, oxidants or oxidizers. An oxidant removes electrons from another substance. Similarly, substances that have the ability to reduce other substances are said to be reductive and are known as reducing agents, reductants, or reducers. A reductant transfers electrons to another substance and is thus oxidized itself. And because it "donates" electrons it is also called an electron donor. Oxidation and reduction properly refer to a change in oxidation number—the actual transfer of electrons may never occur. Thus, oxidation is better defined as an increase in oxidation number, and reduction as a decrease in oxidation number. Equilibrium Although the concept of equilibrium is widely used across sciences, in the context of chemistry, it arises whenever a number of different states of the chemical composition are possible, as for example, in a mixture of several chemical compounds that can react with one another, or when a substance can be present in more than one kind of phase. A system of chemical substances at equilibrium, even though having an unchanging composition, is most often not static; molecules of the substances continue to react with one another thus giving rise to a dynamic equilibrium. Thus the concept describes the state in which the parameters such as chemical composition remain unchanged over time. Chemical laws Chemical reactions are governed by certain laws, which have become fundamental concepts in chemistry. Some of them are: Avogadro's law Beer–Lambert law Boyle's law (1662, relating pressure and volume) Charles's law (1787, relating volume and temperature) Fick's laws of diffusion Gay-Lussac's law (1809, relating pressure and temperature) Le Chatelier's principle Henry's law Hess's law Law of conservation of energy leads to the important concepts of equilibrium, thermodynamics, and kinetics. Law of conservation of mass continues to be conserved in isolated systems, even in modern physics. However, special relativity shows that due to mass–energy equivalence, whenever non-material "energy" (heat, light, kinetic energy) is removed from a non-isolated system, some mass will be lost with it. High energy losses result in loss of weighable amounts of mass, an important topic in nuclear chemistry. Law of definite composition, although in many systems (notably biomacromolecules and minerals) the ratios tend to require large numbers, and are frequently represented as a fraction. Law of multiple proportions Raoult's law History The history of chemistry spans a period from the ancient past to the present. Since several millennia BC, civilizations were using technologies that would eventually form the basis of the various branches of chemistry. Examples include extracting metals from ores, making pottery and glazes, fermenting beer and wine, extracting chemicals from plants for medicine and perfume, rendering fat into soap, making glass, and making alloys like bronze. Chemistry was preceded by its protoscience, alchemy, which operated a non-scientific approach to understanding the constituents of matter and their interactions. Despite being unsuccessful in explaining the nature of matter and its transformations, alchemists set the stage for modern chemistry by performing experiments and recording the results. 
Robert Boyle, although skeptical of elements and convinced of alchemy, played a key part in elevating the "sacred art" as an independent, fundamental and philosophical discipline in his work The Sceptical Chymist (1661). While both alchemy and chemistry are concerned with matter and its transformations, the crucial difference was given by the scientific method that chemists employed in their work. Chemistry, as a body of knowledge distinct from alchemy, became an established science with the work of Antoine Lavoisier, who developed a law of conservation of mass that demanded careful measurement and quantitative observations of chemical phenomena. The history of chemistry afterwards is intertwined with the history of thermodynamics, especially through the work of Willard Gibbs. Definition The definition of chemistry has changed over time, as new discoveries and theories add to the functionality of the science. The term "chymistry", in the view of noted scientist Robert Boyle in 1661, meant the subject of the material principles of mixed bodies. In 1663, the chemist Christopher Glaser described "chymistry" as a scientific art, by which one learns to dissolve bodies, and draw from them the different substances on their composition, and how to unite them again, and exalt them to a higher perfection. The 1730 definition of the word "chemistry", as used by Georg Ernst Stahl, meant the art of resolving mixed, compound, or aggregate bodies into their principles; and of composing such bodies from those principles. In 1837, Jean-Baptiste Dumas considered the word "chemistry" to refer to the science concerned with the laws and effects of molecular forces. This definition further evolved until, in 1947, it came to mean the science of substances: their structure, their properties, and the reactions that change them into other substances – a characterization accepted by Linus Pauling. More recently, in 1998, Professor Raymond Chang broadened the definition of "chemistry" to mean the study of matter and the changes it undergoes. Background Early civilizations, such as the Egyptians Babylonians and Indians amassed practical knowledge concerning the arts of metallurgy, pottery and dyes, but did not develop a systematic theory. A basic chemical hypothesis first emerged in Classical Greece with the theory of four elements as propounded definitively by Aristotle stating that fire, air, earth and water were the fundamental elements from which everything is formed as a combination. Greek atomism dates back to 440 BC, arising in works by philosophers such as Democritus and Epicurus. In 50 BCE, the Roman philosopher Lucretius expanded upon the theory in his poem De rerum natura (On The Nature of Things). Unlike modern concepts of science, Greek atomism was purely philosophical in nature, with little concern for empirical observations and no concern for chemical experiments. An early form of the idea of conservation of mass is the notion that "Nothing comes from nothing" in Ancient Greek philosophy, which can be found in Empedocles (approx. 4th century BC): "For it is impossible for anything to come to be from what is not, and it cannot be brought about or heard of that what is should be utterly destroyed." and Epicurus (3rd century BC), who, describing the nature of the Universe, wrote that "the totality of things was always such as it is now, and always will be". 
In the Hellenistic world the art of alchemy first proliferated, mingling magic and occultism into the study of natural substances with the ultimate goal of transmuting elements into gold and discovering the elixir of eternal life. Work, particularly the development of distillation, continued in the early Byzantine period with the most famous practitioner being the 4th century Greek-Egyptian Zosimos of Panopolis. Alchemy continued to be developed and practised throughout the Arab world after the Muslim conquests, and from there, and from the Byzantine remnants, diffused into medieval and Renaissance Europe through Latin translations. The Arabic works attributed to Jabir ibn Hayyan introduced a systematic classification of chemical substances, and provided instructions for deriving an inorganic compound (sal ammoniac or ammonium chloride) from organic substances (such as plants, blood, and hair) by chemical means. Some Arabic Jabirian works (e.g., the "Book of Mercy", and the "Book of Seventy") were later translated into Latin under the Latinized name "Geber", and in 13th-century Europe an anonymous writer, usually referred to as pseudo-Geber, started to produce alchemical and metallurgical writings under this name. Later influential Muslim philosophers, such as Abū al-Rayhān al-Bīrūnī and Avicenna disputed the theories of alchemy, particularly the theory of the transmutation of metals. Improvements of the refining of ores and their extractions to smelt metals was widely used source of information for early chemists in the 16th century, among them Georg Agricola (1494–1555), who published his major work De re metallica in 1556. His work, describing highly developed and complex processes of mining metal ores and metal extraction, were the pinnacle of metallurgy during that time. His approach removed all mysticism associated with the subject, creating the practical base upon which others could and would build. The work describes the many kinds of furnace used to smelt ore, and stimulated interest in minerals and their composition. Agricola has been described as the "father of metallurgy" and the founder of geology as a scientific discipline. Under the influence of the new empirical methods propounded by Sir Francis Bacon and others, a group of chemists at Oxford, Robert Boyle, Robert Hooke and John Mayow began to reshape the old alchemical traditions into a scientific discipline. Boyle in particular questioned some commonly held chemical theories and argued for chemical practitioners to be more "philosophical" and less commercially focused in The Sceptical Chemyst. He formulated Boyle's law, rejected the classical "four elements" and proposed a mechanistic alternative of atoms and chemical reactions that could be subject to rigorous experiment. In the following decades, many important discoveries were made, such as the nature of 'air' which was discovered to be composed of many different gases. The Scottish chemist Joseph Black and the Flemish Jan Baptist van Helmont discovered carbon dioxide, or what Black called 'fixed air' in 1754; Henry Cavendish discovered hydrogen and elucidated its properties and Joseph Priestley and, independently, Carl Wilhelm Scheele isolated pure oxygen. The theory of phlogiston (a substance at the root of all combustion) was propounded by the German Georg Ernst Stahl in the early 18th century and was only overturned by the end of the century by the French chemist Antoine Lavoisier, the chemical analogue of Newton in physics. 
Lavoisier did more than any other to establish the new science on proper theoretical footing, by elucidating the principle of conservation of mass and developing a new system of chemical nomenclature used to this day. English scientist John Dalton proposed the modern theory of atoms; that all substances are composed of indivisible 'atoms' of matter and that different atoms have varying atomic weights. The development of the electrochemical theory of chemical combinations occurred in the early 19th century as the result of the work of two scientists in particular, Jöns Jacob Berzelius and Humphry Davy, made possible by the prior invention of the voltaic pile by Alessandro Volta. Davy discovered nine new elements including the alkali metals by extracting them from their oxides with electric current. British William Prout first proposed ordering all the elements by their atomic weight as all atoms had a weight that was an exact multiple of the atomic weight of hydrogen. J.A.R. Newlands devised an early table of elements, which was then developed into the modern periodic table of elements in the 1860s by Dmitri Mendeleev and independently by several other scientists including Julius Lothar Meyer. The inert gases, later called the noble gases were discovered by William Ramsay in collaboration with Lord Rayleigh at the end of the century, thereby filling in the basic structure of the table. At the turn of the twentieth century the theoretical underpinnings of chemistry were finally understood due to a series of remarkable discoveries that succeeded in probing and discovering the very nature of the internal structure of atoms. In 1897, J.J. Thomson of the University of Cambridge discovered the electron and soon after the French scientist Becquerel as well as the couple Pierre and Marie Curie investigated the phenomenon of radioactivity. In a series of pioneering scattering experiments Ernest Rutherford at the University of Manchester discovered the internal structure of the atom and the existence of the proton, classified and explained the different types of radioactivity and successfully transmuted the first element by bombarding nitrogen with alpha particles. His work on atomic structure was improved on by his students, the Danish physicist Niels Bohr, the Englishman Henry Moseley and the German Otto Hahn, who went on to father the emerging nuclear chemistry and discovered nuclear fission. The electronic theory of chemical bonds and molecular orbitals was developed by the American scientists Linus Pauling and Gilbert N. Lewis. The year 2011 was declared by the United Nations as the International Year of Chemistry. It was an initiative of the International Union of Pure and Applied Chemistry, and of the United Nations Educational, Scientific, and Cultural Organization and involves chemical societies, academics, and institutions worldwide and relied on individual initiatives to organize local and regional activities. Organic chemistry was developed by Justus von Liebig and others, following Friedrich Wöhler's synthesis of urea. Other crucial 19th century advances were; an understanding of valence bonding (Edward Frankland in 1852) and the application of thermodynamics to chemistry (J. W. Gibbs and Svante Arrhenius in the 1870s). Practice In the practice of chemistry, pure chemistry is the study of the fundamental principles of chemistry, while applied chemistry applies that knowledge to develop technology and solve real-world problems. 
Subdisciplines Chemistry is typically divided into several major sub-disciplines. There are also several main cross-disciplinary and more specialized fields of chemistry. Analytical chemistry is the analysis of material samples to gain an understanding of their chemical composition and structure. Analytical chemistry incorporates standardized experimental methods in chemistry. These methods may be used in all subdisciplines of chemistry, excluding purely theoretical chemistry. Biochemistry is the study of the chemicals, chemical reactions and interactions that take place at a molecular level in living organisms. Biochemistry is highly interdisciplinary, covering medicinal chemistry, neurochemistry, molecular biology, forensics, plant science and genetics. Inorganic chemistry is the study of the properties and reactions of inorganic compounds, such as metals and minerals. The distinction between organic and inorganic disciplines is not absolute and there is much overlap, most importantly in the sub-discipline of organometallic chemistry. Materials chemistry is the preparation, characterization, and understanding of solid state components or devices with a useful current or future function. The field is a new breadth of study in graduate programs, and it integrates elements from all classical areas of chemistry like organic chemistry, inorganic chemistry, and crystallography with a focus on fundamental issues that are unique to materials. Primary systems of study include the chemistry of condensed phases (solids, liquids, polymers) and interfaces between different phases. Neurochemistry is the study of neurochemicals; including transmitters, peptides, proteins, lipids, sugars, and nucleic acids; their interactions, and the roles they play in forming, maintaining, and modifying the nervous system. Nuclear chemistry is the study of how subatomic particles come together and make nuclei. Modern transmutation is a large component of nuclear chemistry, and the table of nuclides is an important result and tool for this field. In addition to medical applications, nuclear chemistry encompasses nuclear engineering which explores the topic of using nuclear power sources for generating energy Organic chemistry is the study of the structure, properties, composition, mechanisms, and reactions of organic compounds. An organic compound is defined as any compound based on a carbon skeleton. Organic compounds can be classified, organized and understood in reactions by their functional groups, unit atoms or molecules that show characteristic chemical properties in a compound. Physical chemistry is the study of the physical and fundamental basis of chemical systems and processes. In particular, the energetics and dynamics of such systems and processes are of interest to physical chemists. Important areas of study include chemical thermodynamics, chemical kinetics, electrochemistry, statistical mechanics, spectroscopy, and more recently, astrochemistry. Physical chemistry has large overlap with molecular physics. Physical chemistry involves the use of infinitesimal calculus in deriving equations. It is usually associated with quantum chemistry and theoretical chemistry. Physical chemistry is a distinct discipline from chemical physics, but again, there is very strong overlap. Theoretical chemistry is the study of chemistry via fundamental theoretical reasoning (usually within mathematics or physics). In particular the application of quantum mechanics to chemistry is called quantum chemistry. 
Since the end of the Second World War, the development of computers has allowed a systematic development of computational chemistry, which is the art of developing and applying computer programs for solving chemical problems. Theoretical chemistry has large overlap with (theoretical and experimental) condensed matter physics and molecular physics. Others subdivisions include electrochemistry, femtochemistry, flavor chemistry, flow chemistry, immunohistochemistry, hydrogenation chemistry, mathematical chemistry, molecular mechanics, natural product chemistry, organometallic chemistry, petrochemistry, photochemistry, physical organic chemistry, polymer chemistry, radiochemistry, sonochemistry, supramolecular chemistry, synthetic chemistry, and many others. Interdisciplinary Interdisciplinary fields include agrochemistry, astrochemistry (and cosmochemistry), atmospheric chemistry, chemical engineering, chemical biology, chemo-informatics, environmental chemistry, geochemistry, green chemistry, immunochemistry, marine chemistry, materials science, mechanochemistry, medicinal chemistry, molecular biology, nanotechnology, oenology, pharmacology, phytochemistry, solid-state chemistry, surface science, thermochemistry, and many others. Industry The chemical industry represents an important economic activity worldwide. The global top 50 chemical producers in 2013 had sales of US$980.5 billion with a profit margin of 10.3%. Professional societies American Chemical Society American Society for Neurochemistry Chemical Institute of Canada Chemical Society of Peru International Union of Pure and Applied Chemistry Royal Australian Chemical Institute Royal Netherlands Chemical Society Royal Society of Chemistry Society of Chemical Industry World Association of Theoretical and Computational Chemists List of chemistry societies See also Comparison of software for molecular mechanics modeling Glossary of chemistry terms International Year of Chemistry List of chemists List of compounds List of important publications in chemistry List of unsolved problems in chemistry Outline of chemistry Periodic systems of small molecules Philosophy of chemistry Science tourism References Bibliography Further reading Popular reading Atkins, P. W. Galileo's Finger (Oxford University Press) Atkins, P. W. Atkins' Molecules (Cambridge University Press) Kean, Sam. The Disappearing Spoon – and Other True Tales from the Periodic Table (Black Swan) London, England, 2010 Levi, Primo The Periodic Table (Penguin Books) [1975] translated from the Italian by Raymond Rosenthal (1984) Stwertka, A. A Guide to the Elements (Oxford University Press) Introductory undergraduate textbooks Atkins, P.W., Overton, T., Rourke, J., Weller, M. and Armstrong, F. Shriver and Atkins Inorganic Chemistry (4th ed.) 2006 (Oxford University Press) Chang, Raymond. Chemistry 6th ed. Boston, Massachusetts: James M. Smith, 1998. Voet and Voet. Biochemistry (Wiley) Advanced undergraduate-level or graduate textbooks Atkins, P. W. Physical Chemistry (Oxford University Press) Atkins, P. W. et al. Molecular Quantum Mechanics (Oxford University Press) McWeeny, R. Coulson's Valence (Oxford Science Publications) Pauling, L. The Nature of the chemical bond (Cornell University Press) Pauling, L., and Wilson, E. B. Introduction to Quantum Mechanics with Applications to Chemistry (Dover Publications) Smart and Moore. Solid State Chemistry: An Introduction (Chapman and Hall) Stephenson, G. 
Mathematical Methods for Science Students (Longman) External links General Chemistry principles, patterns and applications.
Pharmacy
Pharmacy is the science and practice of discovering, producing, preparing, dispensing, reviewing and monitoring medications, aiming to ensure the safe, effective, and affordable use of medicines. It is an interdisciplinary science as it links health sciences with pharmaceutical sciences and natural sciences. The professional practice is becoming more clinically oriented as most of the drugs are now manufactured by the pharmaceutical industry. Based on the setting, pharmacy practice is classified as either community or institutional pharmacy. Providing direct patient care in community or institutional pharmacies is considered clinical pharmacy. The scope of pharmacy practice includes more traditional roles such as compounding and dispensing of medications. It also includes more modern services related to health care, including clinical services, reviewing medications for safety and efficacy, and providing drug information with patient counselling. Pharmacists, therefore, are experts on drug therapy and are the primary health professionals who optimize the use of medication for the benefit of the patients. An establishment in which pharmacy (in the first sense) is practiced is called a pharmacy (this term is more common in the United States) or a chemist's (more common in Great Britain, though pharmacy is also used). In the United States and Canada, drugstores commonly sell medicines, as well as miscellaneous items such as confectionery, cosmetics, office supplies, toys, hair care products and magazines, and occasionally refreshments and groceries. In its investigation of herbal and chemical ingredients, the work of the apothecary may be regarded as a precursor of the modern sciences of chemistry and pharmacology, prior to the formulation of the scientific method. Disciplines The field of pharmacy can generally be divided into various disciplines: pharmaceutics and computational pharmaceutics; pharmacokinetics and pharmacodynamics; medicinal chemistry and pharmacognosy; pharmacology; pharmacy practice; pharmacoinformatics; and pharmacogenomics. The boundaries between these disciplines and with other sciences, such as biochemistry, are not always clear-cut. Often, collaborative teams from various disciplines (pharmacists and other scientists) work together toward the introduction of new therapeutics and methods for patient care. However, pharmacy is not a basic or biomedical science in its typical form. Medicinal chemistry is also a distinct branch of synthetic chemistry combining pharmacology, organic chemistry, and chemical biology. Pharmacology is sometimes considered the fourth discipline of pharmacy. Although pharmacology is essential to the study of pharmacy, it is not specific to pharmacy. Both disciplines are distinct. Those who wish to practice both pharmacy (patient-oriented) and pharmacology (a biomedical science requiring the scientific method) receive separate training and degrees unique to either discipline. Pharmacoinformatics is considered another new discipline, for systematic drug discovery and development with efficiency and safety. Pharmacogenomics is the study of genetic-linked variants that affect patient clinical responses, allergies, and metabolism of drugs. Professionals The World Health Organization estimates that there are at least 2.6 million pharmacists and other pharmaceutical personnel worldwide. 
Pharmacists Pharmacists are healthcare professionals with specialized education and training who perform various roles to ensure optimal health outcomes for their patients through the quality use of medicines. Pharmacists may also be small business proprietors, owning the pharmacy in which they practice. Since pharmacists know about the mode of action of a particular drug, and its metabolism and physiological effects on the human body in great detail, they play an important role in optimization of drug treatment for an individual. Pharmacists are represented internationally by the International Pharmaceutical Federation (FIP), an NGO linked with World Health Organization (WHO). They are represented at the national level by professional organisations such as the Royal Pharmaceutical Society in the UK, Pharmaceutical Society of Australia (PSA), Canadian Pharmacists Association (CPhA), Indian Pharmacist Association (IPA), Pakistan Pharmacists Association (PPA), American Pharmacists Association (APhA), and the Malaysian Pharmaceutical Society (MPS). In some cases, the representative body is also the registering body, which is responsible for the regulation and ethics of the profession. In the United States, specializations in pharmacy practice recognized by the Board of Pharmacy Specialties include: cardiovascular, infectious disease, oncology, pharmacotherapy, nuclear, nutrition, and psychiatry. The Commission for Certification in Geriatric Pharmacy certifies pharmacists in geriatric pharmacy practice. The American Board of Applied Toxicology certifies pharmacists and other medical professionals in applied toxicology. Pharmacy support staff Pharmacy technicians Pharmacy technicians support the work of pharmacists and other health professionals by performing a variety of pharmacy-related functions, including dispensing prescription drugs and other medical devices to patients and instructing on their use. They may also perform administrative duties in pharmaceutical practice, such as reviewing prescription requests with medic's offices and insurance companies to ensure correct medications are provided and payment is received. Legislation requires the supervision of certain pharmacy technician's activities by a pharmacist. The majority of pharmacy technicians work in community pharmacies. In hospital pharmacies, pharmacy technicians may be managed by other senior pharmacy technicians. In the UK the role of a PhT in hospital pharmacy has grown and responsibility has been passed on to them to manage the pharmacy department and specialized areas in pharmacy practice allowing pharmacists the time to specialize in their expert field as medication consultants spending more time working with patients and in research. Pharmacy technicians are registered with the General Pharmaceutical Council (GPhC). The GPhC is the regulator of pharmacists, pharmacy technicians, and pharmacy premises. In the US, pharmacy technicians perform their duties under the supervision of pharmacists. Although they may perform, under supervision, most dispensing, compounding and other tasks, they are not generally allowed to perform the role of counseling patients on the proper use of their medications. Some states have a legally mandated pharmacist-to-pharmacy technician ratio. Dispensing assistants Dispensing assistants are commonly referred to as "dispensers" and in community pharmacies perform largely the same tasks as a pharmacy technician. 
They work under the supervision of pharmacists and are involved in preparing (dispensing and labelling) medicines for provision to patients.
Healthcare assistants/medicines counter assistants
In the UK, this group of staff can sell certain medicines (including pharmacy-only and general sales list medicines) over the counter. They cannot prepare prescription-only medicines for supply to patients.
History
The earliest known compilation of medicinal substances was the Sushruta Samhita, an Indian Ayurvedic treatise attributed to Sushruta in the 6th century BC. However, the earliest text as preserved dates to the 3rd or 4th century AD. Many Sumerian (4th millennium BC – early 2nd millennium BC) cuneiform clay tablets record prescriptions for medicine. Ancient Egyptian pharmacological knowledge was recorded in various papyri, such as the Ebers Papyrus of 1550 BC and the Edwin Smith Papyrus of the 16th century BC. In Ancient Greece, Diocles of Carystus (4th century BC) was one of several men studying the medicinal properties of plants. He wrote several treatises on the topic. The Greek physician Pedanius Dioscorides is famous for writing a five-volume book in his native Greek, Περί ύλης ιατρικής, in the 1st century AD. The Latin translation (Concerning medical substances) was used as a basis for many medieval texts and was built upon by many Middle Eastern scientists during the Islamic Golden Age, who themselves derived much of their knowledge from earlier Greek and Byzantine medicine. Pharmacy in China dates at least to the earliest known Chinese manual, the Shennong Bencao Jing (The Divine Farmer's Herb-Root Classic), dating back to the 1st century AD. It was compiled during the Han dynasty and was attributed to the mythical Shennong. Earlier literature included lists of prescriptions for specific ailments, exemplified by a manuscript "Recipes for 52 Ailments", found in the Mawangdui tomb, sealed in 168 BC. In Japan, at the end of the Asuka period (538–710) and the early Nara period (710–794), the men who fulfilled roles similar to those of modern pharmacists were highly respected. The place of pharmacists in society was expressly defined in the Taihō Code (701) and re-stated in the Yōrō Code (718). Ranked positions in the pre-Heian Imperial court were established, and this organizational structure remained largely intact until the Meiji Restoration (1868). In this highly stable hierarchy, the pharmacists—and even pharmacist assistants—were assigned status superior to all others in health-related fields, such as physicians and acupuncturists. In the Imperial household, the pharmacist was even ranked above the two personal physicians of the Emperor. In the Arcadian Way in Ephesus, near Kusadasi in Turkey, there is a stone sign for a pharmacy shop, with a tripod, a mortar, and a pestle, opposite one for a doctor. The current Ephesus dates back to 400 BC and was the site of the Temple of Artemis, one of the seven wonders of the world. In Baghdad the first pharmacies, or drug stores, were established in 754, under the Abbasid Caliphate during the Islamic Golden Age. By the 9th century, these pharmacies were state-regulated. The advances made in the Middle East in botany and chemistry led medicine in medieval Islam to develop pharmacology substantially. Muhammad ibn Zakarīya Rāzi (Rhazes) (865–915), for instance, acted to promote the medical uses of chemical compounds. Abu al-Qasim al-Zahrawi (Abulcasis) (936–1013) pioneered the preparation of medicines by sublimation and distillation.
His Liber servitoris is of particular interest, as it provides the reader with recipes and explains how to prepare the "simples" from which the complex drugs then generally used were compounded. Sabur Ibn Sahl (d. 869) was, however, the first physician to record his findings in a pharmacopoeia, describing a large variety of drugs and remedies for ailments. Al-Biruni (973–1050) wrote one of the most valuable Islamic works on pharmacology, entitled Kitab al-Saydalah (The Book of Drugs), in which he detailed the properties of drugs and outlined the role of pharmacy and the functions and duties of the pharmacist. Avicenna, too, described no fewer than 700 preparations, their properties, modes of action, and their indications. In fact, he devoted a whole volume to simple drugs in The Canon of Medicine. Also of great impact were the works of al-Maridini of Baghdad and Cairo, and of Ibn al-Wafid (1008–1074), both of which were printed in Latin more than fifty times, appearing as De Medicinis universalibus et particularibus by 'Mesue' the Younger and the Medicamentis simplicibus by 'Abenguefit'. Peter of Abano (1250–1316) translated and added a supplement to the work of al-Maridini under the title De Veneris. Al-Muwaffaq's contributions in the field were also pioneering. Living in the 10th century, he wrote The Foundations of the True Properties of Remedies, among other things describing arsenious oxide and showing acquaintance with silicic acid. He made a clear distinction between sodium carbonate and potassium carbonate, and drew attention to the poisonous nature of copper compounds, especially copper vitriol, and also of lead compounds. He also described the distillation of sea-water for drinking. In Europe, pharmacy-like shops began to appear during the 12th century. In 1240, Emperor Frederick II issued a decree by which the physician's and the apothecary's professions were separated. There are pharmacies in Europe that have been in operation since medieval times. In Florence, Italy, the director of the museum in the former Santa Maria Novella pharmacy says that the pharmacy there dates back to 1221. In Trier, Germany, the Löwen-Apotheke has been in operation since 1241, making it the oldest pharmacy in Europe in continuous operation. In Dubrovnik, Croatia, a pharmacy that first opened in 1317 is located inside the Franciscan monastery; it is the second-oldest pharmacy in Europe that is still operating. In the Town Hall Square of Tallinn, Estonia, there is a pharmacy dating from at least 1422. The medieval Esteve Pharmacy, located in Llívia, a Catalan enclave close to Puigcerdà, is now a museum: the building dates back to the 15th century, and the museum keeps albarellos from the 16th and 17th centuries, old prescription books and antique drugs.
Practice areas
Pharmacists practice in a variety of areas including community pharmacies, infusion pharmacies, hospitals, clinics, insurance companies, medical communication companies, research facilities, pharmaceutical companies, extended care facilities, psychiatric hospitals, and regulatory agencies. Pharmacists themselves may have expertise in a medical specialty.
Community pharmacy
A pharmacy (also known as a chemist in Australia, New Zealand and the British Isles; or drugstore in North America; retail pharmacy in industry terminology; or apothecary, historically) is where most pharmacists practice the profession of pharmacy. It is in the community pharmacy that the dichotomy of the profession exists: health professionals who are also retailers.
Community pharmacies usually consist of a retail storefront with a dispensary, where medications are stored and dispensed. According to Sharif Kaf al-Ghazal, the first drugstores were opened by Muslim pharmacists in Baghdad in 754 AD.
Hospital pharmacy
Pharmacies within hospitals differ considerably from community pharmacies. Some pharmacists in hospital pharmacies may have more complex clinical medication management issues, and pharmacists in community pharmacies often have more complex business and customer relations issues. Because of the complexity of medications, including specific indications, effectiveness of treatment regimens, safety of medications (i.e., drug interactions) and patient compliance issues (in the hospital and at home), many pharmacists practicing in hospitals gain more education and training after pharmacy school through a pharmacy practice residency, sometimes followed by another residency in a specific area. Those pharmacists are often referred to as clinical pharmacists, and they often specialize in various disciplines of pharmacy. For example, there are pharmacists who specialize in hematology/oncology, HIV/AIDS, infectious disease, critical care, emergency medicine, toxicology, nuclear pharmacy, pain management, psychiatry, anti-coagulation clinics, herbal medicine, neurology/epilepsy management, pediatrics, neonatal care and more. Hospital pharmacies can often be found within the premises of the hospital. Hospital pharmacies usually stock a larger range of medications, including more specialized medications, than would be feasible in the community setting. Most hospital medications are unit-dose, or a single dose of medicine. Hospital pharmacists and trained pharmacy technicians compound sterile products for patients, including total parenteral nutrition (TPN) and other medications given intravenously. This is a complex process that requires adequate training of personnel, quality assurance of products, and adequate facilities. Several hospital pharmacies have decided to outsource high-risk preparations and some other compounding functions to companies that specialize in compounding. The high cost of medications and drug-related technology, and the potential impact of medications and pharmacy services on patient-care outcomes and patient safety, require hospital pharmacies to perform at the highest level possible.
Clinical pharmacy
Pharmacists provide direct patient care services that optimize the use of medication and promote health, wellness, and disease prevention. Clinical pharmacists care for patients in all health care settings, but the clinical pharmacy movement initially began inside hospitals and clinics. Clinical pharmacists often collaborate with physicians and other healthcare professionals to improve pharmaceutical care. Clinical pharmacists are now an integral part of the interdisciplinary approach to patient care. They often participate in patient care rounds for drug product selection. In the UK, clinical pharmacists can also prescribe some medications for patients on the NHS or privately, after completing a non-medical prescribers course to become an Independent Prescriber. The clinical pharmacist's role involves creating a comprehensive drug therapy plan for patient-specific problems, identifying goals of therapy, and reviewing all prescribed medications prior to dispensing and administration to the patient.
The review process often involves an evaluation of the appropriateness of the drug therapy (e.g., drug choice, dose, route, frequency, and duration of therapy) and its efficacy. Research shows that pharmacist-led strategies reduce errors related to medication use. The pharmacist must also consider potential drug interactions, adverse drug reactions, and patient drug allergies while designing and initiating a drug therapy plan.
Ambulatory care pharmacy
Since the emergence of modern clinical pharmacy, ambulatory care pharmacy practice has emerged as a unique pharmacy practice setting. Ambulatory care pharmacy is based primarily on pharmacotherapy services that a pharmacist provides in a clinic. Pharmacists in this setting often do not dispense drugs, but rather see patients in office visits to manage chronic disease states. In the U.S. federal health care system (including the VA, the Indian Health Service, and NIH), ambulatory care pharmacists are given full independent prescribing authority. In some states, such as North Carolina and New Mexico, these pharmacist clinicians are given collaborative prescriptive and diagnostic authority. In 2011, the Board of Pharmacy Specialties approved ambulatory care pharmacy practice as a separate board certification. The official designation for pharmacists who pass the ambulatory care pharmacy specialty certification exam is Board Certified Ambulatory Care Pharmacist, and these pharmacists carry the initials BCACP.
Compounding pharmacy/industrial pharmacy
Compounding involves preparing drugs in forms that are different from the generic prescription standard. This may include altering the strength, ingredients, or dosage form. Compounding is a way to create custom drugs for patients who may not be able to take the medication in its standard form, such as due to an allergy or difficulty swallowing, and makes it possible for these patients to obtain the medications they need in a form they can use. One area of compounding is preparing drugs in new dosage forms. For example, if a drug manufacturer only provides a drug as a tablet, a compounding pharmacist might make a medicated lollipop that contains the drug. Patients who have difficulty swallowing the tablet may prefer to suck the medicated lollipop instead. Another form of compounding is mixing different strengths (g, mg, mcg) of capsules or tablets to yield the desired amount of medication indicated by the physician, physician assistant, nurse practitioner, or clinical pharmacist practitioner. This form of compounding is found in community and hospital pharmacies and in home-administration therapy. Compounding pharmacies specialize in compounding, although many also dispense the same non-compounded drugs that patients can obtain from community pharmacies.
Consultant pharmacy
Consultant pharmacy practice focuses more on medication regimen review (i.e., "cognitive services") than on the actual dispensing of drugs. Consultant pharmacists most typically work in nursing homes, but are increasingly branching into other institutions and non-institutional settings. Traditionally, consultant pharmacists were usually independent business owners, though in the United States many now work for large pharmacy management companies such as Omnicare, Kindred Healthcare or PharMerica. This trend may be gradually reversing as consultant pharmacists begin to work directly with patients, primarily because many elderly people are now taking numerous medications but continue to live outside of institutional settings.
Some community pharmacies employ consultant pharmacists and/or provide consulting services. The main principle of consultant pharmacy was developed by Hepler and Strand in 1990.
Veterinary pharmacy
Veterinary pharmacies, sometimes called animal pharmacies, may fall into the category of hospital pharmacy, retail pharmacy or mail-order pharmacy. Veterinary pharmacies stock different varieties and different strengths of medications to fulfill the pharmaceutical needs of animals. Because the needs of animals, as well as the regulations on veterinary medicine, are often very different from those related to people, in some jurisdictions veterinary pharmacy may be kept separate from regular pharmacies.
Nuclear pharmacy
Nuclear pharmacy focuses on preparing radioactive materials for diagnostic tests and for treating certain diseases. Nuclear pharmacists undergo additional training specific to handling radioactive materials, and, unlike in community and hospital pharmacies, nuclear pharmacists typically do not interact directly with patients.
Military pharmacy
Military pharmacy is a different working environment from civilian practice, because military pharmacy technicians perform duties such as evaluating medication orders, preparing medication orders, and dispensing medications. This would be illegal in civilian pharmacies, because these duties are required to be performed by a licensed registered pharmacist. In the US military, state laws that prevent technicians from counseling patients or doing the final medication check prior to dispensing to patients (rather than a pharmacist being solely responsible for these duties) do not apply.
Pharmacy informatics
Pharmacy informatics is the combination of pharmacy practice science and applied information science. Pharmacy informaticists work in many practice areas of pharmacy; however, they may also work in information technology departments or for healthcare information technology vendor companies. As a practice area and specialist domain, pharmacy informatics is growing quickly to meet the needs of major national and international patient information projects and health system interoperability goals. Pharmacists in this area are trained to participate in medication management system development, deployment, and optimization.
Specialty pharmacy
Specialty pharmacies supply high-cost injectable, oral, infused, or inhaled medications that are used for chronic and complex disease states such as cancer, hepatitis, and rheumatoid arthritis. Unlike a traditional community pharmacy, where prescriptions for any common medication can be brought in and filled, specialty pharmacies carry novel medications that need to be properly stored, administered, carefully monitored, and clinically managed. In addition to supplying these drugs, specialty pharmacies also provide lab monitoring, adherence counseling, and assistance with the cost-containment strategies needed to obtain expensive specialty drugs. In the US, specialty pharmacy is currently the fastest-growing sector of the pharmaceutical industry, with 19 of 28 newly FDA-approved medications in 2013 being specialty drugs. Due to the demand for clinicians who can properly manage these specific patient populations, the Specialty Pharmacy Certification Board has developed a new certification exam to certify specialty pharmacists.
Along with the 100-question computerized multiple-choice exam, pharmacists must also complete 3,000 hours of specialty pharmacy practice within the past three years as well as 30 hours of specialty pharmacist continuing education within the past two years.
Pharmaceutical sciences
The pharmaceutical sciences are a group of interdisciplinary areas of study concerned with the design, manufacturing, action, delivery, and classification of drugs. They apply knowledge from chemistry (inorganic, physical, biochemical and analytical), biology (anatomy, physiology, biochemistry, cell biology, and molecular biology), epidemiology, statistics, chemometrics, mathematics, physics, and chemical engineering. The pharmaceutical sciences are further subdivided into several specific specialties, the main branches being:
Pharmacology: the study of the biochemical and physiological effects of drugs on human beings.
Pharmacodynamics: the study of the cellular and molecular interactions of drugs with their receptors. Simply, "what the drug does to the body".
Pharmacokinetics: the study of the factors that control the concentration of drug at various sites in the body. Simply, "what the body does to the drug".
Pharmaceutical toxicology: the study of the harmful or toxic effects of drugs.
Pharmacogenomics: the study of the inheritance of characteristic patterns of interaction between drugs and organisms.
Pharmaceutical chemistry: the study of drug design to optimize pharmacokinetics and pharmacodynamics, and the synthesis of new drug molecules (medicinal chemistry).
Pharmaceutics: the study and design of drug formulation for optimum delivery, stability, pharmacokinetics, and patient acceptance.
Pharmacognosy: the study of medicines derived from natural sources.
As new discoveries advance and extend the pharmaceutical sciences, subspecialties continue to be added to this list. Importantly, as knowledge advances, the boundaries between these specialty areas of the pharmaceutical sciences are beginning to blur. Many fundamental concepts are common to all pharmaceutical sciences. These shared fundamental concepts further the understanding of their applicability to all aspects of pharmaceutical research and drug therapy. Pharmacocybernetics (also known as pharma-cybernetics, cybernetic pharmacy, and cyber pharmacy) is an emerging field that describes the science of supporting the use of drugs and medications through the application and evaluation of informatics and internet technologies, so as to improve the pharmaceutical care of patients.
Society and culture
Etymology
The word pharmacy is derived from Old French farmacie ("a substance, such as a food or medicine, which has a laxative effect"), from Medieval Latin pharmacia, from Greek pharmakeia ("a medicine"), which itself derives from pharmakon, meaning "drug, poison, spell" (and which is etymologically related to pharmakos).
Separation of prescribing and dispensing
Separation of prescribing and dispensing, also called dispensing separation, is a practice in medicine and pharmacy in which the physician who provides a medical prescription is independent from the pharmacist who provides the prescription drug. In the Western world there are centuries of tradition for separating pharmacists from physicians. In Asian countries, it is traditional for physicians to also provide drugs. In contemporary times, researchers and health policy analysts have considered these traditions and their effects more deeply.
Advocates for separation and advocates for combining make similar claims for their conflicting perspectives, each arguing that their preferred arrangement reduces conflict of interest in the healthcare industry, reduces unnecessary health care, and lowers costs, while the opposite arrangement causes those problems. Research in various places reports mixed outcomes in different circumstances.
Environmental impacts
In 2022, the Organisation for Economic Co-operation and Development proposed that pharmaceutical companies should be required to collect and destroy unused or expired medicines that they have put on the market, in order to reduce the public health risks around the misuse of medicines obtained from waste bins, the development of antimicrobial-resistant bacteria from the discharge of antibiotics into environmental systems, and "economic losses" from wasted healthcare resources. Potentially harmful concentrations of pharmaceutical waste have been detected in more than a quarter of water samples taken from 258 rivers around the world. The OECD recommends that medicines be collected separately from household waste and that "marketplaces and redistribution platforms for unused close-to-expiry-date medicines" be set up. Such extended producer responsibility schemes are already running in France, Spain and Portugal.
The future of pharmacy
In the coming decades, pharmacists are expected to become more integral within the health care system. Rather than simply dispensing medication, pharmacists are increasingly expected to be compensated for their patient care skills. In particular, Medication Therapy Management (MTM) includes the clinical services that pharmacists can provide for their patients. Such services include a thorough analysis of all medication (prescription, non-prescription, and herbals) currently being taken by an individual. The result is a reconciliation of medication and patient education, leading to improved patient health outcomes and decreased costs to the health care system. This shift has already commenced in some countries; for instance, pharmacists in Australia receive remuneration from the Australian Government for conducting comprehensive Home Medicines Reviews. In Canada, pharmacists in certain provinces have limited prescribing rights (as in Alberta and British Columbia) or are remunerated by their provincial government for expanded services such as medication reviews (MedsChecks in Ontario). In the United Kingdom, pharmacists who undertake additional training through pharmacy education programmes are obtaining prescribing rights. They are also paid by the government for conducting medicine use reviews. In Scotland, pharmacists can write prescriptions for Scottish registered patients' regular medications, for the majority of drugs except controlled drugs, when the patient is unable to see their doctor, as could happen if they are away from home or the doctor is unavailable. In the United States, pharmaceutical care or clinical pharmacy has had an evolving influence on the practice of pharmacy. Moreover, the Doctor of Pharmacy (Pharm.D.) degree is now required before entering practice, and some pharmacists now complete one or two years of residency or fellowship training following graduation. In addition, consultant pharmacists, who traditionally operated primarily in nursing homes, are now expanding into direct consultation with patients, under the banner of "senior care pharmacy".
In addition to patient care, pharmacies will be a focal point for medication adherence initiatives. There is enough evidence to show that integrated pharmacy-based initiatives significantly improve adherence among patients with chronic conditions. For example, a study published by the NIH reports that "pharmacy based interventions improved patients' medication adherence rates by 2.1 percent and increased physicians' initiation rates by 38 percent, compared to the control group".
Pharmacy journals
List of pharmaceutical sciences journals
Symbols
The symbols most commonly associated with pharmacy are the mortar and pestle (North America) and the ℞ (medical prescription) character, which is often written as "Rx" in typed text; the green Greek cross in France, Argentina, the United Kingdom, Belgium, Ireland, Italy, Spain, and India; and the Bowl of Hygieia, often used on its own in the Netherlands but which may be seen combined with other symbols elsewhere. Other common symbols in pharmacy logos include conical measures and (in the US) caduceuses. A red stylized letter A is used in Germany and Austria (from Apotheke, the German word for pharmacy, from the same Greek root as the English word "apothecary"). The show globe was used in the US until the early 20th century; the Gaper in the Netherlands is increasingly rare.
See also
Bachelor of Pharmacy, Master of Pharmacy, Doctor of Pharmacy
External links
Navigator History of Pharmacy – collection of internet resources related to the history of pharmacy
Soderlund Pharmacy Museum – information about the history of the American drugstore
The Lloyd Library – library of botanical, medical, pharmaceutical, and scientific books and periodicals, and works of allied sciences
American Institute of the History of Pharmacy – resources in the history of pharmacy
International Pharmaceutical Federation (FIP) – federation representing national associations of pharmacists and pharmaceutical scientists; information and resources relating to pharmacy education, practice, science and policy
Hypoxia (medicine)
Hypoxia is a condition in which the body or a region of the body is deprived of adequate oxygen supply at the tissue level. Hypoxia may be classified as either generalized, affecting the whole body, or local, affecting a region of the body. Although hypoxia is often a pathological condition, variations in arterial oxygen concentrations can be part of the normal physiology, for example, during strenuous physical exercise. Hypoxia differs from hypoxemia and anoxemia, in that hypoxia refers to a state in which oxygen present in a tissue or the whole body is insufficient, whereas hypoxemia and anoxemia refer specifically to states that have low or no oxygen in the blood. Hypoxia in which there is complete absence of oxygen supply is referred to as anoxia. Hypoxia can be due to external causes, when the breathing gas is hypoxic, or internal causes, such as reduced effectiveness of gas transfer in the lungs, reduced capacity of the blood to carry oxygen, compromised general or local perfusion, or inability of the affected tissues to extract oxygen from, or metabolically process, an adequate supply of oxygen from an adequately oxygenated blood supply. Generalized hypoxia occurs in healthy people when they ascend to high altitude, where it causes altitude sickness leading to potentially fatal complications: high altitude pulmonary edema (HAPE) and high altitude cerebral edema (HACE). Hypoxia also occurs in healthy individuals when breathing inappropriate mixtures of gases with a low oxygen content, e.g., while diving underwater, especially when using malfunctioning closed-circuit rebreather systems that control the amount of oxygen in the supplied air. Mild, non-damaging intermittent hypoxia is used intentionally during altitude training to develop an athletic performance adaptation at both the systemic and cellular level. Hypoxia is a common complication of preterm birth in newborn infants. Because the lungs develop late in pregnancy, premature infants frequently possess underdeveloped lungs. To improve blood oxygenation, infants at risk of hypoxia may be placed inside incubators that provide warmth, humidity, and supplemental oxygen. More serious cases are treated with continuous positive airway pressure (CPAP). Classification Hypoxia exists when there is a reduced amount of oxygen in the tissues of the body. Hypoxemia refers to a reduction in arterial oxygenation below the normal range, regardless of whether gas exchange is impaired in the lung, arterial oxygen content (CaO2 – which represents the amount of oxygen delivered to the tissues) is adequate, or tissue hypoxia exists. The classification categories are not always mutually exclusive, and hypoxia can be a consequence of a wide variety of causes. By cause Hypoxic hypoxia, also referred to as generalised hypoxia, may be caused by: Hypoventilation, which is insufficient ventilation of the lungs due to any cause (fatigue, excessive work of breathing, barbiturate poisoning, pneumothorax, sleep apnea, etc.). Low-inspired oxygen partial pressure, which may be caused by breathing normal air at low ambient pressures due to altitude, by breathing hypoxic breathing gas at an unsuitable depth, by breathing inadequately re-oxygenated recycled breathing gas from a rebreather, life support system, or anesthetic machine. Hypoxia of ascent (latent hypoxia) in freediving and rebreather diving. Airway obstruction, choking, drowning. 
Other causes of hypoxic hypoxia include chronic obstructive pulmonary disease (COPD), neuromuscular diseases or interstitial lung disease, and a malformed vascular system such as an anomalous coronary artery. Hypoxemic hypoxia is a lack of oxygen caused by low oxygen tension in the arterial blood, due to the inability of the lungs to sufficiently oxygenate the blood. Causes include hypoventilation, impaired alveolar diffusion, and pulmonary shunting. This definition overlaps considerably with that of hypoxic hypoxia. Hypoxia from hypoxemia due to abnormal pulmonary function occurs when the lungs receive adequately oxygenated gas but do not oxygenate the blood sufficiently. It may be caused by:
Ventilation–perfusion mismatch (V/Q mismatch), which can be either low or high. A reduced V/Q ratio can be caused by impaired ventilation, which may be a consequence of conditions such as bronchitis, obstructive airway disease, mucus plugs, or pulmonary edema, which limit or obstruct the ventilation. In this situation there is not enough oxygen in the alveolar gas to fully oxygenate the blood volume passing through, and PaO2 will be low. Conversely, an increased V/Q ratio tends to be a consequence of impaired perfusion, in which case the blood supply is insufficient to carry the available oxygen: PaO2 will be normal, but tissues will be insufficiently perfused to meet the oxygen demand. A V/Q mismatch can also occur when the surface area available for gas exchange in the lungs is decreased.
Pulmonary shunt, in which blood passes from the right to the left side of the heart without being oxygenated. This may be due to anatomical shunts, in which the blood bypasses the alveoli via intracardiac shunts, pulmonary arteriovenous malformations, fistulas, or hepatopulmonary syndrome, or to physiological shunting, in which blood passes through non-ventilated alveoli.
Impaired diffusion, a reduced capacity for gas molecules to move between the air in the alveoli and the blood, which occurs when alveolar–capillary membranes thicken. This can happen in interstitial lung diseases such as pulmonary fibrosis, sarcoidosis, hypersensitivity pneumonitis, and connective tissue disorders.
Circulatory hypoxia, also known as ischemic hypoxia or stagnant hypoxia, is caused by abnormally low blood flow, which can occur during shock, cardiac arrest, severe congestive heart failure, or abdominal compartment syndrome, where the main dysfunction is in the cardiovascular system, causing a major reduction in perfusion. Arterial gas is adequately oxygenated in the lungs, and the tissues are able to accept the oxygen available, but the flow rate to the tissues is insufficient; venous oxygenation is particularly low. Anemic hypoxia, or hypemic hypoxia, is the lack of capacity of the blood to carry the normal level of oxygen. It can be caused by anemia or by:
Carbon monoxide poisoning, in which carbon monoxide combines with hemoglobin to form carboxyhemoglobin (HbCO), preventing it from transporting oxygen.
Methemoglobinemia, a change in the hemoglobin molecule from a ferrous ion (Fe2+) to a ferric ion (Fe3+), which has a lesser capacity to bind free oxygen molecules and a greater affinity for bound oxygen. This causes a left shift in the O2–Hb dissociation curve. It can be congenital or caused by medications, food additives or toxins, including chloroquine, benzene, nitrites, and benzocaine.
Histotoxic hypoxia (dysoxia) occurs when the cells of the affected tissues are unable to use oxygen provided by normally oxygenated hemoglobin.
Examples include cyanide poisoning, which inhibits cytochrome c oxidase, an enzyme required for cellular respiration in mitochondria. Methanol poisoning has a similar effect, as the metabolism of methanol produces formic acid, which inhibits mitochondrial cytochrome oxidase. Intermittent hypoxic training induces mild generalized hypoxia for short periods as a training method to improve sporting performance; this is not considered a medical condition. Acute cerebral hypoxia leading to blackout can occur during freediving. This is a consequence of prolonged voluntary apnea underwater, and generally occurs in trained athletes in good health and good physical condition.
By extent
Hypoxia may affect the whole body, or just some parts.
Generalized hypoxia
The term generalized hypoxia may refer to hypoxia affecting the whole body, or may be used as a synonym for hypoxic hypoxia, which occurs when there is insufficient oxygen in the breathing gas to oxygenate the blood to a level that will adequately support normal metabolic processes, and which will inherently affect all perfused tissues. The symptoms of generalized hypoxia depend on its severity and speed of onset. In the case of altitude sickness, where hypoxia develops gradually, the symptoms include fatigue, numbness or tingling of the extremities, nausea, and cerebral hypoxia. These symptoms are often difficult to identify, but early detection can be critical. In severe hypoxia, or hypoxia of very rapid onset, there may be ataxia, confusion, disorientation, hallucinations, behavioral change, severe headaches, reduced level of consciousness, papilloedema, breathlessness, pallor, tachycardia, and pulmonary hypertension, eventually leading to the late signs of cyanosis, slow heart rate and cor pulmonale, and to low blood pressure followed by heart failure, eventually leading to shock and death. Because hemoglobin is a darker red when it is not bound to oxygen (deoxyhemoglobin), as opposed to the rich red color that it has when bound to oxygen (oxyhemoglobin), when seen through the skin it has an increased tendency to reflect blue light back to the eye. In cases where the oxygen is displaced by another molecule, such as carbon monoxide, the skin may appear 'cherry red' instead of cyanotic. Hypoxia can cause premature birth and injure the liver, among other deleterious effects.
Localized hypoxia
Hypoxia that is localized to a region of the body, such as an organ or a limb, is usually the consequence of ischemia, the reduced perfusion of that organ or limb, and may not necessarily be associated with general hypoxemia. Locally reduced perfusion is generally caused by an increased resistance to flow through the blood vessels of the affected area. Ischemia is a restriction in blood supply to any tissue, muscle group, or organ, causing a shortage of oxygen. Ischemia is generally caused by problems with blood vessels, with resultant damage to or dysfunction of tissue, i.e. hypoxia and microvascular dysfunction. It also means local hypoxia in a given part of the body, sometimes resulting from vascular occlusion such as vasoconstriction, thrombosis, or embolism. Ischemia comprises not only insufficiency of oxygen, but also reduced availability of nutrients and inadequate removal of metabolic wastes. Ischemia can involve a partial blockage (poor perfusion) or a total blockage. Compartment syndrome is a condition in which increased pressure within one of the body's anatomical compartments results in insufficient blood supply to the tissue within that space.
There are two main types: acute and chronic. Compartments of the leg or arm are most commonly involved. If tissue is not being perfused properly, it may feel cold and appear pale; if severe, hypoxia can result in cyanosis, a blue discoloration of the skin. If hypoxia is very severe, a tissue may eventually become gangrenous.
By affected tissues and organs
Any living tissue can be affected by hypoxia, but some are particularly sensitive or suffer more noticeable or more serious consequences.
Cerebral hypoxia
Cerebral hypoxia is hypoxia specifically involving the brain. The four categories of cerebral hypoxia, in order of increasing severity, are: diffuse cerebral hypoxia (DCH), focal cerebral ischemia, cerebral infarction, and global cerebral ischemia. Prolonged hypoxia induces neuronal cell death via apoptosis, resulting in a hypoxic brain injury. Oxygen deprivation can be hypoxic (reduced general oxygen availability) or ischemic (oxygen deprivation due to a disruption in blood flow) in origin. Brain injury as a result of oxygen deprivation is generally termed hypoxic injury. Hypoxic ischemic encephalopathy (HIE) is a condition that occurs when the entire brain is deprived of an adequate oxygen supply, but the deprivation is not total. While HIE is associated in most cases with oxygen deprivation in the neonate due to birth asphyxia, it can occur in all age groups, and is often a complication of cardiac arrest.
Corneal hypoxia
Although corneal hypoxia can arise from any of several causes, it is primarily attributable to the prolonged use of contact lenses. The corneas are not perfused and get their oxygen from the atmosphere by diffusion. Impermeable contact lenses form a barrier to this diffusion and can therefore damage the corneas. Symptoms may include irritation, excessive tearing and blurred vision. The sequelae of corneal hypoxia include punctate keratitis, corneal neovascularization and epithelial microcysts.
Intrauterine hypoxia
Intrauterine hypoxia, also known as fetal hypoxia, occurs when the fetus is deprived of an adequate supply of oxygen. It may be due to a variety of causes, such as prolapse or occlusion of the umbilical cord, placental infarction, maternal diabetes (prepregnancy or gestational diabetes) and maternal smoking. Intrauterine growth restriction may cause or be the result of hypoxia. Intrauterine hypoxia can cause cellular damage within the central nervous system (the brain and spinal cord). This results in an increased mortality rate, including an increased risk of sudden infant death syndrome (SIDS). Oxygen deprivation in the fetus and neonate has been implicated as either a primary or a contributing risk factor in numerous neurological and neuropsychiatric disorders such as epilepsy, attention deficit hyperactivity disorder, eating disorders and cerebral palsy.
Tumor hypoxia
Tumor hypoxia is the situation where tumor cells have been deprived of oxygen. As a tumor grows, it rapidly outgrows its blood supply, leaving portions of the tumor with regions where the oxygen concentration is significantly lower than in healthy tissues. Hypoxic microenvironments in solid tumors are a result of the available oxygen being consumed within 70 to 150 μm of the tumor vasculature by rapidly proliferating tumor cells, thus limiting the amount of oxygen available to diffuse further into the tumor tissue. The severity of hypoxia depends on the tumor type and varies between types.
Research has shown that the level of oxygenation in hypoxic tumor tissues is poorer than in normal tissues, reported to be somewhere between 1% and 2% O2. In order to support continuous growth and proliferation in challenging hypoxic environments, cancer cells are found to alter their metabolism. Furthermore, hypoxia is known to change cell behavior and is associated with extracellular matrix remodeling and increased migratory and metastatic behavior. Tumor hypoxia is usually associated with highly malignant tumors, which frequently do not respond well to treatment.
Vestibular system
Acute exposure to hypoxic hypoxia affects the vestibular system and visuo-vestibular interactions: the gain of the vestibulo-ocular reflex (VOR) decreases under mild hypoxia at altitude. Postural control is also disturbed by hypoxia at altitude, postural sway is increased, and there is a correlation between hypoxic stress and adaptive tracking performance.
Signs and symptoms
Arterial oxygen tension can be measured by blood gas analysis of an arterial blood sample, and less reliably by pulse oximetry, which is not a complete measure of circulatory oxygen sufficiency. If there is insufficient blood flow or insufficient hemoglobin in the blood (anemia), tissues can be hypoxic even when there is high arterial oxygen saturation. Signs and symptoms may include:
Cyanosis
Headache
Decreased reaction time, disorientation, and uncoordinated movement
Impaired judgment, confusion, memory loss and cognitive problems
Euphoria or dissociation
Visual impairment; a moderate level of hypoxia can cause a generalized partial loss of color vision, affecting both red-green and blue-yellow discrimination, at altitude
Lightheaded or dizzy sensation, vertigo
Fatigue, drowsiness, or tiredness
Shortness of breath
Palpitations in the initial phases; later, the heart rate may decrease significantly, and in severe cases abnormal heart rhythms may develop
Nausea and vomiting
Initially raised blood pressure, followed by lowered blood pressure as the condition progresses
Tingling in fingers and toes
Numbness
Severe hypoxia can cause loss of consciousness, seizures or convulsions, coma and eventually death. The breathing rate may slow down and become shallow, and the pupils may not respond to light.
Complications
Complications include local tissue death and gangrene, a relatively common complication of ischaemic hypoxia (as in diabetes), and brain damage – cortical blindness is a known but uncommon complication of acute hypoxic damage to the cerebral cortex. Obstructive sleep apnea syndrome is a risk factor for cerebrovascular disease and cognitive dysfunction.
Causes
Oxygen passively diffuses in the lung alveoli according to a concentration gradient, also referred to as a partial pressure gradient. Inhaled air rapidly reaches saturation with water vapour, which slightly reduces the partial pressures of the other components. Oxygen diffuses from the inhaled air to arterial blood, where its partial pressure is around 100 mmHg (13.3 kPa). In the blood, oxygen is bound to hemoglobin, a protein in red blood cells. The binding capacity of hemoglobin is influenced by the partial pressure of oxygen in the environment, as described by the oxygen–hemoglobin dissociation curve. A smaller amount of oxygen is transported in solution in the blood. In systemic tissues, oxygen again diffuses down a concentration gradient into cells and their mitochondria, where it is used to produce energy in conjunction with the breakdown of glucose, fats, and some amino acids.
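As a rough quantitative sketch (the constants below are typical textbook values for adult hemoglobin and are not taken from this article), the sigmoidal oxygen–hemoglobin dissociation curve mentioned above is often approximated by a Hill equation:

$$ S_{\mathrm{O_2}} \approx \frac{(P_{\mathrm{O_2}}/P_{50})^{n}}{1 + (P_{\mathrm{O_2}}/P_{50})^{n}}, \qquad P_{50} \approx 26.6\ \text{mmHg}, \quad n \approx 2.7 $$

where $S_{\mathrm{O_2}}$ is the hemoglobin saturation, $P_{\mathrm{O_2}}$ the local oxygen partial pressure, $P_{50}$ the partial pressure at half-saturation, and $n$ the Hill coefficient. With these assumed values, an arterial $P_{\mathrm{O_2}}$ of about 100 mmHg gives a saturation near 97%, while a typical venous $P_{\mathrm{O_2}}$ of about 40 mmHg gives roughly 75%; the flat upper portion of the curve is why modest falls in arterial oxygen tension initially cause only small falls in saturation.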
Hypoxia can result from a failure at any stage in the delivery of oxygen to cells. This can include low partial pressures of oxygen in the breathing gas, problems with diffusion of oxygen in the lungs through the interface between air and blood, insufficient available hemoglobin, problems with blood flow to the end user tissue, problems with the breathing cycle regarding rate and volume, and physiological and mechanical dead space. Experimentally, oxygen diffusion becomes rate limiting when arterial oxygen partial pressure falls to 60 mmHg (5.3 kPa) or below. Almost all the oxygen in the blood is bound to hemoglobin, so interfering with this carrier molecule limits oxygen delivery to the perfused tissues. Hemoglobin increases the oxygen-carrying capacity of blood by about 40-fold, with the ability of hemoglobin to carry oxygen influenced by the partial pressure of oxygen in the local environment, a relationship described in the oxygen–hemoglobin dissociation curve. When the ability of hemoglobin to carry oxygen is degraded, a hypoxic state can result. Ischemia Ischemia, meaning insufficient blood flow to a tissue, can also result in hypoxia in the affected tissues. This is called 'ischemic hypoxia'. Ischemia can be caused by an embolism, a heart attack that decreases overall blood flow, trauma to a tissue that results in damage reducing perfusion, and a variety of other causes. A consequence of insufficient blood flow causing local hypoxia is gangrene that occurs in diabetes. Diseases such as peripheral vascular disease can also result in local hypoxia. Symptoms are worse when a limb is used, increasing the oxygen demand in the active muscles. Pain may also be felt as a result of increased hydrogen ions leading to a decrease in blood pH (acidosis) created as a result of anaerobic metabolism. G-LOC, or g-force induced loss of consciousness, is a special case of ischemic hypoxia which occurs when the body is subjected to high enough acceleration sustained for long enough to lower cerebral blood pressure and circulation to the point where loss of consciousness occurs due to cerebral hypoxia. The human body is most sensitive to longitudinal acceleration towards the head, as this causes the largest hydrostatic pressure deficit in the head. Hypoxemic hypoxia This refers specifically to hypoxic states where the arterial content of oxygen is insufficient. This can be caused by alterations in respiratory drive, such as in respiratory alkalosis, physiological or pathological shunting of blood, diseases interfering in lung function resulting in a ventilation-perfusion mismatch, such as a pulmonary embolus, or alterations in the partial pressure of oxygen in the environment or lung alveoli, such as may occur at altitude or when diving. Common disorders that can cause respiratory dysfunction include trauma to the head and spinal cord, nontraumatic acute myelopathies, demyelinating disorders, stroke, Guillain–Barré syndrome, and myasthenia gravis. These dysfunctions may necessitate mechanical ventilation. Some chronic neuromuscular disorders such as motor neuron disease and muscular dystrophy may require ventilatory support in advanced stages. Carbon monoxide poisoning Carbon monoxide competes with oxygen for binding sites on hemoglobin molecules. As carbon monoxide binds with hemoglobin hundreds of times tighter than oxygen, it can prevent the carriage of oxygen. Carbon monoxide poisoning can occur acutely, as with smoke intoxication, or over a period of time, as with cigarette smoking. 
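To make the hemoglobin figures above concrete, a commonly used back-of-envelope estimate of arterial oxygen content can be written as follows (a sketch with typical assumed values; the binding constant 1.34 mL/g and the example blood values are standard approximations, not figures from this article):

$$ C_{a\mathrm{O_2}} \approx (1.34 \times [\mathrm{Hb}] \times S_{a\mathrm{O_2}}) + (0.003 \times P_{a\mathrm{O_2}})\ \ \text{mL O}_2/\text{dL} $$

where $[\mathrm{Hb}]$ is the hemoglobin concentration in g/dL, $S_{a\mathrm{O_2}}$ the fractional arterial saturation, and $P_{a\mathrm{O_2}}$ the arterial oxygen tension in mmHg. Assuming $[\mathrm{Hb}] = 15$ g/dL, $S_{a\mathrm{O_2}} = 0.98$ and $P_{a\mathrm{O_2}} = 100$ mmHg, the hemoglobin-bound term contributes about 19.7 mL/dL and the dissolved term only about 0.3 mL/dL, so hemoglobin carries roughly 98–99% of the total. The same expression also illustrates why carboxyhemoglobin is so damaging: carbon monoxide lowers the effective saturation available for oxygen without changing $P_{a\mathrm{O_2}}$, so oxygen content falls even when the measured arterial oxygen tension looks reassuring.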
Due to physiological processes, carbon monoxide is maintained at a resting level of 4–6 ppm. This is increased in urban areas (7–13 ppm) and in smokers (20–40 ppm). A carbon monoxide level of 40 ppm is equivalent to a reduction in hemoglobin levels of 10 g/L. Carbon monoxide has a second toxic effect, namely removing the allosteric shift of the oxygen dissociation curve and shifting the foot of the curve to the left. In so doing, the hemoglobin is less likely to release its oxygen at the peripheral tissues. Certain abnormal hemoglobin variants also have a higher than normal affinity for oxygen, and so are also poor at delivering oxygen to the periphery.
Altitude
Atmospheric pressure decreases with altitude and, proportionally, so does the partial pressure of oxygen in the air. The reduction in the partial pressure of inspired oxygen at higher altitudes lowers the oxygen saturation of the blood, ultimately leading to hypoxia. The clinical features of altitude sickness include sleep problems, dizziness, headache and oedema.
Hypoxic breathing gases
The breathing gas may contain an insufficient partial pressure of oxygen. Such situations may lead to unconsciousness without symptoms, since carbon dioxide levels remain normal and the human body senses pure hypoxia poorly. Hypoxic breathing gases can be defined as mixtures with a lower oxygen fraction than air, though gases containing sufficient oxygen to reliably maintain consciousness at normal sea-level atmospheric pressure may be described as normoxic even when the oxygen fraction is slightly below that of air. Hypoxic breathing gas mixtures in this context are those which will not reliably maintain consciousness at sea-level pressure. One of the most widespread circumstances of exposure to hypoxic breathing gas is ascent to altitudes where the ambient pressure drops sufficiently to reduce the partial pressure of oxygen to hypoxic levels. Gases with as little as 2% oxygen by volume in a helium diluent are used for deep diving operations. The ambient pressure at 190 msw is sufficient to provide an oxygen partial pressure of about 0.4 bar, which is suitable for saturation diving. As the divers are decompressed, the oxygen content of the breathing gas must be increased to maintain a breathable atmosphere. It is also possible for the breathing gas for diving to have a dynamically controlled oxygen partial pressure, known as a set point, which is maintained in the breathing gas circuit of a diving rebreather by the addition of oxygen and diluent gas, keeping the oxygen partial pressure at a safe level between hypoxic and hyperoxic at the ambient pressure of the current depth. A malfunction of the control system may lead to the gas mixture becoming hypoxic at the current depth. A special case of hypoxic breathing gas is encountered in deep freediving, where the partial pressure of oxygen in the lung gas is depleted during the dive but remains sufficient at depth; when it drops during ascent, it becomes too hypoxic to maintain consciousness, and the diver may lose consciousness before reaching the surface. Hypoxic gases may also occur in industrial, mining, and firefighting environments. Some of these gases may also be toxic or narcotic; others are simply asphyxiant. Some are recognisable by smell, others are odourless. Inert gas asphyxiation may be deliberate, as with the use of a suicide bag. Accidental deaths have occurred in cases where concentrations of nitrogen in controlled atmospheres, or of methane in mines, have not been detected or appreciated.
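As a worked example of the arithmetic behind the diving figures above (the 10 msw-per-bar depth conversion and the consciousness threshold quoted below are common rules of thumb, not values stated in this article), the oxygen partial pressure of a breathing mix is simply the product of its oxygen fraction and the ambient pressure:

$$ P_{\mathrm{O_2}} = F_{\mathrm{O_2}} \times P_{\mathrm{amb}} $$

At 190 msw the ambient pressure is roughly $1 + 190/10 = 20$ bar, so a 2% oxygen mix gives $0.02 \times 20 \approx 0.4$ bar, matching the saturation-diving figure quoted above. Breathed at the surface ($P_{\mathrm{amb}} \approx 1$ bar), the same mix would supply only about 0.02 bar of oxygen, far below the roughly 0.16 bar often quoted as needed to reliably maintain consciousness, which is why such mixes must be enriched or swapped as the divers are decompressed.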
Other
Hemoglobin's function can also be lost by chemically oxidizing its iron atom to its ferric form. This form of inactive hemoglobin is called methemoglobin, and can be produced by ingesting sodium nitrite as well as certain drugs and other chemicals.
Anemia
Hemoglobin plays a substantial role in carrying oxygen throughout the body, and when it is deficient, anemia can result, causing 'anaemic hypoxia' if tissue oxygenation is decreased. Iron deficiency is the most common cause of anemia. As iron is used in the synthesis of hemoglobin, less hemoglobin will be synthesised when there is less iron, whether due to insufficient intake or poor absorption. Anemia is typically a chronic process that is compensated over time by increased levels of red blood cells via upregulated erythropoietin. A chronic hypoxic state can result from a poorly compensated anaemia.
Histotoxic hypoxia
Histotoxic hypoxia (also called histoxic hypoxia) is the inability of cells to take up or use oxygen from the bloodstream, despite physiologically normal delivery of oxygen to such cells and tissues. Histotoxic hypoxia results from tissue poisoning, such as that caused by cyanide (which acts by inhibiting cytochrome oxidase) and certain other poisons like hydrogen sulfide (a byproduct of sewage, also used in leather tanning).
Mechanism
Tissue hypoxia from low oxygen delivery may be due to low haemoglobin concentration (anaemic hypoxia), low cardiac output (stagnant hypoxia) or low haemoglobin saturation (hypoxic hypoxia). The consequence of oxygen deprivation in tissues is a switch to anaerobic metabolism at the cellular level. As such, reduced systemic blood flow may result in increased serum lactate. Serum lactate levels have been correlated with illness severity and mortality in critically ill adults and in ventilated neonates with respiratory distress.
Physiological responses
All vertebrates must maintain oxygen homeostasis to survive, and have evolved physiological systems to ensure adequate oxygenation of all tissues. In air-breathing vertebrates this is based on lungs to acquire the oxygen, hemoglobin in red corpuscles to transport it, a vasculature to distribute it, and a heart to deliver it. Short-term variations in the levels of oxygenation are sensed by chemoreceptor cells, which respond by activating existing proteins, and over longer terms by regulation of gene transcription. Hypoxia is also involved in the pathogenesis of some common and severe pathologies. The most common causes of death in an aging population include myocardial infarction, stroke and cancer. These diseases share the common feature that limitation of oxygen availability contributes to the development of the pathology. Cells and organisms are also able to respond adaptively to hypoxic conditions, in ways that help them to cope with these adverse conditions. Several systems can sense oxygen concentration and may respond with adaptations to acute and long-term hypoxia. The systems activated by hypoxia usually help cells to survive and overcome the hypoxic conditions. Erythropoietin, which is produced in larger quantities by the kidneys under hypoxic conditions, is an essential hormone that stimulates production of red blood cells, the primary transporters of blood oxygen; glycolytic enzymes are involved in anaerobic ATP formation. Hypoxia-inducible factors (HIFs) are transcription factors that respond to decreases in available oxygen in the cellular environment, or hypoxia. The HIF signaling cascade mediates the effects of hypoxia on the cell.
Hypoxia often keeps cells from differentiating. However, hypoxia promotes the formation of blood vessels, and is important for the formation of a vascular system in embryos and tumors. The hypoxia in wounds also promotes the migration of keratinocytes and the restoration of the epithelium. It is therefore not surprising that HIF-1 modulation was identified as a promising treatment paradigm in wound healing. Exposure of a tissue to repeated short periods of hypoxia, between periods of normal oxygen levels, influences the tissue's later response to prolonged ischaemic exposure. This is known as ischaemic preconditioning, and it is known to occur in many tissues.
Acute
If oxygen delivery to cells is insufficient for the demand (hypoxia), electrons will be shifted to pyruvic acid in the process of lactic acid fermentation. This temporary measure (anaerobic metabolism) allows small amounts of energy to be released. Lactic acid build-up (in tissues and blood) is a sign of inadequate mitochondrial oxygenation, which may be due to hypoxemia, poor blood flow (e.g., shock), or a combination of both. If severe or prolonged, it could lead to cell death. In humans, hypoxia is detected by the peripheral chemoreceptors in the carotid body and aortic body, with the carotid body chemoreceptors being the major mediators of reflex responses to hypoxia. This response does not control the ventilation rate at normal PO2, but below normal the activity of neurons innervating these receptors increases dramatically, so much so as to override the signals from the central chemoreceptors in the medulla, increasing PO2 despite a falling PCO2. In most tissues of the body, the response to hypoxia is vasodilation. By widening the blood vessels, the tissue allows greater perfusion. By contrast, in the lungs, the response to hypoxia is vasoconstriction. This is known as hypoxic pulmonary vasoconstriction, or "HPV", and has the effect of redirecting blood away from poorly ventilated regions, which helps match perfusion to ventilation, giving a more even oxygenation of blood from different parts of the lungs. In conditions of hypoxic breathing gas, such as at high altitude, HPV is generalized over the entire lung, but with sustained exposure to generalized hypoxia, HPV is suppressed. The hypoxic ventilatory response (HVR) is the increase in ventilation induced by hypoxia that allows the body to take in and transport lower concentrations of oxygen at higher rates. It is initially elevated in lowlanders who travel to high altitude, but reduces significantly over time as people acclimatize.
Chronic
When the pulmonary capillary pressure remains elevated chronically (for at least 2 weeks), the lungs become even more resistant to pulmonary edema because the lymph vessels expand greatly, increasing their capability of carrying fluid away from the interstitial spaces by perhaps as much as 10-fold. Therefore, in patients with chronic mitral stenosis, pulmonary capillary pressures of 40 to 45 mm Hg have been measured without the development of lethal pulmonary edema. There are several potential physiologic mechanisms for hypoxemia, but in patients with chronic obstructive pulmonary disease (COPD), ventilation/perfusion (V/Q) mismatching is most common, with or without alveolar hypoventilation, as indicated by the arterial carbon dioxide concentration.
Hypoxemia caused by V/Q mismatching in COPD is relatively easy to correct, and relatively small flow rates of supplemental oxygen (less than 3 L/min for the majority of patients) are required for long term oxygen therapy (LTOT). Hypoxemia normally stimulates ventilation and produces dyspnea, but these and the other signs and symptoms of hypoxia are sufficiently variable in COPD to limit their value in patient assessment. Chronic alveolar hypoxia is the main factor leading to development of cor pulmonale (right ventricular hypertrophy with or without overt right ventricular failure) in patients with COPD. Pulmonary hypertension adversely affects survival in COPD, proportional to resting mean pulmonary artery pressure elevation. Although the severity of airflow obstruction as measured by the forced expiratory volume in one second (FEV1) correlates best with overall prognosis in COPD, chronic hypoxemia increases mortality and morbidity for any severity of disease. Large-scale studies of long term oxygen therapy in patients with COPD show a dose–response relationship between daily hours of supplemental oxygen use and survival. Continuous, 24-hours-per-day oxygen use in appropriately selected patients may produce a significant survival benefit. Pathological responses Cerebral ischemia The brain has relatively high energy requirements, consuming about 20% of the body's oxygen under resting conditions, but low reserves, which makes it especially vulnerable to hypoxia. In normal conditions, an increased demand for oxygen is easily compensated by an increased cerebral blood flow, but when insufficient oxygen is available, increased blood flow may not be sufficient to compensate, and hypoxia can result in brain injury. A longer duration of cerebral hypoxia will generally result in larger areas of the brain being affected. The brainstem, hippocampus and cerebral cortex seem to be the most vulnerable regions. Injury becomes irreversible if oxygenation is not soon restored. Most cell death is by necrosis, but delayed apoptosis also occurs. In addition, presynaptic neurons release large amounts of glutamate, which further increases Ca2+ influx and causes catastrophic collapse in postsynaptic cells. Although it is the only way to save the tissue, reperfusion also produces reactive oxygen species and inflammatory cell infiltration, which induces further cell death. If the hypoxia is not too severe, cells can suppress some of their functions, such as protein synthesis and spontaneous electrical activity, a state characteristic of the ischaemic penumbra, which is reversible if the oxygen supply is restored soon enough. Myocardial ischemia Parts of the heart are exposed to ischemic hypoxia in the event of occlusion of a coronary artery. Short periods of ischaemia are reversible if reperfused within about 20 minutes, without development of necrosis, but the phenomenon known as stunning is generally evident. If hypoxia continues beyond this period, necrosis propagates through the myocardial tissue. Energy metabolism in the affected area shifts from mitochondrial respiration to anaerobic glycolysis almost immediately, with concurrent reduction of effectiveness of contractions, which soon cease. Anaerobic products accumulate in the muscle cells, which develop acidosis and an osmotic load leading to cellular edema. Intracellular Ca2+ increases and eventually leads to cell necrosis. 
Arterial flow must be restored to return to aerobic metabolism and prevent necrosis of the affected muscle cells, but this also causes further damage by reperfusion injury. Myocardial stunning has been described as "prolonged postischaemic dysfunction of viable tissue salvaged by reperfusion", which manifests as temporary contractile failure in oxygenated muscle tissue. This may be caused by a release of reactive oxygen species during the early stages of reperfusion. Tumor angiogenesis As tumors grow, regions of relative hypoxia develop as the oxygen supply is unevenly utilized by the tumor cells. The formation of new blood vessels is necessary for continued tumor growth, and is also an important factor in metastasis, as the route by which cancerous cells are transported to other sites. Diagnosis Physical examination and history Hypoxia can present as acute or chronic. Acute presentation may include dyspnea (shortness of breath) and tachypnea (rapid, often shallow, breathing). The severity of symptom presentation is commonly an indication of the severity of hypoxia. Tachycardia (rapid pulse) may develop to compensate for low arterial oxygen tension. Stridor may be heard in upper airway obstruction, and cyanosis may indicate severe hypoxia. Neurological symptoms and organ function deterioration occur when the oxygen delivery is severely compromised. In moderate hypoxia, restlessness, headache and confusion may occur, with coma and eventual death possible in severe cases. In chronic presentations, dyspnea following exertion is the most commonly reported symptom. Symptoms of the underlying condition that caused the hypoxia may be apparent, and can help with differential diagnosis. A productive cough and fever may be present with lung infection, and leg edema may suggest heart failure. Lung auscultation can provide useful information. Tests An arterial blood gas test (ABG) may be done, which usually includes measurements of oxygen content, hemoglobin, oxygen saturation (how much of the hemoglobin is carrying oxygen), arterial partial pressure of oxygen (PaO2), partial pressure of carbon dioxide (PaCO2), blood pH level, and bicarbonate (HCO3). An arterial oxygen tension (PaO2) less than 80 mmHg is considered abnormal, but must be considered in context of the clinical situation. In addition to diagnosis of hypoxemia, the ABG may provide additional information, such as PCO2, which can help identify the etiology. The arterial partial pressure of carbon dioxide is an indirect measure of exchange of carbon dioxide with the air in the lungs, and is related to minute ventilation. PCO2 is raised in hypoventilation. The normal range of the PaO2:FiO2 ratio is 300 to 500 mmHg; a ratio lower than 300 may indicate a deficit in gas exchange, which is particularly relevant for identifying acute respiratory distress syndrome (ARDS). A ratio of less than 200 indicates severe hypoxemia. The alveolar–arterial gradient (A-aO2, or A–a gradient) is the difference between the alveolar (A) concentration of oxygen and the arterial (a) concentration of oxygen. It is a useful parameter for narrowing the differential diagnosis of hypoxemia. The A–a gradient helps to assess the integrity of the alveolar capillary unit. For example, at high altitude, the arterial oxygen PaO2 is low, but only because the alveolar oxygen PAO2 is also low. 
However, in states of ventilation/perfusion mismatch, such as pulmonary embolism or right-to-left shunt, oxygen is not effectively transferred from the alveoli to the blood, which results in an elevated A–a gradient. PaO2 can be obtained from the arterial blood gas analysis, and PAO2 is calculated using the alveolar gas equation. An abnormally low hematocrit (volume percentage of red blood cells) may indicate anemia. X-rays or CT scans of the chest and airways can reveal abnormalities that may affect ventilation or perfusion. A ventilation/perfusion scan, also called a V/Q lung scan, is a type of medical imaging using scintigraphy and medical isotopes to evaluate the circulation of air and blood within a patient's lungs, in order to determine the ventilation/perfusion ratio. The ventilation part of the test looks at the ability of air to reach all parts of the lungs, while the perfusion part evaluates how well blood circulates within the lungs. Pulmonary function testing may include tests that measure oxygen levels during the night, and the six-minute walk test, which measures how far a person can walk on a flat surface in six minutes to assess exercise capacity by measuring oxygen levels in response to exercise. Diagnostic measurements that may be relevant include lung volumes (including lung capacity), airway resistance, respiratory muscle strength, and diffusing capacity. Other pulmonary function tests which may be relevant include spirometry, body plethysmography, the forced oscillation technique for calculating the volume, pressure, and air flow in the lungs, bronchodilator responsiveness, the carbon monoxide diffusion test (DLCO), oxygen titration studies, cardiopulmonary stress testing, bronchoscopy, and thoracentesis. Differential diagnosis Treatment will depend on severity and may also depend on the cause. Some cases are due to external causes, and removing them and treating the acute symptoms may be sufficient; but where the symptoms are due to underlying pathology, treating the obvious symptoms may provide only temporary or partial relief, so differential diagnosis can be important in selecting definitive treatment. Hypoxemic hypoxia: low oxygen tension in the arterial blood (PaO2) is generally an indication that the lungs cannot properly oxygenate the blood. Internal causes include hypoventilation, impaired alveolar diffusion, and pulmonary shunting. External causes include a hypoxic environment, which could be caused by low ambient pressure or unsuitable breathing gas. Both acute and chronic hypoxia and hypercapnia caused by respiratory dysfunction can produce neurological symptoms such as encephalopathy, seizures, headache, papilledema, and asterixis. Obstructive sleep apnea syndrome may cause morning headaches. Circulatory hypoxia: caused by insufficient perfusion of the affected tissues by blood which is adequately oxygenated. This may be generalised, due to cardiac failure or hypovolemia, or localised, due to infarction or localised injury. Anemic hypoxia is caused by a deficit in oxygen-carrying capacity, usually due to low hemoglobin levels, leading to generalised inadequate oxygen delivery. Histotoxic hypoxia (dysoxia) is a consequence of cells being unable to utilize oxygen effectively. A classic example is cyanide poisoning, which inhibits the enzyme cytochrome c oxidase in the mitochondria, blocking the use of oxygen to make ATP. Critical illness polyneuropathy or myopathy should be considered in the intensive care unit when patients have difficulty coming off the ventilator. 
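The PaO2:FiO2 ratio and the A–a gradient mentioned in the Tests section above can be computed directly from arterial blood gas values. The sketch below is a rough illustration rather than a clinical tool: it uses the simplified alveolar gas equation with the usual textbook assumptions (water vapour pressure of 47 mmHg, respiratory quotient of about 0.8) and hypothetical example numbers.

def pf_ratio(pao2_mmhg, fio2_fraction):
    # PaO2/FiO2: roughly 300-500 is normal, below 300 suggests impaired gas exchange, below 200 severe
    return pao2_mmhg / fio2_fraction

def a_a_gradient(pao2_mmhg, paco2_mmhg, fio2_fraction=0.21, patm_mmhg=760,
                 ph2o_mmhg=47, resp_quotient=0.8):
    # Simplified alveolar gas equation: PAO2 = FiO2 * (Patm - PH2O) - PaCO2 / R
    pao2_alveolar = fio2_fraction * (patm_mmhg - ph2o_mmhg) - paco2_mmhg / resp_quotient
    return pao2_alveolar - pao2_mmhg

print(round(pf_ratio(60, 0.21)))                   # about 286: below the 300 threshold
print(round(a_a_gradient(60, 40)))                 # about 40 mmHg: elevated, suggesting V/Q mismatch or shunt
print(round(a_a_gradient(50, 30, patm_mmhg=483)))  # low PaO2 at reduced barometric pressure, near-normal gradient

The last line mirrors the point made above: at altitude the arterial PaO2 falls mainly because the alveolar PAO2 falls, so the gradient itself stays close to normal.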
Prevention Prevention can be as simple as risk management of occupational exposure to hypoxic environments, and commonly involves the use of environmental monitoring and personal protective equipment. Prevention of hypoxia as a predictable consequence of medical conditions requires prevention of those conditions. Screening of demographics known to be at risk for specific disorders may be useful. Prevention of altitude-induced hypoxia To counter the effects of high-altitude diseases, the body must return arterial PaO2 toward normal. Acclimatization, the means by which the body adapts to higher altitudes, only partially restores PO2 to standard levels. Hyperventilation, the body's most common response to high-altitude conditions, increases alveolar PO2 by raising the depth and rate of breathing. However, while PO2 does improve with hyperventilation, it does not return to normal. Studies of miners and astronomers working at 3000 meters and above show improved alveolar PO2 with full acclimatization, yet the PO2 level remains equal to or even below the threshold for continuous oxygen therapy for patients with chronic obstructive pulmonary disease (COPD). In addition, there are complications involved with acclimatization. Polycythemia, in which the body increases the number of red blood cells in circulation, thickens the blood, raising the risk of blood clots. In high-altitude situations, only oxygen enrichment or compartment pressurisation can counteract the effects of hypoxia. Pressurisation is practicable in vehicles, and for emergencies in ground installations. By increasing the concentration of oxygen in the inspired air at ambient pressure, the effects of lower barometric pressure are countered and arterial PO2 is restored toward normal. A small amount of supplemental oxygen reduces the equivalent altitude in climate-controlled rooms. At 4000 m, raising the oxygen concentration level by 5% via an oxygen concentrator and an existing ventilation system provides an altitude equivalent of 3000 m, which is much more tolerable for the increasing number of lowlanders who work at high altitude. In a study of astronomers working in Chile at 5050 m, oxygen concentrators increased the level of oxygen concentration by almost 30 percent (that is, from 21 percent to 27 percent). This resulted in increased worker productivity, less fatigue, and improved sleep. Oxygen concentrators are suited for high-altitude oxygen enrichment of climate-controlled environments. They require little maintenance and electricity, utilise a locally available source of oxygen, and eliminate the expensive task of transporting oxygen cylinders to remote areas. Offices and housing often already have climate-controlled rooms, in which temperature and humidity are kept at a constant level. Treatment and management Treatment and management depend on circumstances. For most high-altitude situations the risk is known, and prevention is appropriate. At low altitudes hypoxia is more likely to be associated with a medical problem or an unexpected contingency, and treatment is more likely to be provided to suit the specific case. It is necessary to identify persons who need oxygen therapy, as supplemental oxygen is required to treat most causes of hypoxia, but different oxygen concentrations may be appropriate. Treatment of acute and chronic cases Treatment will depend on the cause of hypoxia. 
If it is determined that there is an external cause, and it can be removed, then treatment may be limited to support and returning the system to normal oxygenation. In other cases a longer course of treatment may be necessary, and this may require supplemental oxygen over a fairly long term or indefinitely. There are three main aspects of oxygenation treatment: maintaining patent airways, providing sufficient oxygen content of the inspired air, and improving diffusion in the lungs. In some cases treatment may extend to improving the oxygen capacity of the blood, which may include volumetric and circulatory intervention and support, hyperbaric oxygen therapy and treatment of intoxication. Invasive ventilation may be necessary, or an elective option in surgery. This generally involves a positive pressure ventilator connected to an endotracheal tube, allows precise delivery of ventilation, accurate monitoring of FiO2, and positive end-expiratory pressure, and can be combined with anaesthetic gas delivery. In some cases a tracheotomy may be necessary. Decreasing metabolic rate by reducing body temperature lowers oxygen demand and consumption, and can minimise the effects of tissue hypoxia, especially in the brain; therapeutic hypothermia based on this principle may be useful. Where the problem is due to respiratory failure, it is desirable to treat the underlying cause. In cases of pulmonary edema, diuretics can be used to reduce the oedema. Steroids may be effective in some cases of interstitial lung disease, and in extreme cases, extracorporeal membrane oxygenation (ECMO) can be used. Hyperbaric oxygen has been found useful for treating some forms of localized hypoxia, including poorly perfused trauma injuries such as crush injury, compartment syndrome, and other acute traumatic ischemias. It is the definitive treatment for severe decompression sickness, which is largely a condition involving localized hypoxia initially caused by inert gas embolism and inflammatory reactions to extravascular bubble growth. It is also effective in carbon monoxide poisoning and diabetic foot. A prescription renewal for home oxygen following hospitalization requires an assessment of the patient for ongoing hypoxemia. Outcomes Prognosis is strongly affected by cause, severity, treatment, and underlying pathology. Hypoxia leading to reduced capacity to respond appropriately, or to loss of consciousness, has been implicated in incidents where the direct cause of death was not hypoxia. This is recorded in underwater diving incidents, where drowning has often been given as the cause of death; in high-altitude mountaineering, where exposure, hypothermia and falls have been consequences; in flying in unpressurized aircraft; and in aerobatic maneuvers, where loss of control leading to a crash is possible. Epidemiology Hypoxia is a common disorder, but there are many possible causes. Prevalence is variable. Some of the causes are very common, like pneumonia or chronic obstructive pulmonary disease; some are quite rare, like hypoxia due to cyanide poisoning. Others, like reduced oxygen tension at high altitude, may be regionally distributed or associated with a specific demographic. Generalized hypoxia is an occupational hazard in several high-risk occupations, including firefighting, professional diving, mining and underground rescue, and flying at high altitudes in unpressurised aircraft. Potentially life-threatening hypoxemia is common in critically ill patients. 
Localized hypoxia may be a complication of diabetes, decompression sickness, and of trauma that affects blood supply to the extremities. Hypoxia due to underdeveloped lung function is a common complication of premature birth. In the United States, intrauterine hypoxia and birth asphyxia were listed together as the tenth leading cause of neonatal death. Silent hypoxia Silent hypoxia (also known as happy hypoxia) is generalised hypoxia that does not coincide with shortness of breath. This presentation is known to be a complication of COVID-19, and is also known in atypical pneumonia, altitude sickness, and rebreather malfunction accidents. History The 2019 Nobel Prize in Physiology or Medicine was awarded to William G. Kaelin Jr., Sir Peter J. Ratcliffe, and Gregg L. Semenza in recognition of their discovery of cellular mechanisms to sense and adapt to different oxygen concentrations, establishing a basis for how oxygen levels affect physiological function. The use of the term hypoxia appears to be relatively recent, with the first recorded use in a scientific publication dating from 1945. Before this, the term anoxia was extensively used for all levels of oxygen deprivation. Investigation into the effects of lack of oxygen dates from the mid 19th century. Etymology Hypoxia is formed from the Greek roots υπo (hypo), meaning under, below, and less than, and oξυ (oxy), meaning acute or acid, which is the root for oxygen. See also Hypoxemia, a result of insufficient oxygen available to the lungs; Hypoxicator, a device intended for hypoxia acclimatisation in a controlled manner; Intrauterine hypoxia, when a fetus is deprived of an adequate supply of oxygen; Pseudohypoxia, increased cytosolic ratio of free NADH to NAD+ in cells; Solid stress, one of the physical hallmarks of cancer; Vasculogenic mimicry
Chemical burn
A chemical burn occurs when living tissue is exposed to a corrosive substance (such as a strong acid, base or oxidizer) or a cytotoxic agent (such as mustard gas, lewisite or arsine). Chemical burns follow standard burn classification and may cause extensive tissue damage. The main types of irritant and/or corrosive products are acids, bases, oxidizers / reducing agents, solvents, and alkylants. Additionally, chemical burns can be caused by biological toxins (such as anthrax toxin) and by some types of cytotoxic chemical weapons, e.g., vesicants such as mustard gas and lewisite, or urticants such as phosgene oxime. Chemical burns may: need no source of heat; occur immediately on contact; not be immediately evident or noticeable; be extremely painful; or diffuse into tissue and damage cellular structures under the skin without immediately apparent damage to the skin surface. Exposure to a toxic substance that is insufficient to cause a chemical burn can still be very serious, and the lack of a noticeable effect from a chemical exposure is not an indication of safety, particularly in the case of chronic exposure. Presentation The exact symptoms of a chemical burn depend on the chemical involved. Symptoms include itching, bleaching or darkening of skin, burning sensations, trouble breathing, coughing blood and/or tissue necrosis. Common sources of chemical burns include sulfuric acid (H2SO4), hydrochloric acid (HCl), sodium hydroxide (NaOH), lime (CaO), silver nitrate (AgNO3), and hydrogen peroxide (H2O2). Effects depend on the substance; hydrogen peroxide removes a bleached layer of skin, while nitric acid causes a characteristic color change to yellow in the skin, and silver nitrate produces noticeable black stains. Chemical burns may occur through direct contact on body surfaces, including skin and eyes, via inhalation, and/or by ingestion. Substances that diffuse efficiently in human tissue, e.g., hydrofluoric acid, sulfur mustard, and dimethyl sulfate, may not react immediately, but instead produce the burns and inflammation hours after the contact. Chemical fabrication, mining, medicine, and related professional fields are examples of occupations where chemical burns may occur. Hydrofluoric acid leaches into the bloodstream, reacts with calcium and magnesium, and the resulting salts can cause cardiac arrest after eating through skin. Prevention In Belgium, the Conseil Supérieur de la Santé gives a scientific advisory report on public health policy. The Superior Health Council of Belgium provides an overview of products that are authorized in Belgium for consumer use and that contain caustic substances, as well as of the risks linked to exposure to these products. This report aims to suggest protective measures for consumers, and formulates recommendations that apply to the different stages of the chain, which begins with the formulation of the product, followed by its regulation, marketing, application and post-application, and ends with its monitoring. See also Acid throwing
Biosynthesis
Biosynthesis, i.e., chemical synthesis occurring in biological contexts, is a term most often referring to multi-step, enzyme-catalyzed processes where chemical substances absorbed as nutrients (or previously converted through biosynthesis) serve as enzyme substrates, with conversion by the living organism either into simpler or more complex products. Examples of biosynthetic pathways include those for the production of amino acids, lipid membrane components, and nucleotides, but also for the production of all classes of biological macromolecules, and of acetyl-coenzyme A, adenosine triphosphate, nicotinamide adenine dinucleotide and other key intermediate and transactional molecules needed for metabolism. Thus, in biosynthesis, any of an array of compounds, from simple to complex, are converted into other compounds, and so it includes both the catabolism and anabolism (breaking down and building up) of complex molecules (including macromolecules). Biosynthetic processes are often represented via charts of metabolic pathways. A particular biosynthetic pathway may be located within a single cellular organelle (e.g., mitochondrial fatty acid synthesis pathways), while others involve enzymes that are located across an array of cellular organelles and structures (e.g., the biosynthesis of glycosylated cell surface proteins). Elements of biosynthesis Elements of biosynthesis include precursor compounds, chemical energy (e.g. ATP), and catalytic enzymes, which may need coenzymes (e.g. NADH, NADPH). These elements create monomers, the building blocks for macromolecules. Some important biological macromolecules include proteins, which are composed of amino acid monomers joined via peptide bonds, and DNA molecules, which are composed of nucleotides joined via phosphodiester bonds. Properties of chemical reactions Biosynthesis occurs due to a series of chemical reactions. For these reactions to take place, the following elements are necessary: Precursor compounds: these compounds are the starting molecules or substrates in a reaction. These may also be viewed as the reactants in a given chemical process. Chemical energy: chemical energy can be found in the form of high-energy molecules. These molecules are required for energetically unfavourable reactions. Furthermore, the hydrolysis of these compounds drives a reaction forward. High-energy molecules, such as ATP, have three phosphates. Often, the terminal phosphate is split off during hydrolysis and transferred to another molecule. Catalysts: these may be, for example, metal ions or coenzymes; they catalyze a reaction by increasing its rate and lowering the activation energy. In the simplest sense, the reactions that occur in biosynthesis have the following format: Reactant → Product (catalyzed by an enzyme). Some variations of this basic equation, which will be discussed later in more detail, are: Simple compounds which are converted into other compounds, usually as part of a multiple-step reaction pathway. Two examples of this type of reaction occur during the formation of nucleic acids and the charging of tRNA prior to translation. For some of these steps, chemical energy is required: Precursor molecule + ATP ⇔ product-AMP + PPi. Simple compounds that are converted into other compounds with the assistance of cofactors. For example, the synthesis of phospholipids requires acetyl CoA, while the synthesis of another membrane component, sphingolipids, requires NADH and FADH for the formation of the sphingosine backbone. 
The general equation for these examples is: Precursor molecule + Cofactor → macromolecule (catalyzed by an enzyme). Simple compounds that join to create a macromolecule. For example, fatty acids join to form phospholipids. In turn, phospholipids and cholesterol interact noncovalently in order to form the lipid bilayer. This reaction may be depicted as follows: Molecule 1 + Molecule 2 → macromolecule. Lipid Many intricate macromolecules are synthesized in a pattern of simple, repeated structures. For example, the simplest structures of lipids are fatty acids. Fatty acids are hydrocarbon derivatives; they contain a carboxyl group "head" and a hydrocarbon chain "tail". These fatty acids create larger components, which in turn incorporate noncovalent interactions to form the lipid bilayer. Fatty acid chains are found in two major components of membrane lipids: phospholipids and sphingolipids. A third major membrane component, cholesterol, does not contain these fatty acid units. Eukaryotic phospholipids The foundation of all biomembranes consists of a bilayer structure of phospholipids. The phospholipid molecule is amphipathic; it contains a hydrophilic polar head and a hydrophobic nonpolar tail. The phospholipid heads interact with each other and aqueous media, while the hydrocarbon tails orient themselves in the center, away from water. These latter interactions drive the bilayer structure that acts as a barrier for ions and molecules. There are various types of phospholipids; consequently, their synthesis pathways differ. However, the first step in phospholipid synthesis involves the formation of phosphatidate, or diacylglycerol 3-phosphate, at the endoplasmic reticulum and outer mitochondrial membrane. The synthesis pathway is found below: The pathway starts with glycerol 3-phosphate, which is converted to lysophosphatidate via the addition of a fatty acid chain provided by acyl coenzyme A. Then, lysophosphatidate is converted to phosphatidate via the addition of another fatty acid chain contributed by a second acyl CoA; all of these steps are catalyzed by the glycerol phosphate acyltransferase enzyme. Phospholipid synthesis continues in the endoplasmic reticulum, and the biosynthesis pathway diverges depending on the components of the particular phospholipid. Sphingolipids Like phospholipids, these fatty acid derivatives have a polar head and nonpolar tails. Unlike phospholipids, sphingolipids have a sphingosine backbone. Sphingolipids exist in eukaryotic cells and are particularly abundant in the central nervous system. For example, sphingomyelin is part of the myelin sheath of nerve fibers. Sphingolipids are formed from ceramides, which consist of a fatty acid chain attached to the amino group of a sphingosine backbone. These ceramides are synthesized from the acylation of sphingosine. The biosynthetic pathway for sphingosine is found below: As the image denotes, during sphingosine synthesis, palmitoyl CoA and serine undergo a condensation reaction, which results in the formation of 3-dehydrosphinganine. This product is then reduced to form dihydrosphingosine, which is converted to sphingosine via oxidation by FAD. Cholesterol This lipid belongs to a class of molecules called sterols. Sterols have four fused rings and a hydroxyl group. Cholesterol is a particularly important molecule. Not only does it serve as a component of lipid membranes, it is also a precursor to several steroid hormones, including cortisol, testosterone, and estrogen. Cholesterol is synthesized from acetyl CoA. 
The pathway is shown below: More generally, this synthesis occurs in three stages, with the first stage taking place in the cytoplasm and the second and third stages occurring in the endoplasmic reticulum. The stages are as follows: 1. The synthesis of isopentenyl pyrophosphate, the "building block" of cholesterol 2. The formation of squalene via the condensation of six molecules of isopentenyl pyrophosphate 3. The conversion of squalene into cholesterol via several enzymatic reactions Nucleotides The biosynthesis of nucleotides involves enzyme-catalyzed reactions that convert substrates into more complex products. Nucleotides are the building blocks of DNA and RNA. Nucleotides are composed of a five-membered ring formed from ribose sugar in RNA, and deoxyribose sugar in DNA; these sugars are linked to a purine or pyrimidine base with a glycosidic bond and a phosphate group at the 5' location of the sugar. Purine nucleotides The RNA nucleotides adenosine and guanosine consist of a purine base attached to a ribose sugar with a glycosidic bond. In the case of the DNA nucleotides deoxyadenosine and deoxyguanosine, the purine bases are attached to a deoxyribose sugar with a glycosidic bond. The purine bases on DNA and RNA nucleotides are synthesized in a twelve-step reaction mechanism present in most single-celled organisms. Higher eukaryotes employ a similar reaction mechanism in ten reaction steps. Purine bases are synthesized by converting phosphoribosyl pyrophosphate (PRPP) to inosine monophosphate (IMP), which is the first key intermediate in purine base biosynthesis. Further enzymatic modification of IMP produces the adenosine and guanosine bases of nucleotides. The first step in purine biosynthesis is a condensation reaction, performed by glutamine-PRPP amidotransferase. This enzyme transfers the amino group from glutamine to PRPP, forming 5-phosphoribosylamine. The following step requires the activation of glycine by the addition of a phosphate group from ATP. GAR synthetase performs the condensation of activated glycine onto 5-phosphoribosylamine, forming glycineamide ribonucleotide (GAR). GAR transformylase adds a formyl group onto the amino group of GAR, forming formylglycinamide ribonucleotide (FGAR). FGAR amidotransferase catalyzes the addition of a nitrogen group to FGAR, forming formylglycinamidine ribonucleotide (FGAM). FGAM cyclase catalyzes ring closure, which involves removal of a water molecule, forming the 5-membered imidazole ring 5-aminoimidazole ribonucleotide (AIR). N5-CAIR synthetase transfers a carboxyl group, forming the intermediate N5-carboxyaminoimidazole ribonucleotide (N5-CAIR). N5-CAIR mutase rearranges the carboxyl functional group and transfers it onto the imidazole ring, forming carboxyaminoimidazole ribonucleotide (CAIR). The two-step mechanism of CAIR formation from AIR is mostly found in single-celled organisms. Higher eukaryotes contain the enzyme AIR carboxylase, which transfers a carboxyl group directly to the AIR imidazole ring, forming CAIR. SAICAR synthetase forms a peptide bond between aspartate and the added carboxyl group of the imidazole ring, forming N-succinyl-5-aminoimidazole-4-carboxamide ribonucleotide (SAICAR). SAICAR lyase removes the carbon skeleton of the added aspartate, leaving the amino group and forming 5-aminoimidazole-4-carboxamide ribonucleotide (AICAR). AICAR transformylase transfers a formyl group to AICAR, forming N-formylaminoimidazole-4-carboxamide ribonucleotide (FAICAR). 
The final step involves the enzyme IMP synthase, which performs the purine ring closure and forms the inosine monophosphate intermediate. Pyrimidine nucleotides Other DNA and RNA nucleotide bases that are linked to the ribose sugar via a glycosidic bond are thymine, cytosine and uracil (which is only found in RNA). Uridine monophosphate biosynthesis involves an enzyme that is located in the mitochondrial inner membrane and multifunctional enzymes that are located in the cytosol. The first step involves the enzyme carbamoyl phosphate synthase combining glutamine with CO2 in an ATP-dependent reaction to form carbamoyl phosphate. Aspartate carbamoyltransferase condenses carbamoyl phosphate with aspartate to form ureidosuccinate (carbamoyl aspartate). Dihydroorotase performs ring closure, a reaction that loses water, to form dihydroorotate. Dihydroorotate dehydrogenase, located within the mitochondrial inner membrane, oxidizes dihydroorotate to orotate. Orotate phosphoribosyltransferase (OMP pyrophosphorylase) condenses orotate with PRPP to form orotidine-5'-phosphate. OMP decarboxylase catalyzes the conversion of orotidine-5'-phosphate to UMP. After the uridine nucleotide base is synthesized, the other bases, cytosine and thymine, are synthesized. Cytosine biosynthesis is a two-step reaction which involves the conversion of UMP to UTP. Phosphate addition to UMP is catalyzed by a kinase enzyme. The enzyme CTP synthase catalyzes the next reaction step: the conversion of UTP to CTP by transferring an amino group from glutamine to uridine; this forms the cytosine base of CTP. The mechanism, which depicts the reaction UTP + ATP + glutamine ⇔ CTP + ADP + glutamate, is below: Cytosine is a nucleotide that is present in both DNA and RNA. However, uracil is only found in RNA. Therefore, after UTP is synthesized, it must be converted into a deoxy form to be incorporated into DNA. This conversion involves the enzyme ribonucleoside triphosphate reductase. This reaction, which removes the 2'-OH of the ribose sugar to generate deoxyribose, is not affected by the bases attached to the sugar. This non-specificity allows ribonucleoside triphosphate reductase to convert all nucleotide triphosphates to deoxyribonucleotides by a similar mechanism. In contrast to uracil, thymine bases are found mostly in DNA, not RNA. Cells do not normally contain thymine bases that are linked to ribose sugars in RNA, thus indicating that cells only synthesize deoxyribose-linked thymine. The enzyme thymidylate synthetase is responsible for synthesizing thymine residues by converting dUMP to dTMP. This reaction transfers a methyl group onto the uracil base of dUMP to generate dTMP. The thymidylate synthase reaction, dUMP + 5,10-methylenetetrahydrofolate ⇔ dTMP + dihydrofolate, is shown to the right. DNA Although there are differences between eukaryotic and prokaryotic DNA synthesis, the following section denotes key characteristics of DNA replication shared by both. DNA is composed of nucleotides that are joined by phosphodiester bonds. DNA synthesis, which takes place in the nucleus, is a semiconservative process, which means that the resulting DNA molecule contains an original strand from the parent structure and a new strand. DNA synthesis is catalyzed by a family of DNA polymerases that require four deoxynucleoside triphosphates, a template strand, and a primer with a free 3'OH onto which to incorporate nucleotides. In order for DNA replication to occur, a replication fork is created by enzymes called helicases, which unwind the DNA helix. 
Topoisomerases at the replication fork remove supercoils caused by DNA unwinding, and single-stranded DNA binding proteins maintain the two single-stranded DNA templates stabilized prior to replication. DNA synthesis is initiated by the RNA polymerase primase, which makes an RNA primer with a free 3'OH. This primer is attached to the single-stranded DNA template, and DNA polymerase elongates the chain by incorporating nucleotides; DNA polymerase also proofreads the newly synthesized DNA strand. During the polymerization reaction catalyzed by DNA polymerase, a nucleophilic attack occurs by the 3'OH of the growing chain on the innermost phosphorus atom of a deoxynucleoside triphosphate; this yields the formation of a phosphodiester bridge that attaches a new nucleotide and releases pyrophosphate. Two types of strands are created simultaneously during replication: the leading strand, which is synthesized continuously and grows towards the replication fork, and the lagging strand, which is made discontinuously in Okazaki fragments and grows away from the replication fork. Okazaki fragments are covalently joined by DNA ligase to form a continuous strand. Then, to complete DNA replication, RNA primers are removed, and the resulting gaps are replaced with DNA and joined via DNA ligase. Amino acids A protein is a polymer that is composed from amino acids that are linked by peptide bonds. There are more than 300 amino acids found in nature of which only twenty two, known as the proteinogenic amino acids, are the building blocks for protein. Only green plants and most microbes are able to synthesize all of the 20 standard amino acids that are needed by all living species. Mammals can only synthesize ten of the twenty standard amino acids. The other amino acids, valine, methionine, leucine, isoleucine, phenylalanine, lysine, threonine and tryptophan for adults and histidine, and arginine for babies are obtained through diet. Amino acid basic structure The general structure of the standard amino acids includes a primary amino group, a carboxyl group and the functional group attached to the α-carbon. The different amino acids are identified by the functional group. As a result of the three different groups attached to the α-carbon, amino acids are asymmetrical molecules. For all standard amino acids, except glycine, the α-carbon is a chiral center. In the case of glycine, the α-carbon has two hydrogen atoms, thus adding symmetry to this molecule. With the exception of proline, all of the amino acids found in life have the L-isoform conformation. Proline has a functional group on the α-carbon that forms a ring with the amino group. Nitrogen source One major step in amino acid biosynthesis involves incorporating a nitrogen group onto the α-carbon. In cells, there are two major pathways of incorporating nitrogen groups. One pathway involves the enzyme glutamine oxoglutarate aminotransferase (GOGAT) which removes the amide amino group of glutamine and transfers it onto 2-oxoglutarate, producing two glutamate molecules. In this catalysis reaction, glutamine serves as the nitrogen source. An image illustrating this reaction is found to the right. The other pathway for incorporating nitrogen onto the α-carbon of amino acids involves the enzyme glutamate dehydrogenase (GDH). GDH is able to transfer ammonia onto 2-oxoglutarate and form glutamate. Furthermore, the enzyme glutamine synthetase (GS) is able to transfer ammonia onto glutamate and synthesize glutamine, replenishing glutamine. 
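Since the figure referred to above is not reproduced here, the two nitrogen-assimilation routes and the glutamine-replenishing step can be summarised as overall reactions. This is a simplified sketch: reducing cofactors such as NAD(P)H, protons and water are omitted for brevity.

\begin{align*}
\text{GOGAT:}\quad & \text{glutamine} + \text{2-oxoglutarate} \longrightarrow 2\ \text{glutamate} \\
\text{GDH:}\quad & \text{2-oxoglutarate} + \text{NH}_3 \longrightarrow \text{glutamate} \\
\text{GS:}\quad & \text{glutamate} + \text{NH}_3 + \text{ATP} \longrightarrow \text{glutamine} + \text{ADP} + \text{P}_\text{i}
\end{align*}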
The glutamate family of amino acids The glutamate family of amino acids includes the amino acids that derive from the amino acid glutamate. This family includes: glutamate, glutamine, proline, and arginine. This family also includes the amino acid lysine, which is derived from α-ketoglutarate. The biosynthesis of glutamate and glutamine is a key step in the nitrogen assimilation discussed above. The enzymes GOGAT and GDH catalyze the nitrogen assimilation reactions. In bacteria, the enzyme glutamate 5-kinase initiates the biosynthesis of proline by transferring a phosphate group from ATP onto glutamate. The next reaction is catalyzed by the enzyme pyrroline-5-carboxylate synthase (P5CS), which catalyzes the reduction of the γ-carboxyl group of L-glutamate 5-phosphate. This results in the formation of glutamate semialdehyde, which spontaneously cyclizes to pyrroline-5-carboxylate. Pyrroline-5-carboxylate is further reduced by the enzyme pyrroline-5-carboxylate reductase (P5CR) to yield proline. In the first step of arginine biosynthesis in bacteria, glutamate is acetylated by transferring the acetyl group from acetyl-CoA at the N-α position; this prevents spontaneous cyclization. The enzyme N-acetylglutamate synthase (glutamate N-acetyltransferase) is responsible for catalyzing the acetylation step. Subsequent steps are catalyzed by the enzymes N-acetylglutamate kinase, N-acetyl-gamma-glutamyl-phosphate reductase, and acetylornithine/succinyldiamino pimelate aminotransferase, and yield N-acetyl-L-ornithine. The acetyl group of acetylornithine is removed by the enzyme acetylornithinase (AO) or ornithine acetyltransferase (OAT), and this yields ornithine. Ornithine is then converted to arginine via the intermediates citrulline and argininosuccinate. There are two distinct lysine biosynthetic pathways: the diaminopimelic acid pathway and the α-aminoadipate pathway. The more common of the two synthetic pathways is the diaminopimelic acid pathway; it consists of several enzymatic reactions that add carbon groups to aspartate to yield lysine: Aspartate kinase initiates the diaminopimelic acid pathway by phosphorylating aspartate and producing aspartyl phosphate. Aspartate semialdehyde dehydrogenase catalyzes the NADPH-dependent reduction of aspartyl phosphate to yield aspartate semialdehyde. 4-hydroxy-tetrahydrodipicolinate synthase adds a pyruvate group to the β-aspartyl-4-semialdehyde, and a water molecule is removed. This causes cyclization and gives rise to (2S,4S)-4-hydroxy-2,3,4,5-tetrahydrodipicolinate. 4-hydroxy-tetrahydrodipicolinate reductase catalyzes the reduction of (2S,4S)-4-hydroxy-2,3,4,5-tetrahydrodipicolinate by NADPH to yield Δ'-piperideine-2,6-dicarboxylate (2,3,4,5-tetrahydrodipicolinate) and H2O. Tetrahydrodipicolinate acyltransferase catalyzes the acetylation reaction that results in ring opening and yields N-acetyl α-amino-ε-ketopimelate. N-succinyl-α-amino-ε-ketopimelate-glutamate aminotransaminase catalyzes the transamination reaction that removes the keto group of N-acetyl α-amino-ε-ketopimelate and replaces it with an amino group to yield N-succinyl-L-diaminopimelate. N-acyldiaminopimelate deacylase catalyzes the deacylation of N-succinyl-L-diaminopimelate to yield L,L-diaminopimelate. DAP epimerase catalyzes the conversion of L,L-diaminopimelate to the meso form of L,L-diaminopimelate. DAP decarboxylase catalyzes the removal of the carboxyl group, yielding L-lysine. 
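The diaminopimelic acid route just described can also be written as an ordered enzyme-to-product sequence. The sketch below simply restates the steps from the paragraph above as a data structure and adds no new pathway detail; enzyme names are abbreviated from the text.

dap_pathway_to_lysine = [
    ("aspartate kinase", "aspartyl phosphate"),
    ("aspartate semialdehyde dehydrogenase", "aspartate semialdehyde"),
    ("4-hydroxy-tetrahydrodipicolinate synthase", "(2S,4S)-4-hydroxy-2,3,4,5-tetrahydrodipicolinate"),
    ("4-hydroxy-tetrahydrodipicolinate reductase", "2,3,4,5-tetrahydrodipicolinate"),
    ("tetrahydrodipicolinate acyltransferase", "N-acetyl alpha-amino-epsilon-ketopimelate"),
    ("ketopimelate-glutamate aminotransaminase", "N-succinyl-L-diaminopimelate"),
    ("N-acyldiaminopimelate deacylase", "L,L-diaminopimelate"),
    ("DAP epimerase", "meso-diaminopimelate"),
    ("DAP decarboxylase", "L-lysine"),
]
for enzyme, product in dap_pathway_to_lysine:
    # print each step in order, from aspartate derivatives through to L-lysine
    print(f"{enzyme} -> {product}")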
The serine family of amino acids The serine family of amino acids includes: serine, cysteine, and glycine. Most microorganisms and plants obtain the sulfur for synthesizing methionine from the amino acid cysteine. Furthermore, the conversion of serine to glycine provides the carbons needed for the biosynthesis of methionine and histidine. During serine biosynthesis, the enzyme phosphoglycerate dehydrogenase catalyzes the initial reaction that oxidizes 3-phospho-D-glycerate to yield 3-phosphonooxypyruvate. The following reaction is catalyzed by the enzyme phosphoserine aminotransferase, which transfers an amino group from glutamate onto 3-phosphonooxypyruvate to yield L-phosphoserine. The final step is catalyzed by the enzyme phosphoserine phosphatase, which dephosphorylates L-phosphoserine to yield L-serine. There are two known pathways for the biosynthesis of glycine. Organisms that use ethanol and acetate as the major carbon source utilize the glyconeogenic pathway to synthesize glycine. The other pathway of glycine biosynthesis is known as the glycolytic pathway. This pathway converts serine synthesized from the intermediates of glycolysis to glycine. In the glycolytic pathway, the enzyme serine hydroxymethyltransferase catalyzes the cleavage of serine to yield glycine and transfers the cleaved carbon group of serine onto tetrahydrofolate, forming 5,10-methylene-tetrahydrofolate. Cysteine biosynthesis is a two-step reaction that involves the incorporation of inorganic sulfur. In microorganisms and plants, the enzyme serine acetyltransferase catalyzes the transfer of an acetyl group from acetyl-CoA onto L-serine to yield O-acetyl-L-serine. The following reaction step, catalyzed by the enzyme O-acetyl serine (thiol) lyase, replaces the acetyl group of O-acetyl-L-serine with sulfide to yield cysteine. The aspartate family of amino acids The aspartate family of amino acids includes: threonine, lysine, methionine, isoleucine, and aspartate. Lysine and isoleucine are considered part of the aspartate family even though part of their carbon skeleton is derived from pyruvate. In the case of methionine, the methyl carbon is derived from serine, while the sulfur group is, in most organisms, derived from cysteine. The biosynthesis of aspartate is a one-step reaction that is catalyzed by a single enzyme. The enzyme aspartate aminotransferase catalyzes the reversible transfer of an amino group between glutamate and oxaloacetate; in the biosynthetic direction, the amino group is transferred from glutamate onto oxaloacetate to yield aspartate and α-ketoglutarate. Asparagine is synthesized by an ATP-dependent addition of an amino group onto aspartate; asparagine synthetase catalyzes the addition of nitrogen from glutamine or soluble ammonia to aspartate to yield asparagine. The diaminopimelic acid biosynthetic pathway of lysine belongs to the aspartate family of amino acids. This pathway involves nine enzyme-catalyzed reactions that convert aspartate to lysine. Aspartate kinase catalyzes the initial step in the diaminopimelic acid pathway by transferring a phosphoryl group from ATP onto the carboxylate group of aspartate, which yields aspartyl-β-phosphate. Aspartate-semialdehyde dehydrogenase catalyzes the reduction reaction by dephosphorylation of aspartyl-β-phosphate to yield aspartate-β-semialdehyde. Dihydrodipicolinate synthase catalyzes the condensation reaction of aspartate-β-semialdehyde with pyruvate to yield dihydrodipicolinic acid. 4-hydroxy-tetrahydrodipicolinate reductase catalyzes the reduction of dihydrodipicolinic acid to yield tetrahydrodipicolinic acid. 
Tetrahydrodipicolinate N-succinyltransferase catalyzes the transfer of a succinyl group from succinyl-CoA onto tetrahydrodipicolinic acid to yield N-succinyl-L-2,6-diaminoheptanedioate. N-succinyldiaminopimelate aminotransferase catalyzes the transfer of an amino group from glutamate onto N-succinyl-L-2,6-diaminoheptanedioate to yield N-succinyl-L,L-diaminopimelic acid. Succinyl-diaminopimelate desuccinylase catalyzes the removal of the acyl group from N-succinyl-L,L-diaminopimelic acid to yield L,L-diaminopimelic acid. Diaminopimelate epimerase catalyzes the inversion of the α-carbon of L,L-diaminopimelic acid to yield meso-diaminopimelic acid. Diaminopimelate decarboxylase catalyzes the final step in lysine biosynthesis, which removes the carbon dioxide group from meso-diaminopimelic acid to yield L-lysine. Proteins Protein synthesis occurs via a process called translation. During translation, genetic material called mRNA is read by ribosomes to generate a protein polypeptide chain. This process requires transfer RNA (tRNA), which serves as an adaptor by binding amino acids on one end and interacting with mRNA at the other end; the latter pairing between the tRNA and mRNA ensures that the correct amino acid is added to the chain. Protein synthesis occurs in three phases: initiation, elongation, and termination. Prokaryotic (archaeal and bacterial) translation differs from eukaryotic translation; however, this section will mostly focus on the commonalities between the two processes. Additional background Before translation can begin, the process of binding a specific amino acid to its corresponding tRNA must occur. This reaction, called tRNA charging, is catalyzed by aminoacyl tRNA synthetase. A specific tRNA synthetase is responsible for recognizing and charging a particular amino acid. Furthermore, this enzyme has special discriminator regions to ensure the correct binding between tRNA and its cognate amino acid. The first step for joining an amino acid to its corresponding tRNA is the formation of aminoacyl-AMP: Amino acid + ATP ⇔ aminoacyl-AMP + PPi. This is followed by the transfer of the aminoacyl group from aminoacyl-AMP to a tRNA molecule. The resulting molecule is aminoacyl-tRNA: Aminoacyl-AMP + tRNA ⇔ aminoacyl-tRNA + AMP. The combination of these two steps, both of which are catalyzed by aminoacyl tRNA synthetase, produces a charged tRNA that is ready to add amino acids to the growing polypeptide chain. In addition to binding an amino acid, tRNA has a three-nucleotide unit called an anticodon that base pairs with specific nucleotide triplets on the mRNA called codons; each codon encodes a specific amino acid. This interaction is possible thanks to the ribosome, which serves as the site for protein synthesis. The ribosome possesses three tRNA binding sites: the aminoacyl site (A site), the peptidyl site (P site), and the exit site (E site). There are numerous codons within an mRNA transcript, and it is very common for an amino acid to be specified by more than one codon; this phenomenon is called degeneracy. In all, there are 64 codons, 61 of which code for one of the 20 amino acids, while the remaining codons specify chain termination. Translation in steps As previously mentioned, translation occurs in three phases: initiation, elongation, and termination. Step 1: Initiation The completion of the initiation phase is dependent on the following three events: 1. The recruitment of the ribosome to mRNA 2. The binding of a charged initiator tRNA into the P site of the ribosome 3. 
The proper alignment of the ribosome with mRNA's start codon. Step 2: Elongation Following initiation, the polypeptide chain is extended via anticodon:codon interactions, with the ribosome adding amino acids to the polypeptide chain one at a time. The following steps must occur to ensure the correct addition of amino acids: 1. The binding of the correct tRNA into the A site of the ribosome 2. The formation of a peptide bond between the tRNA in the A site and the polypeptide chain attached to the tRNA in the P site 3. Translocation, or advancement, of the tRNA-mRNA complex by three nucleotides Translocation "kicks off" the tRNA at the E site and shifts the tRNA from the A site into the P site, leaving the A site free for an incoming tRNA to add another amino acid. Step 3: Termination The last stage of translation occurs when a stop codon enters the A site. Then, the following steps occur: 1. The recognition of codons by release factors, which causes the hydrolysis of the polypeptide chain from the tRNA located in the P site 2. The release of the polypeptide chain 3. The dissociation and "recycling" of the ribosome for future translation processes A summary table of the key players in translation is found below: Diseases associated with macromolecule deficiency Errors in biosynthetic pathways can have deleterious consequences, including the malformation of macromolecules or the underproduction of functional molecules. Below are examples that illustrate the disruptions that occur due to these inefficiencies. Familial hypercholesterolemia: this disorder is characterized by the absence of functional receptors for LDL. Deficiencies in the formation of LDL receptors may cause faulty receptors that disrupt the endocytic pathway, inhibiting the entry of LDL into the liver and other cells. This causes a buildup of LDL in the blood plasma, which results in atherosclerotic plaques that narrow arteries and increase the risk of heart attacks. Lesch–Nyhan syndrome: this genetic disease is characterized by self-mutilation, mental deficiency, and gout. It is caused by the absence of hypoxanthine-guanine phosphoribosyltransferase, which is a necessary enzyme for purine nucleotide formation. The lack of this enzyme reduces the level of necessary nucleotides and causes the accumulation of biosynthesis intermediates, which results in the aforementioned unusual behavior. Severe combined immunodeficiency (SCID): SCID is characterized by a loss of T cells. Shortage of these immune system components increases the susceptibility to infectious agents because the affected individuals cannot develop immunological memory. This immunological disorder results from a deficiency in adenosine deaminase activity, which causes a buildup of dATP. These dATP molecules then inhibit ribonucleotide reductase, which prevents DNA synthesis. Huntington's disease: this neurological disease is caused by errors that occur during DNA synthesis. These errors or mutations lead to the expression of a mutant huntingtin protein, which contains repetitive glutamine residues that are encoded by expanding CAG trinucleotide repeats in the gene. Huntington's disease is characterized by neuronal loss and gliosis. Symptoms of the disease include movement disorder, cognitive decline, and behavioral disorder. See also Lipids Phospholipid bilayer Nucleotides DNA DNA replication Proteinogenic amino acid Codon table Prostaglandin Porphyrins Chlorophylls and bacteriochlorophylls Vitamin B12
Tetraplegia
Tetraplegia, also known as quadriplegia, is defined as the dysfunction or loss of motor and/or sensory function in the cervical area of the spinal cord. A loss of motor function can present as either weakness or paralysis leading to partial or total loss of function in the arms, legs, trunk, and pelvis. (Paraplegia is similar but affects the thoracic, lumbar, and sacral segments of the spinal cord and arm function is retained.) The paralysis may be flaccid or spastic. A loss of sensory function can present as an impairment or complete inability to sense light touch, pressure, heat, pinprick/pain, and proprioception. In these types of spinal cord injury, it is common to have a loss of both sensation and motor control. Signs and symptoms Although the most obvious symptom is impairment of the limbs, functioning is also impaired in the trunk and pelvic organs. This can lead to loss or impairment of controlling bowel and bladder, sexual function, digestion, breathing and other autonomic functions. Furthermore, sensation is usually impaired in affected areas. This may manifest as numbness, reduced sensation or neuropathic pain. Secondarily, because of their depressed functioning and immobility, tetraplegics are often more vulnerable to pressure sores, osteoporosis and fractures, frozen joints, spasticity, respiratory complications, infections, autonomic dysreflexia, deep vein thrombosis, and cardiovascular disease. The severity of the condition depends on both the level at which the spinal cord is injured and the extent of the injury. An individual with an injury at C1 (the highest cervical vertebra, at the base of the skull) will probably lose function from the neck down and be ventilator-dependent. An individual with a C7 injury may lose function from the chest down but still retain use of the arms and much of the hands. An individual in between, with a C5 injury may lose some function from the chest down and fine motor skills in his/her hands but still have flexion and extension abilities of certain muscles around the back or arm area. The extent of the injury is also important. A complete severing of the spinal cord will result in complete loss of function from that vertebra down. A partial severing or even bruising of the spinal cord results in varying degrees of mixed function and paralysis. A common misconception with tetraplegia is that the victim cannot move legs, arms, or any other major body regions; this is often not the case. Some tetraplegics can walk and use their hands, as though they did not have a spinal cord injury, while others may use wheelchairs and retain some functions in their arms and fingers; again, this varies based on the degree of damage to the spinal cord and is mostly seen with incomplete tetraplegia. It is common to have partial movement in limbs, such as the ability to move the arms but not the hands, or to be able to use the fingers but not to the same extent as before the injury. Furthermore, the deficit in the limbs may not be the same on both sides of the body; either side may be more affected, depending on the location of the lesion on the spinal cord. Another important factor is the possibility that the patient may exhibit sporadic movement in the affected areas. One of the main causes for this would be myoclonus, or muscle spasms. "After a spinal cord injury, the normal flow of signals is disrupted, and the message does not reach the brain. Instead, the signals are sent back to the motor cells in the spinal cord and cause a reflex muscle spasm. 
This can result in a twitch, jerk or stiffening of the muscle." Causes Tetraplegia is caused by damage to the brain or the spinal cord at a high level. The injury, which is known as a lesion, causes partial or total loss of function in all four limbs, meaning the arms and the legs. Typical causes of this damage are trauma (such as a traffic collision, diving into shallow water, a fall, a sports injury), disease (such as transverse myelitis, Guillain–Barré syndrome, multiple sclerosis, or polio), or congenital disorders (such as muscular dystrophy). Tetraplegia is defined in many ways; an injury at C1–C4 usually affects arm movement more than a C5–C7 injury does; however, all tetraplegics have or have had some kind of finger dysfunction. So, it is not uncommon to have a tetraplegic with fully functional arms but no nervous control of their fingers and thumbs. It is possible to have a broken neck without becoming tetraplegic if the vertebrae are fractured or dislocated but the spinal cord is not damaged. Conversely, it is possible to injure the spinal cord without breaking the spine, for example when a ruptured disc or bone spur on the vertebra protrudes into the spinal column. Anatomy and function Since tetraplegia is defined as dysfunction in the cervical spinal cord, this section will focus on the anatomy of the cervical spinal cord. To understand how tetraplegia presents after injury, it is imperative to have a broad knowledge of the cervical spinal roots and their many functions. In the cervical spine, nerve roots exit the spine above the associated vertebra (i.e. the C6 nerve root exits above the C6 vertebra). By evaluating which nerve root of the cervical spine is injured, the affected muscle groups and dermatomes can be determined. This informs the evaluator as to what activities may be limited as a result of the injury. This is typically done at 72 hours post-injury; exams done prior to this time have been found to be inaccurate due to the presence of swelling and other confounding factors. For example, an injury at the C6 nerve root level will affect the function of the triceps (elbow extension) but the biceps (elbow flexion) will be spared; an injury at the C6 root level affects all function at that level and below, whereas the C5 nerve root, which controls the biceps, is spared since it is above the C6 level in the spinal column. When classifying an individual's level of function, there are numerous functional assessment tools that may be used in a clinical setting, and it is often up to the clinician's discretion which tools are used. A comprehensive list of these tools may be found on the Shirley Ryan AbilityLab website. Diagnosis Classification Spinal cord injuries are classified as complete and incomplete by the American Spinal Injury Association (ASIA) classification. The ASIA scale grades patients based on their functional impairment as a result of the injury, grading a patient from A to D. This has considerable consequences for surgical planning and therapy. After a comprehensive neurologic exam testing segments of the body corresponding to spinal nerve roots, the examiner will determine the patient's motor level and sensory level (e.g. motor level C6, sensory level C7). These levels are unique for the patient's left and right sides. This level is assigned based on the lowest (closest to the patient's feet) intact motor and sensory level. After this assignment, a neurological level of injury (NLI) is determined. 
The NLI is the lowest segment with intact sensory and motor function, provided there is normal sensory and motor function above this segment. Complete spinal-cord lesions As in the above ASIA chart, a complete spinal cord injury is any injury in which motor and sensory function are absent in the sacral segments S4 and S5. This is verified during the physical exam by the absence of all three of: voluntary anal contraction, deep anal pressure, and pinprick and light touch sensation in the perineal area. S4 and S5 are both sacral nerve roots found at the lowest portion of the spinal cord. In simpler terms, "complete" is meant as a way to express that the spinal cord is injured such that no signal, motor or sensory, is carried between the level of injury and these lower levels of the spinal cord. Incomplete spinal-cord lesions Incomplete spinal cord injuries result in varied post-injury presentations. Several syndromes are described, depending on the exact site and extent of the lesion; the first three below are the classic incomplete cord syndromes, while cauda equina and conus medullaris syndromes involve the lowest portion of the cord and its nerve roots. Central cord syndrome: an injury to the central area of the spinal cord, most often seen as a result of a fall with subsequent hyperextension injury. This typically presents with weakness greater in the upper limbs than in the lower limbs. Brown-Séquard syndrome: hemisection of the spinal cord with resultant (a) ipsilateral loss of proprioception, vibration, and motor control below the level of injury, (b) complete sensory loss at the level of injury, and (c) contralateral loss of pain and temperature sensation. Anterior cord syndrome: a lesion of the anterior two-thirds of the spinal cord, most commonly due to ischemia. This typically presents with loss of pain, temperature, and motor function at and below the level of injury. Cauda equina syndrome: a lesion of the lumbosacral nerve roots that may spare the spinal cord. As these nerve roots are lower motor neurons, a flaccid lower limb paralysis is typically seen, along with loss of bowel and bladder reflexes, varying degrees of impairment of sensation, and loss of sacral reflexes (bulbocavernosus reflex, anal wink). Conus medullaris syndrome: a lesion similar to cauda equina syndrome; however, this lesion is typically found higher in the cord. It presents clinically much like cauda equina syndrome, but the sacral reflexes may be intact. Unlike cauda equina syndrome, its location leads it to present with mixed upper and lower motor neuron signs. For most patients with ASIA A (complete) tetraplegia, ASIA B (incomplete) tetraplegia and ASIA C (incomplete) tetraplegia, the International Classification level of the patient can be established without great difficulty. Surgical procedures appropriate to the International Classification level can then be performed. In contrast, for patients with ASIA D (incomplete) tetraplegia it is difficult to assign an International Classification other than International Classification level X (others). Therefore, it is more difficult to decide which surgical procedures should be performed. A far more personalized approach is needed for these patients. Decisions must be based more on experience than on texts or journals. The results of tendon transfers for patients with complete injuries are predictable. On the other hand, it is well known that muscles lacking normal excitation perform unreliably after surgical tendon transfers. Despite this unpredictability in incomplete lesions, tendon transfers may be useful. 
The surgeon should be confident that the muscle to be transferred has enough power and is under good voluntary control. Pre-operative assessment is more difficult in incomplete lesions. Patients with an incomplete lesion also often need therapy or surgery before the function-restoring procedure, in order to correct the consequences of the injury. These consequences are hypertonicity/spasticity, contractures, painful hyperesthesias and paralyzed proximal upper limb muscles with distal muscle sparing. Spasticity is a frequent consequence of incomplete injuries. Spasticity often decreases function, but sometimes a patient can control the spasticity in a way that is useful to their function. The location and the effect of the spasticity should be analyzed carefully before treatment is planned. An injection of botulinum toxin (Botox) into spastic muscles is a treatment to reduce spasticity. This can be used to prevent muscle shortening and early contractures. Over the last ten years, an increase in traumatic incomplete lesions has been seen, owing to better protection in traffic. Treatment Upper limb paralysis refers to the loss of function of the elbow and hand. When upper limb function is absent as a result of a spinal cord injury, it is a major barrier to regaining autonomy. People with tetraplegia should be examined and informed concerning the options for reconstructive surgery of the tetraplegic arms and hands. Prognosis Delayed diagnosis of cervical spine injury has grave consequences for the victim. About one in 20 cervical fractures is missed, and about two-thirds of these patients have further spinal-cord damage as a result. About 30% of cases of delayed diagnosis of cervical spine injury develop permanent neurological deficits. In high-level cervical injuries, total paralysis from the neck down can result. High-level tetraplegics (C4 and higher) will likely need constant care and assistance in activities of daily living (ADLs), such as getting dressed, eating, and bowel/bladder care. Individuals with C5 injuries retain some function in their biceps, deltoids, and other muscles; they typically can perform many ADLs including feeding, bathing, and grooming but require total assistance with bowel/bladder care. The C6 level adds function in the extensor carpi radialis longus and other muscles, allowing for wrist extension, scapular abduction, and wrist flexion; typically, these patients achieve modified-independent feeding and grooming with adaptive equipment, are independent with dressing, and can use both a manual and a power wheelchair, but require assistance with some activities of daily living. The C7 level is where function is retained in the triceps, allowing for elbow extension; C7 is considered the key level at which most activities can be performed independently with a wheelchair and assistive devices; activities include feeding, grooming, dressing, light meal preparation, and transfers on level surfaces. Even in complete spinal cord injury, it is common for individuals to recover up to one level of motor function. Even with "complete" injuries, in some rare cases, through intensive rehabilitation, function can be regained through "rewiring" neural connections, as in the case of actor Christopher Reeve. In the case of cerebral palsy, which is caused by damage to the motor cortex either before, during (10%), or after birth, some people with incomplete tetraplegia are gradually able to learn to stand or walk through physical therapy. 
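The level-by-level expectations just described are, in effect, a lookup from neurological level to typical function. As a minimal sketch only, that mapping can be written down as a small table; the level groupings and one-line summaries below are simplified paraphrases of the paragraph above, not a clinical instrument.

```python
# Illustrative only: simplified functional expectations by cervical level,
# paraphrased from the prognosis discussion above. Actual outcomes depend on
# injury completeness, rehabilitation, and individual factors.
TYPICAL_FUNCTION = {
    "C1-C3": "Ventilator dependence likely; constant care for activities of daily living.",
    "C4": "Likely needs constant care and assistance with activities of daily living.",
    "C5": "Some biceps/deltoid function; many ADLs possible, but total assistance with bowel/bladder care.",
    "C6": "Wrist extension added; modified-independent feeding and grooming with adaptive equipment.",
    "C7": "Triceps function; most activities independent with a wheelchair and assistive devices.",
}

def typical_function(level: str) -> str:
    """Return the simplified summary for a cervical level label such as 'C6'."""
    return TYPICAL_FUNCTION.get(level.strip().upper(), "Level not covered in this sketch.")

print(typical_function("c6"))
```

A table like this only restates the prose; clinically, the level and completeness of injury are established with the full ASIA examination described earlier.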
Tetraplegics can improve muscle strength by performing resistance training at least three times per week. Combining resistance training with proper nutrition intake can greatly reduce co-morbidities such as obesity and type 2 diabetes. Epidemiology There are an estimated 17,700 spinal cord injuries each year in the United States; the total number of people affected by spinal cord injuries is estimated to be approximately 290,000 people. In the US, spinal cord injuries alone cost approximately $40.5 billion each year, which is a 317 percent increase from costs estimated in 1998 ($9.7 billion). The estimated lifetime costs for a 25-year-old in 2018 is $3.6 million when affected by low tetraplegia and $4.9 million when affected by high tetraplegia. In 2009, it was estimated that the lifetime care of a 25-year-old rendered with low tetraplegia was about $1.7 million, and $3.1 million with high tetraplegia. About 1,000 people are affected each year in the UK (~1 in 60,000—assuming a population of 60 million). Terminology The condition of paralysis affecting four limbs is alternately termed tetraplegia or quadriplegia. Quadriplegia combines the Latin root quadra, for "four", with the Greek root πληγία plegia, for "paralysis". Tetraplegia uses the Greek root τετρα tetra for "four". In the past, "tetraplegia" and "quadriplegia" were used interchangeably in the medical literature. Medical literature favors using "tetraplegia" as the standardized term, as it is frowned upon to mix Greek and Latin roots, although "quadriplegia" remains in use. "Tetraplegia", meaning the paralysis of four limbs, may be confused with "tetraparesis", meaning the weakness of four limbs. In medicine, it is important to not use these terms when making a diagnosis. When diagnosing and classifying spinal cord injuries, the ASIA classification is used to distinguish between weakness vs. no weakness, and to classify neurologically complete vs. incomplete lesions. Use of "tetraparesis" is discouraged as it inaccurately describes an incomplete lesion and incorrectly implies tetraplegia applies only to cases of complete lesions. See also Clearing the cervical spine Hemiplegia Paraplegia Locked-in syndrome Sexuality after spinal cord injury Spinal cord injury research References Further reading External links Cerebral palsy and other paralytic syndromes Neurotrauma
0.769601
0.998926
0.768775
Mechanism of action
In pharmacology, the term mechanism of action (MOA) refers to the specific biochemical interaction through which a drug substance produces its pharmacological effect. A mechanism of action usually includes mention of the specific molecular targets to which the drug binds, such as an enzyme or receptor. Receptor sites have specific affinities for drugs based on the chemical structure of the drug, as well as the specific action that occurs there. Drugs that do not bind to receptors produce their corresponding therapeutic effect by simply interacting with chemical or physical properties in the body. Common examples of drugs that work in this way are antacids and laxatives. In contrast, a mode of action (MoA) describes functional or anatomical changes, at the cellular level, resulting from the exposure of a living organism to a substance. Importance Elucidating the mechanism of action of novel drugs and medications is important for several reasons: In the case of anti-infective drug development, the information permits anticipation of problems relating to clinical safety. Drugs disrupting the cytoplasmic membrane or electron transport chain, for example, are more likely to cause toxicity problems than those targeting components of the cell wall (peptidoglycan or β-glucans) or 70S ribosome, structures which are absent in human cells. By knowing the interaction between a certain site of a drug and a receptor, other drugs can be formulated in a way that replicates this interaction, thus producing the same therapeutic effects. Indeed, this method is used to create new drugs. It can help identify which patients are most likely to respond to treatment. Because the breast cancer medication trastuzumab is known to target protein HER2, for example, tumors can be screened for the presence of this molecule to determine whether or not the patient will benefit from trastuzumab therapy. It can enable better dosing because the drug's effects on the target pathway can be monitored in the patient. Statin dosage, for example, is usually determined by measuring the patient's blood cholesterol levels. It allows drugs to be combined in such a way that the likelihood of drug resistance emerging is reduced. By knowing what cellular structure an anti-infective or anticancer drug acts upon, it is possible to administer a cocktail that inhibits multiple targets simultaneously, thereby reducing the risk that a single mutation in microbial or tumor DNA will lead to drug resistance and treatment failure. It may allow other indications for the drug to be identified. Discovery that sildenafil inhibits phosphodiesterase-5 (PDE-5) proteins, for example, enabled this drug to be repurposed for pulmonary arterial hypertension treatment, since PDE-5 is expressed in pulmonary hypertensive lungs. Determination Microscopy-based methods Bioactive compounds induce phenotypic changes in target cells, changes that are observable by microscopy and that can give insight into the mechanism of action of the compound. With antibacterial agents, the conversion of target cells to spheroplasts can be an indication that peptidoglycan synthesis is being inhibited, and filamentation of target cells can be an indication that PBP3, FtsZ, or DNA synthesis is being inhibited. Other antibacterial agent-induced changes include ovoid cell formation, pseudomulticellular forms, localized swelling, bulge formation, blebbing, and peptidoglycan thickening. 
In the case of anticancer agents, bleb formation can be an indication that the compound is disrupting the plasma membrane. A current limitation of this approach is the time required to manually generate and interpret data, but advances in automated microscopy and image analysis software may help resolve this. Direct biochemical methods Direct biochemical methods include methods in which a protein or a small molecule, such as a drug candidate, is labeled and is traced throughout the body. This is the most direct approach for finding the target proteins that bind a small molecule of interest, such as a simplified representation of the drug, in order to identify the pharmacophore of the drug. Due to the physical interactions between the labeled molecule and a protein, biochemical methods can be used to determine the toxicity, efficacy, and mechanism of action of the drug. Computation inference methods Computation inference methods are typically used to predict protein targets for small-molecule drugs based on computer-based pattern recognition. However, this method could also be used for finding new targets for existing or newly developed drugs. By identifying the pharmacophore of the drug molecule, pattern-recognition profiling can be carried out to identify new targets. This provides insight into a possible mechanism of action, since it is known which functional components of the drug are responsible for interacting with a certain area on a protein and thus producing a therapeutic effect. Omics-based methods Omics-based methods use omics technologies, such as chemoproteomics, reverse genetics and genomics, transcriptomics, and proteomics, to identify the potential targets of the compound of interest. Reverse genetics and genomics approaches, for instance, use genetic perturbation (e.g. CRISPR-Cas9 or siRNA) in combination with the compound to identify genes whose knockdown or knockout abolishes the pharmacological effect of the compound. Transcriptomics and proteomics profiles of the compound, on the other hand, can be compared with the profiles of compounds with known targets. Thanks to computation inference, it is then possible to make hypotheses about the mechanism of action of the compound, which can subsequently be tested. Drugs with known MOA There are many drugs for which the mechanism of action is known. One example is aspirin. Aspirin The mechanism of action of aspirin involves irreversible inhibition of the enzyme cyclooxygenase, thereby suppressing the production of prostaglandins and thromboxanes and thus reducing pain and inflammation. This mechanism of action is specific to aspirin and is not shared by all nonsteroidal anti-inflammatory drugs (NSAIDs); rather, aspirin is the only NSAID that irreversibly inhibits COX-1. Drugs with unknown MOA Some drug mechanisms of action are still unknown. However, even when the mechanism of action of a certain drug is unknown, the drug still functions; it is simply unknown or unclear how the drug interacts with receptors and produces its therapeutic effect. Mode of action In some literature articles, the terms "mechanism of action" and "mode of action" are used interchangeably, typically referring to the way in which the drug interacts and produces a medical effect. However, in actuality, a mode of action describes functional or anatomical changes, at the cellular level, resulting from the exposure of a living organism to a substance. 
This differs from a mechanism of action, which is a more specific term focusing on the interaction between the drug itself and an enzyme or receptor, and on the particular form of that interaction, whether inhibition, activation, agonism, or antagonism. Furthermore, "mechanism of action" is the term primarily used in pharmacology, whereas "mode of action" appears more often in microbiology or certain other areas of biology. See also Mode of action (MoA) Pharmacodynamics Chemoproteomics References Pharmacology Pharmacodynamics Medicinal chemistry
0.774447
0.992565
0.768689
Lupus erythematosus
Lupus erythematosus is a collection of autoimmune diseases in which the human immune system becomes hyperactive and attacks healthy tissues. Symptoms of these diseases can affect many different body systems, including joints, skin, kidneys, blood cells, heart, and lungs. The most common and most severe form is systemic lupus erythematosus. Signs and symptoms Symptoms vary from person to person, and may come and go. Almost everyone with lupus has joint pain and swelling. Some develop arthritis. Frequently affected joints are the fingers, hands, wrists, and knees. Other common symptoms include: chest pain during respiration; joint pain (stiffness and swelling); painless oral ulcers; fatigue; weight loss; headaches; fever with no other cause; skin lesions that appear worse after sun exposure; general discomfort, uneasiness, or ill feeling (malaise); hair loss; sensitivity to sunlight; a "butterfly" facial rash, seen in about half of people with SLE; and swollen lymph nodes. Photosensitivity Photosensitivity is the extent to which an object reacts upon receiving photons, especially visible light. Photosensitivity is a known symptom of lupus, but its relationship to and influence on other aspects of the disease remain to be defined. Causes of photosensitivity may include: change in autoantibody location; cytotoxicity; induction of apoptosis with autoantigens in apoptotic blebs; upregulation of adhesion molecules and cytokines; induction of nitric oxide synthase expression; and ultraviolet-generated antigenic DNA. Genetics It is typically believed that lupus is influenced by multiple genes. Lupus is usually influenced by gene polymorphisms, 30 of which have now been linked with the disorder. Some of these polymorphisms have been linked very tentatively, however, as the role that they play or the degree to which they influence the disease is unknown. Other genes that are commonly thought to be associated with lupus are those in the human leukocyte antigen (HLA) family. There have been several cases wherein a single gene influence appears to be present, but this is rare. When a single gene deficiency does cause lupus, it is usually attributed to the complement protein genes C1, C2, or C4. The influence of sex chromosomes and environmental factors is also noteworthy. Usually, these factors contribute to lupus by influencing the immune system. Several studies also indicate a potential association of lupus with mutations in DNA repair genes. Age difference Lupus can develop in people of any age, but it does so most commonly at ages 15 to 44, with varying presentations. Typically, the manifestation of the disease tends to be more acute in those of younger age. Women are more likely to get it than men. Patients with juvenile-onset lupus are more vulnerable to mucocutaneous manifestations of the disease (alopecia, skin rash, and ulceration of the mucous membranes) than any other age group, and they are also more susceptible to elevation of pulmonary artery pressure. However, patients with late-onset lupus have a much higher mortality rate. Nearly 50% of those with late-onset lupus die of their condition. Women who are of childbearing age are also particularly at risk. Differences in ethnicity Substantial data have been found to indicate that certain ethnic populations could be more at risk for lupus erythematosus and to have a better or worse prognosis. Asian, African, and Native Americans are more likely to get lupus than Caucasians. Caucasians seem generally to have a milder manifestation of the disease. 
Their survival rates after five years were typically around 94–96%, while patients of African and some Asian ethnicities had survival rates closer to 79–92%. The only documented ethnic group that had a higher survival rate than Caucasians was Koreans, who had survival rates nearer to 98%. Among Caucasians, the most common causes of death were complications involving the cardiovascular system, the respiratory system, and malignancies. Atherosclerotic cardiovascular disease is more prevalent in African Americans with lupus than in Caucasians with lupus. Diagnosis Diagnosis of lupus will vary from person to person. It is common to be diagnosed with other illnesses before a doctor can finally give a diagnosis of lupus because a lot of the symptoms overlap with other common illness. Diagnosis of lupus erythematosus requires a physical examination, blood and urine tests, and a skin or kidney biopsy. Some other tests that may need to be run include: Antinuclear antibody (ANA) CBC with differential Chest X-ray Serum creatinine Urinalysis Classification Lupus erythematosus may manifest as systemic disease or in a purely cutaneous form also known as incomplete lupus erythematosus. Lupus has four main types: systemic discoid drug-induced neonatal Of these, systemic lupus erythematosus (also known as SLE) is the most common and serious form. A more thorough categorization of lupus includes the following types: acute cutaneous lupus erythematosus subacute cutaneous lupus erythematosus discoid lupus erythematosus (chronic cutaneous) childhood discoid lupus erythematosus generalized discoid lupus erythematosus localized discoid lupus erythematosus chilblain lupus erythematosus (Hutchinson) lupus erythematosus-lichen planus overlap syndrome lupus erythematosus panniculitis (lupus erythematosus profundus) tumid lupus erythematosus verrucous lupus erythematosus (hypertrophic lupus erythematosus) cutaneous lupus mucinosis complement deficiency syndromes drug-induced lupus erythematosus neonatal lupus erythematosus systemic lupus erythematosus Treatment There is still no cure for lupus but there are options to help control symptoms. The goal for treatment is to prevent flare ups and reduce organ damage. Doctors may prescribe a handful of different medications to help with their patients' symptoms. Some medications are: Nonsteroidal anti-inflammatory drugs (NSAIDs). Corticosteroids Antimalarial drugs BLyS-specific inhibitors Immunosuppressive agents/chemotherapy After being diagnosed some treatment options that may be offered are: Treatment consists primarily of immunosuppressive drugs (e.g., hydroxychloroquine and corticosteroids). A second-line drug is methotrexate in its low-dose schedule. In 2011, the U.S. Food and Drug Administration (FDA) approved the first new drug for lupus in more than 50 years to be used in the US, belimumab. In addition to medical therapy, cognitive behavioral therapy has also been demonstrated to be effective in reducing stress, anxiety, and depression due to the psychological and social impacts that lupus may have. People with SLE treated with standard care experience a higher risk of opportunistic infections and death than the general population. This risk is higher in men and in African Americans. Epidemiology Worldwide An estimated 5 million people worldwide have some form of lupus disease. 70% of lupus cases diagnosed are systemic lupus erythematosus. 20% of people with lupus will have a parent or sibling who already has lupus or may develop lupus. 
about 5% of the children born to individuals with lupus will develop the illness. United Kingdom Females in the UK are seven times more likely to be diagnosed with SLE than males. The estimated number of females in the UK with SLE is 21,700, and the number of males is 3000 — a total of 24,700, or 0.041% of the population. SLE is more common amongst certain ethnic groups than others, especially those of African origin. United States Lupus occurs from infancy to old age, with peak occurrence between ages 15 and 40. Lupus affects females in the US 6 to 10 times more often than males. Prevalence data are limited. Estimates vary and range from 1.8 to 7.6 cases per 100,000 persons per year in parts of the continental United States. In popular culture In the early seasons of the television show House, members of the eponymous character's medical team often suggested lupus as a diagnosis for their patients, only to be rebuked. The rarity of legitimate lupus diagnoses in the show eventually became described as a running gag. See also List of cutaneous conditions List of target antigens in pemphigoid List of immunofluorescence findings for autoimmune bullous conditions List of human leukocyte antigen alleles associated with cutaneous conditions List of people with lupus References External links Autoimmune diseases Connective tissue diseases de:Lupus erythematodes
0.770833
0.997206
0.768679
Kinesiology
Kinesiology is the scientific study of human body movement. Kinesiology addresses physiological, anatomical, biomechanical, pathological, neuropsychological principles and mechanisms of movement. Applications of kinesiology to human health include biomechanics and orthopedics; strength and conditioning; sport psychology; motor control; skill acquisition and motor learning; methods of rehabilitation, such as physical and occupational therapy; and sport and exercise physiology. Studies of human and animal motion include measures from motion tracking systems, electrophysiology of muscle and brain activity, various methods for monitoring physiological function, and other behavioral and cognitive research techniques. Basics Kinesiology studies the science of human movement, performance, and function by applying the fundamental sciences of Cell Biology, Molecular Biology, Chemistry, Biochemistry, Biophysics, Biomechanics, Biomathematics, Biostatistics, Anatomy, Physiology, Exercise Physiology, Pathophysiology, Neuroscience, and Nutritional science. A bachelor's degree in kinesiology can provide strong preparation for graduate study in biomedical research, as well as in professional programs. The term "kinesiologist" is not a licensed nor professional designation in many countries, with the notable exception of Canada. Individuals with training in this area can teach physical education, work as personal trainers and sport coaches, provide consulting services, conduct research and develop policies related to rehabilitation, human motor performance, ergonomics, and occupational health and safety. In North America, kinesiologists may study to earn a Bachelor of Science, Master of Science, or Doctorate of Philosophy degree in Kinesiology or a Bachelor of Kinesiology degree, while in Australia or New Zealand, they are often conferred an Applied Science (Human Movement) degree (or higher). Many doctoral level faculty in North American kinesiology programs received their doctoral training in related disciplines, such as neuroscience, mechanical engineering, psychology, and physiology. In 1965, the University of Massachusetts Amherst created the United States' first Department of Exercise Science (kinesiology) under the leadership of visionary researchers and academicians in the field of exercise science. In 1967, the University of Waterloo launched Canada's first kinesiology department. Principles Adaptation through exercise Adaptation through exercise is a key principle of kinesiology that relates to improved fitness in athletes as well as health and wellness in clinical populations. Exercise is a simple and established intervention for many movement disorders and musculoskeletal conditions due to the neuroplasticity of the brain and the adaptability of the musculoskeletal system. Therapeutic exercise has been shown to improve neuromotor control and motor capabilities in both normal and pathological populations. There are many different types of exercise interventions that can be applied in kinesiology to athletic, normal, and clinical populations. Aerobic exercise interventions help to improve cardiovascular endurance. Anaerobic strength training programs can increase muscular strength, power, and lean body mass. Decreased risk of falls and increased neuromuscular control can be attributed to balance intervention programs. Flexibility programs can increase functional range of motion and reduce the risk of injury. 
As a whole, exercise programs can reduce symptoms of depression and risk of cardiovascular and metabolic diseases. Additionally, they can help to improve quality of life, sleeping habits, immune system function, and body composition. The study of the physiological responses to physical exercise and their therapeutic applications is known as exercise physiology, which is an important area of research within kinesiology. Neuroplasticity Neuroplasticity is also a key scientific principle used in kinesiology to describe how movement and changes in the brain are related. The human brain adapts and acquires new motor skills based on this principle. The brain can be exposed to new stimuli and experiences, learn from them, and create new neural pathways, hence leading to brain adaptation. These new adaptations and skills include both adaptive and maladaptive brain changes. Adaptive plasticity Recent empirical evidence indicates the significant impact of physical activity on brain function; for example, greater amounts of physical activity are associated with enhanced cognitive function in older adults. The effects of physical activity can be distributed throughout the whole brain, such as higher gray matter density and white matter integrity after exercise training, and/or on specific brain areas, such as greater activation in the prefrontal cortex and hippocampus. Neuroplasticity is also the underlying mechanism of skill acquisition. For example, after long-term training, pianists showed greater gray matter density in the sensorimotor cortex and white matter integrity in the internal capsule compared to non-musicians. Maladaptive plasticity Maladaptive plasticity is defined as neuroplasticity with negative effects or detrimental consequences in behavior. Movement abnormalities may occur among individuals with and without brain injuries due to abnormal remodeling in the central nervous system. Learned non-use is an example commonly seen among patients with brain damage, such as stroke. Patients with stroke learn to suppress paretic limb movement after unsuccessful experience with paretic hand use; this may cause decreased neuronal activation at adjacent areas of the infarcted motor cortex. There are many types of therapies that are designed to overcome maladaptive plasticity in the clinic and in research, such as constraint-induced movement therapy (CIMT), body weight support treadmill training (BWSTT) and virtual reality therapy. These interventions have been shown to enhance motor function in paretic limbs and to stimulate cortical reorganization in patients with brain damage. Motor redundancy Motor redundancy is a widely used concept in kinesiology and motor control which states that, for any task the human body can perform, there is effectively an unlimited number of ways the nervous system could achieve that task. This redundancy appears at multiple levels in the chain of motor execution: Kinematic redundancy means that for a desired location of the endpoint (e.g. the hand or finger), there are many configurations of the joints that would produce the same endpoint location in space. Muscle redundancy means that the same net joint torque could be generated by many different relative contributions of individual muscles. Motor unit redundancy means that the same net muscle force could be generated by many different relative contributions of motor units within that muscle. 
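Kinematic redundancy can be made concrete with a planar three-link arm: the arm has three joint angles but its endpoint has only two coordinates, so infinitely many joint configurations place the endpoint on the same target. The following sketch is only an illustration of that idea; the link lengths, the target point, and the choice of the hand orientation as the free parameter are assumptions made for the example, not values from any particular study.

```python
import math

# Illustrative link lengths for a planar 3-link arm (arbitrary units).
L1, L2, L3 = 0.30, 0.25, 0.15

def forward_kinematics(t1, t2, t3):
    """Endpoint (x, y) of the arm for joint angles in radians."""
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2) + L3 * math.cos(t1 + t2 + t3)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2) + L3 * math.sin(t1 + t2 + t3)
    return x, y

def inverse_kinematics(x, y, hand_angle):
    """One joint configuration reaching (x, y) with the distal link oriented
    at hand_angle. The hand angle is the free (redundant) parameter; returns
    None if the target is unreachable for that choice."""
    # "Wrist" position obtained by subtracting the distal link.
    wx = x - L3 * math.cos(hand_angle)
    wy = y - L3 * math.sin(hand_angle)
    # Standard two-link solution (law of cosines) for the first two joints.
    c2 = (wx * wx + wy * wy - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(c2) > 1.0:
        return None
    t2 = math.acos(c2)  # elbow-down branch
    t1 = math.atan2(wy, wx) - math.atan2(L2 * math.sin(t2), L1 + L2 * math.cos(t2))
    t3 = hand_angle - t1 - t2
    return t1, t2, t3

target = (0.35, 0.30)
# Sweeping the free parameter yields many distinct joint configurations,
# all of which place the endpoint on the same target.
for deg in (20, 40, 60, 80):
    sol = inverse_kinematics(*target, math.radians(deg))
    if sol is None:
        continue
    x, y = forward_kinematics(*sol)
    print(f"hand angle {deg:3d} deg -> joints "
          f"({math.degrees(sol[0]):6.1f}, {math.degrees(sol[1]):6.1f}, "
          f"{math.degrees(sol[2]):6.1f}) deg, endpoint ({x:.3f}, {y:.3f})")
```

Sweeping the free parameter traces out a whole family of equivalent solutions; the optimization accounts of coordination discussed below amount to selecting one member of such a family by minimizing a cost such as effort or error.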
The concept of motor redundancy is explored in numerous studies, usually with the goal of describing the relative contribution of a set of motor elements (e.g. muscles) in various human movements, and how these contributions can be predicted from a comprehensive theory. Two distinct (but not incompatible) theories have emerged for how the nervous system coordinates redundant elements: simplification and optimization. In the simplification theory, complex movements and muscle actions are constructed from simpler ones, often known as primitives or synergies, resulting in a simpler system for the brain to control. In the optimization theory, motor actions arise from the minimization of a control parameter, such as the energetic cost of movement or errors in movement performance. Scope of practice In Canada, kinesiology is a professional designation as well as an area of study. In the province of Ontario the scope has been officially defined as, "the assessment of human movement and performance and its rehabilitation and management to maintain, rehabilitate or enhance movement and performance" Kinesiologists work in a variety of roles as health professionals. They work as rehabilitation providers in hospitals, clinics and private settings working with populations needing care for musculoskeletal, cardiac and neurological conditions. They provide rehabilitation to persons injured at work and in vehicular accidents. Kinesiologists also work as functional assessment specialists, exercise therapists, ergonomists, return to work specialists, case managers and medical legal evaluators. They can be found in hospital, long-term care, clinic, work, and community settings. Additionally, kinesiology is applied in areas of health and fitness for all levels of athletes, but more often found with training of elite athletes. Licensing and regulation Canada In Canada, kinesiology has been designated a regulated health profession in Ontario. Kinesiology was granted the right to regulate in the province of Ontario in the summer of 2007 and similar proposals have been made for other provinces. The College of Kinesiologists of Ontario achieved proclamation on April 1, 2013, at which time the professional title "Kinesiologist" became protected by law. In Ontario only members of the college may call themselves a Registered Kinesiologist. Individuals who have earned degrees in kinesiology can work in research, the fitness industry, clinical settings, and in industrial environments. They also work in cardiac rehabilitation, health and safety, hospital and long-term care facilities and community health centers just to name a few. Health service Health promotion Kinesiologists working in the health promotion industry work with individuals to enhance the health, fitness, and well-being of the individual. Kinesiologists can be found working in fitness facilities, personal training/corporate wellness facilities, and industry. Clinical/rehabilitation Kinesiologists work with individuals with disabling conditions to assist in regaining their optimal physical function. They work with individuals in their home, fitness facilities, rehabilitation clinics, and at the worksite. They also work alongside physiotherapists and occupational therapists. Ergonomics Kinesiologists work in industry to assess suitability of design of workstations and provide suggestions for modifications and assistive devices. 
Health and safety Kinesiologists are involved in consulting with industry to identify hazards and provide recommendations and solutions to optimize the health and safety of workers. Disability management/case coordination Kinesiologists recommend and provide a plan of action to return an injured individual to their optimal function in all aspects of life. Management/research/administration Kinesiologists frequently fulfill roles in all above areas, perform research, and manage businesses. Health education Kinesiologists working in health education teach people about behaviors that promote wellness. They develop and implement strategies to improve the health of individuals and communities. Community health workers collect data and discuss health concerns with members of specific populations or communities. Athletic training Kinesiologists working in athletic training work in cooperation with physicians. Athletic trainers strive to prevent athletes from suffering injuries, diagnose them if they have suffered an injury and apply the appropriate treatment. Athletic coaches and scouts Kinesiologists who pursue a career as an athletic coach develop new talent and guide an athlete's progress in a specific sport. They teach amateur or professional athletes the skills they need to succeed at their sport. Many coaches are also involved in scouting. Scouts look for new players and evaluate their skills and likelihood for success at the college, amateur, or professional level. Physical education teacher Kinesiologists working as physical education teachers are responsible for teaching fitness, sports and health. They help students stay both mentally and physically fit by teaching them to make healthy choices. Physical therapy Kinesiologists working in physical therapy diagnose physical abnormalities, restore mobility to the client, and promote proper function of joints. History of kinesiology Royal Central Institute of Gymnastics (sv) G.C.I. was founded 1813 in Stockholm, Sweden by Pehr Henrik Ling. It was the first Physiotherapy school in the world, training hundreds of medical gymnasts who spread the Swedish physical therapy around the entire world. In 1887, Sweden was the first country in the world to give a national state licence to physiotherapists/physical therapists. The Swedish medical gymnast and kinesiologist Carl August Georgii (sv), Professor at the Royal Gymnastic Central Institute GCI in Stockholm, was the one who created and coined the new international word Kinesiology in 1854. The term Kinesiology is a literal translation to Greek+English from the original Swedish word Rörelselära, meaning "Movement Science". It was the foundation of the Medical Gymnastics, the original Physiotherapy and Physical Therapy, developed for over 100 years in Sweden (starting 1813). The new medical therapy created in Sweden was originally called Rörelselära (sv), and later in 1854 translated to the new and invented international word "Kinesiology". The Kinesiology consisted of nearly 2,000 physical movements and 50 different types of massage therapy techniques. They were all used to affect various dysfunctions and even illnesses, not only in the movement apparatus, but also into the internal physiology of man. 
Thus, the original classical and Traditional Kinesiology was not only a system of rehabilitation for the body, or biomechanics like in modern Academic Kinesiology, but also a new therapy for relieving and curing diseases, by affecting the autonomic nervous system, organs and glands in the body., In 1886, the Swedish Medical Gymnast Nils Posse (1862-1895) introduced the term kinesiology in the U.S. Nils Posse was a graduate of the Royal Gymnastic Central Institute in Stockholm, Sweden and founder of the Posse Gymnasium in Boston, MA. He was teaching at Boston Normal School of Gymnastics BNSG. The Special Kinesiology Of Educational Gymnastics was the first book ever written in the world with the word "Kinesiology" in the title of the book. It was written by Nils Posse and published in Boston, 1894–1895. Posse was elected posthumously as an Honorary Fellow in Memoriam in the National Academy of Kinesiology. The National Academy of Kinesiology was formally founded in 1930 in the United States. The academy's dual purpose is to encourage and promote the study and educational applications of the art and science of human movement and physical activity and to honor by election to its membership persons who have directly or indirectly contributed significantly to the study of and/or application of the art and science of human movement and physical activity. Membership in the National Academy of Kinesiology is by election and those elected are known as Fellows. Fellows are elected from around the world. Election into the National Academy of Kinesiology is considered a pinnacle achievement and recognition with the discipline. For further information see: National Academy of Kinesiology | National Academy of Kinesiology Technology in kinesiology Motion capture technology has application in measuring human movement, and thus kinesiology. Historically, motion capture labs have recorded high fidelity data. While accurate and credible, these systems can come at high capital and operational costs. Modern-day systems have increased accessibility to mocap technology. Adapted physical activity Adapted physical activity (APA) is a branch of kinesiology, referring to physical activity that is modified or designed to meet the needs of individuals with disabilities. The term originated in the field of physical education and is commonly used in the field of physical education and rehabilitation to refer to physical activities and exercises that have been modified or adapted for individuals with disabilities. These activities are often led by trained professionals, such as adapted physical educators, occupational therapists, or physical therapists. In 1973 the Federation Internationale de lʼ Activite Physique Adaptee (International Federation of Adapted Physical Activity - IFAPA) was formed and is described as a discipline/profession that purpose to facilitates physical activity across people with a wide range of individual differences, emphasizing in empowerment, self-determination and opportunities access. A common definition of APA is "a cross-disciplinary body of practical and theoretical knowledge directed toward impairments, activity limitations, and participation restrictions in physical activity. It is a service delivery profession and an academic field of study that supports an attitude of acceptance of individual differences, advocates access to active lifestyles and sport, and promotes innovative and cooperative service delivery, supports, and empowerment. 
Adapted physical activity includes, but is not limited to, physical education, sport, recreation, dance, creative arts, nutrition, medicine, and rehabilitation." This definition aligns with the World Health Organization International Classification of Functioning, Disability and Health whereby disability is seen as the interaction between impairments or conditions with activity limitations, participation restrictions and contextual factors. Overview The term APA has evolved in the course of years, and in some countries could be recognized with alternative terms that contain a similar set of constructs, for example, sports for disabled people, sports therapy, and psychomotor therapy. The APA is considered as (i) activities or service delivery, (ii) a profession, and (iii) an academic field of study with a unique body of knowledge that differs from terms such as adapted physical education or para-sport. Principally, APA is an umbrella term that incorporates the mentioned terms considered sub-specializations (i.e., physical education, para-sports, recreation, and rehabilitation). APA is proposed to have close links between the field of practice and the field of study with unique theories and growing bodies of practical and scientific knowledge, where APA practitioners are those who provide the services and activities, while APA scholars generate and promote evidence-based research practices among practitioners. Adaptation to physical activity opportunities is most often provided in the form of appropriately designed and modified equipment (prosthesis, wheelchairs, mono-ski, ball size), task criteria (e.g., modifying skill quality criteria or using a different skill), instructions (e.g., using personal supports, peer tutors, non-verbal instructions, motivational strategies), physical and social environments (e.g., increasing or decreasing court dimensions; segregated vs. inclusive; type of training climate: mastery-oriented, collaborative or competitive social environment; degree of peer and parental support), and rules (e.g., double bounce rule in wheelchair tennis). In general, the APA presents various sub-specializations such as physical education (e.g., inclusion in physical education, attention to students with special needs, development of new education contents), sports (e.g., development of paralympic sports, activity by sports federations for athletes with disabilities), recreation (e.g., development of the inclusive sport approach and attitudes change programs), and rehabilitation (e.g., physical activity programs in rehabilitation centers, involvement of health-related professionals). The role of sports and physical activity participation in the population with disabilities has been recognized as a human right in the Convention on the Rights of Persons with Disabilities and declared in other international organization agreements such as: International Charter of Physical Education, Physical Activity and Sport (UNESCO). International Conference of Ministers and Senior Officials Responsible for Physical Education and Sport (MINEPS). Marseille Declaration, Universal Fitness Innovation & Transformation - UFIT Launch October 2015. A Commitment to Inclusion by and for the Global Fitness Industry. Sustainable Development Goals, Sports and Physical Activity, United Nations (UN). 
In this line, the APA as a discipline/profession plays an essential role in addressing the needs from a theoretical and practical framework to provide full participation access in physical activity to populations with disabilities. There are many educational programmes offered around the world that specialise in APA, including disability sports, adapted sports, rehabilitation, adapted physical education and parasport management. In Europe there is the European Diploma of Adapted Physical Activity for bachelor's degrees. At the master's degree level, there is the International Masters in Adapted Physical Activity and the master's degree in Adapted Physical Activity offered by the Lithuanian Sports University. A doctoral programme in adapted physical activity can be studied through the Multi-Institution Mentorship Consortium (MAMC). Furthermore, there is offered a Master of Adapted Physical Education in the North American region in Oregon State University (USA). In the South American Region, the San Sebastian University (Chile) offers a Master of Physical Activity and Adapted Sports. The universities Viña del Mar and UMCE in Chile offers a specialization in adapted physical activity. International Federation of Adapted Physical Activity The International Federation of Adapted Physical Activity (IFAPA) is an international scientific organization of higher education scholars, practitioners and students dedicated to promoting APA. IFAPA was founded in 1973 in Quebec, Canada, presenting an original purpose declared "to give global focus to professionals who use adapted physical activities for instruction, recreation, remediation, and research". From these initial times, IFAPA evolved from a small organization to an international corporation with active regional federations in different world regions. The current purpose of IFAPA are: To encourage international cooperation in the field of physical activity to the benefit of individuals of all abilities, to promote, stimulate and support research in the field of adapted physical activity throughout the world, and to make scientific knowledge of and practical experiences in adapted physical activity available to all interested persons, organizations and institutions. IFAPA coordinates national, regional, and international functions (both governmental and nongovernmental) that pertain to sport, dance, aquatics, exercise, fitness, and wellness for individuals of all ages with disabilities or special needs. IFAPA is linked with several other international governing bodies, including the International Paralympic Committee (IPC), Special Olympics International and the International Council of Sport Science and Physical Education (ICSSPE). English is the language used for IFAPA correspondence, conferences. Professor David Legg from Mount Royal University is the current President of the International Federation of Adapted Physical Activity (IFAPA) since 2019 at the International Symposium of Adapted Physical Activity (ISAPA) hosted by IFAPA Past President Martin Block at the University of Virginia. The biennial ISAPA scheduled for 2021 was planned to be held at the University of Jyväskylä, Finland. Due to the COVID-19 pandemic it was later announced to be held online only, making it the first Online ISAPA since the first one in 1977. The 2023 ISAPA was awarded to a multi-site organisation by Halberg Foundation in New Zealand and Mooven in France. 
Regions Africa - no formal organisation Asia - Asian society of adapted physical education - ASAPE Europe - European Federation of Adapted Physical Activity - EUFAPA Middle East - Middle East Federation of Adapted Physical Activity - MEFAPA North America - North American Federation of Adapted Physical Activity - NAFAPA Oceania - no formal organisation South and Central America - South American Federation of Adapted Physical Activity - SAPA Research and dissemination in adapted physical activity Actually, it is possible to find numerous sports science journals with research papers on adapted sport, while those specific to APA are lesser. Adapted Physical Activity Quarterly (APAQ) is the only AFA-specific journal indexed in the Journal Citation Reports Index, appearing in both the Sport Sciences and Rehabilitation directories, which is another example of its interdisciplinarity (Impact Score 2020-2021 = 2.61) (Pérez et al., 2012). Additionally, the European Journal of Adapted Physical Activity (EUJAPA) is another international, multidisciplinary journal introduced to communicate, share and stimulate academic inquiry focusing on APA of persons with disabilities, appearing in the Education directories of Scimago Journal & Country Rank (SJR). Regarding the dissemination of scientific knowledge generated by the APA, the most relevant international events are described as follows: International Symposium of Adapted Physical Activity (ISAPA), organized by IFAPA on a biannual basis. Vista conference, organized by the International Paralympic Committee on a biannual basis. Paralympic Congress, organized by the International Paralympic Committee every four years. European Conference on Adapted Physical Activity (EUCAPA), organized by European Federation in Adapted Physical Activity on a biannual basis. North American Federation of Adapted Physical Activity (NAFAPA) Conference, organized by NAFAPA on a biannual basis. South American Adapted Physical Activity Conference, organized by South American Federation of Adapted Physical Activity. Adapted physical education Adapted physical education is a sub-discipline of physical education with a focus on including students with disabilities into the subject. APE is the term used to refer to the physical education for individuals with disabilities that occurs primarily in elementary and secondary schools. According to Dunn and Leitschuh APE is defined as "Adapted physical education programs are those that have the same objectives as the regular physical education program but in which adjustments are made in the regular offerings to meet the needs and abilities of exceptional students". This education can be provided in separate educational settings as well as in general (regular) educational settings. APE is oriented to educate students to lifelong engagement in physical activities and to live a healthy lifestyle offering possibilities to exploit movements, games, and sports and at the same time personal development. Goals and objectives of adapted and general physical education might be the same with some minor differences. For example, learning to push a wheelchair or play wheelchair basketball might be a goal for a child with a spinal cord injury, while running and playing regular basketball is a goal for a child with a disability. In other cases, a child with a disability might focus on fewer objectives or modified objectives within a domain (e.g., physical fitness) compared to peers without disabilities. 
Parasport or disability sport The APA in this field is oriented principally to the Parasports movement, which organises sports for and by people with disabilities. Examples of para-sports organizations include sports in the Paralympic Games, Special Olympics, Deaflympics as well as Invictus games to name a few. Many para-sports have eligibility criteria according to the characteristics of the participants. In the Paralympics Games, this is known as sport classification, a system that provides a framework for determining who can and who cannot participate according to the impact of the impairments on the outcome of the competition. In the Special Olympics individuals eligible have to meet the following criteria be at least 8 years old have been identified by an agency or professional as having one of the following conditions: intellectual disabilities, cognitive delays (as measured by formal assessment), or significant learning or vocational problems due to cognitive delay that require specially designed instruction. Another sporting competition for people with intellectual impairments is the Virtus Games (formerly known as International Sports Federation for Persons with Intellectual Disability. This is different from the Special Olympics. Eligibility is based on a master list of II 1 Intellectual Disability II 2 Significant Intellectual Disability II 3 Austism To be eligible to compete at the Deaflympics, athletes must have a hearing loss of at least 55 decibels in the better ear. The Invictus Games were designed to allow sport competitions between wounded, injured or sick servicemen and women (WIS). Therefore, only people in the military sectors can compete in the Invictus games. Physical medicine and rehabiltiation The results from APA can help the practice of Physical medicine and rehabilitation, whereby the functional ability and quality of life is improved. Rehabilitation is helping the individual achieve the highest level of functioning, independence, participation, and quality of life possible. The APA and sport in rehabilitation for individuals with disabilities is particularly important and is associated with the legacy of the medical rehabilitation specialist Sir Ludwig Guttman who was the founder of the International Stoke Mandeville Games Federation, the basis of the actual Paralympic movement. APA and sports are strongly recommended in rehabilitation programs due to the positive impact and health benefits in people with different disabilities. The APA practitioner provides exercise and training regimens adapted for specific individual needs and works based on the International Classification of Functioning, Disability, and Health of the World Health Organization, facilitating a common language with other rehabilitation professionals during the rehabilitation process. See also Adapted Physical Education (USA) Anatomical terms of motion Assistive technology in sport Disability Disabled sports Exercise physiology Human musculoskeletal system Kinanthropometry Kinesiogenomics Kinesiotherapy Mental practice of action Motor imagery Movement assessment Neurology Parasports Physical therapy (USA) Physiological movement Sports science References External links Ergonomics Applied sciences Human physiology Motor control Exercise physiology
Microorganism
A microorganism, or microbe, is an organism of microscopic size, which may exist in its single-celled form or as a colony of cells. The possible existence of unseen microbial life was suspected from ancient times, such as in Jain scriptures from sixth century BC India. The scientific study of microorganisms began with their observation under the microscope in the 1670s by Antonie van Leeuwenhoek. In the 1850s, Louis Pasteur found that microorganisms caused food spoilage, debunking the theory of spontaneous generation. In the 1880s, Robert Koch discovered that microorganisms caused the diseases tuberculosis, cholera, diphtheria, and anthrax. Because microorganisms include most unicellular organisms from all three domains of life, they can be extremely diverse. Two of the three domains, Archaea and Bacteria, only contain microorganisms. The third domain, Eukaryota, includes all multicellular organisms as well as many unicellular protists and protozoans that are microbes. Some protists are related to animals and some to green plants. Many multicellular organisms are also microscopic, namely micro-animals, some fungi, and some algae, but these are generally not considered microorganisms. Microorganisms can have very different habitats, and live everywhere from the poles to the equator, in deserts, geysers, rocks, and the deep sea. Some are adapted to extremes such as very hot or very cold conditions, others to high pressure, and a few, such as Deinococcus radiodurans, to high radiation environments. Microorganisms also make up the microbiota found in and on all multicellular organisms. There is evidence that 3.45-billion-year-old Australian rocks once contained microorganisms, the earliest direct evidence of life on Earth. Microbes are important in human culture and health in many ways, serving to ferment foods and treat sewage, and to produce fuel, enzymes, and other bioactive compounds. Microbes are essential tools in biology as model organisms and have been put to use in biological warfare and bioterrorism. Microbes are a vital component of fertile soil. In the human body, microorganisms make up the human microbiota, including the essential gut flora. The pathogens responsible for many infectious diseases are microbes and, as such, are the target of hygiene measures. Discovery Ancient precursors The possible existence of microscopic organisms was discussed for many centuries before their discovery in the seventeenth century. By the 6th century BC, the Jains of present-day India postulated the existence of tiny organisms called nigodas. These nigodas are said to be born in clusters; they live everywhere, including the bodies of plants, animals, and people; and their life lasts only for a fraction of a second. According to Mahavira, the 24th tirthankara of Jainism, humans destroy these nigodas on a massive scale when they eat, breathe, sit, and move. Many modern Jains assert that Mahavira's teachings presage the existence of microorganisms as discovered by modern science. The earliest known idea to indicate the possibility of diseases spreading by yet unseen organisms was that of the Roman scholar Marcus Terentius Varro in a first-century BC book entitled On Agriculture, in which he called the unseen creatures animalia minuta and warned against locating a homestead near a swamp. In The Canon of Medicine (1020), Avicenna suggested that tuberculosis and other diseases might be contagious.
Early modern Turkish scientist Akshamsaddin mentioned the microbe in his work Maddat ul-Hayat (The Material of Life) about two centuries prior to Antonie van Leeuwenhoek's discovery through experimentation. In 1546, Girolamo Fracastoro proposed that epidemic diseases were caused by transferable seedlike entities that could transmit infection by direct or indirect contact, or even without contact over long distances. Antonie van Leeuwenhoek is considered to be one of the fathers of microbiology. He was the first in 1673 to discover and conduct scientific experiments with microorganisms, using simple single-lensed microscopes of his own design. Robert Hooke, a contemporary of Leeuwenhoek, also used microscopy to observe microbial life in the form of the fruiting bodies of moulds. In his 1665 book Micrographia, he published drawings of his microscopic studies, and he coined the term cell. 19th century Louis Pasteur (1822–1895) exposed boiled broths to the air, in vessels that contained a filter to prevent particles from passing through to the growth medium, and also in vessels without a filter, but with air allowed in via a curved tube so dust particles would settle and not come in contact with the broth. By boiling the broth beforehand, Pasteur ensured that no microorganisms survived within the broths at the beginning of his experiment. Nothing grew in the broths in the course of Pasteur's experiment. This meant that the living organisms that grew in such broths came from outside, as spores on dust, rather than spontaneously generated within the broth. Thus, Pasteur refuted the theory of spontaneous generation and supported the germ theory of disease. In 1876, Robert Koch (1843–1910) established that microorganisms can cause disease. He found that the blood of cattle that were infected with anthrax always had large numbers of Bacillus anthracis. Koch found that he could transmit anthrax from one animal to another by taking a small sample of blood from the infected animal and injecting it into a healthy one, and this caused the healthy animal to become sick. He also found that he could grow the bacteria in a nutrient broth, then inject it into a healthy animal, and cause illness. Based on these experiments, he devised criteria for establishing a causal link between a microorganism and a disease, and these are now known as Koch's postulates. Although these postulates cannot be applied in all cases, they do retain historical importance to the development of scientific thought and are still being used today. The discovery of microorganisms such as Euglena that did not fit into either the animal or plant kingdoms, since they were photosynthetic like plants, but motile like animals, led to the naming of a third kingdom in the 1860s. In 1860 John Hogg called this the Protoctista, and in 1866 Ernst Haeckel named it the Protista. The work of Pasteur and Koch did not accurately reflect the true diversity of the microbial world because of their exclusive focus on microorganisms having direct medical relevance. It was not until the work of Martinus Beijerinck and Sergei Winogradsky late in the nineteenth century that the true breadth of microbiology was revealed. Beijerinck made two major contributions to microbiology: the discovery of viruses and the development of enrichment culture techniques.
While his work on the tobacco mosaic virus established the basic principles of virology, it was his development of enrichment culturing that had the most immediate impact on microbiology by allowing for the cultivation of a wide range of microbes with wildly different physiologies. Winogradsky was the first to develop the concept of chemolithotrophy and to thereby reveal the essential role played by microorganisms in geochemical processes. He was responsible for the first isolation and description of both nitrifying and nitrogen-fixing bacteria. French-Canadian microbiologist Félix d'Hérelle co-discovered bacteriophages and was one of the earliest applied microbiologists. Classification and structure Microorganisms can be found almost anywhere on Earth. Bacteria and archaea are almost always microscopic, while a number of eukaryotes are also microscopic, including most protists, some fungi, as well as some micro-animals and plants. Viruses are generally regarded as not living and therefore not considered to be microorganisms, although a subfield of microbiology is virology, the study of viruses. Evolution Single-celled microorganisms were the first forms of life to develop on Earth, approximately 3.5 billion years ago. Further evolution was slow, and for about 3 billion years in the Precambrian eon (much of the history of life on Earth), all organisms were microorganisms. Bacteria, algae and fungi have been identified in amber that is 220 million years old, which shows that the morphology of microorganisms has changed little since at least the Triassic period. The newly discovered biological role played by nickel, however – especially that brought about by volcanic eruptions from the Siberian Traps – may have accelerated the evolution of methanogens towards the end of the Permian–Triassic extinction event. Microorganisms tend to have a relatively fast rate of evolution. Most microorganisms can reproduce rapidly, and bacteria are also able to freely exchange genes through conjugation, transformation and transduction, even between widely divergent species. This horizontal gene transfer, coupled with a high mutation rate and other means of transformation, allows microorganisms to swiftly evolve (via natural selection) to survive in new environments and respond to environmental stresses. This rapid evolution is important in medicine, as it has led to the development of multidrug-resistant pathogenic bacteria, superbugs, that are resistant to antibiotics. A possible transitional form of microorganism between a prokaryote and a eukaryote was discovered in 2012 by Japanese scientists. Parakaryon myojinensis is a unique microorganism larger than a typical prokaryote, but with nuclear material enclosed in a membrane as in a eukaryote, and the presence of endosymbionts. This is seen to be the first plausible evolutionary form of microorganism, showing a stage of development from the prokaryote to the eukaryote. Archaea Archaea are prokaryotic unicellular organisms, and form the first domain of life in Carl Woese's three-domain system. A prokaryote is defined as having no cell nucleus or other membrane-bound organelle. Archaea share this defining feature with the bacteria with which they were once grouped. In 1990 the microbiologist Woese proposed the three-domain system that divided living things into bacteria, archaea and eukaryotes, and thereby split the prokaryote domain. Archaea differ from bacteria in both their genetics and biochemistry.
For example, while bacterial cell membranes are made from phosphoglycerides with ester bonds, archaean membranes are made of ether lipids. Archaea were originally described as extremophiles living in extreme environments, such as hot springs, but have since been found in all types of habitats. Only now are scientists beginning to realize how common archaea are in the environment, with Thermoproteota (formerly Crenarchaeota) being the most common form of life in the ocean, dominating ecosystems at depth. These organisms are also common in soil and play a vital role in ammonia oxidation. The combined domains of archaea and bacteria make up the most diverse and abundant group of organisms on Earth and inhabit practically all environments in which the temperature remains below the upper limit for life. They are found in water, soil, air, as the microbiome of an organism, hot springs and even deep beneath the Earth's crust in rocks. The number of prokaryotes is estimated to be around five nonillion, or 5 × 10³⁰, accounting for at least half the biomass on Earth. The biodiversity of the prokaryotes is unknown, but may be very large. A May 2016 estimate, based on laws of scaling from known numbers of species against the size of organism, gives an estimate of perhaps 1 trillion species on the planet, of which most would be microorganisms. Currently, only one-thousandth of one percent of that total has been described. Archaeal cells of some species aggregate and transfer DNA from one cell to another through direct contact, particularly under stressful environmental conditions that cause DNA damage. Bacteria Like archaea, bacteria are prokaryotic – unicellular, and having no cell nucleus or other membrane-bound organelle. Bacteria are microscopic, with a few extremely rare exceptions, such as Thiomargarita namibiensis. Bacteria function and reproduce as individual cells, but they can often aggregate in multicellular colonies. Some species such as myxobacteria can aggregate into complex swarming structures, operating as multicellular groups as part of their life cycle, or form clusters in bacterial colonies such as E. coli. Their genome is usually a circular bacterial chromosome – a single loop of DNA, although they can also harbor small pieces of DNA called plasmids. These plasmids can be transferred between cells through bacterial conjugation. Bacteria have an enclosing cell wall, which provides strength and rigidity to their cells. They reproduce by binary fission or sometimes by budding, but do not undergo meiotic sexual reproduction. However, many bacterial species can transfer DNA between individual cells by a horizontal gene transfer process referred to as natural transformation. Some species form extraordinarily resilient spores, but for bacteria this is a mechanism for survival, not reproduction. Under optimal conditions bacteria can grow extremely rapidly and their numbers can double as quickly as every 20 minutes. Eukaryotes Most living things that are visible to the naked eye in their adult form are eukaryotes, including humans. However, many eukaryotes are also microorganisms. Unlike bacteria and archaea, eukaryotes contain organelles such as the cell nucleus, the Golgi apparatus and mitochondria in their cells. The nucleus is an organelle that houses the DNA that makes up a cell's genome. DNA (Deoxyribonucleic acid) itself is arranged in complex chromosomes. Mitochondria are organelles vital in metabolism as they are the site of the citric acid cycle and oxidative phosphorylation.
They evolved from symbiotic bacteria and retain a remnant genome. Like bacteria, plant cells have cell walls, and contain organelles such as chloroplasts in addition to the organelles in other eukaryotes. Chloroplasts produce energy from light by photosynthesis, and were also originally symbiotic bacteria. Unicellular eukaryotes consist of a single cell throughout their life cycle. This qualification is significant since most multicellular eukaryotes consist of a single cell called a zygote only at the beginning of their life cycles. Microbial eukaryotes can be either haploid or diploid, and some organisms have multiple cell nuclei. Unicellular eukaryotes usually reproduce asexually by mitosis under favorable conditions. However, under stressful conditions such as nutrient limitations and other conditions associated with DNA damage, they tend to reproduce sexually by meiosis and syngamy. Protists Of eukaryotic groups, the protists are most commonly unicellular and microscopic. This is a highly diverse group of organisms that are not easy to classify. Several algae species are multicellular protists, and slime molds have unique life cycles that involve switching between unicellular, colonial, and multicellular forms. The number of species of protists is unknown since only a small proportion has been identified. Protist diversity is high in oceans, deep sea-vents, river sediment and an acidic river, suggesting that many eukaryotic microbial communities may yet be discovered. Fungi The fungi have several unicellular species, such as baker's yeast (Saccharomyces cerevisiae) and fission yeast (Schizosaccharomyces pombe). Some fungi, such as the pathogenic yeast Candida albicans, can undergo phenotypic switching and grow as single cells in some environments, and filamentous hyphae in others. Plants The green algae are a large group of photosynthetic eukaryotes that include many microscopic organisms. Although some green algae are classified as protists, others such as charophyta are classified with embryophyte plants, which are the most familiar group of land plants. Algae can grow as single cells, or in long chains of cells. The green algae include unicellular and colonial flagellates, usually but not always with two flagella per cell, as well as various colonial, coccoid, and filamentous forms. In the Charales, which are the algae most closely related to higher plants, cells differentiate into several distinct tissues within the organism. There are about 6000 species of green algae. Ecology Microorganisms are found in almost every habitat present in nature, including hostile environments such as the North and South poles, deserts, geysers, and rocks. They also include all the marine microorganisms of the oceans and deep sea. Some types of microorganisms have adapted to extreme environments and sustained colonies; these organisms are known as extremophiles. Extremophiles have been isolated from rocks as much as 7 kilometres below the Earth's surface, and it has been suggested that the amount of organisms living below the Earth's surface is comparable with the amount of life on or above the surface. Extremophiles have been known to survive for a prolonged time in a vacuum, and can be highly resistant to radiation, which may even allow them to survive in space. Many types of microorganisms have intimate symbiotic relationships with other larger organisms; some of which are mutually beneficial (mutualism), while others can be damaging to the host organism (parasitism). 
If microorganisms can cause disease in a host they are known as pathogens and then they are sometimes referred to as germs. Microorganisms play critical roles in Earth's biogeochemical cycles as they are responsible for decomposition and nitrogen fixation. Bacteria use regulatory networks that allow them to adapt to almost every environmental niche on earth. A network of interactions among diverse types of molecules including DNA, RNA, proteins and metabolites, is utilised by the bacteria to achieve regulation of gene expression. In bacteria, the principal function of regulatory networks is to control the response to environmental changes, for example nutritional status and environmental stress. A complex organization of networks permits the microorganism to coordinate and integrate multiple environmental signals. Extremophiles Extremophiles are microorganisms that have adapted so that they can survive and even thrive in extreme environments that are normally fatal to most life-forms. Thermophiles and hyperthermophiles thrive in high temperatures. Psychrophiles thrive in extremely low temperatures. Halophiles such as Halobacterium salinarum (an archaean) thrive in high salt conditions, up to saturation. Alkaliphiles thrive in an alkaline pH of about 8.5–11. Acidophiles can thrive in a pH of 2.0 or less. Piezophiles thrive at very high pressures: up to 1,000–2,000 atm, down to 0 atm as in a vacuum of space. A few extremophiles such as Deinococcus radiodurans are radioresistant, resisting radiation exposure of up to 5 kGy. Extremophiles are significant in different ways. They extend terrestrial life into much of the Earth's hydrosphere, crust and atmosphere, their specific evolutionary adaptation mechanisms to their extreme environment can be exploited in biotechnology, and their very existence under such extreme conditions increases the potential for extraterrestrial life. Plants and soil The nitrogen cycle in soils depends on the fixation of atmospheric nitrogen. This is achieved by a number of diazotrophs. One way this can occur is in the root nodules of legumes that contain symbiotic bacteria of the genera Rhizobium, Mesorhizobium, Sinorhizobium, Bradyrhizobium, and Azorhizobium. The roots of plants create a narrow region known as the rhizosphere that supports many microorganisms known as the root microbiome. These microorganisms in the root microbiome are able to interact with each other and surrounding plants through signals and cues. For example, mycorrhizal fungi are able to communicate with the root systems of many plants through chemical signals between both the plant and fungi. This results in a mutualistic symbiosis between the two. However, these signals can be eavesdropped by other microorganisms, such as the soil bacterium Myxococcus xanthus, which preys on other bacteria. Eavesdropping, or the interception of signals by unintended receivers, such as plants and microorganisms, can lead to large-scale, evolutionary consequences. For example, signaler-receiver pairs, like plant-microorganism pairs, may lose the ability to communicate with neighboring populations because of variability in eavesdroppers. In adapting to avoid local eavesdroppers, signal divergence could occur and thus lead to the isolation of plants and microorganisms through an inability to communicate with other populations. Symbiosis A lichen is a symbiosis of a macroscopic fungus with photosynthetic microbial algae or cyanobacteria.
Applications Microorganisms are useful in producing foods, treating waste water, creating biofuels and a wide range of chemicals and enzymes. They are invaluable in research as model organisms. They have been weaponised and sometimes used in warfare and bioterrorism. They are vital to agriculture through their roles in maintaining soil fertility and in decomposing organic matter. They also have applications in aquaculture, such as in biofloc technology. Food production Microorganisms are used in a fermentation process to make yoghurt, cheese, curd, kefir, ayran, xynogala, and other types of food. Fermentation cultures provide flavour and aroma, and inhibit undesirable organisms. They are used to leaven bread, and to convert sugars to alcohol in wine and beer. Microorganisms are used in brewing, wine making, baking, pickling and other food-making processes. Water treatment Water treatment systems depend for their ability to clean up water contaminated with organic material on microorganisms that can respire dissolved substances. Respiration may be aerobic, with a well-oxygenated filter bed such as a slow sand filter. Anaerobic digestion by methanogens generates useful methane gas as a by-product. Energy Microorganisms are used in fermentation to produce ethanol, and in biogas reactors to produce methane. Scientists are researching the use of algae to produce liquid fuels, and bacteria to convert various forms of agricultural and urban waste into usable fuels. Chemicals, enzymes Microorganisms are used to produce many commercial and industrial chemicals, enzymes and other bioactive molecules. Organic acids produced on a large industrial scale by microbial fermentation include acetic acid produced by acetic acid bacteria such as Acetobacter aceti, butyric acid made by the bacterium Clostridium butyricum, lactic acid made by Lactobacillus and other lactic acid bacteria, and citric acid produced by the mould fungus Aspergillus niger. Microorganisms are used to prepare bioactive molecules such as Streptokinase from the bacterium Streptococcus, Cyclosporin A from the ascomycete fungus Tolypocladium inflatum, and statins produced by the yeast Monascus purpureus. Science Microorganisms are essential tools in biotechnology, biochemistry, genetics, and molecular biology. The yeasts Saccharomyces cerevisiae and Schizosaccharomyces pombe are important model organisms in science, since they are simple eukaryotes that can be grown rapidly in large numbers and are easily manipulated. They are particularly valuable in genetics, genomics and proteomics. Microorganisms can be harnessed for uses such as creating steroids and treating skin diseases. Scientists are also considering using microorganisms for living fuel cells, and as a solution for pollution. Warfare In the Middle Ages, as an early example of biological warfare, diseased corpses were thrown into castles during sieges using catapults or other siege engines. Individuals near the corpses were exposed to the pathogen and were likely to spread that pathogen to others. In modern times, bioterrorism has included the 1984 Rajneeshee bioterror attack and the 1993 release of anthrax by Aum Shinrikyo in Tokyo. Soil Microbes can make nutrients and minerals in the soil available to plants, produce hormones that spur growth, stimulate the plant immune system and trigger or dampen stress responses. In general a more diverse set of soil microbes results in fewer plant diseases and higher yield.
Human health Human gut flora Microorganisms can form an endosymbiotic relationship with other, larger organisms. For example, microbial symbiosis plays a crucial role in the immune system. The microorganisms that make up the gut flora in the gastrointestinal tract contribute to gut immunity, synthesize vitamins such as folic acid and biotin, and ferment complex indigestible carbohydrates. Some microorganisms that are seen to be beneficial to health are termed probiotics and are available as dietary supplements, or food additives. Disease Microorganisms are the causative agents (pathogens) in many infectious diseases. The organisms involved include pathogenic bacteria, causing diseases such as plague, tuberculosis and anthrax; protozoan parasites, causing diseases such as malaria, sleeping sickness, dysentery and toxoplasmosis; and also fungi causing diseases such as ringworm, candidiasis or histoplasmosis. However, other diseases such as influenza, yellow fever or AIDS are caused by pathogenic viruses, which are not usually classified as living organisms and are not, therefore, microorganisms by the strict definition. No clear examples of archaean pathogens are known, although a relationship has been proposed between the presence of some archaean methanogens and human periodontal disease. Numerous microbial pathogens are capable of sexual processes that appear to facilitate their survival in their infected host. Hygiene Hygiene is a set of practices to avoid infection or food spoilage by eliminating microorganisms from the surroundings. As microorganisms, in particular bacteria, are found virtually everywhere, harmful microorganisms may be reduced to acceptable levels rather than actually eliminated. In food preparation, microorganisms are reduced by preservation methods such as cooking, cleanliness of utensils, short storage periods, or low temperatures. If complete sterility is needed, as with surgical equipment, an autoclave is used to kill microorganisms with heat and pressure. In fiction Osmosis Jones, a 2001 film, and its spin-off series Ozzy & Drix, set in a stylized version of the human body, featured anthropomorphic microorganisms. In War of the Worlds (2005 film), alien lifeforms attempting to conquer Earth are ultimately defeated by a common microbe to which humans are immune. See also Catalogue of Life Impedance microbiology Microbial biogeography Microbial intelligence Microbiological culture Microbivory, an eating behavior of some animals feeding on living microbes Nanobacterium Nylon-eating bacteria Petri dish Staining Budapest Treaty (Budapest Treaty on the International Recognition of the Deposit of Microorganisms for the Purposes of Patent Procedure) Notes References External links Microbes.info is a microbiology information portal containing a vast collection of resources including articles, news, frequently asked questions, and links pertaining to the field of microbiology. Our Microbial Planet A free poster from the National Academy of Sciences about the positive roles of micro-organisms. "Uncharted Microbial World: Microbes and Their Activities in the Environment" Report from the American Academy of Microbiology Understanding Our Microbial Planet: The New Science of Metagenomics A 20-page educational booklet providing a basic overview of metagenomics and our microbial planet.
Tree of Life Eukaryotes Microbe News from Genome News Network Medical Microbiology On-line textbook Through the microscope: A look at all things small On-line microbiology textbook by Timothy Paustian and Gary Roberts, University of Wisconsin–Madison Methane-spewing microbe blamed in worst mass extinction. CBCNews
Frailty syndrome
Frailty is a common and clinically significant grouping of symptoms that occurs in aging and older adults. These symptoms can include decreased physical abilities such as walking, excessive fatigue, and weight and muscle loss leading to declined physical status. In addition, frailty encompasses a decline in both overall physical function and physiologic reserve of organ systems resulting in worse health outcomes for this population. This syndrome is associated with increased risk of heart disease, falls, hospitalization, and death. In addition, it has been shown that adults living with frailty face more anxiety and depression symptoms than those who do not. The presence of frailty varies based on the assessment technique; however, it is estimated that 4-16% of the population over 65 years old is living with frailty. Frailty can have impacts on public health due to the factors that comprise the syndrome affecting physical and mental health outcomes. There are several ways to identify, prevent, and mitigate the prevalence of frailty, and the evaluation of frailty can be done through clinical assessments created to combine recognized signs and symptoms of frailty. Definitions Frailty refers to an age-related functional decline and heightened state of vulnerability. It is a worsening of functional status compared to the normal physiological process of aging. It can refer to the combination of a decline of physical and physiological aspects of a human body. The reduced reserve capacity of organ systems, muscle, and bone creates a state where the body is not capable of coping with stressors such as illness or falls. Frailty can lead to increased risk of adverse side effects, complications, and mortality. Older age by itself is not what defines frailty; it is, however, a syndrome found in older adults. Many adults over 65 are not living with frailty. Frailty is not one specific disease; rather, it is a combination of many factors. Frailty does not have specific universal criteria on which it is diagnosed; there are a combination of signs and symptoms that can lead to a diagnosis of frailty. Evaluations can be done on physical status, weight fluctuations, or subjective symptoms. Frailty most commonly refers to physical status and is not a syndrome of mental capacity such as dementia, which is a decline in cognitive function. Frailty can, however, be a risk factor for the development of dementia. Although no universal diagnostic criteria exist, some clinical screening tools are commonly used to identify frailty. These include the Fried Frailty Phenotype and a deficit accumulation frailty index. The Fried Frailty Phenotype assesses five domains commonly affected by frailty: exhaustion, weakness, slowness, physical inactivity, and weight loss. The presence of 1-2 findings is classified as "pre-frailty", 3 or more as frailty, and the presence of all 5 indicates "end-stage frailty" and is associated with poor prognosis. The deficit accumulation characterization of frailty tallies deficits present in a variety of clinical areas (including nutritional deficiency, laboratory abnormalities, disability index, cognitive and physical impairment) to create a frailty index. A higher number of deficits is associated with a worse prognosis. Geriatric syndromes related to frailty Major Contributors to Frailty Decreases in skeletal muscle mass (sarcopenia) and bone density are two major contributors to developing frailty in older adults. In early to middle age, bone density and muscle mass are closely related.
As adults age, skeletal muscle mass or bone density may begin to decline. This decline can lead to frailty and both have been identified as contributors to disability. The development of sarcopenia or osteoporosis alone does not establish frailty, as there are many factors that are taken into account. Studies suggest that frailty is a result of multiple body systems experiencing dysregulation, and the more body systems that are affected, the higher the risk is for developing frailty. Sarcopenia Sarcopenia is the degenerative loss of skeletal muscle mass, quality, and strength associated with aging. The rate of muscle loss is dependent on exercise level, co-morbidities, nutrition and other factors. Sarcopenia can lead to reduction in functional status and cause significant disability from increased weakness. The muscle loss is related to changes in muscle synthesis signalling pathways, although it is incompletely understood. The cellular mechanisms are distinct from other types of muscle atrophy such as cachexia, in which muscle is degraded through cytokine-mediated degradation, although both conditions may co-exist. Osteoporosis Osteoporosis is a disease of bone mineral density loss (usually age related) that leads to an increased risk of bone fractures, especially with falls. Frailty is associated with an increased risk of osteoporosis-related bone fractures. Muscle weakness Muscle weakness and associated muscle atrophy (muscle wasting, also known as sarcopenia) are more common in those with frailty. Muscle weakness was more prevalent in those with frailty in a population-based study of older adults. Aging, lower levels of DHEA, testosterone, IGF-1 and increased levels of cortisol are thought to contribute to muscle wasting in those with frailty. Heart Failure Frailty is also common in those with heart failure. Both frailty and heart failure share similar methods of progressive health decline and often lead to worsened health conditions when combined. Depression, Bipolar Disorder, & Anxiety Disorders People who had mental disorders were found to be at increased risk of frailty. Biological and physiological mechanisms The causes of frailty are multifactorial involving dysregulation across many physiological systems. Frailty may be related to a proinflammatory state. A common interleukin elevated in this state is IL-6. A pro-inflammatory cytokine, IL-6 was found to be commonly elevated in older adults with frailty. IL-6 is typically up-regulated by inflammatory mediators, such as C-reactive protein, released in the presence of chronic disease. Increased levels of inflammatory mediators are often associated with chronic disease; however, they may also be elevated even in the absence of chronic disease. Sarcopenia, anemia, anabolic hormone deficiencies, and excess exposure to catabolic hormones such as cortisol have been associated with an increased likelihood of frailty. Other mechanisms associated with frailty include insulin resistance, increased glucose levels, compromised immune function, micronutrient deficiencies, and oxidative stress. Mitochondrial dysfunction, including mitochondrial DNA mutations, cellular respiration dysfunction, and changes in mitochondrial homeostasis, is thought to contribute to reduced cellular energy, production of reactive oxygen species and inflammation. This mitochondrial dysfunction is thought to contribute to the signs of frailty. Researchers found that individual abnormal body functions may not be the best predictor of risk of frailty.
However, they did conclude that once the number of conditions reaches a certain threshold, the risk of frailty increases. This finding suggests that treatment of frailty syndrome should not be focused on a single condition, but a multitude in order to increase the likelihood of better treatment results. Theoretical understanding Declines in physiologic reserves and resilience contribute to frailty. The risk of frailty increases with age and with the incidence of diseases. The development of frailty is also thought to involve declines in energy production, energy utilization and repair systems in the body, resulting in declines in the function of many different physiological systems. This decline in multiple systems affects the normal complex adaptive behavior that is essential to health and eventually results in frailty. A comparison of peripheral blood mononuclear cells from frail older individuals to cells from healthy younger individuals showed evidence in the frail older individuals of increased oxidative stress, increased apurinic/pyrimidinic sites in DNA, increased accumulation of endogenous DNA damage and reduced ability to repair DNA double-strand breaks. Frailty assessment The syndrome of geriatric frailty is hypothesized to reflect impairments in the regulation of multiple physiologic systems, embodying a lack of resilience to physiologic challenges and thus elevated risk for a range of deleterious endpoints. Generally speaking, the empirical assessment of geriatric frailty in individuals seeks ultimately to capture this or related features, though distinct approaches to such assessment have been developed in the literature (see de Vries et al., 2011 for a comprehensive review). The two most widely used approaches, different in their nature and scope, are discussed below. Other approaches follow. Physical frailty phenotype A popular approach to the assessment of geriatric frailty encompasses the assessment of five dimensions that are hypothesized to reflect systems whose impaired regulation underlies the syndrome. These five dimensions are unintentional weight loss, exhaustion, muscle weakness, slowness while walking, and low levels of activity. These five dimensions form specific criteria indicating adverse functioning, which are implemented using a combination of self-reported and performance-based measures. Those who meet at least three of the criteria are defined as "frail", while those not matching any of the five criteria are defined as "robust". Frailty index/deficit accumulation Another notable approach to the assessment of geriatric frailty views frailty in terms of the number of health "deficits" that are manifest in the individual, leading to a continuous measure of frailty. This score is based on the presence of deficits in many areas related to frailty, including symptoms of cognitive or physical impairment, laboratory abnormalities, nutritional deficits, or disability. Four domains of frailty A model consisting of four domains of frailty was proposed in response to an article in the BMJ. This conceptualisation could be viewed as blending the phenotypic and index models. Researchers tested this model for signal in routinely collected hospital data, and then used this signal in the development of a frailty model, finding even predictive capability across three outcomes of care.
In the care home setting, one study indicated that not all four domains of frailty were routinely assessed in residents, giving evidence to suggest that frailty may still primarily be viewed only in terms of physical health. SHARE Frailty Index The SHARE-Frailty Index (SHARE-FI) assesses frailty based on five domains of the frailty phenotype: fatigue, loss of appetite, grip strength, functional difficulties, and physical activity. Clinical Frailty Scale The Clinical Frailty Scale (CFS) is a scale used to assess frailty which evolved from the Canadian Study of Health and Aging. It is a 9-point scale used to assess a person's frailty level, ranging from a score of 1, meaning the person is very fit and robust, to a score of 9, meaning the person is severely frail and terminally ill. Edmonton Frail Scale The Edmonton Frail Scale (EFS) is another method used to screen frailty. This scale is given scores of up to 17 points. It has been assessed to screen all domains of frailty, and is said to be easy to perform by clinicians. Specific tests used in this scaling system are walking tests and clock drawing. Electronic Frailty Index (eFI) The electronic Frailty Index (eFI) is a scale based on 36 possible deficits, where a higher score represents greater frailty or a greater likelihood of becoming frail. Each frailty-related deficit the person has is given a point; the more deficits a person has, the more likely they are to be frail or to experience frailty in the future. The total number of deficits is divided by 36. Then, a frailty category is assigned. A person with a score of 0.00–0.12 is in the "Fit" category. A person with a score of 0.13–0.24 is in the "Mild" category. A person with a score of 0.25–0.36 is in the "Moderate" category. Finally, a person with a score above 0.36 is considered to be in the "Severe" category. A minimal worked sketch of this calculation is shown below. Prevention As frailty arises as a result of reduced reserve capacity in a biological system and causes an individual to have heightened vulnerability to stress, avoiding known stressors (e.g., surgery or infection) and understanding mechanisms to reduce frailty can help older adults prevent worsening of their frail status. Some signs of frailty include: unwanted weight loss, muscle weakness, low energy, and low grip strength. Currently, preventative interventions focus on minimizing muscle loss and improvement of overall well-being in older adults or individuals with chronic illnesses. Identification of risk factors When considering prevention of frailty, it is important to understand the risk factors that contribute to frailty and identify them early on. Early identification of risk factors allows for preventative interventions, reducing risks of future complications. A 2005 observational study found associations between frailty and a number of risk factors such as low income, advanced age, chronic medical conditions, lack of education, and smoking. Exercise A significant target in the prevention of frailty is physical activity. As people age, physical activity markedly drops, with the steepest declines seen in adolescence and continuing on throughout life. Lower levels of physical activity are associated with, and are a key component of, frailty syndrome. Therefore, exercise regimens consisting of walking, strength training, and self-directed physical activity have been examined in a number of studies as an intervention to prevent frailty.
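The arithmetic of the electronic Frailty Index referred to above is simple: the count of deficits present is divided by 36 and the result is mapped to a category. The following is a minimal illustrative sketch of that calculation; the function name and category thresholds are this sketch's own reading of the description above, and it is not a validated clinical tool.

```python
def efi_category(deficit_count, total_possible=36):
    """Illustrative eFI calculation: deficits present / 36, mapped to a category.

    The thresholds follow the categories described in the text above;
    this is a hypothetical helper, not a clinical instrument.
    """
    score = deficit_count / total_possible
    if score <= 0.12:
        category = "Fit"
    elif score <= 0.24:
        category = "Mild"
    elif score <= 0.36:
        category = "Moderate"
    else:
        category = "Severe"
    return score, category

# Example: 9 of 36 deficits gives an eFI of 0.25, in the "Moderate" category.
print(efi_category(9))
```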
A randomized controlled trial published in 2017 found significantly lower rates of frailty in older adults who were assigned an exercise regimen versus those in the control group. In this study, 15.3% of the control group became frail in the time frame of the study, in comparison to 4.9% of the exercise group. The exercise group also received a nutritional assessment, which is another target in frailty prevention. Nutrition Nutrition has also been a major target in the prevention of frailty. A healthy dietary pattern consisting of high consumption of healthy fats, fruits, vegetables, low-fat dairy products, and whole grains can contribute to maintaining a healthy weight and postponing frailty. A 2019 review paper examined a variety of studies and found evidence of nutritional intervention as an effective way of preventing frailty. Specifically, multiple studies showed adherence to the Mediterranean diet is associated with a decreased risk of incident frailty in the US. Non-surgical management Frailty management largely depends on an individual's classification (i.e. non-frail, pre-frail, and frail) and treatment needs. Currently, there is a lack of strong evidence-based treatment and management plans for frailty. Physicians must work closely with patients to develop a realistic management plan to ensure patient compliance, leading to better health outcomes. In clinical practice, guidelines developed by the International Conference on Frailty and Sarcopenia Research (ICFSR) can be used to identify and manage frailty based on classification. There are currently no pharmacological interventions available for frailty. Exercise Exercise is one of the major targets to prevent and manage frailty in older adults to improve and maintain mobility. Partaking in exercise appears to have potential for preventing frailty. In 2018, a systematic review concluded that group exercise had the benefit of delaying frailty in older adults aged 65 and older. Individualized physical therapy programs developed by physicians can help improve frail status. For example, progressive resistance strength training for older adults can be used in clinical practice or at home as a way to regain mobility. A systematic review conducted in 2022 across multiple countries using data from twelve randomized clinical trials found evidence that mobility training can increase mobility level and functioning in older adults living in community dwellings, such as a nursing home. However, the review also concluded little to no difference in the risk of falls. Occupational therapy Activities of daily living (ADLs) include activities that are necessary to sustain life. Examples are brushing teeth, getting out of bed, dressing oneself, bathing, etc. Occupational therapy provided modest improvements in elderly adults' ability to perform ADLs. Nutritional supplementation Frailty can involve changes such as weight loss. Interventions should focus on any difficulties with supplementation and diet. For those who may be undernourished and not acquiring adequate calories, oral nutritional supplements in between meals may decrease nutritional deficits. Vitamin D, omega-3 fatty acid, sex hormone (such as testosterone) or growth hormone supplementation have not shown benefits in physical functioning, activities of daily living or frailty. Palliative care Palliative care may be helpful for individuals who are experiencing an advanced state of frailty with possible other co-morbidities.
Improving quality of life by reducing pain and other harmful symptoms is the goal with palliative care. One study showed cost reductions from focusing on palliative care rather than on other treatments that may be unnecessary and unhelpful. Surgical outcomes Frail elderly people are at significant risk of post-surgical complications and the need for extended care. Frailty more than doubles the risk of morbidity and mortality from surgery and cardiovascular conditions. Assessment of older patients before elective surgeries can accurately predict the patients' recovery trajectories. One frailty scale consists of five items: unintentional weight loss of more than 4.5 kg in the past year; self-reported exhaustion; grip strength below the 20th population percentile; slowed walking speed, defined as the lowest population quartile on a 4-minute walking test; and low physical activity, such that the person would only rarely undertake a short walk. A healthy person scores 0; a very frail person scores 5. Compared to non-frail elderly people, people with intermediate frailty scores (2 or 3) are twice as likely to have post-surgical complications, spend 50% more time in the hospital, and are three times as likely to be discharged to a skilled nursing facility instead of to their own homes. Frail elderly patients (score of 4 or 5) have even worse outcomes, with the risk of being discharged to a nursing home rising to twenty times the rate for non-frail elderly people. Another tool that has been used to predict frailty outcome post-surgery is the Modified Frailty Index, or mFI-5. This scale consists of 5 key co-morbidities: congestive heart failure within 1 month of surgery; diabetes mellitus; chronic obstructive pulmonary disease or pneumonia in the past; the need for additional assistance to perform everyday activities of living; and high blood pressure that is controlled with medication. Each condition that is absent scores 0 and each condition that is present scores 1, so the total ranges from 0 to 5 (a minimal sketch of this scoring appears below). In an initial study using the mFI-5 scale, individuals with a sum mFI-5 score of 2 or greater were predicted to experience post-surgery complications due to frailty, which was supported by the results of the study. Frailty scales can be used to predict the risk of complications in patients before and after surgery. There is an association between frailty and delayed transplant function after a kidney transplant. Other studies note that frailty scales alone may be inaccurate in predicting outcomes for people undergoing surgical procedures, and other factors such as co-morbid medical conditions need to be considered. Epidemiology and public health Frailty is a common geriatric syndrome. Due to the absence of international diagnostic criteria, the prevalence estimates may not be accurate. Estimates of frailty prevalence in older populations vary according to a number of factors, including the setting in which the prevalence is being estimated (e.g., nursing homes show higher prevalence than community settings) and the definition used for frailty. Using the widely used frailty phenotype framework, prevalence estimates of 7–16% have been reported in non-institutionalized, community-dwelling older adults. In a systematic review exploring the prevalence of frailty based on geographical location, it was found that Africa and North and South America had the highest prevalence at 22% and 17% respectively, while Europe had the lowest prevalence at 8%.
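As a worked illustration of the mFI-5 tally described above under Surgical outcomes (one point per co-morbidity present, with a sum of 2 or more associated with post-surgical complications in the study cited), the following is a minimal sketch. The item labels and function name are informal identifiers chosen for this sketch, not standard codes or a published implementation.

```python
# Informal labels for the five mFI-5 co-morbidities described in the text above.
MFI5_ITEMS = (
    "congestive heart failure within 1 month of surgery",
    "diabetes mellitus",
    "history of COPD or pneumonia",
    "needs assistance with everyday activities",
    "hypertension controlled with medication",
)

def mfi5_score(conditions_present):
    """Sum one point per mFI-5 item present; flag a total of 2 or more."""
    score = sum(1 for item in MFI5_ITEMS if item in conditions_present)
    return score, score >= 2

# Example: diabetes plus medicated hypertension gives a score of 2 (flagged).
print(mfi5_score({"diabetes mellitus", "hypertension controlled with medication"}))
```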
The development of frailty occurs most often in individuals with low socio-economic status, obesity, female sex, a history of smoking, limited activity levels, and older age. Epidemiologic research has also indicated the presence of multiple chronic diseases (such as cardiovascular disease, diabetes, chronic kidney disease, anemia, or atherosclerosis), depression, and cognitive impairment to be risk factors for frailty. Autonomic dysfunction, hormonal abnormalities, and obesity have also been implicated in the development of frailty. Vitamin D deficiency in men may be associated with an increased risk of frailty. Environmental factors such as living space and neighborhood characteristics may also be related to frailty. Frailty is more common in those with diabetes plus peripheral arterial disease and in those with heart failure. Frailty is more common in those with mental health conditions including anxiety disorders, bipolar disorder and depression. The presence of frailty with these mental disorders was also associated with a poor prognosis and increased mortality. Research comparing case management trials to standard care for people living with frailty in high-income countries found that there was no difference in reducing cost or improving patient outcomes between the two approaches. Sex and ethnicity differences in frailty Meta-analyses have shown that the prevalence of frailty is higher in female older adults compared to male older adults. This sex difference was consistently found in pre-clinical research models as well. Studies have found that the incidence of frailty was higher in females with more medical comorbidities. In recent research where muscle biopsies were taken from fit and weak older adults of both sexes, it was shown that there were sex-specific alterations in muscle content in association with frailty-related physical weakness. In a population-based study, non-Hispanic Black Americans and Hispanic Americans had a higher incidence of frailty compared to non-Hispanic White Americans. Ongoing clinical trials Ongoing clinical trials on frailty syndrome in the US include: the impact of frailty on clinical outcomes of patients treated for abdominal aortic aneurysms; the use of "pre-habilitation", an exercise regimen used before transplant surgery, to prevent the frailty effects of kidney transplantation in recipients; defining the acute changes in frailty following sepsis in the abdomen; the efficacy of the anti-inflammatory drug fisetin in reducing frailty markers in elderly adults; and Physical Performance Testing and Frailty in Prediction of Early Postoperative Course After Cardiac Surgery (Cardiostep). See also Ageing Osteoporosis Sarcopenia References External links Frailty Geriatrics Gerontology
Panic attack
Panic attacks are sudden periods of intense fear and discomfort that may include palpitations, sweating, chest pain or chest discomfort, shortness of breath, trembling, dizziness, numbness, confusion, or a feeling of impending doom or of losing control. Typically, symptoms reach a peak within ten minutes of onset, and last for roughly 30 minutes, but the duration can vary from seconds to hours. Although they can be extremely frightening and distressing, panic attacks themselves are not physically dangerous. The essential features of panic attacks remain unchanged, although the official DSM-IV terminology for describing different types of panic attacks (i.e., situationally bound/cued, situationally predisposed, and unexpected/uncued) is generally simplified into two types: unexpected and expected panic attacks. Panic attacks function as a marker and prognostic factor for severity of diagnosis, course, and comorbidity across an array of disorders, including but not limited to anxiety disorders. Hence, panic attacks can be listed as a specifier that is applicable to all DSM-5 disorders. Panic attacks can occur due to several disorders including panic disorder, social anxiety disorder, post-traumatic stress disorder, substance use disorder, depression, and medical problems. They can either be triggered or occur unexpectedly. Nicotine, caffeine, and psychological stress increase the risk of having a panic attack. Before diagnosis, conditions that produce similar symptoms should be ruled out, such as hyperthyroidism, hyperparathyroidism, heart disease, lung disease, drug use, and dysautonomia. Treatment of panic attacks should be directed at the underlying cause. In those with frequent attacks, counseling or medications may be used. Breathing training and muscle relaxation techniques may also help. Those affected are at a higher risk of suicide. In Europe, about 3% of the population has a panic attack in a given year while in the United States they affect about 11%. They are more common in females than in males. They often begin during puberty or early adulthood. Children and older people are less commonly affected. Signs and symptoms People with panic attacks often report a fear of dying or heart attack, flashing vision or other visual disturbances, faintness or nausea, numbness throughout the body, shortness of breath and hyperventilation, or loss of body control. Some people also experience tunnel vision, mostly due to blood flow leaving the head to more critical parts of the body in defense. These feelings may provoke a strong urge to escape or flee the place where the attack began (a consequence of the "fight-or-flight response", in which the hormone causing this response is released in significant amounts). This response floods the body with hormones, particularly epinephrine (adrenaline), which aid it in defending against harm. A panic attack can result when up-regulation by the sympathetic nervous system (SNS) is not moderated by the parasympathetic nervous system (PNS). 
The most common symptoms include trembling, dyspnea (shortness of breath), heart palpitations, chest pain (or chest tightness), hot flashes, cold flashes, burning sensations (particularly in the facial or neck area), sweating, nausea, dizziness (or slight vertigo), lightheadedness, heavy-headedness, hyperventilation, paresthesias (tingling sensations), intense muscle cramps (particularly in the hands, which may 'lock up' and become difficult to move), sensations of choking or smothering, difficulty moving, depersonalization and/or derealization. These physical symptoms are interpreted with alarm in people prone to panic attacks. This results in increased anxiety and forms a positive feedback loop. Shortness of breath and chest pain are the predominant symptoms. Many people experiencing a panic attack incorrectly attribute them to a heart attack and thus seek treatment in an emergency room. Because chest pain and shortness of breath are hallmark symptoms of cardiovascular illnesses, including unstable angina and myocardial infarction (heart attack), a diagnosis of exclusion (ruling out other conditions) must be performed before diagnosing a panic attack. It is especially important to do this for people whose mental health and heart health statuses are unknown. This can be done using an electrocardiogram and mental health assessments. Panic attacks are distinguished from other forms of anxiety by their intensity and their sudden, episodic nature. They are often experienced in conjunction with anxiety disorders and other psychological conditions, although panic attacks are not generally indicative of a mental disorder. Causes There are long-term, biological, environmental, and social causes of panic attacks. In 1993, Fava et al. proposed a staging method of understanding the origins of disorders. The first stage in developing a disorder involves predisposing factors, such as genetics, personality, and a lack of well-being. Panic disorder often occurs in early adulthood, although it may appear at any age. It occurs more frequently in women and more often in people with above-average intelligence. Various twin studies where one identical twin has an anxiety disorder have reported a high incidence of the other twin also having an anxiety disorder diagnosis. Biological causes may include obsessive–compulsive disorder, postural orthostatic tachycardia syndrome, post-traumatic stress disorder, hypoglycemia, hyperthyroidism, Wilson's disease, mitral valve prolapse, pheochromocytoma, and inner ear disturbances (labyrinthitis). Dysregulation of the norepinephrine system in the locus coeruleus, an area of the brain stem, has been linked to panic attacks. Panic attacks may also occur due to short-term stressors. Significant personal loss, including the loss of an emotional attachment to a romantic partner, life transitions, and significant life changes may all trigger a panic attack. An anxious temperament, an excessive need for reassurance, hypochondriacal fears, an overcautious view of the world, and cumulative stress have been correlated with panic attacks. In adolescents, social transitions may also be a cause. People will often experience panic attacks as a direct result of exposure to an object or situation of which they have a phobia. Panic attacks may also become situationally-bound when certain situations are associated with panic due to previously experiencing an attack in that particular situation.
People may also have a cognitive or behavioral predisposition to having panic attacks in certain situations. Some maintaining causes include avoidance of panic-provoking situations or environments, anxious/negative self-talk ("what-if" thinking), mistaken beliefs ("these symptoms are harmful and/or dangerous"), and withheld feelings. Hyperventilation syndrome may occur when a person breathes from the chest, which can lead to over-breathing (exhaling excessive carbon dioxide related to the amount of oxygen in one's bloodstream). Hyperventilation syndrome can cause respiratory alkalosis and hypocapnia. This syndrome often involves prominent mouth breathing as well. This causes a cluster of symptoms, including rapid heartbeat, dizziness, and lightheadedness, which can trigger panic attacks. Panic attacks may also be caused by substances. Discontinuation or marked reduction in the dose of a drug (drug withdrawal), for example, an antidepressant (antidepressant discontinuation syndrome), can cause a panic attack. According to the Harvard Mental Health Letter, "the most commonly reported side effects of smoking marijuana are anxiety and panic attacks. Studies report that about 20% to 30% of recreational users experience such problems after smoking marijuana." Cigarette smoking is another substance that has been linked to panic attacks. A common denominator of current psychiatric approaches to panic disorder is that no real danger exists, and the person's anxiety is inappropriate. Panic disorder People who have repeated, persistent attacks or feel severe anxiety about having another attack are said to have panic disorder. Panic disorder is strikingly different from other types of anxiety disorders in that panic attacks are often sudden and unprovoked. However, panic attacks experienced by those with panic disorder may also be linked to or heightened by certain places or situations, making daily life difficult. Agoraphobia Agoraphobia is an anxiety disorder that primarily consists of the fear of experiencing a difficult or embarrassing situation from which the affected cannot escape. Panic attacks are commonly linked to agoraphobia. People with severe agoraphobia may become confined to their homes, experiencing difficulty traveling from this "safe place". The word "agoraphobia" comes from the Greek words agora (αγορά) and phobos (φόβος), the term "agora" referring to the city centre in an ancient Greek city. In Japan, people who exhibit extreme agoraphobia to the point of becoming unwilling or unable to leave their homes are referred to as Hikikomori. The phenomena in general is known by the same name, and it is estimated that roughly half a million Japanese youths are Hikikomori. People who have had a panic attack in certain situations may develop phobias of these situations and begin to avoid them. Eventually, the pattern of avoidance and level of anxiety about another attack may reach the point where individuals with panic disorder are unable to drive or even step out of the house. At this stage, the person is said to have panic disorder with agoraphobia. Experimentally induced Panic attack symptoms can be experimentally induced in the laboratory by various means. Among them, for research purposes, by administering a bolus injection of the neuropeptide cholecystokinin-tetrapeptide (CCK-4). Various animal models of panic attacks have been experimentally studied. 
Neurotransmitter imbalances Many neurotransmitters are affected when the body is under the increased stress and anxiety that accompany a panic attack. Some include serotonin, GABA (gamma-aminobutyric acid), dopamine, norepinephrine, and glutamate. More research into how these neurotransmitters interact with one another during a panic attack is needed to make any solid conclusions, however. An increase of serotonin in certain pathways of the brain seems to be correlated with reduced anxiety. More evidence that suggests serotonin plays a role in anxiety is that people who take SSRIs tend to feel a reduction of anxiety when their brain has more serotonin available to use. The main inhibitory neurotransmitter in the central nervous system (CNS) is GABA. Most of the pathways that use GABA tend to reduce anxiety immediately. Dopamine's role in anxiety is not well understood. Some antipsychotic medications that affect dopamine production have been proven to treat anxiety. However, this may be attributed to dopamine's tendency to increase feelings of self-efficacy and confidence, which indirectly reduces anxiety. Many physical symptoms of anxiety, such as rapid heart rate and hand tremors, are regulated by norepinephrine. Drugs that counteract norepinephrine's effect may be effective in reducing the physical symptoms of a panic attack. Nevertheless, some drugs that increase 'background' norepinephrine levels such as tricyclics and SNRIs are effective for the long-term treatment of panic attacks, possibly by blunting the norepinephrine spikes associated with panic attacks. Because glutamate is the primary excitatory neurotransmitter involved in the central nervous system (CNS), it can be found in almost every neural pathway in the body. Glutamate is likely involved in conditioning, which is the process by which certain fears are formed, and extinction, which is the elimination of those fears. Pathophysiology The symptoms of a panic attack may cause the person to feel that their body is failing. The symptoms can be understood as follows. First, there is frequently the sudden onset of fear with little provoking stimulus. This leads to a release of adrenaline (epinephrine) which brings about the fight-or-flight response when the body prepares for strenuous physical activity, resulting in increased sympathetic tone. This results in increased heart rate (tachycardia), rapid breathing (hyperventilation) which may be perceived as shortness of breath (dyspnea), and sweating. Because strenuous activity rarely ensues, the hyperventilation leads to a drop in carbon dioxide levels in the lungs and then in the blood. This leads to shifts in blood pH (respiratory alkalosis or hypocapnia), causing compensatory metabolic acidosis activating chemosensing mechanisms that translate this pH shift into autonomic and respiratory responses. Moreover, this hypocapnia and release of adrenaline during a panic attack cause vasoconstriction resulting in slightly less blood flow to the head which causes dizziness and lightheadedness. A panic attack can cause blood sugar to be drawn away from the brain and toward the major muscles. Neuroimaging suggests heightened activity in the amygdala, thalamus, hypothalamus, and brainstem regions including the periaqueductal gray, parabrachial nucleus, and Locus coeruleus. In particular, the amygdala has been suggested to have a critical role. 
The combination of increased activity in the amygdala (fear center) and brainstem along with decreased blood flow and blood sugar in the brain can lead to decreased activity in the prefrontal cortex (PFC) region of the brain. There is evidence that having an anxiety disorder increases the risk of cardiovascular disease (CVD). Those affected also have a reduction in heart rate variability. Cardiac mechanism Panic attacks can cause chest pain by directly affecting the circulation in the coronary vasculature. A panic attack induces significant sympathetic activation, which can cause vasoconstriction of small coronary vessels and microvascular angina. Autonomic nervous system activation and hyperventilation during panic attacks may induce coronary artery spasm (vasospasm). This process may result in ischemic damage to the myocardium and cardiac chest pain, despite a normal angiogram. In individuals with coronary artery disease, panic attacks and psychological stress may exacerbate ischemic pain by increasing myocardial oxygen demand through increased heart rate, blood pressure, coronary vasomotor tone, or sympathetic hyperactivity regulated by the autonomic nervous system. Cardiovascular disease People who have been diagnosed with panic disorder have approximately double the risk of coronary heart disease. Certain stress responses to depression also have been shown to increase the risk and those diagnosed with both depression and panic disorder are nearly three times more at risk. Diagnosis According to the DSM-5 a panic attack is part of the diagnostic class of anxiety disorders. It is not considered a specific disorder on its own, with the symptoms of a panic attack regarded as characteristics of another disorder during which the panic attack occurs. DSM-5 criteria for a panic attack is defined as "an abrupt surge of intense fear or intense discomfort that reaches a peak within minutes and during which time four or more of the following symptoms occur": Palpitations, and/or accelerated heart rate Sweating Trembling or shaking Sensations of shortness of breath or being smothered Feeling of choking Chest pain or discomfort Nausea or abdominal distress Feeling dizzy, unsteady, lightheaded, or faint Derealization (feelings of unreality) or depersonalization (being detached from oneself) Fear of losing control or going insane Sense of impending doom Paresthesias (numbness or tingling sensations) Chills or heat sensations In DSM-5, culture-specific symptoms (e.g., tinnitus, neck soreness, headache, and uncontrollable screaming or crying) may be seen. Such symptoms should not count as one of the four required symptoms. Screening tools such as the Panic Disorder Severity Scale can be used to detect possible cases of disorder and suggest the need for a formal diagnostic assessment. Treatment Panic disorder is usually effectively treated with a variety of interventions, including psychological therapies and medication. Cognitive-behavioral therapy has the most complete and longest duration of effect, followed by specific selective serotonin reuptake inhibitors. A 2009 review found positive results from therapy and medication and a much better result when the two were combined. Lifestyle changes Caffeine may cause or exacerbate panic anxiety. Anxiety can temporarily increase during withdrawal from caffeine and various other drugs. Increased and regimented aerobic exercise such as running has been shown to have a positive effect on combating panic anxiety. 
There is evidence that suggests that this effect is correlated to the release of exercise-induced endorphins and the subsequent reduction of the stress hormone cortisol. There remains a chance of panic symptoms becoming triggered or being made worse due to increased respiration rate that occurs during aerobic exercise. This increased respiration rate can lead to hyperventilation and hyperventilation syndrome, which mimics symptoms of a heart attack, thus inducing a panic attack. The benefits of incorporating an exercise regimen have shown the best results when paced accordingly. Meditation may also be helpful in the treatment of panic disorders. Muscle relaxation techniques are useful to some individuals. These can be learned using recordings, videos, or books. While muscle relaxation has proved to be less effective than cognitive-behavioral therapies in controlled trials, many people still find at least temporary relief from muscle relaxation. Breathing exercises In the great majority of cases, hyperventilation is involved, exacerbating the effects of the panic attack. Breathing retraining exercise helps to rebalance the oxygen and CO2 levels in the blood. David D. Burns recommends breathing exercises for those with anxiety. One such breathing exercise is a 5-2-5 count. Using the stomach (or diaphragm)—and not the chest—inhale (feel the stomach come out, as opposed to the chest expanding) for 5 seconds. As the maximal point at inhalation is reached, hold the breath for 2 seconds. Then slowly exhale, over 5 seconds. Repeat this cycle twice and then breathe 'normally' for 5 cycles (1 cycle = 1 inhale + 1 exhale). The point is to focus on breathing and relax the heart rate. Regular diaphragmatic breathing may be achieved by extending the out-breath by counting or humming. Although breathing into a paper bag was a common recommendation for short-term treatment of symptoms of an acute panic attack, it has been criticized as inferior to measured breathing, potentially worsening the panic attack and possibly reducing needed blood oxygen. While the paper bag technique increases needed carbon dioxide and so reduces symptoms, it may excessively lower oxygen levels in the bloodstream. Capnometry, which provides exhaled CO2 levels, may help guide breathing. Therapy According to the American Psychological Association, "most specialists agree that a combination of cognitive and behavioral therapies are the best treatment for panic disorder. Medication might also be appropriate in some cases." The first part of therapy is largely informational; many people are greatly helped by simply understanding exactly what panic disorder is and how many others experience it. Many people with panic disorder are worried that their panic attacks mean they are "going crazy" or that the panic might induce a heart attack. Cognitive restructuring helps people to replace those thoughts with more realistic, positive ways of viewing the attacks. Avoidant behavior is one of the key aspects that prevent people with frequent panic attacks from functioning healthily. Exposure therapy, which includes repeated and prolonged confrontation with feared situations and body sensations, helps weaken anxiety responses to panic-inducing external and internal stimuli and reinforce realistic ways of viewing panic symptoms. In deeper-level psychoanalytic approaches, in particular object relations theory, panic attacks are frequently associated with splitting (psychology), paranoid-schizoid and depressive positions, and paranoid anxiety. 
They are often found to be comorbid with borderline personality disorder and child sexual abuse. Paranoid anxiety may reach the level of a persecutory anxiety state. There was a meta-analysis of the comorbidity of panic disorders and agoraphobia. It used exposure therapy to treat patients over a period. Hundreds of patients were used in these studies and they all met the DSM-IV criteria for both of these disorders. A result was that thirty-two percent of patients had a panic episode after treatment. They concluded that the use of exposure therapy has lasting efficacy for a client who is living with a panic disorder and agoraphobia. The efficacy of group therapy treatment over conventional individual therapy for people with panic disorder with or without agoraphobia appears similar. Medication Medication options for panic attacks typically include benzodiazepines and antidepressants. Benzodiazepines are being prescribed less often because of their potential side effects, such as dependence, fatigue, slurred speech, and memory loss. Antidepressant treatments for panic attacks include selective serotonin reuptake inhibitors (SSRIs), serotonin noradrenaline reuptake inhibitors (SNRIs), tricyclic antidepressants (TCAs), and MAO inhibitors (MAOIs). SSRIs in particular tend to be the first drug treatment used to treat panic attacks. Selective serotonin reuptake inhibitors (SSRIs) and tricyclic antidepressants appear similar for short-term efficacy. SSRIs carry a relatively low risk since they are not associated with much tolerance or dependence, and are difficult to overdose with. TCAs are similar to SSRIs in their many advantages but come with more common side effects such as weight gain and cognitive disturbances. They are also easier to overdose on. MAOIs are generally suggested for patients who have not responded to other forms of treatment. While the use of drugs in treating panic attacks can be very successful, it is generally recommended that people also be in some form of therapy, such as cognitive-behavioral therapy. Drug treatments are usually used throughout the duration of panic attack symptoms and discontinued after the patient has been free of symptoms for at least six months. It is usually safest to withdraw from these drugs gradually while undergoing therapy. While drug treatment seems promising for children and adolescents, they are at an increased risk of suicide while taking these medications and their well-being should be monitored closely. Prognosis Roughly one-third are treatment-resistant. These people continue to have panic attacks and various other panic disorder symptoms after receiving treatment. Many people being treated for panic attacks begin to experience limited symptom attacks. These panic attacks are less comprehensive, with fewer than four bodily symptoms being experienced. It is not unusual to experience only one or two symptoms at a time, such as vibrations in their legs, shortness of breath, or an intense wave of heat traveling up their bodies, which is not similar to hot flashes due to estrogen shortage. Some symptoms, such as vibrations in the legs, are sufficiently different from any normal sensation that they indicate a panic disorder. Other symptoms on the list can occur in people who may or may not have panic disorder. Panic disorder does not require four or more symptoms to all be present at the same time. Causeless panic and racing heartbeat are sufficient to indicate a panic attack. 
Epidemiology In Europe, about 3% of the population has a panic attack in a given year, while in the United States panic attacks affect about 11%. They are more common in females than in males. They often begin during puberty or early adulthood; children and older people are less commonly affected. A meta-analysis of twin and family studies, compiled from the MEDLINE database, examined the link between genes and panic disorder, as well as possible links to phobias, obsessive-compulsive disorder (OCD), and generalized anxiety disorder. It concluded that these disorders have a heritable genetic component, with heritability estimated at 30–40% for the non-phobic disorders and 50–60% for the phobias. See also Hysteria Nervous breakdown Panic References External links
Medical encyclopedia
A medical encyclopaedia is a comprehensive written compendium that holds information about diseases, medical conditions, tests, symptoms, injuries, and surgeries. It may contain an extensive gallery of medicine-related photographs and illustrations. A medical encyclopaedia provides information to readers about health questions. It may also contain some information about the history of diseases, the development of medical technology uses to detect diseases in its early phase. A licensed physician should be consulted for diagnosis and treatment of any and all medical conditions. Characteristics Four major elements define a medical encyclopaedia: its subject matter, its scope, its method of organization, and its method of production: Encyclopaedias can be general, containing articles on topics in every field. A medical encyclopedia provides valuable health information, tools for managing your health, and support to those who seek information. Works of encyclopedic scope aim to convey the important accumulated knowledge for their subject domain, such as an encyclopaedia of medicine. The articles on subjects in a medical encyclopedia are usually accessed alphabetically by article name or for health topics. As modern multimedia and the information age have evolved, they have had an ever-increasing effect on the collection, verification, summation, and presentation of information of all kinds. Medical encyclopedias such as Medline Plus, WebMD, and the Merck Manual are examples of new forms of the medical encyclopedias as information retrieval becomes simpler. Some online encyclopedias are medical wikis, which use wiki software to write the information collaboratively. Listing of Medical Encyclopedias A.D.A.M. Medical Encyclopedia (MedlinePlus) A.D.A.M (Animated Dissection of Anatomy for Medicine) contains articles discussing diseases, tests, symptoms, injuries and surgeries. Content is reviewed by physicians; the goal is to present evidence-based health information. It also contains a library of medical photographs and illustrations. MedlinePlus is a free Web site that provides consumer health information for patients, families, and health care providers. MedlinePlus brings together information from the United States National Library of Medicine, the National Institutes of Health (NIH), other U.S. government agencies, and health-related organizations. The U.S. National Library of Medicine produces and maintains MedlinePlus. WebMD WebMD is an American provider of health information services. It is primarily known for its public Internet site, which has information regarding health and health care, including a symptom checklist, pharmacy information, blogs of physicians with specific topics and a place to store personal medical information. The site was reported to have received over 17.1 million average monthly unique visitors in Q1 2007 and is the leading health portal in the United States. The site receives information from accredited individuals and is reviewed by a medical review board consisting of four physicians to ensure accuracy. Medscape is a professional portal for physicians with 30 medical specialty areas and over 30 physician discussion boards. Recently WebMD has been acquired by the News Corporation. MedicineNet MedicineNet, Inc. is owned and Operated by WebMD and part of the WebMD Network emphasizing non-technical, medical peer-reviewed information for consumers. Founded in 1996, WebMD acquired MedicineNet in 2004. 
MedicineNet, Inc.'s main office is in San Clemente, Calif., and the corporate office is in New York City. See also Pharmacopoeia, a list of medications and their properties Materia medica, an encyclopedia of medications List of medical wikis List of online encyclopedias References External links Medical Encyclopedia WebMD Medical Encyclopedia MayoClinic Medical Encyclopedia University of Maryland Medical Center Encyclopedia of the Human Body. 3D Human Anatomy Model HealthCareMagic
Heat intolerance
Heat intolerance is a symptom characterized by feeling overheated in warm environments or when the surrounding environment's temperature rises. Typically, the person feels uncomfortably hot and sweats excessively. Compared to heat illnesses like heatstroke, heat intolerance is usually a symptom of endocrine disorders, drugs, or other medical conditions, rather than the result of too much exercise or hot, humid weather.
Symptoms
Feeling subjectively hot
Sweating, which may be excessive
In patients with multiple sclerosis (MS), heat intolerance may cause a pseudoexacerbation, which is a temporary worsening of MS-related symptoms. A temporary worsening of symptoms can also happen in patients with postural orthostatic tachycardia syndrome (POTS) and dysautonomia.
Diagnosis
Diagnosis is largely made from the patient history, followed by blood tests and other medical tests to determine the underlying cause. In women, hot flashes must be excluded.
Causes
Excess thyroid hormone, which is called thyrotoxicosis (such as in cases of hyperthyroidism), is the most common cause. Other causes include:
Amphetamines, along with other types of stimulant medications such as appetite suppressants
Anticholinergics and other drugs that can impair sweating
Caffeine
Malignant hyperthermia susceptibility
Menopause
Multiple sclerosis
Fibromyalgia
Diabetes
Hypothalamic tumors
Methadone treatment
Dysautonomia
Postural orthostatic tachycardia syndrome (POTS)
Sensory defensiveness/sensory processing disorder
Serotonin syndrome
Treatment
Treatment is directed at making the affected person feel more comfortable and, if possible, resolving the underlying cause of the heat intolerance. Symptoms can be reduced by staying in a cool environment. Drinking more fluids, especially if the person is sweating excessively, may help. Cooling vests can be used as a preventative tool to reduce a person's body temperature, or to provide comfort when symptoms present.
References
Integumentary system
The integumentary system is the set of organs forming the outermost layer of an animal's body. It comprises the skin and its appendages, which act as a physical barrier between the external environment and the internal environment that it serves to protect and maintain the body of the animal. Mainly it is the body's outer skin. The integumentary system includes skin, hair, scales, feathers, hooves, claws, and nails. It has a variety of additional functions: it may serve to maintain water balance, protect the deeper tissues, excrete wastes, and regulate body temperature, and is the attachment site for sensory receptors which detect pain, sensation, pressure, and temperature. Structure Skin The skin is one of the largest organs of the body. In humans, it accounts for about 12 to 15 percent of total body weight and covers 1.5 to 2 m2 of surface area. The skin (integument) is a composite organ, made up of at least two major layers of tissue: the epidermis and the dermis. The epidermis is the outermost layer, providing the initial barrier to the external environment. It is separated from the dermis by the basement membrane (basal lamina and reticular lamina). The epidermis contains melanocytes and gives color to the skin. The deepest layer of the epidermis also contains nerve endings. Beneath this, the dermis comprises two sections, the papillary and reticular layers, and contains connective tissues, vessels, glands, follicles, hair roots, sensory nerve endings, and muscular tissue. Between the integument and the deep body musculature there is a transitional subcutaneous zone made up of very loose connective and adipose tissue, the hypodermis. Substantial collagen bundles anchor the dermis to the hypodermis in a way that permits most areas of the skin to move freely over the deeper tissue layers. Epidermis The epidermis is the strong, superficial layer that serves as the first line of protection against the outer environment. The human epidermis is composed of stratified squamous epithelial cells, which further break down into four to five layers: the stratum corneum, stratum granulosum, stratum spinosum and stratum basale. Where the skin is thicker, such as in the palms and soles, there is an extra layer of skin between the stratum corneum and the stratum granulosum, called the stratum lucidum. The epidermis is regenerated from the stem cells found in the basal layer that develop into the corneum. The epidermis itself is devoid of blood supply and draws its nutrition from its underlying dermis. Its main functions are protection, absorption of nutrients, and homeostasis. In structure, it consists of a keratinized stratified squamous epithelium; four types of cells: keratinocytes, melanocytes, Merkel cells, and Langerhans cells. The predominant cell keratinocyte, which produces keratin, a fibrous protein that aids in skin protection, is responsible for the formation of the epidermal water barrier by making and secreting lipids. The majority of the skin on the human body is keratinized, with the exception of the lining of mucous membranes, such as the inside of the mouth. Non-keratinized cells allow water to "stay" atop the structure. The protein keratin stiffens epidermal tissue to form fingernails. Nails grow from a thin area called the nail matrix at an average of 1 mm per week. The lunula is the crescent-shape area at the base of the nail, lighter in color as it mixes with matrix cells. Only primates have nails. 
In other vertebrates, the keratinizing system at the terminus of each digit produces claws or hooves. The epidermis of vertebrates is surrounded by two kinds of coverings, which are produced by the epidermis itself. In fish and aquatic amphibians, it is a thin mucus layer that is constantly being replaced. In terrestrial vertebrates, it is the stratum corneum (dead keratinized cells). The epidermis is, to some degree, glandular in all vertebrates, but more so in fish and amphibians. Multicellular epidermal glands penetrate the dermis, where they are surrounded by blood capillaries that provide nutrients and, in the case of endocrine glands, transport their products. Dermis The dermis is the underlying connective tissue layer that supports the epidermis. It is composed of dense irregular connective tissue and areolar connective tissue such as a collagen with elastin arranged in a diffusely bundled and woven pattern. The dermis has two layers: the papillary dermis and the reticular layer. The papillary layer is the superficial layer that forms finger-like projections into the epidermis (dermal papillae), and consists of highly vascularized, loose connective tissue. The reticular layer is the deep layer of the dermis and consists of the dense irregular connective tissue. These layers serve to give elasticity to the integument, allowing stretching and conferring flexibility, while also resisting distortions, wrinkling, and sagging. The dermal layer provides a site for the endings of blood vessels and nerves. Many chromatophores are also stored in this layer, as are the bases of integumental structures such as hair, feathers, and glands. Hypodermis The hypodermis, otherwise known as the subcutaneous layer, is a layer beneath the skin. It invaginates into the dermis and is attached to the latter, immediately above it, by collagen and elastin fibers. It is essentially composed of a type of cell known as adipocytes, which are specialized in accumulating and storing fats. These cells are grouped together in lobules separated by connective tissue. The hypodermis acts as an energy reserve. The fats contained in the adipocytes can be put back into circulation, via the venous route, during intense effort or when there is a lack of energy-providing substances, and are then transformed into energy. The hypodermis participates, passively at least, in thermoregulation since fat is a heat insulator. Functions The integumentary system has multiple roles in maintaining the body's equilibrium. All body systems work in an interconnected manner to maintain the internal conditions essential to the function of the body. The skin has an important job of protecting the body and acts as the body's first line of defense against infection, temperature change, and other challenges to homeostasis. Its main functions include: Protect the body's internal living tissues and organs Protect against invasion by infectious organisms Protect the body from dehydration Protect the body against abrupt changes in temperature, maintain homeostasis Help excrete waste materials through perspiration Act as a receptor for touch, pressure, pain, heat, and cold (see Somatosensory system) Protect the body against sunburns by secreting melanin Generate vitamin D through exposure to ultraviolet light Store water, fat, glucose, vitamin D Maintenance of the body form Formation of new cells from stratum germinativum to repair minor injuries Protect from UV rays. 
Regulate body temperature.
The integumentary system distinguishes, separates, and protects the organism from its surroundings. Small-bodied invertebrates of aquatic or continually moist habitats respire using the outer layer (integument). This gas exchange system, where gases simply diffuse into and out of the interstitial fluid, is called integumentary exchange.
Clinical significance
Possible diseases and injuries to the human integumentary system include:
Rash
Yeast infection
Athlete's foot
Infection
Sunburn
Skin cancer
Albinism
Acne
Herpes
Herpes labialis, commonly called cold sores
Impetigo
Rubella
Cancer
Psoriasis
Rabies
Rosacea
Atopic dermatitis
Eczema
References
External links
Tympany
Tympany or tympanites (sometimes tympanism or tympania), also known as meteorism (especially in humans), is a medical condition in which excess gas accumulates in the gastrointestinal tract and causes abdominal distension. The term is from the Greek τύμπανο (meaning "drum").
Possible causes
Bowel obstruction
Renal stones
Functional disorder
Overeating
Bacterial overgrowth
Inflammation of the bowel
Blunt kidney trauma
Peritonitis
Menstruation
See also
Ruminal tympany
Bloating
Kwashiorkor
References
External links
Acute (medicine)
In medicine, describing a disease as acute denotes that it is of recent onset; it occasionally denotes a short duration. The quantification of how much time constitutes "short" and "recent" varies by disease and by context, but the core denotation of "acute" is always qualitatively in contrast with "chronic", which denotes long-lasting disease (for example, in acute leukaemia and chronic leukaemia). In the context of the mass noun "acute disease", it refers to the acute phase (that is, a short course) of any disease entity. For example, in an article on ulcerative enteritis in poultry, the author says, "in acute disease there may be increased mortality without any obvious signs", referring to the acute form or phase of ulcerative enteritis. Meaning variations A mild stubbed toe is an acute injury. Similarly, many acute upper respiratory infections and acute gastroenteritis cases in adults are mild and usually resolve within a few days or weeks. The term "acute" is also included in the definition of several diseases, such as severe acute respiratory syndrome, acute leukaemia, acute myocardial infarction, and acute hepatitis. This is often to distinguish diseases from their chronic forms, such as chronic leukaemia, or to highlight the sudden onset of the disease, such as acute myocardial infarction. Related terminology Related terms include acute care: the early and specialist management of adult patients who have a wide range of medical conditions requiring urgent or emergency care, usually within 48 hours of admission or referral from other specialties. Acute hospitals are those intended for short-term medical and/or surgical treatment and care; acute medicine is the medical specialty concerned with such care, which primary care is often not positioned to provide. References
Human variability
Human variability, or human variation, is the range of possible values for any characteristic, physical or mental, of human beings. Frequently debated areas of variability include cognitive ability, personality, physical appearance (body shape, skin color, etc.) and immunology. Variability is partly heritable and partly acquired (nature vs. nurture debate). As the human species exhibits sexual dimorphism, many traits show significant variation not just between populations but also between the sexes. Sources of human variability Human variability is attributed to a combination of environmental and genetic sources including: For the genetic variables listed above, few of the traits characterizing human variability are controlled by simple Mendelian inheritance. Most are polygenic or are determined by a complex combination of genetics and environment. Many genetic differences (polymorphisms) have little effect on health or reproductive success but help to distinguish one population from another. It is helpful for researchers in the field of population genetics to study ancient migrations and relationships between population groups. Environmental factors Climate and disease Other important factors of environmental factors include climate and disease. Climate has effects on determining what kinds of human variation are more adaptable to survive without much restrictions and hardships. For example, people who live in a climate where there is a lot of exposure to sunlight have a darker color of skin tone. Evolution has caused production of folate (folic acid) from UV radiation, thus giving them darker skin tone with more melanin to make sure child development is smooth and successful. Conversely, people who live farther away from the equator have a lighter skin tone. This is due to a need for an increased exposure and absorbance of sunlight to make sure the body can produce enough vitamin D for survival. Blackfoot disease is a disease caused by environmental pollution and causes people to have black, charcoal-like skin in the lower limbs. This is caused by arsenic pollution in water and food source. This is an example of how disease can affect human variation. Another disease that can affect human variation is syphilis, a sexual transmitted disease. Syphilis does not affect human variation until the middle stage of the disease. It then starts to grow rashes all over the body, affecting people's human variation. Nutrition Phenotypic variation is a combination of one's genetics and their surrounding environment, with no interaction or mutual influence between the two. This means that a significant portion of human variability can be controlled by human behavior. Nutrition and diet play a substantial role in determining phenotype because they are arguably the most controllable forms of environmental factors that create epigenetic changes. This is because they can be changed or altered relatively easily as opposed to other environmental factors like location. If people are reluctant to changing their diets, consuming harmful foods can have chronic negative effects on variability. One such instance of this occurs when eating certain chemicals through one's diet or consuming carcinogens, which can have adverse effects on individual phenotype. For example, Bisphenol A (BPA) is a known endocrine disruptor that mimics the hormone estradiol and can be found in various plastic products. BPA seeps into food or drinks when the plastic containing it is heated up and begins to melt. 
When these contaminated substances are consumed, especially often and over long periods of time, one's risk of diabetes and cardiovascular disease increases. BPA also has the potential to alter "physiological weight control patterns." Examples such as this demonstrate that preserving a healthy phenotype largely rests on nutritional decision-making skills. The concept that nutrition and diet affect phenotype extends to what the mother eats during pregnancy, which can have drastic effects on the outcome of the phenotype of the child. A recent study by researchers at the MRC International Nutrition Group shows that "methylation machinery can be disrupted by nutrient deficiencies and that this can lead to disease" susceptibility in newborn babies. The reason for this is because methyl groups have the ability to silence certain genes. Increased deficiencies of various nutrients such as this have the potential to permanently change the epigenetics of the baby. Genetic factors Genetic variation in humans may mean any variance in phenotype which results from heritable allele expression, mutations, and epigenetic changes. While human phenotypes may seem diverse, individuals actually differ by only 1 in every 1,000 base pairs and is primarily the result of inherited genetic differences. Pure consideration of alleles is often referred to as Mendelian Genetics, or more properly Classical Genetics, and involves the assessment of whether a given trait is dominant or recessive and thus, at what rates it will be inherited.  The color of one's eyes was long believed to occur with a pattern of brown-eye dominance, with blue eyes being a recessive characteristic resulting from a past mutation. However, it is now understood that eye color is controlled by various genes, and thus, may not follow as distinct a pattern as previously believed. The trait is still the result of variance in genetic sequence between individuals as a result of inheritance from their parents. Common traits which may be linked to genetic patterns are earlobe attachment, hair color, and hair growth patterns. In terms of evolution, genetic mutations are the origins of differences in alleles between individuals. However, mutations may also occur within a person's life-time and be passed down from parent to offspring. In some cases, mutations may result in genetic diseases, such as Cystic Fibrosis, which is the result of a mutation to the CFTR gene that is recessively inherited from both parents. In other cases, mutations may be harmless or phenotypically unnoticeable. We are able to treat biological traits as manifestations of either a single loci or multiple loci, labeling said biological traits as either monogenic or polygenic, respectively. Concerning polygenic traits it may be essential to be mindful of inter-genetic interactions or epistasis. Although epistasis is a significant genetic source of biological variation, it is only additive interactions that are heritable as other epistatic interactions involve recondite inter-genetic relationships. Epistatic interactions in of themselves vary further with their dependency on the results of the mechanisms of recombination and crossing over. The ability of genes to be expressed may also be a source of variation between individuals and result in changes to phenotype. This may be the result of epigenetics, which are founded upon an organism's phenotypic plasticity, with such a plasticity even being heritable. 
Epigenetics may result from methylation of gene sequences leading to the blocking of expression or changes to histone protein structuring as a result of environmental or biological cues. Such alterations influence how genetic material is handled by the cell and to what extent certain DNA sections are expressed and compose the epigenome. The division between what can be considered as a genetic source of biological variation and not becomes immensely arbitrary as we approach aspects of biological variation such as epigenetics. Indeed, gene specific gene expression and inheritance may be reliant on environmental  influences. Cultural factors Archaeological findings such as those that indicate that the Middle Stone Age and the Acheulean – identified as a specific 'cultural phases' of humanity with a number of characteristics – lasted substantially longer in some places or 'ended' at times over 100,000 years apart, highlight a significant spatiotemporal cultural variability in and complexity of the sociocultural history and evolution of humanity. In some cases cultural factors may be intertwined with genetic and environmental factors. Measuring variation Scientific Measurement of human variation can fall under the purview of several scholarly disciplines, many of which lie at the intersection of biology and statistics. The methods of biostatistics, the application of statistical methods to the analysis of biological data, and bioinformatics, the application of information technologies to the analysis of biological data, are utilized by researchers in these fields to uncover significant patterns of variability. Some fields of scientific research include the following: Demography is a branch of statistics and sociology concerned with the statistical study of populations, especially humans. A demographic analysis can measure various metrics of a population, most commonly metrics of size and growth, diversity in culture, ethnicity, language, religious belief, political belief, etc. Biodemography is a subfield which specifically integrates biological understanding into demographics analysis. In the social sciences, social research is conducted and collected data is analyzed under statistical methods. The methodologies of this research can be divided into qualitative and quantitative designs. Some example subdisciplines include: Anthropology, the study of human societies. Comparative research in subfields of anthropology may yield results on human variation with respect to the subfield's topic of interest. Psychology, the study of behavior from a mental perspective. Does a lot of experiments and analysis grouped into quantitative or qualitative research methods. Sociology, the study of behavior from a social perspective. Sociological research can be conducted in either quantitative or qualitative formats, depending on the nature of data collected and the subfield of sociology under which the research falls. Analysis of this data is subject to quantitative or qualitative methods. Computational sociology is also a method of producing useful data for studies of social behavior. Anthropometry Anthropometry is the study of the measurements of different parts of the human body. Common measurements include height, weight, organ size (brain, stomach, penis, vagina), and other bodily metrics such as waist–hip ratio. Each measurement can vary significantly between populations; for instance, the average height of males of European descent is 178 cm ± 7 cm and of females of European descent is 165 cm ± 7 cm. 
Meanwhile, the average height of Nilotic males of the Dinka people is 181.3 cm. Applications of anthropometry include ergonomics, biometrics, and forensics. Knowing the distribution of body measurements enables designers to build better tools for workers. Anthropometry is also used when designing safety equipment such as seat belts. In biometrics, measurements of fingerprints and iris patterns can be used for secure identification purposes. Measuring genetic variation Human genomics and population genetics are the study of the human genome and variome, respectively. Studies in these areas may concern the patterns and trends in human DNA. The Human Genome Project and the Human Variome Project are examples of large-scale studies of the entire human population that collect data which can be analyzed to understand genomic and genetic variation in individuals, respectively. The Human Genome Project is the largest scientific project in the history of biology. At a cost of $3.8 billion in funding and over a period of 13 years from 1990 to 2003, the project sequenced the approximately 3 billion base pairs and catalogued the 20,000 to 25,000 genes in human DNA. The project made the data available to all scientific researchers and developed analytical tools for processing this information. A particular finding regarding human variability due to differences in DNA made possible by the Human Genome Project is that any two individuals share 99.9% of their nucleotide sequences. The Human Variome Project is a similar undertaking with the goal of identifying and categorizing the set of human genetic variation, specifically variations which are medically pertinent. This project will also provide a data repository for further research and analysis of disease. The Human Variome Project was launched in 2006 and is being run by an international community of researchers and representatives, including collaborators from the World Health Organization and the United Nations Educational, Scientific, and Cultural Organization. Genetic drift Genetic drift is one mechanism by which variability arises in populations. Unlike natural selection, genetic drift occurs when allele frequencies fluctuate randomly over time rather than changing as a result of selection. Over a long history, this can cause significant shifts in the underlying genetic distribution of a population. We can model genetic drift with the Wright-Fisher model. In a diploid population of N individuals there are 2N gene copies, and two alleles with frequencies p and q. If the previous generation had an allele with frequency p, then the probability that the next generation has k copies of that allele is
$$P(k) = \binom{2N}{k} p^k q^{2N-k}.$$
Over time, one allele will be fixed when the frequency of that allele reaches 1 and the frequency of the other allele reaches 0. The probability that any allele is fixed is proportional to the frequency of that allele: for two alleles with frequencies p and q, the probability that p will be fixed is p. The expected number of generations for an allele with initial frequency p to become fixed (given that it does fix) is
$$\bar{T}_{\text{fixed}} = \frac{-4N_e(1-p)\ln(1-p)}{p},$$
where Ne is the effective population size. Single-nucleotide polymorphism Single-nucleotide polymorphisms, or SNPs, are variations at a single nucleotide position. SNPs can occur in coding or non-coding regions of genes and on average occur once every 300 nucleotides. SNPs in coding regions can cause synonymous, missense, and nonsense mutations. SNPs have been shown to be correlated with drug responses and risk of diseases such as sickle-cell anemia, Alzheimer's disease, cystic fibrosis, and more.
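The genetic-drift dynamics described above are easy to explore numerically. The following is a minimal illustrative sketch (not part of the original article) that simulates neutral Wright-Fisher drift with NumPy and compares the simulated fixation probability and mean fixation time against the analytic expressions quoted above; the function name and parameter values are hypothetical choices for the example.

```python
import numpy as np

def wright_fisher_fixation(N, p0, replicates=2000, rng=None):
    """Simulate neutral Wright-Fisher drift in a diploid population of N
    individuals (2N gene copies). Returns the mean number of generations
    to fixation (over replicates in which the allele fixed rather than
    was lost) and the fraction of replicates that fixed."""
    rng = np.random.default_rng() if rng is None else rng
    two_n = 2 * N
    fixation_times = []
    for _ in range(replicates):
        p, generations = p0, 0
        while 0.0 < p < 1.0:
            # Each generation, the 2N gene copies are drawn binomially
            # from the current allele frequency.
            p = rng.binomial(two_n, p) / two_n
            generations += 1
        if p == 1.0:  # allele fixed, not lost
            fixation_times.append(generations)
    return np.mean(fixation_times), len(fixation_times) / replicates

if __name__ == "__main__":
    N, p0 = 100, 0.2
    mean_t, fix_fraction = wright_fisher_fixation(N, p0)
    # Diffusion-approximation prediction quoted in the text (with Ne = N).
    predicted_t = -4 * N * (1 - p0) * np.log(1 - p0) / p0
    print(f"fraction fixed ~ {fix_fraction:.2f} (theory: {p0})")
    print(f"mean fixation time ~ {mean_t:.0f} generations "
          f"(theory: {predicted_t:.0f})")
```

With N = 100 and an initial frequency of 0.2, roughly 20% of replicates should end in fixation, in a mean of about 357 generations, illustrating that drift alone, with no selection, is enough to fix or eliminate an allele.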
DNA fingerprinting DNA profiling, whereby a DNA fingerprint is constructed by extracting a DNA sample from body tissue or fluid. Then, it is segmented using restriction enzymes and each segment marked with probes then exposed on X-ray film. The segments form patterns of black bars;the DNA fingerprint. DNA Fingerprints are used in conjunction with other methods in order to individuals information in Federal programs such as CODIS (Combined DNA Index System for Missing Persons) in order to help identify individuals Mitochondrial DNA Mitochondrial DNA, which is only passed from mother to child. The first human population studies based on mitochondrial DNA were performed by restriction enzyme analyses (RFLPs) and revealed differences between the four ethnic groups (Caucasian, Amerindian, African, and Asian). Differences in mtDNA patterns have also been shown in communities with a different geographic origin within the same ethnic group Alloenzymic variation Alloenzymic variation, a source of variation that identifies protein variants of the same gene due to amino acid substitutions in proteins. After grinding tissue to release the cytoplasm, wicks are used to absorb the resulting extract and placed in a slit cut into a starch gel. A low current is run across the gel resulting in a positive and negative ends. Proteins are then separated by charge and size, with the smaller and more highly charged molecules moving more quickly across the gel. This techniques does underestimate true genetic variability as there may be an amino acid substitution but if the amino acid is not charged differently than the original no difference in migration will appear it is estimated that approximately 1/3 of the true genetic variation is not expressed by this technique. Structural variation Structural variation, which can include insertions, deletions, duplications, and mutations in DNA. Within the human population, about 13% of the human genome is defined as structurally variant. Phenotypic variation Phenotypic variation, which accounts for both genetic and epigenetic factors that affect what characteristics are shown. For applications such as organ donations and matching, phenotypic variation of blood type, tissue type, and organ size are considered. Civic Measurement of human variation may also be initiated by governmental parties. A government may conduct a census, the systematic recording of an entire population of a region. The data may be used for calculating metrics of demography such as sex, gender, age, education, employment, etc.; this information is utilized for civic, political, economic, industrial, and environmental assessment and planning. Commercial Commercial motivation for understanding variation in human populations arises from the competitive advantage of tailoring products and services for a specific target market. A business may undertake some form of market research in order to collect data on customer preference and behavior and implement changes which align with the results. Social significance and valuation Both individuals and entire societies and cultures place values on different aspects of human variability; however, values can change as societies and cultures change. Not all people agree on the values or relative rankings, and neither do all societies and cultures. Nonetheless, nearly all human differences have a social value dimension. Examples of variations which may be given different values in different societies include skin color and/or body structure. 
Race and sex have a strong value difference, while handedness has a much weaker value difference. The values given to different traits among human variability are often influenced by what phenotypes are more prevalent locally. Local valuation may affect social standing, reproductive opportunities, or even survival. Differences may vary or be distributed in various ways. Some, like height for a given sex, vary in close to a "normal" or Gaussian distribution. Other characteristics (e.g., skin color) vary continuously in a population, but the continuum may be socially divided into a small number of distinct categories. Then, there are some characteristics that vary bimodally (for example, handedness), with fewer people in intermediate categories. Classification and evaluation of traits When an inherited difference of body structure or function is severe enough to cause a significant hindrance in certain perceived abilities, it is termed a genetic disease, but even this categorization has fuzzy edges. There are many instances in which the degree of negative value of a human difference depends completely on the social or physical environment. For example, in a society with a large proportion of deaf people (as Martha's Vineyard in the 19th century), it was possible to deny that deafness is a disability. Another example of social renegotiation of the value assigned to a difference is reflected in the controversy over management of ambiguous genitalia, especially whether abnormal genital structure has enough negative consequences to warrant surgical correction. Furthermore, many genetic traits may be advantageous in certain circumstances and disadvantageous in others. Being a heterozygote or carrier of the sickle-cell disease gene confers some protection against malaria, apparently enough to maintain the gene in populations of malarial areas. In a homozygous dose it is a significant disability. Each trait has its own advantages and disadvantages, but sometimes a trait that is found desirable may not be favorable in terms of certain biological factors such as reproductive fitness, and traits that are not highly valued by the majority of people may be favorable in terms of biological factors. For example, women tend to have fewer pregnancies on average than before and therefore net worldwide fertility rates are dropping. Moreover, this leads to the fact that multiple births tend to be favorable in terms of number of children and therefore offspring count; when the average number of pregnancies and the average number of children was higher, multiple births made only a slight relative difference in number of children. However, with fewer pregnancies, multiple births can make the difference in number of children relatively large. A hypothetical scenario would be that couple 1 has ten children and couple 2 has eight children, but in both couples, the woman undergoes eight pregnancies. This is not a large difference in ratio of fertility. However, another hypothetical scenario can be that couple 1 has three children and couple 2 has one child but in both couples the woman undergoes one pregnancy (in this case couple 2 has triplets). When the proportion of offspring count in the latter hypothetical scenario is compared, the difference in proportion of offspring count becomes higher. A trait in women known to greatly increase the chance of multiple births is being a tall woman (presumably the chance is further increased when the woman is very tall among both women and men). 
Yet very tall women are not viewed as a desirable phenotype by the majority of people, and the phenotype of very tall women has not been highly favored in the past. Nevertheless, values placed on traits can change over time. Such an example is homosexuality. In Ancient Greece, what in present terms would be called homosexuality, primarily between a man and a young boy, was not uncommon and was not outlawed. However, homosexuality became more condemned. Attitudes towards homosexuality alleviated in modern times. Acknowledgement and study of human differences does have a wide range of uses, such as tailoring the size and shape of manufactured items. See Ergonomics. Controversies of sociocultural and personal implications Possession of above average amounts of some abilities is valued by most societies. Some of the traits that societies try to measure by perception are intellectual aptitude in the form of ability to learn, artistic prowess, strength, endurance, agility, and resilience. Each individual's distinctive differences, even the negatively valued or stigmatized ones, are usually considered an essential part of self-identity. Membership or status in a social group may depend on having specific values for certain attributes. It is not unusual for people to deliberately try to amplify or exaggerate differences, or to conceal or minimize them, for a variety of reasons. Examples of practices designed to minimize differences include tanning, hair straightening, skin bleaching, plastic surgery, orthodontia, and growth hormone treatment for extreme shortness. Conversely, male-female differences are enhanced and exaggerated in most societies. In some societies, such as the United States, circumcision is practiced on a majority of males, as well as sex reassignment on intersex infants, with substantial emphasis on cultural and religious norms. Circumcision is highly controversial because although it offers health benefits, such as less chance of urinary tract infections, STDs, and penile cancer, it is considered a drastic procedure that is not medically mandatory and argued as a decision that should be taken when the child is old enough to decide for himself. Similarly, sex reassignment surgery offers psychiatric health benefits to transgender people but is seen as unethical by some Christians, especially when performed on children. Much controversy surrounds the assigning or distinguishing of some variations, especially since differences between groups in a society or between societies is often debated as part of either a person's "essential" nature or a socially constructed attribution. For example, there has long been a debate among sex researchers on whether sexual orientation is due to evolution and biology (the "essentialist" position), or a result of mutually reinforcing social perceptions and behavioral choices (the "constructivist" perspective). The essentialist position emphasizes inclusive fitness as the reason homosexuality has not been eradicated by natural selection. Gay or lesbian individuals have not been greatly affected by evolutionary selection because they may help the fitness of their siblings and siblings' children, thus increasing their own fitness through inclusive fitness and maintaining evolution of homosexuality. Biological theories for same gender sexual orientation include genetic influences, neuroanatomical factors, and hormone differences but research so far has not provided any conclusive results. 
In contrast, the social constructivist position argues that sexuality is a result of culture and has originated from language or dialogue about sex. Mating choices are the product of cultural values, such as youth and attractiveness, and homosexuality varies greatly between cultures and societies. In this view, complexities, such as sexual orientation changing during the course of one's lifespan, are accounted for. Controversy also surrounds the boundaries of "wellness", "wholeness," or "normality." In some cultures, differences in physical appearance, mental ability, and even sex can exclude one from traditions, ceremonies, or other important events, such as religious service. For example, in India, menstruation is not only a taboo subject but also traditionally considered shameful. Depending on beliefs, a woman who is menstruating is not allowed to cook or enter spiritual areas because she is "impure" and "cursed". There has been large-scale renegotiation of the social significance of variations which reduce the ability of a person to do one or more functions in western culture. Laws have been passed to alleviate the reduction of social opportunity available to those with disabilities. The concept of "differently abled" has been pushed by those persuading society to see limited incapacities as a human difference of less negative value. Ideologies of superiority and inferiority The extreme exercise of social valuation of human difference is in the definition of "human." Differences between humans can lead to an individual's "nonhuman" status, in the sense of withholding identification, charity, and social participation. Views of these variations can change enormously between cultures over time. For example, nineteenth-century European and American ideas of race and eugenics culminated in the attempts of the Nazi-led German society of the 1930s to deny not just reproduction, but life itself to a variety of people with "differences" attributed in part to biological characteristics. Hitler and Nazi leaders wanted to create a "master race" consisting of only Aryans, or blue-eyed, blonde-haired, and tall individuals, thus discriminating and attempting to exterminate those who didn't fit into this ideal. Contemporary controversy continues over "what kind of human" is a fetus or child with a significant disability. On one end are people who would argue that Down syndrome is not a disability but a mere "difference," and on the other those who consider it such a calamity as to assume that such a child is better off "not born". For example, in India and China, being female is widely considered such a negatively valued human difference that female infanticide occurs such to severely affect the proportion of sexes. Common human variations See also Anthropometry Human genetic variation Human physical appearance Mendelian traits in humans Quantitative trait locus Human behaviour genetics Big Five personality traits References Bibliography Further reading Books Humans Comparisons
Cachexia
Cachexia is a complex syndrome associated with an underlying illness, causing ongoing muscle loss that is not entirely reversed with nutritional supplementation. A range of diseases can cause cachexia, most commonly cancer, congestive heart failure, chronic obstructive pulmonary disease, chronic kidney disease, and AIDS. Systemic inflammation from these conditions can cause detrimental changes to metabolism and body composition. In contrast to weight loss from inadequate caloric intake, cachexia causes mostly muscle loss instead of fat loss. Diagnosis of cachexia can be difficult due to the lack of well-established diagnostic criteria. Cachexia can improve with treatment of the underlying illness but other treatment approaches have limited benefit. Cachexia is associated with increased mortality and poor quality of life. The term is from Greek κακός kakos 'bad' and ἕξις hexis 'condition'. Causes Cachexia can be caused by diverse medical conditions, but is most often associated with end-stage cancer, known as cancer cachexia. About 50% of all cancer patients develop cachexia. Those with upper gastrointestinal and pancreatic cancers have the highest frequency of developing a cachexic symptom. Prevalence of cachexia rises in more advanced stages and is estimated to affect 80% of terminal cancer patients. Congestive heart failure, AIDS, chronic obstructive pulmonary disease, and chronic kidney disease are other conditions that often cause cachexia. Cachexia can also be the result of advanced stages of cystic fibrosis, multiple sclerosis, motor neuron disease, Parkinson's disease, dementia, tuberculosis, multiple system atrophy, mercury poisoning, Crohn's disease, trypanosomiasis, rheumatoid arthritis, and celiac disease as well as other systemic diseases. Mechanism The exact mechanism in which these diseases cause cachexia is poorly understood, and likely is multifactorial with multiple disease pathways involved. Inflammatory cytokines appear to play a central role including tumor necrosis factor (TNF) (which is also nicknamed 'cachexin' or 'cachectin'), interferon gamma and interleukin 6. TNF has been shown to have a direct catabolic effect on skeletal muscle and adipose tissue through the ubiquitin proteasome pathway. This mechanism involves the formation of reactive oxygen species leading to upregulation of the transcription factor NF-κB. NF-κB is a known regulator of the genes that encode cytokines and cytokine receptors. The increased production of cytokines induces proteolysis and breakdown of myofibrillar proteins. Systemic inflammation also causes reduced protein synthesis through inhibition of the Akt/mTOR pathway. Although many different tissues and cell types may be responsible for the increase in circulating cytokines, evidence indicates tumors themselves are an important source of factors that may promote cachexia in cancer. Tumor-derived molecules such as lipid mobilizing factor, proteolysis-inducing factor, and mitochondrial uncoupling proteins may induce protein degradation and contribute to cachexia. Uncontrolled inflammation in cachexia can lead to an elevated resting metabolic rate, further increasing the demands for protein and energy sources. There is also evidence of alteration in feeding control loops in cachexia. 
High levels of leptin, a hormone secreted by adipocytes, block the release of neuropeptide Y, which is the most potent feeding-stimulatory peptide in the hypothalamic orexigenic network, leading to decreased energy intake despite the high metabolic demand for nutrients. Diagnosis Diagnostic guidelines and criteria to differentiate from sarcopenia have only recently been proposed despite the prevalence of cachexia and varying criteria; the primary features of cachexia include progressive depletion of muscle and fat mass, reduced food intake, abnormal metabolism of carbohydrate, protein, and fat, reduced quality of life, and increased physical impairment. Historically, body weight changes were used as the primary metrics of cachexia, including low body mass index and involuntary weight loss of more than 10%. Using weight alone is limited by the presence of edema, tumor mass and the high prevalence of obesity in the general population. Weight-based criteria do not take into account changes in body composition, especially loss of lean body mass. In the attempt to include a broader evaluation of the burden of cachexia, diagnostic criteria using assessments of laboratory metrics and symptoms in addition to weight have been proposed. The criteria included weight loss of at least 5% in 12 months or low body mass index (less than 22  kg/m2) with at least three of the following features: decreased muscle strength, fatigue, anorexia, low fat-free mass index, or abnormal biochemistry (increased inflammatory markers, anemia, low serum albumin). In cancer patients, cachexia is diagnosed from unintended weight loss of more than 5%. For cancer patients with a body mass index of less than 20 kg/m2, cachexia is diagnosed after the unintended weight loss of more than 2%. Additionally, it can be diagnosed through sarcopenia, or loss of skeletal muscle mass. Laboratory markers are used in evaluation of people with cachexia, including albumin, prealbumin, C-reactive protein, or hemoglobin. However, laboratory metrics and cut-off values are not standardized across different diagnostic criteria. Acute phase reactants (IL-6, IL-1b, tumor necrosis factor-a, IL-8, interferon-g) are sometimes measured but correlate poorly with outcomes. There are no biomarkers to identify people with cancer who may develop cachexia. In the effort to better classify cachexia severity, several scoring systems have been proposed including the Cachexia Staging Score (CSS) and Cachexia Score (CASCO). The CSS takes into account weight loss, subjective reporting of muscle function, performance status, appetite loss, and laboratory changes to categorize patients into non-cachexia, pre-cachexia, cachexia, and refractory cachexia. The Cachexia SCOre (CASCO) is another validated score that includes evaluation of body weight loss and composition, inflammation, metabolic disturbances, immunosuppression, physical performance, anorexia, and quality of life. Evaluation of changes in body composition is limited by the difficulty in measuring muscle mass and health in a non-invasive and cost-effective way. Imaging with quantification of muscle mass has been investigated including bioelectrical impedance analysis, computed tomography, dual-energy X-ray absorptiometry (DEXA), and magnetic resonance imaging but are not widely used. Definition Identification, treatment, and research of cachexia have historically been limited by the lack of a widely accepted definition of cachexia. 
In 2011, an international consensus group adopted a definition of cachexia as "a multifactorial syndrome defined by an ongoing loss of skeletal muscle mass (with or without loss of fat mass) that can be partially but not entirely reversed by conventional nutritional support." Cachexia differs from weight loss due to malnutrition from malabsorption, anorexia nervosa, or anorexia due to major depressive disorder. Weight loss from inadequate caloric intake generally causes fat loss before muscle loss, whereas cachexia causes predominantly muscle wasting. Cachexia is also distinct from sarcopenia, or age-related muscle loss, although they often co-exist. Treatment The management of cachexia depends on the underlying cause, the general prognosis, and the needs of the person affected. The most effective approach to cachexia is treating the underlying disease process. An example is the reduction in cachexia from AIDS by highly active antiretroviral therapy. However this is often not possible or may be inadequate to reverse the cachexia syndrome in other diseases. Approaches to mitigate muscle loss include exercise, nutritional therapies, and medications. Exercise Therapy that includes regular physical exercise can be recommended for the treatment of cachexia due to the positive effects of exercise on skeletal muscle but current evidence remains uncertain as to its effectiveness, acceptability and safety for cancer patients. Individuals with cachexia generally report low levels of physical activity and few engage in an exercise routine, owing to low motivation to exercise and a belief that exercising may worsen their symptoms or cause harm. Medications Appetite stimulant medications are used to treat cachexia to increase food intake, but are not effective in stopping muscle wasting and may have detrimental side effects. Appetite stimulants include glucocorticoids, cannabinoids, or progestins such as megestrol acetate. Anti-emetics such as 5-HT3 antagonists are also commonly used in cancer cachexia if nausea is a prominent symptom. Anabolic-androgenic steroids like oxandrolone may be beneficial in cachexia but their use is recommended for a maximum of two weeks since a longer duration of treatment increases side effects. Whilst preliminary studies have suggested thalidomide may be useful, a Cochrane review found no evidence to make an informed decision about the use of this drug in cancer patients with cachexia. Nutrition The increased metabolic rate and appetite suppression common in cachexia can compound muscle loss. Studies using a calorie-dense protein supplementation have suggested at least weight stabilization can be achieved, although improvements in lean body mass have not been observed in these studies. Supplements Administration of exogenous amino acids have been investigated to serve as a protein-sparing metabolic fuel by providing substrates for both muscle metabolism and gluconeogenesis. The branched-chain amino acids leucine and valine may have potential in inhibiting overexpression of protein breakdown pathways. The amino acid glutamine has been used as a component of oral supplementation to reverse cachexia in people with advanced cancer or HIV/AIDS. β-hydroxy β-methylbutyrate (HMB) is a metabolite of leucine that acts as a signaling molecule to stimulate protein synthesis. Studies showed positive results for chronic pulmonary disease, hip fracture, and in AIDS-related and cancer-related cachexia. 
However, many of these clinical studies used HMB as a component of combination treatment with glutamine, arginine, leucine, higher dietary protein and/or vitamins, which limits the assessment of the efficacy of HMB alone. Creatine has shown some promise as a nutritional supplement to treat cachexia by reducing muscle wasting. Epidemiology Accurate epidemiological data on the prevalence of cachexia are lacking due to changing diagnostic criteria and under-identification of people with the disorder. Cachexia from any disease is estimated to affect more than 5 million people in the United States. The prevalence of cachexia is growing and estimated at 1% of the population. The prevalence is lower in Asia but, because of the larger population, represents a similar burden. Cachexia is also a significant problem in South America and Africa. The most frequent causes of cachexia in the United States by population prevalence are: 1) chronic obstructive pulmonary disease (COPD), 2) heart failure, 3) cancer cachexia, 4) chronic kidney disease. The prevalence of cachexia ranges from 15 to 60% among people with cancer, increasing to an estimated 80% in terminal cancer. This wide range is attributed to differences in cachexia definition, variability in cancer populations, and timing of diagnosis. Although the prevalence of cachexia among people with COPD or heart failure is lower (estimated 5% to 20%), the large number of people with these conditions dramatically increases the total cachexia burden. Cachexia contributes to significant loss of function and healthcare utilization. Estimates using the National Inpatient Sample in the United States suggest that cachexia accounted for 177,640 hospital stays in 2016. Cachexia is considered the immediate cause of death of many people with cancer, estimated at between 22 and 40%. History The word "cachexia" is derived from the Greek words "kakos" (bad) and "hexis" (condition). English ophthalmologist John Zachariah Laurence was the first to use the phrase "cancerous cachexia", doing so in 1858. He applied the phrase to the chronic wasting associated with malignancy. It was not until 2011 that the term "cancer-associated cachexia" was given a formal definition, in a publication by Kenneth Fearon. Fearon defined it as "a multifactorial syndrome characterized by ongoing loss of skeletal muscle (with or without loss of fat mass) that cannot be fully reversed by conventional nutritional support and leads to progressive functional impairment". Research Several medications are under investigation or have previously been trialed for use in cachexia but are currently not in widespread clinical use: Thalidomide Cytokine antagonists Cannabinoids Omega-3 fatty acids, including eicosapentaenoic acid (EPA) Non-steroidal anti-inflammatory drugs Prokinetics Ghrelin and ghrelin receptor agonists Anabolic catabolic transforming agents such as MT-102 Selective androgen receptor modulators Cyproheptadine Hydrazine sulfate Medical marijuana has been allowed for the treatment of cachexia in some US states, such as Missouri, Illinois, Maryland, Delaware, Nevada, Michigan, Washington, Oregon, California, Colorado, New Mexico, Arizona, Vermont, New Jersey, Rhode Island, Maine, New York, Hawaii and Connecticut. Multimodal therapy Despite the extensive investigation into single therapeutic targets for cachexia, the most effective treatments use multi-targeted therapies. 
In Europe, a combination of non-drug approaches including physical training, nutritional counseling, and psychotherapeutic intervention are used in belief this approach may be more effective than monotherapy. Administration of anti-inflammatory drugs showed efficacy and safety in the treatment of people with advanced cancer cachexia. See also Sarcopenia Muscle atrophy Marasmus Cancer Progressive disease Refeeding syndrome Journal of Cachexia, Sarcopenia and Muscle References External links Geriatrics Rehabilitation medicine Symptoms and signs: Endocrinology, nutrition, and metabolism
Acidosis
Acidosis is a biological process producing hydrogen ions and increasing their concentration in blood or body fluids. pH is the negative log of hydrogen ion concentration and so it is decreased by a process of acidosis. Acidemia The term acidemia describes the state of low blood pH, when arterial pH falls below 7.35 (except in the fetus – see below) while acidosis is used to describe the processes leading to these states. The use of acidosis for a low pH creates an ambiguity in its meaning. The difference is important where a patient has factors causing both acidosis and alkalosis, wherein the relative severity of both determines whether the result is a high, low, or normal pH. Alkalemia occurs at a pH over 7.45. Arterial blood gas analysis and other tests are required to separate the main causes. In certain situations the main cause is clear. For instance, a diabetic with ketoacidosis is a recognizable case where the main cause of acidemia is essentially obvious. The rate of cellular metabolic activity affects and, at the same time, is affected by the pH of the body fluids. In mammals, the normal pH of arterial blood lies between 7.35 and 7.50 depending on the species (e.g., healthy human-arterial blood pH varies between 7.35 and 7.45). Signs and symptoms Nervous system involvement may be seen with acidosis and occurs more often with respiratory acidosis than with metabolic acidosis. Signs and symptoms that may be seen in acidosis include headaches, confusion, feeling tired, tremors, sleepiness, flapping tremor, and dysfunction of the cerebrum of the brain which may progress to coma if there is no intervention. Metabolic acidosis Metabolic acidosis may result from either increased production of metabolic acids, such as lactic acid, or disturbances in the ability to excrete acid via the kidneys, such as either renal tubular acidosis or the acidosis of kidney failure, which is associated with an accumulation of urea and creatinine as well as metabolic acid residues of protein catabolism. Lactic acidosis occurs whenever the demand for oxygen by tissues exceeds the supply and the more efficient aerobic metabolism is supplemented by anaerobic metabolism that produces lactate. Increased demand occurs, for example, with high intensity exercise such as sprinting. Inadequate supply occurs, for example, with hypoperfusion as occurs in hemorrhagic shock. A rise in lactate out of proportion to the level of pyruvate, e.g., in mixed venous blood, is termed "excess lactate", and is an indicator of anaerobic glycolysis occurring in muscle cells, as seen during strenuous exercise. Once oxygenation is restored, the acidosis clears quickly. Another example of increased production of acids occurs in starvation and diabetic ketoacidosis. It is due to the accumulation of ketoacids (via excessive ketosis) and reflects a severe shift from glycolysis to lipolysis for energy needs. Acid consumption from poisoning such as methanol ingestion, elevated levels of iron in the blood, and chronically decreased production of bicarbonate may also produce metabolic acidosis. Metabolic acidosis is compensated for in the lungs, as increased exhalation of carbon dioxide promptly shifts the buffering equation to reduce metabolic acid. This is a result of stimulation to chemoreceptors, which increases alveolar ventilation, leading to respiratory compensation, otherwise known as Kussmaul breathing (a specific type of hyperventilation). 
Should this situation persist, the patient is at risk of exhaustion leading to respiratory failure. Mutations to the V-ATPase 'a4' or 'B1' isoforms result in distal renal tubular acidosis, a condition that leads to metabolic acidosis, in some cases with sensorineural deafness. Arterial blood gases will indicate low pH, low blood HCO3, and normal or low PaCO2. In addition to arterial blood gas, an anion gap can also differentiate between possible causes. The Henderson-Hasselbalch equation is useful for calculating blood pH, because blood is a buffer solution. In the clinical setting, this equation is usually used to calculate HCO3 from measurements of pH and PaCO2 in arterial blood gases. The amount of metabolic acid accumulating can also be quantitated by using buffer base deviation, a derivative estimate of the metabolic as opposed to the respiratory component. In hypovolemic shock for example, approximately 50% of the metabolic acid accumulation is lactic acid, which disappears as blood flow and oxygen debt are corrected. Treatment Treatment of uncompensated metabolic acidosis is focused upon correcting the underlying problem. When metabolic acidosis is severe and can no longer be compensated for adequately by the lungs or kidneys, neutralizing the acidosis with infusions of bicarbonate may be required. Fetal metabolic acidemia In the fetus, the normal range differs based on which umbilical vessel is sampled (umbilical vein pH is normally 7.25 to 7.45; umbilical artery pH is normally 7.18 to 7.38). Fetal metabolic acidemia is defined as an umbilical vessel pH of less than 7.20 and a base excess of less than −8. Respiratory acidosis Respiratory acidosis results from a build-up of carbon dioxide in the blood (hypercapnia) due to hypoventilation. It is most often caused by pulmonary problems, although head injuries, drugs (especially anaesthetics and sedatives), and brain tumors can cause this acidemia. Pneumothorax, emphysema, chronic bronchitis, asthma, severe pneumonia, and aspiration are among the most frequent causes. It can also occur as a compensatory response to chronic metabolic alkalosis. One key to distinguish between respiratory and metabolic acidosis is that in respiratory acidosis, the CO2 is increased while the bicarbonate is either normal (uncompensated) or increased (compensated). Compensation occurs if respiratory acidosis is present, and a chronic phase is entered with partial buffering of the acidosis through renal bicarbonate retention. However, in cases where chronic illnesses that compromise pulmonary function persist, such as late-stage emphysema and certain types of muscular dystrophy, compensatory mechanisms will be unable to reverse this acidotic condition. As metabolic bicarbonate production becomes exhausted, and extraneous bicarbonate infusion can no longer reverse the extreme buildup of carbon dioxide associated with uncompensated respiratory acidosis, mechanical ventilation will usually be applied. Fetal respiratory acidemia In the fetus, the normal range differs based on which umbilical vessel is sampled (umbilical vein pH is normally 7.25 to 7.45; umbilical artery pH is normally 7.20 to 7.38). In the fetus, the lungs are not used for ventilation. Instead, the placenta performs ventilatory functions (gas exchange). Fetal respiratory acidemia is defined as an umbilical vessel pH of less than 7.20 and an umbilical artery PCO2 of 66 or higher or umbilical vein PCO2 of 50 or higher. 
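For reference, the Henderson–Hasselbalch relationship and the anion gap mentioned in the metabolic acidosis discussion above can be written out explicitly. This is a minimal illustrative sketch using conventional textbook constants (a pKa of about 6.1 for the bicarbonate buffer system and a CO2 solubility coefficient of about 0.03 mmol/L per mmHg); these constants are assumptions supplied here for illustration, not figures taken from this article:

\mathrm{pH} = 6.1 + \log_{10}\!\left(\frac{[\mathrm{HCO_3^-}]}{0.03 \times P_{a\mathrm{CO_2}}}\right) \quad\Longrightarrow\quad [\mathrm{HCO_3^-}] = 0.03 \times P_{a\mathrm{CO_2}} \times 10^{\,\mathrm{pH} - 6.1}

\text{Anion gap} = [\mathrm{Na^+}] - \left([\mathrm{Cl^-}] + [\mathrm{HCO_3^-}]\right)

As a worked check, a normal arterial sample with pH 7.4 and PaCO2 of 40 mmHg gives approximately 0.03 × 40 × 10^1.3 ≈ 24 mmol/L of bicarbonate, the usual reference value; in metabolic acidosis both the pH and the calculated bicarbonate fall.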
See also Acid–base homeostasis Acid–base imbalance Alkalinizing agent Alkaline diet Arterial blood gas Chemical equilibrium Lactic acidosis pCO2 pKa References Notes Hobler KE, Carey LC. Effect of acute progressive hypoxemia on cardiac output and plasma excess lactate. Ann. Surg. 1973 Feb;177(2):199-202. Hobler KE, Napodano RJ. Tolerance of swine to acute blood volume deficits. J Trauma. 1974 Aug;14(8):716-8. Rose, BD, Post TW. Clinical Physiology of Acid-Base and Electrolyte Disorders, 5th ed. (No content available.) 2000. New York: McGraw Hill Professional. External links Acid–base disturbances
Physical disability
A physical disability is a limitation on a person's physical functioning, mobility, dexterity or stamina. Other physical disabilities include impairments which limit other facets of daily living, such as respiratory disorders, blindness, epilepsy and sleep disorders. Causes Prenatal disabilities are acquired before birth. These may be due to diseases or substances that the mother has been exposed to during pregnancy, embryonic or fetal developmental accidents or genetic disorders. Perinatal disabilities are acquired between some weeks before to up to four weeks after birth in humans. These can be due to prolonged lack of oxygen or obstruction of the respiratory tract, damage to the brain during birth (due to the accidental misuse of forceps, for example) or the baby being born prematurely. These may also be caused due to genetic disorders or accidents. Post-natal disabilities are gained after birth. They can be due to accidents, injuries, obesity, infection or other illnesses. These may also be caused due to genetic disorders. Types Mobility impairment includes upper or lower limb loss or impairment, poor manual dexterity, and damage to one or multiple organs of the body. Disability in mobility can be a congenital or acquired problem or a consequence of disease. People who have a broken skeletal structure also fall into this category. Visual impairment is another type of physical impairment. There are hundreds of thousands of people with minor to various serious vision injuries or impairments. These types of injuries can also result in severe problems or diseases such as blindness and ocular trauma. Some other types of vision impairment include scratched cornea, scratches on the sclera, diabetes-related eye conditions, dry eyes and corneal graft, macular degeneration in old age and retinal detachment. Hearing loss is a partial or total inability to hear. Deaf and hard of hearing people have a rich culture and benefit from learning sign language for communication purposes. People who are only partially deaf can sometimes make use of hearing aids to improve their hearing ability. Speech and language disability: the person with deviations of speech and language processes which are outside the range of acceptable deviation within a given environment and which prevent full social or educational development Physical impairment can also be attributed to disorders causing, among others, sleep deficiency, chronic fatigue, chronic pain, and seizures. See also Mental disability References Further reading Danielle Gourevitch & Mirko Grmek. (1998). Les maladies dans l'art antique, Paris. Mirko Grmek. (1983). Les maladies à l'aube de la civilisation occidentale, Paris. Phys Disability
Bleeding
Bleeding, hemorrhage, haemorrhage or blood loss is blood escaping from the circulatory system from damaged blood vessels. Bleeding can occur internally, or externally either through a natural opening such as the mouth, nose, ear, urethra, vagina or anus, or through a puncture in the skin. Hypovolemia is a massive decrease in blood volume, and death by excessive loss of blood is referred to as exsanguination. Typically, a healthy person can endure a loss of 10–15% of the total blood volume without serious medical difficulties (by comparison, blood donation typically takes 8–10% of the donor's blood volume). The stopping or controlling of bleeding is called hemostasis and is an important part of both first aid and surgery. Types Upper head Intracranial hemorrhage — bleeding in the skull. Cerebral hemorrhage — a type of intracranial hemorrhage, bleeding within the brain tissue itself. Intracerebral hemorrhage — bleeding in the brain caused by the rupture of a blood vessel within the head. See also hemorrhagic stroke. Subarachnoid hemorrhage (SAH) implies the presence of blood within the subarachnoid space from some pathologic process. The common medical use of the term SAH refers to the nontraumatic types of hemorrhages, usually from rupture of a berry aneurysm or arteriovenous malformation (AVM). The scope of this article is limited to these nontraumatic hemorrhages. Eyes Subconjunctival hemorrhage — bloody eye arising from a broken blood vessel in the sclera (whites of the eyes). Often the result of strain, including sneezing, coughing, vomiting or other kind of strain Nose Epistaxis — nosebleed Mouth Tooth eruption — losing a tooth Hematemesis — vomiting fresh blood Hemoptysis — coughing up blood from the lungs Lungs Pulmonary hemorrhage Gastrointestinal Upper gastrointestinal bleed Lower gastrointestinal bleed Occult gastrointestinal bleed Urinary tract Hematuria — blood in the urine from urinary bleeding Gynecologic Vaginal bleeding Postpartum hemorrhage Breakthrough bleeding Ovarian bleeding — This is a potentially catastrophic and not so rare complication among lean patients with polycystic ovary syndrome undergoing transvaginal oocyte retrieval. Anus Melena — upper gastrointestinal bleeding Hematochezia — lower gastrointestinal bleeding, or brisk upper gastrointestinal bleeding Vascular Ruptured aneurysm Aortic transection Iatrogenic injury Causes Bleeding arises due to either traumatic injury, underlying medical condition, or a combination. Traumatic injury Traumatic bleeding is caused by some type of injury. There are different types of wounds which may cause traumatic bleeding. These include: Abrasion — Also called a graze, this is caused by transverse action of a foreign object against the skin, and usually does not penetrate below the epidermis. Excoriation — In common with Abrasion, this is caused by mechanical destruction of the skin, although it usually has an underlying medical cause. Hematoma — Caused by damage to a blood vessel that in turn causes blood to collect in an enclosed area. Laceration — Irregular wound caused by blunt impact to soft tissue overlying hard tissue or tearing such as in childbirth. In some instances, this can also be used to describe an incision. Incision — A cut into a body tissue or organ, such as by a scalpel, made during surgery. Puncture Wound — Caused by an object that penetrated the skin and underlying layers, such as a nail, needle or knife. 
Contusion — Also known as a bruise, this is a blunt trauma damaging tissue under the surface of the skin. Crushing Injuries — Caused by a great or extreme amount of force applied over a period of time. The extent of a crushing injury may not immediately present itself. Ballistic Trauma — Caused by a projectile weapon such as a firearm. This may include two external wounds (entry and exit) and a contiguous wound between the two. The pattern of injury, evaluation and treatment will vary with the mechanism of the injury. Blunt trauma causes injury via a shock effect; delivering energy over an area. Wounds are often not straight and unbroken skin may hide significant injury. Penetrating trauma follows the course of the injurious device. As the energy is applied in a more focused fashion, it requires less energy to cause significant injury. Any body organ, including bone and brain, can be injured and bleed. Bleeding may not be readily apparent; internal organs such as the liver, kidney and spleen may bleed into the abdominal cavity. The only apparent signs may come with blood loss. Bleeding from a bodily orifice, such as the rectum, nose, or ears may signal internal bleeding, but cannot be relied upon. Bleeding from a medical procedure also falls into this category. Medical condition "Medical bleeding" denotes hemorrhage as a result of an underlying medical condition (i.e. causes of bleeding that are not directly due to trauma). Blood can escape from blood vessels as a result of 3 basic patterns of injury: Intravascular changes — changes of the blood within vessels (e.g. ↑ blood pressure, ↓ clotting factors) Intramural changes — changes arising within the walls of blood vessels (e.g. aneurysms, dissections, AVMs, vasculitides) Extravascular changes — changes arising outside blood vessels (e.g. H pylori infection, brain abscess, brain tumor) The underlying scientific basis for blood clotting and hemostasis is discussed in detail in the articles, coagulation, hemostasis and related articles. The discussion here is limited to the common practical aspects of blood clot formation which manifest as bleeding. Some medical conditions can also make patients susceptible to bleeding. These are conditions that affect the normal hemostatic (bleeding-control) functions of the body. Such conditions either are, or cause, bleeding diatheses. Hemostasis involves several components. The main components of the hemostatic system include platelets and the coagulation system. Platelets are small blood components that form a plug in the blood vessel wall that stops bleeding. Platelets also produce a variety of substances that stimulate the production of a blood clot. One of the most common causes of increased bleeding risk is exposure to nonsteroidal anti-inflammatory drugs (NSAIDs). The prototype for these drugs is aspirin, which inhibits the production of thromboxane. NSAIDs (for example Ibuprofen) inhibit the activation of platelets, and thereby increase the risk of bleeding. The effect of aspirin is irreversible; therefore, the inhibitory effect of aspirin is present until the platelets have been replaced (about ten days). Other NSAIDs, such as "ibuprofen" (Motrin) and related drugs, are reversible and therefore, the effect on platelets is not as long-lived. There are several named coagulation factors that interact in a complex way to form blood clots, as discussed in the article on coagulation. Deficiencies of coagulation factors are associated with clinical bleeding. 
For instance, deficiency of Factor VIII causes classic hemophilia A, while deficiency of Factor IX causes "Christmas disease" (hemophilia B). Antibodies to Factor VIII can also inactivate Factor VIII and precipitate bleeding that is very difficult to control. This is a rare condition that is most likely to occur in older patients and in those with autoimmune diseases. Another common bleeding disorder is Von Willebrand disease. It is caused by a deficiency or abnormal function of the "Von Willebrand" factor, which is involved in platelet activation. Deficiencies in other factors, such as factor XIII or factor VII, are occasionally seen, but may not be associated with severe bleeding and are not as commonly diagnosed. In addition to NSAID-related bleeding, another common cause of bleeding is that related to the medication warfarin ("Coumadin" and others). This medication needs to be closely monitored as the bleeding risk can be markedly increased by interactions with other medications. Warfarin acts as a vitamin K antagonist, blocking the recycling of vitamin K in the liver. Vitamin K is required for the production of the clotting factors II, VII, IX, and X in the liver. One of the most common causes of warfarin-related bleeding is taking antibiotics. The gut bacteria make vitamin K and are killed by antibiotics. This decreases vitamin K levels and therefore the production of these clotting factors. Deficiencies of platelet function may require platelet transfusion, while deficiencies of clotting factors may require transfusion of either fresh frozen plasma or specific clotting factors, such as Factor VIII for patients with hemophilia. Infection Infectious diseases such as Ebola, Marburg virus disease and yellow fever can cause bleeding. Diagnosis/Imaging Dioxaborolane chemistry enables radioactive fluoride (18F) labeling of red blood cells, which allows for positron emission tomography (PET) imaging of intracerebral hemorrhages. Classification Blood loss Hemorrhaging is broken down into four classes by the American College of Surgeons' advanced trauma life support (ATLS). Class I Hemorrhage involves up to 15% of blood volume. There is typically no change in vital signs and fluid resuscitation is not usually necessary. Class II Hemorrhage involves 15–30% of total blood volume. A patient is often tachycardic (rapid heart rate) with a narrowing of the difference between the systolic and diastolic blood pressures. The body attempts to compensate with peripheral vasoconstriction. Skin may start to look pale and be cool to the touch. The patient may exhibit slight changes in behavior. Volume resuscitation with crystalloids (saline solution or lactated Ringer's solution) is all that is typically required. Blood transfusion is not usually required. Class III Hemorrhage involves loss of 30–40% of circulating blood volume. The patient's blood pressure drops, the heart rate increases, peripheral hypoperfusion (shock) with diminished capillary refill occurs, and the mental status worsens. Fluid resuscitation with crystalloid and blood transfusion are usually necessary. Class IV Hemorrhage involves loss of more than 40% of circulating blood volume. The limit of the body's compensation is reached and aggressive resuscitation is required to prevent death. This system is essentially the same as that used in the staging of hypovolemic shock. Individuals in excellent physical and cardiovascular shape may have more effective compensatory mechanisms before experiencing cardiovascular collapse. 
These patients may look deceptively stable, with minimal derangements in vital signs, while having poor peripheral perfusion. Elderly patients or those with chronic medical conditions may have less tolerance to blood loss, less ability to compensate, and may take medications such as betablockers that can potentially blunt the cardiovascular response. Care must be taken in the assessment. Massive hemorrhage Although there is no universally accepted definition of massive hemorrhage, the following can be used to identify the condition: "(i) blood loss exceeding circulating blood volume within a 24-hour period, (ii) blood loss of 50% of circulating blood volume within a 3-hour period, (iii) blood loss exceeding 150 ml/min, or (iv) blood loss that necessitates plasma and platelet transfusion." World Health Organization The World Health Organization made a standardized grading scale to measure the severity of bleeding. Management Acute bleeding from an injury to the skin is often treated by the application of direct pressure. For severely injured patients, tourniquets are helpful in preventing complications of shock. Anticoagulant medications may need to be discontinued and possibly reversed in patients with clinically significant bleeding. Patients that have lost excessive amounts of blood may require a blood transfusion. The use of cyanoacrylate glue to prevent bleeding and seal battle wounds was designed and first used in the Vietnam War. Skin glue, a medical version of "super glue", is sometimes used instead of using traditional stitches used for small wounds that need to be closed at the skin level. Etymology The word "Haemorrhage" (or hæmorrhage; using the æ ligature) comes from Latin haemorrhagia, from Ancient Greek αἱμορραγία (haimorrhagía, "a violent bleeding"), from αἱμορραγής (haimorrhagḗs, "bleeding violently"), from αἷμα (haîma, "blood") + -ραγία (-ragía), from ῥηγνύναι (rhēgnúnai, "to break, burst"). See also Aneurysm Autohemorrhaging Anemia Coagulation Contusion Exsanguination Hematophagy Hemophilia Hematoma Istihadha References External links Transfusion medicine
Hypotension
Hypotension, also known as low blood pressure, is a cardiovascular condition characterized by abnormally reduced blood pressure. Blood pressure is the force of blood pushing against the walls of the arteries as the heart pumps out blood and is indicated by two numbers, the systolic blood pressure (the top number) and the diastolic blood pressure (the bottom number), which are the maximum and minimum blood pressures within the cardiac cycle, respectively. A systolic blood pressure of less than 90 millimeters of mercury (mmHg) or diastolic of less than 60 mmHg is generally considered to be hypotension. Different numbers apply to children. However, in practice, blood pressure is considered too low only if noticeable symptoms are present. Symptoms may include dizziness, lightheadedness, confusion, feeling tired, weakness, headache, blurred vision, nausea, neck or back pain, an irregular heartbeat or feeling that the heart is skipping beats or fluttering, and fainting. Hypotension is the opposite of hypertension, which is high blood pressure. It is best understood as a physiological state rather than a disease. Severely low blood pressure can deprive the brain and other vital organs of oxygen and nutrients, leading to a life-threatening condition called shock. Shock is classified based on the underlying cause, including hypovolemic shock, cardiogenic shock, distributive shock, and obstructive shock. Hypotension can be caused by strenuous exercise, excessive heat, low blood volume (hypovolemia), hormonal changes, widening of blood vessels, anemia, vitamin B12 deficiency, anaphylaxis, heart problems, or endocrine problems. Some medications can also lead to hypotension. There are also syndromes that can cause hypotension in patients including orthostatic hypotension, vasovagal syncope, and other rarer conditions. For many people, excessively low blood pressure can cause dizziness and fainting or indicate serious heart, endocrine or neurological disorders. For some people who exercise and are in top physical condition, low blood pressure could be normal. A single session of exercise can induce hypotension and water-based exercise can induce a hypotensive response. Treatment depends on what causes low blood pressure. Treatment of hypotension may include the use of intravenous fluids or vasopressors. When using vasopressors, trying to achieve a mean arterial pressure (MAP) of greater than 70 mmHg does not appear to result in better outcomes than trying to achieve an MAP of greater than 65 mmHg in adults. Signs and symptoms For many people, low blood pressure goes unnoticed. For some people, low blood pressure may be a sign of an underlying health condition, especially when it drops suddenly or occurs with symptoms. Older adults also have a higher risk of symptoms of low blood pressure, such as falls, fainting, or dizziness when standing or after a meal. If the blood pressure is sufficiently low, fainting (syncope) may occur. 
Low blood pressure is sometimes associated with certain symptoms, many of which are related to causes rather than effects of hypotension: confusion dizziness or lightheadedness feeling tired or weak shortness of breath irregular heartbeat, feeling that the heart is skipping beats, or fluttering chest pain fever headache stiff neck severe back or neck pain cough with sputum prolonged diarrhea or vomiting chills loss of appetite nausea dyspepsia (indigestion) dysuria (painful urination) acute, life-threatening allergic reaction seizures loss of consciousness temporary blurring or loss of vision black tarry stools Causes Low blood pressure can be caused by low blood volume, hormonal changes, pregnancy, widening of blood vessels, medicine side effects, severe dehydration, anemia, vitamin B12 deficiency, anaphylaxis, heart problems or endocrine problems. Reduced blood volume, hypovolemia, is the most common cause of hypotension. This can result from hemorrhage; insufficient fluid intake, as in starvation; or excessive fluid losses from diarrhea or vomiting. Hypovolemia can be induced by excessive use of diuretics. Low blood pressure may also be attributed to heat stroke which can be indicated by absence of perspiration, light headedness and dark colored urine. Other medications can produce hypotension by different mechanisms. Chronic use of alpha blockers or beta blockers can lead to hypotension. Beta blockers can cause hypotension both by slowing the heart rate and by decreasing the pumping ability of the heart muscle. Decreased cardiac output despite normal blood volume, due to severe congestive heart failure, large myocardial infarction, heart valve problems, or extremely low heart rate (bradycardia), often produces hypotension and can rapidly progress to cardiogenic shock. Arrhythmias often result in hypotension by this mechanism. Excessive vasodilation, or insufficient constriction of the blood vessels (mostly arterioles), causes hypotension. This can be due to decreased sympathetic nervous system output or to increased parasympathetic activity occurring as a consequence of injury to the brain or spinal cord. Dysautonomia, an intrinsic abnormality in autonomic system functioning, can also lead to hypotension. Excessive vasodilation can also result from sepsis, acidosis, or medications, such as nitrate preparations, calcium channel blockers, or AT1 receptor antagonists (Angiotensin II acts on AT1 receptors). Many anesthetic agents and techniques, including spinal anesthesia and most inhalational agents, produce significant vasodilation. Lower blood pressure is a side effect of certain herbal medicines, which can also interact with several medications. An example is the theobromine in Theobroma cacao, which lowers blood pressure through its actions as both a vasodilator and a diuretic, and has been used to treat high blood pressure. Syndromes Orthostatic hypotension Orthostatic hypotension, also called postural hypotension, is a common form of low blood pressure. It occurs after a change in body position, typically when a person stands up from either a seated or lying position. It is usually transient and represents a delay in the normal compensatory ability of the autonomic nervous system. It is commonly seen in hypovolemia and as a result of various medications. In addition to blood pressure-lowering medications, many psychiatric medications, in particular antidepressants, can have this side effect. 
Simple blood pressure and heart rate measurements while lying, seated, and standing (with a two-minute delay in between each position change) can confirm the presence of orthostatic hypotension. Taking these measurements is known as orthostatic vitals. Orthostatic hypotension is indicated if there is a drop of 20 mmHg in systolic pressure (and a 10 mmHg drop in diastolic pressure in some facilities) and a 20 beats per minute increase in heart rate. Vasovagal syncope Vasovagal syncope is a form of dysautonomia characterized by an inappropriate drop in blood pressure while in the upright position. Vasovagal syncope occurs as a result of increased activity of the vagus nerve, the mainstay of the parasympathetic nervous system. Patients will feel sudden, unprovoked lightheadedness, sweating, changes in vision, and finally a loss of consciousness. Consciousness will often return rapidly once patient is lying down and the blood pressure returns to normal. Other Another, but rarer form, is postprandial hypotension, a drastic decline in blood pressure that occurs 30 to 75 minutes after eating substantial meals. When a great deal of blood is diverted to the intestines (a kind of "splanchnic blood pooling") to facilitate digestion and absorption, the body must increase cardiac output and peripheral vasoconstriction to maintain enough blood pressure to perfuse vital organs, such as the brain. Postprandial hypotension is believed to be caused by the autonomic nervous system not compensating appropriately, because of aging or a specific disorder. Hypotension is a feature of Flammer syndrome, which is characterized by cold hands and feet and predisposes to normal tension glaucoma. Hypotension can be a symptom of relative energy deficiency in sport, sometimes known as the female athlete triad, although it can also affect men. Pathophysiology Blood pressure is continuously regulated by the autonomic nervous system, using an elaborate network of receptors, nerves, and hormones to balance the effects of the sympathetic nervous system, which tends to raise blood pressure, and the parasympathetic nervous system, which lowers it. The vast and rapid compensation abilities of the autonomic nervous system allow normal individuals to maintain an acceptable blood pressure over a wide range of activities and in many disease states. Even small alterations in these networks can lead to hypotension. Diagnosis For most adults, the optimal blood pressure is at or below 120/80 mmHg. If the systolic blood pressure is <90 mmHg or the diastolic blood pressure is <60 mmHg, it would be classified as hypotension. However, occasional blood pressure readings below 90/60 mmHg are not infrequent in the general population, and, in the absence of some pathological cause, hypotension appears to be a relatively benign condition in most people. The diagnosis of hypotension is usually made by measuring blood pressure, either non-invasively with a sphygmomanometer or invasively with an arterial catheter (mostly in an intensive care setting). Another way to diagnose low blood pressure is by using the mean arterial pressure (MAP) measured using an arterial catheter or by continuous, non-invasive hemodynamic monitoring which measures intra-operative blood pressure beat-by-beat throughout surgery. A MAP <65 mmHg is considered hypotension. Intra-operative hypotension <65 mmHg can lead to an increased risk of acute kidney injury, myocardial injury or post-operative stroke. 
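To make the numerical cut-offs in this section concrete, the following is a minimal sketch in Python. It is purely illustrative, not a clinical tool: the function names are invented for this example, and mean arterial pressure is estimated with the common one-third pulse-pressure approximation rather than measured beat-by-beat as described above.

def mean_arterial_pressure(systolic, diastolic):
    # Common estimate: diastolic pressure plus one third of the pulse pressure.
    return diastolic + (systolic - diastolic) / 3.0

def is_hypotensive(systolic, diastolic):
    # Cut-offs quoted in this article: systolic < 90 mmHg, diastolic < 60 mmHg, or MAP < 65 mmHg.
    map_estimate = mean_arterial_pressure(systolic, diastolic)
    return systolic < 90 or diastolic < 60 or map_estimate < 65

# Example: a reading of 85/55 mmHg gives an estimated MAP of 65 mmHg
# and is flagged as hypotensive on the systolic and diastolic criteria alone.

As the surrounding text notes, an isolated low reading in an asymptomatic person is usually benign; a calculation like this only flags a number, not a diagnosis.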
While an incidental finding of hypotension during a routine blood pressure measurement may not be particularly worrying, a substantial drop in blood pressure following standing, exercise or eating can be associated with symptoms and may have implications for future health. A drop in blood pressure after standing, termed postural or orthostatic hypotension, is defined as a decrease in supine-to-standing BP >20 mm Hg systolic or >10 mm Hg diastolic within 3 minutes of standing. Orthostatic hypotension is associated with increased risk of future cardiovascular events and mortality. Orthostatic vitals are frequently measured to assist with the diagnosis of orthostatic hypotension, and may involve the use of a tilt table test to evaluate vasovagal syncope. Treatment Treatment depends on what causes low blood pressure. Treatment may not be needed for asymptomatic low blood pressure. Depending on symptoms, treatment may include drinking more fluids to prevent dehydration, taking medicines to raise blood pressure, or adjusting medicines that cause low blood pressure. Adding electrolytes to a diet can relieve symptoms of mild hypotension, and a morning dose of caffeine can also be effective. Chronic hypotension rarely exists as more than a symptom. In mild cases, where the patient is still responsive, laying the person on their back and lifting the legs increases venous return, thus making more blood available to critical organs in the chest and head. The Trendelenburg position, though used historically, is no longer recommended. Hypotensive shock treatment always follows the first four following steps. Outcomes, in terms of mortality, are directly linked to the speed that hypotension is corrected. Still-debated methods are in parentheses, as are benchmarks for evaluating progress in correcting hypotension. A study on septic shock provided the delineation of these general principles. However, since it focuses on hypotension due to infection, it is not applicable to all forms of severe hypotension. Volume resuscitation (usually with crystalloid or blood products) Blood pressure support with a vasopressor (all seem equivalent with respect to risk of death, with norepinephrine possibly better than dopamine). Trying to achieve a mean arterial pressure (MAP) of greater than 70 mmHg does not appear to result in better outcomes than trying to achieve a MAP of greater than 65 mmHg in adults. Ensure adequate tissue perfusion (maintain SvO2 >70 with use of blood or dobutamine) Address the underlying problem (i.e., antibiotic for infection, stent or CABG (coronary artery bypass graft surgery) for infarction, steroids for adrenal insufficiency, etc...) The best way to determine if a person will benefit from fluids is by doing a passive leg raise followed by measuring the output from the heart. Medication Chronic hypotension sometimes requires the use of medications. Some medications that are commonly used include Fludrocortisone, Erythropoietin, and Sympathomimetics such as Midodrine and Noradrenaline and precursor (L-DOPS). Fludrocortisone is the first-line therapy (in the absence of heart failure) for patients with chronic hypotension or resistant orthostatic hypotension. It works by increasing the intravascular volume. Midodrine is a therapy used for severe orthostatic hypotension, and works by increasing peripheral vascular resistance. Noradrenaline and its precursor L-DOPS are used for primary autonomic dysfunction by increasing vascular tone. 
Erythropoietin is given to patients with neurogenic orthostatic hypotension and works by increasing vascular volume and viscosity. Pediatrics The definition of hypotension in the pediatric population changes with the child's age. The clinical history provided by the caretaker is the most important part of determining the cause of hypotension in pediatric patients. Symptoms in children with hypotension include increased sleepiness, urinating less (or not at all), difficulty breathing or rapid breathing, and syncope. Treatment of hypotension in pediatric patients is similar to treatment in adults, following the first four steps listed above (see Treatment). Children are more likely to undergo intubation during the treatment of hypotension because their oxygen levels drop more rapidly than those of adults. The closing of fetal shunts following birth can create instability in the "transitional circulation" of the fetus and often creates a state of hypotension following birth; while many infants can overcome this hypotension through the closing of shunts, a mean blood pressure (MBP) of lower than 30 mmHg is correlated with severe cerebral injury and can be experienced by premature infants who have poor shunt closure. Etymology Hypotension, from Ancient Greek hypo-, meaning "under" or "less", + English tension, meaning "strain" or "tightness". This refers to the under-constriction of the blood vessels and arteries which leads to low blood pressure. See also Hypertension References External links Curlie.org: Hypotension Vascular diseases
Immunity (medicine)
In biology, immunity is the state of being insusceptible or resistant to a noxious agent or process, especially a pathogen or infectious disease. Immunity may occur naturally or be produced by prior exposure or immunization. Innate and adaptive The immune system has innate and adaptive components. Innate immunity is present in all metazoans and mounts generic immune responses: inflammatory responses and phagocytosis. The adaptive component, on the other hand, involves more advanced lymphatic cells that can distinguish between specific "non-self" substances in the presence of "self". The reaction to foreign substances is etymologically described as inflammation, while the non-reaction to self substances is described as immunity. The two components of the immune system create a dynamic biological environment where "health" can be seen as a physical state in which the self is immunologically spared and what is foreign is inflammatorily and immunologically eliminated. "Disease" can arise when what is foreign cannot be eliminated or what is self is not spared. Innate immunity, also known as native immunity, is a semi-specific and widely distributed form of immunity. It is defined as the first line of defense against pathogens, representing a critical systemic response to prevent infection and maintain homeostasis, and contributing to the activation of an adaptive immune response. It does not adapt to a specific external stimulus or a prior infection, but relies on genetically encoded recognition of particular patterns. Adaptive or acquired immunity is the active component of the host immune response, mediated by antigen-specific lymphocytes. Unlike innate immunity, acquired immunity is highly specific to a particular pathogen and includes the development of immunological memory. Like the innate system, the acquired system includes both humoral immunity components and cell-mediated immunity components. Adaptive immunity can be acquired either 'naturally' (by infection) or 'artificially' (through deliberate actions such as vaccination). Adaptive immunity can also be classified as 'active' or 'passive'. Active immunity is acquired through exposure to a pathogen, which triggers the production of antibodies by the immune system. Passive immunity is acquired through the transfer of antibodies or activated T-cells derived from an immune host, either artificially or through the placenta; it is short-lived, requiring booster doses for continued immunity. In summary, adaptive immunity recognizes more diverse patterns than innate immunity and, unlike innate immunity, is associated with memory of the pathogen. History of theories For thousands of years mankind has been intrigued by the causes of disease and the concept of immunity. The prehistoric view was that disease was caused by supernatural forces, and that illness was a form of theurgic punishment for "bad deeds" or "evil thoughts" visited upon the soul by the gods or by one's enemies. In Classical Greek times, Hippocrates, who is regarded as the Father of Medicine, attributed diseases to an alteration or imbalance in one of the four humors (blood, phlegm, yellow bile or black bile). The first written description of the concept of immunity may have been made by the Athenian Thucydides who, in 430 BC, described that when the plague hit Athens: "the sick and the dying were tended by the pitying care of those who had recovered, because they knew the course of the disease and were themselves free from apprehensions. 
For no one was ever attacked a second time, or not with a fatal result". Active immunotherapy may have begun with Mithridates VI of Pontus (120-63 BC) who, to induce active immunity to snake venom, recommended using a method similar to modern toxoid serum therapy, by drinking the blood of animals which fed on venomous snakes. He is thought to have assumed that those animals acquired some detoxifying property, so that their blood would contain transformed components of the snake venom that could induce resistance to it instead of exerting a toxic effect. Mithridates reasoned that, by drinking the blood of these animals, he could acquire a similar resistance. Fearing assassination by poison, he took daily sub-lethal doses of venom to build tolerance. He is also said to have sought to create a 'universal antidote' to protect him from all poisons. For nearly 2000 years, poisons were thought to be the proximate cause of disease, and a complicated mixture of ingredients, called Mithridate, was used to cure poisoning during the Renaissance. An updated version of this cure, Theriacum Andromachi, was used well into the 19th century. The term "immunes" is also found in the epic poem "Pharsalia", written in the first century AD by the poet Marcus Annaeus Lucanus, to describe a North African tribe's resistance to snake venom. The first clinical description of immunity which arose from a specific disease-causing organism is probably A Treatise on Smallpox and Measles (Kitab fi al-jadari wa-al-hasbah, translated 1848), written by the Islamic physician Al-Razi in the 9th century. In the treatise, Al-Razi describes the clinical presentation of smallpox and measles and goes on to indicate that exposure to these specific agents confers lasting immunity (although he does not use this term). Until the 19th century, the miasma theory was also widely accepted. The theory viewed diseases such as cholera or the Black Plague as being caused by a miasma, a noxious form of "bad air". If someone was exposed to the miasma in a swamp, in evening air, or breathing air in a sickroom or hospital ward, they could catch a disease. Since the 19th century, communicable diseases came to be viewed as being caused by germs/microbes. The modern word "immunity" derives from the Latin immunis, meaning exemption from military service, tax payments or other public services. The first scientist who developed a full theory of immunity was Ilya Mechnikov, who revealed phagocytosis in 1882. With Louis Pasteur's germ theory of disease, the fledgling science of immunology began to explain how bacteria caused disease, and how, following infection, the human body gained the ability to resist further infections. In 1888 Emile Roux and Alexandre Yersin isolated diphtheria toxin, and following the 1890 discovery by Behring and Kitasato of antitoxin-based immunity to diphtheria and tetanus, the antitoxin became the first major success of modern therapeutic immunology. In Europe, the induction of active immunity emerged in an attempt to contain smallpox. Immunization has existed in various forms for at least a thousand years, without the terminology. The earliest use of immunization is unknown, but, about 1000 AD, the Chinese began practicing a form of immunization by drying and inhaling powders derived from the crusts of smallpox lesions. 
Around the 15th century in India, the Ottoman Empire, and east Africa, the practice of inoculation (poking the skin with powdered material derived from smallpox crusts) was quite common. This practice was first promoted in the west in 1721 by Lady Mary Wortley Montagu, although inoculation had been reported in Wales since the early 17th century. In 1798, Edward Jenner introduced the far safer method of deliberate infection with cowpox virus (smallpox vaccine), which caused a mild infection that also induced immunity to smallpox. By 1800, the procedure was referred to as vaccination. To avoid confusion, smallpox inoculation was increasingly referred to as variolation, and it became common practice to use this term without regard for chronology. The success and general acceptance of Jenner's procedure would later drive the general nature of vaccination developed by Pasteur and others towards the end of the 19th century. In 1891, Pasteur widened the definition of vaccine in honour of Jenner, and it then became essential to qualify the term by referring to polio vaccine, measles vaccine etc. Passive immunity Passive immunity is the immunity acquired by the transfer of ready-made antibodies from one individual to another. Passive immunity can occur naturally, such as when maternal antibodies are transferred to the foetus through the placenta, and can also be induced artificially, when high levels of human (or horse) antibodies specific for a pathogen or toxin are transferred to non-immune individuals. Passive immunization is used when there is a high risk of infection and insufficient time for the body to develop its own immune response, or to reduce the symptoms of ongoing or immunosuppressive diseases. Passive immunity provides immediate protection, but the body does not develop memory; therefore the patient is at risk of being infected by the same pathogen later. Naturally acquired passive immunity A fetus naturally acquires passive immunity from its mother during pregnancy. Maternal passive immunity is antibody-mediated immunity. The mother's antibodies (MatAb) are passed through the placenta to the fetus by an FcRn receptor on placental cells. This occurs around the third month of gestation. IgG is the only antibody isotype that can pass through the placenta. Passive immunity is also provided through the transfer of IgA antibodies found in breast milk that are transferred to the gut of a nursing infant, protecting against bacterial infections, until the newborn can synthesize its own antibodies. Colostrum present in mother's milk is an example of passive immunity. Artificially acquired passive immunity Artificially acquired passive immunity is a short-term immunization induced by the transfer of antibodies, which can be administered in several forms; as human or animal blood plasma, as pooled human immunoglobulin for intravenous (IVIG) or intramuscular (IG) use, and in the form of monoclonal antibodies (MAb). Passive transfer is used prophylactically in the case of immunodeficiency diseases, such as hypogammaglobulinemia. 
It is also used in the treatment of several types of acute infection, and to treat poisoning. Immunity derived from passive immunization lasts for only a short period of time, and there is also a potential risk for hypersensitivity reactions, and serum sickness, especially from gamma globulin of non-human origin. The artificial induction of passive immunity has been used for over a century to treat infectious disease, and before the advent of antibiotics, was often the only specific treatment for certain infections. Immunoglobulin therapy continued to be a first line therapy in the treatment of severe respiratory diseases until the 1930s, even after sulfonamide antibiotics were introduced. Transfer of activated T-cells Passive or "adoptive transfer" of cell-mediated immunity is conferred by the transfer of "sensitized" or activated T-cells from one individual into another. It is rarely used in humans because it requires histocompatible (matched) donors, which are often difficult to find. In unmatched donors this type of transfer carries severe risks of graft versus host disease. It has, however, been used to treat certain diseases including some types of cancer and immunodeficiency. This type of transfer differs from a bone marrow transplant, in which (undifferentiated) hematopoietic stem cells are transferred. Active immunity When B cells and T cells are activated by a pathogen, memory B-cells and T-cells develop, and the primary immune response results. Throughout the lifetime of an animal, these memory cells will "remember" each specific pathogen encountered, and can mount a strong secondary response if the pathogen is detected again. The primary and secondary responses were first described in 1921 by English immunologist Alexander Glenny, although the mechanism involved was not discovered until later. This type of immunity is both active and adaptive because the body's immune system prepares itself for future challenges. Active immunity often involves both the cell-mediated and humoral aspects of immunity as well as input from the innate immune system. Naturally acquired Naturally acquired active immunity occurs as the result of an infection. When a person is exposed to a live pathogen and develops a primary immune response, this leads to immunological memory. Many disorders of immune system function can affect the formation of active immunity, such as immunodeficiency (both acquired and congenital forms) and immunosuppression. Artificially acquired Artificially acquired active immunity can be induced by a vaccine, a substance that contains antigen. A vaccine stimulates a primary response against the antigen without causing symptoms of the disease. The term vaccination was coined by Richard Dunning, a colleague of Edward Jenner, and adapted by Louis Pasteur for his pioneering work in vaccination. The method Pasteur used entailed treating the infectious agents for those diseases, so they lost the ability to cause serious disease. Pasteur adopted the name vaccine as a generic term in honor of Jenner's discovery, which Pasteur's work built upon. In 1807, Bavaria became the first state to require its military recruits to be vaccinated against smallpox, as the spread of smallpox was linked to combat. Subsequently, the practice of vaccination would increase with the spread of war. There are four types of traditional vaccines: Inactivated vaccines are composed of micro-organisms that have been killed with chemicals and/or heat and are no longer infectious. 
Examples are vaccines against flu, cholera, plague, and hepatitis A. Most vaccines of this type are likely to require booster shots. Live, attenuated vaccines are composed of micro-organisms that have been cultivated under conditions which disable their ability to induce disease. These responses are more durable; however, they may still require booster shots. Examples include yellow fever, measles, rubella, and mumps. Toxoids are inactivated toxic compounds from micro-organisms, used in cases where these compounds (rather than the micro-organism itself) cause illness, and are given prior to an encounter with the toxin of the micro-organism. Examples of toxoid-based vaccines include tetanus and diphtheria. Subunit, recombinant, polysaccharide, and conjugate vaccines are composed of small fragments or pieces from a pathogenic (disease-causing) organism. A characteristic example is the subunit vaccine against Hepatitis B virus. In addition, there are some newer types of vaccines in use: Outer Membrane Vesicle (OMV) vaccines contain the outer membrane of a bacterium without any of its internal components or genetic material. Thus, ideally, they stimulate an immune response effective against the original bacteria without the risk of an infection. Genetic vaccines deliver nucleic acid that codes for an antigen into host cells, which then produce that antigen, stimulating an immune response. This category of vaccine includes DNA vaccines, RNA vaccines, and viral vector vaccines, which differ in the chemical form of nucleic acid and how it is delivered into host cells. A variety of vaccine types are under development; see Experimental Vaccine Types. Most vaccines are given by hypodermic or intramuscular injection as they are not absorbed reliably through the gut. Live attenuated polio and some typhoid and cholera vaccines are given orally in order to produce immunity based in the bowel. Hybrid immunity Hybrid immunity is the combination of natural immunity and artificial immunity. Studies of hybrid-immune people found that their blood was better able to neutralize the Beta and other variants of SARS-CoV-2 than that of never-infected, vaccinated people. Moreover, on 29 October 2021, the Centers for Disease Control and Prevention (CDC) concluded that "Multiple studies in different settings have consistently shown that infection with SARS-CoV-2 and vaccination each result in a low risk of subsequent infection with antigenically similar variants for at least 6 months. Numerous immunologic studies and a growing number of epidemiologic studies have shown that vaccinating previously infected individuals significantly enhances their immune response and effectively reduces the risk of subsequent infection, including in the setting of increased circulation of more infectious variants. ..." Genetics Immunity is determined genetically. The genomes of humans and animals encode antibodies and numerous other immune response genes. While many of these genes are generally required for active and passive immune responses (see sections above), there are also many genes that appear to be required for very specific immune responses. For instance, Tumor Necrosis Factor (TNF) is required for defense against tuberculosis in humans. Individuals with genetic defects in TNF may get recurrent and life-threatening infections with tuberculosis bacteria (Mycobacterium tuberculosis) but are otherwise healthy. They also seem to respond to other infections more or less normally. 
The condition is therefore called Mendelian susceptibility to mycobacterial disease (MSMD) and variants of it can be caused by other genes related to interferon production or signaling (e.g. by mutations in the genes IFNG, IL12B, IL12RB1, IL12RB2, IL23R, ISG15, MCTS1, RORC, TBX21, TYK2, CYBB, JAK1, IFNGR1, IFNGR2, STAT1, USP18, IRF1, IRF8, NEMO, SPPL2A). See also Antiserum Antivenin Cell-mediated immunity Herd immunity Heterosubtypic immunity Hoskins effect Humoral immunity Immunology Inoculation Premunity Vaccine-naive Virgin soil epidemic References External links The Center for Modeling Immunity to Enteric Pathogens (MIEP) Immunology
0.774056
0.992558
0.768295
Emergency medicine
Emergency medicine is the medical speciality concerned with the care of illnesses or injuries requiring immediate medical attention. Emergency medicine physicians (often called "ER doctors" in the United States) specialize in providing care for unscheduled and undifferentiated patients of all ages. As first-line providers, in coordination with emergency medical services, they are primarily responsible for initiating resuscitation and stabilization and performing the initial investigations and interventions necessary to diagnose and treat illnesses or injuries in the acute phase. Emergency medical physicians generally practice in hospital emergency departments, pre-hospital settings via emergency medical services, and intensive care units. Still, they may also work in primary care settings such as urgent care clinics. Sub-specializations of emergency medicine include disaster medicine, medical toxicology, point-of-care ultrasonography, critical care medicine, emergency medical services, hyperbaric medicine, sports medicine, palliative care, and aerospace medicine. Various models for emergency medicine exist internationally. In countries following the Anglo-American model, emergency medicine was initially provided by surgeons, general practitioners, and other generalist physicians. However, in recent decades it has become recognised as a speciality in its own right, with its own training programmes and academic posts, and the speciality is now a popular choice among medical students and newly qualified medical practitioners. By contrast, in countries following the Franco-German model, the speciality does not exist, and emergency medical care is instead provided directly by anesthesiologists (for critical resuscitation), surgeons, specialists in internal medicine, paediatricians, cardiologists or neurologists as appropriate. Emergency medicine is still evolving in developing countries, and international emergency medicine programs offer hope of improving primary emergency care where resources are limited. Scope Emergency medicine is a medical speciality: a field of practice based on the knowledge and skills required to prevent, diagnose, and manage acute and urgent aspects of illness and injury affecting patients of all age groups with a full spectrum of undifferentiated physical and behavioural disorders. It further encompasses an understanding of the development of pre-hospital and in-hospital emergency medical systems and the skills necessary for this development. The field of emergency medicine encompasses the acute care of internal medical and surgical conditions. In many modern emergency departments, emergency physicians see many patients, treating their illnesses and arranging for disposition, either admitting them to the hospital or releasing them after treatment as necessary. They also provide episodic primary care to patients during off-hours and to those who do not have primary care providers. Most patients present to emergency departments with low-acuity conditions (such as minor injuries or exacerbations of chronic disease), but a small proportion will be critically ill or injured. Therefore, the emergency physician requires broad knowledge and procedural skills, often including surgical procedures, trauma resuscitation, advanced cardiac life support and advanced airway management. 
They must have some of the core skills from many medical specialities: the ability to resuscitate a patient (intensive care medicine), manage a difficult airway (anesthesiology), suture a complex laceration (plastic surgery), set a fractured bone or dislocated joint (orthopaedic surgery), treat a heart attack (cardiology), manage strokes (neurology), work up a pregnant patient with vaginal bleeding (obstetrics and gynaecology), control a patient with mania (psychiatry), stop a severe nosebleed (otolaryngology), place a chest tube (cardiothoracic surgery), and conduct and interpret x-rays and ultrasounds (radiology). This generalist approach can obviate barrier-to-care issues seen in systems without specialists in emergency medicine, where patients requiring immediate attention are instead managed from the outset by speciality doctors such as surgeons or internal medicine physicians. However, it may also create barriers if acute and critical care specialities become disconnected from emergency care. Emergency medicine is distinct from urgent care, which refers to primary healthcare for less emergent medical issues, but there is obvious overlap, and many emergency physicians work in urgent care settings. Emergency medicine also includes many aspects of acute primary care and shares with family medicine the uniqueness of seeing all patients regardless of age, gender or organ system. The emergency physician workforce also includes many competent physicians who have medical skills from other specialities. Physicians specializing in emergency medicine can enter fellowships to receive credentials in subspecialties such as palliative care, critical care medicine, medical toxicology, wilderness medicine, pediatric emergency medicine, sports medicine, disaster medicine, tactical medicine, ultrasound, pain medicine, pre-hospital emergency medicine, or undersea and hyperbaric medicine. The practice of emergency medicine is often quite different in rural areas where there are far fewer other specialities and healthcare resources. In these areas, family physicians with additional skills in emergency medicine often staff emergency departments. Rural emergency physicians may be the only health care providers in the community and require skills that include primary care and obstetrics. Work patterns Patterns vary by country and region. In the United States, the employment arrangements of emergency physician practices are either private (with a co-operative group of doctors staffing an emergency department under contract), institutional (physicians with or without an independent contractor relationship with the hospital), corporate (physicians with an independent contractor relationship with a third-party staffing company that services multiple emergency departments), or governmental (for example, when working within military services, public health services, veterans' benefit systems or other government agencies). In the United Kingdom, all consultants in emergency medicine work in the National Health Service, and there is little scope for private emergency practice. 
In other countries like Australia, New Zealand, or Turkey, emergency medicine specialists are almost always salaried employees of government health departments and work in public hospitals, with pockets of employment in private or non-government aeromedical rescue or transport services, as well as some private hospitals with emergency departments; they may be supplemented or backed by non-specialist medical officers, and visiting general practitioners. Rural emergency departments are sometimes run by general practitioners alone, sometimes with non-specialist qualifications in emergency medicine. History During the French Revolution, after seeing the speed with which the carriages of the French flying artillery maneuvered across the battlefields, French military surgeon Dominique Jean Larrey applied the idea of ambulances, or "flying carriages", for rapid transport of wounded soldiers to a central place where medical care was more accessible and practical. Larrey operated ambulances with trained crews of drivers, corpsmen and litter-bearers and had them bring the wounded to centralized field hospitals, effectively creating a forerunner of the modern MASH units. Dominique Jean Larrey is sometimes called the Father of Emergency Medicine for his strategies during the French wars. Emergency medicine as an independent medical speciality is relatively young. Before the 1960s and 1970s, hospital emergency departments (EDs) were generally staffed by physicians on staff at the hospital on a rotating basis, among them family physicians, general surgeons, internists, and a variety of other specialists. In many smaller emergency departments, nurses would triage patients, and physicians would be called in based on the type of injury or illness. Family physicians were often on call for the emergency department and recognized the need for dedicated emergency department coverage. Many of the pioneers of emergency medicine were family physicians and other specialists who saw a need for additional training in emergency care. During this period, physicians began to emerge who had left their respective practices to devote their work entirely to the ED. In the UK in 1952, Maurice Ellis was appointed as the first "casualty consultant" at Leeds General Infirmary. In 1967, the Casualty Surgeons Association was established with Maurice Ellis as its first president. In the US, the first such group was led by Dr James DeWitt Mills who, in 1961, along with four associate physicians (Dr Chalmers A. Loughridge, Dr William Weaver, Dr John McDade, and Dr Steven Bednar), established 24/7 year-round emergency care at Alexandria Hospital in Alexandria, Virginia, which became known as the "Alexandria Plan". It was not until the founding of the American College of Emergency Physicians (ACEP) by Dr. John Wiegenstein, the recognition of emergency medicine training programs by the AMA and the AOA, and a historic vote by the American Board of Medical Specialties in 1979 that emergency medicine became a recognized medical speciality in the US. The first emergency medicine residency program in the world began in 1970 at the University of Cincinnati. Furthermore, the first department of emergency medicine at a US medical school was established in 1971 at the University of Southern California. The second residency program in the United States soon followed at what was then called Hennepin County General Hospital in Minneapolis, with two residents entering the program in 1971. 
In 1990 the UK's Casualty Surgeons Association changed its name to the British Association for Accident and Emergency Medicine and subsequently became the British Association for Emergency Medicine (BAEM) in 2004. In 1993, an intercollegiate Faculty of Accident and Emergency Medicine (FAEM) became a "daughter college" of six royal medical colleges in England and Scotland to arrange professional examinations and training. In 2005, the BAEM and the FAEM merged to form the College of Emergency Medicine, now the Royal College of Emergency Medicine, which conducts membership and fellowship examinations and publishes guidelines and standards for the practice of emergency medicine. Financing and practice organization Reimbursement Many hospitals and care centres feature departments of emergency medicine, where patients can receive acute care without an appointment. While many patients are treated for life-threatening injuries, others utilize the emergency department (ED) for non-urgent reasons (defined as "visits for conditions for which a delay of several hours would not increase the likelihood of an adverse outcome"), such as headaches or a cold. As such, EDs can adjust staffing ratios and designate an area of the department for faster patient turnover to accommodate various patient needs and volumes. Policies have been improved to better support ED staff (such as emergency medical technicians and paramedics). Mid-level providers such as physician assistants and nurse practitioners direct patients towards more appropriate medical settings, such as their primary care physician, urgent care clinics or detoxification facilities. The emergency department, welfare programs, and healthcare clinics serve as a critical part of the healthcare safety net for uninsured patients who cannot afford medical treatment or adequately utilize their coverage. In emergency departments in Australia, the government utilises an "activity-based funding and management" model, meaning that emergency departments are allocated money based on the number of patients and the complexity of their cases or illnesses. However, rural emergency departments of Australia are funded under the principle of providing the necessary equipment and staffing levels required to provide safe and adequate care, not necessarily on the number of patients. Compensation Emergency physicians are compensated at a higher rate than some other specialities, ranking 10th out of 26 physician specialities in 2015, at an average salary of $306,000 annually. They are compensated in the mid-range (averaging $13,000 annually) for non-patient activities, such as speaking engagements or acting as an expert witness; they also saw a 12% increase in salary from 2014 to 2015 (which was not out of line with many other physician specialities that year). While emergency physicians work 8–12 hour shifts and do not tend to work on-call, the high level of stress and the need for solid diagnostic and triage capabilities for the undifferentiated, acute patient contribute to arguments justifying higher salaries for these physicians. Emergency care must be available every hour of every day and requires a doctor to be available on-site 24/7, unlike an outpatient clinic or other hospital departments, which have more limited hours and may only call a physician in when needed. The necessity to have a physician on staff and all other diagnostic services available every hour of every day is thus a costly arrangement for hospitals. 
Payment systems American health payment systems are undergoing significant reform efforts, which include compensating emergency physicians through "pay for performance" incentives and penalty measures under commercial and public health programs, including Medicare and Medicaid. This payment reform aims to improve the quality of care and control costs, despite differing opinions on whether the existing evidence shows that this payment approach is effective in emergency medicine. Initially, these incentives would only target primary care providers (PCPs), but some would argue that emergency medicine is primary care, as patients do not need a referral to visit the ED. In one such program, two specific conditions listed were directly tied to patients frequently seen by emergency medical providers: acute myocardial infarction and pneumonia. (See: Hospital Quality Incentive Demonstration.) There are some challenges with implementing these quality-based incentives in emergency medicine in that patients are often not given a definitive diagnosis in the ED, making it challenging to allocate payments through coding. Additionally, adjustments based on patient risk-level and multiple co-morbidities for complex patients further complicate attribution of positive or negative health outcomes. It is not easy to assess how much of the cost directly results from the emergent condition treated in acute care settings. It is also difficult to quantify the savings due to preventive care during emergency treatment (i.e. workup, stabilizing treatments, coordination of care and discharge, rather than a hospital admission). Thus, ED providers tend to support a modified fee-for-service model over other payment systems. Overutilization Some patients without health insurance utilize EDs as their primary form of medical care. Because these patients do not utilize insurance or primary care, emergency medical providers often face overutilization and financial loss, especially since many patients cannot pay for their care (see below). ED overuse produces $38 billion in wasteful spending each year (i.e. care delivery and coordination failures, over-treatment, administrative complexity, pricing failures, and fraud). Moreover, it unnecessarily drains departmental resources, reducing the quality of care for all patients. While overuse is not limited to the uninsured, the uninsured constitute a growing proportion of non-urgent ED visits. Insurance coverage can help mitigate overutilization by improving access to alternative forms of care and lowering the need for emergency visits. A common misconception pegs frequent ED visitors as a significant factor in wasteful spending. However, frequent ED users make up a small portion of those contributing to overutilization and are often insured. Uncompensated care Injury and illness are often unforeseen, and patients of lower socioeconomic status are especially susceptible to being suddenly burdened with the cost of a necessary ED visit. For example, in the event that a patient is unable to pay for medical care received, the hospital, under the Emergency Medical Treatment and Active Labor Act (EMTALA), is obligated to treat emergency conditions regardless of a patient's ability to pay and therefore faces an economic loss for this uncompensated care. Estimates suggest that over half (approximately 55%) of all quantifiable emergency care is uncompensated, and inadequate reimbursement has led to the closure of many EDs. 
Policy changes (such as the Affordable Care Act) are expected to decrease the number of uninsured people and thereby reduce uncompensated care. In addition to decreasing the uninsured rate, ED overutilization might be reduced by improving patient access to primary care and increasing patient flow to alternative care centres for non-life-threatening injuries. Financial disincentives, patient education, and improved management for patients with chronic diseases can also reduce overutilization and help manage costs of care. Moreover, physician knowledge of prices for treatment and analyses, discussions on costs with their patients, and a changing culture away from defensive medicine can improve cost-effective use. A transition towards more value-based care in the ED is an avenue by which providers can contain costs. EMTALA Doctors that work in the EDs of hospitals receiving Medicare funding are subject to the provisions of EMTALA. The US Congress enacted EMTALA in 1986 to curtail "patient dumping", a practice whereby patients were refused medical care for economic or other non-medical reasons. Since its enactment, ED visits have substantially increased, with one study showing a rise in visits of 26% (which is more than double the increase in population over the same period). While more individuals are receiving care, a lack of funding and ED overcrowding may be affecting quality. To comply with the provisions of EMTALA, hospitals, through their ED physicians, must provide medical screening and stabilize the emergency medical conditions of anyone who presents at a hospital ED with patient capacity. EMTALA holds both the hospital and the responsible ED physician liable for civil penalties of up to $50,000 for failing to provide care to those in need. While both the Office of Inspector General, U.S. Department of Health and Human Services (OIG) and private citizens can bring an action under EMTALA, courts have uniformly held that ED physicians can only be held liable if the case is prosecuted by OIG (whereas hospitals are subject to penalties regardless of who brings the suit). Additionally, the Centers for Medicare and Medicaid Services (CMS) can discontinue provider status under Medicare for physicians that do not comply with EMTALA. Liability also extends to on-call physicians that fail to respond to an ED request to come to the hospital to provide service. While the goals of EMTALA are laudable, commentators have noted that it appears to have created a substantial unfunded burden on the resources of hospitals and emergency physicians. As a result of financial difficulty, 12.6% of EDs in the US closed between 1991 and 2011. Care delivery in different ED settings Rural Although emergency medicine has only emerged as a distinct practice over the past few decades, its delivery has significantly increased and evolved across diverse settings, which differ in cost, provider availability and overall usage. Before the Affordable Care Act (ACA), emergency medicine was leveraged primarily by "uninsured or underinsured patients, women, children, and minorities, all of whom frequently face barriers to accessing primary care". While this still exists today, as mentioned above, it is critical to consider the location in which care is delivered to understand the population and system challenges related to overutilization and high cost. 
In rural communities where provider and ambulatory facility shortages exist, a primary care physician (PCP) in the ED with general knowledge is likely to be the only source of health care for a population, as specialists and other health resources are generally unavailable due to a lack of funding and of providers willing to serve in these areas. As a result, the incidence of complex co-morbidities not managed by the appropriate provider results in worse health outcomes and eventually costlier care that extends beyond rural communities. Though typically quite separated, PCPs in rural areas must partner with larger health systems to comprehensively address the complex needs of their community, improve population health, and implement strategies such as telemedicine to improve health outcomes and reduce ED utilization for preventable illnesses. (See: Rural health.) Urban Alternatively, emergency medicine in urban areas consists of diverse provider groups, including physicians, physician assistants, nurse practitioners and registered nurses who coordinate with specialists in both inpatient and outpatient facilities to address patients' needs, more specifically in the ED. For all systems, regardless of funding source, EMTALA requires EDs to conduct a medical screening examination for anyone who presents at the department, irrespective of ability to pay. Non-profit hospitals and health systems – as required by the ACA – must provide a certain threshold of charity care "by actively ensuring that those who qualify for financial assistance get it, by charging reasonable rates to uninsured patients and by avoiding extraordinary collection practices." While there are limitations, this mandate provides support to many in need. That said, despite policy efforts and increased funding and federal reimbursement in urban areas, the triple aim (of improving patient experience, enhancing population health, and reducing the per-capita cost of care) remains a challenge without providers' and payers' collaboration to increase access to preventive care and decrease ED usage. As a result, many experts support the notion that emergency medical services should only serve immediate risks in urban and rural areas. Patient–provider relationships As stated above, EMTALA includes provisions that protect patients from being turned away or transferred before adequate stabilisation. Upon making contact with a patient, EMS providers are responsible for diagnosing and stabilising the patient's condition without regard for the ability to pay. In the pre-hospital setting, providers must exercise appropriate judgement in choosing a suitable hospital for transport. Hospitals can only turn away incoming ambulances if they are on diversion and incapable of providing adequate care. However, once a patient has arrived on hospital property, care must be provided. At the hospital, a triage nurse is the first point of contact and determines the appropriate level of care needed. According to Mead v. Legacy Health System, a patient-physician relationship is established when "the physician takes an affirmative action with regard to the care of the patient". Initiating such a relationship forms a legal contract in which the physician must continue to provide treatment or adequately terminate the relationship. This legal responsibility can extend to physician consultations and on-call physicians even without direct patient contact. 
In emergency medicine, termination of the patient–provider relationship prior to stabilization or without handoff to another qualified provider is considered abandonment. In order to initiate an outside transfer, a physician must verify that the next hospital can provide a similar or higher level of care. Hospitals and physicians must also ensure that the patient's condition will not be further aggravated by the transfer process. The setting of emergency medicine presents a challenge for delivering high quality, patient-centered care. Clear, effective communication can be particularly difficult due to noise, frequent interruptions, and high patient turnover. The Society for Academic Emergency Medicine has identified five essential tasks for patient-physician communication: establishing rapport, gathering information, giving information, providing comfort, and collaboration. The miscommunication of patient information is a significant source of medical error; minimising shortcomings in communication remains a topic of current and future research. Medical error Many circumstances, including the regular transfer of patients in emergency treatment and crowded, noisy and chaotic ED environments, make emergency medicine particularly susceptible to medical error and near misses. One study identified an error rate of 18 per 100 registered patients in one particular academic ED. Another study found that where a lack of teamwork (i.e. poor communication, lack of team structure, lack of cross-monitoring) was implicated in a particular incident of ED medical error, "an average of 8.8 teamwork failures occurred per case [and] more than half of the deaths and permanent disabilities that occurred were judged avoidable." Particular cultural (i.e. "a focus on the errors of others and a 'blame-and-shame' culture") and structural (i.e. lack of standardisation and equipment incompatibilities) aspects of emergency medicine often result in a lack of disclosure of medical error and near misses to patients and other caregivers. While concerns about malpractice liability are one reason why disclosure of medical errors is not made, some have noted that disclosing the error and providing an apology can mitigate malpractice risk. Ethicists uniformly agree that the disclosure of a medical error that causes harm is a care provider's duty. The critical components of the disclosure include "honesty, explanation, empathy, apology, and the chance to lessen the chance of future errors" (represented by the mnemonic HEEAL). The nature of emergency medicine is such that error will likely always be a substantial risk of emergency care. However, maintaining public trust through open communication regarding a harmful error can help patients and physicians constructively address problems when they occur. Treatments Emergency medicine is a primary or first-contact point of care for patients requiring the use of the health care system. Specialists in emergency medicine are required to possess specialist skills in acute illness diagnosis and resuscitation. Emergency physicians are responsible for providing immediate recognition, evaluation, care, and stabilisation to adult and pediatric patients in response to acute illness and injury. Emergency medical physicians treat a wide range of cases requiring broad knowledge, dealing with everything from mental illness to physical injury and anything in between. 
A typical treatment process involves investigation, then diagnosis, and then either treatment or admission of the patient. In terms of procedures, they cover a broad range, including the treatment of gunshot wounds (GSWs), head and body trauma, gastrointestinal infections, acute psychiatric episodes, seizures and much more. They are among the most highly trained physicians and are often the first point of care for many patients in emergency situations. Training There are a variety of international models for emergency medicine training. Among countries with well-developed training programs, there are two different models: a "specialist" model and a "multidisciplinary" model. Additionally, in some countries, the emergency medicine specialist rides in the ambulance. For example, in France and Germany, the physician, often an anesthesiologist, rides in the ambulance and provides stabilising care at the scene. The patient is directed to the appropriate hospital department, so emergency care is much more multidisciplinary than in the Anglo-American model. In countries such as the US, the United Kingdom, Canada and Australia, ambulances crewed by paramedics and emergency medical technicians respond to out-of-hospital emergencies and transport patients to emergency departments, meaning there is more dependence on paramedics and EMTs for on-scene care. Emergency physicians are therefore more "specialists", since all patients are taken to the emergency department. Most developing countries follow the Anglo-American model: the gold standard is three or four-year independent residency training programs in emergency medicine. Some countries develop training programs based on a primary care foundation with additional emergency medicine training. In developing countries, there is an awareness that Western models may not be applicable and may not be the best use of limited health care resources. For example, speciality training and pre-hospital care in developed countries are too expensive and impractical for use in many developing countries with limited health care resources. International emergency medicine provides a critical global perspective and hope for improvement in these areas. A brief review of some of these programs follows: Argentina In Argentina, the SAE (Sociedad Argentina de Emergencias) is the leading organisation of emergency medicine. There are many residency programs. It is also possible to obtain certification through a two-year postgraduate university course after several years of ED experience. Australia and New Zealand The specialist medical college responsible for emergency medicine in Australia and New Zealand is the Australasian College for Emergency Medicine (ACEM). The training program is nominally seven years in duration, after which the trainee is awarded a Fellowship of ACEM, conditional upon passing all necessary assessments. Dual fellowship programs also exist for paediatric medicine (in conjunction with the Royal Australasian College of Physicians) and intensive care medicine (in conjunction with the College of Intensive Care Medicine). These programs nominally add one or more years to the ACEM training program. 
For medical doctors who are not (and do not wish to become) specialists in emergency medicine but who have a significant interest or workload in emergency departments, the ACEM provides non-specialist certificates and diplomas. The Australian College of Rural and Remote Medicine (ACRRM) is the body responsible for training and for upholding standards for the practice and provision of rural and remote medical care. Prospective rural generalists undertaking this four-year fellowship program have an opportunity to complete Advanced Specialised Training (AST) in emergency medicine. Belgium In Belgium there are three recognised ways to practice emergency medicine. Until 2005 there was no accredited emergency medicine program. Emergency medicine was performed by general practitioners (having followed a 240-hour course, Acute Medicine) or by specialists (surgeon, internal medicine, neurologist, anesthesiologist) with or without supra-speciality training in emergency medicine. Since 2005 residency training exists for acute medicine (3 years) or emergency medicine (6 years). At least 50% of the training is in the emergency department; the other part is a rotation between disciplines like pediatrics, surgery, orthopedic surgery, anesthesiology and critical care medicine. Alternatively, an attending physician with one of the following specialities (anesthesiology, internal medicine, cardiology, gastro-enterology, pneumology, rheumatology, urology, general surgery, plastic & reconstructive surgery, orthopedic surgery, neurology, neurosurgery, pediatrics) can follow a two-year supra-speciality program to become an emergency medicine specialist. Brazil In Brazil, the first emergency medicine residency program was created at Hospital Pronto Socorro de Porto Alegre in 1996. In 2002, the emergency medical services were standardized nationally with the creation of SAMU (Serviço de atendimento móvel de urgência), inspired by French EMS, which also provides training to its employees. The national emergency medicine association (ABRAMEDE – Associação Brasileira de Medicina de Emergência) was created in 2007. In 2008 the second residency program was started at Messejana Hospital in Fortaleza. Then, in 2015, emergency medicine was formally recognized as a medical specialty by the Brazilian Medical Association. After formal recognition, multiple residency programs were created nationwide (e.g. Universidade Federal de Minas Gerais in 2016 and Universidade de São Paulo in 2017). The residency consists of a three-year program with training in all emergency department specialties (i.e. internal medicine, surgery, pediatrics, orthopedics, OB/GYN), EMS and intensive care. Chile In Chile, emergency medicine began with the first speciality programs at the beginning of the 1990s, at the University of Chile and the University of Santiago of Chile. It has been a primary speciality legally recognised by the Ministry of Health since 2013. There are multiple training programs for specialists, notably at the University of Chile, Pontifical Catholic University of Chile, Clínica Alemana – Universidad del Desarrollo, San Sebastian University – MUE and University of Santiago of Chile (USACH). 
Currently, with the aim of strengthening the speciality at the national level, FOAMed (free open access medical education) initiatives in emergency medicine have emerged, and the #ChileEM initiative, which brings together the programs of the Universidad San Sebastián / MUE, Universidad Católica de Chile and Universidad de Chile, holds regular joint clinical meetings between the leading training programs that are open to the entire health team working in emergency care. The specialists already trained are grouped in the Chilean Society of Emergency Medicine (SOCHIMU). Canada The two routes to emergency medicine certification can be summarized as follows: A five-year residency leads to the designation of FRCP(EM) through the Royal College of Physicians and Surgeons of Canada (Emergency Medicine Board Certification – emergency medicine consultant). A one-year emergency medicine enhanced skills program following a two-year family medicine residency leads to the designation of CCFP(EM) through the College of Family Physicians of Canada (Advanced Competency Certification). The CFPC also allows those who have worked a minimum of four years, at a minimum of 400 hours per year, in emergency medicine to challenge the examination of special competence in emergency medicine and thus become specialized. CCFP(EM) emergency physicians outnumber FRCP(EM) physicians by a ratio of about 3 to 1, and they tend to work primarily as clinicians with a minor focus on academic activities such as teaching and research. FRCP(EM) Emergency Medicine Board specialists tend to congregate in academic centres and have more academically oriented careers, which emphasize administration, research, critical care, disaster medicine, and teaching. They also tend to sub-specialize in toxicology, critical care, pediatric emergency medicine, and sports medicine. Furthermore, the FRCP(EM) residency length allows more time for formal training in these areas. Physician assistants are currently practising in the field of emergency medicine in Canada. China The current post-graduate emergency medicine training process is highly complex in China. The first EM post-graduate training took place in 1984 at the Peking Union Medical College Hospital. Because speciality certification in EM has not been established, formal training is not required to practice emergency medicine in China. About a decade ago, emergency medicine residency training was centralized at the municipal levels, following the Ministry of Public Health guidelines. Residency programs in all hospitals are called residency training bases, which have to be approved by local health governments. These bases are hospital-based, but the residents are selected and managed by the municipal associations of medical education. These associations are also the authoritative bodies for setting up their residents' training curricula. All medical school graduates who want to practice medicine have to undergo five years of residency training at designated training bases, with the first three years of general rotation followed by two more years of speciality-centred training. Germany In Germany, emergency medicine is not handled as a specialization (Facharztrichtung), but any licensed physician can acquire an additional qualification in emergency medicine through an 80-hour course monitored by the respective "Ärztekammer" (medical association, responsible for licensing of physicians). 
Service as an emergency physician in an ambulance service is part of the specialization training in anaesthesiology. Emergency physicians usually work on a voluntary basis and are often anesthesiologists, but may be specialists of any kind. In particular, there is specialization training in pediatric intensive care. India India is an example of how family medicine can be a foundation for emergency medicine training. Many private hospitals and institutes have been providing emergency medicine training for doctors, nurses and paramedics since 1994, with certification programs varying from six months to three years. However, emergency medicine was only recognized as a separate speciality by the Medical Council of India in July 2009. Malaysia There are three universities (Universiti Sains Malaysia, Universiti Kebangsaan Malaysia, and Universiti Malaya) that offer master's degrees in emergency medicine – postgraduate training programs of four years in duration with clinical rotations, examinations and a dissertation. The first cohort of locally trained emergency physicians graduated in 2002. Saudi Arabia In Saudi Arabia, certification in emergency medicine requires completing the four-year Saudi Board of Emergency Medicine (SBEM) program, which is accredited by the Saudi Commission for Health Specialties (SCFHS). It requires passing a two-part examination, a first part and a final part (written and oral), to obtain the SBEM certificate, which is considered equivalent to a doctorate. Switzerland Emergency medicine is still not recognised as a fully fledged speciality in Switzerland, a country that only recently, during the COVID-19 outbreak, grasped the importance of having an organised acute medical speciality. Many attempts to organize the speciality have resulted in a subspecialist training pathway, but to this day, internal medicine, anesthesiology and surgery are still vocally opposed to an emergency medicine specialist title. United States Most programs are three years in duration, but some programs are four years long. There are several combined residencies offered with other programs, including family medicine, internal medicine and paediatrics. The US is well known for its excellence in emergency medicine residency programs, leading to some controversy about speciality certification. There are three ways to become board-certified in emergency medicine: The American Board of Emergency Medicine (ABEM) is for those with either Doctor of Medicine (MD) or Doctor of Osteopathic Medicine (DO) degrees. The ABEM is under the authority of the American Board of Medical Specialties. The American Osteopathic Board of Emergency Medicine (AOBEM) certifies only emergency physicians with a DO degree. It is under the authority of the American Osteopathic Association Bureau of Osteopathic Specialists. The Board of Certification in Emergency Medicine (BCEM) grants board certification in emergency medicine to physicians who have not completed an emergency medicine residency but have completed a residency in other fields (internists, family practitioners, paediatricians, general surgeons, and anesthesiologists). The BCEM is under the authority of the American Board of Physician Specialties. 
Several ABMS fellowships are available for emergency medicine graduates, including pre-hospital medicine (emergency medical services), international medicine, advanced resuscitation, hospice and palliative care, research, undersea and hyperbaric medicine, sports medicine, pain medicine, ultrasound, pediatric emergency medicine, disaster medicine, wilderness medicine, toxicology, and critical care medicine. In recent years, workforce data has led to a recognition of the need for additional training for primary care physicians who provide emergency care. This has led to several supplemental training programs in first-hour emergency care and a few fellowships in emergency medicine for family physicians. Funding for training "In 2010, there were 157 allopathic and 37 osteopathic emergency medicine residency programs, which collectively accept about 2,000 new residents each year. Studies have shown that attending emergency physician supervision of residents correlates to higher quality and more cost-effective practice, primarily when an emergency medicine residency exists." Medical education is primarily funded through the Medicare program; payments are given to hospitals for each resident. "Fifty-five per cent of ED payments come from Medicare, fifteen per cent from Medicaid, five per cent from private payment and twenty-five per cent from commercially insured patients." However, choices of physician specialities are not mandated by any agency or program, so even though emergency departments see many Medicare/Medicaid patients and thus receive much funding for training from these programs, there is still concern over a shortage of speciality-trained emergency medicine providers. United Kingdom In the United Kingdom, the Royal College of Emergency Medicine has a role in setting professional standards and assessing trainees. Emergency medical trainees enter speciality training after five or six years of medical school followed by two years of foundation training. Speciality training takes six years to complete, and success in the assessments and a set of five examinations results in the award of Fellowship of the Royal College of Emergency Medicine (FRCEM). Historically, emergency specialists were drawn from anaesthesia, medicine, and surgery. Many established EM consultants were surgically trained; some hold the fellowship of the Royal College of Surgeons of Edinburgh in accident and emergency – FRCSEd (A&E). Trainees in emergency medicine may dual accredit in intensive care medicine or seek sub-specialisation in paediatric emergency medicine. Turkey Emergency medicine residencies last four years in Turkey. These physicians must complete a two-year obligatory service in Turkey to qualify for their diploma. After this period, EM specialists can choose to work in private or governmental emergency departments. Pakistan The College of Physicians and Surgeons Pakistan accredited the training in emergency medicine in 2010. Emergency medicine training in Pakistan lasts for five years. The initial two years involve trainees rotating through three major areas: medicine and allied, surgery and allied, and critical care, with six months devoted to each; the remaining six months of the first two years are spent in the emergency department. In the last three years, trainee residents spend most of their time in the emergency room as senior residents. 
Certificate courses in ACLS, PALS, and ATLS, together with research and a dissertation, are required to complete the training successfully. At the end of five years, candidates become eligible to sit for the FCPS part II exam. After fulfilling the requirements, they become fellows of the College of Physicians and Surgeons Pakistan in emergency medicine. Institutions providing this training include Shifa International Hospitals Islamabad, Aga Khan University Hospital Karachi, POF Hospital Wah, Lady Reading Hospital Peshawar, Indus Hospital Karachi and Jinnah Post Graduate Medical Center Karachi, and Mayo Hospital, Lahore. Iran The first residency program in Iran started in 2002 at Iran University of Medical Sciences, and there are now standard three-year residency programs running in Tehran, Tabriz, Mashhad, Isfahan, and at some other universities. All these programs work under the supervision of the emergency medicine speciality board committee. There are now more than 200 board-certified emergency physicians in Iran, and the number is increasing. Ethical and medicolegal issues Ethical and medico-legal issues are embedded within the nature of emergency medicine. Issues surrounding competence, end-of-life care, and the right to refuse care are encountered daily within the emergency department. Of growing significance are the ethical issues and legal obligations that surround the Mental Health Act, as increasing numbers of suicide attempts and self-harm are seen in the emergency department. The Wooltorton case of 2007, in which a patient arrived at the emergency department after an overdose with a note specifying her request for no interventions, highlights the dichotomy that often exists between a physician's ethical obligation to "do no harm" and a patient's legal right to refuse care. See also Emergency medical services Emergency medical technician First aid Golden hour International emergency medicine Medical emergency Paramedic Pediatric emergency medicine Physician assistant Physician Pre-hospital emergency medicine Prehospital and Disaster Medicine Rescue squad Royal College of Emergency Medicine Traumatology References Further reading External links Emergency medicine MUE (Chile) International Federation for Emergency Medicine Association of Emergency Physicians Canadian Association of Emergency Physicians American Academy of Emergency Medicine American Association of Physician Specialists, Inc. American Board of Emergency Medicine American Board of Physician Specialties American College of Emergency Physicians College of Emergency Physician, Malaysia College of Emergency Medicine (United Kingdom) European Society for Emergency Medicine Society for Academic Emergency Medicine Hong Kong College of Emergency Medicine Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine Emergency Medicine Association of Turkey (EMAT) Emergency Physicians' Association of Turkey (EPAT) Australasian College of Emergency Medicine (ACEM) European Council for Disaster Medicine (ECDM) Medical mnemonics
Cholera
Cholera is an infection of the small intestine by some strains of the bacterium Vibrio cholerae. Symptoms may range from none, to mild, to severe. The classic symptom is large amounts of watery diarrhea lasting a few days. Vomiting and muscle cramps may also occur. Diarrhea can be so severe that it leads within hours to severe dehydration and electrolyte imbalance. This may result in sunken eyes, cold skin, decreased skin elasticity, and wrinkling of the hands and feet. Dehydration can cause the skin to turn bluish. Symptoms start two hours to five days after exposure. Cholera is caused by a number of types of Vibrio cholerae, with some types producing more severe disease than others. It is spread mostly by unsafe water and unsafe food that has been contaminated with human feces containing the bacteria. Undercooked shellfish is a common source. Humans are the only known host for the bacteria. Risk factors for the disease include poor sanitation, insufficient clean drinking water, and poverty. Cholera can be diagnosed by a stool test, or a rapid dipstick test, although the dipstick test is less accurate. Prevention methods against cholera include improved sanitation and access to clean water. Cholera vaccines that are given by mouth provide reasonable protection for about six months, and confer the added benefit of protecting against another type of diarrhea caused by E. coli. In 2017, the US Food and Drug Administration (FDA) approved a single-dose, live, oral cholera vaccine called Vaxchora for adults aged 18–64 who are travelling to an area of active cholera transmission. It offers limited protection to young children. People who survive an episode of cholera have long-lasting immunity for at least three years (the period tested). The primary treatment for affected individuals is oral rehydration salts (ORS), the replacement of fluids and electrolytes by using slightly sweet and salty solutions. Rice-based solutions are preferred. In children, zinc supplementation has also been found to improve outcomes. In severe cases, intravenous fluids, such as Ringer's lactate, may be required, and antibiotics may be beneficial. The choice of antibiotic is aided by antibiotic sensitivity testing. Cholera continues to affect an estimated 3–5 million people worldwide and causes 28,800–130,000 deaths a year. To date, seven cholera pandemics have occurred, with the most recent beginning in 1961, and continuing today. The illness is rare in high-income countries, and affects children most severely. Cholera occurs both as outbreaks and chronically in certain areas. Areas with an ongoing risk of disease include Africa and Southeast Asia. The risk of death among those affected is usually less than 5%, given improved treatment, but may be as high as 50% without access to such treatment. Descriptions of cholera are found as early as the 5th century BC in Sanskrit. In Europe, cholera was a term initially used to describe any kind of gastroenteritis, and was not used for this disease until the early 19th century. The study of cholera in England by John Snow between 1849 and 1854 led to significant advances in the field of epidemiology because of his insights about transmission via contaminated water, and his map of cases is regarded as an early recorded instance of epidemiological tracking. Signs and symptoms The primary symptoms of cholera are profuse diarrhea and vomiting of clear fluid. These symptoms usually start suddenly, half a day to five days after ingestion of the bacteria. 
The diarrhea is frequently described as "rice water" in nature and may have a fishy odor. An untreated person with cholera may produce large volumes of diarrhea a day. Severe cholera, without treatment, kills about half of affected individuals. If the severe diarrhea is not treated, it can result in life-threatening dehydration and electrolyte imbalances. Estimates of the ratio of asymptomatic to symptomatic infections have ranged from 3:1 to 100:1. Cholera has been nicknamed the "blue death" because a person's skin may turn bluish-gray from extreme loss of fluids. Fever is rare and should raise suspicion for secondary infection. Patients can be lethargic and might have sunken eyes, dry mouth, cold clammy skin, or wrinkled hands and feet. Kussmaul breathing, a deep and labored breathing pattern, can occur because of acidosis from stool bicarbonate losses and lactic acidosis associated with poor perfusion. Blood pressure drops due to dehydration, peripheral pulse is rapid and thready, and urine output decreases with time. Muscle cramping and weakness, altered consciousness, seizures, or even coma due to electrolyte imbalances are common, especially in children. Cause Transmission Cholera bacteria have been found in shellfish and plankton. Transmission is usually through the fecal-oral route of contaminated food or water caused by poor sanitation. Most cholera cases in developed countries are a result of transmission by food, while in developing countries it is more often water. Food transmission can occur when people harvest seafood such as oysters in waters infected with sewage, as Vibrio cholerae accumulates in planktonic crustaceans and the oysters eat the zooplankton. People infected with cholera often have diarrhea, and disease transmission may occur if this highly liquid stool, colloquially referred to as "rice-water", contaminates water used by others. A single diarrheal event can cause a one-million-fold increase in numbers of V. cholerae in the environment. The source of the contamination is typically other people with cholera when their untreated diarrheal discharge is allowed to get into waterways, groundwater or drinking water supplies. Drinking any contaminated water and eating any foods washed in the water, as well as shellfish living in the affected waterway, can cause a person to contract an infection. Cholera is rarely spread directly from person to person. V. cholerae also exists outside the human body in natural water sources, either by itself or through interacting with phytoplankton, zooplankton, or biotic and abiotic detritus. Drinking such water can also result in the disease, even without prior contamination through fecal matter. Selective pressures exist in the aquatic environment, however, that may reduce the virulence of V. cholerae. Specifically, animal models indicate that the transcriptional profile of the pathogen changes as it prepares to enter an aquatic environment. This transcriptional change results in a loss of ability of V. cholerae to be cultured on standard media, a phenotype referred to as 'viable but non-culturable' (VBNC) or more conservatively 'active but non-culturable' (ABNC). One study indicates that the culturability of V. cholerae drops 90% within 24 hours of entering the water, and furthermore that this loss in culturability is associated with a loss in virulence. Both toxic and non-toxic strains exist. Non-toxic strains can acquire toxicity through a temperate bacteriophage. 
Susceptibility About 100 million bacteria must typically be ingested to cause cholera in a normal healthy adult. This dose, however, is less in those with lowered gastric acidity (for instance those using proton pump inhibitors). Children are also more susceptible, with two- to four-year-olds having the highest rates of infection. Individuals' susceptibility to cholera is also affected by their blood type, with those with type O blood being the most susceptible. Persons with lowered immunity, such as persons with AIDS or malnourished children, are more likely to develop a severe case if they become infected. Any individual, even a healthy adult in middle age, can undergo a severe case, and each person's case should be measured by the loss of fluids, preferably in consultation with a professional health care provider. The cystic fibrosis genetic mutation known as delta-F508 in humans has been said to maintain a selective heterozygous advantage: heterozygous carriers of the mutation (who are not affected by cystic fibrosis) are more resistant to V. cholerae infections. In this model, the genetic deficiency in the cystic fibrosis transmembrane conductance regulator channel proteins interferes with bacteria binding to the intestinal epithelium, thus reducing the effects of an infection. Mechanism When consumed, most bacteria do not survive the acidic conditions of the human stomach. The few surviving bacteria conserve their energy and stored nutrients during the passage through the stomach by shutting down protein production. When the surviving bacteria exit the stomach and reach the small intestine, they must propel themselves through the thick mucus that lines the small intestine to reach the intestinal walls where they can attach and thrive. Once the cholera bacteria reach the intestinal wall, they no longer need the flagella to move. The bacteria stop producing the protein flagellin to conserve energy and nutrients by changing the mix of proteins that they express in response to the changed chemical surroundings. On reaching the intestinal wall, V. cholerae start producing the toxic proteins that give the infected person a watery diarrhea. This carries the multiplying new generations of V. cholerae bacteria out into the drinking water of the next host if proper sanitation measures are not in place. The cholera toxin (CTX or CT) is an oligomeric complex made up of six protein subunits: a single copy of the A subunit (part A), and five copies of the B subunit (part B), connected by a disulfide bond. The five B subunits form a five-membered ring that binds to GM1 gangliosides on the surface of the intestinal epithelium cells. The A1 portion of the A subunit is an enzyme that ADP-ribosylates G proteins, while the A2 chain fits into the central pore of the B subunit ring. Upon binding, the complex is taken into the cell via receptor-mediated endocytosis. Once inside the cell, the disulfide bond is reduced, and the A1 subunit is freed to bind with a human partner protein called ADP-ribosylation factor 6 (Arf6). Binding exposes its active site, allowing it to permanently ribosylate the Gs alpha subunit of the heterotrimeric G protein. This results in constitutive cAMP production, which in turn leads to the secretion of water, sodium, potassium, and bicarbonate into the lumen of the small intestine and rapid dehydration. The gene encoding the cholera toxin was introduced into V. cholerae by horizontal gene transfer. Virulent strains of V. 
cholerae carry a variant of a temperate bacteriophage called CTXφ. Microbiologists have studied the genetic mechanisms by which the V. cholerae bacteria turn off the production of some proteins and turn on the production of other proteins as they respond to the series of chemical environments they encounter, passing through the stomach, through the mucous layer of the small intestine, and on to the intestinal wall. Of particular interest have been the genetic mechanisms by which cholera bacteria turn on the protein production of the toxins that interact with host cell mechanisms to pump chloride ions into the small intestine, creating an ionic pressure which prevents sodium ions from entering the cell. The chloride and sodium ions create a salt-water environment in the small intestines, which through osmosis can pull up to six liters of water per day through the intestinal cells, creating the massive amounts of diarrhea. The host can become rapidly dehydrated unless treated properly. By inserting separate, successive sections of V. cholerae DNA into the DNA of other bacteria, such as E. coli that would not naturally produce the protein toxins, researchers have investigated the mechanisms by which V. cholerae responds to the changing chemical environments of the stomach, mucous layers, and intestinal wall. Researchers have discovered a complex cascade of regulatory proteins controls expression of V. cholerae virulence determinants. In responding to the chemical environment at the intestinal wall, the V. cholerae bacteria produce the TcpP/TcpH proteins, which, together with the ToxR/ToxS proteins, activate the expression of the ToxT regulatory protein. ToxT then directly activates expression of virulence genes that produce the toxins, causing diarrhea in the infected person and allowing the bacteria to colonize the intestine. Current research aims at discovering "the signal that makes the cholera bacteria stop swimming and start to colonize (that is, adhere to the cells of) the small intestine." Genetic structure Amplified fragment length polymorphism fingerprinting of the pandemic isolates of V. cholerae has revealed variation in the genetic structure. Two clusters have been identified: Cluster I and Cluster II. For the most part, Cluster I consists of strains from the 1960s and 1970s, while Cluster II largely contains strains from the 1980s and 1990s, based on the change in the clone structure. This grouping of strains is best seen in the strains from the African continent. Antibiotic resistance In many areas of the world, antibiotic resistance is increasing within cholera bacteria. In Bangladesh, for example, most cases are resistant to tetracycline, trimethoprim-sulfamethoxazole, and erythromycin. Rapid diagnostic assay methods are available for the identification of multi-drug resistant cases. New generation antimicrobials have been discovered which are effective against cholera bacteria in in vitro studies. Diagnosis A rapid dipstick test is available to determine the presence of V. cholerae. In those samples that test positive, further testing should be done to determine antibiotic resistance. In epidemic situations, a clinical diagnosis may be made by taking a patient history and doing a brief examination. Treatment via hydration and over-the-counter hydration solutions can be started without or before confirmation by laboratory analysis, especially where cholera is a common problem. 
Stool and swab samples collected in the acute stage of the disease, before antibiotics have been administered, are the most useful specimens for laboratory diagnosis. If an epidemic of cholera is suspected, the most common causative agent is V. cholerae O1. If V. cholerae serogroup O1 is not isolated, the laboratory should test for V. cholerae O139. However, if neither of these organisms is isolated, it is necessary to send stool specimens to a reference laboratory. Infection with V. cholerae O139 should be reported and handled in the same manner as that caused by V. cholerae O1. The associated diarrheal illness should be referred to as cholera and must be reported in the United States. Prevention The World Health Organization (WHO) recommends focusing on prevention, preparedness, and response to combat the spread of cholera. They also stress the importance of an effective surveillance system. Governments can play a role in all of these areas. Water, sanitation and hygiene Although cholera may be life-threatening, prevention of the disease is normally straightforward if proper sanitation practices are followed. In developed countries, due to their nearly universal advanced water treatment and sanitation practices, cholera is rare. For example, the last major outbreak of cholera in the United States occurred in 1910–1911. Cholera is mainly a risk in developing countries in those areas where access to WASH (water, sanitation and hygiene) infrastructure is still inadequate. Effective sanitation practices, if instituted and adhered to in time, are usually sufficient to stop an epidemic. There are several points along the cholera transmission path at which its spread may be halted: Sterilization: Proper disposal and treatment of all materials that may have come into contact with the feces of other people with cholera (e.g., clothing, bedding, etc.) are essential. These should be sanitized by washing in hot water, using chlorine bleach if possible. Hands that touch cholera patients or their clothing, bedding, etc., should be thoroughly cleaned and disinfected with chlorinated water or other effective antimicrobial agents. Sewage and fecal sludge management: In cholera-affected areas, sewage and fecal sludge need to be treated and managed carefully in order to stop the spread of this disease via human excreta. Provision of sanitation and hygiene is an important preventative measure. Open defecation, release of untreated sewage, or dumping of fecal sludge from pit latrines or septic tanks into the environment need to be prevented. In many cholera affected zones, there is a low degree of sewage treatment. Therefore, the implementation of dry toilets that do not contribute to water pollution, as they do not flush with water, may be an interesting alternative to flush toilets. Sources: Warnings about possible cholera contamination should be posted around contaminated water sources with directions on how to decontaminate the water (boiling, chlorination etc.) for possible use. Water purification: All water used for drinking, washing, or cooking should be sterilized by either boiling, chlorination, ozone water treatment, ultraviolet light sterilization (e.g., by solar water disinfection), or antimicrobial filtration in any area where cholera may be present. Chlorination and boiling are often the least expensive and most effective means of halting transmission. 
Cloth filters or sari filtration, though very basic, have significantly reduced the occurrence of cholera when used in poor villages in Bangladesh that rely on untreated surface water. Better antimicrobial filters, like those present in advanced individual water treatment hiking kits, are most effective. Public health education and adherence to appropriate sanitation practices are of primary importance to help prevent and control transmission of cholera and other diseases. Handwashing with soap or ash after using a toilet and before handling food or eating is also recommended for cholera prevention by WHO Africa. Surveillance Surveillance and prompt reporting allow for containing cholera epidemics rapidly. Cholera exists as a seasonal disease in many endemic countries, occurring annually mostly during rainy seasons. Surveillance systems can provide early alerts to outbreaks, leading to a coordinated response, and assist in the preparation of preparedness plans. Efficient surveillance systems can also improve the risk assessment for potential cholera outbreaks. Understanding the seasonality and location of outbreaks provides guidance for improving cholera control activities for the most vulnerable. For prevention to be effective, it is important that cases be reported to national health authorities. Vaccination Spanish physician Jaume Ferran i Clua developed the first successful cholera inoculation in 1885, the first to immunize humans against a bacterial disease. His vaccine and inoculation program were rather controversial and were rejected by his peers and several investigation commissions, but their effectiveness was eventually demonstrated and recognized: out of the 30,000 people he vaccinated, only 54 died. Russian-Jewish bacteriologist Waldemar Haffkine also developed a human cholera vaccine in July 1892. He conducted a massive inoculation program in British India. Persons who survive an episode of cholera have long-lasting immunity for at least 3 years (the period tested). A number of safe and effective oral vaccines for cholera are available. The World Health Organization (WHO) has three prequalified oral cholera vaccines (OCVs): Dukoral, Shanchol, and Euvichol. Dukoral, an orally administered, inactivated whole-cell vaccine, has an overall efficacy of about 52% during the first year after being given and 62% in the second year, with minimal side effects. It is available in over 60 countries. However, it is not currently recommended by the Centers for Disease Control and Prevention (CDC) for most people traveling from the United States to endemic countries. The vaccine that the US Food and Drug Administration (FDA) recommends, Vaxchora, is an oral attenuated live vaccine that is effective for adults aged 18–64 as a single dose. One injectable vaccine was found to be effective for two to three years. The protective efficacy was 28% lower in children less than five years old. However, it has limited availability. Work is under way to investigate the role of mass vaccination. The WHO recommends immunization of high-risk groups, such as children and people with HIV, in countries where this disease is endemic. If people are immunized broadly, herd immunity results, with a decrease in the amount of contamination in the environment. WHO recommends that oral cholera vaccination be considered in areas where the disease is endemic (with seasonal peaks), as part of the response to outbreaks, or in a humanitarian crisis during which the risk of cholera is high. 
Oral Cholera Vaccine (OCV) has been recognized as an adjunct tool for prevention and control of cholera. The World Health Organization (WHO) has prequalified three cholera vaccines: Dukoral (SBL Vaccines), which contains a non-toxic B-subunit of cholera toxin and provides protection against V. cholerae O1; and two vaccines developed using the same transfer of technology, ShanChol (Shantha Biotec) and Euvichol (EuBiologics Co.), which are bivalent oral killed vaccines against O1 and O139. Oral cholera vaccination could be deployed in a diverse range of situations, from cholera-endemic areas to locations of humanitarian crises, but no clear consensus exists on its optimal use. Sari filtration Developed for use in Bangladesh, the "sari filter" is a simple and cost-effective appropriate technology method for reducing the contamination of drinking water. Used sari cloth is preferable but other types of used cloth can be used with some effect, though the effectiveness will vary significantly. Used cloth is more effective than new cloth, as the repeated washing reduces the space between the fibers. Water collected in this way has a greatly reduced pathogen count; though it will not necessarily be perfectly safe, it is an improvement for poor people with limited options. In Bangladesh this practice was found to decrease rates of cholera by nearly half. It involves folding a sari four to eight times. Between uses the cloth should be rinsed in clean water and dried in the sun to kill any bacteria on it. A nylon cloth appears to work as well but is not as affordable. Treatment Continued eating speeds the recovery of normal intestinal function. The WHO recommends this generally for cases of diarrhea regardless of the underlying cause. A CDC training manual specifically for cholera states: "Continue to breastfeed your baby if the baby has watery diarrhea, even when traveling to get treatment. Adults and older children should continue to eat frequently." Fluids The most common error in caring for patients with cholera is to underestimate the speed and volume of fluids required. In most cases, cholera can be successfully treated with oral rehydration therapy (ORT), which is highly effective, safe, and simple to administer. Rice-based solutions are preferred to glucose-based ones due to greater efficiency. In severe cases with significant dehydration, intravenous rehydration may be necessary. Ringer's lactate is the preferred solution, often with added potassium. Large volumes and continued replacement until diarrhea has subsided may be needed. Ten percent of a person's body weight in fluid may need to be given in the first two to four hours. This method was first tried on a mass scale during the Bangladesh Liberation War, and was found to have much success. Despite widespread beliefs, fruit juices and commercial fizzy drinks like cola are not ideal for rehydration of people with serious infections of the intestines, and their excessive sugar content may even harm water uptake. If commercially produced oral rehydration solutions are too expensive or difficult to obtain, solutions can be made. One such recipe calls for 1 liter of boiled water, 1/2 teaspoon of salt, 6 teaspoons of sugar, and added mashed banana for potassium and to improve taste. Electrolytes As there frequently is acidosis initially, the potassium level may be normal, even though large losses have occurred. As the dehydration is corrected, potassium levels may decrease rapidly, and thus need to be replaced. 
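To put the homemade recipe above in rough context, the minimal sketch below compares its approximate sodium and glucose concentrations with the WHO reduced-osmolarity ORS targets of roughly 75 mmol/L sodium and 75 mmol/L glucose. The teaspoon masses used here (about 2.5 g for half a teaspoon of salt and about 4 g per level teaspoon of sugar) are illustrative assumptions, not measured values, so the result is only a back-of-the-envelope estimate.

```python
# Rough sanity check of the homemade ORS recipe described above.
# The teaspoon masses are assumptions; real spoonfuls vary considerably.

SALT_G = 2.5           # assumed mass of 1/2 teaspoon of table salt (NaCl), grams
SUGAR_G = 6 * 4.0      # assumed mass of 6 level teaspoons of sucrose, grams
WATER_L = 1.0          # recipe volume: 1 litre of boiled water

NACL_MOLAR_MASS = 58.44     # g/mol
SUCROSE_MOLAR_MASS = 342.3  # g/mol

# Each mole of NaCl contributes one mole of sodium ions.
sodium_mmol_per_l = SALT_G / NACL_MOLAR_MASS * 1000 / WATER_L

# Each mole of sucrose hydrolyses in the gut to one mole of glucose (plus fructose),
# so available glucose is roughly equimolar with the sucrose added.
glucose_mmol_per_l = SUGAR_G / SUCROSE_MOLAR_MASS * 1000 / WATER_L

print(f"Approximate sodium:  {sodium_mmol_per_l:.0f} mmol/L (WHO reduced-osmolarity target ~75 mmol/L)")
print(f"Approximate glucose: {glucose_mmol_per_l:.0f} mmol/L (WHO reduced-osmolarity target ~75 mmol/L)")
```

Under these assumptions the homemade mix works out to roughly 43 mmol/L sodium and 70 mmol/L glucose, somewhat lower in sodium than the WHO formulation, which is one reason commercially produced sachets are preferred when they are available.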
Potassium replacement is best done with oral rehydration solution (ORS). Antibiotics Antibiotic treatments for one to three days shorten the course of the disease and reduce the severity of the symptoms. Use of antibiotics also reduces fluid requirements. People will recover without them, however, if sufficient hydration is maintained. The WHO only recommends antibiotics in those with severe dehydration. Doxycycline is typically used first line, although some strains of V. cholerae have shown resistance. Testing for resistance during an outbreak can help determine appropriate future choices. Other antibiotics proven to be effective include cotrimoxazole, erythromycin, tetracycline, chloramphenicol, and furazolidone. Fluoroquinolones, such as ciprofloxacin, also may be used, but resistance has been reported. Antibiotics improve outcomes in those who are both severely and not severely dehydrated. Azithromycin and tetracycline may work better than doxycycline or ciprofloxacin. Zinc supplementation In Bangladesh, zinc supplementation reduced the duration and severity of diarrhea in children with cholera when given with antibiotics and rehydration therapy as needed. It reduced the length of disease by eight hours and the amount of diarrhea stool by 10%. Supplementation also appears to be effective in both treating and preventing infectious diarrhea due to other causes among children in the developing world. Prognosis If people with cholera are treated quickly and properly, the mortality rate is less than 1%; however, with untreated cholera, the mortality rate rises to 50–60%. For certain genetic strains of cholera, such as the one present during the 2010 epidemic in Haiti and the 2004 outbreak in India, death can occur within two hours of becoming ill. Epidemiology Cholera affects an estimated 2.8 million people worldwide, and causes approximately 95,000 deaths a year (uncertainty range: 21,000–143,000). This occurs mainly in the developing world. In the early 1980s, death rates are believed to have still been higher than three million a year. It is difficult to calculate exact numbers of cases, as many go unreported due to concerns that an outbreak may have a negative impact on the tourism of a country. As of 2004, cholera remained both epidemic and endemic in many areas of the world. Recent major outbreaks are the 2010s Haiti cholera outbreak and the 2016–2022 Yemen cholera outbreak. In October 2016, an outbreak of cholera began in war-ravaged Yemen. WHO called it "the worst cholera outbreak in the world". In 2019, 93% of the reported 923,037 cholera cases were from Yemen (with 1,911 deaths reported). Between September 2019 and September 2020, a global total of over 450,000 cases and over 900 deaths was reported; however, the accuracy of these numbers suffers from over-reporting by countries that report suspected cases (and not laboratory-confirmed cases), as well as under-reporting by countries that do not report official cases (such as Bangladesh, India and the Philippines). Although much is known about the mechanisms behind the spread of cholera, researchers still do not have a full understanding of what makes cholera outbreaks happen in some places and not others. Lack of treatment of human feces and lack of treatment of drinking water greatly facilitate its spread. Bodies of water have been found to serve as a reservoir of infection, and seafood shipped long distances can spread the disease. 
Cholera had disappeared from the Americas for most of the 20th century, but it reappeared toward the end of that century, beginning with a severe outbreak in Peru. This was followed by the 2010s Haiti cholera outbreak and another outbreak of cholera in Haiti amid the 2018–2023 Haitian crisis. The disease is endemic in Africa and some areas of eastern and western Asia (Bangladesh, India and Yemen). Cholera is not endemic in Europe; all reported cases had a travel history to endemic areas. History of outbreaks The word cholera is from Greek kholera, derived from χολή (kholē), "bile". Cholera likely has its origins in the Indian subcontinent, as evidenced by its prevalence in the region for centuries. References to cholera appear in the European literature as early as 1642, from the Dutch physician Jakob de Bondt's description in his De Medicina Indorum. (The "Indorum" of the title refers to the East Indies. He also gave the first European descriptions of other diseases.) But at the time, the word "cholera" was used by European physicians to refer to any gastrointestinal upset resulting in yellow diarrhea. De Bondt thus used a common word already in regular use to describe the new disease. This was a frequent practice of the time. It was not until the 1830s that the name for severe yellow diarrhea changed in English from "cholera" to "cholera morbus" to differentiate it from what was then known as "Asiatic cholera", or that associated with origins in India and the East. Early outbreaks in the Indian subcontinent are believed to have been the result of crowded, poor living conditions, as well as the presence of pools of still water, both of which provide ideal conditions for cholera to thrive. The disease first spread by travelers along trade routes (land and sea) to Russia in 1817, later to the rest of Europe, and from Europe to North America and the rest of the world (hence the name "Asiatic cholera"). Seven cholera pandemics have occurred since the early 19th century; the first one did not reach the Americas. The seventh pandemic originated in Indonesia in 1961. The first cholera pandemic occurred in the Bengal region of India, near Calcutta, starting in 1817 and lasting through 1824. The disease dispersed from India to Southeast Asia, the Middle East, Europe, and Eastern Africa. The movement of British Army and Navy ships and personnel is believed to have contributed to the range of the pandemic, since the ships carried people with the disease to the shores of the Indian Ocean, from Africa to Indonesia, and north to China and Japan. The second pandemic lasted from 1826 to 1837 and particularly affected North America and Europe. Advancements in transportation and global trade, and increased human migration, including soldiers, meant that more people were carrying the disease more widely. The third pandemic erupted in 1846, persisted until 1860, extended to North Africa, and reached North and South America. It was introduced to North America at Quebec, Canada, via Irish immigrants from the Great Famine. In this pandemic, Brazil was affected for the first time. The fourth pandemic lasted from 1863 to 1875, spreading from India to Naples and Spain, and reaching the United States at New Orleans, Louisiana in 1873. It spread throughout the Mississippi River system on the continent. The fifth pandemic was from 1881 to 1896. It started in India and spread to Europe, Asia, and South America. The sixth pandemic ran from 1899 to 1923. 
These epidemics had a lower number of fatalities because physicians and researchers had a greater understanding of the cholera bacteria. Egypt, the Arabian peninsula, Persia, India, and the Philippines were hit hardest during these epidemics. Other areas, such as Germany in 1892 (primarily the city of Hamburg, where more than 8,600 people died) and Naples from 1910 to 1911, also had severe outbreaks. The seventh pandemic originated in 1961 in Indonesia and is marked by the emergence of a new strain, nicknamed El Tor, which still persists in developing countries. This pandemic had initially subsided about 1975 and was thought to have ended, but, as noted, it has persisted. There was a rise in cases in the 1990s, and cases have continued since. Cholera became widespread in the 19th century. Since then it has killed tens of millions of people. In Russia alone, between 1847 and 1851, more than one million people died from the disease. It killed 150,000 Americans during the second pandemic. Between 1900 and 1920, perhaps eight million people died of cholera in India. Cholera officially became the first reportable disease in the United States due to the significant effects it had on health. In England in 1854, John Snow was the first to identify the importance of contaminated water as its source of transmission. Cholera is now no longer considered a pressing health threat in Europe and North America due to filtering and chlorination of water supplies, but it still strongly affects populations in developing countries. In the past, vessels flew a yellow quarantine flag if any crew members or passengers had cholera. No one aboard a vessel flying a yellow flag would be allowed ashore for an extended period, typically 30 to 40 days. Historically many different claimed remedies have existed in folklore. Many of the older remedies were based on the miasma theory, that the disease was transmitted by bad air. Some believed that abdominal chilling made one more susceptible, and flannel and cholera belts were included in army kits. In the 1854–1855 outbreak in Naples, homeopathic camphor was used according to Hahnemann. Hahnemann laid down three main remedies that he claimed would be curative in that disease: camphor in early and simple cases; cuprum in later stages with excessive cramping; and veratrum album with excessive evacuations and profuse cold sweat. These three remedies are still used by homoeopaths around the world. T. J. Ritter's Mother's Remedies book lists tomato syrup as a home remedy from northern America. Elecampane was recommended in the United Kingdom, according to William Thomas Fernie. The first effective human vaccine was developed in 1885, and the first effective antibiotic was developed in 1948. Cholera cases are much less frequent in developed countries where governments have helped to establish water sanitation practices and effective medical treatments. In the 19th century, the United States, for example, had a severe cholera problem similar to those in some developing countries. It had three large cholera outbreaks in the 1800s, which can be attributed to Vibrio cholerae spread through interior waterways such as the Erie Canal and the extensive Mississippi River valley system, as well as the major ports along the Eastern Seaboard and their cities upriver. The island of Manhattan in New York City touches the Atlantic Ocean, where cholera collected from river waters and ship discharges just off the coast. 
At this time, New York City did not have as effective a sanitation system as it developed in the later 20th century, so cholera spread through the city's water supply. Cholera morbus is a historical term that was used to refer to gastroenteritis rather than specifically to what is now defined as the disease of cholera. Research One of the major contributions to fighting cholera was made by the physician and pioneer medical scientist John Snow (1813–1858), who in 1854 found a link between cholera and contaminated drinking water. Dr. Snow proposed a microbial origin for epidemic cholera in 1849. In his major "state of the art" review of 1855, he proposed a substantially complete and correct model for the cause of the disease. In two pioneering epidemiological field studies, he was able to demonstrate human sewage contamination was the most probable disease vector in two major epidemics in London in 1854. His model was not immediately accepted, but it was increasingly seen as plausible as medical microbiology developed over the next 30 years or so. For his work on cholera, John Snow is often regarded as the "Father of Epidemiology". The bacterium was isolated in 1854 by Italian anatomist Filippo Pacini, but its exact nature and his results were not widely known. In the same year, the Catalan Joaquim Balcells i Pascual discovered the bacterium. In 1856 António Augusto da Costa Simões and José Ferreira de Macedo Pinto, two Portuguese researchers, are believed to have done the same. Between the mid-1850s and the 1900s, cities in developed nations made massive investment in clean water supply and well-separated sewage treatment infrastructures. This eliminated the threat of cholera epidemics from the major developed cities in the world. In 1883, Robert Koch identified V. cholerae with a microscope as the bacillus causing the disease. Hemendra Nath Chatterjee, a Bengali scientist, was the first to formulate and demonstrate the effectiveness of oral rehydration salt (ORS) to treat diarrhea. In his 1953 paper, published in The Lancet, he states that promethazine can stop vomiting during cholera and then oral rehydration is possible. The formulation of the fluid replacement solution was 4 g of sodium chloride, 25 g of glucose and 1000 ml of water. Indian medical scientist Sambhu Nath De discovered the cholera toxin, the animal model of cholera, and successfully demonstrated the method of transmission of cholera pathogen Vibrio cholerae. Robert Allan Phillips, working at US Naval Medical Research Unit Two in Southeast Asia, evaluated the pathophysiology of the disease using modern laboratory chemistry techniques. He developed a protocol for rehydration. His research led the Lasker Foundation to award him its prize in 1967. More recently, in 2002, Alam, et al., studied stool samples from patients at the International Centre for Diarrhoeal Disease in Dhaka, Bangladesh. From the various experiments they conducted, the researchers found a correlation between the passage of V. cholerae through the human digestive system and an increased infectivity state. Furthermore, the researchers found the bacterium creates a hyperinfected state where genes that control biosynthesis of amino acids, iron uptake systems, and formation of periplasmic nitrate reductase complexes were induced just before defecation. These induced characteristics allow the cholera vibrios to survive in the "rice water" stools, an environment of limited oxygen and iron, of patients with a cholera infection. 
Global Strategy In 2017, the WHO launched the "Ending Cholera: a global roadmap to 2030" strategy, which aims to reduce cholera deaths by 90% by 2030. The strategy was developed by the Global Task Force on Cholera Control (GTFCC), which develops country-specific plans and monitors progress. The approach to achieve this goal combines surveillance, water sanitation, rehydration treatment and oral vaccines. Specifically, the control strategy focuses on three approaches: i) early detection of and rapid response to outbreaks in order to contain them, ii) stopping cholera transmission through improved sanitation and vaccines in hotspots, and iii) a global framework for cholera control through the GTFCC. The WHO and the GTFCC do not consider global cholera eradication a viable goal. Even though humans are the only host of cholera, the bacterium can persist in the environment without a human host. While global eradication is not possible, elimination of human-to-human transmission may be possible. Local elimination is possible, which has been underway most recently during the 2010s Haiti cholera outbreak. Haiti aims to achieve certification of elimination by 2022. The GTFCC targets 47 countries, 13 of which have established vaccination campaigns. Society and culture Health policy In many developing countries, cholera still reaches its victims through contaminated water sources, and countries without proper sanitation techniques have greater incidence of the disease. Governments can play a role in this. In 2008, for example, the Zimbabwean cholera outbreak was due partly to the government's role, according to a report from the James Baker Institute. The Haitian government's inability to provide safe drinking water after the 2010 earthquake led to an increase in cholera cases as well. Similarly, South Africa's cholera outbreak was exacerbated by the government's policy of privatizing water programs. The wealthy elite of the country were able to afford safe water while others had to use water from cholera-infected rivers. According to Rita R. Colwell of the James Baker Institute, if cholera does begin to spread, government preparedness is crucial. A government's ability to contain the disease before it extends to other areas can prevent a high death toll and the development of an epidemic or even pandemic. Effective disease surveillance can ensure that cholera outbreaks are recognized as soon as possible and dealt with appropriately. Oftentimes, this will allow public health programs to determine and control the cause of the cases, whether it is unsanitary water or seafood that has accumulated large numbers of Vibrio cholerae. Having an effective surveillance program contributes to a government's ability to prevent cholera from spreading. In the year 2000 in the state of Kerala in India, the Kottayam district was determined to be "cholera-affected"; this pronouncement led to task forces that concentrated on educating citizens with 13,670 information sessions about human health. These task forces promoted the boiling of water to obtain safe water, and provided chlorine and oral rehydration salts. Ultimately, this helped to control the spread of the disease to other areas and minimize deaths. On the other hand, researchers have shown that most of the citizens infected during the 1991 cholera outbreak in Bangladesh lived in rural areas, and were not recognized by the government's surveillance program. This inhibited physicians' abilities to detect cholera cases early. 
According to Colwell, the quality and inclusiveness of a country's health care system affect the control of cholera, as they did in the Zimbabwean cholera outbreak. While sanitation practices are important, when governments respond quickly and have readily available vaccines, the country will have a lower cholera death toll. Affordability of vaccines can be a problem; if governments do not provide vaccinations, only the wealthy may be able to afford them and there will be a greater toll on the country's poor. The speed with which government leaders respond to cholera outbreaks is important. Beyond its direct influence on the health care system and water sanitation, a government can also have indirect effects on cholera control and on the effectiveness of the response to an outbreak. A country's government can impact its ability to prevent disease and control its spread. A speedy government response backed by a fully functioning health care system and financial resources can prevent cholera's spread. This limits cholera's ability to cause deaths, or at the very least the decline in education that occurs when children are kept out of school to minimize the risk of infection. Inversely, a poor government response can lead to civil unrest and cholera riots. Notable cases Tchaikovsky's death has traditionally been attributed to cholera, most probably contracted through drinking contaminated water several days earlier. Tchaikovsky's mother died of cholera, and his father became sick with cholera at this time but made a full recovery. Some scholars, however, including English musicologist and Tchaikovsky authority David Brown and biographer Anthony Holden, have theorized that his death was a suicide. 2010s Haiti cholera outbreak. Ten months after the 2010 earthquake, an outbreak swept over Haiti, traced to a United Nations base of peacekeepers from Nepal. This was the worst cholera outbreak in recent history, as well as the best-documented cholera outbreak in modern public health. Adam Mickiewicz, Polish poet and novelist, is thought to have died of cholera in Istanbul in 1855. Sadi Carnot, physicist, a pioneer of thermodynamics (d. 1832) Charles X, King of France (d. 1836) James K. Polk, eleventh president of the United States (d. 1849) Carl von Clausewitz, Prussian soldier and German military theorist (d. 1831) Elliot Bovill, Chief Justice of the Straits Settlements (1893) Nikola Tesla, Serbian-American inventor, engineer and futurist known for his contributions to the design of the modern alternating current (AC) electricity supply system, contracted cholera in 1873 at the age of 17. He was bedridden for nine months, and near death multiple times, but survived and fully recovered. In popular culture Unlike tuberculosis ("consumption"), which in literature and the arts was often romanticized as a disease of denizens of the demimonde or those with an artistic temperament, cholera is a disease which almost entirely affects the poor living in unsanitary conditions. This, and the unpleasant course of the disease – which includes voluminous "rice-water" diarrhea, the hemorrhaging of liquids from the mouth, and violent muscle contractions which continue even after death – has discouraged the disease from being romanticized, or even being factually presented in popular culture. The 1889 novel Mastro-don Gesualdo by Giovanni Verga presents the course of a cholera epidemic across the island of Sicily, but does not show the suffering of those affected. 
Cholera is a major plot device in The Painted Veil, a 1925 novel by W. Somerset Maugham. The story concerns a shy bacteriologist who discovers his young, pretty wife is having an adulterous affair. The doctor exacts revenge on his wife by inducing her to travel with him to mainland China, which is in the grips of a horrific cholera outbreak. The ravages of the disease are frankly described in the novel. In Thomas Mann's novella Death in Venice, first published in 1912 as Der Tod in Venedig, Mann "presented the disease as emblematic of the final 'bestial degradation' of the sexually transgressive author Gustav von Aschenbach." Contrary to the actual facts of how violently cholera kills, Mann has his protagonist die peacefully on a beach in a deck chair. Luchino Visconti's 1971 film version also hid from the audience the actual course of the disease. Mann's novella was also made into an opera by Benjamin Britten in 1973, his last one, and into a ballet by John Neumeier for his Hamburg Ballet company, in December 2003. The Horseman on the Roof (orig. French Le Hussard sur le toit) is a 1951 adventure novel written by Jean Giono. It tells the story of Angelo Pardi, a young Italian carbonaro colonel of hussars, caught up in the 1832 cholera epidemic in Provence. In 1995, it was made into a film of the same name directed by Jean-Paul Rappeneau. In Gabriel García Márquez's 1985 novel Love in the Time of Cholera, cholera is "a looming background presence rather than a central figure requiring vile description." The novel was adapted in 2007 for the film of the same name directed by Mike Newell. In The Secret Garden, Mary Lennox's parents die from cholera. Country examples Zambia In Zambia, widespread cholera outbreaks have occurred since 1977, most commonly in the capital city of Lusaka. In 2017, an outbreak of cholera was declared in Zambia after laboratory confirmation of Vibrio cholerae O1, biotype El Tor, serotype Ogawa, from stool samples from two patients with acute watery diarrhea. There was a rapid increase in the number of cases, from several hundred cases in early December 2017 to approximately 2,000 by early January 2018. With intensification of the rains, new cases increased on a daily basis, reaching a peak in the first week of January 2018 with over 700 cases reported. In collaboration with partners, the Zambia Ministry of Health (MoH) launched a multifaceted public health response that included increased chlorination of the Lusaka municipal water supply, provision of emergency water supplies, water quality monitoring and testing, enhanced surveillance, epidemiologic investigations, a cholera vaccination campaign, aggressive case management and health care worker training, and laboratory testing of clinical samples. The Zambian Ministry of Health implemented a reactive one-dose Oral Cholera Vaccine (OCV) campaign in April 2016 in three Lusaka compounds, followed by a pre-emptive second round in December. Nigeria In June 2024, the Nigeria Centre for Disease Control and Prevention (NCDC) announced a total of 1,141 suspected and 65 confirmed cases of cholera with 30 deaths from 96 Local Government Areas (LGAs) in 30 states of the country. NCDC, in its public health advisory, said Abia, Bayelsa, Bauchi, Cross River, Delta, Imo, Katsina, Lagos, Nasarawa and Zamfara states were the 10 states that contributed 90 percent of the burden of cholera in the country at the time. 
India The city of Kolkata in the state of West Bengal, in the Ganges delta, has been described as the "homeland of cholera", with regular outbreaks and pronounced seasonality. In India, where the disease is endemic, cholera outbreaks occur every year between dry seasons and rainy seasons. India is also characterized by high population density, unsafe drinking water, open drains, and poor sanitation, which provide an optimal niche for survival, sustenance and transmission of Vibrio cholerae. Democratic Republic of Congo In Goma in the Democratic Republic of Congo, cholera has left an enduring mark on human and medical history. Cholera pandemics in the 19th and 20th centuries led to the growth of epidemiology as a science, and in recent years the disease has continued to press advances in the concepts of disease ecology, basic membrane biology, and transmembrane signaling, and in the use of scientific information and treatment design. Explanatory notes References Further reading Bilson, Geoffrey. A Darkened House: Cholera in Nineteenth-Century Canada (U of Toronto Press, 1980). Gilbert, Pamela K. Cholera and Nation: Doctoring the Social Body in Victorian England (SUNY Press, 2008). Snowden, Frank M. Naples in the Time of Cholera, 1884–1911 (Cambridge UP, 1995). Vinten-Johansen, Peter, ed. Investigating Cholera in Broad Street: A History in Documents (Broadview Press, 2020), regarding the 1850s in England. Vinten-Johansen, Peter, et al. Cholera, Chloroform, and the Science of Medicine: A Life of John Snow (2003). External links Prevention and control of cholera outbreaks: WHO policy and recommendations Cholera – World Health Organization Cholera – Vibrio cholerae infection – Centers for Disease Control and Prevention
Blood transfusion
Blood transfusion is the process of transferring blood products into a person's circulation intravenously. Transfusions are used for various medical conditions to replace lost components of the blood. Early transfusions used whole blood, but modern medical practice commonly uses only components of the blood, such as red blood cells, plasma, platelets, and other clotting factors. White blood cells are transfused only in very rare circumstances, since granulocyte transfusion has limited applications. Whole blood has come back into use in the setting of trauma. Red blood cells (RBC) contain hemoglobin and supply the cells of the body with oxygen. White blood cells are not commonly used during transfusions, but they are part of the immune system and also fight infections. Plasma is the "yellowish" liquid part of blood, which acts as a buffer and contains proteins and other important substances needed for the body's overall health. Platelets are involved in blood clotting, preventing the body from bleeding. Before these components were known, doctors believed that blood was homogeneous. Because of this scientific misunderstanding, many patients died because of incompatible blood transferred to them. Medical uses Red cell transfusion Historically, red blood cell transfusion was considered when the hemoglobin level fell below 100 g/L or hematocrit fell below 30%. Because each unit of blood given carries risks, a trigger level lower than that, at 70 to 80 g/L, is now usually used, as it has been shown to have better patient outcomes. The administration of a single unit of blood is the standard for hospitalized people who are not bleeding, with this treatment followed by re-assessment and consideration of symptoms and hemoglobin concentration. Patients with poor oxygen saturation may need more blood. The advisory caution to use blood transfusion only with more severe anemia is in part due to evidence that outcomes are worsened if larger amounts are given. One may consider transfusion for people with symptoms of cardiovascular disease such as chest pain or shortness of breath. In cases where patients have low levels of hemoglobin due to iron deficiency, but are cardiovascularly stable, oral or parenteral iron is a preferred option based on both efficacy and safety. Other blood products are given where appropriate, e.g., fresh frozen plasma to treat clotting deficiencies and platelets to treat or prevent bleeding in thrombocytopenic patients. Procedure Before a blood transfusion is given, there are many steps taken to ensure quality of the blood products, compatibility, and safety to the recipient. In 2012, a national blood policy was in place in 70% of countries and 69% of countries had specific legislation that covers the safety and quality of blood transfusion. Blood donation The source of blood to be transfused can either be the potential recipient (autologous transfusion), or someone else (allogeneic or homologous transfusion). The latter is much more common than the former. Using another's blood must first start with donation of blood. Blood is most commonly donated as whole blood obtained intravenously and mixed with an anticoagulant. In first-world countries, donations are usually anonymous to the recipient, but products in a blood bank are always individually traceable through the whole cycle of donation, testing, separation into components, storage, and administration to the recipient. 
This enables management and investigation of any suspected transfusion-related disease transmission or transfusion reaction. Developing countries rely heavily on replacement and remunerated donors rather than voluntary nonremunerated donors due to concerns regarding donation- and transfusion-transmitted infection as well as local and cultural beliefs. It is unclear whether applying an alcohol swab alone, or an alcohol swab followed by antiseptic, reduces contamination of the donor's blood. Studies show that the main motivators for blood donation tend to be prosocial (e.g., altruism, selflessness, charity), while the main deterrents include fear, distrust, or perceived racial discrimination in historic contexts. Processing and testing Donated blood is usually subjected to processing after it is collected, to make it suitable for use in specific patient populations. Collected blood is then separated into blood components by centrifugation: red blood cells, plasma, and platelets. Plasma can be further processed to manufacture albumin protein, clotting factor concentrates, cryoprecipitate, fibrinogen concentrate, and immunoglobulins (antibodies). Red cells, plasma and platelets can also be donated individually via a more complex process called apheresis. The World Health Organization (WHO) recommends that all donated blood be tested for transfusion-transmissible infections. These include HIV, hepatitis B, hepatitis C, Treponema pallidum (syphilis) and, where relevant, other infections that pose a risk to the safety of the blood supply, such as Trypanosoma cruzi (Chagas disease) and Plasmodium species (malaria). According to the WHO, 10 countries are not able to screen all donated blood for one or more of: HIV, hepatitis B, hepatitis C, or syphilis. One of the main reasons for this is that testing kits are not always available. However, the prevalence of transfusion-transmitted infections is much higher in low-income countries compared with middle- and high-income countries. All donated blood should also be tested for the ABO blood group system and Rh blood group system to ensure that the patient is receiving compatible blood. In addition, in some countries platelet products are also tested for bacterial infections because their storage at room temperature makes them more susceptible to contamination. Donors may be tested for cytomegalovirus (CMV) because of the risk of transmission to certain immunocompromised recipients, such as those with stem cell transplant or T cell diseases. However, testing is not universally mandated, because leukoreduced blood is generally considered safe from CMV transmission; also, most donors (and recipients) are seropositive for CMV, and are not actively viremic. CMV seropositive donors are still eligible to donate. Leukocyte reduction is the removal of white blood cells by filtration. Leukoreduced blood products are less likely to cause HLA alloimmunization (development of antibodies against donor HLA antigens), febrile non-hemolytic transfusion reaction, cytomegalovirus infection, and platelet-transfusion refractoriness. Pathogen reduction treatment, which involves, for example, the addition of riboflavin followed by exposure to UV light, has been shown to be effective in inactivating pathogens (viruses, bacteria, parasites and white blood cells) in blood products. By inactivating white blood cells in donated blood products, riboflavin and UV light treatment can also replace gamma-irradiation as a method to prevent graft-versus-host disease (TA-GvHD). 
Compatibility testing Before a recipient receives a transfusion, compatibility testing between donor and recipient blood must be done. The first step before a transfusion is given is to type and screen the recipient's blood. Typing of the recipient's blood determines the ABO and Rh status. The sample is then screened for any alloantibodies that may react with donor blood. It takes about 45 minutes to complete (depending on the method used). The blood bank scientist also checks for special requirements of the patient (e.g. need for washed, irradiated or CMV negative blood) and the history of the patient to see if they have previously identified antibodies and any other serological anomalies. A positive screen warrants an antibody panel/investigation to determine if it is clinically significant. An antibody panel consists of commercially prepared group O red cell suspensions from donors that have been phenotyped for antigens that correspond to commonly encountered and clinically significant alloantibodies. Donor cells may have homozygous (e.g. K+k−), heterozygous (K+k+) expression, or no expression of various antigens (K−k−). The phenotypes of all the donor cells being tested are shown in a chart. The patient's serum is tested against the various donor cells using an indirect Coombs test. Based on the reactions of the patient's serum against the donor cells, a pattern will emerge to confirm the presence of one or more antibodies. Not all antibodies are clinically significant (i.e. cause transfusion reactions, HDN, etc.). Once the patient has developed a clinically significant antibody it is vital that the patient receive antigen-negative red blood cells to prevent future transfusion reactions. If there is no antibody present, an immediate spin crossmatch may be performed, in which the recipient's serum and donor RBCs are incubated. In the immediate spin method, two drops of patient serum are tested against a drop of 3–5% suspension of donor cells in a test tube and spun in a serofuge. Agglutination or hemolysis (i.e., a positive Coombs test) in the test tube is a positive reaction. If the crossmatch is positive, then further investigation is needed. Patients with no history of red cell antibodies may qualify for computer-assisted crossmatch, which does not involve combining patient serum with donor cells. If an antibody is suspected, potential donor units must first be screened for the corresponding antigen by phenotyping them. Antigen-negative units are then tested against the patient plasma using an antiglobulin/indirect crossmatch technique at 37 degrees Celsius to enhance reactivity and make the test easier to read. In urgent cases where crossmatching cannot be completed, and the risk posed by falling hemoglobin outweighs the risk of transfusing uncrossmatched blood, O-negative blood is used, followed by crossmatch as soon as possible. O-negative blood is also used for children and women of childbearing age. It is preferable for the laboratory to obtain a pre-transfusion sample in these cases so a type and screen can be performed to determine the actual blood group of the patient and to check for alloantibodies. Compatibility of ABO and Rh system for Red Cell (Erythrocyte) Transfusion: a chart of possible donor–recipient matches using the ABO and Rh systems, with compatible combinations marked. 
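The compatibility rules summarized by the chart above can be expressed compactly in code. The following is a minimal sketch in Python of standard ABO/Rh red-cell compatibility; the function names and blood-type string conventions are illustrative assumptions rather than any blood-bank software, and real practice also depends on the antibody screening and crossmatching described above.

# Minimal sketch of red-cell ABO/Rh compatibility, assuming the standard rules
# (group O is the universal red-cell donor, AB the universal recipient, and
# Rh-negative recipients should receive Rh-negative units).

def abo_compatible(donor_abo: str, recipient_abo: str) -> bool:
    # A recipient can receive red cells carrying no ABO antigen foreign to them;
    # O cells carry neither A nor B antigen, so they suit every recipient.
    compatible_donors = {
        "O": {"O"},
        "A": {"A", "O"},
        "B": {"B", "O"},
        "AB": {"AB", "A", "B", "O"},
    }
    return donor_abo in compatible_donors[recipient_abo]

def rh_compatible(donor_rh: str, recipient_rh: str) -> bool:
    # Rh-positive recipients may receive either; Rh-negative recipients need Rh-negative units.
    return recipient_rh == "+" or donor_rh == "-"

def red_cell_compatible(donor: str, recipient: str) -> bool:
    # Blood types written like "O-", "A+", "AB+".
    d_abo, d_rh = donor[:-1], donor[-1]
    r_abo, r_rh = recipient[:-1], recipient[-1]
    return abo_compatible(d_abo, r_abo) and rh_compatible(d_rh, r_rh)

# Example: an O- unit suits any recipient; an A+ unit does not suit an O- recipient.
assert red_cell_compatible("O-", "AB+")
assert not red_cell_compatible("A+", "O-")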
Adverse effects In the same way that the safety of pharmaceutical products is overseen by pharmacovigilance, the safety of blood and blood products is overseen by haemovigilance. This is defined by the World Health Organization (WHO) as a system "...to identify and prevent occurrence or recurrence of transfusion related unwanted events, to increase the safety, efficacy and efficiency of blood transfusion, covering all activities of the transfusion chain from donor to recipient." The system should include monitoring, identification, reporting, investigation and analysis of adverse events, near-misses, and reactions related to transfusion and manufacturing. In the UK this data is collected by an independent organisation called SHOT (Serious Hazards Of Transfusion). Haemovigilance systems have been established in many countries with the objective of ensuring the safety of blood for transfusion, but their organisational set-up and operating principles can vary. Transfusions of blood products are associated with several complications, many of which can be grouped as immunological or infectious. There is controversy about potential quality degradation during storage. Immunologic reaction Acute hemolytic reactions are defined according to Serious Hazards of Transfusion (SHOT) as "fever and other symptoms/signs of haemolysis within 24 hours of transfusion; confirmed by one or more of the following: a fall of Hb, rise in lactate dehydrogenase (LDH), positive direct antiglobulin test (DAT), positive crossmatch". This is due to destruction of donor red blood cells by preformed recipient antibodies. Most often this occurs because of clerical errors or improper ABO blood typing and crossmatching resulting in a mismatch in ABO blood type between the donor and the recipient. Symptoms include fever, chills, chest pain, back pain, hemorrhage, increased heart rate, shortness of breath, and rapid drop in blood pressure. When suspected, transfusion should be stopped immediately, and blood sent for tests to evaluate for the presence of hemolysis. Treatment is supportive. Kidney injury may occur because of the effects of the hemolytic reaction (pigment nephropathy). The severity of the transfusion reaction depends on the amount of donor antigen transfused, the nature of the donor antigens, and the nature and amount of recipient antibodies. Delayed hemolytic reactions occur more than 24 hours after a transfusion. They usually occur within 28 days of a transfusion. They can be due to either a low level of antibodies present prior to the start of the transfusion, which are not detectable on pre-transfusion testing, or development of a new antibody against an antigen in the transfused blood. Therefore, a delayed haemolytic reaction does not manifest until after 24 hours, when enough antibodies are available to cause a reaction. The red blood cells are removed by macrophages from the blood circulation into the liver and spleen to be destroyed, which leads to extravascular haemolysis. This process is usually mediated by anti-Rh and anti-Kidd antibodies. However, this type of transfusion reaction is less severe than an acute haemolytic transfusion reaction. Febrile nonhemolytic reactions are, along with allergic transfusion reactions, the most common type of blood transfusion reaction and occur because of the release of inflammatory chemical signals by white blood cells in stored donor blood or an attack on the donor's white blood cells by recipient antibodies. This type of reaction occurs in about 7% of transfusions. Fever is generally short lived and is treated with antipyretics, and transfusions may be finished as long as an acute hemolytic reaction is excluded. 
This is a reason for the now-widespread use of leukoreduction – the filtration of donor white cells from red cell product units. Allergic transfusion reactions are caused by IgE anti-allergen antibodies. When these antibodies bind to their antigens, histamine is released from mast cells and basophils. IgE antibodies from either the donor or the recipient can cause the allergic reaction. It is more common in patients who have allergic conditions such as hay fever. The patient may feel itchy or develop hives, but the symptoms are usually mild and can be controlled by stopping the transfusion and giving antihistamines. Anaphylactic reactions are rare life-threatening allergic conditions caused by IgA anti-plasma protein antibodies. For patients who have selective immunoglobulin A deficiency, the reaction is presumed to be caused by IgA antibodies in the donor's plasma. The patient may present with symptoms of fever, wheezing, coughing, shortness of breath, and circulatory shock. Urgent treatment with epinephrine is needed. Post-transfusion purpura is an extremely rare complication that occurs after blood product transfusion and is associated with the presence of antibodies in the patient's blood directed against human platelet antigen (HPA) on both the donor's and the recipient's platelets. Recipients who lack this protein can become sensitized to it through prior transfusions or previous pregnancies; they may then develop thrombocytopenia, bleeding into the skin, and a purplish discolouration of the skin known as purpura. Intravenous immunoglobulin (IVIG) is the treatment of choice. Transfusion-related acute lung injury (TRALI) is a syndrome that is similar to acute respiratory distress syndrome (ARDS), which develops during or within 6 hours of transfusion of a plasma-containing blood product. Fever, hypotension, shortness of breath, and tachycardia often occur in this type of reaction. For a definitive diagnosis to be made, symptoms must occur within 6 hours of transfusion, hypoxemia must be present, there must be radiographic evidence of bilateral infiltrates, and there must be no evidence of left atrial hypertension (fluid overload). It occurs in 15% of transfused patients, with a mortality rate of 5 to 10%. Recipient risk factors include end-stage liver disease, sepsis, haematological malignancies, and mechanical ventilation. Antibodies to human neutrophil antigens (HNA) and human leukocyte antigens (HLA) have been associated with this type of transfusion reaction. Donor antibodies interacting with antigen-positive recipient tissue result in the release of inflammatory cytokines and pulmonary capillary leakage. The treatment is supportive. Transfusion-associated circulatory overload (TACO) is a common, yet underdiagnosed, reaction to blood product transfusion consisting of the new onset or exacerbation of three of the following within 6 hours of cessation of transfusion: acute respiratory distress, elevated brain natriuretic peptide (BNP), elevated central venous pressure (CVP), evidence of left heart failure, evidence of positive fluid balance, and/or radiographic evidence of pulmonary vascular congestion. Patients with congestive heart failure or kidney disease are more susceptible to volume overload. For especially vulnerable patients, a standard RBC unit could be split by sterile technique in the blood bank and administered over 8 hours instead of the standard 4 hours. 
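The TACO definition above is essentially a counting rule: new onset or worsening of at least three of the listed findings within six hours of transfusion. A minimal sketch of that rule in Python follows; the field names are hypothetical, and the sketch deliberately ignores the clinical judgment needed to decide whether each finding is genuinely new or worsened.

# Hypothetical sketch of the TACO counting rule described above: at least three
# of the listed findings, new or worsened, within 6 hours of the end of transfusion.
TACO_FINDINGS = (
    "acute_respiratory_distress",
    "elevated_bnp",
    "elevated_cvp",
    "left_heart_failure",
    "positive_fluid_balance",
    "pulmonary_vascular_congestion_on_imaging",
)

def meets_taco_definition(findings_present: set, hours_since_transfusion: float) -> bool:
    # Count only findings that belong to the definition and fall inside the 6-hour window.
    if hours_since_transfusion > 6:
        return False
    relevant = findings_present & set(TACO_FINDINGS)
    return len(relevant) >= 3

# Example: respiratory distress, raised BNP, and positive fluid balance 4 hours after transfusion.
print(meets_taco_definition(
    {"acute_respiratory_distress", "elevated_bnp", "positive_fluid_balance"}, 4.0))  # True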
Plasma transfusion is especially prone to causing TACO because large volumes are usually required to give any therapeutic benefit. Transfusion-associated graft versus host disease occurs mainly in immunodeficient patients, in whom the recipient's body fails to eliminate the donor's T cells. Instead, the donor's T cells attack the recipient's cells. It occurs about one week after transfusion. Fever, rash, and diarrhoea are often associated with this type of transfusion reaction. The mortality rate is high, with 89.7% of patients dead after 24 days. Immunosuppressive therapy is the most common treatment. Irradiation and leukoreduction of blood products are necessary for high-risk patients to prevent T cells from attacking recipient cells. Infection The use of greater amounts of red blood cells has been suggested to increase the risk of infections, not only transfusion-transmitted infections but also infections arising through a phenomenon known as transfusion-related immunomodulation (TRIM). TRIM may be caused by macrophages and their byproducts. In those who were given red blood cells only with significant anemia ("restrictive" strategy), serious infection rates were 10.6%, while in those who were given red blood cells at milder levels of anemia ("liberal" strategy), serious infection rates were 12.7%. On rare occasions, blood products are contaminated with bacteria. This can result in a life-threatening infection known as transfusion-transmitted bacterial infection. The risk of severe bacterial infection is estimated at about 1 in 2,500 platelet transfusions, and 1 in 2,000,000 red blood cell transfusions. Blood product contamination, while rare, is still more common than actual infection. The reason platelets are more often contaminated than other blood products is that they are stored at room temperature for short periods of time. Contamination is also more common with longer duration of storage, especially if that means more than 5 days. Sources of contaminants include the donor's blood, donor's skin, phlebotomist's skin, and containers. Contaminating organisms vary greatly, and include skin flora, gut flora, and environmental organisms. There are many strategies in place at blood donation centers and laboratories to reduce the risk of contamination. A definite diagnosis of transfusion-transmitted bacterial infection includes the identification of a positive culture in the recipient (without an alternative diagnosis) as well as the identification of the same organism in the donor blood. Since the advent of HIV testing of donor blood in the mid-to-late 1980s (e.g., the 1985 ELISA), the transmission of HIV during transfusion has dropped dramatically. Prior testing of donor blood only included testing for antibodies to HIV. However, because of latent infection (the "window period" in which an individual is infectious, but has not had time to develop antibodies) many cases of HIV seropositive blood were missed. The development of a nucleic acid test for HIV-1 RNA has dramatically lowered the rate of donor blood seropositivity to about 1 in 3 million units. Because transmission of HIV does not necessarily result in HIV infection, actual infection could still occur at an even lower rate. The transmission of hepatitis C via transfusion currently stands at a rate of about 1 in 2 million units. As with HIV, this low rate has been attributed to the ability to screen donor blood both for antibodies and for viral RNA by nucleic acid testing. 
Other rare transmissible infections include hepatitis B, syphilis, Chagas disease, cytomegalovirus infections (in immunocompromised recipients), HTLV, and Babesia. Comparison table Inefficacy Transfusion inefficacy or insufficient efficacy of a given unit or units of blood product, while not itself a "complication" per se, can nonetheless indirectly lead to complications – in addition to causing a transfusion to fully or partly fail to achieve its clinical purpose. This can be especially significant for certain patient groups such as critical-care patients or neonates. For red blood cells (RBC), by far the most commonly transfused product, poor transfusion efficacy can result from units damaged by the so-called storage lesion – a range of biochemical and biomechanical changes that occur during storage. With red cells, this can decrease viability and the capacity for tissue oxygenation. Although some of the biochemical changes are reversible after the blood is transfused, the biomechanical changes are less so, and rejuvenation products are not yet able to adequately reverse this phenomenon. There has been controversy about whether a given product unit's age is a factor in transfusion efficacy, specifically about whether "older" blood directly or indirectly increases risks of complications. Studies have not been consistent on answering this question, with some showing that older blood is indeed less effective but with others showing no such difference; these developments are being closely followed by hospital blood bankers – who are the physicians, typically pathologists, who collect and manage inventories of transfusable blood units. Certain regulatory measures are in place to minimize RBC storage lesion – including a maximum shelf life (currently 42 days), a maximum auto-hemolysis threshold (currently 1% in the US, 0.8% in Europe), and a minimum level of post-transfusion RBC survival in vivo (currently 75% after 24 hours). However, all of these criteria are applied in a universal manner that does not account for differences among units of product. For example, testing for the post-transfusion RBC survival in vivo is done on a sample of healthy volunteers, and then compliance is presumed for all RBC units based on universal (GMP) processing standards (RBC survival by itself does not guarantee efficacy, but it is a necessary prerequisite for cell function, and hence serves as a regulatory proxy). Opinions vary as to the "best" way to determine transfusion efficacy in a patient in vivo. In general, there are not yet any in vitro tests to assess quality or predict efficacy for specific units of RBC blood product prior to their transfusion, though there is exploration of potentially relevant tests based on RBC membrane properties such as erythrocyte deformability and erythrocyte fragility (mechanical). Physicians have adopted a so-called "restrictive protocol" – whereby transfusion is held to a minimum – in part because of the noted uncertainties surrounding storage lesion, in addition to the very high direct and indirect costs of transfusions. However, the restrictive protocol is not an option with some especially vulnerable patients who may require the best possible efforts to rapidly restore tissue oxygenation. Although transfusions of platelets are far less numerous (relative to RBC), platelet storage lesion and the resulting efficacy loss are also a concern. 
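The regulatory limits quoted above (a 42-day shelf life, an auto-hemolysis ceiling of 1% in the US or 0.8% in Europe, and at least 75% in-vivo RBC survival at 24 hours) are simple numeric thresholds, and a rough sketch of how a blood-bank information system might flag a unit against them is shown below. The field names and function are illustrative assumptions, not part of any real blood-bank software, and actual release criteria involve far more than these numbers.

# Illustrative check of an RBC unit against the storage-lesion limits mentioned above.
# Thresholds and field names are assumptions for the sketch, not regulatory text.
from datetime import date

MAX_SHELF_LIFE_DAYS = 42
MAX_HEMOLYSIS_FRACTION = {"US": 0.01, "EU": 0.008}   # 1% vs 0.8% auto-hemolysis
MIN_24H_SURVIVAL_FRACTION = 0.75                     # validated on healthy volunteers, not per unit

def unit_within_limits(collection_date: date, today: date,
                       hemolysis_fraction: float, region: str = "US") -> bool:
    # Age and hemolysis are checked per unit; the 75% survival figure is a
    # population-level validation and therefore is not tested here.
    age_days = (today - collection_date).days
    return (age_days <= MAX_SHELF_LIFE_DAYS
            and hemolysis_fraction <= MAX_HEMOLYSIS_FRACTION[region])

# Example: a 40-day-old unit with 0.5% hemolysis still passes in either region.
print(unit_within_limits(date(2024, 1, 1), date(2024, 2, 10), 0.005, "EU"))  # True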
In lung cancer, intra-operative blood transfusion has been associated with earlier recurrence of cancer, worse survival rates, and poorer outcomes after lung resection. Suppression of the immune system by blood transfusion has been implicated as playing a role in more than 10 different cancer types, through mechanisms involving the innate and adaptive immune system. Five major mechanisms implicated involve T lymphocytes, myeloid-derived suppressor cells (MDSCs), tumor-associated macrophages (TAMs), natural killer cells (NKCs), and dendritic cells (DCs). Blood transfusion may modulate the activity of antitumor CD8+ cytotoxic T lymphocytes (CD8+/CTL), the temporal response of regulatory T cells (Tregs), and the STAT3 signaling pathway. The role of the antitumor immune response in cancer therapeutics was explored historically through the use of bacteria to enhance the antitumor immune response and more recently in cellular immunotherapy. However, the impact of transfusion-related immunomodulation (TRIM) on cancer progression has not been definitively established and requires further study. In retrospective studies, blood transfusion has been associated with worse outcomes after cytoreductive surgery and HIPEC. However, correlation does not prove causation, and transfused patients often have more complicated surgeries and more underlying cardiopulmonary disease compared to untransfused patients; conclusions should be based on prospective randomized controlled trials. Hypothermia can occur with transfusion of large quantities of blood products, which are normally stored at cold temperatures. Core body temperature can fall as low as 32 °C, producing physiologic disturbances. Prevention involves warming the blood to ambient temperature prior to transfusion. Blood warming devices are available to avoid the hemolysis that would occur from unsafe practices such as microwaving. Transfusions with large amounts of red blood cells, whether due to severe hemorrhaging and/or transfusion inefficacy (see above), can lead to a bleeding tendency. The mechanism is thought to be disseminated intravascular coagulation, along with dilution of recipient platelets and coagulation factors. Close monitoring is indicated, with transfusion of platelets and plasma when necessary. Progressive hemorrhagic injury (PHI) in patients with traumatic brain injury may be worsened by liberal transfusion strategies. Metabolic alkalosis can occur with massive blood transfusions because the citrate stored in blood is metabolized into bicarbonate. However, acidemia is common in massively transfused patients, and acid-base balance is affected by complex factors. Hypocalcemia can also occur with massive blood transfusions because citrate complexes with serum calcium. Calcium levels below 0.9 mmol/L should be treated. Blood doping has been used by athletes to increase physical stamina. A lack of knowledge and insufficient experience can turn a blood transfusion into a dangerous event. For example, improper storage involving freezing and thawing, or minor antigen incompatibility, could lead to hemolysis. Frequency of use Globally around 85 million units of red blood cells are transfused in a given year. The global demand is much higher and there is an unmet need for safe blood for transfusion in many low- and middle-income countries. In the United States, blood transfusions were performed nearly 3 million times during hospitalizations in 2011, making it the most common procedure performed. 
The rate of hospitalizations with a blood transfusion more than doubled from 1997, from a rate of 40 stays to 95 stays per 10,000 population. It was the most common procedure performed for patients 45 years of age and older in 2011, and among the top five most common for patients between the ages of 1 and 44 years. According to the New York Times: "Changes in medicine have eliminated the need for millions of blood transfusions, which is good news for patients getting procedures like coronary bypasses and other procedures that once required a lot of blood." And, "Blood bank revenue is falling, and the decline may reach $1.5 billion a year this year [2014] from a high of $5 billion in 2008." In 2014, the Red Cross was predicting job losses as high as 12,000 within the next three to five years, roughly a quarter of the total in the industry. As of 2019, the trend of declining transfusions appeared to be stabilizing, with 10,852,000 RBC units transfused in the United States. History Beginning with William Harvey's experiments on the circulation of blood, recorded research into blood transfusion began in the 17th century, with successful experiments in transfusion between animals. However, successive attempts by physicians to transfuse animal blood into humans gave variable, often fatal, results. Pope Innocent VIII is sometimes said to have been given "the world's first blood transfusion" by his Italian-Jewish physician Giacomo di San Genesio, who had him drink (by mouth) the blood of three 10-year-old boys. The boys consequently died, as did the Pope himself. However, the evidence for this story is unreliable and considered a possible anti-Jewish blood libel. Early attempts Animal blood Working at the Royal Society in the 1660s, the physician Richard Lower began examining the effects of changes in blood volume on circulatory function and developed methods for cross-circulatory study in animals, obviating clotting by closed arteriovenous connections. The new instruments he was able to devise enabled him to perform the first reliably documented successful transfusion of blood in front of his distinguished colleagues from the Royal Society. According to Lower's account, "...towards the end of February 1665 [I] selected one dog of medium size, opened its jugular vein, and drew off blood, until its strength was nearly gone. Then, to make up for the great loss of this dog by the blood of a second, I introduced blood from the cervical artery of a fairly large mastiff, which had been fastened alongside the first, until this latter animal showed ... it was overfilled ... by the inflowing blood." After he "sewed up the jugular veins", the animal recovered "with no sign of discomfort or of displeasure". Lower had performed the first blood transfusion between animals. He was then "requested by the Honorable [Robert] Boyle ... to acquaint the Royal Society with the procedure for the whole experiment", which he did in December 1665 in the Society's Philosophical Transactions. The first blood transfusion from animal to human was administered by Dr. Jean-Baptiste Denys, eminent physician to King Louis XIV of France, on June 15, 1667. He transfused the blood of a sheep into a 15-year-old boy, who survived the transfusion. Denys performed another transfusion into a labourer, who also survived. In both instances, survival was likely due to the small amount of blood actually transfused, which allowed the recipients to withstand the allergic reaction. 
Denys's third patient to undergo a blood transfusion was Swedish Baron Gustaf Bonde. He received two transfusions. After the second transfusion Bonde died. In the winter of 1667, Denys performed several transfusions on Antoine Mauroy with calf's blood. On the third account Mauroy died. Six months later in London, Lower performed the first human transfusion of animal blood in Britain, where he "superintended the introduction in [a patient's] arm at various times of some ounces of sheep's blood at a meeting of the Royal Society, and without any inconvenience to him." The recipient was Arthur Coga, "the subject of a harmless form of insanity." Sheep's blood was used because of speculation about the value of blood exchange between species; it had been suggested that blood from a gentle lamb might quiet the tempestuous spirit of an agitated person and that the shy might be made outgoing by blood from more sociable creatures. Coga received 20 shillings to participate in the experiment. Lower went on to pioneer new devices for the precise control of blood flow and the transfusion of blood; his designs were substantially the same as modern syringes and catheters. Shortly after, Lower moved to London, where his growing practice soon led him to abandon research. These early experiments with animal blood provoked a heated controversy in Britain and France. Finally, in 1668, the Royal Society and the French government both banned the procedure. The Vatican condemned these experiments in 1670. Blood transfusions fell into obscurity for the next 150 years. Human blood The science of blood transfusion dates to the first decade of the 20th century, with the discovery of distinct blood types leading to the practice of mixing some blood from the donor and the receiver before the transfusion (an early form of cross-matching). In the early 19th century, British obstetrician Dr. James Blundell made efforts to treat hemorrhage by transfusion of human blood using a syringe. In 1818, after experiments with animals, he performed the first successful transfusion of human blood to treat postpartum hemorrhage. Blundell used the patient's husband as a donor, and extracted four ounces of blood from his arm to transfuse into his wife. During the years 1825 and 1830, Blundell performed 10 transfusions, five of which were beneficial, and published his results. He also invented a number of instruments for the transfusion of blood. He made a substantial amount of money from this endeavour, roughly $2 million ($50 million real dollars). In 1840, at St George's Hospital Medical School in London, Samuel Armstrong Lane, aided by Blundell, performed the first successful whole blood transfusion to treat haemophilia. However, early transfusions were risky and many resulted in the death of the patient. By the late 19th century, blood transfusion was regarded as a risky and dubious procedure, and was largely shunned by the medical establishment. Work to emulate James Blundell continued in Edinburgh. In 1845 the Edinburgh Journal described the successful transfusion of blood to a woman with severe uterine bleeding. Subsequent transfusions were successful with patients of Professor James Young Simpson, after whom the Simpson Memorial Maternity Pavilion in Edinburgh was named. Various isolated reports of successful transfusions emerged towards the end of the 19th century. The largest series of early successful transfusions took place at the Edinburgh Royal Infirmary between 1885 and 1892. 
Edinburgh later became the home of the first blood donation and blood transfusion services. 20th century Only in 1901, when the Austrian Karl Landsteiner discovered three human blood groups (O, A, and B), did blood transfusion achieve a scientific basis and become safer. Landsteiner discovered that adverse effects arise from mixing blood from two incompatible individuals. He found that mixing incompatible types triggers an immune response and the red blood-cells clump. The immunological reaction occurs when the receiver of a blood transfusion has antibodies against the donor blood-cells. The destruction of red blood cells releases free hemoglobin into the bloodstream, which can have fatal consequences. Landsteiner's work made it possible to determine blood group and allowed blood transfusions to take place much more safely. For his discovery he won the Nobel Prize in Physiology and Medicine in 1930; many other blood groups have been discovered since. George Washington Crile is credited with performing the first surgery using a direct blood transfusion in 1906 at St. Alexis Hospital in Cleveland while a professor of surgery at Case Western Reserve University. Jan Janský also discovered the human blood groups; in 1907 he classified blood into four groups: I, II, III, IV. His nomenclature is still used in Russia and in states of the former USSR, in which blood types O, A, B, and AB are respectively designated I, II, III, and IV. Dr. William Lorenzo Moss's (1876–1957) Moss-blood typing technique of 1910 was widely used until World War II. William Stewart Halsted, M.D. (1852–1922), an American surgeon, performed one of the first blood transfusions in the United States. He had been called to see his sister after she had given birth. He found her moribund from blood loss, and in a bold move withdrew his own blood, transfused his blood into his sister, and then operated on her to save her life. Blood banks in WWI While the first transfusions had to be made directly from donor to receiver before coagulation, it was discovered that by adding anticoagulant and refrigerating the blood it was possible to store it for some days, thus opening the way for the development of blood banks. John Braxton Hicks was the first to experiment with chemical methods to prevent the coagulation of blood at St Mary's Hospital, London in the late-19th century. His attempts, using phosphate of soda, however, proved unsuccessful. The Belgian doctor Albert Hustin performed the first non-direct transfusion on March 27, 1914, though this involved a diluted solution of blood. The Argentine doctor Luis Agote used a much less diluted solution in November of the same year. Both used sodium citrate as an anticoagulant. The First World War (1914–1918) acted as a catalyst for the rapid development of blood banks and transfusion techniques. Francis Peyton Rous and Joseph R. Turner at the Rockefeller University (then The Rockefeller Institute for Medical Research) made the first important discoveries that blood typing was necessary to avoid blood clumping (coagulation) and blood samples could be preserved using chemical treatment. Their first report in March 1915 showed that gelatine, agar, blood serum extracts, starch and beef albumin proved to be useless preservatives. 
However, building on the same experiment, they discovered that a mixture of sodium citrate and glucose (dextrose) solution was a perfect preservative; as they reported in the February issue of the Journal of Experimental Medicine, the preserved blood was just like fresh blood and would "function excellently when reintroduced into the body." Blood could be preserved for up to four weeks. An accompanying experiment using a citrate-saccharose (sucrose) mixture was also a success, maintaining blood cells for two weeks. This use of citrate and sugars, sometimes known as Rous-Turner solution, was the foundation for the development of blood banks and the improvement of transfusion methods. Another discovery of Rous and Turner was the most critical step in the safety of blood transfusion. Rous was well aware that Landsteiner's concept of blood types had not yet found practical application, as he remarked: "The fate of Landsteiner's effort to call attention to the practical bearing of the group differences in human bloods provides an exquisite instance of knowledge marking time on technique. Transfusion was still not done because (until at least 1915), the risk of clotting was too great." In June 1915, they made a crucial report in the Journal of the American Medical Association that agglutination could be avoided if the blood samples of the donor and recipient were tested against each other beforehand. In what they called a rapid and simple method for testing blood compatibility, sodium citrate was used to dilute the blood samples; after mixing the recipient's and donor's blood in 9:1 and 1:1 proportions, the blood would either clump or remain watery after 15 minutes. According to their advice, blood without clumping "should always be chosen if possible." Canadian doctor and Lieutenant Lawrence Bruce Robertson became instrumental in persuading the Royal Army Medical Corps to adopt the use of blood transfusion at the Casualty Clearing Stations for the wounded. In October 1915, Robertson performed his first wartime transfusion with a syringe to a patient who had multiple shrapnel wounds. He followed this up with four subsequent transfusions in the following months, and his success was reported to Sir Walter Morley Fletcher, director of the Medical Research Committee. Robertson published his findings in the British Medical Journal in 1916 and, with the help of a few like-minded individuals (including the eminent physician Edward William Archibald), was able to persuade the British authorities of the merits of blood transfusion. Robertson went on to establish the first blood-transfusion apparatus at a Casualty Clearing Station on the Western Front in the spring of 1917. Robertson did not perform crossmatching, and one patient died of hemolysis among his 1916 transfusions and three more in 1917. Oswald Hope Robertson, a medical researcher and U.S. Army officer, was attached to the RAMC in 1917, where he became instrumental in establishing the first blood banks in preparation for the anticipated Third Battle of Ypres. He used sodium citrate as the anticoagulant; blood was extracted from punctures in the vein and was stored in bottles at British and American Casualty Clearing Stations along the Front. Robertson also experimented with preserving separated red blood cells in iced bottles. Geoffrey Keynes, a British surgeon, developed a portable machine that could store blood to enable transfusions to be carried out more easily. Expansion The secretary of the British Red Cross, Percy Lane Oliver, established the world's first blood-donor service in 1921. 
In that year, Oliver was contacted by King's College Hospital, where they were in urgent need of a blood donor. After providing a donor, Oliver set about organizing a system for the voluntary registration of blood donors at clinics around London, with Sir Geoffrey Keynes appointed as a medical adviser. Volunteers were subjected to a series of physical tests to establish their blood group. The London Blood Transfusion Service was free of charge and expanded rapidly in its first few years of operation. By 1925 it was providing services for almost 500 patients; it was incorporated into the structure of the British Red Cross in 1926. Similar systems developed in other cities, including Sheffield, Manchester and Norwich, and the service's work began to attract international attention. France, Germany, Austria, Belgium, Australia and Japan established similar services. Alexander Bogdanov founded an academic institution devoted to the science of blood transfusion in Moscow in 1925. Bogdanov was motivated, at least in part, by a search for eternal youth, and remarked with satisfaction on the improvement of his eyesight, suspension of balding, and other positive symptoms after receiving 11 transfusions of whole blood. Bogdanov died in 1928 as a result of one of his experiments, when the blood of a student with malaria and tuberculosis was given to him in a transfusion. Following Bogdanov's lead, Vladimir Shamov and Sergei Yudin in the USSR pioneered the transfusion of cadaveric blood from recently deceased donors. Yudin performed such a transfusion successfully for the first time on March 23, 1930, and reported his first seven clinical transfusions with cadaveric blood at the Fourth Congress of Ukrainian Surgeons at Kharkiv in September. However, this method was never used widely, even in the Soviet Union. Nevertheless, the Soviet Union was the first to establish a network of facilities to collect and store blood for use in transfusions at hospitals. Frederic Durán-Jordà established one of the earliest blood banks during the Spanish Civil War in 1936. Duran joined the Transfusion Service at the Barcelona Hospital at the start of the conflict, but the hospital was soon overwhelmed by the demand for blood and the paucity of available donors. With support from the Department of Health of the Spanish Republican Army, Duran established a blood bank for the use of wounded soldiers and civilians. The 300–400 mL of extracted blood was mixed with 10% citrate solution in a modified Duran Erlenmeyer flask. The blood was stored in a sterile glass enclosed under pressure at 2 °C. During 30 months of work, the Transfusion Service of Barcelona registered almost 30,000 donors, and processed 9,000 liters of blood. In 1937 Bernard Fantus, director of therapeutics at the Cook County Hospital in Chicago, established the first hospital blood-bank in the United States. In setting up a hospital laboratory that preserved, refrigerated and stored donor blood, Fantus originated the term "blood bank". Within a few years, hospital and community blood-banks were established across the United States. Until the middle of World War II, the newly established US blood banks rejected African-American donors. During the war, Black people were allowed to donate blood, but the donated blood was labeled as being suitable only for transfusion into another person from the same race. 
Frederic Durán-Jordà fled to Britain in 1938 and worked with Dr Janet Vaughan at the Royal Postgraduate Medical School at Hammersmith Hospital to establish a system of national blood banks in London. With the outbreak of war appearing imminent in 1938, the War Office created the Army Blood Supply Depot (ABSD) in Bristol, headed by Lionel Whitby and in control of four large blood-depots around the country. British policy through the war was to supply military personnel with blood from centralized depots, in contrast to the approach taken by the Americans and Germans where troops at the front were bled to provide required blood. The British method proved more successful in adequately meeting all requirements, and over 700,000 donors were bled over the course of the war. This system evolved into the National Blood Transfusion Service established in 1946, the first national service to be implemented. Stories tell of Nazis in Eastern Europe during World War II using captive children as repeated involuntary blood-donors. Medical advances A blood-collection program was initiated in the US in 1940 and Edwin Cohn pioneered the process of blood fractionation. He worked out the techniques for isolating the serum albumin fraction of blood plasma, which is essential for maintaining the osmotic pressure in the blood vessels, preventing their collapse. Gordon R. Ward, writing in the correspondence columns of the British Medical Journal, proposed the use of blood plasma as a substitute for whole blood and for transfusion purposes as early as 1918. At the onset of World War II, liquid plasma was used in Britain. A large project, known as "Blood for Britain", began in August 1940 to collect blood in New York City hospitals for the export of plasma to Britain. A freeze-dried plasma package was developed by the Surgeons General of the Army and Navy, working with the National Research Council, which reduced breakage and made transportation, packaging, and storage much simpler. The resulting dried plasma package came in two tin cans containing 400 mL bottles. One bottle contained enough distilled water to reconstitute the dried plasma contained within the other bottle. In about three minutes, the plasma would be ready to use and could stay fresh for around four hours. Dr. Charles R. Drew was appointed medical supervisor, and he was able to transform the test-tube methods into the first successful technique for mass production. Another important breakthrough came in 1937–40 when Karl Landsteiner (1868–1943), Alex Wiener, Philip Levine, and R.E. Stetson discovered the Rhesus blood group system, which was found to be the cause of the majority of transfusion reactions up to that time. Three years later, the introduction by J.F. Loutit and Patrick L. Mollison of acid–citrate–dextrose (ACD) solution, which reduced the volume of anticoagulant, permitted transfusions of greater volumes of blood and allowed longer-term storage. Carl Walter and W.P. Murphy Jr. introduced the plastic bag for blood collection in 1950. Replacing breakable glass bottles with durable plastic bags made from PVC allowed for the evolution of a collection system capable of safe and easy preparation of multiple blood components from a single unit of whole blood. In the field of cancer surgery, the replacement of massive blood-loss became a major problem. The cardiac-arrest rate was high. In 1963 C. Paul Boyan and William S. 
Howland discovered that the temperature of the blood and the rate of infusion greatly affected survival rates, and introduced blood warming to surgery. Further extending the shelf-life of stored blood up to 42 days was an anticoagulant preservative, CPDA-1, introduced in 1979, which increased the blood supply and facilitated resource-sharing among blood banks. Previously, about 15 million units of blood products were transfused per year in the United States. By 2013 the number had declined to about 11 million units, because of the shift towards laparoscopic surgery and other surgical advances and studies that have shown that many transfusions were unnecessary. For example, the standard of care reduced the amount of blood transfused in one case from 750 to 200 mL. In 2019, 10,852,000 RBC units, 2,243,000 platelet units, and 2,285,000 plasma units were transfused in the United States. Special populations Neonate To ensure the safety of blood transfusion to pediatric patients, hospitals are taking additional precautions to avoid infection and prefer to use pediatric blood units that are guaranteed "safe" from cytomegalovirus. Some guidelines have recommended the provision of CMV-negative blood components and not simply leukoreduced components for newborns or low birthweight infants in whom the immune system is not fully developed, but practice varies. These requirements place additional restrictions on blood donors who can donate for neonatal use, which may be impractical given the rarity of CMV seronegative donors and the preference for fresh units. Neonatal transfusions typically fall into one of two categories: "Top-up" transfusions, to replace losses due to blood sampling for investigations and to correct anemia. Exchange (or partial exchange) transfusions are done for removal of bilirubin, removal of antibodies, and replacement of red cells (e.g., for anemia secondary to thalassemias and other hemoglobinopathies or fetal erythroblastosis). Significant blood loss A massive transfusion protocol is used when significant blood loss is present, such as in major trauma, when more than ten units of blood are needed. Packed red blood cells, fresh frozen plasma, and platelets are generally administered. Typical ratios of fresh frozen plasma, platelets and packed red blood cells are between 1:1:1 and 1:1:2. In some locations, blood has begun to be administered pre-hospital in an effort to reduce preventable deaths from significant blood loss. Earlier analyses suggested that in the US, up to 31,000 patients per year bleed to death who could otherwise have survived if pre-hospital transfusions were widely available. For example, when a mother experiences severe blood loss during pregnancy, ambulances are able to arrive with blood stored in portable, FDA-listed blood refrigerators, similar to those found in blood banks. Once the infusion is given on scene, the patient and the ambulance have more time to get to a hospital for surgery and additional infusions if needed. This could be critical in rural areas or sprawling cities where patients can be far from a major hospital and the local emergency medical team may need to use blood infusions to keep that patient alive during transport. Larger studies pointed to improvements in 24-hour mortality with pre-hospital plasma and RBC transfusions, but no difference in 30-day or long-term mortality. Unknown blood type Because blood type O negative is compatible with anyone, it is often overused and in short supply. 
According to the Association for the Advancement of Blood and Biotherapies, the use of this blood should be restricted to persons with O negative blood, as nothing else is compatible with them, and to women who might be pregnant and for whom it would be impossible to do blood group testing before giving them emergency treatment. Whenever possible, the AABB recommends that O negative blood be conserved by using blood type testing to identify a less scarce alternative. Religious objections Jehovah's Witnesses may object to blood transfusions because of their belief that blood is sacred. Personal objections Sometimes people refuse blood transfusions because of fears about the safety of the blood supply. Generally speaking, the rules of informed consent allow mentally competent adults to refuse blood transfusions even when their objections are based on misinformation or prejudice and even when their refusal may result in serious and permanent harm, including death. For example, since COVID-19 vaccines became available, some people in the US have refused blood transfusions because the donor might have been vaccinated, and they fear that this would cause secondhand harm to them. This choice is based on false beliefs, but a mentally competent adult's choices are normally respected. However, if doctors believe that parents or guardians are making harmful choices for a child, the decision can be overruled (in some jurisdictions) using legal arguments based on the harm principle; in this case, if doctors believe that refusing the blood transfusion would put the child at risk of serious injury or death. Blood banks do not collect information that is irrelevant to the transfusion process, including the donors' race, ethnicity, sexual orientation, COVID-19 vaccination status, etc., so selecting blood units based on the individual's personal objections is not practical. Research into alternatives Although there are clinical situations where transfusion with red blood cells is the only clinically appropriate option, clinicians look at whether alternatives are feasible. This can be for several reasons, such as patient safety, economic burden, or scarcity of blood. Guidelines recommend that blood transfusions should be reserved for patients with or at risk of cardiovascular instability due to the degree of their anaemia. In stable patients with iron deficiency anemia, oral or parenteral iron is recommended. Thus far, there are no FDA-approved oxygen-carrying blood substitutes; oxygen delivery is the typical objective of a red blood cell transfusion. Non-blood volume expanders are available for cases where only volume restoration is required, but a substance with oxygen-carrying capacity would help doctors and surgeons avoid the risks of disease transmission and immune suppression, address the chronic blood donor shortage, and address the concerns of Jehovah's Witnesses and others who have religious objections to receiving transfused blood. The research in this area is ongoing. A number of blood substitutes have been explored, but thus far they all have serious limitations. Most attempts to find a suitable alternative to blood thus far have concentrated on cell-free hemoglobin solutions. Blood substitutes could make transfusions more readily available in emergency medicine and in pre-hospital EMS care. If successful, such a blood substitute could save many lives, particularly in trauma where massive blood loss results. 
Hemopure, a hemoglobin-based therapy, is approved for use in South Africa and has been used in the United States on a case-by-case basis through the emergency Investigational New Drug (IND) process. Veterinary use Veterinarians also administer transfusions to other animals. Various species require different levels of testing to ensure a compatible match. For example, cats have 3 known blood types, cattle have 11, dogs have at least 13, pigs have 16, and horses over 30. However, in many species (especially horses and dogs), cross matching is not required before the first transfusion, as antibodies against non-self cell surface antigens are not expressed constitutively – i.e. the animal has to be sensitized before it will mount an immune response against the transfused blood. The rare and experimental practice of inter-species blood transfusions (xenotransfusion) is a form of xenograft. See also Anemia Arnault Tzanck Blood transfusion in Sri Lanka Blood type (non-human) Young blood transfusion, a pseudoscientific practice involving the transfusion of blood taken from young donors to older recipients that is claimed to have health benefits AIDS References Further reading "Milk as a Substitute for Blood Transfusion", historical account, Scientific American, 13 July 1878, p. 19 External links Transfusion Evidence Library searchable source of evidence for transfusion medicine. Blood transfusion societies American Association of Blood Banks (AABB) British Blood Transfusion Society (BBTS) International Society of Blood Transfusion (ISBT) Books Blood Groups and Red Cell Antigens Free online book at NCBI Bookshelf ID: NBK2261 Handbook of Transfusion Medicine Free book published in the UK 5th edition Guidelines American Association of Blood Banks Clinical Practice Guidelines Australian National Blood Authority Patient Blood Management Guidelines British Committee for Standards in Haematology National Institute for Health and Care Excellence Blood Transfusion Guidance UK Guidance for transfusion. Canadian Blood Transfusion Guidelines German Medical Association Guidelines (English) , published 2014. Patient information Blood Transfusion Leaflets (NHS Blood and Transplant) Blood Transfusion Leaflets (Welsh Blood Service) Blood Transfusion Information (Scotland) Blood Transfusion Information (Australia) Blood Transfusion Information (American Cancer Society) Transfusion medicine Hematology Blood
0.769178
0.998633
0.768127
Homeostasis
In biology, homeostasis (British also homoeostasis) is the state of steady internal physical and chemical conditions maintained by living systems. This is the condition of optimal functioning for the organism and includes many variables, such as body temperature and fluid balance, being kept within certain pre-set limits (homeostatic range). Other variables include the pH of extracellular fluid, the concentrations of sodium, potassium, and calcium ions, as well as the blood sugar level, and these need to be regulated despite changes in the environment, diet, or level of activity. Each of these variables is controlled by one or more regulators or homeostatic mechanisms, which together maintain life. Homeostasis is brought about by a natural resistance to change when already in optimal conditions, and equilibrium is maintained by many regulatory mechanisms; it is thought to be the central motivation for all organic action. All homeostatic control mechanisms have at least three interdependent components for the variable being regulated: a receptor, a control center, and an effector. The receptor is the sensing component that monitors and responds to changes in the environment, either external or internal. Receptors include thermoreceptors and mechanoreceptors. Control centers include the respiratory center and the renin-angiotensin system. An effector is the target acted on, to bring about the change back to the normal state. At the cellular level, effectors include nuclear receptors that bring about changes in gene expression through up-regulation or down-regulation and act in negative feedback mechanisms. An example of this is in the control of bile acids in the liver. Some centers, such as the renin–angiotensin system, control more than one variable. When the receptor senses a stimulus, it reacts by sending action potentials to a control center. The control center sets the maintenance range (the acceptable upper and lower limits) for the particular variable, such as temperature. The control center responds to the signal by determining an appropriate response and sending signals to an effector, which can be one or more muscles, an organ, or a gland. When the signal is received and acted on, negative feedback is provided to the receptor, which stops the need for further signaling. The cannabinoid receptor type 1 (CB1), located at the presynaptic neuron, is a receptor that can stop stressful neurotransmitter release to the postsynaptic neuron; it is activated by endocannabinoids (ECs) such as anandamide (N-arachidonoylethanolamide; AEA) and 2-arachidonoylglycerol (2-AG) via a retrograde signaling process in which these compounds are synthesized by and released from postsynaptic neurons, and travel back to the presynaptic terminal to bind to the CB1 receptor for modulation of neurotransmitter release to obtain homeostasis. Polyunsaturated fatty acids (PUFAs), which are lipid derivatives of omega-3 (docosahexaenoic acid, DHA, and eicosapentaenoic acid, EPA) or of omega-6 (arachidonic acid, ARA) fatty acids, are synthesized from membrane phospholipids and used as precursors for endocannabinoids (ECs), which mediate significant effects in the fine-tuning adjustment of body homeostasis. Etymology The word homeostasis uses combining forms of homeo- and -stasis, Neo-Latin from Greek: ὅμοιος homoios, "similar" and στάσις stasis, "standing still", yielding the idea of "staying the same". 
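The receptor–control centre–effector loop described above can be illustrated with a toy negative-feedback simulation. The following is a minimal sketch in Python using body temperature as the regulated variable; the set point, tolerance, gain, and the way the "effector" nudges the variable are illustrative assumptions, not physiological constants.

# Minimal sketch of a negative-feedback loop: receptor reading -> control centre -> effector.
SET_POINT = 37.0      # degrees Celsius, the centre of the maintenance range
TOLERANCE = 0.5       # acceptable deviation before the effector is engaged
GAIN = 0.3            # fraction of the error corrected per cycle (hypothetical)

def control_centre(measured: float) -> float:
    # Compare the receptor's reading with the maintenance range and return
    # the corrective signal sent to the effector (negative feedback).
    error = measured - SET_POINT
    if abs(error) <= TOLERANCE:
        return 0.0            # within range: no further signalling needed
    return -GAIN * error      # oppose the error (cool if too hot, warm if too cold)

temperature = 39.0            # e.g. after exercise on a hot day
for _ in range(10):
    correction = control_centre(temperature)   # receptor -> control centre
    temperature += correction                  # effector (sweating, vasodilation) acts
print(round(temperature, 2))   # drifts back into the maintenance range around 37.0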
History The concept of the regulation of the internal environment was described by French physiologist Claude Bernard in 1849, and the word homeostasis was coined by Walter Bradford Cannon in 1926. In 1932, Joseph Barcroft a British physiologist, was the first to say that higher brain function required the most stable internal environment. Thus, to Barcroft homeostasis was not only organized by the brain—homeostasis served the brain. Homeostasis is an almost exclusively biological term, referring to the concepts described by Bernard and Cannon, concerning the constancy of the internal environment in which the cells of the body live and survive. The term cybernetics is applied to technological control systems such as thermostats, which function as homeostatic mechanisms but are often defined much more broadly than the biological term of homeostasis. Overview The metabolic processes of all organisms can only take place in very specific physical and chemical environments. The conditions vary with each organism, and with whether the chemical processes take place inside the cell or in the interstitial fluid bathing the cells. The best-known homeostatic mechanisms in humans and other mammals are regulators that keep the composition of the extracellular fluid (or the "internal environment") constant, especially with regard to the temperature, pH, osmolality, and the concentrations of sodium, potassium, glucose, carbon dioxide, and oxygen. However, a great many other homeostatic mechanisms, encompassing many aspects of human physiology, control other entities in the body. Where the levels of variables are higher or lower than those needed, they are often prefixed with hyper- and hypo-, respectively such as hyperthermia and hypothermia or hypertension and hypotension. If an entity is homeostatically controlled it does not imply that its value is necessarily absolutely steady in health. Core body temperature is, for instance, regulated by a homeostatic mechanism with temperature sensors in, amongst others, the hypothalamus of the brain. However, the set point of the regulator is regularly reset. For instance, core body temperature in humans varies during the course of the day (i.e. has a circadian rhythm), with the lowest temperatures occurring at night, and the highest in the afternoons. Other normal temperature variations include those related to the menstrual cycle. The temperature regulator's set point is reset during infections to produce a fever. Organisms are capable of adjusting somewhat to varied conditions such as temperature changes or oxygen levels at altitude, by a process of acclimatisation. Homeostasis does not govern every activity in the body. For instance, the signal (be it via neurons or hormones) from the sensor to the effector is, of necessity, highly variable in order to convey information about the direction and magnitude of the error detected by the sensor. Similarly, the effector's response needs to be highly adjustable to reverse the error – in fact it should be very nearly in proportion (but in the opposite direction) to the error that is threatening the internal environment. For instance, arterial blood pressure in mammals is homeostatically controlled and measured by stretch receptors in the walls of the aortic arch and carotid sinuses at the beginnings of the internal carotid arteries. The sensors send messages via sensory nerves to the medulla oblongata of the brain indicating whether the blood pressure has fallen or risen, and by how much. 
The medulla oblongata then distributes messages along motor or efferent nerves belonging to the autonomic nervous system to a wide variety of effector organs, whose activity is consequently changed to reverse the error in the blood pressure. One of the effector organs is the heart whose rate is stimulated to rise (tachycardia) when the arterial blood pressure falls, or to slow down (bradycardia) when the pressure rises above the set point. Thus the heart rate (for which there is no sensor in the body) is not homeostatically controlled but is one of the effector responses to errors in arterial blood pressure. Another example is the rate of sweating. This is one of the effectors in the homeostatic control of body temperature, and therefore highly variable in rough proportion to the heat load that threatens to destabilize the body's core temperature, for which there is a sensor in the hypothalamus of the brain. Controls of variables Core temperature Mammals regulate their core temperature using input from thermoreceptors in the hypothalamus, brain, spinal cord, internal organs, and great veins. Apart from the internal regulation of temperature, a process called allostasis can come into play that adjusts behaviour to adapt to the challenge of very hot or cold extremes (and to other challenges). These adjustments may include seeking shade and reducing activity, seeking warmer conditions and increasing activity, or huddling. Behavioral thermoregulation takes precedence over physiological thermoregulation since necessary changes can be affected more quickly and physiological thermoregulation is limited in its capacity to respond to extreme temperatures. When the core temperature falls, the blood supply to the skin is reduced by intense vasoconstriction. The blood flow to the limbs (which have a large surface area) is similarly reduced and returned to the trunk via the deep veins which lie alongside the arteries (forming venae comitantes). This acts as a counter-current exchange system that short-circuits the warmth from the arterial blood directly into the venous blood returning into the trunk, causing minimal heat loss from the extremities in cold weather. The subcutaneous limb veins are tightly constricted, not only reducing heat loss from this source but also forcing the venous blood into the counter-current system in the depths of the limbs. The metabolic rate is increased, initially by non-shivering thermogenesis, followed by shivering thermogenesis if the earlier reactions are insufficient to correct the hypothermia. When core temperature rises are detected by thermoreceptors, the sweat glands in the skin are stimulated via cholinergic sympathetic nerves to secrete sweat onto the skin, which, when it evaporates, cools the skin and the blood flowing through it. Panting is an alternative effector in many vertebrates, which cools the body also by the evaporation of water, but this time from the mucous membranes of the throat and mouth. Blood glucose Blood sugar levels are regulated within fairly narrow limits. In mammals, the primary sensors for this are the beta cells of the pancreatic islets. The beta cells respond to a rise in the blood sugar level by secreting insulin into the blood and simultaneously inhibiting their neighboring alpha cells from secreting glucagon into the blood. This combination (high blood insulin levels and low glucagon levels) act on effector tissues, the chief of which is the liver, fat cells, and muscle cells. 
The liver is inhibited from producing glucose, taking it up instead, and converting it to glycogen and triglycerides. The glycogen is stored in the liver, but the triglycerides are secreted into the blood as very low-density lipoprotein (VLDL) particles which are taken up by adipose tissue, there to be stored as fats. The fat cells take up glucose through special glucose transporters (GLUT4), whose numbers in the cell membrane are increased as a direct effect of insulin acting on these cells. The glucose that enters the fat cells in this manner is converted into triglycerides (via the same metabolic pathways as are used by the liver) and then stored in those fat cells together with the VLDL-derived triglycerides that were made in the liver. Muscle cells also take glucose up through insulin-sensitive GLUT4 glucose transporters, and convert it into muscle glycogen.

A fall in blood glucose causes insulin secretion to stop and glucagon to be secreted from the alpha cells into the blood. This inhibits the uptake of glucose from the blood by the liver, fat cells, and muscle. Instead, the liver is strongly stimulated to manufacture glucose from glycogen (through glycogenolysis) and from non-carbohydrate sources (such as lactate and de-aminated amino acids) using a process known as gluconeogenesis. The glucose thus produced is discharged into the blood, correcting the detected error (hypoglycemia). The glycogen stored in muscles remains in the muscles, and is only broken down, during exercise, to glucose-6-phosphate and thence to pyruvate to be fed into the citric acid cycle or turned into lactate. It is only the lactate and the waste products of the citric acid cycle that are returned to the blood. The liver can take up only the lactate, and, by the process of energy-consuming gluconeogenesis, convert it back to glucose.

Iron levels
Controlling iron levels in the body is a critically important part of many aspects of human health and disease. In humans iron is both necessary to the body and potentially harmful.

Copper regulation
Copper is absorbed, transported, distributed, stored, and excreted in the body according to complex homeostatic processes which ensure a constant and sufficient supply of the micronutrient while simultaneously avoiding excess levels. If an insufficient amount of copper is ingested for a short period of time, copper stores in the liver will be depleted. Should this depletion continue, a copper health deficiency condition may develop. If too much copper is ingested, an excess condition can result. Both of these conditions, deficiency and excess, can lead to tissue injury and disease. However, due to homeostatic regulation, the human body is capable of balancing a wide range of copper intakes for the needs of healthy individuals.

Many aspects of copper homeostasis are known at the molecular level. Copper's essentiality is due to its ability to act as an electron donor or acceptor as its oxidation state fluxes between Cu1+ (cuprous) and Cu2+ (cupric). As a component of about a dozen cuproenzymes, copper is involved in key redox (i.e., oxidation-reduction) reactions in essential metabolic processes such as mitochondrial respiration, synthesis of melanin, and cross-linking of collagen. Copper is an integral part of the antioxidant enzyme copper-zinc superoxide dismutase, and has a role in iron homeostasis as a cofactor in ceruloplasmin.
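Returning to the blood glucose regulation described above, the reciprocal insulin/glucagon switch can be caricatured as a simple piece of decision logic. This is a toy Python sketch only; the threshold values and the textual "responses" are invented for illustration and are not clinical reference numbers.

```python
# Toy model of the insulin/glucagon switch described in the blood glucose section.
# All numbers are illustrative, not physiological reference values.

def islet_response(blood_glucose_mmol_per_l,
                   low_threshold=4.0, high_threshold=6.0):
    """Return a dict describing the (simplified) endocrine and effector response."""
    if blood_glucose_mmol_per_l > high_threshold:
        # Beta cells secrete insulin; neighbouring alpha cells are inhibited.
        return {
            "insulin": "high",
            "glucagon": "low",
            "liver": "take up glucose, store glycogen, export VLDL triglycerides",
            "muscle_and_fat": "take up glucose via GLUT4",
        }
    if blood_glucose_mmol_per_l < low_threshold:
        # Insulin secretion stops; alpha cells secrete glucagon.
        return {
            "insulin": "low",
            "glucagon": "high",
            "liver": "glycogenolysis and gluconeogenesis, release glucose",
            "muscle_and_fat": "reduce glucose uptake",
        }
    return {"insulin": "basal", "glucagon": "basal",
            "liver": "steady state", "muscle_and_fat": "steady state"}

print(islet_response(7.5)["liver"])   # fed state
print(islet_response(3.2)["liver"])   # fasted / hypoglycemic state
```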
Levels of blood gases
Changes in the levels of oxygen, carbon dioxide, and plasma pH are sent to the respiratory center in the brainstem, where they are regulated. The partial pressure of oxygen and carbon dioxide in the arterial blood is monitored by the peripheral chemoreceptors (PNS) in the carotid artery and aortic arch. A change in the partial pressure of carbon dioxide is detected as altered pH in the cerebrospinal fluid by central chemoreceptors (CNS) in the medulla oblongata of the brainstem. Information from these sets of sensors is sent to the respiratory center, which activates the effector organs: the diaphragm and other muscles of respiration. An increased level of carbon dioxide in the blood, or a decreased level of oxygen, will result in a deeper breathing pattern and increased respiratory rate to bring the blood gases back to equilibrium. Too little carbon dioxide and, to a lesser extent, too much oxygen in the blood can temporarily halt breathing, a condition known as apnea, which freedivers use to prolong the time they can stay underwater.

The partial pressure of carbon dioxide is more of a deciding factor in the monitoring of pH. However, at high altitude (above 2500 m) the monitoring of the partial pressure of oxygen takes priority, and hyperventilation keeps the oxygen level constant. With the lower level of carbon dioxide, to keep the pH at 7.4 the kidneys secrete hydrogen ions into the blood and excrete bicarbonate into the urine. This is important in acclimatization to high altitude.

Blood oxygen content
The kidneys measure the oxygen content rather than the partial pressure of oxygen in the arterial blood. When the oxygen content of the blood is chronically low, oxygen-sensitive cells secrete erythropoietin (EPO) into the blood. The effector tissue is the red bone marrow, which produces red blood cells (RBCs, also called erythrocytes). The increase in RBCs leads to an increased hematocrit in the blood, and a subsequent increase in hemoglobin that increases the oxygen carrying capacity. This is the mechanism whereby high altitude dwellers have higher hematocrits than sea-level residents, and also why persons with pulmonary insufficiency or right-to-left shunts in the heart (through which venous blood by-passes the lungs and goes directly into the systemic circulation) have similarly high hematocrits. Regardless of the partial pressure of oxygen in the blood, the amount of oxygen that can be carried depends on the hemoglobin content. The partial pressure of oxygen may be sufficient, for example in anemia, but the hemoglobin content will be insufficient and, consequently, so will be the oxygen content. Given enough supply of iron, vitamin B12 and folic acid, EPO can stimulate RBC production, and hemoglobin and oxygen content can be restored to normal.

Arterial blood pressure
The brain can regulate blood flow over a range of blood pressure values by vasoconstriction and vasodilation of the arteries. High pressure receptors called baroreceptors in the walls of the aortic arch and carotid sinus (at the beginning of the internal carotid artery) monitor the arterial blood pressure. Rising pressure is detected when the walls of the arteries stretch due to an increase in blood volume. This causes heart muscle cells to secrete the hormone atrial natriuretic peptide (ANP) into the blood. This acts on the kidneys to inhibit the secretion of renin and aldosterone, causing the release of sodium and accompanying water into the urine, thereby reducing the blood volume.
This information is then conveyed, via afferent nerve fibers, to the solitary nucleus in the medulla oblongata. From here motor nerves belonging to the autonomic nervous system are stimulated to influence the activity of chiefly the heart and the smallest diameter arteries, called arterioles. The arterioles are the main resistance vessels in the arterial tree, and small changes in diameter cause large changes in the resistance to flow through them. When the arterial blood pressure rises the arterioles are stimulated to dilate making it easier for blood to leave the arteries, thus deflating them, and bringing the blood pressure down, back to normal. At the same time, the heart is stimulated via cholinergic parasympathetic nerves to beat more slowly (called bradycardia), ensuring that the inflow of blood into the arteries is reduced, thus adding to the reduction in pressure, and correcting the original error. Low pressure in the arteries, causes the opposite reflex of constriction of the arterioles, and a speeding up of the heart rate (called tachycardia). If the drop in blood pressure is very rapid or excessive, the medulla oblongata stimulates the adrenal medulla, via "preganglionic" sympathetic nerves, to secrete epinephrine (adrenaline) into the blood. This hormone enhances the tachycardia and causes severe vasoconstriction of the arterioles to all but the essential organ in the body (especially the heart, lungs, and brain). These reactions usually correct the low arterial blood pressure (hypotension) very effectively. Calcium levels The plasma ionized calcium (Ca2+) concentration is very tightly controlled by a pair of homeostatic mechanisms. The sensor for the first one is situated in the parathyroid glands, where the chief cells sense the Ca2+ level by means of specialized calcium receptors in their membranes. The sensors for the second are the parafollicular cells in the thyroid gland. The parathyroid chief cells secrete parathyroid hormone (PTH) in response to a fall in the plasma ionized calcium level; the parafollicular cells of the thyroid gland secrete calcitonin in response to a rise in the plasma ionized calcium level. The effector organs of the first homeostatic mechanism are the bones, the kidney, and, via a hormone released into the blood by the kidney in response to high PTH levels in the blood, the duodenum and jejunum. Parathyroid hormone (in high concentrations in the blood) causes bone resorption, releasing calcium into the plasma. This is a very rapid action which can correct a threatening hypocalcemia within minutes. High PTH concentrations cause the excretion of phosphate ions via the urine. Since phosphates combine with calcium ions to form insoluble salts (see also bone mineral), a decrease in the level of phosphates in the blood, releases free calcium ions into the plasma ionized calcium pool. PTH has a second action on the kidneys. It stimulates the manufacture and release, by the kidneys, of calcitriol into the blood. This steroid hormone acts on the epithelial cells of the upper small intestine, increasing their capacity to absorb calcium from the gut contents into the blood. The second homeostatic mechanism, with its sensors in the thyroid gland, releases calcitonin into the blood when the blood ionized calcium rises. This hormone acts primarily on bone, causing the rapid removal of calcium from the blood and depositing it, in insoluble form, in the bones. 
The two homeostatic mechanisms working through PTH on the one hand, and calcitonin on the other can very rapidly correct any impending error in the plasma ionized calcium level by either removing calcium from the blood and depositing it in the skeleton, or by removing calcium from it. The skeleton acts as an extremely large calcium store (about 1 kg) compared with the plasma calcium store (about 180 mg). Longer term regulation occurs through calcium absorption or loss from the gut. Another example are the most well-characterised endocannabinoids like anandamide (N-arachidonoylethanolamide; AEA) and 2-arachidonoylglycerol (2-AG), whose synthesis occurs through the action of a series of intracellular enzymes activated in response to a rise in intracellular calcium levels to introduce homeostasis and prevention of tumor development through putative protective mechanisms that prevent cell growth and migration by activation of CB1 and/or CB2 and adjoining receptors. Sodium concentration The homeostatic mechanism which controls the plasma sodium concentration is rather more complex than most of the other homeostatic mechanisms described on this page. The sensor is situated in the juxtaglomerular apparatus of kidneys, which senses the plasma sodium concentration in a surprisingly indirect manner. Instead of measuring it directly in the blood flowing past the juxtaglomerular cells, these cells respond to the sodium concentration in the renal tubular fluid after it has already undergone a certain amount of modification in the proximal convoluted tubule and loop of Henle. These cells also respond to rate of blood flow through the juxtaglomerular apparatus, which, under normal circumstances, is directly proportional to the arterial blood pressure, making this tissue an ancillary arterial blood pressure sensor. In response to a lowering of the plasma sodium concentration, or to a fall in the arterial blood pressure, the juxtaglomerular cells release renin into the blood. Renin is an enzyme which cleaves a decapeptide (a short protein chain, 10 amino acids long) from a plasma α-2-globulin called angiotensinogen. This decapeptide is known as angiotensin I. It has no known biological activity. However, when the blood circulates through the lungs a pulmonary capillary endothelial enzyme called angiotensin-converting enzyme (ACE) cleaves a further two amino acids from angiotensin I to form an octapeptide known as angiotensin II. Angiotensin II is a hormone which acts on the adrenal cortex, causing the release into the blood of the steroid hormone, aldosterone. Angiotensin II also acts on the smooth muscle in the walls of the arterioles causing these small diameter vessels to constrict, thereby restricting the outflow of blood from the arterial tree, causing the arterial blood pressure to rise. This, therefore, reinforces the measures described above (under the heading of "Arterial blood pressure"), which defend the arterial blood pressure against changes, especially hypotension. The angiotensin II-stimulated aldosterone released from the zona glomerulosa of the adrenal glands has an effect on particularly the epithelial cells of the distal convoluted tubules and collecting ducts of the kidneys. Here it causes the reabsorption of sodium ions from the renal tubular fluid, in exchange for potassium ions which are secreted from the blood plasma into the tubular fluid to exit the body via the urine. 
The reabsorption of sodium ions from the renal tubular fluid halts further sodium ion losses from the body, and therefore preventing the worsening of hyponatremia. The hyponatremia can only be corrected by the consumption of salt in the diet. However, it is not certain whether a "salt hunger" can be initiated by hyponatremia, or by what mechanism this might come about. When the plasma sodium ion concentration is higher than normal (hypernatremia), the release of renin from the juxtaglomerular apparatus is halted, ceasing the production of angiotensin II, and its consequent aldosterone-release into the blood. The kidneys respond by excreting sodium ions into the urine, thereby normalizing the plasma sodium ion concentration. The low angiotensin II levels in the blood lower the arterial blood pressure as an inevitable concomitant response. The reabsorption of sodium ions from the tubular fluid as a result of high aldosterone levels in the blood does not, of itself, cause renal tubular water to be returned to the blood from the distal convoluted tubules or collecting ducts. This is because sodium is reabsorbed in exchange for potassium and therefore causes only a modest change in the osmotic gradient between the blood and the tubular fluid. Furthermore, the epithelium of the distal convoluted tubules and collecting ducts is impermeable to water in the absence of antidiuretic hormone (ADH) in the blood. ADH is part of the control of fluid balance. Its levels in the blood vary with the osmolality of the plasma, which is measured in the hypothalamus of the brain. Aldosterone's action on the kidney tubules prevents sodium loss to the extracellular fluid (ECF). So there is no change in the osmolality of the ECF, and therefore no change in the ADH concentration of the plasma. However, low aldosterone levels cause a loss of sodium ions from the ECF, which could potentially cause a change in extracellular osmolality and therefore of ADH levels in the blood. Potassium concentration High potassium concentrations in the plasma cause depolarization of the zona glomerulosa cells' membranes in the outer layer of the adrenal cortex. This causes the release of aldosterone into the blood. Aldosterone acts primarily on the distal convoluted tubules and collecting ducts of the kidneys, stimulating the excretion of potassium ions into the urine. It does so, however, by activating the basolateral Na+/K+ pumps of the tubular epithelial cells. These sodium/potassium exchangers pump three sodium ions out of the cell, into the interstitial fluid and two potassium ions into the cell from the interstitial fluid. This creates an ionic concentration gradient which results in the reabsorption of sodium (Na+) ions from the tubular fluid into the blood, and secreting potassium (K+) ions from the blood into the urine (lumen of collecting duct). Fluid balance The total amount of water in the body needs to be kept in balance. Fluid balance involves keeping the fluid volume stabilized, and also keeping the levels of electrolytes in the extracellular fluid stable. Fluid balance is maintained by the process of osmoregulation and by behavior. Osmotic pressure is detected by osmoreceptors in the median preoptic nucleus in the hypothalamus. 
Measurement of the plasma osmolality to give an indication of the water content of the body, relies on the fact that water losses from the body, (through unavoidable water loss through the skin which is not entirely waterproof and therefore always slightly moist, water vapor in the exhaled air, sweating, vomiting, normal feces and especially diarrhea) are all hypotonic, meaning that they are less salty than the body fluids (compare, for instance, the taste of saliva with that of tears. The latter has almost the same salt content as the extracellular fluid, whereas the former is hypotonic with respect to the plasma. Saliva does not taste salty, whereas tears are decidedly salty). Nearly all normal and abnormal losses of body water therefore cause the extracellular fluid to become hypertonic. Conversely, excessive fluid intake dilutes the extracellular fluid causing the hypothalamus to register hypotonic hyponatremia conditions. When the hypothalamus detects a hypertonic extracellular environment, it causes the secretion of an antidiuretic hormone (ADH) called vasopressin which acts on the effector organ, which in this case is the kidney. The effect of vasopressin on the kidney tubules is to reabsorb water from the distal convoluted tubules and collecting ducts, thus preventing aggravation of the water loss via the urine. The hypothalamus simultaneously stimulates the nearby thirst center causing an almost irresistible (if the hypertonicity is severe enough) urge to drink water. The cessation of urine flow prevents the hypovolemia and hypertonicity from getting worse; the drinking of water corrects the defect. Hypo-osmolality results in very low plasma ADH levels. This results in the inhibition of water reabsorption from the kidney tubules, causing high volumes of very dilute urine to be excreted, thus getting rid of the excess water in the body. Urinary water loss, when the body water homeostat is intact, is a compensatory water loss, correcting any water excess in the body. However, since the kidneys cannot generate water, the thirst reflex is the all-important second effector mechanism of the body water homeostat, correcting any water deficit in the body. Blood pH The plasma pH can be altered by respiratory changes in the partial pressure of carbon dioxide; or altered by metabolic changes in the carbonic acid to bicarbonate ion ratio. The bicarbonate buffer system regulates the ratio of carbonic acid to bicarbonate to be equal to 1:20, at which ratio the blood pH is 7.4 (as explained in the Henderson–Hasselbalch equation). A change in the plasma pH gives an acid–base imbalance. In acid–base homeostasis there are two mechanisms that can help regulate the pH. Respiratory compensation a mechanism of the respiratory center, adjusts the partial pressure of carbon dioxide by changing the rate and depth of breathing, to bring the pH back to normal. The partial pressure of carbon dioxide also determines the concentration of carbonic acid, and the bicarbonate buffer system can also come into play. Renal compensation can help the bicarbonate buffer system. The sensor for the plasma bicarbonate concentration is not known for certain. It is very probable that the renal tubular cells of the distal convoluted tubules are themselves sensitive to the pH of the plasma. The metabolism of these cells produces carbon dioxide, which is rapidly converted to hydrogen and bicarbonate through the action of carbonic anhydrase. 
When the ECF pH falls (becoming more acidic) the renal tubular cells excrete hydrogen ions into the tubular fluid to leave the body via urine. Bicarbonate ions are simultaneously secreted into the blood that decreases the carbonic acid, and consequently raises the plasma pH. The converse happens when the plasma pH rises above normal: bicarbonate ions are excreted into the urine, and hydrogen ions released into the plasma. When hydrogen ions are excreted into the urine, and bicarbonate into the blood, the latter combines with the excess hydrogen ions in the plasma that stimulated the kidneys to perform this operation. The resulting reaction in the plasma is the formation of carbonic acid which is in equilibrium with the plasma partial pressure of carbon dioxide. This is tightly regulated to ensure that there is no excessive build-up of carbonic acid or bicarbonate. The overall effect is therefore that hydrogen ions are lost in the urine when the pH of the plasma falls. The concomitant rise in the plasma bicarbonate mops up the increased hydrogen ions (caused by the fall in plasma pH) and the resulting excess carbonic acid is disposed of in the lungs as carbon dioxide. This restores the normal ratio between bicarbonate and the partial pressure of carbon dioxide and therefore the plasma pH. The converse happens when a high plasma pH stimulates the kidneys to secrete hydrogen ions into the blood and to excrete bicarbonate into the urine. The hydrogen ions combine with the excess bicarbonate ions in the plasma, once again forming an excess of carbonic acid which can be exhaled, as carbon dioxide, in the lungs, keeping the plasma bicarbonate ion concentration, the partial pressure of carbon dioxide and, therefore, the plasma pH, constant. Cerebrospinal fluid Cerebrospinal fluid (CSF) allows for regulation of the distribution of substances between cells of the brain, and neuroendocrine factors, to which slight changes can cause problems or damage to the nervous system. For example, high glycine concentration disrupts temperature and blood pressure control, and high CSF pH causes dizziness and syncope. Neurotransmission Inhibitory neurons in the central nervous system play a homeostatic role in the balance of neuronal activity between excitation and inhibition. Inhibitory neurons using GABA, make compensating changes in the neuronal networks preventing runaway levels of excitation. An imbalance between excitation and inhibition is seen to be implicated in a number of neuropsychiatric disorders. Neuroendocrine system The neuroendocrine system is the mechanism by which the hypothalamus maintains homeostasis, regulating metabolism, reproduction, eating and drinking behaviour, energy utilization, osmolarity and blood pressure. The regulation of metabolism, is carried out by hypothalamic interconnections to other glands. Three endocrine glands of the hypothalamic–pituitary–gonadal axis (HPG axis) often work together and have important regulatory functions. Two other regulatory endocrine axes are the hypothalamic–pituitary–adrenal axis (HPA axis) and the hypothalamic–pituitary–thyroid axis (HPT axis). The liver also has many regulatory functions of the metabolism. An important function is the production and control of bile acids. Too much bile acid can be toxic to cells and its synthesis can be inhibited by activation of FXR a nuclear receptor. 
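As a worked illustration of the bicarbonate buffer figures quoted in the blood pH section above, the Henderson–Hasselbalch equation links the 1:20 carbonic acid to bicarbonate ratio to the normal plasma pH of 7.4, taking the conventional pKa of about 6.1 for the bicarbonate system (these numerical values are standard textbook figures rather than measurements reported in this article):

```latex
\mathrm{pH} \;=\; \mathrm{p}K_a + \log_{10}\!\frac{[\mathrm{HCO_3^-}]}{[\mathrm{H_2CO_3}]}
\;\approx\; 6.1 + \log_{10}(20)
\;\approx\; 6.1 + 1.3
\;=\; 7.4
```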
Gene regulation
At the cellular level, homeostasis is carried out by several mechanisms including transcriptional regulation that can alter the activity of genes in response to changes.

Energy balance
The amount of energy taken in through nutrition needs to match the amount of energy used. To achieve energy homeostasis, appetite is regulated by two hormones, ghrelin and leptin. Ghrelin stimulates hunger and the intake of food, and leptin acts to signal satiety (fullness). A 2019 review of weight-change interventions, including dieting, exercise and overeating, found that body weight homeostasis could not precisely correct for "energetic errors", the loss or gain of calories, in the short-term.

Clinical significance
Many diseases are the result of a homeostatic failure. Almost any homeostatic component can malfunction either as a result of an inherited defect, an inborn error of metabolism, or an acquired disease. Some homeostatic mechanisms have inbuilt redundancies, which ensures that life is not immediately threatened if a component malfunctions; but sometimes a homeostatic malfunction can result in serious disease, which can be fatal if not treated. A well-known example of a homeostatic failure is shown in type 1 diabetes mellitus. Here blood sugar regulation is unable to function because the beta cells of the pancreatic islets are destroyed and cannot produce the necessary insulin. The blood sugar rises in a condition known as hyperglycemia.

The plasma ionized calcium homeostat can be disrupted by the constant, unchanging over-production of parathyroid hormone by a parathyroid adenoma, resulting in the typical features of hyperparathyroidism, namely high plasma ionized Ca2+ levels and the resorption of bone, which can lead to spontaneous fractures. The abnormally high plasma ionized calcium concentrations cause conformational changes in many cell-surface proteins (especially ion channels and hormone or neurotransmitter receptors), giving rise to lethargy, muscle weakness, anorexia, constipation and labile emotions.

The body water homeostat can be compromised by the inability to secrete ADH in response to even the normal daily water losses via the exhaled air, the feces, and insensible sweating. On receiving a zero blood ADH signal, the kidneys produce huge unchanging volumes of very dilute urine, causing dehydration and death if not treated.

As organisms age, the efficiency of their control systems becomes reduced. The inefficiencies gradually result in an unstable internal environment that increases the risk of illness, and leads to the physical changes associated with aging. Various chronic diseases are kept under control by homeostatic compensation, which masks a problem by compensating for it (making up for it) in another way. However, the compensating mechanisms eventually wear out or are disrupted by a new complicating factor (such as the advent of a concurrent acute viral infection), which sends the body reeling through a new cascade of events. Such decompensation unmasks the underlying disease, worsening its symptoms. Common examples include decompensated heart failure, kidney failure, and liver failure.

Biosphere
In the Gaia hypothesis, James Lovelock stated that the entire mass of living matter on Earth (or any planet with life) functions as a vast homeostatic superorganism that actively modifies its planetary environment to produce the environmental conditions necessary for its own survival.
In this view, the entire planet maintains several homeostasis (the primary one being temperature homeostasis). Whether this sort of system is present on Earth is open to debate. However, some relatively simple homeostatic mechanisms are generally accepted. For example, it is sometimes claimed that when atmospheric carbon dioxide levels rise, certain plants may be able to grow better and thus act to remove more carbon dioxide from the atmosphere. However, warming has exacerbated droughts, making water the actual limiting factor on land. When sunlight is plentiful and the atmospheric temperature climbs, it has been claimed that the phytoplankton of the ocean surface waters, acting as global sunshine, and therefore heat sensors, may thrive and produce more dimethyl sulfide (DMS). The DMS molecules act as cloud condensation nuclei, which produce more clouds, and thus increase the atmospheric albedo, and this feeds back to lower the temperature of the atmosphere. However, rising sea temperature has stratified the oceans, separating warm, sunlit waters from cool, nutrient-rich waters. Thus, nutrients have become the limiting factor, and plankton levels have actually fallen over the past 50 years, not risen. As scientists discover more about Earth, vast numbers of positive and negative feedback loops are being discovered, that, together, maintain a metastable condition, sometimes within a very broad range of environmental conditions. Predictive Predictive homeostasis is an anticipatory response to an expected challenge in the future, such as the stimulation of insulin secretion by gut hormones which enter the blood in response to a meal. This insulin secretion occurs before the blood sugar level rises, lowering the blood sugar level in anticipation of a large influx into the blood of glucose resulting from the digestion of carbohydrates in the gut. Such anticipatory reactions are open loop systems which are based, essentially, on "guess work", and are not self-correcting. Anticipatory responses always require a closed loop negative feedback system to correct the 'over-shoots' and 'under-shoots' to which the anticipatory systems are prone. Other fields The term has come to be used in other fields, for example: Risk An actuary may refer to risk homeostasis, where (for example) people who have anti-lock brakes have no better safety record than those without anti-lock brakes, because the former unconsciously compensate for the safer vehicle via less-safe driving habits. Previous to the innovation of anti-lock brakes, certain maneuvers involved minor skids, evoking fear and avoidance: Now the anti-lock system moves the boundary for such feedback, and behavior patterns expand into the no-longer punitive area. It has also been suggested that ecological crises are an instance of risk homeostasis in which a particular behavior continues until proven dangerous or dramatic consequences actually occur. Stress Sociologists and psychologists may refer to stress homeostasis, the tendency of a population or an individual to stay at a certain level of stress, often generating artificial stresses if the "natural" level of stress is not enough. Jean-François Lyotard, a postmodern theorist, has applied this term to societal 'power centers' that he describes in The Postmodern Condition, as being 'governed by a principle of homeostasis,' for example, the scientific hierarchy, which will sometimes ignore a radical new discovery for years because it destabilizes previously accepted norms. 
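The anticipatory (open-loop) response described under Predictive homeostasis, together with the closed-loop correction it still requires, can be sketched as a feed-forward term plus a negative-feedback term. The Python fragment below is purely illustrative; the meal signal, gains, set point and glucose numbers are all invented.

```python
# Feed-forward ("predictive") response plus negative-feedback correction.
# Illustrative only: the signal, gains, and units are invented.

def regulate(glucose, meal_expected, set_point=5.0,
             feedforward_gain=0.8, feedback_gain=0.3):
    """Return a combined 'insulin drive' for one time step."""
    anticipatory = feedforward_gain if meal_expected else 0.0   # open loop, not self-correcting
    error = glucose - set_point
    corrective = feedback_gain * error                          # closed loop, corrects over/under-shoots
    return anticipatory + corrective

# Before the meal raises glucose, the anticipatory term already acts;
# afterwards, feedback trims whatever the guess got wrong.
print(regulate(glucose=5.0, meal_expected=True))    # anticipation only
print(regulate(glucose=7.0, meal_expected=False))   # feedback correction only
```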
Technology
Familiar technological homeostatic mechanisms include:
A thermostat operates by switching heaters or air-conditioners on and off in response to the output of a temperature sensor.
Cruise control adjusts a car's throttle in response to changes in speed.
An autopilot operates the steering controls of an aircraft or ship in response to deviation from a pre-set compass bearing or route.
Process control systems in a chemical plant or oil refinery maintain fluid levels, pressures, temperature, chemical composition, etc. by controlling heaters, pumps and valves.
The centrifugal governor of a steam engine, as designed by James Watt in 1788, reduces the throttle valve in response to increases in the engine speed, or opens the valve if the speed falls below the pre-set rate.

Society and culture
The use of sovereign power, codes of conduct, religious and cultural practices and other dynamic processes in a society can be described as a part of an evolved homeostatic system of regularizing life and maintaining an overall equilibrium that protects the security of the whole from internal and external imbalances or dangers. Healthy civic cultures can be said to have achieved an optimal homeostatic balance between multiple contradictory concerns, such as in the tension between respect for individual rights and concern for the public good, or that between governmental effectiveness and responsiveness to the interests of citizens.
Mastocytosis
Mastocytosis, a type of mast cell disease, is a rare disorder affecting both children and adults caused by the accumulation of functionally defective mast cells (also called mastocytes) and CD34+ mast cell precursors. People affected by mastocytosis are susceptible to a variety of symptoms, including itching, hives, and anaphylactic shock, caused by the release of histamine and other pro-inflammatory substances from mast cells. Signs and symptoms When mast cells undergo degranulation, the substances that are released can cause a number of symptoms that can vary over time and can range in intensity from mild to severe. Because mast cells play a role in allergic reactions, the symptoms of mastocytosis often are similar to the symptoms of an allergic reaction. They may include, but are not limited to Fatigue Skin lesions (urticaria pigmentosa), itching, and dermatographic urticaria (skin writing) "Darier's Sign", a reaction to stroking or scratching of urticaria lesions. Abdominal discomfort Nausea and vomiting Diarrhea Olfactive intolerance Ear/nose/throat inflammation Anaphylaxis (shock from allergic or immune causes) Episodes of very low blood pressure (including shock) and faintness Bone or muscle pain Decreased bone density or increased bone density (osteoporosis or osteosclerosis) Headache Depression Ocular discomfort Increased stomach acid production causing peptic ulcers (increased stimulation of enterochromaffin cell and direct histamine stimulation on parietal cell) Malabsorption (due to inactivation of pancreatic enzymes by increased acid) Hepatosplenomegaly There are few qualitative studies about the effects of mastocytosis on daily life. However, a Danish study from 2018 describes the multidimensional impact of the disease on everyday life. Pathophysiology Mast cells are located in connective tissue, including the skin, the linings of the stomach and intestine, and other sites. They play an important role in the immune defence against bacteria and parasites. By releasing chemical "alarms" such as histamine, mast cells attract other key players of the immune defense system to areas of the body where they are needed. Mast cells seem to have other roles as well. Because they gather together around wounds, mast cells may play a part in wound healing. For example, the typical itching felt around a healing scab may be caused by histamine released by mast cells. Researchers also think mast cells may have a role in the growth of blood vessels (angiogenesis). No one with too few or no mast cells has been found, which indicates to some scientists we may not be able to survive with too few mast cells. Mast cells express a cell surface receptor, c-kit (CD117), which is the receptor for stem cell factor (scf). In laboratory studies, scf appears to be important for the proliferation of mast cells. Mutations of the gene coding for the c-kit receptor (mutation KIT(D816V)), leading to constitutive signalling through the receptor is found in >90% of patients with systemic mastocytosis. Diagnosis Diagnosis of urticaria pigmentosa (cutaneous mastocytosis, see above) can often be done by seeing the characteristic lesions that are dark brown and fixed. A small skin sample (biopsy) may help confirm the diagnosis. In case of suspicion of systemic disease the level of serum tryptase in the blood can be of help. If the base level of s-tryptase is elevated, this implies that the mastocytosis can be systemic. 
In cases of suspected SM, help can also be drawn from analysis of the KIT(D816V) mutation in peripheral blood using sensitive PCR technology.

To establish the diagnosis of systemic mastocytosis, certain criteria must be met: either one major plus one minor criterion, or three minor criteria, have to be fulfilled.
Major criterion
Dense infiltrates of >15 mast cells in the bone marrow or an extracutaneous organ
Minor criteria
Aberrant phenotype on the mast cells (positive for CD2 and/or CD25)
Aberrant mast cell morphology (spindle-shaped)
Finding of the KIT(D816V) mutation
Serum tryptase >20 ng/ml

Other mast cell diseases
Other types of mast cell disease include:
Monoclonal mast cell activation, defined by the World Health Organization definitions 2010, which also involves increased mast cells but insufficient to be systemic mastocytosis
Mast cell activation syndrome, which involves a normal number of mast cells but all the symptoms and, in some cases, the genetic markers of systemic mastocytosis
Another known but rare mast cell proliferation disease is mast cell sarcoma.

Classification
Mastocytosis can occur in a variety of forms:

Cutaneous mastocytosis (CM)
The most common cutaneous mastocytosis is maculopapular cutaneous mastocytosis (MPCM), previously named papular urticaria pigmentosa (UP), more common in children, although also seen in adults. Telangiectasia macularis eruptiva perstans (TMEP) is a much rarer form of cutaneous mastocytosis that affects adults. MPCM and TMEP can be a part of indolent systemic mastocytosis; this should be considered if patients develop any systemic symptoms. Generalized eruption of cutaneous mastocytosis (adult type) is the most common pattern of mastocytosis presenting to the dermatologist, with the most common lesions being macules, papules, or nodules that are disseminated over most of the body, especially on the upper arms, legs, and trunk. Diffuse cutaneous mastocytosis has diffuse involvement in which the entire integument may be thickened and infiltrated with mast cells to produce a peculiar orange color, giving rise to the term homme orange. Cutaneous mastocytosis in children usually presents in the first year after birth and in most cases vanishes during adolescence.

Systemic mastocytosis (SM)
Systemic mastocytosis involves the bone marrow in the majority of cases and in some cases other internal organs, usually in addition to involving the skin. Mast cells collect in various tissues and can affect organs that mast cells do not normally inhabit, such as the liver, spleen and lymph nodes, as well as organs which have normal populations but where numbers are increased. In the bowel, it may manifest as mastocytic enterocolitis. However, normal ranges for mast cell counts in the gastrointestinal tract mucosa are not well established in the literature, and depend upon the exact location (e.g. right versus left colon), gender, and patient populations (such as asymptomatic patients versus patients with chronic diarrhea of unknown etiology). Quantitative mast cell stains may yield little diagnostic information, and further research studies are warranted to determine whether "mastocytic enterocolitis" truly represents a distinct entity.

There are five types of systemic mastocytosis:
Indolent systemic mastocytosis (ISM).
This is the most common type of SM (>90% of cases). The other four types are:
Smouldering systemic mastocytosis (SSM)
Systemic mastocytosis with associated hematological neoplasm (SM-AHN)
Aggressive systemic mastocytosis (ASM)
Mast cell leukemia (MCL)

Treatment
There is no cure for mastocytosis, but there are a number of medicines to help treat the symptoms:

Anti-mediator therapy
Antihistamines block receptors targeted by histamine released from mast cells. Both H1 and H2 blockers may be helpful, often in combination.
Leukotriene antagonists block receptors targeted by leukotrienes released from mast cells.
Mast cell stabilizers help prevent mast cells from releasing their chemical contents. Cromoglicic acid is the only medicine specifically approved by the FDA for the treatment of mastocytosis. Ketotifen is available in Canada and Europe and more recently in the U.S. It is also available as eyedrops (Zaditor).
Proton-pump inhibitors help reduce production of gastric acid, which is often increased in patients with mastocytosis. Excess gastric acid can harm the stomach, esophagus, and small intestine.
Epinephrine constricts blood vessels and opens airways to maintain adequate circulation and ventilation when excessive mast cell degranulation has caused anaphylaxis.
Salbutamol and other beta-2 agonists open airways that can constrict in the presence of histamine.
Corticosteroids can be used topically, inhaled, or systemically to reduce inflammation associated with mastocytosis.
Drugs to prevent or treat osteoporosis include calcium and vitamin D, bisphosphonates and, in rare cases, inhibitors of RANK-L.
Antidepressants are an important and often overlooked tool in the treatment of mastocytosis. Depression and other neurological symptoms have been noted in mastocytosis. Some antidepressants, such as doxepin and mirtazapine, are themselves potent antihistamines and can help relieve physical as well as cognitive symptoms.

Cytoreductive therapy
In cases of advanced systemic mastocytosis, or in rare cases of indolent systemic mastocytosis with very troublesome symptoms, cytoreductive therapy can be indicated:
α-interferon, given as subcutaneous injections. Side effects include fatigue and influenza-like symptoms.
Cladribine (CdA), a chemotherapy given as subcutaneous injections. Side effects include immunodeficiency and infections.
Tyrosine kinase inhibitors (TKIs):
Midostaurin, a TKI acting on many different tyrosine kinases, approved by the FDA and EMA for advanced mastocytosis; 60% of patients respond.
Imatinib, which can have an effect in the rare cases without the KIT(D816V) mutation.
Masitinib, which is being tested in trials and is not approved.
Avapritinib, which is currently being tested in trials and is showing promise in reducing tryptase levels.
Allogeneic stem cell transplantation has been used in rare cases with aggressive systemic mastocytosis in patients deemed to be fit for the procedure.

Other
Treatment with ultraviolet light can relieve skin symptoms, but may increase the risk of skin cancer.

Prognosis
Patients with indolent systemic mastocytosis have a normal life expectancy. The prognosis for patients with advanced systemic mastocytosis differs depending on the type of disease, with MCL being the most serious form, with short survival.

Epidemiology
The true incidence and prevalence of mastocytosis is unknown, but mastocytosis generally has been considered to be an "orphan disease"; orphan diseases affect 200,000 or fewer people in the United States.
Mastocytosis, however, may often be misdiagnosed, as it typically occurs secondary to another condition, and thus may occur more frequently than assumed.

Research
National Institute of Allergy and Infectious Diseases scientists have been studying and treating patients with mastocytosis for several years at the National Institutes of Health (NIH) Clinical Center. Some of the most important research advances for this rare disorder include improved diagnosis of mast cell disease and identification of growth factors and genetic mechanisms responsible for increased mast cell production. Researchers are currently evaluating approaches to improve ways to treat mastocytosis. Scientists also are focusing on identifying disease-associated mutations (changes in genes). NIH scientists have identified some mutations, which may help researchers understand the causes of mastocytosis, improve diagnosis, and develop better treatments. In Europe the European Competence Network on Mastocytosis (ECNM) coordinates studies, registries and education on mastocytosis.

History
Urticaria pigmentosa was first described in 1869. The first report of a primary mast cell disorder is attributed to Unna, who in 1887 reported that skin lesions of urticaria pigmentosa contained numerous mast cells. Systemic mastocytosis was first reported by French scientists in 1936.

See also
Mastocytoma
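For completeness, the criteria combination given under Diagnosis (one major plus one minor criterion, or three minor criteria) can be written as a small rule check. This is an illustrative Python sketch only, not a clinical decision tool; the criterion names are shortened labels for the items listed in the Diagnosis section.

```python
# Rule check for the systemic mastocytosis criteria combination described
# under Diagnosis: one major + one minor criterion, OR three minor criteria.
# Illustrative only; not a clinical tool.

MINOR = {"aberrant_phenotype", "spindle_morphology", "kit_d816v", "tryptase_gt_20"}

def meets_sm_criteria(major_dense_infiltrate: bool, minor_found: set) -> bool:
    minor_count = len(minor_found & MINOR)
    return (major_dense_infiltrate and minor_count >= 1) or (minor_count >= 3)

print(meets_sm_criteria(True, {"kit_d816v"}))                          # True
print(meets_sm_criteria(False, {"kit_d816v", "tryptase_gt_20"}))       # False
print(meets_sm_criteria(False, {"kit_d816v", "tryptase_gt_20",
                                "aberrant_phenotype"}))                # True
```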
Bioenergetics
Bioenergetics is a field in biochemistry and cell biology that concerns energy flow through living systems. This is an active area of biological research that includes the study of the transformation of energy in living organisms and the study of thousands of different cellular processes such as cellular respiration and the many other metabolic and enzymatic processes that lead to production and utilization of energy in forms such as adenosine triphosphate (ATP) molecules. That is, the goal of bioenergetics is to describe how living organisms acquire and transform energy in order to perform biological work. The study of metabolic pathways is thus essential to bioenergetics. Overview Bioenergetics is the part of biochemistry concerned with the energy involved in making and breaking of chemical bonds in the molecules found in biological organisms. It can also be defined as the study of energy relationships and energy transformations and transductions in living organisms. The ability to harness energy from a variety of metabolic pathways is a property of all living organisms. Growth, development, anabolism and catabolism are some of the central processes in the study of biological organisms, because the role of energy is fundamental to such biological processes. Life is dependent on energy transformations; living organisms survive because of exchange of energy between living tissues/ cells and the outside environment. Some organisms, such as autotrophs, can acquire energy from sunlight (through photosynthesis) without needing to consume nutrients and break them down. Other organisms, like heterotrophs, must intake nutrients from food to be able to sustain energy by breaking down chemical bonds in nutrients during metabolic processes such as glycolysis and the citric acid cycle. Importantly, as a direct consequence of the First Law of Thermodynamics, autotrophs and heterotrophs participate in a universal metabolic network—by eating autotrophs (plants), heterotrophs harness energy that was initially transformed by the plants during photosynthesis. In a living organism, chemical bonds are broken and made as part of the exchange and transformation of energy. Energy is available for work (such as mechanical work) or for other processes (such as chemical synthesis and anabolic processes in growth), when weak bonds are broken and stronger bonds are made. The production of stronger bonds allows release of usable energy. Adenosine triphosphate (ATP) is the main "energy currency" for organisms; the goal of metabolic and catabolic processes are to synthesize ATP from available starting materials (from the environment), and to break- down ATP (into adenosine diphosphate (ADP) and inorganic phosphate) by utilizing it in biological processes. In a cell, the ratio of ATP to ADP concentrations is known as the "energy charge" of the cell. A cell can use this energy charge to relay information about cellular needs; if there is more ATP than ADP available, the cell can use ATP to do work, but if there is more ADP than ATP available, the cell must synthesize ATP via oxidative phosphorylation. Living organisms produce ATP from energy sources via oxidative phosphorylation. The terminal phosphate bonds of ATP are relatively weak compared with the stronger bonds formed when ATP is hydrolyzed (broken down by water) to adenosine diphosphate and inorganic phosphate. 
Here it is the thermodynamically favorable free energy of hydrolysis that results in energy release; the phosphoanhydride bond between the terminal phosphate group and the rest of the ATP molecule does not itself contain this energy. An organism's stockpile of ATP is used as a battery to store energy in cells. Utilization of chemical energy from such molecular bond rearrangement powers biological processes in every biological organism. Living organisms obtain energy from organic and inorganic materials; i.e. ATP can be synthesized from a variety of biochemical precursors. For example, lithotrophs can oxidize minerals such as nitrates or forms of sulfur, such as elemental sulfur, sulfites, and hydrogen sulfide, to produce ATP. In photosynthesis, autotrophs produce ATP using light energy, whereas heterotrophs must consume organic compounds, mostly carbohydrates, fats, and proteins. The amount of energy actually obtained by the organism is lower than the amount present in the food; there are losses in digestion, metabolism, and thermogenesis. Environmental materials that an organism takes in are generally combined with oxygen to release energy, although some nutrients can also be oxidized anaerobically by various organisms. The utilization of these materials is a form of slow combustion because the nutrients are reacted with oxygen (the materials are oxidized slowly enough that the organisms do not produce fire). The oxidation releases energy, which may evolve as heat or be used by the organism for other purposes, such as breaking chemical bonds.

Types of reactions
An exergonic reaction is a spontaneous chemical reaction that releases energy. It is thermodynamically favored, indexed by a negative value of ΔG (Gibbs free energy). Over the course of a reaction, energy needs to be put in, and this activation energy drives the reactants from a stable state through a highly energetically unstable transition state to a more stable state that is lower in energy (see: reaction coordinate). The reactants are usually complex molecules that are broken into simpler products. The entire reaction is usually catabolic. The change in energy (the Gibbs free energy) is negative (i.e. −ΔG) because energy is released from the reactants to the products.

An endergonic reaction is an anabolic chemical reaction that consumes energy. It is the opposite of an exergonic reaction. It has a positive ΔG because it takes more energy to break the bonds of the reactants than the energy the products offer, i.e. the products have weaker bonds than the reactants. Thus, endergonic reactions are thermodynamically unfavorable. Additionally, endergonic reactions are usually anabolic.

The free energy (ΔG) gained or lost in a reaction can be calculated as follows: ΔG = ΔH − TΔS, where ∆G = Gibbs free energy, ∆H = enthalpy, T = temperature (in kelvins), and ∆S = entropy.

Examples of major bioenergetic processes
Glycolysis is the process of breaking down glucose into pyruvate, producing two molecules of ATP (per 1 molecule of glucose) in the process. When a cell has a higher concentration of ATP than ADP (i.e. has a high energy charge), the cell does not need to undergo glycolysis, which releases energy from available glucose to perform biological work. Pyruvate is one product of glycolysis, and can be shuttled into other metabolic pathways (gluconeogenesis, etc.) as needed by the cell.
Additionally, glycolysis produces reducing equivalents in the form of NADH (nicotinamide adenine dinucleotide), which will ultimately be used to donate electrons to the electron transport chain. Gluconeogenesis is the opposite of glycolysis; when the cell's energy charge is low (the concentration of ADP is higher than that of ATP), the cell must synthesize glucose from carbon-containing biomolecules such as proteins, amino acids, fats, pyruvate, etc. For example, proteins can be broken down into amino acids, and these simpler carbon skeletons are used to build/synthesize glucose. The citric acid cycle is a process of cellular respiration in which acetyl coenzyme A, synthesized from pyruvate by pyruvate dehydrogenase, is first reacted with oxaloacetate to yield citrate. The remaining eight reactions produce other carbon-containing metabolites. These metabolites are successively oxidized, and the free energy of oxidation is conserved in the form of the reduced coenzymes FADH2 and NADH. These reduced electron carriers can then be re-oxidized when they transfer electrons to the electron transport chain. Ketosis is a metabolic process in which the body prioritizes ketone bodies, produced from fat, as its primary fuel source instead of glucose. This shift typically occurs when glucose levels are low, such as during prolonged fasting, strenuous exercise, or specialized diets like ketogenic plans, and ketosis then serves as an efficient alternative route for energy production. This metabolic adaptation allows the body to conserve precious glucose for organs that depend on it, like the brain, while utilizing readily available fat stores for fuel. Oxidative phosphorylation and the electron transport chain together form the process in which reducing equivalents such as FADH2 and NADH donate electrons to a series of redox reactions that take place in electron transport chain complexes. These redox reactions take place in enzyme complexes situated within the inner mitochondrial membrane. They transfer electrons "down" the electron transport chain, and this electron flow is coupled to the pumping of protons across the membrane, generating the proton motive force. The resulting difference in proton concentration between the mitochondrial matrix and the intermembrane space is used to drive ATP synthesis via ATP synthase. Photosynthesis, another major bioenergetic process, is the metabolic pathway used by plants in which solar energy is used to synthesize glucose from carbon dioxide and water. This reaction takes place in the chloroplast. After glucose is synthesized, the plant cell can undergo photophosphorylation to produce ATP. Additional information During energy transformations in living systems, order and organization must be compensated for by releasing energy, which increases the entropy of the surroundings. Organisms are open systems that exchange materials and energy with the environment. They are never at equilibrium with their surroundings. Energy is spent to create and maintain order in the cells, and surplus energy and other simpler by-products are released to create disorder, such that there is an increase in the entropy of the surroundings. In a reversible process, entropy remains constant, whereas in an irreversible process (more common in real-world scenarios), entropy tends to increase. During phase changes (from solid to liquid, or to gas), entropy increases because the number of possible arrangements of particles increases. If ΔG < 0, the chemical reaction is spontaneous and favourable in that direction. If ΔG = 0, the reactants and products of the chemical reaction are at equilibrium.
If ΔG > 0, the chemical reaction is non-spontaneous and unfavourable in that direction. ΔG is not an indicator of the velocity or rate at which a chemical reaction reaches equilibrium; that depends on factors such as the amount of enzyme and the activation energy. Reaction coupling Reaction coupling is the linkage of chemical reactions in such a way that the product of one reaction becomes the substrate of another reaction. This allows organisms to utilize energy and resources efficiently. For example, in cellular respiration, energy released by the breakdown of glucose is coupled to the synthesis of ATP. Cotransport In August 1960, Robert K. Crane presented for the first time his discovery of sodium-glucose cotransport as the mechanism for intestinal glucose absorption. Crane's discovery of cotransport was the first-ever proposal of flux coupling in biology and was the most important event concerning carbohydrate absorption in the 20th century. Chemiosmotic theory One of the major triumphs of bioenergetics is Peter D. Mitchell's chemiosmotic theory of how protons in aqueous solution function in the production of ATP in cell organelles such as mitochondria. This work earned Mitchell the 1978 Nobel Prize for Chemistry. Other cellular sources of ATP such as glycolysis were understood first, but such processes for direct coupling of enzyme activity to ATP production are not the major source of useful chemical energy in most cells. Chemiosmotic coupling is the major energy-producing process in most cells, being utilized in chloroplasts and several single-celled organisms in addition to mitochondria. Binding Change Mechanism The binding change mechanism, proposed by Paul Boyer and supported by the structural work of John E. Walker (the two shared the Nobel Prize in Chemistry in 1997), suggests that ATP synthesis is linked to a conformational change in ATP synthase. This change is triggered by the rotation of the gamma subunit. ATP synthesis can be achieved through several mechanisms. The first mechanism postulates that the free energy of the proton gradient is utilized to alter the conformation of polypeptide molecules in the ATP synthesis active centers. The second mechanism suggests that the change in the conformational state is also produced by the transformation of mechanical energy into chemical energy using biological mechanoemission. Energy balance Energy homeostasis is the homeostatic control of energy balance – the difference between energy obtained through food consumption and energy expenditure – in living systems. See also Bioenergetic systems Cellular respiration Photosynthesis ATP synthase Active transport Myosin Exercise physiology Table of standard Gibbs free energies References Further reading Juretic, D., 2021. Bioenergetics: a bridge across life and universe. CRC Press. External links The Molecular & Cellular Bioenergetics Gordon Research Conference. American Society of Exercise Physiologists Biochemistry Biophysics Cell biology Energy (physics)
Title 21 of the Code of Federal Regulations
Title 21 is the portion of the Code of Federal Regulations that governs food and drugs within the United States for the Food and Drug Administration (FDA), the Drug Enforcement Administration (DEA), and the Office of National Drug Control Policy (ONDCP). It is divided into three chapters: Chapter I — Food and Drug Administration Chapter II — Drug Enforcement Administration Chapter III — Office of National Drug Control Policy Chapter I Most of the Chapter I regulations are based on the Federal Food, Drug, and Cosmetic Act. Notable sections: 11 — electronic records and electronic signature related 50 Protection of human subjects in clinical trials 54 Financial disclosure by clinical investigators 56 Institutional review boards that oversee clinical trials 58 Good laboratory practices (GLP) for nonclinical studies The 100 series are regulations pertaining to food: 101, especially 101.9 — Nutrition facts label related (c)(2)(ii) — Requirement to include trans fat values (c)(8)(iv) — Vitamin and mineral values 106-107 requirements for infant formula 110 et seq. cGMPs for food products 111 et seq. cGMPs for dietary supplements 170 food additives 190 dietary supplements The 200 and 300 series are regulations pertaining to pharmaceuticals : 202-203 Drug advertising and marketing 210 et seq. cGMPs for pharmaceuticals 310 et seq. Requirements for new drugs 328 et seq. Specific requirements for over-the-counter (OTC) drugs. The 500 series are regulations for animal feeds and animal medications: 510 et seq. New animal drugs 556 Tolerances for residues of drugs in food animals The 600 series covers biological products (e.g. vaccines, blood): 601 Licensing under section 351 of the Public Health Service Act 606 et seq. cGMPs for human blood and blood products The 700 series includes the limited regulations on cosmetics: 701 Labeling requirements The 800 series are for medical devices: 803 Medical device reporting 814 Premarket approval of medical devices 820 et seq. Quality system regulations (analogous to cGMP, but structured like ISO) 860 et seq. Listing of specific approved devices and how they are classified The 900 series covers mammography quality requirements enforced by CDRH. The 1000 series covers radiation-emitting device (e.g. cell phones, lasers, x-ray generators); requirements enforced by the Center for Devices and Radiological Health. It also talks about the FDA citizen petition. The 1100 series includes updated rules deeming items that statutorily come under the definition of "tobacco product" to be subject to the Federal Food, Drug, and Cosmetic Act as amended by the Tobacco Control Act. The items affected include E-cigarettes, Hookah tobacco, and pipe tobacco. The 1200 series consists of rules primarily based in laws other than the Food, Drug, and Cosmetic Act: 1240 Rules promulgated under 361 of the Public Health Service Act on interstate control of communicable disease, such as: Requirements for pasteurization of milk Interstate shipment of turtles as pets. Interstate shipment of African rodents that may carry monkeypox. Sanitation on interstate conveyances (i.e. airplanes and ships) 1271 Requirements for human cells, tissues, and cellular and tissue-based products (i.e. the cGTPs). 
Chapter II Notable sections: 1308 — Schedules of controlled substances 1308.03(a) — Administrative Controlled Substances Code Number 1308.11 — List of Schedule I drugs 1308.12 — List of Schedule II drugs 1308.13 — List of Schedule III drugs 1308.14 — List of Schedule IV drugs 1308.15 — List of Schedule V drugs See also Title 21 of the United States Code - Food and Drugs EudraLex (medicinal products in the European Union) References External links Title 21 of the Code of Federal Regulations (current "Electronic CFR") 21 Drug control law in the United States Food law Regulation of medical devices
Digestion
Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use. In the human digestive system, food enters the mouth and mechanical digestion starts with mastication (chewing) and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food. The saliva also contains mucus, which lubricates the food; the electrolyte hydrogen carbonate (bicarbonate), which provides the ideal pH conditions for amylase to work; and other electrolytes. About 30% of starch is hydrolyzed into disaccharides in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damaging effects of chemicals like concentrated hydrochloric acid while also aiding lubrication. Hydrochloric acid provides the acidic pH needed for pepsin to work. At the same time as protein digestion is occurring, mechanical mixing occurs by peristalsis, which is waves of muscular contractions that move along the stomach wall. This allows the mass of food to further mix with the digestive enzymes. Pepsin breaks down proteins into peptides or proteoses, which are further broken down into dipeptides and amino acids by enzymes in the small intestine. Studies suggest that increasing the number of chews per bite increases relevant gut hormones and may decrease self-reported hunger and food intake. When the pyloric sphincter valve opens, partially digested food (chyme) enters the duodenum where it mixes with digestive enzymes from the pancreas and bile from the liver and then passes through the small intestine, in which digestion continues. When the chyme is fully digested, it is absorbed into the blood. 95% of nutrient absorption occurs in the small intestine. Water and minerals are reabsorbed back into the blood in the colon (large intestine), where the pH is slightly acidic (about 5.6 to 6.9). Some vitamins, such as biotin and vitamin K (K2MK7) produced by bacteria in the colon, are also absorbed into the blood there. Absorption of water, simple sugars and alcohol also takes place in the stomach. Waste material (feces) is eliminated from the rectum during defecation. Digestive system Digestive systems take many forms.
There is a fundamental distinction between internal and external digestion. External digestion developed earlier in evolutionary history, and most fungi still rely on it. In this process, enzymes are secreted into the environment surrounding the organism, where they break down an organic material, and some of the products diffuse back to the organism. Animals have a tube (gastrointestinal tract) in which internal digestion occurs, which is more efficient because more of the broken down products can be captured, and the internal chemical environment can be more efficiently controlled. Some organisms, including nearly all spiders, secrete biotoxins and digestive chemicals (e.g., enzymes) into the extracellular environment prior to ingestion of the consequent "soup". In others, once potential nutrients or food is inside the organism, digestion can be conducted to a vesicle or a sac-like structure, through a tube, or through several specialized organs aimed at making the absorption of nutrients more efficient. Secretion systems Bacteria use several systems to obtain nutrients from other organisms in the environments. Channel transport system In a channel transport system, several proteins form a contiguous channel traversing the inner and outer membranes of the bacteria. It is a simple system, which consists of only three protein subunits: the ABC protein, membrane fusion protein (MFP), and outer membrane protein. This secretion system transports various chemical species, from ions, drugs, to proteins of various sizes (20–900 kDa). The chemical species secreted vary in size from the small Escherichia coli peptide colicin V, (10 kDa) to the Pseudomonas fluorescens cell adhesion protein LapA of 900 kDa. Molecular syringe A type III secretion system means that a molecular syringe is used through which a bacterium (e.g. certain types of Salmonella, Shigella, Yersinia) can inject nutrients into protist cells. One such mechanism was first discovered in Y. pestis and showed that toxins could be injected directly from the bacterial cytoplasm into the cytoplasm of its host's cells rather than be secreted into the extracellular medium. Conjugation machinery The conjugation machinery of some bacteria (and archaeal flagella) is capable of transporting both DNA and proteins. It was discovered in Agrobacterium tumefaciens, which uses this system to introduce the Ti plasmid and proteins into the host, which develops the crown gall (tumor). The VirB complex of Agrobacterium tumefaciens is the prototypic system. In the nitrogen-fixing Rhizobia, conjugative elements naturally engage in inter-kingdom conjugation. Such elements as the Agrobacterium Ti or Ri plasmids contain elements that can transfer to plant cells. Transferred genes enter the plant cell nucleus and effectively transform the plant cells into factories for the production of opines, which the bacteria use as carbon and energy sources. Infected plant cells form crown gall or root tumors. The Ti and Ri plasmids are thus endosymbionts of the bacteria, which are in turn endosymbionts (or parasites) of the infected plant. The Ti and Ri plasmids are themselves conjugative. Ti and Ri transfer between bacteria uses an independent system (the tra, or transfer, operon) from that for inter-kingdom transfer (the vir, or virulence, operon). Such transfer creates virulent strains from previously avirulent Agrobacteria. 
Release of outer membrane vesicles In addition to the use of the multiprotein complexes listed above, gram-negative bacteria possess another method for release of material: the formation of outer membrane vesicles. Portions of the outer membrane pinch off, forming spherical structures made of a lipid bilayer enclosing periplasmic materials. Vesicles from a number of bacterial species have been found to contain virulence factors, some have immunomodulatory effects, and some can directly adhere to and intoxicate host cells. While release of vesicles has been demonstrated as a general response to stress conditions, the process of loading cargo proteins seems to be selective. Gastrovascular cavity The gastrovascular cavity functions as a stomach in both digestion and the distribution of nutrients to all parts of the body. Extracellular digestion takes place within this central cavity, which is lined with the gastrodermis, the internal layer of epithelium. This cavity has only one opening to the outside that functions as both a mouth and an anus: waste and undigested matter is excreted through the mouth/anus, which can be described as an incomplete gut. In a plant such as the Venus flytrap that can make its own food through photosynthesis, it does not eat and digest its prey for the traditional objectives of harvesting energy and carbon, but mines prey primarily for essential nutrients (nitrogen and phosphorus in particular) that are in short supply in its boggy, acidic habitat. Phagosome A phagosome is a vacuole formed around a particle absorbed by phagocytosis. The vacuole is formed by the fusion of the cell membrane around the particle. A phagosome is a cellular compartment in which pathogenic microorganisms can be killed and digested. Phagosomes fuse with lysosomes in their maturation process, forming phagolysosomes. In humans, Entamoeba histolytica can phagocytose red blood cells. Specialised organs and behaviours To aid in the digestion of their food, animals evolved organs such as beaks, tongues, radulae, teeth, crops, gizzards, and others. Beaks Birds have bony beaks that are specialised according to the bird's ecological niche. For example, macaws primarily eat seeds, nuts, and fruit, using their beaks to open even the toughest seed. First they scratch a thin line with the sharp point of the beak, then they shear the seed open with the sides of the beak. The mouth of the squid is equipped with a sharp horny beak mainly made of cross-linked proteins. It is used to kill and tear prey into manageable pieces. The beak is very robust, but does not contain any minerals, unlike the teeth and jaws of many other organisms, including marine species. The beak is the only indigestible part of the squid. Tongue The tongue is skeletal muscle on the floor of the mouth of most vertebrates, that manipulates food for chewing (mastication) and swallowing (deglutition). It is sensitive and kept moist by saliva. The underside of the tongue is covered with a smooth mucous membrane. The tongue also has a touch sense for locating and positioning food particles that require further chewing. The tongue is used to roll food particles into a bolus before being transported down the esophagus through peristalsis. The sublingual region underneath the front of the tongue is a location where the oral mucosa is very thin, and underlain by a plexus of veins. This is an ideal location for introducing certain medications to the body. 
The sublingual route takes advantage of the highly vascular quality of the oral cavity, and allows for the speedy application of medication into the cardiovascular system, bypassing the gastrointestinal tract. Teeth Teeth (singular tooth) are small whitish structures found in the jaws (or mouths) of many vertebrates that are used to tear, scrape, milk and chew food. Teeth are not made of bone, but rather of tissues of varying density and hardness, such as enamel, dentine and cementum. Human teeth have a blood and nerve supply which enables proprioception. This is the ability of sensation when chewing, for example if we were to bite into something too hard for our teeth, such as a chipped plate mixed in food, our teeth send a message to our brain and we realise that it cannot be chewed, so we stop trying. The shapes, sizes and numbers of types of animals' teeth are related to their diets. For example, herbivores have a number of molars which are used to grind plant matter, which is difficult to digest. Carnivores have canine teeth which are used to kill and tear meat. Crop A crop, or croup, is a thin-walled expanded portion of the alimentary tract used for the storage of food prior to digestion. In some birds it is an expanded, muscular pouch near the gullet or throat. In adult doves and pigeons, the crop can produce crop milk to feed newly hatched birds. Certain insects may have a crop or enlarged esophagus. Abomasum Herbivores have evolved cecums (or an abomasum in the case of ruminants). Ruminants have a fore-stomach with four chambers. These are the rumen, reticulum, omasum, and abomasum. In the first two chambers, the rumen and the reticulum, the food is mixed with saliva and separates into layers of solid and liquid material. Solids clump together to form the cud (or bolus). The cud is then regurgitated, chewed slowly to completely mix it with saliva and to break down the particle size. Fibre, especially cellulose and hemi-cellulose, is primarily broken down into the volatile fatty acids, acetic acid, propionic acid and butyric acid in these chambers (the reticulo-rumen) by microbes: (bacteria, protozoa, and fungi). In the omasum, water and many of the inorganic mineral elements are absorbed into the blood stream. The abomasum is the fourth and final stomach compartment in ruminants. It is a close equivalent of a monogastric stomach (e.g., those in humans or pigs), and digesta is processed here in much the same way. It serves primarily as a site for acid hydrolysis of microbial and dietary protein, preparing these protein sources for further digestion and absorption in the small intestine. Digesta is finally moved into the small intestine, where the digestion and absorption of nutrients occurs. Microbes produced in the reticulo-rumen are also digested in the small intestine. Specialised behaviours Regurgitation has been mentioned above under abomasum and crop, referring to crop milk, a secretion from the lining of the crop of pigeons and doves with which the parents feed their young by regurgitation. Many sharks have the ability to turn their stomachs inside out and evert it out of their mouths in order to get rid of unwanted contents (perhaps developed as a way to reduce exposure to toxins). Other animals, such as rabbits and rodents, practise coprophagia behaviours – eating specialised faeces in order to re-digest food, especially in the case of roughage. Capybara, rabbits, hamsters and other related species do not have a complex digestive system as do, for example, ruminants. 
Instead they extract more nutrition from grass by giving their food a second pass through the gut. Soft faecal pellets of partially digested food are excreted and generally consumed immediately. They also produce normal droppings, which are not eaten. Young elephants, pandas, koalas, and hippos eat the faeces of their mother, probably to obtain the bacteria required to properly digest vegetation. When they are born, their intestines do not contain these bacteria (they are completely sterile). Without them, they would be unable to get any nutritional value from many plant components. In earthworms An earthworm's digestive system consists of a mouth, pharynx, esophagus, crop, gizzard, and intestine. The mouth is surrounded by strong lips, which act like a hand to grab pieces of dead grass, leaves, and weeds, with bits of soil to help chew. The lips break the food down into smaller pieces. In the pharynx, the food is lubricated by mucus secretions for easier passage. The esophagus adds calcium carbonate to neutralize the acids formed by food matter decay. Temporary storage occurs in the crop where food and calcium carbonate are mixed. The powerful muscles of the gizzard churn and mix the mass of food and dirt. When the churning is complete, the glands in the walls of the gizzard add enzymes to the thick paste, which helps chemically breakdown the organic matter. By peristalsis, the mixture is sent to the intestine where friendly bacteria continue chemical breakdown. This releases carbohydrates, protein, fat, and various vitamins and minerals for absorption into the body. Overview of vertebrate digestion In most vertebrates, digestion is a multistage process in the digestive system, starting from ingestion of raw materials, most often other organisms. Ingestion usually involves some type of mechanical and chemical processing. Digestion is separated into four steps: Ingestion: placing food into the mouth (entry of food in the digestive system), Mechanical and chemical breakdown: mastication and the mixing of the resulting bolus with water, acids, bile and enzymes in the stomach and intestine to break down complex chemical species into simple structures, Absorption: of nutrients from the digestive system to the circulatory and lymphatic capillaries through osmosis, active transport, and diffusion, and Egestion (Excretion): Removal of undigested materials from the digestive tract through defecation. Underlying the process is muscle movement throughout the system through swallowing and peristalsis. Each step in digestion requires energy, and thus imposes an "overhead charge" on the energy made available from absorbed substances. Differences in that overhead cost are important influences on lifestyle, behavior, and even physical structures. Examples may be seen in humans, who differ considerably from other hominids (lack of hair, smaller jaws and musculature, different dentition, length of intestines, cooking, etc.). The major part of digestion takes place in the small intestine. The large intestine primarily serves as a site for fermentation of indigestible matter by gut bacteria and for resorption of water from digests before excretion. In mammals, preparation for digestion begins with the cephalic phase in which saliva is produced in the mouth and digestive enzymes are produced in the stomach. Mechanical and chemical digestion begin in the mouth where food is chewed, and mixed with saliva to begin enzymatic processing of starches. 
The stomach continues to break food down mechanically and chemically through churning and mixing with both acids and enzymes. Absorption occurs in the stomach and gastrointestinal tract, and the process finishes with defecation. Human digestion process The human gastrointestinal tract is around 9 metres (30 feet) long. Food digestion physiology varies between individuals and depends on other factors such as the characteristics of the food and the size of the meal, and the process of digestion normally takes between 24 and 72 hours. Digestion begins in the mouth with the secretion of saliva and its digestive enzymes. Food is formed into a bolus by mechanical mastication and swallowed into the esophagus, from where it enters the stomach through the action of peristalsis. Gastric juice contains hydrochloric acid and pepsin, which would damage the walls of the stomach, so mucus and bicarbonates are secreted for protection. In the stomach, the further release of enzymes breaks the food down, and this is combined with the churning action of the stomach. Proteins are mainly digested in the stomach. The partially digested food enters the duodenum as a thick semi-liquid chyme. In the small intestine, the larger part of digestion takes place, helped by the secretions of bile, pancreatic juice and intestinal juice. The intestinal walls are lined with villi, and their epithelial cells are covered with numerous microvilli to improve the absorption of nutrients by increasing the surface area of the intestine. Bile helps in the emulsification of fats and also activates lipases. In the large intestine, the passage of food is slower to enable fermentation by the gut flora to take place. Here, water is absorbed and waste material is stored as feces to be removed by defecation via the anal canal and anus. Neural and biochemical control mechanisms Different phases of digestion take place, including the cephalic phase, gastric phase, and intestinal phase. The cephalic phase occurs at the sight, thought and smell of food, which stimulate the cerebral cortex. Taste and smell stimuli are sent to the hypothalamus and medulla oblongata. The signal is then routed through the vagus nerve, leading to the release of acetylcholine. Gastric secretion at this phase rises to 40% of the maximum rate. Acidity in the stomach is not buffered by food at this point and thus acts to inhibit parietal cell (acid-secreting) and G cell (gastrin-secreting) activity via D cell secretion of somatostatin. The gastric phase takes 3 to 4 hours. It is stimulated by distension of the stomach, the presence of food in the stomach and a decrease in pH. Distension activates long and myenteric reflexes. This activates the release of acetylcholine, which stimulates the release of more gastric juices. As protein enters the stomach, it binds to hydrogen ions, which raises the pH of the stomach. Inhibition of gastrin and gastric acid secretion is lifted. This triggers G cells to release gastrin, which in turn stimulates parietal cells to secrete gastric acid. Gastric acid is about 0.5% hydrochloric acid, which lowers the pH to the desired pH of 1–3. Acid release is also triggered by acetylcholine and histamine. The intestinal phase has two parts, the excitatory and the inhibitory. Partially digested food fills the duodenum. This triggers intestinal gastrin to be released. The enterogastric reflex inhibits the vagal nuclei, activating sympathetic fibers that cause the pyloric sphincter to tighten to prevent more food from entering, and inhibits local reflexes.
Breakdown into nutrients Protein digestion Protein digestion occurs in the stomach and duodenum in which 3 main enzymes, pepsin secreted by the stomach and trypsin and chymotrypsin secreted by the pancreas, break down food proteins into polypeptides that are then broken down by various exopeptidases and dipeptidases into amino acids. The digestive enzymes however are mostly secreted as their inactive precursors, the zymogens. For example, trypsin is secreted by pancreas in the form of trypsinogen, which is activated in the duodenum by enterokinase to form trypsin. Trypsin then cleaves proteins to smaller polypeptides. Fat digestion Digestion of some fats can begin in the mouth where lingual lipase breaks down some short chain lipids into diglycerides. However fats are mainly digested in the small intestine. The presence of fat in the small intestine produces hormones that stimulate the release of pancreatic lipase from the pancreas and bile from the liver which helps in the emulsification of fats for absorption of fatty acids. Complete digestion of one molecule of fat (a triglyceride) results a mixture of fatty acids, mono- and di-glycerides, but no glycerol. Carbohydrate digestion In humans, dietary starches are composed of glucose units arranged in long chains called amylose, a polysaccharide. During digestion, bonds between glucose molecules are broken by salivary and pancreatic amylase, resulting in progressively smaller chains of glucose. This results in simple sugars glucose and maltose (2 glucose molecules) that can be absorbed by the small intestine. Lactase is an enzyme that breaks down the disaccharide lactose to its component parts, glucose and galactose. Glucose and galactose can be absorbed by the small intestine. Approximately 65 percent of the adult population produce only small amounts of lactase and are unable to eat unfermented milk-based foods. This is commonly known as lactose intolerance. Lactose intolerance varies widely by genetic heritage; more than 90 percent of peoples of east Asian descent are lactose intolerant, in contrast to about 5 percent of people of northern European descent. Sucrase is an enzyme that breaks down the disaccharide sucrose, commonly known as table sugar, cane sugar, or beet sugar. Sucrose digestion yields the sugars fructose and glucose which are readily absorbed by the small intestine. DNA and RNA digestion DNA and RNA are broken down into mononucleotides by the nucleases deoxyribonuclease and ribonuclease (DNase and RNase) from the pancreas. Non-destructive digestion Some nutrients are complex molecules (for example vitamin B12) which would be destroyed if they were broken down into their functional groups. To digest vitamin B12 non-destructively, haptocorrin in saliva strongly binds and protects the B12 molecules from stomach acid as they enter the stomach and are cleaved from their protein complexes. After the B12-haptocorrin complexes pass from the stomach via the pylorus to the duodenum, pancreatic proteases cleave haptocorrin from the B12 molecules which rebind to intrinsic factor (IF). These B12-IF complexes travel to the ileum portion of the small intestine where cubilin receptors enable assimilation and circulation of B12-IF complexes in the blood. Digestive hormones There are at least five hormones that aid and regulate the digestive system in mammals. There are variations across the vertebrates, as for instance in birds. Arrangements are complex and additional details are regularly discovered. 
Connections to metabolic control (largely the glucose-insulin system) have been uncovered. Gastrin – is in the stomach and stimulates the gastric glands to secrete pepsinogen (an inactive form of the enzyme pepsin) and hydrochloric acid. Secretion of gastrin is stimulated by food arriving in stomach. The secretion is inhibited by low pH. Secretin – is in the duodenum and signals the secretion of sodium bicarbonate in the pancreas and it stimulates the bile secretion in the liver. This hormone responds to the acidity of the chyme. Cholecystokinin (CCK) – is in the duodenum and stimulates the release of digestive enzymes in the pancreas and stimulates the emptying of bile in the gall bladder. This hormone is secreted in response to fat in chyme. Gastric inhibitory peptide (GIP) – is in the duodenum and decreases the stomach churning in turn slowing the emptying in the stomach. Another function is to induce insulin secretion. Motilin – is in the duodenum and increases the migrating myoelectric complex component of gastrointestinal motility and stimulates the production of pepsin. Significance of pH Digestion is a complex process controlled by several factors. pH plays a crucial role in a normally functioning digestive tract. In the mouth, pharynx and esophagus, pH is typically about 6.8, very weakly acidic. Saliva controls pH in this region of the digestive tract. Salivary amylase is contained in saliva and starts the breakdown of carbohydrates into monosaccharides. Most digestive enzymes are sensitive to pH and will denature in a high or low pH environment. The stomach's high acidity inhibits the breakdown of carbohydrates within it. This acidity confers two benefits: it denatures proteins for further digestion in the small intestines, and provides non-specific immunity, damaging or eliminating various pathogens. In the small intestines, the duodenum provides critical pH balancing to activate digestive enzymes. The liver secretes bile into the duodenum to neutralize the acidic conditions from the stomach, and the pancreatic duct empties into the duodenum, adding bicarbonate to neutralize the acidic chyme, thus creating a neutral environment. The mucosal tissue of the small intestines is alkaline with a pH of about 8.5. See also Digestive system of gastropods Digestive system of humpback whales Evolution of the mammalian digestive system Discovery and development of proton pump inhibitors Erepsin Gastroesophageal reflux disease References External links Human Physiology – Digestion NIH guide to digestive system The Digestive System How does the Digestive System Work? Digestive system Metabolism
Muscular system
The muscular system is an organ system consisting of skeletal, smooth, and cardiac muscle. It permits movement of the body, maintains posture, and circulates blood throughout the body. The muscular systems in vertebrates are controlled through the nervous system, although some muscles (such as the cardiac muscle) can be completely autonomous. Together with the skeletal system in the human, it forms the musculoskeletal system, which is responsible for the movement of the body. Types There are three distinct types of muscle: skeletal muscle, cardiac or heart muscle, and smooth (non-striated) muscle. Muscles provide strength, balance, posture, movement, and heat for the body to keep warm. There are approximately 640 muscles in an adult male human body. A kind of elastic tissue makes up each muscle, which consists of thousands, or tens of thousands, of small muscle fibers. Each fiber comprises many tiny strands called fibrils; impulses from nerve cells control the contraction of each muscle fiber. Skeletal Skeletal muscle is a type of striated muscle, composed of muscle cells, called muscle fibers, which are in turn composed of myofibrils. Myofibrils are composed of sarcomeres, the basic building blocks of striated muscle tissue. Upon stimulation by an action potential, skeletal muscles perform a coordinated contraction by shortening each sarcomere. The best proposed model for understanding contraction is the sliding filament model of muscle contraction. Within the sarcomere, actin and myosin fibers overlap in a contractile motion towards each other. Myosin filaments have club-shaped myosin heads that project toward the actin filaments and provide attachment points on binding sites for the actin filaments. The myosin heads move in a coordinated style; they swivel toward the center of the sarcomere, detach and then reattach to the nearest active site of the actin filament. This is called a ratchet-type drive system. This process consumes large amounts of adenosine triphosphate (ATP), the energy source of the cell. ATP binds to the cross-bridges between myosin heads and actin filaments. The release of energy powers the swiveling of the myosin head. When ATP is used, it becomes adenosine diphosphate (ADP), and since muscles store little ATP, they must continuously replace the discharged ADP with ATP. Muscle tissue also contains a stored supply of a fast-acting recharge chemical, creatine phosphate, which when necessary can assist with the rapid regeneration of ADP into ATP. Calcium ions are required for each cycle of the sarcomere. Calcium is released from the sarcoplasmic reticulum into the sarcomere when a muscle is stimulated to contract. This calcium uncovers the actin-binding sites. When the muscle no longer needs to contract, the calcium ions are pumped from the sarcomere and back into storage in the sarcoplasmic reticulum. There are approximately 639 skeletal muscles in the human body. Cardiac Heart muscle is striated muscle but is distinct from skeletal muscle because the muscle fibers are laterally connected. Furthermore, just as with smooth muscle, its movement is involuntary. Heart muscle is controlled by the sinus node, which is influenced by the autonomic nervous system. Smooth Smooth muscle contraction is regulated by the autonomic nervous system, hormones, and local chemical signals, allowing for gradual and sustained contractions.
This type of muscle tissue is also capable of adapting to different levels of stretch and tension, which is important for maintaining proper blood flow and the movement of materials through the digestive system. Physiology Contraction Neuromuscular junctions are the focal points where a motor neuron attaches to a muscle. Acetylcholine (a neurotransmitter used in skeletal muscle contraction) is released from the axon terminal of the nerve cell when an action potential reaches the microscopic junction called a synapse. These chemical messengers cross the synapse and stimulate electrical changes, which are produced in the muscle cell when acetylcholine binds to receptors on its surface. Calcium is released from its storage area in the cell's sarcoplasmic reticulum. An impulse from a nerve cell causes calcium release and brings about a single, short muscle contraction called a muscle twitch. If there is a problem at the neuromuscular junction, a very prolonged contraction may occur, such as the muscle contractions that result from tetanus. Also, a loss of function at the junction can produce paralysis. Skeletal muscles are organized into hundreds of motor units, each of which involves a motor neuron, attached by a series of thin finger-like structures called axon terminals. These attach to and control discrete bundles of muscle fibers. A coordinated and fine-tuned response to a specific circumstance will involve controlling the precise number of motor units used. While individual muscle units contract as a unit, the entire muscle can contract on a predetermined basis due to the structure of the motor unit. Motor unit coordination, balance, and control frequently come under the direction of the cerebellum of the brain. This allows for complex muscular coordination with little conscious effort, such as when one drives a car without thinking about the process. Tendon A tendon is a piece of connective tissue that connects a muscle to a bone. When a muscle contracts, it pulls against the skeleton to create movement. A tendon connects this muscle to a bone, making this function possible. Aerobic and anaerobic muscle activity At rest, the body produces the majority of its ATP aerobically in the mitochondria without producing lactic acid or other fatiguing byproducts. During exercise, the method of ATP production varies depending on the fitness of the individual as well as the duration and intensity of exercise. At lower activity levels, when exercise continues for a long duration (several minutes or longer), energy is produced aerobically by combining oxygen with carbohydrates and fats stored in the body. During activity that is higher in intensity, with possible duration decreasing as intensity increases, ATP production can switch to anaerobic pathways, such as the use of creatine phosphate and the phosphagen system, or anaerobic glycolysis. Aerobic ATP production is biochemically much slower and can only be used for long-duration, low-intensity exercise, but it produces no fatiguing waste products that cannot be removed immediately from the sarcomere and the body, and it results in a much greater number of ATP molecules per fat or carbohydrate molecule. Aerobic training allows the oxygen delivery system to be more efficient, allowing aerobic metabolism to begin more quickly.
Anaerobic ATP production produces ATP much faster and allows near-maximal intensity exercise, but also produces significant amounts of lactic acid which render high-intensity exercise unsustainable for more than several minutes. The phosphagen system is also anaerobic. It allows for the highest levels of exercise intensity, but intramuscular stores of phosphocreatine are very limited and can only provide energy for exercises lasting up to ten seconds. Recovery is very quick, with full creatine stores regenerated within five minutes. Clinical significance Multiple diseases can affect the muscular system. Muscular Dystrophy Muscular dystrophy is a group of disorders associated with progressive muscle weakness and loss of muscle mass. These disorders are caused by mutations in a person's genes. The disease affects between 19.8 and 25.1 per 100,000 person-years globally. There are more than 30 types of muscular dystrophy. Depending on the type, muscular dystrophy can affect the patient's heart and lungs, and/or their ability to move, walk, and perform daily activities. The most common types include: Duchenne muscular dystrophy (DMD) and Becker muscular dystrophy (BMD) Myotonic dystrophy Limb-Girdle (LGMD) Facioscapulohumeral dystrophy (FSHD) Congenital dystrophy (CMD) Distal (DD) Oculopharyngeal dystrophy (OPMD) Emery-Dreifuss (EDMD) See also Major systems of the human body Intramuscular coordination References Further reading External links Online Muscle Tutorial GetBody Smart Muscle system tutorials and quizzes MedBio.info Use and formation of ATP in muscle
Fomite
A fomite or fomes is any inanimate object that, when contaminated with or exposed to infectious agents (such as pathogenic bacteria, viruses or fungi), can transfer disease to a new host. Transfer of pathogens by fomites A fomite is also called a passive vector. Contamination can occur when one of these objects comes into contact with bodily secretions, like nasal fluid, vomit, or feces. Many common objects can sustain a pathogen until a person comes in contact with the pathogen, increasing the chance of infection. The likely objects are different in a hospital environment than at home or in a workplace. Fomites such as splinters, barbed wire or farmyard surfaces, including soil, feeding troughs or barn beams, have been implicated as sources of virus. Hospital fomites For humans, common hospital fomites are skin cells, hair, clothing, and bedding. Fomites are associated particularly with hospital-acquired infections (HAIs), as they are possible routes to pass pathogens between patients. Stethoscopes and neckties are common fomites associated with health care providers. This worries epidemiologists and hospital practitioners because of the growing number of microbes resistant to disinfectants or antibiotics (the phenomenon of antimicrobial resistance). Basic hospital equipment, such as IV drip tubes, catheters, and life support equipment, can also be carriers when the pathogens form biofilms on the surfaces. Careful sterilization of such objects prevents cross-infection. Used syringes, if improperly handled, are particularly dangerous fomites. Daily life In addition to objects in hospital settings, other common fomites for humans are cups, spoons, pencils, bath faucet handles, toilet flush levers, door knobs, light switches, handrails, elevator buttons, television remote controls, pens, touch screens, common-use phones, keyboards and computer mice, coffeepot handles, countertops, drinking fountains, and any other items that may be frequently touched by different people and infrequently cleaned. Cold sores, hand–foot–mouth disease, and diarrhea are some examples of illnesses easily spread by contaminated fomites. The risk of infection by these diseases and others through fomites can be greatly reduced by simply washing one's hands. When two children in one household have influenza, more than 50% of shared items are contaminated with virus. In 40–90% of cases, adults infected with rhinovirus have it on their hands. Transmission of specific viruses Researchers have discovered that smooth (non-porous) surfaces like door knobs transmit bacteria and viruses better than porous materials like paper money because porous, especially fibrous, materials absorb and trap the contagion, making it harder to contract through simple touch. Nonetheless, fomites may include soiled clothes, towels, linens, handkerchiefs, and surgical dressings. SARS-CoV-2 was found to be viable on various surfaces from 4 to 72 hours under laboratory conditions. On porous surfaces, studies report an inability to detect viable virus within minutes to hours; on non-porous surfaces, viable virus can be detected for days to weeks. However, further research called into question the accuracy of such tests, instead finding that fomite transmission of SARS-CoV-2 in real-world settings is extremely rare, if not impossible.
Transmission can also occur through contact with aerosolized virus (large-droplet spread) generated via talking, sneezing, coughing, or vomiting, or through contact with airborne virus that settles after disturbance of a contaminated fomite (e.g. shaking a contaminated blanket). During the first 24 hours, the risk can be reduced by increasing ventilation, by waiting as long as possible before entering the space (at least several hours, based on documented airborne transmission cases), and by using personal protective equipment (including any protection needed for the cleaning and disinfection products). Research in 2007 showed that the influenza virus was still active on stainless steel 24 hours after contamination. Though the virus survives on hands for only about five minutes, constant contact with a contaminated fomite makes catching the infection very likely. Transfer efficiency depends not only on the surface, but mainly on the pathogen type. For example, avian influenza survives on both porous and non-porous materials for 144 hours. Smallpox was long supposed to be transmitted either by direct contact or by fomites. However, A. R. Rao's careful research in the 1960s, before smallpox was declared eradicated, found little truth in the traditional belief that smallpox can be spread at a distance through infected clothing or bedding. He concluded that it normally invaded via the lungs. Rao recognized that the virus can be detected on inanimate objects, and therefore might in some cases be transmitted by them, but he concluded that "smallpox is still an inhalation disease . . . the virus has to enter through the nose by inhalation." In 2002 Donald K. Milton published a review of existing research on the transmission of smallpox and on recommendations for controlling its spread in the event of its use in biological warfare. He agreed, citing Rao, Fenner and others, that "careful epidemiologic investigation rarely implicated fomites as a source of infection"; and broadly agreed with current recommendations for control of secondary smallpox infections, which emphasized transmission via "expelled droplets" upon the breath. He noted that shed scabs (which might be spread via bedsheets or other fomites) often contain "large quantities of virus", but suggested that the "apparent lack of infectiousness of scab associated virus" might be due to "encapsulation with inspissated pus". Contaminated needles are the most common fomite that transmits HIV. Fomites from dirty needles also easily spread hepatitis B. Etymology The Italian scholar and physician Girolamo Fracastoro appears to have first used the Latin word fomes, meaning "tinder", in this sense in his essay on contagion, De Contagione et Contagiosis Morbis, published in 1546: "By fomes I mean clothes, wooden objects, and things of that sort, which though not themselves corrupted can, nevertheless, preserve the original germs of the contagion and infect by means of these". English usage of fomes is documented since 1658. The English word fomite, which has been in use since 1859, is a back-formation from the plural fomites (originally borrowed from the Latin plural fōmĭtēs of fōmĕs). Over time, the English-language pronunciation of the plural fomites changed, which led to the creation of a new singular form, fomite. In Latin, fomes (genitive: fomitis, plural fomites, stem fomit-) is a third-declension T-stem noun.
Such nouns, like miles/militis or comes/comitis, typically lose their T (thereby becoming a syllable shorter) in the nominative singular, but retain it in all other cases. In languages derived from Latin, the French fomite, Italian fomite, Spanish fómite and Portuguese fómite or fômite, retain the full stem. See also Focal infection theory Focus of infection Disease vector References Bibliography External links General characteristics and roles of fomites in viral transmission, American Society for Microbiology, 1969 Infectious diseases Epidemiology Hygiene Medical terminology
Medical diagnosis
Medical diagnosis (abbreviated Dx or Ds) is the process of determining which disease or condition explains a person's symptoms and signs. It is most often referred to as a diagnosis with the medical context being implicit. The information required for a diagnosis is typically collected from a history and physical examination of the person seeking medical care. Often, one or more diagnostic procedures, such as medical tests, are also done during the process. Sometimes a posthumous diagnosis is considered a kind of medical diagnosis. Diagnosis is often challenging because many signs and symptoms are nonspecific. For example, redness of the skin (erythema), by itself, is a sign of many disorders and thus does not tell the healthcare professional what is wrong. Thus differential diagnosis, in which several possible explanations are compared and contrasted, must be performed. This involves the correlation of various pieces of information followed by the recognition and differentiation of patterns. Occasionally the process is made easy by a sign or symptom (or a group of several) that is pathognomonic. Diagnosis is a major component of the procedure of a doctor's visit. From the point of view of statistics, the diagnostic procedure involves classification tests. Medical uses A diagnosis, in the sense of diagnostic procedure, can be regarded as an attempt at classification of an individual's condition into separate and distinct categories that allow medical decisions about treatment and prognosis to be made. Subsequently, a diagnostic opinion is often described in terms of a disease or other condition. (In the case of a wrong diagnosis, however, the individual's actual disease or condition is not the same as the individual's diagnosis.) A total evaluation of a condition is often termed a diagnostic workup. A diagnostic procedure may be performed by various healthcare professionals such as a physician, physiotherapist, dentist, podiatrist, optometrist, nurse practitioner, healthcare scientist or physician assistant. This article uses the term diagnostician for any of these categories of professionals. A diagnostic procedure (as well as the opinion reached thereby) does not necessarily involve elucidation of the etiology of the diseases or conditions of interest, that is, what caused the disease or condition. Such elucidation can be useful to optimize treatment, further specify the prognosis or prevent recurrence of the disease or condition in the future. The initial task is to detect a medical indication to perform a diagnostic procedure. Indications include: Detection of any deviation from what is known to be normal, such as can be described in terms of, for example, anatomy (the structure of the human body), physiology (how the body works), pathology (what can go wrong with the anatomy and physiology), psychology (thought and behavior) and human homeostasis (regarding mechanisms to keep body systems in balance). Knowledge of what is normal and measuring of the patient's current condition against those norms can assist in determining the patient's particular departure from homeostasis and the degree of departure, which in turn can assist in quantifying the indication for further diagnostic processing. A complaint expressed by a patient. The fact that a patient has sought a diagnostician can itself be an indication to perform a diagnostic procedure.
For example, in a doctor's visit, the physician may already start performing a diagnostic procedure by watching the gait of the patient from the waiting room to the doctor's office even before she or he has started to present any complaints. Even during an already ongoing diagnostic procedure, there can be an indication to perform another, separate, diagnostic procedure for another, potentially concomitant, disease or condition. This may occur as a result of an incidental finding of a sign unrelated to the parameter of interest, such as can occur in comprehensive tests such as radiological studies like magnetic resonance imaging or blood test panels that also include blood tests that are not relevant for the ongoing diagnosis. Procedure General components which are present in a diagnostic procedure in most of the various available methods include: Complementing the already given information with further data gathering, which may include questions of the medical history (potentially from other people close to the patient as well), physical examination and various diagnostic tests. A diagnostic test is any kind of medical test performed to aid in the diagnosis or detection of disease. Diagnostic tests can also be used to provide prognostic information on people with established disease. Processing of the answers, findings or other results. Consultations with other providers and specialists in the field may be sought. There are a number of methods or techniques that can be used in a diagnostic procedure, including performing a differential diagnosis or following medical algorithms. In reality, a diagnostic procedure may involve components of multiple methods. Differential diagnosis The method of differential diagnosis is based on finding as many candidate diseases or conditions as possible that can possibly cause the signs or symptoms, followed by a process of elimination or at least of rendering the entries more or less probable by further medical tests and other processing, aiming to reach the point where only one candidate disease or condition remains as probable. The result may also remain a list of possible conditions, ranked in order of probability or severity. Such a list is often generated by computer-aided diagnosis systems. The resultant diagnostic opinion by this method can be regarded more or less as a diagnosis of exclusion. Even if it does not result in a single probable disease or condition, it can at least rule out any imminently life-threatening conditions. Unless the provider is certain of the condition present, further medical tests, such as medical imaging, are performed or scheduled in part to confirm or disprove the diagnosis but also to document the patient's status and keep the patient's medical history up to date. If unexpected findings are made during this process, the initial hypothesis may be ruled out and the provider must then consider other hypotheses. Pattern recognition In a pattern recognition method the provider uses experience to recognize a pattern of clinical characteristics. It is mainly based on certain symptoms or signs being associated with certain diseases or conditions, not necessarily involving the more cognitive processing involved in a differential diagnosis. This may be the primary method used in cases where diseases are "obvious", or the provider's experience may enable him or her to recognize the condition quickly. 
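The differential-diagnosis approach described above amounts to ranking candidate conditions by probability and updating that ranking as test results arrive. A minimal, hedged sketch of that reasoning for a single condition and a single binary test is shown below; the pre-test probability, sensitivity and specificity figures are hypothetical and chosen only for illustration.

```python
# Minimal sketch of the probabilistic reasoning behind a differential diagnosis:
# Bayes' theorem updates a pre-test probability once a test result is known.
# All figures (pre-test probability, sensitivity, specificity) are hypothetical.

def post_test_probability(pre_test: float, sensitivity: float, specificity: float,
                          positive_result: bool) -> float:
    """Return P(condition | test result) for a binary diagnostic test."""
    if positive_result:
        true_pos = pre_test * sensitivity
        false_pos = (1 - pre_test) * (1 - specificity)
        return true_pos / (true_pos + false_pos)
    false_neg = pre_test * (1 - sensitivity)
    true_neg = (1 - pre_test) * specificity
    return false_neg / (false_neg + true_neg)

if __name__ == "__main__":
    # Hypothetical: 10% pre-test probability, test with 90% sensitivity and 95% specificity
    print(f"after a positive result: {post_test_probability(0.10, 0.90, 0.95, True):.0%}")   # ~67%
    print(f"after a negative result: {post_test_probability(0.10, 0.90, 0.95, False):.0%}")  # ~1%
```

Repeating such updates across several candidate conditions is one way a computer-aided diagnosis system can produce the ranked list of possibilities mentioned above.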
Theoretically, a certain pattern of signs or symptoms can be directly associated with a certain therapy, even without a definite decision regarding what the actual disease is, but such a compromise carries a substantial risk of missing a diagnosis which actually has a different therapy, so it may be limited to cases where no diagnosis can be made. Diagnostic criteria The term diagnostic criteria designates the specific combination of signs, symptoms, and test results that the clinician uses to attempt to determine the correct diagnosis. Some examples of diagnostic criteria, also known as clinical case definitions, are: Amsterdam criteria for hereditary nonpolyposis colorectal cancer McDonald criteria for multiple sclerosis ACR criteria for systemic lupus erythematosus Centor criteria for strep throat Clinical decision support system Clinical decision support systems are interactive computer programs designed to assist health professionals with decision-making tasks. The clinician interacts with the software, utilizing both the clinician's knowledge and the software to make a better analysis of the patient's data than either human or software could make on their own. Typically the system makes suggestions for the clinician to look through, and the clinician picks useful information and removes erroneous suggestions. Some programs attempt to do this by replacing the clinician, such as reading the output of a heart monitor. Such automated processes are usually deemed a "device" by the FDA and require regulatory approval. In contrast, clinical decision support systems that "support" but do not replace the clinician are deemed to be "Augmented Intelligence" if they meet the FDA criteria that (1) they reveal the underlying data, (2) reveal the underlying logic, and (3) leave the clinician in charge to shape and make the decision. Other diagnostic procedure methods Other methods that can be used in performing a diagnostic procedure include: Usage of medical algorithms An "exhaustive method", in which every possible question is asked and all possible data is collected. Adverse effects Diagnosis problems are the dominant cause of medical malpractice payments, accounting for 35% of total payments in a study of 25 years of data and 350,000 claims. Overdiagnosis Overdiagnosis is the diagnosis of "disease" that will never cause symptoms or death during a patient's lifetime. It is a problem because it turns people into patients unnecessarily and because it can lead to economic waste (overutilization) and treatments that may cause harm. Overdiagnosis occurs when a disease is diagnosed correctly, but the diagnosis is irrelevant. A correct diagnosis may be irrelevant because treatment for the disease is not available, not needed, or not wanted. Errors Most people will experience at least one diagnostic error in their lifetime, according to a 2015 report by the National Academies of Sciences, Engineering, and Medicine. Causes and factors of error in diagnosis are: the manifestations of disease are not sufficiently noticeable a disease is omitted from consideration too much significance is given to some aspect of the diagnosis the condition is a rare disease with symptoms suggestive of many other conditions the condition has a rare presentation Lag time When making a medical diagnosis, a lag time is the delay until a step towards diagnosis of a disease or condition is made.
Types of lag times are mainly: Onset-to-medical encounter lag time, the time from onset of symptoms until visiting a health care provider Encounter-to-diagnosis lag time, the time from first medical encounter to diagnosis Lag time due to delays in reading x-rays have been cited as a major challenge in care delivery. The Department of Health and Human Services has reportedly found that interpretation of x-rays is rarely available to emergency room physicians prior to patient discharge. Long lag times are often called "diagnostic odyssey". History The first recorded examples of medical diagnosis are found in the writings of Imhotep (2630–2611 BC) in ancient Egypt (the Edwin Smith Papyrus). A Babylonian medical textbook, the Diagnostic Handbook written by Esagil-kin-apli (fl.1069–1046 BC), introduced the use of empiricism, logic and rationality in the diagnosis of an illness or disease. Traditional Chinese Medicine, as described in the Yellow Emperor's Inner Canon or Huangdi Neijing, specified four diagnostic methods: inspection, auscultation-olfaction, inquiry and palpation. Hippocrates was known to make diagnoses by tasting his patients' urine and smelling their sweat. Word Medical diagnosis or the actual process of making a diagnosis is a cognitive process. A clinician uses several sources of data and puts the pieces of the puzzle together to make a diagnostic impression. The initial diagnostic impression can be a broad term describing a category of diseases instead of a specific disease or condition. After the initial diagnostic impression, the clinician obtains follow up tests and procedures to get more data to support or reject the original diagnosis and will attempt to narrow it down to a more specific level. Diagnostic procedures are the specific tools that the clinicians use to narrow the diagnostic possibilities. The plural of diagnosis is diagnoses. The verb is to diagnose, and a person who diagnoses is called a diagnostician. Etymology The word diagnosis is derived through Latin from the Greek word διάγνωσις (diágnōsis) from διαγιγνώσκειν (diagignṓskein), meaning "to discern, distinguish". Society and culture Social context Diagnosis can take many forms. It might be a matter of naming the disease, lesion, dysfunction or disability. It might be a management-naming or prognosis-naming exercise. It may indicate either degree of abnormality on a continuum or kind of abnormality in a classification. It is influenced by non-medical factors such as power, ethics and financial incentives for patient or doctor. It can be a brief summation or an extensive formulation, even taking the form of a story or metaphor. It might be a means of communication such as a computer code through which it triggers payment, prescription, notification, information or advice. It might be pathogenic or salutogenic. It is generally uncertain and provisional. Once a diagnostic opinion has been reached, the provider is able to propose a management plan, which will include treatment as well as plans for follow-up. From this point on, in addition to treating the patient's condition, the provider can educate the patient about the etiology, progression, prognosis, other outcomes, and possible treatments of her or his ailments, as well as providing advice for maintaining health. 
A treatment plan is proposed which may include therapy and follow-up consultations and tests to monitor the condition and the progress of the treatment, if needed, usually according to the medical guidelines provided by the medical field on the treatment of the particular illness. Relevant information should be added to the medical record of the patient. A failure to respond to treatments that would normally work may indicate a need for review of the diagnosis. Nancy McWilliams identifies five reasons that determine the necessity for diagnosis: diagnosis is needed for treatment planning; the information it contains relates to prognosis; it protects the interests of patients; a diagnosis might help the therapist to empathize with the patient; and it might reduce the likelihood that some fearful patients will forgo treatment. Types Sub-types of diagnoses include: Clinical diagnosis A diagnosis made on the basis of medical signs and reported symptoms, rather than diagnostic tests Laboratory diagnosis A diagnosis based significantly on laboratory reports or test results, rather than the physical examination of the patient. For instance, a proper diagnosis of infectious diseases usually requires both an examination of signs and symptoms, as well as laboratory test results and characteristics of the pathogen involved. Radiology diagnosis A diagnosis based primarily on the results from medical imaging studies. Greenstick fractures are common radiological diagnoses. Electrography diagnosis A diagnosis based on measurement and recording of electrophysiologic activity. Endoscopy diagnosis A diagnosis based on endoscopic inspection and observation of the interior of a hollow organ or cavity of the body. Tissue diagnosis A diagnosis based on the macroscopic, microscopic, and molecular examination of tissues such as biopsies or whole organs. For example, a definitive diagnosis of cancer is made via tissue examination by a pathologist. Principal diagnosis The single medical diagnosis that is most relevant to the patient's chief complaint or need for treatment. Many patients have additional diagnoses. Admitting diagnosis The diagnosis given as the reason why the patient was admitted to the hospital; it may differ from the actual problem or from the discharge diagnoses, which are the diagnoses recorded when the patient is discharged from the hospital. Differential diagnosis A process of identifying all of the possible diagnoses that could be connected to the signs, symptoms, and lab findings, and then ruling out diagnoses until a final determination can be made. Diagnostic criteria Designates the combination of signs, symptoms, and test results that the clinician uses to attempt to determine the correct diagnosis. They are standards, normally published by international committees, and they are designed to offer the best possible sensitivity and specificity with respect to the presence of a condition, using state-of-the-art technology. Prenatal diagnosis Diagnostic work done before birth Diagnosis of exclusion A medical condition whose presence cannot be established with complete confidence from history, examination or testing. Diagnosis is therefore by elimination of all other reasonable possibilities. Dual diagnosis The diagnosis of two related, but separate, medical conditions or comorbidities.
The term almost always refers to a diagnosis of a serious mental illness and a substance use disorder; however, the increasing prevalence of genetic testing has revealed many cases of patients with multiple concomitant genetic disorders. Self-diagnosis The diagnosis or identification of a medical condition in oneself. Self-diagnosis is very common. Remote diagnosis A type of telemedicine that diagnoses a patient without being physically in the same room as the patient. Nursing diagnosis Rather than focusing on biological processes, a nursing diagnosis identifies people's responses to situations in their lives, such as a readiness to change or a willingness to accept assistance. Computer-aided diagnosis Providing symptoms allows the computer to identify the problem and diagnose the user to the best of its ability. Health screening begins by identifying the part of the body where the symptoms are located; the computer cross-references a database for the corresponding disease and presents a diagnosis. Overdiagnosis The diagnosis of "disease" that will never cause symptoms, distress, or death during a patient's lifetime Wastebasket diagnosis A vague, or even completely fake, medical or psychiatric label given to the patient or to the medical records department for essentially non-medical reasons, such as to reassure the patient by providing an official-sounding label, to make the provider look effective, or to obtain approval for treatment. This term is also used as a derogatory label for disputed, poorly described, overused, or questionably classified diagnoses, such as pouchitis and senility, or to dismiss diagnoses that amount to overmedicalization, such as the labeling of normal responses to physical hunger as reactive hypoglycemia. Retrospective diagnosis The labeling of an illness in a historical figure or specific historical event using modern knowledge, methods and disease classifications. See also Diagnosis codes Diagnosis-related group Diagnostic and Statistical Manual of Mental Disorders Doctor-patient relationship Etiology (medicine) International Statistical Classification of Diseases and Related Health Problems (ICD) Medical classification Merck Manual of Diagnosis and Therapy Medical error Nosology Nursing diagnosis Pathogenesis Pathology Prediction Preimplantation genetic diagnosis Prognosis Sign (medicine) Symptom Lists List of diagnostic classification and rating scales used in psychiatry List of diseases List of disorders List of medical symptoms
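The statistical view of diagnosis as a classification problem, and the probability-ranked candidate list described under differential diagnosis and computer-aided diagnosis above, can be illustrated with a minimal sketch in Python. The condition names, prior probabilities, and test characteristics below are hypothetical placeholders rather than clinical data, and a real decision support system would weigh far richer evidence.

def update_probability(prior, sensitivity, specificity, test_positive):
    # Bayes' rule: revise the probability of a candidate condition after one test result.
    if test_positive:
        likelihood_present = sensitivity
        likelihood_absent = 1 - specificity
    else:
        likelihood_present = 1 - sensitivity
        likelihood_absent = specificity
    numerator = likelihood_present * prior
    return numerator / (numerator + likelihood_absent * (1 - prior))

# A toy differential: hypothetical candidate conditions with assumed prior probabilities.
differential = {"condition_a": 0.30, "condition_b": 0.10, "condition_c": 0.02}

# Suppose a test aimed at condition_a (assumed sensitivity 0.90, specificity 0.95) is positive.
differential["condition_a"] = update_probability(differential["condition_a"], 0.90, 0.95, True)

# Re-rank the remaining candidates, mirroring the elimination process described above.
for name, probability in sorted(differential.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: {probability:.2f}")

Each new finding triggers the same kind of update, and candidates whose probability falls low enough are effectively ruled out, which is the mechanical counterpart of the diagnosis-of-exclusion process outlined earlier.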
Facies (medical)
In medical contexts, a facies is a distinctive facial expression or appearance associated with a specific medical condition. The term comes from Latin for "face". As a fifth declension noun, facies can be both singular and plural. Types Examples include: Hippocratic facies – eyes are sunken, temples collapsed, nose is pinched with crusts on the lips, and the forehead is clammy Moon face (also known as "Cushingoid facies") – Cushing's syndrome Elfin facies – Williams syndrome, Donohue syndrome Potter facies – oligohydramnios Mask-like facies – parkinsonism Leonine facies – lepromatous leprosy or craniometaphyseal dysplasia Mitral facies – mitral stenosis Amiodarone facies (deep blue discoloration around malar area and nose) Acromegalic facies – acromegaly Flat facies – Down syndrome Marfanoid facies – Marfan's syndrome Snarling facies – myasthenia gravis Myotonic facies – myotonic dystrophy Torpid facies – myxoedema Mouse facies – chronic kidney failure Plethoric facies – Cushing's syndrome and polycythemia vera Bird facies – Pierre Robin sequence Ashen grey facies – myocardial infarction Gargoyle facies – Hurler's syndrome Monkey facies – marasmus Hatchet facies – myotonia atrophica Gorilla-like face – acromegaly Bovine facies (or cow face) – craniofacial dysostosis or Crouzon syndrome Marshall Hall facies – hydrocephalus Frog face – intranasal disease Coarse facies – many inborn errors of metabolism Adenoid facies – developmental facial traits caused by adenoid hypertrophy, nasal airway obstruction and mouth breathing; really a form of long face syndrome. Lion-like facies – involvement of craniofacial bones in Paget disease of bone Chipmunk facies – beta thalassemia Treacher Collins syndrome – deformities of the ears, eyes, cheekbones, and chin Other disorders associated with syndromic facies Pitt–Hopkins syndrome Beta thalassemia is associated with distinctive facial features due to ineffective erythropoiesis. The ineffective erythropoiesis causes marrow hyperplasia or expansion and bony changes, including the bones of the face; this causes craniofacial protrusions. Mowat–Wilson syndrome Snijders Blok–Campeau syndrome See also Body habitus External links Face in Clinical Medicine
Transport
Transport (in British English) or transportation (in American English) is the intentional movement of humans, animals, and goods from one location to another. Modes of transport include air, land (rail and road), water, cable, pipelines, and space. The field can be divided into infrastructure, vehicles, and operations. Transport enables human trade, which is essential for the development of civilizations. Transport infrastructure consists of both fixed installations, including roads, railways, airways, waterways, canals, and pipelines, and terminals such as airports, railway stations, bus stations, warehouses, trucking terminals, refueling depots (including fuel docks and fuel stations), and seaports. Terminals may be used both for the interchange of passengers and cargo and for maintenance. Means of transport are any of the different kinds of transport facilities used to carry people or cargo. They may include vehicles, riding animals, and pack animals. Vehicles may include wagons, automobiles, bicycles, buses, trains, trucks, helicopters, watercraft, spacecraft, and aircraft. Modes A mode of transport is a solution that makes use of a certain type of vehicle, infrastructure, and operation. The transport of a person or of cargo may involve one mode or several of the modes, with the latter case being called inter-modal or multi-modal transport. Each mode has its own advantages and disadvantages, and will be chosen on the basis of cost, capability, and route. Governments deal with the way the vehicles are operated, and the procedures set for this purpose, including financing, legalities, and policies. In the transport industry, operations and ownership of infrastructure can be either public or private, depending on the country and mode. Passenger transport may be public, where operators provide scheduled services, or private. Freight transport has become focused on containerization, although bulk transport is used for large volumes of durable items. Transport plays an important part in economic growth and globalization, but most types cause air pollution and use large amounts of land. While it is heavily subsidized by governments, good planning of transport is essential to make traffic flow and restrain urban sprawl. Human-powered Human-powered transport, a form of sustainable transport, is the transport of people or goods using human muscle-power, in the form of walking, running, and swimming. Modern technology has allowed machines to enhance human power. Human-powered transport remains popular for reasons of cost-saving, leisure, physical exercise, and environmentalism; it is sometimes the only type available, especially in underdeveloped or inaccessible regions. Although humans are able to walk without infrastructure, the transport can be enhanced through the use of roads, especially when using the human power with vehicles, such as bicycles and inline skates. Human-powered vehicles have also been developed for difficult environments, such as snow and water, by watercraft rowing and skiing; even the air can be entered with human-powered aircraft. Animal-powered Animal-powered transport is the use of working animals for the movement of people and commodities. Humans may ride some of the animals directly, use them as pack animals for carrying goods, or harness them, alone or in teams, to pull sleds or wheeled vehicles. Air A fixed-wing aircraft, commonly called an airplane, is a heavier-than-air craft where movement of the air in relation to the wings is used to generate lift. 
The term is used to distinguish this from rotary-wing aircraft, where the movement of the lift surfaces relative to the air generates lift. A gyroplane is both fixed-wing and rotary wing. Fixed-wing aircraft range from small trainers and recreational aircraft to large airliners and military cargo aircraft. Two things necessary for aircraft are air flow over the wings for lift and an area for landing. The majority of aircraft also need an airport with the infrastructure for maintenance, restocking, and refueling and for the loading and unloading of crew, cargo, and passengers. While the vast majority of aircraft land and take off on land, some are capable of take-off and landing on ice, snow, and calm water. The aircraft is the second fastest method of transport, after the rocket. Commercial jets can reach speeds of roughly 900–950 km/h, while even fast single-engine aircraft top out at roughly 500–600 km/h. Aviation is able to quickly transport people and limited amounts of cargo over longer distances, but incurs high costs and energy use; for short distances or in inaccessible places, helicopters can be used. On April 28, 2009, The Guardian noted that "the WHO estimates that up to 500,000 people are on planes at any time." Land Land transport covers all land-based transport systems that provide for the movement of people, goods, and services. Land transport plays a vital role in linking communities to each other. Land transport is a key factor in urban planning. It consists of two kinds, rail and road. Rail Rail transport is where a train runs along a set of two parallel steel rails, known as a railway or railroad. The rails are anchored perpendicular to ties (or sleepers) of timber, concrete, or steel, to maintain a consistent distance apart, or gauge. The rails and perpendicular beams are placed on a foundation made of concrete or compressed earth and gravel in a bed of ballast. Alternative methods include monorail and maglev. A train consists of one or more connected vehicles that operate on the rails. Propulsion is commonly provided by a locomotive that hauls a series of unpowered cars that can carry passengers or freight. The locomotive can be powered by steam, by diesel, or by electricity supplied by trackside systems. Alternatively, some or all the cars can be powered, known as a multiple unit. Also, a train can be powered by horses, cables, gravity, pneumatics, and gas turbines. Railed vehicles move with much less friction than rubber tires on paved roads, making trains more energy efficient, though not as efficient as ships. Intercity trains are long-haul services connecting cities; modern high-speed rail is capable of speeds of up to around 350 km/h, but this requires specially built track. Regional and commuter trains feed cities from suburbs and surrounding areas, while intra-urban transport is performed by high-capacity tramways and rapid transits, often making up the backbone of a city's public transport. Freight trains traditionally used box cars, requiring manual loading and unloading of the cargo. Since the 1960s, container trains have become the dominant solution for general freight, while large quantities of bulk are transported by dedicated trains. Road A road is an identifiable route, way, or path between two or more places. Roads are typically smoothed, paved, or otherwise prepared to allow easy travel; though they need not be, and historically many roads were simply recognizable routes without any formal construction or maintenance.
In urban areas, roads may pass through a city or village and be named as streets, serving a dual function as urban space easement and route. The most common road vehicle is the automobile; a wheeled passenger vehicle that carries its own motor. Other users of roads include buses, trucks, motorcycles, bicycles, and pedestrians. As of 2010, there were 1.015 billion automobiles worldwide. Road transport offers complete freedom to road users to transfer the vehicle from one lane to the other and from one road to another according to the need and convenience. This flexibility of changes in location, direction, speed, and timings of travel is not available to other modes of transport. It is possible to provide door-to-door service only by road transport. Automobiles provide high flexibility with low capacity, but require high energy and area use, and are the main source of harmful noise and air pollution in cities; buses allow for more efficient travel at the cost of reduced flexibility. Road transport by truck is often the initial and final stage of freight transport. Water Water transport is movement by means of a watercraft—such as a barge, boat, ship, or sailboat—over a body of water, such as a sea, ocean, lake, canal, or river. The need for buoyancy is common to watercraft, making the hull a dominant aspect of its construction, maintenance, and appearance. In the 19th century, the first steam ships were developed, using a steam engine to drive a paddle wheel or propeller to move the ship. The steam was produced in a boiler using wood or coal and fed through a steam external combustion engine. Now most ships have an internal combustion engine using a slightly refined type of petroleum called bunker fuel. Some ships, such as submarines, use nuclear power to produce the steam. Recreational or educational craft still use wind power, while some smaller craft use internal combustion engines to drive one or more propellers or, in the case of jet boats, an inboard water jet. In shallow draft areas, hovercraft are propelled by large pusher-prop fans. (See Marine propulsion.) Although it is slow compared to other transport, modern sea transport is a highly efficient method of transporting large quantities of goods. Commercial vessels, nearly 35,000 in number, carried 7.4 billion tons of cargo in 2007. Transport by water is significantly less costly than air transport for transcontinental shipping; short sea shipping and ferries remain viable in coastal areas. Other modes Pipeline transport sends goods through a pipe; most commonly liquid and gases are sent, but pneumatic tubes can also send solid capsules using compressed air. For liquids/gases, any chemically stable liquid or gas can be sent through a pipeline. Short-distance systems exist for sewage, slurry, water, and beer, while long-distance networks are used for petroleum and natural gas. Cable transport is a broad mode where vehicles are pulled by cables instead of an internal power source. It is most commonly used at steep gradient. Typical solutions include aerial tramways, elevators, and ski lifts; some of these are also categorized as conveyor transport. Spaceflight is transport outside Earth's atmosphere by means of a spacecraft. It is most frequently used for satellites placed in Earth orbit. However, human spaceflight mission have landed on the Moon and are occasionally used to rotate crew-members to space stations. Uncrewed spacecraft have also been sent to all the planets of the Solar System. 
Suborbital spaceflight is the fastest of the existing and planned transport systems from a place on Earth to a distant "other place" on Earth. Faster transport could be achieved through part of a low Earth orbit or by following that trajectory even faster, using the propulsion of the rocket to steer it. Elements Infrastructure Infrastructure is the fixed installations that allow a vehicle to operate. It consists of a roadway, a terminal, and facilities for parking and maintenance. For rail, pipeline, road, and cable transport, the entire way the vehicle travels must be constructed. Air and watercraft are able to avoid this, since the airway and seaway do not need to be constructed. However, they require fixed infrastructure at terminals. Terminals such as airports, ports, and stations, are locations where passengers and freight can be transferred from one vehicle or mode to another. For passenger transport, terminals are integrating different modes to allow riders, who are interchanging between modes, to take advantage of each mode's benefits. For instance, airport rail links connect airports to the city centres and suburbs. The terminals for automobiles are parking lots, while buses and coaches can operate from simple stops. For freight, terminals act as transshipment points, though some cargo is transported directly from the point of production to the point of use. The financing of infrastructure can either be public or private. Transport is often a natural monopoly and a necessity for the public; roads, and in some countries railways and airports, are funded through taxation. New infrastructure projects can have high costs and are often financed through debt. Many infrastructure owners, therefore, impose usage fees, such as landing fees at airports or toll plazas on roads. Independent of this, authorities may impose taxes on the purchase or use of vehicles. Because of poor forecasting and overestimation of passenger numbers by planners, there is frequently a benefits shortfall for transport infrastructure projects. Means of transport Animals Animals used in transportation include pack animals and riding animals. Vehicles A vehicle is a non-living device that is used to move people and goods. Unlike the infrastructure, the vehicle moves along with the cargo and riders. Unless being pulled/pushed by a cable or muscle-power, the vehicle must provide its own propulsion; this is most commonly done through a steam engine, combustion engine, electric motor, jet engine, or rocket, though other means of propulsion also exist. Vehicles also need a system of converting the energy into movement; this is most commonly done through wheels, propellers, and pressure. Vehicles are most commonly staffed by a driver. However, some systems, such as people movers and some rapid transits, are fully automated. For passenger transport, the vehicle must have a compartment, seat, or platform for the passengers. Simple vehicles, such as automobiles, bicycles, or simple aircraft, may have one of the passengers as a driver. Recently, the progress related to the Fourth Industrial Revolution has brought a lot of new emerging technologies for transportation and automotive fields such as Connected Vehicles and Autonomous Driving. These innovations are said to form future mobility, but concerns remain on safety and cybersecurity, particularly concerning connected and autonomous mobility. Operation Private transport is only subject to the owner of the vehicle, who operates the vehicle themselves. 
For public transport and freight transport, operations are done through private enterprise or by governments. The infrastructure and vehicles may be owned and operated by the same company, or they may be operated by different entities. Traditionally, many countries have had a national airline and national railway. Since the 1980s, many of these have been privatized. International shipping remains a highly competitive industry with little regulation, but ports can be public-owned. Policy As the population of the world increases, cities grow in size and population—according to the United Nations, 55% of the world's population live in cities, and by 2050 this number is expected to rise to 68%. Public transport policy must evolve to meet the changing priorities of the urban world. The institution of policy enforces order in transport, which is by nature chaotic as people attempt to travel from one place to another as fast as possible. This policy helps to reduce accidents and save lives. Functions The relocation of travelers and cargo is the most common use of transport. However, other uses exist, such as the strategic and tactical relocation of armed forces during warfare, or the civilian movement of construction or emergency equipment. Passenger Passenger transport, or travel, is divided into public and private transport. Public transport consists of scheduled services on fixed routes, while private transport consists of vehicles that provide ad hoc services at the rider's desire. The latter offers better flexibility, but has lower capacity and a higher environmental impact. Travel may be as part of daily commuting or for business, leisure, or migration. Short-haul transport is dominated by the automobile and mass transit. The latter consists of buses in rural and small cities, supplemented with commuter rail, trams, and rapid transit in larger cities. Long-haul transport involves the use of the automobile, trains, coaches, and aircraft, the last of which have become predominantly used for the longest, including intercontinental, travel. Intermodal passenger transport is where a journey is performed through the use of several modes of transport; since all human transport normally starts and ends with walking, all passenger transport can be considered intermodal. Public transport may also involve the intermediate change of vehicle, within or across modes, at a transport hub, such as a bus or railway station. Taxis and buses can be found on both ends of the public transport spectrum. Buses are the cheapest mode of transport but are not necessarily flexible, and taxis are very flexible but more expensive. In the middle is demand-responsive transport, offering flexibility whilst remaining affordable. International travel may be restricted for some individuals due to legislation and visa requirements. Medical An ambulance is a vehicle used to transport people from or between places of treatment, and in some instances will also provide out-of-hospital medical care to the patient. The word is often associated with road-going "emergency ambulances", which form part of emergency medical services, administering emergency care to those with acute medical problems. Air medical services is a comprehensive term covering the use of air transport to move patients to and from healthcare facilities and accident scenes. Personnel provide comprehensive prehospital and emergency and critical care to all types of patients during aeromedical evacuation or rescue operations, aboard helicopters, propeller aircraft, or jet aircraft.
Freight Freight transport, or shipping, is a key part of the value chain in manufacturing. With increased specialization and globalization, production is being located further away from consumption, rapidly increasing the demand for transport. Transport creates place utility by moving the goods from the place of production to the place of consumption. While all modes of transport are used for cargo transport, the nature of the cargo strongly influences which mode is chosen. Logistics refers to the entire process of transferring products from producer to consumer, including storage, transport, transshipment, warehousing, material-handling, and packaging, with associated exchange of information. Incoterm deals with the handling of payment and responsibility of risk during transport. Containerization, with the standardization of ISO containers on all vehicles and at all ports, has revolutionized international and domestic trade, offering a huge reduction in transshipment costs. Traditionally, all cargo had to be manually loaded and unloaded into the hold of any ship or car; containerization allows for automated handling and transfer between modes, and the standardized sizes allow for gains in economy of scale in vehicle operation. This has been one of the key driving factors in international trade and globalization since the 1950s. Bulk transport is common with cargo that can be handled roughly without deterioration; typical examples are ore, coal, cereals, and petroleum. Because of the uniformity of the product, mechanical handling can allow enormous quantities to be handled quickly and efficiently. The low value of the cargo combined with high volume also means that economies of scale become essential in transport, and gigantic ships and whole trains are commonly used to transport bulk. Liquid products with sufficient volume may also be transported by pipeline. Air freight has become more common for products of high value; while less than one percent of world transport by volume is by airline, it amounts to forty percent of the value. Time has become especially important in regards to principles such as postponement and just-in-time within the value chain, resulting in a high willingness to pay for quick delivery of key components or items of high value-to-weight ratio. In addition to mail, common items sent by air include electronics and fashion clothing. Industry Impact Economic Transport is a key necessity for specialization—allowing production and consumption of products to occur at different locations. Throughout history, transport has been a spur to expansion; better transport allows more trade and a greater spread of people. Economic growth has always been dependent on increasing the capacity and rationality of transport. But the infrastructure and operation of transport have a great impact on the land, and transport is the largest drainer of energy, making transport sustainability a major issue. Due to the way modern cities and communities are planned and operated, a physical distinction between home and work is usually created, forcing people to transport themselves to places of work, study, or leisure, as well as to temporarily relocate for other daily activities. Passenger transport is also the essence of tourism, a major part of recreational transport.
Commerce requires the transport of people to conduct business, either to allow face-to-face communication for important decisions or to move specialists from their regular place of work to sites where they are needed. In lean thinking, transporting materials or work in process from one location to another is seen as one of the seven wastes (Japanese term: muda) which do not add value to a product. Planning Transport planning allows for high use and less impact regarding new infrastructure. Using models of transport forecasting, planners are able to predict future transport patterns. On the operative level, logistics allows owners of cargo to plan transport as part of the supply chain. Transport as a field is also studied through transport economics, a component for the creation of regulation policy by authorities. Transport engineering, a sub-discipline of civil engineering, must take into account trip generation, trip distribution, mode choice, and route assignment, while the operative level is handled through traffic engineering. Because of the negative impacts incurred, transport often becomes the subject of controversy related to choice of mode, as well as increased capacity. Automotive transport can be seen as a tragedy of the commons, where the flexibility and comfort for the individual deteriorate the natural and urban environment for all. Density of development depends on mode of transport, with public transport allowing for better spatial use. Good land use keeps common activities close to people's homes and places higher-density development closer to transport lines and hubs, to minimize the need for transport. There are economies of agglomeration. Beyond transport, some land uses are more efficient when clustered. Transport facilities consume land, and in cities pavement (devoted to streets and parking) can easily exceed 20 percent of the total land use. An efficient transport system can reduce land waste. Too much infrastructure and too much smoothing for maximum vehicle throughput mean that in many cities there is too much traffic and many—if not all—of the negative impacts that come with it. It is only in recent years that traditional practices have started to be questioned in many places; as a result of new types of analysis which bring in a much broader range of skills than those traditionally relied on—spanning such areas as environmental impact analysis, public health, sociology, and economics—the viability of the old mobility solutions is increasingly being questioned. Environment Transport is a major use of energy and burns most of the world's petroleum. This creates air pollution, including nitrous oxides and particulates, and is a significant contributor to global warming through emission of carbon dioxide, for which transport is the fastest-growing emission sector. By sub-sector, road transport is the largest contributor to global warming. Environmental regulations in developed countries have reduced individual vehicles' emissions; however, this has been offset by increases in the numbers of vehicles and in the use of each vehicle. Some pathways to reduce the carbon emissions of road vehicles considerably have been studied. Energy use and emissions vary largely between modes, causing environmentalists to call for a transition from air and road to rail and human-powered transport, as well as increased transport electrification and energy efficiency. 
Other environmental impacts of transport systems include traffic congestion and automobile-oriented urban sprawl, which can consume natural habitat and agricultural lands. By reducing transport emissions globally, it is predicted that there will be significant positive effects on Earth's air quality, acid rain, smog, and climate change. While electric cars are being built to cut down CO2 emission at the point of use, an approach that is becoming popular among cities worldwide is to prioritize public transport, bicycles, and pedestrian movement. Redirecting vehicle movement to create 20-minute neighbourhoods promotes exercise while greatly reducing vehicle dependency and pollution. Some policies levy a congestion charge on cars for travelling within congested areas during peak times. Airplane emissions change depending on the flight distance. It takes a lot of energy to take off and land, so longer flights are more efficient per mile traveled. However, longer flights naturally use more fuel in total. Short flights produce the most CO2 per passenger mile, while long flights produce slightly less. Things get worse when planes fly high in the atmosphere. Their emissions trap much more heat than those released at ground level. This is not just because of CO2, but also because of a mix of other greenhouse gases in the exhaust. City buses produce about 0.3 kg of CO2 for every mile traveled per passenger. For long-distance bus trips (over 20 miles), that pollution drops to about 0.08 kg of CO2 per passenger mile. On average, commuter trains produce around 0.17 kg of CO2 for each mile traveled per passenger. Long-distance trains are slightly higher at about 0.19 kg of CO2 per passenger mile. The fleet emission average for delivery vans, trucks, and big rigs is roughly 10.2 kg of CO2 per gallon of diesel consumed. Delivery vans and trucks average about 7.8 mpg (or 1.3 kg of CO2 per mile) while big rigs average about 5.3 mpg (or 1.92 kg of CO2 per mile). Sustainable development The United Nations first formally recognized the role of transport in sustainable development at the 1992 United Nations Earth Summit. In the 2012 United Nations World Conference, global leaders unanimously recognized that transport and mobility are central to achieving the sustainability targets. In recent years, data has been collected to show that the transport sector contributes about a quarter of global greenhouse gas emissions, and therefore sustainable transport has been mainstreamed across several of the 2030 Sustainable Development Goals, especially those related to food security, health, energy, economic growth, infrastructure, and cities and human settlements. Meeting sustainable transport targets is said to be particularly important to achieving the Paris Agreement. There are various Sustainable Development Goals (SDGs) that are promoting sustainable transport to meet the defined goals. These include SDG 3 on health (increased road safety), SDG 7 on energy, SDG 8 on decent work and economic growth, SDG 9 on resilient infrastructure, SDG 11 on sustainable cities (access to transport and expanded public transport), SDG 12 on sustainable consumption and production (ending fossil fuel subsidies), and SDG 14 on oceans, seas, and marine resources. History Natural Humans' first ways to move included walking, running, and swimming. The domestication of animals introduced a new way to lay the burden of transport on more powerful creatures, allowing the hauling of heavier loads, or humans riding animals for greater speed and duration. Inventions such as the wheel and the sled (U.K.
sledge) helped make animal transport more efficient through the introduction of vehicles. The first forms of road transport involved animals, such as horses (domesticated in the 4th or the 3rd millennium BCE), oxen (from about 8000 BCE), or humans carrying goods over dirt tracks that often followed game trails. Water transport Water transport, including rowed and sailed vessels, dates back to time immemorial and was the only efficient way to transport large quantities or over large distances prior to the Industrial Revolution. The first watercraft were canoes cut out from tree trunks. Early water transport was accomplished with ships that were either rowed or used the wind for propulsion, or a combination of the two. The importance of water has led to most cities that grew up as sites for trading being located on rivers or on the sea-shore, often at the intersection of two bodies of water. Mechanical Until the Industrial Revolution, transport remained slow and costly, and production and consumption gravitated as close to each other as feasible. The Industrial Revolution in the 19th century saw several inventions fundamentally change transport. With telegraphy, communication became instant and independent of the transport of physical objects. The invention of the steam engine, closely followed by its application in rail transport, made land transport independent of human or animal muscles. Both speed and capacity increased, allowing specialization through manufacturing being located independently of natural resources. The 19th century also saw the development of the steam ship, which sped up global transport. With the development of the combustion engine and the automobile around 1900, road transport became more competitive again, and mechanical private transport originated. The first "modern" highways were constructed during the 19th century with macadam. Later, tarmac and concrete became the dominant paving materials. In 1903 the Wright brothers demonstrated the first successful controllable airplane, and after World War I (1914–1918) aircraft became a fast way to transport people and express goods over long distances. After World War II (1939–1945) the automobile and airlines took higher shares of transport, reducing rail and water to freight and short-haul passenger services. Scientific spaceflight began in the 1950s, with rapid growth until the 1970s, when interest dwindled. In the 1950s the introduction of containerization gave massive efficiency gains in freight transport, fostering globalization. International air travel became much more accessible in the 1960s with the commercialization of the jet engine. Along with the growth in automobiles and motorways, rail and water transport declined in relative importance. After the introduction of the Shinkansen in Japan in 1964, high-speed rail in Asia and Europe started attracting passengers on long-haul routes away from the airlines. Early in U.S. history, private joint-stock corporations owned most aqueducts, bridges, canals, railroads, roads, and tunnels. Most such transport infrastructure came under government control in the late 19th and early 20th centuries, culminating in the nationalization of inter-city passenger rail-service with the establishment of Amtrak. Recently, however, a movement to privatize roads and other infrastructure has gained some ground and adherents. 
See also Energy efficiency in transport Environmental impact of aviation Free public transport Green transport hierarchy IEEE Intelligent Transportation Systems Society Journal of Transport and Land Use List of emerging transportation technologies Outline of transport Personal rapid transit Public transport Public transport accessibility level Rail transport by country Speed record Taxicabs by country Transport divide Transportation engineering Further reading McKibben, Bill, "Toward a Land of Buses and Bikes" (review of Ben Goldfarb, Crossings: How Road Ecology Is Shaping the Future of Our Planet, Norton, 2023, 370 pp.; and Henry Grabar, Paved Paradise: How Parking Explains the World, Penguin Press, 2023, 346 pp.), The New York Review of Books, vol. LXX, no. 15 (5 October 2023), pp. 30–32. "Someday in the not impossibly distant future, if we manage to prevent a global warming catastrophe, you could imagine a post-auto world where bikes and buses and trains are ever more important, as seems to be happening in Europe at the moment." (p. 32.) External links Transportation from UCB Libraries GovPubs America On the Move An online transportation exhibition from the National Museum of American History, Smithsonian Institution
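The per-mile freight figures quoted in the Environment section above follow from a simple unit conversion: kilograms of CO2 emitted per gallon of fuel divided by the vehicle's miles per gallon. A minimal sketch in Python, using an approximate diesel emission factor implied by those figures (treat the constant as an assumption, not an official value):

DIESEL_KG_CO2_PER_GALLON = 10.2  # approximate factor consistent with the figures above

def kg_co2_per_mile(miles_per_gallon):
    # Convert fuel economy (mpg) into per-mile CO2 emissions for a diesel vehicle.
    return DIESEL_KG_CO2_PER_GALLON / miles_per_gallon

for label, mpg in [("delivery van or truck", 7.8), ("big rig", 5.3)]:
    print(f"{label}: {kg_co2_per_mile(mpg):.2f} kg CO2 per mile")

Dividing 10.2 by 7.8 and by 5.3 reproduces the roughly 1.3 and 1.92 kg per mile cited above; per-passenger or per-tonne figures additionally divide by the load carried.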
Arthralgia
Arthralgia literally means 'joint pain'. Specifically, arthralgia is a symptom of injury, infection, illness (in particular arthritis), or an allergic reaction to medication. According to MeSH, the term arthralgia should only be used when the condition is non-inflammatory, and the term arthritis should be used when the condition is inflammatory. Causes The causes of arthralgia are varied and range, from the joint's perspective, from degenerative and destructive processes such as osteoarthritis and sports injuries to inflammation of tissues surrounding the joints, such as bursitis. These might be triggered by other things, such as infections or vaccinations. Diagnosis Diagnosis involves interviewing the patient and performing physical exams. When attempting to establish the cause of the arthralgia, the emphasis is on the interview. The patient is asked questions intended to narrow the number of potential causes. Given the varied nature of these possible causes, some questions may seem irrelevant. For example, the patient may be asked about dry mouth, light sensitivity, rashes or a history of seizures. Answering yes or no to any of these questions limits the number of possible causes and guides the physician toward the appropriate exams and lab tests. Treatment Treatment depends on the specific underlying cause, which is treated first and foremost. The treatments may include joint replacement surgery for severely damaged joints, immunosuppressants for immune system dysfunction, antibiotics when an infection is the cause, and discontinuing medication when an allergic reaction is the cause. When treating the primary cause, pain management may still play a role in treatment. See also Antiarthritics Myalgia
Agglutination (biology)
Agglutination is the clumping of particles. The word agglutination comes from the Latin agglutinare (glueing to). Agglutination is a reaction in which particles (such as red blood cells or bacteria) suspended in a liquid collect into clumps, usually as a response to a specific antibody. This occurs in biology in two main examples: The clumping of cells such as bacteria or red blood cells in the presence of an antibody or complement. The antibody or other molecule binds multiple particles and joins them, creating a large complex. This increases the efficacy of microbial elimination by phagocytosis, as large clumps of bacteria can be eliminated in one pass, versus the elimination of single microbial antigens. When people are given blood transfusions of the wrong blood group, the antibodies react with the incorrectly transfused blood group and, as a result, the erythrocytes clump up and stick together, causing them to agglutinate. The coalescing of small particles that are suspended in a solution; these larger masses are then (usually) precipitated. In immunohematology Hemagglutination Hemagglutination is the process by which red blood cells agglutinate, meaning clump or clog. The agglutinin involved in hemagglutination is called hemagglutinin. In cross-matching, donor red blood cells and the recipient's serum or plasma are incubated together. If agglutination occurs, this indicates that the donor and recipient blood types are incompatible. When a person produces antibodies against their own red blood cells, as in cold agglutinin disease and other autoimmune conditions, the cells may agglutinate spontaneously. This is called autoagglutination and it can interfere with laboratory tests such as blood typing and the complete blood count. Leukoagglutination Leukoagglutination occurs when the particles involved are white blood cells. An example is the PH-L form of phytohaemagglutinin. In microbiology Agglutination is commonly used as a method of identifying specific bacterial antigens and the identity of such bacteria, and therefore is an important technique in diagnosis. History of discoveries Two bacteriologists, Herbert Edward Durham (died 1945) and Max von Gruber (1853–1927), discovered specific agglutination in 1896. The clumping became known as the Gruber-Durham reaction. Gruber introduced the term agglutinin (from the Latin) for any substance that caused agglutination of cells. French physician Fernand Widal (1862–1929) put Gruber and Durham's discovery to practical use later in 1896, using the reaction as the basis for a test for typhoid fever. Widal found that blood serum from a typhoid carrier caused a culture of typhoid bacteria to clump, whereas serum from a typhoid-free person did not. This Widal test was the first example of serum diagnosis. Austrian physician Karl Landsteiner found another important practical application of the agglutination reaction in 1900. Landsteiner's agglutination tests and his discovery of ABO blood groups were the start of the science of blood transfusion and serology, which has made transfusion possible and safer. See also Agglutination-PCR Blocking antibody Coagulation Immune system Macrophage Mannan oligosaccharides (MOS)
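The transfusion mismatch described above, in which recipient antibodies bind antigens on donor red cells, can be sketched as a simple ABO compatibility check in Python. This is a teaching simplification covering only the standard ABO antigens and antibodies; the Rh factor and other blood group systems are ignored, and it is no substitute for laboratory cross-matching.

# Antigens carried on red cells of each ABO type, and antibodies present in the recipient's plasma.
DONOR_ANTIGENS = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}
RECIPIENT_ANTIBODIES = {"O": {"A", "B"}, "A": {"B"}, "B": {"A"}, "AB": set()}

def would_agglutinate(donor_type, recipient_type):
    # True if the recipient's antibodies would bind antigens on the donor's red cells.
    return bool(DONOR_ANTIGENS[donor_type] & RECIPIENT_ANTIBODIES[recipient_type])

print(would_agglutinate("A", "O"))   # True: type O plasma carries anti-A antibodies
print(would_agglutinate("O", "AB"))  # False: type O red cells carry neither A nor B antigens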
Dehydration
In physiology, dehydration is a lack of total body water, with an accompanying disruption of metabolic processes. It occurs when free water loss exceeds free water intake, usually due to exercise, disease, or high environmental temperature. Mild dehydration can also be caused by immersion diuresis, which may increase risk of decompression sickness in divers. Most people can tolerate a 3-4% decrease in total body water without difficulty or adverse health effects. A 5-8% decrease can cause fatigue and dizziness. Loss of over 10% of total body water can cause physical and mental deterioration, accompanied by severe thirst. Death occurs at a loss of between 15 and 25% of the body water. Mild dehydration is characterized by thirst and general discomfort and is usually resolved with oral rehydration. Dehydration can cause hypernatremia (high levels of sodium ions in the blood) and is distinct from hypovolemia (loss of blood volume, particularly blood plasma). Chronic dehydration can contribute to the formation of kidney stones as well as the development of chronic kidney disease. Signs and symptoms The hallmarks of dehydration include thirst and neurological changes such as headaches, general discomfort, loss of appetite, nausea, decreased urine volume (unless polyuria is the cause of dehydration), confusion, unexplained tiredness, purple fingernails, and seizures. The symptoms of dehydration become increasingly severe with greater total body water loss. A body water loss of 1-2%, considered mild dehydration, is shown to impair cognitive performance. While in people over age 50, the body's thirst sensation diminishes with age, a study found that there was no difference in fluid intake between young and old people. Many older people have symptoms of dehydration. Dehydration contributes to morbidity in the elderly population, especially during conditions that promote insensible free water losses, such as hot weather. A Cochrane review on this subject defined water-loss dehydration as "people with serum osmolality of 295 mOsm/kg or more" and found that the main symptom in the elderly (people aged over 65) was fatigue. Cause Risk factors for dehydration include but are not limited to: exerting oneself in hot and humid weather, habitation at high altitudes, endurance athletics, elderly adults, infants, children and people living with chronic illnesses. Dehydration can also come as a side effect from many different types of drugs and medications. In the elderly, blunted response to thirst or inadequate ability to access free water in the face of excess free water losses (especially hyperglycemia related) seem to be the main causes of dehydration. Excess free water or hypotonic water can leave the body in two ways – sensible loss such as osmotic diuresis, sweating, vomiting and diarrhea, and insensible water loss, occurring mainly through the skin and respiratory tract. In humans, dehydration can be caused by a wide range of diseases and states that impair water homeostasis in the body. These occur primarily through either impaired thirst/water access or sodium excess. Diagnosis Definition Dehydration occurs when water intake does not replace free water lost due to normal physiologic processes, including breathing, urination, perspiration, or other causes, including diarrhea, and vomiting. Dehydration can be life-threatening when severe and lead to seizures or respiratory arrest, and also carries the risk of osmotic cerebral edema if rehydration is overly rapid. 
The term "dehydration" has sometimes been used incorrectly as a proxy for the separate, related condition of hypovolemia, which specifically refers to a decrease in volume of blood plasma. The two are regulated through independent mechanisms in humans; the distinction is important in guiding treatment. Physical examination The skin turgor test can be used to support the diagnosis of dehydration. The skin turgor test is conducted by pinching skin on the patient's body, in a location such as the forearm or the back of the hand, and watching to see how quickly it returns to its normal position. The skin turgor test can be unreliable in patients who have reduced skin elasticity, such as the elderly. Prevention For routine activities, thirst is normally an adequate guide to maintain proper hydration. Minimum water intake will vary individually depending on weight, energy expenditure, age, sex, physical activity, environment, diet, and genetics. With exercise, exposure to hot environments, or a decreased thirst response, additional water may be required. In athletes in competition, drinking to thirst optimizes performance and safety, despite weight loss, and as of 2010, there was no scientific study showing that it is beneficial to stay ahead of thirst and maintain weight during exercise. In warm or humid weather, or during heavy exertion, water loss can increase markedly, because humans have a large and widely variable capacity for sweating. Whole-body sweat losses in men can exceed 2 L/h during competitive sport, with rates of 3–4 L/h observed during short-duration, high-intensity exercise in the heat. When such large amounts of water are being lost through perspiration, electrolytes, especially sodium, are also being lost. In most athletes exercising and sweating for 4–5 hours with a sweat sodium concentration of less than 50 mmol/L, the total sodium lost is less than 10% of total body stores (total stores are approximately 2,500 mmol, or 58 g for a 70-kg person). These losses appear to be well tolerated by most people. The inclusion of sodium in fluid replacement drinks has some theoretical benefits and poses little or no risk, so long as these fluids are hypotonic (since the mainstay of dehydration prevention is the replacement of free water losses). Treatment The most effective treatment for minor dehydration is widely considered to be drinking water and reducing fluid loss. Plain water restores only the volume of the blood plasma, inhibiting the thirst mechanism before solute levels can be replenished. Solid foods can contribute to replace fluid loss from vomiting and diarrhea. Urine concentration and frequency will return to normal as dehydration resolves. In some cases, correction of a dehydrated state is accomplished by the replenishment of necessary water and electrolytes (through oral rehydration therapy, or fluid replacement by intravenous therapy). As oral rehydration is less painful, non-invasive, inexpensive, and easier to provide, it is the treatment of choice for mild dehydration. Solutions used for intravenous rehydration must be isotonic or hypertonic. Pure water injected into the veins will cause the breakdown (lysis) of red blood cells (erythrocytes). When fresh water is unavailable (e.g. at sea or in a desert), seawater or drinks with significant alcohol concentration will worsen dehydration. 
Urine contains a lower solute concentration than seawater; this requires the kidneys to create more urine to remove the excess salt, causing more water to be lost than was consumed from seawater. If a person is dehydrated and taken to a medical facility, IVs can also be used. For severe cases of dehydration where fainting, unconsciousness, or other severely inhibiting symptoms are present (the patient is incapable of standing upright or thinking clearly), emergency attention is required. Fluids containing a proper balance of replacement electrolytes are given orally or intravenously with continuing assessment of electrolyte status; complete resolution is normal in all but the most extreme cases. See also Hydrational fluids Terminal dehydration Dryness (medical) Hypernatremia References Further reading External links Definition of dehydration by the U.S. National Institutes of Health's MedlinePlus medical encyclopedia Causes of death Nutrition Symptoms and signs Wilderness medical emergencies
Gordon's functional health patterns
Gordon's functional health patterns is a method devised by Marjory Gordon to be used by nurses in the nursing process to provide a more comprehensive nursing assessment of the patient. The following areas are assessed through questions asked by the nurse and through medical examination to provide an overview of the individual's health status and of the health practices used to reach the current level of health or wellness: health perception and health management; nutritional-metabolic; elimination (excretion patterns and problems such as constipation, incontinence, and diarrhea need to be evaluated); activity-exercise (whether one is able to do daily activities and self-care activities normally without any problem); sleep-rest (whether the person has hypersomnia, insomnia, or normal sleeping patterns); cognitive-perceptual (assessment of neurological function to check the person's ability to comprehend information); self-perception/self-concept; role-relationship (assessed only as appropriate to the patient's age and specific situation); sexuality-reproductive; coping-stress tolerance; and value-belief. References Further reading Marjory Gordon. Manual of Nursing Diagnosis, Eleventh Edition. Nursing theory
Arteriosclerosis
Arteriosclerosis, literally meaning "hardening of the arteries", is an umbrella term for a vascular disorder characterized by abnormal thickening, hardening, and loss of elasticity of the walls of arteries; this process gradually restricts the blood flow to one's organs and tissues and can lead to severe health risks brought on by atherosclerosis, which is a specific form of arteriosclerosis caused by the buildup of fatty plaques, cholesterol, and some other substances in and on the artery walls (it can be brought on by smoking, a bad diet, or many genetic factors). Atherosclerosis is the primary cause of coronary artery disease (CAD) and stroke, with multiple genetic and environmental contributions. Genetic-epidemiologic studies have identified a long list of genetic and non-genetic risk factors for CAD. However, such studies indicate that family history is the most significant independent risk factor. Signs and symptoms The signs and symptoms of arteriosclerosis depend on the vessel affected by the disease. If affecting cerebral or ophthalmic vessels, as in cerebrovascular accidents or transient ischemic attacks, signs and symptoms may include sudden weakness, facial or lower limb numbness, confusion, difficulty understanding speech, and problems seeing. If affecting coronary vessels, as in coronary artery disease (including acute myocardial ischemia or a "heart attack"), signs and symptoms may include chest pain. Pathophysiology The lesions of arteriosclerosis begin as the intima (innermost layer of blood vessel wall) of the arterial wall start to fill up with the deposition of cellular wastes. As these start to mature, they can take different forms of arteriosclerosis. All are linked through common features such as the stiffening of arterial vessels, thickening of arterial walls and degenerative nature of the disease. Arteriolosclerosis, unlike atherosclerosis, is a sclerosis that only affects small arteries and arterioles, which carry nutrients and blood to the cells. Atherosclerosis is the narrowing of arteries from a buildup of plaque, usually made up of cholesterol, fatty substances, cellular waste products, calcium, and fibrin, inside the arteries. This affects large and medium-sized arteries; however, its positioning varies person to person. Monckeberg's arteriosclerosis or medial calcific sclerosis is seen mostly in the elderly, commonly in arteries of the extremities. Hyperplastic: Hyperplastic arteriosclerosis refers to the type of arteriosclerosis that affects large and medium-sized arteries. Hyaline type: Hyaline arteriosclerosis, also referred to as arterial hyalinosis and arteriolar hyalinosis, refers to lesions that are caused by the deposition of homogenous hyaline in the small arteries and arterioles. Diagnosis Diagnosis of an individual suspected of having arteriosclerosis can be based on a physical exam, blood test, EKG and the results of these tests (among other exams). Treatment Treatment is often in the form of preventive measures of prophylaxis. Medical therapy is often prescribed to help prevent arteriosclerosis for underlying conditions, such as medications for the treatment of high cholesterol (e.g., statins, cholesterol absorption inhibitors), medications to treat high blood pressure (e.g., ACE inhibitors, angiotensin II receptor blockers), and antiplatelet medications. Lifestyle changes are also advised, such as increasing exercise, stopping smoking, and moderating alcohol intake. 
There are a variety of types of surgery: Angioplasty and stent placement: A catheter is first inserted into the blocked or narrowed part of the artery, followed by a second one with a deflated balloon that is passed through the catheter into the narrowed area. The balloon is then inflated, pushing the deposits back against the arterial walls, and a mesh tube is usually left behind to prevent the artery from retightening. Coronary artery bypass surgery: This surgery creates a new pathway for blood to flow to the heart. The surgeon attaches a healthy piece of vein to the coronary artery, just above and below the blockage, to allow blood to bypass it. Endarterectomy: This is the general procedure for the surgical removal of plaque from the artery that has become narrowed or blocked. Thrombolytic therapy: This is a treatment used to break up clots inside the arteries via intravenous clot-dissolving medicine. Epidemiology In 2008, the US had an estimated 16 million cases of atherosclerotic heart disease and 5.8 million cases of stroke. Cardiovascular diseases caused by arteriosclerosis also caused almost 812,000 deaths in 2008, more than any other cause, including cancer. About 1.2 million Americans are predicted to have a heart attack each year. History The diagnostics and clinical implications of this disease were not recognized until the 20th century. Many cases have been observed and recorded, and Jean Lobstein coined the term arteriosclerosis while he was analyzing the composition of calcified arterial lesions. The name "arteriosclerosis" is derived from the Greek words ἀρτηρία (artēría, artery) and σκληρωτικός (sklērōtikós, hardened). References Further reading Mayoclinic-atherosclerosis External links Vascular diseases Medical conditions related to obesity
Demyelinating disease
A demyelinating disease refers to any disease affecting the nervous system where the myelin sheath surrounding neurons is damaged. This damage disrupts the transmission of signals through the affected nerves, resulting in a decrease in their conduction ability. Consequently, this reduction in conduction can lead to deficiencies in sensation, movement, cognition, or other functions depending on the nerves affected. Various factors can contribute to the development of demyelinating diseases, including genetic predisposition, infectious agents, autoimmune reactions, and other unknown factors. Proposed causes of demyelination include genetic predisposition, environmental factors such as viral infections or exposure to certain chemicals. Additionally, exposure to commercial insecticides like sheep dip, weed killers, and flea treatment preparations for pets, which contain organophosphates, can also lead to nerve demyelination. Chronic exposure to neuroleptic medications may also cause demyelination. Furthermore, deficiencies in vitamin B12 can result in dysmyelination. Demyelinating diseases are traditionally classified into two types: demyelinating myelinoclastic diseases and demyelinating leukodystrophic diseases. In the first group, a healthy and normal myelin is destroyed by toxic substances, chemicals, or autoimmune reactions. In the second group, the myelin is inherently abnormal and undergoes degeneration. The Poser criteria named this second group dysmyelinating diseases. In the most well-known demyelinating disease, multiple sclerosis, evidence suggests that the body's immune system plays a significant role. Acquired immune system cells, specifically T-cells, are found at the site of lesions. Other immune system cells, such as macrophages (and possibly mast cells), also contribute to the damage. Signs and symptoms Symptoms and signs that present in demyelinating diseases are different for each condition. These symptoms and signs can present in a person with a demyelinating disease: Blurred double vision (Diplopia) Ataxia Clonus Dysarthria Fatigue Clumsiness Hand paralysis Hemiparesis Genital anaesthesia Incoordination Paresthesias Ocular paralysis (cranial nerve palsy) Impaired muscle coordination Weakness (muscle) Loss of sensation Impaired vision Unsteady gait Spastic paraparesis Incontinence Hearing problems Speech problems Evolutionary considerations The role of prolonged cortical myelination in human evolution has been implicated as a contributing factor in some cases of demyelinating disease. Unlike other primates, humans exhibit a unique pattern of postpubertal myelination, which may contribute to the development of psychiatric disorders and neurodegenerative diseases that present in early adulthood and beyond. The extended period of cortical myelination in humans may allow greater opportunities for disruption in myelination, resulting in the onset of demyelinating disease. Furthermore, humans have significantly greater prefrontal white matter volume than other primate species, which implies greater myelin density. Increased myelin density in humans as a result of a prolonged myelination may, therefore, structure risk for myelin degeneration and dysfunction. Evolutionary considerations for the role of prolonged cortical myelination as a risk factor for demyelinating disease are particularly pertinent given that genetics and autoimmune deficiency hypotheses fail to explain many cases of demyelinating disease. 
As has been argued, diseases such as multiple sclerosis cannot be accounted for by autoimmune deficiency alone, but strongly imply the influence of flawed developmental processes in disease pathogenesis. Therefore, the role of the human-specific prolonged period of cortical myelination is an important evolutionary consideration in the pathogenesis of demyelinating disease. Diagnosis Various methods/techniques are used to diagnose demyelinating diseases: Exclusion of other conditions that have overlapping symptoms Magnetic resonance imaging (MRI) is a medical imaging technique used in radiology to visualize internal structures of the body in detail. MRI makes use of the property of nuclear magnetic resonance (NMR) to image nuclei of atoms inside the body. This method is reliable because MRIs assess changes in proton density. "Spots" can occur as a result of changes in brain water content. Evoked potential is an electrical potential recorded from the nervous system following the presentation of a stimulus as detected by electroencephalography (EEG), electromyography (EMG), or other electrophysiological recording method. Cerebrospinal fluid analysis (CSF) can be extremely beneficial in the diagnosis of central nervous system infections. A CSF culture examination may yield the microorganism that caused the infection. Quantitative proton magnetic resonance spectroscopy (MRS) is a noninvasive analytical technique that has been used to study metabolic changes in brain tumors, strokes, seizure disorders, Alzheimer's disease, depression, and other diseases affecting the brain. It has also been used to study the metabolism of other organs such as muscles. Diagnostic criteria refers to a specific combination of signs, symptoms, and test results that the clinician uses in an attempt to determine the correct diagnosis. Fluid-attenuated inversion recovery (FLAIR) uses a pulse sequence to suppress cerebrospinal fluid and show lesions more clearly, and is used for example in multiple sclerosis evaluation. Types Demyelinating diseases can be divided in those affecting the central nervous system (CNS) and those affecting the peripheral nervous system (PNS). They can also be classified by the presence or absence of inflammation. Finally, a division may be made based on the underlying cause of demyelination: the disease process can be demyelinating myelinoclastic, wherein myelin is destroyed; or dysmyelinating leukodystrophic, wherein myelin is abnormal and degenerative. CNS The demyelinating disorders of the central nervous system include: Myelinoclastic or demyelinating disorders: Typical forms of multiple sclerosis Neuromyelitis optica, or Devic's disease Idiopathic inflammatory demyelinating diseases Leukodystrophic or dysmyelinating disorders: CNS neuropathies such as those produced by vitamin B12 deficiency Central pontine myelinolysis Myelopathies such as tabes dorsalis (syphilitic myelopathy) Leukoencephalopathies such as progressive multifocal leukoencephalopathy Leukodystrophies The myelinoclastic disorders are typically associated with symptoms such as optic neuritis and transverse myelitis, because the demyelinating inflammation can affect the optic nerve or spinal cord. Many are idiopathic. Both myelinoclastic and leukodystrophic modes of disease may result in lesional demyelinations of the central nervous system. 
PNS The demyelinating diseases of the peripheral nervous system include: Guillain–Barré syndrome and its chronic counterpart, chronic inflammatory demyelinating polyneuropathy Anti-MAG peripheral neuropathy Charcot–Marie–Tooth disease and its counterpart Hereditary neuropathy with liability to pressure palsy Copper deficiency-associated conditions (peripheral neuropathy, myelopathy, and rarely optic neuropathy) Progressive inflammatory neuropathy Treatment Treatments are patient-specific and depend on the symptoms that present with the disorder, as well as the progression of the condition. Improvements to the patient's life may be accomplished through the management of symptoms or slowing of the rate of demyelination. Treatment can include medication, lifestyle changes (i.e. smoking cessation, increased rest, and dietary changes), counselling, relaxation, physical exercise, patient education, and in some cases, deep brain thalamic stimulation (to ameliorate tremors). Prognosis Prognosis depends on the condition itself. Some conditions such as MS depend on the subtype of the disease and various attributes of the patient such as age, sex, initial symptoms, and the degree of disability the patient experiences. Life expectancy in MS patients is 5 to 10 years lower than unaffected people. MS is an inflammatory demyelinating disease of the central nervous system (CNS) that develops in genetically susceptible individuals after exposure to unknown environmental trigger(s). The bases for MS are unknown but are strongly suspected to involve immune reactions against autoantigens, particularly myelin proteins. The most accepted hypothesis is that dialogue between T-cell receptors and myelin antigens leads to an immune attack on the myelin-oligodendrocyte complex. These interactions between active T cells and myelin antigens provoke a massive destructive inflammatory response and promote continuing proliferation of T and B cells and macrophage activation, which sustains secretion of inflammatory mediators. Other conditions such as central pontine myelinolysis have about a third of patients recover and the other two-thirds experience varying degrees of disability. In some cases, such as transverse myelitis, the patient can begin recovery as early as 2 to 12 weeks after the onset of the condition. Epidemiology Incidence of demyelinating diseases varies by disorder. Some conditions, such as tabes dorsalis appear predominantly in males and begin in midlife. Optic neuritis, though, occurs preferentially in females typically between the ages of 30 and 35. Other conditions such as multiple sclerosis vary in prevalence depending on the country and population. This condition can appear in children and adults. Research Much of the research conducted on demyelinating diseases is targeted towards discovering the mechanisms by which these disorders function in an attempt to develop therapies and treatments for individuals affected by these conditions. For example, proteomics has revealed several proteins which contribute to the pathophysiology of demyelinating diseases. For example, COX-2 has been implicated in oligodendrocyte death in animal models of demyelination. The presence of myelin debris has been correlated with damaging inflammation as well as poor regeneration, due to the presence of inhibitory myelin components. N-cadherin is expressed in regions of active remyelination and may play an important role in generating a local environment conducive to remyelination. 
N-cadherin agonists have been identified and observed to stimulate neurite growth and cell migration, key aspects of promoting axon growth and remyelination after injury or disease. Immunomodulatory drugs such as fingolimod have been shown to reduce immune-mediated damage to the CNS, preventing further damage in patients with MS. The drug targets the role of macrophages in disease progression. Manipulating thyroid hormone levels may become a viable strategy to promote remyelination and prevent irreversible damage in MS patients. It has also been shown that intranasal administration of apotransferrin (aTf) can protect myelin and induce remyelination. Finally, electrical stimulation which activates neural stem cells may provide a method by which regions of demyelination can be repaired. In other animals Demyelinating diseases/disorders have been found worldwide in various animals. Some of these animals include mice, pigs, cattle, hamsters, rats, sheep, Siamese kittens, and a number of dog breeds (including Chow Chow, Springer Spaniel, Dalmatian, Samoyed, Golden Retriever, Lurcher, Bernese Mountain Dog, Vizsla, Weimaraner, Australian Silky Terrier, and mixed breeds). See also Degenerative disease Multiple sclerosis borderline The Lesion Project (multiple sclerosis) The Myelin Project Myelin Repair Foundation References External links Neurological disorders Myelin disorders
Cerebral edema
Cerebral edema is excess accumulation of fluid (edema) in the intracellular or extracellular spaces of the brain. This typically causes impaired nerve function, increased pressure within the skull, and can eventually lead to direct compression of brain tissue and blood vessels. Symptoms vary based on the location and extent of edema and generally include headaches, nausea, vomiting, seizures, drowsiness, visual disturbances, dizziness, and in severe cases, death. Cerebral edema is commonly seen in a variety of brain injuries including ischemic stroke, subarachnoid hemorrhage, traumatic brain injury, subdural, epidural, or intracerebral hematoma, hydrocephalus, brain cancer, brain infections, low blood sodium levels, high altitude, and acute liver failure. Diagnosis is based on symptoms and physical examination findings and confirmed by serial neuroimaging (computed tomography scans and magnetic resonance imaging). The treatment of cerebral edema depends on the cause and includes monitoring of the person's airway and intracranial pressure, proper positioning, controlled hyperventilation, medications, fluid management, steroids. Extensive cerebral edema can also be treated surgically with a decompressive craniectomy. Cerebral edema is a major cause of brain damage and contributes significantly to the mortality of ischemic strokes and traumatic brain injuries. As cerebral edema is present with many common cerebral pathologies, the epidemiology of the disease is not easily defined. The incidence of this disorder should be considered in terms of its potential causes and is present in most cases of traumatic brain injury, central nervous system tumors, brain ischemia, and intracerebral hemorrhage. For example, malignant brain edema was present in roughly 31% of people with ischemic strokes within 30 days after onset. Signs and symptoms The extent and severity of the symptoms of cerebral edema depend on the exact etiology but are generally related to an acute increase of the pressure within the skull. As the skull is a fixed and inelastic space, the accumulation of cerebral edema can displace and compress vital brain tissue, cerebral spinal fluid, and blood vessels, according to the Monro–Kellie doctrine. Increased intracranial pressure (ICP) is a life-threatening surgical emergency marked by symptoms of headache, nausea, vomiting, decreased consciousness. Symptoms are frequently accompanied by visual disturbances such as gaze paresis, reduced vision, and dizziness. Increased pressures within the skull can cause a compensatory elevation of blood pressure to maintain cerebral blood flow, which, when associated with irregular breathing and a decreased heart rate, is called the Cushing reflex. The Cushing reflex often indicates compression of the brain on brain tissue and blood vessels, leading to decreased blood flow to the brain and eventually death. Causes Cerebral edema is frequently encountered in acute brain injuries from a variety of origins, including but not limited to: Traumatic brain injury Stroke Tumors Infections (such as a brain abscess or meningitis) Hepatic encephalopathy Posterior reversible encephalopathy syndrome Radiation-induced brain edema Post-surgical changes Amyloid-related imaging abnormalities associated with edema (ARIA-E) Hyponatremia High-altitude cerebral edema Risk factors Cerebral edema is present with many common cerebral pathologies and risk factors for development of cerebral edema will depend on the cause. 
The following were reliable predictors for development of early cerebral edema in ischemic strokes. Younger age Higher severity of symptoms on the National Institutes of Health Stroke Scale Signs of current ischemia on clinical exam Decreased level of consciousness Hyper dense artery sign and larger affected area on CT imaging Higher blood glucose Classification Cerebral edema has been traditional classified into two major sub-types: cytotoxic and vasogenic cerebral edema. This simple classification helps guide medical decision making and treatment of patients affected with cerebral edema. There are, however, several more differentiated types including but not limited to interstitial, osmotic, hydrostatic, and high altitude associated edema. Within one affected person, many individual sub-types can be present simultaneously. The following individual sub-types have been identified: Cytotoxic In general, cytotoxic edema is linked to cell death in the brain through excessive cellular swelling. During cerebral ischemia for example, the blood–brain barrier remains intact but decreased blood flow and glucose supply leads to a disruption in cellular metabolism and creation of energy sources, such as adenosine triphosphate (ATP). Exhaustion of energy sources impairs functioning of the sodium and potassium pump in the cell membrane, leading to cellular retention of sodium ions. Accumulation of sodium in the cell causes a rapid uptake of water through osmosis, with subsequent swelling of the cells. The ultimate consequence of cytotoxic edema is the oncotic death of neurons. The swelling of the individual cells of the brain is the main distinguishing characteristic of cytotoxic edema, as opposed to vasogenic edema, wherein the influx of fluid is typically seen in the interstitial space rather than within the cells themselves. Researchers have proposed that "cellular edema" may be more preferable to the term "cytotoxic edema" given the distinct swelling and lack of consistent "toxic" substance involved. There are several clinical conditions in which cytotoxic edema is present: Commonly caused by traumatic brain injuries, intracerebral hemorrhage, and the early phase of ischemic stroke. Also seen in acute liver failure where toxic waste, most notably ammonia, accumulates in the blood stream and crosses the blood–brain barrier. Hyperammonemia in central nervous system (CNS) cells causes oxidative stress and mitochrondrial dysfunction, leading to astrocytic cell swelling. Additionally, ammonia is converted to glutamine in CNS cells which acts as an osmolyte and draws further water into the cell through osmosis. Cerebral edema occurs most commonly in conjunction with a rapid rise in ammonia levels. Toxic exposures to methionine sulfoximine, cuprizone, isoniazid, triethyltin, hexachlorophene, and hydrogen cyanide have been associated with cytotoxic edema and swelling of astrocytic cells. Hypoxia, anoxia can lead to cytotoxic edema through several mechanisms Vasogenic Extracellular brain edema, or vasogenic edema, is caused by an increase in the permeability of the blood–brain barrier. The blood–brain barrier consists of astrocytes and pericytes joined with adhesion proteins producing tight junctions. Return of blood flow to these cells after an ischemic stroke can cause excitotoxicity and oxidative stress leading to dysfunction of the endothelial cells and disruption of the blood-brain barrier. 
The breakdown of the tight endothelial junctions that make up the blood–brain barrier causes extravasation of fluid, ions, and plasma proteins, such as albumin, into the brain parenchyma. Accumulation of extracellular fluid increases brain volume and then intracranial pressure, causing the symptoms of cerebral edema. There are several clinical conditions in which vasogenic edema is present: CNS tumors, like glioblastoma and meningioma Infections like meningitis, abscess, and encephalitis Inflammatory central nervous system disease such as multiple sclerosis Brain hemorrhage Traumatic brain injuries can lead to increased intracranial pressure, local damage, reduced cerebral blood flow, and focal ischemia secondary to vasogenic edema. Late stage of ischemic stroke after rapid recovery from cytotoxic edema Hypertensive encephalopathy Radiation injury Ionic (Osmotic) In ionic edema, the solute concentration (osmolality) of the brain exceeds that of the plasma and the abnormal pressure gradient leads to accumulation of water in the brain parenchyma through the process of osmosis. The blood-brain barrier is intact and maintains the osmotic gradient. The solute concentration of the blood plasma can be diluted by several mechanisms: Improper administration of intravenous fluids, isotonic or hypotonic. Excessive water intake, syndrome of inappropriate antidiuretic hormone (SIADH). Rapid reduction of blood glucose in diabetic ketoacidosis or hyperosmolar hyperglycemic state. Hemodialysis has been associated with ionic edema and cellular swelling. Cerebral edema is a potentially life-threatening complication of severely decreased sodium ion concentration in the blood (hyponatremia). Ionic brain edema can also occur around the sites of brain hemorrhages, infarcts, or contusions due to a local plasma osmolality pressure gradient when compared to the high osmolality in the affected tissue. Interstitial (hydrocephalic) Interstitial edema is best exemplified by noncommunicating hydrocephalus, in which there is an obstruction to the outflow of cerebrospinal fluid within the ventricular system. The obstruction creates a rise in the intraventricular pressure and causes CSF to flow through the wall of the ventricles into the extracellular fluid within the brain. The fluid has roughly the same composition as CSF. Other causes of interstitial edema include but are not limited to communicating hydrocephalus and normal pressure hydrocephalus. Hydrostatic Hydrostatic extracellular brain edema is typically caused by severe arterial hypertension. A difference in the hydrostatic pressure within the arterial system relative to the endothelial cells allows ultrafiltration of water, ions, and low molecular weight substances (such as glucose, small amino acids) into the brain parenchyma. The blood–brain barrier is usually intact, and the extent of the edema depends on the arterial pressure. The regulatory processes of the brain circulation can function up to systolic arterial pressures of 150 mm Hg and will have impaired function at higher blood pressures. Combined types of cerebral edema Cytotoxic, osmotic, and vasogenic edema exist on a continuum. The mechanisms causing cerebral edema often overlap between these types. In most instances, cytotoxic and vasogenic edema occur together. When the two edema types evolve simultaneously, the damage of one type reaches a limit and will bring about the other type of injury. 
For example, when cytotoxic edema occurs in the endothelial cells of the blood–brain barrier, oncotic cell death contributes to loss of integrity of the blood–brain barrier and promotes the progression to vasogenic edema. When brain edema types are combined, there is typically a primary form and the edema type and context of the cause must be determined in order to start appropriate medical or surgical therapy. The use of specific MRI techniques has allowed for some differentiation between the mechanisms. Subtypes High-altitude cerebral edema If not properly acclimatized to high altitude, a person may be negatively affected by the lower oxygen concentration available. These hypoxia-related illnesses include acute mountain sickness (AMS), high-altitude pulmonary edema, and high-altitude cerebral edema (HACE). High-altitude cerebral edema is a severe and sometimes fatal form of altitude sickness that results from capillary fluid leakage due to the effects of hypoxia on the mitochondria-rich endothelial cells of the blood–brain barrier. The edema can be characterized by vasogenic cerebral edema with symptoms of impaired consciousness and truncal ataxia. Altitude-related illnesses can be prevented most effectively with slow ascent to high altitudes, an average ascent of 300 to 500 meters per day is recommended. Pharmacological prophylaxis with acetazolamide or corticosteroids can be used in non pre-acclimatized individuals. If symptoms of high-altitude cerebral edema do not resolve or worsen, immediate descent is necessary, and symptoms can be improved with administration of dexamethasone. Amyloid-related imaging abnormalities – edema Amyloid-related imaging abnormalities (ARIA) are abnormal differences seen in neuroimaging of Alzheimer's disease patients given targeted amyloid-modifying therapies. Human monoclonal antibodies such as aducanumab, solanezumab, and bapineuzumab have been associated with these neuroimaging changes and additionally, cerebral edema. These therapies are associated with dysfunction of the tight endothelial junctions of the blood-brain barrier, leading to vasogenic edema as described above. In addition to edema, these therapies are associated with microhemorrhages in the brain known as ARIA-H. Familiarity with ARIA can aid radiologists and clinicians in determining optimal management for those affected. Posterior reversible encephalopathy syndrome Posterior reversible encephalopathy syndrome (PRES) is a rare clinical disease characterized by cerebral edema. The exact pathophysiology, or cause, of the syndrome is still debated but is hypothesized to be related to the disruption of the blood-brain barrier. The syndrome features acute neurological symptoms and reversible subcortical vasogenic edema predominantly involving the parieto-occipital areas on MR imaging. PRES in general has a benign course, but PRES-related intracranial hemorrhage has been associated with a poor prognosis. Idiopathic delayed-onset edema Deep brain stimulation (DBS) is effective treatment for several neurological and psychiatric disorders, most notably Parkinson's disease. DBS is not without risks and although rare, idiopathic delayed-onset edema (IDE) surrounding the DBS leads have been reported. Symptoms can be mild and nonspecific, including reduction of the stimulation effect, and can be confused for other causes of edema. Thus, imaging is recommended to rule out other causes. The condition is generally self-limiting and the exact mechanism of the cause is unexplained. 
Early identification can help persons affected avoid unnecessary surgical procedures or antibiotic treatments. Massive brain swelling after cranioplasty Decompressive craniectomy is frequently performed in cases of resistant intracranial hypertension secondary to several neurological conditions and is commonly followed by cranioplasty. Complications such as infection and hematomas after cranioplasty occur in roughly a third of cases. Massive brain swelling after cranioplasty (MSBC) is a rare and potentially fatal complication of an uneventful cranioplasty that has recently been elucidated. Preoperative sinking skin flap (SSF) and intracranial hypotension were factors associated with the development of MSBC after cranioplasty. Data suggest that pathologic changes are triggered immediately following the procedure, especially an acute increase in intracranial pressure. Radiation-induced brain edema With the rise of sophisticated treatment modalities such as gamma knife, Cyberknife, and intensity-modulated radiotherapy, a large number of individuals with brain tumors are treated with radiosurgery and radiotherapy. Radiation-induced brain edema (RIBE) is a potentially life-threatening complication of brain tissue radiation and is characterized by radiation necrosis, endothelial cell dysfunction, increased capillary permeability, and breakdown of the blood–brain barrier. Symptoms include headache, seizure, psychomotor slowing, irritability, and focal neurological deficits. Options for management of RIBE are limited and include corticosteroids, antiplatelet drugs, anticoagulants, hyperbaric oxygen therapy, multivitamins, and bevacizumab. Brain tumor-associated cerebral edema This kind of cerebral edema is a significant cause of morbidity and mortality in patients with brain tumors and is characterized by disruption of the blood–brain barrier and vasogenic edema. The exact mechanism is unclear, but it is hypothesized that cancerous glial cells (glioma) of the brain can increase secretion of vascular endothelial growth factor (VEGF), which weakens the tight junctions of the blood–brain barrier. Historically, corticosteroids such as dexamethasone were used to reduce brain tumor-associated vascular permeability through poorly understood mechanisms, and their use was associated with systemic side effects. Agents that target the VEGF signaling pathways, such as cediranib, have been promising in prolonging survival in rat models but are associated with local and systemic side effects as well. Diagnosis Cerebral edema is commonly present in a variety of neurological injuries. Thus, determining a definitive contribution of cerebral edema to the neurological status of an affected person can be challenging. Close bedside monitoring of a person's level of consciousness and awareness of any new or worsening focal neurological deficits is imperative but demanding, frequently requiring admission into the intensive care unit (ICU). Cerebral edema with sustained intracranial hypertension and brain herniation can signify impending catastrophic neurological events which require immediate recognition and treatment to prevent injury and even death. Therefore, earlier diagnosis of cerebral edema with rapid intervention can improve clinical outcomes and reduce mortality, or the risk of death. 
Diagnosis of cerebral edema relies on the following: Imaging Serial neuroimaging (CT scans and magnetic resonance imaging) can be useful in diagnosing or excluding intracranial hemorrhage, large masses, acute hydrocephalus, or brain herniation as well as providing information on the type of edema present and the extent of affected area. CT scan is the imaging modality of choice as it is widely available, quick, and with minimal risks. However, CT scan can be limited in determining the exact cause of cerebral edema in which cases, CT angiography (CTA), MRI, or digital subtraction angiography (DSA) may be necessary. MRI is particularly useful as it can differentiate between cytotoxic and vasogenic edema, guiding future treatment decisions. Intracranial pressure monitoring Intracranial pressure (ICP) and its management is a fundamental concept in traumatic brain injury (TBI). The Brain Trauma Foundation guidelines recommend ICP monitoring in individuals with TBI that have decreased Glasgow Coma Scale (GCS) scores, abnormal CT scans, or additional risk factors such as older age and elevated blood pressure. However, no such guidelines exist for ICP monitoring in other brain injuries such as ischemic stroke, intracerebral hemorrhage, cerebral neoplasm. Clinical researches have recommended ICP and cerebral perfusion pressure (CPP) monitoring in any persons with cerebral injury who are at risk of elevated intracranial pressure based on clinical and neuroimaging features. Early monitoring can be used to guide medical and surgical decision making and to detect potentially life-threatening brain herniation. There was however, conflicting evidence on the threshold values of ICP that indicated the need for intervention. Researchers also recommend that medical decisions should be tailored to the specific diagnosis (e.g. subarachnoid hemorrhage, TBI, encephalitis) and that ICP elevation should be used in conjunction with clinical and neuroimaging and not as an isolated prognostic marker. Treatment The primary goal in cerebral edema is to optimize and regulate cerebral perfusion, oxygenation, and venous drainage, decrease cerebral metabolic demands, and to stabilize the osmolality pressure gradient between the brain and the surrounding vasculature. As cerebral edema is linked to increased intracranial pressure (ICP), many of the therapies will focus on ICP. General measures for managing cerebral edema Positioning Finding the optimal head position in persons with cerebral edema is necessary to avoid compression of the jugular vein and obstruction of venous outflow from the skull, and for decreasing cerebrospinal fluid hydrostatic pressure. The current recommendation is to elevate the head of the bed to 30 degrees to optimize cerebral perfusion pressure and control the increase in intracranial pressure. It is also worth noting that measures should be taken to reduce restrictive neck dressings or garments as these may lead to compression of the internal jugular veins and reduce venous outflow. Ventilation and oxygenation Decreased oxygen concentration in the blood, hypoxia, and increase in the carbon dioxide concentration in the blood, hypercapnia, are potent vasodilators in the cerebral vasculature, and should be avoided in those with cerebral edema. It is recommended that persons with decreased levels of consciousness be intubated for airway protection and maintenance of oxygen and carbon dioxide levels. 
However, the laryngeal instrumentation involved in the intubation process is associated with an acute, brief rise in intracranial pressure. Pretreatment with a sedative agent and neuromuscular blocking agent to induce unconsciousness and motor paralysis has been recommended as part of standard Rapid Sequence Intubation (RSI). Intravenous lidocaine prior to RSI has been suggested to reduce the rise in ICP but there is no supporting data at this time. Additionally, ventilation with the use of positive end-expiratory pressure (PEEP) can improve oxygenation with the negative effect of decreasing cerebral venous drainage and increasing intracranial pressure (ICP), and thus must be used with caution. Fluid management and cerebral perfusion Maintenance of cerebral perfusion pressure using appropriate fluid management is essential in patients with brain injury. Dehydration, or intravascular volume loss, and the use of hypotonic fluids, such as D5W or half normal saline, should be avoided. Blood serum ion concentration, or osmolality, should be maintained in the normo- to hyperosmolar range. Judicious use of hypertonic saline can increase serum osmolality and decrease cerebral edema, as discussed below. Blood pressure should be sufficient so as to sustain cerebral perfusion pressures greater than 60 mm Hg for optimal blood flow to the brain. Vasopressors may be used to achieve adequate blood pressures with minimal risk of increasing intracranial pressures. However, sharp rises in blood pressure should be avoided. Maximum blood pressures tolerated are variable and controversial depending on the clinical situation. Seizure prophylaxis Seizures, including subclinical seizure activity, can complicate clinical courses and increase progression of brain herniation in persons with cerebral edema and increased intracranial pressure. Anticonvulsants can be used to treat seizures caused by acute brain injuries from a variety of origins. However, there are no clear guidelines on the prophylactic use of anticonvulsants. Their use may be warranted depending on the clinical scenario, and studies have shown that anticonvulsants such as phenytoin can be given prophylactically without a significant increase in drug-related side effects. Fever Fever has been demonstrated to increase metabolism and oxygen demand in the brain. The increased metabolic demand results in an increase in cerebral blood flow and can increase the intracranial pressure within the skull. Therefore, maintaining a stable body temperature within the normal range is strongly recommended. This can be achieved through the use of antipyretics such as acetaminophen (paracetamol) and cooling the body, as described below. Hyperglycemia Elevated blood glucose levels, known as hyperglycemia, can exacerbate brain injury and cerebral edema and have been associated with worse clinical outcomes in persons affected by traumatic brain injuries, subarachnoid hemorrhages, and ischemic strokes. 
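Several of the general measures above are steered by the cerebral perfusion pressure target of greater than 60 mm Hg mentioned under fluid management. Cerebral perfusion pressure is conventionally calculated as mean arterial pressure minus intracranial pressure; that relationship is standard bedside practice rather than something stated in this article, so the following sketch should be read as a hedged illustration, not as the article's method, and the function names are invented for the example.

```python
def cerebral_perfusion_pressure(map_mmhg: float, icp_mmhg: float) -> float:
    """Cerebral perfusion pressure (CPP) as mean arterial pressure minus
    intracranial pressure, in mm Hg (standard definition, assumed here)."""
    return map_mmhg - icp_mmhg

def meets_perfusion_target(map_mmhg: float, icp_mmhg: float,
                           target_mmhg: float = 60.0) -> bool:
    """True when the computed CPP exceeds the 60 mm Hg target quoted above."""
    return cerebral_perfusion_pressure(map_mmhg, icp_mmhg) > target_mmhg

# Example: a MAP of 85 mm Hg with an ICP of 20 mm Hg gives CPP = 65 mm Hg.
print(cerebral_perfusion_pressure(85, 20), meets_perfusion_target(85, 20))
```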
Sedation Pain and agitation can worsen cerebral edema, acutely increase intracranial pressure (ICP), and should be controlled. Pain medications such as morphine or fentanyl can be used carefully for analgesia. For those persons with decreased levels of consciousness, sedation is necessary for endotracheal intubation and maintenance of a secure airway. Sedative medications used in the intubation process, specifically propofol, have been shown to control ICP, decrease cerebral metabolic demand, and have antiseizure properties. Due to its short half-life, propofol is a quick-acting medication whose administration and removal are well tolerated, with hypotension being the limiting factor in its continued use. Additionally, the use of nondepolarizing neuromuscular blocking agents (NMBA), such as doxacurium or atracurium, has been indicated to facilitate ventilation and manage brain injuries, but there are no controlled studies on the use of NMBAs in the management of increased intracranial pressure. Depolarizing neuromuscular blocking agents, most notably succinylcholine, can worsen increased ICP due to induction of muscle contraction within the body. Nutrition Nutritional support is necessary in all patients with acute brain injury. Enteral feeding, delivered into the gastrointestinal tract via a tube, is the preferred method, unless contraindicated. Additional attention must be placed on the solute concentration of the formulations to avoid free water intake, decreased serum osmolality, and worsening of the cerebral edema. Elevated blood glucose, or hyperglycemia, is associated with increased edema in patients with cerebral ischemia and increases the risk of a hemorrhagic transformation of ischemic stroke. Maintaining a normal blood glucose level of less than 180 mg/dL is suggested. However, tight glycemic control of blood glucose under 126 mg/dL is associated with worsening of stroke size. Specific measures Although cerebral edema is closely related to increased intracranial pressure (ICP) and cerebral herniation, and the general treatment strategies above are useful, the treatment should ultimately be tailored to the primary cause of the symptoms. The management of individual diseases is discussed separately. The following interventions are more specific treatments for managing cerebral edema and increased ICP: Osmotic therapy The goal of osmotic therapy is to create a higher concentration of ions within the vasculature at the blood–brain barrier. This will create an osmotic pressure gradient and will cause the flow of water out of the brain and into the vasculature for drainage elsewhere. An ideal osmotic agent produces a favorable osmotic pressure gradient, is nontoxic, and is not filtered out by the blood–brain barrier. Hypertonic saline and mannitol are the main osmotic agents in use, while loop diuretics can aid in the removal of the excess fluid pulled out of the brain. Hypertonic saline is a highly concentrated solution of sodium chloride in water and is administered intravenously. It has a rapid onset, with reduction of pressures within 5 minutes of infusion, lasting up to 12 hours in some cases, and with negligible rebound pressure. The exact volume and concentration of the hypertonic saline varies between clinical studies. Bolus doses, particularly at higher concentrations, for example 23.4%, are effective at reducing ICP and improving cerebral perfusion pressure. In traumatic brain injuries, a responsiveness to hypertonic saline lasting greater than 2 hours was associated with decreased chance of death and improved neurologic outcomes. The effects of hypertonic saline can be prolonged in combination with agents such as dextran or hydroxyethyl starch, although their use is currently controversial. When compared to mannitol, hypertonic saline has been shown to be as effective as mannitol in decreasing ICP in neurocritical care and is more effective in many cases. Hypertonic saline may be preferable to mannitol in persons with hypovolemia or hyponatremia. 
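The effect of an osmotic agent on the normo- to hyperosmolar serum target mentioned under fluid management is usually tracked with a routine laboratory estimate of serum osmolality, commonly 2 x sodium + glucose/18 + blood urea nitrogen/2.8 in conventional US units. That formula is common clinical shorthand and is not given in this article, so the following is a minimal, assumption-laden sketch rather than a description of the article's approach.

```python
def estimated_serum_osmolality(na_mmol_l: float, glucose_mg_dl: float,
                               bun_mg_dl: float) -> float:
    """Widely used bedside estimate of serum osmolality in mOsm/kg:
    2*[Na+] + glucose/18 + BUN/2.8, with glucose and BUN in mg/dL.
    An approximation only; it ignores unmeasured osmoles such as mannitol,
    so measured osmolality is preferred when mannitol is being given."""
    return 2 * na_mmol_l + glucose_mg_dl / 18 + bun_mg_dl / 2.8

# Example: Na 140 mmol/L, glucose 90 mg/dL, BUN 14 mg/dL -> about 290 mOsm/kg,
# i.e. within the normal range.
print(round(estimated_serum_osmolality(140, 90, 14), 1))
```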
Mannitol is an alcohol derivative of the simple sugar mannose, and is historically the most commonly used osmotic diuretic. Mannitol acts as an inert solute in the blood, decreasing ICP through osmosis as discussed above. Additionally, mannitol decreases ICP and increases cerebral perfusion pressure by increasing reabsorption of cerebrospinal fluid, dilutes the blood and decreases its viscosity, and can cause cerebral vasoconstriction. Furthermore, mannitol acts in a dose-dependent manner and will not lower ICP if it is not elevated. However, the common limitation of the use of mannitol is its tendency to cause low blood pressure (hypotension). Compared to hypertonic saline, mannitol may be more effective at increasing cerebral perfusion pressures and may be preferable in those with hypoperfusion. Loop diuretics, commonly furosemide, act within the kidney to increase excretion of water and solutes. Combination with mannitol produces a profound diuresis and increases the risk of systemic dehydration and hypotension. Their use remains controversial. Acetazolamide, a carbonic anhydrase inhibitor, acts as a weak diuretic and modulates CSF production but has no role in the management of cerebral edema from acute brain injuries. It can be used in the outpatient management of cerebral edema caused by idiopathic intracranial hypertension (pseudotumor cerebri). Glucocorticoids Glucocorticoids, such as dexamethasone, have been shown to decrease tight-junction permeability and stabilize the blood-brain barrier. Their main use has been in the management of vasogenic cerebral edema associated with brain tumors, brain irradiation, and surgical manipulation. Glucocorticoids have not been shown to have any benefit in ischemic stroke and have been found to be harmful in traumatic brain injury. Due to the negative side effects (such as peptic ulcers, hyperglycemia, and impairment of wound healing), steroid use should be restricted to cases where it is absolutely indicated. Hyperventilation As mentioned previously, hypoxia and hypercapnia are potent vasodilators in the cerebral vasculature, leading to increased cerebral blood flow (CBF) and worsening of cerebral edema. Conversely, therapeutic hyperventilation can be used to lower the carbon dioxide content in the blood and reduce ICP through vasoconstriction. The effects of hyperventilation, although effective, are short-lived, and once it is withdrawn, a rebound elevation of ICP often follows. Furthermore, overaggressive hyperventilation and vasoconstriction can lead to a severe reduction in CBF and cause cerebral ischemia, or strokes. As a result, standard practice is to slowly reverse hyperventilation while more definitive treatments aimed at the primary cause are instituted. Prolonged hyperventilation in those with traumatic brain injuries has been shown to worsen outcomes. Barbiturates Induction of a coma via the use of barbiturates, most notably pentobarbital and thiopental, after brain injury is used for secondary treatment of refractory ICP. Yet their use is not without controversy, and it is not clear whether barbiturates are favored over surgical decompression. In patients with traumatic brain injuries, barbiturates are effective in reducing ICP but have failed to show benefit to clinical outcomes. Evidence for their use in other cerebral diseases, including tumors, intracranial hypertension, and ischemic stroke, is limited. 
There are several adverse effects of barbiturates that limit their use, such as lowering of systemic blood pressure and cerebral perfusion pressure, cardiodepression, immunosuppression, and systemic hypothermia. Hypothermia As discussed previously in the treatment of fever, temperature control has been shown to decrease metabolic demand and reduce further ischemic injury. In traumatic brain injury, induced hypothermia may reduce the risks of mortality and poor neurologic outcome in adults. However, outcomes varied greatly with depth and duration of hypothermia as well as rewarming procedures. In children with traumatic brain injury, there was no benefit to therapeutic hypothermia, and it increased the risk of mortality and arrhythmia. The adverse effects of hypothermia are serious and require clinical monitoring; they include an increased chance of infection, coagulopathy, and electrolyte derangement. The current consensus is that the adverse effects outweigh the benefits, and its use is restricted to clinical trials and to increased ICP refractory to other therapies. Surgery The Monro–Kellie doctrine states that the skull is a fixed and inelastic space and the accumulation of edema will compress vital brain tissue and blood vessels. Surgical treatment of cerebral edema in the context of cerebellar or cerebral infarction is typically done by removing part of the skull to allow expansion of the dura. This will help to reduce the volume constraints inside of the skull. A decompressive hemicraniectomy is the most commonly used procedure. Multiple randomized clinical trials have shown reduced risk of death with hemicraniectomy compared with medical management. However, no individual study has shown an improvement in the percentage of survivors with good functional outcomes. Timing of decompressive craniectomy remains controversial, but it is generally suggested that the surgery is best performed before there are clinical signs of brainstem compression. Postoperative complications include wound dehiscence, hydrocephalus, and infection, and a substantial proportion of patients may also require tracheostomy and gastrostomy in the early phase after surgery. Outcomes Cerebral edema is a severe complication of acute brain injuries, most notably ischemic stroke and traumatic brain injuries, and a significant cause of morbidity and mortality. Cerebral edema is the cause of death in 5% of all patients with cerebral infarction, and mortality after large ischemic strokes with cerebral edema is roughly 20 to 30% despite medical and surgical interventions. Cerebral edema usually occurs between the second and fifth day after onset of symptoms. Large territory ischemic strokes can lead to the rapid development of malignant brain edema and increased intracranial pressure. Cerebral edema in the context of a malignant middle cerebral artery (MCA) infarct has a mortality of 50 to 80% if treated conservatively. Individuals with cerebral edema had a worse 3-month functional outcome than those without edema. These effects were more pronounced with increasing extent of cerebral edema and were independent of the size of the infarct. Mild traumatic brain injury (TBI) represents 70–90% of all reported head injuries. The presence of brain edema on the initial CT scan of those with traumatic brain injuries is an independent prognostic indicator of in-hospital death. The association of brain edema with increased in-hospital risk of death was observed in TBI across all levels of severity. 
Edema in the acute and chronic phases was associated with a worse neurologic and clinical outcome. Children with TBI and cerebral edema have worse clinical outcomes as well. Epidemiology As cerebral edema is present with many common cerebral pathologies, the epidemiology of the disease is not easily defined. The incidence of this disorder should be considered in terms of its potential causes and is present in most cases of traumatic brain injury, central nervous system tumors, brain ischemia, and intracerebral hemorrhage. In one study, cerebral edema was found in 28% of those individuals with thrombolysis-treated ischemic strokes, 10% of which occurred in severe forms. A further study detected cerebral edema in 22.7% of cerebral ischemic strokes. A meta-analysis of current studies showed that 31% of those affected by ischemic strokes developed cerebral edema. In traumatic brain injuries, cerebral edema occurred in greater than 60% of those with mass lesions, and in 15% of those with initial normal CT scans. Research The current understanding of the pathophysiology of cerebral edema after traumatic brain injury or intracerebral hemorrhage is incomplete. Current treatment therapies aimed at cerebral edema and increased intracranial pressure are effective at reducing intracranial hypertension but have unclear impacts on functional outcomes. Additionally, cerebral edema and ICP treatments have varied effects on individuals based on differing characteristics like age, gender, type of injury, and genetics. There are innumerable molecular pathways that contribute to cerebral edema, many of which have yet to be discovered. Researchers argue that the future treatment of cerebral edema will be based on advances in identifying the underlying pathophysiology and molecular characteristics of cerebral edema in a variety of cases. At the same time, improvement of radiographic markers, biomarkers, and analysis of clinical monitoring data is essential in treating cerebral edema. Many studies of the mechanical properties of brain edema were conducted in the 2010s, most of them based on finite element analysis (FEA), a widely used numerical method in solid mechanics. For example, Gao and Ang used the finite element method to study changes in intracranial pressure during craniotomy operations. A second line of research on the condition looks at thermal conductivity, which is related to tissue water content. See also Intracranial pressure Edema Amyloid-related imaging abnormalities References External links MedPix Vasogenic Edema Cerebrum Body water
AL amyloidosis
Amyloid light-chain (AL) amyloidosis, also known as primary amyloidosis, is the most common form of systemic amyloidosis. The disease is caused when a person's antibody-producing cells do not function properly and produce abnormal protein fibers made of components of antibodies called light chains. These light chains come together to form amyloid deposits which can cause serious damage to different organs. An abnormal light chain in urine is known as Bence Jones protein. Signs and symptoms AL amyloidosis can affect a wide range of organs, and consequently present with a range of symptoms. Non-specific symptoms may include fatigue and weight loss. The kidneys are commonly affected in systemic AL amyloidosis with 60–70% of people having kidney involvement. Symptoms of kidney disease and kidney failure can include fluid retention, swelling, and shortness of breath. Other manifestations of kidney involvement may include protein loss in the urine, low albumin levels in the blood and secondary hyperlipidemia (nephrotic syndrome). Kidney damage in AL amyloidosis may progress to end stage disease requiring dialysis. 70–80% of those with AL amyloidosis have heart involvement, and heart involvement is the leading cause of death. Heart complications, include heart failure and irregular heart beat. Early heart involvement in AL amyloidosis may present as low voltage electrical rhythms on an electrocardiograph, concentric left ventricular hypertrophy and diastolic dysfunction. A person may progress to overt heart failure due to cardiomyopathy as amyloid fibril deposition in the heart muscle progresses. Further signs of cardiac involvement in Al amyloidosis include heart arrhythmias (bradycardia, ventricular tachycardia) which may necessitate pacemaker or implantable defibrillator placement and reduced contractility of the atria, with the associated risk of atrial blood clots. AL amyloidosis may also cause nerve damage (neuropathy) which may present as pain, discomfort or loss of sensation in the extremities in instances of peripheral neuropathy or gastrointestinal motility disorders, difficulties regulating blood pressure with changes in position or neurogenic bladder in instances of dysfunction of the autonomic nervous system. Other organ systems that may be involved include gastrointestinal tract, blood, lungs and skin. Other symptoms can include stroke, gastrointestinal disorders, enlarged liver, diminished spleen function, diminished function of the adrenal and other endocrine glands, skin color change or growths, lung problems, or bleeding and bruising problems. An enlarged tongue, or macroglossia, is sometimes seen in AL amyloidosis. Causes AL amyloidosis is caused by the deposition of abnormal antibody free light chains. The abnormal light chains are produced by monoclonal plasma cells, and, although AL amyloidosis can occur without diagnosis of another disorder, it is often associated with other plasma cell disorders, such as multiple myeloma and Waldenström's macroglobulinemia. About 10% to 15% of patients with multiple myeloma may develop overt AL amyloidosis. AL amyloidosis is never hereditary. Diagnosis Diagnosis of AL amyloidosis requires identification of amyloid deposits within a tissue sample and confirmation of a plasma cell disorder. Both blood and urine can be tested for the light chains which form amyloid deposits, however the diagnosis requires a sample of an amyloid deposit. 
A fine needle aspiration (FNA) may be done of the abdominal fat pad (which commonly contains amyloid deposits) to aid in the diagnosis of AL amyloidosis. The abdominal fat pad is much more easily accessed for biopsy than the target organs affected by amyloid (such as the heart or kidneys), and confirmation of amyloid light chain deposits in the abdominal fat pad is used as a diagnostic surrogate of amyloid deposits in other organs when combined with imaging. FNA of the abdominal fat pad shows amyloid deposits in 70-75% of cases of suspected AL amyloidosis and diagnoses 85% of cases when combined with a bone marrow biopsy. Other peripheral areas such as the salivary glands, gingiva, rectum or skin may also be biopsied; however, in some cases a biopsy of the target organ may be needed. On microscopic exam of biopsy specimens, amyloid deposits appear green ("apple-green birefringence") when stained with Congo red dye and viewed under polarized light. Disordered plasma cells with a monoclonal protein product (immunoglobulin light chains) are confirmed in AL amyloidosis using serum or urine protein electrophoresis, immunoglobulin free light-chain assays or identification of lambda- or kappa-restricted plasma cells on a bone marrow biopsy. The precursor protein that is forming the amyloid fibrils may be identified using immunohistochemical studies such as immunofluorescence or immunostaining, immunogold electron microscopy or mass spectrometry (which is not widely available). Mass spectrometry has a sensitivity of 88% and a specificity of 96% in identifying the precursor protein in AL amyloidosis. Cardiac involvement in AL amyloidosis may be assessed using echocardiography, cardiac magnetic resonance (cardiac MRI) or positron emission tomography (PET scan). Treatment The most effective treatment is autologous bone marrow transplantation with stem cell rescue. However, many patients are too weak to tolerate this approach. Other treatments can involve application of chemotherapy similar to that used in multiple myeloma, which targets the plasma cells responsible for producing the misfolded light chain proteins. The most widely used regimen, and first-line therapy for those ineligible for a stem cell transplant, is cyclophosphamide, bortezomib and dexamethasone (CyBorD) with daratumumab added. Daratumumab, a monoclonal antibody to CD38, a protein that is expressed on plasma cells, was approved in the US and EU for AL amyloidosis in 2021. CyBorD with daratumumab achieves a very good partial hematologic response or better in 78% of patients, as well as a 55% organ response rate (reductions in organ damage) at 18 months, with the addition of daratumumab to CyBorD being associated with improved outcomes. CyBorD may be used alone, or bortezomib–melphalan–dexamethasone may be used in resource-limited settings where daratumumab is not available. Birtamimab and anselamimab are monoclonal antibodies which are currently undergoing trials. The two antibodies work by targeting the misfolded immunoglobulin light chains making up the amyloid fibrils and designating them for destruction by macrophages, thus degrading amyloid microfibril deposits. Supportive care in AL amyloidosis consists of salt restriction and diuretics in those with heart failure or kidney involvement. An angiotensin converting enzyme inhibitor (ACEi) or angiotensin receptor blocker (ARB) may be used in those with significant proteinuria due to kidney disease. 
Amiodarone or an implantable defibrillator is sometimes needed for those with cardiomyopathy due to AL amyloidosis who are at risk of ventricular arrhythmias. Those with AL amyloidosis and kidney disease may require dialysis if kidney involvement progresses. Prognosis Median survival for patients diagnosed with AL amyloidosis was 13 months in the early 1990s, but had improved to about 40 months a decade later, with 5-year survival rates also increasing from 15% in the 1980s to 48% in the mid-2010s. Heart involvement is associated with a worse prognosis. Epidemiology AL amyloidosis is a rare disease; only 1200 to 3200 new cases are reported each year in the United States, and between 500 and 600 in the UK. Two thirds of patients with AL amyloidosis are male and less than 5% of patients are under 40 years of age. See also Light chain deposition disease References External links Amyloidosis
Hematoma
A hematoma, also spelled haematoma, or blood suffusion, is localized bleeding outside of blood vessels, due to either disease or trauma including injury or surgery, and may involve blood continuing to seep from broken capillaries. A hematoma is benign and is initially in liquid form spread among the tissues, including in sacs between tissues, where it may coagulate and solidify before blood is reabsorbed into blood vessels. An ecchymosis is a hematoma of the skin larger than 10 mm. Hematomas may occur among and/or within many areas such as skin and other organs, connective tissues, bone, joints and muscle. A collection of blood (or even a hemorrhage) may be aggravated by anticoagulant medication (blood thinner). Blood seepage and collection of blood may occur if heparin is given via an intramuscular route; to avoid this, heparin must be given intravenously or subcutaneously. Signs and symptoms Some hematomas are visible under the surface of the skin (commonly called bruises) or possibly felt as masses or lumps. Lumps may be caused by the limitation of the blood to a sac, or to a subcutaneous or intramuscular tissue space isolated by fascial planes. This is a key anatomical feature that helps prevent injuries from causing massive blood loss. In most cases a hematoma as a sac of blood eventually dissolves; however, in some cases it may continue to grow due to blood seepage or show no change. If the sac of blood does not disappear, then it may need to be surgically cleaned out or repaired. The slow process of reabsorption of hematomas can allow the broken down blood cells and hemoglobin pigment to move in the connective tissue. For example, a patient who injures the base of their thumb might develop a hematoma, which will slowly move all through their finger within a week. Gravity is the main determinant of this process. Hematomas over joints can reduce mobility of a limb and present roughly the same symptoms as a fracture. In most cases, movement and exercise of the affected muscle is the best way to introduce the collection back into the bloodstream. A misdiagnosis of a hematoma in the vertebra can sometimes occur; this is correctly called a hemangioma (buildup of cells) or a benign tumor. Classification Types Subdermal hematoma (under the skin) Intramuscular hematoma (inside muscle tissue) Skull/brain: Subgaleal hematoma – between the galea aponeurosis and periosteum Cephalohematoma – between the periosteum and skull. Commonly caused by vacuum delivery and vertex delivery. Epidural hematoma – between the skull and dura mater Subdural hematoma – between the dura mater and arachnoid mater Subarachnoid hematoma – between the arachnoid mater and pia mater (the subarachnoid space) Othematoma – between the skin and the layers of cartilage of the ear Breast hematoma (breast) Perichondral hematoma (ear) Perianal hematoma (anus) Subungual hematoma (nail) Rectus sheath hematoma Degrees Petechiae – small pinpoint hematomas less than 3 mm in diameter Purpura (purple) – a bruise about 3–5 mm in diameter, generally round in shape Ecchymosis – subcutaneous extravasation of blood in a thin layer under the skin, i.e. bruising or "black and blue", over 1 cm in diameter Etymology The English word "hematoma" came into use in 1826. The word derives from the Greek αἷμα haima "blood" and -ωμα -oma, a suffix forming nouns indicating a mass or tumor. See also Metanephric dysplastic hematoma of the sacral region Welts References External links
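The size-based grading described under Degrees above amounts to a simple threshold rule. The sketch below is a minimal illustration in Python using only the diameters quoted in the text; the function name and the handling of the unlabelled 5–10 mm gap are assumptions made for illustration, not part of any clinical standard.

```python
def grade_skin_bleed(diameter_mm: float) -> str:
    """Classify a cutaneous bleed by diameter, using the thresholds given above.

    Thresholds from the text: petechiae < 3 mm, purpura about 3-5 mm,
    ecchymosis > 1 cm (10 mm). Sizes between 5 and 10 mm are not named in
    the text, so they are reported as unclassified here (an assumption).
    """
    if diameter_mm < 3:
        return "petechia"
    if diameter_mm <= 5:
        return "purpura"
    if diameter_mm > 10:
        return "ecchymosis"
    return "unclassified (5-10 mm; no label given in the text)"


if __name__ == "__main__":
    for d in (1, 4, 12):
        print(d, "mm ->", grade_skin_bleed(d))
```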
Heart failure
Heart failure (HF), also known as congestive heart failure (CHF), is a syndrome caused by an impairment in the heart's ability to fill with and pump blood. Although symptoms vary based on which side of the heart is affected, HF typically presents with shortness of breath, excessive fatigue, and bilateral leg swelling. The severity of heart failure is determined mainly by ejection fraction and is also measured by the severity of symptoms. Other conditions that have symptoms similar to heart failure include obesity, kidney failure, liver disease, anemia, and thyroid disease. Common causes of heart failure include coronary artery disease, heart attack, high blood pressure, atrial fibrillation, valvular heart disease, excessive alcohol consumption, infection, and cardiomyopathy. These cause heart failure by altering the structure or the function of the heart, or in some cases both. There are different types of heart failure: right-sided heart failure, which affects the right heart, left-sided heart failure, which affects the left heart, and biventricular heart failure, which affects both sides of the heart. Left-sided heart failure may be present with a reduced ejection fraction or with a preserved ejection fraction. Heart failure is not the same as cardiac arrest, in which blood flow stops completely due to the failure of the heart to pump. Diagnosis is based on symptoms, physical findings, and echocardiography. Blood tests and a chest x-ray may be useful to determine the underlying cause. Treatment depends on the severity and the underlying cause. For people with chronic, stable, or mild heart failure, treatment usually consists of lifestyle changes, such as not smoking, physical exercise, and dietary changes, as well as medications. In heart failure due to left ventricular dysfunction, angiotensin-converting-enzyme inhibitors, angiotensin II receptor blockers (ARBs), or angiotensin receptor-neprilysin inhibitors, along with beta blockers, mineralocorticoid receptor antagonists and SGLT2 inhibitors are recommended. Diuretics may also be prescribed to prevent fluid retention and the resulting shortness of breath. Depending on the case, an implanted device such as a pacemaker or implantable cardiac defibrillator may sometimes be recommended. In some moderate or more severe cases, cardiac resynchronization therapy (CRT) or cardiac contractility modulation may be beneficial. In severe disease that persists despite all other measures, a cardiac assist device such as a ventricular assist device or, occasionally, heart transplantation may be recommended. Heart failure is a common, costly, and potentially fatal condition, and is the leading cause of hospitalization and readmission in older adults. Heart failure often leads to more drastic health impairments than failure of other, similarly complex organs such as the kidneys or liver. In 2015, it affected about 40 million people worldwide. Overall, heart failure affects about 2% of adults, and more than 10% of those over the age of 70. Rates are predicted to increase. The risk of death in the first year after diagnosis is about 35%, while the risk of death in the second year is less than 10% in those still alive. The risk of death is comparable to that of some cancers. In the United Kingdom, the disease is the reason for 5% of emergency hospital admissions. Heart failure has been known since ancient times in Egypt; it is mentioned in the Ebers Papyrus around 1550 BCE. 
Definition When the heart functions poorly as a pump and does not circulate blood adequately via the circulatory system to meet the demands of the body, the term cardiovascular insufficiency is sometimes used. This generally leads to the syndrome of heart failure. Heart failure is not a disease but a syndrome – a combination of signs and symptoms – caused by the failure of the heart to pump blood to support the circulatory system at rest or during activity. It develops when the heart fails to properly fill with blood during diastole, resulting in an increase in intracardiac pressures, or fails to eject blood during systole, reducing cardiac output to the rest of the body. The filling failure and high intracardiac pressure can lead to fluid accumulation in the ventricles of the heart. This manifests as water retention and swelling due to fluid accumulation (edema), called congestion. Impaired ejection can lead to inadequate blood flow to the body tissues, resulting in ischemia. Signs and symptoms Congestive heart failure is a pathophysiological condition in which the heart's output is insufficient to meet the needs of the body and lungs. The term "congestive heart failure" is often used because one of the most common symptoms is congestion, or fluid accumulation in the tissues and veins of the lungs or other parts of a person's body. Congestion manifests itself particularly in the form of fluid accumulation and swelling (edema), in the form of peripheral edema (causing swollen limbs and feet) and pulmonary edema (causing difficulty breathing), as well as ascites (swollen abdomen). Pulse pressure, which is the difference between the systolic ("top number") and diastolic ("bottom number") blood pressures, is often low/narrow (i.e. 25% or less of the level of the systolic) in people with heart failure, and this can be an early warning sign. Symptoms of heart failure are traditionally divided into left-sided and right-sided because the left and right ventricles supply different parts of the circulation. In biventricular heart failure, both sides of the heart are affected. Left-sided heart failure is the more common form. Left-sided failure The left side of the heart takes oxygen-rich blood from the lungs and pumps it to the rest of the circulatory system in the body (except for the pulmonary circulation). Failure of the left side of the heart causes blood to back up into the lungs, causing breathing difficulties and fatigue due to an insufficient supply of oxygenated blood. Common respiratory signs include increased respiratory rate and labored breathing (nonspecific signs of shortness of breath). Rales or crackles, heard initially in the lung bases and, when severe, in all lung fields, indicate the development of pulmonary edema (fluid in the alveoli). Cyanosis, which indicates deficiency of oxygen in the blood, is a late sign of extremely severe pulmonary edema. Other signs of left ventricular failure include a laterally displaced apex beat (which occurs when the heart is enlarged) and a gallop rhythm (additional heart sounds), which may be heard as a sign of increased blood flow or increased intracardiac pressure. Heart murmurs may indicate the presence of valvular heart disease, either as a cause (e.g., aortic stenosis) or as a consequence (e.g., mitral regurgitation) of heart failure. Backward failure of the left ventricle causes congestion in the blood vessels of the lungs, so that symptoms are predominantly respiratory. 
Backward failure can be divided into failure of the left atrium, the left ventricle, or both within the left circuit. Patients will experience shortness of breath (dyspnea) on exertion and, in severe cases, dyspnea at rest. Increasing breathlessness while lying down, called orthopnea, also occurs. It can be measured by the number of pillows required to lie comfortably, with extreme cases of orthopnea forcing the patient to sleep sitting up. Another symptom of heart failure is paroxysmal nocturnal dyspnea: a sudden nocturnal attack of severe shortness of breath, usually occurring several hours after falling asleep. There may be "cardiac asthma" or wheezing. Impaired left ventricular forward function can lead to symptoms of poor systemic perfusion such as dizziness, confusion, and cool extremities at rest. Loss of consciousness may also occur due to loss of blood supply to the brain. Right-sided failure Right-sided heart failure is often caused by pulmonary heart disease (cor pulmonale), which is typically caused by issues with pulmonary circulation such as pulmonary hypertension or pulmonic stenosis. Physical examination may reveal pitting peripheral edema, ascites, liver enlargement, and spleen enlargement. Jugular venous pressure is frequently assessed as a marker of fluid status, which can be accentuated by testing for hepatojugular reflux. If the right ventricular pressure is increased, a parasternal heave may be present, signifying a compensatory increase in contraction strength. Backward failure of the right ventricle leads to congestion of systemic capillaries. This generates excess fluid accumulation in the body. This causes swelling under the skin (peripheral edema or anasarca) and usually affects the dependent parts of the body first, causing foot and ankle swelling in people who are standing up and sacral edema in people who are predominantly lying down. Nocturia (frequent night-time urination) may occur when fluid from the legs is returned to the bloodstream while lying down at night. In progressively severe cases, ascites (fluid accumulation in the abdominal cavity causing swelling) and liver enlargement may develop. Significant liver congestion may result in impaired liver function (congestive hepatopathy), jaundice, and coagulopathy (problems of decreased or increased blood clotting). Biventricular failure Dullness of the lung fields when percussed and reduced breath sounds at the base of the lungs may suggest the development of a pleural effusion (fluid collection between the lung and the chest wall). Though it can occur in isolated left- or right-sided heart failure, it is more common in biventricular failure because pleural veins drain into both the systemic and pulmonary venous systems. When unilateral, effusions are often right-sided. If a person with a failure of one ventricle lives long enough, it will tend to progress to failure of both ventricles. For example, left ventricular failure allows pulmonary edema and pulmonary hypertension to occur, which increase stress on the right ventricle. Though still harmful, right ventricular failure is not as deleterious to the left side. Causes Since heart failure is a syndrome and not a disease, establishing the underlying cause is vital to diagnosis and treatment. In heart failure, the structure or the function of the heart, or in some cases both, are altered. Heart failure is the potential end stage of all heart diseases. 
Common causes of heart failure include coronary artery disease, including a previous myocardial infarction (heart attack), high blood pressure, atrial fibrillation, valvular heart disease, excess alcohol use, infection, and cardiomyopathy of an unknown cause. In addition, viral infection and subsequent inflammation of the heart's myocardial tissue (termed myocarditis) can similarly contribute to the development of heart failure. Genetic predisposition plays an important role. If more than one cause is present, progression is more likely and prognosis is worse. Heart damage can predispose a person to develop heart failure later in life and has many causes, including systemic viral infections (e.g., HIV), chemotherapeutic agents such as daunorubicin, cyclophosphamide, and trastuzumab, and substance use disorders involving substances such as alcohol, cocaine, and methamphetamine. An uncommon cause is exposure to certain toxins such as lead and cobalt. Additionally, infiltrative disorders such as amyloidosis and connective tissue diseases such as systemic lupus erythematosus have similar consequences. Obstructive sleep apnea (a condition of sleep wherein disordered breathing overlaps with obesity, hypertension, and/or diabetes) is regarded as an independent cause of heart failure. Recent reports from clinical trials have also linked variation in blood pressure to heart failure and cardiac changes that may give rise to heart failure. High-output heart failure High-output heart failure happens when the amount of blood pumped out is more than typical and the heart is unable to keep up. This can occur in overload situations such as blood or serum infusions, kidney diseases, chronic severe anemia, beriberi (vitamin B1/thiamine deficiency), hyperthyroidism, cirrhosis, Paget's disease, multiple myeloma, arteriovenous fistulae, or arteriovenous malformations. Acute decompensation Chronic stable heart failure may easily decompensate (fail to meet the body's metabolic needs). This most commonly results from a concurrent illness (such as myocardial infarction (a heart attack) or pneumonia), abnormal heart rhythms, uncontrolled hypertension, or a person's failure to maintain a fluid restriction, diet, or medication. Other factors that may worsen CHF include: anemia, hyperthyroidism, excessive fluid or salt intake, and medication such as NSAIDs and thiazolidinediones. NSAIDs increase the risk twofold. Medications A number of medications may cause or worsen the disease. This includes NSAIDs, COX-2 inhibitors, a number of anesthetic agents such as ketamine, thiazolidinediones, some cancer medications, several antiarrhythmic medications, pregabalin, alpha-2 adrenergic receptor agonists, minoxidil, itraconazole, cilostazol, anagrelide, stimulants (e.g., methylphenidate), tricyclic antidepressants, lithium, antipsychotics, dopamine agonists, TNF inhibitors, calcium channel blockers (especially verapamil and diltiazem), salbutamol, and tamsulosin. By inhibiting the formation of prostaglandins, NSAIDs may exacerbate heart failure through several mechanisms, including promotion of fluid retention, increasing blood pressure, and decreasing a person's response to diuretic medications. Similarly, the ACC/AHA recommends against the use of COX-2 inhibitor medications in people with heart failure. Thiazolidinediones have been strongly linked to new cases of heart failure and worsening of pre-existing congestive heart failure due to their association with weight gain and fluid retention. 
Certain calcium channel blockers, such as diltiazem and verapamil, are known to decrease the force with which the heart ejects blood, and thus are not recommended in people with heart failure with a reduced ejection fraction. Breast cancer patients are at high risk of heart failure due to several factors. After analysing data from 26 studies (836,301 patients), a recent meta-analysis found that breast cancer survivors demonstrated a higher risk of heart failure within the first ten years after diagnosis (hazard ratio = 1.21; 95% CI: 1.1, 1.33). The pooled incidence of heart failure in breast cancer survivors was 4.44 (95% CI 3.33-5.92) per 1000 person-years of follow-up. Supplements Certain alternative medicines carry a risk of exacerbating existing heart failure, and are not recommended. This includes aconite, ginseng, gossypol, gynura, licorice, lily of the valley, tetrandrine, and yohimbine. Aconite can cause abnormally slow heart rates and abnormal heart rhythms such as ventricular tachycardia. Ginseng can cause abnormally low or high blood pressure, and may interfere with the effects of diuretic medications. Gossypol can increase the effects of diuretics, leading to toxicity. Gynura can cause low blood pressure. Licorice can worsen heart failure by increasing blood pressure and promoting fluid retention. Lily of the valley can cause abnormally slow heart rates with mechanisms similar to those of digoxin. Tetrandrine can lead to low blood pressure through inhibition of L-type calcium channels. Yohimbine can exacerbate heart failure by increasing blood pressure through alpha-2 adrenergic receptor antagonism. Pathophysiology Heart failure is caused by any condition that reduces the efficiency of the heart muscle, through damage or overloading. Over time, these increases in workload, which are mediated by long-term activation of neurohormonal systems such as the renin–angiotensin system and the sympathoadrenal system, lead to fibrosis, dilation, and structural changes in the shape of the left ventricle from elliptical to spherical. The heart of a person with heart failure may have a reduced force of contraction due to overloading of the ventricle. In a normal heart, increased filling of the ventricle results in increased contraction force by the Frank–Starling law of the heart, and thus a rise in cardiac output. In heart failure, this mechanism fails, as the ventricle is loaded with blood to the point where heart muscle contraction becomes less efficient. This is due to reduced ability to cross-link actin and myosin myofilaments in over-stretched heart muscle. Diagnosis No diagnostic criteria have been agreed on as the gold standard for heart failure, especially heart failure with preserved ejection fraction (HFpEF). In the UK, the National Institute for Health and Care Excellence recommends measuring N-terminal pro-BNP (NT-proBNP) followed by an ultrasound of the heart if positive. In Europe, the European Society of Cardiology, and in the United States, the AHA/ACC/HFSA, recommend measuring NT-proBNP or BNP followed by an ultrasound of the heart if positive. This is recommended in those with symptoms consistent with heart failure, such as shortness of breath. The European Society of Cardiology defines the diagnosis of heart failure as symptoms and signs consistent with heart failure in combination with "objective evidence of cardiac structural or functional abnormalities". This definition is consistent with an international 2021 report termed "Universal Definition of Heart Failure". 
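A minimal sketch of the screening sequence just described (natriuretic peptide test first, ultrasound of the heart if elevated) is shown below in Python. The cut-off is deliberately a parameter rather than a constant, because guideline thresholds vary by assay, age and clinical setting (an assumption for illustration, not a recommendation); the ejection-fraction ranges used to label the echo result follow the classification given later in this article. Function and parameter names are illustrative only.

```python
from typing import Optional


def assess_suspected_heart_failure(nt_probnp_pg_ml: float,
                                   elevated_threshold_pg_ml: float,
                                   ejection_fraction_pct: Optional[float] = None) -> str:
    """Illustrative screening sequence: natriuretic peptide first, echo if elevated.

    The threshold is supplied by the caller because guideline cut-offs differ by
    assay, age and setting (assumption). EF ranges follow the HFrEF (<40%),
    HFmrEF (41-49%) and HFpEF (>=50%) categories described in the Classification
    section of this article.
    """
    if nt_probnp_pg_ml < elevated_threshold_pg_ml:
        return "natriuretic peptide not elevated; heart failure less likely"
    if ejection_fraction_pct is None:
        return "elevated natriuretic peptide; echocardiography recommended"
    if ejection_fraction_pct < 40:
        return "consistent with HFrEF (reduced ejection fraction)"
    if ejection_fraction_pct < 50:
        return "consistent with HFmrEF (mildly reduced ejection fraction)"
    return "possible HFpEF if structural or functional abnormalities are demonstrated"
```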
Score-based algorithms have been developed to help in the diagnosis of HFpEF, which can be challenging for physicians to diagnose. The AHA/ACC/HFSA defines heart failure as symptoms and signs consistent with heart failure in combination with shown "structural and functional alterations of the heart as the underlying cause for the clinical presentation", for HFmrEF and HFpEF specifically requiring "evidence of spontaneous or provokable increased left ventricle filling pressures". Algorithms The European Society of Cardiology has developed a diagnostic algorithm for HFpEF, named HFA-PEFF. HFA-PEFF considers symptoms and signs, typical clinical demographics (obesity, hypertension, diabetes, elderly, atrial fibrillation), and diagnostic laboratory tests, ECG, and echocardiography. Classification "Left", "right" and mixed heart failure One historical method of categorizing heart failure is by the side of the heart involved (left heart failure versus right heart failure). Right heart failure was thought to compromise blood flow to the lungs, whereas left heart failure compromises blood flow to the aorta and consequently to the brain and the remainder of the body's systemic circulation. However, mixed presentations are common and left heart failure is a common cause of right heart failure. By ejection fraction More accurate classification of heart failure type is made by measuring ejection fraction, or the proportion of blood pumped out of the heart during a single contraction. Ejection fraction is given as a percentage, with the normal range being between 50 and 75%. The types are: Heart failure with reduced ejection fraction (HFrEF): Synonyms no longer recommended are "heart failure due to left ventricular systolic dysfunction" and "systolic heart failure". HFrEF is associated with an ejection fraction less than 40%. Heart failure with mildly reduced ejection fraction (HFmrEF), previously called "heart failure with mid-range ejection fraction", is defined by an ejection fraction of 41–49%. Heart failure with preserved ejection fraction (HFpEF): Synonyms no longer recommended include "diastolic heart failure" and "heart failure with normal ejection fraction." HFpEF occurs when the left ventricle contracts normally during systole, but the ventricle is stiff and does not relax normally during diastole, which impairs filling. Heart failure with recovered ejection fraction (HFrecovEF or HFrecEF): patients previously with HFrEF with complete normalization of left ventricular ejection fraction (≥50%). Heart failure may also be classified as acute or chronic. Chronic heart failure is a long-term condition, usually kept stable by the treatment of symptoms. Acute decompensated heart failure is a worsening of chronic heart failure symptoms, which can result in acute respiratory distress. High-output heart failure can occur when there is increased cardiac demand that results in increased left ventricular diastolic pressure, which can develop into pulmonary congestion (pulmonary edema). Several terms are closely related to heart failure and may be the cause of heart failure, but should not be confused with it. Cardiac arrest and asystole refer to situations in which no cardiac output occurs at all. Without urgent treatment, these events result in sudden death. Myocardial infarction ("Heart attack") refers to heart muscle damage due to insufficient blood supply, usually as a result of a blocked coronary artery. 
Cardiomyopathy refers specifically to problems within the heart muscle, and these problems can result in heart failure. Ischemic cardiomyopathy implies that the cause of muscle damage is coronary artery disease. Dilated cardiomyopathy implies that the muscle damage has resulted in enlargement of the heart. Hypertrophic cardiomyopathy involves enlargement and thickening of the heart muscle. Ultrasound An echocardiogram (ultrasound of the heart) is commonly used to support a clinical diagnosis of heart failure. This can determine the stroke volume (SV, the amount of blood in the heart that exits the ventricles with each beat), the end-diastolic volume (EDV, the total amount of blood at the end of diastole), and the SV in proportion to the EDV, a value known as the ejection fraction (EF). In pediatrics, the shortening fraction is the preferred measure of systolic function. Normally, the EF should be between 50 and 70%; in systolic heart failure, it drops below 40%. Echocardiography can also identify valvular heart disease and assess the state of the pericardium (the connective tissue sac surrounding the heart). Echocardiography may also aid in deciding specific treatments, such as medication, insertion of an implantable cardioverter-defibrillator, or cardiac resynchronization therapy. Echocardiography can also help determine if acute myocardial ischemia is the precipitating cause, which may manifest as regional wall motion abnormalities on echo. Chest X-ray Chest X-rays are frequently used to aid in the diagnosis of CHF. In a person who is compensated, this may show cardiomegaly (visible enlargement of the heart), quantified as the cardiothoracic ratio (proportion of the heart size to the chest). In left ventricular failure, evidence may exist of vascular redistribution (upper lobe blood diversion or cephalization), Kerley lines, cuffing of the areas around the bronchi, and interstitial edema. Ultrasound of the lung may also be able to detect Kerley lines. Electrophysiology An electrocardiogram (ECG or EKG) may be used to identify arrhythmias, ischemic heart disease, right and left ventricular hypertrophy, and presence of conduction delay or abnormalities (e.g. left bundle branch block). Although these findings are not specific to the diagnosis of heart failure, a normal ECG virtually excludes left ventricular systolic dysfunction. Blood tests N-terminal pro-BNP (NT-proBNP) is the favoured biomarker for the diagnosis of heart failure, according to guidelines published in 2018 by NICE in the UK. Brain natriuretic peptide 32 (BNP) is another biomarker commonly tested for heart failure. An elevated NT-proBNP or BNP is a specific test indicative of heart failure. Additionally, NT-proBNP or BNP can be used to differentiate dyspnea due to heart failure from other causes of dyspnea. If myocardial infarction is suspected, various cardiac markers may be used. Blood tests routinely performed include electrolytes (sodium, potassium), measures of kidney function, liver function tests, thyroid function tests, a complete blood count, and often C-reactive protein if infection is suspected. Hyponatremia (low serum sodium concentration) is common in heart failure. Vasopressin levels are usually increased, along with renin, angiotensin II, and catecholamines to compensate for reduced circulating volume due to inadequate cardiac output. 
This leads to increased fluid and sodium retention in the body; because the rate of fluid retention is higher than the rate of sodium retention, this phenomenon causes hypervolemic hyponatremia (low sodium concentration due to high body fluid retention). This phenomenon is more common in older women with low body mass. Severe hyponatremia can result in accumulation of fluid in the brain, causing cerebral edema and intracranial hemorrhage. Angiography Angiography is the X-ray imaging of blood vessels, which is done by injecting contrast agents into the bloodstream through a thin plastic tube (catheter), which is placed directly in the blood vessel. X-ray images are called angiograms. Heart failure may be the result of coronary artery disease, and its prognosis depends in part on the ability of the coronary arteries to supply blood to the myocardium (heart muscle). As a result, coronary catheterization may be used to identify possibilities for revascularisation through percutaneous coronary intervention or bypass surgery. Staging Heart failure is commonly stratified by the degree of functional impairment conferred by the severity of the heart failure, as reflected in the New York Heart Association (NYHA) functional classification. The NYHA functional classes (I–IV) begin with class I, which is defined as a person who experiences no limitation in any activities and has no symptoms from ordinary activities. People with NYHA class II heart failure have slight, mild limitations with everyday activities; the person is comfortable at rest or with mild exertion. With NYHA class III heart failure, a marked limitation occurs with any activity; the person is comfortable only at rest. A person with NYHA class IV heart failure is symptomatic at rest and becomes quite uncomfortable with any physical activity. This score documents the severity of symptoms and can be used to assess response to treatment. While its use is widespread, the NYHA score is not very reproducible and does not reliably predict the walking distance or exercise tolerance on formal testing. In its 2001 guidelines, the American College of Cardiology/American Heart Association working group introduced four stages of heart failure: Stage A: People at high risk for developing HF in the future, but no functional or structural heart disorder Stage B: A structural heart disorder, but no symptoms at any stage Stage C: Previous or current symptoms of heart failure in the context of an underlying structural heart problem, but managed with medical treatment Stage D: Advanced disease requiring hospital-based support, a heart transplant, or palliative care The ACC staging system is useful since stage A encompasses "pre-heart failure" – a stage where intervention with treatment can presumably prevent progression to overt symptoms. ACC stage A does not have a corresponding NYHA class. ACC stage B would correspond to NYHA class I. ACC stage C corresponds to NYHA class II and III, while ACC stage D overlaps with NYHA class IV. Heart failure may also be categorized by: The degree of coexisting illness: i.e. heart failure/systemic hypertension, heart failure/pulmonary hypertension, heart failure/diabetes, heart failure/kidney failure, etc. Whether the problem is primarily increased venous back pressure (preload), or failure to supply adequate arterial perfusion (afterload) Whether the abnormality is due to low cardiac output with high systemic vascular resistance or high cardiac output with low vascular resistance (low-output heart failure vs. 
high-output heart failure). Histopathology Histopathology can diagnose heart failure in autopsies. The presence of siderophages indicates chronic left-sided heart failure, but is not specific for it. It is also indicated by congestion of the pulmonary circulation. Prevention A person's risk of developing heart failure is inversely related to level of physical activity. Those who achieved at least 500 MET-minutes/week (the recommended minimum by U.S. guidelines) had lower heart failure risk than individuals who did not report exercising during their free time; the reduction in heart failure risk was even greater in those who engaged in higher levels of physical activity than the recommended minimum. Heart failure can also be prevented by lowering high blood pressure and high blood cholesterol, and by controlling diabetes. Maintaining a healthy weight, and decreasing sodium, alcohol, and sugar intake, may help. Additionally, avoiding tobacco use has been shown to lower the risk of heart failure. According to Johns Hopkins and the American Heart Association, there are a few ways to help prevent a cardiac event. Johns Hopkins states that stopping tobacco use, reducing high blood pressure, physical activity and diet can drastically affect the chances of developing heart disease. High blood pressure accounts for most cardiovascular deaths. High blood pressure can be lowered into the normal range by making dietary decisions such as consuming less salt. Exercise also helps to bring blood pressure back down. One of the best ways to help avoid heart failure is to promote healthier eating habits like eating more vegetables, fruits, grains, and lean protein. Diabetes is a major risk factor for heart failure. For women with coronary heart disease (CHD), diabetes was the strongest risk factor for heart failure. Diabetic women with depressed creatinine clearance or elevated BMI were at the highest risk of heart failure. While the annual incidence rate of heart failure for non-diabetic women with no risk factors is 0.4%, the annual incidence rates for diabetic women with elevated body mass index (BMI) and depressed creatinine clearance were 7% and 13%, respectively. Management Treatment focuses on improving the symptoms and preventing the progression of the disease. Reversible causes of heart failure also need to be addressed (e.g. infection, alcohol ingestion, anemia, thyrotoxicosis, arrhythmia, and hypertension). Treatments include lifestyle and pharmacological modalities, and occasionally various forms of device therapy. Rarely, cardiac transplantation is used as an effective treatment when heart failure has reached the end stage. Acute decompensation In acute decompensated heart failure, the immediate goal is to re-establish adequate perfusion and oxygen delivery to end organs. This entails ensuring that airway, breathing, and circulation are adequate. Immediate treatments usually involve some combination of vasodilators such as nitroglycerin, diuretics such as furosemide, and possibly noninvasive positive pressure ventilation. Supplemental oxygen is indicated in those with oxygen saturation levels below 90%, but is not recommended in those with normal oxygen levels in normal atmosphere. Chronic management The goals of the treatment for people with chronic heart failure are the prolongation of life, prevention of acute decompensation, and reduction of symptoms, allowing for greater activity. Heart failure can result from a variety of conditions. 
In considering therapeutic options, excluding reversible causes is of primary importance, including thyroid disease, anemia, chronic tachycardia, alcohol use disorder, hypertension, and dysfunction of one or more heart valves. Treatment of the underlying cause is usually the first approach to treating heart failure. In the majority of cases, though, either no primary cause is found or treatment of the primary cause does not restore normal heart function. In these cases, behavioral, medical and device treatment strategies exist that can provide a significant improvement in outcomes, including the relief of symptoms, exercise tolerance, and a decrease in the likelihood of hospitalization or death. Breathlessness rehabilitation for chronic obstructive pulmonary disease and heart failure has been proposed with exercise training as a core component. Rehabilitation should also include other interventions to address shortness of breath, including psychological and educational needs of people and needs of caregivers. Iron supplementation appears to reduce hospitalization but not all-cause mortality in patients with iron deficiency and heart failure. Advance care planning The latest evidence indicates that advance care planning (ACP) may help to increase documentation by medical staff regarding discussions with participants, and to reduce an individual's depression. This involves discussing an individual's future care plan in consideration of the individual's preferences and values. The findings are, however, based on low-quality evidence. Monitoring The various measures often used to assess the progress of people being treated for heart failure include fluid balance (calculation of fluid intake and excretion) and monitoring body weight (which in the shorter term reflects fluid shifts). Remote monitoring can be effective to reduce complications for people with heart failure. Lifestyle Behavior modification is a primary consideration in chronic heart failure management programs, with dietary guidelines regarding fluid and salt intake. Fluid restriction is important to reduce fluid retention in the body and to correct the hyponatremic status of the body. The evidence of benefit of reducing salt, however, is poor as of 2018. Thirst is a common and burdensome symptom for patients to cope with. Chewing gum has been shown to be an effective intervention to relieve thirst in patients experiencing heart failure, although patient acceptability remains an issue. Exercise and physical activity Exercise should be encouraged and tailored to suit an individual's capabilities. A meta-analysis found that centre-based group interventions delivered by a physiotherapist are helpful in promoting physical activity in HF. There is a need for additional training for physiotherapists in delivering behaviour change interventions alongside an exercise programme. An intervention is expected to be more efficacious in encouraging physical activity than usual care if it includes Prompts and cues to walk or exercise, like a phone call or a text message. It is extremely helpful if a trusted clinician provides explicit advice to engage in physical activity (Credible source). Another highly effective strategy is to place objects that will serve as a cue to engage in physical activity in the everyday environment of the patient (Adding object to the environment; e.g., exercise step or treadmill). 
Encouragement to walk or exercise in various settings beyond CR (e.g., home, neighbourhood, parks) is also promising (Generalisation of target behaviour). Additional promising strategies are Graded tasks (e.g., gradual increase in intensity and duration of exercise training), Self-monitoring, Monitoring of physical activity by others without feedback, Action planning, and Goal-setting. The inclusion of regular physical conditioning as part of a cardiac rehabilitation program can significantly improve quality of life and reduce the risk of hospital admission for worsening symptoms, but no evidence shows a reduction in mortality rates as a result of exercise. Home visits and regular monitoring at heart-failure clinics reduce the need for hospitalization and improve life expectancy. Medication Quadruple medical therapy using a combination of angiotensin receptor-neprilysin inhibitors (ARNI), beta blockers, mineralocorticoid receptor antagonists (MRA), and sodium/glucose cotransporter 2 inhibitors (SGLT2 inhibitors) is the standard of care as of 2021 for heart failure with reduced ejection fraction (HFrEF). There is no convincing evidence for pharmacological treatment of heart failure with preserved ejection fraction (HFpEF). Medication for HFpEF is symptomatic treatment with diuretics to treat congestion. Managing risk factors and comorbidities such as hypertension is recommended in HFpEF. Inhibitors of the renin–angiotensin system (RAS) are recommended in heart failure. The angiotensin receptor-neprilysin inhibitor (ARNI) sacubitril/valsartan is recommended as the first choice of RAS inhibitor in American guidelines published by the AHA/ACC in 2022. Use of angiotensin-converting enzyme (ACE) inhibitors (ACE-I), or angiotensin receptor blockers (ARBs) if the person develops a long-term cough as a side effect of the ACE-I, is associated with improved survival, fewer hospitalizations for heart failure exacerbations, and improved quality of life in people with heart failure. European guidelines published by the ESC in 2021 recommend that ARNI should be used in those who still have symptoms while on an ACE-I or ARB, a beta blocker, and a mineralocorticoid receptor antagonist. Use of the combination agent ARNI requires the cessation of ACE-I or ARB therapy at least 36 hours before its initiation. Beta-adrenergic blocking agents (beta blockers) add to the improvement in symptoms and mortality provided by ACE-I/ARB. The mortality benefits of beta blockers in people with systolic dysfunction who also have atrial fibrillation are more limited than in those who do not have it. If the ejection fraction is not diminished (HFpEF), the benefits of beta blockers are more modest; a decrease in mortality has been observed, but reduction in hospital admission for uncontrolled symptoms has not been observed. In people who are intolerant of ACE-I and ARBs or who have significant kidney dysfunction, the use of combined hydralazine and a long-acting nitrate, such as isosorbide dinitrate, is an effective alternative strategy. This regimen has been shown to reduce mortality in people with moderate heart failure. It is especially beneficial in the black population. Use of a mineralocorticoid antagonist, such as spironolactone or eplerenone, in addition to beta blockers and ACE-I, can improve symptoms and reduce mortality in people with symptomatic heart failure with reduced ejection fraction (HFrEF). 
SGLT2 inhibitors are used for heart failure with reduced ejection fraction as they have demonstrated benefits in reducing hospitalizations and mortality, regardless of whether an individual has comorbid type 2 diabetes or not. Other medications Second-line medications for CHF do not confer a mortality benefit. Digoxin is one such medication. Its narrow therapeutic window, a high degree of toxicity, and the failure of multiple trials to show a mortality benefit have reduced its role in clinical practice. It is now used in only a small number of people with refractory symptoms, who are in atrial fibrillation, and/or who have chronic hypotension. Diuretics have been a mainstay of treatment against symptoms of fluid accumulation, and include classes such as loop diuretics (such as furosemide), thiazide-like diuretics, and potassium-sparing diuretics. Although widely used, evidence on their efficacy and safety is limited, with the exception of mineralocorticoid antagonists such as spironolactone. Anemia is an independent factor in mortality in people with chronic heart failure. Treatment of anemia significantly improves quality of life for those with heart failure, often with a reduction in severity of the NYHA classification, and also improves mortality rates. The European Society of Cardiology recommends screening for iron deficiency and treating with intravenous iron if deficiency is found. The decision to anticoagulate people with HF, typically those with left ventricular ejection fractions <35%, is debated, but generally, anticoagulation is reserved for people with coexisting atrial fibrillation, a prior embolic event, or conditions that increase the risk of an embolic event, such as amyloidosis, left ventricular noncompaction, familial dilated cardiomyopathy, or a thromboembolic event in a first-degree relative. Vasopressin receptor antagonists can also be used to treat heart failure. Conivaptan is the first medication approved by the US Food and Drug Administration for the treatment of euvolemic hyponatremia in those with heart failure. In rare cases hypertonic 3% saline together with diuretics may be used to correct hyponatremia. Ivabradine is recommended for people with symptomatic heart failure with reduced left ventricular ejection fraction who are receiving optimized guideline-directed therapy (as above), including the maximum tolerated dose of beta blocker, have a normal heart rhythm and continue to have a resting heart rate above 70 beats per minute. Ivabradine has been found to reduce the risk of hospitalization for heart failure exacerbations in this subgroup of people with heart failure. Implanted devices In people with severe cardiomyopathy (left ventricular ejection fraction below 35%), or in those with recurrent VT or malignant arrhythmias, treatment with an automatic implantable cardioverter-defibrillator (AICD) is indicated to reduce the risk of severe life-threatening arrhythmias. The AICD does not improve symptoms or reduce the incidence of malignant arrhythmias but does reduce mortality from those arrhythmias, often in conjunction with antiarrhythmic medications. In people with left ventricular ejection fraction (LVEF) below 35%, the incidence of ventricular tachycardia or sudden cardiac death is high enough to warrant AICD placement. Its use is therefore recommended in AHA/ACC guidelines. 
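The ivabradine and AICD indications just described are essentially rule-based eligibility checks. The sketch below restates them in Python as a simplification for illustration only; the function names and boolean inputs are assumptions, and the code does not capture every nuance of the underlying guidelines.

```python
def ivabradine_reasonable(symptomatic_hfref: bool,
                          on_optimized_therapy_with_max_beta_blocker: bool,
                          normal_heart_rhythm: bool,
                          resting_heart_rate_bpm: float) -> bool:
    # Mirrors the criteria above: symptomatic HFrEF, optimized guideline-directed
    # therapy including the maximum tolerated beta-blocker dose, a normal heart
    # rhythm, and a resting heart rate above 70 beats per minute.
    return (symptomatic_hfref
            and on_optimized_therapy_with_max_beta_blocker
            and normal_heart_rhythm
            and resting_heart_rate_bpm > 70)


def aicd_considered(lvef_pct: float, recurrent_vt_or_malignant_arrhythmia: bool) -> bool:
    # Mirrors the text: severe cardiomyopathy (LVEF below 35%) or recurrent VT /
    # malignant arrhythmias make an implantable cardioverter-defibrillator reasonable.
    return lvef_pct < 35 or recurrent_vt_or_malignant_arrhythmia
```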
Cardiac contractility modulation (CCM) is a treatment for people with moderate to severe left ventricular systolic heart failure (NYHA class II–IV), which enhances both the strength of ventricular contraction and the heart's pumping capacity. The CCM mechanism is based on stimulation of the cardiac muscle by nonexcitatory electrical signals, which are delivered by a pacemaker-like device. CCM is particularly suitable for the treatment of heart failure with normal QRS complex duration (120 ms or less) and has been demonstrated to improve symptoms, quality of life, and exercise tolerance. CCM is approved for use in Europe, and was approved by the Food and Drug Administration for use in the United States in 2019. About one-third of people with LVEF below 35% have markedly altered conduction to the ventricles, resulting in dyssynchronous depolarization of the right and left ventricles. This is especially problematic in people with left bundle branch block (blockage of one of the two primary conducting fiber bundles that originate at the base of the heart and carry depolarizing impulses to the left ventricle). Using a special pacing algorithm, biventricular cardiac resynchronization therapy (CRT) can initiate a normal sequence of ventricular depolarization. In people with LVEF below 35% and prolonged QRS duration on ECG (LBBB or QRS of 150 ms or more), an improvement in symptoms and mortality occurs when CRT is added to standard medical therapy. However, in the two-thirds of people without prolonged QRS duration, CRT may actually be harmful. Surgical therapies People with the most severe heart failure may be candidates for ventricular assist devices, which have commonly been used as a bridge to heart transplantation, but have been used more recently as a destination treatment for advanced heart failure. In select cases, heart transplantation can be considered. While this may resolve the problems associated with heart failure, the person must generally remain on an immunosuppressive regimen to prevent rejection, which has its own significant downsides. A major limitation of this treatment option is the scarcity of hearts available for transplantation. Palliative care People with heart failure often have significant symptoms, such as shortness of breath and chest pain. Palliative care should be initiated early in the HF trajectory, and should not be an option of last resort. Palliative care can not only provide symptom management, but also assist with advance care planning, goals of care in the case of a significant decline, and making sure the person has a medical power of attorney and has discussed his or her wishes with this individual. Reviews from 2016 and 2017 found that palliative care is associated with improved outcomes, such as quality of life, symptom burden, and satisfaction with care. Without transplantation, heart failure may not be reversible and heart function typically deteriorates with time. The growing number of people with stage IV heart failure (intractable symptoms of fatigue, shortness of breath, or chest pain at rest despite optimal medical therapy) should be considered for palliative care or hospice, according to American College of Cardiology/American Heart Association guidelines. Prognosis Prognosis in heart failure can be assessed in multiple ways, including clinical prediction rules and cardiopulmonary exercise testing. Clinical prediction rules use a composite of clinical factors such as laboratory tests and blood pressure to estimate prognosis. 
Among several clinical prediction rules for prognosticating acute heart failure, the 'EFFECT rule' slightly outperformed other rules in stratifying people and identifying those at low risk of death during hospitalization or within 30 days. Easy methods for identifying people who are at low risk are (see the sketch following this passage): ADHERE Tree rule indicates that people with blood urea nitrogen < 43 mg/dL and systolic blood pressure at least 115 mm Hg have less than 10% chance of inpatient death or complications. BWH rule indicates that people with systolic blood pressure over 90 mm Hg, respiratory rate of 30 or fewer breaths per minute, serum sodium over 135 mmol/L, and no new ST–T wave changes have less than 10% chance of inpatient death or complications. A very important method for assessing prognosis in people with advanced heart failure is cardiopulmonary exercise testing (CPX testing). CPX testing is usually required prior to heart transplantation as an indicator of prognosis. CPX testing involves measurement of exhaled oxygen and carbon dioxide during exercise. The peak oxygen consumption (VO2 max) is used as an indicator of prognosis. As a general rule, a VO2 max less than 12–14 cc/kg/min indicates poor survival and suggests that the person may be a candidate for a heart transplant. People with a VO2 max <10 cc/kg/min have a clearly poorer prognosis. The most recent International Society for Heart and Lung Transplantation guidelines also suggest two other parameters that can be used for evaluation of prognosis in advanced heart failure, the heart failure survival score and the use of a criterion of VE/VCO2 slope > 35 from the CPX test. The heart failure survival score is calculated using a combination of clinical predictors and the VO2 max from the CPX test. Heart failure is associated with significantly reduced physical and mental health, resulting in a markedly decreased quality of life. With the exception of heart failure caused by reversible conditions, the condition usually worsens with time. Although some people survive many years, progressive disease is associated with an overall annual mortality rate of 10%. Around 18 of every 1000 persons will experience an ischemic stroke during the first year after diagnosis of HF. As the duration of follow-up increases, the stroke rate rises to nearly 50 strokes per 1000 cases of HF by 5 years. Epidemiology In 2022, heart failure affected about 64 million people globally. Overall, around 2% of adults have heart failure. In those over the age of 75, rates are greater than 10%. Rates are predicted to increase. Increasing rates are mostly because of increasing lifespan, but also because of increased risk factors (hypertension, diabetes, dyslipidemia, and obesity) and improved survival rates from other types of cardiovascular disease (myocardial infarction, valvular disease, and arrhythmias). Heart failure is the leading cause of hospitalization in people older than 65. United States In the United States, heart failure affects 5.8 million people, and each year 550,000 new cases are diagnosed. In 2011, heart failure was the most common reason for hospitalization for adults aged 85 years and older, and the second-most common for adults aged 65–84 years. An estimated one in five adults at age 40 will develop heart failure during their remaining lifetimes and about half of people who develop heart failure die within 5 years of diagnosis. 
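As a worked illustration of the low-risk screening rules and CPX thresholds discussed in the Prognosis section above, the Python sketch below encodes the stated cut-offs. Variable and function names are assumptions for illustration; these rules only flag low-risk patients or bucket a CPX result, they do not compute an individual prognosis.

```python
def adhere_tree_low_risk(bun_mg_dl: float, systolic_bp_mm_hg: float) -> bool:
    # ADHERE Tree rule as stated above: BUN < 43 mg/dL and systolic BP >= 115 mmHg
    # identify people with less than a 10% chance of inpatient death or complications.
    return bun_mg_dl < 43 and systolic_bp_mm_hg >= 115


def bwh_low_risk(systolic_bp_mm_hg: float, respiratory_rate_per_min: float,
                 serum_sodium_mmol_l: float, new_st_t_changes: bool) -> bool:
    # BWH rule as stated above: SBP > 90 mmHg, respiratory rate <= 30 breaths/min,
    # serum sodium > 135 mmol/L, and no new ST-T wave changes.
    return (systolic_bp_mm_hg > 90
            and respiratory_rate_per_min <= 30
            and serum_sodium_mmol_l > 135
            and not new_st_t_changes)


def vo2_max_category(vo2_max_cc_kg_min: float) -> str:
    # CPX thresholds from the text: below roughly 12-14 cc/kg/min suggests poor
    # survival and possible transplant candidacy; below 10 is clearly poorer.
    if vo2_max_cc_kg_min < 10:
        return "clearly poorer prognosis"
    if vo2_max_cc_kg_min < 14:
        return "poor survival; possible heart transplant candidate"
    return "above the transplant-evaluation range quoted above"
```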
Heart failure rates are much higher in African Americans, Hispanics, Native Americans, and recent immigrants from Eastern European countries; in these ethnic minority populations this has been linked to a high incidence of diabetes and hypertension. Nearly one of every four people (24.7%) hospitalized in the U.S. with congestive heart failure is readmitted within 30 days. Additionally, more than 50% of people are readmitted within 6 months after treatment, and the average duration of hospital stay is 6 days. Heart failure is a leading cause of hospital readmissions in the U.S. People aged 65 and older were readmitted at a rate of 24.5 per 100 admissions in 2011. In the same year, heart failure patients under Medicaid were readmitted at a rate of 30.4 per 100 admissions, and uninsured people were readmitted at a rate of 16.8 per 100 admissions. These are the highest readmission rates for both categories. Notably, heart failure was not among the top-10 conditions with the most 30-day readmissions among the privately insured. United Kingdom In the UK, despite moderate improvements in prevention, heart failure rates have increased due to population growth and ageing. Overall heart failure rates are similar to the four most common causes of cancer (breast, lung, prostate, and colon) combined. People from deprived backgrounds are more likely to be diagnosed with heart failure and at a younger age. Developing world In tropical countries, the most common cause of heart failure is valvular heart disease or some type of cardiomyopathy. As underdeveloped countries have become more affluent, the incidences of diabetes, hypertension, and obesity have increased, which have in turn raised the incidence of heart failure. Sex Men have a higher incidence of heart failure, but the overall prevalence rate is similar in both sexes since women survive longer after the onset of heart failure. Women tend to be older when diagnosed with heart failure (after menopause), they are more likely than men to have diastolic dysfunction, and seem to experience a lower overall quality of life than men after diagnosis. Ethnicity Some sources state that people of Asian descent are at a higher risk of heart failure than other ethnic groups. Other sources, however, have found that rates of heart failure are similar to rates found in other ethnic groups. History For centuries, the disease entity which would include many cases of what today would be called heart failure was dropsy; the term denotes generalized edema, a major manifestation of a failing heart, though also caused by other diseases. Writings of ancient civilizations include evidence of their acquaintance with dropsy and heart failure: Egyptians were the first to use bloodletting to relieve fluid accumulation and shortage of breath, and provided what may have been the first documented observations on heart failure in the Ebers papyrus (around 1500 BCE). Greeks described cases of dyspnea, fluid retention and fatigue compatible with heart failure. Romans used the flowering plant Drimia maritima (sea squill), which contains cardiac glycosides, for the treatment of dropsy; descriptions pertaining to heart failure are also known in the civilizations of ancient India and China. However, the manifestations of a failing heart were understood in the context of these peoples' medical theories – including ancient Egyptian religion, Hippocratic theory of humours, or ancient Indian and Chinese medicine, and the current concept of heart failure had not developed yet. 
Although shortness of breath had been connected to heart disease by Avicenna around 1000 CE, the descriptions of the pulmonary circulation by Ibn al-Nafis in the 13th century and of the systemic circulation by William Harvey in 1628 were decisive for the modern understanding of the nature of the condition. The role of the heart in fluid retention began to be better appreciated as dropsy of the chest (fluid accumulation in and around the lungs causing shortness of breath) became more familiar, and the current concept of heart failure, which brings together swelling and shortness of breath due to fluid retention, began to be accepted in the 17th and especially in the 18th century: Richard Lower linked dyspnea and foot swelling in 1679, and Giovanni Maria Lancisi connected jugular vein distention with right ventricular failure in 1728. Dropsy attributable to other causes, e.g. kidney failure, was differentiated in the 19th century. The stethoscope, invented by René Laennec in 1819, x-rays, discovered by Wilhelm Röntgen in 1895, and electrocardiography, described by Willem Einthoven in 1903, facilitated the investigation of heart failure. The 19th century also saw experimental and conceptual advances in the physiology of heart contraction, which led to the formulation of the Frank-Starling law of the heart (named after physiologists Otto Frank and Ernest Starling), a remarkable advance in understanding the mechanisms of heart failure. One of the earliest treatments of heart failure, relief of swelling by bloodletting with various methods, including leeches, continued through the centuries. Along with bloodletting, Jean-Baptiste de Sénac in 1749 recommended opiates for acute shortness of breath due to heart failure. In 1785, William Withering described the therapeutic uses of the foxglove genus of plants in the treatment of edema; their extract contains cardiac glycosides, including digoxin, still used today in the treatment of heart failure. The diuretic effects of inorganic mercury salts, which were used to treat syphilis, had already been noted in the 16th century by Paracelsus; in the 19th century they were used by noted physicians such as John Blackall and William Stokes. In the meantime, the cannulae (tubes) invented by the English physician Reginald Southey in 1877 offered another method of removing excess fluid, by inserting them directly into swollen limbs. Use of organic mercury compounds as diuretics, beyond their role in syphilis treatment, started in 1920, though it was limited by their parenteral route of administration and their side-effects. Oral mercurial diuretics were introduced in the 1950s, as were thiazide diuretics, which caused less toxicity and are still used. Around the same time, the invention of echocardiography by Inge Edler and Hellmuth Hertz in 1954 marked a new era in the evaluation of heart failure. In the 1960s, loop diuretics were added to the available treatments of fluid retention, and Christiaan Barnard performed the first heart transplant on a patient with heart failure. Over the following decades, new drug classes found their place in heart failure therapeutics, including vasodilators like hydralazine, renin-angiotensin system inhibitors, and beta-blockers. Economics In 2011, nonhypertensive heart failure was one of the 10 most expensive conditions seen during inpatient hospitalizations in the U.S., with aggregate inpatient hospital costs of more than $10.5 billion. 
Heart failure is associated with high health expenditure, mostly because of the cost of hospitalizations; costs have been estimated to amount to 2% of the total budget of the National Health Service in the United Kingdom, and more than $35 billion in the United States. Research directions Some research indicates that stem cell therapy may help, although other research does not indicate a benefit. There is tentative evidence of longer life expectancy and improved left ventricular ejection fraction in persons treated with bone marrow-derived stem cells. The maintenance of heart function depends on appropriate gene expression that is regulated at multiple levels by epigenetic mechanisms, including DNA methylation and histone post-translational modification. Currently, an increasing body of research is directed at understanding the role of perturbations of epigenetic processes in cardiac hypertrophy and fibrotic scarring. Notes References External links Heart failure, American Heart Association – information and resources for treating and living with heart failure Heart Failure Matters – patient information website of the Heart Failure Association of the European Society of Cardiology Heart failure in children by Great Ormond Street Hospital, London, UK 2022 AHA/ACC/HFSA Guideline for the Management of Heart Failure - Guideline Hub at American College of Cardiology, jointly with the American Heart Association and the Heart Failure Society of America. JACC article link, quick references, slides, perspectives, education, apps and tools, and patient resources. April, 2022 2021 ESC Guidelines for the diagnosis and treatment of acute and chronic heart failure - European Society of Cardiology resource webpage with links to full text and related materials, scientific presentation at ESC Congress 2021, news article, TV interview, app, slide set, and ESC Pocket Guidelines; plus previous versions. August, 2021. Aging-associated diseases Heart diseases Organ failure Disorders causing edema
0.767478
0.999474
0.767074
Cell damage
Cell damage (also known as cell injury) is a variety of changes of stress that a cell suffers due to external as well as internal environmental changes. Amongst other causes, this can be due to physical, chemical, infectious, biological, nutritional or immunological factors. Cell damage can be reversible or irreversible. Depending on the extent of injury, the cellular response may be adaptive and where possible, homeostasis is restored. Cell death occurs when the severity of the injury exceeds the cell's ability to repair itself. Cell death is relative to both the length of exposure to a harmful stimulus and the severity of the damage caused. Cell death may occur by necrosis or apoptosis. Causes Physical agents such as heat or radiation can damage a cell by literally cooking or coagulating their contents. Impaired nutrient supply, such as lack of oxygen or glucose, or impaired production of adenosine triphosphate (ATP) may deprive the cell of essential materials needed to survive. Metabolic: Hypoxia and Ischemia Chemical Agents Microbial Agents: Virus & Bacteria Immunologic Agents: Allergy and autoimmune diseases such as Parkinson's and Alzheimer's disease. Genetic factors: Such as Down's syndrome and sickle cell anemia Targets The most notable components of the cell that are targets of cell damage are the DNA and the cell membrane. DNA damage: In human cells, both normal metabolic activities and environmental factors such as ultraviolet light and other radiations can cause DNA damage, resulting in as many as one million individual molecular lesions per cell per day. Membrane damage: Damage to the cell membrane disturbs the state of cell electrolytes, e.g. calcium, which when constantly increased, induces apoptosis. Mitochondrial damage: May occur due to ATP decrease or change in mitochondrial permeability. Ribosome damage: Damage to ribosomal and cellular proteins such as protein misfolding, leading to apoptotic enzyme activation. Types of damage Some cell damage can be reversed once the stress is removed or if compensatory cellular changes occur. Full function may return to cells but in some cases, a degree of injury will remain. Reversible Cellular swelling Cellular swelling (or cloudy swelling) may occur due to cellular hypoxia, which damages the sodium-potassium membrane pump; it is reversible when the cause is eliminated. Cellular swelling is the first manifestation of almost all forms of injury to cells. When it affects many cells in an organ, it causes some pallor, increased turgor, and increase in weight of the organ. On microscopic examination, small clear vacuoles may be seen within the cytoplasm; these represent distended and pinched-off segments of the endoplasmic reticulum. This pattern of non-lethal injury is sometimes called hydropic change or vacuolar degeneration. Hydropic degeneration is a severe form of cloudy swelling. It occurs with hypokalemia due to vomiting or diarrhea. The ultrastructural changes of reversible cell injury include: Blebbing Blunting distortion of microvilli loosening of intercellular attachments mitochondrial changes dilation of the endoplasmic reticulum Fatty change In fatty change, the cell has been damaged and is unable to adequately metabolize fat. Small vacuoles of fat accumulate and become dispersed within cytoplasm. Mild fatty change may have no effect on cell function; however, more severe fatty change can impair cellular function. 
In the liver, the enlargement of hepatocytes due to fatty change may compress adjacent bile canaliculi, leading to cholestasis. Depending on the cause and severity of the lipid accumulation, fatty change is generally reversible. Fatty change is also known as fatty degeneration, fatty metamorphosis, or fatty steatosis. Irreversible Necrosis Necrosis is characterised by cytoplasmic swelling, irreversible damage to the plasma membrane, and organelle breakdown leading to cell death. The stages of cellular necrosis include pyknosis, the condensation of chromatin and shrinking of the nucleus of the cell; karyorrhexis, the fragmentation of the nucleus and break-up of the chromatin into unstructured granules; and karyolysis, the dissolution of the cell nucleus. Cytosolic components that leak through the damaged plasma membrane into the extracellular space can provoke an inflammatory response. There are six types of necrosis: Coagulative necrosis Liquefactive necrosis Caseous necrosis Fat necrosis Fibroid necrosis Gangrenous necrosis Apoptosis Apoptosis is the programmed cell death of superfluous or potentially harmful cells in the body. It is an energy-dependent process mediated by proteolytic enzymes called caspases, which trigger cell death through the cleaving of specific proteins in the cytoplasm and nucleus. The dying cells shrink and condense into apoptotic bodies. The cell surface is altered so as to display properties that lead to rapid phagocytosis by macrophages or neighbouring cells. Unlike in necrotic cell death, neighbouring cells are not damaged by apoptosis, as cytosolic products are safely isolated by membranes prior to phagocytosis. Apoptosis is considered an important component of various bioprocesses, including cell turnover, hormone-dependent atrophy, and the proper development and functioning of the immune system and the embryo; it also plays a role in chemically induced cell death that is genetically mediated. There is some evidence that certain symptoms of "apoptosis", such as endonuclease activation, can be spuriously induced without engaging a genetic cascade. It is also becoming clear that mitosis and apoptosis are toggled or linked in some way, and that the balance achieved depends on signals received from appropriate growth or survival factors. Research is being conducted to elucidate and analyse the cell cycle machinery and the signaling pathways that control cell cycle arrest and apoptosis. In the average adult, between 50 and 70 billion cells die each day due to apoptosis. Inhibition of apoptosis can result in a number of cancers, autoimmune diseases, inflammatory diseases, and viral infections. Hyperactive apoptosis can lead to neurodegenerative diseases, hematologic diseases, and tissue damage. Repair When a cell is damaged, the body will try to repair or replace the cell to continue normal functions. If a cell dies, the body will remove it and replace it with another functioning cell, or fill the gap with connective tissue to provide structural support for the remaining cells. The goal of the repair process is to fill the gap left by the damaged cells and regain structural continuity. Normal cells try to regenerate the damaged cells, but this cannot always happen. Regeneration Regeneration refers to the replacement of damaged parenchymal cells, the functional cells of an organism. The body can make more cells to replace the damaged cells, keeping the organ or tissue intact and fully functional. 
Replacement When a cell cannot be regenerated, the body will replace it with stromal connective tissue to maintain tissue or organ function. Stromal cells are the cells that support the parenchymal cells in any organ. Fibroblasts, immune cells, pericytes, and inflammatory cells are the most common types of stromal cells. Biochemical changes in cellular injury ATP (adenosine triphosphate) depletion is a common biological alteration that occurs with cellular injury. This change can occur regardless of the inciting agent of the cell damage. A reduction in intracellular ATP can have a number of functional and morphologic consequences during cell injury. These effects include: Failure of the ATP-dependent ion pumps (the Na+/K+ pump and the Ca2+ pump), resulting in a net influx of Na+ and Ca2+ ions and osmotic swelling. ATP-depleted cells begin to undertake anaerobic metabolism to derive energy from glycogen, a process known as glycogenolysis. A consequent decrease in the intracellular pH of the cell arises, which mediates harmful enzymatic processes. Early clumping of nuclear chromatin then occurs, known as pyknosis, and leads to eventual cell death. DNA damage and repair DNA damage DNA damage (or RNA damage in the case of some virus genomes) appears to be a fundamental problem for life. As noted by Haynes, the subunits of DNA are not endowed with any peculiar kind of quantum mechanical stability, and thus DNA is vulnerable to all the "chemical horrors" that might befall any such molecule in a warm aqueous medium. These chemical horrors are DNA damages that include various types of modification of the DNA bases, single- and double-strand breaks, and inter-strand cross-links (see DNA damage (naturally occurring)). DNA damages are distinct from mutations, although both are errors in the DNA. Whereas DNA damages are abnormal chemical and structural alterations, mutations ordinarily involve the normal four bases in new arrangements. Mutations can be replicated, and thus inherited, when the DNA replicates. In contrast, DNA damages are altered structures that cannot, themselves, be replicated. Several different repair processes can remove DNA damages (see chart in DNA repair). However, those DNA damages that remain un-repaired can have detrimental consequences. DNA damages may block replication or gene transcription. These blockages can lead to cell death. In multicellular organisms, cell death in response to DNA damage may occur by a programmed process, apoptosis. Alternatively, when DNA polymerase replicates a template strand containing a damaged site, it may inaccurately bypass the damage and, as a consequence, introduce an incorrect base, leading to a mutation. Experimentally, mutation rates increase substantially in cells defective in DNA mismatch repair or in homologous recombinational repair (HRR). In both prokaryotes and eukaryotes, DNA genomes are vulnerable to attack by reactive chemicals naturally produced in the intracellular environment and by agents from external sources. An important internal source of DNA damage in both prokaryotes and eukaryotes is reactive oxygen species (ROS) formed as byproducts of normal aerobic metabolism. For eukaryotes, oxidative reactions are a major source of DNA damage (see DNA damage (naturally occurring) and Sedelnikova et al.). In humans, about 10,000 oxidative DNA damages occur per cell per day. In the rat, which has a higher metabolic rate than humans, about 100,000 oxidative DNA damages occur per cell per day. 
In aerobically growing bacteria, ROS appear to be a major source of DNA damage, as indicated by the observation that 89% of spontaneously occurring base substitution mutations are caused by introduction of ROS-induced single-strand damages followed by error-prone replication past these damages. Oxidative DNA damages usually involve only one of the DNA strands at any damaged site, but about 1–2% of damages involve both strands. The double-strand damages include double-strand breaks (DSBs) and inter-strand crosslinks. For humans, the estimated average number of endogenous DNA DSBs per cell occurring at each cell generation is about 50. This level of formation of DSBs likely reflects the natural level of damages caused, in large part, by ROS produced by active metabolism. Repair of DNA damages Five major pathways are employed in repairing different types of DNA damages. These five pathways are nucleotide excision repair, base excision repair, mismatch repair, non-homologous end-joining and homologous recombinational repair (HRR) (see chart in DNA repair). Only HRR can accurately repair double-strand damages, such as DSBs. The HRR pathway requires that a second homologous chromosome be available to allow recovery of the information lost by the first chromosome due to the double-strand damage. DNA damage appears to play a key role in mammalian aging, and an adequate level of DNA repair promotes longevity (see DNA damage theory of aging). In addition, an increased incidence of DNA damage and/or reduced DNA repair cause an increased risk of cancer (see Cancer, Carcinogenesis and Neoplasm). Furthermore, the ability of HRR to accurately and efficiently repair double-strand DNA damages likely played a key role in the evolution of sexual reproduction (see Evolution of sexual reproduction). In extant eukaryotes, HRR during meiosis provides the major benefit of maintaining fertility. See also Cellular adaptation References Cell biology Cellular senescence
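As a rough aid to keeping the five pathways straight, the lookup below pairs broad classes of damage with the repair pathways named above. The pairings are simplified, textbook-level assumptions added only for illustration (the text itself states only that HRR alone repairs double-strand damage accurately), and the dictionary keys are invented labels rather than standard terminology.

```python
# Simplified, illustrative pairing of damage classes with repair pathways.
REPAIR_PATHWAYS = {
    "small base lesion (e.g. oxidative)": ["base excision repair"],
    "bulky, helix-distorting lesion": ["nucleotide excision repair"],
    "replication mismatch": ["mismatch repair"],
    "double-strand break": ["homologous recombinational repair (accurate)",
                            "non-homologous end-joining (error-prone)"],
}

def candidate_pathways(damage_class: str) -> list[str]:
    """Return the pathways this sketch associates with a damage class."""
    return REPAIR_PATHWAYS.get(damage_class, ["not covered by this sketch"])

print(candidate_pathways("double-strand break"))
```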
0.776404
0.987965
0.76706
First aid
First aid is the first and immediate assistance given to any person with either a minor or serious illness or injury, with care provided to preserve life, prevent the condition from worsening, or to promote recovery until medical services arrive. First aid is generally performed by someone with basic medical or first response training. Mental health first aid is an extension of the concept of first aid to cover mental health, while psychological first aid is used as early treatment of people who are at risk for developing PTSD. Conflict first aid, focused on preservation and recovery of an individual's social or relationship well-being, is being piloted in Canada. There are many situations that may require first aid, and many countries have legislation, regulation, or guidance, which specifies a minimum level of first aid provision in certain circumstances. This can include specific training or equipment to be available in the workplace (such as an automated external defibrillator), the provision of specialist first aid cover at public gatherings, or mandatory first aid training within schools. Generally, five steps are associated with first aid :- Assess the surrounding areas. Move to a safe surrounding (if not already; for example, road accidents are unsafe to be dealt with on roads). Call for help: both professional medical help and people nearby who might help in first aid such as the compressions of cardiopulmonary resuscitation (CPR). Perform suitable first aid depending on the injury suffered by the casualty. Evaluate the casualty for any fatal signs of danger, or possibility of performing the first aid again. Early history and warfare Skills of what is now known as first aid have been recorded throughout history, especially in relation to warfare, where the care of both traumatic and medical cases is required in particularly large numbers. The bandaging of battle wounds is shown on Classical Greek pottery from , whilst the parable of the Good Samaritan includes references to binding or dressing wounds. There are numerous references to first aid performed within the Roman army, with a system of first aid supported by surgeons, field ambulances, and hospitals. Roman legions had the specific role of capsarii, who were responsible for first aid such as bandaging, and are the forerunners of the modern combat medic. Further examples occur through history, still mostly related to battle, with examples such as the Knights Hospitaller in the 11th century AD, providing care to pilgrims and knights in the Holy Land. Formalization of life saving treatments During the late 18th century, drowning as a cause of death was a major concern amongst the population. In 1767, a society for the preservation of life from accidents in water was started in Amsterdam, and in 1773, physician William Hawes began publicizing the power of artificial respiration as means of resuscitation of those who appeared drowned. This led to the formation, in 1774, of the Society for the Recovery of Persons Apparently Drowned, later the Royal Humane Society, who did much to promote resuscitation. Napoleon's surgeon, Baron Dominique-Jean Larrey, is credited with creating an ambulance corps, the ambulance volantes, which included medical assistants, tasked to administer first aid in battle. In 1859, Swiss businessman Jean-Henri Dunant witnessed the aftermath of the Battle of Solferino, and his work led to the formation of the Red Cross, with a key stated aim of "aid to sick and wounded soldiers in the field". 
The Red Cross and Red Crescent are still the largest provider of first aid worldwide. In 1870, Prussian military surgeon Friedrich von Esmarch introduced formalized first aid to the military, and first coined the term "erste hilfe" (translating to 'first aid'), including training for soldiers in the Franco-Prussian War on care for wounded comrades using pre-learnt bandaging and splinting skills, and making use of the Esmarch bandage which he designed. The bandage was issued as standard to the Prussian combatants, and also included aide-memoire pictures showing common uses. In 1872, the Order of Saint John of Jerusalem in England changed its focus from hospice care, and set out to start a system of practical medical help, starting with making a grant towards the establishment of the UK's first ambulance service. This was followed by creating its own wheeled transport litter in 1875 (the St John Ambulance), and in 1877 established the St John Ambulance Association (the forerunner of modern-day St John Ambulance) "to train men and women for the benefit of the sick and wounded". Also in the UK, Surgeon-Major Peter Shepherd had seen the advantages of von Esmarch's new teaching of first aid, and introduced an equivalent programme for the British Army, and so being the first user of "first aid for the injured" in English, disseminating information through a series of lectures. Following this, in 1878, Shepherd and Colonel Francis Duncan took advantage of the newly charitable focus of St John, and established the concept of teaching first aid skills to civilians. The first classes were conducted in the hall of the Presbyterian school in Woolwich (near Woolwich barracks where Shepherd was based) using a comprehensive first aid curriculum. First aid training began to spread through the British Empire through organisations such as St John, often starting, as in the UK, with high risk activities such as ports and railways. The first recorded first aid training in the United States took place in Jermyn, Pennsylvania in 1899. Aims of first aid The primary goal of first aid is to prevent death or serious injury from worsening. The key aims of first aid can be summarized with the acronym of 'the three Ps': Preserve life: The overriding aim of all medical care which includes first aid, is to save lives and minimize the threat of death. First aid done correctly should help reduce the patient's level of pain and calm them down during the evaluation and treatment process. Prevent further harm: Prevention of further harm includes addressing both external factors, such as moving a patient away from any cause of harm, and applying first aid techniques to prevent worsening of the condition, such as applying pressure to stop a bleed from becoming dangerous. Promote recovery: First aid also involves trying to start the recovery process from the illness or injury, and in some cases might involve completing a treatment, such as in the case of applying a plaster to a small wound. First aid is not medical treatment, and cannot be compared with what a trained medical professional provides. First aid involves making common sense decisions in the best interest of an injured person. Setting the priorities Protocols such as ATLS, BATLS, SAFE-POINT are based on the principle of defining the priorities and the procedure where the correct execution of the individual steps achieves the required objective of saving human life. 
Basic points of these protocols include the mnemonic ABCDE or cABCDE: catastrophic bleeding (massive external bleeding, added in some protocols) Airway (clearing airways) Breathing (ensuring respiration) Circulation (ensuring effective cardiac output) Disability (neurological condition), and/or Defibrillation (cardio-respiratory failure, which can be also included as 'Breathing' or 'Circulation') Exposure (overall examination, environment) A major benefit of these protocols is that they require minimum resources, time and skills with a great degree of success in saving lives under conditions unfavourable for applying first aid. ABCDE method Source: Airway (clearing airways): If the patient responds in a normal voice, then the airway is patent. Airway obstruction can be partial or complete. Signs of a partially obstructed airway include a changed voice, noisy breathing (e.g., stridor), and an increased breathing effort. With a completely obstructed airway, there is no respiration despite great effort (i.e., paradox respiration, or "see-saw" sign). A reduced level of consciousness is a common cause of airway obstruction, partial or complete. A common sign of partial airway obstruction in the unconscious state is snoring. Untreated airway obstruction can rapidly lead to cardiac arrest. All health care professionals, regardless of the setting, can assess the airway as described and use a head-tilt and chin-lift maneuver to open the airway. With the proper equipment, suction of the airways to remove obstructions, for example, blood or vomit, is recommended. If possible, foreign bodies causing airway obstruction should be removed. In the event of a complete airway obstruction, treatment should be given according to current guidelines. In brief, first aid for conscious patients of choking uses anti-choking procedures (usually five back blows, alternating with five abdominal thrusts, or alternating with five chest thrusts in the case of the pregnant and the very obese victims, until the obstruction is relieved). If the victim becomes unconscious, it is required, according to guidelines, to call for help to emergency medical services and to any useful people that is near and to start cardiopulmonary resuscitation for unconscious victims of choking (attempting to extract the object, with extreme care, from time to time). In modern times, some commercial anti-choking devices have been invented to simplify the solution of choking. Importantly, high-flow oxygen should be provided to all critically ill persons as soon as possible. Breathing (ensuring respiration): In all settings, it is possible to determine the respiratory rate, inspect movements of the thoracic wall for symmetry and use of auxiliary respiratory muscles, and percuss the chest for unilateral dullness or resonance. Cyanosis, distended neck veins, and lateralization of the trachea can be identified. If a stethoscope is available, lung auscultation should be performed and, if possible, a pulse oximeter should be applied. Tension pneumothorax must be relieved immediately by inserting a cannula where the second intercostal space crosses the midclavicular line (needle thoracocentesis). Bronchospasm should be treated with inhalations. If breathing is insufficient, assisted ventilation must be performed by giving rescue breaths with or without a barrier device. Trained personnel should use a bag mask if available. Circulation (internal bleeding): The capillary refill time and pulse rate can be assessed in any setting. 
Inspection of the skin gives clues to circulatory problems. Color changes, sweating, and a decreased level of consciousness are signs of decreased perfusion. If a stethoscope is available, heart auscultation should be performed. Electrocardiography monitoring and blood pressure measurements should also be performed as soon as possible. Hypotension is an important adverse clinical sign. The effects of hypovolemia can be alleviated by placing the patient in the supine position and elevating the patient's legs. Intravenous access should be obtained as soon as possible and saline should be infused. Disability (neurological condition): The level of consciousness can be rapidly assessed using the AVPU method, where the patient is graded as alert (A), voice responsive (V), pain responsive (P), or unresponsive (U). Alternatively, the Glasgow Coma Score can be used. Limb movements should be inspected to evaluate potential signs of lateralization. The best immediate treatment for patients with a primary cerebral condition is stabilization of the airway, breathing, and circulation. In particular, when the patient is only pain responsive or unresponsive, airway patency must be ensured by placing the patient in the recovery position and summoning personnel qualified to secure the airway. Ultimately, intubation may be required. Pupillary light reflexes should be evaluated and blood glucose measured. A decreased level of consciousness due to low blood glucose can be corrected quickly with oral or infused glucose. Exposure (overall examination, environment): Signs of trauma, bleeding, skin reactions (rashes), needle marks, etc., must be observed. Bearing the dignity of the patient in mind, clothing should be removed to allow a thorough physical examination to be performed. Body temperature can be estimated by feeling the skin or using a thermometer when available. Key basic skills Certain skills are considered essential to the provision of first aid and are taught ubiquitously. In particular, the "ABCs" of first aid, which focus on critical life-saving interventions, must be addressed before treatment of less serious injuries. ABC stands for Airway, Breathing, and Circulation. The same mnemonic is used by emergency health professionals. Attention must first be brought to the airway to ensure it is clear. An obstruction (choking) is a life-threatening emergency. If an object blocks the airway, it requires anti-choking procedures. Following any evaluation of the airway, a first aid attendant would determine adequacy of breathing and provide rescue breathing if necessary. Assessment of circulation is now not usually carried out for patients who are not breathing, with first aiders now trained to go straight to chest compressions (and thus provide artificial circulation), but pulse checks may be done on less serious patients. Some organizations add a fourth step of "D" for Deadly bleeding or Defibrillation, while others consider these to be part of the Circulation step and instead use "D" for Disability. Variations on techniques to evaluate and maintain the ABCs depend on the skill level of the first aider. Once the ABCs are secured, first aiders can begin additional treatments or examination, as required, if they possess the proper training (such as measuring pupil dilation). Some organizations teach the same order of priority using the "3Bs": Breathing, Bleeding, and Bones (or "4Bs": Breathing, Bleeding, Burns, and Bones). 
While the ABCs and 3Bs are taught to be performed sequentially, certain conditions may require the consideration of two steps simultaneously. This includes the provision of both artificial respiration and chest compressions to someone who is not breathing and has no pulse, and the consideration of cervical spine injuries when ensuring an open airway. Skills applicable to the wider context are reflected in the mnemonic AMEGA, which refers to the tasks of "assess", "make safe", "emergency aid", "get help" and "aftermath". The aftermath tasks include recording and reporting, continued care of patients and the welfare of responders, and the replacement of used first aid kit elements. Preserving life The patient must have an open airway—that is, an unobstructed passage that allows air to travel from the open mouth or uncongested nose, down through the pharynx and into the lungs. Conscious people maintain their own airway automatically, but those who are unconscious (with a GCS of less than 8) may be unable to do so, as the part of the brain that manages spontaneous breathing may not be functioning. Whether conscious or not, the patient may be placed in the recovery position, lying on their side. In addition to relaxing the patient, this can have the effect of clearing the tongue from the pharynx. It also avoids a common cause of death in unconscious patients, which is choking on regurgitated stomach contents. The airway can also become blocked by a foreign object. To dislodge the object and relieve the choking, the first aider may use anti-choking methods (such as 'back slaps' and 'abdominal thrusts'). Once the airway has been opened, the first aider would reassess the patient's breathing. If there is no breathing, or the patient is not breathing normally (e.g., agonal breathing), the first aider would initiate CPR, which attempts to restart the patient's breathing by forcing air into the lungs. They may also manually massage the heart to promote blood flow around the body. If the choking person is an infant, the first aider may use anti-choking methods for babies. During that procedure, a series of five firm blows is delivered to the infant's upper back after placing the infant face-down along the aider's forearm. If the infant is able to cough or cry, no breathing assistance should be given. Chest thrusts can also be applied with two fingers on the lower half of the middle of the chest. Coughing and crying indicate the airway is open and that the foreign object will likely come out from the force the coughing or crying produces. A first responder should know how to use an Automated External Defibrillator (AED) in the case of a person having a sudden cardiac arrest. The survival rate of those who have a cardiac arrest outside of the hospital is low. Permanent brain damage sets in after five minutes of no oxygen delivery, so rapid action on the part of the rescuer is necessary. An AED is a device that can analyse the heart rhythm and deliver electric shocks to restart the heart. A first aider should be prepared to quickly deal with less severe problems such as cuts, grazes or bone fractures. They may be able to completely resolve a situation if they have the proper training and equipment. For situations that are more severe, complex or dangerous, a first aider might need to do the best they can with the equipment they have, and wait for an ambulance to arrive at the scene. 
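Because the ABCDE and ABC sequences above are fixed-order checklists, their priority structure can be made explicit as data. The sketch below is purely didactic: the step names follow the ABCDE description above, but the one-line questions and actions are compressed summaries written for this example, and none of it is clinical guidance.

```python
# Didactic sketch of the ABCDE ordering described above; not clinical guidance.
ABCDE_STEPS = [
    ("Airway",      "Is the airway patent?",      "head-tilt/chin-lift, clear obstruction"),
    ("Breathing",   "Is breathing adequate?",     "rescue breaths / assisted ventilation"),
    ("Circulation", "Is perfusion adequate?",     "control bleeding, supine position, raise legs"),
    ("Disability",  "Conscious level (AVPU)?",    "protect airway, check blood glucose"),
    ("Exposure",    "Trauma, rash, temperature?", "examine fully, keep the patient warm"),
]

def next_step(findings: dict) -> tuple | None:
    """Return the first step not yet marked as resolved, enforcing the fixed order."""
    for step in ABCDE_STEPS:
        if not findings.get(step[0], False):
            return step
    return None

# Example: the airway has been cleared, so Breathing is assessed next.
print(next_step({"Airway": True}))
```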
List of injuries and diseases that require first aid Altitude sickness, which can begin in susceptible people at altitudes as low as 5,000 feet, can cause potentially fatal swelling of the brain or lungs. Allergic reaction: it is treated with anti-allergic medications. It can lead to anaphylaxis (read below). Anaphylaxis, a life-threatening condition in which the airway can become constricted and the patient may go into shock. It can be caused by an allergic reaction to any allergen (such as insect stings or peanuts). Anaphylaxis is initially treated with an injection of epinephrine. Asphyxiation. Battlefield first aid—This protocol refers to treating shrapnel, gunshot wounds, burns and bone fractures as seen either in the traditional battlefield setting or in an area subject to damage by large-scale weaponry, such as a bomb blast. Bites and stings by insects and animals. Bleeding (external), treated by applying pressure (manually and later with a pressure bandage) to the wound site and elevating the limb if possible. Bleeding (internal), about internal wounds. Bone fracture, a break in a bone initially treated by stabilizing the fracture with a splint. Burns, which can result in damage to tissues and loss of body fluids through the burn site. Cardiac arrest, which leads to death within minutes; emergency medical services must be called, and the patient kept alive by cardiopulmonary resuscitation (CPR), preferably combined with the use of an AED, which should be requested as soon as possible. Even when the emergency services are called, there is often no time to wait for them to arrive, as 92 percent of people suffering a sudden cardiac arrest die before reaching hospital (according to the American Heart Association). Chest wounds (pneumothorax), also known as 'sucking chest wounds', which are treated with an occlusive dressing with an opened side that lets air go out but not in. Childbirth. Choking, blockage of the airway which can quickly result in death due to lack of oxygen if the patient's trachea is not cleared. If an object blocks the airway, it can be removed by anti-choking techniques. Cramps in muscles due to lactic acid build-up caused either by inadequate oxygenation of muscle or lack of water or salt. Diabetic hyperglycemia, excess of blood sugar caused by diabetes. It could lead to diabetic coma (caused by the excess of blood sugar). Diabetic hypoglycemia, decrease in blood sugar caused by diabetes. It could lead to diabetic coma (caused by the low levels of blood sugar). Diarrhea, which can lead to severe dehydration. Diving disorders. Drowning, including related asphyxia. Dysmenorrhea. Electrical injury. Fever, which is usually a symptom, but it requires its own treatment. Gastrointestinal bleeding. Hair tourniquet, a condition in which a hair or other thread becomes tied around a toe or finger tightly enough to cut off blood flow. Heart attack, or inadequate blood flow through the blood vessels supplying the heart muscle. Heat stroke, also known as sunstroke, which is a form of hyperthermia caused by high temperatures in the environment. It also tends to occur during heavy exercise in high humidity, or with inadequate water, though it may occur spontaneously in some chronically ill persons. Sunstroke, especially when the patient has been unconscious, often causes major damage to body systems such as the brain, kidneys, liver, and gastrointestinal tract. Unconsciousness for more than two hours usually leads to permanent disability. Emergency treatment involves rapid cooling of the patient. 
Heat syncope, another stage in the same process as heat stroke, occurs under similar conditions as heat stroke and is not distinguished from the latter by some authorities. Hyperglycemia, excess of blood sugar. It usually happens because of diabetes, and could lead to diabetic coma (caused by the excess of blood sugar). Hypoglycemia, decrease of blood sugar. As in hyperglycemia, it usually happens because of diabetes (in an insulin shock due to diabetic hypoglycemia), and could lead to diabetic coma caused by the low levels of blood sugar. Hypothermia, or Exposure, occurs when a person's core body temperature falls below 33.7 °C (92.6 °F). First aid for a mildly hypothermic patient includes rewarming, which can be achieved by wrapping the affected person in a blanket, and providing warm drinks, such as soup, and high-energy food, such as chocolate. However, rewarming a severely hypothermic person could result in a fatal arrhythmia, an irregular heart rhythm. Infarction of the heart, which is a form of ischemia (lack of oxygen) in myocardial tissue. Insulin shock (a form of diabetic hypoglycemia). Joint dislocation. Muscle strains. Poisoning, which can occur by injection, inhalation, absorption, or ingestion. Seizures, or a malfunction in the electrical activity in the brain. Common types of seizures include grand mal (which usually features convulsions as well as temporary respiratory abnormalities, change in skin complexion, etc.) and petit mal (which usually features twitching, rapid blinking, or fidgeting as well as altered consciousness and temporary respiratory abnormalities). Shock, and electric shock (electrical injury). Sprains, a temporary dislocation of a joint that immediately reduces automatically but may result in ligament damage. Stroke, an interruption of the blood supply to part of the brain. Sucking chest wounds (pneumothorax), treated with an occlusive dressing with an opened side that lets air go out but not in. Testicular torsion. Toothache, which can result in severe pain and loss of the tooth but is rarely life-threatening, unless over time the infection spreads into the bone of the jaw and starts osteomyelitis. Wounds with external bleeding, which include lacerations, incisions, abrasions and other bleeding wounds. Wounds with internal bleeding, about internal wounds. Many accidents can happen in homes, offices, schools and laboratories, and require immediate attention before the patient is attended to by a doctor. First aid kits A first aid kit consists of a strong, durable bag or transparent plastic box. They are commonly identified with a white cross on a green background. A first aid kit does not have to be bought ready-made. The advantage of ready-made first aid kits is that they have well-organized compartments and familiar layouts. Contents There is no universal agreement upon the list of contents of a first aid kit. The UK Health and Safety Executive stresses that the contents of workplace first aid kits will vary according to the nature of the work activities. 
As an example of the possible contents of a kit, British Standard BS 8599 First Aid Kits for the Workplace lists the following items: Information leaflet Medium sterile dressings Large sterile dressings Bandages Triangular dressings Safety pins Adhesive dressings Sterile wet wipes Microporous tape Nitrile gloves Face shield Foil blanket Burn dressings Clothing shears Conforming bandages Finger dressing Antiseptic cream Scissors Tweezers Cotton Training principles Basic principles, such as knowing how to use an adhesive bandage or apply direct pressure to a bleed, are often acquired passively through life experiences. However, providing effective, life-saving first aid interventions requires instruction and practical training. This is especially true where it relates to potentially fatal illnesses and injuries, such as those that require CPR; these procedures may be invasive, and carry a risk of further injury to the patient and the provider. As with any training, it is more useful if it occurs before an actual emergency. In many countries, callers to emergency medical services can also receive basic first aid instructions over the phone while the ambulance is on the way. Training is generally provided by attending a course, typically leading to certification. Due to regular changes in procedures and protocols, based on updated clinical knowledge, and to maintain skill, attendance at regular refresher courses or re-certification is often necessary. First aid training is often available through community organizations such as the Red Cross and St. John Ambulance, or through commercial providers, who will train people for a fee. This commercial training is most common for training of employees to perform first aid in their workplace. Many community organizations also provide a commercial service, which complements their community programmes. Certification may be offered at several levels, such as a junior-level certificate (basic life support), a senior-level certificate, and special certificates. Types of first aid which require training There are several types of first aid (and first aider) that require specific additional training. These are usually undertaken to fulfill the demands of the work or activity concerned. Aquatic/Marine first aid is usually practiced by professionals such as lifeguards, professional mariners or those involved in diver rescue, and covers the specific problems which may be faced after water-based rescue or delayed MedEvac. Battlefield first aid takes into account the specific needs of treating wounded combatants and non-combatants during armed conflict. Conflict First Aid focuses on supporting the stability and recovery of personal, social, group or system well-being and on addressing circumstantial safety needs. Hyperbaric first aid may be practiced by underwater diving professionals, who need to treat conditions such as decompression sickness. Oxygen first aid is the provision of oxygen to casualties with conditions resulting in hypoxia. It is also a standard first aid procedure for underwater diving incidents where gas bubble formation in the tissues is possible. Wilderness first aid is the provision of first aid under conditions where the arrival of emergency responders or the evacuation of an injured person may be delayed due to constraints of terrain, weather, and available persons or equipment. It may be necessary to care for an injured person for several hours or days. Mental health first aid is taught independently of physical first aid and covers how to support someone experiencing a mental health problem or in a crisis situation. 
It also covers how to identify the first signs of someone developing mental ill health and how to guide people towards appropriate help. First aid services Some people undertake specific training in order to provide first aid at public or private events, during filming, or at other places where people gather. They may be designated as a first aider, or use some other title. This role may be undertaken on a voluntary basis, with organisations such as the Red Cross society and St. John Ambulance, or as paid employment with a medical contractor. People performing a first aid role, whether in a professional or voluntary capacity, are often expected to have a high level of first aid training and are often uniformed. Symbols Although commonly associated with first aid, the symbol of a red cross is an official protective symbol of the Red Cross. According to the Geneva Conventions and other international laws, the use of this and similar symbols is reserved for official agencies of the International Red Cross and Red Crescent, and as a protective emblem for medical personnel and facilities in combat situations. Use by any other person or organization is illegal, and may lead to prosecution. The internationally accepted symbol for first aid is a white cross on a green background. Some organizations may make use of the Star of Life, although this is usually reserved for use by ambulance services, or may use symbols such as the Maltese Cross, like the Order of Malta Ambulance Corps and St John Ambulance. Other symbols may also be used. References External links First Aid Guide at the Mayo Clinic First aid from the British Red Cross – including first aid tips and first aid training information First aid from St John Ambulance – first aid information and advice Emergency medical services Lifesaving Scoutcraft Self-care
0.768069
0.998595
0.76699
Serology
Serology is the scientific study of serum and other body fluids. In practice, the term usually refers to the diagnostic identification of antibodies in the serum. Such antibodies are typically formed in response to an infection (against a given microorganism), against other foreign proteins (in response, for example, to a mismatched blood transfusion), or to one's own proteins (in instances of autoimmune disease). In either case, the procedure is simple. Serological tests Serological tests are diagnostic methods that are used to identify antibodies and antigens in a patient's sample. Serological tests may be performed to diagnose infections and autoimmune illnesses, to check if a person has immunity to certain diseases, and in many other situations, such as determining an individual's blood type. Serological tests may also be used in forensic serology to investigate crime scene evidence. Several methods can be used to detect antibodies and antigens, including ELISA, agglutination, precipitation, complement-fixation, and fluorescent antibodies and more recently chemiluminescence. Applications Microbiology In microbiology, serologic tests are used to determine if a person has antibodies against a specific pathogen, or to detect antigens associated with a pathogen in a person's sample. Serologic tests are especially useful for organisms that are difficult to culture by routine laboratory methods, like Treponema pallidum (the causative agent of syphilis), or viruses. The presence of antibodies against a pathogen in a person's blood indicates that they have been exposed to that pathogen. Most serologic tests measure one of two types of antibodies: immunoglobulin M (IgM) and immunoglobulin G (IgG). IgM is produced in high quantities shortly after a person is exposed to the pathogen, and production declines quickly thereafter. IgG is also produced on the first exposure, but not as quickly as IgM. On subsequent exposures, the antibodies produced are primarily IgG, and they remain in circulation for a prolonged period of time. This affects the interpretation of serology results: a positive result for IgM suggests that a person is currently or recently infected, while a positive result for IgG and negative result for IgM suggests that the person may have been infected or immunized in the past. Antibody testing for infectious diseases is often done in two phases: during the initial illness (acute phase) and after recovery (convalescent phase). The amount of antibody in each specimen (antibody titer) is compared, and a significantly higher amount of IgG in the convalescent specimen suggests infection as opposed to previous exposure. False negative results for antibody testing can occur in people who are immunosuppressed, as they produce lower amounts of antibodies, and in people who receive antimicrobial drugs early in the course of the infection. Transfusion medicine Blood typing is typically performed using serologic methods. The antigens on a person's red blood cells, which determine their blood type, are identified using reagents that contain antibodies, called antisera. When the antibodies bind to red blood cells that express the corresponding antigen, they cause red blood cells to clump together (agglutinate), which can be identified visually. The person's blood group antibodies can also be identified by adding plasma to cells that express the corresponding antigen and observing the agglutination reactions. 
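The IgM/IgG interpretation rules stated above amount to a small decision table, sketched below for illustration. It is a deliberate simplification under the assumptions given in the text; the four-fold titer rise used for "significantly higher" is a common laboratory convention rather than something the text specifies, the function names are invented for this example, and real interpretation depends on the assay, timing, and patient factors such as immunosuppression.

```python
def interpret_igm_igg(igm_positive: bool, igg_positive: bool) -> str:
    """Simplified reading of paired IgM/IgG results, following the text above."""
    if igm_positive:
        return "current or recent infection suggested"
    if igg_positive:
        return "past infection or immunization suggested"
    return "no serologic evidence of exposure (or tested too early / immunosuppressed)"

def significant_titer_rise(acute: int, convalescent: int, factor: int = 4) -> bool:
    """Compare acute- and convalescent-phase titers; a >= 4-fold rise is a common
    convention for 'significantly higher' (the factor is an assumption here)."""
    return convalescent >= factor * acute

print(interpret_igm_igg(igm_positive=False, igg_positive=True))
print(significant_titer_rise(acute=8, convalescent=64))   # True: an 8-fold rise
```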
Other serologic methods used in transfusion medicine include crossmatching and the direct and indirect antiglobulin tests. Crossmatching is performed before a blood transfusion to ensure that the donor blood is compatible. It involves adding the recipient's plasma to the donor blood cells and observing for agglutination reactions. The direct antiglobulin test is performed to detect whether antibodies are bound to red blood cells inside the person's body, which is abnormal and can occur in conditions like autoimmune hemolytic anemia, hemolytic disease of the newborn and transfusion reactions. The indirect antiglobulin test is used to screen for antibodies that could cause transfusion reactions and to identify certain blood group antigens. Immunology Serologic tests can help to diagnose autoimmune disorders by identifying abnormal antibodies directed against a person's own tissues (autoantibodies). Serological surveys A 2016 research paper by Metcalf et al., amongst whom were Neil Ferguson and Jeremy Farrar, stated that serological surveys are often used by epidemiologists to determine the prevalence of a disease in a population. Such surveys are sometimes performed by random, anonymous sampling from samples taken for other medical tests, or to assess the prevalence of antibodies against a specific organism or of a protective titre of antibodies in a population. Serological surveys are usually used to quantify the proportion of people or animals in a population positive for a specific antibody, or the titre or concentration of an antibody. These surveys are potentially the most direct and informative technique available to infer the dynamics of a population's susceptibility and level of immunity. The authors proposed a World Serology Bank (or serum bank) and foresaw "associated major methodological developments in serological testing, study design, and quantitative analysis, which could drive a step change in our understanding and optimum control of infectious diseases." Replies to the proposal included "Opportunities and challenges of a World Serum Bank" by de Lusignan and Correa, and a further response from the Australian researcher Karen Coates. In April 2020, Justin Trudeau formed the COVID-19 Immunity Task Force, whose mandate is to carry out a serological survey, a scheme launched in the midst of the COVID-19 pandemic. See also Forensic serology Medical laboratory Medical technologist Seroconversion Serovar Geoffrey Tovey, noted serologist References External links Serology (archived) – MedlinePlus Medical Encyclopedia Clinical pathology Blood tests Epidemiology Immunologic tests
0.771645
0.993966
0.766989
Biological organisation
Biological organisation is the organisation of complex biological structures and systems that define life using a reductionistic approach. The traditional hierarchy, as detailed below, extends from atoms to biospheres. The higher levels of this scheme are often referred to as an ecological organisation concept, or as the field, hierarchical ecology. Each level in the hierarchy represents an increase in organisational complexity, with each "object" being primarily composed of the previous level's basic unit. The basic principle behind the organisation is the concept of emergence—the properties and functions found at a hierarchical level are not present and irrelevant at the lower levels. The biological organisation of life is a fundamental premise for numerous areas of scientific research, particularly in the medical sciences. Without this necessary degree of organisation, it would be much more difficult—and likely impossible—to apply the study of the effects of various physical and chemical phenomena to diseases and physiology (body function). For example, fields such as cognitive and behavioral neuroscience could not exist if the brain was not composed of specific types of cells, and the basic concepts of pharmacology could not exist if it was not known that a change at the cellular level can affect an entire organism. These applications extend into the ecological levels as well. For example, DDT's direct insecticidal effect occurs at the subcellular level, but affects higher levels up to and including multiple ecosystems. Theoretically, a change in one atom could change the entire biosphere. Levels The simple standard biological organisation scheme, from the lowest level to the highest level, is as follows: More complex schemes incorporate many more levels. For example, a molecule can be viewed as a grouping of elements, and an atom can be further divided into subatomic particles (these levels are outside the scope of biological organisation). Each level can also be broken down into its own hierarchy, and specific types of these biological objects can have their own hierarchical scheme. For example, genomes can be further subdivided into a hierarchy of genes. Each level in the hierarchy can be described by its lower levels. For example, the organism may be described at any of its component levels, including the atomic, molecular, cellular, histological (tissue), organ and organ system levels. Furthermore, at every level of the hierarchy, new functions necessary for the control of life appear. These new roles are not functions that the lower level components are capable of and are thus referred to as emergent properties. Every organism is organised, though not necessarily to the same degree. An organism can not be organised at the histological (tissue) level if it is not composed of tissues in the first place. Emergence of biological organisation Biological organisation is thought to have emerged in the early RNA world when RNA chains began to express the basic conditions necessary for natural selection to operate as conceived by Darwin: heritability, variation of type, and competition for limited resources. Fitness of an RNA replicator (its per capita rate of increase) would likely have been a function of adaptive capacities that were intrinsic (in the sense that they were determined by the nucleotide sequence) and the availability of resources. 
The three primary adaptive capacities may have been (1) the capacity to replicate with moderate fidelity (giving rise to both heritability and variation of type); (2) the capacity to avoid decay; and (3) the capacity to acquire and process resources. These capacities would have been determined initially by the folded configurations of the RNA replicators (see "Ribozyme") that, in turn, would be encoded in their individual nucleotide sequences. Competitive success among different RNA replicators would have depended on the relative values of these adaptive capacities. Subsequently, among more recent organisms, competitive success at successive levels of biological organisation presumably continued to depend, in a broad sense, on the relative values of these adaptive capacities.
Fundamentals
Empirically, a large proportion of the (complex) biological systems we observe in nature exhibit hierarchical structure. On theoretical grounds we could expect complex systems to be hierarchies in a world in which complexity had to evolve from simplicity. Systems hierarchy analysis performed in the 1950s laid the empirical foundations for a field that would become, from the 1980s, hierarchical ecology. The theoretical foundations are summarized by thermodynamics: when biological systems are modeled as physical systems, in their most general abstraction they are thermodynamic open systems that exhibit self-organised behavior, and the set/subset relations between dissipative structures can be characterized in a hierarchy. A simpler and more direct way to explain the fundamentals of the "hierarchical organisation of life" was introduced in Ecology by Odum and others as "Simon's hierarchical principle"; Simon emphasized that hierarchy "emerges almost inevitably through a wide variety of evolutionary processes, for the simple reason that hierarchical structures are stable". To motivate this deep idea, he offered his "parable" about imaginary watchmakers.
Parable of the Watchmakers
There once were two watchmakers, named Hora and Tempus, who made very fine watches. The phones in their workshops rang frequently; new customers were constantly calling them. However, Hora prospered while Tempus became poorer and poorer. In the end, Tempus lost his shop. What was the reason behind this? The watches consisted of about 1000 parts each. The watches that Tempus made were designed such that, when he had to put down a partly assembled watch (for instance, to answer the phone), it immediately fell into pieces and had to be reassembled from the basic elements. Hora had designed his watches so that he could put together subassemblies of about ten components each. Ten of these subassemblies could be put together to make a larger sub-assembly. Finally, ten of the larger subassemblies constituted the whole watch. Each subassembly could be put down without falling apart.
A toy calculation illustrating Simon's argument is sketched at the end of this entry.
See also
Abiogenesis Cell theory Cellular differentiation Composition of the human body Evolution of biological complexity Evolutionary biology Gaia hypothesis Hierarchy theory Holon (philosophy) Human ecology Level of analysis Living systems Self-organization Spontaneous order Structuralism (biology) Timeline of the evolutionary history of life
Notes
References
External links
2011's theoretical/mathematical discussion.
Life Articles containing video clips Hierarchy Emergence Levels of organization (Biology)
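Simon's watchmaker parable has a simple quantitative core: if each part-addition can be interrupted with some probability and an unstable assembly falls apart when put down, the expected effort to finish one uninterrupted run grows roughly geometrically with its length. The following toy calculation (not from the cited sources; the interruption probability and part counts are assumed purely for illustration) contrasts the two watchmakers:

```python
# Toy comparison of flat vs. hierarchical assembly under random interruptions.
# Assumes: each part-addition is interrupted independently with probability p,
# and an interrupted, unstable assembly must be restarted from scratch.

def expected_additions(parts_per_stable_unit, p):
    """Rough expected part-additions to finish one stable unit of the given size."""
    success = (1 - p) ** parts_per_stable_unit   # chance a single run survives uninterrupted
    return parts_per_stable_unit / success       # ~ size * expected number of attempts

p = 0.01  # assumed per-part interruption probability (e.g. a phone call)

# Tempus: the whole 1000-part watch is one unstable run.
tempus = expected_additions(1000, p)

# Hora: 100 ten-part subassemblies, 10 ten-part sub-subassemblies, 1 final ten-part assembly.
hora = (100 + 10 + 1) * expected_additions(10, p)

print(f"Tempus: ~{tempus:,.0f} part-additions")   # on the order of tens of millions
print(f"Hora:   ~{hora:,.0f} part-additions")     # on the order of a thousand
```

Under these assumptions Hora finishes a watch in roughly a thousand part-additions while Tempus needs tens of millions, which is the sense in which stable intermediate forms make hierarchical organisation "almost inevitable".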
FAST (stroke)
FAST is an acronym used as a mnemonic to help early recognition and detection of the signs and symptoms of a stroke. The acronym stands for Facial drooping, Arm (or leg) weakness, Speech difficulties and Time to call emergency services.
F - Facial drooping - A section of the face, usually only on one side, that is drooping and hard to move. This can be recognized by a crooked smile, or difficulty preventing saliva from leaking at a corner of the mouth.
A - Arm (or leg) weakness - Inability to raise one's arm fully, or the inability to hold or squeeze something (such as someone's hand), or a new reduction in strength of an arm or leg when raising/supporting an extra weight (such as new difficulty carrying or lifting a typical object, or raising one's body from a squatting/sitting position).
S - Speech difficulties - An inability or difficulty to understand or produce speech, slurred speech, or difficulty repeating even a basic sentence such as "The sky is blue".
T - Time - If any of the symptoms above are showing, time is of the essence; emergency medical services should be called and/or the person taken to a hospital immediately if possible. It is also important to note the time the symptoms first started appearing and pass on this information ("Time is brain").
History
The FAST acronym was developed in the UK in 1998 by a group of stroke physicians, ambulance personnel, and an emergency department physician, and was designed to be an integral part of a training package for ambulance staff. The acronym was created to expedite administration of intravenous tissue plasminogen activator to patients within 3 hours of acute stroke symptom onset. The instruments with the most evidence of validity at this time were the Cincinnati Prehospital Stroke Scale (CPSS) and the Los Angeles Prehospital Stroke Screen (LAPSS). Studies using FAST have demonstrated variable diagnostic accuracy of strokes by paramedics and emergency medical technicians, with positive predictive values between 64% and 77%. The alternative acronym BE-FAST has shown promise by capturing >95% of ischemic strokes; however, adding coordination and diplopia assessment did not improve stroke detection in the prehospital setting.
Alternative versions
BE-FAST has shown promise and is currently being studied as an alternative method to the FAST acronym.
B - Balance degradation - Increased difficulty maintaining balance while walking (especially when using stairs or changing direction) or standing (especially when standing on one foot); now needing assistance from a hand on something such as a hand-rail or cane.
E - Eyesight degradation - Degradation within a continuous period of consciousness (less than 12 hours), such as greater difficulty focusing on detail of an object or discerning low-contrast detail.
The other components are as for the classic FAST mnemonic: F - Face, A - Arm, S - Speech, T - Time.
NEWFAST is an additional stroke identification tool available for use. Created in 2016 and copyrighted by Deborah Stabell Tran in 2017 as part of a DNP project, it was designed to identify all types of strokes: anterior or posterior ischemic, and hemorrhagic. It gives more definition to testing dizziness and balance, hallmark signs of posterior strokes. NEWFAST also addresses the sudden onset of a severe headache and vomiting that often accompany bleeds in the head.
NEW - means a new onset of symptoms (generally within the past 24 hours, but a sudden onset in general).
N - Nausea/Vomiting - Sudden onset.
E - Eyes - Double vision, field cut, neglect (not seeing or noticing what is going on, on one side of the body), and/or nystagmus (eyes involuntarily shifting back and forth).
W - Walking - Suddenly being unable to walk due to dizziness, or trying to walk and drifting to one side.
F - Facial droop - One side of the face is droopy.
A - Arm weakness - Especially one side being weak.
S - Speech - Slurred, confused, and/or absent speech.
T - Terrible headache/dizziness - Often described as a thunderclap headache, or dizziness regardless of the position of the body (sitting, standing, or lying down).
FASTER is used by Beaumont Health.
F - Face - Facial drooping or numbness on one side of the face.
A - Arms - Arm weakness on one side of the body.
S - Stability - Inability to maintain balance and stay steady on one's feet; dizziness.
T - Talking - Slurred speech, inability to respond coherently, or other speech difficulty.
E - Eyes - Changes in vision, including seeing double, or partial or complete blindness in one or both eyes.
R - React - Call emergency services immediately if you see any of these symptoms, even if symptoms go away.
References
Management of stroke Medical diagnosis Medical mnemonics Mnemonic acronyms
Body modification
Body modification (or body alteration) is the deliberate altering of the human anatomy or human physical appearance. In its broadest definition it includes skin tattooing, socially acceptable decoration (e.g., common ear piercing in many societies), and religious rites of passage (e.g., circumcision in a number of cultures), as well as the modern primitive movement. Body modification is performed for a large variety of reasons, including aesthetics, sexual enhancement, rites of passage, religious beliefs, to display group membership or affiliation, in remembrance of lived experience, traditional symbolism such as axis mundi and mythology, to create body art, for shock value, and as self-expression, among other reasons. Background What counts as "body modification" varies in cultures. In western cultures, the cutting or removal of one's hair is not usually considered body modification. Body modification can be contrasted with body adornment by defining body modification as "the physical alteration of the physical body [...] can be temporary or permanent, although most are permanent and modify the body forever". See also Adornment Bioethics Blood ritual Bodyhacking Church of Body Modification Deformity Eyeborg First haircut Genital modification and mutilation Genital tattooing Human enhancement Leblouh List of body modifications List of people known for extensive body modification Makeup Microchip implant Modern primitive Morphological freedom Transhumanism References Cultural trends Underground culture
Mental health
Mental health encompasses emotional, psychological, and social well-being, influencing cognition, perception, and behavior. According to the World Health Organization (WHO), it is a "state of well-being in which the individual realizes his or her abilities, can cope with the normal stresses of life, can work productively and fruitfully, and can contribute to his or her community". It likewise determines how an individual handles stress, interpersonal relationships, and decision-making. Mental health includes subjective well-being, perceived self-efficacy, autonomy, competence, intergenerational dependence, and self-actualization of one's intellectual and emotional potential, among others. From the perspectives of positive psychology or holism, mental health may include an individual's ability to enjoy life and to create a balance between life activities and efforts to achieve psychological resilience. Cultural differences, personal philosophy, subjective assessments, and competing professional theories all affect how one defines "mental health". Some early signs related to mental health difficulties are sleep irritation, lack of energy, lack of appetite, thinking of harming oneself or others, self-isolating (though introversion and isolation aren't necessarily unhealthy), and frequently zoning out.
Mental disorders
Mental health, as defined by the Public Health Agency of Canada, is an individual's capacity to feel, think, and act in ways to achieve a better quality of life while respecting personal, social, and cultural boundaries. Impairment of any of these is a risk factor for mental disorders, or mental illnesses, which are a component of mental health. In 2019, about 970 million people worldwide suffered from a mental disorder, with anxiety and depression being the most common. The number of people suffering from mental disorders has risen significantly throughout the years. Mental disorders are defined as health conditions that affect and alter cognitive functioning, emotional responses, and behavior associated with distress and/or impaired functioning. The ICD-11 is the global standard used to diagnose, treat, research, and report various mental disorders. In the United States, the DSM-5 is used as the classification system of mental disorders. Mental health is associated with a number of lifestyle factors such as diet, exercise, stress, drug abuse, social connections and interactions. Psychiatrists, psychologists, licensed professional clinical counselors, social workers, nurse practitioners, and family physicians can help manage mental illness with treatments such as therapy, counseling, and medication.
History
Early history
In the mid-19th century, William Sweetser was the first to coin the term mental hygiene, which can be seen as the precursor to contemporary approaches to work on promoting positive mental health. Isaac Ray, the fourth president of the American Psychiatric Association and one of its founders, further defined mental hygiene as "the art of preserving the mind against all incidents and influences calculated to deteriorate its qualities, impair its energies, or derange its movements". In American history, mentally ill patients were thought to be suffering religious punishment. This response persisted through the 1700s, along with the inhumane confinement and stigmatization of such individuals. Dorothea Dix (1802–1887) was an important figure in the development of the "mental hygiene" movement.
Dix was a school teacher who endeavored to help people with mental disorders and to expose the sub-standard conditions into which they were put. This became known as the "mental hygiene movement". Before this movement, it was not uncommon that people affected by mental illness would be considerably neglected, often left alone in deplorable conditions without sufficient clothing. From 1840 to 1880, she won the support of the federal government to set up over 30 state psychiatric hospitals; however, they were understaffed, under-resourced, and were accused of violating human rights. Emil Kraepelin in 1896 developed the taxonomy of mental disorders which has dominated the field for nearly 80 years. Later, the proposed disease model of abnormality was subjected to analysis and considered normality to be relative to the physical, geographical and cultural aspects of the defining group. At the beginning of the 20th century, Clifford Beers founded "Mental Health America – National Committee for Mental Hygiene", after publication of his accounts as a patient in several lunatic asylums, A Mind That Found Itself, in 1908 and opened the first outpatient mental health clinic in the United States. The mental hygiene movement, similar to the social hygiene movement, had at times been associated with advocating eugenics and sterilization of those considered too mentally deficient to be assisted into productive work and contented family life. In the post-WWII years, references to mental hygiene were gradually replaced by the term 'mental health' due to its positive aspect that evolves from the treatment of illness to preventive and promotive areas of healthcare. Deinstitutionalization and transinstitutionalization When US government-run hospitals were accused of violating human rights, advocates pushed for deinstitutionalization: the replacement of federal mental hospitals for community mental health services. The closure of state-provisioned psychiatric hospitals was enforced by the Community Mental Health Centers Act in 1963 that laid out terms in which only patients who posed an imminent danger to others or themselves could be admitted into state facilities. This was seen as an improvement from previous conditions. However, there remains a debate on the conditions of these community resources. It has been proven that this transition was beneficial for many patients: there was an increase in overall satisfaction, a better quality of life, and more friendships between patients all at an affordable cost. This proved to be true only in the circumstance that treatment facilities had enough funding for staff and equipment as well as proper management. However, this idea is a polarizing issue. Critics of deinstitutionalization argue that poor living conditions prevailed, patients were lonely, and they did not acquire proper medical care in these treatment homes. Additionally, patients that were moved from state psychiatric care to nursing and residential homes had deficits in crucial aspects of their treatment. Some cases result in the shift of care from health workers to patients' families, where they do not have the proper funding or medical expertise to give proper care. On the other hand, patients that are treated in community mental health centers lack sufficient cancer testing, vaccinations, or otherwise regular medical check-ups. 
Other critics of state deinstitutionalization argue that this was simply a transition to "transinstitutionalization", or the idea that prisons and state-provisioned hospitals are interdependent. In other words, patients become inmates. This draws on the Penrose Hypothesis of 1939, which theorized that there was an inverse relationship between prisons' population size and the number of psychiatric hospital beds. This means that populations that require psychiatric mental care will transition between institutions, which in this case include state psychiatric hospitals and criminal justice systems. Thus, a decrease in available psychiatric hospital beds occurred at the same time as an increase in inmates. Although some are skeptical, attributing this to other external factors, others attribute it to a lack of empathy for the mentally ill. The social stigmatization of those with mental illnesses is not in question: they have been widely marginalized and discriminated against in society. Researchers have analyzed how most compensation prisoners (detainees who are unable or unwilling to pay a fine for petty crimes) are unemployed, homeless, and have an extraordinarily high prevalence of mental illness and substance use disorders. Compensation prisoners then lose prospective job opportunities, face social marginalization, and lack access to resocialization programs, which ultimately facilitates reoffending. The research sheds light on how the mentally ill—and in this case, the poor—are further punished for certain circumstances that are beyond their control, and that this is a vicious cycle that repeats itself. Prisons thus come to function as another state-provisioned mental hospital. Families of patients, advocates, and mental health professionals still call for an increase in well-structured community facilities and treatment programs with a higher quality of long-term inpatient resources and care. With this more structured environment, the United States would gain more access to mental health care and an increase in the overall treatment of the mentally ill. However, in Bangladesh there is still a lack of studies on mental health conditions (MHCs) to raise awareness, develop knowledge, and improve attitudes toward seeking medical treatment for MHCs. People in rural areas often seek treatment from traditional healers, and MHCs are sometimes considered a spiritual matter.
Epidemiology
Mental illnesses are more common than cancer, diabetes, or heart disease. As of 2021, over 22 percent of all Americans over the age of 18 meet the criteria for having a mental illness. Evidence suggests that 970 million people worldwide have a mental disorder. Major depression ranks third among the top 10 leading causes of disease worldwide. By 2030, it is predicted to become the leading cause of disease worldwide. Over 700 thousand people commit suicide every year and around 14 million attempt it. A World Health Organization (WHO) report estimates the global cost of mental illness at nearly $2.5 trillion (two-thirds in indirect costs) in 2010, with a projected increase to over $6 trillion by 2030. Evidence from the WHO suggests that nearly half of the world's population is affected by mental illness, with an impact on their self-esteem, relationships and ability to function in everyday life. An individual's emotional health can impact their physical health. Poor mental health can lead to problems such as the inability to make adequate decisions and substance use disorders.
Good mental health can improve life quality whereas poor mental health can worsen it. According to Richards, Campania, & Muse-Burke, "There is growing evidence that is showing emotional abilities are associated with pro-social behaviors such as stress management and physical health." Their research also concluded that people who lack emotional expression are inclined to anti-social behaviors (e.g., substance use disorder and alcohol use disorder, physical fights, vandalism), which reflects one's mental health and suppressed emotions. Adults and children who face mental illness may experience social stigma, which can exacerbate the issues. Global prevalence Mental health can be seen as a continuum, where an individual's mental health may have many different possible values. Mental wellness is viewed as a positive attribute; this definition of mental health highlights emotional well-being, the capacity to live a full and creative life, and the flexibility to deal with life's inevitable challenges. Some discussions are formulated in terms of contentment or happiness. Many therapeutic systems and self-help books offer methods and philosophies espousing strategies and techniques vaunted as effective for further improving the mental wellness. Positive psychology is increasingly prominent in mental health. A holistic model of mental health generally includes concepts based upon anthropological, educational, psychological, religious, and sociological perspectives. There are also models as theoretical perspectives from personality, social, clinical, health and developmental psychology. The tripartite model of mental well-being views mental well-being as encompassing three components of emotional well-being, social well-being, and psychological well-being. Emotional well-being is defined as having high levels of positive emotions, whereas social and psychological well-being are defined as the presence of psychological and social skills and abilities that contribute to optimal functioning in daily life. The model has received empirical support across cultures. The Mental Health Continuum-Short Form (MHC-SF) is the most widely used scale to measure the tripartite model of mental well-being. Demographics Children and young adults As of 2019, about one in seven of the world's 10–19 year olds experienced a mental health disorder; about 165 million young people in total. A person's teenage years are a unique period where much crucial psychological development occurs, and is also a time of increased vulnerability to the development of adverse mental health conditions. More than half of mental health conditions start before a child reaches 20 years of age, with onset occurring in adolescence much more frequently than it does in early childhood or adulthood. Many such cases go undetected and untreated. In the United States alone, in 2021, at least roughly 17.5% of the population (ages 18 and older) were recorded as having a mental illness. The comparison between reports and statistics of mental health issues in newer generations (18–25 years old to 26–49 years old) and the older generation (50 years or older) signifies an increase in mental health issues as only 15% of the older generation reported a mental health issue whereas the newer generations reported 33.7% (18-25) and 28.1% (26-49). The role of caregivers for youth with mental health needs is valuable, and caregivers benefit most when they have sufficient psychoeducation and peer support. 
Depression is one of the leading causes of illness and disability among adolescents. Suicide is the fourth leading cause of death in 15-19-year-olds. Exposure to childhood trauma can cause mental health disorders and poor academic achievement. Ignoring mental health conditions in adolescents can have impacts that extend into adulthood. About 50% of preschool children show a natural reduction in behavioral problems; the remainder experience long-term consequences that impair physical and mental health and limit opportunities to live fulfilling lives. A result of depression during adolescence and adulthood may be substance abuse. The average age of onset is between 11 and 14 years for depressive disorders. Only approximately 25% of children with behavioral problems are referred to medical services; the majority of children go untreated.
Homeless population
Mental illness is thought to be highly prevalent among homeless populations, though access to proper diagnoses is limited. An article written by Lisa Goodman and her colleagues summarized Smith's research into PTSD in homeless single women and mothers in St. Louis, Missouri, which found that 53% of the respondents met diagnostic criteria, and which described homelessness as a risk factor for mental illness. At least two commonly reported symptoms of psychological trauma, social disaffiliation and learned helplessness, are highly prevalent among homeless individuals and families. While mental illness is prevalent, people infrequently receive appropriate care. Case management linked to other services is an effective care approach for improving symptoms in people experiencing homelessness. Case management reduced admission to hospitals, and it reduced substance use by those with substance abuse problems more than typical care.
Immigrants and refugees
States that produce refugees are sites of social upheaval, civil war, even genocide. Most refugees experience trauma. It can be in the form of torture, sexual assault, family fragmentation, and death of loved ones. Refugees and immigrants experience psychosocial stressors after resettlement. These include discrimination, lack of economic stability, and social isolation causing emotional distress. For example, in the early 1900s campaigns targeting Japanese immigrants were formed that inhibited their ability to participate in U.S. life, painting them as a threat to the American working class. They were subjected to prejudice, slandered by American media, and targeted by anti-Japanese legislation. For refugees, family reunification can be one of the primary needs to improve quality of life. Post-migration trauma is a cause of depressive disorders and psychological distress for immigrants.
Cultural and religious considerations
Mental health is a socially constructed concept; different societies, groups, cultures (both ethnic and national/regional), institutions, and professions have very different ways of conceptualizing its nature and causes, determining what is mentally healthy, and deciding what interventions, if any, are appropriate. Thus, different professionals will have different cultural, class, political and religious backgrounds, which will impact the methodology applied during treatment. In the context of deaf mental health care, it is necessary for professionals to have cultural competency of deaf and hard of hearing people and to understand how to properly rely on trained, qualified, and certified interpreters when working with culturally Deaf clients.
Research has shown that there is stigma attached to mental illness. Due to such stigma, individuals may resist labeling and may be driven to respond to mental health diagnoses with denialism. Family caregivers of individuals with mental disorders may also suffer discrimination or face stigma. Addressing and eliminating the social stigma and perceived stigma attached to mental illness has been recognized as crucial to education and awareness surrounding mental health issues. In the United Kingdom, the Royal College of Psychiatrists organized the campaign Changing Minds (1998–2003) to help reduce stigma, while in the United States, efforts by entities such as the Born This Way Foundation and The Manic Monologues specifically focus on removing the stigma surrounding mental illness. The National Alliance on Mental Illness (NAMI) is a U.S. institution founded in 1979 to represent and advocate for those struggling with mental health issues. NAMI helps to educate about mental illnesses and health issues, while also working to eliminate stigma attached to these disorders. Many mental health professionals are beginning to, or already understand, the importance of competency in religious diversity and spirituality, or the lack thereof. They are also partaking in cultural training to better understand which interventions work best for these different groups of people. The American Psychological Association explicitly states that religion must be respected. Education in spiritual and religious matters is also required by the American Psychiatric Association, however, far less attention is paid to the damage that more rigid, fundamentalist faiths commonly practiced in the United States can cause. This theme has been widely politicized in 2018 such as with the creation of the Religious Liberty Task Force in July of that year. Also, many providers and practitioners in the United States are only beginning to realize that the institution of mental healthcare lacks knowledge and competence of many non-Western cultures, leaving providers in the United States ill-equipped to treat patients from different cultures. Occupations Occupational therapy Occupational therapy practitioners aim to improve and enable a client or group's participation in meaningful, everyday occupations. In this sense, occupation is defined as any activity that "occupies one's time". Examples of those activities include daily tasks (dressing, bathing, eating, house chores, driving, etc.), sleep and rest, education, work, play, leisure (hobbies), and social interactions. The OT profession offers a vast range of services for all stages of life in a myriad of practice settings, though the foundations of OT come from mental health. Community support for mental health through expert-moderated support groups can aid those who want to recover from mental illness or otherwise improve their emotional well-being. OT services focused on mental health can be provided to persons, groups, and populations across the lifespan and experiencing varying levels of mental health performance. For example, occupational therapy practitioners provide mental health services in school systems, military environments, hospitals, outpatient clinics, and inpatient mental health rehabilitation settings. Interventions or support can be provided directly through specific treatment interventions or indirectly by providing consultation to businesses, schools, or other larger groups to incorporate mental health strategies on a programmatic level. 
Even people who are mentally healthy can benefit from health promotion and additional prevention strategies to reduce the impact of difficult situations. The interventions focus on positive functioning, sensory strategies, managing emotions, interpersonal relationships, sleep, community engagement, and other cognitive skills (i.e. visual-perceptual skills, attention, memory, arousal/energy management, etc.).
Mental health in social work
Social work in mental health, also called psychiatric social work, is a process where an individual in a setting is helped to attain freedom from overlapping internal and external problems (social and economic situations, family and other relationships, the physical and organizational environment, psychiatric symptoms, etc.). It aims for harmony, quality of life, self-actualization and personal adaptation across all systems. Psychiatric social workers are mental health professionals who can assist patients and their family members in coping with both mental health issues and various economic or social problems caused by mental illness or psychiatric dysfunctions, and in attaining improved mental health and well-being. They are vital members of the treatment teams in Departments of Psychiatry and Behavioral Sciences in hospitals. They are employed in both outpatient and inpatient settings of a hospital, nursing homes, state and local governments, substance use clinics, correctional facilities, health care services, private practice, etc. In the United States, social workers provide most of the mental health services. According to government sources, 60 percent of mental health professionals are clinically trained social workers, 10 percent are psychiatrists, 23 percent are psychologists, and 5 percent are psychiatric nurses. Mental health social workers in Japan have professional knowledge of health and welfare and skills essential for a person's well-being. Their social work training enables them, as professionals, to provide consultation assistance for mental disabilities and their social reintegration; consultation regarding the rehabilitation of victims; and advice and guidance on post-discharge residence and re-employment after hospitalized care, on major life events in regular life, on money and self-management, and on other relevant matters, to equip clients to adapt in daily life. Social workers provide individual home visits for the mentally ill and make welfare services available; with specialized training, a range of procedural services is coordinated for home, workplace and school. In an administrative relationship, psychiatric social workers provide consultation, leadership, conflict management and work direction. Psychiatric social workers who provide assessment and psychosocial interventions function as clinicians, counselors and municipal staff of health centers.
Risk factors and causes of mental health problems
There are many things that can contribute to mental health problems, including biological factors, genetic factors, life experiences (such as psychological trauma or abuse), and a family history of mental health problems.
Biological factors
According to the National Institute of Health Curriculum Supplement Series book, most scientists believe that changes in neurotransmitters can cause mental illnesses. In the section "The Biology of Mental Illnesses" the issue is explained in detail: "...there may be disruptions in the neurotransmitters dopamine, glutamate, and norepinephrine in individuals who have schizophrenia".
Demographic factors
Gender, age, ethnicity, life expectancy, longevity, population density, and community diversity are all demographic characteristics that can increase the risk and severity of mental disorders. Existing evidence demonstrates that the female gender is connected with an elevated risk of depression at different phases of life, commencing in adolescence, in different contexts. Females, for example, have a higher risk of anxiety and eating disorders, whereas males have a higher chance of substance abuse and behavioural and developmental issues. This does not imply that women are less likely to suffer from developmental disorders such as autism spectrum disorder, attention deficit hyperactivity disorder, Tourette syndrome, or early-onset schizophrenia. Ethnicity and ethnic heterogeneity have also been identified as risk factors for the prevalence of mental disorders, with minority groups being at a higher risk due to discrimination and exclusion. Unemployment has been shown to hurt an individual's emotional well-being, self-esteem, and more broadly their mental health. Increasing unemployment has been shown to have a significant impact on mental health, predominantly depressive disorders. This is an important consideration when reviewing the triggers for mental health disorders in any population survey. According to a 2009 meta-analysis by Paul and Moser, countries with high income inequality and poor unemployment protections experience worse mental health outcomes among the unemployed. Emotional mental disorders are a leading cause of disabilities worldwide. Investigating the degree and severity of untreated emotional mental disorders throughout the world is a top priority of the World Mental Health (WMH) survey initiative, which was created in 1998 by the World Health Organization (WHO). "Neuropsychiatric disorders are the leading causes of disability worldwide, accounting for 37% of all healthy life years lost through disease." These disorders are most destructive to low and middle-income countries due to their inability to provide their citizens with proper aid. Despite modern treatment and rehabilitation for emotional mental health disorders, "even economically advantaged societies have competing priorities and budgetary constraints". Unhappily married couples suffer 3–25 times the risk of developing clinical depression. The World Mental Health survey initiative has suggested a plan for countries to redesign their mental health care systems to best allocate resources. "A first step is documentation of services being used and the extent and nature of unmet treatment needs. A second step could be to do a cross-national comparison of service use and unmet needs in countries with different mental health care systems. Such comparisons can help to uncover optimum financing, national policies, and delivery systems for mental health care." Knowledge of how to provide effective emotional mental health care has become imperative worldwide. Unfortunately, most countries have insufficient data to guide decisions, absent or competing visions for resources, and near-constant pressures to cut insurance and entitlements. WMH surveys were done in Africa (Nigeria, South Africa), the Americas (Colombia, Mexico, United States), Asia and the Pacific (Japan, New Zealand, Beijing and Shanghai in the People's Republic of China), Europe (Belgium, France, Germany, Italy, Netherlands, Spain, Ukraine), and the Middle East (Israel, Lebanon).
Countries were classified with World Bank criteria as low-income (Nigeria), lower-middle-income (China, Colombia, South Africa, Ukraine), higher middle-income (Lebanon, Mexico), and high-income. The coordinated surveys on emotional mental health disorders, their severity, and treatments were implemented in the aforementioned countries. These surveys assessed the frequency, types, and adequacy of mental health service use in 17 countries in which WMH surveys are complete. The WMH also examined unmet needs for treatment in strata defined by the seriousness of mental disorders. Their research showed that "the number of respondents using any 12-month mental health service was generally lower in developing than in developed countries, and the proportion receiving services tended to correspond to countries' percentages of gross domestic product spent on health care". "High levels of unmet need worldwide are not surprising, since WHO Project ATLAS' findings of much lower mental health expenditures than was suggested by the magnitude of burdens from mental illnesses. Generally, unmet needs in low-income and middle-income countries might be attributable to these nations spending reduced amounts (usually <1%) of already diminished health budgets on mental health care, and they rely heavily on out-of-pocket spending by citizens who are ill-equipped for it".
Stress
The Centre for Addiction and Mental Health discusses how a certain amount of stress is a normal part of daily life. Small doses of stress help people meet deadlines, be prepared for presentations, be productive and arrive on time for important events. However, long-term stress can become harmful. When stress becomes overwhelming and prolonged, the risks for mental health problems and medical problems increase. The impact of a stressful environment has also been highlighted by different models. Mental health has often been understood from the lens of the vulnerability-stress model. In that context, stressful situations may contribute to a preexisting vulnerability to negative mental health outcomes being realized. On the other hand, the differential susceptibility hypothesis suggests that mental health outcomes are better explained by an increased sensitivity to the environment than by vulnerability. For example, it was found that children scoring higher on observer-rated environmental sensitivity often derive more harm from low-quality parenting, but also more benefits from high-quality parenting, than those children scoring lower on that measure.
Poverty
Environmental factors
Prevention and promotion
"The terms mental health promotion and prevention have often been confused. Promotion is defined as intervening to optimize positive mental health by addressing determinants of positive mental health (i.e. protective factors) before a specific mental health problem has been identified, with the ultimate goal of improving the positive mental health of the population. Mental health prevention is defined as intervening to minimize mental health problems (i.e. risk factors) by addressing determinants of mental health problems before a specific mental health problem has been identified in the individual, group, or population of focus with the ultimate goal of reducing the number of future mental health problems in the population." In order to improve mental health, the root of the issue has to be resolved.
"Prevention emphasizes the avoidance of risk factors; promotion aims to enhance an individual's ability to achieve a positive sense of self-esteem, mastery, well-being, and social inclusion." Mental health promotion attempts to increase protective factors and healthy behaviors that can help prevent the onset of a diagnosable mental disorder and reduce risk factors that can lead to the development of a mental disorder. Yoga is an example of an activity that calms one's entire body and nerves. According to a study on well-being by Richards, Campania, and Muse-Burke, "mindfulness is considered to be a purposeful state, it may be that those who practice it belief in its importance and value being mindful, so that valuing of self-care activities may influence the intentional component of mindfulness." Akin to surgery, sometimes the body must be further damaged, before it can properly heal Mental health is conventionally defined as a hybrid of the absence of a mental disorder and the presence of well-being. Focus is increasing on preventing mental disorders. Prevention is beginning to appear in mental health strategies, including the 2004 WHO report "Prevention of Mental Disorders", the 2008 EU "Pact for Mental Health" and the 2011 US National Prevention Strategy. Some commentators have argued that a pragmatic and practical approach to mental disorder prevention at work would be to treat it the same way as physical injury prevention. Prevention of a disorder at a young age may significantly decrease the chances that a child will have a disorder later in life, and shall be the most efficient and effective measure from a public health perspective. Prevention may require the regular consultation of a physician for at least twice a year to detect any signs that reveal any mental health concerns. Additionally, social media is becoming a resource for prevention. In 2004, the Mental Health Services Act began to fund marketing initiatives to educate the public on mental health. This California-based project is working to combat the negative perception with mental health and reduce the stigma associated with it. While social media can benefit mental health, it can also lead to deterioration if not managed properly. Limiting social media intake is beneficial. Studies report that patients in mental health care who can access and read their Electronic Health Records (EHR) or Open Notes online experience increased understanding of their mental health, feeling in control of their care, and enhanced trust in their clinicians. Patients' also reported feelings of greater validation, engagement, remembering their care plan, and acquiring a better awareness of potential side effects of their medications, when reading their mental health notes. Other common experiences were that shared mental health notes enhance patient empowerment and augment patient autonomy. Furthermore, recent studies have shown that social media is an effective way to draw attention to mental health issues. By collecting data from Twitter, researchers found that social media presence is heightened after an event relating to behavioral health occurs. Researchers continue to find effective ways to use social media to bring more awareness to mental health issues through online campaigns in other sites such as Facebook and Instagram. Care navigation Mental health care navigation helps to guide patients and families through the fragmented, often confusing mental health industries. 
Care navigators work closely with patients and families through discussion and collaboration to provide information on best therapies as well as referrals to practitioners and facilities specializing in particular forms of emotional improvement. The difference between therapy and care navigation is that the care navigation process provides information and directs patients to therapy rather than providing therapy. Still, care navigators may offer diagnosis and treatment planning, and many care navigators are also trained therapists and doctors. Care navigation is the link between the patient and the therapies below. A clear recognition that mental health requires medical intervention was demonstrated in a study by Kessler et al. of the prevalence and treatment of mental disorders from 1990 to 2003 in the United States. Despite the prevalence of mental health disorders remaining unchanged during this period, the number of patients seeking treatment for mental disorders increased threefold.
Methods
Pharmacotherapy
Pharmacotherapy is a therapy that uses pharmaceutical drugs. Pharmacotherapy is used in the treatment of mental illness through the use of antidepressants, benzodiazepines, and elements such as lithium. It can only be prescribed by a medical professional trained in the field of psychiatry.
Physical activity
Physical exercise can improve mental and physical health. Playing sports, walking, cycling, or doing any form of physical activity triggers the production of various hormones, sometimes including endorphins, which can elevate a person's mood. Studies have shown that in some cases physical activity can have the same impact as antidepressants when treating depression and anxiety. Moreover, cessation of physical exercise may have adverse effects on some mental health conditions, such as depression and anxiety. This could lead to different negative outcomes such as obesity, skewed body image and many health risks associated with mental illnesses. Exercise can improve mental health, but it should not be used as an alternative to therapy.
Activity therapies
Activity therapies, also called recreation therapy and occupational therapy, promote healing through active engagement. An example of occupational therapy would be promoting an activity that improves daily life, such as self-care or improving hobbies. Each of these therapies has proven to improve mental health and has resulted in healthier, happier individuals. In recent years, for example, coloring has been recognized as an activity that has been proven to significantly lower the levels of depressive symptoms and anxiety in many studies.
Expressive therapies
Expressive therapies or creative arts therapies are a form of psychotherapy that involves the arts or art-making. These therapies include art therapy, music therapy, drama therapy, dance therapy, and poetry therapy. It has been proven that music therapy is an effective way of helping people with a mental health disorder. Drama therapy is approved by NICE for the treatment of psychosis.
Psychotherapy
Psychotherapy is the general term for the scientifically based treatment of mental health issues grounded in modern medicine. It includes a number of schools, such as gestalt therapy, psychoanalysis, cognitive behavioral therapy, psychedelic therapy, transpersonal psychology/psychotherapy, and dialectical behavioral therapy. Group therapy involves any type of therapy that takes place in a setting involving multiple people.
It can include psychodynamic groups, expressive therapy groups, support groups (including the Twelve-step program), problem-solving and psychoeducation groups.
Self-compassion
According to Neff, self-compassion consists of three main positive components and their negative counterparts: Self-Kindness versus Self-Judgment, Common Humanity versus Isolation, and Mindfulness versus Over-Identification. Furthermore, there is evidence from a study by Shin & Lin suggesting specific components of self-compassion can predict specific dimensions of positive mental health (emotional, social, and psychological well-being).
Social-emotional learning
The Collaborative for Academic, Social, and Emotional Learning (CASEL) addresses five broad and interrelated areas of competence and highlights examples for each: self-awareness, self-management, social awareness, relationship skills, and responsible decision-making. A meta-analysis by Alexendru Boncu, Iuliana Costeau, & Mihaela Minulescu (2017) looked at social-emotional learning (SEL) studies and their effects on emotional and behavioral outcomes. They found a small but significant effect size (across the studies examined) for externalized problems and social-emotional skills.
Meditation
The practice of mindfulness meditation has several potential mental health benefits, such as bringing about reductions in depression, anxiety and stress. Mindfulness meditation may also be effective in treating substance use disorders.
Lucid dreaming
Lucid dreaming has been found to be associated with greater mental well-being. It was not associated with poorer sleep quality nor with cognitive dissociation. There is also some evidence lucid dreaming therapy can help with nightmare reduction.
Mental fitness
Mental fitness is a mental health movement that encourages people to intentionally regulate and maintain their emotional wellbeing through friendship, regular human contact, and activities that include meditation, calming exercises, aerobic exercise, mindfulness, having a routine and maintaining adequate sleep. Mental fitness is intended to build resilience against everyday mental and potentially physical health challenges to prevent an escalation of anxiety, depression, and suicidal ideation. This can help people, including older adults with health challenges, to more effectively cope with the escalation of those feelings if they occur.
Spiritual counseling
Spiritual counsellors meet with people in need to offer comfort and support and to help them gain a better understanding of their issues and develop a problem-solving relation with spirituality. These types of counselors deliver care based on spiritual, psychological and theological principles.
Laws and public health policies
There are many factors that influence mental health. Mental illness, disability, and suicide are ultimately the result of a combination of biology, environment, and access to and utilization of mental health treatment. Public health policies can influence access and utilization, which subsequently may improve mental health and help to reduce the negative consequences of depression and its associated disability.
United States
Emotional mental illnesses are a particular concern in the United States, since the U.S. has the highest annual prevalence rates (26 percent) for mental illnesses among a comparison of 14 developing and developed countries.
While approximately 80 percent of all people in the United States with a mental disorder eventually receive some form of treatment, on average persons do not access care until nearly a decade following the development of their illness, and less than one-third of people who seek help receive minimally adequate care. The government offers everyone programs and services, but veterans receive the most help; certain eligibility criteria have to be met.
Policies
Mental health policies in the United States have experienced four major reforms: the American asylum movement led by Dorothea Dix in 1843; the mental hygiene movement inspired by Clifford Beers in 1908; the deinstitutionalization started by Action for Mental Health in 1961; and the community support movement called for by the CMHC Act Amendments of 1975. In 1843, Dorothea Dix submitted a Memorial to the Legislature of Massachusetts, describing the abusive treatment and horrible conditions received by the mentally ill patients in jails, cages, and almshouses. She revealed in her Memorial: "I proceed, gentlemen, briefly to call your attention to the present state of insane persons confined within this Commonwealth, in cages, closets, cellars, stalls, pens! Chained, naked, beaten with rods, and lashed into obedience...." Many asylums were built in that period, with high fences or walls separating the patients from other community members and strict rules regarding entrance and exit. In 1866, a recommendation came to the New York State Legislature to establish a separate asylum for chronic mentally ill patients. Some hospitals placed the chronic patients into separate wings or wards, or different buildings. In A Mind That Found Itself (1908), Clifford Whittingham Beers described the humiliating treatment he received and the deplorable conditions in the mental hospital. One year later, the National Committee for Mental Hygiene (NCMH) was founded by a small group of reform-minded scholars and scientists—including Beers himself—which marked the beginning of the "mental hygiene" movement. The movement emphasized the importance of childhood prevention. World War I catalyzed this idea with an additional emphasis on the impact of maladjustment, which convinced the hygienists that prevention was the only practical approach to handle mental health issues. However, prevention was not successful, especially for chronic illness; the condemnable conditions in the hospitals were even more prevalent, especially under the pressure of the increasing number of chronically ill and the influence of the Depression. In 1961, the Joint Commission on Mental Health published a report called Action for Mental Health, whose goal was for community clinic care to take on the burden of prevention and early intervention of mental illness, thereby leaving space in the hospitals for severe and chronic patients. The courts started to rule in favor of the patients' will regarding whether they should be forced into treatment. By 1977, 650 community mental health centers were built to cover 43 percent of the population and serve 1.9 million individuals a year, and the lengths of treatment decreased from 6 months to only 23 days. However, issues still existed. Due to inflation, especially in the 1970s, the community nursing homes received less money to support the care and treatment provided. Fewer than half of the planned centers were created, and the new methods did not fully replace the old approaches or deliver their full treatment capacity.
Besides, the community helping system was not fully established to support the patients' housing, vocational opportunities, income supports, and other benefits. Many patients returned to welfare and criminal justice institutions, and more became homeless. The movement of deinstitutionalization was facing great challenges. After realizing that simply changing the location of mental health care from the state hospitals to nursing houses was insufficient to implement the idea of deinstitutionalization, the National Institute of Mental Health (NIMH) in 1975 created the Community Support Program (CSP) to provide funds for communities to set up a comprehensive mental health service and supports to help the mentally ill patients integrate successfully in the society. The program stressed the importance of other supports in addition to medical care, including housing, living expenses, employment, transportation, and education; and set up new national priority for people with serious mental disorders. In addition, the Congress enacted the Mental Health Systems Act of 1980 to prioritize the service to the mentally ill and emphasize the expansion of services beyond just clinical care alone. Later in the 1980s, under the influence from the Congress and the Supreme Court, many programs started to help the patients regain their benefits. A new Medicaid service was also established to serve people who were diagnosed with a "chronic mental illness". People who were temporally hospitalized were also provided aid and care and a pre-release program was created to enable people to apply for reinstatement prior to discharge. Not until 1990, around 35 years after the start of the deinstitutionalization, did the first state hospital begin to close. The number of hospitals dropped from around 300 by over 40 in the 1990s, and finally a Report on Mental Health showed the efficacy of mental health treatment, giving a range of treatments available for patients to choose. However, several critics maintain that deinstitutionalization has, from a mental health point of view, been a thoroughgoing failure. The seriously mentally ill are either homeless, or in prison; in either case (especially the latter), they are getting little or no mental health care. This failure is attributed to a number of reasons over which there is some degree of contention, although there is general agreement that community support programs have been ineffective at best, due to a lack of funding. The 2011 National Prevention Strategy included mental and emotional well-being, with recommendations including better parenting and early intervention programs, which increase the likelihood of prevention programs being included in future US mental health policies. The NIMH is researching only suicide and HIV/AIDS prevention, but the National Prevention Strategy could lead to it focusing more broadly on longitudinal prevention studies. In 2013, United States Representative Tim Murphy introduced the Helping Families in Mental Health Crisis Act, HR2646. The bipartisan bill went through substantial revision and was reintroduced in 2015 by Murphy and Congresswoman Eddie Bernice Johnson. In November 2015, it passed the Health Subcommittee by an 18–12 vote. 
Idiosyncrasy
An idiosyncrasy is a unique feature of something. The term is often used to express peculiarity.

Etymology

The term "idiosyncrasy" originates from Greek idiosynkrasia, "a peculiar temperament, habit of body" (from idios, "one's own", syn, "with", and krasis, "blend of the four humors" (temperament)), or literally "particular mingling".

Idiosyncrasy is sometimes used as a synonym for eccentricity, as these terms "are not always clearly distinguished when they denote an act, a practice, or a characteristic that impresses the observer as strange or singular." Eccentricity, however, "emphasizes the idea of divergence from the usual or customary; idiosyncrasy implies a following of one's particular temperament or bent especially in trait, trick, or habit; the former often suggests mental aberration, the latter, strong individuality and independence of action".

Linguistics

The term can also be applied to symbols or words. Idiosyncratic symbols mean one thing for a particular person: a blade could mean war to one person but symbolize surgery to another.

Idiosyncratic property

In phonology, an idiosyncratic property contrasts with a systematic regularity. While systematic regularities in the sound system of a language are useful for identifying phonological rules during analysis of the forms morphemes can take, idiosyncratic properties are those whose occurrence is not determined by those rules. For example, the fact that the English word cab starts with the sound /k/ is an idiosyncratic property; on the other hand, the fact that its vowel is longer than in the English word cap is a systematic regularity, as it arises from the final consonant being voiced rather than voiceless.

Medicine

Disease

Idiosyncrasy defined the way physicians conceived of diseases in the 19th century. They considered each disease a unique condition, related to each patient. This understanding began to change in the 1870s, when discoveries made by researchers in Europe permitted the advent of a "scientific medicine", a precursor to the evidence-based medicine that is the standard of practice today.

Pharmacology

The term idiosyncratic drug reaction denotes an aberrant or bizarre reaction or hypersensitivity to a substance, without connection to the pharmacology of the drug. It is what is known as a Type B reaction. Type B reactions are usually unpredictable, may not be picked up by toxicological screening, are not necessarily dose-related, and have low incidence and morbidity but high mortality. Type B reactions are most commonly immunological (e.g. penicillin allergy).

Psychiatry and psychology

The word is used for the personal way a given individual reacts, perceives and experiences: a certain dish made of meat may cause nostalgic memories in one person and disgust in another. These reactions are called idiosyncratic.

Economics

In portfolio theory, risks of price changes due to the unique circumstances of a specific security, as opposed to the overall market, are called "idiosyncratic risks". This specific risk, also called unsystematic risk, can be nulled out of a portfolio through diversification: pooling multiple securities means the specific risks tend to cancel out. In complete markets, there is no compensation for idiosyncratic risk—that is, a security's idiosyncratic risk does not matter for its price. For instance, in a complete market in which the capital asset pricing model holds, the price of a security is determined by the amount of systematic risk in its returns. 
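The claim that pooling securities cancels out specific risk can be made concrete with a short simulation. The sketch below is illustrative only and is not taken from the article: the number of assets, the beta, and the volatilities are arbitrary assumptions, and each asset's return is modelled as a common market factor plus an independent idiosyncratic shock.

```python
# Illustrative sketch (not from the article): how diversification "nulls out"
# idiosyncratic risk. Parameter values below are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)

n_assets = 50          # number of securities pooled into the portfolio
n_periods = 100_000    # simulated return observations
beta = 1.0             # identical market exposure for every asset, for simplicity
sigma_market = 0.04    # std. dev. of the common (systematic) factor
sigma_idio = 0.08      # std. dev. of each asset's idiosyncratic shock

market = rng.normal(0.0, sigma_market, size=n_periods)
idio = rng.normal(0.0, sigma_idio, size=(n_periods, n_assets))
returns = beta * market[:, None] + idio        # per-asset returns

portfolio = returns.mean(axis=1)               # equal-weighted portfolio

print(f"single-asset variance:     {returns[:, 0].var():.6f}")
print(f"portfolio variance:        {portfolio.var():.6f}")
print(f"systematic variance floor: {(beta * sigma_market) ** 2:.6f}")
# The idiosyncratic part of the portfolio variance is roughly
# sigma_idio**2 / n_assets, so the portfolio variance approaches the
# systematic floor (beta**2 * sigma_market**2) as n_assets grows.
```

With 50 assets, the idiosyncratic contribution to portfolio variance falls to roughly one-fiftieth of its single-asset value, while the systematic component is unaffected; this is the sense in which diversification removes specific risk and why complete markets offer no premium for bearing it.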
Net income received, or losses suffered, by a landlord from renting one or two properties is subject to idiosyncratic risk due to the numerous things that can happen to real property and the variable behavior of tenants.

According to one macroeconomic model that includes a financial sector, hedging idiosyncratic risk can be self-defeating, because the apparent "risk reduction" encourages experts to increase their leverage. This works for small shocks but leads to higher vulnerability for larger shocks and makes the system less stable. Thus, while securitisation in principle reduces the costs of idiosyncratic shocks, it ends up amplifying systemic risks in equilibrium.

In econometrics, "idiosyncratic error" is used to describe error—that is, unobserved factors that impact the dependent variable—in panel data that varies both over time and across units (individuals, firms, cities, towns, etc.).
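To make the distinction concrete, a standard unobserved-effects panel model (a textbook formulation assumed here, not something stated in the article) separates the composite error into a time-invariant unit effect and the idiosyncratic component:

```latex
% Standard unobserved-effects panel model (textbook convention, assumed here):
%   y_{it} : dependent variable for unit i in period t
%   x_{it} : observed regressors
%   a_i    : unobserved effect, fixed over time for unit i
%   u_{it} : idiosyncratic error, varying across both units and time periods
y_{it} = \mathbf{x}_{it}\boldsymbol{\beta} + a_i + u_{it}
```

Only the term u_{it} changes both across units and over time, which is the component the definition above refers to.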